A Massachusetts family has filed a federal lawsuit against Hingham High School after their son faced disciplinary action for using artificial intelligence to assist with a history paper. Jennifer and Dale Harris claim the school imposed “arbitrary and capricious” punishment on their high-achieving son, who used AI to “prepare the initial outline and research” for an assignment on a civil-rights activist.
The controversy centers on the school’s unclear AI policy. According to the lawsuit, the student received a Saturday detention, had his social-studies grade lowered to a C+, and was barred from National Honor Society induction. The parents emphasize that AI was not used to write the actual paper, which included proper citations and a works cited page. Their son, described as a three-sport varsity athlete with a high GPA, a 1520 SAT score, and a perfect ACT score, is now applying to elite universities including Stanford University.
The Harris family argues that the school’s AI policy is fundamentally flawed. The Hingham High School student handbook states that “unauthorized use of technology, including Artificial Intelligence (AI), during an assessment” may constitute cheating, but fails to define what constitutes “unauthorized” use or specify acceptable applications of AI tools.
Legal experts have weighed in on the case’s broader implications. Matthew Sag, a professor of law specializing in AI at Emory University School of Law, called the policy “hopelessly vague and unfair,” questioning whether tools like spell-check, text prediction, Google searches, or Grammarly constitute prohibited AI use. Ryan Abbott, a University of Surrey law professor specializing in AI, noted that AI use by students is “common even when prohibited” and difficult to detect, with detection tools being “error-prone.”
A 2023 Study.com survey found that 26% of 203 K-12 teachers reported catching students cheating using ChatGPT, highlighting the widespread nature of this issue. John Zerilli, a law professor at the University of Edinburgh, stated that “using AI tools in school assessments is now virtually entrenched” and suggested schools should embrace AI as part of education rather than prohibit it outright.
The parents are seeking grade correction, arguing that the current punishment will have a “significant, severe, and continuing impact” on their son’s college acceptance chances and “future earning capacity.” Their attorney, Peter Farrell, emphasized the urgency: “With college applications now due, the student is in serious jeopardy given the discipline imposed and the inequitable impact of the use of AI when it was not expressly prohibited.”
Key Quotes
“They told us our son cheated on a paper, which is not what happened.”
Jennifer Harris, the student’s mother, made this statement to WCVB-TV, emphasizing her belief that using AI for research and outlining does not constitute cheating, especially when the actual writing was done by the student with proper citations.
“They basically punished him for a rule that doesn’t exist.”
Jennifer Harris told WCVB-TV, highlighting the central legal argument that the school’s AI policy was too vague to enforce fairly, as it failed to define what constituted authorized versus unauthorized AI use.
“For example, can students use AI tools for studying, drafting papers, or checking grammar? Is spell-check AI? Is text prediction AI? Is a Google search AI? Is Grammarly AI?”
Matthew Sag, a professor of law specializing in AI at Emory University School of Law, illustrated the fundamental problem with vague AI policies by listing common tools students use daily, demonstrating how unclear boundaries leave students “guessing, with apparently dire consequences if they guess wrong.”
“Using AI tools in school assessments is now virtually entrenched.”
John Zerilli, a law professor at the University of Edinburgh and research associate at the Oxford Institute for Ethics in AI, acknowledged the reality that AI use in education is already widespread and suggested schools should embrace teaching proper AI use rather than attempting to ban it.
Our Take
This case exemplifies education’s reactive rather than proactive approach to AI integration. Schools are punishing students for using technology that’s become standard in professional environments, creating a disconnect between academic preparation and real-world expectations. The vagueness of the policy—failing to distinguish between AI-assisted research and AI-generated writing—reflects institutional uncertainty about AI’s role in learning.
What’s particularly troubling is the severe consequences imposed without clear guidelines. A C+ grade and blocked Honor Society induction could genuinely impact this student’s future, yet the “violation” involved using AI for research and outlining—tasks that could reasonably be considered legitimate study aids. This suggests schools need nuanced AI literacy programs rather than blanket prohibitions. The lawsuit may ultimately benefit education by forcing institutions to develop explicit, fair policies that acknowledge AI’s permanence while maintaining academic integrity standards. The question isn’t whether students will use AI, but how schools will teach them to use it responsibly.
Why This Matters
This lawsuit represents a watershed moment in education’s struggle to adapt to AI technology. As artificial intelligence tools become ubiquitous, schools nationwide are grappling with how to establish fair, clear policies that distinguish between legitimate AI assistance and academic dishonesty. The case highlights a critical gap: many educational institutions are punishing AI use without defining acceptable boundaries.
The implications extend beyond one student’s college prospects. This case could set legal precedents for how schools must communicate technology policies and what constitutes fair enforcement. With 26% of teachers already catching students using ChatGPT, and experts noting AI use is “virtually entrenched,” educational institutions face mounting pressure to develop comprehensive, explicit AI guidelines.
The lawsuit also raises fundamental questions about 21st-century education: Should schools ban AI tools that students will inevitably use in their careers, or should they teach responsible AI integration? As detection tools prove unreliable and AI capabilities expand, this case may force a broader reckoning about how education adapts to technological reality. The outcome could influence policy development across thousands of schools nationwide, affecting millions of students navigating the intersection of AI and academic integrity.
Recommended Reading
For those interested in learning more about artificial intelligence, machine learning, and effective AI communication, here are some excellent resources: