A Massachusetts couple claims that their son’s high school attempted to derail his future by giving him detention and a bad grade on an assignment he wrote using generative AI.

An old and powerful force has entered the fraught debate over generative AI in schools: litigious parents angry that their child may not be accepted into a prestigious university.

In what appears to be the first case of its kind, at least in Massachusetts, a couple has sued their local school district after it disciplined their son for using generative AI tools on a history project. Dale and Jennifer Harris allege that the Hingham High School student handbook did not explicitly prohibit the use of AI to complete assignments and that the punishment visited upon their son for using an AI tool—he received Saturday detention and a grade of 65 out of 100 on the assignment—has harmed his chances of getting into Stanford University and other elite schools.

Yeah, I’m 100% with the school on this one.

  • brucethemoose@lemmy.world · 2 months ago
    You worded this much better than I could.

    Yes I was thinking of two directions:

    • A “smarter” AI, though I think a better term would be “customized,” specifically tailored to only help with knowledge that the student already “learned” in the context.

    • A “dumb” AI that’s too unreliable to use for lazy ChatGPT-style answers, but can serve as a primitive assistant to bounce ideas off of, or to help with phrasing, wording, formatting, and basic tasks that are too onerous or trivial to ask a human for help with.

    Not many people are familiar with the latter because, well, they only use uncached ChatGPT. But I already find small LLMs useful as a kind of autocomplete or sanity check when my brain is stuck (much like it was without my TI-84 BASIC program), and the experience is totally different because the response is instant (as the context is cached on your machine).