About a year and a half ago, I wrote about my kid’s experience with an AI checker tool that was pre-installed on a school-issued Chromebook. The assignment had been to write an essay about Kurt Vonnegut’s Harrison Bergeron—a story about a dystopian society that enforces “equality” by handicapping anyone who excels—and the AI detection tool flagged the essay as “18% AI written.” The culprit? Using the word “devoid.” When the word was swapped out for “without,” the score magically dropped to 0%.
The irony of being forced to dumb down an essay about a story warning against the forced suppression of excellence was not lost on me. Or on my kid, who spent a frustrating afternoon removing words and testing sentences one at a time, trying to figure out what invisible tripwire the algorithm had set. The lesson the kid absorbed was clear: write less creatively, use simpler vocabulary, and don’t sound too good, because sounding good is now suspicious.
At the time, I worried this was going to become a much bigger problem. That the fear of AI “cheating” would create a culture that actively punished good writing and pushed students toward mediocrity. I was hoping I’d be wrong about that.
Turns out … I was not wrong.
I’m accused of being AI on other sites simply because I regularly construct complex sentences — and use em dashes.

How would that test work more than once per student, though?
Exactly the point.
I run teacher training on this stuff, and that’s always a core part of the message: education is about relationships. Damaging your relationship with a student over an accusation of AI use is backwards; instead, come with curiosity.
Also, AI writes poorly, so you don’t even need to call students out on it. When they (inevitably) include a hallucinated source or fact, return the paper and explain that the error needs to be fixed, and why. That’s your “in” for a conversation about ethical use of AI.