- cross-posted to:
- technology@lemmit.online
A Massachusetts couple claims that their son’s high school attempted to derail his future by giving him detention and a bad grade on an assignment he wrote using generative AI.
An old and powerful force has entered the fraught debate over generative AI in schools: litigious parents angry that their child may not be accepted into a prestigious university.
In what appears to be the first case of its kind, at least in Massachusetts, a couple has sued their local school district after it disciplined their son for using generative AI tools on a history project. Dale and Jennifer Harris allege that the Hingham High School student handbook did not explicitly prohibit the use of AI to complete assignments and that the punishment visited upon their son for using an AI tool—he received Saturday detention and a grade of 65 out of 100 on the assignment—has harmed his chances of getting into Stanford University and other elite schools.
Yeah, I’m 100% with the school on this one.
Standard citation rules already rule out using AI to write your essay.
How? You can ask the AI where it sourced the info and what books to acquire. You still used AI, and you can use whatever citation method the teacher asks for. If you mean having the AI write the essay, I would say that’s plagiarism, but using AI to find sources is no different than using a search engine.
Shit, you could use the AI to tell you how to properly write your citations in the form requested by the teacher as well.
Because if you didn’t write it, you have to cite it.
If a computer writes it and you say it’s yours, you’re plagiarizing. You’re not allowed to pay someone to write the essay for you; the same goes for a computer.
I do think the method should be mentioned. Kids should be taught to cite what they used AI for and which one, and there should be precise rules stating which use cases are OK and which are not tolerated (some SHOULD be tolerated). Detention is too much, though.
All of academia is breathing a sigh of relief that you’re not a teacher.
:) Ok. But why not teach children how to work with new technology correctly? It’s not as simple as “AI bad.”
Because the point of the essay is to teach a topic, and hone research skills.
Typing a prompt into a chatbot and copying the result is not an effective teaching tool. It may occasionally be handy as a starting point, but you need to know enough about what you’re asking to recognize when it’s wrong. You need to take the results and modify them to both sound human and correct the mistakes. By that point you might as well just write the essay. And sure, there’s something to be said for teaching that, but clearly that’s not what this student was doing, nor is it what the multitude of my wife’s students are doing when they use an LLM for their essays.
Exactly, that was my whole point: LLMs can be useful for making nice sentences out of data you already know is right from your previous research, and the whole result must be reviewed to make sure it makes sense. Used like this, they can save you time or even help you immensely if you happen to have no literary talent (believe me). It’s not OK to use them for the research itself; the results are not good and the kid learns nothing. And that’s why I think we need to teach kids how to use it appropriately. It’s just like using a calculator: you need to learn to count first, but then it can save you time.
I don’t know which LLM you’re using, but I haven’t seen any that disclose that information. And if you ask the probable word generator, you’ll just get probable words back, no guarantee that they’re real sources.
I verified you can do so with ChatGPT earlier; I put it in my comment elsewhere. I asked it how many battles took place during the American Civil War, then asked where it sourced the data from, then asked what books I should consider if I were doing a research project on it, and it gave me a list and such.
https://lemmy.dbzer0.com/comment/14108419