Shouldn’t the question be why students used ChatGPT in the first place?
ChatGPT is just a tool; it isn’t cheating.
So maybe the author should ask himself what can be done to improve his course, given that students are so inclined to reach for other tools.
ChatGPT is a tool that is used for cheating.
The point of writing papers for school is to evaluate a person’s ability to convey information in writing.
If you’re using a tool to generate large parts of the paper, the teacher is no longer evaluating you; they’re evaluating ChatGPT. That’s dishonest on the student’s part, and it circumvents the whole point of the assignment.
I conveyed the information. Checkmate, atheists!
The point of writing papers for school is to evaluate a person’s ability to convey information in writing.

Computers are a fundamental part of that process in modern times.

If you’re using a tool to generate large parts of the paper

Like spell check? Or grammar check?

… the teacher is no longer evaluating you, in an artificial context

circumventing the whole point of the assignment.

Assuming the point is how well someone conveys information, then wouldn’t many people be better at conveying info by using machines as much as is reasonable? Why should they be punished for this? Or forced to pretend that they’re not using the machines they’ll use their whole lives?
Computers are a fundamental part of that process in modern times.

If you were taking a test to assess how much weight you could lift, and you got a robot to lift 2,000 lbs for you, saying you should pass for lifting 2,000 lbs would be stupid. The argument wouldn’t make sense. Why? Because exactly the same logic applies here: the test is there to assess you, not the machine.

Just because computers exist, can do things, and are available to you doesn’t mean that any assessment of your capabilities can now simply assess the best available technology instead of you.
Like spell check? Or grammar check?

Spell check and grammar check don’t generate large parts of a paper; they refine what you already wrote, by fixing typos or suggesting rephrasings. If I write a paragraph of text and run it through spell and grammar check, the most I’d get back is the same paragraph without spelling errors, and maybe a few different phrases linking some words together.

If I asked an LLM to write a paragraph about a particular topic, even if I gave it notes on what I knew, I’d likely get text written entirely differently from my original mental picture of it. It might include more or less information than I’d intended, with different turns of phrase than I’d use, and no cohesion with whatever I might generate later in a different session with the LLM.

These are not even remotely comparable.
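To make that contrast concrete, here’s a quick toy comparison (hypothetical sentences, using only Python’s standard difflib): a spell-checked version of my text is a tiny edit away from the original, while an LLM rewrite shares almost nothing with it.

```python
# Toy illustration: spell check yields a small edit to my text;
# an LLM yields an essentially different text. Sentences are made up.
import difflib

mine        = "Computres are a fundmental part of writing papers today."
spellcheck  = "Computers are a fundamental part of writing papers today."
llm_rewrite = "In the modern era, word processing software shapes how essays are composed."

def similarity(a: str, b: str) -> float:
    # Ratio of matching characters; 1.0 means identical strings.
    return difflib.SequenceMatcher(None, a, b).ratio()

print(similarity(mine, spellcheck))   # roughly 0.95: my text, lightly repaired
print(similarity(mine, llm_rewrite))  # well under 0.5: effectively someone else's text
```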
Assuming the point is how well someone conveys information, then wouldn’t many people be better at conveying info by using machines as much as is reasonable? Why should they be punished for this? Or forced to pretend that they’re not using the machines they’ll use their whole lives?

This is an interesting question, but I think it mistakes a replacement for a tool on a fundamental level.

I use LLMs from time to time to better explain a concept to myself, or to get ideas for rephrasing some text I’m writing. But if I used an LLM all the time, for all my work, then my being there would be sort of pointless.

Because, the thing is, LLMs mostly aren’t used in a way that conveys information you already know. They operate primarily by regurgitating existing information (or rather, associations between words) stored in their model weights. You can’t easily draw new insights, perspectives, or content out of something that doesn’t have the capability to produce them.
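As a toy picture of what “associations between words” means here, a minimal sketch in Python: the word-pair table below is a crude, hypothetical stand-in for model weights, and real LLMs are vastly more sophisticated.

```python
# A toy "language model" (hypothetical; nothing like a real transformer):
# a table of which word tends to follow which, standing in for model weights.
from collections import Counter, defaultdict
import random

corpus = "the point of writing papers is to convey information in writing".split()

# "Training": count word-to-next-word associations.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start: str, length: int = 6) -> str:
    """Emit words by repeatedly sampling a likely next word."""
    word, out = start, [start]
    for _ in range(length):
        options = follows.get(word)
        if not options:
            break  # no known continuation
        # Pick the next word in proportion to how often it followed this one.
        word = random.choices(list(options), weights=list(options.values()))[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the point of writing papers is to"
```

Real models learn far richer statistics than adjacent word pairs, but the basic move is the same: emit a statistically plausible next word, with nothing that checks it against what you actually meant.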
On top of that, let’s use a simple analogy. Say I’m in charge of calculating the math required for a rocket launch, and I delegate all of the work to an automated calculator, which does it for me. I don’t know math, since I’ve used a calculator for all math my whole life, but the calculator should know.

I am then incapable of ever checking, proofreading, or even conceptualizing the output.

If asked about the calculations, I can provide no answer. If they don’t work out, I have no clue why. And if I ever want to compute something more complicated than the calculator can, I can’t, because I don’t even know what the calculator does. I’d have to learn everything it knows before I could exceed its capabilities.

We’ve always used technology to augment human capabilities, but replacing them outright often means we can’t progress as easily in the long term.

Short term, sure, these papers could be written by an LLM. Long term, nobody knows how to write papers. And if nobody knows how to properly convey information, where does an LLM get its training data on modern information? How do you properly explain to it what you want? How do you proofread the output?

If you entirely replace human work with that of a machine, you also lose the ability to truly understand, check, and build upon the very thing that replaced you.
Sounds like something ChatGPT would write: perfectly sensible English, yet the underlying logic makes no sense.
The implication I gathered from the comment was that if students are resorting to ChatGPT to cheat, then maybe the teacher should try a different approach to how they teach.

I’ve had plenty of awful teachers who tried to railroad students as much as possible, which made for an abysmal learning environment, so people would cheat to get through it more easily. And instead of making fundamental changes to their teaching approach, those teachers would just double down on trying to stop the cheating rather than reflect on why it was happening in the first place.

Dunno if this is the case for the teacher mentioned in the original post, but that’s the vibe I got from the comment you replied to, and for what it’s worth, I fully agree. Spending time and effort on catching cheaters doesn’t produce fewer cheaters, nor does it help people like the class more or learn better. Focusing on students’ enjoyment and engagement does reduce cheating, though.
Thank you, this is exactly what I meant. But for some reason people didn’t seem to get that and called me a ChatGPT bot.
Thanks for confirming. I’m glad you mentioned it, cause it’s so important for teachers to create a learning environment that students actually want to learn in.

My schooling was made a lot worse by teachers with that “punish the cheaters” kind of mindset, and it’s a big part of why I dropped out of high school.
Lemmy has seen a lot of that lately. Especially in these “charged” topics.
The concept of homework was dumb in the first place anyway.
wot? please explain, with diagrams!
And share with us all tomorrow
Make sure you show your work.
It’s the same argument as the one used against emulators: the emulator itself may not be illegal, but emulators are overwhelmingly used by end users to violate the law.