I’m sorry, I’m so fucking angry. Students citing sources that don’t exist. Students citing sources that exist, but then the quotation doesn’t.

I’m so fucking mad, because it’s extra work for me (that I’m sure as hell not getting compensated for), and it also entirely defeats the purpose of the fucking class (it’s writing/research, so like, engaging in a discipline and looking at what’s been written before on your topic, etc.)

Kill me please. Comrades, I’m so tired. I just want to teach writing. I want to give students a way to exercise agency in the world – to both see bad arguments and make good ones. They don’t care. I’m so tired.

BTW, I took time to look up some of these sources my student used, couldn’t find the quotes they quote, so told them the paper is an “A” if they can show me every quotation and failing otherwise. Does this seem like a fair policy (my thought is – no matter the method, fabrication of evidence is justification for failing work)?

foucault-madness agony-shivering allende-rhetoric

  • TrustedFeline [she/her, comrade/them]@hexbear.net · 58 points · 4 days ago

    Trash Future repeatedly makes the point that AI chat bots are the inverse of the printing press. The printing press created a way for information to be reliably stored, retrieved, and exchanged. It created a sort of ecosystem where ideas (including competing ideas) could circulate in society.

    Chat bots do the opposite. They basically destroy the reliable transmission of information and ideas. Instead of creating reliable records of human thought (models, stories, theories, etc.), it’s a black box which randomly messes with averages. It’s so fucking harmful

    • Simon 𐕣he 🪨 Johnson@lemmy.ml · 21 points · 4 days ago · edited

      This makes no sense because it gives the general problem of epistemology a year zero date of November 30th, 2022.

      People were lying, distorting, and destroying prior to the invention of the printing press. One of the most obvious examples is the Donation of Constantine, which the Catholic Church used to extort European kings starting in the 8th century.

      The printing press actually made things worse. For example, the Gospel of Barnabas is thought to have proliferated so widely because the forger printed fabricated copies of the Gelasian Decree.

      Creating reliable records of “human thought” doesn’t matter because the problem isn’t what people think, it’s what the actual truth is. This isn’t even the first system that greatly obscures historical thought for the benefit of a select few. If you were a peasant in the 1500s, your ChatGPT was the conspiracy between your local lord and your local pastor to keep you compliant. The German peasants literally fought a war over it.

      There is no place in academia in which an LLM would be a reliable store of information, because it’s a statistical compilation, not a deterministic primary, secondary, or tertiary source. Trash Future, as always, is tilting at windmills erected by capitalist propaganda.

      • TrustedFeline [she/her, comrade/them]@hexbear.net · 20 points · 4 days ago

        because it gives the general problem of epistemology a year zero date of November 30th, 2022.

        No it doesn’t. It’s just pointing out that a slop machine was invented around then. The printing press enabled information to be shared at a much greater scale than before. The LLM has enabled slop to be produced at a much greater scale than before. It’s a question of degree.

        Creating reliable records of “human thought” doesn’t matter because the problem isn’t what people think, it’s what the actual truth is.

        And how is truth determined? What do you call a truth that nobody believes? Global warming is happening whether or not people believe in it. But it could have been avoided if more people believed in climate change, and also believed it was worth taking action against. IDK what to think about some platonic ideal of “truth”

        There is no place in academia in which an LLM would be a reliable store of information, because it’s a statistical compilation, not a deterministic primary, secondary, or tertiary source.

        Do you not see how the general public is actually using LLMs? It’s bleeding into academia, too

        • Simon 𐕣he 🪨 Johnson@lemmy.ml · 14 points · 4 days ago · edited

          The LLM has enabled slop to be produced at a much greater scale than before. It’s a question of degree.

          The problem with this is that LLMs allow you, personally, to make slop at a much greater scale than ever before. Commercial and institutional slop has been at these scales for centuries. Monasteries were literally LLMs that forged legal documents en masse all over Europe. The Gulf of Tonkin Resolution was literally birthed by slop. LLMs make institutional and commercial slop cheaper, but the scale is still limited by the capitalist class controlling the risk of brand dilution and loss of trust.

          There are way better arguments against LLMs, e.g. that they push automation of social-engineering scams into overdrive. The idea that there was ever a “truth” world and a “post-truth” world that LLMs have created is ignorant. People have always fallen victim to these sorts of things en masse. As for the workers involved, I can see the argument that databases destroyed secretarial pools and jobs, but I’m not going to extend that sympathy to the “honorable workers” at the racism factory.

          The racism factory has always existed, they’ve just made it super cheap to order from now.

          And how is truth determined? What do you call a truth that nobody believes? Global warming is happening whether or not people believe in it. But it could have been avoided if more people believed in climate change, and also believed it was worth taking action against. IDK what to think about some platonic ideal of “truth”

          Again, this is the general problem of epistemology. It has no answer. Turning away from it puts you right back to ordering from the racism factory, because the whole point of learning about epistemology, journalism, sourcing, research, and the scientific method is to give you the tools to realize that you’re ordering from the racism factory. Manufacturing consent happens regardless of what you think about the latest tooling. What OP is attempting to teach their students is how to research and communicate truthfully – regardless of the tooling, the students are simply engaging in forgery.

          You can literally make the same argument about the printing press itself, that it enabled greater forgery to happen than what was possible prior. Which was my point in the original reply.

          Do you not see how the general public is actually using LLMs? It’s bleeding into academia, too

          Sure, but we’re not talking about the general public. We’re talking about collegiate course work. At the end of the day, if the assignment is to understand proper research and sourcing, directly using the output of an LLM should be an instant failure, because it avoids the skills you should be exercising.

          Yes, LLMs have strained (broken?) the already austere and unrealistic expectations of academic systems across all levels. We should be leveraging that to reforge these systems to better serve students and educators alike, rather than trying to get the cat back in the bag regarding LLMs. You can only “defeat” LLMs by raising people up in spite of the existence of LLMs.

          • TrustedFeline [she/her, comrade/them]@hexbear.net · 10 points · 4 days ago · edited

            The problem with this is that LLMs allow you, personally, to make slop at a much greater scale than ever before.

            Yeah, and the printing press allowed weirdos like Martin Luther to spread ideas faster. You’re just repeating what I said. That’s exactly what I meant by “The LLM has enabled slop to be produced at a much greater scale than before” and “it’s a question of degree”

            The idea that there was ever a “truth” world and a “post-truth” world that LLMs have created is ignorant

            I didn’t say or imply that!!

            Again, this is the general problem of epistemology. It has no answer.

            Yes, and the whole point of the paragraph is that I’m not talking about “truth”. I specifically said that I don’t know what to think of some platonic ideal of “truth”, scare quotes and all. The whole point of that paragraph is that I’m not talking about heady epistemology. I’m just talking about social phenomena and human behavior. That’s why I talked about “human thought”. That’s why I talked about “information”, which is a separate concept from “truth”. I was specifically avoiding talking about “truth”, but you brought it up

            You can literally make the same argument about the printing press itself, that it enabled greater forgery to happen than what was possible prior. Which was my point in the original reply.

            Yes, forgeries carry information. I talked about information and models in my first post, not truth

            Yes, LLMs have strained (broken?) the already austere and unrealistic expectations of academic systems across all levels. We should be leveraging that to reforge these systems to better serve students and educators alike, rather than trying to get the cat back in the bag regarding LLMs.

            We should be dismantling the industrial chatbots, and redirecting all that compute towards something useful. Get that cat in the fucking bag. I feel like we basically agree on everything, you’re just seeing an epistemological argument where there is none. I was trying to avoid epistemology from the start.

            Maybe TF talked about “post-truth” and I just don’t recall? But I’m pretty sure they were talking about information as well

            • Simon 𐕣he 🪨 Johnson@lemmy.ml · 6 points · 4 days ago · edited

              Yeah, and the printing press allowed weirdos like Martin Luther to spread ideas faster.

              Except it’s not the same thing. It’s different in economic function, social function, and historical context.

              You can effectively own an LLM. It’s incredibly cheap, all things considered. You can tell an LLM to constantly generate a stream of bullshit from here into infinity; your only limit is the cost of compute, which is cheap.

              Martin Luther did not own a printing press. Martin Luther’s success is attributable to the speculative nature of publishing and his ability to be charismatic through the written word. Yes, without the printing press Martin Luther would be an unknown weirdo who might have been rediscovered at some point. However, the content itself was what drove the economic demand for proliferation, not the printing press itself.

              The printing press revolutionized the creation and distribution of media. LLMs have done no such thing; the various media we have are already cheap, easy to distribute, and widely proliferated.

              Slop is much more comparable to straight-up forgery and propaganda factories, because it exists despite market demand. Propaganda is always a cost center, and forgery is a profit center because the lie enables leverage. However, these are not things that people typically want to the same degree as something like Martin Luther’s ideas.

              Martin Luther actually convinced people of something. AI Slop isn’t reaching the same levels of demand as Luther. AI Slop isn’t even reaching the same historic levels of demand as Artisanal Hand Crafted Slop like Marvel Movies or NYT Op-Eds. AI Slop undercuts its own value proposition!

              We were already drowning in bullshit prior to the proliferation of LLMs. Grandma believed foreigners were making no-go zones in inner cities when that bullshit was fed to her manually by unscrupulous news editors. LLMs don’t directly change the consumption side of the information market. They only change the upstream circumstances that lead to consumption of slop, like poor education – which has already been failing.

              Don’t get me wrong, it’s a perfect storm, but putting the blame mostly on ChatGPT is tech doomerism; humans were doing this to each other in industrial quantities before we outsourced it to a machine that could be owned and operated by anyone. We’ve been ignoring and exacerbating the issues that led to this problem for at least a decade. 1 in 5 adults was functionally illiterate in 2014. We’ve known since the ’90s that the whole-language approach was leading to more illiteracy and worse educational outcomes among students. We failed so long ago that we cannot even agree on what the primary failure was.

              Martin Luther on the other hand is actually credited with an increase in literacy.

              We’ve been destroying the institutions and mechanisms that give the general population the cognitive tools to fight back against LLM bullshit for a very long time, and the statistics on education among the general populace in the US bear that out.

              • TrustedFeline [she/her, comrade/them]@hexbear.net

                Feels like we’re talking past each other.

                Martin Luther convinced people of ideas. AI slop doesn’t. The printing press increased literacy faster than in the years prior. AI slop is decreasing literacy (in the broad sense, including things like media literacy) faster than it was decreasing before. I think we agree on those points.

                I think those points are enough to make the argument that LLM chatbots are the inverse of the printing press. It’s a vibe-based assessment based on those points we agree on. This isn’t math; there’s no exact definition of (printing press)^-1. You disagree with the vibe, but we’re pretty much in agreement on everything else

                • Simon 𐕣he 🪨 Johnson@lemmy.ml · 4 points · 4 days ago · edited

                  AI slop is decreasing literacy (in the broad sense, including things like media literacy) faster than it was decreasing before.

                  This is the cornerstone of your argument and there is no real proof of:

                  1. Literacy falling faster after November 30th 2022 than before that date (correlation)
                  2. Literacy falling because of LLMs (causation)

                  This is exactly where you’re making a jump that I cannot make. My argument is that I can believe LLMs are responsible for exacerbating upstream effects, but I cannot accept that LLMs are even in the running compared to the elephant in the room: austerity.

                  We’re arguing a well-known, documented effect vs. a novel, unstudied effect. There is more evidence that austerity is a fuzzy, vibes-based inverse of the printing press than there is that LLMs are.

  • Esoteir [he/him]@hexbear.net · 47 points · 4 days ago

    cheating in education in general, AI or not, is caused more by the financial and systemic repercussions of failing than by anything else. When these students fail a class, it’s often another few thousand dollars they don’t have down the drain, and if they fail too many classes it locks them out of higher education entirely

    failure is one of the biggest drivers of true learning, and the educational system directly discourages it

    • ChestRockwell [comrade/them, any]@hexbear.net (OP) · 36 points · 4 days ago

      Oh I get that – the financial reality is there for sure, and I recognize they have other classes, etc. Don’t get me wrong, I know who the “true” villain is.

      Doesn’t mean I can’t be mad at these AI companies for unleashing this on us. It actively makes teaching the skills to understand writing harder since students can get close to “good” writing with these machines, but the writing it produces crumbles under the slightest scrutiny. We’re actively harming thought and understanding with them.

  • Seasonal_Peace [he/him]@hexbear.net · 48 points · 5 days ago

    I tried using AI to help find sources for my partner’s thesis. It’s a niche topic on body phenomenology and existentialism in pregnancy and birth. Instead, it cited Heidegger books that don’t even exist. A colleague recommended it, but honestly, you would have to be insane to rely on this.

    • fox [comrade/them]@hexbear.net · 31 points · 4 days ago

      I get so annoyed when people tell me to ask an AI something. It has no knowledge and no capacity for reason. The only thing it can do is produce an output that an inexpert human could potentially accept as true because the underlying statistics favour sequences of characters that, when converted to text and read by a human, appear to have a confident tone. People talk about AI hallucinating wrong answers and that’s giving it too much credit; either everything it outputs is a hallucination that’s accepted more often than not, or nothing it outputs is a hallucination because it’s not conscious and can’t hallucinate, it’s just printing sequential characters.
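
      To make that concrete, here’s a toy sketch (the vocabulary and probabilities are invented for illustration; this is not any real model’s API): generation is just weighted sampling over likely-looking continuations, and there is no step anywhere that checks the output against reality.

      import random

      # Toy "model": a context mapped to a probability table over next tokens.
      # Real models are vastly bigger, but the generation loop is the same in
      # spirit: pick a plausible-sounding continuation, nothing more.
      toy_model = {
          ("The", "capital", "is"): {"Paris": 0.5, "Lyon": 0.3, "Mars": 0.2},
      }

      def sample_next(context):
          # Weighted random choice over the table. Note that no part of this
          # procedure asks "is this true?" -- only "is this likely?".
          dist = toy_model[context]
          return random.choices(list(dist), weights=list(dist.values()), k=1)[0]

      context = ("The", "capital", "is")
      print(" ".join(context), sample_next(context))
      # Every run prints a fluent, confident-sounding completion; none of them
      # has been checked against anything.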

  • LGOrcStreetSamurai [he/him]@hexbear.net · 25 points · 4 days ago · edited

    I’m going back to school for my Master’s degree. I am in my 30s and I have realized that “Generative AI” does nothing but cheat me out of learning. Using AI for your homework/assignments does nothing but outsource your thinking and learning to the machine. It makes you dumber.

    • Nacarbac [any]@hexbear.net · 3 points · 4 days ago

      Yeah, I tried to use it a bit for my own mid-30’s MSc, and it was useful in the sense that it produced a terrible paragraph with some structure which I could then viciously edit into something new - decent at fixing grammar and finding words-for-things though. But that’s not too different from my earlier method of “just mash keys wildly and passionately and then go back over to edit out the sedition and most of the swearing”.

      The making up sources thing was interesting however, because what it reaaaaally did was put me onto the trick of following up the sources of my enemies, which very often revealed the dishonest cherry picking and outright misrepresentation involved, even in pretty Serious Works.

      As an aside I do think it’s good to get some experience with an LLM’s output even if - especially if - you’re against them, because it gives you a sense for them. I hear a very distinct and kinda annoying “chirpy ironic” voice in my head when reading LLM output, from my subconscious doing the analysis. Not totally reliable, I’m sure, but feels helpful.

      • LGOrcStreetSamurai [he/him]@hexbear.net · 3 points · 3 days ago · edited

        Yeah, I tried to use it a bit for my own mid-30’s MSc, and it was useful in the sense that it produced a terrible paragraph with some structure which I could then viciously edit into something new - decent at fixing grammar and finding words-for-things though. But that’s not too different from my earlier method of “just mash keys wildly and passionately and then go back over to edit out the sedition and most of the swearing”.

        That I can respect!

        As an aside I do think it’s good to get some experience with an LLM’s output even if - especially if - you’re against them, because it gives you a sense for them. I hear a very distinct and kinda annoying “chirpy ironic” voice in my head when reading LLM output, from my subconscious doing the analysis. Not totally reliable, I’m sure, but feels helpful.

        I have DeepSeek on my machine locally (it’s fine and free, and the way I see it all these MEGACORPS are the same, US-based or China, whatever) and I have used it from time to time. However, in an academic sense it can be a tool much like the advent of Google Search in the 2000s. Much like Google Search, it’s important to not just copy and paste whatever you find and call it a day. I think many people (my fellow students, I know for a fact) just throw a question into the machine, skim over the response, and say “seems fine to me”. I don’t wanna be that type of dude.

    • SamotsvetyVIA [any]@hexbear.net · 2 points · 4 days ago

      Close to a majority of my Bachelor’s and some of my Master’s was pointless busywork, but I guess it depends on the level of your university.

  • Philosoraptor [he/him, comrade/them]@hexbear.net · 36 points · 5 days ago

    told them the paper is an “A” if they can show me every quotation and failing otherwise. Does this seem like a fair policy (my thought is – no matter the method, fabrication of evidence is justification for failing work)?

    If the policy for plagiarism at your school is an F on the assignment, that seems fair to me. Asking LLMs to do your work is plagiarism.

    • ChestRockwell [comrade/them, any]@hexbear.net (OP) · 29 points · 5 days ago · edited

      I mean, I could go that route, but I figure, as a writer, fabricating quotations and evidence is fundamentally failing work.

      I’m trying to give the student the chance to save themselves too. If they just cited that (for instance) the quotation about “all great historical figures appear twice” was from The German Ideology instead of the 18th Brumaire, that’s not a problem – the quotation exists, it’s simply the student being sloppy at documentation.

      However, to claim that someone stated something they didn’t – that’s just fundamentally failing work (it would be like going online and saying Mao said that “power grows out of the hands of the peasantry” instead of “power grows out of the barrel of a gun”).

      I should note - my class has a policy that students can use AI as long as they clear it with me. However, they’re responsible for their work, and I won’t accept work with fake quotes. That’s dogshit writing.

      • Moss [they/them]@hexbear.net · 14 points · 4 days ago

        Seems generous tbh. If I submitted work with incorrect citations I would lose marks, and I would have to accept it, because that’s fair enough

      • Aceivan [they/them]@hexbear.net · 3 points · 4 days ago

        I should note - my class has a policy that students can use AI as long as they clear it with me

        genuinely why? I mean, I’m not saying you necessarily should go out and say “no AI, you can’t even think about looking at an AI for this assignment”, but it feels like this policy might be a small part of giving students the perception that it’s fine to use for your class. If it’s impossible to definitively prove whether or not they used AI, then why even mention it at all? Why not just focus on concrete things like fake sources and other bad writing?

        • ChestRockwell [comrade/them, any]@hexbear.net (OP) · 6 points · 4 days ago

          Primarily because I don’t want to police AI-as-source-discovery. Too much policing means students doing Google searches and reading the AI summary would run afoul of my policy, and I don’t want to deal with that. So illicit use to write your paper is banned, but I’m not really checking for that, and I’m basically upfront that their work is their own and they’re responsible for checking shit like this (also why this bungle is so infuriating). The whole “generate a Wikipedia article on a topic” trick can be a good starting point for key names, sources, etc., and I don’t want to say all of that is banned.

          And while I think the writing center (which students already pay for) is a better use of time and resources (or, hell, meeting with me), if a student wanted to chat about their essay with the bot instead, I could see it being potentially useful for those with really bad social anxiety. Balancing what the bot says with what I say isn’t the WORST outcome (indeed, as long as there are multiple vectors of critique/possibility the student has to weigh and choose between, my pedagogy is basically working). I don’t like it, but I don’t like a lot of stuff.

          But fundamentally it’s because policing this kind of stuff is a time sink, and a strict anti-AI policy is just a ton of work. Instead, I try to assume student work is student work until presented with evidence otherwise. Which is also why this student is in danger: not because of gen-AI use, but because of fabricated materials.

          I try to be a generous and kind reader. I don’t want to be a cop. But shit like this forces my hand (I can’t pretend otherwise!) and it really bugs me. I want to help the students with THEIR work and writing and I try not to jeopardize that by constantly assuming it’s not theirs. Leaving the door open helps avoid starting with that attitude too early.

          • Aceivan [they/them]@hexbear.net · 3 points · 4 days ago

            My feeling was just “don’t have a written AI policy” (or I suppose, flesh out what has been discussed under this post into a nuanced policy), not “have a strict anti-AI policy (on paper)”. I definitely wouldn’t advocate for the latter, both for reasons you mention and because it’s basically unfalsifiable.

            Like, you don’t have to explicitly say “AI is allowed [with these conditions]” to allow it in practice, and I feel like doing so can give students the wrong idea that they can just go for it and ask forgiveness rather than permission (or more likely deny deny deny and then only when they’re clearly caught in it say “but the policy”)

            I definitely appreciate your stance against teacher-as-cop thinking. I’m glad you’re in the profession and sorry you have to deal with this garbage

            • ChestRockwell [comrade/them, any]@hexbear.net (OP) · 2 points · 4 days ago

              deny deny deny and then only when they’re clearly caught in it say “but the policy”

              My goal (and I don’t know exactly how to achieve it) is to reverse this, so that by copping to it quickly we can move to “do you understand why what you produced is shit, and how the LLM did that” without them worrying I’m going to drag them to academic integrity.

              I don’t know how to get them to do it though. It’s like, I don’t want to spend an hour in my office pulling teeth about “ok, this source isn’t real, but did you document badly or did you use AI”

              • Aceivan [they/them]@hexbear.net · 3 points · 4 days ago

                yeah… I wish I had a better suggestion for you. Talking about it head-on might help, but that’s assuming they listen and believe you when you say you aren’t going to narc on them, and even then one or two speeches about it probably won’t break the bad habits they’ve developed/reinforced in other classes over time.

  • Beaver [he/him]@hexbear.net · 30 points · 4 days ago

    (it’s writing/research, so like, engaging in a discipline and looking at what’s been written before on your topic, etc.)

    BTW, I took time to look up some of these sources my student used, couldn’t find the quotes they quote, so told them the paper is an “A” if they can show me every quotation and failing otherwise. Does this seem like a fair policy (my thought is – no matter the method, fabrication of evidence is justification for failing work)?

    I think they will learn an important life lesson: that if they’re going to cheat, then they have to, at a minimum, be sure that they are at least “getting the right answer”. The tide of AI dystopia is unstoppable, but you can at least teach them that they can’t just completely shut their brains off to the extent that they are just presenting completely fabricated research and factual claims.

  • DinosaurThussy [they/them]@hexbear.net · 27 points · 4 days ago

    Using AI to write papers for a writing class is like using speech to text for a touch typing course. You’re bypassing the exercises that will actually provide the value you’re paying for

  • sgtlion [any]@hexbear.net · 31 points · 5 days ago

    Seems a fair policy. I like to imagine that if you stress this policy up front, students might actually check and verify all their own sources (and thus actually do their own research, even with the AI stuff)

  • GoodGuyWithACat [he/him]@hexbear.net · 25 points · 4 days ago

    BTW, I took time to look up some of these sources my student used, couldn’t find the quotes they quote, so told them the paper is an “A” if they can show me every quotation and failing otherwise. Does this seem like a fair policy (my thought is – no matter the method, fabrication of evidence is justification for failing work)?

    Most class syllabuses I’ve seen put LLM use in the same category as plagiarism. That’s an automatic failure on the assignment, and sometimes failure of the class.

  • sewer_rat_420 [he/him, any]@hexbear.net · 21 points · 4 days ago

    Your policy is fair, because then your students would hypothetically actually use their damn brains a tiny bit, which is what school should be about

    I would also posit that any false quotation could just be docked one letter grade, so 5 of them is an F

  • Mardoniush [she/her]@hexbear.net · 18 points · 4 days ago · edited

    Pass/fail verbal exams. Seriously, just ask them a random, very simple question about their paper, and if they can’t answer it, then 0%.

    This bypasses the AI problem, because if they’ve gone to all the trouble of making a fake paper and then learning its bullshit by heart, then hopefully they’ve learned something about how papers are written, even if by accident.

  • plinky [he/him]@hexbear.net · 23 points · 4 days ago · edited

    if two students are using the exact same (unneeded) variable in a script, did they use a similar prompt, or do they talk to each other? saruman-orb

    fucking hate this shit; something which could be done in like 20 lines of code is like 200. I don’t particularly have to care, ’cause I’m not teaching programming, but jesus christ
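
    For the curious, a rough sketch of how you might flag this automatically (the file names and the “assigned but never read” heuristic are placeholders, not a real plagiarism pipeline):

    import ast

    def unused_names(source):
        # Names assigned somewhere but never read: a crude stand-in for the
        # "unneeded variable" smell described above.
        assigned, loaded = set(), set()
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, ast.Name):
                if isinstance(node.ctx, ast.Store):
                    assigned.add(node.id)
                elif isinstance(node.ctx, ast.Load):
                    loaded.add(node.id)
        return assigned - loaded

    # Placeholder file names for two submissions.
    with open("student_a.py") as f:
        a = f.read()
    with open("student_b.py") as f:
        b = f.read()

    # The same unused variable showing up in both scripts is the tell:
    # either a shared prompt or a shared author.
    print("shared unused variables:", unused_names(a) & unused_names(b))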