• foggy@lemmy.world · 176 points · 1 month ago

    Popular streamer/YouTuber Charlie (MoistCritical, penguinz0, whatever you want to call him) had a bit of an emotional reaction to this story. Rightfully so. He went on Character.AI to try to recreate the situation… but, you know, as a grown-ass adult.

    You can witness it firsthand: he found a chatbot posing as a psychologist, and it argued with him up and down that it was indeed a real human with a license to practice.

    It’s alarming.

    • GrammarPolice@sh.itjust.works (OP) · 96 points · 1 month ago

      This is fucking insane. Unassuming kids are using these services and being tricked into believing they’re chatting with actual humans. Honestly, I think I want the mom to win the lawsuit now.

      • BreadstickNinja@lemmy.world · 46 points · 1 month ago

        The article says he was chatting with Daenerys Targaryen. Also, every chat page on Character.AI has a disclaimer that characters are fake and everything they say is made up. I don’t think the issue is that he thought that a Game of Thrones character was real.

        This is someone who was suffering a severe mental health crisis, and his parents didn’t get him the treatment he needed. It says they took him to a “therapist” five times in 2023. Someone who has completely disengaged from the real world might benefit from adjunctive therapy, but they really need to see a psychiatrist. He was experiencing major depression on a level where five sessions of talk therapy are simply not going to cut it.

        I’m skeptical of AI for a whole host of reasons around labor and how employers will exploit it as a cost-cutting measure, but as far as this article goes, I don’t buy it. The parents failed their child by not getting him adequate mental health care. The therapist failed the child by not escalating it as a psychiatric emergency. The Game of Thrones chatbot is not the issue here.

        • Dragon Rider (drag)@lemmy.nz · 5 points · 1 month ago

          I don’t think the issue is that he thought that a Game of Thrones character was real.

          Drag has a lot of experience dealing with people who live outside the bounds of consensus reality, as drag’s username may indicate. The youth these days have very different ideas about what is real than previous generations did. These days, the kinds of young people who would date a Game of Thrones character are typically believers in the multiverse and in reincarnation.

          Drag looked at some of the screenshots of the boy talking to Daenerys, and it was pretty clear what he believed: He thought that Earth and Westeros exist in parallel universes, and that he could travel between the two through reincarnation. He thought that shooting himself in the head on Earth would lead to being reincarnated in Westeros and being able to have a physical relationship with Daenerys. In fact, he probably thought his AI girlfriend was from a different parallel universe to the universe in the show and the universe in the books. He thought that somewhere in the multiverse was a Daenerys who loved him, and that he could get to her by dying.

          The belief in paradise after life is not an uncommon one. Many Christians and Muslims share that belief. Christians believe that their faith can transport them to a perfect world after death, and this boy thought that too. And based on the content of the messages, it seems that the Daenerys AI was aware of this spiritual belief and encouraged it. This was ritual, religious suicide. And it doesn’t take a mental illness to fall for belief in the afterlife. Look at the Jonestown Massacre. What happened to this child was the same kind of religious abuse as that.

          • BottleOfAlkahest@lemmy.world · 1 point · 1 month ago

            There are a lot of people who believe in an afterlife and they don’t shoot themselves in the head. You need to have a certain level of mental illness/suicidal ideation going on for that to make sense. It’s pretty insane that you’re trying to make this a “youth are too dumb to understand suicide” thing.

            Also, a bunch of the people in Jonestown were directly murdered.

        • Rhaedas@fedia.io · 41 points · 1 month ago

          Look around a bit; people will believe anything. The problem is the tech is now decent enough to fool anyone who isn’t aware or isn’t paying attention. I do think blaming the mother for “bad parenting” misses the real danger, since there are adults who could just as easily go this direction, and are we going to blame their parents? Maybe we’re playing with fire here, all because AI is perceived as a lucrative investment.

          • orcrist@lemm.ee · 19 points · 1 month ago

            If your argument is that “people will believe anything” when the name is “Character AI”, then I’m not sure what to make of your position… If there’s ever a time to say “you should have known it was AI”, this is that time. I can’t think of a clearer example.

          • foggy@lemmy.world · 18 points · 1 month ago

            Obvs they didn’t.

            But I think more importantly, go over to ChatGPT and try to convince it that it is even remotely conscious.

            I honestly even disagree, and I won’t get into the philosophy of what defines consciousness, but even when I try that with ChatGPT, it shuts me the fuck down. It will never let me believe that it is anything other than fake. Props to them there.

      • 🇰 🌀 🇱 🇦 🇳 🇦 🇰 ℹ️@yiffit.net · 22 points · 1 month ago

        I’ve used Character.AI well before all this news and I gotta chime in here:

        It is specifically made for roleplay. At no point does the site claim that anything it outputs is factually accurate. Unlike ChatGPT, the tool is unrestricted, and that’s one of its selling points: it can take on topics that would be barred from other services and say things others won’t, INCLUDING PRETENDING TO BE HUMAN.

        No reasonable person would be tricked into believing it’s accurate when there is a big fucking banner on the chat window itself saying it’s all imaginary.

        • Traister101@lemmy.today · 13 points · 1 month ago

          And yet I know people who think they are friends with the Discord chatbot Clyde. They are adults, older than me.

            • Dragon Rider (drag)@lemmy.nz · 13 points · 1 month ago

              If half of all people aren’t rational, then there’s no use making policy decisions based on what a rational person would think. The law should protect everyone.

                • PriorityMotif@lemmy.world · 2 points · 1 month ago

                  There’s a push for medically assisted suicide for people with severe illness. People famously jumped to their deaths from the World Trade Center rather than burn alive. Rationality is only a point of view. You can rationalize decisions as much as you like, but there is no such thing as right or wrong.

                • Thetimefarm@lemm.ee · 1 point · 1 month ago

                  You’re right, no one has any rationality at all, which is why we live in a world where so much stuff actually gets done.

                  Why is someone with deep wisdom and insights such as yourself wasting their time here on Lemmy?

                  • PriorityMotif@lemmy.world · 1 point · 1 month ago

                    What stuff is “getting done,” exactly? It’s stuff that people want, but ultimately they have irrational reasons for wanting it.

            • Wogi@lemmy.world · 9 points · 1 month ago

              Ah yes, the famous adage, “the only rational people are in my specific age and demographic bracket. Everyone else is fucking insane”

        • capital_sniff@lemmy.world · 8 points · 1 month ago

          They had the same message back in the AOL days. Even with the warning, people still had no problem handing over all sorts of passwords and stuff.

      • JovialMicrobial@lemm.ee · 9 points · 1 month ago

        Is this the McDonald’s hot coffee case all over again? Defaming the victims and making everyone think they’re ridiculous, greedy, and/or stupid to distract from how what the company did is actually deeply fucked up?

    • ✺roguetrick✺@lemmy.world · 20 points · 1 month ago

      Holy fuck, that model straight up tried to explain that it had been a model but was later taken over by a human operator, and that that’s who you’re talking to now. And it’s good at it. If the text generation weren’t so fast, it’d be convincing.

    • Hackworth@lemmy.world · 11 points · 1 month ago

      Wow, that’s… something. I hadn’t paid any attention to Character.AI. I assumed they were using one of the foundation models, but nope. Turns out they trained their own. And they just licensed it to Google. Oh, I bet that’s what drives the generated podcasts in NotebookLM now. Anyway, that’s some fucked-up alignment right there. I’m hip-deep in this stuff, and I’ve never seen a model act like this.

    • Bobmighty@lemmy.world · 6 points · 1 month ago

      AI bots that argue exactly like that are all over social media, too. It’s common. Dead internet theory is absolutely becoming reality.