jesus this is gross man

  • HedyL@awful.systems · 1 day ago

    As I’ve pointed out earlier in this thread, it is probably fairly easy for someone devoid of empathy and a conscience to manipulate and control people. Most scammers and cult leaders appear to operate from similar playbooks, and it is easy to imagine how these techniques could be incorporated into an LLM (either intentionally or even unintentionally, as the training data is probably full of examples). That doesn’t mean the LLM is in any way sentient, though. However, this does not imply that there is no danger. At risk are, on the one hand, psychologically vulnerable people and, on the other, people who are too easily convinced that this AI is a genius and will soon be able to do all the brainwork in the world.

    • diz@awful.systems · 1 day ago (edited)

      I think this may also be a specific low-level exploit, whereby humans are already biased to mentally “model” anything as having agency (see all the sentient gods that humans invented for natural phenomena).

      I was talking to an AI booster (ewww) in another place, and I think they really are predominantly laymen brain-fried by this shit. That particular one posted a convo where, out of 4 arithmetic operations, 2 were of the form “12042342 can be written as 120423 + 19, and 43542341 as 435423 + 18”, combined with AI word salad, and he was expecting that this would be convincing.
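      For what it’s worth, the quoted “decompositions” aren’t just wrong, they’re off by orders of magnitude; a two-line check (illustrative only, the numbers are taken from the quote above) makes that obvious:

      ```python
      # Check the LLM's claimed decompositions from the quoted convo.
      claims = [
          (12042342, 120423 + 19),   # claimed: 12042342 = 120423 + 19
          (43542341, 435423 + 18),   # claimed: 43542341 = 435423 + 18
      ]
      for target, decomposition in claims:
          print(f"{target} == {decomposition}? {target == decomposition}")
      ```

      Both checks print False; the right-hand sides are roughly a hundredth of the targets, so it looks like the model just chopped off the last two digits and bolted a small number on.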

      It’s not that this particular person thinks it’s a genius; he thinks that it is not a mere computer, and the way it is completely shit at math only serves to prove to him that it is not a mere computer.

      edit: And of course they care not for any mechanistic explanations, because all of those imply LLMs are not sentient, and they believe LLMs are sentient. The “this isn’t it, but one day some very different system will be” counterargument doesn’t help either.

      • YourNetworkIsHaunted@awful.systems · 16 hours ago

        I mean, you could make an actual evo-psych argument about the importance of being able to model the behavior of other people in order to function in a social world. But I think part of the problem is also in the language at this point. Like, anthropomorphizing computers has always been part of how we interact with them. Churning through an algorithm means it’s “thinking”, an unexpected shutdown means it “died”, when it sends signals through a network interface it’s “talking”, and so on. But these GenAI chatbots (chatbots in general, really, but it’s gotten worse as their ability to imitate conversation has improved) are too easy to assign actual agency and personhood to, and it would be really useful to have a similarly convenient way of talking about what they do and how they do it without that baggage.