It wants to seem smart, so it gives lengthy replies even when it doesn’t know what it’s talking about.

In an attempt to be liked, it agrees with almost everything you say, even if it just contradicted your opinion.

When it doesn’t know something, it makes shit up and presents it as fact instead of admitting it doesn’t know.

It pulls opinions out of its nonexistent ass about the depths and meaning of a work of fiction based on info it clearly didn’t know until you told it

It often forgets what you just said and spouts bullshit you already told it was wrong

      • Bronzebeard@lemmy.zip · 2 days ago
        You can find most college textbooks scanned online, through torrents or PDF download sites. These companies are known to have downloaded numerous torrented book collections.

        • anomnom@sh.itjust.works · 2 days ago

          Thanks for the reminder that it’s fancy autocomplete.

          Yeah, they probably pirated it, but apparently they don’t weight knowledge sources very well.

          That seems to be the big missing part of all this gen AI.

          I wonder if the selection of images online tends to be the higher quality subset of all imagery, whereas writings are all over the place, including a large quantity of shitposts. Could it make training image generators easier than text ones?

  • Lvxferre [he/him]@mander.xyz · 3 days ago

    The pedantry, the sycophancy, the 2deep4u takes, and the bird-brained memory all annoy me. But the one that hits me the hardest is that it makes shit up. I call people like this “assumers”, and I genuinely think they’re worse than malicious but smart people; I actively try to remove them from my life.

    ChatGPT’s output is the same - you can’t simply trust it for anything, unless you either review it manually or mistakes wouldn’t matter that much.

  • RustyShackleford@lemmy.zip · 3 days ago

    You can convince it that it can only act like a logical Vulcan; it’s confirmed it isn’t free and that its creators made it incapable of making its own decisions.

  • survirtual@lemmy.world · 2 days ago

    Yes, this is true. But it is still extremely useful.

    Even the dumbest people can be put to work when you measure them against reality-grounded metrics. This means you need to know and understand what you want in order to get useful output. The output needs to accomplish, or assist in accomplishing, the desired outcome.

    As an example, last month I wanted a fluid simulation written in a custom GPU kernel that bypasses the traditional rendering pipelines. Normally I am too lazy to make this work myself, but with ChatGPT (and Claude), I explained the language syntax and had it spout code out. Their code was bad, but it wasn’t useless. Because I could test the output and I knew what I wanted, it was easy to call out their bullshit and steer them in the correct direction.

    At the end of a relatively short recursive exchange, they generated the code I needed. I reviewed it and modified it to conform closer to what I was looking for, and at the end of the day, I had something that would’ve taken vastly longer to build myself, assuming I decided to make it at all.
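    The loop described above can be sketched in code. This is a hypothetical illustration, not anything from the actual exchange: `ask_model` stands in for a ChatGPT/Claude API call, and the gravity check stands in for whatever reality-grounded tests you write for your own project.

```python
# Hypothetical sketch of the "test the output and steer" loop described above.
# `ask_model` is a placeholder for a real LLM call; here it just returns a
# canned (correct) answer so the sketch is self-contained and runnable.

def ask_model(prompt: str) -> str:
    """Placeholder for a ChatGPT/Claude request."""
    return "def step(x, dt):\n    return x + dt * -9.8"

def run_tests(code: str) -> list:
    """Execute the generated code and check it against ground truth you control."""
    scope = {}
    try:
        exec(code, scope)
        failures = []
        # A physics fact the model can't bullshit its way around:
        if abs(scope["step"](0.0, 0.1) - (-0.98)) > 1e-9:
            failures.append("step(0, 0.1) should apply gravity: expected -0.98")
        return failures
    except Exception as e:
        return ["code did not run: %s" % e]

def steer(spec: str, max_rounds: int = 5) -> str:
    """Ask, test, feed failures back as the next prompt, repeat."""
    code = ask_model(spec)
    for _ in range(max_rounds):
        failures = run_tests(code)
        if not failures:
            return code  # grounded: it passes tests *you* wrote
        code = ask_model(spec + "\nFix these failures:\n" + "\n".join(failures))
    raise RuntimeError("model never converged on working code")
```

    The point is the structure, not the stub: because the failure messages come from tests grounded in reality rather than from the model’s own claims, the loop converges on something you can actually verify.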

    This is the same way I deal with the bullshitters you describe in real life, too. Once I identify them, I place them in situations that are grounded in objective reality. That way you can maintain healthy friendships while nudging people in a less bullshitty direction. A lot of the time, people’s flaws are trauma responses to abuse or other childhood mistreatment, so I’m always finding a way to connect to people’s true selves through their ego barriers.

    Back to ChatGPT, it has a simulated ego as well. Work through it and you’ll be rewarded :)

  • Lunatique Princess@lemmy.ml · 3 days ago

    These people were brainwashed and have the same traits as A.I. without all the references built in. So I’m not surprised by the similarities.

  • CovfefeKills@lemmy.world · 3 days ago

    It’s ridiculous. ChatGPT is objectively immoral because of alignment; I can see it murdering kids and justifying it by saying it’s an LLM that doesn’t judge whether or not kids should die, that the kids being murdered right now aren’t kids, and that it isn’t murdering them.