ChatGPT generates cancer treatment plans that are full of errors
Study finds that ChatGPT provided false information when asked to design cancer treatment plans. Researchers at Brigham and Women’s Hospital found that cancer treatment plans generated by OpenAI’s revolutionary chatbot were full of errors.

  • SirGolan@lemmy.sdf.org · 21 points · 1 year ago

    What’s with all the hit jobs on ChatGPT?

    From the paper: “Prompts were input to the GPT-3.5-turbo-0301 model via the ChatGPT (OpenAI) interface.”

    This is the second paper I’ve seen recently that complains ChatGPT is crap while actually using GPT-3.5. There is a world of difference between 3.5 and 4. Unfortunately, news sites aren’t savvy enough to pick up on that and just run with “ChatGPT sucks!” It’s not even ChatGPT if they’re using that model, either: the paper is wrong (or out of date), because there’s no way to use gpt-3.5-turbo-0301 in the ChatGPT interface, and I don’t think there ever was. It was probably ChatGPT-0301 or something similar, which is (AFAIK) slightly different. The dated snapshot is an API model (see the sketch below).

    Anyway, TL;DR: the paper is like saying “I tried running Diablo 4 on my Windows 95 computer and it didn’t work. Surprised Pikachu!”
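
    A minimal sketch of what reaching that dated snapshot actually looks like, assuming the pre-1.0 `openai` Python client that was current at the time (API key and prompt are placeholders):

    ```python
    # Minimal sketch: pinning the dated gpt-3.5-turbo-0301 snapshot. This model
    # is selected via the API, not through the ChatGPT web interface.
    import openai

    openai.api_key = "sk-..."  # placeholder key

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0301",  # dated API snapshot
        messages=[{"role": "user", "content": "Draft a cancer treatment plan."}],
        temperature=0,  # minimize sampling variation for evaluation
    )
    print(response["choices"][0]["message"]["content"])
    ```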

    • eggymachus@sh.itjust.works · 10 points · 1 year ago

      And this tech community is being weirdly Luddite about it as well, saying stuff like “it’s only a bunch of statistics predicting what’s best to say next.” Guess what: so are you, sunshine.

      • PreviouslyAmused@lemmy.ml · 6 points · 1 year ago

        I mean, people are slightly more complicated than that. But sure, at their most basic, people communicate using statistical models.

        • eggymachus@sh.itjust.works · 2 points · 1 year ago

          Ok, maybe slightly :) but it surprises me that the ability to emulate a basic human is dismissed as “just statistics”, since until a year ago it seemed like an impossible task…

          • markr@lemmy.world · 2 points · 1 year ago

            The dismissal is coming from the class of people most threatened by these systems.

      • amki@feddit.de · 4 points · 1 year ago

        Might be true for you, but most people do have a concept of true and false and don’t just dream up stuff to say.

        • eggymachus@sh.itjust.works · 1 point · 1 year ago

          Yeah, I was probably a bit too caustic, and there’s more to (A)GI than an LLM can achieve on its own, but I do believe that some, and perhaps a large, part of human consciousness works in a similar manner.

          I also think that LLMs can have models of concepts; otherwise they couldn’t do what they do. Probably of truth and falsity too, but perhaps with a lack of external grounding?

        • markr@lemmy.world · 1 point · 1 year ago

          Actually, we “dream up” things to say quite a lot, in the sense that our unconscious processes are far more important to our mental functioning than we like to admit. We’re also not very good at evaluating the truth value of complex expressions.

      • dukk@programming.dev · 3 points · 1 year ago

        IMO, for AI to reach a genuinely useful point, it needs to be able to learn. Now, I’m no expert on neural networks, but if a model can’t learn anything new once it’s been trained, it’s never really going to reach its true potential. It can imitate a human, but that’s about it. Once AI can really learn, it’ll become an order of magnitude more useful. Don’t get me wrong: all this AI work is a step in the right direction, but we’ll only be able to go so far with pre-trained models.
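
        To make “pre-trained” concrete, a toy illustration (GPT-2 via Hugging Face as a stand-in): at inference time the weights are frozen, so nothing the model “reads” in a conversation persists into the network itself.

        ```python
        # Toy illustration: a forward pass at inference time updates nothing,
        # so the model cannot pick up new knowledge from use alone.
        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("gpt2")  # stand-in model
        model = AutoModelForCausalLM.from_pretrained("gpt2")
        model.eval()  # inference mode: weights stay fixed

        ids = tokenizer("A brand-new fact the model has never seen.",
                        return_tensors="pt").input_ids
        with torch.no_grad():   # no gradients, hence no weight updates
            model(ids)          # the model is unchanged after this forward pass
        # Only explicit re-training or fine-tuning would change the weights.
        ```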

      • SirGolan@lemmy.sdf.org · 3 points · 1 year ago

        Hah! That’s the response I always give! I’m not saying our brains work the exact same way, because they don’t and there’s still a lot missing from current AI, but I’ve definitely noticed that, at least for myself, I do just predict the next word when I’m talking or writing (with some extra constraints). But even with LLMs there’s more going on than that, since the attention mechanism allows the model to consider parts of the prompt and what it’s already written as it’s coming up with the next word. On the other hand, I can go back and correct mistakes I make while writing, and LLMs can’t do that; it’s just a linear stream.
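
        A toy version of that loop, just to make “predict the next word” concrete (GPT-2 via Hugging Face transformers as a stand-in, greedy decoding for simplicity):

        ```python
        # Toy autoregressive loop: attention sees the whole prefix at each step,
        # but emission is strictly left-to-right; earlier output is never revised.
        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("gpt2")  # stand-in model
        model = AutoModelForCausalLM.from_pretrained("gpt2")

        ids = tokenizer("The study found that", return_tensors="pt").input_ids
        with torch.no_grad():
            for _ in range(10):
                logits = model(ids).logits        # scores for every candidate next token
                next_id = logits[0, -1].argmax()  # greedily pick the most likely one
                ids = torch.cat([ids, next_id.view(1, 1)], dim=1)  # append, never edit

        print(tokenizer.decode(ids[0]))
        ```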

        • eggymachus@sh.itjust.works · 2 points · 1 year ago

          Agree, I have definitely fallen for the temptation to say what sounds better, rather than what’s exactly true… Less so in writing, possibly because it’s less of a linear stream.