ChatGPT generates cancer treatment plans that are full of errors — Study finds that ChatGPT provided false information when asked to design cancer treatment plans

Researchers at Brigham and Women’s Hospital found that cancer treatment plans generated by OpenAI’s revolutionary chatbot were full of errors.

  • @SirGolan@lemmy.sdf.org
    1 year ago

    Hah! That’s the response I always give! I’m not saying our brains work the exact same way, because they don’t and there’s still a lot missing from current AI, but I’ve definitely noticed that, at least for myself, I do just predict the next word when I’m talking or writing (with some extra constraints). But even with LLMs there’s more going on than that, since the attention mechanism lets the model consider parts of the prompt and what it has already written as it comes up with the next word. On the other hand, I can go back and correct mistakes I make while writing, and LLMs can’t do that…it’s just a linear stream.
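
    To make that “linear stream” point concrete, here’s a rough sketch of greedy next-token generation (assuming the Hugging Face transformers library and the small GPT-2 checkpoint, purely as an illustration): at every step the model attends over the prompt plus everything it has already emitted, picks one next token, and appends it. Nothing already generated ever gets revised.

    ```python
    # Minimal sketch of autoregressive (next-token) generation.
    # Assumes `torch` and `transformers` are installed and the "gpt2" checkpoint is available.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    # The prompt; everything generated gets appended to this growing sequence.
    input_ids = tokenizer("The treatment plan should", return_tensors="pt").input_ids

    with torch.no_grad():
        for _ in range(20):
            logits = model(input_ids).logits       # attention sees the whole sequence so far
            next_id = logits[0, -1].argmax()       # greedy choice of the single next token
            # Append the new token; there is no mechanism to go back and edit earlier ones.
            input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

    print(tokenizer.decode(input_ids[0]))
    ```

    (Real chatbots sample instead of taking the argmax and add a lot of other machinery, but the one-token-at-a-time loop is the same.)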

    • @eggymachus@sh.itjust.works
      1 year ago

      Agreed, I’ve definitely fallen for the temptation to say what sounds better rather than what’s exactly true… Less so in writing, possibly because it’s less of a linear stream.