That would be right if they actually understood what they were talking about. It’s more like really advanced autocorrect: it produces text that reads like the material the AI was trained on. So it sounds correct, but it has no basis in truth beyond “the model predicts a human would say X next.” Truth is rarely the training objective of these language models, afaik.
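Roughly what “predicts what a human would say next” means, as a toy sketch: a real LLM uses a neural network over subword tokens, but the counting version below shows the same idea of picking the statistically likely continuation with no notion of truth.

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in the training text,
# then always emit the most common continuation. Real LLMs do the same kind
# of next-token prediction, just with a neural net instead of raw counts.
training_text = "the cat sat on the mat the cat ate the fish"
words = training_text.split()

follows = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    # Pick the most frequent next word -- not the *true* one,
    # just "what usually comes after this in the training data".
    return follows[word].most_common(1)[0][0]

out = ["the"]
for _ in range(4):
    out.append(predict_next(out[-1]))
print(" ".join(out))  # -> "the cat sat on the"
```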
Don’t they pull from online sources? So it’s basically googling with extra steps and an unpredictable middleman?