Google’s AI-driven Search Generative Experience has been generating results that are downright weird and evil, e.g. touting the “positives” of slavery.

  • ThunderingJerboa@kbin.social · 1 year ago

    That was just one article by one rogue doctor.

    That was pushed by many media organizations because it’s a sensationalist topic. Antivaxers are idiots, but the media played a fucking huge role in blowing a pilot study with a rather fucking absurd conclusion out of proportion so they could sell more ads/newspapers. I fucking doubt most antivaxers (hell, I doubt most people in general) ever read the original study and came to their own conclusions on it. They just watched some stupid idiots on the telly giving a bullshit story that went completely unchallenged.

    • livus@kbin.social · 1 year ago

      To be fair, no one expects The Lancet to publish falsified data. Only it does occasionally, and getting it to issue a retraction is like trying to turn a container ship around in the Panama Canal.

      But yeah, this is part of what I mean. Media cycles, digital reproducibility, and algorithms that chase clicks can all give AI-generated errors a lot of play, and rewrite them into more credible-looking forms, etc.
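      As a toy illustration (purely hypothetical Python, not any real platform’s ranking code), here’s how a ranker that optimizes only for predicted clicks keeps resurfacing the sensational error over the sober correction:

      from dataclasses import dataclass

      @dataclass
      class Story:
          headline: str
          sensationalism: float  # 0..1, a proxy for how clickable the framing is
          accuracy: float        # 0..1, invisible to the ranker

      def predicted_ctr(story: Story) -> float:
          # Engagement model: predicted clicks track sensationalism, not accuracy.
          return story.sensationalism

      def rank(feed: list[Story]) -> list[Story]:
          return sorted(feed, key=predicted_ctr, reverse=True)

      feed = [
          Story("Pilot study (n=12) links vaccine to disorder", 0.9, 0.1),
          Story("Large replication finds no link", 0.3, 0.9),
      ]
      for story in rank(feed):
          print(story.headline)
      # The low-accuracy, high-sensationalism story ranks first, gets the
      # clicks and the rewrites, and comes around for another pass.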

      • Sodis@feddit.de · 1 year ago

        Filtering out falsified data before publication is near impossible. If you want to publish falsified data, you easily can. No one can verify it without replicating the experiment themselves, which is usually done by a different scientific group after publication. Peer review is better suited to filtering out papers with bad methodology.