With the rapid advances we’re currently seeing in generative AI, we’re also seeing a lot of concern about large-scale misinformation. Any individual with sufficient technical knowledge can now spam a forum with many organic-looking voices and generate photos to back them up. Has anyone given some thought to how we can combat this? If so, what do you think the solution should or could look like? How do you personally decide whether you’re looking at a trustworthy source of information? Do you think your approach works, or are there still problems with it?

  • modulus@lemmy.ml · 2 years ago

    I’m having trouble understanding why disinformation produced by an AI is more of a problem than that produced by a person. Sure, in theory it can be made to scale a lot more, though I would point out that AI is not, at the moment, light on resources either. But it’s unclear to me to what extent that makes a difference.

    • howrar@lemmy.ca (OP) · 2 years ago

      I don’t believe the content itself will be any more of an issue than human-generated misinformation. The main issue I see is that a single person can now achieve this on a large scale without ever leaving their mom’s basement, and at a much lower cost. It’s the concentration of power that I find concerning.