• Rentlar@lemmy.ca · 9 points · 1 year ago

    For a permanent solution, we will to some extent have to give up a level of anonymity. Whether that’s linking a discussion to a real-life meetup… like this (NSFW warning)

    or some sort of government tracking system.

    When nobody knows whether you’re a dog posting on the internet, or a robot, or a human, mitigations like CAPTCHAs and challenge questions will only slow AI down; they can’t hold it off forever.

    It doesn’t help that on Reddit (where there are/were a lot of interacting users), the top-voted discussion is often the same tired memes and puns. That predictability makes it easier for AI to imitate human users.

    • Gsus4 · 6 points · 1 year ago

      Yes, this is the solution. Each user needs to know a critical number of other users in person whom they can trust (and trust not to lie about bots, like u/spez did), so that a mesh of trust forms in which you can verify that any user is human in a maximum of six hops (see the sketch below).

      tl;dr: if you have no real-life friends…it’s all bots :P
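      A minimal sketch of that check, under made-up assumptions: the trust graph below is invented, an edge means “vouched for in person,” and a breadth-first search answers “is this user within six hops of me?”

      ```python
      # Hypothetical trust graph: every name and edge here is invented
      # for illustration; an edge means "met in person and vouched for".
      from collections import deque

      trust_graph = {
          "you":   ["alice", "bob"],
          "alice": ["carol"],
          "bob":   ["dave"],
          "carol": ["erin"],
          "dave":  [],
          "erin":  ["frank"],
          "frank": [],
      }

      def verified_human(start: str, target: str, max_hops: int = 6) -> bool:
          """Breadth-first search: is `target` within `max_hops` vouches of `start`?"""
          queue = deque([(start, 0)])
          seen = {start}
          while queue:
              user, hops = queue.popleft()
              if user == target:
                  return True
              if hops == max_hops:
                  continue  # stop expanding past the hop limit
              for friend in trust_graph.get(user, []):
                  if friend not in seen:
                      seen.add(friend)
                      queue.append((friend, hops + 1))
          return False

      print(verified_human("you", "frank"))    # True: you -> alice -> carol -> erin -> frank (4 hops)
      print(verified_human("you", "mallory"))  # False: no chain of vouches reaches them
      ```

      In practice the edges would be signed attestations rather than a plain dict, and the hop cap is what keeps a farm of mutually-vouching bot accounts from inheriting trust from one careless human.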

      • Tabb5@vlemmy.net · 2 points · 1 year ago

        That sounds like the PGP Web of Trust, which has been in use for a long time and provides cryptographic signatures and encryption, particularly (but not only) for email.
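        A toy sketch of the certification step that idea rests on, under simplified assumptions (raw Ed25519 signatures via the Python cryptography package, invented names, not PGP’s actual packet format): Alice signs Bob’s public key, and anyone who already trusts Alice’s key can verify that trust edge.

        ```python
        # Toy Web-of-Trust certification: NOT real PGP, just the core move.
        # Requires the 'cryptography' package (pip install cryptography).
        from cryptography.exceptions import InvalidSignature
        from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
        from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

        alice = Ed25519PrivateKey.generate()
        bob = Ed25519PrivateKey.generate()

        # Alice "certifies" Bob by signing the raw bytes of his public key.
        bob_pub = bob.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
        certification = alice.sign(bob_pub)

        # Anyone holding Alice's public key can verify the edge Alice -> Bob;
        # verify() raises InvalidSignature if the certification is forged.
        try:
            alice.public_key().verify(certification, bob_pub)
            print("Alice's certification of Bob's key checks out")
        except InvalidSignature:
            print("Certification is forged or corrupted")
        ```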