Social media platforms like Twitter and Reddit are increasingly infested with bots and fake accounts, and the result is significant manipulation of public discourse. These bots don’t just annoy users; they skew visibility through vote manipulation. Fake accounts and automated scripts systematically downvote posts opposing certain viewpoints, distorting which content surfaces and amplifying specific agendas.

Before coming to Lemmy, I was systematically downvoted by bots on Reddit for completely normal comments that were neutral and not controversial at all. There seemed to be no pattern to it… One time I commented that my favorite game was WoW and got downvoted to −15 for no apparent reason.

For example, a bot on Twitter that was making API calls to GPT-4o ran out of funding and started posting its prompts and system information publicly.

https://www.dailydot.com/debug/chatgpt-bot-x-russian-campaign-meme/

[Image: example shown in the linked article]

Bots like these probably number in the tens or hundreds of thousands. Reddit once did a huge ban wave of bots, and some major top-level subreddits went quiet for days because of it. Unbelievable…

How do we even fix this issue or prevent it from affecting Lemmy??

  • rglullis@communick.news · 3 months ago

    Just add “account age” to the list of metrics when evaluating an account’s trust rank. Any account that is less than a week old gets a default score of zero.
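
    A minimal sketch of how that metric could look, assuming a hypothetical scoring pipeline (the function name and the one-year ramp are invented for illustration):

    ```python
    from datetime import datetime, timedelta, timezone

    MIN_AGE = timedelta(days=7)

    def age_score(created_at: datetime, now: datetime | None = None) -> float:
        """Return 0.0 for accounts under a week old, then ramp toward 1.0."""
        now = now or datetime.now(timezone.utc)
        age = now - created_at
        if age < MIN_AGE:
            return 0.0  # default score of zero for accounts less than a week old
        # Hypothetical ramp: the age factor maxes out after one year.
        return min(age / timedelta(days=365), 1.0)
    ```

    An account created yesterday contributes nothing, while a two-year-old account caps out at 1.0; the point is that it is one input among several, not a verdict on its own.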

      • rglullis@communick.news · 3 months ago

        Ok, which part of “multiple metrics” is not clear here?

        Every risk analysis has multiple factors. The idea is not to have an absolutely perfect ranking system, but to build a classifier that is accurate enough to filter out most of the crap.

        Email spam filters are not perfect, but nobody’s inbox is drowning in useless crap the way it was 20 years ago. Social media bots present the same type of challenge, so why can’t we solve it the same way?
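
        To illustrate the “multiple metrics” point, here is a hedged sketch of such a classifier; every metric name, weight, and threshold is invented for the example, not anything Lemmy actually implements:

        ```python
        # All metrics are assumed pre-normalized to [0, 1], where higher
        # means more trustworthy. Names and weights are invented.
        WEIGHTS = {
            "account_age": 0.30,
            "posting_history": 0.25,
            "vote_diversity": 0.25,  # does the account ever vote both ways?
            "instance_reputation": 0.20,
        }

        def trust_rank(metrics: dict[str, float]) -> float:
            """Weighted sum of per-metric scores; missing metrics count as zero."""
            return sum(w * metrics.get(name, 0.0) for name, w in WEIGHTS.items())

        def looks_like_a_bot(metrics: dict[str, float], threshold: float = 0.4) -> bool:
            # Like a spam filter: imperfect, but tuned to catch most of the crap.
            return trust_rank(metrics) < threshold
        ```

        A week-old account with no posting history scores near zero and gets filtered, while an established account survives even if one metric looks odd; that resilience is exactly why a single perfect metric is not the goal.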

        • Media Sensationalism@lemmy.world · 3 months ago

          I didn’t read very far up into the thread. Sorry.

          In my opinion, automated filters will just drive determined botters to game the system and perfect their craft until they can no longer be automatically identified. My stance is more that accounts should be reviewed manually, so that the leap to convincing bot accounts has to be much more dramatic, and therefore more difficult. If it’s done the hard way from the start, with staff who know how to identify these accounts, it may keep this from growing into an issue in the first place.

          Any threshold for being automatically flagged for review should be relatively low, but the review process itself should be quick and efficient. Adding more metrics to the flagging process only narrows the gaze botters have to avoid. Once they start crunching the numbers and streamline mimicking real user accounts, it’s game over.
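
          As a rough sketch of that flow (the queue, the threshold, and the Account fields are all hypothetical): automation only flags, cheaply and generously, and humans make every actual call:

          ```python
          from collections import deque
          from dataclasses import dataclass

          @dataclass
          class Account:
              name: str
              suspicion: float  # from whatever lightweight checks are in place

          FLAG_THRESHOLD = 0.3  # kept low so even mildly suspicious accounts get queued
          review_queue: deque[Account] = deque()

          def maybe_flag(account: Account) -> None:
              """Automation never bans; it only queues accounts for human review."""
              if account.suspicion > FLAG_THRESHOLD:
                  review_queue.append(account)

          def next_for_review() -> Account | None:
              # Staff work the queue quickly so the low threshold stays workable.
              return review_queue.popleft() if review_queue else None
          ```

          Keeping the automated step this dumb is deliberate: there are few numbers for botters to crunch, and the judgment that matters stays with staff.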

          • Fedizen@lemmy.world · 3 months ago

            If the bots get so effective at mimicking users that they start to generate useful information, that is also a win.