The best part of the fediverse is that anyone can run their own server. The downside of this is that anyone can easily create hordes of fake accounts, as I will now demonstrate.

Fighting fake accounts is hard, and most implementations do not currently have an effective way of filtering them out. I’m sure that the developers will step in if this becomes a bigger problem. Until then, remember that votes are just a number.

  • PetrichorBias · 342 upvotes · edited · 1 year ago

    This was a problem on reddit too. Anyone could create accounts - heck, I had 8 accounts:

    one main, one alt, one “professional” (linked publicly on my website), and five for my bots (whose accounts were optimistically created, but were never properly run). I had all 8 accounts signed in on my third-party app and I could easily manipulate votes on the posts I posted.

    I feel like this is what happened when you’d see posts with hundreds / thousands of upvotes but had only 20-ish comments.

    There needs to be a better way to handle this, but I’m unsure if we truly can solve it. Botnets are a problem across all social media (my undergrad thesis many years ago was detecting botnets on Reddit using Graph Neural Networks).

    Fwiw, I have only one Lemmy account.

    • impulse@lemmy.world · 152 upvotes · 1 year ago

      I see what you mean, but there’s also a large number of lurkers, who will only vote but never comment.

      I don’t think it’s unfeasible to have a small number of comments on a highly upvoted post.

      • PetrichorBias · 35 upvotes · 1 year ago

        Maybe you’re right, but it just felt uncanny to see thousands of upvotes on a post with only a handful of comments. Maybe someone who’s active on the bot-detection subreddits can pitch in.

          • randomname01@feddit.nl · 4 upvotes · 1 year ago

            True, but there were also a number of subs (thinking of the various meirl spin-offs, for example) that naturally had limited engagement compared to other subs. It wasn’t uncommon to see a post with like 2K upvotes and five comments, all of them remarking on how few comments there actually were.

    • simple@lemmy.world · 36 upvotes · 1 year ago

      Reddit had ways to automatically catch people trying to manipulate votes though, at least the obvious ones. A friend of mine posted a reddit link in our group for everyone to upvote and got temporarily suspended for vote manipulation like an hour later. I don’t know if something like that can be implemented in the Fediverse, but some people on github suggested a way for instances to share with other instances how trusted/distrusted a user or instance is.

      • cynar@lemmy.world · 37 upvotes · 1 year ago

        An automated trust rating will be critical for Lemmy, longer term. It’s the same arms race email has to fight. There should be a linked trust system of both instances and users. The instance ‘vouches’ for the user’s trust score. However, if other instances collectively disagree, then the trust score of the instance is also hit. Other instances can then use this information to judge how much to allow from users in that instance.
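
        A minimal sketch of what such a linked trust model could look like (every name, weight, and threshold here is invented for illustration; nothing like this exists in Lemmy today):

        ```python
        # Hypothetical linked trust model: instances vouch for their users,
        # and peer instances can collectively penalise a lying instance.
        from dataclasses import dataclass


        @dataclass
        class InstanceTrust:
            domain: str
            score: float = 0.5  # 0 = untrusted, 1 = fully trusted


        @dataclass
        class UserTrust:
            handle: str
            home: InstanceTrust
            vouched_score: float = 0.5  # what the home instance claims

            def effective_score(self) -> float:
                # A user is only as trusted as the instance vouching for them.
                return self.vouched_score * self.home.score


        def apply_peer_disagreement(instance: InstanceTrust,
                                    disagreeing: int, total_peers: int,
                                    penalty: float = 0.3) -> None:
            """If most peer instances dispute this instance's vouching,
            its own score takes a hit too."""
            if total_peers and disagreeing / total_peers > 0.5:
                instance.score = max(0.0, instance.score - penalty)
        ```

        Receiving instances could then down-weight or ignore votes from users whose effective score drops below whatever threshold they choose.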

        • hawkwind@lemmy.management · 3 upvotes · 1 year ago

          LLM bots have made this approach much less effective though. I can just leave my bots running for a few months or a year to build up reputation, then automate them so that they’re indistinguishable from 200 natural-looking users, making my opinion carry 200x the weight. Mostly for free. A person with money could do so much more.

          • cynar@lemmy.world · 2 upvotes · 1 year ago

            It’s the same game as email. An arms race between spam detection and spam detector evasion. The goal isn’t to catch all the bots, but to clear out the low-hanging fruit.

            In your case, if another server noticed a large number of accounts working in lockstep, that’s fairly obvious bot-like behaviour. If their home server also noticed the pattern and reports it (lowering the users’ trust ratings), then the instance won’t be dinged as harshly. If it reports that all is fine, then it’s assumed the instance might be involved.

            If you control the instance, then you can make it lie, but this downgrades the instance’s score. If it’s someone else’s, then there is incentive not to become a bot farm, or at least be honest in how it reports to the rest.
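
            A toy sketch of that idea (the thresholds and penalty are made up; this isn’t anything Lemmy actually does): flag accounts whose voting overlaps suspiciously, then trust or penalise the home instance depending on whether it reported the same pattern.

            ```python
            # Toy lockstep-voting heuristic; every threshold here is invented.
            from itertools import combinations


            def lockstep_pairs(votes: dict[str, set[str]], min_overlap: int = 20):
                """votes maps account -> set of post ids it upvoted.
                Returns account pairs whose voting histories overlap suspiciously."""
                return [(a, b) for a, b in combinations(votes, 2)
                        if len(votes[a] & votes[b]) >= min_overlap]


            def update_instance_score(score: float, home_reported_it: bool,
                                      penalty: float = 0.2) -> float:
                """If the home instance reported the same pattern, don't ding it;
                if it claimed all was fine, assume it may be complicit."""
                return score if home_reported_it else max(0.0, score - penalty)
            ```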

            This is basically what happens with email. It’s FAR from perfect, but a lot better than nothing. I believe 99+% of all emails sent are spam. Almost all get blocked. The spammers have to work to get them through.

        • fmstrat@lemmy.nowsci.com · 2 upvotes · 1 year ago

          This will be very difficult. With Lemmy being open source (which is good), bot makers can just avoid the pitfalls they see in the system (which is bad).

      • Thorny_Thicket@sopuli.xyz · 6 upvotes · 1 year ago

        I got that message too when switching accounts to vote several times. They can probably see it’s all coming from the same IP.

    • BrianTheeBiscuiteer@lemmy.world · 26 upvotes · 1 year ago

      Yes, I feel like this is a moot point. If you want it to be “one human, one vote” then you need to use some form of government login (like id.me, which I’ve never gotten to work). Otherwise people will make alts and inflate/deflate the “real” count. I’m less concerned about “accurate points” and more concerned about stability, participation, and making this platform as inclusive as possible.

      • PetrichorBias · 18 upvotes · edited · 1 year ago

        In my opinion, the biggest (and quite possibly most dangerous) problem is someone artificially pumping up their ideas. To all the users who sort by active / hot, this would be quite problematic.

        I’d love to see some social media research groups actually consider how to detect and potentially eliminate this issue on Lemmy, considering Lemmy is quite new and malleable at this point (compared to other social media). For example, if they think metric X may be a good idea to include in all metadata to increase the chances of detection, then it may be possible to include this in the source code of posts / comments / activities.

        I know a few professors and researchers who do research on social media and associated technologies, I’ll go talk to them when they come to their office on Monday.

        • BrianTheeBiscuiteer@lemmy.world · 16 upvotes · 1 year ago

          This also vaguely reminds me of some advanced networking topics. In mesh networks there is the possibility of rogue nodes causing havoc and different methods exist to reduce their influence or cut them out of the process.

        • zuhayr@lemmy.world · 2 upvotes · 1 year ago

          I have been thinking about this government ID aspect too, but I can’t quite work it out.

          Users sign up with a govt ID and obtain a unique social media key that’s used for all activities beyond the sign-up. One key per person, but a person can have multiple accounts? You know, like a database primary key.

          The relationship between the govt ID and the social media key needs to be protected by some zero-knowledge scheme so that no one can correlate the real person with their online presence. THIS is the bummer.
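
          A toy version of the “one key per person” part might look like the sketch below. It is not zero-knowledge at all (the identity provider could still correlate people with keys), which is exactly the hard part; a real design would need blind signatures or ZK proofs. Every name and secret here is hypothetical.

          ```python
          # Toy illustration only: a real system would need blind signatures or
          # zero-knowledge proofs so that not even the ID provider can link a
          # person to their key. Names and secrets are made up.
          import hashlib
          import hmac

          ID_PROVIDER_SECRET = b"held by the identity provider, never the platform"


          def derive_platform_key(national_id: str) -> str:
              """One deterministic pseudonymous key per person. The platform
              stores only this key, never the government ID itself."""
              return hmac.new(ID_PROVIDER_SECRET, national_id.encode(),
                              hashlib.sha256).hexdigest()


          def register_key(existing_keys: set[str], national_id: str) -> str | None:
              """Multiple accounts could hang off one key, but the key itself
              is issued at most once per person."""
              key = derive_platform_key(national_id)
              if key in existing_keys:
                  return None  # this person already holds their one key
              existing_keys.add(key)
              return key
          ```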

    • AndrewZabar@beehaw.org · 25 upvotes · 1 year ago

      On Reddit there were literally bot armies through which thousands of votes could be cast instantly. It will become a problem if votes have any actual effect.

      It’s fine if they’re only there as an indicator, but if votes are what determine popularity and prioritize visibility, it will become a total shitshow at some point. And it will happen rapidly. So yeah, better to have a defense system in place asap.

    • Thorny_Thicket@sopuli.xyz · 23 upvotes · 1 year ago

      I always had 3 or 4 reddit accounts in use at once: one for commenting, one for porn, one for discussing drugs, and one for pics that could be linked back to me (of my car, for example). I also made a new commenting account like once a year so that if someone recognized me they wouldn’t be able to find every comment I’ve ever written.

      On lemmy I have just two for now (the other is for porn), but I’m probably going to make one or two more at some point.

      • auth@lemmy.ml · 10 upvotes · 1 year ago

        I have about 20 reddit accounts… I created/switched accounts every few months when I used reddit.

    • InternetPirate@lemmy.fmhy.ml · 23 upvotes · edited · 1 year ago

      > I feel like this is what happened when you’d see posts with hundreds / thousands of upvotes but had only 20-ish comments.

      Nah it’s the same here in Lemmy. It’s because the algorithm only accounts for votes and not for user engagement.
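
      For illustration, here’s a simplified votes-only ranking next to a variant that also counts comments as engagement. Both formulas are made up for the example and are not Lemmy’s actual implementation.

      ```python
      # Simplified, hypothetical ranking formulas; not Lemmy's real algorithm.
      import math


      def votes_only_rank(score: int, age_hours: float) -> float:
          """Newer and higher-scored posts float up; comments are ignored."""
          return math.log10(max(1, score + 1)) / (age_hours + 2) ** 1.8


      def engagement_rank(score: int, comments: int, age_hours: float) -> float:
          """Same shape, but active discussion also pushes a post up,
          so raw vote count alone decides less of the ordering."""
          activity = score + 3 * comments  # the comment weight is arbitrary
          return math.log10(max(1, activity + 1)) / (age_hours + 2) ** 1.8
      ```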

    • Dandroid@dandroid.app · 13 upvotes · 1 year ago

      If you and several other accounts all upvote each other from the same IP address, you’ll get a warning from reddit. If my wife ever found any of my comments in the wild, she would upvote them. The third time she did it, we both got a warning about manipulating votes. They threatened to ban both of our accounts if we did it again.

      But here, no one is going to check that.

    • MigratingtoLemmy@lemmy.world · 9 upvotes · 1 year ago

      Congratulations on such a tough project.

      And yes, as long as the API is accessible, somebody will create bots. The alternative is far worse, though.

    • FartsWithAnAccent@lemmy.world · 6 upvotes · 1 year ago

      I’d just make new usernames whenever I thought of one I thought was funny. I’ve only used this one on Lemmy (so far) but eventually I’ll probably make a new one when I have one of those “Oh shit, that’d be a good username” moments.

    • Puph@lemmy.dbzer0.com · 6 upvotes · 1 year ago

      > I had all 8 accounts signed in on my third-party app and I could easily manipulate votes on the posts I posted.

      There’s no chance this works. Reddit surely does a simple IP check.

      • Salamander@mander.xyz · 5 upvotes · 1 year ago

        I would think that they need to set a somewhat permissive threshold to avoid too many false positives due to people sharing a network. For example, a professor may share a reddit post in a class with 600 students with their laptops connected to the same WiFi. Or several people sharing an airport’s WiFi could be looking at /r/all and upvoting the top posts.

        I think 8 accounts liking the same post every few days wouldn’t be enough to trigger an alarm. But maybe it is, I haven’t tried this.
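
        Something like the permissive check being described could look like this sketch (the window and threshold are invented and would need tuning so legitimate cases like a lecture hall or airport WiFi don’t trip it):

        ```python
        # Hypothetical same-IP vote check; window and threshold are invented.
        from collections import defaultdict
        from datetime import datetime, timedelta

        # (post_id, ip) -> list of (account, timestamp) for recent votes
        recent_votes: dict[tuple[str, str], list[tuple[str, datetime]]] = defaultdict(list)


        def looks_suspicious(post_id: str, ip: str, account: str, now: datetime,
                             window: timedelta = timedelta(hours=1),
                             max_accounts: int = 50) -> bool:
            """Flag a vote only when many distinct accounts vote on the same
            post from one IP inside the window."""
            key = (post_id, ip)
            recent_votes[key] = [(a, t) for a, t in recent_votes[key]
                                 if now - t < window]
            recent_votes[key].append((account, now))
            return len({a for a, _ in recent_votes[key]}) > max_accounts
        ```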

      • Valmond@lemmy.ml · 4 upvotes · 1 year ago

        I had one main account but also a couple to use when I didn’t want to mix my “private” life up with other things. I don’t even know if that’s against the TOS?

        Anyway, I stupidly made a Valmond account on several Lemmy instances before I got the hang of it, and when (if!) my server one day works I’ll make an account there too, so …

        I guess it might be like in the old forum days: you had a respectable account and another for asking stupid questions, etc. Admins would see (if they cared), but not ordinary users.

    • Hexorg@beehaw.org · 6 upvotes · 1 year ago

      I think the best solution so far is to require a captcha for every upvote, but that’d lead to a poor user experience. I guess it’s the cost-benefit of user experience degrading through fake upvotes vs. through requiring a captcha.

      • magnetosphere@beehaw.org · 9 upvotes · 1 year ago

        If any instance ever requires a captcha for something as trivial as an upvote, I’ll simply stop upvoting on that instance.

      • Catsrules@lemmy.ml · 3 upvotes · 1 year ago

        I could see this being useful on a per-community basis, or as something that a moderator could turn on and off.

        For example, on a political or news community during an election, it might be worthwhile to turn captcha on.

      • PetrichorBias · 14 upvotes · edited · 1 year ago

        I don’t use wefwef, I use jerboa for android.

        **bold**

        *italics*

        > quote

        `code`

        # heading

        - list

        • AndrewZabar@beehaw.org · 8 upvotes · edited · 1 year ago

          Ah ok. Yeah, I assumed the markdown was the same as reddit’s, since it’s all markdown, but reddit used to have a toolbar.

          Thanks for the response.

          Also, I’ve wondered why they don’t have an underline in markdown.

          • TWeaK@lemm.ee · 4 upvotes · edited · 1 year ago

            Fun fact: old reddit used to use one of the header functions as an underline. I think it was 5x # that did it. However, this was an unofficial implementation of markdown, and it was discarded with new reddit. Also, being a header function, you could only apply it to an entire line or paragraph, rather than to individual words.

    • 🐱TheCat@sh.itjust.works · 3 upvotes · 1 year ago

      IMO the best way to solve it is to ‘lower the stakes’: spread out between instances, avoid behaviors like buying any highly upvoted recommendation without due diligence, etc. Basically, become ‘un-advertisable’, or at least less so.

    • I don’t know how you got away with that, to be honest. Reddit has fairly good protection from that behaviour. If you upvote something from the same IP with different accounts reasonably close together, there’s a warning. Do it again and there’s a ban.

      • PetrichorBias · 2 upvotes · 1 year ago

        I did it two or three times with 3-5 accounts (never all 8). I also used to ask my friends (N=~8) to upvote stuff too (yes, I was pathetic) and I wasn’t warned/banned. This was five or six years ago.

    • Andy@lemmy.world · 1 upvote · 1 year ago

      I’m curious what value you get from a bot? Were you using it to upvote your posts, or to crawl for things that you found interesting?

      • PetrichorBias · 1 upvote · edited · 1 year ago

        The latter. I was making bots to collect data (for the previously-mentioned thesis) and to make some form of utility bots whenever I had ideas.

        I once had an idea to make a community-driven tagging bot to tag images (like hashtags). This would have been useful for graph building and just general information-lookup. Sadly, the idea never came to fruition.