
  • 2 Posts
  • 270 Comments
Joined 1 year ago
Cake day: February 10th, 2024

  • My idea was to serve it in a way that would still allow ActivityPub clients to resolve content, while providing a lightweight static render of local content for other clients.

    For content on other instances, it would probably require setting up some kind of lightweight redirection service pointing to the original item, to avoid breaking those URLs.

    This could probably be built just from scraping, without requiring database access; a rough sketch of the content negotiation is below.
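    Purely as an illustration of that idea (the archive/ layout, file naming, and port are assumptions, not an existing Lemmy feature), a tiny server could negotiate on the Accept header like this:

    ```python
    # Sketch only: serve pre-rendered static HTML to most clients, but answer
    # ActivityPub clients (Accept: application/activity+json) with archived
    # AP JSON so federation links keep resolving. The archive/ layout, port,
    # and file naming are assumptions, not an existing Lemmy feature.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from pathlib import Path

    ARCHIVE = Path("archive").resolve()  # e.g. archive/post/123.html / .json

    class ArchiveHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            wants_ap = "application/activity+json" in (self.headers.get("Accept") or "")
            ext, ctype = ((".json", "application/activity+json") if wants_ap
                          else (".html", "text/html; charset=utf-8"))
            name = self.path.split("?", 1)[0].strip("/")
            target = (ARCHIVE / name).with_suffix(ext) if name else None
            # refuse empty paths and anything escaping the archive directory
            if (target is None or ARCHIVE not in target.resolve().parents
                    or not target.is_file()):
                self.send_error(404)
                return
            body = target.read_bytes()
            self.send_response(200)
            self.send_header("Content-Type", ctype)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("", 8080), ArchiveHandler).serve_forever()
    ```

    Remote content would instead be answered with a redirect to the original item’s URL, as described above.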




  • Ruud and Stux are not the only people involved.

    I’m personally only involved in Ruud’s side of things (mostly .world instances). Stux’s platforms are managed separately, so I can’t say too much about those. AFAIK, finances between Ruud’s instances and Stux’s instances are also separate.

    On the .world side, we currently have 6 active members for infra. For moderation, LW currently has 4 active instance admins plus some community team members with elevated privileges. Other .world platforms have moderation teams separate from LW’s. We certainly don’t have the resources to hire professional admins, but I’m sure we would find a viable solution if Ruud ever wanted to step back. Not every solution requires paying someone a salary, which seems to be your implication here.




  • Essentially, start by identifying the accounts posting links to the domain in question, then analyze the voting behavior of the accounts upvoting those posts. You can begin by filtering out accounts with legitimate activity, then narrow things down further and look for common patterns that apply only to the remaining accounts.

    Most of them were also created within similar time frames.

    Edit:

    To expand on this: once you have something to go on, it’s fairly easy. The hard part is finding a signal generic enough to catch this before someone else notices unusual voting patterns and reports them. A rough sketch of the first filtering pass is below.
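    Purely as an illustration, assuming a votes export of (voter, post_id, post_domain) rows and a map of account creation timestamps (the names and thresholds here are invented, not Lemmy’s actual schema), that first pass might look like this:

    ```python
    # Invented sketch of the filtering described above: find accounts whose
    # upvotes concentrate on one domain, then group them by registration time.
    from collections import Counter, defaultdict
    from datetime import timedelta

    def suspicious_voters(votes, accounts, target_domain,
                          min_votes=5, ratio=0.8, window=timedelta(days=7)):
        """votes: iterable of (voter, post_id, post_domain);
        accounts: dict mapping voter -> created_at (datetime)."""
        per_voter = defaultdict(Counter)
        for voter, _post, domain in votes:
            per_voter[voter][domain] += 1

        # step 1: keep accounts whose upvotes concentrate on the target domain
        focused = {
            v for v, c in per_voter.items()
            if c[target_domain] >= min_votes
            and c[target_domain] / sum(c.values()) >= ratio
        }

        # step 2: bursts of accounts registered close together are the
        # strongest common pattern among the remaining accounts
        created = sorted((accounts[v], v) for v in focused if v in accounts)
        clusters, current = [], []
        for ts, voter in created:
            if current and ts - current[-1][0] > window:
                clusters.append(current)
                current = []
            current.append((ts, voter))
        if current:
            clusters.append(current)
        return [c for c in clusters if len(c) > 1]
    ```

    A real analysis would combine several such signals rather than rely on registration times alone.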



  • I’ve now banned from lemmy.world all the accounts I could identify as part of this scheme.

    I originally sent them a warning, before I was aware of the scale of this and that it involved a bunch of alts with different usernames. Had I known that when I sent the warning, it would have been a ban straight away.

    They replied to my warning pretending not to know about any recent vote manipulation, so they’re clearly not interested in acting in good faith going forward.



  • In the past few days I’ve already sent warnings to 10 other users about similar behavior, and I’ve also banned two users for it. One of them appealed and has been unbanned.

    I also had this account on my list of such accounts, but I hadn’t followed up on it yet, as I figured I’d deal with the top n users first and review the list again later.

    I’ve now sent them a warning about this as well, making clear that they’ll be banned from our instance if they continue this behavior.


  • We currently have our own solution that sends emails with custom text explaining why people were rejected and what they can do next. If we add rejection reasons to Lemmy, we’ll have to review whether the built-in solution can adequately replace this functionality.

    Our current solution rejects the application and then deletes the user from the database so that they can sign up again if they want: denied applications otherwise only get deleted after a week or so, and an appeal process would require support tickets and a lot more of our time to handle.

    Our application process is fully automatic; it just checks that certain words are present in the answer and that the email address isn’t disposable. A rough sketch of such a check is below.
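    As an illustration only (the keyword list, domain list, and function name are made-up placeholders, not our actual configuration), an automated check along these lines could look like:

    ```python
    # Illustrative sketch of an automated application check: require certain
    # keywords in the answer and reject disposable email domains. The word
    # list and domain list below are made-up placeholders.
    REQUIRED_WORDS = {"rules", "communities"}        # hypothetical keywords
    DISPOSABLE_DOMAINS = {"mailinator.com", "guerrillamail.com"}

    def review_application(answer: str, email: str) -> tuple[bool, str]:
        """Return (accepted, rejection_reason)."""
        domain = email.rsplit("@", 1)[-1].lower()
        if domain in DISPOSABLE_DOMAINS:
            return False, "Disposable email addresses are not accepted."
        if not REQUIRED_WORDS <= set(answer.lower().split()):
            return False, "Your answer didn’t address the application question."
        return True, ""
    ```

    On rejection, the custom email with the reason would be sent and the pending user row deleted, so re-applying isn’t blocked.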



  • The screenshot in my previous comment is taken directly from their abuse form at https://abuse.cloudflare.com/csam. Your email is specifically about their proactive scanner, not about submitted abuse reports.

    They also explicitly state on their website that they forward any received CSAM reports to NCMEC:

    Abuse reports filed under the CSAM category are treated as the highest priority for our Trust & Safety team and moved to the front of the abuse response queue. Whenever we receive such a report, generally within minutes regardless of time of day or day of the week, we forward the report to NCMEC, as well as to the hosting provider and/or website operator, along with some additional information to help them locate the content quickly.






  • Unless you operate the instance that is being used to send this material, you can generally only work with the content that is posted or sent in PMs; almost all identifying information is stripped before it leaves the local instance to be federated to other instances.

    Even if there were a group of instances collaborating on e.g. a shared blocklist (a sketch of the idea is below), abusers would just switch to instances that aren’t part of the blocking network. There’s a reason it’s not recommended to run a Lemmy instance with open signups unless you have additional anti-spam measures and a decently active admin team. Smaller instances tend to have fewer prevention measures in place, which puts a burden on everyone else in the fediverse who ends up on the receiving end of such content. Unfortunately, this is not an easy problem to solve without giving up (open) federation.
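    Purely to illustrate what a shared blocklist would mean mechanically (the URL, format, and function names are invented, and this is not an existing Lemmy feature), a subscribing instance could check inbound activities like this:

    ```python
    # Invented sketch of consuming a shared instance blocklist; a real
    # deployment would need authentication and trust in the list itself.
    import json
    import urllib.parse
    import urllib.request

    BLOCKLIST_URL = "https://example.org/shared-blocklist.json"  # hypothetical

    def load_blocklist() -> set[str]:
        # expects a JSON array of blocked instance hostnames
        with urllib.request.urlopen(BLOCKLIST_URL) as resp:
            return set(json.load(resp))

    def accept_activity(actor_url: str, blocked: set[str]) -> bool:
        # reject inbound federated activity whose actor lives on a blocked host
        host = urllib.parse.urlsplit(actor_url).hostname or ""
        return host not in blocked
    ```

    As noted above, though, this only helps against instances inside the subscribing group; abusers can simply move elsewhere.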