We’ve defederated from:
lemmy.k6qw.com, lemmy.podycust.co.uk, waveform.social, bbs.darkwitch.net, cubing.social, lemmy.roombob.cat, lemmy.jtmn.dev, lemmy.juggler.jp, bolha.social, sffa.community, dot.surf, granitestate.social, veenk.help, lemmyunchained.net, wumbo.buzz, lemmy.sbs, lemmy.shwizard.chat, clatter.eu, mtgzone.com, oceanbreeze.earth, mindshare.space, lemmy.tedomum.net, voltage.vn, lemmy.fyi, demotheque.com, thediscussion.site, latte.isnot.coffee, news.deghg.org, lemmy.primboard.de, baomi.tv, marginalcuriosity.net, lemmy.cloudsecurityofficehours.com, lemmy.game-files.net, lemmy.fedi.bub.org, lemmy.blue, lemmy.easfrq.live, narod.city, lemmy.ninja, lemmy.reckless.dev, nlemmy.nl, lemmy.mb-server.com, rammy.site, fedit.io, diggit.xyz, slatepacks.com, theotter.social, lemmy.nexus, kleptonix.com, rabbitea.rs, zapad.nstr.no, feddi.no
based on the list of instances compiled by @sunaurus@lemm.ee here. Thank you again for that work; it’s highly appreciated.
This is a preventive measure against massive numbers of accounts being created for botting purposes. Most of the banned instances appear to be single-user instances, so we don’t expect this to have much effect on anyone’s use of Beehaw. If you are an admin of one of those instances, feel free to contact us at support@beehaw.org.
Here is a list, btw, of tools we’d like, if possible:
- Role for approval of applications (to delegate)
- Site mods (to delegate from admins)
- Auto-report posts with certain keywords or domains (for easier curation without relying on user reports)
- Statistics on growth (users, comments, posts, reports)
  - User total
  - MAU (monthly active users)
  - User retention
  - Number of comments
  - Number of posts
  - Number of open reports
  - Number of resolved reports
- Sort reports
  - by resolved/open
  - by local/remote
- Different ways to resolve a report
  - Suspend an account for a limited amount of time rather than just banning
  - Send a warning
- Account mod info
  - Number of ‘strikes’ (global and local) and reports
  - Moderation notes
  - Change email
  - Change password
  - Change role
- Ability to pin messages in a post
- Admins should be able to purge content
- Filter the modlog to local actions
- Better federation tools (applications to communities, limiting)
  - Applications to communities, to allow safe spaces to exist (people should not be able to just “walk in” on a safe space; similar in a way to follow requests on Mastodon)
  - Limiting (lock our communities down from certain instances while still allowing people on our instance to talk to people from those instances)
Federation tools are our highest priority, but any of these would be welcome additions.
Adding to this - not a software solution but a platform one…
I think it would make sense to be able to outsource a lot of this stuff to a trusted third party. That’s how email generally works these days for example.
Beehaw would, of course, have its own moderation/policy/etc. team, but wouldn’t it be better if you could also run a first-pass check, possibly an automated one, that works across instances and flags accounts with known bad behavior (anything from illegal content to simply being downvoted into oblivion on every post they make in a reputable community)?
This is one of many reasons, and perhaps the most pressing, that we need better moderation and admin tools. Defederation is currently the only way to stop spam from instances like this.
Realistically, there are far too many small instances set up by a single person for their own learning, their own fun, or other reasons, who isn’t a full-stack developer, a seasoned sysadmin, a database engineer, a security expert, and every other role an instance currently demands given how new this platform is. There will always be more instances like this, and they are an attack vector for spam, trolling, and other malicious purposes.
If the goal is to allow people to set up their own instances, and we are promoting and encouraging people to do this, then the larger instances need tools to manage it. Otherwise we’re going to end up with a community that is constantly fragmenting further, and one where the largest instances take measures to isolate themselves from the smaller instances that are frequently used as attack vectors.
If you’re a developer, please, please, please focus your efforts on more granular administrative tools. We cannot whitelist or blacklist instances, and there are no tools to evaluate how trustworthy a source is. If there’s a blacklist/whitelist, there’s no way for an instance or a user to apply to a community on another instance. There are plenty of individuals with great ideas on how this might be accomplished, and having access to many different tools and strategies will make this platform healthy.
Some risk will be necessary. At some level we do have to let small instances with minuscule communities exist and participate in the wider fediverse; otherwise this whole thing will simply centralize by fiat.
The cautionary tale is email. In a way, email is the most successful decentralized protocol: anyone can technically spin up a server and start communicating with any other email server. The problem, of course, is that if you do this in practice, your email will almost never get through to the majority of people. Why? Most of the large email providers have formed what amounts to a whitelist of trust, and they either outright reject participants they don’t recognize or subject those outside participants to incredibly high standards that they themselves do not have to abide by.
So, email has practically become a centralized affair controlled by a few big stakeholders. A lot of small email providers have gotten out of the game in the last decade because they’re tired of dealing with it. It’s a mess.
I’m not against better tools, but I’d add that health is not solely measured by the absence of spam. A spam-free fediverse that’s just one instance and its three closest friends is not healthy. Whatever solutions are developed should leave the door open for small instances to still participate and have an honest chance at survival.
Thank you for eloquently putting something that I have been struggling to put into words. I really hope that the big instances don’t all end up moving to a whitelist federation model; the ability to have my own instance and interact with any community in the fediverse is what brought me here.
That said, a lot of work needs to go into making this platform more resilient against spam bots. The biggest problem I see is that the default instance settings aren’t resistant at all. It seems to me that it shouldn’t even be possible to deploy a Lemmy instance with no email verification, no captcha, and open sign-ups, but here we are.
Perhaps some sort of sanity check in Lemmy that disables federation in that case would be a good idea. If someone is competent enough to implement their own spam protection beyond those measures, they’re probably competent enough to fork Lemmy and disable said sanity check.
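Such a sanity check could be as simple as refusing to federate unless at least one anti-bot gate is enabled on open sign-ups. A sketch, with an invented config shape (these are not Lemmy’s real setting names):

```python
from dataclasses import dataclass

@dataclass
class SignupSettings:
    # Invented field names for illustration only; not Lemmy's real config keys.
    open_registration: bool
    email_verification: bool
    captcha: bool
    application_required: bool

def federation_allowed(s: SignupSettings) -> bool:
    """Refuse federation for instances with completely unguarded open sign-ups."""
    if not s.open_registration:
        return True  # closed/invite-only sign-ups aren't a bot-farm risk
    # With open registration, at least one gate must be on.
    return s.email_verification or s.captcha or s.application_required
```

The point isn’t that this check is hard to bypass, only that the unsafe configuration should never be the path of least resistance.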
The email comparison is pretty apt. One of the things email providers eventually had to deal with was the reputation of different entities. Right now it’s essentially a boolean for the various server admins: identify things that could cause trouble and either block them or don’t.
We would need to answer some questions about how to quantify what good behavior looks like in Lemmy, in ways that aren’t trivial to game or bypass.
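One way past the boolean is a per-instance reputation score built from observable signals. A toy sketch follows; the signals and weights are entirely made up, and a real version would need much more care to resist gaming:

```python
def instance_reputation(total_users: int, active_users: int,
                        resolved_reports: int, total_posts: int) -> float:
    """Toy heuristic: score in [0, 1], higher = more trustworthy.

    Bot farms tend to have many registered users but few active ones,
    and their content tends to attract reports.
    """
    if total_users == 0:
        return 0.0
    activity_ratio = active_users / total_users           # low for bot farms
    report_rate = resolved_reports / max(total_posts, 1)  # how often content is actioned
    score = 0.6 * min(activity_ratio, 1.0) + 0.4 * max(0.0, 1.0 - 10 * report_rate)
    return round(score, 3)
```

A healthy small instance (100 users, 80 active) would score well above a suspicious one with 10,000 registrations and 50 active users, which is exactly the shape of the spam wave being discussed.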
This is an enormous undertaking (all the moderating) and we all appreciate it.
What’s the best way for beehaw users to support you and the other mods?
If you’re capable, probably working on Lemmy’s codebase to improve the mod and federation tools.
If not, donations would be appreciated
I know nothing about coding, so I went the donation route. It’s like when friends of mine had potluck dinners, but I couldn’t cook, so I just brought the beer lol
What sort of tools are lacking? Like, if there was a change you could make what would it be?
We have a list here https://discuss.online/post/12787 and in this thread by alyaza.
Maybe we should not federate by default but rather have admins send us an email to request federation, just like users have to send an email with a request to register?
Or not federate by default if the registration is open without requiring an email?
This would add friction, since it would create a lot of work for new instances: they’d need to manually request federation with every other instance. Besides, how would admins vet other instances?
One idea would be to allow any new instance to federate in one-directional mode at first (e.g., users on other instances can see public Beehaw posts). After a waiting period, they would automatically move to read-only bidirectional mode (our users can see the other instance’s posts, but its users can’t post here), and after another period, to full read-write federation.
This would give admins time to vet the instance during the observation/waiting period, and the level of trust would increase incrementally over time if admins take no blocking action.
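This staged-trust idea maps naturally onto a small state machine. A sketch, with stage names and waiting periods invented for illustration:

```python
from enum import Enum

class FederationStage(Enum):
    OUTBOUND_ONLY = 1  # they can read our public posts
    READ_ONLY = 2      # we also read theirs, but their users can't post here
    FULL = 3           # normal read-write federation

# Hypothetical waiting periods (days) before each automatic promotion.
PROMOTION_AFTER_DAYS = {
    FederationStage.OUTBOUND_ONLY: 14,
    FederationStage.READ_ONLY: 30,
}

def next_stage(stage: FederationStage, days_in_stage: int,
               blocked: bool) -> FederationStage:
    """Promote one level once the waiting period passes, unless admins intervened."""
    if blocked:
        return stage  # an admin block freezes the progression
    wait = PROMOTION_AFTER_DAYS.get(stage)
    if wait is not None and days_in_stage >= wait:
        return FederationStage(stage.value + 1)
    return stage
```

Promotion is automatic by default, so the admin workload is only the exceptions: instances that misbehave during the observation window.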
Glad to hear it, beehaw owns.
Begun, the bot wars have.
Thank y’all for being so proactive. I’ve been dismayed at the spike in Lemmy users over the past few days. The number of instances popping up with many thousands of accounts is suspicious.
Thank you again for working so hard to keep this space “clean”.
hey, my fedit.io instance is clear now. I deleted the entire db.
I really expected better from k6qw.com.
Side note, life has been so much better in my feed since defederating from the instances that were becoming furry/gonewild heavy. Thank you for your diligence.
Do you believe this will be a workable model going forward, or are you considering changing to an Allowlist for federation and have instances specifically request to federate?
we don’t think we need to switch to an allowlist (we actually haven’t been hit by this spam problem much, if at all, to this point), but we’d also really like better mod tools so we can flat-out rule out ever needing to switch to one
I can also see there being blocklists similar to uBlock Origin or other browser extensions. Except, it will be done at the instance level by admins.
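Subscribing to a shared blocklist could work much like uBlock Origin’s filter lists: fetch a published list, diff it against your current blocks, and let an admin confirm the additions. A sketch of just the merge step, with a hypothetical list format (one instance per line, `#` for comments):

```python
def merge_blocklist(current_blocks: set[str],
                    subscribed: list[str]) -> tuple[set[str], set[str]]:
    """Merge a subscribed blocklist into our own.

    Returns (updated blocks, newly added instances). Lines starting with
    '#' are comments, mirroring common filter-list formats.
    """
    incoming = set()
    for line in subscribed:
        line = line.strip().lower()
        if line and not line.startswith("#"):
            incoming.add(line)
    newly_added = incoming - current_blocks
    return current_blocks | newly_added, newly_added
```

Surfacing `newly_added` separately is the important design choice: admins review the diff instead of blindly trusting the list maintainer.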
An allowlist would be nice if non-allowlisted instances could still be discoverable by people using Beehaw, IMO.
And then users from inside Beehaw could request federation maybe?
I mean, anyone could send us an email and we’d do a quick review. Adding someone to the allowlist and then removing them later if they cause trouble is easy enough.
Glad to see we defederated from the rabbitears instance.
Thank you, sunaurus and Beehaw!
Good call… as our wonderful userbase across Lemmy gets bigger, so do our threats, unfortunately. Hopefully we can better combat spam and botting soon, because it’s hurting both big and small Lemmy servers.
This is what I would hope for. Bot nests are inevitably going to pop up, and measures need to be taken.
Individual instances can do everything in their power to keep their corner of things bot-free, but the nature of federation means bots will just create their own instances and strike from there. It’s going to be a long game of whack-a-mole, but hopefully they can keep the swarm at bay.