Hey fellow users of the Fediverse, instance admins and platform developers. I’d like to see better means of handling adult content on the Fediverse. I’d like to spread a bit of awareness and request your opinions and comments.
Summary: A more nuanced concept is a necessary step towards creating a more inclusive and empowering space for adolescents while also catering to adult content. I suggest extending the platforms with additional tools for instance admins, content-labels and user roles, taking into account the way the Fediverse is designed and the different jurisdictions involved, and shifting the responsibility to the right people. The concept of content-labels can also aid moderation in general.
The motivation:
We are currently disadvantaging adolescents and making life hard for instance admins. My main points:
- Our platforms shouldn’t only cater to adults. I don’t want to delve into providing a kids-safe space, because that’s a different use-case and a complex task. But there’s quite some space in-between. Young people also need places on the internet where they can connect, try things out and slowly grow and approach the adult world. I think we should be inclusive and empower that age group - let’s say 14-17 year olds. Currently we don’t care, and I’d like that to change. It’d also help parents, teachers and youth organizations.
- But the platform should also cater to adults. I’d like to be able to discuss adult topics. Since everything is mixed together, if I were for example to share my experience with adult stuff, it’d make me uncomfortable knowing kids are probably reading it. That restricts what I can do here.
- Requirements by legislation: Numerous states and countries are exploring age-verification requirements for the internet. In some places it’s already mandatory but can’t be met with our current design.
- Big platforms and porn sites have means to circumvent that: money and lawyers. It’s considerably more difficult for our admins. I’m pretty sure they’d prosecute me at some point if I tried to do the same. I don’t see how I could legally run my own instance at all without overly restricting it with the tools currently available to me.
Some laws and proposals
- USA: CIPA, COPA, state legislation.
- EU: European Strategy for a Better Internet for Children, national legislation, Digital Services Act
- UK: The Online Safety Bill, the proposed UK Internet age-verification system, web blocking in the United Kingdom.
Why the Fediverse?
The Fediverse strives to be a nice space. A better place than just a copy of the established platforms including their issues. We should and can do better. We generally care for people and want to be inclusive. We should include adolescents and empower/support them, too.
I’d argue it’s easy to do. The Fediverse provides some unique advantages. And currently the alternative is to lock down an instance, overblock and rigorously defederate. Which isn’t great.
How?
There are a few design parameters:
- We don’t want to restrict other users’ use-cases in the process.
- The Fediverse connects people across very different jurisdictions. There is no one-size-fits-all solution.
- We can’t tackle an impossibly big task. But that shouldn’t keep us from doing anything. My suggestion is not to aim for a perfect solution and fail in the process, but to implement something that is considerably better than the current situation. It doesn’t need to be perfect and water-tight to be a big step in the right direction and of real benefit to all users.
With that in mind, my proposal is to extend the platforms to provide additional tools to the individual instance admins.
Due to (1) not restricting users, the default instance setting should be to allow all content. The status quo is unchanged, we only offer optional means to the instance admins to tie down the place if they deem appropriate. And this is a federated platform. We can have instances that cater to adults and some that also cater to young people in parallel. This would extend the Fediverse, not make it smaller.
Because of (2) the different jurisdictions, the responsibility has to be with the individual instance admins. They have to comply with their legislation, they know what is allowed and they probably also know what kind of users they like to target with their instance. So we just give a configurable solution to them without assuming or enforcing too much.
Age-verification is hard. Practically impossible. The responsibility for it has to be delegated and handled on an instance level. We should stick to attaching roles to users and have the individual instance deal with it, coming up with its own way for people to attain these roles. Some suggestions: pull the role “adult” from OAuth/LDAP. Give the role to all logged-in users. Have admins and moderators assign the roles.
The current solution for example implemented by LemmyNSFW is to preface the website with a popup “Are you 18?.. Yes/No”. I’d argue this is a joke and entirely ineffective. We can skip a workaround like that, as it doesn’t comply with what is mandated in lots of countries. We’re exactly as well off with or without that popup in my country. And it’s redundant. We already have NSFW on the level of individual posts. And we can do better anyways. (Also: “NSFW” and “adult content” aren’t the same thing.)
I think the current situation with LemmyNSFW, which is blocked by most big instances, showcases that the current tools don’t work properly. The situation as it is leads to defederation.
Filtering and block-listing only work if people put in the effort and tag all the content. It’s probably wishful thinking that this becomes the standard and happens to a satisfactory level. We probably also need allow-listing to compensate: allow-list certain instances and communities that are known to only contain appropriate content, and differentiate between communities that do a good job and reliably provide content labels. Allow-listing would switch the filtering around and let authorized (adult) users bypass the list. There is an option to extend upon this at a later point to approach something like a safe space in certain scenarios, whether for kids or for adults who prefer safe spaces.
Technical implementation:
- Attach roles to user accounts so they can later be matched to content labels. (ActivityPub actors)
- Attach labeling to individual messages. (ActivityPub objects)
This isn’t necessarily a 1:1 relation. A simple “18+” category and a matching flag for the user account would be better than nothing. But legislation varies on what’s appropriate. Ultimately I’d like to see more nuanced content categories and have the instance match which user group can access which content. A set of labels for content would also be useful for other moderation purposes. Currently we’re only able to delete content or leave it there. But the same concept could also flag “fake news”, “conspiracy theories” or “trolling” and let the user decide whether they want that displayed. Currently this is up to the moderators, and they’re only given two choices.
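To make the two attachment points concrete, here is a rough sketch of what a labeled message and a role-carrying actor could look like. ActivityPub defines neither a `contentLabels` nor a `roles` property; both names and the example namespace are assumptions for illustration only, attached as extension fields the way ActivityStreams allows.

```python
# Sketch: hypothetical extension fields on ActivityPub documents.
# "contentLabels" and "roles" are NOT part of the spec; names are assumptions.

note = {
    "@context": [
        "https://www.w3.org/ns/activitystreams",
        {"contentLabels": "https://example.org/ns#contentLabels"},  # hypothetical namespace
    ],
    "type": "Note",
    "content": "Frank discussion of an adult topic...",
    "contentLabels": ["nudity"],  # set by the author or a moderator
}

actor = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Person",
    "preferredUsername": "alice",
    # roles live on the local instance; they would normally not be federated
    "roles": ["adult"],
}

def needs_gating(obj: dict) -> bool:
    """A receiving instance checks labeled objects against viewer roles before display."""
    return bool(obj.get("contentLabels"))

print(needs_gating(note))  # labeled content gets matched against roles
```

The point of keeping labels on objects and roles on local accounts is that the labels federate with the content, while each instance stays free to decide how its users earn the roles.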
For the specific categories we can have a look at existing legislation. Some examples might include: “nudity”, “pornography”, “gambling”, “extremism”, “drugs”, “self-harm”, “hate”, “gore”, “malware/phishing”. I’d like to refrain from vague categories such as “offensive language”. That just leads to further complications when applying it. Categories should be somewhat uncontroversial, comprehensible to the average moderator and cross some threshold appropriate to this task.
These categories need to be a well-defined set to be useful. And the admins need a tool to map them to user roles (age groups). I’d also allow users to filter out categories on top; if they don’t like hate, trolling and the like, they can choose to hide it. Moderators also get another tool besides the ban hammer for more nuanced content moderation.
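The layering described above - an instance-wide role→label mapping, with a personal opt-out on top - could look roughly like this. All names here are illustrative assumptions, not an existing Lemmy API:

```python
# Sketch of layered filtering: the instance's role->label mapping decides what
# a user MAY see; a personal opt-out list further narrows what they DO see.

INSTANCE_MAP = {  # role -> labels that role unlocks (configured by the admin)
    "minor": set(),
    "adult": {"nudity", "pornography", "gore"},
}

def visible(labels, roles, user_opt_out):
    labels = set(labels)
    # union of everything the user's roles unlock
    unlocked = set().union(*(INSTANCE_MAP.get(r, set()) for r in roles))
    if not labels <= unlocked:  # instance policy: every label must be unlocked
        return False
    # personal preference on top of instance policy
    return not (labels & set(user_opt_out))

# An adult who opted out of "gore" sees nudity-labeled posts but not gore.
print(visible({"nudity"}, {"adult"}, {"gore"}))  # True
print(visible({"gore"}, {"adult"}, {"gore"}))    # False
```

Note that the instance mapping can only restrict, never widen, so an admin catering to minors stays in control regardless of user preferences.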
- Instance settings should include: show all content; (blur/spoiler content;) restrict content for non-logged-in users; hide content entirely from the instance; plus the user-group <-> content-label mappings.
- Add the handling of user-groups and the mapping to content-labels to the admin interface.
- Add the content-labels to the UI so the users can flag their content.
- Add the content-labels to the moderation tools.
- Implement allow-listing of instances and communities in a separate task/milestone.
- We should anticipate age-verification becoming mandatory in more and more places. Other software projects might pick up on it or need to implement it, too. This solution should tie into that, so make it extensible. I’d like to pull user groups from SSO, OAuth, OIDC, LDAP or whatever provides user roles and is supported as an authentication/authorization backend.
Caveats:
- It’s a voluntary effort. People might not participate enough to make it useful. If most content doesn’t include the appropriate labels, block-listing might prove ineffective. That remains to be seen. Maybe we need to implement allow-listing first.
- There will be some dispute, categories are a simplification and people have different judgment on exact boundaries. I think this proposal tries to compensate for some of this and tries not to oversimplify things. Also I believe most of society roughly agrees on enough of the underlying ethics.
- Filtering content isn’t great and can be abused. But it is a necessary tool if we want something like this.
🅭🄍 This text is licensed “No Rights Reserved”, CC0 1.0: This work has been marked as dedicated to the public domain.
…and you think 14-17 year olds won’t circumvent this in mere seconds? Like, they’d just sign up at an instance that doesn’t implement these labels, or doesn’t care about them, or use their parents’ accounts, or ask them, or an older friend, to sign them up, and so on. Even if age verification were widespread and legally mandated, I highly doubt any sufficiently determined 14-17 year old would have any trouble getting past it.
That is part of my idea. I don’t think it needs to be water-tight or impossible to circumvent. My personal opinion is that if a 16 yo really wants to watch porn or something, and they put in the effort to circumvent something a bit more elaborate than just clicking “Yes” on a popup… they should be allowed to see it.
But that’s just my opinion. And I’m not really concerned with what other instances do. It’s enough if it enables me and a few other people to have my instance how I like and invite people to my instance without worrying too much. I mean my own server is also the only one I’m held responsible for. As far as I’m concerned other people can do what they like.
And it’s kind of pointless to try. Kids don’t need Lemmy or the Fediverse to watch adult content. They can just go to Pornhub and click yes. So I’m already in a position where I don’t care about other domains. But I’d like to keep my own Website and Minetest server clean. And also potentially offer some more services and at least do my best to do it right there.
There’s a very easy solution that lets you rest easy that your instance is how you want it to be: don’t do open registration. Vet the people you invite, and job done. If you want to be even safer, don’t post publicly - followers only. If you require follower approval, you can do some basic checks to see that whoever sends a follow request is someone you’re okay interacting with. This works on the microblogging side of the Fediverse quite well, today.
What I’m trying to say is that requiring admin approval for registrations gets you 99% of the way there, without needing anything more complex than that.
Sure. But what about federation? Arbitrary content is pulled from other instances. And my users are confronted with that content, too. Not only with each other. I’d need to also disable federation. (Or am I missing something?)
I think at that point I’d be better off installing Discourse or Flarum. And I’ve changed the whole vision of my instance. I’ve started with envisioning a federated platform that simultaneously can cater to adults and adolescents. And now I’ve locked it down to just cater to the few adolescents I directly invite, done away with the federation aspect and also cancelled all the appeal to adults. I think there’s not much left of what I’d like.
deleted by creator
Meh. Since you’re here… How is Friendica? Should I try that? I read it’s focusing on privacy and being a nice place, has communities and distributed forums, “relationship control” and add-ons.
On paper it looks like it has many more features to offer than, for example, Lemmy. I’d be interested in the distributed forum aspect. Do the added features tie into every aspect of the platform? Or is it mainly microblogging with a basic forum added on top? And when participating for example in this Lemmy discussion… Is it a smooth experience, or can you tell you’ve left Friendica and only have basic functionality here? (I mean I can tell from over here that you’re from a different platform, since it includes the @ user mentions like Mastodon does. And I’ve tried Mastodon and I think it’s not really a great experience interacting with Lemmy and KBin communities. Some of the structure of the threads gets lost in the process and comments from other branches of the discussion don’t show up.)
If you believe there’s unsatisfied demand for fediverse instances designed specifically for the needs of “14-17 yo people” maybe go ahead and create one?
That is kind of my point. I’d like to be able to do that. But Lemmy’s design/feature set prevents me from being able to do it.
Ah, well it wasn’t even clear to me that you were talking specifically about lemmy. Other software already has some of the features you want such as federating only with a chosen few.
It seems the main thing you actually propose is a new system for categorizing potentially undesirable content. I would suggest looking at what has already been tried in that area. There’s more to it than you’d suspect. How to choose the categories will never be uncontroversial, and coming up with a standard that any large fraction of the fediverse might accept would take some actual work.
I’ll research that. If you have some specific examples for federated platforms that provide more than a simple blocking or allowing on a domain level, feel free to drop me a hint to get me in the right direction. I’m not really a social media person so I have limited experience. I didn’t specify a platform. My initial motivation was linked to Lemmy as I really like this platform type. But I also like and use Peertube, maybe others.
Never having been an instance admin I don’t really know the details but I’m pretty sure it’s possible to configure Akkoma for example to federate only with specified domains. It has plenty of other features that may also be of interest. I like the “bubble” feature for example.
So what’s your opinion or alternative solution? I can see filtering being a controversial topic since it has been applied to restrict users.
My own motivation is: I want to run my own small federated instances. I’d like to invite friends and family and also maybe random people. And I want to like and use my instance myself, not lock it down for everyone and defederate too much. Currently Lemmy prevents me from doing that. I’d either need to close registrations and exclude minors, make sure my family also likes the communities I like to visit… Or be comfortable with potentially breaking the law or ending up with a severely limited instance I don’t like myself.
If it were possible to invite everyone while not blocking content, and I’d have some means of unlocking accounts… It’d allow me to provide service to everyone. And with this proposal it wouldn’t necessarily change anything for people on other instances.
Also it would allow me to swear and discuss adult topics in dedicated communities without taking into account whether that’s appropriate. That’s probably more of a concern for people who do more than just discuss politics here… But that’s some additional benefit.
Removed by mod
define “weird” content…
NSFW is already off by default when you sign up to most instances, and that blocks most porn. It should also be a user choice what they want to see, which is why I’m also opposed to defederation.
Why do you think porn is bad/unsuitable for 14-17 year olds?
NSFW is already off by default when you sign up to most instances
I don’t think this works in practice. Most big instances have gone the extra step to also defederate from the two major porn instances here. Showcasing that there are additional issues, otherwise they’d just have used this instead. I took a quick random sample of the biggest Lemmy instances and ~50-60% additionally block them entirely.
NSFW and 18+ aren’t the same thing. The NSFW tag is made for a slightly different purpose, and it’s a crutch that doesn’t work well for this one. There are some slightly vulgar topics that shouldn’t inadvertently pop up at your workplace but might be safe for minors to consume. Also, I think minors should have access to sex education. Wikipedia has a similar stance; there are videos of “the act” on Wikimedia. You shouldn’t watch them while sitting in your open-plan office. But I think, especially given the situation of sex ed in the USA, adolescents should get a chance to ask their questions and learn something about important aspects of life. The NSFW tag as-is does them a disservice, because now they can’t. Or everything else immediately gets mixed in. For example I’m not comfortable sharing my experiences >!sticking bluetooth-enabled things into somebody’s bum!< with kids. Or having sex ed and hardcore fetish stuff in the same category.
And I mean it’s not even just that. Gore and pictures of dead bodies in the Ukraine war also fall into the same category. So everyone just gets a yes/no decision on everything ranging from sex education to gore. In practice both these extremes aren’t very common on Lemmy. But in theory it’s just like that. (It’s not entirely theoretical. We just have a different community here. But there are examples in the wild. For example 4chan mixes pretty tame porn with fetish with crime, gore and death.)
So in summary the current state of (mis)using the NSFW tag actively leads to defederation and it’s doing a disservice to both people who participate in adult conversations and also adolescents. And its overly simplistic design prevents some conversations which should be allowed.
Why do you think porn is bad/unsuitable for 14-17 year olds?
My opinion doesn’t really count here. There are legal requirements people need to implement, whether they like it or not. So that’s kind of already the end of this conversation.
I think it makes a difference if you watch (somewhat tasteful) plain sex, or >!somebody dangling from a hook in the ceiling getting whipped by a disguised old man!<. I think it’s just not the same category. And we shouldn’t treat it as such. Similarly it’s also not the same if you deliberately explore that when you’re 17. Or you’re inadvertently exposed to it when 12 while researching what sex education has failed to provide you with.
And there’s the aspect of me inviting friends and family to my self-hosted services. Or discussing Linux server administration with people. I don’t want to mix either of that with porn. I think having it hidden per default is a good first step. And just requiring an extra, deliberate step to enable it is a good design. It just lacks any of the nuances to it, mingles valid use-cases with filtering that is made to do something else, and as I pointed out with the defederation happening, it comes with issues in practice.
And I think the Fediverse offers some important advantages over other platforms. ActivityPub is very vague. We can just attach fields to label content and the technical aspect is kind of simple to implement. And with federation, we have diversity built-in to the platform. This is our unique advantage. People have different use-cases, different moderation needs and perspectives and opinions on something like my proposal with the filtering. And I think the Fediverse turns out to be made exactly for something like this. I could have my instance my way and someone else can have a different opinion and have their instance another way.
But it requires some coordination effort. We need to agree on a foundation and some technical aspects. I don’t think a crazy rag rug works as a whole. And we already see some consequences of other disputes: moderation being a constant issue in the background, and instances separating from each other because there’s no nuance to moderation and defederation. Ultimately we want to talk to each other and connect. And provide everyone with a place they like.
deleted by creator
Sure. For me it’s the other way around. I’ve never really fallen in love with microblogging. My hobbies are kind of mixed and sometimes niche, I sometimes don’t have anything of substance to post from my everyday life, and I really disliked the mob mentality and regularly surfacing toxicity in places like Twitter. At some point I tried Reddit and got hooked. It’s a very different approach whether you follow people or topics/communities. It’s less about who you are, and more a marketplace of ideas. A level playing field, sorted by ideas and hobbies, and you can just dip in. It also lets you target different audiences for different niche hobbies. And everything gets ordered like that. I mean, you also have hashtags on Mastodon, but it’s not really designed around this concept.
It really has some appeal to me. I also sometimes participate in web forums and found a similar structure in this. And I always liked how the free software community is supposed to work. It doesn’t matter who you are, if you’re 15 or a 40 yo woman… You just all come to the same place and discuss your ideas and perspective on things.
It does have downsides. And it doesn’t necessarily foster good behaviour and being nice to people. I don’t have a final opinion on this. I think encouraging good behaviour in discussions requires some degree of ‘it matters who you are’, because having an image stick to you incentivizes you to behave properly. It’s not that big of an issue in practice; the overwhelming majority of people are nice and use the platforms to everyone’s benefit, not to troll and cause trouble.
Thanks for your input. I’ve come to the conclusion that maybe I need to broaden my perspective and have a closer look at other places in the Fediverse. I’m pretty sure Mastodon ain’t it for me. But Friendica might be a good place to start. I’ve also had Akkoma recommended to me in this discussion. Maybe some software other than Lemmy is more closely aligned to my vision of what I’d like to run on my server. I’d like to stay compatible with Lemmy, since there are lots of nice people here and it’s usually fun to talk here, more so than in some other places.
A completely flexible tagging system sounds like a lot of work. But adding an extra checkbox for NSFL alongside NSFW would be pretty easy… PieFed already does this and federates it on Activities using a “nsfl” attribute.
Thanks! At the time of writing, I wasn’t aware of the existence of PieFed and Sublinks. I read some of the PieFed blog posts today. Seems the author has some really good ideas on how to address the shortcomings of the current approach. (Or what I view as shortcomings.) Splitting NSFW and NSFL is a really good start. Implementing better moderation tools is also a regularly requested feature. And judging by the other articles they mention, the project is more closely aligned to my vision of a welcoming and inclusive platform. I’ll definitely keep an eye on it. Hope it approaches a usable state soon.
I guess I have my answer, here. I’ll wait until PieFed comes along and then use that. I’m somewhat optimistic about their claims. And if not, they included extensibility.
deleted by creator
Wouldn’t you find exactly the same stuff on porn websites ?
Yes. And I think it’s bad practice. We should strive to be better than the average porn site.
How would you do otherwise while preserving user privacy?
I think there are two issues at play:
- It’s a complex task. Usually that leads to people saying “we can never achieve 100%” and “it doesn’t fit every purpose”, and then nothing gets done. I’d argue this gets us like 40% of the way, and that’s better than nothing. And it’d get me all the way, and probably a few other people, too.
- I think verification should be delegated to the instances. There isn’t a single solution. In some jurisdictions it might be enough that people claim to be 18; those admins can choose a really simple solution. Other admins might not care, or might cater to minors; they can simply leave the filters off. A compromise might be requiring signup. That’d hide content from kids who aren’t logged in and are just browsing the web, which is already far better than just displaying it to them. What I’d like to do is have users request access and handle that manually, like some Discord servers and other software do. I know a few people I’d like to invite, and their ages, so it’d be no problem to unlock their accounts. I think it’s the same for other communities. And usually “eyeballing it” also works to some degree; it might be a valid approach for some admins. I know from experience you can often tell whether your opponent in a computer game, or the person you’re arguing with, is a 13 year old kid or 35. It’s not perfect, but it surely does a decent job with the extremes.
I’d like to abstain from the privacy-breaching methods in use by big tech companies like Google. Requiring phone numbers on signup or holding your ID up to the camera is too much. And it’s bad. I don’t want to tell the admins how to handle verification. If they’re required to see IDs, or would like to, and their users are comfortable with it, the extensibility I’ve included in the requirements lets them do that. Maybe we’re provided with a better solution in the near future: my German ID card can already vouch for my age without revealing my identity. It’s a zero-knowledge proof and the proper technical solution to age-verification. I can also envision some “Web of Trust” providing this, something like PGP or CAcert does.
I think there are some valid ideas and some technical solutions are already out there and available. The issue is just nobody uses them. And neither do we.
Also, how do you avoid falling in the reddit trap where every discussion vaguely about sexuality end-up being 18+
That is a good question. These categories need to be concise, and the people ticking the boxes need to comprehend the meaning and consequences. I think moderators will; with the users, I’m not sure. I don’t think I had that issue on Reddit. A year ago, when I was still there, I occasionally replied to people on relationship_advice and some more explicit subreddits and didn’t see any problems. But I haven’t been a heavy user; maybe I didn’t pay attention. I’ll listen if this is deemed a likely scenario or proves to happen in practice.
deleted by creator