You know, I’ve had an idea fermenting for some time now about how content moderation at scale might work. I have no idea whether it’s feasible, nor do I have the technical expertise to bring it to fruition, but I think the following points suggest that content moderation at scale is possible:
- The 90-9-1 rule doesn’t just apply to lurking and commenting on websites; it’s about participation across many facets. It covers who creates videos, who volunteers to moderate, really all aspects of user interaction
- People like to feel included and useful in communities, and they contribute in ways that work for them. For some this is money, for some it’s the creation of art; some socialize, some connect, some offer goods and services, some trade, etc.
- Moderating content doesn’t have to be so centralized. The final call on moderation doesn’t necessarily have to rest with a single individual; it can be a crowd-sourced decision (it often already is, with groups of moderators discussing the more nuanced or important issues).
- In-person content moderation, or communities policing behavior amongst themselves, often involves a lot of talking and a spread-out reaction to an incident or incidents. Being a bad person in a small town might carry many minor negative social consequences.
When all of this combines, it makes you wonder whether content moderation couldn’t work more like how a small town deals with a problematic individual: lots of small interactions with that person, with some people helping, others chastising, some educating, and their actions being more closely watched. How does this translate to a digital environment? That’s the part I’m still trying to figure out. Perhaps problematic comments can be flagged by other users, as in existing systems, but then fall into a queue where regular community members vote on how appropriate they were, with some kind of credit system (perhaps influenced by how much those people contribute or receive positive feedback in that particular community) determining the outcome. As it is, many of the conversational parts of this community feedback already happen: people argue with or push back against problematic users, and others try to educate or help them. A system might even link problematic individuals up with appropriate self-flagged educators who can talk with them directly to help them learn and grow. Honestly, I don’t know all the specifics, but I think it’s interesting to think about.
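The credit-weighted voting idea could be sketched in code, very roughly, like this. To be clear, this is a toy illustration: `Vote`, `resolve_flag`, and the 0.5 threshold are all my assumptions, not an existing system or API.

```python
# Toy sketch of credit-weighted community voting on a flagged comment.
# Names and thresholds are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Vote:
    voter_credit: float   # earned via contributions / positive feedback in this community
    appropriate: bool     # did this voter consider the flagged comment appropriate?

def resolve_flag(votes: list[Vote], threshold: float = 0.5) -> str:
    """Weight each vote by the voter's community credit and decide the outcome."""
    total = sum(v.voter_credit for v in votes)
    if total == 0:
        return "needs_more_votes"   # no votes yet, leave it in the queue
    support = sum(v.voter_credit for v in votes if v.appropriate)
    return "keep" if support / total >= threshold else "remove"
```

The interesting design question is where the credit numbers come from; weighting by in-community contribution is one guess at keeping brigading from outside the community cheap to ignore.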
One thing I think is interesting is how tildes.net is planning to handle moderation.
Basically, they give you broad powers initially and take them away if you show you can’t be trusted. So if you report a user and it’s a bad-faith report, they can ding you. If you keep making bad-faith reports, over time you lose the ability to create reports at all.
By contrast, if you consistently make good reports and they’re usually acted upon, you become “trusted” over time, and your reports may cause content to be removed as soon as you file them. (And of course, if a moderator restores a post that you got removed, that counts as a ding against you.)
Over time, trusted users get hand-picked to become moderators. This risks creating “power users”, of course, but a moderator who acts in bad faith becomes less trusted over time and can eventually lose their privileges. The thinking is that the risk of power users is smaller than the harm of an unmoderated community.
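The trust mechanics described here could look something like the following. This is purely a sketch of the concept; the class name, increments, and thresholds are my assumptions, not how tildes.net actually implements (or plans to implement) anything.

```python
# Hypothetical sketch of a reporter-trust system: good reports raise trust,
# bad-faith or reversed reports lower it. All numbers are made up.
class Reporter:
    def __init__(self) -> None:
        self.trust = 1.0

    def report_actioned(self) -> None:
        self.trust = min(self.trust + 0.1, 2.0)   # good report: gain trust, capped

    def report_rejected(self) -> None:
        self.trust = max(self.trust - 0.25, 0.0)  # bad-faith report, or a
                                                  # moderator restored the post

    @property
    def can_report(self) -> bool:
        return self.trust > 0.0    # too many bad reports: reporting ability removed

    @property
    def reports_auto_actioned(self) -> bool:
        return self.trust >= 2.0   # highly trusted: content removed on report
```

Note the asymmetry: trust is lost faster than it is gained, which is one plausible way to make bad-faith reporting expensive.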
(unfortunately) we’re actually very familiar with how tildes wants/wanted to do things! most of our original core are, you could say, disenchanted former members of the site who didn’t like the direction it was going and wished the trust system would actually be worked on (to my knowledge it’s still entirely conceptual, which is how it was 3 years ago when i was using the site).
It still seems conceptual, yeah. AFAIK it isn’t implemented over there yet.
I just noticed it when I was reading up on the site after it got recommended to me on Reddit. I like the concept of the place, but my turn-off is that they don’t allow cute cat pictures or memes. Most of my Reddit day is spent looking at cute cats and sending memes to my fiance.
They’ve talked about that for years, yet they’ve also slowly become more and more rationalist, and Deimos has withdrawn from interacting with the website more and more over time. Their ethos is part of where my thoughts come from, but until they actually take it seriously (or even fight the slowly encroaching rationalism which pushed minority voices off their website), I can’t in good conscience put any stock in their site.
@EnglishMobster @Gaywallet I think this is an approach that could actually work, since more trusted people essentially have more power. And as long as the people in power don’t radically change their opinions, there would be very little potential for abuse. Especially if moderator actions, for example, had to go through review, meaning that if a moderator decides to do something, another moderator has to sign off on it.
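The sign-off idea amounts to a small two-party approval protocol; a minimal sketch, with all names (`PendingAction`, `sign_off`) being hypothetical, might look like:

```python
# Toy sketch of two-moderator sign-off: an action only takes effect once a
# *different* moderator approves it. Names are illustrative assumptions.
class PendingAction:
    def __init__(self, proposer: str, action: str) -> None:
        self.proposer = proposer
        self.action = action
        self.approved = False

    def sign_off(self, reviewer: str) -> bool:
        """Approve the action, unless the reviewer is the proposer themselves."""
        if reviewer == self.proposer:
            return False   # proposers cannot review their own actions
        self.approved = True
        return True
```

The self-review check is the whole point: no single moderator can unilaterally push an action through.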
@EnglishMobster @Gaywallet Some years back I was moderating GMod communities and we had a similar power structure. You would initially apply publicly (in a forum) for a moderation role, where everyone could comment on their prior experiences with you, and if the response was positive you would become a Trial Moderator, coached by a senior member of staff. If you did well, you would be promoted to higher roles, eventually teaching new staff yourself.
Content moderation at scale is impossible to do well. Importantly, this is not an argument that we should throw up our hands and do nothing. Nor is it an argument that companies can’t do better jobs within their own content moderation efforts. But I do think there’s a huge problem in that many people — including many politicians and journalists — seem to expect that these companies not only can, but should, strive for a level of content moderation that is simply impossible to reach.
anyone who has tried to moderate an online community of any scale can relate to this, especially when looking at the downfall of traditional social media sites like facebook and twitter.
in the early days here moderating was kind of a nightmare like on those sites, because we’d semi-frequently get spammed by neo-nazis and trolls who posted all sorts of heinous stuff. we’d also get a ton of spam. (it’s part of why we don’t have open registrations.) just so much work for dubious payoff. luckily it’s worked out in the end here
Fedi FTW! :-D \o/
Soon they’ll realise this applies to governments too.