• tarknassus@lemmy.world
    3 months ago

    The issue is more a loss of contextual awareness. For example, why should a true crime documentary have to self-censor words like “rape”, “died”, or “sexual assault” when they are often relevant to the story being told? Why should a blogger discussing mental health have to self-censor the word “suicide”?

    These are valid issues. There’s a reply in this thread with a screenshot of someone saying “elon musk can fucking die”. Context tells you immediately that it’s not a targeted death threat; it’s an opinion. Yet the automated systems these platforms rely on cannot make that distinction.

    4chan existed well before these attempts to sanitize the internet. Heck, I remember the biker forum I frequented having some nasty shit and attitudes on there. But despite their scummy viewpoints, these were people I could rely on when my motorbike shat itself.

    Smaller communities police themselves better as well. Large-scale social media and other platforms just made it much harder to sustain the moderator model those forums had. The human touch is sorely lacking, and the automated processes lack nuance and context. A modern form of the Scunthorpe problem, I guess.
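
    To make the Scunthorpe point concrete, below is a rough sketch of the kind of context-free substring filter I mean; the blocklist and example sentences are made up, not taken from any real platform:

    ```python
    # Naive substring filter: the kind of context-free check that produces
    # Scunthorpe-style false positives. Blocklist and sentences are invented.
    BLOCKLIST = ["die", "rape", "assault"]

    def naive_filter(text: str) -> bool:
        lowered = text.lower()
        return any(word in lowered for word in BLOCKLIST)

    # False positives: innocent words contain blocked substrings.
    print(naive_filter("The grape harvest was excellent"))      # True ("grape")
    print(naive_filter("My therapist recommended this forum"))  # True ("therapist")

    # Even a legitimate use gets flagged: a true crime documentary
    # describing an assault reads the same as a threat to this check.
    print(naive_filter("The documentary covers the 1989 assault case"))  # True
    ```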

    • Incogni@lemmy.world
      3 months ago

      But I am not arguing for corporate sanitization and algorithm-based word filters. I also agree with you that we are in dire need of smaller communities with a human touch. I am merely arguing against anyone who “just” wants to normalize hyperviolent comments, because introducing that to the current large-scale, algorithm-moderated communities will not fix the above issues but will backfire spectacularly, since it enables bigots.

      OP saw all these problems, but chose the wrong solution. We need smaller, closer communities instead.