• Incogni@lemmy.world · 3 months ago

    Fine by me.

    To add to my original point: What OP says against corporate sanitization is true (e.g. weird YouTube monetization rules), but it’s also the exact same thing bigots say because they want to call people slurs. And I don’t want to give them what they want.

    • tarknassus@lemmy.world · 3 months ago

      The issue is more a loss of contextual awareness. For example: why should a true crime documentary have to self-censor words like “rape”, “died” or “sexual assault” when they are often relevant to the story being told? Why should a blogger talking about mental health issues have to self-censor the word “suicide”?

      These are valid issues. There’s a reply in here with a screenshot of someone saying “elon musk can fucking die”. Context tells you immediately it’s not a targeted death threat, it’s an opinion. Yet the systems these platforms rely on cannot make that distinction.

      4chan existed long before these attempts to sanitize the internet. Heck, I remember the biker forum I frequented having some nasty shit and attitudes on it. But despite their scummy viewpoints, these were people I could rely on when my motorbike shat itself.

      Smaller communities also police themselves better. Large-scale social media and other platforms just made it much harder to sustain the moderator model those forums had. The human touch is sorely lacking, and the automated processes lack nuance and context. A modern form of the Scunthorpe problem, I guess.
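
      A minimal sketch of the kind of context-blind substring filter being described, purely to illustrate the Scunthorpe problem (the blocklist entries, function name and sample strings are hypothetical, not any platform’s actual filter):

          # Naive substring blocklist: matches raw substrings with no regard
          # for word boundaries or context, the root of the Scunthorpe problem.
          BLOCKLIST = ["ass", "die"]  # illustrative entries only

          def naive_filter(text: str) -> list[str]:
              """Return every blocklist entry that appears as a raw substring."""
              lowered = text.lower()
              return [word for word in BLOCKLIST if word in lowered]

          # False positives: innocent words contain the blocked substrings.
          print(naive_filter("A classic true crime documentary"))  # ['ass'] (from "classic")
          print(naive_filter("The soldier kept a diary"))          # ['die'] (from "soldier")
          # While hostile phrasing that avoids the exact strings slips straight through.
          print(naive_filter("unalive yourself"))                  # []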

      • Incogni@lemmy.world · edited · 3 months ago

        But I am not arguing for corporate sanitization and algorithm-based word filters. I am also with you that we are in dire need of smaller communities with a human touch. I am merely arguing against anyone who “just” wants to normalize hyper-violent comments. Because introducing that into the current large-scale, algorithm-monitored communities will not fix the above issues, it will backfire spectacularly, because it enables bigots.

        OP saw all these problems, but chose the wrong solution. We need smaller, closer communities instead.

    • Valmond@lemmy.world · 3 months ago

      So you think slurs should be banned instead of bad behaviour?

      If I can’t ask what a TERF is or discuss it, that’s stupid IMO. Weaseling in racism with ordinary words is, you know, bad IMO.

      • Incogni@lemmy.world · 3 months ago

        No? Where did I say that? Bad behaviour should be banned, no matter what.

        But in the current state (huge scale, algorithm-based moderation only) this already isn’t properly enforced. My problem is that if hyper-violent language no longer gets you banned, bigots will use that and get away with it too often. I’d rather not be able to call someone an asshole than get called 50 slurs and then watch the algorithm do nothing about it.

          • Valmond@lemmy.world · 3 months ago

          You’re saying it again, “oh no we must ban bad words!!”.

          Do you think hyper-violent language does not fall under bad behaviour if it isn’t in a transcript, for example? Should hyper-violent language be banned so that you can’t talk about it?

          You’re just digging in.

            • Incogni@lemmy.world · 3 months ago

            No, the nuance is that, in the current state of social media, as shitty as it is, a blanket unbanning of bad words would be a net negative.

            I do not think that bad words should be banned in general. In quotes, transcripts, etc. they should be fine. I know they currently aren’t, and I agree that’s a problem. And of course, in small, self-moderated communities bad words are fine if everyone there is ok with that.

            But OP isn’t arguing for that. They just argued that we should blindly add hyper-violent language back into the systems we currently have, and not only for transcriptions etc. And I think that will lead to a far worse outcome than the current status quo.

            Again, I do not think bad words used in a transcription, quote or discussion should be banned, because that’s ridiculous. But on large social media platforms they should stay banned when directed at individuals, because yes, calling Elon an asshole to his face is cathartic, but you are buying that freedom at the price of bigots calling everyone slurs. And I simply don’t think that’s worth it, nothing more.

            The issues we face with corporate social media can only be solved by forming smaller circles again. Not by adding slurs back into the mix and hoping they will only benefit you and won’t make the place a miserable hellhole.