This is a follow-up from my previous thread.

The thread discussed the question of why people tend to choose proprietary microblogging platforms (e.g. Bluesky or Threads) over the free and open source microblogging platform, Mastodon.

The reasons, as summarised by @noodlejetski@lemm.ee, are:

  1. marketing
  2. not having to pick the instance when registering
  3. people who have experienced Mastodon’s hermetic culture discouraging others from joining
  4. algorithms helping discover people and content to follow
  5. marketing

and I’m saying that as a firm Mastodon user and believer.

Now that we know why people move to proprietary microblogging platforms, we can also work out ways to counter it.

How do we get “normies” to adopt the Fediverse?

  • BeAware :fediverse:@social.beaware.live

    @dch82 first, “normies” have to not get harassed when they come here.

    Unfortunately, the biggest Fedi software refuses to add automated reporting of offensive posts, so if a post isn’t reported, the admins won’t even see it.

    People coming from corporate social media are used to ignoring the report button because, in their experience, it either doesn’t work or gets ignored by admins anyway.

    We need automated reporting (something like the sketch below).

    @fediverse
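
    A minimal sketch of the kind of hook being asked for, assuming a hypothetical server-side extension point. Nothing like this exists in Mastodon today; the hook name, report queue, and pattern list are all invented for illustration:

    ```python
    # Hypothetical server-side hook: file an ordinary report whenever a
    # new local post matches an admin-defined pattern list, so a human
    # moderator sees it even if nobody reports it.
    import re

    AUTO_REPORT_PATTERNS = [
        re.compile(r"\bexample_slur\b", re.IGNORECASE),  # admin-maintained list
    ]

    def on_status_created(status, report_queue):
        """Hypothetical hook called for each new post on the instance."""
        for pattern in AUTO_REPORT_PATTERNS:
            if pattern.search(status.text):
                # File a normal report flagged as automatic. No enforcement
                # happens here; the post lands in the mod queue for a human
                # to judge context and decide on action.
                report_queue.add(
                    target=status.account,
                    statuses=[status],
                    comment=f"auto-flag: matched /{pattern.pattern}/",
                )
                return
    ```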

    • Lost_My_Mind@lemmy.world

      We need automated reporting.

      I’m fine with auto REPORTING, but the actual moderation needs to be done by a human. Auto moderation is bad; it gets things wrong. It’s how I got banned from both Twitter (calm down, this was back in 2018, before it was an Elon-owned nazi cesspool) and Reddit.

      On Twitter, I saw a funny video someone had posted, and I replied, “Aw man, that killed me.”

      I was banned for “inciting death threats”.

        • P03 Locke@lemmy.dbzer0.com

          That’s the thing about automation and training models.

          First, they implement some sort of auto-reporting bot that requires a human to review its output. In the beginning it’s only about 50% accurate, but as the human reviews feed it more and more examples of good and bad results, it moves to 80%, then 90%, then 99%, then 99.99% accuracy.

          After a while, the humans on the other end are so numb from the 9,999 entries they have to mark as approved that they can barely spot a genuine rejection themselves, and the moderation team starts asking what the human review is actually for. If the bot is 99.99% accurate, why not let it decide?

          Then, the model moves on from auto-reporting to auto-moderation.

      • BeAware :fediverse:@social.beaware.live

        @AterNox @dch82 blocking and reporting work fine.

        However, people from corporate social media won’t report posts because, in their experience, reports either don’t get taken seriously or get ignored by admins; corporate social media sites don’t exactly act on them in a timely manner.

        I’m on my own instance; I moderate for myself. I don’t want slurs to exist on my instance at all. But if I don’t see them with my own eyes, I cannot ban the user.

        P.S. I’m talking about banning users who harass others, at the instance level. Blocking and reporting are user actions; I am an admin, and I run my own instance.

        @fediverse

        • cm0002@lemmy.world

          I’m confused, do you mean automated enforcement rules/algorithms like big SM has? I.e., if a user gets reported for breaking rule Y X number of times, ban the user for Z amount of time and forward the case to an admin for further action?
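
          For concreteness, a sketch of that style of automated enforcement rule, which (as the reply below clarifies) is not what’s being asked for; every threshold and function name here is invented:

          ```python
          # Automated *enforcement* in the style cm0002 describes:
          # "reported X times for rule Y -> ban for Z and escalate".
          # All names and thresholds are invented for illustration.
          from collections import Counter
          from datetime import timedelta

          REPORT_THRESHOLD = 3                  # X: reports for the same rule
          TEMP_BAN_LENGTH = timedelta(days=7)   # Z: automatic ban duration

          report_counts = Counter()             # (user, rule) -> report tally

          def on_report(user, rule, moderation):
              """Act on the user automatically once reports pile up."""
              report_counts[(user, rule)] += 1
              if report_counts[(user, rule)] >= REPORT_THRESHOLD:
                  moderation.temp_ban(user, duration=TEMP_BAN_LENGTH)  # hypothetical API
                  moderation.escalate(user, rule)  # forward to a human admin
          ```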

          • BeAware :fediverse:@social.beaware.live

            @cm0002 no, I want automated reports.

            A post using the n-word, full on with the hard R, isn’t gonna be a good post. It should be automatically reported to me so that I can judge context and take action (one way to approximate this is sketched below).

            If a user doesn’t report it, I won’t see it.

            I’m on my own instance; I am the user.

            If I don’t report it, nobody sees it.

            That’s dumb.

            @fediverse
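
            Even without such a hook in the server, an admin could approximate it with a small external bot. A rough sketch against Mastodon’s documented REST API; the endpoints are real, but the instance URL, token, and keyword list are placeholders:

            ```python
            # Poll the local timeline and file an ordinary report (for human
            # review) when a post matches a keyword list. Uses Mastodon's
            # GET /api/v1/timelines/public and POST /api/v1/reports.
            import re
            import requests

            INSTANCE = "https://social.example"  # placeholder instance URL
            TOKEN = "YOUR_TOKEN"                 # token with write:reports scope
            KEYWORDS = re.compile(r"\bexample_slur\b", re.IGNORECASE)  # your list

            HEADERS = {"Authorization": f"Bearer {TOKEN}"}

            def scan_local_timeline():
                resp = requests.get(
                    f"{INSTANCE}/api/v1/timelines/public",
                    params={"local": "true", "limit": 40},
                    headers=HEADERS,
                    timeout=30,
                )
                resp.raise_for_status()
                for status in resp.json():
                    if KEYWORDS.search(status["content"]):
                        file_report(status)

            def file_report(status):
                # Creates a normal report: the admin still judges context
                # and decides what, if anything, to do.
                requests.post(
                    f"{INSTANCE}/api/v1/reports",
                    headers=HEADERS,
                    data={
                        "account_id": status["account"]["id"],
                        "status_ids[]": status["id"],
                        "comment": "auto-flag: matched keyword list",
                    },
                    timeout=30,
                ).raise_for_status()

            if __name__ == "__main__":
                scan_local_timeline()
            ```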

            • cm0002@lemmy.world

              Ah, that makes sense now, and it is dumb. I can totally see why they would have issues with automated enforcement, but I don’t see why anyone would be against what you described lol

    • cy@fedicy.us.to

      We have instance-wide admin blocks, so the accounts that would be automatically reported can be blocked preemptively, no report needed. That can be both good and bad… but pick a sheltered instance and you shouldn’t get harassed. How would automatic reporting even work? I don’t recall exactly, but doesn’t the admin interface let you specify keywords in a post that alert the admins? Is that what you mean?

      CC: @dch82@lemmy.zip @fediverse@lemmy.world

    • osaerisxero@kbin.melroy.org

      I unironically think it would be easier to train users that the report button works now than it would be to get automated reporting implemented that was worth a damn.

    • zeppo@lemmy.world

      Definitely. Back when I used FB and Twitter, I learned that reporting is entirely useless. You just end up with an automated message about how they reviewed it and it “didn’t violate their community standards”, with some lame verbiage like “we realize this isn’t the outcome you were looking for”, regardless of how ridiculously blatant the thing you reported was. On the flip side, I was banned for clearly misinterpreted or brigaded comments, and an appeal just gives you the inverse: they reviewed it, whatever you posted was definitely terrible, and they “realize this isn’t the outcome you were looking for”.

    • ALostInquirer@lemm.ee

      By automated reporting do you mean something like filters on the backend to flag offensive posts per some custom settings?