Blog post by Christine Lemmer-Webber, co-editor of ActivityPub: https://dustycloud.org/blog/how-decentralized-is-bluesky/

The likely answer to this is that there will always have to be a large corporation at the heart of Bluesky/ATProto, and the network will have to rely on that corporation to do the work of abuse mitigation, particularly in terms of illegal content and spam. This may be a good enough solution for Bluesky’s purposes, but on the economics alone it’s going to be a centralized system that relies on trusting centralized authorities.

  • MudMan@fedia.io · 2 days ago

    Well, you are not people, then. But people did wait.

    The warnings were there from the start, and experts in sociology and communication had been warning about pretty much the full suite of effects since day one. Nobody listened, though. Mass media was fixated on the downsides of TV until two billion people were on Facebook, using their pictures to train facial recognition and being roped into misinformation-driven frenzies.

    And yes, I think the core mechanics of this stuff are inherent to massive, instant peer-to-peer communication. I don’t know you and you don’t know me, but we’ve had a long discussion about this because we disagree on it. The attention economy patterns are at play right here, with no algorithm, in a distributed network with no central owner. We have the same incentives and disincentives. It’s not min/maxed, but it’s not working fundamentally differently.

    Something people around here like to forget is that a bunch of that “Facebook incited genocide” stuff didn’t happen through algorithmically selected posts; it happened through WhatsApp and Facebook group chats that aren’t driven by their centralized engagement engines. The toxic patterns are built into the tech when applied en masse.

    • MentalEdge@sopuli.xyz · 2 days ago

      > The warnings were there from the start, and experts in sociology and communication had been warning about pretty much the full suite of effects since day one. Nobody listened, though. Mass media was fixated on the downsides of TV until two billion people were on Facebook, using their pictures to train facial recognition and being roped into misinformation-driven frenzies.

      I’ll quote my other comment here: Some things only change once every person who ever shared that thought is gone. That takes several hundred years, at least, if it happens at all.

      The stuff I think is at play here is the part of human collective consciousness that is really, really, really slow to change.

      > Something people around here like to forget is that a bunch of that “Facebook incited genocide” stuff didn’t happen through algorithmically selected posts; it happened through WhatsApp and Facebook group chats that aren’t driven by their centralized engagement engines. The toxic patterns are built into the tech when applied en masse.

      Not the tech. Us. This stuff happens because that is how humans work. It’s where the word “meme” comes from: how we conceive ideas, spread them, and then alter them as we spread them, optimizing the idea to spread as effectively as possible, to the point where it may no longer have anything in common with the original thought.

      > And yes, I think the core mechanics of this stuff are inherent to massive, instant peer-to-peer communication.

      This stuff happens using the very first form of communication we ever used as a species. Word of mouth. How in the world can it be inherent to mass media, except in the way it amplifies it?

      Overcoming our own flaws and the biological biases of our brains is one of the challenges we face as a species, and another trial that cannot be opted out of.