Plebbit is a pure peer-to-peer social media protocol: it has no central servers, no global admins, and no way to shut down communities, meaning true censorship resistance.

Unlike federated platforms such as Lemmy and Mastodon, there are no instances or servers to rely on.

This project was created to give control of communication and data back to the people.

Plebbit only hosts text. Images from Google and other sites can be linked/embedded in posts. This avoids the issue of hosting nefarious content.

ENS domains are used to name communities.

Plebbit currently offers several UIs: old Reddit, new Reddit, 4chan, and a blog. An app, plus Internet Archive, wiki, Twitter, and Lemmy UIs, are planned. Choice is important, and the backend/communities are shared across all clients.

Anyone can contribute, build their own client, and shape the ecosystem.

Important links:

Home

https://plebbit.com/home

App

https://plebbit.com/home#cb2a9c90-6f09-44b2-be03-75f543f9f5aa

FAQ

https://github.com/plebbit/whitepaper/blob/master/FAQ.md

Whitepapers

https://github.com/plebbit/whitepaper

https://github.com/plebbit/whitepaper/discussions/2

Github

https://github.com/plebbit

https://github.com/plebbit/plebbit-react

https://github.com/plebbit/plebbit-react/releases

https://github.com/plebbit/seedit

https://github.com/plebbit/seedit/releases

    • Tempy@programming.dev · 3 months ago (edited)

      Well from their site

      Moderation

      Since there are no global admins, the administrative control of a subplebbit rests solely with its creator. No one else can moderate content or accounts unless the subplebbit creator grants them permission.

      So, it’s not that there’s no moderation. It’s just controlled by the “subplebbit” creator/delegates, as there is no overarching site-wide company able to moderate it as a whole.

      It will mean, as a user, you’ll have to be liberal with removing subplebbits from your own feed though. I’m sure there will be some… not so pleasant subplebbits appearing.

    • MathGrunt@lemmy.dbzer0.com · 3 months ago

      I was there for the early days of Voat, which was an earlier version of a Reddit clone. Those first few months were nice, but it quickly devolved into a hate-filled N*zi safe haven.

      Fediverse has an early version of a bot problem, which is ubiquitous in all social media these days (thanks to state actors and also corpos that want you to think/feel a certain way). idk how the fediverse is handling the bot problem, but for now it seems to be less noticeable here than other places.

      If the platform is truly “free speech” without any moderation, then it will quickly devolve into worse than 4chan.

      Voat is gone now, which is a good thing, but if you have a programming background you can use an LLM to study the way it self-destructed by looking at the archive.org snippets. See how it quickly became theDonald, but worse.

      That is what will happen to this un-moderated decentralized social media protocol.

    • Plebbitor@lemmy.world (OP) · 3 months ago

      The communities moderate themselves with their own admins, just like on reddit. The difference is, there’s no global admins that can censor communities or enforce global rules. However, the plebbit app developer can basically act like a global admin by blacklisting connections to certain communities. I predict the most popular plebbit apps won’t include such blacklisting functions.

      Plebbit is like BitTorrent, there’s no global BitTorrent admin. You use a BitTorrent client (like uTorrent) to download torrents, and the client could technically blacklist your torrent. You use a plebbit client (like Seedit) to download a subplebbit, and the client could technically blacklist your subplebbit.

      It’s entirely possible that more centralized plebbit clients will be created, to be published on app stores for example, and they will implement whitelists of safe communities to participate in, blocking any other community.
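The client-level blacklist/whitelist distinction described above can be sketched in a few lines. This is purely hypothetical pseudocode for the idea, not real plebbit client API; all names and addresses are invented.

```python
# Hypothetical sketch of client-level filtering: the protocol has no global
# admin, but an individual client app can refuse to fetch certain communities.

BLACKLIST = {"badsub.eth"}   # communities this particular client refuses to fetch
WHITELIST = None             # a curated set for "app store" style clients, or None

def is_allowed(subplebbit_address: str) -> bool:
    """Return True if this client will connect to the given community."""
    if WHITELIST is not None:
        return subplebbit_address in WHITELIST   # whitelist mode: default deny
    return subplebbit_address not in BLACKLIST   # blacklist mode: default allow

print(is_allowed("memes.eth"))   # True
print(is_allowed("badsub.eth"))  # False
```

A whitelist-only build (as described for app-store clients) just flips the default from allow to deny.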

  • refalo@programming.dev · 3 months ago (edited)

    How long until this gets overrun with 🍕 and nobody wants to use it…

    Not sure how moderation would even be possible with this model.

    • wizardbeard@lemmy.dbzer0.com · 3 months ago

      I bring this point up every time I see someone pushing the idea of P2P or federated social networks with no moderation and no one has a solution for it yet. Because there isn’t a solution.

      It’s like these people don’t even want to look at existing social media with minimal moderation. It doesn’t take long on 4chan and other less reputable *chan style sites to see that no matter how much you want to shake off the chains of overbearing moderators, there is a bare minimum moderation necessary for any social media to survive.

      Even social media sites on TOR have moderation.

      When even the darkest, least moderated cesspools online still have some minimal moderation, it should be a massive neon sign that there needs to be some moderation functionality.

      • BB_C@programming.dev · 3 months ago

        Because there isn’t a solution.

        This has been discussed and experimented with to death where such networks existed for a long time. Just because you never heard of them or even knew they exist doesn’t mean that they don’t.

        See Freenet/Hyphanet and the three approaches (local trust, shared user trust lists, web of trust) if you want to learn something. The second one worked out the best from a performance and scalability point of view compared to the third.
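The second approach mentioned here (shared user trust lists) can be illustrated with a toy score computation. This is invented data and invented scoring, not Freenet/Hyphanet's actual algorithm: the point is only that each user keeps control by combining their own ratings with lists shared by others.

```python
# Toy "shared user trust list": each user publishes ratings for authors; my
# effective score for an author is my own rating if I have one, otherwise the
# average rating from other users' shared lists.

def effective_trust(me, author, own_ratings, shared_lists):
    """own_ratings and shared_lists are both {user: {author: score}}."""
    if author in own_ratings.get(me, {}):
        return own_ratings[me][author]            # local trust always wins
    scores = [ratings[author]
              for user, ratings in shared_lists.items()
              if user != me and author in ratings]
    return sum(scores) / len(scores) if scores else 0.0   # unknown => neutral

own = {"alice": {"spammer": -1.0}}
shared = {"bob": {"carol": 0.8, "spammer": -0.9}, "dave": {"carol": 0.6}}
print(effective_trust("alice", "spammer", own, shared))        # -1.0 (local rating)
print(round(effective_trust("alice", "carol", own, shared), 2))  # 0.7 (average)
```

A web of trust additionally weights each shared list by how much the user trusts its publisher, which is the more elaborate third approach.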

        • wizardbeard@lemmy.dbzer0.com · 3 months ago

          Holy shit you cannot be serious. In the shortest possible terms: trust systems are forms of moderation. Anything implementing them would not fall under what I was talking about.

          This project doesn’t appear to implement that. It doesn’t even appear to have a bare minimum way for users to prevent themselves from sharing something they viewed but don’t want to share. Viewing something should not imply trust.

          Definitely appreciate the assumption that I’m just a dumbass and you’ve come to shine the light of enlightenment on me though. That my point of view could only be possible to reach through ignorance. That’s always nice.

          • BB_C@programming.dev · 3 months ago

            Apologies if I was presumptuous and/or my tone was too aggressive.

            Quibbling at No Moderation = Bad usually refers to central moderation where “someone” decides for others what they can and can’t see without them having any say in the matter.

            Bad moderation is an experienced problem at a much larger scale. It in fact was one of the reasons why this very place even exists. And it was one of the reasons why “transparent moderation” was one of the celebrated features of Lemmy with its public Modlog, although “some” quickly started to dislike that and try to work around it, because power corrupts, and the modern power seeker knows how to moral grandstand while power grabbing.

            All trust systems give the user the power, by either letting him/her be the sole moderator, or by letting him/her choose moderators (other users) and how much each one of them is trusted and how much weight their judgment carries, or by letting him/her configure more elaborate systems like WoT the way he/she likes.

          • BB_C@programming.dev · 3 months ago

            Didn’t click on your links. But LEA does this move against any network that may offer anonymization. Don’t use Tor hidden services. Don’t go near I2P. Stay away from Freenet…etc. This even includes any platform that is seen as not fully under control, like Telegram at some point.

            In its essence, this move is no different from “Don’t go near Lemmy because it’s a Putin-supporting communist platform filled with evil state agents”.

            Does any network that may offer anonymization (even if misleadingly) attract undesirable people, possibly including flat out criminals? Yes.

            Should everyone stay away from all of them because of that? That’s up to each individual to decide, preferably after seeing for themselves.

            But parroting “think of the children” talking points against individual networks points to either intellectual deficiency, high susceptibility to consent-manufacturing propaganda, or some less innocent explanations.

    • Plebbitor@lemmy.world (OP) · 3 months ago

      There’s no 🍕 because ALL data on plebbit is text-only; you cannot upload media. We did this intentionally: if you want to post media, you must post a direct link to it (the interface embeds the media automatically). Centralized sites like Imgur know your IP address, take down the media immediately (the embed 404s), and report you to the authorities. Further, plebbit works like torrents, so your IP is already in the swarm; you really shouldn’t use it for anything illegal or you’ll get caught.

  • notfromhere · 3 months ago

    Plebbit only hosts text. Images from Google and other sites can be linked/embedded in posts. This avoids the issue of hosting nefarious content.

    Nowhere in the project whitepaper or FAQ does it talk about banning image hosting. Base64-encoding images in the text post is trivial, so maybe OP is the one projecting this intent or feature?
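The point made here, that a text-only protocol does not by itself keep image data out, can be demonstrated in a few lines. The bytes below are a stand-in for a real image, and whether any plebbit client actually renders data URIs is an assumption:

```python
import base64

# Any binary blob can ride inside a "text-only" post as a base64 data URI;
# a client that renders such URIs is effectively an image host.
fake_image_bytes = b"\x89PNG\r\n\x1a\n" + b"\x00" * 16   # stand-in, not a real PNG
encoded = base64.b64encode(fake_image_bytes).decode("ascii")
post_text = f"look at this: data:image/png;base64,{encoded}"

# Round-trip: the original bytes are fully recoverable from the post text.
recovered = base64.b64decode(post_text.split("base64,")[1])
print(recovered == fake_image_bytes)  # True
```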

  • SorteKanin@feddit.dk · 3 months ago

    no global admins, and no way to shut down communities, meaning true censorship resistance.

    “True censorship resistance” is not a desirable property. No normal user wants to deal with moderation. You need to have a structure for delegating moderation and such tasks to other people.

    • Plebbitor@lemmy.world (OP) · 3 months ago

      You need to have a structure for delegating moderation and such tasks to other people.

      We actually have it: since there’s no central database of communities, who decides which ones appear on the homepage of the apps for first-time users? We use a “default list” of communities, which is effectively moderated (vetted) by the app developer. This is the only “global admin” we basically have, but it applies only to the app itself, not the protocol, and it still doesn’t stop users from connecting p2p to the community (depending on the app, some plebbit client developers could implement blacklists).
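A "default list" like the one described could be nothing more than plain data shipped with the client. The JSON shape and community names below are invented for illustration; the protocol itself enforces nothing here:

```python
import json

# Hypothetical shape of an app's "default list": curated by the app developer,
# shipped with the client, and only used to seed the homepage for new users.
default_list_json = '{"defaults": ["news.eth", "tech.eth", "memes.eth"]}'
defaults = json.loads(default_list_json)["defaults"]

print(defaults[0])    # news.eth
print(len(defaults))  # 3
# Users can still connect p2p to any community not on this list.
```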

    • Chakravanti@monero.town · 3 months ago

      Says who? Moderate your damn self, wannabe. Better yet, GTFO and don’t touch this horror show. You might break a pinky.

  • sirdorius@programming.dev · 3 months ago

    Technically cool, but it’s scary that it tries to emulate the anonymous, unmoderated shithole that is 4chan. Go to 4chan now and try to imagine something even more racist, nazi and unhinged.

  • briggsyj@programming.dev · 3 months ago

    From the whitepaper:

    1. The user completes the captcha challenge and publishes his post and captcha challenge answer over pubsub.
    2. The subplebbit owner’s client gets notified that the user published to his pubsub; the post is not ignored, because it contains a correct captcha challenge answer.
    3. The subplebbit owner’s client publishes a message over pubsub indicating that the captcha answer is correct or incorrect. Peers relaying too many messages with incorrect or no captcha answers get blocked to avoid DDOS of the pubsub.
    4. The subplebbit owner’s client updates the content of his subplebbit’s public key-based addressing automatically.

    I may be misunderstanding how this protocol works, but at step 3 what prevents the owner from publishing the captcha answer as incorrect as a method of censorship based on the content of the post?
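The quoted steps can be sketched as a toy exchange. The real protocol uses signed messages relayed over IPFS pubsub; this only mirrors the control flow, with invented names and values, including the owner's unilateral power to reject that the question points at:

```python
# Toy model of the quoted captcha exchange (owner side only).
CAPTCHA_ANSWER = "7f3k"   # answer the subplebbit owner's challenge expects
BLOCK_AFTER = 3           # relay limit before a peer is blocked (step 3)
failed_attempts = {}      # peer -> count of bad answers

def handle_publication(peer, post, answer):
    """Owner-side handling of one pubsub publication (quoted steps 2-3)."""
    if failed_attempts.get(peer, 0) >= BLOCK_AFTER:
        return "peer-blocked"             # DDOS protection kicks in
    if answer != CAPTCHA_ANSWER:
        failed_attempts[peer] = failed_attempts.get(peer, 0) + 1
        return "challenge-failed"         # publication is ignored
    return "accepted"                     # step 4 would publish the update

print(handle_publication("peerA", "hello world", "7f3k"))  # accepted
print(handle_publication("peerB", "spam", "wrong"))        # challenge-failed
```

Note that nothing in this flow stops the owner from returning "challenge-failed" for any post they dislike, which is exactly the censorship avenue the question raises.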

    • kazaika@lemmy.world · 3 months ago

      The owner can obviously moderate, and thereby censor, anyway. That’s not the kind of censorship-freedom this thing advertises. It’s no more or less censored than Lemmy is when instance hosts do moderation.

    • Plebbitor@lemmy.world (OP) · 3 months ago

      Nothing prevents it; the sub owner can set a challenge that’s impossible to solve to troll people. It’s required that this be possible, otherwise the sub owner wouldn’t have full control over what the challenge is.

      A Lemmy instance could do the same thing, so it’s not really an issue; the fix is just don’t use subs/instances that don’t work.

    • qaz@programming.dev · 3 months ago

      Do you mean spamming faulty captcha answers to trigger the DDOS protection on the peers?

  • Kissaki@programming.dev · 3 months ago

    this project was created due to wanting to give control of communication and data back to the people

    The “giving control of communication” goal seems to contradict the “viewer automatically shares without a choice” and the dependence on good-intent node owners not moderating their node content.

    If a node owner hosts a community, what prevents them from moderating that community?

    • Plebbitor@lemmy.world (OP) · 3 months ago

      I agree in general, just like with the word “decentralized”. But in this case it’s legit, because it simply means it’s p2p. I’d call Bitcoin “serverless” as well, and the same goes for BitTorrent and IPFS. Plebbit is exactly the same: you open the desktop app and it runs a p2p node automatically in the background to run your subplebbit, and users connect to it peer to peer. Your p2p node is not really a “server”, because it doesn’t require any centralized domain to function; it uses transport protocols and peer discovery instead.

  • BackgrndNoize@lemmy.world · 3 months ago

    If there’s no central server, then where is all the data stored? With Lemmy, I know the instance creator has to host it all on their own server.

    • notfromhere · 3 months ago

      Great question! Unlike Lemmy, which relies on federation with dedicated servers, Plebbit is fully peer-to-peer (P2P) and does not have a central server or even instances. Instead, storage happens via a combination of IPFS and users seeding data. Here’s how it works:

      Where Is Plebbit’s Data Stored?

      1. Subplebbit Owners Host the Data (Like Torrent Seeders)

        • Each subplebbit owner runs a Plebbit node that stores and republishes their own community’s data.
        • Their device (or a server, if they choose) must be online 24/7 to ensure the subplebbit remains accessible.
        • If a subplebbit owner goes offline, their community disappears unless others seed it—very similar to how torrents work.
      2. Users Act as Temporary Seeders

        • Any user who visits a subplebbit automatically stores and seeds the content they read.
        • This means active users help distribute content, like in BitTorrent.
        • If a user closes their app and no one else is seeding the content, it becomes unavailable until the owner comes back online.
      3. IPFS for Content Addressing

        • Posts and comments are stored in IPFS, which ensures that popular content remains available longer.
        • Unlike a blockchain, there is no permanent historical ledger; if no one is seeding, the data is gone.
        • Each post has a content address (CID), meaning that as long as someone has the data, it can be re-fetched.
      4. PubSub for Live Updates

        • Plebbit uses peer-to-peer pubsub (publish-subscribe messaging) to broadcast new content between nodes in real-time.
        • This helps users see new posts without needing a central server to pull updates from.
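The content-addressing idea in point 3 can be illustrated with a simplified sketch. Real IPFS CIDs wrap a multihash with version and codec prefixes; this toy version keeps only the core property that the address is derived from the bytes themselves, so any peer holding the same bytes serves the same address:

```python
import hashlib

def toy_cid(content: bytes) -> str:
    """Simplified content address: sha256 of the bytes, truncated for display.
    Real CIDs are multihash/multibase encoded, but the principle is the same."""
    return "toy-" + hashlib.sha256(content).hexdigest()[:16]

post = b'{"title": "hello", "content": "first post"}'
addr = toy_cid(post)

print(toy_cid(post) == addr)         # True: identical bytes, identical address
print(toy_cid(b"tampered") == addr)  # False: different content, different address
```

This is why the data "can be re-fetched as long as someone has it": the address does not depend on who serves it.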

      What Happens If Everyone Goes Offline?

      • If no one’s online to seed a subplebbit, it’s as if it never existed.
      • This is a trade-off for infinite scalability—it removes the need for central databases but relies on community participation.
      • Think of it like a dead torrent—no seeders, no content.

      Comparison With Lemmy

      | Feature | Lemmy | Plebbit |
      | --- | --- | --- |
      | Hosting model | Federated servers (instances) | Fully P2P (no servers) |
      | Who stores data? | Instance owners (like Reddit mods running a server) | Subplebbit owners & users (like torrents) |
      | If owner goes offline? | Instance still exists; data stays up | The community disappears unless users seed it |
      | Historical content availability | Instances keep all posts forever | Older data may disappear if not seeded |
      | Scalability | Limited by instance storage & bandwidth | Infinite, as long as people seed |

      Bottom Line: No Servers, Just Users

      • With Lemmy: The instance owner has to host everything themselves like a mini-Reddit admin.
      • With Plebbit: The subplebbit owner AND users seed the content—no one has to host a centralized database.
      • If something is popular, it stays alive.
      • If something isn’t seeded, it disappears, just like torrents.

      It’s a radical trade-off for decentralization and censorship resistance, but if no one cares about a community, the content naturally dies off. No server, no mods deleting you from a database—just pure P2P.

      Hope that clears it up! 🚀

      • wizardbeard@lemmy.dbzer0.com · 3 months ago

        How are users able to decide what they seed and what they don’t? Just because I viewed something doesn’t mean I necessarily want to support its proliferation.

          • wizardbeard@lemmy.dbzer0.com · 3 months ago (edited)

            Please spare me whatever philosophical navel gazing you’re trying to do here. I’m asking what should be an incredibly straightforward question about what should be basic functionality in any P2P seeding based system:

            What control, if any, does an individual user have over what they seed back into the system?

            Some P2P systems just give each user an encrypted blob of all sorts of stuff, so the individual user can’t choose and on paper isn’t responsible for whatever it is that they are seeding back in. I’m personally not ok with not having a way to ensure that I’m not seeding nazi manifestos that were stealthing as a reasonably named subplebbit.

            • sirdorius@programming.dev · 3 months ago (edited)

              I’m personally not ok with not having a way to ensure that I’m not seeding nazi manifestos that were stealthing as a reasonably named subplebbit.

              I kind of get the feeling this is exactly the content they want to help host when they refer to “censorship resistance”. This was also the key selling point of Gab when it launched.

              Edit: even their logo is a meme commonly used in far right circles, so there seem to be a lot of dog whistles for the type of community they want to create.

      • BB_C@programming.dev · 3 months ago

        Not only is IPFS not built on solid foundations, offers nothing new, and is generally bad at data retention, but the “opt-in seeding” model was always a step backwards and not a good match for apps like plebbit.

        The anonymous distributed filesystem model (a la Freenet/Hyphanet), where each file segment is anonymously and randomly “inserted” into the distributed filesystem, is the way to go. This fixes the “seeder power” problem, as undesirable but popular content can stay highly available automatically, and unpopular but desirable content can be re-inserted/healed periodically by healers (seeders). Only content that is both unpopular and undesirable may fizzle out of the network, and that can only happen in the context of messaging apps/platforms if 0 people tried to pull and 0 people tried to reinsert the content in question over a long period of time.

      • BackgrndNoize@lemmy.world · 3 months ago

        Thanks for the detailed reply, that helps. This sounds really interesting. With the late-stage capitalism we are going through, I’ve lost all interest in private, corporate-controlled social networks, hence the switch to Lemmy. But the instance owner is still a single point of failure with Lemmy, though at least you can switch to another instance.

        I have a few concerns about Plebbit though.

        A - With torrents you know the size of the torrent beforehand and can decide whether to download it all and continue seeding it, as long as you have the space for it on your drive. But with a forum like Plebbit, how would a user know how much drive space Plebbit will take for the content they interact with? Is there a way to dedicate X gigabytes of limited storage space to it, with anything above that purged to make space for new data?

        B - One of the best uses of Reddit, IMO, is that it’s very easy to Google for something and find a relevant Reddit thread, especially for something niche. Since Plebbit only keeps the most popular content and the rest goes away if not seeded, does that mean it won’t be good for niche archival data? Maybe that’s a use case Plebbit isn’t designed to handle, and that’s okay.

        C - Bots are a big concern for most social media, especially the ones used for spreading propaganda and misinformation. How does a P2P social forum like Plebbit plan to handle bots?
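Question A describes what would amount to a size-bounded seeding cache. A client could implement it along these lines; this is a sketch of how a client *could* behave, not an existing plebbit feature:

```python
from collections import OrderedDict

class SeedCache:
    """Keep seeded content under a byte budget, purging the oldest items
    first (illustrative only; not an actual plebbit client feature)."""
    def __init__(self, budget_bytes):
        self.budget = budget_bytes
        self.items = OrderedDict()   # cid -> size in bytes, oldest first
        self.used = 0

    def add(self, cid, size):
        if cid in self.items:
            self.items.move_to_end(cid)   # refresh recency, same bytes stored
            return
        while self.items and self.used + size > self.budget:
            _, old_size = self.items.popitem(last=False)   # evict oldest
            self.used -= old_size
        self.items[cid] = size
        self.used += size

cache = SeedCache(budget_bytes=100)
cache.add("post1", 60)
cache.add("post2", 30)
cache.add("post3", 40)    # evicts post1 to stay under the 100-byte budget
print(list(cache.items))  # ['post2', 'post3']
```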

      • suoko@feddit.it · 3 months ago

        When everyone has a 50G PON fiber connection at home, IPFS is going to be the standard serverless configuration.

        2030? Everyone uses that date for everything futuristic

    • Plebbitor@lemmy.world (OP) · 3 months ago

      It’s stored in each plebbit node. Each subplebbit runs a custom IPFS node for plebbit, with its text-only database, which is the content you see in the app. Peers download it and seed it back.

  • pedroapero@lemmy.ml · 1 month ago (edited)

    Sounds to me like it’s a much better match for piracy than for social networking (similar to using autodl-irssi).

    • Plebbitor@lemmy.world (OP) · 3 months ago (edited)

      Nobody is running the Matrix server at the moment; if you are interested in running it, DM @estebanabaroa on Telegram.