A few weeks ago Lemmy was buggy on desktop and there were no good mobile clients out there. Now the site is pretty stable and fast on PC, and there are some pretty good iOS/Android clients too. Thanks to all the people who made this possible!

  • @Xanvial
    7 points · 1 year ago

    Why not? I don’t see the drawback of developing the ability to do horizontal scaling. If an instance owner doesn’t want to add additional servers, that’s up to them; obviously they are the ones paying if they decide to add more.

    Just to be clear, horizontal scaling means multiple servers handling the same instance. That can be multiple backend services to handle more traffic, or multiple databases to reduce database load.

    Additionally, it enables high availability: if one of the backend services goes down (either unexpectedly or during a rolling update), the others stay active and the instance remains accessible to users.
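
    Roughly, that setup looks like the sketch below: a small load balancer spreading requests across identical backend processes. This is only an illustration, not how any particular instance is deployed; the backend addresses and port are made up, and a real deployment would more likely put nginx or HAProxy in front of the backends.

    ```go
    // Minimal round-robin load balancer sketch in Go (illustrative only;
    // the backend addresses and port below are placeholders).
    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
        "sync/atomic"
    )

    func mustParse(raw string) *url.URL {
        u, err := url.Parse(raw)
        if err != nil {
            panic(err)
        }
        return u
    }

    func main() {
        // Two identical backend processes serving the same instance.
        backends := []*url.URL{
            mustParse("http://10.0.0.1:8536"),
            mustParse("http://10.0.0.2:8536"),
        }

        var next uint64
        proxy := &httputil.ReverseProxy{
            // Send each request to the next backend in round-robin order.
            Director: func(r *http.Request) {
                b := backends[atomic.AddUint64(&next, 1)%uint64(len(backends))]
                r.URL.Scheme = b.Scheme
                r.URL.Host = b.Host
            },
        }

        // A real setup would also health-check the backends and stop routing
        // to ones that are down.
        log.Fatal(http.ListenAndServe(":80", proxy))
    }
    ```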

    • @rglullis@communick.news
      5 points · 1 year ago

      Why not?

      • Because it creates an unnecessary incentive to re-centralize the social network under a handful of instances.
      • Because it leads to drama and power struggles (Beehaw defederating from other big instances, citing moderation issues).
      • Because after a certain size, there is no real community, no common identity, no shared values and principles.
      • Because it makes the system (the fediverse) more vulnerable.
      • Because it is not sustainable in the long run.
      • Because it is not needed. Even if one server has an incredibly popular community, it can be followed from remote instances.
      • @Xanvial
        5 points · 1 year ago (edited)

        All of your points only consider the community itself, which is not my argument. And that can already be achieved just by scaling vertically, like most instances currently do; I think even lemmy.world is only using 32 cores right now (Google Cloud can offer machines with more than 200 vCPUs).

        I’m mostly approaching this from a technical standpoint. It’s impossible to have 100% uptime without the ability to scale horizontally. For example, when updating to a new version, the instance currently has to shut down for maintenance until the update is finished, and usually there are still issues to fix afterwards. With horizontal scaling, the instance could update one server (or add an additional one), move a bit of traffic onto it to test it, and then roll it out fully if everything goes well.
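
        Here is a rough sketch of that “move the traffic a bit” step (a canary split), under assumed values: the addresses, port, and 5% share are made up, and in practice this usually lives in the load balancer or proxy config rather than in hand-written code.

        ```go
        // Canary traffic split sketch (illustrative only; addresses, port,
        // and the 5% share are assumptions).
        package main

        import (
            "log"
            "math/rand"
            "net/http"
            "net/http/httputil"
            "net/url"
        )

        func mustParse(raw string) *url.URL {
            u, err := url.Parse(raw)
            if err != nil {
                panic(err)
            }
            return u
        }

        func main() {
            stable := mustParse("http://10.0.0.1:8536") // backend on the current version
            canary := mustParse("http://10.0.0.2:8536") // freshly updated backend

            const canaryShare = 0.05 // send ~5% of requests to the new version first

            proxy := &httputil.ReverseProxy{
                Director: func(r *http.Request) {
                    target := stable
                    if rand.Float64() < canaryShare {
                        target = canary
                    }
                    r.URL.Scheme = target.Scheme
                    r.URL.Host = target.Host
                },
            }

            // If the canary behaves, raise its share step by step until it takes
            // all traffic, then update or retire the old backend.
            log.Fatal(http.ListenAndServe(":80", proxy))
        }
        ```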

        • @rglullis@communick.news
          1 point · 1 year ago

          All of your points only consider the community itself, which is not my argument. I’m mostly approaching this from a technical standpoint.

          I understand. But the point I am trying to make is that it makes no sense to worry about these technical issues now. Not only is it premature optimization, it is the kind of metric that is actually damaging to us.

          What do you prefer? A server that can handle hundreds of thousands of users with five nines of uptime, focuses on “Web Scale”, and ends up replicating all the issues of Facebook/Twitter/Instagram/Reddit, or an instance that is more aligned with the ideas of the SmolWeb and more likely to be a net-positive force in your life?

        • @wiki_me@lemmy.ml
          1 point · 1 year ago

          Not to mention a hardware failure, which could take at least a couple of hours to fix. Some mental health communities should stay online at all times; someone mentioned there is research showing that when a person is suicidal there is a window of a couple of hours in which they are most vulnerable, and there is also research showing that online support can improve mental health.

      • @SneakyWaffles@vlemmy.net
        4 points · 1 year ago

        Dude, I think you’re just ignorant of how web hosting works. Every single site you visit is probably hosted on dozens of servers or more so that it can load balance and guarantee better uptime. It’s normal. It’s weird to be in a place that runs on only one server.

        Being able to host a stable site doesn’t mean everything is suddenly moving onto one instance either. The NBA subreddit, for example, is a single community with millions of members. Lemmy can’t handle anything like that. And having no technological way to support large communities is a guaranteed way to kill your app.

        You also seem to be very in favor of spreading out and decentralizing… except for Beehaw. Wonder why you’re such a purist for decentralization except in this case. Weird. Being able to defederate, make your own moderation decisions, and make big calls like that to defend your community is the whole point of these sites. Maybe you should go back to Reddit if you aren’t able to handle it. And for the record, you’d have to be blind not to see that moderation controls are lacking at best on this brand-new, actively developed site.

        • @rglullis@communick.news
          0 points · 1 year ago

          Dude, I think you’re just ignorant of how web hosting works.

          I run a managed hosting service for Mastodon and Lemmy, but yeah…

          Every single site you visit is hosted on probably dozens or more servers so that it can load balance or guarantee better uptime.

          Hacker News: one single FreeBSD box. Not even a database.

          Also, your cargo cult is showing… talking about “load balance” as a guarantee of uptime is the same as justifying using Mongo because it is webscale.

          • @SneakyWaffles@vlemmy.net
            2 points · 1 year ago (edited)

            You sound like an old script kiddie who says they’re a hacker because they ran a script from a forum. If it wasn’t obvious, I’m talking about actual web architecture. Not hobby junk. Managing to stand up a tiny virtual instance for a few people does not mean that you understand anything.

            As I said, this is basic architecture shit. Like, “an intern would understand the idea” kind of basic.

            talking about “load balance” as a guarantee of uptime is the same as justifying using Mongo because it is webscale

            ??? Are you unironically implying that a site with a backend that has multiple servers stood up to spread the load won’t have tremendously better capacity, redundancy, and as a result better uptime than a single hobby PC in your living room or whatever you have set up?

            • @rglullis@communick.news
              1 point · 1 year ago

              Can you please stop with the unnecessary snark and this silly attempt at dick-measuring? Are you upset at something?

              Are you unironically implying that a site with a backend that has multiple servers stood up to spread the load won’t have tremendously better capacity, redundancy…

              No. I am saying that the majority of websites out there don’t need to pay the costs or worry about this.

              Good engineering is about understanding trade-offs. We can talk all day about the different strategies to get 4, 5 or 6 nines of availability, but all of that would be pointless if the conversation is not anchored in how much it will cost to implement and operate such a solution.
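
              To put rough numbers on those nines (standard back-of-the-envelope arithmetic, nothing specific to Lemmy): each additional nine cuts the allowed downtime per year by a factor of ten.

              ```go
              // Yearly downtime budget for N nines of availability.
              package main

              import (
                  "fmt"
                  "math"
                  "time"
              )

              func main() {
                  year := 365 * 24 * time.Hour
                  for nines := 3; nines <= 6; nines++ {
                      unavailable := math.Pow(10, -float64(nines)) // e.g. 4 nines -> 0.0001
                      budget := time.Duration(float64(year) * unavailable)
                      fmt.Printf("%d nines: about %v of downtime allowed per year\n",
                          nines, budget.Round(time.Second))
                  }
              }
              ```

              Five nines works out to roughly five minutes of downtime per year, which is exactly the kind of budget whose cost has to be justified.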

              Lemmy, like all other social media software, does not need that. There is nothing critical about it. No one dies if the server goes offline for a couple of minutes a month. No business will stop making money if we take the database down to do a migration instead of using blue-green deployments. Even the busiest instances are not seeing enough load to warrant more servers, and they are able to scale by simply (1) fine-tuning the database (which is the real bottleneck) and (2) launching more processes.

              Anyone who criticizes Lemmy because “it cannot scale out” is either talking out of their ass or a bad engineer. Possibly both.