It's now running on a dedicated server with 6 cores/12 threads and 32 GB RAM. I hope this will be enough for the near future. Nevertheless, new users should still prefer to sign up on other instances.

This server is financed by donations to the Lemmy project. If you want to support it, please consider donating.

  • Blaskowitz@lemmy.ml · 1 year ago

    Is it possible to horizontally scale these instances instead of just upping the machine hardware? What are the main performance bottlenecks typically?

    • mwlczk@lemmy.world · edited · 1 year ago

      Hey, what do you mean by “scale horizontally”? There are multiple approaches to tackle this.

      • Have multiple nodes/pods for the same instance and run them on a cloud-like service provider
      • Have read-only (RO) instances to handle the read load
      • Share/merge bigger communities/subs across multiple instances

      All of these most likely require a major rewrite/change of the Lemmy server software, I guess. They are already addressed as issues/feature requests on GitHub. In my opinion the first option would fit best.
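
      To make the second idea concrete, here is a minimal sketch of routing reads to replicas while keeping writes on the primary. The `DbRouter` type, the round-robin scheme, and the Postgres endpoint URLs are all hypothetical placeholders for illustration, not Lemmy's actual code.

      ```rust
      use std::sync::atomic::{AtomicUsize, Ordering};

      /// Toy router: writes always go to the primary database,
      /// reads rotate round-robin over read-only replicas.
      struct DbRouter {
          primary: String,
          replicas: Vec<String>,
          next: AtomicUsize,
      }

      impl DbRouter {
          /// Pick the next replica for a read; fall back to the primary
          /// if no replicas are configured.
          fn read_endpoint(&self) -> &str {
              if self.replicas.is_empty() {
                  return &self.primary;
              }
              let i = self.next.fetch_add(1, Ordering::Relaxed) % self.replicas.len();
              &self.replicas[i]
          }

          /// Writes must hit the primary so replicas stay consistent.
          fn write_endpoint(&self) -> &str {
              &self.primary
          }
      }

      fn main() {
          let router = DbRouter {
              primary: "postgres://primary:5432/lemmy".into(),
              replicas: vec![
                  "postgres://replica1:5432/lemmy".into(),
                  "postgres://replica2:5432/lemmy".into(),
              ],
              next: AtomicUsize::new(0),
          };
          println!("read  -> {}", router.read_endpoint());
          println!("read  -> {}", router.read_endpoint());
          println!("write -> {}", router.write_endpoint());
      }
      ```

      In a real deployment the same split would happen at the connection-pool level, and replication lag would have to be handled, which is part of why this isn't a drop-in change.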

      • Blaskowitz@lemmy.ml · 1 year ago

        My comment was made without knowing Lemmy's topology at all, but my initial thought was that vertical scaling can have diminishing returns past a certain threshold. Since the servers seem to be struggling, I'm wondering if that point has been passed and whether scaling horizontally would be more cost-effective and reliable. But if the application isn't written that way, or the underlying data store isn't equipped for multiple instances, then fair enough; I'd be interested as to why, especially if Lemmy grows. I'll take a look at the open issues and educate myself a bit more though.