Hello everyone!

My friend and I have each bought an OptiPlex server. Our goal is to self-host a web app (static HTML) with redundancy: if my server goes down, his takes over, and vice versa. I’ve looked into Docker Swarm, but each server has to remain fully independent (each runs its own apps, with a few shared ones).

I can’t find a solution that lets each server take over and handle loadbalancing between the two. Ideally with Traefik, since that’s what we’re currently using. To me, the real issue is the DNS A record that points to only one IP :(

  • cron@feddit.org · 12 points · 17 days ago

    Your challenge is that you need a load balancer. By hosting the load balancer yourself (e.g. on a VPS), you could also host your websites directly there…

    My approach would be DNS-based. You can have multiple DNS A records, and the client picks one of them. With a little script, you could remove a server’s A record if that server goes down. This way, you wouldn’t need any central hardware.

    • Jeena@piefed.jeena.net · 3 points · 17 days ago (edited)

      That’s an interesting idea, I need to check if they offer some kind of API for that.

      But then there’s this other thing: what about the DNS cache?

      • cron@feddit.org · 3 points · 16 days ago

        Set the DNS cache time (TTL) to 60 seconds.

        Run the script on every host, offset by some delay so they don’t hit the API simultaneously (e.g. run the script every other minute).

        With this approach, you get automatic failover in at most 3 minutes.
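        The stagger is easy to express as two crontab entries. A rough sketch (the script path is hypothetical; each host installs its own line):

```
# Host A's crontab: check health and update DNS on even minutes.
*/2 * * * * /usr/local/bin/dns-failover.sh
# Host B's crontab: odd minutes, so the two hosts never hit the DNS API at once.
1-59/2 * * * * /usr/local/bin/dns-failover.sh
```

        Worst case: a failure right after a check waits up to 2 minutes for the next run, plus up to 60 seconds of cached DNS, hence roughly 3 minutes.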

    • Mateleo@lemmy.dbzer0.com (OP) · 2 points · 12 days ago

      The VPS remains a single point of failure :(

      The DNS-based approach seems to be the best bet for my use case.

    • lando55@lemmy.world · 2 points · 17 days ago

      Where would you host the script? If it’s expected that the server that fires it off is always online and performing health checks, why not have it host a load-balancer? Or another local instance of the website? It’s something fun to play around with, but if this is for anything beyond a fun exercise there are much better ways to accomplish this.

      • cron@feddit.org · 7 points · 17 days ago

        I’d host it on both webservers. The script sets the A record to all the servers that are online. Obviously, the script also has to check its own service.

        It seems a little hacky though, for a business use case I would use another approach.
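        A minimal sketch of that logic in Python (the IPs and health-check URLs are placeholders; the actual DNS update call depends entirely on your provider’s API and is left out):

```python
import urllib.request
import urllib.error

# Placeholder peers: map each server's public IP to its health-check URL.
SERVERS = {
    "203.0.113.10": "http://203.0.113.10/health",
    "203.0.113.20": "http://203.0.113.20/health",
}

def is_healthy(url, timeout=5):
    """Return True if the health endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

def desired_a_records(health):
    """Given {ip: healthy_bool}, return the set of IPs that should keep
    an A record. If everything looks down, keep all records rather than
    publishing an empty zone (the checker itself might be the problem)."""
    up = {ip for ip, ok in health.items() if ok}
    return up or set(health)

def main():
    # Each host runs this on a staggered schedule, then pushes the result
    # to the DNS provider's API (provider-specific, not shown here).
    health = {ip: is_healthy(url) for ip, url in SERVERS.items()}
    print(sorted(desired_a_records(health)))

if __name__ == "__main__":
    main()
```

        Because a host that checks itself can’t tell “my service is down” apart from “my network is down”, the keep-everything fallback errs on the side of leaving records in place.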

  • MangoPenguin@lemmy.blahaj.zone · 12 points · 17 days ago (edited)

    Essentially you need a load balancer hosted somewhere that the traffic hits before getting routed to one of the 2 servers. That could be a VPS running Traefik if you prefer that.
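    If you go that route, the Traefik side is just one service with two backends and a health check. A rough sketch of the dynamic configuration (file provider, Traefik v2 syntax; the domain and IPs are placeholders):

```yaml
http:
  routers:
    site:
      rule: "Host(`example.com`)"
      service: site
  services:
    site:
      loadBalancer:
        # Traefik drops a backend from rotation while its health check fails.
        healthCheck:
          path: /
          interval: "10s"
          timeout: "3s"
        servers:
          - url: "http://203.0.113.10:80"
          - url: "http://203.0.113.20:80"
```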

    Alternatively, you could both run something like IPFS and serve the static site from that, but anyone accessing the site would either need IPFS installed or have to use a gateway hosted somewhere (Cloudflare runs a public one, for example).

  • lando55@lemmy.world · 10 points · 17 days ago

    If you don’t want to mess with another VPS you can use a global server load balancer (GSLB) provider like Akamai, Cloudflare, Azure, etc.

    This being a self-host community, though, it’s unlikely you’d want to pursue something like this; without knowing more about your specific use case it’s tough to make a recommendation.

    If global high-availability is your primary goal then a hosted solution is probably best.

    If this is just an exercise you and your friend are working on for giggles, and it’s not a mission-critical production instance, then self-hosting a load balancer on each of your servers, with both nodes in a target group, would presumably achieve this. It’s somewhat counterintuitive, though: if the website goes down at either location, I would imagine there’s a pretty high likelihood the LB itself would be down as well.

  • slowmotionrunner@lemmy.sdf.org · 7 points · 16 days ago

    I think what you’re looking for is sometimes called a “DNS load balancer”. Offerings like Azure Traffic Manager or AWS Route 53 do this. You can set up health checks that the service uses to determine whether one of your locations is down, and it then automatically updates the DNS record to point to the other one. You can also get clever and have the DNS resolve to the IP of whichever server is physically closer, so you get the best performance. I’m not sure what options there are for self-hosting a DNS service like this; however, these hosted services are extremely affordable (pennies) and run on very reliable infrastructure, which is what you want.

  • jonw@links.mayhem.academy · 4 points · 16 days ago

    Many moons ago I used Heartbeat for this, but you’d need both servers in the same CIDR range. I assume that’s not the case here.

    In your case you could probably use a dynamic DNS service to move the IP around, but the challenge would be knowing when to kick it off.

    You could write scripts to determine when the live one goes down, but we’re probably already more complicated than you were looking for.

  • corsicanguppy@lemmy.ca · 2 points · 16 days ago

    It’s okay to still use a hyphen between ‘load’ and ‘balancing’. As a bonus, what you write would then be English, too.