I’m trying to get back into self-hosting. I previously used Unraid, and it worked well: VMs where needed, Docker containers whenever possible. The biggest benefit is that there’s an easy way to give each container its own IP, so you don’t have to worry about port conflicts. Nobody else seems to do this for Docker as far as I can tell, and after trying multiple “guides,” none of them work unless you’re on some ancient and very specific hardware and software combination. I give up. I’m going back to Unraid, which just works. No more Docker Compose errors because the Ubuntu host is sitting on some port, requiring me to disable key features.
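For reference, the per-container-IP setup Unraid offers can be approximated in plain Docker with a macvlan network. A minimal Compose sketch, purely illustrative — the interface name, subnet, gateway, and addresses are placeholders you would replace with your own LAN values:

```yaml
networks:
  lan:
    driver: macvlan
    driver_opts:
      parent: eth0              # host NIC; replace with your interface name
    ipam:
      config:
        - subnet: 192.168.1.0/24
          gateway: 192.168.1.1

services:
  app:
    image: nginx
    networks:
      lan:
        ipv4_address: 192.168.1.50   # container gets its own LAN IP, no port remapping
```

One known caveat with macvlan is that the host itself cannot reach the container's IP without extra routing, which may be part of why these guides are fiddly in practice.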

  • midnight@infosec.pub · 6 points · 1 year ago

    I’m confused about why you need a unique IP per VM/container. You can change the “external” port in your Docker Compose file and be fine.

    I initially tried unRAID on bare metal but hated not being able to use the versions of Docker I wanted, or anything that wasn’t in the community repo.

    I currently run unRAID as a Proxmox VM (passing through my LSI card and the USB drive for the OS) and it works flawlessly. I didn’t even have to reinstall, since I passed through the same components it used when it was bare metal.

    Ultimately, use what works best for you, but I do have to disagree that Proxmox/Docker is inferior.

    • johnnixon@rammy.site (OP) · 1 point · 1 year ago

      Sometimes you can’t change the external port because it has to be where clients expect it. Regarding being stuck with the community repo: try being restricted to what’s available in the LXC documentation.

      I guess I could follow a 30-minute CLI procedure to spin up a container, or I can run a command or two in Docker. If Docker simply had its networking straight, without my having to do Linux surgery with oven mitts on, this wouldn’t be a problem.
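A concrete instance of the “port has to be where it’s expected” problem is DNS: clients always query port 53, so the host port can’t be remapped. A hedged Compose sketch (the image name is a placeholder, not something from this thread):

```yaml
services:
  dns:
    image: example/dns-server   # placeholder image
    ports:
      - "53:53/udp"   # DNS clients hard-code port 53; publishing it on
      - "53:53/tcp"   # e.g. 5353 instead would break them
```

On Ubuntu this is also the classic clash: systemd-resolved already listens on 53, which is the sort of “disable key features” situation described above.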

      • midnight@infosec.pub · 5 points · 1 year ago

        Not saying I don’t believe you, but do you have any examples where changing the external port causes an issue? I change the port on almost every single Docker container from its default. To be clear, I’m referring to the left side of the colon in the port declaration:

        ports:
          - 12080:80

        I should also clarify I don’t use LXC containers. My background had me more familiar with VMs so I went that route. I’ve never felt like I’m performing surgery when deploying containers, but I have seen other complaints around docker networking that I’ve apparently been lucky enough to avoid.

        Like I said though, do what works best for you. I don’t mind tinkering to get things tuned just right, which causes some friction with unRAID. I’ve invested enough time and energy into this that I just spin up a Proxmox VM, pass its IP to a few Ansible playbooks I wrote to reach a healthy base state, and then start deploying my Docker containers. I recognize not everyone wants to do this, though.
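The workflow described — fresh VM, playbook for a base state, then containers — might look something like this, purely as an illustration (the host group and package choice are assumptions, not the commenter’s actual playbooks):

```yaml
# Illustrative only: a minimal playbook in the spirit described,
# run against the new VM's IP (inventory names are placeholders)
- hosts: new_vm
  become: true
  tasks:
    - name: Install Docker from the distro repository
      ansible.builtin.apt:
        name: docker.io
        state: present
        update_cache: true
```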

        • karlthemailman@sh.itjust.works · 1 point · 1 year ago

          Not saying I don’t believe you, but do you have any examples where changing the external port causes an issue? I change the port on almost every single docker container from what the default is.

          Same here. I can’t think of an instance where this hasn’t worked. Perhaps if you have multiple applications that depend on each other? But you can just put those in the same compose file.
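The same-compose-file approach works because services on one Compose network reach each other by service name and *internal* port, so only the user-facing port ever needs publishing. A hedged sketch (the app image and environment variable are placeholders):

```yaml
services:
  db:
    image: postgres:16
    # no ports: entry needed; the app reaches it over the compose network

  app:
    image: example/webapp     # placeholder image
    environment:
      DATABASE_HOST: db       # the service name resolves inside the network
    ports:
      - "8080:80"             # only the user-facing port is published
```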