Hi, I figured out how to get Docker containers to join an existing network by putting “networks” into the respective sections of the docker-compose.yml.

If I want to also give them fixed IPs on this network, what would the syntax look like in the docker-compose.yml?

  • cool_pebble@aussie.zone
    networks:
      app:
        ipam:
          config:
            - subnet: 172.20.0.0/24
              gateway: 172.20.0.1
    services:
      app:
        image: my-app-image
        networks:
          app:
            ipv4_address: 172.20.0.10
    

    In the above example I’ve declared a Docker Bridge network with the range 172.20.0.0/24 and a gateway at 172.20.0.1. I have a service named app with a static IP of 172.20.0.10.

    The same is also possible with IPv6, though there are extra steps involved to make IPv6 networking work in Docker and it’s not enabled by default so I won’t go into detail in this comment.
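
    (As a very rough sketch only, and assuming the Docker daemon already has IPv6 support enabled, the compose side would look something like the below; the ULA subnet is just an example.)

    networks:
      app:
        enable_ipv6: true
        ipam:
          config:
            - subnet: fd00:f00d::/64
    services:
      app:
        image: my-app-image
        networks:
          app:
            ipv6_address: fd00:f00d::10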

    Out of curiosity, what’s the use case for a static IP in the Docker Bridge network? Docker compose assigns hostnames equal to the service name. That is, if you had another container in the app network from my example above, it could just do a DNS lookup for app and it would resolve to 172.20.0.10.
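
    For illustration, any other container on that network can reach it by name; something like this added under services: would do it (the client name and alpine image are just a throwaway example):

      client:
        image: alpine
        command: ping -c 1 app   # "app" resolves to 172.20.0.10 via Docker's embedded DNS
        networks:
          - app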

    • Solvena@lemmy.worldOP

      Thank you, this looks like exactly what I need.

      I run several web services (Nextcloud, Matrix) behind the same reverse proxy (Nginx Proxy Manager). In my setup I have one container running nginx, which is the only one exposed to the web. Its proxying for the other services relies on them being in the same network. It all works well, but I ran into problems when restarting my server after a shutdown. I suspect that one of the other services tried to grab the same IP address as my nginx service, which leaves that service not running properly, and my whole reverse proxy setup falls apart at that point.

      I’m not certain that this is really what happens, but I want to try assigning the fixed IPs and see if that solves the problem.
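
      Going by your example, I guess in my case it would look roughly like this (just a sketch; the images, names and addresses are placeholders for my actual setup):

      networks:
        proxy_net:
          ipam:
            config:
              - subnet: 172.20.0.0/24
      services:
        npm:
          image: jc21/nginx-proxy-manager:latest
          ports:
            - "80:80"
            - "443:443"
          networks:
            proxy_net:
              ipv4_address: 172.20.0.2
        nextcloud:
          image: nextcloud
          networks:
            proxy_net:
              ipv4_address: 172.20.0.3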

      • cool_pebble@aussie.zone

        Honestly, I’d be surprised if static IPs fix it. Docker’s default network type (bridge network) is very good at assigning IPs to containers without clashing, even with container restarts and replacements: it’s been battle tested for years in production use. As others have said, standard DNS hostnames of containers should be sufficient. But I’ll certainly be interested to hear your results.

        • Solvena@lemmy.worldOP

          Edit: I fixed my problem by re-creating my nginx reverse proxy and redoing my proxy hosts. I have yet to restart my server, though …

          I’m a beginner with all of this stuff, so I’m sure I’m not correctly assessing what’s wrong with my setup. I take more of a methodical “trial and error” approach, where I change one thing at a time and see what happens … quite time-consuming, but it helps me figure things out along the way :)

          However, if you have an idea what could be wrong with my server, I’d appreciate it: I run Nginx Proxy Manager with nginx in a container within a custom network “my_network” and have assigned that container a fixed IP. I run other containers (portainer, mariadb, nextcloud, synapse) that all connect to the same custom network. The nginx container “sees” the outside web, with ports 80 and 443 opened on the firewall for that container’s fixed IP, and routes traffic (and other needed ports) to my other containers. This is all working well and also works after restarting the server.
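
          To sketch the shape of it (not my exact files, just placeholders): “my_network” is created once outside compose and each stack joins it as an external network.

          # created beforehand with: docker network create --subnet 172.20.0.0/24 my_network
          networks:
            my_network:
              external: true
          services:
            npm:
              image: jc21/nginx-proxy-manager:latest
              ports:
                - "80:80"
                - "443:443"
              networks:
                my_network:
                  ipv4_address: 172.20.0.2

          The other stacks (nextcloud, synapse, …) just list my_network under networks: without an ipv4_address.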

          Now I tried to install a Lemmy instance and got it up and running by bringing the Lemmy containers into my custom network as well and proxying my nginx to the Lemmy proxy. However, when I restarted the server, something broke and I cannot get the web UI of NPM to load. I think somehow host names and/or IP addresses got mixed up somewhere. The containers start just fine, but I can’t access the web UI anymore. Reverse proxying doesn’t work either, but if I open the needed ports on my firewall manually I can reach the other services’ containers.

          I hope this is even understandable, not sure if I’m using the correct terms …

  • cestvrai@lemm.ee

    Containers have fixed host names already, why do you need static IPs on the internal network?

  • ChrislyBear@lemmy.world

    I have never cared about the IP addresses of my docker containers and never will.

    Why do you? There is a docker internal DNS, you can just resolve IPs by service name/container_name.

    • MoogleMaestro@kbin.social

      There is a docker internal DNS, you can just resolve IPs by service name/container_name.

      Yes, and you can control that as well by messing with Docker network groups. I find the ability to network into Docker services from the host to be super simple.

      What I haven’t figured out yet is whether or not I can give my docker services their own IP on my router for access from another system on a fixed or reserved IP.

      • ChrislyBear@lemmy.world

        I see. Sure, that’s a valid way to manage networking. I personally don’t like to do this manually anymore, just like I don’t drive stick shift anymore.

        If you want to expose a service to the web, I’d recommend using a reverse proxy. E.g. I use Traefik 2; it gets the config it needs automatically from 5-6 labels per container, and I don’t need to bother with IPs, certificates, NAT and what have you. It just creates virtual hosts, procures a Let’s Encrypt certificate, and directs the traffic to the target container completely on its own.
        Spinning up a container and immediately trying it out on its own subdomain with a correct SSL certificate has never been easier. (I have a wildcard “*” DNS entry pointing to my Traefik server.)
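
        As a rough example of what those labels look like (the whoami image, host name and certresolver are placeholders and have to match your own Traefik setup):

        services:
          whoami:
            image: traefik/whoami
            labels:
              - "traefik.enable=true"
              - "traefik.http.routers.whoami.rule=Host(`whoami.example.com`)"
              - "traefik.http.routers.whoami.entrypoints=websecure"
              - "traefik.http.routers.whoami.tls.certresolver=letsencrypt"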

        You could also try installing cloudflared and creating a Cloudflare Tunnel. That way you don’t even have to forward any ports on your router.
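
        A minimal compose sketch for that (the token comes from the Cloudflare dashboard; this is from memory, so double-check the image tag and env var against the docs):

        services:
          cloudflared:
            image: cloudflare/cloudflared:latest
            command: tunnel --no-autoupdate run
            environment:
              - TUNNEL_TOKEN=<token from the Cloudflare dashboard>
            restart: unless-stopped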

        Just some tips, if you want to explore new things :)

      • NewDataEngineer@lemmy.world

        What I haven’t figured out yet is whether or not I can give my docker services their own IP on my router for access from another system on a fixed or reserved IP.

        You can. You have to set up a macvlan on your network and then assign an IP to your container that sits on your router’s subnet.

        I can only use Traefik with a macvlan because Synology DSM uses ports 80 and 443. I assign Traefik its own IP and use Pi-hole’s DNS to route a wildcard subdomain to it.
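
        Roughly like this in compose (interface name, subnet and address are just examples for a typical home LAN; pick an address outside your router’s DHCP pool):

        networks:
          lan:
            driver: macvlan
            driver_opts:
              parent: eth0            # the host NIC attached to the LAN
            ipam:
              config:
                - subnet: 192.168.1.0/24
                  gateway: 192.168.1.1
        services:
          traefik:
            image: traefik:v2.10
            networks:
              lan:
                ipv4_address: 192.168.1.50

        One known quirk: the host itself can’t reach macvlan containers directly without extra setup, so this is mainly for access from other devices on the LAN.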

        I wrote a guide in my Trilium notes. If you’re interested I can share.

  • VelociCatTurd@lemmy.world

    I do not use docker-compose, but if it helps point you in the right direction, I’ve been using macvlan networks to give all my containers their own MAC address and IP.