• 6 Posts
  • 41 Comments
Joined 1 year ago
Cake day: July 1st, 2023





  • And out of curiosity, have you tried using IPv6 only? I wonder whether that would still be a constraint today. Are there still servers on the internet that are reachable only over IPv4? Or conversely, clients that are IPv4 only?

    Since there are no IPv4 addresses left to hand out, I figure there must already be home users who only have v6, right? CGNAT is just a band-aid that delays the inevitable.


  • I just double-checked. Maybe I misread your first message. But I want to avoid any confusion: CGNAT does not exist in IPv6. If you are behind CGNAT, it is on the IPv4 stack that SFR provides alongside the IPv6 stack. It is a constraint because IPv4 is easier to deal with than IPv6, so people prefer it when possible.

    Support can apparently switch you back to full-stack IPv4 (if you insist quite a bit). Otherwise, the ultimate solution is to drop IPv4 and migrate entirely to IPv6, which gives you unrestricted access. Your services would then no longer be reachable over IPv4. I don't know how much of a problem that still is in 2024.



  • CGNAT on IPv6? Are you sure that exists? My understanding is that CGNAT is a trick to slow down the exhaustion of available IPv4 addresses by assigning the same IPv4 address to several subscribers. Each subscriber gets a share of the available ports. Free once switched me to that mode without warning, but fortunately it can be disabled by requesting a full-stack IPv4.

    which therefore does not allow port forwarding

    That's not entirely true. You can still forward ports, but if the ISP did not assign you the first slice of ports (the most useful ones for self-hosting), it is far less useful. You won't be able to host an HTTPS service on 443, for example.

    I'm also not sure the notion of port forwarding still makes sense in IPv6. Not in the NAT sense, at least: NAT is a typically IPv4 technology that goes hand in hand with private address ranges (like 192.168.x.y). All of this is a workaround to slow down the exhaustion of available IPv4 addresses. In IPv6 there is no risk of exhaustion, so ISPs assign a block of 2^64 IPs to each subscriber. That's plenty to work with.
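    To put that last point in numbers, here is a quick sketch of how many addresses a single /64 delegation contains (plain arithmetic, nothing provider-specific):

```python
# Size of one IPv6 /64 prefix: addresses are 128 bits wide,
# so a /64 delegation leaves 64 bits for the host part.
host_bits = 128 - 64
addresses = 2 ** host_bits
print(addresses)  # 18446744073709551616 addresses per subscriber
```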


  • As for exposing the SSH server by default, I no longer know what the situation is today, but not so long ago Debian enabled it. Well, you had to tick an option like "usual system servers" or something along those lines at install time. But the server was configured by default to accept password logins, which was not great.

    To me, the biggest danger is small devices like cameras. How many get unboxed, plugged in, and left there with their factory "admin1234" password? As long as the camera sits behind an IPv4 NAT, the danger is limited. But if it becomes publicly reachable, it's a lot less funny.
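    To illustrate, here is a hypothetical nftables fragment for the router that keeps such a device reachable from the LAN but drops unsolicited inbound IPv6 from the internet (the interface name "wan0" and the camera's address are made up):

```
# /etc/nftables.conf fragment (sketch; "wan0" and the address are placeholders)
table inet filter {
    chain forward {
        type filter hook forward priority 0; policy accept;
        # Allow replies to connections the camera initiated itself
        ct state established,related accept
        # Drop unsolicited inbound IPv6 traffic from the WAN to the camera
        iifname "wan0" ip6 daddr 2001:db8::cafe drop
    }
}
```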





  • I had a quick look at resolv.conf’s manpage on Debian and I think @daddy32@lemmy.world’s suggestion of adding a second nameserver would actually work:

    nameserver Name server IP address
        Internet address of a name server that the resolver should query, either an IPv4 address (in dot notation), or an IPv6 address in colon (and possibly dot) notation as per RFC 2373. Up to MAXNS (currently 3, see <resolv.h>) name servers may be listed, one per keyword. If there are multiple servers, the resolver library queries them in the order listed. If no nameserver entries are present, the default is to use the name server on the local machine. (The algorithm used is to try a name server, and if the query times out, try the next, until out of name servers, then repeat trying all the name servers until a maximum number of retries are made.)
    

    According to the doc, the resolver will try each name server in order until one is successful.
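    Concretely, the fallback described above would look like this in /etc/resolv.conf (both addresses below are examples; substitute your Pi-hole's IP and a public resolver of your choice):

```
# /etc/resolv.conf — addresses are illustrative
# Pi-hole first (resolves local names), public resolver as fallback
nameserver 192.168.1.2
nameserver 9.9.9.9
```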


  • fendrax@jlai.lu (OP) to Selfhosted@lemmy.world · Running DNS server in Docker · edited · 7 months ago

    Sorry, I was unclear: I use dnsmasq as the single source of truth. In its DHCP config, I set machine names, routes, and so on. And it is precisely because this dnsmasq is the DHCP server that it knows how to resolve the names of the devices it configured. Pi-hole forwards all DNS requests to dnsmasq. Now if I run two instances of dnsmasq, only one can be the DHCP server, and the other won't know how to resolve local names unless it uses the first dnsmasq as its upstream. But in a scenario where that first dnsmasq instance is down, we are back to square one.


  • My goodness, that’s some impressive responsiveness ^^

    I guess I see your point. But then the problem shifts to the upstream dnsmasq instance, which acts as DHCP + DNS for the local devices. That is the server ultimately able to resolve local names.

    I don't think it's doable to have two instances of dnsmasq that can resolve local names interchangeably. That would require two DHCP servers to have authority on the same network. But I'm no expert, so I may be missing something obvious.


  • For some reason, I am only seeing this comment thread now, so sorry for the late response.

    Thanks for those valuable details. But I am still a bit confused. I understand why you are saying that Pi-hole should be the only DNS server handling requests sent by LAN devices (including the machine hosting the DNS). That's because it is the only one that can resolve local names (well, it's actually its upstream dnsmasq, running as a sibling container, that does that, but that's a minor detail).

    But then you say there should be another DNS server to solve my problem. If I put two nameserver entries in /etc/resolv.conf, one being Pi-hole and the other my ISP's DNS, clients may end up querying either of them. When the ISP's is used, it will fail to resolve local names. I guess there is a way to let the client try the other server after a failure, but it adds some undesirable latency.

    Sorry if I misunderstood your point, but after reading the first comments I was quite convinced by the idea of adding a second nameserver entry to /etc/resolv.conf. Your explanations convinced me otherwise, and now I have the impression that I can't really solve my initial problem in a reliable way.
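    If the latency of falling back is the main blocker, glibc's resolver options can at least bound it; a sketch (addresses and values are illustrative):

```
# /etc/resolv.conf — fail over after 1 second instead of the
# default 5, and make only a single round of attempts
nameserver 192.168.1.2
nameserver 9.9.9.9
options timeout:1 attempts:1
```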


    Well, I have not really thought about why. I guess it's partly old habits from running services on the host with systemd (my migration to Docker is recent and still a work in progress). But I'd like to keep being able to resolve the names of local devices on my network when connected to the host over SSH. Is that still inherently wrong? I will implement the secondary DNS as a fallback. I am hoping to get rid of the issue that way.


  • fendrax@jlai.lu (OP) to Selfhosted@lemmy.world · Running DNS server in Docker · edited · 7 months ago

    Yes, others have suggested something similar. I'll do that first because it is easy. Monitoring-wise, I should already be covered, but since Prometheus runs on the same server, it was down during the outage. There is room for improvement, for sure! I have a couple of RPis on my network that I can leverage for better monitoring.
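    A minimal version of that idea is a crontab entry on one of the separate RPis that probes the Pi-hole and logs a failure (the IP and test domain below are placeholders):

```
# crontab on a separate RPi — check every 5 minutes that the
# Pi-hole at 192.168.1.2 still answers DNS queries
*/5 * * * * dig +time=2 +tries=1 @192.168.1.2 example.com >/dev/null || logger -t dns-check "Pi-hole DNS check failed"
```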


  • Your suggestion looks similar to this other comment and makes sense. I’ll try that!

    I have never managed to wrap my head around DoH and DoT but this is on my todo list ^^

    I didn’t know dnsmasq has an adblock plugin, I’ll have a look. Originally, I was using dnsmasq alone (running on bare metal). Then I migrated to docker and added pi-hole for ad blocking. I tried to get rid of dnsmasq but pi-hole’s embedded DHCP is not as configurable as dnsmasq’s and I could not address my use case.

    Thanks a lot for your time!



  • Yeah, that was my plan B. To be honest, I was not super confident that it would work when I put this setup together, because of the “host uses a container as DNS and docker uses the host as DNS” kind of circular dependency.

    But people do use Docker for DNS servers, so it has to work, right? That's where I'd like to understand where I'm wrong. I'm fine with running Pi-hole and dnsmasq on the host as long as I understand why this is not doable in Docker.

    Thanks for your input, though. That’s helpful.


  • In both the pi-hole (exposed on the host) and dnsmasq (used as upstream by pi-hole) containers:

    # Generated by Docker Engine.
    # This file can be edited; Docker Engine will not make
    # further changes once it has been modified.
    
    nameserver 127.0.0.11
    options ndots:0
    
    # Based on host file: '/etc/resolv.conf' (internal resolver)
    # ExtServers: [host(127.0.0.1)]
    # Overrides: []
    # Option ndots from: internal
    

    So they are pointing to docker’s embedded DNS, itself forwarding to the host.
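    One way to sidestep that circular dependency is to pin the DNS containers' own upstream lookups to a public resolver, so they never depend on the host's resolv.conf. A sketch, assuming a Compose setup (service name, image, and resolver address are placeholders):

```yaml
# docker-compose.yml fragment (hypothetical)
services:
  pihole:
    image: pihole/pihole
    # The container itself resolves via a public server rather than
    # the host, whose resolv.conf points back at this very container.
    dns:
      - 9.9.9.9
    ports:
      - "53:53/udp"
      - "53:53/tcp"
```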