Hey there, I'm looking into setting up a DNS server in my homelab. I would like something like this:
- Server in Docker on my Proxmox Server
- Server in Docker on my NAS and
- Server in my “Cloud” Network
Do you guys have any recommendations on how I could accomplish this? Otherwise I will just use Pi-hole with sync again or something like it :)
AdGuard Home
Two Pi-hole servers. One is hosted via Docker on my primary file server and the other is hosted in a Hyper-V VM on my sole Windows box. The VM one is also my DHCP server.
I had this setup a couple of months ago, worked great with gravity-sync :)
Unbound on my OPNsense firewall. I don’t have advice for you, do you have some specific goals besides just having a DNS?
Not really, just fed up with remembering IP addresses :)
Look at a reverse proxy instead. While you can do what you’re after with DNS, a bunch of the reverse proxy systems will automatically deal with SSL certificates, and there are even a couple that eliminate essentially all configuration outside of your docker file. Like, add a new container and it automatically configures appName.domain.tld with SSL assigned. And if you ever decide to expose that address to the Internet, a reverse proxy makes that simple and provides some security options as well.
I use Caddy for my reverse proxy running from my OPNsense firewall, but if you want the automation with docker there are better options.
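If you go the Caddy route, an internal service is only a few lines of config. A minimal Caddyfile sketch, where the hostname and upstream address/port are placeholders and `tls internal` uses Caddy's built-in self-signed CA rather than a public certificate:

```
# app.home.lan and 192.168.1.20:8080 are placeholders for your own service
app.home.lan {
    # internal CA cert; swap for a real domain + ACME if you expose it publicly
    tls internal
    reverse_proxy 192.168.1.20:8080
}
```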
I will give this one a try, had an eye on it before asking already :D
Solid choice. It’s been my go-to DNS+DHCP solution for over 5 years and has never let me down. Also a fan of DNSDist+PowerDNS, but for most environments (especially home/lab), Technitium wins by a mile.
I use Pi-hole for its good filtering, selective filtering, statistics, and logging capabilities, and Technitium DNS as its upstream for its superior capability in defining DNS records, and because I can use a DoH DNS provider with it.
deleted by creator
You don’t need to. But for the sake of easier maintenance you’ll want to containerize it (Docker/Podman), and be careful not to overload your Pi-hole device, because then the DNS service will go away or suffer large delays (especially if the device runs out of RAM and swaps a lot).
Besides, my experience has been that swapping to USB storage on a Raspberry Pi is unstable enough to cause a kernel panic every few months.
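For what it’s worth, a rough docker-compose sketch of a containerized Pi-hole with a memory cap so it can’t drag the host into swapping; the timezone, password, and limit are placeholders, and the exact environment variables depend on the image version:

```yaml
services:
  pihole:
    image: pihole/pihole:latest
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "80:80/tcp"              # web UI
    environment:
      TZ: "Europe/Berlin"        # placeholder
      WEBPASSWORD: "changeme"    # placeholder; variable name may differ per image version
    volumes:
      - ./etc-pihole:/etc/pihole
    mem_limit: 512m              # cap memory so a runaway container can't starve the host
    restart: unless-stopped
```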
I run Unbound on my OPNsense firewall.
I think I will try Unbound too :) Thank you!
I have a total of 4 (for now) DNS servers: 2 within the lab (AlmaLinux on Proxmox), 1 running on OPNsense and 1 on a VPS (AlmaLinux). All are running Unbound + dnscrypt-proxy for external resolution; the AlmaLinux ones also have unbound-exporter for telemetry.
The pair in the lab also run Keepalived with 2 VIPs for an active/active setup (VIP1 active/backup for DNS1/2, VIP2 active/backup for DNS2/1). All servers target the VIP addresses for resolution with
options timeout:1 attempts:3 rotate
in the /etc/resolv.conf file.
For internal DNS records I run FreeIPA (also used for server/LDAP auth) with zone transfers to all Unbound instances. This way there’s no dependency on FreeIPA and the lab being online for DNS resolution of internal records, and it avoids the need to forward those queries to FreeIPA.
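In case it helps anyone replicating this, a rough sketch of what an Unbound secondary zone pulled via zone transfer can look like; the zone name, FreeIPA server IP, and zonefile path are placeholders:

```
auth-zone:
    name: "lab.example.internal."   # placeholder internal zone
    primary: 192.0.2.10             # placeholder: FreeIPA server allowing AXFR (older Unbound uses "master:")
    fallback-enabled: no
    for-downstream: yes             # answer client queries from the transferred zone
    for-upstream: yes
    zonefile: "/var/lib/unbound/lab.example.internal.zone"
```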
All instances also have a scheduled service to download and apply a blocklist from https://github.com/StevenBlack/hosts
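Something along these lines, I assume; a sketch of a fetch-and-convert step that could run from a cron job or systemd timer. The raw URL, output path, and the corresponding include: line in unbound.conf are assumptions, and the reload command requires remote-control to be enabled:

```sh
#!/bin/sh
# Fetch the StevenBlack hosts list and turn "0.0.0.0 domain" entries into
# unbound local-zone statements; unbound.conf is assumed to include this file.
curl -sSfL https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts \
  | awk '$1 == "0.0.0.0" && $2 != "0.0.0.0" { printf "local-zone: \"%s\" always_nxdomain\n", $2 }' \
  > /etc/unbound/blocklist.conf
unbound-checkconf && unbound-control reload
```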
I would like to look into Unbound views for the OPNsense instance, to be able to resolve different records depending on whether the source is the IoT/Untrusted zone or the LAN/Trusted zone. For now I have BIND tied to specific IPs used by IoT/Untrusted exclusively, without access to resolve the lab zones.
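For reference, the Unbound side of that would look roughly like this (subnet, view name, and zone are placeholders); whether the OPNsense GUI exposes it is another question:

```
server:
    # map the IoT/Untrusted subnet to a restricted view (placeholder subnet)
    access-control-view: 192.168.50.0/24 iot

view:
    name: "iot"
    # refuse resolution of the internal lab zone for clients in this view (placeholder zone)
    local-zone: "lab.example.internal." refuse
```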
Technitium
Two Pi-hole Docker containers on two different servers, plus the OPNsense DNS plugin. Fallback: NextDNS. As an alternative, AdGuard is also a good DNS.
I have a philosophy of sticking close to reference implementations and upstream in the homelab because it forces me to learn principles rather than implementations. I use bind9, but that upstreams to pihole on a different port. It is hard to configure for sure, editing zone files in vi, but I learn a lot analyzing the reference syntax to understand features. I also use isc-dhcp-server for DHCP, again manually populating dhcpd.conf.
BIND can peer with other instances; right now it is its own IPAM VM on my Proxmox with BIND/isc-dhcp/Pi-hole in Docker, but I’m looking at dropping some hardware at a family member’s for a site 2.
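For anyone curious what that upstreaming looks like, a minimal named.conf sketch; the Pi-hole address and port 5353 are placeholders:

```
// Hand anything BIND can't answer locally to Pi-hole on a non-standard port
options {
    forwarders { 127.0.0.1 port 5353; };  // placeholder: Pi-hole listening on 5353
    forward only;
};
```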
I use Blocky as my DNS server.
Two Pi-hole servers, one on the VM VLAN, one on the device VLAN, with OPNsense delivering them both via DHCP options. I sometimes update lists, like yearly… at best. They’ve been there over 7 years. Calling them robust is correct. The hypervisors are 3 Proxmox servers in a cluster using Ceph, on 3rd-gen Intel NUCs. Less than 80 W combined with all VMs. Also 8 years old, no failures, but tolerant of them.
> I sometimes update lists, like yearly… At best
Don’t they get updated automatically?
I think you can configure them to do so :)
IIRC, when I check online I find that whatever was configured is dead or no longer the cool choice.
Whatever it is, I barely touch it and it works great. Very happy.
My home lab is small so I just run BIND on one server.
CoreDNS in Docker to mix things up here a little.
I’m using leng in a dedicated LXC container in Proxmox:
https://github.com/cottand/leng
I’m using defaults + some local DNS lookups. Works fine for my use, and lighter than Pi-hole. No web UI.