How are y’all managing internal network certificates?
At any point in time, I have between 2 and 10 services, often running on a network behind an nginx reverse proxy, with some variation in how I handle certificates, none of it ideal. Here’s what I’ve done in the past:
- set up a CA on the CLI using OpenSSL
  - somewhat works, but importing CAs into phones was a hassle
- self-sign a single cert per service
  - works; very kludgy, very easy
- expose the HTTP port only on the lo interface for sensitive services (e.g. the Pi-hole admin), and SSH local tunnel in when needed
I see Easy-RSA seems to be more user-friendly these days, but I haven’t tried it yet.
I’m tempted to try this setup for my LAN-facing services (as opposed to tunnel-only ones, such as Pi-hole):
- Get a Let’s Encrypt cert for a single public DNS domain (e.g. lan.mydomain.org)… not sure about a wildcard cert.
- use that Let’s Encrypt cert on the nginx reverse proxy and expose the various services as sub-URLs (e.g. lan.mydomain.org/nextcloud); rough sketch below
Curious what y’all do and if I’m missing anything basic.
I have no intention of exposing these outside my local network, and prefer as few client-side changes as possible.
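A minimal sketch of what I mean on the nginx side (untested; cert paths assume certbot’s default layout, and the upstream port is a placeholder):

```
server {
    listen 443 ssl;
    server_name lan.mydomain.org;

    # certbot's default install location for this domain
    ssl_certificate     /etc/letsencrypt/live/lan.mydomain.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/lan.mydomain.org/privkey.pem;

    # each service hangs off a sub-URL of the one certified hostname
    location /nextcloud/ {
        proxy_pass http://127.0.0.1:8080/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

One caveat I’m aware of: some apps (Nextcloud included) need their own base-URL settings adjusted to live under a sub-path, which is part of why people often go per-subdomain with a wildcard instead.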
You should be able to do wildcards with ACME v2 and a DNS challenge: https://community.letsencrypt.org/t/acme-v2-and-wildcard-certificate-support-is-live/55579
You would manage internal DNS and would never need to expose anything, since validation is all done through a TXT record.
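With certbot, the manual flavour of that looks roughly like this (a sketch; the domain is a placeholder, and certbot pauses to print the _acme-challenge TXT records you need to create):

```
certbot certonly --manual --preferred-challenges dns \
  -d 'lan.mydomain.org' -d '*.lan.mydomain.org'
```

For unattended renewals you’d want one of the DNS provider plugins (certbot-dns-cloudflare and friends) instead of --manual, since the TXT record changes on every renewal.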
You could also use something like Traefik to manage the cert generation and reverse proxying:
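For example, roughly (a sketch assuming the Docker provider and Cloudflare for the DNS challenge; names, email, and ports are placeholders):

```
# traefik.yml (static config)
entryPoints:
  websecure:
    address: ":443"

certificatesResolvers:
  letsencrypt:
    acme:
      email: you@example.com
      storage: /letsencrypt/acme.json
      dnsChallenge:
        provider: cloudflare   # expects e.g. CF_DNS_API_TOKEN in Traefik's environment

providers:
  docker: {}
```

Then each container just gets labels pointing at the resolver:

```
labels:
  - "traefik.http.routers.nextcloud.rule=Host(`nextcloud.lan.mydomain.org`)"
  - "traefik.http.routers.nextcloud.entrypoints=websecure"
  - "traefik.http.routers.nextcloud.tls.certresolver=letsencrypt"
```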
deleted by creator
Fellow Caddy user here. I’d love to set that up. Can you share your Caddyfile or at least the important snippets?
deleted by creator
Oh, that’s quite simple! I’ve just been using nginx; I’ll have to have a look at Caddy, thank you!
I have a public wildcard DNS entry (*.REMOVEDDOMAIN.com) on Cloudflare for my primary domain that resolves to 192.168.10.120 (my Caddy host).
Caddyfile
    {
        email EMAILREMOVED@gmail.com
        acme_dns cloudflare TOKENGOESHERE
    }

    portal.REMOVEDDOMAIN.com {
        reverse_proxy 127.0.0.1:8081
    }

    speedtest.REMOVEDDOMAIN.com {
        reverse_proxy 192.168.10.125:8181
    }
Thanks! I’ll try this out.
Certbot in cron if you’re still managing servers.
I’m using cert-manager in kube.
I haven’t manually managed a certificate in years… Would never want to do it again either.
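If anyone is curious, the cron side really is a one-liner, something like this (the reload hook is an assumption about what sits in front):

```
# /etc/cron.d/certbot -- certbot only renews certs that are actually close to expiry
0 3 * * * root certbot renew --quiet --post-hook "systemctl reload nginx"
```

Many distro packages already ship a systemd timer that does the same thing, so check before adding your own.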
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
- DNS: Domain Name Service/System
- HTTP: Hypertext Transfer Protocol, the Web
- IP: Internet Protocol
- SSH: Secure Shell for remote terminal access
- SSL: Secure Sockets Layer, for transparent encryption
- SSO: Single Sign-On
- TLS: Transport Layer Security, supersedes SSL
- VPN: Virtual Private Network
- nginx: Popular HTTP server
Is nginx an HTTP server? I use it as a proxy.
It is primarily an HTTP server; its ability to act as an HTTP reverse proxy is a product of that. Apache can do the same thing, it’s just less common to see it used that way.
Oh alright, thank you!
And to make your terminology life a bit harder, the distinction between forward and reverse proxy matters: reverse proxies sit in front of web servers, while forward proxies sit in front of systems or networks.
Reverse proxies pretend to be the web server they’re terminating traffic for; programs you may see doing this include nginx, Apache (httpd), lighttpd, and HAProxy.
Forward proxies need to be told where to go by a web browser, and will then (if the ACL allows) connect the browser to the final server, often (but not always) filtering the traffic along the way. In some networks, the forward proxy can be seen as something like a firewall, but specifically for web traffic. The only forward proxy I know of offhand is Squid, but I imagine many more exist that I don’t remember.
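A quick way to see the difference from the client’s side (placeholder hosts):

```
# reverse proxy: the client just talks to the service's name; the proxy is invisible
curl https://service.example.com/

# forward proxy: the client is explicitly told to send its traffic via the proxy
curl -x http://squid.lan:3128 https://example.com/
```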
I use the linuxserver.io SWAG container. It runs an nginx reverse proxy and does certificate management for you. It’s a pretty great minimal-config option.
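For anyone curious, the compose file is roughly this (from memory, so treat it as a sketch; domain, timezone, and DNS plugin are placeholders):

```
services:
  swag:
    image: lscr.io/linuxserver/swag
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
      - URL=mydomain.org
      - SUBDOMAINS=wildcard      # or a comma-separated list
      - VALIDATION=dns           # http validation also works if port 80 is reachable
      - DNSPLUGIN=cloudflare
      - EMAIL=you@example.com
    volumes:
      - ./swag:/config
    ports:
      - 443:443
      - 80:80
    restart: unless-stopped
```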
Probably not the ‘recommended’ way, but I use a self-signed cert for each service I’m running, generated dynamically on each run, with nginx as a reverse proxy. Then I use HAProxy and DNS SRV records to connect to each of those services. HAProxy uses a wildcard cert (*.domain.tld) for the real domain and uses host mapping for each subdomain (service1.domain.tld).
This way every service has its traffic encrypted between HAProxy and the actual service, and the traffic is encrypted with a browser-valid cert on the frontend. I only need to actually manage one cert: the HAProxy one. It’s worked great for me for a couple of years now.
Edit: I’m running this setup for about 50 services, mostly accessed over LAN/VPN.
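The HAProxy side of a setup like this looks roughly as follows (a sketch with placeholder names; I’m showing a plain server line, but HAProxy’s resolvers/server-template machinery is what you’d reach for to pull targets from SRV records):

```
frontend https_in
    # one browser-valid wildcard cert presented to clients
    bind *:443 ssl crt /etc/haproxy/certs/wildcard.domain.tld.pem
    acl is_service1 hdr(host) -i service1.domain.tld
    use_backend service1 if is_service1

backend service1
    # the backend speaks HTTPS with its self-signed cert, so skip verification here
    server svc1 service1.internal.lan:8443 ssl verify none check
```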
I just use Caddy. It does everything, both local CA and Let’s Encrypt.
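The local CA part is basically one directive, e.g. (placeholder hostname):

```
# Caddyfile: Caddy's own internal CA signs this site's cert instead of Let's Encrypt
pihole.lan {
    tls internal
    reverse_proxy 127.0.0.1:8080
}
```

Clients still need to trust Caddy’s root cert to avoid warnings, same as any private CA.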
I use NPM (Nginx Proxy Manager) to handle all my reverse proxying and SSL certs. Authelia easily ties in to handle my SSO. What a time to be alive!
I went with the OpenSSL CA as cryptography has been a weakness of mine and I needed to tackle it. Glad I did, learned a lot throughout the process.
Importing certs is a bit of a pain at first, but I just made my public root CA cert valid for 3 years (maybe 5, I can’t remember) and put that public cert on a file share accessible to all my home devices. From each device I go to the file share once, import the public root CA cert, and done. It’s a one-time-per-device pain, so it’s manageable in my opinion.
Each service gets a 90-day cert signed by the root CA and imported into Nginx Proxy Manager to serve up for the service (wikijs.mydomain.io).
For anything externally exposed I use Let’s Encrypt for cert generation (within NPM), and internally I use the OpenSSL setup.
If you document your process and you’ve done it a few times, it gets quicker and easier.
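For reference, the rough shape of the commands (filenames and the SAN are placeholders, and the <(...) bit assumes bash; your exact extensions/config will differ):

```
# one-time root CA, valid ~3 years
openssl genrsa -out rootCA.key 4096
openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1095 \
  -subj "/CN=Home Root CA" -out rootCA.crt

# per-service key + CSR, then a 90-day cert signed by the root
openssl genrsa -out wikijs.key 2048
openssl req -new -key wikijs.key -subj "/CN=wikijs.mydomain.io" -out wikijs.csr
openssl x509 -req -in wikijs.csr -CA rootCA.crt -CAkey rootCA.key -CAcreateserial \
  -days 90 -sha256 -out wikijs.crt \
  -extfile <(printf "subjectAltName=DNS:wikijs.mydomain.io")
```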
You can self-host an ACME server, which lets you use certbot to do automatic renewals even for private, internal-only certs. I don’t know if it would work with NPM. I plan to test that out at some point in the future, but my current setup works & I’m not ready to break it for a maybe yet :P
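For anyone wanting to try it, smallstep’s step-ca is one option for the internal ACME server; roughly (hostnames are placeholders, and I haven’t verified any of this against NPM):

```
# on the CA host: set up step-ca and enable an ACME provisioner
step ca init
step ca provisioner add acme --type ACME

# on a client: point certbot at the internal directory URL and trust the CA's root
REQUESTS_CA_BUNDLE=/path/to/root_ca.crt certbot certonly --standalone \
  --server https://ca.internal.lan/acme/acme/directory \
  -d service.internal.lan
```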
Didn’t know you could do this. Interesting!
If you’re running behind OPNsense/pfSense, I’ve found the easiest solution for internal-only SSL is to use the router to create the certificate chains. Yes, you’ll have to import one CA cert on each end-user device, but only the one; then you can crank out internal certs without any HTTPS warnings or domain constraints/challenges.
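For the Linux boxes, the per-device import step is just this (Debian/Ubuntu shown; Windows, macOS, phones, and Firefox each have their own trust stores):

```
sudo cp homelab-root-ca.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
```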
As an alternative to this, you can also use mkcert to roll out your own internal certificates.
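mkcert is about as short as it gets (names are placeholders):

```
# creates a local CA and installs it into the system (and browser) trust stores
mkcert -install
# issues a cert + key for whatever names you list, wildcards and IPs included
mkcert pihole.lan "*.home.arpa" 192.168.1.10
```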
I use Let’s Encrypt and OVH DNS for my certs; I can get a wildcard for multi-service nginx or a single cert for the places that need one. The other thing I want to look at is the Smallstep CA; I already use that for SSH certs.
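If it helps anyone, the SSH-cert part with step is short; something like this, assuming the client has already been bootstrapped against the CA (key ID and path are placeholders):

```
# generates a key pair and has step-ca sign an SSH certificate for it
step ssh certificate me@example.com ~/.ssh/id_ecdsa
```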
This video was helpful and simple. It’s no longer any hassle. I later implemented the same using my own domain and Cloudflare.
Using this as well. Works pretty solid so far.
I run FreeIPA internally, which handles all internal HTTPS certs (as well as nice things like handling non-sudo auth so I can just SSH to machines from an already-authed machine without a password prompt, and doing LDAPS for internal things that support it).
For external web, I have a single box running nginx as a reverse proxy that’s web-exposed. That nginx box has Let’s Encrypt certs for the public web stuff. The nginx RP has the internal CA on it and will validate the internal HTTPS certs (no mullet SSL here!)
I also do different domains for internal vs external, but that’s not a requirement for a setup like this.
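The "no mullet SSL" bit in nginx terms is roughly these directives on the proxied locations (hostnames and paths are placeholders for whatever the internal CA issues):

```
location / {
    proxy_pass https://service.internal.example;
    # re-encrypt to the backend and verify its cert against the internal CA
    proxy_ssl_verify on;
    proxy_ssl_trusted_certificate /etc/nginx/internal-ca.crt;
    proxy_ssl_name service.internal.example;
}
```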
I use Caddy with the Cloudflare DNS plugin for Let’s Encrypt DNS-based challenges; it should work for wildcards too, but I only have a couple of subdomains so I’ve never tried that. My DNS entries are public but point at private IP ranges, e.g. nc.PRIVATEDOMAIN.COM resolves to 192.168.1.20, where Caddy sends the traffic to my Nextcloud docker.
deleted by creator
I’m running all my services behind Traefik, which handles getting a wildcard cert for all my (sub)domains with Let’s Encrypt.
Once I figured out how to write the config properly, I haven’t had to touch it again; pretty happy with the result.