I see a lot of people here use some form of remote access tool (VPN/Tailscale) to reach their home network when not at home. I can’t really do this because my phone (iOS) can only have one VPN profile active at a time, and I often need that for other stuff.
So I chose to expose most web-based services to the public internet, behind Authelia. But I don’t know how safe this is.
What I’m really unsure about are things like Vaultwarden: while the web interface is protected by Authelia (even with 2FA), its API endpoints need a bypass rule for direct access, otherwise the mobile app won’t work. It feels like this negates everything I’ve done so far.
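To be concrete, the bypass I mean is an Authelia access-control rule roughly like this (a sketch; vault.example.com is a placeholder and the exact resource regexes depend on which guide you follow):

```yaml
access_control:
  default_policy: deny
  rules:
    - domain: vault.example.com
      policy: bypass          # let the mobile app hit the API without Authelia
      resources:
        - "^/api/.*$"
    - domain: vault.example.com
      policy: two_factor      # everything else still goes through Authelia + 2FA
```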
I have my Nextcloud server exposed and I keep it up to date, patched, etc., but I’d love the extra protection of a VPN. Just … I don’t think mobile apps work very well with that unless I keep my phone constantly connected to the VPN, right? Or is there a smart way to do that?
I believe you can configure your phone to only route traffic to your server through the VPN.
Nice! … how exactly, I wanna know :)
In my case, I’m using the OpenVPN server on my router. When I set it up, there was an option for the client to only use the VPN for local traffic, so it’s baked into the config file on my phone. Works flawlessly.
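If you ever want to set it up by hand instead, the client config ends up with something roughly like this (a sketch; vpn.example.com and 192.168.1.0/24 stand in for your own server and home subnet):

```
client
dev tun
proto udp
remote vpn.example.com 1194
# ignore the default route pushed by the server...
route-nopull
# ...and only send the home LAN through the tunnel
route 192.168.1.0 255.255.255.0
```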
This thread mentions an OVPN client that can do split tunneling, so all you’d do is whitelist your server on the Android phone and turn it on.
ProtonVPN has split tunneling too, for instance, but no self-hosting option.
Yea that’s basically the reason why I can’t use a VPN.
In fact, there isn’t really a problem with leaving your phone connected to the self-hosted VPN all the time. If split tunneling works properly, only traffic destined for your home network actually goes through the VPN; everything else bypasses it.
But in my case, I already need to be connected to another VPN for most of the day, so I can’t really go this route.
Is the existing one WireGuard? Because on Android you can use the same WireGuard interface and just add another peer. You’d also have to use the same IP range and key on your VPN.
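Very roughly, the phone config would end up as one interface with two peer blocks, each only routing its own subnets (everything here is a placeholder, and the existing VPN would of course have to be WireGuard too):

```ini
[Interface]
PrivateKey = <phone-private-key>
Address = 10.0.0.2/32

[Peer]
# the VPN you already use
PublicKey = <existing-vpn-public-key>
Endpoint = vpn.existing.example:51820
AllowedIPs = 10.10.0.0/16

[Peer]
# your self-hosted home VPN
PublicKey = <home-server-public-key>
Endpoint = home.example.com:51820
AllowedIPs = 192.168.1.0/24
```

AllowedIPs doubles as the routing table, so only traffic for those subnets goes to each peer.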
Not sure of your setup, but I use OpenVPN on my Android and have set it to whitelist apps, so only the apps that need the VPN use it. You can also go the opposite way and blacklist, so all apps except those you specify use the VPN.
Well, I’m trying to discuss what to do when you can’t use a VPN, so….
Does a VPN endpoint redirect a user to a reverse proxy or something? I’ve considered running an authenticated onion service to access some of the less resource-intensive services (Gitea, etc.).
I use a reverse proxy and client-certificate authentication for anything I expose. That requires me to pre-install the client certificate on all of my devices first, but afterwards they can connect freely via a web browser with no further prompting to authenticate. Anybody without the client certificate gets a 403 before they even get past the proxy.
There are limitations to this, and there’s overhead in managing a CA and the client certificates for your devices.
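On nginx, for example, the relevant bits look something like this (a sketch; paths and names are placeholders, and other proxies like Caddy or Traefik have equivalents):

```nginx
server {
    listen 443 ssl;
    server_name app.example.com;

    ssl_certificate        /etc/nginx/certs/server.crt;
    ssl_certificate_key    /etc/nginx/certs/server.key;

    # CA that signed your client certificates
    ssl_client_certificate /etc/nginx/certs/client-ca.crt;
    ssl_verify_client      on;   # requests without a valid client cert are rejected at the proxy

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
```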
I’m a network engineer with >15 years of experience in IT. It’s never “safe”, not even in corporate IT. You’re a home user, so it’s less likely you’ll be targeted, but bad actors do comb the internet for known vulnerabilities. Patch your shit, limit exposure, enable MFA on everything. I don’t run one, but I feel slightly sketched out not being behind something like a Palo Alto. Then again, I’m just a small potato in a big sea and I patch everything.
There will always be risk. Just do what feels right for you. Follow best practices.
It all just depends on how much you trust the app, and how you’ve set things up for when something does go wrong. Not every container needs to be able to reach other containers on the system or the LAN, or to have access to whatever folder, read/write permissions, etc. (see the Compose sketch below).
A good practice for things like Vaultwarden would be to whitelist only the country/state you’re in, to minimize your attack surface.
fail2ban or CrowdSec can also help with all the rats sniffing around.
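On the container point, for example, with Docker Compose you can keep the exposed service on its own network so it simply can’t reach containers it has no business talking to (names and images are placeholders):

```yaml
services:
  vaultwarden:
    image: vaultwarden/server
    networks: [exposed]
    volumes:
      - ./vw-data:/data
  internal-app:
    image: example/internal-app   # something that should never be reachable from the exposed app
    networks: [internal]

networks:
  exposed:
  internal:
    internal: true   # no external connectivity for this network
```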
Limited container access is a good point. Noted.
I think the app itself is fine, but would exposing the API give attackers a means to brute-force their way in? Sorry, no expert here.
The official wiki talks about securing password login with fail2ban. I guess this is not needed in my case, as it’s handled at the Authelia level.
It’s only bad practice if you don’t keep up on vulnerabilities/patching, don’t have any type of monitoring or ability to detect a potential breach, etc.
The nice thing about tucking everything behind a VPN is you only have one attack surface to really worry about.
My understanding is that it’s just not as secure. Any open port can be considered a potential way in for an attacker. That doesn’t mean it will 100% happen and you will get hacked, but Tailscale, for example, gives you remote access without leaving those ports open. Basically, exposing services to the internet isn’t inherently bad, it’s just not as secure as using tools like Tailscale.
I have many of my services open to the internet, but behind Authelia with 2FA and a reverse proxy. I haven’t had a security issue yet, and I’ve been running this way for a few years.
I think it’s pretty safe as long as you keep them up to date. I run backups weekly and do updates at least once a month.
Using GeoIP restrictions will also help a lot, because you can block most of the scanner bots by denying connections from outside your geographic region. These bots detect which services are open to the internet and add them to databases like Shodan. If a security flaw is found in one of those services, hackers will search those databases for servers running them and try to exploit them. If you aren’t in those databases, they can’t easily find you before you’re able to patch.
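With nginx, for instance, one way to do it is the GeoIP2 module plus a MaxMind country database (a sketch under those assumptions; paths and the country code are placeholders, and the same idea also works at the firewall level):

```nginx
geoip2 /var/lib/GeoIP/GeoLite2-Country.mmdb {
    $geoip2_country_code country iso_code;
}

map $geoip2_country_code $deny_by_country {
    default 1;   # deny everything by default
    US      0;   # replace with your own country code
}

server {
    # ... your existing listen / ssl / proxy settings ...
    if ($deny_by_country) {
        return 403;
    }
}
```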
Very good point regarding geoip, thanks!
If you use strong passwords and keep an eye on your logs, you are no less safe than any other public-facing entity. I’ve had a bunch of services exposed since 2020, and so far no one has even bothered to brute-force the basic auth on Apache (though botnets take a run at SSH a few times a year).
I thought I was fine, until I installed IDS/IPS on my OPNsense box and noticed one of my servers trying to contact a malicious IP. Took everything offline that day, and now I keep publicly facing services on other people’s networks :~)
A number of people have touched on perimeter security, but you can also look at your internal network: are the exposed systems on VLANs, with firewall rules preventing connectivity from those systems back to the rest of your stuff that doesn’t need to be exposed? That can help cover you if a system is compromised through a bad config, a zero-day exploit, or whatever, by limiting the ability to move sideways through your network and exploit other systems. Depending on what you’re hosting, there may be zero requirement for your externally facing server to talk to most devices on your network, or the traffic could be one-way only (internal-facing to external-facing).
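On a Linux firewall that can be as simple as a one-way forwarding policy between the VLANs, something like this nftables sketch (interface names are placeholders):

```
table inet filter {
    chain forward {
        type filter hook forward priority 0; policy drop;

        ct state established,related accept    # replies to existing connections
        iifname "lan0" oifname "dmz0" accept   # trusted LAN may reach the exposed VLAN
        iifname "dmz0" oifname "lan0" drop     # exposed VLAN cannot initiate back into the LAN
    }
}
```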
Here’s how I solved the problem: https://blog.lchapman.dev/self-hosting-foundations/
Not free, but pretty cheap. Been doing it for a year or so and I’m happy with the solution.
If you don’t need to, then DON’T. I only expose my personal website, my CardDAV server, my Gitea instance, and my SSH server. Update them regularly.
If your application is insecure or old, keep it behind a VPN.
It’s not bad per se, but you really just need to understand the risks involved and have an idea of how to secure your services properly. I personally won’t expose anything if it doesn’t have some sort of centralized auth solution (LDAP preferred) and 2FA to better secure accounts.
It’s also good practice to have some way of mitigating brute-force attacks with something like fail2ban, and a way to outright block known bad IP addresses.
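As a starting point, a fail2ban jail for a reverse proxy’s auth failures can be as small as this (a sketch using the stock nginx-http-auth filter; point the filter and log path at whatever your setup actually logs):

```ini
# /etc/fail2ban/jail.local
[nginx-http-auth]
enabled  = true
port     = http,https
logpath  = /var/log/nginx/error.log
maxretry = 5
findtime = 10m
bantime  = 1h
```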