I’ve seen a lot of people saying things that amount to “those tech nerds need to understand that nobody wants to use the command line!”, but I don’t actually think that’s the hardest part of self-hosting today. I mean, even with a really slick GUI like ASUSTOR NASes provide, getting a reliable, non-NATed connection, with an SSL certificate, some kind of basic DDOS protection, backups, and working outgoing email (ugh), is a huge pain in the ass.
Am I wrong? Would a Sandstorm-like GUI for deploying Docker images solve all of our problems? What can we do to reshape the network such that people can more easily run their own stuff?
If you’re afraid of the CLI then you probably shouldn’t be hosting anything complex yourself. The CLI is one of the least complicated parts of server administration.
The hardest part is doing backups and updates. Repeat after me:
no backup, no pity,
updates neglected, compassion rejected.
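Since “no backup, no pity” is the rule, the bare minimum looks something like this. A minimal sketch with made-up paths (`/tmp/demo-data` stands in for your real data directory); a real setup would also rotate archives and copy them off the machine:

```shell
# Stand-in for your actual data directory -- substitute your real one
mkdir -p /tmp/demo-data && echo "hello" > /tmp/demo-data/notes.txt

# Timestamped archive; rotation and offsite copies are left to the reader
tar -czf "/tmp/backup-$(date +%F).tar.gz" -C /tmp demo-data

# Always verify the archive actually contains your files
tar -tzf "/tmp/backup-$(date +%F).tar.gz"
```

Dropping something like this into cron is easy; the part people neglect is the last line, actually testing that the backup restores.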
Dear Debian users: please also update your Debian version, not just your packages. Like… once a decade would be an improvement for many poor servers.
Haha, yeah, I totally have proper backups…
It’s not the command line that’s hard but the lack of proper documentation and tutorials that makes things hard.
Especially since it’s extremely rare to find documentation that isn’t overly verbose. Documentation written in a bottom-line-up-front style is a rarity.
man <command> is your documentation for the tool itself.
Do we actually need people afraid of CLIs to host anything? Sounds like a hassle.
Getting a decent VPS is pretty cheap. Email is the enormous problem. Even if your VPS provider allows outgoing email, your IP address will be flagged and blocked by mailservers everywhere for the crime of not being Google or Microsoft, or not having a full-time person working 24/7 to satisfy the people in charge of blacklists. You can pay someone else to send your email, but that’s going to cost you as much as, or more than, the VPS you’re using to host your entire app.
It’s actually rare these days that mail from my personal server (on a Linode/Akamai IP) is rejected, and I don’t even have DMARC set up, only SPF and DKIM. I just use my old gmail address as a backup for those rare situations.
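For reference, SPF and DKIM are just DNS TXT records on your domain. A sketch of what they look like in a zone file (the domain, IP, and `mail` selector here are placeholders, and the DKIM public key is whatever your signing tool generates):

```text
; Hypothetical zone snippet -- substitute your own domain, IP, and selector
example.com.                  IN TXT "v=spf1 a mx ip4:203.0.113.5 -all"
mail._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=<base64 public key>"
```

SPF says which hosts may send mail for the domain; DKIM publishes the public key that receivers use to verify your server’s signatures.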
Something like Zoho is only $12 a year per hosted email address.
How many outgoing emails are we talking about? Because there are a lot of free or cheap options for personal use and small businesses.
Technology is complicated. Period. Anything that “seems” simple is in reality extremely complicated underneath the hood. A GUI is nice as long as it works. But if for some reason it doesn’t, you’re shit out of luck.
Look at installing Gentoo, or Arch, or Alpine vs Ubuntu. There’s no technical reason we can’t make a Gentoo installation GUI. It’s just going to be very, very tedious. Orders of magnitude more tedious.
At the same time Gentoo allows you to customize WAAAAY more things during its install than Ubuntu.
So specifically for lemmy - yeah, we can probably make some sort of default AWS image where you just select it when spinning up a new VM and you’re up and running. But what if you want something slightly different? Maybe you prefer MySQL instead of Postgres. Or Apache instead of nginx, or maybe you want images hosted on a different machine. Suddenly it’s the install GUI author’s responsibility to support installing 10 different databases, or load-balancers, or something else, and each one has its own GUI options. Then someone else wants an 11th database added, and it has 10 more custom options… Oh, and now someone else is asking for a DigitalOcean image instead… oh, and now someone’s asking for a Docker image… You see where this is going.
The sad truth is that non-techy types will never want to host something themselves unless there’s a reason why doing so is better. I’m not just talking about better the way you and I think of better, either. Nobody really cares about privacy or security or ownership of data. A lot of people like to say those things matter but until it’s as easy to host your own email as signing up for gmail, and doing so provides all the fringe benefits you get with Google, you’re not going to get completely non-technical people self hosting.
You’re right, though. As part of this, there needs to be a way to have an all-in-one package that defaults to enabling the things you’re talking about. There are a lot of plug-n-play methods of self hosting any number of things, but the hard part of hosting is doing it right and securely.
The sad truth is that non-techy types will never want to host something themselves unless there’s a reason why doing so is better.
Not even techy types want it. It’s not a coincidence that SaaS offerings are viable in enterprise contexts. Why build a shit ton of knowledge and drag yourself through the mud of learning tons of different tools if you can as well pay someone who already has all that knowledge. Then you can use the free mental capacity to solve your actual problems.
The only reasons to self host are “paranoia” (no matter if warranted or not) and - which is the important thing for us self-hosters here - curiosity (or rather the drive to learn shit). We basically do it for the sake of doing it.
That’s true. Though I would sub paranoia with control.
I self host things because I want control. I want to be in control of when it gets updates and goes down. I want to be in control of how to fix it when it breaks. I want to be in control of my account and whether it’s backed up etc.
I thought of that as well but concluded that this is also a kind of paranoia. The SaaS providers promise you availability, security, etc., but we don’t believe them and want that in our own hands. So IMO we only want to be in control because we fear we could suddenly lose access or get betrayed. Which is a specific manifestation of paranoia.
Fair point.
It’s not even about gui.
If you want to self host, you get yourself a pile of software of community-level quality (i.e. “it works well until it doesn’t” is the best outcome) that you need to care about. This means constantly being involved - updating, maintaining, learning something, etc. - and honestly it’s time-consuming even for experienced sysadmins.

One-click would definitely lower the bar to entry, but I have to admit the concept makes me uncomfortable. While it could eliminate those problems, it creates the issue of thousands of server administrators who really don’t understand the platform they are now responsible for. Infrastructure and security IS hard because it’s not just about getting the right syntax; it’s about understanding the concepts so that not only does it work, it works safely and reliably.
I’ve seen quite a bit of bad troubleshooting going on as newcomers have sought to set up their instances. It doesn’t help that the current docker-compose in the Lemmy repository is outdated and doesn’t work out of the box. More than a few “this worked for me” solutions that I’ve seen may have gotten things working, but broke fundamental security principles that may or may not come back to bite the administrators later.
We need an actual official setup tutorial that is kept up to date. The existing documentation for the Docker setup process is extremely bare-bones, and it doesn’t even link to the right config files. There are some unofficial tutorials out there that are better, but they’re outdated and they link to the wrong config files too.
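To make the discussion concrete, the shape of a Lemmy compose file is roughly the following. This is a hedged sketch only: the image names, Postgres version, port wiring, and config mount are my assumptions, so check the Lemmy repository for the current canonical file before using anything like it:

```yaml
# Sketch -- NOT the official compose file; verify images/versions upstream
services:
  lemmy:
    image: dessalines/lemmy
    restart: always
    volumes:
      - ./lemmy.hjson:/config/config.hjson   # main server config
    depends_on:
      - postgres
  lemmy-ui:
    image: dessalines/lemmy-ui
    restart: always
    depends_on:
      - lemmy
  postgres:
    image: postgres:15-alpine
    restart: always
    environment:
      - POSTGRES_USER=lemmy
      - POSTGRES_PASSWORD=changeme   # placeholder -- set a real secret
      - POSTGRES_DB=lemmy
    volumes:
      - ./volumes/postgres:/var/lib/postgresql/data
```

Even a sketch like this shows why a maintained official version matters: if the config file path or image tag drifts upstream, everything silently breaks for newcomers.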
I had wanted to host an instance myself… grabbed the Docker setup, flailed around for a bit, and threw up my hands in frustration.
The amount of config needed for the fronting proxy makes it annoying to bring your own proxy.
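For anyone bringing their own proxy, the core of a fronting nginx config is small. A sketch, assuming the app listens on 127.0.0.1:8536 (Lemmy’s usual backend port; the domain and cert paths are placeholders):

```nginx
# Sketch of a bring-your-own fronting proxy -- adjust names/ports to taste
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8536;
        # Forward the original host and client details to the backend
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

The annoyance is that real apps layer WebSocket upgrades, body-size limits, and app-specific headers on top of this minimal core, and that’s rarely documented in one place.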
I would say the hardest part of self-hosting specifically is grokking how SSL works and setting it up right with automatic renewal.
There are often a lot of extra steps involved.
I’d also say understanding how routing works, and why you need a reverse proxy, is the other big one.
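SSL gets less mysterious once you can inspect a certificate yourself. A throwaway exercise using openssl (self-signed and purely for practice; in production an ACME client issues the real certificate, and the CN here is made up):

```shell
# Create a short-lived self-signed cert purely to practice inspecting it
openssl req -x509 -newkey rsa:2048 -nodes -days 90 \
  -subj "/CN=practice.test" \
  -keyout /tmp/practice.key -out /tmp/practice.crt

# Print the expiry date -- the field that renewal automation watches
openssl x509 -enddate -noout -in /tmp/practice.crt
```

Automatic renewal then reduces to running your ACME client on a timer (e.g. `certbot renew` from a cron job or systemd timer) before that date arrives.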
I’ve been working on getting Matrix Synapse running on my NAS, and the CLI hasn’t been my problem. I’m a programmer, and CLI doesn’t scare me; but the other issues you mention are all new to me, and getting a web service set up so people outside my local network can access it but without leaving me open to bad actors is wicked stressful.
The biggest problems end up being that I need to work with the soup of technologies, and there’s no one place to do all the things. I’ve got TWO routers (because my internet comes through one, and I run my LAN and wifi off one I trust better) which means I’m double-NATed, which is apparently the root of all evil; I can use Cloudflare to tunnel to my NAS, but I can’t accept simple (CNAME) redirects from a family member’s domain to one of my subdomains without paying Cloudflare $200/month, so that means I’m back to dealing with the double-NAT, and then I have to learn to set up TLS, which sounds simple, but it’s still yet another thing to screw around with and another thing I could screw up by accident.
I could pay for a VPS, but that to me defeats a lot of the point of “host your own” federation when some company could be subpoenaed for copies of all their hosted accounts or something. (Yes, I could get subpoenaed for my data just as easily, but it takes more work to subpoena a thousand people than one company for a thousand people’s accounts.)
Anyway, I’d love to see things evolve to where it’s easy for newbies to host their own private instances of everything.
Personally, I’d love a drop-in tool that runs more like a temporary server while it’s running, syncing federated data you missed while your device was off; and only serving your data when it’s on. Likely with some kind of redirect service/NAT punchthrough so other clients can find you…
…but I think we’re a long way off from being able to do that.
You could get a VPS only for getting around the double NAT.
Run a reverse proxy on the VPS and forward requests over WireGuard to your NAS. That way you wouldn’t actually host any data on the VPS.
This is an idea I didn’t know about! I’ll have to look more into it. If you feel like it, I’d love to hear a bit more detail; but also I know how to use DuckDuckGo, so no pressure!
I don’t know what specifically you would like to know and what your background is, so I will just elaborate a bit more.
The basic idea is that the VPS, which is not behind a NAT and has a static IP, listens on a port for WireGuard connections. You connect from the NAS to the VPS. On the NAS you configure the WireGuard connection with “PersistentKeepalive = 25”. That makes the NAS send keepalive packets every 25 seconds which should be enough to keep the connection alive, meaning that it keeps a port open in the firewall and keeps the NAT mapping alive. You now have a reliable tunnel between your VPS and your NAS even if your IP address changes at home.
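The NAS side of that WireGuard config might look roughly like this (the keys, addresses, port, and hostname are all placeholders; the VPS side mirrors it, minus the Endpoint and keepalive lines):

```text
# NAS-side wg0.conf sketch -- every value here is a placeholder
[Interface]
PrivateKey = <NAS private key>
Address = 10.0.0.2/24

[Peer]
PublicKey = <VPS public key>
Endpoint = vps.example.com:51820
AllowedIPs = 10.0.0.1/32
# Keepalive packets every 25s hold the NAT mapping open from behind the NAT
PersistentKeepalive = 25
```

The key detail is that only the NAT-ed side (the NAS) needs `Endpoint` and `PersistentKeepalive`, because it is the one that must initiate and maintain the tunnel.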
If you can get a second (public) IP address from your provider you could even give your NAS that IP address on its WireGuard interface. Then, your VPS can just route IP packets to the NAS over WireGuard. No reverse proxy needed. You should get IPv6 addresses for free. In fact, your VPS should already have at least a /64 IPv6 network for itself. For an IPv4 address you will have to pay extra. You need the reverse proxy only if you can’t give a public IP address to your NAS.
Edit: If you have any specific questions, feel free to ask.
Tailscale funnel
Tailscale is a mesh VPN built on top of WireGuard. Tailscale has a cool feature called Funnel that exposes a node on your VPN at a public domain under .ts.net
Wow, yeah, that sounds like a really frustrating situation. I wish you all the luck in figuring it out.
I got it working! Fortunately, I know a kindly professional who took pity on me and showed me the secrets of Cloudflare’s free tier, and we worked something out.
I have had to learn SO MUCH in just the last week, though, it’s crazy intense!
As I can attest after playing with pfsense for years, GUI or not, if you don’t know what you’re doing you’re going to have a bad time.
For me personally, command line gives me a better understanding of what’s really going on. But then again I’m an old Unix nerd. But once I know what’s going on, I prefer the fancy GUI.
Yep. Agree but kinda the inverse of your takeaway.
I prefer to skip the gui when I know what’s going on. It’s just a waste of resources in many cases and sometimes obfuscates options that otherwise are there.
For example, on my OPNsense box the NUT package doesn’t work in the GUI. Never has. But I have set up an innumerable number of NUT instances with that same UPS. I did it via the CLI and it works, even when the GUI says it’s not possible.