It’s so easy to self-host these days. I remember when you’d have to fuck around with Apache configs, then with app config files, etc. Now you just run Docker. It’s great.
I’m still fucking with the Apache configs (I fucking hate Apache…). As someone with no Docker experience whatsoever, are there any getting-started guides you’d recommend for someone looking to make the switch?
I don’t have any specific guides in mind, but you’ll want to use docker-compose as much as possible. Create /home/your_user/docker/app/ for each app and keep your compose files there. If you use docker run instead, keep a copy of the exact commands you use; if you ever need to recreate your services, that’s a lot easier than digging the command out of your bash history again. You can just run cat docker.txt | bash and it will recreate your containers for you. Also learn how to translate docker run commands into a docker-compose.yml. That’s about all I can think of for getting started. Oh, and docker ps will be a godsend.
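As a concrete sketch of that advice, here is one hypothetical per-app layout. Everything here is a placeholder (the traefik/whoami image, the port numbers, the directory names), not something from the thread; it just shows the "save your run command, and keep a compose equivalent next to it" idea:

```shell
# Hypothetical per-app directory, as suggested above.
mkdir -p docker/whoami

# Save the exact `docker run` command so it can be replayed later.
cat > docker/whoami/docker.txt <<'EOF'
docker run -d --name whoami -p 8080:80 --restart unless-stopped traefik/whoami
EOF

# The same container expressed as a compose file; running
# `docker-compose up -d` in this directory recreates it
# without having to remember any flags.
cat > docker/whoami/docker-compose.yml <<'EOF'
services:
  whoami:
    image: traefik/whoami
    container_name: whoami
    ports:
      - "8080:80"
    restart: unless-stopped
EOF

# Later, recreating the container is just:
#   cat docker/whoami/docker.txt | bash
```

The point of the duplication is that docker.txt is your disaster-recovery note and the compose file is what you actually use day to day.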
docker-compose helped me wrap my head around docker. I can use run commands now, but prefer to either modify a compose file or create my own to spin things up.
To be fair, that’s pretty much the IT gig in general. Not the Docker part, but the rest: copy/paste run commands, then learn from whatever you screw up. A couple of years later you might very well end up working in IT.
Okay, I keep reading about Docker. What’s the difference between a Docker container and just installing an app on rented server space?
Functionally? Not much. But people who self-host typically want everything on computers they own, whether it’s for learning purposes or just to keep their stuff off “someone else’s computer.” Self-hosting usually means you’re running software on hardware you own, almost always in your own house.
Does each dock (?) have its own server? (Apache or nginx or whatever?)
Each Docker image usually has a web server built in. The philosophy of Docker is that the image contains everything needed to run the app, even a small Linux userland (Alpine is a favorite base for Docker images). So while you’re not managing the web server yourself, each image will ship its own web server if web access is needed.
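As a sketch of that "everything included" philosophy, here is a minimal hypothetical Dockerfile for a static site that ships with its own web server. The nginx:alpine base is a real public image; the site/ directory is a placeholder:

```dockerfile
# Minimal sketch: a static site bundled with its own web server.
# Everything the app needs -- nginx plus the files -- ships inside the image.
FROM nginx:alpine
COPY ./site/ /usr/share/nginx/html/
EXPOSE 80
# The base image's default command already starts nginx in the foreground,
# so no CMD line is needed here.
```

Whoever pulls the resulting image gets the app and its web server as one unit, which is why you rarely configure Apache or nginx by hand for containerized apps.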
Does each dock host a whole site, or do you have a dock for your database and a dock for your web app?
Docker, the program, is what runs all the Docker images on a computer. Each image is built however the software’s developer chooses. Some images bundle a web app and a database into a single image, while others expect a separate database server running independently of the image (it won’t care whether that database server is itself a Docker container or not, just that it’s reachable).
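A hypothetical compose file illustrating the second style, where the app image expects the database as a separate service. WordPress and MariaDB are just well-known examples (their environment variables shown here are the documented ones), and the passwords are obviously placeholders:

```yaml
services:
  app:
    image: wordpress
    ports:
      - "8080:80"
    environment:
      WORDPRESS_DB_HOST: db        # "db" resolves to the service below
      WORDPRESS_DB_USER: wp
      WORDPRESS_DB_PASSWORD: change-me
      WORDPRESS_DB_NAME: wordpress
    depends_on:
      - db

  db:
    image: mariadb:10
    environment:
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wp
      MYSQL_PASSWORD: change-me
      MYSQL_ROOT_PASSWORD: change-me
```

Compose puts both services on a shared network where each can reach the other by its service name, which is why the app can simply point at a host called "db".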
A Docker image is basically like a lightweight virtual machine image that you package your software in. Then when you run the software you don’t need to worry about compatibility or having the right dependencies installed; it’s all included in the image.
Think of Docker images as Nintendo cartridges that you can take to any friend’s house, plug in, and play. Servers can run more than one Docker container at a time.
The approach greatly simplifies writing code and having it work on your server, reduces errors, and adds a layer of security.
I’ve read and reread, listened and relistened to info on docker/containers and I still feel like I’m missing something tbh.
Let’s say you have a Docker container for something and it was built for a particular Linux distro; that won’t run on another OS, will it? Maybe not even on a different Linux distro from the one it was made for (e.g. Ubuntu vs. Arch vs. Fedora or whatever).
To go off your example, Docker’s not like an expansion module that makes your Switch games work on a PlayStation or Xbox… right? There seems to be some mixed messaging on this, given how readily containers are recommended (which seems to come with a presumption of familiarity that often isn’t there for the people asking).
I guess I’ve also been confused because, like… shouldn’t old-fashioned installers handle bundling or pulling the relevant dependencies as they run? I’d imagine that’s where containers’ security benefits come into play, though, alongside their being virtualized processes, if I’m not mistaken.
More control and security: you can set a command to run as “nobody”, a user with the least amount of permissions/privileges, plus you get isolation/sandboxing features.
Optimization is also a thing to consider, and even saving time, depending on how complex the command is.
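As a sketch, a locked-down docker run along those lines might look like this. The flags are real Docker options; some/webapp is a placeholder image name, not a real one:

```shell
# Run a container as an unprivileged user with extra sandboxing.
#   --user nobody                      drop to a least-privileged user
#   --read-only                        make the container's root FS read-only
#   --cap-drop ALL                     shed all Linux capabilities
#   --security-opt no-new-privileges   block privilege escalation
docker run -d --user nobody --read-only --cap-drop ALL \
  --security-opt no-new-privileges -p 8080:80 some/webapp
```

Even if the app inside is compromised, the process has almost nothing it is allowed to do to the host.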
Self hosting what?
In general