ricw
2 days ago
So the problem is 1) port mapping and 2) backing up data volumes?
There are simple solutions to each
1) you simply have a separate docker-compose file for each environment, e.g. docker-compose.dev.yml for your dev server. In this file you define only the parts that differ from the primary/prod compose file. That way a simple command-line variable selects dev vs prod vs any other environment. For details see https://docs.docker.com/compose/how-tos/multiple-compose-fil...
2) this is literally as simple as running a small bash script that creates a backup, tars and gzips the files, and then uploads the archive to an S3 bucket. Sure, maybe not out of the box, but generally pretty simple. Likewise I'd always have a script that does the inverse (downloads the latest backup, loads it into the db and restores the files).
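A minimal sketch of such a script (the database step and the S3 upload are commented out because they need a live stack and credentials; the service name, paths, and bucket are all placeholders, and the demo at the bottom just exercises the tar step):

```shell
#!/usr/bin/env bash
# Hypothetical backup sketch: service names, paths, and bucket are placeholders.
set -euo pipefail

backup() {
  local src_dir="$1"
  local stamp out
  stamp="$(date +%Y%m%d-%H%M%S)"
  out="/tmp/app-backup-${stamp}.tar.gz"

  # Dump the database from the running container first (assumes Postgres):
  # docker compose exec -T db pg_dump -U app app > "${src_dir}/db.sql"

  # Tar and gzip the data directory.
  tar -czf "${out}" -C "$(dirname "${src_dir}")" "$(basename "${src_dir}")"

  # Upload to S3 (bucket name is made up):
  # aws s3 cp "${out}" "s3://my-backups/app/${stamp}.tar.gz"

  echo "${out}"
}

# Demo against a throwaway directory so the sketch is runnable as-is.
demo="$(mktemp -d)"
echo "hello" > "${demo}/file.txt"
archive="$(backup "${demo}")"
tar -tzf "${archive}"
```

The restore script would be the mirror image: fetch the latest archive, untar it, and pipe the SQL dump back into the database.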
Not exactly rocket science...
cdrini
2 days ago
For 1, docker has a handy special file to make this even easier! `compose.override.yaml` loads automatically on top of the main `compose.yaml`. We keep dev configs in there so Devs only need to run `docker compose up` locally without any environment variables, but on prod (which is automated) we set the compose file to be `compose.yaml:compose.prod.yaml`.
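As an illustration (service names and ports are made up), the override file only needs the dev-specific deltas on top of the base file:

```yaml
# compose.override.yaml — picked up automatically by `docker compose up`
services:
  web:
    ports:
      - "8080:80"        # expose locally for dev only
    volumes:
      - ./src:/app/src   # live-reload source mount for development
```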
number6
2 days ago
I do it the other way round, so in production you just have to use `docker compose up`. I don't understand the `compose.yaml:compose.prod.yaml` syntax; I've never seen it before. What does it do?
cdrini
2 days ago
Oh if you set the environment variable COMPOSE_FILE, you can specify a colon-separated list of compose files, which get merged together. This lets us have our core services in our main compose.yaml, since they're shared between dev and production, and then have extra services that are prod-only, or toggle certain prod-only environment variables, inside the compose.prod.yaml . And extra services/settings which we only want in the dev environment go in compose.override.yaml .
Eg
COMPOSE_FILE="compose.yaml:compose.prod.yaml" docker compose up
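For example (an entirely hypothetical overlay), the prod file might tighten an environment variable and add a prod-only service; the merged result is the base compose.yaml plus these deltas:

```yaml
# compose.prod.yaml — merged on top of compose.yaml via COMPOSE_FILE
services:
  web:
    environment:
      - DEBUG=0               # prod-only override
  backup:                     # extra service that only exists in prod
    image: backup-agent:latest
```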
My work is open source, so you can check it out here if you wish: https://github.com/internetarchive/openlibrary
shubhamjain
2 days ago
Yup, I was surprised how weak the arguments were. I mean, surely there are better reasons why docker compose fails to scale? I have been pretty happy with it so far, and haven't felt the need to try something like k8s or docker swarm.
number6
2 days ago
Sounds more like a skill issue. I am quite happy with Traefik and docker-compose. But I don't have particularly high loads or much traffic on my servers.
zelon88
2 days ago
It is a skill issue, but not a problem. The developer who just wants to write unrelated code doesn't need Docker to write the code that goes into Docker.
For a lot of developers who have their primary focus set on writing code and documentation, trying to remember Docker or Kube on top of the rest of your stack is burdensome. Deployment is an end-user task. Obviously the development environment fits the target use-case already.
So please understand that there are lots of people who are building the things that go inside your containers who don't want, need, or see the value of these containers. For you they might offer value, but for someone like me they bring cognitive load and troubleshooting.
Too much abstraction can be just as bad as not enough. Like Akin's Laws of Spacecraft Design, but for software.
eliribble
2 days ago
How many steps is it for you to add a new service to your system? Say you wanted to try out Paperless NGX, what would you need to do?
dizhn
2 days ago
(Not parent) The quickest way I could do it is if I just used my existing wireguard setup to directly access the docker container IP. In that case it's just one compose up away even without mapping/exposing ports.
eliribble
2 days ago
Thanks, interesting.
Sounds like your container has some kind of side-car that makes it directly addressable over Wireguard without needing to address the host IP. Does that mean you'd need to modify the docker-compose in some way before `docker-compose up`?
How do you know which port Paperless is using for HTTP? When you want to load up Paperless in a web browser, are you typing in a service name, or the container IP address? If it's a service name, how are you doing DNS? Do you have TLS?
dizhn
20 hours ago
I can already directly access the docker network because it's in the allowedips setting of wireguard. But for convenience, yes, when I do get a docker-compose.yml file I change 127.0.0.1 IPs to that of the wireguard interface IP. (I can leave 0.0.0.0 IPs as is because this host does not have a public IP) This way I am exposing the ports, but I am exposing them in a way that they are not world-accessible but still accessible to me conveniently.
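In compose terms that edit is just changing the host side of the port mapping (the 10.x address is a made-up WireGuard interface IP, and the service is hypothetical):

```yaml
services:
  paperless:
    ports:
      # before: "127.0.0.1:8000:8000"
      - "10.8.0.1:8000:8000"   # bind to the WireGuard interface instead
```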
For services open to the public internet I just add a subdomain + a reverse proxy entry into an existing caddy instance and point to the docker IP.
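The Caddy side of that is a couple of lines (domain and container address are placeholders):

```
paperless.example.com {
    reverse_proxy 172.18.0.5:8000
}
```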
ffsm8
2 days ago
> Sounds like your container has some kind of side-car that makes it directly addressable over Wireguard
Not necessarily. You can access the deployed docker container without exposing any ports or having any reverse proxy (what you were probably thinking of with "sidecar", which is a k8s concept, not a docker one) or anything else, by using the IP address of the started container and the ports the service listens on. This is usually only possible from the host itself, but wireguard can be configured as what's essentially a bastion host and exit node; this lets connecting clients also address containers started on that server, without opening any ports.
You can technically do that even without wireguard, as long as you configure the docker host to route the relevant traffic into the docker bridge and define the docker subnet as a static route that points at the docker host, but that's another story.
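A rough sketch of that routing setup, shown as comments since the commands need root (all addresses are assumptions; 172.17.0.0/16 is Docker's default bridge subnet, and Docker's own firewall rules may also need an exception in the DOCKER-USER chain):

```shell
# On each LAN client: send the Docker bridge subnet via the Docker host.
#   ip route add 172.17.0.0/16 via 192.168.1.10

# On the Docker host: allow forwarding into the bridge.
#   sysctl -w net.ipv4.ip_forward=1
#   iptables -I DOCKER-USER -d 172.17.0.0/16 -j ACCEPT
```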
soneil
a day ago
It took me a moment to put this together too, so to be clearer - the wireguard endpoint is in docker, so you're adding the docker bridge to your vpn. So DNS is handled by docker, just as containers can already address each other by name - you're on the container network with them.
dizhn
20 hours ago
I don't actually do this. I either access the services by IP or add a private IP to DNS. (I think this is not widely supported, but Cloudflare does support it.)
Your explanation is interesting though. Would that actually work?
number6
2 days ago
Add the labels to the docker-compose file and then run it.
I regularly try out new services or tools with docker-compose
adamc
2 days ago
You can solve anything with enough scripts, but that in no way invalidates his point that the abstraction level seems a bit off.
I'm not sure that his solution is really an improvement, however, because then you have to bake in the semantics for a fixed set of product types, and when that list gets extended, you have to bake in a new one.
smaudet
2 days ago
Not only that, he essentially wants to re-invent messaging and daemons.
There are pre-existing paradigms for service brokers, message passing, etc.; they all try to solve this problem, and they all share the same fault: they become inextensible...
Which is the core programming tradeoff: flexibility vs robust, constrained behavior.
On the extreme end of flexibility you have AI goop you barely understand, which is arguably far more flexible than anything you ever wrote, even if you can't make any guarantees about what it does. On the other hand you have these declarative things that work fantastically if that's what you wanted, but probably aren't worth getting into computers for...
eliribble
2 days ago
This is a really good conceptual model, the tradeoff between flexibility and constrained declarative frameworks. The goal is to make self-hosting applications extremely easy and extremely reliable. With that as the goal, being highly constrained seems like the way to go.
lolinder
2 days ago
Borgmatic works great for #2. before_backup runs docker compose stop, after_backup runs docker compose up, done.
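Sketch of the relevant borgmatic config using its documented before_backup/after_backup hooks (all paths are placeholders, and the exact layout depends on your borgmatic version):

```yaml
# borgmatic config fragment — paths are made up
source_directories:
  - /srv/app/data
repositories:
  - path: /mnt/backup/borg

before_backup:
  - docker compose -f /srv/app/compose.yaml stop
after_backup:
  - docker compose -f /srv/app/compose.yaml up -d
```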
oliwarner
2 days ago
> So the problem is…
This feels like quite a shallow reading. Being able to manage system-wide things (DNS, a centralised Caddy, a centralised database) from individual compose units, in such a way that you're not going through a ton of setup for each service, is not trivial. This much might just need Caddy to work better with other instances of itself, but it isn't a solved problem.
I'll grant you that backing up volumes isn't hard.
smaudet
2 days ago
I think 2) actually has the most merit. If a container creates a large number of e.g. temp files, retaining that file history may not be particularly valuable.
Some sort of "hey, back this data up, it's important to my function" standard would be nice, and not just for container apps.
Trying to figure out where your app hid its configuration and save files is a perennial user frustration.
Loic
2 days ago
I use the convention that all my containers get a /data/ folder where data can be stored and which gets an automatic daily backup.
It is easy to set the right storage behind /data/.
Just a convention, nothing more, and it has been working smoothly for the past "many" years.
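In compose terms the convention looks like this (image name and host path are placeholders):

```yaml
services:
  app:
    image: my-app:latest
    volumes:
      - /srv/backups/app:/data   # everything worth keeping goes under /data/
```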
eliribble
2 days ago
I assume you're saying this as a container image author, not as someone who is deploying containers. It'd be great if every other container image author standardized on something like this. We just need someone to create the standard and some tools to make the standard a well-paved path.
Loic
2 days ago
In this particular case, I am both creating the images and deploying them.
throwitawayfam
2 days ago
For #2 I use https://kopia.io/ and upload to Backblaze B2 (via its S3-compatible API)
skybrian
2 days ago
I guess it’s simple if you already know which S3 service you want to use? For those of us who don’t, it means it’s time to go shopping for one.