Show HN: A CLI tool I made to self-host any app with two commands on a VPS

369 points, posted a day ago
by mightymoud

131 Comments

dewey

21 hours ago

I'd also suggest people take a look at Dokku; it's a very mature project with a similar scope and was discussed here a few weeks ago:

https://news.ycombinator.com/item?id=41358020

I wrote up my own experiences too (https://blog.notmyhostna.me/posts/selfhosting-with-dokku-and...) and I can only recommend it. It is ~3 commands to set up an app, and one push to deploy after that.

FloatArtifact

20 hours ago

Part of me dies every time I see projects not integrating robust backup and restore systems.

dewey

18 hours ago

Providing robust backup and restore for a system that can run any kind of workload is almost impossible. You'd have to provide database backups for all versions of all databases, correct file backups for the volumes, etc.

It feels much more dangerous to have such a system in place and provide a false sense of security. Users know best what data they need to back up, where they want to back it up, whether it needs to be encrypted, whether it should run daily or weekly, etc.

sgarland

18 hours ago

ZFS. Snapshot the entire filesystem, ship it off somewhere. Done. At worst, Postgres is slow to start up from the snapshot because it thinks it’s recovering from a crash.
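
In shell terms, the idea is roughly this (a minimal sketch; the pool, dataset, and host names are assumptions):

  # snapshot the dataset that holds the app data
  zfs snapshot tank/appdata@nightly-2024-09-28
  # ship it to another machine (full send; add -i <prev-snapshot> for incrementals)
  zfs send tank/appdata@nightly-2024-09-28 | ssh backup-host zfs receive -F backup/appdata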

prmoustache

11 hours ago

Most projects/products can survive a few seconds of downtime to have a clean snapshot.

dewey

10 hours ago

Classic HN reply that’s very disconnected from reality. Most people don’t run ZFS; most people using these tools are self-hosting their apps because it’s cheaper than a managed cloud service, usually on a dedicated server or VPS where you run stock Ubuntu by default and no niche filesystem.

dewey

7 hours ago

You're missing the point: nobody doubts it's possible, but defaults matter, and people just commission a new "Ubuntu" server on OVH, Hetzner, DO, etc. and don't configure ZFS or snapshotting, or even want to know what those are.

Yiin

18 minutes ago

I'm such a user and couldn't agree more. Also, DO provides its own backup service that works fine for weekly backups.

sgarland

5 hours ago

I didn’t say most people run it, I offered it as a solution. If you’re willing to run a server, you should be willing to try ZFS. As long as you follow some best practices, you’ll be fine.

pmarreck

2 hours ago

> Most people don’t run ZFS

Well, the people who host the people do. Is that not argument enough in favor of it?

GauntletWizard

17 hours ago

Postgres is recovering from a crash if it's reading from a ZFS snapshot. It probably did have several of its database writes succeed that it wasn't certain of, and others fail that it also wasn't certain of, and those might not have been "in order". That's why WAL files exist, and it needs to fully replay them.

sally_glance

8 hours ago

A viable strategy, but it requires an experienced Linux/Unix admin and quite a bit of planning and setup effort.

There are a lot of non-obvious gotchas with ZFS, and a lot of knobs to turn to make it do what you want. Anecdotally, a coworker of mine set it up on his development machine back when Ubuntu was heavily promoting it for default installs. It worked well until one day his machine started randomly freezing for minutes, multiple times a day... He traced the issue back to some improper snapshotting setup, then spent a couple of days trying to fix it before going back to ext4.

For the Postgres data use case in particular, I would be wary of interactions, and it would probably require a lot of testing if we were to introduce it... Though it seems at least some people are having success with it (not exactly a plug-and-play or cheap setup though): https://lackofimagination.org/2022/04/our-experience-with-po...

sgarland

5 hours ago

I think you meant to reply to me.

There are a ton of ZFS knobs, yes, but you don’t need most of them to have a safe and performant setup. Optimal, no, but good enough.

It’s been well-tested with DBs for years; Percona in particular is quite fond of it, with many employees writing blog posts on their experiences.

trog

18 hours ago

My VPS provider just lets me take image snapshots of the whole machine so I can roll back to a point in time. It's a little slower and less flexible than application- or component-level backups, but overall I don't even think about backup and restore now because I know it's handled there.

Aeolun

19 hours ago

None of my hobby projects across 15 years or so have ever needed backups or restoring. I can agree it would be nice to have, but it’s a far cry from necessary.

doublerabbit

30 minutes ago

So when your drives finally die, are you just going to shrug and kiss it goodbye?

If you have your code stashed somewhere else, then that's already a backup.

prmoustache

11 hours ago

FWIW, backups can be run from a separate Docker container that mounts the same volume as the main app and connects to the db, if any, so it's not like backups can't be taken care of. That's how it's often done in the Kubernetes world.
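
A minimal sketch of that pattern with plain Docker (every name here is hypothetical): a throwaway container joins the app's network, mounts a host directory, and dumps the database into it.

  docker run --rm \
    --network myapp_default \
    -e PGPASSWORD="$DB_PASSWORD" \
    -v /srv/backups:/backups \
    postgres:16 \
    sh -c 'pg_dump -h db -U app appdb > /backups/appdb-$(date +%F).sql'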

mimischi

20 hours ago

Been using dokku for probably 8 years now? (or something close to that; it used to be written entirely in bash!) Hosting private stuff on it, and an application at $oldplace probably also still runs on this solid setup. Highly recommended, and the devs are a great sport!

rgrieselhuber

20 hours ago

I've kept a list of these tools that I've been meaning to check out. In scope, do they cover securing the instance? Is there any automation for creating networks of instances?

dewey

20 hours ago

> In scope, do they cover securing the instance?

Most of these I checked don't, but a recent Ubuntu version is perfectly fine to use as-is.

> Is there any automation for creating networks of instances?

Not that I'm aware of; it would also somewhat defeat the purpose of these tools, which are supposed to be simple. (Dokku is "just" a shell script.)

oulipo

19 hours ago

What would be the best between Dokku / Dokploy / Coolify?

dewey

19 hours ago

Depends on what you prefer. I went with Dokku as it was important to me that I could run docker-compose based apps alongside my "Dokku managed" apps. I didn't want to convert my existing apps (Sonarr, Radarr etc.) into Dokku apps and only use Dokku for my web projects.

I also wanted to be able to remove Dokku if needed and everything would continue to run as before. Both of these work very well with Dokku.

Aeolun

17 hours ago

I tried many, but eventually kept running on Portainer.

Best part is that I can just dump whole docker-compose.yml files in and it just works.

vickodin

10 hours ago

Also: Kamal, CapRover

pqdbr

a day ago

This looks really nice, congrats!

1) I see Kamal was an inspiration; care to explain what differs from it? I'm still rocking custom Ansible playbooks, but I was planning on checking out Kamal after version 2 is released soon (I think alongside Rails 8).

2) I see databases are in your roadmap, and that's great.

One feature that IMHO would be a game changer for tools like this (and is lacking even in paid services like Hatchbox.io, which is overall great) is streaming replication of databases.

Even for side projects, a periodic SQL dump stored in S3 is generally not enough nowadays; any project that gains traction will need to implement some sort of streaming backup, like Litestream (for SQLite) or Barman (for Postgres).

If I may suggest this feature: having this tool provision a Barman server on a different VPS, and automate the process of having Postgres stream to it, would be a game changer.

One Barman server can actually accommodate multiple database backups, so N projects could stream backups to a single Barman server.

Of course, there would need to be a way to monitor whether the streaming is working correctly, and maybe even help the user with the restoration process. But that effectively brings RPO down to near zero (so almost no data loss) and can even allow point-in-time restoration.
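
For the SQLite case, Litestream makes this roughly a one-liner per database (a sketch; the paths and bucket are assumptions):

  # continuously stream WAL changes to S3
  litestream replicate /var/lib/myapp/app.db s3://my-backup-bucket/app.db
  # disaster recovery: rebuild the database from the replica
  litestream restore -o /var/lib/myapp/app.db s3://my-backup-bucket/app.db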

mightymoud

a day ago

1) Kamal is more geared towards having one VPS per project; it's made for big projects really. They also show in the demo that even the db is hosted on its own VPS. Which is great! But not for me or Sidekick's target audience. Kamal v2 will support multiple projects on a single VPS, afaik.

2) Yes yes yes! I really like Litestream. Also, backup is one of those critical but annoying things that Sidekick is meant to take care of for you. I'll look into Barman. My vision is to have one command for the most popular db types that uses stubs to configure everything the right way. Need to sort out docker-compose support first though...

indigodaddy

a day ago

Pretty sure fly.io, for example, supports Litestream; I remember seeing a Fly doc related to Litestream when I was looking a few days ago for my own project. It would also make sense that they do, given Litestream's creator is currently Fly's VP of Product (I believe).

ctvo

21 hours ago

Yes, fly.io is associated with Litestream, but... how is that related to the above thread or this tool?

indigodaddy

19 hours ago

Quoted from the parent comment:

“One feature that IMHO would be game changer for tools like this (and are lacking even in paid services like Hatchbox.io, which is overall great) is streaming replication of databases.”

And then I mentioned that I believe fly.io has litestream support. I think it’s fairly relevant to the comment/thread.

russelg

10 hours ago

The repo has the following subtitle: "Bare metal to production ready in mins; imagine fly.io on your VPS"

4star3star

a day ago

I like what I'm seeing, though I'm not sure I have a use case. On a VPS, I'll typically run a cloudflared container and configure a Cloudflare tunnel to that VPS. Then, I can expose any port and point it to a subdomain I configure in the CF dashboard. This gives https for free. I can expose services in containers or anything else running on the VPS.
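
For anyone curious, that tunnel setup is only a few commands (a sketch; the tunnel and hostname are assumptions):

  cloudflared tunnel login                 # authenticate against your CF account
  cloudflared tunnel create myapp          # creates the tunnel + credentials file
  cloudflared tunnel route dns myapp app.example.com
  cloudflared tunnel run myapp             # run the connector (or run it as a container)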

I'll concede there's probably a little more hands-on work doing things this way, but I do like having a good grip on how things are working rather than leaning on a convenient tool. Maybe you could convince me Sidekick has more advantages?

skinner927

a day ago

I must be an old simpleton, but why get cloudflare involved? You can get https for free with nginx and letsencrypt.
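
For reference, assuming nginx and certbot's nginx plugin are already installed, it's roughly:

  sudo certbot --nginx -d example.com -d www.example.com
  sudo certbot renew --dry-run    # renewal usually runs via a systemd timer or cron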

mightymoud

a day ago

It's a tunnel, so the VPS can only be reached through Cloudflare. It's not only for HTTPS, but more for security and lockdown.

mediumsmart

21 hours ago

Excellent, and if Cloudflare thinks your IP is Iranian, it's going to get a really secure lockdown.

nine_k

20 hours ago

More seriously, it also helps when you're a target of a DDoS.

It's always a balancing act between outsourcing your heavy lifting, and having to trust that party and depend on them.

newaccount74

10 hours ago

I've been running my business on VPSes from Linode and Hetzner for the last 13 years and have never had an issue with DoS.

I think the benefits of Cloudflare's DoS protection are vastly oversold and absolutely unnecessary for 99% of businesses.

(I think the false positives where some users randomly get captchas are actually bad for business)

BigJ1211

5 hours ago

The problem with that mindset is that by the time you do get hit, it'll be too late to act and remedy the issue easily.

Even though I agree with you in spirit, the chances that you will need it are slim.

hu3

a day ago

Nice setup.

But isn't this a little too tied to Cloudflare?

Caddy as a reverse proxy on that VPS would also give us free HTTPS. The downside is less security, because there's no CF tunneling.
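
A minimal sketch of the Caddy route (the domain and port are assumptions); Caddy obtains and renews the certificate on its own:

  # write a two-line Caddyfile that proxies the domain to a local app
  printf 'example.com {\n\treverse_proxy localhost:3000\n}\n' | sudo tee /etc/caddy/Caddyfile
  sudo systemctl reload caddy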

aborsy

21 hours ago

You could put Authentik in front; it covers the access-control side of what Cloudflare does, on the VPS itself.

SahAssar

21 hours ago

Are you also making sure that nothing on the VPS is actually listening on outside ports? A classic mistake is to set up something similar to what you're describing without validating that the services aren't listening on 0.0.0.0.
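
A quick sketch of the fix and the check (the image name is hypothetical):

  # publish the container port on loopback only, so only the local proxy/tunnel reaches it
  docker run -d -p 127.0.0.1:8080:80 myapp:latest
  # verify nothing unexpected is bound to 0.0.0.0 or [::]
  ss -tlnp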

I'd also rather not have Cloudflare as an extra company to trust, an extra point of failure, and extra configuration to manage.

mightymoud

a day ago

Interesting setup....

How do you run the containers on your VPS tho? You could still use Sidekick for that!

I think your setup is one step up in security from Sidekick nonetheless. It seems like a lot more work, too.

tacone

a day ago

Interesting! How do you connect via ssh? Do you just leave the port open or is there any trick you'd like to share?

vineyardmike

21 hours ago

I do this for “internal” apps but with Tailscale.

renewiltord

19 hours ago

This is pretty cool. I did not know I could do this. Currently, I have:

1. nginx + letsencrypt

2. forward based on host + path to the appropriate local docker

3. run each thing in the docker container

4. put Cloudflare in front in proxy DNS mode and with caching enabled

Your thing is obviously better! Thank you.

jmpavlec

19 hours ago

I used to run it the cloudflared way as the other user described, but the tunnel often went offline without explanation for short periods of time, and the latency was so-so in my testing. I run it more like you do now and haven't had any stability problems since dropping the cloudflared setup. I use cloudflared for a less critical app on my own hardware, and that also goes up and down from time to time.

renewiltord

19 hours ago

Oh, thank you for that experience. This way has been entirely fire-and-forget (except for application-layer issues), so I wouldn't want to change things then. The infra layer is pretty simple this way. I lost a 10-year server to bitrot (Hetzner wanted to sunset it, and I had such a bespoke config I forgot how to admin it over the years), so I'm trying to keep things simple so it will survive decades.

LVB

a day ago

This looks good, and I’m a target user in this space.

One thing I’ve noticed is the prevalence of Docker for this type of tool, or the larger self-managed PaaS tools. I totally get it, and it makes sense. I’m just slow to adapt. I’ve been so used to Go binary deployments for so long. But I also don’t really like tweaking Caddyfiles and futzing with systemd unit files, even though the pattern is familiar to me now. Been waffling on this for quite a while…

kokanee

a day ago

I'm a waffler on this as well, increasingly leaning away from containers lately. I can recall one time in my pre-Docker career when I was affected by a bug related to the fact that software developed on macOS ran differently than software running on CentOS in production. But I have spent countless hours trying to figure out various Docker-related quirks.

If you legitimately need to run your software on multiple OSes in production, by all means, containerize it. But in 15 years I have never had a need to do that. I have a rock-solid bash script that deploys and daemonizes an executable on a Linux box, takes like 2 seconds to run, and saves me hours and hours of Dockery.

bantunes

a day ago

I don't understand how running a single command to start either a single container or a compose stack (which ships all its requirements in something like a tarball and just runs) is seen as more complicated than running random binaries, setting values in php.ini, setting up MySQL or Postgres, daemonizing said binaries, and making sure libraries and the like are in order.

hiAndrewQuinn

a day ago

You're going to be setting all that stuff up either way, though. It'll either be in a Dockerfile, or in a Vagrantfile (or an Ansible playbook, or a shell script, ...). But past a certain point you can't really get away from all that.

So I think it comes down to personal preference. This is going to sound a bit silly, but to me, running things in VMs feels like living in an apartment. Containers feel more like living out of a hotel room.

I know how to maintain an apartment, more or less. I've been living in them my whole life. I know what kinds of things I generally should and should not mess with. I'm not averse to hotels by any means, but if I'm going to spend a lot of time in a place, I will pick the apartment, where I can put all of my cumulative apartment-dwelling hours to good use.

kokanee

a day ago

Yes, thank you for answering on my behalf. To underscore this, the decision is whether to set up all of your dependencies and configurations with a tool like bash, or to set it all up within Docker, which involves setting up Docker itself, which sometimes involves setting up (and paying for) things like registries and orchestration tools.

I might tweak the apartment metaphor because I think it's generous to imply that, like a hotel, Docker does everything for you. Maybe Dockerless development is like living in an apartment and working on a boat, while using Docker is like living and working on a houseboat.

There is one thing I definitely prefer Docker for, and that's running images that were created by someone else, when little to no configuration is required. For example, running Postgres locally can be nicer with Docker than without, especially if you need multiple Postgres versions. I use this workflow for proofs of concepts, trials, and the like.

bluehatbrit

a day ago

I suppose like anything, it's a preference based on where the majority of your experience is, and what you're using it for. If you're running things you've written and it's all done the same way, docker probably is just an extra step.

I personally run a bunch of software I've written, as well as open source things. So for me docker makes everything significantly easier, and saves me installing a lot of rubbish I don't understand well.

oarsinsync

a day ago

After 20 years of various things breaking on my (admittedly franken) debian installs after each dist-upgrade, and spending days troubleshooting each time, I recently took the plunge and switched all services to docker-compose.

I then booted into a new fresh clean debian environment, mounted my disks, and:

  # bring up every compose project, one subdirectory at a time
  cd /opt/docker/configs && for i in */; do (cd "$i" && docker-compose up -d); done
Voila, everything was up and working, and no longer tied to my underlying OS. Now at least I can keep my distro, kernel, etc. all up to date without worrying about anything else breaking.

Sure, I have a new set of problems, but they feel smaller.

dijksterhuis

20 hours ago

Thou hast discovered docker's truest use case.

Like, legit, this is the whole point of Docker: application/service dependencies are no longer tied to the server they're running on, mitigating the worst parts of dependency hell.

Although, in your case, I suppose your tolerance for dependency hell has been quite high ;)

Ringz

21 hours ago

I'm doing exactly the same thing. I started doing everything on Synology with Docker Compose and got rid of most Synology apps in favor of open-source applications.

At some point I moved individual containers to other machines and they work perfectly: VPS, NUC, no matter what.

stackskipton

a day ago

Yeah, I'm in the same boat, and I'm wondering if there's a big contingent of devs out there who bristle at Docker. The biggest issue I run into writing my lab software is finding a decent container registry, but now I just endorse the free tier of Vultr CR.

bluehatbrit

7 hours ago

I just use the GitHub registry, but I've been paying for their personal Pro subscription for years now, so it wasn't really an "additional" cost for me.

faangguyindia

a day ago

Here's the thing: we've had code running on a VPS in the cloud for a decade without any problem.

When we ran it on Kubernetes, without touching it, it broke itself in 3 years.

Docker is a fantastic development tool; I do see real value in it.

But Kubernetes and its whole ecosystem? You must apply updates or your stuff will break one day.

Currently I am using Docker with docker compose and GCR; it does make things very simple and easy to develop, and it's also self-documenting.

prmoustache

11 hours ago

Kubernetes doesn't break by itself; this is totally untrue. The only thing that is a timebomb is the SSL certificates it uses, but they can easily be checked and renewed.

faangguyindia

17 minutes ago

We were using GCP's and AWS's managed Kubernetes.

Same experience in both places.

The app broke itself.

Now we don't use managed Kubernetes anymore.

mikkelam

a day ago

There are tools like Firecracker that significantly reduce container overhead: https://firecracker-microvm.github.io/

I believe fly.io uses it. Not sure if OP's tool does.

mightymoud

a day ago

No, Sidekick doesn't use Firecracker. I know fly.io is built around it, yes. They do that so they can put your app to sleep (basically shutting it down), then spin it up real quick when it gets a request. There's no place for this in the Sidekick vision.

indigodaddy

a day ago

Was wondering the same; I didn't see any mention of it on the GH page though, nor even in the roadmap.

silasb

a day ago

Nice, I'm working in the same space as you (not opensource, personal project). We landed on the same solution, encoding the commands inside Golang and distributing those via SSH.

I'm somewhat surprised not to see this more often. I'm guessing supporting multiple Linux versions could get unwieldy; I focused on Ubuntu as my target.

Differences that I see.

* I modeled mine on-top of docker-plugins (these get installed during the bootstrapping process)

* I built a custom plugin for deploying which leveraged https://github.com/Wowu/docker-rollout for zero-downtime deployments

Your solution looks much simpler than mine. I started off modeling mine off fly.io CLI, which is much more verbose Go code. I'll likely continue to use mine, but for any future VPS I'll have to give this a try.

mightymoud

a day ago

hahah seems like we went down the same rabbit hole. I also considered `docker-rollout` but decided to write my own script. Heavily inspired by the docker-rollout source code btw. Just curious, why did you decide to go with docker plugins?

Humphrey

17 hours ago

Love this! That said, I achieve the same thing manually using Docker Compose & some shell scripts. It takes a bit longer, but it has forced me to learn the lower-level tools that helpers like Sidekick use.

Also, all of these tools have great documentation on getting up and running, but SIGNIFICANTLY LESS INFO ON HOW TO MAINTAIN OVER THE LONG TERM. If I was going to start using a tool like Sidekick, Kamal, or Dokku I would want clear answers to the following:

- How do I keep my VPS host up and running with the latest security updates?
- How do I update to more recent versions of Docker?
- How do I update services that maintain state (e.g. update to a new Postgres version)? (see the sketch at the end of this comment)
- How do I seamlessly migrate to a new host (perhaps as a way to solve the above)?
- How should I manage and serve static resources & user media? (store on host or use cloud storage?)
- How do I manage database migrations during an update, and how do I control that process to avoid downtime?

I just spent an entire evening transferring a side project to a new VPS because I needed to update Postgres. The ideal self-hosting solution would make that a 20 min task.
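
For what it's worth, the Postgres-version question usually comes down to a dump-and-restore between containers, roughly like this (a hedged sketch; the container and volume names are assumptions):

  # dump everything from the old-version container
  docker exec old-postgres pg_dumpall -U postgres > dump.sql
  # start a container on the new major version with a fresh volume
  docker run -d --name new-postgres \
    -e POSTGRES_PASSWORD=changeme \
    -v pg16-data:/var/lib/postgresql/data \
    postgres:16
  # (wait for it to come up, then) restore the dump into it
  docker exec -i new-postgres psql -U postgres < dump.sql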

bluehatbrit

a day ago

This is super nice, and I'm a big fan of the detailed readme with screenshots.

I'll definitely be trying it out, although I do have a pretty nice setup now which will be hard to pull away from. It's Ansible-driven, lets me dump a compose file in a directory, along with backup and restore shell scripts, and deploys it out to my server (Hetzner dedicated via server auction).

It's really nice that this handles TLS/SSL; that was a real pain for me, as I've been using nginx, and automating certbot wasn't the most fun in the world. This looks a lot easier on that front!

mightymoud

a day ago

Sounds like you have a great setup. My vision is to make a setup like yours more accessible, really, without having to play with low-level config like Ansible. I think you should try replacing nginx with Traefik: it handles certs out of the box!

bluehatbrit

7 hours ago

Mine is dead simple. I just have a repo with all my ansible in it, and have a nested module called "service". It takes in an app name, domain name, backup schedule, and a true/false on whether it should get a public nginx setup.

Then it finds the compose file based on the app name. It templates in the domain name wherever needed in the compose file, and if it's meant to be public it'll set up an nginx config (which runs on the host, not in Docker). If the folder with the compose file has a backup.sh and restore.sh, it also copies those over and sets up a cron for the backup schedule. It's less than 70 lines of YAML, plus some more for restart handlers.

The only bit that irks me is the initial tls/ssl setup. Certbot changes the nginx config to insert the various certificates, which then makes my original nginx config out of date. I really like nginx and have used it for a long time so feel comfortable with it, but I've been considering traefik and caddy for a while just to get around this.

Although another option for me is to use a Cloudflare tunnel instead and ignore certificate management altogether. This is really attractive because it also means I can close some ports. I'll have to find some time to play around with Traefik and Caddy first though!

mxuribe

5 hours ago

I haven't automated much of my setup, but for the nginx and certbot portion, I think there's an option to have certbot NOT alter your nginx config (basically, leave it as-is). Because the change certbot applies (if I recall correctly) is really only to insert the locations of the cert files under /etc, you can put the cert locations in your nginx config up front, have certbot do its thing, and proceed. If nginx complains that the cert files don't exist yet (since certbot hasn't run at that stage), there's always the unsophisticated method: start with an nginx config without the cert locations, let certbot alter it, then have one of your automated steps re-replace the config with one that has everything you need plus the expected certbot cert paths. Like I said, not sophisticated, but it would work. I'm sure there are several other ways to do this beyond what I noted. ;-)
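
The option in question is certbot's certonly mode, which issues the certificate without touching the web server config at all (the webroot path and domain are assumptions):

  sudo certbot certonly --webroot -w /var/www/example -d example.com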

bluehatbrit

3 hours ago

As you say, nginx does complain about the cert files not existing, so that's pretty close to what I do. I just start with the non-SSL version, let certbot do its thing, and then copy the result after it's deployed (if I remember). It's mildly annoying, but it takes about 2 minutes in total, so it's been like that for 2 years now.

I'm sure there's something smarter I could do, like reading back the result afterwards and altering my local file. But honestly, once nginx is configured for an application, I almost never touch the config again anyway.

I suspect I'm more likely to move everything over to cloudflare tunnels and ditch dealing with ssl locally altogether at this point.

interstice

15 hours ago

Really like this! Funnily enough, I was just rabbit-holing into Terraform + Ansible in an effort to do essentially this, but with an anycast network. The thinking was to mirror apps across locations with a single deploy. I don't suppose you're planning something similar with this one?

apitman

13 hours ago

Anycast is a really underrated feature and I'm surprised no major VPS providers offer it. It's one of my favorite things about fly.io

johnklos

a day ago

"to self-host any app"

Docker != app. Perhaps it'd be more accurate to say, "to host any Docker container"?

prmoustache

11 hours ago

I am not saying Docker is the best way to run any app, but could you share what kind of app you can't host in Docker?

I mean, with rootless containers, yes, a lot of apps that need access to the underlying system might not work, but those are usually system stuff, not the kind you want to host on a VPS anyway. When running as root, I can't think of many.

Hexigonz

a day ago

Ohhhh, I like this. I really enjoy the flyctl CLI tools from Fly.io, which simplify things in a similar manner, but they're platform-specific. Good work!

jfdi

16 hours ago

These are great. Tooling that gets stuff out fast, and as safely as possible, lets you get to iterating openly.

Here’s a bash script I posted a while back on a different thread that does similar thing if of interest for anyone. It’s probably less nice than op’s for ex it only works with digitalocean (which is great!) - but it’s simple small and mostly readable. also assumes docker - but all via compose, with some samples like nginx w auto-ssl via le.

https://github.com/thomaswilley/tide.sh

tegiddrone

21 hours ago

Looks nice! Something I'd want in front is some sort of basic app firewall like fail2ban or CrowdSec to ban vuln scanners and other intrusion attempts. That's one nice thing about Cloudflare, since they provide some of this protection.
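
fail2ban at least is a quick win (a minimal sketch; this just enables the sshd jail with default settings):

  sudo apt install fail2ban
  printf '[sshd]\nenabled = true\n' | sudo tee /etc/fail2ban/jail.local
  sudo systemctl restart fail2ban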

strzibny

9 hours ago

Given the choice of Docker and Traefik, I would love to know the exact difference from Kamal. And btw, Kamal will soon have a new, improved version with a custom proxy.

- It's a simple CLI in Go
- It uses Docker
- There is no k8s
- Handles certs
- Zero downtime

I would love for it to support docker-compose, as some of my side projects need a library in Python, but I like having my service in Go, so I will wrap the Python library in a super simple service.

Overall this is awesome, and I love the simplicity, with the world just full of serverless, AI, and a bunch of other "stuff". Paralysis through analysis is really an issue, and when you are just trying to create a service for yourself or an MVP, it can be a real hindrance.

I have been gravitating towards Taskfile to perform similar tasks to this. Godspeed to you, and keep up the great work.

mightymoud

a day ago

Thanks man! I'm working on the docker-compose support. I got it working locally, but the ergonomics are really hard to get right, 'cause compose files are so flexible. I was even considering using the `sidekick.yaml` file as the main config and then turning that into docker compose, similar to what fly.io does with fly.toml. But I wanna keep this Docker-centric... so yeah, I'm still doing more thinking around this.

Sn0wCoder

a day ago

This looks great. I just bookmarked it and then had to double-check that I hadn't already bookmarked it a few weeks ago. Turns out I had bookmarked Caddy, which is similar but doesn't deploy the app, and I don't think it supports Docker. It was the auto-cert feature I was interested in and what had stuck out in my mind. Set up certbot and never think about it again, until my server needed to be rebuilt and I started researching. Good to go for a few months, but my hosting will be up in a year, and I'm going to switch providers and upgrade my setup to 2+ gigs so I can run Docker reliably. Thanks for posting; this one just moved to the top of the list.

indigodaddy

a day ago

In what sense would Caddy not support Docker? You can use Caddy on the host itself to proxy to a Docker container, and you could also run Caddy as a Docker container to proxy to other Docker containers (the latter would just need an initial incoming iptables rule to the Caddy container, although Caddy might document a more elegant way than iptables to route connections to a containerized Caddy; not sure).

Sn0wCoder

15 hours ago

Hey indigo,

Thank you for pointing this out. When I was looking to install Caddy, I was specifically looking for something that didn't use Docker, since my VPS is 1 GB / 1 CPU, and that's what I based my comment on. When I was reading the Sidekick docs, it seemed that by running one command it would first install Sidekick and then install the cert/app all with one Docker file, but now I'm not even sure about that.

Appreciate you pointing that out; now I'm back into analysis paralysis over which one I should use.

turtlebits

21 hours ago

What about this is highly available? On a single VPS?

Does this only support a single app?

Nice project, but the claims (production-ready? Load balancing on a single server?) are a bit ridiculous.

closewith

20 hours ago

In my experience, single apps on VPSes have far higher availability in practice than the majority of convoluted deployments.

dewey

21 hours ago

Highly available is overrated for most use cases, especially for any side projects.

pmarreck

2 hours ago

I would love this, but with Nix.

funshed

21 hours ago

Nice. You should probably explain what Traefik, sops, and age do. First time I've heard of sops; very handy!

singhrac

18 hours ago

Any possibility you’d add support for a Mac Mini deployment? I think the extra complexity would be from changing the Docker images, but of course the devil is in the details. I just have a Mac Mini and it would be great to self-host some stuff.

brirec

17 hours ago

As someone who used to love hosting things on a Mac mini, have you tried installing Linux on it to use as a dedicated server? If you do, it should handle this just like any other platform you could install it on.

vickodin

10 hours ago

Very interesting. The name's similarity to Sidekiq is a bit confusing, but it doesn't really matter.

aag

a day ago

This could be great for my projects, but I'm confused about one thing: why does it need to push to a Docker registry? The Dockerfile is local, and each image is built locally. Can't the images be stored purely locally? Perhaps I'm missing something obvious. Not using a registry would reduce the number of moving parts.

3np

21 hours ago

You can easily set up a Docker/CNCF registry[0] container running locally. It can be run either as a caching pull-through mirror for a public registry (allowing you to easily run public containers in an environment without internet access) or as a private registry for your own images (this use case). So if you want both features, you currently need two instances. Securing it for public use is a bit less trivial, but for local use it's literally a 'run' or two.

So you can do 'docker build -t localhost/whatever' and then 'docker run localhost/whatever'. Also worth checking out podman to more easily run everything rootless.

If all you need is to move images between hosts like you would files, you don't even need a registry (docker save/load).

[0]: https://distribution.github.io/distribution/
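
A sketch of the private-registry case, bound to loopback so it isn't exposed publicly (the image name is hypothetical):

  docker run -d --name registry -p 127.0.0.1:5000:5000 registry:2
  docker tag myapp:latest localhost:5000/myapp:latest
  docker push localhost:5000/myapp:latest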

mightymoud

a day ago

Locally here means on your laptop, not on your VPS. Contrary to popular opinion, I believe your source code shouldn't be on your prod machine; a Docker image is all you need. Lots of other projects push your code to the VPS to build the image there and then use it. I see no point in doing that...

sdf4j

21 hours ago

The docker registry can be avoided by exporting/importing the docker image over ssh.
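
Concretely, something like this (the image name and host are assumptions):

  # no registry at all: stream the image from laptop to VPS over SSH
  docker save myapp:latest | gzip | ssh user@vps 'gunzip | docker load'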

joseferben

a day ago

this looks amazing!

i’m building https://www.plainweb.dev and i’m looking for the simplest way to deploy a plainweb/plainstack project.

looks like sidekick has the same spirit when it comes to simplicity.

in the plainstack docs i’ve been embracing fly.io, but reliability is an issue. and sqlite web apps (which is the core of plainstack) can’t have real zero downtime deployments, unless you count the proxy holding the pending request for 30 seconds while the fly machine is deployed.

i tried kamal but it felt like non-ruby and non-rails projects are second class citizens.

i was about to document deploying plainstack to dokku, but provisioning isn’t built-in.

my dream deployment tool would be dokku + provisioning & setup, sidekick looks very close to that.

definitely going to try this and maybe even have it in the blessed deploy path for plainstack if it works well!

mightymoud

15 hours ago

Plainweb looks really promising. Keen to get in touch and see how I can help support the vision of Plainweb with Sidekick; then you can reference it in the docs.

I'll reach out on twitter

gf297

20 hours ago

What's the purpose of encrypting the env file with sops, when the age secret key is stored on the VPS? If someone has access to the encrypted env file, they will also have access to the secret key, and can decrypt it.
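
For context, the flow being questioned looks roughly like this (a sketch; the file names are assumptions):

  age-keygen -o key.txt                                    # secret key; this stays on the VPS
  sops --encrypt --age "$(age-keygen -y key.txt)" .env > .env.enc
  SOPS_AGE_KEY_FILE=key.txt sops --decrypt .env.enc        # what the VPS side does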

achempion

a day ago

This looks amazing, congrats on the release! Really looking forward to the database hosting feature as well (and probably networking and mounting data dirs).

As a side note, any reason why you decided against using Docker in swarm mode, as it should have all these features already built in?

mightymoud

a day ago

Correct me if I'm wrong, but Docker Swarm mode is made to manage multi-node clusters. This is meant for a single VPS.

achempion

a day ago

You can use Docker Swarm for just a single VPS.

  - install docker 
  - run docker swarm init
  - create yaml that describes your stack (similar to docker-compose)
  - run docker stack deploy
That's basically it. My go-to solution when I need to run some service on a single VPS.

If you want to just run a single container, you can also do this with `docker service create image:tag`

3np

21 hours ago

I thought docker-swarm had been considered neglected to the point of dead and without a future for a few years now. Is this impression incorrect/outdated?

EDIT: So apparently what used to be known as "Docker Swarm" has been posthumously renamed to "Swarm Classic"/"Classic Swarm" and is indeed dead, abandoned, and deprecated. The project currently known as "Docker Swarm" is a younger, completely different project which appears actively maintained. "Classic" still has roughly twice the GH stars and forks compared to the new one. I can't be the only one who's dismissed the latter, assuming it to be the former. Very confusing naming and branding; they would probably have way more users if they had not repurposed the name like this.

https://github.com/docker-archive/classicswarm

> Swarm Classic: a container clustering system. Not to be confused with Docker Swarm which is at https://github.com/docker/swarmkit

vander_elst

11 hours ago

How does it compare to Ansible? I didn't immediately find a comparison link.

spelunker

a day ago

Looks great! I similarly got frustrated about the complexity of doing side-project ops stuff and messed around with Kamal, but this goes the extra mile by automatically setting up TLS as well. I'll give it a try!

dvaun

a day ago

Awesome! Love that it's written in Go—I've recently tested the language for some use cases at work and find it great. I'll dive into your repo to see if I can learn anything new :)

sigmonsays

21 hours ago

tools like this are pretty sweet but I would rather just run it myself.

docker-compose with a load balancer (traefik) is fairly straightforward and awesome. the TLS setup is nice but I wildcard that and just run certgen myself.

The main thing I think is missing is some sort of authentication or zero-trust system, maybe a VPN tunnel provisioner. Most services I self-host, I do not want made public, due to security concerns.

trey-jones

19 hours ago

"Wow, this really looks significantly better than my own CLI tools"

I'm going to have to look into this pterm thing.

InvOfSmallC

21 hours ago

Can I run more than one app on the same VPS with this solution?

I currently run more than one app on a single VPS.

mattfrommars

15 hours ago

I consider myself a bit tech-savvy, knowing Linux and basic scripting.

But does anyone have a resource or link that explains how to build a service like the one OP shared here?

Because frankly, I'd feel lost reading the code from one file at a time without knowing where to start.

Plus, it's written in Go, which I am not familiar with.

Canada

a day ago

Very well presented, the README.md looks great.

mightymoud

a day ago

Thanks! This comment really makes my day!

jjkmk

a day ago

Looks really good, going to test it out.

hkon

18 hours ago

Have used CapRover. Good that more tools are entering this space.

devmor

a day ago

Wow this is super handy! I have paid tools that function like this for a couple of specific stacks but this seems like an amazing general purpose tool.

Considering the ease of setup the README purports, a few hours of dealing with this might save me a couple hundred bucks a month in service fees.

mightymoud

a day ago

Glad you found this useful. Let me know if you have specific features in mind.

devmor

a day ago

I didn't see anything in the readme about deploy hooks. Do you have a feature that lets users run arbitrary commands after the image is deployed? I have common use cases for both pre-switchover (e.g. database migrations) and post-switchover (e.g. resource caching, worker spinup) hooks.

mightymoud

a day ago

Yup, deploy hooks are on my mind; I just didn't put them in the README. Shouldn't be very hard to implement. Might do this first, before docker-compose support.

superkuh

a day ago

I don't know about you, but I find the single command $ sudo apt install $x to be much faster; it offers a wider range of software and is more reliable, less fragile, easier to network, and more secure when it comes to running applications on an Ubuntu VPS. The only thing the normal way of running applications is less good at (compared to this dependency manager manager) is "zero downtime".

LVB

a day ago

I’m not sure what you’re comparing that to. This project is about easily deploying your own app/side-project, which wouldn’t be available via apt.

superkuh

a day ago

99% of what people run in docker is just normal applications.

indigodaddy

a day ago

Not sure how true that statement is in general, but it's definitely not true for this project's described use case, e.g. your own side project/app, which you obviously can't "apt install". Unless OP meant the supporting hosting/proxy infra like Apache/nginx, which, yeah, is exactly what this project is trying to abstract away for the user.

At the end of the day if you use this tool I guess all you’d need to worry about (given the tool is stable and works obviously) would be apt upgrades of the OS and even that you can automate, and then just figure out your reboot strategy. For me, I don’t even want to deal with that, so I happily use fly.

mightymoud

a day ago

Respect! Fly is an absolute beast and to me is best in class for sure!

mightymoud

a day ago

I think this is just miscommunication; I meant a side project/application that you made yourself, not an application package you install on Ubuntu.