Epskampie
10 months ago
> Requirements on the executing device: Docker is required.
arjvik
10 months ago
Good friend built dockerc[1] which doesn't have this limitation!
hnuser123456
10 months ago
That screenshot in the readme is hilarious. Nice project.
ecnahc515
10 months ago
Instead it requires QEMU!
remram
10 months ago
I can't tell what this does from the readme. Does it package a container runtime in the exe? Or a virtual machine? Something else?
vinceguidry
10 months ago
Looks like macOS and Windows support is still being worked on.
ugh123
10 months ago
lol guy makes a fair point. Open source software suffers from this expectation that anyone interested in the project must be technical enough to be able to clone, compile, and fix the inevitable issues just to get something running and usable.
Hamuko
10 months ago
I'd say that a lot of people suffer from the expectation that just because I made a tool for myself and put it up on GitHub in case someone else would also enjoy it, I'm now obligated to provide support for you. Especially when the person in the screenshot is angry over the lack of a Windows binary.
dowager_dan99
10 months ago
Thank goodness; solving this "problem" for the general internet destroyed it. Your point seems to be someone else should do that for every stupid asshole on the web?
dheera
10 months ago
But will this run inside another docker container?
I normally hate things shipped as containers because I often want to use it inside a docker container and docker-in-docker just seems like a messy waste of resources.
vinceguidry
10 months ago
Docker in Docker is not a waste of resources; it just makes the same container runtime the outer container is running on available to the inner one. Really a better solution than a control plane like Kubernetes.
throwaway127482
10 months ago
Aren't you describing docker-out-of-docker rather than docker-in-docker?
vinceguidry
10 months ago
No, you're running docker inside a docker container. The container provides a docker daemon that just forwards the connection to the same runtime. It's not running two dockers, but you are still running docker inside docker.
https://medium.com/@moshedana058/understanding-docker-in-doc...
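Roughly, the socket-forwarding setup looks like this (image name and command are just illustrative):

```shell
# Share the host's Docker socket with a container, so `docker`
# inside it drives the same daemon rather than a nested one.
docker run --rm -it \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker:cli docker ps
```

The inner `docker ps` lists the host's containers, since both CLIs talk to the same daemon.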
remram
10 months ago
Docker is not emulation so there's no waste of resources.
rcfox
10 months ago
Doesn't podman get around a lot of those issues?
dheera
10 months ago
Aw hell, more band-aids because people don't want to get software distribution done right.
Can we please go back to the days of sudo dpkg -i foo.deb and then just /usr/bin/foo ?
johnisgood
10 months ago
I am still using "ar x" and "tar xvf" for .deb files on Void Linux, because some projects only release .deb files!
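For anyone curious, the full extraction goes roughly like this (filenames are examples; the inner tarball's compression varies between .gz, .xz, and .zst depending on the package):

```shell
# A .deb is just an ar archive wrapping two tarballs.
ar x foo.deb          # yields debian-binary, control.tar.*, data.tar.*
tar xvf data.tar.xz   # the actual payload, rooted at ./usr, ./etc, ...
```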
harha_
10 months ago
Yeah, it feels like nothing but a little trick. Why would anyone want to actually use this? The exe simply calls docker; it can embed an image into the exe, but even then it first calls docker to load the embedded image.
jve
10 months ago
I see a use case. The other day I wished that I could pack CLI commands as docker containers and execute them as CLI commands and get return codes.
I haven't tried this stuff, but maybe this is something in that direction.
lelanthran
10 months ago
> I see a use case. The other day I wished that I could pack CLI commands as docker containers and execute them as CLI commands and get return codes
I don't understand this requirement/specification; presumably this use-case will not be satisfied by a shell script, but I don't see why not.
What are you wanting from this use-case that can't be done with a shell script?
lazide
10 months ago
Presumably, they don’t want to write/maintain a shell script wrapper for every time they want to do this, when they could use a tool which does it for them.
lelanthran
10 months ago
> Presumably, they don’t want to write/maintain a shell script wrapper for every time they want to do this, when they could use a tool which does it for them.
How's "packing" cli commands into a shell script any different from "packing" CLI commands into a container?
lazide
10 months ago
Calling a container on the CLI is a pain in the ass.
People generally don't put stuff into containers if it already works on the CLI in whatever environment you're in. Stuff that doesn't, of course they do.
Having a convenient shell script wrapper to make that not a pain in the ass, while letting all the environment management stuff still work correctly in a container is convenient.
Writing said wrapper each time, however, is a pain in the ass.
Generating one, makes it not such a pain in the ass to use.
So then you get convenient CLI usage of something that needs a container to not be a pain in the ass to install/use.
james_marks
10 months ago
An icon a non-technical user can click to run it.
cmeacham98
10 months ago
A non-technical user that has docker installed?
matsemann
10 months ago
I do that for a lot of stuff. Got a bit annoyed with internal tools that were so difficult to set up (needed this exact version of global Python, expected this and that to be in the path, constantly needed to be updated, and then stuff broke again). So I built a docker image instead where everything is managed, and when I need to update or change stuff I can do it from a clean slate without affecting anything else on my computer.
To use it, it's basically just scripts loaded into my shell. So if I do "toolname command args" it will spin up the container, mount the current folder and some config folders some tools expect, forward some ports, then pass the command and args to the container which runs them.
99% of the time it works smoothly. The annoying part is when some tool depends on some other tool on the host machine, like when it wants to do some git stuff. Then I have to have git installed and my keys copied in as well, for instance.
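The wrapper is just a shell function; a minimal sketch (image name "internal-tools" and the config path are made up for illustration):

```shell
# Run a containerized tool as if it were a local command:
# mount the current dir, pass the command through, and let the
# container's exit code propagate for normal shell error handling.
toolname() {
  docker run --rm \
    -v "$PWD":/work -w /work \
    -v "$HOME/.toolconfig":/root/.toolconfig \
    internal-tools "$@"
}
# Usage: toolname command args   ($? is the tool's exit code)
```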
rzzzt
10 months ago
CoreOS had a toolbox container that worked similarly to the one you have (the Podman people took over its maintenance): https://github.com/containers/toolbox
endofreach
10 months ago
> my keys copied in as well for instance.
Tip: you could also forward your ssh agent. I remember it was a bit of a pain in the ass on macos and a windows WSL2 setup, but likely worth it for your setup.
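On Linux the forwarding is roughly this (image name is just an example; Docker Desktop on macOS exposes a special agent socket instead, which is where the pain comes in):

```shell
# Mount the host's ssh-agent socket and point SSH_AUTH_SOCK at it,
# so the container can use the host's keys without copying them in.
docker run --rm -it \
  -v "$SSH_AUTH_SOCK":/ssh-agent \
  -e SSH_AUTH_SOCK=/ssh-agent \
  my-tools ssh -T git@github.com
```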
johncs
10 months ago
Basically the same as Python’s zipapps which have some niche use cases.
Before zipapp came out I built superzippy to do it. Needed to distribute some python tooling to users in a university where everyone was running Linux on lab computers. Worked perfectly for it.
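For reference, a minimal zipapp round-trip looks roughly like this (paths and the printed message are illustrative):

```python
import os
import subprocess
import sys
import tempfile
import zipapp

# Build a tiny app directory with a __main__.py entry point.
src = tempfile.mkdtemp()
with open(os.path.join(src, "__main__.py"), "w") as f:
    f.write('print("hello from zipapp")\n')

# Pack it into a single-file .pyz archive.
target = os.path.join(tempfile.mkdtemp(), "hello.pyz")
zipapp.create_archive(src, target)

# The archive runs anywhere a compatible Python interpreter exists.
out = subprocess.run([sys.executable, target], capture_output=True, text=True)
print(out.stdout.strip())  # hello from zipapp
```

The catch, as with the tool under discussion, is that the target machine still needs the runtime (Python here, Docker there) already installed.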
j45
10 months ago
Could be ease of use for end users who don't docker.
worldsayshi
10 months ago
But now you have two problems.
throwanem
10 months ago
The first of which can be p90 solved by "Okay, type 'apt install dash capital why docker return,' tell me what happens...okay, and 'docker dash vee' says...great! Now..."
Probably takes a couple minutes, maybe less if you've got a good fast distro mirror nearby. More if you're trying to explain it to a biologist - love those folks, they do great work, incredible parties, not always at home in the digital domain.
alumic
10 months ago
I was so blown away by the title and equally disappointed to discover this line.
Pack it in, guys. No magic today.
stingraycharles
10 months ago
Thank god there’s still this project that can build single executables that work on multiple OS’es, I’m still amazed by that level of magic.
cozyman
10 months ago
[flagged]
Hamuko
10 months ago
I feel like it's much easier to send a docker run snippet than an executable binary to my Docker-using friends. I usually try to include an example `docker run` and/or Docker Compose snippet in my projects too.
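Something like this (image name and flags made up for illustration) is usually all a Docker-using friend needs:

```shell
# One-liner to share instead of a binary: mounts the current
# directory so the tool can read/write the user's files.
docker run --rm -v "$PWD":/data hamuko/mytool --input /data/file.txt
```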
drawfloat
10 months ago
Is there any alternative way of achieving a similar goal (shipping a container to non technical customers that they can run as if it were an application)?
regularfry
10 months ago
It feels like there ought to be a way to wrap a UML kernel build with a container image. Never seen it done, but I can't think of an obvious reason why it wouldn't work.
mrbluecoat
10 months ago
See the dockerc comment above