Zb: An Early-Stage Build System

224 points, posted 21 hours ago
by zombiezen

98 Comments

jjuliano

7 minutes ago

I made a graph-based orchestrator - https://github.com/jjuliano/runner - it uses declarative YAML, with preflight, postflight, and skip conditions. I think it can also be a full-fledged build system.

bjourne

an hour ago

I've been using WAF for ages, so naturally I wonder how this system compares to WAF. My experience with build systems is that they all get the easy parts right. You can compile C and C++ code and they successfully scan header files for dependencies. But FEW get the hard parts right. E.g., compiling LaTeX with multiple figures, custom fonts and classes, and multiple bib files. It requires correctly interfacing with pdflatex, which is a complete PITA as it spews intermediate files everywhere and puts constraints on the current directory. Most build tools can't.

What I want in a build tool is universality. Sometimes a whole directory tree is the dependency of a target. Sometimes it's a URL, and the build tool should correctly download and cache that URL. Sometimes the prerequisite is training an ML model.

mikepurvis

20 hours ago

Whoa, nifty. Can you speak more to the interop issues with Nix? I've been working on a pretty large Nix deployment in the robotics space for the past 3ish years, and the infrastructure side is the biggest pain point:

* Running a bare `nix build` in your CI isn't really enough: no hosted logs, lack of proper prioritization, and you may end up double-building things.

* Running your own instance of Hydra is a gigantic pain; it's a big ball of Perl with compiled components that link right into Nix internals, and an architectural fiasco.

* SaaS solutions are limited and lack maturity (Hercules CI is GitHub-only; nixbuild.net is based in Europe and, last I checked, was still missing some features I needed).

* Tvix is cool but not ready for primetime, and the authors oppose flakes, which is a deal-breaker for me.

Something barebones that's capable of running these builds and could be wrapped in a sane REST API and simple web frontend would be very appealing.

zombiezen

19 hours ago

Tracking issue is https://github.com/256lights/zb/issues/2

The hurdles to interop I see are:

- Nixpkgs is not content-addressed (yet). I made a conscious decision to only support content-addressed derivations in zb to simplify the build model and provide easier-to-understand guarantees to users. As a result, the store paths are different (/zb/store instead of /nix/store). Which leads to...

- Nix store objects have no notion of cross-store references. I am not sure how many assumptions are made on this in the codebases, but it seems gnarly in general. (E.g., how would GC work? How do you download the closure of a cross-store object?)

- In order to obtain Nixpkgs derivations, you need to run a Nix evaluator, which means you still need Nix installed. I'm not sure of a way around this, and it seems like it would be a hassle for users.
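To illustrate the distinction (my own sketch, not zb's actual hashing scheme or path layout): in a content-addressed store, an object's path is derived from the bytes of its output, so identical content always lands at the same path no matter how it was produced.

```python
import hashlib

def store_path(store_dir: str, name: str, output: bytes) -> str:
    # Hypothetical scheme: name the store object by a hash of its
    # *output* bytes rather than a hash of its build instructions.
    digest = hashlib.sha256(output).hexdigest()[:32]
    return f"{store_dir}/{digest}-{name}"

# Identical content => identical path, even across machines.
assert store_path("/zb/store", "hello", b"bits") == \
       store_path("/zb/store", "hello", b"bits")
# Different content => different path.
assert store_path("/zb/store", "hello", b"bits") != \
       store_path("/zb/store", "hello", b"other bits")
```

An input-addressed store (Nix's default) instead hashes the build instructions, which is one reason the two systems' store paths can't line up.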

I have experienced the same friction in build infra for Nix. My hope is that reusing the binary cache layer and introducing a JSON-RPC-based public API for running builds (already checked in, but it needs to be documented and cleaned up) will make the infrastructure ecosystem easier to build on.

flurie

20 hours ago

I've been wondering idly if it's possible for Nix to support the Bazel Remote Execution API that seems to be catching on[1] more generally.

[1] https://github.com/bazelbuild/remote-apis?tab=readme-ov-file...

mikepurvis

17 hours ago

I’m very interested in better bidirectional interop between Bazel and Nix; it seems such a travesty for two projects that are so ideologically aligned to work so poorly together. Nix should be able to run builds on Bazel, and Bazel builds should decompose and cache into multiple store paths in a Nix environment (think of how poetry2nix works).

flurie

16 hours ago

If you're attending BazelCon I'd love to have a chat with you about this stuff in some more detail. (If you're not I'd still love to have a chat!)

mikepurvis

2 hours ago

I'm afraid I'm not planning on it; I don't make it to the west coast nearly as often as I should. Feel free to hmu on LinkedIn or something though; I'd love to get plugged into some people interested in this stuff, and I'm about to have a block of time available when I could potentially work on it.

Rucadi

20 hours ago

Why are flakes such a deal-breaker? While not ideal, you can still tag your versions in the .nix file instead of the lockfile.

I even had to avoid flakes in a system I developed that was used by ~200 developers, since it involved a non-NixOS OS and user secrets (tokens, etc.). With flakes I had to keep track of the secrets (which was a pain point, since the developers obviously couldn't push them to the Git repo), but Nix flakes don't handle files omitted from Git well (Nix commands ignore them too). In the end, the workarounds were too messy and I had to drop flakes entirely.

mikepurvis

17 hours ago

As a new user, I learned flakes first, and the tie-in with Git tags/branches and the corresponding CLI ergonomics aren’t something I’d be able to give up.

laurentlb

20 hours ago

I'd like to know more about the "Support for non-determinism" and how that differs from other build systems. Usually, build systems rerun actions when at least one of the inputs has changed. Are non-deterministic targets rerun all the time?

Also, I'm curious to know if you've considered using Starlark or the build file syntax used in multiple other recent build systems (Bazel, Buck, Please, Pants).

zombiezen

20 hours ago

(Hi! I recognize your name from Bazel mailing lists but I forget whether we've talked before.)

I'm mostly contrasting with Nix, which has difficulty with cache poisoning when faced with non-deterministic build steps while using input-addressing (the default mode). If zb encounters a build target with multiple cached outputs for the same inputs, it rebuilds and then relies on content-addressing to obtain build outputs for subsequent steps if possible. (I have an open issue for marking a target as intentionally non-deterministic and always triggering this re-run behavior: https://github.com/256lights/zb/issues/33)
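A toy rendering of that rebuild rule as I read it (the structure and names here are mine, not zb's actual code):

```python
import hashlib

def content_hash(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()[:12]

# Maps an input hash to every output hash ever observed for it.
cache: dict[str, set[str]] = {}

def build(inputs: bytes, run_builder) -> str:
    """If exactly one output is cached for these inputs, reuse it.
    Multiple cached outputs mean the step looked non-deterministic,
    so re-run it; content addressing then lets later steps reuse
    work whenever the fresh output matches a previous one."""
    key = content_hash(inputs)
    seen = cache.setdefault(key, set())
    if len(seen) == 1:
        return next(iter(seen))          # unambiguous: reuse the cache
    out = content_hash(run_builder(inputs))  # ambiguous or empty: rebuild
    seen.add(out)
    return out
```

With one cached output the builder is never re-invoked; as soon as a second, conflicting output is observed, every subsequent request re-runs the step.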

I'll admit I haven't done my research into how Bazel handles non-determinism, especially nowadays, so I can't remark there. I know from my Google days that even writing genrules you had to be careful about introducing non-determinism, but I forget how that failure mode plays out. If you have a good link (or don't mind giving a quick summary), I'd love to read up.

I have considered Starlark, and still might end up using it. The critical feature I wanted to bolt in from Nix was having strings carrying dependency information (see https://github.com/NixOS/nix/blob/2f678331d59451dd6f1d9512cb... for a description of the feature). In my prototyping, this was pretty simple to bolt on to Lua, but I'm not sure how disruptive that would be to Starlark. Nix configurations tend to be a bit more complex than Bazel ones, so having a more full-featured language felt more appropriate. Still exploring the design space!
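The "strings carrying dependency information" feature can be sketched roughly like this (illustrative Python; the class and field names are mine, not Nix's or zb's implementation):

```python
class DepString:
    """A string that remembers which store paths were interpolated
    into it, so using it in a build target implicitly records a
    dependency -- the 'string context' idea linked above."""
    def __init__(self, text: str, deps: frozenset = frozenset()):
        self.text = text
        self.deps = deps

    def __add__(self, other) -> "DepString":
        # Concatenation unions the dependency sets.
        if isinstance(other, DepString):
            return DepString(self.text + other.text, self.deps | other.deps)
        return DepString(self.text + other, self.deps)

gcc = DepString("/zb/store/abc123-gcc/bin/gcc",
                frozenset({"/zb/store/abc123-gcc"}))
cmd = DepString("exec ") + gcc + " -O2"
# The command string now knows it depends on the gcc store object.
assert cmd.deps == frozenset({"/zb/store/abc123-gcc"})
```

The point is that the user just writes string interpolation; the dependency edge comes along for free.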

aseipp

16 hours ago

I mean, to be fair, Nix is nothing more than a big ass pile of genrule() calls, at the end of the day. Everything is really just genrule. Nix just makes it all work with the sandbox it puts all builds in. Bazel has an equivalent sandbox and I'm pretty sure you can sandbox genrule so it's in a nice, hermetic container. (Side note, but one of my biggest pet peeves is that Nix without the sandbox is actually fundamentally _broken_, yet we let people install it without the sandbox. I have no idea why "Install this thing in a broken way!" is even offered as an option. Ridiculous.)

The way Nix-like systems achieve hermetic sandboxing isn't so much a technical feat, in my mind. That's part of it -- sure, you need to get rid of /dev devices, and every build always has to look like it happens at /tmp/build within a mount namespace, and you need to set SOURCE_DATE_EPOCH and blah blah, stuff like that.

But it's also a social one, because with Nix you are expected to wrap arbitrary build systems and package mechanisms and "go where they are." That means you have to bludgeon every random hostile badly written thing into working inside the sandbox you designed, carve out exceptions, and write patches for things that don't -- and get them working in a deterministic way. For example, you have to change the default search paths for nearly every single tool to look inside calculated Nix store paths. That's not a technical feat; it's mostly just a huge amount of hard work to write all the abstractions, like buildRustPackage or mkDerivation. You need to patch every build system like CMake or SCons in order to alleviate some of their assumptions, and so on and so forth.

Bazel- and Buck-like systems do not avoid this pain, but they do pay for it in a different way. They don't "go where they are"; they expect everyone to "come to them." Culturally, Bazel users do not accept "just run Make under a sandbox" nearly as much. The idea is to write everything as a BUILD file rule, rewriting the build system from scratch, and those BUILD files instead should perform the build "natively" in a way that is designed to work hermetically. So you don't run ./configure; you actually pick an exact set of configuration options and build with that 100% of the time. Therefore, the impurities in the build are removed "by design", which makes the strict requirements on a sandbox somewhat more lenient. You still need the sandbox, but by definition your builds are much more robust anyway. So you are trading the pain of wrapping every system for the pain of integrating every system manually. They're not the same thing, but they have a lot of overlap.

So the answer is: yes, you can write impure genrules, but the vast majority of impurity is totally encapsulated in a way that forces it to be pure, just like Nix, so it's mostly just a small nit rather than truly fundamental. The real question is a matter of when you want to pay the piper.

kaba0

8 hours ago

You (plural) seem to know a great deal about build systems, so I figured I would ask: what’s your opinion of Mill? It’s a not-so-well-known build tool written in Scala, but I find its underlying primitives are absolutely on point.

For those who don’t know, its build descriptors are just Scala classes with functions. A function calling another function denotes a dependency, and that’s pretty much it. The build tool will automatically take care of parallelizing build steps and caching them.
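The call-equals-dependency idea can be sketched in a few lines (plain Python standing in for Scala, and memoization standing in for Mill's build cache; not Mill's actual API):

```python
import functools

# Each build step is a function; calling another step *is* declaring
# a dependency on it, and memoization plays the role of the cache.
@functools.cache
def sources() -> str:
    return "src/*.scala"

@functools.cache
def compiled() -> str:
    return f"compiled({sources()})"   # depends on sources()

@functools.cache
def jar() -> str:
    return f"jar({compiled()})"       # depends on compiled(), and
                                      # transitively on sources()
```

A real tool additionally records the call graph, so it can invalidate steps selectively and run independent ones in parallel.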

How do you think it relates to Nix et al. on a technical level?

msvan

20 hours ago

As a current Nix user, what I would really like is a statically typed language to define builds. Recreating Nix without addressing that feels like a missed opportunity.

zombiezen

20 hours ago

The Lua VSCode extension adds a type system that works really well IME

0cf8612b2e1e

20 hours ago

There are Lua flavors with typing. Teal is one I've heard of; it compiles down to regular Lua, like TypeScript.

Rucadi

20 hours ago

For me the killer feature is Windows support. Ericsson is doing a great job bringing Nix to Windows, but the process is understandably slow. If this project is similar enough to Nix that I can easily translate zb derivations to Nix derivations, I'm willing to use it on Windows. (It's not like Nix has Windows programs in Nixpkgs anyway; I have to bring those in on my own.)

The problem for me is that I see no benefit in using this over the Nix language (which I kinda like a lot right now).

droelf

20 hours ago

We're working on rattler-build (https://github.com/prefix-dev/rattler-build/), a build system inspired by Apko / conda-build that uses YAML files to statically define dependencies. It works really well with pixi (our package manager), but also with any other conda-compatible package manager (mamba, conda).

And it has Windows support, of course. It can also be used to build your own distribution (e.g. here is one for a bunch of Rust utilities: https://github.com/wolfv/rust-forge)

hamandcheese

19 hours ago

> Ericsson is doing a great job bringing nix into Windows

Is this Ericsson... the corporation? Windows support for nix is something I don't hear much about, but if there is progress being made (even slowly) I'd love to know more.

Iceland_jack

21 hours ago

o11c

20 hours ago

Definitely interesting, but it's flat-out wrong about the limitations of `make`.

In particular, the `release.txt` task is trivial by adding a dummy rule to generate and include dependencies; see https://www.gnu.org/software/make/manual/html_node/Remaking-... (be sure to add empty rules to handle the case of deleted dynamic dependencies). You can use hashes instead of file modification times by adding a different kind of dummy rule. The only downside is that you have to think about the performance a little.
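A minimal sketch of the pattern being described, assuming a GCC-style compiler that emits `.d` dependency fragments via `-MMD` (recipe lines must be indented with a literal tab):

```make
# Sketch of the include-generated-deps pattern (GNU make).  -MMD makes
# the compiler emit a %.d fragment listing the headers each object
# depends on; -MP adds empty rules so deleted headers don't break
# incremental builds.
SRCS := $(wildcard *.c)
OBJS := $(SRCS:.c=.o)

app: $(OBJS)
	$(CC) -o $@ $^

%.o: %.c
	$(CC) -MMD -MP -c -o $@ $<

# Pull in the generated fragments; '-' ignores ones that don't exist yet.
-include $(OBJS:.o=.d)
```

GNU make re-executes itself if any included makefile (here, the `.d` files) is remade, which is the "Remaking Makefiles" mechanism linked above.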

I imagine it's possible for a project to have some kind of dynamic dependencies that GNU make can't handle, but I dare say that any such dependency tree is hard to understand for humans too, and thus should be avoided regardless. By contrast, in many other build tools it is impossible to handle some of the things that are trivial in `make`.

(if you're not using GNU make, you are the problem; do not blame `make`)

bjourne

an hour ago

I guess you aren't keen on Java then? Complex dynamic dependency graphs aren't difficult for humans to handle, nor for many build tools other than make.

evanjrowley

21 hours ago

This looks really exciting and I absolutely must give it a try. Well done! At face value the vision and design choices appear to be great.

fizlebit

8 hours ago

I can't help but wonder whether the major problem is actually APIs changing from version to version of software and keeping everything compatible.

If the build language is Lua, doesn't it support top-level variables? It probably just takes a few folks manipulating top-level variables before the build steps and build logic are no longer hermetic, but instead plagued by side effects.

I think you need to build inside very effective sandboxes to stop build side effects and then you need your sandboxes to be very fast.

Anyway, nice to see attempts at more innovation in the build space.

I imagine a kind of merging of build systems, deployment systems, and running systems. Somehow a manageable sea of distributed processes running on a distributed operating system. I suspect Alan Kay thought that Smalltalk might evolve in that direction, but there are many things to solve, including billing, security, and somehow making the sea of objects comprehensible. It has the hope of everything being data-driven, aka structured, schema'd, versioned, JSON-like data, rather than the horrendous mess that is Unix configuration files and system information.

There was an interesting talk on Developer Voices, perhaps related to a merger of OCaml and Erlang, that moved a little in that direction.

weitzj

4 hours ago

Great idea. Just a tip: you can wrap your Lua part in Cosmopolitan C. This way you get Lua on many architectures and OSes. Also, Cosmopolitan can be bootstrapped with TinyCC, I guess. And personally, wrapping your Lua code in https://fennel-lang.org/ would be nice.

This way, with libcosmopolitan, you could just check in a copy of your build tool in a project, to be self-sufficient. Think of it like gradlew (the Gradle bash/bat wrapper) but completely self-contained and air-gapped.

https://github.com/jart/cosmopolitan

zcam

3 hours ago

+1 for fennel

Ericson2314

18 hours ago

Nice to see Windows support. We/I are working on that with upstream Nix too.

Also I hope we can keep the store layer compatible. It would be good to replace ATerm with JSON, for example. We should coordinate that!

zombiezen

15 hours ago

Rad! Yes, please keep me in the loop!

stmonty

17 hours ago

This looks awesome. I've had this same exact idea for a build system, but I haven't had the time to build it yet. Cool to see someone basically build what I had imagined!

packetlost

21 hours ago

One request that I would make of a project like this is to support distributed builds out of the box. Like, really basic support for identical builder hosts (this is much easier now than in the past with containers) and caching of targets. Otherwise, this looks great! Big fan of the choice of Lua, though the modifications to strings might make it difficult to onboard new users depending on how the modification was made.

zombiezen

20 hours ago

Yup, remote building and caching is on my radar. I expect it will work much in the same way Nix does now, although I'm being a bit more deliberate in creating an RPC layer so build coordinators and other such tools are more straightforward to build.

The string tweak is broadly transparent to users. IME with Nix, this works the way people expect (i.e., if you use a dependency variable in your build target, it adds a dependency).

droelf

20 hours ago

Cool that this space is getting more attention - I just came from the reproducible builds summit in Hamburg. We're working on similar low level build system tools with rattler-build and pixi. Would love to have a chat and potentially figure out if collaboration is possible.

zombiezen

19 hours ago

Cool! Contact info is in my profile and on my website. :)

steeleduncan

20 hours ago

Looks great, Nix-with-Lua that also supports Windows would be amazing. Two questions if I may

- Does this sandbox builds the way flakes do?

- What is MinGW used for on Windows? Does this rely on the MinGW userland, or is it just because it would be painful to write a full bootstrap for a windows compiler while also developing Zb?

Also, it's great to see live-bootstrap in there. I love the purity of how Guix's packages are built, and I like the idea that Zb will be that way from the start.

zombiezen

19 hours ago

Nix sandboxes derivation runs on Linux even without flakes, and I'm planning on implementing that, yes: https://github.com/256lights/zb/issues/29 and https://github.com/256lights/zb/issues/31

MinGW is used to build Lua using cgo. I'd like to remove that part; see https://github.com/256lights/zb/issues/28. I haven't started the userspace for Windows yet (https://github.com/256lights/zb/issues/6), but I suspect that it will be more "download the Visual C++ compiler binary from this URL" than the Linux source bootstrap.

Yeah, I'm happy with live-bootstrap, too! I tried emulating Guix's bootstrap, but it depended a little too much on Scheme for me to use as-is. live-bootstrap has mostly worked out-of-the-box, which was a great validation test for this approach.

steeleduncan

8 hours ago

Thanks for answering and I really hope it works out. A Nix alternative with less friction would be very welcome!

teddyh

8 hours ago

It’s amazing the lengths some people will go to in order to avoid scary parentheses.

mtndew4brkfst

4 hours ago

It's not Guile I want to avoid, it's GNU ideologues who insist on every freedom except "use proprietary software and hardware" and shame people for doing so.

greener_grass

19 hours ago

I'm excited by this!

Quick question: if the build graph can be dynamic (I think they call it monadic in the paper), then does it become impossible to reason about the build statically? I think this is why Bazel has a static graph and why it scales so well.

zombiezen

19 hours ago

According to Build systems à la carte, "it is not possible to express dynamic dependencies in [Bazel's] user-defined build rules; however some of the pre-defined build rules require dynamic dependencies and the internal build engine can cope with them by using a restarting task scheduler, which is similar to that of Excel but does not use the calc chain." (p6)

IME import-from-derivation and similar in Nix is usually used for importing build configurations from remote repositories. Bazel has a repository rule system that is similar: https://bazel.build/extending/repo

So to answer your question: yes from the strictest possible definition, but in practice, I believe the tradeoffs are acceptable.

aseipp

17 hours ago

Buck2 can express dynamic dependencies, so it can capture dynamic compilation problems like C++ modules, OCaml/Fortran modules, etc. in "user space", without the built-in support Bazel requires. The secret to why is twofold. One, your internal build graph can be fully dynamic at the implementation level; the question is how much expressivity you expose to the user in letting them leverage and control the dynamic graph. Just because you have a Monad doesn't mean you have to expose it. You can just expose an Applicative.

And actually, if you take the view that build systems are a form of staged programming, then all build systems are monadic, because the first stage is building the graph at all, and the second stage is evaluating it. Make, for example, has to parse the Makefiles, and during this phase it constructs the graph... dynamically! Based on the input source code! It is only during the second phase, later, when rules are evaluated, that the graph is static and all edges must be known. See some notes from Neil Mitchell about that.[1]

The other key is in a system like Buck or Bazel, there are actually two graphs that are clearly defined. There is the target graph where you have abstract dependencies between things (a cxx_binary depends on a cxx_library), and there is the action graph (the command gcc must run before the ld command can run).

You cannot have dynamic nodes in the target graph. Target graph construction MUST be deterministic and "complete" in the sense that it captures all nodes. This is really important because dynamism would break features like target determination: given a list of changed files, what changed targets need to be rebuilt? You cannot know the complete list of targets when the target graph is dynamic and evaluation can produce new nodes. That's what everyone means when they say it's "scalable": that you can detect, given only a list of input files from version control, what the difference between two build graphs is. And then you can go build exactly those targets and skip everything else. So, if you make a small change to a monumentally sized codebase, you don't have to rebuild everything. Just a very small, locally impacted part of the whole pie.

In other words, "small changes to the code should have small changes in the resulting build." That's incremental programming in a nutshell.
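Target determination over a static target graph can be sketched as a reverse-dependency walk (illustrative Python, not Bazel's or Buck2's implementation):

```python
from collections import defaultdict

def affected_targets(targets: dict, changed_files: set) -> set:
    """Given a static target graph (target -> direct dependencies,
    where leaves are source files) and a set of changed files, walk
    the reverse edges to find every target that needs rebuilding.
    If the graph were dynamic, this walk couldn't be complete."""
    rdeps = defaultdict(set)
    for target, deps in targets.items():
        for dep in deps:
            rdeps[dep].add(target)
    affected, frontier = set(), set(changed_files)
    while frontier:
        nxt = set()
        for node in frontier:
            for parent in rdeps[node]:
                if parent not in affected:
                    affected.add(parent)
                    nxt.add(parent)
        frontier = nxt
    return affected

graph = {
    "lib":  {"lib.c"},
    "app":  {"lib", "main.c"},
    "docs": {"README.md"},
}
assert affected_targets(graph, {"lib.c"}) == {"lib", "app"}
```

A change to `README.md` touches only `docs`; everything else is skipped, which is the "small change, small rebuild" property.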

OK, so there's no target graph dynamism. But you can have dynamic actions in the action graph, where the edges to those dynamic actions are well defined. For example, compiling an OCaml module first requires you to build a .m file, then read it, then run some set of commands in an order dictated by the .m file. The output is an .a file. So you always know the in/out edges for these actions, but you just don't know what order you need to run compiler commands in. That dynamic action can be captured without breaking the other stuff. There are some more notes from Neil about this.[2]

Under this interpretation, Nix also defines a static target graph in the sense that every store path/derivation is a node represented as term in the pure, lazy lambda calculus (with records). When you evaluate a Nix expression, it produces a fully closed term, and terms that are already evaluated previously (packaged and built) are shared and reused. The sharing is how "target determination" is achieved; you actually evaluate everything and anything that is shared is "free."

And under this same interpretation, the pure subset of Zb programs should, by definition, also construct a static target graph. It's not enough just to sandbox I/O; there are some other things too: for example, if you construct hash tables with undefined iteration order, you might screw the pooch somewhere down the line. Or you could just make up things out of thin air, I guess. But if you restrict yourself to the pure subset of Zb programs, you should in theory be fine (and that pure subset is arguably the actually valuable, useful subset, so it's maybe fine).

[1] https://ndmitchell.com/downloads/paper-implementing_applicat...

[2] https://ndmitchell.com/downloads/slides-somewhat_dynamic_bui...

alxmng

20 hours ago

Did you consider writing a nicer language that compiles to Nix? A "friendly" tool on the outside with Nix inside.

zombiezen

19 hours ago

Yup, that was how I built the prototype: https://www.zombiezen.com/blog/2024/06/zb-build-system-proto...

The last commit using that approach was https://github.com/256lights/zb/tree/558c6f52b7ef915428c9af9... if you want to try it out. And actually, I haven't touched the Lua frontend much since I swapped out the backend: the .drv files it writes are the same.

The motivation behind replacing the backend was content-addressability and Windows support, which have been slow to be adopted in Nix core.

Rucadi

19 hours ago

I don't think Nix is that awful. While there are some tasks that are more difficult or can be a little verbose (if you want to play a lot with attribute sets/lists or string manipulation), most of the time you'll end up just writing bash or using Nix as a templating language.

jiggawatts

17 hours ago

From the Build Systems à la Carte paper:

Topological. The topological scheduler pre-computes a linear order of tasks, which when followed, ensures the build result is correct regardless of the initial store. Given a task description and the output key, you can compute the linear order by first finding the (acyclic) graph of the key’s reachable dependencies, and then computing a topological sort. However this rules out dynamic dependencies.

Restarting. To handle dynamic dependencies we can use the following approach: build tasks in an arbitrary initial order, discovering their dependencies on the fly; whenever a task calls fetch on an out-of-date key dep, abort the task, and switch to building the dependency dep; eventually the previously aborted task is restarted and makes further progress thanks to dep now being up to date. This approach requires a way to abort tasks that have failed due to out-of-date dependencies. It is also not minimal in the sense that a task may start, do some meaningful work, and then abort.

Suspending. An alternative approach, utilised by the busy build system and Shake, is to simply build dependencies when they are requested, suspending the currently running task. By combining that with tracking the keys that have already been built, one can obtain a minimal build system with dynamic dependencies. This approach requires that a task may be started and then suspended until another task is complete. Suspending can be done with cheap green threads and blocking (the original approach of Shake) or using continuation-passing style (what Shake currently does).
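The "suspending" strategy quoted above can be rendered as a toy sketch, with plain recursion plus a done-set standing in for green threads or continuation-passing:

```python
def suspending_build(tasks, target, store):
    """Toy suspending scheduler: fetching a key that isn't built yet
    'suspends' the current task (here, via recursion) until the
    dependency finishes.  `tasks` maps a key to a function that
    receives `fetch`; source files are already present in `store`."""
    done = {k for k in store if k not in tasks}

    def fetch(key):
        if key not in done:
            store[key] = tasks[key](fetch)  # build the dependency first
            done.add(key)
        return store[key]

    return fetch(target)

store = {"main.c": "int main(){}"}
tasks = {
    "main.o": lambda fetch: f"obj({fetch('main.c')})",
    "app":    lambda fetch: f"link({fetch('main.o')})",
}
assert suspending_build(tasks, "app", store) == "link(obj(int main(){}))"
```

Because the done-set is checked before each task runs, every key is built at most once, which is the "minimal" property the paper describes.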

IshKebab

20 hours ago

Interesting. I feel like I would have gone with Starlark over Lua, but I guess it's good to have options.

Does it support sandboxing?

israrkhan

19 hours ago

You need Bazel if you need Starlark & sandboxing

ramon156

21 hours ago

I'd definitely write a build system in Lua, looks promising!

kortex

18 hours ago

How do you pronounce "Zb"? Zee-bee?

zombiezen

18 hours ago

Heh, I think I need to add something to the README. I've been pronouncing it as "zeeb" in my head, as in the first syllable of "Zebesian" (the Space Pirates from Metroid), but TIL that that's canonically "Zay-bay-zee-uhn", so idk.

Naming is hard.

kortex

16 hours ago

I kinda dig zeeb! Naming is hard. Really awesome project by the way! Should have mentioned that first. Build systems are neat. I've always wanted to try building a build system, in a "learn how it works" sense, not so much "yet another build tool".

zombiezen

15 hours ago

Thanks! And go for it, it's a good learning experience! It's a really interesting problem domain and there's a lot of different directions you can take it.

imiric

20 hours ago

Happy to see someone inspired by Nix, but wanting to carve their own path. Nix popularized some powerful ideas in the Linux world, but it has a steep learning curve and a very unfriendly UI, so there is plenty of room for improvement there.

I'm not sure if Lua is the right choice, though. A declarative language seems like a better fit for reproducibility. The goal of supporting non-deterministic builds also seems to go against this. But I'm interested to know how this would work in practice. Good luck!

hinkley

20 hours ago

If you design it like SCons, it'll look imperative but behave more declaratively.

If I understand the architecture correctly, the imperative calls in the config file don't actually run the build process. They run a Builder Pattern that sets up the state machine necessary for the builds to happen. So it's a bit like LINQ in C# (but older).
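That deferred style can be sketched like this (illustrative Python; the names are mine, not SCons's actual API): the config-facing calls only record a graph, and nothing runs until it is executed later.

```python
class Build:
    """Imperative-looking API that behaves declaratively: target()
    only records a node; execute() later walks the recorded graph."""
    def __init__(self):
        self.rules = {}  # target name -> (dependencies, action)

    def target(self, name, deps, action):
        self.rules[name] = (deps, action)  # record, don't run
        return name

    def execute(self, name, log):
        deps, action = self.rules.get(name, ([], None))
        for dep in deps:
            if dep in self.rules:
                self.execute(dep, log)  # dependencies run first
        if action:
            log.append(action)

b = Build()
obj = b.target("main.o", ["main.c"], "compile")
b.target("app", [obj], "link")
log = []
b.execute("app", log)
assert log == ["compile", "link"]
```

This is also why single-stepping the config with a debugger shows you graph construction rather than the build itself.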

I have no idea how that plays out single-step debugging build problems though. That depends on how it's implemented and a lot of abstractions (especially frameworks) seem to forget that breakpoints are things other people want to use as well.

zombiezen

19 hours ago

That's accurate (unless the config file attempts to read something from the build process; that will trigger a build).

It's a good point about debugging build problems. This is an issue I've experienced in Nix and Bazel as well. I'm not convinced that I have a great solution yet, but at least for my own debugging while using the system, I've included a `zb derivation env` command which spits out a .env file that matches the environment the builder runs under. I'd like to extend that to pop open a shell.

pdimitar

18 hours ago

Surface-level feedback: get rid of the word "derivation". Surely there must be a better way to describe the underlying thing...

imiric

11 hours ago

Agreed! It's such an alien term to describe something quite mundane. Language clarity is a big part of a friendly UI.

ulbu

8 hours ago

what word would you fit to what a nix derivation is?

imiric

8 hours ago

I'm not sure, I'm not a Nix expert. The comments here also refer to it as both instructions to build something, as well as the intermediate build artifact. This discussion[1] on the NixOS forums explains it as a "blueprint" or "recipe". So there's clearly a lot of confusion about what it is, yet everyone understands "blueprint", "recipe", or even "intermediate build artifact" if you want to be technical.

The same is true for "flakes". It's a uniquely Nix term with no previous technical usage AFAIK.

Ideally you want to avoid using specialized terms if possible. But if you do decide to use them, then your documentation needs to be very clear and precise, which is another thing that Nix(OS) spectacularly fumbles. Take this page[2] that's supposed to explain derivations, for example. The first sentence has a circular reference to the term, only mentioning in parentheses that it's a "build task". So why not call it that? And why not start with the definition first, before bringing up technical terms like functions and attributes? There are examples like this in many places, even setting aside the general problems of the documentation being outdated or incomplete.

Though I don't think going the other way and overloading general terms is a good idea either. For example, Homebrew likes to use terms like "tap" and "bottle" to describe technical concepts, which has the similar effect of having to explain what the thing actually is.

Docker is a good example of getting this right: containers, images, layers, build stages, intermediate images, etc. It uses already familiar technical terms and adopts them where they make most sense. When you additionally have excellent documentation, all these things come together for a good user experience, and become essential to a widespread adoption of the tool.

[1]: https://discourse.nixos.org/t/what-is-a-derivation/28311/6

[2]: https://nix.dev/manual/nix/2.18/language/derivations

ulbu

an hour ago

yes, i agree, nix should be considered the bible of bad documentation. it’s very bad at spotlighting the essentials and putting the non-essentials aside. it’s especially surprising for derivations, because nix is really, in the end, a frontend for building derivations. everything else converges on it.

and then i go to nix.dev and derivations are presented after fetchers? no surprise it’s so confusing, even though the concept is quite simple.

a derivation is a dict that is composed of (1) a shell script and (2) environment parameters it will have access to. a nix command will read the derivation, create the environment with only these parameters and execute the script. that’s it.

everything else in the nix language is about building derivations, like copying files into its store. for example, evaluating “${pkgs.hello}” interpolates into a store path. so in your derivation, you can define an env variable “hello = ${pkgs.hello}/bin” and it will be available in your script as “$hello” with the value “/nix/store/<hash>-hello/bin”. nix will do the fetching and storing for you. so you can have “command $hello” in your script. neat!

play around with evaluating the ‘derivation’ built-in function.
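for instance, here’s a minimal sketch (made-up values, not from any real package) of a call to the built-in; each attribute becomes an environment variable for the builder, and evaluating it in `nix repl` gives you the resulting .drv store path:

```nix
# hypothetical minimal derivation: name, system, and builder are the
# required attributes; nix adds $out to the environment automatically
derivation {
  name = "hello";
  system = "x86_64-linux";
  builder = "/bin/sh";
  args = [ "-c" "echo hello > $out" ];
}
```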

umanwizard

17 hours ago

What’s wrong with it? It’s a term of art that means a specific thing in both nix and guix; it’d just be confusing if zb renamed it to something else.

kstenerud

14 hours ago

I'm 80% finished moving all of my servers from NixOS to Debian. I used NixOS for 3 years (even wrote some custom flakes) before finally giving up (for the final year I was just too scared to touch it, and then said "I shouldn't be scared of my OS"). I should know what "derivation" means, but I can't for the life of me remember...

umanwizard

11 hours ago

I don’t know Nix, but I’ll describe how Guix works, and hopefully it will be obvious what the corresponding Nix concepts are.

A “package” is a high-level description (written in scheme) of how to build something, like: “using the GNU build system with inputs a, b, c, and configure flags x, y, z, build the source available at https://github.com/foo/bar”

The actual builder daemon doesn’t know about the GNU build system, or how to fetch things from GitHub, or how to compute nested dependencies, etc.; it is very simple. All it knows is how to build derivations, which are low-level descriptions of how to build something: “create a container that can see paths a, b, and c (which are themselves other derivations or files stored in the store and addressed by their hash), then invoke the builder script x.”

So when you ask guix to build something, it reads the package definition, finds the source and stores it in the store, generates the builder script (which is by convention usually also written in scheme, though theoretically nothing stops you from defining a package whose builder was written in some other language), computes the input derivation paths, etc., and ultimately generates a derivation which it then asks the daemon to build.

I believe in Nix, rather than scheme, packages are written in nix lang and builder scripts can be written in any language but by convention are usually bash.

So basically long story short, the package is the higher-level representation on the guix side, and the derivation is the lower-level representation on the guix-daemon side.

Modified3019

13 hours ago

Yeah I ended up with the same issue. While I’m technically inclined, I’m not nearly to the point where I can handle the fire hose of (badly named) abstraction at all levels like some people.

I could never have pulled off what this guy did https://roscidus.com/blog/blog/2021/03/07/qubes-lite-with-kv..., though ironically his journal is probably one of the best “how nix actually works” tutorials I’ve ever seen, even though it isn’t intended for that or complete for such a purpose. He’s the only reason I know that a derivation is basically an intermediate build object.

smilliken

13 hours ago

"Derivation" refers to the nix intermediate build artifact, a .drv file, which contains the instructions to perform the build itself. Basically a nix program compiles to a derivation file which gets run to produce the build outputs. The hash in the /nix/store for a dependency is the hash of the derivation. Conveniently if the hash is already in a build cache, you can download the cached build outputs instead of building it yourself.

kstenerud

13 hours ago

Ah OK, then I'd never actually understood what a derivation is. But then again, the name "derivation" doesn't at all lead to guessing at such a definition, either.

umanwizard

8 hours ago

“Build plan” would maybe be a more obvious name, but it’d still be confusing to deviate from what Nix uses, IMO.

nurettin

10 hours ago

It is the name of a feature in Nix. This is as obfuscated as calling a rock a rock.

pdimitar

10 hours ago

Strange thing to say but you do you.

I tried to dabble in Nix several times and the term never stuck.

I suppose for you it's impossible to accept that the term is just bad and unintuitive. And other comments here say the same.

nurettin

4 hours ago

I mean, it has variable names, configurations, documentation, a file extension, lots of code, and a history behind it. So the strange thing to me is suggesting a replacement phrase as if you don't know what it is, acting like it's some high-brow language used in a blog to look smart, complaining about how this makes it less accessible (paraphrasing a little), then rolling back and saying you dabbled in Nix and acting like you know what it is.

But then, you do you.

pdimitar

4 hours ago

The part you seem to deliberately miss is that what is obvious to people deeply invested in Nix is not obvious to anyone else.

I for one can't trace the train of thought that starts from "intermediate build artifact" and somehow arrives at "derivation".

I found out just enough about Nix to reject it. My take is still informed, I simply didn't buy its pitch.

nurettin

4 hours ago

I genuinely thought you knew nothing about derivations and were criticizing the blogger for using the term in their blog, not the term standard to Nix itself. Which is just as weird to me as complaining about std::string: well, why call it a string? it is obviously text!

imiric

2 hours ago

> Which is just as weird to me as complaining about std::string, well why call it a string? it is obviously text!

It's really not, though. String is a common technical term used in programming languages for many decades. If a new language decided to call them "textrons", _that_ would be weird. And this is the exact thing Nix did with "derivations", "flakes", etc. There is no precedent for these terms in other software, so they're unfamiliar even to its core audience.

It would be different if Nix invented an entirely new branch of technology that didn't have any known precedent. But for a reproducible build system that uses a declarative language? C'mon.

skybrian

18 hours ago

One thing I like to see is a 'dry run' like 'make -n'. Although, maybe that's not possible in all cases.

Another possibility might be to output something like a shell script that would redo the build the same way, so you can see what it did and hack on it when debugging.

photonthug

16 hours ago

Yes. Dry runs at least, and better yet terraform-style planning that produces an artifact that can be applied. These should really be more common with all kinds of software

hinkley

16 hours ago

I would like to see more tools iterate on trying to do terraform-like output because while terraform diffs are interesting, practically most of my teammates couldn’t tell what the fuck they said and a couple times I missed important lines that caused us prod issues. I think we can do a better job than showing a wall of text.

photonthug

15 hours ago

Presentation is a separate matter though. Just like with git diffs, ideally you could choose a wall of text or a side-by-side UI, see things at a high level or drill down line by line. A tag layer plus custom validation between plan/apply gives you an automatic way to short-circuit things. But none of it can work without a plan as a first-class object.

Thing is, the plan/apply split isn't even primarily for users; it's just good design. It makes testing easier, and leaves open the possibility of plugging in totally different resolution strategies without rewriting the whole core. The benefits are so big that I'd strongly prefer more software used it, even if I'm going to shut my eyes and auto-apply every time without glancing at the plan.

kortex

19 hours ago

> The goal of supporting non-deterministic builds also seems to go against this.

I think this is actually a great escape hatch. Supporting non-deterministic builds means more folks will be able to migrate their existing build to zb. Postel's law and all that.

imiric

11 hours ago

Right, could be.

One of the insane things with Nix is that the suggested workflow is to manage _everything_ with it. This means it wants to replace every package manager in existence, so you see Python, Emacs, and other dependency trees entirely replicated in Nix, as well as every possible configuration format. It's craziness... Now I can't just depend on the upstream package; I also have to wait for changes to propagate to Nix packages. And sometimes I just want to do things manually as a quick fix, instead of spending hours figuring out why the Nix implementation doesn't work.

So, yeah, having an escape hatch that allows easier integration with other ecosystems or doing things manually in some cases, would be nice to have.

sweeter

14 hours ago

I thought of creating something similar, and I was going to use a personal fork of the Go compiler with some mods, anko (which is a really cool Go binding language), or writing my own DSL. It's quite the undertaking.

I like Nix and NixOS a lot, it's really cool, but it has some really odd management issues and the language IMO is horrendous. I used NixOS for around a year; I was changing my Nixpkgs version, got that same generic nonsense error with no semantic meaning, and I was just over it. I'm not too fond of commenting out random parts of code to figure out where something minor and obscure failed. Sometimes it tells you the module it had a problem with, or will point out an out-of-place comma, and other times it's just like "idk bruh ¯\_(ツ)_/¯ "failed at builtin 'seq'" is the best I can do"

the paradigm is a million dollar idea though. I have no doubt it's the future for a large portion of computing, both for programming and for generic systems. I just wish it weren't a pain to write and had some sensible error handling.

danmur

11 hours ago

The language has grown on me a bit. I initially hated it but a lot of my pain was not actually the language but the lack of good docs for the standard library.

Still struggle with the tracebacks though. It's painful when things go wrong.

aseipp

16 hours ago

Whatever choices this project makes (I have some opinions, but I think they're not too important), I don't see it mentioning one of the choices that was absolutely key to Nix's insane success (at least IMO, as a hardcore contributor and user for 10+ years): the monorepo, containing all of the packages and all the libraries for use by everyone downstream, with all contributions trying to go there.

Please do not give in to the temptation to just write a version manager, stitch together some hodgepodge, and throw the hard problem over the fence to the "community" as a set of balkanized repositories to make everything work. It is really, really hard to overstate how much value Nixpkgs gets from going the monorepo route and how much the project has been able to improve, adapt, and overcome thanks to it. It feels like Nixpkgs regularly pulls off major code-wide changes on an average Tuesday that other projects would balk at.

(It's actually a benefit early on to just keep everything in one repo too, because you can just... clean up all the code in one spot if you do something like make a major breaking change. Huge huge benefit!)

Finally: as a die-hard Nix user, I have also been using Buck2 as a kind of thing-that-is-hermetic-cloud-based-and-supports-Windows tool, and it competes in the same space as zb; a monorepo containing all BUILD files is incredibly important for things to work reliably, and that's what I'm exploring right now to see if it can be viable. I'm even exploring the possibility of starting from stage0-posix as well. Good luck! There's still work to be done in this space, and Nix isn't the final answer, even if I love it.

theLiminator

11 hours ago

Buck2 looks very principled. Will definitely be interesting as it gets mature in the open source world.

I'm personally convinced monorepo is strictly superior (provided you have the right tooling to support it).

nvlled

7 hours ago

https://github.com/256lights/zb/blob/102795d6cb383a919dd378d...

TIL I can also use semicolons in lua tables, not just commas:

  return derivation {
    name = "hello.txt";
    ["in"] = path "hello.txt";
    builder = "/bin/sh";
    system = "x86_64-linux";
    args = {"-c", "while read line; do echo \"$line\"; done < $in > $out"};
  }
I like using lua as a DSL, and now I like it even more! I've been using lua as an html templating language that looks like this:

  DIV {
   id="id";
   class="class";
   H1 "heading";
   P [[
    Lorem ipsum dolor sit amet, consectetur adipiscing elit, 
    sed do eiusmod tempor ]] / EM incididunt / [[ ut labore et 
    dolore magna aliqua.
   ]];
   PRE ^ CODE [[ this is <code> tag inside <pre> ]];
  }