Zig's new plan for asynchronous programs

88 points, posted 4 hours ago
by messe

77 Comments

AndyKelley

2 hours ago

Overall this article is accurate and well-researched. Thanks to Daroc Alden for due diligence. Here are a couple of minor corrections:

> When using an Io.Threaded instance, the async() function doesn't actually do anything asynchronously — it just runs the provided function right away.

While this is a legal implementation strategy, this is not what std.Io.Threaded does. By default, it will use a configurably sized thread pool to dispatch async tasks. It can, however, be statically initialized with init_single_threaded in which case it does have the behavior described in the article.
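
For reference, initialization looks roughly like this (a minimal sketch based on the names above; the exact init signatures may differ, since the interface is still settling):

    const std = @import("std");

    pub fn main() !void {
        // Default: async tasks are dispatched to a configurably sized thread pool.
        var threaded: std.Io.Threaded = .init(std.heap.page_allocator);
        defer threaded.deinit();
        const io = threaded.io();
        _ = io; // hand this Io to anything that performs I/O

        // Alternative: static single-threaded initialization, in which case
        // async() just runs the provided function right away, as described above.
        // var single: std.Io.Threaded = .init_single_threaded;
    }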

The only other issue I spotted is:

> For that use case, the Io interface provides a separate function, asyncConcurrent() that explicitly asks for the provided function to be run in parallel.

There was a brief moment where we had asyncConcurrent() but it has since been renamed more simply to concurrent().

thefaux

7 minutes ago

This design seems very similar to async in Scala, except that in Scala the execution context is an implicit parameter rather than an explicit one. I did not find that API to be significantly better for many use cases than writing threads and communicating over a concurrent queue. There were significant downsides as well, because the program behavior was highly dependent on the execution context. It led to spooky action-at-a-distance problems where unrelated tasks could interfere with each other, and management of the execution context was a pain. My sense, though, is that the Zig team has little experience with Scala and thus doesn't realize the extent to which this is not a novel approach, nor is it a panacea.

woodruffw

3 hours ago

I think this design is very reasonable. However, I find Zig's explanation of it pretty confusing: they've taken pains to emphasize that it solves the function coloring problem, which it doesn't: it pushes I/O into an effect type, which essentially behaves as a token that callers need to retain. This is a form of coloring, albeit one that's much more ergonomic.

(To my understanding this is pretty similar to how Go solves asynchronicity, except that in Go's case the "token" is managed by the runtime.)

flohofwoe

3 hours ago

If calling the same function with a different argument would be considered 'function coloring', every function in a program is 'colored' and the word loses its meaning ;)

Zig actually also solved the coloring problem in the old and abandoned async/await solution, because the compiler simply stamped out a sync or async version of the same function based on the calling context (this works because everything is a single compilation unit).

adamwk

2 hours ago

The subject of the function coloring article was callback APIs in Node, so an argument you need to pass to your IO functions is very much in the spirit of colored functions and has the same limitations.

jakelazaroff

2 hours ago

In Zig's case you pass the argument whether or not it's asynchronous, though. The caller controls the behavior, not the function being called.

layer8

an hour ago

The coloring is not the concrete argument (Io implementation) that is passed, but whether the function has an Io parameter in the first place. Whether the implementation of a function performs IO is in principle an implementation detail that can change in the future. A function that doesn't take an Io argument but wants to call another function that requires an Io argument can't. So you end up adding Io parameters just in case, and in turn require all callers to do the same. This is very much like function coloring.
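
A minimal sketch of that propagation (the function names are made up for illustration; only the Io parameter matters):

    const std = @import("std");

    // Takes an Io, so it can perform I/O through whatever runtime the caller chose.
    fn loadUser(io: std.Io, path: []const u8) ![]u8 {
        _ = io;
        _ = path;
        return error.Unimplemented; // body elided; only the signature matters here
    }

    // Takes no Io, so it cannot call loadUser(). To do so it would have to grow an
    // Io parameter itself, and then so would all of its callers -- the "just in
    // case" propagation described above.
    fn validateName(name: []const u8) bool {
        return name.len != 0;
    }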

In a language with objects or closures (which Zig doesn't have first-class support for), one flexibility benefit of the Io object approach is that you can move it to object/closure creation and keep the function/method signature free from it. Still, you have to pass it somewhere.

woodruffw

3 hours ago

> If calling the same function with a different argument would be considered 'function coloring', then every function in a program is 'colored' and the word loses its meaning ;)

Well, yes, but in this case the colors (= effects) are actually important. The implications of passing an effect through a system are nontrivial, which is why some languages choose to promote that effect to syntax (Rust) and others choose to make it a latent invariant (Java, with runtime exceptions). Zig chooses another path not unlike Haskell's IO.

jcranmer

2 hours ago

> If calling the same function with a different argument would be considered 'function coloring', then every function in a program is 'colored' and the word loses its meaning ;)

I mean, the concept of "function coloring" in the first place is itself an artificial distinction invented to complain about the incongruent methods of dealing with "do I/O immediately" versus "tell me when the I/O is done"--two methods of I/O that are so very different that they really require very different designs of your application on top of them: in a sync I/O case, I'm going to design my parser to output a DOM because there's little benefit to not doing so; in an async I/O case, I'm instead going to have a streaming API.

I'm still somewhat surprised that "function coloring" has become the default lens to understand the semantics of async, because it's a rather big misdirection from the fundamental tradeoffs of different implementation designs.

rowanG077

3 hours ago

If your function suddenly requires a (currently) unconstructible "Magic" instance which you now have to pass in from somewhere top-level, that indeed suffers from the same issue as async/await, a.k.a. function coloring.

But most functions don't. They require some POD, float, string, or whatever that can be easily and cheaply constructed in place.

doyougnu

2 hours ago

Agreed. The Haskeller in me screams "You've just implemented the IO monad without language support".

dundarious

3 hours ago

There is a token you must pass around, sure, but because you use the same token for both async and sync code, I think analogizing with the typical async function color problem is incorrect.

jayd16

2 hours ago

Actually it seems like they just colored everything async and you pick whether you have worker threads or not.

I do wonder if there's more magic to it than that, because it's not like this isn't trivially possible in other languages. The issue is that it's actually a huge footgun when you mix things like this.

For example your code can run fine synchronously but will deadlock asynchronously because you don't account for methods running in parallel.

Or said another way, some code is thread safe and some code isn't. Coloring actually helps with that.

flohofwoe

2 hours ago

> Actually it seems like they just colored everything async and you pick whether you have worker threads or not.

There is no 'async' anywhere yet in the new Zig IO system (in the sense of the compiler doing the 'state machine code transform' on async functions).

AFAIK the current IO runtimes simply use traditional threads or coroutines with stack switching. Bringing code-transform async/await back is still on the todo list.

The basic idea is that code which calls into the Io interface doesn't need to know how the IO runtime implements concurrency. I guess, though, that the function that's called through the `.async()` wrapper is expected to work properly in both multi- and single-threaded contexts.

jayd16

2 hours ago

> There is no 'async'

I meant this more as simply an analogy to the devX of other languages.

>Bringing code-transform-async-await back is still on the todo-list.

The article makes it seem like "the plan is set" so I do wonder what that Todo looks like. Is this simply the plan for async IO?

> is expected to work properly both in multi- and single-threaded contexts.

Yeah... about that....

I'm also interested in how that will be solved. RTFM? I suppose a convention could be that your public API must be thread safe and if you have a thread-unsafe pattern it must be private? Maybe something else is planned?

messe

2 hours ago

> The article makes it seem like "the plan is set" so I do wonder what that Todo looks like. Is this simply the plan for async IO?

There's currently a proposal for stackless coroutines as a language primitive: https://github.com/ziglang/zig/issues/23446

rowanG077

3 hours ago

Having used Zig a bit as a hobby: why is it more ergonomic? Using await vs. passing a token has similar ergonomics to me. The one thing you could say is that using some kind of token makes it dead simple to have different tokens. But that's really not something I run into often at all when using async.

messe

3 hours ago

> The one thing you could say is that using some kind of token makes it dead simple to have different tokens. But that's really not something I run into often at all when using async.

It's valuable to library authors who can now write code that's agnostic of the users' choice of runtime, while still being able to express that asynchronicity is possible for certain code paths.

rowanG077

2 hours ago

But that can already be done using async await. If you write an async function in Rust for example you are free to call it with any async runtime you want.

messe

2 hours ago

But you can't call it from synchronous rust. Zig is moving toward all sync code also using the Io interface.

tcfhgj

an hour ago

yes, you can:

    runtime.block_on(async { })
https://play.rust-lang.org/?version=stable&mode=debug&editio...

messe

5 minutes ago

Let me rephrase, you can't call it like any other function.

In Zig, a function that does IO can be called the same way whether or not it performs async operations. And if those async operations don't need concurrency (which Zig expresses separately from asynchronicity), then they'll run equally well on a sync Io runtime.
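
Roughly, reusing the saveFile example quoted elsewhere in this thread (the exact signatures are assumptions, not pinned-down std API):

    // Plain call: the function performs its IO and returns, on any Io runtime.
    const direct_result = saveFile(io, data, "save.txt");

    // Same function through the async wrapper: it may run concurrently if the
    // chosen Io runtime supports that, or simply run eagerly on a synchronous one.
    var future = io.async(saveFile, .{ io, data, "save.txt" });
    const async_result = future.await(io);

    _ = direct_result;
    _ = async_result;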

ethin

2 hours ago

One thing the old Zig async/await system theoretically allowed me to do, which I'm not certain how to accomplish with this new io system without manually implementing it myself, is suspend/resume. Where you could suspend the frame of a function and resume it later. I've held off on taking a stab at OS dev in Zig because I was really, really hoping I could take advantage of that neat feature: configure a device or submit a command to a queue, suspend the function that submitted the command, and resume it when an interrupt from the device is received. That was my idea, anyway. Idk if that would play out well in practice, but it was an interesting idea I wanted to try.

nine_k

an hour ago

Can you create a thread pool consisting of one thread, and suspend / resume the thread?

NooneAtAll3

2 hours ago

what's the point of implementing cooperative "multithreading" (coroutines) with a preemptive one (async)?

amluto

3 hours ago

I find this example quite interesting:

    var a_future = io.async(saveFile, .{io, data, "saveA.txt"});
    var b_future = io.async(saveFile, .{io, data, "saveB.txt"});

    const a_result = a_future.await(io);
    const b_result = b_future.await(io);
In Rust or Python, if you make a coroutine (by calling an async function, for example), then that coroutine will not generally be guaranteed to make progress unless someone is waiting for it (i.e. polling it as needed). In contrast, if you stick the coroutine in a task, the task gets scheduled by the runtime and makes progress when the runtime is able to schedule it. But creating a task is an explicit operation and can, if the programmer wants, be done in a structured way (often called “structured concurrency”) where tasks are never created outside of some scope that contains them.

From this example, if the thing that is "io.async"ed is allowed to progress all by itself, then I guess it's creating a task that lives until it finishes or is cancelled by getting destroyed.

This is certainly a valid design, but it’s not the direction that other languages seem to be choosing.

jayd16

2 hours ago

C# works like this as well, no? In fact C# can (will?) run the async function on the calling thread until a yield is hit.

throwup238

2 hours ago

So do Python and JavaScript. I think most languages with async/await also support noop-ing the yield if the future is already resolved. It’s only when you create a new task/promise that stuff is guaranteed to get scheduled instead of possibly running immediately.

amluto

4 minutes ago

I can't quite parse what you're saying.

Python works like this:

    import asyncio

    async def sleepy() -> None:
        print('Sleepy started')
        await asyncio.sleep(0.25)
        print('Sleepy resumed once')
        await asyncio.sleep(0.25)
        print('Sleepy resumed and is done!')


    async def main():
        sleepy_future = sleepy()
        print('Started a sleepy')

        await asyncio.sleep(2)
        print('Main woke back up.  Time to await the sleepy.')

        await sleepy_future

    if __name__ == "__main__":
        asyncio.run(main())
Running it does this:

    $ python3 ./silly_async.py
    Started a sleepy
    Main woke back up.  Time to await the sleepy.
    Sleepy started
    Sleepy resumed once
    Sleepy resumed and is done!
So the mere act of creating a coroutine does not cause the runtime to run it. But if you explicitly create a task, it does get run:

    import asyncio

    async def sleepy() -> None:
        print('Sleepy started')
        await asyncio.sleep(0.25)
        print('Sleepy resumed once')
        await asyncio.sleep(0.25)
        print('Sleepy resumed and is done!')


    async def main():
        sleepy_future = sleepy()
        print('Started a sleepy')

        sleepy_task = asyncio.create_task(sleepy_future)
        print('The sleepy future is now in a task')

        await asyncio.sleep(2)
        print('Main woke back up.  Time to await the task.')

        await sleepy_task

    if __name__ == "__main__":
        asyncio.run(main())

    $ python3 ./silly_async.py
    Started a sleepy
    The sleepy future is now in a task
    Sleepy started
    Sleepy resumed once
    Sleepy resumed and is done!
    Main woke back up.  Time to await the task.
I personally like the behavior of coroutines not running unless you tell them to run -- it makes it easier to reason about what code runs when. But I do not particularly like the way that Python obscures the difference between a future-like thing that is a coroutine and a future-like thing that is a task.

nmilo

3 hours ago

This is how JS works

messe

3 hours ago

It's not guaranteed in Zig either.

Neither task future is guaranteed to do anything until .await(io) is called on it. Whether it starts immediately (possibly on the same thread), or queued on a thread pool, or yields to an event loop, is entirely dependent on the Io runtime the user chooses.

amluto

an hour ago

It’s not guaranteed, but, according to the article, that’s how it works in the Evented model:

> When using an Io.Threaded instance, the async() function doesn't actually do anything asynchronously — it just runs the provided function right away. So, with that version of the interface, the function first saves file A and then file B. With an Io.Evented instance, the operations are actually asynchronous, and the program can save both files at once.

Andrew Kelley’s blog (https://andrewkelley.me/post/zig-new-async-io-text-version.h...) discusses io.concurrent, which forces actual concurrency, and it’s distinctly non-structured. It even seems to require the caller to make sure that they don’t mess up and keep a task alive longer than whatever objects the task might reference:

    var producer_task = try io.concurrent(producer, .{
        io, &queue, "never gonna give you up",
    });
    defer producer_task.cancel(io) catch {};
Having personally contemplated this design space a little bit, I think I like Zig’s approach a bit more than I like the corresponding ideas in C and C++, as Zig at least has defer and tries to be somewhat helpful in avoiding the really obvious screwups. But I think I prefer Rust’s approach or an actual GC/ref-counting system (Python, Go, JS, etc) even more: outside of toy examples, it’s fairly common for asynchronous operations to conceptually outlast single function calls, and it’s really really easy to fail to accurately analyze the lifetime of some object, and having the language prevent code from accessing something beyond its lifetime is very, very nice. Both the Rust approach of statically verifying the lifetime and the GC approach of automatically extending the lifetime mostly solve the problem.

But this stuff is brand new in Zig, and I’ve never written Zig code at all, and maybe it will actually work very well.

messe

an hour ago

Ah, I think we might have been talking over each other. I'm referring to the interface not guaranteeing anything, not the particular implementation. The Io interface itself doesn't guarantee that anything will have started until the call to await returns.

et1337

4 hours ago

I’m excited to see how this turns out. I work with Go every day and I think Io corrects a lot of its mistakes. One thing I am curious about is whether there is any plan for channels in Zig. In Go I often wish IO had been implemented via channels. It’s weird that there’s a select keyword in the language, but you can’t use it on sockets.

jerf

3 hours ago

Wrapping every IO operation into a channel operation is fairly expensive. You can get an idea of how fast it would work now by just doing it, using a goroutine to feed a series of IO operations to some other goroutine.

It wouldn't be quite as bad as the perennial "I thought Go is fast why is it slow when I spawn a full goroutine and multiple channel operations to add two integers together a hundred million times" question, but it would still be a fairly expensive operation. See also the fact that, before the recent iterator support was added, the way Go got fairly sensible iteration semantics was by doing a range across a channel... as long as you don't mind running a full channel operation and internal context switch for every single thing being iterated, which in fact quite a lot of us do mind.

(To optimize pure Python, one of the tricks is to ensure that you get the maximum value out of all of the relatively expensive individual operations Python does. For example, it's already handling exceptions on every opcode, so you could win in some cases by using exceptions cleverly to skip running some code selectively. Go channels are similar; they're relatively expensive, on the order of dozens of cycles, so you want to make sure you're getting sufficient value for that. You don't have to go super crazy, they're not like a millisecond per operation or something, but you do want to get value for the cost, by either moving non-trivial amount of work through them or by taking strong advantage of their many-to-many coordination capability. IO often involves moving around small byte slices, even perhaps one byte, and that's not good value for the cost. Moving kilobytes at a time through them is generally pretty decent value but not all IO looks like that and you don't want to write that into the IO spec directly.)

osigurdson

3 hours ago

At least Go didn't take the dark path of having async/await keywords. In C# that is a real nightmare, and it's necessary to use sync-over-async anti-patterns unless you're willing to rewrite everything. I'm glad Zig took this "colorless" approach.

rowanG077

3 hours ago

Where do you think the Io parameter comes from? If you change some function to do something async, you now suddenly require an Io instance. I don't see the difference between having to modify the call tree to be async vs. modifying the call tree to pass in an Io token.

messe

3 hours ago

Synchronous IO also uses the Io instance now. The coloring is no longer "is it async?", it's "does it perform Io?"

This allows library authors to write their code in a manner that's agnostic to the Io runtime the user chooses: synchronous, threaded, evented with stackful coroutines, or evented with stackless coroutines.

rowanG077

2 hours ago

Rust also allows writing async code that is agnostic to the async runtime used. Subsuming async under Io doesn't change much imo.

ecshafer

3 hours ago

Have you tried Odin? It's a great language that's also a “better C” but takes more Go inspiration than Zig.

kbd

3 hours ago

One of the harms Go has done is to make people think its concurrency model is at all special. “Goroutines” are green threads and a “channel” is just a thread-safe queue, which Zig has in its stdlib https://ziglang.org/documentation/master/std/#std.Io.Queue

jerf

3 hours ago

A channel is not just a thread-safe queue. It's a thread-safe queue that can be used in a select call. Select is the distinguishing feature, not the queuing. I don't know enough Zig to know whether you can write a bit of code that says "either pull from this queue or that queue when they are ready"; if so, then yes they are an adequate replacement, if not, no they are not.

Of course even if that exact queue is not itself selectable, you can still implement a Go channel with select capabilities in Zig. I'm sure one exists somewhere already. Go doesn't get access to any magic CPU opcodes that nobody else does. And languages (or libraries in languages where that is possible) can implement more capable "select" variants than Go ships with that can select on more types of things (although not necessarily for "free", depending on exactly what is involved). But it is more than a queue, which is also why Go channel operations are a bit on the expensive side: they're implementing more functionality than a simple queue.

kbd

an hour ago

> I don't know enough Zig to know whether you can write a bit of code that says "either pull from this queue or that queue when they are ready"; if so, then yes they are an adequate replacement, if not, no they are not.

Thanks for giving me a reason to peek into how Zig does things now.

Zig has a generic select function[1] that works with futures. As is common, Blub's language feature is Zig's comptime function. Then the io implementation has a select function[2] that "Blocks until one of the futures from the list has a result ready, such that awaiting it will not block. Returns that index." and the generic select switches on that and returns the result. Details unclear tho.

[1] https://ziglang.org/documentation/master/std/#std.Io.select

[2] https://ziglang.org/documentation/master/std/#std.Io.VTable

jeffbee

2 hours ago

If we're just arguing about the true nature of Scotsmen, isn't "select a channel" merely a convenience around awaiting a condition?

0x696C6961

3 hours ago

What other mainstream languages have pre-emptive green threads without function coloring? I can only think of Erlang.

smw

3 hours ago

I'm told modern Java (Loom?) does. But I think that might be an exhaustive list, sadly.

dlisboa

2 hours ago

It was special. CSP wasn't anywhere near the common vocabulary back in 2009. Channels provide a different way of handling synchronization.

Everything is "just another thing" if you ignore the advantage of abstraction.

mono442

31 minutes ago

It looks like a promising idea, though I'm a bit skeptical that they can actually make it work transparently with other executors (stackless coroutines, for example), and it probably won't work with code that uses FFI anyway.

qudat

4 hours ago

I'm excited to see where this goes. I recently did some io_uring work in zig and it was a pain to get right.

Although, it does seem like dependency injection is becoming a popular trend in Zig, first with Allocator and now with Io. I wonder if a dependency injection framework within the std could reduce the amount of boilerplate all of our functions will now require. Every struct or bare fn now needs two fields/parameters by default.

scuff3d

31 minutes ago

I think a good compromise between a DI framework and having to pass everything individually would be some kind of Context object. It could be created to hold an Allocator, IO implementation, and maybe a Diagnostics struct since Zig doesn't like attaching additional information to errors. Then the whole Context struct or parts of it could be passed around as needed.
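
Something like this hypothetical sketch (none of these names exist in std; it's just the shape of the idea):

    const std = @import("std");

    const Diagnostics = struct {
        message: []const u8 = "",
    };

    // Bundle the usual "ambient" dependencies so callees take a single parameter.
    const Context = struct {
        allocator: std.mem.Allocator,
        io: std.Io,
        diag: *Diagnostics,
    };

    fn process(ctx: Context, path: []const u8) !void {
        _ = path;
        _ = ctx.io; // would drive reads/writes
        const buf = try ctx.allocator.alloc(u8, 64);
        defer ctx.allocator.free(buf);
        ctx.diag.message = "processed";
    }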

messe

3 hours ago

> Every struct or bare fn now needs (2) fields/parameters by default.

Storing interfaces as fields in structs is becoming a bit of an anti-pattern in Zig. There are still use cases for it, but you should think twice before making it your go-to strategy. There's been a recent shift in the standard library toward "unmanaged" containers, which don't store a copy of the Allocator interface; instead, Allocators are passed to any member function that allocates.

Previously, one would write:

    var list: std.ArrayList(u32) = .init(allocator);
    defer list.deinit();
    for (0..count) |i| {
        try list.append(@intCast(i));
    }
Now, it's:

    var list: std.ArrayList(u32) = .empty;
    defer list.deinit(allocator);
    for (0..count) |i| {
        try list.append(allocator, @intCast(i));
    }
Or better yet:

    var list: std.ArrayList(u32) = .empty;
    defer list.deinit(allocator);
    try list.ensureUnusedCapacity(allocator, count); // Allocate up front
    for (0..count) |i| {
        list.appendAssumeCapacity(@intCast(i)); // No try or allocator necessary here
    }

Mond_

2 hours ago

Yes, and it's good that way.

Please, anything but a dependency injection framework. All parameters and dependencies should be explicit.

SvenL

3 hours ago

I think and hope that they don’t do that. As far as I remember their mantra was "no magic, you can see everything which is happening". They wanted to be a simple and obvious language.

qudat

3 hours ago

That's fair, but the same argument can be made for Go's verbose error handling. In that case we could argue that `try` is magical, although I don't think anyone would want to take that away.

dylanowen

3 hours ago

This seems a lot like what the scala libraries Zio or Kyo are doing for concurrency, just without the functional effect part.

ecshafer

4 hours ago

I like the look of this direction. I am not a fan of the `async` keyword that has become so popular in some languages that then pollutes the codebase.

davidkunz

4 hours ago

In JavaScript, I love the `async` keyword as it's a good indicator that something goes over the wire.

Dwedit

2 hours ago

Async always confused me as to when a function would actually create a new thread or not.

warmwaffles

4 hours ago

Async usually ends up being a coloring function that knows no bounds once it is used.

amonroe805-2

3 hours ago

I’ve never really understood the issue with this. I find it quite useful to know what functions may do something async vs which ones are guaranteed to run without stopping.

In my current job, I mostly write (non-async) python, and I find it to be a performance footgun that you cannot trivially tell when a method call will trigger I/O, which makes it incredibly easy for our devs to end up with N+1-style queries without realizing it.

With async/await, devs are always forced into awareness of where these operations do and don’t occur, and are much more likely to manage them effectively.

FWIW: The zig approach also seems great here, as the explicit Io function argument seems likely to force a similar acknowledgement from the developer. And without introducing new syntax at that! Am excited to see how well it works in practice.

newpavlov

3 hours ago

In my (Rust-colored) opinion, the async keyword has two main problems:

1) It tracks a code property which is usually omitted in sync code (i.e. most languages do not mark functions with "does IO"). Why is IO more important than "may panic", "uses bounded stack", "may perform allocations", etc.?

2) It implements an ad-hoc problem-specific effect system with various warts. And working around those warts requires re-implementation of half of the language.

echelon

3 hours ago

> Why IO is more important than "may panic", "uses bounded stack", "may perform allocations", etc.?

Rust could use these markers as well.

newpavlov

3 hours ago

I agree. But it should be done with a proper effect system, not a pile of ad hoc hacks built on abuse of the type system.

echelon

25 minutes ago

`async` is in the type system. In your mind, how would you mark and bubble up panicky functions, etc.? What would that look like?

I felt like a `panic` label for functions would be nice, but if we start stacking labels it becomes cumbersome:

  pub async panic alloc fn foo() {}
That feels dense.

I think ideally it would be something readers could spot at first glance, not something inferred.

ecshafer

3 hours ago

Is this Django? I could maybe see that argument there. Some frameworks and ORMs can muddy that distinction. But for most of the code I've written, it's really clear whether something will lead to IO or not.

warmwaffles

an hour ago

I've watched many changes over time where a non-async function uses an async call, and then the function eventually becomes marked as async. Once the majority of functions get marked as async, what was the point of that boilerplate?

LunicLynx

2 hours ago

Pro tip: use postfix keyword notation.

E.g.

doSomethingAsync().defer

This removes stupid parentheses because of precedence rules.

Biggest issue with async/await in other languages.

debugnik

3 hours ago

> Languages that don't make a syntactical distinction (such as Haskell) essentially solve the problem by making everything asynchronous

What the heck did I just read. I can only guess they confused Haskell for OCaml or something; the former is notorious for requiring that all I/O is represented as values of some type encoding the full I/O computation. There's still coloring since you can't hide it, only promote it to a more general colour.

Plus, isn't Go the go-to example of this model nowadays?

gf000

3 hours ago

Haskell has green threads. Plus nowadays Java also has virtual threads.

debugnik

3 hours ago

And I bet those green threads still need an IO type of some sort to encode anything non-pure, plus usually do-syntax. Comparing merely concurrent computations to I/O-async is just weird. In fact, I suspect that even those green threads already have a "colourful" type, although I can't check right now.

Ericson2314

2 hours ago

This is a bad explanation because it doesn't explain how the concurrency actually works. Is it based on stacks? Is there a heavy runtime? Is it stackless and everything is compiled twice?

IMO every low level language's async thing is terrible and half-baked, and I hate that this sort of rushed job is now considered de rigueur.

(IMO We need a language that makes the call stack just another explicit data structure, like assembly and has linearity, "existential lifetimes", locations that change type over the control flow, to approach the question. No language is very close.)

codr7

3 hours ago

Love it, async code is a major pita in most languages.

giancarlostoro

3 hours ago

When Microsoft added Tasks / Async Await, that was when I finally stopped writing single threaded code as often as I did, since the mental overhead drastically went away. Python 3 as well.

codr7

an hour ago

Isn't this exactly the mess Zig is trying to get out of here?

Every other example I've seen encodes the execution model in the source code.

cies

3 hours ago

I like Zig and I like their approach in this case.

From the article:

    std.Io.Threaded - based on a thread pool.

      -fno-single-threaded - supports concurrency and cancellation.
      -fsingle-threaded - does not support concurrency or cancellation.

    std.Io.Evented - work-in-progress [...]
Should `std.Io.Threaded` not be split into `std.Io.Threaded` and `std.Io.Sequential` instead? Single threaded is another word for "not threaded", or am I wrong here?