dilawar
a day ago
Does anyone have historical insight into why async/await seems to have taken over the world?
I often write Rust and I don't find async very attractive, but so many good projects seem to advertise it as a "killer feature". Diesel.rs doesn't have async, and they claim the perf improvement may not be worth it (https://users.rust-lang.org/t/why-use-diesel-when-its-not-as...).
For a single threaded JS program, async makes a lot of sense. I can't imagine any alternative pattern to get concurrency so cleanly.
robmccoll
a day ago
In single-threaded scripting languages, it has arisen as a way to allow overlapping computation with communication without having to introduce multithreading and deal with the fact that memory management and existing code in the language aren't thread-safe. In other languages it seems to be used as a way to achieve green threading with an opt-in runtime written as a library within the language, rather than doing something like Go where the language and built-in runtime manage scheduling goroutines onto OS threads. Personally I like Go's approach. Async/await seems like achieving a similar thing with way more complexity. Most of the time I want an emulation of synchronous behavior. I'd rather be explicit about when I want something to go run on its own.
fmajid
20 hours ago
Agreed. Async I/O is something where letting the runtime keep track of it for you doesn't incur any extra overhead, unlike garbage collection, and that makes for a much more natural pseudo-synchronous programming style.
devjab
21 hours ago
Microsoft did some research on it 15-20 years ago for .NET which showed that sync doesn't scale for I/O workloads. The rest of the world sort of "knew" this at that point, and all the callback and state-machine hell that came before was also pushing the world toward async/await, but the Microsoft research kind of formed the foundation for "universal" acceptance. It's not just for single-threaded JS programs; you almost never want to tie up your threads even when you can have several of them, because it's expensive in memory. As you'll likely see in this thread, some lower-level programmers will mention that they prefer to build stackful coroutines themselves. Obviously that is not something Microsoft wanted people to have to do with C#, but it's a thing people do in C/C++ and similar (probably not with C#), and if you're lucky, you can even work in a place that doesn't turn it into the "hell" part.
I can't say why Diesel.rs doesn't need async, and I would like to point out that I know very little about Diesel.rs beyond the fact that it has to do with databases. It would seem strange, though, that anything working with databases, which is an I/O-heavy workload, would not massively benefit from async.
lukaslalinsky
a day ago
https://en.wikipedia.org/wiki/C10k_problem
Because when you require one thread per connection, you have trouble getting to thousands of active connections, and people want to scale way beyond that. System threads have overhead that makes them impractical for this use case. The alternatives are callbacks, which everybody hates, and for good reason. Then you have callbacks wrapped in Futures/Promises. And then you have some form of coroutines.
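Roughly, the event-driven alternative looks like this on Linux: a single thread and an epoll readiness loop, touching a connection only when the kernel says it's ready. (A bare-bones echo-server sketch, no error handling, just to show the shape of it.)

    /* Single-threaded echo server: one epoll readiness loop instead of one
       thread per connection. Work on a socket happens only when the kernel
       reports it ready. Linux-only; error handling omitted for brevity. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/epoll.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int listener = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(8080);
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        bind(listener, (struct sockaddr *)&addr, sizeof addr);
        listen(listener, SOMAXCONN);

        int ep = epoll_create1(0);
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = listener };
        epoll_ctl(ep, EPOLL_CTL_ADD, listener, &ev);

        struct epoll_event ready[64];
        char buf[4096];
        for (;;) {
            int n = epoll_wait(ep, ready, 64, -1);  /* sleep until something is ready */
            for (int i = 0; i < n; i++) {
                int fd = ready[i].data.fd;
                if (fd == listener) {               /* new connection: watch it too */
                    int conn = accept(listener, NULL, NULL);
                    struct epoll_event cev = { .events = EPOLLIN, .data.fd = conn };
                    epoll_ctl(ep, EPOLL_CTL_ADD, conn, &cev);
                } else {                            /* readable: echo back, then move on */
                    ssize_t r = read(fd, buf, sizeof buf);
                    if (r <= 0) close(fd);          /* close also drops it from epoll */
                    else write(fd, buf, (size_t)r);
                }
            }
        }
    }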
Keep in mind that what Zig is introducing is not what other languages call async/await. It's more like the I/O abstraction in Java, where you can use the same APIs with platform threads and virtual threads; but in Zig you will need to pass the io parameter around, while in Java it's done in the background.
matheusmoreira
a day ago
> The alternatives are callbacks, which everybody hates and for a good reason. Then you have callbacks wrapped by Futures/Promises. And then you have some form of coroutines.
The event loop model is arguably equivalent to coroutines. Just replace yield with return and have the underlying runtime decide which functions to call next by looping through them in a list. You can even stall the event loop and increase latency if you take too long to return. It's cooperative multitasking by another name.
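A toy sketch of that substitution in C, assuming nothing beyond the standard library: each task is an ordinary function that does one small step, keeps its own state in a struct, and returns to the loop, which plays the role of the runtime deciding who runs next.

    /* Toy cooperative "scheduler": each task does one small step of work,
       stores its own state in a struct, and returns to the loop instead of
       yielding. A task that takes too long before returning stalls everyone. */
    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        int  step;   /* hand-rolled "program counter" for the task */
        bool done;
    } task_state;

    /* returning from here is the moral equivalent of yield */
    static void count_task(task_state *s, const char *name) {
        if (s->step < 3)
            printf("%s: step %d\n", name, s->step++);
        else
            s->done = true;
    }

    int main(void) {
        task_state a = {0}, b = {0};
        while (!a.done || !b.done) {   /* the "event loop" */
            if (!a.done) count_task(&a, "A");
            if (!b.done) count_task(&b, "B");
        }
        return 0;
    }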
zozbot234
21 hours ago
Coroutines/resumable functions are not restricted to yielding to a single runtime or event loop, they can simply "resume" each other directly. There are also extensions of coroutines that are more than one-shot (a resumable function where the current state can be copied and invoked more than once) and/or are allowed to provide values when "resuming" other code, which also goes beyond the common "event loop" model.
lukaslalinsky
21 hours ago
It's all the same concept; it's just a matter of who/what is managing the state while you are waiting for I/O. When you yield, it's the compiler/runtime making sure the context is saved. When you return, it's your responsibility.
troupo
a day ago
> The alternatives are callbacks
No. The alternative is lightweight/green threads and actors.
The thing with await is that it can be retrofitted onto existing languages and runtimes with relatively little effort. That is, it's significantly less effort than retrofitting an actual honest-to-god proper actor system a la Erlang.
matheusmoreira
21 hours ago
> The alternative is lightweight/green threads and actors.
How lightweight should threads be to support high scale multitasking?
Writing my own language, capturing stack frames in continuations resulted in figures like 200-500 bytes. Grows with deeply nested code, of course, but surely this could be optimized...
https://www.erlang.org/docs/21/efficiency_guide/processes.ht...
This document says Erlang processes use 309 words which is in the same ballpark.
troupo
21 hours ago
I didn't have to answer :) Thank you for looking it up.
Erlang also enjoys quite a lot of optimizations at the VM level. E.g. a task is parked/hibernated if there's no work for it to perform (e.g. it's waiting for a message), the switch between tasks is extremely lightweight, VM internals are re-entrant and use CPU-cache-friendly data structures, garbage collection is both lightweight and per-thread/task, etc.
antihero
21 hours ago
Isn’t await often just sugar around the underlying implementation, be that green threads, epoll, picoev, etc.?
troupo
21 hours ago
I think it depends on the language?
JavaScript's async/await probably started as sugar for callbacks (since JS is single-threaded). Many others definitely have it as sugar for whatever threading implementation they have. In C# it's sugar on top of the whole mechanism of structured concurrency.
But I'm mostly talking out of my ass here, since I don't know much about this topic, so everything above is hardly a step above speculation.
lukaslalinsky
21 hours ago
> The alternative is lightweight/green threads and actors.
Those are all some form of coroutines.
jandrewrogers
19 hours ago
The classic use case for async was applications with extreme I/O intensity, like high-end database engines. If designed correctly it is qualitatively higher performance than classic multithreading. This is the origin of async style.
Those large performance gains do not actually come from async style per se, which is where people become confused.
What proper async style allows that multithreading does not is that you can design and implement sophisticated bespoke I/O and execution schedulers for your application. Almost all the performance gains are derived from the quality of the custom scheduling.
If you delegate scheduling to a runtime, it almost completely defeats the point of writing code in async style.
ajross
19 hours ago
> The classic use case for async was applications with extreme I/O intensity, like high-end database engines. If designed correctly it is qualitatively higher performance than classic multithreading.
FWIW, I'm not aware of any high-end database engines that make significant use of async code on their performance paths. They manage concurrent state with event loops, state machines, and callbacks. Those techniques, while crufty and too old to be cool, are themselves significantly faster than async.
Async code (which is isomorphic to process-managed green threads) really isn't fast. It's just that OS thread switching is slow.
felipellrocha
21 hours ago
Being fully multithreaded comes with significant overhead, while browsers essentially proved how much unreasonable performance you can get out of a single CPU thanks to JavaScript’s async model.
It is hard to describe just how much more can be done on a single thread with just async.
api
21 hours ago
I think it’s a terrible complexity-multiplying workaround for the fact that we can’t fix our ancient 1970s OS APIs. Threads should be incredibly cheap. I should be able to launch them by the tens of millions, kill them at will, and this should be no more costly than goroutines.
(All modern OSes in common use are 1970s vintage under the hood. All Unix is Bell Labs Unix with some modernization and veneer, and NT is VMS with POSIX bolted on later.)
Go does this by shipping a mini VM in every binary that schedules user-space fibers M:N onto OS threads. The fact that Go has to do this is also a workaround for OS APIs that date back to before disco was king, but at least the programmer doesn’t have to constantly wrestle with it.
Our whole field suffers greatly from the fact that we cannot alter the foundation.
BTW I use Rust async right now pretty heavily. It strikes me as about as good as you can do to realize this nightmare in a systems language that does not ship a fat runtime like Go, but having to actually see the word “async” still makes me sad.
nananana9
21 hours ago
You don't need a fat runtime to do fibers/stackful coroutines. You don't need any language support, for that matter: just 50 lines of assembly to save registers on the stack and switch stack pointers. Minicoro [1] is a C library that implements fibers in a single header (just the creation/destruction/context switching; you have to bring your own scheduler).
Our game engine has an in-house implementation: creating a fiber, scheduling it, and waiting for it to complete takes ~300ns on my box. Creating an OS thread and join()ing it is about 1000x slower, ~300us.
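For illustration, the same stack-switching idea can be sketched with POSIX ucontext instead of hand-written assembly or minicoro's API (slower, and deprecated on some platforms, but it shows the shape of it):

    /* Minimal stackful coroutine via POSIX ucontext: give the fiber its own
       stack, then swapcontext() saves/restores registers and the stack
       pointer, much like the ~50 lines of assembly would. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, fiber_ctx;

    static void fiber_body(void) {
        printf("fiber: part 1\n");
        swapcontext(&fiber_ctx, &main_ctx);   /* "yield" back to the caller */
        printf("fiber: part 2\n");
        /* falling off the end resumes main_ctx via uc_link */
    }

    int main(void) {
        char *stack = malloc(64 * 1024);

        getcontext(&fiber_ctx);
        fiber_ctx.uc_stack.ss_sp   = stack;
        fiber_ctx.uc_stack.ss_size = 64 * 1024;
        fiber_ctx.uc_link = &main_ctx;        /* where to go when the fiber ends */
        makecontext(&fiber_ctx, fiber_body, 0);

        swapcontext(&main_ctx, &fiber_ctx);   /* run until the fiber yields */
        printf("main: fiber yielded\n");
        swapcontext(&main_ctx, &fiber_ctx);   /* resume until it finishes */
        printf("main: fiber finished\n");

        free(stack);
        return 0;
    }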
lukaslalinsky
21 hours ago
I have an even simpler version:
https://github.com/lalinsky/zio/blob/main/src/coroutines.zig
It has the benefit that, since Zig compiles everything as a single compilation unit, the compiler can be smarter about which registers need to be saved.
zozbot234
21 hours ago
"Threads" are expensive because they are OS-managed "virtual" cores as seen by the current process. You can run coroutines as "user-level" tasks on top of kernel threads, and both Go and Rust essentially allow this, though in slightly different ways.
sapiogram
21 hours ago
Kill threads at will?
api
19 hours ago
That requires some explanation. Basically I think runtimes should be abort-safe and have some defined thing that happens when a thread is aborted. Antiquated 70s blocking APIs are not, or are not consistently.
It’s a minor gripe compared to the heaviness of threads and making every programmer hand-roll fibers by way of async.
rr808
a day ago
I think it's that, as JavaScript has taken over the world, people use those paradigms in other languages. It makes absolutely no sense to me as someone who doesn't touch JS or Python.