gwbas1c
4 months ago
Before making criticisms that Garbage Collection "defeats the point" of Rust, it's important to consider that Rust has many other strengths:
- Rust has no overhead from a "framework"
- Rust programs start up quickly
- The Rust ecosystem makes it very easy to compile a command-line tool without lots of fluff
- The strict nature of the language helps guide the programmer to write bug-free code.
In short: There are a lot of good reasons to choose Rust that have little to do with the presence or absence of a garbage collector.
I think having a working garbage collector at the application layer is very useful, even if all it does is make Rust easier to learn. I do worry about third-party libraries using garbage collectors, because garbage collectors tend to impose a lot of requirements, which is why a garbage collector is usually tightly integrated into the language.
jvanderbot
4 months ago
You've just listed "Compiled language" features. Only the 4th point has any specificity to Rust, and even then, is vague in a way that could be misinterpreted.
Rust's predominant feature, the one that brings most of its safety and runtime guarantees, is borrow checking. There are things I love about Rust besides that, but the safety from borrow checking (and everything the borrow checker makes me do) is why I like programming in Rust. Now, when I program elsewhere, I'm constantly checking ownership "in my head", which I think is a good thing.
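For anyone who hasn't used it, here's a minimal sketch of the kind of mistake the borrow checker refuses to compile (a toy example, not from any real codebase):

    fn main() {
        let s = String::from("hello"); // s owns the heap allocation
        let t = s;                     // ownership moves to t
        // println!("{}", s);          // error[E0382]: borrow of moved value: `s`
        println!("{}", t);             // fine: t is the current owner
    }

That "who owns this right now?" question is exactly the one you end up asking in other languages too, just without a compiler to check your answer.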
gwbas1c
4 months ago
Oh no, I'm directly criticizing C/C++/Java/C#:
The heavyweight framework (and startup cost) that comes with Java and C# makes them challenging for widely-adopted lightweight command-line tools. (Although I love C# as a language, I find the Rust toolchain much simpler and easier to work with than modern dotnet.)
Building C (and C++) is often a nightmare.
hypeatei
4 months ago
> The heavyweight framework
Do you mean the VM/runtime? If so, you might be able to eliminate that with an AOT build.
> I find the Rust toolchain much simpler and easier to work with than modern dotnet
What part of the toolchain? I find them pretty similar with the only difference being the way you install them (with dotnet coming from a distro package and Rust from rustup)
jillesvangurp
4 months ago
Exactly, natively compiled garbage-collected languages (like Java with Graal, or as executed on Android) don't have a lot of startup overhead. In Java the startup overhead mostly comes from two things that usually conspire to make things worse:
1) dynamic loading of jar files
2) reflection
Number 1 allows you to load arbitrary jar files with code and execute them. Number 2 allows you to programmatically introspect existing code and then execute logic like "find me all Foo subclasses, create an instance of each, and return the list of those objects". You can do that at any time, but a lot of that kind of stuff happens at startup, and it involves parsing, loading, and introspecting thousands of class files in jar files that need to be opened and decompressed.
Most of "Java is slow" is basically programs loading a lot of stuff at startup and then using reflection to look for code to execute. You don't have to do those things, but a lot of popular web frameworks like Spring do. A lot of that stuff is actually remarkably quick considering what it is doing; you'd struggle to do it in many other languages, or at all, because many languages don't have reflection. If you profile it, there are millions of calls happening in the first couple of seconds. It takes time, yes, but that code has also been heavily optimized over the years. Dismissing what it does as "Java is slow" while X is fast is usually a bit of an apples-and-oranges comparison.
With Spring Boot, there are dozens of libraries that self initialize if you simply add the dependency or the right configuration to your project. We can argue about whether that's nice or not; I'm leaning to no. But it's a neat feature. I'm more into lighter weight frameworks these days. Ktor server is pretty nice, for example. It starts pretty quickly because it doesn't do a whole lot on startup.
Loading a tiny garbage collector library on startup isn't a big deal. It will add a few microseconds to your startup time, maybe; probably not milliseconds. Kotlin has a nice native compiler: if you compile hello world with it, you get a self-contained binary of a few hundred kilobytes containing the program, the runtime, and the garbage collector. It's not a great garbage collector. For memory-intensive stuff you are better off using the JVM, but if that's not a concern, it will do the job.
mk89
4 months ago
You forgot to mention Quarkus :)
ComputerGuru
4 months ago
New AOT C# is nice, but not fully workable with the most common dependencies. It addresses a lot of the old issues (size, bloat, startup latency, etc.).
jiggawatts
4 months ago
Hilariously, the Microsoft SQL Client is the primary blocker for AOT for most potential use cases.
Want fast startup for an Azure Function talking to Azure SQL Database? Hah… no.
In all seriousness, that one dependency is the chain around the ankle of modern .NET because it’s not even fully async capable! It’s had critical performance regression bugs open for years.
Microsoft’s best engineers are busy partying in the AI pool and forgot about drudgery like “making the basic components work”.
procaryote
4 months ago
Hello world in Java is pretty fast. Not Rust fast, but a lot faster than you'd expect.
Java starting slowly is mostly from all the cruft in the typical Java app, with Spring Boot, dependency injection frameworks, registries, etc. You don't have to have those; it's just that most Java devs use them and can't conceive of a world of low dependencies.
Still not great for command-line apps, but Java itself is much better than Java devs.
gizmo686
4 months ago
Testing on my machine, Hello World in java (openjdk 21) takes about 30ms.
In contrast, "time" reports that rust takes 1ms, which is the limit of it's precision.
Python does Hello World in just 8ms, despite not having a separate AOT compilation step.
The general guidance I've seen for interaction is that things start to feel laggy at 100ms; so 30ms isn't a dealbreaker, but throwing a third of your time budget at the baseline runtime cost is a pretty steep ask.
If you want to use the application as a short-lived component in a larger system, then 30ms on every invocation can be a massive cost.
zigzag312
4 months ago
An app that actually does something will probably have an even larger startup overhead in Java, as there will be more to compile just-in-time.
pjmlp
4 months ago
Only when using neither AOT nor a JIT cache.
0cf8612b2e1e
4 months ago
I recall that Mercurial was really fighting their Python test harness. It essentially would start up a new Python process for each test. At 10ms per test, it added up to something significant, given the volume of work needed to cover something as complicated as an SCM.
typpilol
4 months ago
10ms?
Did they have like 100k tests?
0cf8612b2e1e
4 months ago
Found a 2014 thread where 10-18% of the test harness time was spent booting the interpreter for the 13,000 required instances. The deeper thread was showing some 500-700 seconds of test time was just the interpreter overhead. The original point of the article was how much worse the overhead was in Python3 vs Python2.
https://mail.python.org/pipermail/python-dev/2014-May/134528...
typpilol
4 months ago
That's a lot more extreme than 10ms, and yeah, I get your point better now.
guelo
4 months ago
I'm trying and failing to imagine a situation where 30ms startup time would be a problem. Maybe some kind of network service that needs to execute a separate process on every request?
davemp
4 months ago
30ms is pretty close to noticeable for anything that responds to user input. 30ms startup + 20-70ms processing would probably bump you into the noticeable latency range.
tacticus
4 months ago
30ms is the absolute best case. Throw some Spring in there and you're very quickly at 10s. Rub some Spring SOAP on it and it's near enough to 60s.
pixelpoet
4 months ago
It's not about how long someone is willing to wait with a timer and judge it on human timescales, it's about what is an appropriate length of time for the task.
30ms for a program to start, print hello world, and terminate on a modern computer is batshit insane, and it's crazy how many programmers have completely lost sight of even the principle of this.
yeasku
4 months ago
Java is a tool, a very good one.
kbolino
4 months ago
Java's biggest weakness in this area is its lack of value types. It's well known, Project Valhalla has been trying to fix it for years, but the JVM just wasn't built around such types and it's hard to bolt them on after the fact. Java's next biggest weakness (which will become more evident with value types) is its type-erased generics. Both of these problems lead to time wasted on unnecessary GC, and though they can be worked around with arrays and codegen, it's unwieldy to say the least.
pron
4 months ago
Project Valhalla will also specialise generics for value types. When you say "it's hard to bolt on", the challenge isn't technical, but how to do this in a way that adds minimal language complexity (i.e. less than in other languages with explicit "boxed" and "inlined" values). Ideally, this should be done in a way that lets the compiler know which types can be inlined (e.g. they don't require identity) and then letting the compiler decide when it wants to actually inline an instance as a transparent optimisation. The challenge would not have been any smaller had Java done this from the beginning.
kbolino
4 months ago
Maybe I picked the wrong wording--I don't mean to diminish the ambitions or scope of Valhalla--but I definitely think the decision to eschew value types at the start has immense bearing on the difficulty of adding them now.
Java's major competitors, C# and Go, both have had value types since day one and reified generics since they gained generics; this hasn't posed any major problems to either language (with the former being IMO already more complex than Java, but the latter being similarly or even less complex than Java).
If the technical side isn't that hard, I'd have expected the JVM to have implemented value types already, making it available to other less conservative languages like Kotlin, while work on smoothly integrating it in Java took as long as needed. Project Valhalla is over a decade old, and it still hasn't delivered, or even seems close to delivering, its primary goals yet.
Just to be clear, I don't think every language needs to meet every need. The lack of value types is not a critical flaw of Java in general, as it only really matters when trying to use Java for certain purposes. After all, C# is very well suited to this niche; Java doesn't have to fit in it too.
pron
4 months ago
> Java's major competitors, C# and Go, both have had value types since day one
Yes (well, structs; not really value types), but at a significant cost to FFI and/or GC and/or user-mode threads (due to pointers into the stack and/or middle of objects). Java would not have implemented value types in this way, and doing it the way we want to would have been equally tricky had it been done in Java 1.0. Reified generics also come at a high price, that of baking the language's variance strategy into the ABI (or VM, if you want). However, value types will be invariant (or possibly extensible in some different way), so it would be possible to specialise generics for them without necessarily baking the Java language's variance model into the JVM (as the C# variance model is baked into the CLR).
Also, C# and Go didn't have as much of a choice, as their optimising compilers and GCs aren't as sophisticated as Java's (e.g. Java doesn't actually allocate every `new` object on the heap). Java has long tried to keep the language as simple as possible, and have very advanced compilers and GCs.
> If the technical side isn't that hard, I'd have expected the JVM to have implemented value types already, making it available to other less conservative languages like Kotlin, while work on smoothly integrating it in Java took as long as needed
First, that's not how we do things. Users of all alternative Java Platform languages (aka alternative JVM languages) combined make up less than 10% of all Java platform users. We work on the language, VM, and standard library all together (this isn't the approach taken by .NET, BTW). We did deliver invokedynamic before it was used by the Java language, but 1. that was after we knew how the language would use it, and 2. that was at a time when the JDK's release model was much less flexible.
Second, even if we wanted to work in this way, it wouldn't have mattered here. Other Java Platform languages don't just use the JVM. They make extensive use of the standard library and observability tooling. Until those are modified to account for value types, just a JVM change would be of little use to those languages. The JVM comprises maybe 25% of the JDK, while Kotlin, for example, makes use of over 95% of the JDK.
Anyway, Project Valhalla has taken a very long time, but it's making good progress, and we hope to deliver some of its pieces soon enough.
pjmlp
4 months ago
Go I agree with; .NET is on par with the JVM, even if it doesn't have the plethora of choice regarding implementations, and the ability to do C++-like coding means there isn't as much pressure for a pauseless GC as in Java.
Looking forward to Project Valhalla updates, I had some fun with the first EA.
pron
4 months ago
I'm not sure what is meant here by "on par with the JVM." I'm not trying to claim that one or the other is better, but there is a basic difference in how they're designed and continue to evolve. .NET believes in a language that gives more control on top of a more basic runtime, while Java believes in a language that's smaller built on top of a more advanced runtime. They just make different tradeoffs. .NET doesn't "need" a more advanced runtime because limitations in its runtime can be overcome by more explicit control in the language; Java doesn't "need" a more elaborate language because limitations in the level of control offered by the language can be overcome by a more sophisticated runtime.
I'm not saying these are huge differences, but they're real. C# has more features than the Java language, while Java's compiler and GCs are more sophisticated than the CLR's. Both of these differences are due to conscious choices made by both teams, and they each have their pros and cons. I think these differences are very apparent in how these two platforms tackled high-scale concurrency: .NET did it in the language; Java did it in the runtime. When it comes to value types, we see a similar difference: in C# you have classes and structs (with autoboxing); in Java we'll just have classes that declare whether they care about identity, and the runtime will then choose how to represent each instance in memory (earlier designs did explore "structs with autoboxing", but things have moved on from there, to the point of redefining autoboxing even for Java primitives; for a type that doesn't care about identity, autoboxing becomes an implementation detail - transparently made by the compiler - with no semantic difference, as a pointer and a value cannot be distinguished in such a case - hence https://openjdk.org/jeps/390 - unlike before, when an Integer instance could be distinguished from an int).
pjmlp
4 months ago
It means that it does the same JIT optimization tricks that Hotspot performs, escape analysis, devirtualization, inlining method calls, removing marshaling layers when calling into native code, PGO feedback,....
I would like to someday see someone write blog posts about performance like the famous ones from the .NET team, and to not have to depend on something external like JITWatch, instead of having it in the box like .NET does.
Example for upcoming .NET 10,
https://devblogs.microsoft.com/dotnet/performance-improvemen...
Also, C# and .NET low-level programming features are here today, while Project Valhalla delivery is still in the future, to be done across several versions, assuming that Oracle's management doesn't lose interest in funding the effort after all these years.
It is kind of interesting how after all these years, the solution is going to be similar in spirit to what Eiffel expanded types were already offering in 1986.
https://wiki.liberty-eiffel.org/index.php/Expanded_or_refere...
https://archive.eiffel.com/doc/online/eiffel50/intro/languag...
I guess that is what happens when language adoption turns out to go in a different path than originally planned, given Java's origins.
pron
4 months ago
> It means that it does the same JIT optimization tricks that Hotspot performs, escape analysis, devirtualization, inlining method calls, removing marshaling layers when calling into native code, PGO feedback,....
It has rather recently started using most (or perhaps all) of the same techniques; it does not actually perform all the same optimisations. C2 is still significantly more advanced. Of course, some optimisations are more difficult in C#, as its "low level" features expose more implementation details, giving the compiler less freedom.
> It is kind of interesting how after all these years, the solution is going to be similar in spirit to what Eiffel expanded types were already offering in 1986.
I don't have time to compare the details (and I don't work on Valhalla), but it doesn't seem to me to be the same thing. In Eiffel, the class says whether or not it's expanded. In Java, the compiler decides, as an optimisation, whether to inline or box different object instances (on a per-object, not per-class basis). It's just that classes with certain characteristics give the compiler more freedom to inline in more cases.
Think about it this way: In Java 8, Integer and int are two different types with very different behaviours, albeit with an autoboxing relationship. You can't synchronize on an int instance or assign null to an int variable; equality comparisons on Integer compare object identity, not numeric value. We'll gradually turn Integer and int into effectively the same type (with two names, for historical reasons), and turn the decision of whether a particular instance is inlined or not up to the compiler, as an optimisation decision. It's not that autoboxing will expand to user defined types, but rather it will become nonexistent.
But generally, almost anything we do in Java is something that's been done (more or less) somewhere else a while ago, because we like to avoid ideas that have not been tested. Basically, for X to be considered for Java, there must exist some language that tried X in 1986. It's just that it's not always the same language.
kbolino
4 months ago
This has been a very informative discussion, so thank you for that.
I did want to point out, though, that both languages have other (compound) value types than just structs. Go has value-based arrays, and C# has value-based tuples. It looks like C# may gain value-based discriminated unions as well, though it's not settled yet: https://github.com/dotnet/csharplang/blob/main/proposals/uni...
pjmlp
4 months ago
Currently it takes lots of boilerplate code, however with the Project Panama API you can model C types in memory, thus kind of already using value types even if Valhalla isn't here yet.
To avoid manually writing all the Panama boilerplate, you can instead write a C header file with the desired types, and then run jextract through it.
pjmlp
4 months ago
Only for those that don't know how to use AOT compilation tools for Java and C#.
paulddraper
4 months ago
Compiling Java AOT doesn’t obviate the need for the JVM.
At least not for Graal.
https://stackoverflow.com/questions/75316542/why-do-i-need-j...
pjmlp
4 months ago
Because it is a user problem: instead of compiling with native image, they produced a shared library out of the jar.
As you can see, it has nothing to do with that Stack Overflow question,
https://www.graalvm.org/jdk25/reference-manual/native-image/
gudzpoz
4 months ago
... That post you linked was from two years ago, discussing JEP 295, which was delivered eight years ago. Graal-based AOT has evolved a lot ever since. And the answer even explicitly recommended using native images:
> I think what you actually want to do, is to compile a native image of your program. This would include all the implications like garbage collection from the JVM into the executable.
And it is this "native image" that all the comments above in this thread have been discussing, not JEP 295. (And Graal-based AOT in native images does remove the need to bundle a whole JRE.)
jraph
4 months ago
GraalVM indeed does wonders wrt startup times and in providing a single binary you can call.
pjmlp
4 months ago
OpenJ9 as well.
Then there are all the others that used to be commercial, like ExcelsiorJET, or surviving ones like PTC and Aicas.
quotemstr
4 months ago
Heavyweight startup? What are you talking about? A Graal-compiled Java binary starts in a few milliseconds. Great example of how people don't update prejudices for decades.
jrop
4 months ago
Just going to jump in here and say that there's another reason I might want Rust with a Garbage Collector: The language/type-system/LSP is really nice to work with. There have indeed been times that I really miss having enums + traits, but DON'T miss the borrow checker.
tuveson
4 months ago
Maybe try a different ML-influenced language like OCaml or Scala. The main innovation of Rust is bringing a nice ML-style type system to a more low level language.
IshKebab
4 months ago
I wouldn't recommend OCaml unless you plan to never support Windows. It finally does support it in OCaml 5 but it's still based around cygwin which totally sucks balls.
Also the OCaml community is miniscule compared to Rust. And the syntax is pretty bonkers in places, whereas Rust is mostly sane.
Compile time is pretty great though. And the IDE support is also pretty good.
umanwizard
4 months ago
There are other nice things about Rust over OCaml that are mainly just due to its popularity. There are libraries for everything, the ecosystem is polished, you can find answers to any question easily, etc. I don't think the same can be said for OCaml, or at least not to the same extent. It's still a fairly niche language compared to Rust.
nobleach
4 months ago
I remember about 5 years ago, Stack Overflow for OCaml was a nightmare. It was a mishmash of Core (from Jane Street), Batteries, and raw OCaml. New developers were confronted with the prospect of opening multiple libraries with the same functionality (not the correct way of solving any problem).
Yoric
4 months ago
Jane Street apparently has a version of OCaml extended with affine types. I'd like to test that, because that would (almost) be the best of all worlds.
nobleach
4 months ago
I think you're referring to OxCaml. I'd love to see this make a huge splash. Right now one of the biggest shortcomings of OCaml is that you're still stuck implementing so much stuff from scratch. Languages like Rust, Go, and Java have HUGE ecosystems. OCaml is just as old as these languages (even older than Rust, since OCaml inspired Rust and Rust's original compiler was written in OCaml), but since it's never been as popular, it's hard to find well-supported libraries.
debugnik
4 months ago
I too wish that some OxCaml features bring new blood to OCaml. I've been using OCaml for a few years for personal projects and I find the language really simple and powerful at the same time, but I had to implement me some foundational libraries (e.g. proper JSON, parser combinators), and now I'm considering porting one of those projects to Rust just so I can have unboxed types and better Windows support.
> even older than Rust
That's an understatement, (O)Caml is between 17 and 25 years older than Rust 0.1 depending on which Caml implementation you start counting from.
seivan
4 months ago
[dead]
zamalek
4 months ago
- Rust is a nice language to use
tayo42
4 months ago
What other language has modern features like rust and is compiled?
procaryote
4 months ago
It depends completely on what you put in "modern features".
tayo42
4 months ago
Pattern matching, usable abstractions, non-null types, tagged unions (or whatever enums are), build tools, etc.
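In Rust terms that list looks roughly like this (just a toy sketch of the features, nothing more):

    // A tagged union: each variant can carry different data.
    enum Shape {
        Circle { radius: f64 },
        Rect { w: f64, h: f64 },
    }

    // Pattern matching is exhaustive: forget a variant and it won't compile.
    fn area(s: &Shape) -> f64 {
        match s {
            Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
            Shape::Rect { w, h } => w * h,
        }
    }

    // "Non-null types": absence is explicit via Option, not a null pointer.
    fn first_even(xs: &[i32]) -> Option<i32> {
        xs.iter().copied().find(|x| x % 2 == 0)
    }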
pjmlp
4 months ago
Standard ML from 1983, alongside all those influenced by it like Haskell, OCaml, Agda, Rocq,....
lmm
4 months ago
Most of those have nothing remotely approaching Rust's level of build tooling.
pjmlp
4 months ago
Yet parent was mostly talking about type systems.
If you prefer: Rust tooling is still quite far behind languages like Kotlin and Scala, which I didn't mention, but which also have such a type system.
lmm
4 months ago
> If you prefer, Rust tooling is still quite far behind from languages like Kotlin and Scala
I'm not sure that's true, at least when it comes to specifically build tooling. I'd say Cargo is far ahead of Gradle, Ant, or worst of all SBT, and probably even slightly ahead of Maven (which never really reached critical mass in the Kotlin or Scala ecosystems sadly).
pjmlp
4 months ago
You are missing the IDE capabilities, the maturity of GUI frameworks, a full OS that 80% of the world uses... the whole tooling package.
procaryote
4 months ago
This sounds more like "this is what I like in rust" than "features any modern language should have" though
If you like rust, use rust. It's very likely the best rust
lmm
4 months ago
> This sounds more like "this is what I like in rust" than "features any modern language should have" though
Good build tooling has been around since 2004, and all of the rest of those features have been around since the late 1970s. There's really no excuse for a language not having all of them.
procaryote
4 months ago
Every language should have haskell level pattern matching?
lmm
4 months ago
Yes.
antonvs
4 months ago
That’s definitely a list of features that any modern language should have. It’s in no way specific to Rust.
munificent
4 months ago
I'm not sure what you mean by "usable abstractions" and tagged unions are a little verbose because they are defined in terms of closed sets of subtypes, but otherwise Dart has all of those.
tayo42
4 months ago
Nothing like "oh you can do that but with this weird work around" or if they're clunky to use
strobe
4 months ago
Scala, but it's on the JVM (there's also https://scala-native.org without the JVM, but that doesn't really have a big user base).
cultofmetatron
4 months ago
Nim, Zig, and OCaml come to mind.
gizmo686
4 months ago
Also, the proposed garbage collector is still opt-in. Only pointers that are specifically marked as GC are garbage collected, which means most references are still cleaned up automatically when the owner goes out of scope. This greatly reduces the cost of GC compared to making all heap allocations garbage collected.
This isn't even a new concept in Rust. Rust already has a well-accepted Rc<T> type for reference-counted pointers, and from a usage perspective Gc<T> seems to fit the same pattern.
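A rough sketch of how the two would sit side by side (Gc<T> here stands in for the proposed opt-in type; the exact API is an assumption):

    use std::rc::Rc;

    fn main() {
        // Rc<T>: reference counted, freed deterministically when the count hits zero.
        let shared = Rc::new(vec![1, 2, 3]);
        let also_shared = Rc::clone(&shared);
        println!("{}", shared.len() + also_shared.len());

        // Hypothetical Gc<T> with the same shape: only values wrapped in Gc are
        // traced by the collector; everything else keeps ordinary ownership and
        // is dropped when its owner goes out of scope.
        // let node = Gc::new(Node { next: None });
        // let other = Gc::clone(&node); // cycles would be collected, unlike with Rc
    }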
zigzag312
4 months ago
A language where most of the libraries don't use GC, but which has opt-in GC, would be interesting. For example, only your business logic code would use GC (so you can write it more quickly), while the parts where you don't want GC are still written in the same language, avoiding the complexity of FFI.
Add an opt-in JIT for development builds for quick iteration and you don't need any other language. (Except for user scripts where needed.)
MaxBarraclough
4 months ago
The D programming language has an optional garbage collector. [0]
With the @nogc attribute the compiler can check/enforce that your function doesn't depend on the GC.
[0] https://dlang.org/spec/garbage.html
seivan
4 months ago
[dead]
yoyohello13
4 months ago
I love the rust ecosystem, syntax, and type system. Being able to write Rust without worrying about ownership/lifetimes sounds great honestly.
victorbjorklund
4 months ago
Also, assuming one can mix garbage collection with the borrow checker (is that what it's called in Rust?), one should be able to use GC for things that aren't called that much / aren't that important, and use the normal way for things that benefit from no GC interruptions etc.
rixed
4 months ago
In all honesty, there are three topics I try to refrain from engaging with on HN, often unsuccessfully: politics, religion, and Rust.
I don't know what you had to go through before reaching Rust's secure haven, but what you just said is true for the vast majority of compiled languages, which are legion.
bregma
4 months ago
> politics, religion, and rust
Is there a real distinction between any of those?
quotemstr
4 months ago
It's the fledging of a new generation of developers. Every time I see one of these threads I tell myself, "you, too, were once this ignorant and obnoxious". I don't know any cure except letting them get it out of their system and holding my nose while they do.
Ar-Curunir
4 months ago
Well you might find it good to learn that Rust is based on plenty of ideas dating back decades, so _your_ obnoxious and patronizing attitude is unwarranted.
quotemstr
4 months ago
Rust gets some things right and some things wrong. Its designers are generally clueful, but like all humans, fallible. But what does this discussion have to do with Rust exactly? Exactly the same considerations would apply to a C++ GC.
The only thing more cringe than insisting on a GC strategy without understanding the landscape is to interpret everything as an attack on one's favored language.
jadenPete
4 months ago
Rust's choice of constructs also makes writing safe and performant code easy. Many other compiled languages lack proper sum and product types, and traits (type classes) offer polymorphism without many of the pitfalls of inheritance, to name a few.
imtringued
4 months ago
The problem with conventional garbage collection has very little to do with the principle or algorithms behind garbage collection and more to do with the fact that seemingly every implementation has decided to only support a single heap. The moment you can have isolated heaps almost every single problem associated with garbage collection fades away. The only thing that remains is that cleaning up memory as late as possible is going to consume more memory than doing it as early as possible.
tuveson
4 months ago
What problem does that solve with GC, specifically? It also seems like that creates an obvious new problem: If you have multiple heaps, how do you deal with an object in heap A pointing to an object in heap B? What about cyclic dependencies between the two?
If you ban doing that, then you’re basically back to manual memory management.
paulddraper
4 months ago
There’s a ton of work that goes into multi-generational management, incremental vs stop the world, frequency heuristics, etc.
A lot of the challenge is there is not just one universal answer for these, the optimum strategies vary case by case.
You are correct that each memory arena is the boundary of the GC. Any references between them must be handled manually.
grogers
4 months ago
BEAM (i.e. Erlang) is exactly that model: every lightweight process has its own heap. I don't see how you'd make that work in a more general environment that supports sharing pointers across threads.
drnick1
4 months ago
Aren't Rust programs still considerably larger than their C equivalent because everything is statically linked? It's kind of hard to see that as an advantage.
IshKebab
4 months ago
You can get Rust binaries pretty small: https://github.com/johnthagen/min-sized-rust
But in practice it's more like there's an overhead for "hello world" but it's a fixed overhead. So it's really only a problem where you have lots of binaries, e.g. for coreutils. The solution there is a multi-call binary like Busybox that switches on argv[0].
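A rough sketch of that argv[0] dispatch in Rust (not Busybox's actual code, just the idea; the applets shown are arbitrary):

    use std::env;
    use std::path::Path;
    use std::process::exit;

    fn main() {
        // The same binary is installed (or symlinked) under several names;
        // pick the applet to run based on the name it was invoked as.
        let argv0 = env::args().next().unwrap_or_default();
        let name = Path::new(&argv0)
            .file_name()
            .and_then(|s| s.to_str())
            .unwrap_or("")
            .to_owned();

        match name.as_str() {
            "true" => exit(0),
            "false" => exit(1),
            "echo" => {
                let rest: Vec<String> = env::args().skip(1).collect();
                println!("{}", rest.join(" "));
            }
            other => eprintln!("unknown applet: {other}"),
        }
    }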
C programs often seem small because you don't see the size of their dependencies directly, but they obviously still take up disk space. In some cases they can be shared but actually the amount of disk space this saves is not very big except for things like libc (which Rust dynamically links) and maybe big libraries like Qt, GTK, X11.
nsvd2
4 months ago
Yes, all Rust libraries depended on are statically compiled into the final binary. On one hand, it makes binary size much larger; on the other, it makes it much easier to build an application that will "just work" without too much fuss.
In my personal projects with Rust, this ends up being very nice because it makes packaging easier. However, I've never been in a situation where binary size matters like in the embedded space, for example.
Rust isn't the only language with this approach, Go is another.
paulddraper
4 months ago
No.
They may be larger because they are doing more work, depends on the program.
But no they don’t statically compile everything.
nsvd2
4 months ago
Static compilation and static linking are two separate things - however, Rust is both statically compiled and (usually) uses static linking of dependency libraries.
paulddraper
4 months ago
You’re right. I mean static linking.
And by default Rust does not statically link everything; it needs glibc.
James_K
4 months ago
Go is probably a better pick in this case.
throwaway127482
4 months ago
With data-intensive Go applications you eventually hit a point where your code has performance bottlenecks that you cannot fix without either requiring insane levels of knowledge of how Go works under the hood, or using CGo and incurring a high cost on each CGo call (last I heard it was something like 90ns), at which point you find yourself regretting that you didn't write the program in Rust. If GC in Rust could be made ergonomic enough, I think it could be a better default choice than Go for writing a compiled app with high velocity: you could start off with an ergonomic GC style of Rust, then later drop into manual mode wherever you need performance.
bryanlarsen
4 months ago
It's possible to turn off GC for most of Go and only use it where I want it? That's what this solution gives us for Rust.
ViewTrick1002
4 months ago
Inviting in nil errors, data races and a near non-existent type system.
fithisux
4 months ago
I really like your work