One Year of Rust in Production

70 points, posted 10 months ago
by skwee357

79 Comments

throwup238

10 months ago

> Compile time is still PITA

This is Rust's Achilles heel and has been since almost day one. Once a project is at a nontrivial scale, it really starts to weigh it down. I'm working on a Rust/C++/Qt QML desktop app and I've spent the last week refactoring my crates/libraries so I can split them off as shared libraries that can be built independently, otherwise even incremental builds can take minutes. Thankfully there are some crates to make a stable ABI and dynamic reload.
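
A rough sketch of the shared-library pattern being described (illustrative only, not the commenter's actual code): the crate is built as a cdylib and exposes a small `extern "C"` surface, so the host app links against the prebuilt library instead of recompiling the crate on every change; crates like abi_stable or libloading can then make that boundary richer and handle dynamic reload.

    // Hypothetical crate built as a shared library (crate-type = ["cdylib"] in
    // Cargo.toml) so the main application doesn't rebuild it on every change.

    /// Keep the boundary `extern "C"`: Rust's own ABI is not stable across
    /// compiler versions, so a C-compatible surface is the safe choice.
    #[no_mangle]
    pub extern "C" fn item_count_after_insert(current: u32) -> u32 {
        // Real logic lives behind this stable boundary.
        current.saturating_add(1)
    }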

> LLMs rarely help with a proper solution, as most of the packages are kind of niche.

This has also bitten me quite a bit but at the same time, I've been impressed with what Claude 3.5 and o1-preview have been able to do with Rust even with niche libraries like cxx-qt - a relatively new library with little in the training data. A lot of stuff like writing Rust implementations of QAbstractListModel works really well when given the right context (like an example implementation of another list model).

LLMs have also been a boon for writing macros, both macro_rules and proc macros.
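
For readers who haven't written one, the kind of repetitive macro_rules boilerplate this refers to looks roughly like the following (an illustrative sketch with made-up names, not from the poster's project):

    // Generate trivial getter methods for a struct's fields.
    macro_rules! getters {
        ($name:ident { $($field:ident : $ty:ty),* $(,)? }) => {
            impl $name {
                $(
                    pub fn $field(&self) -> &$ty {
                        &self.$field
                    }
                )*
            }
        };
    }

    struct Config {
        host: String,
        port: u16,
    }

    // Expands to `impl Config { fn host(..) .. fn port(..) .. }`.
    getters!(Config { host: String, port: u16 });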

iTokio

10 months ago

There is a subset of Rust that compiles fast, but it requires avoiding serde and crates that depend on syn, quote, proc-macro…

The issue is that macros and generics can generate tons of code under the hood.

Unfortunately, some major parts of the Rust ecosystem, like serialization and async frameworks, are dependent on these features.

Ygg2

9 months ago

You don't need serde or proc macros to blow up your compilation times - just a lot of complex monomorphised generics.
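
To make that concrete, here is a minimal sketch of the trade-off: a generic function is recompiled (monomorphised) for every concrete type it's used with, while a trait object is compiled once at the cost of dynamic dispatch.

    use std::fmt::Display;

    // Monomorphised: separate machine code is generated for every concrete T
    // this is instantiated with, which adds up across a large codebase.
    fn log_generic<T: Display>(value: T) {
        println!("{value}");
    }

    // Dynamic dispatch: compiled exactly once, regardless of how many types
    // implement Display; slightly slower at runtime, cheaper to compile.
    fn log_dyn(value: &dyn Display) {
        println!("{value}");
    }

    fn main() {
        log_generic("hello"); // instantiation for &str
        log_generic(42);      // another instantiation for i32
        log_dyn(&3.14);       // no new instantiation
    }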

Ygg2

9 months ago

Keep in mind the times given in the article are unrealistically long.

Bevy, being the chonker it is with 300-700 dependencies, takes about 5 minutes from scratch, and about 15 seconds on average after that.

What is happening here is a combination of an underpowered GitLab runner, no caching of artifacts between runs, and dog knows what else.

pjmlp

9 months ago

While true, not everyone can afford a gaming rig just for using a compiled language, so this hurts adoption.

Ygg2

9 months ago

A gaming rig's cost is mostly the GPU. Compilers love cores, SSD, and RAM, in that order.

A gaming rig is not great for compiling, and vice versa.

pjmlp

9 months ago

It was an example of the kind of compute power; of course we can get pedantic about which hardware.

Ygg2

9 months ago

You're overestimating the budget needed for the rig. Would working with a Windows 7 era laptop and Rust suck? Yes. But so would anything else.

I'm on a 4-year-old PC, granted it's a 5900X with 3800 MHz RAM, and my numbers are derived from it.

pjmlp

9 months ago

Quite far from the typical i5 most folks have access to in shopping malls.

Ygg2

9 months ago

Even an i5 will only make it around twice as slow[1]. So from 5 min -> 10 min.

And that's a relatively pessimistic scenario - huge number of dependencies, slow processor (i5), fresh build. With incremental builds, it goes from minutes to seconds.

[1]https://youtu.be/utWSSlyabjc?t=798

Klonoar

9 months ago

My M1 from ~4 years ago still rips through compiles. shrug

pjmlp

9 months ago

Outside wealthy countries Apple hardware isn't really an option.

noisy_boy

9 months ago

I have yet to see an example of a full-time Rust developer in a poor country who can't afford Apple hardware.

pjmlp

9 months ago

See, gatekeeping right there.

How did that lucky person become a full-time Rust developer, and land a job paying an Apple level of income, in the first place?

itishappy

9 months ago

Wait a bit longer?

https://xkcd.com/303/

I'd argue that gatekeeping here is being done by the "Rust requires overpowered hardware" position you're assuming. I happily use an RPi4.

pjmlp

9 months ago

When there are alternatives that require waiting less, those alternatives eventually get more eyes.

itishappy

9 months ago

Sure, all else being equal. That's why we all write LISP, right? No compile times.

ekaryotic

9 months ago

Maybe we could use AI to optimise compilers somehow.

cosmic_quanta

9 months ago

The reason rustc and other compilers like it (e.g. GHC) are "slow" isn't because they are unoptimized; it's because they do A LOT of work!

rustc is enforcing a strong and expressive type system, checking ownership, generating code from macros, etc. It will always be an order of magnitude slower than a C/C++ compiler, kind of by design.

imron

9 months ago

Large C++ projects aren't exactly known for their stellar compile times either.

Coming from a C++ background, I don't find Rust compile times to be that bad compared to similarly sized C++ projects.

pjmlp

9 months ago

Which is exactly why we traditionally don't build the world from scratch in C++, rather rely on binary libraries for 3rd party dependencies, or component frameworks.

Additionally, we rely on dynamic libraries during debug builds.

imron

9 months ago

Incremental build times in Rust are also on par with (and often faster than) C++ on the similar sized projects I've worked on.

pjmlp

9 months ago

Except those other compilers offer multiple toolchains, including interpreters, that are faster than rustc.

Standard ML, Idris, ..., F#, Haskell and OCaml have interpreters, and REPLs, in addition to the full blown AOT compilation.

Naturally Rust can also have those, however they aren't here today.

dietr1ch

10 months ago

Code generation isn't the fastest, but you can iterate with `cargo check`

pjmlp

9 months ago

Not great advice for graphical applications.

Havoc

10 months ago

Well, at least Moore's law favours Rust on compile time. So in that regard it seems like one of the less problematic language issues one can have - it'll sort itself out. The same can't be said for the issues in other languages.

pmezard

9 months ago

Exactly, the same way Moore's law has solved C++ compile times.

ynik

9 months ago

Not sure if you dropped a "/s".

In my experience, C++ template usage will always expand until all reasonably available compile time is consumed.

Rust doesn't have C++'s header/implementation separation, so it's easy to accidentally write overly generic code. In C++ you'd maybe notice "do I really want to put all of this in the header?", but in Rust your compile times just suffer silently. On the other hand, C++'s lack of a standardized build system led to the popularity of header-only libraries, which are even worse for compile times.

pjmlp

9 months ago

From my point of view it is more like laziness to learn how to properly use compiler toolchains.

dymk

9 months ago

First rule of good design: misuse isn't the human's fault.

flohofwoe

9 months ago

Moore's Law never sorted it out, even when it wasn't dead yet. Software always got slower quicker than the hardware it runs on got faster (as can be seen currently on ARM Macs: they felt incredibly fast in the beginning, but now software is starting to catch up and new Macs are starting to feel just as slow as the Intel Macs they replaced - looking specifically at you, Xcode).

Stem0037

9 months ago

The "if it compiles, it runs" phenomenon is indeed one of Rust's strongest selling points. It's amazing how much confidence you can have in your code just because it compiled successfully. However, I'd caution against relying on this too heavily - there are still plenty of logical errors that can slip through compilation.

The author's experience with sqlx is spot on. It's a game-changer for database interactions, though it does come with some drawbacks in terms of compile times and CI/CD pipeline complexity.

One point I'd add is about the ecosystem. While it's true that Rust doesn't have the same breadth of libraries as, say, JavaScript, the quality of the libraries that do exist is generally very high. And the community is incredibly helpful when you do run into issues.

pjmlp

9 months ago

However, this applies to all languages in the ML lineage, with their rich type systems.

brigadier132

9 months ago

I don't think sqlx is worth the compile time hit at all. Just use seaorm. Writing SQL queries using strings sucks.

satvikpendem

9 months ago

SeaORM is not typesafe, by design (and also uses sqlx itself as a dependency). Diesel can do dynamic queries too but it still remains completely typesafe.

brigadier132

9 months ago

> SeaORM is not typesafe, by design

What is your definition of typesafe here? With SeaORM, structs are created directly from the db schema with codegen. This is vastly superior to using strings and checking them at build time.

Just up front, it seems like you think I don't know about how all of these things are implemented.

I do.

So when you say SeaORM has sqlx as a dependency: I know that. If the purpose of that statement was to say SeaORM compile times can't be faster than sqlx because sqlx is a dependency of SeaORM, that thought is wrong.

Using SeaORM the way it is intended to be used results in code that is much faster to compile than sqlx being used the way it's intended to be used.

satvikpendem

9 months ago

I make no assumption about your level of knowledge, I simply stated that sqlx is a dependency just for completeness. I also did not imply anything about whether having such a dependency makes for faster or slower compile times than without.

Anyway, my comparison was with regards to SeaORM (and/or sqlx) versus Diesel. SeaORM explicitly says it is not fully typesafe. From the docs [0]:

> Using a completely static query builder can eliminate this entire class of errors. However, it requires that every parameter be defined statically and available compile-time. This is a harsh requirement, as there is always something you could not know until your program starts (environment variables) and runs (runtime configuration change). This is especially awkward if you come from a scripting language background where the type system has always been dynamic.

> As such, SeaORM does not attempt to check things at compile-time. We intend to (still in development) provide runtime linting on the dynamically generated queries against the mentioned problems that you can enable in unit tests but disable in production.

That is simply a deal breaker for me. I use an ORM specifically to be fully compile-time safe, so having this sort of "escape hatch" in a language that is already typesafe and sound is just going backwards. In contrast, Diesel is fully typesafe and fixes all of the issues from [0], even for dynamic queries [1].

This all being said, I have also found Prisma Client Rust [2] and Cornucopia [3] to be viable alternatives to the "big 3 (SeaORM, sqlx, Diesel)" so to speak. PCR uses Prisma to generate an ORM but is generally slower due to the overhead of running the Prisma engine, but it's ergonomic if you already use Prisma in the TypeScript world, and it's a pretty good ORM itself. I have not used Cornucopia as much but it is less of an ORM and more of a way to generate types in Rust from your SQL. There is some discussion about them on reddit, for comparison [4].

[0] https://www.sea-ql.org/SeaORM/docs/write-test/testing/#1-typ...

[1] https://old.reddit.com/r/rust/comments/w54ydi/diesel_200_rc1...

[2] https://github.com/Brendonovich/Prisma-Client-Rust

[3] https://github.com/cornucopia-rs/cornucopia

[4] https://old.reddit.com/r/rust/comments/wdos9x/cornucopia_v08...

empath75

9 months ago

I've been running rust in production for roughly a year and the 'if it compiles it works' thing is very much true. Once we cleaned up all the 'unwraps' and 'expects' after a few days and switched to proper error handling, we had a _single_ crash caused by an unchecked add in an underlying time library (combined with a bad integer cast sending it bad input).
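
For readers newer to Rust, "proper error handling" here typically means returning a Result and propagating failures with the ? operator instead of calling unwrap/expect. A minimal sketch (an assumed example, not the poster's code):

    use std::num::ParseIntError;

    // Instead of `text.parse::<u64>().unwrap()`, which panics on bad input,
    // return a Result and let the caller decide what a failure means.
    fn parse_timestamp_ms(text: &str) -> Result<u64, ParseIntError> {
        let millis: u64 = text.trim().parse()?; // `?` propagates the error
        Ok(millis)
    }

    fn main() {
        match parse_timestamp_ms("1700000000000") {
            Ok(ts) => println!("parsed: {ts}"),
            Err(e) => eprintln!("bad timestamp: {e}"),
        }
    }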

That statement is not really just about memory safety, but it's also about the type system and pattern matching, which prevents a lot of _logic_ bugs that would cause runtime errors and not just memory safety problems. You don't have nil pointer dereferences like you do with go, you don't have problems with accidentally passing a single item instead of a list of items, or a string instead of integer like you might get with python.

It's also a lot easier to write than its reputation would imply, especially if you don't especially care about memory efficiency or performance and you aren't writing close to the metal, and you can just clone stuff onto the heap all over the place. Even with that, it uses way less memory and CPU than a comparable app in Python or Java, and it's usually more efficient than Go.

jksmith

10 months ago

> If it compiles, it runs

> And when it runs, it's very stable

Modula-2 40 years ago. The ship had strong bones to build upon a long time ago, but alas, C. The tyranny of the masses.

myworkinisgood

9 months ago

Not tyranny of the masses. C compilers became more popular because they could actually run in a reasonable amount of time.

pjmlp

9 months ago

More like they were free beer compilers, on a free beer OS.

On my piece of the world we were mostly using BASIC, Pascal dialects, Clipper, until Windows 3.x took over and made C relevant, alongside Petzold's book.

And by then, many of us would rather use Borland compilers with Object Windows Library in C++ and Turbo Pascal, than deal with Win16 directly.

Naturally on UNIX land it was another matter.

oneshtein

9 months ago

Pascal was fast, C was slow.

hu3

9 months ago

Pascal -> Oberon -> Go

Languages that cared about compilation speed from the start.

And it shows.

flaie

9 months ago

This is what I thought about OCaml when I first tried it in 2001. I still do.

zesterer

9 months ago

It's no accident that Rust's bootstrap compiler was written in OCaml. Rust borrows a lot of ideas from it. Arguably, OCaml and ML are closer ancestors of the language than C.

I like Rust, I use it as much as I can. But, it actually doesn't bring many new features to the table. What it's done is successfully learn from the past, scooping up the good ideas and ditching the bad ones. Its real selling point is how it wraps all of those old ideas up into a modern, well-maintained, stable package that's ready for use in the real world.

pdimitar

9 months ago

> What it's done is successfully learn from the past, scooping up the good ideas and ditching the bad ones. Its real selling point is how it wraps all of those old ideas up into a modern, well-maintained, stable package that's ready for use in the real world.

Which is no small feat. I wish OCaml cleaned house and settled on just one stdlib, removed a lot of legacy baggage, and oh yeah, added built-in Unicode support, and 5-6 other things I am forgetting now.

Sure, you can muscle through all that, but having Rust around really makes you wonder if it's worth it, and in my case I ultimately arrived at "nope, it is not", so I just use Elixir, Golang and Rust.

satvikpendem

9 months ago

To be fair, many languages have their bootstrap compiler written in OCaml or another similar ML. When I took my programming language interpreters and compilers classes, that's what they mentioned to us, because MLs are uniquely well suited to writing recursive descent parsers.

winrid

9 months ago

You still worked with raw pointers in Modula-2, though, right?

pjmlp

9 months ago

Yes, they are available, however Modula-2 provides safer strings, arrays and reference parameters, no need for raw pointers for those use cases.

In Modula-2, the equivalent of unsafe code blocks is importing the SYSTEM module.

EDIT: For those that aren't aware, GCC finally got GNU Modula-2 as part of the official set of frontends, no longer a side project, bringing to four (D, Go, Ada, Modula-2) the set of safer languages available out of the box on a full GCC install.

dicytea

10 months ago

> sqlx — a compile time, type safe SQL wrapper that runs your queries against a real DB

SQLx seems like end game stuff at first glance, but after trying it out for a while I eventually decided that it wasn't for me. Writing dynamic/conditional queries just sucks and there isn't any good solution. On the DX side, completion, formatting/linting, and highlighting are also non-existent (at least in VS Code).

I eventually settled on [Diesel][0] (a query builder-ish ORM) and I'm loving it so far. Its [performance][1] crushes every other SQL library, including SQLx (very counter-intuitive, huh?). It's technically an ORM, but the query builder is very flexible and you can also extend it with your own traits. It has its warts, but it's the most tolerable Rust SQL library I've found so far.

[0]: https://diesel.rs

[1]: https://github.com/diesel-rs/metrics

skwee357

9 months ago

AFAIK Diesel is not async. On top of that, I'm pretty sure you can find a query that you won't be able to perform with the Diesel DSL (some obscure PostgreSQL syntax). What I like about sqlx is that it's just pure SQL.
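
For context, the sqlx workflow being praised looks roughly like this: the query is plain SQL, and the query! macro checks it against a live database (via DATABASE_URL) at compile time. A sketch assuming a Postgres `users` table with non-null `id` and `email` columns:

    use sqlx::PgPool;

    // The SQL is just a string, but column names and parameter types are
    // verified against the real schema when this compiles.
    async fn find_email(pool: &PgPool, user_id: i64) -> Result<Option<String>, sqlx::Error> {
        let row = sqlx::query!("SELECT email FROM users WHERE id = $1", user_id)
            .fetch_optional(pool)
            .await?;
        Ok(row.map(|r| r.email))
    }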

throwaway0665

9 months ago

diesel-async adds an AsyncConnection that can be used with diesel. You can also do raw queries quite easily if you need to.

Klonoar

9 months ago

diesel-async works fine for almost all use-cases, it just layers over standard diesel calls via importing an async `RunQueryDsl` trait. I almost never think about it.

brigadier132

9 months ago

SQL is good, writing SQL using strings is not good.

dathinab

9 months ago

> Writing dynamic/conditional queries just sucks and there isn't any good solution.

SQLx is just providing some "core" foundations, which means yes there is no good query builder or anything like that.

So if you want to compare SQLx with e.g. Diesel, it's a bit like comparing apples to apple trees; instead, comparing sqlx+sea-query (or sqlx+sea-orm) would be a better comparison (though sea-orm again isn't a fully even comparison, as it goes further in some points than what Diesel provides).

Idk about now, but last time I touched Diesel (quite a while ago) I found their query building catastrophically bad the moment the query isn't built all in one place; productivity also wasn't good with it. There was also a pretty bad security issue they handled, IMHO, catastrophically, convincing me to stay away from it. Though I guess a lot of things probably got much better since then.

dicytea

9 months ago

> sqlx+sea-query (or sqlx+sea-orm)

I tried sea-orm, but I find its ORM API way too limited (it can't even do multiple joins). For anything beyond simple queries, you end up needing to use its query builder (sea-query), which is blind to your db schema, so you need to manually hand-validate all your queries. It's basically no better than pushing string queries + manually validating the output with serde.

> I found their query building catastrophically bad the moment the query isn't build all in one place

If you're talking about its crazy return types, there's the auto_type macro that lets you generate return types for functions automatically.

> There was also pretty bad security issue

That sounds concerning, can you link it here?

dathinab

9 months ago

> auto_type macro

> last time I touched [...] quite a while ago

it didn't exist back then ;=)

> That sounds concerning, can you link it here?

It shouldn't be there anymore; the issue was mainly the way it was handled. I had removed the details, as it was too long in the past to reliably remember all of them. Though it was something along the lines of: transactions in case of a panic not getting reset and getting reused if you used the connection pool they reexported (which, depending on what you do, e.g. if you have row-level security policies, can be very bad). And when that was reported, some other issues with the pool were found related to UnwindSafe (1). The pool fixed those issues, and if it had ended there, everything would have been fine. The issue is the Diesel authors didn't update to the newer version of the pool (which they reexported!) because they didn't like the way the fix was done, for some reasons I don't remember exactly but back then found pedantic and unreasonable. Furthermore, not only did they not adopt the fix (though by now I guess they did), they also didn't deprecate the pool, didn't replace it with a different one, closed all issues about the vulnerability, didn't document it, and criticized everyone opening the issue after having run into it for not "checking closed issues for a very much still existing/open security vulnerability". But again, that was years ago, so whatever; things might be much better by now and the developers might very well have learned to properly handle security issues by now.

(1): As a side note, IMHO UnwindSafe is one of the worst design mistakes in Rust, as it uses the term "safe" but has nothing to do with "unsafe" or soundness. Every Rust type has to be "safe", i.e. sound, in the context of crossing unwind boundaries. `UnwindSafe` just indicates whether it will "behave reasonably" in such a context. E.g. a non-`UnwindSafe` type might panic or outright abort if used after unwinding (but it still has to stay sound!). The reason I'm mentioning it is that back then Diesel seemed to have been (as far as I could tell) designed with the mindset that if you don't use panic=abort, it's your problem if you have bugs. Again, that might very well have changed by now, or at least be well documented by now, but it wasn't back then.
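
To make the UnwindSafe point concrete, a small standalone sketch (unrelated to Diesel's code): the marker only controls whether catch_unwind accepts a closure, and AssertUnwindSafe opts out of that check; memory safety is never at stake either way.

    use std::panic::{catch_unwind, AssertUnwindSafe};

    fn main() {
        let mut counter = 0;

        // A closure capturing `&mut counter` is not UnwindSafe, so catch_unwind
        // would reject it; AssertUnwindSafe says "I accept that this value may be
        // in a logically odd (but still memory-safe) state after a panic."
        let result = catch_unwind(AssertUnwindSafe(|| {
            counter += 1;
            panic!("boom");
        }));

        assert!(result.is_err());
        // `counter` is still perfectly usable afterwards; nothing unsound happened.
        println!("counter after panic: {counter}");
    }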

dathinab

9 months ago

Just to be really clear about it: all that happened multiple years ago, so do not judge them on how they managed their project in the past, but on how they manage it now.

super_linear

10 months ago

Nice article, but I was looking forward to more of the "in production" angle (e.g. more on operations, monitoring, performance). Instead this seemed to focus more on maintainability, the developer experience of using Rust, and the ecosystem.

skwee357

9 months ago

Author here.

Thanks. Could you expand on what specifically you would like to know? I'm happy to try and explore different angles of Rust in production, I'm just not sure that setting up open-telemetry is that different from other languages.

super_linear

9 months ago

Sorry, yeah, I was a bit unclear. I was curious if there were any metrics to show how, say, server CPU/memory or client-side latency changed after moving your app to Rust and running it for a year.

But that's fair; if adding open-telemetry to the application is overkill, it might not apply as much to your use case.

Havoc

10 months ago

> LLMs rarely help with a proper solution, as most of the packages are kind of niche.

I find that they’re quite good at converting example api calls between languages for the most part…though haven’t worked in rust for a while

noisy_boy

9 months ago

I keep coming back to learning/practicing Rust (I really like it), but every time the job market seems to have very few jobs where I am. I wonder if I need to look for fully remote Rust jobs? What is the experience of people doing Rust as $MAIN_JOB?

sophacles

9 months ago

Most people doing Rust don't see themselves as Rust coders. They are experts in whatever field they work in, and use Rust. I do network stuff - we don't hire Rust devs, we hire network devs, and if they don't know Rust, we teach them.

Other groups here approach it the same way - hire for domain knowledge, not tool knowledge. This is pretty widespread IME.

I think for "rising" languages it's often this way. I remember back in the 00s people complaining that there were no python jobs, or ruby jobs, even as those languages were exploding in popularity - same with node in the 2010s. I didn't see the same phenomenon with go or C#, i think the difference was those two languages were released by big companies - in go's case it was "google released this, lets use it... it must be good", during a time google was still widely respected. In C#, it was "this is a first class citizen of .Net, and works nicely in our existing ecosystem with MS support". As rust usage continues to grow, I suspect you'll see more and more listings with rust as a keyword, that takes time though.

jpc0

9 months ago

I was hoping to see a good review of Rust that would convert me from Go/C#/Java, but alas, this was not it.

Seems like the writer finally experienced an errors-as-values, statically typed language which doesn't have a ton of legacy in it. Many of the issues OP mentioned (CJS vs MJS, configuration file changes) come from being around for a while; Rust just hasn't had the time in the saddle yet to show how it would deal with that. Don't get me wrong, I'm not saying they won't deal with it well, but Python 2-3, Vue, JS CJS to MJS, etc. all show that a fundamental change to some core part of the language / framework is very tough to navigate, and quite frankly Rust hasn't needed to do that yet.

satvikpendem

9 months ago

Rust simply has the best toolchain dev experience of any ML. OCaml for example is notoriously bad with its toolchain compared to cargo, although OCaml compiles insanely quickly.

dathinab

9 months ago

But Rust isn't an ML; it has ML aspects (and not a few), but that's it.

Many ML patterns do not work well in Rust at all, and trying to force them will lead to a lot of unnecessarily complex code, i.e. hard to write, maintain and potentially even read. And to be clear, I mean Rust-specific issues, not issues due to not being familiar with ML.

The issue is that with ML-style code in Rust you very easily hit some of the corners where Rust isn't yet fully fleshed out, corners you tend to rarely run into if you write non-ML code with some ML aspects instead of ML code.

satvikpendem

9 months ago

True, I should have said ML-like language, not strictly an ML.

pjmlp

9 months ago

As an F# user, I don't think so.

jjtheblunt

9 months ago

F# is great, but what toolchain is your favorite? (I'm in a Rust phase as an old C person for work the last couple of years, so I'm out of date with F#'s world.)

pjmlp

9 months ago

Visual Studio, mostly. Otherwise go for Rider.

jjtheblunt

9 months ago

Visual Studio here too, having left Rider. Thanks.

satvikpendem

9 months ago

F# is pretty good, but the language has so few users, while Rust strikes the balance of a best-in-class toolchain and a growing user base.

pjmlp

9 months ago

So is it about the toolchain or about counting users?

Changing the argument when it doesn't fit?

satvikpendem

9 months ago

It's about both; the toolchain and language are used by users, so there's no need for a toolchain (or indeed, any piece of software) that has no users. Or rather, I should amend my earlier point: of the languages that are generally used by people, i.e. are not niche, Rust has the best toolchain. I consider most MLs these days to be niche, for lack of multiple high-profile projects in those languages.

pjmlp

9 months ago

So Rust also has a debugging experience at the same level as Rider and Visual Studio.

The ability to hot load code dynamically, fast compile-debug cycle thanks to a JIT and REPL, in addition to a native code compiler.

Being able to plug into OS events like ETW, Instruments and perf.

Cloud infrastructure tooling for container orchestration.

Being able to be hosted inside SQL Server for writing stored procedures.
