sva_
13 hours ago
The original LinkedIn post is pretty wild. I wonder if he did a fat line of coke before writing that, or if there are actually any concrete plans that have been worked out.
cryptonector
35 minutes ago
| My goal ...
and later:
| ... our ...
I think the goal of rewriting every bit of C/C++ was Galen's, not Microsoft's.
hrdwdmrbl
12 hours ago
As a goal for 2030, it doesn’t seem that wild. Shoot for the moon
monocasa
12 hours ago
Rewriting Microsoft's 10s if not 100s of millions of lines of native code in four years doesn't sound that wild to you?
bodash
7 hours ago
Windows’ downfall will finally give rise to the Linux desktop; we’re already seeing the trend in how popular and well received Omarchy is.
xvector
12 hours ago
Just toss an agent at it (tm)
arvigeus
11 hours ago
“Make no mistakes” and it’s a few days of work…
koakuma-chan
12 hours ago
No? They will harness the power of AGI agents to rewrite everything in Rust. Sounds good to me.
ronsor
12 hours ago
> AGI
That thing we don't have yet?
koakuma-chan
12 hours ago
No, we have that. The thing we don't have is ASI.
wtfwhateven
12 hours ago
Not true.
koakuma-chan
12 hours ago
Sam Altman said it's true.
mcv
6 hours ago
The guy who can't raise a kid without AI? I wouldn't trust his opinion on things like this.
wtfwhateven
11 hours ago
lol
zeristor
9 hours ago
Maybe he looked it up on his AI
monocasa
11 hours ago
One of the least fun things about a hype bubble is that I legitimately can't tell when people are joking anymore.
koakuma-chan
10 hours ago
You should have been able to tell it's sarcasm because I said "harness the power." This phrase is only used in conjunction with bullshit.
monocasa
18 minutes ago
I agree it's mostly only used in conjunction with bullshit, but during a hype bubble a lot of the users of that phrase don't fully realize they're spouting bullshit, and it's used in earnest.
tarsinge
7 hours ago
It’s still wild because it’s mostly useless. Rewriting a few core components might improve security a bit, but otherwise it won’t change anything for end users. This is the typical attractive-but-useless project for bored programmers with no product or business vision.
charcircuit
12 hours ago
> is pretty wild
How is it wild? On social media I kept seeing things like people wrongly assuming the end goal would require manually reading through a million lines of code. It seemed more like people making up reasons to be mad or trying to dunk on the author.
overgard
12 hours ago
LLMs generate lots of security issues and bugs. Just being "Rust" doesn't automatically fix that. Generating that amount of code means no human review. How could this not end in obvious disaster?
bdangubic
12 hours ago
humans generate a lot more security issues and a lot more bugs, how could humans coding not end in obvious disaster…
Bridged7756
3 hours ago
So AI is based on the insecure and buggy human code, but on top of that it can't think for itself? Sure, in 2025... by 2027 it will be coding for us all.
112233
4 hours ago
Do you have a study we can read so we can evaluate the validity of its methods?
If not, here, I can show you my opinion too: no, what you said is completely false.
bdangubic
2 hours ago
do you have a study we can read so we can evaluate the validity of the methods of the millions of SWEs who write code? if not…?
Arainach
12 hours ago
If you're "producing" a million lines of code (that's 50K lines per working day) and not reading them, that's even worse.
monocasa
12 hours ago
My read was rewriting one million lines of code per engineer month using ML to do the heavy lifting.
Which is absolutely batshit. There's no way that can be reviewed properly, even if it's putting all of the review work on all of the other teams.
dchftcs
12 hours ago
At some point velocity will slow down too: figuring out edge cases in production to add or subtract a few lines, or backtracking from a bad change.
gorgoiler
12 hours ago
You’re right about the impossibility of reviewing for style, clarity, and coherence. For correctness though, Windows is famous for being insistent on backwards compatibility over timespans measured in decades and that must surely be automated to the hilt.
As a third-party developer in the late 2000s I remember my boss giving me a CDROM binder (binders?) of every single OS release that Microsoft had ever put out. I assume he’d been given it by his developer-relations rep at Microsoft. My team and I used it to ensure our code worked on every MSDOS/Win* platform we cared to target.
I expect that, internally, the Windows team have crazy amounts of resources to implement the most comprehensive regression testing suite ever created. To that extent, at least, you’d be able to tell if the Rust version did what the old code did even if you didn’t read the code itself.
monocasa
11 hours ago
> For correctness though, Windows is famous for being insistent on backwards compatibility over timespans measured in decades and that must surely be automated to the hilt.
That hasn't been nearly the same goal for decades now.
For instance, Crysis literally won't run on win10 or later anymore.
On top of that, security bugs aren't the kind of thing you can automate away during a rewrite that no one has the bandwidth to actually review.
dmitrygr
12 hours ago
What makes you think any existing recent code added to Windows has been reviewed by anyone? This is the company that broke the start menu and the login screen in two consecutive updates.
monocasa
12 hours ago
I've heard some inside stories from microsofties.
They do still review code, but the first wave of layoffs in 2022 mainly hit principal engineers and above because some bean counters said "oh, these are the engineers that are costing us the most per head", so it's kind of the inmates running the asylum now.
And I'll say that their biggest sin was always that their code from the late 90s on was about 20% too clever for their own good. Kind of goes to that classic quip about how it takes twice your brain power to debug code as it takes to write it, so if you were already maxing out just writing it, you're not smart enough to debug it. That's half of why features seemed to get a 1.0 release, then get replaced with something new rather than iteratively improved (the other half being FAANG-style internal incentive structures).
We're all seeing the effects of them clearing house of the weaponized autism that was barely keeping the wheels on the wagon. They do review, but they don't have the ability to do it properly at scale anymore. Which makes rewriting everything even more batshit.
stackghost
12 hours ago
Also the company whose start menu ads made the interface so laggy their "solution" was to just preload the bloat.
charcircuit
12 hours ago
With this mindset I feel like you would also think bumping a C++ compiler toolchain version is impossible due to all the different changes to code generation that could happen. This is already done today and has similar issues where technically all the code can be affected, but it's not reviewed via a process of manually reading every line.
monocasa
12 hours ago
There's a nearly incalculable difference between bumping a compiler version and rewriting the codebase in a different language.
charcircuit
12 hours ago
A C++ compiler translates C++ to assembly. This project would translate C++ to another language. It's not that different a concept.
dagmx
12 hours ago
It’s significantly more straight forward to go from a higher level to a lower level representation than it is to go between different high level representations.
That’s not to trivialize what a compiler does, but it’s effectively going from a complex form to its building blocks while maintaining semantics.
Changing high level languages introduces fundamentally different semantics. Both can decompose to the same general building blocks, but you can’t necessarily compose them the same way.
At the simplest example, a compiler backend (the part you’re describing) can’t reason about data access rules. That is the domain of the language’s compiler frontend and a fundamental difference between C++ and Rust that can’t just be directly derived.
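To make that concrete, here's a minimal Rust sketch (the function and names are mine, purely illustrative). A C++ routine can freely hold two mutable pointers into the same buffer; a literal Rust translation as two `&mut` borrows of one slice is rejected by the borrow checker, so the rewrite has to recompose the access, e.g. with `split_at_mut`:

```rust
// In C++ you'd just pass two overlapping-lifetime pointers into `buf`.
// Rust forbids two simultaneous `&mut` borrows of the same slice, so the
// idiomatic translation splits the buffer into disjoint halves first.
fn copy_first_half_into_second(buf: &mut [u8]) {
    let mid = buf.len() / 2;
    // `split_at_mut` hands back two non-overlapping mutable sub-slices,
    // which is exactly the aliasing guarantee the borrow checker wants.
    let (first, second) = buf.split_at_mut(mid);
    second[..first.len()].copy_from_slice(first);
}

fn main() {
    let mut buf = vec![1u8, 2, 3, 4, 0, 0, 0, 0];
    copy_first_half_into_second(&mut buf);
    assert_eq!(buf, vec![1u8, 2, 3, 4, 1, 2, 3, 4]);
    println!("{:?}", buf);
}
```

The point being: the translation isn't mechanical, because the target language enforces rules the source language never expressed.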
monocasa
12 hours ago
A compiler isn't using a statistical model of language more complex than anyone could understand with a lifetime of study to do its translation; it adheres to a standard for that translation, and if you're important enough (and Microsoft internal teams are for MSVC), you get a heads up on what specifically is changing so you know where to look for issues.
This is "let's put our postgres database on the blockchain because I think blockchain is cool" level of crap that you see at peak bubble.
overgard
12 hours ago
Compilers are deterministic.
Krssst
12 hours ago
There is a C++ standard that everyone writing C++ code follows, and newer versions are usually compatible with one another regardless of toolchain version. The behavior of the toolchain should not change. Worst case, you can use deterministic, reliable tools to automatically detect problematic locations if there really is a behavior change (compiler warnings/errors, for example).
AI code generation is not deterministic and has no guarantee of behavior, thus requires review unless incorrect code is acceptable.
charcircuit
12 hours ago
> AI code generation is not deterministic
AI doesn't have to be the thing that generates the final code, or you could require some kind of proof of equivalence to verify the code that was generated.
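A cheap stand-in for a full proof of equivalence is differential testing: run the legacy routine and the rewrite side by side over many inputs (or, for small enough domains, all of them). A hypothetical Rust sketch, where `legacy_checksum` stands in for an FFI call into the old C/C++ code (both functions here are made up for illustration):

```rust
// Stand-in for the legacy C/C++ routine (imagine this behind an FFI call).
fn legacy_checksum(data: &[u8]) -> u32 {
    data.iter()
        .fold(0u32, |acc, &b| acc.wrapping_mul(31).wrapping_add(b as u32))
}

// The rewritten version whose behavior we want to confirm matches.
fn rewritten_checksum(data: &[u8]) -> u32 {
    let mut acc: u32 = 0;
    for &b in data {
        acc = acc.wrapping_mul(31).wrapping_add(b as u32);
    }
    acc
}

fn main() {
    // Exhaustively compare on all two-byte inputs; a real harness would
    // fuzz longer inputs and exercise real OS entry points instead.
    for a in 0u8..=255 {
        for b in 0u8..=255 {
            let input = [a, b];
            assert_eq!(legacy_checksum(&input), rewritten_checksum(&input));
        }
    }
    println!("all 65536 two-byte inputs agree");
}
```

This only demonstrates agreement on the inputs you tried, which is the catch: it's evidence, not a proof, and actual formal equivalence checking across languages is far harder.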