simonw, I find it almost shocking that you had the chance to talk directly with the engineer who built this, and even when he said things that directly contradict what Cursor's own CEO claimed, you didn't push back one iota.
Is the takeaway here that it's fine for a CEO to claim "it even has a custom JS VM!" on Twitter/X, and then for the engineer to explain afterwards that "The JavaScript engine isn’t working yet" and "the agents decided to pause it"? Not a single pushback on this very obvious contradiction? This is just one example of many. And again, since it apparently needs repeating: no, nobody thinks this was supposed to rival Chrome; what a trite way of trying to change the narrative.
I understand you don't want to spook potential future interviewees, but damn if it didn't feel like you were suddenly trying to defend Cursor here instead of being curious about what actually happened. It doesn't feel curious; it feels like we're all giving up the fight against needless hype, exaggeration and the degradation of quality.
What happened to balanced perspectives, where we don't just take people at their word, and when we notice something is off, we bring it up?
On a separate note, I actually emailed Wilson Lin too, asking if I could ask him some questions about it. While he initially accepted, I never received any answers. I'm glad you were able to get someone from Cursor to clarify things a bit at least, even though we're still just scratching the surface. I just wish we had a bit more integrity in the ecosystem and community, I guess.
> FastRender may not be a production-ready browser, but it represents over a million lines of Rust code, written in a few weeks, that can already render real web pages to a usable degree
I feel that we continue to miss the forest for the trees. Writing (or generating) a million lines of Rust should not count as an achievement in and of itself. What matters is whether those lines build, function as expected (especially in edge cases), and perform decently. As far as I can tell, AI has not yet been shown to deliver on those three things.
100%. An equivalent situation would be:
Company X does not have a production-ready product, but they have thousands of employees.
I guess it could be a strange flex about funding, but in general it would be a bad signal.
Line count also becomes a less useful metric because LLM-generated code tends to be unnecessarily verbose.
Makes me wonder what would happen if you introduce cyclomatic complexity constraints in your agents.md file.
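Purely as a sketch of what that could look like in a Rust codebase (nothing in the article suggests FastRender does this): clippy's closest built-in check is cognitive_complexity, which replaced its old cyclomatic_complexity lint. It's allow-by-default, so you'd promote it to a hard error and pick a threshold, forcing the agents to refactor oversized functions instead of growing them:

    // main.rs - hypothetical: make the complexity lint a hard error so agents
    // must restructure any function that exceeds the limit.
    // The numeric limit would live in clippy.toml, e.g.
    //     cognitive-complexity-threshold = 15
    // and the agents' instructions would require cargo clippy --all-targets
    // to stay green before committing.
    #![deny(clippy::cognitive_complexity)]

    fn main() {}

Whether agents respond better to a machine-checkable gate like that than to a prose instruction in agents.md is an open question.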
Is this the project announced a week or two ago by an AI company claiming they had built a browser but it turned out to be a crappy wrapper around Servo that didn’t even build? Or is this another one? I thought it was Anthropic but this says Cursor.
The first paragraph of the article -
> Last week Cursor published Scaling long-running autonomous coding, an article describing their research efforts into coordinating large numbers of autonomous coding agents. One of the projects mentioned in the article was FastRender, a web browser they built from scratch using their agent swarms. I wanted to learn more so I asked Wilson Lin, the engineer behind FastRender, if we could record a conversation about the project. That 47 minute video is now available on YouTube. I’ve included some of the highlights below.
It is the same project, but my impression is that HN exaggerated many of the issues with it.
For example:
- They did eventually get it to build. Unknown to me: were the agents working on it able to build it, or were they blindly writing code? The codebase can't have been _that_ broken since it didn't take long for them to get it buildable, and they'd produced demo screenshots before that.
- It had a dependency on QuickJS, but also a homegrown JS implementation; apparently (according to this post) QuickJS was intended as a placeholder. I have no idea which, if either, ended up getting used, though I suspect it may not even matter for the static screenshots they were showing off (the sites may not have required JS to render what's shown).
- Some of the dependencies (like Skia and HarfBuzz) are libraries that other browsers also depend on and are not part of browser projects themselves.
- Other dependencies probably shouldn't have been used, but they only represent a fraction of what a browser has to do.
However…
What I don't know, and seemingly nobody else knows, is how functional the rest of the codebase is. It's apparently very slow and fails to render most websites. But is this more like "lots of bugs, but a solid basis", or is it more like "cargo-culted slop; even the stuff that works only works by chance"? I hope someone investigates.
> were the agents working on it able to build it, or were they blindly writing code?
The project was able to build the whole time, and the agents were constantly compiling it using the Rust compiler and fixing any compile errors as they occurred.
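As a purely hypothetical illustration of that kind of loop (this is not Cursor's actual tooling), an orchestrator only needs cargo's machine-readable diagnostics to know what to hand back to a model:

    use std::process::Command;

    // Sketch of "compile, collect errors, fix, repeat": run cargo check with
    // JSON output and crudely count the error diagnostics an agent would then
    // be asked to fix.
    fn main() {
        let output = Command::new("cargo")
            .args(["check", "--message-format=json"])
            .output()
            .expect("failed to run cargo check");

        let errors: Vec<String> = String::from_utf8_lossy(&output.stdout)
            .lines()
            // cargo emits one compact JSON object per line; error diagnostics
            // contain a "level":"error" field
            .filter(|line| line.contains("\"level\":\"error\""))
            .map(str::to_owned)
            .collect();

        // A real orchestrator would feed these to the model, apply its patch,
        // and loop until the list comes back empty.
        println!("{} compile errors remaining", errors.len());
    }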
The GitHub CI builds were failing, and when they first opened up the repo, people incorrectly assumed that meant the code didn't compile at all.
The biggest problem with the repo when they first released it was that there were no build instructions for end-users, so it was hard to try out. They fixed that within 24 hours of the initial release.
> What I don't know, and seemingly nobody else knows, is how functional the rest of the codebase is.
It's functional enough to render web pages - you can build it and run it yourself to see that. I have some screenshots from trying it out here: https://simonwillison.net/2026/Jan/19/scaling-long-running-a...
That said, it's very much intended as a research project into running parallel coding agents, as opposed to a serious browser intended for end users. At the end of my post I compare it to "hello world" - I think "build a browser" may be the "hello world" of massively parallel coding agent systems, which I find quite amusing.
If you were looking for a good long-term AI benchmark, “build me a Web browser” should last you for a while.
I wonder if we're heading toward a situation where agent-written code functions as something distinct, like bytecode.
I welcome living in this absurd time where monkeys with typewriters producing a work of Shakespeare has become a reality.
It took 2M years for the monkeys to produce typewriters and Shakespeare. Now the task is to make monkeys that can do the same in a time many orders of magnitude shorter.
"Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should."
I'm curious what the energy/environmental/financial impact of this "research" effort is: cobbling together a browser with an AI model that was trained on the freely available source code of existing browsers.
I can't imagine this browser being used for anything beyond tinkering or as a curiosity - so is the purpose of the research just to see whether you can run an absurd number of agents simultaneously and produce something that somewhat works?
I'd love to see what happens if you hook this renderer up to AFL++...
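For anyone who wants to try: the afl crate (driven by cargo-afl, which builds on AFL++) makes the wiring pretty small. A minimal sketch - fastrender::parse_and_layout is a made-up name standing in for whatever HTML entry point the crate actually exposes:

    // Hypothetical fuzz target for the renderer using the afl crate's fuzz!
    // macro; substitute the real parsing/rendering entry point for
    // fastrender::parse_and_layout.
    fn main() {
        afl::fuzz!(|data: &[u8]| {
            if let Ok(html) = std::str::from_utf8(data) {
                // We only care about panics/hangs here, not rendered output.
                let _ = fastrender::parse_and_layout(html);
            }
        });
    }

Build with cargo afl build, then point cargo afl fuzz at a corpus of small HTML files; any panics or hangs in the parsing and layout code should surface fairly quickly.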
I'm going to propose a law for these AI orchestration systems, based on Greenspun's Tenth Rule:
> Any sufficiently complicated AI orchestration system contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Gas Town.
Isn't it the other way around? Gas Town is an ad hoc, informally-specified, bug-ridden, slow implementation of other AI orchestration systems.
That statement is a bit early, no?