JSIR: A High-Level IR for JavaScript

68 points, posted 12 hours ago
by nnx

20 Comments

fg137

2 hours ago

It's funny they bother to bring up the half-dead "Google Closure Compiler" as an example.

And my dumb brain still doesn't understand how an IR is "better" than an AST after reading this post. Current AST-based JS tools work reasonably well, and it's not clear to me how introducing JSIR helps tool authors or downstream users, given all the roadblocks mentioned at the end.

100ms

an hour ago

Why is it half dead? They only jettisoned the ancient support library; the compiler itself AFAIK remains best in class and has commits on GitHub as of 15 hours ago.

jhavera

7 hours ago

Interesting timing. We have been working on something that takes the opposite design philosophy. JSIR is designed for high-fidelity round-trips back to source, preserving all information a human author put in. That makes sense when the consumer is a human-facing tool like a deobfuscator or transpiler.

We have been exploring what an IR looks like when the author is an AI and the consumer is a compiler, and no human needs to read the output at all. ARIA (aria-ir.org) goes the other direction from JSIR. No source round-trip, no ergonomic abstractions, but first-class intent annotations, declared effects verified at compile time, and compile-time memory safety.

The use cases are orthogonal. JSIR is the right tool when you need to understand and transform code humans wrote. ARIA is the right tool when you want the AI to skip the human-readable layer entirely.

The JSIR paper on combining Gemini and JSIR for deobfuscation is a good example of where the two worlds might intersect. Curious whether you have thought about what properties an IR should have to make that LLM reasoning more reliable.

oldmanhorton

6 hours ago

> when the author is an AI and the consumer is a compiler, and no human needs to read the output at all.

This seems like a big bet on the assumption that fully autonomous codegen without humans in the loop is imminent if not already present - frankly, I hope you are wrong.

Even if that comes to pass in some cases, I also find it hard to believe that an LLM will ever be able to generate code in any new language at the same level with which it can generate stack overflow-shaped JavaScript and python, because it’ll never have as robust of a training set for new languages.

measurablefunc

4 hours ago

I recently wrote a simple interpreter for a stack based virtual machine for a Firefox extension to do some basic runtime programming b/c extensions can't generate & evaluate JavaScript at runtime. None of the consumer AIs could generate any code for the stack VM of any moderate complexity even though the language specification could fit on a single page.

We don't have real AI & no one is anywhere near anything that can consistently generate code of moderate complexity w/o bugs or accidental issues like deleting files during basic data processing (something I ran into recently while writing a local semantic search engine for some of my PDFs using open source neural networks).

sieve

an hour ago

I am building an assembler+compiler+VM for a python-like statically typed language with monomorphized generics and Erlang-style concurrency. Claude Sonnet/Kimi/Gemini Pro (and even ChatGPT on occasion) are able to handle the task reasonably well because I give them specs for the VM that have been written and rewritten 200+ times to remove any ambiguities and make things as clear as possible.

I go subsystem by subsystem.

Writing the interpreter for a stack VM is as simple as it gets.
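For reference, such an interpreter really is tiny. Here is a sketch with an invented four-opcode instruction set (not the commenter's actual VM), in the spirit of "fits on a single page":

```typescript
// Minimal stack-machine interpreter. The opcode set is hypothetical,
// chosen only to illustrate how little machinery is required.
type Op =
  | { op: "push"; value: number }
  | { op: "add" }
  | { op: "mul" }
  | { op: "dup" };

function run(program: Op[]): number[] {
  const stack: number[] = [];
  for (const instr of program) {
    switch (instr.op) {
      case "push":
        stack.push(instr.value);
        break;
      case "add": {
        const b = stack.pop()!, a = stack.pop()!;
        stack.push(a + b);
        break;
      }
      case "mul": {
        const b = stack.pop()!, a = stack.pop()!;
        stack.push(a * b);
        break;
      }
      case "dup":
        stack.push(stack[stack.length - 1]);
        break;
    }
  }
  return stack;
}

// (2 + 3) * 4
run([
  { op: "push", value: 2 },
  { op: "push", value: 3 },
  { op: "add" },
  { op: "push", value: 4 },
  { op: "mul" },
]);
// → [20]
```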

sheepscreek

11 hours ago

This is exciting stuff!

My interpretation: If the JSIR project can successfully prove bi-directional source-to-MLIR transformation, it could lead to a new crop of source-to-source compilers across different languages (as long as they can be lowered to MLIR and back).

Imagine transmorphing Rust to Swift and back. Of course you'd still need to implement or shim any libraries used in the source language. This might help a little bit with C++ to Rust conversions, as more optimizations and analysis would now be possible at the MLIR level. Though I wouldn't expect unsafe code to magically become safe without some manual intervention.

jeswin

4 hours ago

For tsonic (https://github.com/tsoniclang/tsonic) which is trying to convert TS to C# and then to native binary via NativeAOT, I took almost the opposite tradeoff from JSIR.

JSIR is optimizing for round-trips back to JavaScript source. But in language-to-language conversion the consumer is a backend emitter (C# in my case), so instead of preserving source structure perfectly, my IR preserves resolved semantic facts: types, generic substitutions, overload decisions, package/binding resolution, and other lowering-critical decisions.

I could be wrong, but I suspect transpilers are easier to build if they're lowering-oriented (for specific targets).
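A sketch of what such a "resolved facts" IR node might look like, with everything the emitter needs already decided (field names are invented for illustration, not tsonic's actual IR):

```typescript
// A call-expression node that carries resolved semantic facts rather than
// source structure. All names here are hypothetical.
interface CallIR {
  kind: "call";
  callee: string;            // fully resolved binding, e.g. "System.Console.WriteLine"
  resolvedOverload: string;  // which overload the checker already picked
  typeArgs: string[];        // generic substitutions, already applied
  args: ExprIR[];
  resultType: string;        // no inference left for the backend to redo
}

type ExprIR =
  | CallIR
  | { kind: "lit"; value: string | number; resultType: string };

// A backend emitter then becomes a straightforward tree walk,
// because every decision was made upstream.
function emitCSharp(e: ExprIR): string {
  if (e.kind === "lit") return JSON.stringify(e.value);
  return `${e.callee}(${e.args.map(emitCSharp).join(", ")})`;
}
```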

pizlonator

9 hours ago

> Industry trend of building high-level language-specific IRs

"Trend"?

This was always the best practice. It's not a "trend".

sjrd

8 hours ago

It seems to me that there's a certain "blindness" between two compiler worlds.

Compiler engineers for mostly linear-memory languages tend to think only in terms of SSA, and assume it's the only reasonable way to perform optimizations. That shows in this particular article: the only difference between an AST and what they call an IR is that the latter is SSA-based. So for them, something that's not SSA isn't a "serious" data structure in which you can perform optimizations, i.e., it can't be an IR.

On the other side, you have a bunch of languages, typically GC-based for some reason, whose compilers use expression-based structures, either in the form of an AST or a stack-based IR. These compilers don't lack any optimization opportunities compared to SSA-based ones. However, compiler authors in that world (I am one of them) don't always realize the full set of optimizations that SSA compilers perform, even though those optimizations could very well be applied in their AST/stack-based IRs as well.
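The point that these optimizations transfer can be made concrete: constant folding, for instance, works directly on an expression tree with no SSA form in sight (the node shape below is invented for illustration):

```typescript
// Constant folding on a plain expression AST -- no SSA required.
type Expr =
  | { kind: "num"; value: number }
  | { kind: "var"; name: string }
  | { kind: "add"; left: Expr; right: Expr };

function fold(e: Expr): Expr {
  if (e.kind !== "add") return e;
  const left = fold(e.left);
  const right = fold(e.right);
  // If both operands folded to constants, replace the node with a constant.
  if (left.kind === "num" && right.kind === "num") {
    return { kind: "num", value: left.value + right.value };
  }
  return { kind: "add", left, right };
}

// (1 + 2) + x  folds to  3 + x
fold({
  kind: "add",
  left: { kind: "add", left: { kind: "num", value: 1 }, right: { kind: "num", value: 2 } },
  right: { kind: "var", name: "x" },
});
```

The same recursion pattern extends to strength reduction, algebraic simplification, and similar local rewrites; what SSA buys you is mainly easier global def-use reasoning, not these transformations themselves.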

gobdovan

2 hours ago

I think the WASM world is a clear example that bridges the gap you're describing.

You usually compile from SSA to WASM bytecode, and then the JIT (e.g. Cranelift) immediately reconstructs an SSA-like graph IR. If you look at the flow, it's basically:

Graph IR -> WASM (stack-based bytecode) -> Graph IR

So the stack-based IR is used as a kind of IR serialization layer. Then I realized that this works well because a stack-based IR is just a linearized encoding of a dataflow graph. The data dependencies are implicit in the stack discipline, but they can be recovered mechanically. Once you see that, the blindness mostly disappears, since the difference between SSA/graph IRs and expression/stack-based IRs is about how the dataflow (mostly around def-use chains) is represented rather than about what optimizations are possible.
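That "recovered mechanically" step can be sketched in a few lines: simulate the stack with symbolic value names instead of runtime values, and the def-use edges fall out. (A toy two-opcode example, not actual WASM opcodes.)

```typescript
// Recover SSA-style def-use chains from stack bytecode by running the
// stack discipline abstractly: the stack holds value names, not values.
type Instr = { op: "const"; value: number } | { op: "add" };

function toSSA(code: Instr[]): string[] {
  const stack: string[] = []; // symbolic names of values "on the stack"
  const out: string[] = [];
  let n = 0;
  for (const instr of code) {
    const v = `v${n++}`; // fresh name: each instruction defines exactly one value
    if (instr.op === "const") {
      out.push(`${v} = const ${instr.value}`);
    } else {
      const b = stack.pop()!, a = stack.pop()!;
      out.push(`${v} = add ${a}, ${b}`); // operands recovered from stack positions
    }
    stack.push(v);
  }
  return out;
}

toSSA([{ op: "const", value: 1 }, { op: "const", value: 2 }, { op: "add" }]);
// → ["v0 = const 1", "v1 = const 2", "v2 = add v0, v1"]
```

Within a basic block this recovery is purely local; control flow (and WASM's structured blocks) is where the real engineering in a decoder like Cranelift's lives.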

From there it becomes fairly obvious that graph IR techniques can be applied to expression-based structures as well, since the underlying information is the same, just represented differently.

I haven't looked closely enough at JSIR, but from looking around (and from building a restricted Source <-> Graph IR on JS for some code transforms), it basically shows you have at least a homomorphic mapping between expression-oriented JS and graph IR, if not a proper isomorphism (at least in structured and side-effect-constrained subsets).

sjrd

35 minutes ago

Only compilers that already had an SSA-based pipeline transform SSA to stack-based form for Wasm. And several don't like that they have to comply with Wasm's structured control flow (which, granted, is independent of SSA). Compilers that have been using an expression-based IR compile directly to Wasm without an SSA intermediary.

catapart

2 hours ago

Asking as someone who is writing a game engine in JavaScript with the intention to "transpile" the games' source into a C# project for a native runtime: this provides a map that allows automated translation from JavaScript source to C# source, right?

conartist6

an hour ago

They're presenting this under the banner "the need for source to source transformations."

That seems a bit disingenuous given this is not a source-preserving IR! All comments and nonstandard spacing would be completely removed from your code if you gave it a round trip through this format. That doesn't sound like 99.9% source recovery to me...

croes

10 hours ago

hootz

35 minutes ago

Writers really should remember to spell out an acronym in full at least once at the beginning of an article.

tamimio

9 hours ago

Thank you, halfway through the article and I am thinking infrared.

giorgioz

5 hours ago

I also didn't know the acronym IR. A good solution is passing the URL to ChatGPT and asking "what does IR mean in this url: "