Why SSA Compilers?

98 points, posted 5 hours ago
by transpute

37 Comments

rdtsc

4 hours ago

I like the style of the blog, but a minor nit I'd change is to have a definition of what SSA is right at the top. It discusses SSA for quite a while ("SSA is a property of intermediate representations (IRs)", "it's frequently used") and only ten paragraphs down actually defines what SSA is:

> SSA stands for “static single assignment”, and was developed in the 80s as a way to enhance the existing three-argument code (where every statement is in the form x = y op z) so that every program was circuit-like, using a very similar procedure to the one described above.

I understand it's one of those "well, if you don't know what it is, the post is not for you" situations, but I think it's a nice article and could get people who are not familiar with the details interested in it.

> The reason this works so well is because we took a function with mutation, and converted it into a combinatorial circuit, a type of digital logic circuit that has no state, and which is very easy to analyze.

That's an interesting insight, it made sense to me. I only dealt with SSA when decompiling bytecode or debugging compiler issues, and never knew why it was needed, but that sort of made it click.

tylerhou

2 hours ago

Here's a concise explanation of SSA. Regular (imperative) code is hard to optimize because, in general, statements are not pure -- if a statement has side effects, then it might not be behavior-preserving to optimize that statement by, for example:

1. Removing that statement (dead code elimination)

2. Deduplicating that statement (available expressions)

3. Reordering that statement with other statements (hoisting; loop-invariant code motion)

4. Duplicating that statement (can be useful to enable other optimizations)

All of the above optimizations are very important in compilers, and they are much, much easier to implement if you don't have to worry about preserving side effects while manipulating the program.

So the point of SSA is to translate a program into an equivalent program whose statements have as few side effects as possible. The result is often something that looks like a functional program. (See: https://www.cs.princeton.edu/~appel/papers/ssafun.pdf, which is famous in the compilers community.) In fact, if you view basic blocks themselves as functions, phi nodes "declare" the arguments of the basic block, and branches correspond to tail-calling the next basic block with the corresponding values. This has motivated basic block arguments in MLIR.
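To make that concrete, here's a minimal sketch in Python (the function and variable names are made up; the numbered names play the role of SSA temporaries):

    # Original imperative code mutates x:
    #
    #     x = a
    #     if p:
    #         x = x + 1
    #     return x * 2
    #
    # In SSA form, every name is assigned exactly once, and the join
    # point gets a phi that selects between the two versions. Because
    # x0 + 1 is pure, it is safe to compute it unconditionally here --
    # which is exactly the kind of freedom purity buys you.

    def f(a: int, p: bool) -> int:
        x0 = a
        x1 = x0 + 1            # pure, so hoisting it out of the branch is safe
        x2 = x1 if p else x0   # x2 = phi(x1, x0) at the join point
        return x2 * 2

    assert f(5, True) == 12 and f(5, False) == 10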

The "combinatorial circuit" metaphor is slightly wrong, because most SSA implementations do need to consider state for loads and stores into arbitrary memory, or arbitrary function calls. Also, it's not easy to model a loop of arbitrary length as a (finite) combinatorial circuit. Given that the author works at an AI accelerator company, I can see why he leaned towards that metaphor, though.

vidarh

4 hours ago

This post is frankly one of the most convoluted discussions of SSA I've read. There's lots of info there, but I'd suggest going back and looking at a paper on implementing it. I think I first came across SSA in a paper adding it to Wirth's Oberon compiler, and it was much more accessible.

Edit: It was this paper by Brandis and Mössenböck: https://share.google/QNoV9G8yMBWQJqC82

jhallenworld

4 hours ago

Rochus

3 hours ago

Indeed a great book; I even have a paper copy.

The SSA book is also pretty good: https://web.archive.org/web/20201111210448/https://ssabook.g...

mananaysiempre

2 hours ago

I’ve found the SSA book to be... unforgiving in its difficulty. Not in the sense that I thought it to be a bad book but rather in that I was getting the feeling that a dilettante in compilers like me wasn’t the target audience.

Rochus

2 hours ago

Like so many compiler books from academia.

jchw

3 hours ago

Honestly, I think it's just something you either like or don't. If all you were trying to do was understand SSA, I agree this blog post is probably inefficient at that particular task, but often blog posts are entertainment as much as education, so meandering through a bunch of different things along the way is part of the deal. Personally I thought there were a lot of pretty interesting insights that I haven't seen a lot of discussion about in other places, though I will admit I mostly learned about SSA from Wikipedia and from people yelling about compilers online.

strbean

4 hours ago

I learned a bit about SSA in a compiler course. Among many other things, it is crucial for register assignment. You want to know each distinct value that will exist, and the lifetimes of those values, in order to give each a register. Then, if you have more distinct values existing at one time than you have registers, you have to push stuff to the stack.

tylerhou

2 hours ago

It is not critical for register assignment -- in fact, SSA makes register assignment more difficult (see the swap problem; the lost copy problem).

Lifetime analysis is important for register assignment, and SSA can make lifetime analysis easier, but plenty of non-SSA compilers (lower-tier JIT compilers often do not use SSA because SSA is heavyweight) are able to register allocate just fine without it.
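To illustrate the swap problem: two phis at a loop header that exchange values semantically execute as one parallel copy, and naive sequential copies inserted when destroying SSA get it wrong. A toy Python sketch (tuple assignment stands in for the parallel copy):

    # a = phi(a0, b_prev); b = phi(b0, a_prev) means ONE parallel copy:
    a, b = 1, 2
    a, b = b, a        # parallel copy, as the phis intend: a == 2, b == 1
    assert (a, b) == (2, 1)

    # Naive sequential lowering clobbers `a` before it is read:
    a, b = 1, 2
    a = b              # a == 2
    b = a              # b == 2 -- the old value of `a` is lost
    assert (a, b) == (2, 2)

    # A correct out-of-SSA pass must detect the copy cycle and break it
    # with a temporary (or an xchg-style instruction).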

DannyBee

2 hours ago

The motivation and reason it works is also wrong anyway. Like i get it's a gentle intro, but i think there are ways to accomplish that without egregiously rewriting history ;)

Ken Zadeck was my office mate for years, so this is my recollection, but it's also been a few decades, so sorry for any errors :)

The stated reason for why it works is definitely wrong - they weren't even using rewriting forms of SSA, and didn't for a long time. Even the first "fully SSA" compiler (generally considered to be Open64) did not rewrite the IR into SSA.

The reason it works so well is because it enables you to perform effective per-variable dataflow. In fact, this is the only problem it solves - the ability to perform unrestricted per-variable dataflow quickly. This was obvious given the historical context at the time, but less obvious now that it is history :)

In the simplest essence, SSA enables you to follow chains of dataflow for a variable very easily and simply, and that's probably what i would have said instead.

It's true that for simple scalar programs, the variable name reuse that breaks these dataflow chains mostly occurs at explicit mutation points, but this is not always correct depending on the compiler IR, and definitely not correct if you extend it to memory, among other things. It's also not correct at all as you extend the thing SSA enables you to do (per-variable dataflow quickly) to other classes of problems (SSU, SSI, etc).

History wise - this post also sort of implies SSA came out of nowhere and was some revolutionary thing nobody had ever thought about, just sort of developed in the 80's.

In fact, it was a formalization of attempts at per-variable dataflow they had been working on for a while.

I'd probably just say that as a gentle intro, but if you want the rest of history, here you go:

Well before SSA, it was already known that lower bounds on bitvector dataflow (the dominant approach at the time of SSA) were not great. Over the years, it turned out they were much better than initially expected, but in the end, worse than anyone wanted as programs got bigger and bigger. N^2 or N^3 bitvector operations for most problems[1]. Incrementality is basically impossible[2]. They were also hard to understand and debug because of dependencies between related variables, etc.

Attempts at faster/better dataflow existed in two rough forms, neither of which are bound by the same lower bound:

1. Structured dataflow/Interval analysis algorithms/Elimination dataflow - reduce the CFG into various types of regions with a known system of equations, solve the equations, distribute the results to the regions. Only works well on reducible graphs. Can be done incrementally with care. This was mostly studied in parallel to bitvectors, and was thought heavily about before it became known that there was a class of rapid dataflow problems (IE before the late 70's). Understanding the region equations well enough to debug them requires a very deep understanding of the basis of dataflow - lattices, etc.

In that sense it was worse than bitvectors to understand for most people. Graph reducibility restrictions were annoying but tractable on forward graphs through node splitting and whatnot (studies showed 90+% of programs at the time had reducible flowgraphs already), but almost all backwards CFG's are not reducible, making backwards dataflow problems quite annoying. In the end, as iterative dataflow became provably faster/etc, and compilers became the province of more than just theory experts, this sort of died[3]. If you ever become a compiler theory nerd, it's actually really interesting to look at IMHO.

2. Per-variable dataflow approaches. This is basically "solve a dataflow problem for single variable or cluster of variables at a time instead of for all variables at once". There were not actually a ton of people who thought this would ever turn into a practical approach:

a. The idea of solving reaching definitions (for example) one variable at a time seemed like it would be much slower than solving it for all variables at once.

b. It was very non-obvious how to be able to effectively partition variables to be able to compute a problem on one variable (or a cluster of variables) at a time without it devolving into either bitvector-like time bounds or structured dataflow like math complexity.

It was fairly obvious at the time that if you could make it work, you could probably get a much faster incremental dataflow solution.

SSA came out of this approach. Kenny's thesis dealt with incremental dataflow and partitioned variable problems, and proved time bounds on various types of partitioned/clustered variable problems. You can even see the basis of SSA in how it thinks about things. Here, i'll quote from a few parts:

" As the density of Def sites per variable increases, the average size of the affected area for each change will decrease...".

His thesis is (among other things) on a method for doing this that is typical for the time (IE not SSA):

"The mechanism used to enhance performance is raising the density of Def sites for each variable. The most obvious way to increase the Use and Def density is to attempt a reduction on the program flow graph. This reduction would replace groups of nodes by a single new node. This new node would then be labeled with infcrmation that summarized the effects of execution through that group of nodes."

This is because, as i mentioned, figuring out how to do it by partitioning the variables based on dominance relationships was not known - that is literally SSA, and also because Kenny wanted to finish his thesis before the heat death of the sun.

In this sense, they were already aware that if they got "the right number of names" for a variable, the amount of dataflow computation needed for most problems (reaching defs, etc) would become very small, and that changes would be easy to handle. They knew fairly quickly that for the dataflow problems they wanted to solve, they needed each variable to have a single reaching definition (and reaching definitions was well known), etc.

SSA was the incremental culmination of going from there to figuring out a clustering/partitioning of variables that was not based on formally structured control flow regions (which are all-variables things), but instead based on local dataflow for a variable with incorporation of the part of the CFG structure that could actually affect the local result for a given variable. It was approached systematically - understanding the properties they needed, figuring out algorithms that solved them, etc.

Like a lot of things that turn out to be unexpectedly useful and take the world by storm, history later often tries to represent them as a eureka moment.

[1] Bitvectors are assumed to be fixed size, and thus constant cost, but feel free to add another factor of N here if you want.

[2] This is not strictly true, in the sense that we knew how to recompute the result incrementally and end up with a correct result; but doing so provably faster than solving the problem from scratch was not possible.

[3] They actually saw somewhat of a resurgence in the world of GPUs and more general computation graphs, because it all becomes heavily applicable again. However, we have almost always eventually developed easier-to-understand (even if potentially slower, theory-wise) global algorithms and use those instead, because we have the compute power to do it, and this tradeoff is IMHO worth it.

jcranmer

an hour ago

I'm extremely uncertain of my history here, but my recollection is that SSA wasn't seen as practical until the development of the dominance frontier algorithm for inserting phi nodes, which seems to be 1991.

pubby

2 hours ago

I like this article a lot but it doesn't answer the question of "Why SSA?".

Sure, a graph representation is nice, but that isn't a unique property of SSA. You can have graph IRs that aren't SSA at all.

And sure, SSA makes some optimizations easy, but it also makes other operations more difficult. When you consider that, plus the fact that going into and out of SSA is quite involved, it doesn't seem like SSA is worth the fuss.

So why SSA?

Well, it turns out compilers have sequencing issues. If you view compilation as a series of small code transformations, your representation goes from A -> B, then B -> C, then C -> D and so on. At least, that's how it works for non-optimizing compilers.

For optimizing compilers, however, passes want to loop. Whenever an optimization is found, previous passes should be run again with the new input... if possible. The easiest way to ensure this is to make all optimizations input and output the same representation. So A -> B is no good. We want A -> A: a singular representation.

So if we want a singular representation, let's pick a good one right? One that works reasonably well for most things. That's why SSA is useful: it's a decently good singular representation we can use for every pass.
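As a toy sketch (the "pass" here is made up, not a real optimization), the fixpoint loop only works because every pass maps the single representation to itself:

    def dedupe(ir):
        # Toy "pass": drop adjacent duplicate instructions. IR -> (IR, changed)
        out = [ir[0]] if ir else []
        for instr in ir[1:]:
            if instr != out[-1]:
                out.append(instr)
        return out, len(out) != len(ir)

    def run(ir, passes):
        # Because each pass is A -> A, we can rerun them until nothing changes.
        changed = True
        while changed:
            changed = False
            for p in passes:
                ir, c = p(ir)
                changed = changed or c
        return ir

    print(run(["x=1", "x=1", "y=2"], [dedupe]))  # ['x=1', 'y=2']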

zachixer

3 hours ago

Every time I see a clean SSA explainer like this, I’m reminded that the “simplicity” of SSA only exists because we’ve decided mutation is evil. It’s not that SSA is simpler — it’s that we’ve engineered our entire optimization pipeline around pretending state doesn’t exist.

It’s a brilliant illusion that works… until you hit aliasing, memory models, or concurrency, and suddenly the beautiful DAG collapses into a pile of phi nodes and load/store hell.

jcranmer

2 hours ago

SSA isn't about saying mutation is evil. It's about trivializing chasing down def-use chains. In the Dragon Book, essentially the first two dataflow analyses introduced are "reaching definitions" and "live variables"; within an SSA-based IR, these algorithms are basically "traverse a few pointers". There are also some ancillary benefits -- SSA also makes a flow-insensitive algorithm partially flow-sensitive, just by the fact that it's renaming several variables.

Sure, you still need to keep those algorithms in place for being able to reason about memory loads and stores. But if you put effort into kicking memory operations into virtual register operations (where you get SSA for free), then you can also make the compiler faster since you're not constantly rerunning these analyses, but only on demand for the handful of passes that specifically care about eliminating or moving loads and stores.
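A toy sketch of the "traverse a few pointers" point, using a hypothetical Python IR (not LLVM's actual API): each value has exactly one definition, so reaching definitions is a single pointer lookup, and liveness is just "is the use list non-empty?".

    from dataclasses import dataclass, field

    @dataclass(eq=False)  # IR nodes compare by identity
    class Instr:
        op: str
        operands: list = field(default_factory=list)
        uses: list = field(default_factory=list)  # def-use back-edges

        def __post_init__(self):
            for v in self.operands:
                v.uses.append(self)  # maintain def-use links at construction

    a = Instr("const 1")
    b = Instr("const 2")
    s = Instr("add", [a, b])

    assert s.operands[0] is a   # reaching def of s's operand: one pointer
    assert s in a.uses          # `a` is live: its use list is non-empty
    assert not s.uses           # `s` is unused: trivially dead code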

toast0

2 hours ago

> pretending state doesn’t exist.

As a fan of a functional language, immutability doesn't mean state doesn't exist. You keep state with assignment --- in SSA, every piece of state has a new name.

If you want to keep state beyond the scope of a function, you have to return it, or call another function with it (and hope you have tail call elimination). Or, stash it in a mutable escape hatch.
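A trivial sketch of what I mean (Python, made-up names):

    # State without mutation: each "update" returns a new value under a
    # new name, and you keep state by passing it along.
    def deposit(balance: int, amount: int) -> int:
        return balance + amount   # a new balance; the old one still exists

    b0 = 100
    b1 = deposit(b0, 50)
    b2 = deposit(b1, 25)
    assert (b0, b1, b2) == (100, 150, 175)  # every version has its own name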

vidarh

3 hours ago

SSA is appealing because you can defer the load/store hell until after a bunch of optimizations, though, and a lot of those optimizations become a lot easier to reason about when you get to pretend state doesn't exist.

achierius

3 hours ago

You have it backwards. Modern compilers don't use SSA because it's "simpler", we use it because it enables very fast data-flow optimizations (constant prop, CSE, register allocation, etc.) that would otherwise require a lot of state. It doesn't "pretend state doesn't exist", it's actually exactly what makes it possible/practical for the compiler to handle changes in state.

As some evidence for the second point: Haskell is a language that does enforce immutability, but its compiler, GHC, does not use SSA for its main IR -- it uses a "spineless tagless G-machine" graph representation that does, in fact, rely on that immutability. SSA only happens later, once it's lowered to a mutating form. If your variables aren't mutated, then you don't even need to transform them to SSA!

Of course, you're welcome to try something else, people certainly have -- take a look at how V8's move to Sea-of-Nodes has gone for them.

mananaysiempre

2 hours ago

To appreciate the “fast” part, nothing beats reading through LuaJIT’s lj_opt_fold.c, none of which would work without SSA.

Of course, LuaJIT is cheating, because compared to most compilers it has redefined the problem to handling exactly two control-flow graphs (a line and a line followed by a loop), so most of the usual awkward parts of SSA simply do not apply. But isn’t creatively redefining the problem the software engineer’s main tool?..

antonvs

2 hours ago

> take a look at how V8's move to Sea-of-Nodes has gone for them.

Are you implying it hasn't gone well? I thought it bought some performance at least. What are the major issues? Any sources I can follow up on?

rpearl

2 hours ago

SSA form is a state representation. SSA encodes dataflow information explicitly, which simplifies all other analysis passes, including alias analysis.

antonvs

2 hours ago

> the “simplicity” of SSA only exists because we’ve decided mutation is evil.

Mutation is the result of sloppy thinking about the role of time in computation. Sloppy thinking is a hindrance to efficient and tractable code transformations.

When you "mutate" a value, you're implicitly indexing it on a time offset - the variable had one value at time t_0 and another value at time t_1. SSA simply uses naming to make this explicit. (As do CPS and ANF, which is where that "equivalence" comes from.)

If you don't use SSA, CPS, or ANF for this purpose, you need to do something else to make the time dimension explicit, or you're going to be dealing with some very hairy problems.

"Evil" in this case is shorthand for saying that mutable variables are an unsuitable model for the purpose. That's not a subjective decision - try to achieve similar results without dealing with the time/mutation issue and you'll find out why.

torginus

an hour ago

SSA makes me think of a few interesting points:

Considering SSA is a functional language (bar the memory access bits), and most procedural languages can target it, we can say that a lot of procedural code can be compiled down to functional code - so procedural programming is syntactic sugar on top of a functional framework.

Also, functional programmers have a couple of pet peeves - using tail recursion to implement infinite recursion and loops being one. SSA uses a completely different paradigm - phi nodes - to achieve looping. I don't particularly like tail recursion: the naive recursive implementation of fibonacci, for example, is not tail recursive (and thus dangerous, a property a functional program should never have), and making it tail recursive looks very much like a transformation a compiler should do, which goes against the spirit of functional programming.
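For example (a Python sketch; Python itself has no tail-call elimination, so read the second version as pseudocode for a language that does):

    # Naive doubly-recursive fibonacci: not tail recursive, exponential
    # time, and stack-overflow-prone without tail-call elimination.
    def fib(n: int) -> int:
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    # The accumulator-passing rewrite IS tail recursive (and linear time),
    # but the rewrite feels mechanical enough to be the compiler's job.
    def fib_tail(n: int, a: int = 0, b: int = 1) -> int:
        return a if n == 0 else fib_tail(n - 1, b, a + b)

    assert fib(10) == fib_tail(10) == 55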

I think functional programmers should think of other constructs to achieve infinite recursion or looping. I suspect there are infinitely many possible, so we could discover ones that are less inherently dangerous and easier for us mere mortals to reason about.

noelwelsh

4 hours ago

The shocking truth is that SSA is functional! That's right, the compiler for your favourite imperative language actually optimizes functional programs. See, for example, https://www.jantar.org/papers/chakravarty03perspective.pdf. In fact, SSA, continuation passing style, and ANF are basically the same thing.

pizlonator

4 hours ago

No they're not.

The essence of functional languages is that names are created by lambdas, lambdas are first class, and names might not alias themselves (within the same scope, two references to X may be referencing two instances of X that have different values).

The essence of SSA is that names must-alias themselves (X referenced twice in the same scope will definitely give the same value).
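A toy Python illustration of the contrast (made-up names):

    def cell(v):
        return lambda: v          # each call to cell() creates a fresh `v`

    fs = [cell(i) for i in range(3)]
    # Three closures all textually reference `v`, but each `v` is a
    # distinct variable instance holding a different value:
    assert [f() for f in fs] == [0, 1, 2]

    # SSA is the opposite: a name like v3 denotes exactly one value,
    # ever, so every use of v3 sees the same thing (it must-aliases itself).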

There are lots of other interesting differences.

But perhaps the most important difference is just that when folks implement SSA, or CPS, or ANF, they end up with things that look radically different with little opportunity for skills transfer (if you're an SSA compiler hacker then you'll feel like a fish out of water in a CPS compiler).

Folks like to write these "cute" papers that say things that sound nice but aren't really true.

zozbot234

2 hours ago

The whole ad-hoc mechanism of phi-nodes in SSA can be replaced by local blocks with parameters. A block that can take parameters is not that different conceptually from a lambda.
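A toy sketch of that, reusing the branch-and-join shape from earlier in the thread (Python, made-up names):

    # The join block becomes a local function with one parameter; each
    # predecessor "tail-calls" it with the value its phi edge would carry.
    def f(a: int, p: bool) -> int:
        def join(x: int) -> int:   # block parameter replacing phi(x1, x0)
            return x * 2
        if p:
            return join(a + 1)
        return join(a)

    assert f(5, True) == 12 and f(5, False) == 10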

pizlonator

40 minutes ago

Local blocks with parameters is the gross way to do it. The right way to do it is Phi/Upsilon form.

https://gist.github.com/pizlonator/cf1e72b8600b1437dda8153ea...

But even if you used block arguments, it's so very different from a lambda. Lambdas allow dynamic creation of variables, while SSA doesn't. Therefore, in SSA, variables must-alias themselves, while in the lambda calculus they don't. If you think that a block that takes arguments is the same as a lambda because you're willing to ignore such huge differences, then what's the limiting principle that would make you say that two languages really are different from one another?

Remember, all Turing complete languages are "conceptually the same" in the sense that you can compile them to one another.

aatd86

3 hours ago

Whether they're the same thing, I don't know... but a long time ago I remember reading that SSA and CPS were isomorphic, with CPS basically being the form used for functional languages.

edit: actually even discussed on here

CPS is formally equivalent to SSA, is it not? What are advantages of using CPS o... | Hacker News https://share.google/PkSUW97GIknkag7WY

Chabsff

4 hours ago

My experience with SSA is extremely limited, so this might be a stupid question. But does that remain true once memory enters the picture?

The LLVM tutorials I played with (admittedly a long time ago) made it seem like "just allocate everything and trust mem2reg" basically abstracts SSA away pretty completely from a user POV.

PhilipRoman

an hour ago

If you're hell-bent on functional style, you can represent memory writes as ((Address, Value, X) -> X), where X is a compile-time-only linear type representing the current memory state, which can be manipulated like any other SSA variable. It makes some things more elegant, like two reads of the same address naturally being the same value (as long as they're reading from the same memory state). Doesn't help at all with aliasing analysis or write reordering, though, so I don't think any serious compilers do this.
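A toy Python model of the idea (not any real compiler's API; an immutable mapping stands in for the linear memory-state type):

    from types import MappingProxyType

    # Stores consume a memory state and return a NEW one, so memory is
    # itself a single-assignment value flowing through the program.
    def store(mem, addr, val):
        m = dict(mem)
        m[addr] = val
        return MappingProxyType(m)   # fresh, immutable memory state

    def load(mem, addr):
        return mem[addr]

    m0 = MappingProxyType({})
    m1 = store(m0, 0x10, 42)
    m2 = store(m1, 0x20, 7)

    # Two loads from the SAME state are trivially the same value:
    assert load(m2, 0x10) == load(m2, 0x10) == 42
    # Old states persist; aliasing questions across states remain:
    m3 = store(m2, 0x10, 99)
    assert load(m1, 0x10) == 42 and load(m3, 0x10) == 99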

mbauman

3 hours ago

Forget compilers, SSA is an immensely valuable readability improvement for humans, too.

ModernMech

29 minutes ago

That minimap is wild; it duplicates the entire post, but every word is surrounded by a span. I thought maybe it was like a bitmap or something, but no:

  <p><span class="censored">SSA</span> <span class="censored">is</span> <span   class="censored">hugely</span> <span class="censored">popular,</span> <span class="censored">to</span> <span class="censored">the</span> <span class="censored">point</span> <span class="censored">that</span> <span class="censored">most</span> <span class="censored">compiler</span> <span class="censored">projects</span> <span class="censored">no</span> <span class="censored">longer</span> <span class="censored">bother</span> <span class="censored">with</span> <span class="censored">other</span> <span class="censored">IRs</span> <span class="censored">for</span> <span class="censored">optimization</span><sup id="fnref:ghc" role="doc-noteref"><a href="#fn:ghc" class="footnote" rel="footnote"><span class="censored">7</span></a></sup><span class="censored">.</span></p>

KeplerBoy

3 hours ago

Smallest of nitpicks: the depicted multiplier is a 2-bit multiplier. A one-bit multiplier is just an AND gate.