This whole silly post seems to have been written so they can make this statement, which they seem to believe is profound:
"This failure is not surprising, because any machine learning model, no matter how many magic parameters it has, is only capable of executing a fixed pipeline of mathematical operations. In algorithmic complexity terms it is O(1). Such a process can not solve any problem that requires iteration or search, not even a simple algorithm like finding a square root. Certainly it can't tackle the complex logic involved in answering queries about code, where even determining the behavior of a pure boolean expression is known to be in the much harder NP-complete complexity class."
It is neither profound nor even close to correct.
A CPU capable of only executing a fixed pipeline of mathematical operations can easily deal with all the things they talk about. Amazing? No, it just turns out they have confused a whole bunch of unrelated CS concepts to make a huge muddle of a post (mixing algorithmic complexity, decidability, simulation vs explanation, etc).
This is true even if the pipeline is truly fixed (i.e., everything passes through every functional unit), as it was in the past.
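For concreteness, here's a toy sketch (mine, not the post's) of exactly the thing the quote claims is impossible: a fixed pipeline that finds square roots. The number of Newton steps is a constant, independent of the input - no search, no data-dependent control flow - yet it nails the answer to high precision:

```python
def fixed_pipeline_sqrt(x: float, steps: int = 30) -> float:
    """Approximate sqrt(x) with a fixed, input-independent number of
    Newton steps -- a 'fixed pipeline', O(1) in the post's own sense."""
    y = max(x, 1.0)            # crude initial guess
    for _ in range(steps):     # loop bound is a constant, not data-dependent
        y = 0.5 * (y + x / y)  # same operation every time, no branching
    return y
```

The constant step count plays the role of the fixed functional units: every input takes the identical path through the computation.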
The counterpoint to the argument made by the author is simple:
When it comes to determining output, any NN that is universal enough could be taught to approximate a CPU well enough to execute code and determine its output with arbitrary precision. Period. Once you take cost out of it, and we venture into the realm of "what can be done", the author's statement is simply wrong on its face. Even within the realm of the "less abstract", there are lots of formal papers/studies of this with specific NN types, etc. Even for transformers, which people love to beat up on as huge brute-force machines, see, e.g., https://arxiv.org/abs/2107.13163
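To make the "fixed pipeline of mathematical operations evaluating logic" point concrete at the smallest possible scale: a two-layer ReLU network computes XOR exactly, using nothing but matrix multiplies and a pointwise max. (The weights below are picked by hand for illustration; a trained net would do the same job.)

```python
import numpy as np

# Hand-set weights for a two-layer ReLU network that computes a XOR b
# exactly -- a fixed pipeline of matrix ops, no iteration or search.
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])
b1 = np.array([0.0, -1.0])
W2 = np.array([1.0, -2.0])

def xor_net(a: int, b: int) -> int:
    h = np.maximum(0.0, np.array([a, b]) @ W1 + b1)  # hidden layer: [a+b, relu(a+b-1)]
    return int(h @ W2)                               # (a+b) - 2*relu(a+b-1) = a XOR b
```

Scale that idea up (much, much further) and you get networks that emulate adders, ALUs, and eventually CPUs - which is precisely what the universality results formalize.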
You can even throw compilation of every possible programming language we have compilers for right now in there as a pre-step if you want, since we are in the realm of abstract "what is possible".
Heck, you can go further - any of these NNs could be taught to arbitrarily approximate any machine-driven process that can explain code.
That is, if you build a combination static/dynamic analysis tool (or whatever) that explains code, you can theoretically build an NN that approximates that tool. This is easily provable, and directly counter to the author's point.
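As a toy illustration of that distillation idea - everything here is a made-up stand-in, with a fixed numeric function playing the role of the "tool" - you train the model on the tool's own outputs until it reproduces the tool's behavior:

```python
import numpy as np

def teacher(x):
    """Stand-in for the deterministic analysis tool being distilled."""
    return 2.0 * x + 1.0

# Generate (input, tool-output) training pairs.
rng = np.random.default_rng(0)
xs = rng.uniform(-1.0, 1.0, size=256)
ys = teacher(xs)

# Fit a student model to mimic the teacher via plain gradient descent.
w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    pred = w * xs + b
    w -= lr * np.mean(2.0 * (pred - ys) * xs)  # d(loss)/dw
    b -= lr * np.mean(2.0 * (pred - ys))       # d(loss)/db
```

After training, (w, b) lands near (2, 1): the student has learned to approximate the tool. The existence proof for the NN case works the same way in spirit, just without telling you how to actually get there.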
Now, much like the author's nonsense, neither of the above is a particularly useful result in practice[1] - among other reasons, the proofs that it is possible are not constructive: they show such a network must exist, but they don't tell you how to build it :)
That's before you get into cost/etc.
There are plenty of things to bash AI code understanding on that are totally valid criticisms, we don't need to waste our time on strange, wrong ones.
[1] - though nvidia would probably be very happy if you built a 1t-parameter model to try to approximate a $100 intel CPU's results and convinced others to use it.