From the article: "Lookup tables are always faster than calculation" - is that true? Maybe in the distant past, but today, with memory so much slower than the CPU, the picture is different. If you're calculating a very expensive function over a small domain, so that the lookup table fits in L1 cache, then I can see it being faster, but you can do a lot of calculating in the time needed for a single main-memory access.
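A quick way to sanity-check this (my own sketch, not from the article; the function names and loop count are mine): compare a 256-entry popcount lookup table, which comfortably fits in L1, against a plain computed version. This is the LUT-friendly case; a table that spills out of cache would tip the comparison the other way.

    /* Sketch: 256-entry LUT (fits in L1) vs. computed popcount. */
    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    static uint8_t lut[256];

    static void build_lut(void) {
        for (int i = 0; i < 256; i++) {
            int n = 0;
            for (int b = i; b; b >>= 1) n += b & 1;
            lut[i] = (uint8_t)n;
        }
    }

    /* Computed version: Kernighan's bit trick, no memory traffic. */
    static int popcount_calc(uint8_t x) {
        int n = 0;
        while (x) { x &= (uint8_t)(x - 1); n++; }
        return n;
    }

    int main(void) {
        build_lut();
        enum { N = 100000000 };
        volatile uint64_t sink = 0;   /* keep the loops from being optimized away */

        clock_t t0 = clock();
        for (uint32_t i = 0; i < N; i++) sink += lut[(uint8_t)i];
        clock_t t1 = clock();
        for (uint32_t i = 0; i < N; i++) sink += popcount_calc((uint8_t)i);
        clock_t t2 = clock();

        printf("lut:  %.2fs\ncalc: %.2fs\n",
               (double)(t1 - t0) / CLOCKS_PER_SEC,
               (double)(t2 - t1) / CLOCKS_PER_SEC);
        return 0;
    }

On my mental model, the LUT wins here precisely because it stays L1-resident; the interesting measurement is repeating it with a table sized past L2.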
Optimizing code on an MMU-less processor versus an MMU-equipped, or even NUMA-capable, processor is a vastly different exercise.
The fact that the author achieves only a 3 to 6 times speedup on a processor running at a clock frequency roughly 857 times higher should have led to the conclusion that old optimization tricks are awfully slow on modern architectures.
To be fair, execution-pipeline optimization still works much the same, but failing to take into account the different layers of cache, the way memory management works, and even how and when actual RAM is queried will only lead to suboptimal code.
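For example (my sketch, not the author's code; the array size and names are arbitrary): the two loops below do the same arithmetic with the same instruction count, yet the strided traversal is typically several times slower once the matrix outgrows the caches, purely because of how the memory hierarchy is queried.

    /* Sketch: identical work, different memory-access order. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N 4096

    int main(void) {
        int *m = calloc((size_t)N * N, sizeof *m);
        if (!m) return 1;
        long long sum = 0;

        clock_t t0 = clock();
        for (int i = 0; i < N; i++)        /* row-major: sequential, cache-friendly */
            for (int j = 0; j < N; j++)
                sum += m[i * N + j];
        clock_t t1 = clock();
        for (int j = 0; j < N; j++)        /* column-major: strided, cache-hostile */
            for (int i = 0; i < N; i++)
                sum += m[i * N + j];
        clock_t t2 = clock();

        printf("row-major:    %.2fs\ncolumn-major: %.2fs (sum=%lld)\n",
               (double)(t1 - t0) / CLOCKS_PER_SEC,
               (double)(t2 - t1) / CLOCKS_PER_SEC, sum);
        free(m);
        return 0;
    }

Nothing about the pipeline changed between the two loops; all of the difference comes from the cache layers the article never mentions.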
The writing style and the length of the paragraphs strongly suggest that this is AI-generated in full.
It's amusing that the writing style is akin to a LinkedIn "what XYZ taught me about B2B sales" post.
I have only one question: does the author know anything about coding ABAP like it's a Z80? I wish that they'd addressed this.