> Are you saying the model used to simulate many different cpu models is the same, which makes comparing CPUs harder? Or are you saying the model is not accurate?
Both, but mostly the former. You can view the scheduling models used for a given CPU here (a grep to reproduce the mapping yourself is sketched after the list): https://github.com/llvm/llvm-project/blob/main/llvm/lib/Targ...
* CortexA53Model used for: A34, A35, A320, A53, A65, A65AE
* CortexA55Model used for: A55, R82, R82AE
* CortexA510Model used for: A510, A520, A520AE
* CortexA57Model used for: A57, A72, A73, A75, A76, A76AE, A77, A78, A78AE, A78C
* NeoverseN2Model used for: A710, A715, A720, A720AE, neoverse-n2
* NeoverseV1Model used for: X1, X1C, neoverse-v1, neoverse-512tvb
* NeoverseV2Model used for: X2, X3, X4, X925, grace, neoverse-v2, neoverse-v3, neoverse-v3ae
* NeoverseN3Model used for: neoverse-n3
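The CPU-to-model assignments live in the target's TableGen files, so you can check them yourself. A rough sketch, assuming a local llvm-project checkout (exact file layout varies between LLVM versions):

    $ grep -rn 'CortexA57Model' llvm/lib/Target/AArch64/*.td
    # prints one ProcessorModel definition per CPU name listed above,
    # all pointing at the same shared scheduling model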
It's even worse for Apple CPUs: every one of them, from apple-a7 to apple-m4, uses the same "CycloneModel", which describes a 6-issue out-of-order core from 2013.
There are more fine-grained target-specific feature flags on top, e.g. for instruction fusion, but the base scheduling model often isn't remotely close to the actual processor.
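You can see the effect directly with llvm-mca. A minimal sketch (the input instruction is just a placeholder; any AArch64 snippet works):

    $ echo 'add x0, x1, x2' | llvm-mca -mtriple=aarch64 -mcpu=apple-a7
    $ echo 'add x0, x1, x2' | llvm-mca -mtriple=aarch64 -mcpu=apple-m4
    # both runs report the same dispatch width and resource pressure,
    # because both CPU names resolve to the 2013 CycloneModel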
> It’s an interesting point that the newer neoverse cores use a model with smaller issue width. Are you saying this doesn’t match reality? If so do you have any idea why they model it that way?
Yes. I opened an issue about the Neoverse cores; since then, an independent PR has adjusted the V2 down from 16-wide to a more realistic 8-wide: https://github.com/llvm/llvm-project/issues/136374
Part of the problem is that LLVM's scheduling model can't represent all properties of the CPU.
The issue width for those cores seems to be set to the maximum number of uops the core can execute at once.
If you look at the Neoverse V1 microarchitecture, it indeed has 15 independent issue ports: https://en.wikichip.org/w/images/2/28/neoverse_v1_block_diag...
But notice that it can only decode 8 instructions per cycle (5 if you exclude the MOP cache).
This mismatch exists partly because some operations occupy a port for multiple cycles before it can accept new instructions, so having more execution ports than decode width is still a gain in practice.
The other reason is uop cracking. Complex addressing modes and things like load/store pairs are cracked into multiple uops, which execute on separate ports.
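You can watch the cracking happen in the model by feeding a post-indexed load pair to llvm-mca. A sketch (the exact uop count depends on the scheduling model):

    $ echo 'ldp x0, x1, [x2], #16' | llvm-mca -mtriple=aarch64 -mcpu=neoverse-v1
    # the "#uOps" column in the Instruction Info view shows how many
    # micro-ops the model assigns to the single LDP instruction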
The problem is that LLVM's IssueWidth parameter is used to model both decode and issue width. The execution port count is derived from the ports specified in the scheduling model itself, and those are basically correct.
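In the TableGen sources this is a single knob per model. A paraphrased sketch of the pre-fix Neoverse V2 entry (comments mine; the surrounding fields are omitted):

    $ grep -A2 'def NeoverseV2Model' llvm/lib/Target/AArch64/AArch64SchedNeoverseV2.td
    def NeoverseV2Model : SchedMachineModel {
      let IssueWidth = 16;  // one value standing in for both decode and
                            // issue width; the PR above lowered it to 8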
---
If I had to guess, the reason for all of this is that modeling instruction scheduling doesn't matter all that much for codegen on OoO cores.
The other reason is that just plugging in the "real"/theoretical numbers doesn't automatically result in the best codegen.
It does matter, however, if you use the model to visualize how a core would execute instructions.
The main point I want to make is that you shouldn't run llvm-mca with -mcpu=apple-m4, compare against -mcpu=znver5, and expect any reasonable answers. Be sure to check the source first, so you realize you are actually comparing a scheduling model based on the Apple Cyclone core (2013) against one based on the Zen 4 core (2022).
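Concretely, that comparison would look something like this (placeholder input files, since the two targets need different assembly):

    $ llvm-mca -mtriple=aarch64 -mcpu=apple-m4 kernel_arm.s
    $ llvm-mca -mtriple=x86_64 -mcpu=znver5 kernel_x86.s
    # the first run's numbers come from the 2013 CycloneModel, the
    # second's from a Zen 4-based model (2022): apples to oranges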