This is still somewhat incomplete.
Overview:
* Latencies are reasonably good {F7345689}
  * There are still some inconsistencies:
    * *Many* of the inconsistencies are noise
    * fp measurements are flaky
    * Non-fp measurements are somewhat flaky too
* NumMicroOps are in reasonably good shape {F7345689}
  * Most of the remaining inconsistencies come from the `Ld` / `Ld_ReadAfterLd` classes
* Actual unit occupation (pipes, `ResourceCycles`) is still undiscovered land; I did not really look into it yet.
  * It is basically a verbatim copy from `btver2`
  * There are many `InstRW` overrides, and there are still inconsistencies left... (see the sketch after this list)
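For reference, a minimal sketch of what the unit-occupation part of such a profile looks like; the resource names, latencies, and cycle counts below are illustrative placeholders in the `btver2` style, not measured values for this target:

```
// Illustrative sketch only: resource names, Latency, NumMicroOps and
// ResourceCycles values are placeholders, not measurements.
def JFPU0 : ProcResource<1>;  // vector/FP pipe 0
def JFPM  : ProcResource<1>;  // FP multiply unit

// A write class that occupies both resources for 2 cycles each.
def JWriteVMUL : SchedWriteRes<[JFPU0, JFPM]> {
  let Latency = 7;
  let NumMicroOps = 1;
  let ResourceCycles = [2, 2];
}

// InstRW override: bind specific instructions to that write class.
def : InstRW<[JWriteVMUL], (instrs VMULPSYrr, VMULPDYrr)>;
```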
What's not here:
* llvm-mca test coverage. I understand how to not just add the coverage, but to actually show the diff against the old scheduling behavior; I did not look into that yet (see the example invocation after this list).
* benchmarks. I'd say it's too soon for any measurements.
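For context, this is roughly the kind of comparison such coverage would show; `<new-cpu>` stands for the CPU this profile adds, and `test.s` is a hypothetical input file:

```
# Hypothetical sketch: run the same snippet through llvm-mca with the old
# (btver2-derived) numbers and the new model, then diff the reports.
llvm-mca -mtriple=x86_64-unknown-unknown -mcpu=btver2 test.s > old.txt
llvm-mca -mtriple=x86_64-unknown-unknown -mcpu=<new-cpu> test.s > new.txt
diff -u old.txt new.txt
```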
To be noted:
I think this is the first new schedule profile produced with the next-gen tools like llvm-exegesis!
llvm-exegesis details:
{F7345685}
{F7345686}
{F7345687}
{F7345689}
{F7345690}
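Roughly, the llvm-exegesis flow behind the data above looks like this (a sketch, not the exact commands used; the file names are placeholders, and `-opcode-index=-1` measures every supported opcode):

```
# Sketch of the measure-then-analyze flow; file names are placeholders.
llvm-exegesis -mode=latency -opcode-index=-1 -benchmarks-file=latencies.yaml
llvm-exegesis -mode=uops    -opcode-index=-1 -benchmarks-file=uops.yaml
# Cross-check the measurements against the checked-in schedule model:
llvm-exegesis -mode=analysis -benchmarks-file=latencies.yaml \
  -analysis-clusters-output-file=clusters.csv \
  -analysis-inconsistencies-output-file=inconsistencies.html
```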