This is still somewhat incomplete.
Overview:
* Latencies are good {F7354935}
* All of the remaining inconsistencies //appear// to be noise / flaky measurements.
* `NumMicroOps` are reasonably good {F7354949}
* Most of the remaining inconsistencies are from `Ld` / `Ld_ReadAfterLd` classes
* Actual unit occupation (pipes, `ResourceCycles`) is undiscovered land; I did not really look there.
The pipe assignments are basically a verbatim copy of `btver2`.
* Many `InstRW` overrides (see the sketch right after this list for the general shape of such an entry). And there are still inconsistencies left...
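For context, a single per-instruction override in the model looks roughly like the sketch below. This is purely illustrative; the write class, the resource names, and the instruction are hypothetical, not copied from the patch:

```
// Hypothetical per-instruction override (illustrative names only).
def PdWriteFOOrr : SchedWriteRes<[PdEX0, PdEX1]> {
  let Latency = 4;              // checked by the latency measurements above
  let NumMicroOps = 2;          // checked by the uops measurements above
  let ResourceCycles = [1, 2];  // pipe occupation -- currently inherited verbatim from btver2
}
def : InstRW<[PdWriteFOOrr], (instrs FOO32rr)>;
```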
What's not here:
* llvm-mca test coverage. I understand how to not just add the coverage here, but to actually show a diff; I did not look into that yet. (A rough sketch of what such a test could look like follows after this list.)
* benchmarks. I'd say it's too soon for any measurements.
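To be concrete about the llvm-mca coverage mentioned above, I mean a lit-style test roughly along these lines; the CPU name is a placeholder, and the CHECK lines would be generated with `update_mca_test_checks.py` (this is only a sketch, not part of the patch):

```
# RUN: llvm-mca -mtriple=x86_64-unknown-unknown -mcpu=<new-cpu> -instruction-tables < %s | FileCheck %s
# To actually show a diff, the same input can also be run with -mcpu=btver2
# and the resulting resource/latency tables compared.

addl  %edi, %esi
imull %esi, %edi
```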
To be noted:
I think this is the first new schedule profile produced with the next-gen tools like llvm-exegesis!
llvm-exegesis details:
{F7354931} {F7354935}
{F7354950} {F7354952} {F7354949}
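For reference, the general workflow with these tools is: benchmark the opcodes for latency and uops, then use the analysis mode to cross-check the measurements against the scheduling model. A sketch of such invocations follows; the file names are arbitrary and the exact flag spellings should be double-checked against `llvm-exegesis`'s help output:

```
# Measure latency and uops for the supported opcodes:
llvm-exegesis -mode=latency -opcode-index=-1 -benchmarks-file=latencies.yaml
llvm-exegesis -mode=uops    -opcode-index=-1 -benchmarks-file=uops.yaml

# Cross-check the measurements against the scheduling model:
llvm-exegesis -mode=analysis -benchmarks-file=latencies.yaml \
    -analysis-clusters-output-file=latency-clusters.csv \
    -analysis-inconsistencies-output-file=latency-inconsistencies.html
```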