This is a "misc" patch, making various tweaks to speed InstrRefBasedLDV up. I've been testing it on the compile-time-tracking project website, and it gets back something like 2% of instructions retired. There's still a non-trivial cost of instruction referencing, this patch eases the matter. Things done:
- Adjusting the default sizes of some densemaps / smallvectors, and calling reserve() early when we know how large they should be,
- Allowing LocIdx (the sequential location numbers) to be densemap keys (see the sketch after this list),
- Allowing ValueIDNums to be densemap keys,
- Adjusting the representation of ValueIDNum so that its bitfields share a union with a uint64_t.
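As a rough illustration of the densemap-key and reserve() points above (this is a hedged sketch with hypothetical names, not the code from the patch): a thin wrapper type becomes usable as a densemap key once a DenseMapInfo specialization supplies empty/tombstone sentinels, a hash, and equality, and reserving the map is then a one-liner when the final size is known.

```cpp
#include "llvm/ADT/DenseMap.h"
#include "llvm/ADT/DenseMapInfo.h"

// Illustrative stand-in for LocIdx: a thin wrapper around a sequential
// location number.
class LocIdxLike {
  unsigned Location;
public:
  explicit LocIdxLike(unsigned L) : Location(L) {}
  unsigned raw() const { return Location; }
  bool operator==(const LocIdxLike &Other) const {
    return Location == Other.Location;
  }
};

namespace llvm {
// DenseMapInfo specialization: reserve two sentinel values for the empty and
// tombstone keys, and hash/compare via the underlying integer.
template <> struct DenseMapInfo<LocIdxLike> {
  static inline LocIdxLike getEmptyKey() { return LocIdxLike(~0U); }
  static inline LocIdxLike getTombstoneKey() { return LocIdxLike(~0U - 1); }
  static unsigned getHashValue(const LocIdxLike &L) { return L.raw(); }
  static bool isEqual(const LocIdxLike &A, const LocIdxLike &B) {
    return A == B;
  }
};
} // namespace llvm

// With the specialization in place, the map can be reserved up front when the
// number of locations is known, avoiding repeated regrowth.
void populate(unsigned NumLocs) {
  llvm::DenseMap<LocIdxLike, unsigned> LocToValue;
  LocToValue.reserve(NumLocs);
  for (unsigned I = 0; I < NumLocs; ++I)
    LocToValue[LocIdxLike(I)] = 0;
}
```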
The last point is the most interesting: ValueIDNum is supposed to be a value type, but clang doesn't condense the comparison functions down to a single value comparison. Maybe I was doing something wrong, but explicitly having the bitfields as part of a union lets us do it manually.
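A minimal sketch of the idea, assuming illustrative field names and widths rather than the exact layout in the patch: the bitfields and a single uint64_t share storage, so equality and ordering can be written as one integer comparison, and the same 64-bit view gives a cheap hash for a DenseMapInfo specialization.

```cpp
#include <cstdint>

// Sketch only: field names and widths are illustrative, not the real layout.
class ValueIDNumLike {
  union {
    struct {
      uint64_t BlockNo : 20; // defining machine basic block
      uint64_t InstNo : 20;  // instruction number within that block
      uint64_t LocNo : 24;   // location the value is defined in
    } s;
    uint64_t Value;          // whole-value view of the same 64 bits
  } u;

public:
  ValueIDNumLike(uint64_t Block, uint64_t Inst, uint64_t Loc) {
    u.Value = 0;
    u.s.BlockNo = Block;
    u.s.InstNo = Inst;
    u.s.LocNo = Loc;
  }

  uint64_t asU64() const { return u.Value; }

  // One 64-bit comparison instead of hoping the compiler folds three
  // bitfield comparisons together.
  bool operator==(const ValueIDNumLike &Other) const {
    return u.Value == Other.u.Value;
  }
  bool operator<(const ValueIDNumLike &Other) const {
    return u.Value < Other.u.Value;
  }
};
```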
Replacing other containers with dense ones is fairly straightforward. I have to introduce an intermediate variable during one assignment, because DenseMap[A] = DenseMap[B] isn't always safe: either operator[] call can cause the container to reallocate, invalidating the reference returned by the other.
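A hedged illustration of that hazard, using a hypothetical map rather than code from the patch:

```cpp
#include "llvm/ADT/DenseMap.h"

// Copy the value stored under key B into key A.
void copyValue(llvm::DenseMap<unsigned, unsigned> &Map, unsigned A, unsigned B) {
  // Risky: writing `Map[A] = Map[B];` directly. If one of the operator[]
  // calls inserts a new entry, the map may grow and reallocate, leaving the
  // reference returned by the other call dangling when the assignment runs.

  // Safe: read the value into a local first, then do the insert/update.
  unsigned Tmp = Map[B];
  Map[A] = Tmp;
}
```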
Strangest of all: replacing the std::map in TransferTracker::loadInLocs with a DenseMap causes a performance loss, so I haven't done it. I still need to dig into why std::map is faster in this case.