This lets us avoid creating and destroying a CallbackVH every time we
check the cache.
This is good for a 2% e2e speedup when compiling one of the large Eigen
tests at -O3.
FTR, I tried making the ValueCache hashtable one-level, i.e. mapping a
(Value*, BasicBlock*) pair directly to a lattice value, and that didn't
seem to provide any additional improvement. Saving a word in
LVILatticeVal by merging the Tag and Val fields also didn't yield a
speedup.
This type is probably too large to store inline in a DenseMap: I would expect the keys to end up spaced out in memory, making lookups more expensive. Maybe try a separate allocation for the value, like this:
On the other hand, maybe your 2% win comes precisely from avoiding that separate-allocation overhead. Probably worth a quick benchmark either way.