The type-shrinking logic in reduction detection, although narrow in scope, is also rather ad hoc, which has led to bugs (e.g., PR35734). This patch modifies the approach to rely on the demanded bits and value tracking analyses, if available. We currently perform type-shrinking separately for reductions and other instructions in the loop. Long-term, we should probably think about computing minimal bit widths in a more complete way for the loops we want to vectorize.
Reference: https://bugs.llvm.org/show_bug.cgi?id=35734
(I don't know DemandedBits (DB) or ValueTracking in depth, so maybe I'm missing something.)
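For concreteness, here is my mental model of the type-shrinking path, written as a sketch. The shape and the names (computeRecurrenceType, Exit, DB, AC, DT, and the placement of the power-of-2 rounding) are my reading of the patch, not the exact diff:

```cpp
// Sketch of my understanding of the patch's type-shrinking path; details
// (especially where the rounding happens) are assumptions, not the diff.
static Type *computeRecurrenceType(Instruction *Exit, DemandedBits *DB,
                                   AssumptionCache *AC, DominatorTree *DT) {
  const DataLayout &DL = Exit->getModule()->getDataLayout();
  uint64_t MaxBitWidth = DL.getTypeSizeInBits(Exit->getType());
  if (DB) {
    // Width of the bits demanded from Exit, rounded up to a power of two.
    APInt Mask = DB->getDemandedBits(Exit);
    MaxBitWidth = Mask.getBitWidth() - Mask.countLeadingZeros();
    if (!isPowerOf2_64(MaxBitWidth))
      MaxBitWidth = NextPowerOf2(MaxBitWidth);
  }
  if (MaxBitWidth == DL.getTypeSizeInBits(Exit->getType()) && AC && DT) {
    // Demanded bits couldn't narrow the type; fall back to value tracking.
    unsigned NumSignBits = ComputeNumSignBits(Exit, DL, 0, AC, nullptr, DT);
    MaxBitWidth = DL.getTypeSizeInBits(Exit->getType()) - NumSignBits;
    if (!isPowerOf2_64(MaxBitWidth))
      MaxBitWidth = NextPowerOf2(MaxBitWidth);
  }
  return IntegerType::get(Exit->getContext(), MaxBitWidth);
}
```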
If I understand correctly, we would try value tracking in the following scenario:
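```cpp
// Hypothetical trace of the sketch above; the concrete widths are made up.
// Exit is an i32, so the original width is 32.
uint64_t MaxBitWidth = 17;          // DB->getDemandedBits(Exit): 17 active bits
MaxBitWidth = NextPowerOf2(17);     // 17 is not a power of 2 -> rounded up to 32
// MaxBitWidth == DL.getTypeSizeInBits(Exit->getType()), so the
// value-tracking branch IS taken.
```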
But we wouldn't in the following one:
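```cpp
// Hypothetical trace again, with Exit still an i32, but where the demanded
// width is already a power of two.
uint64_t MaxBitWidth = 16;          // DB->getDemandedBits(Exit): 16 active bits
// isPowerOf2_64(16) is true, so no rounding; MaxBitWidth != 32 and the
// value-tracking branch is NOT taken, even if ComputeNumSignBits could have
// narrowed the width further (e.g., to 8).
```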
Is this the expected behavior? In other words, if DB returns a width narrower than the original one and it's later rounded up to the original width (first scenario), could value tracking return an even narrower width?
If so, shouldn't we always try value tracking, even when MaxBitWidth != DL.getTypeSizeInBits?
Otherwise, shouldn't we skip value tracking in those cases (the first scenario)? We could then invoke isPowerOf2_64 only once, at the end of the function.
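For instance (again only a sketch of the suggested restructuring, using the same assumed names as above):

```cpp
uint64_t MaxBitWidth = DL.getTypeSizeInBits(Exit->getType());
if (DB) {
  // Raw demanded-bits width; no rounding yet, so a genuinely narrower
  // result is not rounded back up to the original width.
  APInt Mask = DB->getDemandedBits(Exit);
  MaxBitWidth = Mask.getBitWidth() - Mask.countLeadingZeros();
}
if (MaxBitWidth == DL.getTypeSizeInBits(Exit->getType()) && AC && DT) {
  // Only reached when demanded bits couldn't narrow the type at all.
  unsigned NumSignBits = ComputeNumSignBits(Exit, DL, 0, AC, nullptr, DT);
  MaxBitWidth = DL.getTypeSizeInBits(Exit->getType()) - NumSignBits;
}
// Round up to a power of two exactly once, at the end.
if (!isPowerOf2_64(MaxBitWidth))
  MaxBitWidth = NextPowerOf2(MaxBitWidth);
```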