LVI was symbolically executing binary operators only when the RHS was an actual constant, missing the case where we have a ConstantRange for the RHS but not a constant. Fixing this increases the fraction of bits known by LVI from 15.6% to 16.1%.
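To make the idea concrete, here is a minimal sketch (not the actual diff): instead of bailing out when the RHS is not a ConstantInt, evaluate the operator over a ConstantRange for each operand. It assumes the ConstantRange::binaryOp helper from present-day LLVM; evaluateBinOp is an illustrative name, not code from this patch.

```cpp
#include "llvm/IR/ConstantRange.h"
#include "llvm/IR/InstrTypes.h"

using namespace llvm;

// Sketch: compute the result range of a binary operator from the ranges of
// both operands. Pre-patch, LVI only handled a literal ConstantInt RHS;
// post-patch, any RHS with a known range works, e.g. [2,10) + [0,4) -> [2,13).
static ConstantRange evaluateBinOp(BinaryOperator *BO,
                                   const ConstantRange &LHS,
                                   const ConstantRange &RHS) {
  return LHS.binaryOp(BO->getOpcode(), RHS);
}
```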
Code-wise, this looks fine; however, I think we need to assess the impact on compile time here. My strong suspicion is that this will be prohibitively expensive. Unless the increased precision improves the generated code enough to discount the increased runtime, I'm really hesitant to commit this.
Do we have a standard benchmark for measuring the effect on compile time? If I were going to be somewhat careful about it, I'd do a single-threaded build of LLVM using itself, then repeat enough times to get statistical significance. Is that good enough? Overkill? Is there something other than LLVM that I should be building? I have SPEC 2006 handy but am not sure it's widely available.
Looks like this patch makes a -j1 build of LLVM + Clang about 4 seconds slower on a Haswell-E. The total build takes about 87 minutes. The effect is robust across repetitions. I can give more details if anyone wants.
A few more reps of the build completed, and it's more like a 6-second slowdown on a 1.5-hour non-parallel build, i.e. roughly 0.1% overhead.
I'd argue that requiring a constant on the RHS makes LVI quirky and unpredictable.
Gah, thought I'd responded to this one months ago.
LGTM - I was convinced by the lack of an observed compile-time problem and the fact that LVI really should be able to catch this. If it can't, well, we should fix the algorithm so it can.
lib/Analysis/LazyValueInfo.cpp:1070
separating out a helper function getRangeForOperand would help here.
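For reference, a hypothetical shape of the suggested helper. LVILatticeVal, getBlockValue, and the exact signature are assumptions for illustration, not the code as committed; the idea is just to fetch one operand's lattice value in the given block and turn it into a ConstantRange.

```cpp
// Sketch of the suggested helper: convert operand Op of instruction I, as
// known in block BB, into a ConstantRange, conservatively falling back to
// the full set when no usable range is known.
ConstantRange LazyValueInfoImpl::getRangeForOperand(unsigned Op,
                                                    Instruction *I,
                                                    BasicBlock *BB) {
  Value *V = I->getOperand(Op);
  unsigned Width = V->getType()->getIntegerBitWidth();
  LVILatticeVal Val = getBlockValue(V, BB); // assumed accessor
  if (Val.isConstantRange())
    return Val.getConstantRange();
  // No usable range: return the full set of Width-bit values.
  return ConstantRange(Width, /*isFullSet=*/true);
}
```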