These calls aren't being passed the DominatorTree, the AssumptionCache, or the correct context instruction (CxtI).
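For reference, this is roughly what a fully populated query looks like; a minimal sketch, assuming the SimplifyQuery interface in llvm/Analysis/InstructionSimplify.h around the time of this patch:

// The query bundles the analyses that value tracking can consult; a call
// site that drops DT/AC or the context instruction silently gets weaker
// simplification.
SimplifyQuery SQ(DL, &TLI, &DT, &AC);
// getWithInstruction() attaches the visited instruction as CxtI, which is
// what allows dominating llvm.assume facts to apply at that point.
const SimplifyQuery Q = SQ.getWithInstruction(&I);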
Can you concoct tests with 'llvm.assume' to show a difference?
I think that raises a question though: this change can make value tracking more effective, but it comes with a non-zero compile-time cost, if I understand computeKnownBits() correctly. Do we want to incur that overhead without a real-world motivation for the benefit?
cc'ing @dberlin @davide @hfinkel
Another question along this line: we call Simplify*Inst() for all instructions as the first step in every visit*() in InstCombine. In most cases, we don't pass a context instruction, but we do for visitICmpInst(). Is that difference intentional?
if (Value *V = SimplifyICmpInst(I.getPredicate(), Op0, Op1,
                                SQ.getWithInstruction(&I)))
  return replaceInstUsesWith(I, V);
vs.
if (Value *V = SimplifyShlInst(Op0, Op1, I.hasNoSignedWrap(),
                               I.hasNoUnsignedWrap(), SQ))
  return replaceInstUsesWith(I, V);
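If the difference is unintentional, the consistent form of the shift case would presumably attach the context instruction as well; a sketch (not taken from this patch, assuming the same SQ member InstCombine already holds):

if (Value *V = SimplifyShlInst(Op0, Op1, I.hasNoSignedWrap(),
                               I.hasNoUnsignedWrap(),
                               SQ.getWithInstruction(&I)))
  return replaceInstUsesWith(I, V);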
I was operating under the assumption that this was an accident. Most other places use the InstCombine-specific wrappers around computeKnownBits(), so they always pass the DominatorTree and AssumptionCache. If there's a reason for these places to be different, that should be commented.
I agree. Consistency is king in this regard, unless we have a specific (commented) reason to deviate; otherwise, it becomes very difficult to reason about what the calls will do. When the original changes were made, all of these calls were swept in, so specific justification should not be required to correct oversights in new/other code.
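For reference, the InstCombine-specific wrappers mentioned above look roughly like the following; a sketch assuming the member names (DL, AC, DT) used in InstCombineInternal.h around this time:

// Wrapper that forwards the pass's cached analyses, so individual call
// sites cannot forget to pass the AssumptionCache and DominatorTree.
void computeKnownBits(const Value *V, KnownBits &Known, unsigned Depth,
                      const Instruction *CxtI) const {
  llvm::computeKnownBits(V, Known, DL, Depth, &AC, CxtI, &DT);
}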