(x&1)==1 is not canonical, and we already canonicalize it to (x&1)!=0, but that excludes it from a few folds, which eventually leads to worse codegen.
This essentially fixes a regression introduced when we started to canonicalize add/xor reductions with an i1 element type to a parity check.
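For context, a standalone sanity check (plain C++, not part of this patch) of the two facts the summary relies on: testing the low bit with `== 1` and with `!= 0` is the same predicate, and an xor-reduction over i1 elements is a parity (odd-popcount) check.

```cpp
// Standalone illustration, not LLVM code: the low-bit predicates and the
// xor-reduction/parity equivalence behind the canonicalization.
#include <cassert>
#include <cstdint>

static bool lowBitEq1(uint32_t X) { return (X & 1) == 1; }
static bool lowBitNe0(uint32_t X) { return (X & 1) != 0; }

// Xor-reducing the bits of a mask (a stand-in for the i1 vector elements).
static bool xorReduceBits(uint32_t Mask) {
  bool Acc = false;
  for (int I = 0; I < 32; ++I)
    Acc ^= ((Mask >> I) & 1) != 0;
  return Acc;
}

// Parity via popcount: true iff an odd number of bits is set.
static bool popcountIsOdd(uint32_t Mask) {
  int Count = 0;
  for (; Mask; Mask &= Mask - 1)
    ++Count;
  return (Count & 1) != 0;
}

int main() {
  for (uint32_t X = 0; X < 4096; ++X) {
    assert(lowBitEq1(X) == lowBitNe0(X));         // (x&1)==1  <=>  (x&1)!=0
    assert(xorReduceBits(X) == popcountIsOdd(X)); // xor-reduce <=> parity
  }
  return 0;
}
```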
Diff Detail
- Repository: rG LLVM Github Monorepo
Unit Tests
Time | Test
---|---
34,460 ms | x64 debian > libFuzzer.libFuzzer::entropic-scale-per-exec-time.test
Event Timeline
llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp |
---|---
3875 | Yes, and I've guarded against infinite looping via `if (N1C->isOne() || Op0 != OrigOp0)` (see the sketch below).
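For illustration, a toy sketch of that anti-looping guard. Only the quoted condition comes from the comment above; the surrounding names and structure are made up, and the real code lives in llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp rather than plain C++. The point is that the fold only fires when the constant is one or the renormalized operand actually differs from the original, so it never recreates the exact node it started from for the combiner to revisit.

```cpp
// Toy model of the progress guard; only the guard condition is from the
// review, everything else (SetCC, renormalize) is a made-up stand-in.
struct SetCC {
  int Op0;  // left-hand operand
  int N1C;  // right-hand constant
};

// Stand-in for whatever renormalization the fold applies to Op0; it may or
// may not produce a different operand.
static int renormalize(int Op0) { return Op0; }

static bool trySimplify(SetCC &N) {
  int OrigOp0 = N.Op0;
  int Op0 = renormalize(OrigOp0);
  // Progress guard: proceed only if the constant is one or renormalization
  // actually changed the operand; otherwise we would recreate the node we
  // started from and the combiner would revisit it forever.
  if (N.N1C == 1 || Op0 != OrigOp0) {
    N.Op0 = Op0;
    N.N1C = 0;     // stand-in for the "== 1" form being rewritten away
    return true;   // node changed, so re-running this fold terminates
  }
  return false;    // no progress; bail out
}

int main() {
  SetCC N{/*Op0=*/42, /*N1C=*/0};
  // With N1C != 1 and an unchanged operand, the guard refuses the rewrite.
  return trySimplify(N) ? 1 : 0;
}
```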
Actually, let's not.
llvm/test/CodeGen/X86/parity-vec.ll |
---|---
66 | It's similar in a way; here we need `(x&1)^1` or `x^1`.
Does this renormalization happen only when VT doesn't match Op0.getValueType()?
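As a quick bit-level check of the two forms mentioned in the inline comment above (plain C++, independent of parity-vec.ll): when only the low bit of the result is consumed, `(x&1)^1` and `x^1` are interchangeable, and both compute the inverted low-bit test.

```cpp
// Plain-C++ check, independent of the test file: (x & 1) ^ 1 versus x ^ 1
// when only the low bit of the result matters.
#include <cassert>
#include <cstdint>

int main() {
  for (uint32_t X = 0; X < 4096; ++X) {
    uint32_t A = (X & 1) ^ 1;  // mask first, then flip the low bit
    uint32_t B = (X ^ 1) & 1;  // flip the low bit, mask afterwards
    assert(A == B);
    assert(A == ((X & 1) == 0 ? 1u : 0u));  // i.e. the inverted low-bit test
  }
  return 0;
}
```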