I can't see the value in this fold:
// (X ^ C1) | C2 --> (X | C2) ^ (C1&~C2)
as an optimization or canonicalization, so I'm proposing to remove it.
Some examples of where this might fire:
define i8 @not_or(i8 %x) {
  %xor = xor i8 %x, -1
  %or = or i8 %xor, 7
  ret i8 %or
}

define i8 @xor_or(i8 %x) {
  %xor = xor i8 %x, 32
  %or = or i8 %xor, 7
  ret i8 %or
}

define i8 @xor_or2(i8 %x) {
  %xor = xor i8 %x, 33
  %or = or i8 %xor, 7
  ret i8 %or
}
Regardless of whether we remove this fold, we get:
define i8 @not_or(i8 %x) {
  %xor = or i8 %x, 7
  %or = xor i8 %xor, -8
  ret i8 %or
}

define i8 @xor_or(i8 %x) {
  %xor = or i8 %x, 7
  %or = xor i8 %xor, 32
  ret i8 %or
}

define i8 @xor_or2(i8 %x) {
  %xor = or i8 %x, 7
  %or = xor i8 %xor, 32
  ret i8 %or
}
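
For completeness, here's a standalone brute-force check (my own sanity harness, not part of this patch) that the fold is an equivalence over i8 and that it produces exactly the xor constants shown in the 'after' IR:

#include <cassert>
#include <cstdint>
#include <cstdio>

// Verify (x ^ C1) | C2 == (x | C2) ^ (C1 & ~C2) for every 8-bit x,
// using the constants from the IR examples above.
static void checkFold(uint8_t C1, uint8_t C2) {
  for (unsigned X = 0; X < 256; ++X) {
    uint8_t LHS = (uint8_t)((X ^ C1) | C2);
    uint8_t RHS = (uint8_t)((X | C2) ^ (uint8_t)(C1 & ~C2));
    assert(LHS == RHS && "fold is not an equivalence");
  }
}

int main() {
  checkFold(0xFF, 7); // @not_or:  C1 & ~C2 == 0xF8 (i.e. -8)
  checkFold(32, 7);   // @xor_or:  C1 & ~C2 == 32
  checkFold(33, 7);   // @xor_or2: C1 & ~C2 == 32
  printf("fold holds for all i8 values\n");
  return 0;
}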
There are no test changes because our current demanded-bits handling for xor constants will always clear the bits of the xor constant that are already set in the 'or' constant (those bits aren't demanded), and then we activate the fold a few lines later under the comment:
// (X^C)|Y -> (X|Y)^C iff Y&C == 0
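
To illustrate that two-step path on the @xor_or2 constants (again just my own standalone sketch, not code from the tree): demanded bits shrink the xor constant from 33 to 33 & ~7 == 32, and the comment's precondition then holds because 7 & 32 == 0:

#include <cassert>
#include <cstdint>

int main() {
  for (unsigned X = 0; X < 256; ++X) {
    // Original @xor_or2 pattern: (X ^ 33) | 7.
    uint8_t Orig = (uint8_t)((X ^ 33) | 7);
    // Step 1 (demanded bits): bits of the xor constant that are also set in
    // the 'or' constant don't matter, so 33 is shrunk to 33 & ~7 == 32.
    uint8_t Shrunk = (uint8_t)((X ^ 32) | 7);
    // Step 2: (X^C)|Y -> (X|Y)^C, legal here because 7 & 32 == 0.
    uint8_t Folded = (uint8_t)((X | 7) ^ 32);
    assert(Orig == Shrunk && Shrunk == Folded);
  }
  return 0;
}

The final form matches the @xor_or2 output shown above.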
The larger motivation for removing this code is that it could interfere with the fix for PR32706:
https://bugs.llvm.org/show_bug.cgi?id=32706
That is, we're not checking whether the 'xor' is actually a 'not', so we could reverse a 'not' optimization and cause an infinite loop by altering an 'xor X, -1'. If we can find a case where this fold actually helps, we can restore it after the demanded-bits change is restored.