And this is finally the interesting part of that fold!
If we have the pattern (x & (~(-1 << maskNbits))) << shiftNbits,
we already have a fold that drops the & (~(-1 << maskNbits))
mask iff (maskNbits+shiftNbits) u>= bitwidth(x).
But that fold is too conservative; there is a more general fold here:
In this pattern, (maskNbits+shiftNbits) is precisely the number of
low bits that can survive into the final value.
So even if (maskNbits+shiftNbits) u< bitwidth(x), we can still
fold; we just need to apply a constant mask afterwards:
Name: a, normal+mask
  %onebit = shl i32 -1, C1
  %mask = xor i32 %onebit, -1
  %masked = and i32 %mask, %x
  %r = shl i32 %masked, C2
=>
  %n0 = shl i32 %x, C2
  %n1 = add i32 C1, C2
  %n2 = zext i32 %n1 to i64
  %n3 = shl i64 -1, %n2
  %n4 = xor i64 %n3, -1
  %n5 = trunc i64 %n4 to i32
  %r = and i32 %n0, %n5
https://rise4fun.com/Alive/F5R
Naturally, the old %masked will have to be one-use.
A similar fold exists for patterns c, d, e; I will post a patch later.