Don't do SCEV normalization and denormalization for max/min expressions.
Normalizing a SCEV expression may produce an expression in which a max/min becomes redundant, and scalar evolution then simplifies it away. Denormalizing such an expression does not bring the lost max/min back. The same problem arises in the other direction: first denormalizing an expression that contains min/max-s, and then normalizing it.
AFAIU, for any expression S the round trip must be the identity: S == normalize(denormalize(S)), and likewise S == denormalize(normalize(S)). Since this does not hold for expressions containing min/max, it is illegal to normalize/denormalize them.
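For plain add-recs the round trip is indeed the identity. A minimal sketch (a hypothetical model, not LLVM's actual API): represent an add-rec {start,+,step} as a pair, where normalization subtracts the step from the start and denormalization adds it back.

```python
# Hypothetical model of SCEV add-rec normalization (not LLVM's actual API).
# An add-rec {start,+,step} is modeled as a (start, step) pair.
def normalize(addrec):
    start, step = addrec
    return (start - step, step)

def denormalize(addrec):
    start, step = addrec
    return (start + step, step)

ar = (11, -4)  # models {11,+,-4}

# For a plain add-rec, both round trips are the identity.
assert denormalize(normalize(ar)) == ar
assert normalize(denormalize(ar)) == ar
```

The identity breaks only once simplification removes a min/max from the normalized form, which is exactly the failure mode below.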
Consider the following example. Imagine we have the following loop:
```llvm
loop:
  %iv = phi [ 11, %entry ], [ %iv.next, %loop ]
  %umax = umax(%iv, 10)
  %iv.next = add %iv, -4
  %loop.cond = %iv.next u< 7
  br i1 %loop.cond, %exit, %loop
```
It executes exactly two iterations.
The SCEV for %umax is (10 umax {11,+,-4}).
Normalizing it for loop '%loop' would give us (10 umax {15,+,-4}).
Now, as the loop has only two iterations, the AddRec {15,+,-4} is always greater than 10, so SCEV simplifies the expression to {15,+,-4}.
Denormalizing it back would give a wrong result - {11,+,-4}.
Obviously, on the 2nd iteration the value of %umax is 10, but according to the denormalized SCEV it's 11 - 4 = 7.
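The miscompile can be replayed with a small sketch (again a hypothetical model, not LLVM's API): evaluate the original expression and the normalize-simplify-denormalize result over the loop's two iterations.

```python
# Hypothetical model of the miscompile (not LLVM's actual API).
def addrec_at(start, step, i):
    # Value of {start,+,step} on iteration i.
    return start + step * i

def original(i):
    # The original expression: (10 umax {11,+,-4}).
    return max(10, addrec_at(11, -4, i))

# Normalizing {11,+,-4} gives {15,+,-4}; over the two iterations it
# evaluates to 15 and 11, both > 10, so SCEV drops the umax entirely.
# Denormalizing the simplified {15,+,-4} then yields the bare {11,+,-4}.
def roundtripped(i):
    return addrec_at(11, -4, i)

print([original(i) for i in range(2)])      # [11, 10]
print([roundtripped(i) for i in range(2)])  # [11, 7] -- wrong on iteration 2
```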
One can also come up with analogous examples for smax, umin and smin.
Prohibiting normalization/denormalization for min/max expressions resolves the issue.
This fixes miscompilation described in https://github.com/llvm/llvm-project/issues/62563.
LSR does request normalization when creating initial formulae for values, and it denormalizes them back before expansion. The example described above is a short version of the test case in the bug.
Presumably umin_seq also needs to be handled.