This diff optimizes the sequence `icmp ugt (ashr X, C1), C2`. InstCombine already implements this fold for sgt, and this patch adds support for ugt.
@craig.topper came up with the idea and proof.
Besides adding the ugt handling, the patch also simplifies the existing sgt check, since Craig's proof shows that the comparison against min_int is not necessary.
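For example (a hand-written illustration, not a test from the patch: the function names and constants are mine), with a shift amount of 4 and C = 2, the comparison constant becomes `((2 + 1) << 4) - 1 = 47`:

```
; before: compare the arithmetic-shifted value
define i1 @before(i8 %x) {
  %s = ashr i8 %x, 4
  %r = icmp ugt i8 %s, 2
  ret i1 %r
}

; after: the shift is folded into the constant; the fold applies
; because the shift round-trips: ((2 + 1) << 4) >> 4 == 3
define i1 @after(i8 %x) {
  %r = icmp ugt i8 %x, 47
  ret i1 %r
}
```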
```
define i1 @src(i8 %x, i8 %y, i8 %c) {
  %cp1 = add i8 %c, 1
  %i = shl i8 %cp1, %y
  %i.2 = ashr i8 %i, %y
  %cmp = icmp eq i8 %cp1, %i.2 ; Assume: C + 1 == (((C + 1) << y) >> y)
  call void @llvm.assume(i1 %cmp)
  ; uncomment for the sgt case
  %j = shl i8 %cp1, %y
  %j.2 = sub i8 %j, 1
  %cmp2 = icmp ne i8 %j.2, 127 ; Assume: (((C + 1) << y) - 1) != 127
  call void @llvm.assume(i1 %cmp2)
  %s = ashr i8 %x, %y
  %r = icmp sgt i8 %s, %c
  ret i1 %r
}

define i1 @tgt(i8 %x, i8 %y, i8 %c) {
  %cp1 = add i8 %c, 1
  %j = shl i8 %cp1, %y
  %j.2 = sub i8 %j, 1
  %r = icmp sgt i8 %x, %j.2
  ret i1 %r
}

declare void @llvm.assume(i1)
```
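The proof above covers the sgt case. For reference, here is a sketch of the ugt analogue (my adaptation; the function names are mine and this exact variant is not part of the review), which per the claim above only needs the round-trip assume:

```
; Sketch: the ugt fold needs only the assumption that
; (C + 1) << y survives the shift round-trip.
define i1 @src_ugt(i8 %x, i8 %y, i8 %c) {
  %cp1 = add i8 %c, 1
  %i = shl i8 %cp1, %y
  %i.2 = ashr i8 %i, %y
  %cmp = icmp eq i8 %cp1, %i.2 ; Assume: C + 1 == (((C + 1) << y) >> y)
  call void @llvm.assume(i1 %cmp)
  %s = ashr i8 %x, %y
  %r = icmp ugt i8 %s, %c
  ret i1 %r
}

define i1 @tgt_ugt(i8 %x, i8 %y, i8 %c) {
  %cp1 = add i8 %c, 1
  %j = shl i8 %cp1, %y
  %j.2 = sub i8 %j, 1
  %r = icmp ugt i8 %x, %j.2
  ret i1 %r
}

declare void @llvm.assume(i1)
```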
This change is related to the optimizations in D117252.
As I said in D117252, this should be