This is again motivated by the D67122 sanitizer check enhancement.
That patch seemingly worsens -fsanitize=pointer-overflow
overhead from 25% to 50%, which strongly implies missing folds.
In this particular case, given
char* test(char& base, unsigned long offset) { return &base - offset; }
it will end up producing something like
https://godbolt.org/z/luGEju
which after optimizations reduces down to roughly
declare void @use64(i64)

define i1 @test(i8* dereferenceable(1) %base, i64 %offset) {
  %base_int = ptrtoint i8* %base to i64
  %adjusted = sub i64 %base_int, %offset
  call void @use64(i64 %adjusted)
  %not_null = icmp ne i64 %adjusted, 0
  %no_underflow = icmp ule i64 %adjusted, %base_int
  %no_underflow_and_not_null = and i1 %not_null, %no_underflow
  ret i1 %no_underflow_and_not_null
}
Without D67122 there was no %not_null check,
and in this particular case we can get rid of it by merging the two checks:
here we are checking Base u>= Offset && (Base u- Offset) != 0.
The first check means the subtraction does not underflow, and the second
additionally excludes Base == Offset, so together they are simply
Base u> Offset, as shown in the sketch below.
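As an illustration, after the merge the IR above should reduce to roughly
the following (a sketch of the expected folded form, not output copied
from the patch):

declare void @use64(i64)

define i1 @test(i8* dereferenceable(1) %base, i64 %offset) {
  %base_int = ptrtoint i8* %base to i64
  %adjusted = sub i64 %base_int, %offset
  call void @use64(i64 %adjusted)
  ; Base u> Offset, i.e. Offset u< Base, replaces both original checks
  %no_underflow_and_not_null = icmp ult i64 %offset, %base_int
  ret i1 %no_underflow_and_not_null
}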
Alive proofs:
https://rise4fun.com/Alive/QOs
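In Alive-style notation, the transform being proven is roughly the
following (reconstructed from the fold stated above; the exact
formulation at the link may differ):

  %adjusted = sub i64 %base, %offset
  %not_null = icmp ne i64 %adjusted, 0
  %no_underflow = icmp ule i64 %adjusted, %base
  %r = and i1 %not_null, %no_underflow
=>
  %r = icmp ult i64 %offset, %base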
The @llvm.usub.with.overflow pattern itself is not handled here,
because the plain sub + icmp form above is the main pattern
that we currently consider canonical.
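For context, the @llvm.usub.with.overflow variant of the same check,
which is intentionally left alone here, would look roughly like this
(a sketch for illustration only; the function name is made up):

declare { i64, i1 } @llvm.usub.with.overflow.i64(i64, i64)

define i1 @test_usubo(i64 %base, i64 %offset) {
  %agg = call { i64, i1 } @llvm.usub.with.overflow.i64(i64 %base, i64 %offset)
  %adjusted = extractvalue { i64, i1 } %agg, 0   ; %base - %offset
  %underflow = extractvalue { i64, i1 } %agg, 1  ; did the sub wrap?
  %not_null = icmp ne i64 %adjusted, 0
  %no_underflow = xor i1 %underflow, true
  %no_underflow_and_not_null = and i1 %not_null, %no_underflow
  ret i1 %no_underflow_and_not_null
}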