This is motivated by the D67122 sanitizer check enhancement.
That patch seemingly worsens -fsanitize=pointer-overflow
overhead from 25% to 50%, which strongly implies missing folds.
In this particular case, given
char* test(char& base, unsigned long offset) { return &base + offset; }
it will end up producing something like
https://godbolt.org/z/LK5-iH
which after optimizations reduces down to roughly
define i1 @t0(i8* nonnull %base, i64 %offset) {
  %base_int = ptrtoint i8* %base to i64
  %adjusted = add i64 %base_int, %offset
  %non_null_after_adjustment = icmp ne i64 %adjusted, 0
  %no_overflow_during_adjustment = icmp uge i64 %adjusted, %base_int
  %res = and i1 %non_null_after_adjustment, %no_overflow_during_adjustment
  ret i1 %res
}
Without D67122 there was no %non_null_after_adjustment check,
and in this particular case we can get rid of the extra overhead:
here we add some offset to a non-null pointer,
and check that the result does not overflow and is not a null pointer.
But since the base pointer is already non-null and we check for overflow,
the overflow check will already catch a null result (the only way to get
from a non-null base to null with an unsigned add is to wrap around),
so the separate null check is redundant and can be dropped.
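For illustration, once the redundant null check is dropped, the example
above would be expected to simplify to roughly the following (a sketch
of the intended end result, not necessarily the exact InstCombine output):

define i1 @t0(i8* nonnull %base, i64 %offset) {
  %base_int = ptrtoint i8* %base to i64
  %adjusted = add i64 %base_int, %offset
  ; the overflow check alone is sufficient for a non-null %base
  %no_overflow_during_adjustment = icmp uge i64 %adjusted, %base_int
  ret i1 %no_overflow_during_adjustment
}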
Alive proofs:
https://rise4fun.com/Alive/WRzq
There are more patterns of this unsigned-add-with-overflow check
that are not handled here (one possible variant is sketched below),
but this is the main pattern, the one we currently consider canonical,
so it makes sense to handle it.
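As an illustration of such a variant (a hypothetical sketch, not
necessarily one of the specific unhandled patterns meant above), the
same property can also be expressed in inverted form, as an 'or' of
the negated checks:

define i1 @t0_inverted(i8* nonnull %base, i64 %offset) {
  %base_int = ptrtoint i8* %base to i64
  %adjusted = add i64 %base_int, %offset
  ; inverted conditions: "is null" / "did wrap"
  %null_after_adjustment = icmp eq i64 %adjusted, 0
  %overflow_during_adjustment = icmp ult i64 %adjusted, %base_int
  %res = or i1 %null_after_adjustment, %overflow_during_adjustment
  ret i1 %res
}

Presumably the analogous reasoning applies there as well: with a
non-null %base, a wrap to null is already caught by the u< check.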