signum(x) is sometimes implemented as (x >> 63) | (-x >>> 63) (for
an i64 x). This change adds a pattern matcher for that idiom, and
adds an instcombine rule to optimize signum(x) s< 1.
Later, we can also consider optimizing:

  icmp slt signum(x), 0  -->  icmp slt x, 0
  icmp sle signum(x), 1  -->  true

etc.
Could you use getScalarSizeInBits? My thinking is that this would enable the transform to handle vectors of integers as well. The only caveat is that you need to bail out if it returns zero, which can happen for a vector of pointers.