This is the InstCombine part of the unsigned saturation canonicalization patch. (Backend patches already committed: https://reviews.llvm.org/D37510, https://reviews.llvm.org/D37534)
It converts unsigned saturated subtraction patterns into a form recognized by the backend:
(a > b) ? a - b : 0 -> ((a > b) ? a : b) - b
(b < a) ? a - b : 0 -> ((a > b) ? a : b) - b
(b > a) ? 0 : a - b -> ((a > b) ? a : b) - b
(a < b) ? 0 : a - b -> ((a > b) ? a : b) - b
(a > b) ? b - a : 0 -> -(((a > b) ? a : b) - b)
(b < a) ? b - a : 0 -> -(((a > b) ? a : b) - b)
(b > a) ? 0 : b - a -> -(((a > b) ? a : b) - b)
(a < b) ? 0 : b - a -> -(((a > b) ? a : b) - b)
This doesn't read clearly to me. How about:
Transform patterns such as: (a > b) ? a - b : 0
into: ((a > b) ? a : b) - b
This produces a canonical max pattern that the backend more easily recognizes and converts into saturated subtraction instructions, if the target has them.
There are 8 commuted/swapped variants of this pattern.