This adds support for swapping the operands of a comparison when doing so may introduce new folding opportunities.
This is roughly the same as the code added to AArch64ISelLowering in 162435e7b5e026b9f988c730bb6527683f6aa853.
For an example of a testcase which exercises this, see llvm/test/CodeGen/AArch64/swap-compare-operands.ll
(Godbolt for that testcase: https://godbolt.org/z/43WEMb)
The idea is that swapping the operands of a compare can sometimes let us fold away a shift or extend on one of them. E.g. given this sequence:

  lsl  x8, x0, #1
  cmp  x8, x1
  cset w0, lt
The following is equivalent:

  cmp  x1, x0, lsl #1
  cset w0, gt
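Note that the swap is only sound if the condition code is inverted along with the operands (lt <-> gt above). A minimal C++ sketch of that mapping, using the existing CmpInst::getSwappedPredicate helper (illustrative only, not the code in this patch):

  #include "llvm/IR/InstrTypes.h"
  using namespace llvm;

  // Commuting a compare's operands requires swapping the predicate too:
  // lt <-> gt, le <-> ge, and so on; eq/ne are unaffected.
  static CmpInst::Predicate commutePredicate(CmpInst::Predicate P) {
    // (icmp slt (shl x, 1), y) becomes (icmp sgt y, (shl x, 1)).
    return CmpInst::getSwappedPredicate(P);
  }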
Most of the code here is just a reimplementation of what already exists in AArch64ISelLowering.
(See getCmpOperandFoldingProfit and getAArch64Cmp for the equivalent code.)
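For reference, here is a rough sketch of what that profitability check looks like on the GlobalISel side. The helper names and scoring below are simplified stand-ins for the in-tree logic, which also handles shift-range restrictions and mask-based extends:

  #include "llvm/CodeGen/MachineRegisterInfo.h"
  #include "llvm/CodeGen/TargetOpcodes.h"
  using namespace llvm;

  // Score how much could fold away if Reg were the second (shifted or
  // extended) operand of an AArch64 cmp. Simplified stand-in heuristic.
  static unsigned cmpOperandFoldingProfit(Register Reg,
                                          const MachineRegisterInfo &MRI) {
    const MachineInstr *Def = MRI.getVRegDef(Reg);
    if (!Def)
      return 0;
    switch (Def->getOpcode()) {
    case TargetOpcode::G_SEXT:
    case TargetOpcode::G_ZEXT:
    case TargetOpcode::G_ANYEXT:
      return 1; // Folds as an extended-register operand (e.g. sxtb).
    case TargetOpcode::G_SHL:
    case TargetOpcode::G_ASHR:
    case TargetOpcode::G_LSHR: {
      // Only constant shift amounts fold into the compare.
      const MachineInstr *Amt = MRI.getVRegDef(Def->getOperand(2).getReg());
      return Amt && Amt->getOpcode() == TargetOpcode::G_CONSTANT ? 1 : 0;
    }
    default:
      return 0;
    }
  }

  // Swap only when the RHS offers strictly more folding than the LHS.
  static bool shouldSwapCmpOperands(Register LHS, Register RHS,
                                    const MachineRegisterInfo &MRI) {
    return cmpOperandFoldingProfit(RHS, MRI) >
           cmpOperandFoldingProfit(LHS, MRI);
  }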
Note that most of the AND code in the testcase doesn't actually fold yet. It seems like we're missing selection support for that sort of fold right now; SDAG happily folds these away (e.g. testSwapCmpWithShiftedZeroExtend8_32 in the original .ll testcase).
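For context on that missing fold: those compares come in as a G_AND with an all-ones low mask, which is just a zero-extend in disguise. A hypothetical check for that shape might look like the sketch below (the helper name is made up for illustration and is not part of this patch):

  #include <cstdint>

  // An AND with one of these masks is equivalent to uxtb/uxth/uxtw, so it
  // could in principle fold into the compare's extended-register form, as
  // SDAG already does.
  static bool isMaskEquivalentToExtend(uint64_t Mask) {
    return Mask == 0xffULL || Mask == 0xffffULL || Mask == 0xffffffffULL;
  }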