This patch adds a new canonical representation to the InstCombineCalls pass.
The new functionality swaps the compare intrinsic's operands to
create one canonical representation for the compare intrinsics:
CMP (A, CONST)
How are you ensuring the VEX encoding is valid?
Unfortunately, there's nothing in the backend IR parsing that guarantees that only a constant can reach here as the last intrinsic argument. If you write a bad IR file you can fail this cast. Use a dyn_cast and check it defensively. A bad intrinsic will fail isel later with a graceful error, but a bad value here would just cause a crash.
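The defensive pattern being asked for looks roughly like this (a self-contained sketch that models LLVM's `dyn_cast` with `dynamic_cast`; `Value`, `ConstantInt`, and `tryGetImmediate` here are stand-ins for illustration, not the real LLVM types):

```cpp
#include <cassert>
#include <cstdint>

// Stand-ins for llvm::Value / llvm::ConstantInt (hypothetical, for illustration).
struct Value { virtual ~Value() = default; };
struct ConstantInt : Value {
  uint64_t Val;
  explicit ConstantInt(uint64_t V) : Val(V) {}
};

// cast<>-style code assumes the operand *is* a ConstantInt and crashes on
// malformed IR; dyn_cast<>-style code returns nullptr on a mismatch, so the
// combine can bail out gracefully instead.
bool tryGetImmediate(Value *LastArg, uint64_t &Imm) {
  auto *CI = dynamic_cast<ConstantInt *>(LastArg); // dyn_cast equivalent
  if (!CI)
    return false; // bad IR: leave the call alone instead of crashing
  Imm = CI->Val;
  return true;
}
```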
Why are you using ConstantExpr::getIntegerValue? You should be able to use ConstantInt::get, right?
What about sse_cmp_ps, sse2_cmp_pd and their avx2 equivalents?
These don't use an i32 for the comparison-type immediate, but X86CreateCanonicalCMP assumes they do when it creates the new Constant.
Add test cases for sse_cmp_ss and sse2_cmp_sd.
First, we are doing that because we are working on intrinsics, and I am not making any semantic change; I only reorder the operands.
Second, we could do it during lowering, but we prefer that this canonical representation be established at an earlier step.
As I wrote in the comments, we are only working on the AVX intrinsics.
In the main function we only accept AVX and later intrinsics. This ensures that we only work with i32 operands.
Spaces after commas, not before. Please be consistent in capitalization and hyphenation.
Typo in the function name (it could also be a bit more specific).
Transform compare(constant, variable, comparison, ...) into compare(variable, constant, flip(comparison), ...)
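Concretely, swapping the operands requires flipping the comparison predicate immediate. A self-contained sketch of that mapping, using the AVX `_CMP_*` immediate encoding (`flipPredicate` is a hypothetical helper name, not the patch's function):

```cpp
#include <cassert>
#include <cstdint>

// AVX compare predicate immediates (a subset of the 32 _CMP_* values).
enum : uint8_t {
  CMP_EQ_OQ   = 0x00, CMP_LT_OS  = 0x01, CMP_LE_OS  = 0x02,
  CMP_UNORD_Q = 0x03, CMP_NEQ_UQ = 0x04, CMP_NLT_US = 0x05,
  CMP_NLE_US  = 0x06, CMP_ORD_Q  = 0x07, CMP_NGE_US = 0x09,
  CMP_NGT_US  = 0x0a, CMP_GE_OS  = 0x0d, CMP_GT_OS  = 0x0e,
};

// flip(pred) such that cmp(a, b, pred) == cmp(b, a, flip(pred)).
uint8_t flipPredicate(uint8_t Pred) {
  switch (Pred) {
  case CMP_LT_OS:  return CMP_GT_OS;  // a <  b  ==  b >  a
  case CMP_LE_OS:  return CMP_GE_OS;  // a <= b  ==  b >= a
  case CMP_GT_OS:  return CMP_LT_OS;
  case CMP_GE_OS:  return CMP_LE_OS;
  case CMP_NLT_US: return CMP_NGT_US; // !(a < b)  ==  !(b > a)
  case CMP_NLE_US: return CMP_NGE_US;
  case CMP_NGT_US: return CMP_NLT_US;
  case CMP_NGE_US: return CMP_NLE_US;
  default:         return Pred;       // EQ/NEQ/ORD/UNORD are symmetric
  }
}
```

Note that flipping LT_OS (0x01) yields GT_OS (0x0e), which is outside the 0-7 range the SSE encodings support, which is why the patch restricts itself to the AVX intrinsics.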
What's happening here?
What is the plan for supporting the SSE intrinsics?
What does this canonicalization enable if we can't properly do it for the SSE intrinsics? Are we getting worse codegen for the scalar and 128-bit intrinsics on AVX targets just because we can't know we're an AVX target in InstCombine?
IntrinsicName is unused in release builds and will trigger an unused-variable warning. You probably need to wrap it in #ifndef NDEBUG.
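The usual pattern looks like this (a sketch; `checkIntrinsic` and the name check are illustrative, not the patch's code). The variable only feeds an assert, and asserts compile away under -DNDEBUG, so the declaration must be guarded too:

```cpp
#include <cassert>
#include <cstring>

void checkIntrinsic(const char *Name) {
#ifndef NDEBUG
  // Only referenced by the assert below; without the guard, release builds
  // (-DNDEBUG) would warn that IntrinsicName is unused.
  const char *IntrinsicName = Name;
#endif
  assert(std::strncmp(IntrinsicName, "llvm.x86.", 9) == 0 &&
         "unexpected intrinsic");
}
```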
You need to check for nullptr if you have a dyn_cast.
What about x86_avx_cmp_ps_256 and x86_avx_cmp_pd_256?
IIUC, the issue here is that calls to the @llvm.x86.sse.cmp.* intrinsics can be lowered either to SSE-encoded instructions or to AVX instructions. The difference is not only in the encoding but also in the predicates the instructions support.
The SSE variants support 8 predicates, while the AVX variants support a richer set of 32.
The concern is that at InstCombine time we have no knowledge of which subtarget features will be enabled, so we can't replace an SSE immediate with an AVX-only immediate: if we end up lowering for an SSE target, we don't expect the backend (or at least it doesn't today) to reverse it back to an SSE-legal form.
But what if the function contains "target-cpu" or "target-features" attributes which will allow us to assume these will be used by the backend? would it be ok then to perform the canonicalization?