This adds a fold of sub(0, buildvector_splat(sub(0, x))) -> buildvector_splat(x). This is something that can come up in the lowering of right shifts under AArch64, where we generate a shift left by a negated shift amount.
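For reference, a minimal sketch of the shape such a combine could take, assuming it sits in DAGCombiner::visitSUB with N as the SUB node, N0/N1 its operands, and VT the result type; the names and exact guards are illustrative, not the actual patch:

```cpp
// Sketch only: sub(0, buildvector_splat(sub(0, x))) -> buildvector_splat(x).
// The two negations cancel lane-wise, since negating a splat negates
// every lane. Variable names (N, N0, N1, VT) are assumptions.
if (VT.isVector() && ISD::isBuildVectorAllZeros(N0.getNode()) &&
    N1.getOpcode() == ISD::BUILD_VECTOR) {
  if (SDValue Splat = cast<BuildVectorSDNode>(N1)->getSplatValue())
    if (Splat.getOpcode() == ISD::SUB && isNullConstant(Splat.getOperand(0)))
      return DAG.getSplatBuildVector(VT, SDLoc(N), Splat.getOperand(1));
}
```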
Diff Detail
- Repository: rG LLVM Github Monorepo
Event Timeline
Funnily enough, I was wondering about this pattern the other day as a follow-up to D98778...
Should we always be folding unaryop(splat(x)) -> splat(unaryop(x)) if the unaryop is legal/custom on the scalar type? And then maybe extend that to binop(splat(x),splat(y)) -> splat(binop(x,y)) as well?
Maybe. Not sure. I always find it difficult to tell when optimizations like that would be universally beneficial across a wide range of architectures, considering how different they can be. I can see that it would make this more general, but it seems easy to think of cases where it would make things worse.
For this case I think it would need to work with the truncated type, not the scalar element type. For a v8i16, the scalar element type would not be legal under AArch64. If it were using the element type, it would only handle half of the tests changed here.
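To make the generalization being discussed concrete, here is a hedged sketch, not part of this patch: foldUnaryOpOfSplat is a hypothetical helper, and it checks legality on the splat operand's own type, which for a BUILD_VECTOR may be wider than the vector element type, matching the "truncated type" point above:

```cpp
// Hypothetical helper: unaryop(splat(x)) -> splat(unaryop(x)).
// Sketch only; the profitability guard is one plausible choice.
static SDValue foldUnaryOpOfSplat(SDNode *N, SelectionDAG &DAG,
                                  const TargetLowering &TLI) {
  EVT VT = N->getValueType(0);
  if (!VT.isVector())
    return SDValue();
  SDValue Splat = DAG.getSplatValue(N->getOperand(0));
  if (!Splat)
    return SDValue();
  // Use the splat operand's own type: for a v8i16 BUILD_VECTOR on
  // AArch64 the operands are typically i32 values implicitly truncated
  // to i16, and i32 (not i16) is the type legal for scalar operations.
  EVT EltVT = Splat.getValueType();
  if (!TLI.isOperationLegalOrCustom(N->getOpcode(), EltVT))
    return SDValue();
  SDLoc DL(N);
  SDValue ScalarOp = DAG.getNode(N->getOpcode(), DL, EltVT, Splat);
  return DAG.getSplat(VT, DL, ScalarOp);
}
```

Whether a rewrite like this is universally profitable is exactly the open question above, since it would fire on every target.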
llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp:3313
Can SelectionDAG::getSplatValue() be used here? If so, then this doesn't need to be limited to BUILD_VECTOR, as it seems this fold would work equally well for scalable vectors (which use SPLAT_VECTOR).
llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp:3313
Thanks for taking a look. Yeah, seems to work OK. I'll make it so.
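In that spirit, a sketch of what the suggested change could look like (same illustrative names as the earlier sketch): routing through SelectionDAG::getSplatValue covers BUILD_VECTOR and SPLAT_VECTOR alike, so scalable vectors get the fold too:

```cpp
// Sketch only: sub(0, splat(sub(0, x))) -> splat(x), now accepting
// both BUILD_VECTOR (fixed) and SPLAT_VECTOR (scalable) splats.
if (VT.isVector() && ISD::isConstantSplatVectorAllZeros(N0.getNode())) {
  if (SDValue Splat = DAG.getSplatValue(N1))
    if (Splat.getOpcode() == ISD::SUB && isNullConstant(Splat.getOperand(0)))
      // getSplat picks SPLAT_VECTOR or BUILD_VECTOR based on VT.
      return DAG.getSplat(VT, SDLoc(N), Splat.getOperand(1));
}
```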
llvm/test/CodeGen/AArch64/neon-shift-neg.ll:567
Yeah.. The same pattern of negated shifts does not apply for SVE (which makes it a little less useful). I've added a more direct test in 77ae9b364a9d9b99501163761313cefbb345cea7.
nit: s/bvsplat/splat/g