A single smov instruction can move an element out of a vector register and sign-extend it in the same operation, rather than performing the move and the extend with separate instructions.
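As a sketch of what the combined pattern buys (instruction sequences are illustrative, not taken from the patch itself):

```llvm
; Sign-extending extract of a byte lane to i32:
;   %elt  = extractelement <16 x i8> %a, i32 3
;   %conv = sext i8 %elt to i32
;
; Without the combined pattern, two instructions:
;   umov w0, v0.b[3]   ; move lane 3 to a GPR (zero-extended)
;   sxtb w0, w0        ; then sign-extend the byte
;
; With it, one instruction:
;   smov w0, v0.b[3]   ; sign-extending lane move
```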
Diff Detail
- Repository: rG LLVM Github Monorepo
Event Timeline
llvm/lib/Target/AArch64/AArch64SVEInstrInfo.td

- Lines 2509–2512: These should be using VectorIndexB.
- Lines 2514–2517: These should only use VectorIndexH.
- Lines 2519–2520: This should be using VectorIndexS.
llvm/test/CodeGen/AArch64/aarch64-smov-gen.ll

- Lines 6–18: Please simplify the tests. For example:

  ```llvm
  target triple = "aarch64-unknown-linux-gnu"

  define i32 @extract_s8(<vscale x 16 x i8> %a) #0 {
    %elt = extractelement <vscale x 16 x i8> %a, i32 15
    %conv = sext i8 %elt to i32
    ret i32 %conv
  }

  attributes #0 = { "target-features"="+sve" }
  ```

  should be enough to test the new patterns. Given the VectorIndex# issues above I think it's worth having tests for out-of-range indices as well. I guess testing extract element VF-1 and extract element VF will cover the good and less good cases.
Addressed the comments and added test cases covering out-of-range indices.
The out-of-range tests show that with a few more patterns we can do better, but given they're not the common case I guess they can wait.
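A hypothetical shape for such an out-of-range test, following the reviewer's VF-1/VF suggestion (the function name and index are illustrative, not from the patch):

```llvm
; Index 16 is the first lane beyond the known minimum of a
; <vscale x 16 x i8>, so an indexed-SMOV pattern restricted to
; immediates 0-15 cannot match; the lane must be extracted some
; other way before the sign-extend.
define i32 @extract_s8_oob(<vscale x 16 x i8> %a) #0 {
  %elt = extractelement <vscale x 16 x i8> %a, i32 16
  %conv = sext i8 %elt to i32
  ret i32 %conv
}

attributes #0 = { "target-features"="+sve" }
```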
llvm/lib/Target/AArch64/AArch64SVEInstrInfo.td

- Lines 2509–2520: Can you move these patterns up a couple of blocks to be just after the UMOV variants, as that's the block they relate to.