By implementing the method "unsigned RISCVTTIImpl::getRegisterBitWidth(bool Vector)", fixed-length vectorization is enabled when possible.
Without this method, the "#pragma clang loop" directive is needed to enable vectorization (or the cost model may inform LLVM that "Vectorization is possible but not beneficial").
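For context, a minimal sketch of what such an override might look like in RISCVTargetTransformInfo.h. This is not the actual patch; the subtarget queries ST->hasStdExtV() and ST->getXLen(), and the explicit clamp to 128 bits, are assumptions inferred from the review comments below.

```
unsigned getRegisterBitWidth(bool Vector) const {
  if (Vector) {
    // Sketch: report a fixed-length vector register width only when the V
    // extension is enabled, clamped to at least 128 bits (an inline comment
    // below asks whether getMinRVVVectorSizeInBits() already does this).
    if (ST->hasStdExtV())
      return std::max<unsigned>(128, ST->getMinRVVVectorSizeInBits());
    return 0;
  }
  // Scalar general-purpose registers are XLEN bits wide.
  return ST->getXLen();
}
```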
Test case?
llvm/lib/Target/RISCV/RISCVTargetTransformInfo.h:58
Doesn't getMinRVVVectorSizeInBits() already max with 128?
The changes LGTM, thanks for adding a test case.
However, I think the commit title/message could be expanded upon somewhat. For example, *how* does it improve LoopVectorizer support? Since we've got both fixed-length and scalable vectorization, I think it would be helpful to clarify that this relates to fixed-length vectorization. More specifically, doesn't this change *enable* fixed-length vectorization, not just improve upon it?
llvm/test/Transforms/LoopVectorize/RISCV/riscv-unroll.ll:2
-mattr=+experimental-v should be enough. Should we specify -riscv-v-vector-bits-max in this test case?
llvm/test/Transforms/LoopVectorize/RISCV/riscv-unroll.ll:2
Yeah, "-riscv-v-vector-bits-max" is not needed.
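For illustration, the resulting RUN line might then look roughly like the following; this is only a sketch (it assumes the legacy -loop-vectorize pass name), not the test as committed:

```
; RUN: opt < %s -loop-vectorize -mtriple=riscv64 -mattr=+experimental-v -S | FileCheck %s
```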
LGTM. I wouldn't say my last suggestion is a hard requirement.
llvm/test/Transforms/LoopVectorize/RISCV/riscv-unroll.ll:1
We could perhaps add an extra RUN line for riscv32 just to make sure nothing silly is going on.
llvm/test/Transforms/LoopVectorize/RISCV/riscv-unroll.ll:1
Add RUN line for riscv32.
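That is, an extra RUN line mirroring the riscv64 one sketched above but with a riscv32 triple, for example (sketch only):

```
; RUN: opt < %s -loop-vectorize -mtriple=riscv32 -mattr=+experimental-v -S | FileCheck %s
```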