Use "enable-vla-vectorization=vla" to generate a vector length agnostic
loops during vectorization. This option works for vectorization strategy 2.
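For illustration, an invocation might look like `mlir-opt --sparsification="vectorization-strategy=2 vl=4 enable-vla-vectorization=vla" input.mlir`; note that only `enable-vla-vectorization=vla` and strategy 2 come from this revision, while the pass name and the `vl` option are assumptions based on the sparsification pass options of this period.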
Details
- Reviewers: aartbik
- Commits: rG7783a178f575: [mlir][Sparse] Add option for VLA sparsification

Diff Detail
- Repository: rG LLVM Github Monorepo

Event Timeline
Thanks Javier for adding SVE to sparse!
Note that you may want to hold off on this revision until https://reviews.llvm.org/D117919 goes in (which restructures our pipeline a bit) to avoid conflicts later on.
mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp
- Line 98: period at end
Yes, I've seen that one. I'm also waiting for solutions to the other two open patches, you can't lower to LLVM IR without those. And I also want to add more tests. I made it public while all those details are sorted out because I think this is a good reference point for VLA vectorization. It's an important milestone, and an interesting use case :-)
I just landed D117919. Please be sure to add the new enableVLAVectorization option to the SparseCompilerOptions in include/mlir/Dialect/SparseTensor/Pipelines/Passes.h as well.
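A minimal sketch of what that addition might look like, mirroring how boolean flags are typically declared with `PassOptions::Option`; the description string and default value are assumptions, not the actual patch:

```cpp
#include "mlir/Pass/PassOptions.h"

using namespace mlir;

// Sketch only: how the flag could be declared inside SparseCompilerOptions
// in include/mlir/Dialect/SparseTensor/Pipelines/Passes.h.
struct SparseCompilerOptions
    : public PassPipelineOptions<SparseCompilerOptions> {
  // Exposes the sparsification option under the same CLI spelling.
  PassOptions::Option<bool> enableVLAVectorization{
      *this, "enable-vla-vectorization",
      llvm::cl::desc("Enable vector length agnostic vectorization"),
      llvm::cl::init(false)};
};
```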
mlir/include/mlir/Dialect/SparseTensor/Transforms/Passes.h
- Line 52: I'd rather see this default specified on the no-parameter constructor below, rather than having it here; for the same reason we don't give defaults for any of the other parameters of this constructor.
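A minimal sketch of the pattern the reviewer is asking for; the fields are illustrative stand-ins, not the actual SparsificationOptions members:

```cpp
// Sketch only: the full constructor takes every option explicitly, with no
// default arguments, and the defaults live on the no-parameter constructor.
struct SparsificationOptions {
  SparsificationOptions(unsigned vectorLength, bool enableVLAVectorization)
      : vectorLength(vectorLength),
        enableVLAVectorization(enableVLAVectorization) {}
  // The defaults are specified here, in one place, instead of as default
  // arguments on the constructor above.
  SparsificationOptions() : SparsificationOptions(1, false) {}

  unsigned vectorLength;
  bool enableVLAVectorization;
};
```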
mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp
- Line 560: Why not rely on implicit coercion here?
Note that we now have https://reviews.llvm.org/D118658 pending. Let's wait until that goes in, then rebase and re-review.
@jsetoain, can you please rebase and ping me for final review? Thanks for your patience adjusting to the new pipeline!
mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp
- Line 560: General dislike for implicit conversions, and a weak attempt at not obfuscating the semantics of "::get", but I see this is not discouraged in the LLVM coding guidelines, so I'm happy to change it for brevity's sake :-)
I love how simple you make it seem! A few nits, but after that good to go!
mlir/include/mlir/Dialect/SparseTensor/Pipelines/Passes.h
- Line 44 (On Diff #407481): Just curious, is VLA the term going forward? Until now, SVE has been the norm (and the term "scalable vector" seems generic enough to include future platforms)?
mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp
- Line 99: No need to copy and store this boolean. We can always access it as codegen.options.enableVLAVectorization, as we do for most other read-only settings. Most direct fields in codegen are read/write.
- Line 560: I am with you on this one, Javier. Can we find a compromise and use `unsigned numScalableDims = codegen.generateVLA;` just to remind the reader that VectorType::get() expects a number, not a bool?
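A minimal sketch of that compromise, assuming the `VectorType::get` overload under discussion, whose third argument is the number of scalable dimensions; `generateVLA` here stands in for `codegen.options.enableVLAVectorization`:

```cpp
#include "mlir/IR/BuiltinTypes.h"

using namespace mlir;

// Sketch only: naming the bool-to-unsigned conversion reminds the reader
// that VectorType::get takes a count of scalable dimensions, not a flag.
VectorType buildVectorType(Type elemType, int64_t vl, bool generateVLA) {
  unsigned numScalableDims = generateVLA; // 1 scalable dim when VLA is on
  // Yields vector<[vl] x elemType> when scalable, vector<vl x elemType>
  // otherwise.
  return VectorType::get({vl}, elemType, numScalableDims);
}
```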
mlir/include/mlir/Dialect/SparseTensor/Pipelines/Passes.h
- Line 44 (On Diff #407481): VLA -> vector length agnostic, a vectorization model for handling scalable vectors. Indeed, there are other, non-Arm scalable vector ISAs (like RISC-V's RVV). I've tried to be consistent and call the programming model VLA (non-Arm-specific), and use SVE (or Arm SVE) when it's specifically related to Arm's extension. A review of all related documentation is pending, so there might be inconsistencies here and there :-|
mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp
- Line 560: That is my preference as well. It makes it clear that it's taking the number of scalable dimensions, not whether it's scalable or not.
mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp
- Line 560: FWIW, I too am not a fan of implicit conversions :) I only brought it up because the general MLIR style seems to discourage making them explicit. I like the compromise.