This is an archive of the discontinued LLVM Phabricator instance.

[mlir][Sparse] Add option for VLA sparsification
ClosedPublic

Authored by jsetoain on Jan 27 2022, 8:42 AM.

Details

Summary

Use "enable-vla-vectorization=vla" to generate vector-length-agnostic
loops during vectorization. This option applies to vectorization strategy 2.
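
As a usage sketch, the option would be passed to the sparsification pass roughly as below. The surrounding option names and values are illustrative (assembled from the summary's wording, not copied from the patch), so the exact spelling may differ:

```shell
# Hypothetical invocation: vectorization strategy 2 with VLA code generation.
mlir-opt sparse_kernel.mlir \
    --sparsification="vectorization-strategy=2 vl=4 enable-vla-vectorization=vla"
```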

Diff Detail

Event Timeline

jsetoain created this revision.Jan 27 2022, 8:42 AM
jsetoain requested review of this revision.Jan 27 2022, 8:42 AM

Thanks Javier for adding SVE to sparse!

Note that you may hold off on this revision until https://reviews.llvm.org/D117919 goes in (which restructures our pipeline a bit) to avoid conflicts later on....

mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp
98

period at end

Note that you may hold off on this revision until https://reviews.llvm.org/D117919 goes in (which restructures our pipeline a bit) to avoid conflicts later on....

Yes, I've seen that one. I'm also waiting for solutions to the other two open patches, you can't lower to LLVM IR without those. And I also want to add more tests. I made it public while all those details are sorted out because I think this is a good reference point for VLA vectorization. It's an important milestone, and an interesting use case :-)

jsetoain updated this revision to Diff 403988.Jan 28 2022, 5:53 AM

Added extra tests

I just landed D117919. Please be sure to add the new enableVLAVectorization option to the SparseCompilerOptions in include/mlir/Dialect/SparseTensor/Pipelines/Passes.h as well.

mlir/include/mlir/Dialect/SparseTensor/Transforms/Passes.h
68

I'd rather see this default specified on the no-parameter constructor below, rather than having it here; for the same reason we don't give defaults for any of the other parameters of this constructor.

wrengr added inline comments.Jan 28 2022, 4:09 PM
mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp
556

Why not rely on implicit coercion here?

Note that we now have https://reviews.llvm.org/D118658 pending. Let's wait until that goes in, then rebase and re-review.

Just landed D118658, if you want to rebase now

@jsetoain can you please rebase and ping me for final review? Thanks for your patience adjusting to the new pipeline!

jsetoain marked 3 inline comments as done.Feb 10 2022, 4:55 AM
jsetoain added inline comments.
mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp
556

General dislike for implicit conversions and a weak attempt at not obfuscating the semantics of "::get", but I see this is not discouraged in the LLVM coding guidelines, so I'm happy to change it for brevity's sake :-)

jsetoain updated this revision to Diff 407481.Feb 10 2022, 4:56 AM
jsetoain marked an inline comment as done.

Update to latest changes in the sparse compiler and address comments.

@aartbik Hi Aart, I believe this is ready for final review. Thank you!

I love how simple you make it seem! A few nits, but after that good to go!

mlir/include/mlir/Dialect/SparseTensor/Pipelines/Passes.h
44

Just curious, is VLA the term going forward? Until now, SVE has been the norm (though the term "scalable vector" seems generic enough to include future platforms).

mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp
99

No need to copy and store this boolean. We can always access it as codegen.options.enableVLAVectorization as we do for most other read-only settings. Most direct fields in codegen are read/write.

556

I am with you on this one, Javier. Can we find a compromise and use

unsigned numScalableDims = codegen.generateVLA;
return VectorType::get(codegen.curVecLength, etp, numScalableDims);

just to remind the reader that VectorType::get() expects a number, not a bool?

jsetoain updated this revision to Diff 407836.Feb 11 2022, 4:11 AM
jsetoain marked 3 inline comments as done.

Address comments

jsetoain added inline comments.Feb 11 2022, 4:15 AM
mlir/include/mlir/Dialect/SparseTensor/Pipelines/Passes.h
44

VLA -> vector length agnostic, a vectorization model to handle scalable vectors.
SVE -> Scalable Vector Extension, Arm's specific implementation of scalable vectors.

Indeed, there are other, non-Arm, scalable vector ISAs (like RISC-V's V extension). I've tried to be consistent and call the programming model VLA (non-Arm-specific) and use SVE (or Arm SVE) when it's specifically related to Arm's extension. A review of all related documentation is pending, so there might be inconsistencies here and there :-|
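
The distinction shows up directly in MLIR's builtin vector type syntax, where a scalable dimension is written in brackets (a type-level sketch, not code from this patch):

```mlir
// Fixed-length vector type: exactly 4 f32 lanes, known at compile time.
vector<4xf32>

// Scalable ("VLA") vector type: a hardware-determined multiple of 4 f32
// lanes, only known at runtime. SVE and RISC-V V are implementations of
// this model.
vector<[4]xf32>
```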

mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp
556

That is my preference as well. It makes it clear that it's taking the number of scalable dimensions, not whether it's scalable or not.

aartbik accepted this revision.Feb 11 2022, 8:59 AM

\O/ yeah! sparse scalable vectorization! ship it!

This revision is now accepted and ready to land.Feb 11 2022, 8:59 AM
wrengr added inline comments.Feb 14 2022, 1:48 PM
mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp
556

Fwiw, I too am not a fan of implicit conversions :) I only brought it up because the general mlir style seems to discourage making them explicit. I like the compromise

jsetoain updated this revision to Diff 408940.Feb 15 2022, 9:55 AM

Adapt test to changes in vector.reduce

jsetoain updated this revision to Diff 410298.Feb 21 2022, 7:17 AM

Fix missing case in type casting

This revision was automatically updated to reflect the committed changes.
Herald added a project: Restricted Project. Mar 25 2022, 3:56 AM