This is an archive of the discontinued LLVM Phabricator instance.

[mlir][Vector] Let vector.multi_reduction reduce down to a scalar.
ClosedPublic

Authored by nicolasvasilache on Oct 8 2021, 10:27 AM.

Details

Summary

vector.multi_reduction currently does not allow reducing down to a scalar.
This creates corner cases that are hard to handle during vectorization.
This revision extends the semantics and adds the proper transforms, lowerings and canonicalizations to allow lowering out of vector.multi_reduction to other abstractions all the way to LLVM.
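As a sketch of the extended semantics (syntax approximated from the Vector dialect documentation; the exact form varies across MLIR versions, and newer versions also take an accumulator operand), reducing every dimension now produces a plain scalar:

```mlir
// Hypothetical example: reducing both dimensions of a 2-D vector
// down to a scalar f32, which the op previously disallowed.
func @reduce_all_dims(%v: vector<4x8xf32>) -> f32 {
  %0 = vector.multi_reduction #vector.kind<add>, %v [0, 1]
      : vector<4x8xf32> to f32
  return %0 : f32
}
```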

In the future, when we also allow 0-d vectors, scalars will still be relevant: 0-d vectors and scalars are not equivalent on all hardware.
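For illustration (assuming 0-d vector support as described, with approximate syntax), the two forms are distinct types: a scalar `f32` lives outside the vector domain, while a 0-d `vector<f32>` stays inside it and may map to different registers on some targets:

```mlir
// A plain scalar value:
%s = arith.constant 1.0 : f32
// A 0-d vector wrapping the same value; not interchangeable with %s.
%z = vector.broadcast %s : f32 to vector<f32>
```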

In the process, splice out the implementation patterns related to vector.multi_reduction into a new file.

Diff Detail

Event Timeline

nicolasvasilache requested review of this revision. Oct 8 2021, 10:27 AM
Herald added a project: Restricted Project. · Oct 8 2021, 10:27 AM
springerm added inline comments.
mlir/lib/Dialect/Vector/VectorMultiDimReductionTransforms.cpp
29–50

This will probably get out-of-sync with the rest of the file pretty fast... Any reason to put this here instead of class/struct comment?

34

comma

38–39

line break not necessary

55

transpose

145

I don't quite understand the notion of "parallel". Does it just mean "don't reduce but concatenate"?

146

ArrayRef<bool>

187–189

Shouldn't this be checking for some kind of equality of "outer" dims?

225

nd -> 2d?

pifon2a accepted this revision.Oct 11 2021, 1:24 AM

Just a general comment: it is quite interesting that this kind of transformation/canonicalization now happens at different levels of the lowering stack. There are transforms at the HLO level that convert reductions to row/column reductions or to a 1-D reduction. We could do the same transformation in Linalg, which would not prevent tile-and-fuse of the input producers. Or we might need both.

mlir/lib/Dialect/Vector/VectorMultiDimReductionTransforms.cpp
2

nit: Multi-reduction

10

I am not sure what "target-independent rewrites as 1->N patterns" means.

25

nit: maybe just "vector-multi-reduction" or "vector-reduction"?

123

nit: multi

This revision is now accepted and ready to land. Oct 11 2021, 1:24 AM

Thanks, Nicolas! LGTM. I added a bunch of nits below.
On a personal note, I would like to better understand when the reduction transformation that involves transposition is worth it in terms of performance. Or is the goal of that transformation to fill the implementation gap in the runtime for the time being?
Anyway, probably a discussion for some other day :)

mlir/include/mlir/Dialect/Vector/VectorOps.td
303–304

We should extend the "into an (n-k)-D vector" wording to cover the scalar case.

mlir/lib/Dialect/Vector/VectorMultiDimReductionTransforms.cpp
10

rewrites -> rewrites of MultiDimReduction op?

29

If this is file summary, it should go to the file section (line 9)

29–50

+1. I would just add a brief summary to the file section and move the details to the class/struct comments.

35

nit: nd reads a bit weird... maybe nd -> n-D? Same for 2d? There is also a 1-d below.

55

// -> /// here and in all the classes/methods below.

81

nit: pre-increment per coding standards.

154

nit: spell out this auto and some others above and below (integers, Value, Location, etc.) would help readability a lot.

nicolasvasilache marked 20 inline comments as done.

Address review.

Formatting.

mlir/lib/Dialect/Vector/VectorMultiDimReductionTransforms.cpp
29

moved to .h

145

In practice it means "not reduced"; other places in the file use "parallel", likely by analogy with Linalg.
If we feel this is confusing and want to improve it, we should do a global cleanup as a followup.
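In other words (a sketch with approximate syntax, which varies across MLIR versions), the listed dimensions are reduced while the remaining "parallel" dimensions are simply kept in the result shape:

```mlir
// Reduce dim 1 only: dim 0 is "parallel", i.e. preserved, giving row sums.
%rows = vector.multi_reduction #vector.kind<add>, %m [1]
    : vector<2x3xf32> to vector<2xf32>
// Reduce both dims: no parallel dims remain, so the result is a scalar.
%all = vector.multi_reduction #vector.kind<add>, %m [0, 1]
    : vector<2x3xf32> to f32
```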

146

Nope, that would create memory errors; we need ownership. SmallVector it is.

154

I generally try to avoid plain ints so that I don't inadvertently introduce casts; updating the rest.

This revision was landed with ongoing or failed builds. Oct 12 2021, 4:04 AM
This revision was automatically updated to reflect the committed changes.