This is an archive of the discontinued LLVM Phabricator instance.

[mlir][Linalg] Add pass to remove unit-extent dims from tensor operands of Generic ops.
ClosedPublic

Authored by mravishankar on May 11 2020, 10:52 PM.

Details

Summary

Unit-extent dimensions are typically used for achieving broadcasting
behavior. The pattern added (along with canonicalization patterns
added previously) removes the use of unit-extent dimensions, and
instead uses a more canonical representation of the computation. This
new pattern is not added as a canonicalization for now since it
entails adding additional reshape operations. A pass is added to
exercise these patterns, along with an API entry to populate a
patterns list with these patterns.
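To make the summary concrete, here is a hedged sketch of the rewrite (the op names, types, and maps are illustrative assumptions, not taken from this revision's tests): a broadcast expressed through a unit-extent operand dimension is replaced by a collapsing reshape at the boundary, so the generic op itself works on the canonical lower-rank type.

```mlir
// Hypothetical before: the unit dimension exists only to express broadcast.
//   %0 = linalg.generic ... ins(%arg0 : tensor<1x5xf32>) ...
// Hypothetical after: the operand is collapsed to rank 1 and a reshape
// reconnects it to the original shape; the indexing maps drop the unit dim.
//   %s = linalg.tensor_reshape %arg0 [affine_map<(d0, d1) -> (d0, d1)>]
//          : tensor<1x5xf32> into tensor<5xf32>
//   %0 = linalg.generic ... ins(%s : tensor<5xf32>) ...
```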

Diff Detail

Event Timeline

mravishankar created this revision.May 11 2020, 10:52 PM
nicolasvasilache requested changes to this revision.May 12 2020, 5:30 PM

Generally looks good, thanks Mahesh.
I left some code layering comments.

mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp
74

This does not belong here right?
It also intersects with this revision: https://reviews.llvm.org/D79759
Skipping since I am not seeing it used in this revision.

82

In the grander scheme, should this function be exposed?

I am asking because, the way it is currently used, it takes an indexMap that comes from a GenericOp.
Are there cases where we'd want to use this but the genericOp may be invalid IR (because inversePermutation would fail) ?

Should this be refactored so that it can also help build correct IR that is canonical by construction?

No need to necessarily change, I am mostly asking from your higher-level user's perspective.

94
auto zero = getAffineConstantExpr(0, context);

then just: return shape[dim] == 1 && exprs[dim] == zero;.

Why can dim overflow here?
IMO, it would be more natural to while (dim < rank && isUnitExtent(dim))
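The scanning-and-grouping logic being discussed can be sketched as a standalone function (this is a simplified illustration, not the MLIR implementation: it groups each run of unit-extent dimensions with the next non-unit dimension into one reassociation group, the shape a collapsing reshape would use):

```cpp
#include <cassert>
#include <vector>

// Group a shape's unit-extent dimensions with their neighboring non-unit
// dimension, producing reassociation groups for a collapsing reshape.
// E.g. shape [1, 5, 1, 3, 1] -> groups [[0, 1], [2, 3, 4]].
std::vector<std::vector<int>>
computeReassociation(const std::vector<int> &shape) {
  int rank = static_cast<int>(shape.size());
  auto isUnitExtent = [&](int dim) { return shape[dim] == 1; };

  std::vector<std::vector<int>> groups;
  std::vector<int> current;
  int dim = 0;
  while (dim < rank) {
    // Absorb leading unit dims; close the group at the first non-unit dim.
    current.push_back(dim);
    if (!isUnitExtent(dim)) {
      groups.push_back(current);
      current.clear();
    }
    ++dim;
  }
  // Trailing unit dims fold into the last group; if every dim is a unit
  // extent (the all-unit case mentioned below), they form a single group.
  if (!current.empty()) {
    if (groups.empty())
      groups.push_back(current);
    else
      groups.back().insert(groups.back().end(), current.begin(),
                           current.end());
  }
  return groups;
}
```

The explicit `while (dim < rank)` guard makes the "why can dim overflow" question moot, and the all-unit-extent shape degenerates naturally to one group.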

99

This should assert at least one dim is not a unitExtent otherwise reassociationMaps is empty?

101

trivial braces here and below.

104

I find the multi-dim++ a bit tricky to follow but I can't see offhand how to make it more readable.

172

typo: result

1142

The layering does not seem right to me. Traditionally we either:

  1. create op canonicalization patterns for a specific op (i.e. set hasCanonicalization = 1 in Ops.td) and later collect them with OpName::getCanonicalizationPatterns(), but this is not what you want here.
  2. what you want here: create a new pass or transform and put everything there; this way Ops.cpp does not need to include Passes.h.

If you're worried that other things are not visible they should also be moved in the proper place.
Your populate impl should resemble:

patterns.insert<ReplaceUnitExtentTensors>(context);
GenericOp::getCanonicalizationPatterns(patterns);
TensorReshapeOp::getCanonicalizationPatterns(patterns);
mlir/lib/Dialect/Linalg/Transforms/Fusion.cpp
918

This should be its own pass in its own file, with its own pattern; it is unclear to me why it is in the Fusion pass.

This revision now requires changes to proceed.May 12 2020, 5:30 PM
mravishankar marked 15 inline comments as done.

Folding with previous change in stack and addressing comments.

mravishankar added inline comments.May 14 2020, 9:46 PM
mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp
74

Ah, sorry. This shouldn't be here. Removed.

82

I am not sure what you mean by exposed. This is a static function that is useful only at the callsite. The issue may be that it was added to LinalgOps.cpp. I am moving all of this into a separate pass, so it is not visible to op parsing/verification, etc. In effect it isn't exposed outside of the pass.

Regarding other questions

  1. It should work on any indexMap. I don't see this as related to whether the genericOp is valid or not, since it only looks at a single index map. The validity of the genericOp using the indexMap constructed here is the concern of the caller, AFAICS.
  2. I don't know how to refactor this to help build correct IR. I can't trivially see how the construction of each individual index map of a generic op can be steered so that the final generic op is valid. The validity depends on concatenation and inversion; how would I break that down to the level of individual index maps?
94

Changed. Thanks!

99

Turns out it is OK to have all dims being unitExtent. Added a test for that.

104

Made this a while loop and made the dim++ more explicit.

1142

I agree. I moved everything to a separate file and added the relevant patterns into that.

mravishankar edited the summary of this revision. (Show Details)May 14 2020, 10:55 PM
nicolasvasilache accepted this revision.May 26 2020, 3:50 AM

Thanks!

Note the last minor points.

mlir/lib/Dialect/Linalg/Transforms/DropUnitDims.cpp
101

It seems like this whole function could be replaced by:

  1. create an affine map that either sets unitDims to 0 or drops them (depending on whether you want a projection or to explicitly set them to 0).
  2. compose with the indexingMaps

?

143

We have ArrayAttr::getAsRange; it seems a similar ArrayAttr::getFromRange would be useful?
No need to do it in this CL if it is too intrusive.

176

I am curious why does the iterator type matter here?

193

nit:

for (auto attr : llvm::enumerate(iteratorTypes))
  if (unitDims.count(attr.index()) == 0)
    newIteratorTypes.push_back(attr.value());

?

216

typo: dimension

244

Thanks, much simpler to follow AFAIR!

mlir/test/Dialect/Linalg/drop-unit-extent-dims.mlir
166

nit: newline

This revision is now accepted and ready to land.May 26 2020, 3:50 AM
rriddle added inline comments.May 27 2020, 2:14 PM
mlir/lib/Dialect/Linalg/Transforms/DropUnitDims.cpp
114

nit: Drop trivial braces.

250

nit: ++dim;

256

Same here.

mlir/lib/Dialect/Linalg/Transforms/Fusion.cpp
578–579

nit: Merge this into the if and use a type other than auto.

mravishankar marked 10 inline comments as done.
mravishankar edited the summary of this revision. (Show Details)

Addressing comments.

Fixing a bug in the code.

This revision was automatically updated to reflect the committed changes.