This is an archive of the discontinued LLVM Phabricator instance.

[mlir][Linalg] Add pattern for folding reshape by collapsing.
ClosedPublic

Authored by mravishankar on Feb 9 2022, 12:24 PM.

Details

Summary

Fusion of linalg.generic with
tensor.expand_shape/tensor.collapse_shape currently handles fusion
with a reshape by expanding the dimensionality of the linalg.generic
operation. This helps fuse elementwise operations better, since they
are fused at the highest dimensionality while keeping all the indexing
maps involved projected permutations. The intent of these patterns is
to push the reshapes to the function boundaries.
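
As an illustrative sketch of fusion by expansion (the shapes, value
names, and payload below are made up for illustration, not taken from
this patch), a linalg.generic consuming the result of a
tensor.collapse_shape

  %collapsed = tensor.collapse_shape %arg0 [[0, 1]]
      : tensor<?x4xf32> into tensor<?xf32>
  %result = linalg.generic {
      indexing_maps = [affine_map<(d0) -> (d0)>,
                       affine_map<(d0) -> (d0)>],
      iterator_types = ["parallel"]}
      ins(%collapsed : tensor<?xf32>) outs(%init : tensor<?xf32>) {
    ^bb0(%in: f32, %out: f32):
      %s = arith.negf %in : f32
      linalg.yield %s : f32
  } -> tensor<?xf32>

is rewritten so that the generic runs at the expanded rank and the
reshape moves past it towards the function boundary:

  %result2 = linalg.generic {
      indexing_maps = [affine_map<(d0, d1) -> (d0, d1)>,
                       affine_map<(d0, d1) -> (d0, d1)>],
      iterator_types = ["parallel", "parallel"]}
      ins(%arg0 : tensor<?x4xf32>) outs(%init2 : tensor<?x4xf32>) {
    ^bb0(%in: f32, %out: f32):
      %s = arith.negf %in : f32
      linalg.yield %s : f32
  } -> tensor<?x4xf32>
  %result = tensor.collapse_shape %result2 [[0, 1]]
      : tensor<?x4xf32> into tensor<?xf32>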

The presence of named ops (or other ops across which the reshape
cannot be propagated) stops the propagation to the edges of the
function. At this stage, the converse patterns, which fold the
reshapes into generic ops by collapsing the dimensions of the generic
op, can push the reshape towards the edges. In particular, this helps
the case where reshapes exist between named ops and generic ops:

linalg.named_op -> tensor.expand_shape -> linalg.generic

Pushing the reshape down will help fusion of linalg.named_op ->
linalg.generic using tile + fuse transformations.
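
As an illustrative sketch (again with made-up shapes and payload), the
pattern rewrites

  %0 = linalg.matmul ins(%a, %b : tensor<?x?xf32>, tensor<?x?xf32>)
      outs(%c : tensor<?x?xf32>) -> tensor<?x?xf32>
  %1 = tensor.expand_shape %0 [[0, 1], [2]]
      : tensor<?x?xf32> into tensor<?x8x?xf32>
  %2 = linalg.generic {
      indexing_maps = [affine_map<(d0, d1, d2) -> (d0, d1, d2)>,
                       affine_map<(d0, d1, d2) -> (d0, d1, d2)>],
      iterator_types = ["parallel", "parallel", "parallel"]}
      ins(%1 : tensor<?x8x?xf32>) outs(%init : tensor<?x8x?xf32>) {
    ^bb0(%in: f32, %out: f32):
      %s = arith.negf %in : f32
      linalg.yield %s : f32
  } -> tensor<?x8x?xf32>

into a collapsed generic that consumes the matmul result directly,
with the expand_shape moved below it; d0 and d1 appear contiguously in
every indexing map, so they can be folded back into a single
dimension:

  %1 = linalg.generic {
      indexing_maps = [affine_map<(d0, d1) -> (d0, d1)>,
                       affine_map<(d0, d1) -> (d0, d1)>],
      iterator_types = ["parallel", "parallel"]}
      ins(%0 : tensor<?x?xf32>) outs(%init2 : tensor<?x?xf32>) {
    ^bb0(%in: f32, %out: f32):
      %s = arith.negf %in : f32
      linalg.yield %s : f32
  } -> tensor<?x?xf32>
  %2 = tensor.expand_shape %1 [[0, 1], [2]]
      : tensor<?x?xf32> into tensor<?x8x?xf32>

The matmul and the generic are now adjacent and can be tiled and
fused.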

This pattern is intended to replace the following patterns:

  1. FoldReshapeByLinearization : These patterns create indexing maps
     that are not projected permutations, which hampers future
     transformations. They are only useful for folding unit
     dimensions.

  2. PushReshapeByExpansion : This pattern has the same functionality
     but has some restrictions:

     a) It tries to avoid creating new reshapes, which limits its
        applicability. The pattern added here can achieve the same
        functionality through use of the `controlFn`, which gives
        clients of the pattern the freedom to make this decision.
     b) It does not work for ops with index semantics (e.g. ops that
        use linalg.index in their body).

These patterns will be deprecated in a future patch.
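
For context on restriction 1, folding a reshape by linearization keeps
the generic at the expanded rank but indexes into the un-reshaped
operand through a linearized affine expression. For example (shapes
made up for illustration), folding

  %e = tensor.expand_shape %arg0 [[0, 1]]
      : tensor<?xf32> into tensor<?x4xf32>

into a 2-D consumer with an identity map leaves that operand with an
indexing map like

  affine_map<(d0, d1) -> (d0 * 4 + d1)>

which is not a projected permutation, and which later transformations
such as tiling and fusion cannot reason about.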

Diff Detail

Event Timeline

mravishankar created this revision. Feb 9 2022, 12:24 PM
mravishankar requested review of this revision. Feb 9 2022, 12:24 PM
gysit accepted this revision. Feb 10 2022, 4:44 AM

LGTM!

mlir/lib/Dialect/Linalg/Transforms/ElementwiseOpFusion.cpp
1181

nit: I think forwarding the constructor arguments should work here: domainReassociation.emplace_back({dim});

1195

nit: existence

1207

Searching for the dimSequence in the indexing map results using llvm::find_if may make this a little more compact?

1242

Is this needed? The code seems to support more than one output as well?

1264

nit: for all

1329

nit: I would shorten to: Map from the starting dimensions of the folded dimension sequences to their index in iterationReassociation.

1353

nit: of the collapsed

1460

I thought the reshape op is always an expand shape?

mlir/test/lib/Dialect/Linalg/TestLinalgElementwiseFusion.cpp
157

nit: missing dot

Or maybe write consumer.getOperandNumber() != 0 and skip the comment?

This revision is now accepted and ready to land. Feb 10 2022, 4:44 AM
mravishankar marked 5 inline comments as done. Feb 15 2022, 3:22 PM
mravishankar added inline comments.
mlir/lib/Dialect/Linalg/Transforms/ElementwiseOpFusion.cpp
1207

I am probably going to keep it as is. Sometimes llvm::find_if (and similar functions with predicates) is hard for me to read.

1242

Being deliberate about this. If needed, this can be removed and tested.

1460

For now yes, but the same logic holds the other way too, i.e. when fusing a linalg.generic -> tensor.collapse_shape. I haven't added this pattern, but the logic is essentially the same.
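
As a hypothetical sketch of that converse direction (not part of this
patch; shapes and payload made up), a tensor.collapse_shape consuming
a generic

  %0 = linalg.generic {
      indexing_maps = [affine_map<(d0, d1, d2) -> (d0, d1, d2)>,
                       affine_map<(d0, d1, d2) -> (d0, d1, d2)>],
      iterator_types = ["parallel", "parallel", "parallel"]}
      ins(%arg0 : tensor<?x4x?xf32>) outs(%init : tensor<?x4x?xf32>) {
    ^bb0(%in: f32, %out: f32):
      %s = arith.negf %in : f32
      linalg.yield %s : f32
  } -> tensor<?x4x?xf32>
  %1 = tensor.collapse_shape %0 [[0, 1], [2]]
      : tensor<?x4x?xf32> into tensor<?x?xf32>

would be rewritten by collapsing the generic and moving the reshape up
to its operands:

  %a = tensor.collapse_shape %arg0 [[0, 1], [2]]
      : tensor<?x4x?xf32> into tensor<?x?xf32>
  %1 = linalg.generic {
      indexing_maps = [affine_map<(d0, d1) -> (d0, d1)>,
                       affine_map<(d0, d1) -> (d0, d1)>],
      iterator_types = ["parallel", "parallel"]}
      ins(%a : tensor<?x?xf32>) outs(%init2 : tensor<?x?xf32>) {
    ^bb0(%in: f32, %out: f32):
      %s = arith.negf %in : f32
      linalg.yield %s : f32
  } -> tensor<?x?xf32>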

Rebase and address comments.

This revision was landed with ongoing or failed builds. Feb 15 2022, 7:15 PM
This revision was automatically updated to reflect the committed changes.