This is an archive of the discontinued LLVM Phabricator instance.

[mlir][Linalg] Add fusion of IndexedGenericOp with TensorReshapeOp by expansion.
ClosedPublic

Authored by mravishankar on Oct 23 2020, 4:38 PM.

Details

Summary

This patch adds support for fusing linalg.indexed_generic ops with
linalg.tensor_reshape ops by expansion, i.e.

  • linalg.indexed_generic op -> linalg.tensor_reshape op, when the reshape is expanding.
  • linalg.tensor_reshape op -> linalg.indexed_generic op, when the reshape is folding.
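The first pattern above can be sketched in MLIR IR. This is an illustrative example only, not taken from the patch's tests; the indexing maps, shapes, and reassociation maps are assumptions, and the syntax follows the circa-2020 form of linalg.tensor_reshape with affine-map reassociations:

```mlir
// An indexed_generic producer followed by an expanding reshape.
// Fusion by expansion rewrites the indexed_generic to iterate over the
// expanded (3-D) domain, eliminating the intermediate 2-D tensor.
#map = affine_map<(d0, d1) -> (d0, d1)>
%0 = linalg.indexed_generic
       {indexing_maps = [#map, #map],
        iterator_types = ["parallel", "parallel"]}
       ins(%arg0 : tensor<?x?xf32>) {
  ^bb0(%i: index, %j: index, %e: f32):
    linalg.yield %e : f32
} -> tensor<?x?xf32>
// Expanding reshape: dim 0 is kept, dim 1 is expanded into (d1, d2).
%1 = linalg.tensor_reshape %0
       [affine_map<(d0, d1, d2) -> (d0)>,
        affine_map<(d0, d1, d2) -> (d1, d2)>]
       : tensor<?x?xf32> into tensor<?x?x?xf32>
```

After fusion, the body of the fused op must reconstruct the original index for a folded dimension (here, the old d1) from the expanded loop indices, which is what distinguishes the indexed_generic case from the plain generic case already supported.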

Diff Detail

Event Timeline

mravishankar created this revision. Oct 23 2020, 4:38 PM
mravishankar requested review of this revision. Oct 23 2020, 4:38 PM
hanchung added inline comments. Oct 25 2020, 9:28 PM
mlir/lib/Dialect/Linalg/Transforms/FusionOnTensors.cpp
416

nit: add a period at the end.

463

If this is expandedDimsShape, why not use expandedType.getRank()?

585–586

Remove or update the comment?

738–739

update comment: ... its consumer generic op or indexed_generic op

and I think the line below can just say ... the loop in the consumer op is expanded. (i.e., remove generic)

mravishankar marked 4 inline comments as done.

Address comments.

hanchung accepted this revision. Oct 27 2020, 11:54 AM
hanchung added inline comments.
mlir/lib/Dialect/Linalg/Transforms/FusionOnTensors.cpp
439

nit: isFusableWithReshapeByDimExpansion

missing Dim

463

oh I get the point here. This is because it's storing the expanded shapes for each folded dim.

594

nit: add a period at the end.

616–617

How about using llvm::seq for this for loop? I remember you told me this is LLVM style.
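The reviewer's suggestion refers to llvm::seq from llvm/ADT/Sequence.h, which yields a half-open integer range. A minimal before/after fragment (the loop body and the use of expandedType are assumptions for illustration, not the patch's actual code):

```cpp
#include "llvm/ADT/Sequence.h"

// Before (classic index loop):
//   for (unsigned i = 0, e = expandedType.getRank(); i != e; ++i)
//     ...use i...

// After (LLVM-style range loop over [0, rank)):
for (int64_t i : llvm::seq<int64_t>(0, expandedType.getRank())) {
  // ...use i... (hypothetical body)
}
```

llvm::seq keeps the loop variable scoped to the loop and makes the iteration bounds explicit in one place, which is why it is preferred in LLVM-style code.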

This revision is now accepted and ready to land. Oct 27 2020, 11:54 AM

LGTM, thanks!