This is an archive of the discontinued LLVM Phabricator instance.

[mlir][linalg] Fold init_tensor -> fill -> tensor_reshape chain
ClosedPublic

Authored by antiagainst on Mar 22 2021, 2:48 PM.

Details

Summary

For init_tensor -> fill -> tensor_reshape op chains, we can create a new
linalg.init_tensor op directly with the result type of the linalg.tensor_reshape op.
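
For illustration, here is a rough before/after sketch of the fold. The shapes are invented for the example, and it uses the linalg assembly syntax of that era (both linalg.fill and linalg.tensor_reshape have changed syntax since), so treat it as approximate:

  // Before: init_tensor -> fill -> tensor_reshape chain.
  %init = linalg.init_tensor [4, 16] : tensor<4x16xf32>
  %fill = linalg.fill(%init, %cst) : tensor<4x16xf32>, f32 -> tensor<4x16xf32>
  %out = linalg.tensor_reshape %fill [affine_map<(d0, d1) -> (d0, d1)>]
      : tensor<4x16xf32> into tensor<64xf32>

  // After: the init_tensor is created directly with the reshape's result
  // type; the tensor_reshape and the old init_tensor become dead.
  %init2 = linalg.init_tensor [64] : tensor<64xf32>
  %out2 = linalg.fill(%init2, %cst) : tensor<64xf32>, f32 -> tensor<64xf32>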

Depends On D99115

Diff Detail

Event Timeline

antiagainst created this revision. Mar 22 2021, 2:48 PM
antiagainst requested review of this revision. Mar 22 2021, 2:48 PM
mravishankar requested changes to this revision. Mar 22 2021, 10:42 PM
mravishankar added inline comments.
mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp
1673

I don't think you need to check that it is an InitTensor op; it's just needed for the shape. You can just take the shape of the outs operand of the FillOp.

This revision now requires changes to proceed. Mar 22 2021, 10:42 PM
antiagainst added inline comments. Mar 23 2021, 12:10 PM
mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp
1673

Note that this pattern will change the InitTensor op itself, so it needs to check for that.

mravishankar added inline comments. Mar 24 2021, 11:59 AM
mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp
1673

You just need to create a new InitTensor op; the old one will become dead. Stated differently, whether fill.output() is an InitTensorOp (today it always is, by construction) does not matter for this pattern.
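
To illustrate the point (a hypothetical example, under the same era-syntax caveats as above): the rewrite only needs the reshape's result type, so it applies even when the fill's outs operand is not an init_tensor, e.g. a function argument. Because fill overwrites every element, the chain can still be rewritten to fill a fresh init_tensor of the reshaped type:

  func @fold(%arg0: tensor<4x16xf32>, %cst: f32) -> tensor<64xf32> {
    %fill = linalg.fill(%arg0, %cst) : tensor<4x16xf32>, f32 -> tensor<4x16xf32>
    %out = linalg.tensor_reshape %fill [affine_map<(d0, d1) -> (d0, d1)>]
        : tensor<4x16xf32> into tensor<64xf32>
    return %out : tensor<64xf32>
  }

  // ... rewrites to:
  //   %init = linalg.init_tensor [64] : tensor<64xf32>
  //   %out = linalg.fill(%init, %cst) : tensor<64xf32>, f32 -> tensor<64xf32>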

antiagainst marked 2 inline comments as done.

Address comments

mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp
1673

Ah, I see your points now. Changed, thanks.

mravishankar accepted this revision. Mar 24 2021, 2:57 PM

LGTM. Just a few renaming nits.

mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp
1662

Nit: Fix stale comment

1664

Just say new FillOp

1666

Nit: Rename to FoldFillWithTensorReshape

mlir/test/Dialect/Linalg/canonicalize.mlir
808

super nit: rename to func @fold_fill_tensor_reshape()

This revision is now accepted and ready to land. Mar 24 2021, 2:57 PM
This revision was landed with ongoing or failed builds. Mar 24 2021, 3:19 PM
This revision was automatically updated to reflect the committed changes.
antiagainst marked 4 inline comments as done.

What is the benefit you see in duplicating the init_tensor?
Bufferization of reshape is still not in a good place, but offhand, if the original init has multiple consumers, this will strictly increase the amount of memory.

I think I would have just swapped the reshape and the fill (sketched below) and punted the decision about inplace vs. alloc to later, as I don't have enough data.

Maybe you already have a use case in mind?
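
For comparison, the alternative rewrite described above, which swaps the reshape and the fill rather than duplicating the init (a sketch, same era-syntax caveats as earlier):

  %init = linalg.init_tensor [4, 16] : tensor<4x16xf32>
  // Reshape the empty init instead of the filled result...
  %rs = linalg.tensor_reshape %init [affine_map<(d0, d1) -> (d0, d1)>]
      : tensor<4x16xf32> into tensor<64xf32>
  // ...then fill the reshaped tensor.
  %out = linalg.fill(%rs, %cst) : tensor<64xf32>, f32 -> tensor<64xf32>

  // The original %init survives for any other consumers, and bufferization
  // can later decide whether the reshape folds into the same allocation.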