This is an archive of the discontinued LLVM Phabricator instance.

[mlir][TransformDialect] Simplify the lowering of pack/unpack when these are just pad/unpad
ClosedPublic

Authored by qcolombet on Apr 12 2023, 1:21 PM.

Details

Summary

This patch recognizes when tensor.pack/unpack operations are simply a tensor.pad/unpad (a.k.a. tensor.extract_slice) and lowers them to a simpler sequence of operations.

For pack, instead of doing:

pad
expand_shape
transpose

we do:

pad
insert_slice
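
For instance, with hypothetical shapes (the SSA names and types below are illustrative, not taken from the patch's tests), a pack whose outer dimensions are all 1 is just a pad into the destination:

  %pack = tensor.pack %src padding_value(%cst : f32)
      inner_dims_pos = [0, 1] inner_tiles = [136, 64] into %dest
      : tensor<129x47xf32> -> tensor<1x1x136x64xf32>

  // Simplified lowering: pad up to the tile sizes, then insert the padded
  // tensor into the destination; no expand_shape/transpose is needed.
  %padded = tensor.pad %src low[0, 0] high[7, 17] {
  ^bb0(%i: index, %j: index):
    tensor.yield %cst : f32
  } : tensor<129x47xf32> to tensor<136x64xf32>
  %res = tensor.insert_slice %padded into %dest[0, 0, 0, 0] [1, 1, 136, 64] [1, 1, 1, 1]
      : tensor<136x64xf32> into tensor<1x1x136x64xf32>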

For unpack, instead of doing:

transpose
collapse_shape
extract_slice

we do:

extract_slice
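
Likewise, with illustrative shapes, an unpack whose outer dimensions are all 1 reduces to a single (rank-reducing) slice extraction:

  %unpack = tensor.unpack %src inner_dims_pos = [0, 1] inner_tiles = [136, 64]
      into %dest : tensor<1x1x136x64xf32> -> tensor<129x47xf32>

  // Simplified lowering: read the single tile back directly, dropping any
  // padding that was introduced by the pack.
  %res = tensor.extract_slice %src[0, 0, 0, 0] [1, 1, 129, 47] [1, 1, 1, 1]
      : tensor<1x1x136x64xf32> to tensor<129x47xf32>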

Note: returning nullptr for the transform dialect is fine; the related handles are simply ignored by subsequent transformations.

Event Timeline

qcolombet created this revision. Apr 12 2023, 1:21 PM
Herald added a project: Restricted Project. Apr 12 2023, 1:21 PM
qcolombet requested review of this revision. Apr 12 2023, 1:21 PM
nicolasvasilache accepted this revision. Apr 12 2023, 1:57 PM

Can we add a test with dynamic shapes?
(Maybe we can't if some op does not support it, but I *think* we can.)

mlir/lib/Dialect/Linalg/TransformOps/LinalgTransformOps.cpp
877

This assumes static shapes; we really want createOrFold<DimOp> here. We should have some helpers around that return a vector<OpFoldResult> with the right quantities.
Not sure where to look offhand for those helpers...

883

3 newlines? :)

884

Can we early-exit here and avoid the else nesting?

908

With the early exit, I believe we can drop the trailing assert.

995

Same re dynamic sizes.

1060

Same re early exit and dropping the trailing assert.

mlir/lib/Dialect/Tensor/IR/TensorOps.cpp
3722

Something more idiomatic with ArrayRef == llvm::seq?

3741

Something like return llvm::all_of(..., [](){ == 1; })?

This revision is now accepted and ready to land. Apr 12 2023, 1:57 PM
mravishankar added inline comments. Apr 12 2023, 2:06 PM
mlir/test/Dialect/Linalg/transform-lower-pack.mlir
41

I'm wondering if we even want to have an insert_slice here? We could just do a reshape, in which case the only difference is whether there is a transpose or not. If we can teach bufferization to handle this reshape without introducing additional buffers, that'd be very useful (we used to have a memref.reshape that just changed the indexing; I wonder if we still have that).

mlir/lib/Dialect/Linalg/TransformOps/LinalgTransformOps.cpp
877

Ok, after a quick offline chat, I see this is not strictly needed given that the shape is asserted to be static.
This is because tensor.expand_shape is not yet powerful enough regarding dynamic sizes, but that is a longer endeavor.
I still pulled out the relevant functionality here for better reuse: https://reviews.llvm.org/D148201.

While it will do the same thing for now, using an idiomatic API also makes the code future-proof.
Could you update to using getMixedDimensions?

mlir/test/Dialect/Linalg/transform-lower-pack.mlir
41

In principle we could, but I'd like to see how these things connect in practice.

At the graph level, reshapes tend to simplify reasonably well.
At the codegen level, as soon as we start tiling and taking slices, reshapes don't behave properly at the moment.

I'd say this depends on where we apply the lowering, and we will likely want this to be configurable in the future, based on data.

Forgot to submit these comments!

mlir/include/mlir/Dialect/Utils/IndexingUtils.h
208

I ended up not needing this refactoring, but I wanted to show it nevertheless.

I'll remove it from this commit after I get your feedback.
Do you think it makes sense to expose these as utility functions (in which case I'll make a separate NFC commit), or should I just drop them altogether?

mlir/include/mlir/Dialect/Utils/ReshapeOpsUtils.h
535

Ditto, not needed but I wanted to show it to you.

mlir/include/mlir/Dialect/Utils/IndexingUtils.h
208

I think these are useful utils to expose in general, fine as is IMO.

mlir/include/mlir/Dialect/Utils/ReshapeOpsUtils.h
535

I think these are useful utils to expose in general, fine as is IMO.

qcolombet updated this revision to Diff 513132. Apr 13 2023, 2:35 AM
  • Use all_of and llvm::seq where appropriate
  • Use getMixedDimensions to future-proof the code w.r.t. dynamic dimensions
  • Early-exit to avoid else indentation
qcolombet marked 10 inline comments as done. Apr 13 2023, 2:37 AM
qcolombet added inline comments.
mlir/lib/Dialect/Linalg/TransformOps/LinalgTransformOps.cpp
883

It was like that in the existing code :).
Removed it.

mlir/test/Dialect/Linalg/transform-lower-pack.mlir
41

Alright, I'll land as is and we'll revisit later then.

nicolasvasilache accepted this revision. Apr 13 2023, 3:06 AM