This is an archive of the discontinued LLVM Phabricator instance.

[mlir][Linalg] Rewrite SubTensors that take a slice out of a unit-extent dimension.
ClosedPublic

Authored by mravishankar on Mar 23 2021, 4:52 PM.

Details

Summary

Subtensor operations that take a slice out of a tensor that is
unit-extent along a dimension can be rewritten to drop that
dimension.
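
For illustration only (this before/after pair is adapted from the examples discussed in the review below; %0, %o1 and %s1 are placeholders), a subtensor such as

%1 = subtensor %0[0, %o1, 0] [1, %s1, 1] [1, 1, 1] : tensor<1x?x1xf32> to tensor<1x?x1xf32>

is rewritten into a lower-rank form in which the unit dimensions are gone, with reshapes inserted around it to reconnect to the surrounding IR:

%1 = subtensor %0[%o1] [%s1] [1] : tensor<?xf32> to tensor<?xf32>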

Diff Detail

Event Timeline

mravishankar created this revision.Mar 23 2021, 4:52 PM
mravishankar requested review of this revision.Mar 23 2021, 4:52 PM

Fix test error.

nicolasvasilache requested changes to this revision.Mar 24 2021, 5:06 AM
nicolasvasilache added inline comments.
mlir/lib/Dialect/Linalg/Transforms/DropUnitDims.cpp:488

These days you can significantly simplify all this:

%1 = subtensor %0[0, %o1, 0] [1, %s1, 1] [1, 1, 1] : tensor<1x?x1xf32> to tensor<?xf32>

should just be valid.

You should use this helper to give you the rank-reduced type: https://github.com/llvm/llvm-project/blob/main/mlir/lib/Dialect/StandardOps/IR/Ops.cpp#L1728

This revision now requires changes to proceed.Mar 24 2021, 5:06 AM

Actually fix the test

This is doing something different. It is reducing the rank of both the source and the dest, so it rewrites

%1 = subtensor %0[0, %o1, 0] [1, %s1, 1] [1, 1, 1] : tensor<1x?x1xf32> to tensor<1x?x1xf32>

to

%1 = subtensor %0[%o1] [%s1] [1] : tensor<?xf32> to tensor<?xf32>

with reshapes for the modified source and result to match up with the rest of the IR. The reshapes then get canonicalized away if possible. All this is meant just to eliminate the <...x1x...> dimensions that are not really needed (which is what the rest of this pass is for).
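
As a rough sketch (an illustration assuming the linalg.tensor_reshape syntax of the time, not the literal output of the pass), the rewritten IR looks roughly like:

// Collapse the unit dimensions of the source, slice at reduced rank, then expand back.
%src = linalg.tensor_reshape %0 [affine_map<(d0, d1, d2) -> (d0, d1, d2)>] : tensor<1x?x1xf32> into tensor<?xf32>
%slc = subtensor %src[%o1] [%s1] [1] : tensor<?xf32> to tensor<?xf32>
%1 = linalg.tensor_reshape %slc [affine_map<(d0, d1, d2) -> (d0, d1, d2)>] : tensor<?xf32> into tensor<1x?x1xf32>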

Ah yes, I see that the reshapes are actually needed for function boundaries.
Removing the blocker then, sorry about the noise.

Still, it seems we aggressively use reshape to drop size-1 dimensions even in the absence of function boundaries.

Maybe a solution would be to have extra canonicalizations:

  • subtensor + reshape -> rank-reducing subtensor
  • reshape + subtensor_insert -> rank-expanding subtensor_insert
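
For illustration (this sketch is not part of the original review), the first of these canonicalizations would fold a pair such as

%1 = subtensor %0[0, %o1, 0] [1, %s1, 1] [1, 1, 1] : tensor<1x?x1xf32> to tensor<1x?x1xf32>
%2 = linalg.tensor_reshape %1 [affine_map<(d0, d1, d2) -> (d0, d1, d2)>] : tensor<1x?x1xf32> into tensor<?xf32>

into a single rank-reducing subtensor:

%2 = subtensor %0[0, %o1, 0] [1, %s1, 1] [1, 1, 1] : tensor<1x?x1xf32> to tensor<?xf32>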

The overarching point (one that I think I have also leaked into other phab reviews I saw fly by) is that subview, subtensor and vector.transfer are becoming better understood and do well with in-place updates, fusion, etc., while reshape is still not super well understood.

Thoughts?

> Still, it seems we aggressively use reshape to drop size-1 dimensions even in the absence of function boundaries.

> Maybe a solution would be to have extra canonicalizations:
>
>   • subtensor + reshape -> rank-reducing subtensor
>   • reshape + subtensor_insert -> rank-expanding subtensor_insert

That could work. This solves half the problem (on the target side, but not the source side).

> The overarching point (one that I think I have also leaked into other phab reviews I saw fly by) is that subview, subtensor and vector.transfer are becoming better understood and do well with in-place updates, fusion, etc., while reshape is still not super well understood.

> Thoughts?

Totally agree. The goal here is that the reshapes produced here fold away with other operations, and that seems to happen well. Some of the patterns here could well be canonicalizations, but this pass has patterns that are meant to drop unit-trip loop counts within Linalg (which has a couple of good side effects too). It is mostly an experiment; that's why all these patterns are kept within this pass. It is used in IREE, and it works pretty well. Completely on board, though, that reshapes should be at the boundaries. As I am bringing up the Linalg-on-tensors path, the next thing I am looking at is propagating reshapes either up to function boundaries, or up to named ops (at which point the reshape stays, and you can't do much with it).

@nicolasvasilache did you mean to approve the patch?

> Ah yes, I see that the reshapes are actually needed for function boundaries.
> Removing the blocker then, sorry about the noise.

> or up to named ops (at which point the reshape stays, and you can't do much with it).

For this case, I'd argue the behavior we want is named -> generic -> reshape opt -> named.
This only requires implementing a generic -> named conversion, which can be done by comparing a generic against all the named ops we know of.
There are details and complications, but there is more hope there than bailing out on named + reshape.
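
As a rough illustration of what such a matcher would compare against (an approximate sketch in the MLIR syntax of that time, not taken from the review), linalg.matmul on tensors corresponds to a linalg.generic along these lines:

// Hypothetical generic form of linalg.matmul: C += A * B over (m, n, k).
#mapA = affine_map<(m, n, k) -> (m, k)>
#mapB = affine_map<(m, n, k) -> (k, n)>
#mapC = affine_map<(m, n, k) -> (m, n)>
%D = linalg.generic {indexing_maps = [#mapA, #mapB, #mapC],
                     iterator_types = ["parallel", "parallel", "reduction"]}
    ins(%A, %B : tensor<?x?xf32>, tensor<?x?xf32>)
    outs(%C : tensor<?x?xf32>) {
  ^bb0(%a: f32, %b: f32, %c: f32):
    %p = mulf %a, %b : f32
    %s = addf %c, %p : f32
    linalg.yield %s : f32
} -> tensor<?x?xf32>

A generic -> named matcher would recognize this body and indexing-map pattern and rebuild the named op after the reshape optimization has run.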

This revision is now accepted and ready to land.Mar 28 2021, 11:55 PM
This revision was landed with ongoing or failed builds.Mar 29 2021, 9:19 AM
This revision was automatically updated to reflect the committed changes.