
[mlir][Linalg] Add a hoistPaddingOnTensors transformation
ClosedPublic

Authored by nicolasvasilache on Jan 22 2021, 8:52 AM.

Details

Summary

This transformation anchors on a padding op whose result is only used as an input
to a Linalg op and hoists it out of a given number of enclosing loops.
The result is a packing of the padded tiles that is amortized just before
the outermost loop from which the pad operation is hoisted.

Depends on D95149

Diff Detail

Event Timeline

nicolasvasilache requested review of this revision. Jan 22 2021, 8:52 AM
ftynse added inline comments. Jan 25 2021, 2:08 AM
mlir/include/mlir/Dialect/StandardOps/IR/Ops.td
3063

There's no type passed in here

3067–3068

Nit: this sounds a bit misleading because the builder actually expects the type

3169

SubTensorInsertOp ?

mlir/lib/Dialect/Linalg/Transforms/Hoisting.cpp
338

Please finish the comment.

374–376

This looks insufficiently conservative. Terminators are not required to implement BranchOpInterface, and ops with regions can also have control flow. I would rather check that the ops are known non-terminators and that they don't have regions unless they are loop-like.

411

Consider mentioning that it hoists out of at most nLoops, i.e., it still works if fewer loops are present.

461

Is it guaranteed that the outermost loop is at the front of the slice? The slice computation algorithm looks like a pre-order DFS on the linear list of operands, so if the IV of the outermost loop is not the first operand, this looks incorrect.

470–471

Nit: llvm::append_range(packedShape, paddedTensorType.getShape()) may be a bit more concise.

474

Nit: I'd use auto here because I'm not sure the number of stack elements in SmallVector<Value> gets inferred to be 4. Alternatively, llvm::to_vector<> without the number of stack elements may just work.

496

I suppose you want to bail out on any side-effecting op (except for specific cases you know how to handle) because cloning those is illegal in general.

497

OpBuilder::clone already does this internally.

500–501

Nit: I'd add a more visible NYI assertion message.

543

Is this scope necessary?

559

Do you want to erase the old pad?

569

Leftover debug

mlir/lib/Dialect/StandardOps/IR/Ops.cpp
3508

Please document top-level functions.

3508–3509

Nit: I think SmallVectorImpl is still relevant for references, SmallVector<T> is just a shorthand for automatically determining the number of stack elements in SmallVector<T, N> based on sizeof(T).

mlir/test/Dialect/Linalg/hoist-padding.mlir
3

I thought one needs a trailing backslash for this to work

nicolasvasilache marked 18 inline comments as done.

Address review.

class -> struct

mlir/include/mlir/Dialect/StandardOps/IR/Ops.td
3067–3068

had swapped the comments, thanks

mlir/lib/Dialect/Linalg/Transforms/Hoisting.cpp
461

Yes, added a comment and an assertion in the function that ensures this.

496

This is what hoistPaddingOnTensorsPrerequisites does as a pre-pass.

mlir/test/Dialect/Linalg/hoist-padding.mlir
3

Not if one wishes to ignore the test results and shoot oneself in the foot with an incorrect test .. :p
Thanks!

pifon2a accepted this revision. Jan 25 2021, 4:12 AM
This revision is now accepted and ready to land. Jan 25 2021, 4:12 AM
ftynse accepted this revision. Jan 25 2021, 4:37 AM
ftynse added inline comments.
mlir/lib/Dialect/Linalg/Transforms/Hoisting.cpp
496

I don't see anything related to side effects there.

This revision was landed with ongoing or failed builds. Jan 25 2021, 4:42 AM
This revision was automatically updated to reflect the committed changes.
nicolasvasilache marked an inline comment as done. Jan 25 2021, 5:50 AM
nicolasvasilache added inline comments.
mlir/lib/Dialect/Linalg/Transforms/Hoisting.cpp
496

My apologies, I misread your comment. Sent a followup PR: 68eee55ce6a41bb294d63886679b599883e96c3a.