This is an archive of the discontinued LLVM Phabricator instance.

[mlir][tensor][linalg] Introduce DataLayoutPropagation pass.
ClosedPublic

Authored by hanchung on Nov 28 2022, 5:11 PM.

Details

Summary

It introduces a pattern that swaps linalg.generic + tensor.pack into
tensor.pack + linalg.generic. It requires all iterator types to be
parallel and the indexing map of the output operand to be the identity.
Both restrictions can be relaxed in the future.
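
For illustration, a minimal sketch of the rewrite on a 2-D elementwise
generic (value names, shapes, and tile sizes below are made up for this
example and are not taken from the actual tests):

  #map = affine_map<(d0, d1) -> (d0, d1)>
  %0 = linalg.generic
      {indexing_maps = [#map, #map],
       iterator_types = ["parallel", "parallel"]}
      ins(%arg0 : tensor<128x256xf32>)
      outs(%init : tensor<128x256xf32>) {
  ^bb0(%in: f32, %out: f32):
    %e = arith.addf %in, %in : f32
    linalg.yield %e : f32
  } -> tensor<128x256xf32>
  %1 = tensor.pack %0 inner_dims_pos = [0, 1] inner_tiles = [8, 32]
      into %dest : tensor<128x256xf32> -> tensor<16x8x8x32xf32>

becomes

  #map4 = affine_map<(d0, d1, d2, d3) -> (d0, d1, d2, d3)>
  %packed = tensor.pack %arg0 inner_dims_pos = [0, 1] inner_tiles = [8, 32]
      into %pack_dest : tensor<128x256xf32> -> tensor<16x8x8x32xf32>
  %1 = linalg.generic
      {indexing_maps = [#map4, #map4],
       iterator_types = ["parallel", "parallel", "parallel", "parallel"]}
      ins(%packed : tensor<16x8x8x32xf32>)
      outs(%packed_init : tensor<16x8x8x32xf32>) {
  ^bb0(%in: f32, %out: f32):
    %e = arith.addf %in, %in : f32
    linalg.yield %e : f32
  } -> tensor<16x8x8x32xf32>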

The user can decide whether the propagation should be applied by passing
a control function.

Diff Detail

Event Timeline

hanchung created this revision.Nov 28 2022, 5:11 PM
hanchung requested review of this revision.Nov 28 2022, 5:11 PM
hanchung updated this revision to Diff 478427.Nov 28 2022, 5:40 PM

remove unused variable

update indexing_map

chelini added inline comments.Nov 29 2022, 5:19 AM
mlir/include/mlir/Dialect/Utils/IndexingUtils.h
83 ↗(On Diff #478427)

I think we should update the comment here. You get [c, a].

mlir/lib/Dialect/Linalg/Transforms/DataLayoutPropagation.cpp
151

Is llvm::None padding? Can we also propagate if we have padding?

171

We can use:

rewriter.inlineRegionBefore(genericOp.getRegion(), newGenericOp.getRegion(), newGenericOp.getRegion().begin());
mlir/include/mlir/Dialect/Utils/IndexingUtils.h
85 ↗(On Diff #478427)

Why do you need the , N part?

mlir/test/Dialect/Linalg/data-layout-propagation.mlir
12

Please use linalg.map for all the generics in this test; they are simple maps.

nicolasvasilache requested changes to this revision.Nov 29 2022, 7:22 AM
nicolasvasilache added inline comments.
mlir/include/mlir/Dialect/Linalg/Passes.td
49 ↗(On Diff #478427)

Should this be a test pass?
We don't want to start adding (and especially maintaining) heuristics at this point, right?

mlir/include/mlir/Dialect/Linalg/Transforms/Transforms.h
84

This is lacking docs.

mlir/lib/Dialect/Linalg/Transforms/DataLayoutPropagation.cpp
34

Please add this to PackOp

static PackOp::createDestinationTensor(...)

This is where one would expect to find this helper.

59

More docs please: input IR, output IR, etc.

68

The functionality should be implemented in a function that takes the relevant pieces of IR (i.e., the pack op) and returns the relevant pieces of IR (the new linalg op and the new unpack op).
The pattern should just call that function.

This way we can connect a transform op and even replace that test pass.

87

This is lacking docs, especially on why you need a non-serializable C++ lambda injected here.

104

Seems like a very complex piece of code to leave undocumented and inlined here.

What can be hoisted, what becomes a util on some op, etc.?

This revision now requires changes to proceed.Nov 29 2022, 7:22 AM
hanchung marked 2 inline comments as done.Nov 29 2022, 11:42 AM
hanchung added inline comments.
mlir/include/mlir/Dialect/Utils/IndexingUtils.h
83 ↗(On Diff #478427)

I misunderstood the semantics of outer_dims_perm. It should be either a permutation or empty, so the method no longer makes sense and we won't need it anymore. I enhanced the verifier in https://reviews.llvm.org/D138936.
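
For reference, a made-up example of a valid outer_dims_perm (it permutes the outer, i.e. tiled, dimensions; the shapes and tile sizes are illustrative only):

  // outer_dims_perm = [1, 0] transposes the two outer tiled dimensions:
  // 128x256 tiled by 8x32 has outer dims 16x8, which become 8x16.
  %pack = tensor.pack %arg0 outer_dims_perm = [1, 0]
      inner_dims_pos = [0, 1] inner_tiles = [8, 32]
      into %dest : tensor<128x256xf32> -> tensor<8x16x8x32xf32>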

hanchung updated this revision to Diff 479111.Nov 30 2022, 5:08 PM
hanchung marked an inline comment as done.

address comments

hanchung marked 5 inline comments as done.Nov 30 2022, 5:10 PM
hanchung added inline comments.
mlir/lib/Dialect/Linalg/Transforms/DataLayoutPropagation.cpp
34

The trade-off is that we can't use tensor::createDimValues if we add it to PackOp; otherwise there would be a cyclic dependency: TensorOps -> TensorUtils -> TensorOps.

68

Do we want it to be able to connect to a transform op? My understanding is that transform ops aim to apply the pattern once, while propagation patterns are applied until the IR converges. The pass is in a position similar to element-wise fusion, IMO.

87

It was intended to provide a callback function so users can decide what to propagate. Since the pass is in an early development phase, we can drop the support and add it back when needed.

151

It could introduce undefined behavior if we unconditionally propagate the pack op through all ops. E.g., if the padding value is zero and there are division ops in a generic op, some values in the padding area could become NaN (0/0).
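
A hypothetical sketch of the hazard (shapes and tile size are illustrative):

  %cst = arith.constant 0.0 : f32
  // 100 is not a multiple of 8, so the packed tensor carries padded zeros.
  %packed = tensor.pack %arg0 padding_value(%cst : f32)
      inner_dims_pos = [0] inner_tiles = [8]
      into %dest : tensor<100xf32> -> tensor<13x8xf32>
  // If a generic op dividing by %arg0 were moved after this pack, the
  // padded lanes would evaluate 0.0 / 0.0 and produce NaN.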

I disabled the propagation in the pattern by default, and I'm thinking of adding an option for enabling aggressive propagation in the future.

mlir/test/Dialect/Linalg/data-layout-propagation.mlir
12

IIUC, linalg.map is a named op. I can't simply update the test file because the pattern works on linalg::GenericOp. Are you suggesting that I update the pattern to take linalg::LinalgOp instead? I think that could work for linalg.map, linalg.transpose, etc.; I can take a stab at it, but I want to clarify before updating.

hanchung updated this revision to Diff 479121.Nov 30 2022, 5:26 PM
hanchung marked 2 inline comments as done.

move the pass to test-only

mravishankar added inline comments.Nov 30 2022, 6:48 PM
mlir/include/mlir/Dialect/Linalg/Passes.td
49 ↗(On Diff #478427)

+1 to this point. Might be better to expose the core utility method or patterns as populate methods, and move the pass itself to a test pass.

mlir/include/mlir/Dialect/Utils/IndexingUtils.h
85 ↗(On Diff #478427)

If you want to update the SmallVector in place, you could use MutableArrayRef<..>.

mlir/lib/Dialect/Linalg/Transforms/DataLayoutPropagation.cpp
68

It is still worth separating the core implementation into a stand-alone function (even if it is static and not exposed right now). Applying it to a fixed point is just one way of using the functionality. Even the elementwise op fusion patterns are just wrappers around the core functions that implement the transformation.

87

This is control based on call-site constraints. Instead of using heuristics in the core method to decide when to apply this, we defer the control to the caller, which makes the go/no-go decision.

mlir/lib/Dialect/Linalg/Transforms/DataLayoutPropagation.cpp
87

Injecting a lambda from above allows an inversion of control that is convenient in the very short term but has almost always proven to quickly turn into technical debt.

Given recent offline discussions I had and other parts of the codebase I have seen, I am going to more seriously push back against this anti-pattern globally.
Injecting C++ callback control from above is a sign of missing abstractions and should almost always be disallowed.
The alternative is often to refactor multiple times until the right abstractions emerge.

In other words, the transformations we add must be functional-style and statically return the pieces of IR (existing, created, or updated) that make sense for that transform.
This is not something customizable; if you need more information, then statically return more information: no backchannels through lambdas.

If you need different behaviors, instead of injecting a dynamic mechanism through a lambda, write another transformation that takes more/different inputs and returns more/different outputs.
Refactor the reusable utility helpers to avoid copy-pasta.

These transformations can then be plugged into patterns and transform dialect ops, which can be responsible for switching between different static behaviors.

hanchung updated this revision to Diff 479418.Dec 1 2022, 1:51 PM

remove unneeded changes

move implementations to functions

hanchung added inline comments.Dec 1 2022, 1:55 PM
mlir/lib/Dialect/Linalg/Transforms/DataLayoutPropagation.cpp
87

Thanks for the details! So we provide methods upstream, and let upstream patterns and downstream projects use them. They can always wrap the methods into custom patterns without a callback function. I moved the implementation into functions, and the pattern is now a wrapper around the method.

mlir/lib/Dialect/Linalg/Transforms/DataLayoutPropagation.cpp
87

We discussed this offline with Mahesh, and this may be one of the cases I could see as an exception to the rule.

Some loose characteristics of potential green/red flags that seem to emerge are:

  1. the information conveyed by the callback cannot be serialized in IR in a reasonable form
  2. the callback use remains superficial and does not permeate the transform implementation but is called once at the high level (e.g. in the match part of a pattern or in a transform dialect op)
  3. the callback does not capture or bypass the return mechanism
  4. maybe others?

I'd love a more principled discussion on the topic so that we come up with good guidelines we can follow.

Removing the blocker, thanks for your patience.

Edit: I don't know how to do this without accepting or resigning but consider my concern addressed :)

hanchung added inline comments.Dec 1 2022, 5:27 PM
mlir/test/Dialect/Linalg/data-layout-propagation.mlir
12

It's quite tricky because the builder and clone methods can't be reused. The trickiest part is updating indexing_maps: the builder and clone methods do not take indexing_maps into account, and there is no clean way to use setter methods for them.

chelini added inline comments.Dec 5 2022, 6:03 AM
mlir/lib/Dialect/Linalg/Transforms/DataLayoutPropagation.cpp
181

We also need to make sure that the result of the current linalg generic operation has no users. Otherwise, the linalg generic remains alive without a body.

231

struct instead of class?

Overall looks good to me. Just left two minor comments.

hanchung updated this revision to Diff 480216.Dec 5 2022, 1:04 PM

inlineRegionBefore -> cloneRegionBefore

class -> struct

mravishankar accepted this revision.Dec 5 2022, 2:24 PM

Thanks! Overall looks good to me. Have a few nits.

mlir/lib/Dialect/Linalg/Transforms/DataLayoutPropagation.cpp
82

Nit: Move the comments before the statement to make it read better.

85

Nice! TIL.

nicolasvasilache added inline comments.
mlir/test/Dialect/Linalg/data-layout-propagation.mlir
12
hanchung updated this revision to Diff 480284.Dec 5 2022, 4:51 PM

move comments above the variable declaration

hanchung added inline comments.Dec 5 2022, 4:52 PM
mlir/lib/Dialect/Linalg/Transforms/DataLayoutPropagation.cpp
181

I see the point. I think we should use cloneRegionBefore instead of inline*.

mlir/test/Dialect/Linalg/data-layout-propagation.mlir
12

I'm going to land the PR tomorrow to unblock some work on @chelini's side. If it's an easy fix, I can give it a shot. If not, I'd prefer fixing it afterwards. Thanks!

All the comments have been addressed except testing with linalg.map. I did some investigation and think it's not feasible to address right now. The pass works on generic ops, similar to elementwise fusion; if it can be done in elementwise fusion, it can also be done here. Since the other pass tests with this form, I don't think it's a big concern. I'm going to land the revision and would prefer to address the issue afterwards once we know a way to do it.

This revision was not accepted when it landed; it landed in state Needs Review.Dec 6 2022, 3:00 PM
This revision was automatically updated to reflect the committed changes.
mlir/lib/Dialect/Linalg/Transforms/DataLayoutPropagation.cpp
64

std::pair

81

nit: typo

84

This seems like it will crash if it is not a projected permutation.
Please add a clear assert at the top level of the function if this can never happen, or consider failing.