
[mlir][Linalg] Add canonicalization of linalg op -> dim op.
ClosedPublic

Authored by mravishankar on Dec 10 2020, 4:39 PM.


Summary

Add a canonicalization that replaces uses of the result of a linalg
operation on tensors in a dim operation with the corresponding dim of one
of the linalg op's operands. This allows the linalg op itself to be
deleted once all its non-dim uses are removed (say, through tiling, etc.).
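
To make the intent concrete, here is a hypothetical sketch of such a pattern. The DimOp accessor names (memrefOrTensor, getConstantIndex) follow the 2020-era std dialect as best as can be recalled, and inferDimFromInputs is the interface method proposed further down this thread; none of this is the actual patch.

// Hypothetical sketch of the pattern, not the actual patch.
struct ReplaceDimOfLinalgOpResult : public OpRewritePattern<DimOp> {
  using OpRewritePattern<DimOp>::OpRewritePattern;

  LogicalResult matchAndRewrite(DimOp dimOp,
                                PatternRewriter &rewriter) const override {
    // Only rewrite dims taken on the tensor result of a linalg op.
    Value source = dimOp.memrefOrTensor();
    auto linalgOp = source.getDefiningOp<LinalgOp>();
    Optional<int64_t> dimIndex = dimOp.getConstantIndex();
    if (!linalgOp || !dimIndex || !source.getType().isa<RankedTensorType>())
      return failure();

    // Express this result dimension in terms of the operand dimensions.
    Optional<Value> operandDim = inferDimFromInputs(
        linalgOp, rewriter, dimOp.getLoc(),
        source.cast<OpResult>().getResultNumber(), *dimIndex);
    if (!operandDim)
      return failure();
    rewriter.replaceOp(dimOp, *operandDim);
    return success();
  }
};

Once all remaining uses of the linalg op's result are such dim ops, the rewrites leave the op dead and it can be erased.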


Event Timeline

mravishankar created this revision.Dec 10 2020, 4:39 PM
mravishankar requested review of this revision.Dec 10 2020, 4:39 PM
nicolasvasilache requested changes to this revision.Dec 11 2020, 2:58 AM
nicolasvasilache added a subscriber: pifon2a.
mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp
1734

prevent

1748

A previous revision introduced OpInterface support for getLoopsToShapesMap and getShapesToLoopsMap and dropped the special-case logic.

Let's please use those instead of reintroducing special-case logic; this will be more future-proof.

This should probably take the form of a new interface method Value inferDimFromInputs(unsigned resultIdx, unsigned dim) (a sketch follows the worked example below) which:

  1. Computes the index of result(resultIdx)[dim] in the flattened list of shapes (i.e., sum the ranks of all input operands and of all results before resultIdx, then add dim); let's call this "IDX".
  2. Extracts the map consisting of the single result IDX from getLoopsToShapesMap.
  3. Composes the map from step 2 with getShapesToLoopsMap; this is the expression of the output dim you want in terms of the input dims.
  4. Calls applyMapToValues with the map from step 3 and the flattened list of dims.

E.g., assume some hypothetical op that resembles matmul with a twist, O(i + j, j) += A(i, k) * B(k, j):

  • getLoopsToShapesMap is (i, j, k) -> (i, k, k, j, i + j, j)
  • getShapesToLoopsMap is (d0, d1, d2, d3, d4, d5) -> (d0, d3, d1)

Assume you wanted the expression of dim res, 0 (i.e. i + j, a.k.a. dim A, 0 + dim B, 1).

Step 1. IDX = 4.
Step 2. Extract result IDX from getLoopsToShapesMap, i.e. (i, j, k) -> (i + j).
Step 3. Compute ([(i, j, k) -> (i + j)]).compose([(d0, d1, d2, d3, d4, d5) -> (d0, d3, d1)]), which returns [(d0, d1, d2, d3, d4, d5) -> (d0 + d3)] (d0 is dim A, 0 and d3 is dim B, 1).
Step 4. Call applyMapToValues(b, loc, [(d0, d1, d2, d3, d4, d5) -> (d0 + d3)], op.createFlatListOfOperandDims()).

In step 4 you will probably need to adjust it to only take the input operands;
by default we also capture the outputs because of output buffers.

Does this make sense?
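
Putting the four steps together, a minimal C++ sketch of the proposed method. The free-function form, the rank-accumulation loops, and accessors such as getNumInputs/getInputShapedType are assumptions for illustration; getLoopsToShapesMap, getShapesToLoopsMap, createFlatListOfOperandDims, and applyMapToValues are the helpers named above.

// Minimal sketch, not the actual patch; it returns Optional<Value> so a
// failure guard (discussed later in the thread) can return llvm::None.
Optional<Value> inferDimFromInputs(LinalgOp op, OpBuilder &b, Location loc,
                                   unsigned resultIdx, unsigned dim) {
  // Step 1: flattened position IDX of result(resultIdx)[dim]: the ranks of
  // all inputs, plus the ranks of all results before resultIdx, plus dim.
  unsigned idx = dim;
  for (unsigned i = 0, e = op.getNumInputs(); i < e; ++i)
    idx += op.getInputShapedType(i).getRank();
  for (unsigned i = 0; i < resultIdx; ++i)
    idx += op.getOperation()->getResult(i).getType()
               .cast<ShapedType>().getRank();

  // Step 2: single-result submap for that shape dimension.
  AffineMap resultDimMap = op.getLoopsToShapesMap().getSubMap({idx});

  // Step 3: rewrite its loop dims in terms of the flattened shape dims.
  AffineMap operandDimsToResultDim =
      resultDimMap.compose(op.getShapesToLoopsMap());

  // Step 4: materialize with the dynamic operand dims (as noted above, this
  // may need to be trimmed to the input operands only).
  SmallVector<Value, 4> dims = op.createFlatListOfOperandDims(b, loc);
  return applyMapToValues(b, loc, operandDimsToResultDim, dims).front();
}

For the matmul-with-a-twist example, idx = rank(A) + rank(B) + 0 = 4, and the returned value materializes d0 + d3 over the operand dims, i.e. dim A, 0 + dim B, 1.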

This revision now requires changes to proceed.Dec 11 2020, 2:58 AM
hanchung added inline comments.Dec 11 2020, 7:50 AM
mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp
1782–1783
mravishankar added inline comments.Dec 11 2020, 12:17 PM
mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp
1748

I think it makes sense. But there is a corner case here that might make the pattern rewriter go into a loop, and it doesn't happen today: take an op whose loop shape actually depends on the shape of the output.

%cst = constant 0.0 : f32
%1 = linalg.generic {indexing_maps = [affine_map<(d0, d1) -> (d0, d1)>],
                     iterator_types = ["parallel", "parallel"]}
       ins() {
     ^bb0():
       linalg.yield %cst : f32
     } -> tensor<?x?xf32>

Today, the loop computation fails to get the right loop shape for such ops: I think the method createLoopRanges will throw an error, because createFlatListOfOperandDims does not append the shapes of the results, which makes applyMapToValues(b, loc, getShapesToLoopsMap(), createFlatListOfOperandDims()) fail due to a mismatch between the number of values passed and the number of dims of the map. (The next CL in the stack fixes that issue.)
So here we need to guard against that. I like the approach you described above, but this canonicalization should fail in such cases; otherwise it will put the rewriter in a loop. (It doesn't happen today because of what I described in the previous paragraph.)
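
A sketch of that guard, building on the step-3 map from the sketch above (numInputDims and the surrounding names are illustrative, not from the patch): if the composed expression refers to any shape dim beyond those of the input operands, fail the canonicalization rather than produce a dim of the op's own result.

// Illustrative guard: bail out when the inferred expression depends on
// shape dims that do not come from the inputs, so the rewriter cannot loop.
unsigned numInputDims = 0;
for (unsigned i = 0, e = op.getNumInputs(); i < e; ++i)
  numInputDims += op.getInputShapedType(i).getRank();
AffineExpr expr = operandDimsToResultDim.getResult(0);
for (unsigned pos = numInputDims, e = operandDimsToResultDim.getNumDims();
     pos < e; ++pos)
  if (expr.isFunctionOfDim(pos))
    return llvm::None; // shape depends on the op's own result; give up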

mravishankar marked 3 inline comments as done.

Address comments.

hanchung accepted this revision.Dec 12 2020, 12:23 AM
hanchung added inline comments.
mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp
89

IIRC, Value can be nullptr, so we don't need Optional here?

114–118

Let's initialize resultDimSubMapPos with dim, so we don't need the last statement ...+=dim?

mravishankar marked an inline comment as done.Dec 13 2020, 9:31 AM
mravishankar added inline comments.
mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp
89

That's true, but the Optional in the signature makes it explicit that this can return llvm::None, which is easier to follow. We could document that it returns nullptr on failure, but that might just be easier to miss. This makes it more explicit. WDYT?

114–118

I feel like that would make the logic more complicated; this is easier to follow IMO. Will leave it as is if that's OK.

hanchung accepted this revision.Dec 14 2020, 2:11 AM
hanchung added inline comments.
mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp
89

Yes, you are right on that. It is indeed more explicit. Personally, I don't have a strong feeling on this, because I know Value can be nullptr and it could end up with an assertion at some point. It already has Optional semantics to me.

Another point is that Optional introduces a bit more memory usage (i.e., a sizeof(bool) flag, and padding may be added). If Value itself carries this information and people commonly use it, I'd prefer returning nullptr on failure.

However, I went through some usages and found that some people use Optional<Value> and some use Value directly; there is no convention here. So I'll leave the decision to you.
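
For illustration, the two call-site styles being weighed here (the infer* helpers are placeholders, not from the patch):

// Placeholder helpers contrasting the two signatures; not from the patch.
static Optional<Value> inferWithOptional() { return llvm::None; }
static Value inferWithNullValue() { return Value(); }

static void caller() {
  if (Optional<Value> v = inferWithOptional())
    (void)*v; // failure is explicit in the type and hard to miss
  if (Value v = inferWithNullValue())
    (void)v;  // relies on remembering that a default Value is null
}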

114–118

Sure, both are okay to me. I just feel that the comment already states what it's doing, and that version looks simpler to me. But both look good.

mravishankar marked an inline comment as done.

Adapting this to work with the current setup of linalg operations. The
specifics might change based on
https://llvm.discourse.group/t/linalg-and-shapes/2421.

Going through my unreviewed stack: note that with the revamp of linalg on tensors, @pifon2a sent a diff today that implements this functionality using out tensors.


OK, I'll leave it up to you whether this needs to be pushed in. I rebased it on top of the linalg-on-tensors changes. It still describes the shape in terms of "ins"; that might still be worth it. PTAL and let me know.

OK, my apologies, this looks great. I'll do a proper review in a bit, but consider it landed :)

Thanks!

mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp
127

Please give this a good name and add it to the LinalgStructuredOpInterface.

128

inferred

132

Just iterate over getInputShapes?
If it's not in the LinalgStructuredOpsInterface, please add it too.

This revision is now accepted and ready to land.Jan 14 2021, 11:33 AM
mravishankar marked 2 inline comments as done.

Address comments

mravishankar added inline comments.Jan 14 2021, 3:49 PM
mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp
132

Not needed anymore after the interface method is added.