This is an archive of the discontinued LLVM Phabricator instance.

[mlir][linalg] Add support for WAW fusion on tensors.
Closed, Public

Authored by nicolasvasilache on Apr 15 2021, 3:47 PM.


Event Timeline

nicolasvasilache requested review of this revision. Apr 15 2021, 3:47 PM
gysit added inline comments. Apr 15 2021, 10:51 PM
mlir/test/Dialect/Linalg/fusion-tensor-pattern.mlir
173

Should probably be %{{.*}}, %{{.*}}?
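
For reference, a hypothetical CHECK line of the shape being suggested (this is not the actual content of line 173; the op, values, and types are made up), matching both values with wildcards:

//      CHECK: scf.yield %{{.*}}, %{{.*}} : tensor<?x?xf32>, tensor<?x?xf32>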

gysit accepted this revision. Apr 16 2021, 1:16 AM
This revision is now accepted and ready to land. Apr 16 2021, 1:16 AM

Address comments.

This revision was landed with ongoing or failed builds. Apr 16 2021, 1:27 AM
This revision was automatically updated to reflect the committed changes.

Couldn't get to this before it landed. Leaving my comments here anyway.

mlir/lib/Dialect/Linalg/Analysis/DependenceAnalysis.cpp
179

Does this really need to be added as a dependence? init_tensor mostly just carries shape information, so there should not be any dependence between the init_tensor and its consumer.
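
For context, a minimal sketch of the pattern in question, with made-up names and syntax approximating the MLIR of the time: the init_tensor only provides the shape of the matmul result, and none of its values are actually read.

%init = linalg.init_tensor [%d0, %d1] : tensor<?x?xf32>
// Only the shape of %init is used; its (undefined) contents are never read.
%res = linalg.matmul ins(%lhs, %rhs : tensor<?x?xf32>, tensor<?x?xf32>)
                     outs(%init : tensor<?x?xf32>) -> tensor<?x?xf32>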

183

Also interesting that this is added as a WAW dependence. In the tensor world this is not really a WAW dependence; it is actually a RAW dependence. Take this example:

%1 = linalg.fill(%0, %cst) : tensor<?x?xf32>, f32 -> tensor<?x?xf32>
%2 = linalg.matmul ins(%lhs1, %rhs1 : tensor<?x?xf32>, tensor<?x?xf32>) outs(%1 : tensor<?x?xf32>) -> tensor<?x?xf32>
%3 = linalg.matmul ins(%lhs2, %rhs2 : tensor<?x?xf32>, tensor<?x?xf32>) outs(%1 : tensor<?x?xf32>) -> tensor<?x?xf32>

In this example, the %1 value is "read" by both matmul operations; it is not written. So it is strange to see this as a WAW dependence. It might just be a terminology clash between the tensor world and the buffer world, because a WAW dependence in the traditional sense has no place in the tensor world (you cannot write to the same tensor twice). It is really a RAW dependence, but with the additional information that, once bufferized, it becomes a WAW dependence? The concern is that this might be conflating issues and, in general, is hard to follow.

Do we even need to make this a WAW dependence? It is just a dependence. Why not just call it a RAW dependence?
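
For illustration, a hedged sketch of the buffer-world analogue alluded to above (all names are made up; this is not IR produced by the pass): if the two matmuls were naively bufferized onto the same filled buffer, both would write %buf in place, which is where the classical WAW terminology comes from. A correct bufferization of the tensor IR would instead have to duplicate the filled buffer, precisely because %1 is consumed twice.

linalg.fill(%buf, %cst) : memref<?x?xf32>, f32
// Both matmuls update the same buffer in place: a classical WAW hazard.
linalg.matmul ins(%lhs1, %rhs1 : memref<?x?xf32>, memref<?x?xf32>)
              outs(%buf : memref<?x?xf32>)
linalg.matmul ins(%lhs2, %rhs2 : memref<?x?xf32>, memref<?x?xf32>)
              outs(%buf : memref<?x?xf32>)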

nicolasvasilache marked 2 inline comments as done. Apr 16 2021, 10:08 AM
nicolasvasilache added inline comments.
mlir/lib/Dialect/Linalg/Analysis/DependenceAnalysis.cpp
179

isInitTensor means the value is actually used, so yes.
Without a dependence the fill does not fuse into the matmul; I didn't add these just to make the implementation more confusing :).

183

This is a good point; I'll mull it over the weekend.
The thinking is that this dependence into an output tensor has special semantics; see how the scf.for iter_args is updated below.
I'll add an example where the operand is not an init_tensor and see what happens.
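
For illustration, a hedged sketch of the kind of fused IR this refers to (names, tile sizes, and exact op spellings are illustrative, not taken from the patch): after tiling, the output tensor is threaded through the scf.for iter_args, and fusing the fill producer means only the current output tile is filled inside the loop.

%res = scf.for %i = %c0 to %d0 step %c4 iter_args(%acc = %out) -> (tensor<?x?xf32>) {
  // Slice of the loop-carried output for this iteration.
  %tile = subtensor %acc[%i, 0] [4, %d1] [1, 1] : tensor<?x?xf32> to tensor<4x?xf32>
  // Fused producer: fill only the tile rather than the whole output tensor.
  %filled = linalg.fill(%tile, %cst) : tensor<4x?xf32>, f32 -> tensor<4x?xf32>
  // Tiled consumer accumulates into the freshly filled tile
  // (the %lhs_tile / %rhs_tile slices are extracted analogously, elided here).
  %mm = linalg.matmul ins(%lhs_tile, %rhs_tile : tensor<4x?xf32>, tensor<?x?xf32>)
                      outs(%filled : tensor<4x?xf32>) -> tensor<4x?xf32>
  // The result tile is written back into the loop-carried tensor and yielded.
  %upd = subtensor_insert %mm into %acc[%i, 0] [4, %d1] [1, 1] : tensor<4x?xf32> into tensor<?x?xf32>
  scf.yield %upd : tensor<?x?xf32>
}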

nicolasvasilache marked an inline comment as done. Apr 16 2021, 10:18 AM