This is an archive of the discontinued LLVM Phabricator instance.

[mlir][Linalg] NFC: Refactor fusion on tensors to enable extending it to fusing different kinds of linalg operations on tensors.
Closed, Public

Authored by mravishankar on Apr 19 2020, 3:37 PM.

Details

Summary

The implementation of fusion on tensors was initially planned for just
GenericOps (and maybe IndexedGenericOps). With the addition of
linalg.tensor_reshape, and potentially other such non-structured ops,
refactor the existing implementation to allow easier specification of
fusion between different linalg operations on tensors.

Diff Detail

Event Timeline

mravishankar created this revision. Apr 19 2020, 3:37 PM

Mostly looks good, but the extra level of indirection through the template class is harder to follow than necessary, I think.

Can we make it just an untyped entry point and drop the extra class?

mlir/include/mlir/Dialect/Linalg/Utils/Utils.h
77

This looks like an easy way to get link errors.
Can you avoid templating here, just take Operation* as a consumer and do a switch underneath?
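
To illustrate, here is a minimal sketch of the kind of untyped entry point being suggested; the function and helper names are hypothetical and are not what the patch actually defines:

Operation *fuseTensorOps(OpBuilder &builder, Operation *consumer,
                         unsigned consumerIdx) {
  // Dispatch on the concrete consumer kind inside the .cpp file, so no
  // template definitions need to be visible to (or instantiated by) callers.
  if (auto genericOp = dyn_cast<linalg::GenericOp>(consumer))
    return fuseWithGenericOpConsumer(builder, genericOp, consumerIdx);
  if (auto indexedOp = dyn_cast<linalg::IndexedGenericOp>(consumer))
    return fuseWithIndexedGenericOpConsumer(builder, indexedOp, consumerIdx);
  // Unsupported consumer kind: fusion does not apply.
  return nullptr;
}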

mlir/lib/Dialect/Linalg/Transforms/Fusion.cpp
415

This level of indirection does not seem to pay for itself.
Can we move the body of this function just above:

if (auto genericOp = dyn_cast<GenericOp>(producer)) {

on l. 590

mravishankar marked an inline comment as done. Apr 21 2020, 1:32 PM
mravishankar added inline comments.
mlir/lib/Dialect/Linalg/Transforms/Fusion.cpp
415

I want to keep the implementations of the different fusions separate. This only has fusion of GenericOp with GenericOp. The follow-up CL adds the methods to fuse GenericOp with TensorReshapeOp, and TensorReshapeOp with GenericOp (see D78464).
I wasn't able to see a way in which you don't have to have these pairwise implementations; they have different conditions to check, and the exact steps to generate the fused op diverge. Part of the problem is that TensorReshapeOp is not a structured op (not saying it should be; that's what made me go in this direction). A sketch of this pairwise structure is given below.
One could go further: you could fuse a GenericOp with a ConstantOp that is a splat constant. So, given all that, this was what I thought would be most extensible.
The base class is just a place where common functionality can be kept and a common invocation method is defined. In this case, though, only the operand marshalling (and checking that the ops are fusible) is common.
That being said, happy to course correct. I'll try to simplify the structure anyway.
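
For reference, a sketch of the pairwise structure described above; the helper names are hypothetical, and the reshape cases are the ones added in the follow-up D78464:

Operation *fuseTensorOpsImpl(OpBuilder &builder, Operation *producer,
                             Operation *consumer, unsigned consumerIdx) {
  // Each (producer, consumer) pair has its own legality checks and its own
  // way of constructing the fused op, so the pairs stay as separate helpers.
  if (isa<linalg::GenericOp>(producer) && isa<linalg::GenericOp>(consumer))
    return fuseGenericOpWithGenericOp(builder, producer, consumer,
                                      consumerIdx);
  if (isa<linalg::GenericOp>(producer) &&
      isa<linalg::TensorReshapeOp>(consumer))
    return fuseGenericOpWithReshapeOp(builder, producer, consumer,
                                      consumerIdx);
  if (isa<linalg::TensorReshapeOp>(producer) &&
      isa<linalg::GenericOp>(consumer))
    return fuseReshapeOpWithGenericOp(builder, producer, consumer,
                                      consumerIdx);
  return nullptr;
}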

Addressing comments

Updating D78463: [mlir][Linalg] NFC: Refactor fusion on tensors to enable extending it to fusing different kinds of linalg operations on tensors.

Removing indirection in class hierarchy.

Updating D78463: [mlir][Linalg] NFC: Refactor fusion on tensors to enable extending it to fusing different kinds of linalg operations on tensors.

mravishankar marked an inline comment as done. Apr 23 2020, 10:47 AM
mravishankar added inline comments.
mlir/lib/Dialect/Linalg/Transforms/Fusion.cpp
415

Removed the level of indirection here. PTAL

Thanks Mahesh!

This revision is now accepted and ready to land. Apr 23 2020, 10:58 AM
This revision was automatically updated to reflect the committed changes.

What is the plan for fusion in general? Is this intended to stay specific to Linalg, or is it supposed to be extensible and composable? It does not seem to be designed with interfaces that abstract away the dialect operations right now (the location under lib/Dialect/Linalg/Transforms/ may also contribute to that impression).
There is an XLA fusion pass being tested in the TensorFlow repo that is a good candidate for upstreaming at some point. However, we need to figure out a path for all of these to compose.