
[mlir][Linalg] Add producer-consumer fusion when producer is a ConstantOp and Consumer is a GenericOp.
ClosedPublic

Authored by mravishankar on May 12 2020, 9:28 PM.

Diff Detail

Event Timeline

mravishankar created this revision. May 12 2020, 9:28 PM

How much of this special casing is impacted by the lack (today) of the tied input/output tensor?
It seems to me once we have that, the constant op would (roughly) look like:

%1 = linalg.generic {args_in = 1 : i64, args_out = 1 : i64, indexing_maps = [#map0], iterator_types = ["parallel"]} %0 {
  ^bb0(%arg1: f32, %arg2: f32):
     %cst = constant 42.0 : f32
     linalg.yield %cst : f32
}: tensor<5xf32> -> tensor<5xf32>

And we would use a generic form of fusion.
I think it is fine for now to unblock progress but, if you agree, we should mark this as TBD.

nicolasvasilache accepted this revision. May 14 2020, 9:10 PM
This revision is now accepted and ready to land. May 14 2020, 9:10 PM

I don't think this is related to tied input/output tensors. I might be missing something here, but the tied input/output tensor is for initialization sorts of things (a symptom of such a thing is when one of the iterator types is "reduction"). There are cases I have seen in IREE where some operands to a pointwise add are splat constants (I don't know why that is the case in the input; I haven't dug into it).
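
The splat-constant case described above can be sketched in the 2020-era linalg syntax roughly as below; the values, maps, and exact attribute spelling are illustrative, not taken from the patch:

```mlir
#map0 = affine_map<(d0) -> (d0)>
// Before fusion: a splat constant feeds a pointwise add.
%cst = constant dense<4.2> : tensor<5xf32>
%1 = linalg.generic {args_in = 2 : i64, args_out = 1 : i64,
                     indexing_maps = [#map0, #map0, #map0],
                     iterator_types = ["parallel"]} %arg0, %cst {
  ^bb0(%a: f32, %b: f32, %c: f32):
     %sum = addf %a, %b : f32
     linalg.yield %sum : f32
}: tensor<5xf32>, tensor<5xf32> -> tensor<5xf32>

// A plausible after-fusion form: the constant tensor operand is dropped
// and the splat value is rematerialized as a scalar constant inside the
// consumer's region.
%1 = linalg.generic {args_in = 1 : i64, args_out = 1 : i64,
                     indexing_maps = [#map0, #map0],
                     iterator_types = ["parallel"]} %arg0 {
  ^bb0(%a: f32, %b: f32):
     %cst_s = constant 4.2 : f32
     %sum = addf %a, %cst_s : f32
     linalg.yield %sum : f32
}: tensor<5xf32> -> tensor<5xf32>
```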

The point is that one would translate ConstantOp to a Linalg generic op beforehand and not introduce op-dependent special casing in fusion.
Special casing in transformations is what we want to avoid at all costs.

This revision was automatically updated to reflect the committed changes.

I agree. I am moving this pattern from IREE into here. In IREE I was lowering a ConstantOp to a GenericOp and using the fusion between two generic ops, but that resulted in a generic op with 0 input arguments and 1 output argument. In an offline conversation I gathered that this was unexpected and that future verifier semantics might make it illegal. I was also hitting some really nasty segfaults that I wasn't able to root-cause. This gets around that. I am happy to drop it in favor of a constant-op-to-generic-op lowering plus fusion, but on balance I prefer this way because both of these live in core and it avoids additional hops. Totally agree that we should avoid special casing, but at the level of tensors the special casing is very localized. So it's code overhead and more patterns, but it doesn't propagate past the pattern (we still need to be cognizant that we shouldn't have a lot of such pairwise combinations).
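
For reference, the discarded alternative (lowering the ConstantOp to a standalone generic op first) would look roughly like the sketch below. This is schematic, with illustrative values, and as noted above this 0-input form may not even be accepted by future verifier semantics, which is part of the problem:

```mlir
#map0 = affine_map<(d0) -> (d0)>
// A 0-input / 1-output generic op that just yields the constant value,
// to be fused later via the generic-generic fusion pattern.
%0 = linalg.generic {args_in = 0 : i64, args_out = 1 : i64,
                     indexing_maps = [#map0],
                     iterator_types = ["parallel"]} {
  ^bb0(%arg0: f32):
     %cst = constant 42.0 : f32
     linalg.yield %cst : f32
}: -> tensor<5xf32>
```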