Phabricator

[mlir][Linalg] Define a linalg.init_tensor operation.
ClosedPublic

Authored by mravishankar on Dec 15 2020, 11:21 PM.

Details

Summary

This operation is used to materialize a tensor of a particular
shape. The shape could be specified as a mix of static and dynamic
values.

This operation is intended to serve as an init tensor for Linalg
structured operations on tensors where the bounds of the computation
depend on the shape of the output of the linalg operation. The result
of this operation will be used as the init tensor of such Linalg
operations. To note,

  1. The values in the materialized tensor are not used. Any operation to which this is an init tensor is expected to overwrite the entire tensor.
  2. The tensor is materialized only for the shape of the output and to make the loop bounds depend only on operands of the structured operation.

Based on (1) and (2), it is assumed that these operations eventually go
away, since they are only used in dim operations that can be
canonicalized to make this operation dead. Such canonicalizations are
added here too.
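As a sketch (the assembly format is illustrative; `%d` and the folded forms are assumptions based on the discussion in this review):

```mlir
// Materialize a 4x5x? tensor; the last size is the dynamic SSA value %d.
%0 = linalg.init_tensor [4, 5, %d] : tensor<4x5x?xf32>

// A dim of the result folds to the corresponding size operand, which can
// eventually leave the init_tensor dead:
//   dim %0, %c0  ->  constant 4 : index
//   dim %0, %c2  ->  %d
```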

Depends On D93180

Diff Detail

Event Timeline

mravishankar created this revision.Dec 15 2020, 11:21 PM
mravishankar requested review of this revision.Dec 15 2020, 11:21 PM
silvas added a subscriber: silvas.Dec 16 2020, 12:34 PM
silvas added inline comments.
mlir/include/mlir/Dialect/Linalg/IR/LinalgOps.td
44

you can take the static sizes from the result type. (you probably just want to imitate how std.alloc, std.dynamic_tensor_from_elements, etc. model this; they have Variadic<Index> with one entry for each ? in the result type)

mravishankar added inline comments.Dec 17 2020, 9:32 AM
mlir/include/mlir/Dialect/Linalg/IR/LinalgOps.td
44

I was going with the approach the subview ops, etc. take. The advantage of making this explicit is that it is easier to retrieve each dim as a static or dynamic value. The approach you describe would also work, but then I need to look at the type instead. If you have strong opinions on that, I can change it.

pifon2a accepted this revision.Dec 17 2020, 9:45 AM
pifon2a added inline comments.
mlir/include/mlir/Dialect/Linalg/IR/LinalgOps.td
98

$sizes?

105

please, also add a builder for the mixed static-dynamic case:

OpBuilderDAG<(ins "ValueRange":$sizes, "ArrayRef<int64_t>":$staticSizes, "Type":$elementType),
  [{
    build($_builder, $_state,
          InitTensorOp::inferResultType(staticSizes, elementType),
          sizes, $_builder.getI64ArrayAttr(staticSizes));
  }]>,
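
A hypothetical call site for such a builder (the variable names and the use of `ShapedType::kDynamicSize` as the dynamic-size sentinel are assumptions, not part of the patch):

```cpp
// Build a tensor<4x?x6xf32> init tensor: pass SSA values for the dynamic
// sizes, and mark the corresponding positions in staticSizes as dynamic.
Value init = b.create<linalg::InitTensorOp>(
    loc, /*sizes=*/ValueRange{dynSize},
    /*staticSizes=*/ArrayRef<int64_t>{4, ShapedType::kDynamicSize, 6},
    /*elementType=*/b.getF32Type());
```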
mlir/include/mlir/Interfaces/ViewLikeInterface.h
38

nit: do you need "(1)" and "(2)" in this comment?

This revision is now accepted and ready to land.Dec 17 2020, 9:45 AM
silvas accepted this revision.Dec 17 2020, 10:50 AM
silvas added inline comments.
mlir/include/mlir/Dialect/Linalg/IR/LinalgOps.td
44

Ok, I haven't thought through the tradeoffs here fully, and wasn't aware of the alternative precedent for subview (I see now that this patch reuses a lot of the parse/print infra from those ops). I'm fine with this for now.

silvas added inline comments.Dec 17 2020, 11:03 AM
mlir/lib/Dialect/StandardOps/IR/Ops.cpp
1477 ↗(On Diff #312122)

Can this be in a separate patch? I don't see the relationship to this patch.

mlir/test/Dialect/Linalg/canonicalize.mlir
357

This test case seems to be testing two separate patterns. Can you do it with two test cases for clarity?

+1 it's fine for now and unlocks a bigger piece of the puzzle.
We can refactor and cleanup as followups: subview crams together 3 variadic static / dynamic + canonicalizations.
There is a nicer future where we can define a generic pair of variadic operand + attribute + canonicalizations for all such things.

That's 2021's problem though.

mravishankar marked 5 inline comments as done.

Address comments

silvas added inline comments.Dec 17 2020, 2:03 PM
mlir/test/Dialect/Linalg/canonicalize.mlir
357

I think you don't have a specific test for linalg.init_tensor [4, 5, %c6] : tensor<4x5x?xf32> -> linalg.init_tensor [4, 5, 6] anymore. That one should be its own test to avoid confusing it with init_tensor_static_dim (which is related, but not testing the same thing)
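A sketch of what such a standalone test might look like (the function name and CHECK lines are illustrative, not taken from the patch):

```mlir
// CHECK-LABEL: func @init_tensor_static_sizes
func @init_tensor_static_sizes() -> tensor<4x5x?xf32> {
  %c6 = constant 6 : index
  %0 = linalg.init_tensor [4, 5, %c6] : tensor<4x5x?xf32>
  return %0 : tensor<4x5x?xf32>
}
// The dynamic size backed by a constant folds into the static form, with a
// tensor_cast restoring the original result type:
// CHECK: %[[T:.+]] = linalg.init_tensor [4, 5, 6] : tensor<4x5x6xf32>
// CHECK: %[[C:.+]] = tensor_cast %[[T]] : tensor<4x5x6xf32> to tensor<4x5x?xf32>
// CHECK: return %[[C]]
```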

361

dim 2 is %c6 so it is not static (maybe internally to canonicalization it becomes so...). Best to use dim %0, %c0 instead so it gets the static "4" value.

Adding mixed static-dynamic builder.

mravishankar added inline comments.Dec 17 2020, 2:11 PM
mlir/include/mlir/Dialect/Linalg/IR/LinalgOps.td
98

shape is consistent with what is obtained from ShapedType.

mlir/include/mlir/Interfaces/ViewLikeInterface.h
38

Just copied and pasted it from somewhere else. I'll leave it as is.

mlir/lib/Dialect/StandardOps/IR/Ops.cpp
1477 ↗(On Diff #312122)

I need it to bridge a gap in the canonicalizer: canonicalizing a linalg.init_tensor operation introduces a tensor_cast. So folding a constant_index -> linalg.init_tensor -> dim chain to a constant requires some folding of the intermediate tensor_cast with dim.
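A sketch of the chain in question (IR is illustrative, based on the description above):

```mlir
%c2 = constant 2 : index
%c6 = constant 6 : index
%0 = linalg.init_tensor [4, 5, %c6] : tensor<4x5x?xf32>
// Canonicalizing init_tensor to its static form introduces a tensor_cast:
//   %1 = linalg.init_tensor [4, 5, 6] : tensor<4x5x6xf32>
//   %2 = tensor_cast %1 : tensor<4x5x6xf32> to tensor<4x5x?xf32>
%d = dim %0, %c2 : tensor<4x5x?xf32>
// Only with a dim-of-tensor_cast folding does %d become "constant 6 : index".
```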

mlir/test/Dialect/Linalg/canonicalize.mlir
357

Good point. I will add that one.

361

That might not even get to the linalg.init_tensor, because it is dim %0, %c0 : tensor<4x5x?xf32> and that is canonicalized by itself.

rriddle added inline comments.Dec 17 2020, 2:15 PM
mlir/lib/Dialect/StandardOps/IR/Ops.cpp
1477 ↗(On Diff #312122)

Still looks like it can be separated from this patch.

Adding another test.

Splitting out tensor_cast -> dim canonicalization.

I see what you mean now.

mravishankar retitled this revision from [mlir][Linalg] Define a linalg.init_tensor operation. to [mlir] Define a linalg.init_tensor operation..Dec 17 2020, 2:42 PM
mravishankar retitled this revision from [mlir] Define a linalg.init_tensor operation. to [mlir][Linalg] Define a linalg.init_tensor operation..
This revision was landed with ongoing or failed builds.Dec 17 2020, 2:49 PM
This revision was automatically updated to reflect the committed changes.
hanchung added inline comments.
mlir/test/Dialect/Linalg/roundtrip.mlir
715–738

These should be put in invalid.mlir.