This is an archive of the discontinued LLVM Phabricator instance.

[mlir][tensor] Add tensor.dim operation
ClosedPublic

Authored by springerm on Jun 29 2021, 6:02 PM.

Details

Summary
  • Split memref.dim into two operations: memref.dim and tensor.dim. Both ops have the same builder interface and op argument names, so that they can be used with templates in patterns that apply to both tensors and memrefs (e.g., some patterns in Linalg).
  • Add constant materializer to TensorDialect (needed for folding in affine.apply etc.).
  • Remove some MemRefDialect dependencies, make some explicit.

Diff Detail

Event Timeline

springerm created this revision.Jun 29 2021, 6:02 PM
springerm requested review of this revision.Jun 29 2021, 6:02 PM
Herald added a project: Restricted Project.Jun 29 2021, 6:02 PM
springerm edited the summary of this revision. (Show Details)Jun 29 2021, 6:03 PM
springerm updated this revision to Diff 355418.Jun 29 2021, 6:13 PM
springerm edited the summary of this revision. (Show Details)

small improvement

springerm edited the summary of this revision. (Show Details)Jun 29 2021, 6:18 PM
springerm edited the summary of this revision. (Show Details)Jun 29 2021, 6:21 PM
ftynse added inline comments.Jun 30 2021, 12:44 AM
mlir/include/mlir/Dialect/Tensor/IR/TensorOps.td
88–89
mlir/lib/Dialect/Affine/IR/AffineOps.cpp
68

I'm not sure Affine is supposed to work on tensors. Does something break if you don't add this check?

mlir/lib/Dialect/MemRef/IR/MemRefDialect.cpp
10

This is rather unfortunate to have MemRef depend on Tensor only to provide one helper function. Isn't there a better place? It seems to be predominantly used in Linalg, can it live there (and the remaining users can dispatch themselves if necessary; like SparseTensor is unlikely to work on memrefs).

springerm added inline comments.Jun 30 2021, 1:09 AM
mlir/lib/Dialect/MemRef/IR/MemRefDialect.cpp
10

I can put the helper function somewhere else, but MemRef would still depend on Tensor. (This dependency is not new, I only added an include here.)

There are MemRef canonicalization patterns unrelated to this change that create/use Tensor dialect ops: LoadOfBufferCast, BufferCast

To give you the full picture: createOrFoldDimOp is also used in places outside of Linalg:

  • getDynOperands (memref dialect). However, this function could also go somewhere else.
  • VectorTransforms.cpp because vector.transfer_read/write work on tensors and memrefs.
  • VectorToSCF.cpp (same reason)
  • An internal G3 project

Should I duplicate the function in those places or leave it here?

springerm added inline comments.Jun 30 2021, 3:14 AM
mlir/lib/Dialect/Affine/IR/AffineOps.cpp
68

This is actually needed. Otherwise test/Dialect/Linalg/fusion-sequence.mlir and others fail. (The generated IR, in particular the affine maps, is slightly different.)

silvas accepted this revision.Jun 30 2021, 9:26 AM
silvas added inline comments.
mlir/lib/Dialect/StandardOps/Transforms/Bufferize.cpp
26

This pattern can be moved to mlir/lib/Dialect/Tensor/Transforms/Bufferize.cpp (can be done in separate patch).

mlir/lib/Dialect/Tensor/Transforms/Bufferize.cpp
78

We shouldn't need this pattern.

We should already be handling this in mlir/lib/Dialect/StandardOps/Transforms/TensorConstantBufferize.cpp. Can you try adding that to your pipeline to see if it fixes things?

This revision is now accepted and ready to land.Jun 30 2021, 9:26 AM

(before submitting, please wait for LGTM from at least one other person since this change covers a lot of stuff)

ftynse accepted this revision.Jun 30 2021, 9:46 AM
ftynse added inline comments.
mlir/lib/Dialect/MemRef/IR/MemRefDialect.cpp
10

Cross-dialect canonicalizations are annoying but okay, this commit is not adding the dependency, just making it deeper.

getDynOperands (memref dialect). However, this function could also go somewhere else.

Yes, and this commit adds the use of createOrFoldDimOp there.

An internal G3 project

It must adapt to whatever we decide to do upstream.

So the only other actual user is vector. Well, I won't block this refactoring on the code placement decision, but let's add a TODO to revisit this. I remember @silvas floated the idea of a separate "tensor-to-memref" dialect that could contain the interface stuff.

aartbik accepted this revision.Jun 30 2021, 11:19 AM
aartbik added inline comments.
mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp
380

Thanks for the rewrite. Does the use of createOrFold here mean that I can remove the special casing I added to deal with constant vs. unknown dim ops?

(I can do that in a follow up, but just curious)

springerm marked an inline comment as done.Jun 30 2021, 5:58 PM
springerm added inline comments.
mlir/lib/Dialect/MemRef/IR/MemRefDialect.cpp
10

Added TODO; will prepare a commit that moves the function once we decide where it should go.

mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp
380

This function creates a memref::DimOp or tensor::DimOp, depending on the type of t->get(). It basically just forwards the arguments to the constructor, nothing more.

I'm not familiar with the code here, but if t->get() is always a tensor, then this should be replaced with rewriter.create<tensor::DimOp>(...).

springerm updated this revision to Diff 355736.Jun 30 2021, 5:59 PM

address comments

springerm edited the summary of this revision. (Show Details)Jun 30 2021, 6:01 PM
This revision was landed with ongoing or failed builds.Jun 30 2021, 6:09 PM
This revision was automatically updated to reflect the committed changes.