- Split memref.dim into two operations: memref.dim and tensor.dim. Both ops have the same builder interface and argument names, so they can be used with templates in patterns that apply to both tensors and memrefs (e.g., some patterns in Linalg); see the sketch after this list.
- Add a constant materializer to TensorDialect (needed for folding in affine.apply etc.).
- Remove some MemRefDialect dependencies and make others explicit.
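A minimal sketch of such a templated pattern, with a hypothetical pattern name and body (the point is only that the two dim ops share a builder interface, so one definition covers both dialects):

```cpp
#include "mlir/Dialect/MemRef/IR/MemRef.h"
#include "mlir/Dialect/Tensor/IR/Tensor.h"
#include "mlir/IR/PatternMatch.h"

using namespace mlir;

// Instantiable for both memref::DimOp and tensor::DimOp because the two
// ops share the same builder interface and argument names.
template <typename DimOpTy>
struct SimplifyDimOp : public OpRewritePattern<DimOpTy> {
  using OpRewritePattern<DimOpTy>::OpRewritePattern;

  LogicalResult matchAndRewrite(DimOpTy dimOp,
                                PatternRewriter &rewriter) const override {
    // Hypothetical body: a real pattern would inspect dimOp's source and
    // index operands here and rewrite the op.
    return failure();
  }
};

// One pattern definition registered for both dialects:
//   patterns.add<SimplifyDimOp<memref::DimOp>,
//                SimplifyDimOp<tensor::DimOp>>(ctx);
```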
Event Timeline
mlir/include/mlir/Dialect/Tensor/IR/TensorOps.td:88–89
mlir/lib/Dialect/Affine/IR/AffineOps.cpp:70
I'm not sure Affine is supposed to work on tensors. Does something break if you don't add this check?
mlir/lib/Dialect/MemRef/IR/MemRefDialect.cpp:10
It is rather unfortunate to have MemRef depend on Tensor only to provide one helper function. Isn't there a better place? It seems to be used predominantly in Linalg; can it live there? The remaining users can dispatch themselves if necessary (SparseTensor, for example, is unlikely to work on memrefs).
mlir/lib/Dialect/MemRef/IR/MemRefDialect.cpp:10
I can put the helper function somewhere else, but MemRef would still depend on Tensor. (This dependency is not new; I only added an include here.) There are MemRef canonicalization patterns unrelated to this change that create/use Tensor dialect ops: LoadOfBufferCast, BufferCast.
To give you the full picture: createOrFoldDimOp is also used in places outside of Linalg. Should I duplicate the function in those places or leave it here?
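For context, a minimal sketch of what a type-dispatching helper along the lines of createOrFoldDimOp can look like (the exact upstream signature and location may differ):

```cpp
#include "mlir/Dialect/MemRef/IR/MemRef.h"
#include "mlir/Dialect/Tensor/IR/Tensor.h"
#include "mlir/IR/Builders.h"
#include "llvm/Support/ErrorHandling.h"

using namespace mlir;

// Emits memref.dim for memref-typed values and tensor.dim for
// tensor-typed values, folding to a constant where possible.
static Value createOrFoldDimOp(OpBuilder &b, Location loc, Value source,
                               int64_t dim) {
  if (source.getType().isa<UnrankedMemRefType, MemRefType>())
    return b.createOrFold<memref::DimOp>(loc, source, dim);
  if (source.getType().isa<UnrankedTensorType, RankedTensorType>())
    return b.createOrFold<tensor::DimOp>(loc, source, dim);
  llvm_unreachable("expected a memref or tensor value");
}
```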
mlir/lib/Dialect/Affine/IR/AffineOps.cpp:70
This is actually needed; otherwise test/Dialect/Linalg/fusion-sequence.mlir and others fail. (The generated IR, in particular the affine maps, is slightly different.)
mlir/lib/Dialect/StandardOps/Transforms/Bufferize.cpp:26
This pattern can be moved to mlir/lib/Dialect/Tensor/Transforms/Bufferize.cpp (this can be done in a separate patch).
mlir/lib/Dialect/Tensor/Transforms/Bufferize.cpp:78
We shouldn't need this pattern. We should already be handling this in mlir/lib/Dialect/StandardOps/Transforms/TensorConstantBufferize.cpp. Can you try adding that to your pipeline to see if it fixes things?
(Before submitting, please wait for an LGTM from at least one other person, since this change covers a lot of ground.)
mlir/lib/Dialect/MemRef/IR/MemRefDialect.cpp:10
Cross-dialect canonicalizations are annoying but okay; this commit is not adding the dependency, just making it deeper.
Yes, and this commit adds the use of createOrFoldDimOp there.
It must adapt to whatever we decide to do upstream, so the only other actual user is Vector. Well, I won't block this refactoring on the code placement decision, but let's add a TODO to revisit this. I remember @silvas floated the idea of a separate "tensor-to-memref" dialect that could contain the interface stuff.
mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp:380
Thanks for the rewrite. Does the use of createOrFold here mean that I can remove the special casing I added to deal with constant vs. unknown dim ops? (I can do that in a follow-up, but I'm just curious.)
mlir/lib/Dialect/MemRef/IR/MemRefDialect.cpp:10
Added a TODO; I will prepare a commit that moves the function once we have decided where it should go.
mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp:380
This function creates a memref::DimOp or tensor::DimOp, depending on the type of t->get(). It basically just forwards the arguments to the constructor, nothing more. I'm not familiar with the code here, but if t->get() is always a tensor, then this should be replaced with rewriter.create<tensor::DimOp>(...).
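To illustrate the folding question above: createOrFold folds the dim op to a constant index when the queried dimension is static, which is also why this change adds a constant materializer to TensorDialect. A minimal sketch, assuming a hypothetical tensorValue of type tensor<4x?xf32> with rewriter and loc in scope:

```cpp
// Static dimension: folds to a constant index (4), materialized through the
// dialect's constant materializer; no tensor.dim op is created.
Value d0 = rewriter.createOrFold<tensor::DimOp>(loc, tensorValue, 0);

// Dynamic dimension: a tensor.dim op is created as usual.
Value d1 = rewriter.createOrFold<tensor::DimOp>(loc, tensorValue, 1);
```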