
[MLIR] Create memref dialect and move dialect-specific ops from std.
ClosedPublic

Authored by dfki-jugr on Mar 5 2021, 6:17 AM.

Details

Summary

Create the memref dialect and move dialect-specific ops
from std dialect to this dialect.

Moved ops:
AllocOp -> MemRef_AllocOp
AllocaOp -> MemRef_AllocaOp
AssumeAlignmentOp -> MemRef_AssumeAlignmentOp
DeallocOp -> MemRef_DeallocOp
DimOp -> MemRef_DimOp
MemRefCastOp -> MemRef_CastOp
MemRefReinterpretCastOp -> MemRef_ReinterpretCastOp
GetGlobalMemRefOp -> MemRef_GetGlobalOp
GlobalMemRefOp -> MemRef_GlobalOp
LoadOp -> MemRef_LoadOp
PrefetchOp -> MemRef_PrefetchOp
ReshapeOp -> MemRef_ReshapeOp
StoreOp -> MemRef_StoreOp
SubViewOp -> MemRef_SubViewOp
TransposeOp -> MemRef_TransposeOp
TensorLoadOp -> MemRef_TensorLoadOp
TensorStoreOp -> MemRef_TensorStoreOp
TensorToMemRefOp -> MemRef_BufferCastOp
ViewOp -> MemRef_ViewOp

The roadmap to split the memref dialect from std is discussed here:
https://llvm.discourse.group/t/rfc-split-the-memref-dialect-from-std/2667
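In IR form the move is a pure rename of the op spelling; a minimal before/after sketch (illustrative types and SSA names, not taken from the patch itself):

```mlir
// Before: std ops, which printed without a dialect prefix due to prefix elision.
%0 = alloc() : memref<4xf32>
%1 = load %0[%c0] : memref<4xf32>
dealloc %0 : memref<4xf32>

// After: the same ops spelled in the memref dialect.
%0 = memref.alloc() : memref<4xf32>
%1 = memref.load %0[%c0] : memref<4xf32>
memref.dealloc %0 : memref<4xf32>
```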

Event Timeline

dfki-jugr created this revision.Mar 5 2021, 6:17 AM
dfki-jugr requested review of this revision.Mar 5 2021, 6:17 AM
rriddle requested changes to this revision.Mar 8 2021, 12:28 PM

Do you have a plan for splitting out the memref parts of the StandardTo*/*ToStandard passes?

Thanks for tackling this!

mlir/docs/Dialects/MemRef.md
2–3

This doesn't seem true for MemRef dialect?

mlir/examples/toy/Ch7/mlir/LowerToAffineLoops.cpp
286–287

This comment needs to be updated; same for the other copies as well. I think this also applies to the chapter documentation in docs/, can you check?

mlir/include/mlir/Conversion/Passes.td
319–324

Same with the others?

mlir/include/mlir/Dialect/Linalg/Passes.td
83–87

Is this over 80 characters? Can you reflow if so?

101–106

Please format this.

146–151

Over 80 characters? Please reflow.

mlir/include/mlir/Dialect/MemRef/EDSC/Intrinsics.h
9

Please fix

31

This looks off.

mlir/include/mlir/Dialect/MemRef/IR/MemRef.h
13

Do you need all of these?

43

Shouldn't these be in the memref namespace?

80

Unrelated to the current patch, but we should really move these to ODS.

mlir/include/mlir/Dialect/MemRef/IR/MemRefBase.td
19

Nit: Move the description above the hasConstantMaterializer. The description should generally be close to the top of the def.

mlir/include/mlir/Dialect/MemRef/IR/MemRefOps.td
114

This looks off.

133

Is this still necessary? It was only necessary for standard dialect because of the dialect prefix elision.

mlir/include/mlir/Dialect/StandardOps/Utils/Utils.h
34

Can this be forward declared?

39

Drop the llvm::

mlir/include/mlir/Transforms/Utils.h
95

Can you forward declare this op instead of including the above file?

mlir/lib/Dialect/Linalg/IR/LinalgInterfaces.cpp
191

Drop trivial braces.

mlir/lib/Dialect/Linalg/Transforms/Hoisting.cpp
17 (On Diff #328501)

Why is this one needed?

mlir/lib/Dialect/SCF/Transforms/Bufferize.cpp
24

Can you add this dependency in the ODS definition instead?

mlir/lib/Dialect/Shape/Transforms/Bufferize.cpp
20

Add this in ODS instead?

mlir/lib/Transforms/Canonicalizer.cpp
35

The canonicalizer should not have a dependency on any dialect, please remove this.

This revision now requires changes to proceed.Mar 8 2021, 12:28 PM
herhut added a comment.Mar 9 2021, 2:54 AM

Wow, what a patch :)

Some nits but otherwise looks good.

mlir/docs/Rationale/UsageOfConst.md
204

I think this change should be undone, or you need to replace standard with memref as well.

mlir/docs/Traits.md
214

The link will be broken.

mlir/include/mlir/Dialect/MemRef/IR/MemRefOps.td
116

It seems this has only one user. Maybe just drop it and inline it at the single use?

418

So now all dim operations live in the memref dialect, independently of whether they are on memref or tensor? I assume this is temporary until we find a better solution?

699

hyper-nit: spaces around = while you are here.

mlir/lib/Conversion/GPUToNVVM/LowerGpuOpsToNVVMOps.cpp
21

Why is this needed?

mlir/lib/Dialect/Affine/Transforms/AffineDataCopyGeneration.cpp
26

Why is this needed?

mlir/lib/Dialect/GPU/Transforms/MemoryPromotion.cpp
16

Extra include without any other changes?

mlir/lib/Dialect/StandardOps/Transforms/FuncBufferize.cpp
25–26

Please move to include/mlir/Dialect/StandardOps/Transforms/Passes.td

mlir/lib/Dialect/StandardOps/Transforms/TensorConstantBufferize.cpp
104

Please move to include/mlir/Dialect/StandardOps/Transforms/Passes.td

mlir/lib/Transforms/Bufferize.cpp
73

nit: fix name.

dfki-jugr updated this revision to Diff 329916.Mar 11 2021, 4:07 AM
dfki-jugr marked 28 inline comments as done and an inline comment as not done.

Addressed comments.

herhut accepted this revision.Mar 11 2021, 11:50 PM

Please address the clang-tidy warnings. Otherwise this looks good to go.

This revision was not accepted when it landed; it landed in state Needs Review.Mar 15 2021, 3:17 AM
This revision was landed with ongoing or failed builds.
This revision was automatically updated to reflect the committed changes.
rriddle added inline comments.Mar 15 2021, 10:05 AM
mlir/lib/Transforms/Canonicalizer.cpp
15

Remove this.

bondhugula added inline comments.
mlir/include/mlir/Transforms/Utils.h
95

@dfki-jugr Why does this revision change the signature of this function here to pass AllocOp by pointer? As far as I can tell from the summary and a quick look, this is only about the movement of ops, not an API change.

ftynse added inline comments.Mar 18 2021, 1:04 AM
mlir/include/mlir/Transforms/Utils.h
95

Because it can then use a forward declaration instead of having MLIRTransforms depend on MLIRMemRef?

mehdi_amini added inline comments.Mar 18 2021, 12:50 PM
mlir/lib/Transforms/Utils/Utils.cpp
20

@ftynse: Won't libTransforms depend on memref anyway here?

It looks like this patch simply moved std.dim to memref.dim without doing the "splitting" we had discussed. I now have code that operates only on tensors but has to pull in the memref dialect just for this. Can you split memref.dim into tensor.dim for the tensor case?

> It looks like this patch simply moved std.dim to memref.dim without doing the "splitting" we had discussed. I now have code that operates only on tensors but has to pull in the memref dialect just for this. Can you split memref.dim into tensor.dim for the tensor case?

This is true. Since how to handle such a splitting of ops was still up for discussion (https://llvm.discourse.group/t/cross-dialect-folding-and-canonicalization/2740/11), the dim op was moved completely to the memref dialect. We look forward to an agreement on how to split the op.

I think just a patch splitting the op into tensor.dim and memref.dim would be fine. There isn't much to discuss.

> I think just a patch splitting the op into tensor.dim and memref.dim would be fine. There isn't much to discuss.

So there was a discussion that this could be done via interfaces, for example: we could have generic dim and rank operations that apply to any ShapedType. That would also make it easier to rewrite between these types, as the dim and rank operations would no longer need to change.

Just to be clear, I am not saying we should not do this split; I am just trying to avoid the extra work if we later decide there is a different and better way to model this. This is more work than "just a patch", especially for downstream users, so we should avoid unnecessary churn. As far as I understand it, this is about an extra link-time dependency that is no worse than the monolithic dialect before.

> I think just a patch splitting the op into tensor.dim and memref.dim would be fine. There isn't much to discuss.
>
> So there was a discussion that this could be done via interfaces, for example: we could have generic dim and rank operations that apply to any ShapedType. That would also make it easier to rewrite between these types, as the dim and rank operations would no longer need to change.
>
> Just to be clear, I am not saying we should not do this split; I am just trying to avoid the extra work if we later decide there is a different and better way to model this. This is more work than "just a patch", especially for downstream users, so we should avoid unnecessary churn. As far as I understand it, this is about an extra link-time dependency that is no worse than the monolithic dialect before.

ShapedType only covers tensor and memref. I'm not currently aware of any effort to generalize that, so splitting it now seems fairly safe.

I'm also well aware of the cost to downstream users, as I just updated my own codebases to memref.dim. The reason I wanted the split in the first place was to reduce that cost. Now when we split into memref.dim and tensor.dim, all existing code will still compile but fail with verifier errors; if we had initially done the split std.dim -> {tensor.dim, memref.dim}, we would have had only compile errors. I'm sorry I didn't notice this on the original patch, or I would have mentioned it.
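For reference, a sketch of the two states being debated (the tensor.dim line is hypothetical: that op did not exist at the time of this review):

```mlir
// After this patch: memref.dim handles both cases, so tensor-only code
// must depend on the memref dialect.
%d0 = memref.dim %t, %c0 : tensor<?x4xf32>
%d1 = memref.dim %m, %c0 : memref<?x4xf32>

// Proposed split: each dialect owns the op for its own type.
%d2 = tensor.dim %t, %c0 : tensor<?x4xf32>
%d3 = memref.dim %m, %c0 : memref<?x4xf32>
```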

Any progress on splitting memref.dim into tensor.dim? Multiple downstream projects are now complaining about this.

> I think just a patch splitting the op into tensor.dim and memref.dim would be fine. There isn't much to discuss.
>
> So there was a discussion that this could be done via interfaces, for example: we could have generic dim and rank operations that apply to any ShapedType. That would also make it easier to rewrite between these types, as the dim and rank operations would no longer need to change.
>
> Just to be clear, I am not saying we should not do this split; I am just trying to avoid the extra work if we later decide there is a different and better way to model this. This is more work than "just a patch", especially for downstream users, so we should avoid unnecessary churn. As far as I understand it, this is about an extra link-time dependency that is no worse than the monolithic dialect before.
>
> ShapedType only covers tensor and memref. I'm not currently aware of any effort to generalize that, so splitting it now seems fairly safe.

Not really. VectorType is also a ShapedType, but the current dim op works only on tensors and memrefs (since vectors have static shapes, a dim on them would always fold to a constant).

It is odd now that memref.dim is needed for tensors as well. The current op documentation is also inaccurate: the definition accepts "any memref or tensor type", but the description mentions only memrefs.

def DimOp : MemRef_Op<"dim", [NoSideEffect]> {
  let summary = "dimension index operation";
  let description = [{
    The `dim` operation takes a memref and a dimension operand of type `index`.
    ...
  }];

  let arguments = (ins AnyTypeOf<[AnyTensor, AnyRankedOrUnrankedMemRef],
                                 "any memref or tensor type">:$memrefOrTensor,
                       Index:$index);
}
mlir/include/mlir/Transforms/Utils.h
95

MLIRTransforms already depends on memrefs, but I guess you meant avoiding a header include. Passing a derived op type by pointer creates confusion, suggesting it might be a nullable argument, which is definitely not the case here.