This is an archive of the discontinued LLVM Phabricator instance.

[MLIR] Create memref dialect and move dialect-specific ops from std.
ClosedPublic

Authored by dfki-jugr on Mar 5 2021, 6:17 AM.

Details

Summary

Create the memref dialect and move dialect-specific ops
from the std dialect to this dialect.

Moved ops:
AllocOp -> MemRef_AllocOp
AllocaOp -> MemRef_AllocaOp
AssumeAlignmentOp -> MemRef_AssumeAlignmentOp
DeallocOp -> MemRef_DeallocOp
DimOp -> MemRef_DimOp
MemRefCastOp -> MemRef_CastOp
MemRefReinterpretCastOp -> MemRef_ReinterpretCastOp
GetGlobalMemRefOp -> MemRef_GetGlobalOp
GlobalMemRefOp -> MemRef_GlobalOp
LoadOp -> MemRef_LoadOp
PrefetchOp -> MemRef_PrefetchOp
ReshapeOp -> MemRef_ReshapeOp
StoreOp -> MemRef_StoreOp
SubViewOp -> MemRef_SubViewOp
TransposeOp -> MemRef_TransposeOp
TensorLoadOp -> MemRef_TensorLoadOp
TensorStoreOp -> MemRef_TensorStoreOp
TensorToMemRefOp -> MemRef_BufferCastOp
ViewOp -> MemRef_ViewOp

The roadmap to split the memref dialect from std is discussed here:
https://llvm.discourse.group/t/rfc-split-the-memref-dialect-from-std/2667
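
For downstream C++ code, the practical effect is mainly a rename of the op classes (and of the assembly prefix from std. to memref.). A minimal sketch of the kind of update downstream builder code needs; the helper below is purely illustrative and not part of this patch:

#include "mlir/Dialect/MemRef/IR/MemRef.h" // new home of the moved ops
#include "mlir/IR/Builders.h"

using namespace mlir;

// Illustrative helper: allocate a buffer and store a value into it.
// Before this patch these were the std-dialect classes AllocOp and
// StoreOp; after the split they live in the mlir::memref namespace.
static Value allocAndStore(OpBuilder &b, Location loc, MemRefType type,
                           Value value, ValueRange indices) {
  Value buffer = b.create<memref::AllocOp>(loc, type);
  b.create<memref::StoreOp>(loc, value, buffer, indices);
  return buffer;
}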

Event Timeline

dfki-jugr created this revision. Mar 5 2021, 6:17 AM
dfki-jugr requested review of this revision. Mar 5 2021, 6:17 AM
rriddle requested changes to this revision. Mar 8 2021, 12:28 PM

Do you have a plan for splitting out the memref parts of the StandardTo*/*ToStandard passes?

Thanks for tackling this!

mlir/docs/Dialects/MemRef.md
2–3

This doesn't seem true for the MemRef dialect?

mlir/examples/toy/Ch7/mlir/LowerToAffineLoops.cpp
286–287

This comment needs to be updated, same for the other copies as well. I think this also applies to the chapter documentation in docs/; can you check?

mlir/include/mlir/Conversion/Passes.td
323–328

Same with the others?

mlir/include/mlir/Dialect/Linalg/Passes.td
83–87

Is this over 80 characters? Can you reflow if so?

101–106

Please format this.

146–151

Over 80 characters? Please reflow.

mlir/include/mlir/Dialect/MemRef/EDSC/Intrinsics.h
9

Please fix

31

This looks off.

mlir/include/mlir/Dialect/MemRef/IR/MemRef.h
13

Do you need all of these?

43

Shouldn't these be in the memref namespace?

80

Unrelated to the current patch, but we should really move these to ODS.

mlir/include/mlir/Dialect/MemRef/IR/MemRefBase.td
19

Nit: Move the description above the hasConstantMaterializer. The description should generally be close to the top of the def.

mlir/include/mlir/Dialect/MemRef/IR/MemRefOps.td
114

This looks off.

133

Is this still necessary? It was only necessary for the standard dialect because of the dialect prefix elision.

mlir/include/mlir/Dialect/StandardOps/Utils/Utils.h
34

Can this be forward declared?

39

Drop the llvm::

mlir/include/mlir/Transforms/Utils.h
95

Can you forward declare this op instead of including the above file?

mlir/lib/Dialect/Linalg/IR/LinalgInterfaces.cpp
191

Drop trivial braces.

mlir/lib/Dialect/Linalg/Transforms/Hoisting.cpp
17 (On Diff #328501)

Why is this one needed?

mlir/lib/Dialect/SCF/Transforms/Bufferize.cpp
24

Can you add this dependency in the ODS definition instead?

mlir/lib/Dialect/Shape/Transforms/Bufferize.cpp
20

Add this in ODS instead?

mlir/lib/Transforms/Canonicalizer.cpp
35

The canonicalizer should not have a dependency on any dialect; please remove this.

This revision now requires changes to proceed. Mar 8 2021, 12:28 PM
herhut added a comment. Mar 9 2021, 2:54 AM

Wow, what a patch :)

Some nits but otherwise looks good.

mlir/docs/Rationale/UsageOfConst.md
204

I think this change should be undone, or you need to replace standard with memref as well.

mlir/docs/Traits.md
214

The link will be broken.

mlir/include/mlir/Dialect/MemRef/IR/MemRefOps.td
116

This has only one user, it seems. Maybe just drop it and inline it at the single use?

418

So now all dim operations live in the memref dialect, independently of whether they are on memref or tensor? I assume this is temporary until we find a better solution?

699

hyper-nit: spaces around = while you are here.

mlir/lib/Conversion/GPUToNVVM/LowerGpuOpsToNVVMOps.cpp
21

Why is this needed?

mlir/lib/Dialect/Affine/Transforms/AffineDataCopyGeneration.cpp
26

Why is this needed?

mlir/lib/Dialect/GPU/Transforms/MemoryPromotion.cpp
16

Extra include without any other changes?

mlir/lib/Dialect/StandardOps/Transforms/FuncBufferize.cpp
25–26

Please move to include/mlir/Dialect/StandardOps/Transforms/Passes.td

mlir/lib/Dialect/StandardOps/Transforms/TensorConstantBufferize.cpp
104

Please move to include/mlir/Dialect/StandardOps/Transforms/Passes.td

mlir/lib/Transforms/Bufferize.cpp
73

nit: fix name.

dfki-jugr updated this revision to Diff 329916. Mar 11 2021, 4:07 AM
dfki-jugr marked 28 inline comments as done and an inline comment as not done.

Addressed comments.

herhut accepted this revision. Mar 11 2021, 11:50 PM

Please address the clang-tidy warnings. Otherwise this looks good to go to me.

This revision was not accepted when it landed; it landed in state Needs Review. Mar 15 2021, 3:17 AM
This revision was landed with ongoing or failed builds.
This revision was automatically updated to reflect the committed changes.
rriddle added inline comments. Mar 15 2021, 10:05 AM
mlir/lib/Transforms/Canonicalizer.cpp
15

Remove this.

bondhugula added inline comments.
mlir/include/mlir/Transforms/Utils.h
95

@dfki-jugr Why does this revision change the signature of this function here to pass AllocOp by pointer? As far as I can tell from the summary and a quick look, this is only about the movement of ops as opposed to an API change.

ftynse added inline comments. Mar 18 2021, 1:04 AM
mlir/include/mlir/Transforms/Utils.h
95

Because it can then use a forward declaration instead of having MLIRTransforms depend on MLIRMemRef?

mehdi_amini added inline comments. Mar 18 2021, 12:50 PM
mlir/lib/Transforms/Utils/Utils.cpp
20

@ftynse: Won't libTransforms depend on memref anyway here?

It looks like this patch simply moved std.dim to memref.dim, without doing the "splitting" which we had discussed. I now have code that only operates on tensors yet needs to pull in the memref dialect just for this. Can you split memref.dim into tensor.dim for the tensor case?

> It looks like this patch simply moved std.dim to memref.dim, without doing the "splitting" which we had discussed. I now have code that only operates on tensors yet needs to pull in the memref dialect just for this. Can you split memref.dim into tensor.dim for the tensor case?

This is true. Since it was still up for discussion how to handle such a splitting of ops (https://llvm.discourse.group/t/cross-dialect-folding-and-canonicalization/2740/11), the dim op was moved completely to the memref dialect for now. We are waiting for an agreement on how to split the op.

I think just a patch splitting the op into tensor.dim and memref.dim would be fine. There isn't much to discuss.

> I think just a patch splitting the op into tensor.dim and memref.dim would be fine. There isn't much to discuss.

So there was a discussion that this could be done via interfaces, for example. We could have generic dim and rank operations that can be applied to any ShapedType. That would also make it easier to rewrite between these types, as the dim and rank operations would no longer need to change.

Just to be clear, I am not saying we should not do this split; I am just trying to avoid the extra work if soon after we decide that there is a different and better way to model this. This is more work than "just a patch", especially for downstream users, so we should avoid unnecessary churn. As far as I understand it, this is about an extra link-time dependency that is no worse than with the monolithic dialect before.

>> I think just a patch splitting the op into tensor.dim and memref.dim would be fine. There isn't much to discuss.

> So there was a discussion that this could be done via interfaces, for example. We could have generic dim and rank operations that can be applied to any ShapedType. That would also make it easier to rewrite between these types, as the dim and rank operations would no longer need to change.

> Just to be clear, I am not saying we should not do this split; I am just trying to avoid the extra work if soon after we decide that there is a different and better way to model this. This is more work than "just a patch", especially for downstream users, so we should avoid unnecessary churn. As far as I understand it, this is about an extra link-time dependency that is no worse than with the monolithic dialect before.

ShapedType only covers tensor and memref. I'm not currently aware of any effort to generalize that, so splitting it now seems fairly safe.

I'm also well aware of the cost to downstream users, as I just updated my own codebases to memref.dim. The reason I wanted it to be split in the first place was to reduce that. Now, when we split memref.dim and tensor.dim, all existing code will compile and just have verifier errors... if we had initially done the split std.dim -> {tensor.dim, memref.dim}, then we would have only had compile errors. I'm sorry I didn't notice this on the original patch or I would have mentioned it.

Any progress on splitting memref.dim into tensor.dim? Multiple downstream projects are now complaining about this.

>>> I think just a patch splitting the op into tensor.dim and memref.dim would be fine. There isn't much to discuss.

>> So there was a discussion that this could be done via interfaces, for example. We could have generic dim and rank operations that can be applied to any ShapedType. That would also make it easier to rewrite between these types, as the dim and rank operations would no longer need to change.

>> Just to be clear, I am not saying we should not do this split; I am just trying to avoid the extra work if soon after we decide that there is a different and better way to model this. This is more work than "just a patch", especially for downstream users, so we should avoid unnecessary churn. As far as I understand it, this is about an extra link-time dependency that is no worse than with the monolithic dialect before.

> ShapedType only covers tensor and memref. I'm not currently aware of any effort to generalize that, so splitting it now seems fairly safe.

Not really. VectorType is also a ShapedType, but the current dim op works only on tensors and memrefs (since vectors, being of static shape, would always have their dim fold to a constant).

It is odd now that memref.dim is needed for tensors as well. The current op documentation is also inaccurate now: the arguments accept "any memref or tensor type", but the description mentions memref only.

def DimOp : MemRef_Op<"dim", [NoSideEffect]> {
  let summary = "dimension index operation";
  let description = [{
    The `dim` operation takes a memref and a dimension operand of type `index`.
  ...
 let arguments = (ins AnyTypeOf<[AnyTensor, AnyRankedOrUnrankedMemRef],
                                 "any memref or tensor type">:$memrefOrTensor,
                       Index:$index);
mlir/include/mlir/Transforms/Utils.h
95

MLIRTransforms already depends on memrefs, but I guess you meant avoiding a header include. Passing a derived op type by pointer creates confusion by suggesting that the argument might be nullable, which is definitely not the case here.

mehdi_amini added inline comments. Apr 28 2021, 4:02 PM
mlir/include/mlir/Transforms/Passes.td
366

That is looking really suspicious: the canonicalize pass itself should not have any such dependency. It also seems related to River's comment.

mlir/include/mlir/Transforms/Utils.h
95

@dfki-jugr: this comment wasn't addressed either.

mlir/lib/Conversion/GPUToNVVM/LowerGpuOpsToNVVMOps.cpp
21

This comment wasn't addressed.

mlir/lib/Dialect/Affine/Transforms/AffineDataCopyGeneration.cpp
26

ditto.

mlir/lib/Transforms/Canonicalizer.cpp
15

This wasn't done apparently?

35

This was moved to TableGen instead of being removed...

@dfki-jugr can you acknowledge? (and address...)

@mehdi_amini We have already discussed this "issue" with @herhut and found that the includes cannot be removed in a straightforward way. It looks like removing those includes causes memref dependencies that cannot be resolved properly. Unfortunately, we are not experts in this particular area of MLIR core. However, we definitely want to address your concerns, if we can get some support regarding these implicitly existing memref dependencies.

mlir/include/mlir/Transforms/Utils.h
95

As @rriddle mentioned, we added a forward declaration instead of an include. We agree that the signature change is misleading. This can be resolved by reverting this change. This should be possible since @bondhugula mentioned that there is already a dependency.
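
To make the trade-off in this sub-thread concrete, here is a minimal sketch. The function name follows the memref-normalization utility discussed above, but the snippet is illustrative rather than a verbatim copy of the header:

#include "mlir/Support/LogicalResult.h"

namespace mlir {
namespace memref {
class AllocOp; // forward declaration: no include of the MemRef dialect headers
} // namespace memref

// With only a forward declaration the op type is incomplete here, so the
// header can only refer to it by pointer (or reference). That is what reads
// as if the argument might be nullable, even though callers always pass a
// valid op.
LogicalResult normalizeMemRef(memref::AllocOp *allocOp);

// Including mlir/Dialect/MemRef/IR/MemRef.h instead would allow the usual
// by-value signature, since op classes are lightweight value handles:
//   LogicalResult normalizeMemRef(memref::AllocOp allocOp);
} // namespace mlir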

silvas added a comment. May 5 2021, 2:59 PM

Any progress on splitting memref.dim into tensor.dim?

@dfki-jugr: Can you specifically comment on why the canonicalize pass depends on the memref dialect? Maybe we can start here: both River and I asked inline about *why* you added this dependency to the canonicalize pass.

mehdi_amini added inline comments. May 5 2021, 3:58 PM
mlir/include/mlir/Transforms/Passes.td
366

@dfki-jugr here in particular, in case you missed it before.

mlir/lib/Transforms/Canonicalizer.cpp
15

@dfki-jugr here in particular, in case you missed it before: please remove this header; if you can't and need help, please explain exactly why you need it here.

@mehdi_amini We tried to resolve the dependency, but ran into the problem that the canonicalize test did not recognize the memref ops. Before splitting the dialect, this was not an issue since the ops were part of the std dialect. Now we get an error that the split-out memref ops are not registered in the MLIR context. That is the reason why this dependency is needed. As mentioned above, we are not experts in this particular field of MLIR and would appreciate some support on how to get rid of this dependency. Maybe we can pre-register the memref dialect somehow?

> Any progress on splitting memref.dim into tensor.dim?

We are still waiting for an agreement in the discussion on how to proceed:

https://llvm.discourse.group/t/cross-dialect-folding-and-canonicalization/2740/13?u=dfki-jugr

mehdi_amini added a comment (edited). May 6 2021, 8:55 AM

> @mehdi_amini We tried to resolve the dependency, but ran into the problem that the canonicalize test did not recognize the memref ops. Before splitting the dialect, this was not an issue since the ops were part of the std dialect. Now we get an error that the split-out memref ops are not registered in the MLIR context. That is the reason why this dependency is needed. As mentioned above, we are not experts in this particular field of MLIR and would appreciate some support on how to get rid of this dependency. Maybe we can pre-register the memref dialect somehow?

canonicalize is a generic pass that is used in many situations where memref is not used, for example TensorFlow. I can't see a reason for the MemRef dialect to be loaded in the context when I run canonicalize on TensorFlow; however, that is the case after your change here.

You mention that you "tried to resolve the dependency, but ran into the problem that the canonicalize test did not recognize the memref ops"; can you post the error? (Which test, and what failed?)
The canonicalize pass itself just applies patterns provided by dialects; if something is failing, it is because a dialect does not declare a dependency on the memref dialect (by loading it in its constructor).
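
As a concrete illustration of that last point, a minimal sketch follows; the dialect name is hypothetical, and the same effect can be obtained with the dependentDialects field in the dialect's ODS definition:

#include "mlir/Dialect/MemRef/IR/MemRef.h"
#include "mlir/IR/Dialect.h"
#include "mlir/IR/MLIRContext.h"

namespace mlir {
// Hypothetical downstream dialect whose folds or canonicalization patterns
// may create memref ops.
class MyDialect : public Dialect {
public:
  explicit MyDialect(MLIRContext *ctx)
      : Dialect(getDialectNamespace(), ctx, TypeID::get<MyDialect>()) {
    // Declare the dependency here so memref is loaded whenever this dialect
    // is loaded; the generic canonicalize pass then never needs to know
    // about memref.
    ctx->getOrLoadDialect<memref::MemRefDialect>();
  }
  static llvm::StringRef getDialectNamespace() { return "mydialect"; }
};
} // namespace mlir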

silvas added a comment (edited). Jun 15 2021, 6:34 PM

Hi @dfki-jugr, can we please split memref.dim into tensor.dim (and std.rank into tensor.rank, if you have time)? I've now independently heard complaints about this from 5+ people across many different teams I interact with.