This is an archive of the discontinued LLVM Phabricator instance.

[mlir][Linalg] Provide a Tablegen backend to specify named Linalg ops declaratively
AbandonedPublic

Authored by nicolasvasilache on Jan 31 2020, 8:58 PM.

Details

Summary

This revision demonstrates some basic support for declaratively defining Linalg "named" ops.
Such named ops form the backbone of operations that are ubiquitous in the ML application domain.

This revision is closely related to the definition of a "Tensor Computation Primitives Dialect"
and demonstrates that certain ops can be expressed as declarative configurations of the
linalg.generic op.

For now, this only supports linalg.matvec (fixed-rank) and linalg.fill (rank-polymorphic)
and replaces manual C++ implementations with a Tablegen'd equivalent.

These 2 ops are already enough to exhibit interesting behaviors, tradeoffs and open questions:

  1. reference maps/iterators as static vs instance methods, and requirements for rank-polymorphism
  2. behavior in the presence / absence of maps/iterators
  3. handling of attributes other than the ones from linalg.generic
  4. ability to sugar the parsing/pretty-printing
  5. (future) declarative specification of properties that will automate matchers and emitters to loop, scalar and vector form
  6. (future) specification of mapping to library call / HW ISA

Diff Detail

Event Timeline

Herald added a project: Restricted Project. · View Herald Transcript · Jan 31 2020, 8:58 PM

roundtrip.mlir and tile.mlir have larger NFC changes; they will be rebased, please ignore those for now.

The previous rev was reverted because, during discussion, it was unclear whether we wanted to support this feature, and it was changing the functionality & contract of the tool. Opening a new rev while rolling in the previous change does not resolve that discussion.

No, it does not, and it does not intend to resolve the discussion but to reopen it with a concrete use case.

This revision shows one possible implementation of "named" Linalg ops specified in a declarative fashion.
As is often the case in that space, one has to try to build it to understand it; the 6 points in my commit message describe some of the tradeoffs involved.

A clear open problem is how to improve the core infra support to make it better, more natural and future-proof.
The fact that this revision builds on previously reverted code is circumstantial; this was the simplest way to show people where Linalg intends to go.

I'll mark as WIP and RFC to make it clearer that this is not intended to land as is.

nicolasvasilache retitled this revision from [mlir][Linalg] OpGen hooks to [mlir][Linalg][WIP][RFC] OpGen hooks. Feb 1 2020, 10:12 AM
nicolasvasilache edited the summary of this revision. (Show Details)

The previous rev was reverted because, during discussion, it was unclear whether we wanted to support this feature, and it was changing the functionality & contract of the tool. Opening a new rev while rolling in the previous change does not resolve that discussion.

At least now there is a motivation / example of uses to support the discussion!

I'm still not convinced that hooking into Tablegen C++ is something we want dialect authors to do, so I'd like to look closer into what you're trying to do here and see how we could achieve this without injecting code into TableGen emitters.

The previous rev was reverted because, during discussion, it was unclear whether we wanted to support this feature, and it was changing the functionality & contract of the tool. Opening a new rev while rolling in the previous change does not resolve that discussion.

At least now there is a motivation / example of uses to support the discussion!

I'm still not convinced that hooking into Tablegen C++ is something we want dialect authors to do, so I'd like to look closer into what you're trying to do here and see how we could achieve this without injecting code into TableGen emitters.

I am not either and I am looking forward to getting better suggestions and moving the needle on this longstanding issue in Linalg.

The above functionality could already be handled with OpInterfaces + the way export generation is handled for the TFLite dialect [which doesn't use OpInterfaces as that predated it, but could]; one need not change OpDefinitionGen to get this behavior.

Another benefit of coupling with OpInterfaces is that it also serves to push one to think more about the core properties being taken advantage of.

@jpienaar again, no need to pattern-match and overfit on the current implementation details as they are subject to change.

Your comment on how similar things are already achieved elsewhere without OpGen extensions is useful.
To be even more useful, could you point to a specific commit or place in the code that one can look at to understand how this is done in practice?

Thanks!

@jpienaar @mehdi_amini ping, please.

The above functionality could already be handled with OpInterfaces + the way export generation is handled for the TFLite dialect [which doesn't use OpInterfaces as that predated it, but could]; one need not change OpDefinitionGen to get this behavior.

This is not actionable for me; I do not see how to express AffineMaps declaratively in Tablegen and generate the proper OpInterface methods without the OpDefinitionGen hook.
Concretely, the following is the full declarative definition of the MatvecOp attributes proposed by this revision and replaces a bunch of C++ code scattered in various places:

def MatvecOp : LinalgNamedStructured_Op<"matvec", [NInputs<2>, NOutputs<1>]> {
  let arguments = (ins AnyStridedMemRefOfRank<2>,
                       AnyStridedMemRefOfRank<1>,
                       AnyStridedMemRefOfRank<1>);
  let hasLibraryImpl = 1;
  let iterators = ["i", "r_j"];
  let iterators_types = ["parallel", "reduction"];
  //   A(i, r_j) * B(r_j) -> C(i)
  let input_indexing_maps = [AffineExpressions<["i", "r_j"]>,
                             AffineExpressions<["r_j"]>];
  let output_indexing_maps = [AffineExpressions<["i"]>];
  let hasFolder = 1;
}

Please provide something concrete and actionable, or reconsider a temporary solution to unblock progress.
I am happy to use any mechanism deemed appropriate that works; I do not see one with my current understanding of the stack.

Lastly, this revision raises 6 points that are important to solve in the grander vision of Linalg and Tensor Compute Primitives; we should move the needle so we can start answering those key questions.

The main value I see with the OpDefinitionGen hook is that it allows one to automatically insert traits and methods into the generated C++ op classes. This is preferable when a dialect's ops have a large variation of potential traits and methods, and that information is already encoded with other fields in the op definitions.

Using Linalg ops as an example, let's say we want to introduce a separate OpInterface for the newly introduced iterator_types, and let's call it HasIteratorTypesInterface. We still want to use the declarative way (let iterators_types = ["parallel", "reduction"];) for specifying them, as shown by Nicolas. Then the ODS hook gives us a way to automatically insert the HasIteratorTypesInterface::Trait into the C++ op class and synthesize the implementation for any op containing let iterators_types = [...]; without requiring the op to explicitly list the trait in the ODS. (And IMO it makes sense to omit it, given the trait is just derived information from let iterators_types = [...];.) Additionally, it allows some Linalg ops to implement HasIteratorTypesInterface while others do not, and we can use hasTrait<HasIteratorTypesInterface::Trait> to quickly check whether an op has iterator types defined. Similarly, one can have a separate HasIndexingMapsInterface for other newly introduced fields.
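To make this concrete, here is roughly the C++ one would expect to be synthesized for MatvecOp under this scheme; HasIteratorTypesInterface and the method name below are illustrative assumptions, not what this revision actually generates:

// Synthesized from `let iterators_types = ["parallel", "reduction"];` in the
// MatvecOp definition above; the method name is an assumption for illustration.
llvm::SmallVector<llvm::StringRef, 4> MatvecOp::referenceIterators() {
  return {"parallel", "reduction"};
}

// Client code could then query the (hypothetical) interface generically
// instead of hard-coding op names:
static bool hasReductionIterator(mlir::Operation *op) {
  auto iface = llvm::dyn_cast<HasIteratorTypesInterface>(op);
  return iface && llvm::is_contained(iface.referenceIterators(), "reduction");
}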

But I'm not sure that's the direction Nicolas intends to evolve the Linalg dialect in. Instead of having smaller and more atomic op interfaces and letting each op opt into suitable ones automatically, maybe the approach of having one gigantic op interface like today, and requiring pretty much every op to implement it even if some methods do not make sense for some ops (so we have llvm_unreachable in the implementation), will still do for Linalg ops and carry us a long way. Then the above does not really matter.

Concretely, for this patch to make progress, I think one way to avoid using the ODS hook, as pointed out by Jacques, is to have a completely new TableGen backend that parses the whole LinalgStructuredOps.td, filters the ones that have LinalgNamedStructured_Op, generates the op interface implementation C++ code into a separate file, and then includes that separately into the main Linalg impl file. Without using ODS hooks, one needs to handle the class method signatures oneself, though; but it shouldn't be too burdensome. I personally view ODS hooks as a simpler and more integrated way even for this case, but as I said above, that's not the main value I think ODS hooks bring. :)
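For reference, the skeleton of such a standalone backend is quite small. A minimal sketch, assuming the mlir::GenRegistration hook from mlir/TableGen/GenInfo.h and filtering on the LinalgNamedStructured_Op class; the generator flag name is made up and the actual emission logic is elided:

#include "mlir/TableGen/GenInfo.h"
#include "llvm/Support/raw_ostream.h"
#include "llvm/TableGen/Record.h"

// Emit the Linalg-specific C++ (reference iterators, indexing maps, ...) for
// every op derived from LinalgNamedStructured_Op. Returns true on error,
// following the mlir-tblgen generator convention.
static bool emitLinalgNamedOps(const llvm::RecordKeeper &records,
                               llvm::raw_ostream &os) {
  for (llvm::Record *def :
       records.getAllDerivedDefinitions("LinalgNamedStructured_Op")) {
    os << "// Generated definitions for " << def->getName() << "\n";
    // Emission of the per-op methods would go here.
  }
  return false;
}

static mlir::GenRegistration
    genLinalgNamedOps("gen-linalg-named-ops-defs",
                      "Generate named Linalg op definitions",
                      emitLinalgNamedOps);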

nicolasvasilache retitled this revision from [mlir][Linalg][WIP][RFC] OpGen hooks to [mlir][Linalg] Provide a Tablegen backend to specify named Linalg ops declaratively. Feb 12 2020, 3:09 PM
nicolasvasilache edited the summary of this revision. (Show Details)
nicolasvasilache edited the summary of this revision. (Show Details)

Rebase on top of NFC changes and spin out a separate tablegen backend.

Continue the impl.

@mehdi_amini @jpienaar @rriddle @ftynse @antiagainst I have rebased and updated the revision description as well as the impl.
I would be interested in your opinion on where this is going and whether it matches the behavior you expect.
If you see some red flags please let me know; I hope to have this working and pushable by tomorrow.

Thanks!

flaub added a subscriber: flaub. Feb 12 2020, 7:54 PM

Just throwing an idea out there, but what if instead of 'named' ops (which seems to be the source of the need to extend TableGen), we could use something like the 'contraction' in PlaidML's tile dialect. It's not as flexible as arbitrary generic ops, but it seems to cover most of the ML workloads we've thrown at it. Here's what the ContractionOp looks like: https://github.com/plaidml/plaidml/blob/plaidml-v1/pmlc/dialect/tile/ir/ops.td#L232 (you can ignore the other ops in there, they are meant for the construction phase, for turning values into the type system via attributes/AffineMaps, etc). I'm thinking we really only need 2 kinds of ops: Contraction op and a generic Elementwise op. Bonus: both of these forms are auto-differentiable. We could add the other properties of structured ops.

I realize this probably isn't the right forum for this discussion, we could start a thread on discourse if that's more appropriate.

The main value I see with the OpDefinitionGen hook is that it allows one to automatically insert traits and methods into the generated C++ op classes. This is preferable when a dialect's ops have a large variation of potential traits and methods, and that information is already encoded with other fields in the op definitions

There is no disagreement that this may be useful for some cases: but we should strive to improve ODS itself to provide these facilities. I don't think we want dialect authors to write Tablegen backends: this defeats the whole point of ODS in my opinion!

@flaub Thanks that's very interesting! It reminds me: when are you up for a presentation on the Tile layer at an open design meeting? ;)

@flaub so actually, one of the (multiple) reasons I am pushing for this is to have the ability to add a generic contraction op and a generic pointwise op as special cases :)
We already have vector.contract added by andydavis@ with similar semantics and it would make the rewrite to vector form trivial.

So linalg.contract and linalg.pointwise are simple extensions of this and have the nice properties you mention.
I am unclear atm if we want autodiff to happen here (in Linalg) or at a higher level; we should discuss this in a live meeting since you have the most experience.

Still, linalg also wants to be future-proof and has additional expressiveness to represent e.g. Cholesky on triangular matrices, stencils, stacked RNNs and non-dense data types + gather/scatter in the future (but the road is still quite long until all that is available)

Lastly, this will also allow generating matchers and easily mapping the generic form to a library (in simple contract and pointwise cases, but I conjecture also in much more complex cases, TBD :) ).
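For illustration, a matcher generated from the same declarative specification could look roughly like the following; extractIndexingMaps, extractIterators and the MatvecOp reference accessors are hypothetical stand-ins for this sketch, not APIs defined by this revision:

// Sketch: recognize a linalg.generic whose configuration matches the matvec
// reference maps/iterators; once matched, the op can be replaced by the named
// linalg.matvec, or directly emitted as a call to the associated library
// routine. Helper names are assumptions.
static bool matchesMatvec(mlir::linalg::GenericOp op) {
  return op.getOperation()->getNumOperands() == 3 &&
         extractIndexingMaps(op) == MatvecOp::referenceIndexingMaps() &&
         extractIterators(op) == MatvecOp::referenceIterators();
}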

The main value I see with the OpDefinitionGen hook is that it allows one to automatically insert traits and methods into the generated C++ op classes. This is preferable when a dialect's ops have a large variation of potential traits and methods, and that information is already encoded with other fields in the op definitions

There is no disagreement that this may be useful for some cases: but we should strive to improve ODS itself to provide these facilities. I don't think we want dialect authors to write Tablegen backends: this defeats the whole point of ODS in my opinion!

Well actually the new impl indeed changes to writing a new TableGen backend. :) But I get your idea here. Yup, I agree that we should strive to improve ODS itself and avoid having dialect authors write TableGen expansion logic as much as possible. So features that apply to any dialect should be factored out and landed into the main OpDefinitionGen. But there are cases where it's inevitable, like the Linalg case shown here: we have custom syntax defined for op definitions that is only meaningful to the Linalg dialect. One can either completely roll one's own TableGen backend or hook into OpDefinitionGen to avoid some boilerplate. I would argue the latter saves developer effort and enhances the functionality of ODS for dialect-specific use cases. :)

Finish the revision and update CMakeLists.txt.

antiagainst accepted this revision. Feb 13 2020, 9:24 AM

LGTM, but may want another pair of eyes to check the Linalg specific logic. :)

mlir/include/mlir/Dialect/Linalg/IR/LinalgStructuredInterface.td
1 ↗(On Diff #244322)

Super nit: Do we need to provide the "Linalg StructuredIfce" part with strange abbr.? Doesn't seem to provide anything to me. :)

mlir/tools/mlir-tblgen/LinalgNamedOpsGen.cpp
31

I'd prefer to have explicit using declarations like the following rather than doing this.

93

Nit: just use char to be clear here?

This revision is now accepted and ready to land. Feb 13 2020, 9:24 AM
nicolasvasilache marked 2 inline comments as done. Feb 13 2020, 10:13 AM

Address review comments.

mehdi_amini added 1 blocking reviewer(s): mehdi_amini. Feb 13 2020, 11:06 AM
This revision now requires review to proceed. Feb 13 2020, 11:06 AM
jpienaar added inline comments. Feb 14 2020, 9:57 AM
mlir/include/mlir/Dialect/Linalg/IR/LinalgStructuredOps.td
56–57

Should this comment be outside of the "section heading" block?

290–296

Why not have a trait that declares these and then have your backend generate the definitions? (Else it seems that to know you have this function you need to hardcode the names vs casting to the op interface.)

327

TableGen (and therefore ODS) does not have namespaces; please follow what we are doing everywhere else and preface the op name with the dialect as a namespace. Else one can't write patterns using these ops without concern of collisions.

(true above and below too)

mlir/include/mlir/Dialect/Linalg/IR/LinalgStructuredOpsInterface.td
44

Why unsigned but comment says true?

54

space after value? (same below too)

166

Is the only advantage here that you don't need to specify type? If so, why not use infer type op interface?

179

So this is different from clone in that you can specify location too?

jpienaar requested changes to this revision. Feb 14 2020, 11:27 AM

The main value I see with the OpDefinitionGen hook is that it allows one to automatically insert traits and methods into the generated C++ op classes. This is preferable when a dialect's ops have a large variation of potential traits and methods, and that information is already encoded with other fields in the op definitions

There is no disagreement that this may be useful for some cases: but we should strive to improve ODS itself to provide these facilities. I don't think we want dialect authors to write Tablegen backends: this defeats the whole point of ODS in my opinion!

Well actually the new impl indeed changes to writing a new TableGen backend. :) But I get your idea here. Yup, I agree that we should strive to improve ODS itself and avoid having dialect authors write TableGen expansion logic as much as possible. So features that apply to any dialect should be factored out and landed into the main OpDefinitionGen. But there are cases where it's inevitable, like the Linalg case shown here: we have custom syntax defined for op definitions that is only meaningful to the Linalg dialect. One can either completely roll one's own TableGen backend or hook into OpDefinitionGen to avoid some boilerplate. I would argue the latter saves developer effort and enhances the functionality of ODS for dialect-specific use cases. :)

I think a problem is that we have ODS now becoming a custom DSL for every dialect as every dialect can perform arbitrary interpretation and so there is no need to move towards common/reusable abstractions. ODS is restrictive for a reason :)

The extension of mlir-tblgen from ops being defined in a dialect-independent manner & translate utilities for dialects, to a custom DSL per dialect, was snuck in; so while we have that landed, I think of it more as a bug that should be fixed rather than something we should point to as existing precedent.

@jpienaar @mehdi_amini ping, please.

The above functionality could already be handled with OpInterfaces + the way export generation is handled for the TFLite dialect [which doesn't use OpInterfaces as that predated it, but could]; one need not change OpDefinitionGen to get this behavior.

This is not actionable for me; I do not see how to express AffineMaps declaratively in Tablegen and generate the proper OpInterface methods without the OpDefinitionGen hook.

This change still does not do that: you do not have a way of specifying AffineMaps; you allow some subclass of them in a new format, and in a way that is dialect specific. I think this makes using ODS as a Linalg DSL easier, but I'm not sure even there the added complexity is worth it (and you could do this just using basic string interpolation locally, with the same amount of portability and analyzability as you are providing here; nothing else in the system is gaining much from this, and whether you had it in TableGen itself or as a post-generation step seems to provide equal benefit here; and if general patterns arose from which dialect-independent support could be added, we could refactor).

mlir/include/mlir/Dialect/Linalg/IR/LinalgStructuredOps.td
338–340

Just to be clear, so all of this is so that you can write

let input_indexing_maps = [AffineExpressions<["i", "r_j"]>,
                             AffineExpressions<["r_j"]>];
let output_indexing_maps = [AffineExpressions<["i"]>];

along with the op and from this you are generating

return SmallVector<AffineMap, 4>{AffineMap::get(2, 0, {i, j}),
                                   AffineMap::get(2, 0, {j}),
                                   AffineMap::get(2, 0, {i})};

?

This revision now requires changes to proceed. Feb 14 2020, 11:27 AM

FYI:

mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp:795:22: warning: unused function 'verify' [-Wunused-function]
static LogicalResult verify(FillOp op) {
                     ^
1 warning generated.

There is no disagreement that this may be useful for some cases: but we should strive to improve ODS itself to provide these facilities. I don't think we want dialect authors to write Tablegen backends: this defeats the whole point of ODS in my opinion!

Well actually the new impl indeed changes to writing a new TableGen backend. :) But I get your idea here. Yup, I agree that we should strive to improve ODS itself and avoid having dialect authors write TableGen expansion logic as much as possible. So features that apply to any dialect should be factored out and landed into the main OpDefinitionGen. But there are cases where it's inevitable, like the Linalg case shown here: we have custom syntax defined for op definitions that is only meaningful to the Linalg dialect. One can either completely roll one's own TableGen backend or hook into OpDefinitionGen to avoid some boilerplate. I would argue the latter saves developer effort and enhances the functionality of ODS for dialect-specific use cases. :)

I think a problem is that we have ODS now becoming a custom DSL for every dialect as every dialect can perform arbitrary interpretation and so there is no need to move towards common/reusable abstractions. ODS is restrictive for a reason :)

That's my initial position as well: the bar should be rather high for getting into a Tablegen backend; I'd rather not do this for every single "sugar" that can be added: at some point we'll have custom embedded parsers in Tablegen and create brand new languages...
That means there is a tradeoff and we'd like to make sure we consider and weigh the alternatives a bit more, including having something a bit more ugly in ODS if it avoids custom backends. Hence the current question in the diff to try to do this.

The extension of mlir-tblgen from ops being defined in a dialect-independent manner & translate utilities for dialects, to a custom DSL per dialect, was snuck in; so while we have that landed, I think of it more as a bug that should be fixed rather than something we should point to as existing precedent.

+1

mlir/include/mlir/Dialect/Linalg/IR/LinalgStructuredOps.td
263–264

the comment seems identical to line 55, but it is formatted as if it intends to open a new section

266

I don't parse this sentence

338–340

I'm interested in the answer here; I'd like to make sure we understand the problem this is solving :)

@jpienaar and @mehdi_amini thanks for your fast turnaround on this!

Ultimately, my objective still is and has always been to write many DSL ops as a linalg.generic with some variant of Einsum/TC notation (e.g. A(i, r_j) * B(r_j) -> C(i)) and have ops be automatically generated from this almost mathematical form.
We have had quite some experience in that area and I claim it is a fundamentally "good thing to have".
Once that exists, it can be pattern-matched to library calls automatically and can resist transformations such as tiling and fusion, since we have all the IR in place to do that.

This revision is a baby step that wants to act as a forcing function on a topic that seems to be, well..., interesting: it is good to hear the objections clearly as to where Tablegen-based infrastructure is allowed to evolve.

Is there a world in which we can have such an Einsum-based parser on textual format to specify Linalg ops?
Does this have to be a language outside of MLIR? (I would certainly hope not).
I think https://reviews.llvm.org/D73405 sets a nice precedent and I would very much want to see something like this for specifying Linalg ops declaratively.

If this Einsum-based parser + Linalg Tablegen backend will never be an acceptable future, I would love to understand why not and what alternatives people would recommend.

Thanks for your insights!

Is there a world in which we can have such an Einsum-based parser on textual format to specify Linalg ops?
Does this have to be a language outside of MLIR? (I would certainly hope not).

I don't understand how these two questions can go together as they seem entirely disconnected to me, so I'm probably not understanding the questions in the first place :)

I think https://reviews.llvm.org/D73405 sets a nice precedent and I would very much want to see something like this for specifying Linalg ops declaratively.

The main reason I have been reluctant to move on here is exactly because I would very much want to see something like this for specifying Linalg ops declaratively!
Where "something like this" is a feature that is generic enough to generalize and be reused across dialects, like D73405 :)

Is there a world in which we can have such an Einsum-based parser on textual format to specify Linalg ops?
Does this have to be a language outside of MLIR? (I would certainly hope not).

I don't understand how these two questions can go together as they seem entirely disconnected to me, so I'm probably not understanding the questions in the first place :)

I think https://reviews.llvm.org/D73405 sets a nice precedent and I would very much want to see something like this for specifying Linalg ops declaratively.

The main reason I have been reluctant to move on here is exactly because I would very much want to see something like this for specifying Linalg ops declaratively!
Where "something like this" is a feature that is generic enough to generalize and be reused across dialects, like D73405 :)

Ok so if we agree on the principle I think this is a great start then.
My main concern is a cost benefit tradeoff: I am super interested in the result of this but do not have the background or resources to do it in a way that is portable and reusable across dialects. It still took me an inordinate amount of time to get to what you see here: this is not my area of expertise.
However I did start talking about this to various folks more than 3 months ago but everyone is very busy and it has reached a point where it is blocking.
So I went ahead to move the needle.

Is it acceptable to move this into a new binary, say 'linalg-named-opgen-dontlookhowugly-pleasehelpimprove', so I am not blocked? When the right people have cycles and they see the end result, they can start lending a hand.

Is there a world in which we can have such an Einsum-based parser on textual format to specify Linalg ops?
Does this have to be a language outside of MLIR? (I would certainly hope not).

I don't understand how these two questions can go together as they seem entirely disconnected to me, so I'm probably not understanding the questions in the first place :)

I think https://reviews.llvm.org/D73405 sets a nice precedent and I would very much want to see something like this for specifying Linalg ops declaratively.

The main reason I have been reluctant to move on here is exactly because I would very much want to see something like this for specifying Linalg ops declaratively!
Where "something like this" is a feature that is generic enough to generalize and be reused across dialects, like D73405 :)

Ok so if we agree on the principle I think this is a great start then.
My main concern is a cost benefit tradeoff: I am super interested in the result of this but do not have the background or resources to do it in a way that is portable and reusable across dialects. It still took me an inordinate amount of time to get to what you see here: this is not my area of expertise.
However I did start talking about this to various folks more than 3 months ago but everyone is very busy and it has reached a point where it is blocking.

Fair enough.
Even though:

  • I don't remember you talking to me (or Jacques) about this 3 months ago (the first time this was mentioned was when Lei pushed the Tablegen hook and you mentioned you would use it later, but there was no detail at that time, and it was last month I believe).
  • It isn't clear what is really blocked here. We tried to ask before:

Just to be clear, so all of this is so that you can write

let input_indexing_maps = [AffineExpressions<["i", "r_j"]>,
                             AffineExpressions<["r_j"]>];
let output_indexing_maps = [AffineExpressions<["i"]>];

along with the op and from this you are generating

return SmallVector<AffineMap, 4>{AffineMap::get(2, 0, {i, j}),
                                   AffineMap::get(2, 0, {j}),
                                   AffineMap::get(2, 0, {i})};

But we didn't really get an answer to the questions. I also mentioned "That means there is a tradeoff and we'd like to make sure we consider and weigh the alternatives a bit more, including having something a bit more ugly in ODS if it avoids custom backends. Hence the current question in the diff to try to do this."

Is it acceptable to move this into a new binary, say 'linalg-named-opgen-dontlookhowugly-pleasehelpimprove', so I am not blocked? When the right people have cycles and they see the end result, they can start lending a hand.

It may be, but I'd be more comfortable supporting this if you could first help us figure out the answers to the questions asked a few days ago. I'm still not sure I got the complete picture about the urgency (what is blocked exactly? why is it important right now?) and the alternatives.

we didn't really get an answer to the questions. I also mentioned

I believe I did answer very clearly: the objective is to evolve into an einsum-like specification. I'll paste it here again: a string such as "C(i, j) += A(i, k) * B(k, j)" is enough to specify the linalg.generic configuration for a named matmul op, and this generalizes to many ops.
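For concreteness, the configuration that string determines is tiny; in the same style as the snippets quoted above (with i, j, k being the AffineExpr loop dimensions), the generated code would boil down to something like:

// Derived from "C(i, j) += A(i, k) * B(k, j)": three loops (i, j, k); k only
// appears on the right-hand side, hence it is a reduction dimension.
SmallVector<StringRef, 4> iterators{"parallel", "parallel", "reduction"};
// Operand access maps over the (i, j, k) loops: A(i, k), B(k, j), C(i, j).
SmallVector<AffineMap, 4> maps{AffineMap::get(3, 0, {i, k}),
                               AffineMap::get(3, 0, {k, j}),
                               AffineMap::get(3, 0, {i, j})};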

Where "something like this" is a feature that is generic enough to generalize and be reused across dialects, like D73405 :)

And yet we did not wait for having a nice declarative generic assembly form to be able to have Tablegen-backed ops, did we?
As with everything that has ever been contributed to MLIR, things are done on a per-need basis. How is this different?

Now if you happen to have cycles to help with this, I’ll be thrilled! So could you please describe uses of einsum-like notation (or generalizations) for other dialects? Still, does coming up with such examples and a common solution need to slow down the progress of Linalg?

I don't remember you talking to me (or Jacques) about this 3 months ago

We originally talked about this with Jacques, Lei and River independently for some time, most recently back around Nov; there is an open internal bug that I can reference when I come back from vacation. At the time I don't remember you being involved or having contributed to the Tablegen aspects, so it makes sense you may be missing context. Note that at the time, custom Tablegen backends were explicitly suggested as a solution (this is based on my recollection though, so it may be a bit fuzzy after all this time :) I'd have to look at the bug for details). Now that I need it for growing the set of Linalg ops, I am trying to make progress. This also happens to be an explicit KR; feel free to look it up internally, or I can point you to it when I am back in the office.

Bottom line: while I appreciate the objective of being future-proof and general, I do not see a justification for blocking on this in this instance, place and time. I think this goes contrary to how MLIR contributors have been making progress: on a per-need basis.

If the worry is that mlir-tblgen should not be augmented with DSL-specific behavior, I agree, and I am happy to spin off a Linalg-specific binary that will be NFC for everything else in Tablegen.

More generally, I think we are touching on a broader point: we want to allow and even encourage progress in specialized areas/dialects. When it is time for refactoring and generalization, we will do it, as usual, on a per-need basis. A very good example of this is OpInterfaces, which refactored and reused the Model/Concept you yourself proposed to solve the "inheritance" issue Linalg had in the very beginning. This type of iteration takes months and benefits from existing concrete use cases.

Please let me know something actionable that we can apply to unblock the declarative specification of named Linalg ops.

Thanks @joker-eph!

we didn't really get an answer to the questions. I also mentioned

I believe I did answer very clearly: the objective is to evolve into an einsum-like specification. I'll paste it here again: a string such as "C(i, j) += A(i, k) * B(k, j)" is enough to specify the linalg.generic configuration for a named matmul op, and this generalizes to many ops.

I don't see how this patch is going in this direction though.

If the goal is to write TensorComprehension specifications for the ops, it seems like you may end up generating ODS directly from such specifications. In any case, we could discuss this direction, but this is another thing that you didn't bring up before.

Where "something like this" is a feature that is generic enough to generalize and be reused across dialects, like D73405 :)

And yet we did not wait for having a nice declarative generic assembly form to be able to have Tablegen-backed ops, did we?
As with everything that has ever been contributed to MLIR, things are done on a per-need basis. How is this different?

When we didn't have this feature, we were writing inline C++ in ODS, or adding extra methods that were calling the C++ implementation.

Now if you happen to have cycles to help with this, I’ll be thrilled! So could you please describe uses of einsum-like notation (or generalizations) for other dialects?

Still, does coming up with such examples and a common solution need to slow down the progress of Linalg?

You haven't clarified how this is "slowing down the progress of Linalg" so far. But at the same time, "Linalg" is a very fuzzy thing for most people...

More generally, I think we are touching on a broader point: we want to allow and even encourage progress in specialized areas/dialects. When it is time for refactoring and generalization, we will do it, as usual, on a per-need basis. A very good example of this is OpInterfaces, which refactored and reused the Model/Concept you yourself proposed to solve the "inheritance" issue Linalg had in the very beginning. This type of iteration takes months and benefits from existing concrete use cases.

Please let me know something actionable that we can apply to unblock the declarative specification of named Linalg ops.

The immediate actionable direction I can provide is the same as before: please start answering questions, and trying to build more consensus behind what you're trying to achieve. I don't think being more and more pushy on this is gonna suddenly enlighten anyone and convince them, at least not me.
So I'll re-ask:

  1. It isn't clear what is really blocked here or where the urgency is, especially with respect to the current alternatives, which I'd like to clarify. You just claimed progress is being blocked, without specifics.
  2. Jacques' question is a simple yes or no: "Just to be clear, so all of this is so that you can write"
let input_indexing_maps = [AffineExpressions<["i", "r_j"]>,
                             AffineExpressions<["r_j"]>];
let output_indexing_maps = [AffineExpressions<["i"]>];

along with the op and from this you are generating

return SmallVector<AffineMap, 4>{AffineMap::get(2, 0, {i, j}),
                                   AffineMap::get(2, 0, {j}),
                                   AffineMap::get(2, 0, {i})};

Slowly getting back to this after vacation; I'll rebase on top of an NFC change so people see less noise.