This is an archive of the discontinued LLVM Phabricator instance.

[mlir][tensors] Introduce attribute interface/attribute for tensor encoding
ClosedPublic

Authored by aartbik on Apr 21 2021, 6:53 PM.

Details

Summary

Until now, the new "encoding" field in tensor types had no meaning. This revision introduces:

  1. an encoding attribute interface in IR, for general verification between tensors and encodings
  2. an attribute in the Tensor dialect: #tensor.sparse<dict>, plus a concrete sparse tensor API

Active discussion:
https://llvm.discourse.group/t/rfc-introduce-a-sparse-tensor-type-to-core-mlir/2944/

Diff Detail

Event Timeline

aartbik created this revision.Apr 21 2021, 6:53 PM
aartbik requested review of this revision.Apr 21 2021, 6:53 PM
mehdi_amini added inline comments.Apr 21 2021, 7:18 PM
mlir/include/mlir/Dialect/Tensor/IR/TensorInterfaces.td
63 ↗(On Diff #339442)

This is a good addition, but as Sean suggested, this makes sense on a separate interface that can be implemented by every encoding attribute.

148 ↗(On Diff #339442)

Is there a more efficient storage and a more convenient interface than an AffineMap to express a permutation for the client?
Like: LogicalResult getDimOrdering(SmallVectorImpl<int> &ordering), which could just populate the vector with the position for each dimension.

It may also be worth having a "bool hasPermutation()" or something like that as a fast path for implementations that are the identity (otherwise the implementation is forced to instantiate an affine map on the fly just to answer the query).
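
For illustration, such an API could be shaped roughly like this (a hypothetical sketch; the names and signatures are illustrative only, not part of the patch):

#include "mlir/Support/LogicalResult.h"
#include "llvm/ADT/SmallVector.h"

using namespace mlir;

// Hypothetical interface shape for the suggestion above; not actual patch code.
class DimOrderingQuery {
public:
  virtual ~DimOrderingQuery() = default;

  // Fast path: returning false means the ordering is the identity, so callers
  // can skip materializing any map just to answer the query.
  virtual bool hasPermutation() const { return false; }

  // Populates `ordering` with the storage position of each dimension,
  // e.g. {1, 0} for a transposed 2-D layout.
  virtual LogicalResult getDimOrdering(SmallVectorImpl<int> &ordering) const = 0;
};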

185 ↗(On Diff #339442)

I suspect you're documenting the dictionary-based implementation instead of the interface here.
(I don't think there can be a "default" value for an API that does not return an optional).

212 ↗(On Diff #339442)

I think these members are implementation details and shouldn't be part of the interface.

Also, all of the defaultImplementation bodies here seem to belong to a particular implementation.
I would not provide any of them here and instead implement all of them on the SparseTensorEncodingAttr concrete implementation.

aartbik added inline comments.Apr 21 2021, 10:55 PM
mlir/include/mlir/Dialect/Tensor/IR/TensorInterfaces.td
63 ↗(On Diff #339442)

Yes, I was going to add an interface like "VerifiableTensorEncoding" with just a single verify method to core "IR" (you will see commented-out code in the current revision at the parser and verifier, which will be replaced by a cast<> + verify(tensor); the new attribute will implement that method, as will future attributes that can be used for encoding).

I just wanted to get some initial feedback on the current design with some running code, but I will fill in the details once we agree on the overall design.

aartbik retitled this revision from [mlir][tensors] Introduce an attribute and attribute interface for tensor encoding to [mlir][tensors] Introduce attribute interface/attribute for tensor encoding.Apr 22 2021, 10:04 AM
aartbik edited the summary of this revision. (Show Details)
aartbik marked 2 inline comments as done.Apr 22 2021, 10:37 AM
aartbik added inline comments.
mlir/include/mlir/Dialect/Tensor/IR/TensorInterfaces.td
63 ↗(On Diff #339442)

The new revision has all the missing components.

148 ↗(On Diff #339442)

I prefer to keep this an AffineMap, because block sparsity may need higher-dimension to lower-dimension mappings in the future; see the sketch below. An AffineMap seems like the right abstraction here.
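
For illustration, such a non-permutation map could be built as follows (a hedged sketch, not code from this patch; the blocking factors are made up):

#include "mlir/IR/AffineExpr.h"
#include "mlir/IR/AffineMap.h"
#include "mlir/IR/MLIRContext.h"

using namespace mlir;

// Illustrative only: an AffineMap can express more than a plain permutation,
// e.g. a hypothetical 2-D to 4-D blocked layout for block sparsity.
AffineMap buildBlockedLayout(MLIRContext *ctx) {
  AffineExpr i = getAffineDimExpr(0, ctx);
  AffineExpr j = getAffineDimExpr(1, ctx);
  // (i, j) -> (i floordiv 2, j floordiv 3, i mod 2, j mod 3)
  return AffineMap::get(/*dimCount=*/2, /*symbolCount=*/0,
                        {i.floorDiv(2), j.floorDiv(3), i % 2, j % 3}, ctx);
}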

silvas added inline comments.Apr 22 2021, 10:52 AM
mlir/include/mlir/Dialect/Tensor/IR/TensorInterfaces.td
40 ↗(On Diff #339442)

I still don't think that we want an interface for this. I made a fairly strong argument that either adding the interface later is trivial, or we would get the interface design wrong in the first place:

See: https://llvm.discourse.group/t/rfc-introduce-a-sparse-tensor-type-to-core-mlir/2944/57?u=_sean_silva

You should be able to just move all these methods onto SparseEncodingAttr.

silvas requested changes to this revision.Apr 22 2021, 10:52 AM
This revision now requires changes to proceed.Apr 22 2021, 10:52 AM
aartbik updated this revision to Diff 339722.Apr 22 2021, 10:53 AM

I am now comfortable enough with the attribute interfaces
to move them into "IR", which effectively completes my
dream of making sparse tensors a first-class citizen in MLIR.

PTAL

aartbik added inline comments.Apr 22 2021, 10:54 AM
mlir/include/mlir/Dialect/Tensor/IR/TensorInterfaces.td
40 ↗(On Diff #339442)

Your comment just crossed the new revision. Hopefully the interface makes a lot more sense in the new format.

I still would rather see this as just attributes on the SparseTensorAttr class, until there is a demonstrated need to make this an interface (and that this particular interface is the right choice). But I don't have the energy to argue it further. Carry on.

aartbik added a comment.EditedApr 22 2021, 11:19 AM

I still would rather see this as just attributes on the SparseTensorAttr class, until there is a demonstrated need to make this an interface (and that this particular interface is the right choice). But I don't have the energy to argue it further. Carry on.

It is not my intention to wear you out ;-), just to arrive at the right design. I was struggling a bit myself to get this right, so I may have gone back and forth too many times. Please give the latest revision one more careful look.

  1. In “IR”, we introduce two attribute interfaces:

          VerifiableTensorEncoding: an API for verifying that an encoding matches the tensor type, used by the parser/verifier for "single point" verification
          SparseTensorEncoding: an API for a sparse tensor (nothing else), making it a first-class citizen; note that this follows an extended TACO style, so you have to trust us a bit that it has the right components

  2. In “Tensor Dialect” we introduce a concrete attribute #tensor.sparse<dict> which implements both interfaces

silvas added a comment.

          SparseTensorEncoding: an API for a sparse tensor (nothing else), making it a first-class citizen; note that this follows an extended TACO style, so you have to trust us a bit that it has the right components

I trust you that it has the right components. But even assuming that:

  • if it has the right components, my gut is that we shouldn't need an attribute interface (users can just use the attribute directly). Or do you expect there to be multiple attributes that encode this information in different ways (seems superfluous if we already have this in core) or which have additional information that #tensor.sparse doesn't include (what are those use cases?).
  • the assumption that an *attribute* interface is the right thing is not obvious to me. E.g. a !taco.tensor type might not put the info in an attribute.

Note that there is nothing second-class about #tensor.sparse. Having something in the IR directory doesn't really confer any blessed benefit (and we could move the SparseTensorEncodingAttr there if there was any).

In fact, I think that having #tensor.sparse in the tensor dialect, and not needing to touch the IR/ directory, is a validation of the effectiveness of our "encoding" design choice -- the whole point was to make this something that an open-ended ecosystem could be built out around, *without* touching IR/ or an interface directory that everything else depends on. Indeed, that was the entire goal of the "encoding" design in the first place.

aartbik added a comment.EditedApr 22 2021, 2:55 PM
  • if it has the right components, my gut is that we shouldn't need an attribute interface (users can just use the attribute directly). Or do you expect there to be multiple attributes that encode this information in different ways (seems superfluous if we already have this in core) or which have additional information that #tensor.sparse doesn't include (what are those use cases?).

Thanks Sean. At some point it of course becomes a bit subjective, but since we already need to add the VerifiableTensorEncoding attribute interface to "IR" to ensure single-point verification of all future encodings, it seemed right to bundle all abstract APIs for the encoding, such as the conceptually agreed-upon SparseTensorEncoding API, in the same place. And indeed other groups already have other attributes for TACO-style encoding, and they could migrate simply by implementing the interface (although, arguably, they could also just write an attribute converter).

We don't have to decide today. Let's see if others have a preference one way or the other....

aartbik updated this revision to Diff 339791.Apr 22 2021, 2:56 PM

windows build fix

mehdi_amini added a comment.

  • if it has the right components, my gut is that we shouldn't need an attribute interface (users can just use the attribute directly). Or do you expect there to be multiple attributes that encode this information in different ways (seems superfluous if we already have this in core) or which have additional information that #tensor.sparse doesn't include (what are those use cases?).

Thanks Sean. At some point it of course becomes a bit subjective, but since we already need to add the VerifiableTensorEncoding attribute interface to "IR" to ensure single-point verification of all future encodings, it seemed right to bundle all abstract APIs for the encoding, such as the conceptually agreed-upon SparseTensorEncoding API, in the same place.

I think Sean is actually disputing that the sparse API is an abstract one. The VerifiableTensorEncoding is undisputed, as it applies to every possible encoding attribute.

And indeed other groups already have other attributes for TACO-style encoding,

That's interesting: could they just use the concrete SparseEncoding attribute, or do they store a different kind of information on top of what you store here (which would motivate the interface as an extensibility point)?
Any pointer to the needs there would be welcome, I think.

aartbik added a comment.EditedApr 22 2021, 5:45 PM

Any pointer to the needs there would be welcome, I think.

From the slides, the Comet project has tensor algebra operations like the one below. Even though it looks like an ordinary dictionary attribute is used, the attribute also looks sufficiently complex that they may at some point want to define it in a dialect, while implementing the SparseTensorEncoding API to connect to MLIR backends. I have pinged Gokcen to see what she thinks.

"ta.tc"(%1,%2,%3){ ... other info ...., f64, formats=["Dense", "Dense", "Dense"], indexing_maps = [ ... affine maps ... ] } : tensor<32x32x32,f64>, ...
aartbik edited the summary of this revision. (Show Details)Apr 26 2021, 12:55 PM
aartbik updated this revision to Diff 340625.Apr 26 2021, 1:25 PM

after discussion offline:

(1) keep attribute interface for verification in IR
(2) keep sparse tensor attribute + concrete interface in Dialect/Tensor
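
As an illustration of (1), single-point verification in the parser/verifier could be wired up roughly like this (a hedged sketch; it assumes the generated VerifiableTensorEncoding interface described above, and the exact method name and signature may differ from the patch):

#include "mlir/IR/BuiltinTypes.h"
#include "mlir/IR/Diagnostics.h"
#include "mlir/Support/LogicalResult.h"

using namespace mlir;

// Sketch only: dispatch verification to whatever encoding attribute is
// attached to the tensor type, if it implements the interface.
static LogicalResult
verifyTensorEncoding(RankedTensorType type,
                     function_ref<InFlightDiagnostic()> emitError) {
  if (Attribute enc = type.getEncoding())
    if (auto verifiable = enc.dyn_cast<VerifiableTensorEncoding>())
      return verifiable.verifyEncoding(type.getShape(), type.getElementType(),
                                       emitError);
  // No encoding, or an encoding that does not implement the interface.
  return success();
}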

aartbik added inline comments.Apr 26 2021, 1:28 PM
mlir/include/mlir/Dialect/Tensor/IR/TensorOps.td
144

ooops this should not be here, hold on....

aartbik updated this revision to Diff 340630.Apr 26 2021, 1:37 PM

ready for review

silvas accepted this revision.Apr 26 2021, 3:33 PM

LGTM from me. Please get LGTM from someone with sparse compiler expertise on the exact API for SparseTensorEncodingAttr (choice of default, what would be most ergonomic, etc.).

mlir/lib/Dialect/Tensor/IR/TensorDialect.cpp
26

prefer "static" intsead of anonymous namespace in this case: https://llvm.org/docs/CodingStandards.html#anonymous-namespaces

This revision is now accepted and ready to land.Apr 26 2021, 3:33 PM
rriddle added inline comments.Apr 26 2021, 3:36 PM
mlir/lib/Dialect/Tensor/IR/TensorDialect.cpp
129

nit: These templates don't look necessary.

aartbik marked 2 inline comments as done.Apr 26 2021, 4:09 PM
aartbik added inline comments.
mlir/lib/Dialect/Tensor/IR/TensorDialect.cpp
26

static it is

129

Good eye! This was a leftover from when it was still an attribute interface.

aartbik updated this revision to Diff 340679.Apr 26 2021, 4:13 PM
aartbik marked 2 inline comments as done.

addressed comments

bixia accepted this revision.Apr 26 2021, 4:46 PM
bixia added a subscriber: bixia.
bixia added inline comments.
mlir/test/Dialect/Tensor/invalid_sparse_tensor.mlir
31–38

Do we want to add similar tests for sparsePointerBitWidth, or change one of these two tests to use sparsePointerBitWidth?

penpornk accepted this revision.Apr 26 2021, 4:50 PM

(Not a compiler expert.)
SparseTensorEncodingAttr looks good to me. It follows the TACO format and is sufficient for what TFLite and TF are using/plan to use.

mlir/include/mlir/Dialect/Tensor/IR/TensorAttrDefs.td
46

The codegen doesn't support Singleton yet, right? But I agree that adding Singleton in advance doesn't seem to hurt since COO is prevalent enough that we would want to support it anyway.

79

Does this mean that instead of defining data types for all of index, pointer, and value, the user only needs to specify the type for the value, and just the number of bits for index and pointer (since those are always integral)? Do we treat them as unsigned types? Also, what about arbitrary widths (e.g., uint6)?

mlir/include/mlir/Dialect/Tensor/IR/TensorInterfaces.td
148 ↗(On Diff #339442)

Agreed with AffineMap. It's necessary for describing block sparsity, e.g., actual_i = i * block_i + ii.

aartbik marked 3 inline comments as done.Apr 26 2021, 4:57 PM
aartbik added inline comments.
mlir/include/mlir/Dialect/Tensor/IR/TensorAttrDefs.td
46

Yes, Fredrik asked me to include this already. And it will enable me to add COO without having to touch these parts again :-)

79

Yeah, they are all unsigned in principle (but with the caveat documented in the codegen part, which for now is an implementation detail). The interface allows for arbitrary widths in the future (is there a need for those?), but currently verifies the documented choices only.

mlir/test/Dialect/Tensor/invalid_sparse_tensor.mlir
31–38

Added one more test.

aartbik marked 3 inline comments as done.Apr 26 2021, 5:03 PM

(Not a compiler expert.)

But a sparse expert for sure!!!!

aartbik updated this revision to Diff 340692.Apr 26 2021, 5:07 PM

added one extra test case

mehdi_amini added inline comments.Apr 26 2021, 9:13 PM
mlir/include/mlir/Dialect/Tensor/IR/TensorAttrDefs.td
39

It isn't clear to me that we need a dictionary here.
Why can't we explicitly list the members instead?

The whole implementation is pretty noisy, and there is too much unnecessary indirection because of this dictionary. This is akin to implementing a regular C struct with some sort of map<string, value> instead.
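
To make the contrast concrete, here is a hedged sketch of a struct-like builder signature (the enum and all names here are illustrative, not the actual patch):

#include "mlir/IR/AffineMap.h"
#include "mlir/IR/Attributes.h"
#include "mlir/IR/MLIRContext.h"
#include "llvm/ADT/ArrayRef.h"

using namespace mlir;

// Hypothetical dimension-level kinds, standing in for whatever enum the
// concrete attribute would define.
enum class DimLevelType { Dense, Compressed, Singleton };

// Struct-like builder: every field is typed and checked at compile time,
// unlike a map<string, value>-style dictionary that can contain anything.
Attribute buildSparseEncoding(MLIRContext *ctx,
                              llvm::ArrayRef<DimLevelType> dimLevelTypes,
                              AffineMap dimOrdering, unsigned pointerBitWidth,
                              unsigned indexBitWidth);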

65

An affine map always seems too unconstrained to me as an API: it is a complex data structure that can model much more than a permutation/contraction. What kind of verifier exists to prevent any confusion here?
(And isn't there a more suitable API than an affine map to model this?)

silvas added inline comments.Apr 27 2021, 12:08 PM
mlir/include/mlir/Dialect/Tensor/IR/TensorAttrDefs.td
39

I was thinking that too, but after a little bit of thought, it felt like it would end up being more code (printing, parsing, verifying, tablegen) in the end. Maybe I'm not familiar enough with the tools we have for doing this.

mehdi_amini added inline comments.Apr 27 2021, 1:26 PM
mlir/include/mlir/Dialect/Tensor/IR/TensorAttrDefs.td
39

Why would there be more code for verifying? Right now we will have a builder with a dictionary that can contain anything; you need to verify much more than with a builder that takes an ArrayRef<DimLevelType>, for example.

silvas added inline comments.Apr 27 2021, 1:31 PM
mlir/include/mlir/Dialect/Tensor/IR/TensorAttrDefs.td
39

Sorry, perhaps not for verifying. I'm not very familiar with this TableGen stuff. I just remember doing it for a custom type, and going fully custom was a lot more work than I anticipated.

mehdi_amini added a comment.

Ping @aartbik ?

aartbik added a comment.

Sorry, I did not see the follow-up discussion until you pinged me. I indeed picked a dictionary since it seemed easiest for parsing/building (since fields are optional, at first glance making custom parsers/builders seemed like a lot of work to deal with absence/presence, arbitrary order, etc.), with indeed just slightly more code in verification to make sure the key-value pairs are as expected. This way, I can also extend the attribute with a new key-value pair easily by just adding a few lines to the verifier + API; the parsers/builders remain untouched. Lastly, I like the syntax of having named fields in the "struct" ;-)

#CSR = #tensor.sparse< {

sparseDimLevelType = [ "dense", "compressed" ],
sparseDimOrdering = affine_map<(i,j) -> (i,j)>,
sparsePointerBitWidth = 64,
sparseIndexBitWidth = 64

}>

I may be wrong there, simply from not having written many custom attributes before. Perhaps all this (named values, optional values, arbitrary order, etc.) is just as easy there.

But, genuinely curious, does it matter a great deal? The parsing overhead and memory footprint of a sparse tensor type do not really seem to be on any critical path.

But, as always, I am happy to rewrite this as a struct-like attribute if there is a strong push; I hope that my arguments above at least make some sense.

(and sorry for not responding to post-commit earlier)

mehdi_amini added a comment.EditedApr 28 2021, 10:40 AM

Ping @aartbik ?

Sorry, I did not see the follow-up discussion until you pinged me. I indeed picked a dictionary since it seemed easiest for parsing/building (since fields are optional, at first glance making custom parsers/builders seemed like a lot of work to deal with absence/presence, arbitrary order, etc.), with indeed just slightly more code in verification to make sure the key-value pairs are as expected. This way, I can also extend the attribute with a new key-value pair easily by just adding a few lines to the verifier + API; the parsers/builders remain untouched. Lastly, I like the syntax of having named fields in the "struct" ;-)

Not that I agree with you here, but it does not matter: none of these points, I believe, would impact the attribute definition. You could still parse just a dictionary attribute without storing a dictionary in the attribute itself (i.e., parse a dictionary, and then extract the information to build the attribute itself).

But, genuinely curious, does it matter a great deal? The parsing overhead and memory footprint of a sparse tensor type do not really seem to be on any critical path.

I don't really mind the parsing; I mind the API that will be exposed in C++ to all the users of this class: I don't want to see a builder for this attribute based on a dictionary, for example, nor should the storage be based on it.
My point is that the C++ code that will produce this attribute and read this attribute should be as streamlined, safe, and efficient as any other usual C++ API.

aartbik added a comment.

My point is that the C++ code that will produce this attribute and read this attribute should be as streamlined, safe, and efficient as any other usual C++ API.

That is actually a strong argument, especially since we want this type to be easily generated by front-ends eventually too.
Ok, I think you convinced me. I will start migrating this to more struct-like builders...

We will, of course, still cross our swords, purely metaphorically, on the use of an affine map ;-)