This is an archive of the discontinued LLVM Phabricator instance.

[mlir][sparse] Factoring out SparseTensorType class
ClosedPublic

Authored by wrengr on Feb 10 2023, 6:27 PM.

Details

Summary

This change adds a new SparseTensorType class for making the "dim" vs "lvl" distinction more overt, and for abstracting over the differences between sparse-tensors and dense-tensors. In addition, this change also adds new type aliases Dimension, Level, and FieldIndex to make code more self-documenting.
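As a rough illustration of the intent (not the actual contents of the new header), the aliases and wrapper could take a shape like the sketch below; everything beyond the alias names mentioned above (the class name, members, and accessors) is an assumption for illustration only.

```cpp
// Rough illustration only; names other than the aliases from the summary are
// assumptions, not the contents of SparseTensorType.h.
#include "mlir/Dialect/SparseTensor/IR/SparseTensor.h"
#include "mlir/IR/BuiltinTypes.h"
#include <cstdint>

namespace mlir {
namespace sparse_tensor {

using Dimension = uint64_t;  // index into the tensor's semantic shape ("dim")
using Level = uint64_t;      // index into the storage-level scheme ("lvl")
using FieldIndex = unsigned; // index into the storage-layout fields

// A thin wrapper pairing a RankedTensorType with its (possibly null) sparse
// encoding, so sparse and dense tensors can be handled uniformly.
class SparseTensorTypeSketch {
public:
  explicit SparseTensorTypeSketch(RankedTensorType rtp)
      : rtp(rtp), enc(getSparseTensorEncoding(rtp)) {}

  // Dense tensors simply have a null encoding.
  bool hasEncoding() const { return static_cast<bool>(enc); }
  Dimension getDimRank() const { return static_cast<Dimension>(rtp.getRank()); }

private:
  RankedTensorType rtp;
  SparseTensorEncodingAttr enc;
};

} // namespace sparse_tensor
} // namespace mlir
```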

Although the diff is very large, the majority of the changes are mechanical in nature (e.g., changing types to use the new aliases, updating variable names to match, etc.). Along the way I also made many variables const when they could be, the majority of which required only adding the keyword. A few places had conditional definitions of these variables, requiring actual code changes; however, that was only done when the overall change was extremely local and easy to extract. All these changes are included in the current patch only because it would be too onerous to split them off into a separate patch.

Diff Detail

Event Timeline

wrengr created this revision. Feb 10 2023, 6:27 PM
Herald added a project: Restricted Project.
wrengr requested review of this revision. Feb 10 2023, 6:27 PM

@stella.stamenova This patch introduces some deprecation warnings, which are intended for other folks on our team to help track down the offending use-sites so they can fix them. Is there anything I need to do to ensure the LLVM buildbot doesn't reject this patch because of the new warnings?


It will fail because it enforces warnings as errors - are there a lot of warnings that will be generated and will it take long to fix them? I can see us living with a red bot for a day assuming that people are working on applying fixes as necessary.

It will fail because it enforces warnings as errors

Yeah, that's what I was guessing. I'm currently defining the deprecation annotation with a macro, so it'd be easy enough to (conditionally) disable. Since the buildbot enforces warnings as errors, I'm guessing there's no standard LLVM idiom for "only enable this macro when explicitly requested"?

are there a lot of warnings that will be generated and will it take long to fix them? I can see us living with a red bot for a day assuming that people are working on applying fixes as necessary.

I forget how many call sites remain for the deprecated functions, but there are a dozen or so, IIRC. Unfortunately they're not terribly easy to fix. (The functions are deprecated because they rely on some faulty assumptions, so the deprecation is there to help track down all the places we made those faulty assumptions; but fixing them might require nontrivial code restructuring.) The team's working on it, but it might take more than a day.

I'll un-deprecate them for now

One thing that we could do is disable -Werror on the buildbot for the moment and then re-enable it later. It's not terribly difficult, but it relies on the buildbot master being restarted to take effect, so it might not even take effect before the warnings are fixed.

I don't want to cause more work for y'all. I just wanted to double-check before landing, to avoid getting auto-reverted :)

I've already added "fixme" comments at all the callsites I could find, so I'm fine with wrapping the deprecation macro in #if 0 so it's easy to re-enable locally as needed.
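For context, a deprecation annotation wrapped in a macro can be guarded so that it is off by default yet trivial to flip back on locally. The sketch below is illustrative only; the macro name and the example function are placeholders, not what this patch actually defines.

```cpp
// Illustrative only: a deprecation macro disabled by default, re-enabled by
// flipping the guard locally.  The macro name is a placeholder.
#include <cstdint>

#if 0 // Flip to 1 locally to surface the deprecation warnings again.
#define MLIR_SPARSETENSOR_DEPRECATED(MSG) [[deprecated(MSG)]]
#else
#define MLIR_SPARSETENSOR_DEPRECATED(MSG)
#endif

// Example: annotate a function whose faulty assumptions we want to track down.
MLIR_SPARSETENSOR_DEPRECATED("prefer the dim/lvl-aware query")
int64_t getLegacyRank();
```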

wrengr updated this revision to Diff 497167. Feb 13 2023, 6:13 PM

Disabling the deprecation warnings, and adding a few static_casts to equality assertions between different types.

Peiming added inline comments. Feb 14 2023, 9:48 AM
mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensor.h
52

Are we going to use Ship for real? How about DynSize or something else?

mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorType.h
102

Might be helpful to provide operator== so we can write rtp1 == rtp2 for SparseTensorType and get the same pointer-comparison semantics as RankedTensorType.
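A minimal sketch of such an operator== follows; it assumes the wrapper exposes its underlying RankedTensorType (the accessor name here is a guess), and relies on RankedTensorType being uniqued so that equality behaves like pointer comparison. This is illustrative, not the code that eventually landed.

```cpp
// Sketch only: the accessor name getRankedTensorType() is an assumption.
// RankedTensorType is uniqued, so comparing the wrapped types gives the same
// pointer-equality semantics as comparing RankedTensorType values directly.
inline bool operator==(const SparseTensorType &lhs,
                       const SparseTensorType &rhs) {
  return lhs.getRankedTensorType() == rhs.getRankedTensorType();
}
inline bool operator!=(const SparseTensorType &lhs,
                       const SparseTensorType &rhs) {
  return !(lhs == rhs);
}
```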

aartbik added inline comments. Feb 14 2023, 10:36 AM
mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensor.h
52

We could remove the TODO and use DynSize for now?

105

update doc too to say level?

122

If this is just to avoid the error on Windows, can we make it a Linux-only guard or something similar?
(As you know, commented-out code is usually frowned upon, even when it's there for a good reason ;-)

129

stored dim, i.e. level
(or just level)?

134

stored dimension -> level?

EDIT: will not be commenting on these further, since you also plan a doc cleanup after this

mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorAttrDefs.td
264

Although true in general, these are methods *on* the encoding, right?
So I am not sure we want to say this?

EDIT: Ah, later I see this is related to the getImpl() change you added, still very subtle ;-)

mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorType.h
49

In this case, it is more of a pre-compute than memoization, right? I know that it does not matter much, but I have a slightly different interpretation when I see memoization ;-)
[from my Prolog days ;-)]

mlir/lib/Dialect/SparseTensor/IR/SparseTensorDialect.cpp
413–416

can we just else-if this?

wrengr updated this revision to Diff 497441. Feb 14 2023, 2:21 PM
wrengr marked 7 inline comments as done.

Addressing comments

mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensor.h
52

I'm not a big fan of DynSize personally, since I feel it conflates (1) the type where the kDynamic token itself lives with (2) the type where actual dynamic sizes live (i.e., the compiler's type for the runtime values; hence, a particular subset of Value). This distinction becomes particularly salient in a different CL I'm working on, which defines a variant of OpFoldResult that encompasses both static and dynamic types without losing track of which is which. One of the motivations for that CL is to clean up a bunch of places where we currently have ad-hoc versions of mlir::getMixedValues and similar utilities from Dialect/Utils/StaticValueUtils.h.

Also, that name obfuscates that the alias is the singular of "shape" rather than of "sizes" ;)

But I'll make the change for now, for the sake of not blocking the rest of this patch.

122

Yeah, this is to avoid getting rejected by the LLVM buildbot. I could enable it for non-Windows, but I'm guessing their Linux bot is also configured with -Werror...

I'd love to have this be triggered by a "for developers" flag, so that our team automatically sees it but end users don't have to. Alas, afaik there's no such flag already in our toolchains. (Of course, even if there were, Blaze disables deprecation warnings by default, so folks would only see it when building via CMake.)

mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorAttrDefs.td
264

Adding this comment for the sake of posterity (since you already figured it out :)

In the implementations of isAllDense, isAllOrdered, hasIdDimOrdering, etc., the methods check getImpl() before checking the rest of the condition. Adding that check does three things:

(1) it avoids segfaults from calling these methods on the null-attr,
(2) it avoids the boilerplate of needing to use the condition (enc && enc.isAllDense()) at every callsite,
(3) and it unifies handling of sparse-tensors and dense-tensors, since the condition in the second point is the one we actually care about.

Since the semantic unification of the third point is a specific and intentional goal of the implementation, it makes sense to document that fact.
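To make the pattern concrete, here is a sketch of what such a method body might look like; the getter and helper names (getDimLevelType, isDenseDLT) are assumptions about the surrounding code, not a quote of the patch.

```cpp
// Sketch of the getImpl()-first pattern described above; getter and helper
// names are assumptions.  Uses llvm::all_of from llvm/ADT/STLExtras.h.
bool SparseTensorEncodingAttr::isAllDense() const {
  // A null encoding denotes an unannotated (dense) tensor, which is trivially
  // all-dense; checking getImpl() first makes the null case safe and lets
  // callers drop the `enc && enc.isAllDense()` boilerplate.
  if (!getImpl())
    return true;
  return llvm::all_of(getDimLevelType(),
                      [](DimLevelType dlt) { return isDenseDLT(dlt); });
}
```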

mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorType.h
49

True, this isn't tabulation like you get in Prolog/Datalog/etc.; but it is still caching a memo in order to avoid recomputing the value every time it's needed. [Owing to my experience with Haskell, Dyna, AliceML, etc., I think of "memoization" as more than just dynamic programming; but afaik my use of the term isn't at odds with how it's used elsewhere :)]

Whereas I interpret "precompute" to mean multi-pass algorithms that build up some intermediate/auxiliary data structure (e.g., to make queries faster than going to the original data, or like our SparseTensorNNZ class).

102

It looks like operator== was one of the things I split off into a separate CL. Did you want me to merge it back into this CL, or is a separate CL fine?

wrengr updated this revision to Diff 497444. Feb 14 2023, 2:22 PM

git-clang-format

wrengr added inline comments. Feb 14 2023, 3:17 PM
mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorType.h
102
wrengr marked an inline comment as done. Feb 14 2023, 3:22 PM
aartbik accepted this revision. Feb 14 2023, 6:03 PM
aartbik added inline comments.
mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensor.h
52

Ah, yeah, I remember the discussion on singular vs. plural. Yeah, finding the right names can be hard ;-(

122

Agreed that would be nice. But this is fine now as far as I am concerned.

mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorType.h
49

Fair enough!

This revision is now accepted and ready to land. Feb 14 2023, 6:03 PM
This revision was automatically updated to reflect the committed changes.