This is an archive of the discontinued LLVM Phabricator instance.

[mlir][sparse][linalg] add linalg rewriting specific to sparse tensors
ClosedPublic

Authored by aartbik on Feb 23 2022, 12:42 PM.

Details

Summary

Now that sparse tensor types are first-class citizens and the sparse compiler
is taking shape, it is time to make sure other compiler optimizations compose
well with sparse tensors. Mostly, this should be completely transparent (i.e.,
dense and sparse take the same path). However, in some cases, optimizations
only make sense in the context of sparse tensors. This is a first example of
such an optimization, where fusing a sampled elt-wise multiplication only makes
sense when the resulting kernel has a potential lower asymptotic complexity due
to the sparsity.

As an extreme example, running SDDMM with 1024x1024 matrices and a sparse
sampling matrix with only two elements runs in 463.55ms in the unfused
case but just 0.032ms in the fused case, with a speedup of 14485x that
is only possible in the exciting world of sparse computations!
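To make the asymptotics concrete, here is a hypothetical sketch (plain Python, purely illustrative, not the MLIR code in this patch) of the unfused and fused forms of SDDMM, with the sparse sampling matrix S given as a dictionary of its nonzeros:

```python
# Hypothetical sketch (not the MLIR from this patch): SDDMM computes
# S .* (A x B), where S is a sparse "sampling" matrix.

def sddmm_unfused(S, A, B):
    """Unfused: full dense matmul first -- O(n^3) work regardless of S."""
    n = len(A)
    C = [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
         for i in range(n)]
    # Then sample: elementwise multiply by S, given as {(i, j): value}.
    return {(i, j): s * C[i][j] for (i, j), s in S.items()}

def sddmm_fused(S, A, B):
    """Fused: dot products only at S's nonzeros -- O(nnz * n) work."""
    n = len(A)
    return {(i, j): s * sum(A[i][k] * B[k][j] for k in range(n))
            for (i, j), s in S.items()}
```

With n = 1024 and only two nonzeros in S, the fused form performs roughly 2 * 1024 multiply-adds instead of about a billion, which is where the reported speedup comes from.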

Diff Detail

Event Timeline

aartbik created this revision.Feb 23 2022, 12:42 PM
aartbik requested review of this revision.Feb 23 2022, 12:42 PM
mravishankar requested changes to this revision.Feb 23 2022, 1:11 PM

I haven't reviewed it yet, but could we add this in a separate file? It seems mostly independent.

This revision now requires changes to proceed.Feb 23 2022, 1:11 PM

I haven't reviewed it yet, but could we add this in a separate file? It seems mostly independent.

Happy to oblige, but I was wondering whether we want to find the right shared elt-wise fusion abstractions in the future. If so, keeping it together in one file would make more sense. But just let me know....

aartbik updated this revision to Diff 410951.Feb 23 2022, 3:04 PM

introduced new file for sparse tensor specific rewriting rules

aartbik edited the summary of this revision. (Show Details)Feb 23 2022, 3:05 PM
aartbik retitled this revision from [mlir][sparse][linalg] add kernel fusion in the context of sparse tensors to [mlir][sparse][linalg] add linalg rewriting specific to sparse tensors.Feb 23 2022, 3:18 PM
aartbik updated this revision to Diff 410955.Feb 23 2022, 3:23 PM

fixed a few comments and names

mravishankar requested changes to this revision.Feb 23 2022, 3:39 PM

Overall looks fine. Just a few comments.

mlir/lib/Dialect/Linalg/Transforms/ElementwiseOpFusion.cpp
2254

This should always be a yield op, so you can just use cast<YieldOp>.

mlir/lib/Dialect/Linalg/Transforms/SparseTensorRewriting.cpp
80

It's not required for this patch, but it would be really cool to abstract out the distribution pattern here. Effectively, make the operations involved, i.e. the multiply and the add ops, (template?) parameters of the pattern. Then this generalizes to any set of ops for which the distributive property holds. That's just a wish though, I don't expect that to be implemented :P .
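The wish above can be sketched in a few lines (hypothetical Python, purely illustrative of the idea, not part of the patch):

```python
import operator

# Hypothetical sketch of the wish above: make the two operations parameters
# of the pattern, so any pair of ops satisfying the distributive law can
# reuse it:
#   op1(x, op2(y, z))  ->  op2(op1(x, y), op1(x, z))
def distribute(op1, op2, x, y, z):
    # Return the distributed form of op1(x, op2(y, z)).
    return op2(op1(x, y), op1(x, z))

# Instantiated with multiply and add, this is x * (y + z) -> x*y + x*z.
```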

118

I actually don't see this as a fusion pattern. It changes the order of operations, and satisfies equality only for integers; for floats it would be correct only within a tolerance. So this is an arithmetic transformation. I tend to think of fusion as cases that do not change the order of operations, but rather take operations from different iteration spaces and give them a different schedule while still maintaining the same dependences (the length of the dependence vectors might change, but all dependences are preserved).
I'd suggest renaming this to DistributeMultiplyOverAddPattern (or something more succinct if you can come up with it).
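A quick numeric illustration of the float caveat (an assumed example, not from the patch): distributing the multiply over the add reorders the roundings, so the two forms can differ in the last bit.

```python
# Multiplication distributes over addition exactly for integers, but in
# floating point the two forms round differently:
a, b, c = 10.0, 0.1, 0.2
fused = a * (b + c)          # one add, then one multiply
distributed = a * b + a * c  # two multiplies, then one add
print(fused == distributed)  # False for these values: off by one ulp
```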

135

Do we actually need them to be identity? Would this work as well if they are projected permutations?

156

Nit: Change this to

if (!condition) return failure();
<current body>
179

Nit: I am not sure how much this indirection buys in terms of code clarity. It's calling a method that is also a one-liner.

216

If you agree this is just an arithmetic transformation, I'd rename this to populateDistributeMultiplyOverAdditionPattern (or something more succinct; naming is hard :P ).

This revision now requires changes to proceed.Feb 23 2022, 3:39 PM
aartbik marked 6 inline comments as done.Feb 23 2022, 4:16 PM

Thanks Mahesh!

mlir/lib/Dialect/Linalg/Transforms/SparseTensorRewriting.cpp
80

I agree, but when I was trying to reuse code from the elt-wise fusion module, none of it did exactly what I needed ;-) So I am afraid I need to see one or two more examples before getting a good sense of what could be factored out here.

118

Interesting. Is this nomenclature commonplace? To me, we have two kernels before and one kernel after, so we "fused" the kernels into one ;-)

But I am okay with a name that reflects the arithmetic reordering better. See if you like the new name.

135

In this first instance, yes, because I just do "mimic" at L162.
We could, if needed, account for permutations.

179

Well, if you "inline" it, it actually becomes two/three lines each, and requires braces on the loop. The trick is that assigning to the parameter saves the line for the declaration ;-)

216

I anticipate this file will grow with more sparse tensor rewriting rules, so I kept the name deliberately vague: populateSparseTensorRewriting.

aartbik updated this revision to Diff 410971.Feb 23 2022, 4:33 PM
aartbik marked 4 inline comments as done.

addressed comments

mravishankar accepted this revision.Feb 23 2022, 4:57 PM
This revision is now accepted and ready to land.Feb 23 2022, 4:57 PM
wrengr added inline comments.Feb 23 2022, 5:23 PM
mlir/lib/Dialect/Linalg/Transforms/SparseTensorRewriting.cpp
49–57

Why not simplify this to the suggested edit?

wrengr added inline comments.Feb 23 2022, 5:30 PM
mlir/lib/Dialect/Linalg/Transforms/SparseTensorRewriting.cpp
126–134

Why not combine these ifs via ||?

147–150

ditto

152–156

ditto

aartbik marked an inline comment as done.Feb 24 2022, 9:05 AM
aartbik added inline comments.
mlir/lib/Dialect/Linalg/Transforms/SparseTensorRewriting.cpp
126–134

I will send out a follow-up CL with this, Wren. Thanks!