This is an archive of the discontinued LLVM Phabricator instance.

[mlir][sparse] add support for "simply dynamic" sparse tensor expressions
ClosedPublic

Authored by aartbik on Jun 18 2021, 3:58 PM.

Details

Summary

Slowly we are moving toward full support of sparse tensor *outputs*. The first
step was support for all-dense annotated "sparse" tensors. This step adds
support for truly sparse tensors, but only for operations in which the values
of a tensor change while its nonzero structure stays the same (referred to as
"simply dynamic" in the [Bik96] thesis).

Some background text was posted on discourse:
https://llvm.discourse.group/t/sparse-tensors-in-mlir/3389/25

Diff Detail

Event Timeline

aartbik created this revision. Jun 18 2021, 3:58 PM
aartbik requested review of this revision. Jun 18 2021, 3:58 PM
aartbik updated this revision to Diff 353134. Jun 18 2021, 5:19 PM

added conversion test

aartbik edited the summary of this revision. Jun 21 2021, 10:11 AM
gussmith23 accepted this revision. Jun 22 2021, 10:59 AM

No major changes requested from me. See minor comments, questions, and fixes!

mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp
282
285

For my own understanding: what does it mean when this is not the case? If op.getDefiningOp<CallOp>() returns null?

295
mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp
496

If I understand correctly, here's how this algorithm works:
We recursively descend through an expression, checking for two main things:

  1. There are only conjunctions, and
  2. at least one of the tensors at the leaves of the expression is equal to the input tensor.

Very simple solution!
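
As a rough sketch (hypothetical Kind/TensorExp types, not the actual Merger
data structures in Sparsification.cpp), such a check could look like a small
recursive descent that only accepts conjunctions and requires the output
tensor to appear at a leaf:

// Hypothetical expression-tree types, for illustration only.
enum class Kind { kTensor, kMul, kAnd, kAdd, kOther };
struct TensorExp {
  Kind kind;
  unsigned tensorId;           // valid when kind == Kind::kTensor
  const TensorExp *lhs, *rhs;  // valid for binary kinds
};

// Returns true if the expression reaches the given (output) tensor through
// conjunctions only, i.e. every stored value is merely updated, never
// created: "simply dynamic". Anything else (e.g. addition) is rejected
// conservatively, since it may introduce new nonzeros.
static bool isSimplyDynamic(const TensorExp &e, unsigned outTensor) {
  switch (e.kind) {
  case Kind::kTensor:
    return e.tensorId == outTensor;
  case Kind::kMul:
  case Kind::kAnd:
    return isSimplyDynamic(*e.lhs, outTensor) ||
           isSimplyDynamic(*e.rhs, outTensor);
  default:
    return false;
  }
}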

518

I'm not sure I understand what you mean by "creation cannot occur"

1444

For my own understanding: how do you know this is the values array?

1460
mlir/test/Dialect/SparseTensor/conversion.mlir
205

What would a similar test look like if %arg0 was rank 2 or greater? Would it be largely the same? Would sparse_tensor.tensor take more arguments?

Edit: Ah, I see an example of this in @sparse_simply_dynamic1!

This revision is now accepted and ready to land. Jun 22 2021, 10:59 AM
aartbik marked 6 inline comments as done. Jun 22 2021, 11:21 AM
aartbik added inline comments.
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp
285

Ah, good question. This call finds the defining op and then tries to cast it to a call; it yields a null op if the value is not defined by a call. It is more or less a shorthand for

Operation *def = op.getDefiningOp();
if (def && isa<CallOp>(def)) {
  auto call = cast<CallOp>(def);
  ...
}
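
To answer the null question directly: a null result from the templated form
just means the value is not produced by a CallOp at all, for example because
it is a block argument or is defined by some other op. A minimal sketch, with
op standing in for the mlir::Value being inspected as above:

if (auto call = op.getDefiningOp<CallOp>()) {
  // op is the result of a CallOp; inspect call here.
} else {
  // Null: op is a block argument or defined by some other op,
  // so this case simply does not apply.
}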

295

thanks, I always seem to mistype this!

mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp
496

Yes, this seems to capture all cases. I was first thinking of using the lattices for this, but that was more work than this *and* finds the same cases anyway ;-)

518

Ah, other terms are "fill in" or just plain "insertion". I changed the term.

1444

Ah, the output is the last bufferized tensor, so it always appears at the back.

mlir/test/Dialect/SparseTensor/conversion.mlir
205

Yes, the rank and annotation drive the parameters. Note that at the moment we never actually use the incoming parameters, but it feels cleaner to keep the IR such that "in principle" the op could lower to something that really reconstructs the full dense tensor.

aartbik updated this revision to Diff 353740. Jun 22 2021, 11:56 AM
aartbik marked 5 inline comments as done.

addressed comments

gussmith23 accepted this revision. Jun 22 2021, 1:22 PM