
[mlir][sparse] allow all-dense annotated "sparse" tensor output
ClosedPublic

Authored by aartbik on Jun 10 2021, 4:41 PM.

Details

Summary

This is a very careful start toward allowing sparse tensors on the
left-hand side of tensor index expressions (viz. sparse output).
Note that there is a subtle difference between non-annotated tensors
(dense, remain n-dim, handled by classic bufferization) and all-dense
annotated "sparse" tensors (linearized to 1-dim without overhead
storage, bufferized by the sparse compiler, backed by the runtime support
library). This revision gently introduces some new IR to facilitate
annotated outputs, to be generalized to truly sparse tensors in the future.
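
To make the distinction concrete, here is a minimal sketch of a kernel with an all-dense annotated "sparse" output (the encoding, trait, and function names are illustrative, not taken verbatim from the revision's tests; the linalg.inplaceable attribute is assumed to mark the output as updatable in place):

  #DenseMatrix = #sparse_tensor.encoding<{
    dimLevelType = [ "dense", "dense" ]
  }>

  #trait = {
    indexing_maps = [
      affine_map<(i, j) -> (i, j)>,  // A (in)
      affine_map<(i, j) -> (i, j)>   // X (out)
    ],
    iterator_types = ["parallel", "parallel"]
  }

  // X(i,j) = A(i,j) + 1, writing into an all-dense annotated output.
  func @dense_output(%arga: tensor<32x16xf32, #DenseMatrix>,
                     %argx: tensor<32x16xf32, #DenseMatrix>
                            {linalg.inplaceable = true})
      -> tensor<32x16xf32, #DenseMatrix> {
    %c1 = constant 1.0 : f32
    %0 = linalg.generic #trait
        ins(%arga : tensor<32x16xf32, #DenseMatrix>)
       outs(%argx : tensor<32x16xf32, #DenseMatrix>) {
      ^bb0(%a: f32, %x: f32):
        %1 = addf %a, %c1 : f32
        linalg.yield %1 : f32
    } -> tensor<32x16xf32, #DenseMatrix>
    return %0 : tensor<32x16xf32, #DenseMatrix>
  }

Unlike a non-annotated 32x16 output, this tensor is backed by a single linearized values array managed through the runtime support library.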

Diff Detail

Event Timeline

aartbik created this revision. Jun 10 2021, 4:41 PM
aartbik requested review of this revision. Jun 10 2021, 4:41 PM
aartbik updated this revision to Diff 351301. Jun 10 2021, 4:46 PM

fixed old code

aartbik updated this revision to Diff 351499. Jun 11 2021, 10:53 AM

variadic operands (forward looking to sparse generalization)

aartbik updated this revision to Diff 351501. Jun 11 2021, 11:01 AM

removed leftover code

aartbik updated this revision to Diff 351558. Jun 11 2021, 1:55 PM

rebase with main

Some initial questions. The code looks good to me, though my code review muscles are out of shape, so I will take another pass after these comments!

mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorOps.td
125

Question for my own understanding: later, on line 248, you assert that there's just a single input argument (if I understand the code correctly). Will this always be the case? If so, why is this Variadic? Is ins always expected to be variadic?

mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp
285

Just to confirm: is this rewrite effectively a no-op? It just replaces the op with the underlying pointer?

mlir/test/Dialect/SparseTensor/dense.mlir
118

Where would this fail, and why, exactly?

aartbik marked 3 inline comments as done. Jun 14 2021, 2:37 PM
aartbik added inline comments.
mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorOps.td
125

Good question. This is forward looking to sparse outputs, where we will need to feed in all the arrays that constitute the sparse tensor (see line 146 for an example of that). All of this is pretty much moot with the current approach of lowering to the runtime support library (where the operation is folded back into the opaque pointer), but alternative implementations are possible in the future (e.g. actual code generation for functionality that is currently backed by the runtime support library). In such cases, we must correctly reflect the fact that reconstructing a tensor from a sparse storage scheme needs all the arrays, even though currently it does not.
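
As a hedged illustration of that future direction (the op name and syntax here are hypothetical, and #SparseVector stands for an assumed compressed encoding):

  // All-dense annotated output: one linearized values array suffices.
  %d = sparse_tensor.tensor %values
     : memref<?xf32> to tensor<32x16xf32, #DenseMatrix>

  // Truly sparse output (future): the pointers, indices, and values
  // arrays of the storage scheme are all needed to reconstruct the tensor.
  %s = sparse_tensor.tensor %pointers, %indices, %values
     : memref<?xindex>, memref<?xindex>, memref<?xf32>
       to tensor<1024xf32, #SparseVector>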

mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp
285

Yes, even the variadic version (in a later revision) will just verify that the operands all map to the same pointer and lower to the no-op.

However, if we abandon the runtime support backing in favor of actual codegen, that may change in the future, and it is better to correctly reflect the full dependence on the sparse storage scheme!
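
Roughly, as a hedged before/after sketch (hypothetical op and type names):

  // Before conversion: the values buffer is extracted from the annotated
  // tensor, updated in place by the generated loops, and the tensor is
  // then reconstructed from that same single buffer.
  %v = sparse_tensor.values %arg0
     : tensor<32x16xf32, #DenseMatrix> to memref<?xf32>
  // ... loops update %v ...
  %t = sparse_tensor.tensor %v
     : memref<?xf32> to tensor<32x16xf32, #DenseMatrix>

  // After conversion: %arg0 and %t both lower to the same opaque pointer
  // handled by the runtime support library, so the reconstruction op
  // simply folds away into that pointer.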

mlir/test/Dialect/SparseTensor/dense.mlir
118

L623 in Sparsification.cpp

Not having the same "buffer" going in and out would require an explicit memory allocation for the tensor into which the op reconstructs the sparse tensor (compare the in-place and not-in-place handling of non-annotated outputs).
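
For illustration (a hedged sketch reusing #DenseMatrix and #trait from the summary above): without the in-place guarantee on the output argument, the buffer flowing in need not be the buffer flowing out, and sparsification rejects the kernel at the check referenced above rather than inserting an allocation:

  // Rejected sketch: %argx carries no linalg.inplaceable guarantee, so
  // materializing the result could require an explicit allocation for the
  // reconstructed output, which this revision deliberately avoids.
  func @dense_output_not_in_place(%arga: tensor<32x16xf32, #DenseMatrix>,
                                  %argx: tensor<32x16xf32, #DenseMatrix>)
      -> tensor<32x16xf32, #DenseMatrix> {
    %c1 = constant 1.0 : f32
    %0 = linalg.generic #trait
        ins(%arga : tensor<32x16xf32, #DenseMatrix>)
       outs(%argx : tensor<32x16xf32, #DenseMatrix>) {
      ^bb0(%a: f32, %x: f32):
        %1 = addf %a, %c1 : f32
        linalg.yield %1 : f32
    } -> tensor<32x16xf32, #DenseMatrix>
    return %0 : tensor<32x16xf32, #DenseMatrix>
  }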

aartbik updated this revision to Diff 352026. Jun 14 2021, 6:30 PM
aartbik marked 3 inline comments as done.

rebased with main

After these comments, I think I'm good! However, as I said, someone else may want to review this, as I'm brand new to this code and frankly still do not understand everything that is happening.

mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorOps.td
125

Ah, I see! I didn't think about how you'd potentially need multiple memrefs to implement a sparse tensor.

mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp
381–385
746

Should this comment be changed? Seems like this supports sparse tensors as well now?

aartbik marked an inline comment as done. Jun 15 2021, 12:56 PM
aartbik added inline comments.
mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp
746

Good catch. Changed.

aartbik updated this revision to Diff 352226. Jun 15 2021, 1:10 PM
aartbik marked an inline comment as done.

changed comment, rebased with main

gussmith23 accepted this revision. Jun 15 2021, 1:30 PM
This revision is now accepted and ready to land. Jun 15 2021, 1:30 PM
bixia accepted this revision. Jun 15 2021, 2:21 PM
bixia added inline comments.
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp
278

s/fold/folds/

mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp
726–738

It seems to me that this code block calculating the args for the load is repeated for the store. If I am right here, we should find a way to factor it into a subroutine for reuse.

aartbik marked 2 inline comments as done. Jun 15 2021, 2:27 PM

Thanks all!

mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp
726–738

Yes, with this move towards allowing sparse stores, the two code paths have become much more similar. I kept it like this to make the diffs smaller, but I will unify them in the next CL that touches this!

This revision was landed with ongoing or failed builds. Jun 15 2021, 2:55 PM
This revision was automatically updated to reflect the committed changes.
aartbik marked an inline comment as done.