This is an archive of the discontinued LLVM Phabricator instance.

[mlir][sparse] Improve concatenate operator rewriting for dense tensor results.
Closed, Public

Authored by bixia on Nov 21 2022, 5:50 PM.

Event Timeline

bixia created this revision. Nov 21 2022, 5:50 PM
Herald added a project: Restricted Project.
bixia requested review of this revision. Nov 21 2022, 5:50 PM
Peiming added inline comments. Nov 22 2022, 4:17 PM
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorRewriting.cpp
505

A similar quick improvement could also be made for all-dense encoded tensors.

536–537

Not an issue in this patch.

But the runtime path only emits the ifOp if there is a dense dimension in the sparse tensor. Do we need the nonzero test for all compressed/singleton tensors, or can we assume that only nonzero elements are stored in those tensors?

bixia added inline comments. Nov 22 2022, 4:57 PM
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorRewriting.cpp
505

An all-dense encoded tensor is trickier in the sense that we would have to write into the linearized values buffer.
We can do this in a follow-up PR. This PR is mostly about matching the behavior of the conversion path.

536–537

A dense dimension can store zeros, but compressed/singleton dimensions shouldn't.
We should make that assumption to avoid the nonzero test for all compressed/singleton tensors.
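
As an illustration of the pattern under discussion, here is a minimal sketch, not the exact IR emitted by the rewriting: when the source has a dense dimension, a stored value may be zero, so the insert is guarded by a nonzero test; for compressed/singleton sources only nonzeros are stored and the scf.if can be skipped. The function name, shapes, and the #CSR alias are hypothetical.

```mlir
// Hypothetical CSR-like destination encoding (dimLevelType syntax of the time).
#CSR = #sparse_tensor.encoding<{ dimLevelType = ["dense", "compressed"] }>

// Insert %v only when it is nonzero; the conditionally updated tensor is
// yielded out of the scf.if so the result stays in SSA form.
func.func @guarded_insert(%dst: tensor<8x8xf64, #CSR>, %v: f64,
                          %i: index, %j: index) -> tensor<8x8xf64, #CSR> {
  %zero = arith.constant 0.0 : f64
  %nonzero = arith.cmpf une, %v, %zero : f64
  %res = scf.if %nonzero -> (tensor<8x8xf64, #CSR>) {
    %ins = sparse_tensor.insert %v into %dst[%i, %j] : tensor<8x8xf64, #CSR>
    scf.yield %ins : tensor<8x8xf64, #CSR>
  } else {
    scf.yield %dst : tensor<8x8xf64, #CSR>
  }
  return %res : tensor<8x8xf64, #CSR>
}
```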

Peiming added inline comments. Nov 22 2022, 5:16 PM
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorRewriting.cpp
505

Yes, you should be able to rely on sparse_tensor.insert to linearize the address.
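
A minimal sketch of what this would look like, assuming a hypothetical #AllDense encoding and function name: sparse_tensor.insert computes the linearized position into the values buffer, so the rewriting itself does not have to perform the address arithmetic.

```mlir
// Hypothetical all-dense encoding: every element lives in one linearized
// values buffer.
#AllDense = #sparse_tensor.encoding<{ dimLevelType = ["dense", "dense"] }>

// The op, not the caller, maps (%i, %j) to the linear offset i * 8 + j
// within the destination's values buffer.
func.func @insert_all_dense(%dst: tensor<4x8xf64, #AllDense>, %v: f64,
                            %i: index, %j: index) -> tensor<4x8xf64, #AllDense> {
  %0 = sparse_tensor.insert %v into %dst[%i, %j] : tensor<4x8xf64, #AllDense>
  return %0 : tensor<4x8xf64, #AllDense>
}
```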

Peiming accepted this revision. Nov 22 2022, 5:17 PM
This revision is now accepted and ready to land. Nov 22 2022, 5:17 PM
bixia added inline comments. Nov 23 2022, 4:08 PM
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorRewriting.cpp
505

Will do this in a follow-up PR, for both the codegen and the conversion path.

bixia updated this revision to Diff 477646. Nov 23 2022, 4:20 PM

Rebase.