This is an archive of the discontinued LLVM Phabricator instance.

[mlir][sparse] Avoid values buffer reallocation for annotated all dense tensors.
ClosedPublic

Authored by bixia on Jan 11 2023, 9:08 AM.

Details

Summary

Previously, we relied on the InsertOp to gradually grow the storage for all
sparse tensors. We now allocate the full-size values buffer for annotated
all-dense tensors when we first allocate the tensor. This avoids the cost of
repeatedly reallocating the buffer and allows accessing the values buffer as
if it were a dense tensor.
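The idea can be sketched outside of MLIR codegen as follows. This is a minimal C++ illustration, not the actual implementation; `allocateValuesBuffer` is a hypothetical helper. For an all-dense annotated tensor, the values buffer size is statically known as the product of the dimension sizes, so it can be allocated once up front instead of being grown per insertion.

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <numeric>
#include <vector>

// Hypothetical sketch of the optimization: size the values buffer up front
// for all-dense tensors, instead of growing it one insertion at a time.
std::vector<double> allocateValuesBuffer(const std::vector<int64_t> &dims,
                                         bool allDense) {
  if (allDense) {
    // Full size is known statically: the product of all dimension sizes.
    int64_t size = std::accumulate(dims.begin(), dims.end(), int64_t{1},
                                   std::multiplies<int64_t>());
    return std::vector<double>(static_cast<size_t>(size), 0.0);
  }
  // Truly sparse case: start empty and let insertions grow the buffer.
  return {};
}
```

With this, a value at a given position can be stored directly at its linearized offset, with no reallocation on the insertion path.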

Diff Detail

Event Timeline

bixia created this revision.Jan 11 2023, 9:08 AM
Herald added a project: Restricted Project.Jan 11 2023, 9:08 AM
bixia requested review of this revision.Jan 11 2023, 9:08 AM
Peiming added inline comments.Jan 11 2023, 3:26 PM
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorCodegen.cpp
244

LG, but you can probably do the same optimization for idxMemRef at the last level, right?

Peiming added inline comments.Jan 11 2023, 3:28 PM
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorCodegen.cpp
244

NVM, you are doing this for all dense tensors.

Probably we should extend the alloc tensor op to take some heuristic; there are many times when the NNZ can be computed.

bixia added inline comments.Jan 11 2023, 3:36 PM
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorCodegen.cpp
244

Right, there is already a TODO in the code for improving the heuristic.

Peiming accepted this revision.Jan 11 2023, 3:37 PM
This revision is now accepted and ready to land.Jan 11 2023, 3:37 PM