This is an archive of the discontinued LLVM Phabricator instance.

[mlir][sparse][bufferize] Implement BufferizableOpInterface
ClosedPublic

Authored by springerm on Jun 19 2022, 4:25 AM.

Details

Summary

Only the analysis part of the interface is implemented. The bufferization itself is performed by the SparseTensorConversion pass.

Depends On D128137

Diff Detail

Event Timeline

springerm created this revision.Jun 19 2022, 4:25 AM
springerm requested review of this revision.Jun 19 2022, 4:25 AM
aartbik accepted this revision.Jun 22 2022, 4:37 PM
aartbik added inline comments.
mlir/include/mlir/Dialect/SparseTensor/Transforms/BufferizableOpInterfaceImpl.h
2

This is not really the Impl, right, since this is a header? You have the same comment in the CPP file.

mlir/lib/Dialect/SparseTensor/Transforms/BufferizableOpInterfaceImpl.cpp
23

do we need those "using" declarations, given that you open the namespace and fully qualify outside of it?

36

shall we remove the folding, just to keep things simple, and make that the "copy" op?

This revision is now accepted and ready to land.Jun 22 2022, 4:37 PM
springerm updated this revision to Diff 439326.Jun 23 2022, 4:12 AM
springerm marked an inline comment as done.

rebase

springerm marked an inline comment as done.Jun 23 2022, 4:16 AM
springerm added inline comments.
mlir/include/mlir/Dialect/SparseTensor/Transforms/BufferizableOpInterfaceImpl.h
2

Naming is a bit confusing, but this "Impl" refers to "Interface Implementation", not source vs. header. registerBufferizableOpInterfaceExternalModels is the function that registers the impls.
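The registration pattern referred to above can be sketched roughly as follows. This is a non-authoritative sketch of the usual MLIR external-model idiom; the concrete ops attached here (ConvertOp, NewOp) and the interface class names are illustrative assumptions, not necessarily what this revision implements:

```cpp
// Sketch (assumed names): registering external BufferizableOpInterface
// models for the sparse tensor dialect. Each op of interest gets one
// external-model implementation attached via a dialect extension.
void mlir::sparse_tensor::registerBufferizableOpInterfaceExternalModels(
    DialectRegistry &registry) {
  registry.addExtension(+[](MLIRContext *ctx,
                            sparse_tensor::SparseTensorDialect *dialect) {
    // Hypothetical op/interface pairs for illustration only.
    sparse_tensor::ConvertOp::attachInterface<ConvertOpInterface>(*ctx);
    sparse_tensor::NewOp::attachInterface<NewOpInterface>(*ctx);
  });
}
```

Registering the models as an extension (rather than in the dialect itself) keeps the bufferization dependency optional for users of the sparse tensor dialect.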

mlir/lib/Dialect/SparseTensor/Transforms/BufferizableOpInterfaceImpl.cpp
36

AllocTensorOp is how we copy during bufferization. It doesn't really matter (for bufferization purposes) whether this folds away or not. We have to return true here anyway, because a sparse -> dense copy (or the other way around) definitely allocates.
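To illustrate why such a copy always allocates, consider a conversion between a sparse and a dense tensor type. This is a hand-written sketch (the #CSR alias and shapes are assumed for illustration, not taken from the patch):

```mlir
// Assumed encoding alias for illustration.
#CSR = #sparse_tensor.encoding<{ dimLevelType = [ "dense", "compressed" ] }>

// A copy across encodings cannot alias the source buffer: the dense
// result requires a freshly allocated buffer, whether or not any
// surrounding copy op later folds away.
%dense = sparse_tensor.convert %sparse
    : tensor<8x8xf64, #CSR> to tensor<8x8xf64>
```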

springerm updated this revision to Diff 439327.Jun 23 2022, 4:17 AM

address comments

aartbik added inline comments.Jun 23 2022, 9:45 AM
mlir/test/Dialect/SparseTensor/one_shot_bufferize_tensor_copy_insertion.mlir
14

Note that, as discussed, this will actually fail verification after the upcoming change.

aartbik added inline comments.Jun 23 2022, 2:11 PM
mlir/test/Dialect/SparseTensor/one_shot_bufferize_tensor_copy_insertion.mlir
14

You can avoid that by adding

%1 = sparse_tensor.load %0 : tensor<20x40xf32, SDCSR>
return %1 : tensor<20x40xf32, SDCSR>

note that it still escapes, but just not directly (i.e., not uninitialized, since the load op wraps up the "insertion").

This revision was landed with ongoing or failed builds.Jun 24 2022, 4:52 AM
This revision was automatically updated to reflect the committed changes.
springerm marked 2 inline comments as done.