This is an archive of the discontinued LLVM Phabricator instance.

[mlir][sparse] Switch to One-Shot Bufferize
ClosedPublic

Authored by springerm on Jul 8 2022, 2:54 AM.

Details

Summary

This change removes the partial bufferization passes from the sparse compilation pipeline and replaces them with One-Shot Bufferize. One-Shot Analysis (and TensorCopyInsertion) is used to resolve all out-of-place bufferizations, dense and sparse. Dense ops are then bufferized with BufferizableOpInterface. Sparse ops are still bufferized in the Sparsification pass.

Details:

  • Dense allocations are automatically deallocated, unless they are yielded from a block. (In that case the alloc would leak.) All test cases are modified accordingly. E.g., some funcs now have an "out" tensor argument that is returned from the function. (That way, the allocation happens at the call site.)
  • Sparse allocations are *not* automatically deallocated. They must be "released" manually. (No change, this will be addressed in a future change.)
  • Sparse tensor copies are not supported yet. (Future change)
  • Sparsification no longer has to consider inplaceability. If necessary, allocations and/or copies are inserted during TensorCopyInsertion. All tensors are inplaceable by the time Sparsification runs. Instead of marking a tensor as "not inplaceable", it can be marked as "not writable", which triggers an allocation and/or copy during TensorCopyInsertion.
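To illustrate the last point, here is a rough, hypothetical sketch of the "not writable" mechanism (op bodies elided, function names invented; the exact syntax and pass output are illustrative, not taken from this revision):

```mlir
// Sketch only: %arg0 is a function argument and thus not writable by
// default; writing into it in place would clobber the caller's data.
func.func @f(%arg0: tensor<8xf32>) -> tensor<8xf32> {
  %0 = linalg.generic ... outs(%arg0 : tensor<8xf32>) ...
  return %0 : tensor<8xf32>
}

// After TensorCopyInsertion (sketch): a copy is materialized for the
// non-writable tensor, so by the time Sparsification runs, every
// remaining tensor use is inplaceable.
func.func @f(%arg0: tensor<8xf32>) -> tensor<8xf32> {
  %copy = bufferization.alloc_tensor() copy(%arg0) : tensor<8xf32>
  %0 = linalg.generic ... outs(%copy : tensor<8xf32>) ...
  return %0 : tensor<8xf32>
}
```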

Depends On D129354

Diff Detail

Event Timeline

springerm created this revision.Jul 8 2022, 2:54 AM
Herald added a project: Restricted Project.Jul 8 2022, 2:54 AM
springerm requested review of this revision.Jul 8 2022, 2:54 AM
springerm updated this revision to Diff 443193.Jul 8 2022, 3:40 AM

fix cmake

This is a monumental revision, Matthias, and a great step forward for the sparse compiler.
Your eye for detail in this change is impressive, and I love how much cleaner some of the code has become!

mlir/lib/Dialect/SparseTensor/Pipelines/SparseTensorPipelines.cpp
29

does this apply to returning sparse allocs too?

34

We cannot simply drop this pass, since e.g. sampled-dense-dense fusion requires it to run. Was this done to avoid problems, or was it an oversight?

72

that would be very cool!

mlir/lib/Dialect/SparseTensor/Transforms/DenseBufferizationPass.cpp
35

This comment seems to say the opposite of what the code does, but that is because a filter works the other way around.
So how about "filter sparse types out, so rewriting is done only for dense types", or something like that?

39

this cast to bool is not really required, and I don't use it elsewhere; for consistency of style, please consider removing it

mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp
55

This looks like a utility that belongs in the bufferization dialect itself, since it seems like a very common query?

326–327

buffer

630

Ah, during conversion, this will be trivial to implement with just a few lines (and without introducing a new op for just this). Is this the only place we will need it?

mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp
346–347

looks like the comment needs to be updated (i.e., buffers are now always in-place?)

476–477

here too, the comments have become out of date

mlir/test/Dialect/SparseTensor/dense.mlir
59

can we still keep the test in some form? It is a bit strange to fully remove dense1 and start at dense2?

mlir/test/Dialect/SparseTensor/sparse_outbuf.mlir
105

could we copy some of this test? maybe by returning both argb and the new tensor, so we force a copy?

springerm marked 8 inline comments as done.Jul 12 2022, 1:28 AM
springerm added inline comments.
mlir/lib/Dialect/SparseTensor/Pipelines/SparseTensorPipelines.cpp
29

Sparse memory deallocation is fully manual at the moment, so I am only concerned with dense deallocation here. We could eventually use the BufferDeallocation pass to deallocate all allocations that escape block boundaries, but it does not support sparse allocs yet (and will probably fail in their presence). So for now, the safest thing would be to disallow escaping dense allocations; that would give us additional errors in case something dense leaks. (It won't help with sparse allocs.) But there is currently no way to set this flag for dense tensors only.
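For concreteness, here is a hypothetical sketch (invented function names; not code from this revision) of the "out" tensor convention mentioned in the summary, which sidesteps escaping dense allocations:

```mlir
// Sketch: the function allocates its own result. After bufferization the
// dense buffer escapes the function body, and nothing deallocates it.
func.func @compute() -> tensor<8xf32> {
  %t = bufferization.alloc_tensor() : tensor<8xf32>
  %r = linalg.generic ... outs(%t : tensor<8xf32>) ...
  return %r : tensor<8xf32>
}

// Sketch: the caller provides an "out" tensor instead, so the allocation
// (and its eventual deallocation) happens at the call site.
func.func @compute(%out: tensor<8xf32>) -> tensor<8xf32> {
  %r = linalg.generic ... outs(%out : tensor<8xf32>) ...
  return %r : tensor<8xf32>
}
```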

34

This pass does not fit well with bufferization because of the RemoveOutsDependency pattern: it breaks the tensor SSA use-def chains that bufferization uses to decide which buffer to write to. Is this particular pattern needed in the sparse compiler?

(This pattern is needed for IREE, so just removing it is not an option. I'm looking into alternatives in IREE at the moment... D129355)
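A rough sketch of why RemoveOutsDependency interferes (hypothetical IR, op bodies elided; the syntax is illustrative only):

```mlir
// One-Shot Bufferize follows the tensor SSA use-def chain of the `outs`
// operand to decide that the result may reuse %arg's buffer.
%0 = linalg.generic ... outs(%arg : tensor<8xf32>) ...

// After RemoveOutsDependency (sketch): the `outs` operand is replaced by
// a fresh init tensor, severing the chain back to %arg, so %arg's buffer
// can no longer be reused for the result.
%init = linalg.init_tensor [8] : tensor<8xf32>
%0 = linalg.generic ... outs(%init : tensor<8xf32>) ...
```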

mlir/lib/Dialect/SparseTensor/Transforms/DenseBufferizationPass.cpp
35

yes, this is a typo

mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp
630

Yes, this is the only place.

mlir/test/Dialect/SparseTensor/dense.mlir
59

dense1 and dense2 were identical except for the linalg.inplaceable attr. This attr no longer exists (everything is always inplaceable), so we can delete one of the two test cases.

Renumbered the other test cases so that they start counting from 1 again.

mlir/test/Dialect/SparseTensor/sparse_outbuf.mlir
105

The sparsification pass is no longer concerned with inplaceability. The copy is inserted by the TensorCopyInsertion pass. I moved the test case to one_shot_bufferize_tensor_copy_insertion.mlir and modified it as suggested.

springerm updated this revision to Diff 443858.Jul 12 2022, 1:28 AM
springerm marked 3 inline comments as done.

address some comments

aartbik accepted this revision.Jul 12 2022, 4:34 PM

Matthias, thank you so much for making this happen! I am thrilled to start using the much improved bufferization method for the sparse compiler.
The missing rewriting is a bit of a concern, but something I can fix later, no need to block this monumental revision just for that.

Ship it!

mlir/lib/Dialect/SparseTensor/Pipelines/SparseTensorPipelines.cpp
34

Is this particular pattern needed in the sparse compiler?

Yes, the dense rewritings are nice to have for the sparse compiler path as well, but more importantly, we need <FuseSparseMultiplyOverAdd> from this pass to fuse sampled-dense-dense patterns. For the sake of progress, I won't block this revision on that, but we really need it (perhaps moved to a new pass or so).

So please add a TODO for the missing case!

mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorPasses.cpp
130–140

I suspect you will get a few conflicts here when you rebase against main.

This revision is now accepted and ready to land.Jul 12 2022, 4:34 PM

Oh, and a new reshaping integration test went in. I suspect you need to adjust the dense deallocation in that test too (sorry about the conflict).

springerm marked an inline comment as done.

rebase

This revision was landed with ongoing or failed builds.Jul 14 2022, 12:53 AM
This revision was automatically updated to reflect the committed changes.