This is an archive of the discontinued LLVM Phabricator instance.

[mlir][sparse] change memref argument to proper SSA components
ClosedPublic

Authored by aartbik on Sep 21 2022, 5:59 PM.

Details

Summary

The indices for insert/compress were previously provided as
a memref<?xindex> of the proper rank, since that matched the
argument of the runtime support library better. However, with
proper codegen coming, providing the indices as individual SSA
values is much cleaner. This also brings sparse_tensor.insert
closer to unification with tensor.insert, planned for the
longer term.
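
For illustration, a rough, hypothetical sketch of the shape of this change; the op syntax,
types, and the #CSR encoding alias are illustrative only, not verbatim from the patch:

  // Before: indices packed into a rank-1 memref operand.
  //   %idx : memref<?xindex> holds (%i, %j)
  sparse_tensor.insert %val, %tensor, %idx
      : f64, tensor<4x4xf64, #CSR>, memref<?xindex>

  // After: indices passed directly as SSA index values,
  // mirroring the tensor.insert form
  //   tensor.insert %val into %tensor[%i, %j] : tensor<4x4xf64>
  sparse_tensor.insert %val into %tensor[%i, %j] : tensor<4x4xf64, #CSR>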

Diff Detail

Event Timeline

aartbik created this revision.Sep 21 2022, 5:59 PM
Herald added a project: Restricted Project. Sep 21 2022, 5:59 PM
aartbik requested review of this revision.Sep 21 2022, 5:59 PM

Note that some singleton stuff sneaked into the sparsification file, but I will rebase that as soon as it goes in, so the diff will not show that anymore (but it seemed a bit pointless to refactor it out right now....)

aartbik updated this revision to Diff 462060.Sep 21 2022, 6:28 PM

rebased with main

aartbik updated this revision to Diff 462190.Sep 22 2022, 9:01 AM

rebased with main

Peiming added inline comments.Sep 23 2022, 9:37 AM
mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorOps.td
233

love it!

mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp
611

Is this intended?

It should be redundant now, since you do not need to store the index? Or are you foreseeing that it will be needed in the near future?

812–815

Is it always true that the first rank loops always correspond to the output tensor? (Without considering affine, of course.)

Peiming added inline comments.Sep 23 2022, 10:01 AM
mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp
383

nit: remove {} when there is only one statement (finally my turn ;-)

aartbik marked 2 inline comments as done.Sep 23 2022, 2:29 PM
aartbik added inline comments.
mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorOps.td
233

Me too, it is very readable this way. Bugs are easier to find!

mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp
383

:-)

But here, the true-branch has braces, so the else-branch must follow suit.
(hehehehe ! ;-) :-)

611

Oh yeah, I left it here so that it is clear nothing is done for sparse output.
I can also make it

  else if (t != codegen.sparseOut) {
    // Annotated sparse tensors (not involved in output).

Do you like that better?

812–815

Yes, due to our topological sort *and* the criteria for expansion. No sparse tensor is ever visited non-lexicographically, and we only accept expansion when the outer loops are parallel and the unrelated loops are inner.

Peiming added inline comments.Sep 23 2022, 4:09 PM
mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp
611

Yeah, it looks strange when you have an empty branch. Or: // Sparse out (do nothing)

wrengr added inline comments.Sep 23 2022, 5:03 PM
mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorOps.td
228

should surround with backticks (or single quotes?)

326

Does using single quotes cause this to render as a link? Or should these be backticks?

mlir/lib/Dialect/SparseTensor/IR/SparseTensorDialect.cpp
455–467

Can these be implemented via another TypesMatchWith trait (or similar) in the td-file, rather than with a separate verifier?

mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp
1231

Why use the op's tensor rather than the adaptor's tensor? Unless that's necessary for correctness, it would be cleaner to reuse the tensor variable bound on the previous line.

aartbik marked 8 inline comments as done.Sep 26 2022, 10:44 AM

Thanks for the feedback!

mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorOps.td
228

Yeah, good suggestion

326

backticks

mlir/lib/Dialect/SparseTensor/IR/SparseTensorDialect.cpp
455–467

Looks like both need to be custom checks, right? (Note that the other constraints are done through traits, but these check the length of the indices, +1.)
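
For illustration, a hypothetical sketch of the kind of mismatch such a custom check catches, assuming the verifier requires the number of index operands to match the tensor's rank (syntax and the #CSR alias are illustrative only, not verbatim from the patch):

  // Accepted: two index operands for a 2-d sparse tensor.
  sparse_tensor.insert %val into %t[%i, %j] : tensor<4x4xf64, #CSR>

  // Rejected: only one index operand for a 2-d sparse tensor; this count
  // check is not expressible with a simple TypesMatchWith-style trait.
  sparse_tensor.insert %val into %t[%i] : tensor<4x4xf64, #CSR>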

mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp
1231

This is the correct idiom, since we need the original type. Note that we use adaptor.getXXX() for rewriting, but when querying before-conversion types (or attributes), using the op is an acceptable idiom (per Alex and River even ;-).
In this case, adaptor.getTensor().getType() would give us the "tuple" of memrefs.

aartbik updated this revision to Diff 463008.Sep 26 2022, 1:28 PM
aartbik marked 4 inline comments as done.

addressed comments, rebased with main

aartbik updated this revision to Diff 463308.Sep 27 2022, 12:06 PM

rebased against main

wrengr added inline comments.Sep 27 2022, 12:13 PM
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp
1231

Thanks for the clarification :)

mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp
1902

Spelling was correct before

aartbik marked an inline comment as done.Sep 27 2022, 12:21 PM
aartbik added inline comments.
mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp
1902

Ha! Not sure why I changed that ;-)

aartbik updated this revision to Diff 463312.Sep 27 2022, 12:26 PM
aartbik marked an inline comment as done.

admissable -> admissible

aartbik updated this revision to Diff 463317.Sep 27 2022, 12:55 PM

rebased against main (Bixia's new sort op)

Peiming accepted this revision.Sep 27 2022, 3:59 PM
This revision is now accepted and ready to land.Sep 27 2022, 3:59 PM