This is an archive of the discontinued LLVM Phabricator instance.

[mlir][sparse] Add a new reduce operation in sparse_tensor dialect
ClosedPublic

Authored by jim22k on Jun 16 2022, 1:39 PM.

Details

Summary

A custom reduction is a binary operator, but it differs from the existing
sparse_tensor.binary operation in significant ways.

  1. It does not deal with sparsity overlaps. It is given a finite set of non-empty values and must collapse them to a single value.
  2. The nature of reduction requires a starting (or identity) value. This is normally zero for summation, but for a product it must be 1.0. The custom code needs to explicitly pass an identity value, as in the sketch below.
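
A minimal sketch of what the operation looks like (an illustrative example, assuming a product reduction over f64 values, with the identity 1.0 passed as the third operand):

    %cf1 = arith.constant 1.0 : f64
    // Combine %x and %y with multiplication; %cf1 is the identity value.
    %result = sparse_tensor.reduce %x, %y, %cf1 : f64 {
      ^bb0(%a: f64, %b: f64):
        %prod = arith.mulf %a, %b : f64
        sparse_tensor.yield %prod : f64
    }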

Diff Detail

Event Timeline

jim22k created this revision. Jun 16 2022, 1:39 PM
jim22k requested review of this revision. Jun 16 2022, 1:39 PM
jim22k retitled this revision from Add a new reduce operation in sparse_tensor dialect to [mlir][sparse] Add a new reduce operation in sparse_tensor dialect. Jun 16 2022, 1:48 PM
aartbik added inline comments. Jun 17 2022, 12:44 PM
mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorOps.td
554

The documentation could use a bit more explanation of why we added this.

For example, you can show that "regular" reductions can be expressed "the easy way", while this op makes sparse specifications more expressive (I want to avoid newcomers thinking *all* reductions need to be defined this way); see the sketch below.
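
For reference, a "regular" sum reduction of this kind needs no custom semi-ring op at all; it can be written directly with linalg.generic (a minimal sketch, assuming a #SparseVector encoding defined elsewhere):

    #trait = {
      indexing_maps = [affine_map<(i) -> (i)>,   // input vector
                       affine_map<(i) -> ()>],   // scalar output
      iterator_types = ["reduction"]
    }
    // Sum all stored values of %v into the scalar init tensor %out.
    %sum = linalg.generic #trait
        ins(%v : tensor<?xf64, #SparseVector>)
        outs(%out : tensor<f64>) {
      ^bb0(%a: f64, %acc: f64):
        %0 = arith.addf %acc, %a : f64
        linalg.yield %0 : f64
    } -> tensor<f64>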

560

empty line after Example:

mlir/lib/Dialect/SparseTensor/IR/SparseTensorDialect.cpp
378–379

Since we don't use the result of the cast, why not simply use an "isa"?

mlir/lib/Dialect/SparseTensor/Utils/Merger.cpp
959 (On Diff #437689)

Can't we start a new == 3 section then for ternary operations?

mlir/test/Dialect/SparseTensor/invalid.mlir
344

Looks like we never had an invalid test for "must end with sparse_tensor.yield", for this op or another one, so perhaps add one as part of this revision to get full coverage.

jim22k updated this revision to Diff 440410. Jun 27 2022, 3:10 PM
jim22k marked 5 inline comments as done.
  • Merge with latest changes (D128000)
  • Update based on feedback
jim22k updated this revision to Diff 440630. Jun 28 2022, 7:55 AM

Fix formatting

Perhaps, for the sake of progress, we can split this into two revisions: one that introduces the new sparse dialect op, and one that provides the actual implementation?
I feel that the former is almost ready to go in, but the latter needs some work, so splitting this up into smaller pieces may make it easier.

mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorOps.td
570

This example is missing the surrounding linalg.generic, which also makes it harder to see what %x and %y are exactly. Also, in the examples above you use %a, %b, %c at the top level, and then %arg0 again in the block inside the semi-ring op. Please make the syntax in all examples similar, so it is easier to read and recognize common components; see the sketch below.
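
Something along the following lines would address this (a sketch with consistent naming, again assuming a #SparseVector encoding defined elsewhere):

    #trait = {
      indexing_maps = [affine_map<(i) -> (i)>,
                       affine_map<(i) -> ()>],
      iterator_types = ["reduction"]
    }
    %cf1 = arith.constant 1.0 : f64
    // Product reduction of sparse vector %a into scalar init tensor %x.
    %prod = linalg.generic #trait
        ins(%a : tensor<?xf64, #SparseVector>)
        outs(%x : tensor<f64>) {
      ^bb0(%va: f64, %vx: f64):
        %r = sparse_tensor.reduce %va, %vx, %cf1 : f64 {
            ^bb0(%p: f64, %q: f64):
              %m = arith.mulf %p, %q : f64
              sparse_tensor.yield %m : f64
          }
        linalg.yield %r : f64
    } -> tensor<f64>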

mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp
745 (On Diff #440630)

I am puzzled that we need new code in the merger for this; can't we just handle it in the sparse codegen file as before? Also, please note that we need to deal with vector values during vectorization. Probably not for the semi-ring op, but your ReducID seems to deal with the other cases too.

Since you are using so many if (codegen.redKind == kNoReduc) tests, it feels like you are entering the wrong methods, and should have dealt with reductions at a higher level (see updateReduc() and friends).

jim22k updated this revision to Diff 444680. Jul 14 2022, 8:35 AM
jim22k marked an inline comment as done.

Add reduce operation definition and validation tests

I split the PR in two. This is now just the .td definition and validation tests. Hopefully this makes it easier to approve, and we can discuss implementation in a subsequent PR.

aartbik accepted this revision. Jul 14 2022, 3:02 PM

> I split the PR in two. This is now just the .td definition and validation tests. Hopefully this makes it easier to approve, and we can discuss implementation in a subsequent PR.

Thanks for splitting this up! This part is good to go right away, and we can start working on the implementation in a smaller follow-up revision!

This revision is now accepted and ready to land. Jul 14 2022, 3:02 PM
This revision was automatically updated to reflect the committed changes.