This is an archive of the discontinued LLVM Phabricator instance.

[mlir][sparse] Add rewriting rule for the convert operator.
ClosedPublic

Authored by bixia on Oct 19 2022, 4:37 PM.

Event Timeline

bixia created this revision.Oct 19 2022, 4:37 PM
Herald added a project: Restricted Project.
bixia requested review of this revision.Oct 19 2022, 4:37 PM
aartbik added inline comments.Oct 20 2022, 2:02 PM
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorRewriting.cpp
159

This feels like it could be moved to IR/SparseTensor.h where we have several related tests on ranked tensor type and sparsity. WDYT?

162

I don't think we can have an encoding together with rank == 0, so the second test is, strictly speaking, not needed.

166

Even for rank == 1, don't we need a compressed outermost?

A compressed sparse vector is basically in COO format ;-)
You cannot have a singleton as the first entry.

So I think you can simply test that compressed is outermost, followed by singletons.
For vectors, the "followed by" part will be skipped.
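
For concreteness, a minimal sketch of the suggested check, assuming the usual sparse tensor dialect header (mlir/Dialect/SparseTensor/IR/SparseTensor.h) and namespaces; the level-type helpers isCompressedDLT/isSingletonDLT and the exact DimLevelType spelling are assumptions, not necessarily what the patch uses:

  // Sketch only: "compressed outermost, followed by singletons".
  static bool isCOOType(RankedTensorType tp) {
    SparseTensorEncodingAttr enc = getSparseTensorEncoding(tp);
    if (!enc)
      return false;
    ArrayRef<DimLevelType> dlts = enc.getDimLevelType();
    // The outermost level must be compressed.
    if (dlts.empty() || !isCompressedDLT(dlts.front()))
      return false;
    // Every remaining level must be singleton; for rank == 1 (a sparse
    // vector) there is nothing left to check, so the test degenerates to
    // "compressed outermost" only.
    return llvm::all_of(dlts.drop_front(),
                        [](DimLevelType dlt) { return isSingletonDLT(dlt); });
  }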

Peiming added inline comments.Oct 20 2022, 2:31 PM
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorRewriting.cpp
556

Do you think it would be nicer if the body builder provided
(OpBuilder &builder, Location loc, Value v, ValueRange indices), so that the value and indices are separated?
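
To illustrate the two callback shapes under discussion (the lambdas below are only sketches; the actual foreach builder signature is not shown in this thread):

  // Current shape: coordinates and the element value arrive in one
  // ValueRange, with the value at args.back() (see the reply below).
  auto bodyCurrent = [](OpBuilder &builder, Location loc, ValueRange args) {
    ValueRange indices = args.drop_back(); // the coordinates
    Value v = args.back();                 // the element value
    // ... emit the per-element computation using v and indices ...
  };

  // Suggested shape: the value and the coordinates are passed separately.
  auto bodySuggested = [](OpBuilder &builder, Location loc, Value v,
                          ValueRange indices) {
    // ... emit the per-element computation using v and indices ...
  };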

Peiming added inline comments.Oct 20 2022, 2:34 PM
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorRewriting.cpp
500–512

I think these two can probably be implemented in the loop emitter.

500–512

Probably not the sparse constant, though.

Peiming added inline comments.Oct 20 2022, 2:37 PM
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorRewriting.cpp
500–512

I can extend the foreach operator to accept these inputs if that is more desirable.

bixia marked 3 inline comments as done.Oct 20 2022, 4:47 PM
bixia added inline comments.
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorRewriting.cpp
159

Good idea.

500–512

The sparse constant actually has two constants: one for the indices and one for the values.
We currently use an existing block of code, genDenseTensorOrSparseConstantIterLoop, for the conversion path.
We can modify this once we have better supporting functions.
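
For context, a sparse elements constant in MLIR can be split into those two pieces roughly as follows (sketch only; `attr` is a hypothetical local Attribute, and the surrounding rewriting code is not shown in this thread):

  // Sketch: pull the index and value constants out of a sparse constant.
  if (auto sparseAttr = attr.dyn_cast<SparseElementsAttr>()) {
    // One row of coordinates per stored element.
    DenseIntElementsAttr indices = sparseAttr.getIndices();
    // The corresponding stored values.
    DenseElementsAttr values = sparseAttr.getValues();
    // ... iterate over both in lockstep to visit every stored element ...
  }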

556

I don't see a big difference; the current interface is fine, as the value is in args.back().

Peiming added inline comments.Oct 21 2022, 9:13 AM
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorRewriting.cpp
500–512

Okay, SG.

I extended the dense case. The sparse constant probably needs more work!

bixia updated this revision to Diff 469651.Oct 21 2022, 9:19 AM
bixia marked an inline comment as done.

Rebase.

bixia added inline comments.Oct 21 2022, 9:22 AM
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorRewriting.cpp
500–512

Thanks! I will see if I can modify genDenseTensorOrSparseConstantIterLoop for the codegen and conversion paths to take advantage of the fact that foreach can handle dense tensors.
I will do this in a separate PR.

aartbik added inline comments.Oct 21 2022, 4:24 PM
mlir/lib/Dialect/SparseTensor/IR/SparseTensorDialect.cpp
275

Very minor nit, I would use

for (uint64_t i = 1, rank = tp.getRank(); i < rank; ++i)

since the bound is only used in the for loop, but up to you.

mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorRewriting.cpp
160

I think we should avoid returning vectors by value (even though the compiler may actually do some interesting optimizations). For such an output vector, simply pass it in by reference.
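
As a hypothetical shape of such a helper (the thread names getDynamicSizes but does not show its signature, so the parameter names here are assumptions), with the output vector passed in by reference instead of being returned by value:

  // Sketch: collect the SSA values for the dynamic dimensions of `tp`.
  static void getDynamicSizes(RankedTensorType tp, ValueRange sizes,
                              SmallVectorImpl<Value> &dynSizes) {
    for (int64_t i = 0, rank = tp.getRank(); i < rank; ++i)
      if (tp.isDynamicDim(i))
        dynSizes.push_back(sizes[i]);
  }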

489

comment on dense2dense?

581

can we move that up into the actual codegen rewriting rule above, so it is not hidden inside this method?

mlir/test/Dialect/SparseTensor/convert_sparse2dense.mlir
1

add a FIXME or TODO

bixia updated this revision to Diff 470190.Oct 24 2022, 9:42 AM
bixia marked 5 inline comments as done.

Address review comments.

mlir/lib/Dialect/SparseTensor/IR/SparseTensorDialect.cpp
275

right, done.

mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorRewriting.cpp
489

Added a comment to say that dense-to-dense is a nop handled by canonicalization.

581

Yes, moved this check up.

mlir/test/Dialect/SparseTensor/convert_sparse2dense.mlir
1

this was a mistake. Fixed.

aartbik added inline comments.Oct 28 2022, 12:39 PM
mlir/lib/Dialect/SparseTensor/IR/SparseTensorDialect.cpp
265

do we care about ordered/unique here? [there are a few different flavors of COO]

given the usage later, I think we can accept sorted and unsorted, but probably not with duplicates?

mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorRewriting.cpp
159

document that dynSizes is the output (also, not sure if we have a convention on this, but I usually put input vectors before output parameters, especially now that they have the same reference type). WDYT?

bixia updated this revision to Diff 471898.Oct 30 2022, 9:57 PM
bixia marked 2 inline comments as done.

Rename isCOOType to isUniqueCOOType and check that the last dim is unique.
Reorder parameters for getDynamicSizes.

mlir/lib/Dialect/SparseTensor/IR/SparseTensorDialect.cpp
265

Add a check to make sure that the last dimension is unique.
Rename the func to isUniqueCOOType.

mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorRewriting.cpp
159

Added a doc description for dynSizes.
The C++ coding style says to order input params before output params, so I reordered the params.

aartbik added inline comments.Oct 31 2022, 6:55 PM
mlir/lib/Dialect/SparseTensor/IR/SparseTensorDialect.cpp
278

make a note that rank == 1 works (unique on the only, compressed, level), as does rank > 1 (unique on the last, singleton, level)

bixia updated this revision to Diff 472324.Nov 1 2022, 9:14 AM
bixia marked an inline comment as done.

Add a comment for isUniqueCOOType.

aartbik accepted this revision.Nov 1 2022, 9:54 AM
This revision is now accepted and ready to land.Nov 1 2022, 9:54 AM
This revision was automatically updated to reflect the committed changes.