Diff Detail
- Repository: rG LLVM Github Monorepo

Event Timeline
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorRewriting.cpp

Line | Comment
---|---
159 | This feels like it could be moved to IR/SparseTensor.h, where we have several related tests on ranked tensor type and sparsity. WDYT?
162 | I don't think we can have enc + rank == 0, so the second test is strictly speaking not needed.
166 | Even for rank == 1, don't we need a compressed outermost? A compressed sparse vector is basically in COO format ;-) So I think you can simplify to the test that compressed is outermost, followed by singletons.
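For reference, the simplified test suggested above amounts to a predicate along these lines. This is a minimal sketch, assuming sparse-tensor helpers named getSparseTensorEncoding, isCompressedDim, and isSingletonDim; it is not the exact code under review:

```cpp
// Sketch: a COO-style sparse tensor type has a sparse encoding, a
// compressed outermost dimension, and singletons for all remaining
// dimensions. Rank == 1 is covered by the first check alone, since a
// compressed sparse vector is basically COO.
static bool isCOOType(RankedTensorType tp) {
  SparseTensorEncodingAttr enc = getSparseTensorEncoding(tp);
  if (!enc)
    return false; // No encoding: a dense tensor, not COO.
  if (!isCompressedDim(tp, 0))
    return false; // Outermost dimension must be compressed.
  for (int64_t i = 1, rank = tp.getRank(); i < rank; ++i)
    if (!isSingletonDim(tp, i))
      return false; // Everything after it must be a singleton.
  return true;
}
```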
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorRewriting.cpp

Line | Comment
---|---
555 | Do you think it would be nicer if the body builder provided …
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorRewriting.cpp

Line | Comment
---|---
499–511 | I can extend the foreach operator to accept these inputs if that is more desirable.
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorRewriting.cpp

Line | Comment
---|---
159 | Good idea.
499–511 | The sparse constant actually has two constants: one for the indices and one for the values.
555 | I don't see a big difference; the current interface is fine, as the value is in args.back().
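To make the args.back() point concrete: the foreach body builder is handed the coordinates followed by the element value, so the scalar is always the last argument. A hedged sketch of how a rewriting rule might consume it (the exact ForeachOp builder signature in the patch may differ):

```cpp
// Sketch: build a sparse_tensor.foreach and unpack its block arguments.
// The callback receives one index per dimension, then the value.
rewriter.create<ForeachOp>(
    loc, input, [&](OpBuilder &builder, Location loc, ValueRange args) {
      ValueRange coords = args.drop_back(); // coordinates, one per dim
      Value val = args.back();              // the element value
      // ... emit the loop body using coords and val ...
    });
```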
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorRewriting.cpp

Line | Comment
---|---
499–511 | Okay, SG. I extended the dense case. The sparse constant probably needs more work!
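For context on the "two constants" remark: in MLIR a sparse constant is a SparseElementsAttr, which bundles a dense attribute of indices with a dense attribute of values, so lowering it has to walk both in tandem. Roughly, using the standard attribute API (constantOp here stands for some arith.constant op in scope; this is a sketch, not the patch itself):

```cpp
// A sparse constant such as
//   arith.constant sparse<[[0, 0], [1, 2]], [1.0, 5.0]> : tensor<2x3xf32>
// stores its nonzeros as two underlying constants.
if (auto attr = dyn_cast<SparseElementsAttr>(constantOp.getValue())) {
  DenseIntElementsAttr indices = attr.getIndices(); // nse x rank indices
  DenseElementsAttr values = attr.getValues();      // nse values
  // ... iterate the nse entries, pairing each index row with its value ...
}
```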
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorRewriting.cpp

Line | Comment
---|---
499–511 | Thanks! I will see if I can modify genDenseTensorOrSparseConstantIterLoop so that the codegen and conversion take advantage of the fact that foreach can handle dense tensors.
mlir/lib/Dialect/SparseTensor/IR/SparseTensorDialect.cpp

Line | Comment
---|---
275 | Very minor nit: I would use for (uint64_t i = 1, rank = tp.getRank(); i < rank; ++i), since it is only used in the for loop, but up to you.
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorRewriting.cpp

Line | Comment
---|---
160 | I think we should avoid returning vectors by value (even though the compiler may actually do some interesting optimizations). For such an output vector, simply pass it in by reference.
488 | Comment on dense2dense?
580 | Can we move that up into the actual codegen rewriting rule above, so it is not hidden inside this method?
mlir/test/Dialect/SparseTensor/convert_sparse2dense.mlir

Line | Comment
---|---
1 | Add a FIXME or TODO.
Address review comments.
mlir/lib/Dialect/SparseTensor/IR/SparseTensorDialect.cpp

Line | Comment
---|---
275 | Right, done.

mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorRewriting.cpp

Line | Comment
---|---
488 | Added a comment to say that dense-to-dense is a no-op handled by canonicalization.
580 | Yes, moved this check up.

mlir/test/Dialect/SparseTensor/convert_sparse2dense.mlir

Line | Comment
---|---
1 | This was a mistake. Fixed.
mlir/lib/Dialect/SparseTensor/IR/SparseTensorDialect.cpp

Line | Comment
---|---
265 | Do we care about ordered/unique here? [There are a few different flavors of COO.] Given the usage later, I think we can accept sorted and unsorted, but probably not with duplicates?

mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorRewriting.cpp

Line | Comment
---|---
159 | Document that dynSizes is the output. (Also, not sure if we have a convention on this, but I usually put input vectors before output parameters, especially now that they have the same reference type.) WDYT?
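Combining the two comments on this helper (pass the output vector by reference rather than returning it, and put inputs before outputs), getDynamicSizes would look roughly like this. The signature follows the review; the body is an assumption:

```cpp
/// Collects the dynamic dimension sizes of `tp` from the corresponding
/// entries of `sizes`. Inputs come first; `dynSizes` is the output
/// vector, filled in place instead of being returned by value.
static void getDynamicSizes(RankedTensorType tp, ValueRange sizes,
                            SmallVectorImpl<Value> &dynSizes) {
  for (const auto &d : llvm::enumerate(tp.getShape()))
    if (ShapedType::isDynamic(d.value()))
      dynSizes.push_back(sizes[d.index()]);
}
```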
Rename isCOOType to isUniqueCOOType and check that the last dim is unique.
Reorder parameters for getDynamicSizes.
mlir/lib/Dialect/SparseTensor/IR/SparseTensorDialect.cpp

Line | Comment
---|---
265 | Add a check to make sure that the last dimension is unique.

mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorRewriting.cpp

Line | Comment
---|---
159 | Add a doc description for dynSizes.
mlir/lib/Dialect/SparseTensor/IR/SparseTensorDialect.cpp

Line | Comment
---|---
278 | Make a note that rank == 1 works (unique on the only compressed) as does rank > 1 (unique on the last singleton).
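So after the rename, the predicate sketched earlier gains exactly one extra check on the last dimension, which covers both cases in the note. Again a hedged sketch, assuming an isUniqueDim helper alongside the earlier ones:

```cpp
// Sketch of isUniqueCOOType: same structural test as before, plus a
// uniqueness check on the last dimension. For rank == 1 that is the
// only (compressed) dimension; for rank > 1 it is the last singleton.
// Ordering is not checked, so sorted and unsorted COO both pass, but
// duplicates do not.
static bool isUniqueCOOType(RankedTensorType tp) {
  SparseTensorEncodingAttr enc = getSparseTensorEncoding(tp);
  if (!enc)
    return false;
  if (!isCompressedDim(tp, 0))
    return false;
  for (int64_t i = 1, rank = tp.getRank(); i < rank; ++i)
    if (!isSingletonDim(tp, i))
      return false;
  return isUniqueDim(tp, tp.getRank() - 1);
}
```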