This patch implements sparse2sparse collapse for operands with dynamic shape.
Diff Detail
Repository: rG LLVM Github Monorepo

Event Timeline
please use tags in the title, our team usually uses [mlir][sparse]
Retitled from "Sparse reshaping for dynamic sizes" to "[mlir][sparse] sparse reshaping for dynamic sizes".
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp
  514: we already have hasStaticShape() as a utility
The "Index is too large for the dimension" failure indicates that you are not passing the right (runtime) sizes for the COO data structure. I suspect you pass the -1 of the dynamic size to the kEmptyCOO call.
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp
  603–613: I'm using sizesFromPtr to compute the size of the destination tensor. Is that a valid thing to do? I was thinking that it should compute the runtime size, because it generates the "sparseDimSize" function.
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp
  516: This is really very similar to translateIndices, and contains way too much code duplication. Can we merge the two and deal with static/dynamic in the logic?
  596: LLVM style does not want underscores in local vars
  603–613: The problem is that you are using sizesFromPtr(rewriter, dst_sizes, op, encDst, dstTp, src); to fill the dimensions of the *destination*, but you are querying the *src* to get the dimension size dynamically! The problem, of course, is that you have no dst operand to query the sizes from ;-)
Add function to generate the collapse destination shape
Merge translateIndices with translateIndicesDyn
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp
  520: Can we make expand part of this revision too? If not, then don't assert, but simply return failure() in the rewriter, i.e. the pattern will not kick in.
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp
  496: Is it easier to follow to always generate the MLIR operation (same below), regardless of whether the size is statically known? We can rely on constant folding to eliminate unnecessary runtime computation. I am not sure whether it is worth it though :-)
apologies for the long review time this time around....
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp
  470–471: you have to use ArrayRef<> in the parameter; SmallVector<> is only used at the concrete implementation side (and as written, you copy the contents into the call!)
  477: same, when assigning, only work on references
  482: perhaps good to keep this assert since you use the literal
  496: Can you check that? In the CHECK test below, we just use the unrelated --cse as an extra parameter, and you see that constants are not folded. It is okay to add more flags there, but I would like to see evidence that the code indeed folds.
  517: document the "output" parameter, use ArrayRef for all others
  527: assert on not dynamic?
  539: I typically use the idiom for (unsigned i = 0, size = srcShape.size(); i < size; i++) to avoid the potential call in the loop (depending on how smart the compiler is)
  547: Is this skip-over intended? Your code can definitely use some examples for each case
mlir/test/Dialect/SparseTensor/sparse_reshape.mlir
  27 (On Diff #456805): see above, okay to add an extra flag, but I would like to keep what is on the left
  71 (On Diff #456805): as here
  93 (On Diff #456805): why did you not add any CHECK tests for the dynamic case? You only rely on the integration test now, but I would like to see a complex example generating the right code.
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp
  605–612: srcTp or dstTp here?
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp
  482: I think I'm not using compile-time constants here. The shape vector here is computed at runtime, so we don't need this assert anymore.
  496: Added the --canonicalize flag, which folds constants.
  527: IIUC, srcShape is available no matter whether the source shape is dynamic or static? It is computed by calling sizesFromPtr.
  547: Yes, it is intended. For example, if the srcShape is [8] and the staticDstShape is [2 x ? x 2], then we compute the unknown dimension by dividing 8 by 2 x 2. Therefore we skip the unknown dimension when computing the denominator. Added detailed comments for the algorithm.
  605–612: I think they are equivalent, since dstTp has a static shape if and only if srcTp has a static shape. But I agree that dstTp makes more sense here. Changed it to dstTp.
mlir/test/Dialect/SparseTensor/sparse_reshape.mlir
  93 (On Diff #456805): Added codegen tests for dynamic reshape
mlir/test/Dialect/SparseTensor/sparse_reshape.mlir
  25 (On Diff #460571): hardcoding %21 is way too brittle (and granted, it was wrong in the original test); this should be either a captured variable, as in %[[C:.*]] = ..., or simply scf.condition
  64 (On Diff #460571): same
  84 (On Diff #460571): extra above
mlir/test/Dialect/SparseTensor/sparse_reshape.mlir
  25 (On Diff #460571): Removed hardcoded variable
one last nit, but good to go
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp
  482: Yeah, but since you use shape[j], I would like to make sure we never multiply it with -1 ;-) I realize we just constructed it, but defensive asserts like this ensure that if somebody later changes that logic to allow for dynamic, we will not forget about this one.
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp
  482: I still don't understand why the assert is necessary. This change already allows reshaping with dynamic sizes, so the shape variable here should contain actual dynamic sizes, and can never contain -1?