Diff Detail
- Repository: rG LLVM Github Monorepo
Event Timeline
mlir/lib/Dialect/SparseTensor/IR/SparseTensorDialect.cpp |
---|---
620 | This is enabled in the next patch |
mlir/test/Dialect/SparseTensor/roundtrip.mlir |
---|---
217 | I am still trying to put everything into a consistent picture, but here are my questions so far: (1) The "slice" in sparse_tensor.encoding here has three fields, offset/size/stride, but the specifier only has offset/stride (the size is represented in the dim size). Maybe sparse_tensor.encoding shouldn't encode the "size" info? I mean, you could then use the same sparse_tensor.encoding to annotate slices of different sizes, similar to the fact that you can use the same sparse_tensor.encoding to annotate sparse tensors of the same layout but different sizes. (2) When the "slice" in sparse_tensor.encoding is not provided, would that be equivalent to having offset/stride == 0/1? In this PR, we issue an error if "slice" is not in the encoding when we query offset/stride. |
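To make the question above concrete, here is a minimal sketch of the two encodings under discussion. The exact attribute spelling (in particular the `slice` field and its `(offset, size, stride)` triples) is an assumption on my part and is meant as illustration, not as the form this patch actually parses:

```mlir
// Plain CSR encoding: no slice metadata; the same encoding can annotate
// CSR tensors of any size.
#CSR = #sparse_tensor.encoding<{
  dimLevelType = [ "dense", "compressed" ]
}>

// Hypothetical CSR slice encoding: each dimension carries an
// (offset, size, stride) triple, while the runtime specifier only needs
// offset and stride, because the size is already the dim size.
#CSR_SLICE = #sparse_tensor.encoding<{
  dimLevelType = [ "dense", "compressed" ],
  slice = [ (1, 4, 1), (1, 4, 2) ]
}>

// The encoding is attached to the tensor type as usual.
func.func private @sparse_slice(tensor<?x?xf64, #CSR_SLICE>)
```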
mlir/test/Dialect/SparseTensor/roundtrip.mlir |
---|---
217 | For (1): You are right, and I thought about removing it too (it somewhat overlaps with the tensor DimOp). But I decided to keep it for future extension: it could be used to store the actual sizes of the underlying tensor, which could then be used to remove the check whether the slice size equals the tensor size. For (2): it is because I want to reduce the number of fields for non-slices after lowering (so that we don't need actual fields to store the offset/stride array for non-slices). We could of course skip the field for a static slice, but that would also require us to fill the hole when casting a static slice to a dynamic one. See my comment on the next patch in ExtractSliceOpConverter. Both points are really design choices, and we can have further discussion on them. |
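As a companion sketch for point (2): under this design, slice offset/stride can only be queried from the specifier of a slice encoding; a non-slice encoding carries no such fields at all. The field names `dim_offset`/`dim_stride` and the exact `storage_specifier.get` syntax are assumptions based on this patch series and may differ from the final spelling:

```mlir
// Sketch only: reuses the hypothetical #CSR_SLICE encoding from above.
func.func @slice_metadata(%spec : !sparse_tensor.storage_specifier<#CSR_SLICE>)
    -> (index, index) {
  // Query the dimension-0 slice offset and stride. Issuing the same query
  // against a non-slice encoding such as #CSR is rejected in this PR.
  %off = sparse_tensor.storage_specifier.get %spec dim_offset at 0
           : !sparse_tensor.storage_specifier<#CSR_SLICE>
  %str = sparse_tensor.storage_specifier.get %spec dim_stride at 0
           : !sparse_tensor.storage_specifier<#CSR_SLICE>
  return %off, %str : index, index
}
```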
mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorAttrDefs.td |
---|---
371 | Can you please update the doc in SparseTensorStorageLayout.h as part of this revision. |

mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorOps.td |
---|---
386 | We probably need to show CSR/CSR_SLICE as part of the example now |

mlir/lib/Dialect/SparseTensor/IR/SparseTensorDialect.cpp |
---|---
623 | hehe, okay ;-) |

mlir/lib/Dialect/SparseTensor/Transforms/SparseStorageSpecifierToLLVM.cpp |
---|---
40 | period at end |
Please be sure to rebase this patch onto D144773, because otherwise this is going to cause a lot of patch conflicts.
mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorAttrDefs.td |
---|---
371 | Is this *really* for dimensions? Since everything else in StorageSpecifierKind is for levels, I'm pretty sure this should be for levels too. (N.B., the name of DimSize is corrected in D144773) |
372 | Ditto |

mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorOps.td |
---|---
367 | ditto |
mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorOps.td |
---|---
371 | mark this with TODO or FIXME so we can easily find what we promised later ;-) |

mlir/lib/Dialect/SparseTensor/Transforms/SparseStorageSpecifierToLLVM.cpp |
---|---
123 | We copy |

mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorStorageLayout.h |
---|---
81 | This part needs to move ABOVE the examples, to L61; then also add a slice example to the doc |
> Can you please update the doc in SparseTensorStorageLayout.h as part of this revision.

Good to keep that up to date!