This revision contains all "sparsification" ops and rewriting necessary to support sparse output tensors when the kernel has no reduction (viz. insertions occur in lexicographic order and are "injective"). This will later be generalized to allow reductions as well. Also, this first revision only supports sparse 1-d tensors (viz. vectors) as output in the runtime support library; this will be generalized to n-d tensors shortly. But this way, the revision is kept to a manageable size.
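The "insertions occur in lexicographic order and are injective" constraint can be illustrated with a small standalone sketch. This is not the actual runtime support library code (all names here are invented for illustration): a 1-d sparse vector in coordinate form that rejects out-of-order or duplicate insertions.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical sketch of a 1-d sparse vector that only supports "injective"
// insertions in lexicographic (here: strictly increasing) index order, the
// constraint this revision imposes on sparse output tensors.
struct SparseVectorSketch {
  std::vector<uint64_t> indices;
  std::vector<double> values;

  void lexInsert(uint64_t i, double v) {
    // Lexicographic + injective: each new index must strictly exceed the last.
    assert((indices.empty() || indices.back() < i) &&
           "insertions must occur in strictly increasing index order");
    indices.push_back(i);
    values.push_back(v);
  }
};
```

Under this discipline the coordinate arrays are built append-only, which is what makes the no-reduction case simple; supporting reductions would require updating an already-inserted value.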
Repository: rG LLVM Github Monorepo
Event Timeline
I still need to take a closer look at the logic of Sparsification.cpp and at sparse_vector_ops.mlir, but other than that things look good modulo my comments.
mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorOps.td

| Line(s) | Comment |
|---|---|
| 246–261 | If this op is just for codegen / progressive bufferization, then shouldn't we know this operand statically whenever we generate the load op? That's certainly the case for Sparsification.cpp:1464 (and conversion.mlir:427, roundtrip.mlir:121, etc.). So are there actually any cases where we need this to be dataflow-dependent, and if not, why not make it an attribute rather than an operand? |
| 258 | This description is a bit uninformative, since the reader won't necessarily know which conversions count as "non-trivial" for the purpose of being released. So we should explicate which conversions this includes, either here or at the convert op description (with a comment here that says to look there). |

mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp

| Line(s) | Comment |
|---|---|
| 876 | You're missing an "I". |

mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp

| Line(s) | Comment |
|---|---|
| 292 | Add a short description of this out-parameter to the doxygen. |
| 543 | Rename to memTp for consistency with the rest of the code. |

mlir/lib/ExecutionEngine/SparseTensorUtils.cpp

| Line(s) | Comment |
|---|---|
| 283 | Why a pointer instead of a reference? If the pointer is for extern-"C" reasons, then we should assert non-null before dereferencing it. |

mlir/test/Dialect/SparseTensor/conversion.mlir

| Line(s) | Comment |
|---|---|
| 417–419 | Why the removal of #DenseVector? Is that test no longer valid for LoadOp? |

mlir/test/Dialect/SparseTensor/roundtrip.mlir

| Line(s) | Comment |
|---|---|
| 137 | Reindent/align. |
mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorOps.td

| Line(s) | Reply |
|---|---|
| 246–261 | Note that it *is* an attribute, but indeed I parse it as an extra operand. So I guess you are proposing to parse the value in a more attribute-like manner ;-) |
| 258 | Good point. Added the "non-trivial" definition. |
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp

| Line(s) | Reply |
|---|---|
| 876 | Oh, good catch! We really need to add more "test generators" so we can just iterate over all data types more easily than writing tests by hand ;-) |
mlir/lib/ExecutionEngine/SparseTensorUtils.cpp

| Line(s) | Reply |
|---|---|
| 283 | Yeah, similar to all the other strided memrefs that are passed between the generated code and the library. |
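For readers unfamiliar with the extern-"C" constraint being discussed here: C linkage cannot express C++ references, so descriptors crossing the code/library boundary are passed by pointer. A minimal standalone sketch (the descriptor layout and function name below are invented for illustration, not the actual SparseTensorUtils.cpp code) shows the non-null assert the review comment asks for:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical 1-d strided-memref descriptor, mirroring the general shape of
// the descriptors MLIR passes across the extern "C" boundary (names invented).
struct StridedMemRef1D {
  double *basePtr;
  double *data;
  int64_t offset;
  int64_t sizes[1];
  int64_t strides[1];
};

// C linkage cannot take a C++ reference, so the runtime entry point takes a
// pointer; asserting non-null before the first dereference addresses the
// review comment.
extern "C" double sumMemRefSketch(StridedMemRef1D *ref) {
  assert(ref && "expected non-null memref descriptor");
  double total = 0;
  for (int64_t i = 0; i < ref->sizes[0]; ++i)
    total += ref->data[ref->offset + i * ref->strides[0]];
  return total;
}
```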
mlir/test/Dialect/SparseTensor/conversion.mlir

| Line(s) | Reply |
|---|---|
| 417–419 | The reconstruct op (was .tensor, now .load to be more consistent with the tensor equivalent) used to depend on the sparsity structure (so I tested a sparse and a dense one). I opted for side-effecting impurity for the time being; otherwise we get a lot of SSA spaghetti dependencies in intermediate, very short-lived code that adds no value. We may refine as needed. |
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp

| Line(s) | Comment |
|---|---|
| 871–876 | We have two other places that use similar if-branches: genAddEltCall and genGetNextCall. Consider building the name as lexInsert + getFuncNameSuffix(eltType). |
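A hedged sketch of what such a helper could look like. Only the getFuncNameSuffix name and the "lexInsert" prefix come from the comment above; the element-type enum, the switch arms, and the second helper are invented for illustration:

```cpp
#include <string>

// Sketch of the suggested refactoring: instead of duplicating per-type
// if-branches in genAddEltCall, genGetNextCall, and the lexInsert lowering,
// derive the runtime-function name from one shared suffix helper. The set of
// element types below is illustrative only.
enum class EltType { F64, F32, I64, I32 };

static std::string getFuncNameSuffix(EltType t) {
  switch (t) {
  case EltType::F64: return "F64";
  case EltType::F32: return "F32";
  case EltType::I64: return "I64";
  case EltType::I32: return "I32";
  }
  return "";
}

static std::string getLexInsertName(EltType t) {
  // Each call site composes a fixed prefix with the shared suffix, so adding
  // a new element type touches only getFuncNameSuffix.
  return "lexInsert" + getFuncNameSuffix(t);
}
```

The design benefit is that the type-dispatch logic lives in exactly one place, which is what the follow-up CL mentioned below would accomplish.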
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp

| Line(s) | Reply |
|---|---|
| 871–876 | I like that suggestion! Will do in a follow-up CL. |