Rationale:
Passing in a pointer to the memref data in order to implement the
dense-to-sparse conversion was a bit too low-level. This revision
improves upon that approach with a cleaner solution: generating a
loop nest in MLIR code itself that prepares the COO object before
passing it to our "swiss army knife" setup. This is much more
intuitive *and* now also allows for dynamic shapes.
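For illustration, the rewritten lowering emits IR roughly along the following lines. This is a minimal sketch rather than the patch's actual output: the runtime entry points @newSparseTensorCOO, @addEltF64, and @fromCOO and their signatures are simplified placeholders for the real "swiss army knife" interface, and the op spellings follow current upstream dialect names.

```mlir
// Minimal sketch of a dense-to-sparse conversion of a 2-d tensor with
// dynamic shape. The three runtime functions declared here are simplified
// placeholders, not the actual support library signatures.
module {
  func.func private @newSparseTensorCOO(index, index) -> !llvm.ptr
  func.func private @addEltF64(!llvm.ptr, f64, memref<?xindex>) -> !llvm.ptr
  func.func private @fromCOO(!llvm.ptr) -> !llvm.ptr

  func.func @dense2sparse(%src: tensor<?x?xf64>) -> !llvm.ptr {
    %c0 = arith.constant 0 : index
    %c1 = arith.constant 1 : index
    %c2 = arith.constant 2 : index
    %f0 = arith.constant 0.0 : f64
    // Dimension sizes are read directly from the source tensor, which is
    // what makes the approach work for dynamic shapes.
    %d0 = tensor.dim %src, %c0 : tensor<?x?xf64>
    %d1 = tensor.dim %src, %c1 : tensor<?x?xf64>
    // Ask the runtime for an empty COO object of the right size.
    %coo = call @newSparseTensorCOO(%d0, %d1) : (index, index) -> !llvm.ptr
    // Scratch buffer holding the index tuple of the element being added.
    %ind = memref.alloca(%c2) : memref<?xindex>
    // Loop nest generated in MLIR itself: visit every element of the dense
    // tensor and add only the nonzero values to the COO object.
    scf.for %i = %c0 to %d0 step %c1 {
      scf.for %j = %c0 to %d1 step %c1 {
        %v = tensor.extract %src[%i, %j] : tensor<?x?xf64>
        %nz = arith.cmpf une, %v, %f0 : f64
        scf.if %nz {
          memref.store %i, %ind[%c0] : memref<?xindex>
          memref.store %j, %ind[%c1] : memref<?xindex>
          %0 = call @addEltF64(%coo, %v, %ind)
            : (!llvm.ptr, f64, memref<?xindex>) -> !llvm.ptr
        }
      }
    }
    // Hand the populated COO object back to the runtime to build the
    // final sparse tensor storage.
    %dst = call @fromCOO(%coo) : (!llvm.ptr) -> !llvm.ptr
    return %dst : !llvm.ptr
  }
}
```

The guard around the runtime call corresponds to the suggestion in the comments below to skip the call entirely when the value is zero.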
Diff Detail
- Repository: rG LLVM Github Monorepo

Event Timeline
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp
- line 117: So, perm is an output to support getEltCall. Can we document this? Also, Google coding style prefers return values over output parameters; we could return a pair of Values here instead.
- line 199: You mean add an if to avoid making the call when val is zero?
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorPasses.cpp
- lines 112–121: This allows DimOp of non-sparse tensor types, am I right? Can we add a comment for this?

mlir/test/Dialect/SparseTensor/conversion.mlir
- line 162: Shall we add test cases for non-trivial permutations and unknown dimension sizes?
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp
- line 117: Output parameters are okay for LLVM, but documenting this is of course always a good idea! Added this to the method doc.
- line 199: Yes, the != 0 test is now inside the method, but we can avoid the call with an if.

mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorPasses.cpp
- lines 112–121: Yes, but that was always the case already, i.e. passing existing ones through. In this particular case, I need to allow introducing them. But I added the comment as requested to clarify.

mlir/test/Dialect/SparseTensor/conversion.mlir
- line 162: Yes, more tests are always good; a sketch of such a case follows below.
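As a rough illustration of the requested coverage, a test with a non-trivial permutation and unknown dimension sizes could look something like the sketch below. It is hypothetical and not taken from conversion.mlir: the encoding attribute uses the current upstream syntax rather than that of the patch's era, and the CHECK lines only hint at the expected loop nest.

```mlir
// Hypothetical test sketch (not from the patch): convert a dynamically
// shaped dense tensor into a sparse tensor whose encoding permutes the
// dimensions (column-major, CSC-like storage).
#PermutedSparse = #sparse_tensor.encoding<{
  map = (d0, d1) -> (d1 : dense, d0 : compressed)
}>

// CHECK-LABEL: func @sparse_convert_2d_dyn
//       CHECK: scf.for
//       CHECK:   scf.for
//       CHECK:     call @addEltF64
func.func @sparse_convert_2d_dyn(%arg0: tensor<?x?xf64>)
    -> tensor<?x?xf64, #PermutedSparse> {
  %0 = sparse_tensor.convert %arg0
    : tensor<?x?xf64> to tensor<?x?xf64, #PermutedSparse>
  return %0 : tensor<?x?xf64, #PermutedSparse>
}
```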