Details
- Reviewers: aartbik, Peiming
- Commits: rG8a583bd53dcb: [mlir][sparse] Add codegen for expand op.

Diff Detail
- Repository: rG LLVM Github Monorepo

Event Timeline
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorCodegen.cpp
- Line 585: Keep the original comment: // Determine the size for access expansion.

mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorPasses.cpp
- Lines 178–180: Since you mention the dialect at L180, L184 can now be removed, just as above.
- Line 179: "rewriting rules" -> "conversion rules" (or "codegen rules").
mlir/test/Dialect/SparseTensor/codegen.mlir
- Line 293: How about a single-letter name here too, so it aligns better?
- Line 295: Indentation?
- Line 299: Maybe make this 8x10, so we see which size gets used!
- Line 304: Let's also add a test with ?x? dimensions and parameters in the bufferization.alloc_tensor(...) so we get a DimOp operation. Perhaps with a column-wise access, so we see that it swaps?
mlir/test/Dialect/SparseTensor/codegen.mlir
- Line 299: And also a CSC with constants.
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorCodegen.cpp
- Line 589: This looks very similar to the original implementation; is there any way we can merge the two?
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorCodegen.cpp
- Line 585: Since DimOp operates on the original dimensions, and you want the innermost stored dimension, we need to apply toOrig() to the computed rank-1 here. This will be an example of how much better "codegen" composes, since we will fold this much more often than we did in the "conversion" case.
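The stored-vs-original dimension distinction above can be modeled in a few lines. This is only an illustrative Python sketch, not the dialect's actual C++ helper: `to_orig` below is a hypothetical stand-in for the toOrig() mentioned in the comment, mapping a stored dimension position back through the encoding's dimension ordering.

```python
def to_orig(dim_ordering, stored_pos):
    """Map a stored dimension position back to the original tensor
    dimension, given a dimension ordering that records, for each
    original dimension, the position at which it is stored."""
    for orig_dim, pos in enumerate(dim_ordering):
        if pos == stored_pos:
            return orig_dim
    raise ValueError("not a valid stored position")

# For a CSC-style 2-D encoding, original dim 0 (rows) is stored
# innermost, so the innermost stored position (rank - 1) maps back to
# original dimension 0 -- which is why a plain DimOp on rank-1 would
# query the wrong dimension without the mapping.
csc_ordering = [1, 0]   # original dim 0 -> stored pos 1, dim 1 -> pos 0
innermost = to_orig(csc_ordering, len(csc_ordering) - 1)
```

For the identity ordering the mapping is a no-op, which is consistent with the claim that codegen will fold it away in the common case.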
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorCodegen.cpp
- Line 585: Just out of curiosity: is it okay to create a DimOp here, since DimOp is illegal and will need to be converted in the same pass as well? Will MLIR recursively legalize the DimOp? If so, why can't we do that in the conversion pass? Also, will it have the same bug as https://reviews.llvm.org/D133472? You might want a toOrig conversion here as well.
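On the recursive-legalization question: MLIR's dialect conversion driver does legalize ops created by a pattern, re-running legalization on the pattern's output until everything is legal. A tiny worklist model (plain Python, purely illustrative; the op names and expansions below are made up for the example) shows the fixed-point behavior:

```python
def legalize(root, patterns, legal):
    """Toy model of dialect conversion: expanding an illegal op may
    emit further illegal ops, which are legalized in turn."""
    worklist, out = [root], []
    while worklist:
        op = worklist.pop()
        if op in legal:
            out.append(op)
        else:
            # The pattern's output is re-enqueued for legalization.
            worklist.extend(patterns[op])
    return out

# Hypothetical expansions: the sparse op lowers to a tensor.dim plus an
# alloc, and the (still illegal) tensor.dim lowers again to memref.dim.
patterns = {
    "sparse_tensor.expand": ["tensor.dim", "memref.alloc"],
    "tensor.dim": ["memref.dim"],
}
legal = {"memref.alloc", "memref.dim"}
lowered = legalize("sparse_tensor.expand", patterns, legal)
```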
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorCodegen.cpp
- Line 585: Uh oh, seems my comment was just repeating it.
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorCodegen.cpp
- Line 494: This comment is not really valid, since you are not using the utility. Please move over the comment from the "conversion" revision, stating that we rewrite recursively on the *original* tensor (hence op.getTensor and not adaptor.getTensor).
mlir/test/Dialect/SparseTensor/codegen.mlir
- Line 309: Do we have a pass that does the store/load forwarding? ;-)
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorCodegen.cpp
- Line 494: Scratch this, you already did that.
mlir/test/Dialect/SparseTensor/codegen.mlir
- Line 309: Yes. Another reason for switching away from the supporting lib.
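The store/load forwarding being discussed is the standard dataflow cleanup. A minimal standalone sketch (Python, not MLIR; the three-tuple op format is invented for the example, and the toy model ignores aliasing because addresses are exact dictionary keys):

```python
def forward_stores(ops):
    """Replace each load whose address has a known prior store with the
    stored value, turning the load into a copy of that value."""
    last_store = {}   # address -> most recently stored value
    out = []
    for kind, addr, val in ops:
        if kind == "store":
            last_store[addr] = val
            out.append((kind, addr, val))
        elif kind == "load" and addr in last_store:
            # Forward the stored value: the load becomes a plain copy.
            out.append(("copy", addr, last_store[addr]))
        else:
            out.append((kind, addr, val))
    return out

program = [("store", "x", 7), ("load", "x", None), ("load", "y", None)]
forwarded = forward_stores(program)
```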