Also includes a first codegen example (although full support needs tuple access).
Diff Detail
Repository: rG LLVM Github Monorepo
Event Timeline
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorCodegen.cpp:66
> I don't quite understand the implication of this optimization yet: either casting a sparse tensor created from a dynamic shape to its "true static type" is not a no-op, or we will have problems when reading the sparse tensor using its "true static type"?

mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorCodegen.cpp:66–67
> Need to indent from the previous line.
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorCodegen.cpp:66
> For a static type, codegen never needs to query this, since all the information is implicit in the context (I showed the dim-op lowering as the first example of that).
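The point about static types can be sketched with a small model (this is an illustrative Python sketch, not the actual MLIR lowering; `lower_dim_query`, `DYNAMIC`, and `stored_sizes` are hypothetical names): a dimension-size query on a statically shaped tensor folds to a constant known from the type alone, while a dynamic dimension forces a lookup in the sizes stored alongside the sparse storage.

```python
# Illustrative model of the dim-op lowering discussed above.
# DYNAMIC mimics MLIR's marker for a dynamic dimension in a shaped type.
DYNAMIC = -1

def lower_dim_query(static_shape, stored_sizes, d):
    """Return the size of dimension d.

    For a static dimension the answer is implicit in the type, so the
    "lowering" folds to a constant and never touches the runtime
    metadata (stored_sizes). Only a dynamic dimension needs the query.
    """
    if static_shape[d] != DYNAMIC:
        return static_shape[d]   # folded to a compile-time constant
    return stored_sizes[d]       # runtime metadata lookup

# Static dims fold without consulting any metadata at all:
assert lower_dim_query([10, 20], stored_sizes=None, d=0) == 10
# A dynamic dim must read the size stored with the tensor:
assert lower_dim_query([10, DYNAMIC], stored_sizes=[10, 20], d=1) == 20
```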
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorCodegen.cpp:66
> The cast operation back and forth will indeed need to add/remove the sizes as metadata.
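The add/remove behavior of the cast can be modeled in a few lines (again an illustrative Python sketch under assumed names, `cast_to_dynamic` / `cast_to_static`, not the actual codegen): casting toward a more dynamic type materializes the sizes as metadata, and casting back to the true static type strips them, since they become implicit in the type again.

```python
def cast_to_dynamic(payload, static_shape):
    # Static -> dynamic: the sizes are no longer carried by the type,
    # so they must be added to the storage as explicit metadata.
    return {"sizes": list(static_shape), "payload": payload}

def cast_to_static(storage, static_shape):
    # Dynamic -> static: the sizes become implicit in the type again,
    # so the metadata can be removed (after a consistency check).
    assert storage["sizes"] == list(static_shape)
    return storage["payload"]

payload = [0, 2, 5]  # stand-in for the actual sparse storage scheme
dyn = cast_to_dynamic(payload, [10, 20])
assert dyn["sizes"] == [10, 20]
assert cast_to_static(dyn, [10, 20]) is payload
```

Under this model a round trip static → dynamic → static is a no-op on the payload itself; only the sizes metadata is added and then dropped.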