Demonstrates how the sparse tensor type -> tuple -> getter lowering will eventually yield actual code that operates on the memrefs directly.
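To make the summary concrete, here is a minimal, self-contained C++ sketch of the idea; all names and types here are hypothetical illustrations, not the patch's actual API. After conversion, the sparse tensor is just a tuple of flat buffers, the former sparse_tensor.pointers/indices/values queries become plain getters on that tuple, and downstream code can then index the raw buffers directly.

```cpp
// Sketch only (hypothetical names): a CSR-like 2-D tensor lowered to a tuple
// of flat buffers, with getters standing in for the lowered query ops.
#include <cstdint>
#include <span>
#include <tuple>
#include <utility>

using SparseStorage = std::tuple<std::span<int64_t>,  // pointers for dim 1
                                 std::span<int64_t>,  // indices for dim 1
                                 std::span<double>>;  // values

// What a sparse_tensor.pointers query becomes after codegen: a tuple getter.
inline std::span<int64_t> getPointers(const SparseStorage &s) {
  return std::get<0>(s);
}

// Later code then works on the raw buffer directly, e.g. the nonzero range of
// row i is [pointers[i], pointers[i + 1]).
inline std::pair<int64_t, int64_t> rowRange(const SparseStorage &s, int64_t i) {
  auto ptr = getPointers(s);
  return {ptr[i], ptr[i + 1]};
}
```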
Event Timeline
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorCodegen.cpp:252
Why isn't an index conversion (toStored) needed here? Does the toPointersOp also take an already converted index as input?
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorCodegen.cpp:174
Couldn't the value index simply be determined by querying the length of the tuple?
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorCodegen.cpp:168
Oh, I see. Perhaps, but this keeps the logic in one place. Also, perhaps we want to simply always use two fields per dimension, so we don't need to scan and can determine the index more quickly later. But I don't like the wasted fields. Perhaps we should set up some metadata on the side, so that this method becomes a simple lookup. Not sure yet what is best...
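The trade-off described in this comment can be illustrated with a small standalone sketch (hypothetical names; the actual helper in SparseTensorCodegen.cpp may differ): (a) the compact layout that requires a scan, (b) a fixed two-fields-per-dimension layout that avoids the scan at the cost of wasted slots for dense dimensions, and (c) a precomputed side table that turns each query into a simple lookup.

```cpp
// Sketch of the three strategies discussed above (hypothetical names).
#include <cassert>
#include <vector>

enum class DimLevel { Dense, Compressed };

// (a) Compact layout: only compressed dimensions get pointers/indices fields,
//     so locating a field requires scanning the dimension level types.
unsigned pointersFieldByScan(const std::vector<DimLevel> &dims, unsigned dim) {
  assert(dims[dim] == DimLevel::Compressed && "dense dims have no pointers");
  unsigned field = 0;
  for (unsigned d = 0; d < dim; ++d)
    if (dims[d] == DimLevel::Compressed)
      field += 2; // one pointers and one indices buffer per compressed dim
  return field;
}

// (b) Fixed layout: always reserve two fields per dimension; lookup is
//     constant time, but dense dimensions waste their slots.
unsigned pointersFieldFixed(unsigned dim) { return 2 * dim; }

// (c) Metadata on the side: scan once, then every query is a table lookup.
std::vector<unsigned> buildFieldTable(const std::vector<DimLevel> &dims) {
  std::vector<unsigned> table(dims.size(), ~0u); // ~0u marks "no such field"
  unsigned field = 0;
  for (unsigned d = 0, e = dims.size(); d < e; ++d)
    if (dims[d] == DimLevel::Compressed) {
      table[d] = field;
      field += 2;
    }
  return table;
}
```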
So the pointer indices are also already converted? (Is that why you do not accumulate ptrIdx here?)
Also, would it be clearer to have two functions, one for indices and one for pointers?
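If it helps clarify that suggestion, a per-kind split might look roughly like this (sketch only, hypothetical names): one shared scan plus two thin wrappers so each call site states which buffer it wants.

```cpp
// Sketch of splitting the lookup into per-kind entry points (hypothetical).
#include <cassert>
#include <vector>

// Shared scan: field position of the first buffer belonging to `dim`, assuming
// the layout [pointers(d), indices(d)] for each compressed dimension d.
static unsigned fieldStart(const std::vector<bool> &isCompressed, unsigned dim) {
  assert(isCompressed[dim] && "dense dimensions have no pointers/indices");
  unsigned field = 0;
  for (unsigned d = 0; d < dim; ++d)
    if (isCompressed[d])
      field += 2;
  return field;
}

unsigned getPointersFieldIndex(const std::vector<bool> &isCompressed,
                               unsigned dim) {
  return fieldStart(isCompressed, dim);
}

unsigned getIndicesFieldIndex(const std::vector<bool> &isCompressed,
                              unsigned dim) {
  return fieldStart(isCompressed, dim) + 1; // indices follow the pointers
}
```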