This prepares a subsequent revision that will generalize
the insertion code generation. As in the support library,
insertions become much easier to perform with some "cursor"
bookkeeping. Note that, in the long run, we could perhaps
avoid storing the "cursor" permanently and use some
restricted-scope solution (alloca?) instead. However,
that puts harder restrictions on insertion-chain operations,
so for now we follow the more straightforward approach.
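To make the "cursor" bookkeeping concrete, here is a minimal, hypothetical C++ sketch (not the actual SparseTensorCodegen code): the cursor records the coordinates of the last insertion, so a chain of lexicographically ordered insertions only needs to materialize new entries from the first dimension where the new coordinates diverge. The `InsertionState` type and its fields are illustrative names, not from the patch.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical illustration of cursor bookkeeping for insertion chains.
struct InsertionState {
  std::vector<int64_t> cursor;              // coordinates of last insertion
  std::vector<std::vector<int64_t>> levels; // per-dimension coordinate streams

  explicit InsertionState(unsigned rank) : cursor(rank, -1), levels(rank) {}

  // Append a lexicographically larger coordinate tuple.
  void insert(const std::vector<int64_t> &coords) {
    // Find the first dimension where the new tuple differs from the cursor.
    unsigned diff = 0;
    while (diff < coords.size() && coords[diff] == cursor[diff])
      ++diff;
    // From that dimension inward, fresh entries must be materialized.
    for (unsigned d = diff; d < coords.size(); ++d) {
      levels[d].push_back(coords[d]);
      cursor[d] = coords[d];
    }
  }
};
```

The restricted-scope alternative mentioned above would keep this state alive only for the duration of one insertion chain instead of storing it permanently with the tensor.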
Diff Detail
- Repository: rG LLVM Github Monorepo
Event Timeline
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorCodegen.cpp:154

> Do we need a memref for it? Or would a list of scalars be preferred? Would scalars expose more chances for optimization (maybe)?
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorCodegen.cpp:154

> For starters, I think I really prefer the memref (also, it makes the storage scheme layout easier to understand, because otherwise we would introduce variable-length header data). We will probably do a lot of j > a[0] comparisons, which are optimized quite well by MLIR/LLVM.
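A hedged sketch of the comparison pattern the comment refers to: with the memref variant, the dimension sizes live in a small buffer, so codegen emits a load followed by a comparison such as j > a[0]; LLVM readily hoists the loop-invariant load. The function name `inBounds` is illustrative, not from the patch.

```cpp
#include <cstdint>

// Hypothetical sketch: dimension sizes kept in a small buffer
// (the "memref" variant), queried via load-then-compare.
bool inBounds(const int64_t *dimSizes, int64_t j, unsigned d) {
  // Codegen would emit: load dimSizes[d], then compare against j.
  return j >= 0 && j < dimSizes[d];
}
```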
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorCodegen.cpp:154

> Okay, SG!
LGTM.
But I feel that the complexity of the sparse tensor layout keeps increasing; do you think it is worth the effort to put all the index computation into its own class?
We can consider that, for sure! But I also like having the layout close to the codegen rewriting rules, so it is easy to scan back and forth in the file.
And FWIW, since we have now exactly the same fields as in the support library, I *think* we are done ;-) [famous last words]
More importantly, I am worried about passing too many parameters as memrefs, as we are starting to do (since lowering to LLVM IR produces elaborate code for that).
I would really like to pass around a single struct (tuple ;=) while still maintaining visibility into the individual memrefs....
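For context on why many memref parameters are costly: a ranked memref lowers to LLVM as a descriptor holding two pointers, an offset, and per-dimension sizes and strides, so N memref arguments become N expanded descriptors. The sketch below is a hypothetical C++ analogue of that layout; the `SparseStorage` grouping illustrates the single-struct idea floated in the comment, and all names are invented for illustration.

```cpp
#include <cstdint>

// Rough analogue of the LLVM-level descriptor a rank-1 memref lowers to:
// allocated pointer, aligned pointer, offset, sizes, strides.
struct MemRef1D {
  double *allocated;  // base of the allocation
  double *aligned;    // aligned data pointer
  int64_t offset;
  int64_t sizes[1];   // rank 1: one size
  int64_t strides[1]; // rank 1: one stride
};

// The "single struct" suggestion: bundle the descriptors so only one
// parameter is passed, while each memref remains individually visible.
struct SparseStorage {
  MemRef1D dimSizes;
  MemRef1D pointers;
  MemRef1D indices;
  MemRef1D values;
};
```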