Moves sparse tensor output support forward by generalizing from injective
insertions only to also include reductions. This revision accepts the case with
all-parallel outer loops and all-reduction inner loops, since that case can
still be handled with an injective insertion: the outer parallel loops visit each
output coordinate exactly once. The next revision will allow the inner parallel
loop to move inward (but that will require "access pattern expansion", a.k.a. a
"workspace").
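
For context, a kernel that falls into the accepted case is sparse matrix-times-vector
with a sparse output vector: the output index is driven only by the outer parallel
loop, while the inner loop is a reduction, so each output entry is inserted at most
once. Below is a minimal sketch of such a kernel; the names (#CSR, #SV, @matvec) are
illustrative, and the exact sparse_tensor encoding and func/arith syntax varies across
MLIR versions.

```mlir
// Illustrative encodings (older dimLevelType syntax; newer MLIR uses a level-map form).
#CSR = #sparse_tensor.encoding<{
  dimLevelType = [ "dense", "compressed" ]
}>
#SV = #sparse_tensor.encoding<{
  dimLevelType = [ "compressed" ]
}>

#trait_matvec = {
  indexing_maps = [
    affine_map<(i, j) -> (i, j)>,  // A
    affine_map<(i, j) -> (j)>,     // b
    affine_map<(i, j) -> (i)>      // x (the sparse output)
  ],
  // Outer loop i is parallel, inner loop j is a reduction: each entry x(i)
  // is produced by exactly one outer iteration, so insertion into the
  // sparse output stays injective.
  iterator_types = ["parallel", "reduction"],
  doc = "x(i) += A(i,j) * b(j)"
}

func @matvec(%A: tensor<?x?xf64, #CSR>,
             %b: tensor<?xf64>,
             %x: tensor<?xf64, #SV>) -> tensor<?xf64, #SV> {
  %0 = linalg.generic #trait_matvec
    ins(%A, %b : tensor<?x?xf64, #CSR>, tensor<?xf64>)
    outs(%x : tensor<?xf64, #SV>) {
    ^bb0(%a: f64, %s: f64, %t: f64):
      %p = arith.mulf %a, %s : f64
      %r = arith.addf %t, %p : f64
      linalg.yield %r : f64
  } -> tensor<?xf64, #SV>
  return %0 : tensor<?xf64, #SV>
}
```

A kernel whose output index also depends on a loop nested inside a reduction would
still be rejected by this revision; that is the case the follow-up's "access pattern
expansion" (workspace) is meant to enable.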
Diff Detail
- Repository: rG LLVM Github Monorepo
Event Timeline
mlir/include/mlir/Dialect/SparseTensor/Utils/Merger.h

- Line 120: Why the change from "output" to "out"? (Even if the code retains the abbreviation, the longer version flows better for documentation; unless "out tensor" is meant to be a technical term.)

mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp

- Line 49: It'd be clearer to spell this out fully (i.e., as outNest or similar), like for numTensors, numLoops, etc. Also, the name "outNest" is strange. Initially I was thinking it ought to be nestOut, akin to sparseOut, but after reading isAdmissableTensorExp I'm thinking it'd be better to give it a longer name like outerParNest (or at least outerNest) to make it clear that the "outer" here is different from the "output" of sparseOut.
- Line 84: To match the stylization elsewhere.