
[mlir][sparse] generalize sparse tensor output implementation

Authored by aartbik on Mon, Nov 22, 2:22 PM.



Moves sparse tensor output support forward by generalizing from injective
insertions only to also include reductions. This revision accepts the case of all
parallel outer loops followed by all reduction inner loops, since that case can
still be handled with an injective insertion. A next revision will allow a parallel
loop to move inward past a reduction (but that will require "access pattern
expansion", aka a "workspace").
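The admissible case can be illustrated with a small sketch (plain Python, not the actual MLIR codegen; the dict-of-dicts sparse layout and the function name are illustrative assumptions): when every outer loop is parallel and every inner loop is a reduction, each output coordinate is produced by exactly one completed reduction, so each coordinate is inserted into the sparse output exactly once — the insertion stays injective.

```python
# Sketch: row sums of a sparse matrix, with all-parallel outer loops
# (over rows) and all-reduction inner loops (over stored columns).
# The sparse matrix is modeled as {row: {col: value}} for illustration.

def row_sums_sparse(a_rows):
    """Return the sparse vector {row: sum of row} built with one
    insertion per output coordinate (injective insertion)."""
    out = {}
    for i, cols in a_rows.items():      # parallel outer loop over rows
        acc = 0.0                       # reduction accumulator
        for v in cols.values():         # reduction inner loop over columns
            acc += v
        # The reduction finished before the insertion, so coordinate i
        # is inserted exactly once; no existing entry is ever updated.
        assert i not in out
        out[i] = acc
    return out
```

For example, `row_sums_sparse({0: {1: 2.0, 3: 3.0}, 2: {0: 1.0}})` yields `{0: 5.0, 2: 1.0}`. If a parallel loop instead sat inside a reduction, a coordinate could be touched more than once, which is what the future "access pattern expansion" (workspace) is meant to handle.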

Diff Detail

Event Timeline

aartbik created this revision.Mon, Nov 22, 2:22 PM
aartbik requested review of this revision.Mon, Nov 22, 2:22 PM
aartbik updated this revision to Diff 389043.Mon, Nov 22, 2:27 PM

fixed empty line

wrengr added inline comments.Mon, Nov 22, 3:45 PM

Why the change from "output" to "out"? (Even if the code retains the abbreviation, the longer version flows better for documentation; unless "out tensor" is meant to be a technical term)


It'd be clearer to spell this out fully (i.e., as outNest or similar), like for numTensors, numLoops, etc.

Also, the name "outNest" is strange. Initially I was thinking it ought to be nestOut akin to sparseOut, but then after reading isAdmissableTensorExp I'm thinking it'd be better to give it a longer name like outerParNest (or at least outerNest) to make it clear that the "outer" here is different from the "output" of sparseOut.


to match the stylization elsewhere

aartbik updated this revision to Diff 389061.Mon, Nov 22, 4:08 PM
aartbik marked 3 inline comments as done.


aartbik updated this revision to Diff 390390.Mon, Nov 29, 9:35 AM

rebased with main

aartbik updated this revision to Diff 390424.Mon, Nov 29, 11:35 AM

rebased with main, new bufferization dialect

aartbik updated this revision to Diff 390447.Mon, Nov 29, 12:47 PM

rebased with main after my own previous submit

bixia accepted this revision.Mon, Nov 29, 4:07 PM
This revision is now accepted and ready to land.Mon, Nov 29, 4:07 PM