
[mlir][sparse] support affine expression on sparse dimensions.
Abandoned · Public

Authored by Peiming on Nov 2 2022, 3:53 PM.


Event Timeline

Peiming created this revision. Nov 2 2022, 3:53 PM
Herald added a project: Restricted Project.
Peiming requested review of this revision. Nov 2 2022, 3:53 PM
Peiming added inline comments. Nov 2 2022, 4:01 PM
mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp
173–174

This is (probably) a typo, but I am not sure; it does not actually affect the patch. Let me know if you want to keep it or not.

Peiming updated this revision to Diff 472798. Nov 2 2022, 4:05 PM

remove redundant code.

Peiming updated this revision to Diff 473057. Nov 3 2022, 3:31 PM

add more test cases.

Peiming added inline comments. Nov 3 2022, 3:42 PM
mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp
473–477

@aartbik We should have some discussion on this!

mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_conv_1d_nwc_wcf.mlir
12–13

@aartbik Maybe we can have some discussion on this

Peiming updated this revision to Diff 474631. Nov 10 2022, 4:30 PM

rebase + make dense affine expressions more admissible

Peiming added inline comments. Nov 10 2022, 5:06 PM
mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp
473–477

The patch is updated per the offline discussion: only one AffineDimExpr is picked from each compound affine expression to establish the loop order, instead of adding ordering constraints between every pair of its sub-AffineDimExprs.
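For illustration only, here is a minimal C++ sketch of how picking a single representative AffineDimExpr from a compound affine expression might look; the helper name pickDimExpr and its structure are assumptions for this note, not the patch's actual code.

#include <optional>

#include "mlir/IR/AffineExpr.h"

using namespace mlir;

// Hypothetical helper: walk a compound affine expression such as d0 + d1 and
// return one representative AffineDimExpr, so a single ordering constraint
// can stand in for the whole expression.
static std::optional<AffineDimExpr> pickDimExpr(AffineExpr expr) {
  if (auto dim = expr.dyn_cast<AffineDimExpr>())
    return dim;
  if (auto bin = expr.dyn_cast<AffineBinaryOpExpr>()) {
    // Prefer a dimension from the left-hand side; fall back to the right.
    if (auto lhs = pickDimExpr(bin.getLHS()))
      return lhs;
    return pickDimExpr(bin.getRHS());
  }
  // Constant and symbol expressions contribute no dimension.
  return std::nullopt;
}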

Peiming updated this revision to Diff 474799. Nov 11 2022, 10:19 AM

fix build errors.

Peiming updated this revision to Diff 474802. Nov 11 2022, 10:44 AM

codegen path for sparse conv

Peiming updated this revision to Diff 474803. Nov 11 2022, 10:46 AM
This comment was removed by Peiming.
Peiming added inline comments. Nov 11 2022, 10:58 AM
mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_conv_3d_ndhwc_dhwcf.mlir
7

We can now also support the codegen path for sparse convolution.

Peiming updated this revision to Diff 474822. Nov 11 2022, 11:45 AM

update comments.

Peiming added inline comments. Nov 11 2022, 2:33 PM
mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp
173–174

Despite our offline discussion on this, I still think it should be getNumDims(), after taking more time to understand resolveCycles.

Also, I believe it currently fails to handle cases where NumDims != NumResults, e.g., AffineMap<(d0, d1, d2, d3) -> (d0, d1)>.
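To make the mismatch concrete, here is a small C++ illustration using the standard AffineMap API; the function name numDimsVsNumResults is made up for this note.

#include <cassert>

#include "mlir/IR/AffineExpr.h"
#include "mlir/IR/AffineMap.h"
#include "mlir/IR/MLIRContext.h"

using namespace mlir;

// AffineMap<(d0, d1, d2, d3) -> (d0, d1)> has four dims but only two
// results, so a topsort sized by getNumResults() would be too small.
void numDimsVsNumResults(MLIRContext *ctx) {
  AffineExpr d0 = getAffineDimExpr(0, ctx);
  AffineExpr d1 = getAffineDimExpr(1, ctx);
  AffineMap map =
      AffineMap::get(/*dimCount=*/4, /*symbolCount=*/0, {d0, d1}, ctx);
  assert(map.getNumDims() == 4);    // size of the loop/index space
  assert(map.getNumResults() == 2); // number of result expressions
}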

Peiming updated this revision to Diff 474883. Nov 11 2022, 4:04 PM

fix resolveCycle

Peiming updated this revision to Diff 475212. Nov 14 2022, 10:53 AM

add missing comments.

Peiming updated this revision to Diff 475215. Nov 14 2022, 10:57 AM

small fixes.

Peiming updated this revision to Diff 475275. Nov 14 2022, 2:21 PM

fix crash on constant affine expression.

Peiming updated this revision to Diff 475592. Nov 15 2022, 2:51 PM

update comments.

Peiming updated this revision to Diff 475617. Nov 15 2022, 3:54 PM

favor filter loops when computing the topsort

aartbik added inline comments. Nov 15 2022, 4:06 PM
mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp
173–174

Yes, you are absolutely right. I thought we discussed num inputs vs. num dims (since they should be the same), but num results is of course wrong! The topsort has the size of the "parallel" index space, so either Dims or Inputs should be used.

Peiming updated this revision to Diff 475626. Nov 15 2022, 4:36 PM

fix permute

I made several attempts to review this as is, but there are too many changes in different places at once for me to do a thorough job.
Let's discuss offline how to split this into smaller, incremental steps (even if some merely improve the analysis without codegen yet), so I can better track what is changing.

mlir/include/mlir/Dialect/SparseTensor/Utils/Merger.h
151

comment needs to be expanded

Also, you can probably separate the merger changes (and some unit tests) into their own revision for easier reviewing.

241

just numNativeLoops?

(given that you do that for all others too, it is local enough that this does not go out of sync)

mlir/lib/Dialect/SparseTensor/Transforms/CodegenUtils.cpp
472

was this already set right?

mlir/lib/Dialect/SparseTensor/Transforms/CodegenUtils.h
527

This is really just the inverse topsort, right? It makes sense that you need this for the affine mapping, but the comment right now explains it in terms of the affine mapping rather than its true meaning.
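As a side note, here is a minimal C++ sketch of the inverse-topsort relationship being asked about (the names topSort and inversePermutation are assumptions, not the patch's identifiers): if topSort[i] is the loop placed at position i, the inverse maps a loop back to its position.

#include <vector>

// Invert a permutation: inverse[topSort[i]] == i for every position i.
std::vector<unsigned> inversePermutation(const std::vector<unsigned> &topSort) {
  std::vector<unsigned> inverse(topSort.size());
  for (unsigned i = 0, e = topSort.size(); i < e; ++i)
    inverse[topSort[i]] = i;
  return inverse;
}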

Peiming abandoned this revision. Nov 16 2022, 6:03 PM

This patch has been split into 6 (see D138173).

Peiming added inline comments. Nov 16 2022, 6:06 PM
mlir/lib/Dialect/SparseTensor/Transforms/CodegenUtils.cpp
472

Yes, because a filter loop will look like

for()
 if (idx == affine)
   // user-generated code

so we need to generate the yield in the ifOp instead of the forOp.

But maybe I should add a doc comment to indicate that it is the user's responsibility to ensure the insertion point is correct.
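A hedged C++ sketch of the pattern described above, using the standard OpBuilder and scf APIs; the helper enterFilterBody and its exact signature are assumptions for this note, not the patch's code.

#include "mlir/Dialect/SCF/IR/SCF.h"
#include "mlir/IR/Builders.h"

using namespace mlir;

// Wrap the body of a filter loop in an scf.if and move the builder into its
// "then" block, so that the user-generated code (and any needed yield) ends
// up inside the ifOp rather than directly inside the forOp.
static Block *enterFilterBody(OpBuilder &builder, Location loc, Value cond) {
  auto ifOp = builder.create<scf::IfOp>(loc, cond, /*withElseRegion=*/false);
  Block *thenBlock = &ifOp.getThenRegion().front();
  // The caller keeps emitting from this insertion point; it is the caller's
  // responsibility that the insertion point stays here and not in the loop body.
  builder.setInsertionPointToStart(thenBlock);
  return thenBlock;
}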