This is an archive of the discontinued LLVM Phabricator instance.

[mlir][sparse] Add codegen for expand op.
ClosedPublic

Authored by bixia on Sep 7 2022, 2:35 PM.

Diff Detail

Event Timeline

bixia created this revision.Sep 7 2022, 2:35 PM
Herald added a project: Restricted Project.
bixia requested review of this revision.Sep 7 2022, 2:35 PM
aartbik added inline comments.Sep 7 2022, 4:55 PM
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorCodegen.cpp
467

keep the original comment

// Determine the size for access expansion.

mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorPasses.cpp
181

rewriting rules -> conversion rules (or codegen rules)

184

since you mention the dialect at L180, L184 can now be removed, just as above

mlir/test/Dialect/SparseTensor/codegen.mlir
317

how about a single-letter name too, so it aligns better?

319

indentation?

323

maybe make this 8x10 so we see what size gets used!

328

let's also add a test with ?x? dimensions and parameters in the bufferization.alloc_tensor(...) so we get a DimOp. Perhaps with column-wise access, so we see that it swaps?

aartbik added inline comments.Sep 7 2022, 5:06 PM
mlir/test/Dialect/SparseTensor/codegen.mlir
323

and also a CSC with constants

Peiming added inline comments.Sep 8 2022, 8:40 AM
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorCodegen.cpp
471

This looks very similar to the original implementation; is there any way we can merge the two?

aartbik added inline comments.Sep 8 2022, 8:43 AM
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorCodegen.cpp
467

Since DimOp operates on the original dimensions, and you want the innermost stored dimension, we need to apply toOrig() to the computed rank-1 here. This will be an example of how much better "codegen" composes, since we will fold this much more often than we did in the "conversion" case.
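
A rough sketch of the shape such a codegen rule might take, sizing the expansion buffers by the innermost stored dimension mapped back through the dimension ordering. The helper toOrigDim (the toOrig() mentioned here), the builder calls, and the header paths are illustrative of the MLIR tree of roughly that era, not the committed patch:

#include "mlir/Dialect/Arithmetic/IR/Arithmetic.h"
#include "mlir/Dialect/MemRef/IR/MemRef.h"
#include "mlir/Dialect/SparseTensor/IR/SparseTensor.h"
#include "mlir/Dialect/Tensor/IR/Tensor.h"
#include "mlir/Transforms/DialectConversion.h"

using namespace mlir;
using namespace mlir::sparse_tensor;

// Sketch of a codegen rule for sparse_tensor.expand.
struct SparseExpandConverterSketch : public OpConversionPattern<ExpandOp> {
  using OpConversionPattern::OpConversionPattern;
  LogicalResult
  matchAndRewrite(ExpandOp op, OpAdaptor adaptor,
                  ConversionPatternRewriter &rewriter) const override {
    Location loc = op.getLoc();
    ShapedType srcType = op.getTensor().getType().cast<ShapedType>();
    auto enc = getSparseTensorEncoding(srcType);
    Type eltType = srcType.getElementType();
    // Determine the size for access expansion: the innermost *stored*
    // dimension, mapped back to the *original* dimension that tensor.dim
    // expects (the toOrig() step discussed above).
    unsigned innermost = toOrigDim(enc, srcType.getRank() - 1);
    // Create the DimOp on the original tensor (op.getTensor, not
    // adaptor.getTensor) so it is legalized recursively and folds away
    // whenever the dimension is statically known.
    Value sz = rewriter.create<tensor::DimOp>(loc, op.getTensor(), innermost);
    // Allocate the expanded access pattern: values, filled flags, added
    // indices, plus an initial count of zero.
    auto dynShape = {ShapedType::kDynamicSize};
    Value values = rewriter.create<memref::AllocOp>(
        loc, MemRefType::get(dynShape, eltType), ValueRange{sz});
    Value filled = rewriter.create<memref::AllocOp>(
        loc, MemRefType::get(dynShape, rewriter.getI1Type()), ValueRange{sz});
    Value added = rewriter.create<memref::AllocOp>(
        loc, MemRefType::get(dynShape, rewriter.getIndexType()), ValueRange{sz});
    Value count = rewriter.create<arith::ConstantIndexOp>(loc, 0);
    rewriter.replaceOp(op, {values, filled, added, count});
    return success();
  }
};

Because the DimOp is built on the original tensor value, the same pass gets to legalize it again, and for statically shaped tensors it simply folds to a constant, which is the composability advantage noted above.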

Peiming added inline comments.Sep 8 2022, 8:46 AM
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorCodegen.cpp
467

Just out of curiosity: is it okay to create a DimOp here, given that DimOp is illegal and will need to be converted in the same pass as well? Will MLIR recursively legalize the DimOp? If so, why can't we do that in the conversion pass?

Also, will it have the same bug as https://reviews.llvm.org/D133472? You might want a toOrig conversion here as well.
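
On the legality question: the dialect conversion driver re-examines ops that patterns create, so a tensor.dim built inside this rule is itself converted or folded by whatever rule handles DimOp in the same pass. A minimal illustration of that mechanism, assuming the pass marks tensor.dim dynamically legal only for non-sparse sources (the actual target wiring in the patch may differ):

// Inside the codegen pass setup (illustrative, not the actual pass wiring).
void runOnOperation() override {
  mlir::MLIRContext *ctx = &getContext();
  mlir::ConversionTarget target(*ctx);
  // tensor.dim is legal only when it does not touch a sparse tensor, so a
  // DimOp created on a sparse tensor by another pattern is picked up and
  // rewritten by the registered DimOp rule within the same conversion.
  target.addDynamicallyLegalOp<mlir::tensor::DimOp>([](mlir::tensor::DimOp op) {
    return !mlir::sparse_tensor::getSparseTensorEncoding(op.getSource().getType());
  });
  // ... populate the codegen rules and call applyPartialConversion ...
}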

Peiming added inline comments.Sep 8 2022, 8:59 AM
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorCodegen.cpp
467

Uh oh, it seems my comment was just repeating that.

bixia updated this revision to Diff 458796.Sep 8 2022, 10:52 AM
bixia marked 6 inline comments as done.

Address review comments.

mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorCodegen.cpp
467

Put the comment back, and also fixed the bug that is fixed in D133472.

mlir/test/Dialect/SparseTensor/codegen.mlir
323

I added a CSC test with dynamic values, combining this with the request below. PTAL.

bixia marked 2 inline comments as done.Sep 8 2022, 10:57 AM
bixia added inline comments.
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorCodegen.cpp
467

Yes, DimOp will be legalized.

471

Good question. I was going to ask whether we care about code sharing between the conversion and codegen paths. I will refactor this after we have a decision.

bixia updated this revision to Diff 458822.Sep 8 2022, 12:19 PM

Modified the comment to align with D133512.

bixia updated this revision to Diff 458824.Sep 8 2022, 12:21 PM

Remove a temp variable.

aartbik accepted this revision.Sep 8 2022, 12:36 PM
aartbik added inline comments.
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorCodegen.cpp
435

this comment is not really valid, since you are not using the utility

please move over the comment from the "conversion" case instead, stating that we rewrite recursively on the *original* tensor (hence op.getTensor and not adaptor.getTensor)

mlir/test/Dialect/SparseTensor/codegen.mlir
333

do we have a pass that does the store/load forwarding ;-)

This revision is now accepted and ready to land.Sep 8 2022, 12:36 PM
aartbik added inline comments.Sep 8 2022, 12:47 PM
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorCodegen.cpp
435

scratch this, you already did that

bixia updated this revision to Diff 458834.Sep 8 2022, 12:50 PM

Update the tests to align with D133512.

bixia added inline comments.Sep 8 2022, 12:56 PM
mlir/test/Dialect/SparseTensor/codegen.mlir
333

Yes. Another reason for switching away from the support library.

This revision was landed with ongoing or failed builds.Sep 8 2022, 2:06 PM
This revision was automatically updated to reflect the committed changes.