This is an archive of the discontinued LLVM Phabricator instance.

[mlir][sparse] fix crash when using pure constant index in indexing mapping (fixes #61530)
ClosedPublic

Authored by Peiming on Mar 21 2023, 1:47 PM.

Diff Detail

Event Timeline

Peiming created this revision. Mar 21 2023, 1:47 PM
Herald added a project: Restricted Project.
Peiming requested review of this revision. Mar 21 2023, 1:47 PM
Peiming edited the summary of this revision. (Show Details) Mar 21 2023, 1:48 PM
Peiming added a reviewer: wrengr.
aartbik accepted this revision. Mar 21 2023, 1:49 PM

Can you add a test that fails before this fix and passes after it?

This revision is now accepted and ready to land. Mar 21 2023, 1:49 PM

If you add "(fixes #61530)" to the title, then when you submit, GitHub will automatically close the ticket and add a comment there linking to the commit.

wrengr added inline comments. Mar 21 2023, 2:13 PM
mlir/include/mlir/Dialect/SparseTensor/Utils/Merger.h
441

This would read more fluently as "If we have an"

wrengr added inline comments. Mar 21 2023, 2:17 PM
mlir/include/mlir/Dialect/SparseTensor/Utils/Merger.h
446–447

I think it would be better to update the ctor/initialization so that lvlToLoop has the right size in the first place, rather than using this workaround, since this looks like it's actually a problem of confusing srcLvl vs dstLvl.

Peiming updated this revision to Diff 507122. Mar 21 2023, 2:24 PM

add test cases.

Peiming retitled this revision from [mlir][sparse] fix crash when using pure constant index in indexing mapping to [mlir][sparse] fix crash when using pure constant index in indexing mapping (fixes #61530). Mar 21 2023, 2:24 PM
Peiming updated this revision to Diff 507123. Mar 21 2023, 2:26 PM

revert unintended change.

Peiming updated this revision to Diff 507163. Mar 21 2023, 3:46 PM

address comments.

Peiming updated this revision to Diff 507164. Mar 21 2023, 3:47 PM

revert unintended change.

Peiming updated this revision to Diff 507166. Mar 21 2023, 3:50 PM

revert unintended changes.

wrengr added inline comments. Mar 21 2023, 4:16 PM
mlir/include/mlir/Dialect/SparseTensor/Utils/Merger.h
270

You should add documentation for what this parameter is for. And you should rename the parameter to make clear what sort of "rank" it is the maximum of (srcLvlRank? dstLvlRank? srcDimRank? dstDimRank? other?). Based on the changes to Merger.cpp it looks like it's the maximum srcLvlRank, but based on the code in GenericOpSparsifier it looks instead like it's the maximum dstDimRank.

mlir/lib/Dialect/SparseTensor/Transforms/LoopEmitter.cpp
295–296

👍

mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp
1822–1823

Reiterating my comment from before, this code computes the maximum dstDimRank but Merger.cpp looks like it actually wants the maximum srcLvlRank. So there's some confusion that needs to be cleared up here.

I'm pretty sure you want the level-rank (regardless of the src-vs-dst issue), so you should use SparseTensorType::getLvlRank here instead.

mlir/lib/Dialect/SparseTensor/Utils/Merger.cpp
223

Just to connect with the previous comments: this code is why I think "maxRank" should actually be "maxLvlRank" (since lvlToLoop is indexed by (TensorId, Level)).

Peiming updated this revision to Diff 507178. Mar 21 2023, 4:31 PM
Peiming marked 2 inline comments as done.

address comments.

Peiming marked 2 inline comments as done. Mar 21 2023, 4:32 PM
Peiming added inline comments.
mlir/include/mlir/Dialect/SparseTensor/Utils/Merger.h
270

There is no src/dstLvlRank difference outside LoopEmitter. The bug is also unrelated to view-based reshaping.

mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp
1822–1823

There is no src/dstLvlRank difference outside LoopEmitter.

wrengr accepted this revision. Mar 21 2023, 4:39 PM
wrengr added inline comments.
mlir/include/mlir/Dialect/SparseTensor/Utils/Merger.h
259

"storage"

Peiming updated this revision to Diff 507180. Mar 21 2023, 4:42 PM

fix typo.

This revision was landed with ongoing or failed builds. Mar 21 2023, 4:45 PM
This revision was automatically updated to reflect the committed changes.