This is an archive of the discontinued LLVM Phabricator instance.

[mlir][sparse] implement simple reshaping (expand/collapse)
Closed · Public

Authored by aartbik on Jul 1 2022, 5:27 PM.

Details

Summary

This revision makes a start on implementing expand/collapse reshaping
for sparse tensors. When either the source or the destination is sparse
while the other is dense, the "cheap" dense reshape can be used in
combination with a conversion from or to a sparse tensor.
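
To illustrate (this sketch is not taken from the patch itself; the #CSR encoding, the value names, and the shapes are made up for the example), the rewrite amounts to pairing the dense reshape op with a sparse_tensor.convert on the sparse side:

  #CSR = #sparse_tensor.encoding<{ dimLevelType = [ "dense", "compressed" ] }>

  // Dense source, sparse destination: do the cheap dense reshape first,
  // then convert the reshaped result to sparse.
  %0 = tensor.expand_shape %dense [[0, 1]] : tensor<12xf64> into tensor<3x4xf64>
  %1 = sparse_tensor.convert %0 : tensor<3x4xf64> to tensor<3x4xf64, #CSR>

  // Sparse source, dense destination: convert to dense first,
  // then apply the cheap dense reshape.
  %2 = sparse_tensor.convert %sparse : tensor<3x4xf64, #CSR> to tensor<3x4xf64>
  %3 = tensor.collapse_shape %2 [[0, 1]] : tensor<3x4xf64> into tensor<12xf64>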

Note 1
Sparse-to-sparse reshaping is still TBD.

Note 2
In the long run, we may want to implement a "view" into a sparse tensor so that the operation remains cheap and does not require data shuffling.

Diff Detail

Event Timeline

aartbik created this revision. · Jul 1 2022, 5:27 PM
Herald added a project: Restricted Project. · Jul 1 2022, 5:27 PM
aartbik requested review of this revision. · Jul 1 2022, 5:27 PM
aartbik edited the summary of this revision.
aartbik updated this revision to Diff 442669. · Jul 6 2022, 12:46 PM

fixed badly worded comment

wrengr accepted this revision. · Jul 6 2022, 1:36 PM

As a short-term implementation, LGTM. Though in the long term we'll definitely want to avoid materializing the intermediate dense tensor whenever possible (should be doable by extending the implementation to handle any invertible AffineMap, instead of just permutations).

This revision is now accepted and ready to land. · Jul 6 2022, 1:36 PM

Though in the long term we'll definitely want to avoid materializing the intermediate dense tensor whenever possible

Note that the intermediate dense tensor does not really "materialize" in a costly manner, since we merely change the view into the already incoming or outgoing dense tensor. That is an extra O(1) (or, more precisely, O(dim)) operation: the same value array is reused and only new shape/stride metadata is put in place. The really annoying materialization that we want to avoid will come in the next step, where I also implement sparse-to-sparse reshaping.
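
(For concreteness, and not part of this revision: after bufferization the dense reshape boils down to a metadata-only view change along the lines of the sketch below; the value names and shapes are made up.)

  // The existing buffer is reinterpreted with new shape/stride metadata;
  // no element data is copied.
  %view = memref.collapse_shape %buf [[0, 1]] : memref<3x4xf64> into memref<12xf64>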

wrengr added a comment. · Jul 6 2022, 6:45 PM

Note that the intermediate dense tensor does not really "materialize" in a costly manner, since we merely change the view

Aha, so tensor::{Expand,Collapse}ShapeOp already handle the view stuff, nice :) Yeah, I couldn't quite follow how the tensor dialect implements those in general, so I wasn't sure.