This is an archive of the discontinued LLVM Phabricator instance.

[mlir][sparse][gpu] recognize SpMM cuSparse during sparsification
Closed, Public

Authored by aartbik on May 16 2023, 1:36 PM.

Event Timeline

aartbik created this revision. May 16 2023, 1:36 PM
Herald added a project: Restricted Project. May 16 2023, 1:36 PM
aartbik requested review of this revision. May 16 2023, 1:36 PM

I don't know the gpu dialect well enough to give a proper review, but other than a few nits everything looks sensible to me.

The only real concern I have (as mentioned in chat) is all the boilerplate of explicitly threading the async tokens through. There are some well-known design patterns for cleaning that up, and in the long term I definitely think we should implement that (and upstream it so that other users of the gpu dialect can reuse it); but I don't think that should block the current CL.
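For reference, a minimal sketch of one such pattern (purely hypothetical, not part of this CL): a small wrapper that remembers the last async token and wires it into each newly created async op, so call sites stop passing tokens around by hand. The class name TokenChain and its thread helper are made up for illustration; the sketch assumes each op was built with an async token result so getAsyncToken() is non-null.

```cpp
#include "mlir/Dialect/GPU/IR/GPUDialect.h"
#include "mlir/IR/Builders.h"

using namespace mlir;

// Hypothetical helper (illustrative only): keeps the current async token and
// threads it into each async op, advancing to the token that op yields.
class TokenChain {
public:
  explicit TokenChain(Value firstToken) : token(firstToken) {}

  // Registers an async op: adds the current token as a dependency and
  // advances the chain to the token the op produces. Assumes the op was
  // created with an async token result.
  template <typename OpTy>
  OpTy thread(OpTy op) {
    auto iface = cast<gpu::AsyncOpInterface>(op.getOperation());
    iface.addAsyncDependency(token);
    token = iface.getAsyncToken();
    return op;
  }

  // Current tail of the chain, e.g. to feed a final gpu.wait.
  Value get() const { return token; }

private:
  Value token;
};
```

Call sites would then read as chain.thread(builder.create<...>(...)) at each step, instead of repeating the token plumbing explicitly.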

mlir/lib/Dialect/SparseTensor/Transforms/SparseGPUCodegen.cpp
524

Nit: why not "nseA"?

525–527

Nit: personally I think it'd be cleaner to have these in the order szm, szk, szn so that the dim ops on a are together.

553

This is more of a long-term nit, rather than something that needs to be fixed in the current CL, but: I mislike the asymmetry of not using getAsyncToken here. Of course, the only way to fix that would be to adjust genSpMat to return gpu::AsyncOpInterface in lieu of Operation*, which is doable since both gpu::Create{Coo,Csr}Op support that interface; however, I'm not sure if that change would cause other problems for the various genSpMat call sites.
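To make that concrete, a rough sketch of what the adjusted helper and its call sites might look like. The parameter list below is abbreviated and hypothetical (the real genSpMat takes more arguments); only the return-type change is the point.

```cpp
#include "mlir/Dialect/GPU/IR/GPUDialect.h"
#include "mlir/IR/Builders.h"

using namespace mlir;

// Hypothetical, abbreviated signature: return the async interface instead of
// a raw Operation*, so every caller can query the token uniformly.
gpu::AsyncOpInterface genSpMat(OpBuilder &builder, Location loc, Type handleTp,
                               Type tokenTp, Value token, Value rowA,
                               Value colA, Value valA, bool isCOO);

// Call site today (positional access to whichever result holds the token):
//   Operation *spGenA = genSpMat(...);
//   Value token = spGenA->getResult(1); // assuming the token is the 2nd result
//
// Call site with the interface (symmetric with the other async ops):
//   gpu::AsyncOpInterface spGenA = genSpMat(...);
//   Value token = spGenA.getAsyncToken();
```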

aartbik marked 3 inline comments as done. May 18 2023, 10:08 AM

Thanks, Wren. And yes, the code duplication will be cleaned up somehow, but I did not want to optimize too early, before we have a better idea of what direction we are going.
As we recognize the other cuSparse methods, most of this should indeed be moved into more general utils.

mlir/lib/Dialect/SparseTensor/Transforms/SparseGPUCodegen.cpp
524

Yeah, as you know, I too prefer nse over nnz, but cuSparse follows the nnz convention.
Since that is not visible here, I agree using nse is better.

525–527

made it so

553

Yeah, agreed, there is some redesign lingering there.

aartbik updated this revision to Diff 523430. May 18 2023, 10:11 AM
aartbik marked 3 inline comments as done.

rebased with main, addressed comments

aartbik added inline comments. May 18 2023, 11:11 AM
mlir/test/Dialect/SparseTensor/GPU/gpu_matmul_lib.mlir
4

note to self: rebase with the renaming in main when ready

aartbik updated this revision to Diff 523915. May 19 2023, 1:30 PM

rebased with main, and renamed lvlTypes

Peiming accepted this revision. May 19 2023, 5:21 PM
This revision is now accepted and ready to land. May 19 2023, 5:21 PM