Diff Detail
- Repository: rG LLVM Github Monorepo
Event Timeline
I don't know the gpu dialect well enough to give a proper review, but other than a few nits everything looks sensible to me.
The only real concern I have (as mentioned in chat) is all the boilerplate of explicitly threading the async tokens through. There are some well-known design patterns for cleaning that up, and in the long term I definitely think we should implement one (and upstream it so that other users of the gpu dialect can reuse it); but I don't think that should block the current CL.
mlir/lib/Dialect/SparseTensor/Transforms/SparseGPUCodegen.cpp
- Line 524: Nit: why not "nseA"?
- Lines 525–527: Nit: personally I think it'd be cleaner to have these in the order szm, szk, szn, so that the dim ops on a are together.
- Line 553: This is more of a long-term nit, rather than something that needs to be fixed in the current CL, but: I mislike the asymmetry of not using getAsyncToken here. Of course, the only way to fix that would be to adjust genSpMat to return gpu::AsyncOpInterface in lieu of Operation*. That is doable, since both gpu::Create{Coo,Csr}Op support that interface; however, I'm not sure whether that change would cause other problems for the various genSpMat call sites.
Thanks, Wren. And yes, the code duplication will be cleaned up somehow, but I did not want to optimize too early, before we have a better idea of what direction we are going.
But as we add recognition of the other cuSparse methods, most of this should indeed move into more general utils.
mlir/lib/Dialect/SparseTensor/Transforms/SparseGPUCodegen.cpp
- Line 524: Yeah, as you know I prefer nse over nnz too, but cuSparse follows the nnz convention.
- Lines 525–527: Made it so.
- Line 553: Yeah, agreed, there is some redesign lingering there.
mlir/test/Dialect/SparseTensor/GPU/gpu_matmul_lib.mlir
- Line 4: Note to self: rebase with the renaming in main when ready.