Diff Detail
- Repository: rG LLVM Github Monorepo
Event Timeline
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp
- 590: using the "zero" data value itself carries a bit of a risk: a sparse data structure with a few explicitly stored zeros would bail out of this loop too soon. Having a 0/1 result and passing the value back through another ref parameter seems safer.
- 616: you cannot just pass "indices" to the next call and use it here; you will need the IR to load the contents returned through the memref by the getNext() call, using explicit memref::LoadOp ops on the elements in the memref, and pass those to the StoreOp.
- 619: You are replacing the op (which returns a tensor) with the result of an alloc (which is a memref).
- 619: This replacement will also need some legality check changes. Up to now we were replacing sparse tensors with opaque pointers, and the checks/rewriting did all the work. But now we have dense_tensor = convert ..., and the mechanism will need some "love" to accept the rewriting even though the types were already legal to start with.
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp
- 590: Gross; but yes, you're right. I think I'll leave the commentary here as-is, however, since, like the field accesses on line 452 and the commentary for other branches, it reflects what the C++ code in the ExecutionEngine does rather than what the generated MLIR code does; and there's no ambiguity about what the getNext() method returns, even though IMPL_COO_GETNEXT introduces the problem you mention.
Everything should now be ready for review. I've added integration and codegen tests, fixed various infelicities, and rebased.
N.B., I'm planning to add support for dynamic sizes, but I want to do that in a separate differential.
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp
- 211: "returns" (to be consistent with "Generates"); also "as" instead of "at"? (2x)
- 213: is the "inline" really important (here and below)? Internal linkage already gives the compiler sufficient room to make that decision using its built-in heuristics.
- 358: we typically do not show this much detail of the C++ lib in this file.
- 365: This is not a style we use in MLIR; especially for an internally linked method, and compared to the surrounding methods, it feels a bit heavy.
- 366: let's not document SparseUtils inside this file.
- 428: not in this revision yet? It seems a relatively minor addition (and then we are done for all cases!).
- 442: same
- 593–595: remove this; it more or less applies to all method calls here, and it will fall under the general umbrella of perhaps moving to 100% codegen over support lib usage...
- 599: we don't emit errors when rewriting rules fail to apply.
- 608: all this block scoping makes the code lengthier than it could be. I would either break it up into methods where that makes sense, or otherwise just take the scoping hit for readability.
- 629: shouldn't we bail out if there are dynamic sizes?
mlir/lib/ExecutionEngine/SparseUtils.cpp
- 166–167: this class adds an enormous amount of code for a very thin iterator; how about just having a very simple start()/next() iteration inside the COO class itself?
- 573: assert(action == kEmpty); disappeared in your rewrite.
- 573: this is of course inside a macro now, but LLVM wants braces {} on all branches if one of them is braced.
- 663: if you use a shorter name, perhaps it fits on one line.
- 775–784: I'd prefer not to have this function at all; but if we keep it, it should go into CRunnerUtils.cpp, since we have some other printing utilities there and this one is not related to sparse.
mlir/test/Dialect/SparseTensor/conversion_sparse2dense.mlir
- 17: Ah, you made this much more accurate checking than conversion.mlir does (good).

mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_conversion_sparse2dense.mlir
- 46: from dense to sparse
- 65–70: I think you are overthinking this. I agree that the silent exit(1) is not good (so perhaps we should just go back to fully FileChecked cases); I actually prefer the fully FileChecked cases, just relying on printing and verifying the output.
- 133–134: yeah, we need to do that or asan will fail. The proper way, I think: (1) memref.buffer_cast from tensor to memref
Ah, I saw this after writing my comments. So you can ignore the part where I asked about that ;-)
Addressing comments
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp
- 428: It's done in D112674.
- 629: Done in D112674.
- 643: Ha, actually I was right(-ish) the first time. We want the destination encoding here, so we don't apply the dimOrdering permutation twice. The problem was that I was passing a nullptr rather than explicitly constructing the SparseTensorEncodingAttr.
mlir/lib/ExecutionEngine/SparseUtils.cpp
- 775–784: Yeah, I'd rather not have this function either. Unfortunately, when I tried using the CRunnerUtils.cpp functions that vector.print does, I could never get it to link right: it either complained about redefining a symbol, or about using an undefined symbol. As for implementing it in the integration test itself, I can't seem to find a way to define string values (not attributes) to pass to fputs(). Moved to CRunnerUtils.cpp for now. Will try to rip it out in a future differential.

mlir/test/Dialect/SparseTensor/conversion_sparse2dense.mlir
- 17: I'm not sure I follow. I was just trying to do the same as conversion.mlir (but in the opposite direction, naturally); and elsewhere you suggested breaking things out into a separate test rather than reusing the ones already there.
mlir/lib/ExecutionEngine/SparseUtils.cpp
- 775–784: How about just leaving it at the exit(1) for now and thinking about this in a future differential?
mlir/test/Dialect/SparseTensor/conversion_sparse2dense.mlir
- 17: Ok, fair enough. I was suggesting having one sparse->dense check in conversion.mlir in the more concise style of that test, but having this more rigid test is indeed sufficient. You can ignore the suggestion ;-)
mlir/lib/ExecutionEngine/CRunnerUtils.cpp
- 47–58 (On Diff #382862): Let's not do this at all; let's go with just using CHECKs: https://reviews.llvm.org/D112688
Yeah, looking good. A few last nits before we submit...
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp
- 439: "Inserts" (I usually use the s-form of a verb in the top-level comment, but the imperative form in the inlined code).
- 446–447: is this comment still relevant? I think we can safely remove it.
- 627: you still have some block scoping left?

mlir/lib/ExecutionEngine/SparseUtils.cpp
- 96: yeah, this is awesome!
- 125: iteratorLocked = false?
- 770: feel free to rename this to more intuitive names in a follow-up revision too, btw.
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp
- 601–612: how about removing this comment block and simply using:
      // Setup a synthetic all-dense, no-permutation encoding
      // for the dense destination.
      SparseTensorEncodingAttr::get(
          op->getContext(),
          SmallVector<SparseTensorEncodingAttr::DimLevelType>(
              rank, SparseTensorEncodingAttr::DimLevelType::Dense),
          AffineMap(), 0, 0);
  We don't need anything copied from src here.
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp
- 601–612: We need the (ptrTp, indTp, valTp) values for _mlir_ciface_newSparseTensor to enter the right CASE. But I can abbreviate the commentary.
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp
- 601–612: this last one may have crossed. I just feel L601–612 takes too much real estate explaining what could be a one-liner.
- 627: Ok, acceptable ;-)

mlir/lib/ExecutionEngine/SparseUtils.cpp
- 638: tensor is a more consistent name with IMPL3.
- 647: oh, we will leak memory right now! When iteration is done, we should release the COO tensor:
      if (elem == nullptr) { delete iter; return false; }
  and document that the tensor can no longer be used once getNext returns false.

mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_conversion_sparse2dense.mlir
- 80–82: this is taken from the original sparse_conversion.mlir; (1) remove this comment, or