Repository: rG LLVM Github Monorepo
Build Status: Buildable 126660 Build 184003: arc lint + arc unit
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp | ||
---|---|---|
445 | Technically, we also fall into this branch for the strange dense-to-dense conversion. We will typically have folded those away, but I would not completely rely on that always having happened, and would defend against it in the code. | |
451–453 | Note that in the context of another project, we may migrate library code to actual codegen (which potentially has a smaller memory footprint and allows for "unforeseen" type combinations). Such a migration may take care of all such performance concerns. | |
459 | I think we need a subtle rewriting of the new-call utility, since we need the sparse encoding for some info, but the identity permutation for the "new" tensor (i.e., the dense result) | |
mlir/lib/ExecutionEngine/SparseUtils.cpp | ||
145–146 | this sentence does not flow quite well, or am I reading it wrong? | |
602–603 | We really should not need permutation here. If you call toCOO with ID permutation, it restores the indices to original order. Note that toCOO takes the permutation of the *target*, and internally restores permutation, if there was one from *source* (the internally stored inverse permutation). So if you call toCOO with ID perm, you don't need the perm here anymore, since indices are in the natural "dense" order MLIR expects. |
Also, please add tests for this
(1) in mlir/test/Dialect/SparseTensor conversion, a CHECK test on the expected loop structure (see examples there)
(2) in mlir/test/Integration/Dialect/SparseTensor/CPU, add a new "end-to-end" test that does a dense->sparse->dense roundtrip
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp | ||
---|---|---|
445 | Just to be clear, that should only ever happen when the source is a "sparse" tensor that happens to use dense storage for all dimensions, right? Which is to say, the destination is always a bog-standard dense tensor, right? If so then the code should still work; if not then we'll have to figure out how to tighten up the guards for detecting the different cases. I'll update the commentary (on the assumption that the answer to the first question is yes; of course, if that's true, then shouldn't the "sparse=>sparse" case have the same caveat?) | |
451–453 | Yep, that's what I was thinking re "good enough for now" :) I mainly added the comment since when discussing the design with Tatiana, she raised some concerns about the performance implications of the method/function calls. For our current goals, I can't imagine this branch would be taken often enough to constitute a performance bottleneck. (And whenever it does, it's easy enough to fix at that point.) | |
459 | Yeah, I've been mulling over a few ways to clean genNewCall up. Did you want me to do that in/before this differential, or is it okay to do afterwards? | |
mlir/lib/ExecutionEngine/SparseUtils.cpp | ||
145–146 | I'll try rewording | |
602–603 | Oh good. I thought that's how things worked, but I wasn't quite certain. |
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp | ||
---|---|---|
445 | No, I really mean no encoding at all. There is a subtle difference between an un-annotated tensor and a tensor with all-dense annotations. All conversions work for the all-dense annotated case (it is treated as a sort of sparse tensor). But the logic for falling into a branch based on encDst, !encDst, encSrc, !encSrc (so four truth values) fell into this branch for two cases, while you only implemented sparse->dense. So you will have to add one if-test and return failure. |
Re-updating commentary about covering dense=>sparse not dense=>dense. Also some preliminary WIP towards adding tests.
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp | ||
---|---|---|
445 | Got it |
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp | ||
---|---|---|
459 | Good catch :) That led to an utterly inscrutable crash stacktrace
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp | ||
---|---|---|
448 | using the "zero" data value itself carries the risk that a sparse data structure with a few explicitly stored zeros will bail out of this loop too soon; having a 0/1 result and passing the value through another ref parameter seems safer | |
474 | you cannot just pass "indices" to the next call and use it here; you will need the IR to load the contents returned through the memref by the getNext() call, using explicit memref::LoadOp ops on the elements in the memref, and pass those to the StoreOp | |
477 | You are replacing the op (which returns a tensor) with the result of an alloc (which is a memref). | |
477 | This replacement will also need some legality check changes. Up to now, we were replacing sparse tensors with opaque pointers, and the checks/rewriting did all the work. But now we have dense_tensor = convert ..., and the mechanism will need some "love" to accept the rewriting even though the types were already legal to start with
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp | ||
---|---|---|
448 | Gross; but yes, you're right. I think I'll leave the commentary here as-is, however, since —like the field accesses on line 452, and like the commentary for other branches— it's reflecting more what the C++ code in the ExecutionEngine does, rather than reflecting what the generated MLIR code does; and there's no ambiguity about what the getNext() method returns, even though IMPL_COO_GETNEXT introduces the problem you mention. |
Everything should now be ready for review. I've added integration and codegen tests, fixed various infelicities, and rebased.
N.B., I'm planning to add support for dynamic sizes, but I want to do that in a separate differential
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp | ||
---|---|---|
125 | returns (to be consistent with Generates); also "as" instead of "at"? (2x) | |
127 | is the "inline" really important (here and below)? Internal linkage gives the compiler sufficient room to make that decision using built-in heuristics | |
263 | we typically do not show that much detail of the C++ lib in this file | |
270 | This is not a style we use in MLIR; especially for an internally linked method, and compared to the surrounding methods, it feels a bit heavy | |
271 | let's not document SparseUtils inside this file | |
338 | not in this revision yet? seems a relatively minor addition (and then we are done for all cases!) | |
352 | same | |
451–453 | remove this; it more or less applies to all method calls here, and it will fall under the general umbrella of perhaps moving to 100% codegen over support-lib usage | |
457 | we don't emit errors when rewriting rules fail to apply | |
466 | all this block scoping makes the code lengthier than it could be. I would either break this up into methods where it makes sense, or otherwise just take the scoping hit for readability | |
487 | shouldn't we bail out if there are dynamic sizes? | |
mlir/lib/ExecutionEngine/SparseUtils.cpp | ||
150–151 | this class adds an enormous amount of code for a very thin iterator; how about just having the very simple start()/next() iteration inside the COO class itself? | |
556 | assert(action == kEmpty); disappeared in your rewrite | |
556 | this is of course inside a macro now, but LLVM wants braces {} on all branches if one of them is braced | |
643 | if you use a shorter name, perhaps it fits on one line | |
758–767 | I prefer not to have this function at all; but if we keep it, it should go to CRunnerUtils.cpp, since we have some other printing utilities there and this is not related to sparse | |
mlir/test/Dialect/SparseTensor/conversion_sparse2dense.mlir | ||
16 ↗ | (On Diff #382805) | Ah, you made this checking much more accurate than conversion.mlir does (good).
mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_conversion_sparse2dense.mlir | ||
45 ↗ | (On Diff #382805) | from dense to sparse |
64–69 ↗ | (On Diff #382805) | I think you are overthinking this. I agree that the silent exit(1) is not good (so perhaps we should just go back to fully FileChecked cases); but again, I actually prefer the fully FileChecked cases, just relying on printing and verifying the output |
132–133 ↗ | (On Diff #382805) | yeah, we need to do that or asan will fail. Proper way, I think: (1) memref.buffer_cast from tensor to memref |
Ah, I saw this after writing my comments. So you can ignore the part where I asked about that ;-)
Addressing comments
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp | ||
---|---|---|
338 | It's done in D112674. | |
459 | Ha, actually I was right(-ish) the first time. We want the destination encoding here, so we don't apply the dimOrdering permutation twice. The problem was that I was passing a nullptr rather than explicitly constructing the SparseTensorEncodingAttr | |
487 | Done in D112674 | |
mlir/lib/ExecutionEngine/SparseUtils.cpp | ||
758–767 | Yeah, I'd rather not have this function either. Unfortunately, when I tried using the CRunnerUtils.cpp functions that vector.print does, I could never get it to link right: it either complained about re-defining a symbol, or about using an un-defined symbol. As for implementing it in the integration test itself, I can't seem to find a way to define string values (not attributes) for passing to fputs(). Moved to CRunnerUtils.cpp for now. Will try to rip it out in a future differential. | |
mlir/test/Dialect/SparseTensor/conversion_sparse2dense.mlir | ||
16 ↗ | (On Diff #382805) | I'm not sure I follow. I was just trying to do the same as conversion.mlir (but in the opposite direction, naturally); and elsewhere you suggested breaking things out into a separate test rather than reusing the ones already there. |
mlir/lib/ExecutionEngine/SparseUtils.cpp | ||
---|---|---|
758–767 | How about just leaving it at the exit(1) for now, and thinking about this in a future differential. | |
mlir/test/Dialect/SparseTensor/conversion_sparse2dense.mlir | ||
16 ↗ | (On Diff #382805) | Ok, fair enough. I was suggesting to have one sparse->dense check in conversion.mlir in the more concise style of that test, but having these more rigid tests is indeed sufficient. You can ignore the suggestion ;-)
mlir/lib/ExecutionEngine/CRunnerUtils.cpp | ||
---|---|---|
47–58 ↗ | (On Diff #382862) | Let's not do this at all, but let's go with just using CHECKs https://reviews.llvm.org/D112688 |
yeah, looking good. A few last nits before we submit...
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp | ||
---|---|---|
349 | Inserts (I usually use the s-form of a verb in the top level comment, but the imperative form in the inlined code) | |
356–357 | is this comment still relevant? I think we can safely remove it? | |
485 | you still have some block scoping left? | |
mlir/lib/ExecutionEngine/SparseUtils.cpp | ||
96 | yeah, this is awesome! | |
120 | iteratorLocked = false? | |
753 | feel free to rename this into more intuitive names in a follow up revision too, btw |
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp | ||
---|---|---|
459–470 | how about removing this comment block and simply using // Setup a synthetic all-dense, no-permutation encoding for the dense destination. SparseTensorEncodingAttr::get( op->getContext(), SmallVector<SparseTensorEncodingAttr::DimLevelType>( rank, SparseTensorEncodingAttr::DimLevelType::Dense), AffineMap(), 0, 0); we don't need anything copied from src here. |
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp | ||
---|---|---|
459–470 | We need the (ptrTp,indTp,valTp) values for _mlir_ciface_newSparseTensor to enter the right CASE. But I can abbreviate the commentary |
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp | ||
---|---|---|
459–470 | this last one may have crossed. I just feel L601-612 takes too much real estate explaining what could be a one-liner | |
485 | Ok, acceptable ;-) | |
mlir/lib/ExecutionEngine/SparseUtils.cpp | ||
603 | tensor is a bit more consistent name with IMPL3 | |
612 | oh, we will leak memory right now! When iteration is done, we should release the COO tensor: if (elem == nullptr) { delete iter; return false; } and document that tensor can no longer be used once getNext returns false | |
mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_conversion_sparse2dense.mlir | ||
79–81 ↗ | (On Diff #383144) | this is taken from the original sparse_conversion.mlir; (1) remove this comment, or |