
[mlir][sparse] Implementing sparse=>dense conversion.

Authored by wrengr on Sep 29 2021, 4:47 PM.

Diff Detail

Event Timeline


(more debugging attempt; not much progress)

aartbik added inline comments.Oct 5 2021, 1:40 PM

Using the "zero" data value itself carries the risk that a sparse data structure with a few explicitly stored zeros will bail out of this loop too soon; returning a 0/1 result and passing the value through another reference parameter seems safer.
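A minimal sketch of that 0/1-result-plus-reference-parameter shape (a hypothetical vector-backed iterator for illustration, not the actual SparseUtils code):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical vector-backed iterator. Returning a bool success flag instead
// of the value itself means an explicitly stored zero can never be mistaken
// for "end of iteration".
struct MiniCOOIterator {
  const std::vector<double> &values;
  std::size_t pos = 0;

  // Returns true and writes the element through `val` while elements remain;
  // returns false only when the data is exhausted, even if stored values
  // happen to be 0.0.
  bool getNext(double &val) {
    if (pos >= values.size())
      return false;
    val = values[pos++];
    return true;
  }
};
```

With the value-as-sentinel scheme, the `0.0` below would terminate iteration early; here it is visited like any other element.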


You cannot just pass "indices" to the next call and use it here; you will need the IR to load the contents returned through the memref by the getNext() call, using explicit memref::LoadOp ops on the elements of the memref, and pass those to the StoreOp.


You are replacing the op (which returns a tensor) with the result of an alloc (which is a memref).
That type mismatch will fail. You need a buffer cast in between.


This replacement will also need some legality-check changes. Up to now, we were replacing sparse tensors with opaque pointers, and the checks/rewriting did all the work.

But now we have

dense_tensor = convert ....
return dense_tensor

and the mechanism will need some "love" to make it accept the rewriting even though the types were already legal to start with

wrengr updated this revision to Diff 379851.Oct 14 2021, 2:45 PM
wrengr marked 2 inline comments as done.

rebased, and factored out D111763 and D111766

wrengr added inline comments.Oct 14 2021, 2:57 PM

Gross; but yes, you're right.

I think I'll leave the commentary here as-is, however, since —like the field accesses on line 452, and like the commentary for other branches— it's reflecting more what the C++ code in the ExecutionEngine does, rather than reflecting what the generated MLIR code does; and there's no ambiguity about what the getNext() method returns, even though IMPL_COO_GETNEXT introduces the problem you mention.

wrengr marked an inline comment as done.Oct 14 2021, 2:57 PM
wrengr updated this revision to Diff 382805.Oct 27 2021, 2:54 PM

Everything should now be ready for review. I've added integration and codegen tests, fixed various infelicities, and rebased.

N.B., I'm planning to add support for dynamic sizes, but I want to do that in a separate differential

aartbik added inline comments.Oct 27 2021, 3:40 PM

returns (to be consistent with Generates)

also "as" instead of "at"? (2x)


Is the "inline" really important (here and below)? Internal linkage gives the compiler sufficient room to make that decision using built-in heuristics.


we typically do not show that much detail of the C++ lib in this file


This is not a style we use in MLIR, and especially for an internally linked method, and compared to the surrounding methods, it feels a bit heavy


let's not document SparseUtils inside this file


not in this revision yet? seems a relatively minor addition (and then we are done for all cases!)




remove this; it more or less applies to all method calls here, and it will fall under the general umbrella of perhaps moving to 100% codegen over support lib usage....


we don't emit errors when rewriting rules fail to apply


all this block scoping makes the code more lengthy than it could be

I would either break this up in methods where it makes sense, or otherwise just take the scoping hit for readability


shouldn't we bail out if there are dynamic sizes?
or, better yet, just add those in this revision


this class adds an enormous amount of code for a very thin iterator

how about just having the very simple start()/next() iteration inside the COO class itself?
That way, we can also assert an error if you try to insert while iterating
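That suggestion might look roughly like this (a hedged sketch; `MiniCOO`, `startIterator`, and `iteratorLocked` are illustrative names, not the actual SparseUtils API):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical COO container that owns its own iteration state, so it can
// assert if an element is inserted mid-iteration.
template <typename V>
class MiniCOO {
public:
  void add(V v) {
    assert(!iteratorLocked && "cannot insert while iterating");
    elements.push_back(v);
  }
  void startIterator() {
    iteratorLocked = true;
    pos = 0;
  }
  // Visits the next element; returns false (and unlocks) once exhausted.
  bool getNext(V &v) {
    if (pos >= elements.size()) {
      iteratorLocked = false;
      return false;
    }
    v = elements[pos++];
    return true;
  }

private:
  std::vector<V> elements;
  std::size_t pos = 0;
  bool iteratorLocked = false;
};
```

Because the lock is dropped once `getNext` returns false, inserting again after a full iteration remains legal.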


assert(action == kEmpty); disappeared in your rewrite


This is of course inside a macro now, but LLVM wants braces {} on all branches if one of them is braced.
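For reference, the LLVM style rule being invoked: if one branch of an if/else needs braces, every branch gets them (a generic sketch, unrelated to the macro itself):

```cpp
#include <cassert>

// Per LLVM style, the multi-statement branch forces braces, so the
// single-statement branch is braced as well for consistency.
int classify(int x) {
  int result;
  if (x < 0) {
    x = -x;
    result = -1;
  } else {
    result = 1;
  }
  return result;
}
```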


if you use a shorter name, perhaps it fits on one line


I prefer not to have this function at all; but if we keep it, it should go to CRunnerUtils.cpp since we have some other printing utilities there

but this is not related to sparse


Ah, you made this much more accurate checking than conversion.mlir does (good). But let's also add a check in that file, in the more informal style used there.


from dense to sparse


I think you are overthinking this.

I agree that the silent exit(1) is not good (so perhaps we should just go back to fully FileChecked cases). But how about just printing -12345 on error and exiting, and then also adding a CHECK-NOT: -12345 or something like that?

But again, I actually prefer the fully FileChecked cases, just relying on printing and verifying the output.


Yeah, we need to do that or ASan will fail.

The proper way, I think:

(1) memref.buffer_cast from tensor to memref
(2) dealloc the memref

N.B., I'm planning to add support for dynamic sizes, but I want to do that in a separate differential

Ah, I saw this after writing my comments. So you can ignore the part where I asked about that ;-)

wrengr updated this revision to Diff 382836 (edited).Oct 27 2021, 4:07 PM

Added child differential D112674 for handling dynamic sizes

wrengr updated this revision to Diff 382862.Oct 27 2021, 5:24 PM
wrengr marked 19 inline comments as done.

Addressing comments


It's done in D112674.


Done in D112674


Ha, actually I was right(-ish) the first time. We want the destination encoding here, so we don't apply the dimOrdering permutation twice. The problem was that I was passing a nullptr rather than explicitly constructing the SparseTensorEncodingAttr.


Yeah, I'd rather not have this function either. Unfortunately, when I tried using the CRunnerUtils.cpp functions that vector.print does, I could never get it to link right: it either complained about redefining a symbol or about using an undefined symbol. As for implementing it in the integration test itself, I can't seem to find a way to define string values (not attributes) to pass to fputs().

Moved to CRunnerUtils.cpp for now. Will try to rip it out in a future differential.


I'm not sure I follow. I was just trying to do the same as conversion.mlir (but in the opposite direction, naturally); and elsewhere you suggested breaking things out into a separate test rather than reusing the ones already there.

aartbik added inline comments.Oct 27 2021, 6:42 PM

How about just leaving it at the exit(1) for now and thinking about this in a future differential, rather than introducing something we want to rip out again anyway?


Ok, fair enough. I was suggesting having one sparse->dense check in conversion.mlir in the more concise style of that test, but having these more rigid tests is indeed sufficient. You can ignore the suggestion ;-)

aartbik added inline comments.Oct 27 2021, 8:42 PM
Lines 47–58 (on Diff #382862)

Let's not do this at all; let's go with just using CHECKs. I moved the sparse_conversion test back to this format as well.
wrengr updated this revision to Diff 383129.Oct 28 2021, 12:45 PM
wrengr marked 7 inline comments as done.

Addressing comments

Yeah, looking good. A few last nits before we submit...



(I usually use the s-form of a verb in the top level comment, but the imperative form in the inlined code)


is this comment still relevant? I think we can safely remove it?


You still have some block scoping left? None of these really release stuff early to keep memory use lower, so I would opt for readability over block scoping.


yeah, this is awesome!


iteratorLocked = false?


feel free to rename this into more intuitive names in a follow up revision too, btw

wrengr updated this revision to Diff 383144.Oct 28 2021, 1:25 PM
wrengr marked 6 inline comments as done.

Addressing nits


IMO, the blocks improve readability rather than detracting from it.


Will do

aartbik added inline comments.Oct 28 2021, 1:34 PM

how about removing this comment block and simply using

// Setup a synthetic all-dense, no-permutation encoding for the dense destination.
encDst = SparseTensorEncodingAttr::get(
    op->getContext(),
    SmallVector<SparseTensorEncodingAttr::DimLevelType>(
        rank, SparseTensorEncodingAttr::DimLevelType::Dense),
    AffineMap(), 0, 0);

we don't need anything copied from src here.

wrengr marked an inline comment as done.Oct 28 2021, 1:59 PM
wrengr added inline comments.

We need the (ptrTp, indTp, valTp) values for _mlir_ciface_newSparseTensor to enter the right CASE. But I can abbreviate the commentary.

wrengr updated this revision to Diff 383158.Oct 28 2021, 1:59 PM
wrengr marked an inline comment as done.

Addressing nits

aartbik added inline comments.Oct 28 2021, 2:09 PM

This last one may have crossed. I just feel L601–612 takes too much real estate explaining what could be a one-liner.


Ok, acceptable ;-)


tensor is a bit more consistent name with IMPL3


Oh, we will leak memory right now!

When iteration is done, we should release the COO tensor.
The easiest way would be to do this:

if (elem == nullptr) {
  delete iter;
  return false;
}

and document that tensor can no longer be used once getNext returns false
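Filled out, the suggested delete-on-exhaustion contract could look like this (hypothetical standalone types for illustration; in the patch the logic lives behind the IMPL_COO_GETNEXT macro):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct Elem {
  double value;
};

// Hypothetical COO iterator state, heap-allocated by the conversion.
struct CooIter {
  std::vector<Elem> elems;
  std::size_t pos = 0;
  const Elem *next() { return pos < elems.size() ? &elems[pos++] : nullptr; }
};

// Returns false and frees `iter` once iteration is done; per the documented
// contract, callers must not use `iter` again after getNext returns false.
bool getNext(CooIter *iter, double &val) {
  const Elem *elem = iter->next();
  if (elem == nullptr) {
    delete iter; // Release the COO tensor when iteration is done.
    return false;
  }
  val = elem->value;
  return true;
}
```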


This is taken from the original sparse_conversion.mlir, but with your addition it feels like we should either

(1) remove this comment, or
(2) add a comment for tensor4, tensor5, and tensor6 as well

aartbik added inline comments.Oct 28 2021, 2:12 PM

Oh yeah, right you are. Okay, then indeed just a bit less comment, but same code ;-)


This is the most important issue, since without this fix, ASan will break the test.

wrengr marked 7 inline comments as done.Oct 28 2021, 2:46 PM
wrengr updated this revision to Diff 383170.Oct 28 2021, 2:47 PM

Fixing memory leak

aartbik accepted this revision.Oct 28 2021, 3:02 PM

Ship it, Wren!

This revision is now accepted and ready to land.Oct 28 2021, 3:02 PM
This revision was automatically updated to reflect the committed changes.