This is an archive of the discontinued LLVM Phabricator instance.

[mlir][sparse] use loop emitter to generate loop in sparsification
ClosedPublic

Authored by Peiming on Oct 18 2022, 9:58 AM.

Diff Detail

Event Timeline

Peiming created this revision.Oct 18 2022, 9:58 AM
Herald added a project: Restricted Project. · View Herald Transcript
Peiming requested review of this revision.Oct 18 2022, 9:58 AM

Be sure to rebase over D136123, since that made a lot of changes to Sparsification.cpp

mlir/lib/Dialect/SparseTensor/Transforms/CodegenUtils.cpp
137

Why not use isZeroRankedTensorOrScalar here instead? If it's intentional, then you should add a comment explaining why rank-zero tensors don't continue.

219–222

Why not use MutableArrayRef instead of SmallVectorImpl?

231

Thanks for the clarifying rename :)

295

Again, it would probably be better to use MutableArrayRef unless you really need it to be a SmallVectorImpl.

mlir/lib/Dialect/SparseTensor/Transforms/CodegenUtils.h
430–444

Should move all these fields to the bottom of the class together with the other fields

466

Should move this field to the bottom of the class, together with all the other fields.

Peiming updated this revision to Diff 468954.Oct 19 2022, 10:16 AM
Peiming marked 5 inline comments as done.

address comments from Wren + rebase

mlir/lib/Dialect/SparseTensor/Transforms/CodegenUtils.cpp
219–222

Good suggestion! Simply because I did not know there is a MutableArrayRef.

It is probably a better way, as I can then have a default empty value for the parameter. I will try!
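For context, a minimal sketch of the difference being discussed (function and variable names here are illustrative, not from the patch): MutableArrayRef binds to any contiguous buffer and permits a default empty value, whereas a SmallVectorImpl& parameter ties callers to SmallVector.

#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/SmallVector.h"

// Hypothetical helper: binds to any contiguous buffer, mutates in place,
// and permits a default empty argument.
static void clearAll(llvm::MutableArrayRef<int> vals = {}) {
  for (int &v : vals)
    v = 0; // cannot resize or reallocate, only mutate elements
}

// Usage: a SmallVector converts implicitly, so callers need not change.
// llvm::SmallVector<int, 4> sv = {1, 2, 3};
// clearAll(sv);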

231

;-)

I got somewhat deeper into this revision! I still want to do a very careful 1:1 comparison of the old sparsification code with the new emitter, but I am confident I will get to that early next week!

mlir/include/mlir/Dialect/SparseTensor/Utils/Merger.h
235 ↗(On Diff #468990)

we have the syntheticTensor field for that!

(btw you can send this single file change out by itself for faster review!
perhaps add a unit test to cover the new code in that revision)

mlir/lib/Dialect/SparseTensor/Transforms/CodegenUtils.h
282

It is still very confusing why L243 up to the new emitter shows up here as new.
Can you rebase to latest main?

305

... that (co-)iterate over sparse tensors.

(otherwise the () do not make much sense)

308

..generate the following..

326

this is very confusing given that we also generate for-loops

just list it 3x:

loopEmiter.exitCurrentLoop(); // exit second k-loop
loopEmiter.exitCurrentLoop(); // exit j-loop
loopEmiter.exitCurrentLoop(); // exit i-loop

aartbik added inline comments.Oct 21 2022, 5:52 PM
mlir/lib/Dialect/SparseTensor/Transforms/CodegenUtils.cpp
807

don't break the == 8
(also, unrelated change that belongs in its own quick revision?)

mlir/lib/Dialect/SparseTensor/Transforms/CodegenUtils.h
308

Note that loops are really over lattices and tensor index expressions, so the dimensions may not always line up.

and please use tensor0_0, tensor0_1, etc.; see below

333

when initializing
(use the *-ing form)

334

space after comma

347

remove L308-310; it does not add too much given the total # of methods

365

syntax p0 -> end is confusing

use C flavored explanation

for (int i = ...; i < ...; i++)
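For instance, a hedged sketch of such a C-flavored walk over one sparse segment [p0, end); all names here are illustrative, not the actual code:

#include <cstdint>

// Visit the stored entries of the segment [p0, end) in the position space.
void visitSegment(int64_t p0, int64_t end, const int64_t *indices,
                  const double *values) {
  for (int64_t p = p0; p < end; p++) {
    int64_t i = indices[p]; // coordinate stored at position p
    (void)values[p];        // ... use values[p] at coordinate i ...
    (void)i;
  }
}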

372

Comment that this ends a loop
(sort of obvious but still nice to say something)

381

Note that in the original code and dump, we use tensor_d for this and i_tensor_idx for the loop index (see dumpBits()). It would be helpful for me as a reviewer to see that slightly more familiar syntax for these constructs.

393

co-iteration in comment and CoIteration in method name

402

Returns

408

Gets

mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp
162

I can't find my original question in all the history, but why do we need this change in this revision?
It should be NFC for now, right?

Peiming updated this revision to Diff 470207.Oct 24 2022, 10:25 AM

rebase against main

Peiming updated this revision to Diff 470220.Oct 24 2022, 10:51 AM
Peiming marked 12 inline comments as done.

address comments.

mlir/lib/Dialect/SparseTensor/Transforms/CodegenUtils.cpp
807

I think it is automatically formatted by emacs (clangd) ...

mlir/lib/Dialect/SparseTensor/Transforms/CodegenUtils.h
282

I rebased... and they are still here...

381

Yes, but here there is no concept of a loop idx.

mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp
162

This is a complicated story...

Previously it did not matter whether we set it or not. In the previous implementation, sizes[i] (where i is the loop index) was used for the loop bound. Note, however, that sizes[i] is not initialized from the dimension of the dense tensor (because this is a complex affine expression, there is no corresponding loop index for it).

The loop emitter, however, requires you to pass tid + dim; in that case, setting the level to dense makes the merger favor it over an undefined dimension, and the loop bound would be computed from the dimension of the dense tensor (which is wrong!).

Apart from that, I think avoiding setting the dimension level type makes more sense as well, because the complex affine expression does not actually correspond to any loop index either, and the behavior is more consistent with affine expressions on sparse tensors too.

Peiming updated this revision to Diff 470228.Oct 24 2022, 11:25 AM
Peiming marked an inline comment as done.

small fix

Peiming updated this revision to Diff 470244.Oct 24 2022, 11:55 AM

remove dup code.

mlir/lib/Dialect/SparseTensor/Transforms/CodegenUtils.h
282

You are right! It seems git failed to recognize that this part was moved? (Or I made some mistake merging the two conflicts.)

They should be removed.

Peiming updated this revision to Diff 470250.Oct 24 2022, 12:14 PM
Peiming marked an inline comment as done.

rebase

aartbik added inline comments.Oct 25 2022, 1:11 PM
mlir/lib/Dialect/SparseTensor/Transforms/CodegenUtils.h
364

tensor_t_dim notation, also at L364, 366?

464

period at end, but see below for name/comment

465

Shall we make this just "hasOutput"? By convention, the output is indeed always pushed to the end.
But the isLastOutput name suggests that this is a very special situation, where we just have output/no-output conditions...

466

Just for my own sanity, can you please indicate if I got the following right given the original names?

/// Universal dense indices and upper bounds (by index). The loops array
/// is updated with the value of the universal dense index in the current
/// loop. The sizes array is set once with the inferred dimension sizes.
std::vector<Value> loops;   // now in getLoopIdxValue?  what do you keep in tensors? [note that loops correspond to lattices more than tensors, hence my original name]
std::vector<Value> sizes;  // not here?
/// Buffers for storing dense and sparse numerical values (by tensor).
/// This array is set once during bufferization of all tensors.
std::vector<Value> buffers;    // now valBuffer?
/// Sparse storage schemes (1-D): pointers and indices (by tensor and index).
/// This array is set once during bufferization of all sparse tensors.
std::vector<std::vector<Value>> pointers;  // now ptrBuffer?
std::vector<std::vector<Value>> indices;    // now idxBuffer?
/// Sparse iteration information (by tensor and index). These arrays
/// are updated to remain current within the current loop.
std::vector<std::vector<Value>> highs;  // still highs?
std::vector<std::vector<Value>> pidxs;   // still pidxs?
std::vector<std::vector<Value>> idxs;    // now coords?

Note that I am not against a renaming per se, especially if the new name is better ;-)
But in changes like this, it would have been easier to first move everything over "as is"
and then rename later.

475

This comment seems out of place now? The fields below are pointers/indices/values

mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp
86

Yeah, L86-125 feels really out of place now. But I am okay with cleaning that up after this revision.

471

can you try not to break the tensor index expressions in the comment, so it is a bit easier to read

480

Bit confusing comment. There is only one output, but the two references to it should match ;-)

1085

note that the original had an "if (!inits[b]) continue" here.
I suspect this is still covered by the undef case now, although it seems like a change in behavior?

1117

I don't get this. In general, I like how sparsification has become simpler due to loop emitter.
But this huge block of code sits in something that should really just read like "startLoop".

Why don't we keep genLocals at least? That would also make the diff a lot easier to read?

1166–1167

commented out? I would prefer to keep the method, since start/endLoopSeq and start/endLoop were originally intended to remain small
(sort of the very first loop emitter ;-)

mlir/test/Dialect/SparseTensor/sparse_2d.mlir
965

what happened here? missing > at end, but also different tensor?

1018

probably same reason?

1098

probably same reason?

mlir/test/Dialect/SparseTensor/sparse_3d.mlir
1129

missing > at end?

mlir/test/Dialect/SparseTensor/sparse_perm.mlir
68–70

did the order change here?

mlir/test/Dialect/SparseTensor/sparse_reshape.mlir
121

maybe DAG the load/add

269–270

maybe DAG the load/add

Peiming updated this revision to Diff 470610.Oct 25 2022, 1:52 PM
Peiming marked 14 inline comments as done.

address comments from Aart.

mlir/lib/Dialect/SparseTensor/Transforms/CodegenUtils.h
364

I deleted L366, as we do not prepare for the next dimension now, per offline discussion.

466

You are correct about all the variable names.

sizes is eliminated (or merged with highs for dense tensors).

loops is now managed with loopStack, as the loop emitter does not know the total number of loops it needs to generate at the beginning.
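A rough sketch of that loopStack design (struct and field names are assumptions for illustration, not the actual class layout):

#include <vector>
#include "mlir/IR/Value.h"

// One frame per currently open loop; pushed on entry, popped on exit,
// so the emitter never needs the total loop count up front.
struct LoopFrame {
  mlir::Value iv; // induction variable of this loop
};

std::vector<LoopFrame> loopStack;

// getLoopIdxValue(i) can then simply return loopStack[i].iv.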

mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp
86

SG.

1085

I think it is because I changed the loop from
for (int i = 0; i < bits.size(); i++) to an iterator-based loop.

The iterator-based loop only visits set bits, so all the bits reaching the loop body are guaranteed to be initialized.
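Concretely, the change amounts to something like this (a sketch; bits is an illustrative name):

#include "llvm/ADT/BitVector.h"

void example(const llvm::BitVector &bits) {
  // Before: scan every index and skip unset bits.
  for (unsigned b = 0, e = bits.size(); b < e; b++) {
    if (!bits[b])
      continue; // the guard the original code needed
  }
  // After: iterate only the set bits; the guard disappears.
  for (unsigned b : bits.set_bits()) {
    (void)b; // b is guaranteed to be a set bit here
  }
}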

1117

We still need to do the same translation even if we keep genLocals, because the loop emitter and the lattices use different ways to denote a loop.

All the following block tries to do is translate a bit set -> vector<tid + dim>, but a utility function for this would probably make it more readable (e.g., translateBitsToTidDimPair)?

WDYT?
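A minimal sketch of such a utility, assuming the merger's usual bit encoding (tensor id = b % numTensors, loop index = b / numTensors); the real signature may differ:

#include "llvm/ADT/BitVector.h"
#include "llvm/ADT/SmallVector.h"

// Hypothetical utility: expand a lattice bit set into the explicit
// (tensor id, dimension) pairs that the loop emitter consumes.
static void translateBitsToTidDimPairs(const llvm::BitVector &bits,
                                       unsigned numTensors,
                                       llvm::SmallVectorImpl<unsigned> &tids,
                                       llvm::SmallVectorImpl<unsigned> &dims) {
  for (unsigned b : bits.set_bits()) {
    tids.push_back(b % numTensors); // tensor id packed in the bit
    dims.push_back(b / numTensors); // loop index / dimension part
  }
}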

1166–1167

I actually prefer deleting it, so that the internal state of the loop emitter is less likely to be broken.

But we can have some discussion on this.

mlir/test/Dialect/SparseTensor/sparse_2d.mlir
965

Yes, because there might be multiple tensor_dim for the same loop index. Picking an arbitrary one is fine.

mlir/test/Dialect/SparseTensor/sparse_3d.mlir
1129

No, because it is a sparse tensor... I was being lazy and did not include the full sparse encoding...

Let me know if you want me to include the entire string.

mlir/test/Dialect/SparseTensor/sparse_perm.mlir
68–70

No, the order does not change. For the same reason as above (multiple tensors for the same loop idx, and the loop emitter happens to use a different one), I had to adjust the loop bound variable accordingly in the CHECK tests.

Peiming updated this revision to Diff 470612.Oct 25 2022, 1:56 PM

minor fixes.

Peiming added inline comments.Oct 25 2022, 1:59 PM
mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp
1117

Or maybe the merger should be responsible for the translation, as the map should be managed by it as well?

Peiming updated this revision to Diff 470615.Oct 25 2022, 2:05 PM

remove outdated comments.

Peiming updated this revision to Diff 470634.Oct 25 2022, 3:40 PM

use util function to translate bits -> tid + dim.

aartbik added inline comments.Oct 25 2022, 4:50 PM
mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp
1085

Ah neat, of course!

mlir/test/Dialect/SparseTensor/sparse_3d.mlir
1129

Ah, well, you can use

#sparse_tensor.encoding<{{.*}}>>

to avoid clutter, but the missing > at the end looks like it was a typo ;-)

Peiming updated this revision to Diff 470647.Oct 25 2022, 4:55 PM

fix CHECK tests.

Peiming retitled this revision from [mlir][sparse] use loop emitter to generate loop in sparisficatioin to [mlir][sparse] use loop emitter to generate loop in sparsification.Oct 25 2022, 4:57 PM
aartbik accepted this revision.Oct 25 2022, 4:59 PM

Thanks for your patience during the review, Peiming!
And now, SHIP IT!

From now on, we prefer our loop emitting to be done in its own class ;-)

This revision is now accepted and ready to land.Oct 25 2022, 4:59 PM
Peiming updated this revision to Diff 470651.Oct 25 2022, 5:03 PM

fix CHECK test

This revision was landed with ongoing or failed builds.Oct 25 2022, 5:28 PM
This revision was automatically updated to reflect the committed changes.