This is an archive of the discontinued LLVM Phabricator instance.

[mlir][sparse] Adding a stress test
ClosedPublic

Authored by wrengr on Nov 17 2021, 1:07 PM.

Diff Detail

Event Timeline

wrengr created this revision.Nov 17 2021, 1:07 PM
wrengr requested review of this revision.Nov 17 2021, 1:07 PM
mehdi_amini requested changes to this revision.Nov 17 2021, 4:34 PM

I am not sure that mlir/test/python/... (and mlir/test/python/dialects/...) is intended for end-to-end integration tests. This would likely live better somewhere in the integration testing directory, but I don't know whether we have a precedent for using Python for this kind of testing at the moment.

Also: how long does this take to run?

This revision now requires changes to proceed.Nov 17 2021, 4:34 PM

I don't know whether we have a precedent for using Python for this kind of testing at the moment.

I am afraid I already set the wrong example with "test_SpMM.py" in this directory. But I agree that we should migrate this to a better place, since this does not test Python functionality, but rather uses the Python DSL to test sparse functionality ;-)

Do you have suggestions for a better place? Would test/Integration/Python or something like that make sense? Or perhaps test/Integration/Dialect/SparseTensor/Python, to make it clearer that we are testing the sparse dialect through the Python DSL.

Also: how long does this take to run?

Running on my local machine and writing the modules out to files, it takes:

real    0m3.369s
user    0m2.027s
sys     0m0.536s

If not writing the modules out (as for the lit+FileCheck runs), it's a bit quicker:

real    0m2.703s
user    0m2.093s
sys     0m0.301s

I intentionally set the shape/rank small enough that it runs this quickly. If I increase the rank by 1, it takes much longer; without writing the modules out, it comes to:

real    0m23.446s
user    0m22.019s
sys     0m1.041s

That is why I use rank 4 rather than the rank 7 suggested in https://bugs.llvm.org/show_bug.cgi?id=52410
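As a back-of-the-envelope sketch of why one extra rank costs roughly an order of magnitude, suppose the stress test enumerates every sparsity encoding of a rank-r tensor: each dimension is either "dense" or "compressed" (2^r choices) and the dimension ordering is any permutation (r! choices). This is an assumption about the test's parameter space (the real test may also vary pointer/index bit widths), and the helper name below is made up for illustration:

```python
import math

def num_sparsity_encodings(rank: int) -> int:
    # 2**rank level-type combinations ("dense"/"compressed" per dim)
    # times rank! dimension orderings.
    return (2 ** rank) * math.factorial(rank)

for rank in (4, 5, 7):
    print(rank, num_sparsity_encodings(rank))
# rank 4 -> 384, rank 5 -> 3840, rank 7 -> 645120
```

Under this assumption, going from rank 4 to rank 5 multiplies the number of conversions by 10, which lines up with the observed jump from ~2.7s to ~23s, and rank 7 would be entirely impractical.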

wrengr updated this revision to Diff 388078.Nov 17 2021, 5:40 PM

Removing unused imports

Thanks! Timing is reasonable, any place suggested by Aart in the integration directory looks fine.

wrengr updated this revision to Diff 388304.Nov 18 2021, 1:13 PM

rebasing and splitting off D114192

wrengr updated this revision to Diff 389010.Nov 22 2021, 12:59 PM

Moved the file per D114192. (Apparently this never got uploaded when that differential got split off)

aartbik added inline comments.Nov 30 2021, 12:42 PM
mlir/test/Integration/Dialect/SparseTensor/python/test_stress.py
14 ↗(On Diff #389051)

I would put empty lines between the different "kinds" of imports.

21 ↗(On Diff #389051)

empty line before this separator

183 ↗(On Diff #389051)

empty line after separator

191 ↗(On Diff #389051)

We now have the

@run
def testXXX

style in one file, and plain main() in the other. Let's unify how our tests are run.

239 ↗(On Diff #389051)

Is this some hidden feature to write the MLIR to a file for debugging? If so, document it with a sample invocation; otherwise, I would remove this debugging code from the actual test.

wrengr updated this revision to Diff 391130.Dec 1 2021, 1:43 PM
wrengr marked 5 inline comments as done.

addressing comments

wrengr updated this revision to Diff 391131.Dec 1 2021, 1:44 PM

Forgot to remove the unused run() decorator

almost there!

mlir/test/Integration/Dialect/SparseTensor/python/test_stress.py
160 ↗(On Diff #391131)

We should also move this into its own file and share it between all our examples, just like the TODO you added above.

219 ↗(On Diff #391131)

Space after comma.

235 ↗(On Diff #391131)

One last request: the current version generates

 %0 = sparse_tensor.convert %arg0 : tensor<2x3x4x5xf64> to tensor<2x3x4x5xf64, #sparse_tensor.encoding<{ dimLevelType = [ "dense", "dense", "dense", "dense" ], dimOrdering = affine_map<(d0, d1, d2, d3) -> (d0, d1, d2, d3)>, pointerBitWidth = 0, indexBitWidth = 0 }>>
  %1 = sparse_tensor.convert %0 : tensor<2x3x4x5xf64, #sparse_tensor.encoding<{ dimLevelType = [ "dense", "dense", "dense", "dense" ], dimOrdering = affine_map<(d0, d1, d2, d3) -> (d0, d1, d2, d3)>, pointerBitWidth = 0, indexBitWidth = 0 }>> to tensor<2x3x4x5xf64, #sparse_tensor.encoding<{ dimLevelType = [ "dense", "dense", "dense", "dense" ], dimOrdering = affine_map<(d0, d1, d2, d3) -> (d0, d1, d3, d2)>, pointerBitWidth = 0, indexBitWidth = 0 }>>
  ...
  %384 = sparse_tensor.convert %383 : tensor<2x3x4x5xf64, #sparse_tensor.encoding<{ dimLevelType = [ "compressed", "compressed", "compressed", "compressed" ], dimOrdering = affine_map<(d0, d1, d2, d3) -> (d3, d2, d1, d0)>, pointerBitWidth = 0, indexBitWidth = 0 }>> to tensor<2x3x4x5xf64>
  return %384 : tensor<2x3x4x5xf64>

which means that all except the last one could be freed. So, just to set the right example, let's emit sparse_tensor.release on all the intermediate sparse tensors.
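The suggested pattern can be sketched in plain Python. This is a hypothetical illustration, not the actual MLIR Python bindings API: emit_convert and emit_release are made-up callbacks standing in for the real IR builders the test uses.

```python
def build_conversion_chain(emit_convert, emit_release, src, encodings):
    """Chain sparse_tensor.convert ops through every encoding, then
    release every intermediate result except the final one.

    emit_convert(value, encoding) -> new value   (hypothetical builder)
    emit_release(value)                          (hypothetical builder)
    """
    intermediates = []
    cur = src
    for enc in encodings:
        cur = emit_convert(cur, enc)
        intermediates.append(cur)
    result = intermediates.pop()  # the last convert is the returned value
    for t in intermediates:      # free everything else
        emit_release(t)
    return result
```

The key point is simply that each intermediate tensor is dead as soon as the next convert consumes it, so all but the final result can carry a release.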

wrengr updated this revision to Diff 391453.Dec 2 2021, 2:00 PM
wrengr marked 2 inline comments as done.

Releasing intermediate tensors.

mlir/test/Integration/Dialect/SparseTensor/python/test_stress.py
160 ↗(On Diff #391131)

Will do, though I think it ought to be in a separate CL, just to help keep the impact of each CL clearer (and hence also help track down bugs, as needed).

aartbik accepted this revision.Dec 3 2021, 10:24 AM
aartbik added inline comments.
mlir/test/Integration/Dialect/SparseTensor/python/test_stress.py
160 ↗(On Diff #391131)

Oh yes, I meant as a TODO, not in this revision.

wrengr updated this revision to Diff 391738.Dec 3 2021, 2:20 PM
wrengr marked 2 inline comments as done.

Addressing comment (adding a TODO) and rebasing.

mehdi_amini accepted this revision.Dec 3 2021, 2:42 PM
This revision is now accepted and ready to land.Dec 3 2021, 2:42 PM
This revision was automatically updated to reflect the committed changes.