This is an archive of the discontinued LLVM Phabricator instance.

[mlir] Add benchmarks for sparse tensor multiplications
Abandoned · Public

Authored by SaurabhJha on Mar 12 2022, 2:30 PM.

Details

Reviewers
aartbik
bixia
Summary

The objective is to compare the performance of the MLIR sparse tensor
reference implementation, NumPy matrix multiplication, and pytaco
multiplication.
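The patch itself is not reproduced in this archive. As an illustration of the kind of comparison the summary describes, here is a minimal, hypothetical sketch that times a dense NumPy matmul against a SciPy CSR matmul on the same data; SciPy stands in for the MLIR sparse tensor and pytaco paths, which are not available here, and the function name and parameters are assumptions, not part of the patch:

```python
import timeit

import numpy as np
import scipy.sparse as sp


def benchmark_matmul(size=64, density=0.05, repeats=5):
    """Time dense NumPy matmul against SciPy CSR matmul on the same matrices.

    Returns (dense_seconds, sparse_seconds) for `repeats` multiplications.
    """
    # Generate two random sparse matrices, plus dense copies of the same data.
    a_sparse = sp.random(size, size, density=density, format="csr", random_state=0)
    b_sparse = sp.random(size, size, density=density, format="csr", random_state=1)
    a_dense = a_sparse.toarray()
    b_dense = b_sparse.toarray()

    dense_time = timeit.timeit(lambda: a_dense @ b_dense, number=repeats)
    sparse_time = timeit.timeit(lambda: a_sparse @ b_sparse, number=repeats)

    # Sanity check: both paths compute the same product.
    assert np.allclose(a_dense @ b_dense, (a_sparse @ b_sparse).toarray())
    return dense_time, sparse_time
```

A real benchmark along the lines of this patch would also separate setup (building the tensors, compiling the kernel) from the measured multiplication, which is the point discussed in the inline comments below.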

Diff Detail

Event Timeline

SaurabhJha created this revision. Mar 12 2022, 2:30 PM
Herald added a project: Restricted Project.
SaurabhJha requested review of this revision. Mar 12 2022, 2:30 PM

I have copied the pytaco tools directory from the tests directory into the benchmark directory so the benchmark can use the Python bindings. We can decide on the next course of action now that we have a concrete revision.

mlir/benchmark/python/sparse_tensor/benchmark_sparse.py
128–134

I chose these matrix sizes here. Are they appropriate for benchmarking?

I also moved this code into the setup phase because it seemed to be taking a long time, so I thought it appropriate to count it under compilation rather than the measured run. Let me know your thoughts.

mehdi_amini added inline comments. Mar 12 2022, 4:34 PM
mlir/benchmark/python/sparse_tensor/tools/mlir_pytaco.py
1869

That is a non-trivial amount of code duplicated in the repo, isn't it?

aartbik added inline comments. Mar 12 2022, 8:46 PM
mlir/benchmark/python/sparse_tensor/tools/mlir_pytaco.py
1869

Yeah +1

I thought you would just move a single *small* routine over, and that we could consolidate it later.
If you need to duplicate the full TACO support (which will diverge rather quickly), we need to find a better solution.

SaurabhJha added inline comments. Mar 13 2022, 5:14 AM
mlir/benchmark/python/sparse_tensor/tools/mlir_pytaco.py
1869

I will work on trimming this a bit more.

SaurabhJha abandoned this revision. Mar 27 2022, 9:28 AM

I couldn't trim it in obvious ways. I am closing this patch for now and will approach it another way in a new patch. Sorry about that.

No worries, we are looking forward to your next patch. Please know that we appreciate your efforts a lot!