@SaurabhJha did you ever get a chance to create a patch?
Hey @Florian, sorry I dropped the ball here. I don't think I will be able to spend time on this, so I am abandoning this revision.
Jul 15 2022
Looks great, thank you!
Jul 14 2022
Thanks for submitting this patch! Added a comment :)
Mar 27 2022
I couldn't trim it in obvious ways. I am closing this patch for now and will approach it another way in a new patch. Sorry about that.
Mar 13 2022
Mar 12 2022
I have copied the pytaco tools directory from the tests directory to the benchmark directory so that the benchmarks can use the Python bindings. We can decide on the next course of action now that we have a concrete revision.
Feb 9 2022
Feb 8 2022
Mistakenly pushed the previous version before. Fixed it now.
Integrating fixes in sparse-compiler pass. It works now.
Feb 3 2022
Feb 1 2022
Jan 31 2022
Jan 30 2022
Rename variable pipeline_new -> pipeline
Jan 27 2022
Alright, this is it! The build has passed, merging.
The build is taking quite a while. Is it okay to restart it if it doesn't finish in about 9-10 hours?
I'll wait for the build to pass before merging this in.
Address final comments on naming
Jan 26 2022
Jan 20 2022
Remove references to BenchmarkRunConfig from README
llvm-lit is also a Python program; how is it set up?
The rm looks a bit hacky to me; I'd rather not touch the source directory at all.
Yep, got a solution for it! The problem was that mbr sought to be both a library and a CLI runner. As a library, it used to provide BenchmarkRunConfig so that benchmarks could import it and return instances like this.
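A minimal sketch of that old pattern, for illustration only: the field names and the example benchmark below are assumptions, not the actual mbr API.

```python
# Hypothetical sketch of the library-provided config a benchmark used to
# import and return. Field names here are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class BenchmarkRunConfig:
    """Bundles the callables the runner needs for one benchmark."""
    compiler: Optional[Callable]  # builds/compiles the kernel (may be None)
    runner: Callable              # executes the compiled kernel


def benchmark_sparse_matmul():
    """A hypothetical benchmark module's entry point."""
    def compile():
        pass  # e.g. build the MLIR module and run the sparse-compiler pass

    def run():
        pass  # e.g. execute the compiled kernel and record timings

    return BenchmarkRunConfig(compiler=compile, runner=run)
```

Coupling the library's data type to every benchmark like this is what forced mbr to be importable, which is the tension described above.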
Jan 19 2022
Address latest round of comments.
Why do all these folders show up here? In general, execution happens in the build directory (which is at the top level, not under MLIR).
We should leave our source directory "pristine".
I have now made an llvm-lit-style arrangement where we move the executable to the build directory and invoke mlir-mbr from there. That way, we won't be bothered by "*.pyc" files. Unfortunately, the egg-info and build/ directories are created by pip install -e, which we have to run to install mlir-mbr, so I included a manual rm -rf in CMakeLists.txt.
Jan 18 2022
Add trailing line to mlir/.gitignore
Add CMake targets for installation of the library
Jan 13 2022
Addressed latest round of comments
Jan 10 2022
Jan 3 2022
The latest revision contains the following changes
- Adding a README.
- Having a configuration file for the library.
- Having a numpy benchmark as an example where there is no compile function.
- Improve benchmark filtering to filter by benchmark name.
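Filtering by benchmark name, as mentioned in the last bullet, could be as simple as matching names against a glob pattern; this sketch assumes glob-style patterns, which may not be the framework's actual matching rule.

```python
import fnmatch


def filter_benchmarks(names, pattern):
    """Keep only benchmark names matching a glob-style pattern.

    The use of fnmatch globs is an illustrative assumption; a framework
    could equally match on substrings or regular expressions.
    """
    return [name for name in names if fnmatch.fnmatch(name, pattern)]
```

For example, `filter_benchmarks(["sparse_matmul", "dense_add"], "sparse*")` keeps only the sparse benchmark.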
Dec 27 2021
This diff has these changes:
- Address the comment of returning a compile function and a run function from a benchmark which the framework can use.
- Add filtering of benchmark paths.
- Dynamically determine the number of runs required for a benchmark function. I have used a strategy similar to Python's timeit (https://github.com/python/cpython/blob/main/Lib/timeit.py#L31-L33) and Google Benchmark (https://github.com/google/benchmark/blob/main/src/benchmark_runner.cc#L231-L253). Let me know if we need something different here.
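The timeit-style strategy referenced in the last bullet can be sketched as follows; the function name and threshold are assumptions for illustration, not the patch's actual code.

```python
import time


def autorange(func, target_seconds=0.2):
    """Pick a run count by growing it 1, 2, 5, 10, 20, 50, ... until one
    batch takes at least target_seconds, similar to timeit's autorange.

    This is an illustrative sketch, not the framework's implementation.
    """
    scale = 1
    while True:
        for factor in (1, 2, 5):
            number = scale * factor
            start = time.perf_counter()
            for _ in range(number):
                func()
            elapsed = time.perf_counter() - start
            if elapsed >= target_seconds:
                return number
        scale *= 10
```

The 1-2-5 progression keeps the calibration overhead bounded: each probe batch is at most 2.5x the previous one, so the total calibration cost stays within a small constant factor of the final batch.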
Dec 26 2021
Still a work in progress but I have addressed some comments.
- I have abstracted everything into a library.
- Implemented benchmark discovery similar to pytest's.
- Better separation of running passes, compiling, and running.
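The pytest-style discovery mentioned above can be sketched roughly like this; the file pattern (`*_bench.py`) and the `benchmark_` prefix are illustrative assumptions, not necessarily the conventions the library uses.

```python
import importlib.util
import pathlib


def discover_benchmarks(root, prefix="benchmark_"):
    """Collect (path, function_name) pairs for functions whose names start
    with `prefix` in files matching *_bench.py under `root`.

    Sketch of pytest-style discovery; the naming conventions here are
    assumptions made for this example.
    """
    found = []
    for path in sorted(pathlib.Path(root).rglob("*_bench.py")):
        # Import the file as a module so we can inspect its functions.
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        for name in dir(module):
            if name.startswith(prefix) and callable(getattr(module, name)):
                found.append((path, name))
    return found
```

Discovery by naming convention keeps benchmark files free of any registration boilerplate: dropping a new `*_bench.py` file into the tree is enough for the runner to pick it up.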
Dec 23 2021
Dec 22 2021
Added support for pushing benchmarks to an LNT server. Also added a README.
Dec 20 2021
Even if the current benchmarking approach is the way to go, I couldn't find a way to run the benchmarks consistently. Locally, I have been running them from the command line like this:

```bash
PYTHONPATH=build/tools/mlir/python_packages/mlir_core MLIR_C_RUNNER_UTILS=build/lib/libmlir_c_runner_utils.dylib MLIR_RUNNER_UTILS=build/lib/libmlir_runner_utils.dylib python mlir/benchmark/python/*.bench.py
```
Set up a python script to run benchmarks. Remove google benchmark setup.
Dec 13 2021
Dec 12 2021
@mehdi_amini @aartbik I have introduced a python example benchmark. Let me know what you think about this new approach. It needs more fleshing out and a possible integration with llvm-lit. I have suggested a way we can use FileCheck for these benchmarks.
Introduce a python framework for benchmarking llvm programs
Dec 8 2021
Dec 7 2021
Dec 6 2021
Address comments: move benchmarking configuration to inner directory and enable/disable benchmarking with a flag
Changing the commit message by removing references to the sparse kernel, since we are introducing a general benchmark framework here.
Jul 26 2021
Sorry, commented on incorrect patch.
@fhahn addressed your broadcast comment. Would you prefer that I create the initialisation implementation patch before we get this in?
Jul 22 2021
Add documentation for matrix broadcast initialization
Address second round of comments
Jul 21 2021
Updated docs to address comments
Jul 20 2021
Jul 19 2021
Jul 14 2021
Jun 26 2021
This is a light patch that probably does not require review, but I created a patch anyway.
Jun 24 2021
This is closed by this commit https://github.com/llvm/llvm-project/commit/cd256c8bcc9723f0ce7a32957f26600c966fa07c
Sorry, I committed this without the "Differential Revision: https://reviews.llvm.org/D104198" line. Is there a way to change the commit message after it is in main? I could not push after git commit --amend.
Thanks, the build is also passing now so I will land this in a bit.
Rebase with latest main
Address round 2 comments
Somehow the builds are failing even though this patch contains no code changes.
Jun 23 2021
Removing mistakenly added files
Did a --amend to rebuild
Jun 17 2021
Address comment: replace scalar variables with values
Jun 15 2021
Document matrix scalar division
Address review comments
Jun 13 2021
Forgot to add a colon in code-block header. Fixed.
Jun 5 2021
Jun 4 2021
For sure, will do.
May 27 2021
May 26 2021
Thanks, this looks good to me! The existing tests are failing, but it seems they are not difficult to fix. Once those are fixed, I will mark this as accepted.