This patch adds a benchmarking infrastructure for llvm-libc memory functions.
In a nutshell, the code can benchmark small and large buffers for the memcpy, memset, and memcmp functions.
It also produces size-vs-latency graphs via targets of the form render-libc-{memcpy|memset|memcmp}-benchmark-{small|big}.
The configurations are provided as JSON files and the benchmark also produces a JSON file.
This file is then parsed and rendered as a PNG file via the render.py script (requires matplotlib, scipy, and numpy: pip3 install matplotlib scipy numpy).
The script can take several JSON files as input and will superimpose the curves if they are from the same host.
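The exact JSON schema produced by the benchmark is not shown here. As an illustration only, assuming each result file records a host name plus (size, latency) samples (the field names "host", "measurements", "size", and "latency" below are hypothetical), the grouping step that lets render.py superimpose curves from the same host might be sketched as:

```python
import json
from collections import defaultdict


def load_curves(paths):
    """Group (size, latency) samples by host.

    Samples sharing a host end up in one list, so their curves can be
    superimposed on a single plot. NOTE: the JSON field names used here
    are hypothetical, not the benchmark's actual schema.
    """
    curves = defaultdict(list)
    for path in paths:
        with open(path) as f:
            data = json.load(f)
        for m in data["measurements"]:
            curves[data["host"]].append((m["size"], m["latency"]))
    return curves
```

Each per-host list could then be handed to matplotlib (e.g. one plt.plot call per host) to draw the superimposed curves.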
TODO:
- The code currently benchmarks whatever implementations are available on the host; it should be configured to benchmark the (yet to be added) llvm-libc memory functions.
- Produce scores to track the performance of these functions over time and allow for regression detection.
Why was this not needed before? Or, maybe a better question: why is it needed now?