This is an archive of the discontinued LLVM Phabricator instance.

[test-suite] Add list of programs we might add.
ClosedPublic

Authored by Meinersbur on May 10 2018, 12:37 PM.

Details

Summary

Add a list of benchmarks, applications and algorithms that are under discussion for addition to the test-suite.

I added all the benchmarks mentioned under https://llvm.org/PR34216, the missing SPEC benchmarks, some image processing algorithms, and a few others.

The list at https://llvm.org/PR34216 only allows adding to the discussion; it does not allow removing entries, commenting on them, or adding details to individual benchmarks.
The file includes a comment noting that a formal review is not required to edit it (requiring one would add a lot of churn). This review is meant as a discussion of the general format of the file, and of whether to include such a file at all.

Suggested-by: Hal Finkel

Diff Detail

Event Timeline

Meinersbur created this revision.May 10 2018, 12:37 PM

We can't add SPEC, as it's commercial. I'm not sure about the others, but please make sure they are open source.

It's odd to have this in the repository, but admittedly we don't really have a wiki or similar in LLVM, so I may be ok with it.

As we are on the topic: I think we should start discussions on breaking up the test-suite into multiple pieces/repositories.
From the technical side we can already do this today (at least with the cmake/lit mode), but we probably will need some rounds of discussions on how exactly to split things apart.
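As a concrete illustration of "we can already do this today" (a minimal sketch; TEST_SUITE_SUBDIRS is the existing cmake option for selecting a subset, but double-check the exact spelling against the current tree):

    # Configure, build and run only a slice of the suite in cmake/lit mode.
    cmake -DCMAKE_C_COMPILER=clang \
          -DTEST_SUITE_SUBDIRS=MultiSource/Benchmarks \
          /path/to/test-suite
    make
    llvm-lit -v .

Splitting things apart would, roughly speaking, mean promoting such slices to independently buildable trees.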

some image processing algorithms

I wonder if it would be of any interest to add a raw image decoding library (for the images produced by digital cameras / DSLRs)?
https://github.com/darktable-org/rawspeed

The downside is that it requires the actual images to work on.
The upside to that downside is that there is a maintained set of such images exactly for this purpose already.
https://raw.pixls.us/data-unique/

We can't add SPEC, as it's commercial. I'm not sure about the others, but please make sure they are open source.

I should have clarified: Regarding SPEC, I meant adding CMakeLists in the External directory.
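To illustrate the shape of that (a rough sketch only; the real files in External/ use the test-suite's own helper macros and cache variables, so treat all names here as illustrative):

    # Sketch of an External/-style CMakeLists.txt for a commercial suite.
    # The benchmark sources themselves are never checked in; we only look
    # for a locally licensed installation and skip the tests otherwise.
    set(TEST_SUITE_SPEC2017_ROOT "" CACHE PATH
        "Path to a licensed SPEC CPU 2017 installation")
    if(NOT IS_DIRECTORY "${TEST_SUITE_SPEC2017_ROOT}")
      message(STATUS "SPEC CPU 2017 not found; skipping these tests")
      return()
    endif()
    # From here on, build the sources out of the user's SPEC tree with the
    # compiler under test and register run/verify steps with lit as usual.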

It's odd to have this in the repository, but admittedly we don't really have a wiki or similar in LLVM, so I may be ok with it.

It's also in bugzilla (which is also odd, but ok).

As we are on the topic: I think we should start discussions on breaking up the test-suite into multiple pieces/repositories.
From the technical side we can already do this today (at least with the cmake/lit mode), but we probably will need some rounds of discussions on how exactly to split things apart.

+1

We don't do a good job of separating test mode and benchmark mode, and right now they're mostly independent runs with independent buildbots anyway.

I should have clarified: Regarding SPEC, I meant adding CMakeLists in the External directory.

SPEC can be very sensitive to how you run it, so it may be a losing battle, but I'm not against doing this, as long as it doesn't break existing downstream scripts (of which there are loads).

some image processing algorithms

I wonder if it would be of any interest to add a raw image decoding library (for the images produced by digital cameras / DSLRs)?
https://github.com/darktable-org/rawspeed

The downside is that it requires the actual images to work on.
The upside to that downside is that there is a maintained set of such images exactly for this purpose already.
https://raw.pixls.us/data-unique/

I think it is a good candidate to be added to the file; we can note the reason why it has not (yet) been added.
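A hypothetical sketch of how the data dependency could be handled if/when it is added (the variable name below is made up for illustration, not an existing option):

    # Hypothetical guard for a benchmark that needs external sample data.
    set(TEST_SUITE_RAWSPEED_DATA_ROOT "" CACHE PATH
        "Local copy of the sample images from https://raw.pixls.us/data-unique/")
    if(NOT IS_DIRECTORY "${TEST_SUITE_RAWSPEED_DATA_ROOT}")
      message(STATUS "rawspeed sample images not found; skipping")
      return()
    endif()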

SPEC can be very sensitive to how you run it, so it may be a losing battle, but I'm not against doing this, as long as it doesn't break existing downstream scripts (of which there are loads).

We already have SPEC CPU 2000/2006/2017 compile definitions in External (I added the SPEC CPU 2017 CMakeLists.txt myself).
The results are not official anyway, i.e. not suitable for submission to https://www.spec.org/cpu2017/results/. I use them so that I don't have to configure and invoke multiple benchmark suites separately.
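For reference, configuring them looks roughly like this; the TEST_SUITE_SPEC*_ROOT variable names follow the existing External conventions, but verify them against the current tree:

    cmake -DCMAKE_C_COMPILER=clang \
          -DTEST_SUITE_SPEC2017_ROOT=/path/to/cpu2017 \
          /path/to/test-suite
    make
    llvm-lit -v External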

It does seem like a wiki would be nice for maintaining this kind of information. In the absence of that, I think a file in the test-suite repository and a page in www are about equally easy/hard to maintain: either requires commit access to make any changes.
A file in www could in theory be more visible, as it becomes part of the llvm.org web pages. That being said, source code is also viewable online, so it's easy to browse this text too.

Besides listing potential future extensions to the test-suite, it might make sense to also have a section somewhere on test-suite design/philosophy and on where we'd want the design to evolve (e.g. a place where we can document in a bit more detail what "breaking up the test-suite into multiple repositories" means).

On the contents of the file as is: I wonder if it would be possible to group the proposed benchmarks by application domain, e.g. "HPC", "image processing", ...? That would help to identify over-representation of some application domains and under-representation of others.
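Purely as an illustration of such a grouping (entries taken from this discussion; the layout itself is invented):

    [Benchmark suites]
    SPEC CPU 2000/2006/2017 - commercial; External/ build definitions only

    [Image processing]
    rawspeed - raw image decoding; needs sample images
               (https://raw.pixls.us/data-unique/)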

TODO.txt
Lines 1–2 (On Diff #146188)

It might be worthwhile to also state why we want to add more applications/benchmarks/algorithms to the test-suite.
My personal take on this is roughly:
"For benchmarking, many have observed that there isn't much overlap between performance regressions observed in programs or benchmarks not included in the test-suite and the benchmarks that are in the test-suite. This an indication that the test-suite doesn't have great coverage of 'typical' performance critical code. It is also an indication that a few hundred kernels doesn't seem to be enough to be able to cover most 'typical' performance critical codes. The hope is that adding a lot more and a lot more diverse code kernels will result in more coverage."

It does seem like a wiki would be nice for maintaining this kind of information. In the absence of that, I think a file in the test-suite repository and a page in www are about equally easy/hard to maintain: either requires commit access to make any changes.
A file in www could in theory be more visible, as it becomes part of the llvm.org web pages. That being said, source code is also viewable online, so it's easy to browse this text too.

That's actually a good point. We have the directory http://llvm.org/docs/Proposals/ for that reason.

Besides listing potential future extensions to the test-suite, it might make sense to also have a section somewhere on test-suite design/philosophy and on where we'd want the design to evolve (e.g. a place where we can document in a bit more detail what "breaking up the test-suite into multiple repositories" means).

This would also go into the testing docs we already have in www.

On the contents of the file as is: I wonder if it would be possible to group the proposed benchmarks by application domain, e.g. "HPC", "image processing", ...? That would help to identify over-representation of some application domains and under-representation of others.

+1

This kinda stalled, I think?

Meinersbur removed rOLDT svn-test-suite as the repository for this revision.

Sorry for the delay; I haven't forgotten this patch, but did not prioritize it.

As suggested by @rengolin, I moved the document to the LLVM repository's docs/Proposals. I also added a few more benchmarks.

rengolin accepted this revision.Oct 23 2018, 12:24 PM

Awesome, thanks! LGTM.

I also have a list somewhere, that I will add once I find it.

This revision is now accepted and ready to land.Oct 23 2018, 12:24 PM
This revision was automatically updated to reflect the committed changes.