I know we've talked about this before, and I'm pretty sure I proposed
something similar at the time, but I think I then had a weaker
understanding of what was actually under test.
But fundamentally, the current approach isn't a viable long-term pattern. This
test keeps getting slower, despite my working very hard to get some
"optimizations" applied even in -O0 builds in order to lower the constant
factors. Fundamentally, we're doing an unreasonable amount of work. And it's
crept back into my critical path, so I'm returning to the idea of changing the
testing strategy here.
Looking at the specific thing being tested, the goal seems very clearly
to be testing the *permutations*, not the *combinations*. The
combinations are what drive up the complexity more than anything else.
Instead, test every possible value for a given triple entry in every
permutation of *some* triple. This really seems to cover the core goal
of the test. Every single possible triple component is tested in every
position. But because we keep the rest of the triple constant, it does
so in a dramatically more scalable amount of time.
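To make the contrast concrete, here is a minimal sketch of the two iteration
strategies. It is not the actual unit-test code: the component tables and the
checkAllPermutations() helper are hypothetical stand-ins for the real
arch/vendor/OS/environment name lists and the per-triple normalization
assertions.

  // Count how many triples each strategy exercises; each triple is still
  // checked in every permutation of its components.
  #include <algorithm>
  #include <array>
  #include <cstdio>
  #include <string>
  #include <vector>

  // Hypothetical stand-in for the real per-triple assertions: walk every
  // permutation of the four components (the thing the test actually cares
  // about) and return how many were visited.
  static long checkAllPermutations(std::array<std::string, 4> C) {
    long Permutations = 0;
    std::sort(C.begin(), C.end());
    do
      ++Permutations; // the real test would normalize and assert here
    while (std::next_permutation(C.begin(), C.end()));
    return Permutations;
  }

  int main() {
    // Hypothetical component tables; the real ones are the full
    // arch/vendor/OS/environment name lists.
    std::vector<std::string> Archs = {"x86_64", "aarch64", "riscv64"};
    std::vector<std::string> Vendors = {"unknown", "apple", "pc"};
    std::vector<std::string> OSes = {"linux", "darwin"};
    std::vector<std::string> Envs = {"gnu", "musl"};

    // Old strategy: the full Cartesian product, |A|*|V|*|O|*|E| triples.
    long Old = 0;
    for (const auto &A : Archs)
      for (const auto &V : Vendors)
        for (const auto &O : OSes)
          for (const auto &E : Envs)
            Old += checkAllPermutations({A, V, O, E});

    // New strategy: hold a fixed baseline triple and vary one component at
    // a time, roughly |A|+|V|+|O|+|E| triples. Every component value still
    // shows up in every position via the per-triple permutations.
    std::array<std::string, 4> Base = {Archs[0], Vendors[0], OSes[0], Envs[0]};
    const std::vector<std::string> *Sets[] = {&Archs, &Vendors, &OSes, &Envs};
    long New = 0;
    for (int I = 0; I < 4; ++I)
      for (const auto &Value : *Sets[I]) {
        std::array<std::string, 4> Tuple = Base;
        Tuple[I] = Value;
        New += checkAllPermutations(Tuple);
      }

    std::printf("old: %ld permuted cases, new: %ld\n", Old, New);
  }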
For me on a debug build, this goes from running for 19 seconds to 19
milliseconds, or a 1000x improvement. This makes a world of difference
for the critical path of 'ninja check-llvm' and other extremely common
workflows.
I think inlining the computation of the reduction in the number of tests here
would be useful (instead of just having it in the description). For example,
it's useful to know that if we take the Cartesian product and permute each of
the tuples in that space, we get the following number of test cases:
This patch cuts it down to:
That's the difference between a geometric series and an arithmetic one: if any
one of the component sets grows by N, X grows multiplicatively (by N times the
product of the other set sizes) while Y grows only linearly (by N).
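To make that counting explicit, here is a rough symbolic sketch (the test's
actual table sizes aren't restated here). With P the number of permutations
checked per tuple:

  X ≈ |Arch| * |Vendor| * |OS| * |Env| * P
  Y ≈ (|Arch| + |Vendor| + |OS| + |Env|) * P

So growing any one component table by N adds roughly N * (product of the
other table sizes) * P cases to X, but only N * P cases to Y.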