I know we've talked about this before, and I'm pretty sure I've proposed
something similar in the past, but I think at the time I had a weaker
understanding of what was actually under test.
But fundamentally, the current approach isn't a viable long-term pattern.
This test keeps getting slower, despite my working hard to apply some
"optimizations" even in -O0 builds in order to lower the constant factors.
Fundamentally, we're doing an unreasonable amount of work. And it's crept
back into my critical path, so I'm returning to the idea of changing the
testing strategy here.
Looking at the specific thing being tested -- the goal seems very
clearly to be testing the *permutations*, not the *combinations*. The
combinations are driving up the complexity much more than anything else.
Instead, test every possible value for a given triple entry in every
permutation of *some* triple. This really seems to cover the core goal
of the test. Every single possible triple component is tested in every
position. But because we keep the rest of the triple constant, it does
so in a dramatically more scalable amount of time.
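To make the scaling difference concrete, here is a toy sketch of the two
strategies. The component pools and the baseline triple below are purely
illustrative stand-ins (the real test draws from LLVM's known architectures,
vendors, OSes, and environments, so the pools are far larger and the product
blows up much faster); the counting logic is an assumption about the
strategies described above, not the actual unit-test code.

```python
from itertools import permutations, product
from math import factorial

# Illustrative component pools -- not the real LLVM lists.
pools = [
    ["arm", "x86", "mips"],  # arch candidates
    ["apple", "pc"],         # vendor candidates
    ["linux", "darwin"],     # OS candidates
]

def combination_cases(pools):
    """Old strategy (as I understand it): every permutation of every
    combination of component values. Cost grows with the *product*
    of the pool sizes."""
    n = len(pools)
    count = 0
    for combo in product(*pools):
        count += factorial(n)
    return count

def substitution_cases(pools, baseline):
    """New strategy: keep one fixed baseline triple and, for every
    permutation of it, substitute each candidate value into one
    position at a time. Cost grows with the *sum* of the pool
    sizes -- every value is still exercised in every position."""
    cases = []
    for perm in permutations(baseline):
        for i, pool in enumerate(pools):
            for value in pool:
                triple = list(perm)
                triple[i] = value
                cases.append(tuple(triple))
    return len(cases)

baseline = ["arm", "apple", "linux"]
print(combination_cases(pools))             # 3! * (3*2*2) = 72
print(substitution_cases(pools, baseline))  # 3! * (3+2+2) = 42
```

With these tiny pools the two counts are close, but because the old cost is
multiplicative and the new cost is additive in the pool sizes, the gap widens
dramatically as the pools grow to their real sizes.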
For me on a debug build, this goes from running for 19 seconds to 19
milliseconds, or a 1000x improvement. This makes a world of difference
for the critical path of 'ninja check-llvm' and other extremely common
workflows.