This is an archive of the discontinued LLVM Phabricator instance.

[lit] Add SKIPPED test result category
ClosedPublic

Authored by yln on Apr 9 2020, 11:44 AM.

Details

Summary

Track and print the number of skipped tests. Skipped tests are tests
that should have been executed but weren't due to:

  • user interrupt [Ctrl+C]
  • --max-time (overall lit timeout)
  • --max-failures

This is part of a larger effort to ensure that all discovered tests are
properly accounted for.

Add test for overall lit timeout feature (--max-time option) to
observe skipped tests. Extend test for --max-failures option.
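The effect of the new category on lit's summary output can be pictured with a small sketch (hypothetical code, not lit's actual implementation; the category names mirror lit's, the tally logic is illustrative):

```python
# Hypothetical sketch: with a dedicated SKIPPED category, the summary
# can report unexecuted tests separately from real failures instead of
# lumping them into UNRESOLVED.
from collections import Counter

def summarize(results):
    """Count results per category and return the summary lines."""
    tally = Counter(results)
    lines = []
    for category in ("PASS", "FAIL", "UNRESOLVED", "SKIPPED"):
        if tally[category]:
            lines.append("%s: %d" % (category, tally[category]))
    return lines

# Example: two tests passed, one failed, and two were never executed
# (e.g., Ctrl+C or --max-time fired), so they are counted as SKIPPED.
print("\n".join(summarize(["PASS", "PASS", "FAIL", "SKIPPED", "SKIPPED"])))
```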

Diff Detail

Event Timeline

yln created this revision. Apr 9 2020, 11:44 AM

Thanks for doing this. I like the idea of the change.

Can skipped tests be covered in lit's test suite? I'm not sure how easy it is to do it in a portable manner, but platform-specific coverage is better than nothing.

llvm/utils/lit/lit/main.py
283–284

Does this todo mean you don't like the name SKIPPED?

284

There's a change in the effect of opts.max_failures on both skipped tests and the remaining unresolved tests.

It seems like that should be mentioned in the patch summary.

Can at least the latter change be covered in lit's test suite?

yln updated this revision to Diff 256623. Apr 10 2020, 11:24 AM

Improved commit message.

Add test for overall lit timeout feature (--max-time option) to
observe skipped tests.

yln marked 2 inline comments as done. Apr 10 2020, 11:33 AM

Can skipped tests be covered in lit's test suite? I'm not sure how easy it is to do it in a portable manner, but platform-specific coverage is better than nothing.

Added a test for the overall lit timeout (--max-time), which previously didn't have one; this lets us observe the "skipped test" logic. I am not sure how to add one for the user interrupt [Ctrl+C].

llvm/utils/lit/lit/main.py
283–284

No, I just noticed that handling of FLAKYPASS is missing here and I want to address it as a separate change.

yln edited the summary of this revision. Apr 10 2020, 11:34 AM
yln marked 3 inline comments as done. Apr 10 2020, 12:49 PM
yln added inline comments.
llvm/utils/lit/lit/main.py
284

UNRESOLVED tests are failures where the test failed due to an "infrastructure" issue, e.g., failure to parse a REQUIRES: line.

Previously, all unexecuted tests (unexecuted for whatever reason: infrastructure failures, overall lit timeout, user interrupt, --max-tests) were marked UNRESOLVED. So both logical groups of unexecuted tests, "skipped" and "unresolved", were given the same label and treated as failures. However, a skipped test shouldn't imply failure.

I think the check for opts.max_failures here was just an ad-hoc way to try to deal with mixing both skipped and failed tests into the UNRESOLVED category: when --max-failures was specified and we stop executing because of it, we mark all remaining tests as UNRESOLVED. However, we shouldn't print those as failures here.

Anyway, things are more consistent now. We have a proper category for "skipped" (not a failure), and the interaction between max_failures and UNRESOLVED has been removed.
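The distinction described above can be sketched in a few lines (a simplified illustration, not lit's real classes or API; Test and mark_unexecuted are hypothetical stand-ins):

```python
# Hypothetical sketch: tests that never ran become SKIPPED, while
# genuine infrastructure failures keep their UNRESOLVED category.

class Test:
    def __init__(self, name, result=None):
        self.name = name
        self.result = result  # None means the test never executed

def mark_unexecuted(tests):
    """Give every not-yet-executed test the SKIPPED category."""
    for test in tests:
        if test.result is None:
            test.result = "SKIPPED"  # no longer lumped into UNRESOLVED

tests = [Test("a", "PASS"), Test("b", "UNRESOLVED"), Test("c")]
mark_unexecuted(tests)
print([t.result for t in tests])
```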

yln updated this revision to Diff 256643. Apr 10 2020, 1:19 PM

Extend test for --max-failures option.

yln edited the summary of this revision. Apr 10 2020, 1:20 PM
jdenny marked an inline comment as done. Apr 10 2020, 1:57 PM
In D77819#1974786, @yln wrote:

Can skipped tests be covered in lit's test suite? I'm not sure how easy it is to do it in a portable manner, but platform-specific coverage is better than nothing.

Added a test for the overall lit timeout (--max-time), which previously didn't have one; this lets us observe the "skipped test" logic. I am not sure how to add one for the user interrupt [Ctrl+C].

I think it would be messy: run the lit test suite in the background redirecting to a file, record the PID, poll that file in the foreground until expected passes appear, and then kill -SIGINT. Very slow tests should be skipped. It works from my bash shell, but I'm not sure about portability.

For now, your --max-failures tests are a race-free, portable way to exercise at least one code path for skipped tests, so I would not object if you felt that was sufficient.
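The recipe above can be sketched as a runnable Python script (my own illustration, not anything from the patch; it uses a stand-in child process instead of the real lit suite, and is POSIX-only like the original bash recipe):

```python
# Sketch of the recipe: start the "suite" in the background redirecting
# stdout to a file, poll that file in the foreground until an expected
# pass appears, then deliver SIGINT (the simulated Ctrl+C).
import os
import signal
import subprocess
import sys
import tempfile
import time

# Stand-in "test suite": prints one pass line every 50 ms.
child_code = (
    "import time\n"
    "for i in range(100):\n"
    "    print('PASS: test%d' % i, flush=True)\n"
    "    time.sleep(0.05)\n"
)

log_path = tempfile.mkstemp()[1]
with open(log_path, "w") as log:
    proc = subprocess.Popen(
        [sys.executable, "-c", child_code],
        stdout=log, stderr=subprocess.DEVNULL)
    # Foreground: poll the log until the first expected pass shows up.
    while "PASS: test0" not in open(log_path).read():
        time.sleep(0.01)
    proc.send_signal(signal.SIGINT)  # the kill -SIGINT step
    proc.wait()

lines = open(log_path).read().splitlines()
os.unlink(log_path)
print("interrupted after %d result lines" % len(lines))
```

The number of lines captured before the interrupt is inherently racy, which is exactly the portability concern raised above.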

llvm/utils/lit/lit/main.py
283–284

Got it. Thanks.

284

Thanks for explaining.

llvm/utils/lit/tests/max-time.py
3

On heavily loaded test systems, is there a chance of a race here?

yln marked an inline comment as done. Apr 10 2020, 2:41 PM

I think it would be messy: run the lit test suite in the background redirecting to a file, record the PID, poll that file in the foreground until expected passes appear, and then kill -SIGINT. Very slow tests should be skipped. It works from my bash shell, but I'm not sure about portability.

For now, your --max-failures tests are a race-free, portable way to exercise at least one code path for skipped tests, so I would not object if you felt that was sufficient.

Yes, I would like to spend my time on other improvements ;)

llvm/utils/lit/tests/max-time.py
3

Yes, the race is between --max-time=1 and the sleep 5 in slow.txt. I will increase 5 to 60 when landing. That should work for all practical purposes.

jdenny accepted this revision. Apr 10 2020, 3:04 PM

LGTM. Thanks!

This revision is now accepted and ready to land. Apr 10 2020, 3:04 PM
jdenny added inline comments. Apr 10 2020, 3:13 PM
llvm/utils/lit/tests/max-time.py
3

Isn't there theoretically a race between --max-time=1 and fast.txt as well? I'm not saying you need to change it, but I at least want to be sure I'm not misunderstanding something.

yln marked an inline comment as done. Apr 10 2020, 3:22 PM
yln added inline comments.
llvm/utils/lit/tests/max-time.py
3

Yes, you are right! I forgot to consider this. I would like to keep this at 1 for now (increasing it would increase the actual running time of the test) and only bump it if we discover that it is a problem in practice.

This revision was automatically updated to reflect the committed changes.
In D77819#1975150, @yln wrote:

I think it would be messy: run the lit test suite in the background redirecting to a file, record the PID, poll that file in the foreground until expected passes appear, and then kill -SIGINT. Very slow tests should be skipped. It works from my bash shell, but I'm not sure about portability.

For now, your --max-failures tests are a race-free, portable way to exercise at least one code path for skipped tests, so I would not object if you felt that was sufficient.

Yes, I would like to spend my time on other improvements ;)

So, if your --max-time test proves race-free in practice, my recipe above can be simplified a bit: just run lit in the background and record the PID, sleep long enough for expected passes, and then kill -SIGINT. Well, maybe one day. :-)

llvm/utils/lit/tests/max-time.py
3

I think that's fine.

yln marked an inline comment as done. Apr 10 2020, 4:20 PM
yln added inline comments.
llvm/utils/lit/tests/max-time.py
3

At least one bot actually hit this:

http://lab.llvm.org:8011/builders/clang-cmake-armv7-quick/builds/15079/steps/ninja%20check%201/logs/FAIL%3A%20lit%3A%3A%20max-time.py

I will adapt the timeouts:
fast.txt: true (0)
--max-time: 5 <-- this is how long the test runs (bot reported test time: 1.58s, so it should be good enough)
slow.txt: sleep 60

One more slow test in the lit test suite :(
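The adjusted timeouts described above would make the test files look roughly like this (a sketch only; the RUN/CHECK lines and the `%{lit}`/`%{inputs}` substitutions are illustrative, not the exact committed contents):

```text
max-time.py (sketch):
  RUN: %{lit} --max-time=5 %{inputs}/max-time 2>&1 | FileCheck %s
  CHECK: Skipped{{ *}}: 1

Inputs/max-time/fast.txt finishes immediately:
  RUN: true

Inputs/max-time/slow.txt outlives the overall timeout:
  RUN: sleep 60
```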

yln marked an inline comment as done. Apr 10 2020, 4:26 PM
yln added inline comments.