This is an archive of the discontinued LLVM Phabricator instance.

Add a decorator option to skip tests based on a default setting
ClosedPublic

Authored by aprantl on Mar 9 2020, 10:47 AM.

Details

Summary

This patch allows skipping a test based on a default setting, which is useful for running the testsuite in different "modes" controlled by such a setting. This is a feature I need for the Swift testsuite, but I think it's generally useful.

See also https://reviews.llvm.org/D72662 for context.
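
For illustration, a minimal sketch of how such a decorator option might be used. The setting name is a placeholder, and the setting= keyword on skipIf is an assumption inferred from this summary, not a confirmed API:

    from lldbsuite.test.decorators import skipIf
    from lldbsuite.test.lldbtest import TestBase

    class SettingSkippingExample(TestBase):
        # Hypothetical usage: skip this test whenever the named setting
        # currently has the given value.
        @skipIf(setting=("example.experimental-feature", "true"))
        def test_unsupported_with_feature(self):
            # Only reached when the setting does not have the value above.
            self.assertTrue(True)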

Diff Detail

Event Timeline

aprantl created this revision. Mar 9 2020, 10:47 AM
aprantl edited the summary of this revision. (Show Details)
JDevlieghere added inline comments. Mar 9 2020, 3:14 PM
lldb/test/API/sanity/TestSettingSkipping.py:29

This won't trip if the test doesn't run. If you make the assert trip and XFAIL the test, it should catch it.

> If a test requires a specific value of a setting, would it make more sense to just (re)set its value at the start of a test?

This is not meant for requiring a setting, but to run the testsuite once in normal mode, and then again with --setting "experimental-feature-that-doesn't-work-for-all-tests-yet=true", and have a way to mark tests as not yet supported with that feature.
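
To make that workflow concrete, here is a sketch of the two modes (the setting name is hypothetical, and the setting= keyword on skipIf is the same assumption as in the sketch after the summary above):

    # Hypothetical invocations of the test driver:
    #   ./dotest.py <tests>                                        (normal mode)
    #   ./dotest.py --setting "experimental-feature=true" <tests>  (experimental mode)
    #
    # A test that the experimental feature cannot handle yet is marked so
    # that it only runs in normal mode:
    @skipIf(setting=("experimental-feature", "true"))
    def test_not_yet_supported(self):
        ...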

aprantl marked an inline comment as done. Mar 10 2020, 8:28 AM
aprantl added inline comments.
lldb/test/API/sanity/TestSettingSkipping.py:29

I'm not sure how that would work. Can you post an example? I want to test the skipIf decorator — how would I XFAIL it?

LGTM, but let's give Pavel another chance to take a look.

lldb/test/API/sanity/TestSettingSkipping.py:29

Yeah, I think it has the same problem: skipping takes priority over XFAIL, so it wouldn't matter.
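
A small self-contained sketch of that priority using plain Python unittest, which the lldb test suite builds on; the skip is honored before the expected-failure machinery ever sees the test:

    import unittest

    class SkipBeatsXfail(unittest.TestCase):
        @unittest.skip("skip is evaluated first")
        @unittest.expectedFailure
        def test_example(self):
            # Never executed: the skip is honored before the body runs,
            # so no expected failure is ever recorded.
            self.assertTrue(False)

Running this reports one skipped test rather than one expected failure, which is why XFAIL-ing a skipped test would not exercise the decorator.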

labath accepted this revision. Mar 10 2020, 9:10 AM

> If a test requires a specific value of a setting, would it make more sense to just (re)set its value at the start of a test?

> This is not meant for requiring a setting, but to run the testsuite once in normal mode, and then again with --setting "experimental-feature-that-doesn't-work-for-all-tests-yet=true", and have a way to mark tests as not yet supported with that feature.

Right, that makes sense, though it makes me cry a bit every time we add a new option to the test decorators...

lldb/test/API/sanity/TestSettingSkipping.py:29

At one point, Todd added some tests for the test harness. There are still some remnants of them in packages/Python/lldbsuite/test/test_runner/, but they're hopelessly out of date by now (and even when they were new, they only seemed to work for Todd).

If we wanted to write self-tests, something like that would seem appropriate. I'm not sure these tests are very useful as they stand now, because they could pass or fail depending on what kind of --setting values I pass to dotest.

This revision is now accepted and ready to land. Mar 10 2020, 9:10 AM

> Right, that makes sense, though it makes me cry a bit every time we add a new option to the test decorators...

We call it the Christmas tree approach to testing!

This revision was automatically updated to reflect the committed changes.
Herald added a project: Restricted Project. Mar 11 2020, 10:10 AM