LLDBTestResult.hardMarkAsSkipped marked the whole class as skipped when the first test in the
class failed the category check. This meant that subsequent tests in the same class did not run
even if they passed the category filter. Fix that.
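For context, here is a minimal, self-contained sketch of the behaviour being fixed. It relies only on Python's standard unittest skip attributes; the helper and the Demo class below are illustrative assumptions, not the actual LLDBTestResult code from the patch.

import unittest

SKIP_REASON = "test case does not fall in any category of interest for this run"


def hard_mark_as_skipped(test, skip_whole_class=False):
    """Mark a single test method as skipped via unittest's skip attributes."""
    # Setting the attributes on the underlying function makes unittest's
    # TestCase.run() report this one method as skipped.
    method = getattr(test, test._testMethodName).__func__
    method.__unittest_skip__ = True
    method.__unittest_skip_why__ = SKIP_REASON
    if skip_whole_class:
        # Problematic variant: __unittest_skip__ on the class makes unittest
        # skip *every* remaining test in the class, including tests that
        # would have passed the category filter.
        test.__class__.__unittest_skip__ = True
        test.__class__.__unittest_skip_why__ = SKIP_REASON


class Demo(unittest.TestCase):
    def test_filtered_out(self):
        pass

    def test_should_still_run(self):
        pass


if __name__ == "__main__":
    # Simulating the old behaviour: skipping the first test at class level
    # also suppresses test_should_still_run.
    hard_mark_as_skipped(Demo("test_filtered_out"), skip_whole_class=True)
    unittest.main(verbosity=2)

Running this reports both tests as skipped; dropping the class-level assignment (skip_whole_class=False) skips only the filtered test, which is the behaviour the patch restores.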
Diff Detail
- Repository: rL LLVM
Event Timeline
Btw, I tried to make a unit test for this, but I could not get your meta test runner to work -- the existing test was failing for me (it was not getting any events apart from the global "test run started"/"test run finished" events). Do you have any idea what could be wrong?
Hmm, no initial thoughts on that yet. What steps specifically did you take? (I'll see if I can repro it over here; it may have gone stale, although I used it not too long ago.)
The change here looks good to me.
I used the command you mentioned in the original patch:
$ python -m unittest discover -s test/src -p 'Test*.py'
FF
======================================================================
FAIL: test_with_function_filter (TestCatchInvalidDecorator.TestCatchInvalidDecorator)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/google/home/labath/ll/lldb/packages/Python/lldbsuite/test_event/test/src/TestCatchInvalidDecorator.py", line 56, in test_with_function_filter
    "At least one job or test error result should have been returned")
AssertionError: At least one job or test error result should have been returned
======================================================================
FAIL: test_with_whole_file (TestCatchInvalidDecorator.TestCatchInvalidDecorator)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/google/home/labath/ll/lldb/packages/Python/lldbsuite/test_event/test/src/TestCatchInvalidDecorator.py", line 37, in test_with_whole_file
    "At least one job or test error result should have been returned")
AssertionError: At least one job or test error result should have been returned
----------------------------------------------------------------------
Ran 2 tests in 0.149s

FAILED (failures=2)
I don't remember seeing any changes here so it's quite possible it never worked in the first place, but I have no idea what could be different about my setup.
In D22213#481728, @labath wrote:
I don't remember seeing any changes here so it's quite possible it never worked in the first place, but I have no idea what could be different about my setup.
I'll have to have a look at it. Thanks for pointing it out!