
Roll dosep.py parallel test runner into dotest.py command line
Audited: rL246794

Description

Roll dosep.py parallel test runner into dotest.py command line

See the following for details:
http://reviews.llvm.org/D12587

Details

Auditors
dawn
Committed
tfiala, Sep 3 2015, 11:58 AM
Parents
rL246793: Make ObjectFiles private. NFC.

Event Timeline

dawn raised a concern with this commit. (Edited) Sep 4 2015, 10:44 AM
dawn added a subscriber: dawn.

I'm very unhappy with this change. Before I could count up all the test failures using:

./dosep.py -s --options "-v --executable $INSTALLDIR/bin/lldb" 2>&1 | tee $INSTALLDIR/lldb_test_out.log || true

lldb_failures=`grep -E "^RESULT:" lldb_test_out.log | grep failures | awk '{count+=$5} END {print count}'` || true
lldb_errors=`grep -E "^RESULT:" lldb_test_out.log | grep errors | awk '{count+=$7} END {print count}'` || true
lldb_passes=`grep -E "^RESULT: " lldb_test_out.log | sed 's/(//' | awk '{count+=$3} END {print count}'` || true
lldb_total=`grep -E "^Ran [0-9]+ tests? in" lldb_test_out.log | awk '{count+=$2} END {print count}'`

And get the correct totals:

lldb_failures=2
lldb_errors=0
lldb_passes=1143
lldb_total=1376
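For reference, the four separate grep/awk passes above could be collapsed into a single pass over the log. This is a minimal sketch, assuming the old dosep.py output format shown in this thread ("RESULT: ..." summary lines plus "Ran N tests in ..." lines); it is not part of the test suite itself:

```python
import re

# One regex per line format quoted in this thread:
#   RESULT: PASSED (6 passes, 0 failures, 0 errors, ...)
#   Ran 6 tests in 6.844s
RESULT_RE = re.compile(r"^RESULT: \w+ \((\d+) passes, (\d+) failures, (\d+) errors")
RAN_RE = re.compile(r"^Ran (\d+) tests? in")

def count_results(lines):
    """Sum passes, failures, errors, and total tests over a dosep.py log."""
    passes = failures = errors = total = 0
    for line in lines:
        m = RESULT_RE.match(line)
        if m:
            passes += int(m.group(1))
            failures += int(m.group(2))
            errors += int(m.group(3))
            continue
        m = RAN_RE.match(line)
        if m:
            total += int(m.group(1))
    return passes, failures, errors, total
```

Used as `count_results(open("lldb_test_out.log"))`, it replaces the four grep/awk pipelines with one pass, though it is just as vulnerable to output-format changes as the originals.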

Now we seem to be back to the broken behavior of dosep.py, where there's no way to compute correct totals because it forgets to count test cases from roughly 25% of the test suites. Only now I have no workaround, because even dotest.py can't report the totals correctly:

./dotest.py -v --executable $INSTALLDIR/bin/lldb 2>&1 | tee $INSTALLDIR/lldb_test_out.log
[...]
Ran 395 test suites (0 failed) (0.000000%)
Ran 826 test cases (0 failed) (0.000000%)

Is there any way we can see the old "RESULT:" output again, please? If not, please revert this until the counting issue is fixed so I can use dosep.py as before.

Thank you,
-Dawn

dawn added a comment. Sep 4 2015, 11:04 AM

I really liked that dosep.py -s and dotest.py would report the run of each test case:

Collected 6 tests

1: test_double_type_from_block_with_dsym (TestFloatTypes.FloatTypesTestCase)
Test that double-type variables are displayed correctly from a block. ... ok
2: test_double_type_with_dsym (TestFloatTypes.FloatTypesTestCase)
Test that double-type variables are displayed correctly. ... ok
3: test_double_type_with_dwarf (TestFloatTypes.FloatTypesTestCase)
Test that double-type variables are displayed correctly. ... ok
4: test_float_type_from_block_with_dsym (TestFloatTypes.FloatTypesTestCase)
Test that float-type variables are displayed correctly from a block. ... ok
5: test_float_type_with_dsym (TestFloatTypes.FloatTypesTestCase)
Test that float-type variables are displayed correctly. ... ok
6: test_float_type_with_dwarf (TestFloatTypes.FloatTypesTestCase)
Test that float-type variables are displayed correctly. ... ok

----------------------------------------------------------------------
Ran 6 tests in 6.844s

RESULT: PASSED (6 passes, 0 failures, 0 errors, 0 skipped, 0 expected failures, 0 unexpected successes)

[TestFloatTypes.py PASSED]

I would like to have that back please.

To be honest I think grepping the output is the wrong way to go about
counting the tests.

If there's a regression in that you no longer have access to a piece of
information that you need, then we should fix it by printing the value you
need. Log scraping is kind of a horrible thing to do, because it means you
are going to be broken by changes to the format of the output, and it will
hold back progress on the test suite making real improvements in
maintainability and performance because all the different people doing
their own custom log scraping will get upset if anything breaks.

Anyone who has worked on the test suite can vouch for the fact that it is
in need of some major love, and if that means a transition period where
things are a little wonky, and we lose some things only to get them back
later, I'm ok with that.

Dawn, the output supported by the command shouldn't have been changed. I'll have a peek at what may have caused that to happen. All this change was trying to do was stop the need to call dosep and roll that behavior into dotest.

Zachary, I'm in the process of adding xUnit (and other pluggable) output support, so we can flip a flag and have traditional build/test bots generate test result reports.
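Once xUnit-style reports are available, counting becomes a matter of reading structured data rather than scraping logs. A minimal sketch, assuming a conventional JUnit/xUnit XML layout (the file name and exact schema here are assumptions, not the format this change produces):

```python
import xml.etree.ElementTree as ET

def summarize_xunit(path):
    """Sum test/failure/error counts from a JUnit/xUnit-style XML report.

    xUnit-style reports conventionally carry counts as attributes on
    <testsuite> elements, optionally wrapped in a <testsuites> root.
    """
    root = ET.parse(path).getroot()
    tests = failures = errors = 0
    # iter() visits the root itself too, so this handles both a bare
    # <testsuite> root and a <testsuites> wrapper.
    for suite in root.iter("testsuite"):
        tests += int(suite.get("tests", 0))
        failures += int(suite.get("failures", 0))
        errors += int(suite.get("errors", 0))
    return tests, failures, errors
```

Because the counts come from machine-readable attributes rather than human-oriented log text, this kind of consumer keeps working even when the console output format changes.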

dawn accepted this commit. Sep 4 2015, 1:37 PM

Apparently now you must use:

./dotest.py --output-on-success -v --executable $INSTALLDIR/bin/lldb

to see the same detailed output as you used to see with:

./dosep.py -s --options "-v --executable $INSTALLDIR/bin/lldb"

or:

./dotest.py -v --executable $INSTALLDIR/bin/lldb

So no info is lost; you just have to add the extra option --output-on-success.
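Until structured output lands, the new summary lines can also be scraped directly. A small sketch, assuming the "Ran N test suites/cases (M failed)" format quoted earlier in this thread:

```python
import re

# Matches the new dotest.py summary lines, e.g.:
#   Ran 395 test suites (0 failed) (0.000000%)
#   Ran 826 test cases (0 failed) (0.000000%)
SUMMARY_RE = re.compile(r"^Ran (\d+) test (suites|cases) \((\d+) failed\)")

def parse_summary(lines):
    """Return {'suites': (ran, failed), 'cases': (ran, failed)} from a log."""
    out = {}
    for line in lines:
        m = SUMMARY_RE.match(line)
        if m:
            out[m.group(2)] = (int(m.group(1)), int(m.group(3)))
    return out
```

As noted above, any such scraper breaks if the output format changes, so it should be treated as a stopgap until a machine-readable report format is available.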

tfiala added a comment. Sep 4 2015, 2:19 PM

I'm glad that worked for you, Dawn!