This is an archive of the discontinued LLVM Phabricator instance.

[test] Report error when inferior test processes exit with a non-zero code
Abandoned (Public)

Authored by labath on Jul 15 2016, 3:09 AM.

Details

Summary

We ran into this problem when a test errored out so early (because it could not connect to the
remote device) that the code in D20193 did not catch the error. This resulted in the test
suite reporting success with 0 tests being run.

This patch makes sure that any non-zero exit code from the inferior process gets reported as an
error. Basically, I expand the concept of "exceptional exits", which was previously used only for
signals, to cover these cases as well.
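
For illustration only, here is a minimal sketch (in Python, like the lldb test runner) of the idea of treating any non-zero return code as an exceptional exit rather than only signal-induced deaths. The function name run_inferior and the reporting format are hypothetical and are not taken from this patch or from the actual lldb test-runner code.

import subprocess

def run_inferior(command):
    """Run one inferior test process and classify its exit.

    Hypothetical sketch: any non-zero return code is reported as an
    exceptional exit, whether it came from a signal (negative return
    code on POSIX) or from the process exiting with an error status.
    """
    proc = subprocess.Popen(
        command,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        universal_newlines=True,
    )
    stdout, stderr = proc.communicate()

    if proc.returncode == 0:
        return True

    if proc.returncode < 0:
        # On POSIX, a negative return code means the process died to a signal.
        description = "terminated by signal {}".format(-proc.returncode)
    else:
        description = "exited with code {}".format(proc.returncode)

    # Print both streams: when the runner fails very early (e.g. it cannot
    # connect to the remote device), the traceback may only be on stdout.
    print("Exceptional exit: {}".format(description))
    print(stdout)
    print(stderr)
    return False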

Diff Detail

Event Timeline

labath updated this revision to Diff 64115. Jul 15 2016, 3:09 AM
labath retitled this revision to [test] Report error when inferior test processes exit with a non-zero code.
labath updated this object.
labath added reviewers: tfiala, zturner.
labath added a subscriber: lldb-commits.

I think this also makes the code in D20193 obsolete. If this goes in, I can create a follow-up to remove that.

I don't think the original version tried to produce a nice error message. It got flagged as an error, and that was it. Now it will get flagged as an exceptional exit.

In any case, we will print out the stderr/stdout (I changed this from stderr-only, because in my case the only useful information about the error was in stdout), which should contain the exception that caused the non-zero exit, its backtrace, and everything else.

tfiala accepted this revision. Jul 15 2016, 1:07 PM
tfiala edited edge metadata.

Looks fine here. Did either of you try this with an exceptional exit to make sure that still shows up right?

This revision is now accepted and ready to land. Jul 15 2016, 1:07 PM
This revision was automatically updated to reflect the committed changes.

I have now :)

labath abandoned this revision. Jul 21 2016, 7:52 AM

That's weird - it shows up as abandoned, but you did check it in, right, Pavel?

I also reverted it, as it was causing some issues (if a specific test case in a test file failed, it still marked the whole file as failed). I'll need to revisit this later...

Ah yes, okay, that matches what I heard earlier. Thanks!