
[lldb] Set return status to failed when adding a command error
ClosedPublic

Authored by DavidSpickett on Fri, Jun 4, 9:00 AM.

Details

Summary

There is a common pattern:
result.AppendError(...);
result.SetStatus(eReturnStatusFailed);

I found that some commands don't actually "fail": they only
print "error: ..." because the second line was missed.

This can cause you to miss a failed command when you're
using the Python interface during testing
(and it can produce some confusing script results).

I did not find any place where you would want to add
an error without setting the return status, so just
set eReturnStatusFailed whenever you add an error to
a command result.

This change does not remove any of the now redundant
SetStatus calls. This should allow us to see if there are any
tests that have commands unexpectedly fail with this change.
(the test suite passes for me but I don't have access to all
the systems we cover so there could be some corner cases)

Some tests that failed on x86 and AArch64 have been modified
to work with the new behaviour.

Diff Detail

Event Timeline

DavidSpickett created this revision.Fri, Jun 4, 9:00 AM
DavidSpickett requested review of this revision.Fri, Jun 4, 9:00 AM
Herald added a project: Restricted Project. Fri, Jun 4, 9:00 AM

There are likely other tests that aren't enabled for x86/AArch64 or sets of registers that I don't have on my machines. So if this change is welcome then the plan would be to land this as is and monitor the bots for a week or so, revert and reland with test fixups as needed.

Then I can remove the now redundant SetStatus calls, which I've done locally; that takes us from ~580 calls to ~50. Again, in batches to account for tests I'm not running.

I can run this for you on macOS and Linux x86 which I think should cover every test.

FWIW, there is even more redundancy, from what I remember: IIRC we return false within bool DoExecute everywhere. I can't recall why I haven't removed this yet (besides the fact that it's a lot of work).

Oh I see, you're not just concerned about having every command in the test suite covered before landing.

> I can run this for you on macOS and Linux x86 which I think should cover every test.

That would be great, thanks!

This seems like a good change to me. And even if you wanted to do something cheesy like put an error into the result on spec, then later decide it was successful after all, you can always reset the result's status to the appropriate success status. This just makes the common practice less error prone. We should also remove all the redundant settings; leaving them in would be confusing. So I'd just make the change then fix whatever the bots bring up.

As to the bool from DoExecute, that also seems like a nice cleanup, but since you have to return something or the code doesn't compile, that's one that is harder to get wrong. Plus it would be tedious to fix.

This revision was not accepted when it landed; it landed in state Needs Review.Tue, Jun 8, 1:41 AM
This revision was landed with ongoing or failed builds.
This revision was automatically updated to reflect the committed changes.

I've gone ahead and landed it, will revert on failures.

Got patches to remove the redundant calls locally, so I'll put those up for review once this has had time to go through.
(those changes will be fairly mechanical but it's worth someone scanning them for silly mistakes)

SGTM. My macOS node is suffering from some unrelated failures so I couldn't let this run over all tests yet, but if the next GreenDragon iteration is green then this should be fine.

One failure on macOS, "lldb-api :: commands/register/register/register_command/TestRegisters.py", http://green.lab.llvm.org/green/blue/organizations/jenkins/lldb-cmake/detail/lldb-cmake/32693/pipeline/, hence the revert.

Should be easy enough to fix the test logic.

Relanded in https://reviews.llvm.org/rGa2363c0cf9b6 / https://reviews.llvm.org/rG0f94d68a2e15 (because I have a case of the Monday Mornings). If this does the trick, there are 2 or 3 other tests that use this tactic to check for AVX, so I can update those too.