This is an archive of the discontinued LLVM Phabricator instance.

[lldb/testsuite] Remove "healthy process" test from TestProcessCrashInfo.py
AbandonedPublic

Authored by friss on Mar 18 2020, 9:18 PM.

Details

Reviewers
mib
jingham
Summary

The crash info annotations are not necessarily indicative of a
crash. Some APIs will set them before doing something risky and
clean them up after, while some others might just leave a breadcrumb
there for the whole duration of the process.

The latter happens for dyld on iOS. Depending on how the process is
launched, it will leave some information in its crash_info annotations
about its configuration.

This makes the test fail because it expects to find an empty crash
info struct when the process is running happily. This patch just
deletes this test, but I'm happy to consider other alternatives.

Diff Detail

Event Timeline

friss created this revision. Mar 18 2020, 9:18 PM
Herald added a project: Restricted Project. Mar 18 2020, 9:18 PM
labath added a subscriber: labath. Mar 19 2020, 1:01 AM

Could the inferior manually wipe the crash annotation?

mib added inline comments. Mar 19 2020, 2:59 AM
lldb/test/API/functionalities/process_crash_info/TestProcessCrashInfo.py
72

What about adding a @skipIfDarwinEmbedded decorator for this test? IIUC, it would skip the test on Darwin armv7/arm64 targets (iOS) but still run on other platforms.

friss marked an inline comment as done. Mar 19 2020, 7:28 AM

Could the inferior manually wipe the crash annotation?

This is not something you're supposed to wipe. It's additional information that is going to be dumped in the crashlog if the process crashes. If dyld runs in some kind of degraded mode, it will put this annotation there at program startup time and never remove it.

lldb/test/API/functionalities/process_crash_info/TestProcessCrashInfo.py
72

I could do this, and it would fix the issue I'm seeing. I chose to remove the test completely because there's no guarantee that what it exercises is true. But I guess I can just add @skipIfDarwinEmbedded for now and we'll see if the test causes any more trouble in other scenarios.
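For reference, the decorator mib suggests comes from lldb's test suite (lldbsuite.test.decorators). A minimal self-contained sketch of the same conditional-skip mechanism, using plain unittest — the platform check and method name here are illustrative stand-ins, not the actual lldb test code, which inspects the target triple rather than the host:

```python
import platform
import unittest


def is_darwin_embedded():
    # Hypothetical stand-in: lldb's real @skipIfDarwinEmbedded decorator
    # inspects the debug target's platform/triple, not the host interpreter.
    return platform.system() == "Darwin" and platform.machine().startswith("arm")


class TestProcessCrashInfo(unittest.TestCase):
    # Skip on embedded Darwin, where dyld may leave crash_info
    # breadcrumbs in a perfectly healthy process at startup.
    @unittest.skipIf(is_darwin_embedded(),
                     "dyld populates crash_info on embedded Darwin")
    def test_no_crash_info_in_healthy_process(self):
        # Placeholder for the annotations lldb would read from the inferior;
        # the assertion only holds on platforms where nothing sets them.
        crash_info = {}
        self.assertEqual(crash_info, {})
```

The skip condition is evaluated once at decoration time, so the test is reported as skipped (with the given reason) on matching platforms instead of failing.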

friss abandoned this revision. Mar 19 2020, 8:27 AM

I skipped the test for embedded in 8758d02074be7b80b804fad19e8b7de6ebd43c31. I'll abandon this for now.

This test just seems wrong to me. We can pretend that processes that haven't crashed don't have crash_info, and manually suppress the output in lldb if the stop state isn't a crash. But it seems useful to ask "what are the crash_info bits in flight right now" even if you haven't crashed. So that doesn't seem right to me. Or you have to acknowledge that you have no way of knowing whether there is crash info data hanging around, in which case any test that depends on there being none is an invalid test. I'd delete this whole test, since it seems to me like it will only ever succeed by accident.

Could the inferior manually wipe the crash annotation?

This is not something you're supposed to wipe. It's additional information that is going to be dumped in the crashlog if the process crashes. If dyld runs in some kind of degraded mode, it will put this annotation there at program startup time and never remove it.

I can see how a "normal" application is not supposed to wipe that, but I think our test apps have some license to do unusual things... I don't know how hard would it be to implement the actual wiping..

Another option would be to run the test over a core file. That would have an additional advantage of being able to run on non-darwin platforms.

This test just seems wrong to me. We can pretend that processes that haven't crashed don't have crash_info, and manually suppress the output in lldb if the stop state isn't a crash. But it seems useful to ask "what are the crash_info bits in flight right now" even if you haven't crashed. So that doesn't seem right to me. Or you have to acknowledge that you have no way of knowing whether there is crash info data hanging around, in which case any test that depends on there being none is an invalid test. I'd delete this whole test, since it seems to me like it will only ever succeed by accident.

If it cannot reliably produce a process with no crash annotations, then the test is bad. But that doesn't mean we should not test the code path which we would take if the process happened to not have any crash annotations.

(This looks like a very good example of the limitations of the "run a process and see what happens" kinds of tests, and why I think we should avoid them wherever possible.)