This is an archive of the discontinued LLVM Phabricator instance.

[Support] Introduce a new utility for testing child process execution
Accepted · Public

Authored by zturner on Jun 7 2018, 3:31 PM.

Details

Summary

I was auditing our process launching API and looking at how well it was tested, and I noticed that none of the functionality related to I/O redirection was tested at all. I also noticed that all of the testing was done via unit tests, which are extremely hard to understand and follow.

The approach here is more LLVM-ish: use a tool to dump some output and then FileCheck it. Since I was specifically trying to test redirection here, the only support I've added so far is for specifying command line arguments that describe how to redirect stdout and stderr, and I wrote some tests which I believe are significantly easier to understand than the existing unit tests.

There are plenty of other interesting opportunities for exploration with a tool like this. For example, LLDB (or any kind of tracing utility) could use it as a kind of "strace" equivalent that logs all of its events, and we could FileCheck that output.

In a follow-up patch, I plan to use this mechanism to test that launching a process with a specific environment works. I am going to do this by introducing meta-variable substitutions into the command language, so that you can write something like "write stdout $ENV{PATH}".
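
To make that concrete, here is a rough sketch (not the actual code in this patch; every name and detail below is illustrative) of the kind of driver described above: it interprets simple commands such as "write stdout <text>" so that a lit test can run it under various redirections and FileCheck the result.

    // Illustrative sketch only -- not the code in this patch.
    #include "llvm/ADT/StringRef.h"
    #include "llvm/Support/CommandLine.h"
    #include "llvm/Support/raw_ostream.h"
    #include <string>
    using namespace llvm;

    // Each positional argument is a command like "write stdout hello".
    static cl::list<std::string> Commands(cl::Positional,
                                          cl::desc("<commands>..."));

    int main(int argc, char **argv) {
      cl::ParseCommandLineOptions(argc, argv, "child process test driver\n");
      for (const std::string &C : Commands) {
        StringRef Cmd(C);
        std::pair<StringRef, StringRef> VerbAndRest = Cmd.split(' ');
        if (VerbAndRest.first != "write")
          continue;
        std::pair<StringRef, StringRef> StreamAndText =
            VerbAndRest.second.split(' ');
        // Write to the requested stream and flush immediately so that
        // redirected output is visible to the parent right away.
        raw_ostream &OS =
            (StreamAndText.first == "stderr") ? errs() : outs();
        OS << StreamAndText.second << '\n';
        OS.flush();
      }
      return 0;
    }

A lit test would then RUN such a driver with the redirection arguments under test and FileCheck the output, so each test reads like a small script rather than a pile of C++ plumbing.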

Diff Detail

Event Timeline

zturner created this revision. Jun 7 2018, 3:31 PM
zturner added a reviewer: rnk. Jun 7 2018, 5:08 PM

+rnk.

Also, one thing that's a little strange here is that this would now be the one and only Support test that is *not* a unit test. Is that weird? I feel like this is a much better pattern for testing process launching though, so I'd feel pretty bad if I had to re-write all this as unit tests.

rnk accepted this revision. Jun 7 2018, 5:21 PM

Looks good, but @dblaikie has helpful thoughts about testing and he is responsive, so I'd let him take a look.

This revision is now accepted and ready to land. Jun 7 2018, 5:21 PM
davide added a comment. Jun 8 2018, 3:04 PM

I like the idea. Some comments inline.

llvm/utils/trace/Trace.cpp
19–20

Is io.h really needed?

92–96

Could probably factor this out into a report_error function as the other tools do (or just call report_fatal_error if you can); a rough sketch of such a helper is included after these inline comments.

148–149

return run() ?

152

SmallVector ?

167–168

As we're basically flushing every time we write to outs(), should we provide a wrapper for this?
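
(For reference, the report_error suggestion above would look roughly like the following in most llvm tools; the helper name and the message prefix are illustrative, not taken from the patch.)

    #include "llvm/ADT/Twine.h"
    #include "llvm/Support/raw_ostream.h"
    #include <cstdlib>

    // Print a tool-prefixed error and exit, like other llvm tools do.
    // report_fatal_error is the even simpler alternative mentioned above.
    [[noreturn]] static void reportError(const llvm::Twine &Message) {
      llvm::errs() << "trace: error: " << Message << "\n";
      std::exit(1);
    }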

Hey Zach,

Thanks for looking into/improving test coverage!

My knee-jerk/gut reaction to the idea of adding a tool binary for testing a fairly low-level API like this is that it doesn't /feel/ quite right, and I've looked over the patch a bit but not in quite enough detail to provide really actionable/specific feedback yet.

My broad/high-level question would be: could the existing unit test approach be made more/sufficiently readable by implementing helper functions (essentially the kind of code that's in the new tool you're introducing - providing that as utility functions with parameters, rather than a program with command line parameters)? It seems like that might be a simpler alternative, but I'm certainly not 100% sure of it.

  • Dave

For simple cases, maybe. But I expect to gradually test more and more complicated scenarios, and it's going to get quite awkward to cover those kinds of things with unit tests.

Right now, the way the unit tests verify that spawning a process with a specific environment works is this: the test checks for the presence of a particular environment variable and returns a magic value if it finds it. Otherwise, it spawns the top-level gtest executable again, filtered to this specific test, with that variable added to the environment. The re-spawned copy should then detect the environment variable and return the magic value.
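
Roughly, that pattern looks like the following (sketched from memory rather than copied from ProgramTest.cpp; the test name, the magic value, and the exact ExecuteAndWait signature are illustrative and may not match the tree):

    #include "gtest/gtest.h"
    #include "llvm/ADT/ArrayRef.h"
    #include "llvm/ADT/StringRef.h"
    #include "llvm/Support/FileSystem.h"
    #include "llvm/Support/Program.h"
    #include <cstdlib>
    #include <string>
    using namespace llvm;

    TEST(ProgramEnvTest, SpawnWithEnvironment) {
      // Child half: if the magic variable is set, we are the re-executed
      // copy, so return the magic value instead of running the test body.
      if (std::getenv("LLVM_PROGRAM_TEST_ENV_CHILD"))
        std::exit(42);

      // Parent half: re-run this gtest binary, filtered down to this one
      // test, with the magic variable added to the environment. (The real
      // tests thread the actual argv[0] through; a placeholder is used here.)
      static int Anchor;
      std::string Exe = sys::fs::getMainExecutable("unittest", &Anchor);
      StringRef Argv[] = {
          Exe, "--gtest_filter=ProgramEnvTest.SpawnWithEnvironment"};
      StringRef Envp[] = {"LLVM_PROGRAM_TEST_ENV_CHILD=1"};

      std::string Error;
      int RC = sys::ExecuteAndWait(Exe, Argv, makeArrayRef(Envp),
                                   /*Redirects=*/{}, /*SecondsToWait=*/10,
                                   /*MemoryLimit=*/0, &Error);
      // The re-executed copy should have seen the variable and exited 42.
      EXPECT_EQ(42, RC) << Error;
    }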

That's about on the borderline of how gross I would want a unit test to be.

Imagine you implemented a ptrace loop on top of our sys::Execute functions. Then you might want to test that the events that come through are what you'd expect for a variety of different test programs, and now the unit tests start needing inputs that you have to compile first.

I was hoping this utility could be useful in these types of situations as well.

Think of it sort of like a miniature interpreted DSL that allows a parent and a child process to communicate with each other. Maybe this just further cements your idea that this is too high-level for testing low-level process launching though :) - I was just viewing it as a special case of more complicated scenarios.

It does feel a little weird that these are the only two tests that are not unit tests, but idk. If you don't think it's a good fit for a lit-style test I can go back to the unit test model, but I do think the current Program unit tests are kind of pushing the limits of what unit tests should be doing.

dblaikie accepted this revision. Jun 12 2018, 4:13 PM

ah, yeah - thanks for pointing out/explaining the existing tests a bit & how they're re-executing the full gtest then waiting to be run themselves, etc.

Worth having /some/ binary rather than rerunning through gtest, I think - and once you have one, this doesn't seem too bad.

Could you port/move the existing unit tests over to this framework so there's not a mix of approaches to the same problem?