This diff adds documentation for the --allow-empty flag to FileCheck.
I think the idea is to prevent unexpected empty output from going unnoticed in cases where --allow-empty is not specified. In a case where all a user cares about is that some string doesn't appear in the output, that might help make the test more robust (because they expect some output, just not what they specified), although honestly I'm not convinced, hence my proposal on the mailing list to change it to --expect-empty.
If testing for the absence of an error message, FileCheck could give a successful return value even though the testcase failed to compile. Having FileCheck default to erroring on empty input catches such a case. However, there are genuine cases where empty input is fine, which is why --allow-empty is needed.
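To illustrate the hazard, here is a hypothetical lit test (the tool invocation and the checked pattern are made up for the example, not taken from the patch):

    ; RUN: %clang_cc1 -emit-llvm %s -o - | FileCheck %s
    ; CHECK-NOT: call void @llvm.trap
    ;
    ; If the compile crashes and emits nothing, CHECK-NOT matches the
    ; empty input vacuously and the test would "pass"; erroring on empty
    ; input by default is what flags this case.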
I think it could happen if you have a testcase where you want to check for the absence of a pattern in debug output. You might still want to compile the testcase in release mode if, before a fix, it used to segfault at compilation. So you'd have empty output in release mode and non-empty output in debug mode. Just a thought, though.
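A sketch of how that scenario might look as a test (my-compiler, its -debug flag, and the pattern are all hypothetical placeholders):

    ; RUN: my-compiler -debug %s 2>&1 | FileCheck %s --allow-empty
    ; CHECK-NOT: applying unsafe transform
    ;
    ; In a release build the -debug stream is empty, so FileCheck sees no
    ; input at all; --allow-empty keeps that from being reported as a
    ; failure while CHECK-NOT still guards the debug-build output.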
I don't know why we want to make empty input a built-in special behavior... In that case, the test can make the intention explicit by specifying an option, say, --expect-non-empty:
    # RUN: command | FileCheck %s --expect-non-empty
    # CHECK-NOT: aaa
    # CHECK-NOT: bbb
Since pure NOT checks are unreliable, many tests do both:
    # RUN: command --enable | FileCheck %s
    # CHECK: aaa
    # CHECK: bbb

    # RUN: command --disable | FileCheck %s --check-prefix=NO
    # NO-NOT: aaa
    # NO-NOT: bbb
I agree that in this case having to add --expect-non-empty can be cumbersome for users.
Because it's pretty rare to expect empty output, making the default a bit more likely to catch bugs might be a good tradeoff.
(I tend to get pretty pushy about making sure tests aren't just "negative" (e.g. "does anything other than crash" or "does anything other than <this>"), but they still slip through a lot, so having a mechanical feature that might catch a few issues that would otherwise slip through those less restrictive tests seems potentially useful.)
Not sure it'd make a /huge/ deal either way, and it'd be hard to evaluate the cost/benefit (since the benefit is in all the tests that have correctly diagnosed a bug due to their output being empty and that being a failure by default, but we don't have a way to search for those instances).