
Warn about zero-parameter K&R definitions in -Wstrict-prototypes
Needs ReviewPublic

Authored by aaronpuchert on Aug 28 2019, 6:07 PM.

Details

Summary

Zero-parameter K&R definitions specify that the function has no
parameters, but they are still not prototypes, so calling the function
with the wrong number of parameters is just a warning, not an error.

The C11 standard doesn't seem to directly define what a prototype is,
but it can be inferred from 6.9.1p7: "If the declarator includes a
parameter type list, the list also specifies the types of all the
parameters; such a declarator also serves as a function prototype
for later calls to the same function in the same translation unit."
This refers to 6.7.6.3p5: "If, in the declaration “T D1”, D1 has
the form

D(parameter-type-list)

or

D(identifier-list_opt)

[...]". Later, in 6.11.7, it also refers only to the parameter-type-list
variant as a prototype: "The use of function definitions with separate
parameter identifier and declaration lists (not prototype-format
parameter type and identifier declarators) is an obsolescent feature."

We already correctly treat an empty parameter list as a non-prototype
declaration, so we can simply reuse that information.

GCC also warns about this with -Wstrict-prototypes.

This shouldn't affect C++, because there all FunctionTypes are
FunctionProtoTypes. I added a simple test for that.

Event Timeline

aaronpuchert created this revision. Aug 28 2019, 6:07 PM
Herald added a project: Restricted Project. Aug 28 2019, 6:07 PM

This could cause a lot of churn in existing projects (especially with static void foo()), without giving Clang any new information. I'm wary of this.

Zero-parameter K&R definitions specify that the function has no
parameters, but they are still not prototypes, so calling the function
with the wrong number of parameters is just a warning, not an error.

Why not just directly give an error for the problematic case? We could carve out a -W flag (if it doesn't already exist) that warns if you incorrectly pass parameters to a function whose definition has no prototype, and then make it -Werror-by-default.

Note that I'm just copying GCC, which seems to be the original intent behind the warning (https://bugs.llvm.org/show_bug.cgi?id=20796). So people who use both compilers will have seen that warning already. Note also that there is no warning if any declaration provides a prototype, so this is fine:

void f(void);  // provides a prototype
void f() {}    // not a prototype, but we have one already

We could carve out a -W flag (if it doesn't already exist) that warns if you incorrectly pass parameters to a function whose definition has no prototype

The warning exists, but there is no flag apparently.

void f() {}
void g() {
    f(0);
}

spits out

test.c:3:8: warning: too many arguments in call to 'f'
    f(0);
    ~  ^

But with f(void) we get

test.c:3:7: error: too many arguments to function call, expected 0, have 1
    f(0);
    ~ ^
test.c:1:1: note: 'f' declared here
void f(void) {}
^

This could cause a lot of churn in existing projects (especially with static void foo()), without giving Clang any new information. I'm wary of this.

Those projects likely aren't aware they're using prototypeless functions, which are trivial to call incorrectly. I suspect this diagnostic will find real bugs in code.

Zero-parameter K&R definitions specify that the function has no
parameters, but they are still not prototypes, so calling the function
with the wrong number of parameters is just a warning, not an error.

Why not just directly give an error for the problematic case? We could carve out a -W flag (if it doesn't already exist) that warns if you incorrectly pass parameters to a function whose definition has no prototype, and then make it -Werror-by-default.

It's not incorrect to pass arguments to a function without a prototype, so that should not be an error. It is incorrect to pass the wrong number or types of arguments to a function without a prototype. It's not a bad idea to error in those circumstances, but there's no solution for extern void foo() where we don't see the actual definition.

If this turns out to be chatty in practice, we could put it under its own diagnostic flag (-Wstrict-prototype-no-params or something).

This could cause a lot of churn in existing projects (especially with static void foo()), without giving Clang any new information. I'm wary of this.

Those projects likely aren't aware they're using prototypeless functions, which are trivial to call incorrectly. I suspect this diagnostic will find real bugs in code.

To be clear, my understanding is that -Wstrict-prototypes already warns on non-prototype declarations like this:

void foo();

we just don't warn on non-prototype defining declarations, where the meaning is unambiguous:

void foo() {}

It's not incorrect to pass arguments to a function without a prototype, so that should not be an error. It is incorrect to pass the wrong number or types of arguments to a function without a prototype. It's not a bad idea to error in those circumstances, but there's no solution for extern void foo() where we don't see the actual definition.

Given my understanding, then the only corner case that's left is when we *do* see the definition.

This could cause a lot of churn in existing projects (especially with static void foo()), without giving Clang any new information. I'm wary of this.

Those projects likely aren't aware they're using prototypeless functions, which are trivial to call incorrectly. I suspect this diagnostic will find real bugs in code.

To be clear, my understanding is that -Wstrict-prototypes already warns on non-prototype declarations like this:

void foo();

we just don't warn on non-prototype defining declarations, where the meaning is unambiguous:

void foo() {}

There are two different warnings, and perhaps we're speaking about different ones. We have a warning about not having a prototype (warning: this function declaration is not a prototype) and we have a warning about not seeing a preceding prototype (warning: this old-style function definition is not preceded by a prototype). I think this patch deals with the latter.

It's not incorrect to pass arguments to a function without a prototype, so that should not be an error. It is incorrect to pass the wrong number or types of arguments to a function without a prototype. It's not a bad idea to error in those circumstances, but there's no solution for extern void foo() where we don't see the actual definition.

Given my understanding, then the only corner case that's left is when we *do* see the definition.

Yeah, and we already handle that situation with an un-ignorable warning: https://godbolt.org/z/TPklNE

However, I think the inconsistency this patch is addressing is that we warn inconsistently here: https://godbolt.org/z/wvipEs I would expect the definition of bar() to warn similar to the definition of foo() for the same reasons that foo() is diagnosed.

we just don't warn on non-prototype defining declarations, where the meaning is unambiguous:

void foo() {}

"Meaning" is a difficult term. What we know is that this is a K&R-style definition, and does not define a prototype. So one would expect -Wstrict-prototypes to warn about this.

If you look at the section about function calls in the C11 standard (6.5.2.2), you can see that it distinguishes "Constraints" and "Semantics". Constraints are only placed on calls to prototype functions: "If the expression that denotes the called function has a type that includes a prototype, the number of arguments shall agree with the number of parameters. Each argument shall have a type such that its value may be assigned to an object with the unqualified version of the type of its corresponding parameter." Only in the semantics section do we find something regarding calls to non-prototype functions: "If the expression that denotes the called function has a type that does not include a prototype, [...]. If the number of arguments does not equal the number of parameters, the behavior is undefined." So one is a compile-time error, the other a runtime error (at best).

For me this warning is about usage of an "obsolescent" feature (6.11.6 and 6.11.7), and thus should warn even when we can warn about wrong usage at call sites. Note that the warning is off by default and neither enabled by -Wall nor -Wextra.

There are two different warnings, and perhaps we're speaking about different ones. We have a warning about not having a prototype (warning: this function declaration is not a prototype) and we have a warning about not seeing a preceding prototype (warning: this old-style function definition is not preceded by a prototype). I think this patch deals with the latter.

There is also -Wmissing-prototypes, which could have been named better. It warns about a definition (never a declaration) that doesn't have a preceding prototype declaration. It warns when there are preceding declarations that do not provide a prototype, and also when there is no preceding declaration at all, even if the definition itself does provide a prototype. It doesn't fire on static or inline definitions. I would describe the purpose of this warning as finding functions that should be declared static.

This warning (-Wstrict-prototypes) is rather straightforward: it warns on old-style (K&R) function declarators and enforces the usage of prototype declarations. (Well, with the caveat that among multiple non-definition declarations, only the first declaration needs to provide said prototype.) The purpose of this warning is to detect (perhaps unintentional) usage of the old-style unsafe declarator syntax.

So the warnings have some overlap, but I see their intention as different.

Given my understanding, then the only corner case that's left is when we *do* see the definition.

Yeah, and we already handle that situation with an un-ignorable warning: https://godbolt.org/z/TPklNE

A non-prototype definition could be inline in a header. Now we have a warning that detects when we call the function with the wrong number of arguments, but users of the header might use a different compiler and not see a warning: the compiler is not required to diagnose this, as it's undefined behavior. Now if Clang warns me that this isn't a prototype, I'll fix it and users of the header with the hypothetical other compiler now get an error if they use it wrong.

By the way, I'm open to adding a fix-it hint for zero-parameter K&R-style definitions, since the fix is pretty straightforward.

We could carve out a -W flag (if it doesn't already exist) that warns if you incorrectly pass parameters to a function whose definition has no prototype

The warning exists, but there is no flag apparently.

void f() {}
void g() {
    f(0);
}

spits out

test.c:3:8: warning: too many arguments in call to 'f'
    f(0);
    ~  ^

Great; then I think we should add a flag. I would suggest -Werror-by-default since the compiler knows it's incorrect code.

"Meaning" is a difficult term. What we know is that this is a K&R-style definition, and does not define a prototype. So one would expect -Wstrict-prototypes to warn about this.

All I'm suggesting is that the compiler knows that the call is wrong, so it should error (by default).

After adding that, my feeling is that diagnosing a missing prototype on a defining declaration would be a style nitpick that might fit better in clang-tidy. IIRC, when we rolled out -Wstrict-prototypes we explicitly excluded this case since it hit a lot of code without finding bugs.

But at the same time, I don't want to block this. If you do pursue this please use a distinct -W flag so it can be turned off without losing the benefits of -Wstrict-prototypes.

We had a discussion on IRC yesterday and @aaron.ballman pointed out that the K&R syntax has been considered "obsolescent" (= deprecated) since C89, 30 years ago. He added that there are thoughts within the standards committee about removing the syntax entirely from C2x.

I would suggest -Werror-by-default since the compiler knows it's incorrect code.

Do we have any precedent for a warning that's treated as an error by default? I haven't seen that before, which is why I'm asking. Even -Wreturn-type isn't treated as an error by default.

All I'm suggesting is that the compiler knows that the call is wrong, so it should error (by default).

K&R allows weird stuff, so I'm really not sure it's necessarily wrong. To my knowledge the ... varargs syntax was only introduced with prototypes, and the K&R version of printf was "implicitly vararg". Meaning that the additional parameters weren't mentioned in the function header at all, and scraped from the stack using the va_* macros as it's done today.

Also by that argument we shouldn't emit -Wstrict-prototypes at all on definitions, because clearly the compiler can also do type checking then. In fact, we actually do that to some extent: instead of default-promoting the call arguments, we default-promote the parameter types and build a FunctionProtoType, so that calls work as if there had been a proper prototype. This has the odd side effect of causing different behavior depending on whether the compiler sees the definition of a function, as @aaron.ballman noted. The example we looked at was

int f();                       /* non-prototype declaration */
int g() { return f(1.0); }     /* no prototype seen: 1.0 is default-promoted to double */
int f(x) int x; { return x; }  /* K&R definition; Clang builds the prototype int(int) from it */
int h() { return f(1.0); }     /* prototype now in effect: 1.0 is converted to int */

One might think that we emit the same code for g and h, but we don't!

After adding that, my feeling is that diagnosing a missing prototype on a defining declaration would be a style nitpick that might fit better in clang-tidy.

It's not a stylistic issue; prototype and non-prototype declarations are semantically very different. C has no notion of argument checking for non-prototypes at all. The identifier-list of a K&R definition is essentially private.

Of course we can impose our own semantics for non-prototypes, but we can also just nudge people into using standard syntax that has been available for many decades that guarantees them proper argument checks, both regarding number and types of arguments.

IIRC, when we rolled out -Wstrict-prototypes we explicitly excluded this case since it hit a lot of code without finding bugs.

The GCC people didn't, so it can't be that bad. Also we already warn on block definitions with zero parameters for some reason. Lastly, the fix is very easy: just add void. With some -fixit magic it might just be a matter of running Clang over the entire code base once and you're done with it.

We don't only warn on actual bugs, but also on using deprecated language features and generally about problematic code. As I pointed out, zero-parameter K&R definitions are problematic even if we error out when they are called with the wrong number of parameters, because such definitions might be inline parts of an interface that's used with other compilers that don't have such checks.

If you do pursue this please use a distinct -W flag so it can be turned off without losing the benefits of -Wstrict-prototypes.

The warning is not on by default, and not enabled by -Wall or -Wextra, so this would serve the small sliver of developers who researched long enough to find this flag, decided to activate it only with Clang and not GCC, and then for some reason don't want it to do what it promises in its name. If the warning calls itself "strict", then that's what it should be.

IIRC, when we rolled out -Wstrict-prototypes we explicitly excluded this case since it hit a lot of code without finding bugs.

I'm not a historian, but I believe this was suggested by @rsmith in https://bugs.llvm.org/show_bug.cgi?id=20796#c8, and I think we agreed in IRC yesterday that this should in fact warn. As I wrote in the commit message, the standard isn't terribly clear on what a prototype is, but we already treat it as non-prototype and that's very likely correct.

There are no mentions of too many false positives in the bug report.

I went back to read notes from when we deployed -Wstrict-prototypes (which we have had on-by-default for our users for a couple of years, since it catches lots of -fblocks-related bugs). I was mainly objecting because I had (mis)remembered trying and backing out the specific behaviour you're adding. On the contrary, our notes say that we might want to strengthen the diagnostic as you're doing.

I'm still nervous about this, but I'm withdrawing my objection.

Do we have any precedent for a warning that's treated as an error by default? I haven't seen that before, which is why I'm asking. Even -Wreturn-type isn't treated as an error by default.

There are tons of -Werror-by-default in Objective-C, at least. I'm not sure if it's used elsewhere, but for almost-guaranteed bugs it seems like the right thing to do.

I went back to read notes from when we deployed -Wstrict-prototypes (which we have had on-by-default for our users for a couple of years, since it catches lots of -fblocks-related bugs). I was mainly objecting because I had (mis)remembered trying and backing out the specific behaviour you're adding. On the contrary, our notes say that we might want to strengthen the diagnostic as you're doing.

Thank you for having a look at the notes, that's good to hear.

Your users are Xcode/Apple Clang users? @aaron.ballman suggested turning the warning on by default, and if Apple has that deployed already, it might be another argument in favor of doing so in vanilla Clang as well.

I went back to read notes from when we deployed -Wstrict-prototypes (which we have had on-by-default for our users for a couple of years, since it catches lots of -fblocks-related bugs). I was mainly objecting because I had (mis)remembered trying and backing out the specific behaviour you're adding. On the contrary, our notes say that we might want to strengthen the diagnostic as you're doing.

Thank you for having a look at the notes, that's good to hear.

Your users are Xcode/Apple Clang users? @aaron.ballman suggested turning the warning on by default, and if Apple has that deployed already, it might be another argument in favor of doing so in vanilla Clang as well.

We do have numerous warnings that are default errors, you can look for DefaultError in the diagnostic .td files to see the uses.

We do have numerous warnings that are default errors, you can look for DefaultError in the diagnostic .td files to see the uses.

Thanks, I hadn't seen that before. It seems that most of these warnings are for extensions, but one comes pretty close to what @dexonsmith has suggested:

def warn_cannot_pass_non_pod_arg_to_vararg : Warning<
  "cannot pass object of %select{non-POD|non-trivial}0 type %1 through variadic"
  " %select{function|block|method|constructor}2; call will abort at runtime">,
  InGroup<NonPODVarargs>, DefaultError;

The standard explicitly says in C11 6.5.2.2p8: “the number and types of arguments are not compared with those of the parameters in a function definition that does not include a function prototype declarator”, but perhaps this just means a programmer can't rely on such a check?

We do have numerous warnings that are default errors, you can look for DefaultError in the diagnostic .td files to see the uses.

Thanks, I hadn't seen that before. It seems that most of these warnings are for extensions, but one comes pretty close to what @dexonsmith has suggested:

def warn_cannot_pass_non_pod_arg_to_vararg : Warning<
  "cannot pass object of %select{non-POD|non-trivial}0 type %1 through variadic"
  " %select{function|block|method|constructor}2; call will abort at runtime">,
  InGroup<NonPODVarargs>, DefaultError;

The standard explicitly says in C11 6.5.2.2p8: “the number and types of arguments are not compared with those of the parameters in a function definition that does not include a function prototype declarator”, but perhaps this just means a programmer can't rely on such a check?

I read that as stating that such a check does not happen. However, the standard places very few requirements on diagnostics; it's always permissible to take undefined behavior and define it to do something, such as diagnosing it with a warning or an error, as a matter of QoI.