Type errors in function declarations were not (always) diagnosed prior
to this patch. Furthermore, certain remarks were not associated
properly, which caused them to be emitted multiple times.
Event Timeline
clang/lib/Sema/Sema.cpp

1798 | So, we assume that the lexical context is a function, and if it is not, we assume that the value declaration being checked is a function. Can we actually break this assumption? For example:

    struct S {
      __float128 A = 1;
      int bar = 2 * A;
    };
    #pragma omp declare target
    S s;
    #pragma omp end declare target
1835 | I was a bit confused by the FPTy variable name and had already started writing a comment that the FunctionProtoType case was already checked, then I realized that this is a check for Function*No*Prototype :). Can we change FPTy to something closer to FunctionNoProtoType?
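For anyone skimming the thread: the distinction here is between clang's FunctionProtoType (a function type that carries a parameter list) and FunctionNoProtoType (a K&R-style function type declared without one). A rough sketch of what the renamed check could look like, using placeholder names rather than the actual code from the patch:

    // Rough sketch with placeholder names; not the code under review.
    #include "clang/AST/Type.h"
    using namespace clang;

    static void checkFunctionType(QualType Ty) {
      if (const auto *FPT = Ty->getAs<FunctionProtoType>()) {
        // Function type with a parameter list: check the return type and
        // each parameter type.
        (void)FPT->getReturnType();
        for (QualType ParamTy : FPT->param_types())
          (void)ParamTy;
      } else if (const auto *FNPT = Ty->getAs<FunctionNoProtoType>()) {
        // K&R-style function type without a prototype: only the return
        // type is available to check.
        (void)FNPT->getReturnType();
      }
    }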
clang/test/OpenMP/nvptx_unsupported_type_messages.cpp

42–43 | So, boo is diagnosed twice: at the point where it is actually used and at the point where it is defined. That looks a bit like duplication, doesn't it? I think it makes sense to diagnose only the usage point, with a note pointing to the definition.
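For illustration, the usual Sema pattern for "error at the use site plus a note at the definition" looks roughly like the sketch below; the helper and the diagnostic IDs are placeholders, not taken from the patch:

    // Placeholder helper and diagnostic IDs, for illustration only.
    #include "clang/Sema/Sema.h"
    using namespace clang;

    static void diagnoseUnsupportedUse(Sema &S, SourceLocation UseLoc,
                                       const FunctionDecl *FD) {
      // Emit the error where the unsupported entity is actually used ...
      S.Diag(UseLoc, diag::err_type_unsupported) << FD->getType();
      // ... and attach a note pointing back at the definition.
      S.Diag(FD->getLocation(), diag::note_previous_decl) << FD;
    }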
Thanks for taking a look!
clang/lib/Sema/Sema.cpp

1798 | This is diagnosed neither with nor without the patch.
clang/test/OpenMP/nvptx_unsupported_type_messages.cpp

42–43 | It is diagnosed twice, right. I don't think that is a problem with this patch, though; see, for example, line 54 on the left, where we emitted the boo error 4 times before. Also, take a look at line 181 and the following lines in this file. With this patch we reduce the number of times the error is emitted from 3 to 1. It's a bit of a mixed bag, but I think this makes it better.
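(For reference on how that count shows up under clang's -verify checking: an expectation can carry an occurrence count, so deduplication is visible directly in the expected-error lines. The lines below are hypothetical, not quoted from the test file.)

    // Hypothetical -verify expectations, not quoted from the test file.
    // A duplicated diagnostic needs an occurrence count:
    __float128 boo(); // expected-error 3 {{'__float128' is not supported}}
    // ... whereas a diagnostic emitted once needs a plain expectation:
    __float128 baz(); // expected-error {{'__float128' is not supported}}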
I agree this is an improvement on the current status and should land.
Reading through it has made me nervous about our current semantic checking. In particular, I'm not sure that suppressing error messages from code that ultimately proves to be discarded is the same as not parsing code that ultimately was not required, but that design choice seems to have been made a long time ago. Presumably lazy diagnostics had better engineering properties than lazy parsing and codegen; I'd be interested to see docs/emails about the early design choices for single-source target offloading if someone has them on hand.
So, we assume that the lexical context is a function, and if it is not, we assume that the value declaration being checked is a function. Can we actually break this assumption?
For example, will something like this work correctly: