Make the example compile and the test case pass.
The comment is a bit misleading; how about: Can take up to 15 optional arguments, to emulate accepting a variadic number of arguments.?
This comment describes what the code is doing, but not why. How about: The example attribute supports up to 15 optional arguments, but this demonstrates how to test for a specific number of arguments. or something along those lines?
If has arguments -> If there are arguments
(And add a full stop to the end of the comment.)
This looks dangerous to me -- if there are two or three arguments supplied, then this code will result in a buffer overrun.
I don't think we need the GOOD_ATTR and BAD_ATTR RUN lines any longer. Instead, you can use a single RUN line with -verify (no =) and use expected-error and expected-warning on the lines that need the diagnostics. This also means you can get rid of the @ line number constructs and just write the expected comments on the source lines with diagnostics.
A downside to this is that we lose the verification that the plugin actually attaches the attribute to the AST node. Another solution that doesn't involve generating LLVM bitcode from a Frontend test would be to use -ast-dump and check the AST nodes.
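For reference, a -verify-style test along those lines would look roughly like this. The plugin path, attribute name, and diagnostic text below are placeholders for illustration, not the actual ones from this patch:

```cpp
// RUN: %clang_cc1 -load %llvmshlibdir/ExamplePlugin%pluginext -fsyntax-only -verify %s

// Hypothetical attribute and diagnostic text, for illustration only.
void good() __attribute__((example));      // no diagnostic expected
void bad() __attribute__((example(1, 2))); // expected-error {{'example' attribute takes no more than 1 argument}}
```

The key points are the single RUN line with plain -verify and the expected-error comment living on the same source line as the diagnostic, with no @ line constructs.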
Oh, I hadn't noticed that, but looking at the code, it's hard to get the address of the argument buffer; all the APIs for it are private, even ArgsUnion const *getArgsBuffer() const, which I think is not very reasonable. Do you think I should copy all the arguments to a new buffer (not very efficient), or should I contribute a patch to make the API public?
There's a third way: getTrailingObjects() is public in TrailingObjects, so it is possible to cast a ParsedAttr to a TrailingObjects, but I think that's too hacky.
It seems the GOOD_ATTR and BAD_ATTR lines need to exist since we use -ast-dump now; we need a clean build without error diagnostics to generate a nice AST.
I am unfamiliar with this construct -- is there a reason to do this as opposed to making two separate files with separate content? (These are testing fundamentally different things.)
Thanks for the changes, LGTM!
Huh... we have exactly one other test case using this construct, which explains why I've never heard of it. :-D Thank you for pointing out that documentation.
I think it's usually best to keep test files focused on what they're testing, and this tests two different things (AST node attachment and semantic diagnostics) so my natural inclination is to have two separate files. But this provides the same test coverage, so I'd say we can leave it as-is until there's some other reason to split it up.