This change ensures that compiler invocations targeting macOS, Mac Catalyst, and the iOS/tvOS/watchOS simulators for arm64 use the "apple-a12" CPU. The -mcpu option can still be used to override the CPU with a newer variant.
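As a rough illustration of the rule described above, the default-CPU selection could be sketched like this (an illustrative helper only, not clang's actual driver code; the function name and the "generic" fallback are assumptions):

```cpp
#include <string>

// Illustrative sketch, not the real driver logic: arm64 Apple targets
// for macOS, Mac Catalyst, and the iOS/tvOS/watchOS simulators default
// to the "apple-a12" CPU; an explicit -mcpu value wins.
std::string defaultArm64AppleCPU(const std::string &OS, bool IsSimulator,
                                 const std::string &McpuArg) {
  if (!McpuArg.empty())
    return McpuArg; // honor an explicit -mcpu override
  if (OS == "macos" || OS == "maccatalyst" || IsSimulator)
    return "apple-a12"; // the new default for these targets
  return "generic"; // placeholder for other targets (assumption)
}
```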
Diff Detail
- Repository: rG LLVM Github Monorepo
Event Timeline
@t.p.northover @ab I noticed that defaulting to "apple-a12" doesn't infer the right target features when we pass in apple-a11 or older, which is why my test file has the separate INFER-A12 line. Do you think this is a bug? This is decided here:
    else if ((A = Args.getLastArg(options::OPT_mcpu_EQ)))
      success = getAArch64ArchFeaturesFromMcpu(D, A->getValue(), Args, Features);
and the CPU value isn't checked here.
Interesting. Does it even pass the newer CPU then? If so, I imagine the features are still enabled in the backend because of the CPU, so this would only affect frontend behavior (e.g., preprocessor macros, which might in turn affect some intrinsic headers, etc.). Either way, I agree it's a bug.
But the apple-a12 floor isn't a hard requirement here (unlike e.g., arm64e where we'll fail to compile without certain features), so I'm tempted to avoid these shenanigans entirely and let users specify what they want. What do you think?
We're seeing some build warnings in ffmpeg due to this not being in yet. Could you try to land it soon?
Don't try to fall back to the "apple-a12" CPU when the user explicitly specifies an older CPU. The default is still "apple-a12".
Testing this out: --target=arm64-apple-macos now does the right thing, but -arch arm64 sets the target triple to iOS (and hence doesn't pass the a12 CPU either). That's different from what Xcode's clang does. Could you upstream whatever causes that difference too?
(If -mmacosx-version-min is passed too, open-source clang does the right thing, but for quick command-line testing it'd be nice if both clangs behaved consistently.)
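The triple-inference difference described in the last two comments could be sketched as follows (a hypothetical helper with simplified flag handling; the real logic lives in clang's Darwin driver, and the behavior encoded here is only what the comments report):

```cpp
#include <string>

// Hypothetical sketch of the reported open-source clang behavior:
// a bare "-arch arm64" infers an iOS triple (so the apple-a12 default
// doesn't apply), while "--target=arm64-apple-macos" or adding
// -mmacosx-version-min yields a macOS triple. Xcode's clang reportedly
// picks a macOS triple for bare "-arch arm64" as well.
std::string inferredOS(bool HasExplicitMacosTarget, bool HasMacosxVersionMin) {
  if (HasExplicitMacosTarget || HasMacosxVersionMin)
    return "macos"; // gets the apple-a12 default CPU
  return "ios";     // reported open-source behavior for bare -arch arm64
}
```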