This patch splits the existing SveVectorBits LangOpt into VScaleMin and
VScaleMax LangOpts so that a minimum vector length without a fixed maximum
(i.e. -msve-vector-bits=<n>+) can be represented. The cc1 option has also been
split into -mvscale-{min,max}=<n> options so that the cc1 arguments better
reflect the vscale_range IR attribute.
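To illustrate the mapping this summary describes, a minimal sketch of turning the two LangOpts into the IR attribute, assuming it happens where function attributes are built; LangOpts and FuncAttrs are placeholder names, not quotes from the patch:

    // Sketch only: map the new LangOpts onto vscale_range. A VScaleMax of 0
    // is taken to mean "no upper bound", matching the IR attribute semantics.
    if (unsigned VScaleMin = LangOpts.VScaleMin)
      FuncAttrs.addVScaleRangeAttr(VScaleMin, LangOpts.VScaleMax);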
Are the references to "128-bit chunks" for the vscale flags necessary? That's really a nuisance of SVE that LLVM IR should not need to worry about. Can we speak exclusively in terms of vscale, or is the "multiples of 128" required somewhere? Perhaps we're missing a target-specific conversion function from vscale+elt to bytes or something. Also, there's nothing stopping vscale from being 3 (essentially any positive number), but you appear to be restricting it to a power of two.
It does feel like we should have something somewhere that specifies the vscale chunk, as I've had to hardcode 128 in a lot of places (though they are SVE-specific places, so I'll remove mention of it from the help text). As for the vscale values allowed, if we don't validate the values in the option itself, where would we do it? I guess we could just blindly propagate the value down to the IR attribute and let the backends figure out what they want. The issue then is providing a nice error, though perhaps we don't need one given they are cc1 options?
- Remove mention of 128-bit chunks from help texts
- Allow any positive integer value for -mvscale-{min,max}, not just powers of 2
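If the values are validated at the option itself, the check could be as simple as the following sketch; where exactly it would live is still open per the discussion above, and the names A, Args, and Diags are assumptions:

    // Illustrative only: accept any positive integer for -mvscale-{min,max}=<n>,
    // rejecting zero and non-numeric values.
    unsigned VScale = 0;
    if (StringRef(A->getValue()).getAsInteger(10, VScale) || VScale == 0)
      Diags.Report(diag::err_drv_invalid_value)
          << A->getAsString(Args) << A->getValue();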
clang/lib/Sema/SemaType.cpp
Line 7893: This should be VScaleMax as the attribute is only valid when -msve-vector-bits= is given only a number (i.e. -msve-vector-bits=128+ does not enable the fixed-length ACLE extension). I guess we need tests to cover this scenario.
Line 7916: I'm thinking there should be a check for S.getLangOpts().VScaleMin == S.getLangOpts().VScaleMax somewhere, but I guess the current diagnostic probably doesn't account for two values, nor should it really. What about just duplicating this block but using VScaleMax in place of VScaleMin?
clang/lib/Sema/SemaType.cpp
Line 7916: I think I would rather have a separate check for min == max and output a diagnostic stating that an exact -msve-vector-bits value must be used; does that sound sensible? This could then perhaps be combined with the previous comment to produce sensible output, i.e.:
!vscale_min -> not supported
vscale_min != vscale_max -> an exact -msve-vector-bits value must be used
clang/lib/Sema/SemaType.cpp
Line 7916: Actually, now that I re-read the error message, and the fact that it directly refers to the valid values, I think instead there should just be an initial check of min == 0 || min != max.
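To make the agreed check concrete, a sketch of what it could look like in the attribute handler; the diagnostic name and exact wiring are assumptions rather than the final patch:

    // Reject the attribute unless an exact vector length is known,
    // i.e. -msve-vector-bits=<n> (min == max != 0).
    const LangOptions &LO = S.getLangOpts();
    if (LO.VScaleMin == 0 || LO.VScaleMin != LO.VScaleMax) {
      S.Diag(Attr.getLoc(), diag::err_attribute_arm_feature_sve_bits_unsupported)
          << Attr;
      Attr.setInvalid();
      return;
    }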
- Update sema checking for the sve_vector_bits attribute to emit an error when vscale min != max, i.e. when -msve-vector-bits=<n>+ is used
- Add test to cover the above case
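For reference, a test exercising that case could look roughly like this; the RUN line and the exact diagnostic text are assumptions, not quotes from the patch:

    // RUN: %clang_cc1 -triple aarch64-none-linux-gnu -target-feature +sve \
    // RUN:   -mvscale-min=1 -fsyntax-only -verify %s

    // With only a minimum vscale (the -msve-vector-bits=<n>+ case) the
    // fixed-length attribute should be rejected.
    typedef __SVInt8_t fixed_int8_t __attribute__((arm_sve_vector_bits(128)));
    // expected-error@-1 {{'arm_sve_vector_bits' is only supported when '-msve-vector-bits=<bits>' is specified}}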
clang/lib/Driver/ToolChains/Clang.cpp
Line 1832: There may be a way to avoid side effects in assertions (getAsInteger changing the value of Bits passed by reference), similar to the NewToSet example in https://llvm.org/docs/CodingStandards.html#assert-liberally, or to https://github.com/llvm/llvm-project/blob/release/13.x/llvm/include/llvm/TableGen/Record.h#L1980-L1983 and https://github.com/llvm/llvm-project/blob/release/13.x/clang/lib/StaticAnalyzer/Core/AnalyzerOptions.cpp#L181-L185.
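Applied here, the pattern from the coding standards would look something like this (variable names are illustrative):

    unsigned Bits = 0;
    bool Failed = Val.getAsInteger(10, Bits); // side effect stays outside the assert
    (void)Failed;                             // silence unused-variable warnings in NDEBUG builds
    assert(!Failed && "Invalid -msve-vector-bits value");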
clang/lib/Basic/Targets/AArch64.cpp
Line 454–455: Similar to my previous comment, these must only be defined when both Opts.VScaleMin and Opts.VScaleMax have the same non-zero value.
Line 467–469: Should this be LangOpts.VScaleMin || LangOpts.VScaleMax to be more future-proof?

clang/lib/Driver/ToolChains/Clang.cpp
Line 1828: It looks like this should be unsigned to ensure the correct version of getAsInteger is called.
Line 1829–1833: I'm wondering, given the extra complexity, whether it's worth parsing Val first before dealing with the numerical values. For example:

    if (!Val.equals("scalable")) {
      if (Val.endswith("+")) {
        MinOnly = true;
        Val = Val.substr(0, Val.size() - 1);
      }
      if (!Val.getAsInteger(10, Bits) && (Bits == 128 || Bits == 256 /* ... */)) {
        CmdArgs.push_back(Args.MakeArgString("-mvscale-min=" + Twine(Bits)));
        if (!MinOnly)
          CmdArgs.push_back(Args.MakeArgString("-mvscale-max=" + Twine(Bits)));
      } else
        D.Diag(diag::err_drv_unsupported_option_argument) /* ... */;
    }

After writing this I'm not sure it's actually any better, so just something to consider.
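Expanding on the AArch64.cpp comment above, the macro definitions would end up guarded roughly like this; __ARM_FEATURE_SVE_BITS is the existing macro, but the exact placement of the guard is an assumption:

    // Only define the fixed-length SVE macros when an exact vector length is
    // known, i.e. VScaleMin == VScaleMax != 0. The 128 here is the AArch64
    // SVE granule, which is fine in a target-specific file.
    if (Opts.VScaleMin && Opts.VScaleMin == Opts.VScaleMax)
      Builder.defineMacro("__ARM_FEATURE_SVE_BITS",
                          Twine(Opts.VScaleMin * 128));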
- Don't define SVE target bits macros when vscale min != max
- Add tests for above change
- Use correct (unsigned) version of getAsInteger