In a sense, this is implementing an ABI decision (how to lower the software construct of "blend"), but if there are interesting variants to consider, this could be made object-file-format-specific in some way.
llvm/test/CodeGen/AArch64/ptrauth-intrinsic-blend.ll:20

Overall, I think this patch looks fine.

The signature of llvm.ptrauth.blend is documented as:

declare i64 @llvm.ptrauth.blend(i64 <address discriminator>, i64 <integer discriminator>)

It makes me wonder what should happen when an immediate too large to fit into an i16 is passed. I wonder if, for consistency, it would be easy to make DAGISel behave like GlobalISel and accept immediates larger than 16 bits, taking the lower 16 bits?
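A minimal IR sketch of the ambiguity (function names are hypothetical; 0x12345 stands in for any value wider than 16 bits):

```llvm
declare i64 @llvm.ptrauth.blend(i64, i64)

; In range: the integer discriminator 1234 fits in an i16.
define i64 @blend_ok(i64 %addr) {
  %d = call i64 @llvm.ptrauth.blend(i64 %addr, i64 1234)
  ret i64 %d
}

; Out of range: 0x12345 does not fit in an i16. Should this truncate
; to 0x2345, or be rejected?
define i64 @blend_wide(i64 %addr) {
  %d = call i64 @llvm.ptrauth.blend(i64 %addr, i64 74565)
  ret i64 %d
}
```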
llvm/test/CodeGen/AArch64/ptrauth-intrinsic-blend.ll:20

Interesting! I would go the other way, and consider the GISel behavior too lax; I added a check that makes it fail isel as well. It's true we don't specify the intrinsics strictly enough, but for AArch64 I'd consider it always incorrect to use a wider value. It's also true that that's the emergent behavior for non-immediate blends, but I'd argue those are by far the exception: I think the only legitimate use is a dynamic loader that really does get the discriminator values from memory. Anything else is a red flag (I considered somehow hiding them in the compiler, or detecting accidental uses, but that's tricky). How does that sound?
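For context on why 16 bits is the hard limit: on AArch64 an immediate blend is expected to select to a single MOVK that inserts the integer discriminator into the top 16 bits of the address discriminator, so a wider immediate simply cannot be encoded. A sketch (the expected assembly is illustrative, not taken from the patch):

```llvm
declare i64 @llvm.ptrauth.blend(i64, i64)

define i64 @blend_imm(i64 %addr) {
  ; Expected selection (illustrative): movk x0, #1234, lsl #48
  %d = call i64 @llvm.ptrauth.blend(i64 %addr, i64 1234)
  ret i64 %d
}
```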
I had just two more minor nitpicks, mostly about adding two more regression tests to test/document expected behavior in corner cases.
llvm/lib/Target/AArch64/AArch64InstrInfo.td:8410

It seems that the blend intrinsic can be implemented on cores that do not implement the PAuth extension, which made me wonder: should a user be allowed to use the blend intrinsic when targeting a core that does not support PAuth?
llvm/test/CodeGen/AArch64/ptrauth-intrinsic-blend.ll:20

That sounds good to me!
llvm/lib/Target/AArch64/AArch64InstrInfo.td:8410

I don't have a strong argument either, but until we have a use for it on other cores, I'd leave it disallowed there?

They're not, and I found out by later tests breaking ;) On SDAG, if we can't match the 16-bit immediate, we fall back to the arbitrary GPR64 value (which isn't ideal by itself), so we can't easily fail on wider values, at least not without some machinery to match this case (maybe matching any immediate and erroring out when trying to encode; I'm not sure we can fail gracefully at isel time).
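For reference, the non-immediate case mentioned above is the one where the discriminator is only known at runtime (e.g. a dynamic loader reading it from memory), which forces the generic two-register lowering rather than the immediate form (a hypothetical sketch):

```llvm
declare i64 @llvm.ptrauth.blend(i64, i64)

; The discriminator is loaded from memory, so no immediate pattern applies
; and the blend must be lowered from two GPR64 operands.
define i64 @blend_dynamic(i64 %addr, ptr %p) {
  %disc = load i64, ptr %p
  %d = call i64 @llvm.ptrauth.blend(i64 %addr, i64 %disc)
  ret i64 %d
}
```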
llvm/lib/Target/AArch64/AArch64InstrInfo.td:8410

That seems OK to me. I also don't see an immediate solution to this.
Could this patch be landed? (I can commit it on your behalf if appropriate)
Overall, I would like to help prepare for review and upstream more PAC-related patches from https://github.com/ahmedbougacha/llvm-project/commits/eng/arm64e-upstream-llvmorg .
llvm/lib/Target/AArch64/GISel/AArch64InstructionSelector.cpp:5878–5879

[nitpick] Maybe reorder these lines so the first removeOperand doesn't affect the index passed to the second one.
I don’t think that’s how we do things. Either way I was out for a while and will update the patches; thanks for taking a look.
It seems that the blend intrinsic can be implemented on cores that do not implement the PAuth extension, which made me wonder: should a user be allowed to use the blend intrinsic when targeting a core that does not support PAuth?
I don't have a very strong argument either way. I wonder if you happen to have an opinion?
One thing I did think of, but couldn't answer just from looking at this patch: are DAGISel and GISel consistent in rejecting the blend intrinsic when PAuth is not implemented?
I think it would make sense to have a regression test checking that the blend intrinsic is indeed accepted/rejected as expected when targeting a core that does not support PAuth.
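Such a test could look roughly like this (a sketch; the RUN lines and the -pauth attribute spelling are assumptions, not taken from the patch):

```llvm
; RUN: llc -mtriple=aarch64-none-elf -mattr=-pauth -global-isel=0 < %s
; RUN: llc -mtriple=aarch64-none-elf -mattr=-pauth -global-isel=1 < %s

declare i64 @llvm.ptrauth.blend(i64, i64)

; Document whether the blend intrinsic is accepted or rejected
; when the target does not implement FEAT_PAuth.
define i64 @blend_no_pauth(i64 %addr) {
  %d = call i64 @llvm.ptrauth.blend(i64 %addr, i64 42)
  ret i64 %d
}
```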