Currently, non-temporal loads wider than 256 bits are broken up inefficiently. For example, v17i32 gets broken into 128-bit loads. It would be better to use 256-bit loads instead.
Repository: rG LLVM Github Monorepo
Thanks for sharing the patch. As is, it looks like there are multiple test failures caused by this patch that need addressing (some comments inline about correctness issues).
llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
- Line 54: Is this needed?
- Line 73: Is this needed?
- Line 18089: Move all variables that are used after the early exit down below the early exit.
- Line 18094: We should also only do this for non-temporal loads.
- Line 18098: Could you add a comment here illustrating what kind of DAG nodes we create to replace the original load? It would also be good to document the motivation for the special handling of non-temporal loads here.
- Line 18105: I don't think that's correct: you are passing the offset in bits, but I think it should be in bytes. Looking at some of the test changes, the offsets of the loads are wrong, e.g. `ldnp q0, q2, [x0, #256]`.
- Line 18136: Do you need to create NullVector explicitly here? Could you just use `DAG.getUNDEF(NewVT)` here?
- Line 20132: This should be kept, I think.
llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
- Line 18098: Yes, something like that. Maybe: "Try to break up non-temporal loads into blocks of 256 bits early, so the LDNP can be selected." It might be good to add this as a comment for the whole function.
llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
- Line 18098: You can check the comments I added. I will add a comment for the whole function as well.
Thanks for the update, this looks like a nice improvement and should avoid regressions for those cases with D132559. A few more mostly stylistic comments.
llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
- Line 18082: It would be good to be more concrete about what "large" means here: loads larger than 256 bits will be split into blocks of 256 bits. Nit: comments should be full sentences ending with a period (.).
- Line 18088: Thinking about it a bit more, I think we should also not handle volatile and atomic loads here, to be safe. Could you add tests and the extra conditions for the early exit here? I think it would be fine to add the tests directly in this patch.
- Line 18094: `LD->isNonTemporal()` is already checked above.
- Line 18099: Nit: BasePtr would be more accurate.
- Line 18110: Nit: used as unsigned, so maybe make this unsigned too?
- Line 18120: Nit: comments should be full sentences ending with a period (.). "bits in the load operation" -> "bits of the load operation"? It might also help readability to add a newline before the comment.
- Line 18123: Nit: comments should be full sentences ending with a period (.).
- Line 18136: MaskVector not used?
- Line 18137: UndefVector would be more accurate?
- Add an early exit for volatile loads and test cases to check for it.
- Address some style issues.

Removed the check for both operands being negative at the same time, since that case is handled by default.
llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
- Line 18132: Shouldn't the second operand of this call be PtrOffset again?
- Lines 18139–18141: Is this (and the implementation generally) big-endian correct? I don't know the answer here; I can never remember what's supposed to happen. But someone should definitely try it on an aarch64_be target and at least eyeball the assembly to check the offsets and so on.
- Line 18153: The Chain here is the input chain; I think you need to TokenFactor all the loads' output chains together to make sure nothing gets reordered with them.
- Lines 20212–20213: This looks like it takes precedence over performLOADCombine and disables ldnp formation if the TBI feature is enabled.
llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
- Lines 20212–20213: Removing the checks and calling performLOADCombine caused many tests to fail. Maybe we can check if the load is non-temporal here?
llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
- Line 18132: Yes, it should be. It is worth noting that the offsets in the generated assembly didn't change when I switched from `MemVT.getSizeInBits()` to PtrOffset.
llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
- Lines 18139–18141: This is the generated assembly for big-endian:

```llvm
define <17 x float> @test_ldnp_v17f32(<17 x float>* %A) {
  %lv = load <17 x float>, <17 x float>* %A, align 8, !nontemporal !0
  ret <17 x float> %lv
}
!0 = !{i32 1}
```

```asm
test_ldnp_v17f32:               // @test_ldnp_v17f32
	.cfi_startproc
// %bb.0:
	ldnp	q0, q1, [x0, #32]
	add	x9, x8, #48
	add	x10, x8, #32
	ldnp	q2, q3, [x0]
	add	x11, x8, #16
	ldr	s4, [x0, #64]
	st1	{ v2.4s }, [x8]
	st1	{ v1.4s }, [x9]
	st1	{ v0.4s }, [x10]
	st1	{ v3.4s }, [x11]
	str	s4, [x8, #64]
	ret
```

Looking at https://godbolt.org, I think there are more load instructions before breaking them up.
llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
- Lines 18139–18141: It's coming back to me now, and I think that output is incorrect. Basically, the ld1 and st1 instructions load each element big-endian but preserve the element order (i.e. v0[0] is still loaded from &ptr[0], not &ptr[3]). On the other hand, ldr and ldnp load the whole vector register in big-endian order, which loads v0[0] from &ptr[3]. This output mixes the two, so it'll rearrange elements. But I think the real bug is in https://reviews.llvm.org/rG7155ed4289, which you committed earlier, or at least that needs fixing before any bug here is obvious. A proper fix would be to put an AArch64ISD::REV<n> (where <n> is the element size) after each ldnp to restore what an ld1 would have done, though I'd be OK with disabling the optimization for big-endian instead. I care if we break it; I'm less bothered if it isn't as fast as little-endian.
- Lines 20212–20213: The check is needed because otherwise we assume TBI even when we shouldn't, but it probably shouldn't cause an early return. Both optimizations should get the opportunity to run if they might be applicable.
- Change Align to use PtrOffset
- Make sure that performTBISimplification optimisation can run
- Add TokenFactor node
LGTM, thanks for the latest changes. Some small stylistic comments inline.
llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
- Line 18082: Nit: I think this is easier to read if you keep "nontemporal loads" together. Also, period at end of sentence.
- Line 18108: Nit: utilise -> utilize (American spelling).
- Line 18125: Nit: add a newline after here, to separate blocks.
- Line 18127: Nit: "UNDEF vector" instead of "null value vector"?
- Line 18149: Nit: use ConcatVT to match the other names here.
- Line 18164: Add a newline before the new function.
Thanks for the update. 2 more small comments, but I'll fix them before committing the change on your behalf.
llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
- Line 18082: Looks like there's another typo: nontermporal -> nontemporal.
- Line 18125: The newline was meant to go after the } to separate it from the next block, not before it.