[AArch64][SVE] Add intrinsics for non-temporal gather-loads/scatter-stores
This patch adds the following LLVM IR intrinsics:
- SVE non-temporal gather loads
  - @llvm.aarch64.sve.ldnt1.gather
  - @llvm.aarch64.sve.ldnt1.gather.uxtw
  - @llvm.aarch64.sve.ldnt1.gather.scalar.offset
- SVE non-temporal scatter stores
  - @llvm.aarch64.sve.stnt1.scatter
  - @llvm.aarch64.sve.stnt1.scatter.uxtw
  - @llvm.aarch64.sve.stnt1.scatter.scalar.offset
These intrinsics are mapped to the corresponding SVE instructions
(example for half-words, zero-extending):
- ldnt1h { z0.s }, p0/z, [z0.s, x0]
- stnt1h { z0.s }, p0, [z0.s, x0]
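
As a minimal IR sketch of the zero-extending half-word gather above
(the .nxv4i16 type-mangling suffix and the %pg/%base/%offsets names are
illustrative, following the usual SVE intrinsic conventions):

  define <vscale x 4 x i32> @gldnt1h_uxtw(<vscale x 4 x i1> %pg, i16* %base,
                                          <vscale x 4 x i32> %offsets) {
    ; expected selection: ldnt1h { z0.s }, p0/z, [z0.s, x0]
    %vals = call <vscale x 4 x i16> @llvm.aarch64.sve.ldnt1.gather.uxtw.nxv4i16(
                <vscale x 4 x i1> %pg, i16* %base, <vscale x 4 x i32> %offsets)
    ; the zero-extension is expressed as a separate zext, to be folded
    ; into the extending load during instruction selection
    %res = zext <vscale x 4 x i16> %vals to <vscale x 4 x i32>
    ret <vscale x 4 x i32> %res
  }

  declare <vscale x 4 x i16> @llvm.aarch64.sve.ldnt1.gather.uxtw.nxv4i16(
              <vscale x 4 x i1>, i16*, <vscale x 4 x i32>)
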
Note that for non-temporal gathers/scatters, the SVE spec defines only
one instruction type: "vector + scalar". For this reason, we swap the
arguments when processing the following intrinsics (which implement the
"scalar + vector" addressing mode):
- @llvm.aarch64.sve.ldnt1.gather
- @llvm.aarch64.sve.ldnt1.gather.uxtw
- @llvm.aarch64.sve.stnt1.scatter
- @llvm.aarch64.sve.stnt1.scatter.uxtw
In other words, all intrinsics for gather-loads and scatter-stores
implemented in this patch are mapped to the same load and store
instruction, respectively.
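
For illustration, a hedged sketch of both addressing modes (the
type-mangling suffixes and value names are illustrative); both calls
should select to the same ldnt1w instruction:

  ; "scalar + vector": scalar base, vector of offsets; the operands are
  ; swapped during selection to fit the "vector + scalar" instruction
  %a = call <vscale x 4 x i32> @llvm.aarch64.sve.ldnt1.gather.uxtw.nxv4i32(
              <vscale x 4 x i1> %pg, i32* %base, <vscale x 4 x i32> %offsets)

  ; "vector + scalar": vector of bases, scalar offset; matches the
  ; instruction form directly, so no swap is needed
  %b = call <vscale x 4 x i32>
            @llvm.aarch64.sve.ldnt1.gather.scalar.offset.nxv4i32.nxv4i32(
              <vscale x 4 x i1> %pg, <vscale x 4 x i32> %bases, i64 %offset)

  ; both: ldnt1w { z0.s }, p0/z, [z0.s, x0]
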
The sve2_mem_gldnt_vs multiclass (and its counterpart for scatter
stores) from SVEInstrFormats.td was split into:
- sve2_mem_gldnt_vec_vs_32_ptrs (32-bit wide base addresses)
- sve2_mem_gldnt_vec_vs_64_ptrs (64-bit wide base addresses)
This is consistent with what we did for
@llvm.aarch64.sve.ld1.scalar.offset and highlights the actual split in
the spec and the implementation.
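
A hedged sketch of the two base-address widths behind that split
(again, the mangling suffixes and value names are illustrative):

  ; 32-bit base addresses held in a .s vector
  ; -> handled by sve2_mem_gldnt_vec_vs_32_ptrs
  %s = call <vscale x 4 x i32>
            @llvm.aarch64.sve.ldnt1.gather.scalar.offset.nxv4i32.nxv4i32(
              <vscale x 4 x i1> %pg.s, <vscale x 4 x i32> %bases32, i64 %off)

  ; 64-bit base addresses held in a .d vector
  ; -> handled by sve2_mem_gldnt_vec_vs_64_ptrs
  %d = call <vscale x 2 x i64>
            @llvm.aarch64.sve.ldnt1.gather.scalar.offset.nxv2i64.nxv2i64(
              <vscale x 2 x i1> %pg.d, <vscale x 2 x i64> %bases64, i64 %off)
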
Can you derive from AdvSIMD_GatherLoad_VectorBase_Intrinsic instead?
(and something similar for the scatter store)
This also makes it clearer that these have exactly the same form as the normal gathers.