Index: ../docs/LangRef.rst
===================================================================
--- ../docs/LangRef.rst
+++ ../docs/LangRef.rst
@@ -11851,6 +11851,114 @@
        store i32 %val7, i32* %ptr7, align 4
 
+Masked Vector Expanding Load and Compressing Store Intrinsics
+-------------------------------------------------------------
+
+LLVM provides intrinsics for expanding load and compressing store operations. Data selected from a vector according to a mask is stored in consecutive memory addresses (compressed store), and vice versa (expanding load). These operations effectively map to "if (cond) a[j++] = v.i" and "if (cond) v.i = a[j++]" patterns, respectively. Note that when the mask starts with '1' bits followed by '0' bits, these operations are identical to :ref:`llvm.masked.store <int_mstore>` and :ref:`llvm.masked.load <int_mload>`.
+
+.. _int_expandload:
+
+'``llvm.masked.expandload.*``' Intrinsics
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Syntax:
+"""""""
+This is an overloaded intrinsic. Several values of integer, floating point or pointer data type are loaded from consecutive memory addresses and stored into the elements of a vector according to the mask.
+
+::
+
+ declare <16 x float> @llvm.masked.expandload.v16f32 (float* <ptr>, <16 x i1> <mask>, <16 x float> <passthru>)
+ declare <2 x i64> @llvm.masked.expandload.v2i64 (i64* <ptr>, <2 x i1> <mask>, <2 x i64> <passthru>)
+
+Overview:
+"""""""""
+
+Reads a number of scalar values sequentially from the memory location provided in '``ptr``' and spreads them into a vector. The '``mask``' holds a bit for each vector lane. The number of elements read from memory is equal to the number of '1' bits in the mask. The loaded elements are positioned in the destination vector according to the sequence of '1' and '0' bits in the mask. E.g., if the mask vector is '10010001', "expandload" reads 3 values from memory addresses ptr, ptr+1, ptr+2 and places them in lanes 0, 3 and 7, respectively. The masked-off lanes are filled by elements from the corresponding lanes of the '``passthru``' operand.
+
+
+Arguments:
+""""""""""
+
+The first operand is the base pointer for the load. It has the same underlying type as the element of the returned vector. The second operand, mask, is a vector of boolean values with the same number of elements as the return type. The third is a pass-through value that is used to fill the masked-off lanes of the result. The return type and the type of the '``passthru``' operand are the same vector types.
+
+Semantics:
+""""""""""
+
+The '``llvm.masked.expandload``' intrinsic is designed for reading multiple scalar values from adjacent memory addresses into possibly non-adjacent vector lanes in a single IR operation. It is useful for targets that support vector expanding loads and allows vectorizing loops with cross-iteration dependencies, as in the following example:
+
+.. code-block:: c
+
+    // In this loop we load from B and spread the elements into array A.
+    double *A, *B; int *C;
+    int j = 0;
+    for (int i = 0; i < size; ++i) {
+      if (C[i] != 0)
+        A[i] = B[j++];
+    }
+
+
+.. code-block:: llvm
+
+    ; Load several elements from array B and expand them in a vector.
+    ; The number of loaded elements is equal to the number of '1' elements in the mask.
+    %Tmp = call <8 x double> @llvm.masked.expandload.v8f64(double* %Bptr, <8 x i1> %mask, <8 x double> undef)
+    ; Store the result in A
+    call void @llvm.masked.store.v8f64.p0v8f64(<8 x double> %Tmp, <8 x double>* %Aptr, i32 8, <8 x i1> %mask)
+
+
+Other targets may support this intrinsic differently, for example, by lowering it into a sequence of conditional scalar load operations and shuffles.
+If all mask elements are '1', the intrinsic behavior is equivalent to the regular unmasked vector load.
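+
+As an illustration, one possible scalarized form of a ``<2 x double>`` expanding load is sketched below. This is only a sketch of the branch-based variant, not the exact sequence any particular target emits; ``%ptr``, ``%mask`` and ``%passthru`` are assumed to be defined earlier (e.g., as function arguments), and a real lowering may use shuffles instead of branches:
+
+.. code-block:: llvm
+
+    entry:
+      %m0 = extractelement <2 x i1> %mask, i32 0
+      br i1 %m0, label %cond.load0, label %else0
+
+    cond.load0:
+      ; Lane 0 is active: load one scalar and advance the pointer.
+      %v0 = load double, double* %ptr, align 8
+      %ins0 = insertelement <2 x double> %passthru, double %v0, i32 0
+      %ptr.inc0 = getelementptr double, double* %ptr, i32 1
+      br label %else0
+
+    else0:
+      %vec0 = phi <2 x double> [ %passthru, %entry ], [ %ins0, %cond.load0 ]
+      %ptr0 = phi double* [ %ptr, %entry ], [ %ptr.inc0, %cond.load0 ]
+      %m1 = extractelement <2 x i1> %mask, i32 1
+      br i1 %m1, label %cond.load1, label %merge
+
+    cond.load1:
+      ; Lane 1 is active: load the next consecutive scalar.
+      %v1 = load double, double* %ptr0, align 8
+      %ins1 = insertelement <2 x double> %vec0, double %v1, i32 1
+      br label %merge
+
+    merge:
+      ; %Res holds the expanded vector; inactive lanes keep %passthru.
+      %Res = phi <2 x double> [ %vec0, %else0 ], [ %ins1, %cond.load1 ]
+
+Note how the pointer advances only past lanes whose mask bit is '1'; this is what distinguishes the expanding load from a regular masked load, where each lane reads from its own fixed offset.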
+
+.. _int_compressstore:
+
+'``llvm.masked.compressstore.*``' Intrinsics
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Syntax:
+"""""""
+This is an overloaded intrinsic. A number of scalar values of integer, floating point or pointer data type are collected from an input vector and stored into adjacent memory addresses. A mask defines which elements to collect from the vector.
+
+::
+
+ declare void @llvm.masked.compressstore.v8i32 (<8 x i32> <value>, i32* <ptr>, <8 x i1> <mask>)
+ declare void @llvm.masked.compressstore.v16f32 (<16 x float> <value>, float* <ptr>, <16 x i1> <mask>)
+
+Overview:
+"""""""""
+
+Selects elements from the input vector '``value``' according to the '``mask``'. All selected elements are written into adjacent memory addresses starting at address '``ptr``', from lower to higher. The mask holds a bit for each vector lane, and is used to select the elements to be stored. The number of elements to be stored is equal to the number of active bits in the mask.
+
+Arguments:
+""""""""""
+
+The first operand is the input vector, from which elements are collected and written to memory. The second operand is the base pointer for the store; it has the same underlying type as the element of the input vector operand. The third operand is the mask, a vector of boolean values. The mask and the input vector must have the same number of vector elements.
+
+
+Semantics:
+""""""""""
+
+The '``llvm.masked.compressstore``' intrinsic is designed for compressing data in memory. It collects elements from possibly non-adjacent lanes of a vector and stores them contiguously in memory in one IR operation. It is useful for targets that support compressing store operations and allows vectorizing loops with cross-iteration dependencies, as in the following example:
+
+.. code-block:: c
+
+    // In this loop we load elements from A and store them consecutively in B
+    double *A, *B; int *C;
+    int j = 0;
+    for (int i = 0; i < size; ++i) {
+      if (C[i] != 0)
+        B[j++] = A[i];
+    }
+
+
+.. code-block:: llvm
+
+    ; Load elements from A.
+    %Tmp = call <8 x double> @llvm.masked.load.v8f64.p0v8f64(<8 x double>* %Aptr, i32 8, <8 x i1> %mask, <8 x double> undef)
+    ; Store all selected elements consecutively in array B
+    call void @llvm.masked.compressstore.v8f64(<8 x double> %Tmp, double* %Bptr, <8 x i1> %mask)
+
+
+Other targets may support this intrinsic differently, for example, by lowering it into a sequence of branches that guard scalar store operations.
+
+
 Memory Use Markers
 ------------------