Index: docs/LangRef.rst
===================================================================
--- docs/LangRef.rst
+++ docs/LangRef.rst
@@ -1468,6 +1468,16 @@
     This attribute by itself does not imply restrictions on
     inter-procedural optimizations. All of the semantic effects the
     patching may have must be separately conveyed via the linkage type.
+``"probe-stack"``
+    This attribute indicates that the function will trigger a guard region
+    at the end of the stack. It ensures that stack accesses never occur
+    further apart than the size of the guard region from a previous stack
+    access, so the guard region cannot be skipped. It takes one required
+    string value, the name of the stack probing function that will be called.
+
+    If a function that has a ``"probe-stack"`` attribute is inlined into a
+    function that doesn't have a ``"probe-stack"`` attribute, then the
+    resulting function will have a ``"probe-stack"`` attribute.
 ``readnone``
     On a function, this attribute indicates that the function computes its
     result (or decides to unwind an exception) based strictly on its arguments,
@@ -4568,13 +4578,13 @@
        int i;   // offset 0
        float f; // offset 4
      };
- 
+
      struct Outer {
        float f;     // offset 0
        double d;    // offset 4
        struct Inner inner_a;  // offset 12
      };
- 
+
      void f(struct Outer* outer, struct Inner* inner, float* f, int* i, char* c) {
        outer->f = 0;            // tag0: (OuterStructTy, FloatScalarTy, 0)
        outer->inner_a.i = 0;    // tag1: (OuterStructTy, IntScalarTy, 12)
@@ -5110,10 +5120,10 @@
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 The ``invariant.group`` metadata may be attached to ``load``/``store`` instructions.
-The existence of the ``invariant.group`` metadata on the instruction tells 
-the optimizer that every ``load`` and ``store`` to the same pointer operand 
-within the same invariant group can be assumed to load or store the same 
-value (but see the ``llvm.invariant.group.barrier`` intrinsic which affects 
+The existence of the ``invariant.group`` metadata on the instruction tells
+the optimizer that every ``load`` and ``store`` to the same pointer operand
+within the same invariant group can be assumed to load or store the same
+value (but see the ``llvm.invariant.group.barrier`` intrinsic which affects
 when two pointers are considered the same). Pointers returned by bitcast or
 getelementptr with only zero indices are considered the same.
@@ -5126,26 +5136,26 @@
   %ptr = alloca i8
   store i8 42, i8* %ptr, !invariant.group !0
   call void @foo(i8* %ptr)
- 
+
   %a = load i8, i8* %ptr, !invariant.group !0 ; Can assume that value under %ptr didn't change
   call void @foo(i8* %ptr)
   %b = load i8, i8* %ptr, !invariant.group !1 ; Can't assume anything, because group changed
- 
-  %newPtr = call i8* @getPointer(i8* %ptr) 
+
+  %newPtr = call i8* @getPointer(i8* %ptr)
   %c = load i8, i8* %newPtr, !invariant.group !0 ; Can't assume anything, because we only have information about %ptr
- 
+
   %unknownValue = load i8, i8* @unknownPtr
   store i8 %unknownValue, i8* %ptr, !invariant.group !0 ; Can assume that %unknownValue == 42
- 
+
   call void @foo(i8* %ptr)
   %newPtr2 = call i8* @llvm.invariant.group.barrier(i8* %ptr)
   %d = load i8, i8* %newPtr2, !invariant.group !0  ; Can't step through invariant.group.barrier to get value of %ptr
- 
+
  ...
  declare void @foo(i8*)
  declare i8* @getPointer(i8*)
  declare i8* @llvm.invariant.group.barrier(i8*)
- 
+
  !0 = !{!"magic ptr"}
  !1 = !{!"other ptr"}
@@ -5154,7 +5164,7 @@
 to the SSA value of the pointer operand.
 
 .. code-block:: llvm
- 
+
   %v = load i8, i8* %x, !invariant.group !0
   ; if %x mustalias %y then we can replace the above instruction with
   %v = load i8, i8* %y
@@ -6608,9 +6618,9 @@
 Note that unsigned integer remainder and signed integer remainder are
 distinct operations; for signed integer remainder, use '``srem``'.
- 
+
 Taking the remainder of a division by zero is undefined behavior.
-For vectors, if any element of the divisor is zero, the operation has 
+For vectors, if any element of the divisor is zero, the operation has
 undefined behavior.
 
 Example:
@@ -6662,7 +6672,7 @@
 distinct operations; for unsigned integer remainder, use '``urem``'.
 
 Taking the remainder of a division by zero is undefined behavior.
-For vectors, if any element of the divisor is zero, the operation has 
+For vectors, if any element of the divisor is zero, the operation has
 undefined behavior.
 Overflow also leads to undefined behavior; this is a rare case, but can
 occur, for example, by taking the remainder of a 32-bit division of
@@ -7535,7 +7545,7 @@
 instructions to save cache bandwidth, such as the ``MOVNT`` instruction on
 x86.
 
-The optional ``!invariant.group`` metadata must reference a 
+The optional ``!invariant.group`` metadata must reference a
 single metadata name ``<index>``. See ``invariant.group`` metadata.
 
 Semantics:
@@ -7641,10 +7651,10 @@
 to operate on, a value to compare to the value currently be at that address,
 and a new value to place at that address if the compared values are
 equal. The type of '<cmp>' must be an integer or pointer type whose
-bit width is a power of two greater than or equal to eight and less 
+bit width is a power of two greater than or equal to eight and less
 than or equal to a target-specific size limit. '<cmp>' and '<new>' must
-have the same type, and the type of '<pointer>' must be a pointer to 
-that type. If the ``cmpxchg`` is marked as ``volatile``, then the 
+have the same type, and the type of '<pointer>' must be a pointer to
+that type. If the ``cmpxchg`` is marked as ``volatile``, then the
 optimizer is not allowed to modify the number or order of execution of this
 ``cmpxchg`` with other :ref:`volatile operations <volatile>`.
@@ -8931,7 +8941,7 @@
    ``tail`` or ``musttail`` markers to the call. It is used to prevent tail
    call optimization from being performed on the call.
 
-#. The optional ``fast-math flags`` marker indicates that the call has one or more 
+#. The optional ``fast-math flags`` marker indicates that the call has one or more
    :ref:`fast-math flags <fastmath>`, which are optimization hints to enable
    otherwise unsafe floating-point optimizations. Fast-math flags are only valid
    for calls that return a floating-point scalar or vector type.
@@ -12669,7 +12679,7 @@
 Overview:
 """""""""
 
-The '``llvm.invariant.group.barrier``' intrinsic can be used when an invariant 
+The '``llvm.invariant.group.barrier``' intrinsic can be used when an invariant
 established by invariant.group metadata no longer holds, to obtain a new pointer
 value that does not carry the invariant information.
@@ -12683,7 +12693,7 @@
 Semantics:
 """"""""""
 
-Returns another pointer that aliases its argument but which is considered different 
+Returns another pointer that aliases its argument but which is considered different
 for the purposes of ``load``/``store`` ``invariant.group`` metadata.
 
 Constrained Floating Point Intrinsics
@@ -12761,7 +12771,7 @@
 Any FP exception that would have been raised by the original code must be raised
 by the transformed code, and the transformed code must not raise any FP
 exceptions that would not have been raised by the original code. This is the
This is the -exception behavior argument that will be used if the code being compiled reads +exception behavior argument that will be used if the code being compiled reads the FP exception status flags, but this mode can also be used with code that unmasks FP exceptions. @@ -12779,7 +12789,7 @@ :: - declare + declare @llvm.experimental.constrained.fadd( , , metadata , metadata ) @@ -12816,7 +12826,7 @@ :: - declare + declare @llvm.experimental.constrained.fsub( , , metadata , metadata ) @@ -12853,7 +12863,7 @@ :: - declare + declare @llvm.experimental.constrained.fmul( , , metadata , metadata ) @@ -12890,7 +12900,7 @@ :: - declare + declare @llvm.experimental.constrained.fdiv( , , metadata , metadata ) @@ -12927,7 +12937,7 @@ :: - declare + declare @llvm.experimental.constrained.frem( , , metadata , metadata ) @@ -12956,7 +12966,7 @@ The value produced is the floating point remainder from the division of the two value operands and has the same type as the operands. The remainder has the -same sign as the dividend. +same sign as the dividend. Constrained libm-equivalent Intrinsics @@ -12981,7 +12991,7 @@ :: - declare + declare @llvm.experimental.constrained.sqrt( , metadata , metadata ) @@ -13018,7 +13028,7 @@ :: - declare + declare @llvm.experimental.constrained.pow( , , metadata , metadata ) @@ -13055,7 +13065,7 @@ :: - declare + declare @llvm.experimental.constrained.powi( , i32 , metadata , metadata ) @@ -13094,7 +13104,7 @@ :: - declare + declare @llvm.experimental.constrained.sin( , metadata , metadata ) @@ -13130,7 +13140,7 @@ :: - declare + declare @llvm.experimental.constrained.cos( , metadata , metadata ) @@ -13166,7 +13176,7 @@ :: - declare + declare @llvm.experimental.constrained.exp( , metadata , metadata ) @@ -13201,7 +13211,7 @@ :: - declare + declare @llvm.experimental.constrained.exp2( , metadata , metadata ) @@ -13237,7 +13247,7 @@ :: - declare + declare @llvm.experimental.constrained.log( , metadata , metadata ) @@ -13273,7 +13283,7 @@ :: - declare + declare @llvm.experimental.constrained.log10( , metadata , metadata ) @@ -13308,7 +13318,7 @@ :: - declare + declare @llvm.experimental.constrained.log2( , metadata , metadata ) @@ -13343,7 +13353,7 @@ :: - declare + declare @llvm.experimental.constrained.rint( , metadata , metadata ) @@ -13382,7 +13392,7 @@ :: - declare + declare @llvm.experimental.constrained.nearbyint( , metadata , metadata ) @@ -14122,7 +14132,7 @@ memory from the source location to the destination location. These locations are not allowed to overlap. The memory copy is performed as a sequence of load/store operations where each access is guaranteed to be a multiple of ``element_size`` bytes wide and -aligned at an ``element_size`` boundary. +aligned at an ``element_size`` boundary. The order of the copy is unspecified. 
 The same value may be read from the source buffer many times, but only one
 write is issued to the destination buffer per
Index: include/llvm/IR/Attributes.td
===================================================================
--- include/llvm/IR/Attributes.td
+++ include/llvm/IR/Attributes.td
@@ -214,3 +214,4 @@
 def : MergeRule<"setOR<NoImplicitFloatAttr>">;
 def : MergeRule<"setOR<NoJumpTablesAttr>">;
 def : MergeRule<"adjustCallerSSPLevel">;
+def : MergeRule<"adjustCallerStackProbes">;
Index: lib/IR/Attributes.cpp
===================================================================
--- lib/IR/Attributes.cpp
+++ lib/IR/Attributes.cpp
@@ -1638,6 +1638,14 @@
   Caller.addFnAttr(Attribute::StackProtect);
 }
 
+/// \brief If the inlined function required stack probes, then ensure that
+/// the calling function has those too.
+static void adjustCallerStackProbes(Function &Caller, const Function &Callee) {
+  if (Callee.hasFnAttribute("probe-stack"))
+    Caller.addFnAttr("probe-stack",
+                     Callee.getFnAttribute("probe-stack").getValueAsString());
+}
+
 #define GET_ATTR_COMPAT_FUNC
 #include "AttributesCompatFunc.inc"
Index: test/Transforms/Inline/inline-probe-stack.ll
===================================================================
--- /dev/null
+++ test/Transforms/Inline/inline-probe-stack.ll
@@ -0,0 +1,12 @@
+; RUN: opt %s -inline -S | FileCheck %s
+
+define internal void @inner() "probe-stack"="__probestack" {
+  ret void
+}
+
+define void @outer() {
+  call void @inner()
+  ret void
+}
+; CHECK: define void @outer() #0
+; CHECK: attributes #0 = { "probe-stack"="__probestack" }
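
Not part of the patch: a minimal IR sketch of the intended end-to-end behavior of the new attribute and merge rule. The probe routine name ``__probestack`` simply mirrors the test above, and ``@callee``/``@caller`` are illustrative names.

.. code-block:: llvm

    ; The callee asks for stack probing via the new attribute; the string
    ; value names whatever probe routine the target runtime provides.
    define internal void @callee() "probe-stack"="__probestack" {
      %buf = alloca [65536 x i8]   ; frame large enough to need probing
      ret void
    }

    ; The caller starts out without the attribute.
    define void @caller() {
      call void @callee()
      ret void
    }

    ; After `opt -inline`, the large alloca now lives in @caller, and the
    ; adjustCallerStackProbes merge rule copies the attribute along with it,
    ; so the probing requirement is not lost:
    ;
    ;   define void @caller() #0 { ... }
    ;   attributes #0 = { "probe-stack"="__probestack" }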
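The ``cmpxchg`` paragraph above is touched only for whitespace, but since this patch reflows its operand/type rules, here is a self-contained sketch of a use that satisfies them; all names are hypothetical.

.. code-block:: llvm

    ; i32 satisfies the rules: an integer type whose bit width (32) is a
    ; power of two, at least eight, and within the target limit. %ptr is a
    ; pointer to that same type, and %cmp/%new share it.
    define i1 @try_swap(i32* %ptr, i32 %cmp, i32 %new) {
      %pair = cmpxchg i32* %ptr, i32 %cmp, i32 %new seq_cst seq_cst
      %ok = extractvalue { i32, i1 } %pair, 1   ; did the exchange happen?
      ret i1 %ok
    }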
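Likewise for the constrained intrinsics, whose declarations above are only reflowed: a sketch of one concrete call, assuming the documented ``!"round.dynamic"`` and ``!"fpexcept.strict"`` metadata arguments and the usual ``.f64`` overload suffix.

.. code-block:: llvm

    define double @strict_add(double %a, double %b) {
      ; The rounding mode is read from the FP environment at run time, and
      ; strict exception behavior keeps the call ordered with respect to
      ; code that reads the FP status flags.
      %sum = call double @llvm.experimental.constrained.fadd.f64(
                           double %a, double %b,
                           metadata !"round.dynamic",
                           metadata !"fpexcept.strict")
      ret double %sum
    }

    declare double @llvm.experimental.constrained.fadd.f64(double, double,
                                                           metadata, metadata)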