Index: docs/LangRef.rst
===================================================================
--- docs/LangRef.rst
+++ docs/LangRef.rst
@@ -2181,12 +2181,21 @@
same address in this global order. This corresponds to the C++0x/C1x
``memory_order_seq_cst`` and Java volatile.
-.. _singlethread:
+.. _syncscope:
-If an atomic operation is marked ``singlethread``, it only *synchronizes
-with* or participates in modification and seq\_cst total orderings with
-other operations running in the same thread (for example, in signal
-handlers).
+If an atomic operation is marked ``syncscope("singlethread")``, it only
+*synchronizes with*, and only participates in the seq\_cst total orderings of,
+other operations running in the same thread (for example, in signal handlers).
+
+If an atomic operation is marked ``syncscope("<target-scope>")``, where
+``<target-scope>`` is a target specific synchronization scope, then it
+*synchronizes with*, and participates in the seq\_cst total orderings of, other
+atomic operations marked ``syncscope("<target-scope>")`` that are members of
+the same instance of scope ``<target-scope>``.
+
+Otherwise, an atomic operation that is not marked ``syncscope("singlethread")``
+or ``syncscope("<target-scope>")`` *synchronizes with*, and participates in the
+global seq\_cst total orderings of, other operations that are not marked
+``syncscope("singlethread")`` or ``syncscope("<target-scope>")``.
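The three cases above can be illustrated side by side (editorial example, not part of the patch; ``"agent"`` stands in for a hypothetical target-specific scope name):

```llvm
; Case 1: synchronizes only within the current thread (signal handlers).
%a = load atomic i32, i32* %p syncscope("singlethread") seq_cst, align 4

; Case 2: synchronizes only with operations in the same instance of the
; "agent" scope ("agent" is a hypothetical target-defined name).
store atomic i32 %v, i32* %p syncscope("agent") seq_cst, align 4

; Case 3: no syncscope - participates in the global seq_cst total order.
%b = load atomic i32, i32* %p seq_cst, align 4
```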
.. _fastmath:
@@ -7292,7 +7301,7 @@
::
<result> = load [volatile] <ty>, <ty>* <pointer>[, align <alignment>][, !nontemporal !<index>][, !invariant.load !<index>][, !invariant.group !<index>][, !nonnull !<index>][, !dereferenceable !<deref_bytes_node>][, !dereferenceable_or_null !<deref_bytes_node>][, !align !<align_node>]
- <result> = load atomic [volatile] <ty>, <ty>* <pointer> [singlethread] <ordering>, align <alignment> [, !invariant.group !<index>]
+ <result> = load atomic [volatile] <ty>, <ty>* <pointer> [syncscope("<target-scope>")] <ordering>, align <alignment> [, !invariant.group !<index>]
!<index> = !{ i32 1 }
!<deref_bytes_node> = !{i64 <dereferenceable_bytes>}
!<align_node> = !{ i64 <value_alignment> }
@@ -7313,15 +7322,15 @@
:ref:`volatile operations <volatile>`.
If the ``load`` is marked as ``atomic``, it takes an extra :ref:`ordering
-<ordering>` and optional ``singlethread`` argument. The ``release`` and
+<ordering>` and optional ``syncscope("<target-scope>")`` argument. The ``release`` and
``acq_rel`` orderings are not valid on ``load`` instructions. Atomic loads
produce :ref:`defined <memmodel>` results when they may see multiple atomic
stores. The type of the pointee must be an integer, pointer, or floating-point
type whose bit width is a power of two greater than or equal to eight and less
than or equal to a target-specific size limit. ``align`` must be explicitly
specified on atomic loads, and the load has undefined behavior if the alignment
-is not set to a value which is at least the size in bytes of the
-pointee. ``!nontemporal`` does not have any defined semantics for atomic loads.
+is not set to a value which is at least the size in bytes of the pointee.
+``!nontemporal`` does not have any defined semantics for atomic loads.
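The atomic form described above can be sketched with a concrete example (editorial illustration, not part of the patch; types and value names are arbitrary):

```llvm
; An acquire load restricted to single-thread scope, with the mandatory
; explicit alignment (4 bytes, at least the size of the i32 pointee).
%val = load atomic i32, i32* %ptr syncscope("singlethread") acquire, align 4
```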
The optional constant ``align`` argument specifies the alignment of the
operation (that is, the alignment of the memory address). A value of 0
@@ -7421,7 +7430,7 @@
::
store [volatile] <ty> <value>, <ty>* <pointer>[, align <alignment>][, !nontemporal !<index>][, !invariant.group !<index>] ; yields void
- store atomic [volatile] <ty> <value>, <ty>* <pointer> [singlethread] <ordering>, align <alignment> [, !invariant.group !<index>] ; yields void
+ store atomic [volatile] <ty> <value>, <ty>* <pointer> [syncscope("<target-scope>")] <ordering>, align <alignment> [, !invariant.group !<index>] ; yields void
Overview:
"""""""""
@@ -7441,8 +7450,8 @@
structural type <t_struct>`) can be stored.
If the ``store`` is marked as ``atomic``, it takes an extra :ref:`ordering
-<ordering>` and optional ``singlethread`` argument. The ``acquire`` and
-``acq_rel`` orderings aren't valid on ``store`` instructions. Atomic loads
+<ordering>` and optional ``syncscope("<target-scope>")`` argument. The ``acquire`` and
+``acq_rel`` orderings aren't valid on ``store`` instructions. Atomic loads
produce :ref:`defined <memmodel>` results when they may see multiple atomic
stores. The type of the pointee must be an integer, pointer, or floating-point
type whose bit width is a power of two greater than or equal to eight and less
@@ -7509,7 +7518,7 @@
::
- fence [singlethread] <ordering> ; yields void
+ fence [syncscope("<target-scope>")] <ordering> ; yields void
Overview:
"""""""""
@@ -7543,17 +7552,17 @@
``acquire`` and ``release`` semantics specified above, participates in
the global program order of other ``seq_cst`` operations and/or fences.
-The optional ":ref:`singlethread <singlethread>`" argument specifies
-that the fence only synchronizes with other fences in the same thread.
-(This is useful for interacting with signal handlers.)
+A ``fence`` instruction can also take an optional
+":ref:`syncscope <syncscope>`" argument.
Example:
""""""""
.. code-block:: llvm
- fence acquire ; yields void
- fence singlethread seq_cst ; yields void
+ fence acquire ; yields void
+ fence syncscope("singlethread") seq_cst ; yields void
+ fence syncscope("agent") seq_cst ; yields void
.. _i_cmpxchg:
@@ -7565,7 +7574,7 @@
::
- cmpxchg [weak] [volatile] <ty>* <pointer>, <ty> <cmp>, <ty> <new> [singlethread] <success ordering> <failure ordering> ; yields { ty, i1 }
+ cmpxchg [weak] [volatile] <ty>* <pointer>, <ty> <cmp>, <ty> <new> [syncscope("<target-scope>")] <success ordering> <failure ordering> ; yields { ty, i1 }
Overview:
"""""""""
@@ -7594,10 +7603,8 @@
stronger than that on success, and the failure ordering cannot be either
``release`` or ``acq_rel``.
-The optional "``singlethread``" argument declares that the ``cmpxchg``
-is only atomic with respect to code (usually signal handlers) running in
-the same thread as the ``cmpxchg``. Otherwise the cmpxchg is atomic with
-respect to all other code in the system.
+A ``cmpxchg`` instruction can also take an optional
+":ref:`syncscope <syncscope>`" argument.
The pointer passed into cmpxchg must have alignment greater than or
equal to the size in memory of the operand.
@@ -7651,7 +7658,7 @@
::
- atomicrmw [volatile] <operation> <ty>* <pointer>, <ty> <value> [singlethread] <ordering> ; yields ty
+ atomicrmw [volatile] <operation> <ty>* <pointer>, <ty> <value> [syncscope("<target-scope>")] <ordering> ; yields ty
Overview:
"""""""""
@@ -7685,6 +7692,9 @@
order of execution of this ``atomicrmw`` with other :ref:`volatile
operations <volatile>`.
+An ``atomicrmw`` instruction can also take an optional
+":ref:`syncscope <syncscope>`" argument.
+
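As an editorial sketch of the resulting syntax (``"agent"`` is a hypothetical target-specific scope name, not one defined by this patch):

```llvm
; Atomically increment *%ptr, synchronizing only within the hypothetical
; "agent" scope.
%old = atomicrmw add i32* %ptr, i32 1 syncscope("agent") seq_cst
```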
Semantics:
""""""""""
Index: include/llvm/Bitcode/LLVMBitCodes.h
===================================================================
--- include/llvm/Bitcode/LLVMBitCodes.h
+++ include/llvm/Bitcode/LLVMBitCodes.h
@@ -55,6 +55,8 @@
METADATA_KIND_BLOCK_ID,
STRTAB_BLOCK_ID,
+
+ SYNC_SCOPE_NAMES_BLOCK_ID,
};
/// Identification block contains a string that describes the producer details,
@@ -168,6 +170,10 @@
OPERAND_BUNDLE_TAG = 1, // TAG: [strchr x N]
};
+enum SyncScopeNameCode {
+ SYNC_SCOPE_NAME = 1,
+};
+
// Value symbol table codes.
enum ValueSymtabCodes {
VST_CODE_ENTRY = 1, // VST_ENTRY: [valueid, namechar x N]
@@ -392,12 +398,6 @@
ORDERING_SEQCST = 6
};
-/// Encoded SynchronizationScope values.
-enum AtomicSynchScopeCodes {
- SYNCHSCOPE_SINGLETHREAD = 0,
- SYNCHSCOPE_CROSSTHREAD = 1
-};
-
/// Markers and flags for call instruction.
enum CallMarkersFlags {
CALL_TAIL = 0,
Index: include/llvm/CodeGen/MachineFunction.h
===================================================================
--- include/llvm/CodeGen/MachineFunction.h
+++ include/llvm/CodeGen/MachineFunction.h
@@ -642,7 +642,7 @@
MachinePointerInfo PtrInfo, MachineMemOperand::Flags f, uint64_t s,
unsigned base_alignment, const AAMDNodes &AAInfo = AAMDNodes(),
const MDNode *Ranges = nullptr,
- SynchronizationScope SynchScope = CrossThread,
+ SyncScope::ID SSID = SyncScope::System,
AtomicOrdering Ordering = AtomicOrdering::NotAtomic,
AtomicOrdering FailureOrdering = AtomicOrdering::NotAtomic);
Index: include/llvm/CodeGen/MachineMemOperand.h
===================================================================
--- include/llvm/CodeGen/MachineMemOperand.h
+++ include/llvm/CodeGen/MachineMemOperand.h
@@ -119,8 +119,8 @@
private:
/// Atomic information for this memory operation.
struct MachineAtomicInfo {
- /// Synchronization scope for this memory operation.
- unsigned SynchScope : 1; // enum SynchronizationScope
+ /// Synchronization scope ID for this memory operation.
+ unsigned SSID : 8; // SyncScope::ID
/// Atomic ordering requirements for this memory operation. For cmpxchg
/// atomic operations, atomic ordering requirements when store occurs.
unsigned Ordering : 4; // enum AtomicOrdering
@@ -147,7 +147,7 @@
unsigned base_alignment,
const AAMDNodes &AAInfo = AAMDNodes(),
const MDNode *Ranges = nullptr,
- SynchronizationScope SynchScope = CrossThread,
+ SyncScope::ID SSID = SyncScope::System,
AtomicOrdering Ordering = AtomicOrdering::NotAtomic,
AtomicOrdering FailureOrdering = AtomicOrdering::NotAtomic);
@@ -197,9 +197,9 @@
/// Return the range tag for the memory reference.
const MDNode *getRanges() const { return Ranges; }
- /// Return the synchronization scope for this memory operation.
- SynchronizationScope getSynchScope() const {
- return static_cast<SynchronizationScope>(AtomicInfo.SynchScope);
+ /// Returns the synchronization scope ID for this memory operation.
+ SyncScope::ID getSyncScopeID() const {
+ return static_cast<SyncScope::ID>(AtomicInfo.SSID);
}
/// Return the atomic ordering requirements for this memory operation. For
Index: include/llvm/CodeGen/SelectionDAG.h
===================================================================
--- include/llvm/CodeGen/SelectionDAG.h
+++ include/llvm/CodeGen/SelectionDAG.h
@@ -882,7 +882,7 @@
SDValue Cmp, SDValue Swp, MachinePointerInfo PtrInfo,
unsigned Alignment, AtomicOrdering SuccessOrdering,
AtomicOrdering FailureOrdering,
- SynchronizationScope SynchScope);
+ SyncScope::ID SSID);
SDValue getAtomicCmpSwap(unsigned Opcode, const SDLoc &dl, EVT MemVT,
SDVTList VTs, SDValue Chain, SDValue Ptr,
SDValue Cmp, SDValue Swp, MachineMemOperand *MMO);
@@ -892,7 +892,7 @@
SDValue getAtomic(unsigned Opcode, const SDLoc &dl, EVT MemVT, SDValue Chain,
SDValue Ptr, SDValue Val, const Value *PtrVal,
unsigned Alignment, AtomicOrdering Ordering,
- SynchronizationScope SynchScope);
+ SyncScope::ID SSID);
SDValue getAtomic(unsigned Opcode, const SDLoc &dl, EVT MemVT, SDValue Chain,
SDValue Ptr, SDValue Val, MachineMemOperand *MMO);
Index: include/llvm/CodeGen/SelectionDAGNodes.h
===================================================================
--- include/llvm/CodeGen/SelectionDAGNodes.h
+++ include/llvm/CodeGen/SelectionDAGNodes.h
@@ -1178,8 +1178,8 @@
/// Returns the Ranges that describes the dereference.
const MDNode *getRanges() const { return MMO->getRanges(); }
- /// Return the synchronization scope for this memory operation.
- SynchronizationScope getSynchScope() const { return MMO->getSynchScope(); }
+ /// Returns the synchronization scope ID for this memory operation.
+ SyncScope::ID getSyncScopeID() const { return MMO->getSyncScopeID(); }
/// Return the atomic ordering requirements for this memory operation. For
/// cmpxchg atomic operations, return the atomic ordering requirements when
Index: include/llvm/IR/IRBuilder.h
===================================================================
--- include/llvm/IR/IRBuilder.h
+++ include/llvm/IR/IRBuilder.h
@@ -1183,22 +1183,22 @@
return SI;
}
FenceInst *CreateFence(AtomicOrdering Ordering,
- SynchronizationScope SynchScope = CrossThread,
+ SyncScope::ID SSID = SyncScope::System,
const Twine &Name = "") {
- return Insert(new FenceInst(Context, Ordering, SynchScope), Name);
+ return Insert(new FenceInst(Context, Ordering, SSID), Name);
}
AtomicCmpXchgInst *
CreateAtomicCmpXchg(Value *Ptr, Value *Cmp, Value *New,
AtomicOrdering SuccessOrdering,
AtomicOrdering FailureOrdering,
- SynchronizationScope SynchScope = CrossThread) {
+ SyncScope::ID SSID = SyncScope::System) {
return Insert(new AtomicCmpXchgInst(Ptr, Cmp, New, SuccessOrdering,
- FailureOrdering, SynchScope));
+ FailureOrdering, SSID));
}
AtomicRMWInst *CreateAtomicRMW(AtomicRMWInst::BinOp Op, Value *Ptr, Value *Val,
AtomicOrdering Ordering,
- SynchronizationScope SynchScope = CrossThread) {
- return Insert(new AtomicRMWInst(Op, Ptr, Val, Ordering, SynchScope));
+ SyncScope::ID SSID = SyncScope::System) {
+ return Insert(new AtomicRMWInst(Op, Ptr, Val, Ordering, SSID));
}
Value *CreateGEP(Value *Ptr, ArrayRef<Value *> IdxList,
const Twine &Name = "") {
Index: include/llvm/IR/Instructions.h
===================================================================
--- include/llvm/IR/Instructions.h
+++ include/llvm/IR/Instructions.h
@@ -52,11 +52,6 @@
class DataLayout;
class LLVMContext;
-enum SynchronizationScope {
- SingleThread = 0,
- CrossThread = 1
-};
-
//===----------------------------------------------------------------------===//
// AllocaInst Class
//===----------------------------------------------------------------------===//
@@ -195,17 +190,16 @@
LoadInst(Value *Ptr, const Twine &NameStr, bool isVolatile,
unsigned Align, BasicBlock *InsertAtEnd);
LoadInst(Value *Ptr, const Twine &NameStr, bool isVolatile, unsigned Align,
- AtomicOrdering Order, SynchronizationScope SynchScope = CrossThread,
+ AtomicOrdering Order, SyncScope::ID SSID = SyncScope::System,
Instruction *InsertBefore = nullptr)
: LoadInst(cast<PointerType>(Ptr->getType())->getElementType(), Ptr,
- NameStr, isVolatile, Align, Order, SynchScope, InsertBefore) {}
+ NameStr, isVolatile, Align, Order, SSID, InsertBefore) {}
LoadInst(Type *Ty, Value *Ptr, const Twine &NameStr, bool isVolatile,
unsigned Align, AtomicOrdering Order,
- SynchronizationScope SynchScope = CrossThread,
+ SyncScope::ID SSID = SyncScope::System,
Instruction *InsertBefore = nullptr);
LoadInst(Value *Ptr, const Twine &NameStr, bool isVolatile,
- unsigned Align, AtomicOrdering Order,
- SynchronizationScope SynchScope,
+ unsigned Align, AtomicOrdering Order, SyncScope::ID SSID,
BasicBlock *InsertAtEnd);
LoadInst(Value *Ptr, const char *NameStr, Instruction *InsertBefore);
LoadInst(Value *Ptr, const char *NameStr, BasicBlock *InsertAtEnd);
@@ -235,34 +229,34 @@
void setAlignment(unsigned Align);
- /// Returns the ordering effect of this fence.
+ /// Returns the ordering constraint of this load instruction.
AtomicOrdering getOrdering() const {
return AtomicOrdering((getSubclassDataFromInstruction() >> 7) & 7);
}
- /// Set the ordering constraint on this load. May not be Release or
- /// AcquireRelease.
+ /// Sets the ordering constraint of this load instruction. May not be Release
+ /// or AcquireRelease.
void setOrdering(AtomicOrdering Ordering) {
setInstructionSubclassData((getSubclassDataFromInstruction() & ~(7 << 7)) |
((unsigned)Ordering << 7));
}
- SynchronizationScope getSynchScope() const {
- return SynchronizationScope((getSubclassDataFromInstruction() >> 6) & 1);
+ /// Returns the synchronization scope ID of this load instruction.
+ SyncScope::ID getSyncScopeID() const {
+ return SSID;
}
- /// Specify whether this load is ordered with respect to all
- /// concurrently executing threads, or only with respect to signal handlers
- /// executing in the same thread.
- void setSynchScope(SynchronizationScope xthread) {
- setInstructionSubclassData((getSubclassDataFromInstruction() & ~(1 << 6)) |
- (xthread << 6));
+ /// Sets the synchronization scope ID of this load instruction.
+ void setSyncScopeID(SyncScope::ID SSID) {
+ this->SSID = SSID;
}
+ /// Sets the ordering constraint and the synchronization scope ID of this load
+ /// instruction.
void setAtomic(AtomicOrdering Ordering,
- SynchronizationScope SynchScope = CrossThread) {
+ SyncScope::ID SSID = SyncScope::System) {
setOrdering(Ordering);
- setSynchScope(SynchScope);
+ setSyncScopeID(SSID);
}
bool isSimple() const { return !isAtomic() && !isVolatile(); }
@@ -297,6 +291,11 @@
void setInstructionSubclassData(unsigned short D) {
Instruction::setInstructionSubclassData(D);
}
+
+ /// The synchronization scope ID of this load instruction. Not quite enough
+ /// room in SubClassData for everything, so synchronization scope ID gets its
+ /// own field.
+ SyncScope::ID SSID;
};
//===----------------------------------------------------------------------===//
@@ -325,11 +324,10 @@
unsigned Align, BasicBlock *InsertAtEnd);
StoreInst(Value *Val, Value *Ptr, bool isVolatile,
unsigned Align, AtomicOrdering Order,
- SynchronizationScope SynchScope = CrossThread,
+ SyncScope::ID SSID = SyncScope::System,
Instruction *InsertBefore = nullptr);
StoreInst(Value *Val, Value *Ptr, bool isVolatile,
- unsigned Align, AtomicOrdering Order,
- SynchronizationScope SynchScope,
+ unsigned Align, AtomicOrdering Order, SyncScope::ID SSID,
BasicBlock *InsertAtEnd);
// allocate space for exactly two operands
@@ -358,34 +356,34 @@
void setAlignment(unsigned Align);
- /// Returns the ordering effect of this store.
+ /// Returns the ordering constraint of this store instruction.
AtomicOrdering getOrdering() const {
return AtomicOrdering((getSubclassDataFromInstruction() >> 7) & 7);
}
- /// Set the ordering constraint on this store. May not be Acquire or
- /// AcquireRelease.
+ /// Sets the ordering constraint of this store instruction. May not be
+ /// Acquire or AcquireRelease.
void setOrdering(AtomicOrdering Ordering) {
setInstructionSubclassData((getSubclassDataFromInstruction() & ~(7 << 7)) |
((unsigned)Ordering << 7));
}
- SynchronizationScope getSynchScope() const {
- return SynchronizationScope((getSubclassDataFromInstruction() >> 6) & 1);
+ /// Returns the synchronization scope ID of this store instruction.
+ SyncScope::ID getSyncScopeID() const {
+ return SSID;
}
- /// Specify whether this store instruction is ordered with respect to all
- /// concurrently executing threads, or only with respect to signal handlers
- /// executing in the same thread.
- void setSynchScope(SynchronizationScope xthread) {
- setInstructionSubclassData((getSubclassDataFromInstruction() & ~(1 << 6)) |
- (xthread << 6));
+ /// Sets the synchronization scope ID of this store instruction.
+ void setSyncScopeID(SyncScope::ID SSID) {
+ this->SSID = SSID;
}
+ /// Sets the ordering constraint and the synchronization scope ID of this
+ /// store instruction.
void setAtomic(AtomicOrdering Ordering,
- SynchronizationScope SynchScope = CrossThread) {
+ SyncScope::ID SSID = SyncScope::System) {
setOrdering(Ordering);
- setSynchScope(SynchScope);
+ setSyncScopeID(SSID);
}
bool isSimple() const { return !isAtomic() && !isVolatile(); }
@@ -423,6 +421,11 @@
void setInstructionSubclassData(unsigned short D) {
Instruction::setInstructionSubclassData(D);
}
+
+ /// The synchronization scope ID of this store instruction. Not quite enough
+ /// room in SubClassData for everything, so synchronization scope ID gets its
+ /// own field.
+ SyncScope::ID SSID;
};
template <>
@@ -437,7 +440,7 @@
/// An instruction for ordering other memory operations.
class FenceInst : public Instruction {
- void Init(AtomicOrdering Ordering, SynchronizationScope SynchScope);
+ void Init(AtomicOrdering Ordering, SyncScope::ID SSID);
protected:
// Note: Instruction needs to be a friend here to call cloneImpl.
@@ -449,10 +452,9 @@
// Ordering may only be Acquire, Release, AcquireRelease, or
// SequentiallyConsistent.
FenceInst(LLVMContext &C, AtomicOrdering Ordering,
- SynchronizationScope SynchScope = CrossThread,
+ SyncScope::ID SSID = SyncScope::System,
Instruction *InsertBefore = nullptr);
- FenceInst(LLVMContext &C, AtomicOrdering Ordering,
- SynchronizationScope SynchScope,
+ FenceInst(LLVMContext &C, AtomicOrdering Ordering, SyncScope::ID SSID,
BasicBlock *InsertAtEnd);
// allocate space for exactly zero operands
@@ -462,28 +464,26 @@
void *operator new(size_t, unsigned) = delete;
- /// Returns the ordering effect of this fence.
+ /// Returns the ordering constraint of this fence instruction.
AtomicOrdering getOrdering() const {
return AtomicOrdering(getSubclassDataFromInstruction() >> 1);
}
- /// Set the ordering constraint on this fence. May only be Acquire, Release,
- /// AcquireRelease, or SequentiallyConsistent.
+ /// Sets the ordering constraint of this fence instruction. May only be
+ /// Acquire, Release, AcquireRelease, or SequentiallyConsistent.
void setOrdering(AtomicOrdering Ordering) {
setInstructionSubclassData((getSubclassDataFromInstruction() & 1) |
((unsigned)Ordering << 1));
}
- SynchronizationScope getSynchScope() const {
- return SynchronizationScope(getSubclassDataFromInstruction() & 1);
+ /// Returns the synchronization scope ID of this fence instruction.
+ SyncScope::ID getSyncScopeID() const {
+ return SSID;
}
- /// Specify whether this fence orders other operations with respect to all
- /// concurrently executing threads, or only with respect to signal handlers
- /// executing in the same thread.
- void setSynchScope(SynchronizationScope xthread) {
- setInstructionSubclassData((getSubclassDataFromInstruction() & ~1) |
- xthread);
+ /// Sets the synchronization scope ID of this fence instruction.
+ void setSyncScopeID(SyncScope::ID SSID) {
+ this->SSID = SSID;
}
// Methods for support type inquiry through isa, cast, and dyn_cast:
@@ -500,6 +500,11 @@
void setInstructionSubclassData(unsigned short D) {
Instruction::setInstructionSubclassData(D);
}
+
+ /// The synchronization scope ID of this fence instruction. Not quite enough
+ /// room in SubClassData for everything, so synchronization scope ID gets its
+ /// own field.
+ SyncScope::ID SSID;
};
//===----------------------------------------------------------------------===//
@@ -513,7 +518,7 @@
class AtomicCmpXchgInst : public Instruction {
void Init(Value *Ptr, Value *Cmp, Value *NewVal,
AtomicOrdering SuccessOrdering, AtomicOrdering FailureOrdering,
- SynchronizationScope SynchScope);
+ SyncScope::ID SSID);
protected:
// Note: Instruction needs to be a friend here to call cloneImpl.
@@ -525,13 +530,11 @@
AtomicCmpXchgInst(Value *Ptr, Value *Cmp, Value *NewVal,
AtomicOrdering SuccessOrdering,
AtomicOrdering FailureOrdering,
- SynchronizationScope SynchScope,
- Instruction *InsertBefore = nullptr);
+ SyncScope::ID SSID, Instruction *InsertBefore = nullptr);
AtomicCmpXchgInst(Value *Ptr, Value *Cmp, Value *NewVal,
AtomicOrdering SuccessOrdering,
AtomicOrdering FailureOrdering,
- SynchronizationScope SynchScope,
- BasicBlock *InsertAtEnd);
+ SyncScope::ID SSID, BasicBlock *InsertAtEnd);
// allocate space for exactly three operands
void *operator new(size_t s) {
@@ -567,7 +570,12 @@
/// Transparently provide more efficient getOperand methods.
DECLARE_TRANSPARENT_OPERAND_ACCESSORS(Value);
- /// Set the ordering constraint on this cmpxchg.
+ /// Returns the success ordering constraint of this cmpxchg instruction.
+ AtomicOrdering getSuccessOrdering() const {
+ return AtomicOrdering((getSubclassDataFromInstruction() >> 2) & 7);
+ }
+
+ /// Sets the success ordering constraint of this cmpxchg instruction.
void setSuccessOrdering(AtomicOrdering Ordering) {
assert(Ordering != AtomicOrdering::NotAtomic &&
"CmpXchg instructions can only be atomic.");
@@ -575,6 +583,12 @@
((unsigned)Ordering << 2));
}
+ /// Returns the failure ordering constraint of this cmpxchg instruction.
+ AtomicOrdering getFailureOrdering() const {
+ return AtomicOrdering((getSubclassDataFromInstruction() >> 5) & 7);
+ }
+
+ /// Sets the failure ordering constraint of this cmpxchg instruction.
void setFailureOrdering(AtomicOrdering Ordering) {
assert(Ordering != AtomicOrdering::NotAtomic &&
"CmpXchg instructions can only be atomic.");
@@ -582,28 +596,14 @@
((unsigned)Ordering << 5));
}
- /// Specify whether this cmpxchg is atomic and orders other operations with
- /// respect to all concurrently executing threads, or only with respect to
- /// signal handlers executing in the same thread.
- void setSynchScope(SynchronizationScope SynchScope) {
- setInstructionSubclassData((getSubclassDataFromInstruction() & ~2) |
- (SynchScope << 1));
- }
-
- /// Returns the ordering constraint on this cmpxchg.
- AtomicOrdering getSuccessOrdering() const {
- return AtomicOrdering((getSubclassDataFromInstruction() >> 2) & 7);
- }
-
- /// Returns the ordering constraint on this cmpxchg.
- AtomicOrdering getFailureOrdering() const {
- return AtomicOrdering((getSubclassDataFromInstruction() >> 5) & 7);
+ /// Returns the synchronization scope ID of this cmpxchg instruction.
+ SyncScope::ID getSyncScopeID() const {
+ return SSID;
}
- /// Returns whether this cmpxchg is atomic between threads or only within a
- /// single thread.
- SynchronizationScope getSynchScope() const {
- return SynchronizationScope((getSubclassDataFromInstruction() & 2) >> 1);
+ /// Sets the synchronization scope ID of this cmpxchg instruction.
+ void setSyncScopeID(SyncScope::ID SSID) {
+ this->SSID = SSID;
}
Value *getPointerOperand() { return getOperand(0); }
@@ -658,6 +658,11 @@
void setInstructionSubclassData(unsigned short D) {
Instruction::setInstructionSubclassData(D);
}
+
+ /// The synchronization scope ID of this cmpxchg instruction. Not quite
+ /// enough room in SubClassData for everything, so synchronization scope ID
+ /// gets its own field.
+ SyncScope::ID SSID;
};
template <>
@@ -717,10 +722,10 @@
};
AtomicRMWInst(BinOp Operation, Value *Ptr, Value *Val,
- AtomicOrdering Ordering, SynchronizationScope SynchScope,
+ AtomicOrdering Ordering, SyncScope::ID SSID,
Instruction *InsertBefore = nullptr);
AtomicRMWInst(BinOp Operation, Value *Ptr, Value *Val,
- AtomicOrdering Ordering, SynchronizationScope SynchScope,
+ AtomicOrdering Ordering, SyncScope::ID SSID,
BasicBlock *InsertAtEnd);
// allocate space for exactly two operands
@@ -756,7 +761,12 @@
/// Transparently provide more efficient getOperand methods.
DECLARE_TRANSPARENT_OPERAND_ACCESSORS(Value);
- /// Set the ordering constraint on this RMW.
+ /// Returns the ordering constraint of this rmw instruction.
+ AtomicOrdering getOrdering() const {
+ return AtomicOrdering((getSubclassDataFromInstruction() >> 2) & 7);
+ }
+
+ /// Sets the ordering constraint of this rmw instruction.
void setOrdering(AtomicOrdering Ordering) {
assert(Ordering != AtomicOrdering::NotAtomic &&
"atomicrmw instructions can only be atomic.");
@@ -764,23 +774,14 @@
((unsigned)Ordering << 2));
}
- /// Specify whether this RMW orders other operations with respect to all
- /// concurrently executing threads, or only with respect to signal handlers
- /// executing in the same thread.
- void setSynchScope(SynchronizationScope SynchScope) {
- setInstructionSubclassData((getSubclassDataFromInstruction() & ~2) |
- (SynchScope << 1));
+ /// Returns the synchronization scope ID of this rmw instruction.
+ SyncScope::ID getSyncScopeID() const {
+ return SSID;
}
- /// Returns the ordering constraint on this RMW.
- AtomicOrdering getOrdering() const {
- return AtomicOrdering((getSubclassDataFromInstruction() >> 2) & 7);
- }
-
- /// Returns whether this RMW is atomic between threads or only within a
- /// single thread.
- SynchronizationScope getSynchScope() const {
- return SynchronizationScope((getSubclassDataFromInstruction() & 2) >> 1);
+ /// Sets the synchronization scope ID of this rmw instruction.
+ void setSyncScopeID(SyncScope::ID SSID) {
+ this->SSID = SSID;
}
Value *getPointerOperand() { return getOperand(0); }
@@ -805,13 +806,18 @@
private:
void Init(BinOp Operation, Value *Ptr, Value *Val,
- AtomicOrdering Ordering, SynchronizationScope SynchScope);
+ AtomicOrdering Ordering, SyncScope::ID SSID);
// Shadow Instruction::setInstructionSubclassData with a private forwarding
// method so that subclasses cannot accidentally use it.
void setInstructionSubclassData(unsigned short D) {
Instruction::setInstructionSubclassData(D);
}
+
+ /// The synchronization scope ID of this rmw instruction. Not quite enough
+ /// room in SubClassData for everything, so synchronization scope ID gets its
+ /// own field.
+ SyncScope::ID SSID;
};
template <>
Index: include/llvm/IR/LLVMContext.h
===================================================================
--- include/llvm/IR/LLVMContext.h
+++ include/llvm/IR/LLVMContext.h
@@ -42,6 +42,24 @@
} // end namespace yaml
+namespace SyncScope {
+
+typedef uint8_t ID;
+
+/// Known synchronization scope IDs, which always have the same value. All
+/// synchronization scope IDs that LLVM has special knowledge of are listed
+/// here. Additionally, this scheme allows LLVM to efficiently check for
+/// specific synchronization scope ID without comparing strings.
+enum {
+ /// Synchronized with respect to signal handlers executing in the same thread.
+ SingleThread = 0,
+
+ /// Synchronized with respect to all concurrently executing threads.
+ System = 1
+};
+
+} // end namespace SyncScope
+
/// This is an important class for using LLVM in a threaded context. It
/// (opaquely) owns and manages the core "global" data of LLVM's core
/// infrastructure, including the type and constant uniquing tables.
@@ -111,6 +129,16 @@
/// tag registered with an LLVMContext has an unique ID.
uint32_t getOperandBundleTagID(StringRef Tag) const;
+ /// getOrInsertSyncScopeID - Maps synchronization scope name to
+ /// synchronization scope ID. Every synchronization scope registered with
+ /// LLVMContext has unique ID except pre-defined ones.
+ SyncScope::ID getOrInsertSyncScopeID(StringRef SSN);
+
+ /// getSyncScopeNames - Populates client supplied SmallVector with
+ /// synchronization scope names registered with LLVMContext. Synchronization
+ /// scope names are ordered by increasing synchronization scope IDs.
+ void getSyncScopeNames(SmallVectorImpl<StringRef> &SSNs) const;
+
/// Define the GC for a function
void setGC(const Function &Fn, std::string GCName);
Index: lib/AsmParser/LLLexer.cpp
===================================================================
--- lib/AsmParser/LLLexer.cpp
+++ lib/AsmParser/LLLexer.cpp
@@ -542,7 +542,7 @@
KEYWORD(release);
KEYWORD(acq_rel);
KEYWORD(seq_cst);
- KEYWORD(singlethread);
+ KEYWORD(syncscope);
KEYWORD(nnan);
KEYWORD(ninf);
Index: lib/AsmParser/LLParser.h
===================================================================
--- lib/AsmParser/LLParser.h
+++ lib/AsmParser/LLParser.h
@@ -241,8 +241,9 @@
bool ParseOptionalCallingConv(unsigned &CC);
bool ParseOptionalAlignment(unsigned &Alignment);
bool ParseOptionalDerefAttrBytes(lltok::Kind AttrKind, uint64_t &Bytes);
- bool ParseScopeAndOrdering(bool isAtomic, SynchronizationScope &Scope,
+ bool ParseScopeAndOrdering(bool isAtomic, SyncScope::ID &SSID,
AtomicOrdering &Ordering);
+ bool ParseScope(SyncScope::ID &SSID);
bool ParseOrdering(AtomicOrdering &Ordering);
bool ParseOptionalStackAlignment(unsigned &Alignment);
bool ParseOptionalCommaAlign(unsigned &Alignment, bool &AteExtraComma);
Index: lib/AsmParser/LLParser.cpp
===================================================================
--- lib/AsmParser/LLParser.cpp
+++ lib/AsmParser/LLParser.cpp
@@ -1919,20 +1919,42 @@
}
/// ParseScopeAndOrdering
-/// if isAtomic: ::= 'singlethread'? AtomicOrdering
+/// if isAtomic: ::= SyncScope? AtomicOrdering
/// else: ::=
///
/// This sets Scope and Ordering to the parsed values.
-bool LLParser::ParseScopeAndOrdering(bool isAtomic, SynchronizationScope &Scope,
+bool LLParser::ParseScopeAndOrdering(bool isAtomic, SyncScope::ID &SSID,
AtomicOrdering &Ordering) {
if (!isAtomic)
return false;
- Scope = CrossThread;
- if (EatIfPresent(lltok::kw_singlethread))
- Scope = SingleThread;
+ return ParseScope(SSID) || ParseOrdering(Ordering);
+}
+
+/// ParseScope
+/// ::= syncscope("singlethread" | "<target-scope>")?
+///
+/// This sets synchronization scope ID to the ID of the parsed value.
+bool LLParser::ParseScope(SyncScope::ID &SSID) {
+ SSID = SyncScope::System;
+ if (EatIfPresent(lltok::kw_syncscope)) {
+ auto StartParenAt = Lex.getLoc();
+ if (!EatIfPresent(lltok::lparen))
+ return Error(StartParenAt, "Expected '(' in syncscope");
+
+ std::string SSN;
+ auto SSNAt = Lex.getLoc();
+ if (ParseStringConstant(SSN))
+ return Error(SSNAt, "Expected synchronization scope name");
- return ParseOrdering(Ordering);
+ auto EndParenAt = Lex.getLoc();
+ if (!EatIfPresent(lltok::rparen))
+ return Error(EndParenAt, "Expected ')' in syncscope");
+
+ SSID = Context.getOrInsertSyncScopeID(SSN);
+ }
+
+ return false;
}
/// ParseOrdering
@@ -6100,7 +6122,7 @@
bool AteExtraComma = false;
bool isAtomic = false;
AtomicOrdering Ordering = AtomicOrdering::NotAtomic;
- SynchronizationScope Scope = CrossThread;
+ SyncScope::ID SSID = SyncScope::System;
if (Lex.getKind() == lltok::kw_atomic) {
isAtomic = true;
@@ -6118,7 +6140,7 @@
if (ParseType(Ty) ||
ParseToken(lltok::comma, "expected comma after load's type") ||
ParseTypeAndValue(Val, Loc, PFS) ||
- ParseScopeAndOrdering(isAtomic, Scope, Ordering) ||
+ ParseScopeAndOrdering(isAtomic, SSID, Ordering) ||
ParseOptionalCommaAlign(Alignment, AteExtraComma))
return true;
@@ -6134,7 +6156,7 @@
return Error(ExplicitTypeLoc,
"explicit pointee type doesn't match operand's pointee type");
- Inst = new LoadInst(Ty, Val, "", isVolatile, Alignment, Ordering, Scope);
+ Inst = new LoadInst(Ty, Val, "", isVolatile, Alignment, Ordering, SSID);
return AteExtraComma ? InstExtraComma : InstNormal;
}
@@ -6149,7 +6171,7 @@
bool AteExtraComma = false;
bool isAtomic = false;
AtomicOrdering Ordering = AtomicOrdering::NotAtomic;
- SynchronizationScope Scope = CrossThread;
+ SyncScope::ID SSID = SyncScope::System;
if (Lex.getKind() == lltok::kw_atomic) {
isAtomic = true;
@@ -6165,7 +6187,7 @@
if (ParseTypeAndValue(Val, Loc, PFS) ||
ParseToken(lltok::comma, "expected ',' after store operand") ||
ParseTypeAndValue(Ptr, PtrLoc, PFS) ||
- ParseScopeAndOrdering(isAtomic, Scope, Ordering) ||
+ ParseScopeAndOrdering(isAtomic, SSID, Ordering) ||
ParseOptionalCommaAlign(Alignment, AteExtraComma))
return true;
@@ -6181,7 +6203,7 @@
Ordering == AtomicOrdering::AcquireRelease)
return Error(Loc, "atomic store cannot use Acquire ordering");
- Inst = new StoreInst(Val, Ptr, isVolatile, Alignment, Ordering, Scope);
+ Inst = new StoreInst(Val, Ptr, isVolatile, Alignment, Ordering, SSID);
return AteExtraComma ? InstExtraComma : InstNormal;
}
@@ -6193,7 +6215,7 @@
bool AteExtraComma = false;
AtomicOrdering SuccessOrdering = AtomicOrdering::NotAtomic;
AtomicOrdering FailureOrdering = AtomicOrdering::NotAtomic;
- SynchronizationScope Scope = CrossThread;
+ SyncScope::ID SSID = SyncScope::System;
bool isVolatile = false;
bool isWeak = false;
@@ -6208,7 +6230,7 @@
ParseTypeAndValue(Cmp, CmpLoc, PFS) ||
ParseToken(lltok::comma, "expected ',' after cmpxchg cmp operand") ||
ParseTypeAndValue(New, NewLoc, PFS) ||
- ParseScopeAndOrdering(true /*Always atomic*/, Scope, SuccessOrdering) ||
+ ParseScopeAndOrdering(true /*Always atomic*/, SSID, SuccessOrdering) ||
ParseOrdering(FailureOrdering))
return true;
@@ -6231,7 +6253,7 @@
if (!New->getType()->isFirstClassType())
return Error(NewLoc, "cmpxchg operand must be a first class value");
AtomicCmpXchgInst *CXI = new AtomicCmpXchgInst(
- Ptr, Cmp, New, SuccessOrdering, FailureOrdering, Scope);
+ Ptr, Cmp, New, SuccessOrdering, FailureOrdering, SSID);
CXI->setVolatile(isVolatile);
CXI->setWeak(isWeak);
Inst = CXI;
@@ -6245,7 +6267,7 @@
Value *Ptr, *Val; LocTy PtrLoc, ValLoc;
bool AteExtraComma = false;
AtomicOrdering Ordering = AtomicOrdering::NotAtomic;
- SynchronizationScope Scope = CrossThread;
+ SyncScope::ID SSID = SyncScope::System;
bool isVolatile = false;
AtomicRMWInst::BinOp Operation;
@@ -6271,7 +6293,7 @@
if (ParseTypeAndValue(Ptr, PtrLoc, PFS) ||
ParseToken(lltok::comma, "expected ',' after atomicrmw address") ||
ParseTypeAndValue(Val, ValLoc, PFS) ||
- ParseScopeAndOrdering(true /*Always atomic*/, Scope, Ordering))
+ ParseScopeAndOrdering(true /*Always atomic*/, SSID, Ordering))
return true;
if (Ordering == AtomicOrdering::Unordered)
@@ -6288,7 +6310,7 @@
" integer");
AtomicRMWInst *RMWI =
- new AtomicRMWInst(Operation, Ptr, Val, Ordering, Scope);
+ new AtomicRMWInst(Operation, Ptr, Val, Ordering, SSID);
RMWI->setVolatile(isVolatile);
Inst = RMWI;
return AteExtraComma ? InstExtraComma : InstNormal;
@@ -6298,8 +6320,8 @@
/// ::= 'fence' 'singlethread'? AtomicOrdering
int LLParser::ParseFence(Instruction *&Inst, PerFunctionState &PFS) {
AtomicOrdering Ordering = AtomicOrdering::NotAtomic;
- SynchronizationScope Scope = CrossThread;
- if (ParseScopeAndOrdering(true /*Always atomic*/, Scope, Ordering))
+ SyncScope::ID SSID = SyncScope::System;
+ if (ParseScopeAndOrdering(true /*Always atomic*/, SSID, Ordering))
return true;
if (Ordering == AtomicOrdering::Unordered)
@@ -6307,7 +6329,7 @@
if (Ordering == AtomicOrdering::Monotonic)
return TokError("fence cannot be monotonic");
- Inst = new FenceInst(Context, Ordering, Scope);
+ Inst = new FenceInst(Context, Ordering, SSID);
return InstNormal;
}
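Reviewer note: as a standalone sketch of the grammar `ParseScope` accepts above (hypothetical helper names, not LLVM code), the optional token is `syncscope("<name>")`, absent means the default system scope, and names map to IDs through a registry like `LLVMContext::getOrInsertSyncScopeID`, with IDs 0 and 1 assumed reserved for "singlethread" and the system scope:

```cpp
#include <cassert>
#include <map>
#include <optional>
#include <string>

// Mimics the assumed getOrInsertSyncScopeID behavior: "singlethread" and ""
// (system) are pre-registered as 0 and 1; other names get IDs in insertion
// order.
struct ScopeRegistry {
  std::map<std::string, unsigned> Map{{"singlethread", 0}, {"", 1}};
  unsigned getOrInsert(const std::string &Name) {
    auto It = Map.find(Name);
    if (It != Map.end())
      return It->second;
    unsigned ID = static_cast<unsigned>(Map.size());
    Map.emplace(Name, ID);
    return ID;
  }
};

// Parse an optional syncscope("<name>") token; an empty token yields the
// system scope (ID 1), a malformed token yields std::nullopt.
std::optional<unsigned> parseScope(const std::string &Tok, ScopeRegistry &Reg) {
  if (Tok.empty())
    return 1u; // no syncscope present: system scope
  const std::string Prefix = "syncscope(\"", Suffix = "\")";
  if (Tok.size() < Prefix.size() + Suffix.size() ||
      Tok.compare(0, Prefix.size(), Prefix) != 0 ||
      Tok.compare(Tok.size() - Suffix.size(), Suffix.size(), Suffix) != 0)
    return std::nullopt; // missing '(', '"', or ')'
  std::string Name =
      Tok.substr(Prefix.size(), Tok.size() - Prefix.size() - Suffix.size());
  return Reg.getOrInsert(Name);
}
```

Repeated parses of the same target scope name return the same ID, which is what lets the parser hand a plain integer to the instruction constructors.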
Index: lib/AsmParser/LLToken.h
===================================================================
--- lib/AsmParser/LLToken.h
+++ lib/AsmParser/LLToken.h
@@ -93,7 +93,7 @@
kw_release,
kw_acq_rel,
kw_seq_cst,
- kw_singlethread,
+ kw_syncscope,
kw_nnan,
kw_ninf,
kw_nsz,
Index: lib/Bitcode/Reader/BitcodeReader.cpp
===================================================================
--- lib/Bitcode/Reader/BitcodeReader.cpp
+++ lib/Bitcode/Reader/BitcodeReader.cpp
@@ -513,6 +513,7 @@
TBAAVerifier TBAAVerifyHelper;
std::vector<std::string> BundleTags;
+ SmallVector<SyncScope::ID, 8> SSIDs;
public:
BitcodeReader(BitstreamCursor Stream, StringRef Strtab,
@@ -648,6 +649,7 @@
Error parseTypeTable();
Error parseTypeTableBody();
Error parseOperandBundleTags();
+ Error parseSyncScopeNames();
Expected<Value *> recordValue(SmallVectorImpl<uint64_t> &Record,
unsigned NameIndex, Triple &TT);
@@ -668,6 +670,8 @@
Error findFunctionInStream(
Function *F,
DenseMap<Function *, uint64_t>::iterator DeferredFunctionInfoIterator);
+
+ SyncScope::ID getDecodedSyncScopeID(unsigned Val);
};
/// Class to manage reading and parsing function summary index bitcode
@@ -998,14 +1002,6 @@
}
}
-static SynchronizationScope getDecodedSynchScope(unsigned Val) {
- switch (Val) {
- case bitc::SYNCHSCOPE_SINGLETHREAD: return SingleThread;
- default: // Map unknown scopes to cross-thread.
- case bitc::SYNCHSCOPE_CROSSTHREAD: return CrossThread;
- }
-}
-
static Comdat::SelectionKind getDecodedComdatSelectionKind(unsigned Val) {
switch (Val) {
default: // Map unknown selection kinds to any.
@@ -1745,6 +1741,44 @@
}
}
+Error BitcodeReader::parseSyncScopeNames() {
+ if (Stream.EnterSubBlock(bitc::SYNC_SCOPE_NAMES_BLOCK_ID))
+ return error("Invalid record");
+
+ if (!SSIDs.empty())
+ return error("Invalid multiple synchronization scope names blocks");
+
+ SmallVector<uint64_t, 64> Record;
+ while (true) {
+ BitstreamEntry Entry = Stream.advanceSkippingSubblocks();
+ switch (Entry.Kind) {
+ case BitstreamEntry::SubBlock: // Handled for us already.
+ case BitstreamEntry::Error:
+ return error("Malformed block");
+ case BitstreamEntry::EndBlock:
+ if (SSIDs.empty())
+ return error("Invalid empty synchronization scope names block");
+ return Error::success();
+ case BitstreamEntry::Record:
+ // The interesting case.
+ break;
+ }
+
+ // Synchronization scope names are implicitly mapped to synchronization
+ // scope IDs by their order.
+
+ if (Stream.readRecord(Entry.ID, Record) != bitc::SYNC_SCOPE_NAME)
+ return error("Invalid record");
+
+ SmallString<16> SSN;
+ if (convertToString(Record, 0, SSN))
+ return error("Invalid record");
+
+ SSIDs.push_back(Context.getOrInsertSyncScopeID(SSN));
+ Record.clear();
+ }
+}
+
/// Associate a value with its name from the given index in the provided record.
Expected<Value *> BitcodeReader::recordValue(SmallVectorImpl<uint64_t> &Record,
unsigned NameIndex, Triple &TT) {
@@ -3122,6 +3156,10 @@
if (Error Err = parseOperandBundleTags())
return Err;
break;
+ case bitc::SYNC_SCOPE_NAMES_BLOCK_ID:
+ if (Error Err = parseSyncScopeNames())
+ return Err;
+ break;
}
continue;
@@ -4194,7 +4232,7 @@
break;
}
case bitc::FUNC_CODE_INST_LOADATOMIC: {
- // LOADATOMIC: [opty, op, align, vol, ordering, synchscope]
+ // LOADATOMIC: [opty, op, align, vol, ordering, ssid]
unsigned OpNum = 0;
Value *Op;
if (getValueTypePair(Record, OpNum, NextValueNo, Op) ||
@@ -4216,12 +4254,12 @@
return error("Invalid record");
if (Ordering != AtomicOrdering::NotAtomic && Record[OpNum] == 0)
return error("Invalid record");
- SynchronizationScope SynchScope = getDecodedSynchScope(Record[OpNum + 3]);
+ SyncScope::ID SSID = getDecodedSyncScopeID(Record[OpNum + 3]);
unsigned Align;
if (Error Err = parseAlignmentValue(Record[OpNum], Align))
return Err;
- I = new LoadInst(Op, "", Record[OpNum+1], Align, Ordering, SynchScope);
+ I = new LoadInst(Op, "", Record[OpNum+1], Align, Ordering, SSID);
InstructionList.push_back(I);
break;
@@ -4250,7 +4288,7 @@
}
case bitc::FUNC_CODE_INST_STOREATOMIC:
case bitc::FUNC_CODE_INST_STOREATOMIC_OLD: {
- // STOREATOMIC: [ptrty, ptr, val, align, vol, ordering, synchscope]
+ // STOREATOMIC: [ptrty, ptr, val, align, vol, ordering, ssid]
unsigned OpNum = 0;
Value *Val, *Ptr;
if (getValueTypePair(Record, OpNum, NextValueNo, Ptr) ||
@@ -4270,20 +4308,20 @@
Ordering == AtomicOrdering::Acquire ||
Ordering == AtomicOrdering::AcquireRelease)
return error("Invalid record");
- SynchronizationScope SynchScope = getDecodedSynchScope(Record[OpNum + 3]);
+ SyncScope::ID SSID = getDecodedSyncScopeID(Record[OpNum + 3]);
if (Ordering != AtomicOrdering::NotAtomic && Record[OpNum] == 0)
return error("Invalid record");
unsigned Align;
if (Error Err = parseAlignmentValue(Record[OpNum], Align))
return Err;
- I = new StoreInst(Val, Ptr, Record[OpNum+1], Align, Ordering, SynchScope);
+ I = new StoreInst(Val, Ptr, Record[OpNum+1], Align, Ordering, SSID);
InstructionList.push_back(I);
break;
}
case bitc::FUNC_CODE_INST_CMPXCHG_OLD:
case bitc::FUNC_CODE_INST_CMPXCHG: {
- // CMPXCHG:[ptrty, ptr, cmp, new, vol, successordering, synchscope,
+ // CMPXCHG:[ptrty, ptr, cmp, new, vol, successordering, ssid,
// failureordering?, isweak?]
unsigned OpNum = 0;
Value *Ptr, *Cmp, *New;
@@ -4300,7 +4338,7 @@
if (SuccessOrdering == AtomicOrdering::NotAtomic ||
SuccessOrdering == AtomicOrdering::Unordered)
return error("Invalid record");
- SynchronizationScope SynchScope = getDecodedSynchScope(Record[OpNum + 2]);
+ SyncScope::ID SSID = getDecodedSyncScopeID(Record[OpNum + 2]);
if (Error Err = typeCheckLoadStoreInst(Cmp->getType(), Ptr->getType()))
return Err;
@@ -4312,7 +4350,7 @@
FailureOrdering = getDecodedOrdering(Record[OpNum + 3]);
I = new AtomicCmpXchgInst(Ptr, Cmp, New, SuccessOrdering, FailureOrdering,
- SynchScope);
+ SSID);
cast<AtomicCmpXchgInst>(I)->setVolatile(Record[OpNum]);
if (Record.size() < 8) {
@@ -4329,7 +4367,7 @@
break;
}
case bitc::FUNC_CODE_INST_ATOMICRMW: {
- // ATOMICRMW:[ptrty, ptr, val, op, vol, ordering, synchscope]
+ // ATOMICRMW:[ptrty, ptr, val, op, vol, ordering, ssid]
unsigned OpNum = 0;
Value *Ptr, *Val;
if (getValueTypePair(Record, OpNum, NextValueNo, Ptr) ||
@@ -4346,13 +4384,13 @@
if (Ordering == AtomicOrdering::NotAtomic ||
Ordering == AtomicOrdering::Unordered)
return error("Invalid record");
- SynchronizationScope SynchScope = getDecodedSynchScope(Record[OpNum + 3]);
- I = new AtomicRMWInst(Operation, Ptr, Val, Ordering, SynchScope);
+ SyncScope::ID SSID = getDecodedSyncScopeID(Record[OpNum + 3]);
+ I = new AtomicRMWInst(Operation, Ptr, Val, Ordering, SSID);
cast<AtomicRMWInst>(I)->setVolatile(Record[OpNum+1]);
InstructionList.push_back(I);
break;
}
- case bitc::FUNC_CODE_INST_FENCE: { // FENCE:[ordering, synchscope]
+ case bitc::FUNC_CODE_INST_FENCE: { // FENCE:[ordering, ssid]
if (2 != Record.size())
return error("Invalid record");
AtomicOrdering Ordering = getDecodedOrdering(Record[0]);
@@ -4360,8 +4398,8 @@
Ordering == AtomicOrdering::Unordered ||
Ordering == AtomicOrdering::Monotonic)
return error("Invalid record");
- SynchronizationScope SynchScope = getDecodedSynchScope(Record[1]);
- I = new FenceInst(Context, Ordering, SynchScope);
+ SyncScope::ID SSID = getDecodedSyncScopeID(Record[1]);
+ I = new FenceInst(Context, Ordering, SSID);
InstructionList.push_back(I);
break;
}
@@ -4557,6 +4595,14 @@
return Error::success();
}
+SyncScope::ID BitcodeReader::getDecodedSyncScopeID(unsigned Val) {
+ if (Val == SyncScope::SingleThread || Val == SyncScope::System)
+ return SyncScope::ID(Val);
+ if (Val >= SSIDs.size())
+ return SyncScope::System; // Map unknown synchronization scopes to system.
+ return SSIDs[Val];
+}
+
//===----------------------------------------------------------------------===//
// GVMaterializer implementation
//===----------------------------------------------------------------------===//
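Reviewer note: the decode rule in `getDecodedSyncScopeID` above is worth spelling out, since it is what keeps old and scope-less bitcode readable. A standalone sketch (plain integers standing in for `SyncScope::ID`, not LLVM code):

```cpp
#include <cassert>
#include <vector>

// The two fixed scope IDs assumed by the patch: SingleThread = 0, System = 1.
enum : unsigned { SingleThread = 0, System = 1 };

// Values 0/1 decode to the fixed scopes directly; anything else indexes the
// SSIDs table built from the SYNC_SCOPE_NAMES block, and an out-of-range
// value conservatively degrades to the system scope.
unsigned decodeSyncScopeID(unsigned Val, const std::vector<unsigned> &SSIDs) {
  if (Val == SingleThread || Val == System)
    return Val;
  if (Val >= SSIDs.size())
    return System; // unknown synchronization scopes map to system
  return SSIDs[Val];
}
```

Degrading unknown scopes to system is safe because the system scope makes the strongest guarantee; it can only over-synchronize, never under-synchronize.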
Index: lib/Bitcode/Writer/BitcodeWriter.cpp
===================================================================
--- lib/Bitcode/Writer/BitcodeWriter.cpp
+++ lib/Bitcode/Writer/BitcodeWriter.cpp
@@ -259,6 +259,7 @@
const GlobalObject &GO);
void writeModuleMetadataKinds();
void writeOperandBundleTags();
+ void writeSyncScopeNames();
void writeConstants(unsigned FirstVal, unsigned LastVal, bool isGlobal);
void writeModuleConstants();
bool pushValueAndType(const Value *V, unsigned InstID,
@@ -309,6 +310,10 @@
return VE.getValueID(VI.getValue());
}
std::map<GlobalValue::GUID, unsigned> &valueIds() { return GUIDToValueIdMap; }
+
+ unsigned getEncodedSyncScopeID(SyncScope::ID SSID) {
+ return unsigned(SSID);
+ }
};
/// Class to manage the bitcode writing for a combined index.
@@ -469,14 +474,6 @@
llvm_unreachable("Invalid ordering");
}
-static unsigned getEncodedSynchScope(SynchronizationScope SynchScope) {
- switch (SynchScope) {
- case SingleThread: return bitc::SYNCHSCOPE_SINGLETHREAD;
- case CrossThread: return bitc::SYNCHSCOPE_CROSSTHREAD;
- }
- llvm_unreachable("Invalid synch scope");
-}
-
static void writeStringRecord(BitstreamWriter &Stream, unsigned Code,
StringRef Str, unsigned AbbrevToUse) {
SmallVector<unsigned, 64> Vals;
@@ -2020,6 +2017,24 @@
Stream.ExitBlock();
}
+void ModuleBitcodeWriter::writeSyncScopeNames() {
+ SmallVector<StringRef, 8> SSNs;
+ M.getContext().getSyncScopeNames(SSNs);
+ if (SSNs.empty())
+ return;
+
+ Stream.EnterSubblock(bitc::SYNC_SCOPE_NAMES_BLOCK_ID, 2);
+
+ SmallVector<uint64_t, 64> Record;
+ for (auto SSN : SSNs) {
+ Record.append(SSN.begin(), SSN.end());
+ Stream.EmitRecord(bitc::SYNC_SCOPE_NAME, Record, 0);
+ Record.clear();
+ }
+
+ Stream.ExitBlock();
+}
+
static void emitSignedInt64(SmallVectorImpl<uint64_t> &Vals, uint64_t V) {
if ((int64_t)V >= 0)
Vals.push_back(V << 1);
@@ -2636,7 +2651,7 @@
Vals.push_back(cast<LoadInst>(I).isVolatile());
if (cast<LoadInst>(I).isAtomic()) {
Vals.push_back(getEncodedOrdering(cast<LoadInst>(I).getOrdering()));
- Vals.push_back(getEncodedSynchScope(cast<LoadInst>(I).getSynchScope()));
+ Vals.push_back(getEncodedSyncScopeID(cast<LoadInst>(I).getSyncScopeID()));
}
break;
case Instruction::Store:
@@ -2650,7 +2665,8 @@
Vals.push_back(cast<StoreInst>(I).isVolatile());
if (cast<StoreInst>(I).isAtomic()) {
Vals.push_back(getEncodedOrdering(cast<StoreInst>(I).getOrdering()));
- Vals.push_back(getEncodedSynchScope(cast<StoreInst>(I).getSynchScope()));
+ Vals.push_back(
+ getEncodedSyncScopeID(cast<StoreInst>(I).getSyncScopeID()));
}
break;
case Instruction::AtomicCmpXchg:
@@ -2662,7 +2678,7 @@
Vals.push_back(
getEncodedOrdering(cast<AtomicCmpXchgInst>(I).getSuccessOrdering()));
Vals.push_back(
- getEncodedSynchScope(cast<AtomicCmpXchgInst>(I).getSynchScope()));
+ getEncodedSyncScopeID(cast<AtomicCmpXchgInst>(I).getSyncScopeID()));
Vals.push_back(
getEncodedOrdering(cast<AtomicCmpXchgInst>(I).getFailureOrdering()));
Vals.push_back(cast<AtomicCmpXchgInst>(I).isWeak());
@@ -2676,12 +2692,12 @@
Vals.push_back(cast<AtomicRMWInst>(I).isVolatile());
Vals.push_back(getEncodedOrdering(cast<AtomicRMWInst>(I).getOrdering()));
Vals.push_back(
- getEncodedSynchScope(cast<AtomicRMWInst>(I).getSynchScope()));
+ getEncodedSyncScopeID(cast<AtomicRMWInst>(I).getSyncScopeID()));
break;
case Instruction::Fence:
Code = bitc::FUNC_CODE_INST_FENCE;
Vals.push_back(getEncodedOrdering(cast<FenceInst>(I).getOrdering()));
- Vals.push_back(getEncodedSynchScope(cast<FenceInst>(I).getSynchScope()));
+ Vals.push_back(getEncodedSyncScopeID(cast<FenceInst>(I).getSyncScopeID()));
break;
case Instruction::Call: {
const CallInst &CI = cast<CallInst>(I);
@@ -3692,6 +3708,7 @@
writeUseListBlock(nullptr);
writeOperandBundleTags();
+ writeSyncScopeNames();
// Emit function bodies.
DenseMap<const Function *, uint64_t> FunctionToBitcodeIndex;
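Reviewer note: the reader comment "synchronization scope names are implicitly mapped to synchronization scope IDs by their order" pairs with `writeSyncScopeNames` above, which emits names in ID order and no IDs at all. A standalone round-trip sketch of that convention (hypothetical helper names, `std::vector` standing in for the context's registry):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Writer side: emit the context's scope names in ID order; the wire format
// carries no explicit IDs, only one name record per scope.
std::vector<std::string>
writeSyncScopeNames(const std::vector<std::string> &NamesInIDOrder) {
  return NamesInIDOrder;
}

// Reader side: rebuild the old-ID -> new-ID table by re-registering each
// name in record order, analogous to parseSyncScopeNames calling
// getOrInsertSyncScopeID per record.
std::vector<unsigned>
readSyncScopeNames(const std::vector<std::string> &Records,
                   std::vector<std::string> &Context) {
  std::vector<unsigned> SSIDs;
  for (const auto &Name : Records) {
    unsigned ID = 0;
    while (ID < Context.size() && Context[ID] != Name)
      ++ID;               // look up an already-registered name
    if (ID == Context.size())
      Context.push_back(Name); // new name: next free ID
    SSIDs.push_back(ID);
  }
  return SSIDs;
}
```

Because both sides agree on "record position = old ID", the table survives even when the reading context has registered the names in a different order.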
Index: lib/CodeGen/AtomicExpandPass.cpp
===================================================================
--- lib/CodeGen/AtomicExpandPass.cpp
+++ lib/CodeGen/AtomicExpandPass.cpp
@@ -361,7 +361,7 @@
auto *NewLI = Builder.CreateLoad(NewAddr);
NewLI->setAlignment(LI->getAlignment());
NewLI->setVolatile(LI->isVolatile());
- NewLI->setAtomic(LI->getOrdering(), LI->getSynchScope());
+ NewLI->setAtomic(LI->getOrdering(), LI->getSyncScopeID());
DEBUG(dbgs() << "Replaced " << *LI << " with " << *NewLI << "\n");
Value *NewVal = Builder.CreateBitCast(NewLI, LI->getType());
@@ -444,7 +444,7 @@
StoreInst *NewSI = Builder.CreateStore(NewVal, NewAddr);
NewSI->setAlignment(SI->getAlignment());
NewSI->setVolatile(SI->isVolatile());
- NewSI->setAtomic(SI->getOrdering(), SI->getSynchScope());
+ NewSI->setAtomic(SI->getOrdering(), SI->getSyncScopeID());
DEBUG(dbgs() << "Replaced " << *SI << " with " << *NewSI << "\n");
SI->eraseFromParent();
return NewSI;
@@ -801,7 +801,7 @@
Value *FullWord_Cmp = Builder.CreateOr(Loaded_MaskOut, Cmp_Shifted);
AtomicCmpXchgInst *NewCI = Builder.CreateAtomicCmpXchg(
PMV.AlignedAddr, FullWord_Cmp, FullWord_NewVal, CI->getSuccessOrdering(),
- CI->getFailureOrdering(), CI->getSynchScope());
+ CI->getFailureOrdering(), CI->getSyncScopeID());
NewCI->setVolatile(CI->isVolatile());
// When we're building a strong cmpxchg, we need a loop, so you
// might think we could use a weak cmpxchg inside. But, using strong
@@ -924,7 +924,7 @@
auto *NewCI = Builder.CreateAtomicCmpXchg(NewAddr, NewCmp, NewNewVal,
CI->getSuccessOrdering(),
CI->getFailureOrdering(),
- CI->getSynchScope());
+ CI->getSyncScopeID());
NewCI->setVolatile(CI->isVolatile());
NewCI->setWeak(CI->isWeak());
DEBUG(dbgs() << "Replaced " << *CI << " with " << *NewCI << "\n");
Index: lib/CodeGen/GlobalISel/IRTranslator.cpp
===================================================================
--- lib/CodeGen/GlobalISel/IRTranslator.cpp
+++ lib/CodeGen/GlobalISel/IRTranslator.cpp
@@ -311,7 +311,7 @@
*MF->getMachineMemOperand(MachinePointerInfo(LI.getPointerOperand()),
Flags, DL->getTypeStoreSize(LI.getType()),
getMemOpAlignment(LI), AAMDNodes(), nullptr,
- LI.getSynchScope(), LI.getOrdering()));
+ LI.getSyncScopeID(), LI.getOrdering()));
return true;
}
@@ -329,7 +329,7 @@
*MF->getMachineMemOperand(
MachinePointerInfo(SI.getPointerOperand()), Flags,
DL->getTypeStoreSize(SI.getValueOperand()->getType()),
- getMemOpAlignment(SI), AAMDNodes(), nullptr, SI.getSynchScope(),
+ getMemOpAlignment(SI), AAMDNodes(), nullptr, SI.getSyncScopeID(),
SI.getOrdering()));
return true;
}
Index: lib/CodeGen/MIRParser/MILexer.h
===================================================================
--- lib/CodeGen/MIRParser/MILexer.h
+++ lib/CodeGen/MIRParser/MILexer.h
@@ -127,7 +127,8 @@
NamedIRValue,
IRValue,
QuotedIRValue, // `<constant value>`
- SubRegisterIndex
+ SubRegisterIndex,
+ StringConstant
};
private:
Index: lib/CodeGen/MIRParser/MILexer.cpp
===================================================================
--- lib/CodeGen/MIRParser/MILexer.cpp
+++ lib/CodeGen/MIRParser/MILexer.cpp
@@ -365,6 +365,14 @@
return lexName(C, Token, MIToken::NamedIRValue, Rule.size(), ErrorCallback);
}
+static Cursor maybeLexStringConstant(Cursor C, MIToken &Token,
+ ErrorCallbackType ErrorCallback) {
+ if (C.peek() != '"')
+ return None;
+ return lexName(C, Token, MIToken::StringConstant, /*PrefixLength=*/0,
+ ErrorCallback);
+}
+
static Cursor lexVirtualRegister(Cursor C, MIToken &Token) {
auto Range = C;
C.advance(); // Skip '%'
@@ -630,6 +638,8 @@
return R.remaining();
if (Cursor R = maybeLexEscapedIRValue(C, Token, ErrorCallback))
return R.remaining();
+ if (Cursor R = maybeLexStringConstant(C, Token, ErrorCallback))
+ return R.remaining();
Token.reset(MIToken::Error, C.remaining());
ErrorCallback(C.location(),
Index: lib/CodeGen/MIRParser/MIParser.cpp
===================================================================
--- lib/CodeGen/MIRParser/MIParser.cpp
+++ lib/CodeGen/MIRParser/MIParser.cpp
@@ -192,6 +192,7 @@
bool parseMemoryOperandFlag(MachineMemOperand::Flags &Flags);
bool parseMemoryPseudoSourceValue(const PseudoSourceValue *&PSV);
bool parseMachinePointerInfo(MachinePointerInfo &Dest);
+ bool parseOptionalScope(LLVMContext &Context, SyncScope::ID &SSID);
bool parseOptionalAtomicOrdering(AtomicOrdering &Order);
bool parseMachineMemoryOperand(MachineMemOperand *&Dest);
@@ -281,6 +282,10 @@
///
/// Return true if the name isn't a name of a bitmask target flag.
bool getBitmaskTargetFlag(StringRef Name, unsigned &Flag);
+
+ /// parseStringConstant
+ /// ::= StringConstant
+ bool parseStringConstant(std::string &Result);
};
} // end anonymous namespace
@@ -2099,6 +2104,26 @@
return false;
}
+bool MIParser::parseOptionalScope(LLVMContext &Context,
+ SyncScope::ID &SSID) {
+ SSID = SyncScope::System;
+ if (Token.is(MIToken::Identifier) && Token.stringValue() == "syncscope") {
+ lex();
+ if (expectAndConsume(MIToken::lparen))
+ return error("expected '(' in syncscope");
+
+ std::string SSN;
+ if (parseStringConstant(SSN))
+ return true;
+
+ SSID = Context.getOrInsertSyncScopeID(SSN);
+ if (expectAndConsume(MIToken::rparen))
+ return error("expected ')' in syncscope");
+ }
+
+ return false;
+}
+
bool MIParser::parseOptionalAtomicOrdering(AtomicOrdering &Order) {
Order = AtomicOrdering::NotAtomic;
if (Token.isNot(MIToken::Identifier))
@@ -2138,12 +2163,10 @@
Flags |= MachineMemOperand::MOStore;
lex();
- // Optional "singlethread" scope.
- SynchronizationScope Scope = SynchronizationScope::CrossThread;
- if (Token.is(MIToken::Identifier) && Token.stringValue() == "singlethread") {
- Scope = SynchronizationScope::SingleThread;
- lex();
- }
+ // Optional synchronization scope.
+ SyncScope::ID SSID;
+ if (parseOptionalScope(MF.getFunction()->getContext(), SSID))
+ return true;
// Up to two atomic orderings (cmpxchg provides guarantees on failure).
AtomicOrdering Order, FailureOrder;
@@ -2208,7 +2231,7 @@
if (expectAndConsume(MIToken::rparen))
return true;
Dest = MF.getMachineMemOperand(Ptr, Flags, Size, BaseAlignment, AAInfo, Range,
- Scope, Order, FailureOrder);
+ SSID, Order, FailureOrder);
return false;
}
@@ -2421,6 +2444,14 @@
return false;
}
+bool MIParser::parseStringConstant(std::string &Result) {
+ if (Token.isNot(MIToken::StringConstant))
+ return error("expected string constant");
+ Result = Token.stringValue();
+ lex();
+ return false;
+}
+
bool llvm::parseMachineBasicBlockDefinitions(PerFunctionMIParsingState &PFS,
StringRef Src,
SMDiagnostic &Error) {
Index: lib/CodeGen/MIRPrinter.cpp
===================================================================
--- lib/CodeGen/MIRPrinter.cpp
+++ lib/CodeGen/MIRPrinter.cpp
@@ -16,6 +16,7 @@
#include "llvm/ADT/STLExtras.h"
#include "llvm/ADT/SmallBitVector.h"
+#include "llvm/ADT/StringExtras.h"
#include "llvm/CodeGen/GlobalISel/RegisterBank.h"
#include "llvm/CodeGen/MIRYamlMapping.h"
#include "llvm/CodeGen/MachineConstantPool.h"
@@ -109,6 +110,8 @@
ModuleSlotTracker &MST;
const DenseMap<const uint32_t *, unsigned> &RegisterMaskIds;
const DenseMap<int, FrameIndexOperand> &StackObjectOperandMapping;
+ /// Synchronization scope names registered with LLVMContext.
+ SmallVector<StringRef, 8> SSNs;
bool canPredictBranchProbabilities(const MachineBasicBlock &MBB) const;
bool canPredictSuccessors(const MachineBasicBlock &MBB) const;
@@ -132,7 +135,8 @@
void print(const MachineOperand &Op, const TargetRegisterInfo *TRI,
unsigned I, bool ShouldPrintRegisterTies,
LLT TypeToPrint, bool IsDef = false);
- void print(const MachineMemOperand &Op);
+ void print(const LLVMContext &Context, const MachineMemOperand &Op);
+ void printSyncScope(const LLVMContext &Context, SyncScope::ID SSID);
void print(const MCCFIInstruction &CFI, const TargetRegisterInfo *TRI);
};
@@ -701,11 +705,12 @@
if (!MI.memoperands_empty()) {
OS << " :: ";
+ const LLVMContext &Context = MF->getFunction()->getContext();
bool NeedComma = false;
for (const auto *Op : MI.memoperands()) {
if (NeedComma)
OS << ", ";
- print(*Op);
+ print(Context, *Op);
NeedComma = true;
}
}
@@ -996,7 +1001,7 @@
}
}
-void MIPrinter::print(const MachineMemOperand &Op) {
+void MIPrinter::print(const LLVMContext &Context, const MachineMemOperand &Op) {
OS << '(';
// TODO: Print operand's target specific flags.
if (Op.isVolatile())
@@ -1014,8 +1019,7 @@
OS << "store ";
}
- if (Op.getSynchScope() == SynchronizationScope::SingleThread)
- OS << "singlethread ";
+ printSyncScope(Context, Op.getSyncScopeID());
if (Op.getOrdering() != AtomicOrdering::NotAtomic)
OS << toIRString(Op.getOrdering()) << ' ';
@@ -1084,6 +1088,23 @@
OS << ')';
}
+void MIPrinter::printSyncScope(const LLVMContext &Context, SyncScope::ID SSID) {
+ switch (SSID) {
+ case SyncScope::System: {
+ break;
+ }
+ default: {
+ if (SSNs.empty())
+ Context.getSyncScopeNames(SSNs);
+
+ OS << "syncscope(\"";
+ PrintEscapedString(SSNs[SSID], OS);
+ OS << "\") ";
+ break;
+ }
+ }
+}
+
static void printCFIRegister(unsigned DwarfReg, raw_ostream &OS,
const TargetRegisterInfo *TRI) {
int Reg = TRI->getLLVMRegNum(DwarfReg, true);
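Reviewer note: `MIPrinter::printSyncScope` above prints nothing for the system scope and `syncscope("<name>") ` otherwise, escaping the name. A standalone sketch of that behavior (hypothetical names; the escaper is a minimal stand-in for `PrintEscapedString`, which in LLVM also hex-escapes non-printable bytes):

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// Render a scope ID the way the printer does: the default (system, ID 1)
// scope stays implicit, any other scope prints syncscope("<name>") with a
// trailing space, using a name table indexed by ID.
std::string printSyncScope(unsigned SSID,
                           const std::vector<std::string> &Names) {
  const unsigned System = 1;
  if (SSID == System)
    return ""; // system scope is the default and is left implicit
  std::ostringstream OS;
  OS << "syncscope(\"";
  for (char C : Names[SSID]) { // minimal escaping stand-in
    if (C == '"' || C == '\\')
      OS << '\\';
    OS << C;
  }
  OS << "\") ";
  return OS.str();
}
```

Leaving the system scope implicit keeps the common case of textual MIR and IR unchanged, which is why most existing tests need no updates.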
Index: lib/CodeGen/MachineFunction.cpp
===================================================================
--- lib/CodeGen/MachineFunction.cpp
+++ lib/CodeGen/MachineFunction.cpp
@@ -308,11 +308,11 @@
MachineMemOperand *MachineFunction::getMachineMemOperand(
MachinePointerInfo PtrInfo, MachineMemOperand::Flags f, uint64_t s,
unsigned base_alignment, const AAMDNodes &AAInfo, const MDNode *Ranges,
- SynchronizationScope SynchScope, AtomicOrdering Ordering,
+ SyncScope::ID SSID, AtomicOrdering Ordering,
AtomicOrdering FailureOrdering) {
return new (Allocator)
MachineMemOperand(PtrInfo, f, s, base_alignment, AAInfo, Ranges,
- SynchScope, Ordering, FailureOrdering);
+ SSID, Ordering, FailureOrdering);
}
MachineMemOperand *
@@ -323,13 +323,13 @@
MachineMemOperand(MachinePointerInfo(MMO->getValue(),
MMO->getOffset()+Offset),
MMO->getFlags(), Size, MMO->getBaseAlignment(),
- AAMDNodes(), nullptr, MMO->getSynchScope(),
+ AAMDNodes(), nullptr, MMO->getSyncScopeID(),
MMO->getOrdering(), MMO->getFailureOrdering());
return new (Allocator)
MachineMemOperand(MachinePointerInfo(MMO->getPseudoValue(),
MMO->getOffset()+Offset),
MMO->getFlags(), Size, MMO->getBaseAlignment(),
- AAMDNodes(), nullptr, MMO->getSynchScope(),
+ AAMDNodes(), nullptr, MMO->getSyncScopeID(),
MMO->getOrdering(), MMO->getFailureOrdering());
}
@@ -362,7 +362,7 @@
(*I)->getFlags() & ~MachineMemOperand::MOStore,
(*I)->getSize(), (*I)->getBaseAlignment(),
(*I)->getAAInfo(), nullptr,
- (*I)->getSynchScope(), (*I)->getOrdering(),
+ (*I)->getSyncScopeID(), (*I)->getOrdering(),
(*I)->getFailureOrdering());
Result[Index] = JustLoad;
}
@@ -396,7 +396,7 @@
(*I)->getFlags() & ~MachineMemOperand::MOLoad,
(*I)->getSize(), (*I)->getBaseAlignment(),
(*I)->getAAInfo(), nullptr,
- (*I)->getSynchScope(), (*I)->getOrdering(),
+ (*I)->getSyncScopeID(), (*I)->getOrdering(),
(*I)->getFailureOrdering());
Result[Index] = JustStore;
}
Index: lib/CodeGen/MachineInstr.cpp
===================================================================
--- lib/CodeGen/MachineInstr.cpp
+++ lib/CodeGen/MachineInstr.cpp
@@ -563,7 +563,7 @@
uint64_t s, unsigned int a,
const AAMDNodes &AAInfo,
const MDNode *Ranges,
- SynchronizationScope SynchScope,
+ SyncScope::ID SSID,
AtomicOrdering Ordering,
AtomicOrdering FailureOrdering)
: PtrInfo(ptrinfo), Size(s), FlagVals(f), BaseAlignLog2(Log2_32(a) + 1),
@@ -574,8 +574,8 @@
assert(getBaseAlignment() == a && "Alignment is not a power of 2!");
assert((isLoad() || isStore()) && "Not a load/store!");
- AtomicInfo.SynchScope = static_cast<unsigned>(SynchScope);
- assert(getSynchScope() == SynchScope && "Value truncated");
+ AtomicInfo.SSID = static_cast<unsigned>(SSID);
+ assert(getSyncScopeID() == SSID && "Value truncated");
AtomicInfo.Ordering = static_cast<unsigned>(Ordering);
assert(getOrdering() == Ordering && "Value truncated");
AtomicInfo.FailureOrdering = static_cast<unsigned>(FailureOrdering);
Index: lib/CodeGen/SelectionDAG/SelectionDAG.cpp
===================================================================
--- lib/CodeGen/SelectionDAG/SelectionDAG.cpp
+++ lib/CodeGen/SelectionDAG/SelectionDAG.cpp
@@ -5408,7 +5408,7 @@
unsigned Opcode, const SDLoc &dl, EVT MemVT, SDVTList VTs, SDValue Chain,
SDValue Ptr, SDValue Cmp, SDValue Swp, MachinePointerInfo PtrInfo,
unsigned Alignment, AtomicOrdering SuccessOrdering,
- AtomicOrdering FailureOrdering, SynchronizationScope SynchScope) {
+ AtomicOrdering FailureOrdering, SyncScope::ID SSID) {
assert(Opcode == ISD::ATOMIC_CMP_SWAP ||
Opcode == ISD::ATOMIC_CMP_SWAP_WITH_SUCCESS);
assert(Cmp.getValueType() == Swp.getValueType() && "Invalid Atomic Op Types");
@@ -5424,7 +5424,7 @@
MachineMemOperand::MOStore;
MachineMemOperand *MMO =
MF.getMachineMemOperand(PtrInfo, Flags, MemVT.getStoreSize(), Alignment,
- AAMDNodes(), nullptr, SynchScope, SuccessOrdering,
+ AAMDNodes(), nullptr, SSID, SuccessOrdering,
FailureOrdering);
return getAtomicCmpSwap(Opcode, dl, MemVT, VTs, Chain, Ptr, Cmp, Swp, MMO);
@@ -5446,7 +5446,7 @@
SDValue Chain, SDValue Ptr, SDValue Val,
const Value *PtrVal, unsigned Alignment,
AtomicOrdering Ordering,
- SynchronizationScope SynchScope) {
+ SyncScope::ID SSID) {
if (Alignment == 0) // Ensure that codegen never sees alignment 0
Alignment = getEVTAlignment(MemVT);
@@ -5466,7 +5466,7 @@
MachineMemOperand *MMO =
MF.getMachineMemOperand(MachinePointerInfo(PtrVal), Flags,
MemVT.getStoreSize(), Alignment, AAMDNodes(),
- nullptr, SynchScope, Ordering);
+ nullptr, SSID, Ordering);
return getAtomic(Opcode, dl, MemVT, Chain, Ptr, Val, MMO);
}
Index: lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
===================================================================
--- lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
+++ lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
@@ -3893,7 +3893,7 @@
SDLoc dl = getCurSDLoc();
AtomicOrdering SuccessOrder = I.getSuccessOrdering();
AtomicOrdering FailureOrder = I.getFailureOrdering();
- SynchronizationScope Scope = I.getSynchScope();
+ SyncScope::ID SSID = I.getSyncScopeID();
SDValue InChain = getRoot();
@@ -3903,7 +3903,7 @@
ISD::ATOMIC_CMP_SWAP_WITH_SUCCESS, dl, MemVT, VTs, InChain,
getValue(I.getPointerOperand()), getValue(I.getCompareOperand()),
getValue(I.getNewValOperand()), MachinePointerInfo(I.getPointerOperand()),
- /*Alignment=*/ 0, SuccessOrder, FailureOrder, Scope);
+ /*Alignment=*/ 0, SuccessOrder, FailureOrder, SSID);
SDValue OutChain = L.getValue(2);
@@ -3929,7 +3929,7 @@
case AtomicRMWInst::UMin: NT = ISD::ATOMIC_LOAD_UMIN; break;
}
AtomicOrdering Order = I.getOrdering();
- SynchronizationScope Scope = I.getSynchScope();
+ SyncScope::ID SSID = I.getSyncScopeID();
SDValue InChain = getRoot();
@@ -3940,7 +3940,7 @@
getValue(I.getPointerOperand()),
getValue(I.getValOperand()),
I.getPointerOperand(),
- /* Alignment=*/ 0, Order, Scope);
+ /* Alignment=*/ 0, Order, SSID);
SDValue OutChain = L.getValue(1);
@@ -3955,7 +3955,7 @@
Ops[0] = getRoot();
Ops[1] = DAG.getConstant((unsigned)I.getOrdering(), dl,
TLI.getFenceOperandTy(DAG.getDataLayout()));
- Ops[2] = DAG.getConstant(I.getSynchScope(), dl,
+ Ops[2] = DAG.getConstant(I.getSyncScopeID(), dl,
TLI.getFenceOperandTy(DAG.getDataLayout()));
DAG.setRoot(DAG.getNode(ISD::ATOMIC_FENCE, dl, MVT::Other, Ops));
}
@@ -3963,7 +3963,7 @@
void SelectionDAGBuilder::visitAtomicLoad(const LoadInst &I) {
SDLoc dl = getCurSDLoc();
AtomicOrdering Order = I.getOrdering();
- SynchronizationScope Scope = I.getSynchScope();
+ SyncScope::ID SSID = I.getSyncScopeID();
SDValue InChain = getRoot();
@@ -3981,7 +3981,7 @@
VT.getStoreSize(),
I.getAlignment() ? I.getAlignment() :
DAG.getEVTAlignment(VT),
- AAMDNodes(), nullptr, Scope, Order);
+ AAMDNodes(), nullptr, SSID, Order);
InChain = TLI.prepareVolatileOrAtomicLoad(InChain, dl, DAG);
SDValue L =
@@ -3998,7 +3998,7 @@
SDLoc dl = getCurSDLoc();
AtomicOrdering Order = I.getOrdering();
- SynchronizationScope Scope = I.getSynchScope();
+ SyncScope::ID SSID = I.getSyncScopeID();
SDValue InChain = getRoot();
@@ -4015,7 +4015,7 @@
getValue(I.getPointerOperand()),
getValue(I.getValueOperand()),
I.getPointerOperand(), I.getAlignment(),
- Order, Scope);
+ Order, SSID);
DAG.setRoot(OutChain);
}
Index: lib/IR/AsmWriter.cpp
===================================================================
--- lib/IR/AsmWriter.cpp
+++ lib/IR/AsmWriter.cpp
@@ -2074,6 +2074,8 @@
bool ShouldPreserveUseListOrder;
UseListOrderStack UseListOrders;
SmallVector<StringRef, 8> MDNames;
+ /// Synchronization scope names registered with LLVMContext.
+ SmallVector<StringRef, 8> SSNs;
public:
/// Construct an AssemblyWriter with an external SlotTracker
@@ -2089,10 +2091,15 @@
void writeOperand(const Value *Op, bool PrintType);
void writeParamOperand(const Value *Operand, AttributeSet Attrs);
void writeOperandBundles(ImmutableCallSite CS);
- void writeAtomic(AtomicOrdering Ordering, SynchronizationScope SynchScope);
- void writeAtomicCmpXchg(AtomicOrdering SuccessOrdering,
+ void writeSyncScope(const LLVMContext &Context,
+ SyncScope::ID SSID);
+ void writeAtomic(const LLVMContext &Context,
+ AtomicOrdering Ordering,
+ SyncScope::ID SSID);
+ void writeAtomicCmpXchg(const LLVMContext &Context,
+ AtomicOrdering SuccessOrdering,
AtomicOrdering FailureOrdering,
- SynchronizationScope SynchScope);
+ SyncScope::ID SSID);
void writeAllMDNodes();
void writeMDNode(unsigned Slot, const MDNode *Node);
@@ -2153,30 +2160,42 @@
WriteAsOperandInternal(Out, Operand, &TypePrinter, &Machine, TheModule);
}
-void AssemblyWriter::writeAtomic(AtomicOrdering Ordering,
- SynchronizationScope SynchScope) {
- if (Ordering == AtomicOrdering::NotAtomic)
- return;
+void AssemblyWriter::writeSyncScope(const LLVMContext &Context,
+ SyncScope::ID SSID) {
+ switch (SSID) {
+ case SyncScope::System: {
+ break;
+ }
+ default: {
+ if (SSNs.empty())
+ Context.getSyncScopeNames(SSNs);
- switch (SynchScope) {
- case SingleThread: Out << " singlethread"; break;
- case CrossThread: break;
+ Out << " syncscope(\"";
+ PrintEscapedString(SSNs[SSID], Out);
+ Out << "\")";
+ break;
+ }
}
+}
+
+void AssemblyWriter::writeAtomic(const LLVMContext &Context,
+ AtomicOrdering Ordering,
+ SyncScope::ID SSID) {
+ if (Ordering == AtomicOrdering::NotAtomic)
+ return;
+ writeSyncScope(Context, SSID);
Out << " " << toIRString(Ordering);
}
-void AssemblyWriter::writeAtomicCmpXchg(AtomicOrdering SuccessOrdering,
+void AssemblyWriter::writeAtomicCmpXchg(const LLVMContext &Context,
+ AtomicOrdering SuccessOrdering,
AtomicOrdering FailureOrdering,
- SynchronizationScope SynchScope) {
+ SyncScope::ID SSID) {
assert(SuccessOrdering != AtomicOrdering::NotAtomic &&
FailureOrdering != AtomicOrdering::NotAtomic);
- switch (SynchScope) {
- case SingleThread: Out << " singlethread"; break;
- case CrossThread: break;
- }
-
+ writeSyncScope(Context, SSID);
Out << " " << toIRString(SuccessOrdering);
Out << " " << toIRString(FailureOrdering);
}
@@ -3176,21 +3195,22 @@
// Print atomic ordering/alignment for memory operations
if (const LoadInst *LI = dyn_cast<LoadInst>(&I)) {
if (LI->isAtomic())
- writeAtomic(LI->getOrdering(), LI->getSynchScope());
+ writeAtomic(LI->getContext(), LI->getOrdering(), LI->getSyncScopeID());
if (LI->getAlignment())
Out << ", align " << LI->getAlignment();
} else if (const StoreInst *SI = dyn_cast<StoreInst>(&I)) {
if (SI->isAtomic())
- writeAtomic(SI->getOrdering(), SI->getSynchScope());
+ writeAtomic(SI->getContext(), SI->getOrdering(), SI->getSyncScopeID());
if (SI->getAlignment())
Out << ", align " << SI->getAlignment();
} else if (const AtomicCmpXchgInst *CXI = dyn_cast<AtomicCmpXchgInst>(&I)) {
- writeAtomicCmpXchg(CXI->getSuccessOrdering(), CXI->getFailureOrdering(),
- CXI->getSynchScope());
+ writeAtomicCmpXchg(CXI->getContext(), CXI->getSuccessOrdering(),
+ CXI->getFailureOrdering(), CXI->getSyncScopeID());
} else if (const AtomicRMWInst *RMWI = dyn_cast<AtomicRMWInst>(&I)) {
- writeAtomic(RMWI->getOrdering(), RMWI->getSynchScope());
+ writeAtomic(RMWI->getContext(), RMWI->getOrdering(),
+ RMWI->getSyncScopeID());
} else if (const FenceInst *FI = dyn_cast<FenceInst>(&I)) {
- writeAtomic(FI->getOrdering(), FI->getSynchScope());
+ writeAtomic(FI->getContext(), FI->getOrdering(), FI->getSyncScopeID());
}
// Print Metadata info.
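For context, the suffix that the new `writeSyncScope` emits can be sketched as a standalone C++ function (hypothetical names, no LLVM dependencies; the only assumption is that, as in the patch, the default system scope prints nothing and every other scope prints `syncscope("<name>")` from the context's name table):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Sketch of AssemblyWriter::writeSyncScope's output: the pre-defined system
// scope produces no suffix; any other scope ID is looked up in the name
// table (names ordered by increasing ID) and printed as syncscope("name").
std::string syncScopeSuffix(unsigned SSID,
                            const std::vector<std::string> &SSNs) {
  const unsigned SystemID = 1; // mirrors SyncScope::System in this sketch
  if (SSID == SystemID)
    return ""; // default cross-thread scope: printed implicitly
  return " syncscope(\"" + SSNs[SSID] + "\")";
}
```

With a name table `{"singlethread", "", "agent"}`, ID 1 yields an empty suffix while IDs 0 and 2 yield ` syncscope("singlethread")` and ` syncscope("agent")` respectively, matching the textual IR forms in the LangRef change above.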
Index: lib/IR/Core.cpp
===================================================================
--- lib/IR/Core.cpp
+++ lib/IR/Core.cpp
@@ -2743,11 +2743,14 @@
llvm_unreachable("Invalid AtomicOrdering value!");
}
+// TODO: Should this and other atomic instructions support building with
+// "syncscope"?
LLVMValueRef LLVMBuildFence(LLVMBuilderRef B, LLVMAtomicOrdering Ordering,
LLVMBool isSingleThread, const char *Name) {
return wrap(
unwrap(B)->CreateFence(mapFromLLVMOrdering(Ordering),
- isSingleThread ? SingleThread : CrossThread,
+ isSingleThread ? SyncScope::SingleThread
+ : SyncScope::System,
Name));
}
@@ -3029,7 +3032,8 @@
case LLVMAtomicRMWBinOpUMin: intop = AtomicRMWInst::UMin; break;
}
return wrap(unwrap(B)->CreateAtomicRMW(intop, unwrap(PTR), unwrap(Val),
- mapFromLLVMOrdering(ordering), singleThread ? SingleThread : CrossThread));
+ mapFromLLVMOrdering(ordering), singleThread ? SyncScope::SingleThread
+ : SyncScope::System));
}
LLVMValueRef LLVMBuildAtomicCmpXchg(LLVMBuilderRef B, LLVMValueRef Ptr,
@@ -3041,7 +3045,7 @@
return wrap(unwrap(B)->CreateAtomicCmpXchg(unwrap(Ptr), unwrap(Cmp),
unwrap(New), mapFromLLVMOrdering(SuccessOrdering),
mapFromLLVMOrdering(FailureOrdering),
- singleThread ? SingleThread : CrossThread));
+ singleThread ? SyncScope::SingleThread : SyncScope::System));
}
@@ -3049,17 +3053,18 @@
Value *P = unwrap(AtomicInst);
if (AtomicRMWInst *I = dyn_cast<AtomicRMWInst>(P))
- return I->getSynchScope() == SingleThread;
- return cast<AtomicCmpXchgInst>(P)->getSynchScope() == SingleThread;
+ return I->getSyncScopeID() == SyncScope::SingleThread;
+ return cast<AtomicCmpXchgInst>(P)->getSyncScopeID() ==
+ SyncScope::SingleThread;
}
void LLVMSetAtomicSingleThread(LLVMValueRef AtomicInst, LLVMBool NewValue) {
Value *P = unwrap(AtomicInst);
- SynchronizationScope Sync = NewValue ? SingleThread : CrossThread;
+ SyncScope::ID SSID = NewValue ? SyncScope::SingleThread : SyncScope::System;
if (AtomicRMWInst *I = dyn_cast<AtomicRMWInst>(P))
- return I->setSynchScope(Sync);
- return cast<AtomicCmpXchgInst>(P)->setSynchScope(Sync);
+ return I->setSyncScopeID(SSID);
+ return cast<AtomicCmpXchgInst>(P)->setSyncScopeID(SSID);
}
LLVMAtomicOrdering LLVMGetCmpXchgSuccessOrdering(LLVMValueRef CmpXchgInst) {
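The C API changes above keep the old boolean interface and map it onto the two pre-defined scopes. A minimal standalone sketch of that mapping (hypothetical names; `SingleThread`/`System` use the IDs the LLVMContext constructor registers, 0 and 1):

```cpp
#include <cassert>
#include <cstdint>

// Sketch of the boolean <-> scope mapping used by LLVMBuildFence and
// LLVMIsAtomicSingleThread: the builder's single-thread flag selects one of
// the two pre-defined scope IDs, and the getter reports true only for the
// single-thread scope, so any target-specific scope reads back as false.
using SyncScopeID = uint8_t;
const SyncScopeID SingleThread = 0; // SyncScope::SingleThread
const SyncScopeID System = 1;       // SyncScope::System

SyncScopeID scopeFromBool(bool IsSingleThread) {
  return IsSingleThread ? SingleThread : System;
}

bool isSingleThread(SyncScopeID SSID) { return SSID == SingleThread; }
```

This also illustrates why the TODO above asks about a `syncscope`-aware builder: the boolean round-trip cannot express target-specific scopes.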
Index: lib/IR/Instruction.cpp
===================================================================
--- lib/IR/Instruction.cpp
+++ lib/IR/Instruction.cpp
@@ -362,13 +362,13 @@
(LI->getAlignment() == cast<LoadInst>(I2)->getAlignment() ||
IgnoreAlignment) &&
LI->getOrdering() == cast<LoadInst>(I2)->getOrdering() &&
- LI->getSynchScope() == cast<LoadInst>(I2)->getSynchScope();
+ LI->getSyncScopeID() == cast<LoadInst>(I2)->getSyncScopeID();
if (const StoreInst *SI = dyn_cast<StoreInst>(I1))
return SI->isVolatile() == cast<StoreInst>(I2)->isVolatile() &&
(SI->getAlignment() == cast<StoreInst>(I2)->getAlignment() ||
IgnoreAlignment) &&
SI->getOrdering() == cast<StoreInst>(I2)->getOrdering() &&
- SI->getSynchScope() == cast<StoreInst>(I2)->getSynchScope();
+ SI->getSyncScopeID() == cast<StoreInst>(I2)->getSyncScopeID();
if (const CmpInst *CI = dyn_cast<CmpInst>(I1))
return CI->getPredicate() == cast<CmpInst>(I2)->getPredicate();
if (const CallInst *CI = dyn_cast<CallInst>(I1))
@@ -386,7 +386,7 @@
return EVI->getIndices() == cast<ExtractValueInst>(I2)->getIndices();
if (const FenceInst *FI = dyn_cast<FenceInst>(I1))
return FI->getOrdering() == cast<FenceInst>(I2)->getOrdering() &&
- FI->getSynchScope() == cast<FenceInst>(I2)->getSynchScope();
+ FI->getSyncScopeID() == cast<FenceInst>(I2)->getSyncScopeID();
if (const AtomicCmpXchgInst *CXI = dyn_cast<AtomicCmpXchgInst>(I1))
return CXI->isVolatile() == cast<AtomicCmpXchgInst>(I2)->isVolatile() &&
CXI->isWeak() == cast<AtomicCmpXchgInst>(I2)->isWeak() &&
@@ -394,12 +394,13 @@
cast<AtomicCmpXchgInst>(I2)->getSuccessOrdering() &&
CXI->getFailureOrdering() ==
cast<AtomicCmpXchgInst>(I2)->getFailureOrdering() &&
- CXI->getSynchScope() == cast<AtomicCmpXchgInst>(I2)->getSynchScope();
+ CXI->getSyncScopeID() ==
+ cast<AtomicCmpXchgInst>(I2)->getSyncScopeID();
if (const AtomicRMWInst *RMWI = dyn_cast<AtomicRMWInst>(I1))
return RMWI->getOperation() == cast<AtomicRMWInst>(I2)->getOperation() &&
RMWI->isVolatile() == cast<AtomicRMWInst>(I2)->isVolatile() &&
RMWI->getOrdering() == cast<AtomicRMWInst>(I2)->getOrdering() &&
- RMWI->getSynchScope() == cast<AtomicRMWInst>(I2)->getSynchScope();
+ RMWI->getSyncScopeID() == cast<AtomicRMWInst>(I2)->getSyncScopeID();
return true;
}
Index: lib/IR/Instructions.cpp
===================================================================
--- lib/IR/Instructions.cpp
+++ lib/IR/Instructions.cpp
@@ -1376,34 +1376,34 @@
LoadInst::LoadInst(Type *Ty, Value *Ptr, const Twine &Name, bool isVolatile,
unsigned Align, Instruction *InsertBef)
: LoadInst(Ty, Ptr, Name, isVolatile, Align, AtomicOrdering::NotAtomic,
- CrossThread, InsertBef) {}
+ SyncScope::System, InsertBef) {}
LoadInst::LoadInst(Value *Ptr, const Twine &Name, bool isVolatile,
unsigned Align, BasicBlock *InsertAE)
: LoadInst(Ptr, Name, isVolatile, Align, AtomicOrdering::NotAtomic,
- CrossThread, InsertAE) {}
+ SyncScope::System, InsertAE) {}
LoadInst::LoadInst(Type *Ty, Value *Ptr, const Twine &Name, bool isVolatile,
unsigned Align, AtomicOrdering Order,
- SynchronizationScope SynchScope, Instruction *InsertBef)
+ SyncScope::ID SSID, Instruction *InsertBef)
: UnaryInstruction(Ty, Load, Ptr, InsertBef) {
assert(Ty == cast<PointerType>(Ptr->getType())->getElementType());
setVolatile(isVolatile);
setAlignment(Align);
- setAtomic(Order, SynchScope);
+ setAtomic(Order, SSID);
AssertOK();
setName(Name);
}
LoadInst::LoadInst(Value *Ptr, const Twine &Name, bool isVolatile,
unsigned Align, AtomicOrdering Order,
- SynchronizationScope SynchScope,
+ SyncScope::ID SSID,
BasicBlock *InsertAE)
: UnaryInstruction(cast<PointerType>(Ptr->getType())->getElementType(),
Load, Ptr, InsertAE) {
setVolatile(isVolatile);
setAlignment(Align);
- setAtomic(Order, SynchScope);
+ setAtomic(Order, SSID);
AssertOK();
setName(Name);
}
@@ -1491,16 +1491,16 @@
StoreInst::StoreInst(Value *val, Value *addr, bool isVolatile, unsigned Align,
Instruction *InsertBefore)
: StoreInst(val, addr, isVolatile, Align, AtomicOrdering::NotAtomic,
- CrossThread, InsertBefore) {}
+ SyncScope::System, InsertBefore) {}
StoreInst::StoreInst(Value *val, Value *addr, bool isVolatile, unsigned Align,
BasicBlock *InsertAtEnd)
: StoreInst(val, addr, isVolatile, Align, AtomicOrdering::NotAtomic,
- CrossThread, InsertAtEnd) {}
+ SyncScope::System, InsertAtEnd) {}
StoreInst::StoreInst(Value *val, Value *addr, bool isVolatile,
unsigned Align, AtomicOrdering Order,
- SynchronizationScope SynchScope,
+ SyncScope::ID SSID,
Instruction *InsertBefore)
: Instruction(Type::getVoidTy(val->getContext()), Store,
OperandTraits<StoreInst>::op_begin(this),
@@ -1510,13 +1510,13 @@
Op<1>() = addr;
setVolatile(isVolatile);
setAlignment(Align);
- setAtomic(Order, SynchScope);
+ setAtomic(Order, SSID);
AssertOK();
}
StoreInst::StoreInst(Value *val, Value *addr, bool isVolatile,
unsigned Align, AtomicOrdering Order,
- SynchronizationScope SynchScope,
+ SyncScope::ID SSID,
BasicBlock *InsertAtEnd)
: Instruction(Type::getVoidTy(val->getContext()), Store,
OperandTraits<StoreInst>::op_begin(this),
@@ -1526,7 +1526,7 @@
Op<1>() = addr;
setVolatile(isVolatile);
setAlignment(Align);
- setAtomic(Order, SynchScope);
+ setAtomic(Order, SSID);
AssertOK();
}
@@ -1546,13 +1546,13 @@
void AtomicCmpXchgInst::Init(Value *Ptr, Value *Cmp, Value *NewVal,
AtomicOrdering SuccessOrdering,
AtomicOrdering FailureOrdering,
- SynchronizationScope SynchScope) {
+ SyncScope::ID SSID) {
Op<0>() = Ptr;
Op<1>() = Cmp;
Op<2>() = NewVal;
setSuccessOrdering(SuccessOrdering);
setFailureOrdering(FailureOrdering);
- setSynchScope(SynchScope);
+ setSyncScopeID(SSID);
assert(getOperand(0) && getOperand(1) && getOperand(2) &&
"All operands must be non-null!");
@@ -1579,25 +1579,25 @@
AtomicCmpXchgInst::AtomicCmpXchgInst(Value *Ptr, Value *Cmp, Value *NewVal,
AtomicOrdering SuccessOrdering,
AtomicOrdering FailureOrdering,
- SynchronizationScope SynchScope,
+ SyncScope::ID SSID,
Instruction *InsertBefore)
: Instruction(
StructType::get(Cmp->getType(), Type::getInt1Ty(Cmp->getContext())),
AtomicCmpXchg, OperandTraits<AtomicCmpXchgInst>::op_begin(this),
OperandTraits<AtomicCmpXchgInst>::operands(this), InsertBefore) {
- Init(Ptr, Cmp, NewVal, SuccessOrdering, FailureOrdering, SynchScope);
+ Init(Ptr, Cmp, NewVal, SuccessOrdering, FailureOrdering, SSID);
}
AtomicCmpXchgInst::AtomicCmpXchgInst(Value *Ptr, Value *Cmp, Value *NewVal,
AtomicOrdering SuccessOrdering,
AtomicOrdering FailureOrdering,
- SynchronizationScope SynchScope,
+ SyncScope::ID SSID,
BasicBlock *InsertAtEnd)
: Instruction(
StructType::get(Cmp->getType(), Type::getInt1Ty(Cmp->getContext())),
AtomicCmpXchg, OperandTraits<AtomicCmpXchgInst>::op_begin(this),
OperandTraits<AtomicCmpXchgInst>::operands(this), InsertAtEnd) {
- Init(Ptr, Cmp, NewVal, SuccessOrdering, FailureOrdering, SynchScope);
+ Init(Ptr, Cmp, NewVal, SuccessOrdering, FailureOrdering, SSID);
}
//===----------------------------------------------------------------------===//
@@ -1606,12 +1606,12 @@
void AtomicRMWInst::Init(BinOp Operation, Value *Ptr, Value *Val,
AtomicOrdering Ordering,
- SynchronizationScope SynchScope) {
+ SyncScope::ID SSID) {
Op<0>() = Ptr;
Op<1>() = Val;
setOperation(Operation);
setOrdering(Ordering);
- setSynchScope(SynchScope);
+ setSyncScopeID(SSID);
assert(getOperand(0) && getOperand(1) &&
"All operands must be non-null!");
@@ -1626,24 +1626,24 @@
AtomicRMWInst::AtomicRMWInst(BinOp Operation, Value *Ptr, Value *Val,
AtomicOrdering Ordering,
- SynchronizationScope SynchScope,
+ SyncScope::ID SSID,
Instruction *InsertBefore)
: Instruction(Val->getType(), AtomicRMW,
OperandTraits<AtomicRMWInst>::op_begin(this),
OperandTraits<AtomicRMWInst>::operands(this),
InsertBefore) {
- Init(Operation, Ptr, Val, Ordering, SynchScope);
+ Init(Operation, Ptr, Val, Ordering, SSID);
}
AtomicRMWInst::AtomicRMWInst(BinOp Operation, Value *Ptr, Value *Val,
AtomicOrdering Ordering,
- SynchronizationScope SynchScope,
+ SyncScope::ID SSID,
BasicBlock *InsertAtEnd)
: Instruction(Val->getType(), AtomicRMW,
OperandTraits<AtomicRMWInst>::op_begin(this),
OperandTraits<AtomicRMWInst>::operands(this),
InsertAtEnd) {
- Init(Operation, Ptr, Val, Ordering, SynchScope);
+ Init(Operation, Ptr, Val, Ordering, SSID);
}
//===----------------------------------------------------------------------===//
@@ -1651,19 +1651,19 @@
//===----------------------------------------------------------------------===//
FenceInst::FenceInst(LLVMContext &C, AtomicOrdering Ordering,
- SynchronizationScope SynchScope,
+ SyncScope::ID SSID,
Instruction *InsertBefore)
: Instruction(Type::getVoidTy(C), Fence, nullptr, 0, InsertBefore) {
setOrdering(Ordering);
- setSynchScope(SynchScope);
+ setSyncScopeID(SSID);
}
FenceInst::FenceInst(LLVMContext &C, AtomicOrdering Ordering,
- SynchronizationScope SynchScope,
+ SyncScope::ID SSID,
BasicBlock *InsertAtEnd)
: Instruction(Type::getVoidTy(C), Fence, nullptr, 0, InsertAtEnd) {
setOrdering(Ordering);
- setSynchScope(SynchScope);
+ setSyncScopeID(SSID);
}
//===----------------------------------------------------------------------===//
@@ -3901,12 +3901,12 @@
LoadInst *LoadInst::cloneImpl() const {
return new LoadInst(getOperand(0), Twine(), isVolatile(),
- getAlignment(), getOrdering(), getSynchScope());
+ getAlignment(), getOrdering(), getSyncScopeID());
}
StoreInst *StoreInst::cloneImpl() const {
return new StoreInst(getOperand(0), getOperand(1), isVolatile(),
- getAlignment(), getOrdering(), getSynchScope());
+ getAlignment(), getOrdering(), getSyncScopeID());
}
@@ -3914,7 +3914,7 @@
AtomicCmpXchgInst *Result =
new AtomicCmpXchgInst(getOperand(0), getOperand(1), getOperand(2),
getSuccessOrdering(), getFailureOrdering(),
- getSynchScope());
+ getSyncScopeID());
Result->setVolatile(isVolatile());
Result->setWeak(isWeak());
return Result;
@@ -3922,14 +3922,14 @@
AtomicRMWInst *AtomicRMWInst::cloneImpl() const {
AtomicRMWInst *Result =
- new AtomicRMWInst(getOperation(),getOperand(0), getOperand(1),
- getOrdering(), getSynchScope());
+ new AtomicRMWInst(getOperation(), getOperand(0), getOperand(1),
+ getOrdering(), getSyncScopeID());
Result->setVolatile(isVolatile());
return Result;
}
FenceInst *FenceInst::cloneImpl() const {
- return new FenceInst(getContext(), getOrdering(), getSynchScope());
+ return new FenceInst(getContext(), getOrdering(), getSyncScopeID());
}
TruncInst *TruncInst::cloneImpl() const {
Index: lib/IR/LLVMContext.cpp
===================================================================
--- lib/IR/LLVMContext.cpp
+++ lib/IR/LLVMContext.cpp
@@ -81,6 +81,16 @@
assert(GCTransitionEntry->second == LLVMContext::OB_gc_transition &&
"gc-transition operand bundle id drifted!");
(void)GCTransitionEntry;
+
+ SyncScope::ID SingleThreadSSID =
+ pImpl->getOrInsertSyncScopeID("singlethread");
+ assert(SingleThreadSSID == SyncScope::SingleThread &&
+ "singlethread synchronization scope ID drifted!");
+
+ SyncScope::ID SystemSSID =
+ pImpl->getOrInsertSyncScopeID("");
+ assert(SystemSSID == SyncScope::System &&
+ "system synchronization scope ID drifted!");
}
LLVMContext::~LLVMContext() { delete pImpl; }
@@ -248,6 +258,14 @@
return pImpl->getOperandBundleTagID(Tag);
}
+SyncScope::ID LLVMContext::getOrInsertSyncScopeID(StringRef SSN) {
+ return pImpl->getOrInsertSyncScopeID(SSN);
+}
+
+void LLVMContext::getSyncScopeNames(SmallVectorImpl<StringRef> &SSNs) const {
+ pImpl->getSyncScopeNames(SSNs);
+}
+
void LLVMContext::setGC(const Function &Fn, std::string GCName) {
auto It = pImpl->GCNames.find(&Fn);
Index: lib/IR/LLVMContextImpl.h
===================================================================
--- lib/IR/LLVMContextImpl.h
+++ lib/IR/LLVMContextImpl.h
@@ -1232,6 +1232,20 @@
void getOperandBundleTags(SmallVectorImpl<StringRef> &Tags) const;
uint32_t getOperandBundleTagID(StringRef Tag) const;
+ /// A set of interned synchronization scopes. The StringMap maps
+ /// synchronization scope names to their respective synchronization scope IDs.
+ StringMap<SyncScope::ID> SSC;
+
+ /// getOrInsertSyncScopeID - Maps a synchronization scope name to a
+ /// synchronization scope ID. Every synchronization scope registered with
+ /// LLVMContext has a unique ID, except for the pre-defined ones.
+ SyncScope::ID getOrInsertSyncScopeID(StringRef SSN);
+
+ /// getSyncScopeNames - Populates the client-supplied SmallVector with the
+ /// synchronization scope names registered with LLVMContext. Synchronization
+ /// scope names are ordered by increasing synchronization scope IDs.
+ void getSyncScopeNames(SmallVectorImpl<StringRef> &SSNs) const;
+
/// Maintain the GC name for each function.
///
/// This saves allocating an additional word in Function for programs which
Index: lib/IR/LLVMContextImpl.cpp
===================================================================
--- lib/IR/LLVMContextImpl.cpp
+++ lib/IR/LLVMContextImpl.cpp
@@ -215,6 +215,20 @@
return I->second;
}
+SyncScope::ID LLVMContextImpl::getOrInsertSyncScopeID(StringRef SSN) {
+ auto NewSSID = SSC.size();
+ assert(NewSSID < std::numeric_limits<SyncScope::ID>::max() &&
+ "Hit the maximum number of synchronization scopes allowed!");
+ return SSC.insert(std::make_pair(SSN, SyncScope::ID(NewSSID))).first->second;
+}
+
+void LLVMContextImpl::getSyncScopeNames(
+ SmallVectorImpl<StringRef> &SSNs) const {
+ SSNs.resize(SSC.size());
+ for (const auto &SSE : SSC)
+ SSNs[SSE.second] = SSE.first();
+}
+
/// Singleton instance of the OptBisect class.
///
/// This singleton is accessed via the LLVMContext::getOptBisect() function. It
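The interning scheme above (names mapped to dense IDs in registration order, so the two scopes pre-registered by the LLVMContext constructor always receive IDs 0 and 1) can be sketched in standalone C++ without the LLVM support classes; `SyncScopeRegistry` and its members are hypothetical names for illustration:

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

// Standalone sketch of LLVMContextImpl::getOrInsertSyncScopeID /
// getSyncScopeNames: each distinct name is assigned the next dense ID, a
// repeated name returns its existing ID, and the name table can be
// reconstructed ordered by increasing ID.
struct SyncScopeRegistry {
  using ID = uint8_t;

  ID getOrInsertID(const std::string &Name) {
    auto It = Map.find(Name);
    if (It != Map.end())
      return It->second; // already interned: reuse its ID
    assert(Map.size() < UINT8_MAX &&
           "Hit the maximum number of synchronization scopes allowed!");
    ID New = static_cast<ID>(Map.size());
    Map.emplace(Name, New);
    return New;
  }

  // Names ordered by increasing ID, mirroring getSyncScopeNames.
  std::vector<std::string> names() const {
    std::vector<std::string> SSNs(Map.size());
    for (const auto &E : Map)
      SSNs[E.second] = E.first;
    return SSNs;
  }

private:
  std::unordered_map<std::string, ID> Map;
};
```

Registering "singlethread" first and "" second reproduces the invariant the constructor asserts: they intern to IDs 0 and 1, i.e. `SyncScope::SingleThread` and `SyncScope::System`.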
Index: lib/IR/Verifier.cpp
===================================================================
--- lib/IR/Verifier.cpp
+++ lib/IR/Verifier.cpp
@@ -3113,7 +3113,7 @@
ElTy, &LI);
checkAtomicMemAccessSize(ElTy, &LI);
} else {
- Assert(LI.getSynchScope() == CrossThread,
+ Assert(LI.getSyncScopeID() == SyncScope::System,
"Non-atomic load cannot have SynchronizationScope specified", &LI);
}
@@ -3142,7 +3142,7 @@
ElTy, &SI);
checkAtomicMemAccessSize(ElTy, &SI);
} else {
- Assert(SI.getSynchScope() == CrossThread,
+ Assert(SI.getSyncScopeID() == SyncScope::System,
"Non-atomic store cannot have SynchronizationScope specified", &SI);
}
visitInstruction(SI);
Index: lib/Target/ARM/ARMISelLowering.cpp
===================================================================
--- lib/Target/ARM/ARMISelLowering.cpp
+++ lib/Target/ARM/ARMISelLowering.cpp
@@ -3367,9 +3367,9 @@
static SDValue LowerATOMIC_FENCE(SDValue Op, SelectionDAG &DAG,
const ARMSubtarget *Subtarget) {
SDLoc dl(Op);
- ConstantSDNode *ScopeN = cast<ConstantSDNode>(Op.getOperand(2));
- auto Scope = static_cast<SynchronizationScope>(ScopeN->getZExtValue());
- if (Scope == SynchronizationScope::SingleThread)
+ ConstantSDNode *SSIDNode = cast<ConstantSDNode>(Op.getOperand(2));
+ auto SSID = static_cast<SyncScope::ID>(SSIDNode->getZExtValue());
+ if (SSID == SyncScope::SingleThread)
return Op;
if (!Subtarget->hasDataBarrier()) {
Index: lib/Target/SystemZ/SystemZISelLowering.cpp
===================================================================
--- lib/Target/SystemZ/SystemZISelLowering.cpp
+++ lib/Target/SystemZ/SystemZISelLowering.cpp
@@ -3196,13 +3196,13 @@
SDLoc DL(Op);
AtomicOrdering FenceOrdering = static_cast<AtomicOrdering>(
cast<ConstantSDNode>(Op.getOperand(1))->getZExtValue());
- SynchronizationScope FenceScope = static_cast<SynchronizationScope>(
+ SyncScope::ID FenceSSID = static_cast<SyncScope::ID>(
cast<ConstantSDNode>(Op.getOperand(2))->getZExtValue());
// The only fence that needs an instruction is a sequentially-consistent
// cross-thread fence.
if (FenceOrdering == AtomicOrdering::SequentiallyConsistent &&
- FenceScope == CrossThread) {
+ FenceSSID == SyncScope::System) {
return SDValue(DAG.getMachineNode(SystemZ::Serialize, DL, MVT::Other,
Op.getOperand(0)),
0);
Index: lib/Target/X86/X86ISelLowering.cpp
===================================================================
--- lib/Target/X86/X86ISelLowering.cpp
+++ lib/Target/X86/X86ISelLowering.cpp
@@ -22664,7 +22664,7 @@
auto Builder = IRBuilder<>(AI);
Module *M = Builder.GetInsertBlock()->getParent()->getParent();
- auto SynchScope = AI->getSynchScope();
+ auto SSID = AI->getSyncScopeID();
// We must restrict the ordering to avoid generating loads with Release or
// ReleaseAcquire orderings.
auto Order = AtomicCmpXchgInst::getStrongestFailureOrdering(AI->getOrdering());
@@ -22686,7 +22686,7 @@
// otherwise, we might be able to be more aggressive on relaxed idempotent
// rmw. In practice, they do not look useful, so we don't try to be
// especially clever.
- if (SynchScope == SingleThread)
+ if (SSID == SyncScope::SingleThread)
// FIXME: we could just insert an X86ISD::MEMBARRIER here, except we are at
// the IR level, so we must wrap it in an intrinsic.
return nullptr;
@@ -22705,7 +22705,7 @@
// Finally we can emit the atomic load.
LoadInst *Loaded = Builder.CreateAlignedLoad(Ptr,
AI->getType()->getPrimitiveSizeInBits());
- Loaded->setAtomic(Order, SynchScope);
+ Loaded->setAtomic(Order, SSID);
AI->replaceAllUsesWith(Loaded);
AI->eraseFromParent();
return Loaded;
@@ -22716,13 +22716,13 @@
SDLoc dl(Op);
AtomicOrdering FenceOrdering = static_cast<AtomicOrdering>(
cast<ConstantSDNode>(Op.getOperand(1))->getZExtValue());
- SynchronizationScope FenceScope = static_cast<SynchronizationScope>(
+ SyncScope::ID FenceSSID = static_cast<SyncScope::ID>(
cast<ConstantSDNode>(Op.getOperand(2))->getZExtValue());
// The only fence that needs an instruction is a sequentially-consistent
// cross-thread fence.
if (FenceOrdering == AtomicOrdering::SequentiallyConsistent &&
- FenceScope == CrossThread) {
+ FenceSSID == SyncScope::System) {
if (Subtarget.hasMFence())
return DAG.getNode(X86ISD::MFENCE, dl, MVT::Other, Op.getOperand(0));
Index: lib/Transforms/IPO/GlobalOpt.cpp
===================================================================
--- lib/Transforms/IPO/GlobalOpt.cpp
+++ lib/Transforms/IPO/GlobalOpt.cpp
@@ -837,7 +837,7 @@
if (StoreInst *SI = dyn_cast<StoreInst>(GV->user_back())) {
// The global is initialized when the store to it occurs.
new StoreInst(ConstantInt::getTrue(GV->getContext()), InitBool, false, 0,
- SI->getOrdering(), SI->getSynchScope(), SI);
+ SI->getOrdering(), SI->getSyncScopeID(), SI);
SI->eraseFromParent();
continue;
}
@@ -854,7 +854,7 @@
// Replace the cmp X, 0 with a use of the bool value.
// Sink the load to where the compare was, if atomic rules allow us to.
Value *LV = new LoadInst(InitBool, InitBool->getName()+".val", false, 0,
- LI->getOrdering(), LI->getSynchScope(),
+ LI->getOrdering(), LI->getSyncScopeID(),
LI->isUnordered() ? (Instruction*)ICI : LI);
InitBoolUsed = true;
switch (ICI->getPredicate()) {
@@ -1605,7 +1605,7 @@
assert(LI->getOperand(0) == GV && "Not a copy!");
// Insert a new load, to preserve the saved value.
StoreVal = new LoadInst(NewGV, LI->getName()+".b", false, 0,
- LI->getOrdering(), LI->getSynchScope(), LI);
+ LI->getOrdering(), LI->getSyncScopeID(), LI);
} else {
assert((isa<CastInst>(StoredVal) || isa<SelectInst>(StoredVal)) &&
"This is not a form that we understand!");
@@ -1614,12 +1614,12 @@
}
}
new StoreInst(StoreVal, NewGV, false, 0,
- SI->getOrdering(), SI->getSynchScope(), SI);
+ SI->getOrdering(), SI->getSyncScopeID(), SI);
} else {
// Change the load into a load of bool then a select.
LoadInst *LI = cast<LoadInst>(UI);
LoadInst *NLI = new LoadInst(NewGV, LI->getName()+".b", false, 0,
- LI->getOrdering(), LI->getSynchScope(), LI);
+ LI->getOrdering(), LI->getSyncScopeID(), LI);
Value *NSI;
if (IsOneZero)
NSI = new ZExtInst(NLI, LI->getType(), "", LI);
Index: lib/Transforms/InstCombine/InstCombineLoadStoreAlloca.cpp
===================================================================
--- lib/Transforms/InstCombine/InstCombineLoadStoreAlloca.cpp
+++ lib/Transforms/InstCombine/InstCombineLoadStoreAlloca.cpp
@@ -448,7 +448,7 @@
LoadInst *NewLoad = IC.Builder->CreateAlignedLoad(
IC.Builder->CreateBitCast(Ptr, NewTy->getPointerTo(AS)),
LI.getAlignment(), LI.isVolatile(), LI.getName() + Suffix);
- NewLoad->setAtomic(LI.getOrdering(), LI.getSynchScope());
+ NewLoad->setAtomic(LI.getOrdering(), LI.getSyncScopeID());
MDBuilder MDB(NewLoad->getContext());
for (const auto &MDPair : MD) {
unsigned ID = MDPair.first;
@@ -532,7 +532,7 @@
StoreInst *NewStore = IC.Builder->CreateAlignedStore(
V, IC.Builder->CreateBitCast(Ptr, V->getType()->getPointerTo(AS)),
SI.getAlignment(), SI.isVolatile());
- NewStore->setAtomic(SI.getOrdering(), SI.getSynchScope());
+ NewStore->setAtomic(SI.getOrdering(), SI.getSyncScopeID());
for (const auto &MDPair : MD) {
unsigned ID = MDPair.first;
MDNode *N = MDPair.second;
@@ -1023,9 +1023,9 @@
SI->getOperand(2)->getName()+".val");
assert(LI.isUnordered() && "implied by above");
V1->setAlignment(Align);
- V1->setAtomic(LI.getOrdering(), LI.getSynchScope());
+ V1->setAtomic(LI.getOrdering(), LI.getSyncScopeID());
V2->setAlignment(Align);
- V2->setAtomic(LI.getOrdering(), LI.getSynchScope());
+ V2->setAtomic(LI.getOrdering(), LI.getSyncScopeID());
return SelectInst::Create(SI->getCondition(), V1, V2);
}
@@ -1532,7 +1532,7 @@
SI.isVolatile(),
SI.getAlignment(),
SI.getOrdering(),
- SI.getSynchScope());
+ SI.getSyncScopeID());
InsertNewInstBefore(NewSI, *BBI);
// The debug locations of the original instructions might differ; merge them.
NewSI->setDebugLoc(DILocation::getMergedLocation(SI.getDebugLoc(),
Index: lib/Transforms/Instrumentation/ThreadSanitizer.cpp
===================================================================
--- lib/Transforms/Instrumentation/ThreadSanitizer.cpp
+++ lib/Transforms/Instrumentation/ThreadSanitizer.cpp
@@ -379,10 +379,11 @@
}
static bool isAtomic(Instruction *I) {
+ // TODO: Ask TTI whether synchronization scope is between threads.
if (LoadInst *LI = dyn_cast<LoadInst>(I))
- return LI->isAtomic() && LI->getSynchScope() == CrossThread;
+ return LI->isAtomic() && LI->getSyncScopeID() != SyncScope::SingleThread;
if (StoreInst *SI = dyn_cast<StoreInst>(I))
- return SI->isAtomic() && SI->getSynchScope() == CrossThread;
+ return SI->isAtomic() && SI->getSyncScopeID() != SyncScope::SingleThread;
if (isa<AtomicRMWInst>(I))
return true;
if (isa<AtomicCmpXchgInst>(I))
@@ -676,7 +677,7 @@
I->eraseFromParent();
} else if (FenceInst *FI = dyn_cast<FenceInst>(I)) {
Value *Args[] = {createOrdering(&IRB, FI->getOrdering())};
- Function *F = FI->getSynchScope() == SingleThread ?
+ Function *F = FI->getSyncScopeID() == SyncScope::SingleThread ?
TsanAtomicSignalFence : TsanAtomicThreadFence;
CallInst *C = CallInst::Create(F, Args);
ReplaceInstWithInst(I, C);
Index: lib/Transforms/Scalar/GVN.cpp
===================================================================
--- lib/Transforms/Scalar/GVN.cpp
+++ lib/Transforms/Scalar/GVN.cpp
@@ -1166,7 +1166,7 @@
auto *NewLoad = new LoadInst(LoadPtr, LI->getName()+".pre",
LI->isVolatile(), LI->getAlignment(),
- LI->getOrdering(), LI->getSynchScope(),
+ LI->getOrdering(), LI->getSyncScopeID(),
UnavailablePred->getTerminator());
// Transfer the old load's AA tags to the new load.
Index: lib/Transforms/Scalar/JumpThreading.cpp
===================================================================
--- lib/Transforms/Scalar/JumpThreading.cpp
+++ lib/Transforms/Scalar/JumpThreading.cpp
@@ -1136,7 +1136,7 @@
LoadInst *NewVal = new LoadInst(
LoadedPtr->DoPHITranslation(LoadBB, UnavailablePred),
LI->getName() + ".pr", false, LI->getAlignment(), LI->getOrdering(),
- LI->getSynchScope(), UnavailablePred->getTerminator());
+ LI->getSyncScopeID(), UnavailablePred->getTerminator());
NewVal->setDebugLoc(LI->getDebugLoc());
if (AATags)
NewVal->setAAMetadata(AATags);
Index: lib/Transforms/Scalar/SROA.cpp
===================================================================
--- lib/Transforms/Scalar/SROA.cpp
+++ lib/Transforms/Scalar/SROA.cpp
@@ -2391,7 +2391,7 @@
LoadInst *NewLI = IRB.CreateAlignedLoad(&NewAI, NewAI.getAlignment(),
LI.isVolatile(), LI.getName());
if (LI.isVolatile())
- NewLI->setAtomic(LI.getOrdering(), LI.getSynchScope());
+ NewLI->setAtomic(LI.getOrdering(), LI.getSyncScopeID());
// Try to preserve nonnull metadata
if (TargetTy->isPointerTy())
@@ -2415,7 +2415,7 @@
getSliceAlign(TargetTy),
LI.isVolatile(), LI.getName());
if (LI.isVolatile())
- NewLI->setAtomic(LI.getOrdering(), LI.getSynchScope());
+ NewLI->setAtomic(LI.getOrdering(), LI.getSyncScopeID());
V = NewLI;
IsPtrAdjusted = true;
@@ -2558,7 +2558,7 @@
}
NewSI->copyMetadata(SI, LLVMContext::MD_mem_parallel_loop_access);
if (SI.isVolatile())
- NewSI->setAtomic(SI.getOrdering(), SI.getSynchScope());
+ NewSI->setAtomic(SI.getOrdering(), SI.getSyncScopeID());
Pass.DeadInsts.insert(&SI);
deleteIfTriviallyDead(OldOp);
Index: lib/Transforms/Utils/FunctionComparator.cpp
===================================================================
--- lib/Transforms/Utils/FunctionComparator.cpp
+++ lib/Transforms/Utils/FunctionComparator.cpp
@@ -511,8 +511,8 @@
if (int Res =
cmpOrderings(LI->getOrdering(), cast<LoadInst>(R)->getOrdering()))
return Res;
- if (int Res =
- cmpNumbers(LI->getSynchScope(), cast<LoadInst>(R)->getSynchScope()))
+ if (int Res = cmpNumbers(LI->getSyncScopeID(),
+ cast<LoadInst>(R)->getSyncScopeID()))
return Res;
return cmpRangeMetadata(LI->getMetadata(LLVMContext::MD_range),
cast<LoadInst>(R)->getMetadata(LLVMContext::MD_range));
@@ -527,7 +527,8 @@
if (int Res =
cmpOrderings(SI->getOrdering(), cast<StoreInst>(R)->getOrdering()))
return Res;
- return cmpNumbers(SI->getSynchScope(), cast<StoreInst>(R)->getSynchScope());
+ return cmpNumbers(SI->getSyncScopeID(),
+ cast<StoreInst>(R)->getSyncScopeID());
}
if (const CmpInst *CI = dyn_cast<CmpInst>(L))
return cmpNumbers(CI->getPredicate(), cast<CmpInst>(R)->getPredicate());
@@ -582,7 +583,8 @@
if (int Res =
cmpOrderings(FI->getOrdering(), cast<FenceInst>(R)->getOrdering()))
return Res;
- return cmpNumbers(FI->getSynchScope(), cast<FenceInst>(R)->getSynchScope());
+ return cmpNumbers(FI->getSyncScopeID(),
+ cast<FenceInst>