Index: docs/LangRef.rst
===================================================================
--- docs/LangRef.rst
+++ docs/LangRef.rst
@@ -2172,12 +2172,24 @@
     same address in this global order. This corresponds to the C++0x/C1x
     ``memory_order_seq_cst`` and Java volatile.
 
-.. _singlethread:
+.. _syncscope:
 
-If an atomic operation is marked ``singlethread``, it only *synchronizes
-with* or participates in modification and seq\_cst total orderings with
-other operations running in the same thread (for example, in signal
-handlers).
+If an atomic operation is marked ``syncscope("singlethread")``, it only
+*synchronizes with*, and only participates in the seq\_cst total orderings of,
+other operations running in the same thread (for example, in signal handlers).
+
+If an atomic operation is marked ``syncscope("<a>")``, where ``<a>`` is a
+target specific synchronization scope, then it *synchronizes with*, and
+participates in the seq\_cst total orderings of, other atomic operations marked
+``syncscope("<a>")``. How it interacts with atomic operations marked
+``syncscope("singlethread")``, marked ``syncscope("<b>")`` where
+``<a> != <b>``, or not marked with any synchronization scope is target defined.
+
+Otherwise, an atomic operation that is not marked ``syncscope("singlethread")``
+or ``syncscope("<a>")`` *synchronizes with*, and participates in the global
+seq\_cst total orderings of, other operations that are not marked
+``syncscope("singlethread")`` or ``syncscope("<a>")``.
 
 .. _fastmath:
 
@@ -7280,7 +7292,7 @@
 ::
 
       <result> = load [volatile] <ty>, <ty>* <pointer>[, align <alignment>][, !nontemporal !<index>][, !invariant.load !<index>][, !invariant.group !<index>][, !nonnull !<index>][, !dereferenceable !<deref_bytes_node>][, !dereferenceable_or_null !<deref_bytes_node>][, !align !<align_node>]
-      <result> = load atomic [volatile] <ty>, <ty>* <pointer> [singlethread] <ordering>, align <alignment> [, !invariant.group !<index>]
+      <result> = load atomic [volatile] <ty>, <ty>* <pointer> [syncscope("<a>")] <ordering>, align <alignment> [, !invariant.group !<index>]
       !<index> = !{ i32 1 }
       !<deref_bytes_node> = !{i64 <dereferenceable_bytes>}
       !<align_node> = !{ i64 <value_alignment> }
@@ -7301,15 +7313,15 @@
 :ref:`volatile operations <volatile>`.
 
 If the ``load`` is marked as ``atomic``, it takes an extra :ref:`ordering
-<ordering>` and optional ``singlethread`` argument. The ``release`` and
+<ordering>` and optional ``syncscope("<a>")`` argument. The ``release`` and
 ``acq_rel`` orderings are not valid on ``load`` instructions. Atomic loads
 produce :ref:`defined <memmodel>` results when they may see multiple atomic
 stores. The type of the pointee must be an integer, pointer, or floating-point
 type whose bit width is a power of two greater than or equal to eight and less
 than or equal to a target-specific size limit.  ``align`` must be explicitly
 specified on atomic loads, and the load has undefined behavior if the alignment
-is not set to a value which is at least the size in bytes of the
-pointee. ``!nontemporal`` does not have any defined semantics for atomic loads.
+is not set to a value which is at least the size in bytes of the pointee.
+``!nontemporal`` does not have any defined semantics for atomic loads.
 
 The optional constant ``align`` argument specifies the alignment of the
 operation (that is, the alignment of the memory address). A value of 0
@@ -7409,7 +7421,7 @@
 ::
 
       store [volatile] <ty> <value>, <ty>* <pointer>[, align <alignment>][, !nontemporal !<index>][, !invariant.group !<index>]        ; yields void
-      store atomic [volatile] <ty> <value>, <ty>* <pointer> [singlethread] <ordering>, align <alignment> [, !invariant.group !<index>] ; yields void
+      store atomic [volatile] <ty> <value>, <ty>* <pointer> [syncscope("<a>")] <ordering>, align <alignment> [, !invariant.group !<index>] ; yields void
 
 Overview:
 """""""""
@@ -7429,8 +7441,8 @@
 structural type <t_opaque>`) can be stored.
 
 If the ``store`` is marked as ``atomic``, it takes an extra :ref:`ordering
-<ordering>` and optional ``singlethread`` argument. The ``acquire`` and
-``acq_rel`` orderings aren't valid on ``store`` instructions. Atomic loads
+<ordering>` and optional ``syncscope("<a>")`` argument. The ``acquire`` and
+``acq_rel`` orderings aren't valid on ``store`` instructions. Atomic loads
 produce :ref:`defined <memmodel>` results when they may see multiple atomic
 stores. The type of the pointee must be an integer, pointer, or floating-point
 type whose bit width is a power of two greater than or equal to eight and less
@@ -7497,7 +7509,7 @@
 
 ::
 
-      fence [singlethread] <ordering>                   ; yields void
+      fence [syncscope("<a>")] <ordering>             ; yields void
 
 Overview:
 """""""""
@@ -7531,17 +7543,17 @@
 ``acquire`` and ``release`` semantics specified above, participates in
 the global program order of other ``seq_cst`` operations and/or fences.
 
-The optional ":ref:`singlethread <singlethread>`" argument specifies
-that the fence only synchronizes with other fences in the same thread.
-(This is useful for interacting with signal handlers.)
+A ``fence`` instruction can also take an optional
+":ref:`syncscope <syncscope>`" argument.
 
 Example:
 """"""""
 
 .. code-block:: llvm
 
-      fence acquire                          ; yields void
-      fence singlethread seq_cst             ; yields void
+      fence acquire                                        ; yields void
+      fence syncscope("singlethread") seq_cst              ; yields void
+      fence syncscope("agent") seq_cst                     ; yields void
 
 .. _i_cmpxchg:
 
@@ -7553,7 +7565,7 @@
 
 ::
 
-      cmpxchg [weak] [volatile] <ty>* <pointer>, <ty> <cmp>, <ty> <new> [singlethread] <success ordering> <failure ordering> ; yields  { ty, i1 }
+      cmpxchg [weak] [volatile] <ty>* <pointer>, <ty> <cmp>, <ty> <new> [syncscope("<a>")] <success ordering> <failure ordering> ; yields  { ty, i1 }
 
 Overview:
 """""""""
@@ -7582,10 +7594,8 @@
 stronger than that on success, and the failure ordering cannot be either
 ``release`` or ``acq_rel``.
 
-The optional "``singlethread``" argument declares that the ``cmpxchg``
-is only atomic with respect to code (usually signal handlers) running in
-the same thread as the ``cmpxchg``. Otherwise the cmpxchg is atomic with
-respect to all other code in the system.
+A ``cmpxchg`` instruction can also take an optional
+":ref:`syncscope <syncscope>`" argument.
 
 The pointer passed into cmpxchg must have alignment greater than or
 equal to the size in memory of the operand.
@@ -7639,7 +7649,7 @@
 
 ::
 
-      atomicrmw [volatile] <operation> <ty>* <pointer>, <ty> <value> [singlethread] <ordering>                   ; yields ty
+      atomicrmw [volatile] <operation> <ty>* <pointer>, <ty> <value> [syncscope("<a>")] <ordering>                   ; yields ty
 
 Overview:
 """""""""
@@ -7673,6 +7683,9 @@
 order of execution of this ``atomicrmw`` with other :ref:`volatile
 operations <volatile>`.
 
+An ``atomicrmw`` instruction can also take an optional
+":ref:`syncscope <syncscope>`" argument.
+
 Semantics:
 """"""""""
 
Index: include/llvm/Bitcode/LLVMBitCodes.h
===================================================================
--- include/llvm/Bitcode/LLVMBitCodes.h
+++ include/llvm/Bitcode/LLVMBitCodes.h
@@ -55,6 +55,8 @@
   METADATA_KIND_BLOCK_ID,
 
   STRTAB_BLOCK_ID,
+
+  SYNC_SCOPE_NAMES_BLOCK_ID,
 };
 
 /// Identification block contains a string that describes the producer details,
@@ -168,6 +170,10 @@
   OPERAND_BUNDLE_TAG = 1, // TAG: [strchr x N]
 };
 
+enum SyncScopeNameCode {
+  SYNC_SCOPE_NAME = 1,
+};
+
 // Value symbol table codes.
 enum ValueSymtabCodes {
   VST_CODE_ENTRY = 1,   // VST_ENTRY: [valueid, namechar x N]
@@ -392,12 +398,6 @@
   ORDERING_SEQCST = 6
 };
 
-/// Encoded SynchronizationScope values.
-enum AtomicSynchScopeCodes {
-  SYNCHSCOPE_SINGLETHREAD = 0,
-  SYNCHSCOPE_CROSSTHREAD = 1
-};
-
 /// Markers and flags for call instruction.
 enum CallMarkersFlags {
   CALL_TAIL = 0,
Index: include/llvm/CodeGen/MachineFunction.h
===================================================================
--- include/llvm/CodeGen/MachineFunction.h
+++ include/llvm/CodeGen/MachineFunction.h
@@ -642,7 +642,7 @@
       MachinePointerInfo PtrInfo, MachineMemOperand::Flags f, uint64_t s,
       unsigned base_alignment, const AAMDNodes &AAInfo = AAMDNodes(),
       const MDNode *Ranges = nullptr,
-      SynchronizationScope SynchScope = CrossThread,
+      SyncScope::ID SSID = SyncScope::System,
       AtomicOrdering Ordering = AtomicOrdering::NotAtomic,
       AtomicOrdering FailureOrdering = AtomicOrdering::NotAtomic);
 
Index: include/llvm/CodeGen/MachineMemOperand.h
===================================================================
--- include/llvm/CodeGen/MachineMemOperand.h
+++ include/llvm/CodeGen/MachineMemOperand.h
@@ -119,8 +119,8 @@
 private:
   /// Atomic information for this memory operation.
   struct MachineAtomicInfo {
-    /// Synchronization scope for this memory operation.
-    unsigned SynchScope : 1;      // enum SynchronizationScope
+    /// Synchronization scope ID for this memory operation.
+    unsigned SSID : 8;            // SyncScope::ID
     /// Atomic ordering requirements for this memory operation. For cmpxchg
     /// atomic operations, atomic ordering requirements when store occurs.
     unsigned Ordering : 4;        // enum AtomicOrdering
@@ -147,7 +147,7 @@
                     unsigned base_alignment,
                     const AAMDNodes &AAInfo = AAMDNodes(),
                     const MDNode *Ranges = nullptr,
-                    SynchronizationScope SynchScope = CrossThread,
+                    SyncScope::ID SSID = SyncScope::System,
                     AtomicOrdering Ordering = AtomicOrdering::NotAtomic,
                     AtomicOrdering FailureOrdering = AtomicOrdering::NotAtomic);
 
@@ -197,9 +197,9 @@
   /// Return the range tag for the memory reference.
   const MDNode *getRanges() const { return Ranges; }
 
-  /// Return the synchronization scope for this memory operation.
-  SynchronizationScope getSynchScope() const {
-    return static_cast<SynchronizationScope>(AtomicInfo.SynchScope);
+  /// Returns the synchronization scope ID for this memory operation.
+  SyncScope::ID getSyncScopeID() const {
+    return static_cast<SyncScope::ID>(AtomicInfo.SSID);
   }
 
   /// Return the atomic ordering requirements for this memory operation. For
Index: include/llvm/CodeGen/SelectionDAG.h
===================================================================
--- include/llvm/CodeGen/SelectionDAG.h
+++ include/llvm/CodeGen/SelectionDAG.h
@@ -878,7 +878,7 @@
                            SDValue Cmp, SDValue Swp, MachinePointerInfo PtrInfo,
                            unsigned Alignment, AtomicOrdering SuccessOrdering,
                            AtomicOrdering FailureOrdering,
-                           SynchronizationScope SynchScope);
+                           SyncScope::ID SSID);
   SDValue getAtomicCmpSwap(unsigned Opcode, const SDLoc &dl, EVT MemVT,
                            SDVTList VTs, SDValue Chain, SDValue Ptr,
                            SDValue Cmp, SDValue Swp, MachineMemOperand *MMO);
@@ -888,7 +888,7 @@
   SDValue getAtomic(unsigned Opcode, const SDLoc &dl, EVT MemVT, SDValue Chain,
                     SDValue Ptr, SDValue Val, const Value *PtrVal,
                     unsigned Alignment, AtomicOrdering Ordering,
-                    SynchronizationScope SynchScope);
+                    SyncScope::ID SSID);
   SDValue getAtomic(unsigned Opcode, const SDLoc &dl, EVT MemVT, SDValue Chain,
                     SDValue Ptr, SDValue Val, MachineMemOperand *MMO);
 
Index: include/llvm/CodeGen/SelectionDAGNodes.h
===================================================================
--- include/llvm/CodeGen/SelectionDAGNodes.h
+++ include/llvm/CodeGen/SelectionDAGNodes.h
@@ -1178,8 +1178,8 @@
   /// Returns the Ranges that describes the dereference.
   const MDNode *getRanges() const { return MMO->getRanges(); }
 
-  /// Return the synchronization scope for this memory operation.
-  SynchronizationScope getSynchScope() const { return MMO->getSynchScope(); }
+  /// Returns the synchronization scope ID for this memory operation.
+  SyncScope::ID getSyncScopeID() const { return MMO->getSyncScopeID(); }
 
   /// Return the atomic ordering requirements for this memory operation. For
   /// cmpxchg atomic operations, return the atomic ordering requirements when
Index: include/llvm/IR/IRBuilder.h
===================================================================
--- include/llvm/IR/IRBuilder.h
+++ include/llvm/IR/IRBuilder.h
@@ -1144,22 +1144,22 @@
     return SI;
   }
   FenceInst *CreateFence(AtomicOrdering Ordering,
-                         SynchronizationScope SynchScope = CrossThread,
+                         SyncScope::ID SSID = SyncScope::System,
                          const Twine &Name = "") {
-    return Insert(new FenceInst(Context, Ordering, SynchScope), Name);
+    return Insert(new FenceInst(Context, Ordering, SSID), Name);
   }
   AtomicCmpXchgInst *
   CreateAtomicCmpXchg(Value *Ptr, Value *Cmp, Value *New,
                       AtomicOrdering SuccessOrdering,
                       AtomicOrdering FailureOrdering,
-                      SynchronizationScope SynchScope = CrossThread) {
+                      SyncScope::ID SSID = SyncScope::System) {
     return Insert(new AtomicCmpXchgInst(Ptr, Cmp, New, SuccessOrdering,
-                                        FailureOrdering, SynchScope));
+                                        FailureOrdering, SSID));
   }
   AtomicRMWInst *CreateAtomicRMW(AtomicRMWInst::BinOp Op, Value *Ptr, Value *Val,
                                  AtomicOrdering Ordering,
-                               SynchronizationScope SynchScope = CrossThread) {
-    return Insert(new AtomicRMWInst(Op, Ptr, Val, Ordering, SynchScope));
+                                 SyncScope::ID SSID = SyncScope::System) {
+    return Insert(new AtomicRMWInst(Op, Ptr, Val, Ordering, SSID));
   }
   Value *CreateGEP(Value *Ptr, ArrayRef<Value *> IdxList,
                    const Twine &Name = "") {
Index: include/llvm/IR/Instructions.h
===================================================================
--- include/llvm/IR/Instructions.h
+++ include/llvm/IR/Instructions.h
@@ -47,11 +47,6 @@
 class DataLayout;
 class LLVMContext;
 
-enum SynchronizationScope {
-  SingleThread = 0,
-  CrossThread = 1
-};
-
 //===----------------------------------------------------------------------===//
 //                                AllocaInst Class
 //===----------------------------------------------------------------------===//
@@ -193,17 +188,16 @@
   LoadInst(Value *Ptr, const Twine &NameStr, bool isVolatile,
            unsigned Align, BasicBlock *InsertAtEnd);
   LoadInst(Value *Ptr, const Twine &NameStr, bool isVolatile, unsigned Align,
-           AtomicOrdering Order, SynchronizationScope SynchScope = CrossThread,
+           AtomicOrdering Order, SyncScope::ID SSID = SyncScope::System,
            Instruction *InsertBefore = nullptr)
       : LoadInst(cast<PointerType>(Ptr->getType())->getElementType(), Ptr,
-                 NameStr, isVolatile, Align, Order, SynchScope, InsertBefore) {}
+                 NameStr, isVolatile, Align, Order, SSID, InsertBefore) {}
   LoadInst(Type *Ty, Value *Ptr, const Twine &NameStr, bool isVolatile,
            unsigned Align, AtomicOrdering Order,
-           SynchronizationScope SynchScope = CrossThread,
+           SyncScope::ID SSID = SyncScope::System,
            Instruction *InsertBefore = nullptr);
   LoadInst(Value *Ptr, const Twine &NameStr, bool isVolatile,
-           unsigned Align, AtomicOrdering Order,
-           SynchronizationScope SynchScope,
+           unsigned Align, AtomicOrdering Order, SyncScope::ID SSID,
            BasicBlock *InsertAtEnd);
   LoadInst(Value *Ptr, const char *NameStr, Instruction *InsertBefore);
   LoadInst(Value *Ptr, const char *NameStr, BasicBlock *InsertAtEnd);
@@ -233,34 +227,34 @@
 
   void setAlignment(unsigned Align);
 
-  /// Returns the ordering effect of this fence.
+  /// Returns the ordering constraint of this load instruction.
   AtomicOrdering getOrdering() const {
     return AtomicOrdering((getSubclassDataFromInstruction() >> 7) & 7);
   }
 
-  /// Set the ordering constraint on this load. May not be Release or
-  /// AcquireRelease.
+  /// Sets the ordering constraint of this load instruction.  May not be Release
+  /// or AcquireRelease.
   void setOrdering(AtomicOrdering Ordering) {
     setInstructionSubclassData((getSubclassDataFromInstruction() & ~(7 << 7)) |
                                ((unsigned)Ordering << 7));
   }
 
-  SynchronizationScope getSynchScope() const {
-    return SynchronizationScope((getSubclassDataFromInstruction() >> 6) & 1);
+  /// Returns the synchronization scope ID of this load instruction.
+  SyncScope::ID getSyncScopeID() const {
+    return SSID;
   }
 
-  /// Specify whether this load is ordered with respect to all
-  /// concurrently executing threads, or only with respect to signal handlers
-  /// executing in the same thread.
-  void setSynchScope(SynchronizationScope xthread) {
-    setInstructionSubclassData((getSubclassDataFromInstruction() & ~(1 << 6)) |
-                               (xthread << 6));
+  /// Sets the synchronization scope ID of this load instruction.
+  void setSyncScopeID(SyncScope::ID SSID) {
+    this->SSID = SSID;
   }
 
+  /// Sets the ordering constraint and the synchronization scope ID of this load
+  /// instruction.
   void setAtomic(AtomicOrdering Ordering,
-                 SynchronizationScope SynchScope = CrossThread) {
+                 SyncScope::ID SSID = SyncScope::System) {
     setOrdering(Ordering);
-    setSynchScope(SynchScope);
+    setSyncScopeID(SSID);
   }
 
   bool isSimple() const { return !isAtomic() && !isVolatile(); }
@@ -294,6 +288,11 @@
   void setInstructionSubclassData(unsigned short D) {
     Instruction::setInstructionSubclassData(D);
   }
+
+  /// The synchronization scope ID of this load instruction.  Not quite enough
+  /// room in SubClassData for everything, so synchronization scope ID gets its
+  /// own field.
+  SyncScope::ID SSID;
 };
 
 //===----------------------------------------------------------------------===//
@@ -322,11 +321,10 @@
             unsigned Align, BasicBlock *InsertAtEnd);
   StoreInst(Value *Val, Value *Ptr, bool isVolatile,
             unsigned Align, AtomicOrdering Order,
-            SynchronizationScope SynchScope = CrossThread,
+            SyncScope::ID SSID = SyncScope::System,
             Instruction *InsertBefore = nullptr);
   StoreInst(Value *Val, Value *Ptr, bool isVolatile,
-            unsigned Align, AtomicOrdering Order,
-            SynchronizationScope SynchScope,
+            unsigned Align, AtomicOrdering Order, SyncScope::ID SSID,
             BasicBlock *InsertAtEnd);
 
   // allocate space for exactly two operands
@@ -355,34 +353,34 @@
 
   void setAlignment(unsigned Align);
 
-  /// Returns the ordering effect of this store.
+  /// Returns the ordering constraint of this store instruction.
   AtomicOrdering getOrdering() const {
     return AtomicOrdering((getSubclassDataFromInstruction() >> 7) & 7);
   }
 
-  /// Set the ordering constraint on this store.  May not be Acquire or
-  /// AcquireRelease.
+  /// Sets the ordering constraint of this store instruction.  May not be
+  /// Acquire or AcquireRelease.
   void setOrdering(AtomicOrdering Ordering) {
     setInstructionSubclassData((getSubclassDataFromInstruction() & ~(7 << 7)) |
                                ((unsigned)Ordering << 7));
   }
 
-  SynchronizationScope getSynchScope() const {
-    return SynchronizationScope((getSubclassDataFromInstruction() >> 6) & 1);
+  /// Returns the synchronization scope ID of this store instruction.
+  SyncScope::ID getSyncScopeID() const {
+    return SSID;
   }
 
-  /// Specify whether this store instruction is ordered with respect to all
-  /// concurrently executing threads, or only with respect to signal handlers
-  /// executing in the same thread.
-  void setSynchScope(SynchronizationScope xthread) {
-    setInstructionSubclassData((getSubclassDataFromInstruction() & ~(1 << 6)) |
-                               (xthread << 6));
+  /// Sets the synchronization scope ID of this store instruction.
+  void setSyncScopeID(SyncScope::ID SSID) {
+    this->SSID = SSID;
   }
 
+  /// Sets the ordering constraint and the synchronization scope ID of this
+  /// store instruction.
   void setAtomic(AtomicOrdering Ordering,
-                 SynchronizationScope SynchScope = CrossThread) {
+                 SyncScope::ID SSID = SyncScope::System) {
     setOrdering(Ordering);
-    setSynchScope(SynchScope);
+    setSyncScopeID(SSID);
   }
 
   bool isSimple() const { return !isAtomic() && !isVolatile(); }
@@ -419,6 +417,11 @@
   void setInstructionSubclassData(unsigned short D) {
     Instruction::setInstructionSubclassData(D);
   }
+
+  /// The synchronization scope ID of this store instruction.  Not quite enough
+  /// room in SubClassData for everything, so synchronization scope ID gets its
+  /// own field.
+  SyncScope::ID SSID;
 };
 
 template <>
@@ -433,7 +436,7 @@
 
 /// An instruction for ordering other memory operations.
 class FenceInst : public Instruction {
-  void Init(AtomicOrdering Ordering, SynchronizationScope SynchScope);
+  void Init(AtomicOrdering Ordering, SyncScope::ID SSID);
 
 protected:
   // Note: Instruction needs to be a friend here to call cloneImpl.
@@ -445,10 +448,9 @@
   // Ordering may only be Acquire, Release, AcquireRelease, or
   // SequentiallyConsistent.
   FenceInst(LLVMContext &C, AtomicOrdering Ordering,
-            SynchronizationScope SynchScope = CrossThread,
+            SyncScope::ID SSID = SyncScope::System,
             Instruction *InsertBefore = nullptr);
-  FenceInst(LLVMContext &C, AtomicOrdering Ordering,
-            SynchronizationScope SynchScope,
+  FenceInst(LLVMContext &C, AtomicOrdering Ordering, SyncScope::ID SSID,
             BasicBlock *InsertAtEnd);
 
   // allocate space for exactly zero operands
@@ -458,28 +460,26 @@
 
   void *operator new(size_t, unsigned) = delete;
 
-  /// Returns the ordering effect of this fence.
+  /// Returns the ordering constraint of this fence instruction.
   AtomicOrdering getOrdering() const {
     return AtomicOrdering(getSubclassDataFromInstruction() >> 1);
   }
 
-  /// Set the ordering constraint on this fence.  May only be Acquire, Release,
-  /// AcquireRelease, or SequentiallyConsistent.
+  /// Sets the ordering constraint of this fence instruction.  May only be
+  /// Acquire, Release, AcquireRelease, or SequentiallyConsistent.
   void setOrdering(AtomicOrdering Ordering) {
     setInstructionSubclassData((getSubclassDataFromInstruction() & 1) |
                                ((unsigned)Ordering << 1));
   }
 
-  SynchronizationScope getSynchScope() const {
-    return SynchronizationScope(getSubclassDataFromInstruction() & 1);
+  /// Returns the synchronization scope ID of this fence instruction.
+  SyncScope::ID getSyncScopeID() const {
+    return SSID;
   }
 
-  /// Specify whether this fence orders other operations with respect to all
-  /// concurrently executing threads, or only with respect to signal handlers
-  /// executing in the same thread.
-  void setSynchScope(SynchronizationScope xthread) {
-    setInstructionSubclassData((getSubclassDataFromInstruction() & ~1) |
-                               xthread);
+  /// Sets the synchronization scope ID of this fence instruction.
+  void setSyncScopeID(SyncScope::ID SSID) {
+    this->SSID = SSID;
   }
 
   // Methods for support type inquiry through isa, cast, and dyn_cast:
@@ -496,6 +496,11 @@
   void setInstructionSubclassData(unsigned short D) {
     Instruction::setInstructionSubclassData(D);
   }
+
+  /// The synchronization scope ID of this fence instruction.  Not quite enough
+  /// room in SubClassData for everything, so synchronization scope ID gets its
+  /// own field.
+  SyncScope::ID SSID;
 };
 
 //===----------------------------------------------------------------------===//
@@ -509,7 +514,7 @@
 class AtomicCmpXchgInst : public Instruction {
   void Init(Value *Ptr, Value *Cmp, Value *NewVal,
             AtomicOrdering SuccessOrdering, AtomicOrdering FailureOrdering,
-            SynchronizationScope SynchScope);
+            SyncScope::ID SSID);
 
 protected:
   // Note: Instruction needs to be a friend here to call cloneImpl.
@@ -521,13 +526,11 @@
   AtomicCmpXchgInst(Value *Ptr, Value *Cmp, Value *NewVal,
                     AtomicOrdering SuccessOrdering,
                     AtomicOrdering FailureOrdering,
-                    SynchronizationScope SynchScope,
-                    Instruction *InsertBefore = nullptr);
+                    SyncScope::ID SSID, Instruction *InsertBefore = nullptr);
   AtomicCmpXchgInst(Value *Ptr, Value *Cmp, Value *NewVal,
                     AtomicOrdering SuccessOrdering,
                     AtomicOrdering FailureOrdering,
-                    SynchronizationScope SynchScope,
-                    BasicBlock *InsertAtEnd);
+                    SyncScope::ID SSID, BasicBlock *InsertAtEnd);
 
   // allocate space for exactly three operands
   void *operator new(size_t s) {
@@ -563,7 +566,12 @@
   /// Transparently provide more efficient getOperand methods.
   DECLARE_TRANSPARENT_OPERAND_ACCESSORS(Value);
 
-  /// Set the ordering constraint on this cmpxchg.
+  /// Returns the success ordering constraint of this cmpxchg instruction.
+  AtomicOrdering getSuccessOrdering() const {
+    return AtomicOrdering((getSubclassDataFromInstruction() >> 2) & 7);
+  }
+
+  /// Sets the success ordering constraint of this cmpxchg instruction.
   void setSuccessOrdering(AtomicOrdering Ordering) {
     assert(Ordering != AtomicOrdering::NotAtomic &&
            "CmpXchg instructions can only be atomic.");
@@ -571,6 +579,12 @@
                                ((unsigned)Ordering << 2));
   }
 
+  /// Returns the failure ordering constraint of this cmpxchg instruction.
+  AtomicOrdering getFailureOrdering() const {
+    return AtomicOrdering((getSubclassDataFromInstruction() >> 5) & 7);
+  }
+
+  /// Sets the failure ordering constraint of this cmpxchg instruction.
   void setFailureOrdering(AtomicOrdering Ordering) {
     assert(Ordering != AtomicOrdering::NotAtomic &&
            "CmpXchg instructions can only be atomic.");
@@ -578,28 +592,14 @@
                                ((unsigned)Ordering << 5));
   }
 
-  /// Specify whether this cmpxchg is atomic and orders other operations with
-  /// respect to all concurrently executing threads, or only with respect to
-  /// signal handlers executing in the same thread.
-  void setSynchScope(SynchronizationScope SynchScope) {
-    setInstructionSubclassData((getSubclassDataFromInstruction() & ~2) |
-                               (SynchScope << 1));
-  }
-
-  /// Returns the ordering constraint on this cmpxchg.
-  AtomicOrdering getSuccessOrdering() const {
-    return AtomicOrdering((getSubclassDataFromInstruction() >> 2) & 7);
-  }
-
-  /// Returns the ordering constraint on this cmpxchg.
-  AtomicOrdering getFailureOrdering() const {
-    return AtomicOrdering((getSubclassDataFromInstruction() >> 5) & 7);
+  /// Returns the synchronization scope ID of this cmpxchg instruction.
+  SyncScope::ID getSyncScopeID() const {
+    return SSID;
   }
 
-  /// Returns whether this cmpxchg is atomic between threads or only within a
-  /// single thread.
-  SynchronizationScope getSynchScope() const {
-    return SynchronizationScope((getSubclassDataFromInstruction() & 2) >> 1);
+  /// Sets the synchronization scope ID of this cmpxchg instruction.
+  void setSyncScopeID(SyncScope::ID SSID) {
+    this->SSID = SSID;
   }
 
   Value *getPointerOperand() { return getOperand(0); }
@@ -654,6 +654,11 @@
   void setInstructionSubclassData(unsigned short D) {
     Instruction::setInstructionSubclassData(D);
   }
+
+  /// The synchronization scope ID of this cmpxchg instruction.  Not quite
+  /// enough room in SubClassData for everything, so synchronization scope ID
+  /// gets its own field.
+  SyncScope::ID SSID;
 };
 
 template <>
@@ -713,10 +718,10 @@
   };
 
   AtomicRMWInst(BinOp Operation, Value *Ptr, Value *Val,
-                AtomicOrdering Ordering, SynchronizationScope SynchScope,
+                AtomicOrdering Ordering, SyncScope::ID SSID,
                 Instruction *InsertBefore = nullptr);
   AtomicRMWInst(BinOp Operation, Value *Ptr, Value *Val,
-                AtomicOrdering Ordering, SynchronizationScope SynchScope,
+                AtomicOrdering Ordering, SyncScope::ID SSID,
                 BasicBlock *InsertAtEnd);
 
   // allocate space for exactly two operands
@@ -752,7 +757,12 @@
   /// Transparently provide more efficient getOperand methods.
   DECLARE_TRANSPARENT_OPERAND_ACCESSORS(Value);
 
-  /// Set the ordering constraint on this RMW.
+  /// Returns the ordering constraint of this rmw instruction.
+  AtomicOrdering getOrdering() const {
+    return AtomicOrdering((getSubclassDataFromInstruction() >> 2) & 7);
+  }
+
+  /// Sets the ordering constraint of this rmw instruction.
   void setOrdering(AtomicOrdering Ordering) {
     assert(Ordering != AtomicOrdering::NotAtomic &&
            "atomicrmw instructions can only be atomic.");
@@ -760,23 +770,14 @@
                                ((unsigned)Ordering << 2));
   }
 
-  /// Specify whether this RMW orders other operations with respect to all
-  /// concurrently executing threads, or only with respect to signal handlers
-  /// executing in the same thread.
-  void setSynchScope(SynchronizationScope SynchScope) {
-    setInstructionSubclassData((getSubclassDataFromInstruction() & ~2) |
-                               (SynchScope << 1));
+  /// Returns the synchronization scope ID of this rmw instruction.
+  SyncScope::ID getSyncScopeID() const {
+    return SSID;
   }
 
-  /// Returns the ordering constraint on this RMW.
-  AtomicOrdering getOrdering() const {
-    return AtomicOrdering((getSubclassDataFromInstruction() >> 2) & 7);
-  }
-
-  /// Returns whether this RMW is atomic between threads or only within a
-  /// single thread.
-  SynchronizationScope getSynchScope() const {
-    return SynchronizationScope((getSubclassDataFromInstruction() & 2) >> 1);
+  /// Sets the synchronization scope ID of this rmw instruction.
+  void setSyncScopeID(SyncScope::ID SSID) {
+    this->SSID = SSID;
   }
 
   Value *getPointerOperand() { return getOperand(0); }
@@ -801,13 +802,18 @@
 
 private:
   void Init(BinOp Operation, Value *Ptr, Value *Val,
-            AtomicOrdering Ordering, SynchronizationScope SynchScope);
+            AtomicOrdering Ordering, SyncScope::ID SSID);
 
   // Shadow Instruction::setInstructionSubclassData with a private forwarding
   // method so that subclasses cannot accidentally use it.
   void setInstructionSubclassData(unsigned short D) {
     Instruction::setInstructionSubclassData(D);
   }
+
+  /// The synchronization scope ID of this rmw instruction.  Not quite enough
+  /// room in SubClassData for everything, so synchronization scope ID gets its
+  /// own field.
+  SyncScope::ID SSID;
 };
 
 template <>
Index: include/llvm/IR/LLVMContext.h
===================================================================
--- include/llvm/IR/LLVMContext.h
+++ include/llvm/IR/LLVMContext.h
@@ -40,6 +40,24 @@
 class Output;
 } // end namespace yaml
 
+namespace SyncScope {
+
+typedef uint8_t ID;
+
+/// Known synchronization scope IDs, which always have the same value.  All
+/// synchronization scope IDs that LLVM has special knowledge of are listed
+/// here.  Additionally, this scheme allows LLVM to efficiently check for
+/// specific synchronization scope IDs without comparing strings.
+enum {
+  /// Synchronized with respect to signal handlers executing in the same thread.
+  SingleThread = 0,
+
+  /// Synchronized with respect to all concurrently executing threads.
+  System = 1
+};
+
+} // end namespace SyncScope
+
 /// This is an important class for using LLVM in a threaded context.  It
 /// (opaquely) owns and manages the core "global" data of LLVM's core
 /// infrastructure, including the type and constant uniquing tables.
@@ -109,6 +127,22 @@
   /// tag registered with an LLVMContext has an unique ID.
   uint32_t getOperandBundleTagID(StringRef Tag) const;
 
+  /// getSyncScopeID - Maps a synchronization scope name to its synchronization
+  /// scope ID.  Every synchronization scope registered with LLVMContext is
+  /// assigned a unique ID; the pre-defined ones always have the same fixed IDs.
+  /// The synchronization scope must be registered with LLVMContext prior to
+  /// calling this function.
+  SyncScope::ID getSyncScopeID(StringRef SSN) const;
+
+  /// getOrInsertSyncScopeID - Maps a synchronization scope name to its
+  /// synchronization scope ID, registering the name with LLVMContext if it has
+  /// not been registered before.  Every registered synchronization scope is
+  /// assigned a unique ID; the pre-defined ones always have the same fixed IDs.
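+  ///
+  /// For example, a frontend or target that defines a scope named "agent"
+  /// (an illustrative name, not a pre-defined one) can obtain its ID with
+  /// getOrInsertSyncScopeID("agent") and use it when creating atomic
+  /// instructions.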
+  SyncScope::ID getOrInsertSyncScopeID(StringRef SSN);
+
+  /// getSyncScopeNames - Populates the client supplied SmallVector with
+  /// synchronization scope names registered with LLVMContext.  Synchronization
+  /// scope names are ordered by increasing synchronization scope IDs.
+  void getSyncScopeNames(SmallVectorImpl<StringRef> &SSNs) const;
+
   /// Define the GC for a function
   void setGC(const Function &Fn, std::string GCName);
 
Index: lib/AsmParser/LLLexer.cpp
===================================================================
--- lib/AsmParser/LLLexer.cpp
+++ lib/AsmParser/LLLexer.cpp
@@ -542,7 +542,7 @@
   KEYWORD(release);
   KEYWORD(acq_rel);
   KEYWORD(seq_cst);
-  KEYWORD(singlethread);
+  KEYWORD(syncscope);
 
   KEYWORD(nnan);
   KEYWORD(ninf);
Index: lib/AsmParser/LLParser.h
===================================================================
--- lib/AsmParser/LLParser.h
+++ lib/AsmParser/LLParser.h
@@ -241,8 +241,9 @@
     bool ParseOptionalCallingConv(unsigned &CC);
     bool ParseOptionalAlignment(unsigned &Alignment);
     bool ParseOptionalDerefAttrBytes(lltok::Kind AttrKind, uint64_t &Bytes);
-    bool ParseScopeAndOrdering(bool isAtomic, SynchronizationScope &Scope,
+    bool ParseScopeAndOrdering(bool isAtomic, SyncScope::ID &SSID,
                                AtomicOrdering &Ordering);
+    bool ParseScope(SyncScope::ID &SSID);
     bool ParseOrdering(AtomicOrdering &Ordering);
     bool ParseOptionalStackAlignment(unsigned &Alignment);
     bool ParseOptionalCommaAlign(unsigned &Alignment, bool &AteExtraComma);
Index: lib/AsmParser/LLParser.cpp
===================================================================
--- lib/AsmParser/LLParser.cpp
+++ lib/AsmParser/LLParser.cpp
@@ -1905,20 +1905,42 @@
 }
 
 /// ParseScopeAndOrdering
-///   if isAtomic: ::= 'singlethread'? AtomicOrdering
+///   if isAtomic: ::= SyncScope? AtomicOrdering
 ///   else: ::=
 ///
 /// This sets Scope and Ordering to the parsed values.
-bool LLParser::ParseScopeAndOrdering(bool isAtomic, SynchronizationScope &Scope,
+bool LLParser::ParseScopeAndOrdering(bool isAtomic, SyncScope::ID &SSID,
                                      AtomicOrdering &Ordering) {
   if (!isAtomic)
     return false;
 
-  Scope = CrossThread;
-  if (EatIfPresent(lltok::kw_singlethread))
-    Scope = SingleThread;
+  return ParseScope(SSID) || ParseOrdering(Ordering);
+}
+
+/// ParseScope
+///   ::= syncscope("singlethread" | "<target scope>")?
+///
+/// This sets SSID to the ID of the parsed synchronization scope name, or to
+/// SyncScope::System if no syncscope is present.
+bool LLParser::ParseScope(SyncScope::ID &SSID) {
+  SSID = SyncScope::System;
+  if (EatIfPresent(lltok::kw_syncscope)) {
+    auto StartParenAt = Lex.getLoc();
+    if (!EatIfPresent(lltok::lparen))
+      return Error(StartParenAt, "Expected '(' in syncscope");
+
+    std::string SSN;
+    auto SSNAt = Lex.getLoc();
+    if (ParseStringConstant(SSN))
+      return Error(SSNAt, "Expected synchronization scope name");
 
-  return ParseOrdering(Ordering);
+    auto EndParenAt = Lex.getLoc();
+    if (!EatIfPresent(lltok::rparen))
+      return Error(EndParenAt, "Expected ')' in syncscope");
+
+    SSID = Context.getOrInsertSyncScopeID(SSN);
+  }
+
+  return false;
 }
 
 /// ParseOrdering
@@ -6086,7 +6108,7 @@
   bool AteExtraComma = false;
   bool isAtomic = false;
   AtomicOrdering Ordering = AtomicOrdering::NotAtomic;
-  SynchronizationScope Scope = CrossThread;
+  SyncScope::ID SSID = SyncScope::System;
 
   if (Lex.getKind() == lltok::kw_atomic) {
     isAtomic = true;
@@ -6104,7 +6126,7 @@
   if (ParseType(Ty) ||
       ParseToken(lltok::comma, "expected comma after load's type") ||
       ParseTypeAndValue(Val, Loc, PFS) ||
-      ParseScopeAndOrdering(isAtomic, Scope, Ordering) ||
+      ParseScopeAndOrdering(isAtomic, SSID, Ordering) ||
       ParseOptionalCommaAlign(Alignment, AteExtraComma))
     return true;
 
@@ -6120,7 +6142,7 @@
     return Error(ExplicitTypeLoc,
                  "explicit pointee type doesn't match operand's pointee type");
 
-  Inst = new LoadInst(Ty, Val, "", isVolatile, Alignment, Ordering, Scope);
+  Inst = new LoadInst(Ty, Val, "", isVolatile, Alignment, Ordering, SSID);
   return AteExtraComma ? InstExtraComma : InstNormal;
 }
 
@@ -6135,7 +6157,7 @@
   bool AteExtraComma = false;
   bool isAtomic = false;
   AtomicOrdering Ordering = AtomicOrdering::NotAtomic;
-  SynchronizationScope Scope = CrossThread;
+  SyncScope::ID SSID = SyncScope::System;
 
   if (Lex.getKind() == lltok::kw_atomic) {
     isAtomic = true;
@@ -6151,7 +6173,7 @@
   if (ParseTypeAndValue(Val, Loc, PFS) ||
       ParseToken(lltok::comma, "expected ',' after store operand") ||
       ParseTypeAndValue(Ptr, PtrLoc, PFS) ||
-      ParseScopeAndOrdering(isAtomic, Scope, Ordering) ||
+      ParseScopeAndOrdering(isAtomic, SSID, Ordering) ||
       ParseOptionalCommaAlign(Alignment, AteExtraComma))
     return true;
 
@@ -6167,7 +6189,7 @@
       Ordering == AtomicOrdering::AcquireRelease)
     return Error(Loc, "atomic store cannot use Acquire ordering");
 
-  Inst = new StoreInst(Val, Ptr, isVolatile, Alignment, Ordering, Scope);
+  Inst = new StoreInst(Val, Ptr, isVolatile, Alignment, Ordering, SSID);
   return AteExtraComma ? InstExtraComma : InstNormal;
 }
 
@@ -6179,7 +6201,7 @@
   bool AteExtraComma = false;
   AtomicOrdering SuccessOrdering = AtomicOrdering::NotAtomic;
   AtomicOrdering FailureOrdering = AtomicOrdering::NotAtomic;
-  SynchronizationScope Scope = CrossThread;
+  SyncScope::ID SSID = SyncScope::System;
   bool isVolatile = false;
   bool isWeak = false;
 
@@ -6194,7 +6216,7 @@
       ParseTypeAndValue(Cmp, CmpLoc, PFS) ||
       ParseToken(lltok::comma, "expected ',' after cmpxchg cmp operand") ||
       ParseTypeAndValue(New, NewLoc, PFS) ||
-      ParseScopeAndOrdering(true /*Always atomic*/, Scope, SuccessOrdering) ||
+      ParseScopeAndOrdering(true /*Always atomic*/, SSID, SuccessOrdering) ||
       ParseOrdering(FailureOrdering))
     return true;
 
@@ -6217,7 +6239,7 @@
   if (!New->getType()->isFirstClassType())
     return Error(NewLoc, "cmpxchg operand must be a first class value");
   AtomicCmpXchgInst *CXI = new AtomicCmpXchgInst(
-      Ptr, Cmp, New, SuccessOrdering, FailureOrdering, Scope);
+      Ptr, Cmp, New, SuccessOrdering, FailureOrdering, SSID);
   CXI->setVolatile(isVolatile);
   CXI->setWeak(isWeak);
   Inst = CXI;
@@ -6231,7 +6253,7 @@
   Value *Ptr, *Val; LocTy PtrLoc, ValLoc;
   bool AteExtraComma = false;
   AtomicOrdering Ordering = AtomicOrdering::NotAtomic;
-  SynchronizationScope Scope = CrossThread;
+  SyncScope::ID SSID = SyncScope::System;
   bool isVolatile = false;
   AtomicRMWInst::BinOp Operation;
 
@@ -6257,7 +6279,7 @@
   if (ParseTypeAndValue(Ptr, PtrLoc, PFS) ||
       ParseToken(lltok::comma, "expected ',' after atomicrmw address") ||
       ParseTypeAndValue(Val, ValLoc, PFS) ||
-      ParseScopeAndOrdering(true /*Always atomic*/, Scope, Ordering))
+      ParseScopeAndOrdering(true /*Always atomic*/, SSID, Ordering))
     return true;
 
   if (Ordering == AtomicOrdering::Unordered)
@@ -6274,7 +6296,7 @@
                          " integer");
 
   AtomicRMWInst *RMWI =
-    new AtomicRMWInst(Operation, Ptr, Val, Ordering, Scope);
+    new AtomicRMWInst(Operation, Ptr, Val, Ordering, SSID);
   RMWI->setVolatile(isVolatile);
   Inst = RMWI;
   return AteExtraComma ? InstExtraComma : InstNormal;
@@ -6284,8 +6306,8 @@
 ///   ::= 'fence' 'singlethread'? AtomicOrdering
 int LLParser::ParseFence(Instruction *&Inst, PerFunctionState &PFS) {
   AtomicOrdering Ordering = AtomicOrdering::NotAtomic;
-  SynchronizationScope Scope = CrossThread;
-  if (ParseScopeAndOrdering(true /*Always atomic*/, Scope, Ordering))
+  SyncScope::ID SSID = SyncScope::System;
+  if (ParseScopeAndOrdering(true /*Always atomic*/, SSID, Ordering))
     return true;
 
   if (Ordering == AtomicOrdering::Unordered)
@@ -6293,7 +6315,7 @@
   if (Ordering == AtomicOrdering::Monotonic)
     return TokError("fence cannot be monotonic");
 
-  Inst = new FenceInst(Context, Ordering, Scope);
+  Inst = new FenceInst(Context, Ordering, SSID);
   return InstNormal;
 }
 
Index: lib/AsmParser/LLToken.h
===================================================================
--- lib/AsmParser/LLToken.h
+++ lib/AsmParser/LLToken.h
@@ -93,7 +93,7 @@
   kw_release,
   kw_acq_rel,
   kw_seq_cst,
-  kw_singlethread,
+  kw_syncscope,
   kw_nnan,
   kw_ninf,
   kw_nsz,
Index: lib/Bitcode/Reader/BitcodeReader.cpp
===================================================================
--- lib/Bitcode/Reader/BitcodeReader.cpp
+++ lib/Bitcode/Reader/BitcodeReader.cpp
@@ -520,6 +520,7 @@
   TBAAVerifier TBAAVerifyHelper;
 
   std::vector<std::string> BundleTags;
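+  /// Synchronization scope IDs for the names read from the SYNC_SCOPE_NAMES
+  /// block, in the order in which the names appear in the block.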
+  SmallVector<SyncScope::ID, 8> SSIDs;
 
 public:
   BitcodeReader(BitstreamCursor Stream, StringRef Strtab,
@@ -655,6 +656,7 @@
   Error parseTypeTable();
   Error parseTypeTableBody();
   Error parseOperandBundleTags();
+  Error parseSyncScopeNames();
 
   Expected<Value *> recordValue(SmallVectorImpl<uint64_t> &Record,
                                 unsigned NameIndex, Triple &TT);
@@ -675,6 +677,8 @@
   Error findFunctionInStream(
       Function *F,
       DenseMap<Function *, uint64_t>::iterator DeferredFunctionInfoIterator);
+
+  SyncScope::ID getDecodedSyncScopeID(unsigned Val);
 };
 
 /// Class to manage reading and parsing function summary index bitcode
@@ -1004,14 +1008,6 @@
   }
 }
 
-static SynchronizationScope getDecodedSynchScope(unsigned Val) {
-  switch (Val) {
-  case bitc::SYNCHSCOPE_SINGLETHREAD: return SingleThread;
-  default: // Map unknown scopes to cross-thread.
-  case bitc::SYNCHSCOPE_CROSSTHREAD: return CrossThread;
-  }
-}
-
 static Comdat::SelectionKind getDecodedComdatSelectionKind(unsigned Val) {
   switch (Val) {
   default: // Map unknown selection kinds to any.
@@ -1751,6 +1747,42 @@
   }
 }
 
+Error BitcodeReader::parseSyncScopeNames() {
+  if (Stream.EnterSubBlock(bitc::SYNC_SCOPE_NAMES_BLOCK_ID))
+    return error("Invalid record");
+
+  if (!SSIDs.empty())
+    return error("Invalid multiple blocks");
+
+  SmallVector<uint64_t, 64> Record;
+  while (true) {
+    BitstreamEntry Entry = Stream.advanceSkippingSubblocks();
+    switch (Entry.Kind) {
+    case BitstreamEntry::SubBlock: // Handled for us already.
+    case BitstreamEntry::Error:
+      return error("Malformed block");
+    case BitstreamEntry::EndBlock:
+      return Error::success();
+    case BitstreamEntry::Record:
+      // The interesting case.
+      break;
+    }
+
+    // Synchronization scope names are implicitly mapped to synchronization
+    // scope IDs by their order.
+
+    if (Stream.readRecord(Entry.ID, Record) != bitc::SYNC_SCOPE_NAME)
+      return error("Invalid record");
+
+    SmallString<16> SSN;
+    if (convertToString(Record, 0, SSN))
+      return error("Invalid record");
+
+    SSIDs.push_back(Context.getOrInsertSyncScopeID(SSN));
+    Record.clear();
+  }
+}
+
 /// Associate a value with its name from the given index in the provided record.
 Expected<Value *> BitcodeReader::recordValue(SmallVectorImpl<uint64_t> &Record,
                                              unsigned NameIndex, Triple &TT) {
@@ -3123,6 +3155,10 @@
         if (Error Err = parseOperandBundleTags())
           return Err;
         break;
+      case bitc::SYNC_SCOPE_NAMES_BLOCK_ID:
+        if (Error Err = parseSyncScopeNames())
+          return Err;
+        break;
       }
       continue;
 
@@ -4195,7 +4231,7 @@
       break;
     }
     case bitc::FUNC_CODE_INST_LOADATOMIC: {
-       // LOADATOMIC: [opty, op, align, vol, ordering, synchscope]
+       // LOADATOMIC: [opty, op, align, vol, ordering, ssid]
       unsigned OpNum = 0;
       Value *Op;
       if (getValueTypePair(Record, OpNum, NextValueNo, Op) ||
@@ -4217,12 +4253,12 @@
         return error("Invalid record");
       if (Ordering != AtomicOrdering::NotAtomic && Record[OpNum] == 0)
         return error("Invalid record");
-      SynchronizationScope SynchScope = getDecodedSynchScope(Record[OpNum + 3]);
+      SyncScope::ID SSID = getDecodedSyncScopeID(Record[OpNum + 3]);
 
       unsigned Align;
       if (Error Err = parseAlignmentValue(Record[OpNum], Align))
         return Err;
-      I = new LoadInst(Op, "", Record[OpNum+1], Align, Ordering, SynchScope);
+      I = new LoadInst(Op, "", Record[OpNum+1], Align, Ordering, SSID);
 
       InstructionList.push_back(I);
       break;
@@ -4251,7 +4287,7 @@
     }
     case bitc::FUNC_CODE_INST_STOREATOMIC:
     case bitc::FUNC_CODE_INST_STOREATOMIC_OLD: {
-      // STOREATOMIC: [ptrty, ptr, val, align, vol, ordering, synchscope]
+      // STOREATOMIC: [ptrty, ptr, val, align, vol, ordering, ssid]
       unsigned OpNum = 0;
       Value *Val, *Ptr;
       if (getValueTypePair(Record, OpNum, NextValueNo, Ptr) ||
@@ -4271,20 +4307,20 @@
           Ordering == AtomicOrdering::Acquire ||
           Ordering == AtomicOrdering::AcquireRelease)
         return error("Invalid record");
-      SynchronizationScope SynchScope = getDecodedSynchScope(Record[OpNum + 3]);
+      SyncScope::ID SSID = getDecodedSyncScopeID(Record[OpNum + 3]);
       if (Ordering != AtomicOrdering::NotAtomic && Record[OpNum] == 0)
         return error("Invalid record");
 
       unsigned Align;
       if (Error Err = parseAlignmentValue(Record[OpNum], Align))
         return Err;
-      I = new StoreInst(Val, Ptr, Record[OpNum+1], Align, Ordering, SynchScope);
+      I = new StoreInst(Val, Ptr, Record[OpNum+1], Align, Ordering, SSID);
       InstructionList.push_back(I);
       break;
     }
     case bitc::FUNC_CODE_INST_CMPXCHG_OLD:
     case bitc::FUNC_CODE_INST_CMPXCHG: {
-      // CMPXCHG:[ptrty, ptr, cmp, new, vol, successordering, synchscope,
+      // CMPXCHG:[ptrty, ptr, cmp, new, vol, successordering, ssid,
       //          failureordering?, isweak?]
       unsigned OpNum = 0;
       Value *Ptr, *Cmp, *New;
@@ -4301,7 +4337,7 @@
       if (SuccessOrdering == AtomicOrdering::NotAtomic ||
           SuccessOrdering == AtomicOrdering::Unordered)
         return error("Invalid record");
-      SynchronizationScope SynchScope = getDecodedSynchScope(Record[OpNum + 2]);
+      SyncScope::ID SSID = getDecodedSyncScopeID(Record[OpNum + 2]);
 
       if (Error Err = typeCheckLoadStoreInst(Cmp->getType(), Ptr->getType()))
         return Err;
@@ -4313,7 +4349,7 @@
         FailureOrdering = getDecodedOrdering(Record[OpNum + 3]);
 
       I = new AtomicCmpXchgInst(Ptr, Cmp, New, SuccessOrdering, FailureOrdering,
-                                SynchScope);
+                                SSID);
       cast<AtomicCmpXchgInst>(I)->setVolatile(Record[OpNum]);
 
       if (Record.size() < 8) {
@@ -4330,7 +4366,7 @@
       break;
     }
     case bitc::FUNC_CODE_INST_ATOMICRMW: {
-      // ATOMICRMW:[ptrty, ptr, val, op, vol, ordering, synchscope]
+      // ATOMICRMW:[ptrty, ptr, val, op, vol, ordering, ssid]
       unsigned OpNum = 0;
       Value *Ptr, *Val;
       if (getValueTypePair(Record, OpNum, NextValueNo, Ptr) ||
@@ -4347,13 +4383,13 @@
       if (Ordering == AtomicOrdering::NotAtomic ||
           Ordering == AtomicOrdering::Unordered)
         return error("Invalid record");
-      SynchronizationScope SynchScope = getDecodedSynchScope(Record[OpNum + 3]);
-      I = new AtomicRMWInst(Operation, Ptr, Val, Ordering, SynchScope);
+      SyncScope::ID SSID = getDecodedSyncScopeID(Record[OpNum + 3]);
+      I = new AtomicRMWInst(Operation, Ptr, Val, Ordering, SSID);
       cast<AtomicRMWInst>(I)->setVolatile(Record[OpNum+1]);
       InstructionList.push_back(I);
       break;
     }
-    case bitc::FUNC_CODE_INST_FENCE: { // FENCE:[ordering, synchscope]
+    case bitc::FUNC_CODE_INST_FENCE: { // FENCE:[ordering, ssid]
       if (2 != Record.size())
         return error("Invalid record");
       AtomicOrdering Ordering = getDecodedOrdering(Record[0]);
@@ -4361,8 +4397,8 @@
           Ordering == AtomicOrdering::Unordered ||
           Ordering == AtomicOrdering::Monotonic)
         return error("Invalid record");
-      SynchronizationScope SynchScope = getDecodedSynchScope(Record[1]);
-      I = new FenceInst(Context, Ordering, SynchScope);
+      SyncScope::ID SSID = getDecodedSyncScopeID(Record[1]);
+      I = new FenceInst(Context, Ordering, SSID);
       InstructionList.push_back(I);
       break;
     }
@@ -4558,6 +4594,12 @@
   return Error::success();
 }
 
+SyncScope::ID BitcodeReader::getDecodedSyncScopeID(unsigned Val) {
+  if (Val == SyncScope::SingleThread || Val == SyncScope::System)
+    return SyncScope::ID(Val);
+  // Map unknown or out-of-range scope values to the system scope, mirroring
+  // what the old getDecodedSynchScope did for unknown values.
+  return Val < SSIDs.size() ? SSIDs[Val] : SyncScope::System;
+}
+
 //===----------------------------------------------------------------------===//
 // GVMaterializer implementation
 //===----------------------------------------------------------------------===//
Index: lib/Bitcode/Writer/BitcodeWriter.cpp
===================================================================
--- lib/Bitcode/Writer/BitcodeWriter.cpp
+++ lib/Bitcode/Writer/BitcodeWriter.cpp
@@ -259,6 +259,7 @@
                                     const GlobalObject &GO);
   void writeModuleMetadataKinds();
   void writeOperandBundleTags();
+  void writeSyncScopeNames();
   void writeConstants(unsigned FirstVal, unsigned LastVal, bool isGlobal);
   void writeModuleConstants();
   bool pushValueAndType(const Value *V, unsigned InstID,
@@ -309,6 +310,10 @@
     return VE.getValueID(VI.getValue());
   }
   std::map<GlobalValue::GUID, unsigned> &valueIds() { return GUIDToValueIdMap; }
+
+  unsigned getEncodedSyncScopeID(SyncScope::ID SSID) {
+    return unsigned(SSID);
+  }
 };
 
 /// Class to manage the bitcode writing for a combined index.
@@ -469,14 +474,6 @@
   llvm_unreachable("Invalid ordering");
 }
 
-static unsigned getEncodedSynchScope(SynchronizationScope SynchScope) {
-  switch (SynchScope) {
-  case SingleThread: return bitc::SYNCHSCOPE_SINGLETHREAD;
-  case CrossThread: return bitc::SYNCHSCOPE_CROSSTHREAD;
-  }
-  llvm_unreachable("Invalid synch scope");
-}
-
 static void writeStringRecord(BitstreamWriter &Stream, unsigned Code,
                               StringRef Str, unsigned AbbrevToUse) {
   SmallVector<unsigned, 64> Vals;
@@ -2016,6 +2013,24 @@
   Stream.ExitBlock();
 }
 
+void ModuleBitcodeWriter::writeSyncScopeNames() {
+  SmallVector<StringRef, 8> SSNs;
+  M.getContext().getSyncScopeNames(SSNs);
+  if (SSNs.empty())
+    return;
+
+  Stream.EnterSubblock(bitc::SYNC_SCOPE_NAMES_BLOCK_ID, 2);
+
+  SmallVector<uint64_t, 64> Record;
+  for (auto SSN : SSNs) {
+    Record.append(SSN.begin(), SSN.end());
+    Stream.EmitRecord(bitc::SYNC_SCOPE_NAME, Record, 0);
+    Record.clear();
+  }
+
+  Stream.ExitBlock();
+}
+
 static void emitSignedInt64(SmallVectorImpl<uint64_t> &Vals, uint64_t V) {
   if ((int64_t)V >= 0)
     Vals.push_back(V << 1);
@@ -2632,7 +2647,7 @@
     Vals.push_back(cast<LoadInst>(I).isVolatile());
     if (cast<LoadInst>(I).isAtomic()) {
       Vals.push_back(getEncodedOrdering(cast<LoadInst>(I).getOrdering()));
-      Vals.push_back(getEncodedSynchScope(cast<LoadInst>(I).getSynchScope()));
+      Vals.push_back(getEncodedSyncScopeID(cast<LoadInst>(I).getSyncScopeID()));
     }
     break;
   case Instruction::Store:
@@ -2646,7 +2661,8 @@
     Vals.push_back(cast<StoreInst>(I).isVolatile());
     if (cast<StoreInst>(I).isAtomic()) {
       Vals.push_back(getEncodedOrdering(cast<StoreInst>(I).getOrdering()));
-      Vals.push_back(getEncodedSynchScope(cast<StoreInst>(I).getSynchScope()));
+      Vals.push_back(
+          getEncodedSyncScopeID(cast<StoreInst>(I).getSyncScopeID()));
     }
     break;
   case Instruction::AtomicCmpXchg:
@@ -2658,7 +2674,7 @@
     Vals.push_back(
         getEncodedOrdering(cast<AtomicCmpXchgInst>(I).getSuccessOrdering()));
     Vals.push_back(
-        getEncodedSynchScope(cast<AtomicCmpXchgInst>(I).getSynchScope()));
+        getEncodedSyncScopeID(cast<AtomicCmpXchgInst>(I).getSyncScopeID()));
     Vals.push_back(
         getEncodedOrdering(cast<AtomicCmpXchgInst>(I).getFailureOrdering()));
     Vals.push_back(cast<AtomicCmpXchgInst>(I).isWeak());
@@ -2672,12 +2688,12 @@
     Vals.push_back(cast<AtomicRMWInst>(I).isVolatile());
     Vals.push_back(getEncodedOrdering(cast<AtomicRMWInst>(I).getOrdering()));
     Vals.push_back(
-        getEncodedSynchScope(cast<AtomicRMWInst>(I).getSynchScope()));
+        getEncodedSyncScopeID(cast<AtomicRMWInst>(I).getSyncScopeID()));
     break;
   case Instruction::Fence:
     Code = bitc::FUNC_CODE_INST_FENCE;
     Vals.push_back(getEncodedOrdering(cast<FenceInst>(I).getOrdering()));
-    Vals.push_back(getEncodedSynchScope(cast<FenceInst>(I).getSynchScope()));
+    Vals.push_back(getEncodedSyncScopeID(cast<FenceInst>(I).getSyncScopeID()));
     break;
   case Instruction::Call: {
     const CallInst &CI = cast<CallInst>(I);
@@ -3690,6 +3706,7 @@
     writeUseListBlock(nullptr);
 
   writeOperandBundleTags();
+  writeSyncScopeNames();
 
   // Emit function bodies.
   DenseMap<const Function *, uint64_t> FunctionToBitcodeIndex;
Index: lib/CodeGen/AtomicExpandPass.cpp
===================================================================
--- lib/CodeGen/AtomicExpandPass.cpp
+++ lib/CodeGen/AtomicExpandPass.cpp
@@ -368,7 +368,7 @@
   auto *NewLI = Builder.CreateLoad(NewAddr);
   NewLI->setAlignment(LI->getAlignment());
   NewLI->setVolatile(LI->isVolatile());
-  NewLI->setAtomic(LI->getOrdering(), LI->getSynchScope());
+  NewLI->setAtomic(LI->getOrdering(), LI->getSyncScopeID());
   DEBUG(dbgs() << "Replaced " << *LI << " with " << *NewLI << "\n");
   
   Value *NewVal = Builder.CreateBitCast(NewLI, LI->getType());
@@ -451,7 +451,7 @@
   StoreInst *NewSI = Builder.CreateStore(NewVal, NewAddr);
   NewSI->setAlignment(SI->getAlignment());
   NewSI->setVolatile(SI->isVolatile());
-  NewSI->setAtomic(SI->getOrdering(), SI->getSynchScope());
+  NewSI->setAtomic(SI->getOrdering(), SI->getSyncScopeID());
   DEBUG(dbgs() << "Replaced " << *SI << " with " << *NewSI << "\n");
   SI->eraseFromParent();
   return NewSI;
@@ -808,7 +808,7 @@
   Value *FullWord_Cmp = Builder.CreateOr(Loaded_MaskOut, Cmp_Shifted);
   AtomicCmpXchgInst *NewCI = Builder.CreateAtomicCmpXchg(
       PMV.AlignedAddr, FullWord_Cmp, FullWord_NewVal, CI->getSuccessOrdering(),
-      CI->getFailureOrdering(), CI->getSynchScope());
+      CI->getFailureOrdering(), CI->getSyncScopeID());
   NewCI->setVolatile(CI->isVolatile());
   // When we're building a strong cmpxchg, we need a loop, so you
   // might think we could use a weak cmpxchg inside. But, using strong
@@ -931,7 +931,7 @@
   auto *NewCI = Builder.CreateAtomicCmpXchg(NewAddr, NewCmp, NewNewVal,
                                             CI->getSuccessOrdering(),
                                             CI->getFailureOrdering(),
-                                            CI->getSynchScope());
+                                            CI->getSyncScopeID());
   NewCI->setVolatile(CI->isVolatile());
   NewCI->setWeak(CI->isWeak());
   DEBUG(dbgs() << "Replaced " << *CI << " with " << *NewCI << "\n");
Index: lib/CodeGen/GlobalISel/IRTranslator.cpp
===================================================================
--- lib/CodeGen/GlobalISel/IRTranslator.cpp
+++ lib/CodeGen/GlobalISel/IRTranslator.cpp
@@ -311,7 +311,7 @@
       *MF->getMachineMemOperand(MachinePointerInfo(LI.getPointerOperand()),
                                 Flags, DL->getTypeStoreSize(LI.getType()),
                                 getMemOpAlignment(LI), AAMDNodes(), nullptr,
-                                LI.getSynchScope(), LI.getOrdering()));
+                                LI.getSyncScopeID(), LI.getOrdering()));
   return true;
 }
 
@@ -329,7 +329,7 @@
       *MF->getMachineMemOperand(
           MachinePointerInfo(SI.getPointerOperand()), Flags,
           DL->getTypeStoreSize(SI.getValueOperand()->getType()),
-          getMemOpAlignment(SI), AAMDNodes(), nullptr, SI.getSynchScope(),
+          getMemOpAlignment(SI), AAMDNodes(), nullptr, SI.getSyncScopeID(),
           SI.getOrdering()));
   return true;
 }
Index: lib/CodeGen/MIRParser/MIParser.cpp
===================================================================
--- lib/CodeGen/MIRParser/MIParser.cpp
+++ lib/CodeGen/MIRParser/MIParser.cpp
@@ -189,6 +189,7 @@
   bool parseMemoryOperandFlag(MachineMemOperand::Flags &Flags);
   bool parseMemoryPseudoSourceValue(const PseudoSourceValue *&PSV);
   bool parseMachinePointerInfo(MachinePointerInfo &Dest);
+  bool parseOptionalScope(LLVMContext &Context, SyncScope::ID &SSID);
   bool parseOptionalAtomicOrdering(AtomicOrdering &Order);
   bool parseMachineMemoryOperand(MachineMemOperand *&Dest);
 
@@ -2071,6 +2072,26 @@
   return false;
 }
 
+bool MIParser::parseOptionalScope(LLVMContext &Context,
+                                  SyncScope::ID &SSID) {
+  SSID = SyncScope::System;
+  if (Token.is(MIToken::Identifier) && Token.stringValue() == "syncscope") {
+    lex();
+    if (!consumeIfPresent(MIToken::lparen))
+      return error("expected '(' in syncscope");
+    if (!Token.is(MIToken::Identifier))
+      return error("expected identifier in syncscope");
+
+    SSID = Context.getOrInsertSyncScopeID(Token.stringValue());
+
+    lex();
+    if (!consumeIfPresent(MIToken::rparen))
+      return error("expected ')' in syncscope");
+  }
+
+  return false;
+}
+
 bool MIParser::parseOptionalAtomicOrdering(AtomicOrdering &Order) {
   Order = AtomicOrdering::NotAtomic;
   if (Token.isNot(MIToken::Identifier))
@@ -2110,12 +2131,10 @@
     Flags |= MachineMemOperand::MOStore;
   lex();
 
-  // Optional "singlethread" scope.
-  SynchronizationScope Scope = SynchronizationScope::CrossThread;
-  if (Token.is(MIToken::Identifier) && Token.stringValue() == "singlethread") {
-    Scope = SynchronizationScope::SingleThread;
-    lex();
-  }
+  // Optional synchronization scope.
+  SyncScope::ID SSID;
+  if (parseOptionalScope(MF.getFunction()->getContext(), SSID))
+    return true;
 
   // Up to two atomic orderings (cmpxchg provides guarantees on failure).
   AtomicOrdering Order, FailureOrder;
@@ -2180,7 +2199,7 @@
   if (expectAndConsume(MIToken::rparen))
     return true;
   Dest = MF.getMachineMemOperand(Ptr, Flags, Size, BaseAlignment, AAInfo, Range,
-                                 Scope, Order, FailureOrder);
+                                 SSID, Order, FailureOrder);
   return false;
 }
 
Index: lib/CodeGen/MIRPrinter.cpp
===================================================================
--- lib/CodeGen/MIRPrinter.cpp
+++ lib/CodeGen/MIRPrinter.cpp
@@ -104,6 +104,8 @@
   ModuleSlotTracker &MST;
   const DenseMap<const uint32_t *, unsigned> &RegisterMaskIds;
   const DenseMap<int, FrameIndexOperand> &StackObjectOperandMapping;
+  /// Synchronization scope names registered with LLVMContext.
+  SmallVector<StringRef, 8> SSNs;
 
 public:
   MIPrinter(raw_ostream &OS, ModuleSlotTracker &MST,
@@ -124,7 +126,8 @@
   void print(const MachineOperand &Op, const TargetRegisterInfo *TRI,
              unsigned I, bool ShouldPrintRegisterTies,
              LLT TypeToPrint, bool IsDef = false);
-  void print(const MachineMemOperand &Op);
+  void print(const LLVMContext &Context, const MachineMemOperand &Op);
+  void printSyncScope(const LLVMContext &Context, SyncScope::ID SSID);
 
   void print(const MCCFIInstruction &CFI, const TargetRegisterInfo *TRI);
 };
@@ -634,11 +637,13 @@
 
   if (!MI.memoperands_empty()) {
     OS << " :: ";
+    const LLVMContext &Context =
+        MI.getParent()->getParent()->getFunction()->getContext();
     bool NeedComma = false;
     for (const auto *Op : MI.memoperands()) {
       if (NeedComma)
         OS << ", ";
-      print(*Op);
+      print(Context, *Op);
       NeedComma = true;
     }
   }
@@ -929,7 +934,7 @@
   }
 }
 
-void MIPrinter::print(const MachineMemOperand &Op) {
+void MIPrinter::print(const LLVMContext &Context, const MachineMemOperand &Op) {
   OS << '(';
   // TODO: Print operand's target specific flags.
   if (Op.isVolatile())
@@ -947,8 +952,7 @@
     OS << "store ";
   }
 
-  if (Op.getSynchScope() == SynchronizationScope::SingleThread)
-    OS << "singlethread ";
+  printSyncScope(Context, Op.getSyncScopeID());
 
   if (Op.getOrdering() != AtomicOrdering::NotAtomic)
     OS << toIRString(Op.getOrdering()) << ' ';
@@ -1017,6 +1021,20 @@
   OS << ')';
 }
 
+void MIPrinter::printSyncScope(const LLVMContext &Context, SyncScope::ID SSID) {
+  switch (SSID) {
+  case SyncScope::System: {
+    break;
+  }
+  default: {
+    if (SSNs.empty())
+      Context.getSyncScopeNames(SSNs);
+    OS << "syncscope(" << SSNs[SSID] << ") ";
+    break;
+  }
+  }
+}
+
 static void printCFIRegister(unsigned DwarfReg, raw_ostream &OS,
                              const TargetRegisterInfo *TRI) {
   int Reg = TRI->getLLVMRegNum(DwarfReg, true);
Index: lib/CodeGen/MachineFunction.cpp
===================================================================
--- lib/CodeGen/MachineFunction.cpp
+++ lib/CodeGen/MachineFunction.cpp
@@ -308,11 +308,11 @@
 MachineMemOperand *MachineFunction::getMachineMemOperand(
     MachinePointerInfo PtrInfo, MachineMemOperand::Flags f, uint64_t s,
     unsigned base_alignment, const AAMDNodes &AAInfo, const MDNode *Ranges,
-    SynchronizationScope SynchScope, AtomicOrdering Ordering,
+    SyncScope::ID SSID, AtomicOrdering Ordering,
     AtomicOrdering FailureOrdering) {
   return new (Allocator)
       MachineMemOperand(PtrInfo, f, s, base_alignment, AAInfo, Ranges,
-                        SynchScope, Ordering, FailureOrdering);
+                        SSID, Ordering, FailureOrdering);
 }
 
 MachineMemOperand *
@@ -323,13 +323,13 @@
                MachineMemOperand(MachinePointerInfo(MMO->getValue(),
                                                     MMO->getOffset()+Offset),
                                  MMO->getFlags(), Size, MMO->getBaseAlignment(),
-                                 AAMDNodes(), nullptr, MMO->getSynchScope(),
+                                 AAMDNodes(), nullptr, MMO->getSyncScopeID(),
                                  MMO->getOrdering(), MMO->getFailureOrdering());
   return new (Allocator)
              MachineMemOperand(MachinePointerInfo(MMO->getPseudoValue(),
                                                   MMO->getOffset()+Offset),
                                MMO->getFlags(), Size, MMO->getBaseAlignment(),
-                               AAMDNodes(), nullptr, MMO->getSynchScope(),
+                               AAMDNodes(), nullptr, MMO->getSyncScopeID(),
                                MMO->getOrdering(), MMO->getFailureOrdering());
 }
 
@@ -362,7 +362,7 @@
                                (*I)->getFlags() & ~MachineMemOperand::MOStore,
                                (*I)->getSize(), (*I)->getBaseAlignment(),
                                (*I)->getAAInfo(), nullptr,
-                               (*I)->getSynchScope(), (*I)->getOrdering(),
+                               (*I)->getSyncScopeID(), (*I)->getOrdering(),
                                (*I)->getFailureOrdering());
         Result[Index] = JustLoad;
       }
@@ -396,7 +396,7 @@
                                (*I)->getFlags() & ~MachineMemOperand::MOLoad,
                                (*I)->getSize(), (*I)->getBaseAlignment(),
                                (*I)->getAAInfo(), nullptr,
-                               (*I)->getSynchScope(), (*I)->getOrdering(),
+                               (*I)->getSyncScopeID(), (*I)->getOrdering(),
                                (*I)->getFailureOrdering());
         Result[Index] = JustStore;
       }
Index: lib/CodeGen/MachineInstr.cpp
===================================================================
--- lib/CodeGen/MachineInstr.cpp
+++ lib/CodeGen/MachineInstr.cpp
@@ -563,7 +563,7 @@
                                      uint64_t s, unsigned int a,
                                      const AAMDNodes &AAInfo,
                                      const MDNode *Ranges,
-                                     SynchronizationScope SynchScope,
+                                     SyncScope::ID SSID,
                                      AtomicOrdering Ordering,
                                      AtomicOrdering FailureOrdering)
     : PtrInfo(ptrinfo), Size(s), FlagVals(f), BaseAlignLog2(Log2_32(a) + 1),
@@ -574,8 +574,8 @@
   assert(getBaseAlignment() == a && "Alignment is not a power of 2!");
   assert((isLoad() || isStore()) && "Not a load/store!");
 
-  AtomicInfo.SynchScope = static_cast<unsigned>(SynchScope);
-  assert(getSynchScope() == SynchScope && "Value truncated");
+  AtomicInfo.SSID = static_cast<unsigned>(SSID);
+  assert(getSyncScopeID() == SSID && "Value truncated");
   AtomicInfo.Ordering = static_cast<unsigned>(Ordering);
   assert(getOrdering() == Ordering && "Value truncated");
   AtomicInfo.FailureOrdering = static_cast<unsigned>(FailureOrdering);
Index: lib/CodeGen/SelectionDAG/SelectionDAG.cpp
===================================================================
--- lib/CodeGen/SelectionDAG/SelectionDAG.cpp
+++ lib/CodeGen/SelectionDAG/SelectionDAG.cpp
@@ -5351,7 +5351,7 @@
     unsigned Opcode, const SDLoc &dl, EVT MemVT, SDVTList VTs, SDValue Chain,
     SDValue Ptr, SDValue Cmp, SDValue Swp, MachinePointerInfo PtrInfo,
     unsigned Alignment, AtomicOrdering SuccessOrdering,
-    AtomicOrdering FailureOrdering, SynchronizationScope SynchScope) {
+    AtomicOrdering FailureOrdering, SyncScope::ID SSID) {
   assert(Opcode == ISD::ATOMIC_CMP_SWAP ||
          Opcode == ISD::ATOMIC_CMP_SWAP_WITH_SUCCESS);
   assert(Cmp.getValueType() == Swp.getValueType() && "Invalid Atomic Op Types");
@@ -5367,7 +5367,7 @@
                MachineMemOperand::MOStore;
   MachineMemOperand *MMO =
     MF.getMachineMemOperand(PtrInfo, Flags, MemVT.getStoreSize(), Alignment,
-                            AAMDNodes(), nullptr, SynchScope, SuccessOrdering,
+                            AAMDNodes(), nullptr, SSID, SuccessOrdering,
                             FailureOrdering);
 
   return getAtomicCmpSwap(Opcode, dl, MemVT, VTs, Chain, Ptr, Cmp, Swp, MMO);
@@ -5389,7 +5389,7 @@
                                 SDValue Chain, SDValue Ptr, SDValue Val,
                                 const Value *PtrVal, unsigned Alignment,
                                 AtomicOrdering Ordering,
-                                SynchronizationScope SynchScope) {
+                                SyncScope::ID SSID) {
   if (Alignment == 0)  // Ensure that codegen never sees alignment 0
     Alignment = getEVTAlignment(MemVT);
 
@@ -5409,7 +5409,7 @@
   MachineMemOperand *MMO =
     MF.getMachineMemOperand(MachinePointerInfo(PtrVal), Flags,
                             MemVT.getStoreSize(), Alignment, AAMDNodes(),
-                            nullptr, SynchScope, Ordering);
+                            nullptr, SSID, Ordering);
 
   return getAtomic(Opcode, dl, MemVT, Chain, Ptr, Val, MMO);
 }
Index: lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
===================================================================
--- lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
+++ lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
@@ -3893,7 +3893,7 @@
   SDLoc dl = getCurSDLoc();
   AtomicOrdering SuccessOrder = I.getSuccessOrdering();
   AtomicOrdering FailureOrder = I.getFailureOrdering();
-  SynchronizationScope Scope = I.getSynchScope();
+  SyncScope::ID SSID = I.getSyncScopeID();
 
   SDValue InChain = getRoot();
 
@@ -3903,7 +3903,7 @@
       ISD::ATOMIC_CMP_SWAP_WITH_SUCCESS, dl, MemVT, VTs, InChain,
       getValue(I.getPointerOperand()), getValue(I.getCompareOperand()),
       getValue(I.getNewValOperand()), MachinePointerInfo(I.getPointerOperand()),
-      /*Alignment=*/ 0, SuccessOrder, FailureOrder, Scope);
+      /*Alignment=*/ 0, SuccessOrder, FailureOrder, SSID);
 
   SDValue OutChain = L.getValue(2);
 
@@ -3929,7 +3929,7 @@
   case AtomicRMWInst::UMin: NT = ISD::ATOMIC_LOAD_UMIN; break;
   }
   AtomicOrdering Order = I.getOrdering();
-  SynchronizationScope Scope = I.getSynchScope();
+  SyncScope::ID SSID = I.getSyncScopeID();
 
   SDValue InChain = getRoot();
 
@@ -3940,7 +3940,7 @@
                   getValue(I.getPointerOperand()),
                   getValue(I.getValOperand()),
                   I.getPointerOperand(),
-                  /* Alignment=*/ 0, Order, Scope);
+                  /* Alignment=*/ 0, Order, SSID);
 
   SDValue OutChain = L.getValue(1);
 
@@ -3955,7 +3955,7 @@
   Ops[0] = getRoot();
   Ops[1] = DAG.getConstant((unsigned)I.getOrdering(), dl,
                            TLI.getFenceOperandTy(DAG.getDataLayout()));
-  Ops[2] = DAG.getConstant(I.getSynchScope(), dl,
+  Ops[2] = DAG.getConstant(I.getSyncScopeID(), dl,
                            TLI.getFenceOperandTy(DAG.getDataLayout()));
   DAG.setRoot(DAG.getNode(ISD::ATOMIC_FENCE, dl, MVT::Other, Ops));
 }
@@ -3963,7 +3963,7 @@
 void SelectionDAGBuilder::visitAtomicLoad(const LoadInst &I) {
   SDLoc dl = getCurSDLoc();
   AtomicOrdering Order = I.getOrdering();
-  SynchronizationScope Scope = I.getSynchScope();
+  SyncScope::ID SSID = I.getSyncScopeID();
 
   SDValue InChain = getRoot();
 
@@ -3981,7 +3981,7 @@
                            VT.getStoreSize(),
                            I.getAlignment() ? I.getAlignment() :
                                               DAG.getEVTAlignment(VT),
-                           AAMDNodes(), nullptr, Scope, Order);
+                           AAMDNodes(), nullptr, SSID, Order);
 
   InChain = TLI.prepareVolatileOrAtomicLoad(InChain, dl, DAG);
   SDValue L =
@@ -3998,7 +3998,7 @@
   SDLoc dl = getCurSDLoc();
 
   AtomicOrdering Order = I.getOrdering();
-  SynchronizationScope Scope = I.getSynchScope();
+  SyncScope::ID SSID = I.getSyncScopeID();
 
   SDValue InChain = getRoot();
 
@@ -4015,7 +4015,7 @@
                   getValue(I.getPointerOperand()),
                   getValue(I.getValueOperand()),
                   I.getPointerOperand(), I.getAlignment(),
-                  Order, Scope);
+                  Order, SSID);
 
   DAG.setRoot(OutChain);
 }
Index: lib/IR/AsmWriter.cpp
===================================================================
--- lib/IR/AsmWriter.cpp
+++ lib/IR/AsmWriter.cpp
@@ -2071,6 +2071,8 @@
   bool ShouldPreserveUseListOrder;
   UseListOrderStack UseListOrders;
   SmallVector<StringRef, 8> MDNames;
+  /// Synchronization scope names registered with LLVMContext.
+  SmallVector<StringRef, 8> SSNs;
 
 public:
   /// Construct an AssemblyWriter with an external SlotTracker
@@ -2086,10 +2088,15 @@
   void writeOperand(const Value *Op, bool PrintType);
   void writeParamOperand(const Value *Operand, AttributeSet Attrs);
   void writeOperandBundles(ImmutableCallSite CS);
-  void writeAtomic(AtomicOrdering Ordering, SynchronizationScope SynchScope);
-  void writeAtomicCmpXchg(AtomicOrdering SuccessOrdering,
+  void writeSyncScope(const LLVMContext &Context,
+                      SyncScope::ID SSID);
+  void writeAtomic(const LLVMContext &Context,
+                   AtomicOrdering Ordering,
+                   SyncScope::ID SSID);
+  void writeAtomicCmpXchg(const LLVMContext &Context,
+                          AtomicOrdering SuccessOrdering,
                           AtomicOrdering FailureOrdering,
-                          SynchronizationScope SynchScope);
+                          SyncScope::ID SSID);
 
   void writeAllMDNodes();
   void writeMDNode(unsigned Slot, const MDNode *Node);
@@ -2150,30 +2157,39 @@
   WriteAsOperandInternal(Out, Operand, &TypePrinter, &Machine, TheModule);
 }
 
-void AssemblyWriter::writeAtomic(AtomicOrdering Ordering,
-                                 SynchronizationScope SynchScope) {
+void AssemblyWriter::writeSyncScope(const LLVMContext &Context,
+                                    SyncScope::ID SSID) {
+  switch (SSID) {
+  case SyncScope::System: {
+    break;
+  }
+  default: {
+    if (SSNs.empty())
+      Context.getSyncScopeNames(SSNs);
+    Out << " syncscope(\"" << SSNs[SSID] << "\")";
+    break;
+  }
+  }
+}
+
+void AssemblyWriter::writeAtomic(const LLVMContext &Context,
+                                 AtomicOrdering Ordering,
+                                 SyncScope::ID SSID) {
   if (Ordering == AtomicOrdering::NotAtomic)
     return;
 
-  switch (SynchScope) {
-  case SingleThread: Out << " singlethread"; break;
-  case CrossThread: break;
-  }
-
+  writeSyncScope(Context, SSID);
   Out << " " << toIRString(Ordering);
 }
 
-void AssemblyWriter::writeAtomicCmpXchg(AtomicOrdering SuccessOrdering,
+void AssemblyWriter::writeAtomicCmpXchg(const LLVMContext &Context,
+                                        AtomicOrdering SuccessOrdering,
                                         AtomicOrdering FailureOrdering,
-                                        SynchronizationScope SynchScope) {
+                                        SyncScope::ID SSID) {
   assert(SuccessOrdering != AtomicOrdering::NotAtomic &&
          FailureOrdering != AtomicOrdering::NotAtomic);
 
-  switch (SynchScope) {
-  case SingleThread: Out << " singlethread"; break;
-  case CrossThread: break;
-  }
-
+  writeSyncScope(Context, SSID);
   Out << " " << toIRString(SuccessOrdering);
   Out << " " << toIRString(FailureOrdering);
 }
@@ -3169,21 +3185,22 @@
   // Print atomic ordering/alignment for memory operations
   if (const LoadInst *LI = dyn_cast<LoadInst>(&I)) {
     if (LI->isAtomic())
-      writeAtomic(LI->getOrdering(), LI->getSynchScope());
+      writeAtomic(LI->getContext(), LI->getOrdering(), LI->getSyncScopeID());
     if (LI->getAlignment())
       Out << ", align " << LI->getAlignment();
   } else if (const StoreInst *SI = dyn_cast<StoreInst>(&I)) {
     if (SI->isAtomic())
-      writeAtomic(SI->getOrdering(), SI->getSynchScope());
+      writeAtomic(SI->getContext(), SI->getOrdering(), SI->getSyncScopeID());
     if (SI->getAlignment())
       Out << ", align " << SI->getAlignment();
   } else if (const AtomicCmpXchgInst *CXI = dyn_cast<AtomicCmpXchgInst>(&I)) {
-    writeAtomicCmpXchg(CXI->getSuccessOrdering(), CXI->getFailureOrdering(),
-                       CXI->getSynchScope());
+    writeAtomicCmpXchg(CXI->getContext(), CXI->getSuccessOrdering(),
+                       CXI->getFailureOrdering(), CXI->getSyncScopeID());
   } else if (const AtomicRMWInst *RMWI = dyn_cast<AtomicRMWInst>(&I)) {
-    writeAtomic(RMWI->getOrdering(), RMWI->getSynchScope());
+    writeAtomic(RMWI->getContext(), RMWI->getOrdering(),
+                RMWI->getSyncScopeID());
   } else if (const FenceInst *FI = dyn_cast<FenceInst>(&I)) {
-    writeAtomic(FI->getOrdering(), FI->getSynchScope());
+    writeAtomic(FI->getContext(), FI->getOrdering(), FI->getSyncScopeID());
   }
 
   // Print Metadata info.
Index: lib/IR/Core.cpp
===================================================================
--- lib/IR/Core.cpp
+++ lib/IR/Core.cpp
@@ -2747,7 +2747,8 @@
                             LLVMBool isSingleThread, const char *Name) {
   return wrap(
     unwrap(B)->CreateFence(mapFromLLVMOrdering(Ordering),
-                           isSingleThread ? SingleThread : CrossThread,
+                           isSingleThread ? SyncScope::SingleThread
+                                          : SyncScope::System,
                            Name));
 }
 
@@ -3029,7 +3030,8 @@
     case LLVMAtomicRMWBinOpUMin: intop = AtomicRMWInst::UMin; break;
   }
   return wrap(unwrap(B)->CreateAtomicRMW(intop, unwrap(PTR), unwrap(Val),
-    mapFromLLVMOrdering(ordering), singleThread ? SingleThread : CrossThread));
+    mapFromLLVMOrdering(ordering), singleThread ? SyncScope::SingleThread
+                                                : SyncScope::System));
 }
 
 LLVMValueRef LLVMBuildAtomicCmpXchg(LLVMBuilderRef B, LLVMValueRef Ptr,
@@ -3041,7 +3043,7 @@
   return wrap(unwrap(B)->CreateAtomicCmpXchg(unwrap(Ptr), unwrap(Cmp),
                 unwrap(New), mapFromLLVMOrdering(SuccessOrdering),
                 mapFromLLVMOrdering(FailureOrdering),
-                singleThread ? SingleThread : CrossThread));
+                singleThread ? SyncScope::SingleThread : SyncScope::System));
 }
 
 
@@ -3049,17 +3051,18 @@
   Value *P = unwrap<Value>(AtomicInst);
 
   if (AtomicRMWInst *I = dyn_cast<AtomicRMWInst>(P))
-    return I->getSynchScope() == SingleThread;
-  return cast<AtomicCmpXchgInst>(P)->getSynchScope() == SingleThread;
+    return I->getSyncScopeID() == SyncScope::SingleThread;
+  return cast<AtomicCmpXchgInst>(P)->getSyncScopeID() ==
+             SyncScope::SingleThread;
 }
 
 void LLVMSetAtomicSingleThread(LLVMValueRef AtomicInst, LLVMBool NewValue) {
   Value *P = unwrap<Value>(AtomicInst);
-  SynchronizationScope Sync = NewValue ? SingleThread : CrossThread;
+  SyncScope::ID SSID = NewValue ? SyncScope::SingleThread : SyncScope::System;
 
   if (AtomicRMWInst *I = dyn_cast<AtomicRMWInst>(P))
-    return I->setSynchScope(Sync);
-  return cast<AtomicCmpXchgInst>(P)->setSynchScope(Sync);
+    return I->setSyncScopeID(SSID);
+  return cast<AtomicCmpXchgInst>(P)->setSyncScopeID(SSID);
 }
 
 LLVMAtomicOrdering LLVMGetCmpXchgSuccessOrdering(LLVMValueRef CmpXchgInst)  {
Index: lib/IR/Instruction.cpp
===================================================================
--- lib/IR/Instruction.cpp
+++ lib/IR/Instruction.cpp
@@ -364,13 +364,13 @@
            (LI->getAlignment() == cast<LoadInst>(I2)->getAlignment() ||
             IgnoreAlignment) &&
            LI->getOrdering() == cast<LoadInst>(I2)->getOrdering() &&
-           LI->getSynchScope() == cast<LoadInst>(I2)->getSynchScope();
+           LI->getSyncScopeID() == cast<LoadInst>(I2)->getSyncScopeID();
   if (const StoreInst *SI = dyn_cast<StoreInst>(I1))
     return SI->isVolatile() == cast<StoreInst>(I2)->isVolatile() &&
            (SI->getAlignment() == cast<StoreInst>(I2)->getAlignment() ||
             IgnoreAlignment) &&
            SI->getOrdering() == cast<StoreInst>(I2)->getOrdering() &&
-           SI->getSynchScope() == cast<StoreInst>(I2)->getSynchScope();
+           SI->getSyncScopeID() == cast<StoreInst>(I2)->getSyncScopeID();
   if (const CmpInst *CI = dyn_cast<CmpInst>(I1))
     return CI->getPredicate() == cast<CmpInst>(I2)->getPredicate();
   if (const CallInst *CI = dyn_cast<CallInst>(I1))
@@ -388,7 +388,7 @@
     return EVI->getIndices() == cast<ExtractValueInst>(I2)->getIndices();
   if (const FenceInst *FI = dyn_cast<FenceInst>(I1))
     return FI->getOrdering() == cast<FenceInst>(I2)->getOrdering() &&
-           FI->getSynchScope() == cast<FenceInst>(I2)->getSynchScope();
+           FI->getSyncScopeID() == cast<FenceInst>(I2)->getSyncScopeID();
   if (const AtomicCmpXchgInst *CXI = dyn_cast<AtomicCmpXchgInst>(I1))
     return CXI->isVolatile() == cast<AtomicCmpXchgInst>(I2)->isVolatile() &&
            CXI->isWeak() == cast<AtomicCmpXchgInst>(I2)->isWeak() &&
@@ -396,12 +396,13 @@
                cast<AtomicCmpXchgInst>(I2)->getSuccessOrdering() &&
            CXI->getFailureOrdering() ==
                cast<AtomicCmpXchgInst>(I2)->getFailureOrdering() &&
-           CXI->getSynchScope() == cast<AtomicCmpXchgInst>(I2)->getSynchScope();
+           CXI->getSyncScopeID() ==
+               cast<AtomicCmpXchgInst>(I2)->getSyncScopeID();
   if (const AtomicRMWInst *RMWI = dyn_cast<AtomicRMWInst>(I1))
     return RMWI->getOperation() == cast<AtomicRMWInst>(I2)->getOperation() &&
            RMWI->isVolatile() == cast<AtomicRMWInst>(I2)->isVolatile() &&
            RMWI->getOrdering() == cast<AtomicRMWInst>(I2)->getOrdering() &&
-           RMWI->getSynchScope() == cast<AtomicRMWInst>(I2)->getSynchScope();
+           RMWI->getSyncScopeID() == cast<AtomicRMWInst>(I2)->getSyncScopeID();
 
   return true;
 }
Index: lib/IR/Instructions.cpp
===================================================================
--- lib/IR/Instructions.cpp
+++ lib/IR/Instructions.cpp
@@ -1337,34 +1337,34 @@
 LoadInst::LoadInst(Type *Ty, Value *Ptr, const Twine &Name, bool isVolatile,
                    unsigned Align, Instruction *InsertBef)
     : LoadInst(Ty, Ptr, Name, isVolatile, Align, AtomicOrdering::NotAtomic,
-               CrossThread, InsertBef) {}
+               SyncScope::System, InsertBef) {}
 
 LoadInst::LoadInst(Value *Ptr, const Twine &Name, bool isVolatile,
                    unsigned Align, BasicBlock *InsertAE)
     : LoadInst(Ptr, Name, isVolatile, Align, AtomicOrdering::NotAtomic,
-               CrossThread, InsertAE) {}
+               SyncScope::System, InsertAE) {}
 
 LoadInst::LoadInst(Type *Ty, Value *Ptr, const Twine &Name, bool isVolatile,
                    unsigned Align, AtomicOrdering Order,
-                   SynchronizationScope SynchScope, Instruction *InsertBef)
+                   SyncScope::ID SSID, Instruction *InsertBef)
     : UnaryInstruction(Ty, Load, Ptr, InsertBef) {
   assert(Ty == cast<PointerType>(Ptr->getType())->getElementType());
   setVolatile(isVolatile);
   setAlignment(Align);
-  setAtomic(Order, SynchScope);
+  setAtomic(Order, SSID);
   AssertOK();
   setName(Name);
 }
 
 LoadInst::LoadInst(Value *Ptr, const Twine &Name, bool isVolatile, 
                    unsigned Align, AtomicOrdering Order,
-                   SynchronizationScope SynchScope,
+                   SyncScope::ID SSID,
                    BasicBlock *InsertAE)
   : UnaryInstruction(cast<PointerType>(Ptr->getType())->getElementType(),
                      Load, Ptr, InsertAE) {
   setVolatile(isVolatile);
   setAlignment(Align);
-  setAtomic(Order, SynchScope);
+  setAtomic(Order, SSID);
   AssertOK();
   setName(Name);
 }
@@ -1452,16 +1452,16 @@
 StoreInst::StoreInst(Value *val, Value *addr, bool isVolatile, unsigned Align,
                      Instruction *InsertBefore)
     : StoreInst(val, addr, isVolatile, Align, AtomicOrdering::NotAtomic,
-                CrossThread, InsertBefore) {}
+                SyncScope::System, InsertBefore) {}
 
 StoreInst::StoreInst(Value *val, Value *addr, bool isVolatile, unsigned Align,
                      BasicBlock *InsertAtEnd)
     : StoreInst(val, addr, isVolatile, Align, AtomicOrdering::NotAtomic,
-                CrossThread, InsertAtEnd) {}
+                SyncScope::System, InsertAtEnd) {}
 
 StoreInst::StoreInst(Value *val, Value *addr, bool isVolatile,
                      unsigned Align, AtomicOrdering Order,
-                     SynchronizationScope SynchScope,
+                     SyncScope::ID SSID,
                      Instruction *InsertBefore)
   : Instruction(Type::getVoidTy(val->getContext()), Store,
                 OperandTraits<StoreInst>::op_begin(this),
@@ -1471,13 +1471,13 @@
   Op<1>() = addr;
   setVolatile(isVolatile);
   setAlignment(Align);
-  setAtomic(Order, SynchScope);
+  setAtomic(Order, SSID);
   AssertOK();
 }
 
 StoreInst::StoreInst(Value *val, Value *addr, bool isVolatile,
                      unsigned Align, AtomicOrdering Order,
-                     SynchronizationScope SynchScope,
+                     SyncScope::ID SSID,
                      BasicBlock *InsertAtEnd)
   : Instruction(Type::getVoidTy(val->getContext()), Store,
                 OperandTraits<StoreInst>::op_begin(this),
@@ -1487,7 +1487,7 @@
   Op<1>() = addr;
   setVolatile(isVolatile);
   setAlignment(Align);
-  setAtomic(Order, SynchScope);
+  setAtomic(Order, SSID);
   AssertOK();
 }
 
@@ -1507,13 +1507,13 @@
 void AtomicCmpXchgInst::Init(Value *Ptr, Value *Cmp, Value *NewVal,
                              AtomicOrdering SuccessOrdering,
                              AtomicOrdering FailureOrdering,
-                             SynchronizationScope SynchScope) {
+                             SyncScope::ID SSID) {
   Op<0>() = Ptr;
   Op<1>() = Cmp;
   Op<2>() = NewVal;
   setSuccessOrdering(SuccessOrdering);
   setFailureOrdering(FailureOrdering);
-  setSynchScope(SynchScope);
+  setSyncScopeID(SSID);
 
   assert(getOperand(0) && getOperand(1) && getOperand(2) &&
          "All operands must be non-null!");
@@ -1540,27 +1540,27 @@
 AtomicCmpXchgInst::AtomicCmpXchgInst(Value *Ptr, Value *Cmp, Value *NewVal,
                                      AtomicOrdering SuccessOrdering,
                                      AtomicOrdering FailureOrdering,
-                                     SynchronizationScope SynchScope,
+                                     SyncScope::ID SSID,
                                      Instruction *InsertBefore)
     : Instruction(
           StructType::get(Cmp->getType(), Type::getInt1Ty(Cmp->getContext()),
                           nullptr),
           AtomicCmpXchg, OperandTraits<AtomicCmpXchgInst>::op_begin(this),
           OperandTraits<AtomicCmpXchgInst>::operands(this), InsertBefore) {
-  Init(Ptr, Cmp, NewVal, SuccessOrdering, FailureOrdering, SynchScope);
+  Init(Ptr, Cmp, NewVal, SuccessOrdering, FailureOrdering, SSID);
 }
 
 AtomicCmpXchgInst::AtomicCmpXchgInst(Value *Ptr, Value *Cmp, Value *NewVal,
                                      AtomicOrdering SuccessOrdering,
                                      AtomicOrdering FailureOrdering,
-                                     SynchronizationScope SynchScope,
+                                     SyncScope::ID SSID,
                                      BasicBlock *InsertAtEnd)
     : Instruction(
           StructType::get(Cmp->getType(), Type::getInt1Ty(Cmp->getContext()),
                           nullptr),
           AtomicCmpXchg, OperandTraits<AtomicCmpXchgInst>::op_begin(this),
           OperandTraits<AtomicCmpXchgInst>::operands(this), InsertAtEnd) {
-  Init(Ptr, Cmp, NewVal, SuccessOrdering, FailureOrdering, SynchScope);
+  Init(Ptr, Cmp, NewVal, SuccessOrdering, FailureOrdering, SSID);
 }
 
 //===----------------------------------------------------------------------===//
@@ -1569,12 +1569,12 @@
 
 void AtomicRMWInst::Init(BinOp Operation, Value *Ptr, Value *Val,
                          AtomicOrdering Ordering,
-                         SynchronizationScope SynchScope) {
+                         SyncScope::ID SSID) {
   Op<0>() = Ptr;
   Op<1>() = Val;
   setOperation(Operation);
   setOrdering(Ordering);
-  setSynchScope(SynchScope);
+  setSyncScopeID(SSID);
 
   assert(getOperand(0) && getOperand(1) &&
          "All operands must be non-null!");
@@ -1589,24 +1589,24 @@
 
 AtomicRMWInst::AtomicRMWInst(BinOp Operation, Value *Ptr, Value *Val,
                              AtomicOrdering Ordering,
-                             SynchronizationScope SynchScope,
+                             SyncScope::ID SSID,
                              Instruction *InsertBefore)
   : Instruction(Val->getType(), AtomicRMW,
                 OperandTraits<AtomicRMWInst>::op_begin(this),
                 OperandTraits<AtomicRMWInst>::operands(this),
                 InsertBefore) {
-  Init(Operation, Ptr, Val, Ordering, SynchScope);
+  Init(Operation, Ptr, Val, Ordering, SSID);
 }
 
 AtomicRMWInst::AtomicRMWInst(BinOp Operation, Value *Ptr, Value *Val,
                              AtomicOrdering Ordering,
-                             SynchronizationScope SynchScope,
+                             SyncScope::ID SSID,
                              BasicBlock *InsertAtEnd)
   : Instruction(Val->getType(), AtomicRMW,
                 OperandTraits<AtomicRMWInst>::op_begin(this),
                 OperandTraits<AtomicRMWInst>::operands(this),
                 InsertAtEnd) {
-  Init(Operation, Ptr, Val, Ordering, SynchScope);
+  Init(Operation, Ptr, Val, Ordering, SSID);
 }
 
 //===----------------------------------------------------------------------===//
@@ -1614,19 +1614,19 @@
 //===----------------------------------------------------------------------===//
 
 FenceInst::FenceInst(LLVMContext &C, AtomicOrdering Ordering, 
-                     SynchronizationScope SynchScope,
+                     SyncScope::ID SSID,
                      Instruction *InsertBefore)
   : Instruction(Type::getVoidTy(C), Fence, nullptr, 0, InsertBefore) {
   setOrdering(Ordering);
-  setSynchScope(SynchScope);
+  setSyncScopeID(SSID);
 }
 
 FenceInst::FenceInst(LLVMContext &C, AtomicOrdering Ordering, 
-                     SynchronizationScope SynchScope,
+                     SyncScope::ID SSID,
                      BasicBlock *InsertAtEnd)
   : Instruction(Type::getVoidTy(C), Fence, nullptr, 0, InsertAtEnd) {
   setOrdering(Ordering);
-  setSynchScope(SynchScope);
+  setSyncScopeID(SSID);
 }
 
 //===----------------------------------------------------------------------===//
@@ -3881,12 +3881,12 @@
 
 LoadInst *LoadInst::cloneImpl() const {
   return new LoadInst(getOperand(0), Twine(), isVolatile(),
-                      getAlignment(), getOrdering(), getSynchScope());
+                      getAlignment(), getOrdering(), getSyncScopeID());
 }
 
 StoreInst *StoreInst::cloneImpl() const {
   return new StoreInst(getOperand(0), getOperand(1), isVolatile(),
-                       getAlignment(), getOrdering(), getSynchScope());
+                       getAlignment(), getOrdering(), getSyncScopeID());
   
 }
 
@@ -3894,7 +3894,7 @@
   AtomicCmpXchgInst *Result =
     new AtomicCmpXchgInst(getOperand(0), getOperand(1), getOperand(2),
                           getSuccessOrdering(), getFailureOrdering(),
-                          getSynchScope());
+                          getSyncScopeID());
   Result->setVolatile(isVolatile());
   Result->setWeak(isWeak());
   return Result;
@@ -3902,14 +3902,14 @@
 
 AtomicRMWInst *AtomicRMWInst::cloneImpl() const {
   AtomicRMWInst *Result =
-    new AtomicRMWInst(getOperation(),getOperand(0), getOperand(1),
-                      getOrdering(), getSynchScope());
+    new AtomicRMWInst(getOperation(), getOperand(0), getOperand(1),
+                      getOrdering(), getSyncScopeID());
   Result->setVolatile(isVolatile());
   return Result;
 }
 
 FenceInst *FenceInst::cloneImpl() const {
-  return new FenceInst(getContext(), getOrdering(), getSynchScope());
+  return new FenceInst(getContext(), getOrdering(), getSyncScopeID());
 }
 
 TruncInst *TruncInst::cloneImpl() const {
Index: lib/IR/LLVMContext.cpp
===================================================================
--- lib/IR/LLVMContext.cpp
+++ lib/IR/LLVMContext.cpp
@@ -81,6 +81,16 @@
   assert(GCTransitionEntry->second == LLVMContext::OB_gc_transition &&
          "gc-transition operand bundle id drifted!");
   (void)GCTransitionEntry;
+
+  SyncScope::ID SingleThreadSSID =
+      pImpl->getOrInsertSyncScopeID("singlethread");
+  assert(SingleThreadSSID == SyncScope::SingleThread &&
+         "singlethread synchronization scope ID drifted!");
+
+  SyncScope::ID SystemSSID =
+      pImpl->getOrInsertSyncScopeID("");
+  assert(SystemSSID == SyncScope::System &&
+         "system synchronization scope ID drifted!");
 }
 
 LLVMContext::~LLVMContext() { delete pImpl; }
@@ -248,6 +258,18 @@
   return pImpl->getOperandBundleTagID(Tag);
 }
 
+SyncScope::ID LLVMContext::getSyncScopeID(StringRef SSN) const {
+  return pImpl->getSyncScopeID(SSN);
+}
+
+SyncScope::ID LLVMContext::getOrInsertSyncScopeID(StringRef SSN) {
+  return pImpl->getOrInsertSyncScopeID(SSN);
+}
+
+void LLVMContext::getSyncScopeNames(SmallVectorImpl<StringRef> &SSNs) const {
+  pImpl->getSyncScopeNames(SSNs);
+}
+
 void LLVMContext::setGC(const Function &Fn, std::string GCName) {
   auto It = pImpl->GCNames.find(&Fn);
 
Index: lib/IR/LLVMContextImpl.h
===================================================================
--- lib/IR/LLVMContextImpl.h
+++ lib/IR/LLVMContextImpl.h
@@ -1232,6 +1232,26 @@
   void getOperandBundleTags(SmallVectorImpl<StringRef> &Tags) const;
   uint32_t getOperandBundleTagID(StringRef Tag) const;
 
+  /// A set of interned synchronization scopes.  The StringMap maps
+  /// synchronization scope names to their respective synchronization scope IDs.
+  StringMap<SyncScope::ID> SSC;
+
+  /// getSyncScopeID - Maps synchronization scope name to synchronization scope
+  /// ID.  Every synchronization scope registered with LLVMContext has a unique
+  /// ID; the pre-defined scopes have fixed IDs.  A synchronization scope must
+  /// be registered with LLVMContext prior to calling this function.
+  SyncScope::ID getSyncScopeID(StringRef SSN) const;
+
+  /// getOrInsertSyncScopeID - Maps synchronization scope name to
+  /// synchronization scope ID.  Every synchronization scope registered with
+  /// LLVMContext has a unique ID; the pre-defined scopes have fixed IDs.
+  SyncScope::ID getOrInsertSyncScopeID(StringRef SSN);
+
+  /// getSyncScopeNames - Populates the client-supplied SmallVector with
+  /// synchronization scope names registered with LLVMContext.  Synchronization
+  /// scope names are ordered by increasing synchronization scope IDs.
+  void getSyncScopeNames(SmallVectorImpl<StringRef> &SSNs) const;
+
   /// Maintain the GC name for each function.
   ///
   /// This saves allocating an additional word in Function for programs which
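
Note (illustrative only, not part of the patch): a minimal sketch of how a
frontend or pass might use the LLVMContext API declared above once this change
lands. The scope name "agent" is a hypothetical target-specific scope, and the
builder is assumed to be positioned at a valid insertion point.

  #include "llvm/IR/IRBuilder.h"
  #include "llvm/IR/LLVMContext.h"

  using namespace llvm;

  // Emit two fences: one restricted to a hypothetical target-specific
  // "agent" scope, and one restricted to the current thread.
  static void emitScopedFences(IRBuilder<> &Builder) {
    LLVMContext &Ctx = Builder.getContext();
    // Interns "agent" and returns a SyncScope::ID stable within Ctx.
    SyncScope::ID AgentSSID = Ctx.getOrInsertSyncScopeID("agent");
    Builder.CreateFence(AtomicOrdering::SequentiallyConsistent, AgentSSID);
    // The pre-defined scopes keep their reserved IDs (see LLVMContext.cpp).
    Builder.CreateFence(AtomicOrdering::Release, SyncScope::SingleThread);
  }
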
Index: lib/IR/LLVMContextImpl.cpp
===================================================================
--- lib/IR/LLVMContextImpl.cpp
+++ lib/IR/LLVMContextImpl.cpp
@@ -215,6 +215,26 @@
   return I->second;
 }
 
+SyncScope::ID LLVMContextImpl::getSyncScopeID(StringRef SSN) const {
+  auto SSI = SSC.find(SSN);
+  assert(SSI != SSC.end() && "Unknown synchronization scope name!");
+  return SSI->second;
+}
+
+SyncScope::ID LLVMContextImpl::getOrInsertSyncScopeID(StringRef SSN) {
+  auto NewSSID = SSC.size();
+  assert(NewSSID < std::numeric_limits<SyncScope::ID>::max() &&
+         "Hit the maximum number of synchronization scopes allowed!");
+  return SSC.insert(std::make_pair(SSN, SyncScope::ID(NewSSID))).first->second;
+}
+
+void LLVMContextImpl::getSyncScopeNames(
+    SmallVectorImpl<StringRef> &SSNs) const {
+  SSNs.resize(SSC.size());
+  for (const auto &SSE : SSC)
+    SSNs[SSE.second] = SSE.first();
+}
+
 // ConstantsContext anchors
 void UnaryConstantExpr::anchor() { }
 
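
For reference (a sketch under the same assumptions, not part of the patch):
resolving a SyncScope::ID back to its name the way AssemblyWriter::writeSyncScope
and MIPrinter::printSyncScope do above, relying on getSyncScopeNames() filling
the vector indexed by scope ID.

  #include "llvm/ADT/SmallVector.h"
  #include "llvm/ADT/StringRef.h"
  #include "llvm/IR/LLVMContext.h"

  using namespace llvm;

  // Returns the registered name for SSID; the default (system) scope is "".
  static StringRef syncScopeName(const LLVMContext &Ctx, SyncScope::ID SSID) {
    SmallVector<StringRef, 8> SSNs;
    Ctx.getSyncScopeNames(SSNs); // names ordered by increasing scope ID
    return SSNs[SSID];
  }
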
Index: lib/IR/Verifier.cpp
===================================================================
--- lib/IR/Verifier.cpp
+++ lib/IR/Verifier.cpp
@@ -3091,7 +3091,7 @@
            ElTy, &LI);
     checkAtomicMemAccessSize(ElTy, &LI);
   } else {
-    Assert(LI.getSynchScope() == CrossThread,
+    Assert(LI.getSyncScopeID() == SyncScope::System,
            "Non-atomic load cannot have SynchronizationScope specified", &LI);
   }
 
@@ -3120,7 +3120,7 @@
            ElTy, &SI);
     checkAtomicMemAccessSize(ElTy, &SI);
   } else {
-    Assert(SI.getSynchScope() == CrossThread,
+    Assert(SI.getSyncScopeID() == SyncScope::System,
            "Non-atomic store cannot have SynchronizationScope specified", &SI);
   }
   visitInstruction(SI);
Index: lib/Target/ARM/ARMISelLowering.cpp
===================================================================
--- lib/Target/ARM/ARMISelLowering.cpp
+++ lib/Target/ARM/ARMISelLowering.cpp
@@ -3365,9 +3365,9 @@
 static SDValue LowerATOMIC_FENCE(SDValue Op, SelectionDAG &DAG,
                                  const ARMSubtarget *Subtarget) {
   SDLoc dl(Op);
-  ConstantSDNode *ScopeN = cast<ConstantSDNode>(Op.getOperand(2));
-  auto Scope = static_cast<SynchronizationScope>(ScopeN->getZExtValue());
-  if (Scope == SynchronizationScope::SingleThread)
+  ConstantSDNode *SSIDNode = cast<ConstantSDNode>(Op.getOperand(2));
+  auto SSID = static_cast<SyncScope::ID>(SSIDNode->getZExtValue());
+  if (SSID == SyncScope::SingleThread)
     return Op;
 
   if (!Subtarget->hasDataBarrier()) {
Index: lib/Target/SystemZ/SystemZISelLowering.cpp
===================================================================
--- lib/Target/SystemZ/SystemZISelLowering.cpp
+++ lib/Target/SystemZ/SystemZISelLowering.cpp
@@ -3198,13 +3198,13 @@
   SDLoc DL(Op);
   AtomicOrdering FenceOrdering = static_cast<AtomicOrdering>(
     cast<ConstantSDNode>(Op.getOperand(1))->getZExtValue());
-  SynchronizationScope FenceScope = static_cast<SynchronizationScope>(
+  SyncScope::ID FenceSSID = static_cast<SyncScope::ID>(
     cast<ConstantSDNode>(Op.getOperand(2))->getZExtValue());
 
   // The only fence that needs an instruction is a sequentially-consistent
   // cross-thread fence.
   if (FenceOrdering == AtomicOrdering::SequentiallyConsistent &&
-      FenceScope == CrossThread) {
+      FenceSSID == SyncScope::System) {
     return SDValue(DAG.getMachineNode(SystemZ::Serialize, DL, MVT::Other,
                                       Op.getOperand(0)),
                    0);
Index: lib/Target/X86/X86ISelLowering.cpp
===================================================================
--- lib/Target/X86/X86ISelLowering.cpp
+++ lib/Target/X86/X86ISelLowering.cpp
@@ -22657,7 +22657,7 @@
 
   auto Builder = IRBuilder<>(AI);
   Module *M = Builder.GetInsertBlock()->getParent()->getParent();
-  auto SynchScope = AI->getSynchScope();
+  auto SSID = AI->getSyncScopeID();
   // We must restrict the ordering to avoid generating loads with Release or
   // ReleaseAcquire orderings.
   auto Order = AtomicCmpXchgInst::getStrongestFailureOrdering(AI->getOrdering());
@@ -22679,7 +22679,7 @@
   // otherwise, we might be able to be more aggressive on relaxed idempotent
   // rmw. In practice, they do not look useful, so we don't try to be
   // especially clever.
-  if (SynchScope == SingleThread)
+  if (SSID == SyncScope::SingleThread)
     // FIXME: we could just insert an X86ISD::MEMBARRIER here, except we are at
     // the IR level, so we must wrap it in an intrinsic.
     return nullptr;
@@ -22698,7 +22698,7 @@
   // Finally we can emit the atomic load.
   LoadInst *Loaded = Builder.CreateAlignedLoad(Ptr,
           AI->getType()->getPrimitiveSizeInBits());
-  Loaded->setAtomic(Order, SynchScope);
+  Loaded->setAtomic(Order, SSID);
   AI->replaceAllUsesWith(Loaded);
   AI->eraseFromParent();
   return Loaded;
@@ -22709,13 +22709,13 @@
   SDLoc dl(Op);
   AtomicOrdering FenceOrdering = static_cast<AtomicOrdering>(
     cast<ConstantSDNode>(Op.getOperand(1))->getZExtValue());
-  SynchronizationScope FenceScope = static_cast<SynchronizationScope>(
+  SyncScope::ID FenceSSID = static_cast<SyncScope::ID>(
     cast<ConstantSDNode>(Op.getOperand(2))->getZExtValue());
 
   // The only fence that needs an instruction is a sequentially-consistent
   // cross-thread fence.
   if (FenceOrdering == AtomicOrdering::SequentiallyConsistent &&
-      FenceScope == CrossThread) {
+      FenceSSID == SyncScope::System) {
     if (Subtarget.hasMFence())
       return DAG.getNode(X86ISD::MFENCE, dl, MVT::Other, Op.getOperand(0));
 
Index: lib/Transforms/IPO/GlobalOpt.cpp
===================================================================
--- lib/Transforms/IPO/GlobalOpt.cpp
+++ lib/Transforms/IPO/GlobalOpt.cpp
@@ -837,7 +837,7 @@
     if (StoreInst *SI = dyn_cast<StoreInst>(GV->user_back())) {
       // The global is initialized when the store to it occurs.
       new StoreInst(ConstantInt::getTrue(GV->getContext()), InitBool, false, 0,
-                    SI->getOrdering(), SI->getSynchScope(), SI);
+                    SI->getOrdering(), SI->getSyncScopeID(), SI);
       SI->eraseFromParent();
       continue;
     }
@@ -854,7 +854,7 @@
       // Replace the cmp X, 0 with a use of the bool value.
       // Sink the load to where the compare was, if atomic rules allow us to.
       Value *LV = new LoadInst(InitBool, InitBool->getName()+".val", false, 0,
-                               LI->getOrdering(), LI->getSynchScope(),
+                               LI->getOrdering(), LI->getSyncScopeID(),
                                LI->isUnordered() ? (Instruction*)ICI : LI);
       InitBoolUsed = true;
       switch (ICI->getPredicate()) {
@@ -1605,7 +1605,7 @@
           assert(LI->getOperand(0) == GV && "Not a copy!");
           // Insert a new load, to preserve the saved value.
           StoreVal = new LoadInst(NewGV, LI->getName()+".b", false, 0,
-                                  LI->getOrdering(), LI->getSynchScope(), LI);
+                                  LI->getOrdering(), LI->getSyncScopeID(), LI);
         } else {
           assert((isa<CastInst>(StoredVal) || isa<SelectInst>(StoredVal)) &&
                  "This is not a form that we understand!");
@@ -1614,12 +1614,12 @@
         }
       }
       new StoreInst(StoreVal, NewGV, false, 0,
-                    SI->getOrdering(), SI->getSynchScope(), SI);
+                    SI->getOrdering(), SI->getSyncScopeID(), SI);
     } else {
       // Change the load into a load of bool then a select.
       LoadInst *LI = cast<LoadInst>(UI);
       LoadInst *NLI = new LoadInst(NewGV, LI->getName()+".b", false, 0,
-                                   LI->getOrdering(), LI->getSynchScope(), LI);
+                                   LI->getOrdering(), LI->getSyncScopeID(), LI);
       Value *NSI;
       if (IsOneZero)
         NSI = new ZExtInst(NLI, LI->getType(), "", LI);
Index: lib/Transforms/InstCombine/InstCombineLoadStoreAlloca.cpp
===================================================================
--- lib/Transforms/InstCombine/InstCombineLoadStoreAlloca.cpp
+++ lib/Transforms/InstCombine/InstCombineLoadStoreAlloca.cpp
@@ -448,7 +448,7 @@
   LoadInst *NewLoad = IC.Builder->CreateAlignedLoad(
       IC.Builder->CreateBitCast(Ptr, NewTy->getPointerTo(AS)),
       LI.getAlignment(), LI.isVolatile(), LI.getName() + Suffix);
-  NewLoad->setAtomic(LI.getOrdering(), LI.getSynchScope());
+  NewLoad->setAtomic(LI.getOrdering(), LI.getSyncScopeID());
   MDBuilder MDB(NewLoad->getContext());
   for (const auto &MDPair : MD) {
     unsigned ID = MDPair.first;
@@ -532,7 +532,7 @@
   StoreInst *NewStore = IC.Builder->CreateAlignedStore(
       V, IC.Builder->CreateBitCast(Ptr, V->getType()->getPointerTo(AS)),
       SI.getAlignment(), SI.isVolatile());
-  NewStore->setAtomic(SI.getOrdering(), SI.getSynchScope());
+  NewStore->setAtomic(SI.getOrdering(), SI.getSyncScopeID());
   for (const auto &MDPair : MD) {
     unsigned ID = MDPair.first;
     MDNode *N = MDPair.second;
@@ -1025,9 +1025,9 @@
                                            SI->getOperand(2)->getName()+".val");
         assert(LI.isUnordered() && "implied by above");
         V1->setAlignment(Align);
-        V1->setAtomic(LI.getOrdering(), LI.getSynchScope());
+        V1->setAtomic(LI.getOrdering(), LI.getSyncScopeID());
         V2->setAlignment(Align);
-        V2->setAtomic(LI.getOrdering(), LI.getSynchScope());
+        V2->setAtomic(LI.getOrdering(), LI.getSyncScopeID());
         return SelectInst::Create(SI->getCondition(), V1, V2);
       }
 
@@ -1534,7 +1534,7 @@
                                    SI.isVolatile(),
                                    SI.getAlignment(),
                                    SI.getOrdering(),
-                                   SI.getSynchScope());
+                                   SI.getSyncScopeID());
   InsertNewInstBefore(NewSI, *BBI);
   // The debug locations of the original instructions might differ; merge them.
   NewSI->setDebugLoc(DILocation::getMergedLocation(SI.getDebugLoc(),
Index: lib/Transforms/Instrumentation/ThreadSanitizer.cpp
===================================================================
--- lib/Transforms/Instrumentation/ThreadSanitizer.cpp
+++ lib/Transforms/Instrumentation/ThreadSanitizer.cpp
@@ -379,10 +379,11 @@
 }
 
 static bool isAtomic(Instruction *I) {
+  // TODO: Ask TTI whether the synchronization scope is between threads.
   if (LoadInst *LI = dyn_cast<LoadInst>(I))
-    return LI->isAtomic() && LI->getSynchScope() == CrossThread;
+    return LI->isAtomic() && LI->getSyncScopeID() != SyncScope::SingleThread;
   if (StoreInst *SI = dyn_cast<StoreInst>(I))
-    return SI->isAtomic() && SI->getSynchScope() == CrossThread;
+    return SI->isAtomic() && SI->getSyncScopeID() != SyncScope::SingleThread;
   if (isa<AtomicRMWInst>(I))
     return true;
   if (isa<AtomicCmpXchgInst>(I))
@@ -676,7 +677,7 @@
     I->eraseFromParent();
   } else if (FenceInst *FI = dyn_cast<FenceInst>(I)) {
     Value *Args[] = {createOrdering(&IRB, FI->getOrdering())};
-    Function *F = FI->getSynchScope() == SingleThread ?
+    Function *F = FI->getSyncScopeID() == SyncScope::SingleThread ?
         TsanAtomicSignalFence : TsanAtomicThreadFence;
     CallInst *C = CallInst::Create(F, Args);
     ReplaceInstWithInst(I, C);
Index: lib/Transforms/Scalar/GVN.cpp
===================================================================
--- lib/Transforms/Scalar/GVN.cpp
+++ lib/Transforms/Scalar/GVN.cpp
@@ -1166,7 +1166,7 @@
 
     auto *NewLoad = new LoadInst(LoadPtr, LI->getName()+".pre",
                                  LI->isVolatile(), LI->getAlignment(),
-                                 LI->getOrdering(), LI->getSynchScope(),
+                                 LI->getOrdering(), LI->getSyncScopeID(),
                                  UnavailablePred->getTerminator());
 
     // Transfer the old load's AA tags to the new load.
Index: lib/Transforms/Scalar/JumpThreading.cpp
===================================================================
--- lib/Transforms/Scalar/JumpThreading.cpp
+++ lib/Transforms/Scalar/JumpThreading.cpp
@@ -1105,7 +1105,7 @@
     LoadInst *NewVal = new LoadInst(
         LoadedPtr->DoPHITranslation(LoadBB, UnavailablePred),
         LI->getName() + ".pr", false, LI->getAlignment(), LI->getOrdering(),
-        LI->getSynchScope(), UnavailablePred->getTerminator());
+        LI->getSyncScopeID(), UnavailablePred->getTerminator());
     NewVal->setDebugLoc(LI->getDebugLoc());
     if (AATags)
       NewVal->setAAMetadata(AATags);
Index: lib/Transforms/Scalar/SROA.cpp
===================================================================
--- lib/Transforms/Scalar/SROA.cpp
+++ lib/Transforms/Scalar/SROA.cpp
@@ -2391,7 +2391,7 @@
       LoadInst *NewLI = IRB.CreateAlignedLoad(&NewAI, NewAI.getAlignment(),
                                               LI.isVolatile(), LI.getName());
       if (LI.isVolatile())
-        NewLI->setAtomic(LI.getOrdering(), LI.getSynchScope());
+        NewLI->setAtomic(LI.getOrdering(), LI.getSyncScopeID());
 
       // Try to preserve nonnull metadata
       if (TargetTy->isPointerTy())
@@ -2415,7 +2415,7 @@
                                               getSliceAlign(TargetTy),
                                               LI.isVolatile(), LI.getName());
       if (LI.isVolatile())
-        NewLI->setAtomic(LI.getOrdering(), LI.getSynchScope());
+        NewLI->setAtomic(LI.getOrdering(), LI.getSyncScopeID());
 
       V = NewLI;
       IsPtrAdjusted = true;
@@ -2558,7 +2558,7 @@
     }
     NewSI->copyMetadata(SI, LLVMContext::MD_mem_parallel_loop_access);
     if (SI.isVolatile())
-      NewSI->setAtomic(SI.getOrdering(), SI.getSynchScope());
+      NewSI->setAtomic(SI.getOrdering(), SI.getSyncScopeID());
     Pass.DeadInsts.insert(&SI);
     deleteIfTriviallyDead(OldOp);
 
Index: lib/Transforms/Utils/FunctionComparator.cpp
===================================================================
--- lib/Transforms/Utils/FunctionComparator.cpp
+++ lib/Transforms/Utils/FunctionComparator.cpp
@@ -511,8 +511,8 @@
     if (int Res =
             cmpOrderings(LI->getOrdering(), cast<LoadInst>(R)->getOrdering()))
       return Res;
-    if (int Res =
-            cmpNumbers(LI->getSynchScope(), cast<LoadInst>(R)->getSynchScope()))
+    if (int Res = cmpNumbers(LI->getSyncScopeID(),
+                             cast<LoadInst>(R)->getSyncScopeID()))
       return Res;
     return cmpRangeMetadata(LI->getMetadata(LLVMContext::MD_range),
         cast<LoadInst>(R)->getMetadata(LLVMContext::MD_range));
@@ -527,7 +527,8 @@
     if (int Res =
             cmpOrderings(SI->getOrdering(), cast<StoreInst>(R)->getOrdering()))
       return Res;
-    return cmpNumbers(SI->getSynchScope(), cast<StoreInst>(R)->getSynchScope());
+    return cmpNumbers(SI->getSyncScopeID(),
+                      cast<StoreInst>(R)->getSyncScopeID());
   }
   if (const CmpInst *CI = dyn_cast<CmpInst>(L))
     return cmpNumbers(CI->getPredicate(), cast<CmpInst>(R)->getPredicate());
@@ -582,7 +583,8 @@
     if (int Res =
             cmpOrderings(FI->getOrdering(), cast<FenceInst>(R)->getOrdering()))
       return Res;
-    return cmpNumbers(FI->getSynchScope(), cast<FenceInst>(R)->getSynchScope());
+    return cmpNumbers(FI->getSyncScopeID(),
+                      cast<FenceInst>(R)->getSyncScopeID());
   }
   if (const AtomicCmpXchgInst *CXI = dyn_cast<AtomicCmpXchgInst>(L)) {
     if (int Res = cmpNumbers(CXI->isVolatile(),
@@ -599,8 +601,8 @@
             cmpOrderings(CXI->getFailureOrdering(),
                          cast<AtomicCmpXchgInst>(R)->getFailureOrdering()))
       return Res;
-    return cmpNumbers(CXI->getSynchScope(),
-                      cast<AtomicCmpXchgInst>(R)->getSynchScope());
+    return cmpNumbers(CXI->getSyncScopeID(),
+                      cast<AtomicCmpXchgInst>(R)->getSyncScopeID());
   }
   if (const AtomicRMWInst *RMWI = dyn_cast<AtomicRMWInst>(L)) {
     if (int Res = cmpNumbers(RMWI->getOperation(),
@@ -612,8 +614,8 @@
     if (int Res = cmpOrderings(RMWI->getOrdering(),
                              cast<AtomicRMWInst>(R)->getOrdering()))
       return Res;
-    return cmpNumbers(RMWI->getSynchScope(),
-                      cast<AtomicRMWInst>(R)->getSynchScope());
+    return cmpNumbers(RMWI->getSyncScopeID(),
+                      cast<AtomicRMWInst>(R)->getSyncScopeID());
   }
   if (const PHINode *PNL = dyn_cast<PHINode>(L)) {
     const PHINode *PNR = cast<PHINode>(R);
Index: test/Assembler/atomic.ll
===================================================================
--- test/Assembler/atomic.ll
+++ test/Assembler/atomic.ll
@@ -5,14 +5,20 @@
 define void @f(i32* %x) {
   ; CHECK: load atomic i32, i32* %x unordered, align 4
   load atomic i32, i32* %x unordered, align 4
-  ; CHECK: load atomic volatile i32, i32* %x singlethread acquire, align 4
-  load atomic volatile i32, i32* %x singlethread acquire, align 4
+  ; CHECK: load atomic volatile i32, i32* %x syncscope("singlethread") acquire, align 4
+  load atomic volatile i32, i32* %x syncscope("singlethread") acquire, align 4
+  ; CHECK: load atomic volatile i32, i32* %x syncscope("agent") acquire, align 4
+  load atomic volatile i32, i32* %x syncscope("agent") acquire, align 4
   ; CHECK: store atomic i32 3, i32* %x release, align 4
   store atomic i32 3, i32* %x release, align 4
-  ; CHECK: store atomic volatile i32 3, i32* %x singlethread monotonic, align 4
-  store atomic volatile i32 3, i32* %x singlethread monotonic, align 4
-  ; CHECK: cmpxchg i32* %x, i32 1, i32 0 singlethread monotonic monotonic
-  cmpxchg i32* %x, i32 1, i32 0 singlethread monotonic monotonic
+  ; CHECK: store atomic volatile i32 3, i32* %x syncscope("singlethread") monotonic, align 4
+  store atomic volatile i32 3, i32* %x syncscope("singlethread") monotonic, align 4
+  ; CHECK: store atomic volatile i32 3, i32* %x syncscope("workgroup") monotonic, align 4
+  store atomic volatile i32 3, i32* %x syncscope("workgroup") monotonic, align 4
+  ; CHECK: cmpxchg i32* %x, i32 1, i32 0 syncscope("singlethread") monotonic monotonic
+  cmpxchg i32* %x, i32 1, i32 0 syncscope("singlethread") monotonic monotonic
+  ; CHECK: cmpxchg i32* %x, i32 1, i32 0 syncscope("workitem") monotonic monotonic
+  cmpxchg i32* %x, i32 1, i32 0 syncscope("workitem") monotonic monotonic
   ; CHECK: cmpxchg volatile i32* %x, i32 0, i32 1 acq_rel acquire
   cmpxchg volatile i32* %x, i32 0, i32 1 acq_rel acquire
   ; CHECK: cmpxchg i32* %x, i32 42, i32 0 acq_rel monotonic
@@ -23,9 +29,13 @@
   atomicrmw add i32* %x, i32 10 seq_cst
   ; CHECK: atomicrmw volatile xchg  i32* %x, i32 10 monotonic
   atomicrmw volatile xchg i32* %x, i32 10 monotonic
-  ; CHECK: fence singlethread release
-  fence singlethread release
+  ; CHECK: atomicrmw volatile xchg  i32* %x, i32 10 syncscope("image") monotonic
+  atomicrmw volatile xchg i32* %x, i32 10 syncscope("image") monotonic
+  ; CHECK: fence syncscope("singlethread") release
+  fence syncscope("singlethread") release
   ; CHECK: fence seq_cst
   fence seq_cst
+  ; CHECK: fence syncscope("device") seq_cst
+  fence syncscope("device") seq_cst
   ret void
 }
Index: test/Bitcode/atomic-no-syncscope.ll
===================================================================
--- /dev/null
+++ test/Bitcode/atomic-no-syncscope.ll
@@ -0,0 +1,14 @@
+; RUN: llvm-dis -o - %s.bc | FileCheck %s
+
+; CHECK: load atomic i32, i32* %x unordered, align 4
+; CHECK: load atomic volatile i32, i32* %x syncscope("singlethread") acquire, align 4
+; CHECK: store atomic i32 3, i32* %x release, align 4
+; CHECK: store atomic volatile i32 3, i32* %x syncscope("singlethread") monotonic, align 4
+; CHECK: cmpxchg i32* %x, i32 1, i32 0 syncscope("singlethread") monotonic monotonic
+; CHECK: cmpxchg volatile i32* %x, i32 0, i32 1 acq_rel acquire
+; CHECK: cmpxchg i32* %x, i32 42, i32 0 acq_rel monotonic
+; CHECK: cmpxchg weak i32* %x, i32 13, i32 0 seq_cst monotonic
+; CHECK: atomicrmw add i32* %x, i32 10 seq_cst
+; CHECK: atomicrmw volatile xchg  i32* %x, i32 10 monotonic
+; CHECK: fence syncscope("singlethread") release
+; CHECK: fence seq_cst
Index: test/Bitcode/atomic.ll
===================================================================
--- test/Bitcode/atomic.ll
+++ test/Bitcode/atomic.ll
@@ -11,8 +11,8 @@
   cmpxchg weak i32* %addr, i32 %desired, i32 %new acq_rel acquire
   ; CHECK: cmpxchg weak i32* %addr, i32 %desired, i32 %new acq_rel acquire
 
-  cmpxchg weak volatile i32* %addr, i32 %desired, i32 %new singlethread release monotonic
-  ; CHECK: cmpxchg weak volatile i32* %addr, i32 %desired, i32 %new singlethread release monotonic
+  cmpxchg weak volatile i32* %addr, i32 %desired, i32 %new syncscope("singlethread") release monotonic
+  ; CHECK: cmpxchg weak volatile i32* %addr, i32 %desired, i32 %new syncscope("singlethread") release monotonic
 
   ret void
 }
Index: test/Bitcode/compatibility-3.6.ll
===================================================================
--- test/Bitcode/compatibility-3.6.ll
+++ test/Bitcode/compatibility-3.6.ll
@@ -551,8 +551,8 @@
   ; CHECK: %cmpxchg.5 = cmpxchg weak i32* %word, i32 0, i32 9 seq_cst monotonic
   %cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
   ; CHECK: %cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
-  %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
-  ; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
+  %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
+  ; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
   %atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
   ; CHECK: %atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
   %atomicrmw.add = atomicrmw add i32* %word, i32 13 monotonic
@@ -571,33 +571,33 @@
   ; CHECK: %atomicrmw.max = atomicrmw max i32* %word, i32 19 monotonic
   %atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
   ; CHECK: %atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
-  %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
-  ; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
-  %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
-  ; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
+  %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
+  ; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
+  %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
+  ; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
   fence acquire
   ; CHECK: fence acquire
   fence release
   ; CHECK: fence release
   fence acq_rel
   ; CHECK: fence acq_rel
-  fence singlethread seq_cst
-  ; CHECK: fence singlethread seq_cst
+  fence syncscope("singlethread") seq_cst
+  ; CHECK: fence syncscope("singlethread") seq_cst
 
   ; XXX: The parser spits out the load type here.
   %ld.1 = load atomic i32* %word monotonic, align 4
   ; CHECK: %ld.1 = load atomic i32, i32* %word monotonic, align 4
   %ld.2 = load atomic volatile i32* %word acquire, align 8
   ; CHECK: %ld.2 = load atomic volatile i32, i32* %word acquire, align 8
-  %ld.3 = load atomic volatile i32* %word singlethread seq_cst, align 16
-  ; CHECK: %ld.3 = load atomic volatile i32, i32* %word singlethread seq_cst, align 16
+  %ld.3 = load atomic volatile i32* %word syncscope("singlethread") seq_cst, align 16
+  ; CHECK: %ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16
 
   store atomic i32 23, i32* %word monotonic, align 4
   ; CHECK: store atomic i32 23, i32* %word monotonic, align 4
   store atomic volatile i32 24, i32* %word monotonic, align 4
   ; CHECK: store atomic volatile i32 24, i32* %word monotonic, align 4
-  store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
-  ; CHECK: store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
+  store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
+  ; CHECK: store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
   ret void
 }
 
Index: test/Bitcode/compatibility-3.7.ll
===================================================================
--- test/Bitcode/compatibility-3.7.ll
+++ test/Bitcode/compatibility-3.7.ll
@@ -596,8 +596,8 @@
   ; CHECK: %cmpxchg.5 = cmpxchg weak i32* %word, i32 0, i32 9 seq_cst monotonic
   %cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
   ; CHECK: %cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
-  %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
-  ; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
+  %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
+  ; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
   %atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
   ; CHECK: %atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
   %atomicrmw.add = atomicrmw add i32* %word, i32 13 monotonic
@@ -616,32 +616,32 @@
   ; CHECK: %atomicrmw.max = atomicrmw max i32* %word, i32 19 monotonic
   %atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
   ; CHECK: %atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
-  %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
-  ; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
-  %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
-  ; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
+  %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
+  ; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
+  %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
+  ; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
   fence acquire
   ; CHECK: fence acquire
   fence release
   ; CHECK: fence release
   fence acq_rel
   ; CHECK: fence acq_rel
-  fence singlethread seq_cst
-  ; CHECK: fence singlethread seq_cst
+  fence syncscope("singlethread") seq_cst
+  ; CHECK: fence syncscope("singlethread") seq_cst
 
   %ld.1 = load atomic i32, i32* %word monotonic, align 4
   ; CHECK: %ld.1 = load atomic i32, i32* %word monotonic, align 4
   %ld.2 = load atomic volatile i32, i32* %word acquire, align 8
   ; CHECK: %ld.2 = load atomic volatile i32, i32* %word acquire, align 8
-  %ld.3 = load atomic volatile i32, i32* %word singlethread seq_cst, align 16
-  ; CHECK: %ld.3 = load atomic volatile i32, i32* %word singlethread seq_cst, align 16
+  %ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16
+  ; CHECK: %ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16
 
   store atomic i32 23, i32* %word monotonic, align 4
   ; CHECK: store atomic i32 23, i32* %word monotonic, align 4
   store atomic volatile i32 24, i32* %word monotonic, align 4
   ; CHECK: store atomic volatile i32 24, i32* %word monotonic, align 4
-  store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
-  ; CHECK: store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
+  store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
+  ; CHECK: store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
   ret void
 }
 
Index: test/Bitcode/compatibility-3.8.ll
===================================================================
--- test/Bitcode/compatibility-3.8.ll
+++ test/Bitcode/compatibility-3.8.ll
@@ -627,8 +627,8 @@
   ; CHECK: %cmpxchg.5 = cmpxchg weak i32* %word, i32 0, i32 9 seq_cst monotonic
   %cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
   ; CHECK: %cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
-  %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
-  ; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
+  %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
+  ; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
   %atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
   ; CHECK: %atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
   %atomicrmw.add = atomicrmw add i32* %word, i32 13 monotonic
@@ -647,32 +647,32 @@
   ; CHECK: %atomicrmw.max = atomicrmw max i32* %word, i32 19 monotonic
   %atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
   ; CHECK: %atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
-  %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
-  ; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
-  %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
-  ; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
+  %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
+  ; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
+  %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
+  ; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
   fence acquire
   ; CHECK: fence acquire
   fence release
   ; CHECK: fence release
   fence acq_rel
   ; CHECK: fence acq_rel
-  fence singlethread seq_cst
-  ; CHECK: fence singlethread seq_cst
+  fence syncscope("singlethread") seq_cst
+  ; CHECK: fence syncscope("singlethread") seq_cst
 
   %ld.1 = load atomic i32, i32* %word monotonic, align 4
   ; CHECK: %ld.1 = load atomic i32, i32* %word monotonic, align 4
   %ld.2 = load atomic volatile i32, i32* %word acquire, align 8
   ; CHECK: %ld.2 = load atomic volatile i32, i32* %word acquire, align 8
-  %ld.3 = load atomic volatile i32, i32* %word singlethread seq_cst, align 16
-  ; CHECK: %ld.3 = load atomic volatile i32, i32* %word singlethread seq_cst, align 16
+  %ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16
+  ; CHECK: %ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16
 
   store atomic i32 23, i32* %word monotonic, align 4
   ; CHECK: store atomic i32 23, i32* %word monotonic, align 4
   store atomic volatile i32 24, i32* %word monotonic, align 4
   ; CHECK: store atomic volatile i32 24, i32* %word monotonic, align 4
-  store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
-  ; CHECK: store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
+  store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
+  ; CHECK: store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
   ret void
 }
 
Index: test/Bitcode/compatibility-3.9.ll
===================================================================
--- test/Bitcode/compatibility-3.9.ll
+++ test/Bitcode/compatibility-3.9.ll
@@ -698,8 +698,8 @@
   ; CHECK: %cmpxchg.5 = cmpxchg weak i32* %word, i32 0, i32 9 seq_cst monotonic
   %cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
   ; CHECK: %cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
-  %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
-  ; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
+  %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
+  ; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
   %atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
   ; CHECK: %atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
   %atomicrmw.add = atomicrmw add i32* %word, i32 13 monotonic
@@ -718,32 +718,32 @@
   ; CHECK: %atomicrmw.max = atomicrmw max i32* %word, i32 19 monotonic
   %atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
   ; CHECK: %atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
-  %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
-  ; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
-  %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
-  ; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
+  %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
+  ; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
+  %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
+  ; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
   fence acquire
   ; CHECK: fence acquire
   fence release
   ; CHECK: fence release
   fence acq_rel
   ; CHECK: fence acq_rel
-  fence singlethread seq_cst
-  ; CHECK: fence singlethread seq_cst
+  fence syncscope("singlethread") seq_cst
+  ; CHECK: fence syncscope("singlethread") seq_cst
 
   %ld.1 = load atomic i32, i32* %word monotonic, align 4
   ; CHECK: %ld.1 = load atomic i32, i32* %word monotonic, align 4
   %ld.2 = load atomic volatile i32, i32* %word acquire, align 8
   ; CHECK: %ld.2 = load atomic volatile i32, i32* %word acquire, align 8
-  %ld.3 = load atomic volatile i32, i32* %word singlethread seq_cst, align 16
-  ; CHECK: %ld.3 = load atomic volatile i32, i32* %word singlethread seq_cst, align 16
+  %ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16
+  ; CHECK: %ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16
 
   store atomic i32 23, i32* %word monotonic, align 4
   ; CHECK: store atomic i32 23, i32* %word monotonic, align 4
   store atomic volatile i32 24, i32* %word monotonic, align 4
   ; CHECK: store atomic volatile i32 24, i32* %word monotonic, align 4
-  store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
-  ; CHECK: store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
+  store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
+  ; CHECK: store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
   ret void
 }
 
Index: test/Bitcode/compatibility-4.0.ll
===================================================================
--- test/Bitcode/compatibility-4.0.ll
+++ test/Bitcode/compatibility-4.0.ll
@@ -698,8 +698,8 @@
   ; CHECK: %cmpxchg.5 = cmpxchg weak i32* %word, i32 0, i32 9 seq_cst monotonic
   %cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
   ; CHECK: %cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
-  %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
-  ; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
+  %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
+  ; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
   %atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
   ; CHECK: %atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
   %atomicrmw.add = atomicrmw add i32* %word, i32 13 monotonic
@@ -718,32 +718,32 @@
   ; CHECK: %atomicrmw.max = atomicrmw max i32* %word, i32 19 monotonic
   %atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
   ; CHECK: %atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
-  %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
-  ; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
-  %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
-  ; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
+  %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
+  ; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
+  %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
+  ; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
   fence acquire
   ; CHECK: fence acquire
   fence release
   ; CHECK: fence release
   fence acq_rel
   ; CHECK: fence acq_rel
-  fence singlethread seq_cst
-  ; CHECK: fence singlethread seq_cst
+  fence syncscope("singlethread") seq_cst
+  ; CHECK: fence syncscope("singlethread") seq_cst
 
   %ld.1 = load atomic i32, i32* %word monotonic, align 4
   ; CHECK: %ld.1 = load atomic i32, i32* %word monotonic, align 4
   %ld.2 = load atomic volatile i32, i32* %word acquire, align 8
   ; CHECK: %ld.2 = load atomic volatile i32, i32* %word acquire, align 8
-  %ld.3 = load atomic volatile i32, i32* %word singlethread seq_cst, align 16
-  ; CHECK: %ld.3 = load atomic volatile i32, i32* %word singlethread seq_cst, align 16
+  %ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16
+  ; CHECK: %ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16
 
   store atomic i32 23, i32* %word monotonic, align 4
   ; CHECK: store atomic i32 23, i32* %word monotonic, align 4
   store atomic volatile i32 24, i32* %word monotonic, align 4
   ; CHECK: store atomic volatile i32 24, i32* %word monotonic, align 4
-  store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
-  ; CHECK: store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
+  store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
+  ; CHECK: store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
   ret void
 }
 
Index: test/Bitcode/compatibility.ll
===================================================================
--- test/Bitcode/compatibility.ll
+++ test/Bitcode/compatibility.ll
@@ -705,8 +705,8 @@
   ; CHECK: %cmpxchg.5 = cmpxchg weak i32* %word, i32 0, i32 9 seq_cst monotonic
   %cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
   ; CHECK: %cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
-  %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
-  ; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
+  %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
+  ; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
   %atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
   ; CHECK: %atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
   %atomicrmw.add = atomicrmw add i32* %word, i32 13 monotonic
@@ -725,32 +725,32 @@
   ; CHECK: %atomicrmw.max = atomicrmw max i32* %word, i32 19 monotonic
   %atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
   ; CHECK: %atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
-  %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
-  ; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
-  %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
-  ; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
+  %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
+  ; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
+  %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
+  ; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
   fence acquire
   ; CHECK: fence acquire
   fence release
   ; CHECK: fence release
   fence acq_rel
   ; CHECK: fence acq_rel
-  fence singlethread seq_cst
-  ; CHECK: fence singlethread seq_cst
+  fence syncscope("singlethread") seq_cst
+  ; CHECK: fence syncscope("singlethread") seq_cst
 
   %ld.1 = load atomic i32, i32* %word monotonic, align 4
   ; CHECK: %ld.1 = load atomic i32, i32* %word monotonic, align 4
   %ld.2 = load atomic volatile i32, i32* %word acquire, align 8
   ; CHECK: %ld.2 = load atomic volatile i32, i32* %word acquire, align 8
-  %ld.3 = load atomic volatile i32, i32* %word singlethread seq_cst, align 16
-  ; CHECK: %ld.3 = load atomic volatile i32, i32* %word singlethread seq_cst, align 16
+  %ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16
+  ; CHECK: %ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16
 
   store atomic i32 23, i32* %word monotonic, align 4
   ; CHECK: store atomic i32 23, i32* %word monotonic, align 4
   store atomic volatile i32 24, i32* %word monotonic, align 4
   ; CHECK: store atomic volatile i32 24, i32* %word monotonic, align 4
-  store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
-  ; CHECK: store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
+  store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
+  ; CHECK: store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
   ret void
 }
 
Index: test/Bitcode/memInstructions.3.2.ll
===================================================================
--- test/Bitcode/memInstructions.3.2.ll
+++ test/Bitcode/memInstructions.3.2.ll
@@ -107,29 +107,29 @@
 ; CHECK-NEXT: %res8 = load atomic volatile i8, i8* %ptr1 seq_cst, align 1
   %res8 = load atomic volatile i8, i8* %ptr1 seq_cst, align 1
 
-; CHECK-NEXT: %res9 = load atomic i8, i8* %ptr1 singlethread unordered, align 1
-  %res9 = load atomic i8, i8* %ptr1 singlethread unordered, align 1
+; CHECK-NEXT: %res9 = load atomic i8, i8* %ptr1 syncscope("singlethread") unordered, align 1
+  %res9 = load atomic i8, i8* %ptr1 syncscope("singlethread") unordered, align 1
 
-; CHECK-NEXT: %res10 = load atomic i8, i8* %ptr1 singlethread monotonic, align 1
-  %res10 = load atomic i8, i8* %ptr1 singlethread monotonic, align 1
+; CHECK-NEXT: %res10 = load atomic i8, i8* %ptr1 syncscope("singlethread") monotonic, align 1
+  %res10 = load atomic i8, i8* %ptr1 syncscope("singlethread") monotonic, align 1
 
-; CHECK-NEXT: %res11 = load atomic i8, i8* %ptr1 singlethread acquire, align 1
-  %res11 = load atomic i8, i8* %ptr1 singlethread acquire, align 1
+; CHECK-NEXT: %res11 = load atomic i8, i8* %ptr1 syncscope("singlethread") acquire, align 1
+  %res11 = load atomic i8, i8* %ptr1 syncscope("singlethread") acquire, align 1
 
-; CHECK-NEXT: %res12 = load atomic i8, i8* %ptr1 singlethread seq_cst, align 1
-  %res12 = load atomic i8, i8* %ptr1 singlethread seq_cst, align 1
+; CHECK-NEXT: %res12 = load atomic i8, i8* %ptr1 syncscope("singlethread") seq_cst, align 1
+  %res12 = load atomic i8, i8* %ptr1 syncscope("singlethread") seq_cst, align 1
 
-; CHECK-NEXT: %res13 = load atomic volatile i8, i8* %ptr1 singlethread unordered, align 1
-  %res13 = load atomic volatile i8, i8* %ptr1 singlethread unordered, align 1
+; CHECK-NEXT: %res13 = load atomic volatile i8, i8* %ptr1 syncscope("singlethread") unordered, align 1
+  %res13 = load atomic volatile i8, i8* %ptr1 syncscope("singlethread") unordered, align 1
 
-; CHECK-NEXT: %res14 = load atomic volatile i8, i8* %ptr1 singlethread monotonic, align 1
-  %res14 = load atomic volatile i8, i8* %ptr1 singlethread monotonic, align 1
+; CHECK-NEXT: %res14 = load atomic volatile i8, i8* %ptr1 syncscope("singlethread") monotonic, align 1
+  %res14 = load atomic volatile i8, i8* %ptr1 syncscope("singlethread") monotonic, align 1
 
-; CHECK-NEXT: %res15 = load atomic volatile i8, i8* %ptr1 singlethread acquire, align 1
-  %res15 = load atomic volatile i8, i8* %ptr1 singlethread acquire, align 1
+; CHECK-NEXT: %res15 = load atomic volatile i8, i8* %ptr1 syncscope("singlethread") acquire, align 1
+  %res15 = load atomic volatile i8, i8* %ptr1 syncscope("singlethread") acquire, align 1
 
-; CHECK-NEXT: %res16 = load atomic volatile i8, i8* %ptr1 singlethread seq_cst, align 1
-  %res16 = load atomic volatile i8, i8* %ptr1 singlethread seq_cst, align 1
+; CHECK-NEXT: %res16 = load atomic volatile i8, i8* %ptr1 syncscope("singlethread") seq_cst, align 1
+  %res16 = load atomic volatile i8, i8* %ptr1 syncscope("singlethread") seq_cst, align 1
 
   ret void
 }
@@ -193,29 +193,29 @@
 ; CHECK-NEXT: store atomic volatile i8 2, i8* %ptr1 seq_cst, align 1
   store atomic volatile i8 2, i8* %ptr1 seq_cst, align 1
 
-; CHECK-NEXT: store atomic i8 2, i8* %ptr1 singlethread unordered, align 1
-  store atomic i8 2, i8* %ptr1 singlethread unordered, align 1
+; CHECK-NEXT: store atomic i8 2, i8* %ptr1 syncscope("singlethread") unordered, align 1
+  store atomic i8 2, i8* %ptr1 syncscope("singlethread") unordered, align 1
 
-; CHECK-NEXT: store atomic i8 2, i8* %ptr1 singlethread monotonic, align 1
-  store atomic i8 2, i8* %ptr1 singlethread monotonic, align 1
+; CHECK-NEXT: store atomic i8 2, i8* %ptr1 syncscope("singlethread") monotonic, align 1
+  store atomic i8 2, i8* %ptr1 syncscope("singlethread") monotonic, align 1
 
-; CHECK-NEXT: store atomic i8 2, i8* %ptr1 singlethread release, align 1
-  store atomic i8 2, i8* %ptr1 singlethread release, align 1
+; CHECK-NEXT: store atomic i8 2, i8* %ptr1 syncscope("singlethread") release, align 1
+  store atomic i8 2, i8* %ptr1 syncscope("singlethread") release, align 1
 
-; CHECK-NEXT: store atomic i8 2, i8* %ptr1 singlethread seq_cst, align 1
-  store atomic i8 2, i8* %ptr1 singlethread seq_cst, align 1
+; CHECK-NEXT: store atomic i8 2, i8* %ptr1 syncscope("singlethread") seq_cst, align 1
+  store atomic i8 2, i8* %ptr1 syncscope("singlethread") seq_cst, align 1
 
-; CHECK-NEXT: store atomic volatile i8 2, i8* %ptr1 singlethread unordered, align 1
-  store atomic volatile i8 2, i8* %ptr1 singlethread unordered, align 1
+; CHECK-NEXT: store atomic volatile i8 2, i8* %ptr1 syncscope("singlethread") unordered, align 1
+  store atomic volatile i8 2, i8* %ptr1 syncscope("singlethread") unordered, align 1
 
-; CHECK-NEXT: store atomic volatile i8 2, i8* %ptr1 singlethread monotonic, align 1
-  store atomic volatile i8 2, i8* %ptr1 singlethread monotonic, align 1
+; CHECK-NEXT: store atomic volatile i8 2, i8* %ptr1 syncscope("singlethread") monotonic, align 1
+  store atomic volatile i8 2, i8* %ptr1 syncscope("singlethread") monotonic, align 1
 
-; CHECK-NEXT: store atomic volatile i8 2, i8* %ptr1 singlethread release, align 1
-  store atomic volatile i8 2, i8* %ptr1 singlethread release, align 1
+; CHECK-NEXT: store atomic volatile i8 2, i8* %ptr1 syncscope("singlethread") release, align 1
+  store atomic volatile i8 2, i8* %ptr1 syncscope("singlethread") release, align 1
 
-; CHECK-NEXT: store atomic volatile i8 2, i8* %ptr1 singlethread seq_cst, align 1
-  store atomic volatile i8 2, i8* %ptr1 singlethread seq_cst, align 1
+; CHECK-NEXT: store atomic volatile i8 2, i8* %ptr1 syncscope("singlethread") seq_cst, align 1
+  store atomic volatile i8 2, i8* %ptr1 syncscope("singlethread") seq_cst, align 1
 
   ret void
 }
@@ -232,13 +232,13 @@
 ; CHECK-NEXT: %res2 = extractvalue { i32, i1 } [[TMP]], 0
   %res2 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new monotonic monotonic
 
-; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new singlethread monotonic monotonic
+; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") monotonic monotonic
 ; CHECK-NEXT: %res3 = extractvalue { i32, i1 } [[TMP]], 0
-  %res3 = cmpxchg i32* %ptr, i32 %cmp, i32 %new singlethread monotonic monotonic
+  %res3 = cmpxchg i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") monotonic monotonic
 
-; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new singlethread monotonic monotonic
+; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") monotonic monotonic
 ; CHECK-NEXT: %res4 = extractvalue { i32, i1 } [[TMP]], 0
-  %res4 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new singlethread monotonic monotonic
+  %res4 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") monotonic monotonic
 
 
 ; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new acquire acquire
@@ -249,13 +249,13 @@
 ; CHECK-NEXT: %res6 = extractvalue { i32, i1 } [[TMP]], 0
   %res6 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new acquire acquire
 
-; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new singlethread acquire acquire
+; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") acquire acquire
 ; CHECK-NEXT: %res7 = extractvalue { i32, i1 } [[TMP]], 0
-  %res7 = cmpxchg i32* %ptr, i32 %cmp, i32 %new singlethread acquire acquire
+  %res7 = cmpxchg i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") acquire acquire
 
-; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new singlethread acquire acquire
+; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") acquire acquire
 ; CHECK-NEXT: %res8 = extractvalue { i32, i1 } [[TMP]], 0
-  %res8 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new singlethread acquire acquire
+  %res8 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") acquire acquire
 
 
 ; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new release monotonic
@@ -266,13 +266,13 @@
 ; CHECK-NEXT: %res10 = extractvalue { i32, i1 } [[TMP]], 0
   %res10 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new release monotonic
 
-; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new singlethread release monotonic
+; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") release monotonic
 ; CHECK-NEXT: %res11 = extractvalue { i32, i1 } [[TMP]], 0
-  %res11 = cmpxchg i32* %ptr, i32 %cmp, i32 %new singlethread release monotonic
+  %res11 = cmpxchg i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") release monotonic
 
-; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new singlethread release monotonic
+; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") release monotonic
 ; CHECK-NEXT: %res12 = extractvalue { i32, i1 } [[TMP]], 0
-  %res12 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new singlethread release monotonic
+  %res12 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") release monotonic
 
 
 ; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new acq_rel acquire
@@ -283,13 +283,13 @@
 ; CHECK-NEXT: %res14 = extractvalue { i32, i1 } [[TMP]], 0
   %res14 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new acq_rel acquire
 
-; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new singlethread acq_rel acquire
+; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") acq_rel acquire
 ; CHECK-NEXT: %res15 = extractvalue { i32, i1 } [[TMP]], 0
-  %res15 = cmpxchg i32* %ptr, i32 %cmp, i32 %new singlethread acq_rel acquire
+  %res15 = cmpxchg i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") acq_rel acquire
 
-; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new singlethread acq_rel acquire
+; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") acq_rel acquire
 ; CHECK-NEXT: %res16 = extractvalue { i32, i1 } [[TMP]], 0
-  %res16 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new singlethread acq_rel acquire
+  %res16 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") acq_rel acquire
 
 
 ; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new seq_cst seq_cst
@@ -300,13 +300,13 @@
 ; CHECK-NEXT: %res18 = extractvalue { i32, i1 } [[TMP]], 0
   %res18 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new seq_cst seq_cst
 
-; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new singlethread seq_cst seq_cst
+; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") seq_cst seq_cst
 ; CHECK-NEXT: %res19 = extractvalue { i32, i1 } [[TMP]], 0
-  %res19 = cmpxchg i32* %ptr, i32 %cmp, i32 %new singlethread seq_cst seq_cst
+  %res19 = cmpxchg i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") seq_cst seq_cst
 
-; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new singlethread seq_cst seq_cst
+; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") seq_cst seq_cst
 ; CHECK-NEXT: %res20 = extractvalue { i32, i1 } [[TMP]], 0
-  %res20 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new singlethread seq_cst seq_cst
+  %res20 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") seq_cst seq_cst
 
   ret void
 }
Index: test/CodeGen/AArch64/GlobalISel/arm64-irtranslator.ll
===================================================================
--- test/CodeGen/AArch64/GlobalISel/arm64-irtranslator.ll
+++ test/CodeGen/AArch64/GlobalISel/arm64-irtranslator.ll
@@ -1262,16 +1262,16 @@
 ; CHECK: G_STORE [[V0]](s8), [[ADDR]](p0) :: (store monotonic 1 into %ir.addr)
 ; CHECK: [[V1:%[0-9]+]](s8) = G_LOAD [[ADDR]](p0) :: (load acquire 1 from %ir.addr)
 ; CHECK: G_STORE [[V1]](s8), [[ADDR]](p0) :: (store release 1 into %ir.addr)
-; CHECK: [[V2:%[0-9]+]](s8) = G_LOAD [[ADDR]](p0) :: (load singlethread seq_cst 1 from %ir.addr)
-; CHECK: G_STORE [[V2]](s8), [[ADDR]](p0) :: (store singlethread monotonic 1 into %ir.addr)
+; CHECK: [[V2:%[0-9]+]](s8) = G_LOAD [[ADDR]](p0) :: (load syncscope(singlethread) seq_cst 1 from %ir.addr)
+; CHECK: G_STORE [[V2]](s8), [[ADDR]](p0) :: (store syncscope(singlethread) monotonic 1 into %ir.addr)
   %v0 = load atomic i8, i8* %addr unordered, align 1
   store atomic i8 %v0, i8* %addr monotonic, align 1
 
   %v1 = load atomic i8, i8* %addr acquire, align 1
   store atomic i8 %v1, i8* %addr release, align 1
 
-  %v2 = load atomic i8, i8* %addr singlethread seq_cst, align 1
-  store atomic i8 %v2, i8* %addr singlethread monotonic, align 1
+  %v2 = load atomic i8, i8* %addr syncscope("singlethread") seq_cst, align 1
+  store atomic i8 %v2, i8* %addr syncscope("singlethread") monotonic, align 1
 
   ret void
 }
Index: test/CodeGen/AArch64/fence-singlethread.ll
===================================================================
--- test/CodeGen/AArch64/fence-singlethread.ll
+++ test/CodeGen/AArch64/fence-singlethread.ll
@@ -16,6 +16,6 @@
 ; IOS: ; COMPILER BARRIER
 ; IOS-NOT: dmb
 
-  fence singlethread seq_cst
+  fence syncscope("singlethread") seq_cst
   ret void
 }
Index: test/CodeGen/ARM/fence-singlethread.ll
===================================================================
--- test/CodeGen/ARM/fence-singlethread.ll
+++ test/CodeGen/ARM/fence-singlethread.ll
@@ -11,6 +11,6 @@
 ; CHECK: @ COMPILER BARRIER
 ; CHECK-NOT: dmb
 
-  fence singlethread seq_cst
+  fence syncscope("singlethread") seq_cst
   ret void
 }
Index: test/CodeGen/MIR/AArch64/atomic-memoperands.mir
===================================================================
--- test/CodeGen/MIR/AArch64/atomic-memoperands.mir
+++ test/CodeGen/MIR/AArch64/atomic-memoperands.mir
@@ -14,7 +14,7 @@
 # CHECK: %3(s16) = G_LOAD %0(p0) :: (load acquire 2)
 # CHECK: G_STORE %3(s16), %0(p0) :: (store release 2)
 # CHECK: G_STORE %2(s32), %0(p0) :: (store acq_rel 4)
-# CHECK: G_STORE %1(s64), %0(p0) :: (store singlethread seq_cst 8)
+# CHECK: G_STORE %1(s64), %0(p0) :: (store syncscope(singlethread) seq_cst 8)
 name:            atomic_memoperands
 body: |
   bb.0:
@@ -25,6 +25,6 @@
     %3:_(s16) = G_LOAD %0(p0) :: (load acquire 2)
     G_STORE %3(s16), %0(p0) :: (store release 2)
     G_STORE %2(s32), %0(p0) :: (store acq_rel 4)
-    G_STORE %1(s64), %0(p0) :: (store singlethread seq_cst 8)
+    G_STORE %1(s64), %0(p0) :: (store syncscope(singlethread) seq_cst 8)
     RET_ReallyLR
 ...
Index: test/CodeGen/PowerPC/atomics-regression.ll
===================================================================
--- test/CodeGen/PowerPC/atomics-regression.ll
+++ test/CodeGen/PowerPC/atomics-regression.ll
@@ -354,7 +354,7 @@
 ; PPC64LE:       # BB#0:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  fence singlethread acquire
+  fence syncscope("singlethread") acquire
   ret void
 }
 
@@ -363,7 +363,7 @@
 ; PPC64LE:       # BB#0:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  fence singlethread release
+  fence syncscope("singlethread") release
   ret void
 }
 
@@ -372,7 +372,7 @@
 ; PPC64LE:       # BB#0:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  fence singlethread acq_rel
+  fence syncscope("singlethread") acq_rel
   ret void
 }
 
@@ -381,7 +381,7 @@
 ; PPC64LE:       # BB#0:
 ; PPC64LE-NEXT:    sync
 ; PPC64LE-NEXT:    blr
-  fence singlethread seq_cst
+  fence syncscope("singlethread") seq_cst
   ret void
 }
 
@@ -1257,7 +1257,7 @@
 ; PPC64LE-NEXT:  # BB#3:
 ; PPC64LE-NEXT:    stbcx. 6, 0, 3
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i8* %ptr, i8 %cmp, i8 %val singlethread monotonic monotonic
+  %res = cmpxchg i8* %ptr, i8 %cmp, i8 %val syncscope("singlethread") monotonic monotonic
   ret void
 }
 
@@ -1278,7 +1278,7 @@
 ; PPC64LE-NEXT:    stbcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i8* %ptr, i8 %cmp, i8 %val singlethread acquire monotonic
+  %res = cmpxchg i8* %ptr, i8 %cmp, i8 %val syncscope("singlethread") acquire monotonic
   ret void
 }
 
@@ -1299,7 +1299,7 @@
 ; PPC64LE-NEXT:    stbcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i8* %ptr, i8 %cmp, i8 %val singlethread acquire acquire
+  %res = cmpxchg i8* %ptr, i8 %cmp, i8 %val syncscope("singlethread") acquire acquire
   ret void
 }
 
@@ -1320,7 +1320,7 @@
 ; PPC64LE-NEXT:  # BB#3:
 ; PPC64LE-NEXT:    stbcx. 6, 0, 3
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i8* %ptr, i8 %cmp, i8 %val singlethread release monotonic
+  %res = cmpxchg i8* %ptr, i8 %cmp, i8 %val syncscope("singlethread") release monotonic
   ret void
 }
 
@@ -1341,7 +1341,7 @@
 ; PPC64LE-NEXT:  # BB#3:
 ; PPC64LE-NEXT:    stbcx. 6, 0, 3
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i8* %ptr, i8 %cmp, i8 %val singlethread release acquire
+  %res = cmpxchg i8* %ptr, i8 %cmp, i8 %val syncscope("singlethread") release acquire
   ret void
 }
 
@@ -1363,7 +1363,7 @@
 ; PPC64LE-NEXT:    stbcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i8* %ptr, i8 %cmp, i8 %val singlethread acq_rel monotonic
+  %res = cmpxchg i8* %ptr, i8 %cmp, i8 %val syncscope("singlethread") acq_rel monotonic
   ret void
 }
 
@@ -1385,7 +1385,7 @@
 ; PPC64LE-NEXT:    stbcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i8* %ptr, i8 %cmp, i8 %val singlethread acq_rel acquire
+  %res = cmpxchg i8* %ptr, i8 %cmp, i8 %val syncscope("singlethread") acq_rel acquire
   ret void
 }
 
@@ -1407,7 +1407,7 @@
 ; PPC64LE-NEXT:    stbcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i8* %ptr, i8 %cmp, i8 %val singlethread seq_cst monotonic
+  %res = cmpxchg i8* %ptr, i8 %cmp, i8 %val syncscope("singlethread") seq_cst monotonic
   ret void
 }
 
@@ -1429,7 +1429,7 @@
 ; PPC64LE-NEXT:    stbcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i8* %ptr, i8 %cmp, i8 %val singlethread seq_cst acquire
+  %res = cmpxchg i8* %ptr, i8 %cmp, i8 %val syncscope("singlethread") seq_cst acquire
   ret void
 }
 
@@ -1451,7 +1451,7 @@
 ; PPC64LE-NEXT:    stbcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i8* %ptr, i8 %cmp, i8 %val singlethread seq_cst seq_cst
+  %res = cmpxchg i8* %ptr, i8 %cmp, i8 %val syncscope("singlethread") seq_cst seq_cst
   ret void
 }
 
@@ -1471,7 +1471,7 @@
 ; PPC64LE-NEXT:  # BB#3:
 ; PPC64LE-NEXT:    sthcx. 6, 0, 3
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i16* %ptr, i16 %cmp, i16 %val singlethread monotonic monotonic
+  %res = cmpxchg i16* %ptr, i16 %cmp, i16 %val syncscope("singlethread") monotonic monotonic
   ret void
 }
 
@@ -1492,7 +1492,7 @@
 ; PPC64LE-NEXT:    sthcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i16* %ptr, i16 %cmp, i16 %val singlethread acquire monotonic
+  %res = cmpxchg i16* %ptr, i16 %cmp, i16 %val syncscope("singlethread") acquire monotonic
   ret void
 }
 
@@ -1513,7 +1513,7 @@
 ; PPC64LE-NEXT:    sthcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i16* %ptr, i16 %cmp, i16 %val singlethread acquire acquire
+  %res = cmpxchg i16* %ptr, i16 %cmp, i16 %val syncscope("singlethread") acquire acquire
   ret void
 }
 
@@ -1534,7 +1534,7 @@
 ; PPC64LE-NEXT:  # BB#3:
 ; PPC64LE-NEXT:    sthcx. 6, 0, 3
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i16* %ptr, i16 %cmp, i16 %val singlethread release monotonic
+  %res = cmpxchg i16* %ptr, i16 %cmp, i16 %val syncscope("singlethread") release monotonic
   ret void
 }
 
@@ -1555,7 +1555,7 @@
 ; PPC64LE-NEXT:  # BB#3:
 ; PPC64LE-NEXT:    sthcx. 6, 0, 3
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i16* %ptr, i16 %cmp, i16 %val singlethread release acquire
+  %res = cmpxchg i16* %ptr, i16 %cmp, i16 %val syncscope("singlethread") release acquire
   ret void
 }
 
@@ -1577,7 +1577,7 @@
 ; PPC64LE-NEXT:    sthcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i16* %ptr, i16 %cmp, i16 %val singlethread acq_rel monotonic
+  %res = cmpxchg i16* %ptr, i16 %cmp, i16 %val syncscope("singlethread") acq_rel monotonic
   ret void
 }
 
@@ -1599,7 +1599,7 @@
 ; PPC64LE-NEXT:    sthcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i16* %ptr, i16 %cmp, i16 %val singlethread acq_rel acquire
+  %res = cmpxchg i16* %ptr, i16 %cmp, i16 %val syncscope("singlethread") acq_rel acquire
   ret void
 }
 
@@ -1621,7 +1621,7 @@
 ; PPC64LE-NEXT:    sthcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i16* %ptr, i16 %cmp, i16 %val singlethread seq_cst monotonic
+  %res = cmpxchg i16* %ptr, i16 %cmp, i16 %val syncscope("singlethread") seq_cst monotonic
   ret void
 }
 
@@ -1643,7 +1643,7 @@
 ; PPC64LE-NEXT:    sthcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i16* %ptr, i16 %cmp, i16 %val singlethread seq_cst acquire
+  %res = cmpxchg i16* %ptr, i16 %cmp, i16 %val syncscope("singlethread") seq_cst acquire
   ret void
 }
 
@@ -1665,7 +1665,7 @@
 ; PPC64LE-NEXT:    sthcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i16* %ptr, i16 %cmp, i16 %val singlethread seq_cst seq_cst
+  %res = cmpxchg i16* %ptr, i16 %cmp, i16 %val syncscope("singlethread") seq_cst seq_cst
   ret void
 }
 
@@ -1685,7 +1685,7 @@
 ; PPC64LE-NEXT:  # BB#3:
 ; PPC64LE-NEXT:    stwcx. 6, 0, 3
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i32* %ptr, i32 %cmp, i32 %val singlethread monotonic monotonic
+  %res = cmpxchg i32* %ptr, i32 %cmp, i32 %val syncscope("singlethread") monotonic monotonic
   ret void
 }
 
@@ -1706,7 +1706,7 @@
 ; PPC64LE-NEXT:    stwcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i32* %ptr, i32 %cmp, i32 %val singlethread acquire monotonic
+  %res = cmpxchg i32* %ptr, i32 %cmp, i32 %val syncscope("singlethread") acquire monotonic
   ret void
 }
 
@@ -1727,7 +1727,7 @@
 ; PPC64LE-NEXT:    stwcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i32* %ptr, i32 %cmp, i32 %val singlethread acquire acquire
+  %res = cmpxchg i32* %ptr, i32 %cmp, i32 %val syncscope("singlethread") acquire acquire
   ret void
 }
 
@@ -1748,7 +1748,7 @@
 ; PPC64LE-NEXT:  # BB#3:
 ; PPC64LE-NEXT:    stwcx. 6, 0, 3
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i32* %ptr, i32 %cmp, i32 %val singlethread release monotonic
+  %res = cmpxchg i32* %ptr, i32 %cmp, i32 %val syncscope("singlethread") release monotonic
   ret void
 }
 
@@ -1769,7 +1769,7 @@
 ; PPC64LE-NEXT:  # BB#3:
 ; PPC64LE-NEXT:    stwcx. 6, 0, 3
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i32* %ptr, i32 %cmp, i32 %val singlethread release acquire
+  %res = cmpxchg i32* %ptr, i32 %cmp, i32 %val syncscope("singlethread") release acquire
   ret void
 }
 
@@ -1791,7 +1791,7 @@
 ; PPC64LE-NEXT:    stwcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i32* %ptr, i32 %cmp, i32 %val singlethread acq_rel monotonic
+  %res = cmpxchg i32* %ptr, i32 %cmp, i32 %val syncscope("singlethread") acq_rel monotonic
   ret void
 }
 
@@ -1813,7 +1813,7 @@
 ; PPC64LE-NEXT:    stwcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i32* %ptr, i32 %cmp, i32 %val singlethread acq_rel acquire
+  %res = cmpxchg i32* %ptr, i32 %cmp, i32 %val syncscope("singlethread") acq_rel acquire
   ret void
 }
 
@@ -1835,7 +1835,7 @@
 ; PPC64LE-NEXT:    stwcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i32* %ptr, i32 %cmp, i32 %val singlethread seq_cst monotonic
+  %res = cmpxchg i32* %ptr, i32 %cmp, i32 %val syncscope("singlethread") seq_cst monotonic
   ret void
 }
 
@@ -1857,7 +1857,7 @@
 ; PPC64LE-NEXT:    stwcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i32* %ptr, i32 %cmp, i32 %val singlethread seq_cst acquire
+  %res = cmpxchg i32* %ptr, i32 %cmp, i32 %val syncscope("singlethread") seq_cst acquire
   ret void
 }
 
@@ -1879,7 +1879,7 @@
 ; PPC64LE-NEXT:    stwcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i32* %ptr, i32 %cmp, i32 %val singlethread seq_cst seq_cst
+  %res = cmpxchg i32* %ptr, i32 %cmp, i32 %val syncscope("singlethread") seq_cst seq_cst
   ret void
 }
 
@@ -1899,7 +1899,7 @@
 ; PPC64LE-NEXT:  # BB#3:
 ; PPC64LE-NEXT:    stdcx. 6, 0, 3
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i64* %ptr, i64 %cmp, i64 %val singlethread monotonic monotonic
+  %res = cmpxchg i64* %ptr, i64 %cmp, i64 %val syncscope("singlethread") monotonic monotonic
   ret void
 }
 
@@ -1920,7 +1920,7 @@
 ; PPC64LE-NEXT:    stdcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i64* %ptr, i64 %cmp, i64 %val singlethread acquire monotonic
+  %res = cmpxchg i64* %ptr, i64 %cmp, i64 %val syncscope("singlethread") acquire monotonic
   ret void
 }
 
@@ -1941,7 +1941,7 @@
 ; PPC64LE-NEXT:    stdcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i64* %ptr, i64 %cmp, i64 %val singlethread acquire acquire
+  %res = cmpxchg i64* %ptr, i64 %cmp, i64 %val syncscope("singlethread") acquire acquire
   ret void
 }
 
@@ -1962,7 +1962,7 @@
 ; PPC64LE-NEXT:  # BB#3:
 ; PPC64LE-NEXT:    stdcx. 6, 0, 3
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i64* %ptr, i64 %cmp, i64 %val singlethread release monotonic
+  %res = cmpxchg i64* %ptr, i64 %cmp, i64 %val syncscope("singlethread") release monotonic
   ret void
 }
 
@@ -1983,7 +1983,7 @@
 ; PPC64LE-NEXT:  # BB#3:
 ; PPC64LE-NEXT:    stdcx. 6, 0, 3
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i64* %ptr, i64 %cmp, i64 %val singlethread release acquire
+  %res = cmpxchg i64* %ptr, i64 %cmp, i64 %val syncscope("singlethread") release acquire
   ret void
 }
 
@@ -2005,7 +2005,7 @@
 ; PPC64LE-NEXT:    stdcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i64* %ptr, i64 %cmp, i64 %val singlethread acq_rel monotonic
+  %res = cmpxchg i64* %ptr, i64 %cmp, i64 %val syncscope("singlethread") acq_rel monotonic
   ret void
 }
 
@@ -2027,7 +2027,7 @@
 ; PPC64LE-NEXT:    stdcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i64* %ptr, i64 %cmp, i64 %val singlethread acq_rel acquire
+  %res = cmpxchg i64* %ptr, i64 %cmp, i64 %val syncscope("singlethread") acq_rel acquire
   ret void
 }
 
@@ -2049,7 +2049,7 @@
 ; PPC64LE-NEXT:    stdcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i64* %ptr, i64 %cmp, i64 %val singlethread seq_cst monotonic
+  %res = cmpxchg i64* %ptr, i64 %cmp, i64 %val syncscope("singlethread") seq_cst monotonic
   ret void
 }
 
@@ -2071,7 +2071,7 @@
 ; PPC64LE-NEXT:    stdcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i64* %ptr, i64 %cmp, i64 %val singlethread seq_cst acquire
+  %res = cmpxchg i64* %ptr, i64 %cmp, i64 %val syncscope("singlethread") seq_cst acquire
   ret void
 }
 
@@ -2093,7 +2093,7 @@
 ; PPC64LE-NEXT:    stdcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i64* %ptr, i64 %cmp, i64 %val singlethread seq_cst seq_cst
+  %res = cmpxchg i64* %ptr, i64 %cmp, i64 %val syncscope("singlethread") seq_cst seq_cst
   ret void
 }
 
@@ -5831,7 +5831,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xchg i8* %ptr, i8 %val singlethread monotonic
+  %ret = atomicrmw xchg i8* %ptr, i8 %val syncscope("singlethread") monotonic
   ret i8 %ret
 }
 
@@ -5846,7 +5846,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xchg i8* %ptr, i8 %val singlethread acquire
+  %ret = atomicrmw xchg i8* %ptr, i8 %val syncscope("singlethread") acquire
   ret i8 %ret
 }
 
@@ -5861,7 +5861,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xchg i8* %ptr, i8 %val singlethread release
+  %ret = atomicrmw xchg i8* %ptr, i8 %val syncscope("singlethread") release
   ret i8 %ret
 }
 
@@ -5877,7 +5877,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xchg i8* %ptr, i8 %val singlethread acq_rel
+  %ret = atomicrmw xchg i8* %ptr, i8 %val syncscope("singlethread") acq_rel
   ret i8 %ret
 }
 
@@ -5893,7 +5893,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xchg i8* %ptr, i8 %val singlethread seq_cst
+  %ret = atomicrmw xchg i8* %ptr, i8 %val syncscope("singlethread") seq_cst
   ret i8 %ret
 }
 
@@ -5907,7 +5907,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xchg i16* %ptr, i16 %val singlethread monotonic
+  %ret = atomicrmw xchg i16* %ptr, i16 %val syncscope("singlethread") monotonic
   ret i16 %ret
 }
 
@@ -5922,7 +5922,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xchg i16* %ptr, i16 %val singlethread acquire
+  %ret = atomicrmw xchg i16* %ptr, i16 %val syncscope("singlethread") acquire
   ret i16 %ret
 }
 
@@ -5937,7 +5937,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xchg i16* %ptr, i16 %val singlethread release
+  %ret = atomicrmw xchg i16* %ptr, i16 %val syncscope("singlethread") release
   ret i16 %ret
 }
 
@@ -5953,7 +5953,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xchg i16* %ptr, i16 %val singlethread acq_rel
+  %ret = atomicrmw xchg i16* %ptr, i16 %val syncscope("singlethread") acq_rel
   ret i16 %ret
 }
 
@@ -5969,7 +5969,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xchg i16* %ptr, i16 %val singlethread seq_cst
+  %ret = atomicrmw xchg i16* %ptr, i16 %val syncscope("singlethread") seq_cst
   ret i16 %ret
 }
 
@@ -5983,7 +5983,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xchg i32* %ptr, i32 %val singlethread monotonic
+  %ret = atomicrmw xchg i32* %ptr, i32 %val syncscope("singlethread") monotonic
   ret i32 %ret
 }
 
@@ -5998,7 +5998,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xchg i32* %ptr, i32 %val singlethread acquire
+  %ret = atomicrmw xchg i32* %ptr, i32 %val syncscope("singlethread") acquire
   ret i32 %ret
 }
 
@@ -6013,7 +6013,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xchg i32* %ptr, i32 %val singlethread release
+  %ret = atomicrmw xchg i32* %ptr, i32 %val syncscope("singlethread") release
   ret i32 %ret
 }
 
@@ -6029,7 +6029,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xchg i32* %ptr, i32 %val singlethread acq_rel
+  %ret = atomicrmw xchg i32* %ptr, i32 %val syncscope("singlethread") acq_rel
   ret i32 %ret
 }
 
@@ -6045,7 +6045,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xchg i32* %ptr, i32 %val singlethread seq_cst
+  %ret = atomicrmw xchg i32* %ptr, i32 %val syncscope("singlethread") seq_cst
   ret i32 %ret
 }
 
@@ -6059,7 +6059,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xchg i64* %ptr, i64 %val singlethread monotonic
+  %ret = atomicrmw xchg i64* %ptr, i64 %val syncscope("singlethread") monotonic
   ret i64 %ret
 }
 
@@ -6074,7 +6074,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xchg i64* %ptr, i64 %val singlethread acquire
+  %ret = atomicrmw xchg i64* %ptr, i64 %val syncscope("singlethread") acquire
   ret i64 %ret
 }
 
@@ -6089,7 +6089,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xchg i64* %ptr, i64 %val singlethread release
+  %ret = atomicrmw xchg i64* %ptr, i64 %val syncscope("singlethread") release
   ret i64 %ret
 }
 
@@ -6105,7 +6105,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xchg i64* %ptr, i64 %val singlethread acq_rel
+  %ret = atomicrmw xchg i64* %ptr, i64 %val syncscope("singlethread") acq_rel
   ret i64 %ret
 }
 
@@ -6121,7 +6121,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xchg i64* %ptr, i64 %val singlethread seq_cst
+  %ret = atomicrmw xchg i64* %ptr, i64 %val syncscope("singlethread") seq_cst
   ret i64 %ret
 }
 
@@ -6136,7 +6136,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw add i8* %ptr, i8 %val singlethread monotonic
+  %ret = atomicrmw add i8* %ptr, i8 %val syncscope("singlethread") monotonic
   ret i8 %ret
 }
 
@@ -6152,7 +6152,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw add i8* %ptr, i8 %val singlethread acquire
+  %ret = atomicrmw add i8* %ptr, i8 %val syncscope("singlethread") acquire
   ret i8 %ret
 }
 
@@ -6168,7 +6168,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw add i8* %ptr, i8 %val singlethread release
+  %ret = atomicrmw add i8* %ptr, i8 %val syncscope("singlethread") release
   ret i8 %ret
 }
 
@@ -6185,7 +6185,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw add i8* %ptr, i8 %val singlethread acq_rel
+  %ret = atomicrmw add i8* %ptr, i8 %val syncscope("singlethread") acq_rel
   ret i8 %ret
 }
 
@@ -6202,7 +6202,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw add i8* %ptr, i8 %val singlethread seq_cst
+  %ret = atomicrmw add i8* %ptr, i8 %val syncscope("singlethread") seq_cst
   ret i8 %ret
 }
 
@@ -6217,7 +6217,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw add i16* %ptr, i16 %val singlethread monotonic
+  %ret = atomicrmw add i16* %ptr, i16 %val syncscope("singlethread") monotonic
   ret i16 %ret
 }
 
@@ -6233,7 +6233,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw add i16* %ptr, i16 %val singlethread acquire
+  %ret = atomicrmw add i16* %ptr, i16 %val syncscope("singlethread") acquire
   ret i16 %ret
 }
 
@@ -6249,7 +6249,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw add i16* %ptr, i16 %val singlethread release
+  %ret = atomicrmw add i16* %ptr, i16 %val syncscope("singlethread") release
   ret i16 %ret
 }
 
@@ -6266,7 +6266,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw add i16* %ptr, i16 %val singlethread acq_rel
+  %ret = atomicrmw add i16* %ptr, i16 %val syncscope("singlethread") acq_rel
   ret i16 %ret
 }
 
@@ -6283,7 +6283,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw add i16* %ptr, i16 %val singlethread seq_cst
+  %ret = atomicrmw add i16* %ptr, i16 %val syncscope("singlethread") seq_cst
   ret i16 %ret
 }
 
@@ -6298,7 +6298,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw add i32* %ptr, i32 %val singlethread monotonic
+  %ret = atomicrmw add i32* %ptr, i32 %val syncscope("singlethread") monotonic
   ret i32 %ret
 }
 
@@ -6314,7 +6314,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw add i32* %ptr, i32 %val singlethread acquire
+  %ret = atomicrmw add i32* %ptr, i32 %val syncscope("singlethread") acquire
   ret i32 %ret
 }
 
@@ -6330,7 +6330,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw add i32* %ptr, i32 %val singlethread release
+  %ret = atomicrmw add i32* %ptr, i32 %val syncscope("singlethread") release
   ret i32 %ret
 }
 
@@ -6347,7 +6347,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw add i32* %ptr, i32 %val singlethread acq_rel
+  %ret = atomicrmw add i32* %ptr, i32 %val syncscope("singlethread") acq_rel
   ret i32 %ret
 }
 
@@ -6364,7 +6364,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw add i32* %ptr, i32 %val singlethread seq_cst
+  %ret = atomicrmw add i32* %ptr, i32 %val syncscope("singlethread") seq_cst
   ret i32 %ret
 }
 
@@ -6379,7 +6379,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw add i64* %ptr, i64 %val singlethread monotonic
+  %ret = atomicrmw add i64* %ptr, i64 %val syncscope("singlethread") monotonic
   ret i64 %ret
 }
 
@@ -6395,7 +6395,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw add i64* %ptr, i64 %val singlethread acquire
+  %ret = atomicrmw add i64* %ptr, i64 %val syncscope("singlethread") acquire
   ret i64 %ret
 }
 
@@ -6411,7 +6411,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw add i64* %ptr, i64 %val singlethread release
+  %ret = atomicrmw add i64* %ptr, i64 %val syncscope("singlethread") release
   ret i64 %ret
 }
 
@@ -6428,7 +6428,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw add i64* %ptr, i64 %val singlethread acq_rel
+  %ret = atomicrmw add i64* %ptr, i64 %val syncscope("singlethread") acq_rel
   ret i64 %ret
 }
 
@@ -6445,7 +6445,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw add i64* %ptr, i64 %val singlethread seq_cst
+  %ret = atomicrmw add i64* %ptr, i64 %val syncscope("singlethread") seq_cst
   ret i64 %ret
 }
 
@@ -6460,7 +6460,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw sub i8* %ptr, i8 %val singlethread monotonic
+  %ret = atomicrmw sub i8* %ptr, i8 %val syncscope("singlethread") monotonic
   ret i8 %ret
 }
 
@@ -6476,7 +6476,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw sub i8* %ptr, i8 %val singlethread acquire
+  %ret = atomicrmw sub i8* %ptr, i8 %val syncscope("singlethread") acquire
   ret i8 %ret
 }
 
@@ -6492,7 +6492,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw sub i8* %ptr, i8 %val singlethread release
+  %ret = atomicrmw sub i8* %ptr, i8 %val syncscope("singlethread") release
   ret i8 %ret
 }
 
@@ -6509,7 +6509,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw sub i8* %ptr, i8 %val singlethread acq_rel
+  %ret = atomicrmw sub i8* %ptr, i8 %val syncscope("singlethread") acq_rel
   ret i8 %ret
 }
 
@@ -6526,7 +6526,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw sub i8* %ptr, i8 %val singlethread seq_cst
+  %ret = atomicrmw sub i8* %ptr, i8 %val syncscope("singlethread") seq_cst
   ret i8 %ret
 }
 
@@ -6541,7 +6541,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw sub i16* %ptr, i16 %val singlethread monotonic
+  %ret = atomicrmw sub i16* %ptr, i16 %val syncscope("singlethread") monotonic
   ret i16 %ret
 }
 
@@ -6557,7 +6557,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw sub i16* %ptr, i16 %val singlethread acquire
+  %ret = atomicrmw sub i16* %ptr, i16 %val syncscope("singlethread") acquire
   ret i16 %ret
 }
 
@@ -6573,7 +6573,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw sub i16* %ptr, i16 %val singlethread release
+  %ret = atomicrmw sub i16* %ptr, i16 %val syncscope("singlethread") release
   ret i16 %ret
 }
 
@@ -6590,7 +6590,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw sub i16* %ptr, i16 %val singlethread acq_rel
+  %ret = atomicrmw sub i16* %ptr, i16 %val syncscope("singlethread") acq_rel
   ret i16 %ret
 }
 
@@ -6607,7 +6607,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw sub i16* %ptr, i16 %val singlethread seq_cst
+  %ret = atomicrmw sub i16* %ptr, i16 %val syncscope("singlethread") seq_cst
   ret i16 %ret
 }
 
@@ -6622,7 +6622,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw sub i32* %ptr, i32 %val singlethread monotonic
+  %ret = atomicrmw sub i32* %ptr, i32 %val syncscope("singlethread") monotonic
   ret i32 %ret
 }
 
@@ -6638,7 +6638,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw sub i32* %ptr, i32 %val singlethread acquire
+  %ret = atomicrmw sub i32* %ptr, i32 %val syncscope("singlethread") acquire
   ret i32 %ret
 }
 
@@ -6654,7 +6654,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw sub i32* %ptr, i32 %val singlethread release
+  %ret = atomicrmw sub i32* %ptr, i32 %val syncscope("singlethread") release
   ret i32 %ret
 }
 
@@ -6671,7 +6671,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw sub i32* %ptr, i32 %val singlethread acq_rel
+  %ret = atomicrmw sub i32* %ptr, i32 %val syncscope("singlethread") acq_rel
   ret i32 %ret
 }
 
@@ -6688,7 +6688,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw sub i32* %ptr, i32 %val singlethread seq_cst
+  %ret = atomicrmw sub i32* %ptr, i32 %val syncscope("singlethread") seq_cst
   ret i32 %ret
 }
 
@@ -6703,7 +6703,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw sub i64* %ptr, i64 %val singlethread monotonic
+  %ret = atomicrmw sub i64* %ptr, i64 %val syncscope("singlethread") monotonic
   ret i64 %ret
 }
 
@@ -6719,7 +6719,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw sub i64* %ptr, i64 %val singlethread acquire
+  %ret = atomicrmw sub i64* %ptr, i64 %val syncscope("singlethread") acquire
   ret i64 %ret
 }
 
@@ -6735,7 +6735,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw sub i64* %ptr, i64 %val singlethread release
+  %ret = atomicrmw sub i64* %ptr, i64 %val syncscope("singlethread") release
   ret i64 %ret
 }
 
@@ -6752,7 +6752,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw sub i64* %ptr, i64 %val singlethread acq_rel
+  %ret = atomicrmw sub i64* %ptr, i64 %val syncscope("singlethread") acq_rel
   ret i64 %ret
 }
 
@@ -6769,7 +6769,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw sub i64* %ptr, i64 %val singlethread seq_cst
+  %ret = atomicrmw sub i64* %ptr, i64 %val syncscope("singlethread") seq_cst
   ret i64 %ret
 }
 
@@ -6784,7 +6784,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw and i8* %ptr, i8 %val singlethread monotonic
+  %ret = atomicrmw and i8* %ptr, i8 %val syncscope("singlethread") monotonic
   ret i8 %ret
 }
 
@@ -6800,7 +6800,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw and i8* %ptr, i8 %val singlethread acquire
+  %ret = atomicrmw and i8* %ptr, i8 %val syncscope("singlethread") acquire
   ret i8 %ret
 }
 
@@ -6816,7 +6816,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw and i8* %ptr, i8 %val singlethread release
+  %ret = atomicrmw and i8* %ptr, i8 %val syncscope("singlethread") release
   ret i8 %ret
 }
 
@@ -6833,7 +6833,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw and i8* %ptr, i8 %val singlethread acq_rel
+  %ret = atomicrmw and i8* %ptr, i8 %val syncscope("singlethread") acq_rel
   ret i8 %ret
 }
 
@@ -6850,7 +6850,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw and i8* %ptr, i8 %val singlethread seq_cst
+  %ret = atomicrmw and i8* %ptr, i8 %val syncscope("singlethread") seq_cst
   ret i8 %ret
 }
 
@@ -6865,7 +6865,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw and i16* %ptr, i16 %val singlethread monotonic
+  %ret = atomicrmw and i16* %ptr, i16 %val syncscope("singlethread") monotonic
   ret i16 %ret
 }
 
@@ -6881,7 +6881,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw and i16* %ptr, i16 %val singlethread acquire
+  %ret = atomicrmw and i16* %ptr, i16 %val syncscope("singlethread") acquire
   ret i16 %ret
 }
 
@@ -6897,7 +6897,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw and i16* %ptr, i16 %val singlethread release
+  %ret = atomicrmw and i16* %ptr, i16 %val syncscope("singlethread") release
   ret i16 %ret
 }
 
@@ -6914,7 +6914,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw and i16* %ptr, i16 %val singlethread acq_rel
+  %ret = atomicrmw and i16* %ptr, i16 %val syncscope("singlethread") acq_rel
   ret i16 %ret
 }
 
@@ -6931,7 +6931,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw and i16* %ptr, i16 %val singlethread seq_cst
+  %ret = atomicrmw and i16* %ptr, i16 %val syncscope("singlethread") seq_cst
   ret i16 %ret
 }
 
@@ -6946,7 +6946,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw and i32* %ptr, i32 %val singlethread monotonic
+  %ret = atomicrmw and i32* %ptr, i32 %val syncscope("singlethread") monotonic
   ret i32 %ret
 }
 
@@ -6962,7 +6962,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw and i32* %ptr, i32 %val singlethread acquire
+  %ret = atomicrmw and i32* %ptr, i32 %val syncscope("singlethread") acquire
   ret i32 %ret
 }
 
@@ -6978,7 +6978,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw and i32* %ptr, i32 %val singlethread release
+  %ret = atomicrmw and i32* %ptr, i32 %val syncscope("singlethread") release
   ret i32 %ret
 }
 
@@ -6995,7 +6995,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw and i32* %ptr, i32 %val singlethread acq_rel
+  %ret = atomicrmw and i32* %ptr, i32 %val syncscope("singlethread") acq_rel
   ret i32 %ret
 }
 
@@ -7012,7 +7012,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw and i32* %ptr, i32 %val singlethread seq_cst
+  %ret = atomicrmw and i32* %ptr, i32 %val syncscope("singlethread") seq_cst
   ret i32 %ret
 }
 
@@ -7027,7 +7027,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw and i64* %ptr, i64 %val singlethread monotonic
+  %ret = atomicrmw and i64* %ptr, i64 %val syncscope("singlethread") monotonic
   ret i64 %ret
 }
 
@@ -7043,7 +7043,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw and i64* %ptr, i64 %val singlethread acquire
+  %ret = atomicrmw and i64* %ptr, i64 %val syncscope("singlethread") acquire
   ret i64 %ret
 }
 
@@ -7059,7 +7059,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw and i64* %ptr, i64 %val singlethread release
+  %ret = atomicrmw and i64* %ptr, i64 %val syncscope("singlethread") release
   ret i64 %ret
 }
 
@@ -7076,7 +7076,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw and i64* %ptr, i64 %val singlethread acq_rel
+  %ret = atomicrmw and i64* %ptr, i64 %val syncscope("singlethread") acq_rel
   ret i64 %ret
 }
 
@@ -7093,7 +7093,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw and i64* %ptr, i64 %val singlethread seq_cst
+  %ret = atomicrmw and i64* %ptr, i64 %val syncscope("singlethread") seq_cst
   ret i64 %ret
 }
 
@@ -7108,7 +7108,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw nand i8* %ptr, i8 %val singlethread monotonic
+  %ret = atomicrmw nand i8* %ptr, i8 %val syncscope("singlethread") monotonic
   ret i8 %ret
 }
 
@@ -7124,7 +7124,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw nand i8* %ptr, i8 %val singlethread acquire
+  %ret = atomicrmw nand i8* %ptr, i8 %val syncscope("singlethread") acquire
   ret i8 %ret
 }
 
@@ -7140,7 +7140,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw nand i8* %ptr, i8 %val singlethread release
+  %ret = atomicrmw nand i8* %ptr, i8 %val syncscope("singlethread") release
   ret i8 %ret
 }
 
@@ -7157,7 +7157,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw nand i8* %ptr, i8 %val singlethread acq_rel
+  %ret = atomicrmw nand i8* %ptr, i8 %val syncscope("singlethread") acq_rel
   ret i8 %ret
 }
 
@@ -7174,7 +7174,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw nand i8* %ptr, i8 %val singlethread seq_cst
+  %ret = atomicrmw nand i8* %ptr, i8 %val syncscope("singlethread") seq_cst
   ret i8 %ret
 }
 
@@ -7189,7 +7189,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw nand i16* %ptr, i16 %val singlethread monotonic
+  %ret = atomicrmw nand i16* %ptr, i16 %val syncscope("singlethread") monotonic
   ret i16 %ret
 }
 
@@ -7205,7 +7205,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw nand i16* %ptr, i16 %val singlethread acquire
+  %ret = atomicrmw nand i16* %ptr, i16 %val syncscope("singlethread") acquire
   ret i16 %ret
 }
 
@@ -7221,7 +7221,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw nand i16* %ptr, i16 %val singlethread release
+  %ret = atomicrmw nand i16* %ptr, i16 %val syncscope("singlethread") release
   ret i16 %ret
 }
 
@@ -7238,7 +7238,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw nand i16* %ptr, i16 %val singlethread acq_rel
+  %ret = atomicrmw nand i16* %ptr, i16 %val syncscope("singlethread") acq_rel
   ret i16 %ret
 }
 
@@ -7255,7 +7255,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw nand i16* %ptr, i16 %val singlethread seq_cst
+  %ret = atomicrmw nand i16* %ptr, i16 %val syncscope("singlethread") seq_cst
   ret i16 %ret
 }
 
@@ -7270,7 +7270,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw nand i32* %ptr, i32 %val singlethread monotonic
+  %ret = atomicrmw nand i32* %ptr, i32 %val syncscope("singlethread") monotonic
   ret i32 %ret
 }
 
@@ -7286,7 +7286,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw nand i32* %ptr, i32 %val singlethread acquire
+  %ret = atomicrmw nand i32* %ptr, i32 %val syncscope("singlethread") acquire
   ret i32 %ret
 }
 
@@ -7302,7 +7302,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw nand i32* %ptr, i32 %val singlethread release
+  %ret = atomicrmw nand i32* %ptr, i32 %val syncscope("singlethread") release
   ret i32 %ret
 }
 
@@ -7319,7 +7319,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw nand i32* %ptr, i32 %val singlethread acq_rel
+  %ret = atomicrmw nand i32* %ptr, i32 %val syncscope("singlethread") acq_rel
   ret i32 %ret
 }
 
@@ -7336,7 +7336,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw nand i32* %ptr, i32 %val singlethread seq_cst
+  %ret = atomicrmw nand i32* %ptr, i32 %val syncscope("singlethread") seq_cst
   ret i32 %ret
 }
 
@@ -7351,7 +7351,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw nand i64* %ptr, i64 %val singlethread monotonic
+  %ret = atomicrmw nand i64* %ptr, i64 %val syncscope("singlethread") monotonic
   ret i64 %ret
 }
 
@@ -7367,7 +7367,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw nand i64* %ptr, i64 %val singlethread acquire
+  %ret = atomicrmw nand i64* %ptr, i64 %val syncscope("singlethread") acquire
   ret i64 %ret
 }
 
@@ -7383,7 +7383,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw nand i64* %ptr, i64 %val singlethread release
+  %ret = atomicrmw nand i64* %ptr, i64 %val syncscope("singlethread") release
   ret i64 %ret
 }
 
@@ -7400,7 +7400,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw nand i64* %ptr, i64 %val singlethread acq_rel
+  %ret = atomicrmw nand i64* %ptr, i64 %val syncscope("singlethread") acq_rel
   ret i64 %ret
 }
 
@@ -7417,7 +7417,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw nand i64* %ptr, i64 %val singlethread seq_cst
+  %ret = atomicrmw nand i64* %ptr, i64 %val syncscope("singlethread") seq_cst
   ret i64 %ret
 }
 
@@ -7432,7 +7432,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw or i8* %ptr, i8 %val singlethread monotonic
+  %ret = atomicrmw or i8* %ptr, i8 %val syncscope("singlethread") monotonic
   ret i8 %ret
 }
 
@@ -7448,7 +7448,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw or i8* %ptr, i8 %val singlethread acquire
+  %ret = atomicrmw or i8* %ptr, i8 %val syncscope("singlethread") acquire
   ret i8 %ret
 }
 
@@ -7464,7 +7464,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw or i8* %ptr, i8 %val singlethread release
+  %ret = atomicrmw or i8* %ptr, i8 %val syncscope("singlethread") release
   ret i8 %ret
 }
 
@@ -7481,7 +7481,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw or i8* %ptr, i8 %val singlethread acq_rel
+  %ret = atomicrmw or i8* %ptr, i8 %val syncscope("singlethread") acq_rel
   ret i8 %ret
 }
 
@@ -7498,7 +7498,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw or i8* %ptr, i8 %val singlethread seq_cst
+  %ret = atomicrmw or i8* %ptr, i8 %val syncscope("singlethread") seq_cst
   ret i8 %ret
 }
 
@@ -7513,7 +7513,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw or i16* %ptr, i16 %val singlethread monotonic
+  %ret = atomicrmw or i16* %ptr, i16 %val syncscope("singlethread") monotonic
   ret i16 %ret
 }
 
@@ -7529,7 +7529,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw or i16* %ptr, i16 %val singlethread acquire
+  %ret = atomicrmw or i16* %ptr, i16 %val syncscope("singlethread") acquire
   ret i16 %ret
 }
 
@@ -7545,7 +7545,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw or i16* %ptr, i16 %val singlethread release
+  %ret = atomicrmw or i16* %ptr, i16 %val syncscope("singlethread") release
   ret i16 %ret
 }
 
@@ -7562,7 +7562,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw or i16* %ptr, i16 %val singlethread acq_rel
+  %ret = atomicrmw or i16* %ptr, i16 %val syncscope("singlethread") acq_rel
   ret i16 %ret
 }
 
@@ -7579,7 +7579,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw or i16* %ptr, i16 %val singlethread seq_cst
+  %ret = atomicrmw or i16* %ptr, i16 %val syncscope("singlethread") seq_cst
   ret i16 %ret
 }
 
@@ -7594,7 +7594,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw or i32* %ptr, i32 %val singlethread monotonic
+  %ret = atomicrmw or i32* %ptr, i32 %val syncscope("singlethread") monotonic
   ret i32 %ret
 }
 
@@ -7610,7 +7610,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw or i32* %ptr, i32 %val singlethread acquire
+  %ret = atomicrmw or i32* %ptr, i32 %val syncscope("singlethread") acquire
   ret i32 %ret
 }
 
@@ -7626,7 +7626,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw or i32* %ptr, i32 %val singlethread release
+  %ret = atomicrmw or i32* %ptr, i32 %val syncscope("singlethread") release
   ret i32 %ret
 }
 
@@ -7643,7 +7643,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw or i32* %ptr, i32 %val singlethread acq_rel
+  %ret = atomicrmw or i32* %ptr, i32 %val syncscope("singlethread") acq_rel
   ret i32 %ret
 }
 
@@ -7660,7 +7660,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw or i32* %ptr, i32 %val singlethread seq_cst
+  %ret = atomicrmw or i32* %ptr, i32 %val syncscope("singlethread") seq_cst
   ret i32 %ret
 }
 
@@ -7675,7 +7675,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw or i64* %ptr, i64 %val singlethread monotonic
+  %ret = atomicrmw or i64* %ptr, i64 %val syncscope("singlethread") monotonic
   ret i64 %ret
 }
 
@@ -7691,7 +7691,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw or i64* %ptr, i64 %val singlethread acquire
+  %ret = atomicrmw or i64* %ptr, i64 %val syncscope("singlethread") acquire
   ret i64 %ret
 }
 
@@ -7707,7 +7707,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw or i64* %ptr, i64 %val singlethread release
+  %ret = atomicrmw or i64* %ptr, i64 %val syncscope("singlethread") release
   ret i64 %ret
 }
 
@@ -7724,7 +7724,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw or i64* %ptr, i64 %val singlethread acq_rel
+  %ret = atomicrmw or i64* %ptr, i64 %val syncscope("singlethread") acq_rel
   ret i64 %ret
 }
 
@@ -7741,7 +7741,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw or i64* %ptr, i64 %val singlethread seq_cst
+  %ret = atomicrmw or i64* %ptr, i64 %val syncscope("singlethread") seq_cst
   ret i64 %ret
 }
 
@@ -7756,7 +7756,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xor i8* %ptr, i8 %val singlethread monotonic
+  %ret = atomicrmw xor i8* %ptr, i8 %val syncscope("singlethread") monotonic
   ret i8 %ret
 }
 
@@ -7772,7 +7772,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xor i8* %ptr, i8 %val singlethread acquire
+  %ret = atomicrmw xor i8* %ptr, i8 %val syncscope("singlethread") acquire
   ret i8 %ret
 }
 
@@ -7788,7 +7788,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xor i8* %ptr, i8 %val singlethread release
+  %ret = atomicrmw xor i8* %ptr, i8 %val syncscope("singlethread") release
   ret i8 %ret
 }
 
@@ -7805,7 +7805,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xor i8* %ptr, i8 %val singlethread acq_rel
+  %ret = atomicrmw xor i8* %ptr, i8 %val syncscope("singlethread") acq_rel
   ret i8 %ret
 }
 
@@ -7822,7 +7822,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xor i8* %ptr, i8 %val singlethread seq_cst
+  %ret = atomicrmw xor i8* %ptr, i8 %val syncscope("singlethread") seq_cst
   ret i8 %ret
 }
 
@@ -7837,7 +7837,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xor i16* %ptr, i16 %val singlethread monotonic
+  %ret = atomicrmw xor i16* %ptr, i16 %val syncscope("singlethread") monotonic
   ret i16 %ret
 }
 
@@ -7853,7 +7853,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xor i16* %ptr, i16 %val singlethread acquire
+  %ret = atomicrmw xor i16* %ptr, i16 %val syncscope("singlethread") acquire
   ret i16 %ret
 }
 
@@ -7869,7 +7869,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xor i16* %ptr, i16 %val singlethread release
+  %ret = atomicrmw xor i16* %ptr, i16 %val syncscope("singlethread") release
   ret i16 %ret
 }
 
@@ -7886,7 +7886,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xor i16* %ptr, i16 %val singlethread acq_rel
+  %ret = atomicrmw xor i16* %ptr, i16 %val syncscope("singlethread") acq_rel
   ret i16 %ret
 }
 
@@ -7903,7 +7903,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xor i16* %ptr, i16 %val singlethread seq_cst
+  %ret = atomicrmw xor i16* %ptr, i16 %val syncscope("singlethread") seq_cst
   ret i16 %ret
 }
 
@@ -7918,7 +7918,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xor i32* %ptr, i32 %val singlethread monotonic
+  %ret = atomicrmw xor i32* %ptr, i32 %val syncscope("singlethread") monotonic
   ret i32 %ret
 }
 
@@ -7934,7 +7934,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xor i32* %ptr, i32 %val singlethread acquire
+  %ret = atomicrmw xor i32* %ptr, i32 %val syncscope("singlethread") acquire
   ret i32 %ret
 }
 
@@ -7950,7 +7950,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xor i32* %ptr, i32 %val singlethread release
+  %ret = atomicrmw xor i32* %ptr, i32 %val syncscope("singlethread") release
   ret i32 %ret
 }
 
@@ -7967,7 +7967,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xor i32* %ptr, i32 %val singlethread acq_rel
+  %ret = atomicrmw xor i32* %ptr, i32 %val syncscope("singlethread") acq_rel
   ret i32 %ret
 }
 
@@ -7984,7 +7984,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xor i32* %ptr, i32 %val singlethread seq_cst
+  %ret = atomicrmw xor i32* %ptr, i32 %val syncscope("singlethread") seq_cst
   ret i32 %ret
 }
 
@@ -7999,7 +7999,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xor i64* %ptr, i64 %val singlethread monotonic
+  %ret = atomicrmw xor i64* %ptr, i64 %val syncscope("singlethread") monotonic
   ret i64 %ret
 }
 
@@ -8015,7 +8015,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xor i64* %ptr, i64 %val singlethread acquire
+  %ret = atomicrmw xor i64* %ptr, i64 %val syncscope("singlethread") acquire
   ret i64 %ret
 }
 
@@ -8031,7 +8031,7 @@
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xor i64* %ptr, i64 %val singlethread release
+  %ret = atomicrmw xor i64* %ptr, i64 %val syncscope("singlethread") release
   ret i64 %ret
 }
 
@@ -8048,7 +8048,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xor i64* %ptr, i64 %val singlethread acq_rel
+  %ret = atomicrmw xor i64* %ptr, i64 %val syncscope("singlethread") acq_rel
   ret i64 %ret
 }
 
@@ -8065,7 +8065,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xor i64* %ptr, i64 %val singlethread seq_cst
+  %ret = atomicrmw xor i64* %ptr, i64 %val syncscope("singlethread") seq_cst
   ret i64 %ret
 }
 
@@ -8083,7 +8083,7 @@
 ; PPC64LE-NEXT:  .LBB480_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw max i8* %ptr, i8 %val singlethread monotonic
+  %ret = atomicrmw max i8* %ptr, i8 %val syncscope("singlethread") monotonic
   ret i8 %ret
 }
 
@@ -8102,7 +8102,7 @@
 ; PPC64LE-NEXT:  .LBB481_3:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw max i8* %ptr, i8 %val singlethread acquire
+  %ret = atomicrmw max i8* %ptr, i8 %val syncscope("singlethread") acquire
   ret i8 %ret
 }
 
@@ -8121,7 +8121,7 @@
 ; PPC64LE-NEXT:  .LBB482_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw max i8* %ptr, i8 %val singlethread release
+  %ret = atomicrmw max i8* %ptr, i8 %val syncscope("singlethread") release
   ret i8 %ret
 }
 
@@ -8141,7 +8141,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw max i8* %ptr, i8 %val singlethread acq_rel
+  %ret = atomicrmw max i8* %ptr, i8 %val syncscope("singlethread") acq_rel
   ret i8 %ret
 }
 
@@ -8161,7 +8161,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw max i8* %ptr, i8 %val singlethread seq_cst
+  %ret = atomicrmw max i8* %ptr, i8 %val syncscope("singlethread") seq_cst
   ret i8 %ret
 }
 
@@ -8179,7 +8179,7 @@
 ; PPC64LE-NEXT:  .LBB485_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw max i16* %ptr, i16 %val singlethread monotonic
+  %ret = atomicrmw max i16* %ptr, i16 %val syncscope("singlethread") monotonic
   ret i16 %ret
 }
 
@@ -8198,7 +8198,7 @@
 ; PPC64LE-NEXT:  .LBB486_3:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw max i16* %ptr, i16 %val singlethread acquire
+  %ret = atomicrmw max i16* %ptr, i16 %val syncscope("singlethread") acquire
   ret i16 %ret
 }
 
@@ -8217,7 +8217,7 @@
 ; PPC64LE-NEXT:  .LBB487_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw max i16* %ptr, i16 %val singlethread release
+  %ret = atomicrmw max i16* %ptr, i16 %val syncscope("singlethread") release
   ret i16 %ret
 }
 
@@ -8237,7 +8237,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw max i16* %ptr, i16 %val singlethread acq_rel
+  %ret = atomicrmw max i16* %ptr, i16 %val syncscope("singlethread") acq_rel
   ret i16 %ret
 }
 
@@ -8257,7 +8257,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw max i16* %ptr, i16 %val singlethread seq_cst
+  %ret = atomicrmw max i16* %ptr, i16 %val syncscope("singlethread") seq_cst
   ret i16 %ret
 }
 
@@ -8274,7 +8274,7 @@
 ; PPC64LE-NEXT:  .LBB490_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw max i32* %ptr, i32 %val singlethread monotonic
+  %ret = atomicrmw max i32* %ptr, i32 %val syncscope("singlethread") monotonic
   ret i32 %ret
 }
 
@@ -8292,7 +8292,7 @@
 ; PPC64LE-NEXT:  .LBB491_3:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw max i32* %ptr, i32 %val singlethread acquire
+  %ret = atomicrmw max i32* %ptr, i32 %val syncscope("singlethread") acquire
   ret i32 %ret
 }
 
@@ -8310,7 +8310,7 @@
 ; PPC64LE-NEXT:  .LBB492_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw max i32* %ptr, i32 %val singlethread release
+  %ret = atomicrmw max i32* %ptr, i32 %val syncscope("singlethread") release
   ret i32 %ret
 }
 
@@ -8329,7 +8329,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw max i32* %ptr, i32 %val singlethread acq_rel
+  %ret = atomicrmw max i32* %ptr, i32 %val syncscope("singlethread") acq_rel
   ret i32 %ret
 }
 
@@ -8348,7 +8348,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw max i32* %ptr, i32 %val singlethread seq_cst
+  %ret = atomicrmw max i32* %ptr, i32 %val syncscope("singlethread") seq_cst
   ret i32 %ret
 }
 
@@ -8365,7 +8365,7 @@
 ; PPC64LE-NEXT:  .LBB495_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw max i64* %ptr, i64 %val singlethread monotonic
+  %ret = atomicrmw max i64* %ptr, i64 %val syncscope("singlethread") monotonic
   ret i64 %ret
 }
 
@@ -8383,7 +8383,7 @@
 ; PPC64LE-NEXT:  .LBB496_3:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw max i64* %ptr, i64 %val singlethread acquire
+  %ret = atomicrmw max i64* %ptr, i64 %val syncscope("singlethread") acquire
   ret i64 %ret
 }
 
@@ -8401,7 +8401,7 @@
 ; PPC64LE-NEXT:  .LBB497_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw max i64* %ptr, i64 %val singlethread release
+  %ret = atomicrmw max i64* %ptr, i64 %val syncscope("singlethread") release
   ret i64 %ret
 }
 
@@ -8420,7 +8420,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw max i64* %ptr, i64 %val singlethread acq_rel
+  %ret = atomicrmw max i64* %ptr, i64 %val syncscope("singlethread") acq_rel
   ret i64 %ret
 }
 
@@ -8439,7 +8439,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw max i64* %ptr, i64 %val singlethread seq_cst
+  %ret = atomicrmw max i64* %ptr, i64 %val syncscope("singlethread") seq_cst
   ret i64 %ret
 }
 
@@ -8457,7 +8457,7 @@
 ; PPC64LE-NEXT:  .LBB500_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw min i8* %ptr, i8 %val singlethread monotonic
+  %ret = atomicrmw min i8* %ptr, i8 %val syncscope("singlethread") monotonic
   ret i8 %ret
 }
 
@@ -8476,7 +8476,7 @@
 ; PPC64LE-NEXT:  .LBB501_3:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw min i8* %ptr, i8 %val singlethread acquire
+  %ret = atomicrmw min i8* %ptr, i8 %val syncscope("singlethread") acquire
   ret i8 %ret
 }
 
@@ -8495,7 +8495,7 @@
 ; PPC64LE-NEXT:  .LBB502_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw min i8* %ptr, i8 %val singlethread release
+  %ret = atomicrmw min i8* %ptr, i8 %val syncscope("singlethread") release
   ret i8 %ret
 }
 
@@ -8515,7 +8515,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw min i8* %ptr, i8 %val singlethread acq_rel
+  %ret = atomicrmw min i8* %ptr, i8 %val syncscope("singlethread") acq_rel
   ret i8 %ret
 }
 
@@ -8535,7 +8535,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw min i8* %ptr, i8 %val singlethread seq_cst
+  %ret = atomicrmw min i8* %ptr, i8 %val syncscope("singlethread") seq_cst
   ret i8 %ret
 }
 
@@ -8553,7 +8553,7 @@
 ; PPC64LE-NEXT:  .LBB505_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw min i16* %ptr, i16 %val singlethread monotonic
+  %ret = atomicrmw min i16* %ptr, i16 %val syncscope("singlethread") monotonic
   ret i16 %ret
 }
 
@@ -8572,7 +8572,7 @@
 ; PPC64LE-NEXT:  .LBB506_3:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw min i16* %ptr, i16 %val singlethread acquire
+  %ret = atomicrmw min i16* %ptr, i16 %val syncscope("singlethread") acquire
   ret i16 %ret
 }
 
@@ -8591,7 +8591,7 @@
 ; PPC64LE-NEXT:  .LBB507_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw min i16* %ptr, i16 %val singlethread release
+  %ret = atomicrmw min i16* %ptr, i16 %val syncscope("singlethread") release
   ret i16 %ret
 }
 
@@ -8611,7 +8611,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw min i16* %ptr, i16 %val singlethread acq_rel
+  %ret = atomicrmw min i16* %ptr, i16 %val syncscope("singlethread") acq_rel
   ret i16 %ret
 }
 
@@ -8631,7 +8631,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw min i16* %ptr, i16 %val singlethread seq_cst
+  %ret = atomicrmw min i16* %ptr, i16 %val syncscope("singlethread") seq_cst
   ret i16 %ret
 }
 
@@ -8648,7 +8648,7 @@
 ; PPC64LE-NEXT:  .LBB510_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw min i32* %ptr, i32 %val singlethread monotonic
+  %ret = atomicrmw min i32* %ptr, i32 %val syncscope("singlethread") monotonic
   ret i32 %ret
 }
 
@@ -8666,7 +8666,7 @@
 ; PPC64LE-NEXT:  .LBB511_3:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw min i32* %ptr, i32 %val singlethread acquire
+  %ret = atomicrmw min i32* %ptr, i32 %val syncscope("singlethread") acquire
   ret i32 %ret
 }
 
@@ -8684,7 +8684,7 @@
 ; PPC64LE-NEXT:  .LBB512_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw min i32* %ptr, i32 %val singlethread release
+  %ret = atomicrmw min i32* %ptr, i32 %val syncscope("singlethread") release
   ret i32 %ret
 }
 
@@ -8703,7 +8703,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw min i32* %ptr, i32 %val singlethread acq_rel
+  %ret = atomicrmw min i32* %ptr, i32 %val syncscope("singlethread") acq_rel
   ret i32 %ret
 }
 
@@ -8722,7 +8722,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw min i32* %ptr, i32 %val singlethread seq_cst
+  %ret = atomicrmw min i32* %ptr, i32 %val syncscope("singlethread") seq_cst
   ret i32 %ret
 }
 
@@ -8739,7 +8739,7 @@
 ; PPC64LE-NEXT:  .LBB515_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw min i64* %ptr, i64 %val singlethread monotonic
+  %ret = atomicrmw min i64* %ptr, i64 %val syncscope("singlethread") monotonic
   ret i64 %ret
 }
 
@@ -8757,7 +8757,7 @@
 ; PPC64LE-NEXT:  .LBB516_3:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw min i64* %ptr, i64 %val singlethread acquire
+  %ret = atomicrmw min i64* %ptr, i64 %val syncscope("singlethread") acquire
   ret i64 %ret
 }
 
@@ -8775,7 +8775,7 @@
 ; PPC64LE-NEXT:  .LBB517_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw min i64* %ptr, i64 %val singlethread release
+  %ret = atomicrmw min i64* %ptr, i64 %val syncscope("singlethread") release
   ret i64 %ret
 }
 
@@ -8794,7 +8794,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw min i64* %ptr, i64 %val singlethread acq_rel
+  %ret = atomicrmw min i64* %ptr, i64 %val syncscope("singlethread") acq_rel
   ret i64 %ret
 }
 
@@ -8813,7 +8813,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw min i64* %ptr, i64 %val singlethread seq_cst
+  %ret = atomicrmw min i64* %ptr, i64 %val syncscope("singlethread") seq_cst
   ret i64 %ret
 }
 
@@ -8830,7 +8830,7 @@
 ; PPC64LE-NEXT:  .LBB520_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umax i8* %ptr, i8 %val singlethread monotonic
+  %ret = atomicrmw umax i8* %ptr, i8 %val syncscope("singlethread") monotonic
   ret i8 %ret
 }
 
@@ -8848,7 +8848,7 @@
 ; PPC64LE-NEXT:  .LBB521_3:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umax i8* %ptr, i8 %val singlethread acquire
+  %ret = atomicrmw umax i8* %ptr, i8 %val syncscope("singlethread") acquire
   ret i8 %ret
 }
 
@@ -8866,7 +8866,7 @@
 ; PPC64LE-NEXT:  .LBB522_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umax i8* %ptr, i8 %val singlethread release
+  %ret = atomicrmw umax i8* %ptr, i8 %val syncscope("singlethread") release
   ret i8 %ret
 }
 
@@ -8885,7 +8885,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umax i8* %ptr, i8 %val singlethread acq_rel
+  %ret = atomicrmw umax i8* %ptr, i8 %val syncscope("singlethread") acq_rel
   ret i8 %ret
 }
 
@@ -8904,7 +8904,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umax i8* %ptr, i8 %val singlethread seq_cst
+  %ret = atomicrmw umax i8* %ptr, i8 %val syncscope("singlethread") seq_cst
   ret i8 %ret
 }
 
@@ -8921,7 +8921,7 @@
 ; PPC64LE-NEXT:  .LBB525_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umax i16* %ptr, i16 %val singlethread monotonic
+  %ret = atomicrmw umax i16* %ptr, i16 %val syncscope("singlethread") monotonic
   ret i16 %ret
 }
 
@@ -8939,7 +8939,7 @@
 ; PPC64LE-NEXT:  .LBB526_3:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umax i16* %ptr, i16 %val singlethread acquire
+  %ret = atomicrmw umax i16* %ptr, i16 %val syncscope("singlethread") acquire
   ret i16 %ret
 }
 
@@ -8957,7 +8957,7 @@
 ; PPC64LE-NEXT:  .LBB527_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umax i16* %ptr, i16 %val singlethread release
+  %ret = atomicrmw umax i16* %ptr, i16 %val syncscope("singlethread") release
   ret i16 %ret
 }
 
@@ -8976,7 +8976,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umax i16* %ptr, i16 %val singlethread acq_rel
+  %ret = atomicrmw umax i16* %ptr, i16 %val syncscope("singlethread") acq_rel
   ret i16 %ret
 }
 
@@ -8995,7 +8995,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umax i16* %ptr, i16 %val singlethread seq_cst
+  %ret = atomicrmw umax i16* %ptr, i16 %val syncscope("singlethread") seq_cst
   ret i16 %ret
 }
 
@@ -9012,7 +9012,7 @@
 ; PPC64LE-NEXT:  .LBB530_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umax i32* %ptr, i32 %val singlethread monotonic
+  %ret = atomicrmw umax i32* %ptr, i32 %val syncscope("singlethread") monotonic
   ret i32 %ret
 }
 
@@ -9030,7 +9030,7 @@
 ; PPC64LE-NEXT:  .LBB531_3:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umax i32* %ptr, i32 %val singlethread acquire
+  %ret = atomicrmw umax i32* %ptr, i32 %val syncscope("singlethread") acquire
   ret i32 %ret
 }
 
@@ -9048,7 +9048,7 @@
 ; PPC64LE-NEXT:  .LBB532_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umax i32* %ptr, i32 %val singlethread release
+  %ret = atomicrmw umax i32* %ptr, i32 %val syncscope("singlethread") release
   ret i32 %ret
 }
 
@@ -9067,7 +9067,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umax i32* %ptr, i32 %val singlethread acq_rel
+  %ret = atomicrmw umax i32* %ptr, i32 %val syncscope("singlethread") acq_rel
   ret i32 %ret
 }
 
@@ -9086,7 +9086,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umax i32* %ptr, i32 %val singlethread seq_cst
+  %ret = atomicrmw umax i32* %ptr, i32 %val syncscope("singlethread") seq_cst
   ret i32 %ret
 }
 
@@ -9103,7 +9103,7 @@
 ; PPC64LE-NEXT:  .LBB535_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umax i64* %ptr, i64 %val singlethread monotonic
+  %ret = atomicrmw umax i64* %ptr, i64 %val syncscope("singlethread") monotonic
   ret i64 %ret
 }
 
@@ -9121,7 +9121,7 @@
 ; PPC64LE-NEXT:  .LBB536_3:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umax i64* %ptr, i64 %val singlethread acquire
+  %ret = atomicrmw umax i64* %ptr, i64 %val syncscope("singlethread") acquire
   ret i64 %ret
 }
 
@@ -9139,7 +9139,7 @@
 ; PPC64LE-NEXT:  .LBB537_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umax i64* %ptr, i64 %val singlethread release
+  %ret = atomicrmw umax i64* %ptr, i64 %val syncscope("singlethread") release
   ret i64 %ret
 }
 
@@ -9158,7 +9158,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umax i64* %ptr, i64 %val singlethread acq_rel
+  %ret = atomicrmw umax i64* %ptr, i64 %val syncscope("singlethread") acq_rel
   ret i64 %ret
 }
 
@@ -9177,7 +9177,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umax i64* %ptr, i64 %val singlethread seq_cst
+  %ret = atomicrmw umax i64* %ptr, i64 %val syncscope("singlethread") seq_cst
   ret i64 %ret
 }
 
@@ -9194,7 +9194,7 @@
 ; PPC64LE-NEXT:  .LBB540_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umin i8* %ptr, i8 %val singlethread monotonic
+  %ret = atomicrmw umin i8* %ptr, i8 %val syncscope("singlethread") monotonic
   ret i8 %ret
 }
 
@@ -9212,7 +9212,7 @@
 ; PPC64LE-NEXT:  .LBB541_3:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umin i8* %ptr, i8 %val singlethread acquire
+  %ret = atomicrmw umin i8* %ptr, i8 %val syncscope("singlethread") acquire
   ret i8 %ret
 }
 
@@ -9230,7 +9230,7 @@
 ; PPC64LE-NEXT:  .LBB542_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umin i8* %ptr, i8 %val singlethread release
+  %ret = atomicrmw umin i8* %ptr, i8 %val syncscope("singlethread") release
   ret i8 %ret
 }
 
@@ -9249,7 +9249,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umin i8* %ptr, i8 %val singlethread acq_rel
+  %ret = atomicrmw umin i8* %ptr, i8 %val syncscope("singlethread") acq_rel
   ret i8 %ret
 }
 
@@ -9268,7 +9268,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umin i8* %ptr, i8 %val singlethread seq_cst
+  %ret = atomicrmw umin i8* %ptr, i8 %val syncscope("singlethread") seq_cst
   ret i8 %ret
 }
 
@@ -9285,7 +9285,7 @@
 ; PPC64LE-NEXT:  .LBB545_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umin i16* %ptr, i16 %val singlethread monotonic
+  %ret = atomicrmw umin i16* %ptr, i16 %val syncscope("singlethread") monotonic
   ret i16 %ret
 }
 
@@ -9303,7 +9303,7 @@
 ; PPC64LE-NEXT:  .LBB546_3:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umin i16* %ptr, i16 %val singlethread acquire
+  %ret = atomicrmw umin i16* %ptr, i16 %val syncscope("singlethread") acquire
   ret i16 %ret
 }
 
@@ -9321,7 +9321,7 @@
 ; PPC64LE-NEXT:  .LBB547_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umin i16* %ptr, i16 %val singlethread release
+  %ret = atomicrmw umin i16* %ptr, i16 %val syncscope("singlethread") release
   ret i16 %ret
 }
 
@@ -9340,7 +9340,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umin i16* %ptr, i16 %val singlethread acq_rel
+  %ret = atomicrmw umin i16* %ptr, i16 %val syncscope("singlethread") acq_rel
   ret i16 %ret
 }
 
@@ -9359,7 +9359,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umin i16* %ptr, i16 %val singlethread seq_cst
+  %ret = atomicrmw umin i16* %ptr, i16 %val syncscope("singlethread") seq_cst
   ret i16 %ret
 }
 
@@ -9376,7 +9376,7 @@
 ; PPC64LE-NEXT:  .LBB550_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umin i32* %ptr, i32 %val singlethread monotonic
+  %ret = atomicrmw umin i32* %ptr, i32 %val syncscope("singlethread") monotonic
   ret i32 %ret
 }
 
@@ -9394,7 +9394,7 @@
 ; PPC64LE-NEXT:  .LBB551_3:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umin i32* %ptr, i32 %val singlethread acquire
+  %ret = atomicrmw umin i32* %ptr, i32 %val syncscope("singlethread") acquire
   ret i32 %ret
 }
 
@@ -9412,7 +9412,7 @@
 ; PPC64LE-NEXT:  .LBB552_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umin i32* %ptr, i32 %val singlethread release
+  %ret = atomicrmw umin i32* %ptr, i32 %val syncscope("singlethread") release
   ret i32 %ret
 }
 
@@ -9431,7 +9431,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umin i32* %ptr, i32 %val singlethread acq_rel
+  %ret = atomicrmw umin i32* %ptr, i32 %val syncscope("singlethread") acq_rel
   ret i32 %ret
 }
 
@@ -9450,7 +9450,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umin i32* %ptr, i32 %val singlethread seq_cst
+  %ret = atomicrmw umin i32* %ptr, i32 %val syncscope("singlethread") seq_cst
   ret i32 %ret
 }
 
@@ -9467,7 +9467,7 @@
 ; PPC64LE-NEXT:  .LBB555_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umin i64* %ptr, i64 %val singlethread monotonic
+  %ret = atomicrmw umin i64* %ptr, i64 %val syncscope("singlethread") monotonic
   ret i64 %ret
 }
 
@@ -9485,7 +9485,7 @@
 ; PPC64LE-NEXT:  .LBB556_3:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umin i64* %ptr, i64 %val singlethread acquire
+  %ret = atomicrmw umin i64* %ptr, i64 %val syncscope("singlethread") acquire
   ret i64 %ret
 }
 
@@ -9503,7 +9503,7 @@
 ; PPC64LE-NEXT:  .LBB557_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umin i64* %ptr, i64 %val singlethread release
+  %ret = atomicrmw umin i64* %ptr, i64 %val syncscope("singlethread") release
   ret i64 %ret
 }
 
@@ -9522,7 +9522,7 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umin i64* %ptr, i64 %val singlethread acq_rel
+  %ret = atomicrmw umin i64* %ptr, i64 %val syncscope("singlethread") acq_rel
   ret i64 %ret
 }
 
@@ -9541,6 +9541,6 @@
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umin i64* %ptr, i64 %val singlethread seq_cst
+  %ret = atomicrmw umin i64* %ptr, i64 %val syncscope("singlethread") seq_cst
   ret i64 %ret
 }
Index: test/Instrumentation/ThreadSanitizer/atomic.ll
===================================================================
--- test/Instrumentation/ThreadSanitizer/atomic.ll
+++ test/Instrumentation/ThreadSanitizer/atomic.ll
@@ -1959,7 +1959,7 @@
 
 define void @atomic_signal_fence_acquire() nounwind uwtable {
 entry:
-  fence singlethread acquire, !dbg !7
+  fence syncscope("singlethread") acquire, !dbg !7
   ret void, !dbg !7
 }
 ; CHECK-LABEL: atomic_signal_fence_acquire
@@ -1975,7 +1975,7 @@
 
 define void @atomic_signal_fence_release() nounwind uwtable {
 entry:
-  fence singlethread release, !dbg !7
+  fence syncscope("singlethread") release, !dbg !7
   ret void, !dbg !7
 }
 ; CHECK-LABEL: atomic_signal_fence_release
@@ -1991,7 +1991,7 @@
 
 define void @atomic_signal_fence_acq_rel() nounwind uwtable {
 entry:
-  fence singlethread acq_rel, !dbg !7
+  fence syncscope("singlethread") acq_rel, !dbg !7
   ret void, !dbg !7
 }
 ; CHECK-LABEL: atomic_signal_fence_acq_rel
@@ -2007,7 +2007,7 @@
 
 define void @atomic_signal_fence_seq_cst() nounwind uwtable {
 entry:
-  fence singlethread seq_cst, !dbg !7
+  fence syncscope("singlethread") seq_cst, !dbg !7
   ret void, !dbg !7
 }
 ; CHECK-LABEL: atomic_signal_fence_seq_cst
Index: test/Linker/Inputs/syncscope-1.ll
===================================================================
--- /dev/null
+++ test/Linker/Inputs/syncscope-1.ll
@@ -0,0 +1,6 @@
+define void @syncscope_1() {
+  fence syncscope("agent") seq_cst
+  fence syncscope("workgroup") seq_cst
+  fence syncscope("wavefront") seq_cst
+  ret void
+}
Index: test/Linker/Inputs/syncscope-2.ll
===================================================================
--- /dev/null
+++ test/Linker/Inputs/syncscope-2.ll
@@ -0,0 +1,6 @@
+define void @syncscope_2() {
+  fence syncscope("image") seq_cst
+  fence syncscope("agent") seq_cst
+  fence syncscope("workgroup") seq_cst
+  ret void
+}
Index: test/Linker/syncscopes.ll
===================================================================
--- /dev/null
+++ test/Linker/syncscopes.ll
@@ -0,0 +1,11 @@
+; RUN: llvm-link %S/Inputs/syncscope-1.ll %S/Inputs/syncscope-2.ll -S | FileCheck %s
+
+; CHECK-LABEL: define void @syncscope_1
+; CHECK: fence syncscope("agent") seq_cst
+; CHECK: fence syncscope("workgroup") seq_cst
+; CHECK: fence syncscope("wavefront") seq_cst
+
+; CHECK-LABEL: define void @syncscope_2
+; CHECK: fence syncscope("image") seq_cst
+; CHECK: fence syncscope("agent") seq_cst
+; CHECK: fence syncscope("workgroup") seq_cst
Index: test/Transforms/GVN/PRE/atomic.ll
===================================================================
--- test/Transforms/GVN/PRE/atomic.ll
+++ test/Transforms/GVN/PRE/atomic.ll
@@ -208,14 +208,14 @@
   ret void
 }
 
-; Can't DSE across a full singlethread fence
+; Can't DSE across a full syncscope("singlethread") fence
 define void @fence_seq_cst_st(i32* %P1, i32* %P2) {
 ; CHECK-LABEL: @fence_seq_cst_st(
 ; CHECK: store
-; CHECK: fence singlethread seq_cst
+; CHECK: fence syncscope("singlethread") seq_cst
 ; CHECK: store
   store i32 0, i32* %P1, align 4
-  fence singlethread seq_cst
+  fence syncscope("singlethread") seq_cst
   store i32 0, i32* %P1, align 4
   ret void
 }
Index: test/Transforms/InstCombine/consecutive-fences.ll
===================================================================
--- test/Transforms/InstCombine/consecutive-fences.ll
+++ test/Transforms/InstCombine/consecutive-fences.ll
@@ -4,7 +4,7 @@
 
 ; CHECK-LABEL: define void @tinkywinky
 ; CHECK-NEXT:   fence seq_cst
-; CHECK-NEXT:   fence singlethread acquire
+; CHECK-NEXT:   fence syncscope("singlethread") acquire
 ; CHECK-NEXT:   ret void
 ; CHECK-NEXT: }
 
@@ -12,21 +12,21 @@
   fence seq_cst
   fence seq_cst
   fence seq_cst
-  fence singlethread acquire
-  fence singlethread acquire
-  fence singlethread acquire
+  fence syncscope("singlethread") acquire
+  fence syncscope("singlethread") acquire
+  fence syncscope("singlethread") acquire
   ret void
 }
 
 ; CHECK-LABEL: define void @dipsy
 ; CHECK-NEXT:   fence seq_cst
-; CHECK-NEXT:   fence singlethread seq_cst
+; CHECK-NEXT:   fence syncscope("singlethread") seq_cst
 ; CHECK-NEXT:   ret void
 ; CHECK-NEXT: }
 
 define void @dipsy() {
   fence seq_cst
-  fence singlethread seq_cst
+  fence syncscope("singlethread") seq_cst
   ret void
 }
 
Index: test/Transforms/Sink/fence.ll
===================================================================
--- test/Transforms/Sink/fence.ll
+++ test/Transforms/Sink/fence.ll
@@ -5,9 +5,9 @@
 define void @test1(i32* ()*) {
 entry:
   %1 = call i32* %0() #0
-  fence singlethread seq_cst
+  fence syncscope("singlethread") seq_cst
   %2 = load i32, i32* %1, align 4
-  fence singlethread seq_cst
+  fence syncscope("singlethread") seq_cst
   %3 = icmp eq i32 %2, 0
   br i1 %3, label %fail, label %pass
 
@@ -20,9 +20,9 @@
 
 ; CHECK-LABEL: @test1(
 ; CHECK:  %[[call:.*]] = call i32* %0()
-; CHECK:  fence singlethread seq_cst
+; CHECK:  fence syncscope("singlethread") seq_cst
 ; CHECK:  load i32, i32* %[[call]], align 4
-; CHECK:  fence singlethread seq_cst
+; CHECK:  fence syncscope("singlethread") seq_cst
 
 
 attributes #0 = { nounwind readnone }
Index: unittests/Analysis/AliasAnalysisTest.cpp
===================================================================
--- unittests/Analysis/AliasAnalysisTest.cpp
+++ unittests/Analysis/AliasAnalysisTest.cpp
@@ -180,10 +180,11 @@
   auto *VAArg1 = new VAArgInst(Addr, PtrType, "vaarg", BB);
   auto *CmpXChg1 = new AtomicCmpXchgInst(
       Addr, ConstantInt::get(IntType, 0), ConstantInt::get(IntType, 1),
-      AtomicOrdering::Monotonic, AtomicOrdering::Monotonic, CrossThread, BB);
+      AtomicOrdering::Monotonic, AtomicOrdering::Monotonic,
+      SyncScope::System, BB);
   auto *AtomicRMW =
       new AtomicRMWInst(AtomicRMWInst::Xchg, Addr, ConstantInt::get(IntType, 1),
-                        AtomicOrdering::Monotonic, CrossThread, BB);
+                        AtomicOrdering::Monotonic, SyncScope::System, BB);
 
   ReturnInst::Create(C, nullptr, BB);