+ Remove redundant getAtomic* member functions from SelectionDAG.
In order to implement the memory model in the AMDGPU backend, we need to generate a specific sequence of instructions depending on the atomic ordering and synchronization scope. These instructions have to stay together as a unit. For example:
  ...
  %val = load atomic i32, i32 addrspace(4)* %in acquire, align 4
  ...
results in:
  ...
  s_waitcnt
  flat_load_dword
  ...
  s_waitcnt
  buffer_wbinvl1_vol
  ...
One approach to implementing this is to use pseudo instructions and expand them into real instructions in the post-RA expansion pass. The problem we have run into with this approach in the AMDGPU backend is that we would have to define quite a lot of pseudo instructions, since there are multiple different instruction opcodes for loads and stores.
Another approach we have explored is to move AtomicOrdering and SynchScope from MemSDNode and AtomicSDNode into MachineMemOperand. This way we do not have to define multiple pseudo instructions, and can instead query the MachineMemOperand directly. There is also a FIXME comment in the source that suggests the same approach.
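To illustrate the second approach, here is a minimal standalone sketch (not the real LLVM API; MemOperandModel and emitAtomicLoad are hypothetical stand-ins) of how a backend pass could consult the atomic info carried on the memory operand to decide which cache-control instructions to wrap around a single load opcode, instead of defining one pseudo instruction per opcode. The acquire sequence mirrors the example above and is illustrative only; the exact sequence is target-dependent.

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

// Stand-ins for LLVM's AtomicOrdering and synchronization scope enums;
// the real patch would reuse the existing LLVM definitions.
enum class AtomicOrdering : uint8_t {
  NotAtomic, Monotonic, Acquire, Release, AcquireRelease,
  SequentiallyConsistent
};
enum class SynchScope : uint8_t { SingleThread, CrossThread };

// A MachineMemOperand-like record carrying the atomic info, so the
// expansion can key off the operand rather than the instruction opcode.
struct MemOperandModel {
  AtomicOrdering Ordering = AtomicOrdering::NotAtomic;
  SynchScope Scope = SynchScope::CrossThread;
};

// Sketch of the expansion: cross-thread loads with acquire semantics get
// the surrounding wait/invalidate sequence; everything else is just the
// plain load. Returns the emitted instruction mnemonics in order.
std::vector<std::string> emitAtomicLoad(const MemOperandModel &MMO) {
  bool IsAcquire = (MMO.Ordering == AtomicOrdering::Acquire ||
                    MMO.Ordering == AtomicOrdering::AcquireRelease ||
                    MMO.Ordering == AtomicOrdering::SequentiallyConsistent) &&
                   MMO.Scope == SynchScope::CrossThread;
  std::vector<std::string> Insts;
  if (IsAcquire)
    Insts.push_back("s_waitcnt");
  Insts.push_back("flat_load_dword");
  if (IsAcquire) {
    Insts.push_back("s_waitcnt");
    Insts.push_back("buffer_wbinvl1_vol");
  }
  return Insts;
}
```

Because the decision is driven entirely by the operand, one expansion function covers every load opcode, which is exactly what the pseudo-instruction approach could not do without duplication.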
If this patch (or some variation of it) is acceptable, the size of SynchScope will increase to 8 bits. Furthermore, we could generalize MachineAtomicInfo into an AdditionalData field with 16 extra bits available.
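A rough sketch of what that packing could look like (the field widths and the AtomicInfoModel name are assumptions for illustration, not the patch's actual layout): 4 bits of ordering and 8 bits of scope, leaving 16 bits spare for a generic AdditionalData field, all inside one 32-bit word.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical bit-packing of the atomic info: bits 0-3 hold the
// ordering, bits 4-11 the 8-bit scope proposed in the summary, and
// bits 12-27 a generic 16-bit AdditionalData field.
struct AtomicInfoModel {
  uint32_t Bits = 0;

  void setOrdering(uint8_t O) { Bits = (Bits & ~0xFu) | (O & 0xFu); }
  uint8_t ordering() const { return Bits & 0xFu; }

  void setScope(uint8_t S) {
    Bits = (Bits & ~(0xFFu << 4)) | (uint32_t(S) << 4);
  }
  uint8_t scope() const { return (Bits >> 4) & 0xFFu; }

  void setAdditionalData(uint16_t D) {
    Bits = (Bits & 0xFFFu) | (uint32_t(D) << 12);
  }
  uint16_t additionalData() const { return uint16_t(Bits >> 12); }
};
```

Widening SynchScope to 8 bits costs nothing here, since the whole record still fits comfortably in the existing storage with room left over for AdditionalData.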