[AArch64] Fix bug 35094 atomicrmw on Armv8.1-A+lse

Authored by christof on Mar 18 2019, 2:21 AM.



Fixes https://bugs.llvm.org/show_bug.cgi?id=35094

The AArch64 dead register definition pass should leave the atomicrmw
instructions alone on Armv8.1-A with the LSE extension. The reason is
the following statement in the Arm ARM:

"The ST<OP> instructions, and LD<OP> instructions where the destination
register is WZR or XZR, are not regarded as doing a read for the purpose
of a DMB LD barrier."

A good example was given in the gcc thread by Will Deacon (linked in the
bugzilla ticket 35094):

P0 (atomic_int* y,atomic_int* x) {
  atomic_store_explicit(x,1,memory_order_relaxed);
  atomic_thread_fence(memory_order_release);
  atomic_store_explicit(y,1,memory_order_relaxed);
}

P1 (atomic_int* y,atomic_int* x) {
  atomic_fetch_add_explicit(y,1,memory_order_relaxed);  // STADD
  atomic_thread_fence(memory_order_acquire);
  int r0 = atomic_load_explicit(x,memory_order_relaxed);
}

P2 (atomic_int* y) {
  int r1 = atomic_load_explicit(y,memory_order_relaxed);
}

My understanding is that it is forbidden for r0 == 0 and r1 == 2 after
this test has executed. However, if the relaxed add in P1 compiles to
STADD and the subsequent acquire fence is compiled as DMB LD, then we
don't have any ordering guarantees in P1 and the forbidden result could
be observed.

Change-Id: I419f9f9df947716932038e1100c18d10a96408d0
llvm-svn: 356360