This is an archive of the discontinued LLVM Phabricator instance.

[X86] Disable base pointer register when it is clobbered.
AbandonedPublic

Authored by LuoYuanke on Feb 20 2023, 4:30 PM.

Details

Summary

The base pointer register is reserved by the compiler when a function
has both a dynamically sized alloca and stack realignment. However,
the base pointer register is not defined in the X86 ABI, so users can
still name it in inline assembly. Such inline assembly clobbers the
base pointer register without the user being aware of it. This patch
disables the base pointer register when the compiler detects that it
is clobbered. Below is example code for such a case.

#include <alloca.h>
#include <stddef.h>
#include <string.h>

extern int bar(void *p);
long long foo(size_t size, char c, int id) {
  __attribute__((__aligned__(64))) int a;  /* forces stack realignment */
  char *p = (char *)alloca(size);          /* dynamically sized alloca */
  asm volatile ("nop"::"S"(405):);         /* "S" pins 405 into %esi, the i386 base pointer */
  asm volatile ("movl %0, %1"::"r"(id), "m"(a):);
  p[2] = 8;
  memset(p, c, size);
  return bar(p);
}

Diff Detail

Event Timeline

LuoYuanke created this revision. · Feb 20 2023, 4:30 PM
Herald added a project: Restricted Project. · Feb 20 2023, 4:30 PM
LuoYuanke requested review of this revision. · Feb 20 2023, 4:30 PM
LuoYuanke edited the summary of this revision. · Feb 20 2023, 4:31 PM
craig.topper added inline comments.
llvm/test/CodeGen/X86/i386-baseptr.ll
64

Are we not satisfying the alignment for this alloca now?

LuoYuanke added inline comments.
llvm/test/CodeGen/X86/i386-baseptr.ll
64

From the test case, it seems to be satisfied, but I'm not quite sure about it.

LuoYuanke added inline comments. · Feb 20 2023, 4:54 PM
llvm/test/CodeGen/X86/i386-baseptr.ll
64

Reading it again: no, it does not satisfy the alignment for this alloca now.

LuoYuanke added inline comments. · Feb 20 2023, 5:04 PM
llvm/test/CodeGen/X86/i386-baseptr.ll
64

It seems difficult to satisfy both the alignment and the "base pointer register" clobber. Maybe the user has to change the inline assembly to save/restore "esi" him/herself. Should we report a warning on the assembly to indicate to the user that "esi" is reserved by the compiler?
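
For illustration, a minimal sketch of such a user-side rewrite of the first asm from the summary, preserving %esi manually instead of requesting it via the "S" constraint. This is hypothetical: it assumes the value can be materialized through another register and that the input operand is not addressed relative to %esp (the push/pop temporarily moves the stack pointer).

/* Sketch: save and restore %esi inside the asm itself, since the
   compiler may have reserved it as the base pointer. */
asm volatile ("pushl %%esi\n\t"
              "movl  %0, %%esi\n\t"
              "nop\n\t"
              "popl  %%esi"
              :
              : "r"(405));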

Should asm volatile ("nop"::"S"(405):); be asm volatile ("nop"::"b"(405):);?

> Should asm volatile ("nop"::"S"(405):); be asm volatile ("nop"::"b"(405):);?

For i386, ebx is not reserved as the base pointer register, but for x86-64, rbx is reserved as the base pointer register. And sometimes the user has to use "esi" (e.g., for a syscall):

/* "S" forces call_id into %esi, which the compiler may have reserved. */
__asm__ volatile("int $0x80"
                 : "=a" (ret)
                 : "S" (call_id), "a" (arg1), "d" (arg2), "c" (arg3),
                   "b" (arg4)
                 : "memory");
pengfei added inline comments. · Feb 20 2023, 5:30 PM
llvm/test/CodeGen/X86/i386-baseptr.ll
64

Can we allocate a stack slot to spill/reload "esi" around the inline asm?
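
Conceptually, the emitted i386 code would then bracket the asm with a spill and a reload of the base pointer, e.g. (hypothetical frame offset, frame pointer in %ebp):

movl %esi, -16(%ebp)   # spill the base pointer to a fixed stack slot
nop                    # the inline asm that clobbers %esi
movl -16(%ebp), %esi   # reload the base pointer afterwards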

LuoYuanke added inline comments. · Feb 20 2023, 5:39 PM
llvm/test/CodeGen/X86/i386-baseptr.ll
64

The spill/reload instructions may be hoisted or sunk; they are not always glued to the inline asm instruction.

LuoYuanke added inline comments. · Feb 22 2023, 12:54 AM
llvm/test/CodeGen/X86/i386-baseptr.ll
64

I followed Phoebe's idea and created another patch (https://reviews.llvm.org/D144541) for the fix.

LuoYuanke abandoned this revision. · Mar 20 2023, 5:54 PM