This is an archive of the discontinued LLVM Phabricator instance.

[lldb] Disable tests for x86 that use the write command on XMM registers
Needs Review · Public

Authored by ljmf00 on Jan 21 2022, 2:14 PM.

Details

Reviewers
mgorny
Summary

This patch disables the following tests on non-AVX machines:

  • lldb-shell :: Register/x86-64-write.test
  • lldb-shell :: Register/x86-mm-xmm-write.test

This is due to a bug in GDB that prevents writing some XMM registers on
those machines: https://sourceware.org/bugzilla/show_bug.cgi?id=28803

Diff Detail

Event Timeline

ljmf00 created this revision. Jan 21 2022, 2:14 PM
ljmf00 requested review of this revision. Jan 21 2022, 2:14 PM
ljmf00 added inline comments. Jan 21 2022, 2:16 PM
lldb/test/Shell/Register/x86-mm-xmm-write.test
1–2

Maybe those XFAILs are related to this too. Can anyone with a CPU that has AVX extensions test this on those systems? I don't know the rationale, though.

Why are we disabling a lldb test because of a bug in GDB?

jingham added a subscriber: jingham. Edited Jan 21 2022, 3:15 PM

Why are we disabling a lldb test because of a bug in GDB?

Maybe somebody is running the lldb testsuite with gdbserver? That seems a legit thing to do, but if so, we should make an xfail category for gdbserver and xfail with that.

Why are we disabling a lldb test because of a bug in GDB?

Because GDB is used on systems that don't implement a RegisterContext. I added an inline comment about the Darwin and Windows XFAILs, as I'm not sure whether this is Linux-specific. See https://github.com/llvm/llvm-project/blob/main/lldb/source/Plugins/Process/gdb-remote/GDBRemoteRegisterContext.cpp#L296. Take a look at the issue. You can also reproduce this with register write followed by register read.

This ended up breaking my testsuite for a long time, since my server has an x86-64-v2 CPU instead of an x86-64-v3 one. The buildbots seem to have at least x86-64-v3. As described in the issue, I tested this on an AVX-capable machine (x86-64-v3) and virtualized an older CPU to reproduce it in a clean environment.

Why are we disabling a lldb test because of a bug in GDB?

Maybe somebody is running the lldb testsuite with gdbserver? That seems a legit thing to do, but if so, we should make an xfail category for gdbserver and xfail with that.

I hadn't considered that. Shouldn't the testsuite run with lldb-server anyway? I think so, but how can I confirm that for you?

Why are we disabling a lldb test because of a bug in GDB?

Maybe somebody is running the lldb testsuite with gdbserver? That seems a legit thing to do, but if so, we should make an xfail category for gdbserver and xfail with that.

I hadn't considered that. Shouldn't the testsuite run with lldb-server anyway? I think so, but how can I confirm that for you?

I guess we're just all confused that you are mentioning GDB at all, and the only thing I could think of is gdbserver, since that's the only way anything GDB could get mixed with anything LLDB...

I'm sorry but I don't understand what you're talking about. Could you explain how you are running the tests? Is there some way we can reproduce the problem?

I guess we're just all confused that you are mentioning GDB at all, and the only thing I could think of is gdbserver, since that's the only way anything GDB could get mixed with anything LLDB...

Ok, I will try to give some context. I may be wrong/confused and this may be unrelated to GDB, just a coincidence with the behaviour I observed, but when running LLDB with default settings, the register write command triggers the following function: https://github.com/llvm/llvm-project/blob/main/lldb/source/Plugins/Process/gdb-remote/GDBRemoteRegisterContext.cpp#L296.

Interestingly, I just did a test: I removed GDB from my machine and did a clean build. That function (GDBRemoteRegisterContext::WriteRegister) is still triggered on register write. I'm a bit confused now about why it is triggered. Maybe there is a fallback system? I don't have any guidance on this whatsoever, so my view of how those plugins work may be wrong. It would be nice to have a brief explanation of the RegisterContext pipeline and how it stays in sync with GDBRemoteRegisterContext.

It may be confusing that I linked the GDB bug tracker, but one thing I can be sure of is that this depends on the CPU and, judging by the observed behaviour, on the presence of AVX or of some other XMM-related extension introduced in x86-64-v2 (I can't think of one other than SSE). I can also describe my testing environment and give the libvirt/qemu parameters I used to virtualize the CPU flags, if needed, but the CPU configuration is similar to the models I described. I ran LLDB on a clean archiso live installation.

I'm sorry but I don't understand what you're talking about. Could you explain how you are running the tests? Is there some way we can reproduce the problem?

You can see a build log I posted on discourse: https://ipfs.io/ipfs/bafybeidsf745d4qxrzhbbir4h23ov4rppvo2uke7qh75oj6lfoi6zkcyre/

Ah, I think your confusion is that you missed the “Remote” part of all the classes in lldb that start with “GDB”. They are so called because they use the “gdb remote serial protocol” to communicate with the debug monitor, not because they have anything to do with gdb the debugger. That’s a protocol for communication between a debugger and a debug monitor of some sort, devised by people working on gdb aeons ago:

https://sourceware.org/gdb/onlinedocs/gdb/Remote-Protocol.html

In lldb’s case, that debug monitor can be debugserver (the default on Darwin systems), lldb-server (the default on Linux systems), or anything else that speaks the gdb remote protocol (a lot of JTAG parts come with a gdb remote protocol stub), including potentially gdb’s implementation of the monitor: gdbserver...

But there’s no shared code between gdb & lldb, just a shared communication protocol.
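
To make that concrete, a register write travels over the wire as a plain ASCII packet; here is a rough sketch of the framing (illustration only, not lldb source, and the register numbering and value width are target-specific):

/* Illustration of the gdb-remote wire format only: a register write is an
   ASCII 'P' packet, framed as $<payload>#<checksum>, where the checksum is
   the modulo-256 sum of the payload bytes, printed as two hex digits. */
#include <stdio.h>

static void send_packet(const char *payload) {
  unsigned sum = 0;
  for (const char *p = payload; *p; ++p)
    sum = (sum + (unsigned char)*p) % 256;
  printf("$%s#%02x\n", payload, sum);  /* a real client writes this to its socket */
}

int main(void) {
  /* "P<regnum>=<hex bytes>": the register number and value width depend on
     the target; this particular number/value is just an example. */
  send_packet("P12=0102030405060708090a0b0c0d0e0f10");
  return 0;
}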

Jim

ljmf00 added a comment. Edited Jan 21 2022, 7:24 PM

Ah, I think your confusion is that you missed the “Remote” part of all the classes in lldb that start with “GDB”. They are so called because they use the “gdb remote serial protocol” to communicate with the debug monitor, not because they have anything to do with gdb the debugger. That’s a protocol for communication between a debugger and a debug monitor of some sort, devised by people working on gdb aeons ago:

https://sourceware.org/gdb/onlinedocs/gdb/Remote-Protocol.html

In lldb’s case, that debug monitor can be debugserver (the default on Darwin systems), lldb-server (the default on Linux systems), or anything else that speaks the gdb remote protocol (a lot of JTAG parts come with a gdb remote protocol stub), including potentially gdb’s implementation of the monitor: gdbserver...

But there’s no shared code between gdb & lldb, just a shared communication protocol.

Jim

Thanks, this makes much more sense to me now and gives me much more control to debug this. I'm digging a bit more: I'm tracing Linux kernel syscalls to check whether the data is sent correctly. At first glance it seems fine to me, so maybe it's a ptrace issue? I need to investigate a bit more before I can say that, though. strace tells me this:

ptrace(PTRACE_SETREGSET, 29772, NT_FPREGSET, {iov_base={cwd=0x37f, swd=0, ftw=0, fop=0, rip=0, rdp=0, mxcsr=0x1f80, mxcr_mask=0, st_space=[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], xmm_space=[0xa090807, 0xe0d0c0b, 0x1211100f, 0x16151413, 0xa090807, 0xe0d0c0b, 0x1211100f, 0x16151413, 0xa090807, 0xe0d0c0b, 0x1211100f, 0x16151413, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...], padding=[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}, iov_len=512}) = 0
ptrace(PTRACE_GETREGSET, 29772, NT_FPREGSET, {iov_base={cwd=0x37f, swd=0, ftw=0, fop=0, rip=0, rdp=0, mxcsr=0x1f80, mxcr_mask=0, st_space=[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], xmm_space=[0xa090807, 0xe0d0c0b, 0x1211100f, 0x16151413, 0xa090807, 0xe0d0c0b, 0x1211100f, 0x16151413, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...], padding=[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}, iov_len=512}) = 0

The first call is made when setting xmm2 and the second when reading the current registers back.
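
In case anyone wants to reproduce this outside of LLDB, below is a minimal standalone sketch of the same ptrace sequence (assuming x86-64 Linux; on an affected kernel/CPU combination the final check should report that the write was lost):

/* Minimal sketch of the PTRACE_SETREGSET / NT_FPREGSET round trip described
   above: write a pattern into xmm2 of a stopped child, read it back, and
   check whether the write stuck.  Error handling is kept to a minimum. */
#include <elf.h>       /* NT_PRFPREG, i.e. the NT_FPREGSET shown by strace */
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/ptrace.h>
#include <sys/uio.h>
#include <sys/user.h>  /* struct user_fpregs_struct (the 512-byte FXSAVE area) */
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
  pid_t pid = fork();
  if (pid == 0) {                        /* child: stop and wait to be traced */
    ptrace(PTRACE_TRACEME, 0, NULL, NULL);
    raise(SIGSTOP);
    _exit(0);
  }
  waitpid(pid, NULL, 0);                 /* wait until the child has stopped */

  struct user_fpregs_struct fpregs;
  struct iovec iov = { .iov_base = &fpregs, .iov_len = sizeof(fpregs) };
  ptrace(PTRACE_GETREGSET, pid, (void *)NT_PRFPREG, &iov);  /* read FP state */

  unsigned char pattern[16];
  for (int i = 0; i < 16; ++i)
    pattern[i] = 0x07 + i;               /* 0x07, 0x08, ... as in the test */
  memcpy((unsigned char *)fpregs.xmm_space + 2 * 16, pattern, 16);  /* xmm2 */
  ptrace(PTRACE_SETREGSET, pid, (void *)NT_PRFPREG, &iov);  /* write it back */

  memset(&fpregs, 0, sizeof(fpregs));
  ptrace(PTRACE_GETREGSET, pid, (void *)NT_PRFPREG, &iov);  /* read it again */

  int ok = memcmp((unsigned char *)fpregs.xmm_space + 2 * 16, pattern, 16) == 0;
  printf("xmm2 write via NT_FPREGSET %s\n", ok ? "stuck" : "was lost");

  kill(pid, SIGKILL);
  return ok ? 0 : 1;
}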

Ok, so to summarize: there are some CPU families where setting xmm2..xmm9 via ptrace doesn't work for some reason? That's quite weird. Could it perhaps be an xsave bug?

FWICS the corresponding read test passes, so apparently setting them directly within the program works.

Could you tell us exactly what CPU this is? Ideally, paste /proc/cpuinfo. I'm pretty sure this test passed successfully on my old Athlon64, which definitely didn't have AVX (or SSE3). Unfortunately, I can't retest it anymore since it died almost 2 years ago.

Ok, so to summarize: there are some CPU families where setting xmm2..xmm9 via ptrace doesn't work for some reason? That's quite weird. Could it perhaps be an xsave bug?

Exactly. ptrace with NT_FPREGSET doesn't work properly, but NT_X86_XSTATE does. It makes sense to me that the presence of AVX determines this, since, from my inspection of the LLDB code, there is a fallback mechanism in ReadRegisterSet that tries to use ptrace with NT_X86_XSTATE and falls back to NT_FPREGSET if that fails. The call made with NT_X86_XSTATE gives different strace output:

ptrace(PTRACE_SETREGSET, 213817, NT_X86_XSTATE, {iov_base=0x555597dcd4a0, iov_len=1088}) = 0
ptrace(PTRACE_GETREGSET, 213817, NT_X86_XSTATE, {iov_base=0x555597dcd4a0, iov_len=1088}) = 0
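
Roughly the kind of fallback I mean, as an illustration only (this is not the actual LLDB code):

/* Illustration: prefer the full NT_X86_XSTATE regset and fall back to the
   legacy NT_FPREGSET (FXSAVE) layout if the kernel rejects it.  The buffer
   must be large enough for the whole XSAVE area on the first path. */
#include <elf.h>
#include <stddef.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/uio.h>

#ifndef NT_X86_XSTATE
#define NT_X86_XSTATE 0x202   /* value from linux/elf.h */
#endif

long read_fp_register_set(pid_t pid, void *buf, size_t len) {
  struct iovec iov = { .iov_base = buf, .iov_len = len };

  /* Extended state first: covers x87, XMM, YMM, ... */
  if (ptrace(PTRACE_GETREGSET, pid, (void *)NT_X86_XSTATE, &iov) == 0)
    return 0;

  /* If that regset is unsupported (e.g. no XSAVE), retry with the legacy
     layout, which only covers x87 and XMM state. */
  iov.iov_base = buf;
  iov.iov_len = len;
  return ptrace(PTRACE_GETREGSET, pid, (void *)NT_PRFPREG, &iov);
}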

FWICS the corresponding read test passes, so apparently setting them directly within the program works.

Yes, I can confirm that. Writing directly to the registers is fine, and reading them only triggers PTRACE_GETREGSET, so reads are fine. The problem is when ptrace is called with PTRACE_SETREGSET.
Inspecting the kernel source code, I see that NT_FPREGSET is handled by xfpregs_set: https://github.com/torvalds/linux/blob/master/arch/x86/kernel/fpu/regset.c#L89. That memset seems very suspicious here. Blaming the source code, the change appears to have landed before Linux v5.16.

This makes sense to me, since I use Arch with the latest kernel while a lot of people use LTS or older versions via Ubuntu/Debian (according to Wikipedia, Debian unstable uses Linux 5.10.46). I will downgrade the kernel and try to reproduce this. Ultimately, I can try to recompile the kernel without that memset and see what happens. I can't find a logical explanation other than wrong offsets? If I calculated correctly, the ranges of bytes are the same size. It would be great if anyone with knowledge of the kernel could explain this to me.
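
For what it's worth, the sizes do appear to line up: struct user_fpregs_struct is exactly the 512-byte FXSAVE area (matching the iov_len=512 in the strace output above), of which 256 bytes are the 16 XMM registers. A quick compile-time check of that arithmetic, assuming x86-64 glibc headers:

/* Sanity-check the layout behind the strace output above: the legacy FXSAVE
   area handled by NT_FPREGSET is 512 bytes, with the XMM registers taking
   256 bytes of it. */
#include <assert.h>
#include <sys/user.h>

static_assert(sizeof(struct user_fpregs_struct) == 512,
              "FXSAVE area is 512 bytes (matches iov_len=512 above)");
static_assert(sizeof(((struct user_fpregs_struct *)0)->xmm_space) == 16 * 16,
              "16 XMM registers, 16 bytes each");
static_assert(sizeof(((struct user_fpregs_struct *)0)->st_space) == 8 * 16,
              "8 x87/MMX registers, stored in 16-byte slots");

int main(void) { return 0; }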

Could you tell us exactly what CPU this is? Ideally, paste /proc/cpuinfo. I'm pretty sure this test passed successfully on my old Athlon64, which definitely didn't have AVX (or SSE3). Unfortunately, I can't retest it anymore since it died almost 2 years ago.

Is lscpu enough? See https://termbin.com/c2pt .

Thank you. Yes, please test on an older kernel, in case it's specifically a kernel regression.

labath added a subscriber: labath. Jan 23 2022, 10:51 AM

If this is a problem with PTRACE_SETREGSET(NT_FPREGSET), then we might be able to work around it by using PTRACE_POKEUSER instead. But it'd definitely be good to confirm this, so that we can report the bug to kernel devs.

Thank you. Yes, please test on an older kernel, in case it's specifically a kernel regression.

I can confirm the regression on Linux LTS 5.10.93, and probably on later kernels too; it was introduced by the patch I mentioned. I reported it to the kernel bug tracker. See https://bugzilla.kernel.org/show_bug.cgi?id=215524.

If this is a problem with PTRACE_SETREGSET(NT_FPREGSET), then we might be able to work around it by using PTRACE_POKEUSER instead. But it'd definitely be good to confirm this, so that we can report the bug to kernel devs.

According to the ptrace documentation, PTRACE_SETFPREGS seems a better fit, although both it and PTRACE_POKEUSER disallow writing to some specific general-purpose registers, so we should only use this for FP/XMM registers? AFAIK, BSD-based kernels implement PT_SETXMMREGS, although I don't see any documentation about an equivalent on the Linux kernel side.
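
To make the PTRACE_SETFPREGS idea concrete, something along these lines is what I have in mind; just a sketch, assuming x86-64, and I haven't verified whether this path is affected by the same kernel bug:

/* Sketch of the alternative write path: PTRACE_GETFPREGS / PTRACE_SETFPREGS
   operate on the same struct user_fpregs_struct as NT_FPREGSET, so only the
   ptrace request differs.  Whether this avoids the regression depends on
   where the two codepaths converge inside the kernel. */
#include <string.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/user.h>

/* Write 16 bytes into XMM register `index` (0..15) of a stopped tracee. */
long write_xmm_via_setfpregs(pid_t pid, unsigned index,
                             const unsigned char value[16]) {
  struct user_fpregs_struct fpregs;

  if (ptrace(PTRACE_GETFPREGS, pid, NULL, &fpregs) != 0)
    return -1;
  memcpy((unsigned char *)fpregs.xmm_space + index * 16, value, 16);
  return ptrace(PTRACE_SETFPREGS, pid, NULL, &fpregs);
}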

Could you try replacing the new function with the old one and seeing if that helps? Or alternatively, try building one kernel from the commit just before that change and one with the change.

Thank you. Yes, please test on an older kernel, in case it's specifically a kernel regression.

I can confirm the regression on Linux LTS 5.10.93, and probably on later kernels too; it was introduced by the patch I mentioned. I reported it to the kernel bug tracker. See https://bugzilla.kernel.org/show_bug.cgi?id=215524.

If this is a problem with PTRACE_SETREGSET(NT_FPREGSET), then we might be able to work around it by using PTRACE_POKEUSER instead. But it'd definitely be good to confirm this, so that we can report the bug to kernel devs.

According to the ptrace documentation, PTRACE_SETFPREGS seems a better fit, although both it and PTRACE_POKEUSER disallow writing to some specific general-purpose registers, so we should only use this for FP/XMM registers?

I think either would be fine, if it works (I'm not sure how early the PTRACE_SETFPREGS and PTRACE_SETREGSET codepaths converge inside the kernel). I wouldn't be too worried about the write prohibition clause. I'm pretty sure that applies to things like setting the single-step bit inside the flags register, and similar mischief that could confuse/crash the kernel. Those restrictions should be enforced for any register-writing method. And now, thanks to @mgorny, we actually have tests that would catch situations when we're not able to change some register.

If you want, I can try to create a patch for this, though it might take me a couple of days to get around to it.

AFAIK, BSD-based kernels implement PT_SETXMMREGS, although I don't see any documentation about an equivalent on the Linux kernel side.

Yeah, linux doesn't have those.

To explain a bit, PT_SETXMMREGS exists because originally PT_SETFPREGS used FSAVE format on i386, and this format didn't support setting XMM registers. Newer CPUs support FXSAVE instead, and this is exposed via PT_SETFPREGS on amd64 and PT_SETXMMREGS on i386.

Could you try replacing the new function with the old one and seeing if that helps? Or alternatively, try building one kernel from the commit just before that change and one with the change.

I can reproduce the problem with the change applied, and after removing the memset call I can write to the registers. I submitted a patch to the kernel mailing list; see the kernel bug tracker for the discussion. I described a bit of LLDB's flow for discovering XSAVE and FXSAVE there, so correct me if I got something wrong.

If you want, I can try to create a patch for this, though it might take me a couple of days to get around to it.

I'm probably not getting the full picture, but my take is: if we propose this change only because of this specific kernel regression, it's not worth it, although we should consider it if there is any other benefit to using PTRACE_POKEUSER.

To explain a bit, PT_SETXMMREGS exists because originally PT_SETFPREGS used FSAVE format on i386, and this format didn't support setting XMM registers. Newer CPUs support FXSAVE instead, and this is exposed via PT_SETFPREGS on amd64 and PT_SETXMMREGS on i386.

Ok, makes sense.

Thanks for tracking this down. I'm looking forward to seeing how the kernel issue gets resolved.

If you want, I can try to create a patch for this, though it might take me a couple of days to get around to it.

I'm probably not getting the full picture, but my take is: if we propose this change only because of this specific kernel regression, it's not worth it, although we should consider it if there is any other benefit to using PTRACE_POKEUSER.

PTRACE_POKE/PEEKUSER might be theoretically faster for some use cases, as they can access just a few bytes instead of the whole XSAVE block, but we're nowhere near doing those kinds of optimizations.

As for being worth it, we have added workarounds for kernel bugs in the past (try grepping for SingleStepWorkaround), so this wouldn't be the first one and, if done right, it also shouldn't be particularly burdensome.

I'd say it all comes down to how much you care about this particular configuration having a clean test run. My position is that I'd rather have a workaround in place than to have to figure out how to skip this test for this exact setup.

If this is a problem with PTRACE_SETREGSET(NT_FPREGSET), then we might be able to work around it by using PTRACE_POKEUSER instead. But it'd definitely be good to confirm this, so that we can report the bug to kernel devs.

Oleg Nesterov <oleg@redhat.com> is the kernel maintainer for PTRACE; AFAICT there's no dedicated mailing list for PTRACE.

I submitted a patch to the kernel mailing list; see the kernel bug tracker for the discussion.

link: https://lore.kernel.org/lkml/20220125022015.874422-1-contact@lsferreira.net/

Meanwhile, a commit that fixes this issue will land in the Linux stable branches soon: https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git/commit/?id=44cad52cc14ae10062f142ec16ede489bccf4469. Do we have an alternative way to handle this, e.g. a custom XFAIL flag for specific Linux versions?

labath added a comment. Mar 1 2022, 8:08 AM

Meanwhile, a commit that fixes this issue will land in the Linux stable branches soon: https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git/commit/?id=44cad52cc14ae10062f142ec16ede489bccf4469. Do we have an alternative way to handle this, e.g. a custom XFAIL flag for specific Linux versions?

We don't. That's why I said this:

I'd say it all comes down to how much you care about this particular configuration having a clean test run. My position is that I'd rather have a workaround in place than to have to figure out how to skip this test for this exact setup.