This is an archive of the discontinued LLVM Phabricator instance.

[lldb][AArch64] Add memory tag reading to lldb-server
ClosedPublic

Authored by DavidSpickett on Jan 28 2021, 1:44 AM.

Details

Summary

This adds memory tag reading using the new "qMemTags"
packet and ptrace on AArch64 Linux.

This new packet follows the one used by GDB.
(https://sourceware.org/gdb/current/onlinedocs/gdb/General-Query-Packets.html)

On AArch64 Linux we use ptrace's PEEKMTETAGS to read
tags and we assume that lldb has already checked that the
memory region actually has tagging enabled.

We do not assume that lldb has expanded the requested range
to granule boundaries, so we expand it again to be sure.
(Although lldb will be sending aligned ranges, because it happens
to need them client side anyway.)
Also we don't assume untagged addresses, so for AArch64 we'll
remove the top byte before using them. (The top byte includes
MTE and other non-address data.)
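As a rough illustration, the top-byte strip described above amounts to clearing bits 63:56; the helper name here is made up for the sketch and is not the actual lldb API:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical sketch of stripping the AArch64 top byte before an address
// is handed to ptrace. Under TBI the top byte (bits 63:56) can carry an MTE
// logical tag or other non-address data.
uint64_t RemoveTopByte(uint64_t addr) {
  return addr & ~(0xFFULL << 56); // clear bits 63:56
}
```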

To do the ptrace read, NativeProcessLinux will ask the native
register context for a memory tag manager based on the
type in the packet. This also gives you the ptrace numbers you need.
(It's called a register context but it also has non-register data,
so this saves adding another per-platform subclass.)

The only supported platform for this is AArch64 Linux and the only
supported tag type is MTE allocation tags. Anything else will
error.

Ptrace can return a partial result but for lldb-server we will
be treating that as an error. To succeed we need to get all the tags
we expect.

(Note that the protocol leaves room for logical tags to be
read via qMemTags but this is not going to be implemented for lldb
at this time.)
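For reference, a request in the shape described by the GDB documentation linked above can be sketched as below; the exact field layout is an assumption to double-check against the GDB docs, and the helper is illustrative only:

```cpp
#include <cstdint>
#include <cstdio>
#include <string>

// Build a qMemTags request: "qMemTags:<start addr>,<length>:<type>".
// The reply is 'm' followed by hex-encoded tags (or an error).
std::string BuildQMemTags(uint64_t addr, uint64_t len, unsigned type) {
  char buf[64];
  std::snprintf(buf, sizeof(buf), "qMemTags:%llx,%llx:%x",
                (unsigned long long)addr, (unsigned long long)len, type);
  return buf;
}
```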

Diff Detail

Event Timeline

DavidSpickett created this revision. Jan 28 2021, 1:44 AM
DavidSpickett requested review of this revision. Jan 28 2021, 1:44 AM
Herald added a project: Restricted Project. Jan 28 2021, 1:44 AM
DavidSpickett added a subscriber: emaste.

This goes together with https://reviews.llvm.org/D95602, I've split them to make review more manageable.

@emaste I saw you were doing something in BSD land around MTE; if you see any blockers from that side of things, please say. Otherwise, just FYI. Obviously this first version is only doing Linux, so out of the box it wouldn't work, but I think the logic is generic enough to be moved around later to support the BSD(s).

This patch looks quite neat overall. I have a few minor questions, comments, and suggestions.

  1. To get the review going, consider further splitting these patches up into separate chunks containing, for example, the MemoryTagHandler API and related unit tests, GDB remote addons, ptrace work, etc. Also add a reference to the GDB mailing list discussion where the design of the qMemTags packet has been discussed.
  2. Soft suggestion to rename MemoryTagHandler; IMO it sounds more like a routine rather than a set of helpers used for manipulating memory tags. Also, lldb/include/lldb/Target doesn't seem like a good place for helper functions, as MemoryTagHandler is not used as a memory tag container class. Maybe consider putting it under source/Plugins/Process/Utility.
  3. The changes to lldb/packages/Python/lldbsuite/test/decorators.py look reasonable and can be committed right away, with a small caveat.

Some other minor comments inline as well.

lldb/packages/Python/lldbsuite/test/decorators.py
825 ↗(On Diff #319791)

Consider using Compiler instead of Toolchain to keep terminology consistent with rest of the code in this file.

858 ↗(On Diff #319791)

Omit one space.

lldb/source/Plugins/Process/Linux/NativeProcessLinux.cpp
1462

Do you think future architectures will have any different ptrace_read_req/ptrace_write_req than PTRACE_PEEKMTETAGS/PTRACE_POKEMTETAGS?

If not, then there is no advantage to putting them inside MemoryTaggingDetails.

lldb/source/Plugins/Process/Linux/NativeRegisterContextLinux.h
62

Add comment about MemoryTaggingDetails

68

Can type be negative? Does the gdb remote protocol specify type as int32?

To get the review going, consider further splitting these patches up into separate chunks containing, for example, the MemoryTagHandler API and related unit tests, GDB remote addons, ptrace work, etc. Also add a reference to the GDB mailing list discussion where the design of the qMemTags packet has been discussed.

Will do. I erred on the side of larger patches so you could see the pieces connect in the same review.

Soft suggestion to rename MemoryTagHandler; IMO it sounds more like a routine rather than a set of helpers used for manipulating memory tags. Also, lldb/include/lldb/Target doesn't seem like a good place for helper functions, as MemoryTagHandler is not used as a memory tag container class. Maybe consider putting it under source/Plugins/Process/Utility.

How about "MemoryTagInterface"? It's very similar to a Java "interface" class.

I think my reasoning for putting the header in lldb/Target was because it's used in the client and the server, and one of them tends not to include from the latter location. I'll double check.

DavidSpickett added inline comments. Feb 15 2021, 8:21 AM
lldb/source/Plugins/Process/Linux/NativeProcessLinux.cpp
1462

Given that "MTE" is in the name, I would think yes. (though they might end up being aliases to the same number)
I also want to avoid having AArch64 only (at the current time) constants in generic code.

To get the review going, consider further splitting these patches up into separate chunks containing, for example, the MemoryTagHandler API and related unit tests, GDB remote addons, ptrace work, etc. Also add a reference to the GDB mailing list discussion where the design of the qMemTags packet has been discussed.

Will do. I erred on the side of larger patches so you could see the pieces connect in the same review.

Soft suggestion to rename MemoryTagHandler; IMO it sounds more like a routine rather than a set of helpers used for manipulating memory tags. Also, lldb/include/lldb/Target doesn't seem like a good place for helper functions, as MemoryTagHandler is not used as a memory tag container class. Maybe consider putting it under source/Plugins/Process/Utility.

How about "MemoryTagInterface"? It's very similar to a Java "interface" class.

The MemoryTagHandler class does indeed look like an interface class. An example of an interface class in LLDB code is RegisterInfosInterface, which serves as an interface over various arch-dependent register infos. But MemoryTagHandler serves more like a provider or manager rather than a container.

Maybe you can use MemoryTagManager or MemoryTagUtils, but this is just a soft comment. You can keep the existing name or change it to an interface or anything you feel appropriate.

I think my reasoning for putting the header in lldb/Target was because it's used in the client and the server, and one of them tends not to include from the latter location. I'll double check.

Split the patches into smaller changes. (see stack)

DavidSpickett retitled this revision from [lldb][AArch64] Add MTE memory tag reading to lldb-server to [lldb][AArch64] Add memory tag reading to lldb-server. Feb 23 2021, 6:31 AM
DavidSpickett edited the summary of this revision.
DavidSpickett added inline comments. Feb 23 2021, 6:43 AM
lldb/test/API/tools/lldb-server/memory-tagging/main.c
37

This looks good apart from minor nits inline

lldb/source/Plugins/Process/Linux/NativeProcessLinux.cpp
1466

Is the ptrace request a success if the number of tags requested is not equal to the number of tags read? If not, then this and the following condition may be redundant.

lldb/source/Plugins/Process/Linux/NativeRegisterContextLinux_arm64.cpp
883

This piece needs to be run through clang-format.

lldb/source/Plugins/Process/gdb-remote/GDBRemoteCommunicationServerLLGS.cpp
3481

Just curious, the response starts with 'm'. What's the design need for using 'm' in the qMemTags response?

lldb/test/API/tools/lldb-server/memory-tagging/TestGdbRemoteMemoryTagging.py
20

If skipUnlessAArch64MTELinuxCompiler can check for AArch64 and Linux, then we won't need the above two decorators.

DavidSpickett marked an inline comment as done. Mar 3 2021, 3:52 AM
DavidSpickett added inline comments.
lldb/source/Plugins/Process/Linux/NativeProcessLinux.cpp
1466

Well, PtraceWrapper doesn't check the iovec, but I'll check the kernel source to see if it's actually possible for it to fail that way.

lldb/source/Plugins/Process/gdb-remote/GDBRemoteCommunicationServerLLGS.cpp
3481

This is for future multi part replies ala qfThreadInfo (https://sourceware.org/gdb/current/onlinedocs/gdb/General-Query-Packets.html#General-Query-Packets). I'll add a comment with this too.

DavidSpickett marked an inline comment as done. Mar 3 2021, 4:05 AM
DavidSpickett added inline comments.
lldb/test/API/tools/lldb-server/memory-tagging/TestGdbRemoteMemoryTagging.py
20

I'll merge them into one (at least the one you use in tests; separate functions). Also, I just realised this is not checking that the remote supports MTE; it only worked because I've been using the one qemu instance.

DavidSpickett marked an inline comment as not done. Mar 3 2021, 6:12 AM
DavidSpickett marked an inline comment as done. Mar 4 2021, 6:26 AM
DavidSpickett added inline comments.
lldb/source/Plugins/Process/Linux/NativeProcessLinux.cpp
1466

In linux/arch/arm64/kernel/mte.c __access_remote_tags there is a comment:

+/*
+ * Access MTE tags in another process' address space as given in mm. Update
+ * the number of tags copied. Return 0 if any tags copied, error otherwise.
+ * Inspired by __access_remote_vm().
+ */

*any tags* being the key words.

So the scenario is:

  • ask to read from addr X in page 0, with length of pagesize+some so the range spills into page 1
  • kernel can access page 0, reads tags until the end of the page
  • tries to access page 1 to read the rest, fails, returns 0 (success) since *some* tags were read
  • we see the ptrace call succeeded but with less tags than we expected

I don't see it's worth dealing with this corner case here since lldb will look before it leaps. It would have errored much earlier here because either page 1 isn't in the tracee's memory regions or it wasn't MTE enabled.
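The boundary arithmetic in that scenario can be sketched as follows. Granule and page sizes are assumed (16-byte MTE granules, 4 KiB pages) and the helper models the kernel behaviour described above; it is not kernel or lldb code:

```cpp
#include <cassert>
#include <cstdint>

constexpr uint64_t kGranuleSize = 16;  // MTE granule (AArch64)
constexpr uint64_t kPageSize = 4096;   // typical Linux page size

// Tags the kernel could copy before hitting the page boundary, if the
// next page turns out to be inaccessible.
uint64_t TagsBeforePageEnd(uint64_t addr, uint64_t num_tags) {
  uint64_t bytes_to_page_end = kPageSize - (addr % kPageSize);
  uint64_t tags_available = bytes_to_page_end / kGranuleSize;
  return num_tags < tags_available ? num_tags : tags_available;
}
```

So a read starting one granule before the end of page 0 that asks for 3 tags gets only 1 back, yet ptrace still reports success.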

lldb/test/API/tools/lldb-server/memory-tagging/TestGdbRemoteMemoryTagging.py
20

On further consideration I don't think it's worth merging them. Sure we save 2 lines in each test but then anyone reading it is going to have to lookup what the combo does. I'd rather keep them listed like this for clarity (and later adding new platforms?).

Also I was wrong, the test does check for non MTE systems. If the tracee prints (nil) for the buffer, that means it's not an MTE system.
(we can't use the isAArch64MTE call in this type of test)

  • Use RemoveNonAddressBits instead of RemoveLogicalTag
  • Make the top byte of test buffer pointer non zero.

As noted, setting the top byte doesn't prove much
but it means if a future kernel gets more strict
we're already coping with that.

DavidSpickett marked 2 inline comments as done. Mar 8 2021, 8:19 AM
DavidSpickett added inline comments.
lldb/source/Plugins/Process/Linux/NativeProcessLinux.cpp
1466

Added a comment in the code too.

  • Comments for the MemoryTaggingDetails struct
DavidSpickett marked 2 inline comments as done. Mar 8 2021, 8:27 AM
DavidSpickett added inline comments.
lldb/source/Plugins/Process/Linux/NativeRegisterContextLinux.h
68
+@var{type} is the type of tag the request wants to fetch.  The type is a signed
+integer.
omjavaid added inline comments. Mar 9 2021, 5:50 AM
lldb/test/API/tools/lldb-server/memory-tagging/main.c
14

An infinite loop in the test program may not be a good idea.

omjavaid added inline comments. Mar 9 2021, 6:13 AM
lldb/source/Plugins/Process/Linux/NativeProcessLinux.cpp
1466

This means emitting fewer than the requested number of tags is legit. However, we have set the tags vector size equal to whatever we have requested. We set an error string which is actually not being used anywhere and can be removed in favor of a log message, to help with debugging if needed.

Also, we need to resize the vector after the ptrace request, so that we use this size in the gdb remote reply.

omjavaid added inline comments. Mar 9 2021, 6:22 AM
lldb/source/Plugins/Process/Linux/NativeProcessLinux.cpp
1454

Is there a difference between Granule and GranuleSize?

lldb/test/API/tools/lldb-server/memory-tagging/TestGdbRemoteMemoryTagging.py
103

Do we test partial read case here?

DavidSpickett marked an inline comment as done. Mar 9 2021, 8:00 AM
DavidSpickett added inline comments. Mar 9 2021, 8:28 AM
lldb/source/Plugins/Process/Linux/NativeProcessLinux.cpp
1454

Granule is used to mean the current Granule you're in. So if you were at address 0x10 you'd be in the [0x10,0x20) granule.
GranuleSize is the size of each granule.

If I saw manager->GetGranule I'd expect it to be something like std::pair<addr_t, addr_t> GetGranule(addr_t addr);.
As in, tell me which granule I'm in.

Though I admit this is more an issue of "ExpandToGranule" not being clear enough, rather than "GetGranuleSize" being too redundant.
AlignToGranule(s) perhaps? But then you ask "align how", hence "ExpandTo". Anyway.
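For clarity, the "ExpandTo" semantics being discussed amount to rounding the range outward to granule boundaries. A sketch with an illustrative signature (not the actual lldb API):

```cpp
#include <cassert>
#include <cstdint>
#include <utility>

constexpr uint64_t kGranuleSize = 16; // MTE granule size on AArch64

// Round the start of the range down and the end up to granule
// boundaries, returning the new (start, length) pair.
std::pair<uint64_t, uint64_t> ExpandToGranule(uint64_t addr, uint64_t len) {
  uint64_t start = addr & ~(kGranuleSize - 1);
  uint64_t end = (addr + len + kGranuleSize - 1) & ~(kGranuleSize - 1);
  return {start, end - start};
}
```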

1466

I'll log that error in GDBRemoteCommunicationServerLLGS.cpp.

I'll do what you suggested to support partial read on the server side. Then lldb client can error if it doesn't get what it expected.
(testing partial reads on the lldb side is going to be very difficult anyway since we'd need a valid smaps entry to even issue one)

lldb/test/API/tools/lldb-server/memory-tagging/TestGdbRemoteMemoryTagging.py
103

Ack. No, but it should be a case of reading off the end of the allocated buffer by some amount.

lldb/test/API/tools/lldb-server/memory-tagging/main.c
14

I'll check what the timeouts are on the expect packet sequence. I think it would get killed eventually if we didn't see the output we're looking for.
(I could do something more CPU friendly, sleep/halt/wait on something)

omjavaid added inline comments. Mar 10 2021, 12:02 AM
lldb/source/Plugins/Process/Linux/NativeProcessLinux.cpp
1454

Right I guess we can stay with current nomenclature. Thanks for detailed explanation.

1466

If we are following an approach similar to the m/M gdb remote packets for tags, then it's ok to read partial data in case part of the memory in the requested address range was inaccessible. Maybe make an appropriate adjustment to the command output; I don't recall what we currently emit for partial memory reads, but it should be something like <tags not available>.

lldb/test/API/tools/lldb-server/memory-tagging/main.c
14

In the past I have seen the LLDB buildbot sometimes piling up executables which Python couldn't clean up for whatever reason. It's better if the executable can safely exit within a reasonable period.

DavidSpickett added inline comments. Mar 15 2021, 4:03 AM
lldb/source/Plugins/Process/Linux/NativeProcessLinux.cpp
1466

I did some digging and lldb-server does not return partial data when a read fails.

for (bytes_read = 0; bytes_read < size; bytes_read += remainder) {
  Status error = NativeProcessLinux::PtraceWrapper(
      PTRACE_PEEKDATA, GetID(), (void *)addr, nullptr, 0, &data);
  if (error.Fail())
    return error;

  remainder = size - bytes_read;
  <...>
  dst += k_ptrace_word_size;
}
return Status();

I was able to test partial writes too. There too we don't attempt to restore if we only wrote a smaller amount; writing less than the total is a failure.

However, it is true that I'm not handling the syscall properly. I need to loop like ReadMemory does, so I'm going to do that.
Loop until we've read all tags, or return the error we get.

omjavaid added inline comments. Mar 15 2021, 4:54 AM
lldb/source/Plugins/Process/Linux/NativeProcessLinux.cpp
1466

Considering the gdb remote protocol, this does not comply with the 'm' packet, which says:
"The reply may contain fewer addressable memory units than requested if the server was able to read only part of the region of memory."

Maybe we can fix this in a separate patch where LLDB should emit a proper error code, based on which we can either completely fail or send the partial read data.
What do you think?

DavidSpickett added inline comments. Mar 15 2021, 5:42 AM
lldb/source/Plugins/Process/Linux/NativeProcessLinux.cpp
1466

That would be the ideal thing to do; however, I was wrong about lldb not supporting it at all. In fact, memory read can do partial results:

<step to clear caches>
(lldb) memory read 0x0000fffff7ff7000-16
lldb             <  34> send packet: $qMemoryRegionInfo:fffff7ff9010#df
lldb             <  55> read packet: $start:fffff7ff9000;size:1000;permissions:rw;flags:;#fa
lldb             <  50> send packet: $Mfffff7ff9010,f:000000000000000000000000000000#84
lldb             <   6> read packet: $OK#9a
lldb             <  34> send packet: $qMemoryRegionInfo:fffff7ff9010#df
lldb             <  55> read packet: $start:fffff7ff9000;size:1000;permissions:rw;flags:;#fa
lldb             <  36> send packet: $Mfffff7ff9010,8:0000000000000000#b6
lldb             <   6> read packet: $OK#9a
lldb             <  34> send packet: $qMemoryRegionInfo:fffff7ff9010#df
lldb             <  55> read packet: $start:fffff7ff9000;size:1000;permissions:rw;flags:;#fa
lldb             <  36> send packet: $Mfffff7ff9010,8:f06ffff7ffff0000#a9
lldb             <   6> read packet: $OK#9a
lldb             <  21> send packet: $xfffff7ff9000,200#00
lldb             < 516> read packet: $<512 bytes>#53
lldb             <  21> send packet: $xfffff7ff6e00,200#32
lldb             < 516> read packet: $<512 bytes>#0a
lldb             <  21> send packet: $xfffff7ff7000,200#fe    <<<< Fails to read the last 16 bytes
lldb             <   7> read packet: $E08#ad
0xfffff7ff6ff0: 01 02 03 04 00 00 00 00 00 00 00 00 00 00 00 00  ................
0xfffff7ff7000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
warning: Not all bytes (16/32) were able to be read from 0xfffff7ff6ff0.

Except we're not doing it by getting a smaller reply (not in this example anyway), it's working because we split up the reads in such a way that they tend to line up with the failing addresses.

So yeah it's probably worth fixing from a correctness point of view but for lldb to lldb-server we've got an equivalent already.

As for MTE would you be ok with not allowing partial reads, since the spec as proposed does not mention them?
(what I said before but just to be sure)

DavidSpickett updated this revision to Diff 330684. Edited Mar 15 2021, 9:10 AM

Correctly handle the ptrace call by looping until we
get all the tags.

I've gone ahead and treated anything less than all tags
as an error as far as lldb-server is concerned.

Tell me what you think of that.

DavidSpickett added inline comments. Mar 15 2021, 9:15 AM
lldb/test/API/tools/lldb-server/memory-tagging/TestGdbRemoteMemoryTagging.py
103

This has been added at the end of the tests here.

DavidSpickett marked 3 inline comments as done.
  • Remove extra newline after test decorator

@omjavaid Current status is that a partial read of memory tags is converted into an error by lldb-server. I think this is justifiable given that the GDB protocol, as it stands, doesn't describe how to really communicate a partial read (though we could make a reasonable guess, I'm sure). Plus, lldb itself will be looking ahead via memory regions, so it won't issue requests that would fail. (Fail by reason of address range, at least.)

Sound good to you? I'll be applying the same logic to writing tags.

  • Rebase onto main
  • skipUnlessAArch64MTELinuxCompiler was committed already so not added here, just used by the new tests

Rebase onto main. (switch to m_current_process)

It looks good to me, but I have some final minor nits inline.

lldb/source/Plugins/Process/Linux/NativeProcessLinux.cpp
1458

Maybe rename this to tags_iovec? Reading the code below, I had the tags vector in mind and confused it with the vector which was passed as a parameter to this function.

1459

Maybe use tags.data()?

1464

This loop condition is a little fishy. num_tags is unsigned, which means if by chance it doesn't end up going to zero, we'll keep looping forever.

lldb/test/API/tools/lldb-server/memory-tagging/main.c
54

Just a side question about TBI: for memory reads/writes or tag queries, is it necessary to send the non-address bits (tags and pauth masks) to the remote? Can we instead clear these bits before sending the address over, using the code/data masks we have calculated in our host process class?

DavidSpickett added inline comments. May 6 2021, 1:34 AM
lldb/test/API/tools/lldb-server/memory-tagging/main.c
54

You shouldn't need to include them as the stripped address is equivalent as far as ptrace is concerned.

If you were gonna JIT some code and use it to load from that address then yes we'd need the bits.

That said, I'm not sure lldb-server can make the assumption that they are stripped. At least for reading memory tags, I'll have to think about it since the protocol doesn't require the client to remove tag bits.

Address comments in general.

Added an assert in the loop where we call ptrace
so that we catch any unexpected condition that would
make it an infinite loop.

DavidSpickett marked 2 inline comments as done. May 13 2021, 5:56 AM
DavidSpickett added inline comments.
lldb/source/Plugins/Process/Linux/NativeProcessLinux.cpp
1464

I couldn't see another loop condition that made sense to use, so I've added an assert below:

assert(tags_read && (tags_read <= num_tags));

If num_tags was 0 we'd never enter the loop in the first place. Then we assert that if there was no error, the kernel returned at least 1 tag and no more tags than we asked for.

This should prevent num_tags wrapping around 0 and causing an infinite loop.
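Putting the loop and the assert together, the logic reads roughly like this. ReadSomeTags is a hypothetical stand-in for the PTRACE_PEEKMTETAGS call, simulating a kernel that returns at most 3 tags per request; it is not the lldb implementation:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Mock of a partial-read ptrace call: appends up to 3 dummy tags per call
// and reports how many it "read".
static size_t ReadSomeTags(std::vector<uint8_t> &out, size_t num_tags) {
  size_t chunk = num_tags < 3 ? num_tags : 3;
  out.insert(out.end(), chunk, 0); // dummy tag values
  return chunk;
}

// Loop until all tags are read. The assert checks each pass made progress
// without overshooting, which is what stops num_tags wrapping around zero.
std::vector<uint8_t> ReadAllTags(size_t num_tags) {
  std::vector<uint8_t> tags;
  while (num_tags) {
    size_t tags_read = ReadSomeTags(tags, num_tags);
    assert(tags_read && (tags_read <= num_tags));
    num_tags -= tags_read;
  }
  return tags;
}
```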

pcc added a subscriber: pcc. May 25 2021, 5:04 PM
pcc added inline comments.
lldb/source/Plugins/Process/Linux/NativeRegisterContextLinux_arm64.cpp
15–16

Rebase, which brings in the header pcc mentioned.

DavidSpickett marked an inline comment as done. May 26 2021, 3:41 AM

I was looking at the wrong file, this adds the header.

omjavaid accepted this revision. May 26 2021, 6:00 AM
This revision is now accepted and ready to land. May 26 2021, 6:00 AM

Rebase, this is now ready to land.

DavidSpickett edited the summary of this revision. Jun 17 2021, 8:14 AM
This revision was landed with ongoing or failed builds. Jun 24 2021, 9:03 AM
This revision was automatically updated to reflect the committed changes.