This is an archive of the discontinued LLVM Phabricator instance.

[lldb][AArch64/Linux] Show memory tagged memory regions
ClosedPublic

Authored by DavidSpickett on Sep 10 2020, 2:20 AM.

Details

Summary

This extends the "memory region" command to
show tagged regions on AArch64 Linux when the MTE
extension is enabled.

(lldb) memory region the_page
[0x0000fffff7ff8000-0x0000fffff7ff9000) rw-
memory tagging: enabled

This is done by adding an optional "flags" field to
the qMemoryRegion packet. The only supported flag is
"mt" but this can be extended.

This "mt" flag is read from /proc/{pid}/smaps on Linux,
other platforms will leave out the "flags" field.

Where this "mt" flag is received "memory region" will
show that it is enabled. If it is not or the target
doesn't support memory tagging, the line is not shown.
(since majority of the time tagging will not be enabled)

Testing is added for the existing /proc/{pid}/maps
parsing and the new smaps parsing.
Minidump parsing has been updated where needed,
though it only uses maps not smaps.

Target specific tests can be run with QEMU and I have
added MTE flags to the existing helper scripts.

Diff Detail

Event Timeline

DavidSpickett created this revision. Sep 10 2020, 2:20 AM
Herald added a project: Restricted Project. Sep 10 2020, 2:20 AM
DavidSpickett requested review of this revision. Sep 10 2020, 2:20 AM

This is in support of AArch64 memory tagging, as described here: http://lists.llvm.org/pipermail/lldb-dev/2020-August/016402.html
See "### Extending qMemoryRegion"

smaps format described here: https://man7.org/linux/man-pages/man5/proc.5.html
Support is not added to minidumps; this would need doing in breakpad itself first (https://chromium.googlesource.com/breakpad/breakpad/+/refs/heads/master/src/google_breakpad/common/minidump_format.h#348).

The general strategy is to keep the flags as strings so that we don't have to invent a new enum/bitfield value each time one is added. Plus we add some value by showing them all to the user without needing special support for each one.

Later, I'll be adding a HasMemoryTagging() method to the region to use for memory tagging commands, something like:

bool HasMemoryTagging() {
  return m_flags && (m_flags->find("mt") != m_flags->end());
}

If BSD/MacOS/Windows had a different naming you could either check for both (if not overlapping with anything else), or add a platform argument to decide what to look for.

DavidSpickett added a reviewer: omjavaid. Edited Sep 10 2020, 2:33 AM

What isn't covered in testing is checking that "memory region" will not print "flags:" if there was no flag info. This would need a way to make a fake lldb-server that didn't reply with the flags field. I could write tests that expect "flags:", but when you ran them on a different kernel they might fail.

Perhaps there is a way to replay a gdb comms session to lldb instead?

This seems fine to me with some minor nits. Also, do you plan on writing a Linux API test for this which tests memory regions on Linux? I couldn't locate one already written.

lldb/source/API/SBMemoryRegionInfo.cpp
125

This function always returns true. If there is no other use of the HasFlags API function then maybe merge GetFlags and HasFlags by returning false when flags are not available.

lldb/source/Plugins/Process/Utility/LinuxProcMaps.cpp
19

Maybe consider converting this into an enum class.

lldb/unittests/Process/minidump/MinidumpParserTest.cpp
9

This file apparently requires a clang-format run.

  • clang-format Minidump file
  • Remove HasFlags from API, return True/False from GetFlags instead
  • Rename MapKind to MapsKind and make it an enum class
DavidSpickett marked 3 inline comments as done. Sep 15 2020, 2:30 AM

This seems fine to me with some minor nits. Also, do you plan on writing a Linux API test for this which tests memory regions on Linux? I couldn't locate one already written.

There are some tests in test/API/tools/lldb-server/TestLldbGdbServer.py that parse region packets manually (e.g. qMemoryRegionInfo_reports_code_address_as_executable) but they're not using the API.

There is no test for "memory region" or the API. I'll add some to test/API/linux/, one for kernels with smaps, one without (so you'll only run one or the other per platform).

I think this patch for its test coverage.

The thing I'm not so sure about is the verbatim passing of the OS-specific flags. That's nice for the purpose of displaying them to the user, but it can make things messy for programmatic use, particularly if multiple platforms start using these. What do you intend to do with these flags? Like, how many of them are you going to actually use, and for what purpose?

lldb/source/API/SBMemoryRegionInfo.cpp
127

strm << m_opaque_up->GetFlags()

lldb/source/Commands/CommandObjectMemory.cpp
1736

drop .c_str()

lldb/source/Plugins/Process/Linux/NativeProcessLinux.cpp
1308

Is this needed? Given that the parsing will stop at the first error, Result should always be empty (success) at this point...

1319

Just for my own education: does this mean that we will need to maintain both branches in perpetuity, as it is always possible to build kernels which don't have smaps?

lldb/source/Plugins/Process/Utility/LinuxProcMaps.cpp
19

And get rid of the leading e.

133

Compiling the regex for each line is pretty wasteful. I don't think a regex is really needed here. I think you could just split the line on the first : character and check that lhs does not contain spaces.
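For reference, the colon-split check could look something like this (just a sketch; the helper name and signature are made up here, not part of the patch):

#include "llvm/ADT/StringRef.h"

#include <utility>

// Split an smaps attribute line such as "VmFlags: rd wr mr mw" on the first
// ':'. Lines with no ':', no value, or a space in the name part (e.g. the
// region header lines) are rejected.
static bool SplitSMapsLine(llvm::StringRef Line, llvm::StringRef &Name,
                           llvm::StringRef &Value) {
  std::pair<llvm::StringRef, llvm::StringRef> Parts = Line.split(':');
  if (Parts.second.empty() || Parts.first.contains(' '))
    return false;
  Name = Parts.first;
  Value = Parts.second.trim();
  return true;
}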

lldb/source/Plugins/Process/gdb-remote/GDBRemoteCommunicationServerLLGS.cpp
2498

drop .c_str()

lldb/source/Target/MemoryRegionInfo.cpp
21

flags = flags.ltrim();

lldb/unittests/Process/Utility/CMakeLists.txt
6

Why is this needed?

9–10

Just include the file as Plugins/Process/Utility/LinuxProcMaps.h

lldb/unittests/Process/Utility/LinuxProcMapsTest.cpp
39

&-capture here is dangerous. Capture what you need (this?) explicitly.

51–52

ASSERT_THAT(std::get<1>(params), testing::ContainerEq(regions));

The thing I'm not so sure about is the verbatim passing of the OS-specific flags. That's nice for the purpose of displaying them to the user, but it can make things messy for programmatic use, particularly if multiple platforms start using these. What do you intend to do with these flags? Like, how many of them are you going to actually use, and for what purpose?

At this point I only need to get the "mt" flag that shows whether a region has memory tagging enabled. This will be used by lldb internally and by anyone checking for memory tagging with "memory region".

Some background on the design...

With the current implementation:

  • We don't have to translate the flags per OS to show them.
  • Has<flag>() needs to check for OS specific names, which balances things out. (you've got an OS specific switch either way)
  • The gdb protocol packet just contains the names as strings.
  • OSes that don't present flags as strings directly in their API have to provide string versions.

At first I did have a generically named bitset of flags internally. This means that:

  • In the "memory region" output the user can't tell the difference between an unrecognised flag, or a flag that's not enabled. Since we'd need to update lldb to handle the new flag. (a rare occurrence now that I think about it)
  • When "memory region" prints we'd need to translate those flags to OS specific names, or use long generic names like "memory tagging: enabled". (I prefer the former so you could compare it directly to the native tools)
  • A region object's Has<Flag>() is the same code regardless of OS.
  • The gdb protocol packet would contain generic versions of the flag names.

Now that I wrote that the second idea does look cleaner. New flags are going to be infrequent and internal calls won't have to care about what OS they're on to query flags.

Is that the direction you were thinking of? If so I'll rework this to do that instead.

  • reset flags in MemoryRegionInfo Clear()
  • add Linux specific memory region flags test
  • address comments from labath

(doing this before any change of approach,
to prevent any confusion)

DavidSpickett marked 11 inline comments as done. Oct 6 2020, 10:42 AM
DavidSpickett added inline comments.
lldb/source/Plugins/Process/Linux/NativeProcessLinux.cpp
1319

Unfortunately yes. The good part is that the header lines remain the same so we can share that code.
(gdb already uses smaps and has a similar fallback route)

lldb/source/Plugins/Process/Utility/LinuxProcMaps.cpp
133

Done and merged into ParseLinuxSMapRegions since it was much simpler.

(I couldn't find any hard guarantee that there won't be leading spaces but from reading kernel source it seems
that it might pad values but not names. So "VmFlags: re" could happen but not " VmFlags: re")

lldb/unittests/Process/Utility/CMakeLists.txt
6

It won't link without it. I followed the format of the other tests e.g. unittests/Process/POSIX/CMakeLists.txt
(your header suggestion does work fine though)

DavidSpickett updated this revision to Diff 296517. Edited Oct 6 2020, 12:30 PM
DavidSpickett marked 2 inline comments as done.

Fix unit test failures due to dereferencing an empty
optional when smaps parsing failed.

To prevent this happening again pass an
Optional<MemoryRegionInfo> to the callback and let
it decide whether to use it or not.

labath added a comment. Oct 7 2020, 8:10 AM

I think this patch for its test coverage.

And by "think", I meant "like". :)

The thing I'm not so sure about is the verbatim passing of the OS-specific flags. That's nice for the purpose of displaying them to the user, but it can make things messy for programmatic use, particularly if multiple platforms start using these. What do you intend to do with these flags? Like, how many of them are you going to actually use, and for what purpose?

At this point I only need to get the "mt" flag that shows whether a region has memory tagging enabled. This will be used by lldb internally and by anyone checking for memory tagging with "memory region".

Some background on the design...

With the current implementation:

  • We don't have to translate the flags per OS to show them.
  • Has<flag>() needs to check for OS specific names, which balances things out. (you've got an OS specific switch either way)
  • The gdb protocol packet just contains the names as strings.
  • OSes that don't present flags as strings directly in their API have to provide string versions.

At first I did have a generically named bitset of flags internally. This means that:

  • In the "memory region" output the user can't tell the difference between an unrecognised flag, or a flag that's not enabled. Since we'd need to update lldb to handle the new flag. (a rare occurrence now that I think about it)
  • When "memory region" prints we'd need to translate those flags to OS specific names, or use long generic names like "memory tagging: enabled". (I prefer the former so you could compare it directly to the native tools)
  • A region object's Has<Flag>() is the same code regardless of OS.
  • The gdb protocol packet would contain generic versions of the flag names.

Now that I wrote that the second idea does look cleaner. New flags are going to be infrequent and internal calls won't have to care about what OS they're on to query flags.

Is that the direction you were thinking of? If so I'll rework this to do that instead.

Yes, that's pretty much it. I sympathise with wanting to match the native tools, but that's not something lldb is very good at right now (and there's an opposite drive to make lldb behavior consistent across platforms). For now I'd just punt on that and have lldb choose an "os-independent" flag name which "happens" to match the Linux name (one of the advantages of being first).

That said, with this and @jasonmolenda's change to memory regions, I think the description of a single memory region becomes too complicated to reasonably fit on a single line. I think we might want to change the output of memory region to span multiple lines, at which point, the string "memory tagging: enabled" might not look too bad (we could add headings for other properties too).

lldb/source/Plugins/Process/Linux/NativeProcessLinux.cpp
1301–1338

If we're changing the signature, we might as well make this Expected<MemoryRegionInfo>.
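For illustration, that suggestion amounts to a callback along these lines (the typedef name here is only an assumption for the sketch, not the reviewed code):

#include "lldb/Target/MemoryRegionInfo.h"
#include "llvm/Support/Error.h"

#include <functional>

// Each parsed maps/smaps entry yields either a region or a parse error;
// returning false from the callback stops the iteration.
typedef std::function<bool(llvm::Expected<lldb_private::MemoryRegionInfo>)>
    LinuxMapCallback;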

1323
lldb/source/Plugins/Process/Utility/LinuxProcMaps.cpp
160

name.contains(' ')

DavidSpickett planned changes to this revision. Oct 7 2020, 9:12 AM

Yes, that's pretty much it.

Great, I will rework this.

  • Move to llvm::Expected for callbacks
  • Use flag names based on Linux names but for all platforms
  • Show generic names/descriptions in memory region output
  • Add minidump cases for error handling
DavidSpickett edited the summary of this revision. Oct 15 2020, 7:12 AM
DavidSpickett added a comment. Edited Oct 15 2020, 7:16 AM

If you are wondering, the average number of flags for a Linux hello world's memory regions is 8 so 8 lines of output. Here's an example I had that does madvise.

(lldb) memory region addr
[0x00007ffff7ed5000-0x00007ffff7fd5000) rw- /dev/zero (deleted)
flags:
readable
writeable
shared
may read
may write
may execute
may share
soft-dirty
(lldb) n
(lldb) n (over madvise(addr, len, MADV_DONTFORK);)
(lldb) memory region addr
[0x00007ffff7ed5000-0x00007ffff7fd5000) rw- /dev/zero (deleted)
flags:
readable
writeable
shared
may read
may write
may execute
may share
do not copy area on fork
soft-dirty

I could add this as a test case but it only really adds checking that the region cache is updated which I think is covered already.

If you are wondering, the average number of flags for a Linux hello world's memory regions is 8 so 8 lines of output. Here's an example I had that does madvise.

(lldb) memory region addr
[0x00007ffff7ed5000-0x00007ffff7fd5000) rw- /dev/zero (deleted)
flags:
readable
writeable
shared
may read
may write
may execute
may share
soft-dirty
(lldb) n
(lldb) n (over madvise(addr, len, MADV_DONTFORK);)
(lldb) memory region addr
[0x00007ffff7ed5000-0x00007ffff7fd5000) rw- /dev/zero (deleted)
flags:
readable
writeable
shared
may read
may write
may execute
may share
do not copy area on fork
soft-dirty

I could add this as a test case but it only really adds checking that the region cache is updated which I think is covered already.

Whoa, back up. I thought you were just going to add the one flag you need right now... :(

I'd like to avoid adding flags we don't have any use for. That way we can decide the best course of action on a per-flag basis, and avoid overfitting any particular platform. Having a flag for whether a region supports memory tagging (and calling that flag mt) sounds pretty platform-neutral, but some of the other flags seem very linux specific, and are unlikely to be useful elsewhere...

Whoa, back up. I thought you were just going to add the one flag you need right now... :(

I was going for showing flags as a feature of the "memory region" command, then later adding the memory tagging flag to that.
I see your point though and yeah, I don't need all the flags to unblock MTE.

With the perspective I was coming from, adding a set of getters/setters for 20-ish flags wasn't an appealing idea:

OptionalBool m_memory_tagged;
OptionalBool GetMemoryTagged() const { return m_memory_tagged; }
void SetMemoryTagged(OptionalBool v) { m_memory_tagged = v; }
<x20>

If it's just mt then I just add another set; we can always merge it with a flags (plural) interface later if we accumulate more.

So if it makes sense to you, I will:

  • add only "mt" flag, with it's own getter/setter
  • keep the protocol changes (but only recognise the 1 flag)
  • keep the extra testing, use of expected etc. where it still applies
  • add tests to run on a memory tagging enabled kernel with qemu

Then we can at least agree in principle, even if this doesn't land until the new tagging commands have also been reviewed. (which is fine by me, but I don't have them ready yet)

Thanks for all your comments so far!

Whoa, back up. I thought you were just going to add the one flag you need right now... :(

I was going for showing flags as a feature of the "memory region" command, then later adding the memory tagging flag to that.
I see your point though and yeah, I don't need all the flags to unblock MTE.

With the perspective I was coming from, adding a set of getters/setters for 20-ish flags wasn't an appealing idea:

OptionalBool m_memory_tagged;
OptionalBool GetMemoryTagged() const { return m_memory_tagged; }
void SetMemoryTagged(OptionalBool v) { m_memory_tagged = v; }
<x20>

If it's just mt then I just add another set; we can always merge it with a flags (plural) interface later if we accumulate more.

So if it makes sense to you, I will:

  • add only "mt" flag, with it's own getter/setter
  • keep the protocol changes (but only recognise the 1 flag)
  • keep the extra testing, use of expected etc. where it still applies
  • add tests to run on a memory tagging enabled kernel with qemu

Then we can at least agree in principle, even if this doesn't land until the new tagging commands have also been reviewed. (which is fine by me, but I don't have them ready yet)

Thanks for all your comments so far!

Yep, that sounds like a plan.

lldb/source/Plugins/Process/Utility/LinuxProcMaps.h
20

We don't usually typedef expecteds like this, and the result is not much shorter than the original.

lldb/unittests/Process/Utility/CMakeLists.txt
6

What will not link? This definitely can't be the right solution as lldbPluginProcessLinux is not even being built on non-linux hosts (but Plugins/Process/Utility is).

DavidSpickett planned changes to this revision. Oct 20 2020, 1:54 AM
  • Only "mt" flag supported
  • Don't typedef expected MemoryRegionInfo
  • Link lldbPluginProcessUtility (not lldbPluginProcessLinux) in proc map tests
  • "mt" is now the only supported flag
  • "memory tagging: enabled" is printed if it is found for a region
  • lldb-server will only send "flags:mt;" or "flags:;"
  • lldb will treat flags as a list of flags, but only look for "mt" (so we can expand the flags list later; see the sketch below)
  • Added target specific test to run via QEMU (future MTE tests can live in API/linux/aarch64/ too)
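As a sketch of that client-side handling (assuming flag names within the "flags" value are space separated and that MemoryRegionInfo grows a memory-tagged setter as discussed above; this is illustrative, not the committed code):

#include "lldb/Target/MemoryRegionInfo.h"
#include "llvm/ADT/StringRef.h"

#include <utility>

static void ParseFlagsValue(llvm::StringRef Value,
                            lldb_private::MemoryRegionInfo &Region) {
  // The field was present, so absence of "mt" means the region is not tagged.
  Region.SetMemoryTagged(lldb_private::MemoryRegionInfo::eNo);
  while (!Value.empty()) {
    std::pair<llvm::StringRef, llvm::StringRef> Split = Value.split(' ');
    if (Split.first == "mt")
      Region.SetMemoryTagged(lldb_private::MemoryRegionInfo::eYes);
    // Unrecognised flag names are simply skipped so the list can grow later.
    Value = Split.second;
  }
}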
DavidSpickett marked 2 inline comments as done. Nov 4 2020, 7:41 AM
  • Revert stray change of run-qemu.sh file mode

I think this looks pretty good now. Some questions inline...

lldb/source/Target/MemoryRegionInfo.cpp
10

unused?

lldb/test/API/linux/aarch64/mte_memory_region/main.c
9–14 ↗(On Diff #302845)

Instead of duplicating these checks in dotest, maybe we could use the result of the inferior as an indication to skip the test. Like, if, instead of hitting the breakpoint, the inferior exits with code 47, we know that the required CPU feature is not supported?

lldb/unittests/Process/Utility/LinuxProcMapsTest.cpp
47–49

err_str = toString(Info.takeError());

  • Remove unused assert header
  • Simplify error handling in LinuxProcMapsTest
lldb/test/API/linux/aarch64/mte_memory_region/main.c
9–14 ↗(On Diff #302845)

Sounds good to me.

That would mean defining things like PROT_MTE in the test file, for toolchains that won't have it. I assume that's ok to do.
(I'll probably need to do that for lldb-server code later anyway)

DavidSpickett marked 2 inline comments as done. Nov 11 2020, 2:39 AM
labath added inline comments. Nov 12 2020, 5:33 AM
lldb/test/API/linux/aarch64/mte_memory_region/main.c
9–14 ↗(On Diff #302845)

Depends... How likely is the system to support memory tagging if the relevant headers don't define the constants? Do you want to support systems like those?

Maybe you could do something like this:

int main() {
#ifdef HWCAP2_MTE
  // do stuff
#else
  return 47;
#endif
}
DavidSpickett added inline comments. Nov 12 2020, 6:15 AM
lldb/test/API/linux/aarch64/mte_memory_region/main.c
9–14 ↗(On Diff #302845)

No, I don't think so; if you want to run this test the toolchain should have the constants. I'll do what you suggested.

DavidSpickett marked an inline comment as not done. Nov 12 2020, 6:15 AM

Updated the way the test works.

  • If you build without MTE in the toolchain the binary just returns the magic failure number.
  • If you do have an MTE toolchain but your target doesn't have MTE then it will also fail with the magic return code.

If it does either of those we skip; otherwise we run again to do the actual checks.

The vmflags check will be needed for future tests so I've
not moved it into the test itself.

DavidSpickett retitled this revision from [lldb] Show flags for memory regions to [lldb][AArch64/Linux] Show memory tagged memory regions. Nov 18 2020, 6:43 AM
DavidSpickett edited the summary of this revision.

Hopefully last round of cosmetic fixes, and then this should be good.

lldb/include/lldb/Target/MemoryRegionInfo.h
15

I guess this is not used anymore.

lldb/packages/Python/lldbsuite/test/tools/lldb-server/gdbremote_testcase.py
805

I guess you could change this to self.assertIn(needle, haystack)

lldb/source/Plugins/Process/Linux/NativeProcessLinux.cpp
1308–1312

Status has an llvm::Error constructor. Some variation on Result = Info.takeError() should work.
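Spelled out, that might look roughly like this inside the callback (a sketch assuming Info is the llvm::Expected<MemoryRegionInfo> argument and Result is the Status being accumulated):

if (!Info) {
  // Status can be constructed directly from an llvm::Error.
  Result = Status(Info.takeError());
  return false; // Stop iterating over further lines.
}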

lldb/source/Plugins/Process/Utility/LinuxProcMaps.cpp
200
lldb/source/Plugins/Process/minidump/MinidumpParser.cpp
270–272 ↗(On Diff #306091)

llvm::consumeError(region.takeError()), though maybe it would be better to actually log it (LLDB_LOG_ERROR)

lldb/test/API/linux/aarch64/mte_memory_region/TestAArch64LinuxMTEMemoryRegion.py
31 ↗(On Diff #306091)

if self.process().GetState() == lldb.eStateExited and self.process().GetExitStatus() == 47 would be nicer

34–38 ↗(On Diff #306091)

I think you could just run this once, and then choose to skip the test or not depending on whether the process exited, or hit a breakpoint.

  • Use assertIn
  • Clean up Status/Error handling
  • Refactor test so it runs once and we skip based on whether it returns a value or hits a breakpoint.
  • Separated exit codes for non-MTE toolchain and non-MTE target, for more specific skip reasons.

On the test, hopefully the comment in main.c
makes sense. Here's a cut-down demo of what I'm
explaining:
https://godbolt.org/z/rex96b

This means we can always set an exact breakpoint;
we just won't hit it unless you have MTE, which is
what we want.

DavidSpickett marked 9 inline comments as done. Nov 19 2020, 7:06 AM
labath accepted this revision. Nov 20 2020, 1:22 AM
  • Use assertIn
  • Clean up Status/Error handling
  • Refactor test so it runs once and we skip based on whether it returns a value or hits a breakpoint.
  • Separated exit codes for non-MTE toolchain and non-MTE target, for more specific skip reasons.

On the test, hopefully the comment in main.c
makes sense. Here's a cut-down demo of what I'm
explaining:
https://godbolt.org/z/rex96b

This means we can always set an exact breakpoint;
we just won't hit it unless you have MTE, which is
what we want.

I see.

Maybe a less convoluted pattern would be

void test_mte(void *page) {}

int main() {
#ifdef MTE
  ...
  test_mte(mmap(...));
  return 0;
#else
  return 47;
#endif
}

and setting a breakpoint on test_mte. But this is fine too....

This revision is now accepted and ready to land. Nov 20 2020, 1:22 AM

I'm going to stick with the test format as it is. No doubt I'll need to clean it up based on what future MTE tests look like.

This revision was landed with ongoing or failed builds. Nov 20 2020, 3:22 AM
This revision was automatically updated to reflect the committed changes.