This is an archive of the discontinued LLVM Phabricator instance.

Create "skinny corefiles" for Mach-O with process save-core / reading
ClosedPublic

Authored by jasonmolenda on Sep 27 2020, 6:35 PM.

Details

Summary

I implemented something that's been talked about for a while -- creating "skinny corefiles", or a user process corefile that only includes dirty memory in it. This is the patchset to implement this with lldb's "process save-core" and lldb's corefile reading, for Mach-O files.

Because of how the system libraries are shared across all processes on Darwin systems, dumping all available memory for a process involves writing multiple gigabytes for even a trivial command line app. The majority of such a corefile is unmodified memory pages from the system libraries; writing it is a very expensive and slow operation.

This patchset does a few things:

  1. Adds support in debugserver to find the list of pages in a memory region that are dirty. It adds a key-value pair to qHostInfo to advertise how large a VM memory page is on the target system, and it includes a list of dirty pages for a memory region in the qMemoryRegionInfo reply to lldb's queries about memory regions.
  2. It adds a new LC_NOTE "all image infos" in the corefile, which specifies all binary images that are present in the inferior process and where they are loaded in memory. The filepaths, UUIDs, and segment name+load addresses are required because lldb will not have access to dyld's dyld_all_image_infos structure or the ability to read these binaries out of memory. (Normally the segment load addresses could be found by reading the Mach-O load commands for the binaries from the corefile.)
  3. It adds a new LC_NOTE "executing uuids" in the corefile, which is a list of the UUIDs of the binary images that are actually executing code at the point when the corefile was written. That is, the set of binaries that appear on any backtrace on any thread at the point of the coredump. A graphical app on macOS these days can easily have 500 different binary images loaded in the process; likely only a couple dozen of those are actually executing. Knowing which binary images are executing allows us to do a more expensive search for these most-important binaries, like we do with crashlog.py.
  4. Changes the Mach-O corefile creation to only include dirty pages when communicating with a debugserver that provides this information.
  5. Read and use the "all image infos" and "executing uuids" in ProcessMachCore.
  6. Finally, of course, adds a test case where we have 3 binaries (one executable, two dylibs), hits a breakpoint, saves a skinny corefile. Then it moves the executable to a hidden location that lldb can only discover by calling the dsymForUUID script (to test 'executing uuids'), deletes one dylib, and leaves one at the expected location. It loads the skinny corefile into lldb and confirms that the present libraries are present, the absent library is absent (and its non-dirty memory is unreadable), and we can read variables that are on the heap/stack correctly.
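As a rough illustration of the dirty-page plumbing in item 1, here is how a client might parse a qMemoryRegionInfo-style reply carrying a dirty-pages list. The start/size/permissions keys follow the existing reply format, but the exact dirty-pages wire encoding shown here is an assumption for illustration:

```python
def parse_memory_region_reply(reply):
    """Parse a qMemoryRegionInfo-style key:value;key:value; reply.

    Numeric fields are base-16 per gdb-remote convention; the assumed
    'dirty-pages' key holds a comma-separated list of page addresses.
    """
    info = {}
    for pair in filter(None, reply.split(";")):
        key, _, value = pair.partition(":")
        if key in ("start", "size"):
            info[key] = int(value, 16)
        elif key == "dirty-pages":
            # An empty value still records the key: "supported, no dirty pages".
            info[key] = [int(a, 16) for a in value.split(",")] if value else []
        else:
            info[key] = value
    return info

reply = "start:100000000;size:4000;permissions:rw;dirty-pages:100000000,100002000;"
region = parse_memory_region_reply(reply)
```

The point of the shape is that a stub which supports dirty-page accounting always emits the key, so its absence can be distinguished from an empty list.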

Before this change, the minimum size for a corefile on macOS 10.15 was around 2GB and it would take around 5 minutes to dump. With this change, a simple test binary user process corefile is around 500KB and takes a few seconds to dump. Larger real-world applications will have larger amounts of stack & heap and will have correspondingly larger corefiles.

I'm not thrilled that I'm touching the same parts of lldb as David Spickett in https://reviews.llvm.org/D87442 - we're going to create conflicts depending on which of us lands first. Both of us need to modify the qMemoryRegionInfo packet and lldb's MemoryRegionInfo class to store our additional information.

The majority of this patch concerns ObjectFileMachO and ProcessMachCore, as well as debugserver; I expect those to be of less interest to any reviewer/commenters. But of course I welcome any suggestions/comments on those parts as well as the other parts of lldb I touched along the way. I did need to add some API/structs in the ObjectFile base class so ObjectFileMachO could pass information up to ProcessMachCore that is very specific to what I'm doing here.

Diff Detail

Event Timeline

jasonmolenda created this revision.Sep 27 2020, 6:35 PM
jasonmolenda requested review of this revision.Sep 27 2020, 6:35 PM
mib added a subscriber: mib.Sep 28 2020, 5:27 AM
mib added inline comments.
lldb/include/lldb/Symbol/ObjectFile.h
700

Little typo here.

lldb/source/Plugins/ObjectFile/Mach-O/ObjectFileMachO.cpp
6863–6868

I guess you probably forgot to remove this comment.

clayborg requested changes to this revision.Sep 28 2020, 2:08 PM

Many inline comments that should explain everything.

lldb/include/lldb/Symbol/ObjectFile.h
669–692

We shouldn't be exposing mach-o specific stuff here. I think an API that lets the ObjectFile (if it is a core file) load all needed images at the right addresses might be better encapsulated:

virtual llvm::Error LoadCoreFileImages(lldb_private::Target &target);

It returns an error if the ObjectFile is not a core file or something fatal happens. Then you don't need to expose any of the mach-o specific things here, and we can probably remove the GetCorefileExecutingUUIDs() as well below.

lldb/source/Plugins/ObjectFile/Mach-O/ObjectFileMachO.cpp
6157–6162

Use RangeDataVector from RangeMap.h:

typedef lldb_private::RangeDataVector<lldb::addr_t, lldb::addr_t, uint32_t> PageObjects;
6164–6165

Should we add an extra boolean argument to allow saving either all memory regions or just the dirty pages?

6202

Use PageObjects as defined above:

PageObjects pages_to_copy;
6223

If we add a bool argument, we might need to return an error if the lldb-server doesn't support the dirty page list stuff. Some regions won't have dirty pages, but we might need to add detection for any dirty pages and then error out at the end if the user requested a minimal core file.

6225–6230
if (prot != 0) 
  pages_to_copy.Append({dirtypage, pagesize, prot});
6233–6238
if (prot != 0) 
  pages_to_copy.Append({addr, size, prot});
6246–6264

Remove all of this and do:

page_objects.CombineConsecutiveEntriesWithEqualData();
6266

use page_objects directly now since we sorted it above:

for (page_object obj : page_objects) {
6582

Can we get rid of the UUID array we save and add a flag here that states that the library was loaded instead of saving it in a separate load command?

6811

Can we read a "uint32_t flags" field here where one of the bits is an "I was part of the thread stacks" flag?

6841

We can get rid of this entire load command if we put a bit into the all image info struct. We already have an unused uint32_t field available.

6884

We can get rid of this entire load command if we put a bit into the all image info struct. We already have an unused uint32_t field available.

lldb/source/Plugins/ObjectFile/Mach-O/ObjectFileMachO.h
120–124

Remove and just do:

llvm::Error LoadCoreFileImages(lldb_private::Target &target) override;
lldb/source/Plugins/Process/mach-core/ProcessMachCore.cpp
279–293

We can get rid of this entire load command if we put a bit into the all image info struct. We already have an unused uint32_t field available.

406–440

This functionality can be moved to the "ObjectFileMachO::LoadCoreFileImages(lldb_private::Target &target)" function. If we have flags in the all image infos for each library that states if it was part of an active stack, then we can also call the object and symbols download:

Symbols::DownloadObjectAndSymbolFile(module_spec, true);
if (FileSystem::Instance().Exists(module_spec.GetFileSpec()))
  GetTarget().GetOrCreateModule(module_spec, false);
else {
 ...
}
This revision now requires changes to proceed.Sep 28 2020, 2:08 PM
clayborg added inline comments.Sep 28 2020, 2:16 PM
lldb/include/lldb/Symbol/ObjectFile.h
692

basically all of the code from ProcessMachCore.cpp lines 421-455 can be moved into:

llvm::Error ObjectFileMachO::LoadCoreFileImages(lldb_private::Target &target);

And it can GetOrLoad images directly into the target and set the right load addresses. Then none of the mach-o specifics need to be exposed here.

703

This function probably isn't needed if we put a flag into the all image infos that is saved in the other load command. Inlined comments below will detail where I was thinking.

jasonmolenda marked 2 inline comments as done.Sep 28 2020, 3:20 PM

Thanks for the suggestions Greg, solid ideas. Let me rework this a bit.

The other thing to think about is using minidump files instead of mach-o core files. We have them in very good shape and they already have a built-in notion of "all image infos". Many more things in there too. Let me know if you want to know more, but I'm not holding my breath! haha

> The other thing to think about is using minidump files instead of mach-o core files. We have them in very good shape and they already have a built-in notion of "all image infos". Many more things in there too. Let me know if you want to know more, but I'm not holding my breath! haha

For a pure-lldb scenario (process save core; lldb -c) this would be pretty interesting. But it would be a trickier sell for other corefile creators on Darwin, and none of our other tooling would operate on them. Mach-O ride or die! :)

This patch update incorporates most of Greg's suggestions. Most importantly, I isolated all of the loading of the binaries specified in the corefile down into ObjectFileMachO so I didn't have to add so much special case API to the ObjectFile base API.

Greg made a case in direct email for the usefulness of a full coredump -- e.g. people may save a skinny core on one mac system, and send it to someone on a different mac system where the system libraries are all different, and no backtraces would be meaningful. This patch implements a -f option to process save-core to force a full coredump, if the system defaults to skinny (as it will on macos).

The alternative we were discussing is a hybrid approach where the core dumper saves all of the memory pages that cover the binary, for images that are currently executing on a thread. We'd need the full symbol table, mach-o header, and text which is executing, so it's probably easier to just copy the whole thing in than to try to pare it back. This is an interesting idea, and most of the groundwork is laid for it - we are already computing which libraries are actively executing and noting that in the corefile metadata so lldb can do more expensive searches for those binary images.

I switched to using the RangeDataVector container, but it has a little bug in the CombineConsecutiveEntriesWithEqualData where consecutive entries with the same Data value are combined, even if their ranges are not consecutive. So you could get a vast address range claimed if a user process page and a shared cache page had the same protections. Fixing that and writing a test case, I found a second problem where RangeDataVector isn't setting its 'upper_bound' field properly for this use case and searches don't work as they should. At which point I booted it out of the patch and went back to the hand-rolled container that I'd started with; I'll try to look at RangeDataVector more closely next week and switch to using it in this method. It is a cleaner and simpler approach.
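For reference, the intended merge semantics can be sketched in a few lines of Python (illustrative only, not the RangeDataVector implementation): entries are combined only when their data values match and the ranges are truly back-to-back, which avoids the over-merging described above.

```python
def coalesce(entries):
    """Combine sorted (base, size, data) entries, but only when the
    ranges are actually adjacent AND carry equal data."""
    merged = []
    for base, size, data in sorted(entries):
        if merged and merged[-1][2] == data and merged[-1][0] + merged[-1][1] == base:
            merged[-1][1] += size  # extend the previous contiguous run
        else:
            merged.append([base, size, data])
    return merged

# The two adjacent rw pages merge; the distant rw page stays separate,
# even though it has the same protections.
merged = coalesce([(0x1000, 0x1000, "rw"), (0x2000, 0x1000, "rw"),
                   (0x8000, 0x1000, "rw")])
```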

I also got rid of the "executing uuids" LC_NOTE - as Greg pointed out, I have a perfectly good unused field for alignment in the binary image entries already, using that to flag executing images was even easier.

jasonmolenda marked 3 inline comments as done.Oct 2 2020, 11:08 PM
jasonmolenda added inline comments.
lldb/source/Plugins/ObjectFile/Mach-O/ObjectFileMachO.cpp
6223

in the qMemoryRegionInfo packet response, a remote stub that can identify dirty pages should include a dirty-pages: key-value entry in every response where they're supported, even if it's an empty list. I clarified this in the docs. That's how we'll detect the difference between "no dirty pages" and "dirty pages not supported".

jasonmolenda added inline comments.Oct 2 2020, 11:21 PM
lldb/source/Plugins/ObjectFile/Mach-O/ObjectFileMachO.cpp
6223

Oh, and once we're inside lldb, I have the dirty page list returned as an Optional vector. For a memory region where we don't have any dirty page information, there's no vector. For a memory region where we do have dirty page information but there are no dirty pages, an empty vector is returned.
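That convention can be sketched as follows (a hypothetical helper, not actual lldb API), keyed on whether the stub sent the dirty-pages key at all:

```python
from typing import List, Optional

def dirty_pages_from_reply(reply_keys: dict) -> Optional[List[int]]:
    """Distinguish 'stub does not support dirty pages' (None) from
    'supported, and this region has none' (empty list)."""
    if "dirty-pages" not in reply_keys:
        return None  # key never sent: capability absent
    value = reply_keys["dirty-pages"]
    return [int(a, 16) for a in value.split(",")] if value else []
```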

Thinking about process save-core and how it will default to a skinny corefile on macOS but a full corefile elsewhere, I'd like to surface some notification to the user about the type of corefile that was created. Maybe add a PluginManager::SavedCoreType {eSavedCoreFull, eSavedCoreDirtyOnly, eSavedCoreDirtyPlusExecutingImages} enum, and have PluginManager::SaveCore return that enum value so that the "process save-core" user command can print text about the type of corefile that was saved. For a dirty corefile on macOS, it could say "Modified-memory-only corefile has been written. This corefile will not be portable between systems, or usable as software updates are applied; --full-corefile can create a corefile appropriate for sharing/archiving." or something.

Also hoping that @DavidSpickett 's changes to qMemoryRegionInfo might land soon so I don't cause conflicts with his patchset. :)

clayborg requested changes to this revision.Oct 6 2020, 4:05 PM

Nice changes. Just a few more changes and it should be good to go.

lldb/include/lldb/Symbol/ObjectFile.h
675

process

676

process

lldb/source/API/SBProcess.cpp
1231

We should add another SaveCore() that includes the bool argument and have this one call that one.

lldb/source/Plugins/ObjectFile/Mach-O/ObjectFileMachO.cpp
6600

Can this just be a static function in this file without being in the ObjectFileMachO class? That would keep the header file cleaner.

lldb/source/Plugins/ObjectFile/Mach-O/ObjectFileMachO.h
18

Don't need this include, it should be forward declared in lldb-forward.h. Move to .cpp file

215–216

move to ObjectFileMachO.cpp, not needed in header file.

218–220

Move to the .cpp file? Not needed in the header unless this static function takes advantage of being part of the class, which gives it access to non-public stuff.

lldb/source/Plugins/Process/gdb-remote/GDBRemoteCommunicationClient.cpp
1252

Is there a getAsUnsigned?

lldb/source/Plugins/Process/gdb-remote/GDBRemoteCommunicationClient.h
249

uint32_t as the type? This should never be negative, so no need to use a signed value.

592

uint32_t as the type? This should never be negative, so no need to use a signed value.

This revision now requires changes to proceed.Oct 6 2020, 4:05 PM

For what it's worth I'm aware of this change, don't hold back on my account. It doesn't overlap that much and I think I'll be reworking mine anyway.

I was thinking about the UI of this some more today.

I think process save-core should take a --core-style argument with an enum of (currently) 'full' or 'modified-memory'. Plugin::SaveCore will take a requested_coredump_style and will return a created_coredump_style. The user can request a modified-memory coredump on linux, but it's not supported so they get a full coredump. We could have "stack-only" coredumps (this would be an awful lot like a minidump iiuc, and may be duplicative), or "modified-memory-plus-binary-images" (better name tbd) which would copy in the binary images of any executing binaries or whatever.

> I was thinking about the UI of this some more today.
>
> I think process save-core should take a --core-style argument with an enum of (currently) 'full' or 'modified-memory'. Plugin::SaveCore will take a requested_coredump_style and will return a created_coredump_style. The user can request a modified-memory coredump on linux, but it's not supported so they get a full coredump. We could have "stack-only" coredumps (this would be an awful lot like a minidump iiuc, and may be duplicative), or "modified-memory-plus-binary-images" (better name tbd) which would copy in the binary images of any executing binaries or whatever.

That seems like a lot of flavors. At the command level I would suggest just some arguments. To the SaveCore, it can take a bitfield with all of the options the user selected. Options I can think of:

--full: which would specify to save all memory regions
--modified: save any modified data pages that are not thread stacks. I would suggest the core file plug-in returns an error if it doesn't support this to provide a better user experience. This could imply --stacks or it could be a stand alone option.
--stacks: save the thread stacks, very close to minidumps as you mentioned if only this is specified
--stack-functions: save memory for the code for all function bodies from all stack frames in all threads. This helps with stack tracing if we have no access to the system object files
--minimal-heap: save only the allocated memory from heaps. Oftentimes the system malloc allocators will grab large chunks of memory and hand out partial chunks when malloc is called. On Darwin, we could easily find out the actual allocations that are live on the heap and save only those.
--crashing-thread-only: only save threads that crashed. Cuts down on memory size for stacks, and stack functions.

We could then make a bitfield of options:

enum CoreFileOptions : uint32_t {
  eFull = (1u << 0),
  eModifiedPages = (1u << 1),
  eThreadStacks = (1u << 2),
  eThreadFunctions = (1u << 3),
  eMinimalHeap = (1u << 4),
  eCrashingThreadOnly = (1u << 5)
};

And then pass a "uint32_t flags" to the save core instead of the bool.
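Purely as an illustration of how such a bitfield composes, the proposal could be sketched in Python with enum.Flag (the names mirror the suggestion above; none of this is lldb API):

```python
from enum import Flag, auto

class CoreFileOptions(Flag):
    """Sketch of the proposed save-core option bitfield."""
    FULL = auto()
    MODIFIED_PAGES = auto()
    THREAD_STACKS = auto()
    THREAD_FUNCTIONS = auto()
    MINIMAL_HEAP = auto()
    CRASHING_THREAD_ONLY = auto()

# e.g. --modified implying --stacks, as floated above:
requested = CoreFileOptions.MODIFIED_PAGES | CoreFileOptions.THREAD_STACKS
```

The appeal of a bitfield over an enum is exactly that combinations like this one can be expressed; the downside, raised below, is that not every combination is meaningful.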

Yeah, I'm not opposed to mutually exclusive options to process save-core to request a specific style of coredump to be created, whether it's a --core-style enum or a series of flags isn't important to me. Right now I only intend to add the two that are possible - a full coredump or a modified-memory (dirty memory) coredump, on macOS. Whether it's passed as an enum or bitflag I don't have a preference, although a bitflag makes it sound like there could be multiple of them selected, when that's not what I'd be envisioning here. (if we were doing a series of flags for the lldb API, it would make more sense if we had different types of memory that could be coredumped and you could set the flags for what you want included. stack, heap, shared memory, binary images, etc.)

I forgot to respond to your suggestion about the SB API Process::SaveCore. I'm a little reluctant to add anything there yet because we're going from one style of coredump to two right now, and I'm not sure if we're going to stay at 2 forever or if we're going to evolve this into a panoply of different coredump styles and I'm not confident enough about this to want to put it in API. But I'm not wedded to that position.

> Yeah, I'm not opposed to mutually exclusive options to process save-core to request a specific style of coredump to be created, whether it's a --core-style enum or a series of flags isn't important to me. Right now I only intend to add the two that are possible - a full coredump or a modified-memory (dirty memory) coredump, on macOS. Whether it's passed as an enum or bitflag I don't have a preference, although a bitflag makes it sound like there could be multiple of them selected, when that's not what I'd be envisioning here. (if we were doing a series of flags for the lldb API, it would make more sense if we had different types of memory that could be coredumped and you could set the flags for what you want included. stack, heap, shared memory, binary images, etc.)

I think eventually it would be nice to get to multiple of the flags, but we can start with full or dirty only is fine.

> I forgot to respond to your suggestion about the SB API Process::SaveCore. I'm a little reluctant to add anything there yet because we're going from one style of coredump to two right now, and I'm not sure if we're going to stay at 2 forever or if we're going to evolve this into a panoply of different coredump styles and I'm not confident enough about this to want to put it in API. But I'm not wedded to that position.

Sounds good, lets work this out first internally and then expose via the SB API when we are ready.

labath added a subscriber: labath.Oct 8 2020, 5:51 AM

/me parachutes into the discussion.

On the topic of interfaces, I'd like to say that eventually I'd like to have the ability to choose the format for the core file that lldb generates (so that one can generate minidump files on mac or linux). There are no direct implications of that for this patch, as the two choices are likely to be orthogonal (full or "skinny" minidumps make sense too), it would just be good to leave room for this extension in the future (e.g. in the SB API).

As for the flag discussion, it might be interesting to note that the linux coredump_filter is a set of flags

bit 0  Dump anonymous private mappings.
bit 1  Dump anonymous shared mappings.
bit 2  Dump file-backed private mappings.
bit 3  Dump file-backed shared mappings.
bit 4 (since Linux 2.6.24)
       Dump ELF headers.
bit 5 (since Linux 2.6.28)
       Dump private huge pages.
bit 6 (since Linux 2.6.28)
       Dump shared huge pages.
bit 7 (since Linux 4.4)
       Dump private DAX pages.
bit 8 (since Linux 4.4)
       Dump shared DAX pages.

I have no plans on doing anything with that, though someone theoretically might. However, I think that even if we do have the ability to fine-tune the core contents via individual flags, we should probably still have some way to select some reasonable set of predefined flags.
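Decoding a /proc/&lt;pid&gt;/coredump_filter value against the bit list above is just a matter of testing each bit; a small sketch (the kernel's documented default value is 0x33):

```python
COREDUMP_FILTER_BITS = [
    "anonymous private mappings",    # bit 0
    "anonymous shared mappings",     # bit 1
    "file-backed private mappings",  # bit 2
    "file-backed shared mappings",   # bit 3
    "ELF headers",                   # bit 4
    "private huge pages",            # bit 5
    "shared huge pages",             # bit 6
    "private DAX pages",             # bit 7
    "shared DAX pages",              # bit 8
]

def decode_coredump_filter(value):
    """Return the mapping types a coredump_filter value selects."""
    return [name for bit, name in enumerate(COREDUMP_FILTER_BITS)
            if value & (1 << bit)]
```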

Lastly, I think it would be good to extend the memory region command to show the new MemoryRegionInfo contents. If for nothing else, then to enable the testing of this (pretty large) patch in a more granular way. Having this in the output of memory region would enable splitting the gdb-remote part of this into a separate patch.

Thanks for the feedback Pavel. Good point on piping the dirty pages list through the memory region command and adding tests for that. And good thoughts on different memory types that could be included in the corefile. I think we may revisit how the 'process save-core' command arguments work as we add more formats. We could do some pretty cool stuff here.

I'm not going to bite this off right now, but I was thinking about the fact that a "dirty memory only" corefile depends on having all the binaries present on the system where the corefile was taken. Why can't you run lldb on a dirty memory corefile and create a full corefile from it, if it's on the same system with the same libraries?

Also, with a full corefile, if we know the dirty-memory-page list, if a memory region corresponds to a binary (libc.so text for instance), why should we read the memory from the inferior at all? We can copy the pages of the binary into the memory buffer. On a Darwin system, the vast majority of a full corefile are system binaries that are unmodified; it would make a full coredump a lot faster to create. I don't think perf is the most important thing with a corefile, but just some thoughts I had while looking at this.

Update to address previous review comments. (1) Update SBMemoryRegionInfo API to provide access to the dirty page information. (2) Update the 'memory region' command output to print the dirty page list. (3) Update the 'process save-core' command to take a --style option which accepts an enum for the type of corefile. (4) Update 'process save-core' text to explain the limitations of a modified-memory-only corefile. (5) Add an API/functionalities/gdb_remote_client test with a stub that returns the region infos with these new fields, and tests the SBMemoryRegionInfo object filled in on the debugger side. I think that's everything.

There's still a hang when running the GDBRemoteCommunicationClientTest.cpp tests (GDBRemoteCommunicationClientTest::GetMemoryRegionInfo and GDBRemoteCommunicationClientTest::GetMemoryRegionInfoInvalidResponse) that I haven't gotten to the bottom of yet, but I thought I'd update the patchset on the phab with where I am now, while I work through this last issue.

labath added inline comments.Nov 6 2020, 3:08 AM
lldb/include/lldb/Target/MemoryRegionInfo.h
43–45

This bit is unnecessary. In fact, I'd implement the entire function as *this = MemoryRegionInfo();

118

const Optional<...> & to avoid a copy.

125–127

m_dirty_pages = std::move(pagelist);

lldb/include/lldb/lldb-private-interfaces.h
57–58

Maybe just have one argument which will be set on exit to reflect the actual style that was used?

lldb/source/API/SBMemoryRegionInfo.cpp
142 ↗(On Diff #301531)

const T &

lldb/source/Commands/CommandObjectMemory.cpp
1742 ↗(On Diff #301531)

const T &

1750–1760 ↗(On Diff #301531)

I think something like this ought to do:
AppendMessageWithFormatv("Dirty pages: {0:@[x]}.\n", llvm::make_range(dirty_page_list->begin(), dirty_page_list->end()));

lldb/source/Plugins/ObjectFile/PECOFF/ObjectFilePECOFF.cpp
224

I don't think this is a good use of that constant. Minidumps can also come in "full" styles, which include all memory, or some smaller ones, where only the heap (or its modified portion, or just the stack, etc.) is included.

In essence, "minidump" is just a format, just like "elf" and "macho". Its peculiarity is that it is different from the native object file format, but that's not relevant here.

I think that the best way to handle this is to make the choice of format orthogonal to the choice of what goes into the dump. Currently we don't allow the user to pick a format, and that's fine -- we'll just pick whatever is the native format for the given platform. But I don't think we should start mixing the two.

FYI my change to add the memory tagging flag (https://reviews.llvm.org/D87442) has landed so you will have some conflicts.

Update patch to:

  1. Fix the testsuite hang I was getting in the unittests.
  2. Incorporate Pavel's great review feedback from last November -- thanks Pavel! Most importantly, I misunderstood the nature of Minidumps: I thought a Minidump described a minimal snapshot of process memory (maybe stack memory) that can be captured for more detailed crash analysis, so I offered it as a coredump style in addition to "full coredump" and "dirty memory coredump", but this was incorrect. I've removed Minidump as a coredump style option.
  3. Rebased to current llvm.

It all looks good, passes the testsuite on Darwin, and by-hand testing all works right. I'll land it soon.

This revision was not accepted when it landed; it landed in state Needs Review.Jun 20 2021, 12:27 PM
This revision was landed with ongoing or failed builds.
This revision was automatically updated to reflect the committed changes.