After the recent refactoring that introduced parallel handling of different object files, the binary holder became unique per object file. This defeats its optimization of caching opened archives: the archive is now reopened for every binary it contains. This is obviously unfortunate and will need to be refactored soon.
Luckily, the impact of this is limited in practice, as most files are mmap'ed rather than copied into memory. There is a caveat, however: when a memory buffer requires a null terminator and the file size is an exact multiple of the page size, we allocate and copy instead of mmap'ing, because there is no zero-filled byte past the end of the mapping to serve as the terminator. If this happens for a static archive, we end up with N copies of it in memory, where N is the number of objects in the archive, leading to exorbitant memory usage.

This patch provides a stopgap solution that ensures all the files the binary holder loads are mmap'ed into memory, by removing the requirement for a terminating null byte.
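A minimal sketch of the idea, assuming the loading goes through LLVM's MemoryBuffer API (the exact parameter list of getFile varies slightly across LLVM versions, and openFile here is a hypothetical helper, not the actual call site in the binary holder):

    #include "llvm/ADT/Twine.h"
    #include "llvm/Support/MemoryBuffer.h"

    using namespace llvm;

    // Passing RequiresNullTerminator = false lets MemoryBuffer mmap the
    // file unconditionally. With the default (true), a file whose size is
    // an exact multiple of the page size has no zero-filled tail after the
    // mapping to act as the terminator, so MemoryBuffer falls back to
    // allocating a buffer and copying the whole file into it.
    ErrorOr<std::unique_ptr<MemoryBuffer>> openFile(const Twine &Path) {
      return MemoryBuffer::getFile(Path, /*FileSize=*/-1,
                                   /*RequiresNullTerminator=*/false);
    }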