A profile shows that the majority of time in type merging is spent deserializing records from sequences of bytes into friendly C++ structures whose members we can easily access in order to find the type indices to re-write.
Records are prefixed with their length, however, and in most records the type indices appear at fixed offsets. For these records we can save some cycles by looking at the right place in the byte sequence, re-writing the value there, and then skipping the record in the type stream. This spares us the costly deserialization of every field (including null-terminated strings, which are the slowest to parse) when it was unnecessary to begin with.
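A minimal sketch of this fast path follows; the names are hypothetical rather than the actual LLVM API, and it assumes a little-endian host where the real code would use an endian-aware helper:

  #include <cstddef>
  #include <cstdint>
  #include <cstring>

  // Unaligned 32-bit little-endian accessors; assumes a little-endian host.
  static uint32_t read32(const uint8_t *P) {
    uint32_t V;
    std::memcpy(&V, P, sizeof(V));
    return V;
  }
  static void write32(uint8_t *P, uint32_t V) {
    std::memcpy(P, &V, sizeof(V));
  }

  // Remap the type index at a known fixed offset in the serialized record,
  // touching nothing else. 'Remap' translates an index from the source type
  // stream to its value in the merged destination stream.
  template <typename RemapFn>
  void remapIndexAtFixedOffset(uint8_t *Record, size_t Offset, RemapFn Remap) {
    write32(Record + Offset, Remap(read32(Record + Offset)));
  }

Once the index is patched, the length prefix lets us hop directly to the next record without parsing the remaining fields.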
In addition, we apply another optimization. Previously, after deserializing a record and re-writing its type indices, we would unconditionally re-serialize it in order to compute the hash of the re-written record. This cost an allocation and a memcpy for every record; if no type indices were re-written, the allocation was unnecessary. With this patch, re-writing becomes two-phase. The first phase discovers the indices that need to be re-written and their new values. That information is passed through to the de-duplication code, which copies the serialized byte sequence and re-writes the type indices only if at least one index actually changed.
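A sketch of the two-phase shape, again with hypothetical names (hashBytes is a simple FNV-1a stand-in for whatever hash the de-duplicator really uses):

  #include <cstddef>
  #include <cstdint>
  #include <cstring>
  #include <utility>
  #include <vector>

  // Phase 1 output: byte offsets within the serialized record and the
  // remapped index to store at each. Empty means the record is unchanged.
  using IndexRewrites = std::vector<std::pair<size_t, uint32_t>>;

  // FNV-1a stand-in for the de-duplicator's real content hash.
  static uint64_t hashBytes(const uint8_t *Data, size_t Length) {
    uint64_t H = 0xcbf29ce484222325ULL;
    for (size_t I = 0; I < Length; ++I) {
      H ^= Data[I];
      H *= 0x100000001b3ULL;
    }
    return H;
  }

  // Phase 2, in the de-duplication path: hash the original bytes when nothing
  // changed; otherwise copy once, patch the indices, and hash the copy.
  static uint64_t hashRewrittenRecord(const uint8_t *Record, size_t Length,
                                      const IndexRewrites &Rewrites) {
    if (Rewrites.empty())
      return hashBytes(Record, Length); // fast path: no alloc, no memcpy
    std::vector<uint8_t> Copy(Record, Record + Length);
    for (const auto &[Offset, NewIndex] : Rewrites)
      std::memcpy(Copy.data() + Offset, &NewIndex, sizeof(NewIndex)); // little-endian host assumed
    return hashBytes(Copy.data(), Copy.size());
  }

The point is simply that the alloc-and-memcpy is now gated on Rewrites being non-empty.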
Some records have type indices that only appear after variable-length strings, or contain lists of type indices, or present other situations that make this optimization tricky to apply. While I'm not giving up on optimizing those cases as well, for now we can get the easy cases out of the way and lay the groundwork for the more complicated ones later.
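To illustrate why those cases are harder (the layout here is made up, not a specific record kind): when an index sits behind a null-terminated name, its offset varies per record and must be found by scanning, which gives back part of what the fixed-offset path saves:

  #include <cstddef>
  #include <cstdint>

  // Locate a type index that follows a variable-length, null-terminated
  // string. Returns Length if the payload is malformed (no terminator).
  static size_t indexOffsetAfterName(const uint8_t *Payload, size_t Length) {
    size_t I = 0;
    while (I < Length && Payload[I] != 0)
      ++I;                              // scan past the string
    return I < Length ? I + 1 : Length; // index begins after the NUL
  }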
This patch yields another 50% speedup on top of the already large speedups submitted over the past two days. In two tests I ran, the time went from 9 seconds to 3 seconds and from 16 seconds to 8 seconds.