Currently, the DataExtractor::GetMaxU64Bitfield and GetMaxS64Bitfield
routines assume the incoming "bitfield_bit_offset" parameter uses
little-endian bit numbering, i.e. a bitfield_bit_offset of 0
refers to a bitfield whose least-significant bit coincides with
the least-significant bit of the surrounding integer.
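For illustration, under little-endian bit numbering the extraction
amounts to a plain shift from the least-significant end (a minimal
sketch; extract_le is a hypothetical helper, not LLDB API):

  #include <cstdint>

  // Hypothetical helper. Little-endian bit numbering: bit_offset
  // counts from the least-significant bit of the surrounding
  // integer.
  uint64_t extract_le(uint64_t value, uint32_t bit_offset,
                      uint32_t bit_size) {
    uint64_t mask = bit_size < 64 ? (1ull << bit_size) - 1 : ~0ull;
    return (value >> bit_offset) & mask;
  }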
On many big-endian systems, however, big-endian bit numbering is
used for bit fields. Here, a bitfield_bit_offset of 0 refers to
a bitfield whose most-significant bit coincides with the
most-significant bit of the surrounding integer.
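Under big-endian bit numbering, the same extraction has to shift
relative to the most-significant end, so the total width of the
surrounding integer enters the computation (again a sketch with a
hypothetical helper):

  #include <cstdint>

  // Hypothetical helper. Big-endian bit numbering: bit_offset
  // counts from the most-significant bit of the surrounding
  // integer, which is byte_size * 8 bits wide.
  uint64_t extract_be(uint64_t value, uint32_t byte_size,
                      uint32_t bit_offset, uint32_t bit_size) {
    uint32_t shift = byte_size * 8 - bit_offset - bit_size;
    uint64_t mask = bit_size < 64 ? (1ull << bit_size) - 1 : ~0ull;
    return (value >> shift) & mask;
  }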
Now, in principle LLDB could arbitrarily choose which semantics of
bitfield_bit_offset to use. However, there are two problems with
the current approach:
- When parsing DWARF, LLDB decodes bit offsets in little-endian
  bit numbering on LE systems, but in big-endian bit numbering
  on BE systems. Passing those offsets on to the DataExtractor
  routines later gives incorrect results on BE.
- In addition, LLDB's type layer combines byte and bit offsets
  into a single number. I.e. instead of recording bitfields by
  specifying the byte offset and byte size of the surrounding
  integer *plus* the bit offset of the bit field within that
  field, it simply records a single bit offset number.
Now, note that converting from byte offset + bit offset to a
single offset value and back is well-defined if we either use
little-endian byte order *and* little-endian bit numbering, or
use big-endian byte order *and* big-endian bit numbering. Any
other combination will yield incorrect results.
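To see why only the matching combinations work, note that in
both of them the combined number counts bits from the start of
storage in a way that is monotone in bit significance, so the
byte index and the in-byte position fall out of a simple
division (illustrative hypothetical helpers, not LLDB code):

  #include <cstdint>

  // LE storage + LE bit numbering: bit 0 of the combined offset
  // is the LSB of byte 0, which is also the LSB of the integer.
  bool get_bit_le(const uint8_t *data, uint32_t combined) {
    return (data[combined / 8] >> (combined % 8)) & 1;
  }

  // BE storage + BE bit numbering: bit 0 of the combined offset
  // is the MSB of byte 0, which is also the MSB of the integer.
  bool get_bit_be(const uint8_t *data, uint32_t combined) {
    return (data[combined / 8] >> (7 - combined % 8)) & 1;
  }

With a mismatched combination, locating the bit would
additionally require the size of the surrounding integer, so a
single number can no longer encode both parts consistently.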
Therefore, the simplest approach would seem to be to always use
the bit numbering that matches the system byte order. This makes
storing a single bit offset valid and makes the existing DWARF
code correct. The only place that needs fixing is DataExtractor,
which has to be taught to use big-endian bit numbering on
big-endian systems.
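Concretely, the adjustment could look roughly like the following
simplified model of the GetMaxU64Bitfield logic (a sketch, not
the exact patch): on big-endian data, the incoming offset is
first translated into a shift count from the least-significant
bit.

  #include <cstdint>

  // Sketch, not the actual LLDB code. size is the byte size of
  // the surrounding integer, uval64 its value in host
  // representation.
  uint64_t ExtractBitfield(uint64_t uval64, uint32_t size,
                           bool is_big_endian, uint32_t bit_size,
                           uint32_t bit_offset) {
    if (bit_size == 0)
      return uval64;
    // In big-endian bit numbering the offset counts from the
    // MSB; convert it to a shift from the LSB.
    uint32_t lsb = is_big_endian
                       ? size * 8 - bit_offset - bit_size
                       : bit_offset;
    uint64_t mask = bit_size < 64 ? (1ull << bit_size) - 1 : ~0ull;
    return (uval64 >> lsb) & mask;
  }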
However, there is one additional caveat: we also get bit offsets
from LLDB synthetic bitfields. While the exact semantics of those
do not seem to be well-defined, from test cases it appears that
the intent was for the user-provided synthetic bitfield offset to
always use little-endian bit numbering. Therefore, on a big-endian
system we now have to convert those to big-endian bit numbering
to remain consistent.
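That conversion is the mirror image of the translation above; a
sketch with a hypothetical helper (not the actual LLDB code):

  #include <cstdint>

  // Hypothetical helper: translate a user-provided synthetic
  // bitfield offset, which is taken to use little-endian bit
  // numbering, into the system's native bit numbering.
  uint32_t ConvertSyntheticBitOffset(uint32_t le_bit_offset,
                                     uint32_t bit_size,
                                     uint32_t byte_size,
                                     bool is_big_endian) {
    return is_big_endian
               ? byte_size * 8 - le_bit_offset - bit_size
               : le_bit_offset;
  }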