We generate incorrect values for DW_AT_data_bit_offset for Objective-C interfaces. I can only speculate as to what we were trying to achieve by taking the offset modulo the byte size, but given that both the size and the offset are expressed in bits, this seems wrong.
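As a rough illustration of the arithmetic in question (the values below are hypothetical and not taken from the patch or from CGDebugInfo.cpp): reducing a bit offset modulo the number of bits in a byte keeps only the position within the containing byte and discards the byte component, so the emitted value is only meaningful if a consumer recovers that byte component from somewhere else.

```cpp
#include <cstdint>
#include <cstdio>

int main() {
  // Hypothetical ivar bit-field whose first bit sits at absolute bit
  // offset 17 within its containing interface.
  const uint64_t FieldOffsetInBits = 17;
  const uint64_t CharWidthInBits = 8; // bits per byte on typical targets

  // Emitting the offset verbatim preserves the full position in bits.
  std::printf("full bit offset: %llu\n",
              (unsigned long long)FieldOffsetInBits);

  // Reducing it modulo the byte width keeps only the sub-byte part
  // (17 % 8 == 1) and drops the byte part (17 / 8 == 2), which a consumer
  // would have to obtain elsewhere, e.g. from the runtime's ivar offset
  // variable.
  std::printf("offset %% byte width: %llu (byte part: %llu)\n",
              (unsigned long long)(FieldOffsetInBits % CharWidthInBits),
              (unsigned long long)(FieldOffsetInBits / CharWidthInBits));
  return 0;
}
```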
Diff Detail

Repository: rC Clang

Event Timeline
clang/lib/CodeGen/CGDebugInfo.cpp:2369
It might help to attempt some git blame archeology. Judging from the comment, it sounds like the debugger is supposed to query the runtime for the *byte* offset and then add the bit offset from DWARF? Could that make sense?
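A tiny sketch of that interpretation, with hypothetical names and values (this is not LLDB's actual API): the consumer would scale the dynamic byte offset reported by the runtime and add the static bit offset from DWARF.

```cpp
#include <cstdint>

// Hypothetical helper, not LLDB code: combine the *byte* offset obtained from
// the Objective-C runtime's ivar offset variable with the sub-byte bit offset
// recorded in DWARF to get the absolute bit position of the field.
static uint64_t totalBitOffset(uint64_t RuntimeByteOffset,
                               uint64_t DwarfBitOffset) {
  return RuntimeByteOffset * 8 + DwarfBitOffset;
}

int main() {
  // e.g. the runtime reports byte 2 and DWARF records bit 1 -> bit 17.
  return totalBitOffset(2, 1) == 17 ? 0 : 1;
}
```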
clang/lib/CodeGen/CGDebugInfo.cpp:2369
If that is the case, we'd need to relax llvm-dwarfdump --verify to accept this and make sure LLDB does the right thing instead.
clang/lib/CodeGen/CGDebugInfo.cpp:2369
Ah I see, yeah that sounds reasonable and explains the comment which I interpreted differently. Thanks!
clang/lib/CodeGen/CGDebugInfo.cpp:2369
We should check whether emitting the offsets like this violates the DWARF spec. If yes, it may be better to emit the static offsets like you are doing here and then still have LLDB ignore everything but the bit-offsets from the dynamic byte offset.
clang/lib/CodeGen/CGDebugInfo.cpp:2369
The standard says

which to me sounds like they should be mutually exclusive. I ran the lldb test suite with my change and there were no new failures, which leads me to believe that the comment from r167503 still holds and lldb ignores this attribute, at least for Objective-C.
clang/lib/CodeGen/CGDebugInfo.cpp:2369
In Clang's current code generation, the ivar offset variable stores the offset of the byte containing the first bit of the bit-field, which is not necessarily the same as the offset of the byte at which the bit-field's storage begins. This is why the debug info takes the offset modulo the number of bits in a char: it's the correct bit offset for computing where the bit-field begins relative to the value in the ivar offset variable. See the behavior of CGObjCRuntime::EmitValueForIvarAtOffset, which does the same modulo when computing the layout.

It seems right to me that the modulo is done consistently within the compiler, rather than expecting LLDB to do it. That's the more future-proof design; for example, it would allow the compiler to emit a single ObjC ivar record for the combined bit-field storage and then emit debug info for the bit-fields at their appropriate relative bit offsets, which could then be >= 8. This would arguably be a better code-generation scheme anyway, for a number of reasons, at the cost of breaking reflective access to the individual ivars, which I don't believe the runtime really supports anyway.

Anyway, the upshot is that I don't think this patch is correct.
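To make the scheme described above concrete, here is a small worked sketch with assumed numbers (nothing below is taken from Clang's actual output): if a bit-field's storage unit begins at byte 0 but its first bit falls at absolute bit offset 17, the ivar offset variable holds the containing byte (2), which is not the storage-begin byte, and the debug info holds the remaining bit offset (1).

```cpp
#include <cassert>
#include <cstdint>

int main() {
  // Assumed layout (purely illustrative): a bit-field whose 32-bit storage
  // unit begins at byte 0 of the object, but whose first bit falls at
  // absolute bit offset 17.
  const uint64_t StorageBeginByte = 0;
  const uint64_t AbsoluteBitOffset = 17;
  const uint64_t CharWidthInBits = 8;

  // What the ivar offset variable stores under the scheme described above:
  // the byte containing the first bit of the bit-field...
  const uint64_t ContainingByte = AbsoluteBitOffset / CharWidthInBits; // 2
  // ...which is not the byte at which the bit-field's storage begins.
  assert(ContainingByte != StorageBeginByte);

  // What the debug info stores: the bit offset relative to that containing
  // byte, i.e. the result of the modulo discussed in this review.
  const uint64_t DebugInfoBitOffset = AbsoluteBitOffset % CharWidthInBits; // 1

  // Relative to the value in the ivar offset variable, this is exactly where
  // the bit-field begins.
  assert(ContainingByte * CharWidthInBits + DebugInfoBitOffset ==
         AbsoluteBitOffset);
  return 0;
}
```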