The patch prints a special mark when dumping a section that is in the 64-bit DWARF format, which helps distinguish the actual format of the section.
I wonder if it would make sense to also print DWARF32 when we're not using DWARF64? That would make the output more regular and parseable. What do you think?
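To make the suggestion concrete, the dumper could emit an explicit format tag in both cases so every unit header looks the same. A minimal sketch (the function and field names here are hypothetical, not the dumper's actual ones):

```python
def dump_unit_header(offset: int, length: int, is_dwarf64: bool) -> str:
    # Print "DWARF32" or "DWARF64" explicitly, so the output stays uniform
    # and trivially parseable regardless of the unit's format.
    tag = "DWARF64" if is_dwarf64 else "DWARF32"
    return f"0x{offset:08x}: length = 0x{length:08x}, format = {tag}"

print(dump_unit_header(0x0, 0x430, False))
# 0x00000000: length = 0x00000430, format = DWARF32
```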
Is there no testing already for DWARF64 debug_frame? If not, you should probably test the length fields and any other bits that are DWARF64 related too. If you don't want to do it as part of this change, that's fine, but perhaps it should be a prerequisite?
This block might belong below the ERR block to match order of usage.
Nit: too many blank lines.
Nit: unrelated (and bad?) change.
Perhaps it'd be too subtle, but another direction we could go would be to print the length as 32 or 64 bits depending on the encoding and using that to signal 64/32, without an explicit field. It'd match the input format (where the size of the length indicates whether it's 32 or 64).
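A quick sketch of that idea, assuming the dumper simply widens the hex rendering of the length to 16 digits for DWARF64 and keeps 8 for DWARF32 (names are illustrative only):

```python
def render_length(length: int, is_dwarf64: bool) -> str:
    # The number of hex digits alone signals the format: 8 digits for
    # DWARF32, 16 for DWARF64. No explicit format field is printed.
    width = 16 if is_dwarf64 else 8
    return f"length = 0x{length:0{width}x}"

print(render_length(0x430, False))  # length = 0x00000430
print(render_length(0x430, True))   # length = 0x0000000000000430
```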
The idea is interesting, but the output would not be that explicit, right? And all the leading zeros would look tedious to my taste. Moreover, it would require changes in more places in the code to make the output uniform.
Another possibility is the way used in readelf:
    Compilation Unit @ offset 0x0: Length: 0x430 (32-bit)
    Length: 0x430 (64-bit)
However, it looks like it does that only for .debug_info sections; for other debug tables it just prints the value of the length. And for some sections, like .debug_frame, that approach cannot be used without breaking the output format.
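For reference, the input format really is unambiguous on its own: DWARF encodes each unit length as an "initial length", where the 32-bit escape value 0xffffffff means the actual length follows as a 64-bit value (and 0xfffffff0 through 0xfffffffe are reserved). A sketch of decoding it:

```python
import struct

def read_initial_length(data: bytes):
    """Decode a DWARF initial-length field: returns (length, is_dwarf64)."""
    (first,) = struct.unpack_from("<I", data, 0)
    if first == 0xFFFFFFFF:
        # DWARF64 escape: the real length is the following 64-bit value.
        (length,) = struct.unpack_from("<Q", data, 4)
        return length, True
    if first >= 0xFFFFFFF0:
        raise ValueError(f"reserved initial length 0x{first:08x}")
    return first, False

# 32-bit unit: the length is just the first 4 bytes.
print(read_initial_length(struct.pack("<I", 0x430)))              # (1072, False)
# 64-bit unit: 0xffffffff escape followed by the 64-bit length.
print(read_initial_length(struct.pack("<IQ", 0xFFFFFFFF, 0x430))) # (1072, True)
```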
I thought about that, but the changes in the output would make it incompatible with lots of existing tests and, probably, might affect some external tools.
I think the effect on external tools is a non-issue: not in the sense that it wouldn't be a problem (it could be), but in the sense that, as it stands, this change introduces the issue anyway. External tools might suddenly start seeing this new format, or worse, might start seeing it at some future point when something causes a file to start using DWARF64.
Everything else in this change looks fine to me. I also don't feel strongly about the above point - nothing actually produces DWARF64 currently that I know of, so maybe it's not worth worrying about too much either way.
Looks like this format already provides some precedent for using 16-hex-digit alignment for DWARF64 and 8 for DWARF32. It seems like you've updated /some/ places to do that but not all? (e.g., debug_pubnames still seems to use an 8-hex-digit rendering of the length)
& yeah, as Jared mentioned - I don't think the text format of the dumper is a place to worry too much about format changes, etc. I don't feel /super/ strongly about rendering this data only via the width of the length field versus being more explicit in some way, but I'd like the length rendering to be consistent across the different length fields/sections/etc.
I can speak from experience that I received a surprising number of reports about various broken perl scripts whenever we changed the output format of dwarfdump, so it's reasonable to be mindful. But this change is additive and carries important information, so let's do it!
So, given we're going to start emitting DWARF64 output, unless it is going to be forevermore opt-in only (it won't be...), I think my earlier point stands: people need to expect that output will sometimes change if that means producing more useful output. In this case too, if the output suddenly changes and breaks their scripts just because their debug data has grown a few bytes bigger, it would be very confusing. I think we should just bite the bullet, make the output format change for DWARF32 too, and document the change in behaviour in a release note. Yes, it will break things, but better that people are aware why there's a sudden breakage than have one appear in five years' time when their DWARF grows a bit too big.