This is an archive of the discontinued LLVM Phabricator instance.

[DebugInfo] Make incorrect debug line extended opcode length non-fatal
ClosedPublic

Authored by jhenderson on Jan 3 2020, 7:35 AM.

Details

Summary

It is possible to keep parsing a debug line program even when the length of an extended opcode does not match what is expected for that opcode. This patch changes what was previously a fatal error into a non-fatal one. The parser now continues by assuming the claimed length is correct and adjusting the offset if required.
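
To make the behaviour concrete, here is a minimal, self-contained sketch (illustrative only, not the patch's code; the sub-opcode value and its 4-byte operand are invented for the example). A DWARF extended opcode is encoded as a 0x00 introducer, a ULEB128 length, a sub-opcode byte and its operands; on a mismatch the parser warns and resumes at the offset the stated length implies.

  #include <cstdint>
  #include <cstdio>
  #include <vector>

  // Read an unsigned LEB128 value, advancing Off.
  static uint64_t readULEB128(const std::vector<uint8_t> &Bytes, size_t &Off) {
    uint64_t Value = 0;
    unsigned Shift = 0;
    uint8_t Byte;
    do {
      Byte = Bytes[Off++];
      Value |= uint64_t(Byte & 0x7f) << Shift;
      Shift += 7;
    } while (Byte & 0x80);
    return Value;
  }

  // Parse one extended opcode (the 0x00 introducer is assumed to be already
  // consumed) and return the offset of the next opcode. If the bytes actually
  // consumed disagree with the stated length, report a non-fatal warning and
  // trust the stated length when deciding where to continue.
  static size_t parseExtendedOpcode(const std::vector<uint8_t> &Bytes,
                                    size_t Off) {
    uint64_t StatedLength = readULEB128(Bytes, Off); // bytes after this field
    size_t SubOpcodeOff = Off;
    uint8_t SubOpcode = Bytes[Off++];
    size_t StandardOperandSize = 4; // pretend the standard says 4 bytes here
    Off += StandardOperandSize;
    uint64_t ParsedLength = Off - SubOpcodeOff;
    if (ParsedLength != StatedLength) {
      std::fprintf(stderr,
                   "warning: opcode 0x%02x: stated length %llu, parsed %llu; "
                   "continuing from the stated length\n",
                   (unsigned)SubOpcode, (unsigned long long)StatedLength,
                   (unsigned long long)ParsedLength);
      return SubOpcodeOff + StatedLength; // adjust the offset, keep parsing
    }
    return Off;
  }

  int main() {
    // Stated length 3, but sub-opcode + 4-byte operand would consume 5 bytes.
    std::vector<uint8_t> Bytes = {0x03, 0x42, 0xaa, 0xbb, 0xcc, 0xdd, 0x01};
    std::printf("next opcode at offset %zu\n", parseExtendedOpcode(Bytes, 0));
    return 0; // prints 4 (= 1 + stated length 3)
  }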

Diff Detail

Event Timeline

jhenderson created this revision. Jan 3 2020, 7:35 AM
Herald added a project: Restricted Project. Jan 3 2020, 7:35 AM
Herald added a subscriber: hiraditya.

Early ping - I'd like to get this and the other related reviews in before the release branch is created if possible.

Out of curiosity: what's your broader goal with this work? (it'll help understand what's in-scope and out of scope, and better understand the framing when reviewing changes)

The parser now continues by assuming the larger of the claimed length and parsed length is correct.

This seems like a bit of a stretch to guess at which length is the one that's correct. I don't think there's a solid basis to choose either, and if the wrong one is chosen you could start reading from the middle of other opcodes & things, I would imagine, resulting in many follow-on error messages due to apparently further malformed input?

Out of curiosity: what's your broader goal with this work? (it'll help understand what's in-scope and out of scope, and better understand the framing when reviewing changes)

There are two parts to this:

  1. Making it easier for consumers to continue and try to do something with slightly bad output. One of the problems with using the unrecoverable errors is that it prevents people from even trying to iterate over later tables.
  2. (This one's not specific to this patch, but some other patches I've been doing in this area are related.) We have some local code that uses the DebugInfo library to read a line table. For this code to be sound, we need to make sure the line table makes sense. The parser in its current state doesn't pick up on a number of bad situations, so we added some local patches to detect these other errors (local because we didn't have time at that point to try to push them into the open source). We'd now like to get these into the open-source library, to avoid merge conflicts.

The parser now continues by assuming the larger of the claimed length and parsed length is correct.

This seems like a bit of a stretch to guess at which length is the one that's correct. I don't think there's a solid basis to choose either, and if the wrong one is chosen you could start reading from the middle of other opcodes & things, I would imagine, resulting in many follow-on error messages due to apparently further malformed input?

I was unsure what to do here, if I'm honest. The counter-argument to your point is that you might not start reading from the middle of things, and that it was just the one field that was bad for whatever reason. Essentially, I think you have four options in this sort of situation: a) treat the error as unrecoverable - this makes it impossible to parse later tables, regardless of the state of things, but would prevent other, potentially spurious, errors from coming out; b) use a "stated length wins" approach; c) use a "parsed length wins" approach; d) use a "largest length wins" approach. The right decision is going to differ from one situation to the next, but without attempting to look ahead and see what makes the most sense if parsing were to continue (which seems unnecessarily complicated), we have to just pick one. Previously, when I implemented these errors, I took approach a), on the basis of "you can't know what to do, so better not to allow continuation, since it could lead to spurious information", but it was more recently pointed out to me that this makes it impossible for people to see any later information at all.

I don't really have a strong opinion on this or the similar "bigger wins" patches I've been making. I'm happy to go with whatever the consensus is.

Making it easier for consumers to continue and try to do something with slightly bad output. One of the problems with using the unrecoverable errors is that it prevents people from even trying to iterate over later tables.

I misremembered the rest of the code. An unrecoverable error doesn't prevent the SectionParser from continuing to the next line table; it just means that tools like llvm-dwarfdump will treat it as an error. The only difference between this and the recoverable error is that parsing of the current table will stop.

I took a step back and re-discussed the situation with a colleague offline. The errors are passed to callbacks so that clients can decide what to do when they see an error. For example, a consumer who needs the output to be definitely correct can have their callback say "ignore this line table, and also ignore any further errors in that table". The only real difference between the unrecoverable and recoverable callbacks is that we stop parsing at the point the former is hit, because we have no way of continuing, meaning our information is incomplete, whereas the latter will try to continue and give some information, but that information might be inaccurate. Given this, I don't think producing spurious information is wrong, as long as we have reported at least one error.
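
As a rough illustration of that callback split (invented names, not the actual DWARFDebugLine API): a strict consumer can flip a flag in its recoverable-error handler and discard the whole table, while a best-effort consumer can just log and keep whatever rows it got.

  #include <functional>
  #include <iostream>
  #include <string>

  struct LineTable { /* rows would live here */ };

  using ErrorHandler = std::function<void(const std::string &)>;

  // Hypothetical parser entry point: recoverable problems let parsing continue
  // (the result may be inaccurate); unrecoverable ones stop the current table.
  LineTable parseOneTable(const ErrorHandler &OnRecoverable,
                          const ErrorHandler &OnUnrecoverable) {
    // ... real parsing would happen here, invoking the handlers as needed ...
    OnRecoverable("unexpected line op length"); // demo invocation
    (void)OnUnrecoverable;
    return LineTable{};
  }

  int main() {
    bool TableIsSuspect = false;
    LineTable T = parseOneTable(
        /*OnRecoverable=*/[&](const std::string &Msg) {
          // Strict policy: note the first problem, then ignore this table and
          // any further errors it produces.
          if (!TableIsSuspect)
            std::cerr << "warning: " << Msg << "; discarding this table\n";
          TableIsSuspect = true;
        },
        /*OnUnrecoverable=*/[&](const std::string &Msg) {
          std::cerr << "error: " << Msg << "\n";
          TableIsSuspect = true;
        });
    (void)T; // only rely on T's contents when !TableIsSuspect
    return 0;
  }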

That being said, I do think "largest one wins" is the wrong approach, having considered some of the other cases. In particular, the SectionParser treats the unit length field as sacrosanct, and will always follow it (assuming it can possibly make sense - see totalLengthIsValid) when moving to the next table, even if that means the offset moves backwards from the point it got to after reading the table. I therefore think that, for consistency, we should treat all length fields as sacrosanct, i.e. jump to the position the offset should be at according to the length (whether that is bigger or not) and continue from there. Yes, this might cause the parsing to produce bogus information, but that's okay (see my above point).

I'll update this and the other similar diffs accordingly.
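
For reference, a minimal sketch of the "length fields are sacrosanct" policy described above (illustrative only, not the SectionParser code; what happens when the length cannot possibly make sense is simplified here): the next table always starts where the unit length field says it should, even if that point is before or after where decoding actually stopped.

  #include <cstdint>
  #include <cstdio>

  struct Table {
    uint64_t Start;      // offset of the table in the section
    uint64_t UnitLength; // value of its unit length field
    uint64_t ParseEnd;   // where decoding of its opcodes actually stopped
  };

  // LengthFieldSize is 4 for 32-bit DWARF (12 for 64-bit DWARF).
  uint64_t nextTableOffset(const Table &T, uint64_t LengthFieldSize,
                           uint64_t SectionSize) {
    uint64_t Claimed = T.Start + LengthFieldSize + T.UnitLength;
    if (Claimed <= SectionSize) // only honour a length that can make sense
      return Claimed;           // may be before or after T.ParseEnd
    return T.ParseEnd;          // one possible fallback for a bogus length
  }

  int main() {
    // Parsing overran the stated length by two bytes; the length still wins.
    Table T{/*Start=*/0, /*UnitLength=*/100, /*ParseEnd=*/106};
    std::printf("next table at offset %llu\n",
                (unsigned long long)nextTableOffset(T, 4, 1000)); // 104
    return 0;
  }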

jhenderson edited the summary of this revision.

Rebased + changed to assume the stated opcode length is always correct.

dblaikie accepted this revision. Jan 17 2020, 4:59 PM

Out of curiosity: what's your broader goal with this work? (it'll help understand what's in-scope and out of scope, and better understand the framing when reviewing changes)

There are two parts to this:

  1. Making it easier for consumers to continue and try to do something with slightly bad output. One of the problems with using the unrecoverable errors is that it prevents people from even trying to iterate over later tables.
  2. (This one's not specific to this patch, but some other patches I've been doing in this area are related.) We have some local code that uses the DebugInfo library to read a line table. For this code to be sound, we need to make sure the line table makes sense. The parser in its current state doesn't pick up on a number of bad situations, so we added some local patches to detect these other errors (local because we didn't have time at that point to try to push them into the open source). We'd now like to get these into the open-source library, to avoid merge conflicts.

Ah, OK - thanks for the context! (out of curiosity, though not necessary, who is "we"?)

So you'd like/are working on making libDebugInfoDWARF more pedantic/precise/detect more errors? Specifically only for the line table? Do you have any interest in adding fuzz testing for this, or are your needs less severe than would justify fuzzing?

Rebased + changed to assume the stated opcode length is always correct.

Looks good - thanks!

llvm/lib/DebugInfo/DWARF/DWARFDebugLine.cpp
687–699

Might be worth finding some clearer words here - this mentions 3 lengths ("stated", "parsed", and "claimed") - I see "claimed" is the same as "stated", so at least collapsing those two to use the same word would help clarify this. But in theory all the lengths are parsed, so "parsed" isn't completely unambiguous. Not sure what the perfect words would be here by any means.

This revision is now accepted and ready to land. Jan 17 2020, 4:59 PM

Out of curiosity: what's your broader goal with this work? (it'll help understand what's in-scope and out of scope, and better understand the framing when reviewing changes)

There are two parts to this:

  1. Making it easier for consumers to continue and try to do something with slightly bad output. One of the problems with using the unrecoverable errors is that it prevents people from even trying to iterate over later tables.
  2. (This one's not specific to this patch, but some other patches I've been doing in this area are related.) We have some local code that uses the DebugInfo library to read a line table. For this code to be sound, we need to make sure the line table makes sense. The parser in its current state doesn't pick up on a number of bad situations, so we added some local patches to detect these other errors (local because we didn't have time at that point to try to push them into the open source). We'd now like to get these into the open-source library, to avoid merge conflicts.

Ah, OK - thanks for the context! (out of curiosity, though not necessary, who is "we"?)

"We" is Sony, here. James is part of our binutils/linker team. We have local linker patches to edit the line table in response to GC'ing functions. Also doing other fixups to alert the debugger to GC'd functions, but the part where we actually parse something is for the line table.

So you'd like/are working on making libDebugInfoDWARF more pedantic/precise/detect more errors? Specifically only for the line table? Do you have any interest in adding fuzz testing for this, or are your needs less severe than would justify fuzzing?

Mostly we care about dealing with a possibly malformed/corrupt line-number program for one CU, and soldiering on to the next one. The middle of a user's edit/build/debug cycle is not really the optimal time to abort and whine about a malformed lump of debug info.
I don't know if the linker team has contemplated fuzzing in this area.

jhenderson marked an inline comment as done. Jan 27 2020, 2:28 AM

Sorry for the delayed response. Last week was pretty crazy, so I didn't get a chance to continue with this.

In D72155#1827856, @dblaikie wrote: (out of curiosity, though not necessary, who is "we"?)

"We" is Sony, here. James is part of our binutils/linker team. We have local linker patches to edit the line table in response to GC'ing functions. Also doing other fixups to alert the debugger to GC'd functions, but the part where we actually parse something is for the line table.

Thanks for clarifying @probinson!

So you'd like/are working on making libDebugInfoDWARF more pedantic/precise/detect more errors? Specifically only for the line table? Do you have any interest in adding fuzz testing for this, or are your needs less severe than would justify fuzzing?

Mostly we care about dealing with a possibly malformed/corrupt line-number program for one CU, and soldiering on to the next one. The middle of a user's edit/build/debug cycle is not really the optimal time to abort and whine about a malformed lump of debug info.
I don't know if the linker team has contemplated fuzzing in this area.

Like @probinson said, much of our work has been focused on the line table, because we mess about with it in relation to GC-sections work in the linker, hence my focus on that area. We need to detect dodgy output so that we don't do bad things when manipulating the line tables, and soldiering on if we do find bad things is better than failing the link completely. For other DWARF sections, we currently have other strategies which don't require parsing the sections, so we're not so interested in those areas, at least within our team, at this point (although making llvm-dwarfdump and llvm-symbolizer more robust/useful on bad inputs etc. is always beneficial, just not such a high priority). Fuzzing might eventually be useful, but we're mostly interested in the low-hanging fruit shown by some older testing we had, so we haven't looked at it at this point.

llvm/lib/DebugInfo/DWARF/DWARFDebugLine.cpp
687–699

I'll change this to something better. How about "Make sure the length as recorded in the table and the standard length for the opcode match. If they don't, continue from the end as claimed by the table"?

Make comment clearer

This revision was automatically updated to reflect the committed changes.
dblaikie added inline comments. Jan 28 2020, 11:45 AM
llvm/lib/DebugInfo/DWARF/DWARFDebugLine.cpp
687–699

Abstract concern: If the table length is too short for the standard length, that would be a problem - the thing would get truncated. The other way around seems relatively benign (there's some excess padding; perhaps the producer had some reason to structure it that way). If we were trying to make this thing maximally general, I'd imagine "too short" would be a warning/error, but "too long" would at best be a linter/verifier issue, not the same error - which would make the two relatively indistinguishable (at an API level, if you were trying to decide what things to show to the user, etc.).

But I doubt this'll come up in practice enough to matter/need that sort of nuance anyway. *shrug*

jhenderson marked an inline comment as done. Jan 29 2020, 1:13 AM
jhenderson added inline comments.
llvm/lib/DebugInfo/DWARF/DWARFDebugLine.cpp
687–699

If the table length is too short for the standard length, that would be a problem - the thing would get truncated.

Actually, this change means the parser reads the data the standard expects, then goes back and continues from the length as recorded in the table. This means that we don't truncate the opcode (how would we interpret it otherwise?) whilst still (loosely) respecting the length. Of course, chances are that either the operand is bogus or the following opcodes will be incorrect, but in all likelihood at least one will be right.

But I doubt this'll come up in practice enough to matter/need that sort of nuance anyway. *shrug*

That's probably true!

dblaikie added inline comments.
llvm/lib/DebugInfo/DWARF/DWARFDebugLine.cpp
687–699

If the table length is too short for the standard length, that would be a problem - the thing would get truncated.

Actually, this change means the parser reads the data the standard expects, then goes back and continues from the length as recorded in the table.

I don't think that's good behavior - any length read should, in my opinion, restrict what's read after that to only that length.

This means that we don't truncate the opcode (how would we interpret it otherwise?)

The same as we would if the file ended early & there weren't enough bytes to read, I think.

As per the design discussion on llvm-dev (DWARF debug line error handling changes) - perhaps these sorts of things could benefit from the ability to create a more constrained DWARFDataExtractor (with a shorter length, specified by some provided length prefix such as contribution lengths, etc.).

(@labath - Pavel, for another use case for a DWARFDataExtractor with a constrained length - once that DWARFDataExtractor feature is in place, could one of you (Pavel or James) revisit this patch & port it to such a tool?)
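
For illustration of the constrained-extractor idea, here is a sketch only, using an invented Reader class rather than the real DWARFDataExtractor API: slicing the reader to the stated length makes any read past that length fail in the same way as a truncated input.

  #include <algorithm>
  #include <cstdint>
  #include <cstdio>
  #include <optional>
  #include <vector>

  // A tiny byte reader that can hand out a second reader limited to the next
  // N bytes, so reads beyond a stated length fail just as they would at the
  // end of the section.
  class Reader {
    std::vector<uint8_t> Data;
    size_t Pos = 0;

  public:
    explicit Reader(std::vector<uint8_t> D) : Data(std::move(D)) {}
    std::optional<uint8_t> getU8() {
      if (Pos >= Data.size())
        return std::nullopt; // behaves like "the file ended early"
      return Data[Pos++];
    }
    // Constrained view over the next Length bytes (clamped to what exists).
    Reader slice(uint64_t Length) const {
      size_t End = std::min(Data.size(), Pos + static_cast<size_t>(Length));
      return Reader(std::vector<uint8_t>(Data.begin() + Pos, Data.begin() + End));
    }
  };

  int main() {
    Reader Section({0x02, 0xaa, 0xbb, 0xcc});    // stated length 2, then data
    uint64_t StatedLength = *Section.getU8();    // pretend this is a ULEB128
    Reader Opcode = Section.slice(StatedLength); // can only see 0xaa and 0xbb
    Opcode.getU8();
    Opcode.getU8();
    if (!Opcode.getU8())
      std::puts("read past the stated length behaves like a truncated input");
    return 0;
  }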