This is an archive of the discontinued LLVM Phabricator instance.

[Debugify] Export per-pass debug info loss statistics
ClosedPublic

Authored by vsk on Jul 5 2018, 5:12 PM.

Details

Summary

Add a -debugify-export option to opt. This exports per-pass debugify
loss statistics to a file in CSV format.

Here is sample output from running opt -O2 on the amalgamated sqlite3 sources, exported from Numbers:

[attachment: per-pass loss statistics, exported from Numbers as a PDF]
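As a rough illustration of how the exported CSV could be consumed downstream, here is a minimal Python sketch. Note that the sample data and column names below are assumptions for illustration only, not the actual schema emitted by -debugify-export:

```python
import csv
from io import StringIO

# Hypothetical sample of exported data; the real column names and pass
# names come from -debugify-export and may differ.
sample = """pass,missing-sources,missing-locations
Dead Store Elimination,4,17
Combine redundant instructions,0,213
"""

# Rank passes by how many debug locations they dropped.
rows = list(csv.DictReader(StringIO(sample)))
worst = sorted(rows, key=lambda r: int(r["missing-locations"]), reverse=True)
for r in worst:
    print(r["pass"], r["missing-locations"])
```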

Diff Detail

Event Timeline

vsk created this revision.Jul 5 2018, 5:12 PM

This is definitely going to be very useful! It's not very large, but generally I wonder if llvm-developer-centric features like this should be made #ifndef NDEBUG-only to not weigh down clang releases' binary size?

vsk added a comment.Jul 5 2018, 5:46 PM

> This is definitely going to be very useful! It's not very large, but generally I wonder if llvm-developer-centric features like this should be made #ifndef NDEBUG-only to not weigh down clang releases' binary size?

I think we'd get much more space savings per unit of effort by not shipping these 5 binaries (over 100 MB): http://lists.llvm.org/pipermail/llvm-dev/2018-May/123310.html. I'll get around to that one of these days... :). I'd rather not pepper NDEBUG around opt for more marginal savings.

This is really nice. I agree it will be really useful, albeit with a minor concern about false positives in the stats themselves due to DCE transforms like what we ran into in https://llvm.org/PR37942.

On some of the codebases I've tried, the warnings can be a bit noisy (and make attempting any form of automatic testcase reduction based on their presence very difficult). For example, in my testing with this patch I can see that Dead Store Elimination is generating some missing values and locations in a few testcases. In every one I've looked at so far, these numbers relate to eliminated dead stores :). Similarly, we don't know how much of the large instcombine number in the above PDF is down to DCE and how much to genuine DI loss. We've seen cases of both, but I'm not sure of the ratio. Maybe a compelling argument for taking some approach along the lines of the suggested one of attaching undef dbg.values during DCE?

tools/opt/Debugify.cpp
402

prefix this with something like "could not open file: "?

vsk added a comment.Jul 6 2018, 11:07 AM

At a high level, is it fine to use CSV here? I've found it really convenient because Google Sheets, Numbers, Python scripts, etc. can ingest the data easily.

> This is really nice. I agree it will be really useful, albeit with a minor concern about false positives in the stats themselves due to DCE transforms like what we ran into in https://llvm.org/PR37942.

Taken individually, these reports won't be very useful. They give us a sense of which mid-level passes drop the most debug info, but the results aren't particularly surprising. I think the real signal will come from monitoring loss stats over time on fixed inputs. That should tell us which areas of the compiler to focus on, and which areas are improving or regressing.

> On some of the codebases I've tried, the warnings can be a bit noisy (and make attempting any form of automatic testcase reduction based on their presence very difficult). For example, in my testing with this patch I can see that Dead Store Elimination is generating some missing values and locations in a few testcases. In every one I've looked at so far, these numbers relate to eliminated dead stores :). Similarly, we don't know how much of the large instcombine number in the above PDF is down to DCE and how much to genuine DI loss. We've seen cases of both, but I'm not sure of the ratio.

Right, each issue needs to be looked at individually. Even in cases where a pass is doing straight DCE, it's sometimes possible to preserve debug values by rewriting instruction effects as DIExpressions. The exported stats won't precisely signal how many bugs there are to fix.

> Maybe a compelling argument for taking some approach along the lines of the suggested one of attaching undef dbg.values during DCE?

I'm not sure this is the right tradeoff between improved stats tracking and compile-time overhead.

tools/opt/Debugify.cpp
402

Right, will do.

In D49003#1154631, @vsk wrote:

> At a high level, is it fine to use CSV here? I've found it really convenient because Google Sheets, Numbers, Python scripts, etc. can ingest the data easily.

It works for me personally as we have plenty of tools and scripts internally that can easily operate on CSVs for tracking and graphing. If the plan is to monitor loss stats over time, would we want to integrate it into LNT or similar for providing continuous results on an upstream bot? I'm not sure what works best for that use-case if that's the direction we want to go in.

> Maybe a compelling argument for taking some approach along the lines of the suggested one of attaching undef dbg.values during DCE?

> I'm not sure this is the right tradeoff between improved stats tracking and compile-time overhead.

I agree, although I am still concerned that we lose a lot of the value of debugify tracking through false positives. If anything, it would have to be behind a flag, but that comes with its own set of tradeoffs in terms of the additional code complexity it would create and the fact that we'd be testing different code paths compared to what our users are actually using, which is never ideal.
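Regarding continuous tracking (LNT or an upstream bot), one low-tech option in the meantime is to diff two exported snapshots and flag passes whose loss numbers grew. A sketch, assuming hypothetical per-pass columns ("pass", "missing-locations") that may not match the real CSV schema:

```python
import csv

def load(path):
    # Map pass name -> missing debug locations (column names are assumed).
    with open(path, newline="") as f:
        return {row["pass"]: int(row["missing-locations"])
                for row in csv.DictReader(f)}

def regressions(baseline, current):
    # Passes whose debug-location loss grew relative to the baseline run.
    return {p: (baseline.get(p, 0), n)
            for p, n in current.items()
            if n > baseline.get(p, 0)}

# Example with inline data standing in for two exported snapshots:
old = {"Dead Store Elimination": 17, "SROA": 0}
new = {"Dead Store Elimination": 25, "SROA": 0}
print(regressions(old, new))  # {'Dead Store Elimination': (17, 25)}
```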

vsk added a subscriber: tyb0807.Jul 12 2018, 10:39 AM
gbedwell accepted this revision.Jul 18 2018, 3:44 AM

Modulo my nit above about the error message text, this LGTM.

This revision is now accepted and ready to land.Jul 18 2018, 3:44 AM
This revision was automatically updated to reflect the committed changes.