Differential D49003: [Debugify] Export per-pass debug info loss statistics
Authored by vsk on Jul 5 2018, 5:12 PM.

Add a -debugify-export option to opt. This exports per-pass debugify loss statistics to a file in CSV format.
Here is sample output from running opt -O2 on the amalgamated sqlite3 sources, exported from Numbers (attached as a PDF).
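To make the intended workflow concrete, here is a rough sketch of how the export could be driven and inspected from a script. The combination with the existing -debugify-each mode, the CSV column layout, and the input module path are all assumptions for illustration, not part of this patch's documented interface:

```python
import csv
import subprocess

# Hypothetical input module; any bitcode or IR file would do.
INPUT = "sqlite3.bc"
EXPORT = "debugify-stats.csv"

# Assumed invocation: apply synthetic debug info before each pass in the -O2
# pipeline and export the per-pass loss statistics to a CSV file.
subprocess.run(
    ["opt", "-O2", "-debugify-each", "-debugify-export", EXPORT,
     "-disable-output", INPUT],
    check=True,
)

# The header layout is an assumption: pass name first, then numeric loss
# statistics. Sort by the first numeric column to surface the worst offenders.
with open(EXPORT, newline="") as f:
    header, *rows = list(csv.reader(f))

rows.sort(key=lambda r: float(r[1]), reverse=True)
print(",".join(header))
for row in rows[:10]:
    print(",".join(row))
```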
This is definitely going to be very useful! It's not very large, but generally I wonder if llvm-developer-centric features like this should be made #ifndef NDEBUG-only, to avoid weighing down clang releases' binary size?

I think we'd get much more space savings per unit of effort by not shipping these 5 binaries (over 100 MB): http://lists.llvm.org/pipermail/llvm-dev/2018-May/123310.html. I'll get around to that one of these days... :). I'd rather not pepper NDEBUG around opt for more marginal savings.

This is really nice. I agree it will be really useful, albeit with a minor concern about false positives in the stats themselves due to DCE transforms, like what we ran into in https://llvm.org/PR37942. On some of the codebases I've tried, the warnings can be a bit noisy (which makes any form of automatic testcase reduction based on their presence very difficult). For example, in my testing with this patch I can see that Dead Store Elimination generates some missing values and locations in a few testcases; in every one I've looked at so far, these numbers relate to eliminated dead stores :). Similarly, we don't know how much of the large instcombine number in the above PDF is down to DCE and how much is down to genuine DI loss. We've seen cases of both, but I'm not sure of the ratio. Maybe that's a compelling argument for taking some approach along the lines of the suggested one of attaching undef dbg.values during DCE?
At a high level, is it fine to use CSV here? I've found it really convenient because Google Sheets, Numbers, Python scripts, etc. can ingest the data easily. Taken individually, these reports won't be very useful: they give us a sense of which mid-level passes drop the most debug info, but the results aren't particularly surprising. I think the real signal will come from monitoring loss stats over time on fixed inputs. That should tell us which areas of the compiler to focus on, and which areas are improving or regressing.
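As a rough illustration of the "monitor over time" idea (not part of this patch), a small script could diff two exports produced from the same fixed input at two compiler revisions. The file names, and the assumption that the pass name is the first column followed by numeric counts, are mine:

```python
import csv
from collections import defaultdict

def load(path):
    """Map pass name -> accumulated count from the first numeric column.
    A pass may appear more than once when it runs multiple times in the
    pipeline, so its rows are summed."""
    totals = defaultdict(float)
    with open(path, newline="") as f:
        reader = csv.reader(f)
        next(reader)  # skip the header row
        for name, first_stat, *_rest in reader:
            totals[name] += float(first_stat)
    return totals

# Hypothetical exports from the same fixed input at a baseline and a current revision.
baseline = load("stats-baseline.csv")
current = load("stats-current.csv")

# Report per-pass changes so improvements and regressions stand out.
for name in sorted(set(baseline) | set(current)):
    old, new = baseline.get(name, 0.0), current.get(name, 0.0)
    if new != old:
        direction = "regressed" if new > old else "improved"
        print(f"{name}: {old:g} -> {new:g} ({direction})")
```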
Right, each issue needs to be looked at individually. Even in cases where a pass is doing straight DCE, it's sometimes possible to preserve debug values by rewriting instruction effects as DIExpressions. The exported stats won't precisely signal how many bugs there are to fix.
I'm not sure this is the right tradeoff between improved stats tracking and compile-time overhead.
CSV works for me personally, as we have plenty of tools and scripts internally that can easily operate on CSVs for tracking and graphing. If the plan is to monitor loss stats over time, would we want to integrate it into LNT or similar to provide continuous results on an upstream bot? I'm not sure what works best for that use case, if that's the direction we want to go in.
I agree, although I am still concerned that we lose a lot of the value of debugify tracking through false positives. If anything, it would have to be behind a flag, but that comes with its own tradeoffs: the additional code complexity it would create, and the fact that we'd be testing different code paths from what our users actually use, which is never ideal.
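On the bot question raised above: short of full LNT integration, a bot could simply accumulate a history file that is easy to graph later. This is purely a sketch; the file names, the idea of summing the first numeric column across passes, and the git-based revision lookup are all assumptions:

```python
import csv
import datetime
import os
import subprocess

HISTORY = "debugify-history.csv"
EXPORT = "debugify-stats.csv"

# Total the first numeric column across all passes in one export.
with open(EXPORT, newline="") as f:
    reader = csv.reader(f)
    next(reader)  # skip the header row
    total = sum(float(row[1]) for row in reader)

# Identify the compiler revision being tested (assumes a git checkout).
rev = subprocess.run(["git", "rev-parse", "--short", "HEAD"],
                     capture_output=True, text=True, check=True).stdout.strip()

# Append one row per run so the trend can be graphed later.
new_file = not os.path.exists(HISTORY)
with open(HISTORY, "a", newline="") as f:
    writer = csv.writer(f)
    if new_file:
        writer.writerow(["date", "revision", "total missing"])
    writer.writerow([datetime.date.today().isoformat(), rev, total])
```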
Inline comment on the diff: prefix the error message with something like "could not open file: "?