Skip memops if the total value profiled count is 0: we can't correctly scale up the counts, and there is no point in proceeding anyway.
lib/Transforms/Instrumentation/IndirectCallPromotion.cpp:877
Or change it to `if (TotalCount < MemOpCountThreshold) return false;` and move this check before the actual count check.
lib/Transforms/Instrumentation/IndirectCallPromotion.cpp:877
I could move this up, but note that the counts will be scaled by ActualCount/SavedTotalCount (where SavedTotalCount == TotalCount at this point). So it is possible for TotalCount to be below MemOpCountThreshold while ActualCount, and hence the scaled counts, are not. That check therefore seems overly conservative (and would have an effect beyond just preventing divides by zero).
lib/Transforms/Instrumentation/IndirectCallPromotion.cpp:877
The scaling actually scales down in most cases -- this is because the BB's count will be updated after inlining, while the value profiling total count will not. In other words, it won't cause conservative behavior for hot sites.
lib/Transforms/Instrumentation/IndirectCallPromotion.cpp:877
I checked in one of our large apps, and there are over 44K memory intrinsics where we scale up the counts. So at the least, I would like to consider that change separately from this bugfix.