This is an archive of the discontinued LLVM Phabricator instance.

[llvm-profgen][CSSPGO] On-demand function size computation for preinliner
ClosedPublic

Authored by wlei on Sep 24 2021, 7:16 PM.

Details

Summary

Similar to https://reviews.llvm.org/D110465, we can compute function size on-demand for the functions that are hit by samples.

Here we leverage the raw range samples' addresses to compute the set of functions hit by samples. Then BinarySizeContextTracker only works on those function ranges for size computation.
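The overall idea can be sketched roughly as follows. This is an illustrative reconstruction, not the actual llvm-profgen API: FuncRange, findFuncRange, and sampledFunctionStarts are assumed names used only to show how sampled ranges map to the subset of functions that need size computation.

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <set>
#include <utility>
#include <vector>

// Hypothetical function layout record: [StartOffset, EndOffset) in the binary.
struct FuncRange {
  uint64_t StartOffset; // first instruction offset of the function
  uint64_t EndOffset;   // offset one past the function's last instruction
};

// Functions keyed by start offset, assumed non-overlapping and sorted.
using FuncRangeMap = std::map<uint64_t, FuncRange>;

// Find the function range containing Offset, or nullptr if none does.
const FuncRange *findFuncRange(const FuncRangeMap &Funcs, uint64_t Offset) {
  auto It = Funcs.upper_bound(Offset);
  if (It == Funcs.begin())
    return nullptr;
  --It;
  if (Offset >= It->second.StartOffset && Offset < It->second.EndOffset)
    return &It->second;
  return nullptr;
}

// Collect the unique functions hit by any raw range sample, so that size is
// computed once per sampled function instead of for the whole binary.
std::set<uint64_t> sampledFunctionStarts(
    const FuncRangeMap &Funcs,
    const std::vector<std::pair<uint64_t, uint64_t>> &RangeSamples) {
  std::set<uint64_t> Hit;
  for (const auto &S : RangeSamples)
    if (const FuncRange *FR = findFuncRange(Funcs, S.first))
      Hit.insert(FR->StartOffset);
  return Hit;
}
```

Deduplicating through a set is what makes the computation on-demand: two LBR ranges landing in the same function trigger only one size computation for it.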

Diff Detail

Event Timeline

wlei created this revision.Sep 24 2021, 7:16 PM
wlei requested review of this revision.Sep 24 2021, 7:16 PM
Herald added a project: Restricted Project. Sep 24 2021, 7:16 PM
wlei edited the summary of this revision. (Show Details)Sep 24 2021, 7:23 PM
wlei added reviewers: hoy, wenlei, wmi.
hoy added inline comments.Sep 26 2021, 8:59 PM
llvm/tools/llvm-profgen/ProfileGenerator.cpp
430

Can this be done when preinliner is enabled?

447

Does this include ranges not executed? Those ranges should also be counted for size calculation.

wenlei added inline comments.Sep 26 2021, 9:00 PM
llvm/tools/llvm-profgen/ProfileGenerator.cpp
443

Some comments here would be helpful, specifically what's in the map and what computeFuncSizeForRange does. I think computeInlinedContextSizeForRange is probably a better name for computeFuncSizeForRange. (Saying this because if the map contains the start and end offsets for a function, then computing size from a given start and end offset seems weird.)

llvm/tools/llvm-profgen/ProfileGenerator.h
231

nit: computeSizeForProfiledFunctions

llvm/tools/llvm-profgen/ProfiledBinary.cpp
557

Is EndAddr inclusive for the function end? What if we have a 1-byte instr at EndAddr?

wenlei added inline comments.Sep 26 2021, 9:03 PM
llvm/tools/llvm-profgen/ProfileGenerator.cpp
430

See the ctor of ProfiledBinary: TrackFuncContextSize = EnableCSPreInliner && UseContextCostForPreInliner.

447

Yes, non-executed ranges are included by findFuncOffsetRange - that gives the function start and end for the containing top-level inliner.

hoy added inline comments.Sep 26 2021, 9:58 PM
llvm/tools/llvm-profgen/ProfileGenerator.cpp
430

Sounds good, thanks for pointing it out.

447

I see. This converts each LBR range to its corresponding function range, so it should cover the whole function.

llvm/tools/llvm-profgen/ProfiledBinary.cpp
557

EndAddr should be the start of the next function, as computed in ProfiledBinary::dissassembleSymbol.

wlei updated this revision to Diff 375326.Sep 27 2021, 10:23 AM

Update :

  1. Rename and add comments
  2. Refactor IP.Address related code to avoid redundant "offset -> address -> offset" conversions
wlei added inline comments.Sep 27 2021, 10:25 AM
llvm/tools/llvm-profgen/ProfileGenerator.cpp
443

comments added.

computeInlinedContextSizeForRange

That's a better name, thanks for the suggestion!

llvm/tools/llvm-profgen/ProfiledBinary.cpp
557

Yeah, in dissassembleSymbol the logic is like

uint64_t Offset = StartOffset;
while (Offset < EndOffset) {
  ...
  Offset += Size;
}

so it guarantees that the last offset in this function (the last IP.Address) won't be EndOffset.

hoy accepted this revision.Sep 27 2021, 10:37 AM

lgtm

llvm/tools/llvm-profgen/ProfileGenerator.cpp
446

nit: range -> ranges

llvm/tools/llvm-profgen/ProfiledBinary.cpp
555

nit: "Invalid function start" -> "Invalid start"?

This revision is now accepted and ready to land.Sep 27 2021, 10:37 AM
wlei updated this revision to Diff 375355.Sep 27 2021, 12:04 PM

refine comments

wlei updated this revision to Diff 375356.Sep 27 2021, 12:07 PM

Updating D110466: [llvm-profgen][CSSPGO] On-demand function size computation for preinliner

wenlei added inline comments.Sep 27 2021, 12:21 PM
llvm/tools/llvm-profgen/ProfiledBinary.cpp
553–554

Is this the same as addressIsCode? Or perhaps we need an addrOffsetIsCode?

557

ok, thanks for clarification. Then the name EndOffset could be confusing. We could do StartOffset + Size (for the map in ProfiledBinary too) or rename them to avoid confusion.

wlei added inline comments.Sep 27 2021, 1:55 PM
llvm/tools/llvm-profgen/ProfiledBinary.cpp
553–554

The input of addressIsCode is an address. I guess the confusing part is the name CodeAddrs, so I changed it to codeAddrOffsets.

557

Yeah, I changed some code in dissassembleSymbol to ensure that EndOffset is the last valid offset of the function, and the previous EndOffset is renamed to NextStartOffset.

wlei updated this revision to Diff 375398.Sep 27 2021, 1:55 PM

Updating D110466: [llvm-profgen][CSSPGO] On-demand function size computation for preinliner

wenlei added inline comments.Sep 27 2021, 2:01 PM
llvm/tools/llvm-profgen/ProfiledBinary.cpp
553–554

What I meant is we could have something like this:

bool addrOffsetIsCode(uint64_t Offset) const {
  return Offset2LocStackMap.find(Offset) != Offset2LocStackMap.end();
}
which is similar to addressIsCode, except that we use offset now.
bool addressIsCode(uint64_t Address) const {
  uint64_t Offset = virtualAddrToOffset(Address);
  return Offset2LocStackMap.find(Offset) != Offset2LocStackMap.end();
}

I think it's cleaner to use an API instead of dealing with internals like codeAddrOffsets directly when possible. Also, Offset2LocStackMap is an unordered_map while codeAddrOffsets is a sorted vector, so lookup via Offset2LocStackMap is faster.

wlei added inline comments.Sep 27 2021, 4:06 PM
llvm/tools/llvm-profgen/ProfiledBinary.cpp
553–554

I see. Here I used uint32_t Index = getIndexForOffset(StartOffset); because I need to use the Index to iterate all the invalid instructions which are stored in codeAddrOffsets. Offset2LocStackMap doesn't give the Index.

Index = getIndexForOffset(StartOffset);
while (codeAddrOffsets[Index] <= End) {
  // use codeAddrOffsets[Index] to do something.
  Index++;
}

To avoid using codeAddrOffsets directly, we previously used IP, but IP only works on addresses, and this introduces a lot of redundancy, i.e. we need to convert offset -> address and then convert the address back -> offset.
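For illustration, a self-contained sketch of this index-based iteration. The function name sumRangeSize and the gap-based size computation (instruction size taken as the distance to the next valid offset) are hypothetical, not the actual llvm-profgen code; the point is that everything stays in offset space, with no offset/address round trips.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Sum instruction sizes over [StartOffset, EndOffset] by walking the sorted
// vector of valid instruction-start offsets directly by index.
uint64_t sumRangeSize(const std::vector<uint64_t> &CodeAddrOffsets,
                      uint64_t StartOffset, uint64_t EndOffset) {
  // Index of the first valid instruction at or after StartOffset.
  uint32_t Index = static_cast<uint32_t>(
      std::lower_bound(CodeAddrOffsets.begin(), CodeAddrOffsets.end(),
                       StartOffset) -
      CodeAddrOffsets.begin());
  uint64_t Size = 0;
  // Walk consecutive valid offsets up to and including EndOffset; each
  // instruction's size is the gap to the next valid offset.
  while (Index + 1 < CodeAddrOffsets.size() &&
         CodeAddrOffsets[Index] <= EndOffset) {
    Size += CodeAddrOffsets[Index + 1] - CodeAddrOffsets[Index];
    ++Index;
  }
  return Size;
}
```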

wenlei added inline comments.Sep 27 2021, 4:37 PM
llvm/tools/llvm-profgen/ProfiledBinary.cpp
553–554

Ok, I missed the use of that Index later. However, why do we care about invalid instructions? I assume invalid instructions won't be in the middle of a function; for those from padding at the end, should we ignore them when computing size?

wlei added inline comments.Sep 27 2021, 4:59 PM
llvm/tools/llvm-profgen/ProfiledBinary.cpp
553–554

Yeah, we actually only care about valid instructions here.

uint32_t Index = getIndexForOffset(StartOffset);
uint64_t Offset = CodeAddrOffsets[Index];

If StartOffset is an invalid offset (we give a warning), it will round up to the next valid Index. The other instructions are all obtained from CodeAddrOffsets by increasing the index, so they're also valid.
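The rounding behavior described here can be sketched with a lower_bound over the sorted offset vector. This is an illustrative reconstruction of what getIndexForOffset might look like, not the actual implementation: an exact hit returns the offset's own index, while an invalid offset rounds up to the index of the next valid instruction.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Return the index of the first valid instruction offset that is >= Offset.
// If Offset is itself a valid instruction start, that is its own index;
// otherwise the result rounds up to the next valid instruction.
uint32_t getIndexForOffset(const std::vector<uint64_t> &CodeAddrOffsets,
                           uint64_t Offset) {
  auto It = std::lower_bound(CodeAddrOffsets.begin(), CodeAddrOffsets.end(),
                             Offset);
  return static_cast<uint32_t>(It - CodeAddrOffsets.begin());
}
```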

wenlei accepted this revision.Sep 27 2021, 5:12 PM

lgtm, thanks.

llvm/tools/llvm-profgen/ProfiledBinary.cpp
553–554

I see. Though with the current callers, StartOffset should always be a valid offset, right? The check and warning are more about making the function robust for other use cases.

wlei added inline comments.Sep 27 2021, 5:37 PM
llvm/tools/llvm-profgen/ProfiledBinary.cpp
553–554

Our callers' StartOffset is not from CodeAddrOffsets.

uint64_t Offset = StartOffset;
while (...) {
  bool Disassembled = DisAsm->getInstruction...
  if (Disassembled) {
    // CodeAddrOffsets[Offset] = ..
  }
}

FuncStartOffsetMap.emplace(StartOffset, ...);

If Disassembled is false for StartOffset, it will be an invalid start. Perhaps we can see if the warning gets triggered in the future.

This revision was landed with ongoing or failed builds.Sep 28 2021, 9:10 AM
This revision was automatically updated to reflect the committed changes.