This is an archive of the discontinued LLVM Phabricator instance.

[MLInliner] Support training that doesn't require partial rewards
ClosedPublic

Authored by mtrofin on Aug 24 2020, 11:42 AM.

Details

Summary

If we use training algorithms that don't need partial rewards, we don't
need to worry about an ir2native model. In that case, training logs
won't contain a 'delta_size' feature either (since that's the partial
reward).
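The idea can be sketched as follows. This is a hypothetical illustration, not LLVM's actual API: the names `make_log_entry` and `use_partial_rewards` are invented here to show how a training-log record would include the 'delta_size' feature (the partial reward, estimated by an ir2native model) only when the training algorithm needs it.

```python
def make_log_entry(features, use_partial_rewards, delta_size=None):
    """Build one training-log record (illustrative sketch).

    'delta_size' is the partial reward: the native-size delta a trained
    ir2native model would predict for an inlining decision.
    """
    entry = dict(features)
    if use_partial_rewards:
        # Partial-reward training requires an ir2native size estimate.
        assert delta_size is not None, "partial rewards need an ir2native model"
        entry["delta_size"] = delta_size
    return entry

# With partial rewards: the log carries 'delta_size'.
with_reward = make_log_entry({"callee_basic_block_count": 4},
                             use_partial_rewards=True, delta_size=-8)

# Without partial rewards: no ir2native model, no 'delta_size' feature.
without_reward = make_log_entry({"callee_basic_block_count": 4},
                                use_partial_rewards=False)
```

Under this sketch, `with_reward` contains a `delta_size` entry while `without_reward` does not, mirroring the log difference the patch description mentions.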

Diff Detail

Event Timeline

mtrofin created this revision. Aug 24 2020, 11:42 AM
Herald added a project: Restricted Project. Aug 24 2020, 11:42 AM
mtrofin requested review of this revision. Aug 24 2020, 11:42 AM
yundiqian accepted this revision. Aug 24 2020, 4:21 PM
yundiqian added inline comments.
llvm/lib/Analysis/DevelopmentModeInlineAdvisor.cpp
458

I'm not familiar with the UI. Is the '>>' expected?

This revision is now accepted and ready to land. Aug 24 2020, 4:21 PM
MaskRay added inline comments.
llvm/lib/Analysis/DevelopmentModeInlineAdvisor.cpp
458

It is expected. If Phabricator detects indentation changes, it will print '>>'.

llvm/test/Transforms/Inline/ML/development-training-log.ll
4

--check-prefixes

mtrofin updated this revision to Diff 287533. Aug 24 2020, 5:35 PM
mtrofin marked 3 inline comments as done.

feedback

This revision was landed with ongoing or failed builds. Aug 24 2020, 5:36 PM
This revision was automatically updated to reflect the committed changes.