This is an archive of the discontinued LLVM Phabricator instance.

[llvm] Development-mode InlineAdvisor
ClosedPublic

Authored by mtrofin on Jul 13 2020, 5:37 PM.

Details

Summary

This is the InlineAdvisor used in 'development' mode. It enables two
scenarios:

  • loading models via a command-line parameter, thus allowing for rapid training iteration, where models produced by one exploration phase can be used in the next without having to recompile the compiler. This trades off some compilation speed for the added flexibility.
  • collecting training logs, in the form of tensorflow.SequenceExample protobufs. We generate these as textual protobufs, which simplifies generation and testing. The protobufs may then be readily consumed by a tensorflow-based training algorithm.
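
For concreteness, a textual tensorflow.SequenceExample log entry might look roughly like the sketch below. This is a hypothetical illustration; the feature names are invented for the example, not the ones the patch actually defines:

```
# Hypothetical text-format tensorflow.SequenceExample entry (illustrative
# feature names, not the patch's actual schema).
feature_lists {
  feature_list {
    key: "callee_basic_block_count"
    value { feature { int64_list { value: 42 } } }
  }
  feature_list {
    key: "inlining_decision"
    value { feature { int64_list { value: 1 } } }
  }
}
```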

To speed up training, training logs may also be collected from the
'default' training policy. In that case, this InlineAdvisor does not
use a model.
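
The two scenarios above can be sketched as follows. This is a hypothetical illustration in C++; the names and types are invented for the sketch and are not the patch's actual API:

```cpp
// Hypothetical sketch (not the patch's actual API) of how a development-mode
// advisor could pick its behavior from two inputs: an optional path to a model
// from a previous training iteration, and an optional training-log path.
#include <cassert>
#include <optional>
#include <string>

struct DevelopmentModeConfig {
  std::optional<std::string> ModelPath;       // model from a prior RL iteration
  std::optional<std::string> TrainingLogPath; // where SequenceExamples go
};

enum class AdvisorKind { Default, ML };

// With a model, the ML advisor runs inference; without one, the default
// (heuristic) policy decides, optionally just to produce training logs.
AdvisorKind selectAdvisor(const DevelopmentModeConfig &Cfg) {
  return Cfg.ModelPath ? AdvisorKind::ML : AdvisorKind::Default;
}

// Logging is orthogonal to which advisor makes the decisions.
bool isLogging(const DevelopmentModeConfig &Cfg) {
  return Cfg.TrainingLogPath.has_value();
}
```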

RFC: http://lists.llvm.org/pipermail/llvm-dev/2020-April/140763.html

Diff Detail

Event Timeline

mtrofin created this revision. Jul 13 2020, 5:37 PM
Herald added a project: Restricted Project. Jul 13 2020, 5:37 PM
mtrofin updated this revision to Diff 278325. Jul 15 2020, 3:09 PM

Removed direct access to TF APIs, fully leveraging the TFUtils abstraction instead.

davidxl added inline comments. Jul 16 2020, 8:10 PM
llvm/lib/Analysis/DevelopmentModeInlineAdvisor.cpp
30

TrainedModel sounds misleading, as if it has already been trained. Perhaps just TrainModel or ModelInTrain? Or TrainingModel?

131

Maybe document it like this:

The advisor operates in two modes: 1) log collection, and 2) inference (during training). In the first mode, the default advisor is used, while in the second, the MLInlineAdvisor is used.

132

Naming: MLInlineAdvisorForTraining or TrainingModeInlineAdvisor

256

SavedModel --> TrainedModel or ModelBeingTrained

258

Perhaps rename DynamicModel to TrainModel, to keep the name consistent with the option.

356

can Inference and Logging both be on?

448

IsDoingInference

mtrofin updated this revision to Diff 278801. Jul 17 2020, 9:18 AM
mtrofin marked 13 inline comments as done.

feedback

mtrofin marked 2 inline comments as not done. Jul 17 2020, 9:20 AM
mtrofin added inline comments.
llvm/lib/Analysis/DevelopmentModeInlineAdvisor.cpp
30

It's actually correct; it is trained: this would be a model obtained from a previous iteration of the reinforcement learning training loop. We aren't mutating it here.

131

Logging may happen with either advisor, actually. To bootstrap, we use the default advisor; then, to improve the model under training, we use the MLInlineAdvisor.

I expanded the comment to further clarify.

132

We use the term 'development' everywhere else, though - e.g. the command line flag is called 'development'.

256

SavedModel is a tensorflow term. Added a link in the comment.

258

I don't want to suggest it's being trained, though. How about LoadableModelRunner? The key aspect of this ModelRunner is that the model may be loaded from the command line.

Or, to contrast with the release/AOT case (ReleaseModeModelRunner): DevelopmentModeModelRunner.

wdyt?

356

Yes. I updated the comment earlier on, so this should be clearer now, I think.

davidxl added inline comments. Jul 17 2020, 9:29 AM
llvm/lib/Analysis/DevelopmentModeInlineAdvisor.cpp
30

As a reader, I feel the name means 'fully trained' instead of being 'partially trained' in the iteration loop.

258

How about IterativeModelRunner?

mtrofin updated this revision to Diff 278845. Jul 17 2020, 10:38 AM
mtrofin marked 5 inline comments as done.

renames

llvm/lib/Analysis/DevelopmentModeInlineAdvisor.cpp
30

renamed

258

ModelUnderTrainingRunner? Similar to how I renamed the option.

davidxl accepted this revision. Jul 20 2020, 9:20 AM

lgtm

This revision is now accepted and ready to land. Jul 20 2020, 9:20 AM
This revision was automatically updated to reflect the committed changes.