This change yields an additional 2% size reduction on an internal search
binary and an additional 0.5% size reduction on Fuchsia's -Oz targets.
Diff Detail
- Repository: rG LLVM Github Monorepo
Event Timeline
llvm/include/llvm/Analysis/InlineCost.h
- Line 58: I think it'd be preferable if this were defined on the ML side, e.g. in InlineModelFeatureMaps.h. The reason is that it brings in the concept of a feature name (e.g. "sroa_savings"), which is an ML-side concept.

llvm/include/llvm/Analysis/InlineModelFeatureMaps.h
- Line 17: This wouldn't be needed once you move the changes from InlineCost.h here.

llvm/lib/Analysis/InlineCost.cpp
- Line 466: I don't think the ML aspects should intermingle with the non-ML ones.
Switched to the visitor design pattern.
Additionally, we re-use the heuristic-based cost estimate feature in the model,
which is then supplemented by the expanded cost features (a rough sketch of that
layout follows these notes). This increases stability in training.
In a later patch, we will revisit completely removing the heuristics from the ML model.
Updated the config for mock model generation.
Removed unnecessary edits to the heuristic inliner.
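A minimal sketch of that arrangement, assuming a hypothetical buildModelInput helper (this is not the actual wiring in MLInlineAdvisor.cpp): the heuristic cost estimate keeps a slot in the model's input and the expanded cost features are appended alongside it rather than replacing it.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical layout: slot 0 holds the heuristic cost estimate; the
// remaining slots hold the expanded cost features computed by the
// feature-extraction visitor. N is whatever the expanded feature count is.
template <size_t N>
std::vector<int64_t>
buildModelInput(int64_t HeuristicCostEstimate,
                const std::array<int64_t, N> &ExpandedCostFeatures) {
  std::vector<int64_t> Input;
  Input.reserve(1 + N);
  Input.push_back(HeuristicCostEstimate);
  Input.insert(Input.end(), ExpandedCostFeatures.begin(),
               ExpandedCostFeatures.end());
  return Input;
}
```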
llvm/include/llvm/Analysis/InlineAdvisor.h
- Line 40: Unrelated change - if the linter did this, just group it (and others like it) upfront, commit that as an NFC, and then rebase this one.

llvm/include/llvm/Analysis/InlineCost.h
- Line 276: I'd drop 'Array' from the name; it suggests too much implementation detail.

llvm/include/llvm/Analysis/InlineModelFeatureMaps.h
- Line 49: Aah! I see why InlineCostFeaturesArray. How about the enum were InlineCostFeatureIndex, and then the collection is InlineCostFeatures? (See the sketch after this comment block.)

llvm/lib/Analysis/CMakeLists.txt
- Line 8: Clearer to say "<TO BE UPDATED>".

llvm/lib/Analysis/InlineCost.cpp
- Line 132: Remove spurious change.
- Line 414: Formatting change - see previous comment.
- Line 425: Formatting change.
- Line 943: Where is this used, other than in onDisableLoadElimination, where it's set to 0?
- Line 944: This is counting the opportunities for SROACostSavings, right? Maybe rename to SROACostSavingOpportunities?
- Line 947: Is this really a threshold, or can it have a name that better communicates what it accumulates? (Apologies, I can't figure out right now what that may be.) Same for the two bonuses above.
- Line 1156: Formatting.
- Line 1262: Formatting.
- Line 1417: Formatting.
- Line 2155: Formatting.
- Line 2622: Formatting.
- Line 2716: Not sure - what's the value of CFA.features() if !R? It may be clearer to handle the error here and return {} or something like that. That creates less coupling between deep implementation details and their consumers; it also simplifies the state space of the values this API can produce. Better yet: return an Optional<InlineCostFeaturesArray>, because a 0-valued feature vector isn't necessarily equivalent to CFA.analyze() saying "no". Returning None in that case is a clearer API. (See the sketch after this comment block.)
- Line 2990: Formatting.

llvm/lib/Analysis/MLInlineAdvisor.cpp
- Line 46: Why?
- Line 257: Do we still want this here?
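A rough sketch of the shape those two suggestions point at (the line-49 naming comment and the line-2716 comment on the return type); the declarations below are illustrative, not the patch's actual code. The enum names the index, the collection drops 'Array' from its name, and the entry point returns an Optional so that "no result" is distinguishable from an all-zero feature vector.

```cpp
#include "llvm/ADT/Optional.h"
#include <array>
#include <cstddef>
#include <cstdint>

// Hypothetical declarations illustrating the naming suggestion.
enum class InlineCostFeatureIndex : size_t {
  SROASavings,
  SROALosses,
  LoadElimination,
  // ... the remaining expanded cost features ...
  NumberOfFeatures
};

using InlineCostFeatures =
    std::array<int64_t,
               static_cast<size_t>(InlineCostFeatureIndex::NumberOfFeatures)>;

// Returning Optional lets the analysis say "no result" explicitly instead of
// handing back a zero-filled array that callers must interpret.
llvm::Optional<InlineCostFeatures>
getInliningCostFeatures(/* call site, callee TTI, etc. */);
```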
llvm/lib/Analysis/InlineCost.cpp
- Line 943: It is meant to ensure that InlineCostFeatures::LoadElimination is only ever set once, but there is a better way to express that. It was also never set to a nonzero value in the first place, which is simply a bug.
- Line 947: These are copied from the heuristic inline visitor and used in almost the same way - there is some complexity in the other visitor that isn't replicated here. I'll keep the names the same to reflect that duplication. In the future, there should be some abstraction so that the two visitors can share these values and the logic for updating them.
- Line 2716: I wasn't returning an optional for a very specific reason: std::array cannot be efficiently moved from, so if I return an optional, we can't explicitly unpack it later without eating another copy of the array. I can swap it back now and, in MLInlineAdvisor.cpp, avoid the copy by taking a reference to the array inside the optional (see the sketch after this block). In a later patch, I want to switch the array to a vector so that we can efficiently std::move it; then returning an optional here wouldn't come with a performance hit.

llvm/lib/Analysis/MLInlineAdvisor.cpp
- Line 46: clang-format doesn't seem to understand these back-to-back macros and tries to add another level of indentation to each line, which makes it hard to read.
- Line 257: Yes - it increases stability in training. Still looking into how to completely remove the heuristics from the ML model without sacrificing training stability.
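A small sketch of the copy-avoidance described in the line-2716 reply, building on the hypothetical declarations in the previous sketch: the caller binds a reference to the array stored inside the Optional instead of unpacking it by value, so no extra copy of the std::array is made.

```cpp
// Illustrative use on the MLInlineAdvisor side; names are hypothetical.
if (llvm::Optional<InlineCostFeatures> CostFeatures =
        getInliningCostFeatures(/* call site, callee TTI, etc. */)) {
  // Reference the array inside the Optional; unpacking it by value here
  // would cost another copy of the whole std::array.
  const InlineCostFeatures &Features = *CostFeatures;
  for (size_t I = 0; I < Features.size(); ++I) {
    // ... feed Features[I] into the model runner ...
  }
}
```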
llvm/include/llvm/Analysis/InlineModelFeatureMaps.h
- Line 103: Why not leave this where it was? Because you can now populate the remainder in a loop?
- Lines 116–117: Unrelated change?

llvm/lib/Analysis/InlineCost.cpp
- Line 442: Spurious change?
- Line 947: OK, can you add a FIXME explaining this, above their names or something like that?
- Line 950: I think 'increment' would be clearer. Then Delta can have a default value of 1, too (for the few cases where that'd apply). (See the sketch after this block.)
- Line 983: Can this '3' be a const with a meaningful name (so it's easier to chase down any magic factors)?
- Line 1048: Can we replace this 2 and the 4 above with well-named const values? Same below for the 3, 2, etc.
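A brief sketch of what the line-950 and line-983/1048 suggestions could look like together, reusing the types from the earlier sketch; the constant names are invented for illustration, and 3, 2, and 4 are simply the literals flagged above, not verified factors.

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical names for the literal factors flagged in the review; the real
// semantics of these multipliers live in InlineCost.cpp.
constexpr int64_t ExampleSwitchFactor = 3;
constexpr int64_t ExampleCaseClusterFactor = 2;
constexpr int64_t ExampleJumpTableFactor = 4;

// An 'increment' helper with a default delta of 1, as suggested for line 950.
// InlineCostFeatures / InlineCostFeatureIndex are the types sketched earlier.
void increment(InlineCostFeatures &Features, InlineCostFeatureIndex Index,
               int64_t Delta = 1) {
  Features[static_cast<size_t>(Index)] += Delta;
}
```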
llvm/include/llvm/Analysis/InlineModelFeatureMaps.h
- Line 103: It's an artifact from when I was fiddling around with things - I'll move it back.
- Lines 116–117: Are you referring to the newline that is inserted here? I think it just makes the enum easier to read.

llvm/lib/Analysis/InlineCost.cpp
- Line 983: Discussing offline.
- Line 1048: OK.
Forgot to name the constants - now added.
One thing to note is the getExpectedNumberOfCompare function. Because the same
math is used in both visitors, I abstracted it into a shared helper (sketched
below) and moved the relevant comment. I just want to double-check that you
think that's OK.
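For context, a sketch of the kind of shared helper that note describes; the formula below models a switch lowered to a balanced tree of case-cluster comparisons (one compare plus one conditional branch per node) and should be treated as an approximation, not as the exact expression in InlineCost.cpp.

```cpp
#include <cstdint>

// Sketch of a helper both visitors could call instead of duplicating the
// math; treat the expression as illustrative of the balanced-tree model.
static int64_t getExpectedNumberOfCompare(int NumCaseCluster) {
  return 3 * static_cast<int64_t>(NumCaseCluster) / 2 - 1;
}
```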