This API call has recently been used with the very valid expectation that it would do something useful, but it doesn't actually query any backend information. So, remove the method and merge its functionality into getUserCost. In addition, use getCastInstrCost to get a proper cost from the backend for the instructions concerned, which compensates for the removal of the BasicTTI layer. The next step would be to use other useful API calls in getUserCost too.
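Roughly, the idea is that getUserCost can forward cast-like users to the backend's cast hook instead of the removed getOperationCost. Below is a minimal, hedged sketch of that shape; the helper name is invented and the getCastInstrCost signature is as I recall it from around this point in time, so treat the details as assumptions rather than the actual patch.

```cpp
#include "llvm/Analysis/TargetTransformInfo.h"
#include "llvm/IR/InstrTypes.h"
using namespace llvm;

// Hypothetical, simplified sketch (not the actual patch): for cast users,
// ask the target's getCastInstrCost() instead of the old generic default,
// and treat a zero backend cost as "free".
static int getCastUserCostSketch(const CastInst *CI,
                                 const TargetTransformInfo &TTI) {
  Type *Dst = CI->getType();
  Type *Src = CI->getOperand(0)->getType();
  if (TTI.getCastInstrCost(CI->getOpcode(), Dst, Src, CI) == 0)
    return TargetTransformInfo::TCC_Free;
  return TargetTransformInfo::TCC_Basic;
}
```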
llvm/test/Analysis/CostModel/X86/costmodel.ll:11 ↗ (On Diff #250167)
Changes here look wrong to me.
llvm/test/Analysis/CostModel/X86/costmodel.ll:11 ↗ (On Diff #250167)
I was suspicious, but these should now be the costs reported by the backend, so I'd like to get some clarification. I'm assuming it's instead just returning a default cost.
llvm/include/llvm/Analysis/TargetTransformInfoImpl.h:92
This code is missing now? Or at least, the IntToPtr code was copied over; the PtrToInt wasn't and has become a cast.
llvm/include/llvm/Analysis/TargetTransformInfoImpl.h:92
Ah, I was intending to merge all the casts, and so I've actually missed IntToPtr. This is almost certainly the reason for the change, so I'll have a look at the default costs of casts.
Ah, I love reminding myself why I don't like working in this area... I still can't figure out why we need TTIImpl, BasicTTIImpl, TargetLoweringInfo and then the target-specific TTIs too, and, worse, why the generic parts seem to produce different results given the same values! So the code in TargetTransformInfoImpl still needs to special-case int/ptr conversions (instead of BasicTTI), as well as sext/zext, because of the interactions with the inline cost model... I'd welcome some education on the hows and whys of these layers! Anyway, the int/ptr conversion costs are back to their original values.
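For context, here is a heavily simplified sketch of how these layers stack; the class names are from memory rather than the exact LLVM declarations, so treat it as an assumption, not documentation.

```cpp
// Heavily simplified sketch of the TTI layering being discussed; these are
// not the actual LLVM declarations.
template <typename T>
struct TargetTransformInfoImplCRTPBase {
  // Generic, IR-only default costs (TargetTransformInfoImpl.h).
};

template <typename T>
struct BasicTTIImplBase : TargetTransformInfoImplCRTPBase<T> {
  // Defaults that consult the target's TargetLoweringInfo (BasicTTIImpl.h).
};

struct X86TTIImpl : BasicTTIImplBase<X86TTIImpl> {
  // Target-specific overrides, e.g. getCastInstrCost().
};

// TargetTransformInfo itself is a value-type wrapper that type-erases
// whichever concrete *TTIImpl the target provides.
```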
You're mixing a bit too much in this patch for it to be easily reviewable. I see a couple of small NFCs which should be pulled out and landed, and then the patch rebased: 1) pulling out the static_cast in getIntrinsicCost, and 2) converting the if chain to a switch.
llvm/include/llvm/CodeGen/BasicTTIImpl.h:720–729
You appear to have dropped the equal-size bitcast case.
Thanks @reames. I extracted the couple of NFCs that you suggested. This patch also sinks the int/ptr conversions into the default implementation of TargetTransformInfo so that the free costs can be calculated when given only the DataLayout.
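For reference, the kind of DataLayout-only check being sunk looks roughly like the long-standing default rule for int/ptr conversions; the sketch below is paraphrased from memory, so treat the exact thresholds and helper name as assumptions.

```cpp
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/Instruction.h"
#include "llvm/IR/Type.h"
using namespace llvm;

// Paraphrase of the classic default rule: an inttoptr is free if the source
// integer is a legal type no wider than a pointer; a ptrtoint is free if the
// destination integer is a legal type at least as wide as a pointer.
static bool isFreeIntPtrConversion(unsigned Opcode, Type *Dst, Type *Src,
                                   const DataLayout &DL) {
  if (Opcode == Instruction::IntToPtr) {
    unsigned OpSize = Src->getScalarSizeInBits();
    return DL.isLegalInteger(OpSize) &&
           OpSize <= DL.getPointerTypeSizeInBits(Dst);
  }
  if (Opcode == Instruction::PtrToInt) {
    unsigned DestSize = Dst->getScalarSizeInBits();
    return DL.isLegalInteger(DestSize) &&
           DestSize >= DL.getPointerTypeSizeInBits(Src);
  }
  return false;
}
```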
Sanity check: I believe, as per llvm::TargetTransformInfo::getInstructionCost(),
there are three cost models:
- throughput model (getInstructionThroughput())
- latency model (getInstructionLatency())
- size model (getUserCost())
I'm not sure what getOperationCost() is supposed to represent,
so I'm not sure how its code should be redistributed if it is deleted.
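For context, the dispatch in question looks roughly like the following; this is a paraphrase of TargetTransformInfo::getInstructionCost() as of around this review, not the exact source, so treat the enumerator names and structure as an assumption.

```cpp
// Rough paraphrase of TargetTransformInfo::getInstructionCost():
int getInstructionCost(const Instruction *I, TargetCostKind Kind) const {
  switch (Kind) {
  case TCK_RecipThroughput: // throughput model
    return getInstructionThroughput(I);
  case TCK_Latency:         // latency model
    return getInstructionLatency(I);
  case TCK_CodeSize:        // size model
    return getUserCost(I);
  }
  llvm_unreachable("Unknown instruction cost kind");
}
```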
llvm/include/llvm/Analysis/TargetTransformInfo.h:281–282
"\c get*Cost": this isn't specific to getGEPCost at all.

llvm/include/llvm/Analysis/TargetTransformInfoImpl.h:867–870
This is unreachable.
It's likely that nobody knows at this point. Anything we can do to clean up and simplify these classes is welcome, but it probably requires cleaning up the user classes too, because they've come to expect different meanings for these various vaguely/wrongly named APIs. Example in PR43591:
https://bugs.llvm.org/show_bug.cgi?id=43591
> Anything we can do to clean up and simplify these classes is welcome, but it probably requires cleaning up the user classes too
@spatel my aim for this patch was to hoist the getOperationCost functionality into its only user, getUserCost, while also giving the backends a better chance of conveying costs. I know I've got a couple of comments from @lebedev.ri to see to, but do you see any fundamental issues with this change?
Thought about this a bit more...
The SystemZ test shows a result that seems unlikely to be intended. The most obvious place where that will have an impact is SimplifyCFG.
I suspect we'll end up causing regressions and be on an endless revert/fix cycle as those get noticed.
Can we limit this patch to an NFC cleanup of the API (even if that means propagating a known wrong answer)?
After that, I'd look at correcting SimplifyCFG. That pass is using what is now defined as the size cost model, but that's not what it was intended to be as SimplifyCFG evolved:
  /// Compute an abstract "cost" of speculating the given instruction,
  /// which is assumed to be safe to speculate. TCC_Free means cheap,
  /// TCC_Basic means less cheap, and TCC_Expensive means prohibitively
  /// expensive.
  static unsigned ComputeSpeculationCost(const User *I,
                                         const TargetTransformInfo &TTI) {
    assert(isSafeToSpeculativelyExecute(I) &&
           "Instruction is not safe to speculatively execute!");
    return TTI.getUserCost(I);
  }
This should be the throughput cost, possibly with a size-cost add-on to limit damage?
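A hedged sketch of what that suggestion might look like follows; it is purely illustrative, the function name and the 50/50 blend are invented, and the cost-kind enumerators are as I recall them, so none of this should be read as the actual SimplifyCFG change.

```cpp
#include "llvm/Analysis/TargetTransformInfo.h"
#include "llvm/Analysis/ValueTracking.h"
#include "llvm/IR/Instruction.h"
using namespace llvm;

// Illustrative only: query the throughput model for speculation, mixing in
// the size model to limit code growth. The equal weighting is an arbitrary
// placeholder, not a recommendation.
static unsigned computeSpeculationCostSketch(const Instruction *I,
                                             const TargetTransformInfo &TTI) {
  assert(isSafeToSpeculativelyExecute(I) &&
         "Instruction is not safe to speculatively execute!");
  int Throughput =
      TTI.getInstructionCost(I, TargetTransformInfo::TCK_RecipThroughput);
  int Size = TTI.getInstructionCost(I, TargetTransformInfo::TCK_CodeSize);
  return (Throughput + Size) / 2;
}
```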
> Can we limit this patch to an NFC cleanup of the API (even if that means propagating a known wrong answer)?
I would certainly like to avoid the commit-revert cycle you describe! I will try, though I fear creating a true NFC will be hard; at the least I should be able to get rid of the test change relatively easily.
Looking just at the SystemZ test case changes: for the icmp/[zs]ext case (fun1/fun2), we actually need three instructions (compare, load zero, conditional move), so that change seems reasonable.
For the fun5 case, we can actually do the extension in a single instruction if the input is already in a register, so that change looks wrong. On the other hand, if I understand correctly, the new approach after this patch gives us greater control in the backend TTI, so we could fix that case there?
Then overall I guess I'd be OK with this.
Thanks for taking a look @uweigand, but I've now removed the change that affected the SystemZ test. I'd like to re-add it after this patch. There are also other parts that I'd like to change, like returning the actual cost reported instead of sometimes only returning the result if it's 'free'. There are also cases where two API calls are used in an attempt to get a 'free' answer. I'm hoping that, with some continued refactoring and reorganising, we can reduce the number of calls and further empower the backends.
Is this NFC now? I added some sanity tests for x86 here:
rGa2bb19c
...but this patch doesn't apply to master cleanly, so I could not verify if that wiggles or not.
llvm/include/llvm/Analysis/TargetTransformInfoImpl.h:890–892
I know we're in some intermediate phase of rewriting this, but this seems backwards:

    if (getCastInstrCost(Opcode, Ty, OpTy, dyn_cast<Instruction>(U)) == 0)
      return TargetTransformInfo::TCC_Free;
llvm/include/llvm/Analysis/TargetTransformInfoImpl.h:890–892
Ok, I'll rebase this and take a look.
Thanks @spatel for adding those extra tests; they highlighted changes around bitcasts:
    ; CHECK-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %r = bitcast double %x to i64
    Printing analysis 'Cost Model Analysis' for function 'bitcast_f64_i64':
    note: possible intended match here
    Cost Model: Found an estimated cost of 0 for instruction: %r = bitcast double %x to i64

    ; CHECK-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %r = bitcast i64 %x to double
    Printing analysis 'Cost Model Analysis' for function 'bitcast_i64_f64':
    note: possible intended match here
    Cost Model: Found an estimated cost of 0 for instruction: %r = bitcast i64 %x to double
So now 'TargetTTI' is not queried for bitcasts, unlike trunc and ptr conversions. Though I'm still very hesitant to say this is an NFC, because of the various layers and the ordering of the calls, I think it is now trending that way.
llvm/include/llvm/Analysis/TargetTransformInfoImpl.h:876
The earlier review comment shifted, but I think it still applies. The GEP opcode is always handled above here, right?

llvm/include/llvm/Analysis/TargetTransformInfoImpl.h:893
This still isn't clear to me (let me know if I'm misunderstanding and/or reading too much into this). getCastInstrCost() returns some integer value as a cost. Do we really want to use the "TCC_Free" enum value as an alias for "0" in the comparison statement? If we are going to use that definition, then do we need to update the documentation comments to specify that TCC_Free is a designated return value and fixed at zero?
llvm/include/llvm/Analysis/TargetTransformInfoImpl.h:876
Ah, I've messed this up while rebasing.

llvm/include/llvm/Analysis/TargetTransformInfoImpl.h:893
My immediate response was, 'isn't that the purpose of an enum?' But now I think I know what you mean, though I don't feel that it really matters here, and it isn't out of line with how this layer already operates. Maybe it could be addressed at the same time as fixing the unsigned/int mess..? Also, the next step here will be to return the raw value anyway, without the condition, to get a more accurate answer.
llvm/include/llvm/Analysis/TargetTransformInfoImpl.h:893
Yes, I assumed that we were trying to move away from the bogus enum values, so it seemed backwards to create another use of those. If that's just a temporary splotch on the way to the better fix, it's ok with me.
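For reference, the enum being debated is, as far as I recall, the following; the exact values are an assumption, so check TargetTransformInfo.h for the authoritative definition.

```cpp
// As I recall it from llvm/Analysis/TargetTransformInfo.h around this time;
// treat the exact values as an assumption.
enum TargetCostConstants {
  TCC_Free = 0,      // Expected to fold away completely.
  TCC_Basic = 1,     // The cost of a typical "simple" instruction.
  TCC_Expensive = 4  // The cost of a division, or similarly expensive op.
};
```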
LGTM, but give this a couple of days to see if anyone that commented earlier has more feedback.
I'm not sure what the policy is if another reviewer explicitly requests changes (ping @reames).
https://llvm.org/docs/CodeReview.html