No backend implemented this, and it's simple to implement directly within TargetTransformInfo.
Also, I see that D37170 is where the latency/throughput enhancements were made, so I'm adding more potential reviewers. Is there an llvm-dev thread that describes what we want the cost model to end up looking like?
Is it a good idea to have getInstructionThroughput/getInstructionLatency/getUserCost being implemented through different mechanisms?
Thanks both.
> Is it a good idea to have getInstructionThroughput/getInstructionLatency/getUserCost being implemented through different mechanisms?
I don't think so. I like the idea of a single getInstructionCost interface that calls into the respective cost functions, which in turn call into the concrete implementations. This means getUserCost would disappear, though, and the various cost methods would take an explicit TargetCostKind argument. I think AMDGPU is the only backend with a significant getUserCost implementation.
> Is there an llvm-dev thread that describes what we want the cost model to end up looking like?
I will try to get an RFC patch for what I described above and then post to the list.
I realize this is just copied over, but why 40?
Added with D38104.