This is an archive of the discontinued LLVM Phabricator instance.

[TTI] Devirtualize getInstructionLatency
AbandonedPublic

Authored by samparker on Apr 22 2020, 9:05 AM.

Details

Summary

No backend implemented this, and it is simple to implement directly within TargetTransformInfo.

Diff Detail

Event Timeline

samparker created this revision. Apr 22 2020, 9:05 AM
Herald added a project: Restricted Project. Apr 22 2020, 9:05 AM
Herald added a subscriber: hiraditya.
spatel added inline comments.
llvm/lib/Analysis/TargetTransformInfo.cpp:1164

I realize this is just copied over, but why 40?
Added with D38104.

Also, I see that D37170 is where the latency/throughput enhancements were made, so adding more potential reviewers. Is there an llvm-dev thread that describes what we want the cost model to end up looking like?

Is it a good idea to have getInstructionThroughput/getInstructionLatency/getUserCost being implemented through different mechanisms?

Thanks both.

> Is it a good idea to have getInstructionThroughput/getInstructionLatency/getUserCost being implemented through different mechanisms?

I don't think so. I like the idea of a single getInstructionCost interface that calls into the respective cost functions, which in turn call into the concrete implementations. This would mean that getUserCost disappears, though, and the various cost methods would take an explicit TargetCostKind argument. I think AMDGPU is the only backend with a significant getUserCost implementation.

> Is there an llvm-dev thread that describes what we want the cost model to end up looking like?

I will try to get an RFC patch for what I described above and then post to the list.

samparker abandoned this revision. May 12 2020, 11:50 PM

See D79483 for the first steps; abandoning this for now.