Fixes an inconsistency in PerfHintAnalysis when using opaque pointers. Opaque pointers eliminate the bitcasts that typed-pointer IR carries, so InstCost differs between the two modes, which affected both the MemoryBound and WaveLimiterHint calculations.
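For illustration (a hypothetical snippet, not taken from the patch; the two pointer modes cannot mix in one module): with typed pointers an access often carries an explicit bitcast that counted toward InstCost, while the opaque-pointer form has no such instruction:

```llvm
; Typed pointers: the cast is a real IR instruction and was counted.
%p = bitcast i8 addrspace(1)* %buf to i32 addrspace(1)*
%v = load i32, i32 addrspace(1)* %p

; Opaque pointers: the same access, with no bitcast left to count.
%v2 = load i32, ptr addrspace(1) %buf
```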
Diff Detail
- Repository: rG LLVM Github Monorepo
Event Timeline
llvm/lib/Target/AMDGPU/AMDGPUPerfHintAnalysis.cpp
- 228: I'm pretty sure this skips a lot more casts. This also looks backwards. Opaque pointers avoid a lot of bitcasts, not introduce them.
- 272: Probably shouldn't count free instructions here. Rather than inventing a new cost counting, can we use TTI costs?

llvm/test/CodeGen/AMDGPU/perfhint-instr-cost.ll
- 4: This is adding a new test with typed pointers.
llvm/lib/Target/AMDGPU/AMDGPUPerfHintAnalysis.cpp
- 228: Updated the bitcast check.
- 272: How do we use that cost model here? We don't have IndirectAccessCost, isLargeStrideCost, etc. defined there.
llvm/lib/Target/AMDGPU/AMDGPUPerfHintAnalysis.cpp
- 228: I can see this helps get the same results for typed and opaque pointers, but in the direction that hurts the case you're looking at? All tests need to be updated to use opaque pointers. Typed pointers are off by default and are only still around because so many tests need to be updated.
llvm/test/CodeGen/AMDGPU/perfhint-instr-cost.ll
- 1: If you're going to directly check the debug output, you need REQUIRES: asserts.
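For reference, a minimal header such a test might use (hypothetical RUN line, assuming the pass's DEBUG_TYPE is amdgpu-perf-hint):

```llvm
; REQUIRES: asserts
; RUN: llc -mtriple=amdgcn -debug-only=amdgpu-perf-hint -o /dev/null %s 2>&1 | FileCheck %s
```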
llvm/test/CodeGen/AMDGPU/perfhint.ll
- 2: This is the wrong way to switch this. You need to textually replace all the pointers in the test. You can run https://gist.github.com/nikic/98357b71fd67756b0f064c9517b62a34, although that should be in a separate change from any functional patch.
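The conversion the linked script performs is purely textual; roughly (hypothetical function, not from perfhint.ll):

```llvm
; Before, with typed pointers:
define void @foo(float addrspace(1)* %out) {
  store float 0.0, float addrspace(1)* %out
  ret void
}

; After, with opaque pointers:
define void @foo(ptr addrspace(1) %out) {
  store float 0.0, ptr addrspace(1) %out
  ret void
}
```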
llvm/lib/Target/AMDGPU/AMDGPUPerfHintAnalysis.cpp
- 272: Call getInstructionCost from here? You don't need to push all the logic here into TTI.
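A minimal sketch of that suggestion, assuming a TargetTransformInfo reference is reachable from the analysis and that an InstructionCost accumulator replaces the raw counter (names here are illustrative, not the patch's):

```cpp
#include "llvm/Analysis/TargetTransformInfo.h"
#include "llvm/IR/InstIterator.h"

using namespace llvm;

// Sum TTI costs over the function instead of hand-counting instructions.
// Free instructions (e.g. no-op casts) come back as cost 0, so they no
// longer skew the MemoryBound/WaveLimiterHint thresholds, and typed vs.
// opaque pointers converge on the same total.
static InstructionCost computeInstCost(const Function &F,
                                       const TargetTransformInfo &TTI) {
  InstructionCost Total = 0;
  for (const Instruction &I : instructions(F))
    Total += TTI.getInstructionCost(
        &I, TargetTransformInfo::TCK_SizeAndLatency);
  return Total;
}
```

TCK_SizeAndLatency is just one plausible cost kind for a size-style heuristic; which kind fits this hint is part of the question above.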