Differential D98832

[libomptarget] Tune the number of teams and threads for kernel launch.
Authored by dhruvachak on Mar 17 2021, 5:52 PM.

Details

Change the default number of teams. Based on kernel register usage, adjust the number of threads in a team. Includes a corner case fix.

This change is dependent on https://reviews.llvm.org/D98829
Event Timeline

This is really interesting. The idea seems to be to choose the dispatch parameters based on the kernel metadata and the limits of the machine. What's the underlying heuristic? Break across N CUs in chunks that match the occupancy limits of each CU? If so, we probably want to compare LDS usage as well to avoid partitioning poorly for that. Maybe others too; there might be a performance cliff on the amount of private memory as well.
Yes, that's the idea.

Agreed. However, I don't see LDS usage in the metadata table in the image. Is it present there? In theory, a very high SGPR count can limit the number of available workgroups if that's not factored in when determining the number of threads. But in practice, VGPRs tend to be the primary limiting factor. So perhaps we can start with using VGPRs for this purpose and let experience guide us in the future.
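To illustrate the "start with VGPRs" idea, here is a minimal C++ sketch: since every wavefront of a workgroup must be resident on the same CU, the per-thread VGPR count bounds how many waves, and therefore threads, a single team can have. The function name, parameters, and constants below are illustrative assumptions, not values taken from the patch or from any particular GPU.

  #include <algorithm>
  #include <cstdint>

  // Upper bound on threads per team implied by per-thread VGPR usage alone.
  // VGPRBudgetPerLane is the size of the per-lane vector register file on one
  // SIMD and SIMDsPerCU the number of SIMDs sharing a CU; both are treated as
  // parameters here rather than hard-coded hardware facts.
  uint32_t maxThreadsByVGPR(uint32_t VGPRsPerThread, uint32_t VGPRBudgetPerLane,
                            uint32_t SIMDsPerCU, uint32_t WaveSize,
                            uint32_t DeviceMaxThreads) {
    if (VGPRsPerThread == 0)
      return DeviceMaxThreads; // no VGPR pressure, fall back to the device limit

    // Waves that fit on one SIMD, then on the whole CU, under VGPR pressure.
    uint32_t WavesPerSIMD = VGPRBudgetPerLane / VGPRsPerThread;
    uint32_t WavesPerCU = WavesPerSIMD * SIMDsPerCU;

    // A workgroup larger than the CU's resident-wave budget cannot fit at all.
    return std::min(DeviceMaxThreads,
                    std::max<uint32_t>(1, WavesPerCU) * WaveSize);
  }

This only captures the VGPR side of the trade-off; SGPR and LDS pressure would tighten the bound further.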
Could you upload patches with full context, please?

Updated with the full context.

Vector registers? Like xmm? Or registers?

Like xmm. Here in particular, I am referring to the vector register file of a GPU.
Yes, see https://llvm.org/docs/AMDGPUUsage.html for the list of what we can expect. What may not be obvious is that the metadata calls it ".group_segment_fixed_size". I don't know the origin of the terminology, maybe OpenCL?
If I understand correctly, the occupancy rules all look something like (resource available / resource used) == number simultaneous, where one of the resources tends to be the limiting one. Offhand, I think that's VGPR, SGPR, LDS (group segment). I think there's also an architecture-dependent upper bound on how many things can run at once even if they use very little of those, maybe 8 for gfx9 and 16 for gfx10. If that's right, perhaps the calculation should look something like:

  uint vgpr_occupancy = vgpr_available / vgpr_used;
  uint sgpr_occupancy = sgpr_available / sgpr_used;
  uint lds_occupancy = lds_available / lds_used;
  uint limiting_occupancy = min(vgpr_occupancy, sgpr_occupancy, lds_occupancy);

and then we derive threadsPerGroup from that occupancy and the various other considerations.
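To make that formula concrete, here is a minimal C++ sketch of the limiting-occupancy calculation. The type and function names are illustrative, the budgets are assumed to already be expressed in a common unit (resident wavefronts per CU), it glosses over the fact that LDS is consumed per workgroup rather than per wavefront, and the mapping from the code object metadata to these budgets is architecture specific; none of this is what the patch necessarily implements.

  #include <algorithm>
  #include <cstdint>

  // Budget and per-wavefront usage for one resource, both in the same unit.
  // The real values would come from code object metadata such as
  // .vgpr_count, .sgpr_count and .group_segment_fixed_size.
  struct Resource {
    uint32_t Available;   // budget per compute unit
    uint32_t UsedPerWave; // consumption per resident wavefront
  };

  // Wavefronts supported before this resource runs out; zero usage means the
  // resource does not constrain occupancy.
  static uint32_t wavesSupported(Resource R, uint32_t HardCap) {
    return R.UsedPerWave ? R.Available / R.UsedPerWave : HardCap;
  }

  // Occupancy is bounded by whichever of VGPRs, SGPRs or LDS runs out first,
  // and by the architecture-dependent cap on resident wavefronts.
  uint32_t limitingOccupancy(Resource VGPR, Resource SGPR, Resource LDS,
                             uint32_t MaxWavesPerCU) {
    return std::min({wavesSupported(VGPR, MaxWavesPerCU),
                     wavesSupported(SGPR, MaxWavesPerCU),
                     wavesSupported(LDS, MaxWavesPerCU), MaxWavesPerCU});
  }

From the limiting wavefront count, threadsPerGroup could then be derived by multiplying by the wavefront size and clamping to the device's block-size limit.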
Thanks for the pointer to the group segment. Yes, in general, my idea is similar to what you outlined above. However, note that SGPRs and LDS are at different granularities compared to VGPRs: VGPRs are per-thread, SGPRs are shared within a wavefront, and LDS is shared within a workgroup. So while VGPRs can be used to limit the number of threads, perhaps SGPRs and LDS can be used to limit the number of teams.

Let me split up this patch further. I would like to land the default num_teams change sooner rather than later, since that is a simple change and has shown improved performance, so let me separate that out. Incorporating SGPRs/LDS to constrain teams/threads will need more experimentation.

[libomptarget] [amdgpu] Set number of teams and threads based on GPU occupancy. Determine total number of teams in a kernel and the number of threads in each

I haven't tried to understand the control flow yet. Is the idea to map a target region to as large a fraction of a CU as we can, scaling it back when occupancy constraints would force some of it to be idle anyway?

Yes, we start with the goal of filling up a CU with a pre-defined number of wavefronts. Given that goal, we try to choose the team count and team size so that their product approaches that pre-defined number of wavefronts. The choices of team count and team size are constrained by register/LDS usage.

[libomptarget] [amdgpu] Set number of teams and threads based on GPU occupancy. Perform teams/threads tuning in non-generic execution modes. Determine total number of teams in a kernel and the number of threads in each

[libomptarget] [amdgpu] Set number of teams and threads based on GPU occupancy. Ensure that thread count is within the limit. Determine total number of teams in a kernel and the number of threads in each

This stuff definitely needs to be tested.
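As a rough illustration of the team-count/team-size selection dhruvachak describes above (fill a CU with a target number of wavefronts, with the product of team count and per-team wavefronts approaching that target), here is a hedged C++ sketch. The parameter names, the rounding choices, and the assumption that occupancy has already been reduced to a waves-per-CU figure (for example by something like limitingOccupancy above) are all illustrative; the actual patch may choose differently.

  #include <algorithm>
  #include <cstdint>

  struct LaunchParams {
    uint32_t NumTeams;
    uint32_t ThreadsPerTeam;
  };

  // Pick a team count and team size whose product, measured in wavefronts,
  // approaches NumCUs * TargetWavesPerCU, while respecting the occupancy
  // limit derived from register/LDS usage and the device's block-size limit.
  LaunchParams chooseLaunchParams(uint32_t NumCUs, uint32_t TargetWavesPerCU,
                                  uint32_t WaveSize,
                                  uint32_t OccupancyWavesPerCU,
                                  uint32_t MaxThreadsPerTeam) {
    // Do not ask for more resident waves per CU than occupancy allows.
    uint32_t WavesPerCU =
        std::max<uint32_t>(1, std::min(TargetWavesPerCU, OccupancyWavesPerCU));

    // Team size in waves: as large as possible while a single team still fits
    // within one CU's wave budget and the block-size limit.
    uint32_t WavesPerTeam = std::max<uint32_t>(
        1, std::min(WavesPerCU, MaxThreadsPerTeam / WaveSize));
    uint32_t ThreadsPerTeam = WavesPerTeam * WaveSize;

    // Enough teams that NumTeams * WavesPerTeam approaches the device-wide
    // wavefront target.
    uint32_t NumTeams =
        std::max<uint32_t>(1, (NumCUs * WavesPerCU) / WavesPerTeam);

    return {NumTeams, ThreadsPerTeam};
  }

With made-up inputs of 60 CUs, a target of 8 waves per CU, wave size 64, an occupancy limit of 4 waves, and a 1024-thread block limit, this sketch would yield 256 threads per team and 60 teams.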