We need support for per-team shared variables to support codegen for
lastprivates/reductions. The patch adds this support by using shared memory
if the total size of the reductions/lastprivates is <= 128 bytes,
a pre-allocated buffer in global memory if the size is <= 4K bytes, or
malloc/free otherwise.
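For illustration, a minimal sketch of that size-based selection. The names (select_team_storage, SharedLimit, PreallocLimit) are hypothetical and not part of the runtime; only the thresholds come from the summary above.

#include <cstddef>

// Three tiers, as described in the summary:
//   <= 128 bytes -> per-team shared memory,
//   <= 4K bytes  -> pre-allocated buffer in global memory,
//   larger       -> malloc/free.
enum class TeamStorage { SharedMem, PreallocatedGlobal, MallocFree };

constexpr std::size_t SharedLimit = 128;    // assumed shared-memory cutoff
constexpr std::size_t PreallocLimit = 4096; // assumed preallocated-buffer cutoff

TeamStorage select_team_storage(std::size_t bytes) {
  if (bytes <= SharedLimit)
    return TeamStorage::SharedMem;
  if (bytes <= PreallocLimit)
    return TeamStorage::PreallocatedGlobal;
  return TeamStorage::MallocFree;
}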
Details
- Reviewers: gtbercea, kkwli0, grokos, Hahnfeld
- Commits:
  - rG022bf16b417f: [OPENMP][NVPTX] Add support for lastprivates/reductions handling in SPMD…
  - rOMP342737: [OPENMP][NVPTX] Add support for lastprivates/reductions handling in SPMD…
  - rL342737: [OPENMP][NVPTX] Add support for lastprivates/reductions handling in SPMD…
Diff Detail
- Repository: rL LLVM
Event Timeline
I really, really dislike adding even more global buffers. 4096 * 32 * 56 are another 7MiB that are not usable for applications. What's wrong with using the existing ones?
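For reference, the arithmetic behind that figure; reading the factors as 4096-byte slots, 32 slots per SM, and MAX_SM = 56 is an assumption from context.

#include <cstdio>

int main() {
  // 4096 * 32 * 56 bytes; 7 MiB is 7 * 1024 * 1024 = 7,340,032 bytes.
  const unsigned long long bytes = 4096ULL * 32 * 56;
  std::printf("%llu bytes = %llu MiB\n", bytes, bytes / (1024 * 1024));
  return 0;
}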
Can you upload the CodeGen patch for reductions somewhere? I thought we need a global scratchpad buffer that is addressable for all teams?
libomptarget/deviceRTLs/nvptx/src/option.h:37 ↗ (On Diff #164715)
This doesn't exist unless you have information that is not public yet. Volta is 720 at most.
I really, really dislike an implementation in ibm-devel; the scratchpad solution will never be added to trunk. The existing ones cannot be reused, as they are allocated only if the full runtime is used.
libomptarget/deviceRTLs/nvptx/src/option.h:37 ↗ (On Diff #164715)
According to https://docs.nvidia.com/cuda/volta-tuning-guide/index.html, it is 84.
What's the overhead of initializing it? The whole libomptarget-nvptx is already pretty much a mess, see my thread on openmp-dev.
libomptarget/deviceRTLs/nvptx/src/option.h:37 ↗ (On Diff #164715)
I'm not commenting on MAX_SM, rather on the value of __CUDA_ARCH__. As such these defines are never active.
It is not a runtime issue, it is a problem with the compiler itself. It breaks compatibility with the other outlined regions and, thus, it cannot be committed to trunk. I'd like to have this at least as a temporary solution to support lastprivates/reductions in SPMD mode with the lightweight runtime. We can reduce the size of the preallocated buffers, if you wish.
Can you please describe the problems? Again, maybe posting the patch would help.
I'd like to have this at least as a temporary solution to support lastprivates/reductions in SPMD mode with the lightweight runtime. We can reduce the size of the preallocated buffers, if you wish.
Is that a commitment to actively work on that area?
I already described it: it breaks compatibility with other outlined regions and breaks the whole design of the OpenMP implementation.
I'd like to have this at least as a temporary solution to support lastprivates/reductions in SPMD mode with the lightweight runtime. We can reduce the size of the preallocated buffers, if you wish.
Is that a commitment to actively work on that area?
Yes, Alex Eichenberger is trying to invent something that will allow us to use something similar to ibm-devel but without breaking the design of OpenMP in the compiler. But it requires some time, and I'd like to have something working, at least.
First, that's a general statement without any explanation. Second, I'm not asking about the scratchpad pointer solution in ibm-devel but rather why we can't pass RequiresDataSharing = true to __kmpc_spmd_kernel_init, which would give us data sharing in the existing buffers.
I'd like to have this at least as a temporary solution to support lastprivates/reductions in SPMD mode with the lightweight runtime. We can reduce the size of the preallocated buffers, if you wish.
Is that a commitment to actively work on that area?
Yes, Alex Eichenberger is trying to invent something that will allow us to use something similar to ibm-devel but without breaking the design of OpenMP in the compiler. But it requires some time, and I'd like to have something working, at least.
I'm referring to the process of cleaning up libomptarget-nvptx.
libomptarget/deviceRTLs/nvptx/src/option.h:37 ↗ (On Diff #164715)
That's now 1 GiB of global memory that can't be used by the user application.
First, stop talking like this. I don't owe you anything.
Second, RequiresDataSharing is not required, because I intend to use the preallocated buffer instead of a dynamically allocated one.
I'd like to have this at least as a temporary solution to support lastprivates/reductions in SPMD mode with the lightweight runtime. We can reduce the size of the preallocated buffers, if you wish.
Is that a commitment to actively work on that area?
Yes, Alex Eichenberger is trying to invent something that will allow us to use something similar to ibm-devel but without breaking the design of OpenMP in the compiler. But it requires some time, and I'd like to have something working, at least.
I'm referring to the process of cleaning up libomptarget-nvptx.
No, if you're interested in this, you can do it.
No, it is about 10-12 MB.
libomptarget/deviceRTLs/nvptx/src/option.h:37 ↗ (On Diff #164715)
Just like I said, I can reduce the size of the preallocated buffers.
Sorry, my last comment sounded rude even though I didn't mean it that way.
My point is that it's impossible to review patches without a big picture: what are the other parts, which alternatives did you evaluate, why don't they work?
And to be honest: Disregarding technical review and simply ignoring my comments doesn't feel nice either.
Second, RequiresDataSharing is not required, because I intend to use the preallocated buffer instead of a dynamically allocated one.
The data sharing infrastructure also has preallocated buffers.
I'd like to have this at least as a temporary solution to support lastprivates/reductions in SPMD mode with the lightweight runtime. We can reduce the size of the preallocated buffers, if you wish.
Is that a commitment to actively work on that area?
Yes, Alex Eichenberger is trying to invent something that will allow us to use something similar to ibm-devel but without breaking the design of OpenMP in the compiler. But it requires some time, and I'd like to have something working, at least.
I'm referring to the process of cleaning up libomptarget-nvptx.
No, if you're interested in this, you can do it.
That's what I feared. Yes, I think this is needed, but to be honest, I'm already facing enough resistance with moderately conservative proposals.
The additional buffer, maybe. But raising MAX_SM from 56 to 84 scales the array of queues proportionally.
It is going to use the same globalization support we use for the generic data-sharing scheme. But in SPMD mode we need to share only lastprivates/reduction variables within the teams, so we can use a simplified data allocation algorithm, as we don't need to use it in other constructs.
Second, RequiresDataSharing is not required, because I intend to use the preallocated buffer instead of a dynamically allocated one.
The data sharing infrastructure also has preallocated buffers.
I'd like to have this at least as a temporary solution to support lastprivates/reductions in SPMD mode with the lightweight runtime. We can reduce the size of the preallocated buffers, if you wish.
Is that a commitment to actively work on that area?
Yes, Alex Eichenberger is trying to invent something that will allow us to use something similar to ibm-devel but without breaking the design of OpenMP in the compiler. But it requires some time, and I'd like to have something working, at least.
I'm referring to the process of cleaning up libomptarget-nvptx.
No, if you're interested in this, you can do it.
That's what I feared. Yes, I think this is needed, but to be honest, I'm already facing enough resistance with moderately conservative proposals.
The additional buffer, maybe. But raising MAX_SM from 56 to 84 scales the array of queues proportionally.
I will try to reuse the existing buffers in the global memory.
Reused preallocated memory for the full runtime as the global memory buffer for the lightweight runtime.
libomptarget/deviceRTLs/nvptx/src/omptarget-nvptx.cu:44 ↗ (On Diff #164879)
Expected <space> number....
@Hahnfeld: Are the latest changes in line with your requirements/plans to reduce the memory footprint of the nvptx runtime?
Just to make sure I came to the right conclusions after trying to understand the code generated since rC342738, and for documentation purposes if the following explanation is correct: the compiler-generated code asks the runtime for two loop schedules, one for distribute and the other to implement for. The latter iterates in the chunk returned by the distribute schedule.
For lastprivates on teams distribute parallel for this means that the global value needs to be updated in the last iteration of the last distribute chunk. However, the outlined parallel region only knows whether the current thread is executing the last iteration of the for worksharing construct. This means the lastprivate value of the parallel for is passed back to the distribute loop, which decides whether it has just executed the last chunk and needs to write to the global value.
In SPMD constructs all CUDA threads execute the distribute loop, but only the thread executing the last iteration of the for loop has seen the lastprivate value. However, the information of which thread this is has been lost at the end of the parallel region. So data sharing is used to communicate the lastprivate value to all threads in the team that is executing the last distribute chunk.
Assume a simple case like this:
int last;
#pragma omp target teams distribute parallel for map(from: last) lastprivate(last)
for (int i = 0; i < 10000; i++) {
  last = i;
}
Clang conceptually generates the following:
void outlined_target_fn(int *last) {
  int *last_ds = /* get data sharing frame from runtime */
  for (/* distribute loop from 0 to 9999 */) {
    outlined_parallel_fn(lb, ub, last_ds);
  }
  if (/* received last chunk */) {
    *last = *last_ds;
  }
}

void outlined_parallel_fn(int lb, int ub, int *last) {
  int last_privatized;
  for (/* for loop from lb to ub */) {
    last_privatized = i;
  }
  if (/* executed last iteration of for loop */) {
    *last = last_privatized;
  }
}
I tried to solve this problem without support from the runtime and this appears to work:
void outlined_target_fn(int *last) {
  int last_dummy;
  for (/* distribute loop from 0 to 9999 */) {
    int *last_p = &last_dummy;
    if (/* is last chunk */) {
      last_p = last;
    }
    outlined_parallel_fn(lb, ub, last_p);
  }
}

void outlined_parallel_fn(int lb, int ub, int *last) {
  int last_privatized;
  for (/* for loop from lb to ub */) {
    last_privatized = i;
  }
  if (/* executed last iteration of for loop */) {
    *last = last_privatized;
  }
}
(Alternatively it should also be possible to set last_p before entering the distribute loop. This will write to last multiple times but the final value should stay in memory after the kernel.)
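A concrete, sequential model of that alternative (hypothetical code, not from the patch): last_p is fixed before the distribute loop, every chunk writes its lastprivate value through to the original variable, and because the chunks are processed in increasing order here, the value from iteration 9999 is the one that remains. Each chunk is handled by a single "thread" in this model, so the per-thread "executed last iteration" check collapses into an unconditional write at the end of the chunk.

#include <cstdio>

static void outlined_parallel_fn(int lb, int ub, int *last) {
  int last_privatized = 0;
  for (int i = lb; i <= ub; ++i)     // "for" worksharing loop over the chunk
    last_privatized = i;
  *last = last_privatized;           // write-back for the chunk's last iteration
}

static void outlined_target_fn(int *last) {
  int *last_p = last;                // set once, before the distribute loop
  const int chunk = 1000;
  for (int lb = 0; lb < 10000; lb += chunk)   // distribute loop
    outlined_parallel_fn(lb, lb + chunk - 1, last_p);
}

int main() {
  int last = -1;
  outlined_target_fn(&last);
  std::printf("last = %d\n", last);  // 9999 in this sequential model
  return 0;
}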
As you can see, the outlined parallel function is unchanged (which is probably what you mean by "breaks the compatibility", @ABataev?). This should work because all invocations of outlined_parallel_fn write their value of last into a dummy location, except the one executing the last distribute chunk.
What do you think?
I still think it's a waste of resources to statically allocate around 1 GB on sm_70 / 660 MB on sm_60. And I think it's worrying that we are adding more and more data structures because it seems convenient to quickly solve a problem. The truth seems to be that it's incredibly hard to get rid of them later on...
No, you're not correct here.
void outlined_target_fn(int *last) {
  int *last_ds = /* get data sharing frame from runtime */
  for (/* distribute loop from 0 to 9999 */) {
    outlined_parallel_fn(lb, ub, last_ds);
  }
  if (/* received last chunk */) {
    *last = *last_ds;
  }
}
This code is for the distribute loop. And here you have a conflict without the data-sharing scheme. The problem here is that the check /* received last chunk */ is true for all inner loop iterations of the inner for directive, so *last_ds may come not from the last iteration of the for loop but from some other iteration. To solve this problem, we need to share the same last_ds between all the threads in the team.
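A small, deterministic model of that conflict (hypothetical code, not generated IR): assume last_ds is not shared, so each thread of the last team has a private copy. Only the thread that executed the last iteration of the inner for loop updates its copy, but every thread passes the "received last chunk" check and writes its own copy back, so the result depends on write order.

#include <cstdio>
#include <vector>

int main() {
  const int team_size = 4;
  const int lb = 9000, ub = 9999;            // last distribute chunk
  std::vector<int> last_ds(team_size, -1);   // private copies, one per thread

  // Inner "for" worksharing: thread t handles [lb + 250*t, lb + 250*t + 249].
  for (int t = 0; t < team_size; ++t) {
    const int my_ub = lb + 250 * t + 249;
    if (my_ub == ub)                 // only this thread saw iteration 9999
      last_ds[t] = my_ub;
  }

  // Distribute epilogue with private copies: every thread of the last team
  // passes the "received last chunk" check and writes its own last_ds.
  int last = -1;
  const int order[team_size] = {3, 0, 2, 1}; // one possible write interleaving
  for (int t : order)
    last = last_ds[t];

  std::printf("last = %d (expected 9999)\n", last);  // -1 with this ordering
  return 0;
}

With a single last_ds shared by the whole team, all the epilogue writes would copy the same, correct value.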
Yes, that's the current solution in Clang and actually what I described above:
I'm assuming that the pointer returned by /* get data sharing frame from runtime */ is shared between all threads in a team.
- It is not how Clang works, it is what the standard requires.
- Yes, it is shared between all the threads in the team, and this is how it is intended to be according to the standard.
The main problem with your solution is that the distribute loop does not have information about which thread actually executed the last chunk of the loop. All the threads in the last team must execute the same check, and only one shall write its private value to the original variable. But, just like I said, the runtime does not provide this information to the compiler.
I've tried to describe how the current implementation works, based on the IR that is generated.
Please let me know if this pseudo code conceptually doesn't match the current IR.
- Yes, it is shared between all the threads in the team, and this is how it is intended to be according to the standard.
The main problem with your solution is that the distribute loop does not have information about which thread actually executed the last chunk of the loop. All the threads in the last team must execute the same check, and only one shall write its private value to the original variable. But, just like I said, the runtime does not provide this information to the compiler.
Now you are talking about the second pseudo-code:
I don't see why the distribute loop cares which thread actually executes the last iteration of the for loop, that's only relevant in the outlined parallel region.
Because it marks as lastprivate not the last loop chunk executed by the last thread, but the set of loop chunks executed by the last team. It means that when you try to write the lastprivate value after the distribute loop, you will have multiple writes from different threads with different values of the lastprivates.
Say the last distribute chunk is [L, U]. In the inner for directive it is split into [L, U1], [U1+1, U2], ..., [Un-1+1, U]. Distribute marks all of these chunks as last, not just the last one, [Un-1+1, U].
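A worked example of that splitting with hypothetical numbers: if the last distribute chunk is [9000, 9999] and the team has four threads, the inner for directive yields the sub-chunks printed below. All of them belong to the last distribute chunk, which is all the distribute level knows; only the owner of [9750, 9999] executed iteration 9999.

#include <cstdio>

int main() {
  const int L = 9000, U = 9999, threads = 4;
  const int step = (U - L + 1) / threads;      // 250 iterations per thread
  for (int t = 0; t < threads; ++t)
    std::printf("thread %d: [%d, %d]\n", t, L + t * step, L + (t + 1) * step - 1);
  return 0;
}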
I got that. This is why the outer distribute only passes the global address for its last chunk. Then the inner for decides which thread executes [Un-1 + 1, U] and writes the lastprivate value.
Plus, I need to add that I tried the solution you propose here maybe a month or two ago. If it worked, I would definitely use it rather than the one implemented now, because it is much easier to implement and works much faster. But it just does not work!
So now you are agreeing to "my" solution which is different than what Clang currently does - I'm confused.
No, I do not agree with your solution; I thought you agreed with the implemented one. You said that you understood that the inner for loop actually decides which chunk is the last one. And because of that, we need to share the distribute private copy of the lastprivate variable, so all the threads in the inner parallel region can modify it.