The condition (team_serial || tasking_ser) was mistakenly replaced with task_serial. As a result, serialized tasks in an active parallel region were lost, causing a memory leak.
This patch restores the original condition while leaving the rest of D23115 in place. The temporary variable is also renamed to match the condition.
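For context, a minimal model of why the two conditions diverge in exactly this case. The struct, function names, and main routine below are illustrative only, not libomp code; only the three flag names come from the summary above.

#include <cstdio>

// Illustrative model of the three flags mentioned above; the surrounding
// code is a sketch, not the actual kmp_tasking logic.
struct task_flags {
  bool team_serial;  // the enclosing parallel region is serialized
  bool tasking_ser;  // tasking is serialized for the whole team
  bool task_serial;  // this particular task runs serially
                     // (e.g. an if(0) clause or a full task deque)
};

// Condition restored by this patch.
static bool original_condition(const task_flags &f) {
  return f.team_serial || f.tasking_ser;
}

// Condition that D23115 accidentally introduced.
static bool replaced_condition(const task_flags &f) {
  return f.task_serial;
}

int main() {
  // A task serialized only because the deque is full, inside an active
  // (non-serialized) parallel region with tasking enabled:
  task_flags f{/*team_serial=*/false, /*tasking_ser=*/false,
               /*task_serial=*/true};

  // The two conditions disagree for exactly this flag combination, which is
  // how such tasks ended up on the wrong bookkeeping path and were leaked.
  std::printf("original: %d, replaced: %d\n",
              (int)original_condition(f), (int)replaced_condition(f));
  return 0;
}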
Details
- Reviewers: jlpeyton, Hahnfeld, tlwilmar
- Commits:
  - rG2e68768d1e66: Fixed memory leak mistakenly introduced by https://reviews.llvm.org/D23115
  - rOMP284747: Fixed memory leak mistakenly introduced by https://reviews.llvm.org/D23115
  - rL284747: Fixed memory leak mistakenly introduced by https://reviews.llvm.org/D23115
Diff Detail
- Repository: rL LLVM
Event Timeline
LGTM, I think this was left over from my previous tries to fix the problem.
When exactly does this happen? Something like this?
#pragma omp parallel num_threads(2)
#pragma omp single
{
  #pragma omp task if(0)
  { }
}
Would it be possible to add a test case for this? Maybe with a destructor of a firstprivate C++ object?
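A rough sketch of the kind of test suggested here, counting constructor and destructor calls of a firstprivate object. This is an assumption about the intent, not an existing libomp test, and as the reply below notes it would likely pass with or without the fix, since the leak is in runtime-internal bookkeeping.

#include <atomic>
#include <cassert>

// Counters to check that every copy of the firstprivate object is destroyed.
static std::atomic<int> ctors{0};
static std::atomic<int> dtors{0};

struct Tracked {
  Tracked() { ++ctors; }
  Tracked(const Tracked &) { ++ctors; }
  ~Tracked() { ++dtors; }
};

int main() {
  {
    Tracked t;
#pragma omp parallel num_threads(2)
#pragma omp single
    {
#pragma omp task if(0) firstprivate(t)
      {
        (void)t;  // use the firstprivate copy of t
      }
    }
  }
  // If the serialized task's firstprivate copy were leaked, the counters
  // would differ; in practice the leaked memory is the task descriptor
  // inside the runtime, which this check cannot observe.
  assert(ctors == dtors);
  return 0;
}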
I haven't debugged your particular example; it probably has the same problem, but you cannot see it without a debugger or some memory-checking tool. We saw it with a different pattern: each thread launches so many tasks that everything beyond the first 256 is serialized in each thread (because the task queue is full), and the test crashes from lack of memory after a long execution.
I doubt that writing a small test case is feasible. User-level code works with and without the fix, as the leak is in the library internals. Compiler-generated code also works in both cases.
We found the bug accidentally in the SPEC OMP2012 gross tests, but the failure took about an hour of running before the crash on our machine. On small tests the problem is not visible.
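For reference, a sketch of the pattern described above: each thread creates far more tasks than its task deque can hold, so the excess tasks are serialized. The counts are arbitrary placeholders; this is not the SPEC OMP2012 code.

#include <cstdio>

int main() {
  // Each thread enqueues many more tasks than the (256-entry) task deque can
  // hold, so the runtime serializes the excess tasks. With the broken
  // condition those serialized task descriptors were never reclaimed, and
  // memory grew until the process eventually died after a long run.
  const int tasks_per_thread = 100000;  // arbitrary, just large
  const int repetitions = 1000;         // arbitrary

  for (int rep = 0; rep < repetitions; ++rep) {
#pragma omp parallel
    {
      for (int i = 0; i < tasks_per_thread; ++i) {
#pragma omp task
        {
          (void)i;  // trivial body; the leak is in runtime bookkeeping
        }
      }
    }
    if (rep % 100 == 0)
      std::printf("completed iteration %d\n", rep);
  }
  return 0;
}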