parallelForEach is not reentrant. We use parallelForEach to call
each section's writeTo(), so calling the same function within writeTo()
is not safe.
Details
- Reviewers: andrewrk, sbc100
- Commits:
  - rG0eebd31dff80: Merging r358547:
  - rL360791: Merging r358547:
  - rG5081e41bdae2: Fix a crash bug caused by a nested call of parallelForEach.
  - rL358547: Fix a crash bug caused by a nested call of parallelForEach.
  - rLLD358547: Fix a crash bug caused by a nested call of parallelForEach.

Diff Detail
- Repository: rLLD LLVM Linker
Event Timeline
I will keep this patch open for a day to give sbc100 a chance to take a look.
This patch should be included in 8.0.1 once landed.
So IIUC, this moves the parallelism to per-input-file, rather than per-chunk-per-input-file.
@ruiu presumably we want to restore more parallelism if we land this change? Having said that, perhaps this won't be much of a regression for reasonably sized projects, since the projects that benefit from high parallelism will most likely have many input files. One concern is LTO, where there can be essentially one giant object file, in which case I think we would lose all our parallelism. However, I imagine that in that case the link time would be massively dominated by the compile time?
Otherwise LGTM.
Also, it makes me sad that parallelForEach can't detect this behaviour, especially since it leads to bugs that are hard to reliably reproduce. Should we try to get parallelForEach fixed?
There is no fundamental reason for parallelForEach to be non-reentrant, so we eventually want to fix it so that the function can be called without worrying about nesting. For the time being, we should be able to detect the error condition more easily by adding assertions.