This patch is a follow-up to https://reviews.llvm.org/D63196.
In this paper (full text) we studied the impact of the dependency hashtable capacity on the performance of one of our applications (see section 3.3; the experiments were made using the "jacobi" kernel from the KASTORS benchmark suite).
We broke down the time spent in task management (Figure 3), which showed that a large part of it went to the dependency-checking step, which we then narrowed down to the dependency lookup itself.
Statically changing the size proved to be an appropriate quickfix, but it would be nicer to have the hashtables resize automatically once they reach some threshold.
While simply doubling the hashtable capacity would be quite easy to implement, it also leads to disastrous collision statistics (see here and here for some experiments on a 24-core Haswell machine and an Arm machine).
These show how many buckets hold x elements just before the hashtable is resized (the 0-element bar is omitted for readability, as most buckets are actually empty); in short, there are a whole lot of collisions.
So instead I went for a fixed, arbitrarily chosen sequence of capacities, using prime numbers close to twice the previous capacity.
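To illustrate the idea, here is a minimal sketch of such a capacity progression; the concrete prime values and the next_hash_size helper are placeholders for illustration, not the actual numbers or names used in the patch:

```cpp
// Hypothetical illustration of the fixed capacity progression: each entry is
// a prime close to twice the previous one. The concrete values here are
// placeholders, not necessarily the ones used in the patch.
#include <stddef.h>

static const size_t hash_sizes[] = {97, 193, 389, 769, 1543, 3079, 6151};
static const size_t num_hash_sizes = sizeof(hash_sizes) / sizeof(hash_sizes[0]);

// Return the next capacity in the progression, or keep the current one once
// the largest size has been reached (i.e. stop growing at that point).
static size_t next_hash_size(size_t current) {
  for (size_t i = 0; i + 1 < num_hash_sizes; ++i)
    if (hash_sizes[i] == current)
      return hash_sizes[i + 1];
  return current;
}
```

Looking the next size up in a fixed table rather than computing primes on the fly keeps the resize path trivial.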
The bucket distributions for the same application runs (here and here) look far better: the majority of the buckets are used, and collisions don't pile up as high as before.
The hashtable resizing is triggered when the total number of conflicts in all buckets exceeds the number of buckets.
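Roughly, the insertion path could look like the sketch below; the struct layout and field names (size, nconflicts, buckets) are illustrative rather than the actual fields of the runtime's dependence hashtable, and next_hash_size refers to the progression sketched above:

```cpp
#include <stddef.h>
#include <stdlib.h>

// next_hash_size() as in the previous sketch; forward-declared here so the
// snippet stays short.
size_t next_hash_size(size_t current);

struct entry {
  unsigned long key; // e.g. derived from the dependence address
  entry *next;       // chaining on collision
};

struct dephash {
  size_t size;       // current number of buckets
  size_t nconflicts; // collisions accumulated since the last resize
  entry **buckets;
};

static void dephash_insert(dephash *h, entry *e) {
  // Grow once the accumulated conflicts exceed the bucket count, i.e. the
  // chains have started to get long on average.
  if (h->nconflicts > h->size) {
    size_t new_size = next_hash_size(h->size);
    entry **new_buckets = (entry **)calloc(new_size, sizeof(entry *));
    for (size_t i = 0; i < h->size; ++i) { // rehash every existing entry
      for (entry *p = h->buckets[i]; p;) {
        entry *next = p->next;
        size_t b = p->key % new_size;
        p->next = new_buckets[b];
        new_buckets[b] = p;
        p = next;
      }
    }
    free(h->buckets);
    h->buckets = new_buckets;
    h->size = new_size;
    h->nconflicts = 0; // start counting afresh at the new capacity
  }
  size_t b = e->key % h->size;
  if (h->buckets[b]) // landing in a non-empty bucket counts as a conflict
    h->nconflicts++;
  e->next = h->buckets[b];
  h->buckets[b] = e;
}
```

In this sketch the conflict counter is reset after each resize, so the trigger measures collisions under the new capacity rather than carrying over the old ones.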
By using this resizing mechanism we observed roughly the same performance gains as in the paper, but without having to change the hashtable size by hand.
Do we want to give up here? I've seen people with *a lot* of dynamic tasks, so we might want to scale somehow.