Ghashing is probably going to be faster in most cases, even without
precomputed ghashes in object files.
Here is my table of results linking clang.pdb:
threads | GHASH   | NOGHASH |
j1      | 51.031s | 25.141s |
j2      | 31.079s | 22.109s |
j4      | 18.609s | 23.156s |
j8      | 11.938s | 21.984s |
j28     |  8.375s | 18.391s |
This shows that ghashing wins once at least four cores are available. It
may make the link slower when most cores are busy in the middle of a
build, but in that case the linker probably isn't on the critical path
of the build. Incremental build performance is arguably more important
than link performance in a highly contended batch build.
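For what it's worth, the scaling implied by the table can be summarized with a quick back-of-the-envelope calculation (timings copied from the table above):

```python
# Link times from the table above, in seconds, keyed by thread count.
ghash   = {1: 51.031, 2: 31.079, 4: 18.609, 8: 11.938, 28: 8.375}
noghash = {1: 25.141, 2: 22.109, 4: 23.156, 8: 21.984, 28: 18.391}

# Parallel speedup at 28 threads relative to a single thread.
ghash_speedup   = ghash[1] / ghash[28]      # ~6.1x
noghash_speedup = noghash[1] / noghash[28]  # ~1.4x

# Crossover: the smallest thread count where ghash beats noghash.
crossover = min(j for j in ghash if ghash[j] < noghash[j])
print(ghash_speedup, noghash_speedup, crossover)
```

So ghashing parallelizes much better (~6x at 28 threads vs ~1.4x without), and the crossover is at four threads.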
The -time output indicates that ghash computation is the dominant
factor:
  Input File Reading:            924 ms ( 1.8%)
  GC:                            689 ms ( 1.3%)
  ICF:                           527 ms ( 1.0%)
  Code Layout:                   414 ms ( 0.8%)
  Commit Output File:             24 ms ( 0.0%)
  PDB Emission (Cumulative):   49938 ms (94.8%)
    Add Objects:               46783 ms (88.8%)
      Global Type Hashing:     38983 ms (74.0%)
      GHash Type Merging:       5640 ms (10.7%)
      Symbol Merging:           2154 ms ( 4.1%)
    Publics Stream Layout:       188 ms ( 0.4%)
    TPI Stream Layout:            18 ms ( 0.0%)
    Commit to Disk:             2818 ms ( 5.4%)
--------------------------------------------------
        Total Link Time:       52669 ms (100.0%)
We can speed that up with a faster content hash (not SHA1).
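This patch doesn't pick a replacement hash, but as a rough sketch of the idea, Python's hashlib can contrast SHA-1 with a truncated BLAKE2b digest: a type fingerprint only needs to be wide enough to dedupe records, not a full 20-byte SHA-1 (illustration only, not the lld implementation):

```python
import hashlib

# Stand-in bytes for a serialized type record (hypothetical content).
record = b"\x12\x00\x08\x10" * 64

# SHA-1 produces a 20-byte digest.
sha1_digest = hashlib.sha1(record).digest()

# BLAKE2b truncated to 8 bytes: a faster, shorter content hash that is
# still wide enough in practice to fingerprint type records.
fast_digest = hashlib.blake2b(record, digest_size=8).digest()

print(len(sha1_digest), len(fast_digest))
```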
Depends on D102885
I must confess I intuitively prefer -debug:noghash because it's searchable and unique, and it's easy to miss the minus sign in a large block of text. But maybe there are arguments both ways? :)