
Commit c183531

Committed Dec 1, 2016
Parallelize ICF to make LLD's ICF really fast.
ICF is short for Identical Code Folding. It is a size optimization that identifies two or more functions that happen to have the same contents and merges them. It usually reduces output size by a few percent.

ICF is slow because it is a computationally intensive process. I tried to parallelize it before but failed, because I couldn't make the parallelized version produce consistent outputs. Although it didn't create broken executables, every invocation of the linker generated slightly different output, and I couldn't figure out why. I think I now understand what was going on, and I also came up with a simple algorithm to fix it. Hence this patch.

The result is very exciting. Chromium, for example, has 780,662 input sections, of which 20,774 are reducible by ICF. LLD previously took 7.980 seconds for ICF. Now it finishes in 1.065 seconds. As a result, LLD can now link a Chromium binary (output size 1.59 GB) in 10.28 seconds on my machine with ICF enabled. Compared to gold, which takes 40.94 seconds to do the same thing, this is an amazing number.

From here, I'll describe what we are doing for ICF, what the previous problem was, and what I did in this patch.

In ICF, two sections are considered identical if they have the same section flags, section data, and relocations. Relocations are tricky, because two relocations are considered the same if they have the same relocation type and values, and if they point to the same section _in terms of ICF_. Here is an example: if foo and bar defined below are compiled to the same machine instructions, ICF can (and should) merge the two, although their relocations point to each other.

  void foo() { bar(); }
  void bar() { foo(); }

This is not an easy problem to solve. What we are doing in LLD is a sort of coloring algorithm. We repeatedly color non-identical sections in different colors, and sections that have the same color when the algorithm terminates are considered identical. Here are the details:
1. First, we color all sections using the hash values of their section types, section contents, and numbers of relocations. At this point, relocation targets are not taken into account; we just give apparently different sections different colors.

2. Next, for each color C, we visit the sections having color C to see if their relocations are the same. Relocations are considered equal if their targets have the same color. We then recolor sections that have different relocation targets with new colors.

3. If we recolored some section in step 2, relocations that were previously pointing to same-color targets may now be pointing to different colors. Therefore, we repeat step 2 until a convergence is obtained.

Step 2 is a heavy operation. For Chromium, the first iteration of step 2 takes 2.882 seconds, the second iteration takes 1.038 seconds, and in total it needs 23 iterations.

Parallelizing step 1 is easy because we can color each section independently. This patch does that.

Parallelizing step 2 is tricky. We could work on each color independently, but we cannot recolor sections in place, because doing so would break the invariant that two possibly-identical sections must have the same color at any given moment.

Consider sections S1, S2, S3, and S4 in the same color C, where S1 and S2 are identical and S3 and S4 are identical, but S2 and S3 are not. Say thread A is about to recolor S1 and S2 in a new color C'. After thread A recolors S1 in C', but before it recolors S2 in C', another thread B might observe S1 and S2. Thread B would then conclude that S1 and S2 are different, and it would split its own sections into smaller groups wrongly. Over-splitting doesn't produce broken results, but it loses the chance to merge some identical sections. That was the cause of the nondeterminism.

To fix the problem, I gave each section two colors, namely a current color and a next color. At the beginning of each iteration, both colors are the same. Each thread reads from the current color and writes to the next color.
In this way, we prevent threads from reading partial results. After each iteration, we flip current and next. This is a very simple solution, implemented in less than 50 lines of code. I tested this patch with Chromium and confirmed that the parallelized ICF produces output identical to that of the non-parallelized one.

Differential Revision: https://reviews.llvm.org/D27247

llvm-svn: 288373
1 parent 1a154e0 commit c183531

2 files changed: +86, -34 lines

lld/ELF/ICF.cpp (+85, -33)
@@ -59,10 +59,12 @@
 #include "Config.h"
 #include "SymbolTable.h"
 
+#include "lld/Core/Parallel.h"
 #include "llvm/ADT/Hashing.h"
 #include "llvm/Object/ELF.h"
 #include "llvm/Support/ELF.h"
 #include <algorithm>
+#include <mutex>
 
 using namespace lld;
 using namespace lld::elf;
@@ -95,16 +97,16 @@ template <class ELFT> class ICF {
 
   std::vector<InputSection<ELFT> *> Sections;
   std::vector<Range> Ranges;
+  std::mutex Mu;
 
-  // The main loop is repeated until we get a convergence.
-  bool Repeat = false; // If Repeat is true, we need to repeat.
-  int Cnt = 0; // Counter for the main loop.
+  uint32_t NextId = 1;
+  int Cnt = 0;
 };
 }
 
 // Returns a hash value for S. Note that the information about
 // relocation targets is not included in the hash value.
-template <class ELFT> static uint64_t getHash(InputSection<ELFT> *S) {
+template <class ELFT> static uint32_t getHash(InputSection<ELFT> *S) {
   return hash_combine(S->Flags, S->getSize(), S->NumRelocations);
 }
 
@@ -128,33 +130,54 @@ template <class ELFT> void ICF<ELFT>::segregate(Range *R, bool Constant) {
   // issue in practice because the number of the distinct sections in
   // [R.Begin, R.End] is usually very small.
   while (R->End - R->Begin > 1) {
+    size_t Begin = R->Begin;
+    size_t End = R->End;
+
     // Divide range R into two. Let Mid be the start index of the
     // second group.
     auto Bound = std::stable_partition(
-        Sections.begin() + R->Begin + 1, Sections.begin() + R->End,
+        Sections.begin() + Begin + 1, Sections.begin() + End,
         [&](InputSection<ELFT> *S) {
           if (Constant)
-            return equalsConstant(Sections[R->Begin], S);
-          return equalsVariable(Sections[R->Begin], S);
+            return equalsConstant(Sections[Begin], S);
+          return equalsVariable(Sections[Begin], S);
         });
     size_t Mid = Bound - Sections.begin();
 
-    if (Mid == R->End)
+    if (Mid == End)
       return;
 
-    // Now we split [R.Begin, R.End) into [R.Begin, Mid) and [Mid, R.End).
-    if (Mid - R->Begin > 1)
-      Ranges.push_back({R->Begin, Mid});
-    R->Begin = Mid;
-
-    // Update GroupIds for the new group members. We use the index of
-    // the group first member as a group ID because that is unique.
-    for (size_t I = Mid; I < R->End; ++I)
-      Sections[I]->GroupId = Mid;
-
-    // Since we have split a group, we need to repeat the main loop
-    // later to obtain a convergence. Remember that.
-    Repeat = true;
+    // Now we split [Begin, End) into [Begin, Mid) and [Mid, End).
+    uint32_t Id;
+    Range *NewRange;
+    {
+      std::lock_guard<std::mutex> Lock(Mu);
+      Ranges.push_back({Mid, End});
+      NewRange = &Ranges.back();
+      Id = NextId++;
+    }
+    R->End = Mid;
+
+    // Update GroupIds for the new group members.
+    //
+    // Note on GroupId[0] and GroupId[1]: we have two storages for
+    // group IDs. At the beginning of each iteration of the main loop,
+    // both have the same ID. GroupId[0] contains the current ID, and
+    // GroupId[1] contains the next ID which will be used in the next
+    // iteration.
+    //
+    // Recall that other threads may be working on other ranges. They
+    // may be reading group IDs that we are about to update. We cannot
+    // update group IDs in place because it breaks the invariance that
+    // all sections in the same group must have the same ID. In other
+    // words, the following for loop is not an atomic operation, and
+    // that is observable from other threads.
+    //
+    // By writing new IDs to write-only places, we can keep the invariance.
+    for (size_t I = Mid; I < End; ++I)
+      Sections[I]->GroupId[(Cnt + 1) % 2] = Id;
+
+    R = NewRange;
   }
 }
 
@@ -211,7 +234,16 @@ bool ICF<ELFT>::variableEq(const InputSection<ELFT> *A, ArrayRef<RelTy> RelsA,
     auto *Y = dyn_cast<InputSection<ELFT>>(DB->Section);
     if (!X || !Y)
       return false;
-    return X->GroupId != 0 && X->GroupId == Y->GroupId;
+    if (X->GroupId[Cnt % 2] == 0)
+      return false;
+
+    // Performance hack for single-thread. If no other threads are
+    // running, we can safely read next GroupIDs as there is no race
+    // condition. This optimization may reduce the number of
+    // iterations of the main loop because we can see results of the
+    // same iteration.
+    size_t Idx = (Config->Threads ? Cnt : Cnt + 1) % 2;
+    return X->GroupId[Idx] == Y->GroupId[Idx];
   };
 
   return std::equal(RelsA.begin(), RelsA.end(), RelsB.begin(), Eq);
@@ -226,6 +258,14 @@ bool ICF<ELFT>::equalsVariable(const InputSection<ELFT> *A,
   return variableEq(A, A->rels(), B, B->rels());
 }
 
+template <class IterTy, class FuncTy>
+static void foreach(IterTy Begin, IterTy End, FuncTy Fn) {
+  if (Config->Threads)
+    parallel_for_each(Begin, End, Fn);
+  else
+    std::for_each(Begin, End, Fn);
+}
+
 // The main function of ICF.
 template <class ELFT> void ICF<ELFT>::run() {
   // Collect sections to merge.
@@ -239,14 +279,14 @@ template <class ELFT> void ICF<ELFT>::run() {
   // guaranteed) to have the same static contents in terms of ICF.
   for (InputSection<ELFT> *S : Sections)
     // Set MSB to 1 to avoid collisions with non-hash IDs.
-    S->GroupId = getHash(S) | (uint64_t(1) << 63);
+    S->GroupId[0] = S->GroupId[1] = getHash(S) | (1 << 31);
 
   // From now on, sections in Sections are ordered so that sections in
   // the same group are consecutive in the vector.
   std::stable_sort(Sections.begin(), Sections.end(),
                    [](InputSection<ELFT> *A, InputSection<ELFT> *B) {
-                     if (A->GroupId != B->GroupId)
-                       return A->GroupId < B->GroupId;
+                     if (A->GroupId[0] != B->GroupId[0])
+                       return A->GroupId[0] < B->GroupId[0];
                      // Within a group, put the highest alignment
                      // requirement first, so that's the one we'll keep.
                      return B->Alignment < A->Alignment;
@@ -260,25 +300,37 @@ template <class ELFT> void ICF<ELFT>::run() {
   for (size_t I = 0, E = Sections.size(); I < E - 1;) {
     // Let J be the first index whose element has a different ID.
     size_t J = I + 1;
-    while (J < E && Sections[I]->GroupId == Sections[J]->GroupId)
+    while (J < E && Sections[I]->GroupId[0] == Sections[J]->GroupId[0])
       ++J;
     if (J - I > 1)
       Ranges.push_back({I, J});
     I = J;
   }
 
+  // This function copies new GroupIds from former write-only space to
+  // former read-only space, so that we can flip GroupId[0] and GroupId[1].
+  // Note that new GroupIds are always be added to end of Ranges.
+  auto Copy = [&](Range &R) {
+    for (size_t I = R.Begin; I < R.End; ++I)
+      Sections[I]->GroupId[Cnt % 2] = Sections[I]->GroupId[(Cnt + 1) % 2];
+  };
+
   // Compare static contents and assign unique IDs for each static content.
-  std::for_each(Ranges.begin(), Ranges.end(),
-                [&](Range &R) { segregate(&R, true); });
+  auto End = Ranges.end();
+  foreach(Ranges.begin(), End, [&](Range &R) { segregate(&R, true); });
+  foreach(End, Ranges.end(), Copy);
   ++Cnt;
 
   // Split groups by comparing relocations until convergence is obtained.
-  do {
-    Repeat = false;
-    std::for_each(Ranges.begin(), Ranges.end(),
-                  [&](Range &R) { segregate(&R, false); });
+  for (;;) {
+    auto End = Ranges.end();
+    foreach(Ranges.begin(), End, [&](Range &R) { segregate(&R, false); });
+    foreach(End, Ranges.end(), Copy);
     ++Cnt;
-  } while (Repeat);
+
+    if (End == Ranges.end())
+      break;
+  }
 
   log("ICF needed " + Twine(Cnt) + " iterations");
 

lld/ELF/InputSection.h (+1, -1)
@@ -289,7 +289,7 @@ template <class ELFT> class InputSection : public InputSectionBase<ELFT> {
   void relocateNonAlloc(uint8_t *Buf, llvm::ArrayRef<RelTy> Rels);
 
   // Used by ICF.
-  uint64_t GroupId = 0;
+  uint32_t GroupId[2] = {0, 0};
 
   // Called by ICF to merge two input sections.
   void replace(InputSection<ELFT> *Other);