In AArch32 ARM, the PC reads two instructions ahead of the currently executing instruction. This evaluates to 8 in ARM state and 4 in Thumb state. Branch instructions on AArch32 compensate for this by subtracting the PC bias from the addend. For a branch to a symbol this results in an implicit addend of -8 in ARM state and -4 in Thumb state.
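A minimal sketch of the arithmetic described above (the helper names are invented for illustration and are not lld code):

```c++
#include <cassert>
#include <cstdint>

// PC bias: the PC reads two instructions ahead of the executing instruction.
static int64_t pcBias(bool thumb) { return thumb ? 4 : 8; }

// Offset encoded in a PC-relative branch: S + A - P, where the implicit
// addend A already contains -PC-bias for a branch straight to S.
static int64_t encodedOffset(uint64_t s, uint64_t p, bool thumb) {
  int64_t a = -pcBias(thumb); // implicit addend for a branch to S
  return static_cast<int64_t>(s) + a - static_cast<int64_t>(p);
}

int main() {
  uint64_t p = 0x1000, s = 0x2000; // place of the branch and its target
  // At execution time the PC reads P + bias, so the branch lands exactly on S.
  assert(static_cast<int64_t>(p) + pcBias(false) + encodedOffset(s, p, false) ==
         static_cast<int64_t>(s));
}
```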
The existing ARM Target::inBranchRange implementation accounted for this implicit addend within the function itself, meaning that if the caller also took the addend into account it would be double counted. This complicated the interface for all Targets, as callers wanting to account for addends also had to account for the ARM PC-bias.
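To illustrate the double-counting hazard, here is a hypothetical sketch; the real Target::inBranchRange signature in lld differs and the names below are invented:

```c++
#include <cstdint>
#include <cstdlib>

constexpr int64_t armPCBias = 8; // ARM state

// Old style: the hook adds the PC bias back internally.
bool inBranchRangeOld(int64_t src, int64_t dst, int64_t addend, int64_t range) {
  return std::abs(dst + addend + armPCBias - src) < range;
}

// New style: plain distance arithmetic. Whoever supplies the addend is
// responsible for the PC bias, so it cannot be counted twice.
bool inBranchRangeNew(int64_t src, int64_t dst, int64_t addend, int64_t range) {
  return std::abs(dst + addend - src) < range;
}

// Hazard with the old style: if the caller has already folded the bias into
// `addend`, the bias is applied twice and the range check is off by 8 (or 4).
```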
In certain situations, such as https://github.com/ClangBuiltLinux/linux/issues/1305, the two sides of the PC-bias compensation got out of sync. In particular, normalizeExistingThunk() did not put the PC-bias back in, as Arm thunks do not store the addend.
The simplest fix for the problem is to add the PC bias in normalizeExistingThunk when restoring the addend. However, I think it is worth refactoring the Arm inBranchRange implementation so that fewer calls to getPCBias are needed for other Targets. I wasn't able to remove getPCBias completely, but hopefully the Relocations.cpp code is simpler now.
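A rough sketch of the shape of the fix, using invented structs and simplified signatures (the real code lives in Relocations.cpp and differs in detail):

```c++
#include <cstdint>

struct Relocation {
  bool isARMStateBranch; // simplified stand-in for inspecting rel.type
  int64_t addend;
};

// Simplified stand-in for the per-relocation PC-bias query.
static int64_t getPCBias(const Relocation &rel) {
  return rel.isARMStateBranch ? 8 : 4;
}

// When an existing thunk is reused, the relocation is redirected back at its
// original target. Because Arm thunks do not record the original addend, the
// implicit PC-bias addend has to be restored here rather than assumed zero.
static void normalizeExistingThunk(Relocation &rel) {
  // ... redirect rel back at the original target ...
  rel.addend = -getPCBias(rel); // restore the implicit -8 / -4 addend
}
```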
In principle a test could be written to replicate the Linux kernel build failure, but I wasn't able to reproduce it with a small example built up from scratch.