
[SCEV] Compare SCEVs by a complexity rank before lexicographical comparison
Needs Revision · Public

Authored by rtereshin on Aug 20 2018, 4:44 PM.



This patch is planned to include 2 commits:

The first commit:

Revert "[SCEV][NFC] Check NoWrap flags before lexicographical comparison of SCEVs"

This reverts r319889.

Unfortunately, wrapping flags are not a part of SCEV's identity (they
do not participate in computing a hash value or in equality
comparisons) and in fact they can be assigned after the fact w/o
rebuilding a SCEV.

Grep for const_cast's to see quite a few examples, apparently all
for AddRec's at the moment.

So, if 2 expressions get built in 2 slightly different ways, one with
the flags set from the beginning and the other with the flags attached
later on, we may end up with 2 expressions which are exactly the same
but have their operands swapped in one of the commutative N-ary
expressions, and at least one of them will have the "sorted by
complexity" invariant broken.

As a result, 2 identical SCEVs won't compare equal by pointer
comparison, as they are supposed to.

A real-world reproducer is added as a regression test: the issue
described causes 2 identical SCEV expressions to have a different
order of operands and therefore compare not equal, which in turn
prevents LoadStoreVectorizer from vectorizing a pair of consecutive
loads.

On a larger example (the source of the attached test, reduced with
bugpoint) I have seen even weirder behavior: adding a constant to an
existing SCEV changes the order of the existing terms; for instance,
getAddExpr(1, ((A * B) + (C * D))) returns (1 + (C * D) + (A * B)).

The second commit:

[SCEV] Compare SCEVs by a complexity rank before lexicographical comparison

The rank comparison is done after the "important" ordering (by the
type of the SCEV node, and by the dominance relation on Loops for
AddRec's).

The exact order of operands of associative and commutative SCEV
operators (add, mul, smax, and umax) does not matter but needs to be
consistent. Currently it's resolved by a deep lexicographical
comparison, which could be expensive.

This commit adds a small hash value to every SCEV node which is
supposed to have the following properties:

  1. Be equal for SCEVs of the same lexicographical complexity.
  2. Collide for SCEVs of different complexities as rarely as possible.
  3. Increase in its numerical value for small SCEVs with the increase in their complexity in a sensible way, to aid debugging and pretty-printing.

The hash is roughly a polynomial hash with one major difference: it
only adds up the hash values of operands of commutative and
associative operators. This way the complexity of such operators
does not depend on the order of the operands, which is rather
expected, and the hash value does not wrap around as quickly as it
otherwise would, making sure that property (3) holds for larger SCEVs.

CompareSCEVComplexity is also changed so it compares SCEVs by the hash
(called complexity rank) after the "important" checks, and resorts to
a full lexicographical comparison only if the ranks are the same.

Expected increase in memory usage is zero, including on 32-bit hosts.

The performance impact was measured, with the patch applied and
ScalarEvolution::checkValidity disabled, on an x86 machine (100 runs
total, times in ms):

Before | After by W Flags | After by NC Rank

The major goal of this patch is to make sure we don't have a
compile-time regression due to the revert of r319889 ("[SCEV][NFC]
Check NoWrap flags before lexicographical comparison of SCEVs").

Diff Detail


Event Timeline

rtereshin created this revision. Aug 20 2018, 4:44 PM
mkazantsev requested changes to this revision. Aug 27 2018, 2:28 AM

If these are two patches, please put them up for review as 2 different patches and set a dependency between them. You can always have them merged together, but it will be easier to review. Right now it is not clear which changes in the tests are caused by which patch. I believe that the revert part can be merged unconditionally because it fixes a functional problem, even if it costs us some compile time.

As for the algorithm, see the comments inline. I think you should use hash_combine and collect some statistics to be sure that your assumption about rare collisions is right.


I just wonder if 32M different hashes is enough (maybe the statistics collection will help us understand).


You only compare ranks when the SCEVTypes of two SCEVs match (lines 636-639):

// Primarily, sort the SCEVs by their getSCEVType().
unsigned LType = LHS->getSCEVType(), RType = RHS->getSCEVType();
if (LType != RType)
  return (int)LType - (int)RType;

There is no point in multiplying by (SCEVType + 1U).


Per comment above, how is that different from getComplexityRank?


Please arrange includes in lexicographic order.


I guess what you want here is return Acc * X + Op->getComplexityRank(); otherwise it's not even a polynomial hash.


Could you please explain why you multiply by scMulExpr? If you need some constant for the polynomial hash computation, then it's better to declare it separately. It should also be a prime number, to have fewer collisions (I don't even know what number it is now).

Otherwise, you might be interested in using the function hash_combine, which is used all over LLVM. This will spare you the manual polynomial calculations.

I'd also suggest making a separate method for the hash computation, to keep things encapsulated.


Could you please add statistics to check how often we have a hash collision? I.e., when the hashes matched but the further comparison returned something different from 0.


Do we really need this? They have 1 argument; this check will be done, if needed, when we compare the arguments, no?

This revision now requires changes to proceed. Aug 27 2018, 2:28 AM