It was detected that the Reassociate pass could enter an infinite
loop when analysing dead code. Simply skipping the analysis of dead basic
blocks avoids the problem (and, as a side effect,
we avoid spending time optimising dead code).
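For illustration, a rough sketch of that first approach (the dead-block check shown here, testing whether BuildRankMap ever assigned the block a rank, is only an assumed way of detecting unreachable blocks and may not match the original diff exactly):

```cpp
// Rough sketch of the first approach inside ReassociatePass::run. BuildRankMap
// walks the CFG in reverse post-order, so it only ranks blocks reachable from
// the entry block; a block missing from RankMap is therefore dead code.
for (BasicBlock &BB : F) {
  if (!RankMap.count(&BB))
    continue; // unreachable block: analysing it could loop forever, skip it
  // ... existing per-block reassociation code ...
}
```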
Diff Detail
Repository: rL LLVM
Event Timeline
lib/Transforms/Scalar/Reassociate.cpp:2182–2185 (On Diff #76427)
You may want to expand this comment a bit.
lib/Transforms/Scalar/Reassociate.cpp:2182–2185 (On Diff #76427)
There is also a much simpler and more consistent fix: iterate in the same order the RankMap was built. You can share the ReversePostOrder ordering and use it here as well; then both loops do the same thing, so there can't be any mismatches.
lib/Transforms/Scalar/Reassociate.cpp:2182–2185 (On Diff #76427)
OK. I do not know whether the iteration order here in ReassociatePass::run matters, so I did not dare to make such a change; I just wanted to fix the bug in the easiest way without changing anything else. I have also read that the ReversePostOrder iteration is quite expensive and should be avoided, so doing it twice seemed like a bad idea. But maybe your idea was to save the order by adding another data structure? If it does not matter in which order the basic blocks are analysed, then I guess it is possible to iterate over the keys of the RankMap (however, I am not familiar with the DenseMap iterators, so I do not know whether that is cheap or whether the order is predictable when debugging different builds). I still think my solution is a minimal-impact bugfix.
Updated according to comments.
The solution is now based on using the same Reverse Post Order Traversal
both when building the RankMap and when doing optimisations.
This changes the order in which basic blocks are optimised, but as far
as I understand the end result should be the same.
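For reference, a minimal sketch of what the updated ReassociatePass::run could look like; passing the traversal into BuildRankMap and the OptimizeBB helper are assumptions made for this sketch (the real pass inlines the per-block work), so the exact signatures may differ from the diff:

```cpp
#include "llvm/ADT/PostOrderIterator.h"
#include "llvm/Transforms/Scalar/Reassociate.h"

using namespace llvm;

PreservedAnalyses ReassociatePass::run(Function &F, FunctionAnalysisManager &) {
  // Compute the reverse post-order walk once; it only visits blocks that are
  // reachable from the entry block, so dead blocks are never analysed.
  ReversePostOrderTraversal<Function *> RPOT(&F);

  // Build the rank map over exactly the blocks (and in the order) that RPOT
  // yields...
  BuildRankMap(F, RPOT);

  // ...and reuse the same traversal for the optimisation loop, so the two
  // walks can never disagree about which blocks exist or how they are ordered.
  bool MadeChange = false;
  for (BasicBlock *BB : RPOT)
    MadeChange |= OptimizeBB(*BB); // hypothetical per-block helper

  return MadeChange ? PreservedAnalyses::none() : PreservedAnalyses::all();
}
```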