scan-build flags a long chain where, if LoopCost is 0, a division by
zero might occur. Add a quick check after its final assignment, before
use.
Details
- Reviewers
rengolin
Diff Detail
- Repository
- rG LLVM Github Monorepo
- Build Status
Buildable 32124 Build 32123: arc lint + arc unit
Event Timeline
LoopVectorizationCostModel::expectedCost default initializes its return value, an instance of LoopVectorizationCostModel::VectorizationCostTy declared as:
using VectorizationCostTy = std::pair<unsigned, bool>;
so it looks like a zero cost is possible.
Regardless, the runtime is going to be dominated by the division, not the check against zero, so why not just always pay the cost? Especially if it reduces a warning from scan-build?
(I have the same opinion about pointers: "should I check this pointer isn't nullptr before dereferencing it? Yeah, probably (short of it already having been checked and not updated since)." Skipping such a check doesn't add much value unless you have really hot code and strong guarantees, at which point I'd prefer stronger guarantees from the type system.)
That doesn't say much, just that it's initialised as zero. But if expectedCost cannot return zero, then LoopCost cannot be zero (in this context).
Looking at expectedCost, it seems that there are only two situations where the cost can be zero:
- The trivial case where the loop is empty (no BBs, no insts).
- When VF == 1 && blockNeedsPredication(BB), and getReciprocalPredBlockProb() > BlockCost.first at the end (because first is unsigned, the division truncates to zero) for every block, so Cost.first += BlockCost.first remains zero.
The first is not possible, as an empty loop wouldn't get this far. The second looks like a bug to me, so it would warrant an assert at the end of expectedCost to make sure the cost is not zero.
> Regardless, the runtime is going to be dominated by the division, not the check against zero, so why not just always pay the cost? Especially if it reduces a warning from scan-build?
This is not about performance, nor should it be about scan-build. Littering code to avoid static-analysis false positives is a never-ending and fruitless battle.
We've had this conversation on the LLVM list many times over the years, and the general consensus was that safety is paramount; the illusion of safety, not so much. Promoting the habit of always checking for null pointers and zero denominators creates the problem of what to do when they are. Some cases are obvious, others less so. That's why we assert so much, so widely.