This is a continuation of https://reviews.llvm.org/D62563. Essentially, I took the latest change proposed by Artur and tried to address Philip's concern: "Does this mean that evaluateAtIteration may return an incorrect result? If so, I'd very much like to see an assert which trips on that usage."
The original patch introduced a function minIterationWidthForEvaluateAtIteration. That way it exposed the limitation of evaluateAtIteration through the API and made it possible for the caller to choose a sufficient It width (e.g., as I did in the first revision https://reviews.llvm.org/D62563?vs=on&id=201795). What was the reason for removing it?
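To make the limitation concrete, here is a small self-contained sketch (plain C++, not the actual ScalarEvolution code; the function name and shape are made up for illustration): when the iteration operand is narrower than the iteration number requires, the high bits of the iteration count are simply lost, so the evaluated value is wrong.

```cpp
#include <cassert>
#include <cstdint>

// Toy model of evaluating the affine AddRec {Start,+,Step} at iteration
// `It`, where the iteration operand is only ItBits wide. Truncating It
// to ItBits models the information lost when the caller picks too small
// a width. (Illustrative only; not the real evaluateAtIteration.)
uint64_t evalAffine(uint64_t Start, uint64_t Step, uint64_t It,
                    unsigned ItBits) {
  uint64_t Mask = (ItBits >= 64) ? ~0ULL : ((1ULL << ItBits) - 1);
  return Start + (It & Mask) * Step;
}
```

For {0,+,1} evaluated at iteration 300, a 16-bit iteration operand yields the exact value 300, while an 8-bit one truncates 300 to 44 and returns a wrong answer. A minimum-iteration-width query in the API would let the caller detect exactly this case and zero-extend It first.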
Lines 347–349 (On Diff #211728):
Because you removed minIterationWidthForEvaluateAtIteration from the API, the description of evaluateAtIteration is now very vague. I'd prefer that this limitation be stated more precisely.
Lines 1208–1212 (On Diff #211728):
Is there a reason to express this check this way rather than as a comparison of CalculationBits with the bitwidth of It?
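For reference, a self-contained illustration (made-up code, not the in-tree BinomialCoefficient helper) of why the internal calculation width matters here: the division by the factorial in a binomial coefficient does not commute with truncation, so computing modulo 2^W and then dividing gives the wrong residue once the intermediate product overflows.

```cpp
#include <cassert>
#include <cstdint>

// C(n,2) = n*(n-1)/2, the coefficient of the quadratic term of an
// AddRec, computed modulo 2^Bits. If the multiply is done in only
// `Bits` bits, the subsequent division by 2 produces a wrong result
// whenever n*(n-1) overflows those bits.
uint64_t chooseTwoMod(uint64_t N, unsigned Bits) {
  uint64_t Mask = (Bits >= 64) ? ~0ULL : ((1ULL << Bits) - 1);
  uint64_t Product = (N * (N - 1)) & Mask; // multiply in Bits bits
  return (Product / 2) & Mask;             // then divide: unsafe mod 2^Bits
}
```

For N = 30 the exact value is C(30,2) = 435, i.e. 179 modulo 2^8. Doing the multiply in 8 bits first yields 102/2 = 51 instead; with enough calculation bits (here 16) and a final truncation the correct residue 179 comes out. This is the kind of relationship a direct comparison of CalculationBits against the iteration bitwidth would make explicit.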
Line 11670 (On Diff #211728):
I don't think it makes much sense to ask for the loop disposition of scCouldNotCompute. The caller of this API should handle the scCouldNotCompute case instead.
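A sketch of the caller-side pattern being suggested, using toy stand-ins for the SCEV class hierarchy (none of these types are the real LLVM ones; the real code would use isa&lt;SCEVCouldNotCompute&gt;):

```cpp
#include <cassert>

// Toy stand-ins for the SCEV hierarchy; illustration only.
enum class SCEVKind { Constant, AddRec, CouldNotCompute };
struct SCEVToy { SCEVKind Kind; };

enum class LoopDisposition { Invariant, Computable, Variant };

// The disposition query itself assumes it is never handed
// scCouldNotCompute; filtering is the caller's job.
LoopDisposition getLoopDisposition(const SCEVToy &S) {
  assert(S.Kind != SCEVKind::CouldNotCompute &&
         "caller must filter scCouldNotCompute first");
  return LoopDisposition::Invariant; // placeholder answer
}

// Caller-side pattern: bail out before asking the disposition.
bool isUsableInLoop(const SCEVToy &S) {
  if (S.Kind == SCEVKind::CouldNotCompute)
    return false; // handled here, not inside getLoopDisposition
  return getLoopDisposition(S) != LoopDisposition::Variant;
}
```

The point is simply that the bail-out lives in the caller, keeping the disposition query's precondition clean.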
Splitting the existing patch into two parts. The first is the actual fix for the problem. The second is an enhancement to evaluateAtIteration, which will be reviewed and committed separately.