Differential D104679
[WIP][LoopUnrolling] Add a flag to restrict unrolling of large loops
Authored by Allen on Jun 21 2021, 7:01 PM

Details

Disable unrolling of large loops, because computing the loop trip count in the recursive function computeBackedgeTakenCount (and related SCEV routines) can exhaust the stack unless it is raised with ulimit -s unlimited.

Related issue:
https://bugs.llvm.org/show_bug.cgi?id=49783
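For reference, a minimal sketch of the shape of such a guard, assuming a hypothetical flag name, default, and call site (the actual identifiers live in the diff itself):

```cpp
// Hedged sketch only: the flag name, threshold, and helper below are
// hypothetical, not the actual identifiers from this diff.
#include "llvm/Support/CommandLine.h"

using namespace llvm;

// Hypothetical cl::opt: skip unroll analysis for loops above this size.
static cl::opt<unsigned> UnrollMaxAnalyzedLoopSize(
    "unroll-max-analyzed-loop-size", cl::init(10000), cl::Hidden,
    cl::desc("Skip unroll analysis (and the SCEV trip-count query) for "
             "loops whose instruction count exceeds this threshold"));

// The unroll pass would call this before asking SCEV for the
// backedge-taken count; LoopSize is the loop's IR instruction count.
static bool shouldSkipUnrollAnalysis(unsigned LoopSize) {
  return LoopSize > UnrollMaxAnalyzedLoopSize;
}
```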
Comment Actions

Skipping very large loops might make sense as a compile-time optimization (don't bother analyzing loops that definitely can't be unrolled), but I don't think it's a solution to a stack overflow in SCEV, which can happen in ways other than loop unrolling. SCEV uses recursive construction, and stack overflows are a known issue -- it deals with them through various depth cutoffs. Maybe one is missing somewhere?

Comment Actions

Yes, the same issue can also happen if a case unrolls its loops manually; the root cause is that SCEV has too many nodes to process.

Comment Actions

I'm still not fine with this approach: it would really be best to deal with the actual stack overflows.
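As an illustration of the depth cutoffs mentioned earlier in the thread, here is a standalone sketch of the pattern: the recursive builder threads an explicit depth argument and returns a conservative node once it crosses the limit, bounding stack usage at the cost of precision. All names here are hypothetical; this only mirrors the shape of limits like MaxArithDepth rather than quoting real LLVM code.

```cpp
// Standalone sketch of a depth cutoff in recursive expression construction.
// Everything here is hypothetical.
#include <memory>

struct Expr {
  enum Kind { Add, Leaf, Unknown } K;
  std::unique_ptr<Expr> Sub;
  explicit Expr(Kind K) : K(K) {}
};

constexpr unsigned MaxBuildDepth = 64; // hypothetical cutoff

std::unique_ptr<Expr> buildAddChain(unsigned N, unsigned Depth = 0) {
  // Bail out conservatively instead of recursing further: an Unknown node
  // loses precision but keeps stack usage bounded.
  if (Depth > MaxBuildDepth)
    return std::make_unique<Expr>(Expr::Unknown);
  if (N == 0)
    return std::make_unique<Expr>(Expr::Leaf);
  auto E = std::make_unique<Expr>(Expr::Add);
  E->Sub = buildAddChain(N - 1, Depth + 1); // depth grows with each level
  return E;
}
```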
Comment Actions

Agree with others, this seems like the wrong approach. And yes, if we need to add a depth parameter to getSCEV, we should. I really doubt we do, though. We probably *do* need to add one to all of the getXExpr variants, though.

Comment Actions

Yes, in fact we already have many cutoffs, but each one limits only a specific kind of getExpr: for example, MaxArithDepth and MaxAddRecSize are used for getMulExpr, and MaxCastDepth is used for getTruncateExpr. So if I add a new cutoff for getSCEV, there will still be other cases that need one. Do we have a more general idea for handling this?
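One general shape an answer to that last question could take is a single recursion-depth counter shared by every getXExpr-style entry point, managed by an RAII guard. The sketch below is entirely hypothetical; no such API exists in ScalarEvolution, and the toy int-returning functions only stand in for the real expression builders.

```cpp
// Hypothetical sketch: one shared depth counter bounds all mutually
// recursive entry points, so no per-function cutoff is needed.
constexpr unsigned MaxSharedDepth = 128; // hypothetical shared limit

// RAII guard: bumps the shared counter on entry, restores it on exit.
class RecursionGuard {
  unsigned &Depth;

public:
  explicit RecursionGuard(unsigned &Depth) : Depth(Depth) { ++Depth; }
  ~RecursionGuard() { --Depth; }
  bool tooDeep() const { return Depth > MaxSharedDepth; }
};

struct ExprBuilder {
  unsigned Depth = 0;

  // Both entry points share Depth, so A->B->A->... chains are cut off by
  // the same limit; the conservative 0 stands in for SCEV's "unknown".
  int getMulExprSketch(int N) {
    RecursionGuard G(Depth);
    if (G.tooDeep())
      return 0;
    return N <= 1 ? N : 2 * getAddExprSketch(N - 1);
  }

  int getAddExprSketch(int N) {
    RecursionGuard G(Depth);
    if (G.tooDeep())
      return 0;
    return N <= 1 ? N : 1 + getMulExprSketch(N - 1);
  }
};
```

The appeal over per-function cutoffs like MaxArithDepth is that one counter covers mutual recursion across all the variants, so adding a new getXExpr would not require inventing a new limit.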