Event Timeline
lib/Analysis/ScalarEvolutionExpander.cpp:1215

How do you know that the AddRec being expanded here is equivalent to a canonical induction variable? That's only true if we're expanding '{0,+,1}', right? Does calling SE.getSCEVValues(Normalized) also find the original PHI node?
lib/Analysis/ScalarEvolutionExpander.cpp:1215

Thanks, that is a good point! I could either constrain it to just handle this special case or attempt to generalize it. Unfortunately I don't know enough about SCEV yet:
lib/Analysis/ScalarEvolutionExpander.cpp:1215
If SE.getSCEVValues(Normalized) will give you the original PHI for a given SCEV, then that should be fully general (at least for cases where there is an existing PHI for a particular SCEV).

SCEV is an analysis, so this is really a question about what transformations might do. A transformation can change the trip count of a loop (generally by reducing it; peeling, for example).

If you have the SCEV of the original induction variable, OrigIVAddRec, and of the new induction variable, NewIVAddRec, then you can subtract them: IVDiff = SE->getMinusSCEV(NewIVAddRec, OrigIVAddRec). Now, in each iteration, IVDiff is what you need to add to the current induction variable, NewIV, in order to get the value of the existing one, OldIV. So you can construct a DWARF expression that adds IVDiff to NewIV. In some cases, IVDiff will actually just be an SCEVConstant, but it could be a more general expression.

To handle the more-general case, a SCEV -> DWARF utility can be created. One way to do this would be to write an SCEVVisitor that builds up a DWARF expression as it recurses through an SCEV. When you reach an SCEVUnknown, that's just a wrapper around a Value*. And you might be able to short-circuit the evaluation at any SCEV, moreover, if SE.getSCEVValues on that SCEV returns a useful Value*.

I suspect this is most of what you need to do to adjust for what IndVarSimplify does as well (although it also sometimes changes the bitwidth of induction variables, so there's a little more to do there, I suspect, for those cases).
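To make the SCEVConstant case concrete, here is a minimal sketch (purely illustrative, not code from this patch; the helper name, its parameters, and the surrounding DIBuilder are all assumptions):

```cpp
#include "llvm/ADT/SmallVector.h"
#include "llvm/Analysis/ScalarEvolution.h"
#include "llvm/Analysis/ScalarEvolutionExpressions.h"
#include "llvm/BinaryFormat/Dwarf.h"
#include "llvm/IR/DIBuilder.h"
#include "llvm/IR/DebugInfoMetadata.h"
#include "llvm/IR/Instructions.h"

using namespace llvm;

// Sketch: if the old and new induction variables differ by a compile-time
// constant, re-emit the old variable's dbg.value in terms of the new PHI
// plus that constant offset. InsertPt is assumed to be dominated by NewIV.
static void emitIVDebugValue(ScalarEvolution &SE, const SCEV *OrigIVAddRec,
                             const SCEV *NewIVAddRec, PHINode *NewIV,
                             DILocalVariable *Var, const DILocation *Loc,
                             Instruction *InsertPt, DIBuilder &DIB) {
  const SCEV *IVDiff = SE.getMinusSCEV(NewIVAddRec, OrigIVAddRec);
  const auto *ConstDiff = dyn_cast<SCEVConstant>(IVDiff);
  if (!ConstDiff)
    return; // The general case needs the SCEV -> DWARF visitor.

  // In every iteration OldIV == NewIV - IVDiff, so the expression applied
  // to NewIV has to subtract the (signed) constant difference.
  int64_t Ofs = -ConstDiff->getAPInt().getSExtValue();
  SmallVector<uint64_t, 3> Ops;
  if (Ofs >= 0)
    Ops.append({dwarf::DW_OP_plus_uconst, static_cast<uint64_t>(Ofs)});
  else
    Ops.append({dwarf::DW_OP_constu, static_cast<uint64_t>(-Ofs),
                dwarf::DW_OP_minus});
  DIExpression *Expr = DIExpression::get(NewIV->getContext(), Ops);
  DIB.insertDbgValueIntrinsic(NewIV, Var, Expr, Loc, InsertPt);
}
```

The general case would replace the SCEVConstant check with the SCEV -> DWARF visitor described above.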
lib/Analysis/ScalarEvolutionExpander.cpp:1215
Thanks, I have a couple more questions!

SE.getSCEVValues(Normalized) returns a nullptr, unfortunately. Is the problem with L->getCanonicalInductionVariable() that multiple transformations might be applied on top of one another, so that the IV from the Loop object might be out of date?

I guess I'm also slightly confused about what the SCEV of the original induction variable is. Does the SCEVAddRecExpr *Normalized represent the induction variable or the loop itself? Normalized->dump() says: {0,+,1}<nuw><nsw><%for.cond>. Does the notation '{0,+,1}' mean start=0, increment_operator=+, increment=1?

That said, the part with the SCEVVisitor -> DWARF translator sounds fairly straightforward and fun to do!
lib/Analysis/ScalarEvolutionExpander.cpp:1215

Based on this snippet from the top of SCEVExpander::getAddRecExprPHILiterally() it looks like PhiSCEV is the SCEV for the original induction variable:

```cpp
for (auto &I : *L->getHeader()) {
  auto *PN = dyn_cast<PHINode>(&I);
  if (!PN || !SE.isSCEVable(PN->getType()))
    continue;
  const SCEVAddRecExpr *PhiSCEV = dyn_cast<SCEVAddRecExpr>(SE.getSCEV(PN));
  if (!PhiSCEV)
    continue;
```
Here is a slightly more general version that includes most of Hal's feedback. I will create a couple more interesting testcases next to see how to handle cases where the result of subtracting the two recurrences is not a constant.
First, should this logic be here or should it be in IndVarSimplify? I ask because there are transformations that can introduce new AddRecs into an existing loop without replacing the induction variable. Only the transformation using the SCEVExpander knows if it is planning to actually replace the induction variable.
A related question: Is it okay to have multiple values in a function define the same dbg.value's variable? If, for example, we were to end up with multiple induction variables in a loop, and all had a relationship to the source-level induction variable, would it be okay if all these PHIs were used by dbg.value intrinsics to define the same source-level variable? Or would that just confuse things?
In your test case, IndVarSimplify is doing induction-variable widening on an already-canonical induction variable. This is an interesting special case because both the old and new induction variables are canonical (i.e., they both start from zero and increment by one on each iteration). One thing that makes it special is that there already is an old canonical induction variable when the new one is added. In general, for IndVarSimplify, we won't have an existing canonical induction variable (we'll just have some other AddRecs - non-canonical induction variables) and we'll be creating the canonical induction variable.
Generally, for IndVarSimplify, you'll have some AddRec corresponding to some existing PHI - the non-canonical induction variable with the debug info - and IndVarSimplify will introduce a canonical induction variable. The AddRec for the existing PHI {a,+,b} is implicitly defined in terms of the canonical induction variable just created: OldPN = a + b*CanPN. So, the dbg.value defining the variable previously defined by OldPN can be defined now by a DWARF expression a + b*CanPN (plus, perhaps, any truncations/extensions).
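As a concrete illustration of that a + b*CanPN case, the DIExpression over the canonical IV could be assembled roughly like this (a sketch only; the helper name and the restriction to small non-negative a and b are assumptions, not part of any patch):

```cpp
#include "llvm/ADT/SmallVector.h"
#include "llvm/BinaryFormat/Dwarf.h"
#include "llvm/IR/DebugInfoMetadata.h"
#include "llvm/IR/LLVMContext.h"

using namespace llvm;

// Sketch: build a DIExpression that, applied to the canonical IV CanPN,
// recovers the old PHI {a,+,b}, i.e. OldPN = a + b * CanPN. Only small,
// non-negative a and b are handled; anything else would need more DWARF
// ops (or the SCEV visitor discussed earlier in the thread).
static DIExpression *buildAffineIVExpr(LLVMContext &Ctx, uint64_t Start,
                                       uint64_t Step) {
  SmallVector<uint64_t, 5> Ops;
  if (Step != 1)
    Ops.append({dwarf::DW_OP_constu, Step, dwarf::DW_OP_mul});
  if (Start != 0)
    Ops.append({dwarf::DW_OP_plus_uconst, Start});
  return DIExpression::get(Ctx, Ops); // Empty Ops => a direct value.
}
```

The resulting expression would then be attached to a dbg.value whose operand is CanPN (e.g. via DIBuilder::insertDbgValueIntrinsic), replacing the one that used to refer to OldPN.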
Also, in your test case here for induction-variable widening, should the DWARF expression after this have some kind of DIExpression with a truncate in it to reverse the effect of the widening?
lib/Analysis/ScalarEvolutionExpander.cpp:1305

L->getCanonicalInductionVariable() can fail (i.e., return nullptr), so I imagine this guard could/should be: if (PhiSCEV && OldPN) { (the goal of IndVarSimplify is to introduce such a variable, but it might not exist, especially before/during IndVarSimplify running)
Thanks for pointing this out. I shall take a look at IndVarSimplify to see whether this is a more natural place for this.
> A related question: Is it okay to have multiple values in a function define the same dbg.value's variable? If, for example, we were to end up with multiple induction variables in a loop, and all had a relationship to the source-level induction variable, would it be okay if all these PHIs were used by dbg.value intrinsics to define the same source-level variable? Or would that just confuse things?
Multiple dbg.values are totally fine. A dbg.value means that "from here on, the variable's value is SSA-value" and the value is valid until either the register the SSA-value is lowered to is clobbered, the basic block ends, or a new dbg.value describing the same variable is encountered.
> In your test case, IndVarSimplify is doing induction-variable widening on an already-canonical induction variable. This is an interesting special case because both the old and new induction variables are canonical (i.e., they both start from zero and increment by one on each iteration). One thing that makes it special is that there already is an old canonical induction variable when the new one is added. In general, for IndVarSimplify, we won't have an existing canonical induction variable (we'll just have some other AddRecs - non-canonical induction variables) and we'll be creating the canonical induction variable.
Interesting. I will try and include the more general case, too. You probably saw on bugzilla that I was out hunting for other interesting testcases recently.
> Generally, for IndVarSimplify, you'll have some AddRec corresponding to some existing PHI - the non-canonical induction variable with the debug info - and IndVarSimplify will introduce a canonical induction variable. The AddRec for the existing PHI {a,+,b} is implicitly defined in terms of the canonical induction variable just created: OldPN = a + b*CanPN. So, the dbg.value defining the variable previously defined by OldPN can be defined now by a DWARF expression a + b*CanPN (plus, perhaps, any truncations/extensions).
Thanks for the explanation!
> Also, in your test case here for induction-variable widening, should the DWARF expression after this have some kind of DIExpression with a truncate in it to reverse the effect of the widening?
The truncation happens implicitly because the type of the variable and thus its size is also known to the debugger, but if there was a narrowing performed, we would need to insert a sign-extension operation.
I moved the logic over to indvars now, and that seems to fit naturally into WidenIV::createWideIV() with about the same functionality as above.
But I'm having trouble generating more interesting testcases. For example, the top-level comment in IndVarSimplify claims that the pass does the following:

```
//   1. The exit condition for the loop is canonicalized to compare the
//      induction value against the exit value.  This turns loops like:
//        'for (i = 7; i*i < 1000; ++i)' into 'for (i = 0; i != 25; ++i)'
```
But I can't reproduce this:
```c
char *a;
void f(int j) {
  for (int i = 7; i*i < 1000; i+=1)
    a[i] = i;
}
```

```
clang -O3 -c -S -emit-llvm -o - /tmp/loop.c -fno-unroll-loops -mllvm -print-after-all -g 2>&1
```

```
*** IR Dump After Induction Variable Simplification ***
for.body:                                         ; preds = %entry, %for.body
  %indvars.iv = phi i64 [ 7, %entry ], [ %indvars.iv.next, %for.body ]
```
Has the functionality bitrotted, or is the comment out of date, or is there now a different transformation that would rewrite a loop like the one in that comment? Or can I change my testcase somehow?
I then put an assert that IVDiff (the diff between the SCEVs for the old and new PHI in WidenIV::createWideIV()) is not a constant zero, and check-llvm-transform still passes. That's not the end of the world; handling the constant-zero case would already be a nice improvement, but I'm wondering if I'm missing the big picture here.
Meanwhile I uploaded the simplified patch that just handles the common case in indvars.
It is out of date. @sanjoy may know more. Thinking about it, I do recall that we might no longer do these replacements in cases where SCEV can easily construct the induction-variable recurrence.
> I then put an assert that IVDiff (the diff between the SCEVs for the old and new PHI in WidenIV::createWideIV()) is not a constant zero, and check-llvm-transform still passes. That's not the end of the world; handling the constant-zero case would already be a nice improvement, but I'm wondering if I'm missing the big picture here.
Hrmm. I'm missing something too. We also don't seem to be collapsing logically-redundant phis, as in:
```c
void foo(int *a, int *b, int n) {
  for (int i = 0, j = 1; i < n; ++i, ++j)
    a[i] = b[j] + 1;
}
```
this loop still ends up with two phis after optimization. I thought IndVarSimplify took care of this, but evidently it doesn't right now.
For now, given the apparent current state of things, maybe we should just handle the zero-delta case, and then come back to this later as necessary.
Thanks for confirming! Also, unfortunate :-(
Do you have any feedback on the zero-delta-case patch as it stands that I should address before landing this?
I don't think that we need to check the difference at all in this case. It will always be zero because the new SCEV is generated from the old one:
```cpp
// Widen the induction variable expression.
const SCEV *WideIVExpr = getExtendKind(OrigPhi) == SignExtended
                             ? SE->getSignExtendExpr(AddRec, WideType)
                             : SE->getZeroExtendExpr(AddRec, WideType);
```
so I think that you can make this debug-value propagation logic simpler?
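Something along these lines, perhaps (purely illustrative, not the code under review; the helper name and the choice to insert each copy right after the original dbg.value are assumptions, and findDbgValues lives in llvm/IR/DebugInfo.h):

```cpp
#include "llvm/ADT/SmallVector.h"
#include "llvm/IR/DIBuilder.h"
#include "llvm/IR/DebugInfo.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/IntrinsicInst.h"

using namespace llvm;

// Sketch of the zero-delta case: every dbg.value that described the narrow
// PHI is simply repeated for the wide PHI, reusing the same variable and
// expression (the debugger truncates to the variable's type, as discussed
// above). Each copy is inserted right after the dbg.value it mirrors.
static void copyIVDebugUsers(PHINode *OrigPhi, PHINode *WidePhi,
                             DIBuilder &DIB) {
  SmallVector<DbgValueInst *, 4> DbgValues;
  findDbgValues(DbgValues, OrigPhi);
  for (DbgValueInst *DVI : DbgValues)
    DIB.insertDbgValueIntrinsic(WidePhi, DVI->getVariable(),
                                DVI->getExpression(),
                                DVI->getDebugLoc().get(), DVI->getNextNode());
}
```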
> How do you know that the AddRec being expanded here is equivalent to a canonical induction variable? That's only true if we're expanding '{0,+,1}', right?
> Does calling SE.getSCEVValues(Normalized) also find the original PHI node?