
[InstCombine] do not insert nonnull assumption for undef
Needs Revision · Public

Authored by inglorion on Tue, Nov 26, 3:56 PM.

Details

Summary

D69477 makes us keep non-null assumptions at the original call site
when sinking calls. It also does this when the non-null argument is
undef, which is incorrect - it can lead us to assume some code invokes
undefined behavior when it actually doesn't. This change avoids
creating non-null assumptions for undef.

Fixes PR44154.
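
For illustration, here is a minimal IR sketch of the failure mode this change avoids (the function and value names are invented, not taken from PR44154):

```llvm
; Hypothetical example; @callee, @caller, and the surrounding code are invented.
declare void @callee(i8* nonnull)

define void @caller() {
  ; After deadargelim has replaced a dead argument with undef and
  ; instcombine has sunk the call, D69477 would leave this behind at
  ; the original call site:
  ;   icmp ne i8* undef, null  -->  folds to i1 undef
  call void @llvm.assume(i1 undef)
  ; simplifycfg may now treat this block as unreachable, even though
  ; the original program never passed a null pointer here.
  ret void
}
```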

Event Timeline

inglorion created this revision.Tue, Nov 26, 3:56 PM
Herald added a project: Restricted Project. Tue, Nov 26, 3:56 PM
xbolva00 added inline comments.
llvm/lib/Transforms/InstCombine/InstructionCombining.cpp
3196–3197

Move before the if above?

inglorion updated this revision to Diff 231153.Tue, Nov 26, 4:50 PM

moved undef check above other if statement

inglorion marked an inline comment as done.Tue, Nov 26, 4:51 PM
efriedma added inline comments.
llvm/test/Transforms/InstCombine/assume-replacing-call.ll
199

Isn't this already UB? From LangRef: "if the parameter [...] is null, the behavior is undefined".

inglorion added inline comments.Tue, Nov 26, 5:36 PM
llvm/test/Transforms/InstCombine/assume-replacing-call.ll
199

That's basically how the optimization works. Behavior is undefined if the parameter is null; therefore we can assume it is not null, which unlocks optimizations.

The problem is that we would also do this if the parameter is undef. It may not have been undef in the original program. In the example I was looking at, deadargelim replaced an unused (but declared non-null) parameter with undef, instcombine sank the call and generated a non-null assumption for the parameter (which became call void @llvm.assume(i1 undef)), and simplifycfg then eliminated the entire branch that code was on, leading to different behavior from the original program.

We could presumably fix this in other ways, but my thinking is that not generating assumptions for undef values is a sensible thing to do.

jdoerfert requested changes to this revision.Tue, Nov 26, 7:55 PM

That's basically how the optimization works. Behavior is undefined if the parameter is null; therefore we can assume it is not null, which unlocks optimizations.

And I argue this is the problem, see below.

We could presumably fix this in other ways, but my thinking is that not generating assumptions for undef values is a sensible thing to do.

I doubt this is a "fix"; it simply hides the problem one level deeper. Think of a non-undef value for which this happens, but after one iteration of inlining, constant propagation, etc., it is replaced by undef and thus ends up in the assumption. I guess you could write a test for this situation rather easily.


I think the "solution" here is to weaken the guarantees of nonnull. That is, if the value is dead, it might be null after all. So nonnull should mean what nonnull means right now for any use of the value that affects a side effect (incl. control flow). Other attributes will need similar adjustment. This is in line with out-of-bounds inbounds GEPs and similar situations.

FWIW, I had similar thoughts in the context of the attributes the Attributor deduces. They can sometimes be "contradicting" or "odd" but only for dead values.


My recommendation: Change the lang-ref and make the code that produces the assume only do it if we know the pointer is "known to be live".


The only alternative I see is to make passes that replace dead values aware of attributes, e.g., let them clear them out. Though, I fear that also hides the problems only a few indirections deeper.

This revision now requires changes to proceed.Tue, Nov 26, 7:55 PM

Responding to @jdoerfert's comments:

I think the "solution" here is to weaken the guarantees of nonnull. That is, if the value is dead, it might be null after all.

My recommendation: Change the lang-ref and make the code that produces the assume only do it if we know the pointer is "known to be live".

The only alternative I see is to make passes that replace dead values aware of attributes, e.g., let them clear them out. Though, I fear that also hides the problems only a few indirections deeper.

I thought along those same lines. Since we replace values we have deduced to be unused with undef, my change is to only emit the assume for values that are not undef.

I doubt this is a "fix"; it simply hides the problem one level deeper. Think of a non-undef value for which this happens, but after one iteration of inlining, constant propagation, etc., it is replaced by undef and thus ends up in the assumption.

My take on this is that the transformation that generates a non-null assumption for a non-undef value is correct: if the value is live, it must be nonnull. As for later passes replacing the value with undef, my reading of the LangRef is that this would be incorrect. Specifically,
https://llvm.org/docs/LangRef.html#llvm-assume-intrinsic states "Note that the optimizer might limit the transformations performed on values used by the llvm.assume intrinsic in order to preserve the instructions only used to form the intrinsic’s input argument." This suggests to me that the intent is for values used by llvm.assume to be preserved. In other words, you can't just replace those with undef. So I don't think generating llvm.assumes for non-undef values creates a problem; simply not generating assumes for undefs should be enough.

rnk added inline comments.Wed, Nov 27, 11:19 AM
llvm/test/Transforms/InstCombine/assume-replacing-call.ll
199

How was DAE able to replace an argument with undef which was fed to strlen? Is that the real bug?

I agree with Eli, though: with the current LangRef rules, it's valid to replace this call with unreachable as is. Undef can be anything, we can choose it to be zero, and that would become unreachable. If we actually materialized a zero along this codepath, it would crash strlen if it were executed in the original program.

I think the "solution" here is to weaken the guarantees of nonnull. That is, if the value is dead, it might be null after all.

LLVM IR has no notion of "dead". We could say that if the argument would be null, the behavior is defined, but the value is poison in the callee. (Or something like that; not sure that phrasing properly captures the semantics.)

The only alternative I see is to make passes that replace dead values aware of attributes, e.g., let them clear them out. Though, I fear that also hides the problems only a few indirections deeper.

I don't think there's some deep class of problems hidden here beyond DeadArgElim itself. The current rule is clear. I would rather not change LangRef unless there's some interesting optimization that's blocked by the current semantics.

Tyker resigned from this revision.Wed, Nov 27, 12:14 PM

I don't believe I am qualified to review this, but I agree with rnk.

How was DAE able to replace an argument with undef which was fed to strlen? Is that the real bug?

I doubt this is a "fix"; it simply hides the problem one level deeper. Think of a non-undef value for which this happens, but after one iteration of inlining, constant propagation, etc., it is replaced by undef and thus ends up in the assumption.

My take on this is that the transformation that generates a non-null assumption for a non-undef value is correct: if the value is live, it must be nonnull. As for later passes replacing the value with undef, my reading of the LangRef is that this would be incorrect. Specifically,
https://llvm.org/docs/LangRef.html#llvm-assume-intrinsic states "Note that the optimizer might limit the transformations performed on values used by the llvm.assume intrinsic in order to preserve the instructions only used to form the intrinsic’s input argument." This suggests to me that the intent is for values used by llvm.assume to be preserved. In other words, you can't just replace those with undef. So I don't think generating llvm.assumes for non-undef values creates a problem; simply not generating assumes for undefs should be enough.

I disagree with your interpretation of the LangRef here. Also, even assuming it were correct, I'd argue that we are not, and cannot be, guaranteeing this anyway. There are way too many places where we perform folding of sorts that could move an existing undef to a new position.
My reading is: if you have an llvm.assume(%x), we might not perform a transformation involving %x, e.g. replacing %x, because of the assume. There is no guarantee whatsoever.

I think the "solution" here is to weaken the guarantees of nonnull. That is, if the value is dead, it might be null after all.

LLVM IR has no notion of "dead". We could say that if the argument would be null, the behavior is defined, but the value is poison in the callee. (Or something like that; not sure that phrasing properly captures the semantics.)

On first thought that sounds reasonable to me.

The only alternative I see is to make passes that replace dead values aware of attributes, e.g., let them clear them out. Though, I fear that also hides the problems only a few indirections deeper.

I don't think there's some deep class of problems hidden here beyond DeadArgElim itself. The current rule is clear. I would rather not change LangRef unless there's some interesting optimization that's blocked by the current semantics.

Every time we mix unused values with annotated information/attributes it is problematic. Take a function that returns a value but we know the return value is never inspected at any call site. We replace it with undef regardless of what the function and call site return arguments are.
Now we are looking for a mechanism to annotate information/attributes in the IR even if they are not arguments (see https://reviews.llvm.org/D37579#1761151) which will expose this mismatch not only for arguments and return values.

llvm/test/Transforms/InstCombine/assume-replacing-call.ll
199

One can build an example where an unused argument is passed by reference, for example (https://godbolt.org/z/DPjxuF).
And it is also not only nonnull that is affected (as I mentioned before).

I still believe we have to have "liveness" baked into attribute semantics or we will play whack-a-mole with optimizations that fold a new or existing undef into a position "it shouldn't be".

How was DAE able to replace an argument with undef which was fed to strlen? Is that the real bug?

The bug I'm working on is not in the strlen example the original optimization was written for. It is in this fragment, simplified from libc++'s locale __widen_and_group_float:

template <>
void
__num_put<char>::__widen_and_group_float(char* __nb, char* __np, char* __ne,
                                         char* __ob, char*& __op, char*& __oe,
                                         const locale& __loc)
{
    const ctype<char>& __ct = use_facet<ctype<char>>(__loc);
    if (__nb < __ne)                                                                                                                        
    {
        if (*__nb == '.')
        {
            *__ob++ = '.';
        }
        else
        {
            *__ob++ = __ct.widen(*__nb);
        }
    }
    __op = __ob;
}

In the IR, the call to widen is in a basic block before the assignment to *ob (I suppose because the declaration isn't nounwind). Its first parameter, ct, is nonnull. When compiling with -fwhole-program-vtables, we are able to determine widen's properties more precisely. We discover that it does not in fact use ct, so deadargelim replaces that with undef. We discover that we can sink it to the BB that the assignment to *ob is in, so instcombine does that, and emits @llvm.assume(i1 undef) in place of the original call site. Then simplifycfg notices that this branch of the if statement invokes undefined behavior, and so the other branch must be taken, and the code simplifies to something like

template <>
void
__num_put<char>::__widen_and_group_float(char* __nb, char* __np, char* __ne,
                                         char* __ob, char*& __op, char*& __oe,
                                         const locale& __loc)
{
  if (__nb < __ne)
  {
    *__ob++ = '.';
  }
  __op = __ob;
}

The important difference from the strlen example is that widen in this case really does not use the nonnull parameter in question.
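
The chain of events described above can be sketched at the IR level (all names here are hypothetical simplifications, not the actual libc++ IR):

```llvm
; 1. deadargelim: widen() never uses its nonnull ctype argument,
;    so the call becomes something like
;      %w = call i8 @widen(%ctype* undef, i8 %c)
; 2. instcombine sinks the call into the block that stores to *__ob
;    and, per D69477, keeps the nonnull assumption at the old site;
;    icmp ne %ctype* undef, null folds to i1 undef, leaving:
      call void @llvm.assume(i1 undef)
; 3. simplifycfg concludes this branch invokes UB, so the other branch
;    of the if must be taken, collapsing the '.'-vs-widen() choice.
```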

The only alternative I see is to make passes that replace dead values aware of attributes, e.g., let them clear them out. Though, I fear that also hides the problems only a few indirections deeper.

I don't think there's some deep class of problems hidden here beyond DeadArgElim itself. The current rule is clear. I would rather not change LangRef unless there's some interesting optimization that's blocked by the current semantics.

Every time we mix unused values with annotated information/attributes it is problematic. Take a function that returns a value but we know the return value is never inspected at any call site. We replace it with undef regardless of what the function and call site return arguments are.
Now we are looking for a mechanism to annotate information/attributes in the IR even if they are not arguments (see https://reviews.llvm.org/D37579#1761151) which will expose this mismatch not only for arguments and return values.

For most optimizations, a use in a "call" or "ret" instruction is a real use, enough to force optimizations to assume that other code depends on the value. There's only a narrow class of IPO optimizations (I think just DeadArgElim and IPSCCP) which prove an argument or return value is "dead" despite having a use in a call or ret instruction.

There's a corresponding rule for "!nonnull", in LangRef: "If the value is null at runtime, the behavior is undefined." Code that hoists load instructions will erase the metadata. This works correctly, as far as I know.

More generally, there are many places in IR that have UB without an associated side-effect. For example, "sdiv" is UB if you divide by zero, or the result overflows. I don't think there's anything fundamentally wrong with that.

A few more thoughts:

  1. The poison solution mentioned by @efriedma should "fix" our problem with the nonnull annotated memcpy/set implementations. (I don't have the reference handy but there is a long history here)
  2. Even if optimizations would clear the attributes when they replace something with undef, I doubt that will solve the problem. The attribute might have been propagated already, causing the mismatch to happen later.
  3. When we want to do context and/or flow-sensitive deduction/reasoning (which should happen soon!) the current wording will not work (see https://godbolt.org/z/HBRGhb) while the poison solution would work fine.
  4. On the email list someone (maybe @simoll or @nlopes) proposed to introduce a "bottom" symbol to complement undef.

The only alternative I see is to make passes that replace dead values aware of attributes, e.g., let them clear them out. Though, I fear that also hides the problems only a few indirections deeper.

I don't think there's some deep class of problems hidden here beyond DeadArgElim itself. The current rule is clear. I would rather not change LangRef unless there's some interesting optimization that's blocked by the current semantics.

Every time we mix unused values with annotated information/attributes it is problematic. Take a function that returns a value but we know the return value is never inspected at any call site. We replace it with undef regardless of what the function and call site return arguments are.
Now we are looking for a mechanism to annotate information/attributes in the IR even if they are not arguments (see https://reviews.llvm.org/D37579#1761151) which will expose this mismatch not only for arguments and return values.

For most optimizations, a use in a "call" or "ret" instruction is a real use, enough to force optimizations to assume that other code depends on the value. There's only a narrow class of IPO optimizations (I think just DeadArgElim and IPSCCP) which prove an argument or return value is "dead" despite having a use in a call or ret instruction.

There's a corresponding rule for "!nonnull", in LangRef: "If the value is null at runtime, the behavior is undefined." Code that hoists load instructions will erase the metadata. This works correctly, as far as I know.

That we are consistent in making everything immediate UB does not mean that this is the right choice. I would expect there is a transformation in the code base that does not erase the metadata, or a bug report that exposed such behavior (because I would not have known to do it right now). This is again one of those situations where we force people to always be aware of implicit interactions in order to produce correct transformations. Having it result in poison would help us here and in the situations I mentioned in my last comment. Could you summarize the drawbacks? (I guess we should move this to the list.)

More generally, there are many places in IR that have UB without an associated side-effect. For example, "sdiv" is UB if you divide by zero, or the result overflows. I don't think there's anything fundamentally wrong with that.

I agree with the last part. Having UB without a side-effect is fine. Having UB for these argument attributes (and similar situations) is what I think we need to fix. I want us to have more fine grained control here. If you want to model UB at a location, we can still introduce an llvm.assume(false).

rnk added a comment.Wed, Nov 27, 3:16 PM

The only alternative I see is to make passes that replace dead values aware of attributes, e.g., let them clear them out. Though, I fear that also hides the problems only a few indirections deeper.

I guess this, then, would be the "simple" fix that will allow us to move forward without making LangRef changes. DAE is already pretty attribute aware. I think it only surveys functions with local linkage, so stripping nonnull from known-dead arguments would be pretty safe.

Another fix would be to treat arguments marked nonnull as live, but that would effectively make all C++ this pointers immune to DAE, since they are all nonnull.

The only alternative I see is to make passes that replace dead values aware of attributes, e.g., let them clear them out. Though, I fear that also hides the problems only a few indirections deeper.

I don't think there's some deep class of problems hidden here beyond DeadArgElim itself. The current rule is clear. I would rather not change LangRef unless there's some interesting optimization that's blocked by the current semantics.

Every time we mix unused values with annotated information/attributes it is problematic. Take a function that returns a value but we know the return value is never inspected at any call site. We replace it with undef regardless of what the function and call site return arguments are.
Now we are looking for a mechanism to annotate information/attributes in the IR even if they are not arguments (see https://reviews.llvm.org/D37579#1761151) which will expose this mismatch not only for arguments and return values.

For most optimizations, a use in a "call" or "ret" instruction is a real use, enough to force optimizations to assume that other code depends on the value. There's only a narrow class of IPO optimizations (I think just DeadArgElim and IPSCCP) which prove an argument or return value is "dead" despite having a use in a call or ret instruction.

There's a corresponding rule for "!nonnull", in LangRef: "If the value is null at runtime, the behavior is undefined." Code that hoists load instructions will erase the metadata. This works correctly, as far as I know.

That we are consistent in making everything immediate UB does not mean that this is the right choice. I would expect there is a transformation in the code base that does not erase the metadata, or a bug report that exposed such behavior (because I would not have known to do it right now). This is again one of those situations where we force people to always be aware of implicit interactions in order to produce correct transformations. Having it result in poison would help us here and in the situations I mentioned in my last comment. Could you summarize the drawbacks? (I guess we should move this to the list.)

  1. Source languages generally don't have any notion of poison. If we relax a marking from the source code to mean poison, we're throwing away information.
  2. For a long time, we didn't clearly specify the behavior at all in a lot of cases, which was a complete mess. In the process of clarifying LangRef, I chose instant UB for everything, and nobody presented an argument against that at the time. See D49041 and D47854.

If there are compelling reasons to produce poison for some specific attributes/metadata, or add equivalent attributes/metadata that produce poison instead of instant UB, we can consider that, sure.

In D70749#1762393, @rnk wrote:

The only alternative I see is to make passes that replace dead values aware of attributes, e.g., let them clear them out. Though, I fear that also hides the problems only a few indirections deeper.

I guess this, then, would be the "simple" fix that will allow us to move forward without making LangRef changes. DAE is already pretty attribute aware. I think it only surveys functions with local linkage, so stripping nonnull from known-dead arguments would be pretty safe.

I'm not certain but I couldn't come up with a reason why not. Though we will need to modify more than DAE.

Another fix would be to treat arguments marked nonnull as live, but that would effectively make all C++ this pointers immune to DAE, since they are all nonnull.

This might work, but it's also not only nonnull, I guess most argument attributes are affected.

The only alternative I see is to make passes that replace dead values aware of attributes, e.g., let them clear them out. Though, I fear that also hides the problems only a few indirections deeper.

I don't think there's some deep class of problems hidden here beyond DeadArgElim itself. The current rule is clear. I would rather not change LangRef unless there's some interesting optimization that's blocked by the current semantics.

Every time we mix unused values with annotated information/attributes it is problematic. Take a function that returns a value but we know the return value is never inspected at any call site. We replace it with undef regardless of what the function and call site return arguments are.
Now we are looking for a mechanism to annotate information/attributes in the IR even if they are not arguments (see https://reviews.llvm.org/D37579#1761151) which will expose this mismatch not only for arguments and return values.

For most optimizations, a use in a "call" or "ret" instruction is a real use, enough to force optimizations to assume that other code depends on the value. There's only a narrow class of IPO optimizations (I think just DeadArgElim and IPSCCP) which prove an argument or return value is "dead" despite having a use in a call or ret instruction.

There's a corresponding rule for "!nonnull", in LangRef: "If the value is null at runtime, the behavior is undefined." Code that hoists load instructions will erase the metadata. This works correctly, as far as I know.

That we are consistent in making everything immediate UB does not mean that this is the right choice. I would expect there is a transformation in the code base that does not erase the metadata, or a bug report that exposed such behavior (because I would not have known to do it right now). This is again one of those situations where we force people to always be aware of implicit interactions in order to produce correct transformations. Having it result in poison would help us here and in the situations I mentioned in my last comment. Could you summarize the drawbacks? (I guess we should move this to the list.)

  1. Source languages generally don't have any notion of poison. If we relax a marking from the source code to mean poison, we're throwing away information.

I assume that by source language markings you mean __attribute__((nonnull)), correct? Arguably, people struggle with the source language semantics and assume poison semantics often enough that we actually have problems sometimes.
Take the memcpy/memset example again. There were nonnull markings we actually had to ignore during propagation (and I think later strip) because people passed null and assumed something like poison (=no access no harm).

The question seems to be:
Do we think we get enough optimizations from UB based on annotations of dead values to be worth having the discussion with users about these cases?

I am actively working on improvements to exploit UB (with the Attributor) but I fear once we replace the UB call in

static void foo(int &) {}
void bar() { foo(*nullptr); }

with an unreachable, we will get some angry emails.

  2. For a long time, we didn't clearly specify the behavior at all in a lot of cases, which was a complete mess. In the process of clarifying LangRef, I chose instant UB for everything, and nobody presented an argument against that at the time. See D49041 and D47854.

    If there are compelling reasons to produce poison for some specific attributes/metadata, or add equivalent attributes/metadata that produce poison instead of instant UB, we can consider that, sure.

I am trying to give you reasons here, right? I argued the problem is information written down on things we later declare dead. As I said before, this already affects arguments and return values (both at the call site and at the function). If we go ahead and write down more information (=attributes) for floating values, the problem will appear there as well.

Tyker added a subscriber: Tyker.Fri, Nov 29, 5:47 AM

I think it's clear that dead arg elimination is incorrect in replacing a valid pointer with undef in an argument attributed with a non-null tag. Changing the semantics of non-respected tags from UB to poison doesn't help either.
The problem with dropping attributes is that at a given function call site, the attributes to be considered are the union of the attributes on the call and on the callee declaration. We can't reliably drop attributes from the callee since some linking later may add them back.

Some random ideas:

  1. Introduce either negative attributes (so you can have A and !A), with !A meaning exclude A from the union of attributes
  2. Introduce an attribute to stop the union with callee attributes (added when erasing attributes)
  3. Introduce a placeholder value to be used in function calls that is not materialized and ignores all attributes.
  Option 3) sounds too specific for DeadArgElim, while 1) and 2) are more generic. Not sure if the generality of 1) is needed.

It seems we have a variety of opinions on the right way to avoid the problem here, and many of them seem to require a fair bit of work. I'm all for getting to better semantics that avoid the problem we're seeing, but agreeing on a course of action, making the changes, testing it, and reviewing it seems like it would take a while. Meanwhile, we have a compiler that miscompiles cout << -10.0; Any chance we can get this change in to stop that from happening while we work on a better fix?

Or just avoid sinking for these cases and do not generate llvm.assume calls.

Or just avoid sinking for these cases and do not generate llvm.assume calls.

We can do that, too. But as you pointed out on D69477

I benchmarked spec2006int with -instcombine-code-sinking=false and I observed major regressions.

So I prefer to leave sinking alone and just not generate the nonnull assumes for undefs. That gets the nonnull undef case back to where it was before D69477, while leaving everything else alone.

xbolva00 added a comment. (Edited) Mon, Dec 2, 2:49 PM

But the -instcombine-code-sinking=false experiment was quite radical. We could stop sinking just for this exact case.

But anyway, if this fix fixes the issue, ok.

rnk added a comment.Mon, Dec 2, 2:54 PM

I think at this point a straight revert of D69477 might be preferable.

I think it's clear that dead arg elimination is incorrect in replacing a valid pointer with undef in an argument attributed with a non-null tag. Changing the semantics of non-respected tags from UB to poison doesn't help either.
The problem with dropping attributes is that at a given function call site, the attributes to be considered are the union of the attributes on the call and on the callee declaration. We can't reliably drop attributes from the callee since some linking later may add them back.

Some random ideas:

    1. Introduce either negative attributes (so you can have A and !A), with !A meaning exclude A from the union of attributes
    2. Introduce an attribute to stop the union with callee attributes (added when erasing attributes)
    3. Introduce a placeholder value to be used in function calls that is not materialized and ignores all attributes.
  Option 3) sounds too specific for DeadArgElim, while 1) and 2) are more generic. Not sure if the generality of 1) is needed.

You said "Changing the semantics on non-respecting the tags from UB to poison doesn't help either.", could you elaborate why?
If there are no uses of a violating instantiation (undef/null passed to a nonnull argument), we would not get UB but an unused poison value.
It also gives us a way out of the memset/cpy problem.

I actually like 3) of the options you present the best. As @simoll noted in another thread, we overload undef to mean multiple things, e.g., both top and bottom. If we made them distinct symbols, this problem (and others) should be avoidable.


@rnk @inglorion I'm in favor of a revert (with some additional tests and FIXMES) to buy us time.

In D70749#1766088, @rnk wrote:

I think at this point a straight revert of D69477 might be preferable.

Agree.

The problem with dropping attributes is that a given function call site, the attributes to be considered are the union of the attributes in the function call and in the callee declaration. We can't rely drop attributes from the callee since some linking later may add them back.

Attributes on a callsite only apply to that call. For nonnull in particular, that means it doesn't matter what attributes the callee has, as long as the caller guarantees the nonnull-ness. Attributes don't propagate from the callsite to the function; they only propagate the other way.

I'm pretty sure we ignore the attributes on declarations when we link a module that declares a function with a module that defines that function. We could say that the attributes on the definition always win, even in other contexts like ThinLTO.

Given this, it's possible to have an overall model where nonnull is strong, and it's legal to strip it from definitions, without adding any new IR constructs. This treatment of declarations is potentially confusing, though...

nlopes added a comment.Wed, Dec 4, 1:37 AM

You said "Changing the semantics on non-respecting the tags from UB to poison doesn't help either.", could you elaborate why?
If there are no uses of a violating instantiation (undef/null passed to a nonnull argument), we would not get UB but an unused poison value.

The issue I had in mind was a function call where the return value is actually used. e.g.:

x = foo(nonnull ptr)
->
x = foo(nonnull undef)

With the current semantics, we introduce UB. If changed to "the function returns poison", then x would change from a regular value to poison, also incorrect.


Just to add to the attribute stripping issue, see this example:

define @foo(nonnull)

x = @foo(nonnull %ptr)
->
x = @foo(%ptr)

Dropping nonnull from the call site doesn't help because we inherit it from the callee definition. We can only go this way if we guarantee we have a strong way of stripping attributes from definitions (as @efriedma mentioned). I don't think that's guaranteed in linking ATM, but if so, problem solved :)

hans added a subscriber: hans.Wed, Dec 4, 4:36 AM
In D70749#1766088, @rnk wrote:

I think at this point a straight revert of D69477 might be preferable.

Agree.

+1, having this unresolved in-tree is scary.