This is an archive of the discontinued LLVM Phabricator instance.

[FPEnv] Add fptosi and fptoui constrained intrinsics
ClosedPublic

Authored by kpn on Jun 25 2019, 11:56 AM.

Details

Summary

Constrained floating point intrinsics for FP to signed and unsigned integers are still needed.

Quoting from D32319: "The purpose of the constrained intrinsics is to force the optimizer to respect the restrictions that will be necessary to support things like the STDC FENV_ACCESS ON pragma without interfering with optimizations when these restrictions are not needed."

This diff replaces D43515. That ticket has too much history to be easily read.

Diff Detail

Event Timeline

kpn created this revision.Jun 25 2019, 11:56 AM
Herald added a project: Restricted Project. Jun 25 2019, 11:56 AM

What happens if the input float is out of range? fptosi/fptoui instructions produce poison; not sure if you want that here.

kpn added a comment.Jun 26 2019, 9:50 AM

What happens if the input float is out of range? fptosi/fptoui instructions produce poison; not sure if you want that here.

I'm not well versed in poison. Are you worried about constant folding away the fptoXi? If a run-time conversion could raise a trap I expect constant folding to be skipped. Would that avoid poison?

The problem with poison is that it eventually leads to UB, and then your program has no defined meaning. Practically, it might mean some codepath that involves a call to llvm.experimental.constrained.fptosi.i32.f64 could get folded away because it's provably UB, or something like that.

In any case, we should explicitly state what happens, since it isn't specified by IEEE754.

What happens if the input float is out of range? fptosi/fptoui instructions produce poison; not sure if you want that here.

Is this a general comment or referring to a change in Kevin's patch?

Unless I'm misunderstanding, we should be leaving the constrained converts alone until a hardware instruction is produced. In the ConvertToInteger cases, the hw instructions should flag Invalid and then the rounding mode will determine the result.

If you're asking what would happen if we found a poison value as the fptoXi's operand, that's a tougher question...

Is this a general comment or referring to a change in Kevin's patch?

It's a question about the semantics of the proposed llvm.experimental.constrained.fptosi/llvm.experimental.constrained.fptoui. Specifically, what does it do when the result is larger than the largest integer representable in the destination format? LangRef for fptoui/fptosi says "If the value cannot fit in ty2, the result is a poison value."

Unless I'm misunderstanding, we should be leaving the constrained converts alone until a hardware instruction is produced.

We have to define the semantics; I mean, I guess we could say "If the value cannot fit in the destination type, the result is computed in a target-specific way", but we'd have to state it explicitly. And it's sort of awkward.
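To make the question concrete, here is a sketch of how the proposed intrinsics would appear in IR. The declarations follow the naming convention of the other constrained intrinsics; the exact signature and metadata spelling are whatever this patch ends up defining:

```llvm
; Proposed constrained conversions. Unlike the plain fptosi/fptoui
; instructions, they carry an exception-behavior argument and may have
; side effects on the FP status flags.
declare i32 @llvm.experimental.constrained.fptosi.i32.f64(double, metadata)
declare i32 @llvm.experimental.constrained.fptoui.i32.f64(double, metadata)

define i32 @convert(double %d) {
  ; Plain instruction: an out-of-range %d yields poison.
  ;   %x = fptosi double %d to i32
  ; Constrained form: the open question is what the result is when
  ; %d does not fit in i32.
  %x = call i32 @llvm.experimental.constrained.fptosi.i32.f64(
                                 double %d, metadata !"fpexcept.strict")
  ret i32 %x
}
```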

Unless I'm misunderstanding, we should be leaving the constrained converts alone until a hardware instruction is produced.

We have to define the semantics; I mean, I guess we could say "If the value cannot fit in the destination type, the result is computed in a target-specific way", but we'd have to state it explicitly. And it's sort of awkward.

I think IEEE-754 does define this:

When a numeric operand would convert to an integer outside the range of the destination format, the invalid operation exception shall be signaled if this situation cannot otherwise be indicated.

There's some language that follows about IEEE specific converts and rounding mode too.

Let me be clear that this is in one of my blind spots wrt IEEE-754, so take my opinions lightly. And it probably goes without saying that I haven't thought through the edge cases...

I think IEEE-754 does define this

Your citation doesn't actually specify what value is returned, only that an exception is raised.

Is this a general comment or referring to a change in Kevin's patch?

It's a question about the semantics of the proposed llvm.experimental.constrained.fptosi/llvm.experimental.constrained.fptoui. Specifically, what does it do when the result is larger than the largest integer representable in the destination format? LangRef for fptoui/fptosi says "If the value cannot fit in ty2, the result is a poison value."

Unless I'm misunderstanding, we should be leaving the constrained converts alone until a hardware instruction is produced.

We have to define the semantics; I mean, I guess we could say "If the value cannot fit in the destination type, the result is computed in a target-specific way", but we'd have to state it explicitly. And it's sort of awkward.

And we've generally avoided giving target-independent LLVM operations target-defined semantics, and I prefer that we continue to avoid doing that.

I think IEEE-754 does define this

Your citation doesn't actually specify what value is returned, only that an exception is raised.

Ok, I agree with that:

The invalid operation exception is signaled if and only if there is no usefully definable result.

So pragmatically, an invalid exception is an alarm that the code is off track. As long as the exception is handled appropriately (default or an alternative), the result of the invalid operation shouldn't matter. Whatever LLVM wants to do with the value gets no arguments from me, since we've already self-destructed (unless the program handles the exception gracefully, but that wouldn't require a defined result from the invalid operation anyway).

Thinking about LangRef, aren't all the constrained intrinsics special considering the side-effects? Would it be acceptable for LangRef to read something like:

Semantics:

@llvm.experimental.constrained.fadd maps to IEEE-754's addition(x, y)

[Also, I'm noticing that the LangRef semantics for the existing constrained intrinsics are pretty sloppy. Those will probably need to be reworked.]

I think IEEE-754 does define this

Your citation doesn't actually specify what value is returned, only that an exception is raised.

Ok, I agree with that:

The invalid operation exception is signaled if and only if there is no usefully definable result.

So pragmatically, an invalid exception is an alarm that the code is off track. As long as the exception is handled appropriately (default or an alternative), the result of the invalid operation shouldn't matter. Whatever LLVM wants to do with the value gets no arguments from me, since we've already self-destructed (unless the program handles the exception gracefully, but that wouldn't require a defined result from the invalid operation anyway).

But the exception could be masked couldn't it?

Thinking about LangRef, aren't all the constrained intrinsics special considering the side-effects? Would it be acceptable for LangRef to read something like:

Semantics:

@llvm.experimental.constrained.fadd maps to IEEE-754's addition(x, y)

[Also, I'm noticing that the LangRef semantics for the existing constrained intrinsics are pretty sloppy. Those will probably need to be reworked.]

So pragmatically, an invalid exception is an alarm that the code is off track. As long as the exception is handled appropriately (default or an alternative), the result of the invalid operation shouldn't matter. Whatever LLVM wants to do with the value gets no arguments from me, since we've already self-destructed (unless the program handles the exception gracefully, but that wouldn't require a defined result from the invalid operation anyway).

But the exception could be masked couldn't it?

Yeah, but I'm not sure if it matters. The program has already failed, so there's no guarantee that the results are useful.

Our typical user enables traps (inv, divz, and ovf) during development to convince themselves that the code is safe. Production runs are then done with traps disabled. But, of course, traps may be reenabled if a runtime problem is later found. They're basically a sanity check.

kpn added a comment.Jun 27 2019, 6:05 AM

So pragmatically, an invalid exception is an alarm that the code is off track. As long as the exception is handled appropriately (default or an alternative), the result of the invalid operation shouldn't matter. Whatever LLVM wants to do with the value gets no arguments from me, since we've already self-destructed (unless the program handles the exception gracefully, but that wouldn't require a defined result from the invalid operation anyway).

But the exception could be masked couldn't it?

Yeah, but I'm not sure if it matters. The program has already failed, so there's no guarantee that the results are useful.

Our typical user enables traps (inv, divz, and ovf) during development to convince themselves that the code is safe. Production runs are then done with traps disabled. But, of course, traps may be reenabled if a runtime problem is later found. They're basically a sanity check.

We run with traps _on_, except we don't care about Inexact, and we do this in the products that ship to customers. Traps get handled in some way that is reasonable to a higher level of the software, for example by at least sometimes having a handler insert a value to replace the value that didn't get produced by the instruction that trapped. So I want a constrained fptoXi that is given valid inputs to never generate poison or get folded in a way that hides a trap.

For the constrained intrinsics, at least in the case where the exception semantics argument is "fpexcept.strict", we need to make sure the invalid exception is raised. If the exception semantics argument is "fpexcept.ignore" or "fpexcept.maytrap" we could allow the conversion to be optimized away.

If we say the result of the constrained intrinsic is poison, will the fact that the intrinsic is defined as having side effects keep it from being eliminated?
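A sketch of the distinction being drawn, assuming the metadata spellings used by the existing constrained intrinsics:

```llvm
declare i32 @llvm.experimental.constrained.fptosi.i32.f64(double, metadata)

define void @exception_modes(double %d) {
  ; "fpexcept.strict": the invalid exception must be observable exactly
  ; where the program expects it; the call must not be deleted or moved
  ; past FP-environment accesses.
  %a = call i32 @llvm.experimental.constrained.fptosi.i32.f64(
                                 double %d, metadata !"fpexcept.strict")
  ; "fpexcept.ignore": the program promises not to inspect the status
  ; flags, so the optimizer is free to fold or remove the conversion.
  %b = call i32 @llvm.experimental.constrained.fptosi.i32.f64(
                                 double %d, metadata !"fpexcept.ignore")
  ret void
}
```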

If we say the result of the constrained intrinsic is poison, will the fact that the intrinsic is defined as having side effects keep it from being eliminated?

If a program uses a poison value in certain ways (defined in LangRef), the behavior of the program as a whole is undefined. So the optimizer can generate code that does anything in that case; most likely, it will erase the entire basic block containing the call. Side-effects like modifying the FP exception bits don't matter.

That said, if llvm.experimental.constrained.fptosi.i32.f64 is allowed to raise an unmasked exception, that would keep it from being eliminated: the return value would never be used.

I might be opening a can of worms here and I'm not a language expert, but it isn't clear to me from reading the C99 standard that defining fptosi/fptoui as returning poison values in the unrepresentable case allows correct implementation of the C standard. That is, it doesn't seem to me that the standard actually says this is undefined behavior. It just says the resulting value is unspecified, and the exception behavior is explicitly defined. On the other hand, C++ does say clearly that it is undefined behavior, right?

I understand that in the unconstrained case LLVM doesn't care about FP exceptions and that we would like the LLVM IR definition to be more precise than the C standard. I'm just trying to get my head wrapped around why we're doing what we are in that case and what we need to do to correctly implement strict FP semantics.

If we say that the constrained version returns undef in the unrepresentable case and clearly emphasize how this differs from the standard fptosi/fptoui instructions, would that have the effect of keeping the operation around so that it can raise the exception while still giving us well-defined IR semantics?

kpn added a comment.Jul 8 2019, 9:24 AM

I might be opening a can of worms here and I'm not a language expert, but it isn't clear to me from reading the C99 standard that defining fptosi/fptoui as returning poison values in the unrepresentable case allows correct implementation of the C standard. That is, it doesn't seem to me that the standard actually says this is undefined behavior. It just says the resulting value is unspecified, and the exception behavior is explicitly defined. On the other hand, C++ does say clearly that it is undefined behavior, right?

I understand that in the unconstrained case LLVM doesn't care about FP exceptions and that we would like the LLVM IR definition to be more precise than the C standard. I'm just trying to get my head wrapped around why we're doing what we are in that case and what we need to do to correctly implement strict FP semantics.

If we say that the constrained version returns undef in the unrepresentable case and clearly emphasize how this differs from the standard fptosi/fptoui instructions, would that have the effect of keeping the operation around so that it can raise the exception while still giving us well-defined IR semantics?

Ping? Can anyone address this above comment?

In D63782#1573780, @kpn wrote:

I might be opening a can of worms here and I'm not a language expert, but it isn't clear to me from reading the C99 standard that defining fptosi/fptoui as returning poison values in the unrepresentable case allows correct implementation of the C standard. That is, it doesn't seem to me that the standard actually says this is undefined behavior. It just says the resulting value is unspecified, and the exception behavior is explicitly defined. On the other hand, C++ does say clearly that it is undefined behavior, right?

It looks like it is undefined behavior in a recent(-ish) draft of the C Standard:

6.3.1.4 Real floating and integer

1 When a finite value of real floating type is converted to an integer type other than _Bool, the fractional part is discarded (i.e., the value is truncated toward zero). If the value of the integral part cannot be represented by the integer type, the behavior is undefined.

And it also appears to be UB in a draft of the C++ Standard:

7.10 Floating-integral conversions

1 A prvalue of a floating-point type can be converted to a prvalue of an integer type. The conversion truncates; that is, the fractional part is discarded. The behavior is undefined if the truncated value cannot be represented in the destination type.

I understand that in the unconstrained case LLVM doesn't care about FP exceptions and that we would like the LLVM IR definition to be more precise than the C standard. I'm just trying to get my head wrapped around why we're doing what we are in that case and what we need to do to correctly implement strict FP semantics.

If we say that the constrained version returns undef in the unrepresentable case and clearly emphasize how this differs from the standard fptosi/fptoui instructions, would that have the effect of keeping the operation around so that it can raise the exception while still giving us well-defined IR semantics?

I'd be okay with a constrained fptoXi returning poison, as long as the operation isn't replaced by a poison value. In other words, what happens after the trap (and processing) is immaterial, so it's fine for the compiler to treat it as undefined behavior.

How aggressive is LLVM's UB handling? Would it remove an entire block/function if UB is found in it? @eli.friedman

How aggressive is LLVM's UB handling? Would it remove an entire block/function if UB is found in it?

If LLVM can prove a basic block unconditionally executes UB, it will be erased. But "unconditionally" is an important qualifier. For example, consider the following function: void f(void g()) { g(); *(int*)0 = 0; }. The call to g isn't erased because we can't prove g will return.

How aggressive is LLVM's UB handling? Would it remove an entire block/function if UB is found in it?

If LLVM can prove a basic block unconditionally executes UB, it will be erased. But "unconditionally" is an important qualifier. For example, consider the following function: void f(void g()) { g(); *(int*)0 = 0; }. The call to g isn't erased because we can't prove g will return.

The constrained FP intrinsics should be opaque enough (besides 'ignore'), so that sounds fine to me.

Seems like we should let invalid fptoXi's return poison, like the unconstrained versions do. Anyone see a problem with this?

kpn added a comment.Jul 12 2019, 10:00 AM

How aggressive is LLVM's UB handling? Would it remove an entire block/function if UB is found in it?

If LLVM can prove a basic block unconditionally executes UB, it will be erased. But "unconditionally" is an important qualifier. For example, consider the following function: void f(void g()) { g(); *(int*)0 = 0; }. The call to g isn't erased because we can't prove g will return.

The constrained FP intrinsics should be opaque enough (besides 'ignore'), so that sounds fine to me.

Seems like we should let invalid fptoXi's return poison, like the unconstrained versions do. Anyone see a problem with this?

Is there any chance I can get this patch into 9? What do I need to do to make that happen?

Is there any chance I can get this patch into 9? What do I need to do to make that happen?

Constant folding the constrained intrinsics is a long way off. IMO, it shouldn't hold up these two (they're experimental right now anyway).

We'll have to address all the operations that can signal invalid in the future too...

That's just my $0.02 though. I'm open to other opinions.

craig.topper added inline comments.Jul 17 2019, 2:15 PM
include/llvm/CodeGen/ISDOpcodes.h
307

Is this what we want from a strict implementation? "icc -fp-model=strict" goes out of its way to generate an invalid exception when the input value doesn't fit in the destination type and the hardware can't generate an exception on its own. https://godbolt.org/z/ABU80i
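A sketch of the case in question (the i16 variant of the intrinsic is hypothetical here, following the same naming convention): x86's cvttsd2si only produces 32-bit or 64-bit results, so a double that fits in i32 but not in i16 converts without any hardware exception, and a strict implementation would have to signal invalid by other means, which is what icc's extra code does.

```llvm
declare i16 @llvm.experimental.constrained.fptosi.i16.f64(double, metadata)

define i16 @to_short(double %d) {
  ; With %d = 40000.0 the value fits in i32, so cvttsd2si does not
  ; signal invalid, yet the i16 result has overflowed. Under
  ; "fpexcept.strict" the backend would need to raise invalid itself.
  %x = call i16 @llvm.experimental.constrained.fptosi.i16.f64(
                                 double %d, metadata !"fpexcept.strict")
  ret i16 %x
}
```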

lib/CodeGen/SelectionDAG/TargetLowering.cpp
5381

Shouldn't this be the caller's responsibility? It is for the data result.

5382

Put else on the above line.

5389–5391

I think the use of "Strict" here is being overloaded. The "Strict" in shouldUseStrictFP_TO_INT isn't the same "strict" as the intrinsic. We may want the same behavior, but maybe we should clarify the terminology.

5415

Same question as above

5416

Put else on this line

test/CodeGen/X86/fp-intrinsics.ll
293

I hope there are more instructions here. This is a signed conversion instruction. Or are we constant folding the setcc and select in the fp_to_uint expansion?

365

Where's the fptosi test case?

kpn marked 6 inline comments as done.Jul 18 2019, 10:54 AM
kpn added inline comments.
include/llvm/CodeGen/ISDOpcodes.h
307

No, I don't think this is what we want. I'll remove that sentence.

lib/CodeGen/SelectionDAG/TargetLowering.cpp
5381

That's a fair question. Splitting the responsibilities could be seen as confusing. But if you grep for "Legalize the chain result" you'll find 30+ places where we're already handling the chain the way we are here. So I think that we should be consistent here and if we want to change it everywhere then maybe another ticket would be better for that change.

5389–5391

Ok. How about UseOnlyOneFP_TO_SINT?

test/CodeGen/X86/fp-intrinsics.ll
293

It looks like it's constant folding away the rest of the instructions since the constant 42 fits in the destination type. I can change it to take a variable and then check for the other interesting instructions.

365

I had to move the tests over to the PowerPC's e500 chip with the SPE feature. That's the in-tree target that still has scalar FP_TO_SINT marked Legal. Most everywhere else it is not marked Legal and currently fails with the existing mutation support.

craig.topper added inline comments.
lib/CodeGen/SelectionDAG/TargetLowering.cpp
5381

Most of those are in the type legalizer though, aren't they? I guess my concern here is that this is a helper function that can be called from multiple places. Doing part of the replacement inside makes the behavior of this function confusing.

5389–5391

@RKSimon do you have an opinion here?

test/CodeGen/X86/fp-intrinsics.ll
365

Ok. Should we file bugs to implement support in the other targets?

kpn marked an inline comment as done and an inline comment as not done.Jul 18 2019, 11:37 AM
kpn added inline comments.
lib/CodeGen/SelectionDAG/TargetLowering.cpp
5381

OK, that's fair. I'll change it.

test/CodeGen/X86/fp-intrinsics.ll
365

The problem is in TargetLoweringBase::getStrictFPOperationAction(). The target is queried to see how it handles the non-strict version of a strict node, and then any result that isn't Legal gets bashed into Expand. So Promote and Custom can't be handled. That's the problem that needs to be solved.

Short term I'm not sure what to do. Maybe yank out the lines that bash Promote and Custom? That's not been tried and I don't know exactly what will happen.

Long term the mutation compatibility code needs to die.

Also, it was pointed out to me recently that the strict and non-strict nodes need to have the same action registered. Which is true. Do we want to put in checks to verify that this is the case at some point?

I've been thinking about the getStrictFPOperationAction issue. I believe this function makes no sense at all and really should go away. Instead, the strict opcodes should use their own operation actions just like everything else. In current code, those should now be set up in a reasonable manner: they all default to Expand, unless the target overrides them (to whatever makes sense).

If the operation action is anything but Expand, I believe it should simply be respected as-is (i.e. Legal stays until ISel, Custom calls the target hook, Promote -if ever used- should be implemented in common code in a way that preserves strictness, and LibCall should emit a library call - possibly a strict version if necessary).

Now, if the operation action of a strict FP operation is Expand, common code should try to implement an expansion rule if possible in a way that preserves strictness. If that is not possible, then and only then should we think about falling back to a non-strict implementation. That fallback (and only that fallback) now should look at the operation action of the non-strict equivalent. If that is anything but Legal, then the expansion logic should replace the strict node with a non-strict node and push that non-strict node back to the legalizer to handle it in whatever way is indicated by its operation action. If the operation action is Legal, then we can leave the strict node in and mutate it to the non-strict node only shortly before ISel as is done today.

As to when an expansion is possible in a way that preserves strictness, and when to fall back to the non-strict equivalent: For a vector op, I believe it always makes sense to expand a strict vector op by scalarizing it to strict component ops. (Except, possibly, if that scalar op would itself fall back to its non-strict equivalent -- then we might as well do the fallback on the vector type.) For scalar ops, for some it could be possible to expand them while respecting strictness, e.g. for some of the fp-to-sint cases above. That can be decided on a case-by-case basis. If at the end of ExpandNode we haven't found a way to expand a strict node, we can do the fallback then.

Does this make sense?

I've been thinking about the getStrictFPOperationAction issue. I believe this function makes no sense at all and really should go away. Instead, the strict opcodes should use their own operation actions just like everything else. In current code, those should now be set up in a reasonable manner: they all default to Expand, unless the target overrides them (to whatever makes sense).

If the operation action is anything but Expand, I believe it should simply be respected as-is (i.e. Legal stays until ISel, Custom calls the target hook, Promote -if ever used- should be implemented in common code in a way that preserves strictness, and LibCall should emit a library call - possibly a strict version if necessary).

Now, if the operation action of a strict FP operation is Expand, common code should try to implement an expansion rule if possible in a way that preserves strictness. If that is not possible, then and only then should we think about falling back to a non-strict implementation. That fallback (and only that fallback) now should look at the operation action of the non-strict equivalent. If that is anything but Legal, then the expansion logic should replace the strict node with a non-strict node and push that non-strict node back to the legalizer to handle it in whatever way is indicated by its operation action. If the operation action is Legal, then we can leave the strict node in and mutate it to the non-strict node only shortly before ISel as is done today.

As to when an expansion is possible in a way that preserves strictness, and when to fall back to the non-strict equivalent: For a vector op, I believe it always makes sense to expand a strict vector op by scalarizing it to strict component ops. (Except, possibly, if that scalar op would itself fall back to its non-strict equivalent -- then we might as well do the fallback on the vector type.) For scalar ops, for some it could be possible to expand them while respecting strictness, e.g. for some of the fp-to-sint cases above. That can be decided on a case-by-case basis. If at the end of ExpandNode we haven't found a way to expand a strict node, we can do the fallback then.

Does this make sense?

Makes sense to me. As I recall, the mutation was always intended as a temporary solution.

Makes sense to me. As I recall, the mutation was always intended as a temporary solution.

+1

I've now posted a patch implementing something along those lines here: https://reviews.llvm.org/D65226

The only difference is that if the strict operation is to fall back to the non-strict operation, and that non-strict operation is marked Custom, I'm now not attempting to call that Custom handler. This is mostly because I haven't found any case where this would actually be invoked today (and therefore couldn't test any code added to handle this case), and also because it doesn't really make much sense. Going forward, when this will show up (e.g. because of fptosi/fptoui), the target simply should mark the strict operation as Custom as well. (This will actually work correctly with D65226.)

kpn updated this revision to Diff 211562.Jul 24 2019, 10:56 AM

Address review comments.

When I changed the fp-intrinsics.ll test (which is scalar) to operate on a variable instead of a constant I found an issue that required adding a fourth ReplaceNode*() method to LegalizeDAG.cpp.

pengfei added inline comments.Aug 1 2019, 1:55 AM
include/llvm/CodeGen/ISDOpcodes.h
307

I don't understand the behavior of icc here. The SDM says this instruction can raise a floating-point invalid exception when the result is out of range. Why does icc add code to raise a divz exception on top of that?

craig.topper added inline comments.Aug 1 2019, 8:27 AM
include/llvm/CodeGen/ISDOpcodes.h
307

The instruction raises an exception based on a 32-bit result size. But the C code has a 16-bit result size. We need to generate an exception if it doesn't fit in 16 bits to match the C code. But the available instructions can't do that.

include/llvm/CodeGen/ISDOpcodes.h
307

That seems like a hardware problem.

Does LLVM have a policy on fixing up hardware deficiencies? It's not obvious that we should care about the hardware doing the wrong thing.

@eli.friedman

efriedma added inline comments.Aug 1 2019, 11:37 AM
include/llvm/CodeGen/ISDOpcodes.h
307

In general, IR instructions have set semantics, and we generate whatever native code is necessary to make the generated code match the specified semantics. (This influences the way we specify instructions; for example, one of the reasons shl produces poison for large shift amounts is to allow it to be directly mapped to a hardware instruction on common targets.)

In this case, I think it's pretty clear; if we're going to have a "strict" fp-to-int conversion that promises to raise an overflow exception if and only if the conversion overflows, that should work as documented regardless of what the underlying hardware supports. Every architecture has some fp-to-int conversions that need to be emulated, and the exact set can vary depending on the specific CPU target. We don't want to produce unpredictable exceptions for emulated conversions.

If the performance penalty for this is too large, we could allow frontends to request a non-strict conversion that's allowed to generate arbitrary exceptions... but I'm not sure that would be useful in code that cares about exceptions.

include/llvm/CodeGen/ISDOpcodes.h
307

Ok. Thanks for the clarification, Eli. That seems reasonable.

I agree with @pengfei that it would be nice if we could generate an artificial overflow instead of a divz, but that's picking nits.

include/llvm/CodeGen/ISDOpcodes.h
307

Oh, just caught that it's div(0,0), so produces an Invalid exception. Comment withdrawn...

pengfei added inline comments.Aug 1 2019, 6:34 PM
include/llvm/CodeGen/ISDOpcodes.h
307

Thanks @cameron.mcinally, I just learned SIMD div(0,0) raises an invalid exception instead of a divz exception.

craig.topper added inline comments.Aug 5 2019, 11:40 AM
lib/CodeGen/SelectionDAG/LegalizeDAG.cpp
2908

I think the comment has a typo or mistake in it.

lib/CodeGen/SelectionDAG/TargetLowering.cpp
5381

Remove the commented out code.

5415

Remove the commented-out code.

kpn updated this revision to Diff 213641.Aug 6 2019, 9:52 AM
kpn marked 3 inline comments as done.

Address review comments. Rebase.

craig.topper added inline comments.Aug 27 2019, 8:29 AM
lib/CodeGen/SelectionDAG/TargetLowering.cpp
5378

This isn't used below, is it? The else block here returns true, but this block continues. And the continuation code assigns Result without ever reading it.

kpn marked an inline comment as done.Aug 27 2019, 9:27 AM
kpn added inline comments.
lib/CodeGen/SelectionDAG/TargetLowering.cpp
5378

The else is a call to getNode() for regular ISD::FP_TO_SINT. Then both legs fall through to the return true.

craig.topper accepted this revision.Aug 27 2019, 9:53 AM

LGTM

lib/CodeGen/SelectionDAG/TargetLowering.cpp
5378

Oops. I think the lack of curly braces on the else and the comments in this area made me miss the indentation change.

This revision is now accepted and ready to land.Aug 27 2019, 9:53 AM
This revision was automatically updated to reflect the committed changes.