This is an archive of the discontinued LLVM Phabricator instance.

[InlineCost] Add support for unary fneg.
Closed, Public

Authored by craig.topper on May 30 2019, 1:30 PM.

Details

Summary

This adds support for unary fneg, based on the BinaryOperator implementation but without the soft-float FP cost.

Previously we would just delegate to visitUnaryInstruction. I think the only real change is that we now pass the fast-math flags to SimplifyFNegInst.

Unfortunately, SimplifyFNegInst does not currently do anything with the fast-math flags, so I don't think this changes any behavior; thus I don't know how to test it.
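
As a rough illustration (not code from the patch itself), the kind of instruction the new handler covers is a unary fneg carrying fast-math flags, which are now forwarded to SimplifyFNegInst:

  define float @f(float %x) {
    ; Unary fneg with fast-math flags; the new visitor passes these flags
    ; to SimplifyFNegInst instead of going through visitUnaryInstruction.
    %neg = fneg fast float %x
    ret float %neg
  }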

Diff Detail

Event Timeline

craig.topper created this revision. May 30 2019, 1:30 PM
efriedma added inline comments. May 30 2019, 2:32 PM
llvm/lib/Analysis/InlineCost.cpp:1134

Is this actually true? fneg should be one or two native instructions for almost every combination of type/target, even ones that don't have native floating-point support.

craig.topper marked an inline comment as done. May 30 2019, 2:40 PM
craig.topper added inline comments.
llvm/lib/Analysis/InlineCost.cpp:1134

Probably not. But it is the answer we would have gotten for fsub -0.0, %x. Should we exclude both forms of writing fneg from this?
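
For reference, the two spellings in question are:

  %neg1 = fneg float %x                  ; the new unary opcode
  %neg2 = fsub float -0.000000e+00, %x   ; the traditional binary spelling of fneg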

cameron.mcinally accepted this revision. May 30 2019, 2:57 PM

LGTM.

I'm not that familiar with InlineCost, but I see that almost all of this is copied from visitBinaryOperator(...), so I'm pretty confident in the review.

I was able to test the increased FP cost. The included test case reports a cost of 100 before this patch and 125 after. I based it on an earlier ARM test case in the test file and just changed an fmul by a constant to an fneg. I can pre-commit this test, but wanted opinions on whether it needs changes.

FMul seems like a good first attempt. Maybe a cost of XOR by constant is better, but they're probably pretty similar anyway.
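
The test change described above is essentially the following swap (illustrative IR, not the committed test; the constant is made up here):

  %mul = fmul float %x, 3.000000e+00   ; fmul-by-constant from the original ARM test
  %neg = fneg float %x                 ; replacement that exercises the new fneg handling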

This revision is now accepted and ready to land. May 30 2019, 2:57 PM
llvm/lib/Analysis/InlineCost.cpp:1134

This seems like a good check to have around for a general UnaryOperator. Of course there's only one right now, but who knows in the future...

> FMul seems like a good first attempt. Maybe a cost of XOR by constant is better, but they're probably pretty similar anyway.

We're talking about the difference in cost between a runtime call and a xor by a constant; those are not really similar.

llvm/lib/Analysis/InlineCost.cpp:1134

We probably should treat "fsub -0.0, %x" the same way, yes.

It probably doesn't make sense to try to predict the costs of future unary operators that don't currently exist.

> FMul seems like a good first attempt. Maybe a cost of XOR by constant is better, but they're probably pretty similar anyway.

> We're talking about the difference in cost between a runtime call and a xor by a constant; those are not really similar.

Those are different. Why is the cost calculation so pessimistic? For soft float targets?

Yes, soft-float targets, since TTI.getFPOpCost(I.getType()) == TargetTransformInfo::TCC_Expensive is true.

Restrict to just FNeg since we have no other UnaryOperators yet.

craig.topper requested review of this revision. Jun 1 2019, 3:21 PM
craig.topper retitled this revision from [InlineCost] Add support for UnaryOperator to [InlineCost] Add support for unary fneg..
craig.topper edited the summary of this revision.
craig.topper edited the summary of this revision.

This is a good compromise. I'm okay with this if Eli is...

It should be possible to write a testcase based on the simplification? Otherwise looks fine.

The only optimizations in SimplifyFNegInst currently are constant folding and (fneg (fneg X)) -> X. The constant folding case would already have been handled by visitUnaryInstruction. For the (fneg (fneg X)) case, will InlineCost be able to do anything if one fneg is in the caller and the other is in the callee?
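
A minimal illustration of that caller/callee question (function names are made up):

  define float @callee(float %x) {
    %inner = fneg float %x
    ret float %inner
  }

  define float @caller(float %x) {
    ; The argument passed to @callee is already negated, so the callee's fneg
    ; is fneg(fneg %x), which in principle folds back to %x.
    %outer = fneg float %x
    %r = call float @callee(float %outer)
    ret float %r
  }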

Is it likely that we're ever going to be able to do anything useful with the fast-math flags?

Even if the new code behaves the same way as visitUnaryInstruction, I'd like to have some test coverage, to show this doesn't break anything.

> Is it likely that we're ever going to be able to do anything useful with the fast-math flags?

I don't think that decision has been made. It's just a bit-twiddle, so I'm not sure if FMFs would give us anything.

That said, if we knew nnan and nsz, we could convert a unary FNeg into a binary FNeg. I don't think that would be a win, but haven't studied it...
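
That is, the rewrite being described would turn the first line into the second when nnan and nsz are present:

  %u = fneg nnan nsz float %x                  ; unary FNeg
  %b = fsub nnan nsz float -0.000000e+00, %x   ; binary FNeg it could become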

Add a test case that requires constant folding an fneg.
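
A sketch of that kind of test (illustrative only; the names, constants, and shape here are not the committed test case): a callee whose fneg of a constant argument folds away during the cost analysis, so the dependent branch is resolved and the callee looks cheaper to inline.

  define i32 @callee(float %x) {
  entry:
    %neg = fneg float %x
    %cmp = fcmp ogt float %neg, 0.000000e+00
    br i1 %cmp, label %then, label %else

  then:
    ret i32 1

  else:
    ret i32 2
  }

  define i32 @caller() {
    ; With a constant argument the analysis can fold the fneg and the compare,
    ; so only the reachable side of the branch should be counted.
    %r = call i32 @callee(float -1.000000e+00)
    ret i32 %r
  }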

This revision is now accepted and ready to land. Jun 6 2019, 10:40 AM
This revision was automatically updated to reflect the committed changes.