This is an archive of the discontinued LLVM Phabricator instance.

[Inliner] Penalise inlining of calls with loops at Oz
ClosedPublic

Authored by dmgreen on Oct 1 2018, 4:01 AM.

Details

Summary

We currently seem to underestimate the size of functions with loops in them, both in absolute code size and in the difficulty of dealing with such code. (Functions, for example, can be tail merged to further reduce code size.) At -Oz, we can then increase code size by inlining small loops multiple times.

This attempts to penalise functions with loops at -Oz by adding a CallPenalty for each top-level loop in the function. It uses LI (and hence DT) to calculate the number of loops. As we are dealing with minsize, the inline threshold is small and functions at this point should be relatively small, making the construction of these analyses cheap (although we really only care whether/how many loops there are, so there may be a cheaper way to do that).
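As a rough sketch of the proposed heuristic (names here are illustrative, not the actual InlineCost code): for each top-level loop the callee contains, add one CallPenalty to the computed size cost.

```cpp
#include <cassert>

// Illustrative constant; in LLVM the real value lives in InlineConstants.
constexpr int CallPenalty = 25;

// Hypothetical helper: given the number of top-level loops LoopInfo
// reports for a callee, return the extra size penalty applied at -Oz.
int loopSizePenalty(int NumTopLevelLoops) {
  return NumTopLevelLoops * CallPenalty;
}
```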

Diff Detail

Repository
rL LLVM

Event Timeline

dmgreen created this revision.Oct 1 2018, 4:01 AM

Before we start adding more heuristics, I'd like to make sure the existing heuristics are working correctly. In particular, for the given testcases, NumInstructionsSimplified seems way larger than it should be.

For loop-noinline.ll the "Simplified" instructions (the ones that cost nothing) appear to be:
br %body
3 x phis
one of the geps in the loop
the gep outside the loop
the ret

For the other testcase it's 8: the 2 x unconditional branches, 3 x phis, 2 x geps, and the ret.

I guess we could argue the gep without a load isn't free, but I'm not sure why the other gep in the loop isn't free. In the second test case the two branches for the loop aren't free because they aren't fallthroughs, but that sounds tough to model.

Perhaps we could say that phis in loops have one "setup" cost? We would have to detect that the phi comes from a loop in that case.

We could probably improve our handling of GEPs; for example, a GEP used by a loop PHI node is probably not free.

In the second test case the two branches for the loop aren't free because they aren't fallthroughs, but that sounds tough to model.

Probably not that hard to model: if a block has more than one unconditional branch predecessor, count each branch beyond the first. Or something like that; maybe we could also do something more fancy for conditional branches, to guess whether they lower to one or two branch instructions at the machine level.
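That counting rule can be sketched as follows (a hypothetical model, not the inliner's actual code): for each block, at most one unconditional-branch predecessor can be laid out as a fallthrough, so every one beyond the first is assumed to pay for a real branch instruction.

```cpp
#include <algorithm>
#include <vector>

// UncondPreds[i] is the number of predecessors that reach block i via an
// unconditional branch. At most one of those can fall through after block
// layout; the rest are counted as costing a branch instruction each.
int nonFallthroughBranches(const std::vector<int> &UncondPreds) {
  int Cost = 0;
  for (int N : UncondPreds)
    Cost += std::max(0, N - 1);
  return Cost;
}
```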

We could also try to improve our modeling of constants: we aren't assigning any cost to materializing @digits, or any cost to materializing "32" in @call2.

Perhaps we could say that phis in loops have one "setup" cost? We would have to detect that the phi comes from a loop in that case.

The issue isn't really whether the PHI is in a loop; it has to do with the uses of the operand. A PHI will eventually be lowered to a copy in each predecessor. The copy can be eliminated by the register allocator in many cases, but not all cases; the distinguishing factor is whether the PHI's operand is live after the PHI.


Granted, just refusing to inline loops at -Oz probably does the right thing in most cases, but I'd like to see how far we can get by improving the cost model rather than artificially skewing it.

dmgreen updated this revision to Diff 167983.Oct 2 2018, 11:36 AM

I've added a new memcpy test from the original reproducer. It's a byte memcpy (people seem to love writing those), which I think is worth focusing on because it's small, but still increases codesize. It expands to:

	b	.LBB1_2
  .LBB1_1:
	ldrb	r3, [r1], #1
	subs	r2, #1
	strb	r3, [r0], #1
  .LBB1_2:
	cmp	r2, #0
	bne	.LBB1_1
	bx	lr
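For reference, a byte-by-byte copy loop of the kind being discussed (a plausible reconstruction, not the original reproducer's source) looks like:

```cpp
#include <cstddef>

// A hand-written byte memcpy: only a handful of IR instructions, but it
// still grows code size each time it is inlined at -Oz.
void byteMemcpy(unsigned char *Dst, const unsigned char *Src, size_t N) {
  while (N--)
    *Dst++ = *Src++;
}
```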

I would say from this that it's hard to argue the geps aren't free (unless we count that they are 32-bit instructions? Do we do a lot of modelling of that?). They may be non-free on other architectures.

The LLVM IR is 13 instructions, 8 of which are simplified, so 5 remain. Starting at a score of -45 (CallPenalty + Call + 3*Args), we end up at -20. We could add 1 for the branch, and another 2 (maybe) for the geps. That would get our score to -5, so we'd need another 2 instructions to cost something.

Or we'd need to not start out with such a negative score. In this case I think the args make sense; maybe the CallPenalty of 25 is a bit high? As I understand it, this number (for codesize) is essentially like changing our threshold from <=0 to <=25, saying "more optimisations may happen, let's fudge it a bit". So changing it would mean less inlining in general at minsize, affecting more than just loops. It's a larger change.
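The arithmetic above can be written out explicitly. The constants are taken from the thread's numbers (a CallPenalty of 25 and a per-instruction cost of 5, which makes the call plus three arguments contribute -20 on top of the -45 start); the function name is illustrative, not LLVM's actual accounting.

```cpp
#include <cassert>

// Values consistent with the numbers quoted in the thread
// (mirroring LLVM's InlineConstants).
constexpr int CallPenalty = 25;
constexpr int InstrCost = 5;

// Hypothetical accounting: the score starts negative (the savings from
// removing the call itself plus its arguments), and each non-free
// instruction in the callee adds InstrCost back.
int inlineScore(int NumArgs, int NonFreeInstrs) {
  int Start = -(CallPenalty + InstrCost + NumArgs * InstrCost);
  return Start + NonFreeInstrs * InstrCost;
}
```

With 3 arguments the score starts at -45; counting 5 non-free instructions brings it to -20, and charging the branch plus two geps as well would bring it to -5, matching the figures above.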

any cost to materializing "32" in @call2.

Yeah, that was what I was thinking of for the setup cost of the phis not being free. I see what you mean about operands remaining live after the phi. The same could be said for function arguments, right? They may be free if they can be put into the correct register, but won't be if they are also needed after the call.

We could probably improve our handling of GEPs; for example, a GEP used by a loop PHI node is probably not free.

Exactly where would a gep be free? Just loads and stores and other geps? Perhaps memcpys, if they are small enough? Or from other instructions in general?

I think that was a kind of long way of saying: if we are going to try to do this accurately, we should probably start with the CallPenalty. Let me know what you think; I'll try and get more numbers.

unless we count that they are 32-bit instructions? Do we do a lot of modelling of that?

The inliner doesn't try to model that at all. It's not really that important outside of -Oz, anyway, and it's sort of a hard problem given that the size of an instruction depends on register allocation.

Exactly where would a gep be free? Just loads and stores and other geps? Perhaps memcpys, if they are small enough? Or from other instructions in general?

Something like that. IIRC the inliner is a bit more generous at the moment, on the assumption that some GEPs will eventually simplify.

is essentially like changing our threshold from <=0 to <=25, and saying "more optimisations may happen, let's fudge it a bit"

In this context, yes. (There are other places that use CallPenalty, but they aren't really relevant here.) But changing it is effectively equivalent to just changing the inline threshold. If you think OptMinSizeThreshold is wrong, you can try modifying it (it doesn't necessarily have to be a positive number).

The same could be said for function arguments, right? They may be free if they can be put into the correct register, but wont be if they are also needed after the call.

Yes.

dmgreen updated this revision to Diff 170910.Oct 24 2018, 9:11 AM

Now ignores loops that will never be executed. I also have some code that uses SCEV to calculate whether the backedge count is <= 1, and to allow inlining there. It doesn't seem to come up very often, though, and needed some plumbing to get SEs/TLIs in the right places.

Eli, what do you think between this and D53405?

Friendly Ping

efriedma accepted this revision.Nov 2 2018, 2:10 PM

From the discussion here and on D53405, it seems like the pieces we aren't modeling are hard to model well, and it will be hard to get them all right. And I think you're right that it's unlikely to be profitable to inline loops. So LGTM.

This revision is now accepted and ready to land.Nov 2 2018, 2:10 PM
This revision was automatically updated to reflect the committed changes.