This is an archive of the discontinued LLVM Phabricator instance.

[AggressiveInstCombine] convert a chain of 'and-shift' bits into masked compare
Closed, Public

Authored by spatel on May 9 2018, 10:25 AM.

Details

Summary

This is a follow-up to D45986. As suggested there, we should match the "all-bits-set" pattern in addition to "any-bits-set".

This was a little more complicated than I initially thought it would be because the "and 1" instruction can be anywhere in the chain. Hopefully, the code comments make that logic understandable, but if you see a way to simplify or improve it, that would be most appreciated.
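
For illustration, here is a reduced two-bit version of the "all-bits-set" pattern and the masked-compare form we expect it to fold into (a sketch by analogy with the 40-bit example later in this review; the value names and the exact folded output are illustrative, not taken from the patch's tests):

define i32 @allset_two_bit_mask(i32 %x) {
  %t1 = lshr i32 %x, 3
  %t2 = lshr i32 %x, 7
  %a1 = and i32 %t1, 1
  %a2 = and i32 %a1, %t2
  ret i32 %a2
}

would become:

  %m = and i32 %x, 136        ; 136 = (1 << 3) | (1 << 7)
  %c = icmp eq i32 %m, 136
  %z = zext i1 %c to i32
  ret i32 %z

Note that the single 'and ..., 1' sits in the middle of the chain here rather than at the root; the same fold applies wherever it appears.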

This transforms patterns that emerge from bitfield tests as seen in PR37098:
https://bugs.llvm.org/show_bug.cgi?id=37098

I think it would also help reduce the large test from D46336 / D46595, but we first need something to reassociate that case into the forms we're expecting here. Alternatively, we could extend the matching here to account for that pattern (but I have not investigated what that would take).

Diff Detail

Repository
rL LLVM

Event Timeline

spatel created this revision. May 9 2018, 10:25 AM
lebedev.ri added inline comments. May 9 2018, 10:36 AM
lib/Transforms/AggressiveInstCombine/AggressiveInstCombine.cpp
Line 75 (On Diff #145953)

This looks recursive.
I think this is a stack overflow waiting to happen.

Can you try to come up with a degenerate test case where it has to self-recurse 20-40 times, just so we know the overflow does not happen?

spatel added inline comments. May 9 2018, 11:06 AM
lib/Transforms/AggressiveInstCombine/AggressiveInstCombine.cpp
Line 75 (On Diff #145953)

Yes - it's definitely recursive. :)
We could convert this to a worklist pattern.
But there is a practical cap on that recursion - you'd never have more recursion than the number of bits in the value. So a 40-bit mask would be:

define i64 @allset_40_bit_mask(i64 %x) {
  %t1 = lshr i64 %x, 1
  %t2 = lshr i64 %x, 2
  %t3 = lshr i64 %x, 3
  %t4 = lshr i64 %x, 4
  %t5 = lshr i64 %x, 5
  %t6 = lshr i64 %x, 6
  %t7 = lshr i64 %x, 7
  %t8 = lshr i64 %x, 8
  %t9 = lshr i64 %x, 9
  %t10 = lshr i64 %x, 10
  %t11 = lshr i64 %x, 11
  %t12 = lshr i64 %x, 12
  %t13 = lshr i64 %x, 13
  %t14 = lshr i64 %x, 14
  %t15 = lshr i64 %x, 15
  %t16 = lshr i64 %x, 16
  %t17 = lshr i64 %x, 17
  %t18 = lshr i64 %x, 18
  %t19 = lshr i64 %x, 19
  %t20 = lshr i64 %x, 20
  %t21 = lshr i64 %x, 21
  %t22 = lshr i64 %x, 22
  %t23 = lshr i64 %x, 23
  %t24 = lshr i64 %x, 24
  %t25 = lshr i64 %x, 25
  %t26 = lshr i64 %x, 26
  %t27 = lshr i64 %x, 27
  %t28 = lshr i64 %x, 28
  %t29 = lshr i64 %x, 29
  %t30 = lshr i64 %x, 30
  %t31 = lshr i64 %x, 31
  %t32 = lshr i64 %x, 32
  %t33 = lshr i64 %x, 33
  %t34 = lshr i64 %x, 34
  %t35 = lshr i64 %x, 35
  %t36 = lshr i64 %x, 36
  %t37 = lshr i64 %x, 37
  %t38 = lshr i64 %x, 38
  %t39 = lshr i64 %x, 39
  %t40 = lshr i64 %x, 40

  %a1 = and i64 %t1, 1
  %a2 = and i64 %t2, %a1
  %a3 = and i64 %t3, %a2
  %a4 = and i64 %t4, %a3
  %a5 = and i64 %t5, %a4
  %a6 = and i64 %t6, %a5
  %a7 = and i64 %t7, %a6
  %a8 = and i64 %t8, %a7
  %a9 = and i64 %t9, %a8
  %a10 = and i64 %t10, %a9
  %a11 = and i64 %t11, %a10
  %a12 = and i64 %t12, %a11
  %a13 = and i64 %t13, %a12
  %a14 = and i64 %t14, %a13
  %a15 = and i64 %t15, %a14
  %a16 = and i64 %t16, %a15
  %a17 = and i64 %t17, %a16
  %a18 = and i64 %t18, %a17
  %a19 = and i64 %t19, %a18
  %a20 = and i64 %t20, %a19
  %a21 = and i64 %t21, %a20
  %a22 = and i64 %t22, %a21
  %a23 = and i64 %t23, %a22
  %a24 = and i64 %t24, %a23
  %a25 = and i64 %t25, %a24
  %a26 = and i64 %t26, %a25
  %a27 = and i64 %t27, %a26
  %a28 = and i64 %t28, %a27
  %a29 = and i64 %t29, %a28
  %a30 = and i64 %t30, %a29
  %a31 = and i64 %t31, %a30
  %a32 = and i64 %t32, %a31
  %a33 = and i64 %t33, %a32
  %a34 = and i64 %t34, %a33
  %a35 = and i64 %t35, %a34
  %a36 = and i64 %t36, %a35
  %a37 = and i64 %t37, %a36
  %a38 = and i64 %t38, %a37
  %a39 = and i64 %t39, %a38
  %a40 = and i64 %t40, %a39

  ret i64 %a40
}

That'll get folded down to:

define i64 @allset_40_bit_mask(i64 %x) local_unnamed_addr #0 {
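  ; 2199023255550 = 0x1FFFFFFFFFE, i.e. bits 1 through 40 set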
  %1 = and i64 %x, 2199023255550
  %2 = icmp eq i64 %1, 2199023255550
  %3 = zext i1 %2 to i64
  ret i64 %3
}

I'll add a test like that.

This looks good to me; I'm not concerned about the recursion, since most of it is tail recursion anyway.

lebedev.ri added inline comments. May 9 2018, 11:07 AM
lib/Transforms/AggressiveInstCombine/AggressiveInstCombine.cpp
Line 75 (On Diff #145953)

> you'd never have more recursion than the number of bits in the value.

Yes, I realize, and thank you.

kparzysz accepted this revision. May 9 2018, 11:22 AM

With the extra 40-bit testcase.

This revision is now accepted and ready to land. May 9 2018, 11:22 AM
This revision was automatically updated to reflect the committed changes.