This is an archive of the discontinued LLVM Phabricator instance.

[InstCombine] Apply binary operator simplifications to associative/commutative cases.
Needs ReviewPublic

Authored by hjyamauchi on May 1 2018, 3:52 PM.

Details

Summary

Apply the instruction combiner binary operator simplifications to the
associative/commutative cases. For example, if we have "(A op B) op C", we try
to transform it to "A op (B op C)" and try to simplify the "(B op C)" part (even
when "(B op C)" doesn't fold to a constant).

A motivating example is a bit-check combining simplification like

((A & 1) == 0) && ((A & 2) == 0) && ((A & 4) == 0) &&
((B & 1) == 0) && ((B & 2) == 0) && ((B & 4) == 0)

-->

((A & 7) == 0) && ((B & 7) == 0)

which didn't fully happen previously.
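The equivalence above holds for all values of A and B; a quick brute-force sanity check over the low bits (illustrative only, not part of the patch):

```python
# Check that the six separate bit tests are equivalent to the two
# combined masked compares, exhaustively over the low bits.
def separate(a, b):
    return ((a & 1) == 0 and (a & 2) == 0 and (a & 4) == 0 and
            (b & 1) == 0 and (b & 2) == 0 and (b & 4) == 0)

def combined(a, b):
    return (a & 7) == 0 and (b & 7) == 0

assert all(separate(a, b) == combined(a, b)
           for a in range(16) for b in range(16))
```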

Diff Detail

Event Timeline

hjyamauchi created this revision. May 1 2018, 3:52 PM
lebedev.ri added inline comments.
test/Transforms/InstCombine/bit-check-combine.ll
2
  1. Please use ; NOTE: Assertions have been autogenerated by utils/update_test_checks.py
  2. I was under the impression that handling of bit checks was being moved to aggressiveinstcombine.
spatel added a comment. May 2 2018, 6:57 AM

I think part of this has already landed with:
rL331311

There are proposals trying to do reassociation in a more complete way outside of instcombine:
D45842 (this would catch the cases that I recently added to and-xor-or.ll)
D41574

This is a response to discussions on llvm-dev that instcombine is already trying to do too much. This patch goes against that idea.

hjyamauchi updated this revision to Diff 144936. May 2 2018, 2:51 PM

Used utils/update_test_checks.py for bit-check-combine.ll.

I think part of this has already landed with:
rL331311

There are proposals trying to do reassociation in a more complete way outside of instcombine:
D45842 (this would catch the cases that I recently added to and-xor-or.ll)
D41574

This is a response to discussions on llvm-dev that instcombine is already trying to do too much. This patch goes against that idea.

Looks like some related issues are being looked into.

Where’s this discussion, and what’s the concern? Compile time?

In D46336#1085843, @yamauchi wrote:

I think part of this has already landed with:
rL331311

There are proposals trying to do reassociation in a more complete way outside of instcombine:
D45842 (this would catch the cases that I recently added to and-xor-or.ll)
D41574

This is a response to discussions on llvm-dev that instcombine is already trying to do too much. This patch goes against that idea.

Looks like some related issues are being looked into.

I think you'll find PR37098 ( https://bugs.llvm.org/show_bug.cgi?id=37098 ) covers several of the same issues you'd like to solve.

Where’s this discussion, and what’s the concern? Compile time?

Compile-time and general bloat of instcombine because it has no limits. Here's one link:
http://lists.llvm.org/pipermail/llvm-dev/2017-May/113184.html

There's no chance that I can explain this better than @dberlin - in fact, see comments from earlier today in D44626
Sorry for the cc, Daniel...but there's a chance you're in the same room/building? :)

I think part of this has already landed with:
rL331311

What's 'this'? It's not clear to me how rL331311 (even partially) helps with the bit-check combining this patch is aiming for. Do you mean that a similar approach could be taken?

FWIW, a nice thing is that this patch doesn't need to recognize and combine specific patterns (e.g. bit checks, FoldPHIArgOrIntoPHI (D44626), or and-or-lshr (D45986)), and it doesn't rely on reassociation/canonicalization rules or orderings tailored to specific patterns (e.g. binops that operate on the same value for bit checks, a particular definition of "matching pair" as in D45842, or the ranking in the reassociate pass), orderings which might conflict with one another.

Rather, it applies the same instcombine simplification logic that is already applied at a binary operator's original position to its previously unexplored, alternative associative/commutative positions, one associative/commutative opportunity at a time, at the existing points in SimplifyAssociativeOrCommutative(). It doesn't require walking an entire subtree of instructions to find patterns, or limiting the scope to an artificially fixed size out of compile-time concerns.
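The idea can be sketched in miniature (a hypothetical Python toy, not the patch's actual C++): given "(A op B) op C", try to simplify the inner "B op C" of the alternative association, and keep the rewritten form only when that part actually folds:

```python
# Hypothetical toy (not the patch's C++): for "(A op B) op C", try the
# alternative association "A op (B op C)" and keep it only when the
# inner "B op C" part actually simplifies.
def try_simplify_or(x, y):
    # Stand-in for the simplifier: fold only when both operands are
    # known constants.
    if isinstance(x, int) and isinstance(y, int):
        return x | y
    return None

def simplify_assoc(lhs, rhs):
    # lhs is ('or', A, B) representing "A | B"; rhs is the C operand.
    _, a, b = lhs
    inner = try_simplify_or(b, rhs)   # try the "(B op C)" part
    if inner is not None:
        return ('or', a, inner)       # rewrite to "A op (B op C)"
    return ('or', lhs, rhs)           # otherwise keep the original shape
```

For example, `simplify_assoc(('or', 'x', 8), 16)` yields `('or', 'x', 24)`, while `simplify_assoc(('or', 'x', 'y'), 16)` keeps the original shape because nothing folds.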

Re: bloat, is the consensus that we just don't add to instcombine any more?

This thread http://lists.llvm.org/pipermail/llvm-dev/2017-July/115398.html seemed to have concluded that it’s still acceptable to add to instcombine?

If instcombine is going to be split into multiple passes, and assuming we can't call instcombine simplification routines from another pass (unlike instsimplify), could we end up in a situation where we would need to run the other pass and instcombine in turn, repeatedly, until a fixpoint? I'm not sure whether any of the extra things that are currently (or proposed to be) part of instcombine are in fact like this, and are in instcombine for that reason. For example, factorization may open up new instcombine opportunities, which in turn open up more factorization opportunities, etc. Anyhow, it seems non-trivial.

spatel added a comment. May 4 2018, 8:43 AM
In D46336#1087038, @yamauchi wrote:

I think part of this has already landed with:
rL331311

What's 'this'? It's not clear to me how rL331311 (even partially) helps with the bit-check combining this patch is aiming for. Do you mean that a similar approach could be taken?

I was assuming from the name of this and similar tests:
"bit-check-combine1"
that 'this' was looking for any-bit-set / any-bit-clear / all-bits-set / all-bits-clear. Maybe the patterns you're looking for don't look like what I am matching, though? If I run -instcombine on the first test, it is already substantially reduced...and at that point, it just looks like a problem for -reassociate?

define i1 @bit-check-combine1(i32 %a, i32 %b) {
entry:
  %0 = and i32 %b, 8
  %1 = and i32 %b, 16
  %2 = and i32 %b, 32
  %3 = and i32 %a, 7   <--- we got lucky on this one and found the reduction
  %4 = or i32 %3, %0
  %5 = or i32 %4, %1    <--- reassociate the 'or' operands, so we can factor out the mask ops
  %6 = or i32 %5, %2
  %7 = icmp eq i32 %6, 0
  ret i1 %7
}
spatel added a comment. May 4 2018, 9:10 AM
In D46336#1087159, @yamauchi wrote:

Re: bloat, is the consensus that we just don't add to instcombine any more?

This thread http://lists.llvm.org/pipermail/llvm-dev/2017-July/115398.html seemed to have concluded that it’s still acceptable to add to instcombine?

Thanks for finding that thread...somehow I find it difficult to search llvm-dev history...
Yes, I think we can still add to instcombine, but I am skeptical of large changes like this that alter the run loop. It doesn't fit with the spirit of small, constant-time combining.

If instcombine is going to be split into multiple passes, and assuming we can't call instcombine simplification routines from another pass (unlike instsimplify), could we end up in a situation where we would need to run the other pass and instcombine in turn, repeatedly, until a fixpoint? I'm not sure whether any of the extra things that are currently (or proposed to be) part of instcombine are in fact like this, and are in instcombine for that reason. For example, factorization may open up new instcombine opportunities, which in turn open up more factorization opportunities, etc. Anyhow, it seems non-trivial.

I agree that it's not always clear (and note that I posted an alternative version of D45842 in PR37098 that would do more reassociation in instcombine...it would be easier!).

I share your goal of getting these folds to occur, but I'm not an authority on anything, so I think you should raise this discussion on llvm-dev to get a better answer.

Having looked through D45842, I find this differential rather more complex.
Maybe instcombine shouldn't be doing this...

It would be interesting to know which of these test cases *aren't* handled by D45842.

In D46336#1087038, @yamauchi wrote:

I think part of this has already landed with:
rL331311

What's 'this'? It's not clear to me how rL331311 (even partially) helps with the bit-check combining this patch is aiming for. Do you mean that a similar approach could be taken?

I was assuming from the name of this and similar tests:
"bit-check-combine1"
that 'this' was looking for any-bit-set / any-bit-clear / all-bits-set / all-bits-clear. Maybe the patterns you're looking for don't look like what I am matching, though? If I run -instcombine on the first test, it is already substantially reduced...and at that point, it just looks like a problem for -reassociate?

define i1 @bit-check-combine1(i32 %a, i32 %b) {
entry:
  %0 = and i32 %b, 8
  %1 = and i32 %b, 16
  %2 = and i32 %b, 32
  %3 = and i32 %a, 7   <--- we got lucky on this one and found the reduction
  %4 = or i32 %3, %0
  %5 = or i32 %4, %1    <--- reassociate the 'or' operands, so we can factor out the mask ops
  %6 = or i32 %5, %2
  %7 = icmp eq i32 %6, 0
  ret i1 %7
}

rL331311 seemed to be about folding a chain of or-shifts. How does it directly help with bit checks? That was my question.

As I noted in my comment above (Thu, May 3, 3:12 PM), this patch doesn't look for the bit-check pattern (or any other pattern); instead it indirectly promotes bit-check combining by triggering instcombine folding at the associative/commutative positions of binops. The actual bit-check combining is handled by foldLogOpOfMaskedICmps().

Based on the following description of the reassociate pass (-reassociate), I'm not sure the issue is specific to that pass.

https://llvm.org/docs/Passes.html#reassociate-reassociate-expressions

This pass reassociates commutative expressions in an order that is designed to promote better constant propagation, GCSE, LICM, PRE, etc.

For example: 4 + (x + 5) ⇒ x + (4 + 5)

In the implementation of this algorithm, constants are assigned rank = 0, function arguments are rank = 1, and other values are assigned ranks corresponding to the reverse post order traversal of current function (starting at 2), which effectively gives values in deep loops higher rank than values not in loops.
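In toy form, the documented ranking amounts to sorting a flattened operand list so that constants sink together and fold (a sketch under that reading, not the pass's actual implementation):

```python
# Toy model of the documented ranking: constants rank 0, everything
# else ranks higher, so grouping constants together lets them fold
# into one term. Models 4 + (x + 5) -> x + (4 + 5) -> x + 9 on a
# flattened operand list.
def reassociate_add(operands):
    consts = [o for o in operands if isinstance(o, int)]
    others = [o for o in operands if not isinstance(o, int)]
    return others + ([sum(consts)] if consts else [])
```

For example, `reassociate_add([4, 'x', 5])` yields `['x', 9]`.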

Thanks for finding that thread...somehow I find it difficult to search llvm-dev history...

No problem. It was in my email archives.

I agree that it's not always clear (and note that I posted an alternative version of D45842 in PR37098 that would do more reassociation in instcombine...it would be easier!).

Maybe I do not see the full context of PR37098, but I wonder if there was an actual objection to the instcombine version of D45842.

Having looked through D45842, I find this differential rather more complex.
Maybe instcombine shouldn't be doing this...

It would be interesting to know which of these test cases *aren't* handled by D45842.

Input:

define i1 @bit-check-combine1(i32 %a, i32 %b) {
entry:
  %0 = and i32 %a, 1
  %1 = icmp eq i32 %0, 0
  %2 = and i32 %a, 2
  %3 = icmp eq i32 %2, 0
  %4 = and i32 %a, 4
  %5 = icmp eq i32 %4, 0
  %6 = and i32 %b, 8
  %7 = icmp eq i32 %6, 0
  %8 = and i32 %b, 16
  %9 = icmp eq i32 %8, 0
  %10 = and i32 %b, 32
  %11 = icmp eq i32 %10, 0
  %12 = and i1 %1, %3
  %13 = and i1 %12, %5
  %14 = and i1 %13, %7
  %15 = and i1 %14, %9
  %16 = and i1 %15, %11
  ret i1 %16
}

After -instcombine (no patch)

define i1 @bit-check-combine1(i32 %a, i32 %b) {
entry:
  %0 = and i32 %b, 8
  %1 = and i32 %b, 16
  %2 = and i32 %b, 32
  %3 = and i32 %a, 7
  %4 = or i32 %3, %0
  %5 = or i32 %4, %1
  %6 = or i32 %5, %2
  %7 = icmp eq i32 %6, 0
  ret i1 %7
}

After -instcombine, with this patch

define i1 @bit-check-combine1(i32 %a, i32 %b) {
entry:
  %0 = and i32 %a, 7
  %1 = and i32 %b, 56
  %2 = or i32 %0, %1
  %3 = icmp eq i32 %2, 0
  ret i1 %3
}
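The patched output above is equivalent to the original input; brute force over the relevant low bits confirms it (an illustrative check, not from the review):

```python
# Original: six icmp/and pairs combined with i1 'and's.
def original(a, b):
    return ((a & 1) == 0 and (a & 2) == 0 and (a & 4) == 0 and
            (b & 8) == 0 and (b & 16) == 0 and (b & 32) == 0)

# Patched: two masked compares, or'ed and tested against zero.
def patched(a, b):
    return ((a & 7) | (b & 56)) == 0

assert all(original(a, b) == patched(a, b)
           for a in range(64) for b in range(64))
```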

After -instcombine, with D45842

define i1 @bit-check-combine1(i32 %a, i32 %b) {
entry:
  %0 = and i32 %b, 8
  %1 = and i32 %b, 16
  %2 = and i32 %b, 32
  %3 = and i32 %a, 7
  %4 = or i32 %3, %0
  %5 = or i32 %4, %1
  %6 = or i32 %5, %2
  %7 = icmp eq i32 %6, 0
  ret i1 %7
}

After -instcombine -reassociate, with D45842

define i1 @bit-check-combine1(i32 %a, i32 %b) {
entry:
  %0 = and i32 %b, 8
  %1 = and i32 %b, 16
  %2 = and i32 %b, 32
  %3 = and i32 %a, 7
  %4 = or i32 %0, %3
  %5 = or i32 %1, %2
  %6 = or i32 %5, %4
  %7 = icmp eq i32 %6, 0
  ret i1 %7
}

After -instcombine -reassociate -instcombine, with D45842

define i1 @bit-check-combine1(i32 %a, i32 %b) {
entry:
  %0 = and i32 %b, 8
  %1 = and i32 %a, 7
  %2 = or i32 %0, %1
  %3 = and i32 %b, 48
  %4 = or i32 %3, %2
  %5 = icmp eq i32 %4, 0
  ret i1 %5
}
In D46336#1090588, @yamauchi wrote:

Having looked through D45842, i find this differential rather more complex.
Maybe instcombine shouldn't be doing this..

It would be interesting to know which of these testcase *aren't* handled by D45842.

Input:

define i1 @bit-check-combine1(i32 %a, i32 %b) {
entry:
  %0 = and i32 %a, 1
  %1 = icmp eq i32 %0, 0
  %2 = and i32 %a, 2
  %3 = icmp eq i32 %2, 0
  %4 = and i32 %a, 4
  %5 = icmp eq i32 %4, 0
  %6 = and i32 %b, 8
  %7 = icmp eq i32 %6, 0
  %8 = and i32 %b, 16
  %9 = icmp eq i32 %8, 0
  %10 = and i32 %b, 32
  %11 = icmp eq i32 %10, 0
  %12 = and i1 %1, %3
  %13 = and i1 %12, %5
  %14 = and i1 %13, %7
  %15 = and i1 %14, %9
  %16 = and i1 %15, %11
  ret i1 %16
}

...

After -instcombine -reassociate -instcombine, with D45842

define i1 @bit-check-combine1(i32 %a, i32 %b) {
entry:
  %0 = and i32 %b, 8
  %1 = and i32 %a, 7
  %2 = or i32 %0, %1
  %3 = and i32 %b, 48
  %4 = or i32 %3, %2
  %5 = icmp eq i32 %4, 0
  ret i1 %5
}

... and if you run -reassociate -instcombine once more?

After -instcombine -reassociate -instcombine -reassociate -instcombine with D45842

define i1 @bit-check-combine1(i32 %a, i32 %b) {
entry:
  %0 = and i32 %b, 8
  %1 = and i32 %a, 7
  %2 = or i32 %0, %1
  %3 = and i32 %b, 48
  %4 = or i32 %2, %3
  %5 = icmp eq i32 %4, 0
  ret i1 %5
}
lebedev.ri added a comment. Edited May 7 2018, 4:14 PM
In D46336#1090644, @yamauchi wrote:

After -instcombine -reassociate -instcombine -reassociate -instcombine with D45842

define i1 @bit-check-combine1(i32 %a, i32 %b) {
entry:
  %0 = and i32 %b, 8
  %1 = and i32 %a, 7
  %2 = or i32 %0, %1
  %3 = and i32 %b, 48
  %4 = or i32 %2, %3
  %5 = icmp eq i32 %4, 0
  ret i1 %5
}

So it seems D45842 did what it was supposed to, but instcombine can't fold that down into

define i1 @bit-check-combine1(i32 %a, i32 %b) {
entry:
  %0 = and i32 %a, 7
  %1 = and i32 %b, 56
  %2 = or i32 %0, %1
  %3 = icmp eq i32 %2, 0
  ret i1 %3
}

I'd guess something in instcombine does not use commutative matchers.
(I did not analyse this at all yet, just 'saving' it as one comment)
https://rise4fun.com/Alive/PsC
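The remaining fold is just mask merging: 8 | 48 = 56, so the two forms are equal for every input. A quick brute-force check (separate from the Alive proof linked above):

```python
# The form D45842 + reassociate produces: (b&8 | a&7) | b&48, vs. the
# fully folded form a&7 | b&56. Since (b & 8) | (b & 48) == b & 56,
# the two compares agree everywhere.
def unfolded(a, b):
    return (((b & 8) | (a & 7)) | (b & 48)) == 0

def folded(a, b):
    return ((a & 7) | (b & 56)) == 0

assert all(unfolded(a, b) == folded(a, b)
           for a in range(64) for b in range(64))
```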

I'd guess something in instcombine does not use commutative matchers.
(I did not analyse this at all yet, just 'saving' it as one comment)
https://rise4fun.com/Alive/PsC

It's not immediately obvious to me which matcher this may relate to, if it's a matcher issue.

Could this also be what this patch aims to explore, i.e. folding a commutative alternative position of the nested or's (%4/%2) above?

D46595 is a simpler but more limited version of this. Note that the test @bit-check-combine-256() doesn't get folded there as simply as here, but it doesn't modify the run loop and is less complex.

Here's a comparison with D45842 for the 4th test, @bit-check-combine-256 (which I understand is a long one; I'll see if I can reduce it).

After -instcombine with this patch

define i1 @bit-check-combine-256(i32* %bits0, i32* %bits1, i32* %bits2, i32* %bits3, i32* %bits4, i32* %bits5, i32* %bits6, i32* %bits7) {
entry:
  %0 = load i32, i32* %bits0, align 4
  %1 = icmp eq i32 %0, -2145382392
  %2 = load i32, i32* %bits1, align 4
  %3 = load i32, i32* %bits2, align 4
  %4 = load i32, i32* %bits3, align 4
  %5 = or i32 %3, %4
  %6 = and i32 %5, 2147483647
  %7 = or i32 %2, %6
  %8 = or i32 %3, %4
  %9 = and i32 %8, -2147483648
  %10 = or i32 %9, %7
  %11 = load i32, i32* %bits4, align 4
  %12 = load i32, i32* %bits5, align 4
  %13 = or i32 %11, %12
  %14 = and i32 %13, 2147483647
  %15 = or i32 %10, %14
  %16 = or i32 %11, %12
  %17 = and i32 %16, -2147483648
  %18 = or i32 %17, %15
  %19 = load i32, i32* %bits6, align 4
  %20 = load i32, i32* %bits7, align 4
  %21 = or i32 %19, %20
  %22 = and i32 %21, 2147483647
  %23 = or i32 %18, %22
  %24 = or i32 %19, %20
  %25 = and i32 %24, -2147483648
  %26 = or i32 %25, %23
  %27 = icmp eq i32 %26, 0
  %28 = and i1 %27, %1
  ret i1 %28
}

After -instcombine -reassociate -instcombine -reassociate -instcombine with D45842

define i1 @bit-check-combine-256(i32* %bits0, i32* %bits1, i32* %bits2, i32* %bits3, i32* %bits4, i32* %bits5, i32* %bits6, i32* %bits7) {
entry:
  %0 = load i32, i32* %bits0, align 4
  %1 = icmp eq i32 %0, -2145382392
  %2 = load i32, i32* %bits1, align 4
  %3 = load i32, i32* %bits2, align 4
  %and.i3199 = and i32 %3, 1
  %and.i3201 = and i32 %3, 2
  %and.i3203 = and i32 %3, 4
  %and.i3205 = and i32 %3, 8
  %and.i3207 = and i32 %3, 16
  %and.i3209 = and i32 %3, 32
  %and.i3211 = and i32 %3, 64
  %4 = and i32 %3, 384
  %and.i3216 = and i32 %3, 512
  %and.i3218 = and i32 %3, 1024
  %and.i3220 = and i32 %3, 2048
  %and.i3222 = and i32 %3, 4096
  %and.i3224 = and i32 %3, 8192
  %and.i3226 = and i32 %3, 16384
  %5 = and i32 %3, 98304
  %and.i3231 = and i32 %3, 131072
  %and.i3233 = and i32 %3, 262144
  %and.i3235 = and i32 %3, 524288
  %and.i3237 = and i32 %3, 1048576
  %and.i3239 = and i32 %3, 2097152
  %and.i3241 = and i32 %3, 4194304
  %and.i3243 = and i32 %3, 8388608
  %and.i3249 = and i32 %3, 67108864
  %and.i3255 = and i32 %3, 536870912
  %and.i3257 = and i32 %3, 1073741824
  %6 = and i32 %3, 402653184
  %7 = or i32 %6, %2
  %8 = or i32 %7, %and.i3249
  %9 = or i32 %8, %and.i3257
  %10 = or i32 %9, %and.i3255
  %11 = or i32 %10, %and.i3199
  %12 = or i32 %11, %and.i3201
  %13 = or i32 %12, %and.i3203
  %14 = or i32 %13, %and.i3205
  %15 = or i32 %14, %and.i3207
  %16 = or i32 %15, %and.i3209
  %17 = or i32 %16, %and.i3211
  %18 = or i32 %17, %4
  %19 = or i32 %18, %and.i3216
  %20 = or i32 %19, %and.i3218
  %21 = or i32 %20, %and.i3220
  %22 = or i32 %21, %and.i3222
  %23 = or i32 %22, %and.i3224
  %24 = or i32 %23, %and.i3226
  %25 = or i32 %24, %5
  %26 = or i32 %25, %and.i3231
  %27 = or i32 %26, %and.i3233
  %28 = or i32 %27, %and.i3235
  %29 = or i32 %28, %and.i3237
  %30 = or i32 %29, %and.i3239
  %31 = or i32 %30, %and.i3241
  %32 = or i32 %31, %and.i3243
  %33 = and i32 %3, 50331648
  %34 = or i32 %33, %32
  %35 = icmp eq i32 %34, 0
  %36 = and i1 %1, %35
  %cmp.i3259 = icmp sgt i32 %3, -1
  %and3752918 = and i1 %cmp.i3259, %36
  %37 = load i32, i32* %bits3, align 4
  %38 = load i32, i32* %bits4, align 4
  %and.i3321 = and i32 %38, 1
  %and.i3323 = and i32 %38, 2
  %and.i3325 = and i32 %38, 4
  %and.i3327 = and i32 %38, 8
  %and.i3329 = and i32 %38, 16
  %and.i3331 = and i32 %38, 32
  %and.i3333 = and i32 %38, 64
  %39 = and i32 %38, 384
  %and.i3338 = and i32 %38, 512
  %and.i3340 = and i32 %38, 1024
  %and.i3342 = and i32 %38, 2048
  %and.i3344 = and i32 %38, 4096
  %and.i3346 = and i32 %38, 8192
  %and.i3348 = and i32 %38, 16384
  %40 = and i32 %38, 98304
  %and.i3353 = and i32 %38, 131072
  %and.i3355 = and i32 %38, 262144
  %and.i3357 = and i32 %38, 524288
  %and.i3359 = and i32 %38, 1048576
  %and.i3361 = and i32 %38, 2097152
  %and.i3363 = and i32 %38, 4194304
  %and.i3365 = and i32 %38, 8388608
  %and.i3371 = and i32 %38, 67108864
  %and.i3377 = and i32 %38, 536870912
  %and.i3379 = and i32 %38, 1073741824
  %41 = and i32 %38, 402653184
  %42 = or i32 %41, %37
  %43 = or i32 %42, %and.i3371
  %44 = or i32 %43, %and.i3379
  %45 = or i32 %44, %and.i3377
  %46 = or i32 %45, %and.i3321
  %47 = or i32 %46, %and.i3323
  %48 = or i32 %47, %and.i3325
  %49 = or i32 %48, %and.i3327
  %50 = or i32 %49, %and.i3329
  %51 = or i32 %50, %and.i3331
  %52 = or i32 %51, %and.i3333
  %53 = or i32 %52, %39
  %54 = or i32 %53, %and.i3338
  %55 = or i32 %54, %and.i3340
  %56 = or i32 %55, %and.i3342
  %57 = or i32 %56, %and.i3344
  %58 = or i32 %57, %and.i3346
  %59 = or i32 %58, %and.i3348
  %60 = or i32 %59, %40
  %61 = or i32 %60, %and.i3353
  %62 = or i32 %61, %and.i3355
  %63 = or i32 %62, %and.i3357
  %64 = or i32 %63, %and.i3359
  %65 = or i32 %64, %and.i3361
  %66 = or i32 %65, %and.i3363
  %67 = or i32 %66, %and.i3365
  %68 = and i32 %38, 50331648
  %69 = or i32 %68, %67
  %70 = icmp eq i32 %69, 0
  %71 = and i1 %70, %and3752918
  %cmp.i3381 = icmp sgt i32 %38, -1
  %and6312982 = and i1 %cmp.i3381, %71
  %72 = load i32, i32* %bits5, align 4
  %73 = load i32, i32* %bits6, align 4
  %and.i3443 = and i32 %73, 1
  %and.i3445 = and i32 %73, 2
  %and.i3447 = and i32 %73, 4
  %and.i3449 = and i32 %73, 8
  %and.i3451 = and i32 %73, 16
  %and.i3453 = and i32 %73, 32
  %and.i3455 = and i32 %73, 64
  %74 = and i32 %73, 384
  %and.i3460 = and i32 %73, 512
  %and.i3462 = and i32 %73, 1024
  %and.i3464 = and i32 %73, 2048
  %and.i3466 = and i32 %73, 4096
  %and.i3468 = and i32 %73, 8192
  %and.i3470 = and i32 %73, 16384
  %75 = and i32 %73, 98304
  %and.i3475 = and i32 %73, 131072
  %and.i3477 = and i32 %73, 262144
  %and.i3479 = and i32 %73, 524288
  %and.i3481 = and i32 %73, 1048576
  %and.i3483 = and i32 %73, 2097152
  %and.i3485 = and i32 %73, 4194304
  %and.i3487 = and i32 %73, 8388608
  %and.i3493 = and i32 %73, 67108864
  %and.i3499 = and i32 %73, 536870912
  %and.i3501 = and i32 %73, 1073741824
  %76 = and i32 %73, 402653184
  %77 = or i32 %76, %72
  %78 = or i32 %77, %and.i3493
  %79 = or i32 %78, %and.i3501
  %80 = or i32 %79, %and.i3499
  %81 = or i32 %80, %and.i3443
  %82 = or i32 %81, %and.i3445
  %83 = or i32 %82, %and.i3447
  %84 = or i32 %83, %and.i3449
  %85 = or i32 %84, %and.i3451
  %86 = or i32 %85, %and.i3453
  %87 = or i32 %86, %and.i3455
  %88 = or i32 %87, %74
  %89 = or i32 %88, %and.i3460
  %90 = or i32 %89, %and.i3462
  %91 = or i32 %90, %and.i3464
  %92 = or i32 %91, %and.i3466
  %93 = or i32 %92, %and.i3468
  %94 = or i32 %93, %and.i3470
  %95 = or i32 %94, %75
  %96 = or i32 %95, %and.i3475
  %97 = or i32 %96, %and.i3477
  %98 = or i32 %97, %and.i3479
  %99 = or i32 %98, %and.i3481
  %100 = or i32 %99, %and.i3483
  %101 = or i32 %100, %and.i3485
  %102 = or i32 %101, %and.i3487
  %103 = and i32 %73, 50331648
  %104 = or i32 %103, %102
  %105 = icmp eq i32 %104, 0
  %106 = and i1 %105, %and6312982
  %cmp.i3503 = icmp sgt i32 %73, -1
  %and8873046 = and i1 %cmp.i3503, %106
  %107 = load i32, i32* %bits7, align 4
  %108 = icmp eq i32 %107, 0
  %109 = and i1 %108, %and8873046
  ret i1 %109
}