This patch fixes a regression introduced by revision 268094.
Revision 268094 added the following dag combine rule:
// trunc (shl x, K) -> shl (trunc x), K => K < vt.size / 2
That rule rewrites a truncate of a shift-by-constant into a shift of the truncated value. The fold is performed only if the shift count is less than half the size in bits of the truncated value type (K < vt.size / 2).
The problem is that the constraint on the shift count is incorrect: for a vector type, vt.size is the width of the whole vector rather than of a single element, so the check is far too permissive. As a result, the rule produces wrong code in some cases involving vector types.
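For concreteness, here is a small standalone C++ model of the old check (the helper name and the main() driver are mine; this is not the in-tree DAGCombiner code):

#include <cassert>

// Models the r268094 guard "K < vt.size / 2", where VTBits is the *total*
// width in bits of the truncated value type vt.
static bool oldGuardAllowsFold(unsigned K, unsigned VTBits) {
  return K < VTBits / 2;
}

int main() {
  // v8i16 destination: 8 lanes x 16 bits = 128 bits total. The guard
  // compares 17 against 128 / 2 = 64 and allows the fold, even though 17
  // is wider than any single i16 lane.
  assert(oldGuardAllowsFold(17, 128));
}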
Example:
;;
define <8 x i16> @trunc_shift(<8 x i32> %a) {
entry:
  %shl = shl <8 x i32> %a, <i32 17, i32 17, i32 17, i32 17, i32 17, i32 17, i32 17, i32 17>
  %conv = trunc <8 x i32> %shl to <8 x i16>
  ret <8 x i16> %conv
}
;;
According to the above-mentioned rule, it is valid to convert the trunc + shift-by-constant into a shift-by-constant of the truncated value:
(v8i16 (trunc (shl (v8i32 %a), <17,17,17,17,17,17,17,17>)))
-->
(v8i16 (shl (v8i16 (trunc (v8i32 %a))), <17,17,17,17,17,17,17,17>))
The problem is that the new "shl" is undefined: the shift count (17) exceeds the element size in bits of the result type (16). So, the DAG combiner would later replace the shift node with an 'undef' value.
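As a sanity check, here is a minimal standalone C++ sketch of one 32-to-16-bit lane (the helper names are mine), showing that the two forms agree exactly when the shift count is below the narrow element width:

#include <cassert>
#include <cstdint>

// trunc (shl x, K) on a single i32 -> i16 lane.
static uint16_t truncOfShl(uint32_t x, unsigned K) {
  return static_cast<uint16_t>(x << K);
}

// shl (trunc x), K on the same lane. The shift is done in 32 bits to keep
// the C++ model well defined; the low 16 bits model the i16 lane value.
static uint16_t shlOfTrunc(uint32_t x, unsigned K) {
  uint32_t t = static_cast<uint16_t>(x);
  return static_cast<uint16_t>(t << K);
}

int main() {
  uint32_t x = 0xDEADBEEF;
  // For every K < 16 the two forms compute the same lane value, so sinking
  // the truncate through the shift is safe.
  for (unsigned K = 0; K < 16; ++K)
    assert(truncOfShl(x, K) == shlOfTrunc(x, K));
  // For K = 17 this C++ model still computes a value (zero), but in LLVM IR
  // the narrow form is 'shl <8 x i16> ..., 17', whose shift amount is out of
  // range for i16; that is exactly the undef the regression produced.
}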
The combine rule should have been written instead like this:
// trunc (shl x, K) -> shl (trunc x), K => K < vt.getScalarSizeInBits()
Basically, if K is smaller than the scalar size in bits of the truncated type, then "sinking" the truncate into the operand of the shift can never accidentally make the new shift undefined: the shift amount is guaranteed to be in range for the narrow element type.
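Modeled the same way as before (again, hypothetical helper names rather than the in-tree code), the corrected check rejects the problematic fold while still allowing in-range shifts to be narrowed:

#include <cassert>

// Models the corrected guard "K < vt.getScalarSizeInBits()", where
// VTScalarBits is the element width in bits of the truncated value type.
static bool newGuardAllowsFold(unsigned K, unsigned VTScalarBits) {
  return K < VTScalarBits;
}

int main() {
  // v8i16 destination: the element width is 16 bits.
  assert(!newGuardAllowsFold(17, 16)); // the problematic fold is rejected
  assert(newGuardAllowsFold(8, 16));   // in-range shifts are still narrowed
}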
This patch fixes the check on the shift count, and adds a test case to show that we no longer fold the entire computation to undef.
Please let me know if this is okay to commit.
Thanks,
Andrea