Given an MLA reduction whose two operands are extended from different types (say i8 and i16), we were previously failing to find the reduction pattern, often making us choose the lower vectorization factor. This patch improves that by using the larger of the two extension types, allowing us to use the larger VF as the type of the reduction.
As per https://godbolt.org/z/KP549EEYM the backend handles this well, leading to better performance.
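For reference, a minimal C++ sketch of the kind of mixed-extension reduction loop this targets (the function name and exact signature are illustrative, not taken from the linked example or the tests):

```cpp
#include <cstdint>

// Multiply-accumulate reduction where the two multiply operands are extended
// from different widths (i8 and i16) before being widened to the i32
// accumulator. Matching the pattern off the wider (i16) extension lets the
// vectorizer still pick the larger VF rather than falling back to the i8 one.
int32_t mla_i8_i16(const int8_t *a, const int16_t *b, int n) {
  int32_t sum = 0;
  for (int i = 0; i < n; ++i)
    sum += a[i] * b[i]; // sext i8 -> i32, sext i16 -> i32, mul, add
  return sum;
}
```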
Is there any rationale for using the largest instead of the smallest type here? And would changing it to the smallest change the result of the test?