The motivating case, and the only one actually enabled by this patch, is a load or store followed by another op with the same SEW/LMUL ratio.
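The compatibility condition above hinges on the SEW/LMUL ratio, which fixes VLMAX for a given VLEN. A minimal sketch (an illustration, not the LLVM implementation) of that ratio check:

```python
# Hypothetical helper (not from InsertVSETVLI): two vector configurations
# are candidates for this rewrite when their SEW/LMUL ratios match, since
# the ratio determines VLMAX for a given VLEN.
from fractions import Fraction

def sew_lmul_ratio(sew, lmul):
    """sew in bits (e.g. 16 for e16); lmul as a Fraction (e.g. 1/4 for mf4)."""
    return Fraction(sew) / lmul

# The pairing from this patch: e16/mf4 and e32/mf2 share a ratio of 64,
# so a single vsetvli can serve both instructions.
assert sew_lmul_ratio(16, Fraction(1, 4)) == sew_lmul_ratio(32, Fraction(1, 2)) == 64
```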
As an example, consider:
define void @test1(ptr %in, ptr %out) {
entry:
  %0 = load <8 x i16>, ptr %in, align 2
  %1 = sext <8 x i16> %0 to <8 x i32>
  store <8 x i32> %1, ptr %out, align 4
  ret void
}
Without this patch, we get:
vsetivli zero, 8, e16, mf4, ta, mu
vle16.v v8, (a0)
vsetvli zero, zero, e32, mf2, ta, mu
vsext.vf2 v9, v8
vse32.v v9, (a1)
ret
Whereas with the patch we get:
vsetivli zero, 8, e32, mf2, ta, mu
vle16.v v8, (a0)
vsext.vf2 v9, v8
vse32.v v9, (a1)
ret
We have rewritten the first vsetvli and thus removed the second one.
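The rewrite above can be modeled as coalescing adjacent configurations with a matching SEW/LMUL ratio into the later one. A toy sketch under that assumption (this is not the actual InsertVSETVLI code; `coalesce` and its representation of configs are made up for illustration):

```python
# Toy model: walk vsetvli configs in program order and, when the next
# config keeps the same SEW/LMUL ratio as the previous one, rewrite the
# earlier vsetvli to the later config; the second vsetvli then becomes
# redundant and is dropped.
from fractions import Fraction

def coalesce(configs):
    """configs: list of (sew_bits, lmul_fraction) pairs in program order."""
    out = []
    for sew, lmul in configs:
        if out and Fraction(out[-1][0]) / out[-1][1] == Fraction(sew) / lmul:
            out[-1] = (sew, lmul)  # rewrite the earlier vsetvli in place
        else:
            out.append((sew, lmul))
    return out

# e16/mf4 followed by e32/mf2 collapses into a single e32/mf2 config,
# mirroring the before/after assembly in this patch.
assert coalesce([(16, Fraction(1, 4)), (32, Fraction(1, 2))]) == [(32, Fraction(1, 2))]
```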
As is strongly hinted by the code structure and TODOs, I am planning on commoning this with all (or most?) of the cases from isCompatible used in the forward data flow. This will be done in a series of follow-up changes - some NFC reworks, and some reviewed optimization extensions.