These patterns for i8 and i16 VMLAs were missing. They arise from legalized vector.reduce.add.v8i16 and vector.reduce.add.v16i8, and although the instruction works differently (the mul and add are performed at a higher precision), I believe this is OK because only the low i8/i16 bits of the result are demanded, so the results will be the same. At least, they pass any testing I can think to run on them.
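For illustration, here is a minimal sketch (a hypothetical function, not taken from the in-tree tests) of the kind of IR that should now select to these instructions: a mul feeding llvm.vector.reduce.add, where only the i16 result is used. The v16i8 case is analogous.

```llvm
; Hypothetical example: an i16 multiply-accumulate reduction over v8i16.
; The hardware accumulates at a higher precision, but only the low 16 bits
; are demanded here, so the result is unchanged.
define i16 @vmla_v8i16(<8 x i16> %a, <8 x i16> %b, i16 %acc) {
  %m = mul <8 x i16> %a, %b
  %r = call i16 @llvm.vector.reduce.add.v8i16(<8 x i16> %m)
  %s = add i16 %r, %acc
  ret i16 %s
}
declare i16 @llvm.vector.reduce.add.v8i16(<8 x i16>)
```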
There are some tests that end up looking worse, but they are quite artificial, passing half-width vector types through a call boundary. I would not expect the vmull to realistically come up like that, and a vmlava is likely better a lot of the time.