This commit implements an IR-level optimization to eliminate SVE
mul/fmul intrinsic calls that multiply by one and are therefore no-ops.
Currently, the following patterns are captured:
  fmul pg (dup_x 1.0) V    => V
  mul  pg (dup_x 1)   V    => V
  fmul pg V (dup_x 1.0)    => V
  mul  pg V (dup_x 1)      => V
  fmul pg V (dup v pg 1.0) => V
  mul  pg V (dup v pg 1)   => V
The result of this commit is that code such as:
  #include <arm_sve.h>

  svfloat64_t foo(svfloat64_t a) {
    svbool_t t = svptrue_b64();
    svfloat64_t b = svdup_f64(1.0);
    return svmul_m(t, a, b);
  }
will lower to a nop.
This commit does not capture all possibilities, only the simple cases
described above; there is still room for further optimisation.
nit: Would this be better named Pg instead of Op0, so it's more obvious that Op1 and Op2 are the integer/FP vector inputs?