This lowers the cost for FADD, FSUB, and FNEG. The motivation is to avoid over-eager SLP vectorisation that looks profitable to the cost model but results in significant slowdowns. Lowering the cost of scalar FADD and FSUB helps the profitability decision favour the scalar version where vectorisation isn't beneficial. Performance results show a 7% improvement for Imagick from SPEC FP 2017, a small improvement in Blender, and unchanged results for the other SPEC apps. RAJAPerf is neutral (mostly no changes).
For a bit more context, this is related to the over-eager vectorisation in https://github.com/llvm/llvm-project/issues/61047, but the motivating test case is slightly different.
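Concretely, the idea is to stop scalar FADD/FSUB from falling through to the generic, more expensive handling in the cost model. A minimal sketch of what that could look like, assuming this lands in `AArch64TTIImpl::getArithmeticInstrCost` (the returned cost value here is illustrative, not taken from the patch):

```cpp
// Sketch only: inside the ISD opcode switch of
// AArch64TTIImpl::getArithmeticInstrCost. Returning LT.first reports a
// legal fadd/fsub as costing 1 per legalized part instead of the larger
// default, so SLP's scalar-vs-vector comparison favours the scalar code
// when vectorisation isn't actually a win.
std::pair<InstructionCost, MVT> LT = getTypeLegalizationCost(Ty);
switch (ISD) {
case ISD::FADD:
case ISD::FSUB:
  return LT.first;
  // ... remaining opcodes unchanged ...
}
```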
It is probably worth adding ISD::FNEG as well, and we might need to be more careful about the types: FP128 shouldn't be cheap, for example. Possibly a higher cost for fp16/bf16 too, when they are not available?
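Something along these lines for the extra guards, where the subtarget feature checks and the 2x multiplier are assumptions to illustrate the shape rather than final values:

```cpp
// Sketch: include FNEG, charge extra for half/bfloat when the
// corresponding instructions aren't available, and keep fp128 (a
// libcall) on the default, expensive path.
case ISD::FNEG:
case ISD::FADD:
case ISD::FSUB:
  if ((Ty->getScalarType()->isHalfTy() && !ST->hasFullFP16()) ||
      (Ty->getScalarType()->isBFloatTy() && !ST->hasBF16()))
    return LT.first * 2;
  if (!Ty->getScalarType()->isFP128Ty())
    return LT.first;
  [[fallthrough]]; // fp128: fall back to the default handling
```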