Differential D140530
[RISCV] Add integer scalar instructions to isAssociativeAndCommutative
Closed, Public. Authored by HsiangKai on Dec 22 2022, 1:01 AM.

Summary
Inspired by D138107. We can add ADD, AND, OR, XOR, and MUL to isAssociativeAndCommutative so that the existing MachineCombiner pass can reassociate these operations and increase instruction-level parallelism.
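To make the summary concrete, here is a minimal sketch of the kind of change it describes, not the committed diff itself: the RISC-V override of isAssociativeAndCommutative in llvm/lib/Target/RISCV/RISCVInstrInfo.cpp reports the plain integer opcodes as reassociable alongside the pre-existing fast-math-gated FP cases. The exact opcode list and surrounding code in the landed patch may differ (for example, W-form opcodes such as ADDW/MULW and half-precision FP cases are not shown here).

```cpp
// Sketch only: the real RISCVInstrInfo.cpp contains additional cases and
// context. MachineCombiner queries this hook to decide whether it may
// reassociate a chain of identical operations.
bool RISCVInstrInfo::isAssociativeAndCommutative(const MachineInstr &Inst,
                                                 bool Invert) const {
  switch (Inst.getOpcode()) {
  default:
    return false;
  // Pre-existing floating-point handling: reassociation is only legal when
  // the reassoc/nsz fast-math flags are present on the instruction.
  case RISCV::FADD_S:
  case RISCV::FADD_D:
  case RISCV::FMUL_S:
  case RISCV::FMUL_D:
    return Inst.getFlag(MachineInstr::MIFlag::FmReassoc) &&
           Inst.getFlag(MachineInstr::MIFlag::FmNsz);
  // What this patch adds: integer add, bitwise logic, and multiply are
  // associative and commutative unconditionally, so no flags are needed.
  case RISCV::ADD:
  case RISCV::AND:
  case RISCV::OR:
  case RISCV::XOR:
  case RISCV::MUL:
    return true;
  }
}
```

With the hook returning true, MachineCombiner can rewrite a serial chain such as ((a + b) + c) + d into (a + b) + (c + d), shortening the dependence chain from three additions to two on a core that can issue the independent additions in parallel.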
Diff Detail
Event Timeline
Herald added subscribers: sunshaoce, VincentWu, armkevincheng and 29 others.
Remove MULH, MULHU from the list.
Have you had a chance to make some performance measurements?
Quoting from D138107: "I ran the C/C++ benchmarks in SPECrate 2017 on a Fujitsu A64FX processor, which has two pipelines each for integer operations and for SIMD/FP operations. 511.povray_r had a 4% improvement. Other benchmarks (int: 500, 502, 505, 520, 523, 525, 531, 541, 557; fp: 508, 510, 519, 538, 544) were within 1% up or down. For a synthetic benchmark, it doubled the performance."

I have no performance numbers for multiple-issue RISC-V machines, so I am not sure what the impact of this patch is. Can anyone help measure it?
By the way, the numbers from D138107 include SIMD/SVE instruction patterns, whereas we only implement the scalar part here. (Vector instructions have more than two input operands, so they are not applicable.)
This revision is now accepted and ready to land. Dec 28 2022, 8:24 PM
This revision was landed with ongoing or failed builds. Dec 29 2022, 3:59 AM
Closed by commit rG002005e6740e: [RISCV] Add integer scalar instructions to isAssociativeAndCommutative (authored by HsiangKai).
This revision was automatically updated to reflect the committed changes.
Revision Contents
Diff 485602
llvm/lib/Target/RISCV/RISCVInstrInfo.cpp
llvm/test/CodeGen/RISCV/addc-adde-sube-subc.ll
llvm/test/CodeGen/RISCV/addcarry.ll
llvm/test/CodeGen/RISCV/addimm-mulimm.ll
llvm/test/CodeGen/RISCV/alu64.ll
llvm/test/CodeGen/RISCV/bswap-bitreverse.ll
llvm/test/CodeGen/RISCV/calling-conv-ilp32-ilp32f-common.ll
llvm/test/CodeGen/RISCV/calling-conv-ilp32-ilp32f-ilp32d-common.ll
llvm/test/CodeGen/RISCV/calling-conv-lp64-lp64f-lp64d-common.ll
llvm/test/CodeGen/RISCV/compress.ll
llvm/test/CodeGen/RISCV/copysign-casts.ll
llvm/test/CodeGen/RISCV/div-by-constant.ll
llvm/test/CodeGen/RISCV/div-pow2.ll
llvm/test/CodeGen/RISCV/div.ll
llvm/test/CodeGen/RISCV/fpclamptosat.ll
llvm/test/CodeGen/RISCV/fpclamptosat_vec.ll
llvm/test/CodeGen/RISCV/iabs.ll
llvm/test/CodeGen/RISCV/machine-combiner.ll
llvm/test/CodeGen/RISCV/mul.ll
llvm/test/CodeGen/RISCV/neg-abs.ll
llvm/test/CodeGen/RISCV/rv32zbb.ll
llvm/test/CodeGen/RISCV/rv64zbb.ll
llvm/test/CodeGen/RISCV/rvv/fixed-vectors-elen.ll
llvm/test/CodeGen/RISCV/rvv/fixed-vectors-unaligned.ll
llvm/test/CodeGen/RISCV/sadd_sat.ll
llvm/test/CodeGen/RISCV/sadd_sat_plus.ll
llvm/test/CodeGen/RISCV/select-binop-identity.ll
llvm/test/CodeGen/RISCV/shadowcallstack.ll
llvm/test/CodeGen/RISCV/split-udiv-by-constant.ll
llvm/test/CodeGen/RISCV/srem-lkk.ll
llvm/test/CodeGen/RISCV/srem-seteq-illegal-types.ll
llvm/test/CodeGen/RISCV/srem-vector-lkk.ll
llvm/test/CodeGen/RISCV/ssub_sat.ll
llvm/test/CodeGen/RISCV/ssub_sat_plus.ll
llvm/test/CodeGen/RISCV/uadd_sat.ll
llvm/test/CodeGen/RISCV/uadd_sat_plus.ll
llvm/test/CodeGen/RISCV/umulo-128-legalisation-lowering.ll
llvm/test/CodeGen/RISCV/unaligned-load-store.ll
llvm/test/CodeGen/RISCV/urem-lkk.ll
llvm/test/CodeGen/RISCV/urem-seteq-illegal-types.ll
llvm/test/CodeGen/RISCV/urem-vector-lkk.ll
llvm/test/CodeGen/RISCV/usub_sat.ll
llvm/test/CodeGen/RISCV/usub_sat_plus.ll
llvm/test/CodeGen/RISCV/vararg.ll
llvm/test/CodeGen/RISCV/wide-scalar-shift-by-byte-multiple-legalization.ll
llvm/test/CodeGen/RISCV/wide-scalar-shift-legalization.ll
llvm/test/CodeGen/RISCV/xaluo.ll
Inline comment on the diff: swap the order of these conditions. There's no reason to call hasEqualFRM if we already know both indices are negative.
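For context, this comment asks for a cheap check to short-circuit a more expensive helper call. Below is a minimal sketch of that reordering, assuming the check combines the frm (rounding-mode) operand indices of two instructions with a call to the hasEqualFRM helper; the function name canReassociatePair and the variable names are illustrative, since the actual code under review is not shown in this excerpt.

```cpp
// Illustrative sketch, not the exact code under review. A negative index
// from getNamedOperandIdx means the instruction has no frm operand, which
// is the case for the integer opcodes added by this patch.
static bool canReassociatePair(const MachineInstr &Inst,
                               const MachineInstr &Sibling) {
  int16_t InstFrmOpIdx =
      RISCV::getNamedOperandIdx(Inst.getOpcode(), RISCV::OpName::frm);
  int16_t SiblingFrmOpIdx =
      RISCV::getNamedOperandIdx(Sibling.getOpcode(), RISCV::OpName::frm);

  // Cheap test first: if neither instruction carries a rounding-mode
  // operand, rounding cannot matter and hasEqualFRM need not run at all.
  if (InstFrmOpIdx < 0 && SiblingFrmOpIdx < 0)
    return true;

  // Only reached when at least one frm operand exists.
  return hasEqualFRM(Inst, Sibling);
}
```

Phrased as a single boolean expression, the request amounts to writing (InstFrmOpIdx < 0 && SiblingFrmOpIdx < 0) || hasEqualFRM(Inst, Sibling) rather than the reverse order, so that || short-circuits past the helper call in the common integer case.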