Diff Detail
- Repository: rG LLVM Github Monorepo
Event Timeline
llvm/lib/Target/LoongArch/LoongArchISelLowering.h
38–39: Are you placing ROTR before ROTL because it's the native instruction? Because alphabetically it would be the opposite. Not a big concern though.
llvm/test/CodeGen/LoongArch/rotl-rotr.ll
6: Isn't an N-bit left-rotate equivalent to a (GRLen - N)-bit right-rotate? I think you can just transform ROTL to the equivalent ROTR and get the optimization for free. In other words, at instruction selection time there shouldn't be any ROTLs left around.
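For reference, the equivalence in question is rotl(x, N) == rotr(x, (GRLen - N) mod GRLen). A minimal standalone C++ check of that identity for the 32-bit case (the helpers are illustrative, not code from the patch):

```cpp
#include <cassert>
#include <cstdint>

// Illustrative helpers, not code from the patch: 32-bit rotate left/right
// with the amount reduced modulo the bit width so N == 0 stays well defined.
static uint32_t rotl32(uint32_t X, unsigned N) {
  N %= 32;
  return N == 0 ? X : (X << N) | (X >> (32 - N));
}

static uint32_t rotr32(uint32_t X, unsigned N) {
  N %= 32;
  return N == 0 ? X : (X >> N) | (X << (32 - N));
}

int main() {
  const uint32_t X = 0xDEADBEEF;
  for (unsigned N = 0; N < 32; ++N) {
    // rotl(X, N) == rotr(X, (32 - N) % 32), which is why ROTL nodes can be
    // rewritten as ROTR when only the right-rotate is a native instruction.
    assert(rotl32(X, N) == rotr32(X, (32 - N) % 32));
  }
  return 0;
}
```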
llvm/lib/Target/LoongArch/LoongArchISelLowering.h
38–39: Yes, thank you for the reminder. I'll sort them alphabetically.
llvm/test/CodeGen/LoongArch/rotl-rotr.ll
6: Yes, DAGCombiner already performs this optimization. For example, when the rotl_32 function is compiled with --mtriple=loongarch32, DAGCombiner produces ISD::ROTR, and the same optimization happens in other functions such as rotl_i32_fshl. However, in some cases, such as compiling rotl_32 with --mtriple=loongarch64, the backend still needs the optimization you mentioned. To avoid making this patch too large and harder to review, I will implement that optimization in a separate patch. Thanks :)
I assume the remaining unoptimized LA64 cases are to be tackled in a later patch. For now this looks good. Thanks!
llvm/test/CodeGen/LoongArch/rotl-rotr.ll
374: @gonglingqin You might want to review these tests, as the out-of-bounds shifts mean they generate poison (which is exploited by rGae60706da07a128) - maybe use the fshl/fshr intrinsics?
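For readers following along: a rotate spelled as two shifts plus an or has an out-of-range shift when the rotate amount is 0 (the other shift amount becomes the full bit width), which is undefined behavior in C/C++ and poison in LLVM IR, whereas the llvm.fshl/llvm.fshr funnel-shift intrinsics (and clang's rotate builtins, which lower to them) are defined for every amount. An illustrative C++ sketch, not taken from the test file:

```cpp
#include <cstdint>
#include <cstdio>

// Naive rotate: when N == 0 the right shift is by 32 bits, which is
// undefined behavior in C/C++ and corresponds to an out-of-bounds (poison)
// shift in LLVM IR.
static uint32_t rotl_naive(uint32_t X, unsigned N) {
  return (X << N) | (X >> (32 - N)); // out-of-range shift when N == 0
}

// Funnel-shift based rotate: clang's builtin lowers to llvm.fshl.i32 with
// both data operands equal and is defined for every rotate amount.
static uint32_t rotl_fsh(uint32_t X, unsigned N) {
  return __builtin_rotateleft32(X, N);
}

int main() {
  printf("0x%08x\n", rotl_fsh(0xDEADBEEF, 8)); // 0xADBEEFDE
  printf("0x%08x\n", rotl_fsh(0xDEADBEEF, 0)); // 0xDEADBEEF, no UB
  // rotl_naive(0xDEADBEEF, 0) would invoke the out-of-range shift.
  (void)rotl_naive;
  return 0;
}
```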