The order of stack frame objects determines the size of their offsets relative to sp/fp. Shorter offsets make it more likely that the instructions referencing them can be compressed, and require fewer instructions to materialize the offset immediate. Reordering the stack objects with a suitable cost model can therefore improve code size.
A precise cost model would add considerable complexity for little overall gain, so we reuse X86's cost model based on estimated density:

density = ObjectNumUses / ObjectSize
ObjectNumUses is the number of instructions that use the frame object, and ObjectSize is the size of the frame object. The difference from x86 is that we give double weight to ld/st instructions, because they are more likely to be compressible.
Without the extra weight for ld/st, code size regresses in some test cases: the more compressible ld/st instructions end up with offsets too large to be compressed. The double weight is an estimate; other weights may do better in some cases.
The ordering algorithm then gives frame objects with higher density the shorter offsets relative to sp/fp.
The heuristic is gated on a compressed-instruction extension being enabled:

return STI.hasStdExtCOrZca() || STI.hasStdExtZce();