On AArch64 we generate redundant G_SEXTs or G_SEXT_INREGs because of this.
Repository: rG LLVM Github Monorepo
Event Timeline
This doesn't seem necessary for correctness? I think we do need some optimizing combines in the legalizer itself, but I assumed we would start putting those in a distinct place.
True, I suppose we need to be stricter about these combines. I'll move this into the post legalizer combiner.
Moved the combine to the post-legalizer combiner; it now matches what ends up coming out of the legalizer for AArch64, which is a G_SEXT_INREG.
llvm/lib/CodeGen/GlobalISel/CombinerHelper.cpp:588 (On Diff #271520): This is just a special case of checking computeNumSignBits? Do we recognize the number of sign bits in a sextload yet?
llvm/lib/CodeGen/GlobalISel/CombinerHelper.cpp:588 (On Diff #271520): We don't, but using computeNumSignBits doesn't tell us how to find the original def in the chain that has redundantly been sign extended. E.g. in this case we won't know how to look through the truncate.
llvm/lib/CodeGen/GlobalISel/GISelKnownBits.cpp:447 (On Diff #274927): I think this needs to be careful about a vector sextload.
I reverted this because it turns out that computeNumSignBits() doesn't actually do what we need here. Its definition of sign bits is any top-most bits which are known to be identical. Because of that, it also matches G_ZEXTLOAD: the known-zero bits at the top are returned as "sign bits". This behavior seems to be consistent with the SDAG known-bits implementation too.
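That definition of "sign bits" can be modeled directly (a minimal sketch, assuming the usual convention that the count includes the sign bit itself): a zero-extended value has known-zero high bits, which are all identical, so they count as sign bits even though no sign extension took place.

```python
def num_sign_bits(value, width):
    """Count the topmost bits identical to the sign bit, sign bit included."""
    sign = (value >> (width - 1)) & 1
    count = 0
    for i in range(width - 1, -1, -1):
        if ((value >> i) & 1) == sign:
            count += 1
        else:
            break
    return count

# An 8-bit zero-extend of a 4-bit value: the top four bits are known
# zero, so four "sign bits" are reported despite there being no
# sign extension anywhere in the chain.
assert num_sign_bits(0b00001111, 8) == 4
```

This is exactly why the query cannot distinguish a G_SEXTLOAD from a G_ZEXTLOAD here: both produce a run of identical top bits.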
I committed the original version of this patch in 3b10e42ba1a3ff97346bc51f13195ed9595800f4, with the vector bail-out added. Let me know if I should change it post-commit.