At the moment, <vscale x 1 x eltty> types are not yet fully handled by the
code-generator, so to avoid vectorizing loops with that VF, we set the
cost-model to return some high (expensive) value for now. The reason for not
adding a new "TTI::getMinimumScalableVF" is that the type is supposed
to be one that can be legalized. It partially is, although the support
for these types needs some more work.
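A minimal sketch of the stop-gap described above, assuming a TTI-style cost hook (the function name and the placement of the 9999 value are illustrative, not the exact code in this patch):

```cpp
// Sketch only: an illustrative TTI-style hook, not the exact code in this
// patch. The function name is made up; 9999 matches the stop-gap value
// discussed below.
#include "llvm/IR/DerivedTypes.h"
#include "llvm/Support/Casting.h"
#include "llvm/Support/InstructionCost.h"

using namespace llvm;

InstructionCost getScalableTypeCost(Type *Ty) {
  if (auto *VTy = dyn_cast<ScalableVectorType>(Ty)) {
    // <vscale x 1 x eltty> is legal in principle, but the code-generator
    // does not handle it fully yet, so make it look prohibitively expensive
    // to stop the LoopVectorizer from picking VF = vscale x 1.
    if (VTy->getMinNumElements() == 1)
      return InstructionCost(9999);
  }
  return InstructionCost(1); // placeholder for the real cost calculation
}
```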
Event Timeline
Personally, for the testing I'd pick a different element type than i128, given that it is already problematic.
Why 9999? Why not an invalid cost? I thought that was what invalid was for, so we didn't need hacky random numbers like this.
We've chosen not to conflate legalization and cost-modelling, so the cost-model must return a valid cost when the legalization phase says that a given VF is legal to use. The benefit of that is that it guards the requirement to have a complete (albeit not necessarily accurate) cost-model.

For SVE, a VF of vscale x 1 is legal in principle, meaning that we should be able to code-generate it. But because our code-generator is incomplete, we want to avoid using these VFs. I haven't added an interface to query the minimum legal VF, since I know we'll remove that interface at some point. The LV expects all calculated costs to be valid and will fail this assertion otherwise. So basically return <magic number> is just a temporary stop-gap to avoid triggering that (and other) assertions.
Does that make more sense?
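For context on what the assertion guards, InstructionCost carries a validity state alongside the numeric value. A small sketch of the distinction (the surrounding function is just for illustration):

```cpp
// Sketch: the difference between a very expensive cost and an invalid one,
// using LLVM's InstructionCost directly.
#include "llvm/Support/InstructionCost.h"
#include <cassert>

using namespace llvm;

void validVsInvalid() {
  InstructionCost Expensive(9999);                         // valid, just huge
  InstructionCost Invalid = InstructionCost::getInvalid(); // no cost at all

  assert(Expensive.isValid() && !Invalid.isValid());
  // The LoopVectorizer currently asserts isValid() on the cost of any VF that
  // the legality checks accepted, which is why the patch initially returns a
  // large valid number rather than Invalid.
}
```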
I would expect the vectorizer, when encountering an invalid cost, to not pick that vector factor. Not crash. Otherwise I'm not sure I understand what an invalid cost means or is for?
I'm not sure I'm a huge fan of the proliferation of isLegal.. methods. They feel poorly defined, and we have two places to return one bit of information. But I can see the advantage in ruling out large swaths of code, and stopping vectorization early when we already know it won't be valid, before having to do any expensive vplan construction. I feel we should still have invalid costs blocking vectorization at those factors though, even if it does get as far as the cost model.
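On the selection side, what's being suggested would look roughly like this (a hedged sketch of the idea, not the actual LoopVectorizer code; the candidate type and function name are made up):

```cpp
// Hedged sketch: treat an invalid cost as "do not pick this VF" rather than
// asserting. Types and names are illustrative only.
#include "llvm/Support/InstructionCost.h"
#include <vector>

using namespace llvm;

struct Candidate {
  unsigned VF;          // vectorization factor
  InstructionCost Cost; // cost computed for this VF
};

unsigned pickBestVF(const std::vector<Candidate> &Candidates) {
  unsigned BestVF = 1; // fall back to scalar
  InstructionCost BestCost = InstructionCost::getMax();
  for (const Candidate &C : Candidates) {
    if (!C.Cost.isValid())
      continue; // skip VFs the cost model could not cost
    if (C.Cost < BestCost) {
      BestCost = C.Cost;
      BestVF = C.VF;
    }
  }
  return BestVF;
}
```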
My feeling is that the vectoriser should never try to vectorise a loop that is impossible to vectorise, with the act of costing a vectorisation candidate effectively saying the loop is safe to vectorise. So a way to guard this is to ensure that when costing a vectorisation candidate we force the need for a real cost. It's just unfortunate that when it comes to scalable vectorisation there are more scenarios that lead to loops that are not vectorisable (or rather, loops where scalable VFs must be pulled from the list of candidate VFs).
Now I agree that with this patch we're not exactly following this design goal but for SVE this is currently an experimental (default off) feature and thus we have wiggle room to do the wrong thing for the right reasons. In this instance it is more valuable to allow a base level of experimental vectorisation before all the code generation bugs are resolved. As Sander said, we could fix this by honouring the design (i.e. as part of the is legal checks) but that in itself would be a redundant interface once the code generation bug is fixed.
I think I understand that, but I'm afraid I'm still with Dave, as return 9999 looks so extremely hacky.
I.e., I understand return <magic number> is a stop-gap, but why can we not use return Invalid instead as a stop-gap? I do see the point about conflating legalization and cost-modelling, but I don't think that is really important here, as we are talking about plugging a hole, not about a design decision that we want to keep. Thus, I don't think we would be conflating things if we return Invalid here, if that works, and I would also vote for that. The comments where return 9999 is done already contain the required explanation of what is going on.
It seems like a simpler overall route to remove the assert from the vectorizer, and allow it to deal with invalid costs. But I am not blocking this patch.
(Although, you might want to pick a number higher than 9999, it's not particularly high. And like Sjoerd suggested giving it a name might be helpful).
Agreed with everything Dave said.
Removing the assert for invalid costs seems easiest to me. And I am also not blocking this, but if you want to go ahead with the 9999, we should at least define and use a constant
CM_SVE_EXPENSIVE_HACK
or something along those lines, and document what the idea and the plan is (if that isn't there already somewhere).
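If the magic number stays, the suggestion amounts to something like the following (the constant name is taken from the comment above and the surrounding function is a placeholder, not code from the patch):

```cpp
// Placeholder constant and function; only meant to show how the magic number
// could be named and documented.
#include "llvm/Support/InstructionCost.h"

// TODO: drop this once the code-generator fully handles <vscale x 1 x eltty>.
static constexpr llvm::InstructionCost::CostType CM_SVE_EXPENSIVE_HACK = 9999;

llvm::InstructionCost costForUnhandledScalableType() {
  return llvm::InstructionCost(CM_SVE_EXPENSIVE_HACK);
}
```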
As far as I understand, the backend currently doesn't handle <vscale x 1 x ...> types? Is "invalid" not a better way to represent that, than "very expensive"? Even if it is only until the backend is fixed.
Use getInvalid() instead of getMax().
The downside is that this will increase the number of instructions printed
in the opt-remark (D105806) but I agree that it seems more correct to
mark them as Invalid, since we can't safely code-generate these types
at the moment.
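The change described above amounts to roughly this (a sketch; the enclosing function is not part of the excerpt and its name is made up):

```cpp
// Rough sketch of switching from a "very expensive" cost to an invalid one.
#include "llvm/Support/InstructionCost.h"

using namespace llvm;

InstructionCost costOfUnsupportedScalableType() {
  // Before: pretend the type is merely very expensive.
  //   return InstructionCost::getMax();
  // After: report that it cannot be costed at all, since <vscale x 1 x ...>
  // cannot be safely code-generated yet; the vectorizer then drops that VF.
  return InstructionCost::getInvalid();
}
```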