The TOSA-to-Linalg conversion crashes when the input tensor has a float element type other than fp32.
This happens because tosa.clamp and tosa.reluN carry fp32 min/max attributes, which were materialized as arith.constant ops using the attribute's type rather than the input tensor's element type.
This commit fixes the crash by deriving the float constant type from the input tensor's element type.
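As a minimal illustration of the failure mode (this example is mine, not part of the patch; attribute and syntax details may differ slightly by MLIR version), consider a clamp on an f16 tensor. The min_fp/max_fp attributes are still f32, so the lowering must not reuse the attribute type when building the constants:

```mlir
// Hypothetical input IR: clamp on an f16 tensor, with f32 min/max attributes.
func.func @clamp_f16(%arg0: tensor<4xf16>) -> tensor<4xf16> {
  %0 = "tosa.clamp"(%arg0) {min_fp = 0.0 : f32, max_fp = 6.0 : f32,
                            min_int = 0 : i64, max_int = 6 : i64}
       : (tensor<4xf16>) -> tensor<4xf16>
  return %0 : tensor<4xf16>
}
```

Before the fix, the min/max constants were created with the attribute's f32 type and could not be combined with the f16 operands; after the fix they are created with the tensor's element type (f16 here).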
Event Timeline
In "things I'm just now noticing now that we're trying to send this upstream", should we add a test?
This doesn't directly impact this change, but in the TOSA specification, we've removed the ReluN operator, since it's trivially implementable with Clamp. I will look at removing ReluN from the dialect to line up with the specification.
@jpienaar Would you be willing to approve this, or do you not make sense as a reviewer?
It looks good to me, although I don't have commit rights, so my approval may not be all you need.
As another side note, we recently released a new version of the TOSA specification, which calls out non-fp32 data types directly for operators and their attributes: https://www.mlplatform.org/tosa/tosa_spec.html. Our goal is to do some fine-tuning to make sure the specification side looks good, and then align the dialect with it. This should lead to better fp16 support in the long run.