User Details
- User Since
- Oct 30 2020, 9:58 AM
Fri, Mar 24
Adding @rsuderman, the author of the existing fold code, as a more appropriate reviewer.
Wed, Mar 22
Added Rob as an additional reviewer. As mentioned, this matches the TOSA specification, and it looks like a nice simplification at the same time.
Tue, Mar 21
Given Rob's comments, I think the path forward is to get comments from a wider set of people about making the switch from signless/signed to signed/unsigned, so we aren't surprising any TOSA dialect users when we get to the point of removing signless support. An RFC on Discourse seems like the right way to do that, along with some time to collect responses. All of the lit tests will need to change, which will be a big set of changes to land. Most of it is mechanical, but it's still a lot of overhead.
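For illustration only (this isn't part of any patch here), the distinction in question is between MLIR's builtin signless and explicitly signed/unsigned integer types; a minimal sketch using the MLIR C++ API, with a made-up function name:

```cpp
#include "mlir/IR/BuiltinTypes.h"
#include "mlir/IR/MLIRContext.h"

using namespace mlir;

// Sketch of the type distinction under discussion: the TOSA dialect currently
// uses signless integers (i8), and the proposal is to move to types that carry
// signedness (si8/ui8). All three are distinct builtin types, so every lit
// test that spells out an i8 tensor type would need updating.
void signednessExample() {
  MLIRContext ctx;
  Type signless8 = IntegerType::get(&ctx, 8);                         // i8
  Type signed8   = IntegerType::get(&ctx, 8, IntegerType::Signed);    // si8
  Type unsigned8 = IntegerType::get(&ctx, 8, IntegerType::Unsigned);  // ui8
  (void)signless8;
  (void)signed8;
  (void)unsigned8;
}
```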
Mon, Mar 20
As noted in the thread, I want to make sure that we're moving to remove i8 support in the near future, rather than ending up supporting i8/ui8/si8. As a starting point for the i8 -> si8 move, this change is fine (modulo Rob's comment), but it starts down a path where multiple projects will be affected.
Mon, Mar 13
Thanks for checking Rob. Yes, this looks good to me.
Thu, Mar 9
You're right that F64 isn't in any of the TOSA profiles today, which is why I was discouraging its use in the overall definition of Tosa_Tensor. It's something we could look at adding, but it adds significant requirements to any profile it goes into. It would be good to get a sense of which networks need F64, versus ending up with that type as the default. Many systems that support f64 do so at a performance cost relative to f32, and some systems don't implement it at all. Ideally the tooling would have a way to guide developers to limit their use of f64 to the places where it improves results enough to justify the extra computation cost.
Tue, Mar 7
Looks reasonable to me, although I don't have merge privileges.
Mon, Mar 6
This is definitely an improvement, and I'm okay with this type, but the name reads as only a 64-bit tensor. Perhaps Jacques is better at naming than I am, but I'd go with something closer to Tosa_Tensor_Plus_F64, indicating that this is an extension of Tosa_Tensor with F64 (as opposed to I64).
Feb 9 2023
I don't have commit privileges. @rsuderman or @NatashaKnk, can you take a look?
Feb 7 2023
While waiting for the others to review, you might want to remove the extra include.
Feb 2 2023
Thanks. Looks good to me now.
Can you add a simple check to TosaValidation::runOnOperation that goes into the existing for loop, checks for the float64 type, and calls signalPassFailure there? You don't need to check profileType, as f64 isn't in the spec. This pass is for implementations to use to check against the spec, so dialect-specific changes that aren't in the spec should be flagged when it is run.
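For concreteness, a rough sketch of the kind of check meant here; the helper name and placement are made up, but getElementTypeOrSelf and signalPassFailure are the existing MLIR/pass APIs I have in mind:

```cpp
#include "llvm/ADT/STLExtras.h"
#include "mlir/IR/Operation.h"
#include "mlir/IR/TypeUtilities.h"

// Hypothetical helper for the check described above: returns true if any
// operand or result of `op` has an f64 element type. The existing loop in
// TosaValidation::runOnOperation() could call this, emit an error on the op,
// and then signalPassFailure(), since f64 isn't in the TOSA specification.
static bool usesF64(mlir::Operation *op) {
  auto isF64 = [](mlir::Type type) {
    return mlir::getElementTypeOrSelf(type).isF64();
  };
  return llvm::any_of(op->getOperandTypes(), isF64) ||
         llvm::any_of(op->getResultTypes(), isF64);
}
```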
Jan 26 2023
The TOSA specification doesn't support F64, so I think we should be careful in how we expand the data types in the dialect. Adding it has implications for TOSA consumers.
Dec 15 2022
Decisions on what can be fused are often very hardware specific. I do like that the partitioning is parameterized, so that, if I'm understanding properly, any set of ops can be defined as anchors, along with the leading/trailing ops to be captured. Is a new function the right destination for these? Have you looked at using the ml_program dialect to capture this as a region? I would imagine the overall structure wouldn't need to change significantly. It's at least an option worth considering.
Aug 29 2022
This looks nice. Is anything needed before this can be merged? Presumably any user of the tosa dialect that has lit tests would also need to modify their tests.
Aug 12 2022
Rebased change
Aug 7 2022
Yes, help landing would be appreciated.
Jul 7 2022
It looks good to me, although I don't have commit rights, so this might not be all the approvals you need.
Jun 27 2022
I didn't find any uses of ReluN downstream; it must have been cleaned up already.
This doesn't directly impact this change, but in the TOSA specification, we've removed the ReluN operator, since it's trivially implementable with Clamp. I will look at removing ReluN from the dialect to line up with the specification.
Jun 9 2022
Fix padding in the DecomposeTransposeConv passes to be compatible with the new attribute.
Added a testcase to try to catch future problems with padded transpose_conv.
Jun 8 2022
Fix clang-format issue (clang-format 11 vs. clang-format 10 difference)
Nov 17 2021
Thanks for doing this, I think it's the right solution.
Jul 22 2021
Thanks for the clarification and comment.