Tue, May 30
Fri, May 26
In D151117#4375142, @mehdi_amini wrote: "...to have a working legalization to linalg."
Is there a path from there to linalg? It's not clear to me how that works.
And actually I'm wondering if this lowering could be expressed with a linalg.generic?
Thu, May 25
Thanks for taking this on. Scatter is a complicated op, and it is good to have a working legalization to linalg.
Seems reasonable. It's a fairly complex legalization, but necessary. Thanks for adding this.
Looks good, thanks for the fix.
Wed, May 24
I think the issue is that floor/ceil/clamp don't have the SameOperandsAndResultType trait, which is required by the Idempotent trait. It should be okay to add it for floor/ceil/clamp, as they shouldn't change type.
Looks good, matches what is in the TOSA specification.
Looks good to me.
One minor change, otherwise it looks good to me.
Mon, May 22
Thanks! LGTM now. Let me know if you need help landing this.
Fri, May 19
Thu, May 18
I should be able to help land it, hopefully tomorrow.
Tue, May 16
Looks good to me.
I thought I had accepted this already, but don't see it recorded. LGTM for the reasons described in the discourse thread.
Thanks for the ping, this got lost in my todo list.
Mon, May 15
This seems like a reasonable fold to me, and a good heuristic to start with. Adding @jpienaar in case he sees anything I've missed.
Agreed that this looks good; only changes to the description seem to be needed. As mentioned inline, part of the blame lies with the spec, and I'm going to try to correct the spec to allow legalizations to choose the best option, listing the existing version as one option rather than appearing to be required. (Let me know if there is a benefit to keeping the spec as is.)
LGTM.
Thu, May 11
LGTM.
May 5 2023
LGTM.
May 2 2023
Looks good to me.
May 1 2023
LGTM.
Apr 21 2023
In D148498#4282904, @AviadCo wrote: Well, removing new_shape is a big refactor which I think requires proper discussion if we would like to do so.
I also agree with @jpienaar's comment, but until we decide otherwise I think that infer shape is useful and currently missing functionality to check the element type of the TOSA reshape operation.
We can raise a discussion to get rid of the new_shape attribute, and part of that work will be to revert this change.
Apr 14 2023
In D148296#4270000, @mehdi_amini wrote: Yes, this was somehow intentional: in the spirit of matching the minimum amount of information.
Best would be to use something like CHECK: tosa.const followed by CHECK-SAME: value = 1
Thanks for adding this, Mehdi.
Apr 4 2023
Looks good to me.
Mar 29 2023
Thanks for making all of the changes. Here's an issue if you want to follow along: https://github.com/llvm/llvm-project/issues/61822. It's not assigned to me, but with the issue filed, if you're not working on it I'm less worried that someone else will pick it up without me knowing.
Good catch @mgehre-amd. I had been thinking along a similar path to what you described, your comment helped solidify my thoughts.
If we track fixing the verifier, then I agree that we can drop the check here. In terms of this change, we should update the tests so that if we fix the verifier later, we don't then have to change these tests again. (We're likely to find other tests that fail, but let's make our future work easier 😄.) While doing that, changing the commit message to drop the mention of lower dimension would help avoid confusion.
Mar 28 2023
In D145738#4228947, @sjw36 wrote: @eric-k256 thanks. I didn't see a hard requirement in the spec and was confused by the test above for add_zero_different_shape. It also turns out there are an AddZeroOptimization and a MulOneOptimization that allow mixed-rank folding, which is kind of redundant. Shall I remove those as well?
I like the cleanup and the overall simplification of the code, but the broadcast issue still remains. Not specifically called out, but in addition to the marked issues, updating the commit message would also be good.
Mar 27 2023
Looks good to me. Thanks.
Mar 24 2023
Adding @rsuderman, the author of the existing fold code as a more appropriate reviewer.
Mar 22 2023
Added Rob as an additional reviewer. As mentioned, this matches with the TOSA specification, and looks like a nice simplification at the same time.
Mar 21 2023
Given Rob's comments, I think that the path forward is to get comments from a wider set of people about making the switch to signed/unsigned from signless/signed so we aren't surprising any TOSA dialect users when we get to the point of removing signless support. An RFC on discourse seems to be the right way to do that, along with some time to collect responses. All of the lit tests are going to need to change, which is going to be a big set of changes that need to land. Most of it is mechanical, but still a lot of overhead.
Mar 20 2023
As noted in the thread, I want to make sure that we're moving to remove i8 support in the near future, not supporting i8/ui8/si8. As a starting point to the move of i8 -> si8, this change is fine (mod Rob's comment), but it is starting down a path where multiple projects will be affected.
Mar 13 2023
Thanks for checking Rob. Yes, this looks good to me.
Mar 9 2023
You're right that F64 isn't in any of the TOSA profiles today, which is why I was discouraging its use in the overall definition of Tosa_Tensor. It's something we could look at adding, but it adds significant requirements to any profile it goes into. It would be good to get a sense of which networks need F64, versus ending up with that type as the default. Many systems that support f64 do so at a performance cost relative to f32, and some systems don't implement it at all. Ideally the tooling would have a way to guide developers to minimize their use of f64 to where it gives a significant enough improvement in results to justify the extra computation cost.
Mar 7 2023
Looks reasonable to me, although I don't have merge privileges.
Mar 6 2023
This is definitely an improvement, and I'm okay with this type, but it reads as covering only 64-bit tensors. Perhaps Jacques is better at naming than I am, but I'd go with something closer to Tosa_Tensor_Plus_F64, indicating that this is an extension of Tosa_Tensor with F64 (as opposed to I64).
Feb 9 2023
I don't have commit privileges. @rsuderman or @NatashaKnk, can you take a look?
Feb 7 2023
While waiting for the others to review, you might want to remove the extra include.
Feb 2 2023
Thanks. Looks good to me now.
Can you add a simple check to TosaValidation::runOnOperation that goes into the existing for loop, checks for the float64 type, and does a signalPassFailure there? You don't need to check profileType, as f64 isn't in the spec. This pass is for implementations to use to check against the spec, so dialect-specific changes that aren't in the spec should be flagged when it is run.
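As a rough illustration, here is a minimal sketch of that kind of check, written as a standalone helper that the existing loop in runOnOperation could call before signalling failure. The helper name checkNoFloat64 and the diagnostic text are made up for the example, and the real pass structure upstream may differ.

```cpp
#include "llvm/ADT/STLExtras.h"
#include "mlir/IR/BuiltinTypes.h"
#include "mlir/IR/Operation.h"
#include "mlir/Support/LogicalResult.h"

// Hypothetical helper: flag any op whose operand or result types use an f64
// tensor, since f64 is not part of the TOSA specification. The caller (e.g.
// the per-op loop in TosaValidation::runOnOperation) would invoke
// signalPassFailure() when this returns failure.
static mlir::LogicalResult checkNoFloat64(mlir::Operation *op) {
  auto isF64Tensor = [](mlir::Type type) {
    auto shaped = llvm::dyn_cast<mlir::ShapedType>(type);
    return shaped && shaped.getElementType().isF64();
  };
  if (llvm::any_of(op->getOperandTypes(), isF64Tensor) ||
      llvm::any_of(op->getResultTypes(), isF64Tensor))
    return op->emitOpError("f64 is not supported by the TOSA specification");
  return mlir::success();
}
```

The same type test could of course be inlined directly into the existing loop instead of going through a helper.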
Jan 26 2023
The TOSA specification doesn't support F64, so I think we should be careful in how we expand the data types in the dialect. Adding it has implications for TOSA consumers.
Dec 15 2022
Decisions on what can be fused are often very hardware-specific. I do like that the partitioning is parameterized, so that, if I'm understanding properly, any set of ops can be defined as an anchor, along with leading/trailing ops to be captured. Is a new function the right destination for these? Have you looked at using the ml_program dialect to capture this as a region? I would imagine the overall structure wouldn't need to change significantly. It's at least an option worth considering.
Nov 4 2022
Nov 3 2022
Oct 31 2022
Aug 29 2022
This looks nice. Is there something needed before this can be merged? Presumably any user of the tosa dialect that has lit tests would also need their tests to be modified.
Aug 12 2022
Rebased change
Aug 7 2022
Yes, help landing would be appreciated.
Jul 7 2022
It looks good to me, although I don't have commit rights, so it might not be all the approvals you need.
Jun 27 2022
I didn't find any uses of ReluN downstream; it must have been cleaned up already.
This doesn't directly impact this change, but in the TOSA specification, we've removed the ReluN operator, since it's trivially implementable with Clamp. I will look at removing ReluN from the dialect to line up with the specification.
Jun 9 2022
Fix padding in the DecomposeTransposeConv passes to be compatible with the new attribute.
Added a testcase to try to catch future problems with padded transpose_conv.
Jun 8 2022
Fix clang-format issue (clang-format 11 vs. clang-format 10 difference)
Nov 17 2021
Thanks for doing this, I think it's the right solution.
Jul 22 2021
Thanks for the clarification and comment.
Jul 16 2021
In D105845#2883409, @jpienaar wrote: I don't know the ops that well; I could verify these if interested, but was going to rely on Suraj otherwise :)