User Details
- User Since
- Aug 20 2012, 11:51 AM
Jun 29 2022
@MaheshRavishankar -- is there a risk that this op will be optimized worse because it isn't recognized as readily as the regular depthwise op?
Jun 8 2022
Jun 7 2022
I suspect the g dimension should be contiguous with the C dimension, and I'm not sure if it should be major or minor to it.
May 20 2022
This direction makes sense to me. I am a little out of the loop from the current bufferization interfaces, so I will let someone else review the code in detail.
Apr 20 2022
Apr 19 2022
Apr 11 2022
Can you add a pass / reusable pattern that converts ml_program.func to func.func and the reverse? Otherwise I feel like we are going to roll that ourselves many times downstream just for our initial push in IREE (this can be a separate patch, of course).
Apr 8 2022
Nice :)
Apr 6 2022
You can see my full code here: https://github.com/silvasean/iree/commit/cba1cf90fa8e8cc9b3bdacfd034f7d53c40661ad
Hey River, I was trying to use this downstream and it isn't working for me.
Thanks for fixing this River :)
Apr 1 2022
Thanks River!
Mar 31 2022
Left a comment there.
Mar 21 2022
Mar 10 2022
Feb 17 2022
Dec 17 2021
Dec 13 2021
nice :)
Nov 22 2021
The idea here of preventing invalid ops makes sense to me, but @herhut is probably more knowledgeable about this code for a detailed review.
Nov 11 2021
Nice :) This looks pretty straightforward, but given how core this is, wait for another approver or two.
Nov 8 2021
Yay!!
Nov 2 2021
Nov 1 2021
This looks good to me. @rriddle @dexonsmith @dblaikie for thoughts.
Oct 28 2021
- I would just like Math dialect transforms to not depend on target-specific dialects. So concretely, only the RsqrtOpApproximation pattern would move to x86vector/Transform/, since it is the target-specific one.
- I agree, removing Polynomial would be good.
Can we move this RsqrtOp lowering to the x86vector dialect? It seems really out of place in the math dialect, which should remain target-agnostic. (Also, since this approximation is Newton-Raphson-based instead of polynomial-based, it seems out of place in PolynomialApproximation.cpp as well.)
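For reference, the standard Newton-Raphson refinement step for y ≈ 1/sqrt(x) is y_{n+1} = y_n * (1.5 - 0.5 * x * y_n^2), i.e. an iterative refinement rather than a polynomial fit, which is why it reads as a poor fit for PolynomialApproximation.cpp.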
This doesn't seem to be the right pass to do target-specific lowerings. Can you add math.rsqrt and move the vector unrolling to a generic pass in the vector dialect? Then this pass wouldn't depend on target details.
Oct 26 2021
Glad to see this happening!
Oct 25 2021
Oct 21 2021
nice!
LGTM. Probably wait for another pair of eyes to LGTM it. @ezhulenev perhaps?
Oct 11 2021
Thanks!!!
Oct 6 2021
Oct 5 2021
I think it's fine for const_shape to always be static and require a tensor.cast to erase the static shape to dynamic.
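Roughly what I have in mind (just a sketch; the exact result type and syntax of const_shape may differ):

```
%static = shape.const_shape [1, 2, 3] : tensor<3xindex>
%dynamic = tensor.cast %static : tensor<3xindex> to tensor<?xindex>
```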
Oct 4 2021
Thanks!
Sep 30 2021
+1 to Jacques' suggestion to list some prior art here. Ideally prior art from within the project, but from the broader industry would be good as well.
Sep 29 2021
@herhut - thoughts on keeping scalars (i1) as disallowed from Same*Shape traits?
Personally I think we should deprecate this behavior. In my mind, an i1 doesn't have a shape at all, so it doesn't make sense to say it has the same shape as anything else. Is there a specific use case for this behavior? (IIRC this might have fallen out of an implementation detail of an earlier implementation of the trait.)
Specifically, it is impossible for these two functions to have different semantics (as long as linalg.pad_tensor is marked NoSideEffect) -- both return a value that is identical to their argument, and it is always legal to fold @with_pad to @without_pad per tensor value semantics:
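Roughly like this (a sketch from memory; the op spellings, shapes, and zero-padding amounts are just for illustration):

```
func @with_pad(%arg0: tensor<4x4xf32>) -> tensor<4x4xf32> {
  %cst = arith.constant 0.0 : f32
  // Pads by zero in every dimension, so the result is equal to %arg0.
  %0 = linalg.pad_tensor %arg0 low[0, 0] high[0, 0] {
  ^bb0(%i: index, %j: index):
    linalg.yield %cst : f32
  } : tensor<4x4xf32> to tensor<4x4xf32>
  return %0 : tensor<4x4xf32>
}

func @without_pad(%arg0: tensor<4x4xf32>) -> tensor<4x4xf32> {
  return %arg0 : tensor<4x4xf32>
}
```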
Sep 28 2021
Sorry I didn't see this earlier, but I don't think this direction makes sense. This op operates on tensors, which have value semantics, so terminology like "guaranteed to create a new tensor suitable for packing" simply doesn't make sense.
address
Sep 22 2021
Did a pass; looks good to me. Probably best to wait for a handful of LGTMs given the far-reaching consequences of this change. Ideally both River and Chris could weigh in.
Sep 17 2021
Sep 15 2021
Nice!
Sep 14 2021
Sep 9 2021
Sep 3 2021
Sep 2 2021
Awesome! Thanks!!!
Aug 11 2021
Thanks Alex!
Thanks Alex!
Aug 10 2021
Can you please revert this commit and the previous one I objected to? I think they are fundamentally going in the wrong direction.
Can you please add the integration test cases I suggested in the other patch? I'm still pretty sure this will miscompile. Fundamentally you are writing this as an "in-place" bufferization and that cannot be done with the current framework because it involves non-local reasoning.
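To illustrate the non-local reasoning problem with a hypothetical example (not the op from this patch): whether an op may write into its operand's buffer in place depends on whether that operand has other uses later in the program, which a purely local pattern cannot see:

```
func @example(%t: tensor<4xf32>, %v: f32, %i: index) -> (tensor<4xf32>, tensor<4xf32>) {
  // If this op is bufferized to write into %t's buffer in place, the second
  // result below (which must still be the original %t) is clobbered.
  %0 = tensor.insert %v into %t[%i] : tensor<4xf32>
  return %0, %t : tensor<4xf32>, tensor<4xf32>
}
```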
Aug 9 2021
It feels like this bufferization pattern would miscompile if we were to write integration tests analogous to these:
Aug 5 2021
Aug 3 2021
Beautiful :)
Aug 2 2021
Great work, Yi!
Jul 10 2021
Jul 9 2021
Thanks!
Jul 8 2021
I don't think we need an integration test here. A regular IR test would be enough to exercise it? (Just snapshot the IR from the integration test before linalg-bufferize.)
Do you know why previous tests didn't need this? Should we add a new test?
Jul 7 2021
Please add lowering to LLVM for this too (doesn't have to be this patch precisely, but a fast follow-on please).
Will let @sjarus review this, as the detailed shape transfer function review is best done by someone deeply aware of the spec.
Jul 2 2021
reword