User Details
- User Since: Aug 20 2012, 11:51 AM
Feb 8 2023
Dec 13 2022
Nice, this is definitely useful. Agree that "print op" is better than plural "print ops".
Dec 1 2022
LGTM. Maybe add a comment with a link to here for context: https://stackoverflow.com/questions/26921836/correct-way-to-test-for-numpy-dtype
Nov 21 2022
Nov 18 2022
Also adding @ezhulenev who has systems that use this op
LGTM, but let's wait for somebody outside our chat thread to weigh in.
Nov 4 2022
Nov 3 2022
Oct 20 2022
This is really great, Sanjoy. Thanks! I have a few comments.
Oct 14 2022
More info on switches here: https://llvm.org/docs/CodingStandards.html#don-t-use-default-labels-in-fully-covered-switches-over-enumerations
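For illustration, a minimal sketch of the pattern that guideline describes (the enum and function names here are made up, not taken from the patch under review): omitting the `default:` label in a switch that covers every enumerator lets `-Wswitch` flag any newly added enumerator at compile time, while `llvm_unreachable` after the switch still handles an invalid value.

```cpp
#include "llvm/Support/ErrorHandling.h"

enum class BinaryOpKind { Add, Mul };

// Hypothetical example: the switch covers every BinaryOpKind, so there is no
// `default:` label. If a new enumerator is added later, -Wswitch warns about
// the missing case instead of silently falling into a default.
static const char *getOpName(BinaryOpKind kind) {
  switch (kind) {
  case BinaryOpKind::Add:
    return "add";
  case BinaryOpKind::Mul:
    return "mul";
  }
  llvm_unreachable("unhandled BinaryOpKind");
}
```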
Oct 10 2022
- Add some documentation / FAQs detailing the differences between side effects, undefined behavior, and speculatability.
Sep 14 2022
Okay, let's revert it then.
This patch appears to be miscompiling. See discussion on https://github.com/llvm/torch-mlir/issues/1361
Jul 26 2022
@ftynse, could you merge this for @mscuttari? My arc is broken locally and I'm still debugging how to fix it.
+1 on confusion of DenseArrayAttr with ArrayAttr and DenseElementsAttr in the C++ API. It will be hard to keep those straight.
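To make the potential for confusion concrete, here is a rough C++ sketch of building the same list of integers as each of the three attribute kinds. The builder/getter spellings below are my recollection of the API around this time and are illustrative rather than exact:

```cpp
#include "mlir/IR/Builders.h"
#include "mlir/IR/BuiltinAttributes.h"
#include "mlir/IR/BuiltinTypes.h"

using namespace mlir;

// Illustrative sketch (API spellings from memory, may differ by revision) of
// the three attribute kinds that are easy to mix up in C++.
void buildThreeKindsOfIntListAttrs(MLIRContext *ctx) {
  Builder b(ctx);

  // ArrayAttr: a list of arbitrary Attributes (each element boxed as an
  // IntegerAttr).
  ArrayAttr arrayAttr = b.getI64ArrayAttr({1, 2, 3});

  // DenseArrayAttr: a flat, typed array of raw integers, no per-element boxing.
  DenseI64ArrayAttr denseArrayAttr = b.getDenseI64ArrayAttr({1, 2, 3});

  // DenseElementsAttr: constant data attached to a ShapedType (a tensor here).
  auto tensorType = RankedTensorType::get({3}, b.getI64Type());
  DenseElementsAttr elementsAttr =
      DenseElementsAttr::get(tensorType, ArrayRef<int64_t>{1, 2, 3});

  (void)arrayAttr;
  (void)denseArrayAttr;
  (void)elementsAttr;
}
```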
:)
Jul 25 2022
You can open a new one -- thanks!
I would recommend that we rename this to _mlir_memref_to_llvm_alloc, to differentiate it from other situations where MLIR might want a custom allocation function (memref to LLVM is not "MLIR" as a whole).
Jul 21 2022
FYI IREE calls these files "FooOpFolders.cpp", but Canonicalizations sounds better since canonicalizers are not folders but folders are (in a sense) canonicalizers.
Jul 20 2022
Just pushed it. Please keep an eye on the CI emails in case this breaks any of the bots.
Jul 19 2022
:)
Jul 18 2022
FYI this caused us issues in Torch-MLIR as well https://github.com/llvm/torch-mlir/pull/1078#issuecomment-1188337337
Jul 14 2022
I see an empty file after this PR: https://github.com/llvm/llvm-project/blob/main/mlir/include/mlir/Analysis/DataFlowAnalysis.h
Jul 6 2022
sg
Okay, can you update the commit message to reflect that? Right now it's a bit hard to understand.
What is the bug, and how does this fix it?
Jun 29 2022
@MaheshRavishankar -- is there a risk that this op will be optimized worse because it isn't recognized as readily as the regular depthwise op?
Jun 8 2022
Jun 7 2022
I suspect the g dimension should be contiguous with the C dimension, and I'm not sure if it should be major or minor to it.
May 20 2022
This direction makes sense to me. I am a little out of the loop from the current bufferization interfaces, so I will let someone else review the code in detail.
Apr 20 2022
Apr 19 2022
Apr 11 2022
Can you add a pass / reusable pattern that converts ml_program.func to func.func and the reverse? Otherwise I feel like we are going to roll our own version of that many times downstream just for our initial push in IREE (this can be a separate patch, of course).
Apr 8 2022
Nice :)
Apr 6 2022
You can see my full code here: https://github.com/silvasean/iree/commit/cba1cf90fa8e8cc9b3bdacfd034f7d53c40661ad
Hey River, I was trying to use this downstream and it isn't working for me
Thanks for fixing this River :)
Apr 1 2022
Thanks River!
Mar 31 2022
Left a comment there.
Mar 21 2022
Mar 10 2022
Feb 17 2022
Dec 17 2021
Dec 13 2021
nice :)
Nov 22 2021
The idea here of preventing invalid ops makes sense to me, but @herhut is probably more knowledgeable about this code for a detailed review.
Nov 11 2021
Nice :) This looks pretty straightforward, but given how core this is, wait for another approver or two.
Nov 8 2021
Yay!!
Nov 2 2021
Nov 1 2021
This looks good to me. @rriddle @dexonsmith @dblaikie for thoughts.
Oct 28 2021
- I would just like Math dialect transforms to not depend on target-specific dialects. So concretely, just the RsqrtOpApproximation pattern would move to x86vector/Transform/, since it is the target-specific one.
- I agree, removing Polynomial would be good.
Can we move this RsqrtOp lowering to the x86vector dialect? It seems really out of place in the math dialect, which should remain target-agnostic. (Also, since this approximation is Newton-Raphson based instead of polynomial based, it seems out of place in PolynomialApproximation.cpp as well.)
This doesn't seem to be the right pass to do target-specific lowerings. Can you add math.rsqrt and move the vector unrolling to a generic pass in the vector dialect? Then this pass wouldn't depend on target details.
Oct 26 2021
Glad to see this happening!
Oct 25 2021
Oct 21 2021
nice!
LGTM. Probably wait for another pair of eyes to LGTM it. @ezhulenev perhaps?
Oct 11 2021
Thanks!!!
Oct 6 2021
Oct 5 2021
I think it's fine for const_shape to always be static and require a tensor.cast to erase the static shape to dynamic.
Oct 4 2021
Thanks!
Sep 30 2021
+1 to Jacques' suggestion to list some prior art here. Ideally prior art from within the project, but examples from the wider industry would be good as well.
Sep 29 2021
@herhut - thoughts on keeping scalars (i1) as disallowed from Same*Shape traits?
Personally I think we should deprecate this behavior. In my mind, an i1 doesn't have a shape at all, so it doesn't make sense to say it has the same shape as anything else. Is there a specific use case for this behavior? (IIRC it might have fallen out of an implementation detail of an earlier implementation of the trait.)
Specifically, it is impossible for these two functions to have different semantics (as long as linalg.pad_tensor is marked NoSideEffect) -- both return a value that is identical to their argument, and it is always legal to fold @with_pad to @without_pad per tensor value semantics:
Sep 28 2021
Sorry I didn't see this earlier, but I don't think this direction makes sense. This op operates on tensors, which have value semantics, so terminology like "guaranteed to create a new tensor suitable for packing" simply doesn't make sense.
address
Sep 22 2021
Did a pass; looks good to me. Probably want to wait for a handful of LGTMs given the far-reaching consequences of this change. Ideally both River and Chris could weigh in.
Sep 17 2021
Sep 15 2021
Nice!