User Details
- User Since: Jan 7 2020, 7:38 AM (194 w, 3 d)
Sep 4 2023
Thank you for adding this.
Aug 25 2023
Thank you!
Aug 18 2023
Aug 14 2023
I wanted to use the presubmit bots to test this. It has now landed independently.
Aug 11 2023
Apr 18 2023
Apr 17 2023
Mar 22 2023
Just some minor nits.
Mar 21 2023
Looks good, conditional on the small nits.
Jan 26 2023
Thank you @powderluv for describing your use case. I was not aware that linalg is used as a storage/transit format.
Jan 16 2023
I am in favor of landing this (but please ensure consensus before doing so). I have argued against changes before that were purely aesthetic but introduced a lot of churn (like the one in https://reviews.llvm.org/D133076). Ultimately it is a subjective decision. I have a different opinion in this case because the linalg dialect has a smaller usage scope than arith, so I am willing to accept more churn. Also, this change improves the ergonomics of reading linalg IR, which we increasingly do and hence care about. This was also one of the motivations for introducing new operations like map to linalg.
Jan 11 2023
Thanks for adding this.
Jan 5 2023
Nice. Just some comments.
Jan 3 2023
Nov 7 2022
I agree that this goes in the right direction; however, I am unsure what that direction is. Is there a write-up of where this should be going? That would make reviewing these changes easier.
Nov 2 2022
Thanks for refactoring. Having this in a separate interface addresses my concerns about the partial interface. I did not look at the details in depth, but I also do not want to block this.
Oct 27 2022
Oct 26 2022
Oct 20 2022
Thanks!
Thanks for doing this. The patch looks correct to me.
Sep 19 2022
Not sure about the defaults, but looks good otherwise. Also +1 to @bondhugula's comments. Please address those first.
Sep 15 2022
Can you explain why this is needed? In the use cases I have seen, the entire IR typically transitions from signed to signless types, but I have not yet seen the need for a mixed form.
Sep 13 2022
Sep 6 2022
Thanks for the explanation. My Bazel-fu is not as strong as yours.
Sep 5 2022
+1 for the direction. Just a Bazel question.
Sep 2 2022
Thanks!
Aug 23 2022
I find it very strange that arith.select supports memref in the first place. Are there other operations in the arith dialect that do this?
Aug 4 2022
Aug 3 2022
Can you provide some rationale for this change in the description? I assume this enables reuse of the functionality independent of extract_slice and friends but making this clear in the description would be nice.
Aug 2 2022
Aug 1 2022
Jul 7 2022
Jul 4 2022
May 31 2022
I don't think there are any guarantees that MLIR preserves unknown attributes during canonicalization. So this would be the first precedent in that direction.
Separate pass works for me.
May 25 2022
If it is useful beyond testing, I don't see why it should not be a regular pass.
What was the motivation behind this canonicalization? In particular, why would a combined parallel loop be considered the canonical form?
Apr 28 2022
Thanks!
Thanks. Adding @csigg as an FYI.
Apr 27 2022
Apr 25 2022
What is the longer term plan here? Do all places that introduce alloca have to perform this local optimization? Should we instead have a pass that hoists alloca out of loops? Or rethink the introduction of allocation scope for loops?
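For concreteness, a minimal sketch of the hoisting in question, under the assumption that the loop is an scf.for and the buffer a small static memref (the function names, shapes, and stored value are made up for illustration):

```mlir
// Hypothetical example of hoisting a per-iteration stack allocation.

// Before: a fresh alloca in every iteration; nothing releases it until the
// enclosing automatic-allocation scope (here, the function) exits.
func.func @per_iteration(%lb: index, %ub: index, %step: index, %v: f32) {
  %c0 = arith.constant 0 : index
  scf.for %i = %lb to %ub step %step {
    %buf = memref.alloca() : memref<16xf32>
    memref.store %v, %buf[%c0] : memref<16xf32>
  }
  return
}

// After hoisting: a single alloca reused by every iteration.
func.func @hoisted(%lb: index, %ub: index, %step: index, %v: f32) {
  %c0 = arith.constant 0 : index
  %buf = memref.alloca() : memref<16xf32>
  scf.for %i = %lb to %ub step %step {
    memref.store %v, %buf[%c0] : memref<16xf32>
  }
  return
}
```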
Apr 19 2022
Thanks!
Apr 13 2022
Looks good to me. We really need to figure out a way to group dialects :)
Apr 12 2022
Thanks for adding this.
Mar 21 2022
Feb 18 2022
Feb 17 2022
Found the testing. I need to get my Docker setup in order before I can test, so this will take a little while to land.
Thanks!
Is there a way to test these configurations before submitting them?
Thanks!
Feb 16 2022
Thanks!
Consider also adding an integration test. I think we have one for the other cases.
Thanks for splitting this out.
Feb 15 2022
Thanks for the reference to the scoping operation, @ftynse. I had a feeling we had something like it but could not find it.
My concern with handling only the scf.parallel operation is that we do lower it to scf.for loops in some cases, and we would have to model this behavior there somehow. How would we do this? Do we have a suitable high-level abstraction that allows us to insert an allocation scope? Alternatively, we could mark the scf.for loop the same way. That seems a reasonable choice, but scf.for supports returning values, so we might actually have escaping stack allocations currently. Is this a model we want to support?
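For reference, a minimal sketch of what an explicit allocation scope inside an scf.for body could look like, assuming memref.alloca_scope is the scoping operation referred to above (the function name, shape, and stored value are illustrative):

```mlir
// Hypothetical example: the alloca is released when the scope ends, so the
// per-iteration allocation does not outlive its iteration.
func.func @scoped_loop(%lb: index, %ub: index, %step: index, %v: f32) {
  %c0 = arith.constant 0 : index
  scf.for %i = %lb to %ub step %step {
    memref.alloca_scope {
      %buf = memref.alloca() : memref<16xf32>
      memref.store %v, %buf[%c0] : memref<16xf32>
    }
  }
  return
}
```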
Thank you. Just a naming nit.
Feb 14 2022
Thank you. The default is fairly arbitrary (well, it's good enough for some index computations) and we really should have some cost model here. +1 to making it configurable at least.
Jan 31 2022
Jan 26 2022
Jan 20 2022
Please fix the commit message though. I just noticed this now.
Thanks.
PTAL
Pull size checking out of helper.
Jan 18 2022
Thanks for adding this. I have wanted this a couple of times, too, but never went as far as creating a diff for it.
Jan 17 2022
Thanks for adding this. Could this instead reuse the logic from verifyAllocLike from the memref operations? It is the same logic, but it would need to be exposed to other dialects.
Jan 14 2022
Jan 13 2022
fix test
Jan 12 2022
Jan 11 2022
The arith::minf operation is defined as
Thanks!
There could well be a separate pass that inserts these casts to avoid this behavior in shape reification, if that is critical to your use cases. I think it would still be better to fix shape reification to return a static value where possible. After all, if this is valid linalg IR, reification should do as well as it can with it.
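As a rough illustration of the kind of cast meant here, assuming a tensor.cast that refines a dynamic shape to a static one (the shapes and function name are made up):

```mlir
// Hypothetical example: refine a dynamically shaped value to a static shape
// so that consumers (e.g. shape reification) see static sizes directly.
func.func @refine(%arg: tensor<?x?xf32>) -> tensor<4x8xf32> {
  %static = tensor.cast %arg : tensor<?x?xf32> to tensor<4x8xf32>
  return %static : tensor<4x8xf32>
}
```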
rebase
Thanks!