User Details
- User Since: Sep 22 2022, 4:19 AM (25 w, 5 d)
Today
Addressing CR and adding documentation
Yesterday
CR fix
Fix bug
rebase
rebase
Sun, Mar 19
edit commit message
Thu, Mar 16
Update:
I pushed the fix and thanked you :)
commit message
Update commit message
There was a bug in a test in flang, but it was fixed, so you can push/rebase.
It's my fault: https://reviews.llvm.org/D145952
I just uploaded a fix because I didn't see yours:
https://reviews.llvm.org/D146213
Rebase after the bug in flang was fixed.
Tue, Mar 14
clang-format
Feb 5 2023
Add custom attribute to the lit test
Feb 1 2023
Your example will work for me; I'll just have to add a pass where I insert this alloc_tensor.
Actually, I expected this to happen naturally in the insertTensorCopies stage.
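For context, a minimal sketch of what such a pass step could look like, assuming the MLIR C++ bufferization API of that period; the helper name insertAllocTensorCopy is made up for illustration and this is not the actual insertTensorCopies implementation:

```cpp
// Hypothetical helper, not the upstream insertTensorCopies logic: replace one
// use of a tensor value with a fresh bufferization.alloc_tensor that copies it.
#include "mlir/Dialect/Bufferization/IR/Bufferization.h"
#include "mlir/IR/Builders.h"

using namespace mlir;

static void insertAllocTensorCopy(OpBuilder &b, OpOperand &use) {
  Value tensor = use.get();
  auto type = tensor.getType().cast<RankedTensorType>();
  // Insert right before the user so the new value dominates it.
  b.setInsertionPoint(use.getOwner());
  // alloc_tensor with a `copy` operand allocates a new buffer during
  // bufferization and copies the original tensor into it. Dynamic sizes are
  // assumed not to be needed here.
  Value copy = b.create<bufferization::AllocTensorOp>(
      tensor.getLoc(), type, /*dynamicSizes=*/ValueRange{}, /*copy=*/tensor);
  use.set(copy);
}
```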
Jan 30 2023
Then you can bufferize with One-Shot Bufferize first, excluding all FuncOps that you know cannot be bufferized. Then run the bufferization with copyBeforeWrite = true on all excluded FuncOps.
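A rough sketch of that two-step flow, assuming the C++ bufferization API of that period (runOneShotModuleBufferize, bufferizeOp, and the opFilter/copyBeforeWrite options); exact signatures vary between LLVM revisions:

```cpp
// Sketch only: bufferize everything except a known set of functions with
// One-Shot Bufferize, then bufferize the excluded functions with a copy
// inserted before every write.
#include "mlir/Dialect/Bufferization/Transforms/Bufferize.h"
#include "mlir/Dialect/Bufferization/Transforms/OneShotModuleBufferize.h"
#include "mlir/Dialect/Func/IR/FuncOps.h"
#include "llvm/ADT/STLExtras.h"

using namespace mlir;

LogicalResult bufferizeInTwoSteps(ModuleOp module,
                                  ArrayRef<func::FuncOp> excluded) {
  // Step 1: One-Shot Bufferize, skipping all ops inside the excluded FuncOps.
  bufferization::OneShotBufferizationOptions oneShotOpts;
  oneShotOpts.bufferizeFunctionBoundaries = true;
  oneShotOpts.opFilter.denyOperation([&](Operation *op) {
    return llvm::any_of(excluded,
                        [&](func::FuncOp fn) { return fn->isAncestor(op); });
  });
  if (failed(bufferization::runOneShotModuleBufferize(module, oneShotOpts)))
    return failure();

  // Step 2: bufferize the excluded functions without analysis, copying each
  // buffer before it is written (copyBeforeWrite = true).
  bufferization::BufferizationOptions copyOpts;
  copyOpts.copyBeforeWrite = true;
  for (func::FuncOp fn : excluded)
    if (failed(bufferization::bufferizeOp(fn, copyOpts)))
      return failure();
  return success();
}
```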
Addressing code review
Jan 26 2023
Thanks @springerm
I tried it and added this patch: https://reviews.llvm.org/D142631
edit title
Jan 23 2023
@springerm - my case is a bit different/special. I have an IR with functions that have memref-based signatures.
Some of them have an implementation, and it is a tensor-based implementation, meaning I have to_tensor/to_memref ops on the arguments/results.
I had to inline some of them into one function that is a pure tensor-based function. On this function I run One-Shot Bufferize.
After inlining I had the intermediate to_tensor(to_memref) pairs, and I expected them to fold.
That's how I got to wondering about the folding.
I understand now that the general case is much more complicated than my case.
So I decided to manually move the to_tensor ops right after their to_memref ops, because in my case I know this does not change the meaning of the IR.
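For illustration, a minimal sketch of the kind of rewrite described here: folding to_tensor(to_memref(%t)) back to %t. This assumes bufferization::ToTensorOp/ToMemrefOp and that nothing writes to the intermediate memref, which holds in this scenario but is exactly what the general fold cannot assume; it is not the upstream canonicalization.

```cpp
// Sketch only: replace %u = to_tensor(to_memref(%t)) with %t. Safe only when
// the memref is not written to in between.
#include "mlir/Dialect/Bufferization/IR/Bufferization.h"
#include "mlir/IR/PatternMatch.h"

using namespace mlir;

struct FoldToTensorOfToMemref
    : public OpRewritePattern<bufferization::ToTensorOp> {
  using OpRewritePattern::OpRewritePattern;

  LogicalResult matchAndRewrite(bufferization::ToTensorOp toTensor,
                                PatternRewriter &rewriter) const override {
    // Match to_tensor whose operand comes directly from a to_memref.
    auto toMemref =
        toTensor.getMemref().getDefiningOp<bufferization::ToMemrefOp>();
    if (!toMemref)
      return failure();
    // Only fold when the round trip preserves the tensor type.
    if (toMemref.getTensor().getType() != toTensor.getType())
      return failure();
    rewriter.replaceOp(toTensor, toMemref.getTensor());
    return success();
  }
};
```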
Jan 20 2023
lit test change
Change operation in lit test.
Jan 19 2023
Do you suggest adding this attribute to the func dialect? Or do you suggest discussing the whole idea of not always returning true?
Jan 15 2023
I had a scenario where I wanted the callOp not to be inlined.
This is how I solved it, based on your change in the Test dialect: https://reviews.llvm.org/D90359
Do you think I should implement it differently?
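A rough sketch of that approach, assuming a DialectInlinerInterface call-site hook and a made-up attribute name "no_inline"; D90359 uses the test dialect's own interface, so the details below are illustrative only:

```cpp
// Sketch only: an inliner interface that refuses to inline call sites carrying
// a (hypothetical) "no_inline" attribute.
#include "mlir/Transforms/InliningUtils.h"

using namespace mlir;

struct NoInlineCallInterface : public DialectInlinerInterface {
  using DialectInlinerInterface::DialectInlinerInterface;

  // Called per call site; returning false keeps this call from being inlined.
  bool isLegalToInline(Operation *call, Operation *callable,
                       bool wouldBeCloned) const override {
    return !call->hasAttr("no_inline");
  }
};
```

The interface would then be registered on the dialect that owns the call op, so the inliner consults it before inlining each call.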
Jan 12 2023
Since this is my first commit, I can't commit it by myself.
Can you please commit it for me? (https://llvm.org/docs/Phabricator.html#committing-someone-s-change-from-phabricator)
Thanks!
Jan 10 2023
The input in your test case has mixed tensor/memref ops. How did you get to that state? Can you use -one-shot-bufferize="bufferize-function-boundaries" on the initial IR when everything is tensors?
Hi!
I suspect this assertion is too strict.
I have a piece of code here that works without the assertion but fails when the assertion is in place: