Do you also have to update TransformDialect::verifyOperationAttribute?
Has this failure been clearly root-caused to this patch? I have a really hard time seeing how SCF bufferization tests are related to this change.
Tue, Mar 28
Feel free to commit such changes directly.
Mon, Mar 27
It's okay to reland.
Fri, Mar 24
Remove code repetition by turning pattern-based applyToEach into a trait.
Will do some deduplication here.
Thu, Mar 23
Wed, Mar 22
This looks like an unrelated flang failure.
Mostly code motion; I didn't go beyond screening for obvious layering problems.
Tue, Mar 21
The GPU test case is based on the cubin pass pipeline, which the current compiler does not support.
Mon, Mar 20
Sorry for the delay; already-approved patches don't show up in the review stream :(
Fri, Mar 17
Please add a test. The IR in the bug is a good start for the test, but can be simplified further.
Thu, Mar 16
Wed, Mar 15
Tue, Mar 14
Could you please rebase?
Mon, Mar 13
Sat, Mar 11
There isn't sufficient context in the commit description to justify the change, and the change itself doesn't seem to do anything other than lifting function argument attributes from LLVM IR to the Func dialect level. Yes, there is SPIR-V, but the change includes neither the lowering to SPIR-V nor a description of how it could be done. Blindly copying LLVM IR is acceptable in the LLVM dialect, but not at higher-level abstractions such as Func. It is unclear to me what these attributes would even mean in the full generality of what the MLIR type system allows. For example, what is a writeonly tensor? How do two SPIR-V types noalias? How can two memrefs even noalias when the default conversion to LLVM produces two pointers that precisely do alias each other?
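To illustrate the aliasing point, here is a simplified sketch (the exact argument expansion depends on the type converter and calling-convention options): with the default memref descriptor lowering, a single ranked memref argument expands into several LLVM arguments, two of which are pointers into the same allocation.

```mlir
// Original function with one memref argument.
func.func @f(%arg0: memref<?xf32>) {
  return
}

// Simplified result of the default memref-to-LLVM conversion: the
// descriptor is expanded into an allocated pointer, an aligned pointer,
// an offset, and one size/stride pair. The two pointers refer to the
// same allocation, i.e. they alias by construction, so a `noalias`
// lifted from the original memref argument has no direct counterpart.
llvm.func @f(%allocated: !llvm.ptr, %aligned: !llvm.ptr,
             %offset: i64, %size0: i64, %stride0: i64) {
  llvm.return
}
```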
Could you check if the translation works in the opposite direction, too?
Sat, Mar 4
Do we need to also update the lowering to LLVM? It may assume the type is always index and end up using a different bitwidth.
Fri, Mar 3
Wed, Mar 1
LGTM when the last comment is addressed, but I'd like an opinion from @nicolasvasilache on the promotion logic.
We can have a test for this, but we probably don't want it on the _main_ build path.
Note that many other tools are not directly "used" upstream either: mlir-reduce, mlir-parser-fuzzer, etc. They are nevertheless useful.
I don't see why this needs to be deleted. It was used to generate the initial ODS for some intrinsics and is still useful for adding new target-specific ones. It is not used in the build process because the complexity of adding a stage to the build process does not justify it.
Feb 28 2023
This allows existing dialect modules such as gpu.module to continue not doing dialect-specific lowering, letting them opt in or out based on their dialect's needs.
Feb 27 2023
Why does it have to be a separate interface function? Module is just the builtin.module operation; it looks like we can just call convertOperation for it and teach the default translation to do nothing with it.
Feb 25 2023
Feb 24 2023
Nice, almost there, thank you!
Feb 23 2023
Feb 21 2023
Could we have a test?
Feb 16 2023
These patterns look very similar, any chance they can be generalized somehow?