LinalgOps whose iterators are all parallel do not use the value of the outs
tensor; the semantics is that the outs tensor is fully overwritten. Using
anything other than an init_tensor as the outs operand can therefore add
false dependences between operations when the operand is used only for its
shape. Adding a canonicalization that always uses init_tensor in such cases
removes this false dependence.
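To make this concrete, here is a hedged before/after sketch in MLIR IR (the shapes, payload, and value names are illustrative, not taken from the diff): an all-parallel linalg.generic whose outs operand %0 comes from another operation only needs the shape of %0, so the rewrite substitutes a fresh linalg.init_tensor.

// Before: %1 falsely depends on the producer of %0, even though the
// region never reads %out and only the shape of %0 matters.
%1 = linalg.generic
       {indexing_maps = [affine_map<(d0, d1) -> (d0, d1)>,
                         affine_map<(d0, d1) -> (d0, d1)>],
        iterator_types = ["parallel", "parallel"]}
       ins(%arg0 : tensor<4x8xf32>) outs(%0 : tensor<4x8xf32>) {
     ^bb0(%in: f32, %out: f32):
       %e = math.exp %in : f32
       linalg.yield %e : f32
     } -> tensor<4x8xf32>

// After: the outs operand is a fresh init_tensor carrying only the shape,
// so the false dependence on the producer of %0 disappears.
%init = linalg.init_tensor [4, 8] : tensor<4x8xf32>
%2 = linalg.generic
       {indexing_maps = [affine_map<(d0, d1) -> (d0, d1)>,
                         affine_map<(d0, d1) -> (d0, d1)>],
        iterator_types = ["parallel", "parallel"]}
       ins(%arg0 : tensor<4x8xf32>) outs(%init : tensor<4x8xf32>) {
     ^bb0(%in: f32, %out: f32):
       %f = math.exp %in : f32
       linalg.yield %f : f32
     } -> tensor<4x8xf32>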
Diff Detail
- Repository: rG LLVM Github Monorepo
Event Timeline
I don't think this should be a blanket canonicalization as it will interact badly with the current work on bufferization post-linalg transforms.
Can you please make this an opt-in rewrite pattern that we may or may not want to apply depending on the case?
Discussed this offline. This pattern might be worth having as a canonicalization, but bufferization might not be able to account for it right now. Instead, the pattern is moved to elementwise op fusion. Once the interaction with bufferization has been evaluated, it can be made a canonicalization (on all LinalgOps).
mlir/lib/Dialect/Linalg/Transforms/FusionOnTensors.cpp
1338–1343 ↗ (On Diff #345924)
I think we only need the index here. You can check if it is dynamic with operandType.isDynamicDim(idx).
mlir/lib/Dialect/Linalg/Transforms/FusionOnTensors.cpp
1338–1343 ↗ (On Diff #345924)
Maybe. I think the difference is not too much. Will leave it as is for now.