The type should only be added to the results if it is a tensor.
Just out of curiosity: why do you need the mixed form of linalg.generic?
In our Python compiler we lower NumPy to a mix of tensors and memrefs (we cannot lower everything to tensor land in the presence of ops like setitem).
This is very interesting, and a use case we have always wanted to support despite lacking a concrete example.
We speculated that bufferization might want to be done only partially, but in practice this need never materialized.
A short post describing the use case a bit more and stressing the need for ops that work on both tensors and buffers would be quite useful IMO :)
I would also go as far as adding a comment in the @cast_producer_mixed test that mentions some of this, lest someone comes along in the future and says "oh, the mixed form is only used in that one test, let's drop it.." :)
Thanks for pushing on this!
We are planning to present our numpy dialect (https://github.com/intel/mlir-extensions/blob/main/mlir/include/imex/Dialect/ntensor/IR/NTensorOps.td) and its lowering at the MLIR ODM eventually, but we need more time to prepare. Specifically for the mixed generics, we use them to lower numpy calls with out parameters: we lower the entire numpy function to linalg-on-tensors and then insert a final linalg.generic that copies the result into the out memref (https://github.com/intel/mlir-extensions/blob/main/mlir/lib/Conversion/NtensorToLinalg.cpp#L69), and hope it all will be fused together.
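For illustration only, here is a minimal sketch (not taken from the patch or the linked code; names and shapes are hypothetical, and the exact linalg.generic syntax varies a bit between MLIR versions) of what such a "copy to out" op could look like as a mixed-form linalg.generic reading from a tensor and writing into a caller-provided memref:

```mlir
#id = affine_map<(d0) -> (d0)>

// Sketch: copy a tensor-valued result into a caller-provided "out" buffer
// using a mixed linalg.generic (tensor input, memref output). Because the
// output operand is a buffer, the op produces no tensor results.
func.func @copy_to_out(%result: tensor<16xf32>, %out: memref<16xf32>) {
  linalg.generic {
    indexing_maps = [#id, #id],
    iterator_types = ["parallel"]
  } ins(%result : tensor<16xf32>) outs(%out : memref<16xf32>) {
  ^bb0(%in: f32, %o: f32):
    // Element-wise identity: forward the input value to the output buffer.
    linalg.yield %in : f32
  }
  return
}
```

The hope described above is that the ops producing %result fuse into this final copy, so the computation ends up writing directly into the out buffer.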
Very cool, thanks for sharing!
> and hope it all will be fused together.
Well... we'd better be sure we give you all the tools you need :)