This is an archive of the discontinued LLVM Phabricator instance.

[mlir][linalg] Fix `FoldTensorCastProducerOp` for generic with memref output
ClosedPublic

Authored by Hardcode84 on Nov 10 2022, 1:21 PM.

Details
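The Details section of the archive is empty, so here is a rough sketch of what the pattern in question does, inferred from the title. `FoldTensorCastProducerOp` folds a `tensor.cast` producer (whose source type is more static) directly into a consuming op; the fix makes this work when the consuming `linalg.generic` writes to a memref output and therefore produces fewer (or no) tensor results. All names, shapes, and maps below are invented for illustration:

```mlir
// Hypothetical sketch: before folding, the input reaches the mixed
// tensor/memref generic through a cast to a less static type.
#id = affine_map<(d0) -> (d0)>
func.func @before(%src: tensor<8xf32>, %out: memref<8xf32>) {
  %cast = tensor.cast %src : tensor<8xf32> to tensor<?xf32>
  linalg.generic
    {indexing_maps = [#id, #id], iterator_types = ["parallel"]}
    ins(%cast : tensor<?xf32>) outs(%out : memref<8xf32>) {
  ^bb0(%in: f32, %o: f32):
    linalg.yield %in : f32
  }
  return
}
```

After the fold, the generic would consume `%src : tensor<8xf32>` directly; because its output is a memref, the op has no tensor results to re-cast, which is presumably the case the patch had to handle.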

Diff Detail

Event Timeline

Hardcode84 created this revision.Nov 10 2022, 1:21 PM
Hardcode84 requested review of this revision.Nov 10 2022, 1:21 PM

Just out of curiosity. Why do you need the mixed form of linalg.generic?

In our Python compiler we lower NumPy to a mix of tensors and memrefs (we cannot lower everything to tensor land in the presence of ops like setitem).

pifon2a accepted this revision.Nov 16 2022, 10:35 AM
This revision is now accepted and ready to land.Nov 16 2022, 10:35 AM

This is very interesting, and a use case we have always wanted to support despite lacking a concrete example.
We speculated that bufferization might need to be done partially, but in practice that need never materialized.

Some short post describing the use case a bit more and stressing the need for ops that work with both tensors and buffers would be quite useful IMO :)

I would also go as far as adding a comment in the @cast_producer_mixed test that mentions some of this, unless you don't mind risking someone in the future coming along and saying "oh, the mixed form is only used in that one test, let's drop it…" :)

Thanks for pushing on this!


We are planning to present our numpy dialect (https://github.com/intel/mlir-extensions/blob/main/mlir/include/imex/Dialect/ntensor/IR/NTensorOps.td) and lowering at the MLIR ODM eventually, but we need more time to prepare. Specifically for the mixed generics, we are using them to lower numpy calls with out parameters. We lower the entire numpy function to linalg-on-tensors and then insert a final linalg.generic which copies the result to the out memref (https://github.com/intel/mlir-extensions/blob/main/mlir/lib/Conversion/NtensorToLinalg.cpp#L69), and hope it all will be fused together.
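A rough sketch of that final copy step (names and shapes invented here; the actual lowering lives in the NtensorToLinalg.cpp file linked above) could be an identity-map generic that reads the computed tensor and yields into the user-provided out memref:

```mlir
// Hypothetical sketch: the tensor-land result %res is written into the
// caller-provided %out memref via a mixed-form linalg.generic.
#id = affine_map<(d0) -> (d0)>
func.func @write_out(%res: tensor<?xf32>, %out: memref<?xf32>) {
  linalg.generic
    {indexing_maps = [#id, #id], iterator_types = ["parallel"]}
    ins(%res : tensor<?xf32>) outs(%out : memref<?xf32>) {
  ^bb0(%in: f32, %o: f32):
    linalg.yield %in : f32
  }
  return
}
```

Since the output is a memref, this generic produces no results, which appears to be exactly the shape of op the patch teaches `FoldTensorCastProducerOp` to handle.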

Very cool, thanks for sharing!

and hope it all will be fused together.

Well… we'd better be sure we give you all the tools you need :)

rebase, add comment

This revision was landed with ongoing or failed builds.Nov 16 2022, 2:01 PM
This revision was automatically updated to reflect the committed changes.