This is an archive of the discontinued LLVM Phabricator instance.

[mlir][linalg] Allow some fusion on mixed generics
ClosedPublic

Authored by Hardcode84 on Nov 27 2022, 7:29 AM.

Details

Summary

Relax the linalg elementwise fusion check to allow consumers with mixed tensor/memref semantics. The producer is still required to be fully on tensors, to avoid potential memref aliasing.
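To illustrate what this change enables, here is a hypothetical sketch (function names, shapes, and maps are illustrative, not taken from the patch): a fully-tensor producer feeding a mixed consumer that reads a tensor but writes to a memref. Since the producer's result is a tensor value, fusing it into the consumer cannot introduce memref aliasing.

```mlir
#map = affine_map<(d0) -> (d0)>

// Illustrative example only: producer is pure tensor; consumer is mixed.
func.func @mixed_example(%arg0: tensor<8xf32>, %out: memref<8xf32>) {
  // Producer: fully tensor semantics, so fusion is safe.
  %0 = linalg.generic {indexing_maps = [#map, #map],
                       iterator_types = ["parallel"]}
      ins(%arg0 : tensor<8xf32>) outs(%arg0 : tensor<8xf32>) {
  ^bb0(%in: f32, %o: f32):
    %1 = arith.addf %in, %in : f32
    linalg.yield %1 : f32
  } -> tensor<8xf32>
  // Consumer: mixed semantics (tensor input, memref output); the relaxed
  // check allows the producer above to be fused into this op.
  linalg.generic {indexing_maps = [#map, #map],
                  iterator_types = ["parallel"]}
      ins(%0 : tensor<8xf32>) outs(%out : memref<8xf32>) {
  ^bb0(%in: f32, %o: f32):
    %2 = arith.mulf %in, %in : f32
    linalg.yield %2 : f32
  }
  return
}
```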

Diff Detail

Event Timeline

Hardcode84 created this revision. Nov 27 2022, 7:29 AM
Hardcode84 requested review of this revision. Nov 27 2022, 7:29 AM
mravishankar accepted this revision. Nov 28 2022, 12:44 PM

Nice! I am excited for the possibilities of mixed tensor and buffer semantics.

This revision is now accepted and ready to land. Nov 28 2022, 12:44 PM
This revision was automatically updated to reflect the committed changes.

Hi, I just have a question regarding this update.

I am currently having trouble lowering generic ops with mixed tensor and buffer operands into loops. I think linalg transformation passes such as -convert-linalg-to-loops and -linalg-bufferize cannot handle such generic ops, so I'm wondering whether these passes will be updated in the future to allow lowering of generic ops with mixed tensors and buffers.

Thanks.


For bufferization we have our own pattern (https://github.com/intel/mlir-extensions/blob/6e36adce8d211deefdf395ebcdc6c5fd47e080a5/numba_dpcomp/numba_dpcomp/mlir_compiler/lib/pipelines/PlierToLinalg.cpp#L2657). We are planning to upstream it eventually, but there are no concrete plans yet.

For lowering we use ConvertLinalgToParallelLoopsPass.
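For reference, that pass is exposed through mlir-opt; a minimal invocation (file name is illustrative) might look like:

```shell
# Lower linalg ops to scf.parallel loops via the upstream pass.
mlir-opt input.mlir --convert-linalg-to-parallel-loops
```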


I see. Thank you for the reply.