This revision also inserts an end-to-end test that lowers tensors to buffers all the way to executable code on CPU.
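For orientation, here is a minimal sketch (not taken from the patch) of what the tensor-to-buffer step looks like in IR, written with present-day op names (func.func, tensor.empty, arith.addf, memref.alloc), which differ from the spellings in use when this revision landed:

```mlir
// Before bufferization: the computation is expressed on SSA tensor values.
#map = affine_map<(d0) -> (d0)>
func.func @double(%arg0: tensor<4xf32>) -> tensor<4xf32> {
  %init = tensor.empty() : tensor<4xf32>
  %0 = linalg.generic {indexing_maps = [#map, #map], iterator_types = ["parallel"]}
      ins(%arg0 : tensor<4xf32>) outs(%init : tensor<4xf32>) {
  ^bb0(%in: f32, %out: f32):
    %s = arith.addf %in, %in : f32
    linalg.yield %s : f32
  } -> tensor<4xf32>
  return %0 : tensor<4xf32>
}

// After bufferization: the same linalg.generic now reads and writes memrefs,
// with an explicit allocation for the result buffer.
func.func @double(%arg0: memref<4xf32>) -> memref<4xf32> {
  %out = memref.alloc() : memref<4xf32>
  linalg.generic {indexing_maps = [#map, #map], iterator_types = ["parallel"]}
      ins(%arg0 : memref<4xf32>) outs(%out : memref<4xf32>) {
  ^bb0(%in: f32, %o: f32):
    %s = arith.addf %in, %in : f32
    linalg.yield %s : f32
  }
  return %out : memref<4xf32>
}
```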
Diff Detail
- Repository: rG LLVM Github Monorepo
Event Timeline
mlir/lib/Dialect/Linalg/Transforms/TensorsToBuffers.cpp:214
FYI: you may want to look at how these get lowered in npcomp in full generality: https://github.com/llvm/mlir-npcomp/blob/master/lib/RefBackend/TensorToMemref/LowerConstantTensorsToMemref.cpp
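For context on what "full generality" involves there: the linked pass turns constant tensors into module-level data plus a buffer reference at each use. A rough before/after sketch, written with today's memref.global / memref.get_global ops (the npcomp code predates those and uses its own ops; the symbol name below is made up):

```mlir
// Before: the constant is an SSA tensor value inside the function.
func.func @f() -> tensor<4xf32> {
  %cst = arith.constant dense<[1.0, 2.0, 3.0, 4.0]> : tensor<4xf32>
  return %cst : tensor<4xf32>
}

// After: the data is hoisted into a read-only module-level global, and each
// use site materializes it as a buffer.
memref.global "private" constant @__constant_4xf32 : memref<4xf32> =
    dense<[1.0, 2.0, 3.0, 4.0]>
func.func @f() -> memref<4xf32> {
  %0 = memref.get_global @__constant_4xf32 : memref<4xf32>
  return %0 : memref<4xf32>
}
```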
mlir/lib/Dialect/Linalg/Transforms/TensorsToBuffers.cpp:214
Nice! I was reluctant to invest in supporting more complex cases and I am glad I didn't try. Can we move your code to core? Your choice.
mlir/lib/Dialect/Linalg/Transforms/TensorsToBuffers.cpp:214
I'm happy moving the code to core, but it takes a couple of opinions that I don't think we have fleshed out in core. Probably the biggest one is having a "global" op and a lowering of that "global" to LLVM. Maybe there's a bigger discussion here about support for "tensor to memref" conversion in core. I've seen scattered patches upstream related to it, but no real clear design. I'll start a discussion on Discourse. For now, feel free to carry on with this patch.
mlir/lib/Dialect/Linalg/Transforms/TensorsToBuffers.cpp:214
Started the discussion here: https://llvm.discourse.group/t/what-is-the-strategy-for-tensor-memref-conversion-bufferization/1938
Rebase, make vector.type_cast implement ViewLikeOpInterface so it plays nicely with buffer placement, fix test.
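For readers who haven't hit this op: vector.type_cast reinterprets a memref of scalars as a memref of a single vector over the same underlying storage, so its result aliases its operand; that aliasing relationship is what ViewLikeOpInterface models and what buffer placement needs to see to treat the result as a view rather than a separate buffer. A small illustrative snippet (current op syntax):

```mlir
// %vec is not a new allocation: it aliases the storage of %buf and merely
// reinterprets the element type, which is why buffer placement should treat
// it as a view of %buf rather than as an independently owned buffer.
func.func @as_vector(%buf: memref<8x8xf32>) -> memref<vector<8x8xf32>> {
  %vec = vector.type_cast %buf : memref<8x8xf32> to memref<vector<8x8xf32>>
  return %vec : memref<vector<8x8xf32>>
}
```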
mlir/lib/Dialect/Linalg/Transforms/TensorsToBuffers.cpp:214
Thanks for starting the discussion @silvas, I replied there.
The corresponding end-to-end test in npcomp: https://github.com/llvm/mlir-npcomp/blob/21255d5f8e7d6db2a20edf65c07fdbf253986456/test/E2E/lower-constant-tensors-to-memref.mlir#L1