Diff Detail
- Repository: rG LLVM Github Monorepo

Event Timeline
mlir/lib/Dialect/GPU/Transforms/ModuleToBinary.cpp
66: Are these new #defines that I missed in the patch series?
mlir/include/mlir/Dialect/GPU/Transforms/Passes.h
74: Same as other patches: can this specify the interface instead of "Attribute"?
mlir/include/mlir/Dialect/GPU/Transforms/Passes.td
41: The restriction to ModuleOp does not seem necessary to me. I would let it be an OperationPass and specify that it finds all the GPUModule ops immediately present in the attached regions and converts them to GPUBinaryOps, each holding a serialization of the codegen result of the GPUModule based on its target information.
43: It's not useful to repeat the summary; please flesh this out.
44: If the pass runs on a GPUModuleOp present in its input, then this shouldn't be needed.
48: Please document this. Do you have tests for it as well?
mlir/lib/Dialect/GPU/Transforms/ModuleToBinary.cpp
97: Nit: dyn_cast without a check for nullptr.
121: Patterns aren't obviously appropriate to me. I would do something a bit more lightweight:

```cpp
void GpuModuleToBinaryPass::runOnOperation() {
  for (Region &region : getOperation()->getRegions())
    for (Block &block : region.getBlocks())
      for (auto gpuModuleOp :
           llvm::make_early_inc_range(block.getOps<gpu::GPUModuleOp>()))
        process(gpuModuleOp);
}
```

If we need a public API, it can be one that takes a GPUModuleOp and an OpBuilder and builds a GPUBinaryOp.
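The reviewer's loop uses an early-increment range (make_early_inc) because process() presumably replaces or erases the gpu.module while the block is being iterated. As a self-contained illustration of that idiom in plain standard C++ (no MLIR involved), the key point is advancing the iterator before mutating the container:

```cpp
#include <cassert>
#include <list>

// Erase every even element while iterating. Incrementing the iterator
// *before* erasing (the early-increment idiom) keeps the loop valid even
// though erase() invalidates the iterator to the removed element.
inline void eraseEvens(std::list<int> &values) {
  for (auto it = values.begin(); it != values.end();) {
    auto current = it++; // advance first, then mutate
    if (*current % 2 == 0)
      values.erase(current);
  }
}
```

llvm::make_early_inc_range packages exactly this pattern behind a range-based for loop, which is why it is safe to convert or erase ops inside the loop body.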
mlir/test/Dialect/GPU/module-to-binary.mlir
1 ↗ (On Diff #543194): There are no diagnostics to verify here?
mlir/include/mlir/Dialect/GPU/Transforms/Passes.h
74: I'll change it.
mlir/include/mlir/Dialect/GPU/Transforms/Passes.td
41: I'll change it.
43: I'll update it.
44: I'll remove it.
48: I'll add better documentation. Tests: currently no. I think I need to add a method for parsing attributes as pass options for this to work as a command-line option; it was added here so the option is available from C++. I can create a patch for that (command-line parsing of attributes) and then modify this one.
mlir/lib/Dialect/GPU/Transforms/ModuleToBinary.cpp
66: These are introduced in the CMakeLists.txt; I added them so as not to reuse the ones from the existing pipeline.
97: I'll change it.
121: Yeah, the reason for using patterns was just to expose this for use in other passes. But I can change it.
mlir/test/Dialect/GPU/module-to-binary.mlir
1 ↗ (On Diff #543194): You're right, I'll remove it.
Updated the tests, changed the pass to an OperationPass, and switched from patterns to looking for nested GPU modules.
mlir/lib/Dialect/GPU/Transforms/ModuleToBinary.cpp
97: Still present.
Added an assert check for the attribute.
Note: the verifier also checks that the dyn_cast is valid, as it's a precondition on the elements of the array.
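The point under discussion is the difference between a checked and an unchecked downcast. In a minimal self-contained analogue of LLVM-style kind-tagged casting (not the actual llvm::dyn_cast machinery), dyn_cast returns null on a kind mismatch, so using its result without a null check is only safe when an invariant, here enforced by the verifier, already guarantees the kind:

```cpp
#include <cassert>

// Minimal LLVM-style class hierarchy with a kind tag.
struct Attr {
  enum Kind { TargetKind, OtherKind };
  Kind kind;
  explicit Attr(Kind k) : kind(k) {}
};
struct TargetAttr : Attr {
  TargetAttr() : Attr(TargetKind) {}
};

// Analogue of llvm::dyn_cast: null when the kind does not match.
inline TargetAttr *dynCastTarget(Attr *a) {
  return a && a->kind == Attr::TargetKind ? static_cast<TargetAttr *>(a)
                                          : nullptr;
}
```

When a verifier has already guaranteed each element's kind, asserting on the result documents that precondition instead of silently dereferencing a possible null.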
Separated the gpu-module-to-binary test into two tests: nvvm & rocdl.
Also updated lit.cfg.py with:

```python
if config.run_cuda_tests:
    config.available_features.add("host-supports-nvptx")
if config.run_rocm_tests:
    config.available_features.add("host-supports-amdgpu")
```

This allows using // REQUIRES: host-supports-nvptx to disable tests on hosts that don't support the target.
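The way lit uses these features can be sketched as a self-contained check (illustrative only; lit's real implementation lives in llvm/utils/lit and also supports boolean expressions): a test runs only if every feature named in its REQUIRES lines is present in available_features.

```cpp
#include <cassert>
#include <set>
#include <string>
#include <vector>

// Illustrative analogue of lit's REQUIRES handling: a test is runnable
// only when every required feature is in the available set.
inline bool shouldRunTest(const std::set<std::string> &availableFeatures,
                          const std::vector<std::string> &requiredFeatures) {
  for (const std::string &feature : requiredFeatures)
    if (availableFeatures.count(feature) == 0)
      return false;
  return true;
}
```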