Details
- Reviewers: herhut
- Commits: rGdf6cbd37f57f: [mlir] Lower gpu.memcpy to GPU runtime calls.

Diff Detail
- Repository: rG LLVM Github Monorepo

Event Timeline
mlir/lib/Conversion/GPUCommon/ConvertLaunchFuncToRuntimeCalls.cpp:684
> This would only support vectors? Maybe use getMemrefDescriptorSizes from the LLVM lowering to compute the actual size?
mlir/lib/Conversion/GPUCommon/ConvertLaunchFuncToRuntimeCalls.cpp:684
> For identity layout (verified on line 672), stride[0] * size[0] gives the correct number of elements, because stride[0] is product(size[1..n-1]). getMemrefDescriptorSizes is not the right API here; you would first need to extract the dynamic sizes from the struct.
mlir/lib/Conversion/GPUCommon/ConvertLaunchFuncToRuntimeCalls.cpp:684
> Ah, this is subtle. Can you leave a comment so I understand this next time round, as well? Especially as the meaning of isSupportedMemRefType is not obvious here.
mlir/lib/Conversion/GPUCommon/ConvertLaunchFuncToRuntimeCalls.cpp:684
> Added comment. I've been wanting to rename isSupportedMemRefType. Will do in a separate revision.