This is an archive of the discontinued LLVM Phabricator instance.

[libc] Support setting 'native' GPU architecture for libc
ClosedPublic

Authored by jhuber6 on Mar 27 2023, 12:00 PM.

Details

Summary

We already use the amdgpu-arch and nvptx-arch tools to determine which
GPU architectures the user's system supports. We can provide
LIBC_GPU_ARCHITECTURES=native to allow users to easily build support
for only the architecture found on their system. This also cleans up
the code somewhat.
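A minimal CMake sketch of how the 'native' value might be resolved, assuming the detection tools live next to the other LLVM binaries; the variable names here are illustrative, not the exact code from this patch:

  # Hypothetical sketch: resolve LIBC_GPU_ARCHITECTURES=native by asking the
  # clang detection tools which GPUs are present. Variable names are assumed.
  find_program(LIBC_AMDGPU_ARCH amdgpu-arch PATHS ${LLVM_BINARY_DIR}/bin)
  find_program(LIBC_NVPTX_ARCH nvptx-arch PATHS ${LLVM_BINARY_DIR}/bin)

  if(LIBC_GPU_ARCHITECTURES STREQUAL "native")
    set(gpu_list "")
    if(LIBC_AMDGPU_ARCH)
      execute_process(COMMAND ${LIBC_AMDGPU_ARCH}
                      OUTPUT_VARIABLE gpu_list
                      OUTPUT_STRIP_TRAILING_WHITESPACE)
    elseif(LIBC_NVPTX_ARCH)
      execute_process(COMMAND ${LIBC_NVPTX_ARCH}
                      OUTPUT_VARIABLE gpu_list
                      OUTPUT_STRIP_TRAILING_WHITESPACE)
    endif()

    # Both tools print one architecture per line; take the first GPU reported.
    string(REPLACE "\n" ";" gpu_list "${gpu_list}")
    if(gpu_list)
      list(GET gpu_list 0 LIBC_GPU_TARGET_ARCHITECTURE)
    endif()
  endif()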

Diff Detail

Event Timeline

jhuber6 created this revision. Mar 27 2023, 12:00 PM
Herald added projects: Restricted Project, Restricted Project. Mar 27 2023, 12:00 PM
jhuber6 requested review of this revision. Mar 27 2023, 12:00 PM
jhuber6 updated this revision to Diff 509049. Mar 28 2023, 9:30 AM

Add a check in case the tools aren't found for some reason. (This should never reasonably happen.)
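Presumably a guard along these lines (a minimal sketch; the variable names are assumptions carried over from the sketch above):

  # Hypothetical guard: fail loudly when 'native' was requested but neither
  # detection tool could be located, rather than silently configuring nothing.
  if(LIBC_GPU_ARCHITECTURES STREQUAL "native" AND
     NOT LIBC_AMDGPU_ARCH AND NOT LIBC_NVPTX_ARCH)
    message(FATAL_ERROR "Cannot detect the native GPU architecture: "
                        "neither amdgpu-arch nor nvptx-arch was found.")
  endif()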

tra added a subscriber: tra. Mar 28 2023, 11:15 AM
tra added inline comments.
libc/cmake/modules/prepare_libc_gpu_build.cmake
76

Is that a single architecture, or can we supply a list?

We may want to be able to build for a set of user-specified GPUs and, if I can dream out loud, allow specifying which GPU to run them on, so I could just do ninja check-libc-sm_70 and it would run the tests with CUDA_VISIBLE_DEVICES=<ID of sm_70 GPU>.

jhuber6 added inline comments. Mar 28 2023, 11:19 AM
libc/cmake/modules/prepare_libc_gpu_build.cmake
76

Right now it's a single architecture. The reason is that the internal objects used for testing and the ones exported via the libcgpu.a library are separate compilations. Since the existing libc testing infrastructure only ever expected a single executable, this was the easiest way to implement it. Basically, the libcgpu.a library contains N architectures packed into a single file, while the internal testing implementation builds for just one. I don't think it's impossible to support what you're asking; it would just require a lot more CMake.

tra accepted this revision. Mar 28 2023, 11:29 AM

LGTM in general.

libc/cmake/modules/prepare_libc_gpu_build.cmake
76

Can we build a set of per-GPU test executables and then just pick one of them to run libc tests?

This revision is now accepted and ready to land. Mar 28 2023, 11:29 AM
jhuber6 added inline comments. Mar 28 2023, 11:32 AM
libc/cmake/modules/prepare_libc_gpu_build.cmake
76

So, libc works by creating a set of targets according to the directories and filenames. For example, compiling some file creates the CMake target libc.src.string.memcmp, and we attach the compiled files to that target via a CMake property. We could potentially attach several files under different properties and use the list of architectures to pull them out while building the tests. I didn't spend much time on changing this because I figured it would be sufficient to test a single architecture at a time, but it would be interesting to be able to test an AMD and an NVIDIA GPU at the same time.
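For illustration, the property-per-architecture idea might look roughly like this; the property name, object-file path, and per-architecture loop are hypothetical, not the libc build's actual rules:

  # Hypothetical sketch: key the compiled object files by GPU architecture on
  # the entrypoint target so the test rules can pull out the variant they need.
  foreach(gpu_arch ${LIBC_GPU_ARCHITECTURES})
    set(object_file ${CMAKE_CURRENT_BINARY_DIR}/memcmp.${gpu_arch}.o)
    # ... the compile rule producing ${object_file} for ${gpu_arch} goes here ...
    set_property(TARGET libc.src.string.memcmp
                 PROPERTY COMPILED_OBJECT_${gpu_arch} ${object_file})
  endforeach()

  # Later, inside a per-architecture loop in the test rules:
  get_target_property(memcmp_object libc.src.string.memcmp
                      COMPILED_OBJECT_${gpu_arch})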

This revision was automatically updated to reflect the committed changes.
tra added inline comments. Mar 28 2023, 11:56 AM
libc/cmake/modules/prepare_libc_gpu_build.cmake
76

My strawman idea was, roughly, to wrap the part of the CMake that generates test targets in a loop iterating over GPUs. It would create the same targets but with a GPU-specific suffix, which should in theory make it a largely mechanical change. But I'm also not familiar with the details of the libc CMake, so if it's more invasive than that, targeting a single architecture for tests is fine.
It just looks a bit odd, considering that we do allow targeting multiple GPUs for the library itself; one would assume we'd want to test multiple GPU variants, too.
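A rough sketch of such a loop, assuming hypothetical test target names, a loader variable, and a per-architecture device-ID mapping (none of which exist in the patch as written):

  # Hypothetical sketch: one suffixed check target per requested GPU, with the
  # run pinned to the matching device via CUDA_VISIBLE_DEVICES, so that e.g.
  # 'ninja check-libc-sm_70' tests only the sm_70 GPU.
  foreach(gpu_arch ${LIBC_GPU_ARCHITECTURES})
    add_custom_target(check-libc-${gpu_arch}
      COMMAND ${CMAKE_COMMAND} -E env
              CUDA_VISIBLE_DEVICES=${LIBC_GPU_DEVICE_ID_${gpu_arch}}
              ${LIBC_GPU_LOADER} $<TARGET_FILE:libc-test-${gpu_arch}>
      DEPENDS libc-test-${gpu_arch}
      COMMENT "Running libc tests for ${gpu_arch}")
  endforeach()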

jhuber6 added inline comments. Mar 28 2023, 11:59 AM
libc/cmake/modules/prepare_libc_gpu_build.cmake
76

Yeah, it's mostly just a convenience. The test files are built differently: they're intended to be compiled directly to a GPU image and then executed via the loader. The GPU library, on the other hand, is all LLVM-IR for LTO that's packaged into a fatbinary for the "new driver". It's certainly doable, but it's not very high on my priorities right now.