This feature will primarily be used by the runtimes build. In the
runtimes build, we build libc-hdrgen once for the host and use it
to build the libc for other targets. So, the host libc-hdrgen is
like a prebuilt binary passed to the target builds.
Details
- Reviewers: lntue
- Commits: rG5274a460dac6: [libc] Use a prebuilt libc-hdrgen binary if available.

Diff Detail
- Repository: rG LLVM Github Monorepo

Event Timeline
libc/CMakeLists.txt:155–156

This gives me the following error when I run with -DLLVM_ENABLE_RUNTIMES=libc. I remember adding this code to work around it.

```
CMake Error at /home/jhuber/Documents/llvm/llvm-project/libc/utils/HdrGen/CMakeLists.txt:3 (add_tablegen):
  Unknown CMake command "add_tablegen".
```
libc/CMakeLists.txt:155–156

Ah, OK. One way to fix that problem is proposed here: https://reviews.llvm.org/D141460. Essentially, libc-hdrgen is a host tool and should be built separately and not as part of the libc build. As a workaround for now, you can try building libc-hdrgen separately and then do a full build with the LIBC_HDRGEN_EXE option specified. Also, can you explain why you need the full build for GPU builds? For headers?
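A minimal sketch of that workaround, assuming the `libc-hdrgen` ninja target name and the build-directory paths here are chosen for illustration (the exact invocation may differ on your setup):

```shell
# Step 1: build libc-hdrgen as a host tool in its own build tree.
cmake -G Ninja -S llvm-project/llvm -B build-hdrgen \
  -DLLVM_ENABLE_PROJECTS=libc
ninja -C build-hdrgen libc-hdrgen

# Step 2: do the full build, pointing it at the prebuilt host binary
# via the LIBC_HDRGEN_EXE option added in this change.
cmake -G Ninja -S llvm-project/llvm -B build \
  -DLLVM_ENABLE_RUNTIMES=libc \
  -DLIBC_HDRGEN_EXE=$PWD/build-hdrgen/bin/libc-hdrgen
ninja -C build
```

The key point is that the binary in step 1 is built for, and runs on, the host, regardless of which target the libc in step 2 is built for.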
libc/CMakeLists.txt:155–156

Yes, it's primarily the headers, since the user's system headers are most likely not going to be compatible. For example, on my machine, if I try to include the standard headers while compiling for the GPU target, it thinks it's a 32-bit system and tries to include some stubs which don't exist. The second reason is that I think in order to run the code on the GPU, we need a "loader" of some kind, which is usually handled by a language runtime, i.e. libcuda or libomptarget, so I think that's in line with a full build. Is there a reason we don't want to just do what I did above? A runtimes build is somewhat different from a bootstrap since we can still reliably expect to be in-tree.

```cmake
if("libc" IN_LIST LLVM_ENABLE_RUNTIMES)
  include(TableGen)
endif()
```
libc/CMakeLists.txt:155–156

The proposed patch that I linked to above solves this problem for the general case. In the general case, a runtimes/bootstrap build is a cross build. So, you cannot build libc-hdrgen for the target and run it at libc build time on the host. It should be built as a host tool separately and then used to do the actual runtimes build. If we go with what you are suggesting, then we will build libc-hdrgen for the target as well, which is incorrect.
libc/CMakeLists.txt:155–156

I see, that makes sense for cross builds. So how do you get the libc header generator tool then? The patch linked just seems to error if it hasn't been added as a target. So how do we get the binary in the first place without first building libc?
libc/CMakeLists.txt:155–156

The runtimes build for the libc full build is currently broken in many ways without the proposed patch. So, I actually would say that until it is fixed, you should build clang separately and then do a standalone libc full build. That way, you will mimic the runtimes build but not have to deal with the broken runtimes build for the full libc.
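A sketch of that two-step flow, where the build-directory names are hypothetical and `LLVM_LIBC_FULL_BUILD` is the libc full-build switch (the exact standalone configuration may differ):

```shell
# Step 1: build clang for the host in its own tree.
cmake -G Ninja -S llvm-project/llvm -B build-clang \
  -DLLVM_ENABLE_PROJECTS=clang -DCMAKE_BUILD_TYPE=Release
ninja -C build-clang clang

# Step 2: standalone libc full build driven by the just-built clang,
# mimicking what the runtimes build would do.
cmake -G Ninja -S llvm-project/llvm -B build-libc \
  -DLLVM_ENABLE_PROJECTS=libc \
  -DCMAKE_C_COMPILER=$PWD/build-clang/bin/clang \
  -DCMAKE_CXX_COMPILER=$PWD/build-clang/bin/clang++ \
  -DLLVM_LIBC_FULL_BUILD=ON
ninja -C build-libc
```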
libc/CMakeLists.txt:155–156

The whole point of -DLLVM_ENABLE_RUNTIMES= is to remove the need for two-step builds, so it would be nice if we could get it to work. What's the problem with including TableGen here? So, if I understand this correctly, the problem is that if the user wants to cross-build (e.g. x86_64 host but AArch64 target), then the HdrGen executable will be built for AArch64 and thus cannot be run on the host? Could we override the triple in the compilation?
libc/CMakeLists.txt:155–156

Ah, if all you want is a host runtimes build (or a libc GPU build), you can still do it without any changes: just list the libc project in LLVM_ENABLE_PROJECTS and LLVM_ENABLE_RUNTIMES.
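A sketch of that single-invocation configuration for a host-only build (the `clang` entry and target name are illustrative assumptions): listing libc under both options lets the projects build provide the host tools while the runtimes build produces the libc itself.

```shell
# Host runtimes build: libc appears in both PROJECTS and RUNTIMES,
# so no separate libc-hdrgen build step is needed.
cmake -G Ninja -S llvm-project/llvm -B build \
  -DLLVM_ENABLE_PROJECTS="clang;libc" \
  -DLLVM_ENABLE_RUNTIMES=libc
ninja -C build
```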