This is an attempt to reduce the time taken by the Bootstrapping
build job and the Clang CI job that builds the compiler from scratch.
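For context, a minimal sketch of how ccache is typically wired into a CMake-based Clang build; the exact configuration used by the pipeline may differ, and the flags below are standard CMake/LLVM options rather than a copy of the CI script:

```sh
# Use ccache as a compiler launcher so object files are reused between runs.
cmake -G Ninja -S llvm -B build \
  -DCMAKE_BUILD_TYPE=Release \
  -DCMAKE_C_COMPILER_LAUNCHER=ccache \
  -DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
  -DLLVM_ENABLE_PROJECTS=clang
ninja -C build clang
```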
Details

Reviewers: Mordante, philnik
Group Reviewers: Restricted Project
Commits: rG19ef02e3f4f8: [libc++][ci] Use ccache in the jobs that build Clang

Diff Detail

Repository: rG LLVM Github Monorepo

Event Timeline
If the cache is preserved between runs, we might want to do the same for libc++.
LGTM when the CI is green.
libcxx/utils/ci/buildkite-pipeline-clang.yml:27–28

This gives statistics of the cache usage. I think that will be useful, at least initially; maybe we want to tweak the cache parameters, for example the size.
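For reference, a hedged sketch of how the statistics could be inspected and the cache size tuned from a CI script. The ccache flags shown (-z, -s, -M) are standard, but the 2G limit is purely illustrative and not the value used in the pipeline:

```sh
ccache -z          # zero the statistics at the start of the run
ninja -C build clang
ccache -s          # print hit/miss statistics at the end of the run
ccache -M 2G       # raise the maximum cache size (2G is an arbitrary example)
```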
I don't think that makes a lot of sense. The libc++ build takes only a few seconds and (unfortunately) relies on large parts of our headers, which change with pretty much every patch, so caching is probably not very profitable. Also, while it's not a big problem, it does add another point of failure to the build. It's most likely worth the small risk for Clang builds, but I'm not convinced it is for libc++ builds.
I don't recall the last time I had issues with ccache. But as I said, we "might want to" consider it, so we would need to measure. I have no idea how much time the runners spend building libc++. The build is the same for a lot of runners, since we always build it with C++20. Also, not all headers are touched by every patch.
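To make that measurement concrete, here is a hedged sketch of timing a cold versus a warm libc++ build, assuming the tree was configured with ccache as the compiler launcher as above; the cxx target is the standard libc++ build target, but the directory layout is an assumption:

```sh
ccache -C && ccache -z         # clear the cache and zero its statistics
time ninja -C build cxx        # cold-cache build of the libc++ target
ninja -C build -t clean        # remove build outputs but keep the ccache
time ninja -C build cxx        # warm-cache rebuild; compare wall-clock times
ccache -s                      # inspect the hit rate for the warm build
```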
libcxx/utils/ci/buildkite-pipeline-clang.yml:26

I've fixed this in https://reviews.llvm.org/rG4de9936fe0e31ceb817db1cdfc5dd4af2d44e01e
libcxx/utils/ci/buildkite-pipeline-clang.yml:26

Ah, thanks for the catch. Yeah, this was an obvious mistake, but I forgot to trigger the Clang CI to test this patch.
I've created D151780 to validate whether that fix solves the build issue (https://buildkite.com/llvm-project/libcxx-ci/builds/25187).