This is an archive of the discontinued LLVM Phabricator instance.

[Driver] Support default libc++ library location on Darwin
Accepted · Public

Authored by phosek on Apr 13 2018, 2:56 PM.

Details

Summary

The Darwin driver currently uses the libc++ headers that are part of the Clang
toolchain when available (by default ../include/c++/v1 relative to the
executable), but it completely ignores the libc++ library itself: it never
passes the location of the libc++ library that ships with Clang (by default
../lib relative to the executable) to the linker, so the system copy of
libc++ is always used.

This can lead to subtle issues where compilation fails because the headers
that are part of the Clang toolchain are incompatible with the system library.
The driver should either ignore both the headers and the library, or always
try to use both when available.

This patch changes the driver to do the latter, which seems more reasonable:
it makes it easy to test and use a custom libc++ build on Darwin while still
allowing the use of the system version. This also matches the Clang driver
behavior on other systems.

Diff Detail

Repository
rC Clang

Event Timeline

phosek created this revision.Apr 13 2018, 2:56 PM

I'm not sure I understand this. The proper location for libc++ on the darwin layout is in the SDK, not relative to the driver. The default behaviour is similar to cross-compiling, and with a (derived) SDK. This definitely needs to be reviewed by @dexonsmith

I'm not sure I understand this. The proper location for libc++ on the darwin layout is in the SDK, not relative to the driver. The default behaviour is similar to cross-compiling, and with a (derived) SDK. This definitely needs to be reviewed by @dexonsmith

Yes, but the location of libc++ headers on Darwin is always relative to the driver. How do you then ensure the consistency between the libc++ library and headers when using your own build of Clang?
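
(As a quick sanity check, the header search order can be dumped like this; the toolchain's ../include/c++/v1 entry shows up regardless of which libc++.dylib later gets linked. The command is only illustrative.)

  clang++ -x c++ -v -E /dev/null 2>&1 | grep -A 20 'search starts here'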

thakis added a subscriber: thakis.Apr 25 2018, 11:44 AM

because the headers that are part of Clang toolchain are incompatible with the system library

Do you have details on this? This isn't supposed to be the case as far as I know. We link chrome/mac against system libc++ with the bundled headers, and it at least seems to work fine (and, from what I understand, is supposed to work as well).

because the headers that are part of Clang toolchain are incompatible with the system library

Do you have details on this? This isn't supposed to be the case as far as I know. We link chrome/mac against system libc++ with the bundled headers, and it at least seems to work fine (and, from what I understand, is supposed to work as well).

Agreed; this sounds bad. libc++ goes to great pains to maintain both forward and backward compatibility in its headers; @dexonsmith should weigh in here.

because the headers that are part of Clang toolchain are incompatible with the system library

Do you have details on this? This isn't supposed to be the case as far as I know. We link chrome/mac against system libc++ with the bundled headers, and it at least seems to work fine (and, from what I understand, is supposed to work as well).

We haven't run into any incompatibility issues yet, AFAIK. If backwards and forwards compatibility for libc++ headers is guaranteed, this shouldn't be an issue. The problem I'm dealing with is actually slightly different. We would like to start using C++17 and C++2a features already supported by libc++, but it's unclear which of them are supported by the libc++ shipped on macOS, so in this case we're at the whim of Apple. The way we address this on Linux is by always statically linking the libc++ that's shipped with Clang, but on Darwin that's currently not possible because of D44671 and this issue. Neither of those seems like a fundamental technical problem: with these two patches applied, I can statically link our copy of libc++ on Darwin and everything seems to work fine.
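
For reference, a minimal sketch of what this looks like on Linux today (flags only; our actual build setup is more involved and the source file is just a placeholder):

  # The Linux driver already adds <clang>/../lib to the library search path,
  # so the toolchain's libc++.a gets linked statically with stock flags:
  clang++ -std=c++17 -stdlib=libc++ -static-libstdc++ main.cpp -o main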

beanz added a reviewer: bruno.May 1 2018, 11:40 AM
beanz added a subscriber: bruno.

@dexonsmith is often super busy, so @bruno may be able to weigh in instead.

I think we just ran into one such issue. We're using our own Clang that usually follows tip-of-tree, and we recently switched to C++17, but that started failing on our macOS 10.12 bots with:

Undefined symbols for architecture x86_64:
  "operator delete(void*, std::align_val_t)", referenced from:
      _main in main.cpp.o
      std::__1::__vector_base<std::__1::unique_ptr<fidl::SourceFile, std::__1::default_delete<fidl::SourceFile> >, std::__1::allocator<std::__1::unique_ptr<fidl::SourceFile, std::__1::default_delete<fidl::SourceFile> > > >::~__vector_base() in main.cpp.o
      std::__1::__vector_base<std::__1::unique_ptr<fidl::raw::UnionDeclaration, std::__1::default_delete<fidl::raw::UnionDeclaration> >, std::__1::allocator<std::__1::unique_ptr<fidl::raw::UnionDeclaration, std::__1::default_delete<fidl::raw::UnionDeclaration> > > >::~__vector_base() in main.cpp.o
      std::__1::__vector_base<std::__1::unique_ptr<fidl::raw::UnionMember, std::__1::default_delete<fidl::raw::UnionMember> >, std::__1::allocator<std::__1::unique_ptr<fidl::raw::UnionMember, std::__1::default_delete<fidl::raw::UnionMember> > > >::~__vector_base() in main.cpp.o
      std::__1::__vector_base<std::__1::unique_ptr<fidl::raw::Attribute, std::__1::default_delete<fidl::raw::Attribute> >, std::__1::allocator<std::__1::unique_ptr<fidl::raw::Attribute, std::__1::default_delete<fidl::raw::Attribute> > > >::~__vector_base() in main.cpp.o
      std::__1::__vector_base<std::__1::unique_ptr<fidl::raw::StructDeclaration, std::__1::default_delete<fidl::raw::StructDeclaration> >, std::__1::allocator<std::__1::unique_ptr<fidl::raw::StructDeclaration, std::__1::default_delete<fidl::raw::StructDeclaration> > > >::~__vector_base() in main.cpp.o
      std::__1::__vector_base<std::__1::unique_ptr<fidl::raw::StructMember, std::__1::default_delete<fidl::raw::StructMember> >, std::__1::allocator<std::__1::unique_ptr<fidl::raw::StructMember, std::__1::default_delete<fidl::raw::StructMember> > > >::~__vector_base() in main.cpp.o
      ...
  "operator new(unsigned long, std::align_val_t)", referenced from:
      std::__1::enable_if<__is_forward_iterator<char*>::value, void>::type std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >::__init<char*>(char*, char*) in main.cpp.o
      std::__1::unique_ptr<std::__1::__tree_node<std::__1::__value_type<(anonymous namespace)::Behavior, std::__1::basic_fstream<char, std::__1::char_traits<char> > >, void*>, std::__1::__tree_node_destructor<std::__1::allocator<std::__1::__tree_node<std::__1::__value_type<(anonymous namespace)::Behavior, std::__1::basic_fstream<char, std::__1::char_traits<char> > >, void*> > > > std::__1::__tree<std::__1::__value_type<(anonymous namespace)::Behavior, std::__1::basic_fstream<char, std::__1::char_traits<char> > >, std::__1::__map_value_compare<(anonymous namespace)::Behavior, std::__1::__value_type<(anonymous namespace)::Behavior, std::__1::basic_fstream<char, std::__1::char_traits<char> > >, std::__1::less<(anonymous namespace)::Behavior>, true>, std::__1::allocator<std::__1::__value_type<(anonymous namespace)::Behavior, std::__1::basic_fstream<char, std::__1::char_traits<char> > > > >::__construct_node<(anonymous namespace)::Behavior, std::__1::basic_fstream<char, std::__1::char_traits<char> > >((anonymous namespace)::Behavior&&, std::__1::basic_fstream<char, std::__1::char_traits<char> >&&) in main.cpp.o
      std::__1::__split_buffer<fidl::SourceManager, std::__1::allocator<fidl::SourceManager>&>::__split_buffer(unsigned long, unsigned long, std::__1::allocator<fidl::SourceManager>&) in main.cpp.o
      std::__1::vector<fidl::StringView, std::__1::allocator<fidl::StringView> >::__vallocate(unsigned long) in main.cpp.o
      std::__1::unique_ptr<std::__1::__tree_node<std::__1::__value_type<std::__1::vector<fidl::StringView, std::__1::allocator<fidl::StringView> >, std::__1::unique_ptr<fidl::flat::Library, std::__1::default_delete<fidl::flat::Library> > >, void*>, std::__1::__tree_node_destructor<std::__1::allocator<std::__1::__tree_node<std::__1::__value_type<std::__1::vector<fidl::StringView, std::__1::allocator<fidl::StringView> >, std::__1::unique_ptr<fidl::flat::Library, std::__1::default_delete<fidl::flat::Library> > >, void*> > > > std::__1::__tree<std::__1::__value_type<std::__1::vector<fidl::StringView, std::__1::allocator<fidl::StringView> >, std::__1::unique_ptr<fidl::flat::Library, std::__1::default_delete<fidl::flat::Library> > >, std::__1::__map_value_compare<std::__1::vector<fidl::StringView, std::__1::allocator<fidl::StringView> >, std::__1::__value_type<std::__1::vector<fidl::StringView, std::__1::allocator<fidl::StringView> >, std::__1::unique_ptr<fidl::flat::Library, std::__1::default_delete<fidl::flat::Library> > >, std::__1::less<std::__1::vector<fidl::StringView, std::__1::allocator<fidl::StringView> > >, true>, std::__1::allocator<std::__1::__value_type<std::__1::vector<fidl::StringView, std::__1::allocator<fidl::StringView> >, std::__1::unique_ptr<fidl::flat::Library, std::__1::default_delete<fidl::flat::Library> > > > >::__construct_node<std::__1::pair<std::__1::vector<fidl::StringView, std::__1::allocator<fidl::StringView> >, std::__1::unique_ptr<fidl::flat::Library, std::__1::default_delete<fidl::flat::Library> > > >(std::__1::pair<std::__1::vector<fidl::StringView, std::__1::allocator<fidl::StringView> >, std::__1::unique_ptr<fidl::flat::Library, std::__1::default_delete<fidl::flat::Library> > >&&) in main.cpp.o
      std::__1::enable_if<__is_forward_iterator<char const*>::value, void>::type std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >::__init<char const*>(char const*, char const*) in libfidl.a(c_generator.cpp.o)
      std::__1::__split_buffer<unsigned int, std::__1::allocator<unsigned int>&>::__split_buffer(unsigned long, unsigned long, std::__1::allocator<unsigned int>&) in libfidl.a(c_generator.cpp.o)
      ...
ld: symbol(s) not found for architecture x86_64

AFAICT this is because /usr/lib/libc++.dylib doesn't yet support C++17, but the headers that are part of the Clang toolchain do. Forcing Clang to use the libc++ that's part of the toolchain by manually setting the necessary -L/-l flags resolves the issue. I don't know if this is expected behavior, but I'd really appreciate some response on this.
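
For the record, the manual workaround looks roughly like this (the library path is illustrative and depends on the install layout):

  # Point the link step at the toolchain's own libc++ instead of
  # /usr/lib/libc++.dylib:
  clang++ -std=c++17 -stdlib=libc++ -L/path/to/clang/../lib main.cpp -o main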

beanz added a subscriber: jfb.Aug 8 2018, 1:19 PM

Adding @jfb since this is his domain now too.

jfb added a reviewer: ldionne.Aug 8 2018, 1:20 PM

Adding @jfb since this is his domain now too.

@ldionne is the libc++ expert :-)

@phosek I don't understand how you can expect code compiled with new headers to link against an old dylib, unless you're setting the target platform, in which case anything that would fail to link will instead be a compiler error. I mean, we're adding stuff to the dylib from one version to the next, so _of course_ it won't always work.

Regarding the specific error you're encountering (Undefined symbols for architecture x86_64: "operator delete(void*, std::align_val_t)", referenced from: [...]), we're aware of it and this will in fact be fixed by https://reviews.llvm.org/D50344.

I must say I therefore don't understand the purpose of this patch.

@phosek I don't understand how you can expect code compiled with new headers to link against an old dylib, unless you're setting the target platform, in which case anything that would fail to link will instead be a compiler error. I mean, we're adding stuff to the dylib from one version to the next, so _of course_ it won't always work.

@ldionne it's the other way round. Today, the Clang driver on macOS automatically uses headers that are part of the compiler distribution (i.e. -I../include/c++/v1) but it always links against the system library (/usr/lib/libc++.dylib) because the library path to the libraries that are shipped with the compiler (-L../lib) is never added to the link line. Those two may not necessarily be the same version, and almost certainly won't be if you use a Clang built from trunk.

We hit this case again recently where developers are trying to use the libc++ filesystem library. Using #include <filesystem> just works, but -lc++fs fails because that library is not shipped as part of macOS at the moment, so developers need to add an explicit -L<path to compiler>/../lib on macOS. This is not the case on other platforms such as Linux.
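
Roughly, this is what developers end up writing today on macOS (path and file name are illustrative):

  # On Linux this works out of the box; on macOS the toolchain's lib dir has
  # to be added by hand before -lc++fs resolves to libc++fs.a:
  clang++ -std=c++17 -stdlib=libc++ use_fs.cpp -L/path/to/clang/../lib -lc++fs -o use_fs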

The purpose of this patch is to make the driver behave more consistently across platforms so developers who build their own Clang don't need to explicitly pass -L<path to compiler>/../lib on macOS.

@phosek I don't understand how you can expect code compiled with new headers to link against an old dylib, unless you're setting the target platform, in which case anything that would fail to link will instead be a compiler error. I mean, we're adding stuff to the dylib from one version to the next, so _of course_ it won't always work.

@ldionne it's the other way round. Today, the Clang driver on macOS automatically uses headers that are part of the compiler distribution (i.e. -I../include/c++/v1) but it always links against the system library (/usr/lib/libc++.dylib) because the library path to the libraries that are shipped with the compiler (-L../lib) is never added to the link line. Those two may not necessarily be the same version, and almost certainly won't be if you use a Clang built from trunk.

Sorry, my comment was wrong. You're right, using new libc++ headers and linking against an old libc++.dylib is a supported use case, and in fact this is exactly what happens whenever you use new libc++ headers and link against the system-provided libc++.dylib on macOS. However, what is _not_ supported is linking against a new libc++.dylib and then trying to run using the system's libc++.dylib, which may be older.

If you add -L<driver-path>/../lib, will you start linking against the Clang-provided libc++.dylib? If so, and if you run the resulting application without setting the DYLD_LIBRARY_PATH to include the Clang-provided libc++.dylib, your program won't run because the system libc++.dylib may not include all symbols that the newer Clang-provided libc++.dylib contains.
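
Concretely, something along these lines would be needed to run such a binary (directory is illustrative):

  # Make dyld resolve libc++.dylib from the toolchain instead of /usr/lib:
  DYLD_LIBRARY_PATH=/path/to/clang/../lib ./myapp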

We hit this case again recently where developers are trying to use the libc++ filesystem library. Using #include <filesystem> just works, but -lc++fs fails because that library is not shipped as part of macOS at the moment, so developers need to add an explicit -L<path to compiler>/../lib on macOS. This is not the case on other platforms such as Linux.

Not shipping filesystem on macOS is a design choice we're making because the filesystem library is not ABI stable. Adding -L<path-to-compiler>/../lib explicitly shows that you understand you're doing something unusual (and not officially supported), which is good.

I assume this does not happen on Linux because Linux distributions must include libc++fs.a as part of their system -- is that really the case?

The purpose of this patch is to make the driver behave more consistently across platforms so developers who build their own Clang don't need to explicitly pass -L<path to compiler>/../lib on macOS.

Thanks for the good explanation -- now I understand the purpose of the patch. However, I think we need a larger discussion around how libc++ is shipped to users and what use cases we want to support. For example, one question I have is why we're even shipping libc++.dylib, libc++abi.dylib and libunwind.dylib in our LLVM releases for MacOS, given they are provided by the system (and mixing them is a recipe for disaster). Another question is whether the LLVM-provided Clang should instead always link to the LLVM-provided libraries (which would require users setting the DYLD_LIBRARY_PATH properly to avoid falling back onto the macOS-provided libc++.dylib).

I'm quite sympathetic to your use case (and in fact we have similar use cases), but I'm uncomfortable moving forward with this patch until we have a better understanding of some important surrounding questions. I'd like to talk with @dexonsmith about it and then maybe we can meet at the LLVM Dev Meeting (if you plan to attend) with other libc++ people to flesh those questions out?

I think on Darwin it would not make sense to have libc++fs.a ship in libc++.dylib especially considering that it ends up in the dyld cache and that has a lot of other implications. It would make sense to ship it as a separate library, perhaps as part of the SDK, at least for now. As far as making it a system dylib, it's a possibility as long as no core os components depend on it and it's there solely for consumers.

I think on Darwin it would not make sense to have libc++fs.a ship in libc++.dylib especially considering that it ends up in the dyld cache and that has a lot of other implications. It would make sense to ship it as a separate library, perhaps as part of the SDK, at least for now.

Indeed, this makes a lot of sense and this is exactly what I was thinking about. I'd like to get a more holistic plan before we do that though.

As far as making it a system dylib, it's a possibility as long as no core os components depend on it and it's there solely for consumers.

We can't do that unless that dylib is ABI-stable. Otherwise, imagine what happens when we update the OS (which would then contain a new version of that dylib): all consumers that link against that dylib will now crash at runtime because we'll have broken its ABI.

Sorry, my comment was wrong. You're right, using new libc++ headers and linking against an old libc++.dylib is a supported use case, and in fact this is exactly what happens whenever you use new libc++ headers and link against the system-provided libc++.dylib on macOS. However, what is _not_ supported is linking against a new libc++.dylib and then trying to run using the system's libc++.dylib, which may be older.

If you add -L<driver-path>/../lib, will you start linking against the Clang-provided libc++.dylib? If so, and if you run the resulting application without setting the DYLD_LIBRARY_PATH to include the Clang-provided libc++.dylib, your program won't run because the system libc++.dylib may not include all symbols that the newer Clang-provided libc++.dylib contains.

Yes, that's an issue, and not something this change deals with. The way we handle it in our build is by statically linking libc++.

Not shipping filesystem on macOS is a design choice we're making because the filesystem library is not ABI stable. Adding -L<path-to-compiler>/../lib explicitly shows that you understand you're doing something unusual (and not officially supported), which is good.

I assume this does not happen on Linux because Linux distributions must include libc++fs.a as part of their system -- is that really the case?

On Linux the driver always adds -L<path-to-compiler>/../lib so libc++fs.a is picked up whenever you pass -lc++fs.

Thanks for the good explanation -- now I understand the purpose of the patch. However, I think we need a larger discussion around how libc++ is shipped to users and what use cases we want to support. For example, one question I have is why we're even shipping libc++.dylib, libc++abi.dylib and libunwind.dylib in our LLVM releases for MacOS, given they are provided by the system (and mixing them is a recipe for disaster). Another question is whether the LLVM-provided Clang should instead always link to the LLVM-provided libraries (which would require users setting the DYLD_LIBRARY_PATH properly to avoid falling back onto the macOS-provided libc++.dylib).

I'm quite sympathetic to your use case (and in fact we have similar use cases), but I'm uncomfortable moving forward with this patch until we have a better understanding of some important surrounding questions. I'd like to talk with @dexonsmith about it and then maybe we can meet at the LLVM Dev Meeting (if you plan to attend) with other libc++ people to flesh those questions out?

I'll be at LLVM Dev Meeting and I'd be happy to meet and discuss this.

dexonsmith resigned from this revision.Oct 17 2020, 5:30 AM
dexonsmith added a reviewer: Bigcheese.
dexonsmith added a subscriber: Bigcheese.

If this is still relevant, I’m happy for @ldionne and @Bigcheese to make the call.

Re-reading this: the whole discussion about filesystem is now irrelevant, since it's part of the dylib.

The comment I have is that libc++.dylib is considered to be a system library on macOS, not a toolchain-provided library. This matters because we make sure that /usr/lib/libc++.dylib works well with the other system libraries it's built upon (such as libSystem.dylib) on a given release of macOS. For example, we make sure that the version of libc++ shipped on macOS 10.15 will work with the rest of the base system on macOS 10.15. If you're using trunk libc++ instead, there is no such guarantee. Now, I would argue this makes sense since you are using trunk libc++ -- you are probably not expecting a lot of guarantees.

I believe one of the two following end-situations makes sense, but anything in between is just weird:

Solution 1:

  • Upstream Clang looks for headers in the toolchain and for the library in the toolchain. I think we should also make sure programs built this way use the dylib in the toolchain, not the system one -- I guess setting rpath would achieve that?
  • AppleClang doesn't ship libc++ headers in the toolchain, and doesn't ship the dylib in the toolchain. Instead, those are found in the SDK (and in /usr/lib at runtime). This is already the case modulo a small caveat.
  • If someone wants to build an Upstream Clang toolchain that uses the system libc++, they can just omit the headers and the dylib from the toolchain, and it will fall back to the SDK ones.

Solution 2:

  • Upstream Clang doesn't look for headers or dylib in the toolchain, and we stop shipping those on Apple platforms with the LLVM release cause that's just confusing. The SDK ones are always used, and you have to resort to compiler flags if you want to stray from that.
  • AppleClang becomes the same as Upstream Clang in that respect, i.e. it doesn't ship the headers or the dylib as part of the toolchain, and finds them in the SDK.

Both of these solutions are self consistent.
Solution (1) is more flexible, but you could end up with a libc++.dylib / headers that don't work on the specific system you're shipping to. For example, if you download the latest LLVM toolchain to an old macOS, your ToT libc++.dylib might need some symbols from libSystem.dylib that don't exist yet.
Solution (2) is also self-consistent but less flexible. It has the advantage that you know the LLVM-built toolchain will always work fine on Apple platforms, cause it's using the system libc++.dylib and the headers from the SDK.
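
For Solution (1), a minimal sketch of the rpath idea mentioned above (paths are placeholders, not what the driver would literally emit, and this assumes the toolchain's libc++.dylib has an install name that can actually be resolved via rpath):

  # Link against the toolchain's libc++ and embed its directory as an rpath
  # so the same dylib is preferred at runtime:
  clang++ -stdlib=libc++ main.cpp \
    -L/path/to/toolchain/lib -Wl,-rpath,/path/to/toolchain/lib -o main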

IIUC, an accurate summary of the state we are currently in is:

  • Upstream Clang prefers the headers in the toolchain, but the library in the SDK. Yet, both the headers and the dylib are shipped alongside the toolchain.
  • AppleClang prefers the headers in the SDK, and the library in the SDK. (The headers are currently still shipped in the toolchain but they should be ignored if you have a sufficiently recent SDK that contains the headers -- this is transitional only).

The current state of Upstream Clang is messed up. I personally think we should go with Solution (1) after double-checking with our linker folks that it's going to result in a reasonable Upstream LLVM toolchain.

What do y'all think?

Note: This patch basically implements Solution (1). I would love to see it rebased onto master and for tests to be added if we're all comfortable going down that route.

  • AppleClang prefers the headers in the SDK, and the library in the SDK. (The headers are currently still shipped in the toolchain but they should be ignored if you have a sufficiently recent SDK that contains the headers -- this is transitional only).

How recent is this change? I have Xcode 12, and I still see the C++ headers as only being included in the toolchain (Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/include/c++/v1) and not any of the SDKs.

bruno resigned from this revision.Nov 9 2020, 12:33 PM
  • AppleClang prefers the headers in the SDK, and the library in the SDK. (The headers are currently still shipped in the toolchain but they should be ignored if you have a sufficiently recent SDK that contains the headers -- this is transitional only).

How recent is this change? I have Xcode 12, and I still see the C++ headers as only being included in the toolchain (Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/include/c++/v1) and not any of the SDKs.

Just following up on this, cos I'm curious :) I have 12.1 now, and I still only see the C++ headers in the toolchain and not in any of the SDKs.

phosek updated this revision to Diff 304342.Nov 10 2020, 3:34 PM

I agree with your analysis and I'd also prefer Solution (1). I've rebased the change and included a simple test.

Right now, there's nothing specific to libc++; we're simply relying on the linker. An alternative would be to have more explicit logic in DarwinClang::AddCXXStdlibLibArgs akin to the existing logic for libstdc++ (which looks like legacy code), but we would probably need a new flag to control whether to look for libc++ in the toolchain or fall back onto the SDK.

Regarding rpath, that particular aspect is problematic when you want to build relocatable binaries. Our strategy is to use static linking for libc++ to avoid this issue. That way we can guarantee that the binaries we produce with our toolchain always use the libc++ version we want (which is consistent across all platforms we support). Since the Darwin driver currently doesn't support the -static-libstdc++ flag like other targets do, we set LDFLAGS=-nostdlib++ /path/to/clang/lib/libc++.a on Darwin, which works, but I'd prefer if we could just use -static-libstdc++ everywhere.
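
Concretely, the difference is (paths are illustrative):

  # What we do on Darwin today, since -static-libstdc++ isn't honored there:
  clang++ -nostdlib++ main.cpp /path/to/clang/lib/libc++.a -o main

  # What we'd prefer to write everywhere, as on other targets:
  clang++ -stdlib=libc++ -static-libstdc++ main.cpp -o main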

I think this looks good. I must admit I'm a bit worried about any unintended consequences this might have or breakage this might cause in cases where we'd switch from linking against the SDK libc++.dylib to the toolchain one. This would only impact the toolchain released by LLVM, not Apple's, though, since Apple's toolchain doesn't contain the dylib in <clang>/../lib.

I think this would merit a Clang release note.

Just following up on this, cos I'm curious :) I have 12.1 now, and I still only see the C++ headers in the toolchain and not in any of the SDKs.

Look in Xcode 12.5 beta 3, you should see libc++ headers in the SDK. You'll also see headers alongside Clang, however those are not being used. They are just there for some internal reasons but eventually we'll have only one copy of the headers, and they'll be in the SDK.
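
One quick way to check is to list the headers inside the selected SDK (assuming xcrun points at the Xcode 12.5 beta):

  ls "$(xcrun --show-sdk-path)/usr/include/c++/v1"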

As I explained in https://reviews.llvm.org/D45639#2360267, I think this is the right way forward. We want LLVM Clang to prefer the libc++.dylib (and headers) shipped in the toolchain if those are present, since that's the most consistent approach.

Just one question: with this patch, do we prefer the library in the SDK or the one in the toolchain if both are present? Can we get into trouble if we have both paths on the -L list? I'm trying to think of subtle issues like:

<toolchain>/lib/libc++.a
<sysroot>/lib/libc++.dylib

Which one would we pick here?

Just following up on this, cos I'm curious :) I have 12.1 now, and I still only see the C++ headers in the toolchain and not in any of the SDKs.

Look in Xcode 12.5 beta 3, you should see libc++ headers in the SDK. You'll also see headers alongside Clang, however those are not being used. They are just there for some internal reasons but eventually we'll have only one copy of the headers, and they'll be in the SDK.

As I explained in https://reviews.llvm.org/D45639#2360267, I think this is the right way forward. We want LLVM Clang to prefer the libc++.dylib (and headers) shipped in the toolchain if those are present, since that's the most consistent approach.

Just one question: with this patch, do we prefer the library in the SDK or the one in the toolchain if both are present? Can we get into trouble if we have both paths on the -L list? I'm trying to think of subtle issues like:

<toolchain>/lib/libc++.a
<sysroot>/lib/libc++.dylib

Which one would we pick here?

It depends on the order: whichever comes first wins. The default order of paths that the driver uses is (1) toolchain library paths, (2) library paths specified explicitly using -L, (3) sysroot library paths. So if <toolchain>/lib/libc++.a exists, it'd be picked up, otherwise <sysroot>/lib/libc++.dylib would be used.
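
A quick way to check the resulting order on a given setup (illustrative; -### only prints the commands the driver would run):

  clang++ -### -stdlib=libc++ main.cpp -L/extra/path 2>&1 | grep -o '"-L[^"]*"'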

ldionne accepted this revision.Apr 16 2021, 6:51 AM

It depends on the order: whichever comes first wins. The default order of paths that the driver uses is (1) toolchain library paths, (2) library paths specified explicitly using -L, (3) sysroot library paths. So if <toolchain>/lib/libc++.a exists, it'd be picked up, otherwise <sysroot>/lib/libc++.dylib would be used.

Okay, so if that's the behavior after this patch, I think this is good.

This revision is now accepted and ready to land.Apr 16 2021, 6:51 AM
This revision was automatically updated to reflect the committed changes.
Herald added a project: Restricted Project. Apr 20 2021, 12:30 PM

Looks like this breaks tests on Windows: http://45.33.8.238/win/37191/step_7.txt

Looks like this breaks tests on Windows: http://45.33.8.238/win/37191/step_7.txt

Should be addressed by rGf5efe0aa048b.

Another attempt is rGcaff17e503fe.

This breaks TestAppleSimulatorOSType.py on GreenDragon. First failed build: http://green.lab.llvm.org/green/view/LLDB/job/lldb-cmake/31346/

/Users/buildslave/jenkins/workspace/lldb-cmake/lldb-build/bin/clang  main.o -g -O0 -fno-builtin -isysroot "/Applications/Xcode.app/Contents/Developer/Platforms/WatchSimulator.platform/Developer/SDKs/WatchSimulator6.2.sdk" -arch i386  -I/Users/buildslave/jenkins/workspace/lldb-cmake/llvm-project/lldb/packages/Python/lldbsuite/test/make/../../../../../include -I/Users/buildslave/jenkins/workspace/lldb-cmake/llvm-project/lldb/test/API/tools/lldb-server -I/Users/buildslave/jenkins/workspace/lldb-cmake/llvm-project/lldb/packages/Python/lldbsuite/test/make -include /Users/buildslave/jenkins/workspace/lldb-cmake/llvm-project/lldb/packages/Python/lldbsuite/test/make/test_common.h  -fno-limit-debug-info -target i386-apple-watchos6.2-simulator -mwatchos-simulator-version-min=6.2 -D__STDC_LIMIT_MACROS -D__STDC_FORMAT_MACROS   --driver-mode=g++ -o "test_simulator_platform_watchos"
ld: warning: ignoring file /Users/buildslave/jenkins/workspace/lldb-cmake/lldb-build/lib/libc++.dylib, building for watchOS Simulator-i386 but attempting to link with file built for macOS-x86_64

Based on your description above, shouldn't it prefer the libc++ from the sysroot?

@phosek I've reverted this in 05eeed9691aeb3e0316712195b998e9078cdceb0 to turn the bot green again. Happy to help you look into this tomorrow.

This breaks TestAppleSimulatorOSType.py on GreenDragon. First failed build: http://green.lab.llvm.org/green/view/LLDB/job/lldb-cmake/31346/
[...]

Based on your description above, shouldn't it prefer the libc++ from the sysroot?

Actually, it's the other way around. In the discussion above, we say it's the toolchain path first, then anything provided with -L, and then the sysroot. Here you have a dylib in the toolchain root (assuming that's what jenkins/workspace/lldb-cmake/lldb-build is), so it's using that. It seems that the library built in the toolchain root is built for x86_64 (probably your host), whereas you'd need it to be built for the target (i386 simulator).

This breaks TestAppleSimulatorOSType.py on GreenDragon. First failed build: http://green.lab.llvm.org/green/view/LLDB/job/lldb-cmake/31346/
[...]

Based on your description above, shouldn't it prefer the libc++ from the sysroot?

Actually, it's the other way around. In the discussion above, we say it's the toolchain path first, then anything provided with -L, and then the sysroot. Here you have a dylib in the toolchain root (assuming that's what jenkins/workspace/lldb-cmake/lldb-build is), so it's using that. It seems that the library built in the toolchain root is built for x86_64 (probably your host), whereas you'd need it to be built for the target (i386 simulator).

Correct, that's exactly what's happening here. Since this bot is only running check-debuginfo and check-lldb, would it make sense to stop building libcxx altogether (that is drop libcxx;libcxxabi from LLVM_ENABLE_PROJECTS)? That way the compiler should pick up the one inside the sysroot since that's the only one available.
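
In other words, configure roughly like this (the project list is only an example; whatever else the bot actually needs stays):

  # Drop libcxx;libcxxabi from the project list so no toolchain-local libc++
  # is built or picked up; the compiler then falls back to the sysroot copy:
  cmake -G Ninja ../llvm -DLLVM_ENABLE_PROJECTS="clang;lldb"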

This breaks TestAppleSimulatorOSType.py on GreenDragon. First failed build: http://green.lab.llvm.org/green/view/LLDB/job/lldb-cmake/31346/
[...]

Based on your description above, shouldn't it prefer the libc++ from the sysroot?

Actually, it's the other way around. In the discussion above, we say it's the toolchain path first, then anything provided with -L, and then the sysroot. Here you have a dylib in the toolchain root (assuming that's what jenkins/workspace/lldb-cmake/lldb-build is), so it's using that. It seems that the library built in the toolchain root is built for x86_64 (probably your host), whereas you'd need it to be built for the target (i386 simulator).

Correct, that's exactly what's happening here. Since this bot is only running check-debuginfo and check-lldb, would it make sense to stop building libcxx altogether (that is drop libcxx;libcxxabi from LLVM_ENABLE_PROJECTS)? That way the compiler should pick up the one inside the sysroot since that's the only one available.

No, the bot is also meant to catch changes to libc++ breaking LLDB (or at least making sure we update the corresponding data formatters). I don't think that really matters for the simulator tests though. So if the toolchain takes precedence over -L too, how can I use the just-built compiler and make sure we continue to use the libc++ from the SDK?

Given that these tests are macOS specific and already require a specific SDK, I'll just update them to use the compiler from the SDK instead of the just-built one.

Given that these tests are macOS specific and already require a specific SDK, I'll just update them to use the compiler from the SDK instead of the just-built one.

Done in 5d1c43f333c2327be61604dc90ea675f0d1e6913 and reverted my revert (i.e. re-landed your change in 6331680ad2ad000fdaf7e72f3c1880c7908ffa25).

No, the bot is also meant to catch changes to libc++ breaking LLDB (or at least making sure we update the corresponding data formatters).

Given that these tests are macOS specific and already require a specific SDK, I'll just update them to use the compiler from the SDK instead of the just-built one.

I'm not sure I'm totally following, but just want to double check that the tests won't somehow use the libc++ from the SDK instead of ToT? I guess the test uses -nostdinc++ or something?

Assuming that's doing what we want... I also wanted to highlight this adds a dependency on bot's host clang working with the ToT libc++ headers. That's probably fine -- I assume we keep the bot's Xcode relatively up-to-date -- but I know @ldionne at some point was trying to avoid requiring new libc++ to work with older clangs.

I'm not sure I'm totally following, but just want to double check that the tests won't somehow use the libc++ from the SDK instead of ToT? I guess the test uses -nostdinc++ or something?

Assuming that's doing what we want... I also wanted to highlight this adds a dependency on bot's host clang working with the ToT libc++ headers. That's probably fine -- I assume we keep the bot's Xcode relatively up-to-date -- but I know @ldionne at some point was trying to avoid requiring new libc++ to work with older clangs.

LLDB should either use the libc++ headers from the SDK *and* the dylib from the SDK, or build libc++ from scratch and test against both the trunk headers and the trunk dylib built for their target, not for the host as they seem to do right now. I'm following up offline with @JDevlieghere to try and disentangle that.

phosek reopened this revision.Apr 23 2021, 12:20 AM
This revision is now accepted and ready to land.Apr 23 2021, 12:20 AM

I'm not sure I'm totally following, but just want to double check that the tests won't somehow use the libc++ from the SDK instead of ToT? I guess the test uses -nostdinc++ or something?

Assuming that's doing what we want... I also wanted to highlight this adds a dependency on bot's host clang working with the ToT libc++ headers. That's probably fine -- I assume we keep the bot's Xcode relatively up-to-date -- but I know @ldionne at some point was trying to avoid requiring new libc++ to work with older clangs.

The simulator tests are now using both headers and library from the SDK.

LLDB should either use the libc++ headers from the SDK *and* the dylib from the SDK, or build libc++ from scratch and test against both the trunk headers and the trunk dylib built for their target, not for the host as they seem to do right now. I'm following offline with @JDevlieghere to try and disentangle that.

As discussed on Slack on Wednesday, lldb is doing the former for the simulator tests. All other tests are building for the host and are using the just-built headers and library.

FYI I had to revert this change again because it broke the ubsan build. The problem is that ubsan for Darwin is built as a universal shared library and it links against libc++abi, but we currently don't support building libc++ and libc++abi as universal binaries in the runtimes build. That's something we'll need to address first before we can reland this change. I expect that if we resolve that issue, it would also address the lldb use case, although it sounds like that's no longer an issue.

FYI I had to revert this change again because it broke the ubsan build. The problem is that ubsan for Darwin is built as a universal shared library and it links against libc++abi, but we currently don't support building libc++ and libc++abi as universal binaries in the runtimes build. That's something we'll need to address first before we can reland this change. I expect that if we resolve that issue, it would also address the lldb use case, although it sounds like that's no longer an issue.

@phosek we also hit this problem downstream, but decided that we should just stop building libcxx alongside clang in our CI jobs, and have separate jobs for it. The compiler-rt build should use libcxx headers and library from the SDK (Xcode 12.5 should have libcxx headers in the SDKs now). Do you think this would be a feasible solution for you as well?