Today
Try to fix CI
Next try
Address comments
Next try
In D143033#4097453, @var-const wrote:
In D143033#4095911, @philnik wrote:
No, this also applies to the non-__builtin_ versions. The math functions are declared at about line 1200 (currently).
Thanks. It means it applies to our definition of double fabs(double), right? (If that's the case, it doesn't need the nodiscard attribute on Clang, though I guess it's there for the sake of GCC)
No. Clang only adds them to overloads with C linkage (https://godbolt.org/z/rjY3xeo3o). GCC also warns with -Wall; I don't know where that's documented, though. So I guess hypothetically there could be a C library that doesn't have these functions declared extern "C", but I doubt such libraries actually exist.
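A rough illustration of that point (not the exact snippet from the Godbolt link; the overload set, flags, and warning behaviour described in the comments are assumptions based on the statements above):

```cpp
// Only the C-linkage declaration is recognized as the libm builtin, so only
// it picks up the compiler's implicit 'const' attribute and the resulting
// unused-result warning; a C++-linkage overload only warns if the library
// marks it [[nodiscard]] explicitly.
extern "C" double fabs(double); // treated as the builtin by Clang
float fabs(float);              // ordinary C++ overload, no implicit attributes

void use(double d, float f) {
  fabs(d); // warns (result of a 'const' function ignored), per the link above
  fabs(f); // silent unless the declaration carries [[nodiscard]]
}
```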
The [[nodiscard]] declarations are specifically there to match the diagnostics produced by the compiler, so I think we should check that the compiler actually produces the same diagnostic for all overloads. (https://godbolt.org/z/87hMPjeq7)
In the Godbolt link, the non-double overloads of trunc test our implementation, while trunc(0.) essentially tests that Clang adds the const attribute. I don't think we should be testing Clang behavior.
I guess we just disagree here. IMO we should test the Clang implementation if we mirror it, especially if it's pretty much trivial to check.
Moreover, this seems Clang-centric. Does GCC have the same behavior? If not, a platform that compiles with GCC and "overrides" one of the math functions will fail the expectation (though to be fair, we could mark the test as unsupported on GCC).
We can mark it as unsupported on GCC, although it seems kind of pointless, since GCC will most likely never implement -verify and definitely won't mirror Clang's warning wording.
To be clear, I think the idea of adding the attributes is sound. However, testing it is hard, for reasons outside of our control.
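For context, a minimal sketch of the kind of clang -verify test being debated here; the flags, diagnostic wording, and regex are illustrative assumptions, not the actual libc++ test:

```cpp
// Sketch of a .verify.cpp-style test: the float overload is libc++'s own
// [[nodiscard]] extension, while the double overload funnels into the
// compiler's builtin, so a regex matcher has to accept either diagnostic
// wording. (ADDITIONAL_COMPILE_FLAGS and the expected text are assumptions.)
// ADDITIONAL_COMPILE_FLAGS: -Wall

#include <cmath>

void test(float f, double d) {
  std::trunc(f); // expected-warning-re {{ignoring return value of function declared with {{.*}} attribute}}
  std::trunc(d); // expected-warning-re {{ignoring return value of function declared with {{.*}} attribute}}
}
```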
Try to fix CI
Address comments
Address comments
Try to fix CI
I've uploaded D143071 to fix the problem. I'll talk to Louis later today, so a fix should be committed within the next 6 hours or so.
In D133661#4096544, @rupprecht wrote:
In D133661#4096429, @alexfh wrote:
In D133661#4096263, @alexfh wrote:
@philnik could you commit one of the proposed abi_tag fixes?
Actually, I went ahead and committed it as 561105fb9d3a16f7fb8c718cc5da71b11f17a144 to unblock us. Hopefully, that's small and obvious enough to not violate the code review policies.
libc++ has a pretty good precommit test infra that gets triggered when a review is created, so in the future it would be better to create a review and wait for CI before landing w/o review.
Although gnu::abi_tag was suggested, I took that to mean __attribute__((__abi_tag__(...))) when I tested the suggestion locally, as elsewhere libc++ uses ABI tags like so:
#  define _LIBCPP_HIDE_FROM_ABI                                              \
     _LIBCPP_HIDDEN _LIBCPP_EXCLUDE_FROM_EXPLICIT_INSTANTIATION              \
     __attribute__((__abi_tag__(_LIBCPP_TOSTRING(_LIBCPP_VERSIONED_IDENTIFIER))))
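For reference, the two spellings name the same GNU attribute, so the interpretation above matches the original suggestion; a trivial illustration:

```cpp
// [[gnu::abi_tag(...)]] and __attribute__((__abi_tag__(...))) are two
// spellings of the same attribute; both append the tag to the mangled name.
struct [[gnu::abi_tag("v1")]] Tagged1 {};
struct __attribute__((__abi_tag__("v1"))) Tagged2 {};
```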
In D143033#4095618, @var-const wrote:
In D143033#4095259, @philnik wrote:
It's not exactly documentation, but you can look at clang/include/clang/Basic/Builtins.def to see what attributes are applied to builtins and what builtins there are.
Thanks. If I'm reading this correctly, though, this applies to __builtin_foo functions only, right?
No, this also applies to the non-__builtin_ versions. The math functions are declared at about line 1200 (currently).
Taking a step back, I also don't think we should be testing compiler behavior or relying on what is essentially an implementation detail of Clang and presumably GCC.
The [[nodiscard]] declarations are specifically there to match the diagnostics produced by the compiler, so I think we should check that the compiler actually produces the same diagnostic for all overloads. (https://godbolt.org/z/87hMPjeq7)
Yesterday
In D143033#4095228, @var-const wrote:
In D143033#4095223, @philnik wrote:
These are marked [[gnu::const]] through clang though. That's why these are regex matchers, not simple text versions.
Can you elaborate on how exactly this works? (or perhaps it's documented somewhere)
These are marked [[gnu::const]] through clang though. That's why these are regex matchers, not simple text versions.
In D133661#4095014, @dblaikie wrote:
In D133661#4094932, @philnik wrote:
In D133661#4094917, @dblaikie wrote:
the definition of the __exception_guard class being different is somehow more problematic.
That's correct - Clang/LLVM debug info under LTO does require structures with linkage to have consistent size at least.
The sort of errors are:

  fragment is larger than or outside of variable
    call void @llvm.dbg.declare(metadata ptr undef, metadata !4274, metadata !DIExpression(DW_OP_LLVM_fragment, 200, 56)), !dbg !4288
    !4274 = !DILocalVariable(name: "__guard", scope: !4275, file: !2826, line: 550, type: !2906)

Because LLVM LTO is validating that the debug info describing a variable makes sense given the type - but the type is deduplicated based on the linkage name of the type - so some debug info was emitted in some +exceptions code that is larger than the type definition taken from some -exceptions code.
I think some possible fixes would include
- make the structure the same size regardless of +/-exceptions
- use a macro to choose between two differently-linkage-named types (yeah, this is still an ODR violation for the function definitions that use these different entities - but within the realms of things we do for +/-exception compatibility, and nothing validates that ODR function definitions are identical)
I'll see if I can repro this with godbolt, but I think this ^ should be enough to justify a timely revert and/or fix. This patch is broken for mixed exceptions LTO builds.
Please don't revert. Other patches depend on this and it should be a trivial fix. We can just add an ABI tag to the noexcept version of the class IIUC. Could you confirm that? If that is the case I can make a patch.
Not quite sure what you mean - got a rough example of what an ABI tag would look like in this case? (looking through the libc++ sources nothing immediately stands out) - an extra template parameter (I see "Abi" template parameters in some places, for instance) or some other technique?
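A hypothetical sketch of what such an ABI tag could look like (placeholder names, tag, and members; not the fix that actually landed):

```cpp
// Tagging only the no-exceptions definition changes the mangled (linkage)
// name of the class, so LTO no longer merges it with the differently-sized
// definition coming from +exceptions translation units when deduplicating
// types and debug info.
#ifdef _LIBCPP_HAS_NO_EXCEPTIONS
template <class _Rollback>
struct __attribute__((__abi_tag__("noexceptions"))) __exception_guard {
  // -fno-exceptions variant: placeholder members
};
#else
template <class _Rollback>
struct __exception_guard {
  // +exceptions variant: placeholder members (different size)
};
#endif
```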
In D133661#4094917, @dblaikie wrote:
the definition of the __exception_guard class being different is somehow more problematic.
That's correct - Clang/LLVM debug info under LTO does require structures with linkage to have consistent size at least.
The sort of errors are:

  fragment is larger than or outside of variable
    call void @llvm.dbg.declare(metadata ptr undef, metadata !4274, metadata !DIExpression(DW_OP_LLVM_fragment, 200, 56)), !dbg !4288
    !4274 = !DILocalVariable(name: "__guard", scope: !4275, file: !2826, line: 550, type: !2906)

Because LLVM LTO is validating that the debug info describing a variable makes sense given the type - but the type is deduplicated based on the linkage name of the type - so some debug info was emitted in some +exceptions code that is larger than the type definition taken from some -exceptions code.
I think some possible fixes would include
- make the structure the same size regardless of +/-exceptions
- use a macro to choose between two differently-linkage-named types (yeah, this is still an ODR violation for the function definitions that use these different entities - but within the realms of things we do for +/-exception compatibility, and nothing validates that ODR function definitions are identical)
I'll see if I can repro this with godbolt, but I think this ^ should be enough to justify a timely revert and/or fix. This patch is broken for mixed exceptions LTO builds.
In D133661#4093298, @bgraur wrote:
We're seeing a lot of fallout from this patch, and it all looks related to the ODR violations that seem to be intentionally added here: both the patch description and the comment for the __exception_guard mention explicitly that combinations of code compiled with exceptions and without exceptions are common.
Am I missing something here?
Please remove the format-only changes. This makes it incredibly hard to see what you actually changed.
Mon, Jan 30
In D142902#4091857, @rarutyun wrote:
In D142902#4090931, @philnik wrote:
Our implementation isn't ABI compatible with libstdc++ or the MSVC STL. Putting them in the same namespace just results in subtle bugs.
In which sense?
- If we are talking about the full C++ standard library implementation - of course it's not, but some parts were intentionally made compatible. For example: exception_ptr, <get|set>_terminate and other basic stuff.
- If we are talking about the full <memory_resource> implementation it also would not be fully ABI compatible. But probably it should not be.
- If we are talking just about std::pmr::memory_resource (and <get|set>_terminate), I believe it should be ABI compatible. The virtual methods are listed in the same order in the libc++ file as in GNU and LLVM. So, to the best of my knowledge (and also based on experiments), the compiler correctly calculates the offset of the virtual table itself and the offset of the called method. That allows passing a memory resource from one binary (or translation unit) and interpreting it correctly in another translation unit.
I think get|set_default_resource are also possible to make ABI compatible. Memory allocation is a pretty basic operation and (as far as I know) there were attempts (probably successful, but as far as I remember something was broken with the Windows toolchain) to make new ABI compatible with GNU and MSVC STL as well (set_new_handler, etc.). The behavior we want here is having the opportunity to set one default resource and then use it application-wide (for example, if libc++ is built in compat mode with GNU). A similar scenario works with <get|set>_terminate.
Other classes from pmr can potentially bring problems. If that's the concern, the solution might be to put some APIs into the __1 namespace. And I am not sure if those are bugs or not, because nobody guarantees that two different C++ standard library implementations will work in one application (or I misunderstood the context). It might result in bugs being filed in the bug tracker, though :) But still, libc++ makes some effort to be ABI compatible with GNU and Microsoft for basic APIs. To me this use case is as important as the other examples I've provided above.
Our implementation isn't ABI compatible with libstdc++ or the MSVC STL. Putting them in the same namespace just results in subtle bugs.
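For reference on the vtable-ordering point above: the virtual interface of std::pmr::memory_resource is just the destructor plus three private virtuals, and under the Itanium C++ ABI the vtable slots follow declaration order, which is why declaring them in the same order across implementations keeps the call offsets compatible. A sketch of the standard interface (per [mem.res.class], not any particular implementation's header):

```cpp
#include <cstddef>

namespace std::pmr {
class memory_resource {
public:
  virtual ~memory_resource();
  void* allocate(size_t bytes, size_t alignment = alignof(max_align_t));
  void deallocate(void* p, size_t bytes, size_t alignment = alignof(max_align_t));
  bool is_equal(const memory_resource& other) const noexcept;

private:
  // Vtable slot order (Itanium ABI) follows this declaration order:
  virtual void* do_allocate(size_t bytes, size_t alignment) = 0;
  virtual void do_deallocate(void* p, size_t bytes, size_t alignment) = 0;
  virtual bool do_is_equal(const memory_resource& other) const noexcept = 0;
};
} // namespace std::pmr
```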
Sun, Jan 29
Someone else is already working on this: D135248
Sat, Jan 28
Do we want to cherry-pick this? It's not super important, but also really small.
Thu, Jan 26
Try to fix CI
Wed, Jan 25
Tue, Jan 24
In D142521#4079004, @ldionne wrote:
Do you know whether those are the last ones?
In D136765#4076621, @AdvenamTacet wrote:
I see that the revision is accepted, but based on the inline comment, I updated the static_assert.
Now TEST_ALIGNOF is used instead of two cases with #if. I do not have committer permissions.
Mon, Jan 23
LGTM % nit.
Try to fix CI
Sun, Jan 22
In D142285#4071778, @Mordante wrote:
In D142285#4071762, @philnik wrote:
In D142285#4071753, @Mordante wrote:
Can you please explain why this is done? What are the benefits for users/libc++ itself?
The main benefit is avoiding the portability pitfall.
That's a pitfall we have with our current design, but instead we create a language-version upgrade pitfall in libc++. This means users who don't care about portability may get caught in this trap. I'd much rather have the portability trap than the upgrade trap. (Preferably I would have no traps, but that ship sailed years ago.)
There is no reason to use FTMs if you use only a single standard library though. They are only useful if you want your code to be portable.
I'm not convinced this is an improvement:
- most (all) toplevel headers have a generated part, which to me looks like a potential source of bugs.
What exactly would be the problem? It's not significantly different from having #include <__config> or #pragma GCC system_header everywhere.
The difference is that I need to type these lines. Now the header needs a magic comment to include an auto-generated part. Some headers don't have the new magic, like <ratio>. So when there is an update to <ratio> (for example https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p2734r0.pdf) the update script won't work out-of-the-box. So we need additional instructions on how to use the update script's CMake target.
The script tells you when they are missing.
In D142285#4071753, @Mordante wrote:
Can you please explain why this is done? What are the benefits for users/libc++ itself?
The main benefit is avoiding the portability pitfall.
I'm not convinced this is an improvement:
- most (all) toplevel headers have a generated part, which to me looks like a potential source of bugs.
What exactly would be the problem? It's not significantly different from having #include <__config> or #pragma GCC system_header everywhere.
- this may give different behaviour for users who use FTMs but do not include the proper header. (Obviously that is a bug in their code, but currently it works; after this change it may break or change the behaviour.)
This is also the case when switching implementations, which is a portability pitfall right now.
- since we don't want to break older language versions we now duplicate the macros.
I'm not sure what you mean. The macros would be defined in multiple places regardless of keeping includes.
This change should definitely be mentioned in the Release Notes.
The source for https://libcxx.llvm.org/DesignDocs/FeatureTestMacros.html should be updated too.
@EricWF can you tell why the original script took the approach to have one version header?
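To make the discussion concrete, roughly the shape of a per-header generated block (the macro name and values are hypothetical; the real ones come from the generator script, not from this sketch):

```cpp
// Each top-level header (<ranges>, <format>, ...) would carry a generated
// section like this instead of relying on <version> being pulled in
// transitively. (Hypothetical macro; illustrative values only.)
#if _LIBCPP_STD_VER >= 23
#  define __cpp_lib_example_feature 202302L
#elif _LIBCPP_STD_VER >= 20
#  define __cpp_lib_example_feature 202002L
#endif
```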
We have iterators of all categories (and sized/unsized sentinels) in test_iterators.h. Why don't you use them?
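For illustration, using the existing archetypes might look roughly like this (names such as forward_iterator and sentinel_wrapper are assumed from test/support/test_iterators.h; pick whichever categories the test actually needs):

```cpp
#include "test_iterators.h"

void test() {
  int a[] = {1, 2, 3};
  // Wrap raw pointers in the archetypes so the test exercises exactly the
  // iterator/sentinel categories it claims to cover.
  forward_iterator<int*> first(a);
  sentinel_wrapper<forward_iterator<int*>> last{forward_iterator<int*>(a + 3)};
  // ... run the algorithm or construct the range under test with (first, last)
}
```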
Sat, Jan 21
Try to fix CI
Try to fix CI
Try to fix CI
Fri, Jan 20
Don't remove includes in older versions