This is an archive of the discontinued LLVM Phabricator instance.

[OpenMP][Clang] Support for target math functions
ClosedPublic

Authored by gtbercea on May 1 2019, 2:01 PM.

Diff Detail

Repository
rC Clang

Event Timeline

gtbercea created this revision.May 1 2019, 2:01 PM
Herald added a project: Restricted Project. · View Herald Transcript · May 1 2019, 2:01 PM
gtbercea updated this revision to Diff 197638.May 1 2019, 2:08 PM
gtbercea edited the summary of this revision. (Show Details)
  • Minor fixes.

For the record, this is an implementation of the scheme proposed in https://reviews.llvm.org/D60907#1484756.
There are drawbacks (see the TODO), but it will give most people a short-term solution until we get OpenMP 5.0 variants.

Finally, there is a remote chance this will cause trouble for people that use math.h/cmath functions, e.g., with the old SSE hack, that are no longer available.
I don't expect that to happen, but if it does we can, again as a short-term solution, selectively extract declarations from the host cmath into the device cmath.

jdoerfert added inline comments.May 1 2019, 8:38 PM
lib/Headers/openmp_wrappers/__clang_openmp_math.h
30

I think this is a leftover we forgot to remove.

gtbercea updated this revision to Diff 197798.May 2 2019, 9:04 AM
  • Clean-up. Add header.
gtbercea marked an inline comment as done.May 2 2019, 9:05 AM
gtbercea edited the summary of this revision. (Show Details)
hfinkel accepted this revision.May 2 2019, 10:26 AM
hfinkel added inline comments.
lib/Driver/ToolChains/Clang.cpp
1164

Please add a driver test covering this.

lib/Headers/openmp_wrappers/__clang_openmp_math.h
21

And this is why I wanted to just make __device__ work in OpenMP mode. But, as a temporary solution until we get declare variant working, I'm okay with this.

Modulo the naming nit and the missing driver test, LGTM.

47

I'd prefer that, as this is clang specific, we make this clear by naming this __CLANG_NO_HOST_MATH__.

This revision is now accepted and ready to land.May 2 2019, 10:26 AM
tra accepted this revision.May 2 2019, 10:45 AM

I don't like this implementation. It seems to me that it breaks one of the OpenMP standard requirements: that the program can be compiled without OpenMP support. I assume that with these includes the program can no longer be compiled without OpenMP support, because it may use some device-specific math functions explicitly.
Instead, I would like to see an additional, device-specific math header file that must be included explicitly to get the device-specific math functions. And we need to provide default implementations of those extra math functions for all the platforms we intend to support, including default host implementations.

Moreover, I think this will cause trouble even in simple cases. Assume we have a target if(cond) construct. In that case we will need to compile the target region for both the device and the host. If the target region uses some device-specific math functions, it will break the compilation for the host.

ABataev requested changes to this revision.May 2 2019, 11:03 AM
This revision now requires changes to proceed.May 2 2019, 11:03 AM

Can you provide an example of a conforming program that can't be compiled without OpenMP support? Regardless of the use of any device-specific functions (which isn't covered by the standard, of course, but might be needed in practice), the code still needs to be compilable by the host in order to generate the host-fallback version. This doesn't change that. Thus, any program that uses anything from this math.h, etc. needs to compile for the host, and thus, likely compiles without OpenMP support. Maybe I'm missing your point, however.

Assume we have something like this:

#pragma omp target if(cond)
a = __nv_xxxx(....);

Instead of __nv_xxx you can try to use any CUDA-specific function, which is not part of the standard math.h/cmath files. Will it be compilable even with OpenMP?

I don't think that this changes that one way or the other. Your example won't work, AFAIK, unless you do something like:

#pragma omp target if(cond)
#ifdef __NVPTX__
a = __nv_xxxx(....);
#else
a = something_on_the_host;
#endif

and anything from these headers that doesn't also have a host version will suffer the same fate: if it won't also compile for the host (one way or another), then it won't work.

The problem with this header file is that it allows those CUDA-specific functions to be used unconditionally in some cases:

#pragma omp target
a = __nv_xxxx(....);

It won't require any target-specific guards to compile this code (if we compile it only for CUDA-specific devices) and we're losing consistency here: in some cases target regions will require special device guards; in others, with the same function calls, they will not. And the worst thing is that we implicitly allow this kind of inconsistency to be introduced into user code. That's why I would prefer to see a special kind of include file, NVPTX-specific, that must be included explicitly, so that the user explicitly asks for the target-specific math functions if he really wants them. Plus, maybe, in these files we need to force a check of the platform and warn users that the functions from this header file must be used behind device-specific checks. Or provide some kind of default implementation for all the platforms that do not support those math functions natively.

I believe that I understand your point, but two things:

  1. I think that you're mistaken on the underlying premise. That code will not meaningfully compile without ifdefs, even if CUDA-specific devices are the only ones selected. We *always* compile the code for the host as well -- not for offloading proper, but for the fallback (for execution when the offloading fails). If I emulate this situation by writing this:
#ifdef __NVPTX__
int __nv_floor();
#endif

int main() {
#pragma omp target
__nv_floor();
}

and try to compile using Clang with -fopenmp -fopenmp-targets=nvptx64, the compilation fails:

int1.cpp:8:1: error: use of undeclared identifier '__nv_floor'

and this is because, when we invoke the compilation for the host, there is no declaration for that function. This is true even though nvptx64 is the only target for which the code is being compiled (because we always also compile the host fallback).

  2. I believe that the future state -- what we get by following this patch, and then, when declare variant is available, using that -- gives us all of what we want. When we have declare variant, all of the definitions in these headers will be declared as variants only available on the nvptx device, and so, for a user to use such a function, they would need to explicitly add a variant that is only available on the host. This would be explicit.

I think that you're getting at what I mentioned earlier: if you have code that uses some function called, for example, rnorm3d -- this is a non-standard function, so a user is free to define it -- and they compile with OpenMP target offloading and include math.h, then the version of rnorm3d we bring in from the CUDA header will silently override their version (which would otherwise be implicitly declare target). But I don't think that this will happen with this patch either; instead, they'll get a diagnostic about conflicting definitions (error: redefinition of 'rnorm3d'), and if it's an external-linkage function, then something should give them an error (although we should check this). The more interesting case, I think, actually comes when we switch to using declare variant, because then I think this silent override does occur, so we'd want to add a warning for when a variant declared in a system header file would silently override a plain (non-variant) function declared/defined elsewhere. I believe that will give us all of the benefits of this while also addressing the concern you highlight.

gtbercea updated this revision to Diff 197911.May 2 2019, 7:16 PM
  • Address comments.
  • Add math and cmath inclusion tests.
  • Add driver test.
gtbercea marked 2 inline comments as done.May 2 2019, 7:17 PM

Ahh, yes, I forgot that we still generate the host version of the target region. Still, I think we need to provide default implementations of those non-standard functions (they can be very simple; maybe reporting an error is going to be enough), which can be overridden by the user.
Also, if I recall correctly, this solution works only for C++ (because we use our own declarations for the standard math functions, they are automatically marked as non-builtins). For regular C, in many cases, instead of the library function calls, the LLVM intrinsic is generated. Did you check that it works for C?

@ABataev this patch works for both C and C++ and for both math.h and cmath headers.

Did you test it for the builtins, like pow, powf, and powl? How are the builtins resolved with this patch?

I have. There are tests for this.

I appreciate your motivation, and I agree with you to some extent. I don't object to having generic versions of useful math functions, but I don't think they should be required. It's not reasonable to make someone add generic versions of every function which happens to appear in a system/target-specific math.h header. NVPTX won't be the only target that has target-optimized functions that get pulled in, even from our own headers, but system headers also have differences anyway depending on what preprocessor macros are defined. In the end, people can write portable code if they stick to what's in the standard, and we should make it reasonably easy for them to step outside of the standard to do what they need to do when the standard subset of available functionality doesn't meet their needs for whatever reason. This is what we do for C/C++, where we provide intrinsics and other system functions for those who can't write their code only using the facilities that C/C++ provide.

In any case, I think that we can figure out how to add generic versions of non-standard math functions in a separate thread. I think that we should move forward with this and then make progress on generic versions separately. It's also possible that we want to fold this discussion into the discussion on an LLVM math library (we've talked about this for some time in the context of vector math libraries, and I'd not thought about accelerators in this context, but maybe this is all related).

It is up to you. I don't have strong objections if you think this will work as required. Just the tests must be fixed, especially codegen tests.

test/CodeGen/nvptx_device_cmath_functions.c
1 ↗(On Diff #197911)
  1. Provide tests for C++ too.
  2. Do not use driver to generate the code, use frontend.
  3. Do not include real system header files. Use some stubs instead.
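A sketch of what a frontend-based codegen test following these three points could look like. Everything here is illustrative (the stub-header path, triple, and CHECK pattern are assumptions, not the patch's actual tests): the test drives `%clang_cc1` directly instead of the driver, and `-internal-isystem` points at checked-in stub headers rather than real system headers.

```c
// Illustrative lit test sketch, not the committed test:
// RUN: %clang_cc1 -internal-isystem %S/Inputs/include -fopenmp \
// RUN:   -triple nvptx64-nvidia-cuda -fopenmp-is-device \
// RUN:   -emit-llvm %s -o - | FileCheck %s
#include <math.h>
// CHECK: call double @__nv_pow(double
double test(double a, double b) { return pow(a, b); }
```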

Thanks, Alexey. I think this will work as required, and then we'll be able to update it when we get declare variant. Agreed on the tests (on all points).

gtbercea updated this revision to Diff 198105.May 3 2019, 4:00 PM
  • Add driver test.
gtbercea updated this revision to Diff 198108.May 3 2019, 4:14 PM
  • Add new tests. Add stub headers.
  • Remove old tests.

Alexey, is this good to go now?

ABataev added inline comments.May 6 2019, 8:00 AM
lib/Headers/__clang_cuda_cmath.h
54

Why do we have this guard here? Does it not work for OpenMP? Why?

lib/Headers/__clang_cuda_device_functions.h
49–53

Can we do anything about this in C mode? I mean, to allow it in C.

gtbercea marked 2 inline comments as done.May 6 2019, 8:45 AM
gtbercea added inline comments.
lib/Headers/__clang_cuda_cmath.h
54

Because all the FP_XXX macros are defined in cmath, and for OpenMP we can't include it yet because we don't support declare variant yet.

lib/Headers/__clang_cuda_device_functions.h
49–53

We can if we rename the function.

ABataev added inline comments.May 6 2019, 8:52 AM
lib/Headers/__clang_cuda_cmath.h
54

Better to add a TODO or FIXME to fix this once the variant construct is supported.

lib/Headers/__clang_cuda_device_functions.h
49–53

Take a look here; it will probably solve the problem:
https://clang.llvm.org/docs/AttributeReference.html#overloadable

gtbercea updated this revision to Diff 198301.May 6 2019, 10:28 AM
  • Address comments.
gtbercea marked 4 inline comments as done.May 6 2019, 10:28 AM
ABataev added inline comments.May 6 2019, 10:33 AM
lib/Headers/__clang_cuda_cmath.h
444

I see that the same guard is used in lib/Headers/__clang_cuda_device_functions.h, but for a different set of functions. Is this OK?

gtbercea marked an inline comment as done.May 6 2019, 10:45 AM
gtbercea added inline comments.
lib/Headers/__clang_cuda_cmath.h
444

Yep, it's intentional. Again the variant issue.

ABataev accepted this revision.May 6 2019, 10:46 AM

LG with a nit.

lib/Headers/__clang_cuda_cmath.h
444

Then, again, add a TODO or FIXME.

This revision is now accepted and ready to land.May 6 2019, 10:46 AM
gtbercea updated this revision to Diff 198311.May 6 2019, 11:03 AM
  • Address comments.
gtbercea marked 2 inline comments as done.May 6 2019, 11:08 AM
This revision was automatically updated to reflect the committed changes.

I've reverted this in rL360192 because it breaks stage 2 builds on GreenDragon. Please see the commit message and the inline comment for more details.

lib/Headers/CMakeLists.txt
36

This doesn't do what you think it would do. The files are copied into the root of the resource directory, which causes stage 2 build failures on GreenDragon.
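The intended fix can be sketched in CMake terms. This is a hypothetical fragment (variable names, version path, and file list are illustrative, not Clang's actual build code): the wrappers must land in an `openmp_wrappers` subdirectory of the resource directory rather than its root, so they do not shadow the real system headers during a stage-2 build.

```cmake
# Illustrative sketch: copy wrapper headers into a subdirectory of the
# resource directory instead of its root.
set(out_dir ${CMAKE_BINARY_DIR}/lib/clang/9.0.0/include/openmp_wrappers)
foreach(f cmath math.h __clang_openmp_math.h)
  add_custom_command(OUTPUT ${out_dir}/${f}
    COMMAND ${CMAKE_COMMAND} -E make_directory ${out_dir}
    COMMAND ${CMAKE_COMMAND} -E copy
            ${CMAKE_CURRENT_SOURCE_DIR}/openmp_wrappers/${f}
            ${out_dir}/${f}
    DEPENDS ${CMAKE_CURRENT_SOURCE_DIR}/openmp_wrappers/${f})
endforeach()
```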

hfinkel added inline comments.May 7 2019, 3:11 PM
lib/Headers/CMakeLists.txt
36

Can you provide a link to the failure log? Is the problem that the files are not copied into their subdirectory?

JDevlieghere added inline comments.May 7 2019, 6:30 PM
lib/Headers/CMakeLists.txt
36

Correct. There's not much to see, but here's a build that fails because of this: http://green.lab.llvm.org/green/view/LLDB/job/lldb-cmake/25557/console

CMake Error at cmake/modules/HandleLLVMOptions.cmake:497 (message):
  LLVM_ENABLE_MODULES is not supported by this compiler

The CMake log output, which I grabbed from the bot, is more descriptive:

Performing C++ SOURCE FILE Test CXX_SUPPORTS_MODULES failed with the following output:
Change Dir: /Users/buildslave/jenkins/workspace/lldb-cmake/lldb-build/CMakeFiles/CMakeTmp

Run Build Command:"/usr/local/bin/ninja" "cmTC_0dd11"
[1/2] Building CXX object CMakeFiles/cmTC_0dd11.dir/src.cxx.o
FAILED: CMakeFiles/cmTC_0dd11.dir/src.cxx.o
/Users/buildslave/jenkins/workspace/lldb-cmake/host-compiler/bin/clang++    -fPIC -fvisibility-inlines-hidden -Werror=date-time -Werror=unguarded-availability-new -std=c++11 -DCXX_SUPPORTS_MODULES  -Werror=unguarded-availability-new -fmodules -fmodules-cache-path=/Users/buildslave/jenkins/workspace/lldb-cmake/lldb-build/module.cache -fcxx-modules -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.14.sdk -mmacosx-version-min=10.9 -o CMakeFiles/cmTC_0dd11.dir/src.cxx.o -c src.cxx
While building module 'Darwin' imported from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.14.sdk/usr/include/assert.h:42:
While building module 'std' imported from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.14.sdk/usr/include/tgmath.h:27:
In file included from <module-includes>:2:
/Users/buildslave/jenkins/workspace/lldb-cmake/host-compiler/bin/../include/c++/v1/ctype.h:38:15: fatal error: cyclic dependency in module 'Darwin': Darwin -> std -> Darwin
#include_next <ctype.h>
              ^
While building module 'Darwin' imported from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.14.sdk/usr/include/assert.h:42:
In file included from <module-includes>:89:
In file included from /Users/buildslave/jenkins/workspace/lldb-cmake/host-compiler/lib/clang/9.0.0/include/tgmath.h:21:
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.14.sdk/usr/include/tgmath.h:27:10: fatal error: could not build module 'std'
#include <math.h>
 ~~~~~~~~^
In file included from src.cxx:2:
In file included from /Users/buildslave/jenkins/workspace/lldb-cmake/host-compiler/bin/../include/c++/v1/cassert:20:
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.14.sdk/usr/include/assert.h:42:10: fatal error: could not build module 'Darwin'
#include <sys/cdefs.h>
 ~~~~~~~~^
3 errors generated.
ninja: build stopped: subcommand failed.

Source file was:
#undef NDEBUG
#include <cassert>
#define NDEBUG
#include <cassert>
int main() { assert(this code is not compiled); }
gtbercea reopened this revision.May 7 2019, 7:53 PM
This revision is now accepted and ready to land.May 7 2019, 7:53 PM
gtbercea updated this revision to Diff 198578.May 7 2019, 7:55 PM
  • Fix move to openmp_wrapper folder. Fix header ordering problem.
gtbercea updated this revision to Diff 198664.May 8 2019, 8:21 AM
  • Eliminate declarations of functions not needed for math function resolution.
This revision was automatically updated to reflect the committed changes.
jlebar added a subscriber: tra.May 8 2019, 8:54 AM
jlebar added a subscriber: jlebar.