This is an archive of the discontinued LLVM Phabricator instance.

[CUDA] Bump CUDA version to 11.5
Closed, Public

Authored by carlosgalvezp on Nov 5 2021, 2:14 AM.

Event Timeline

carlosgalvezp created this revision. Nov 5 2021, 2:14 AM
carlosgalvezp requested review of this revision. Nov 5 2021, 2:14 AM
Herald added projects: Restricted Project, Restricted Project. · View Herald Transcript · Nov 5 2021, 2:14 AM
carlosgalvezp retitled this revision from "Bump CUDA version to 11.5" to "[CUDA] Bump CUDA version to 11.5". Nov 5 2021, 2:17 AM

I'm not sure if it's actually correct to advertise full support for CUDA 11.5, but I didn't look into the exact changes since 11.4.

> I'm not sure if it's actually correct to advertise full support for CUDA 11.5, but I didn't look into the exact changes since 11.4.

Good point. I was confused by the fact that 11.4 is listed as both FULLY_SUPPORTED and PARTIALLY_SUPPORTED, so I just followed the existing pattern. I didn't find any extra tests added for the 11.2 -> 11.4 bump. Do we have infrastructure in place to test this, or how does it work?

tra requested changes to this revision. Nov 5 2021, 10:33 AM

I think we're missing a few more changes here:

  • The driver needs to enable ptx75 when it constructs the cc1 command line in clang/lib/Driver/ToolChains/Cuda.cpp
  • We also need to handle PTX75 in clang/include/clang/Basic/BuiltinsNVPTX.def (both sketched below)
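
For reference, here is roughly the shape of both changes. This is a from-memory sketch of the pattern used in those two files, not the verbatim upstream diff; the helper name ptxFeatureFor and the builtin name __nvvm_new_op are made up for illustration:

```cpp
// Sketch of clang/lib/Driver/ToolChains/Cuda.cpp: the driver maps the
// detected CUDA installation version to the "+ptxNN" target feature it
// passes down to cc1 / the NVPTX backend.
static const char *ptxFeatureFor(CudaVersion Version) {
  switch (Version) {
#define CASE_CUDA_VERSION(CUDA_VER, PTX_VER)                                 \
  case CudaVersion::CUDA_##CUDA_VER:                                         \
    return "+ptx" #PTX_VER;
    CASE_CUDA_VERSION(115, 75) // new entry: CUDA 11.5 -> PTX 7.5
    CASE_CUDA_VERSION(114, 74)
    CASE_CUDA_VERSION(113, 73)
#undef CASE_CUDA_VERSION
  default:
    return "+ptx42"; // conservative fallback for unrecognized versions
  }
}

// Sketch of clang/include/clang/Basic/BuiltinsNVPTX.def: each PTX macro
// names its own feature plus every newer one, so a builtin gated on an
// older PTX version stays available when targeting a newer one. The bump
// adds PTX75 at the head of the chain:
#define PTX75 "ptx75"
#define PTX74 "ptx74|" PTX75
#define PTX73 "ptx73|" PTX74
// A builtin introduced in PTX 7.5 would then be declared along the lines of:
//   TARGET_BUILTIN(__nvvm_new_op, "...", "", AND(SM_86, PTX75))
```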

> I'm not sure if it's actually correct to advertise full support for CUDA 11.5, but I didn't look into the exact changes since 11.4.

Technically we never support all the features supported by NVCC, so for clang "fully supported" essentially means "works well enough": no known regressions vs. previous clang and CUDA versions. Usually it boils down to being able to compile the CUDA headers.

"Partially supported" happens when we can compile code that compiles with the older CUDA versions, but we are missing something critical introduced by the new CUDA version. E.g. a new GPU variant. Or new compiler builtins/functions that a user may expect from the new CUDA version. Or some CUDA headers may use new instructions in inline asm that would not compile with ptxas unless we generate PTX output using the new PTX version.

AFAICT from the CUDA-11.5 release notes, it didn't introduce anything particularly interesting. We've been using clang with CUDA-11.5 for a few weeks w/o any issues, so I think it's fine to stamp it as supported, once the missing bits are in place.

This revision now requires changes to proceed. Nov 5 2021, 10:33 AM

Experimental support for __int128 is new in CUDA 11.5; I'm not sure if Clang enables this for CUDA. The release notes also specify:

> __builtin_assume can now be used to specify address space to allow for efficient loads and stores.

The docs are very scarce on this; I could only find void __builtin_assume(bool exp), which I think is not what they are talking about...

tra added a comment. Nov 5 2021, 11:14 AM

> Experimental support for __int128 is new in CUDA 11.5; I'm not sure if Clang enables this for CUDA.

I think we added support for i128 a while back: https://godbolt.org/z/18bEbhMYb
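
For completeness, this is the kind of thing that link demonstrates, as a minimal example of my own (built with something like clang++ -x cuda --cuda-gpu-arch=sm_70 -c):

```cuda
// __int128 arithmetic in device code. clang legalizes the 128-bit add,
// roughly into a pair of 64-bit adds with carry (add.cc.u64 / addc.u64)
// in the emitted PTX.
__global__ void add128(__int128 *out, const __int128 *a, const __int128 *b) {
  *out = *a + *b;
}
```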

> The release notes also specify:

>> __builtin_assume can now be used to specify address space to allow for efficient loads and stores.

> The docs are very scarce on this; I could only find void __builtin_assume(bool exp), which I think is not what they are talking about...

AMD folks have D112041 under review, which will have __builtin_assume help address-space (AS) inference. In any case, we've already been doing it reasonably well automatically.
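
For the record, here is the kind of annotation that release note presumably means, sketched with CUDA's __isShared() address-space predicate. This is hypothetical usage on my part; whether our NVPTX path actually folds the hint into AS inference is exactly what D112041 is sorting out on the AMDGPU side.

```cuda
// Hypothetical usage: promise the optimizer that a generic pointer in
// fact points into shared memory, so the load could be emitted as
// ld.shared.f32 instead of a generic ld.f32.
__device__ float load_hinted(float *p) {
  __builtin_assume(__isShared(p)); // __isShared() is a CUDA device API
  return *p;
}
```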

Updated BuiltinsNVPTX.def

> • The driver needs to enable ptx75 when it constructs the cc1 command line in clang/lib/Driver/ToolChains/Cuda.cpp

@tra Haven't I already done it on line 712? Or where should I enable it?

Update release notes

tra accepted this revision. Nov 8 2021, 10:55 AM

> • The driver needs to enable ptx75 when it constructs the cc1 command line in clang/lib/Driver/ToolChains/Cuda.cpp

> @tra Haven't I already done it on line 712? Or where should I enable it?

You're right. I've missed that.

This revision is now accepted and ready to land. Nov 8 2021, 10:55 AM

@Hahnfeld Are you satisfied with the replies to your questions? If so, I can go ahead and merge.

> @Hahnfeld Are you satisfied with the replies to your questions? If so, I can go ahead and merge.

Yes yes, I'm happy if @tra is happy :)

Awesome, thanks a lot for the reviews!

This revision was automatically updated to reflect the committed changes.