
Jun 24 2020

tra added inline comments to D82506: [HIP] Add missing options for lto.
Jun 24 2020, 4:20 PM · Restricted Project
tra added a comment to D81118: [buildbot] Added builders and slaves for the new CUDA build/test bots..

Here's an example of the failure to find external.py: http://lab.llvm.org:8014/builders/clang-cuda-gce-build/builds/303/steps/annotate/logs/stdio

Jun 24 2020, 4:20 PM
tra added a comment to D81118: [buildbot] Added builders and slaves for the new CUDA build/test bots..

You could put that script somewhere local and give me the fully qualified path to that script. I'll set the staging accordingly.

Jun 24 2020, 4:20 PM
tra added a comment to D81118: [buildbot] Added builders and slaves for the new CUDA build/test bots..

Staging is ready for your experiments.

Jun 24 2020, 2:40 PM
tra committed rGaf5e61bf4fd1: [NVPTX] Fix for NVPTX module asm regression (authored by tatz.j@northeastern.edu <tatz.j@husky.neu.edu>).
[NVPTX] Fix for NVPTX module asm regression
Jun 24 2020, 11:23 AM
tra closed D82280: Fix for NVPTX module asm regression.
Jun 24 2020, 11:23 AM · Restricted Project
tra updated the diff for D82280: Fix for NVPTX module asm regression.

Updated the test.

Jun 24 2020, 11:21 AM · Restricted Project
tra added a comment to D82280: Fix for NVPTX module asm regression.

I did not add a unit test, as one was added in commit d2bbdf05e0b88524226589d89ffb2bfdc53ef3c8, but without calling ptxas -v on every emitted PTX file we can't verify the correctness.

Jun 24 2020, 10:15 AM · Restricted Project
tra added a comment to D82280: Fix for NVPTX module asm regression.

Will do.

Jun 24 2020, 10:15 AM · Restricted Project
tra accepted D82434: moved deployment to kubernetes files.
Jun 24 2020, 9:47 AM

Jun 23 2020

tra accepted D82280: Fix for NVPTX module asm regression.

Thank you for providing the examples.

Jun 23 2020, 4:11 PM · Restricted Project
tra added a comment to D78655: [CUDA][HIP] Let lambda be host device by default.

Could you give an example to demonstrate current use and how it will break?

Here is place where it would break:

https://github.com/ROCmSoftwarePlatform/AMDMIGraphX/blob/develop/src/targets/gpu/device/include/migraphx/gpu/device/multi_index.hpp#L129

This change was already included in a fork of llvm in the rocm 3.5 and 3.6 releases, which is why this compiles. This also compiles using the hcc-based hip compilers, which is what previous rocm versions used. It would be best if this can be upstreamed, so we don't have to hold on to these extra changes in a fork.

Jun 23 2020, 10:43 AM · Restricted Project

Jun 22 2020

tra added inline comments to D78655: [CUDA][HIP] Let lambda be host device by default.
Jun 22 2020, 3:35 PM · Restricted Project
tra added a comment to D78655: [CUDA][HIP] Let lambda be host device by default.

It seems we can only promote non-capturing lambdas, no matter whether they have an enclosing function or not.

Jun 22 2020, 12:54 PM · Restricted Project
tra added a comment to D78655: [CUDA][HIP] Let lambda be host device by default.
  • lambdas with any lambda-capture (which must therefore have an enclosing function) inherit the enclosing function's HDness.
Jun 22 2020, 10:12 AM · Restricted Project
tra added a comment to D82280: Fix for NVPTX module asm regression.

Could you post a trivial example of where the module-level inline assembly is emitted before/after this patch?

Jun 22 2020, 9:39 AM · Restricted Project

Jun 17 2020

tra committed rGac20150e299a: [CUDA] make the test more hermetic (authored by tra).
[CUDA] make the test more hermetic
Jun 17 2020, 3:41 PM

Jun 16 2020

tra added a comment to D81938: [InferAddressSpaces] Handle the pair of `ptrtoint`/`inttoptr`..

This should be two separate patches - inferaddressspace and SROA.

Yes, I prepared that as 2 commits, but arc combines them together.

Jun 16 2020, 9:21 AM · Restricted Project, Restricted Project

Jun 15 2020

tra accepted D81861: [HIP] Do not use llvm-link/opt/llc for -fgpu-rdc .

LGTM.

Jun 15 2020, 3:29 PM · Restricted Project
tra committed rGd700237f1aa1: [CUDA,HIP] Use VFS for SDK detection. (authored by tra).
[CUDA,HIP] Use VFS for SDK detection.
Jun 15 2020, 1:15 PM
tra closed D81771: [CUDA,HIP] Use VFS for SDK detection..
Jun 15 2020, 1:15 PM · Restricted Project
tra added inline comments to D81737: docker images for mlir-nvidia.
Jun 15 2020, 10:55 AM · Restricted Project

Jun 12 2020

tra added a comment to D81118: [buildbot] Added builders and slaves for the new CUDA build/test bots..

I can temporarily set you up like this in the staging. You could do all the experiments you are after, tinker, and so on. Once you are happy, you could prepare a final patch for the review.
How long do you expect the tinkering to take?

Jun 12 2020, 3:58 PM
tra updated the diff for D81771: [CUDA,HIP] Use VFS for SDK detection..

Replaced another use of D.getVFS.

Jun 12 2020, 3:58 PM · Restricted Project
tra created D81771: [CUDA,HIP] Use VFS for SDK detection..
Jun 12 2020, 3:58 PM · Restricted Project
tra added a comment to D81118: [buildbot] Added builders and slaves for the new CUDA build/test bots..

ping! Will the current version of the patch do?

Jun 12 2020, 12:35 PM
tra added a comment to D81713: [HIP] Fix rocm not found on rocm3.5.

I can tell that it does not. We're still looking under /opt/rocm by default. I've run into it trying to get comgr working with the recent LLVM, which prompted my comment on GitHub.

Not sure how that would be related to comgr? comgr doesn't rely on the dynamic path and embeds the bitcode in the library. The comgr build finds the libraries through cmake and doesn't care where they're located.

Jun 12 2020, 12:04 PM · Restricted Project
tra accepted D81627: [HIP] Do not call opt/llc for -fno-gpu-rdc.

LGTM. Good to go if @arsenm is OK with fixing -fgpu-rdc in a separate patch.

Jun 12 2020, 12:02 PM · Restricted Project
tra added a comment to D60620: [HIP] Support target id by --offload-arch.
In D60620#2067134, @tra wrote:

Do you expect users to specify these IDs? How do you see it being used in practice? I think you do need to implement a user-friendly shortcut and expand it to the detailed offload-id internally. I'm fine with allowing explicit offload id as a hidden argument, but I don't think it's suitable for something that will be used by everyone who can't be expected to be aware of all the gory details of particular GPU features.

The good thing about this target id is that it is backward compatible with GPU arch. For common users who are not concerned with specific GPU configurations, they can just use the old GPU arch and nothing changes. This is because GPU arch without features implies default value for these features, which work on all configurations. For advanced users who do need to build for specific GPU configurations, they should already have the knowledge about the name and meaning of these configurations by reading the AMDGPU user guide (http://llvm.org/docs/AMDGPUUsage.html). Therefore a target id in the form of gfx908:xnack+ is not something cryptic to them. On the other hand, an encoded GPU arch like gfx908a is cryptic since it has no meaning at all.

Jun 12 2020, 12:02 PM · Restricted Project, Restricted Project
tra added a comment to D81713: [HIP] Fix rocm not found on rocm3.5.

Can you add tests for this? Is this also sufficient with the directory layout change?

You mean does this work after we move device lib from /opt/rocm/lib to /opt/rocm/amdgcn/bitcode ?

Yes

Jun 12 2020, 11:26 AM · Restricted Project
tra added inline comments to D81738: initial terraform configuration for Google buildbot workers.
Jun 12 2020, 10:52 AM
tra added a comment to D81737: docker images for mlir-nvidia.

LGTM.

Jun 12 2020, 10:21 AM · Restricted Project

Jun 11 2020

tra added a comment to D81627: [HIP] Do not call opt/llc for -fno-gpu-rdc.

Looks OK in general. I'm happy to see reduced opt/llc use.

Jun 11 2020, 12:07 PM · Restricted Project
tra added a comment to D80450: [CUDA][HIP] Fix implicit HD function resolution.
In D80450#2087938, @tra wrote:

Reproducer for the regression. https://gist.github.com/Artem-B/183e9cfc28c6b04c1c862c853b5d9575
It's not particularly small, but that's as far as I could get it reduced.

With the patch, an attempt to instantiate ag on line 36 (in the reproducer sources I linked to above) results in ambiguity between two templates on lines 33 and 24 that are in different namespaces.
Previously it picked the template on line 28.

Jun 11 2020, 12:07 PM · Restricted Project
tra added a comment to D80450: [CUDA][HIP] Fix implicit HD function resolution.

Reproducer for the regression. https://gist.github.com/Artem-B/183e9cfc28c6b04c1c862c853b5d9575
It's not particularly small, but that's as far as I could get it reduced.

Jun 11 2020, 10:27 AM · Restricted Project

Jun 8 2020

tra added a comment to D81118: [buildbot] Added builders and slaves for the new CUDA build/test bots..

Thanks for updating the patch, Artem.

Could you elaborate why you need the script launcher (zorg/buildbot/builders/annotated/external.py), please? You can use your cuda-related scripts directly with the annotated builder without having an extra layer.

Jun 8 2020, 4:39 PM
tra added a comment to D81427: [hip] Fix device-only relocatable code compilation..

LGTM in general, but I'll let Sam stamp it.

Jun 8 2020, 2:25 PM · Restricted Project
tra added a comment to D81118: [buildbot] Added builders and slaves for the new CUDA build/test bots..

Hello Artem.

I have commented on getCUDAAnnotatedBuildFactory, but it doesn't seem you need a special build factory for your builders.
Just use the existing AnnotatedBuilder.getAnnotatedBuildFactory in builders.py for your build configurations.

Jun 8 2020, 2:24 PM
tra updated the diff for D81118: [buildbot] Added builders and slaves for the new CUDA build/test bots..

Use AnnotatedBuilder with an external build script.

Jun 8 2020, 1:55 PM
tra updated the diff for D81118: [buildbot] Added builders and slaves for the new CUDA build/test bots..

Addressed review comments.

Jun 8 2020, 12:08 PM
tra accepted D63403: Make myself code owner of InferAddressSpaces.
Jun 8 2020, 10:29 AM

Jun 4 2020

tra accepted D81176: [HIP] Add default header and include path.

Thank you for the patch. This will make my life a lot easier.

Jun 4 2020, 11:37 AM · Restricted Project

Jun 3 2020

tra updated the diff for D81118: [buildbot] Added builders and slaves for the new CUDA build/test bots..

Changed bot/slave structure

Jun 3 2020, 2:55 PM
tra updated the diff for D81118: [buildbot] Added builders and slaves for the new CUDA build/test bots..

Fixed notification list.

Jun 3 2020, 2:55 PM
tra created D81118: [buildbot] Added builders and slaves for the new CUDA build/test bots..
Jun 3 2020, 2:20 PM
tra accepted D80450: [CUDA][HIP] Fix implicit HD function resolution.

LGTM. Combined with D79526 it appears to work for tensorflow build.

Jun 3 2020, 1:10 PM · Restricted Project
tra accepted D79237: [CUDA][HIP] Fix constexpr variables for C++17.

Tested with the tensorflow build. The patch does not seem to break anything now.

Jun 3 2020, 10:25 AM · Restricted Project
tra added a comment to D80450: [CUDA][HIP] Fix implicit HD function resolution.
In D80450#2055463, @tra wrote:

Is this patch supposed to be used with D79526 or instead of it?

Jun 3 2020, 9:53 AM · Restricted Project

Jun 1 2020

tra added a comment to D80858: [CUDA][HIP] Support accessing static device variable in host code for -fno-gpu-rdc.

The value is based on llvm::sys::Process::GetRandomNumber(). So unless one provides a build-system-derived uuid for every compilation unit, recompiling identical source will yield an observably different binary.

The distinction between 'unique' and 'random' is significant for anyone depending on repeatable binary output, so this patch should probably rename 'unique' to 'random' everywhere.

Jun 1 2020, 4:16 PM · Restricted Project
tra added a comment to D60620: [HIP] Support target id by --offload-arch.

It means HIP will create two compilation passes: one for gfx908 and one for gfx908:xnack+:sramecc+.

Jun 1 2020, 12:59 PM · Restricted Project, Restricted Project
tra added a comment to D80897: [OpenMP] Initial support for std::complex in target regions.
In D80897#2066723, @tra wrote:

Hmm. I'm pretty sure tensorflow is using std::complex for various types. I'm surprised that we haven't seen these functions missing.

Which functions and missing from where? In CUDA-mode we did provide __XXXXc3 already.

Jun 1 2020, 12:27 PM · Restricted Project
tra added a comment to D80897: [OpenMP] Initial support for std::complex in target regions.

Hmm. I'm pretty sure tensorflow is using std::complex for various types. I'm surprised that we haven't seen these functions missing.
Plain CUDA (e.g. https://godbolt.org/z/Us6oXC) code appears to have no references to __mul* or __div*, at least for optimized builds, but they do pop up in unoptimized ones. Curiously enough, unoptimized code compiled with -stdlib=libc++ --std=c++11 does not need the soft-float functions. That would explain why we don't see the build breaks.

Jun 1 2020, 10:45 AM · Restricted Project

May 27 2020

tra added inline comments to D60620: [HIP] Support target id by --offload-arch.
May 27 2020, 12:29 PM · Restricted Project, Restricted Project

May 26 2020

tra added inline comments to D60620: [HIP] Support target id by --offload-arch.
May 26 2020, 3:50 PM · Restricted Project, Restricted Project
tra added a comment to D80450: [CUDA][HIP] Fix implicit HD function resolution.

Is this patch supposed to be used with D79526 or instead of it?

May 26 2020, 11:27 AM · Restricted Project
tra added a comment to D80464: [CUDA] Missing __syncthreads intrinsic in __clang_cuda_device_functions.h.

__syncthreads is clang's built-in and as such should not be in any header file:
https://github.com/llvm/llvm-project/blob/master/clang/include/clang/Basic/BuiltinsNVPTX.def#L406

May 26 2020, 11:25 AM · Restricted Project
tra added inline comments to D71726: Let clang atomic builtins fetch add/sub support floating point types.
May 26 2020, 10:17 AM

May 21 2020

tra updated subscribers of D78759: Add Statically Linked Libraries.

The control flow in tools::gnutools::StaticLibTool::ConstructJob doesn't seem good though. It's a generic function that unconditionally calls a hip specific function which happens to return immediately in non-hip cases. That really should be a hip specific function calling the generic one, then doing more hip specific things afterwards.

May 21 2020, 2:05 PM · Restricted Project, Restricted Project
tra added inline comments to D71726: Let clang atomic builtins fetch add/sub support floating point types.
May 21 2020, 10:49 AM
tra added a reviewer for D78759: Add Statically Linked Libraries: JonChesterfield.

Few cosmetic nits. LGTM in general. I'll leave the approval to @JonChesterfield

May 21 2020, 10:47 AM · Restricted Project, Restricted Project

May 19 2020

tra added inline comments to D60620: [HIP] Support target id by --offload-arch.
May 19 2020, 4:34 PM · Restricted Project, Restricted Project
tra added inline comments to D80237: [hip] Ensure pointer in struct argument has proper `addrspacecast`..
May 19 2020, 2:18 PM · Restricted Project
tra accepted D78155: [OpenMP] Use __OPENMP_NVPTX__ instead of _OPENMP in wrapper headers.

LGTM.

May 19 2020, 1:43 PM · Restricted Project

May 18 2020

tra added a comment to D79526: [CUDA][HIP] Workaround for resolving host device function against wrong-sided function.

Reduced test case:

May 18 2020, 3:12 PM · Restricted Project
tra committed rGef649e8fd5d1: Revert "[CUDA][HIP] Workaround for resolving host device function against wrong… (authored by tra).
Revert "[CUDA][HIP] Workaround for resolving host device function against wrong…
May 18 2020, 12:28 PM
tra added a reverting change for rGe03394c6a6ff: [CUDA][HIP] Workaround for resolving host device function against wrong-sided…: rGef649e8fd5d1: Revert "[CUDA][HIP] Workaround for resolving host device function against wrong….
May 18 2020, 12:28 PM
tra added a comment to D79526: [CUDA][HIP] Workaround for resolving host device function against wrong-sided function.

e03394c6a6ff5832aa43259d4b8345f40ca6a22c Still breaks some of the existing CUDA code (got failures in pytorch and Eigen). I'll revert the patch and will send you a reduced reproducer.

May 18 2020, 12:26 PM · Restricted Project

May 15 2020

tra added inline comments to D79237: [CUDA][HIP] Fix constexpr variables for C++17.
May 15 2020, 3:47 PM · Restricted Project
tra added a comment to D79237: [CUDA][HIP] Fix constexpr variables for C++17.
In D79237#2039417, @tra wrote:

LGTM in general. Let me check the patch on our tensorflow build.

May 15 2020, 3:14 PM · Restricted Project
tra added a comment to D79237: [CUDA][HIP] Fix constexpr variables for C++17.

LGTM in general. Let me check the patch on our tensorflow build.

May 15 2020, 2:09 PM · Restricted Project

May 14 2020

tra added a reviewer for D79967: Fix debug info for NoDebug attr: dblaikie.

LGTM. Added @dblaikie as reviewer for debug info expertise.

May 14 2020, 3:14 PM · Restricted Project

May 13 2020

tra accepted D79866: [HIP] Do not emit debug info for stub function.
May 13 2020, 2:09 PM · Restricted Project
tra added a comment to D79866: [HIP] Do not emit debug info for stub function.

Can you try setting a breakpoint on the kernel by file name and line number?

May 13 2020, 11:57 AM · Restricted Project
tra added a comment to D79866: [HIP] Do not emit debug info for stub function.

I do not see the behavior the patch is supposed to fix in CUDA.
If I compile a simple program, the host-side debugger does not see the kernel, sees __device_stub_kernel, and, if the breakpoint is set on the kernel, treats it as a yet-to-be-loaded one and does end up breaking on entry into the kernel on the GPU side.

May 13 2020, 10:49 AM · Restricted Project

May 12 2020

tra added a comment to D79237: [CUDA][HIP] Fix constexpr variables for C++17.

constexpr variables are compile time constants and implicitly const, therefore
they are safe to emit on both device and host side. Besides, in many cases
they are intended for both device and host, therefore it makes sense
to emit them on both device and host sides if necessary.

In most cases constexpr variables are used as rvalue and the variables
themselves do not need to be emitted.

May 12 2020, 10:12 AM · Restricted Project

May 11 2020

tra accepted D79526: [CUDA][HIP] Workaround for resolving host device function against wrong-sided function.

LGTM, modulo cosmetic test changes mentioned below.

May 11 2020, 3:40 PM · Restricted Project
tra added inline comments to D79526: [CUDA][HIP] Workaround for resolving host device function against wrong-sided function.
May 11 2020, 12:23 PM · Restricted Project

May 8 2020

tra added a comment to D79344: [cuda] Start diagnosing variables with bad target..

This triggers an assertion:

May 8 2020, 4:08 PM · Restricted Project
tra added a comment to D79526: [CUDA][HIP] Workaround for resolving host device function against wrong-sided function.

This one is just a FYI. I've managed to reduce the failure in the first version of this patch and it looks rather odd because the reduced test case has nothing to do with CUDA. Instead it appears to introduce a difference in compilation of regular host-only C++ code with -x cuda vs -x c++. I'm not sure how/why first version caused this and why the latest one fixes it. It may be worth double checking that we're not missing something here.

May 8 2020, 2:31 PM · Restricted Project
tra updated subscribers of D79526: [CUDA][HIP] Workaround for resolving host device function against wrong-sided function.

For implicit host device functions, since they are not guaranteed to work in device compilation, we can only resolve them as if they are host functions. This causes asymmetry, but implicit host device functions are originally host functions, so it is biased toward host compilation in the beginning.

May 8 2020, 1:26 PM · Restricted Project
tra added a comment to D79526: [CUDA][HIP] Workaround for resolving host device function against wrong-sided function.

The latest version of the patch works well enough to compile tensorflow. That's the good news.

May 8 2020, 11:14 AM · Restricted Project

May 7 2020

tra added a comment to D79344: [cuda] Start diagnosing variables with bad target..

Here's a slightly smaller variant which may be a good clue for tracking down the root cause. This one fails with:

var.cc:6:14: error: no matching function for call to 'copysign'
  double g = copysign(0, g);
             ^~~~~~~~
var.cc:5:56: note: candidate template ignored: substitution failure [with e = int, f = double]: reference to __host__ variable 'b' in __device__ function
__attribute__((device)) typename c<a<f>::b, double>::d copysign(e, f) {
                                         ~             ^
1 error generated when compiling for sm_60.

May 7 2020, 5:23 PM · Restricted Project
tra added a comment to D79344: [cuda] Start diagnosing variables with bad target..
In D79344#2026180, @tra wrote:

The problem is reproducible in upstream clang. Let's see if I can reduce it to something simpler.

May 7 2020, 5:23 PM · Restricted Project
tra added a comment to D79344: [cuda] Start diagnosing variables with bad target..

The problem is reproducible in upstream clang. Let's see if I can reduce it to something simpler.

May 7 2020, 3:14 PM · Restricted Project
tra added a comment to D79344: [cuda] Start diagnosing variables with bad target..
In D79344#2026025, @tra wrote:

We're calling copysign(int, double). The standard library provides copysign(double, double); CUDA provides only copysign(float, double). As far as C++ is concerned, both require one type conversion. I guess previously we would give the __device__ one provided by CUDA a higher preference, considering that the callee is a device function. Now both seem to have equal weight. I'm not sure how/why.

@yaxunl, that may be related to the change of overload resolution. Back to this change, that error should not be related to the non-local variable checks.

May 7 2020, 3:14 PM · Restricted Project
tra added a comment to D79344: [cuda] Start diagnosing variables with bad target..

We're calling copysign(int, double). The standard library provides copysign(double, double); CUDA provides only copysign(float, double). As far as C++ is concerned, both require one type conversion. I guess previously we would give the __device__ one provided by CUDA a higher preference, considering that the callee is a device function. Now both seem to have equal weight. I'm not sure how/why.

May 7 2020, 2:09 PM · Restricted Project
tra added a comment to D79344: [cuda] Start diagnosing variables with bad target..
In D79344#2018915, @tra wrote:

If you can wait, I can try patching this change into our clang tree and then see if it breaks anything obvious. If nothing falls apart, I'll be fine with the patch as is.

May 7 2020, 2:08 PM · Restricted Project
tra added a comment to D78655: [CUDA][HIP] Let lambda be host device by default.
In D78655#2020651, @tra wrote:

Ack. Let's give it a try. I'll test this on our code and see what falls out. Stay tuned.

May 7 2020, 2:08 PM · Restricted Project
tra added a comment to D79526: [CUDA][HIP] Workaround for resolving host device function against wrong-sided function.

I've tested the patch on our sources and it still breaks tensorflow compilation, though in a different way:

May 7 2020, 12:28 PM · Restricted Project

May 6 2020

tra committed rG314f99e7d42d: [CUDA] Enable existing builtins for PTX7.0 as well. (authored by tra).
[CUDA] Enable existing builtins for PTX7.0 as well.
May 6 2020, 2:45 PM
tra closed D79515: [CUDA] Enable existing builtins for PTX7.0 as well..
May 6 2020, 2:44 PM · Restricted Project
tra updated the diff for D79515: [CUDA] Enable existing builtins for PTX7.0 as well..

Updates test.

May 6 2020, 2:10 PM · Restricted Project
tra created D79515: [CUDA] Enable existing builtins for PTX7.0 as well..
May 6 2020, 1:00 PM · Restricted Project

May 5 2020

tra committed rG844096b996a0: [CUDA] Make NVVM builtins available with CUDA-11/PTX6.5 (authored by tra).
[CUDA] Make NVVM builtins available with CUDA-11/PTX6.5
May 5 2020, 4:13 PM
tra closed D79449: [CUDA] Make NVVM builtins available with CUDA-11 & PTX6.5.
May 5 2020, 4:13 PM · Restricted Project
tra updated the diff for D79449: [CUDA] Make NVVM builtins available with CUDA-11 & PTX6.5.

Actually added sm_80 predicate. It's not used on any builtins yet, but it makes
sense to add it right now along with ptx65.

May 5 2020, 3:40 PM · Restricted Project
tra updated the diff for D79449: [CUDA] Make NVVM builtins available with CUDA-11 & PTX6.5.

Test that the builtins work all the way to sm_80/ptx65

May 5 2020, 3:08 PM · Restricted Project
tra created D79449: [CUDA] Make NVVM builtins available with CUDA-11 & PTX6.5.
May 5 2020, 3:08 PM · Restricted Project
tra added a reverting change for rG55bcb96f3154: recommit c77a4078e01033aa2206c31a579d217c8a07569b with fix: rGbf6a26b06638: Revert D77954 -- it breaks Eigen & Tensorflow..
May 5 2020, 2:36 PM
tra committed rGbf6a26b06638: Revert D77954 -- it breaks Eigen & Tensorflow. (authored by tra).
Revert D77954 -- it breaks Eigen & Tensorflow.
May 5 2020, 2:36 PM