This is an archive of the discontinued LLVM Phabricator instance.

Revert "[TLI] Per-function fveclib for math library used for vectorization"
ClosedPublic

Authored by mehdi_amini on Apr 10 2020, 6:03 PM.

Details

Summary

This reverts commit 60c642e74be6af86906d9f3d982728be7bd4329f.

This patch is making the TLI "closed" for a predefined set of VecLib
while at the moment it is extensible for anyone to customize when using
LLVM as a library.
Reverting while we figure out a way to re-land it without losing the
generality of the current API.

Diff Detail

Event Timeline

mehdi_amini created this revision. · Apr 10 2020, 6:03 PM
This revision is now accepted and ready to land. · Apr 10 2020, 6:05 PM
This revision was automatically updated to reflect the committed changes.
wenlei added a comment. · May 2 2020, 8:32 AM

@mehdi_amini @tejohnson When can we re-land this (with tweaks)? I'm under the impression that a test case demonstrating the 3rd party usage will be added very soon after this revert; then we can tweak the original patch to accommodate that usage and re-land asap. Or am I missing something there? I'd like to get this unblocked asap. Currently we have to keep this as a private patch on our end, which is a bit cumbersome, and I think this one can be useful for others too. Thanks.

Herald added a project: Restricted Project. · May 2 2020, 8:32 AM

> @mehdi_amini @tejohnson When can we re-land this (with tweaks)? I'm under the impression that a test case demonstrating the 3rd party usage will be added very soon after this revert; then we can tweak the original patch to accommodate that usage and re-land asap. Or am I missing something there? I'd like to get this unblocked asap. Currently we have to keep this as a private patch on our end, which is a bit cumbersome, and I think this one can be useful for others too. Thanks.

@bkramer, can you work with Wenlei on this? (The original patch is D77632.)

@wenlei, in the meantime you can see the use case for revising the patch here:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/service/cpu/compiler_functor.cc#L198

Thanks for the pointer, @tejohnson. Looks like we need a way for others to provide a set of vector functions. How about we introduce a dedicated VecLib type, Custom, in addition to the existing ones (Accelerate, SVML and MASSV), and expose a public API addCustomVectorizableFunctions(ArrayRef<VecDesc> Fns) on TLII to allow registering a custom function list? This way we preserve the openness through Custom, but also keep it clean and structured.

Then for XLA, you just need to specify -fveclib=Custom and call addCustomVectorizableFunctions instead of addVectorizableFunctions.
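As a sketch, the proposed API might take the following shape. The types below (VecDesc's fields, the TargetLibraryInfoImpl class body) are simplified stand-ins for illustration, not LLVM's actual declarations, and std::vector stands in for ArrayRef:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Stand-in for LLVM's VecDesc: maps a scalar function to its vector
// counterpart at a given vectorization factor.
struct VecDesc {
  std::string ScalarFnName;
  std::string VectorFnName;
  unsigned VectorizationFactor;
};

class TargetLibraryInfoImpl {
public:
  // Existing closed set, plus the proposed Custom entry.
  enum VectorLibrary { NoLibrary, Accelerate, SVML, MASSV, Custom };

  // Proposed public entry point: register a user-provided function list
  // and mark the library kind as Custom.
  void addCustomVectorizableFunctions(const std::vector<VecDesc> &Fns) {
    VecLib = Custom;
    VectorDescs.insert(VectorDescs.end(), Fns.begin(), Fns.end());
  }

  // Look up a registered vector variant; returns nullptr when none matches.
  const VecDesc *getVectorizedFunction(const std::string &Scalar,
                                       unsigned VF) const {
    for (const VecDesc &D : VectorDescs)
      if (D.ScalarFnName == Scalar && D.VectorizationFactor == VF)
        return &D;
    return nullptr;
  }

  VectorLibrary VecLib = NoLibrary;

private:
  std::vector<VecDesc> VectorDescs;
};
```

A caller like XLA would then register its own descriptors once, instead of being limited to the built-in tables.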

I'd rather have a non-closed list of veclibs: if you just have a map keyed by string instead of an enum, it should just work out?
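A minimal sketch of that string-keyed alternative: an open registry mapping an arbitrary veclib name to its descriptors, so a library user can add entries without touching any enum. The registry class and its method names here are hypothetical, not LLVM API:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Stand-in descriptor: scalar name, vector name, vectorization factor.
struct VecDesc {
  std::string Scalar;
  std::string Vector;
  unsigned VF;
};

// Open registry: any string key is accepted, so built-in and
// user-defined veclibs go through the same path.
class VecLibRegistry {
public:
  void registerVecLib(const std::string &Name, std::vector<VecDesc> Fns) {
    Libs[Name] = std::move(Fns);
  }
  bool isKnown(const std::string &Name) const {
    return Libs.count(Name) != 0;
  }
  const std::vector<VecDesc> *lookup(const std::string &Name) const {
    auto It = Libs.find(Name);
    return It == Libs.end() ? nullptr : &It->second;
  }

private:
  std::map<std::string, std::vector<VecDesc>> Libs;
};
```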

> I'd rather have a non-closed list of veclibs: if you just have a map keyed by string instead of an enum, it should just work out?

The veclib type is also tied to the accepted values for -fveclib, which is a closed list of supported libs; anything outside the list is rejected with an error. If you want to allow arbitrary strings as keys, that's inconsistent with the restricted/closed values for -fveclib. So basically there's no openness from the clang/llvm tools, while there was some openness through the libraries. I think by introducing a "Custom" veclib type, we can solve the impedance mismatch there. And in the XLA case, it's conceptually indeed a custom veclib, right? Functionality-wise, Custom should just work for the XLA usage too.

@mehdi_amini @tejohnson thoughts on the above?

Overall that would likely work for XLA. Something I'd like to mention though in response to:

> The veclib type is also tied to the accepted values for -fveclib, which is a list of supported libs,

-fveclib is a Clang thing; it shouldn't limit what LLVM does. Of course LLVM needs to support Clang, but it does not have to inherit the limitation of mapping 1:1 to the Clang UI.
In particular, as a library, it isn't clear why we would write LLVM's VecLib support this way.

> Overall that would likely work for XLA. Something I'd like to mention though in response to:
>
> > The veclib type is also tied to the accepted values for -fveclib, which is a list of supported libs,
>
> -fveclib is a Clang thing; it shouldn't limit what LLVM does. Of course LLVM needs to support Clang, but it does not have to inherit the limitation of mapping 1:1 to the Clang UI.
> In particular, as a library, it isn't clear why we would write LLVM's VecLib support this way.

Is there any benefit to keeping a closed list like this in LLVM? If not (and presumably clang is checking for valid values of -fveclib), then I think I agree with @mehdi_amini, unless there is an efficiency reason for doing it via an enum. It's been a while since I looked through this code in detail...

> Overall that would likely work for XLA. Something I'd like to mention though in response to:
>
> > The veclib type is also tied to the accepted values for -fveclib, which is a list of supported libs,
>
> -fveclib is a Clang thing; it shouldn't limit what LLVM does. Of course LLVM needs to support Clang, but it does not have to inherit the limitation of mapping 1:1 to the Clang UI.
> In particular, as a library, it isn't clear why we would write LLVM's VecLib support this way.

Fair point. I was trying to differentiate the accepted veclibs from any other custom lib. I guess it's somewhat like a namespace: even though the built-in ones are different from user-defined ones, the underlying support doesn't have to differentiate them.

> Is there any benefit to keeping a closed list like this in LLVM? If not (and presumably clang is checking for valid values of -fveclib), then I think I agree with @mehdi_amini, unless there is an efficiency reason for doing it via an enum. It's been a while since I looked through this code in detail...

I think performance should be fine; the attributes on functions are already in string form. The TLI compatibility check will involve a string compare, but a short string compare shouldn't be disastrous. I was mainly trying to make LLVM match clang's supported list.
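To illustrate why that cost is small: with the veclib carried as a per-function string attribute (the scheme of D77632), the compatibility check reduces to one short string compare. This is a toy sketch; the attribute key "veclib" and the FuncAttrs/isCompatible names are hypothetical stand-ins, not LLVM's Attribute API:

```cpp
#include <cassert>
#include <map>
#include <string>

// Stand-in for a function's string attributes.
struct FuncAttrs {
  std::map<std::string, std::string> Attrs;
  // Returns the attribute value, or "" when the attribute is absent.
  std::string get(const std::string &Key) const {
    auto It = Attrs.find(Key);
    return It == Attrs.end() ? std::string() : It->second;
  }
};

// True when a TLI built for TLIVecLib can be reused for function F:
// a single short string compare per function.
bool isCompatible(const std::string &TLIVecLib, const FuncAttrs &F) {
  return F.get("veclib") == TLIVecLib;
}
```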

Will get back to this hopefully next week.

llvm/lib/Analysis/TargetLibraryInfo.cpp