Diff Detail
- Repository: rG LLVM Github Monorepo
Event Timeline
This looks good. Please wait a couple of days before merging in case others have a different opinion.
The only question I had in mind is why there is no llvm.tan intrinsic in LLVM IR. Does it have something to do with there being multiple representations (tan(x) vs. sin(x)/cos(x))? Do you plan to canonicalize sin(x)/cos(x) to tan(x)?
Clang has a __builtin_tanf builtin. I tried the following program:
```c
float func(float x) {
  float a = __builtin_tanf(x);
  float b = __builtin_sinf(x) / __builtin_cosf(x);
  return a + b;
}
```
It was converted to the following by LLVM under -Ofast, so there is probably no issue:
```llvm
define dso_local float @func(float noundef %x) local_unnamed_addr #0 {
entry:
  %call = tail call fast float @tanf(float noundef %x) #2
  %add = fadd fast float %call, %call
  ret float %add
}
```
Thank you for the review! I will wait a little bit more...
> The only question I had in mind is why there is no llvm.tan intrinsic in LLVM IR. Does it have something to do with there being multiple representations (tan(x) vs. sin(x)/cos(x))?
Sorry, I do not know the answer to this.
> Do you plan to canonicalize sin(x)/cos(x) to tan(x)?
Clang has a __builtin_tanf builtin. I tried the following program:

```c
float func(float x) {
  float a = __builtin_tanf(x);
  float b = __builtin_sinf(x) / __builtin_cosf(x);
  return a + b;
}
```

It was converted to the following by LLVM under -Ofast, so there is probably no issue:

```llvm
define dso_local float @func(float noundef %x) local_unnamed_addr #0 {
entry:
  %call = tail call fast float @tanf(float noundef %x) #2
  %add = fadd fast float %call, %call
  ret float %add
}
```
I think optimizing sin / cos into tan is a good idea, and as you noticed it is already happening in the LLVM backend (given that math::SinOp/CosOp are converted to LLVM::SinOp/CosOp). What would be the benefit of doing the same optimization in MLIR?
> I think optimizing sin / cos into tan is a good idea, and as you noticed it is already happening in the LLVM backend (given that math::SinOp/CosOp are converted to LLVM::SinOp/CosOp). What would be the benefit of doing the same optimization in MLIR?
I can think of the following two:
1. To have a canonical representation.
2. Benefit for non-LLVM IR users.
I am not sure we can treat the sin / cos => tan optimization as a canonicalization. As I understand it, "canonicalization" implies that whenever we detect the sin / cos pattern we transform it into tan, so this will happen even if sin and cos have other uses besides the division. The resulting performance may then be worse, unless further optimizations undo the effect of such a canonicalization.
Anyway, if I am going to do such a canonicalization, what would be the right place to do it?
> Benefit for non-LLVM IR users.
Valid point.
Hi @kiranchandramohan, from the point of canonicalization does it make sense to canonicalize tan into sin / cos? Basically, tan becomes just a convenience operation at this point in time. I anticipate that the canonicalization may be disabled later, if/when the math operations are attributed with accuracy/fp-behavior controls - then tan and sin / cos may become non-equivalent. But currently it should be legal, and at least the LLVM backend should be able to "reverse" the canonicalization if the TLI/TTI cost model proves that it is profitable.
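A sketch of that direction in the math dialect (current upstream func/arith/math syntax; the function names are illustrative, not from the patch):

```mlir
// Before: tan as a convenience op.
func.func @before(%x: f32) -> f32 {
  %t = math.tan %x : f32
  return %t : f32
}

// After the proposed canonicalization: tan expanded to sin / cos,
// which the LLVM backend could later re-fuse into tan if profitable.
func.func @after(%x: f32) -> f32 {
  %s = math.sin %x : f32
  %c = math.cos %x : f32
  %t = arith.divf %s, %c : f32
  return %t : f32
}
```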
Do you mean that LLVM has the ability to create tan from sin/cos, hence it is OK to canonicalize to sin/cos? https://github.com/llvm/llvm-project/blob/70039be62774ae8fc53bb3b8f1bdbd2b0efb3355/llvm/lib/Transforms/InstCombine/InstCombineMulDivRem.cpp#L1389
Exactly. The current Math operations imply "any optimizations are valid", since we do not have anything like FastMathFlags controls for them. So I think we can set any of LLVM's FastMathFlags when converting Math to LLVM or Libm, including reassoc - this will enable the code you pointed out.
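An LLVM IR sketch of what that lowering would need to produce (not from the patch; my understanding is that the InstCombine fold linked above fires only when the appropriate fast-math flags, such as reassoc, are present on the division and the calls, and when TLI knows the tan libcall is available):

```llvm
; With fast-math flags set during Math-to-LLVM conversion,
; instcombine can rewrite sin/cos back into a single tan libcall.
define float @sincos_div(float %x) {
entry:
  %s = call reassoc float @sinf(float %x)
  %c = call reassoc float @cosf(float %x)
  %d = fdiv reassoc float %s, %c
  ret float %d
}

declare float @sinf(float)
declare float @cosf(float)
```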