The lowering to NVVM and ROCm handles tanh operations differently, mapping them to NVVM/ROCm-specific intrinsics. This conflicts with the lowering to LLVM, which uses the default LLVM intrinsic. This change declares the LLVM intrinsics illegal, thereby disallowing the corresponding rewrite.
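A minimal sketch of the approach, assuming the current MLIR conversion API; the op list marked illegal below is illustrative, not the exact set touched by this patch:

```cpp
#include "mlir/Dialect/LLVMIR/LLVMDialect.h"
#include "mlir/Dialect/LLVMIR/NVVMDialect.h"
#include "mlir/Transforms/DialectConversion.h"

using namespace mlir;

// Configure the GPU-to-NVVM conversion target so that the generic
// intrinsic-wrapping LLVM dialect ops are illegal. Any pattern that
// produces them (e.g. the default Std->LLVM lowering) gets rolled back,
// which leaves only the NVVM-specific device-library lowering.
static void configureGpuToNVVMTarget(ConversionTarget &target) {
  target.addLegalDialect<NVVM::NVVMDialect>();
  // Illustrative op list; the patch disallows the intrinsics that
  // conflict with the NVVM/ROCDL lowering.
  target.addIllegalOp<LLVM::ExpOp, LLVM::LogOp>();
}
```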
Event Timeline
PTAL.
Writing this in code for discussion. Is this the way to resolve conflicts, or do we want to hand-pick patterns from the LLVM conversion? The latter seems brittle as well, as the number of patterns keeps growing and we would need to keep track of it. By disallowing, we can do this whenever we add a new intrinsic.
I am somewhat torn between the two myself, so I wrote it down to see what it looks like. This would unbreak the Windows builds.
This would work for me. At scale, we may need to analyze the effect of such an approach on compile time. The converter essentially has twice the number of patterns to search through, and there are potential rollbacks of rewrites that generated illegal operations.
In general, I think we want to expose individual LLVM patterns anyway because several people hit the selection problem repeatedly. I haven't had time to do that yet.
For intrinsics specifically, I have a vague idea of some fallback mechanism, as in: the default Std->LLVM-intrinsic patterns are enabled unless there is a different pattern.
mlir/lib/Conversion/GPUCommon/OpToFuncCallLowering.h:103
Nit: SmallVector is re-exported into the mlir namespace, drop the llvm::
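(For context, a hypothetical illustration of the nit; the variable and function are made up, not the actual line 103 of the header:)

```cpp
#include "mlir/IR/Value.h"
#include "llvm/ADT/SmallVector.h"

namespace mlir {
// Inside the mlir namespace, SmallVector is re-exported from llvm
// (via mlir/Support/LLVM.h), so the llvm:: qualifier can be dropped.
static void nitExample(Value v) {
  SmallVector<Value, 4> castedOperands; // was: llvm::SmallVector<Value, 4>
  castedOperands.push_back(v);
}
} // namespace mlir
```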
The granularity of exposing them is not clear to me. I also thought about an intrinsics/non-intrinsics split, but even the definition of what intrinsic-based lowering means is vague. LLVM::ExpOp is an operation but is ultimately lowered to some intrinsic. One way to make this cleaner would be to expose all intrinsics as LLVM operations in MLIR and only lower them when going to LLVM proper. Then at least we can filter on operations and not intrinsic names.
> For intrinsics specifically, I have a vague idea of some fallback mechanism, as in: the default Std->LLVM-intrinsic patterns are enabled unless there is a different pattern.
It would be great if one could define priorities of patterns so that certain patterns shadow others. This is different from benefit in that it would have stronger semantics (disallowing patterns rather than preferring them). One idea would be to group patterns into formal groups and then allow specifying a partial order on these groups, which would then be used to determine the application order of patterns. That would solve our case, where NVVM patterns are meant to shadow LLVM patterns.
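To make the distinction concrete, here is a hedged sketch of how selection is biased today via benefit (assuming the current OpConversionPattern API; the pattern name and source op are placeholders): a higher benefit only makes a pattern preferred, and if it fails to apply the lower-benefit default remains available, whereas shadowing/priority groups would rule the default out entirely.

```cpp
#include "mlir/Transforms/DialectConversion.h"

using namespace mlir;

// Placeholder pattern: a target-specific lowering that asks to be tried
// before the generic benefit-1 pattern for the same source op.
template <typename SourceOp>
struct PreferredLowering : public OpConversionPattern<SourceOp> {
  using OpAdaptor = typename OpConversionPattern<SourceOp>::OpAdaptor;

  PreferredLowering(MLIRContext *ctx)
      : OpConversionPattern<SourceOp>(ctx, /*benefit=*/2) {}

  LogicalResult
  matchAndRewrite(SourceOp op, OpAdaptor adaptor,
                  ConversionPatternRewriter &rewriter) const override {
    // Target-specific rewrite would go here. Returning failure() hands the
    // op back to the remaining (lower-benefit) patterns: benefit prefers,
    // it does not shadow.
    return failure();
  }
};
```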
For now, should we land this?
The notion of intrinsic does not exist in MLIR proper, everything is an operation. I replicated it in the LLVM dialect to better match the LLVM IR, and to commonalize the lowering.
> One way to make this cleaner would be to expose all intrinsics as LLVM operations in MLIR and only lower them when going to LLVM proper. Then at least we can filter on operations and not intrinsic names.
We should absolutely do that.
> For intrinsics specifically, I have a vague idea of some fallback mechanism, as in: the default Std->LLVM-intrinsic patterns are enabled unless there is a different pattern.
> It would be great if one could define priorities of patterns so that certain patterns shadow others. This is different from benefit in that it would have stronger semantics (disallowing patterns rather than preferring them). One idea would be to group patterns into formal groups and then allow specifying a partial order on these groups, which would then be used to determine the application order of patterns. That would solve our case, where NVVM patterns are meant to shadow LLVM patterns.
> For now, should we land this?
Yes if it fixes the build.