Power functions are implemented as linkonce_odr scalar functions for the integer types used by IPowI operations encountered in a module.
The vector form of IPowI is linearized into a sequence of calls to the scalar functions.
The type should only be omitted if it's explicit on the right-hand side of an assignment due to a cast, or if it's "too long" (e.g. vector<int32_t>::iterator).
Can you add a failure reason with rewriter.notifyMatchFailure? It can be notoriously difficult to debug pattern matching without those messages.
Spell out the auto.
Should this be unsigned?
Please don't use auto for a loop index.
Please spell out all the autos as appropriate.
Would generating this function be made any easier by using scf instead of cf?
Since this is a partial dialect conversion, can you not just add math::IPowI as an illegal op?
Nit: typically this comes before the pass declaration/definition
I will fix auto declarations. I will keep auto for results of rewriter.create<T> - it looks like it is a common practice to write it with auto.
Sure! Thanks for the suggestion!
int64_t should be correct, since ShapedType::getNumElements is defined as returning int64_t in IR/BuiltinAttributes.h.
It seems to me unstructured control flow fits better for the code with early returns from the function.
The high level idea makes sense to me, but I think the pass should be structured as:
- Collect all element types for which functions need to be instantiated and instantiate those functions
- Run the vector to scalar conversions and scalar to function call rewrites
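That two-phase structure can be sketched with plain standard-library containers standing in for the MLIR walk and symbol table (all names here are hypothetical, including the `__mlir_math_ipowi_` naming scheme):

```cpp
#include <set>
#include <string>
#include <vector>

// Hypothetical stand-in for an IPowI op carrying its element type.
struct Op {
  std::string elementType;
};

// Phase 1: walk the module once and collect every element type that
// needs a software implementation function.
std::set<std::string> collectNeededTypes(const std::vector<Op> &module) {
  std::set<std::string> types;
  for (const Op &op : module)
    types.insert(op.elementType);
  return types;
}

// Phase 2: instantiate one function per collected type up front, so the
// later rewrite patterns only perform local op-to-call replacements and
// never have to create (or search for) functions themselves.
std::vector<std::string> instantiateFuncs(const std::set<std::string> &types) {
  std::vector<std::string> funcs;
  for (const std::string &t : types)
    funcs.push_back("__mlir_math_ipowi_" + t);
  return funcs;
}
```

The design point is that all module-level mutation happens before pattern application, so the patterns stay local rewrites.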
You can remove this include
These functions normally don't take a PatternBenefit.
These functions normally return just a Pass
Why is this type of linkage needed? It's unfortunate that this pass depends on the LLVM dialect just for this.
It's generally not valid for patterns to access and modify parent operations, especially since these "patterns" are exposed publicly. The generation of these software implementations should not be done inside patterns. Patterns should be local rewrites of operations.
Can you please explain the benefit of such a split? There are 4 existing conversion passes that access their getNearestSymbolTable() "on the fly" (ComplexToLibm, MathToLibm, MemRefToSPIRV and SCFToOpenMP), and I do not really want to reinvent the wheel here :)
Thank you for the review! I will upload updated files shortly.
There are some (e.g. ComplexToLLVM, MathToLibm), but it is currently not needed, so I will remove it. It may be useful in the future if there is target/project-specific handling for such operations; e.g. in some cases a target/project-specific conversion for some flavors of the operation is preferable to the MathToFuncs conversion, and as I understand it, the patterns may then be selected based on their benefit.
I see that many passes defined as ModuleOp passes in mlir/lib/Conversion/Passes.td (i.e. derived from Pass<"...", "ModuleOp">) return std::unique_ptr<OperationPass<ModuleOp>> in mlir/lib/Conversion/*/*.cpp. I am not comfortable with stepping away from the common pattern. I guess it is written this way to make it clear that the pass is not a generic pass but rather a pass over a specific operation.
In FuncToLLVM conversion all func.func operations have external linkage by default. We cannot use external linkage for the generated functions, because there may be multiple definitions in different translation modules and the linking will fail. So we have to specify either internal or linkonce_odr linkage - the latter is better, because it allows the linker to keep only one copy of the generated function across multiple modules being linked.
I do not know of any other way to control linkage in MLIR.
I understand the concern. I followed the same approach that MathToLibm and ComplexToLibm use.
I guess I can add a comment in MathToFuncs.h to warn about valid usage of populateMathToFuncsConversionPatterns - does it make sense?
The benefit is that the patterns won't (1) keep looking up functions in the symbol table, which is painfully slow, and (2) violate the API contract for patterns.
In that case, they wouldn't use these patterns or this pass. The validity of composing patterns is inconsistent.
It is inconsistent throughout the MLIR codebase, but please return a Pass. There is no benefit to returning OperationPass<ModuleOp> because the pass manager API only accepts the former anyways.
Adding linkage to FuncDialect appears to be a TODO. It's fine for now then.
No. Those other patterns are also invalid. Patterns should not be modifying parent operations. It's something that "just happens to work" because no one has needed to compose these patterns sets in a parallelized pass (which would result in race conditions). This is especially true in conversion patterns, because there is extra bookkeeping in the dialect conversion pass that is violated if this happens.
Please don't propagate what has been done in the older passes and separate these.
Thank you for the explanation!
Then it looks like exposing the patterns is unsafe in general, and I should rewrite the pass as you suggested, without externalizing the match patterns (populateMathToFuncsConversionPatterns). The module pass will first visit all IPowI operations to collect the kinds of implementation functions that need to be created in the module, then create the new functions, and then apply the rewrite patterns to convert the operations into calls.
Good evolution! The direction of the patch LGTM. I just have some minor complaints about the implementation details.
Can you trim these includes? I don't think all of these are needed
This isn't necessary. FunctionType can be hashed directly in a DenseMap.
I would prefer this to be a standalone function, not a static function on the pattern.
These two checks aren't necessary since they are invariants enforced by the verifier. You just need to check that this is a scalar integer op (done by the first condition).
If all the types are the same, then you only need to hash based on the element type.
Thank you for the prompt review!
Since I am going to add FPowI support later, does it make sense to keep the switch now?
This is true for IPowI but not for FPowI. I prefer to keep it as a function type so that the keys are consistent in the map; otherwise, for IPowI we will have IntegerType keys and for FPowI we will have FunctionType keys.
LGTM. Just one last complaint about the stringify/hashing
We generally don't use std::function
I think the "function type stringify" is too generic. For FPowI, just the floating point and integer type will suffice. This doesn't need to be done generally. I think you can just remove this function.
Can you remove the extra logic for now? I know you want to add support for other ops, but I think it'd be less automagical to just add another Case for FPowI
Okay, I will remove it.
Will do. flush should not be necessary according to this: https://llvm.org/doxygen/classllvm_1_1raw__string__ostream.html#details