- All functions that return an Operation now return an OpView.
- All functions that accept an Operation now accept an _OperationBase, which both Operation and OpView extend, and which can resolve to the backing Operation.
- Moves user-facing instance methods from Operation -> _OperationBase so that both can have the same API.
- Concretely, this means that if custom op classes are defined (i.e. in Python), any iteration or creation will return the appropriate instance (e.g. if you get/create a std.addf, you will get an instance of the mlir.dialects.std.AddFOp class, with full access to any custom API it exposes).
- Refactors all __eq__ methods after realizing the proper way to implement equality for _OperationBase.
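The hierarchy described above can be sketched in plain Python. The class names follow the patch, but the bodies here are illustrative stand-ins (including the hypothetical `_OP_CLASSES` registry and `op_view_for` helper), not the actual binding code:

```python
class _OperationBase:
    """Shared user-facing API for Operation and OpView."""

    @property
    def operation(self):
        """Resolve to the backing Operation."""
        raise NotImplementedError

    def __eq__(self, other):
        # Compare via the backing Operation so that an Operation and an
        # OpView wrapping it compare equal.
        if not isinstance(other, _OperationBase):
            return NotImplemented
        return self.operation is other.operation


class Operation(_OperationBase):
    def __init__(self, name):
        self.name = name

    @property
    def operation(self):
        return self


class OpView(_OperationBase):
    """Generic wrapper returned when no custom op class is registered."""

    def __init__(self, operation):
        self._operation = operation

    @property
    def operation(self):
        return self._operation


# Hypothetical registry mapping op names to custom OpView subclasses.
_OP_CLASSES = {}


def op_view_for(operation):
    """Wrap an Operation in its registered OpView subclass, if any."""
    cls = _OP_CLASSES.get(operation.name, OpView)
    return cls(operation)


class AddFOp(OpView):
    """Stand-in for mlir.dialects.std.AddFOp with a custom API."""

    def custom_api(self):
        return "custom behavior for " + self.operation.name


_OP_CLASSES["std.addf"] = AddFOp
```

With this shape, `op_view_for(Operation("std.addf"))` yields an `AddFOp`, an op with no registered class falls back to the plain `OpView`, and equality works across both sides of the hierarchy.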
Repository: rG LLVM Github Monorepo
mlir/lib/Bindings/Python/Globals.h:55
> Should we also change the other const std::string & parameters in the class, for consistency? Is there a trade-off between llvm::StringRef and const std::string &?
mlir/lib/Bindings/Python/Globals.h:55
> I should take another look through them: I placed them originally so as to have fewer copies between the places that needed strings and those that didn't. In general, StringRef is preferred where possible, but in a mixed codebase like pybind11, where std::string is used for many things, it can result in extra copies. The real answer is to create a type caster for StringRef and SmallVector and then switch everything to them. I looked into that briefly, and it was a bigger change than any one patch; it just needs to be done. If you're interested in giving it a try, go ahead — it will be a while before I get to it.
mlir/lib/Bindings/Python/IRModules.cpp:1043
> What kind of extra caching do you have in mind? It seems to me that we're down to a string hash map lookup here.
mlir/lib/Bindings/Python/MainModule.cpp:123
> Isn't the negative cache preventing someone from dynamically loading dialects? Or is the expectation that we flush the cache on dialect load? Looking closer, I think the negative cache would prevent someone from adding a path through append_dialect_search_prefix after the negative cache has been populated.
mlir/lib/Bindings/Python/Globals.h:55
> Looking forward to the type caster; I suspect its scope of influence will be quite large.
Comments and rebase.
mlir/lib/Bindings/Python/Globals.h:55
> Feel free to give it a try if you have the time. The caster for SmallVector is likely easier than StringRef, fwiw.
mlir/lib/Bindings/Python/IRModules.cpp:1043
> I was more concerned with the cost of the temporaries and construction overhead. There is likely something clever to avoid that, but for now I've removed the TODO.
mlir/lib/Bindings/Python/MainModule.cpp:123
> Yep, totally right — thanks for spotting this. I've reworked this to do proper caching and invalidation on appending to the search path. This caching could get in the way of various (nefarious) monkey patching, but I'm fine with that. Also added some GIL acquires: since these are re-entrant, it's better to avoid the races.
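The caching-and-invalidation scheme described in this reply can be sketched in Python. The `append_dialect_search_prefix` name follows the discussion; `DialectRegistry`, the dict-based cache, and the lookup logic are simplified stand-ins for illustration, not the binding code itself:

```python
import importlib


class DialectRegistry:
    def __init__(self):
        self._search_prefixes = []
        # Lookup cache keyed by dialect name. A value of None is a
        # negative-cache entry: "we already searched and found nothing".
        self._cache = {}

    def append_dialect_search_prefix(self, prefix):
        self._search_prefixes.append(prefix)
        # Invalidate only the negative entries: a new prefix can turn a
        # previous miss into a hit, but modules that already resolved
        # stay valid. Without this flush, a miss recorded before the
        # append would shadow the newly reachable module.
        self._cache = {k: v for k, v in self._cache.items() if v is not None}

    def lookup(self, name):
        if name in self._cache:
            return self._cache[name]
        for prefix in self._search_prefixes:
            module = self._try_import(prefix + "." + name)
            if module is not None:
                self._cache[name] = module
                return module
        self._cache[name] = None  # negative-cache the miss
        return None

    @staticmethod
    def _try_import(module_name):
        try:
            return importlib.import_module(module_name)
        except ImportError:
            return None
```

Flushing only the `None` entries on append is what fixes the problem the reviewer spotted: a miss recorded before `append_dialect_search_prefix` no longer blocks a module that the new prefix makes reachable.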