This is an archive of the discontinued LLVM Phabricator instance.

[mlir][CAPI] Expose the rest of MLIRContext's constructors
ClosedPublic

Authored by krzysz00 on Jun 22 2023, 3:53 PM.

Details

Summary

It's recommended practice that people calling MLIR in a loop
pre-create an LLVM ThreadPool and a dialect registry and then
explicitly pass both into an MLIRContext for each compilation.
However, the C API does not expose the functions needed to follow this
recommendation from a project that isn't calling MLIR's C++ directly.

Add the necessary APIs to mlir-c, including a wrapper around LLVM's
ThreadPool struct (so as to avoid having to amend or re-export parts
of the LLVM API).
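
A minimal usage sketch in C, assuming the entry points added here (MlirLlvmThreadPool, mlirLlvmThreadPoolCreate/Destroy, mlirContextCreateWithThreadPool); check mlir-c/Support.h and mlir-c/IR.h for the exact names and signatures as landed:

```c
#include "mlir-c/IR.h"
#include "mlir-c/Support.h"

void compileManyKernels(void) {
  // Create the thread pool and dialect registry once, up front.
  MlirLlvmThreadPool pool = mlirLlvmThreadPoolCreate();
  MlirDialectRegistry registry = mlirDialectRegistryCreate();
  // ... append the needed dialects to the registry ...

  for (int i = 0; i < 4; ++i) {
    // Each compilation gets a fresh context, but all of them share the
    // externally owned thread pool instead of allocating their own.
    MlirContext ctx = mlirContextCreateWithThreadPool(registry, pool);
    // ... parse the module and run the pass pipeline here ...
    mlirContextDestroy(ctx);
  }

  mlirDialectRegistryDestroy(registry);
  mlirLlvmThreadPoolDestroy(pool);
}
```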

Diff Detail

Event Timeline

krzysz00 created this revision.Jun 22 2023, 3:53 PM
Herald added a project: Restricted Project. · View Herald TranscriptJun 22 2023, 3:53 PM
krzysz00 requested review of this revision.Jun 22 2023, 3:53 PM

Not sure about the connection to DialectRegistry, but can't you just call mlirContextEnableMultithreading, which will do impl->ownedThreadPool = std::make_unique<llvm::ThreadPool>(); (and thus similarly create a hardware_concurrency() thread pool)?

https://github.com/llvm/llvm-project/blob/9566ee280607d91fa2e5eca730a6765ac84dfd0f/mlir/lib/CAPI/IR/IR.cpp#L79

Not that I'm opposed to this change or anything but just double-checking my own understanding.

@makslevental The purpose of these constructors is to specifically not create that owned thread pool.

The intended usage can be seen in https://github.com/ROCmSoftwarePlatform/AMDMIGraphX/blob/develop/src/targets/gpu/mlir.cpp#L178-L186 , which is a class whose instances will be run in parallel. We don't want to create one thread pool for each kernel compilation, but instead share a thread pool across all those instances.
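
For contrast, a short sketch of the pre-existing path referenced in the question above (assuming the stock mlirContextCreate / mlirContextEnableMultithreading entry points), which leaves each context with its own internally owned thread pool rather than sharing one:

```c
// Default path: this context owns its internal llvm::ThreadPool, and
// re-enabling multithreading re-creates that owned pool, so nothing is
// shared across separate kernel compilations.
MlirContext ctx = mlirContextCreate();
mlirContextEnableMultithreading(ctx, true);
// ... compile one kernel ...
mlirContextDestroy(ctx);
```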

makslevental accepted this revision.Jul 6 2023, 3:46 PM

gotcha - not sure why it wasn't clear from the beginning - thanks for spelling it out for me.

This revision is now accepted and ready to land.Jul 6 2023, 3:46 PM