Index: llvm/docs/tutorial/BuildingAJIT3.rst =================================================================== --- llvm/docs/tutorial/BuildingAJIT3.rst +++ llvm/docs/tutorial/BuildingAJIT3.rst @@ -5,65 +5,60 @@ .. contents:: :local: -**This tutorial is under active development. It is incomplete and details may -change frequently.** Nonetheless we invite you to try it out as it stands, and -we welcome any feedback. - Chapter 3 Introduction ====================== -**Warning: This text is currently out of date due to ORC API updates.** - -**The example code has been updated and can be used. The text will be updated -once the API churn dies down.** - Welcome to Chapter 3 of the "Building an ORC-based JIT in LLVM" tutorial. This chapter discusses lazy JITing and shows you how to enable it by adding an ORC -CompileOnDemand layer the JIT from `Chapter 2 `_. +CompileOnDemand layer on top of the JIT from `Chapter 2 `_. Lazy Compilation ================ When we add a module to the KaleidoscopeJIT class from Chapter 2 it is immediately optimized, compiled and linked for us by the IRTransformLayer, -IRCompileLayer and RTDyldObjectLinkingLayer respectively. This scheme, where all the -work to make a Module executable is done up front, is simple to understand and -its performance characteristics are easy to reason about. However, it will lead -to very high startup times if the amount of code to be compiled is large, and -may also do a lot of unnecessary compilation if only a few compiled functions -are ever called at runtime. A truly "just-in-time" compiler should allow us to +IRCompileLayer and RTDyldObjectLinkingLayer respectively when a Symbol from the +Materialization Unit is looked up. This scheme, where all the work to make a Module +executable is done up front, is simple to understand and its performance characteristics +are easy to reason about.
However, it will lead to very high startup times if the amount +of code to be compiled is large, and may also do a lot of unnecessary compilation if +only a few compiled functions are ever called at runtime. + +A truly "just-in-time" compiler should allow us to defer the compilation of any given function until the moment that function is first called, improving launch times and eliminating redundant work. In fact, the ORC APIs provide us with a layer to lazily compile LLVM IR: *CompileOnDemandLayer*. -The CompileOnDemandLayer class conforms to the layer interface described in -Chapter 2, but its addModule method behaves quite differently from the layers -we have seen so far: rather than doing any work up front, it just scans the -Modules being added and arranges for each function in them to be compiled the -first time it is called. To do this, the CompileOnDemandLayer creates two small -utilities for each function that it scans: a *stub* and a *compile -callback*. The stub is a pair of a function pointer (which will be pointed at -the function's implementation once the function has been compiled) and an -indirect jump through the pointer. By fixing the address of the indirect jump -for the lifetime of the program we can give the function a permanent "effective -address", one that can be safely used for indirection and function pointer -comparison even if the function's implementation is never compiled, or if it is -compiled more than once (due to, for example, recompiling the function at a -higher optimization level) and changes address. The second utility, the compile -callback, represents a re-entry point from the program into the compiler that -will trigger compilation and then execution of a function. By initializing the -function's stub to point at the function's compile callback, we enable lazy -compilation: The first attempted call to the function will follow the function -pointer and trigger the compile callback instead. 
The compile callback will -compile the function, update the function pointer for the stub, then execute -the function. On all subsequent calls to the function, the function pointer -will point at the already-compiled function, so there is no further overhead -from the compiler. We will look at this process in more detail in the next -chapter of this tutorial, but for now we'll trust the CompileOnDemandLayer to -set all the stubs and callbacks up for us. All we need to do is to add the -CompileOnDemandLayer to the top of our stack and we'll get the benefits of -lazy compilation. We just need a few changes to the source: +The CompileOnDemandLayer class conforms to the Layer interface described in Chapter 2. +When we look up symbols through the ExecutionSession, the associated materialize method creates stub definitions +in memory and returns immediately, without compiling anything. When a stub is called at runtime it re-enters +the JIT and triggers the actual compilation of the function. Compiling a function added +to the CompileOnDemandLayer therefore requires two lookups: 1) create a stub, and 2) trigger compilation. + +On the first lookup of a symbol in a MaterializationUnit the CompileOnDemandLayer's emit method is called. +It partitions the symbols into Callable and NonCallable sets, creates a new JITDylib whose name carries the ".impl" suffix, +wraps the module in a PartitioningIRMaterializationUnit and moves it to the ".impl" JITDylib. +It then creates two facade materialization units, a ReexportsMaterializationUnit for the NonCallable symbols +and a LazyReexportsMaterializationUnit for the Callable symbols, to re-export data symbols and **lazily** re-export +function symbols, respectively, from the ".impl" JITDylib.
These materialization units steal the responsibility +for materializing the symbols they define from the previously added materialization unit (the one added via the CompileOnDemandLayer::add method). + +Whenever a symbol is looked up, the ExecutionSession finds which materialization unit is responsible for materializing +that symbol and triggers the corresponding materialize method. + +The LazyReexportsMaterializationUnit::materialize method creates stub definitions for the requested symbols. Whenever a stub is called at runtime, ORC +finds out which symbol definition the stub exports, finds the corresponding materialization unit (in this case, the PartitioningIRMaterializationUnit) +and calls its materialize method; ORC then packs the requested function symbols into a new module and moves it to the +layer below. + +When ORC sees a call instruction in a function body it does not eagerly compile the body of the called function; instead it creates a stub +(a proxy for the actual definition) for that call instruction. To facilitate this, +ORC uses two utilities: 1) the IndirectStubsManager and 2) the LazyCallThroughManager. +The IndirectStubsManager creates a stub for each symbol the first time it is looked up. +A stub contains a lazy call-through trampoline: trampolines are small pieces of machine code (executable memory) that re-enter +the JIT at runtime to compile the body of a function. Each lazy call-through trampoline is bound to the symbol it needs to compile. +The LazyCallThroughManager maintains the pool of trampolines, initializes them, and sets their memory permissions. .. code-block:: c++ @@ -71,104 +66,77 @@ #include "llvm/ExecutionEngine/SectionMemoryManager.h" #include "llvm/ExecutionEngine/Orc/CompileOnDemandLayer.h" #include "llvm/ExecutionEngine/Orc/CompileUtils.h" + #include "llvm/ExecutionEngine/Orc/LazyReexports.h" + #include "llvm/ExecutionEngine/Orc/IndirectionUtils.h" ... ...
class KaleidoscopeJIT { private: - std::unique_ptr TM; - const DataLayout DL; + ExecutionSession ES; + std::unique_ptr LCTM; RTDyldObjectLinkingLayer ObjectLayer; - IRCompileLayer CompileLayer; - - using OptimizeFunction = - std::function(std::shared_ptr)>; - - IRTransformLayer OptimizeLayer; - - std::unique_ptr CompileCallbackManager; - CompileOnDemandLayer CODLayer; - - public: - using ModuleHandle = decltype(CODLayer)::ModuleHandleT; + IRCompileLayer CompileLayer; + IRTransformLayer OptimizeIRLayer; + CompileOnDemandLayer CODLayer; + DataLayout DL; + MangleAndInterner Mangle; + ThreadSafeContext Ctx; First we need to include the CompileOnDemandLayer.h header, then add two new -members: a std::unique_ptr and a CompileOnDemandLayer, -to our class. The CompileCallbackManager member is used by the CompileOnDemandLayer -to create the compile callback needed for each function. +members: a std::unique_ptr and a CompileOnDemandLayer, +to our class. The LazyCallThroughManager member is used by the CompileOnDemandLayer +to create the call-through trampoline needed for each function. ..
code-block:: c++ - KaleidoscopeJIT() - : TM(EngineBuilder().selectTarget()), DL(TM->createDataLayout()), - ObjectLayer([]() { return std::make_shared(); }), - CompileLayer(ObjectLayer, SimpleCompiler(*TM)), - OptimizeLayer(CompileLayer, - [this](std::shared_ptr M) { - return optimizeModule(std::move(M)); - }), - CompileCallbackManager( - orc::createLocalCompileCallbackManager(TM->getTargetTriple(), 0)), - CODLayer(OptimizeLayer, - [this](Function &F) { return std::set({&F}); }, - *CompileCallbackManager, - orc::createLocalIndirectStubsManagerBuilder( - TM->getTargetTriple())) { - llvm::sys::DynamicLibrary::LoadLibraryPermanently(nullptr); + KaleidoscopeJIT(JITTargetMachineBuilder JTMB, DataLayout DL, Triple T) + : LCTM(cantFail(createLocalLazyCallThroughManager(JTMB.getTargetTriple(), + this->ES, 0))), + ObjectLayer(ES, + []() { return llvm::make_unique(); }), + CompileLayer(ES, ObjectLayer, ConcurrentIRCompiler(std::move(JTMB))), + OptimizeIRLayer(ES, CompileLayer, optimizeModule), + CODLayer(this->ES, OptimizeIRLayer, *LCTM, + createLocalIndirectStubsManagerBuilder(T)), + DL(std::move(DL)), Mangle(ES, this->DL), + Ctx(llvm::make_unique()) { + ES.getMainJITDylib().setGenerator( + cantFail(DynamicLibrarySearchGenerator::GetForCurrentProcess( + DL.getGlobalPrefix()))); } Next we have to update our constructor to initialize the new members. To create -an appropriate compile callback manager we use the -createLocalCompileCallbackManager function, which takes a TargetMachine and a -JITTargetAddress to call if it receives a request to compile an unknown -function. In our simple JIT this situation is unlikely to come up, so we'll -cheat and just pass '0' here. In a production quality JIT you could give the -address of a function that throws an exception in order to unwind the JIT'd -code's stack. +an appropriate LazyCallThroughManager we use the +createLocalLazyCallThroughManager function, and we use createLocalIndirectStubsManagerBuilder +to create the indirect stubs manager builder.
The "Local" prefix indicates that we are JITing the code to run +within the same process. Now we can construct our CompileOnDemandLayer. Following the pattern from previous layers we start by passing a reference to the next layer down in our -stack -- the OptimizeLayer. Next we need to supply a 'partitioning function': +stack -- the OptimizeIRLayer. Next we pass a reference to +our LazyCallThroughManager. Finally, we need to supply an "indirect stubs +manager builder": a utility function that constructs IndirectStubManagers, which +are in turn used to build the stubs for the functions in each module. The +CompileOnDemandLayer will call the indirect stub manager builder once for each +call to addModule, and use the resulting indirect stubs manager to create +stubs for all functions in all modules in the set. + +We can also set a 'partitioning function' on the CompileOnDemandLayer: when a not-yet-compiled function is called, the CompileOnDemandLayer will call this function to ask us what we would like to compile. At a minimum we need to compile the function being called (given by the argument to the partitioning function), but we could also request that the CompileOnDemandLayer compile other functions that are unconditionally called (or highly likely to be called) from the function being called. For KaleidoscopeJIT we'll keep it simple and just -request compilation of the function that was called. Next we pass a reference to -our CompileCallbackManager. Finally, we need to supply an "indirect stubs -manager builder": a utility function that constructs IndirectStubManagers, which -are in turn used to build the stubs for the functions in each module. The -CompileOnDemandLayer will call the indirect stub manager builder once for each -call to addModule, and use the resulting indirect stubs manager to create -stubs for all functions in all modules in the set.
If/when the module set is -removed from the JIT the indirect stubs manager will be deleted, freeing any -memory allocated to the stubs. We supply this function by using the -createLocalIndirectStubsManagerBuilder utility. +request compilation of the function that was called. -.. code-block:: c++ +We can set this via the CompileOnDemandLayer::setPartitionFunction method. - // ... - if (auto Sym = CODLayer.findSymbol(Name, false)) - // ... - return cantFail(CODLayer.addModule(std::move(Ms), - std::move(Resolver))); - // ... - - // ... - return CODLayer.findSymbol(MangledNameStream.str(), true); - // ... - - // ... - CODLayer.removeModule(H); - // ... - -Finally, we need to replace the references to OptimizeLayer in our addModule, -findSymbol, and removeModule methods. With that, we're up and running. - -**To be done:** +.. code-block:: c++ + using PartitionFunction = + std::function(GlobalValueSet Requested)>; -** Chapter conclusion.** + void setPartitionFunction(PartitionFunction Partition); Full Code Listing ================= Index: llvm/examples/Kaleidoscope/BuildingAJIT/Chapter3/KaleidoscopeJIT.h =================================================================== --- llvm/examples/Kaleidoscope/BuildingAJIT/Chapter3/KaleidoscopeJIT.h +++ llvm/examples/Kaleidoscope/BuildingAJIT/Chapter3/KaleidoscopeJIT.h @@ -13,33 +13,26 @@ #ifndef LLVM_EXECUTIONENGINE_ORC_KALEIDOSCOPEJIT_H #define LLVM_EXECUTIONENGINE_ORC_KALEIDOSCOPEJIT_H -#include "llvm/ADT/STLExtras.h" -#include "llvm/ExecutionEngine/ExecutionEngine.h" +#include "llvm/ADT/StringRef.h" #include "llvm/ExecutionEngine/JITSymbol.h" #include "llvm/ExecutionEngine/Orc/CompileOnDemandLayer.h" #include "llvm/ExecutionEngine/Orc/CompileUtils.h" +#include "llvm/ExecutionEngine/Orc/Core.h" +#include "llvm/ExecutionEngine/Orc/ExecutionUtils.h" #include "llvm/ExecutionEngine/Orc/IRCompileLayer.h" #include "llvm/ExecutionEngine/Orc/IRTransformLayer.h" -#include "llvm/ExecutionEngine/Orc/LambdaResolver.h" +#include
"llvm/ExecutionEngine/Orc/IndirectionUtils.h" +#include "llvm/ExecutionEngine/Orc/JITTargetMachineBuilder.h" +#include "llvm/ExecutionEngine/Orc/LazyReexports.h" #include "llvm/ExecutionEngine/Orc/RTDyldObjectLinkingLayer.h" -#include "llvm/ExecutionEngine/RTDyldMemoryManager.h" -#include "llvm/ExecutionEngine/RuntimeDyld.h" #include "llvm/ExecutionEngine/SectionMemoryManager.h" #include "llvm/IR/DataLayout.h" +#include "llvm/IR/LLVMContext.h" #include "llvm/IR/LegacyPassManager.h" -#include "llvm/IR/Mangler.h" -#include "llvm/Support/DynamicLibrary.h" -#include "llvm/Support/raw_ostream.h" -#include "llvm/Target/TargetMachine.h" #include "llvm/Transforms/InstCombine/InstCombine.h" #include "llvm/Transforms/Scalar.h" #include "llvm/Transforms/Scalar/GVN.h" -#include -#include #include -#include -#include -#include namespace llvm { namespace orc { @@ -47,89 +40,67 @@ class KaleidoscopeJIT { private: ExecutionSession ES; - std::map> Resolvers; - std::unique_ptr TM; - const DataLayout DL; - LegacyRTDyldObjectLinkingLayer ObjectLayer; - LegacyIRCompileLayer CompileLayer; - - using OptimizeFunction = - std::function(std::unique_ptr)>; - - LegacyIRTransformLayer OptimizeLayer; - - std::unique_ptr CompileCallbackManager; - LegacyCompileOnDemandLayer CODLayer; + std::unique_ptr LCTM; + RTDyldObjectLinkingLayer ObjectLayer; + IRCompileLayer CompileLayer; + IRTransformLayer OptimizeIRLayer; + CompileOnDemandLayer CODLayer; + DataLayout DL; + MangleAndInterner Mangle; + ThreadSafeContext Ctx; public: - KaleidoscopeJIT() - : TM(EngineBuilder().selectTarget()), DL(TM->createDataLayout()), + KaleidoscopeJIT(JITTargetMachineBuilder JTMB, DataLayout DL, Triple T) + : LCTM(cantFail(createLocalLazyCallThroughManager(JTMB.getTargetTriple(), + this->ES, 0))), ObjectLayer(ES, - [this](VModuleKey K) { - return LegacyRTDyldObjectLinkingLayer::Resources{ - std::make_shared(), - Resolvers[K]}; - }), - CompileLayer(ObjectLayer, SimpleCompiler(*TM)), - OptimizeLayer(CompileLayer, - 
[this](std::unique_ptr M) { - return optimizeModule(std::move(M)); - }), - CompileCallbackManager(cantFail(orc::createLocalCompileCallbackManager( - TM->getTargetTriple(), ES, 0))), - CODLayer(ES, OptimizeLayer, - [&](orc::VModuleKey K) { return Resolvers[K]; }, - [&](orc::VModuleKey K, std::shared_ptr R) { - Resolvers[K] = std::move(R); - }, - [](Function &F) { return std::set({&F}); }, - *CompileCallbackManager, - orc::createLocalIndirectStubsManagerBuilder( - TM->getTargetTriple())) { - llvm::sys::DynamicLibrary::LoadLibraryPermanently(nullptr); + []() { return llvm::make_unique(); }), + CompileLayer(ES, ObjectLayer, ConcurrentIRCompiler(std::move(JTMB))), + OptimizeIRLayer(ES,CompileLayer,optimizeModule), + CODLayer(this->ES, OptimizeIRLayer, *LCTM, + createLocalIndirectStubsManagerBuilder(T)), + DL(std::move(DL)), Mangle(ES, this->DL), + Ctx(llvm::make_unique()) { + ES.getMainJITDylib().setGenerator( + cantFail(DynamicLibrarySearchGenerator::GetForCurrentProcess( + DL.getGlobalPrefix()))); } - TargetMachine &getTargetMachine() { return *TM; } - - VModuleKey addModule(std::unique_ptr M) { - // Create a new VModuleKey. - VModuleKey K = ES.allocateVModule(); - - // Build a resolver and associate it with the new key. - Resolvers[K] = createLegacyLookupResolver( - ES, - [this](const std::string &Name) -> JITSymbol { - if (auto Sym = CompileLayer.findSymbol(Name, false)) - return Sym; - else if (auto Err = Sym.takeError()) - return std::move(Err); - if (auto SymAddr = - RTDyldMemoryManager::getSymbolAddressInProcess(Name)) - return JITSymbol(SymAddr, JITSymbolFlags::Exported); - return nullptr; - }, - [](Error Err) { cantFail(std::move(Err), "lookupFlags failed"); }); - - // Add the module to the JIT with the new key. 
- cantFail(CODLayer.addModule(K, std::move(M))); - return K; + static Expected> Create() { + auto JTMB = JITTargetMachineBuilder::detectHost(); + + if (!JTMB) + return JTMB.takeError(); + + auto DL = JTMB->getDefaultDataLayoutForTarget(); + if (!DL) + return DL.takeError(); + + auto T = JTMB->getTargetTriple(); + return llvm::make_unique(std::move(*JTMB), std::move(*DL), + std::move(T)); } - JITSymbol findSymbol(const std::string Name) { - std::string MangledName; - raw_string_ostream MangledNameStream(MangledName); - Mangler::getNameWithPrefix(MangledNameStream, Name, DL); - return CODLayer.findSymbol(MangledNameStream.str(), true); + const DataLayout &getDataLayout() const { return DL; } + + LLVMContext &getContext() { return *Ctx.getContext(); } + + Error addModule(std::unique_ptr M) { + return CODLayer.add(ES.getMainJITDylib(), + ThreadSafeModule(std::move(M), Ctx)); } - void removeModule(VModuleKey K) { - cantFail(CODLayer.removeModule(K)); + Expected lookup(StringRef Name) { + return ES.lookup({&ES.getMainJITDylib()}, Mangle(Name.str())); } -private: - std::unique_ptr optimizeModule(std::unique_ptr M) { - // Create a function pass manager. - auto FPM = llvm::make_unique(M.get()); + void dumpState() { ES.dump(llvm::errs()); } + + private: + static Expected + optimizeModule(ThreadSafeModule TSM, const MaterializationResponsibility &R) { + // Create a Legacy function pass manager. + auto FPM = llvm::make_unique(TSM.getModule()); // Add some optimizations. FPM->add(createInstructionCombiningPass()); @@ -138,12 +109,11 @@ FPM->add(createCFGSimplificationPass()); FPM->doInitialization(); - // Run the optimizations over all functions in the module being added to - // the JIT. 
- for (auto &F : *M) + // Run the optimizations over functions that are packed up by COD and added to OptimizeIRLayer + for (auto &F : *TSM.getModule()) FPM->run(F); - - return M; + llvm::errs() << *TSM.getModule(); + return TSM; } }; Index: llvm/examples/Kaleidoscope/BuildingAJIT/Chapter3/toy.cpp =================================================================== --- llvm/examples/Kaleidoscope/BuildingAJIT/Chapter3/toy.cpp +++ llvm/examples/Kaleidoscope/BuildingAJIT/Chapter3/toy.cpp @@ -676,11 +676,11 @@ } /// toplevelexpr ::= expression -static std::unique_ptr ParseTopLevelExpr() { +static std::unique_ptr ParseTopLevelExpr(unsigned ExprCount) { if (auto E = ParseExpression()) { // Make an anonymous proto. - auto Proto = llvm::make_unique("__anon_expr", - std::vector()); + auto Proto = llvm::make_unique( + ("__anon_expr" + Twine(ExprCount)).str(), std::vector()); return llvm::make_unique(std::move(Proto), std::move(E)); } return nullptr; @@ -696,12 +696,13 @@ // Code Generation //===----------------------------------------------------------------------===// -static LLVMContext TheContext; -static IRBuilder<> Builder(TheContext); +static std::unique_ptr TheJIT; +static LLVMContext *TheContext; +static std::unique_ptr> Builder; static std::unique_ptr TheModule; static std::map NamedValues; -static std::unique_ptr TheJIT; static std::map> FunctionProtos; +static ExitOnError ExitOnErr; Value *LogErrorV(const char *Str) { LogError(Str); @@ -729,11 +730,11 @@ const std::string &VarName) { IRBuilder<> TmpB(&TheFunction->getEntryBlock(), TheFunction->getEntryBlock().begin()); - return TmpB.CreateAlloca(Type::getDoubleTy(TheContext), nullptr, VarName); + return TmpB.CreateAlloca(Type::getDoubleTy(*TheContext), nullptr, VarName); } Value *NumberExprAST::codegen() { - return ConstantFP::get(TheContext, APFloat(Val)); + return ConstantFP::get(*TheContext, APFloat(Val)); } Value *VariableExprAST::codegen() { @@ -743,7 +744,7 @@ return LogErrorV("Unknown variable name"); // 
Load the value. - return Builder.CreateLoad(V, Name.c_str()); + return Builder->CreateLoad(V, Name.c_str()); } Value *UnaryExprAST::codegen() { @@ -755,7 +756,7 @@ if (!F) return LogErrorV("Unknown unary operator"); - return Builder.CreateCall(F, OperandV, "unop"); + return Builder->CreateCall(F, OperandV, "unop"); } Value *BinaryExprAST::codegen() { @@ -778,7 +779,7 @@ if (!Variable) return LogErrorV("Unknown variable name"); - Builder.CreateStore(Val, Variable); + Builder->CreateStore(Val, Variable); return Val; } @@ -789,15 +790,15 @@ switch (Op) { case '+': - return Builder.CreateFAdd(L, R, "addtmp"); + return Builder->CreateFAdd(L, R, "addtmp"); case '-': - return Builder.CreateFSub(L, R, "subtmp"); + return Builder->CreateFSub(L, R, "subtmp"); case '*': - return Builder.CreateFMul(L, R, "multmp"); + return Builder->CreateFMul(L, R, "multmp"); case '<': - L = Builder.CreateFCmpULT(L, R, "cmptmp"); + L = Builder->CreateFCmpULT(L, R, "cmptmp"); // Convert bool 0/1 to double 0.0 or 1.0 - return Builder.CreateUIToFP(L, Type::getDoubleTy(TheContext), "booltmp"); + return Builder->CreateUIToFP(L, Type::getDoubleTy(*TheContext), "booltmp"); default: break; } @@ -808,7 +809,7 @@ assert(F && "binary operator not found!"); Value *Ops[] = {L, R}; - return Builder.CreateCall(F, Ops, "binop"); + return Builder->CreateCall(F, Ops, "binop"); } Value *CallExprAST::codegen() { @@ -828,7 +829,7 @@ return nullptr; } - return Builder.CreateCall(CalleeF, ArgsV, "calltmp"); + return Builder->CreateCall(CalleeF, ArgsV, "calltmp"); } Value *IfExprAST::codegen() { @@ -837,46 +838,46 @@ return nullptr; // Convert condition to a bool by comparing equal to 0.0. 
- CondV = Builder.CreateFCmpONE( - CondV, ConstantFP::get(TheContext, APFloat(0.0)), "ifcond"); + CondV = Builder->CreateFCmpONE( + CondV, ConstantFP::get(*TheContext, APFloat(0.0)), "ifcond"); - Function *TheFunction = Builder.GetInsertBlock()->getParent(); + Function *TheFunction = Builder->GetInsertBlock()->getParent(); // Create blocks for the then and else cases. Insert the 'then' block at the // end of the function. - BasicBlock *ThenBB = BasicBlock::Create(TheContext, "then", TheFunction); - BasicBlock *ElseBB = BasicBlock::Create(TheContext, "else"); - BasicBlock *MergeBB = BasicBlock::Create(TheContext, "ifcont"); + BasicBlock *ThenBB = BasicBlock::Create(*TheContext, "then", TheFunction); + BasicBlock *ElseBB = BasicBlock::Create(*TheContext, "else"); + BasicBlock *MergeBB = BasicBlock::Create(*TheContext, "ifcont"); - Builder.CreateCondBr(CondV, ThenBB, ElseBB); + Builder->CreateCondBr(CondV, ThenBB, ElseBB); // Emit then value. - Builder.SetInsertPoint(ThenBB); + Builder->SetInsertPoint(ThenBB); Value *ThenV = Then->codegen(); if (!ThenV) return nullptr; - Builder.CreateBr(MergeBB); + Builder->CreateBr(MergeBB); // Codegen of 'Then' can change the current block, update ThenBB for the PHI. - ThenBB = Builder.GetInsertBlock(); + ThenBB = Builder->GetInsertBlock(); // Emit else block. TheFunction->getBasicBlockList().push_back(ElseBB); - Builder.SetInsertPoint(ElseBB); + Builder->SetInsertPoint(ElseBB); Value *ElseV = Else->codegen(); if (!ElseV) return nullptr; - Builder.CreateBr(MergeBB); + Builder->CreateBr(MergeBB); // Codegen of 'Else' can change the current block, update ElseBB for the PHI. - ElseBB = Builder.GetInsertBlock(); + ElseBB = Builder->GetInsertBlock(); // Emit merge block. 
TheFunction->getBasicBlockList().push_back(MergeBB); - Builder.SetInsertPoint(MergeBB); - PHINode *PN = Builder.CreatePHI(Type::getDoubleTy(TheContext), 2, "iftmp"); + Builder->SetInsertPoint(MergeBB); + PHINode *PN = Builder->CreatePHI(Type::getDoubleTy(*TheContext), 2, "iftmp"); PN->addIncoming(ThenV, ThenBB); PN->addIncoming(ElseV, ElseBB); @@ -903,7 +904,7 @@ // br endcond, loop, endloop // outloop: Value *ForExprAST::codegen() { - Function *TheFunction = Builder.GetInsertBlock()->getParent(); + Function *TheFunction = Builder->GetInsertBlock()->getParent(); // Create an alloca for the variable in the entry block. AllocaInst *Alloca = CreateEntryBlockAlloca(TheFunction, VarName); @@ -914,17 +915,17 @@ return nullptr; // Store the value into the alloca. - Builder.CreateStore(StartVal, Alloca); + Builder->CreateStore(StartVal, Alloca); // Make the new basic block for the loop header, inserting after current // block. - BasicBlock *LoopBB = BasicBlock::Create(TheContext, "loop", TheFunction); + BasicBlock *LoopBB = BasicBlock::Create(*TheContext, "loop", TheFunction); // Insert an explicit fall through from the current block to the LoopBB. - Builder.CreateBr(LoopBB); + Builder->CreateBr(LoopBB); // Start insertion in LoopBB. - Builder.SetInsertPoint(LoopBB); + Builder->SetInsertPoint(LoopBB); // Within the loop, the variable is defined equal to the PHI node. If it // shadows an existing variable, we have to restore it, so save it now. @@ -945,7 +946,7 @@ return nullptr; } else { // If not specified, use 1.0. - StepVal = ConstantFP::get(TheContext, APFloat(1.0)); + StepVal = ConstantFP::get(*TheContext, APFloat(1.0)); } // Compute the end condition. @@ -955,23 +956,23 @@ // Reload, increment, and restore the alloca. This handles the case where // the body of the loop mutates the variable. 
- Value *CurVar = Builder.CreateLoad(Alloca, VarName.c_str()); - Value *NextVar = Builder.CreateFAdd(CurVar, StepVal, "nextvar"); - Builder.CreateStore(NextVar, Alloca); + Value *CurVar = Builder->CreateLoad(Alloca, VarName.c_str()); + Value *NextVar = Builder->CreateFAdd(CurVar, StepVal, "nextvar"); + Builder->CreateStore(NextVar, Alloca); // Convert condition to a bool by comparing equal to 0.0. - EndCond = Builder.CreateFCmpONE( - EndCond, ConstantFP::get(TheContext, APFloat(0.0)), "loopcond"); + EndCond = Builder->CreateFCmpONE( + EndCond, ConstantFP::get(*TheContext, APFloat(0.0)), "loopcond"); // Create the "after loop" block and insert it. BasicBlock *AfterBB = - BasicBlock::Create(TheContext, "afterloop", TheFunction); + BasicBlock::Create(*TheContext, "afterloop", TheFunction); // Insert the conditional branch into the end of LoopEndBB. - Builder.CreateCondBr(EndCond, LoopBB, AfterBB); + Builder->CreateCondBr(EndCond, LoopBB, AfterBB); // Any new code will be inserted in AfterBB. - Builder.SetInsertPoint(AfterBB); + Builder->SetInsertPoint(AfterBB); // Restore the unshadowed variable. if (OldVal) @@ -980,13 +981,13 @@ NamedValues.erase(VarName); // for expr always returns 0.0. - return Constant::getNullValue(Type::getDoubleTy(TheContext)); + return Constant::getNullValue(Type::getDoubleTy(*TheContext)); } Value *VarExprAST::codegen() { std::vector OldBindings; - Function *TheFunction = Builder.GetInsertBlock()->getParent(); + Function *TheFunction = Builder->GetInsertBlock()->getParent(); // Register all variables and emit their initializer. for (unsigned i = 0, e = VarNames.size(); i != e; ++i) { @@ -1004,11 +1005,11 @@ if (!InitVal) return nullptr; } else { // If not specified, use 0.0. 
- InitVal = ConstantFP::get(TheContext, APFloat(0.0)); + InitVal = ConstantFP::get(*TheContext, APFloat(0.0)); } AllocaInst *Alloca = CreateEntryBlockAlloca(TheFunction, VarName); - Builder.CreateStore(InitVal, Alloca); + Builder->CreateStore(InitVal, Alloca); // Remember the old variable binding so that we can restore the binding when // we unrecurse. @@ -1033,9 +1034,9 @@ Function *PrototypeAST::codegen() { // Make the function type: double(double,double) etc. - std::vector Doubles(Args.size(), Type::getDoubleTy(TheContext)); + std::vector Doubles(Args.size(), Type::getDoubleTy(*TheContext)); FunctionType *FT = - FunctionType::get(Type::getDoubleTy(TheContext), Doubles, false); + FunctionType::get(Type::getDoubleTy(*TheContext), Doubles, false); Function *F = Function::Create(FT, Function::ExternalLinkage, Name, TheModule.get()); @@ -1062,8 +1063,8 @@ BinopPrecedence[P.getOperatorName()] = P.getBinaryPrecedence(); // Create a new basic block to start insertion into. - BasicBlock *BB = BasicBlock::Create(TheContext, "entry", TheFunction); - Builder.SetInsertPoint(BB); + BasicBlock *BB = BasicBlock::Create(*TheContext, "entry", TheFunction); + Builder->SetInsertPoint(BB); // Record the function arguments in the NamedValues map. NamedValues.clear(); @@ -1072,7 +1073,7 @@ AllocaInst *Alloca = CreateEntryBlockAlloca(TheFunction, Arg.getName()); // Store the initial value into the alloca. - Builder.CreateStore(&Arg, Alloca); + Builder->CreateStore(&Arg, Alloca); // Add arguments to variable symbol table. NamedValues[Arg.getName()] = Alloca; @@ -1080,7 +1081,7 @@ if (Value *RetVal = Body->codegen()) { // Finish off the function. - Builder.CreateRet(RetVal); + Builder->CreateRet(RetVal); // Validate the generated code, checking for consistency. verifyFunction(*TheFunction); @@ -1102,8 +1103,11 @@ static void InitializeModule() { // Open a new module. 
- TheModule = llvm::make_unique("my cool jit", TheContext); - TheModule->setDataLayout(TheJIT->getTargetMachine().createDataLayout()); + TheModule = llvm::make_unique("my cool jit", *TheContext); + TheModule->setDataLayout(TheJIT->getDataLayout()); + + // Create a new builder for the module. + Builder = llvm::make_unique>(*TheContext); } static void HandleDefinition() { @@ -1112,7 +1116,7 @@ fprintf(stderr, "Read function definition:"); FnIR->print(errs()); fprintf(stderr, "\n"); - TheJIT->addModule(std::move(TheModule)); + ExitOnErr(TheJIT->addModule(std::move(TheModule))); InitializeModule(); } } else { @@ -1136,25 +1140,25 @@ } static void HandleTopLevelExpression() { + static unsigned ExprCount = 0; + + // Update ExprCount. This number will be added to anonymous expressions to + // prevent them from clashing. + ++ExprCount; + // Evaluate a top-level expression into an anonymous function. - if (auto FnAST = ParseTopLevelExpr()) { + if (auto FnAST = ParseTopLevelExpr(ExprCount)) { if (FnAST->codegen()) { // JIT the module containing the anonymous expression, keeping a handle so // we can free it later. - auto H = TheJIT->addModule(std::move(TheModule)); + ExitOnErr(TheJIT->addModule(std::move(TheModule))); InitializeModule(); - - // Search the JIT for the __anon_expr symbol. - auto ExprSymbol = TheJIT->findSymbol("__anon_expr"); - assert(ExprSymbol && "Function not found"); - - // Get the symbol's address and cast it to the right type (takes no - // arguments, returns a double) so we can call it as a native function. - double (*FP)() = (double (*)())(intptr_t)cantFail(ExprSymbol.getAddress()); + // Get the anonymous expression's JITSymbol. + auto Sym = + ExitOnErr(TheJIT->lookup(("__anon_expr" + Twine(ExprCount)).str())); + auto *FP = (double (*)())(intptr_t)Sym.getAddress(); + assert(FP && "Failed to codegen function"); fprintf(stderr, "Evaluated to %f\n", FP()); - - // Delete the anonymous expression module from the JIT. 
- TheJIT->removeModule(H); } } else { // Skip token for error recovery. @@ -1222,7 +1226,8 @@ fprintf(stderr, "ready> "); getNextToken(); - TheJIT = llvm::make_unique(); + TheJIT = ExitOnErr(KaleidoscopeJIT::Create()); + TheContext = &TheJIT->getContext(); InitializeModule();