diff --git a/llvm/docs/tutorial/BuildingAJIT2.rst b/llvm/docs/tutorial/BuildingAJIT2.rst
--- a/llvm/docs/tutorial/BuildingAJIT2.rst
+++ b/llvm/docs/tutorial/BuildingAJIT2.rst
@@ -228,24 +228,24 @@
 the "opt" phase of an LLVM compiler) whose API is opaque to ORC with an
-interface that ORC can call as needed. The add method takes an module in
+interface that ORC can call as needed. The ``add`` method takes a module in
 some input program representation (in this case an LLVM IR module) and
-stores it in the target JITDylib, arranging for it to be passed back to the
-Layer's emit method when any symbol defined by that module is requested. Layers
-can compose neatly by calling the 'emit' method of a base layer to complete
-their work. For example, in this tutorial our IRTransformLayer calls through to
+stores it in the target ``JITDylib``, arranging for it to be passed back to the
+layer's ``emit`` method when someone requests any symbol defined by the module.
+Each layer can complete its own work by calling the ``emit`` method of its base
+layer. For example, in this tutorial our IRTransformLayer calls through to
 our IRCompileLayer to compile the transformed IR, and our IRCompileLayer in
 turn calls our ObjectLayer to link the object file produced by our compiler.
 
-So far we have learned how to optimize and compile our LLVM IR, but we have not
-focused on when compilation happens. Our current REPL is eager: Each function
-definition is optimized and compiled as soon as it is referenced by any other
-code, regardless of whether it is ever called at runtime. In the next chapter we
-will introduce fully lazy compilation, in which functions are not compiled until
+So far we have learned how to optimize and compile our LLVM IR, but we have
+not focused on when compilation happens. Our current REPL is eager: it
+optimizes and compiles each function as soon as it is referenced by any other
+code, regardless of whether it is ever called at runtime. In the next chapter we
+will introduce fully lazy compilation, in which functions are not compiled until
 they are first called at run-time. At this point the trade-offs get much more
 interesting: the lazier we are, the quicker we can start executing the first
 function, but the more often we will have to pause to compile newly encountered
 functions. If we only code-gen lazily, but optimize eagerly, we will have a
-longer startup time (as everything is optimized) but relatively short pauses as
+longer startup time (as everything is optimized up front) but relatively short pauses as
 each function just passes through code-gen. If we both optimize and code-gen
 lazily we can start executing the first function more quickly, but we will have
 longer pauses as each function has to be both optimized and code-gen'd when it