diff --git a/lld/docs/AtomLLD.rst b/lld/docs/AtomLLD.rst deleted file mode 100644 --- a/lld/docs/AtomLLD.rst +++ /dev/null @@ -1,62 +0,0 @@ -ATOM-based lld -============== - -Note: this document discuss Mach-O port of LLD. For ELF and COFF, -see :doc:`index`. - -ATOM-based lld is a new set of modular code for creating linker tools. -Currently it supports Mach-O. - -* End-User Features: - - * Compatible with existing linker options - * Reads standard Object Files - * Writes standard Executable Files - * Remove clang's reliance on "the system linker" - * Uses the LLVM `"UIUC" BSD-Style license`__. - -* Applications: - - * Modular design - * Support cross linking - * Easy to add new CPU support - * Can be built as static tool or library - -* Design and Implementation: - - * Extensive unit tests - * Internal linker model can be dumped/read to textual format - * Additional linking features can be plugged in as "passes" - * OS specific and CPU specific code factored out - -Why a new linker? ------------------ - -The fact that clang relies on whatever linker tool you happen to have installed -means that clang has been very conservative adopting features which require a -recent linker. - -In the same way that the MC layer of LLVM has removed clang's reliance on the -system assembler tool, the lld project will remove clang's reliance on the -system linker tool. - - -Contents --------- - -.. toctree:: - :maxdepth: 2 - - design - getting_started - development - open_projects - sphinx_intro - -Indices and tables ------------------- - -* :ref:`genindex` -* :ref:`search` - -__ https://llvm.org/docs/DeveloperPolicy.html#license diff --git a/lld/docs/Driver.rst b/lld/docs/Driver.rst deleted file mode 100644 --- a/lld/docs/Driver.rst +++ /dev/null @@ -1,82 +0,0 @@ -====== -Driver -====== - -Note: this document discuss Mach-O port of LLD. For ELF and COFF, -see :doc:`index`. - -.. contents:: - :local: - -Introduction -============ - -This document describes the lld driver. The purpose of this document is to -describe both the motivation and design goals for the driver, as well as details -of the internal implementation. - -Overview -======== - -The lld driver is designed to support a number of different command line -interfaces. The main interfaces we plan to support are binutils' ld, Apple's -ld, and Microsoft's link.exe. - -Flavors -------- - -Each of these different interfaces is referred to as a flavor. There is also an -extra flavor "core" which is used to exercise the core functionality of the -linker it the test suite. - -* gnu -* darwin -* link -* core - -Selecting a Flavor -^^^^^^^^^^^^^^^^^^ - -There are two different ways to tell lld which flavor to be. They are checked in -order, so the second overrides the first. The first is to symlink :program:`lld` -as :program:`lld-{flavor}` or just :program:`{flavor}`. You can also specify -it as the first command line argument using ``-flavor``:: - - $ lld -flavor gnu - -There is a shortcut for ``-flavor core`` as ``-core``. - - -Adding an Option to an existing Flavor -====================================== - -#. Add the option to the desired :file:`lib/Driver/{flavor}Options.td`. - -#. Add to :cpp:class:`lld::FlavorLinkingContext` a getter and setter method - for the option. - -#. Modify :cpp:func:`lld::FlavorDriver::parse` in :file: - `lib/Driver/{Flavor}Driver.cpp` to call the targetInfo setter - for the option. - -#. Modify {Flavor}Reader and {Flavor}Writer to use the new targetInfo option. - - -Adding a Flavor -=============== - -#. 
Add an entry for the flavor in :file:`include/lld/Common/Driver.h` to
-   :cpp:class:`lld::UniversalDriver::Flavor`.
-
-#. Add an entry in :file:`lib/Driver/UniversalDriver.cpp` to
-   :cpp:func:`lld::Driver::strToFlavor` and
-   :cpp:func:`lld::UniversalDriver::link`.
-   This allows the flavor to be selected via symlink and `-flavor`.
-
-#. Add a tablegen file called :file:`lib/Driver/{flavor}Options.td` that
-   describes the options. If the options are a superset of another driver, that
-   driver's td file can simply be included. The :file:`{flavor}Options.td` file
-   must also be added to :file:`lib/Driver/CMakeLists.txt`.
-
-#. Add a ``{flavor}Driver`` as a subclass of :cpp:class:`lld::Driver`
-   in :file:`lib/Driver/{flavor}Driver.cpp`.
diff --git a/lld/docs/Readers.rst b/lld/docs/Readers.rst
deleted file mode 100644
--- a/lld/docs/Readers.rst
+++ /dev/null
@@ -1,174 +0,0 @@
-.. _Readers:
-
-Developing lld Readers
-======================
-
-Note: this document discusses the Mach-O port of LLD. For ELF and COFF,
-see :doc:`index`.
-
-Introduction
-------------
-
-The purpose of a "Reader" is to take an object file in a particular format
-and create an `lld::File`:cpp:class: (which is a graph of Atoms)
-representing the object file. A Reader inherits from
-`lld::Reader`:cpp:class: which lives in
-:file:`include/lld/Core/Reader.h` and
-:file:`lib/Core/Reader.cpp`.
-
-The Reader infrastructure for an object format ``Foo`` requires the
-following pieces in order to fit into lld:
-
-:file:`include/lld/ReaderWriter/ReaderFoo.h`
-
-   .. cpp:class:: ReaderOptionsFoo : public ReaderOptions
-
-      This Options class is the only way to configure how the Reader will
-      parse any file into an `lld::File`:cpp:class: object. This class
-      should be declared in the `lld`:cpp:class: namespace.
-
-   .. cpp:function:: Reader *createReaderFoo(ReaderOptionsFoo &reader)
-
-      This factory function configures and creates the Reader. This function
-      should be declared in the `lld`:cpp:class: namespace.
-
-:file:`lib/ReaderWriter/Foo/ReaderFoo.cpp`
-
-   .. cpp:class:: ReaderFoo : public Reader
-
-      This is the concrete Reader class which can be called to parse
-      object files. It should be declared in an anonymous namespace or,
-      if there is shared code with the `lld::WriterFoo`:cpp:class:, you
-      can make a nested namespace (e.g. `lld::foo`:cpp:class:).
-
-You may have noticed that :cpp:class:`ReaderFoo` is not declared in the
-``.h`` file. An important design aspect of lld is that all Readers are
-created *only* through an object-format-specific
-:cpp:func:`createReaderFoo` factory function. The creation of the Reader is
-parametrized through a :cpp:class:`ReaderOptionsFoo` class. This options
-class is the one-and-only way to control how the Reader operates when
-parsing an input file into an Atom graph. For instance, you may want the
-Reader to only accept certain architectures. The options class can be
-instantiated from command line options or be programmatically configured.
-
-Where to start
---------------
-
-The lld project already has a skeleton of source code for Readers for
-``ELF``, ``PECOFF``, ``MachO``, and lld's native ``YAML`` graph format.
-If your file format is a variant of one of those, you should modify the
-existing Reader to support your variant. This is done by customizing the Options
-class for the Reader and making appropriate changes to the ``.cpp`` file to
-interpret those options and act accordingly.
-
-If your object file format is not a variant of any existing Reader, you'll need
-to create a new Reader subclass with the organization described above.
-
-Readers are factories
----------------------
-
-The linker will usually only instantiate your Reader once. That one Reader will
-have its loadFile() method called many times with different input files.
-To support multithreaded linking, the Reader may be parsing multiple input
-files in parallel. Therefore, there should be no parsing state in your Reader
-object. Any parsing state should be in ivars of your File subclass or in
-some temporary object.
-
-The key function to implement in a reader is::
-
-  virtual error_code loadFile(LinkerInput &input,
-                              std::vector<std::unique_ptr<File>> &result);
-
-It takes a memory buffer (which contains the contents of the object file
-being read) and returns an instantiated lld::File object which is
-a collection of Atoms. The result is a vector of File pointers (instead of
-simply a File pointer) because some file formats allow multiple object
-"files" to be encoded in one file system file.
-
-
-Memory Ownership
-----------------
-
-Atoms are always owned by their File object. During core linking when Atoms
-are coalesced or stripped away, core linking does not delete them.
-Core linking just removes those unused Atoms from its internal list.
-The destructor of a File object is responsible for deleting all Atoms it
-owns, and if ownership of the MemoryBuffer was passed to it, the File
-destructor needs to delete that too.
-
-Making Atoms
-------------
-
-The internal model of lld is purely Atom based. But most object files do not
-have an explicit concept of Atoms; instead, most have "sections". The way
-to think of this is that a section is just a list of Atoms with common
-attributes.
-
-The first step in parsing section-based object files is to cleave each
-section into a list of Atoms. The technique may vary by section type. For
-code sections (e.g. .text), there are usually symbols at the start of each
-function. Those symbol addresses are the points at which the section is
-cleaved into discrete Atoms. Some file formats (like ELF) also include the
-length of each symbol in the symbol table. Otherwise, the length of each
-Atom is calculated to run to the start of the next symbol or the end of the
-section.
-
-Other section types can be implicitly cleaved. For instance, c-string literals
-or unwind info (e.g. .eh_frame) can be cleaved by having the Reader look at
-the content of the section. It is important to cleave sections into Atoms
-to remove false dependencies. For instance, the .eh_frame section often
-has no symbols, but contains "pointers" to the functions for which it
-has unwind info. If the .eh_frame section was not cleaved (but left as one
-big Atom), there would always be a reference (from the eh_frame Atom) to
-each function. So the linker would be unable to coalesce or dead-strip
-away the function atoms.
-
-The lld Atom model also requires that a reference to an undefined symbol be
-modeled as a Reference to an UndefinedAtom. So the Reader also needs to
-create an UndefinedAtom for each undefined symbol in the object file.
-
-Once all Atoms have been created, the second step is to create References
-(recall that Atoms are "nodes" and References are "edges"). Most References
-are created by looking at the "relocation records" in the object file. If
-a function contains a call to "malloc", there is usually a relocation record
-specifying the address in the section and the symbol table index.
Your -Reader will need to convert the address to an Atom and offset and the symbol -table index into a target Atom. If "malloc" is not defined in the object file, -the target Atom of the Reference will be an UndefinedAtom. - - -Performance ------------ -Once you have the above working to parse an object file into Atoms and -References, you'll want to look at performance. Some techniques that can -help performance are: - -* Use llvm::BumpPtrAllocator or pre-allocate one big vector and then - just have each atom point to its subrange of References in that vector. - This can be faster that allocating each Reference as separate object. -* Pre-scan the symbol table and determine how many atoms are in each section - then allocate space for all the Atom objects at once. -* Don't copy symbol names or section content to each Atom, instead use - StringRef and ArrayRef in each Atom to point to its name and content in the - MemoryBuffer. - - -Testing -------- - -We are still working on infrastructure to test Readers. The issue is that -you don't want to check in binary files to the test suite. And the tools -for creating your object file from assembly source may not be available on -every OS. - -We are investigating a way to use YAML to describe the section, symbols, -and content of a file. Then have some code which will write out an object -file from that YAML description. - -Once that is in place, you can write test cases that contain section/symbols -YAML and is run through the linker to produce Atom/References based YAML which -is then run through FileCheck to verify the Atoms and References are as -expected. - - - diff --git a/lld/docs/design.rst b/lld/docs/design.rst deleted file mode 100644 --- a/lld/docs/design.rst +++ /dev/null @@ -1,421 +0,0 @@ -.. _design: - -Linker Design -============= - -Note: this document discuss Mach-O port of LLD. For ELF and COFF, -see :doc:`index`. - -Introduction ------------- - -lld is a new generation of linker. It is not "section" based like traditional -linkers which mostly just interlace sections from multiple object files into the -output file. Instead, lld is based on "Atoms". Traditional section based -linking work well for simple linking, but their model makes advanced linking -features difficult to implement. Features like dead code stripping, reordering -functions for locality, and C++ coalescing require the linker to work at a finer -grain. - -An atom is an indivisible chunk of code or data. An atom has a set of -attributes, such as: name, scope, content-type, alignment, etc. An atom also -has a list of References. A Reference contains: a kind, an optional offset, an -optional addend, and an optional target atom. - -The Atom model allows the linker to use standard graph theory models for linking -data structures. Each atom is a node, and each Reference is an edge. The -feature of dead code stripping is implemented by following edges to mark all -live atoms, and then delete the non-live atoms. - - -Atom Model ----------- - -An atom is an indivisible chunk of code or data. Typically each user written -function or global variable is an atom. In addition, the compiler may emit -other atoms, such as for literal c-strings or floating point constants, or for -runtime data structures like dwarf unwind info or pointers to initializers. - -A simple "hello world" object file would be modeled like this: - -.. image:: hello.png - -There are three atoms: main, a proxy for printf, and an anonymous atom -containing the c-string literal "hello world". 
The Atom "main" has two
-references. One is the call site for the call to printf, and the other is a
-reference for the instruction that loads the address of the c-string literal.
-
-There are only four different types of atoms:
-
-  * DefinedAtom
-      95% of all atoms. This is a chunk of code or data.
-
-  * UndefinedAtom
-      This is a placeholder in object files for a reference to some atom
-      outside the translation unit. During core linking it is usually replaced
-      by (coalesced into) another Atom.
-
-  * SharedLibraryAtom
-      If a required symbol name turns out to be defined in a dynamic shared
-      library (and not some object file), a SharedLibraryAtom is the
-      placeholder Atom used to represent that fact.
-
-      It is similar to an UndefinedAtom, but it also tracks information
-      about the associated shared library.
-
-  * AbsoluteAtom
-      This is for embedded support where some stuff is implemented in ROM at
-      some fixed address. This atom has no content. It is just an address
-      that the Writer needs to fix up any references to point to.
-
-
-File Model
-----------
-
-The linker views the input files as basically containers of Atoms and
-References, and just a few attributes of their own. The linker works with three
-kinds of files: object files, static libraries, and dynamic shared libraries.
-Each kind of file has a reader object which presents the file in the model
-expected by the linker.
-
-Object File
-~~~~~~~~~~~
-
-An object file is just a container of atoms. When linking an object file, a
-reader is instantiated which parses the object file and instantiates a set of
-atoms representing all content in the .o file. The linker adds all those atoms
-to a master graph.
-
-Static Library (Archive)
-~~~~~~~~~~~~~~~~~~~~~~~~
-
-This is the traditional unix static archive which is just a collection of object
-files with a "table of contents". When linking with a static library, by default
-nothing is added to the master graph of atoms. Instead, if any "undefined" atoms
-are left remaining in the master graph after merging all atoms from object
-files, the linker reads the table of contents for each static library to see if
-any have the needed definitions. If so, the set of atoms from the specified
-object file in the static library is added to the master graph of atoms.
-
-Dynamic Library (Shared Object)
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Dynamic libraries are different from object files and static libraries in that
-they don't directly add any content. Their purpose is to check at build time
-that the remaining undefined references can be resolved at runtime, and provide
-a list of dynamic libraries (SO_NEEDED) that will be needed at runtime. The way
-this is modeled in the linker is that a dynamic library contributes no atoms to
-the initial graph of atoms. Instead (like static libraries), if there are
-"undefined" atoms in the master graph of all atoms, then each dynamic library is
-checked to see if it exports the required symbol. If so, a "shared library" atom
-is instantiated by the reader, which the linker uses to replace the
-"undefined" atom.
-
-Linking Steps
--------------
-
-Through the use of abstract Atoms, the core of linking is architecture
-independent and file format independent. All command line parsing is factored
-out into a separate "options" abstraction which enables the linker to be driven
-with different command line sets.
-
-The overall steps in linking are:
-
-  #. Command line processing
-
-  #. Parsing input files
-
-  #. Resolving
-
-  #. Passes/Optimizations
-
-  #. Generate output file
-
-The Resolving and Passes steps are done purely on the master graph of atoms, so
-they have no notion of file formats such as mach-o or ELF.
-
-
-Input Files
-~~~~~~~~~~~
-
-Existing developer tools use different file formats for object files.
-A goal of lld is to be file format independent. This is done
-through a plug-in model for reading object files. The lld::Reader is the base
-class for all object file readers. A Reader follows the factory method pattern.
-A Reader instantiates an lld::File object (which is a graph of Atoms) from a
-given object file (on disk or in-memory).
-
-Every Reader subclass defines its own "options" class (for instance the mach-o
-Reader defines the class ReaderOptionsMachO). This options class is the
-one-and-only way to control how the Reader operates when parsing an input file
-into an Atom graph. For instance, you may want the Reader to only accept
-certain architectures. The options class can be instantiated from command
-line options, or it can be subclassed and the ivars programmatically set.
-
-Resolving
-~~~~~~~~~
-
-The resolving step takes all the atoms' graphs from each object file and
-combines them into one master object graph. Unfortunately, it is not as simple
-as appending the atom list from each file into one big list. There are many
-cases where atoms need to be coalesced. That is, two or more atoms need to be
-coalesced into one atom. This is necessary to support: C language "tentative
-definitions", C++ weak symbols for templates and inlines defined in headers,
-replacing undefined atoms with actual definition atoms, and for merging copies
-of constants like c-strings and floating point constants.
-
-The linker supports coalescing by-name and by-content. By-name is used for
-tentative definitions and weak symbols. By-content is used for constant data
-that can be merged.
-
-The resolving process maintains some global linking "state", including a "symbol
-table" which is a map from llvm::StringRef to lld::Atom*. With these data
-structures, the linker iterates all atoms in all input files. For each atom, it
-checks if the atom is named and has a global or hidden scope. If so, the atom
-is added to the symbol table map. If there already is a matching atom in that
-table, that means the current atom needs to be coalesced with the found atom, or
-it is a multiple definition error.
-
-When all initial input file atoms have been processed by the resolver, a scan is
-made to see if there are any undefined atoms in the graph. If there are, the
-linker scans all libraries (both static and dynamic) looking for definitions to
-replace the undefined atoms. It is an error if any undefined atoms are left
-remaining.
-
-Dead code stripping (if requested) is done at the end of resolving. The linker
-does a simple mark-and-sweep. It starts with "root" atoms (like "main" in a main
-executable) and follows each reference and marks each Atom that it visits as
-"live". When done, all atoms not marked "live" are removed.
-
-The result of the Resolving phase is the creation of an lld::File object. The
-goal is that the lld::File model is **the** internal representation
-throughout the linker. The file readers parse (mach-o, ELF, COFF) into an
-lld::File. The file writers (mach-o, ELF, COFF) take an lld::File and produce
-their file kind, and every Pass only operates on an lld::File.
This is not only -a simpler, consistent model, but it enables the state of the linker to be dumped -at any point in the link for testing purposes. - - -Passes -~~~~~~ - -The Passes step is an open ended set of routines that each get a change to -modify or enhance the current lld::File object. Some example Passes are: - - * stub (PLT) generation - - * GOT instantiation - - * order_file optimization - - * branch island generation - - * branch shim generation - - * Objective-C optimizations (Darwin specific) - - * TLV instantiation (Darwin specific) - - * DTrace probe processing (Darwin specific) - - * compact unwind encoding (Darwin specific) - - -Some of these passes are specific to Darwin's runtime environments. But many of -the passes are applicable to any OS (such as generating branch island for out of -range branch instructions). - -The general structure of a pass is to iterate through the atoms in the current -lld::File object, inspecting each atom and doing something. For instance, the -stub pass, looks for call sites to shared library atoms (e.g. call to printf). -It then instantiates a "stub" atom (PLT entry) and a "lazy pointer" atom for -each proxy atom needed, and these new atoms are added to the current lld::File -object. Next, all the noted call sites to shared library atoms have their -References altered to point to the stub atom instead of the shared library atom. - - -Generate Output File -~~~~~~~~~~~~~~~~~~~~ - -Once the passes are done, the output file writer is given current lld::File -object. The writer's job is to create the executable content file wrapper and -place the content of the atoms into it. - -lld uses a plug-in model for writing output files. All concrete writers (e.g. -ELF, mach-o, etc) are subclasses of the lld::Writer class. - -Unlike the Reader class which has just one method to instantiate an lld::File, -the Writer class has multiple methods. The crucial method is to generate the -output file, but there are also methods which allow the Writer to contribute -Atoms to the resolver and specify passes to run. - -An example of contributing -atoms is that if the Writer knows a main executable is being linked and such -an executable requires a specially named entry point (e.g. "_main"), the Writer -can add an UndefinedAtom with that special name to the resolver. This will -cause the resolver to issue an error if that symbol is not defined. - -Sometimes a Writer supports lazily created symbols, such as names for the start -of sections. To support this, the Writer can create a File object which vends -no initial atoms, but does lazily supply atoms by name as needed. - -Every Writer subclass defines its own "options" class (for instance the mach-o -Writer defines the class WriterOptionsMachO). This options class is the -one-and-only way to control how the Writer operates when producing an output -file from an Atom graph. For instance, you may want the Writer to optimize -the output for certain OS versions, or strip local symbols, etc. The options -class can be instantiated from command line options, or it can be subclassed -and the ivars programmatically set. - - -lld::File representations -------------------------- - -Just as LLVM has three representations of its IR model, lld has two -representations of its File/Atom/Reference model: - - * In memory, abstract C++ classes (lld::Atom, lld::Reference, and lld::File). 
- - * textual (in YAML) - - -Textual representations in YAML -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -In designing a textual format we want something easy for humans to read and easy -for the linker to parse. Since an atom has lots of attributes most of which are -usually just the default, we should define default values for every attribute so -that those can be omitted from the text representation. Here is the atoms for a -simple hello world program expressed in YAML:: - - target-triple: x86_64-apple-darwin11 - - atoms: - - name: _main - scope: global - type: code - content: [ 55, 48, 89, e5, 48, 8d, 3d, 00, 00, 00, 00, 30, c0, e8, 00, 00, - 00, 00, 31, c0, 5d, c3 ] - fixups: - - offset: 07 - kind: pcrel32 - target: 2 - - offset: 0E - kind: call32 - target: _fprintf - - - type: c-string - content: [ 73, 5A, 00 ] - - ... - -The biggest use for the textual format will be writing test cases. Writing test -cases in C is problematic because the compiler may vary its output over time for -its own optimization reasons which my inadvertently disable or break the linker -feature trying to be tested. By writing test cases in the linkers own textual -format, we can exactly specify every attribute of every atom and thus target -specific linker logic. - -The textual/YAML format follows the ReaderWriter patterns used in lld. The lld -library comes with the classes: ReaderYAML and WriterYAML. - - -Testing -------- - -The lld project contains a test suite which is being built up as new code is -added to lld. All new lld functionality should have a tests added to the test -suite. The test suite is `lit `_ driven. Each -test is a text file with comments telling lit how to run the test and check the -result To facilitate testing, the lld project builds a tool called lld-core. -This tool reads a YAML file (default from stdin), parses it into one or more -lld::File objects in memory and then feeds those lld::File objects to the -resolver phase. - - -Resolver testing -~~~~~~~~~~~~~~~~ - -Basic testing is the "core linking" or resolving phase. That is where the -linker merges object files. All test cases are written in YAML. One feature of -YAML is that it allows multiple "documents" to be encoding in one YAML stream. -That means one text file can appear to the linker as multiple .o files - the -normal case for the linker. - -Here is a simple example of a core linking test case. It checks that an -undefined atom from one file will be replaced by a definition from another -file:: - - # RUN: lld-core %s | FileCheck %s - - # - # Test that undefined atoms are replaced with defined atoms. - # - - --- - atoms: - - name: foo - definition: undefined - --- - atoms: - - name: foo - scope: global - type: code - ... - - # CHECK: name: foo - # CHECK: scope: global - # CHECK: type: code - # CHECK-NOT: name: foo - # CHECK: ... - - -Passes testing -~~~~~~~~~~~~~~ - -Since Passes just operate on an lld::File object, the lld-core tool has the -option to run a particular pass (after resolving). Thus, you can write a YAML -test case with carefully crafted input to exercise areas of a Pass and the check -the resulting lld::File object as represented in YAML. - - -Design Issues -------------- - -There are a number of open issues in the design of lld. The plan is to wait and -make these design decisions when we need to. - - -Debug Info -~~~~~~~~~~ - -Currently, the lld model says nothing about debug info. But the most popular -debug format is DWARF and there is some impedance mismatch with the lld model -and DWARF. 
In lld there are just Atoms and only Atoms that need to be in a -special section at runtime have an associated section. Also, Atoms do not have -addresses. The way DWARF is spec'ed different parts of DWARF are supposed to go -into specially named sections and the DWARF references function code by address. - -CPU and OS specific functionality -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Currently, lld has an abstract "Platform" that deals with any CPU or OS specific -differences in linking. We just keep adding virtual methods to the base -Platform class as we find linking areas that might need customization. At some -point we'll need to structure this better. - - -File Attributes -~~~~~~~~~~~~~~~ - -Currently, lld::File just has a path and a way to iterate its atoms. We will -need to add more attributes on a File. For example, some equivalent to the -target triple. There is also a number of cached or computed attributes that -could make various Passes more efficient. For instance, on Darwin there are a -number of Objective-C optimizations that can be done by a Pass. But it would -improve the plain C case if the Objective-C optimization Pass did not have to -scan all atoms looking for any Objective-C data structures. This could be done -if the lld::File object had an attribute that said if the file had any -Objective-C data in it. The Resolving phase would then be required to "merge" -that attribute as object files are added. diff --git a/lld/docs/development.rst b/lld/docs/development.rst deleted file mode 100644 --- a/lld/docs/development.rst +++ /dev/null @@ -1,45 +0,0 @@ -.. _development: - -Development -=========== - -Note: this document discuss Mach-O port of LLD. For ELF and COFF, -see :doc:`index`. - -lld is developed as part of the `LLVM `_ project. - -Creating a Reader ------------------ - -See the :ref:`Creating a Reader ` guide. - - -Modifying the Driver --------------------- - -See :doc:`Driver`. - - -Debugging ---------- - -You can run lld with ``-mllvm -debug`` command line options to enable debugging -printouts. If you want to enable debug information for some specific pass, you -can run it with ``-mllvm '-debug-only='``, where pass is a name used in -the ``DEBUG_WITH_TYPE()`` macro. - - - -Documentation -------------- - -The project documentation is written in reStructuredText and generated using the -`Sphinx `_ documentation generator. For more -information on writing documentation for the project, see the -:ref:`sphinx_intro`. - -.. toctree:: - :hidden: - - Readers - Driver diff --git a/lld/docs/getting_started.rst b/lld/docs/getting_started.rst deleted file mode 100644 --- a/lld/docs/getting_started.rst +++ /dev/null @@ -1,87 +0,0 @@ -.. _getting_started: - -Getting Started: Building and Running lld -========================================= - -This page gives you the shortest path to checking out and building lld. If you -run into problems, please file bugs in the `LLVM Bugzilla`__ - -__ https://bugs.llvm.org/ - -Building lld ------------- - -On Unix-like Systems -~~~~~~~~~~~~~~~~~~~~ - -1. Get the required tools. - - * `CMake 2.8`_\+. - * make (or any build system CMake supports). - * `Clang 3.1`_\+ or GCC 4.7+ (C++11 support is required). - - * If using Clang, you will also need `libc++`_. - * `Python 2.4`_\+ (not 3.x) for running tests. - -.. _CMake 2.8: http://www.cmake.org/cmake/resources/software.html -.. _Clang 3.1: http://clang.llvm.org/ -.. _libc++: http://libcxx.llvm.org/ -.. _Python 2.4: http://python.org/download/ - -2. 
Check out LLVM and subprojects (including lld):: - - $ git clone https://github.com/llvm/llvm-project.git - -4. Build LLVM and lld:: - - $ cd llvm-project - $ mkdir build && cd build - $ cmake -G "Unix Makefiles" -DLLVM_ENABLE_PROJECTS=lld ../llvm - $ make - - * If you want to build with clang and it is not the default compiler or - it is installed in an alternate location, you'll need to tell the cmake tool - the location of the C and C++ compiler via CMAKE_C_COMPILER and - CMAKE_CXX_COMPILER. For example:: - - $ cmake -DCMAKE_CXX_COMPILER=/path/to/clang++ -DCMAKE_C_COMPILER=/path/to/clang ... - -5. Test:: - - $ make check-lld - -Using Visual Studio -~~~~~~~~~~~~~~~~~~~ - -#. Get the required tools. - - * `CMake 2.8`_\+. - * `Visual Studio 12 (2013) or later`_ (required for C++11 support) - * `Python 2.4`_\+ (not 3.x) for running tests. - -.. _CMake 2.8: http://www.cmake.org/cmake/resources/software.html -.. _Visual Studio 12 (2013) or later: http://www.microsoft.com/visualstudio/11/en-us -.. _Python 2.4: http://python.org/download/ - -#. Check out LLVM as above. - -#. Generate Visual Studio project files:: - - $ cd llvm-project/build (out of source build required) - $ cmake -G "Visual Studio 11" -DLLVM_ENABLE_PROJECTS=lld ../llvm - -#. Build - - * Open LLVM.sln in Visual Studio. - * Build the ``ALL_BUILD`` target. - -#. Test - - * Build the ``lld-test`` target. - -More Information -~~~~~~~~~~~~~~~~ - -For more information on using CMake see the `LLVM CMake guide`_. - -.. _LLVM CMake guide: https://llvm.org/docs/CMake.html diff --git a/lld/docs/index.rst b/lld/docs/index.rst --- a/lld/docs/index.rst +++ b/lld/docs/index.rst @@ -10,9 +10,7 @@ several different linkers. The ELF port is the one that will be described in this document. The PE/COFF port is complete, including Windows debug info (PDB) support. The WebAssembly port is still a work in -progress (See :doc:`WebAssembly`). The Mach-O port is built based on a -different architecture than the others. For the details about Mach-O, please -read :doc:`AtomLLD`. +progress (See :doc:`WebAssembly`). Features -------- @@ -170,7 +168,6 @@ :maxdepth: 1 NewLLD - AtomLLD WebAssembly windows_support missingkeyfunction diff --git a/lld/docs/open_projects.rst b/lld/docs/open_projects.rst deleted file mode 100644 --- a/lld/docs/open_projects.rst +++ /dev/null @@ -1,9 +0,0 @@ -.. _open_projects: - -Open Projects -============= - -Documentation TODOs -~~~~~~~~~~~~~~~~~~~ - -.. todolist:: diff --git a/lld/docs/sphinx_intro.rst b/lld/docs/sphinx_intro.rst deleted file mode 100644 --- a/lld/docs/sphinx_intro.rst +++ /dev/null @@ -1,127 +0,0 @@ -.. _sphinx_intro: - -Sphinx Introduction for LLVM Developers -======================================= - -This document is intended as a short and simple introduction to the Sphinx -documentation generation system for LLVM developers. - -Quickstart ----------- - -To get started writing documentation, you will need to: - - 1. Have the Sphinx tools :ref:`installed `. - - 2. Understand how to :ref:`build the documentation - `. - - 3. Start :ref:`writing documentation `! - -.. _installing_sphinx: - -Installing Sphinx -~~~~~~~~~~~~~~~~~ - -You should be able to install Sphinx using the standard Python package -installation tool ``easy_install``, as follows:: - - $ sudo easy_install sphinx - Searching for sphinx - Reading http://pypi.python.org/simple/sphinx/ - Reading http://sphinx.pocoo.org/ - Best match: Sphinx 1.1.3 - ... more lines here .. 
- -If you do not have root access (or otherwise want to avoid installing Sphinx in -system directories) see the section on :ref:`installing_sphinx_in_a_venv` . - -If you do not have the ``easy_install`` tool on your system, you should be able -to install it using: - - Linux - Use your distribution's standard package management tool to install it, - i.e., ``apt-get install easy_install`` or ``yum install easy_install``. - - macOS - All modern macOS systems come with ``easy_install`` as part of the base - system. - - Windows - See the `setuptools `_ package web - page for instructions. - - -.. _building_the_documentation: - -Building the documentation -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -In order to build the documentation need to add ``-DLLVM_ENABLE_SPHINX=ON`` to -your ``cmake`` command. Once you do this you can build the docs using -``docs-lld-html`` build (``ninja`` or ``make``) target. - -That build target will invoke ``sphinx-build`` with the appropriate options for -the project, and generate the HTML documentation in a ``tools/lld/docs/html`` -subdirectory. - -.. _writing_documentation: - -Writing documentation -~~~~~~~~~~~~~~~~~~~~~ - -The documentation itself is written in the reStructuredText (ReST) format, and -Sphinx defines additional tags to support features like cross-referencing. - -The ReST format itself is organized around documents mostly being readable -plaintext documents. You should generally be able to write new documentation -easily just by following the style of the existing documentation. - -If you want to understand the formatting of the documents more, the best place -to start is Sphinx's own `ReST Primer `_. - - -Learning More -------------- - -If you want to learn more about the Sphinx system, the best place to start is -the Sphinx documentation itself, available `here -`_. - - -.. _installing_sphinx_in_a_venv: - -Installing Sphinx in a Virtual Environment ------------------------------------------- - -Most Python developers prefer to work with tools inside a *virtualenv* (virtual -environment) instance, which functions as an application sandbox. This avoids -polluting your system installation with different packages used by various -projects (and ensures that dependencies for different packages don't conflict -with one another). Of course, you need to first have the virtualenv software -itself which generally would be installed at the system level:: - - $ sudo easy_install virtualenv - -but after that you no longer need to install additional packages in the system -directories. - -Once you have the *virtualenv* tool itself installed, you can create a -virtualenv for Sphinx using:: - - $ virtualenv ~/my-sphinx-install - New python executable in /Users/dummy/my-sphinx-install/bin/python - Installing setuptools............done. - Installing pip...............done. - - $ ~/my-sphinx-install/bin/easy_install sphinx - ... install messages here ... - -and from now on you can "activate" the *virtualenv* using:: - - $ source ~/my-sphinx-install/bin/activate - -which will change your PATH to ensure the sphinx-build tool from inside the -virtual environment will be used. See the `virtualenv website -`_ for more information on using -virtual environments.