diff --git a/mlir/docs/Rationale/MLIRForGraphAlgorithms.md b/mlir/docs/Rationale/MLIRForGraphAlgorithms.md
--- a/mlir/docs/Rationale/MLIRForGraphAlgorithms.md
+++ b/mlir/docs/Rationale/MLIRForGraphAlgorithms.md
@@ -115,7 +115,7 @@
 ### A Lossless Human Editable Textual Representation
 
 The MLIR in-memory data structure has a human readable and writable format, as
-well as [a specification](LangRef.md) for that format - built just like any
+well as [a specification](../LangRef.md) for that format - built just like any
 other programming language. Important properties of this format are that it is
 compact, easy to read, and lossless. You can dump an MLIR program out to disk
 and munge around with it, then send it through a few more passes.
@@ -167,7 +167,7 @@
 The "CHECK" comments are interpreted by the
 [LLVM FileCheck tool](https://llvm.org/docs/CommandGuide/FileCheck.html), which
 is sort of like a really advanced grep. This test is fully self-contained: it
-feeds the input into the [canonicalize pass](Canonicalization.md), and checks
+feeds the input into the [canonicalize pass](../Canonicalization.md), and checks
 that the output matches the CHECK lines. See the `test/Transforms` directory
 for more examples. In contrast, standard unit testing exposes the API of the
 underlying framework to lots and lots of tests (making it harder to refactor and
@@ -238,7 +238,7 @@
 capture this (e.g. serialize it to proto), passes have to recompute it on
 demand with ShapeRefiner.
 
-The [MLIR Tensor Type](LangRef.md#tensor-type) directly captures shape
+The [MLIR Tensor Type](../Dialects/Builtin.md/#rankedtensortype) directly captures shape
 information, so you can have things like:
 
 ```mlir
diff --git a/mlir/docs/Rationale/Rationale.md b/mlir/docs/Rationale/Rationale.md
--- a/mlir/docs/Rationale/Rationale.md
+++ b/mlir/docs/Rationale/Rationale.md
@@ -113,16 +113,16 @@
 ability to index into the same memref in other ways (something which C arrays
 allow for example). Furthermore, for the affine constructs, the compiler can
 follow use-def chains (e.g. through
-[affine.apply operations](../Dialects/Affine.md#affineapply-operation)) or through
-the map attributes of [affine operations](../Dialects/Affine.md#Operations)) to
+[affine.apply operations](../Dialects/Affine.md/#affineapply-affineapplyop)) or through
+the map attributes of [affine operations](../Dialects/Affine.md/#operations)) to
 precisely analyze references at compile-time using polyhedral techniques. This
-is possible because of the [restrictions on dimensions and symbols](../Dialects/Affine.md#restrictions-on-dimensions-and-symbols).
+is possible because of the [restrictions on dimensions and symbols](../Dialects/Affine.md/#restrictions-on-dimensions-and-symbols).
 
 A scalar of element-type (a primitive type or a vector type) that is stored in
 memory is modeled as a 0-d memref. This is also necessary for scalars that are
 live out of for loops and if conditionals in a function, for which we don't yet
 have an SSA representation --
-[an extension](#mlfunction-extensions-for-"escaping-scalars") to allow that is
+[an extension](#affineif-and-affinefor-extensions-for-escaping-scalars) to allow that is
 described later in this doc.
 
 ### Symbols and types
@@ -167,7 +167,7 @@
 
 ### Block Arguments vs PHI nodes
 
-MLIR Regions represent SSA using "[block arguments](../LangRef.md#blocks)" rather
+MLIR Regions represent SSA using "[block arguments](../LangRef.md/#blocks)" rather
 than [PHI instructions](http://llvm.org/docs/LangRef.html#i-phi) used in LLVM.
 This choice is representationally identical (the same constructs can be
 represented in either form) but block arguments have several advantages:
@@ -308,7 +308,7 @@
 
 ### Specifying sign in integer comparison operations
 
-Since integers are [signless](#signless-types), it is necessary to define the
+Since integers are [signless](#integer-signedness-semantics), it is necessary to define the
 sign for integer comparison operations. This sign indicates how to treat the
 foremost bit of the integer: as sign bit or as most significant bit. For
 example, comparing two `i4` values `0b1000` and `0b0010` yields different
@@ -513,12 +513,12 @@
 systems. For these wrapper types there is no simple canonical name, it's logical
 to think of these types as existing within the namespace of the dialect. If a
 dialect wishes to assign a canonical name to a type, it can be done via
-[type aliases](../LangRef.md#type-aliases).
+[type aliases](../LangRef.md/#type-aliases).
 
 ### Tuple types
 
 The MLIR type system provides first class support for defining
-[tuple types](../LangRef.md#tuple-type). This is due to the fact that `Tuple`
+[tuple types](../Dialects/Builtin.md/#tupletype). This is due to the fact that `Tuple`
 represents a universal concept that is likely to, and has already begun to,
 present itself in many different dialects. Though this type is first class in
 the type system, it merely serves to provide a common mechanism in which to
diff --git a/mlir/docs/Rationale/RationaleGenericDAGRewriter.md b/mlir/docs/Rationale/RationaleGenericDAGRewriter.md
--- a/mlir/docs/Rationale/RationaleGenericDAGRewriter.md
+++ b/mlir/docs/Rationale/RationaleGenericDAGRewriter.md
@@ -54,7 +54,7 @@
 result constant value.
 
 MLIR operations may override a
-[`fold`](../Canonicalization.md/#canonicalizing-with-fold) routine, which
+[`fold`](../Canonicalization.md/#canonicalizing-with-the-fold-method) routine, which
 exposes a simpler API compared to a general DAG-to-DAG pattern matcher, and
 allows for it to be applicable in cases that a generic matcher would not. For
 example, a DAG-rewrite can remove arbitrary nodes in the current function, which
diff --git a/mlir/docs/Rationale/RationaleLinalgDialect.md b/mlir/docs/Rationale/RationaleLinalgDialect.md
--- a/mlir/docs/Rationale/RationaleLinalgDialect.md
+++ b/mlir/docs/Rationale/RationaleLinalgDialect.md
@@ -16,7 +16,7 @@
 Linalg is designed to solve the High-level Hierarchical Optimization (HHO box)
 and to interoperate nicely within a *Mixture Of Expert Compilers* environment
 (i.e. the *CGSel* box).
-This work is inspired by a wealth of [prior art](#prior_art) in
+This work is inspired by a wealth of [prior art](#prior-art) in
 the field, from which it seeks to learn key lessons. This documentation and
 introspection effort also comes in the context of the proposal for a working
 group for discussing the [Development of high-level Tensor Compute
@@ -67,16 +67,16 @@
 ### Evolution
 Since the initial implementation, the design has evolved with, and partially
 driven the evolution of the core MLIR infrastructure to use
-[Regions](https://mlir.llvm.org/docs/LangRef/#regions),
-[OpInterfaces](https://mlir.llvm.org/docs/Interfaces/),
-[ODS](https://mlir.llvm.org/docs/OpDefinitions/) and
-[Declarative Rewrite Rules](https://mlir.llvm.org/docs/DeclarativeRewrites/)
+[Regions](../LangRef.md/#regions),
+[OpInterfaces](../Interfaces.md),
+[ODS](../OpDefinitions.md) and
+[Declarative Rewrite Rules](../DeclarativeRewrites.md)
 among others.
 The approach adopted by Linalg was extended to become
 [StructuredOps abstractions](
 https://drive.google.com/drive/u/0/folders/1sRAsgsd8Bvpm_IxREmZf2agsGU2KvrK-),
 with Linalg becoming its incarnation on tensors and buffers. It is complemented by the
-[Vector dialect](https://mlir.llvm.org/docs/Dialects/Vector/),
+[Vector dialect](../Dialects/Vector.md),
 which defines structured operations on vectors, following the same rationale
 and design principles as Linalg. (Vector dialect includes the higher-level
 operations on multi-dimensional vectors and abstracts away the lowering to
@@ -85,7 +85,7 @@
 The Linalg dialect itself grew beyond linear algebra-like operations to become
 more expressive, in particular by providing an abstraction of a loop nest
 supporting parallelism, reductions and sliding windows around arbitrary MLIR
-[regions](https://mlir.llvm.org/docs/LangRef/#regions). It also has the
+[regions](../LangRef.md/#regions). It also has the
 potential of growing beyond *dense* linear-algebra to support richer data
 types, such as sparse and ragged tensors and buffers.
 
@@ -102,7 +102,7 @@
 More components can be extracted, redesigned and generalized when new uses or
 requirements arise.
 
-Several [design questions](#open_issues) remain open in Linalg, which does not
+Several [design questions](../Dialects/Linalg.md/#open_issues) remain open in Linalg, which does not
 claim to be a general solution to all compilation problems. It does aim at
 driving thinking and implementations of domain-specific abstractions where
 programmer's intent can be captured at a very high level,
@@ -112,7 +112,7 @@
 "Linalg" could remove some of the confusions related to the dialect (and the
 underlying approach), its goals and limitations.
 
-## Prior Art<a name="prior_art"></a>
+## Prior Art
 Linalg draws inspiration from decades of prior art to design a modern a
 pragmatic solution. The following non-exhaustive list refers to some of the
 projects that influenced Linalg design:
@@ -180,7 +180,7 @@
 embed these additional nodes directly in the functional abstraction.
 
 Similarly to LIFT, Linalg uses local rewrite rules implemented with the MLIR
-[Declarative Rewrite Rules](https://mlir.llvm.org/docs/DeclarativeRewrites/)
+[Declarative Rewrite Rules](../DeclarativeRewrites.md)
 mechanisms.
 
 Linalg builds on, and helps separate concerns in the LIFT approach as follows:
@@ -429,7 +429,7 @@
 involves considerations related to:
 - concrete current and future needs of the application domain,
 - concrete current and future hardware properties and ISAs,
-- understanding of strengths and limitations of [existing approaches](#prior_art),
+- understanding of strengths and limitations of [existing approaches](#prior-art),
 - taking advantage of the coexistence of multiple levels of IR in MLIR,
 
 One needs to be methodical to avoid proliferation and redundancy. A given
@@ -571,7 +571,7 @@
 data. On the contrary, there is a very strong relationship between control-flow
 and data structures: one cannot exist without the other. This has multiple
-implications on the [semantics of Linalg Ops](#linalg_ops) and their
+implications on the [semantics of Linalg Ops](../Dialects/Linalg.md/#linalg_op) and their
 transformations.
 In particular, this observation influences whether certain transformations are
 better done:
 - as control flow or data structure manipulation,
@@ -609,7 +609,7 @@
 ### Summary of Existing Alternatives a Picture
 
 Lastly, we summarize our observations of lessons from [Prior
-Art](#prior_art)---when viewed under the lense of our [Core Guiding
+Art](#prior-art)---when viewed under the lense of our [Core Guiding
 Principles](#guiding_principles)---with the following picture.
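Reviewer note: the "Block Arguments vs PHI nodes" passage touched by the Rationale.md hunk above can be illustrated with a minimal, hypothetical sketch (it is not part of this patch, and it uses today's `func`/`cf` dialect spellings rather than the older ones the doc predates): both branch edges target the same block and pass different values, something a single phi node cannot express per edge.

```mlir
func.func @select_like(%cond: i1, %a: i64, %b: i64) -> i64 {
  // Both edges target ^bb1 and pass a value as a block argument;
  // %val plays the role that a phi node's result plays in LLVM IR.
  cf.cond_br %cond, ^bb1(%a : i64), ^bb1(%b : i64)
^bb1(%val: i64):
  return %val : i64
}
```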