diff --git a/mlir/docs/DeclarativeRewrites.md b/mlir/docs/DeclarativeRewrites.md
--- a/mlir/docs/DeclarativeRewrites.md
+++ b/mlir/docs/DeclarativeRewrites.md
@@ -153,7 +153,7 @@
#### Matching DAG of operations
-To match an DAG of ops, use nested `dag` objects:
+To match a DAG of ops, use nested `dag` objects:
```tablegen
@@ -530,7 +530,7 @@
The above example also shows how to replace a matched multi-result op.
-To replace a `N`-result op, the result patterns must generate at least `N`
+To replace an `N`-result op, the result patterns must generate at least `N`
declared values (see [Declared vs. actual value](#declared-vs-actual-value) for
definition). If there are more than `N` declared values generated, only the
last `N` declared values will be used to replace the matched op. Note that
@@ -668,12 +668,12 @@
`location` is of the following syntax:
-```tablgen
+```tablegen
(location $symbol0, $symbol1, ...)
```
where all `$symbol` should be bound previously in the pattern and one optional
-string may be specified as an attribute. The following locations are creted:
+string may be specified as an attribute. The following locations are created:
* If only 1 symbol is specified then that symbol's location is used,
* If multiple are specified then a fused location is created;
diff --git a/mlir/docs/Dialects/Affine.md b/mlir/docs/Dialects/Affine.md
--- a/mlir/docs/Dialects/Affine.md
+++ b/mlir/docs/Dialects/Affine.md
@@ -72,7 +72,7 @@
dynamic one in turn bound to a symbolic identifier. Dimensions may be bound not
only to anything that a symbol is bound to, but also to induction variables of
enclosing [`affine.for`](#affinefor-affineforop) and
-[`afffine.parallel`](#affineparallel-affineparallelop) operations, and the
+[`affine.parallel`](#affineparallel-affineparallelop) operations, and the
result of an
[`affine.apply` operation](#affineapply-operation) (which recursively may use
other dimensions and symbols).
diff --git a/mlir/docs/OpDefinitions.md b/mlir/docs/OpDefinitions.md
--- a/mlir/docs/OpDefinitions.md
+++ b/mlir/docs/OpDefinitions.md
@@ -146,7 +146,7 @@
### Operation documentation
-This includes both an one-line `summary` and a longer human-readable
+This includes both a one-line `summary` and a longer human-readable
`description`. They will be used to drive automatic generation of dialect
documentation. They need to be provided in the operation's definition body:
@@ -863,7 +863,7 @@
An operation's constraint can cover different range; it may
-* Only concern a single attribute (e.g. being an 32-bit integer greater than 5),
+* Only concern a single attribute (e.g. being a 32-bit integer greater than 5),
* Multiple operands and results (e.g., the 1st result's shape must be the same
as the 1st operand), or
* Intrinsic to the operation itself (e.g., having no side effect).
@@ -1039,13 +1039,13 @@
* `DefaultValuedAttr`: specifies the
[default value](#attributes-with-default-values) for an attribute.
-* `OptionalAttr`: specfies an attribute as [optional](#optional-attributes).
+* `OptionalAttr`: specifies an attribute as [optional](#optional-attributes).
* `Confined`: adapts an attribute with
[further constraints](#confining-attributes).
### Enum attributes
-Some attributes can only take values from an predefined enum, e.g., the
+Some attributes can only take values from a predefined enum, e.g., the
comparison kind of a comparison op. To define such attributes, ODS provides
several mechanisms: `StrEnumAttr`, `IntEnumAttr`, and `BitEnumAttr`.
diff --git a/mlir/docs/PassManagement.md b/mlir/docs/PassManagement.md
--- a/mlir/docs/PassManagement.md
+++ b/mlir/docs/PassManagement.md
@@ -382,7 +382,7 @@
```
Pipeline registration also allows for simplified registration of
-specifializations for existing passes:
+specializations for existing passes:
```c++
static PassPipelineRegistration<> foo10(
diff --git a/mlir/docs/Quantization.md b/mlir/docs/Quantization.md
--- a/mlir/docs/Quantization.md
+++ b/mlir/docs/Quantization.md
@@ -232,7 +232,7 @@
TensorFlow Lite would use the attributes of the fake_quant operations to make a
judgment about how to convert to use kernels from its quantized operations subset.
-In MLIR-based quantization, fake_quant_\* operationss are handled by converting them to
+In MLIR-based quantization, fake_quant_\* operations are handled by converting them to
a sequence of *qcast* (quantize) followed by *dcast* (dequantize) with an
appropriate *UniformQuantizedType* as the target of the qcast operation.
@@ -242,7 +242,7 @@
to a form based on integral arithmetic.
This scheme also naturally allows computations that are *partially quantized*
-where the parts which could not be reduced to integral operationss are still carried out
+where the parts which could not be reduced to integral operations are still carried out
in floating point with appropriate conversions at the boundaries.
## TFLite native quantization
diff --git a/mlir/docs/Rationale/Rationale.md b/mlir/docs/Rationale/Rationale.md
--- a/mlir/docs/Rationale/Rationale.md
+++ b/mlir/docs/Rationale/Rationale.md
@@ -67,7 +67,7 @@
The information captured in the IR allows a compact expression of all loop
transformations, data remappings, explicit copying necessary for explicitly
-addressed memory in accelerators, mapping to pre-tuned expert written
+addressed memory in accelerators, mapping to pre-tuned expert-written
primitives, and mapping to specialized vector instructions. Loop transformations
that can be easily implemented include the body of affine transformations: these
subsume all traditional loop transformations (unimodular and non-unimodular)
@@ -229,7 +229,7 @@
code-generation-related/lowering-related concerns explained above. In fact, the
`tensor` type even allows dialect-specific types as element types.
-### Bit width of a non-primitive types and `index` is undefined
+### Bit width of a non-primitive type and `index` is undefined
The bit width of a compound type is not defined by MLIR, it may be defined by a
specific lowering pass. In MLIR, bit width is a property of certain primitive
@@ -259,7 +259,7 @@
signedness with integer types; while others, especially closer to machine
instruction, might want signless integers. Instead of forcing each abstraction
to adopt the same integer modelling or develop its own one in house, Integer
-types provides this as an option to help code reuse and consistency.
+type provides this as an option to help code reuse and consistency.
For the standard dialect, the choice is to have signless integer types. An
integer value does not have an intrinsic sign, and it's up to the specific op
diff --git a/mlir/docs/Rationale/RationaleLinalgDialect.md b/mlir/docs/Rationale/RationaleLinalgDialect.md
--- a/mlir/docs/Rationale/RationaleLinalgDialect.md
+++ b/mlir/docs/Rationale/RationaleLinalgDialect.md
@@ -45,7 +45,7 @@
apparent that it could extend to larger application domains than just machine
learning on dense tensors.
-The design and evolution of Linalg follows a *codegen-friendly* approach where
+The design and evolution of Linalg follow a *codegen-friendly* approach where
the IR and the transformations evolve hand-in-hand.
The key idea is that op semantics *declare* and transport information that is
traditionally obtained by compiler analyses.
@@ -77,7 +77,7 @@
with Linalg becoming its incarnation on tensors and buffers.
It is complemented by the
[Vector dialect](https://mlir.llvm.org/docs/Dialects/Vector/),
-which define structured operations on vectors, following the same rationale and
+which defines structured operations on vectors, following the same rationale and
design principles as Linalg. (Vector dialect includes the higher-level
operations on multi-dimensional vectors and abstracts away the lowering to
single-dimensional vectors).
@@ -191,7 +191,7 @@
structure abstractions) potentially reusable across different dialects in the
MLIR's open ecosystem.
-LIFT is expected to further influence the design of Linalg as it evolve. In
+LIFT is expected to further influence the design of Linalg as it evolves. In
particular, extending the data structure abstractions to support non-dense
tensors can use the experience of LIFT abstractions for
[sparse](https://www.lift-project.org/publications/2016/harries16sparse.pdf)
@@ -255,9 +255,9 @@
transformations. But it's still too hard for newcomers to use or extend. The
level of performance you get from Halide is very different depending on
whether one is a seasoned veteran or a newcomer. This is especially true as
-the number of transformations grow.
+the number of transformations grows.
- Halide raises rather than lowers in two ways, going counter-current to the
-design goals we set for high-level codegen abstractions in in MLIR. First,
+design goals we set for high-level codegen abstractions in MLIR. First,
canonical Halide front-end code uses explicit indexing and math on scalar
values, so to target BLAS/DNN libraries one needs to add pattern matching
which is similarly brittle as in the affine case. While Halide's performance
@@ -425,7 +425,7 @@
workloads for high-performance and parallel hardware architectures: **this is
an HPC compilation problem**.
-The selection of relevant transformations follows a codesign approach and
+The selection of relevant transformations follows a co-design approach and
involves considerations related to:
- concrete current and future needs of the application domain,
- concrete current and future hardware properties and ISAs,
@@ -462,7 +462,7 @@
#### Declarative Specification: Avoid Raising
Compiler transformations need static structural information (e.g. loop-nests,
-graphs of basic blocks, pure functions etc). When that structural information
+graphs of basic blocks, pure functions, etc). When that structural information
is lost, it needs to be reconstructed.
A good illustration of this phenomenon is the notion of *raising* in polyhedral
@@ -518,7 +518,7 @@
- Allow creating customizable passes declaratively by simply selecting rewrite
rules. This allows mixing transformations, canonicalizations, constant folding
and other enabling rewrites in a single pass. The result is a system where pass
-fusion is very simple to obtain and gives hope to solving certain
+fusion is very simple to obtain and gives hope for solving certain
[phase ordering issues](https://dl.acm.org/doi/10.1145/201059.201061).
### Suitability for Search and Machine Learning
@@ -551,7 +551,7 @@
tables of records and maybe even graphs.
For such more advanced data types, the control-flow required to traverse the
-data structures, termination conditions etc are much less simple to analyze and
+data structures, termination conditions, etc are much less simple to analyze and
characterize statically. As a consequence we need to also design solutions that
stand a chance of evolving into runtime-adaptive computations (e.g.
inspector-executor in which an *inspector* runs a cheap runtime
@@ -582,7 +582,7 @@
### The Dialect Need not be Closed Under Transformations
This is probably the most surprising and counter-intuitive
observation. When one designs IR for transformations, closed-ness is
-often a nonnegotiable property.
+often a non-negotiable property.
This is a key design principle of polyhedral IRs such as
[URUK](http://icps.u-strasbg.fr/~bastoul/research/papers/GVBCPST06-IJPP.pdf)
and
diff --git a/mlir/docs/ShapeInference.md b/mlir/docs/ShapeInference.md
--- a/mlir/docs/ShapeInference.md
+++ b/mlir/docs/ShapeInference.md
@@ -117,7 +117,7 @@
is, these two type systems differ and both should be supported, but the
intersection of the two should not be required. As a particular example,
if a compiler only wants to differentiate exact shapes vs dynamic
- shapes, then it need not consider a more generic shape latice even
+ shapes, then it need not consider a more generic shape lattice even
though the shape description supports it.
* Declarative (e.g., analyzable at compile time, possible to generate
diff --git a/mlir/docs/Tutorials/CreatingADialect.md b/mlir/docs/Tutorials/CreatingADialect.md
--- a/mlir/docs/Tutorials/CreatingADialect.md
+++ b/mlir/docs/Tutorials/CreatingADialect.md
@@ -134,8 +134,8 @@
add_mlir_conversion_library(MLIRBarToFoo
BarToFoo.cpp
- ADDITIONAL_HEADER_DIRS
- ${MLIR_MAIN_INCLUDE_DIR}/mlir/Conversion/BarToFoo
+ ADDITIONAL_HEADER_DIRS
+ ${MLIR_MAIN_INCLUDE_DIR}/mlir/Conversion/BarToFoo
)
target_link_libraries(MLIRBarToFoo
PUBLIC
diff --git a/mlir/docs/doxygen.cfg.in b/mlir/docs/doxygen.cfg.in
--- a/mlir/docs/doxygen.cfg.in
+++ b/mlir/docs/doxygen.cfg.in
@@ -46,7 +46,7 @@
PROJECT_BRIEF =
-# With the PROJECT_LOGO tag one can specify an logo or icon that is included in
+# With the PROJECT_LOGO tag one can specify a logo or icon that is included in
# the documentation. The maximum height of the logo should not exceed 55 pixels
# and the maximum width should not exceed 200 pixels. Doxygen will copy the logo
# to the output directory.
diff --git a/mlir/include/mlir/Dialect/Linalg/IR/LinalgStructuredOps.td b/mlir/include/mlir/Dialect/Linalg/IR/LinalgStructuredOps.td
--- a/mlir/include/mlir/Dialect/Linalg/IR/LinalgStructuredOps.td
+++ b/mlir/include/mlir/Dialect/Linalg/IR/LinalgStructuredOps.td
@@ -260,7 +260,7 @@
/// OptionalAttr<I64ArrayAttr>:$strides
/// OptionalAttr<I64ArrayAttr>:$dilations
/// OptionalAttr<I64ElementsAttr>:$padding
-/// `stirdes` denotes the step of each window along the dimension.
+/// `strides` denotes the step of each window along the dimension.
class PoolingBase_Op<string mnemonic, list<OpTrait> props>
  : LinalgStructured_Op<mnemonic, props> {
let description = [{
diff --git a/mlir/include/mlir/IR/OpBase.td b/mlir/include/mlir/IR/OpBase.td
--- a/mlir/include/mlir/IR/OpBase.td
+++ b/mlir/include/mlir/IR/OpBase.td
@@ -1440,7 +1440,7 @@
let returnType = ret;
code body = b;
- // Specify how to convert from the derived attribute to an attibute.
+ // Specify how to convert from the derived attribute to an attribute.
//
// ## Special placeholders
//