diff --git a/mlir/docs/Dialects/LLVM.md b/mlir/docs/Dialects/LLVM.md
--- a/mlir/docs/Dialects/LLVM.md
+++ b/mlir/docs/Dialects/LLVM.md
@@ -32,7 +32,7 @@

 For example, one can use primitive types `!llvm.i32`, pointer types
 `!llvm<"i8*">`, vector types `!llvm<"<4 x float>">` or structure types
-`!llvm<"{i32, float}">`. The parsing and printing of the canonical form is
+`!llvm<"{i32, float}">`. The parsing and printing of the canonical form are
 delegated to the LLVM assembly parser and printer.

 LLVM IR dialect types contain an `llvm::Type*` object that can be obtained by
@@ -346,7 +346,7 @@
 `llvm.mlir.constant` creates such values for scalars and vectors. It has a
 mandatory `value` attribute, which may be an integer, floating point attribute;
 dense or sparse attribute containing integers or floats. The type of the
-attribute is one the corresponding MLIR standard types. It may be omitted for
+attribute is one of the corresponding MLIR standard types. It may be omitted for
 `i64` and `f64` types that are implied. The operation produces a new SSA value
 of the specified LLVM IR dialect type. The type of that value _must_ correspond
 to the attribute type converted to LLVM IR.
diff --git a/mlir/docs/Dialects/Linalg.md b/mlir/docs/Dialects/Linalg.md
--- a/mlir/docs/Dialects/Linalg.md
+++ b/mlir/docs/Dialects/Linalg.md
@@ -29,7 +29,7 @@
 1. Tiled Producer-Consumer Fusion with Parametric Tile-And-Fuse.
 1. Map to Parallel and Reduction Loops and Hardware.
 1. Vectorization: Rewrite in Vector Form.
-1. Lower to Loops (Affine, Generic and Parallel).
+1. Lower to Loops (Affine, Generic, and Parallel).
 1. Lower to Library Calls or Special Instructions, Intrinsics or ISA.
 1. Partially Lower to Iterations Over a Finer-Grained Linalg Op.

diff --git a/mlir/docs/Dialects/SPIR-V.md b/mlir/docs/Dialects/SPIR-V.md
--- a/mlir/docs/Dialects/SPIR-V.md
+++ b/mlir/docs/Dialects/SPIR-V.md
@@ -79,7 +79,7 @@
 *   The prefix for all SPIR-V types and operations are `spv.`.
 *   All instructions in an extended instruction set are further qualified with
     the extended instruction set's prefix. For example, all operations in the
-    GLSL extended instruction set is has the prefix of `spv.GLSL.`.
+    GLSL extended instruction set have the prefix of `spv.GLSL.`.
 *   Ops that directly mirror instructions in the specification have `CamelCase`
     names that are the same as the instruction opnames (without the `Op`
     prefix). For example, `spv.FMul` is a direct mirror of `OpFMul` in the
@@ -93,7 +93,7 @@
 *   Ops with `_snake_case` names are those that have no corresponding
     instructions (or concepts) in the binary format. They are introduced to
     satisfy MLIR structural requirements. For example, `spv._module_end` and
-    `spv._merge`. They maps to no instructions during (de)serialization.
+    `spv._merge`. They map to no instructions during (de)serialization.

 (TODO: consider merging the last two cases and adopting `spv.mlir.` prefix
 for them.)
@@ -148,7 +148,7 @@
 #### Use MLIR attributes for metadata

 *   Requirements for capabilities, extensions, extended instruction sets,
-    addressing model, and memory model is conveyed using `spv.module`
+    addressing model, and memory model are conveyed using `spv.module`
     attributes. This is considered better because these information are for the
     execution environment. It's easier to probe them if on the module op itself.
 *   Annotations/decoration instructions are "folded" into the instructions they
@@ -172,17 +172,17 @@
 *   Normal constants are not placed in `spv.module`'s region; they are
     localized into functions.
     This is to make functions in the SPIR-V dialect to be isolated and explicit
     capturing. Constants are cheap to duplicate given
-    attributes are uniqued in `MLIRContext`.
+    attributes are made unique in `MLIRContext`.

 #### Adopt symbol-based global variables and specialization constant

 *   Global variables are defined with the `spv.globalVariable` op. They do not
     generate SSA values. Instead they have symbols and should be referenced via
-    symbols. To use a global variables in a function block, `spv._address_of` is
-    needed to turn the symbol into a SSA value.
+    symbols. To use global variables in a function block, `spv._address_of` is
+    needed to turn the symbol into an SSA value.
 *   Specialization constants are defined with the `spv.specConstant` op. Similar
     to global variables, they do not generate SSA values and have symbols for
-    reference, too. `spv._reference_of` is needed to turn the symbol into a SSA
+    reference, too. `spv._reference_of` is needed to turn the symbol into an SSA
     value for use in a function block.

 The above choices enables functions in the SPIR-V dialect to be isolated and
@@ -415,13 +415,13 @@

 A major difference between the SPIR-V dialect and the SPIR-V specification for
 functions is that the former are isolated and require explicit capturing, while
-the latter allow implicit capturing. In SPIR-V specification, functions can
+the latter allows implicit capturing. In SPIR-V specification, functions can
 refer to SSA values (generated by constants, global variables, etc.) defined in
 modules. The SPIR-V dialect adjusted how constants and global variables are
 modeled to enable isolated functions. Isolated functions are more friendly to
 compiler analyses and transformations. This also enables the SPIR-V dialect to
 better utilize core infrastructure: many functionalities in the core
-infrastructure requires ops to be isolated, e.g., the
+infrastructure require ops to be isolated, e.g., the
 [greedy pattern rewriter][GreedyPatternRewriter] can only act on ops isolated
 from above.

@@ -742,23 +742,23 @@
 SPIR-V supports versions, extensions, and capabilities as ways to indicate the
 availability of various features (types, ops, enum cases) on target hardware.
 For example, non-uniform group operations were missing before v1.3, and they
-require special capabilites like `GroupNonUniformArithmetic` to be used. These
+require special capabilities like `GroupNonUniformArithmetic` to be used. These
 availability information relates to [target environment](#target-environment)
 and affects the legality of patterns during dialect conversion.

 SPIR-V ops' availability requirements are modeled with
 [op interfaces][MlirOpInterface]:

-*   `QueryMinVersionInterface` and `QueryMaxVersionInterface` for vesion
+*   `QueryMinVersionInterface` and `QueryMaxVersionInterface` for version
     requirements
 *   `QueryExtensionInterface` for extension requirements
 *   `QueryCapabilityInterface` for capability requirements

-These interface declarations are auto-generated from TableGen defintions
+These interface declarations are auto-generated from TableGen definitions
 included in [`SPIRVBase.td`][MlirSpirvBase]. At the moment all SPIR-V ops
-implements the above interfaces.
+implement the above interfaces.

-SPIR-V ops' availability implemention methods are automatically synthesized
+SPIR-V ops' availability implementation methods are automatically synthesized
 from the availability specification on each op and enum attribute in TableGen.
 An op needs to look into not only the opcode but also operands to derive its
 availability requirements. For example, `spv.ControlBarrier` requires no
@@ -917,9 +917,9 @@
 Although the main objective of the SPIR-V dialect is to act as a proper IR for
 compiler transformations, being able to serialize to and deserialize from the
 binary format is still very valuable for many good reasons. Serialization
-enables the artifacts of SPIR-V compilation to be consumed by a execution
+enables the artifacts of SPIR-V compilation to be consumed by an execution
 environment; deserialization allows us to import SPIR-V binary modules and run
-transformations on them. So serialization and deserialization is supported from
+transformations on them. So serialization and deserialization are supported from
 the very beginning of the development of the SPIR-V dialect.

 The serialization library provides two entry points, `mlir::spirv::serialize()`
@@ -1009,21 +1009,21 @@
 it will be unconditionally converted to 32-bit. This should be switched to
 properly emulating non-32-bit scalar types.)

-[Standard index Type][MlirIndexType] need special handling since they are not
+[Standard index type][MlirIndexType] needs special handling since it is not
 directly supported in SPIR-V. Currently the `index` type is converted to `i32`.
 (TODO: Allow for configuring the integer width to use for `index` types in the
 SPIR-V dialect)

 SPIR-V only supports vectors of 2/3/4 elements; so
-[standard vector types][MlirVectorType] of these length can be converted
+[standard vector types][MlirVectorType] of these lengths can be converted
 directly. (TODO: Convert other vectors of lengths to scalars or arrays)

 [Standard memref types][MlirMemrefType] with static shape and stride are
 converted to `spv.ptr<spv.struct<spv.array<...>>>`s. The resultant SPIR-V array
-types has the same element type as the source memref and its number of elements
+types have the same element type as the source memref and its number of elements
 is obtained from the layout specification of the memref. The storage class of
 the pointer type are derived from the memref's memory space with
 `SPIRVTypeConverter::getStorageClassForMemorySpace()`.
@@ -1068,7 +1068,7 @@

 ### Current conversions to SPIR-V

-Using the above infrastructure, conversion are implemented from
+Using the above infrastructure, conversions are implemented from

 *   [Standard Dialect][MlirStandardDialect] : Only arithmetic and logical
     operations conversions are implemented.
@@ -1240,7 +1240,7 @@

 See any such function in [`SPIRVOps.cpp`][MlirSpirvOpsCpp] as an example.

-If no additional verification is needed, one need to add the following to
+If no additional verification is needed, one needs to add the following to
 the op's Op Definition Spec:

 ```
diff --git a/mlir/docs/Dialects/Vector.md b/mlir/docs/Dialects/Vector.md
--- a/mlir/docs/Dialects/Vector.md
+++ b/mlir/docs/Dialects/Vector.md
@@ -44,7 +44,7 @@
 intrinsics. This is referred to as the `LLVM` level.
 2. Set of machine-specific operations and types that are built to translate
 almost 1-1 with the HW ISA. This is referred to as the Hardware Vector level;
-a.k.a `HWV`. For instance, we have (a) a `NVVM` dialect (for `CUDA`) with
+a.k.a `HWV`. For instance, we have (a) the `NVVM` dialect (for `CUDA`) with
 tensor core ops, (b) accelerator-specific dialects (internal), a potential
 (future) `CPU` dialect to capture `LLVM` intrinsics more closely and other
 dialects for specific hardware. Ideally this should be auto-generated as much
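
As a quick illustration of the `llvm.mlir.constant` behavior described in the
LLVM.md hunk at the top of this patch, here is a minimal sketch. It is
illustrative only and assumes the pre-opaque `!llvm.*` type spellings used
throughout this diff (`!llvm.i32`, `!llvm<"<4 x float>">`); the authoritative
forms are the examples in LLVM.md itself.

```mlir
// Integer constant: the attribute type (i32) must correspond to the
// LLVM IR dialect result type.
%0 = llvm.mlir.constant(42 : i32) : !llvm.i32

// The attribute type may be omitted when it is the implied i64.
%1 = llvm.mlir.constant(42) : !llvm.i64

// Vector constant built from a dense splat attribute.
%2 = llvm.mlir.constant(dense<1.0> : vector<4xf32>) : !llvm<"<4 x float>">
```

In each case the result type is the LLVM IR counterpart of the attribute type,
which is the correspondence requirement that hunk spells out.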