diff --git a/mlir/docs/Tutorials/Toy/Ch-5.md b/mlir/docs/Tutorials/Toy/Ch-5.md
--- a/mlir/docs/Tutorials/Toy/Ch-5.md
+++ b/mlir/docs/Tutorials/Toy/Ch-5.md
@@ -26,7 +26,7 @@ MLIR has many different dialects, so it is important to have a unified
 framework for [converting](../../../getting_started/Glossary.md#conversion)
 between them. This is where the `DialectConversion` framework comes into play.
 This framework allows for
-transforming a set of `illegal` operations to a set of `legal` ones. To use this
+transforming a set of *illegal* operations to a set of *legal* ones. To use this
 framework, we need to provide two things (and an optional third):
 
 *   A [Conversion Target](../../DialectConversion.md#conversion-target)
@@ -40,7 +40,7 @@
     [Rewrite Patterns](../../DialectConversion.md#rewrite-pattern-specification)
 
     -   This is the set of [patterns](../../QuickstartRewrites.md) used to
-        convert `illegal` operations into a set of zero or more `legal` ones.
+        convert *illegal* operations into a set of zero or more *legal* ones.
 
 *   Optionally, a [Type Converter](../../DialectConversion.md#type-conversion).
 
@@ -67,17 +67,23 @@
   // We also define the Toy dialect as Illegal so that the conversion will fail
   // if any of these operations are *not* converted. Given that we actually want
   // a partial lowering, we explicitly mark the Toy operations that don't want
-  // to lower, `toy.print`, as `legal`.
+  // to lower, `toy.print`, as *legal*.
   target.addIllegalDialect<ToyDialect>();
   target.addLegalOp<PrintOp>();
   ...
 }
 ```
 
+Above, we first set the Toy dialect to illegal, and then the print operation
+as legal. But we could have done this the other way around.
+Individual operations always take precedence over the (more generic) dialect
+definitions, so the order doesn't matter. See `ConversionTarget::getOpInfo`
+for the details.
+
 ## Conversion Patterns
 
 After the conversion target has been defined, we can define how to convert the
-`illegal` operations into `legal` ones. Similarly to the canonicalization
+*illegal* operations into *legal* ones. Similarly to the canonicalization
 framework introduced in [chapter 3](Ch-3.md), the
 [`DialectConversion` framework](../../DialectConversion.md) also uses
 [RewritePatterns](../../QuickstartRewrites.md) to perform the conversion logic.
@@ -154,29 +160,10 @@
 
 ```c++
 void ToyToAffineLoweringPass::runOnFunction() {
-  // The first thing to define is the conversion target. This will define the
-  // final target for this lowering.
-  mlir::ConversionTarget target(getContext());
-
-  // We define the specific operations, or dialects, that are legal targets for
-  // this lowering. In our case, we are lowering to a combination of the
-  // `Affine` and `Standard` dialects.
-  target.addLegalDialect<mlir::AffineDialect, mlir::StandardOpsDialect>();
-
-  // We also define the Toy dialect as Illegal so that the conversion will fail
-  // if any of these operations are *not* converted. Given that we actually want
-  // a partial lowering, we explicitly mark the Toy operations that don't want
-  // to lower, `toy.print`, as `legal`.
-  target.addIllegalDialect<ToyDialect>();
-  target.addLegalOp<PrintOp>();
-
-  // Now that the conversion target has been defined, we just need to provide
-  // the set of patterns that will lower the Toy operations.
-  mlir::OwningRewritePatternList patterns;
-  patterns.insert<..., TransposeOpLowering>(&getContext());
+  ...
 
   // With the target and rewrite patterns defined, we can now attempt the
-  // conversion. The conversion will signal failure if any of our `illegal`
+  // conversion. The conversion will signal failure if any of our *illegal*
   // operations were not converted successfully.
   auto function = getFunction();
   if (mlir::failed(mlir::applyPartialConversion(function, target, patterns)))
@@ -195,29 +182,29 @@
 *   Generate `load` operations from the buffer
 
-One option is to generate `load` operations from the buffer type to materialize
-an instance of the value type. This allows for the definition of the `toy.print`
-operation to remain unchanged. The downside to this approach is that the
-optimizations on the `affine` dialect are limited, because the `load` will
-actually involve a full copy that is only visible *after* our optimizations have
-been performed.
+    One option is to generate `load` operations from the buffer type to materialize
+    an instance of the value type. This allows for the definition of the `toy.print`
+    operation to remain unchanged. The downside to this approach is that the
+    optimizations on the `affine` dialect are limited, because the `load` will
+    actually involve a full copy that is only visible *after* our optimizations have
+    been performed.
 
 *   Generate a new version of `toy.print` that operates on the lowered type
 
-Another option would be to have another, lowered, variant of `toy.print` that
-operates on the lowered type. The benefit of this option is that there is no
-hidden, unnecessary copy to the optimizer. The downside is that another
-operation definition is needed that may duplicate many aspects of the first.
-Defining a base class in [ODS](../../OpDefinitions.md) may simplify this, but
-you still need to treat these operations separately.
+    Another option would be to have another, lowered, variant of `toy.print` that
+    operates on the lowered type. The benefit of this option is that there is no
+    hidden, unnecessary copy to the optimizer. The downside is that another
+    operation definition is needed that may duplicate many aspects of the first.
+    Defining a base class in [ODS](../../OpDefinitions.md) may simplify this, but
+    you still need to treat these operations separately.
 
 *   Update `toy.print` to allow for operating on the lowered type
 
-A third option is to update the current definition of `toy.print` to allow for
-operating the on the lowered type. The benefit of this approach is that it is
-simple, does not introduce an additional hidden copy, and does not require
-another operation definition. The downside to this option is that it requires
-mixing abstraction levels in the `Toy` dialect.
+    A third option is to update the current definition of `toy.print` to allow for
+    operating on the lowered type. The benefit of this approach is that it is
+    simple, does not introduce an additional hidden copy, and does not require
+    another operation definition. The downside to this option is that it requires
+    mixing abstraction levels in the `Toy` dialect.
 
 For the sake of simplicity, we will use the third option for this lowering.
 This involves updating the type constraints on the PrintOp in the operation
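
Note (illustrative only, not part of the patch above): the new paragraph about registration order can be checked with a small sketch. It assumes the tutorial's `ToyDialect` and `PrintOp` declarations are in scope, as in the snippets this patch edits; the function name is hypothetical.

```c++
#include "mlir/IR/MLIRContext.h"
#include "mlir/Transforms/DialectConversion.h"

// Sketch: both registration orders configure the same legality rules, i.e.
// every Toy operation is illegal except `toy.print`, because per-operation
// legality always takes precedence over the dialect-level default.
static void configureToyLoweringTarget(mlir::MLIRContext &context) {
  mlir::ConversionTarget target(context);

  // Order used in the tutorial text: mark the dialect illegal first, then
  // carve out the single exception.
  target.addIllegalDialect<ToyDialect>();
  target.addLegalOp<PrintOp>();

  // The reversed order would produce an identical target:
  //   target.addLegalOp<PrintOp>();
  //   target.addIllegalDialect<ToyDialect>();
}
```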