diff --git a/mlir/docs/Dialects/Affine.md b/mlir/docs/Dialects/Affine.md
--- a/mlir/docs/Dialects/Affine.md
+++ b/mlir/docs/Dialects/Affine.md
@@ -60,25 +60,46 @@ ### Restrictions on Dimensions and Symbols
 
 The affine dialect imposes certain restrictions on dimension and symbolic
-identifiers to enable powerful analysis and transformation. An SSA value's use
-can be bound to a symbolic identifier if that SSA value is either 1. a region
-argument for an op with trait `AffineScope` (eg. `FuncOp`), 2. a value defined
-at the top level of an `AffineScope` op (i.e., immediately enclosed by the
-latter), 3. a value that dominates the `AffineScope` op enclosing the value's
-use, 4. the result of a
-[`constant` operation](Standard.md/#stdconstant-constantop), 5. the result of an
-[`affine.apply` operation](#affineapply-affineapplyop) that recursively takes as
-arguments any valid symbolic identifiers, or 6. the result of a
-[`dim` operation](MemRef.md/#memrefdim-mlirmemrefdimop) on either a memref that
-is an argument to a `AffineScope` op or a memref where the corresponding
-dimension is either static or a dynamic one in turn bound to a valid symbol.
-*Note:* if the use of an SSA value is not contained in any op with the
-`AffineScope` trait, only the rules 4-6 can be applied.
-
-Note that as a result of rule (3) above, symbol validity is sensitive to the
-location of the SSA use. Dimensions may be bound not only to anything that a
-symbol is bound to, but also to induction variables of enclosing
-[`affine.for`](#affinefor-affineforop) and
+identifiers to enable analysis and transformation. Any region-holding operation
+that does not have the [ExtendsAffineScope](../Traits.md#ExtendsAffineScope)
+trait starts an *affine scope*: a unit of IR (operations) used for
+affine/polyhedral optimization purposes. For more details, see
+[ExtendsAffineScope](../Traits.md#ExtendsAffineScope).
+
+The `affine.for`, `affine.parallel`, and `affine.if` ops extend the affine
+scope started, or in turn extended, by their parent operation.
+
+#### Symbolic SSA values
+An SSA value's use can be bound to a symbolic identifier if that SSA value is
+either:
+
+1. a region argument for an op without the
+   [ExtendsAffineScope](../Traits.md#ExtendsAffineScope) trait (eg. `FuncOp`),
+2. a value defined at the top level of an op that starts an affine scope (i.e.,
+   whose definition is immediately enclosed by such an op),
+3. a value that dominates the op starting the affine scope that the value's use
+   is part of,
+4. the result of a [`constant` operation](Standard.md/#stdconstant-constantop),
+5. the result of an [`affine.apply` operation](#affineapply-affineapplyop) that
+   recursively takes as arguments any valid symbolic identifiers, or
+6. the result of a [dim operation](MemRef.md/#memrefdim-mlirmemrefdimop) on
+   either a memref that is a region argument to an op starting an affine scope
+   or a memref where the corresponding dimension is either static or a dynamic
+   one in turn bound to a valid symbol.
+
+*Note:* if the use of an SSA value is not contained in any op that starts a new
+affine scope, only rules 4-6 can be applied.
+
+Note that as a result of rule (3) above, whether a value can be used as a
+symbol is sensitive to the location of the SSA use, or more precisely, to the
+affine scope that the use is part of.
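+
+For example, in the minimal sketch below `test.foo` stands in for an arbitrary
+region-holding operation without the `ExtendsAffineScope` trait (and
+`"terminate"` for an arbitrary terminator); its region therefore starts a new
+affine scope:
+
+```mlir
+func @symbol_examples(%n : index, %A : memref<?xf32>) {
+  %c0 = arith.constant 0 : index
+  // %n is a valid symbol here: it is a region argument of `func` (rule 1).
+  affine.for %i = 0 to %n {
+    "test.foo"() ({
+      // A new affine scope starts here. %i dominates it and so becomes a valid
+      // symbol by rule 3, even though it is only a dimension in the scope
+      // started by `func`.
+      %d = memref.dim %A, %c0 : memref<?xf32>  // Valid symbol by rules 2 and 6.
+      affine.for %j = %i to %d {
+        affine.load %A[%j] : memref<?xf32>
+      }
+      "terminate"() : () -> ()
+    }) : () -> ()
+  }
+  return
+}
+```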
+
+#### Dimensional SSA values
+Dimensions may be bound not only to anything that a symbol is bound to, but
+also to region arguments of any operation with the
+[ExtendsAffineScope](../Traits.md#ExtendsAffineScope) trait. This includes
+induction variables of enclosing [`affine.for`](#affinefor-affineforop) and
 [`affine.parallel`](#affineparallel-affineparallelop) operations, and the result
 of an [`affine.apply` operation](#affineapply-affineapplyop) (which recursively
 may use other dimensions and symbols).
diff --git a/mlir/docs/Traits.md b/mlir/docs/Traits.md
--- a/mlir/docs/Traits.md
+++ b/mlir/docs/Traits.md
@@ -188,21 +188,6 @@
 * `Header` - (`C++ class` -- `ODS class`(if applicable))
 
-### AffineScope
-
-* `OpTrait::AffineScope` -- `AffineScope`
-
-This trait is carried by region holding operations that define a new scope for
-the purposes of polyhedral optimization and the affine dialect in particular.
-Any SSA values of 'index' type that either dominate such operations, or are
-defined at the top-level of such operations, or appear as region arguments for
-such operations automatically become valid symbols for the polyhedral scope
-defined by that operation. As a result, such SSA values could be used as the
-operands or index operands of various affine dialect operations like affine.for,
-affine.load, and affine.store. The polyhedral scope defined by an operation with
-this trait includes all operations in its region excluding operations that are
-nested inside of other operations that themselves have this trait.
-
 ### AutomaticAllocationScope
 
 * `OpTrait::AutomaticAllocationScope` -- `AutomaticAllocationScope`
 
@@ -253,6 +238,46 @@
 trait. In particular, broadcasting behavior is not allowed. See the comments on
 `OpTrait::ElementwiseMappable` for the precise requirements.
 
+### ExtendsAffineScope
+
+* `OpTrait::ExtendsAffineScope` -- `ExtendsAffineScope`
+
+This trait is carried by region-holding operations that further extend an
+*affine scope*. An affine scope is a unit of IR (operations) used for
+affine/polyhedral optimization purposes. An affine scope is started by any
+region-holding operation without the `ExtendsAffineScope` trait. An operation
+with this trait extends the affine scope started or extended by its parent
+operation to its own blocks. An affine scope does not extend further when a
+region-holding operation without the `ExtendsAffineScope` trait is encountered.
+Any region arguments of an operation with the `ExtendsAffineScope` trait are
+valid [dimensional
+identifiers](Dialects/Affine.md#restrictions-on-dimensions-and-symbols) for that
+affine scope.
+
+The affine scope started by a region-holding operation without this trait
+includes all operations in its region excluding any operations that are nested
+inside other region-holding operations that themselves do not have this trait.
+An example is shown below: `AS0` and `AS1` are affine scopes.
+
+```
+region_op() {                                  // Starts a new affine scope AS0.
+  op_with_extends_affine_scope_trait(%a, %b) { // AS0
+    op1                                        // AS0
+    op2                                        // AS0
+    op3() {                                    // AS0; starts a new scope AS1.
+      op11                                     // AS1
+      op12                                     // AS1
+    }
+    affine.for %i = 0 to %N {                  // AS0
+      affine.for %j = 0 to %N {                // AS0
+        op22                                   // AS0
+      }
+    }
+  }
+}
+```
+
 ### Function-Like
 
 * `OpTrait::FunctionLike`
diff --git a/mlir/include/mlir/Dialect/Affine/IR/AffineOps.h b/mlir/include/mlir/Dialect/Affine/IR/AffineOps.h
--- a/mlir/include/mlir/Dialect/Affine/IR/AffineOps.h
+++ b/mlir/include/mlir/Dialect/Affine/IR/AffineOps.h
@@ -24,10 +24,10 @@
 class AffineBound;
 class AffineValueMap;
 
-/// A utility function to check if a value is defined at the top level of an
-/// op with trait `AffineScope` or is a region argument for such an op. A value
-/// of index type defined at the top level is always a valid symbol for all its
-/// uses.
+/// Checks if a value is a top level value of an op that starts an affine scope.
+/// If the value is defined in an unlinked region, it is conservatively assumed
+/// not to be top-level. A value of index type defined at the top level is
+/// always a valid symbol for affine purposes.
 bool isTopLevelValue(Value value);
 
 /// AffineDmaStartOp starts a non-blocking DMA operation that transfers data
@@ -318,20 +318,24 @@
   SmallVectorImpl &results);
 };
 
-/// Returns true if the given Value can be used as a dimension id in the region
-/// of the closest surrounding op that has the trait `AffineScope`.
+/// Returns true if the given value can be used as a dimension id at all its
+/// use sites. This is true iff it meets one of the following conditions:
+/// *) It is valid as a symbol.
+/// *) It is a region argument of an op with the `ExtendsAffineScope` trait
+///    (eg. an induction variable of an affine.for or affine.parallel).
+/// *) It is the result of an affine apply operation with dimension id arguments.
 bool isValidDim(Value value);
 
-/// Returns true if the given Value can be used as a dimension id in `region`,
-/// i.e., for all its uses in `region`.
+/// Returns true if the `value` can be used as a dimension id in the affine
+/// scope that begins at `region`.
 bool isValidDim(Value value, Region *region);
 
-/// Returns true if the given value can be used as a symbol in the region of the
-/// closest surrounding op that has the trait `AffineScope`.
+/// Returns true if `value` can be used as a symbol at all its use sites.
 bool isValidSymbol(Value value);
 
-/// Returns true if the given Value can be used as a symbol for `region`, i.e.,
-/// for all its uses in `region`.
+/// Returns true if the given Value can be used as a symbol in the affine
+/// scope that begins at `region`.
 bool isValidSymbol(Value value, Region *region);
 
 /// Parses dimension and symbol list. `numDims` is set to the number of
diff --git a/mlir/include/mlir/Dialect/Affine/IR/AffineOps.td b/mlir/include/mlir/Dialect/Affine/IR/AffineOps.td
--- a/mlir/include/mlir/Dialect/Affine/IR/AffineOps.td
+++ b/mlir/include/mlir/Dialect/Affine/IR/AffineOps.td
@@ -91,20 +91,21 @@
     /// Returns the affine value map computed from this operation.
     AffineValueMap getAffineValueMap();
 
-    /// Returns true if the result of this operation can be used as dimension id
-    /// in the region of the closest surrounding op with trait AffineScope.
+    /// Returns true if the result of this operation can be used as a dimension
+    /// id at all of its use sites.
     bool isValidDim();
 
     /// Returns true if the result of this operation can be used as dimension id
-    /// within 'region', i.e., for all its uses with `region`.
+    /// within the affine scope starting with 'region', i.e., for all its uses
+    /// in that affine scope.
bool isValidDim(Region *region); - /// Returns true if the result of this operation is a symbol in the region - /// of the closest surrounding op that has the trait AffineScope. + /// Returns true if the result of this operation is a valid symbol for all + /// of its uses. bool isValidSymbol(); - /// Returns true if the result of this operation is a symbol for all its - /// uses in `region`. + /// Returns true if the result of this operation is a valid symbol for all its + /// uses in the affine scope starting at `region`. bool isValidSymbol(Region *region); operand_range getMapOperands() { return getOperands(); } @@ -115,7 +116,7 @@ } def AffineForOp : Affine_Op<"for", - [ImplicitAffineTerminator, RecursiveSideEffects, + [ExtendsAffineScope, ImplicitAffineTerminator, RecursiveSideEffects, DeclareOpInterfaceMethods]> { let summary = "for operation"; let description = [{ @@ -353,8 +354,8 @@ } def AffineIfOp : Affine_Op<"if", - [ImplicitAffineTerminator, RecursiveSideEffects, - NoRegionArguments]> { + [ExtendsAffineScope, ImplicitAffineTerminator, + RecursiveSideEffects, NoRegionArguments]> { let summary = "if-then-else operation"; let description = [{ Syntax: @@ -611,7 +612,7 @@ } def AffineParallelOp : Affine_Op<"parallel", - [ImplicitAffineTerminator, RecursiveSideEffects, + [ExtendsAffineScope, ImplicitAffineTerminator, RecursiveSideEffects, DeclareOpInterfaceMethods, MemRefsNormalizable]> { let summary = "multi-index parallel band operation"; let description = [{ diff --git a/mlir/include/mlir/Dialect/Shape/IR/ShapeOps.td b/mlir/include/mlir/Dialect/Shape/IR/ShapeOps.td --- a/mlir/include/mlir/Dialect/Shape/IR/ShapeOps.td +++ b/mlir/include/mlir/Dialect/Shape/IR/ShapeOps.td @@ -1025,8 +1025,8 @@ //===----------------------------------------------------------------------===// def Shape_FunctionLibraryOp : Shape_Op<"function_library", - [AffineScope, IsolatedFromAbove, NoRegionArguments, SymbolTable, Symbol, - NoTerminator, SingleBlock]> { + [IsolatedFromAbove, NoRegionArguments, SymbolTable, Symbol, NoTerminator, + SingleBlock]> { let summary = "Represents shape functions and corresponding ops"; let description = [{ Represents a list of shape functions and the ops whose shape transfer diff --git a/mlir/include/mlir/IR/BuiltinOps.td b/mlir/include/mlir/IR/BuiltinOps.td --- a/mlir/include/mlir/IR/BuiltinOps.td +++ b/mlir/include/mlir/IR/BuiltinOps.td @@ -32,8 +32,8 @@ //===----------------------------------------------------------------------===// def FuncOp : Builtin_Op<"func", [ - AffineScope, AutomaticAllocationScope, CallableOpInterface, FunctionLike, - IsolatedFromAbove, Symbol + AutomaticAllocationScope, CallableOpInterface, FunctionLike, IsolatedFromAbove, + Symbol ]> { let summary = "An operation with a name containing a single `SSACFG` region"; let description = [{ @@ -161,8 +161,7 @@ //===----------------------------------------------------------------------===// def ModuleOp : Builtin_Op<"module", [ - AffineScope, IsolatedFromAbove, NoRegionArguments, SymbolTable, Symbol, - OpAsmOpInterface + IsolatedFromAbove, NoRegionArguments, SymbolTable, Symbol, OpAsmOpInterface ] # GraphRegionNoTerminator.traits> { let summary = "A top level container operation"; let description = [{ diff --git a/mlir/include/mlir/IR/OpBase.td b/mlir/include/mlir/IR/OpBase.td --- a/mlir/include/mlir/IR/OpBase.td +++ b/mlir/include/mlir/IR/OpBase.td @@ -1916,8 +1916,8 @@ class GenInternalOpTrait : GenInternalTrait, OpTrait; class PredOpTrait : PredTrait, OpTrait; -// Op defines an affine scope. 
-def AffineScope : NativeOpTrait<"AffineScope">; +// Op extends an affine scope. See `Traits.md#ExtendsAffineScope`. +def ExtendsAffineScope : NativeOpTrait<"ExtendsAffineScope">; // Op defines an automatic allocation scope. def AutomaticAllocationScope : NativeOpTrait<"AutomaticAllocationScope">; // Op supports operand broadcast behavior. diff --git a/mlir/include/mlir/IR/OpDefinition.h b/mlir/include/mlir/IR/OpDefinition.h --- a/mlir/include/mlir/IR/OpDefinition.h +++ b/mlir/include/mlir/IR/OpDefinition.h @@ -1173,13 +1173,17 @@ } }; -/// A trait of region holding operations that defines a new scope for polyhedral -/// optimization purposes. Any SSA values of 'index' type that either dominate -/// such an operation or are used at the top-level of such an operation -/// automatically become valid symbols for the polyhedral scope defined by that -/// operation. For more details, see `Traits.md#AffineScope`. +/// A trait of region-holding operations that further extends an `affine scope`. +/// An affine scope is used for affine/polyhedral optimization purposes and is +/// started by any region-holding operation without the `ExtendsAffineScope` +/// trait. An operation with the `ExtendsAffineScope` trait further extends the +/// affine scope started or extended by its parent operation to its own blocks. +/// The affine scope doesn't extend further when a region-holding operation +/// without this trait is encountered. Any region arguments of an operation with +/// the `ExtendsAffineScope` trait are valid `dimensional` identifiers for the +/// affine scope. For more details, see `Traits.md#ExtendsAffineScope`. template -class AffineScope : public TraitBase { +class ExtendsAffineScope : public TraitBase { public: static LogicalResult verifyTrait(Operation *op) { static_assert(!ConcreteType::template hasTrait(), diff --git a/mlir/lib/Analysis/AffineAnalysis.cpp b/mlir/lib/Analysis/AffineAnalysis.cpp --- a/mlir/lib/Analysis/AffineAnalysis.cpp +++ b/mlir/lib/Analysis/AffineAnalysis.cpp @@ -32,8 +32,6 @@ using namespace mlir; -using llvm::dbgs; - /// Get the value that is being reduced by `pos`-th reduction in the loop if /// such a reduction can be performed by affine parallel loops. This assumes /// floating-point operations are commutative. On success, `kind` will be the @@ -277,7 +275,7 @@ const FlatAffineValueConstraints &srcDomain, unsigned numCommonLoops) { // Get the chain of ancestor blocks to the given `MemRefAccess` instance. The - // search terminates when either an op with the `AffineScope` trait or + // search terminates when either an op that starts an affine scope or // `endBlock` is reached. auto getChainOfAncestorBlocks = [&](const MemRefAccess &access, SmallVector &ancestorBlocks, @@ -286,7 +284,7 @@ // Loop terminates when the currBlock is nullptr or equals to the endBlock, // or its parent operation holds an affine scope. while (currBlock && currBlock != endBlock && - !currBlock->getParentOp()->hasTrait()) { + currBlock->getParentOp()->hasTrait()) { ancestorBlocks.push_back(currBlock); currBlock = currBlock->getParentOp()->getBlock(); } diff --git a/mlir/lib/Dialect/Affine/IR/AffineOps.cpp b/mlir/lib/Dialect/Affine/IR/AffineOps.cpp --- a/mlir/lib/Dialect/Affine/IR/AffineOps.cpp +++ b/mlir/lib/Dialect/Affine/IR/AffineOps.cpp @@ -30,9 +30,7 @@ #include "mlir/Dialect/Affine/IR/AffineOpsDialect.cpp.inc" /// A utility function to check if a value is defined at the top level of -/// `region` or is an argument of `region`. 
A value of index type defined at the -/// top level of a `AffineScope` region is always a valid symbol for all -/// uses in that region. +/// `region` or is an argument of `region`. static bool isTopLevelValue(Value value, Region *region) { if (auto arg = value.dyn_cast()) return arg.getParentRegion() == region; @@ -146,10 +144,11 @@ /// rather than moved, indicating there may be other users. bool isLegalToInline(Region *dest, Region *src, bool wouldBeCloned, BlockAndValueMapping &valueMapping) const final { - // We can inline into affine loops and conditionals if this doesn't break - // affine value categorization rules. + // We can inline into affine loops, conditionals, and such other ops with + // the trait `ExtendsAffineScope` if this doesn't break affine value + // categorization rules. Operation *destOp = dest->getParentOp(); - if (!isa(destOp)) + if (!destOp->hasTrait()) return false; // Multi-block regions cannot be inlined into affine constructs, all of @@ -191,13 +190,12 @@ /// dialect, can be inlined into the given region, false otherwise. bool isLegalToInline(Operation *op, Region *region, bool wouldBeCloned, BlockAndValueMapping &valueMapping) const final { - // Always allow inlining affine operations into a region that is marked as - // affine scope, or into affine loops and conditionals. There are some edge - // cases when inlining *into* affine structures, but that is handled in the - // other 'isLegalToInline' hook above. - Operation *parentOp = region->getParentOp(); - return parentOp->hasTrait() || - isa(parentOp); + // Always allow inlining affine operations into any region. Region-holding + // operations without the `ExtendsAffineScope` trait always start a new + // affine scope, and so it's legal to inline into them. Those with the + // `ExtendsAffineScope` trait cannot always be inlined into, but that is + // handled in the other `isLegalToInline` hook above. + return true; } /// Affine regions should be analyzed recursively. @@ -225,30 +223,33 @@ return builder.create(loc, type, value); } -/// A utility function to check if a value is defined at the top level of an -/// op with trait `AffineScope`. If the value is defined in an unlinked region, -/// conservatively assume it is not top-level. A value of index type defined at -/// the top level is always a valid symbol. +/// Checks if a value is a top level value of an op that starts an affine scope. +/// If the value is defined in an unlinked region, it is conservatively assumed +/// not to be top-level. A value of index type defined at the top level is +/// always a valid symbol for affine purposes. bool mlir::isTopLevelValue(Value value) { if (auto arg = value.dyn_cast()) { + Operation *parentOp = arg.getOwner()->getParentOp(); + // The value can't be a block argument owned by an op extending an affine + // scope -- in the latter case, it can only be a dimensional value. // The block owning the argument may be unlinked, e.g. when the surrounding - // region has not yet been attached to an Op, at which point the parent Op + // region has not yet been attached to an op, at which point the parent op // is null. - Operation *parentOp = arg.getOwner()->getParentOp(); - return parentOp && parentOp->hasTrait(); + return parentOp && !parentOp->hasTrait(); } - // The defining Op may live in an unlinked block so its parent Op may be null. + // The defining op when it exists has to have a parent op that starts an + // affine scope. The defining op may live in an unlinked region's block so its + // parent op may be null. 
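+  // E.g., a value defined directly in a `func` or `scf.for` body (ops that
+  // start affine scopes) is top-level, while one defined in an `affine.for`
+  // body is not, since `affine.for` extends its enclosing scope.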
 Operation *parentOp = value.getDefiningOp()->getParentOp();
-  return parentOp && parentOp->hasTrait<OpTrait::AffineScope>();
+  return parentOp && !parentOp->hasTrait<OpTrait::ExtendsAffineScope>();
 }
 
-/// Returns the closest region enclosing `op` that is held by an operation with
-/// trait `AffineScope`; `nullptr` if there is no such region.
-// TODO: getAffineScope should be publicly exposed for affine passes/utilities.
+/// Returns the closest region enclosing `op` that is held by an operation that
+/// starts an affine scope; `nullptr` if there is no such region.
 static Region *getAffineScope(Operation *op) {
-  auto *curOp = op;
+  Operation *curOp = op;
   while (auto *parentOp = curOp->getParentOp()) {
-    if (parentOp->hasTrait<OpTrait::AffineScope>())
+    if (!parentOp->hasTrait<OpTrait::ExtendsAffineScope>())
       return curOp->getParentRegion();
     curOp = parentOp;
   }
@@ -257,29 +258,32 @@
 // A Value can be used as a dimension id iff it meets one of the following
 // conditions:
-// *) It is valid as a symbol.
-// *) It is an induction variable.
-// *) It is the result of affine apply operation with dimension id arguments.
+// 1) It is valid as a symbol.
+// 2) It is a region argument of an op with the trait `ExtendsAffineScope`
+//    (eg. the induction variable of an affine.for or affine.parallel).
+// 3) It is the result of an affine apply operation with dimension id arguments.
 bool mlir::isValidDim(Value value) {
   // The value must be an index type.
   if (!value.getType().isIndex())
     return false;
 
-  if (auto *defOp = value.getDefiningOp())
-    return isValidDim(value, getAffineScope(defOp));
+  // Conditions (1) and (2) above imply any block argument would be a valid
+  // dimensional identifier.
+  if (value.isa<BlockArgument>())
+    return true;
 
-  // This value has to be a block argument for an op that has the
-  // `AffineScope` trait or for an affine.for or affine.parallel.
-  auto *parentOp = value.cast<BlockArgument>().getOwner()->getParentOp();
-  return parentOp && (parentOp->hasTrait<OpTrait::AffineScope>() ||
-                      isa<AffineForOp, AffineParallelOp>(parentOp));
+  // If defined by an op, the value has to be a valid dim for the affine scope
+  // its definition is part of.
+  return isValidDim(value, getAffineScope(value.getDefiningOp()));
 }
 
-// Value can be used as a dimension id iff it meets one of the following
-// conditions:
-// *) It is valid as a symbol.
-// *) It is an induction variable.
-// *) It is the result of an affine apply operation with dimension id operands.
+/// Returns true if the given Value can be used as a dimension id in the affine
+/// scope starting at `region`, i.e., for all its uses in such an affine scope.
+/// This is true if the value meets one of the following conditions:
+// *) It is valid as a symbol for `region`.
+// *) It is a region argument of an op with the `ExtendsAffineScope` trait
+//    (eg. an induction variable of an affine.for or affine.parallel).
+// *) It is the result of an affine apply operation with dimensional operands.
 bool mlir::isValidDim(Value value, Region *region) {
   // The value must be an index type.
   if (!value.getType().isIndex())
     return false;
 
@@ -289,19 +293,25 @@
   if (isValidSymbol(value, region))
     return true;
 
-  auto *op = value.getDefiningOp();
+  Operation *op = value.getDefiningOp();
   if (!op) {
-    // This value has to be a block argument for an affine.for or an
-    // affine.parallel.
+    // This value has to be a block argument for an op that extends the affine
+    // scope of `region`. An unlinked region's block arguments are not valid
+    // dimension ids.
     auto *parentOp = value.cast<BlockArgument>().getOwner()->getParentOp();
-    return isa<AffineForOp, AffineParallelOp>(parentOp);
+    if (!parentOp || !parentOp->hasTrait<OpTrait::ExtendsAffineScope>())
+      return false;
+    return getAffineScope(parentOp) == region;
   }
 
   // Affine apply operation is ok if all of its operands are ok.
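+  // E.g., the result of `affine.apply affine_map<(d0, d1) -> (d0 + d1)>(%i, %j)`
+  // is a valid dim for `region` when %i and %j themselves are valid dims there.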
if (auto applyOp = dyn_cast(op)) return applyOp.isValidDim(region); - // The dim op is okay if its operand memref/tensor is defined at the top - // level. + // The dim op is okay if its operand memref/tensor is a top-level value for + // `region` or dominates region. + // FIXME: these conditions are conservative; fix these in a subsequent + // revision. Use `isDimOpValidSymbol` and get rid of some conservative + // behavior in `isDimOpValidSymbol` as well. if (auto dimOp = dyn_cast(op)) return isTopLevelValue(dimOp.source()); if (auto dimOp = dyn_cast(op)) @@ -325,11 +335,12 @@ region); } -/// Returns true if the result of the dim op is a valid symbol for `region`. +/// Returns true if the result of the dim op is a valid symbol for the affine +/// scope starting at `region`. template static bool isDimOpValidSymbol(OpTy dimOp, Region *region) { // The dim op is okay if its source is defined at the top level. - if (isTopLevelValue(dimOp.source())) + if (isTopLevelValue(dimOp.source(), region)) return true; // Conservatively handle remaining BlockArguments as non-valid symbols. @@ -353,7 +364,7 @@ // the following conditions: // *) It is a constant. // *) Its defining op or block arg appearance is immediately enclosed by an op -// with `AffineScope` trait. +// that starts an affine scope. // *) It is the result of an affine.apply operation with symbol operands. // *) It is a result of the dim op on a memref whose corresponding size is a // valid symbol. @@ -375,8 +386,8 @@ return false; } -/// A value can be used as a symbol for `region` iff it meets one of the -/// following conditions: +/// A value can be used as a symbol in the affine scope that begins at `region`. +/// iff it meets one of the following conditions: /// *) It is a constant. /// *) It is the result of an affine apply operation with symbol arguments. /// *) It is a result of the dim op on a memref whose corresponding size is @@ -398,10 +409,10 @@ auto *defOp = value.getDefiningOp(); if (!defOp) { // A block argument that is not a top-level value is a valid symbol if it - // dominates region's parent op. + // dominates `region`'s parent op. Operation *regionOp = region ? region->getParentOp() : nullptr; if (regionOp && !regionOp->hasTrait()) - if (auto *parentOpRegion = region->getParentOp()->getParentRegion()) + if (auto *parentOpRegion = regionOp->getParentRegion()) return isValidSymbol(value, parentOpRegion); return false; } @@ -551,7 +562,7 @@ // defining the polyhedral scope for symbols. 
bool AffineApplyOp::isValidDim(Region *region) { return llvm::all_of(getOperands(), - [&](Value op) { return ::isValidDim(op, region); }); + [&](Value op) { return mlir::isValidDim(op, region); }); } // The result of the affine apply operation can be used as a symbol if all its diff --git a/mlir/lib/Dialect/Affine/Transforms/AffineParallelize.cpp b/mlir/lib/Dialect/Affine/Transforms/AffineParallelize.cpp --- a/mlir/lib/Dialect/Affine/Transforms/AffineParallelize.cpp +++ b/mlir/lib/Dialect/Affine/Transforms/AffineParallelize.cpp @@ -63,7 +63,7 @@ unsigned numParentParallelOps = 0; AffineForOp loop = candidate.loop; for (Operation *op = loop->getParentOp(); - op != nullptr && !op->hasTrait(); + op && op->hasTrait(); op = op->getParentOp()) { if (isa(op)) ++numParentParallelOps; diff --git a/mlir/test/Conversion/AffineToStandard/lower-affine.mlir b/mlir/test/Conversion/AffineToStandard/lower-affine.mlir --- a/mlir/test/Conversion/AffineToStandard/lower-affine.mlir +++ b/mlir/test/Conversion/AffineToStandard/lower-affine.mlir @@ -372,7 +372,7 @@ // CHECK-NEXT: for %{{.*}} = %[[c0]] to %[[c42]] step %[[c1]] { // CHECK-NEXT: %[[cm1:.*]] = arith.constant -1 : index // CHECK-NEXT: %[[a:.*]] = arith.muli %{{.*}}, %[[cm1]] : index -// CHECK-NEXT: %[[b:.*]] = arith.addi %[[a]], %{{.*}} : index +// CHECK-NEXT: %[[b:.*]] = arith.addi %{{.*}}, %[[a]] : index // CHECK-NEXT: %[[c:.*]] = arith.cmpi sgt, %{{.*}}, %[[b]] : index // CHECK-NEXT: %[[d:.*]] = select %[[c]], %{{.*}}, %[[b]] : index // CHECK-NEXT: %[[c10:.*]] = arith.constant 10 : index diff --git a/mlir/test/Dialect/Affine/canonicalize.mlir b/mlir/test/Dialect/Affine/canonicalize.mlir --- a/mlir/test/Dialect/Affine/canonicalize.mlir +++ b/mlir/test/Dialect/Affine/canonicalize.mlir @@ -426,6 +426,25 @@ // ----- +// CHECK-LABEL: func @symbol_or_dim +func @symbol_or_dim() { + %c0 = arith.constant 0 : index + %c1 = arith.constant 1 : index + %c100 = arith.constant 100 : index + affine.for %i = 0 to 100 { + scf.for %j = %c0 to %c100 step %c1 { + // %j should be a symbol here since it's part of the affine scope started + // by the above scf.for. + %s = affine.apply affine_map<(d0) -> (2 * d0)>(%j) + // CHECK: affine.apply #{{.*}}()[%{{.*}}] + "test.foo"(%s) : (index) -> () + } + } + return +} + +// ----- + // CHECK: #[[$MAP0:.*]] = affine_map<()[s0] -> (0, s0)> // CHECK: #[[$MAP1:.*]] = affine_map<()[s0] -> (100, s0)> diff --git a/mlir/test/Dialect/Affine/invalid.mlir b/mlir/test/Dialect/Affine/invalid.mlir --- a/mlir/test/Dialect/Affine/invalid.mlir +++ b/mlir/test/Dialect/Affine/invalid.mlir @@ -50,19 +50,6 @@ return } -// ----- -func @affine_load_invalid_dim(%M : memref<10xi32>) { - "unknown"() ({ - ^bb0(%arg: index): - affine.load %M[%arg] : memref<10xi32> - // expected-error@-1 {{index must be a dimension or symbol identifier}} - br ^bb1 - ^bb1: - br ^bb1 - }) : () -> () - return -} - // ----- #map0 = affine_map<(d0)[s0] -> (d0 + s0)> @@ -133,6 +120,34 @@ // ----- +// Test symbol and dim constraints for ops with ExtendAffineScope trait. + +// CHECK-LABEL: func @valid_symbol_affine_scope +func @valid_symbol_affine_scope(%n : index, %A : memref) { + // Any region holding op starts an affine scope unless it has an + // `ExtendAffineScope` trait in which case it extends the affine scope started + // by its parent op. + "test.foo"() ({ + %c1 = arith.constant 1 : index + %l = arith.subi %n, %c1 : index + // %l, %n are valid symbols + test.affine_scope_extend { + // %d is a valid dimensional identifier. 
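+      // (`test.affine_scope_extend` carries the `ExtendsAffineScope` trait, so
+      // it extends the affine scope started by "test.foo" above rather than
+      // starting a new one.)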
+    ^bb0(%d : index):
+      %s = affine.load %A[%d] : memref
+      // However, %s isn't.
+      // expected-error@+1 {{index must be a dimension or symbol identifier}}
+      affine.load %A[%s] : memref
+      "terminate"() : () -> ()
+    }
+    "terminate"() : () -> ()
+  }) : () -> ()
+  return
+}
+
+// -----
+
 func @affine_store_missing_l_square(%C: memref<4096x4096xf32>) {
   %9 = arith.constant 0.0 : f32
   // expected-error@+1 {{expected '['}}
diff --git a/mlir/test/Dialect/Affine/ops.mlir b/mlir/test/Dialect/Affine/ops.mlir
--- a/mlir/test/Dialect/Affine/ops.mlir
+++ b/mlir/test/Dialect/Affine/ops.mlir
@@ -94,7 +94,7 @@
 // -----
 
-func @valid_symbols(%arg0: index, %arg1: index, %arg2: index) {
+func @valid_symbols(%arg0: index, %arg1: index, %arg2: index, %M: memref<10xi32>) {
   %c1 = arith.constant 1 : index
   %c0 = arith.constant 0 : index
   %0 = memref.alloc(%arg0, %arg1) : memref<?x?xf32>
@@ -110,39 +110,74 @@
       }
     }
   }
+  "test.unknown"() ({
+  ^bb0(%arg: index):
+    // %arg is a valid symbolic identifier.
+    affine.load %M[%arg] : memref<10xi32>
+    "test.yield"() : () -> ()
+  }) : () -> ()
   return
 }
 
 // -----
 
-// Test symbol constraints for ops with AffineScope trait.
+// Test ops with the ExtendsAffineScope trait.
 
 // CHECK-LABEL: func @valid_symbol_affine_scope
 func @valid_symbol_affine_scope(%n : index, %A : memref<?xf32>) {
-  test.affine_scope {
+  // Any region-holding op starts an affine scope unless it has the
+  // `ExtendsAffineScope` trait, in which case it extends the affine scope
+  // started by its parent op.
+  "test.foo"() ({
     %c1 = arith.constant 1 : index
     %l = arith.subi %n, %c1 : index
-    // %l, %n are valid symbols since test.affine_scope defines a new affine
-    // scope.
+    // %l, %n are valid symbols.
    affine.for %i = %l to %n {
-      %m = arith.subi %l, %i : index
-      test.affine_scope {
-        // %m and %n are valid symbols.
-        affine.for %j = %m to %n {
-          %v = affine.load %A[%n - 1] : memref<?xf32>
-          affine.store %v, %A[%n - 1] : memref<?xf32>
-        }
+    }
+    test.affine_scope_extend {
+      // %d is a valid dimensional identifier.
+    ^bb0(%d : index):
+      affine.load %A[%d] : memref<?xf32>
       "terminate"() : () -> ()
-      }
     }
     "terminate"() : () -> ()
+  }) : () -> ()
   return
 }
+
+// -----
+
+// Test dim/symbol rules involving memref/tensor dim ops.
+
+func @valid_memref_dim_symbols(%M : index) {
+  %c0 = arith.constant 0 : index
+  scf.execute_region {
+    %A = memref.alloc() : memref<8xf32>
+    affine.for %i = 0 to 100 {
+      // %N is a valid symbol: it is the result of a dim op whose memref source
+      // %A is defined at the top level of a region-holding op that doesn't
+      // have the ExtendsAffineScope trait (the scf.execute_region).
+      %N = memref.dim %A, %c0 : memref<8xf32>
+      affine.for %j = 0 to %N {
+      }
+    }
+    scf.yield
+  }
+  "test.foo"() ({
+    %A = arith.constant dense<0.0> : tensor<8xf32>
+    affine.for %i = 0 to 100 {
+      // Similarly, %N is a valid symbol: its tensor source %A is defined at
+      // the top level of the "test.foo" op, which starts an affine scope.
+      %N = tensor.dim %A, %c0 : tensor<8xf32>
+      affine.for %j = 0 to %N {
+      }
+    }
+  }) : () -> ()
+  return
+}
 
 // -----
 
-// Test the fact that module op always provides an affine scope.
+// Test the fact that module op always starts an affine scope.
%idx = "test.foo"() : () -> (index) "test.func"() ({ diff --git a/mlir/test/Dialect/Linalg/comprehensive-module-bufferize.mlir b/mlir/test/Dialect/Linalg/comprehensive-module-bufferize.mlir --- a/mlir/test/Dialect/Linalg/comprehensive-module-bufferize.mlir +++ b/mlir/test/Dialect/Linalg/comprehensive-module-bufferize.mlir @@ -559,7 +559,7 @@ // CHECK-DAG: #[[$DYN_0D_MAP:.*]] = affine_map<()[s0] -> (s0)> // CHECK-DAG: #[[$DYN_1D_MAP:.*]] = affine_map<(d0)[s0, s1] -> (d0 * s1 + s0)> -// CHECK-DAG: #[[$TILE_MAP:.*]] = affine_map<(d0)[s0] -> (3, -d0 + s0)> +// CHECK-DAG: #[[$TILE_MAP:.*]] = affine_map<()[s0, s1] -> (3, s0 - s1)> // CHECK: func @tiled_dot( // CHECK-SAME: %[[A:[a-zA-Z0-9]*]]: memref diff --git a/mlir/test/Dialect/Linalg/fusion-indexed.mlir b/mlir/test/Dialect/Linalg/fusion-indexed.mlir --- a/mlir/test/Dialect/Linalg/fusion-indexed.mlir +++ b/mlir/test/Dialect/Linalg/fusion-indexed.mlir @@ -100,14 +100,14 @@ } return } -// CHECK: [[$MAP:#[a-zA-Z0-9_]*]] = affine_map<(d0, d1) -> (d0 + d1)> +// CHECK: [[$MAP:#[a-zA-Z0-9_]*]] = affine_map<()[s0, s1] -> (s0 + s1)> // CHECK-LABEL: func @fuse_indexed_producer // CHECK: scf.parallel ([[I:%.*]], [[J:%.*]]) = // CHECK: linalg.generic -// CHECK: [[idx0:%.*]] = linalg.index 0 : index -// CHECK: [[i_new:%.*]] = affine.apply [[$MAP]]([[idx0]], [[J]]) -// CHECK: [[idx1:%.*]] = linalg.index 1 : index -// CHECK: [[j_new:%.*]] = affine.apply [[$MAP]]([[idx1]], [[I]]) +// CHECK: %[[idx0:.*]] = linalg.index 0 : index +// CHECK: [[i_new:%.*]] = affine.apply [[$MAP]]()[%[[idx0]], [[J]]] +// CHECK: %[[idx1:.*]] = linalg.index 1 : index +// CHECK: [[j_new:%.*]] = affine.apply [[$MAP]]()[%[[idx1]], [[I]]] // CHECK: [[sum:%.*]] = arith.addi [[i_new]], [[j_new]] : index // CHECK: linalg.yield [[sum]] : index // CHECK: linalg.generic @@ -149,14 +149,13 @@ } return } -// CHECK: [[$MAP:#[a-zA-Z0-9_]*]] = affine_map<(d0, d1) -> (d0 + d1)> +// CHECK: [[$MAP:#[a-zA-Z0-9_]*]] = affine_map<()[s0, s1] -> (s0 + s1)> // CHECK-LABEL: func @fuse_indexed_producer_tiled_second_dim_only // CHECK: scf.parallel ([[J:%.*]]) = // CHECK: linalg.generic // CHECK: [[idx0:%.*]] = linalg.index 0 : index -// CHECK: [[idx1:%.*]] = linalg.index 1 : index -// CHECK: [[j_new:%.*]] = affine.apply [[$MAP]]([[idx1]], [[J]]) +// CHECK: %[[idx1:.*]] = linalg.index 1 : index +// CHECK: [[j_new:%.*]] = affine.apply [[$MAP]]()[%[[idx1]], [[J]]] // CHECK: [[sum:%.*]] = arith.addi [[idx0]], [[j_new]] : index // CHECK: linalg.yield [[sum]] : index // CHECK: linalg.generic - diff --git a/mlir/test/Dialect/Linalg/fusion-pattern.mlir b/mlir/test/Dialect/Linalg/fusion-pattern.mlir --- a/mlir/test/Dialect/Linalg/fusion-pattern.mlir +++ b/mlir/test/Dialect/Linalg/fusion-pattern.mlir @@ -12,12 +12,12 @@ } } -// CHECK-DAG: #[[MAP0:.+]] = affine_map<(d0)[s0] -> (32, -d0 + s0)> +// CHECK-DAG: #[[MAP0:.+]] = affine_map<()[s0, s1] -> (32, s0 - s1)> // CHECK-DAG: #[[MAP1:.+]] = affine_map<(d0, d1)[s0, s1] -> (d0 * s1 + s0 + d1)> -// CHECK-DAG: #[[MAP2:.+]] = affine_map<(d0)[s0] -> (64, -d0 + s0)> -// CHECK-DAG: #[[MAP3:.+]] = affine_map<(d0)[s0] -> (16, -d0 + s0)> -// CHECK-DAG: #[[MAP4:.+]] = affine_map<(d0)[s0, s1] -> (-d0 + s0, 32, -d0 + s1)> -// CHECK-DAG: #[[MAP5:.+]] = affine_map<(d0)[s0, s1] -> (-d0 + s0, 64, -d0 + s1)> +// CHECK-DAG: #[[MAP2:.+]] = affine_map<()[s0, s1] -> (64, s0 - s1)> +// CHECK-DAG: #[[MAP3:.+]] = affine_map<()[s0, s1] -> (16, s0 - s1)> +// CHECK-DAG: #[[MAP4:.+]] = affine_map<()[s0, s1, s2] -> (s0 - s1, 32, -s1 + s2)> +// CHECK-DAG: #[[MAP5:.+]] = affine_map<()[s0, s1, s2] -> (s0 
- s1, 64, -s1 + s2)> // CHECK: func @basic_fusion // CHECK-SAME: %[[ARG0:[a-zA-Z0-9_]+]]: memref // CHECK-SAME: %[[ARG1:[a-zA-Z0-9_]+]]: memref @@ -35,26 +35,26 @@ // CHECK: scf.parallel (%[[IV0:.+]], %[[IV1:.+]]) = // CHECK-SAME: to (%[[M]], %[[N]]) // CHECK-SAME: step (%[[C32]], %[[C64]]) { -// CHECK: %[[TILE_M:.+]] = affine.min #[[MAP0]](%[[IV0]])[%[[M]]] +// CHECK: %[[TILE_M:.+]] = affine.min #[[MAP0]]()[%[[M]], %[[IV0]]] // CHECK: %[[K:.+]] = memref.dim %[[ARG0]], %[[C1]] // CHECK: %[[SV1:.+]] = memref.subview %[[ARG0]][%[[IV0]], 0] // CHECK-SAME: [%[[TILE_M]], %[[K]]] // CHECK: %[[K_2:.+]] = memref.dim %[[ARG1]], %[[C0]] -// CHECK: %[[TILE_N:.+]] = affine.min #[[MAP2]](%[[IV1]])[%[[N]]] +// CHECK: %[[TILE_N:.+]] = affine.min #[[MAP2]]()[%[[N]], %[[IV1]]] // CHECK: %[[SV2:.+]] = memref.subview %[[ARG1]][0, %[[IV1]]] // CHECK-SAME: %[[K_2]], %[[TILE_N]] // CHECK: %[[SV3:.+]] = memref.subview %[[ARG2]][%[[IV0]], %[[IV1]]] // CHECK-SAME: [%[[TILE_M]], %[[TILE_N]]] // CHECK: %[[M_2:.+]] = memref.dim %[[ARG2]], %[[C0]] // CHECK: %[[N_2:.+]] = memref.dim %[[ARG2]], %[[C1]] -// CHECK: %[[TILE_M_3:.+]] = affine.min #[[MAP4]](%[[IV0]])[%[[M_2]], %[[M]]] -// CHECK: %[[TILE_N_3:.+]] = affine.min #[[MAP5]](%[[IV1]])[%[[N_2]], %[[N]]] +// CHECK: %[[TILE_M_3:.+]] = affine.min #[[MAP4]]()[%[[M_2]], %[[IV0]], %[[M]]] +// CHECK: %[[TILE_N_3:.+]] = affine.min #[[MAP5]]()[%[[N_2]], %[[IV1]], %[[N]]] // CHECK: %[[SV3_2:.+]] = memref.subview %[[ARG2]][%[[IV0]], %[[IV1]]] // CHECK-SAME: [%[[TILE_M_3]], %[[TILE_N_3]]] // CHECK: linalg.fill(%[[CST]], %[[SV3_2]]) // CHECK-SAME: __internal_linalg_transform__ = "after_basic_fusion_producer" // CHECK: scf.for %[[IV2:.+]] = %[[C0]] to %[[K]] step %[[C16]] { -// CHECK: %[[TILE_K:.+]] = affine.min #[[MAP3]](%[[IV2]])[%[[K]]] +// CHECK: %[[TILE_K:.+]] = affine.min #[[MAP3]]()[%[[K]], %[[IV2]]] // CHECK: %[[SV4:.+]] = memref.subview %[[SV1]][0, %[[IV2]]] // CHECK-SAME: [%[[TILE_M]], %[[TILE_K]]] // CHECK: %[[SV5:.+]] = memref.subview %[[SV2]][%[[IV2]], 0] @@ -83,11 +83,12 @@ return } } -// CHECK-DAG: #[[MAP0:.+]] = affine_map<(d0)[s0] -> (64, -d0 + s0)> +// CHECK-DAG: #[[MAP0:.+]] = affine_map<()[s0, s1] -> (64, s0 - s1)> // CHECK-DAG: #[[MAP1:.+]] = affine_map<(d0, d1)[s0, s1] -> (d0 * s1 + s0 + d1)> -// CHECK-DAG: #[[MAP2:.+]] = affine_map<(d0)[s0] -> (32, -d0 + s0)> -// CHECK-DAG: #[[MAP3:.+]] = affine_map<(d0)[s0] -> (16, -d0 + s0)> -// CHECK-DAG: #[[MAP4:.+]] = affine_map<(d0)[s0, s1] -> (-d0 + s0, 64, -d0 + s1)> +// CHECK-DAG: #[[MAP2:.+]] = affine_map<()[s0, s1, s2] -> (s0 - s1, 64, -s1 + s2)> +// CHECK-DAG: #[[MAP3:.+]] = affine_map<()[s0, s1] -> (32, s0 - s1)> +// CHECK-DAG: #[[MAP4:.+]] = affine_map<()[s0, s1] -> (16, s0 - s1)> + // CHECK: func @rhs_fusion // CHECK-SAME: %[[ARG0:[a-zA-Z0-9_]+]]: memref // CHECK-SAME: %[[ARG1:[a-zA-Z0-9_]+]]: memref @@ -105,7 +106,7 @@ // CHECK: scf.parallel (%[[IV0:.+]]) = // CHECK-SAME: (%[[C0]]) to (%[[N]]) step (%[[C64]]) { // CHECK: %[[K:.+]] = memref.dim %[[ARG2]], %[[C0]] -// CHECK: %[[TILE_N:.+]] = affine.min #[[MAP0]](%[[IV0]])[%[[N]]] +// CHECK: %[[TILE_N:.+]] = affine.min #[[MAP0]]()[%[[N]], %[[IV0]]] // CHECK: %[[SV1:.+]] = memref.subview %[[ARG2]][0, %[[IV0]]] // CHECK-SAME: [%[[K]], %[[TILE_N]]] // CHECK: %[[M:.+]] = memref.dim %[[ARG3]], %[[C0]] @@ -113,7 +114,7 @@ // CHECK-SAME: [%[[M]], %[[TILE_N]] // CHECK: %[[N_3:.+]] = memref.dim %[[ARG1]], %[[C1]] // CHECK: %[[K_2:.+]] = memref.dim %[[ARG1]], %[[C0]] -// CHECK: %[[TILE_N_3:.+]] = affine.min #[[MAP4]](%[[IV0]])[%[[N_3]], %[[N]]] +// CHECK: 
%[[TILE_N_3:.+]] = affine.min #[[MAP2]]()[%[[N_3]], %[[IV0]], %[[N]]] // CHECK: %[[SV3:.+]] = memref.subview %[[ARG1]][0, %[[IV0]]] // CHECK-SAME: [%[[K_2]], %[[TILE_N_3]]] // CHECK: %[[SV3_2:.+]] = memref.subview %[[ARG2]][0, %[[IV0]]] @@ -126,8 +127,8 @@ // CHECK: scf.parallel (%[[IV1:.+]]) = // CHECK-SAME: (%[[C0]]) to (%[[M_2]]) step (%[[C32]]) { // CHECK-NEXT: scf.for %[[IV2:.+]] = %[[C0]] to %[[K_2]] step %[[C16]] { -// CHECK: %[[TILE_M:.+]] = affine.min #[[MAP2]](%[[IV1]])[%[[M_2]]] -// CHECK: %[[TILE_K:.+]] = affine.min #[[MAP3]](%[[IV2]])[%[[K_2]]] +// CHECK: %[[TILE_M:.+]] = affine.min #[[MAP3]]()[%[[M_2]], %[[IV1]]] +// CHECK: %[[TILE_K:.+]] = affine.min #[[MAP4]]()[%[[K_2]], %[[IV2]]] // CHECK: %[[SV4:.+]] = memref.subview %[[ARG0]][%[[IV1]], %[[IV2]]] // CHECK-SAME: [%[[TILE_M]], %[[TILE_K]]] // CHECK: %[[SV5:.+]] = memref.subview %[[SV1]][%[[IV2]], 0] @@ -160,11 +161,12 @@ return } } -// CHECK-DAG: #[[MAP0:.+]] = affine_map<(d0)[s0] -> (32, -d0 + s0)> + +// CHECK-DAG: #[[MAP0:.+]] = affine_map<()[s0, s1] -> (32, s0 - s1)> // CHECK-DAG: #[[MAP1:.+]] = affine_map<(d0, d1)[s0, s1] -> (d0 * s1 + s0 + d1)> -// CHECK-DAG: #[[MAP2:.+]] = affine_map<(d0)[s0] -> (16, -d0 + s0)> -// CHECK-DAG: #[[MAP3:.+]] = affine_map<(d0)[s0] -> (64, -d0 + s0)> -// CHECK-DAG: #[[MAP4:.+]] = affine_map<(d0)[s0, s1] -> (-d0 + s0, 32, -d0 + s1)> +// CHECK-DAG: #[[MAP2:.+]] = affine_map<()[s0, s1, s2] -> (s0 - s1, 32, -s1 + s2)> +// CHECK-DAG: #[[MAP3:.+]] = affine_map<()[s0, s1] -> (16, s0 - s1)> +// CHECK-DAG: #[[MAP4:.+]] = affine_map<()[s0, s1] -> (64, s0 - s1)> // CHECK: func @two_operand_fusion // CHECK-SAME: %[[ARG0:[a-zA-Z0-9_]+]]: memref // CHECK-SAME: %[[ARG1:[a-zA-Z0-9_]+]]: memref @@ -183,7 +185,7 @@ // CHECK-DAG: %[[M:.+]] = memref.dim %[[ARG1]], %[[C0]] // CHECK: scf.parallel (%[[IV0:.+]]) = // CHECK-SAME: (%[[C0]]) to (%[[M]]) step (%[[C32]]) { -// CHECK: %[[TILE_M:.+]] = affine.min #[[MAP0]](%[[IV0]])[%[[M]]] +// CHECK: %[[TILE_M:.+]] = affine.min #[[MAP0]]()[%[[M]], %[[IV0]]] // CHECK: %[[K:.+]] = memref.dim %[[ARG1]], %[[C1]] // CHECK: %[[SV1:.+]] = memref.subview %[[ARG1]][%[[IV0]], 0] // CHECK-SAME: [%[[TILE_M]], %[[K]]] @@ -191,11 +193,11 @@ // CHECK: %[[SV2:.+]] = memref.subview %[[ARG3]][%[[IV0]], 0] // CHECK-SAME: [%[[TILE_M]], %[[N]]] // CHECK: %[[M_2:.+]] = memref.dim %[[ARG3]], %[[C0]] -// CHECK: %[[TILE_M_3:.+]] = affine.min #[[MAP4]](%[[IV0]])[%[[M_2]], %[[M]]] +// CHECK: %[[TILE_M_3:.+]] = affine.min #[[MAP2]]()[%[[M_2]], %[[IV0]], %[[M]]] // CHECK: %[[SV2_2:.+]] = memref.subview %[[ARG3]][%[[IV0]], 0] // CHECK-SAME: [%[[TILE_M_3]], %[[N]]] // CHECK: %[[M_3:.+]] = memref.dim %[[ARG0]], %[[C0]] -// CHECK: %[[TILE_M_4:.+]] = affine.min #[[MAP4]](%[[IV0]])[%[[M_3]], %[[M]]] +// CHECK: %[[TILE_M_4:.+]] = affine.min #[[MAP2]]()[%[[M_3]], %[[IV0]], %[[M]]] // CHECK: %[[K_3:.+]] = memref.dim %[[ARG0]], %[[C1]] // CHECK: %[[SV3:.+]] = memref.subview %[[ARG0]][%[[IV0]], 0] // CHECK-SAME: [%[[TILE_M_4]], %[[K_3]]] @@ -209,10 +211,10 @@ // CHECK: scf.parallel (%[[IV1:.+]]) = // CHECK-SAME: (%[[C0]]) to (%[[N_2]]) step (%[[C64]]) { // CHECK-NEXT: scf.for %[[IV2:.+]] = %[[C0]] to %[[K]] step %[[C16]] { -// CHECK: %[[TILE_K:.+]] = affine.min #[[MAP2]](%[[IV2]])[%[[K]]] +// CHECK: %[[TILE_K:.+]] = affine.min #[[MAP3]]()[%[[K]], %[[IV2]]] // CHECK: %[[SV4:.+]] = memref.subview %[[SV1]][0, %[[IV2]]] // CHECK-SAME: [%[[TILE_M]], %[[TILE_K]]] -// CHECK: %[[TILE_N:.+]] = affine.min #[[MAP3]](%[[IV1]])[%[[N_2]]] +// CHECK: %[[TILE_N:.+]] = affine.min #[[MAP4]]()[%[[N_2]], %[[IV1]]] // CHECK: 
%[[SV5:.+]] = memref.subview %[[ARG2]][%[[IV2]], %[[IV1]]] // CHECK-SAME: [%[[TILE_K]], %[[TILE_N]]] // CHECK: %[[SV6:.+]] = memref.subview %[[SV2]][0, %[[IV1]]] @@ -242,11 +244,12 @@ return } } -// CHECK-DAG: #[[MAP0:.+]] = affine_map<(d0)[s0] -> (32, -d0 + s0)> +// CHECK-DAG: #[[MAP0:.+]] = affine_map<()[s0, s1] -> (32, s0 - s1)> // CHECK-DAG: #[[MAP1:.+]] = affine_map<(d0, d1)[s0, s1] -> (d0 * s1 + s0 + d1)> -// CHECK-DAG: #[[MAP2:.+]] = affine_map<(d0)[s0] -> (16, -d0 + s0)> -// CHECK-DAG: #[[MAP3:.+]] = affine_map<(d0)[s0] -> (64, -d0 + s0)> -// CHECK-DAG: #[[MAP4:.+]] = affine_map<(d0)[s0, s1] -> (-d0 + s0, 32, -d0 + s1)> +// CHECK-DAG: #[[MAP2:.+]] = affine_map<()[s0, s1, s2] -> (s0 - s1, 32, -s1 + s2)> +// CHECK-DAG: #[[MAP3:.+]] = affine_map<()[s0, s1] -> (16, s0 - s1)> +// CHECK-DAG: #[[MAP4:.+]] = affine_map<()[s0, s1] -> (64, s0 - s1)> + // CHECK: func @matmul_fusion // CHECK-SAME: %[[ARG0:[a-zA-Z0-9_]+]]: memref // CHECK-SAME: %[[ARG1:[a-zA-Z0-9_]+]]: memref @@ -263,7 +266,7 @@ // CHECK-DAG: %[[M:.+]] = memref.dim %[[ARG2]], %[[C0]] // CHECK: scf.parallel (%[[IV0:.+]]) = // CHECK-SAME: (%[[C0]]) to (%[[M]]) step (%[[C32]]) { -// CHECK: %[[TILE_M:.+]] = affine.min #[[MAP0]](%[[IV0]])[%[[M]]] +// CHECK: %[[TILE_M:.+]] = affine.min #[[MAP0]]()[%[[M]], %[[IV0]]] // CHECK: %[[K2:.+]] = memref.dim %[[ARG2]], %[[C1]] // CHECK: %[[SV1:.+]] = memref.subview %[[ARG2]][%[[IV0]], 0] // CHECK-SAME: [%[[TILE_M]], %[[K2]]] @@ -271,7 +274,7 @@ // CHECK: %[[SV2:.+]] = memref.subview %[[ARG4]][%[[IV0]], 0] // CHECK-SAME: [%[[TILE_M]], %[[N]]] // CHECK: %[[M_3:.+]] = memref.dim %[[ARG0]], %[[C0]] -// CHECK: %[[TILE_M_3:.+]] = affine.min #[[MAP4]](%[[IV0]])[%[[M_3]], %[[M]]] +// CHECK: %[[TILE_M_3:.+]] = affine.min #[[MAP2]]()[%[[M_3]], %[[IV0]], %[[M]]] // CHECK: %[[K1:.+]] = memref.dim %[[ARG0]], %[[C1]] // CHECK: %[[SV3:.+]] = memref.subview %[[ARG0]][%[[IV0]], 0] // CHECK-SAME: [%[[TILE_M_3]], %[[K1]]] @@ -286,10 +289,10 @@ // CHECK: scf.parallel (%[[IV1:.+]]) = // CHECK-SAME: (%[[C0]]) to (%[[N_2]]) step (%[[C64]]) { // CHECK-NEXT: scf.for %[[IV2:.+]] = %[[C0]] to %[[K2]] step %[[C16]] { -// CHECK: %[[TILE_K:.+]] = affine.min #[[MAP2]](%[[IV2]])[%[[K2]]] +// CHECK: %[[TILE_K:.+]] = affine.min #[[MAP3]]()[%[[K2]], %[[IV2]]] // CHECK: %[[SV6:.+]] = memref.subview %[[SV1]][0, %[[IV2]]] // CHECK-SAME: [%[[TILE_M]], %[[TILE_K]]] -// CHECK: %[[TILE_N:.+]] = affine.min #[[MAP3]](%[[IV1]])[%[[N_2]]] +// CHECK: %[[TILE_N:.+]] = affine.min #[[MAP4]]()[%[[N_2]], %[[IV1]]] // CHECK: %[[SV7:.+]] = memref.subview %[[ARG3]][%[[IV2]], %[[IV1]]] // CHECK-SAME: [%[[TILE_K]], %[[TILE_N]]] // CHECK: %[[SV8:.+]] = memref.subview %[[SV2]][0, %[[IV1]]] diff --git a/mlir/test/Dialect/Linalg/fusion-sequence.mlir b/mlir/test/Dialect/Linalg/fusion-sequence.mlir --- a/mlir/test/Dialect/Linalg/fusion-sequence.mlir +++ b/mlir/test/Dialect/Linalg/fusion-sequence.mlir @@ -83,10 +83,9 @@ } } -// CHECK-DAG: #[[MAP0:.+]] = affine_map<(d0)[s0] -> (16, -d0 + s0)> +// CHECK-DAG: #[[MAP0:.+]] = affine_map<()[s0, s1] -> (16, s0 - s1)> // CHECK-DAG: #[[MAP1:.+]] = affine_map<(d0, d1)[s0, s1] -> (d0 * s1 + s0 + d1)> -// CHECK-DAG: #[[MAP2:.+]] = affine_map<(d0)[s0, s1] -> (-d0 + s0, 16, -d0 + s1)> -// CHECK-DAG: #[[MAP3:.+]] = affine_map<(d0)[s0] -> (-d0 + s0, 16)> +// CHECK-DAG: #[[MAP2:.+]] = affine_map<()[s0, s1, s2] -> (s0 - s1, 16, -s1 + s2)> // CHECK: func @sequence_of_matmul @@ -105,22 +104,22 @@ // CHECK: %[[ALLOC2:.+]] = memref.alloc(%[[M]], %[[N2]]) // CHECK: scf.parallel (%[[IV0:.+]]) = (%[[C0]]) to (%[[M]]) // 
CHECK-SAME: step (%[[C16]]) { -// CHECK: %[[TILE_M:.+]] = affine.min #[[MAP0]](%[[IV0]])[%[[M]]] +// CHECK: %[[TILE_M:.+]] = affine.min #[[MAP0]]()[%[[M]], %[[IV0]]] // CHECK: %[[SV_ALLOC3:.+]] = memref.subview %[[ALLOC2]][%[[IV0]], 0] // CHECK-SAME: [%[[TILE_M]], %[[N2]]] // CHECK: %[[N3:.+]] = memref.dim %[[ARG4]], %[[C1]] // CHECK: %[[SV_ARG4:.+]] = memref.subview %[[ARG4]][%[[IV0]], 0] // CHECK-SAME: [%[[TILE_M]], %[[N3]]] // CHECK: %[[M_2:.+]] = memref.dim %[[ARG4]], %[[C0]] -// CHECK: %[[TILE_M_3:.+]] = affine.min #[[MAP2]](%[[IV0]])[%[[M_2]], %[[M]]] +// CHECK: %[[TILE_M_3:.+]] = affine.min #[[MAP2]]()[%[[M_2]], %[[IV0]], %[[M]]] // CHECK: %[[SV_ARG4_2:.+]] = memref.subview %[[ARG4]][%[[IV0]], 0] // CHECK-SAME: [%[[TILE_M_3]], %[[N3]]] -// CHECK: %[[TILE_M_4:.+]] = affine.min #[[MAP3]](%[[IV0]])[%[[M]]] +// CHECK: %[[TILE_M_4:.+]] = affine.min #[[MAP3]]()[%[[M]], %[[IV0]]] // CHECK: %[[SV_ALLOC1:.+]] = memref.subview %[[ALLOC1]][%[[IV0]], 0] // CHECK-SAME: [%[[TILE_M_4]], %[[N1]]] // CHECK: %[[SV_ALLOC2:.+]] = memref.subview %[[ALLOC2]][%[[IV0]], 0] // CHECK-SAME: [%[[TILE_M_4]], %[[N2]]] -// CHECK: %[[TILE_M_5:.+]] = affine.min #[[MAP2]](%[[IV0]])[%[[M]], %[[M]]] +// CHECK: %[[TILE_M_5:.+]] = affine.min #[[MAP2]]()[%[[M]], %[[IV0]], %[[M]]] // CHECK: %[[N0:.+]] = memref.dim %[[ARG0]], %[[C1]] // CHECK: %[[SV_ARG0:.+]] = memref.subview %[[ARG0]][%[[IV0]], 0] // CHECK-SAME: [%[[TILE_M_5]], %[[N0]]] @@ -212,8 +211,8 @@ } } -// CHECK: #[[MAP0:.+]] = affine_map<(d0)[s0] -> (16, -d0 + s0)> -// CHECK: #[[MAP1:.+]] = affine_map<(d0)[s0, s1] -> (-d0 + s0, 16, -d0 + s1)> +// CHECK: #[[MAP0:.+]] = affine_map<()[s0, s1] -> (16, s0 - s1)> +// CHECK: #[[MAP1:.+]] = affine_map<()[s0, s1, s2] -> (s0 - s1, 16, -s1 + s2)> // CHECK: func @tensor_matmul_fusion( // CHECK-SAME: %[[ARG0:[a-zA-Z0-9_]+]]: tensor @@ -228,11 +227,11 @@ // CHECK: %[[M:.+]] = tensor.dim %[[ARG0]], %c0 : tensor // CHECK: %[[R0:.+]] = scf.for %[[IV0:[a-zA-Z0-9_]+]] = // CHECK-SAME: iter_args(%[[ARG8:.+]] = %[[ARG6]]) -> (tensor) { -// CHECK: %[[TILE_M_1:.+]] = affine.min #[[MAP0]](%[[IV0]])[%[[M]]] +// CHECK: %[[TILE_M_1:.+]] = affine.min #[[MAP0]]()[%[[M]], %[[IV0]]] // CHECK: %[[N3:.+]] = tensor.dim %[[ARG8]], %[[C1]] // CHECK: %[[STARG6:.+]] = tensor.extract_slice %[[ARG8]][%[[IV0]], 0] // CHECK-SAME: [%[[TILE_M_1]], %[[N3]]] -// CHECK: %[[TILE_M_2:.+]] = affine.min #[[MAP1]](%[[IV0]])[%[[M]], %[[M]]] +// CHECK: %[[TILE_M_2:.+]] = affine.min #[[MAP1]]()[%[[M]], %[[IV0]], %[[M]]] // CHECK: %[[N2:.+]] = tensor.dim %[[ARG4]], %[[C1]] // CHECK: %[[STARG4:.+]] = tensor.extract_slice %[[ARG4]][%[[IV0]], 0] // CHECK-SAME: [%[[TILE_M_2]], %[[N2]]] diff --git a/mlir/test/Dialect/Linalg/fusion-tensor-pattern.mlir b/mlir/test/Dialect/Linalg/fusion-tensor-pattern.mlir --- a/mlir/test/Dialect/Linalg/fusion-tensor-pattern.mlir +++ b/mlir/test/Dialect/Linalg/fusion-tensor-pattern.mlir @@ -13,10 +13,10 @@ return %ABC : tensor } } -// CHECK-DAG: #[[MAP1:.+]] = affine_map<(d0)[s0] -> (32, -d0 + s0)> -// CHECK-DAG: #[[MAP2:.+]] = affine_map<(d0)[s0] -> (16, -d0 + s0)> -// CHECK-DAG: #[[MAP3:.+]] = affine_map<(d0)[s0] -> (64, -d0 + s0)> -// CHECK-DAG: #[[MAP5:.+]] = affine_map<(d0)[s0, s1] -> (-d0 + s0, 32, -d0 + s1)> +// CHECK-DAG: #[[MAP1:.+]] = affine_map<()[s0, s1] -> (32, s0 - s1)> +// CHECK-DAG: #[[MAP2:.+]] = affine_map<()[s0, s1] -> (16, s0 - s1)> +// CHECK-DAG: #[[MAP3:.+]] = affine_map<()[s0, s1] -> (64, s0 - s1)> +// CHECK-DAG: #[[MAP5:.+]] = affine_map<()[s0, s1, s2] -> (s0 - s1, 32, -s1 + s2)> // CHECK: func @matmul_fusion // 
CHECK-SAME: %[[ARG0:[a-zA-Z0-9_]+]]: tensor @@ -34,11 +34,11 @@ // CHECK: %[[RESULT:.+]] = scf.for %[[IV0:[a-zA-Z0-9]+]] = // CHECK-SAME: %[[C0]] to %[[M]] step %[[C32]] // CHECK-SAME: iter_args(%[[ARG6:.+]] = %[[ARG4]]) -> (tensor) { -// CHECK: %[[TILE_M_2:.+]] = affine.min #[[MAP1]](%[[IV0]])[%[[M]]] +// CHECK: %[[TILE_M_2:.+]] = affine.min #[[MAP1]]()[%[[M]], %[[IV0]]] // CHECK: %[[N3:.+]] = tensor.dim %[[ARG6]], %[[C1]] // CHECK: %[[ST_ARG6:.+]] = tensor.extract_slice %[[ARG6]][%[[IV0]], 0] // CHECK-SAME: [%[[TILE_M_2]], %[[N3]]] -// CHECK: %[[TILE_M_3:.+]] = affine.min #[[MAP5]](%[[IV0]])[%[[M]], %[[M]]] +// CHECK: %[[TILE_M_3:.+]] = affine.min #[[MAP5]]()[%[[M]], %[[IV0]], %[[M]]] // CHECK: %[[N1:.+]] = tensor.dim %[[ARG0]], %[[C1]] // CHECK: %[[ST_ARG0:.+]] = tensor.extract_slice %[[ARG0]][%[[IV0]], 0] // CHECK-SAME: [%[[TILE_M_3]], %[[N1]]] @@ -57,10 +57,10 @@ // CHECK: %[[YIELD1:.+]] = scf.for %[[IV2:[a-zA-Z0-9]+]] = // CHECK-SAME: %[[C0]] to %[[N2]] step %[[C16]] // CHECK-SAME: iter_args(%[[ARG10:.+]] = %[[ARG8]]) -> (tensor) { -// CHECK: %[[TILE_N2:.+]] = affine.min #[[MAP2]](%[[IV2]])[%[[N2]]] +// CHECK: %[[TILE_N2:.+]] = affine.min #[[MAP2]]()[%[[N2]], %[[IV2]]] // CHECK: %[[ST_LHS:.+]] = tensor.extract_slice %[[LHS]][0, %[[IV2]]] // CHECK-SAME: [%[[TILE_M_3]], %[[TILE_N2]]] -// CHECK: %[[TILE_N3:.+]] = affine.min #[[MAP3]](%[[IV1]])[%[[N3_2]]] +// CHECK: %[[TILE_N3:.+]] = affine.min #[[MAP3]]()[%[[N3_2]], %[[IV1]]] // CHECK: %[[ST_ARG3:.+]] = tensor.extract_slice %[[ARG3]][%[[IV2]], %[[IV1]]] // CHECK-SAME: [%[[TILE_N2]], %[[TILE_N3]]] // CHECK: %[[M_4:.+]] = tensor.dim %[[ARG10]], %[[C0]] diff --git a/mlir/test/Dialect/Linalg/fusion.mlir b/mlir/test/Dialect/Linalg/fusion.mlir --- a/mlir/test/Dialect/Linalg/fusion.mlir +++ b/mlir/test/Dialect/Linalg/fusion.mlir @@ -253,9 +253,9 @@ return %E : memref } -// CHECK-DAG: #[[BOUND_2_MAP:.+]] = affine_map<(d0)[s0] -> (2, -d0 + s0)> -// CHECK-DAG: #[[BOUND_2_MAP_2:.+]] = affine_map<(d0)[s0, s1] -> (-d0 + s0, 2, -d0 + s1)> -// CHECK-DAG: #[[BOUND_4_MAP:.+]] = affine_map<(d0)[s0] -> (4, -d0 + s0)> +// CHECK-DAG: #[[BOUND_2_MAP:.+]] = affine_map<()[s0, s1] -> (2, s0 - s1)> +// CHECK-DAG: #[[BOUND_2_MAP_2:.+]] = affine_map<()[s0, s1, s2] -> (s0 - s1, 2, -s1 + s2)> +// CHECK-DAG: #[[BOUND_4_MAP:.+]] = affine_map<()[s0, s1] -> (4, s0 - s1)> // CHECK: func @f5 // CHECK-SAME: (%[[A:.*]]:{{.*}}, %[[B:.*]]:{{.*}}, %[[C:.*]]:{{.*}}, %[[D:.*]]:{{.*}}, %[[E:.*]]:{{.*}}) // CHECK-DAG: %[[C0:.*]] = arith.constant 0 : index @@ -267,9 +267,9 @@ // CHECK-DAG: %[[D_1:.*]] = memref.dim %[[D]], %[[C1]] : memref // CHECK-DAG: %[[B_00:.*]] = memref.subview %[[B]][0, 0]{{.*}} // CHECK: scf.for %[[I:.*]] = %{{.*}} to %[[D_0]] step %{{.*}} { -// CHECK: %[[BOUND_2_C0:.+]] = affine.min #[[BOUND_2_MAP]](%[[I]])[%[[C_0]]] +// CHECK: %[[BOUND_2_C0:.+]] = affine.min #[[BOUND_2_MAP]]()[%[[C_0]], %[[I]]] // CHECK: %[[C_I0:.*]] = memref.subview %[[C]][%[[I]], 0] [%[[BOUND_2_C0]] -// CHECK: %[[BOUND_ID_C0:.+]] = affine.min #[[BOUND_2_MAP_2]](%[[I]])[%[[A_0]], %[[C_0]]] +// CHECK: %[[BOUND_ID_C0:.+]] = affine.min #[[BOUND_2_MAP_2]]()[%[[A_0]], %[[I]], %[[C_0]]] // CHECK: %[[A_I0:.*]] = memref.subview %[[A]][%[[I]], 0] // CHECK: %[[C_I0_OUT:.*]] = memref.subview %[[C]][%[[I]], 0] [%[[BOUND_ID_C0]] // CHECK: scf.for %[[J:.*]] = %{{.*}} to %[[B_1]] step %{{.*}} { @@ -277,7 +277,7 @@ // CHECK: scf.for %[[K:.*]] = %{{.*}} to %[[D_1]] step %{{.*}} { // CHECK: %[[D_IK:.*]] = memref.subview %[[D]][%[[I]], %[[K]]] [2, 4] // CHECK: %[[B_KJ:.*]] = memref.subview %[[B]][%[[K]], 
%[[J]]] -// CHECK: %[[BOUND_4_B1:.*]] = affine.min #[[BOUND_4_MAP]](%[[K]])[%[[B_1]]] +// CHECK: %[[BOUND_4_B1:.*]] = affine.min #[[BOUND_4_MAP]]()[%[[B_1]], %[[K]]] // CHECK: %[[B_0K:.*]] = memref.subview %[[B]][0, %[[K]]] // CHECK: %[[D_IK_OUT:.+]] = memref.subview %[[D]][%[[I]], %[[K]]] [%[[BOUND_2_C0]], %[[BOUND_4_B1]]] // CHECK: linalg.matmul ins(%[[A_I0]], %[[B_00]]{{.*}} outs(%[[C_I0_OUT]] diff --git a/mlir/test/Dialect/Linalg/hoist-padding.mlir b/mlir/test/Dialect/Linalg/hoist-padding.mlir --- a/mlir/test/Dialect/Linalg/hoist-padding.mlir +++ b/mlir/test/Dialect/Linalg/hoist-padding.mlir @@ -9,8 +9,6 @@ // RUN: mlir-opt %s -split-input-file -test-linalg-transform-patterns=test-hoist-padding=5 | FileCheck %s --check-prefix=VERIFIER-ONLY // RUN: mlir-opt %s -split-input-file -test-linalg-transform-patterns=test-hoist-padding=6 | FileCheck %s --check-prefix=VERIFIER-ONLY -// CHECK-DAG: #[[$DIV3:[0-9a-z]+]] = affine_map<(d0) -> (d0 ceildiv 3)> -// CHECK-DAG: #[[$DIV4:[0-9a-z]+]] = affine_map<(d0) -> (d0 ceildiv 4)> // CHECK-DAG: #[[$DIVS3:[0-9a-z]+]] = affine_map<()[s0] -> (s0 ceildiv 3)> // CHECK-DAG: #[[$DIVS4:[0-9a-z]+]] = affine_map<()[s0] -> (s0 ceildiv 4)> #map0 = affine_map<(d0)[s0] -> (2, -d0 + s0)> @@ -52,7 +50,7 @@ // 1-D loop // CHECK: %[[A:.*]] = scf.for %[[J1:[0-9a-z]+]] = // Iteration count along J1 - // CHECK: %[[IDXpad0_K:[0-9]+]] = affine.apply #[[$DIV4]](%[[J1]]) + // CHECK: %[[IDXpad0_K:[0-9]+]] = affine.apply #[[$DIVS4]]()[%[[J1]]] // CHECK: tensor.extract_slice %{{.*}} [1, 1] : tensor to tensor // CHECK: linalg.pad_tensor %{{.*}} // CHECK: : tensor to tensor<2x4xf32> @@ -65,10 +63,10 @@ // 2-D loop // CHECK: %[[B:.*]] = scf.for %[[K2:[0-9a-z]+]] = // Iteration count along K2 - // CHECK: %[[IDXpad1_K:[0-9]+]] = affine.apply #[[$DIV3]](%[[K2]]) + // CHECK: %[[IDXpad1_K:[0-9]+]] = affine.apply #[[$DIVS3]]()[%[[K2]]] // CHECK: scf.for %[[J2:[0-9a-z]+]] = // Iteration count along J2 - // CHECK: %[[IDXpad1_N:[0-9]+]] = affine.apply #[[$DIV4]](%[[J2]]) + // CHECK: %[[IDXpad1_N:[0-9]+]] = affine.apply #[[$DIVS4]]()[%[[J2]]] // CHECK: tensor.extract_slice %{{.*}} [1, 1] : tensor to tensor // CHECK: linalg.pad_tensor %{{.*}} // CHECK: : tensor to tensor<4x3xf32> @@ -78,13 +76,13 @@ // CHECK: scf.for %[[J:[0-9a-zA-Z]+]] // CHECK: scf.for %[[K:[0-9a-zA-Z]+]] // Iteration count along K - // CHECK: %[[IDXpad0_K:[0-9]+]] = affine.apply #[[$DIV4]](%[[K]]) + // CHECK: %[[IDXpad0_K:[0-9]+]] = affine.apply #[[$DIVS4]]()[%[[K]]] // CHECK: %[[stA:.*]] = tensor.extract_slice %[[A]][%[[IDXpad0_K]], 0, 0] [1, 2, 4] [1, 1, 1] : // CHECK-SAME: tensor to tensor<2x4xf32> // Iteration count along J - // CHECK: %[[IDXpad1_N:[0-9]+]] = affine.apply #[[$DIV3]](%[[J]]) + // CHECK: %[[IDXpad1_N:[0-9]+]] = affine.apply #[[$DIVS3]]()[%[[J]]] // Iteration count along K - // CHECK: %[[IDXpad1_K:[0-9]+]] = affine.apply #[[$DIV4]](%[[K]]) + // CHECK: %[[IDXpad1_K:[0-9]+]] = affine.apply #[[$DIVS4]]()[%[[K]]] // CHECK: %[[stB:.*]] = tensor.extract_slice %[[B]][%[[IDXpad1_N]], %[[IDXpad1_K]], 0, 0] [1, 1, 4, 3] [1, 1, 1, 1] : // CHECK-SAME: tensor to tensor<4x3xf32> // CHECK: %[[stC:.*]] = linalg.pad_tensor %{{.*}} @@ -142,11 +140,11 @@ // ----- -// CHECK-DAG: #[[$MIN_REST8:[0-9a-z]+]] = affine_map<(d0)[s0] -> (8, -d0 + s0)> -// CHECK-DAG: #[[$MIN_REST4:[0-9a-z]+]] = affine_map<(d0, d1) -> (4, d0 - d1)> -// CHECK-DAG: #[[$MIN_REST2:[0-9a-z]+]] = affine_map<(d0, d1) -> (2, d0 - d1)> -// CHECK-DAG: #[[$DIV4:[0-9a-z]+]] = affine_map<(d0) -> (d0 ceildiv 4)> -// CHECK-DAG: #[[$DIV2:[0-9a-z]+]] = affine_map<(d0) 
-> (d0 ceildiv 2)> +// CHECK-DAG: #[[$MIN_REST8:[0-9a-z]+]] = affine_map<()[s0, s1] -> (8, s0 - s1)> +// CHECK-DAG: #[[$MIN_REST4:[0-9a-z]+]] = affine_map<()[s0, s1] -> (4, s0 - s1)> +// CHECK-DAG: #[[$MIN_REST2:[0-9a-z]+]] = affine_map<()[s0, s1] -> (2, s0 - s1)> +// CHECK-DAG: #[[$DIVS4:[0-9a-z]+]] = affine_map<()[s0] -> (s0 ceildiv 4)> +// CHECK-DAG: #[[$DIV2:[0-9a-z]+]] = affine_map<()[s0] -> (s0 ceildiv 2)> #map0 = affine_map<(d0)[s0] -> (8, -d0 + s0)> #map1 = affine_map<(d0, d1) -> (4, d0 - d1)> #map2 = affine_map<(d0, d1) -> (2, d0 - d1)> @@ -167,8 +165,8 @@ // CHECK: scf.for %[[I:[0-9a-z]+]] = // - // CHECK: %[[MR8:.*]] = affine.min #[[$MIN_REST8]](%[[I]]) - // CHECK: %[[D0:.*]] = affine.apply #[[$DIV4]](%[[MR8]]) + // CHECK: %[[MR8:.*]] = affine.min #[[$MIN_REST8]]()[%{{.*}}, %[[I]]] + // CHECK: %[[D0:.*]] = affine.apply #[[$DIVS4]]()[%[[MR8]]] // Init tensor and pack. // CHECK: %[[INIT_PACKED_A:.*]] = linalg.init_tensor [%[[D0]], 2, 2] : tensor // CHECK: %[[CAST_INIT_PACKED_A:.*]] = tensor.cast %[[INIT_PACKED_A]] : tensor to tensor @@ -176,7 +174,7 @@ // CHECK: scf.for %[[III:[0-9a-z]+]] = // CHECK: tensor.insert_slice %{{.*}} into %{{.*}}[%{{.*}}, %{{.*}}, 0] [1, 1, 2] [1, 1, 1] : tensor<2xf32> into tensor // - // CHECK: %[[D0_2:.*]] = affine.apply #[[$DIV4]](%[[MR8]]) + // CHECK: %[[D0_2:.*]] = affine.apply #[[$DIVS4]]()[%[[MR8]]] // Init tensor and pack. // CHECK: %[[INIT_PACKED_B:.*]] = linalg.init_tensor [%[[D0_2]], 2, 2] : tensor // CHECK: %[[CAST_INIT_PACKED_B:.*]] = tensor.cast %[[INIT_PACKED_B]] : tensor to tensor @@ -186,11 +184,11 @@ // Compute. // CHECK: scf.for %[[II_3:[0-9a-z]+]] = // CHECK: scf.for %[[III_3:[0-9a-z]+]] = {{.*}} iter_args(%[[C:.*]] = %{{.*}}) -> (tensor) { - // CHECK: %[[IDX0:.*]] = affine.apply #[[$DIV4]](%[[II_3]]) - // CHECK: %[[IDX1:.*]] = affine.apply #[[$DIV2]](%[[III_3]]) + // CHECK: %[[IDX0:.*]] = affine.apply #[[$DIVS4]]()[%[[II_3]]] + // CHECK: %[[IDX1:.*]] = affine.apply #[[$DIV2]]()[%[[III_3]]] // CHECK: %[[A:.*]] = tensor.extract_slice %[[PACKED_A]][%[[IDX0]], %[[IDX1]], 0] [1, 1, 2] [1, 1, 1] : tensor to tensor<2xf32> - // CHECK: %[[IDX0_2:.*]] = affine.apply #[[$DIV4]](%[[II_3]]) - // CHECK: %[[IDX1_2:.*]] = affine.apply #[[$DIV2]](%[[III_3]]) + // CHECK: %[[IDX0_2:.*]] = affine.apply #[[$DIVS4]]()[%[[II_3]]] + // CHECK: %[[IDX1_2:.*]] = affine.apply #[[$DIV2]]()[%[[III_3]]] // CHECK: %[[B:.*]] = tensor.extract_slice %[[PACKED_B]][%[[IDX0_2]], %[[IDX1_2]], 0] [1, 1, 2] [1, 1, 1] : tensor to tensor<2xf32> // CHECK: linalg.dot ins(%[[A]], %[[B]] : tensor<2xf32>, tensor<2xf32>) outs(%[[C]] : tensor) -> tensor diff --git a/mlir/test/Dialect/Linalg/loops.mlir b/mlir/test/Dialect/Linalg/loops.mlir --- a/mlir/test/Dialect/Linalg/loops.mlir +++ b/mlir/test/Dialect/Linalg/loops.mlir @@ -7,12 +7,12 @@ // CHECK-DAG: #[[$strided1D:.*]] = affine_map<(d0)[s0] -> (d0 + s0)> // CHECK-DAG: #[[$strided2D:.*]] = affine_map<(d0, d1)[s0, s1] -> (d0 * s1 + s0 + d1)> // CHECK-DAG: #[[$strided3D:.*]] = affine_map<(d0, d1, d2)[s0, s1, s2] -> (d0 * s1 + s0 + d1 * s2 + d2)> -// CHECK-DAG: #[[$stride1Dilation1:.*]] = affine_map<(d0, d1) -> (d0 + d1)> +// CHECK-DAG: #[[$stride1Dilation1:.*]] = affine_map<()[s0, s1] -> (s0 + s1)> // CHECKPARALLEL-DAG: #[[$strided1D:.*]] = affine_map<(d0)[s0] -> (d0 + s0)> // CHECKPARALLEL-DAG: #[[$strided2D:.*]] = affine_map<(d0, d1)[s0, s1] -> (d0 * s1 + s0 + d1)> // CHECKPARALLEL-DAG: #[[$strided3D:.*]] = affine_map<(d0, d1, d2)[s0, s1, s2] -> (d0 * s1 + s0 + d1 * s2 + d2)> -// CHECKPARALLEL-DAG: #[[$stride1Dilation1:.*]] = 
affine_map<(d0, d1) -> (d0 + d1)> +// CHECKPARALLEL-DAG: #[[$stride1Dilation1:.*]] = affine_map<()[s0, s1] -> (s0 + s1)> func @matmul(%arg0: memref, %M: index, %N: index, %K: index) { %c0 = arith.constant 0 : index @@ -704,7 +704,7 @@ // CHECK: %[[dim1:.*]] = memref.dim %[[arg2]], %[[c0]] : memref // CHECK: scf.for %[[b:.*]] = %[[c0]] to %[[dim1]] step %[[c1]] { // CHECK: scf.for %[[m:.*]] = %[[c0]] to %[[dim0]] step %[[c1]] { -// CHECK: %[[aff:.*]] = affine.apply #[[$stride1Dilation1]](%[[b]], %[[m]]) +// CHECK: %[[aff:.*]] = affine.apply #[[$stride1Dilation1]]()[%[[b]], %[[m]]] // CHECK: %[[vb:.*]] = memref.load %[[arg0]][%[[aff]]] : memref // CHECK: %[[va:.*]] = memref.load %[[arg1]][%[[m]]] : memref // CHECK: %[[vc:.*]] = memref.load %[[arg2]][%[[b]]] : memref @@ -722,7 +722,7 @@ // CHECKPARALLEL: %[[dim1:.*]] = memref.dim %[[arg2]], %[[c0]] : memref // CHECKPARALLEL: scf.parallel (%[[b:.*]]) = (%[[c0]]) to (%[[dim1]]) step (%[[c1]]) { // CHECKPARALLEL: scf.for %[[m:.*]] = %[[c0]] to %[[dim0]] step %[[c1]] { -// CHECKPARALLEL: %[[aff:.*]] = affine.apply #[[$stride1Dilation1]](%[[b]], %[[m]]) +// CHECKPARALLEL: %[[aff:.*]] = affine.apply #[[$stride1Dilation1]]()[%[[m]], %[[b]]] // CHECKPARALLEL: %[[vb:.*]] = memref.load %[[arg0]][%[[aff]]] : memref // CHECKPARALLEL: %[[va:.*]] = memref.load %[[arg1]][%[[m]]] : memref // CHECKPARALLEL: %[[vc:.*]] = memref.load %[[arg2]][%[[b]]] : memref @@ -750,8 +750,8 @@ // CHECK: scf.for %[[arg4:.*]] = %[[c0]] to %[[dim3]] step %[[c1]] { // CHECK: scf.for %[[arg5:.*]] = %[[c0]] to %[[dim0]] step %[[c1]] { // CHECK: scf.for %[[arg6:.*]] = %[[c0]] to %[[dim1]] step %[[c1]] { -// CHECK: %[[aff:.*]] = affine.apply #[[$stride1Dilation1]](%[[arg3]], %[[arg5]]) -// CHECK: %[[aff2:.*]] = affine.apply #[[$stride1Dilation1]](%[[arg4]], %[[arg6]]) +// CHECK: %[[aff:.*]] = affine.apply #[[$stride1Dilation1]]()[%[[arg3]], %[[arg5]]] +// CHECK: %[[aff2:.*]] = affine.apply #[[$stride1Dilation1]]()[%[[arg4]], %[[arg6]]] // CHECK: %[[vb:.*]] = memref.load %[[arg0]][%[[aff]], %[[aff2]]] : memref // CHECK: %[[va:.*]] = memref.load %[[arg1]][%[[arg5]], %[[arg6]]] : memref @@ -774,8 +774,8 @@ // CHECKPARALLEL: scf.parallel (%[[arg3:.*]], %[[arg4:.*]]) = (%[[c0]], %[[c0]]) to (%[[dim2]], %[[dim3]]) step (%[[c1]], %[[c1]]) { // CHECKPARALLEL: scf.for %[[arg5:.*]] = %[[c0]] to %[[dim0]] step %[[c1]] { // CHECKPARALLEL: scf.for %[[arg6:.*]] = %[[c0]] to %[[dim1]] step %[[c1]] { -// CHECKPARALLEL: %[[aff:.*]] = affine.apply #[[$stride1Dilation1]](%[[arg3]], %[[arg5]]) -// CHECKPARALLEL: %[[aff2:.*]] = affine.apply #[[$stride1Dilation1]](%[[arg4]], %[[arg6]]) +// CHECKPARALLEL: %[[aff:.*]] = affine.apply #[[$stride1Dilation1]]()[%[[arg5]], %[[arg3]]] +// CHECKPARALLEL: %[[aff2:.*]] = affine.apply #[[$stride1Dilation1]]()[%[[arg6]], %[[arg4]]] // CHECKPARALLEL: %[[vb:.*]] = memref.load %[[arg0]][%[[aff]], %[[aff2]]] : memref // CHECKPARALLEL: %[[va:.*]] = memref.load %[[arg1]][%[[arg5]], %[[arg6]]] : memref // CHECKPARALLEL: %[[vc:.*]] = memref.load %[[arg2]][%[[arg3]], %[[arg4]]] : memref @@ -809,9 +809,9 @@ // CHECK: scf.for %[[arg6:.*]] = %[[c0]] to %[[dim0]] step %[[c1]] { // CHECK: scf.for %[[arg7:.*]] = %[[c0]] to %[[dim1]] step %[[c1]] { // CHECK: scf.for %[[arg8:.*]] = %[[c0]] to %[[dim2]] step %[[c1]] { -// CHECK: %[[aff:.*]] = affine.apply #[[$stride1Dilation1]](%[[arg3]], %[[arg6]]) -// CHECK: %[[aff2:.*]] = affine.apply #[[$stride1Dilation1]](%[[arg4]], %[[arg7]]) -// CHECK: %[[aff3:.*]] = affine.apply #[[$stride1Dilation1]](%[[arg5]], %[[arg8]]) +// CHECK: 
%[[aff:.*]] = affine.apply #[[$stride1Dilation1]]()[%[[arg3]], %[[arg6]]] +// CHECK: %[[aff2:.*]] = affine.apply #[[$stride1Dilation1]]()[%[[arg4]], %[[arg7]]] +// CHECK: %[[aff3:.*]] = affine.apply #[[$stride1Dilation1]]()[%[[arg5]], %[[arg8]]] // CHECK: %[[vb:.*]] = memref.load %[[arg0]][%[[aff]], %[[aff2]], %[[aff3]]] : memref // CHECK: %[[va:.*]] = memref.load %[[arg1]][%[[arg6]], %[[arg7]], %[[arg8]]] : memref @@ -838,9 +838,9 @@ // CHECKPARALLEL: scf.for %[[arg6:.*]] = %[[c0]] to %[[dim0]] step %[[c1]] { // CHECKPARALLEL: scf.for %[[arg7:.*]] = %[[c0]] to %[[dim1]] step %[[c1]] { // CHECKPARALLEL: scf.for %[[arg8:.*]] = %[[c0]] to %[[dim2]] step %[[c1]] { -// CHECKPARALLEL: %[[aff:.*]] = affine.apply #[[$stride1Dilation1]](%[[arg3]], %[[arg6]]) -// CHECKPARALLEL: %[[aff2:.*]] = affine.apply #[[$stride1Dilation1]](%[[arg4]], %[[arg7]]) -// CHECKPARALLEL: %[[aff3:.*]] = affine.apply #[[$stride1Dilation1]](%[[arg5]], %[[arg8]]) +// CHECKPARALLEL: %[[aff:.*]] = affine.apply #[[$stride1Dilation1]]()[%[[arg6]], %[[arg3]]] +// CHECKPARALLEL: %[[aff2:.*]] = affine.apply #[[$stride1Dilation1]]()[%[[arg7]], %[[arg4]]] +// CHECKPARALLEL: %[[aff3:.*]] = affine.apply #[[$stride1Dilation1]]()[%[[arg8]], %[[arg5]]] // CHECKPARALLEL: %[[vb:.*]] = memref.load %[[arg0]][%[[aff]], %[[aff2]], %[[aff3]]] : memref // CHECKPARALLEL: %[[va:.*]] = memref.load %[[arg1]][%[[arg6]], %[[arg7]], %[[arg8]]] : memref // CHECKPARALLEL: %[[vc:.*]] = memref.load %[[arg2]][%[[arg3]], %[[arg4]], %[[arg5]]] : memref diff --git a/mlir/test/Dialect/Linalg/pad-and-hoist.mlir b/mlir/test/Dialect/Linalg/pad-and-hoist.mlir --- a/mlir/test/Dialect/Linalg/pad-and-hoist.mlir +++ b/mlir/test/Dialect/Linalg/pad-and-hoist.mlir @@ -1,9 +1,9 @@ // RUN: mlir-opt %s -test-linalg-transform-patterns="test-pad-pattern pack-paddings=1,1,0 hoist-paddings=2,1,0" -cse -canonicalize -split-input-file | FileCheck %s // RUN: mlir-opt %s -test-linalg-transform-patterns="test-pad-pattern pack-paddings=1,1,0 hoist-paddings=4,3,0" -cse -canonicalize -split-input-file | FileCheck %s --check-prefix=CHECK-DOUBLE -// CHECK-DAG: #[[MAP0:[0-9a-z]+]] = affine_map<(d0) -> (5, -d0 + 24)> -// CHECK-DAG: #[[MAP1:[0-9a-z]+]] = affine_map<(d0) -> (8, -d0 + 12)> -// CHECK-DAG: #[[DIV6:[0-9a-z]+]] = affine_map<(d0) -> (d0 ceildiv 6)> +// CHECK-DAG: #[[MAP0:[0-9a-z]+]] = affine_map<()[s0] -> (5, -s0 + 24)> +// CHECK-DAG: #[[MAP1:[0-9a-z]+]] = affine_map<()[s0] -> (8, -s0 + 12)> +// CHECK-DAG: #[[DIV6:[0-9a-z]+]] = affine_map<()[s0] -> (s0 ceildiv 6)> #map0 = affine_map<(d0) -> (5, -d0 + 24)> #map1 = affine_map<(d0) -> (8, -d0 + 12)> #map2 = affine_map<(d0) -> (7, -d0 + 25)> @@ -34,8 +34,8 @@ // Packing the first input operand for all values of IV2 (IV2x5x6). // CHECK: = linalg.init_tensor [2, 5, 6] // CHECK: %[[PT0:.*]] = scf.for %[[P0IV2:[0-9a-z]+]] = - // CHECK: %[[PIDX0:.*]] = affine.apply #[[DIV6]](%[[P0IV2]]) - // CHECK: %[[TS0:.*]] = affine.min #[[MAP0]](%[[IV0]]) + // CHECK: %[[PIDX0:.*]] = affine.apply #[[DIV6]]()[%[[P0IV2]]] + // CHECK: %[[TS0:.*]] = affine.min #[[MAP0]]()[%[[IV0]]] // CHECK: %[[T0:.*]] = tensor.extract_slice %[[ARG0]] // CHECK-SAME: %[[IV0]], %[[P0IV2]] // CHECK-SAME: %[[TS0]], 6 @@ -50,8 +50,8 @@ // Packing the second input operand for all values of IV2 (IV2x6x8). 
// CHECK: = linalg.init_tensor [2, 6, 8] // CHECK: %[[PT1:.*]] = scf.for %[[P1IV2:[0-9a-z]+]] = - // CHECK: %[[PIDX1:.*]] = affine.apply #[[DIV6]](%[[P1IV2]]) - // CHECK: %[[TS1:.*]] = affine.min #[[MAP1]](%[[IV1]]) + // CHECK: %[[PIDX1:.*]] = affine.apply #[[DIV6]]()[%[[P1IV2]]] + // CHECK: %[[TS1:.*]] = affine.min #[[MAP1]]()[%[[IV1]]] // CHECK: %[[T3:.*]] = tensor.extract_slice %[[ARG1]] // CHECK-SAME: %[[P1IV2]], %[[IV1]] // CHECK-SAME: 6, %[[TS1]] @@ -64,7 +64,7 @@ %2 = scf.for %arg7 = %c0 to %c12 step %c6 iter_args(%arg8 = %arg6) -> (tensor<24x25xf32>) { %3 = affine.min #map0(%arg3) // Index the packed operands. - // CHECK-DAG: %[[IDX:.*]] = affine.apply #[[DIV6]](%[[IV2]]) + // CHECK-DAG: %[[IDX:.*]] = affine.apply #[[DIV6]]()[%[[IV2]]] // CHECK-DAG: %[[T6:.*]] = tensor.extract_slice %[[PT0]][%[[IDX]] // CHECK-DAG: %[[T7:.*]] = tensor.extract_slice %[[PT1]][%[[IDX]] %4 = tensor.extract_slice %arg0[%arg3, %arg7] [%3, 6] [1, 1] : tensor<24x12xf32> to tensor diff --git a/mlir/test/Dialect/Linalg/reshape_fusion.mlir b/mlir/test/Dialect/Linalg/reshape_fusion.mlir --- a/mlir/test/Dialect/Linalg/reshape_fusion.mlir +++ b/mlir/test/Dialect/Linalg/reshape_fusion.mlir @@ -198,7 +198,7 @@ } // Only check the body in the indexed version of the test. -// CHECK: #[[MAP:.+]] = affine_map<(d0, d1) -> (d0 + d1 * 4)> +// CHECK: #[[MAP:.+]] = affine_map<()[s0, s1] -> (s0 + s1 * 4)> // CHECK: func @indexed_consumer_reshape_producer_fusion // CHECK: linalg.generic // CHECK: ^{{.*}}( @@ -208,7 +208,7 @@ // CHECK-DAG: %[[IDX1:.+]] = linalg.index 1 : index // CHECK-DAG: %[[IDX2:.+]] = linalg.index 2 : index // CHECK-DAG: %[[IDX3:.+]] = linalg.index 3 : index -// CHECK-DAG: %[[T3:.+]] = affine.apply #[[MAP]](%[[IDX1]], %[[IDX0]]) +// CHECK-DAG: %[[T3:.+]] = affine.apply #[[MAP]]()[%[[IDX1]], %[[IDX0]]] // CHECK: %[[T4:.+]] = arith.muli %[[ARG3]], %[[ARG4]] // CHECK: %[[T5:.+]] = arith.index_cast %[[T3]] // CHECK: %[[T6:.+]] = arith.addi %[[T4]], %[[T5]] @@ -246,7 +246,7 @@ } // Only check the body in the indexed version of the test. 
-// CHECK: #[[MAP:.+]] = affine_map<(d0, d1, d2) -> (d0 + d1 * 5 + d2 * 20)> +// CHECK: #[[MAP:.+]] = affine_map<()[s0, s1, s2] -> (s0 + s1 * 5 + s2 * 20)> // CHECK: func @indexed_producer_reshape_consumer_fusion // CHECK: linalg.generic // CHECK: ^{{.*}}( @@ -256,7 +256,7 @@ // CHECK-DAG: %[[IDX1:.+]] = linalg.index 1 : index // CHECK-DAG: %[[IDX2:.+]] = linalg.index 2 : index // CHECK-DAG: %[[IDX3:.+]] = linalg.index 3 : index -// CHECK-DAG: %[[T3:.+]] = affine.apply #[[MAP]](%[[IDX3]], %[[IDX2]], %[[IDX1]]) +// CHECK-DAG: %[[T3:.+]] = affine.apply #[[MAP]]()[%[[IDX3]], %[[IDX2]], %[[IDX1]]] // CHECK: %[[T4:.+]] = arith.muli %[[ARG3]], %[[ARG4]] // CHECK: %[[T5:.+]] = arith.index_cast %[[IDX0]] // CHECK: %[[T6:.+]] = arith.addi %[[T4]], %[[T5]] @@ -299,8 +299,8 @@ // CHECK-DAG: #[[MAP5:.+]] = affine_map<(d0, d1, d2, d3, d4, d5) -> (d2, d3, d4, d0, d1, d5)> // CHECK-DAG: #[[MAP6:.+]] = affine_map<(d0, d1, d2, d3, d4, d5) -> (d2, d3, d4, d5)> // CHECK-DAG: #[[MAP7:.+]] = affine_map<(d0, d1, d2, d3, d4, d5) -> (d0, d1, d5, d2, d3, d4)> -// CHECK-DAG: #[[MAP8:.+]] = affine_map<(d0, d1) -> (d0 + d1 * 3)> -// CHECK-DAG: #[[MAP9:.+]] = affine_map<(d0, d1, d2) -> (d0 + d1 * 7 + d2 * 42)> +// CHECK-DAG: #[[MAP8:.+]] = affine_map<()[s0, s1] -> (s0 + s1 * 3)> +// CHECK-DAG: #[[MAP9:.+]] = affine_map<()[s0, s1, s2] -> (s0 + s1 * 7 + s2 * 42)> // CHECK: func @reshape_as_consumer_permutation // CHECK-SAME: %[[ARG0:.+]]: tensor<210x6x4xi32> // CHECK-SAME: %[[ARG1:.+]]: tensor<210x4xi32> @@ -322,8 +322,8 @@ // CHECK-DAG: %[[IDX3:.+]] = linalg.index 3 : index // CHECK-DAG: %[[IDX4:.+]] = linalg.index 4 : index // CHECK-DAG: %[[IDX5:.+]] = linalg.index 5 : index -// CHECK-DAG: %[[T5:.+]] = affine.apply #[[MAP8]](%[[IDX1]], %[[IDX0]]) -// CHECK-DAG: %[[T6:.+]] = affine.apply #[[MAP9]](%[[IDX4]], %[[IDX3]], %[[IDX2]]) +// CHECK-DAG: %[[T5:.+]] = affine.apply #[[MAP8]]()[%[[IDX1]], %[[IDX0]]] +// CHECK-DAG: %[[T6:.+]] = affine.apply #[[MAP9]]()[%[[IDX4]], %[[IDX3]], %[[IDX2]]] // CHECK-DAG: %[[T7:.+]] = arith.addi %[[ARG8]], %[[ARG9]] // CHECK: %[[T8:.+]] = arith.index_cast %[[T5]] // CHECK: %[[T9:.+]] = arith.addi %[[T7]], %[[T8]] @@ -362,7 +362,7 @@ // CHECK-DAG: #[[MAP0:.+]] = affine_map<(d0, d1, d2, d3) -> (d0, d1, d2)> // CHECK-DAG: #[[MAP1:.+]] = affine_map<(d0, d1, d2, d3) -> (d0, d1, d2, d3)> -// CHECK-DAG: #[[MAP2:.+]] = affine_map<(d0, d1) -> (d0 + d1 * 8)> +// CHECK-DAG: #[[MAP2:.+]] = affine_map<()[s0, s1] -> (s0 + s1 * 8)> // CHECK: @reshape_as_producer_projected_permutation // CHECK-SAME: %[[ARG0:.+]]: tensor<33x8x?xi32> // CHECK: %[[RES:.+]] = linalg.generic @@ -375,7 +375,7 @@ // CHECK-DAG: %[[IDX1:.+]] = linalg.index 1 : index // CHECK-DAG: %[[IDX2:.+]] = linalg.index 2 : index // CHECK-DAG: %[[IDX3:.+]] = linalg.index 3 : index -// CHECK-DAG: %[[T0:.+]] = affine.apply #[[MAP2]](%[[IDX1]], %[[IDX0]]) +// CHECK-DAG: %[[T0:.+]] = affine.apply #[[MAP2]]()[%[[IDX1]], %[[IDX0]]] // CHECK: %[[T1:.+]] = arith.index_cast %[[T0]] : index to i32 // CHECK: %[[T2:.+]] = arith.addi %[[ARG1]], %[[T1]] : i32 // CHECK: %[[T3:.+]] = arith.index_cast %[[IDX2]] : index to i32 diff --git a/mlir/test/Dialect/Linalg/tile-and-fuse-on-tensors.mlir b/mlir/test/Dialect/Linalg/tile-and-fuse-on-tensors.mlir --- a/mlir/test/Dialect/Linalg/tile-and-fuse-on-tensors.mlir +++ b/mlir/test/Dialect/Linalg/tile-and-fuse-on-tensors.mlir @@ -1,9 +1,9 @@ // RUN: mlir-opt %s -linalg-tile-and-fuse-tensor-ops="tile-sizes=5,4,7 tile-interchange=1,0,2" -cse -split-input-file | FileCheck %s -// CHECK-DAG: #[[MAP0:.*]] = affine_map<(d0) 
-> (5, -d0 + 24)> -// CHECK-DAG: #[[MAP1:.*]] = affine_map<(d0) -> (7, -d0 + 12)> -// CHECK-DAG: #[[MAP2:.*]] = affine_map<(d0, d1) -> (d0, -d1 + 24)> -// CHECK-DAG: #[[MAP3:.*]] = affine_map<(d0, d1) -> (d0, -d1 + 12)> +// CHECK-DAG: #[[MAP0:.*]] = affine_map<()[s0] -> (5, -s0 + 24)> +// CHECK-DAG: #[[MAP1:.*]] = affine_map<()[s0] -> (7, -s0 + 12)> +// CHECK-DAG: #[[MAP2:.*]] = affine_map<()[s0, s1] -> (s0, -s1 + 24)> +// CHECK-DAG: #[[MAP3:.*]] = affine_map<()[s0, s1] -> (s0, -s1 + 12)> // CHECK: fuse_input // CHECK-SAME: %[[ARG0:[0-9a-zA-Z]*]]: tensor<24x12xf32> @@ -20,13 +20,13 @@ // CHECK: scf.for %[[IV0:[0-9a-zA-Z]*]] = // CHECK: scf.for %[[IV1:[0-9a-zA-Z]*]] = - // CHECK: %[[TS1:.*]] = affine.min #[[MAP0]](%[[IV1]]) + // CHECK: %[[TS1:.*]] = affine.min #[[MAP0]]()[%[[IV1]]] // CHECK: scf.for %[[IV2:[0-9a-zA-Z]*]] = - // CHECK: %[[TS2:.*]] = affine.min #[[MAP1]](%[[IV2]]) + // CHECK: %[[TS2:.*]] = affine.min #[[MAP1]]()[%[[IV2]]] // Tile both input operand dimensions. - // CHECK: %[[UB1:.*]] = affine.min #[[MAP2]](%[[TS1]], %[[IV1]]) - // CHECK: %[[UB2:.*]] = affine.min #[[MAP3]](%[[TS2]], %[[IV2]]) + // CHECK: %[[UB1:.*]] = affine.min #[[MAP2]]()[%[[TS1]], %[[IV1]]] + // CHECK: %[[UB2:.*]] = affine.min #[[MAP3]]()[%[[TS2]], %[[IV2]]] // CHECK: %[[T0:.*]] = tensor.extract_slice %[[ARG0]] // CHECK-SAME: %[[IV1]], %[[IV2]] // CHECK-SAME: %[[UB1]], %[[UB2]] @@ -38,8 +38,8 @@ // ----- -// CHECK-DAG: #[[MAP0:.*]] = affine_map<(d0) -> (5, -d0 + 24)> -// CHECK-DAG: #[[MAP1:.*]] = affine_map<(d0) -> (4, -d0 + 25)> +// CHECK-DAG: #[[MAP0:.*]] = affine_map<()[s0] -> (5, -s0 + 24)> +// CHECK-DAG: #[[MAP1:.*]] = affine_map<()[s0] -> (4, -s0 + 25)> // CHECK: fuse_output // CHECK-SAME: %[[ARG2:[0-9a-zA-Z]*]]: tensor<24x25xf32> @@ -57,8 +57,8 @@ // Update the iteration argument of the outermost tile loop. // CHECK: scf.for %[[IV0:.*]] = {{.*}} iter_args(%[[ARG3:.*]] = %[[ARG2]] // CHECK: scf.for %[[IV1:.*]] = {{.*}} iter_args(%[[ARG4:.*]] = %[[ARG3]] - // CHECK: %[[TS1:.*]] = affine.min #[[MAP0]](%[[IV1]]) - // CHECK: %[[TS0:.*]] = affine.min #[[MAP1]](%[[IV0]]) + // CHECK: %[[TS1:.*]] = affine.min #[[MAP0]]()[%[[IV1]]] + // CHECK: %[[TS0:.*]] = affine.min #[[MAP1]]()[%[[IV0]]] // Tile the both output operand dimensions. 
// CHECK: %[[T0:.*]] = tensor.extract_slice %[[ARG4]] @@ -73,10 +73,10 @@ // ----- -// CHECK-DAG: #[[MAP0:.*]] = affine_map<(d0) -> (4, -d0 + 25)> -// CHECK-DAG: #[[MAP1:.*]] = affine_map<(d0) -> (7, -d0 + 12)> -// CHECK-DAG: #[[MAP2:.*]] = affine_map<(d0, d1) -> (d0, -d1 + 25)> -// CHECK-DAG: #[[MAP3:.*]] = affine_map<(d0, d1) -> (d0, -d1 + 12)> +// CHECK-DAG: #[[MAP0:.*]] = affine_map<()[s0] -> (4, -s0 + 25)> +// CHECK-DAG: #[[MAP1:.*]] = affine_map<()[s0] -> (7, -s0 + 12)> +// CHECK-DAG: #[[MAP2:.*]] = affine_map<()[s0, s1] -> (s0, -s1 + 25)> +// CHECK-DAG: #[[MAP3:.*]] = affine_map<()[s0, s1] -> (s0, -s1 + 12)> #map0 = affine_map<(d0, d1, d2) -> (d0, d1, d2)> #map1 = affine_map<(d0, d1, d2) -> (d0, d2)> @@ -100,11 +100,11 @@ // CHECK: scf.for %[[IV0:[0-9a-zA-Z]*]] = // CHECK: scf.for %[[IV1:[0-9a-zA-Z]*]] = - // CHECK: %[[TS0:.*]] = affine.min #[[MAP0]](%[[IV0]]) + // CHECK: %[[TS0:.*]] = affine.min #[[MAP0]]()[%[[IV0]]] // CHECK: scf.for %[[IV2:[0-9a-zA-Z]*]] = - // CHECK: %[[TS2:.*]] = affine.min #[[MAP1]](%[[IV2]]) - // CHECK: %[[UB2:.*]] = affine.min #[[MAP3]](%[[TS2]], %[[IV2]]) - // CHECK: %[[UB0:.*]] = affine.min #[[MAP2]](%[[TS0]], %[[IV0]]) + // CHECK: %[[TS2:.*]] = affine.min #[[MAP1]]()[%[[IV2]]] + // CHECK: %[[UB2:.*]] = affine.min #[[MAP3]]()[%[[TS2]], %[[IV2]]] + // CHECK: %[[UB0:.*]] = affine.min #[[MAP2]]()[%[[TS0]], %[[IV0]]] // Tile only the parallel dimensions but not the reduction dimension. // CHECK: %[[T0:.*]] = tensor.extract_slice %[[ARG3]] @@ -191,7 +191,7 @@ // ----- -// CHECK-DAG: #[[MAP0:.*]] = affine_map<(d0, d1) -> (d0 + d1)> +// CHECK-DAG: #[[MAP0:.*]] = affine_map<()[s0, s1] -> (s0 + s1)> #map0 = affine_map<(d0, d1) -> (d1, d0)> // CHECK: fuse_indexed @@ -222,9 +222,9 @@ // CHECK-SAME: %[[IV2]], %[[IV0]] // CHECK: linalg.generic {{.*}} outs(%[[T1]] // CHECK: %[[IDX0:.*]] = linalg.index 0 - // CHECK: %[[IDX0_SHIFTED:.*]] = affine.apply #[[MAP0]](%[[IDX0]], %[[IV0]]) + // CHECK: %[[IDX0_SHIFTED:.*]] = affine.apply #[[MAP0]]()[%[[IV0]], %[[IDX0]]] // CHECK: %[[IDX1:.*]] = linalg.index 1 - // CHECK: %[[IDX1_SHIFTED:.*]] = affine.apply #[[MAP0]](%[[IDX1]], %[[IV2]]) + // CHECK: %[[IDX1_SHIFTED:.*]] = affine.apply #[[MAP0]]()[%[[IV2]], %[[IDX1]]] // CHECK: %{{.*}} = arith.addi %[[IDX0_SHIFTED]], %[[IDX1_SHIFTED]] %1 = linalg.matmul ins(%arg0, %0 : tensor<24x12xi32>, tensor<12x25xi32>) outs(%arg2 : tensor<24x25xi32>) -> tensor<24x25xi32> return %1 : tensor<24x25xi32> @@ -232,9 +232,10 @@ // ----- -// CHECK-DAG: #[[MAP0:.*]] = affine_map<(d0, d1) -> (d0 + d1)> -// CHECK-DAG: #[[MAP1:.*]] = affine_map<(d0, d1) -> (8, -d0 - d1 + 17)> -// CHECK-DAG: #[[MAP2:.*]] = affine_map<(d0, d1, d2) -> (d0, -d1 - d2 + 17)> +// CHECK-DAG: #[[MAP0:.*]] = affine_map<()[s0, s1] -> (s0 + s1)> +// CHECK-DAG: #[[MAP1:.*]] = affine_map<()[s0, s1] -> (8, -s0 - s1 + 17)> +// CHECK-DAG: #[[MAP2:.*]] = affine_map<()[s0, s1, s2] -> (s2, -s0 - s1 + 17)> + #map0 = affine_map<(d0, d1) -> (d0, d0 + d1)> #map1 = affine_map<(d0, d1) -> (d0, d1)> @@ -258,9 +259,9 @@ // the offset is set to the sum of the induction variables, and the upper bound // to either 8 (tile size) or 17 (sum of max indices (9+7) then + 1) minus the // induction variables. 
- // CHECK: %[[SUM:.*]] = affine.apply #[[MAP0]](%[[IV1]], %[[IV0]] - // CHECK: %[[TS1:.*]] = affine.min #[[MAP1]](%[[IV1]], %[[IV0]] - // CHECK: %[[UB1:.*]] = affine.min #[[MAP2]](%[[TS1]], %[[IV1]], %[[IV0]] + // CHECK: %[[SUM:.*]] = affine.apply #[[MAP0]]()[%[[IV1]], %[[IV0]] + // CHECK: %[[TS1:.*]] = affine.min #[[MAP1]]()[%[[IV1]], %[[IV0]] + // CHECK: %[[UB1:.*]] = affine.min #[[MAP2]]()[%[[IV1]], %[[IV0]], %[[TS1]] // CHECK: %[[T0:.*]] = tensor.extract_slice %[[ARG0]] // CHECK-SAME: %[[IV1]], %[[SUM]] // CHECK-SAME: , %[[UB1]] diff --git a/mlir/test/Dialect/Linalg/tile-and-fuse-tensors.mlir b/mlir/test/Dialect/Linalg/tile-and-fuse-tensors.mlir --- a/mlir/test/Dialect/Linalg/tile-and-fuse-tensors.mlir +++ b/mlir/test/Dialect/Linalg/tile-and-fuse-tensors.mlir @@ -30,8 +30,8 @@ return %3 : tensor } -// CHECK: #[[BOUND2_MAP:.+]] = affine_map<(d0)[s0] -> (2, -d0 + s0)> -// CHECK: #[[BOUND4_MAP:.+]] = affine_map<(d0)[s0] -> (4, -d0 + s0)> +// CHECK: #[[BOUND2_MAP:.+]] = affine_map<()[s0, s1] -> (2, s0 - s1)> +// CHECK: #[[BOUND4_MAP:.+]] = affine_map<()[s0, s1] -> (4, s0 - s1)> // CHECK: func @matmul_tensors( // CHECK-SAME: %[[A:[0-9a-z]*]]: tensor @@ -45,7 +45,7 @@ // CHECK-DAG: %[[dB0:.*]] = tensor.dim %[[B]], %[[C0]] : tensor // CHECK-DAG: %[[dB1:.*]] = tensor.dim %[[B]], %[[C1]] : tensor // CHECK: scf.for %[[I:[0-9a-z]*]] -// CHECK: %[[sizeA0:.*]] = affine.min #[[BOUND2_MAP]](%[[I]])[%[[dA0]]] +// CHECK: %[[sizeA0:.*]] = affine.min #[[BOUND2_MAP]]()[%[[dA0]], %[[I]]] // CHECK: %[[stA:.*]] = tensor.extract_slice %[[A]][%[[I]], 0] [%[[sizeA0]], %[[dA1]]] [1, 1] : tensor to tensor // CHECK-NEXT: scf.for %[[J:[0-9a-z]*]] // CHECK-NEXT: scf.for %[[K:[0-9a-z]*]] {{.*}} iter_args(%[[RES:[0-9a-z]*]] @@ -53,7 +53,7 @@ // CHECK-DAG: %[[stF:.*]] = tensor.extract_slice %[[RES]][%[[I]], %[[J]]] [2, 3] [1, 1] : tensor to tensor<2x3xf32> // // slices of the producing matmul. 
-// CHECK: %[[sizeB1:.*]] = affine.min #[[BOUND4_MAP]](%[[K]])[%[[dB1]]] +// CHECK: %[[sizeB1:.*]] = affine.min #[[BOUND4_MAP]]()[%[[dB1]], %[[K]]] // CHECK: %[[stB2:.*]] = tensor.extract_slice %[[B]][0, %[[K]]] [%[[dB0]], %[[sizeB1]]] [1, 1] : tensor to tensor // CHECK: %[[stC:.*]] = tensor.extract_slice %[[C]][%[[I]], %[[K]]] [%[[sizeA0]], %[[sizeB1]]] [1, 1] : tensor to tensor // CHECK: %[[stD:.*]] = linalg.matmul ins(%[[stA]], %[[stB2]] : tensor, tensor) outs(%[[stC]] : tensor) -> tensor @@ -110,7 +110,7 @@ return %for0 : tensor<1x112x112x32xf32> } -// CHECK: #[[MAP0:.+]] = affine_map<(d0) -> (d0 * 2)> +// CHECK: #[[MAP0:.+]] = affine_map<()[s0] -> (s0 * 2)> // CHECK: #[[MAP1:.+]] = affine_map<(d0, d1, d2, d3) -> (d0, d1, d2, d3)> // CHECK: func @conv_tensors_static @@ -120,9 +120,9 @@ // CHECK-NEXT: %[[FILL:.+]] = linalg.fill(%cst, %[[INIT]]) : f32, tensor<1x112x112x32xf32> -> tensor<1x112x112x32xf32> // CHECK-NEXT: scf.for %[[IV0:.+]] = %{{.+}} to %{{.+}} step %{{.+}} iter_args(%[[ARG0:.+]] = %[[FILL]]) -// CHECK-NEXT: %[[OFFSET_H:.+]] = affine.apply #[[MAP0]](%[[IV0]]) +// CHECK-NEXT: %[[OFFSET_H:.+]] = affine.apply #[[MAP0]]()[%[[IV0]]] // CHECK-NEXT: scf.for %[[IV1:.+]] = %{{.+}} to %{{.+}} step %{{.+}} iter_args(%[[ARG1:.+]] = %[[ARG0]]) -// CHECK-NEXT: %[[OFFSET_W:.+]] = affine.apply #[[MAP0]](%[[IV1]]) +// CHECK-NEXT: %[[OFFSET_W:.+]] = affine.apply #[[MAP0]]()[%[[IV1]]] // CHECK-NEXT: %[[ST_INPUT:.+]] = tensor.extract_slice %arg0[0, %[[OFFSET_H]], %[[OFFSET_W]], 0] [1, 17, 33, 3] [1, 1, 1, 1] : tensor<1x225x225x3xf32> to tensor<1x17x33x3xf32> // CHECK-NEXT: scf.for %[[IV2:.+]] = %{{.+}} to %{{.+}} step %{{.+}} iter_args(%[[ARG2:.+]] = %[[ARG1]]) // CHECK-NEXT: %[[ST_ELEM:.+]] = tensor.extract_slice %[[ELEM]][0, %[[IV0]], %[[IV1]], %[[IV2]]] [1, 8, 16, 4] [1, 1, 1, 1] : tensor<1x112x112x32xf32> to tensor<1x8x16x4xf32> @@ -199,16 +199,16 @@ return %for0 : tensor } -// CHECK: #[[BOUND8_MAP:.+]] = affine_map<(d0)[s0] -> (8, -d0 + s0)> -// CHECK: #[[BOUND8_MAP_2:.+]] = affine_map<(d0)[s0, s1] -> (-d0 + s0, 8, -d0 + s1)> -// CHECK: #[[BOUND16_MAP:.+]] = affine_map<(d0)[s0] -> (16, -d0 + s0)> -// CHECK: #[[X2_MAP:.+]] = affine_map<(d0) -> (d0 * 2)> -// CHECK: #[[INPUT_BOUND:.+]] = affine_map<(d0, d1)[s0, s1] -> (d0 * 2 + s0 - 2, d1 * -2 + s0 + s1 * 2 - 2)> -// CHECK: #[[BOUND16_MAP_2:.+]] = affine_map<(d0)[s0, s1] -> (-d0 + s0, 16, -d0 + s1)> -// CHECK: #[[BOUND4_MAP:.+]] = affine_map<(d0)[s0] -> (4, -d0 + s0)> -// CHECK: #[[BOUND2_MAP:.+]] = affine_map<(d0)[s0] -> (2, -d0 + s0)> -// CHECK: #[[BOUND4_MAP_2:.+]] = affine_map<(d0)[s0, s1] -> (-d0 + s0, 4, -d0 + s1)> -// CHECK: #[[BOUND2_MAP_2:.+]] = affine_map<(d0, d1)[s0, s1] -> (-d0 + s0, 2, -d1 + s1)> +// CHECK: #[[BOUND8_MAP:.+]] = affine_map<()[s0, s1] -> (8, s0 - s1)> +// CHECK: #[[BOUND8_MAP_2:.+]] = affine_map<()[s0, s1, s2] -> (s0 - s1, 8, -s1 + s2)> +// CHECK: #[[BOUND16_MAP:.+]] = affine_map<()[s0, s1] -> (16, s0 - s1)> +// CHECK: #[[X2_MAP:.+]] = affine_map<()[s0] -> (s0 * 2)> +// CHECK: #[[INPUT_BOUND:.+]] = affine_map<()[s0, s1, s2, s3] -> (s0 * 2 + s1 - 2, s1 + s2 * 2 - s3 * 2 - 2)> +// CHECK: #[[BOUND16_MAP_2:.+]] = affine_map<()[s0, s1, s2] -> (s0 - s1, 16, -s1 + s2)> +// CHECK: #[[BOUND4_MAP:.+]] = affine_map<()[s0, s1] -> (4, s0 - s1)> +// CHECK: #[[BOUND2_MAP:.+]] = affine_map<()[s0, s1] -> (2, s0 - s1)> +// CHECK: #[[BOUND4_MAP_2:.+]] = affine_map<()[s0, s1, s2] -> (s0 - s1, 4, -s1 + s2)> +// CHECK: #[[BOUND2_MAP_2:.+]] = affine_map<()[s0, s1, s2, s3] -> (s0 - s1, 2, s2 - s3)> // CHECK: func @conv_tensors_dynamic 
// CHECK-SAME: (%[[INPUT]]: tensor, %[[FILTER]]: tensor, %[[ELEM]]: tensor) @@ -236,27 +236,27 @@ // CHECK-DAG: %[[FILL_W:.+]] = tensor.dim %[[FILL]], %[[C2]] : tensor // CHECK: scf.for %[[IV0:.+]] = %{{.+}} to %[[ELEM_N]] step %{{.+}} iter_args(%{{.+}} = %[[FILL]]) -// CHECK-NEXT: %[[SIZE_ELEM_N:.+]] = affine.min #[[BOUND8_MAP]](%[[IV0]])[%[[ELEM_N]]] -// CHECK-NEXT: %[[SIZE_INPUT_N:.+]] = affine.min #[[BOUND8_MAP_2]](%[[IV0]])[%[[INPUT_N]], %[[ELEM_N]]] +// CHECK-NEXT: %[[SIZE_ELEM_N:.+]] = affine.min #[[BOUND8_MAP]]()[%[[ELEM_N]], %[[IV0]]] +// CHECK-NEXT: %[[SIZE_INPUT_N:.+]] = affine.min #[[BOUND8_MAP_2]]()[%[[INPUT_N]], %[[IV0]], %[[ELEM_N]]] // CHECK-NEXT: scf.for %[[IV1:.+]] = %{{.+}} to %[[ELEM_OH]] -// CHECK-NEXT: %[[SIZE_ELEM_OH:.+]] = affine.min #[[BOUND16_MAP]](%[[IV1]])[%[[ELEM_OH]]] -// CHECK-NEXT: %[[OFFSET_OH:.+]] = affine.apply #[[X2_MAP]](%[[IV1]]) -// CHECK-NEXT: %[[SIZE_INPUT_H:.+]] = affine.min #[[INPUT_BOUND]](%[[SIZE_ELEM_OH]], %[[IV1]])[%[[FILTER_H]], %[[FILL_H]]] -// CHECK-NEXT: %[[SIZE_ELEM_OH_2:.+]] = affine.min #[[BOUND16_MAP_2]](%[[IV1]])[%[[FILL_H]], %[[ELEM_OH]]] +// CHECK-NEXT: %[[SIZE_ELEM_OH:.+]] = affine.min #[[BOUND16_MAP]]()[%[[ELEM_OH]], %[[IV1]]] +// CHECK-NEXT: %[[OFFSET_OH:.+]] = affine.apply #[[X2_MAP]]()[%[[IV1]]] +// CHECK-NEXT: %[[SIZE_INPUT_H:.+]] = affine.min #[[INPUT_BOUND]]()[%[[SIZE_ELEM_OH]], %[[FILTER_H]], %[[FILL_H]], %[[IV1]]] +// CHECK-NEXT: %[[SIZE_ELEM_OH_2:.+]] = affine.min #[[BOUND16_MAP_2]]()[%[[FILL_H]], %[[IV1]], %[[ELEM_OH]]] // CHECK-NEXT: scf.for %[[IV2:.+]] = %{{.+}} to %[[ELEM_OW]] -// CHECK-NEXT: %[[SIZE_ELEM_OW:.+]] = affine.min #[[BOUND4_MAP]](%[[IV2]])[%[[ELEM_OW]]] -// CHECK-NEXT: %[[SIZE_ELEM_OC:.+]] = affine.min #[[BOUND2_MAP]](%[[IV2]])[%[[ELEM_OC]]] -// CHECK-NEXT: %[[OFFSET_OW:.+]] = affine.apply #[[X2_MAP]](%[[IV2]]) -// CHECK-NEXT: %[[SIZE_INPUT_W:.+]] = affine.min #[[INPUT_BOUND]](%[[SIZE_ELEM_OW]], %[[IV2]])[%[[FILTER_W]], %[[FILL_W]]] +// CHECK-NEXT: %[[SIZE_ELEM_OW:.+]] = affine.min #[[BOUND4_MAP]]()[%[[ELEM_OW]], %[[IV2]]] +// CHECK-NEXT: %[[SIZE_ELEM_OC:.+]] = affine.min #[[BOUND2_MAP]]()[%[[ELEM_OC]], %[[IV2]]] +// CHECK-NEXT: %[[OFFSET_OW:.+]] = affine.apply #[[X2_MAP]]()[%[[IV2]]] +// CHECK-NEXT: %[[SIZE_INPUT_W:.+]] = affine.min #[[INPUT_BOUND]]()[%[[SIZE_ELEM_OW]], %[[FILTER_W]], %[[FILL_W]], %[[IV2]]] // CHECK-NEXT: %[[ST_INPUT:.+]] = tensor.extract_slice %[[INPUT]][%[[IV0]], %[[OFFSET_OH]], %[[OFFSET_OW]], 0] // CHECK-SAME: [%[[SIZE_INPUT_N]], %[[SIZE_INPUT_H]], %[[SIZE_INPUT_W]], %[[INPUT_C]]] -// CHECK-NEXT: %[[SIZE_ELEM_OW_2:.+]] = affine.min #[[BOUND4_MAP_2]](%[[IV2]])[%[[FILL_W]], %[[ELEM_OW]]] +// CHECK-NEXT: %[[SIZE_ELEM_OW_2:.+]] = affine.min #[[BOUND4_MAP_2]]()[%[[FILL_W]], %[[IV2]], %[[ELEM_OW]]] // CHECK-NEXT: scf.for %[[IV3:.+]] = %{{.+}} to %[[ELEM_OC]] step %{{.+}} iter_args(%[[ARG:[a-z0-9]+]] // CHECK-NEXT: %[[ST_ELEM:.+]] = tensor.extract_slice %[[ELEM]][%[[IV0]], %[[IV1]], %[[IV2]], %[[IV3]]] // CHECK-SAME: [%[[SIZE_ELEM_N]], %[[SIZE_ELEM_OH]], %[[SIZE_ELEM_OW]], %[[SIZE_ELEM_OC]]] // CHECK-NEXT: %[[ST_ARG:.+]] = tensor.extract_slice %[[ARG]][%[[IV0]], %[[IV1]], %[[IV2]], %[[IV3]]] // CHECK-SAME: [%[[SIZE_ELEM_N]], %[[SIZE_ELEM_OH]], %[[SIZE_ELEM_OW]], %[[SIZE_ELEM_OC]]] -// CHECK-NEXT: %[[SIZE_ELEM_OC_2:.+]] = affine.min #[[BOUND2_MAP_2]](%[[IV3]], %[[IV2]])[%[[FILTER_OC]], %[[ELEM_OC]]] +// CHECK-NEXT: %[[SIZE_ELEM_OC_2:.+]] = affine.min #[[BOUND2_MAP_2]]()[%[[FILTER_OC]], %[[IV3]], %[[ELEM_OC]], %[[IV2]]] // CHECK-NEXT: %[[ST_FILTER:.+]] = tensor.extract_slice %[[FILTER]][0, 0, 
0, %[[IV3]]] // CHECK-SAME: [%[[FILTER_H]], %[[FILTER_W]], %[[FILTER_IC]], %[[SIZE_ELEM_OC_2]]] // CHECK-NEXT: %[[ST_FILL:.+]] = tensor.extract_slice %[[FILL]][%[[IV0]], %[[IV1]], %[[IV2]], %[[IV3]]] diff --git a/mlir/test/Dialect/Linalg/tile-conv.mlir b/mlir/test/Dialect/Linalg/tile-conv.mlir --- a/mlir/test/Dialect/Linalg/tile-conv.mlir +++ b/mlir/test/Dialect/Linalg/tile-conv.mlir @@ -1,9 +1,9 @@ // RUN: mlir-opt %s -linalg-tile="tile-sizes=2,3" | FileCheck %s -// CHECK-DAG: #[[MAP0:.*]] = affine_map<(d0)[s0, s1] -> (s0 + 1, -d0 + s0 + s1 - 1)> -// CHECK-DAG: #[[MAP1:.*]] = affine_map<(d0)[s0, s1] -> (s0 + 2, -d0 + s0 + s1 - 1)> -// CHECK-DAG: #[[MAP2:.*]] = affine_map<(d0)[s0] -> (2, -d0 + s0)> -// CHECK-DAG: #[[MAP3:.*]] = affine_map<(d0)[s0] -> (3, -d0 + s0)> +// CHECK-DAG: #[[MAP0:.*]] = affine_map<()[s0, s1, s2] -> (s0 + 1, s0 + s1 - s2 - 1)> +// CHECK-DAG: #[[MAP1:.*]] = affine_map<()[s0, s1, s2] -> (s0 + 2, s0 + s1 - s2 - 1)> +// CHECK-DAG: #[[MAP2:.*]] = affine_map<()[s0, s1] -> (2, s0 - s1)> +// CHECK-DAG: #[[MAP3:.*]] = affine_map<()[s0, s1] -> (3, s0 - s1)> func @conv(%arg0 : memref, %arg1 : memref, %arg2 : memref) { linalg.conv_2d ins(%arg0, %arg1 : memref, memref) outs(%arg2 : memref) @@ -24,11 +24,11 @@ // CHECK-DAG: %[[T3:.*]] = memref.dim %[[ARG2]], %[[C1]] // CHECK: scf.for %[[ARG3:.*]] = %[[C0]] to %[[T2]] step %[[C2]] // CHECK: scf.for %[[ARG4:.*]] = %[[C0]] to %[[T3]] step %[[C3]] -// CHECK: %[[T4:.*]] = affine.min #[[MAP0]](%[[ARG3]])[%[[T0]], %[[T2]]] -// CHECK: %[[T5:.*]] = affine.min #[[MAP1]](%[[ARG4]])[%[[T1]], %[[T3]]] +// CHECK: %[[T4:.*]] = affine.min #[[MAP0]]()[%[[T0]], %[[T2]], %[[ARG3]]] +// CHECK: %[[T5:.*]] = affine.min #[[MAP1]]()[%[[T1]], %[[T3]], %[[ARG4]]] // CHECK: %[[SV1:.*]] = memref.subview %[[ARG0]][%[[ARG3]], %[[ARG4]]] [%[[T4]], %[[T5]]] -// CHECK: %[[T6:.*]] = affine.min #[[MAP2]](%[[ARG3]])[%[[T2]] -// CHECK: %[[T7:.*]] = affine.min #[[MAP3]](%[[ARG4]])[%[[T3]]] +// CHECK: %[[T6:.*]] = affine.min #[[MAP2]]()[%[[T2]], %[[ARG3]]] +// CHECK: %[[T7:.*]] = affine.min #[[MAP3]]()[%[[T3]], %[[ARG4]]] // CHECK: %[[SV2:.*]] = memref.subview %[[ARG2]][%[[ARG3]], %[[ARG4]]] [%[[T6]], %[[T7]]] // CHECK: linalg.conv_2d // CHECK-SAME: ins(%[[SV1]], %[[ARG1]] diff --git a/mlir/test/Dialect/Linalg/tile-indexed.mlir b/mlir/test/Dialect/Linalg/tile-indexed.mlir --- a/mlir/test/Dialect/Linalg/tile-indexed.mlir +++ b/mlir/test/Dialect/Linalg/tile-indexed.mlir @@ -12,22 +12,22 @@ } return } -// TILE-10n25-DAG: [[$MAP:#[a-zA-Z0-9_]*]] = affine_map<(d0, d1) -> (d0 + d1)> +// TILE-10n25-DAG: [[$MAP:#[a-zA-Z0-9_]*]] = affine_map<()[s0, s1] -> (s0 + s1)> // TILE-10n25-LABEL: func @indexed_vector // TILE-10n25: %[[C10:.*]] = arith.constant 10 : index // TILE-10n25: scf.for %[[J:.*]] = {{.*}} step %[[C10]] // TILE-10n25: linalg.generic // TILE-10n25: %[[I:.*]] = linalg.index 0 : index -// TILE-10n25: %[[NEW_I:.*]] = affine.apply [[$MAP]](%[[I]], %[[J]]) +// TILE-10n25: %[[NEW_I:.*]] = affine.apply [[$MAP]]()[%[[I]], %[[J]]] // TILE-10n25: linalg.yield %[[NEW_I]] : index -// TILE-25n0-DAG: [[$MAP:#[a-zA-Z0-9_]*]] = affine_map<(d0, d1) -> (d0 + d1)> +// TILE-25n0-DAG: [[$MAP:#[a-zA-Z0-9_]*]] = affine_map<()[s0, s1] -> (s0 + s1)> // TILE-25n0-LABEL: func @indexed_vector // TILE-25n0: %[[C25:.*]] = arith.constant 25 : index // TILE-25n0: scf.for %[[J:.*]] = {{.*}} step %[[C25]] // TILE-25n0: linalg.generic // TILE-25n0: %[[I:.*]] = linalg.index 0 : index -// TILE-25n0: %[[NEW_I:.*]] = affine.apply [[$MAP]](%[[I]], %[[J]]) +// TILE-25n0: %[[NEW_I:.*]] = affine.apply 
[[$MAP]]()[%[[I]], %[[J]]] // TILE-25n0: linalg.yield %[[NEW_I]] : index // TILE-0n25-LABEL: func @indexed_vector @@ -48,7 +48,7 @@ } return } -// TILE-10n25-DAG: [[$MAP:#[a-zA-Z0-9_]*]] = affine_map<(d0, d1) -> (d0 + d1)> +// TILE-10n25-DAG: [[$MAP:#[a-zA-Z0-9_]*]] = affine_map<()[s0, s1] -> (s0 + s1)> // TILE-10n25-LABEL: func @indexed_matrix // TILE-10n25-DAG: %[[C25:.*]] = arith.constant 25 : index // TILE-10n25-DAG: %[[C10:.*]] = arith.constant 10 : index @@ -56,30 +56,30 @@ // TILE-10n25: scf.for %[[L:.*]] = {{.*}} step %[[C25]] // TILE-10n25: linalg.generic // TILE-10n25: %[[I:.*]] = linalg.index 0 : index -// TILE-10n25: %[[NEW_I:.*]] = affine.apply [[$MAP]](%[[I]], %[[K]]) +// TILE-10n25: %[[NEW_I:.*]] = affine.apply [[$MAP]]()[%[[I]], %[[K]]] // TILE-10n25: %[[J:.*]] = linalg.index 1 : index -// TILE-10n25: %[[NEW_J:.*]] = affine.apply [[$MAP]](%[[J]], %[[L]]) +// TILE-10n25: %[[NEW_J:.*]] = affine.apply [[$MAP]]()[%[[J]], %[[L]]] // TILE-10n25: %[[SUM:.*]] = arith.addi %[[NEW_I]], %[[NEW_J]] : index // TILE-10n25: linalg.yield %[[SUM]] : index -// TILE-25n0-DAG: [[$MAP:#[a-zA-Z0-9_]*]] = affine_map<(d0, d1) -> (d0 + d1)> +// TILE-25n0-DAG: [[$MAP:#[a-zA-Z0-9_]*]] = affine_map<()[s0, s1] -> (s0 + s1)> // TILE-25n0-LABEL: func @indexed_matrix // TILE-25n0: %[[C25:.*]] = arith.constant 25 : index // TILE-25n0: scf.for %[[L:.*]] = {{.*}} step %[[C25]] // TILE-25n0: linalg.generic // TILE-25n0: %[[I:.*]] = linalg.index 0 : index -// TILE-25n0: %[[NEW_I:.*]] = affine.apply [[$MAP]](%[[I]], %[[L]]) +// TILE-25n0: %[[NEW_I:.*]] = affine.apply [[$MAP]]()[%[[I]], %[[L]]] // TILE-25n0: %[[J:.*]] = linalg.index 1 : index // TILE-25n0: %[[SUM:.*]] = arith.addi %[[NEW_I]], %[[J]] : index // TILE-25n0: linalg.yield %[[SUM]] : index -// TILE-0n25-DAG: [[$MAP:#[a-zA-Z0-9_]*]] = affine_map<(d0, d1) -> (d0 + d1)> +// TILE-0n25-DAG: [[$MAP:#[a-zA-Z0-9_]*]] = affine_map<()[s0, s1] -> (s0 + s1)> // TILE-0n25-LABEL: func @indexed_matrix // TILE-0n25: %[[C25:.*]] = arith.constant 25 : index // TILE-0n25: scf.for %[[L:.*]] = {{.*}} step %[[C25]] // TILE-0n25: linalg.generic // TILE-0n25: %[[I:.*]] = linalg.index 0 : index // TILE-0n25: %[[J:.*]] = linalg.index 1 : index -// TILE-0n25: %[[NEW_J:.*]] = affine.apply [[$MAP]](%[[J]], %[[L]]) +// TILE-0n25: %[[NEW_J:.*]] = affine.apply [[$MAP]]()[%[[J]], %[[L]]] // TILE-0n25: %[[SUM:.*]] = arith.addi %[[I]], %[[NEW_J]] : index // TILE-0n25: linalg.yield %[[SUM]] : index diff --git a/mlir/test/Dialect/Linalg/tile-tensors.mlir b/mlir/test/Dialect/Linalg/tile-tensors.mlir --- a/mlir/test/Dialect/Linalg/tile-tensors.mlir +++ b/mlir/test/Dialect/Linalg/tile-tensors.mlir @@ -133,9 +133,9 @@ // ----- -// CHECK-DAG: #[[MAP0:.*]] = affine_map<(d0)[s0] -> (2, -d0 + s0)> -// CHECK-DAG: #[[MAP1:.*]] = affine_map<(d0) -> (d0 + 3)> -// CHECK-DAG: #[[MAP2:.*]] = affine_map<(d0) -> (d0 + 4)> +// CHECK-DAG: #[[MAP0:.*]] = affine_map<()[s0, s1] -> (2, s0 - s1)> +// CHECK-DAG: #[[MAP1:.*]] = affine_map<()[s0] -> (s0 + 3)> +// CHECK-DAG: #[[MAP2:.*]] = affine_map<()[s0] -> (s0 + 4)> // CHECK: fold_extract_slice // CHECK-SAME: %[[ARG0:[0-9a-zA-Z]*]]: tensor @@ -154,9 +154,9 @@ // CHECK: scf.for %[[IV1:[0-9a-zA-Z]*]] = // Fold the existing extract slice op into the one created by the tiling. 
- // CHECK: %[[SIZE0:.*]] = affine.min #[[MAP0]](%[[IV0]])[%[[DIM]] - // CHECK: %[[OFF0:.*]] = affine.apply #[[MAP1]](%[[IV0]] - // CHECK: %[[OFF1:.*]] = affine.apply #[[MAP2]](%[[IV1]] + // CHECK: %[[SIZE0:.*]] = affine.min #[[MAP0]]()[%[[DIM]], %[[IV0]]] + // CHECK: %[[OFF0:.*]] = affine.apply #[[MAP1]]()[%[[IV0]]] + // CHECK: %[[OFF1:.*]] = affine.apply #[[MAP2]]()[%[[IV1]]] // CHECK: %[[T0:.*]] = tensor.extract_slice %[[ARG0]] // CHECK-SAME: %[[OFF0]], %[[OFF1]] // CHECK-SAME: %[[SIZE0]], 3 diff --git a/mlir/test/Dialect/Linalg/tile.mlir b/mlir/test/Dialect/Linalg/tile.mlir --- a/mlir/test/Dialect/Linalg/tile.mlir +++ b/mlir/test/Dialect/Linalg/tile.mlir @@ -13,12 +13,12 @@ // TILE-002-DAG: #[[$strided2D:.*]] = affine_map<(d0, d1)[s0, s1] -> (d0 * s1 + s0 + d1)> // TILE-234-DAG: #[[$strided2D:.*]] = affine_map<(d0, d1)[s0, s1] -> (d0 * s1 + s0 + d1)> -// TILE-2-DAG: #[[$bound_map:.*]] = affine_map<(d0)[s0] -> (2, -d0 + s0)> -// TILE-02-DAG: #[[$bound_map:.*]] = affine_map<(d0)[s0] -> (2, -d0 + s0)> -// TILE-002-DAG: #[[$bound_map:.*]] = affine_map<(d0)[s0] -> (2, -d0 + s0)> -// TILE-234-DAG: #[[$bound_map_2:.*]] = affine_map<(d0)[s0] -> (2, -d0 + s0)> -// TILE-234-DAG: #[[$bound_map_3:.*]] = affine_map<(d0)[s0] -> (3, -d0 + s0)> -// TILE-234-DAG: #[[$bound_map_4:.*]] = affine_map<(d0)[s0] -> (4, -d0 + s0)> +// TILE-2-DAG: #[[$bound_map:.*]] = affine_map<()[s0, s1] -> (2, s0 - s1)> +// TILE-02-DAG: #[[$bound_map:.*]] = affine_map<()[s0, s1] -> (2, s0 - s1)> +// TILE-002-DAG: #[[$bound_map:.*]] = affine_map<()[s0, s1] -> (2, s0 - s1)> +// TILE-234-DAG: #[[$bound_map_2:.*]] = affine_map<()[s0, s1] -> (2, s0 - s1)> +// TILE-234-DAG: #[[$bound_map_3:.*]] = affine_map<()[s0, s1] -> (3, s0 - s1)> +// TILE-234-DAG: #[[$bound_map_4:.*]] = affine_map<()[s0, s1] -> (4, s0 - s1)> // TILE-2-DAG: #[[$stride_99_1_layout_map:.*]] = affine_map<(d0, d1)[s0] -> (d0 * 99 + s0 + d1)> // TILE-02-DAG: #[[$stride_99_1_layout_map:.*]] = affine_map<(d0, d1)[s0] -> (d0 * 99 + s0 + d1)> @@ -38,10 +38,10 @@ // TILE-2-DAG: %[[C2:.*]] = arith.constant 2 : index // TILE-2: %[[M:.*]] = memref.dim %{{.*}}, %c0 : memref // TILE-2: scf.for %[[I:.*]] = %{{.*}}{{.*}} to %[[M]] step %{{.*}} { -// TILE-2: %[[szM:.*]] = affine.min #[[$bound_map]](%[[I]])[%[[M]]] +// TILE-2: %[[szM:.*]] = affine.min #[[$bound_map]]()[%[[M]], %[[I]]] // TILE-2: %[[K:.*]] = memref.dim %{{.*}}, %c1 : memref // TILE-2: %[[sAi:.*]] = memref.subview %{{.*}}[%[[I]], 0] [%[[szM]], %[[K]]] [1, 1] : memref to memref -// TILE-2: %[[szK:.*]] = affine.min #[[$bound_map]](%[[I]])[%[[M]]] +// TILE-2: %[[szK:.*]] = affine.min #[[$bound_map]]()[%[[M]], %[[I]]] // TILE-2: %[[N:.*]] = memref.dim %{{.*}}, %c1 : memref // TILE-2: %[[sCi:.*]] = memref.subview %{{.*}}[%[[I]], 0] [%[[szK]], %[[N]]] [1, 1] : memref to memref // TILE-2: linalg.matmul ins(%[[sAi]]{{.*}} outs(%[[sCi]] @@ -52,10 +52,10 @@ // TILE-02: %[[N:.*]] = memref.dim %arg1, %c1 : memref // TILE-02: scf.for %[[J:.*]] = %{{.*}} to %[[N]] step %{{.*}} { // TILE-02: %[[K:.*]] = memref.dim %{{.*}}, %c0 : memref -// TILE-02: %[[szN:.*]] = affine.min #[[$bound_map]](%[[J]])[%[[N]]] +// TILE-02: %[[szN:.*]] = affine.min #[[$bound_map]]()[%[[N]], %[[J]]] // TILE-02: %[[sBj:.*]] = memref.subview %{{.*}}[0, %[[J]]] [%[[K]], %[[szN]]] [1, 1] : memref to memref // TILE-02: %[[M:.*]] = memref.dim %{{.*}}, %c0 : memref -// TILE-02: %[[szK:.*]] = affine.min #[[$bound_map]](%[[J]])[%[[N]]] +// TILE-02: %[[szK:.*]] = affine.min #[[$bound_map]]()[%[[N]], %[[J]]] // TILE-02: %[[sCj:.*]] = memref.subview %{{.*}}[0, 
%[[J]]] [%[[M]], %[[szK]]] [1, 1] : memref to memref // TILE-02: linalg.matmul ins(%{{.*}}, %[[sBj]]{{.*}} outs(%[[sCj]] @@ -65,9 +65,9 @@ // TILE-002: %[[ubK:.*]] = memref.dim %{{.*}}, %c1 : memref // TILE-002: scf.for %[[K:.*]] = %{{.*}}{{.*}} to %[[ubK]] step %{{.*}} { // TILE-002: %[[M:.*]] = memref.dim %{{.*}}, %c0 : memref -// TILE-002: %[[szK:.*]] = affine.min #[[$bound_map]](%[[K]])[%[[ubK]]] +// TILE-002: %[[szK:.*]] = affine.min #[[$bound_map]]()[%[[ubK]], %[[K]]] // TILE-002: %[[sAj:.*]] = memref.subview %{{.*}}[0, %[[K]]] [%[[M]], %[[szK]]] [1, 1] : memref to memref -// TILE-002: %[[szK:.*]] = affine.min #[[$bound_map]](%[[K]])[%[[ubK]]] +// TILE-002: %[[szK:.*]] = affine.min #[[$bound_map]]()[%[[ubK]], %[[K]]] // TILE-002: %[[N:.*]] = memref.dim %{{.*}}, %c1 : memref // TILE-002: %[[sBj:.*]] = memref.subview %{{.*}}[%[[K]], 0] [%[[szK]], %[[N]]] [1, 1] : memref to memref // TILE-002: linalg.matmul ins(%[[sAj]], %[[sBj]]{{.*}} outs(%{{.*}} @@ -83,14 +83,14 @@ // TILE-234: scf.for %[[I:.*]] = %{{.*}}{{.*}} to %[[ubM]] step %{{.*}} { // TILE-234: scf.for %[[J:.*]] = %{{.*}}{{.*}} to %[[ubN]] step %{{.*}} { // TILE-234: scf.for %[[K:.*]] = %{{.*}}{{.*}} to %[[ubK]] step %{{.*}} { -// TILE-234: %[[szM:.*]] = affine.min #[[$bound_map_2]](%[[I]])[%[[ubM]]] -// TILE-234: %[[szK:.*]] = affine.min #[[$bound_map_4]](%[[K]])[%[[ubK]]] +// TILE-234: %[[szM:.*]] = affine.min #[[$bound_map_2]]()[%[[ubM]], %[[I]]] +// TILE-234: %[[szK:.*]] = affine.min #[[$bound_map_4]]()[%[[ubK]], %[[K]]] // TILE-234: %[[sAik:.*]] = memref.subview %{{.*}}[%[[I]], %[[K]]] [%[[szM]], %[[szK]]] [1, 1] : memref to memref -// TILE-234: %[[szK:.*]] = affine.min #[[$bound_map_4]](%[[K]])[%[[ubK]]] -// TILE-234: %[[szN:.*]] = affine.min #[[$bound_map_3]](%[[J]])[%[[ubN]]] +// TILE-234: %[[szK:.*]] = affine.min #[[$bound_map_4]]()[%[[ubK]], %[[K]]] +// TILE-234: %[[szN:.*]] = affine.min #[[$bound_map_3]]()[%[[ubN]], %[[J]]] // TILE-234: %[[sBkj:.*]] = memref.subview %{{.*}}[%[[K]], %[[J]]] [%[[szK]], %[[szN]]] [1, 1] : memref to memref -// TILE-234: %[[szM:.*]] = affine.min #[[$bound_map_2]](%[[I]])[%[[ubM]]] -// TILE-234: %[[szN:.*]] = affine.min #[[$bound_map_3]](%[[J]])[%[[ubN]]] +// TILE-234: %[[szM:.*]] = affine.min #[[$bound_map_2]]()[%[[ubM]], %[[I]]] +// TILE-234: %[[szN:.*]] = affine.min #[[$bound_map_3]]()[%[[ubN]], %[[J]]] // TILE-234: %[[sCij:.*]] = memref.subview %{{.*}}[%[[I]], %[[J]]] [%[[szM]], %[[szN]]] [1, 1] : memref to memref // // TILE-234: linalg.matmul ins(%[[sAik]], %[[sBkj]]{{.*}} outs(%[[sCij]] @@ -170,10 +170,10 @@ // TILE-2-DAG: %[[C2:.*]] = arith.constant 2 : index // TILE-2: %[[M:.*]] = memref.dim %{{.*}}, %c0 : memref // TILE-2: scf.for %[[I:.*]] = %{{.*}}{{.*}} to %[[M]] step %{{.*}} { -// TILE-2: %[[szM:.*]] = affine.min #[[$bound_map]](%[[I]])[%[[M]]] +// TILE-2: %[[szM:.*]] = affine.min #[[$bound_map]]()[%[[M]], %[[I]]] // TILE-2: %[[N:.*]] = memref.dim %{{.*}}, %c1 : memref // TILE-2: %[[sAi:.*]] = memref.subview %{{.*}}[%[[I]], 0] [%[[szM]], %[[N]]] [1, 1] : memref to memref -// TILE-2: %[[szN:.*]] = affine.min #[[$bound_map]](%[[I]])[%[[M]]] +// TILE-2: %[[szN:.*]] = affine.min #[[$bound_map]]()[%[[M]], %[[I]]] // TILE-2: %[[sCi:.*]] = memref.subview %{{.*}}[%[[I]]] [%[[szN]]] [1] : memref to memref // TILE-2: linalg.matvec ins(%[[sAi]], %{{.*}} outs(%[[sCi]] @@ -186,9 +186,9 @@ // TILE-02: %[[K:.*]] = memref.dim %{{.*}}, %c1 : memref // TILE-02: scf.for %[[J:.*]] = %{{.*}}{{.*}} to %[[K]] step %{{.*}} { // TILE-02: %[[M:.*]] = memref.dim %{{.*}}, %c0 : memref -// TILE-02: 
%[[szN:.*]] = affine.min #[[$bound_map]](%[[J]])[%[[K]]] +// TILE-02: %[[szN:.*]] = affine.min #[[$bound_map]]()[%[[K]], %[[J]]] // TILE-02: %[[sAj:.*]] = memref.subview %{{.*}}[0, %[[J]]] [%[[M]], %[[szN]]] [1, 1] : memref to memref -// TILE-02: %[[szN:.*]] = affine.min #[[$bound_map]](%[[J]])[%[[K]]] +// TILE-02: %[[szN:.*]] = affine.min #[[$bound_map]]()[%[[K]], %[[J]]] // TILE-02: %[[sBj:.*]] = memref.subview %{{.*}}[%[[J]]] [%[[szN]]] [1] : memref to memref // TILE-02: linalg.matvec ins(%[[sAj]], %[[sBj]]{{.*}} outs(%{{.*}} @@ -209,12 +209,12 @@ // TILE-234: %[[K:.*]] = memref.dim %{{.*}}, %c1 : memref // TILE-234: scf.for %[[I:.*]] = %{{.*}}{{.*}} to %[[M]] step %{{.*}} { // TILE-234: scf.for %[[J:.*]] = %{{.*}}{{.*}} to %[[K]] step %{{.*}} { -// TILE-234: %[[szM:.*]] = affine.min #[[$bound_map_2]](%[[I]])[%[[M]]] -// TILE-234: %[[szN:.*]] = affine.min #[[$bound_map_3]](%[[J]])[%[[K]]] +// TILE-234: %[[szM:.*]] = affine.min #[[$bound_map_2]]()[%[[M]], %[[I]]] +// TILE-234: %[[szN:.*]] = affine.min #[[$bound_map_3]]()[%[[K]], %[[J]]] // TILE-234: %[[sAij:.*]] = memref.subview %{{.*}}[%[[I]], %[[J]]] [%[[szM]], %[[szN]]] [1, 1] : memref to memref -// TILE-234: %[[szN:.*]] = affine.min #[[$bound_map_3]](%[[J]])[%[[K]]] +// TILE-234: %[[szN:.*]] = affine.min #[[$bound_map_3]]()[%[[K]], %[[J]]] // TILE-234: %[[sBj:.*]] = memref.subview %{{.*}}[%[[J]]] [%[[szN]]] [1] : memref to memref -// TILE-234: %[[szM:.*]] = affine.min #[[$bound_map_2]](%[[I]])[%[[M]]] +// TILE-234: %[[szM:.*]] = affine.min #[[$bound_map_2]]()[%[[M]], %[[I]]] // TILE-234: %[[sCi:.*]] = memref.subview %{{.*}}[%[[I]]] [%[[szM]]] [1] : memref to memref // // TILE-234: linalg.matvec ins(%[[sAij]], %[[sBj]]{{.*}} outs(%[[sCi]] @@ -230,9 +230,9 @@ // TILE-2-DAG: %[[C2:.*]] = arith.constant 2 : index // TILE-2: %[[M:.*]] = memref.dim %{{.*}}, %c0 : memref // TILE-2: scf.for %[[I:.*]] = %{{.*}}{{.*}} to %[[M]] step %{{.*}} { -// TILE-2: %[[szM:.*]] = affine.min #[[$bound_map]](%[[I]])[%[[M]]] +// TILE-2: %[[szM:.*]] = affine.min #[[$bound_map]]()[%[[M]], %[[I]]] // TILE-2: %[[sAi:.*]] = memref.subview %{{.*}}[%[[I]]] [%[[szM]]] [1] : memref to memref -// TILE-2: %[[szM:.*]] = affine.min #[[$bound_map]](%[[I]])[%[[M]]] +// TILE-2: %[[szM:.*]] = affine.min #[[$bound_map]]()[%[[M]], %[[I]]] // TILE-2: %[[sBi:.*]] = memref.subview %{{.*}}[%[[I]]] [%[[szM]]] [1] : memref to memref // TILE-2: linalg.dot ins(%[[sAi]], %[[sBi]]{{.*}} outs( @@ -247,9 +247,9 @@ // TILE-234-DAG: %[[C2:.*]] = arith.constant 2 : index // TILE-234: %[[ubK:.*]] = memref.dim %{{.*}}, %c0 : memref // TILE-234: scf.for %[[I:.*]] = %{{.*}} to %[[ubK]] step %{{.*}} { -// TILE-234: %[[szM:.*]] = affine.min #[[$bound_map_2]](%[[I]])[%[[ubK]]] +// TILE-234: %[[szM:.*]] = affine.min #[[$bound_map_2]]()[%[[ubK]], %[[I]]] // TILE-234: %[[sAi:.*]] = memref.subview %{{.*}}[%[[I]]] [%[[szM]]] [1] : memref to memref -// TILE-234: %[[szM:.*]] = affine.min #[[$bound_map_2]](%[[I]])[%[[ubK]]] +// TILE-234: %[[szM:.*]] = affine.min #[[$bound_map_2]]()[%[[ubK]], %[[I]]] // TILE-234: %[[sBi:.*]] = memref.subview %{{.*}}[%[[I]]] [%[[szM]]] [1] : memref to memref // TILE-234: linalg.dot ins(%[[sAi]], %[[sBi]]{{.*}} outs( diff --git a/mlir/test/Dialect/SCF/for-loop-peeling.mlir b/mlir/test/Dialect/SCF/for-loop-peeling.mlir --- a/mlir/test/Dialect/SCF/for-loop-peeling.mlir +++ b/mlir/test/Dialect/SCF/for-loop-peeling.mlir @@ -2,7 +2,7 @@ // RUN: mlir-opt %s -for-loop-peeling=skip-partial=false -canonicalize -split-input-file | FileCheck %s -check-prefix=CHECK-NO-SKIP // CHECK-DAG: 
#[[MAP0:.*]] = affine_map<()[s0, s1, s2] -> (s1 - (s1 - s0) mod s2)> -// CHECK-DAG: #[[MAP1:.*]] = affine_map<(d0)[s0] -> (-d0 + s0)> +// CHECK-DAG: #[[MAP1:.*]] = affine_map<()[s0, s1] -> (-s0 + s1)> // CHECK: func @fully_dynamic_bounds( // CHECK-SAME: %[[LB:.*]]: index, %[[UB:.*]]: index, %[[STEP:.*]]: index // CHECK: %[[C0_I32:.*]] = arith.constant 0 : i32 @@ -15,7 +15,7 @@ // CHECK: } // CHECK: %[[RESULT:.*]] = scf.for %[[IV2:.*]] = %[[NEW_UB]] to %[[UB]] // CHECK-SAME: step %[[STEP]] iter_args(%[[ACC2:.*]] = %[[LOOP]]) -> (i32) { -// CHECK: %[[REM:.*]] = affine.apply #[[MAP1]](%[[IV2]])[%[[UB]]] +// CHECK: %[[REM:.*]] = affine.apply #[[MAP1]]()[%[[IV2]], %[[UB]]] // CHECK: %[[CAST2:.*]] = arith.index_cast %[[REM]] // CHECK: %[[ADD2:.*]] = arith.addi %[[ACC2]], %[[CAST2]] // CHECK: scf.yield %[[ADD2]] @@ -68,7 +68,7 @@ // ----- // CHECK-DAG: #[[MAP0:.*]] = affine_map<()[s0] -> ((s0 floordiv 4) * 4)> -// CHECK-DAG: #[[MAP1:.*]] = affine_map<(d0)[s0] -> (-d0 + s0)> +// CHECK-DAG: #[[MAP1:.*]] = affine_map<()[s0, s1] -> (-s0 + s1)> // CHECK: func @dynamic_upper_bound( // CHECK-SAME: %[[UB:.*]]: index // CHECK-DAG: %[[C0_I32:.*]] = arith.constant 0 : i32 @@ -83,7 +83,7 @@ // CHECK: } // CHECK: %[[RESULT:.*]] = scf.for %[[IV2:.*]] = %[[NEW_UB]] to %[[UB]] // CHECK-SAME: step %[[C4]] iter_args(%[[ACC2:.*]] = %[[LOOP]]) -> (i32) { -// CHECK: %[[REM:.*]] = affine.apply #[[MAP1]](%[[IV2]])[%[[UB]]] +// CHECK: %[[REM:.*]] = affine.apply #[[MAP1]]()[%[[IV2]], %[[UB]]] // CHECK: %[[CAST2:.*]] = arith.index_cast %[[REM]] // CHECK: %[[ADD2:.*]] = arith.addi %[[ACC2]], %[[CAST2]] // CHECK: scf.yield %[[ADD2]] @@ -107,7 +107,7 @@ // ----- // CHECK-DAG: #[[MAP0:.*]] = affine_map<()[s0] -> ((s0 floordiv 4) * 4)> -// CHECK-DAG: #[[MAP1:.*]] = affine_map<(d0)[s0] -> (-d0 + s0)> +// CHECK-DAG: #[[MAP1:.*]] = affine_map<()[s0, s1] -> (-s0 + s1)> // CHECK: func @no_loop_results( // CHECK-SAME: %[[UB:.*]]: index, %[[MEMREF:.*]]: memref // CHECK-DAG: %[[C4_I32:.*]] = arith.constant 4 : i32 @@ -120,7 +120,7 @@ // CHECK: memref.store %[[ADD]], %[[MEMREF]] // CHECK: } // CHECK: scf.for %[[IV2:.*]] = %[[NEW_UB]] to %[[UB]] step %[[C4]] { -// CHECK: %[[REM:.*]] = affine.apply #[[MAP1]](%[[IV2]])[%[[UB]]] +// CHECK: %[[REM:.*]] = affine.apply #[[MAP1]]()[%[[IV2]], %[[UB]]] // CHECK: %[[LOAD2:.*]] = memref.load %[[MEMREF]][] // CHECK: %[[CAST2:.*]] = arith.index_cast %[[REM]] // CHECK: %[[ADD2:.*]] = arith.addi %[[LOAD2]], %[[CAST2]] @@ -149,13 +149,13 @@ // does not rewrite ops that should not be rewritten. 
// CHECK-DAG: #[[MAP1:.*]] = affine_map<()[s0] -> (s0 + 1)> -// CHECK-DAG: #[[MAP2:.*]] = affine_map<(d0)[s0, s1] -> (s0, -d0 + s1 - 1)> -// CHECK-DAG: #[[MAP3:.*]] = affine_map<(d0)[s0, s1, s2] -> (s0, -d0 + s1, s2)> +// CHECK-DAG: #[[MAP2:.*]] = affine_map<()[s0, s1, s2] -> (s0, s1 - s2 - 1)> +// CHECK-DAG: #[[MAP3:.*]] = affine_map<()[s0, s1, s2, s3] -> (s0, s1 - s2, s3)> // CHECK-DAG: #[[MAP4:.*]] = affine_map<()[s0] -> (-s0)> -// CHECK-DAG: #[[MAP5:.*]] = affine_map<(d0)[s0] -> (-d0 + s0)> -// CHECK-DAG: #[[MAP6:.*]] = affine_map<(d0)[s0] -> (-d0 + s0 + 1)> -// CHECK-DAG: #[[MAP7:.*]] = affine_map<(d0)[s0] -> (-d0 + s0 - 1)> -// CHECK-DAG: #[[MAP8:.*]] = affine_map<(d0)[s0] -> (d0 - s0)> +// CHECK-DAG: #[[MAP5:.*]] = affine_map<()[s0, s1] -> (-s0 + s1)> +// CHECK-DAG: #[[MAP6:.*]] = affine_map<()[s0, s1] -> (-s0 + s1 + 1)> +// CHECK-DAG: #[[MAP7:.*]] = affine_map<()[s0, s1] -> (-s0 + s1 - 1)> +// CHECK-DAG: #[[MAP8:.*]] = affine_map<()[s0, s1] -> (s0 - s1)> // CHECK: func @test_affine_op_rewrite( // CHECK-SAME: %[[LB:.*]]: index, %[[UB:.*]]: index, %[[STEP:.*]]: index, // CHECK-SAME: %[[MEMREF:.*]]: memref, %[[SOME_VAL:.*]]: index @@ -166,25 +166,25 @@ // CHECK: memref.store %[[STEP]] // CHECK: %[[RES2:.*]] = affine.apply #[[MAP1]]()[%[[STEP]]] // CHECK: memref.store %[[RES2]] -// CHECK: %[[RES3:.*]] = affine.min #[[MAP2]](%[[IV]])[%[[STEP]], %[[UB]]] +// CHECK: %[[RES3:.*]] = affine.min #[[MAP2]]()[%[[STEP]], %[[UB]], %[[IV]]] // CHECK: memref.store %[[RES3]] -// CHECK: %[[RES4:.*]] = affine.min #[[MAP3]](%[[IV]])[%[[STEP]], %[[UB]], %[[SOME_VAL]]] +// CHECK: %[[RES4:.*]] = affine.min #[[MAP3]]()[%[[STEP]], %[[UB]], %[[IV]], %[[SOME_VAL]]] // CHECK: memref.store %[[RES4]] // CHECK: %[[RES5:.*]] = affine.apply #[[MAP4]]()[%[[STEP]]] // CHECK: memref.store %[[RES5]] // CHECK: } // CHECK: scf.for %[[IV2:.*]] = {{.*}} to %[[UB]] step %[[STEP]] { -// CHECK: %[[RES_IF_0:.*]] = affine.apply #[[MAP5]](%[[IV2]])[%[[UB]]] +// CHECK: %[[RES_IF_0:.*]] = affine.apply #[[MAP5]]()[%[[IV2]], %[[UB]]] // CHECK: memref.store %[[RES_IF_0]] -// CHECK: %[[RES_IF_1:.*]] = affine.apply #[[MAP6]](%[[IV2]])[%[[UB]]] +// CHECK: %[[RES_IF_1:.*]] = affine.apply #[[MAP6]]()[%[[IV2]], %[[UB]]] // CHECK: memref.store %[[RES_IF_1]] -// CHECK: %[[RES_IF_2:.*]] = affine.apply #[[MAP6]](%[[IV2]])[%[[UB]]] +// CHECK: %[[RES_IF_2:.*]] = affine.apply #[[MAP6]]()[%[[IV2]], %[[UB]]] // CHECK: memref.store %[[RES_IF_2]] -// CHECK: %[[RES_IF_3:.*]] = affine.apply #[[MAP7]](%[[IV2]])[%[[UB]]] +// CHECK: %[[RES_IF_3:.*]] = affine.apply #[[MAP7]]()[%[[IV2]], %[[UB]]] // CHECK: memref.store %[[RES_IF_3]] -// CHECK: %[[RES_IF_4:.*]] = affine.min #[[MAP3]](%[[IV2]])[%[[STEP]], %[[UB]], %[[SOME_VAL]]] +// CHECK: %[[RES_IF_4:.*]] = affine.min #[[MAP3]]()[%[[STEP]], %[[UB]], %[[IV2]], %[[SOME_VAL]]] // CHECK: memref.store %[[RES_IF_4]] -// CHECK: %[[RES_IF_5:.*]] = affine.apply #[[MAP8]](%[[IV2]])[%[[UB]]] +// CHECK: %[[RES_IF_5:.*]] = affine.apply #[[MAP8]]()[%[[IV2]], %[[UB]]] // CHECK: memref.store %[[RES_IF_5]] #map0 = affine_map<(d0, d1)[s0] -> (s0, d0 - d1)> #map1 = affine_map<(d0, d1)[s0] -> (d0 - d1 + 1, s0)> diff --git a/mlir/test/Dialect/SparseTensor/sparse_vector_peeled.mlir b/mlir/test/Dialect/SparseTensor/sparse_vector_peeled.mlir --- a/mlir/test/Dialect/SparseTensor/sparse_vector_peeled.mlir +++ b/mlir/test/Dialect/SparseTensor/sparse_vector_peeled.mlir @@ -18,7 +18,7 @@ } // CHECK-DAG: #[[$map0:.*]] = affine_map<()[s0, s1] -> (s0 + ((-s0 + s1) floordiv 16) * 16)> -// CHECK-DAG: #[[$map1:.*]] = affine_map<(d0)[s0] -> 
(-d0 + s0)>
+// CHECK-DAG: #[[$map1:.*]] = affine_map<()[s0, s1] -> (-s0 + s1)>
 // CHECK-LABEL: func @mul_s
 // CHECK-DAG: %[[c0:.*]] = arith.constant 0 : index
 // CHECK-DAG: %[[c1:.*]] = arith.constant 1 : index
@@ -40,7 +40,7 @@
 // CHECK: vector.scatter %{{.*}}[%[[c0]]] [%[[zi]]], %[[mask]], %[[m]] : memref<1024xf32>, vector<16xi64>, vector<16xi1>, vector<16xf32>
 // CHECK: }
 // CHECK: scf.for %[[i2:.*]] = %[[boundary]] to %[[s]] step %[[c16]] {
-// CHECK: %[[sub:.*]] = affine.apply #[[$map1]](%[[i2]])[%[[s]]]
+// CHECK: %[[sub:.*]] = affine.apply #[[$map1]]()[%[[i2]], %[[s]]]
 // CHECK: %[[mask2:.*]] = vector.create_mask %[[sub]] : vector<16xi1>
 // CHECK: %[[li2:.*]] = vector.maskedload %{{.*}}[%[[i2]]], %[[mask2]], %{{.*}} : memref, vector<16xi1>, vector<16xi32> into vector<16xi32>
 // CHECK: %[[zi2:.*]] = arith.extui %[[li2]] : vector<16xi32> to vector<16xi64>
diff --git a/mlir/test/lib/Dialect/Test/TestDialect.cpp b/mlir/test/lib/Dialect/Test/TestDialect.cpp
--- a/mlir/test/lib/Dialect/Test/TestDialect.cpp
+++ b/mlir/test/lib/Dialect/Test/TestDialect.cpp
@@ -629,22 +629,6 @@
   return RegionKind::Graph;
 }
 
-//===----------------------------------------------------------------------===//
-// Test AffineScopeOp
-//===----------------------------------------------------------------------===//
-
-static ParseResult parseAffineScopeOp(OpAsmParser &parser,
-                                      OperationState &result) {
-  // Parse the body region, and reuse the operand info as the argument info.
-  Region *body = result.addRegion();
-  return parser.parseRegion(*body, /*arguments=*/{}, /*argTypes=*/{});
-}
-
-static void print(OpAsmPrinter &p, AffineScopeOp op) {
-  p << "test.affine_scope ";
-  p.printRegion(op.getRegion(), /*printEntryBlockArgs=*/false);
-}
-
 //===----------------------------------------------------------------------===//
 // Test parser.
 //===----------------------------------------------------------------------===//
diff --git a/mlir/test/lib/Dialect/Test/TestOps.td b/mlir/test/lib/Dialect/Test/TestOps.td
--- a/mlir/test/lib/Dialect/Test/TestOps.td
+++ b/mlir/test/lib/Dialect/Test/TestOps.td
@@ -1576,15 +1576,14 @@
   let printer = [{ return ::print(p, *this); }];
 }
 
-def AffineScopeOp : TEST_Op<"affine_scope", [AffineScope]> {
-  let summary = "affine scope operation";
+def ExtendAffineScopeOp : TEST_Op<"affine_scope_extend", [ExtendsAffineScope]> {
+  let summary = "an operation that extends an affine scope";
   let description = [{
-    Test op that defines a new affine scope.
+    Test op that extends an affine scope created by its ancestor op chain.
   }];
   let regions = (region SizedRegion<1>:$region);
-  let parser = [{ return ::parse$cppClass(parser, result); }];
-  let printer = [{ return ::print(p, *this); }];
+  let assemblyFormat = "$region attr-dict";
 }
 
 def WrappingRegionOp : TEST_Op<"wrapping_region",