diff --git a/mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorBase.td b/mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorBase.td
--- a/mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorBase.td
+++ b/mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorBase.td
@@ -29,7 +29,7 @@
     sparse code automatically was pioneered for dense linear algebra by
     [Bik96] in MT1 (see https://www.aartbik.com/sparse.php) and formalized
     to tensor algebra by [Kjolstad17,Kjolstad20] in the Sparse Tensor
-    Algebra Compiler (TACO) project (see http://tensor-compiler.org/).
+    Algebra Compiler (TACO) project (see http://tensor-compiler.org).
 
     The MLIR implementation closely follows the "sparse iteration theory"
     that forms the foundation of TACO. A rewriting rule is applied to each
diff --git a/mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorOps.td b/mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorOps.td
--- a/mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorOps.td
+++ b/mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorOps.td
@@ -56,20 +56,20 @@
     Results<(outs AnyTensor:$dest)> {
   string summary = "Converts between different tensor types";
   string description = [{
-    Converts one sparse or dense tensor type to another tensor type. The rank
-    and dimensions of the source and destination types must match exactly,
-    only the sparse encoding of these types may be different. The name `convert`
-    was preferred over `cast`, since the operation may incur a non-trivial cost.
-
-    When converting between two different sparse tensor types, only explicitly
-    stored values are moved from one underlying sparse storage format to
-    the other. When converting from an unannotated dense tensor type to a
-    sparse tensor type, an explicit test for nonzero values is used. When
-    converting to an unannotated dense tensor type, implicit zeroes in the
-    sparse storage format are made explicit. Note that the conversions can have
-    non-trivial costs associated with them, since they may involve elaborate
-    data structure transformations. Also, conversions from sparse tensor types
-    into dense tensor types may be infeasible in terms of storage requirements.
+    Converts one sparse or dense tensor type to another tensor type. The rank
+    and dimensions of the source and destination types must match exactly,
+    only the sparse encoding of these types may be different. The name `convert`
+    was preferred over `cast`, since the operation may incur a non-trivial cost.
+
+    When converting between two different sparse tensor types, only explicitly
+    stored values are moved from one underlying sparse storage format to
+    the other. When converting from an unannotated dense tensor type to a
+    sparse tensor type, an explicit test for nonzero values is used. When
+    converting to an unannotated dense tensor type, implicit zeroes in the
+    sparse storage format are made explicit. Note that the conversions can have
+    non-trivial costs associated with them, since they may involve elaborate
+    data structure transformations. Also, conversions from sparse tensor types
+    into dense tensor types may be infeasible in terms of storage requirements.
 
     Examples:
 
@@ -88,15 +88,15 @@
     Results<(outs AnyStridedMemRefOfRank<1>:$result)> {
   let summary = "Extract pointers array at given dimension from a tensor";
   let description = [{
-    Returns the pointers array of the sparse storage scheme at the
-    given dimension for the given sparse tensor. This is similar to the
-    `memref.buffer_cast` operation in the sense that it provides a bridge
-    between a tensor world view and a bufferized world view. Unlike the
-    `memref.buffer_cast` operation, however, this sparse operation actually
-    lowers into a call into a support library to obtain access to the
-    pointers array.
+    Returns the pointers array of the sparse storage scheme at the
+    given dimension for the given sparse tensor. This is similar to the
+    `memref.buffer_cast` operation in the sense that it provides a bridge
+    between a tensor world view and a bufferized world view. Unlike the
+    `memref.buffer_cast` operation, however, this sparse operation actually
+    lowers into a call into a support library to obtain access to the
+    pointers array.
 
-    Example:
+    Example:
 
     ```mlir
     %1 = sparse_tensor.pointers %0, %c1
@@ -112,15 +112,15 @@
     Results<(outs AnyStridedMemRefOfRank<1>:$result)> {
   let summary = "Extract indices array at given dimension from a tensor";
   let description = [{
-    Returns the indices array of the sparse storage scheme at the
-    given dimension for the given sparse tensor. This is similar to the
-    `memref.buffer_cast` operation in the sense that it provides a bridge
-    between a tensor world view and a bufferized world view. Unlike the
-    `memref.buffer_cast` operation, however, this sparse operation actually
-    lowers into a call into a support library to obtain access to the
-    indices array.
+    Returns the indices array of the sparse storage scheme at the
+    given dimension for the given sparse tensor. This is similar to the
+    `memref.buffer_cast` operation in the sense that it provides a bridge
+    between a tensor world view and a bufferized world view. Unlike the
+    `memref.buffer_cast` operation, however, this sparse operation actually
+    lowers into a call into a support library to obtain access to the
+    indices array.
 
-    Example:
+    Example:
 
     ```mlir
     %1 = sparse_tensor.indices %0, %c1
@@ -136,15 +136,15 @@
     Results<(outs AnyStridedMemRefOfRank<1>:$result)> {
   let summary = "Extract numerical values array from a tensor";
   let description = [{
-    Returns the values array of the sparse storage scheme for the given
-    sparse tensor, independent of the actual dimension. This is similar to
-    the `memref.buffer_cast` operation in the sense that it provides a bridge
-    between a tensor world view and a bufferized world view. Unlike the
-    `memref.buffer_cast` operation, however, this sparse operation actually
-    lowers into a call into a support library to obtain access to the
-    values array.
-
-    Example:
+    Returns the values array of the sparse storage scheme for the given
+    sparse tensor, independent of the actual dimension. This is similar to
+    the `memref.buffer_cast` operation in the sense that it provides a bridge
+    between a tensor world view and a bufferized world view. Unlike the
+    `memref.buffer_cast` operation, however, this sparse operation actually
+    lowers into a call into a support library to obtain access to the
+    values array.
+
+    Example:
 
     ```mlir
     %1 = sparse_tensor.values %0 : tensor<64x64xf64, #CSR> to memref