diff --git a/mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorOps.td b/mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorOps.td
--- a/mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorOps.td
+++ b/mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorOps.td
@@ -100,8 +100,8 @@
     `bufferization.to_memref` operation in the sense that it provides a bridge
     between a tensor world view and a bufferized world view. Unlike the
     `bufferization.to_memref` operation, however, this sparse operation actually
-    lowers into a call into a support library to obtain access to the
-    pointers array.
+    lowers into code that extracts the pointers array from the sparse storage
+    scheme (either by calling a support library or through direct code).
 
     Example:
 
@@ -125,8 +125,8 @@
     `bufferization.to_memref` operation in the sense that it provides a bridge
     between a tensor world view and a bufferized world view. Unlike the
     `bufferization.to_memref` operation, however, this sparse operation actually
-    lowers into a call into a support library to obtain access to the
-    indices array.
+    lowers into code that extracts the indices array from the sparse storage
+    scheme (either by calling a support library or through direct code).
 
     Example:
 
@@ -150,8 +150,8 @@
     the `bufferization.to_memref` operation in the sense that it provides a
     bridge between a tensor world view and a bufferized world view. Unlike the
     `bufferization.to_memref` operation, however, this sparse operation actually
-    lowers into a call into a support library to obtain access to the
-    values array.
+    lowers into code that extracts the values array from the sparse storage
+    scheme (either by calling a support library or through direct code).
 
     Example:
 
@@ -195,8 +195,9 @@
 // Sparse Tensor Management Operations. These operations are "impure" in the
 // sense that they do not properly operate on SSA values. Instead, the behavior
 // is solely defined by side-effects. These operations provide a bridge between
-// the code generator and the support library. The semantics of these operations
-// may be refined over time as our sparse abstractions evolve.
+// "sparsification" on one hand and a support library or actual code generation
+// on the other hand. The semantics of these operations may be refined over time
+// as our sparse abstractions evolve.
 //===----------------------------------------------------------------------===//
 
 def SparseTensor_LexInsertOp : SparseTensor_Op<"lex_insert", []>,
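
For context (not part of the patch), a minimal sketch of how the three extraction ops touched above typically appear in IR. The `#CSR` encoding and the operand `%t` are assumed to be defined elsewhere, and the dimension-operand form of `pointers`/`indices` is assumed here; as the reworded documentation states, each op may lower either to a support-library call or to directly generated code:

```mlir
// Assumed: %t : tensor<64x64xf64, #CSR>, with #CSR a sparse tensor encoding
// defined elsewhere in the module.
%c1 = arith.constant 1 : index
// Pointers and indices arrays at dimension 1 (the compressed dimension of CSR).
%ptrs = sparse_tensor.pointers %t, %c1 : tensor<64x64xf64, #CSR> to memref<?xindex>
%inds = sparse_tensor.indices %t, %c1 : tensor<64x64xf64, #CSR> to memref<?xindex>
// Values array, independent of the dimension.
%vals = sparse_tensor.values %t : tensor<64x64xf64, #CSR> to memref<?xf64>
```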