This enables the sparsification of more kernels, such as convolutions, where an x(i+j) subscript appears. It also enables more tensor invariants, such as x(1), and other affine subscripts, such as x(i+1). For annotated (sparse) tensors accessed through such compound subscripts, we currently still reject sparsity altogether. Despite this restriction, however, we can already handle many more kernels that use compound subscripts for dense access (viz. convolution with a dense input and a sparse filter). Some unit tests and an integration test demonstrate the new capability.
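As a concrete illustration of the dense-access case the summary describes, here is a minimal sketch, not code from this patch: a 1-D convolution whose dense input is read through the compound subscript x(i+j), while the sparse filter is read directly as f(j). The encoding and all names (#SparseVector, #conv1d_trait, @conv1d) are illustrative assumptions, and exact syntax varies across MLIR versions:

```mlir
// Illustrative sketch only: dense input accessed as x(i+j), sparse filter as f(j).
#SparseVector = #sparse_tensor.encoding<{
  dimLevelType = [ "compressed" ]
}>

#conv1d_trait = {
  indexing_maps = [
    affine_map<(i, j) -> (i + j)>,  // dense input x(i+j): compound subscript
    affine_map<(i, j) -> (j)>,      // sparse filter f(j): direct subscript
    affine_map<(i, j) -> (i)>       // dense output y(i)
  ],
  iterator_types = ["parallel", "reduction"]
}

func.func @conv1d(%x: tensor<10xf32>,
                  %f: tensor<3xf32, #SparseVector>,
                  %y: tensor<8xf32>) -> tensor<8xf32> {
  %0 = linalg.generic #conv1d_trait
      ins(%x, %f : tensor<10xf32>, tensor<3xf32, #SparseVector>)
      outs(%y : tensor<8xf32>) {
    ^bb0(%a: f32, %b: f32, %c: f32):
      %m = arith.mulf %a, %b : f32
      %s = arith.addf %c, %m : f32
      linalg.yield %s : f32
  } -> tensor<8xf32>
  return %0 : tensor<8xf32>
}
```

Because only the unannotated input uses the compound subscript here, the sparsifier can still generate code that iterates over the nonzeros of the filter.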
Diff Detail
- Repository: rG LLVM Github Monorepo
Event Timeline
| mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp | |
|---|---|
| 133 | It looks to me that the routine returns false if there is any Affine that can't be handled, and returns true if all Affines can be handled. Am I right? |
| 478–479 | This routine currently generates Affine for unannotated tensors only. |
| 516–518 | Can we add a comment here that says we currently only support direct-indexing Affine for annotated tensors? (See the sketch after this table.) |
| 704 | s/Determine/Determines/ |
| 705 | This can be a const reference, right? |
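To make the 516–518 comment above concrete, the sketch below (again an illustrative assumption, not code from the patch) shows the still-rejected shape: the annotated (sparse) operand itself is accessed through a shifted subscript such as x(i+1), so the sparsifier currently bails out on the kernel instead of generating code:

```mlir
// Illustrative sketch only: a shifted read x(i+1) from a *sparse* vector.
// Only direct indexing (e.g. x(i)) is supported for annotated tensors,
// so sparsification currently rejects this kernel.
#SparseVector = #sparse_tensor.encoding<{
  dimLevelType = [ "compressed" ]
}>

#shift_trait = {
  indexing_maps = [
    affine_map<(i) -> (i + 1)>,  // compound subscript on the sparse operand
    affine_map<(i) -> (i)>
  ],
  iterator_types = ["parallel"]
}

func.func @shift(%x: tensor<8xf32, #SparseVector>,
                 %y: tensor<7xf32>) -> tensor<7xf32> {
  %0 = linalg.generic #shift_trait
      ins(%x : tensor<8xf32, #SparseVector>)
      outs(%y : tensor<7xf32>) {
    ^bb0(%a: f32, %b: f32):
      linalg.yield %a : f32
  } -> tensor<7xf32>
  return %0 : tensor<7xf32>
}
```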
| mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_filter_conv2d.mlir | |
|---|---|
| 25 | TENSOR0 is not needed? |
| 25 | sharp eye! |