The sparse tensor code generator allocates memory for the output tensor. As
such, we only need to allocate a MemRefDescriptor to receive the output tensor
and do not need to allocate and initialize the storage for the tensor.
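To make the description concrete, here is a minimal sketch, using plain ctypes rather than the actual MLIR Python bindings, of what "allocate a MemRefDescriptor" means: the caller only builds a small, fixed-size struct that the compiled kernel fills in, while the tensor storage itself is allocated elsewhere. The field layout below is an assumption modeled on MLIR's C ABI for a rank-2 f64 memref.

```python
import ctypes

# Illustrative stand-in (not the real MLIR bindings): the C-ABI struct
# for a rank-2 memref of f64. The caller allocates only this descriptor;
# the tensor storage behind the pointers is allocated by the kernel.
class MemRef2D(ctypes.Structure):
    _fields_ = [
        ("allocated", ctypes.POINTER(ctypes.c_double)),  # base allocation
        ("aligned", ctypes.POINTER(ctypes.c_double)),    # aligned data pointer
        ("offset", ctypes.c_longlong),
        ("shape", ctypes.c_longlong * 2),
        ("strides", ctypes.c_longlong * 2),
    ]

desc = MemRef2D()
# The descriptor has a small, fixed size regardless of the tensor's size.
print(ctypes.sizeof(desc))
```

The key point is that `ctypes.sizeof(desc)` is constant (two pointers plus five integers here), independent of how large the output tensor ends up being.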
Diff Detail
- Repository
- rG LLVM Github Monorepo
Event Timeline
Can you refine:
"The sparse tensor code generator allocates memory for the output tensor."
in the description text? The sparse codegen does not bufferize dense tensors; it uses to_memref/to_tensor at the boundaries, so the actual allocation comes from a later bufferization pass. Just to make sure the details are right.
mlir/test/Integration/Dialect/SparseTensor/python/test_SpMM.py, lines 97–100:
Can you split this into two lines, and assign an intuitive name to the memref descriptor, something like `ref_out = rt.make ...`
mlir/test/Integration/Dialect/SparseTensor/python/test_SpMM.py, lines 97–100:
And actually add what you have in the description as a comment: `ref_out = ...` or something like that.
Can you split this into two lines, and assign an intuitive name to the memref descriptor, something like:
ref_out = rt.make ...
mem_out = ctypes.pointer( ...
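The suggested two-line split can be sketched as follows. This is a minimal illustration with plain ctypes: the `MemRef2D` struct below is a hypothetical stand-in for the descriptor that `rt.make_nd_memref_descriptor(...)` from `mlir.runtime` would build, and the names `ref_out`/`mem_out` are the ones the reviewer proposes.

```python
import ctypes

# Hypothetical stand-in for the descriptor returned by MLIR's runtime
# helper; the field layout is an assumption matching the C ABI for a
# rank-2 f64 memref.
class MemRef2D(ctypes.Structure):
    _fields_ = [
        ("allocated", ctypes.POINTER(ctypes.c_double)),
        ("aligned", ctypes.POINTER(ctypes.c_double)),
        ("offset", ctypes.c_longlong),
        ("shape", ctypes.c_longlong * 2),
        ("strides", ctypes.c_longlong * 2),
    ]

# The reviewer's suggestion, as two lines with intuitive names:
ref_out = MemRef2D()                               # descriptor that receives the output tensor
mem_out = ctypes.pointer(ctypes.pointer(ref_out))  # pointer-to-pointer handed to the compiled kernel
```

Splitting the expression this way makes it clear which object is the descriptor (`ref_out`) and which is merely the indirection the calling convention requires (`mem_out`).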