Diff Detail
- Repository: rG LLVM Github Monorepo
 
Event Timeline
| mlir/include/mlir/Dialect/Tensor/IR/TensorOps.td | ||
|---|---|---|
| 322 | Can you also mention how the order of elements relates to elements in the tensor?  | |
| mlir/lib/Conversion/ShapeToStandard/ShapeToStandard.cpp | ||
| 198 | Debugging leftover?  | |
| 199 | Could this use the overload that just takes the values?  | |
| 572 | Here, too.  | |
| 573 | Could this use the overload that just takes extentTensor?  | |
| mlir/lib/Dialect/Tensor/IR/TensorOps.cpp | ||
| 413 | This needs to be the stride, so you can compute flatIndex *= tensorType.getDimSize(i) and then do flatIndex += index.getSExtValue().  | |
| mlir/lib/Dialect/Tensor/Transforms/Bufferize.cpp | ||
| 85 | Would it be easier to always allocate a 1d memref, write the values and then reshape it into the final shape?  | |
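The stride-based indexing suggested in the comment on line 413 amounts to a standard row-major linearization of a multi-dimensional index. A minimal sketch (in Python, purely illustrative — the actual patch uses C++ with APInt values from tensorType.getDimSize):

```python
# Hypothetical sketch of row-major index flattening, mirroring the
# reviewer's suggestion: flatIndex *= dimSize; flatIndex += index.
def flatten_index(indices, shape):
    """Map a multi-dimensional index to a flat row-major offset."""
    flat = 0
    for index, dim_size in zip(indices, shape):
        flat = flat * dim_size + index
    return flat

# For a tensor<2x3xf32>, element (1, 2) lands at offset 1*3 + 2 = 5.
print(flatten_index((1, 2), (2, 3)))  # 5
```

This also answers the earlier question about element order: the elements of tensor.from_elements are laid out so that the last dimension varies fastest.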
Address the comments.
| mlir/lib/Conversion/ShapeToStandard/ShapeToStandard.cpp | ||
|---|---|---|
| 199 | Only if we are absolutely sure that the elements are not empty. Unfortunately, there is this ugly use case: %0 = tensor.from_elements : tensor<0xindex>  | |
| 573 | There is a builder that does it. But we have to be sure that extentValues are not empty.  | |
| mlir/lib/Dialect/Tensor/Transforms/Bufferize.cpp | ||
| 85 | It would be the same number of stores + additional reshape that might break some canonicalizations. I think the best variant would be to have a memref.from_elements and then we could lower it to llvm creating a necessary descriptor and linearly populating the underlying buffer.  | |
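The proposed memref.from_elements lowering would build a descriptor over a linearly populated buffer. A hypothetical sketch of that idea (illustrative Python, not the actual LLVM descriptor layout or API):

```python
# Hypothetical sketch of the suggested memref.from_elements lowering:
# populate a flat 1-D buffer linearly, then describe it with the final
# shape and row-major strides -- a descriptor, not a reshape op.
def from_elements(elements, shape):
    size = 1
    for dim in shape:
        size *= dim
    assert len(elements) == size, "element count must match the shape"
    buffer = list(elements)  # linear, row-major population of the buffer
    strides, stride = [], 1
    for dim in reversed(shape):
        strides.append(stride)
        stride *= dim
    strides.reverse()
    return {"buffer": buffer, "shape": shape, "strides": strides}

desc = from_elements([1, 2, 3, 4, 5, 6], (2, 3))
print(desc["strides"])  # [3, 1]
```

Note this also handles the zero-element case (tensor<0xindex>) uniformly: the buffer is simply empty.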