See https://www.tensorflow.org/xla/operation_semantics#concatenate for the operator semantics
Diff Detail
- Repository: rG LLVM Github Monorepo
Event Timeline
Much easier to review a smaller revision! Thanks.
mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorOps.td

| Line | Comment |
|---|---|
| 171 | Concatenates a list of tensor into a single tensor. |
| 173 | Please mention that all operands have the same rank, and that 0 <= dimension < rank. |
mlir/lib/Dialect/SparseTensor/IR/SparseTensorDialect.cpp

| Line | Comment |
|---|---|
| 359 | Check 0 <= concatDim < rank? |
| 364 | You could refine the error by emitting the argument number and the mismatched rank. |
| 368 | Can you add a comment here on what you allow for dynamic sizes? The logic for static sizes is clear (all the same for != concatDim, and the sum of all if == concatDim), but it is not so clear which cases you allow when some are dynamic. Would it make sense to simply require them all to be static for the first version? Did you observe a need to support dynamic sizes? |
mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorOps.td

| Line | Comment |
|---|---|
| 174 | Typo: negaive -> negative. Also, although you are right that (unsigned) indices are always non-negative, I would, at least here in the doc, simply say 0 <= dimension < rank for readability. |
mlir/lib/Dialect/SparseTensor/IR/SparseTensorDialect.cpp

| Line | Comment |
|---|---|
| 361 | No { } needed in a single-statement if. Also, we could decide to allow a size of 1 and simply fold the op into its single input. |
| 367 | No { } needed. |
| 373 | Same, no { } needed. |
| 382 | And here, no { } needed; also below. |
mlir/lib/Dialect/SparseTensor/IR/SparseTensorDialect.cpp

| Line | Comment |
|---|---|
| 361 | Should I do it now or in the future (when it is needed)? Also, we probably need to lower it to sparse_tensor.convert in the case where the input and output tensors have different encodings. |
Two last nits, but good to go
mlir/lib/Dialect/SparseTensor/IR/SparseTensorDialect.cpp

| Line | Comment |
|---|---|
| 361 | We can do that later. I was just thinking "out loud" here. |
| 404 | It works, but I would find it a bit cleaner to assign the first dimension size here and then check that all the others are equal to it, without the "prev" trick (and you have just verified that there are sufficient inputs for this). |
Some post-landing comments since I was out on vacation
mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorOps.td

| Line | Comment |
|---|---|
| 173 | Note that there is an AllRanksMatch trait to ensure that ranks match between operands (and/or results, iiuc). I'll post a CL to add that. I don't know that there's a premade trait for the other constraint, but it should be easy enough to write up. |
| 179–180 | As I mentioned in D130287, I think it would be good to have this op not introduce any new dynamic sizes, but rather force an explicit cast to do so (like memref::CollapseShapeOp). |
mlir/lib/Dialect/SparseTensor/IR/SparseTensorDialect.cpp

| Line | Comment |
|---|---|
| 368 | @aartbik fwiw, we had a long discussion over at D130287. IMO the only valid thing would be to allow dynamic sizes for the dimension being concatenated along (and if any input has one, then the corresponding output dimension would also be dynamic), since anything else would require introducing runtime assertions (or a sufficiently powerful analysis to statically determine that those assertions must always succeed). |