Replaced the definitions of named ND ConvOps with tensor comprehension
syntax, which reduces boilerplate code significantly. Furthermore,
new ops were added to support TF convolutions (without strides and dilations).
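As an illustration of the tensor comprehension style used in LinalgNamedStructuredOpsSpec.tc, a 1-D convolution can be expressed in a few lines. This is a hedged sketch: the op name `Conv1DOp` and the `ods_def`/`std_addf`/`std_mulf` spellings follow the spec file's conventions but are not quoted from this diff.

```
// Sketch of a named 1-D convolution in tensor comprehension syntax.
// The reduction dimension (kw) is listed in angle brackets on the
// reduction function; loop bounds are inferred from the tensor shapes.
ods_def<Conv1DOp>:
def conv_1d(I: f32(W), K: f32(KW)) -> (O: f32(OW)) {
  O(ow) = std_addf<kw>(std_mulf(I(ow + kw), K(kw)));
}
```

Compared with hand-written ODS plus C++ region builders, one comprehension like this generates the op definition, indexing maps, and region body in one place.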
Diff Detail
- Repository
- rG LLVM Github Monorepo
Event Timeline
mlir/include/mlir/Dialect/Linalg/IR/LinalgNamedStructuredOpsSpec.tc, line 22:

> Can we name these conv_2d_<layout>? I understand it can be seen as redundant with the layout, but it's much easier to grok with the dimensionality included.
mlir/include/mlir/Dialect/Linalg/IR/LinalgNamedStructuredOpsSpec.tc, line 22:

> Yeah, sure. I wonder how to support padding, strides, and dilations. I was thinking of passing additional arguments (1-D arrays) for dilations and strides; however, we probably want to have them as attributes. But if there is some generic ConvOp wrapper such that they are not exposed, we might tolerate it, right? With padding in TF there are two "algorithms", SAME and VALID, so we could have op types for those as well, even though it would double their count and we would have less flexibility.
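For context on the attribute option discussed above, here is a hedged sketch of how strides and dilations look when carried as op attributes, following the spelling of the pre-existing linalg.conv op. The operand order and attribute names are assumptions based on that op, not part of this diff.

```
// Sketch: strides and dilations as attributes rather than 1-D array
// operands. Spelling follows the pre-existing linalg.conv op and is
// an assumption, not this diff's design.
linalg.conv(%filter, %input, %output) {dilations = [1, 1], strides = [2, 2]} :
    memref<?x?x?x?xf32>, memref<?x?x?x?xf32>, memref<?x?x?x?xf32>
```

Attributes keep the configuration static and verifiable at compile time, whereas array operands would allow dynamic values at the cost of harder canonicalization.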
mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp, line 1321:

> Yeah :) Do you keep track of those TODO items somewhere?
If you want your diff landed, rebase _always_ means rebase on the current head of the main repository. LLVM maintains linear revision history, which means any new diff is applied on top of the current head. Currently, your diff does not apply to the head.
If by "previous commit" you mean D83879, it landed some time ago, so the current head already includes those changes.