The dimension order of a filter in TensorFlow is
[filter_height, filter_width, in_channels, out_channels], which is different
from the current definition. The current definition follows the TOSA spec. Add
TF-style conv ops to the .tc file, so we do not have to insert transpose ops
around a conv op.
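For illustration, with such an op a TF-layout filter can be consumed directly, with no transpose around the convolution. This is only a sketch: the op name follows the conv_2d_input_nhwc_filter_hwcf naming discussed below, and the exact attribute syntax and shapes are assumptions.

```mlir
// Sketch: the filter is already in TF order (KH, KW, C, F), so it feeds the
// convolution directly without a transpose. Shapes are arbitrary examples.
func @conv_tf_layout(%input: memref<1x4x4x3xf32>,
                     %filter: memref<2x2x3x8xf32>,
                     %output: memref<1x3x3x8xf32>) {
  linalg.conv_2d_input_nhwc_filter_hwcf
    {dilations = dense<1> : tensor<2xi64>, strides = dense<1> : tensor<2xi64>}
    ins(%input, %filter : memref<1x4x4x3xf32>, memref<2x2x3x8xf32>)
    outs(%output : memref<1x3x3x8xf32>)
  return
}
```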
Diff Detail
Repository: rG LLVM Github Monorepo
Event Timeline
Is it possible to mark that the mlir files were copied from other files?
I actually copied it from another file and only modified the dimension order of filters.
I did some research, and it looks like that's not possible. The information is obtained from git, and there is no git cp command. If the old file still exists, git treats the new one as a new file, not as a renamed or branched file.
Thanks Hanhan for working on this!!
mlir/include/mlir/Dialect/Linalg/IR/LinalgNamedStructuredOpsSpec.tc
Line 104: Instead of using tf as a prefix to indicate the semantics, what about just being explicit in the name? For example, we could have conv_2d_input_nhwc_filter_hwcf. This does make the op name a bit lengthy, but I think it's worth it given the clarity: one does not need to open the TensorFlow documentation to understand the semantics here.
Line 106: Can you also add documentation for these ops? You can see https://reviews.llvm.org/D94966 as an example.
mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp
Line 451: How much are you relying on these vectorization patterns?
mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp
Line 451: I think we don't really rely on these patterns in IREE. I added them because I looked at how the other conv ops are handled and made a similar change for these ones. We can remove them if you think that's better.
LGTM. Just a couple of additional requests regarding documentation for the indexing maps and tests. It would be nice if you could add some additional tests in generalize-named-ops.mlir and named-ops.mlir to verify the semantics and IR format.
mlir/include/mlir/Dialect/Linalg/IR/LinalgNamedStructuredOpsSpec.tc
Line 106: Also the indexing maps' dimension order? That is typically very important to know, but very obscure and hidden entirely.
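For reference, the indexing maps implied by this layout would look roughly as follows. This is a sketch only: the dimension names, iterator order, and how strides/dilations are folded into the maps are assumptions and may differ from what the .tc generator emits.

```mlir
// Unit strides and dilations shown; a stride s and dilation d would turn
// oh + kh into oh * s + kh * d (and likewise for the width dimension).
#input_nhwc  = affine_map<(n, oh, ow, f, kh, kw, c) -> (n, oh + kh, ow + kw, c)>
#filter_hwcf = affine_map<(n, oh, ow, f, kh, kw, c) -> (kh, kw, c, f)>
#output_nhwf = affine_map<(n, oh, ow, f, kh, kw, c) -> (n, oh, ow, f)>
```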
mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp
Line 451: Yeah. We aren't relying on the vectorization in core at the moment. :)
mlir/test/Dialect/Linalg/generalize-named-ops.mlir
Line 185 (on Diff #322758): It would be nice to change one of the tests to use non-1 dilations and non-1 strides, to make sure the affine_map is correct for them.
mlir/test/Dialect/Linalg/generalize-named-ops.mlir
Line 185 (on Diff #322758): Good point, done!
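As a follow-up to the exchange above, a test case with non-1 strides and dilations might look roughly like the following. The shapes and attribute values are hypothetical, chosen only so that the strides and dilations visibly affect the generated affine_map.

```mlir
// Sketch: strides of 2 and dilations of 3, so the generalized maps must
// contain the multiplied terms (e.g. oh * 2 + kh * 3).
func @conv_strided_dilated(%input: memref<1x10x10x3xf32>,
                           %filter: memref<2x2x3x8xf32>,
                           %output: memref<1x4x4x8xf32>) {
  linalg.conv_2d_input_nhwc_filter_hwcf
    {dilations = dense<3> : tensor<2xi64>, strides = dense<2> : tensor<2xi64>}
    ins(%input, %filter : memref<1x10x10x3xf32>, memref<2x2x3x8xf32>)
    outs(%output : memref<1x4x4x8xf32>)
  return
}
```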