Infers output shape for dynamic width/height inputs.
Diff Detail

Repository: rG LLVM Github Monorepo
Event Timeline
Nice, thanks!
mlir/lib/Conversion/TosaToLinalg/TosaToLinalgNamed.cpp
Line 64: As currently named, the function may be confusing because the name is quite general ("index" refers to the type here, but many readers could take it as an index into a tensor, or an index in the axis sense). If we follow the other shape work, reifyConstantDim could work (materialize would feel more natural locally, but the method that materializes shape computations as actual ops in the module uses "reify").
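For concreteness, a minimal sketch of what the renamed helper might look like, assuming it receives the dimension as an int64_t and builds through an ImplicitLocOpBuilder (both assumptions; the actual patch may differ):

```cpp
// Hypothetical shape of the renamed helper: materialize ("reify") a constant
// dimension as an index-typed SSA value. Signature is an assumption.
static Value reifyConstantDim(int64_t dim, ImplicitLocOpBuilder &builder) {
  return builder.createOrFold<arith::IndexCastOp>(
      builder.getIndexType(),
      builder.create<arith::ConstantOp>(builder.getI64IntegerAttr(dim)));
}
```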
Line 66: This adds a cast unconditionally; one could use createOrFold (https://mlir.llvm.org/doxygen/classmlir_1_1ImplicitLocOpBuilder.html#aec0ceee6a6e834449c1f7c65d45b26e5) or just check the type before creating the cast instead. (That also allows asserting that the type is compatible, so if you got a float attr instead of an integer one, you could flag it.)
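The second alternative from the comment, sketched with illustrative names (the helper and its signature are assumptions, not the patch's code):

```cpp
// Check the type before creating the cast; this also lets us flag an
// incompatible (e.g. float) value instead of silently casting.
static Value toIndex(Value value, ImplicitLocOpBuilder &builder) {
  if (value.getType().isIndex())
    return value;
  assert(isa<IntegerType>(value.getType()) &&
         "expected an integer value, not a float");
  return builder.create<arith::IndexCastOp>(builder.getIndexType(), value);
}
```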
Line 73: Could you rewrite the formula below in terms of what the inputs are (e.g., H = F(IH, ...))? Reading just the comment, one could think this returns two values, one for the width and one for the height, but you have noticed that a single formula suffices to compute either by passing in the appropriate attributes.
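A sketch of the single formula in question, using the standard convolution output-size arithmetic (the function and parameter names are illustrative, not the patch's):

```cpp
// One formula serves both dimensions: pass the height attributes to get OH,
// the width attributes to get OW.
static int64_t getConvOutputDim(int64_t inputDim, int64_t padBefore,
                                int64_t padAfter, int64_t kernelDim,
                                int64_t stride, int64_t dilation) {
  // OD = (ID + padBefore + padAfter - dilation * (KD - 1) - 1) / stride + 1
  return (inputDim + padBefore + padAfter - dilation * (kernelDim - 1) - 1) /
             stride +
         1;
}
```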
Line 134: Nit: prefer not to compute the end in the for header (https://llvm.org/docs/CodingStandards.html#use-range-based-for-loops-wherever-possible).
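The cited guideline in practice, with an illustrative container and body:

```cpp
// Range-based form, instead of:
//   for (size_t i = 0, e = shape.size(); i != e; ++i) ...
for (int64_t dim : shape)
  resultDims.push_back(dim);
```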
mlir/test/Conversion/TosaToLinalg/tosa-to-linalg-named.mlir

Line 444: So (OOC) is the reification of shape computations currently combined with the lowering to linalg? (There isn't an intermediate step?)
mlir/test/Conversion/TosaToLinalg/tosa-to-linalg-named.mlir

Line 444: That's my thought. It would be nice to have a better mechanism for reifying shapes, but we explicitly do it during the lowering. I was wondering whether a future refactor to push it into a reification step would help simplify things.
mlir/test/Conversion/TosaToLinalg/tosa-to-linalg-named.mlir

Line 444: I think we can get there with InferTensorTypeWithReify + perhaps some cleanups in its reify mechanism (ValueShapeRange is intended to allow injecting info without mutating). Such a refactor would be great!
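For reference, a heavily hedged sketch of how a lowering could lean on that interface instead of recomputing shapes inline; the call reflects InferShapedTypeOpInterface::reifyReturnTypeShapes as documented around this time, and all surrounding names are illustrative assumptions:

```cpp
// Ask the op to reify its result shapes rather than recomputing them in the
// conversion pattern. `shapedOp` is assumed to implement
// InferShapedTypeOpInterface (e.g. via InferTensorTypeWithReify).
SmallVector<Value> reifiedShapes;
if (failed(shapedOp.reifyReturnTypeShapes(rewriter, shapedOp->getOperands(),
                                          reifiedShapes)))
  return failure();
// reifiedShapes now holds one shape value per result, usable when creating
// the init tensor for the linalg op.
```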