In addition, this fixes a small bug where padding caused the output shape to be incorrectly inferred for dynamic inputs in convolution.
Diff Detail
- Repository: rG LLVM Github Monorepo
Event Timeline
mlir/lib/Conversion/TosaToLinalg/TosaToLinalgNamed.cpp
- Lines 51–54: You should be able to use ShapedType::kDynamicSize instead of -1. It makes it clearer what -1 means.
- Line 110: Rather than putting the width/height handling in the loop, just copy the array, then special-case the width/height after. It keeps the loop body small but still handles the special cases.
mlir/lib/Conversion/TosaToLinalg/TosaToLinalgNamed.cpp
- Lines 51–54: Is there a preferred approach between ShapedType::isDynamic(inputShape[i]) and something like inputTy.isDynamicDim(i)?
- Line 106: I'm assuming that running convolution on unranked input should error, correct? If that's the case, should I check for it in the outer function as a sanity check?
- Line 111: I'm guessing this won't be necessary if I follow Rob's advice above and take the special cases out of the loop.
- Line 113: I'm not sure I understand the question, but stride_y is how the TOSA spec refers to the height parameter for stride.