Add GroupNonUniform arithmetic operations: FAdd, FMul, IMul.
Unify parser, printer, verifier for GroupNonUniform arithmetic
operations.
Diff Detail
- Repository: rG LLVM Github Monorepo
Event Timeline
Unit tests: pass. 62214 tests passed, 0 failed and 815 were skipped.
clang-tidy: pass.
clang-format: pass.
Build artifacts: diff.json, clang-tidy.txt, clang-format.patch, CMakeCache.txt, console-log.txt, test-results.xml
The (de)serialization needs to be implemented. Will that come later?
mlir/include/mlir/Dialect/SPIRV/SPIRVNonUniformOps.td:334
The SPIR-V spec says that the cluster_size must come from a constant. Do we want to just have an integer literal here? We would need to create the constant op during serialization too...
mlir/include/mlir/Dialect/SPIRV/SPIRVNonUniformOps.td:334
@mravishankar thanks for the review. Could you please clarify a little bit: the cluster_size is an SSA value and not an integer literal; there is also a check that cluster_size comes from a constant at L662, and a test for serialization. Am I missing something? Thanks.
mlir/include/mlir/Dialect/SPIRV/SPIRVNonUniformOps.td:334
Sorry about the serialization comment. The test indeed works with serialization. In the SPIR-V spec, the cluster_size is an SSA value, but it also has to come as the result of a constant instruction. One way to implement this is to accept an integer literal in the op itself (so that you don't have an extra constant instruction generated to support this). But now that I think about it, the value could also come from a specialization constant. So it's probably better to leave this as is, with the added advantage that the serialization will be auto-generated. So ignore this comment as a whole.
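For readers following along, here is a minimal sketch, using current MLIR APIs, of the kind of check being discussed: a verifier helper that rejects a cluster_size operand not produced by a constant op. The helper name and diagnostic text are hypothetical, not the patch's exact code.

```cpp
#include "mlir/Dialect/SPIRV/SPIRVOps.h"

using namespace mlir;

// Hypothetical helper: verify that the cluster_size operand of a clustered
// GroupNonUniform arithmetic op is defined by a constant instruction.
// (A specialization constant would surface through a different op and is
// not handled in this sketch.)
static LogicalResult verifyClusterSizeIsConstant(Operation *op,
                                                 Value clusterSize) {
  Operation *definingOp = clusterSize.getDefiningOp();
  if (!definingOp || !isa<spirv::ConstantOp>(definingOp))
    return op->emitOpError(
        "cluster size operand must be defined by a constant instruction");
  return success();
}
```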
mlir/lib/Dialect/SPIRV/SPIRVOps.cpp:631
One suggestion is to make this a templated method (templated on OpTy). Then you could use ".execution_scope()" and ".group_operation()" to get the attribute values.

mlir/lib/Dialect/SPIRV/SPIRVOps.cpp:646
Same here. Maybe this could be templated on OpTy. Please also add comments (a one-liner should be fine) to describe what this method is for.
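A rough sketch of what the templated suggestion could look like, assuming the generated accessors are named after the .td attributes (execution_scope, group_operation); the specific checks below are illustrative rather than the patch's actual verifier.

```cpp
// (Assumes the same includes and 'using namespace mlir;' as the sketch
// above.) Hypothetical templated verifier shared by all GroupNonUniform
// arithmetic ops; OpTy supplies the generated attribute accessors.
template <typename OpTy>
static LogicalResult verifyGroupNonUniformArithmeticOp(OpTy op) {
  // SPIR-V requires the execution scope of non-uniform group operations to
  // be Subgroup (illustrative check).
  if (op.execution_scope() != spirv::Scope::Subgroup)
    return op.emitOpError("execution scope must be Subgroup");

  // A ClusteredReduce group operation additionally needs a cluster_size
  // operand coming from a constant instruction (see the discussion above).
  if (op.group_operation() == spirv::GroupOperation::ClusteredReduce) {
    // ... check the cluster_size operand here ...
  }
  return success();
}
```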
mlir/include/mlir/Dialect/SPIRV/SPIRVNonUniformOps.td:334
Right. In SPIR-V, constant instructions include both normal OpConstant* and specialization OpSpecConstant*, so normally we don't want to represent a constant operand as an attribute on the op itself. We do it for cases like the scope and memory semantics <id>s because the validation rule explicitly says that "All <id> used for Scope and Memory Semantics must be of an OpConstant". Modeling scope and memory semantics as attributes indeed improves IR readability and handling. However, that specific validation rule is for the Shader capability; we will need to relax this if, in the future, we support Kernel capability cases, or we will need to design our own IntegerAttr that can encode the spec constant ID. At that time, maybe we can rethink here too whether we want to use that special IntegerAttr. It's a broader-scope change though: we would need to model an EnumAttr for it and all the associated machinery. Would prefer to defer until necessary. :)
mlir/lib/Dialect/SPIRV/SPIRVOps.cpp:631
@mravishankar @antiagainst I agree that templated methods look more readable, but on the other hand, for each GroupNonUniform arithmetic operation three methods would be created with only one difference: the function signature. They would implement the same logic and code. I was also looking at the other parser/printer/verifier functions for operations in the same category, for example the cast ops, and they use Operation* as the function parameter. What do you think? Thanks.
mlir/lib/Dialect/SPIRV/SPIRVOps.cpp:631
I was wrong about 3 methods; since the parser is already generic, it will be 2 methods for each GroupNonUniform arithmetic operation.
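For comparison, a sketch of the non-templated style referred to above (similar to how the cast ops are handled): one definition taking Operation* and looking up the attributes by name. The attribute names and the diagnostic are illustrative, not the patch's exact code.

```cpp
// (Assumes the same includes and 'using namespace mlir;' as the earlier
// sketch.) Hypothetical non-templated verifier shared via Operation*.
static LogicalResult verifyGroupNonUniformArithmeticOp(Operation *groupOp) {
  // Enum attributes are stored as integer attributes; look them up by name.
  auto scopeAttr = groupOp->getAttrOfType<IntegerAttr>("execution_scope");
  auto groupOperationAttr =
      groupOp->getAttrOfType<IntegerAttr>("group_operation");
  if (!scopeAttr || !groupOperationAttr)
    return groupOp->emitOpError(
        "requires execution_scope and group_operation attributes");
  // ... decode the enum values and apply the same checks as in the
  // templated sketch above ...
  return success();
}
```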
Thanks Denis for adding these ops!
mlir/lib/Dialect/SPIRV/SPIRVOps.cpp:631
That's reasonable. This is a minor issue anyway; I'm fine with the current implementation.