This pass is always added to the pipeline, but it is skipped unless the optimization level is -O0 or the
function has the optnone attribute. With -O0, the definitions of the shapes of the AMX
intrinsics are close to the intrinsics themselves, and we are not able to find a
point that post-dominates all the shape definitions and dominates all the AMX intrinsics.
To decouple the dependency on the shapes, we transform the AMX intrinsics
into scalar operations so that compilation does not fail. In the long term, we
should improve the fast register allocator so it can allocate AMX registers.
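For context only, here is a rough, self-contained C++ model (not code from this patch; all names are illustrative) of the loop nest the pass conceptually emits for a tile load, with the tile modeled as a flat 16x16 i32 buffer and the unused area kept at zero:

```cpp
#include <cstdint>
#include <cstring>

// Illustrative sketch only: models the loop nest the pass conceptually emits
// for a tile load of Row x Col 32-bit elements from Base with a byte Stride.
// The destination models an AMX tile as a 16x16 buffer; elements outside the
// Row x Col area are left as zero.
static void scalarizedTileLoad(int32_t Dst[16][16], const uint8_t *Base,
                               unsigned Row, unsigned Col, unsigned Stride) {
  std::memset(Dst, 0, sizeof(int32_t) * 16 * 16); // keep the unused area zero
  for (unsigned R = 0; R < Row; ++R)              // rows loop
    for (unsigned C = 0; C < Col; ++C) {          // cols loop
      int32_t Elt;
      std::memcpy(&Elt, Base + R * Stride + C * sizeof(int32_t), sizeof(Elt));
      Dst[R][C] = Elt;
    }
}
```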
Event Timeline
Strange. llvm/test/CodeGen/X86/AMX/amx-low-intrinsics.ll passes on my local machine.
llvm/test/CodeGen/X86/AMX/amx-low-intrinsics.ll
78: Sorry, there is a bug here. According to the AMX spec, the remaining part of dst should be all zeros.
llvm/lib/Target/X86/X86LowerAMXIntrinsics.cpp
434: bool C = false
440: We can iterate it in forward order.
471: Remove the {} for a single-line loop.
507: You can just return it with return LAT.visit().
llvm/test/CodeGen/X86/AMX/amx-low-intrinsics.ll
61: Maybe we can use a zero-mask load in a future optimization.
llvm/lib/Target/X86/X86TargetMachine.cpp
420: We may add both passes anyway and skip each of them based on the optimization level and the optnone attribute inside the two passes.
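A minimal sketch of that gating check, assuming the in-tree CodeGenOpt and attribute APIs; the helper name is made up here and is not part of the patch:

```cpp
#include "llvm/IR/Function.h"
#include "llvm/Support/CodeGen.h"

using namespace llvm;

// Illustrative helper: run the scalarizing lowering only at -O0 or for
// optnone functions; otherwise leave the intrinsics for the optimized
// AMX lowering passes.
static bool shouldLowerAMXIntrinsicsToScalars(const Function &F,
                                              CodeGenOpt::Level OptLevel) {
  return OptLevel == CodeGenOpt::None ||
         F.hasFnAttribute(Attribute::OptimizeNone);
}
```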
llvm/lib/Target/X86/X86LowerAMXIntrinsics.cpp
212–213: In fact, there is no need to handle Row, Col, and K here; just use a fixed 16x16 size. The result of the calculation is the same in the effective area (we just need tileload to keep the "unused" area at 0).
llvm/lib/Target/X86/X86LowerAMXIntrinsics.cpp
212–213: We should keep the code here. In bf16, since +0.0 (0x0000) * a negative float equals -0.0 (0x8000), your solution cannot ensure that the outer edge stays all zeros.
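A tiny standalone C++ demonstration of the signed-zero point (illustrative only):

```cpp
#include <cmath>
#include <cstdio>

// Demonstrates the signed-zero argument above: a +0.0 padding element
// multiplied by a negative value yields -0.0, so zero-padded rows/cols would
// not stay bit-identical to +0.0 after a floating-point dot-product step.
int main() {
  float Padding = +0.0f;
  float Product = Padding * -3.0f; // IEEE 754: +0.0 * negative == -0.0
  std::printf("product = %g, signbit = %d\n", Product,
              static_cast<int>(std::signbit(Product)));
  return 0;
}
```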
llvm/include/llvm/CodeGen/Passes.h
496: Add comments to describe what the pass does?
llvm/lib/Target/X86/X86LowerAMXIntrinsics.cpp
2: This seems to be the wrong file name.
12: Typo: 'able'.
160: Not sure if we can extract the common code of createTileLoadLoops and createTileStoreLoops so that it can be shared by both and by some other functions.
251: Delete the dead code.
268: It should be on another line.
281: Better to put it on a new line.
293: Better to put it on a new line.
373: The name doesn't seem good. Is "PreBuilder" better? And why do we need two builders in this function?
378: Maybe use a right-shift instruction, which is more efficient. I don't think the following passes can optimize this operation (see the sketch after these comments).
412: Is "PreBuilder" better?
416: Shift?
449: PreBuilder?
488: Do we iterate the instructions in topological order or in post order?
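Referenced from the comment at line 378 above: a hedged IRBuilder-level sketch of the shift idea, assuming the byte offset is divided by a 4-byte element size; the helper and variable names are illustrative and not from the patch:

```cpp
#include "llvm/IR/IRBuilder.h"

using namespace llvm;

// Illustrative only: emit "Offset >> 2" instead of "Offset / 4" when building
// the index math, since at -O0 no later pass will strength-reduce the udiv.
static Value *emitIndexFromByteOffset(IRBuilder<> &B, Value *ByteOffset) {
  // Divide form:  return B.CreateUDiv(ByteOffset, B.getInt16(4));
  return B.CreateLShr(ByteOffset, 2); // shift form
}
```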
llvm/lib/Target/X86/X86LowerAMXIntrinsics.cpp
488: It should be pre-order, since we need to handle cases without bitcasts, such as amx-low-intrinsics-no-bitcast.ll.
llvm/include/llvm/CodeGen/Passes.h
496: transforms
llvm/lib/Target/X86/X86LowerAMXIntrinsics.cpp
1: We usually write the comment as ===--- filename - description ---===
51: Ctx
88: Can we just use template <bool IsLoad>? I think it can also reduce the branch (see the sketch after these comments).
99: Not sure about the arithmetic intrinsics, but at least for the load and store intrinsics we can use the LLVM intrinsics llvm.masked.load/store to reduce the inner loop.
166: Maybe we can just use cast<> so that it raises the assertion for us.
223: You can use cast<> to check for failure so that VecA/B/C won't be uninitialized.
229: Ditto.
231: Should we check that it is V256I32?
232: Ditto.
288: eltc?
311: Is it necessary to insert ResElt into VecC?
340: TileLoadStore
341: Forgot to remove?
343: Ditto.
387: Ditto.
391: Ditto.
llvm/lib/Target/X86/X86LowerAMXType.cpp
333: Ditto.
llvm/test/CodeGen/X86/AMX/amx-low-intrinsics-no-bitcast.ll
1: Better to name it amx-low-intrinsics-no-amx-bitcast.ll.
13: It seems the body block is not necessary.
19: Ditto. The label TILELOAD_SCALARIZE_COLS_BODY is not even used.
31: I think cols.latch is not necessary either.
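Referenced from the comment at line 88 above: a hedged, plain C++ model of the template <bool IsLoad> idea. The real code would emit IR rather than perform the memory access itself; all names here are made up:

```cpp
#include <cstdint>
#include <cstring>

// Illustrative sketch of the template <bool IsLoad> suggestion: the direction
// is decided at compile time, so the shared loop nest contains no runtime
// "is this a load or a store?" branch.
template <bool IsLoad>
static void tileLoadStoreLoops(int32_t Tile[16][16], uint8_t *Base,
                               unsigned Row, unsigned Col, unsigned Stride) {
  for (unsigned R = 0; R < Row; ++R)
    for (unsigned C = 0; C < Col; ++C) {
      uint8_t *Ptr = Base + R * Stride + C * sizeof(int32_t);
      if constexpr (IsLoad)
        std::memcpy(&Tile[R][C], Ptr, sizeof(int32_t)); // memory -> tile
      else
        std::memcpy(Ptr, &Tile[R][C], sizeof(int32_t)); // tile -> memory
    }
}

// Usage: instantiate each direction explicitly.
// tileLoadStoreLoops<true>(Tile, Base, Row, Col, Stride);   // load
// tileLoadStoreLoops<false>(Tile, Base, Row, Col, Stride);  // store
```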
llvm/lib/Target/X86/X86LowerAMXIntrinsics.cpp
311: Yes, it is necessary, since you should use the updated eltC (aka Cij) when you are doing the matrix dot product:
llvm/lib/Target/X86/X86LowerAMXIntrinsics.cpp
311: But you don't need to update both C and D. Something like this pseudocode should be enough (with Dij initialized to 0): for (k : K) Dij += Aik * Bkj; Dij += Cij;
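Spelled out as a small standalone C++ sketch of that pseudocode, using plain i32 products as a stand-in for the actual tile element semantics (names are illustrative):

```cpp
#include <cstdint>

// Sketch of the "for (k : K) Dij += Aik * Bkj; Dij += Cij" scheme from the
// comment above: only D is updated in the k loop, and C is read once, so
// there is no need to write the partial sums back into VecC.
static void dotProductSketch(int32_t D[16][16], const int32_t C[16][16],
                             const int32_t A[16][16], const int32_t B[16][16],
                             unsigned M, unsigned N, unsigned K) {
  for (unsigned I = 0; I < M; ++I)
    for (unsigned J = 0; J < N; ++J) {
      int32_t Acc = 0;                  // Dij starts at 0
      for (unsigned Kk = 0; Kk < K; ++Kk)
        Acc += A[I][Kk] * B[Kk][J];     // Dij += Aik * Bkj
      D[I][J] = Acc + C[I][J];          // Dij += Cij
    }
}
```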