This is an archive of the discontinued LLVM Phabricator instance.

[mlir][Memref] Add memref.memory_space_cast and its lowerings
ClosedPublic

Authored by krzysz00 on Jan 6 2023, 10:37 AM.

Details

Summary

Address space casts are present in common MLIR lowering targets (LLVM
and SPIR-V). Some planned rewrites (for example, one of the potential
fixes for the fact that the AMDGPU backend requires alloca() to live
in address space 5, the GPU private memory space) may require
inserting such casts into MLIR code, where those address spaces can be
represented by arbitrary memory space attributes.

Therefore, we define memref.memory_space_cast and its lowerings.
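As an illustration (a sketch based on the proposed op; the shape and memory spaces are arbitrary examples), the cast changes only the memory space of the type, leaving shape, element type, and layout intact:

  // Cast a buffer from memory space 1 into the default memory space.
  %cast = memref.memory_space_cast %buf : memref<16xf32, 1> to memref<16xf32>

Under the lowering to the LLVM dialect, the expectation is that the pointers in the memref descriptor are rewritten with address space casts while the rest of the descriptor is carried through unchanged.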

Depends on D141293

Event Timeline

krzysz00 created this revision.Jan 6 2023, 10:37 AM
krzysz00 requested review of this revision.Jan 6 2023, 10:37 AM
krzysz00 updated this revision to Diff 487457.Jan 9 2023, 8:37 AM

Split patch

ftynse added inline comments.Jan 10 2023, 2:52 AM
mlir/include/mlir/Conversion/LLVMCommon/MemRefBuilder.h
216

Nit

mlir/include/mlir/Dialect/MemRef/IR/MemRefOps.td
127–131

Nit: this is not needed when assemblyFormat is provided.

mlir/lib/Conversion/MemRefToLLVM/MemRefToLLVM.cpp
838

Nit

858

Nit: don't specify the explicit number of stack elements in small vectors unless there's a strong reason to.

872

Nit: please add trailing periods to all comments.

874–877

Memrefs allow for non-integer address spaces, but LLVM pointers do not. How about taking the address space from the converted type instead? That way, the type converter will have handled the address space conversion (or failed to produce a type, at which point we would have to stop anyway).
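To illustrate the point (a hypothetical example, not taken from this diff): a memref's memory space may be a dialect attribute with no intrinsic integer value, and only the configured type converter knows how to map it onto an LLVM address space.

  // The memory space below is a dialect attribute, not an integer; a
  // GPU-aware type converter might map it to an integer LLVM address space
  // (e.g. 3 for workgroup memory), so the lowering should read the address
  // space off the converted type rather than interpret the attribute itself.
  func.func @workgroup_arg(%buf: memref<8xf32, #gpu.address_space<workgroup>>) {
    return
  }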

934–936

Are we sure that the pointers are tightly laid out, e.g., that 32-bit pointer values cannot end up aligned to 64 bits? I'd rather subtract the base address from the offset base address to be sure. This should normally be handled by constant folding at the LLVM IR level once it has the full data layout.

mlir/lib/Dialect/MemRef/IR/MemRefOps.cpp
248–255

Aren't shape and element type guaranteed to be the same by the (trait) verifier? Equal shapes also require equal rank, so no need to check for both.

260–261

Nit: can't this be just return uaT.getElementType() == ubT.getElementType(); ? :)

krzysz00 updated this revision to Diff 490919.Jan 20 2023, 11:26 AM
krzysz00 marked 5 inline comments as done.
krzysz00 retitled this revision from [mlir][Memref] Add memref.address_space_cast and its lowerings to [mlir][Memref] Add memref.memory_space_cast and its lowerings.
krzysz00 edited the summary of this revision. (Show Details)

Change operation name, address some review comments

krzysz00 added inline comments.Jan 23 2023, 10:22 AM
mlir/lib/Conversion/MemRefToLLVM/MemRefToLLVM.cpp
934–936

We've been making that assumption in all our unranked memref descriptor code up to this point - computeSizes() very much relies on the tight packing to compute the alloca() size we need for such a descriptor.
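For context, a sketch of the descriptor layouts this assumption refers to (illustrative only, based on the standard MemRef-to-LLVM lowering with opaque pointers and 64-bit indices; not code from this patch):

  // memref<?x?xf32> lowers to the ranked descriptor struct
  //   !llvm.struct<(ptr, ptr, i64, array<2 x i64>, array<2 x i64>)>
  //   = { allocated ptr, aligned ptr, offset, sizes[2], strides[2] }
  //
  // memref<*xf32> lowers to the unranked descriptor
  //   !llvm.struct<(i64, ptr)>
  //   = { rank, pointer to a tightly packed copy of the ranked descriptor }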

ftynse accepted this revision.Jan 31 2023, 3:03 AM
ftynse added inline comments.
mlir/include/mlir/Dialect/MemRef/IR/MemRefOps.td
1203–1206

Nit: maybe use the GPU memory space attributes here instead of magic integers to promote good practices in the documentation.
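For instance (a hypothetical documentation example, assuming the gpu dialect's address space attribute), the documented cast could read:

  // Cast from GPU workgroup memory to the default memory space, spelling the
  // source space with a gpu dialect attribute instead of a magic integer.
  %generic = memref.memory_space_cast %shared
      : memref<32xf32, #gpu.address_space<workgroup>> to memref<32xf32>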

This revision is now accepted and ready to land.Jan 31 2023, 3:03 AM
krzysz00 updated this revision to Diff 496220.Feb 9 2023, 1:02 PM

Rebase for opaque pointers