On Vega, whether the buffer size in the descriptor is interpreted
in bytes or in units of stride depends on whether IDXEN is enabled.
This modifies the format intrinsics to always use IDXEN for GFX9,
so that we can always rely on the size being in units of stride.
Before this change the intrinsics were pretty much unusable, as we
could not fill in the size field without knowing whether LLVM would
apply the optimization.
People on IRC told me they preferred modifying the existing
intrinsics. If you would rather I create new ones for this pattern,
that is fine with me.