This patch adds a softmax op.
For now, nothing interesting happens: the op only round-trips through parsing and printing.
Later patches will add the tiling interface and the lowering of this op to a sequence of simpler ops.
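For illustration, a minimal sketch of what the round-trip form could look like, modeled on IREE's existing softmax syntax (a `dimension` attribute plus `ins`/`outs` operands). The exact assembly format, shapes, and SSA names here are assumptions, not the patch's definitive syntax:

```mlir
// Hypothetical round-trip example; the actual assembly format in this
// patch may differ.
func.func @softmax(%arg0: tensor<2x16x32xf32>) -> tensor<2x16x32xf32> {
  %init = tensor.empty() : tensor<2x16x32xf32>
  // Softmax is applied along the reduction dimension given by `dimension`,
  // here the innermost dimension (index 2).
  %res = linalg_ext.softmax dimension(2)
           ins(%arg0 : tensor<2x16x32xf32>)
           outs(%init : tensor<2x16x32xf32>) -> tensor<2x16x32xf32>
  return %res : tensor<2x16x32xf32>
}
```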
This graduates the linalg_ext.softmax op from IREE to LLVM.
Original implementation by @harsh.
@nicolasvasilache co-authored this patch.
Would it make sense to allow more than one dimension here?
If your tensor is MB x Head x TileX x TileY, you want the max over TileX x TileY, no?
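A purely hypothetical sketch of the multi-dimension case being asked about, assuming a variadic `dimensions` list (not something this patch provides; the attribute name and syntax are invented for illustration):

```mlir
// Hypothetical: softmax reducing over the two tile dimensions of an
// MB x Head x TileX x TileY tensor, assuming a variadic `dimensions` list.
%res = linalg_ext.softmax dimensions([2, 3])
         ins(%attn : tensor<8x12x64x64xf32>)
         outs(%init : tensor<8x12x64x64xf32>) -> tensor<8x12x64x64xf32>
```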