This is the first patch adding an initial set of matrix intrinsics and a
corresponding lowering pass. This has been discussed on llvm-dev:

http://lists.llvm.org/pipermail/llvm-dev/2019-October/136240.html

The first patch introduces four new intrinsics (transpose, multiply,
columnwise load and store) and a LowerMatrixIntrinsics pass that lowers
those intrinsics to vector operations.

Matrixes are embedded in a 'flat' vector (e.g. a 4 x 4 float matrix
embedded in a <16 x float> vector) and the intrinsics take the dimension
information as parameters. Those parameters need to be ConstantInt.

For the memory layout, we initially assume column-major, but in the RFC
we also described how to extend the intrinsics to support row-major as
well.
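For illustration only (not part of the patch), the column-major layout means element (row, col) of an R x C matrix lives at index col * R + row of the flat vector. A small Python sketch of the embedding:

```python
def flatten_column_major(matrix):
    """Embed a matrix (given as a list of rows) into a flat list,
    column by column.

    Illustrative sketch of the column-major layout described above;
    the actual pass operates on LLVM vector values, not Python lists.
    """
    rows, cols = len(matrix), len(matrix[0])
    # Element (r, c) lands at flat index c * rows + r.
    return [matrix[r][c] for c in range(cols) for r in range(rows)]

m = [[1, 2],
     [3, 4]]  # 2 x 2 matrix with rows [1, 2] and [3, 4]
flat = flatten_column_major(m)  # columns [1, 3] and [2, 4] back to back
```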

For the initial lowering, we split the input of the intrinsics into a
set of column vectors, transform those column vectors and concatenate
the result columns to a flat result vector.
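The split/transform/concatenate scheme can be sketched in Python for the transpose case (purely illustrative; the pass itself emits shufflevector and insert/extract sequences on LLVM vectors):

```python
def split_columns(flat, rows, cols):
    # Slice the flat column-major vector into its column vectors.
    return [flat[c * rows:(c + 1) * rows] for c in range(cols)]

def transpose_columns(columns):
    # Column r of the result gathers element r from every input column,
    # i.e. it is row r of the input matrix.
    rows = len(columns[0])
    return [[col[r] for col in columns] for r in range(rows)]

def lower_transpose(flat, rows, cols):
    # Split -> transform -> concatenate, mirroring the lowering above.
    result_cols = transpose_columns(split_columns(flat, rows, cols))
    return [x for col in result_cols for x in col]

# 2 x 3 matrix [[1, 2, 3], [4, 5, 6]] stored column-major:
flat = [1, 4, 2, 5, 3, 6]
lower_transpose(flat, 2, 3)  # 3 x 2 transpose, column-major: [1, 2, 3, 4, 5, 6]
```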

This allows us to lower the intrinsics without any shape propagation, as
mentioned in the RFC. In follow-up patches, we plan to submit the
following improvements:

- Shape propagation to eliminate the embedding/splitting for each intrinsic.
- Fused & tiled lowering of multiply and other operations.
- Optimization remarks highlighting matrix expressions and costs.
- Generate loops for operations on large matrixes.
- More general block processing for operations on large vectors, exploiting shape information.

We would like to add dedicated transpose, columnwise load and store
intrinsics, even though they are not strictly necessary. For example, we
could instead emit a large shufflevector instruction for the transpose.
But we expect that to (1) become unwieldy for larger matrixes (even for
16x16 matrixes, the resulting shufflevector masks would be huge), and
(2) risk instcombine making small changes to the shuffle, causing us to
fail to detect the transpose and preventing better lowerings.
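To make the first point concrete, here is a hypothetical sketch (not code from the patch) computing the mask a single shufflevector transpose would need; for an R x C matrix the mask has R*C entries, so 256 entries already for 16x16:

```python
def transpose_shuffle_mask(rows, cols):
    """Mask for one shufflevector implementing a column-major transpose.

    Hypothetical helper, for illustration only. Entry i selects the
    source element that lands at flat position i of the transposed matrix.
    """
    mask = []
    for c in range(rows):        # columns of the transposed result
        for r in range(cols):    # rows of the transposed result
            # Result element (c, r) comes from source element (r, c),
            # which sits at flat index r * rows + c in the source.
            mask.append(r * rows + c)
    return mask

# For a 16x16 matrix the mask already has 256 entries:
len(transpose_shuffle_mask(16, 16))  # 256
```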

For the load/store, we are additionally planning on exploiting the
intrinsics for better alias analysis.