Prior to this change, the memory management here was deeply confusing:
the way the MI was built relied on the SelectionDAG allocating the
arrays of MachineMemOperand pointers with the MachineFunction's
allocator so that the raw pointer to the array could be blindly copied
into an eventual MachineInstr. This created a hard coupling between how
MachineInstrs allocate their array of MachineMemOperand pointers and
how the MachineSDNode does.
This change is motivated in large part by a change I am making to how
MachineFunction allocates these pointers, but it seems like a layering
improvement as well.
This would run the risk of increasing allocations overall, but I've
implemented an optimization that should avoid that: when there is only
a single MachineMemOperand, its pointer is stored directly instead of
allocating an array at all. This is expected to be a net win because
the vast majority of uses only need a single memory operand pointer.
As a side-effect, this makes the APIs for updating a MachineSDNode and
a MachineInstr reasonably different, which seems nice for avoiding
unexpected coupling between these two layers. We can map between them,
but we shouldn't be *surprised* at where that occurs. =]
Should we use TinyPtrVector here instead of making this a PointerUnion and keeping NumMemRefs below?
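For comparison, a rough sketch of what the TinyPtrVector alternative would look like (the wrapper name is illustrative). Note that TinyPtrVector keeps a single pointer inline but owns heap-allocated storage once it grows past one element, rather than borrowing an allocator-provided array, which may or may not fit the allocation model here.

```cpp
// Illustrative sketch only: the same storage expressed with TinyPtrVector.
// One element lives inline in the union; two or more cause TinyPtrVector to
// allocate and own a vector on the heap, and size() replaces NumMemRefs.
#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/TinyPtrVector.h"
#include "llvm/CodeGen/MachineMemOperand.h"

struct MemRefStorageTPV {
  llvm::TinyPtrVector<llvm::MachineMemOperand *> MemRefs;

  void addMemRef(llvm::MachineMemOperand *MMO) { MemRefs.push_back(MMO); }

  llvm::ArrayRef<llvm::MachineMemOperand *> memoperands() const {
    // TinyPtrVector converts implicitly to an ArrayRef over its elements.
    return MemRefs;
  }
};
```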