The motivation is twofold:
- Allow plugging in a different training-time evaluator, e.g. a TFLite-based one.
- Allow using TensorSpec for AOT, too, to support evolution: we start by extracting a superset of the features currently supported by a model. For the tensors the model does not support, we just return a valid, but unused, buffer. This makes using a 'smaller' model (fewer supported tensors) transparent to the compiler. The key is to dimension the buffer appropriately, and TensorSpec already models that info.
The only coupling was the reliance on a TF internal API for
getting the element size, but for the types we are interested in,
sizeof is sufficient.
A subsequent change will move TensorSpec into its own module.