I have implemented llvm::StaticVector, a fixed-capacity vector class. Think llvm::SmallVector without the ability to grow beyond its inline storage capacity. llvm::StaticVector has lower overhead than an llvm::SmallVector with similar inline storage capacity:
Class storage savings:
- There is no data pointer; only the inline storage.
- The Size member's type can be scaled down depending on the stated capacity, saving a few bytes. For example: sizeof(StaticVector<char, 200>) == 201, since a uint8_t is sufficient to track the maximum number of elements.
Host generated code size savings:
- Heap-growth code is never generated, eliminating both control flow and code size in the common member functions.
If the user knows the required capacity at compile-time, then a StaticVector is a more efficient choice than the jack-of-all-trades SmallVector. Compared to a std::array, there are nice properties to exploit as well. For example, no elements are constructed when an empty StaticVector is instantiated, whereas a std::array constructs all of its elements when instantiated.
I'm really not convinced that this is a good use of this data structure: the invariant that makes 64 values enough is non-trivial and hard to validate.
The code above is:
It isn't clear from there where the 64 comes from, or what guarantee we have that it'll "always" be enough.
More importantly: I'm not convinced that this is the kind of situation where we would want to guarantee "by construction" that we never exceed the capacity, or whether it isn't best to continue to "just" have SmallVector as an optimization, without changing the overall "behavior contract" that is usual with vectors.
On the other hand, I find the YAML example more convincing.