Consider std::vector and reserve. If you reserve enough space, then push_backs will be efficient (no new allocations, no moving of existing objects), but even a push_back on a "full" vector is largely a non-event: to users outside the vector it is just an unusually slow operation (though iterators are now invalidated). I suppose it's mildly useful to force the user to manage these expansion events, but it seems like having an automatic transmission with a clutch thrown in, while keeping the torque converter's inefficiencies. With a compile-time fixed-length queue, our space is allocated the easy way, as part of the object itself.
Originally posted by 7stud
I don't know anything about optimization, but it seems reasonable to me that if you allocate all the memory up front for a fixed size queue, it could be more efficient than allocating memory object by object.
A compile-time fixed-size queue is useful because it doesn't need to use the heap at all, and we give the compiler additional freedom to make things faster still.
That seems to contradict your first concern that it doesn't make sense to start with an infinite queue and then optionally limit its size. The motivation would be efficiency: it's more efficient to have a fixed queue.