well, even though theoretically std::list should be faster than std::vector, in practice that theory breaks down.
std::vector is contiguous, which gives it the following advantages over std::list:
1) you use one packed memory block instead of many small pieces - the heap doesn't get fragmented the way std::list fragments it
2) there are no pointers involved, so each object stays as lean as the sum of its members - smaller size = more memory available
3) because the vector is packed, it works much, much better with the cache than std::list; the number of cache hits with std::vector is significantly higher than with std::list.
de facto, you will probably use std::vector in most cases.
but as for the original question - use whatever fits the job. I use std::vector, std::map, std::unordered_map and std::set in roughly equal amounts in my code. Do you use more cups than glasses? The question is pointless.
EDIT:
I will admit that I don't use tuples much. The compile-time constraint is too annoying compared to simply packing the members into one class.
Last edited by Dave11; 10-11-2015 at 03:11 AM.
Incidentally, a former lecturer of mine once recommended std::list over std::vector for a default container for general purpose lists in his website on competitive programming (he taught, and perhaps still teaches, a module dedicated to the topic), on the basis that insertion takes constant time once you have an iterator to the insertion point, whereas std::vector only takes amortised constant time for appending. I never got round to asking him if he really had evidence that this mattered more than other pros of using std::vector for the various competitive programming problems that he was concerned about.
Look up a C++ Reference and learn How To Ask Questions The Smart Way
Originally Posted by Bjarne Stroustrup (2000-10-14)
well, yes, academically speaking.
std::vector has some optimization tricks deep inside, like expanding the capacity by 1.5* when reallocation happens, and many more.
and also, a seasoned C++ developer will reserve the memory if he knows even remotely what the needed size is, and that REALLY makes everything faster.
overall, the cons are far fewer than the pros, at least in most cases.
EDIT: that having been said, proper benchmarking is the right thing to do
The "optimization tricks deep inside" are necessary to meet the standard's requirement of amortised constant time, though the exact factor is implementation defined, i.e., it need not be 1.5, and it is possible that the implementation might not do this if the container is sufficiently small or sufficiently large (i.e., expanding by a fixed amount might be better, as expanding by a factor might be too small or consume too much potentially unused space).
Originally Posted by Dave11
Stroustrup recommends against using this optimisation unnecessarily:
Originally Posted by Dave11
Originally Posted by Bjarne Stroustrup
I wrote the * on top of the 1.5 because I wanted to add a comment ("on MSVC++"), but I forgot.
and about reserve - I'm talking about extremely big vector sizes, like >1000 elements, for example.
I think all these implementation details of current machines are ruining the pure abstraction of list vs vector.
Originally Posted by Dave11
I'm talking about extremely big vector sizes, like >1000 elements, for example.
How about multi-dimensional vectors? More to the point: does it (and how so) affect the compiler if you have your extremely large vector of 1000 elements, vs. a two-dimensional vector of 100x100, or even a three-dimensional vector of 333x3 of, say, type string? To me, 1000 elements is 1000 elements... how would this be seen by the compiler?
Isn't most of a vector runtime dependent? I'm not sure how much the compiler cares.
Originally Posted by Ghostrider
Recall that a "multi dimensional" vector isn't really one: it is just a vector of vectors, so if you consider just one level, it is just a vector of 100 elements such that each element happens to be a vector of 100 elements. So, at the level of the standard library implementation, there is no difference: 1000 elements is indeed just 1000 elements.
That said, 1000 elements does not sound like "extremely big" to me, or even "big", though of course if we're talking about how much storage is used rather than the number of elements, it could be "extremely big" if the stored objects each require a considerable amount of memory. I think that this is what Dave11 is getting at, i.e., if the objects are expensive to copy, then calling reserve() becomes more important because it avoids that expense. However, Stroustrup's observation could well apply even so: after all, we say that push_back on a std::vector runs in amortised constant time precisely because, in the long run, the cost of the occasional copying (and with C++11 onwards, moving) is amortised over the repeated appending of many elements.
This is one point where I disagree. Even though it might not matter that much performance-wise, we still have devices that run on battery, and the world is becoming increasingly mobile. On those devices, performance matters because the faster you get work done, the faster the processor can go back to idle, which translates to longer battery life. For such a simple optimization, I'd argue that you might just as well do it. Just as you should worry about picking the right algorithm for your program, you should take the time to apply simple optimizations that don't make your program harder to read, debug, or maintain.
Last edited by King Mir; 10-11-2015 at 06:29 PM.
It is too clear and so it is hard to see.
A dunce once searched for fire with a lighted lantern.
Had he known what fire was,
He could have cooked his rice much sooner.
@ the Gurus...
I saw this video on vectors vs lists https://www.youtube.com/watch?v=YQs6IC-vgmo
Do you guys agree or disagree?
There's not much to disagree about, but I'm not sure that the point he makes with the "true OO" vs. C++ diagram is realistic. For one thing, a language that works with references the way he describes is garbage collected, and the collector could decide that the objects in the container will all die at the same time. It would then make sense for the contained objects to be allocated near each other, mitigating the risk of a cache miss. I would guess that the collectors used in many languages work this way.
Last edited by whiteflags; 10-18-2015 at 10:38 PM.