I'm currently writing a C++ application that uses std::vector.
I know that premature optimization is the root of all evil.
But I really can't help being curious.
I'm appending parts of other vectors onto one destination vector.
Let's say that destination vector has a fixed size of 300 that never changes.
Since I always append to the end of the vector, is it faster to do:
a.reserve(300);
a.insert(a.end(), b.begin(), b.end());
or would it be faster to loop through the vector I want to append and add each item individually (while still reserving beforehand) with push_back or emplace_back? (I'm unsure which of those two is faster.)
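For concreteness, the loop version I have in mind looks roughly like this (append_loop is just a placeholder name, and int is only an example element type):

#include <vector>

// Placeholder sketch of the "loop" option: append_loop is just an
// illustrative name, and int is only an example element type.
void append_loop(std::vector<int>& a, const std::vector<int>& b) {
    a.reserve(300);          // same up-front reserve as in the first version
    for (const auto& x : b)
        a.push_back(x);      // or a.emplace_back(x)
}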
Can anyone help me with this?
Here's a general principle: when a library provides both do_x_once and do_x_in_batch, then the latter should be at least as fast as calling do_x_once in a simple loop. If it isn't, then the library is very badly implemented, since a simple loop is enough to get a faster version. Often, such batch functions/methods can perform additional optimizations because they have knowledge of data structure internals.
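To illustrate the principle (this is only a sketch, not any real library's code), a batch operation can always fall back to calling the single-element operation in a loop, so a well-implemented batch version can only be as fast or faster:

#include <vector>

// Sketch only: do_x_once and do_x_in_batch are the placeholder names from the
// principle above, not functions of any real library.
void do_x_once(std::vector<int>& v, int value) {
    v.push_back(value);
}

// The trivial fallback: call the single-element version in a simple loop.
// A well-implemented batch function must be at least this fast, and can often
// do better because it knows the container's internals.
void do_x_in_batch(std::vector<int>& v, const int* first, const int* last) {
    for (; first != last; ++first)
        do_x_once(v, *first);
}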
So, insert should be at least as fast as push_back in a loop. In this particular case, a smart implementation of insert can do a single reserve for all the elements you want to insert. push_back would have to check the vector's capacity every time. Don't try to outsmart the library :)
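As a rough illustration of that last point (a sketch only, assuming the iterators allow computing the range's length cheaply; this is not how any particular standard library actually implements insert):

#include <iterator>
#include <vector>

// Illustrative sketch: because a range insert receives both ends of the range,
// it can compute the number of incoming elements and grow the buffer once.
// A per-element push_back loop, by contrast, has to check capacity on every call.
template <typename T, typename It>
void append_range(std::vector<T>& v, It first, It last) {
    v.reserve(v.size() + std::distance(first, last)); // one capacity check/growth
    for (; first != last; ++first)
        v.push_back(*first); // guaranteed not to reallocate now
                             // (a real insert can even skip the per-call check)
}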