This answer gives a nice high-level overview of short string optimization (SSO). However, I would like to know in more detail how it works in practice, specifically in the libc++ implementation:
How short does the string have to be in order to qualify for SSO? Does this depend on the target architecture?
How does the implementation distinguish between short and long strings when accessing the string data? Is it as simple as `m_size <= 16`, or is it a flag that is part of some other member variable? (I imagine that `m_size`, or part of it, might also be used to store string data.)
I asked this question specifically for libc++ because I know that it uses SSO, this is even mentioned on the libc++ home page.
Here are some observations after looking at the source:
libc++ can be compiled with two slightly different memory layouts for the string class; this is governed by the `_LIBCPP_ALTERNATE_STRING_LAYOUT` flag. Both layouts also distinguish between little-endian and big-endian machines, which leaves us with a total of four different variants. I will assume the "normal" layout and little-endian in what follows.
Assuming further that `size_type` is 4 bytes and that `value_type` is 1 byte, this is what the first 4 bytes of a string would look like in memory:
```
// short string: (s)ize and 3 bytes of char (d)ata
sssssss0;dddddddd;dddddddd;dddddddd
       ^- is_long = 0

// long string: (c)apacity
ccccccc1;cccccccc;cccccccc;cccccccc
       ^- is_long = 1
```
Since the size of the short string is in the upper 7 bits, it needs to be shifted when accessing it:
```cpp
size_type __get_short_size() const {
    return __r_.first().__s.__size_ >> 1;
}
```
Similarly, the getter and setter for the capacity of a long string use `__long_mask` to work around the `is_long` bit.
I am still looking for an answer to my first question, i.e. what value would `__min_cap`, the capacity of short strings, take for different architectures?
Other standard library implementations
This answer gives a nice overview of `std::string` memory layouts in other standard library implementations.
The libc++ `basic_string` is designed to have a `sizeof` of 3 words on all architectures, where `sizeof(word) == sizeof(void*)`. You have correctly dissected the long/short flag and the size field in the short form.
> what value would `__min_cap`, the capacity of short strings, take for different architectures?
In the short form, there are 3 words to work with: 1 bit goes to the long/short flag, 7 bits go to the size, and, assuming `char`, 1 byte goes to the trailing null (libc++ will always store a trailing null behind the data). This leaves 3 words minus 2 bytes to store a short string (i.e. the largest `capacity()` without an allocation).
On a 32 bit machine, 10 chars will fit in the short string. sizeof(string) is 12.
On a 64 bit machine, 22 chars will fit in the short string. sizeof(string) is 24.
A major design goal was to minimize `sizeof(string)` while making the internal buffer as large as possible. The rationale is to speed up move construction and move assignment: the larger the `sizeof`, the more words you have to move during a move construction or move assignment.
The long form needs a minimum of 3 words to store the data pointer, size and capacity. Therefore I restricted the short form to those same 3 words. It has been suggested that a 4 word sizeof might have better performance. I have not tested that design choice.
There is a configuration flag called `_LIBCPP_ABI_ALTERNATE_STRING_LAYOUT` which rearranges the data members such that the "long layout" changes from:
```cpp
struct __long
{
    size_type __cap_;
    size_type __size_;
    pointer   __data_;
};
```
to:
```cpp
struct __long
{
    pointer   __data_;
    size_type __size_;
    size_type __cap_;
};
```
The motivation for this change is the belief that putting `__data_` first will have some performance advantages due to better alignment. An attempt was made to measure the performance advantages, and it was difficult to measure. It won't make the performance worse, and it may make it slightly better.

The flag should be used with care. It is a different ABI, and if it is accidentally mixed with a libc++ `std::string` compiled with a different setting of `_LIBCPP_ABI_ALTERNATE_STRING_LAYOUT`, it will create run time errors.
I recommend this flag only be changed by a vendor of libc++.