Is the u8 string literal necessary in C++11

Lukas Schmelzeisen · Nov 18, 2012 · Viewed 27.1k times

From Wikipedia:

For the purpose of enhancing support for Unicode in C++ compilers, the definition of the type char has been modified to be at least the size necessary to store an eight-bit coding of UTF-8.

I'm wondering what exactly this means for writing portable applications. Is there any difference between writing this

const char str[] = "Test String";

or this?

const char str[] = u8"Test String";

Is there any reason not to use the latter for every string literal in your code?

What happens when there are non-ASCII characters inside the test string?

Answer

Kerrek SB · Nov 18, 2012

The encoding of "Test String" is the implementation-defined system encoding (the narrow, possibly multibyte one).

The encoding of u8"Test String" is always UTF-8.
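A minimal sketch to make that difference visible (assuming a C++11 compiler; note that since C++20 a u8 literal has type const char8_t[], so this is specific to C++11/14/17). Dumping the bytes of both literals shows the guarantee: the u8 line always prints C3 A9 (the UTF-8 encoding of é), while the narrow line depends on the execution character set, e.g. it would typically print E9 on a Windows code page 1252 build.

#include <cstdio>

int main() {
    const char narrow[] = "\u00E9";   // é: bytes depend on the execution character set
    const char utf8[]   = u8"\u00E9"; // é: guaranteed to be the UTF-8 bytes C3 A9

    std::printf("narrow: ");
    for (const char* p = narrow; *p; ++p)
        std::printf("%02X ", static_cast<unsigned>(static_cast<unsigned char>(*p)));

    std::printf("\nu8:     ");
    for (const char* p = utf8; *p; ++p)
        std::printf("%02X ", static_cast<unsigned>(static_cast<unsigned char>(*p)));
    std::printf("\n");
}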

The examples aren't terribly telling. If you include some Unicode literals (such as \U0010FFFF) in the string, the u8 version would always contain those (encoded as UTF-8), but whether they could be expressed in the system-encoded string at all, and if yes what their value would be, is implementation-defined.
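For instance (a sketch, again assuming C++11): with a character outside every common narrow encoding, the u8 form still has a well-defined value, while the plain form may not even compile, or may produce some implementation-defined substitute.

#include <cstdio>

int main() {
    // U+10FFFF is not representable in any single-byte execution character
    // set, so a plain "\U0010FFFF" literal is implementation-defined. The
    // u8 version is always exactly these four UTF-8 bytes:
    const char s[] = u8"\U0010FFFF";
    for (const char* p = s; *p; ++p)
        std::printf("%02X ", static_cast<unsigned>(static_cast<unsigned char>(*p)));
    std::printf("\n");  // prints: F4 8F BF BF
}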

If it helps, imagine you're authoring the source code on an EBCDIC machine. Then the literal "Test String" is always EBCDIC-encoded in the source file itself, but the u8-initialized array contains UTF-8-encoded values, whereas the first array contains EBCDIC-encoded values.