In Unicode encodings such as UTF-16LE or UTF-8, a character may occupy 2 or 3 bytes. Yet many Unicode applications take no care of the display width of characters and treat them all like Latin letters. For example, an 80-column line should hold 40 Chinese characters or 80 Latin letters, but most applications (Eclipse, Notepad++, and every well-known text editor; I doubt there is a good exception) simply count each Chinese character as 1 column, the same as a Latin letter. This makes the resulting format ugly and misaligned.
For example, with a tab width of 8, counting every Unicode character as 1 column of display width gives this ugly result:
apple   10
banana  7
苹果      6
猕猴桃     31
pear    16
However, the expected format (counting each Chinese character as 2 columns) is:
apple   10
banana  7
苹果    6
猕猴桃  31
pear    16
This incorrect calculation of character display width makes these editors nearly useless for tab alignment, line wrapping, and paragraph reformatting.
The width of a character may vary between fonts, but for any fixed-size terminal font, a Chinese character is always double width. That is to say, regardless of font, each Chinese character is best displayed 2 columns wide.
One possible solution: I can get the correct width by converting the text to GB2312, since in GB2312 every Chinese character takes 2 bytes. However, some Unicode characters do not exist in the GB2312 charset (or even the GBK charset), and in general it is not a good idea to compute display width from the encoded size in bytes.
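A minimal sketch of that byte-count heuristic in Java, assuming the JRE ships the GB2312 charset; note that characters missing from GB2312 are silently replaced by a 1-byte ?, which is exactly where the approach falls apart:

    import java.nio.charset.Charset;

    public final class Gb2312Width {
        // Heuristic: GB2312 encodes ASCII in 1 byte and each Chinese
        // character in 2 bytes, so the encoded byte length happens to
        // equal the display width -- for characters that exist in GB2312.
        static int displayWidth(String s) {
            // Unmappable characters become '?' (1 byte), silently
            // yielding a wrong width: the flaw described above.
            return s.getBytes(Charset.forName("GB2312")).length;
        }

        public static void main(String[] args) {
            System.out.println(displayWidth("apple")); // 5
            System.out.println(displayWidth("苹果"));   // 4
        }
    }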
Simply counting every Unicode character in the range U+0080..U+FFFF as 2 columns is not correct either, because many single-width characters are scattered throughout that range.
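To make that concrete, the naive rule being rejected would look like this sketch; it misclassifies, for example, Greek and Cyrillic letters, which occupy a single column on terminals:

    // Naive and wrong: treats every non-ASCII BMP code point as double width.
    static int naiveWidth(int codePoint) {
        return (codePoint >= 0x80 && codePoint <= 0xFFFF) ? 2 : 1;
    }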
There is also difficulty in calculating the display width of Arabic and Korean text, because a word or character may be constructed from an arbitrary number of Unicode code points.
So the display width of a Unicode code point may not even be an integer. I think that is acceptable; the values can be rounded to integers in practice, which is still better than nothing.
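One way to cope with such multi-code-point sequences is to measure width per grapheme cluster (user-perceived character) rather than per code point. A rough sketch using java.text.BreakIterator, where widthOfCluster is a hypothetical lookup into some width table:

    import java.text.BreakIterator;

    // Sums widths over grapheme clusters so that sequences such as
    // Korean Jamo, or an Arabic letter plus combining marks, are
    // measured as one unit instead of per code point.
    static int displayWidth(String s) {
        BreakIterator it = BreakIterator.getCharacterInstance();
        it.setText(s);
        int width = 0;
        int start = it.first();
        for (int end = it.next(); end != BreakIterator.DONE; start = end, end = it.next()) {
            width += widthOfCluster(s.substring(start, end)); // hypothetical width lookup
        }
        return width;
    }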
So: is there any attribute in the Unicode standard for the preferred display width of a character? Or any Java library function to calculate the display width?
Sounds like you're looking for something like wcwidth and wcswidth, defined in IEEE Std 1003.1-2001, but removed from ISO C:
The wcwidth() function shall determine the number of column positions required for the wide character wc. The wcwidth() function shall either return 0 (if wc is a null wide-character code), or return the number of column positions to be occupied by the wide-character code wc, or return -1 (if wc does not correspond to a printable wide-character code).
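The standard Java library has no direct equivalent, but the Unicode property underlying such column counts is East_Asian_Width (UAX #11), which ICU4J exposes. Here is a rough sketch of the same contract, assuming ICU4J is on the classpath (the class name and exact classification policy are my own, not an established API):

    import com.ibm.icu.lang.UCharacter;
    import com.ibm.icu.lang.UCharacterCategory;
    import com.ibm.icu.lang.UProperty;

    public final class JavaWcwidth {
        // Rough analogue of wcwidth(): -1 for control characters, 0 for
        // combining marks, 2 for Wide/Fullwidth East Asian characters,
        // and 1 for everything else. Ambiguous-width characters count
        // as 1 here, although legacy CJK terminals often draw them as 2.
        static int wcwidth(int codePoint) {
            if (codePoint == 0) return 0;
            int type = UCharacter.getType(codePoint);
            if (type == UCharacterCategory.CONTROL) return -1;
            if (type == UCharacterCategory.NON_SPACING_MARK
                    || type == UCharacterCategory.ENCLOSING_MARK) return 0;
            int ea = UCharacter.getIntPropertyValue(codePoint, UProperty.EAST_ASIAN_WIDTH);
            return (ea == UCharacter.EastAsianWidth.WIDE
                    || ea == UCharacter.EastAsianWidth.FULLWIDTH) ? 2 : 1;
        }
    }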
Markus Kuhn wrote an open source version, wcwidth.c, based on Unicode 5.0. It includes a description of the problem, and an acknowledgement of the lack of standards in the area:
In fixed-width output devices, Latin characters all occupy a single "cell" position of equal width, whereas ideographic CJK characters occupy two such cells. Interoperability between terminal-line applications and (teletype-style) character terminals using the UTF-8 encoding requires agreement on which character should advance the cursor by how many cell positions. No established formal standards exist at present on which Unicode character shall occupy how many cell positions on character terminals. These routines are a first attempt of defining such behavior based on simple rules applied to data provided by the Unicode Consortium. [...]
It implements the following rules: