views:

73

answers:

2

In the various Unicode encodings, such as UTF-16LE or UTF-8, a character may occupy 2 or 3 bytes. Yet many Unicode applications take no account of the display width of characters and treat them all like Latin letters. For example, a line of 80-column text should hold 40 Chinese characters or 80 Latin letters, but most applications (Eclipse, Notepad++, and every well-known text editor; I'd be glad to hear of a good exception) count each Chinese character as having the same width as a Latin letter. This inevitably makes the output ugly and misaligned.

For example, with a tab width of 8, counting every Unicode character as 1 display width produces the following ugly, misaligned result:

apple   10
banana  7
苹果      6
猕猴桃     31
pear    16

However, the expected format, counting each Chinese character as 2 columns wide, is:

apple   10
banana  7
苹果    6
猕猴桃  31
pear    16

This improper calculation of character display width makes these editors all but useless for tab alignment, line wrapping, and paragraph reformatting.

Admittedly the width of a character may vary between fonts, but in any fixed-size terminal font a Chinese character is always double width. That is to say, regardless of font, each Chinese character is preferably displayed at a width of 2 columns.

One possible solution: I can get the correct width by converting the text to the GB2312 encoding, in which each Chinese character takes 2 bytes. However, some Unicode characters don't exist in the GB2312 charset (or even in GBK), and in general it is not a good idea to derive the display width from the encoded size in bytes.
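A minimal sketch of this byte-counting trick in Java, assuming the runtime ships the GB2312 charset (the class name is just for illustration):

    import java.nio.charset.Charset;

    public class Gb2312Width {
        public static void main(String[] args) {
            Charset gb2312 = Charset.forName("GB2312"); // assumed available in this JRE

            // For characters GB2312 covers, the byte length happens to match
            // the display width in a fixed-size terminal font.
            System.out.println("apple".getBytes(gb2312).length); // 5
            System.out.println("苹果".getBytes(gb2312).length);  // 4

            // But unmappable code points are silently replaced by '?' (1 byte),
            // so the computed width is wrong for anything outside the charset.
            System.out.println("€".getBytes(gb2312).length);     // 1
        }
    }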

Simply counting every character in the range \u0080..\uFFFF as 2 columns wide is not correct either, because many 1-width characters are scattered throughout that range.

It is also difficult to calculate the display width of Arabic and Korean letters, because a word or character there is constructed from an arbitrary number of Unicode code points.

So the display width of a Unicode code point may not be an integer; I consider that acceptable, since it can be rounded to an integer in practice, which is still better than nothing.

So, does the Unicode standard define any attribute related to the preferred display width of a character? Or is there any Java library function that calculates the display width?

+1  A: 

You are confusing code points, graphemes, and encodings.

The encoding is how code points are converted into an octet stream for storage, transmission, or processing. Both UTF-8 and UTF-16 are variable-width encodings, with different code points needing different numbers of octets (anything from 1 to, IIRC, 6 for UTF-8, and either 2 or 4 for UTF-16).
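A quick Java sketch of this (U+10348, GOTHIC LETTER HWAIR, is chosen only because it lies outside the BMP):

    import java.nio.charset.StandardCharsets;

    // The same code point needs different numbers of octets in each encoding:
    System.out.println("a".getBytes(StandardCharsets.UTF_8).length);               // 1
    System.out.println("苹".getBytes(StandardCharsets.UTF_8).length);              // 3
    System.out.println("\uD800\uDF48".getBytes(StandardCharsets.UTF_8).length);    // 4 (U+10348)
    System.out.println("a".getBytes(StandardCharsets.UTF_16LE).length);            // 2
    System.out.println("\uD800\uDF48".getBytes(StandardCharsets.UTF_16LE).length); // 4 (surrogate pair)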

Graphemes are "what we see as a character"; these are what get displayed. Sometimes one code point (e.g. LATIN LOWER CASE A) makes one grapheme, but in other cases multiple code points are needed (e.g. LATIN LOWER CASE A, COMBINING ACUTE and COMBINING UNDERSCORE to get a lower case a with acute and underscore, as used in Kwakwala). In some cases there is more than one combination of code points that creates the same grapheme (e.g. LATIN LOWER CASE A WITH ACUTE plus COMBINING UNDERSCORE); picking a standard representation is "normalisation".

That is, the encoded length of a single grapheme depends on both the encoding and the normalisation.
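For example, in Java (a sketch using the standard java.text.Normalizer):

    import java.nio.charset.StandardCharsets;
    import java.text.Normalizer;

    String decomposed = "a\u0301"; // LATIN SMALL LETTER A + COMBINING ACUTE ACCENT
    String composed = Normalizer.normalize(decomposed, Normalizer.Form.NFC); // "\u00E1"

    // One grapheme, two encoded lengths, depending on normalisation:
    System.out.println(decomposed.getBytes(StandardCharsets.UTF_8).length); // 3
    System.out.println(composed.getBytes(StandardCharsets.UTF_8).length);   // 2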

The display width of the grapheme will depend on the typeface, style, and size, independently of the encoded length.

For more information, see Wikipedia on Unicode and the Unicode home page. There are also some excellent books, perhaps most notably "Fonts & Encodings" by Yannis Haralambous (O'Reilly).

Richard
+1. Just a minor remark: a valid UTF-8-encoded code point takes up to 4 octets.
Nemanja Trifunovic
@Nemanja That depends on whether you mean the original definition (for the original 31-bit Universal Character Set) or the refined RFC 3629/Unicode definition for 21-bit Unicode. The latter is indeed limited to 4 octets, as that is all that is needed for 21 bits.
Richard
You are right, but I'm not confused, even though I didn't use the terminology correctly; you didn't get my point. I mean fixed-size terminal fonts here, and my question is about the preferred display width, not the precise display width. There is no doubt that, for example, all CJK characters take up 2 columns; my question is whether Unicode provides such an attribute, so as to handle Unicode in a terminal window more correctly. Some characters (like combining characters) are constructed from several code points; in that case, I'd like to know whether there is a defined function to calculate the preferred display width of a string.
谢继雷
@谢继雷 I think you are mixing things up. Unicode defines very little about the display of a glyph; really you need to consider the typeface in use. In a fixed-width typeface one expects all glyphs to have the same width, but this breaks down with CJK glyphs, which really need more space than Latin glyphs. However, if you look at your platform documentation, there should be API functions to calculate the display space needed for a string in a given size and style of a given typeface. E.g. in the Windows API one is `GetTextExtentPoint32`: http://msdn.microsoft.com/en-us/library/dd144938(VS.85).aspx
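A rough Java analogue of that call, sketched with AWT's font measurement:

    import java.awt.Font;
    import java.awt.font.FontRenderContext;
    import java.awt.geom.Rectangle2D;

    public class MeasureText {
        public static void main(String[] args) {
            Font font = new Font(Font.MONOSPACED, Font.PLAIN, 12);
            // Identity transform; the antialiasing flags barely affect advance width.
            FontRenderContext frc = new FontRenderContext(null, true, true);
            Rectangle2D bounds = font.getStringBounds("苹果 10", frc);
            System.out.println(bounds.getWidth()); // width in pixels for this font and size
        }
    }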
Richard
+1  A: 

The Unicode property reflecting this concept is East_Asian_Width. It's not really reliable as a visual width in the context of general Unicode rendering, as non-Asian characters, combining characters etc. will fail to line up even in a monospaced font. (Your example certainly doesn't render lined-up for me.)

Java does not have the built-in ability to read this property for characters (though Android's extension does). You can get it from ICU4J if you really need it.
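For instance, a minimal sketch with ICU4J on the classpath, counting Wide and Fullwidth code points as 2 columns and everything else as 1 (and ignoring combining marks and Ambiguous-width characters):

    import com.ibm.icu.lang.UCharacter;
    import com.ibm.icu.lang.UProperty;

    public class EastAsianWidthDemo {
        // Sum a crude per-code-point column width over the whole string.
        static int displayWidth(String s) {
            int width = 0;
            for (int i = 0; i < s.length(); ) {
                int cp = s.codePointAt(i);
                int ea = UCharacter.getIntPropertyValue(cp, UProperty.EAST_ASIAN_WIDTH);
                width += (ea == UCharacter.EastAsianWidth.WIDE
                       || ea == UCharacter.EastAsianWidth.FULLWIDTH) ? 2 : 1;
                i += Character.charCount(cp);
            }
            return width;
        }

        public static void main(String[] args) {
            System.out.println(displayWidth("apple")); // 5
            System.out.println(displayWidth("苹果"));  // 4
        }
    }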

bobince
This is exactly what I want, and this property file is helpful: http://www.unicode.org/Public/UNIDATA/EastAsianWidth.txt It also shows that the varying widths are scattered irregularly across the whole range.
谢继雷