views: 959
answers: 9

Why is wchar_t needed? How is it superior to short (or __int16 or whatever)?

(If it matters: I live in Windows world. I don't know what Linux does to support Unicode.)

+10  A: 

See Wikipedia.

Basically, it's a portable type for "text" in the current locale (i.e., including non-ASCII characters such as umlauts). It predates Unicode and doesn't solve many problems, so today it mostly exists for backward compatibility. Don't use it unless you have to.

Aaron Digulla
Amen. Dump the ANSI locale stuff entirely, in fact. Treat all text as UTF-8 (converting on input if you have to) and use the standard C library functions. That's the only sane way to do I18N in C.
Andy Ross
Unfortunately, that won't always work. Some implementations of the C standard library assume at most 2 bytes per character for multibyte strings and don't support a UTF-8 locale. Search Michael Kaplan's blog for more info.
Nemanja Trifunovic
Nemanja, Michael Kaplan is a prolific writer. Can you please be a little more specific about what to search for?
Rob Kennedy
This is rather wrong, but I can't nail it down precisely. Two simple counter-examples show a lot. On Windows, the universal encoding for wchar_t (aka WCHAR) is UTF-16, which is (A) not locale-specific and (B) definitely Unicode-based. On Mac OS X, wchar_t simply holds the Unicode code point. So it's definitely not just for backwards compatibility; it's how the two most common desktop OSes support Unicode.
MSalters
+5  A: 

It is usually considered a good thing to give things such as data types meaningful names.

Which is better, char or int8? I think this:

char name[] = "Bob";

is much easier to understand than this:

int8 name[] = "Bob";

It's the same thing with wchar_t and int16.
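
A small sketch of the same point applied to wide text (the variable names here are purely illustrative): both declarations may occupy the same storage on a platform where wchar_t is 16 bits, but only the first one tells the reader it holds text.

int main() {
    // Possibly the same underlying storage on a 16-bit-wchar_t platform,
    // but only the first declaration says "this is text".
    wchar_t greeting[] = L"Bob";
    short   codes[]    = { 66, 111, 98, 0 };
    (void)greeting; (void)codes;
}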

Thomas Padron-McCarthy
Nice examples to make it really clear.
rstevens
wchar_t is not always the same size as int16, however. It is a type that varies in width from platform to platform, unfortunately...
fbrereto
+2  A: 

It is "superior" in a sense that it allows you to separate contexts: you use wchar_t in character contexts (like strings), and you use short in numerical contexts (numbers). Now the compiler can perform type checking to help you catch situations where you mistakenly mix one with another, like pass an abstract non-string array of shorts to a string processing function.

As a side note (since this was a C question), in C++ wchar_t allows you to overload functions independently from short, i.e. again provide independent overloads that work with strings and numbers (for example).
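
For instance, a minimal C++ sketch of that kind of overload separation (the function name describe is just for illustration):

#include <iostream>

// Because wchar_t is a distinct type in C++, these are two separate overloads.
void describe(short)   { std::cout << "a number\n"; }
void describe(wchar_t) { std::cout << "a wide character\n"; }

int main() {
    describe(static_cast<short>(65)); // numeric context
    describe(L'A');                   // character context
}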

AndreyT
+1 for pointing out that `wchar_t` can be overloaded independently from short or int.
Michael Burr
+4  A: 

wchar_t is the primitive type for storing and processing the platform's Unicode characters. Its size is not always 16 bits. On Unix systems wchar_t is 32 bits (maybe Unix users are more likely to use the Klingon characters that the extra bits are used for :-).

This can pose problems when porting projects, especially if you interchange wchar_t and short, or wchar_t and Xerces' XMLCh.

Therefore, having wchar_t as a different type from short is very important for writing cross-platform code. Cleaning this up was one of the hardest parts of porting our application to Unix and then from VC6 to VC2005.
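
A quick way to see the difference when porting is simply to print the sizes; the values in the comment reflect typical platforms, not a guarantee.

#include <cstdio>

int main() {
    // Typically prints 2 on Windows and 4 on most Unix-like systems, so code
    // that treats wchar_t and short as interchangeable breaks when ported.
    std::printf("sizeof(wchar_t) = %u\n", static_cast<unsigned>(sizeof(wchar_t)));
    std::printf("sizeof(short)   = %u\n", static_cast<unsigned>(sizeof(short)));
}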

iain
As an aside, UNIX programs often skip `wchar_t`, representing text as UTF-8 much of the time :)
bdonlan
I know if I were redoing our app again, I would favor UTF-8 over UCS-2/UTF-16.
iain
+2  A: 

As I read the relevant standards, it seems like Microsoft fcked this one up badly.

My manpage for the POSIX <stddef.h> says that:

  • wchar_t: Integer type whose range of values can represent distinct wide-character codes for all members of the largest character set specified among the locales supported by the compilation environment: the null character has the code value 0 and each member of the portable character set has a code value equal to its value when used as the lone character in an integer character constant.

So, a 16-bit wchar_t is not enough if your platform supports Unicode. Each wchar_t is supposed to be a distinct value for a character. Therefore, wchar_t goes from being a useful way to work at the character level of text (after decoding from the locale's multibyte encoding, of course) to being completely useless on Windows platforms.
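
A small illustration of why that matters: a character outside the Basic Multilingual Plane, such as U+1D11E (MUSICAL SYMBOL G CLEF), needs two 16-bit wchar_t units on Windows (a UTF-16 surrogate pair) but fits in a single 32-bit wchar_t on typical Unix systems.

#include <iostream>

int main() {
    // U+1D11E lies outside the Basic Multilingual Plane.
    const wchar_t clef[] = L"\U0001D11E";
    // Element count including the terminating L'\0':
    // 3 where wchar_t is 16 bits (surrogate pair), 2 where it is 32 bits.
    std::wcout << sizeof(clef) / sizeof(clef[0]) << L'\n';
}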

gnud
I don't think that's a problem in Microsoft's implementation, but rather that the C++ spec doesn't really account for Unicode. What is a character set in Unicode? Does `wchar_t` have to be able to represent all Unicode code points, or just all code *units*? In the case of UTF-16, a code unit is a 16-bit integer, and all of these can be represented by Microsoft's `wchar_t`.
jalf
I think wide strings (`L"blah"`) are UTF-16 encoded on Windows. So it is able to represent full Unicode, but is a multi-byte encoding (at least for some of the Unicode characters). ICBWT.
sbi
If it's a multi-byte encoding, then its 'range of values' can't really hold distinct values for all members of the character set, can it?
gnud
@gnud: You're right, of course, Windows can only represent UCS-2 in `wchar_t` characters. I was thinking in terms of `wchar_t` strings, not `wchar_t` characters.
sbi
@jalf - the whole point of `wchar_t` is to decode multibyte encodings into a simple representation with one character in each array position. The largest character set specified on Windows is Unicode. UTF-16 is not a character set, it's an encoding of Unicode.
gnud
I think the reason they have a 16-bit `wchar_t` is that they used to do UCS-2 only in earlier versions of their OS.
sbi
It's not useless on Windows. It's useful for calling all those UTF-16-based WinAPI functions. But it is problematic that Windows doesn't have a "character" type that can *actually represent a character*. Until C++0x, anyway.
dan04
+7  A: 

Why is wchar_t needed? How is it superior to short (or __int16 or whatever)?

In the C++ world, wchar_t is its own type (in C it's a typedef), so you can overload functions based on it. For example, this makes it possible to output wide characters as characters rather than as their numerical value. In VC6, where wchar_t was just a typedef for unsigned short, this code

wchar_t wch = L'A';
std::wcout << wch;

would output 65 because

std::basic_ostream<wchar_t>::operator<<(unsigned short)

was invoked. In newer VC versions wchar_t is a distinct type, so

operator<<(std::basic_ostream<wchar_t>&, wchar_t)

is called, and that outputs A.
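
Put together as a complete program (a sketch assuming a compiler where wchar_t is a distinct built-in type, as in current MSVC, GCC, and Clang):

#include <iostream>

int main() {
    wchar_t wch = L'A';
    std::wcout << wch << L'\n';                     // prints: A
    std::wcout << static_cast<unsigned short>(wch)  // prints: 65 (the old
               << L'\n';                            // VC6-style behavior)
}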

sbi
BTW: This behavior can be disabled in the project settings in newer VCs (you shouldn't, but maybe it's needed for backwards compatibility).
rstevens
+5  A: 

The reason there's a wchar_t is pretty much the same reason there's a size_t or a time_t: it's an abstraction that indicates what a type is intended to represent and allows implementations to choose an underlying type that can represent it properly on a particular platform.

Note that wchar_t doesn't need to be a 16-bit type; there are platforms where it's a 32-bit type.
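
A short sketch of that naming idea side by side (purely illustrative):

#include <cstddef>  // std::size_t
#include <ctime>    // std::time_t, std::time

int main() {
    // Each name says what the value represents; the implementation picks
    // a suitable underlying integer type for the platform.
    std::size_t length = 3;                   // a size / element count
    std::time_t now    = std::time(nullptr);  // a point in time
    wchar_t     wide   = L'A';                // a wide character
    (void)length; (void)now; (void)wide;
}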

Michael Burr
+2  A: 

To add to Aaron's comment: in C++0x we are finally getting real Unicode character types, char16_t and char32_t, as well as Unicode string literals.
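
For reference, a minimal sketch of what those C++0x (now C++11) types and literals look like:

int main() {
    char16_t        c16 = u'A';           // a UTF-16 code unit
    char32_t        c32 = U'\U0001D11E';  // a full Unicode code point
    const char16_t* s16 = u"wide text";   // UTF-16 string literal
    const char32_t* s32 = U"wide text";   // UTF-32 string literal
    (void)c16; (void)c32; (void)s16; (void)s32;
}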

Nemanja Trifunovic
+1  A: 

wchar_t is a bit of a hangover from before Unicode standardisation. Unfortunately it's not very helpful, because the encoding is platform-specific (and on Solaris, locale-specific!), and the width is not specified. In addition, there are no guarantees that UTF-8/16/32 codecvt facets will be available, or indeed any guarantee of how you would access them. In general it's a bit of a nightmare for portable usage.

Apparently C++0x will have support for Unicode, but at the current rate of progress that may never happen...

Robert Tuck