In C++0x, char16_t and char32_t will be used to store UTF-16 and UTF-32, not wchar_t.
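For illustration, this is roughly what that looks like with the new u and U string literal prefixes (a compiler with C++0x support is assumed, and the variable names are made up):

    #include <string>

    // u"" yields char16_t (UTF-16) data, U"" yields char32_t (UTF-32) data,
    // while L"" stays wchar_t with an implementation-defined encoding.
    const char16_t* p16 = u"UTF-16 text";
    const char32_t* p32 = U"UTF-32 text";
    std::u16string  s16 = u"UTF-16 text";
    std::u32string  s32 = U"UTF-32 text";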
From the draft n2798:
22.2.1.4 Class template codecvt
2 The class codecvt<internT, externT, stateT> is for use when converting from one codeset to another, such as from wide characters to multibyte characters or between wide character encodings such as Unicode and EUC.

3 The specializations required in Table 76 (22.1.1.1.1) convert the implementation-defined native character set. codecvt<char, char, mbstate_t> implements a degenerate conversion; it does not convert at all. The specialization codecvt<char16_t, char, mbstate_t> converts between the UTF-16 and UTF-8 encoding schemes, and the specialization codecvt<char32_t, char, mbstate_t> converts between the UTF-32 and UTF-8 encoding schemes. codecvt<wchar_t, char, mbstate_t> converts between the native character sets for narrow and wide characters. Specializations on mbstate_t perform conversion between encodings known to the library implementor. Other encodings can be converted by specializing on a user-defined stateT type. The stateT object can contain any state that is useful to communicate to or from the specialized do_in or do_out members.
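To make the char16_t specialization above concrete, here is a rough sketch of how it could be used to decode UTF-8 into UTF-16 once a library actually ships that facet; error handling is omitted and the helper name is made up, so treat it as illustrative rather than production code:

    #include <cwchar>   // std::mbstate_t
    #include <locale>
    #include <string>

    // Sketch: UTF-8 bytes -> UTF-16 via codecvt<char16_t, char, mbstate_t>.
    std::u16string utf8_to_utf16(const std::string& utf8)
    {
        typedef std::codecvt<char16_t, char, std::mbstate_t> facet_t;
        const facet_t& facet = std::use_facet<facet_t>(std::locale());

        std::u16string utf16(utf8.size(), u'\0');   // worst case: one code unit per byte
        std::mbstate_t state = std::mbstate_t();
        const char* from_next = 0;
        char16_t* to_next = 0;

        // in() converts from the external encoding (UTF-8) to the internal one (UTF-16);
        // the return value (ok/partial/error) is ignored here for brevity.
        facet.in(state,
                 utf8.data(), utf8.data() + utf8.size(), from_next,
                 &utf16[0], &utf16[0] + utf16.size(), to_next);

        utf16.resize(to_next - &utf16[0]);
        return utf16;
    }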
The thing about wchar_t is that it gives you no guarantee about the encoding used. It is simply a type that can hold any character of the implementation's extended character set. Period. If you are going to write software now, you have to live with that compromise. C++0x-compliant compilers are still a long way off, though you can always give the VC2010 CTP and g++ a try for what it is worth. Moreover, wchar_t has a different size on different platforms, which is another thing to watch out for (2 bytes on VS/Windows, 4 bytes on GCC/Mac and so on), and options like -fshort-wchar for GCC complicate the issue further.
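You can see the size difference for yourself with a snippet like this (the output depends on the platform and compiler flags):

    #include <cstdio>

    int main()
    {
        // Typically prints 2 on VS/Windows and 4 on GCC/Mac or Linux
        // (2 again if GCC is invoked with -fshort-wchar).
        std::printf("sizeof(wchar_t)  = %u\n", static_cast<unsigned>(sizeof(wchar_t)));
        std::printf("sizeof(char16_t) = %u\n", static_cast<unsigned>(sizeof(char16_t)));
        std::printf("sizeof(char32_t) = %u\n", static_cast<unsigned>(sizeof(char32_t)));
        return 0;
    }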
The best solution, therefore, is to use an existing library; chasing Unicode bugs around isn't the best possible use of your time and effort. I'd suggest you take a look at:
More on C++0x Unicode string literals here