The history of encoding schemes, multiple operating systems, and endianness has led to a mess when it comes to encoding all forms of string data (i.e., all alphabets). For this reason, Protocol Buffers only deals with ASCII or UTF-8 in its string types, and I can't see any polymorphic overloads that accept the C++ wstring. The question, then, is: how is one expected to get a UTF-16 string into a protocol buffer?

Presumably I need to keep the data as a wstring in my application code and then perform a UTF-8 conversion before I stuff it into (or extract it from) the message. What is the simplest Windows/Linux-portable way to do this? (A single function call from a well-supported library would make my day.)

Data will originate from various web servers (Linux and Windows) and will eventually end up in SQL Server (and possibly other endpoints).

-- edit 1 --

Mark Wilkins' suggestion seems to fit the bill. Perhaps someone who has experience with the library can post a code snippet -- from wstring to UTF-8 -- so that I can gauge how easy it will be.

-- edit 2 --

sth's suggestion even more so. I will investigate Boost Serialization further.

+1  A: 

It may be overkill, but the ICU libraries will do everything you need and you can use them on both Windows and Linux.
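For reference, a minimal sketch of what the ICU route could look like, assuming ICU's C++ API (icu::UnicodeString and its toUTF8String helper); the helper function name is mine, and the platform check reflects the different wchar_t widths on Windows and Linux:

// Hedged sketch: convert a wstring to a UTF-8 std::string via ICU.
#include <unicode/unistr.h>
#include <string>

std::string wide_to_utf8(const std::wstring& w)
{
    icu::UnicodeString us;
#if defined(_WIN32)
    // wchar_t is 16-bit here: treat the buffer as UTF-16 code units.
    us = icu::UnicodeString(reinterpret_cast<const UChar*>(w.c_str()),
                            static_cast<int32_t>(w.size()));
#else
    // wchar_t is 32-bit here: treat the buffer as UTF-32 code points.
    us = icu::UnicodeString::fromUTF32(
             reinterpret_cast<const UChar32*>(w.c_str()),
             static_cast<int32_t>(w.size()));
#endif
    std::string utf8;
    us.toUTF8String(utf8);   // appends the UTF-8 bytes to utf8
    return utf8;
}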

However, if you only want conversion, then under Windows a simple call to MultiByteToWideChar and WideCharToMultiByte can convert between UTF-8 and UTF-16. For example:

// utf-8 to utf-16
MultiByteToWideChar( CP_UTF8, 0, myUtf8String, -1,
                     myUtf16Buf, lengthOfUtf16Buf );
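The direction the question asks for (wstring to UTF-8) uses the counterpart call. A minimal sketch, with the usual two-call pattern to size the buffer first and with error handling omitted (the helper name is mine, not part of the original snippet):

// utf-16 (wstring) to utf-8, Windows only
#include <windows.h>
#include <string>

std::string utf16_to_utf8(const std::wstring& w)
{
    if (w.empty())
        return std::string();
    // First call: ask how many UTF-8 bytes are needed.
    int len = WideCharToMultiByte(CP_UTF8, 0, w.c_str(), (int)w.size(),
                                  NULL, 0, NULL, NULL);
    std::string utf8(len, '\0');
    // Second call: write the UTF-8 bytes into the buffer.
    WideCharToMultiByte(CP_UTF8, 0, w.c_str(), (int)w.size(),
                        &utf8[0], len, NULL, NULL);
    return utf8;
}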

On Linux, libidn might do what you need. It can convert between UTF-8 and UCS, which I think is equivalent to UTF-32 at some level. For example:

// utf-8 to UCS
ucsStr = stringprep_utf8_to_ucs4( "asdf", 4, &items );
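For the question's direction on Linux (a UCS-4 wstring to UTF-8), libidn has a counterpart function; a hedged sketch, assuming the stringprep_ucs4_to_utf8 signature from libidn's stringprep.h (check your headers) and that the returned buffer is malloc'd and NUL-terminated:

// UCS-4 wstring (Linux) to utf-8 via libidn -- hedged; verify the
// stringprep_ucs4_to_utf8 signature against your libidn version.
#include <stringprep.h>
#include <stdint.h>
#include <cstdlib>
#include <string>

std::string ucs4_to_utf8(const std::wstring& w)
{
    size_t items_read = 0, items_written = 0;
    char* utf8 = stringprep_ucs4_to_utf8(
        reinterpret_cast<const uint32_t*>(w.c_str()),
        static_cast<ssize_t>(w.size()),
        &items_read, &items_written);
    std::string result = utf8 ? std::string(utf8) : std::string();
    free(utf8);   // caller owns the returned buffer
    return result;
}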

However, on Linux I think you might be best off simply working with UTF-8. Unless you have an existing library for UTF-16, I am not sure there is a compelling reason to use it on Linux.

Mark Wilkins
I did stumble on this earlier, but on second glance it looks more favourable -- especially since the license is not as restrictive as I assumed it would be. I will hold out for a while before accepting, to see if someone posts a code snippet :P.
Hassan Syed
+3  A: 

The Boost Serialization library contains a UTF-8 codecvt facet that you can use to convert Unicode to UTF-8 and back. There is even an example in the documentation doing exactly that.
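A hedged sketch of how such a codecvt facet is typically used with a wide stream; the header path and namespace shown (boost::archive::detail) are assumptions that have shifted between Boost releases, and linking against the boost_serialization library may be required:

// Write a wstring to a file as UTF-8 by imbuing the stream's locale with
// the UTF-8 codecvt facet. Header/namespace may differ in your Boost version.
#include <boost/archive/detail/utf8_codecvt_facet.hpp>
#include <fstream>
#include <locale>
#include <string>

void write_as_utf8(const std::wstring& ws, const char* path)
{
    std::wofstream out;
    std::locale utf8_locale(std::locale(),
                            new boost::archive::detail::utf8_codecvt_facet);
    out.imbue(utf8_locale);   // install the facet before any I/O
    out.open(path);
    out << ws;                // wide characters are written as UTF-8 bytes
}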

sth
Hmm, that looks nice; it would certainly be a lighter dependency than ICU.
Hassan Syed
+1  A: 

On Linux it's trivial: each wchar_t is one Unicode codepoint, and with trivial bitops you can find the corresponding UTF-8 byte(s). On Windows it isn't much harder, as there is an API for it: WideCharToMultiByte(CP_UTF8, 0, input.c_str(), input.size(), &out[0], out.size(), 0,0);
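To make the "trivial bitops" concrete, here is a minimal sketch for the Linux case, where each wchar_t holds a full code point; surrogate and out-of-range values are not validated here:

// wstring (one code point per wchar_t) to UTF-8, plain bit shifting.
#include <string>

std::string to_utf8(const std::wstring& in)
{
    std::string out;
    for (wchar_t wc : in) {
        unsigned long cp = static_cast<unsigned long>(wc);
        if (cp < 0x80) {                 // 1 byte: 0xxxxxxx
            out += static_cast<char>(cp);
        } else if (cp < 0x800) {         // 2 bytes: 110xxxxx 10xxxxxx
            out += static_cast<char>(0xC0 | (cp >> 6));
            out += static_cast<char>(0x80 | (cp & 0x3F));
        } else if (cp < 0x10000) {       // 3 bytes: 1110xxxx 10xxxxxx 10xxxxxx
            out += static_cast<char>(0xE0 | (cp >> 12));
            out += static_cast<char>(0x80 | ((cp >> 6) & 0x3F));
            out += static_cast<char>(0x80 | (cp & 0x3F));
        } else {                         // 4 bytes for code points above U+FFFF
            out += static_cast<char>(0xF0 | (cp >> 18));
            out += static_cast<char>(0x80 | ((cp >> 12) & 0x3F));
            out += static_cast<char>(0x80 | ((cp >> 6) & 0x3F));
            out += static_cast<char>(0x80 | (cp & 0x3F));
        }
    }
    return out;
}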

MSalters
+1  A: 

Take a look at UTF8-CPP:

// converts a utf-8 encoded std::string s to a utf-16 wstring ws
utf8::utf8to16(s.begin(), s.end(), std::back_inserter(ws));
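For the question's direction, the library also provides the reverse conversion; a short sketch assuming a UTF-16 wstring (Windows-sized wchar_t) -- on Linux, where wchar_t is 32-bit, utf8::utf32to8 would be the counterpart:

// converts a utf-16 wstring ws to a utf-8 encoded std::string
#include <utf8.h>      // UTF8-CPP
#include <iterator>
#include <string>

std::string utf16_to_utf8(const std::wstring& ws)
{
    std::string s;
    utf8::utf16to8(ws.begin(), ws.end(), std::back_inserter(s));
    return s;
}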
Nemanja Trifunovic