Hello,

there are already a few questions relating to this problem. I think my question is a bit different because I don't have an actual problem; I'm only asking out of academic interest. I know that Windows's implementation of UTF-16 sometimes contradicts the Unicode standard (e.g. collation) or is closer to the old UCS-2 than to UTF-16, but I'll keep the “UTF-16” terminology here for simplicity.

Background: In Windows, everything is UTF-16. Regardless of whether you're dealing with the kernel, the graphics subsystem, the filesystem or whatever, you're passing UTF-16 strings. There are no locales or charsets in the Unix sense. For compatibility with medieval versions of Windows, there is a thing called “codepages” that is obsolete but nonetheless supported. AFAIK, there is only one correct and non-obsolete function to write strings to the console, namely WriteConsoleW, which takes a UTF-16 string. A similar discussion applies to input streams, which I'll ignore here as well.

However, I think this represents a design flaw in the Windows API: there is a generic function that can be used to write to all stream objects (files, pipes, consoles…) called WriteFile, but this function is byte-oriented and doesn't accept UTF-16 strings. The documentation suggests using WriteConsoleW for console output, which is text-oriented, and WriteFile for everything else, which is byte-oriented. Since both console streams and file objects are represented by kernel object handles, and console streams can be redirected, every write to a standard output stream has to check whether the handle represents a console stream or a file, breaking polymorphism (sketched below). OTOH, I do think that Windows's separation between text strings and raw bytes (which is mirrored in many other systems such as Java or Python) is conceptually superior to Unix's char* approach, which ignores encodings and doesn't distinguish between strings and byte arrays.
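
For concreteness, this is roughly what that per-write check looks like. A minimal sketch: the helper name `WriteString` and the raw-UTF-16 fallback for non-console handles are my own choices, and error handling is reduced to a bool.

```cpp
#include <windows.h>
#include <string>

// GetConsoleMode succeeds only on real console handles, so it can
// tell a console apart from a redirected file or pipe.
bool WriteString(HANDLE h, const std::wstring& s)
{
    DWORD mode, written;
    if (GetConsoleMode(h, &mode)) {
        // Console handle: text-oriented API, length in UTF-16 code units.
        return WriteConsoleW(h, s.c_str(), (DWORD)s.size(), &written, NULL) != 0;
    }
    // File or pipe: byte-oriented API. This sketch dumps raw UTF-16-LE
    // bytes; a real program would probably convert to UTF-8 first.
    return WriteFile(h, s.c_str(), (DWORD)(s.size() * sizeof(wchar_t)),
                     &written, NULL) != 0;
}
```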

So my questions are: What should one do in this situation? And why isn't this problem solved even in Microsoft's own libraries? Both the .NET Framework and the C and C++ standard libraries seem to adhere to the obsolete codepage model. How would you design the Windows API or an application framework to circumvent this issue?

I think that the general problem (which is not easy to solve) is that all libraries assume that all streams are byte-oriented and implement text-oriented streams on top of that. However, we see that Windows does have special text-oriented streams on the OS level, and the libraries are unable to deal with this. So in any case we must introduce significant changes to all standard libraries. A quick and dirty way would be to treat the console as a special byte-oriented stream that accepts only one encoding. This still requires circumventing the C and C++ standard libraries, because they don't implement the WriteFile/WriteConsoleW switch. Is that correct?
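
A minimal sketch of that quick-and-dirty route, using the Win32 UTF-8 code page (CP_UTF8, 65001) and deliberately bypassing the C runtime, which is exactly the circumvention mentioned above:

```cpp
#include <windows.h>

int main()
{
    // Declare the console to be a byte-oriented UTF-8 stream.
    SetConsoleOutputCP(CP_UTF8);

    // Write raw UTF-8 bytes directly, skipping the CRT's own
    // codepage conversions. Rendering still depends on the console font.
    const char utf8[] = "Gr\xC3\xBC\xC3\x9F Gott\n"; // "Grüß Gott" in UTF-8
    DWORD written;
    WriteFile(GetStdHandle(STD_OUTPUT_HANDLE), utf8,
              sizeof utf8 - 1, &written, NULL);
}
```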

+2  A: 

The general strategy I/we use in most (cross-platform) applications/projects is: we just use UTF-8 (I mean the real standard) everywhere. We use std::string as the container and interpret everything as UTF-8. We also handle all file I/O this way, i.e. we expect UTF-8 and save UTF-8. When we get a string from somewhere and know that it is not UTF-8, we convert it to UTF-8.

The most common case where we stumble upon WinUTF16 is filenames. So whenever we handle filenames, we convert the UTF-8 string to WinUTF16, and the other way around when we search a directory for files.
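
A minimal sketch of those two conversions with the Win32 primitives (the helper names are mine; error handling is omitted, and MultiByteToWideChar/WideCharToMultiByte return 0 on failure, which a real program should check):

```cpp
#include <windows.h>
#include <string>

// Convert UTF-8 to the UTF-16 that Win32 "W" functions expect.
std::wstring Utf8ToWide(const std::string& in)
{
    int n = MultiByteToWideChar(CP_UTF8, 0, in.c_str(), -1, NULL, 0);
    std::wstring out(n, L'\0');
    MultiByteToWideChar(CP_UTF8, 0, in.c_str(), -1, &out[0], n);
    out.resize(n - 1); // drop the terminating NUL counted by -1
    return out;
}

// And back again, e.g. for filenames returned by FindFirstFileW.
std::string WideToUtf8(const std::wstring& in)
{
    int n = WideCharToMultiByte(CP_UTF8, 0, in.c_str(), -1, NULL, 0, NULL, NULL);
    std::string out(n, '\0');
    WideCharToMultiByte(CP_UTF8, 0, in.c_str(), -1, &out[0], n, NULL, NULL);
    out.resize(n - 1);
    return out;
}
```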

The console isn't really used in our Windows build (there, all console output is wrapped into a file). As we have UTF-8 everywhere, our console output is also UTF-8, which is fine for most modern systems. The Windows console log file likewise has its content in UTF-8, and most text editors on Windows can read that without problems.

If we used the Windows console more, and if we cared a lot that all special characters are displayed correctly, we would perhaps write an automatic pipe handler installed between file descriptor 1 and the real stdout, which would use WriteConsoleW as you suggested (if there is really no easier way).

If you wonder how to realize such an automatic pipe handler: we have already implemented such a thing for all POSIX-like systems. The code probably doesn't work on Windows as-is, but I think it should be possible to port it. Our current pipe handler is similar to what tee does: if you do a cout << "Hello" << endl, it is printed both on stdout and into a log file. Look at the code if you are interested in how this is done.
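
For the curious, here is a very reduced sketch of such a tee-style handler on a POSIX-like system (not our actual code; error handling and clean shutdown are omitted). The idea: redirect fd 1 into a pipe and have a thread copy everything from the pipe to both the saved real stdout and a log file.

```cpp
#include <unistd.h>
#include <fcntl.h>
#include <pthread.h>

static int real_stdout; // saved copy of the original fd 1
static int logfd;       // the log file

static void* pump(void* arg)
{
    int readend = *(int*)arg;
    char buf[4096];
    ssize_t n;
    // Copy everything written to the (redirected) stdout
    // to both the real stdout and the log file, like tee.
    while ((n = read(readend, buf, sizeof buf)) > 0) {
        write(real_stdout, buf, n);
        write(logfd, buf, n);
    }
    return 0;
}

void install_tee(const char* logname)
{
    static int fds[2];
    pipe(fds);
    real_stdout = dup(STDOUT_FILENO);
    logfd = open(logname, O_WRONLY | O_CREAT | O_APPEND, 0644);
    dup2(fds[1], STDOUT_FILENO); // fd 1 now feeds the pipe
    close(fds[1]);
    pthread_t t;
    pthread_create(&t, 0, pump, &fds[0]);
}
```

A Windows port would presumably replace the write to real_stdout with the WriteConsoleW path discussed above.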

Albert
+1  A: 

Several points:

  1. One important difference between Windows' WriteConsoleW and printf is that WriteConsoleW treats the console as GUI rather than a text stream. For example, if you use it while output is redirected to a pipe, you will not capture the output.
  2. I would never say that code pages are obsolete. Maybe Windows developers would like them to be, but they never will be. The whole world except the Windows API uses byte-oriented streams to represent data: XML, HTML, HTTP, Unix, etc. all use encodings, and the most popular and most powerful one is UTF-8. So you may use wide strings internally, but in the external world you'll need something else.

    Even when you print wcout << L"Hello World" << endl, it is converted under the hood to a byte-oriented stream, and on most systems (other than Windows) to UTF-8; see the sketch after this list.

  3. My personal opinion: Microsoft made a mistake when it changed its API everywhere to wide characters instead of supporting UTF-8 everywhere. Of course you may argue about that. But in fact you have to separate text-oriented and byte-oriented streams and convert between them.
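
To see the under-the-hood conversion from point 2 in action, here is a small sketch for a glibc-style system. The locale name "en_US.UTF-8" is an assumption and must exist on the machine; the imbued locale's codecvt facet is what narrows the wide characters to bytes.

```cpp
#include <iostream>
#include <locale>

int main()
{
    // Must happen before any output on wcout. The locale's
    // codecvt facet decides how wide chars become bytes.
    std::locale::global(std::locale("en_US.UTF-8"));
    std::wcout.imbue(std::locale());
    std::wcout << L"Hello World \x00e9" << std::endl; // é leaves as 0xC3 0xA9
}
```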

Artyom
1. Microsoft suggests checking whether the standard output stream goes to a console or something else before using WriteConsole. This is cumbersome, but it seems to be the only possible and portable option. 2. Code pages and encodings are not the same. By code pages I mean the Windows console code pages. Since the Windows console is text-oriented and uses UTF-16, code pages are obsolete: every string that uses a code page immediately gets converted to UTF-16 anyway. The `wostream` issue is unfortunate, but mandated by the C++ standard. 3. I don't think the decision to use UTF-16 is unfortunate, but the API is poorly designed. For example, you could think of something like `GetStdHandle(STD_UTF16LE_OUTPUT_HANDLE)` which would return a byte-oriented stream handle that expects UTF-16-LE-encoded strings. Then you could use `WriteFile` everywhere. OTOH, I think the issue that C and C++ have no real text streams is more important.
Philipp