views:

169

answers:

6

I'm wondering what the Stack Overflow community thinks when it comes to creating a project (thinking primarily C++ here) with a Unicode or a multi-byte character set.

  • Are there pros to going Unicode straight from the start, implying all your strings will be in wide format? Are there performance issues / larger memory requirements because of a standard use of a larger character?

  • Is there an advantage to this method? Do some processor architectures handle wide characters better?

  • Are there any reasons to make your project Unicode if you don't plan on supporting additional languages?

  • What reasons would one have for creating a project with a multi-byte character set?

  • How do all of the factors above collide in a high-performance environment (such as a modern video game)?

+5  A: 

The short answer (IMO, and I've been proven wrong before) is that it's better to plan for the worst (or best, depending on your point of view) and do Unicode right now.

Unless your application is very string-intensive, going directly to Unicode will not really matter; in the case of games, it should not be a big factor compared to the rest of the engine.

Max.

Max
What if, for some magical reason, you are using a character string in a tight loop? Will there be a sizable performance difference?
Stefan Valianu
@Stefan: That depends on what you're doing with that string. If you're copying it, and it consists mostly of ASCII characters, the MB version will be a bit shorter, so copying it may be faster. If you're doing actual string processing, the Unicode version will likely be more efficient, because of its simpler structure. But really, this is such an absurdly hypothetical what-if question that it's pointless. Your answer is "it doesn't matter performance-wise, and it never will, and if it does, you should test both and see what works best".
jalf
Also, if it matters performance wise you can just optimize that specific loop without changing the project type.
Brian
A: 

The first answer to that question should... answer everything you need to know.

Klaim
+3  A: 

Two issues I'd comment on.

First, you don't mention what platform you're targeting. Although recent Windows versions (Win2000, WinXP, Vista and Win7) support both multibyte and Unicode versions of the system calls that take strings, the Unicode versions are faster (the multibyte versions are wrappers that convert to Unicode, call the Unicode version, then convert any returned strings back to multibyte). So if you're making a lot of these types of calls, the Unicode version will be faster.

Second, even if you're not planning on explicitly supporting additional languages, you should still consider supporting Unicode if your application saves and displays text entered by its users. Just because your application is unilingual, it doesn't follow that all its users will be unilingual too. They may be perfectly happy to use your English-language GUI, but might want to enter names, comments or other text in their own language and have it displayed properly.

Stephen C. Steel
"you should still consider supporting Unicode if your application saves and displays text entered by the users" - and if your application wants to deal with paths with arbitrary characters - and if it deals in any way with paths, it should.
Matteo Italia
This is exactly what I wanted to hear.. that one is a wrapper for the other. Unicode all the way baby.
Stefan Valianu
+2  A: 

You are talking about the VC++ Project setting here, right?

The only thing it affects is which version of the Win32 API calls ends up being executed. For instance, a call to MessageBox will end up as a call to MessageBoxA with the multi-byte setting, and MessageBoxW with the Unicode setting. Of course, that affects the types of the string parameters to those functions as well. Internally, MessageBoxA calls MessageBoxW after converting the string parameters from the current system locale to Unicode.
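For illustration, here is roughly what that setting boils down to (a sketch only; the generic TCHAR/TEXT names come from <windows.h>/<tchar.h>):

    // Sketch: the Character Set project setting defines (or omits) UNICODE/_UNICODE,
    // and <windows.h> maps the generic names accordingly.
    #include <windows.h>
    #include <tchar.h>

    int main()
    {
        // Unicode setting:    TCHAR is wchar_t, TEXT("...") is L"...", MessageBox -> MessageBoxW
        // Multi-byte setting: TCHAR is char,    TEXT("...") is "...",  MessageBox -> MessageBoxA
        const TCHAR* text = TEXT("Hello");
        MessageBox(NULL, text, TEXT("Demo"), MB_OK);

        // You can always call a specific version explicitly, regardless of the setting:
        MessageBoxW(NULL, L"Hello", L"Demo", MB_OK);
        return 0;
    }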

My advice is to use the Unicode setting and pass Unicode strings to Win32 API calls. That does not stop you from using strings in any other encoding internally.

Nemanja Trifunovic
+1  A: 

Are there pros to going Unicode straight from the start,

A few years and a million lines of code later, you're going to wish you had answered "yes".

implying all your strings will be in wide format?

I wish Microsoft would quit conflating "Unicode" with UTF-16.

You don't have to store all your strings in wide format. You can use UTF-8 instead, and get a smaller memory footprint (for Latin alphabet languages), and backwards compatibility with 7-bit ASCII.

The one downside to using UTF-8 on Windows is that it's not supported as an ANSI code page, so you have to convert your strings to UTF-16 to make WinAPI calls. How much inconvenience this causes depends on whether you're writing a Windows program or a program that just happens to run on Windows.
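For what it's worth, here is a minimal sketch of that boundary conversion (utf8_to_utf16 is just an illustrative helper name, not a library function):

    #include <windows.h>
    #include <string>

    // Convert an internal UTF-8 string to UTF-16 just before calling the Win32 API.
    std::wstring utf8_to_utf16(const std::string& s)
    {
        if (s.empty()) return std::wstring();
        int len = MultiByteToWideChar(CP_UTF8, 0, s.data(), (int)s.size(), NULL, 0);
        if (len <= 0) return std::wstring();
        std::wstring out(len, L'\0');
        MultiByteToWideChar(CP_UTF8, 0, s.data(), (int)s.size(), &out[0], len);
        return out;
    }

    int main()
    {
        std::string internal = "caf\xC3\xA9";   // UTF-8 everywhere inside the program
        std::wstring wide = utf8_to_utf16(internal);
        MessageBoxW(NULL, wide.c_str(), L"UTF-8 inside, UTF-16 at the API boundary", MB_OK);
        return 0;
    }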

dan04
+1  A: 

Here's a simple consideration: should your program work if it's used by Mr. 菅 直人? His home directory might be hard to represent in ASCII.
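A rough sketch of what that means in practice (assuming a Vista-or-later target; the specific API choice here is mine, not part of the point above):

    #include <windows.h>
    #include <shlobj.h>
    #include <objbase.h>

    int main()
    {
        PWSTR path = NULL;
        // e.g. L"C:\\Users\\菅直人" -- representable in UTF-16, but not in ASCII.
        if (SUCCEEDED(SHGetKnownFolderPath(FOLDERID_Profile, 0, NULL, &path)))
        {
            MessageBoxW(NULL, path, L"Home directory", MB_OK);
            CoTaskMemFree(path);
        }
        return 0;
    }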

MSalters
Excellent point
Stefan Valianu