What Martin writes is true:
It should work fine, as long as you only ever treat strings as opaque blobs. When you start accessing them "char-by-char" (i.e. byte-by-byte), things can go wrong if you assume a C char is a complete character. Likewise, if you assume you can split a string at any byte offset into two valid substrings, it can go wrong, etc.
But it's worse than that. Running on a Japanese or Chinese system merely makes it more likely that your code will encounter multi-byte (non-ASCII) text. Even on a US English system (the simplest case), it's entirely possible your code will encounter multi-byte (non-ASCII) text. Don't assume the strings the user interface shows by default are the limit of what your code might encounter.
Also note that converting your project to "Unicode" (as Microsoft calls it) won't help, because Microsoft's choice of Unicode encoding is UTF-16, which has the same problems, just less often. (In UTF-16, the term to look out for is "surrogate pair".)
Text processing is hard. Let's go shopping!