If you write `'A'`, and that value gets converted to `wchar_t`, then on Microsoft compilers at least it will have the same value as if you'd written `L'A'` or `_T('A')`.
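For instance, a quick check along these lines (a minimal sketch; the assertion is only claimed to hold on Microsoft compilers, not guaranteed by the standard):

```c++
#include <cassert>

int main() {
    wchar_t c = 'A';    // implicit conversion from char to wchar_t
    assert(c == L'A');  // claimed to hold on Microsoft compilers at least
}
```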
The same can't be said of string literals, since there is no useful conversion from `const char*` to `const wchar_t*`. I think this means it's rather less important to get character literal types right than it is to get string literal types right.
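To illustrate the asymmetry (a sketch; `takes_wide_char` and `takes_wide_string` are made-up names, not a real API):

```c++
void takes_wide_char(wchar_t) {}
void takes_wide_string(const wchar_t*) {}

int main() {
    takes_wide_char('A');      // fine: char converts to wchar_t
    takes_wide_string(L"A");   // fine: already a wide string literal
    // takes_wide_string("A"); // error: no conversion from const char* to const wchar_t*
}
```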
It's easy to write code that behaves differently according to whether a character literal is wide or narrow: just have an overloaded function that does something completely different for each. But in practice, sensible functions overloaded to take both types of character are going to end up doing the same thing with `'A'` that they do with `L'A'`. And functions which aren't overloaded, and only take `wchar_t`, can take `'A'` just fine.
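A sketch of both cases (the names `f` and `g` are arbitrary):

```c++
#include <iostream>

void f(char)    { std::cout << "narrow\n"; }  // contrived overloads that
void f(wchar_t) { std::cout << "wide\n"; }    // deliberately differ

void g(wchar_t) {}  // not overloaded: takes only wchar_t

int main() {
    f('A');   // picks the char overload: prints "narrow"
    f(L'A');  // picks the wchar_t overload: prints "wide"
    g('A');   // fine: 'A' converts implicitly to wchar_t
}
```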
I don't immediately see anything in the standard to require that `L'A' == (wchar_t)'A'`, so in theory non-Microsoft compilers might do something completely different. But you'd normally expect the wide character set to be an extension of the narrow character set, just as Unicode extends ISO-8859-1. To be specific about what "extension" means here: code points which are equal as integers designate the "same character".
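If code depends on that assumption, one could make it explicit with a compile-time probe (a sketch using C++11 `static_assert`; an older compiler would need a different trick):

```c++
// Fails to compile on any implementation where wide 'A' doesn't match
// narrow 'A', making the assumption explicit rather than silent.
static_assert(L'A' == (wchar_t)'A',
              "wide character set does not extend the narrow one here");
```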