Encoding in C++ is quite complicated. Here is my understanding of it.
Every implementation has to support characters from the basic source character set. These are the common characters listed in 2.2/1, and they all fit into one `char`. In addition, implementations have to support a way to name other characters using so-called universal character names, which look like `\uffff` or `\Uffffffff` and can be used to refer to Unicode characters. A subset of them is usable in identifiers (listed in Annex E).
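For instance, here is a minimal sketch of a universal character name used in a string literal and in an identifier (the identifier part assumes a compiler that supports extended identifiers, which gcc only does in newer versions):

```cpp
#include <iostream>

int main() {
    const char* eacute = "\u00e9";  // é spelled as a universal character name
    int caf\u00e9 = 1;              // \u00e9 is in the Annex E ranges, so it
                                    // may appear in an identifier
    std::cout << eacute << ' ' << caf\u00e9 << '\n';
}
```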
This is all nice, but the mapping from characters in the file to source characters (used at compile time) is implementation-defined; this constitutes the encoding used. Here is what the standard says literally (2.1/1):
> Physical source file characters are mapped, in an implementation-defined manner, to the basic source character set (introducing new-line characters for end-of-line indicators) if necessary. Trigraph sequences (2.3) are replaced by corresponding single-character internal representations. Any source file character not in the basic source character set (2.2) is replaced by the universal-character-name that designates that character. (An implementation may use any internal encoding, so long as an actual extended character encountered in the source file, and the same extended character expressed in the source file as a universal-character-name (i.e. using the `\uXXXX` notation), are handled equivalently.)
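The parenthetical at the end is testable: a string literal containing an extended character typed directly into the file and one spelling the same character as a universal character name must compare equal. A minimal sketch (assuming the source file is saved in the encoding the compiler expects, e.g. UTF-8 for gcc's default):

```cpp
#include <cassert>
#include <cstring>

int main() {
    const char* direct = "é";       // extended character typed into the file
    const char* ucn    = "\u00e9";  // the same character as a UCN
    assert(std::strcmp(direct, ucn) == 0);  // guaranteed equivalent
}
```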
For gcc, you can change the input encoding using the option `-finput-charset=charset`. Additionally, you can change the execution character set used to represent values at runtime. The relevant option is `-fexec-charset=charset` for `char` (it defaults to UTF-8) and `-fwide-exec-charset=charset` (which defaults to either UTF-16 or UTF-32, depending on the size of `wchar_t`).
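To see the effect of `-fexec-charset`, you can dump the raw bytes the compiler stored for a literal. A sketch (it assumes your iconv supports ISO-8859-1; `test.cpp` is a hypothetical filename):

```cpp
#include <cstdio>

int main() {
    // "é" is two bytes with -fexec-charset=UTF-8 (c3 a9),
    // but one byte with -fexec-charset=ISO-8859-1 (e9).
    const char* s = "\u00e9";
    for (const char* p = s; *p; ++p)
        std::printf("%02x ", static_cast<unsigned char>(*p));
    std::printf("\n");
}
```

Compile with `g++ -fexec-charset=ISO-8859-1 test.cpp` and again with `-fexec-charset=UTF-8` to compare the output.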