views:

1651

answers:

7

What is the standard encoding of C++ source code, and does the standard even say anything about it? For example, can I write C++ source in Unicode and use non-ASCII characters in comments?

Can I use Chinese characters in comments (is full Unicode allowed, or just the first 16-bit plane, i.e. the Basic Multilingual Plane)?

Furthermore, can I use Unicode in C++ wide-string literals, for example:

std::wstring str = L"Strange chars: â Țđ ě €€";

+4  A: 

The C++ standard doesn't say anything about source-code file encoding, so far as I know.

The usual encoding is (or used to be) 7-bit ASCII -- some compilers (Borland's, for instance) would balk at characters with the high bit set. There's no technical reason that Unicode characters can't be used, if your compiler and editor accept them -- most modern Linux-based tools, and many of the better Windows-based editors, handle UTF-8 encoding with no problem, though I'm not sure that Microsoft's compiler will.

EDIT: It looks like Microsoft's compilers will accept Unicode-encoded files, but will sometimes warn about 8-bit characters too:

warning C4819: The file contains a character that cannot be represented
in the current code page (932). Save the file in Unicode format to prevent
data loss.
Head Geek
It sort of does. I don't think it explicitly prevents or allows Unicode, but this is the minimum allowable character set: http://www.csci.csusb.edu/dick/c++std/cd2/lex.html#lex.charset
Greg Rogers
Since C++Builder 2007, the Borland/CodeGear compiler has supported Unicode source files: i.e. Unicode string literals, Unicode comments. The IDE has struggled a bit with them, but the compiler's happy!
Roddy
The Borland thing I mentioned was from roughly twenty years ago (the last time I tried putting a high-ASCII character in a source-code file). :-) I haven't used a Borland compiler in about ten years.
Head Geek
Microsoft compilers do support Unicode only for wide chars (L"...").
Sorin Sbarnea
+2  A: 

For encoding in strings I think you are meant to use the \u notation, e.g.:

std::wstring str = L"\u20AC"; // Euro character
Rob
A: 

AFAIK it's not standardized; you can put any kind of character in wide strings. You just have to check that your compiler is set to treat the source code as Unicode for it to work correctly.

Klaim
+9  A: 

Encoding in C++ is quite complicated. Here is my understanding of it.

Every implementation has to support characters from the basic source character set. These include common characters listed in 2.2/1, and each of them fits into one char. In addition, implementations have to support a way to name other characters using so-called universal character names, which look like \uffff or \Uffffffff and can be used to refer to Unicode characters. A subset of them is usable in identifiers (listed in Annex E).

This is all nice, but the mapping from characters in the file to source characters (used at compile time) is implementation-defined. This constitutes the encoding used. Here is what the standard says literally:

Physical source file characters are mapped, in an implementation-defined manner, to the basic source character set (introducing new-line characters for end-of-line indicators) if necessary. Trigraph sequences (2.3) are replaced by corresponding single-character internal representations. Any source file character not in the basic source character set (2.2) is replaced by the universal-character-name that designates that character. (An implementation may use any internal encoding, so long as an actual extended character encountered in the source file, and the same extended character expressed in the source file as a universal-character-name (i.e. using the \uXXXX notation), are handled equivalently.)

For gcc, you can change it using the option -finput-charset=charset. Additionally, you can change the execution character set used to represent values at runtime. The proper option for this is -fexec-charset=charset for char (it defaults to UTF-8) and -fwide-exec-charset=charset (which defaults to either UTF-16 or UTF-32, depending on the size of wchar_t).

Johannes Schaub - litb
A: 

It's also worth noting that wide characters in C++ aren't really Unicode strings as such. They are just strings of larger characters, usually 16 but sometimes 32 bits wide. This is implementation-defined, though; IIRC you can even have an 8-bit wchar_t. You have no real guarantee as to the encoding in them, so if you are trying to do something like text processing, you will probably want a typedef to the integer type most suitable for your Unicode entities.

C++1x has additional Unicode support in the form of UTF-8 string literals (u8"text") and UTF-16 and UTF-32 data types (char16_t and char32_t, IIRC), along with corresponding string constants (u"text" and U"text"). The encoding of characters specified without \uxxxx or \Uxxxxxxxx escapes is still implementation-defined, though (and there is no encoding support for complex string types outside the literals).

coppro
+1  A: 

There are two issues at play here. The first is what characters are allowed in C++ code (and comments), such as variable names. The second is what characters are allowed in strings and string literals.

As noted, C++ compilers must support a very restricted, ASCII-based character set for the characters allowed in code and comments. In practice, this character set didn't work very well with some European character sets (and especially with some European keyboards that didn't have a few characters -- like square brackets -- available), so the concepts of digraphs and trigraphs were introduced. Many compilers accept more than this minimal set nowadays, but there isn't any guarantee.

As for strings and string literals, C++ has the concept of a wide character and wide character string. However, the encoding for that character type is undefined. In practice it's almost always Unicode, but I don't think there's any guarantee here. Wide character string literals look like L"string literal", and these can be assigned to std::wstring.

Max Lybbert
+1  A: 

In addition to litb's post, MSVC++ supports Unicode too. I understand it gets the Unicode encoding from the BOM. It definitely supports code like int (*♫)(); or const std::set<int> ∅;. If you're really into code obfuscation:

typedef void ‼; // Also known as \u203C
class ooɟ {
    operator ‼() {}
};
MSalters