views: 102
answers: 5

I know that C strings are char[] with a '\0' in the last element. But how are the chars encoded?

Update: I found this cool link which talks about many other programming languages and their encoding conventions: Link

+4  A: 

The standard does not specify this. Typically, the encoding is ASCII.

Oli Charlesworth
In Objective-C I'm able to create C strings by saying `char *cStr = [objcStr UTF8String]` and print them with `printf("%s", cStr)`. Does that work because ASCII is a subset of UTF-8?
Plumenator
Yes, ASCII is a subset of UTF-8.
x3ro
@Plumenator It works because UTF-8 was designed to be as transparent as possible to code that already handles ASCII, and because your output terminal supports UTF-8.
nos
+1 @nos, but to fill in some details: it works because UTF-8 guarantees that the zero byte never occurs inside a multi-byte sequence, so `printf` will never inadvertently deliver just part of a UTF-8-encoded string to the terminal.
Marcelo Cantos
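
To make the zero-byte guarantee above concrete, here is a minimal sketch (the literal and the UTF-8-capable terminal are assumptions; C99 or later):

```c
#include <stdio.h>
#include <string.h>

int main(void) {
    /* U+03B1 (Greek small alpha) encodes to 0xCE 0xB1 in UTF-8; neither
       byte is zero, so the terminating '\0' can't be confused with
       character data. (The literal is split because 'b' and 'c' would
       otherwise be swallowed by the hex escape.) */
    const char *s = "\xCE\xB1" "bc";

    printf("%zu\n", strlen(s)); /* 4 bytes, even though only 3 characters */
    printf("%s\n", s);          /* the bytes pass through unchanged; a
                                   UTF-8 terminal renders them as "αbc" */
    return 0;
}
```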
+1  A: 

They are not really "encoded" as such; they are simply stored as-is. The string "hello" represents an array with the char values 'h', 'e', 'l', 'l', 'o', and '\0', in that order. The C standard has a basic character set that includes these characters, but it doesn't specify an encoding into bytes. It could be EBCDIC, for all you know.

Marcelo Cantos
Note: '\0' is literally the octal escape for zero, with a type of char. So yes, the terminating character is always literally a 0.
Martin York
@Martin: thanks for pointing that out. I always forget whether the strange rules around null pointers apply to null characters too.
Marcelo Cantos
@Martin: Technically, the type of a character literal is `int` (at least it is in C)...
Oli Charlesworth
@Marcelo I'm talking about all the characters.
Plumenator
@Plumenator: I've amended my answer accordingly.
Marcelo Cantos
@Oli: Oops. I am more used to C++. You are correct: in C the type is int. The value, however, is still zero.
Martin York
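
The points in this comment thread are easy to check directly; a small sketch (output as on a typical platform with 4-byte int):

```c
#include <stdio.h>

int main(void) {
    char s[] = "hello";           /* stored as 'h','e','l','l','o','\0' */

    printf("%zu\n", sizeof s);    /* 6: five characters plus the terminator */
    printf("%d\n", s[5]);         /* 0: the terminator's value is zero */
    printf("%zu\n", sizeof '\0'); /* sizeof(int) in C (e.g. 4): a character
                                     literal has type int, not char */
    return 0;
}
```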
+1  A: 

All the standard says on the matter is that you get at least the 52 upper- and lower-case Latin alphabet characters, the digits 0 to 9, the symbols ! " # % & ' ( ) * + , - . / : ; < = > ? [ \ ] ^ _ { | } ~, the space character, and control characters representing horizontal tab, vertical tab, and form feed.

The only thing it says about numeric encoding is that all of the above fits in one byte, and that the value of each digit after zero is one greater than the value of the previous one.

The actual encoding is likely inherited from your locale settings; it is probably something ASCII-compatible.

Cirno de Bergerac
I guess the locale is also configurable in the compiler. I just found out about gcc's -finput-charset option (http://gcc.gnu.org/onlinedocs/cpp/Invocation.html). The default seems to be UTF-8. No wonder I was able to print the `UTF8String` strings.
Plumenator
Does the standard also say anything about the ordinal values of the alphabetic characters?
Plumenator
@Plumenator: No. There is not even a requirement that `'A' < 'B'`.
Bart van Ingen Schenau
@Bart Interesting, I wonder how strcmp() works.
Plumenator
@Plumenator: The only guarantee about `strcmp` is that the output value corresponds to the numeric values of the characters in the string. It says nothing about how the result maps to the alphabet.
Oli Charlesworth
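
Two consequences of the above, sketched in C: the digit-contiguity guarantee makes the classic `c - '0'` idiom portable, while `strcmp` ordering is only as alphabetical as the execution character set happens to be:

```c
#include <stdio.h>
#include <string.h>

int main(void) {
    /* Guaranteed: '0'..'9' are contiguous in every conforming
       implementation, so this conversion is portable. */
    int digit = '7' - '0';
    printf("%d\n", digit);                     /* 7 */

    /* Not guaranteed to be alphabetical: strcmp compares the numeric
       values of the bytes, whatever encoding they happen to use. */
    printf("%d\n", strcmp("abc", "abd") < 0);  /* 1 on ASCII systems */
    return 0;
}
```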
+4  A: 

A C string is pretty much just a sequence of bytes. That means it does not really have any encoding; it could be ASCII, UTF-8, or anything else, for that matter. Because most operating systems understand ASCII by default, and source code is mostly written in an ASCII encoding, the data you will find in a plain `char*` will very often be ASCII as well. Nonetheless, there is no guarantee that what you get out of a `char*` will be UTF-8 or even KOI8.

x3ro
Actually, most modern OSes use wide-character strings in all their internal interfaces (Win/Linux/Mac). So it is not ASCII they use.
Martin York
I didn't say that they use ASCII by default in their interfaces, but that they understand ASCII :)
x3ro
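
One way to see that a `char*` carries no encoding information is to look at the raw bytes; how they are interpreted is entirely up to the reader. A minimal sketch (the hex output shown assumes an ASCII-compatible system):

```c
#include <stdio.h>

/* Print the bytes of a string in hex; nothing in the array records
   which encoding produced them. */
static void dump(const char *s) {
    for (const unsigned char *p = (const unsigned char *)s; *p; ++p)
        printf("%02X ", *p);
    printf("\n");
}

int main(void) {
    dump("hello"); /* 68 65 6C 6C 6F on an ASCII-compatible system */
    return 0;
}
```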
A: 

As others have indicated already, C places some restrictions on what is permitted for the source and execution character encodings, but it is relatively permissive. So in particular it is not necessarily ASCII, though in most cases nowadays it is at least an extension of it.

Your execution environment is meant to do an eventual translation between the source and execution character sets. So generally you should not care about the encoding and should, on the contrary, try to code independently of it. This is why there are special escape sequences for special characters like '\n' or '\t', and universal character names like '\u0386'. So you usually shouldn't have to look up the encoding of the execution character set yourself.

Jens Gustedt
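
A short sketch of coding against the execution character set only through escapes and universal character names (assuming a C99 compiler; rendering '\u0386' correctly also assumes the execution encoding, e.g. UTF-8, can represent it):

```c
#include <stdio.h>

int main(void) {
    /* '\t' and '\n' map to whatever the execution character set uses
       for tab and newline; the source never names a concrete byte. */
    printf("col1\tcol2\n");

    /* A universal character name: U+0386, GREEK CAPITAL LETTER ALPHA
       WITH TONOS. The compiler translates it into the execution
       encoding for us. */
    printf("\u0386\n");
    return 0;
}
```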