I know that C strings are char[] with a '\0' in the last element. But how are the chars encoded?
Update: I found this cool link which talks about many other programming languages and their encoding conventions: Link
The standard does not specify this. In practice it is typically ASCII.
They are not really "encoded" as such; they are simply stored as-is. The string "hello" represents an array with the char values 'h', 'e', 'l', 'l', 'o', and '\0', in that order. The C standard has a basic character set that includes these characters, but doesn't specify an encoding into bytes. It could be EBCDIC, for all you know.
All the standard says on the matter is that you get at least the 52 upper- and lower-case Latin alphabet characters, the digits 0 to 9, the symbols ! " # % & ' ( ) * + , - . / : ; < = > ? [ \ ] ^ _ { | } ~, the space character, and control characters representing horizontal tab, vertical tab, and form feed.
The only thing it says about numeric encoding is that all of the above fit in one byte, and that the value of each digit after zero is one greater than the value of the previous one.
The actual encoding is probably inherited from your locale settings. Probably something ASCII-compatible.
A C string is pretty much just a sequence of bytes. That means it does not really have any encoding; it could be ASCII, UTF-8, or anything else, for that matter. Because most operating systems understand ASCII by default, and source code is mostly written with ASCII encoding, the data you will find in a simple (char*) will very often be ASCII as well. Nonetheless, there is no guarantee that what you get out of a (char*) will be UTF-8 or even KOI8.
As others have indicated already, C places some restrictions on what is permitted for source and execution character encodings, but is relatively permissive. So in particular it is not necessarily ASCII, though nowadays it is in most cases at least an extension of it.
Your execution environment is meant to do an eventual translation between source and execution character set.
So generally you should not care about the encoding and, on the contrary, should try to code independently of it. This is why there are special escape sequences for special characters like '\n' or '\t', and universal character names like '\u0386'. So usually you shouldn't have to look up the encodings for the execution character set yourself.