It's a little hard to tell where to start here, since there are a lot of assumptions in play.
In C as we know and love it, there is a 'char' datatype. In all commonly-used implementations, that datatype holds an 8-bit byte.
In the language itself, as opposed to any library functions you call, these things are just two's-complement integers. They have no 'character' semantics whatsoever.
As soon as you start calling functions from the standard library with 'str' or 'is' in their names (e.g. strcmp, isalnum), you are dealing with character semantics.
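To see that split concretely, here's a minimal sketch in plain standard C; nothing here is specific to any encoding beyond the basic characters:

```c
#include <ctype.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* In the language itself, 'a' is just an integer value. */
    char c = 'a';
    printf("'a' as an integer: %d\n", (int)c);

    /* Character semantics only appear once the library gets involved:
       isalnum() classifies the value, strcmp() compares the strings
       byte by byte. */
    printf("isalnum('a') -> %d\n", isalnum((unsigned char)c));
    printf("strcmp(\"abc\", \"abd\") -> %d\n", strcmp("abc", "abd"));
    return 0;
}
```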
C programs need to cope with the giant mess made of character semantics before the invention of Unicode. Various organizations invented a very large number of encoding standards. Some use one byte per character. Some use multiple bytes per character. In some, it's always safe to ask if (charvalue == 'a'). In others, that can get the wrong answer because the byte you're looking at is part of a multi-byte sequence.
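To illustrate that pitfall: the bytes below are made up to imitate a Shift-JIS-style double-byte encoding, in which the trailing byte of a two-byte character can fall in the ASCII range. A naive byte-by-byte scan then 'finds' an 'a' that is really just half of another character:

```c
#include <stdio.h>

int main(void)
{
    /* Hypothetical double-byte-encoded text: the pair {0x8A, 0x61} is meant
       to be one two-byte character, but its trail byte 0x61 happens to have
       the same value as ASCII 'a'. */
    unsigned char text[] = { 0x8A, 0x61, 0x62, 0x00 };  /* two-byte char, then 'b' */

    for (const unsigned char *p = text; *p != 0; p++) {
        if (*p == 'a')
            printf("naive scan claims there is an 'a' at offset %td\n", p - text);
    }
    return 0;
}
```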
In just about every modern environment, the semantics of the standard library are determined by the locale setting.
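A quick way to see the locale's influence (the Latin-1 locale name here is an assumption; it may be spelled differently, or be missing, on your system):

```c
#include <ctype.h>
#include <locale.h>
#include <stdio.h>

int main(void)
{
    unsigned char e_acute = 0xE9;   /* e-acute in ISO-8859-1 */

    /* In the default "C" locale this byte is not a letter... */
    printf("C locale: isalpha(0xE9) = %d\n", isalpha(e_acute));

    /* ...but in a Latin-1 locale it is. The locale name below is an
       assumption; if it isn't installed, setlocale() returns NULL. */
    if (setlocale(LC_CTYPE, "en_US.ISO-8859-1") != NULL)
        printf("Latin-1 locale: isalpha(0xE9) = %d\n", isalpha(e_acute));

    return 0;
}
```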
Where does UTF-8 come in? Quite some time ago, the Unicode Consortium was founded to try to bring order out of all this chaos. Unicode defines a character value (in a 32-bit character space) for many, many, many characters. The intent is to cover all the characters of practical use.
If you want your code to work in English, and Arabic, and Chinese, and Sumerian Cuneiform, you want Unicode character semantics, not code that is ducking and weaving among a pile of different character encodings.
Conceptually, the easiest way to do this would be to use 32-bit characters (UTF-32), and thus you'd have one item per logical character. Most people have decided that this is impractical. Note that, in modern versions of gcc, the data type wchar_t holds 32-bit characters --- but Microsoft Visual Studio does not agree, defining that data type to hold 16-bit values (UTF-16 or UCS-2, depending on your point of view).
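If you're curious which camp your toolchain is in, a couple of lines of standard C will tell you:

```c
#include <stdio.h>
#include <wchar.h>

int main(void)
{
    /* Typically prints 4 with gcc/glibc on Linux and 2 with MSVC. */
    printf("sizeof(wchar_t) = %zu bytes\n", sizeof(wchar_t));
    return 0;
}
```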
Most non-Windows C programs are much too invested in 8-bit characters to change. And so, the Unicode standard includes UTF-8, a representation of Unicode text as a sequence of 8-bit bytes. In UTF-8, each logical character is between 1 and 4 bytes in length. The basic ISO-646 ('ascii') characters 'play themselves', so simple operations on simple characters work as expected.
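To make the '1 to 4 bytes' claim concrete, here is a small sketch; the samples are written with \x escapes so the result doesn't depend on your source file's encoding:

```c
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* The same kind of logical character takes a different number of
       bytes in UTF-8 depending on its code point. */
    const char *samples[] = {
        "a",                  /* U+0061, 1 byte */
        "\xC3\xA9",           /* U+00E9 (e-acute), 2 bytes */
        "\xE4\xB8\xAD",       /* U+4E2D (a CJK ideograph), 3 bytes */
        "\xF0\x90\x8D\x88"    /* U+10348 (Gothic hwair), 4 bytes */
    };

    for (size_t i = 0; i < sizeof samples / sizeof samples[0]; i++)
        printf("sample %zu is %zu byte(s) long\n", i, strlen(samples[i]));

    return 0;
}
```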
If your environment includes locales for UTF-8, then you can set the locale to a UTF-8 locale, and all the standard lib functions will just work. If your environment does not include locales for UTF-8, you'll need an add-on, like ICU or ICONV.
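As a sketch of the 'just work' case: the locale name "en_US.UTF-8" is an assumption; your system may spell it differently, or you may prefer setlocale(LC_ALL, "") to inherit the environment's locale.

```c
#include <locale.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    /* "en_US.UTF-8" is an assumption; the exact name varies by system. */
    if (setlocale(LC_ALL, "en_US.UTF-8") == NULL) {
        fprintf(stderr, "UTF-8 locale not available\n");
        return 1;
    }

    /* 5 logical characters, 6 bytes in UTF-8 ("cafes" with an e-acute). */
    const char *s = "caf\xC3\xA9s";

    /* With a UTF-8 locale selected, the multibyte-aware standard functions
       interpret the bytes as UTF-8: mbstowcs() counts characters, not bytes. */
    size_t nchars = mbstowcs(NULL, s, 0);
    printf("bytes: %zu, characters: %zu\n", strlen(s), nchars);
    return 0;
}
```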
This whole discussion has stuck, so far, to data sitting in variables in memory. You also have to deal with reading and writing it. If you call open(2) or the Windows moral equivalent, you'll get the raw bytes from the file. If those are not in UTF-8, you'll have to convert them if you want to work in UTF-8.
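For example, on a POSIX system (the filename is a placeholder), read(2) hands you the file's bytes untouched:

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* "input.txt" is a placeholder. No encoding conversion happens here. */
    int fd = open("input.txt", O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    unsigned char buf[4096];
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0) {
        /* buf[0..n-1] is raw data; if the file is Big5 or Latin-1,
           it is still Big5 or Latin-1 at this point. */
        fwrite(buf, 1, (size_t)n, stdout);
    }
    close(fd);
    return 0;
}
```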
If you call fopen(3), then the standard library may try to do you a favor and convert between its idea of the default encoding of files and its idea of what you want in memory. If you need, for example, to run a program on a system in a Greek locale and read in a file of Chinese in Big5, you'll need to be careful with the options you pass to fopen, or you'll perhaps want to avoid it. And you'll need ICONV or ICU to convert to and from UTF-8.
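Here is a hedged sketch of the iconv route on a POSIX system. The encoding names and the Big5 bytes are illustrative; some platforms also require linking with -liconv, and the exact iconv() prototype differs slightly between implementations:

```c
#include <iconv.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Encoding names are implementation-defined; "BIG5" and "UTF-8" are
       common spellings but not guaranteed everywhere. */
    iconv_t cd = iconv_open("UTF-8", "BIG5");
    if (cd == (iconv_t)-1) {
        perror("iconv_open");
        return 1;
    }

    char in[] = "\xA4\xA4\xA4\xE5";   /* a couple of Big5 double-byte characters (illustrative) */
    char out[64];
    char *inp = in, *outp = out;
    size_t inleft = strlen(in), outleft = sizeof out;

    if (iconv(cd, &inp, &inleft, &outp, &outleft) == (size_t)-1)
        perror("iconv");
    else
        printf("converted to %zu UTF-8 bytes\n", sizeof out - outleft);

    iconv_close(cd);
    return 0;
}
```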
Your question mentions 'input strings.' Those could be a number of things. In a UTF-8 locale, argv will be UTF-8, and file descriptor 0 will be UTF-8. If the shell is not running in a UTF-8 locale, and you call setlocale to select a UTF-8 locale, you will not necessarily get valid UTF-8 in argv. If you connect the contents of a file to a file descriptor, you will get whatever is in the file, in whatever encoding it happens to be in.
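One way to find out what you're actually getting is to ask the locale itself; nl_langinfo(CODESET) is POSIX, and the exact codeset spelling ("UTF-8" vs. "utf8") can vary by system:

```c
#include <langinfo.h>
#include <locale.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Pick up whatever locale the program was started in. */
    setlocale(LC_ALL, "");

    /* nl_langinfo(CODESET) names the locale's encoding, which is what
       argv and file descriptor 0 are presumed to be encoded in. */
    const char *codeset = nl_langinfo(CODESET);
    printf("locale codeset: %s\n", codeset);

    if (strcmp(codeset, "UTF-8") != 0)
        printf("argv and stdin are probably not UTF-8 in this environment\n");

    return 0;
}
```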