As others have said, # coding: specifies the encoding the source file is saved in. Here are some examples to illustrate this:
A file saved on disk as cp437 (my console encoding), but no encoding declared
b = 'über'
u = u'über'
print b,repr(b)
print u,repr(u)
Output:
File "C:\ex.py", line 1
SyntaxError: Non-ASCII character '\x81' in file C:\ex.py on line 1, but no
encoding declared; see http://www.python.org/peps/pep-0263.html for details
Output of the file with # coding: cp437 added:
über '\x81ber'
über u'\xfcber'
At first, Python didn't know the encoding and complained about the non-ASCII character. Once it knew the encoding, the byte string got the bytes that were actually on disk. For the Unicode string, Python read \x81, knew that in cp437 it represents ü, and decoded it to the Unicode code point for ü, which is U+00FC. When the byte string was printed, Python sent the hex value 81 to the console directly. When the Unicode string was printed, Python correctly detected my console encoding as cp437 and translated the Unicode ü to the cp437 value for ü.
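You can roughly mimic those decode/encode steps by hand; a minimal Python 2 sketch, assuming the cp437 codec is available (it ships with the standard library):
source_byte = '\x81'                 # the byte stored on disk for u-umlaut in cp437
u = source_byte.decode('cp437')      # what the # coding: cp437 declaration does
print repr(u)                        # u'\xfc', i.e. U+00FC
console_bytes = u.encode('cp437')    # roughly what print does for a cp437 console
print repr(console_bytes)            # '\x81'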
Here's what happens with a file declared and saved in UTF-8:
├╝ber '\xc3\xbcber'
über u'\xfcber'
In UTF-8, ü is encoded as the two hex bytes C3 BC, so the byte string contains those bytes, but the Unicode string is identical to the first example. Python read the two bytes and decoded them correctly. The byte string printed incorrectly, because Python sent the two UTF-8 bytes representing ü directly to my cp437 console.
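A sketch of the same mismatch, again assuming a Python 2 interpreter and a cp437 console:
b = u'\xfc'.encode('utf-8')          # the bytes on disk: '\xc3\xbc'
u = b.decode('utf-8')                # what the # coding: utf-8 declaration produces: u'\xfc'
print repr(b)                        # '\xc3\xbc' - printed raw, the console shows two cp437 characters
print repr(u)                        # u'\xfc'
print u.encode('cp437') == '\x81'    # True: print re-encodes Unicode for the console, so it displays correctly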
Here the file is declared cp437, but saved in UTF-8:
├╝ber '\xc3\xbcber'
├╝ber u'\u251c\u255dber'
The byte string still got the bytes on disk (UTF-8 hex bytes C3 BC
), but interpreted them as two cp437 characters instead of a single UTF-8-encoded character. Those two characters where translated to Unicode code points, and everything prints incorrectly.
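That wrong-codec decode can be reproduced directly; a minimal Python 2 sketch:
utf8_bytes = '\xc3\xbc'              # u-umlaut encoded as UTF-8
wrong = utf8_bytes.decode('cp437')   # each byte becomes its own cp437 character
print repr(wrong)                    # u'\u251c\u255d', which displays as the two box-drawing characters above
right = utf8_bytes.decode('utf-8')   # the codec that actually matches the file
print repr(right)                    # u'\xfc'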