I'm still learning Python and I have a question:

In python 2.6.x I usually declare encoding in the file header like this (as in PEP 0263)

# -*- coding: utf-8 -*-

After that, my strings are written as usual:

a = "A normal string without declared Unicode"

But every time I look at Python project code, the encoding is not declared in the header. Instead, it is declared on every string, like this:

a = u"A string with declared Unicode"

What's the difference? What's the purpose of this? I know Python 2.6.x uses ASCII encoding by default, but that can be overridden by the header declaration, so what's the point of the per-string declaration?

Addendum: Seems that I've mixed up file encoding with string encoding. Thanks for explaining it :)

+7  A: 

That doesn't set the format of the string; it sets the format of the file. Even with that header, "hello" is a byte string, not a Unicode string. To make it Unicode, you're going to have to use u"hello" everywhere. The header is just a hint of what format to use when reading the .py file.
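In Python 3 terms, where the two kinds of string are separate types, the distinction might be sketched like this (the coding header below is just a comment to the interpreter about the file, not about the strings):

```python
# -*- coding: utf-8 -*-
# The header above only tells the interpreter how to decode this .py file.
# It does not change the type of any string literal.

s = b"hello"    # a byte string: a sequence of raw bytes
t = u"hello"    # a Unicode string: a sequence of code points

# In Python 3 these are distinct types (bytes vs. str);
# in Python 2 they were str vs. unicode.
print(type(s))  # <class 'bytes'>
print(type(t))  # <class 'str'>

# Converting between them requires an explicit encoding:
assert t.encode("utf-8") == s
assert s.decode("utf-8") == t
```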

icktoofay
I was mistaken then, I thought they were the same. So the use for unicode strings is i18n?
Oscar Carballal
@Oscar: Yes, for the most part. If you were making a website with Django or something and it had to handle people with non-ASCII characters, then that's another possible use.
icktoofay
+3  A: 

The header definition is to define the encoding of the code itself, not the resulting strings at runtime.

Putting a non-ASCII character like ۲ in a Python script without the utf-8 header declaration will raise a SyntaxError.

ebt
Wrong error, but yes.
Ignacio Vazquez-Abrams
oops, corrected thanks
ebt
+8  A: 

Those are two different things, as others have mentioned.

When you specify # -*- coding: utf-8 -*-, you're telling Python that the source file you've saved is UTF-8. The default for Python 2 is ASCII (for Python 3 it's UTF-8). This only affects how the interpreter reads the characters in the file.

In general, it's probably not the best idea to embed high Unicode characters directly in your file, no matter what the encoding is; you can use Unicode string escapes, which work in either encoding.


When you declare a string with a u in front, like u'This is a string', it tells the Python compiler that the string is Unicode, not bytes. This is handled mostly transparently by the interpreter; the most obvious difference is that you can now embed unicode characters in the string (that is, u'\u2665' is now legal). You can use from __future__ import unicode_literals to make it the default.
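A quick sketch of both points (the u'' prefix is also legal again in Python 3.3+, and the __future__ import is a no-op there, so this runs on either version given a utf-8 coding header on Python 2):

```python
from __future__ import unicode_literals  # makes bare '...' literals Unicode in Python 2

# A Unicode escape: only meaningful inside a Unicode string in Python 2.
heart = u'\u2665'
print(heart)      # ♥

# With unicode_literals in effect, a bare literal is Unicode too,
# so the escape compares equal to the escaped character:
assert heart == '\u2665'
assert len(heart) == 1   # one code point, not several bytes
```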

This only applies to Python 2; in Python 3 the default is Unicode, and you specify a b in front (like b'These are bytes') to declare a sequence of bytes.
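The Python 3 defaults can be sketched as follows; note that, unlike Python 2, Python 3 refuses to silently mix the two types:

```python
text = 'These are characters'   # default in Python 3: a Unicode str
data = b'These are bytes'       # explicit bytes literal

assert isinstance(text, str)
assert isinstance(data, bytes)

# Mixing str and bytes raises TypeError in Python 3,
# instead of the implicit ASCII coercion Python 2 attempted:
try:
    text + data
except TypeError:
    print('cannot concatenate str and bytes')
```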

Chris B.
Thanks for the explanation! I'll set this as accepted since it's the most complete one :)
Oscar Carballal
The default source encoding for Python 2 is **ascii**.
Mark Tolonen
It's actually a great idea to embed high unicode characters into your file. I doubt non-English speakers want to read unicode escapes in their strings.
Mark Tolonen
@Mark: Thanks for the ASCII correction; I quickly skimmed the PEP (http://www.python.org/dev/peps/pep-0263/) and it talks about Latin-1 in the preamble. I don't think it's a great idea to embed high Unicode characters in your file in most cases. Certainly, if you're coding a lot of non-English strings in your source file it can make things easier, but you generally do that for display to the user, and you should probably define those strings in a separate place anyway. And a single misconfigured text editor can corrupt all those characters.
Chris B.
@Chris, agreed if you are programming an i18nalized app, but consider if you are a Chinese or French programmer. It's not just the strings, but the comments as well. It's great that Python is flexible with source encodings. Python 3 can even have non-ASCII characters in variable names.
Mark Tolonen
+1  A: 

As others have said, # coding: specifies the encoding the source file is saved in. Here are some examples to illustrate this:

A file saved on disk as cp437 (my console encoding), but no encoding declared

b = 'über'
u = u'über'
print b,repr(b)
print u,repr(u)

Output:

  File "C:\ex.py", line 1
SyntaxError: Non-ASCII character '\x81' in file C:\ex.py on line 1, but no
encoding declared; see http://www.python.org/peps/pep-0263.html for details

Output of file with # coding: cp437 added:

über '\x81ber'
über u'\xfcber'

At first, Python didn't know the encoding and complained about the non-ASCII character. Once it knew the encoding, the byte string got the bytes that were actually on disk. For the Unicode string, Python read \x81, knew that in cp437 that was a ü, and decoded it into the Unicode codepoint for ü which is U+00FC. When the byte string was printed, Python sent the hex value 81 to the console directly. When the Unicode string was printed, Python correctly detected my console encoding as cp437 and translated Unicode ü to the cp437 value for ü.

Here's what happens with a file declared and saved in UTF-8:

├╝ber '\xc3\xbcber'
über u'\xfcber'

In UTF-8, ü is encoded as the hex bytes C3 BC, so the byte string contains those bytes, but the Unicode string is identical to the first example. Python read the two bytes and decoded them correctly. Python printed the byte string incorrectly, because it sent the two UTF-8 bytes representing ü directly to my cp437 console.
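The byte values in these examples can be verified with explicit encodes and decodes (Python 3 syntax; cp437 and utf-8 are both standard library codecs):

```python
s = u'über'

# In cp437, ü is the single byte 0x81:
assert s.encode('cp437') == b'\x81ber'

# In UTF-8, ü is the two bytes C3 BC:
assert s.encode('utf-8') == b'\xc3\xbcber'

# Decoding with the matching codec recovers the same Unicode string either way:
assert b'\x81ber'.decode('cp437') == s
assert b'\xc3\xbcber'.decode('utf-8') == s
```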

Here the file is declared cp437, but saved in UTF-8:

├╝ber '\xc3\xbcber'
├╝ber u'\u251c\u255dber'

The byte string still got the bytes on disk (UTF-8 hex bytes C3 BC), but interpreted them as two cp437 characters instead of a single UTF-8-encoded character. Those two characters were translated to Unicode code points, and everything prints incorrectly.
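This mismatch (mojibake) can be reproduced directly by decoding UTF-8 bytes with the wrong codec, just as the misdeclared file does:

```python
# The UTF-8 bytes for 'über'...
raw = u'über'.encode('utf-8')        # b'\xc3\xbcber'

# ...decoded with the wrong codec, cp437, as in the mismatched example above.
# cp437 maps 0xC3 to U+251C (├) and 0xBC to U+255D (╝):
wrong = raw.decode('cp437')
assert wrong == u'\u251c\u255dber'
print(wrong)                         # ├╝ber
```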

Mark Tolonen
+1 for the examples :)
Oscar Carballal