I start by creating a string variable containing some non-ASCII, UTF-8-encoded data:

>>> text = 'á'
>>> text
'\xc3\xa1'
>>> text.decode('utf-8')
u'\xe1'

Calling unicode() on it raises an error...

>>> unicode(text)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 0: 
                    ordinal not in range(128)

...but if I know the encoding, I can pass it as the second parameter:

>>> unicode(text, 'utf-8')
u'\xe1'
>>> unicode(text, 'utf-8') == text.decode('utf-8')
True

Now if I have a class that returns this text in the __str__() method:

>>> class ReturnsEncoded(object):
...     def __str__(self):
...         return text
... 
>>> r = ReturnsEncoded()
>>> str(r)
'\xc3\xa1'

unicode(r) seems to use str() on it, since it raises the same error as unicode(text) above:

>>> unicode(r)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 0: 
                    ordinal not in range(128)

So far, everything has gone as expected!

But, surprisingly, unicode(r, 'utf-8') won't even try:

>>> unicode(r, 'utf-8')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: coercing to Unicode: need string or buffer, ReturnsEncoded found

Why this inconsistent behavior? Is it a bug, or is it intended? It seems very awkward.

+3  A: 

unicode() does not guess the encoding of your text. If your object can represent itself as Unicode, define a __unicode__() method that returns a Unicode string.
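For example, here is a minimal sketch of that fix, reusing text from the question (it assumes, as the question does, that the stored bytes are UTF-8):

>>> class ReturnsEncoded(object):
...     def __str__(self):
...         return text                   # the UTF-8 bytes '\xc3\xa1'
...     def __unicode__(self):
...         return text.decode('utf-8')   # we know the encoding, so decode here
... 
>>> unicode(ReturnsEncoded())
u'\xe1'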


The secret is that unicode(r) does not actually call __str__() itself. Instead, it looks for a __unicode__() method. When none is defined, unicode() falls back to calling __str__() and then tries to decode the result using the ascii codec. When you pass an encoding, unicode() expects the first argument to be something that can be decoded -- that is, an instance of basestring.
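In rough outline (a simplified sketch, not the real C implementation), the no-encoding path behaves something like:

>>> def unicode_noarg(obj):
...     if isinstance(obj, unicode):
...         return obj
...     if hasattr(obj, '__unicode__'):
...         return obj.__unicode__()
...     # no __unicode__: fall back to str() and the default (ascii) codec,
...     # which is what raises the UnicodeDecodeError in the question
...     return str(obj).decode('ascii')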


(Replying to nosklo's comment below: the behavior seems weird because unicode(r) tries to decode as ascii, but unicode(r, 'utf-8') gives a different error entirely.)

That's because when you specify "utf-8", it treats the first parameter as a string-like object to be decoded. Without it, it treats the parameter as an object to be coerced to unicode.
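For illustration, here is a sketch of the same split, with a float standing in for any non-string object:

>>> unicode('\xc3\xa1', 'utf-8')   # encoding given: argument must be decodable
u'\xe1'
>>> unicode(3.14)                  # no encoding: any object is coerced via str()
u'3.14'
>>> unicode(3.14, 'utf-8')         # encoding given, but a float is not decodable
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: coercing to Unicode: need string or buffer, float found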

I do not understand the confusion. If you know that the object's text attribute will always be UTF-8 encoded, just define __unicode__() and then everything will work fine.

John Millikin

I think I may not have made myself clear. I know that. What I want to know is WHY unicode(r) behaves differently from unicode(r, 'utf-8').
nosklo

Behavior is weird because it tries to decode as ascii if I don't pass 'utf-8'. But if I pass 'utf-8' it gives a different error...
nosklo
+7  A: 

The behaviour does seem confusing, but it is intentional. I reproduce here the entirety of the unicode() documentation from the Python Built-in Functions documentation (for version 2.5.2, as I write this):

unicode([object[, encoding[, errors]]])

Return the Unicode string version of object using one of the following modes:

If encoding and/or errors are given, unicode() will decode the object which can either be an 8-bit string or a character buffer using the codec for encoding. The encoding parameter is a string giving the name of an encoding; if the encoding is not known, LookupError is raised. Error handling is done according to errors; this specifies the treatment of characters which are invalid in the input encoding. If errors is 'strict' (the default), a ValueError is raised on errors, while a value of 'ignore' causes errors to be silently ignored, and a value of 'replace' causes the official Unicode replacement character, U+FFFD, to be used to replace input characters which cannot be decoded. See also the codecs module.

If no optional parameters are given, unicode() will mimic the behaviour of str() except that it returns Unicode strings instead of 8-bit strings. More precisely, if object is a Unicode string or subclass it will return that Unicode string without any additional decoding applied.

For objects which provide a __unicode__() method, it will call this method without arguments to create a Unicode string. For all other objects, the 8-bit string version or representation is requested and then converted to a Unicode string using the codec for the default encoding in 'strict' mode.

New in version 2.0. Changed in version 2.2: Support for __unicode__() added.

So, when you call unicode(r, 'utf-8'), it requires an 8-bit string or a character buffer as the first argument; since your ReturnsEncoded instance is neither, it raises the TypeError rather than first coercing the object with __str__(). Without the 'utf-8' argument, the unicode() function looks for a __unicode__() method on your object and, not finding one, calls the __str__() method, as you suggested, attempting to decode the result using the default codec.
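One practical consequence (a small sketch, for when adding a __unicode__() method is not an option): you can satisfy the decoding form by extracting the 8-bit string yourself first:

>>> unicode(str(r), 'utf-8')
u'\xe1'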

Blair Conrad