I have a unicode string like "Tanım" which is encoded as "Tan%u0131m" somehow. How can I convert this encoded string back to the original unicode? Apparently urllib.unquote does not support unicode.
import re

def unquote(text):
    def unicode_unquoter(match):
        # Turn the four hex digits after %u into the corresponding character
        return unichr(int(match.group(1), 16))
    return re.sub(r'%u([0-9a-fA-F]{4})', unicode_unquoter, text)
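A quick sanity check in a Python 2 shell should look something like this:
>>> unquote('Tan%u0131m')
u'Tan\u0131m'
Note that this only rewrites the %uXXXX escapes; any ordinary %XX escapes in the string would still need urllib.unquote afterwards.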
%uXXXX is a non-standard encoding scheme that has been rejected by the W3C, although an implementation continues to live on in JavaScript land.
The more common technique seems to be to UTF-8 encode the string and then %-escape the resulting bytes using %XX. This scheme is supported by urllib.unquote:
>>> urllib.unquote("%0a")
'\n'
Unfortunately, if you really need to support %uXXXX, you will probably have to roll your own decoder. Otherwise, it is probably far preferable to simply UTF-8 encode your unicode and then %-escape the resulting bytes.
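Note that plain urllib.unquote simply passes a %uXXXX sequence through untouched, since it only knows about two-hex-digit escapes; a quick check in a Python 2 shell should show something like:
>>> import urllib
>>> urllib.unquote('Tan%u0131m')
'Tan%u0131m'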
A more complete example:
>>> u"Tanım"
u'Tan\u0131m'
>>> url = urllib.quote(u"Tanım".encode('utf8'))
>>> urllib.unquote(url).decode('utf8')
u'Tan\u0131m'
This will do it if you absolutely have to have this (I really do agree with the cries of "non-standard"):
from urllib import unquote

def unquote_u(source):
    result = unquote(source)
    if '%u' in result:
        # Rewrite %uXXXX as \uXXXX and let the unicode_escape codec decode it
        result = result.replace('%u', '\\u').decode('unicode_escape')
    return result

print unquote_u('Tan%u0131m')
> Tanım
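It also copes with input that mixes ordinary %XX escapes with %uXXXX ones, as long as the %XX bytes are plain ASCII; a quick test with a made-up string should give something like:
>>> unquote_u('Tan%u0131m%20test')
u'Tan\u0131m test'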
There is a bug in the above version: it sometimes breaks when the string contains both %XX-escaped and %u-escaped characters. I think it happens specifically when there are bytes from the upper 128 range, like '\xab', in addition to the unicode escapes.
For example, "%5B%AB%u03E1%BB%5D" causes this error.
I found that if you just decode the %u escapes first, the problem goes away:
def unquote_u(source):
    result = source
    if '%u' in result:
        # Decode the %uXXXX escapes first, before the standard %XX ones
        result = result.replace('%u', '\\u').decode('unicode_escape')
    result = unquote(result)
    return result
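With the %u escapes decoded first, the problematic string from above should come out cleanly; on Python 2.7 I get something like:
>>> unquote_u('%5B%AB%u03E1%BB%5D')
u'[\xab\u03e1\xbb]'
(urllib.unquote falls back to unichr() for non-ASCII %XX escapes when it is handed a unicode string, so %AB comes out as u'\xab' rather than raising a UnicodeDecodeError.)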