>>> teststring = 'aõ'
>>> type(teststring)
<type 'str'>
>>> teststring
'a\xf5'
>>> print teststring
aõ
>>> teststring.decode("ascii", "ignore")
u'a'
>>> teststring.decode("ascii", "ignore").encode("ascii")
'a'
which is what I really wanted it to store internally, since I'm removing non-ASCII characters. Why did decode("ascii", "ignore") give back a unicode string?
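For what it's worth, here is a minimal sketch of the end result I'm after (strip_non_ascii is just a name I made up for this post; Python 2.7):

>>> def strip_non_ascii(byte_string):
...     # str.decode() always returns a unicode object in Python 2,
...     # so encode straight back to get a plain str
...     return byte_string.decode("ascii", "ignore").encode("ascii")
...
>>> strip_non_ascii('a\xf5')
'a'
>>> type(strip_non_ascii('a\xf5'))
<type 'str'>

Then I tried the same thing starting from a unicode object: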
>>> teststringUni = u'aõ'
>>> type(teststringUni)
<type 'unicode'>
>>> print teststringUni
aõ
>>> teststringUni.decode("ascii" , "ignore")
Traceback (most recent call last):
  File "<pyshell#79>", line 1, in <module>
    teststringUni.decode("ascii" , "ignore")
UnicodeEncodeError: 'ascii' codec can't encode character u'\xf5' in position 1: ordinal not in range(128)
>>> teststringUni.decode("utf-8" , "ignore")
Traceback (most recent call last):
  File "<pyshell#81>", line 1, in <module>
    teststringUni.decode("utf-8" , "ignore")
  File "C:\Python27\lib\encodings\utf_8.py", line 16, in decode
    return codecs.utf_8_decode(input, errors, True)
UnicodeEncodeError: 'ascii' codec can't encode character u'\xf5' in position 1: ordinal not in range(128)
>>> teststringUni.encode("ascii" , "ignore")
'a'
Which is again what I wanted. I don't understand this behavior. Can someone explain what is happening here?
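My current guess (please correct me if this is wrong) is that calling .decode() on a unicode object makes Python 2 first convert it back to a str using the default encoding, which is ASCII, and that implicit step is what raises the UnicodeEncodeError; .encode("ascii", "ignore") goes straight from text to bytes, which is why it works. Roughly:

>>> import sys
>>> sys.getdefaultencoding()   # the codec I think is used for the implicit conversion
'ascii'
>>> teststringUni.encode("ascii")   # what I suspect .decode() effectively tries first
Traceback (most recent call last):
  ...
UnicodeEncodeError: 'ascii' codec can't encode character u'\xf5' in position 1: ordinal not in range(128)
>>> teststringUni.encode("ascii", "ignore")   # text -> bytes directly, which is what I need
'a'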
Edit: I thought working through this would help me understand things so I could solve the real problem in my program, which I describe here: http://stackoverflow.com/questions/3669436/converting-unicode-objects-with-non-ascii-symbols-in-them-into-strings-objects-in