Python 3.0: tokenize & BytesIO
When attempting to tokenize a string in Python 3.0, why do I get a leading 'utf-8' before the tokens start?

From the Python 3 docs, tokenize should now be used as follows:

    g = tokenize(BytesIO(s.encode('utf-8')).readline)

However, when attempting this at the terminal, the following happens:

    >>> from tokenize import tokenize
    >>> from i...
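For what it's worth, the behavior described above can be reproduced with a short, self-contained sketch (shown here on a current Python 3; the exact token representation differs slightly from 3.0, but the leading 'utf-8' appears the same way). The first item the generator yields is an ENCODING token, whose string is the encoding tokenize detected, before any real tokens of the source:

```python
import tokenize
from io import BytesIO

source = "x = 1"
# tokenize() consumes bytes via a readline callable, as the docs describe.
tokens = list(tokenize.tokenize(BytesIO(source.encode('utf-8')).readline))

# The very first token reports the detected source encoding; only the
# tokens after it correspond to the actual code.
print(tokens[0])
assert tokens[0][0] == tokenize.ENCODING
assert tokens[0][1] == 'utf-8'
```

Tokens can be compared against `tokenize.ENCODING` (and skipped, if desired) by checking the token type field.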