According to the documentation for open(), you should add a U to the mode:

open('textbase.txt', 'Ur')

This enables "universal newlines" mode, which normalizes \r\n and \r to \n in the strings it gives you.
However, the correct thing to do is to decode the UTF-16BE into Unicode objects first, and only then translate the newlines. Otherwise a stray 0x0d byte could get erroneously turned into a 0x0a, resulting in:

UnicodeDecodeError: 'utf16' codec can't decode byte 0x0a in position 12: truncated data
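A minimal sketch of that failure mode (the character U+0D0A is made-up sample data, not from the file in question): its UTF-16BE encoding happens to be the bytes 0x0D 0x0A, exactly a CR LF pair, so byte-level newline translation collapses it and leaves the stream with an odd number of bytes.

```python
# Sketch: byte-level newline translation corrupting UTF-16BE data.
# U+0D0A encodes in UTF-16BE as 0x0D 0x0A -- byte-for-byte a CR LF pair.
raw = u'\u0d0a'.encode('utf-16-be')    # b'\x0d\x0a'
mangled = raw.replace(b'\r\n', b'\n')  # what byte-level newline translation does
try:
    mangled.decode('utf-16-be')
except UnicodeDecodeError as exc:
    print(exc)  # truncated data: the stream is now a single byte
```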
Python's codecs module supplies an open function that can decode Unicode and handle newlines at the same time:
import codecs
for line in codecs.open('textbase.txt', 'Ur', 'utf-16be'):
...
If the file has a byte order mark (BOM) and you specify 'utf-16', the codec detects the endianness and hides the BOM for you. If there is no BOM (it's optional), the decoder just falls back to your system's byte order, which probably isn't what you want.
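The difference is easy to see on a few inline sample bytes (made up here for illustration): the 'utf-16' codec consumes a leading BOM, while 'utf-16-be' leaves it in the decoded text.

```python
data = b'\xfe\xff\x00h\x00i'  # UTF-16BE BOM followed by the text "hi"
print(data.decode('utf-16'))     # BOM read as a byte-order marker and stripped
print(data.decode('utf-16-be'))  # a leading U+FEFF survives as a character
```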
Specifying the endianness yourself (with 'utf-16be') will not hide the BOM, so you might wish to use this hack:
import codecs

firstline = True
for line in codecs.open('textbase.txt', 'Ur', 'utf-16be'):
    if firstline:
        firstline = False
        line = line.lstrip(u'\ufeff')
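Put together, a self-contained sketch of the hack (the file contents are invented for illustration; plain 'r' is used instead of 'Ur' because the file here only contains \n newlines, and the 'U' flag was removed in Python 3.11):

```python
import codecs

# Create a sample UTF-16BE file with a BOM (hypothetical data).
with open('textbase.txt', 'wb') as f:
    f.write(u'\ufefffirst line\nsecond line\n'.encode('utf-16-be'))

lines = []
firstline = True
for line in codecs.open('textbase.txt', 'r', 'utf-16be'):
    if firstline:
        firstline = False
        line = line.lstrip(u'\ufeff')  # strip the BOM the codec left in place
    lines.append(line)
print(lines)
```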
See also: Python Unicode HOWTO