I've got a Python program that stores and writes data to a file. The data is raw binary data, stored internally as str. I'm writing it out through a utf-8 codec. However, I get UnicodeDecodeError: 'charmap' codec can't decode byte 0x8d in position 25: character maps to <undefined> in the cp1252.py file.

This looks to me like Python is trying to interpret the data using the default code page. But the data isn't text in any code page; that's why I'm using str, not unicode.

I guess my questions are:

  • How do I represent raw binary data in memory, in Python?
  • When I'm writing raw binary data out through a codec, how do I encode/decode it?
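Roughly, the failing pattern looks like this (a simplified sketch; the names are made up):

import codecs

raw_data = '\x8d\x01\x02'  # binary data held in a str
out = codecs.open('data.txt', 'w', 'utf-8')
# the codec expects unicode, so Python first decodes the str with a
# default encoding (cp1252 on my machine); 0x8d is undefined there
out.write(raw_data)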
A: 

You shouldn't normally use codecs with str, except to turn them into unicodes. Perhaps you should be looking at using the latin-1 codec if you think you want "raw" data in your unicodes.
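For example (a sketch, assuming your bytes live in a str named raw):

raw = '\x00\x8d\xff'
# latin-1 maps bytes 0-255 straight to unicode code points 0-255
as_unicode = raw.decode('latin-1')
assert as_unicode == u'\x00\x8d\xff'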

Ignacio Vazquez-Abrams
I don't want "raw" data in my unicodes.
Chris B.
Then why are you using a codec?
Ignacio Vazquez-Abrams
I'm writing raw binary data to a text file, along with a number of unicode strings. When I try writing the raw binary data (which I have internally stored in utf-8 format) to a utf-8 codec, I get the cp1252 error.
Chris B.
Then write it directly to the file, instead of through the codec.
Ignacio Vazquez-Abrams
A: 

For your first question: in Python, regular strings (i.e., not unicode strings) are binary data. If you want to write both unicode strings and binary data, turn the unicode strings into binary data and concatenate them:

# encode the unicode string as a plain byte string
# (renamed from `bytes` to avoid shadowing the builtin)
encoded = unicodeString.encode('utf-8')
# append it to the other string
raw_data += encoded
# write it all to a file
yourFile.write(raw_data)

For your second question: you write() the raw data; then, when you read it, you do so like this:

import codecs
yourFile = codecs.open("yourFileName", "r", "utf-8")
# and now just use yourFile.read() to read it
Daniel G
As I mentioned, I *have* a regular string.
Chris B.
And doing `yourFile.write(regular_string)` gives you the error? You don't need to further encode a regular string; like I said, it's already raw bytes.
Daniel G
@Chris: Are you doing something silly like using Python 3, perhaps?
SamB
It's not Python 3. It's a str, being written through a utf-8 codec, which is somehow being interpreted by the cp1252 codec during that process. I suspect Python expects unicode strings for its codec, so it automagically translates the str to a unicode object, which causes the conversion and the error. I don't quite know how to prevent that, though.
Chris B.
If you have raw binary data stored in a str, you don't want to get it anywhere near a codec. It should be written straight to a file opened in binary mode. I have no idea what you mean by saying you have raw binary data stored internally in utf-8 format; that doesn't make sense.
Greg Ball
@Greg - exactly. @Chris Codecs are used for *interpreting* data (bytes) (as, for example, a utf-8 string). You don't interpret data when you write it (you just want to write the string of bytes out); you interpret it when you *read* it - hence the edit to my answer.
Daniel G
+4  A: 

Your use of str for raw binary data in memory is correct.
[If you're using Python 2.6+, it's even better to use bytes which in 2.6+ is just an alias to str but expresses your intention better, and will help if one day you port the code to Python 3.]
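For instance (a quick sketch of that alias on 2.6):

# on Python 2.6, bytes is just another name for str
assert bytes is str
b = bytes('\x00\x8d\xff')
assert isinstance(b, str)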

As others note, writing binary data through a codec is strange. A write codec takes unicode and outputs bytes into the file. You're trying to do it backwards, hence our confusion about your intentions...

[And your diagnosis of the error looks correct: since the codec expects unicode, Python is decoding your str into unicode with the system's default encoding, which chokes.]
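You can reproduce that implicit decode by hand (a sketch; which encoding Python actually uses for it depends on your setup):

import sys

raw = '\x8d'
# roughly what happens behind your back before the codec can encode:
# this raises UnicodeDecodeError, just like the write through the codec
raw.decode(sys.getdefaultencoding())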

What do you want to see in the output file?

  • If the file should contain the binary data as-is:

    Then you must not send it through a codec; you must write it directly to the file. A codec encodes everything and can only emit valid encodings of unicode (in your case, valid UTF-8). There is no input you can give it to make it emit arbitrary byte sequences!

    • If you require a mixture of UTF-8 and raw binary data, you should open the file directly and intermix writes of some_data with some_text.encode('utf8') (there's a sketch of all three options after this list)...

    Note however that mixing UTF-8 with raw arbitrary data is very bad design, because such files are very inconvenient to deal with! Tools that understand unicode will choke on the binary data, leaving you with no convenient way to even view (let alone modify) the file.

  • If you want a friendly representation of arbitrary bytes in unicode:

    Pass data.encode('base64') to the codec. Base64 produces only clean ASCII (letters, numbers, and a little punctuation), so it can be embedded cleanly in anything, it is obviously binary data to a human reader, and it's reasonably compact (slightly over 33% overhead).

    P.S. You may notice that data.encode('base64') itself looks strange:

    • .encode() is supposed to take unicode but I'm giving it a str?! Python has several pseudo-codecs that convert str->str, such as 'base64' and 'zlib'.

    • .encode() always returns a str, but you'll feed it into a codec expecting unicode?! In this case it will only contain clean ASCII, so it doesn't matter. You may explicitly write data.encode('base64').encode('utf8') if it makes you feel better.

  • If you need a 1:1 mapping from arbitrary bytes to unicode:

    Pass data.decode('latin1') to the codec. latin1 maps bytes 0-255 to unicode characters 0-255, which is kinda elegant.

    The codec will, of course, encode your characters: code points 128-255 each become 2 bytes in UTF-8, so for random data the average overhead is 50% (surprisingly, more than base64!). This quite kills the "elegance" of having a 1:1 mapping.

    Note also that unicode characters 0-255 include nasty invisible/control characters (newline, formfeed, soft hyphen, etc.), making your binary data annoying to view in text editors.

    Considering these drawbacks, I do not recommend latin1 unless you understand exactly why you want it.
    I'm just mentioning it as the other "natural" encoding that springs to mind.
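A compact sketch of the three options above (Python 2; the file names and sample bytes are made up):

import codecs

unicode_text = u'caf\xe9\n'
raw = '\x00\x8d\xff'  # arbitrary bytes held in a str

# option 1 (raw bytes as-is): open the file directly in binary mode
# and encode any unicode text yourself
f = open('mixed.bin', 'wb')
f.write(unicode_text.encode('utf-8'))
f.write(raw)
f.close()

# option 2 (base64): the result is pure ASCII, so it passes safely
# through the UTF-8 codec
f = codecs.open('friendly.txt', 'w', 'utf-8')
f.write(unicode_text)
f.write(raw.encode('base64'))
f.close()

# option 3 (latin-1): bytes 0-255 become code points 0-255, which the
# codec then re-encodes as UTF-8 (1 or 2 bytes per original byte)
f = codecs.open('mapped.txt', 'w', 'utf-8')
f.write(raw.decode('latin1'))
f.close()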

Beni Cherniavsky-Paskin