More specific dupe of 875228, "Simple data storing in Python".

I have a rather large dict (6 GB) and I need to do some processing on it. I'm trying out several document clustering methods, so I need to have the whole thing in memory at once. I have other functions to run on this data, but the contents will not change.

Currently, every time I think of a new function I have to write it and then re-generate the dict. I'm looking for a way to write this dict to a file, so that I can load it into memory instead of recalculating all its values.

To oversimplify, it looks something like: {((('word','list'),(1,2),(1,3)),(...)):0.0, ....}

I feel that Python must have a better way than me looping through a string looking for ':' and '(' and trying to parse it into a dictionary.

+19  A: 

Why not use Python's pickle? Python has a great serialization module called pickle, and it is very easy to use (cPickle is the faster C implementation of the same interface).

import cPickle
with open('save.p', 'wb') as f:
    cPickle.dump(obj, f)
with open('save.p', 'rb') as f:
    obj = cPickle.load(f)

There are two disadvantages with pickle:

  • It's not secure against erroneous or maliciously constructed data. Never unpickle data received from an untrusted or unauthenticated source.
  • The format is not human readable.

If you are using Python 2.6+, there is a built-in module called json. It is as easy to use as pickle:

import json
encoded = json.dumps(obj)
obj = json.loads(encoded)

The JSON format is human readable and very similar to the dictionary string representation in Python, and it doesn't have pickle's security issues. But it might be slower than cPickle.
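One caveat worth knowing for a dict like yours: JSON only allows string keys and has no tuple type, so tuples silently come back as lists. A quick sketch (the data here is made up for illustration):

```python
import json

# Tuples survive the trip as lists, not tuples.
obj = {'scores': [('word', 1), ('list', 2)]}
encoded = json.dumps(obj)
print(json.loads(encoded))  # {'scores': [['word', 1], ['list', 2]]}
```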

Nadia Alramli
A: 

You can also write simple data structures in human- and Python-readable form with repr. The inverse operation is eval:

x = {1: (2, 3, 'four')}
print repr(x)
print eval(repr(x), {})

Some caveats:

  • eval is vulnerable to code injection attacks, since it runs arbitrary Python code passed to it.
  • for some data types, repr emits a string that eval cannot parse back into an object (anything without a literal syntax, such as file objects).
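The second caveat is easy to demonstrate (a sketch; object() stands in for any type without a literal syntax):

```python
x = {1: (2, 3, 'four')}
print(eval(repr(x), {}) == x)   # simple literals round-trip fine: True

# repr of an object with no literal form is not valid Python source,
# so eval cannot read it back:
try:
    eval(repr(object()), {})    # repr looks like '<object object at 0x...>'
except SyntaxError:
    print('repr output is not always parseable')
```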
pts
-1 for suggesting the use of eval. It is really bad practice and must be avoided.
nosklo
Have to agree with nosklo and, I'm afraid, downvote this as well... this method of serializing/deserializing is a debugging nightmare and a security hole waiting to happen. Makes me cringe.
Jarret Hardie
A: 

I would use ZODB if you need a persistent dict too large to fit into memory.

Unknown
A: 

Write it out in a serialized format, such as pickle (a Python standard library module for serialization) or perhaps JSON (a text representation that can be parsed back into the in-memory objects).

workmad3
+2  A: 

I would suggest that you use YAML for your file format so you can tinker with it on disk.

How does it look:
  - It is indentation based
  - It can represent dictionaries and lists
  - It is easy for humans to understand
Full syntax: http://www.yaml.org/refcard.html
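As a sketch, a dict holding a list and a string might look like this in YAML (the names are made up for illustration):

```yaml
cluster:
  words: [alpha, beta, gamma]
  label: an example string
```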

To get it in python, just easy_install pyyaml. See http://pyyaml.org/

It comes with easy file save / load functions, that I can't remember right this minute.

Tom Leys
+7  A: 

I'd use shelve, json, yaml, or whatever, as suggested by other answers.

shelve is especially cool because you can keep the dict on disk and still use it like a dict; values are loaded on demand.
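A minimal sketch of that on-demand behaviour (the filename is made up; shelve is in the standard library, and shelf keys must be strings):

```python
import shelve

# Build the shelf once; each value is pickled under a string key.
db = shelve.open('clusters.shelf')
db['scores'] = {((('word', 'list'), (1, 2), (1, 3)),): 0.0}
db.close()

# Later: reopen it and read values back on demand,
# without re-generating the whole dict.
db = shelve.open('clusters.shelf')
print(db['scores'])
db.close()
```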

But if you really want to parse the text of the dict, and it contains only strings, ints and tuples as you've shown, you can use ast.literal_eval to parse it. It is a lot safer, since you can't eval full expressions with it; it only works with strings, numbers, tuples, lists, dicts, booleans, and None:

>>> import ast
>>> print ast.literal_eval("{12: 'mydict', 14: (1, 2, 3)}")
{12: 'mydict', 14: (1, 2, 3)}
nosklo
A: 

This solution at SourceForge uses only standard Python modules:

y_serial.py module :: warehouse Python objects with SQLite

"Serialization + persistence :: in a few lines of code, compress and annotate Python objects into SQLite; then later retrieve them chronologically by keywords without any SQL. Most useful "standard" module for a database to store schema-less data."

http://yserial.sourceforge.net

The compression bonus will probably reduce your 6 GB dictionary substantially (perhaps to around 1 GB). If you do not want to store a series of dictionaries, the module also contains a file.gz solution which might be more suitable given your dictionary size.
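The same idea can be sketched with only standard-library modules; this illustrates the technique (pickle, compress, store in SQLite), not y_serial's actual API:

```python
import pickle, sqlite3, zlib

conn = sqlite3.connect(':memory:')  # use a filename for real persistence
conn.execute('CREATE TABLE store (key TEXT PRIMARY KEY, blob BLOB)')

# Pickle the object, compress the bytes, and store them as a BLOB.
obj = {((('word', 'list'), (1, 2), (1, 3)),): 0.0}
blob = zlib.compress(pickle.dumps(obj, pickle.HIGHEST_PROTOCOL))
conn.execute('INSERT INTO store VALUES (?, ?)', ('clusters', sqlite3.Binary(blob)))
conn.commit()

# Fetch, decompress, and unpickle to recover the original object.
row = conn.execute('SELECT blob FROM store WHERE key = ?', ('clusters',)).fetchone()
restored = pickle.loads(zlib.decompress(bytes(row[0])))
print(restored == obj)  # True
```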

code43