views: 2095
answers: 7

I'm writing an application in Python (2.6) that requires me to use a dictionary as a data store.

I am curious as to whether it is more memory-efficient to have one large dictionary, or to break that down into many smaller dictionaries and then have an "index" dictionary that contains references to all the smaller dictionaries.

I know there is a lot of overhead in general with lists and dictionaries. I read somewhere that Python internally allocates enough space for the dictionary/list's number of items raised to the power of 2.

I'm new enough to Python that I'm not sure whether there are other unexpected internal complexities/surprises like that, not apparent to the average user, that I should take into consideration.

One of the difficulties is knowing how the power-of-2 system counts "items". Is each key:value pair counted as one item? That seems important to know, because if you have a 100-item monolithic dictionary then space for 100^2 items would be allocated, whereas if you have 100 single-item dictionaries (one key:value pair each) then each dictionary would only allocate 1^2 (i.e., no extra allocation)?
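
One rough way to watch how the allocation actually grows is to print sys.getsizeof as items are added; the exact sizes are platform-dependent and this is just a sketch, not a statement of the precise growth rule:

    import sys

    # Watch the dict object's own size (not its contents) grow as items
    # are added.  Numbers vary by platform and Python version.
    d = {}
    last = sys.getsizeof(d)
    print "items=%d size=%d bytes" % (len(d), last)
    for i in range(100):
        d[i] = None
        size = sys.getsizeof(d)
        if size != last:
            # the internal hash table was just resized
            print "items=%d size=%d bytes" % (len(d), size)
            last = size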

Any clearly laid out information would be very helpful!

+1  A: 

Oftentimes, dictionaries of dictionaries are useful for reasons other than performance; i.e., they allow you to store contextual information about the data without having extra fields on the objects themselves, and they make querying subsets of the data faster.

In terms of memory usage, it would stand to reason that one large dictionary will use less RAM than multiple smaller ones. Remember, if you're nesting dictionaries, each additional layer of nesting roughly doubles the number of dictionaries you need to allocate.

In terms of query speed, multiple dicts will take longer due to the increased number of lookups required.

So I think the only way to answer this question is for you to profile your own code. However, my suggestion is to use the method that makes your code the cleanest and easiest to maintain. Of all the features of Python, dictionaries are probably the most heavily tweaked for optimal performance.
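
For illustration, here's a rough sketch of the two layouts being compared; the names are placeholders and it only shows the structural difference, not a benchmark:

    # Same mappings, two shapes: one flat dict keyed by (group, key) tuples,
    # and a dict of dicts.  The nested layout allocates one extra dict per
    # group and needs one extra hash lookup per access.
    flat = {}
    nested = {}

    for group in range(100):
        nested[group] = {}
        for key in range(100):
            flat[(group, key)] = "value"
            nested[group][key] = "value"

    print flat[(42, 7)]      # one lookup on a tuple key
    print nested[42][7]      # two lookups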

Daniel
"Larger dicts obviously have longer lookup times than smaller ones": Wrong. Hashtables are avg. case O(1) time.
tgamblin
"it would stand to reason that one large dictionary will use less ram than multiple smaller ones": This is wrong too. Hashtables are O(n) space. There's no significant difference between the size of one large dictionary and multiple smaller ones. See below.
tgamblin
@tgamblin - as I said in the answer, the space efficiency is due to the creation of multiple dicts. OF COURSE there will be additional space required when you're allocating more objects. You're right about lookup speed, though. I modified the answer.
Daniel
@Daniel: Unless you are allocating one dictionary per mapping or close to it, then the tiny overhead from each of those extra objects isn't going to matter. What's INSIDE the dictionaries dominates, and he'll have the same number of mappings with one or multiple dictionaries.
tgamblin
@Daniel: "each additional layer of nesting will roughly double the number of dictionaries you need to allocate". If you are doing something like this, then you will end up with O(nlogn) space for n mappings, where log(n) comes from the extra dicts.
tgamblin
@Daniel: but this is kludgy. If you have log(n) levels of lookup, you don't have nice O(1) hash lookup anymore, and you might as well have used some balanced tree structure, or maybe a trie, for the data.
tgamblin
@tgamblin - apparently you missed my point. I said that the trade off of nested dicts is useful when you need to store extra context information, and, in the last paragraph, that he should choose the model that fits his data. You're just being argumentative.
Daniel
@Daniel: No, your answer is still wrong: "In terms of memory usage, it would stand to reason that one large dictionary will use less ram than multiple smaller ones" Your justification is "more objects == more memory". Is 50 cents in nickels more money than 50 cents in quarters?
tgamblin
@tgamblin - ho hum. More objects _does_ equal more memory... it's a pedantic point, but your fixation on it is strange. Again, my advice was to use the model that fits the data.
Daniel
+11  A: 

If you're using Python, you really shouldn't be worrying about this sort of thing in the first place. Just build your data structure the way it best suits your needs, not the computer's.

This smacks of premature optimization, not performance improvement. Profile your code if something is actually bottlenecking, but until then, just let Python do what it does and focus on the actual programming task, and not the underlying mechanics.

Soviut
+4  A: 

"Simple" is generally better than "clever", especially if you have no tested reason to go beyond "simple". And anyway "Memory efficient" is an ambiguous term, and there are tradeoffs, when you consider persisting, serializing, cacheing, swapping, and a whole bunch of other stuff that someone else has already thought through so that in most cases you don't need to.

Think "Simplest way to handle it properly" optimize much later.

le dorfier
Why was this voted down?
Adam Bernier
+1  A: 

Honestly, you won't be able to tell the difference either way, in terms of either performance or memory usage. Unless you're dealing with tens of millions of items or more, the performance or memory impact is just noise.

From the way you worded your second sentence, it sounds like the one big dictionary is your first inclination, and matches more closely with the problem you're trying to solve. If that's true, go with that. What you'll find about Python is that the solutions that everyone considers 'right' nearly always turn out to be those that are as clear and simple as possible.

DNS
+3  A: 

Premature optimization bla bla, don't do it bla bla.

I think you're mistaken about what the power-of-two extra allocation does. I think it's just a multiplier of two: x*2, not x^2.

I've seen this question a few times on various python mailing lists.

With regards to memory, here's a paraphrased version of one such discussion (the post in question wanted to store hundreds of millions of integers):

  1. A set() is more space-efficient than a dict() if you just want to test for membership (see the sketch after this list)
  2. gmpy has a bitvector type class for storing dense sets of integers
  3. Dicts are kept between 30% and 50% empty, and an entry is about 12 bytes (though the true amount will vary by platform a bit)
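
As a rough check of the first point, sys.getsizeof can compare the two containers' own footprints directly. This is only a sketch (the numbers vary by platform and Python version), not the original poster's measurement:

    import sys

    keys = range(1000000)
    as_set = set(keys)                    # membership test only
    as_dict = dict.fromkeys(keys, True)   # membership plus a dummy value

    # getsizeof reports the container's own footprint, not the objects it holds
    print "set:  %d bytes" % sys.getsizeof(as_set)
    print "dict: %d bytes" % sys.getsizeof(as_dict)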

So, the fewer objects you have, the less memory you're going to use, and the fewer lookups you're going to do (with an index dictionary you have to look up the index first, then do a second lookup in the actual dictionary).

Like others said, profile to see your bottlenecks. Keeping a membership set() and a value dict() might be faster, but you'll be using more memory.

I'd also suggest reposting this to a Python-specific list, such as comp.lang.python, which is full of people much more knowledgeable than myself who would give you all sorts of useful information.

Richard Levasseur
+27  A: 
tgamblin
Here's a more up-to-date version of the Python dictionary impl: http://code.python.org/loggerhead/users/benjamin.peterson/pydict/annotate/head%3A/dictimpl.py
Benjamin Peterson
@tgamblin: can you fix the link to the "python implementation"?
Tshepang
It looks like it's gone -- I got it from Ben.
tgamblin
@tgamblin: can you remove that info then?
Tshepang
+2  A: 

If your dictionary is so big that it does not fit into memory, you might want to have a look at ZODB, a very mature object database for Python.

The 'root' of the db has the same interface as a dictionary, and you don't need to load the whole data structure into memory at once; e.g., you can iterate over only a portion of the structure by providing start and end keys.

It also provides transactions and versioning.
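
A minimal sketch of that usage, assuming ZODB (and its BTrees package) is installed; the file name 'data.fs' and the keys here are just placeholders:

    from ZODB import FileStorage, DB
    from BTrees.OOBTree import OOBTree
    import transaction

    storage = FileStorage.FileStorage('data.fs')
    db = DB(storage)
    conn = db.open()
    root = conn.root()            # behaves like a dictionary

    root['index'] = OOBTree()     # a mapping type designed to live on disk
    root['index']['alpha'] = 1
    root['index']['omega'] = 2
    transaction.commit()

    # Iterate over only a portion of the structure by giving start/end keys.
    for key, value in root['index'].items('alpha', 'omega'):
        print key, value

    conn.close()
    db.close()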

EoghanM