Hello, I am writing a simple Python program.

My program seems to suffer from linear access to dictionaries: its run-time grows exponentially, even though the algorithm itself is quadratic.
I use a dictionary to memoize values. That seems to be a bottleneck.

The values I'm hashing are tuples of points. Each point is (x, y), with 0 <= x, y <= 50.
Each key in the dictionary is a tuple of 2-5 points, e.g. ((x1,y1),(x2,y2),(x3,y3),(x4,y4)).

The keys are read many times more often than they are written.

Am I correct that python dicts suffer from linear access times with such inputs?

As far as I know, sets have guaranteed logarithmic access times.
How can I simulate dicts using sets (or something similar) in Python?

Edit: As per request, here's a (simplified) version of the memoization function:

def memoize(fun):
    memoized = {}
    def memo(*args):
        key = args
        if key not in memoized:
            memoized[key] = fun(*args)
        return memoized[key]
    return memo
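
For illustration, this is how I apply it; total_distance is a made-up stand-in for my real function:

@memoize
def total_distance(points):
    # made-up example: an expensive computation over a tuple of (x, y) points
    return sum(abs(x1 - x2) + abs(y1 - y2)
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

print total_distance(((0, 0), (3, 4), (10, 10)))  # computed once, then cached
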
+2  A: 

You are not correct. dict access is unlikely to be your problem here. It is almost certainly O(1), unless you have some very weird inputs or a very bad hashing function. Paste some sample code from your application for a better diagnosis.

Eli Bendersky
Please don't generalize. I know they are "usually" O(1). Also, please don't be rude and ask for "sample code"; my question includes all the data you'll need to answer.
x10
Asking for sample code isn't rude. Dictionary access *is* almost always O(1), so we need to see sample code to suggest other possible bottlenecks.
Martin
+7  A: 

See Time Complexity. The Python dict is a hash map, so its worst case is O(n) if the hash function is bad and results in a lot of collisions. That, however, is a very rare case in which every item added has the same hash and so is added to the same chain, which for a major Python implementation would be extremely unlikely. The average time complexity is of course O(1).
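
To see what the degenerate case looks like, here is a small sketch (hypothetical keys, not the OP's): a key type whose __hash__ sends every instance to the same bucket forces each lookup to walk the collision chain.

import time

class BadKey(object):
    # pathological key: every instance lands in the same bucket
    def __init__(self, n):
        self.n = n
    def __hash__(self):
        return 42
    def __eq__(self, other):
        return self.n == other.n

d = {}
for i in range(2000):
    d[BadKey(i)] = i

start = time.time()
for i in range(2000):
    d[BadKey(i)]              # each lookup walks the chain: O(n) per access
print "2000 colliding lookups: %.3fs" % (time.time() - start)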

The best approach would be to look at the hashes of the objects you are using. The CPython dict uses `int PyObject_Hash(PyObject *o)`, which is the equivalent of `hash(o)`.

After a quick check, I have not yet managed to find two point tuples that hash to the same value, which suggests that the lookup is O(1):

# exhaustively hash every (x, y) point in the 0..50 domain and watch
# for a repeated hash value
seen = set()          # set membership is O(1); scanning a list would be O(n)
for x in range(0, 51):
    for y in range(0, 51):
        h = hash((x, y))
        if h in seen:
            print "Fail: ", (x, y)
        seen.add(h)
print "Test Finished"


Yacoby
Thank you for your answer, but I already knew that. Please try and answer my particular question.
x10
heh, nice idea. It hadn't occurred to me that with such a small range an exhaustive test was possible.
Martin
@Martin - It is a deceptively large range. I tested it up to 200 x 200 and it passes.
Chris Lutz
That small range is not relevant ... the OP is NOT using `(x, y)` points as keys -- he is using `((x0,y0),(x1,y1))` up to `((x0,y0), ..., (x4,y4))`. He has `sum(51**(n*2) for n in range(2,6))` (i.e. 119,088,209,375,236,404) possible keys, not `51**2`.
John Machin
@Chris: WHAT is a deceptively large range?
John Machin
+3  A: 

It would be easier to make suggestions if you provided example code and data.

Accessing the dictionary is unlikely to be the problem, as that operation is O(1) on average and O(N) amortized worst case. It's possible that the built-in hash function is experiencing collisions for your data. If you're having problems with the built-in hash function, you can provide your own.

Python's dictionary implementation reduces the average complexity of dictionary lookups to O(1) by requiring that key objects provide a "hash" function. Such a hash function takes the information in a key object and uses it to produce an integer, called a hash value. This hash value is then used to determine which "bucket" this (key, value) pair should be placed into.
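
As a simplified sketch of that mechanism (CPython actually uses open addressing with a perturbation scheme, not a plain modulo, but the principle is the same):

key = ((3, 4), (10, 10))
table_size = 8                     # dict tables start small and grow as needed
bucket = hash(key) % table_size    # simplified bucket selection
print "key lands in bucket", bucket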

You can override the `__hash__` method in your class to implement a custom hash function, like this:

def __hash__(self):
    # note: this assumes str(self) captures the object's value, i.e. the
    # class defines a meaningful __str__; the default repr would break it
    return hash(str(self))

Depending on what your data actually looks like, you might be able to come up with a faster hash function that has fewer collisions than the standard function. However, this is unlikely. See the Python Wiki page on Dictionary Keys for more information.
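
For example, since each coordinate is bounded by 0 <= x, y <= 50, a hypothetical hand-rolled hash could pack the points into a single integer, base 51. It is collision-free over the stated domain, but as a Python-level function it would very likely lose to the built-in C tuple hash, so measure before adopting anything like it:

def point_tuple_hash(points):
    # hypothetical: fold each (x, y) into one integer, base 51
    h = 0
    for x, y in points:
        h = h * (51 * 51) + x * 51 + y
    return h

print point_tuple_hash(((1, 2), (3, 4)))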

James Thompson
James -- you're RUDE -- see his comment to my answer. You're asking for example code/data. don't do that
Eli Bendersky
A: 

As others have pointed out, accessing dicts in Python is fast. They are probably the best-oiled data structure in the language, given their central role. The problem lies elsewhere.

How many tuples are you memoizing? Have you considered the memory footprint? Perhaps you are spending all your time in the memory allocator or paging memory.
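
A rough back-of-the-envelope check (sizes vary by CPython version and platform, so treat the numbers as indicative only):

import sys

# one memo entry: a key of four (x, y) point tuples; the cached value and
# the dict's own overhead come on top of this
key = ((1, 2), (3, 4), (5, 6), (7, 8))
bytes_per_key = sys.getsizeof(key) + sum(sys.getsizeof(p) for p in key)
print bytes_per_key, "bytes per key, before the value and dict overhead"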

Ned Batchelder
A: 

My program seems to suffer from linear access to dictionaries, its run-time grows exponentially even though the algorithm is quadratic.

I use a dictionary to memoize values. That seems to be a bottleneck.

This is evidence of a bug in your memoization method.
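
One way to test that hypothesis is to count cache hits and misses; a hit rate near zero means the cache never pays for itself. A hypothetical instrumented variant of the posted memoizer:

def memoize(fun):
    memoized = {}
    stats = {'hits': 0, 'misses': 0}
    def memo(*args):
        if args in memoized:
            stats['hits'] += 1
        else:
            stats['misses'] += 1
            memoized[args] = fun(*args)
        return memoized[args]
    memo.stats = stats            # inspect memo.stats after a run
    return memo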

Robert Rossney
A: 

To answer your specific questions:

Q1: """Am I correct that python dicts suffer from linear access times with such inputs?"""

A1: If you mean that the average lookup time is O(N), where N is the number of entries in the dict, then it is highly likely that you are wrong. If you are correct, the Python community would very much like to know under what circumstances, so that the problem can be mitigated or at least warned about. Neither "sample" code nor "simplified" code is useful. Please show actual code and data that reproduce the problem. The code should be instrumented with things like the number of dict items and the number of dict accesses, for each P, where P is the number of points in the key (2 <= P <= 5).
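
A sketch of that kind of instrumentation (hypothetical; it assumes the memoized function takes the tuple of points as its single argument):

from collections import defaultdict

def memoize(fun):
    memoized = {}
    accesses = defaultdict(int)   # lookups, keyed by P (points per key)
    items = defaultdict(int)      # stored entries, keyed by P
    def memo(points):
        p = len(points)
        accesses[p] += 1
        if points not in memoized:
            items[p] += 1
            memoized[points] = fun(points)
        return memoized[points]
    memo.accesses, memo.items = accesses, items
    return memo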

Q2: """As far as I know, sets have guaranteed logarithmic access times. How can I simulate dicts using sets(or something similar) in Python?"""

A2: Sets have guaranteed logarithmic access times in what context? There is no such guarantee for Python implementations. Recent CPython versions in fact use a cut-down dict implementation (keys only, no values), so the expectation is average O(1) behaviour. How can you simulate dicts with sets or something similar in any language? Short answer: with extreme difficulty, if you want any functionality beyond `dict.has_key(key)`.
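
To make "extreme difficulty" concrete, here is a deliberately bad toy: a dict simulated with a set of (key, value) pairs. Because a set cannot hand back the stored element that matches a probe, every operation degenerates to a linear scan:

class SetDict(object):
    # toy illustration only: a "dict" backed by a set of (key, value) pairs
    def __init__(self):
        self._entries = set()
    def __setitem__(self, key, value):
        stale = [kv for kv in self._entries if kv[0] == key]
        for kv in stale:                     # drop any previous binding: O(n)
            self._entries.remove(kv)
        self._entries.add((key, value))
    def __getitem__(self, key):
        for k, v in self._entries:           # linear scan: O(n) per lookup
            if k == key:
                return v
        raise KeyError(key)
    def has_key(self, key):
        return any(k == key for k, v in self._entries)

In other words, the dict you already have is the right tool.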

John Machin