views:

458

answers:

4

I thought about the following question about computer architecture. Suppose I do in Python

from bisect import bisect
index = bisect(x, a)      # O(log n)  (also, shouldn't it be a standard list function?)
x.insert(index, a)        # O(1) + memcpy()

which takes O(log n), plus, if I understand it correctly, a memory copy operation for x[index:]. Now I read recently that the bottleneck is usually in the communication between the processor and memory, so the memory copy could be done by RAM quite fast. Is that how it works?
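(As for the parenthetical in the snippet: the standard library does combine the two steps in one helper, `bisect.insort`. A minimal sketch:)

```python
from bisect import insort

x = [1, 3, 5, 7]
insort(x, 4)    # locates the position with bisect, then calls list.insert
print(x)        # [1, 3, 4, 5, 7]
```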

+3  A: 

Python is a language. Multiple implementations exist, and they may have different implementations for lists. So, without looking at the code of an actual implementation, you cannot know for sure how lists are implemented and how they behave under certain circumstances.

My bet would be that the references to the objects in a list are stored in contiguous memory (certainly not as a linked list...). If that is indeed so, then insertion using x.insert will cause all elements behind the inserted element to be moved. This may be done efficiently by the hardware, but the complexity would still be O(n).

For small lists the bisect operation may take more time than x.insert, even though the former is O(log n) while the latter is O(n). For long lists, however, I'd hazard a guess that x.insert is the bottleneck. In such cases you should consider using a different data structure.
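One way to check this empirically is a rough timing sketch (numbers are machine-dependent, and the insert-at-front case is deliberately the worst case, since every element must be shifted):

```python
import timeit

# Sketch: on a large list, bisect stays cheap while inserting near the
# front must shift (memmove) all following element references.
n = 1_000_000
setup = f"from bisect import bisect; x = list(range({n}))"
t_bisect = timeit.timeit("bisect(x, 0)", setup=setup, number=100)
t_insert = timeit.timeit("x.insert(0, 0); x.pop(0)", setup=setup, number=100)
print(f"bisect: {t_bisect:.6f}s  insert: {t_insert:.6f}s")
```

Even with fast hardware-assisted copying, the gap between the two widens as n grows, because the copy is still linear in n.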

Stephan202
Well, I'm not saying that memcpy() is O(1) -- I know it's O(n), but the constant can be small -- and I'm not sure whether it's really optimized by the memory hardware. But if it is optimized to be, say, 1000 times faster than you'd naively expect, that's probably worth knowing.
ilya n.
In some cases, there might not be any space left in the list, so the whole list has to be copied after new free memory is allocated instead of only a memmove/memcpy.
Torsten Marek
+1  A: 

CPython lists are contiguous arrays. Which of the O(log n) bisect and the O(n) insert dominates your performance profile depends on the size of your list and also on the constant factors inside the O(). In particular, the comparison function invoked by bisect can be expensive depending on the type of objects in the list.
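To illustrate the point about comparison cost, here is a small sketch with a hypothetical wrapper class (`Slow` is invented for the example) that counts how often its comparison method is invoked during one bisect:

```python
from bisect import bisect

class Slow:
    """Hypothetical wrapper whose comparison could be arbitrarily expensive."""
    calls = 0

    def __init__(self, v):
        self.v = v

    def __lt__(self, other):
        Slow.calls += 1          # count every comparison bisect performs
        return self.v < other.v

x = [Slow(i) for i in range(1024)]
bisect(x, Slow(500))
print(Slow.calls)                # roughly log2(1024) = 10 comparisons
```

Only about log n comparisons happen, but if each one is costly (e.g. comparing long tuples or strings), bisect's constant factor can still matter.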

If you need to hold potentially large mutable sorted sequences, then the linear array underlying Python's list type isn't a good choice. Depending on your requirements, heaps, trees or skip lists might be appropriate.
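As one stdlib example of the heap option: `heapq` gives O(log n) insertion and O(log n) removal of the smallest element, at the cost of not keeping the whole sequence in sorted order. A minimal sketch:

```python
import heapq

# A binary heap supports cheap insertion and cheap access to the minimum,
# but iterating it does not yield the elements in sorted order.
h = []
for v in (5, 1, 4, 2, 3):
    heapq.heappush(h, v)     # O(log n) per push

print(heapq.heappop(h))      # 1  (always the smallest element)
print(heapq.heappop(h))      # 2
```

Whether this fits depends on whether you need full sorted traversal or only repeated access to the extremes.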

Ants Aasma
+1  A: 

Use the blist module if you need a list with better insert performance.

Seun Osewa
A: 

Yes, I think sometimes O(n) is inevitable. The talk Core Python Containers: Under the Hood discusses list performance on pages 4 to 14. It agrees with your viewpoint and is worth reading.

sunqiang