Is there a summary available of the big-O complexities for operations on the most common data structures, including arrays, linked lists, hash tables, etc.?
I guess I will start you off with the time complexity of a linked list:
- Indexing: O(n)
- Inserting/deleting at the end: O(1) or O(n)
- Inserting/deleting in the middle: O(1) with an iterator, O(n) without
The complexity of inserting at the end depends on whether you have a reference to the last node: if you do, it is O(1); otherwise you have to traverse the whole list first, and the cost jumps to O(n).
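To make that concrete, here is a minimal sketch in Java of a singly linked list that keeps a tail reference (the class and method names are just illustrative, not a standard API). With `tail`, appending is O(1); indexing still has to walk the chain, so it stays O(n):

```java
// Minimal sketch: a singly linked list with head and tail references.
class LinkedListSketch<T> {
    private static class Node<U> {
        U value;
        Node<U> next;
        Node(U value) { this.value = value; }
    }

    private Node<T> head;
    private Node<T> tail;

    // O(1): the tail reference lets us jump straight to the last node.
    // Without it we would have to walk from head, making this O(n).
    void append(T value) {
        Node<T> node = new Node<>(value);
        if (head == null) {
            head = node;
        } else {
            tail.next = node;
        }
        tail = node;
    }

    // O(n): indexing must traverse the chain from the head.
    T get(int index) {
        Node<T> current = head;
        for (int i = 0; i < index; i++) {
            current = current.next;
        }
        return current.value;
    }
}
```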
Amortized Big-O for hashtables:
- Insert - O(1)
- Retrieve - O(1)
- Delete - O(1)
Note that there is a constant factor for the hashing algorithm, and the amortization means that actual measured performance may vary dramatically.
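For concreteness, here is how those costs look with Java's HashMap, one common hash-table implementation. The occasional put that triggers an O(n) rehash when the table resizes is exactly what the amortization averages away:

```java
import java.util.HashMap;
import java.util.Map;

public class HashTableDemo {
    public static void main(String[] args) {
        Map<String, Integer> table = new HashMap<>();

        table.put("apple", 3);          // amortized O(1); a rare put triggers an O(n) rehash
        int count = table.get("apple"); // expected O(1), given a decent hash function
        table.remove("apple");          // expected O(1)

        System.out.println(count);
    }
}
```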
Red-Black trees:
- Insert - O(log n)
- Retrieve - O(log n)
- Delete - O(log n)
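You rarely build one of these by hand; Java's java.util.TreeMap, for example, is documented as a red-black-tree-based map, so each of these operations costs O(log n):

```java
import java.util.TreeMap;

public class RedBlackTreeDemo {
    public static void main(String[] args) {
        // TreeMap is backed by a red-black tree, so inserts, lookups,
        // and deletes are all O(log n).
        TreeMap<Integer, String> tree = new TreeMap<>();

        tree.put(42, "insert");      // O(log n)
        String value = tree.get(42); // O(log n)
        tree.remove(42);             // O(log n)

        System.out.println(value);
    }
}
```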
This is the sort of thing that should be taught in basic computer science courses, no? If you didn't happen to take one, any standard algorithms textbook covers it.
Keep in mind that unless you're writing your own data structure (e.g. a linked list in C), performance can depend dramatically on how the data structures in your language or framework of choice are implemented. As an example, take a look at the benchmarks of Apple's CFArray over at Ridiculous Fish. In this case, the data type, a CFArray from Apple's CoreFoundation framework, actually switches its underlying data structure depending on how many objects are in the array, changing from linear time to constant time at around 30,000 objects.
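You can see the same effect in Java without CoreFoundation: ArrayList and LinkedList share the List interface, but indexing is O(1) for one and O(n) for the other. This is only a rough micro-benchmark sketch (no JIT warm-up or statistical rigor), meant to show the shape of the difference rather than precise numbers:

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class IndexingBenchmark {
    public static void main(String[] args) {
        int n = 100_000;
        List<Integer> array = new ArrayList<>();
        List<Integer> linked = new LinkedList<>();
        for (int i = 0; i < n; i++) {
            array.add(i);
            linked.add(i);
        }

        // ArrayList.get(i) is O(1): a direct lookup into a backing array.
        long start = System.nanoTime();
        for (int i = 0; i < 1_000; i++) array.get(n / 2);
        System.out.println("ArrayList gets:  " + (System.nanoTime() - start) + " ns");

        // LinkedList.get(i) is O(n): it walks node by node from an end.
        start = System.nanoTime();
        for (int i = 0; i < 1_000; i++) linked.get(n / 2);
        System.out.println("LinkedList gets: " + (System.nanoTime() - start) + " ns");
    }
}
```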
This is actually one of the beautiful things about object-oriented programming: you don't need to know how it works, just that it works, and the 'how it works' can change depending on requirements.