There doesn't seem to be a summary available anywhere of the big-O complexities of operations on the most common data structures, including arrays, linked lists, hash tables, etc.

+4  A: 

I guess I will start you off with the time complexity of a linked list:

  • Indexing - O(n)
  • Inserting/deleting at the end - O(1) or O(n)
  • Inserting/deleting in the middle - O(1) with an iterator, O(n) without

The time complexity of inserting at the end depends on whether you have the location of the last node: if you do, it's O(1); otherwise you have to search through the linked list for it, and the complexity jumps to O(n).
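
Here's a minimal sketch in Java of a singly linked list that keeps a tail reference, so appending at the end stays O(1); the class and method names are just for illustration:

```java
/** Minimal singly linked list with a tail pointer (illustrative only). */
class SinglyLinkedList<T> {
    private static class Node<T> {
        T value;
        Node<T> next;
        Node(T value) { this.value = value; }
    }

    private Node<T> head;
    private Node<T> tail;

    /** O(1): we already hold a reference to the last node. */
    void addLast(T value) {
        Node<T> node = new Node<>(value);
        if (tail == null) {
            head = node;
        } else {
            tail.next = node;
        }
        tail = node;
    }

    /** O(n): we have to walk the list from the head (assumes a valid index). */
    T get(int index) {
        Node<T> current = head;
        for (int i = 0; i < index; i++) {
            current = current.next;
        }
        return current.value;
    }
}
```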

Jose Vega
The complexity of inserting into the **middle** of a singly linked list is O(n). If the list is doubly linked and you already have a reference to the node you want to insert at, it is O(1).
Rob Walker
I had forgotten to add the iterator part. Thanks for pointing it out.
Jose Vega
A: 

Amortized Big-O for hashtables:

  • Insert - O(1)
  • Retrieve - O(1)
  • Delete - O(1)

Note that there is a constant factor for the hashing algorithm, and the amortization means that actual measured performance may vary dramatically.
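
As a sketch of where the amortization comes from, here is a toy chained hash set for int keys that doubles its bucket array when the load factor passes a threshold (the names and the 0.75 threshold are just illustrative, not any particular library's implementation):

```java
import java.util.LinkedList;

/** Toy separate-chaining hash set for ints (illustrative, not production code). */
class IntHashSet {
    private LinkedList<Integer>[] buckets = newBuckets(8);
    private int size = 0;

    @SuppressWarnings("unchecked")
    private static LinkedList<Integer>[] newBuckets(int n) {
        return (LinkedList<Integer>[]) new LinkedList[n];
    }

    private int indexFor(int key, int capacity) {
        return Math.floorMod(Integer.hashCode(key), capacity);
    }

    /** Amortized O(1): most inserts touch one bucket; a few pay for a full rehash. */
    void add(int key) {
        if (size + 1 > buckets.length * 0.75) {
            resize();
        }
        int i = indexFor(key, buckets.length);
        if (buckets[i] == null) buckets[i] = new LinkedList<>();
        if (!buckets[i].contains(key)) {   // O(chain length), O(1) expected
            buckets[i].add(key);
            size++;
        }
    }

    /** O(n) when it happens, but doubling makes the cost amortize away. */
    private void resize() {
        LinkedList<Integer>[] old = buckets;
        buckets = newBuckets(old.length * 2);
        for (LinkedList<Integer> chain : old) {
            if (chain == null) continue;
            for (int key : chain) {
                int i = indexFor(key, buckets.length);
                if (buckets[i] == null) buckets[i] = new LinkedList<>();
                buckets[i].add(key);
            }
        }
    }
}
```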

Hank Gay
What is the Big-O of inserting N items into a hash set? Think twice.
Ilya Ryzhenkov
Amortized, it's O(N). You may have issues with resizing the backing array, though. It also depends on your method for handling collisions: if you use chaining and your chain insertion is O(n) in the chain length (like appending at the tail of a singly linked list with no tail pointer), it can devolve into O(N^2).
Hank Gay
This is wrong; you have the wrong definition of "amortized". Amortized means the total time for doing a sequence of operations divided by the number of operations. The worst-case time for inserting N items is definitely O(N^2), not O(N), so the operations above are still O(N) worst-case each, amortized or not. You are confusing it with the "average" time complexity assuming a certain distribution of hash functions, which is O(1).
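To make the resizing part of that concrete, here is a toy count (assuming a doubling backing array and ignoring collision costs entirely) showing that the total resize work over N inserts stays under 2N, which is why the *resizing* by itself is amortized O(1):

```java
/** Counts element copies done by doubling resizes over n inserts (toy model). */
class AmortizedResizeDemo {
    public static void main(String[] args) {
        int n = 1_000_000;
        int capacity = 8;
        long copies = 0;              // total elements moved by all resizes
        for (int size = 1; size <= n; size++) {
            if (size > capacity) {    // table is full: double it and copy everything
                copies += capacity;
                capacity *= 2;
            }
        }
        // Total copies is a geometric series summing to less than 2n,
        // i.e. O(1) extra work per insert on average.
        System.out.printf("copies = %d, copies per insert = %.2f%n",
                copies, (double) copies / n);
    }
}
```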
newacct
They still tell people that hashtables are O(1) for insertion/retrieval/deletion, even though a hashtable that resizes itself is most certainly *NOT* going to have constant performance on the insert that triggers a resize. I've always heard that explained as amortization. What do you call it?
Hank Gay
+1  A: 

Red-Black trees:

  • Insert - O(log n)
  • Retrieve - O(log n)
  • Delete - O(log n)
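
For a concrete example, Java's java.util.TreeMap is a red-black tree under the hood, and its documentation guarantees log(n) cost for these operations:

```java
import java.util.TreeMap;

public class RedBlackTreeDemo {
    public static void main(String[] args) {
        // TreeMap is a red-black tree, so each of these is O(log n).
        TreeMap<Integer, String> map = new TreeMap<>();
        map.put(42, "answer");        // insert: O(log n)
        String v = map.get(42);       // retrieve: O(log n)
        map.remove(42);               // delete: O(log n)
        System.out.println(v);
    }
}
```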
Hank Gay
A: 

I feel this is what should be taught in basic Computer Science courses, no? If you didn't happen to attend one, read the appropriate books.

Ilya Ryzhenkov
You will be horrified to learn that there are some CS degree programmes in the UK that no longer teach these basics. Apparently, it's a widening trend. Argumentum ad populum, methinks...
Rob
Agreed, but Stack Overflow aims to put this information on the web.
James
@JCS, should we publish all "core" CS books on SO then?
Ilya Ryzhenkov
+1  A: 

Keep in mind that unless you're writing your own data structure (e.g. a linked list in C), performance can depend dramatically on the implementation of the data structures in your language/framework of choice. As an example, take a look at the benchmarks of Apple's CFArray over at Ridiculous Fish. In this case, the data type, a CFArray from Apple's CoreFoundation framework, actually changes its underlying data structure depending on how many objects are in the array - changing from linear time to constant time at around 30,000 objects.
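
A Java analogue of the same point (assuming the standard java.util implementations): two classes behind the same List interface with very different costs per operation:

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class SameInterfaceDifferentCosts {
    public static void main(String[] args) {
        List<Integer> array = new ArrayList<>();   // backed by a resizable array
        List<Integer> linked = new LinkedList<>(); // backed by a doubly linked list

        for (int i = 0; i < 100_000; i++) {
            array.add(i);
            linked.add(i);
        }

        // Same call, different complexity:
        array.get(50_000);     // O(1): direct array indexing
        linked.get(50_000);    // O(n): walks the list node by node

        array.add(0, -1);      // O(n): shifts every element one slot right
        linked.add(0, -1);     // O(1): just relinks the head node
    }
}
```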

This is actually one of the beautiful things about object-oriented programming - you don't need to know how it works, just that it works, and the 'how it works' can change depending on requirements.

Dan Udey