I am trying to understand why Prim and Kruskal have different time complexities when it comes to sparse and dense graphs. After using a couple of applets that demonstrate how each works, I am still left a little confused about how the density of the graph affects the algorithms. I hope someone can give me a nudge in the right direction.
views: 149
answers: 3

Comment: Are these different complexities with respect to the number of vertices by any chance?
There is often a slightly handwavy argument that for a sparse graph the number of edges is E = O(V), where V is the number of vertices, while for a dense graph E = O(V^2). Since both algorithms have complexities that depend on E, converting those to complexities in terms of V gives different results for sparse and dense graphs.

edit:
The choice of data structure will also affect the complexity, of course; Wikipedia has a breakdown of this.
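Substituting those edge counts into the textbook bounds (O(E lg E) for Kruskal, O(E + V lg V) for Prim with a Fibonacci heap) makes the difference concrete; note that lg E = O(lg V) since E <= V^2:

```latex
\text{Kruskal: } O(E \lg E) =
\begin{cases}
O(V \lg V)     & \text{sparse, } E = O(V)\\
O(V^2 \lg V)   & \text{dense, } E = O(V^2)
\end{cases}
\qquad
\text{Prim: } O(E + V \lg V) =
\begin{cases}
O(V \lg V) & \text{sparse}\\
O(V^2)     & \text{dense}
\end{cases}
```

So on dense graphs Prim's O(V^2) beats Kruskal's O(V^2 lg V), while on sparse graphs the two bounds coincide at O(V lg V).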
Introduction to Algorithms by Cormen et al. does indeed give an analysis, in both cases using a sparse (adjacency-list) representation of the graph. With Kruskal's algorithm (link vertices in disjoint components until everything joins up), the first step is to sort the edges of the graph, which takes time O(E lg E), and they simply establish that nothing else takes longer than that. With Prim's algorithm (extend the current tree by adding the closest vertex not already on it), they use a Fibonacci heap to store the queue of pending vertices and get O(E + V lg V), because with a Fibonacci heap decreasing the distance to a vertex in the queue is only O(1), and you do this at most once per edge.
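To see why the sort dominates Kruskal's algorithm, here is a minimal sketch (function and variable names are my own, assuming edges arrive as (weight, u, v) triples over vertices 0..n-1); everything after the O(E lg E) sort is near-linear work on the disjoint-set forest:

```python
def kruskal(num_vertices, edges):
    """Return (mst_edges, total_weight) for an undirected weighted graph.

    edges: list of (weight, u, v) triples.
    """
    # Sorting the edges is the dominant cost: O(E lg E).
    edges = sorted(edges)

    # Disjoint-set forest: each vertex starts in its own component.
    parent = list(range(num_vertices))

    def find(x):
        # Find the component root, with path compression.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:           # u and v are in different components,
            parent[ru] = rv    # so this edge joins them without a cycle
            mst.append((u, v, w))
            total += w
    return mst, total
```

For example, on a 4-cycle with one heavy diagonal, `kruskal(4, [(1, 0, 1), (2, 1, 2), (3, 2, 3), (4, 3, 0), (5, 0, 2)])` picks the three cheapest edges for a total weight of 6.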