OK, here goes. O(N^2) (N squared) means that the running time of the algorithm for large N grows as the square of N, so doubling the size of the graph results in four times the time to compute.
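As a quick sanity check of that scaling claim, here is a toy sketch (the function name and sizes are just illustrative): a doubly nested loop does one unit of work per pair, so doubling N quadruples the total work.

```python
def quadratic_ops(n):
    """Count the basic operations of a toy O(N^2) nested loop."""
    ops = 0
    for i in range(n):
        for j in range(n):
            ops += 1  # one unit of work per pair (i, j)
    return ops

# Doubling N quadruples the work: 200^2 / 100^2 = 4.
print(quadratic_ops(200) / quadratic_ops(100))  # → 4.0
```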
The Kruskal rows are merely simplified, and assume that E = c * N^2, where c is presumably a constant that we can take to be significantly smaller than N as N gets large. You need to know the following laws of logarithms: log(ab) = log a + log b, and log(a^n) = n * log a. These two, combined with the fact that log c << log N (it is much smaller and can be ignored), should let you follow the simplifications there.
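You can check that simplification numerically. Applying the two log laws gives log(c * N^2) = log c + 2 * log N, and for large N the log c term is negligible (the values of c and N below are just illustrative assumptions):

```python
import math

# Sketch: log(c * N^2) = log c + 2*log N  ~=  2*log N for large N.
c, N = 8, 10**6
exact = math.log(c * N**2)    # log c + 2*log N
approx = 2 * math.log(N)      # drop the log c term
# log c (~2.08) is small next to 2*log N (~27.6), so the relative
# error of the approximation is under 8% here, and it shrinks as N grows.
print(exact, approx)
```

This is exactly why O(log E) collapses to O(log N) in those rows: the logarithm turns the square into a factor of 2 and the constant into an additive term that vanishes asymptotically.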
Now, as for the original expressions and where they were derived from, you'd need to check the page you got them from. But I'm assuming that if you're looking at Prim's and Kruskal's algorithms, you'll be able to follow the derivation; or at least, if you can't, my explaining it to you isn't actually going to help you in the long run...