An algorithm takes 0.5 ms for an input size of 100. How long will it take to run if the input size is 500 and the program is O(n lg(n))?

My book says that when the input size doubles, an n lg(n) algorithm takes "slightly more than twice as long". That isn't really helping me much.

The way I've been doing it is solving for the constant multiplier (which the book doesn't talk about, so I don't know if it's valid):

0.5 ms = c * 100 * lg(100) => c = 0.000753

So

0.000753 * 500 * lg(500) = 3.37562 ms

Is that a valid way to calculate running times, and is there a better way to figure it out?
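
For what it's worth, here is the same calculation written out as a quick script (Python just for illustration; I'm treating lg as log base 2):

    import math

    def predict_runtime_ms(t_known_ms, n_known, n_new):
        # Solve t_known = c * n * lg(n) for the constant c,
        # then plug the new input size into the same formula.
        c = t_known_ms / (n_known * math.log2(n_known))
        return c * n_new * math.log2(n_new)

    print(predict_runtime_ms(0.5, 100, 500))  # ~3.37 ms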

+1  A: 

Yes. That is exactly how it works.

Of course this ignores any possible initialization overhead, since that is not captured by big-O notation, but it's irrelevant for most algorithms.
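
In fact the constant cancels if you take the ratio directly (same formula, just rearranged), so you never have to solve for c:

t(500) = t(100) * (500 * lg 500) / (100 * lg 100) ~ 0.5 ms * 6.75 ~ 3.37 ms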

Tomas
+1  A: 

That's not exactly right. Tomas is right that there is overhead, and the real equation is more like

runtime = inputSize * lg(inputSize) * singleInputProcessTime + overhead

The singleInputProcessTime has to do with machine operations like loading address spaces, arithmetic, or anything else that must be done each time you interact with the input. It generally ranges from a few CPU cycles to seconds or minutes depending on your domain. The important thing to understand is that this time is roughly CONSTANT, so it does not affect the overall runtime very much AT LARGE ENOUGH input sizes.

The overhead is the cost of setting up the problem/solution, such as reading the algorithm into memory, spreading the input amongst servers/processes, or any operation that only needs to happen once or a fixed number of times and does NOT depend on the input size. This cost is also constant and can range anywhere from a few CPU cycles to minutes depending on the method used to solve the problem.

You already know about inputSize and n * lg(n). As for your homework problem, as long as you explain how you got to the solution, you should be just fine.
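
To make the "constant at large enough input sizes" point concrete, here is a rough sketch (Python; the per-item time and overhead values are made up purely for illustration):

    import math

    # Hypothetical constants, chosen only to illustrate the shape of the model
    PER_ITEM_TIME_MS = 0.00075   # singleInputProcessTime: cost per interaction with the input
    OVERHEAD_MS = 10.0           # one-time setup cost, independent of input size

    def estimated_runtime_ms(n):
        # runtime = inputSize * lg(inputSize) * singleInputProcessTime + overhead
        return n * math.log2(n) * PER_ITEM_TIME_MS + OVERHEAD_MS

    for n in (100, 10_000, 1_000_000):
        total = estimated_runtime_ms(n)
        print(f"n={n:>9}: total ~ {total:10.1f} ms, overhead share ~ {OVERHEAD_MS / total:.1%}")

With these made-up numbers the overhead dominates at n = 100 but is noise by n = 1,000,000, which is why the simple c * n * lg(n) estimate in the question works fine for large inputs.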

NickLarsen