Hello, I have this code:

int foo(int n) 
{
    int x=2;
    while (x<n)
    {
        x = x*x*x;
    }

    return x;
}

Now, I need to analyze its time complexity. I noticed it reaches n much faster than in log(n) steps; that is, it takes fewer steps than an O(log(n)) algorithm would. I read the answer but have no idea how they got to it: it is O(log(log(n))). Now, how do you approach such a question? Thanks in advance.

+1  A: 

If the code inside the while loop were

x = 2*x;

x would reach n in O(log(n)) iterations. Since you're cubing x instead of just multiplying it by a constant, you'll reach n faster.

Bill the Lizard
No, if the code were x = 2*x, the complexity would be O(log(n)). With squaring it's still O(log(log(n))), just with a different constant factor.
Henrik
@Henrik: That's correct, thank you. I edited my response.
Bill the Lizard
A: 

Why not add a counter variable that counts the number of iterations of the loop, and print it out just before the function returns?

Then call the function for a range of values, e.g. 3 to 1,000,000 to start with, and plot the results using something like gnuplot.

Then see if the graph matches a known curve.
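As a rough illustration, here is a minimal sketch of that idea (foo_counted, the counter and the sample range are my own illustration, not from the original post):

#include <stdio.h>

/* Instrumented copy of foo(): counts the loop iterations and
   prints them just before returning. */
int foo_counted(int n)
{
    int x = 2;
    int iterations = 0;

    while (x < n)
    {
        x = x * x * x;   /* overflows int once x exceeds cbrt(INT_MAX), so keep n modest */
        iterations++;
    }

    printf("n = %d: %d iterations\n", n, iterations);
    return x;
}

int main(void)
{
    /* sample a few decades of n; all well within int range */
    for (int n = 3; n <= 1000000; n *= 10)
        foo_counted(n);

    return 0;
}

Plotting iterations against n over this range already shows the log(log(n)) flatness: the count creeps from 1 to 3 while n grows by five orders of magnitude.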

Tarski
that's a nice idea, but this is a question from an exam, and I should know how to solve it without such tools
bks
+6  A: 

think of it as a recursive function:

f(i) = f(i-1)**3

if you expand it:

f(i) = ((f(i-k)**3)**3...)**3 (k nested cubings) = f(i-k)**(3**k) = f(0)**(3**i)

the function grows as a power of a power (doubly exponentially), so the time (number of iterations) needed to reach a certain number, i.e. the inverse of the function, is the logarithm of the logarithm.

As in your example f(0) = 2, we want to know when f(i) >= n, where n is the input parameter (and i is the number of iterations):

f(i) = 2**(3**i) >= n
            3**i >= log_2(n)
               i >= log_3(log_2(n))

So to reach a value of n, it takes log_3(log_2(n)) iterations (rounded up when dealing with integers, so as to surpass n).

if the function were:

f(i) = 2*f(i-1) //e.g. x=2*x

then the pattern would be:

f(i) = 2*2*(...k times)*f(i-k) = f(i-k)*(2**k) = f(0)*(2**i)

And in this case the inverse of the function would be a single logarithm in base 2.

My math is not very rigorous, but I hope you'll get the idea.
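For a concrete check of the closed form (my numbers, not part of the original answer): f(4) = 2**(3**4) = 2**81, so any n up to 2**81 is surpassed in at most 4 iterations, and indeed log_3(log_2(2**81)) = log_3(81) = 4.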

fortran
that's indeed helpful because I know (sort of) how to handle recurrence expressions like T(n) = T(n-1) + C. The question is whether there's a method to turn any function into a recursive one?
bks
or at least rules of thumb?
bks
yes, any loop can be expressed with recursion; just write an expression that depends on the previous values. But sometimes it's not necessary; in this case it would have been even easier to skip the recursion and write down the analytic formula directly.
fortran
+1  A: 

Given

log ( A * x )     == log ( A ) + log ( x )
log ( x * x * x ) == 3 * log ( x )

So

log ( log ( x * x * x ) ) == log ( 3 * log ( x ) )
                          == log ( 3 ) + log ( log ( x ) )

How much faster or slower ( measured by number of iterations of the loop ) will this function be than your function?

#include <math.h>   /* for log() and exp() */

int log_foo ( int n )
{
    double          log_x = log ( 2 );
    const double    log_n = log ( n );

    while ( log_x < log_n )
    {
        log_x = 3 * log_x;
    }

    return ( int ) exp ( log_x );   /* truncates; may overflow int for large n */
}

And how much faster or slower will this function be than your function?

int log_log_foo ( int n )
{
    double          log_log_x = log ( log ( 2 ) );
    const double    log_log_n = log ( log ( n ) );
    const double    log_3     = log ( 3 );

    while ( log_log_x < log_log_n )
    {
        log_log_x += log_3;
    }

    return ( int ) exp ( exp ( log_log_x ) );   /* truncates; may overflow int for large n */
}

But this function only increments log_log_x by a constant, so it's easy to work out how many iterations it does.
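To see that concretely, here is a small harness (my addition; it inlines the three loops from foo, log_foo and log_log_foo above) that counts the iterations of each loop for the same n:

#include <math.h>
#include <stdio.h>

int main ( void )
{
    const int n = 1000000;

    /* the loop from the original foo() */
    int x = 2, a = 0;
    while ( x < n ) { x = x * x * x; a++; }

    /* the loop from log_foo() */
    double log_x = log ( 2 );
    const double log_n = log ( n );
    int b = 0;
    while ( log_x < log_n ) { log_x = 3 * log_x; b++; }

    /* the loop from log_log_foo() */
    double log_log_x = log ( log ( 2 ) );
    const double log_log_n = log ( log ( n ) );
    const double log_3 = log ( 3 );
    int c = 0;
    while ( log_log_x < log_log_n ) { log_log_x += log_3; c++; }

    printf ( "iterations: foo=%d log_foo=%d log_log_foo=%d\n", a, b, c );
    return 0;
}

For n = 1,000,000 all three counters come out as 3: the transformations change what each iteration computes, not how many iterations run.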

Pete Kirkham
the first would be slower than mine (actually, my lecturer's); about the second, I don't know
bks
When considering big-O you treat operations as constant time, so you need to ask whether it will do the same number of iterations. At what point would the original function skip an iteration that these two perform? For what values of x, n > 0 is x < n false but log(x) < log(n) true?
Pete Kirkham
the original function stops when x >= n, where x is cubed in each iteration. About the other part of your answer, I don't know :/ Isn't x < n ==> log(x) < log(n) for all x, n > 0?
bks
So if the condition is true whenever the corresponding condition is true in the original, does it have the same number of iterations or not?
Pete Kirkham
+4  A: 

Think about how x changes with the number of iterations through the loop. Each time, you cube it. So after i iterations, the value will be 2 cubed, cubed again... and so on, i times. Let's use x(i) to denote this expression. Let's say x(0)=2, x(1)=2**3, etc (I'm using a**b to mean a raised to the bth power).

We're done when x(i)>=n. How long does it take? Let's solve for i.

First, we take a log on both sides: ln(x(i))>=ln(n)

ln(x(i)) = ln(x(i-1))*3 = ln(x(i-2))*(3**2) = ... = ln(x(0))*(3**i)

(the above uses the property ln(x**b) == ln(x)*b)

So 3**i * ln(2) >= ln(n). Let's take another logarithm:

ln(3**i * ln(2)) = ln(ln(2)) + ln(3)*i

so ln(ln(2)) + ln(3)*i >= ln(ln(n))

Now we can solve for i: i >= ( ln(ln(n)) - ln(ln(2)) ) / ln(3)

We can ignore the constant factors and additive constants, and we're left with the conclusion that it takes O(log(log(n))) steps. That's the complexity of your algorithm.

Hopefully, breaking down all the steps like that helps.
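As a quick sanity check (my numbers, not from the original answer): for n = 10**6 we have ln(ln(n)) ≈ 2.63, ln(ln(2)) ≈ -0.37 and ln(3) ≈ 1.10, so i >= (2.63 + 0.37) / 1.10 ≈ 2.7, i.e. 3 iterations; and the loop indeed runs 3 times (2 → 8 → 512 → 512**3, which is >= 10**6).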

redtuna
that's helpful, seems like something I can comprehend, thank you
bks
+2  A: 

Let

L3 = log to the base 3
L2 = log to the base 2

Then the correct answer is O(L3(L2(n))) and NOT O(L2(L2(n))).

Start with x = x * 2. x will increase exponentially till it reaches n, thus making the time complexity O(L2(n))

Now consider x = x * x. x increases faster than the above. In every iteration the value of x jumps to the square of its previous value. Doing some simple math, here is what we get:

For x = 2:
n = 4, iterations taken = 1
n = 16, iterations taken = 2
n = 256, iterations taken = 3
n = 65536, iterations taken = 4

Thus, the time complexity is O(L2(L2(n))). You can verify this by plugging the above values of n into the formula.

Now coming to your problem, x = x * x * x. This will increase even faster than x = x * x. Here is the table:

For x = 2:
n = 8, iterations taken = 1
n = 512, iterations taken = 2
n = (512*512*512), iterations taken = 3
and so on

If you look at this carefully, it turns out to be O(L3(L2(n))). L2(n) gets you the exponent of two, but since you are taking the cube of x in every iteration, you have to take log to the base 3 of that to find the correct number of iterations.

So I think the correct answer is O(log-to-base-3(log-to-base-2(n))).

Generalizing this, if x = x * x * x * x * .. (k times), then the time complexity is O(log-to-base-k(log-to-base-2(n))).
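For example, with k = 2 and n = 65536 this gives log_2(log_2(65536)) = log_2(16) = 4, matching the four iterations in the table above.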

Ashwin
the base of the logarithm is irrelevant as `log_a(x) = log_b(x)/log_b(a)` and `O(c·x) = O(x)` for `c = const`
Christoph
Yeah, I realize that now. It is kinda confusing: it gives the impression that log_4(x) would run twice as fast as log_2(x).
Ashwin
+1  A: 

Let i be the number of iteration steps and x(i) the value of x after i steps. We have

x(0) = 2
x(i) = x(i-1)³

The total number of steps is the biggest i such that x(i) < n (plus one, which does not change the asymptotics).

We have

log x(i) = log x(i-1)³
         = 3·log x(i-1)
         = 3·log x(i-2)³
         = 3²·log x(i-2)
         = …
         = 3^i·log x(0)
         = 3^i·log 2

⇒  log log x(i) = log (3^i·log 2)
                 = log 3^i + log log 2
                 = i·log 3 + log log 2

The logarithm is strictly increasing, so

x(i) < n ⇔        log log x(i) < log log n
         ⇔ i·log 3 + log log 2 < log log n
         ⇔                   i < (log log n - log log 2) / log 3 ∈ O(log log n)
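For example, with n = 2**81 (my numbers): log log n - log log 2 = log(log n / log 2) = log 81, so x(i) < n only for i < log 81 / log 3 = 4, and the loop cubes exactly four times: 2 → 2^3 → 2^9 → 2^27 → 2^81.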
Christoph