views: 459

answers: 11
I want to write a really, really, slow program for MATLAB. I'm talking like, O(2^n) or worse. It has to finish, and it has to be deterministically slow, so no "if rand() = 123,123, exit!" This sounds crazy, but it's actually for a distributed systems test. I need to create a .m file, compile it (with MCC), and then run it on my distributed system to perform some debugging operations.

The program must constantly be doing work, so sleep() is not a valid option.

I tried making a random large matrix and finding its inverse, but this was completing too quickly. Any ideas?

+1  A: 

You could ask it to factor(X) for a suitably large X
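
A minimal sketch of the call, assuming an arbitrary large prime for X (as the comments below discuss, the choice of X matters a great deal):

X = 2^31 - 1;            % arbitrary large (Mersenne) prime; swap in something harder as needed
tic; f = factor(X); toc  % time how long the factorization takes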

Adam Wright
`factor(X)` for a suitably large prime `X` will be considerably slower, since there are no small factors to find.
Matthew Scharley
For large X with certain properties, it will be heavy work with minimal effort in setting up the problem. Unlikely to be O(2^n), but he just needs something slow.
Adam Wright
And for large X with certain properties it will be trivially easy. For instance any X that == 2^n will factor in no time at all. This is why I recommend a prime or pseudoprime. There are lists of either on Google.
Matthew Scharley
Well, I presume he's smart enough to at least try whatever integer he picks first
Adam Wright
+3  A: 

How about using inv? It has been reported to be quite slow.

_NT
I tried that... it can do a 1000x1000 matrix in 4 seconds...
Avatar_Squadron
Then try a 10000000x10000000 matrix
Vinko Vrsalovic
@Vinko - Hehehe :)
ldigas
+2  A: 

I don't speak MATLAB but something equivalent to the following might work.

loops = 0
counter = 0
while (loops < MAX_INT) {
    counter = counter + 1;
    if (counter == MAX_INT) {
        loops = loops + 1;
        counter = 0;
    }
}

This will iterate MAX_INT*MAX_INT times. You can put some computationally heavy thing in the loop for it to take longer if this is not enough.
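
A rough MATLAB translation might look like this (a sketch: intmax('int32') stands in for MAX_INT, and in practice you would shrink the bounds until the total run time is something that actually finishes):

MAX_INT = double(intmax('int32'));  % stand-in for MAX_INT; use a smaller bound in practice
loops = 0;
counter = 0;
while loops < MAX_INT
    counter = counter + 1;
    if counter == MAX_INT
        loops = loops + 1;
        counter = 0;
    end
end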

Vinko Vrsalovic
This is especially good as loops in MATLAB are notoriously slow and it's possible to control exactly how long your slow code takes.
David Johnstone
@David: JIT compilation!
Jacob
@David J: The idea that MATLAB is slow with loops, while once true, is not so much anymore. The JIT that we added a few years ago really changed this.
MatlabDoug
Well, new versions may be faster, but loops in matlab are still several orders of magnitude slower than those in C-like or java-like platforms. Course, if you want a more "notorious" level of slow, just use function calls...
Eamon Nerbonne
+2  A: 

Do some work in a loop. You can tune the time it takes to complete using the number of loop iterations.
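
A minimal sketch of that idea, where iters is the tuning knob and the work inside the loop is arbitrary:

iters = 1e7;            % knob: more iterations -> longer run
s = 0;
for k = 1:iters
    s = s + sin(k);     % arbitrary floating-point busywork
end
disp(s)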

Robert Harvey
+1  A: 

You could also test if a given input is prime by just dividing it by all smaller numbers. This would give you O(n^2).
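
In MATLAB that might look like this (a sketch; n here is an arbitrary large prime so the loop never exits early):

n = 2^31 - 1;           % arbitrary large prime, so no divisor is ever found
isPrime = true;
d = 2;
while d < n
    if mod(n, d) == 0
        isPrime = false;
        break;
    end
    d = d + 1;
end
disp(isPrime)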

Gabb0
that's O(n), not O(n^2).
Jason S
+4  A: 

Count to 2^n. Optionally, make a slow function call in each iteration.
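
For instance (a sketch; n is the knob, and the run time grows exponentially with it):

n = 35;                 % knob: each increment roughly doubles the run time
k = 0;
while k < 2^n
    k = k + 1;          % optionally call something slow here as well
end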

mobrule
+4  A: 

If you want real work that's easy to set up and stresses CPU way over memory:

  • Large dense matrix inversion (not slow enough? make it bigger.)
  • Factor an RSA number
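
As a sketch of the first suggestion (N is the knob; the cost of inv grows roughly as N^3):

N = 5000;               % knob: make it bigger until it is slow enough
A = rand(N);
tic; inv(A); toc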
280Z28
+4  A: 

This naive implementation of the Discrete Fourier Transform takes ~9 seconds for a 2048-element input vector x on my 1.86 GHz single-core machine. Going to 4096 inputs extends the time to ~35 seconds, close to the 4x I would expect for O(N^2). I don't have the patience to try longer inputs :)

function y = SlowDFT(x)

t = cputime;
y = zeros(size(x));
for c1=1:length(x)
    for c2=1:length(x)
        y(c1) = y(c1) + x(c2)*(cos((c1-1)*(c2-1)*2*pi/length(x)) - ...
                            1j*sin((c1-1)*(c2-1)*2*pi/length(x)));
    end
end
disp(cputime-t);

EDIT: Or if you're looking to stress memory more than CPU:

function y = SlowDFT_MemLookup(x)

t = cputime;
y = zeros(size(x));
cosbuf = cos((0:1:(length(x)-1))*2*pi/length(x));
for c1=1:length(x)
    cosctr = 1;
    sinctr = round(3*length(x)/4)+1;
    for c2=1:length(x)
         y(c1) = y(c1) + x(c2)*(cosbuf(cosctr) ...
                            -1j*cosbuf(sinctr));
         cosctr = cosctr + (c1-1);
         if cosctr > length(x), cosctr = cosctr - length(x); end
         sinctr = sinctr + (c1-1);
         if sinctr > length(x), sinctr = sinctr - length(x); end
    end
end
disp(cputime-t);

This is faster than calculating sin and cos on each iteration. A 2048-element input took ~3 seconds, and a 16384-element input took ~180 seconds.
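
A quick way to exercise both versions and sanity-check them against the built-in FFT (a sketch; it assumes the two functions above are saved as SlowDFT.m and SlowDFT_MemLookup.m on the path):

x = randn(2048, 1);            % test input
y1 = SlowDFT(x);               % naive version: recomputes sin/cos every iteration
y2 = SlowDFT_MemLookup(x);     % table version: heavier on memory access
disp(max(abs(y1 - fft(x))));   % both differences should be down at round-off level
disp(max(abs(y2 - fft(x))));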

mtrw
+1  A: 

Try this one:

tic
isprime( primes(99999999) );
toc


EDIT:

For a more extensive set of tests, use the benchmarks below (perhaps even running them for multiple repetitions):

disp(repmat('-',1,85))
disp(['MATLAB Version ' version])
disp(['Operating System: ' system_dependent('getos')])
disp(['Java VM Version: ' version('-java')]);
disp(['Date: ' date])
disp(repmat('-',1,85))

N = 3000;   % matrix size
A = rand(N,N);  
A = A*A;

tic;  A*A; t=toc;
fprintf('A*A \t\t\t%f sec\n', t)

tic; [L,U,P] = lu(A); t=toc; clear L U P
fprintf('LU(A)\t\t\t%f sec\n', t)

tic; inv(A); t=toc;
fprintf('INV(A)\t\t\t%f sec\n', t)

tic; [U,S,V] = svd(A); t=toc; clear U S V
fprintf('SVD(A)\t\t\t%f sec\n', t)

tic; [Q,R,P] = qr(A); t=toc; clear Q R P
fprintf('QR(A)\t\t\t%f sec\n', t)

tic; [V,D] = eig(A); t=toc; clear V D
fprintf('EIG(A)\t\t\t%f sec\n', t)

tic; det(A); t=toc;
fprintf('DET(A)\t\t\t%f sec\n', t)

tic; rank(A); t=toc;
fprintf('RANK(A)\t\t\t%f sec\n', t)

tic; cond(A); t=toc;
fprintf('COND(A)\t\t\t%f sec\n', t)

tic; sqrtm(A); t=toc;
fprintf('SQRTM(A)\t\t%f sec\n', t)

tic; fft(A(:)); t=toc;
fprintf('FFT\t\t\t%f sec\n', t)

tic; isprime(primes(10^7)); t=toc;
fprintf('Primes\t\t\t%f sec\n', t)

The following are the results on my machine using N=1000 for one iteration only (note that primes uses 10^7 as its upper bound, not 10^8, which takes far more time):

A*A             0.178329 sec
LU(A)           0.118864 sec
INV(A)          0.319275 sec
SVD(A)          15.236875 sec
QR(A)           0.841982 sec
EIG(A)          3.967812 sec
DET(A)          0.121882 sec
RANK(A)         1.813042 sec
COND(A)         1.809365 sec
SQRTM(A)        22.750331 sec
FFT             0.113233 sec
Primes          27.080918 sec
Amro
+2  A: 

Easy! Go back to your Turing machine roots and think of processes that are O(2^n) or worse.

Here's a fairly simple one (warning, untested but you get the point)

N = 12; radix = 10;
odometer = zeros(N, 1);          % N base-10 digits, all starting at 0
done = false;
while (~done)
    done = true;
    % increment the odometer by one, carrying into higher digits as needed
    for i = 1:N
        odometer(i) = odometer(i) + 1;
        if (odometer(i) >= radix)
            odometer(i) = 0;     % this digit rolled over: carry into the next one
        else
            done = false;        % no carry needed, so we haven't wrapped around yet
            break;
        end
    end
end

Even better, how about calculating Fibonacci numbers recursively? Runtime is O(2^N), since fib(N) has to make two function calls fib(N-1) and fib(N-2), but stack depth is O(N), since only one of those function calls happens at a time.

function y = fib(n)
   if (n <= 1)
      y = 1;
   else
      y = fib(n-1) + fib(n-2);
   end
end
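
For a sense of scale (a sketch; it assumes the function above is saved as fib.m on the path, and fib(30) already makes a couple of million recursive calls):

tic; f = fib(30); toc   % run time grows exponentially as the argument increases
disp(f)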
Jason S
You mixed up + and *. Not that that matters much, given the context ;-).
Eamon Nerbonne
? neither of those functions should have a multiply; the odometer is only an increment, and fibonacci is additive.
Jason S
+1  A: 

This will run at 100% CPU for WANTED_TIME seconds of CPU time:

WANTED_TIME = 2^n; % seconds of CPU time; n must be defined beforehand
t0=cputime; 
t=cputime; 
while (t-t0 < WANTED_TIME)
    t=cputime; 
end;
Ofri Raviv