I'm looking to write a quick benchmark program that can be compiled and run on various machines. Rather than using commercial or open-source options, I'd rather have my own to play around with threading and algorithm optimization techniques.

I have a couple that I use already, including recursively calculating the nth number of the Fibonacci sequence, and seeding and calling rand() a few thousand times.
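
For reference, the naive doubly recursive Fibonacci version looks roughly like this (the choice of n = 40 is arbitrary):

#include <stdio.h>

/* deliberately naive doubly recursive Fibonacci; exponential time */
static unsigned long long fib(unsigned n)
{
    return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

int main(void)
{
    printf("%llu\n", fib(40));   /* n around 40 already takes a noticeable while */
    return 0;
}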

Are there any other algorithms that are relatively simple, but at the same time computationally-intensive (and possibly math-related)?

(Note that these operations will be implemented in the C language.)

+6  A: 

The Ackermann function is usually a fun one, but don't give it very large inputs if you want it to finish in your lifetime.
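
For reference, a naive recursive version in C might look like this (keep m at 3 or below, and n small):

#include <stdio.h>

/* naive recursive Ackermann; grows explosively with m */
static unsigned long ack(unsigned long m, unsigned long n)
{
    if (m == 0) return n + 1;
    if (n == 0) return ack(m - 1, 1);
    return ack(m - 1, ack(m, n - 1));
}

int main(void)
{
    printf("%lu\n", ack(3, 10));   /* 8189; ack(4, 2) is already hopeless */
    return 0;
}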

Welbog
+2  A: 

Inverting big matrices.
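
A straightforward Gauss-Jordan inversion of a random matrix is one way to sketch this (the size N and the diagonally dominant test matrix are just illustrative assumptions):

#include <stdio.h>
#include <stdlib.h>

#define N 512   /* matrix size; raise it to make the run longer */

static double a[N][N], inv[N][N];

int main(void)
{
    /* random matrix, made diagonally dominant so it is safely invertible */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            a[i][j] = (i == j) ? N + rand() % 100 : rand() % 100 / 100.0;

    /* start from the identity */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            inv[i][j] = (i == j);

    /* Gauss-Jordan elimination without pivoting (fine for this matrix) */
    for (int col = 0; col < N; col++) {
        double p = a[col][col];
        for (int j = 0; j < N; j++) { a[col][j] /= p; inv[col][j] /= p; }
        for (int row = 0; row < N; row++) {
            if (row == col) continue;
            double f = a[row][col];
            for (int j = 0; j < N; j++) {
                a[row][j]   -= f * a[col][j];
                inv[row][j] -= f * inv[col][j];
            }
        }
    }
    printf("%f\n", inv[0][0]);   /* use the result so the work is not optimized away */
    return 0;
}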

JC
A: 

Finding prime numbers is considered quite time-consuming.
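
A plain Sieve of Eratosthenes is a minimal sketch (the limit is an arbitrary choice):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const int limit = 50000000;           /* arbitrary; raise it for a longer run */
    char *composite = calloc(limit + 1, 1);
    int count = 0;

    for (int i = 2; i <= limit; i++) {
        if (composite[i]) continue;
        count++;                          /* i is prime */
        for (long long j = (long long)i * i; j <= limit; j += i)
            composite[j] = 1;
    }
    printf("%d primes below %d\n", count, limit);
    free(composite);
    return 0;
}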

AlbertoPL
+2  A: 

You could calculate big primes or factorize integers.
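
For the factoring side, naive trial division is a minimal sketch (the value of n is arbitrary; a product of two large primes makes the run much longer):

#include <stdio.h>

int main(void)
{
    unsigned long long n = 600851475143ULL;   /* any big number will do */

    /* naive trial division; print each prime factor as it is found */
    for (unsigned long long d = 2; d * d <= n; d++) {
        while (n % d == 0) {
            printf("%llu ", d);
            n /= d;
        }
    }
    if (n > 1) printf("%llu", n);
    printf("\n");
    return 0;
}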

Jonathan
A: 

This does a lot of addition:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    long long c = 0;   /* a plain int would overflow long before the loops finish */
    for (int n = 0; n < INT_MAX; n++)
        for (int m = 0; m < INT_MAX; m++)
            c++;
    printf("%lld\n", c);   /* using c keeps the compiler from removing the loops */
}
Daniel Earwicker
Actually, it doesn't. Any decent compiler will see that you did absolutely nothing with n or m, and will therefore optimize it all away.
Ken White
And adding numbers is one of the things computers do best. Arbitrary width float multiplication would be more intensive!
Aiden Bell
@Ken White - have made adjustments.
Daniel Earwicker
@Aiden Bell - what if you did *even more* addition?
Daniel Earwicker
+2  A: 

Take a look at the NAS Parallel Benchmarks. These were originally written by NASA in Fortran for supercomputers using MPI (and are still available that way), but there are also C, Java, and OpenMP implementations available now.

Most of these are very computationally intensive, as they're intended to be representative of numerical algorithms used in scientific computing.

tgamblin
+2  A: 

Fractals

(at various resolutions). Some fractal source in C (without OpenGL):
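
A bare escape-time Mandelbrot loop over a grid is one minimal sketch (the resolution and iteration cap are arbitrary assumptions):

#include <stdio.h>

int main(void)
{
    const int width = 2000, height = 2000, max_iter = 1000;
    long inside = 0;

    for (int py = 0; py < height; py++) {
        for (int px = 0; px < width; px++) {
            double cr = -2.0 + 3.0 * px / width;    /* map pixel to [-2, 1] */
            double ci = -1.5 + 3.0 * py / height;   /* map pixel to [-1.5, 1.5] */
            double zr = 0.0, zi = 0.0;
            int it = 0;
            while (it < max_iter && zr * zr + zi * zi <= 4.0) {
                double t = zr * zr - zi * zi + cr;
                zi = 2.0 * zr * zi + ci;
                zr = t;
                it++;
            }
            if (it == max_iter) inside++;
        }
    }
    printf("%ld points did not escape\n", inside);
    return 0;
}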

Aiden Bell
+1: I like this one because you get to enjoy the results. Or if you use an algorithm that improves itself iteratively you can watch it working, which is even more fun!
Welbog
Yep, just don't use a GPU through OpenGL; obviously that would defeat the benchmarking purpose. You can also have fun with lots of complex math that abuses the math library for benchmarking and makes a pretty fractal.
Aiden Bell
+3  A: 

I know you said you wanted to make your own, but perhaps you could draw upon existing benchmarks for inspiration. The Computer Language Benchmarks Game has run many programming languages through a set of benchmarks. Perhaps you can get some ideas by looking at their benchmarks.

Some quick ideas off the top of my head:

  • Matrix multiplication: multiplying 2 large matrices is relatively computationally intensive, though you will have to take caching into account (see the sketch after this list)

  • Generating prime numbers

  • Integer factorization

  • Numerical methods for solving ODEs - Runge-Kutta, for example
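
For the matrix multiplication item above, the naive triple loop is already enough to keep a core busy (N is an arbitrary assumption; the loop order is where the caching effects show up):

#include <stdio.h>
#include <stdlib.h>

#define N 1024   /* arbitrary size */

static double a[N][N], b[N][N], c[N][N];

int main(void)
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            a[i][j] = rand() / (double)RAND_MAX;
            b[i][j] = rand() / (double)RAND_MAX;
        }

    /* naive i-j-k order; swapping to i-k-j is the classic cache-friendly tweak */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            double sum = 0.0;
            for (int k = 0; k < N; k++)
                sum += a[i][k] * b[k][j];
            c[i][j] = sum;
        }

    printf("%f\n", c[N - 1][N - 1]);   /* use the result */
    return 0;
}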

Falaina
Raytracing is another good CPU intensive task.
KitsuneYMG
A: 

Checkout the benchmarks from the language shootout: http://shootout.alioth.debian.org/

However: benchmarks are only benchmarks; they don't necessarily tell you a lot about the real world and can, on the contrary, be misleading.

bayer
+1  A: 

Try to calculate thousands or millions of pi digits. There are quite a few formulas for that task.
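
One self-contained way to do that without a bignum library is the Rabinowitz-Wagon spigot; here is a rough sketch (the digit count is arbitrary, and the last few printed digits can be off, so compute a few extra):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const int digits = 1000;                  /* arbitrary; raise it for a longer run */
    const int len = digits * 10 / 3 + 1;
    int *a = malloc(len * sizeof *a);
    int nines = 0, predigit = 0;

    for (int i = 0; i < len; i++) a[i] = 2;   /* pi = 2 + 1/3*(2 + 2/5*(2 + ...)) */

    for (int j = 0; j < digits; j++) {
        int q = 0;
        for (int i = len - 1; i >= 0; i--) {
            int x = 10 * a[i] + q * (i + 1);
            a[i] = x % (2 * i + 1);
            q = x / (2 * i + 1);
        }
        a[0] = q % 10;
        q /= 10;
        if (q == 9) {
            nines++;
        } else if (q == 10) {                 /* carry ripples into the held digits */
            printf("%d", predigit + 1);
            while (nines-- > 0) printf("0");
            nines = 0;
            predigit = 0;
        } else {
            if (j) printf("%d", predigit);
            predigit = q;
            while (nines-- > 0) printf("9");
            nines = 0;
        }
    }
    printf("%d\n", predigit);
    free(a);
    return 0;
}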

Nick D
Fabrice Bellard of QEMU fame discovered one of the most efficient pi calculation formulas ... bellard.org/pi ... multi-talented! ... and he won a source obfuscation contest with his implementation!
Aiden Bell
I have read an article about Fabrice Bellard in a Linux magazine. Smart guy!
Nick D
A: 

If you want to try parallelism, do lots of matrix math. The size of the matrix you can use will be limited by memory, but you can do as many iterations as you want.

This will stress the SIMD instructions that modern CPUs come with.

samoz
+1  A: 

There are some really nice ones in Project Euler; they are all math related and can be as time-consuming as you want if you use higher values.

SinneR
Not really; with the appropriate algorithms (which is what the puzzles try to motivate) most problems scale reasonably well. Unfortunately, for many of those problems common hardware is fast enough that brute force works, sometimes by a huge margin.
Javier
I know, but if he uses some of the harder problems (and those where increasing the values makes them much more time-consuming) he will get algorithms worth using in a benchmark app.
SinneR
A: 

You could try a tsort (Turbo Sort) with a very large input set. I understand this to be a common operation.
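
Filling a large array with random integers and handing it to the C library's qsort would do it; a minimal sketch (the element count is an arbitrary assumption):

#include <stdio.h>
#include <stdlib.h>

static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

int main(void)
{
    const size_t n = 20 * 1000 * 1000;   /* arbitrary size */
    int *v = malloc(n * sizeof *v);

    for (size_t i = 0; i < n; i++)
        v[i] = rand();

    qsort(v, n, sizeof *v, cmp_int);

    printf("%d %d\n", v[0], v[n - 1]);   /* touch the result */
    free(v);
    return 0;
}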

Frank V
A: 

Heuristics for NP-complete problems are a fun way to get some CPU-intensive code. You could code a "solution" :) for one of Karp's NP-complete problems.
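
For example, subset sum is one of Karp's 21 problems, and the brute-force "solution" is only a few lines (the weights and target below are made up):

#include <stdio.h>

int main(void)
{
    /* made-up instance: does any subset of w sum exactly to target? */
    const int w[] = { 31, 17, 58, 92, 11, 45, 66, 23, 84, 39,
                      72, 14, 53, 97, 28, 61, 35, 88, 49, 76,
                      19, 64, 41, 83, 26 };
    const int n = sizeof w / sizeof w[0];
    const long long target = 500;
    long long hits = 0;

    /* try all 2^n subsets; each extra element doubles the running time */
    for (unsigned long long mask = 0; mask < (1ULL << n); mask++) {
        long long sum = 0;
        for (int i = 0; i < n; i++)
            if (mask & (1ULL << i)) sum += w[i];
        if (sum == target) hits++;
    }
    printf("%lld subsets hit the target\n", hits);
    return 0;
}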

tr9sh