I'm investigating using Nvidia GPUs for Monte-Carlo simulations. However, I would like to use the GSL random number generators and also a parallel random number generator such as SPRNG. Does anyone know if this is possible?

Update

I've played about with RNGs on GPUs. At present there isn't a nice solution. The Mersenne Twister that comes with the CUDA SDK isn't really suitable for (my) Monte-Carlo simulations, since it takes an incredibly long time to generate seeds.

The NAG libraries are more promising. You can generate random numbers either in batches or in individual threads. However, only a few distributions are currently supported: uniform, exponential, and normal.

A: 

You will have to implement them yourself.
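
For a flavour of what that involves, here is a toy sketch of a per-thread generator in CUDA. It uses Marsaglia's xorshift64 purely to show the mechanics of keeping one generator state per thread; the per-thread seed offset used here is a made-up illustration and does not give statistically independent streams.

```cuda
// Toy per-thread generator: Marsaglia's xorshift64 (shift triple 13, 7, 17).
// Illustrative only -- it shows how each thread keeps its own state in
// registers, NOT how to obtain statistically independent streams.
__device__ unsigned long long xorshift64(unsigned long long &s)
{
    s ^= s << 13;
    s ^= s >> 7;
    s ^= s << 17;
    return s;
}

__global__ void fill_uniform(double *out, int n_per_thread, unsigned long long base_seed)
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;

    // Hypothetical seeding: offset the seed by a fixed odd constant per thread.
    // The xorshift state must be non-zero.
    unsigned long long state = base_seed + (unsigned long long)tid * 0x9E3779B97F4A7C15ULL;
    if (state == 0) state = 1;

    for (int i = 0; i < n_per_thread; i++) {
        // Take the top 53 bits and scale to a double in [0, 1).
        out[tid * n_per_thread + i] =
            (double)(xorshift64(state) >> 11) * (1.0 / 9007199254740992.0);
    }
}
```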

Tobias P.
Hmmm...*always* worth mentioning that this does *not* mean designing one yourself. Use a well-understood, high-quality algorithm. Really.
dmckee
He was talking about two specific algorithms, so I think it's clear he will implement those two algorithms rather than invent something new.
Tobias P.
GSL and SPRNG are not algorithms, they are libraries. They contain a number of different generators.
Alexandros Gezerlis
+5  A: 

Massively parallel random number generation, as you need it for GPUs, is a difficult problem and an active research topic. You have to be careful not only to use a good sequential generator (these you find in the literature) but also to have something that guarantees the parallel streams are independent. Pairwise independence is not sufficient for a good Monte Carlo simulation. AFAIK there is no good public-domain code available.
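
To make the stream-splitting problem concrete: one classic technique is leapfrogging, where thread t takes elements t, t+P, t+2P, ... of a single sequence, so the streams are disjoint by construction. A rough sketch for a 64-bit LCG (Knuth's MMIX constants) is below; disjointness, however, is not the same as statistical independence, which is exactly what makes this hard.

```cuda
// Leapfrog sketch for a 64-bit LCG: x_{n+1} = a*x_n + c (mod 2^64).
// Thread t emits elements t+1, t+1+P, t+1+2P, ... of the single global sequence.
// Disjoint streams only -- no claim of statistical quality or independence.

// Host side: compose the base step P times to get the "jump by P" map A*x + C.
static void leapfrog_coeffs(unsigned long long a, unsigned long long c, int P,
                            unsigned long long *A, unsigned long long *C)
{
    unsigned long long Aj = 1, Cj = 0;
    for (int i = 0; i < P; i++) {
        Cj = a * Cj + c;   // append one base step to the composite map
        Aj = a * Aj;
    }
    *A = Aj; *C = Cj;
}

__global__ void leapfrog_uniform(double *out, int n_per_thread, int P,
                                 unsigned long long a, unsigned long long c,
                                 unsigned long long A, unsigned long long C,
                                 unsigned long long seed)
{
    int t = blockIdx.x * blockDim.x + threadIdx.x;
    if (t >= P) return;

    // Advance the global sequence t+1 steps so thread t starts at x_{t+1}.
    unsigned long long x = seed;
    for (int i = 0; i <= t; i++) x = a * x + c;

    for (int k = 0; k < n_per_thread; k++) {
        // Top 53 bits of the state -> double in [0, 1).
        out[(size_t)k * P + t] = (double)(x >> 11) * (1.0 / 9007199254740992.0);
        x = A * x + C;   // jump P steps of the base recurrence in one multiply-add
    }
}

// Usage (host), e.g. with Knuth's MMIX constants:
//   unsigned long long a = 6364136223846793005ULL, c = 1442695040888963407ULL, A, C;
//   leapfrog_coeffs(a, c, P, &A, &C);
//   leapfrog_uniform<<<blocks, threads>>>(d_out, n_per_thread, P, a, c, A, C, seed);
```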

Jens Gustedt
+1  A: 

I've just found that NAG provide some RNG routines. These libraries are free for academics.

csgillespie
A: 

Use the Mersenne Twister PRNG, as provided in the CUDA SDK.

MJH
A: 

Here we use Sobol sequences on the GPUs.
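
For reference, the first dimension of a Sobol sequence is cheap to generate with the standard Gray-code update; higher dimensions need tabulated direction numbers (e.g. the Joe-Kuo tables). A minimal sketch, single-threaded just to show the recurrence:

```cuda
// First dimension of a Sobol sequence in Gray-code order.
// For dimension 1 the direction numbers are simply v_j = 2^(32-j); real use
// needs direction numbers per dimension and a parallel decomposition.
__global__ void sobol_dim1(double *out, unsigned int n)
{
    if (blockIdx.x != 0 || threadIdx.x != 0) return;   // one thread: recurrence only

    unsigned int x = 0;
    for (unsigned int i = 1; i <= n; i++) {
        unsigned int c = __ffs((int)~(i - 1));   // 1-based index of lowest zero bit of i-1
        x ^= 1u << (32 - c);                     // XOR in direction number v_c
        out[i - 1] = (double)x * (1.0 / 4294967296.0);  // scale to [0, 1)
    }
}
```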

Alexandre C.
+4  A: 

The GSL manual recommends the Mersenne Twister.

The Mersenne Twister authors have a version for Nvidia GPUs. I looked into porting this to the R package gputools, but found that I needed an excessively large number of draws (millions, I think) before the combination of 'generate on GPU and make available to R' was faster than just drawing in R (using only the CPU).

It really is a computation/communication tradeoff.

Dirk Eddelbuettel