Hello, I'm trying to convert a simple numerical analysis code (trapezium rule numerical integration) into something that will run on my CUDA-enabled GPU. There is a lot of literature out there, but it all seems far more complex than what is required here! My current code is:

#include <stdio.h>
#include <math.h>
#include <stdlib.h>
#define N 1000

double function(double);

int main(void)
{
   int i;
   double lower_bound, upper_bound, h, ans;

   printf("Please enter the lower and upper bounds: ");
   scanf(" %lf %lf", &lower_bound, &upper_bound);
   h = (upper_bound - lower_bound) / N;
   ans = (function(lower_bound) + function(upper_bound)) / 2.0;
   for (i = 1; i < N; ++i) {
      ans += function(lower_bound + i * h);
   }
   printf("The integral is: %.20lf\n", h * ans);

   return 0;
}

double function(double x)
{
   return sin(x);
}

This runs well until N becomes very large. I've made an implementation with OpenMP, which is faster, but I think it would be handy to know a little about CUDA too. Has anyone got any suggestions about where to start, or whether there is a painless way to convert this code? Many thanks, Jack.

A: 

You could get rid of the multiplication :D

   double nomul = lower_bound + h;
   for (i = 1; i < N; ++i) {
      ans += function(nomul);
      nomul += h;
   }
pmg
In my opinion, this does not work if you intend to move on to parallel computing.
wok
Because you need i to be a private variable, right?
Jack Medley
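
For what it's worth, the awkward part for parallelisation is not really i but the running value nomul, which each iteration inherits from the previous one. The original i * h form has no such loop-carried dependency, so a minimal OpenMP sketch of the summation loop (assuming the variable names from the question and a compiler switched on with something like -fopenmp) would be:

   /* Sketch only: the loop index of an "omp parallel for" is made private
      automatically, and reduction(+:ans) gives each thread its own partial
      sum that is combined at the end, so no shared state is updated
      inside the loop. */
   ans = (function(lower_bound) + function(upper_bound)) / 2.0;
   #pragma omp parallel for reduction(+:ans)
   for (i = 1; i < N; ++i) {
      ans += function(lower_bound + i * h);
   }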
A: 

First, go ahead and install CUDA on your computer. After that, try to run some of the examples available in the SDK. They may look a little complicated at first sight, but don't worry: there are tons of CUDA "Hello World" examples on the web.
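
To give an idea of what the conversion might look like, here is a rough, untested sketch (assuming a GPU and toolkit with double-precision support, e.g. compiled with nvcc -arch=sm_13 or newer) of the trapezium sum from the question mapped onto a CUDA kernel: each thread evaluates one interior point, each block reduces its values in shared memory, and the per-block partial sums are added on the host.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <cuda_runtime.h>

#define N 1000
#define THREADS 256

/* The function to integrate, evaluated on the device. */
__device__ double f(double x)
{
   return sin(x);
}

/* Each thread evaluates one interior point; each block then reduces its
   values to a single partial sum in shared memory. */
__global__ void trapezium(double lower, double h, double *partial)
{
   __shared__ double cache[THREADS];
   int i = blockIdx.x * blockDim.x + threadIdx.x + 1;   /* interior points 1..N-1 */

   cache[threadIdx.x] = (i < N) ? f(lower + i * h) : 0.0;
   __syncthreads();

   /* Tree reduction within the block. */
   for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
      if (threadIdx.x < stride)
         cache[threadIdx.x] += cache[threadIdx.x + stride];
      __syncthreads();
   }
   if (threadIdx.x == 0)
      partial[blockIdx.x] = cache[0];
}

int main(void)
{
   double lower = 0.0, upper = 3.14159265358979323846;   /* example bounds */
   double h = (upper - lower) / N;
   int blocks = (N - 1 + THREADS - 1) / THREADS;

   double *d_partial;
   cudaMalloc((void **)&d_partial, blocks * sizeof(double));
   trapezium<<<blocks, THREADS>>>(lower, h, d_partial);

   double *partial = (double *)malloc(blocks * sizeof(double));
   cudaMemcpy(partial, d_partial, blocks * sizeof(double), cudaMemcpyDeviceToHost);

   double ans = (sin(lower) + sin(upper)) / 2.0;   /* endpoint terms */
   for (int b = 0; b < blocks; ++b)
      ans += partial[b];
   printf("The integral is: %.20f\n", h * ans);

   cudaFree(d_partial);
   free(partial);
   return 0;
}

The interesting part is the in-block reduction: the GPU gives you no cheap global += across threads, so each block combines its own values and the final (small) sum over blocks is done back on the CPU.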

If you're looking for something fancier, you could try compiling this project (you'll need to install OpenCV), which converts an image to its grayscale representation (it has files to compile on Windows/Linux/Mac OS X, so it's worth taking a look if you need help compiling your projects).

karlphillip