views: 129
answers: 5

I have a device that acquires X-ray images. Due to some technical constraints, the detector is made of multiple tilted and partially overlapping tiles with heterogeneous pixel sizes. The image is thus distorted. The detector geometry is known precisely.

I need a function converting these distorted images into a flat image with a homogeneous pixel size. I have already done this on the CPU, but I would like to try OpenGL to use the GPU in a portable way.

I have no experience with OpenGL programming, and most of the information I could find on the web was useless for this use case. How should I proceed? How do I do this?

Images are 560x860 pixels, and we have batches of 720 images to process. I'm on Ubuntu.

+2  A: 

Rather than OpenGL, this sounds like a CUDA, or more generally GPGPU problem.

If you have C or C++ code to do it already, CUDA should be little more than figuring out the types you want to use on the GPU and how the algorithm can be tiled.

Andrew McGregor
I have already programmed in CUDA but was seeking a portable solution, which OpenGL should be able to provide. The other issue is that Ubuntu is not CUDA friendly.
chmike
CUDA runs perfectly on Ubuntu. I have done a lot with Ubuntu 9.04/9.10/10.04 and NVIDIA's CUDA SDK. Though of course CUDA is limited to NVIDIA hardware...
Danvil
Yup... CUDA and OpenCL work fine on Ubuntu. OpenCL is probably the better idea, it's more portable.
Andrew McGregor
How do you install the OpenCL-compatible drivers? I have an NVIDIA 8800GT on the production computer and could get a more powerful card (e.g. a GTX 285).
chmike
Just make sure you have the latest version of the video driver itself. OpenCL and CUDA install their own libraries, but they work with a standard NVIDIA proprietary video driver. An 8800GT is a pretty powerful GPU; it has roughly 15x the compute power of your CPU, even if it's a quad core.
Andrew McGregor
+4  A: 

OpenGL is for rendering polygons. You might be able to do multiple passes and use shaders to get what you want, but you are better off rewriting the algorithm in Open*C*L. The bonus is that you would then have something portable that will even use multi-core CPUs when no graphics accelerator card is available.

Goz
OpenGL is more portable than OpenCL, especially on Ubuntu.
chmike
OpenCL runs perfectly with Ubuntu. I have done a lot with Ubuntu 9.04-10.04 and NVIDIA's OpenCL SDK.
Danvil
@danvil How do you install the driver? I installed the driver "by hand", but every time the kernel is updated I find myself in VGA mode and have to reinstall the driver. I can do that on my work PC, but it is not acceptable on a production machine.
chmike
The driver kernel module must be recompiled to match the new kernel version. For some other modules this is done automatically, but the nvidia module must be rebuilt manually. (This is the only essential step of the driver re-installation.)
Danvil
This is the reason I can't use this on a production PC: users would get the VGA screen every time the kernel is updated. This is why I looked for an OpenGL solution, but I'm not even sure I could use OpenGL without the proprietary drivers.
chmike
I finally found this page http://www.sucka.net/2010/04/how-to-install-nvidia-video-driver-in-10-04-lucid-lynx/ explaining how to install the drivers. There is a small error (see my comment below the page), but it works. Now I'll try CUDA and OpenCL.
chmike
+1  A: 

If you want to do this with OpenGL, you'd normally supply the current data as a texture, write a fragment shader that processes that data, and set it up to render to a texture. Once the output texture is fully rendered, you can read it back to the CPU and write it out as a file.

I'm afraid it's hard to do much more than a very general sketch of the overall flow without knowing more about what you're doing -- but if (as you said) you've already done this with CUDA, you apparently already have a pretty fair idea of most of the details.
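To sketch that flow: a fragment shader can do the per-pixel remapping by looking up precomputed source coordinates in one texture and sampling the distorted image through them. The names below (`uSourceTex`, `uWarpTex`) and the two-texture layout are illustrative assumptions, not a fixed recipe:

```glsl
// Hypothetical remapping shader (GLSL for OpenGL 2.x).
// uSourceTex holds the distorted detector image.
// uWarpTex holds, for each output pixel, the (u, v) source coordinates
// precomputed on the CPU from the known detector geometry
// (e.g. packed into a floating-point two-channel texture).
uniform sampler2D uSourceTex;
uniform sampler2D uWarpTex;

void main()
{
    // Where does this output pixel come from in the distorted image?
    vec2 srcCoord = texture2D(uWarpTex, gl_TexCoord[0].st).rg;

    // Let the hardware's bilinear filtering interpolate the value.
    gl_FragColor = texture2D(uSourceTex, srcCoord);
}
```

On the CPU side you would draw a single full-screen quad into a framebuffer-object-attached texture of the target size, then read the result back with glReadPixels or glGetTexImage.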

Jerry Coffin
This is what I wanted to do. A more detailed explanation with example code would be very helpful, or at least a good pointer to a tutorial or fully functional code example I could follow.
chmike
+1  A: 
Crashworks
I'm starting to think you are probably right with your suggestion to use the CPU instead of the GPU. For my use case I have to apply the exact same correction to many images, so the mapping function parameters can be precomputed and each output pixel value becomes a simple weighted sum of input image pixels. I have already implemented such a program.
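For reference, that precomputed weighted-sum approach can be sketched like this. It is a minimal NumPy illustration with made-up indices and weights; in the real program both would be derived from the detector geometry:

```python
import numpy as np

def precompute_mapping(n_out, n_in):
    """Toy stand-in for the geometry-derived mapping: for each output
    pixel, a few contributing input-pixel indices and their weights.
    Here they are random; in reality they come from the detector geometry."""
    rng = np.random.default_rng(0)
    idx = rng.integers(0, n_in, size=(n_out, 4))  # 4 contributing input pixels each
    w = rng.random((n_out, 4))
    w /= w.sum(axis=1, keepdims=True)             # interpolation weights sum to 1
    return idx, w

def correct_image(flat_input, idx, w):
    """Apply the precomputed correction to one flattened image:
    each output pixel is a weighted sum of input pixels."""
    return (flat_input[idx] * w).sum(axis=1)

# The same (idx, w) tables are reused for every image in the batch.
n_in, n_out = 560 * 860, 640 * 800   # example input/output sizes
idx, w = precompute_mapping(n_out, n_in)
batch = np.ones((3, n_in))           # stand-in for 3 acquired images
corrected = np.stack([correct_image(img, idx, w) for img in batch])
```

Because the weights sum to 1, a constant input image maps to a constant output image, which makes a cheap sanity check of the mapping tables.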
chmike
A: 

You might find this tutorial useful (it's a bit old, but note that it does contain some OpenGL 2.x GLSL after the Cg section). I don't believe there are any shortcuts to image processing in GLSL, if that's what you're looking for... you do need to understand a lot of the 3D rasterization aspects and historical baggage to use it effectively, although once you have a framework for inputs and outputs set up, you can forget about that and experiment with your own algorithms in shader code relatively easily.

Having been doing this sort of thing for years (initially using Direct3D shaders, but more recently with CUDA), I have to say that I entirely agree with the posts here recommending CUDA/OpenCL. It makes life much simpler, and generally runs faster. I'd have to be pretty desperate to go back to implementing non-graphics algorithms with a graphics API now.

timday
Ok. I'll see whether the new version of Ubuntu has made any progress in making CUDA-enabled drivers simpler to use. Note that I would then use a texture to store the weights of the linear combination of input pixels. CUDA is indeed impressive: my tomographic reconstruction program was 600x faster than the CPU version on my GTX 280!
chmike