I have a particle system where the positions and various properties are stored in a vertex buffer object. The values are continuously updated by a CUDA kernel. Presently I am just rendering them using GL_POINTS as flat circles. What I am interested in is rendering these particles as more involved things, like 3D animated bird models for instance. I am trying to figure out what the best approach would be. For the sake of conversation, let's say the animation of the bird model is made up of 3 frames (position of the wings and whatnot).

I can see loading the models into display lists and looping over all the particles, translating, rotating, etc., the matrix and then calling the display list for each one. This doesn't seem like an ideal approach because it would require bringing all the particle data over to the host from the GPU just to do matrix operations and shove it back to the GPU (unless I can call the drawing functions from a CUDA kernel???).

I don't know much about shaders; would they be able to handle something like this?

At this point I am mostly looking for advice on what avenue to pursue, but if you know of any articles or tutorials that deal with the subject, kudos.


I am working with OpenGL natively on Windows 7 64-bit with C++. I am not using GLUT.

+4  A: 

You probably want to use EXT_draw_instanced. I've never actually used instancing, but most modern GPUs (GeForce 6 and up, I think) allow you to feed the GPU a model and a list of points and have it draw the model at every point.

I'll google some more info and see what I come up with....

Well, this is the official spec, and it looks like they have a tutorial on it.

http://www.opengl.org/registry/specs/ARB/draw_instanced.txt
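To make the idea concrete without GL boilerplate, here is a CPU-side analogue (a sketch with hypothetical names, not a real GL call): an instanced draw replays the same mesh once per instance, and the vertex shader tells instances apart via gl_InstanceID, which it can use to fetch a per-instance offset.

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// CPU analogue of instanced drawing: the mesh is specified once, and
// for each instance the "shader" sees the same vertices plus an
// instance id it uses to look up that instance's offset.
std::vector<Vec3> expandInstances(const std::vector<Vec3>& mesh,
                                  const std::vector<Vec3>& offsets) {
    std::vector<Vec3> out;
    out.reserve(mesh.size() * offsets.size());
    for (std::size_t id = 0; id < offsets.size(); ++id) {   // id plays the role of gl_InstanceID
        for (const Vec3& v : mesh) {
            out.push_back({ v.x + offsets[id].x,
                            v.y + offsets[id].y,
                            v.z + offsets[id].z });
        }
    }
    return out;
}
```

On the GPU, the offsets would come straight from the CUDA-updated buffer and the addition would happen in the vertex shader, so no particle data ever round-trips through the host.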

Timothy Baldridge
Agreed. This is exactly the kind of thing instancing is designed for.
Toji
A: 

Are point sprites sufficient?

hrnt
No, I am looking for actual 3D representations.
Mr Bell
A: 

You'll need to get creative to do something like that. If you use CUDA to compute each particle's position and rotation into two 3-component float textures, and then pass an "index" along with each vertex (incremented by one per geometry instance), you can write a vertex shader that uses vertex-texture lookup to fetch the relevant texel from each of the two textures and build a transformation matrix from them. You then transform the mesh's vertices by the matrix you have built.
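A minimal sketch of that last step (hypothetical helper; the real thing would live in the vertex shader after the texture fetches): given a fetched position and, for simplicity, a single rotation angle about the Z axis, the transform matrix could be assembled like this, in OpenGL's column-major layout.

```cpp
#include <array>
#include <cmath>

// Column-major 4x4 matrix, matching OpenGL's convention.
using Mat4 = std::array<float, 16>;

// Hypothetical sketch: build a model matrix from a per-particle
// position (px, py, pz) and a rotation about the Z axis, the same
// construction a vertex shader would perform after fetching both
// values from the two float textures.
Mat4 makeTransform(float px, float py, float pz, float angleZ) {
    const float c = std::cos(angleZ);
    const float s = std::sin(angleZ);
    return {  c,  s,  0, 0,    // column 0
             -s,  c,  0, 0,    // column 1
              0,  0,  1, 0,    // column 2
             px, py, pz, 1 };  // column 3: translation
}
```

In the shader you would then multiply each mesh vertex by this matrix; a full orientation (e.g. a bird's heading) would use three angles or a quaternion instead of one, but the construction is the same.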

Goz