views: 359

answers: 4
Hello everybody,

I am trying to solve the following problem using GPU capabilities: "given a point cloud P and an oriented plane described by a point and a normal (Pp, Np), return the points in the cloud that lie at a distance less than or equal to EPSILON from the plane".

Talking with a colleague of mine, I converged on the following solution:

1) prepare a vertex buffer of the points, with a texture coordinate attached so that every point has a different texture coordinate

2) set the projection to orthographic

3) rotate the cloud so that the normal of the plane is aligned with the -z axis, and offset it so that x,y,z = 0 corresponds to Pp

4) set the z clipping planes so that z is restricted to [-EPSILON; +EPSILON]

5) render to a texture

6) retrieve the texture from the graphics card

7) read the texture and see which points were rendered (in terms of their indexes); those are the points within the desired distance range.

Now the problems are the following:

q1) Do I need to open a window/frame to be able to do such an operation? I am working within MATLAB, calling MEX-C++. From experience I know that as soon as you open a new frame the whole suite crashes miserably!

q2) What's the primitive to give a GL point a texture coordinate?

q3) I am not too clear on how the render-to-texture would be implemented; any reference or tutorial would be awesome.

q4) How would you retrieve this texture from the card? Again, any reference or tutorial would be awesome.

I am on a tight schedule, so it would be nice if you could point me to the names of the techniques I should learn about, rather than to the GLSL specification document and the OpenGL API, as somebody has done. Those answers are a tiny bit too vague for my question.

Thanks a lot for any comment.

p.s. Also note that I would rather not use any resource like CUDA if possible; I'd like something that uses as many plain OpenGL elements as possible, without requiring me to write a new shader.

Note: cross posted at http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=245911#Post245911

A: 

OK, first a little disclaimer: I know nothing about 3D programming.

Now my purely mathematical idea:

A plane is given by a normal N (of unit length) and the distance L of the plane to the origin (the point [0,0,0]). The distance of a point X to the plane is the scalar product of N and X minus L. Hence you only have to check whether

|n . x - L| <= epsilon

. being the scalar product and | | the absolute value

Of course you first have to compute L; since N has unit length, L is just the scalar product of N with any point on the plane (e.g. Pp).

Maybe this helps.

Corporal Touchy
If you vote this down please leave a comment why. I'd like to know.
Corporal Touchy
+1  A: 

It's simple: let n be the normal of the plane, q a point on the plane, and p_i the points of the cloud.

n_u = n/norm(n)            //this is a normal vector of unit length
d   = scalarprod(n_u, q)   //this is the distance of the plane to the origin

for each point p_i
    d_i = abs(scalarprod(p_i, n_u) - d)  //this is the distance of the point to the plane

Obviously, "scalarprod" means "scalar product" and "abs" means "absolute value". If you wonder why this works, read the article on scalar products at Wikipedia.

Corporal Touchy
A: 

I have one question for Andrea Tagliasacchi: why?

Only if you are looking at thousands of points and possibly hundreds of planes would there be any benefit from the method outlined, as opposed to taking the dot product of each point with the plane, as outlined by Corporal Touchy.

Also, due to the finite resolution of the texture, you'll often find that two or more points project to the same pixel.

If you still want to do this, I could work up a sample GLUT program in C++, but I don't know how that would help with MATLAB, as I'm unfamiliar with it.

thing2k
I still don't see how it could be faster; the computations still have to be made. :) Maybe you know how that is done internally?
Corporal Touchy
A: 

It seems to me you should be able to implement something similar to Corporal Touchy's method as a vertex program rather than in a for loop, right? Maybe use a C API for GPU programming, such as CUDA?