Hi guys,
I've been working on a point cloud player that should ideally be able to visualize terrain points from a lidar capture and play them back sequentially at around 30 fps. However, I seem to have hit a wall caused by PCIe I/O bandwidth.
What I need to do every frame is take a large point cloud already stored in memory, calculate a color map based on height (something akin to MATLAB's jet map), then transfer the data to the GPU. This works fine on clouds with fewer than one million points. However, at about 2 million points it starts dropping below 30 frames per second. I realize this is a lot of data (2 million points per frame * [3 floats per position + 3 floats per color] * 4 bytes per float * 30 frames per second = around 1.34 gigabytes per second).
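For context, the per-point coloring I do on the CPU each frame looks roughly like this. The jetColor approximation and the computeColors/minZ/maxZ names are just illustrative, not my exact code:

#include <math.h>

// Clamped piecewise-linear approximation of MATLAB's jet map, t in [0,1].
void jetColor(float t, float *r, float *g, float *b)
{
    *r = fminf(fmaxf(1.5f - fabsf(4.0f * t - 3.0f), 0.0f), 1.0f);
    *g = fminf(fmaxf(1.5f - fabsf(4.0f * t - 2.0f), 0.0f), 1.0f);
    *b = fminf(fmaxf(1.5f - fabsf(4.0f * t - 1.0f), 0.0f), 1.0f);
}

// Fill colorData (3 floats per point) from the z of each vertex.
void computeColors(const float *vertData, float *colorData,
                   int numPoints, float minZ, float maxZ)
{
    for (int i = 0; i < numPoints; ++i) {
        float t = (vertData[3*i + 2] - minZ) / (maxZ - minZ);
        jetColor(t, &colorData[3*i], &colorData[3*i + 1], &colorData[3*i + 2]);
    }
}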
My rendering code looks something like this right now:
glPointSize(ptSize);

glEnableClientState(GL_VERTEX_ARRAY);
if (colorflag) {
    glEnableClientState(GL_COLOR_ARRAY);
} else {
    glDisableClientState(GL_COLOR_ARRAY);
    glColor3f(1, 1, 1);   // flat white when the color map is off
}

// Stream this frame's positions into the vertex VBO
glBindBuffer(GL_ARRAY_BUFFER, vbobj[VERT_OBJ]);
glBufferData(GL_ARRAY_BUFFER, cloudSize, vertData, GL_STREAM_DRAW);
glVertexPointer(3, GL_FLOAT, 0, 0);

// Stream this frame's colors into the color VBO
glBindBuffer(GL_ARRAY_BUFFER, vbobj[COLOR_OBJ]);
glBufferData(GL_ARRAY_BUFFER, cloudSize, colorData, GL_STREAM_DRAW);
glColorPointer(3, GL_FLOAT, 0, 0);

glDrawArrays(GL_POINTS, 0, numPoints);

glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_COLOR_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, 0);
The vertData and colorData pointers change every frame.
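So the per-frame flow is basically this sketch; the Cloud struct and nextCloud()/drawCloud() are just placeholders for my own loading code and the GL code above:

// Placeholders to show the per-frame flow, not my actual code:
typedef struct {
    float *points;       // 3 floats per point (x, y, z)
    int    numPoints;
    float  minZ, maxZ;   // height range for the color map
} Cloud;

extern Cloud *nextCloud(void);   // next capture, already in RAM
extern void   drawCloud(void);   // the GL code shown above
extern void   computeColors(const float *, float *, int, float, float);
extern float *vertData, *colorData;
extern int    numPoints, cloudSize;

void playFrame(void)
{
    Cloud *c = nextCloud();
    vertData  = c->points;
    computeColors(vertData, colorData, c->numPoints, c->minZ, c->maxZ);
    numPoints = c->numPoints;
    cloudSize = numPoints * 3 * (int)sizeof(float);   // bytes for 3 floats per point
    drawCloud();
}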
What I'd like is to play back at a minimum of 30 frames per second even with larger clouds later on, which might reach up to 7 million points per frame. Is this even possible? Or would it be easier to grid the points, construct a heightmap, and display that somehow? I'm still pretty new to 3D programming, so any advice would be appreciated.
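(By gridding I mean something along these lines; the grid size and the names here are made up for illustration.)

// Bin points into a regular xy grid and keep the highest z per cell,
// then render that as a heightmap instead of the raw points.
#define GRID_N 1024

void buildHeightmap(const float *vertData, int numPoints,
                    float minX, float minY, float cellSize,
                    float heightmap[GRID_N][GRID_N])
{
    for (int y = 0; y < GRID_N; ++y)
        for (int x = 0; x < GRID_N; ++x)
            heightmap[y][x] = -1e30f;              // "no data" marker

    for (int i = 0; i < numPoints; ++i) {
        int gx = (int)((vertData[3*i + 0] - minX) / cellSize);
        int gy = (int)((vertData[3*i + 1] - minY) / cellSize);
        if (gx < 0 || gx >= GRID_N || gy < 0 || gy >= GRID_N)
            continue;
        float z = vertData[3*i + 2];
        if (z > heightmap[gy][gx])
            heightmap[gy][gx] = z;                 // keep the highest return
    }
}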
Thanks in advance