I am currently working on an in-house GIS app. Background images are loaded in OpenGL by breaking the image down into tiles (which I guess are sometimes termed texels) and mipmapping them, after which a display list is built to texture-map each tile onto a rectangle. This sounds pretty standard, but the issue is that for images whose dimensions do not divide neatly into 2^n x 2^m pixel tiles, the remainders are currently thrown away. Even if I were to capture the remainders and handle them in some way, I can't imagine that the answer is to keep testing ever-smaller tile subdivisions until the entire image is captured on neat boundaries. Or is it?
In some cases the images I'm loading are GeoTIFFs, and I need every single pixel to appear in my background. I've heard that glDrawPixels is slow. I know I could benchmark it myself, but I have a feeling that people in this space are using textures in OpenGL, not pushing raw pixel dumps every frame.
I'm new to OpenGL, and I believe the app is limiting itself to OpenGL 1.1 calls.
I can post code if it will clarify.
Thanks in advance!