Currently, I'm able to load a statically sized texture which I have created. In this case it's 512 x 512.

This code is from the header:

#define TEXTURE_WIDTH 512
#define TEXTURE_HEIGHT 512

GLubyte textureArray[TEXTURE_HEIGHT][TEXTURE_WIDTH][4];

Here's the usage of glTexImage2D:

glTexImage2D(
 GL_TEXTURE_2D, 0, GL_RGBA,
 TEXTURE_WIDTH, TEXTURE_HEIGHT,
 0, GL_RGBA, GL_UNSIGNED_BYTE, textureArray);

And here's how I'm populating the array (rough example, not exact copy from my code):

for (int column = 0; column < TEXTURE_WIDTH; column++)
{
    for (int row = 0; row < TEXTURE_HEIGHT; row++)
    {
        textureArray[column][row][0] = (GLubyte)pixelValue1;
        textureArray[column][row][1] = (GLubyte)pixelValue2;
        textureArray[column][row][2] = (GLubyte)pixelValue3;
        textureArray[column][row][3] = (GLubyte)pixelValue4;
    }
}

How do I change that so that there's no need for TEXTURE_WIDTH and TEXTURE_HEIGHT? Perhaps I could use a pointer-style array and dynamically allocate the memory...

Edit:

I think I see the problem: in C++, a dynamically sized multi-dimensional array can't really be done directly. The workaround, as pointed out by Budric, is to use a single-dimensional array sized to all three dimensions multiplied together:

GLbyte *array = new GLbyte[xMax * yMax * zMax];

And to access, for example, x/y/z of 1/2/3, you'd need to compute the flat index from the dimensions (with x varying fastest):

GLbyte byte = array[(3 * yMax + 2) * xMax + 1];    // (z * yMax + y) * xMax + x

However, the problem is, I don't think the glTexImage2D function supports this. Can anyone think of a workaround that would work with this OpenGL function?

Edit 2:

Attention OpenGL developers, this can be overcome by using a single-dimensional array of pixels, with the dimensions nested inside one another:

[column 0: [row 0: channel 0 ... n] [row 1: channel 0 ... n] ... [row n]] [column 1: ...] ... [column n]

... so there's no need to use a 3-dimensional array. In this case I've had to use this workaround, as dynamically sized 3-dimensional arrays are apparently not directly possible in C++.
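
For example, a rough sketch of what I mean (the variable names here are illustrative, not from my actual code):

int width = 512, height = 512;
GLubyte *pixels = new GLubyte[width * height * 4];

// flat index of what would have been texture[column][row][channel]:
// each column holds 'height' rows, each row holds 4 channel bytes
pixels[(column * height + row) * 4 + channel] = (GLubyte)pixelValue;

// the flat array can then be passed straight to glTexImage2D:
glTexImage2D(
    GL_TEXTURE_2D, 0, GL_RGBA,
    width, height,
    0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

(Note that glTexImage2D reads the buffer row by row, so with this column-first layout the image comes out transposed unless you swap the loop order or only work with square textures.)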

+2  A: 

You can use

int width = 1024;
int height = 1024;
GLubyte * texture = new GLubyte[4 * width * height];
...
glTexImage2D(
    GL_TEXTURE_2D, 0, GL_RGBA,
    width, height,
    0, GL_RGBA, GL_UNSIGNED_BYTE, texture);
delete [] texture;    // remove the no-longer-needed local copy of the texture

However, you still need to specify the width and height to OpenGL in the glTexImage2D call. This call copies the texture data, and that copy is then managed by OpenGL. You can delete, resize, or change your original texture array all you want and it won't make a difference to the texture you uploaded to OpenGL.

Edit: C/C++ only deals with 1-dimensional arrays. The fact that you can write texture[a][b] is hidden and converted by the compiler at compile time into texture[a*cols + b], which is why the compiler must know the number of columns.

Use a class to hide the allocation and the access to the texture.
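
For example, a minimal sketch (the class and member names are just illustrative):

class Texture2D
{
public:
    Texture2D(int width, int height)
        : m_width(width), m_height(height),
          m_data(new GLubyte[width * height * 4])
    {
    }

    ~Texture2D()
    {
        delete [] m_data;
    }

    // one channel of the pixel at (column, row), stored row by row
    GLubyte & at(int column, int row, int channel)
    {
        return m_data[(row * m_width + column) * 4 + channel];
    }

    int width() const { return m_width; }
    int height() const { return m_height; }
    const GLubyte * data() const { return m_data; }

private:
    // copying would double-delete m_data, so disallow it
    Texture2D(const Texture2D &);
    Texture2D & operator=(const Texture2D &);

    int m_width, m_height;
    GLubyte * m_data;
};

The upload then becomes glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, tex.width(), tex.height(), 0, GL_RGBA, GL_UNSIGNED_BYTE, tex.data());.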

For academic purposes, if you really want dynamic multi dimensional arrays the following should work:

int rows = 16, cols = 16;
char * storage = new char[rows * cols];
char ** accessor2D = new char *[rows];
for (int i = 0; i < rows; i++)
{
    accessor2D[i] = storage + i*cols;
}
accessor2D[5][5] = 2;
assert(storage[5*cols + 5] == accessor2D[5][5]);
delete [] accessor2D;
delete [] storage;

Notice that in all these cases I'm still using 1D arrays underneath; the accessors are just an array of pointers (and, for 3D, an array of pointers to pointers), so there is memory overhead to this. Also, this was done for a 2D array without colour components; for 3D dereferencing it gets really messy. Don't use this in your code.
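
Just to show how messy the 3D version gets (again purely academic, and the same "don't use this" caveat applies):

int rows = 16, cols = 16, channels = 4;
char * storage = new char[rows * cols * channels];
char *** accessor3D = new char **[rows];
for (int i = 0; i < rows; i++)
{
    accessor3D[i] = new char *[cols];
    for (int j = 0; j < cols; j++)
    {
        accessor3D[i][j] = storage + (i * cols + j) * channels;
    }
}
accessor3D[5][5][2] = 2;
assert(storage[(5 * cols + 5) * channels + 2] == accessor3D[5][5][2]);

// cleanup now needs a loop of its own
for (int i = 0; i < rows; i++)
{
    delete [] accessor3D[i];
}
delete [] accessor3D;
delete [] storage;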

Budric
Hmm, this would work, but when assigning values to the array I get this error: invalid types ‘unsigned char[int]’ for array subscript
nbolton
Are you using texture[i][j]? You can't use that with the code above; use texture[row * width + column] instead.
Budric
@Budric, my array needs to have 3 dimensions, not 2. I'm accessing it in the style of texture[column][row][channel]
nbolton
+1  A: 

You could always wrap it up in a class. If you are loading the image from a file, you get the height and width along with the rest of the data (how else could you use the file?), so you could store them in a class that wraps the file loading, instead of using preprocessor defines. Something like:

class ImageLoader
{
...
  ImageLoader(const char* filename, ...);
...
  int GetHeight();
  int GetWidth();
  void* GetDataPointer();
...
};

Even better, you could hide the calls to glTexImage2D in there with it:

class GLImageLoader
{
...
  GLImageLoader(const char* filename, ...);
...
  GLuint LoadToTexture2D(); // returns texture id
...
};
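
The body of LoadToTexture2D might look something like this (a sketch only; m_width, m_height and m_data stand for whatever members the loader keeps internally):

GLuint GLImageLoader::LoadToTexture2D()
{
    GLuint texId = 0;
    glGenTextures(1, &texId);
    glBindTexture(GL_TEXTURE_2D, texId);
    // no mipmaps are generated here, so use a non-mipmapped min filter
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(
        GL_TEXTURE_2D, 0, GL_RGBA,
        m_width, m_height,
        0, GL_RGBA, GL_UNSIGNED_BYTE, m_data);
    return texId;
}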
jheriko
A: 

Has anyone tried this with glTexSubImage2D instead? I really need it, and I can't get it to work properly; all I get is a striped, funny-coloured square on my original texture.

Thanks in advance.

John, please post your question as a new question and not an answer - you'll get more exposure that way :) - please feel free to link back to my original question if it helps.
nbolton