I want to store an atomic model in an OpenGL program I'm writing. Nothing fancy: just constant mesh vertex values stored as GLfloat[3], plus simple textures. I also want the model to be able to move and rotate freely, as a single object. Here's what I have so far:
typedef struct _coordnode {
    GLfloat *pts;              /* XYZ (vertex) or XY (texture) */
    struct _coordnode *next;
} coordnode;

typedef struct _facenode {
    GLfloat *norm;             /* XYZ */
    coordnode *vertices;       /* head of linked list */
    GLfloat *color;            /* RGBA */
    coordnode *textures;       /* head of linked list */
    struct _facenode *next;
} facenode;

typedef struct _model {
    GLenum mode;
    facenode *faces;           /* head of linked list */
    GLfloat *rot;              /* delta-XYZ from Theta-origin */
    GLfloat *rot_delta;        /* delta-delta-XYZ */
    GLfloat *trans;            /* delta-XYZ from origin */
    GLfloat *trans_delta;      /* delta-delta-XYZ from origin */
} model;
This sets things up so that the model has a linked list of facenodes, each of which has two linked lists holding its vertices and its texture coordinates, respectively.
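To make the intended use concrete, here is roughly how I picture building a single face (a rough sketch only; coord_push and make_triangle are names I made up, and it assumes the typedefs above plus <stdlib.h>, <string.h>, and <GL/gl.h>):

#include <stdlib.h>
#include <string.h>
#include <GL/gl.h>

/* Allocate a coordnode holding n floats copied from src and push it onto the
   front of a list (so a list ends up in reverse insertion order).
   Returns the new node, or NULL on failure. */
static coordnode *coord_push(coordnode **head, const GLfloat *src, size_t n)
{
    coordnode *node = malloc(sizeof *node);
    if (!node)
        return NULL;
    node->pts = malloc(n * sizeof *node->pts);
    if (!node->pts) {
        free(node);
        return NULL;
    }
    memcpy(node->pts, src, n * sizeof *node->pts);
    node->next = *head;
    *head = node;
    return node;
}

/* Build one triangular face from fixed data; norm and color would be
   filled in the same way. */
static facenode *make_triangle(void)
{
    static const GLfloat xyz[3][3] = {
        { 0.0f, 0.0f, 0.0f }, { 1.0f, 0.0f, 0.0f }, { 0.0f, 1.0f, 0.0f }
    };
    static const GLfloat uv[3][2] = {
        { 0.0f, 0.0f }, { 1.0f, 0.0f }, { 0.0f, 1.0f }
    };
    facenode *face = calloc(1, sizeof *face);  /* all pointers start NULL */
    if (!face)
        return NULL;
    for (int i = 0; i < 3; i++) {
        coord_push(&face->vertices, xyz[i], 3);  /* XYZ */
        coord_push(&face->textures, uv[i], 2);   /* XY  */
    }
    return face;
}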
Since this is my first C program, my question to seasoned programmers is whether this particular approach has any inconsistencies or inefficiencies, and whether it stores enough data.
More information, not necessarily relevant:
- There will only be a few objects in memory, and two of them will be involved in collision detection.
- One model will have partial transparency.
- One model will have raised, rendered text applied to its faces, and will move according to a gravity vector.
- Two models will rotate as one, based on external input.
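For context on how I intend to use this each frame, here is a rough update-and-draw sketch (fixed-function pipeline assumed; model_update and model_draw are just placeholder names, rot is treated as degrees per axis, and rot, trans, rot_delta, and trans_delta are each assumed to point at three GLfloats):

#include <GL/gl.h>

/* Advance the model's position and orientation by their per-frame deltas. */
static void model_update(model *m)
{
    for (int i = 0; i < 3; i++) {
        m->rot[i]   += m->rot_delta[i];
        m->trans[i] += m->trans_delta[i];
    }
}

/* Apply the model's transform once, then walk the face and coordinate lists. */
static void model_draw(const model *m)
{
    glPushMatrix();
    glTranslatef(m->trans[0], m->trans[1], m->trans[2]);
    glRotatef(m->rot[0], 1.0f, 0.0f, 0.0f);
    glRotatef(m->rot[1], 0.0f, 1.0f, 0.0f);
    glRotatef(m->rot[2], 0.0f, 0.0f, 1.0f);

    for (const facenode *f = m->faces; f != NULL; f = f->next) {
        if (f->color)
            glColor4fv(f->color);   /* RGBA, alpha for the transparent model */
        if (f->norm)
            glNormal3fv(f->norm);

        glBegin(m->mode);
        const coordnode *t = f->textures;
        for (const coordnode *v = f->vertices; v != NULL; v = v->next) {
            if (t) {
                glTexCoord2fv(t->pts);  /* texture coordinate set before its vertex */
                t = t->next;
            }
            glVertex3fv(v->pts);
        }
        glEnd();
    }
    glPopMatrix();
}

The idea is that model_update runs once per tick and model_draw once per frame for each model.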