views: 2052
answers: 6

This semester, I took a course in computer graphics at my university. At the moment, we're starting to get into some of the more advanced stuff like heightmaps, averaging normals, tessellation, etc.

I come from an object-oriented background, so I'm trying to put everything we do into reusable classes. I've had good success creating a camera class, since it depends mostly on the one call to gluLookAt(), which is pretty much independent of the rest of the OpenGL state machine.

However, I'm having some trouble with other aspects. Using objects to represent primitives hasn't really been a success for me. This is because the actual render calls depend on so many external things, such as the currently bound texture. If you suddenly want to change from a surface normal to a vertex normal for a particular class, it causes a severe headache.

I'm starting to wonder whether OO principles are applicable in OpenGL coding. At the very least, I think that I should make my classes less granular.

What are the Stack Overflow community's views on this? What are your best practices for OpenGL coding?

A: 

I usually give each class that can be rendered a drawOpenGL() function that contains its OpenGL calls. That function gets called from the render loop. The class holds all the information needed for its OpenGL calls, e.g. position and orientation, so it can do its own transformation.

When objects depend on each other, e.g. they form part of a bigger object, compose those classes in another class that represents that object. That class has its own drawOpenGL() function that calls the drawOpenGL() functions of its children, so you can wrap them in shared position/orientation calls using glPushMatrix and glPopMatrix.

It has been some time, but I guess something similar is possible with textures.

If you want to switch between surface normals and vertex normals, let the object remember which one it uses and have two private functions, one for each case, that drawOpenGL() calls as needed. There are certainly more elegant solutions (e.g. using the strategy design pattern), but this one should work as far as I understand your problem.
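The dispatch described above might be sketched like this. This is only an illustration: the NormalMode enum and method names are invented for the example, and the real private bodies would issue glNormal*/glVertex* calls instead of returning a label.

```cpp
#include <cassert>
#include <string>

// Illustrative "two private functions" dispatch. NormalMode and the method
// names are hypothetical; the real bodies would contain GL calls.
enum class NormalMode { Surface, Vertex };

class Triangle {
public:
    explicit Triangle(NormalMode mode) : mode_(mode) {}

    // Called from the render loop; picks the right private helper.
    std::string drawOpenGL() const {
        return mode_ == NormalMode::Surface ? drawWithSurfaceNormal()
                                            : drawWithVertexNormals();
    }

private:
    std::string drawWithSurfaceNormal() const {
        // In real code: glNormal3f(face normal) once, then the three vertices.
        return "surface";
    }
    std::string drawWithVertexNormals() const {
        // In real code: glNormal3f(per-vertex normal) before each glVertex3f.
        return "vertex";
    }
    NormalMode mode_;
};
```

Switching modes is then a matter of flipping the stored enum rather than rewriting the render call site.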

Emile Vrijdags
This is the approach I'm using now (w.r.t. the two private functions). It doesn't quite work out for me, since each Triangle class depends on six other Triangles for normal averaging. Would you also recommend modelling a mesh instead of primitives as my basic class?
fluffels
Sorry for the late answer. Yes, I'm thinking of a mesh class consisting of a list of triangle classes, plus maybe the function for the averaged normals; each triangle class can generate its own normal. The mesh could do the drawing of the triangles. A primitive can then be a mesh with a specific form.
Emile Vrijdags
+13  A: 

The most practical approach seems to be to ignore most of the OpenGL functionality that is not directly applicable (or is slow, not hardware accelerated, or no longer a good match for the hardware).

OOP or not, to render a scene these are the types and entities that you usually have:

Geometry (meshes). Most often this is an array of vertices and an array of indices (i.e. three indices per triangle, aka a "triangle list"). A vertex can be in some arbitrary format (e.g. only a float3 position; a float3 position + float3 normal; a float3 position + float3 normal + float2 texcoord; and so on). So to define a piece of geometry you need to:

  • define its vertex format (could be a bitmask, or an enum from a list of formats, ...),
  • have an array of vertices, with their components interleaved ("interleaved arrays"),
  • have an array of triangles.

If you're in OOP land, you could call this class a Mesh.
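A minimal sketch of such a class, assuming a bitmask vertex format; the flag names and fields are illustrative, not taken from any particular engine:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical vertex-format flags; each flag adds components to a vertex.
enum VertexFormat : std::uint32_t {
    VF_POSITION = 1 << 0,  // float3
    VF_NORMAL   = 1 << 1,  // float3
    VF_TEXCOORD = 1 << 2,  // float2
};

struct Mesh {
    std::uint32_t format = VF_POSITION;   // bitmask of components
    std::vector<float> vertices;          // interleaved components
    std::vector<std::uint32_t> indices;   // three indices per triangle

    // Floats per vertex for the current format.
    std::size_t vertexStride() const {
        std::size_t s = 0;
        if (format & VF_POSITION) s += 3;
        if (format & VF_NORMAL)   s += 3;
        if (format & VF_TEXCOORD) s += 2;
        return s;
    }
    std::size_t vertexCount() const { return vertices.size() / vertexStride(); }
    std::size_t triangleCount() const { return indices.size() / 3; }
};
```

Because the stride is derived from the format, the same class handles position-only meshes and fully textured ones.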

Materials - things that define how a piece of geometry is rendered. In the simplest case, this could be the color of the object, for example. Or whether lighting should be applied. Or whether the object should be alpha-blended. Or a texture (or a list of textures) to use. Or a vertex/fragment shader to use. And so on; the possibilities are endless. Start by putting the things that you need into materials. In OOP land that class could be called (surprise!) a Material.

Scene - you have pieces of geometry and a collection of materials; time to define what is in the scene. In a simple case, each object in the scene could be defined by:

  • what geometry it uses (pointer to a Mesh),
  • how it should be rendered (pointer to a Material),
  • where it is located. This could be a 4x4 transformation matrix, or a 4x3 transformation matrix, or a vector (position), quaternion (orientation) and another vector (scale).

Let's call this a Node in OOP land.

Camera. Well, a camera is nothing more than "where it is placed" (again, a 4x4 or 4x3 matrix, or a position and orientation), plus some projection parameters (field of view, aspect ratio, ...).

So basically that's it! You have a scene which is a bunch of Nodes which reference Meshes and Materials, and you have a Camera that defines where a viewer is.
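Put together, the scene description above could look roughly like this. Mesh and Material are left opaque, the 4x4 matrix is stored column-major (the layout OpenGL expects), and all names are illustrative:

```cpp
#include <array>
#include <cassert>
#include <vector>

struct Mesh;      // geometry, as described earlier
struct Material;  // render state, as described earlier

using Mat4 = std::array<float, 16>;  // column-major 4x4 matrix

inline Mat4 identity() {
    return Mat4{{1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1}};
}

struct Node {
    const Mesh*     mesh      = nullptr;     // what geometry it uses
    const Material* material  = nullptr;     // how it is rendered
    Mat4            transform = identity();  // where it is located
};

struct Camera {
    Mat4  view       = identity();  // where the viewer is placed
    float fovDegrees = 60.0f;       // projection parameters
    float aspect     = 4.0f / 3.0f;
};

struct Scene {
    std::vector<Node> nodes;
    Camera camera;
};
```

Note that none of these types know anything about OpenGL; they are plain data.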

Now, where to put actual OpenGL calls is a design question only. I'd say, don't put OpenGL calls into Node or Mesh or Material classes. Instead, make something like OpenGLRenderer that can traverse the scene and issue all calls. Or, even better, make something that traverses the scene independent of OpenGL, and put lower level calls into OpenGL dependent class.

So yes, all of the above is pretty much platform independent. Going this way, you'll find that glRotate, glTranslate, gluLookAt and friends are quite useless. You have all the matrices already; just pass them to OpenGL. This is how most real code in actual games and applications works anyway.

Of course the above can be complicated by more complex requirements. Particularly, Materials can be quite complex. Meshes usually need to support lots of different vertex formats (e.g. packed normals for efficiency). Scene Nodes might need to be organized in a hierarchy (this one can be easy - just add parent/children pointers to the node). Skinned meshes and animations in general add complexity. And so on.

But the main idea is simple: there is Geometry, there are Materials, there are objects in the scene. Then some small piece of code is able to render them.

In OpenGL case, setting up meshes would most likely create/activate/modify VBO objects. Before any node is rendered, matrices would need to be set. And setting up Material would touch most of remaining OpenGL state (blending, texturing, lighting, combiners, shaders, ...).

NeARAZ
Your idea for the mesh class seems obvious to me now :) What I was trying to do was to use objects for primitives like triangles. Using objects to manage meshes makes a lot more sense, as they tend to be pretty self sufficient, correct?
fluffels
Also, thanks a lot for the insight into the platform independence stuff and the render trees! That helps a lot!
fluffels
+1  A: 

A standard technique is to insulate the objects' effects on the render state from each other by making all changes from some default OpenGL state within a glPushAttrib/glPopAttrib scope. In C++, define a class whose constructor contains

  glPushAttrib(GL_ALL_ATTRIB_BITS);
  glPushClientAttrib(GL_CLIENT_ALL_ATTRIB_BITS);

and destructor containing

  glPopClientAttrib();
  glPopAttrib();

and use the class RAII-style to wrap any code which messes with the OpenGL state. Provided you follow the pattern, each object's render method gets a "clean slate" and doesn't need to worry about prodding every possibly modified bit of openGL state to be what it needs.
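The RAII guard could be sketched as follows. Since the real glPushAttrib/glPopAttrib calls need a GL context, a depth counter stands in for the attribute stacks here so the scoping behaviour can be shown on its own; the comments mark where the real calls go:

```cpp
#include <cassert>

// Stand-in for the GL attribute stack depth (no GL context needed here).
int g_stateDepth = 0;

class GLStateGuard {
public:
    // Real code: glPushAttrib(GL_ALL_ATTRIB_BITS);
    //            glPushClientAttrib(GL_CLIENT_ALL_ATTRIB_BITS);
    GLStateGuard() { ++g_stateDepth; }

    // Real code: glPopClientAttrib(); glPopAttrib();
    ~GLStateGuard() { --g_stateDepth; }

    // Copying a scope guard would unbalance the push/pop pairs.
    GLStateGuard(const GLStateGuard&) = delete;
    GLStateGuard& operator=(const GLStateGuard&) = delete;
};

int renderObject() {
    GLStateGuard guard;  // "clean slate" for this object's state changes
    // ... glEnable / glBindTexture / etc. would go here ...
    return g_stateDepth;  // state is pushed while we render
}  // guard's destructor restores the previous state, even on early return
```

The destructor runs on every exit path, so the state is restored even if the render method returns early or throws.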

As an optimisation, you'd typically set the OpenGL state once at app startup to something as close as possible to what everything wants; this minimises the number of calls which need to be made within the pushed scopes.

The bad news is that these aren't cheap calls. I've never really investigated how many per second you can get away with; certainly enough to be useful in complex scenes. The main thing is to make the most of each state once you've set it. If you've got an army of orcs to render, with different shaders, textures etc. for armour and skin, don't iterate over all the orcs rendering armour/skin/armour/skin/...; set up the state for the armour once and render all the orcs' armour, then set up to render all the skin.
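The batching idea above amounts to sorting the draw list by material before rendering. A small sketch, where DrawCall and materialId are hypothetical names and the material setup is left as a comment:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct DrawCall {
    int materialId;  // e.g. 0 = armour shader/textures, 1 = skin
    int objectId;
};

// Sorts draws by material and returns how many state switches that costs.
int stateChanges(std::vector<DrawCall> draws) {
    std::sort(draws.begin(), draws.end(),
              [](const DrawCall& a, const DrawCall& b) {
                  return a.materialId < b.materialId;
              });
    int changes = 0, current = -1;
    for (const DrawCall& d : draws) {
        if (d.materialId != current) {
            ++changes;               // setupMaterial(d.materialId) here
            current = d.materialId;
        }
        // drawMesh(d.objectId) here
    }
    return changes;
}
```

Two orcs rendered armour/skin/armour/skin would cost four switches; sorted, the same four draws cost two.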

timday
It's kinda strange to instantiate an object, call its render() function and then destroy it, just to insulate the state. Am I understanding correctly?
fluffels
Sorry, I didn't explain it very well. The object doing the push/pop on the GL state is just a convenience helper... nothing to do with the objects you're rendering. Code would look something like: renderable* thing=new...; { gl_pushed_scope p; thing->render(); }
timday
+1  A: 

Object transformations

Avoid depending on OpenGL to do your transformations. Tutorials often teach you to play with the transformation matrix stack. I would not recommend this approach, since you may later need a matrix that is only accessible through that stack, and reading it back is very slow: the GPU bus is designed to be fast from CPU to GPU, not the other way around.

Master object

A 3D scene is often thought of as a tree of objects, in order to capture object dependencies. There is a debate about what should be at the root of this tree: a list of objects or a single master object.

I advise using a master object. While it has no graphical representation of its own, it is simpler because you can use recursion more effectively.

Decouple scene manager and renderer

I disagree with @ejac that you should have a method on each object doing OpenGL calls. Having a separate Renderer class that traverses your scene and does all the OpenGL calls will help you decouple your scene logic from the OpenGL code.

This adds some design difficulty but gives you more flexibility if you ever have to change from OpenGL to DirectX or anything else API related.
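A sketch of that decoupling, combined with the master-object recursion mentioned earlier: the scene tree knows nothing about OpenGL, and a Renderer interface walks it from the root. An OpenGL (or DirectX) backend would implement draw(); the recording backend here is only for illustration, and all class names are invented:

```cpp
#include <cassert>
#include <string>
#include <vector>

struct Node {
    std::string name;
    std::vector<Node> children;  // master object = root holding the rest
};

struct Renderer {
    virtual ~Renderer() = default;
    virtual void draw(const Node& n) = 0;  // backend-specific calls go here

    void traverse(const Node& n) {         // recursion from the master object
        draw(n);
        for (const Node& c : n.children) traverse(c);
    }
};

// An OpenGLRenderer would issue GL calls in draw(); this one just records
// the visit order so the traversal can be demonstrated without a context.
struct RecordingRenderer : Renderer {
    std::vector<std::string> calls;
    void draw(const Node& n) override { calls.push_back(n.name); }
};
```

Swapping APIs then means writing a new Renderer subclass; the scene classes are untouched.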

Vincent Robert
From what I've seen, I definitely agree with your last point. I will have to try out the second pattern, but my work isn't really complicated enough to warrant platform independence (it's just a couple of throw-away practical exercises).
fluffels
+1  A: 

If you do want to roll your own, the above answers work well enough. A lot of the principles mentioned are implemented in most of the open-source graphics engines. Scene graphs are one way to move away from immediate-mode OpenGL drawing.

OpenSceneGraph is one open-source library that gives you a large (maybe too large) set of tools for doing OO 3D graphics; there are a lot of others out there.

Harald Scheirich
Thanks for the reference, I'll definitely check it out later. Unfortunately, our lecturers are a little draconian and we can't use 3rd party tools... :(
fluffels
This is the approach that Interactive Computer Graphics by Edward Angel takes towards OOP graphics programming. After reading Chapter 10, I'm convinced this is the correct answer.
fluffels
A: 

OO OpenGL would already have been solved if "Longs Peak" had not turned out to be "Long Speak".