I'm basically looking to check that I have the design of the following system correct in my mind, since I was handed a prototype as a starting point - but it approaches the problem in a different way than I would have.

The Application

A very simple OpenGL application (to be written in Java) with a window containing a number of quads, each with an assigned image (think window previews in OS X Exposé, or Win+Tab on Windows).

The quads can be positioned at arbitrary locations and orientations, and I will be providing code to manage selection by clicking on a quad, and transitioning a quad from one place to another.

Prototype Design

The sample application I was handed achieves a tiny part of this: namely, there is a main class that creates the GL window and runs the render loop. Another class, ImageTile, represents a quad: it has the four vertices {-1,-1} {-1,1} {1,1} {1,-1} hard-coded into it, plus an assigned texture. Each of these objects contains its own code to render itself (using glBegin()/glEnd(), etc.) and a 'collision' check, where a ray cast from the main class is passed to each ImageTile for an intersection test against the transformed object.

My Concerns

This implementation differs drastically from my own experience, where I would typically keep ImageTile purely as a model that knows nothing about rendering, picking, etc. My gut feeling is that rendering should be performed in one place: a renderer that maintains a list of all objects it needs to draw, traverses that list, and performs the transformations using data pulled from each model. I could also have a LayoutController that manages the relative positions of the ImageTiles, and so on.

So, pretty much an MVC approach, with a very simple 'scenegraph' managing the various elements that need to be displayed and interacted with.
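For concreteness, here is a minimal sketch of the split I have in mind (every class, field, and method name below is a placeholder of mine, not something from the prototype):

    import java.util.ArrayList;
    import java.util.List;

    // Pure model: knows its placement and which texture it shows,
    // but nothing about how rendering or picking is done.
    class ImageTile {
        private final int textureId;   // GL texture handle, created elsewhere
        private float x, y, z;         // world position
        private float angleDeg;        // orientation about the view (z) axis
        private float scale = 1f;

        ImageTile(int textureId) { this.textureId = textureId; }

        int getTextureId() { return textureId; }
        float getX() { return x; }
        float getY() { return y; }
        float getZ() { return z; }
        float getAngleDeg() { return angleDeg; }
        float getScale() { return scale; }

        void moveTo(float x, float y, float z) { this.x = x; this.y = y; this.z = z; }
    }

    // The one place that knows how to draw: walks the list each frame and
    // builds each tile's transform from the model's own data.
    class TileRenderer {
        private final List<ImageTile> tiles = new ArrayList<ImageTile>();

        void add(ImageTile tile) { tiles.add(tile); }

        void renderAll() {
            for (ImageTile tile : tiles) {
                drawTile(tile);
            }
        }

        private void drawTile(ImageTile tile) {
            /* set the modelview matrix from the tile's data, draw the unit quad */
        }
    }

A LayoutController would then only ever touch the ImageTile fields, and picking could iterate the same list from one place.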

Notes

I'm not concerned with replacing the original code - I know it was a prototype, and the guy who wrote it will still be working on the project with me. He wrote it the current way because it was the first thing that came to mind. I'm just curious about the 'better' approach. Should an object render itself? Should it hold the local coordinates (-1,-1 -> 1,1) used to build the quad? Should it also hold world-placement information (position, orientation, etc.)?

I would like to move the ray picking out of each object and do it from the main UI class, but we need to get not only the currently selected quad but also the position (UV) across the quad (so I can map that back to the main application). I haven't managed to find any tutorials for this type of picking, so I want to get the implementation right from the start to make this aspect as painless as possible.

Any advice/resources appreciated.

+2  A: 

Having objects know how to render themselves isn't really a major problem. You should handle transformations separately, though. You've said that right now each quad-object renders itself into the (-1,-1)-(1,1) space. That makes things easy -- you just set up the transformation matrix to put the quad where you want it, then have the quad draw itself. Change the matrix, and render the next.
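In code, that matrix-then-draw step might look something like the following (a fixed-function sketch in the LWJGL style, since the prototype already uses glBegin(); the ImageTile accessors are assumed names, not anything from the prototype):

    import static org.lwjgl.opengl.GL11.*;

    // Draw one tile: position the hard-coded (-1,-1)..(1,1) quad via the
    // modelview matrix, then emit the same four vertices every time.
    void drawTile(ImageTile tile) {
        glBindTexture(GL_TEXTURE_2D, tile.getTextureId());
        glPushMatrix();
        glTranslatef(tile.getX(), tile.getY(), tile.getZ());
        glRotatef(tile.getAngleDeg(), 0f, 0f, 1f);
        glScalef(tile.getScale(), tile.getScale(), 1f);
        glBegin(GL_QUADS);
        glTexCoord2f(0f, 0f); glVertex2f(-1f, -1f);
        glTexCoord2f(0f, 1f); glVertex2f(-1f,  1f);
        glTexCoord2f(1f, 1f); glVertex2f( 1f,  1f);
        glTexCoord2f(1f, 0f); glVertex2f( 1f, -1f);
        glEnd();
        glPopMatrix();   // next tile starts from an untouched matrix
    }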

Having objects deal with picking is a much bigger problem. By definition, picking deals with multiple objects, not just one, so having each individual object know about it sounds like a serious design problem. OpenGL has a picking mode that makes it fairly easy to identify which object was hit, but you'll have to do some extra work (more or less on your own) to map that back to a position on the quad.
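For the UV part of the question, one way that skips picking mode entirely is to do the test centrally as ray-quad intersections: transform the ray into each tile's local space and intersect it with the local z=0 plane; a hit inside (-1,-1)-(1,1) maps directly to UV. A sketch, assuming the translate/rotate/scale transform described above (pickTile and the accessors are hypothetical names):

    // Intersect a world-space ray with one tile. Returns {u, v} across the
    // quad, or null on a miss. Assumes the forward transform is
    // translate -> rotate about z -> scale(s, s, 1), so the inverse is
    // applied here in the opposite order.
    static float[] pickTile(ImageTile tile,
                            float ox, float oy, float oz,   // ray origin
                            float dx, float dy, float dz) { // ray direction
        // Inverse translation (the ray direction is unaffected by translation).
        float px = ox - tile.getX();
        float py = oy - tile.getY();
        float pz = oz - tile.getZ();

        // Inverse rotation about z, applied to origin and direction alike.
        float rad = (float) Math.toRadians(-tile.getAngleDeg());
        float cos = (float) Math.cos(rad), sin = (float) Math.sin(rad);
        float lox = px * cos - py * sin, loy = px * sin + py * cos;
        float ldx = dx * cos - dy * sin, ldy = dx * sin + dy * cos;

        // Inverse scale; z stays unscaled because the quad is flat.
        float s = tile.getScale();
        lox /= s; loy /= s; ldx /= s; ldy /= s;

        // The quad lies in the local z = 0 plane.
        if (Math.abs(dz) < 1e-6f) return null;   // ray parallel to the quad
        float tHit = -pz / dz;
        if (tHit < 0f) return null;              // quad is behind the ray origin

        float hx = lox + tHit * ldx;
        float hy = loy + tHit * ldy;
        if (hx < -1f || hx > 1f || hy < -1f || hy > 1f) return null;

        // Map local (-1..1) onto UV (0..1); flip v if your texture origin differs.
        return new float[] { (hx + 1f) * 0.5f, (hy + 1f) * 0.5f };
    }

Run that over all the tiles from your main UI class and keep the hit with the smallest tHit; that gives you both the selected quad and its UV in one place.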

Jerry Coffin