views: 549
answers: 3

I have a bit of experience writing OpenGL 2 applications and want to learn to use OpenGL 3. For this I've bought the Addison-Wesley "Red Book" and "Orange Book" (GLSL), which describe the deprecation of the fixed-function pipeline and the new programmable pipeline (shaders). But what I can't get a grasp of is how to construct a scene with multiple objects without using the deprecated glTranslate*, glRotate* and glScale* functions.

What I used to do in OGL2 was to "move about" in 3D space using the translate and rotate functions, and create the objects in local coordinates where I wanted them using glBegin ... glEnd. In OGL3 these functions are all deprecated and, as I understand it, replaced by shaders. But I can't call a shader program for each and every object I make, can I? Wouldn't that affect all the other objects too?

I'm not sure if I've explained my problem satisfactorily, but the core of it is how to program a scene with multiple objects defined in local coordinates in OpenGL 3.1. All the beginner tutorials I've found only use a single object and never run into this problem.

Edit: Imagine you want two spinning cubes. It would be a pain to manually modify each vertex coordinate, and you can't simply modify the modelview matrix, because that would instead spin the camera around two static cubes...

A: 

I don't see anything in the 3.2 spec (PDF warning) about the rotate, translate or scale methods being deprecated. You can do interesting transformative effects with shaders, but for rendering a simple scene you should still use the basic matrix modification methods.

Jherico
Say what? Have you read it? E.2.2 Removed Features, 5th bullet: Translate, Rotate, Scale... they are all there.
Bahbar
+1  A: 

Those functions are indeed deprecated, but they are technically still perfectly functional and will compile, so you can certainly still use glTranslatef(...) and the like.

HOWEVER, this tutorial has a good explanation of how the new shaders and so on work, AND it covers multiple objects in space.

You can create any number of vertex arrays, bind each into its own VAO, and render the scene from there with shaders etc. ...meh, it's easier for you to just read it; it is a really good read for grasping the new concepts. (A rough sketch of the idea is below.)
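Roughly, the per-object setup looks like this (a minimal sketch, not the tutorial's actual code; the sizes, vertices and counts arrays are placeholders for your per-object data):

/* One VAO + VBO per object; attribute 0 is assumed to be the position. */
GLuint vao[2], vbo[2];
glGenVertexArrays(2, vao);
glGenBuffers(2, vbo);
for (int i = 0; i < 2; ++i) {
    glBindVertexArray(vao[i]);
    glBindBuffer(GL_ARRAY_BUFFER, vbo[i]);
    glBufferData(GL_ARRAY_BUFFER, sizes[i], vertices[i], GL_STATIC_DRAW);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
}
/* Per frame: bind each VAO and draw, reusing the same shader program. */
glBindVertexArray(vao[0]);
glDrawArrays(GL_TRIANGLES, 0, counts[0]);
glBindVertexArray(vao[1]);
glDrawArrays(GL_TRIANGLES, 0, counts[1]);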

Also, the OpenGL 'Red Book', as it is called, has a new release: The Official Guide to Learning OpenGL, Versions 3.0 and 3.1. It includes 'Discussion of OpenGL’s deprecation mechanism and how to verify your programs for future versions of OpenGL'.

I hope that's of some assistance!

Mark Mayo
Thanks again for the quick answer. I've read the tutorial you mention, but although he creates two triangles, he creates them in global coordinates, not in local coordinates as I want to. The problem is that with a more complicated model it can be cumbersome to use global coordinates, especially if the model moves. Also, I want to load a premade object from an .obj file and place it somewhere in 3D space.
Wonko
+5  A: 

Let's start with the basics.

Usually, you want to transform your local triangle vertices through the following steps:

local-space coords -> world-space coords -> view-space coords -> clip-space coords

In standard GL, the first two transforms are done through GL_MODELVIEW_MATRIX and the third through GL_PROJECTION_MATRIX.

These model-view transformations, for the many interesting transforms that we usually want to apply (say, translate, scale and rotate), happen to be expressible as matrix-vector multiplication when we represent vertices in homogeneous coordinates. Typically, the vertex V = (x, y, z) is represented in this system as (x, y, z, 1).
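For concreteness (this example is mine, not part of the original answer), a translation by (tx, ty, tz) is the 4x4 matrix below, stored column-major as OpenGL expects; applied to (x, y, z, 1) it yields (x + tx, y + ty, z + tz, 1):

float T[16] = {
    1,  0,  0,  0,  /* column 0 */
    0,  1,  0,  0,  /* column 1 */
    0,  0,  1,  0,  /* column 2 */
    tx, ty, tz, 1   /* column 3: the translation */
};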

OK. Say we want to transform a vertex V_local through a translation, then a rotation, then a translation. Each transform can be represented as a matrix*; let's call them T1, R1, T2. With column vectors (the convention the shader code below uses), the transforms compose right to left, so we want to apply, to each vertex: V_view = T2 * R1 * T1 * V_local. Matrix multiplication being associative, we can compute once and for all M = T2 * R1 * T1.
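A sketch of that composition on the CPU (column-major storage assumed; mat4_mul is a hypothetical helper, not a GL call):

/* out = a * b for column-major 4x4 matrices. */
void mat4_mul(float out[16], const float a[16], const float b[16])
{
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row) {
            float s = 0.0f;
            for (int k = 0; k < 4; ++k)
                s += a[k * 4 + row] * b[col * 4 + k];
            out[col * 4 + row] = s;
        }
}
/* M = T2 * R1 * T1, computed once per object: */
mat4_mul(tmp, R1, T1);
mat4_mul(M, T2, tmp);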

That way, we only need to pass M down to the vertex shader and compute V_view = M * V_local. In the end, a typical vertex shader multiplies the vertex position by a single matrix; all the work of computing that one matrix is how you move your object from local space to clip space.
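This is also the answer to the two-spinning-cubes question: keep a single shader program, and update its matrix uniform between draw calls. A sketch (prog, the VAO names, index counts and matrix arrays are placeholders):

GLint mvpLoc = glGetUniformLocation(prog, "MVP");
glUseProgram(prog);
glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, mvpCube1); /* cube 1's M */
glBindVertexArray(vaoCube1);
glDrawElements(GL_TRIANGLES, indexCount1, GL_UNSIGNED_INT, 0);
glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, mvpCube2); /* cube 2's M */
glBindVertexArray(vaoCube2);
glDrawElements(GL_TRIANGLES, indexCount2, GL_UNSIGNED_INT, 0);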

OK... I glossed over a number of important details.

First, what I described so far only covers the transformations we usually want to apply up to view space, not clip space. However, the hardware expects the output position of the vertex shader to be expressed in that special clip space. It's hard to explain clip-space coordinates without significant math, so I will leave that out, but the important bit is that the transformation bringing the vertices into clip space can usually be expressed as the same type of matrix multiplication. This is what the old gluPerspective, glFrustum and glOrtho compute.
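If you want to build one by hand, a gluPerspective equivalent looks like the sketch below (my code, not the answer's; note that this version takes fovy in radians, whereas gluPerspective takes degrees):

#include <math.h>
/* Column-major perspective matrix, gluPerspective-style. */
void perspective(float out[16], float fovy, float aspect,
                 float znear, float zfar)
{
    float f = 1.0f / tanf(fovy / 2.0f);
    for (int i = 0; i < 16; ++i) out[i] = 0.0f;
    out[0]  = f / aspect;
    out[5]  = f;
    out[10] = (zfar + znear) / (znear - zfar);
    out[11] = -1.0f;
    out[14] = (2.0f * zfar * znear) / (znear - zfar);
}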

Second, all of this applies to vertex positions. The math for transforming normals is somewhat different, because you want the normal to stay perpendicular to the surface after the transformation. (For reference, the general case requires multiplying by the inverse-transpose of the model-view matrix, but that can be simplified in many cases; see the sketch below.)
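For instance, in the common rigid case (rotations and translations only, no scaling), the inverse-transpose is the rotation part itself, so extracting the model-view's upper-left 3x3 suffices (a sketch, column-major layout assumed):

/* Normal matrix for rigid transforms: just copy the upper-left 3x3. */
void normal_matrix_rigid(float n[9], const float mv[16])
{
    for (int col = 0; col < 3; ++col)
        for (int row = 0; row < 3; ++row)
            n[col * 3 + row] = mv[col * 4 + row];
}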

Third, you generally don't send 4-D coordinates to the vertex shader; you pass 3-D ones (or 2-D, by the way). OpenGL expands those to 4-D by adding 1 as the w coordinate, so the vertex shader does not have to add the extra coordinate itself.

So... to put all of that back together: for each object, you compute one of those magic M matrices from all the transforms you want to apply to the object. Inside the vertex shader, you then multiply each vertex position by that matrix and write the result to the shader's position output. Typical code is more or less (this is using the old nomenclature):

uniform mat4 MVP;
void main() {
    gl_Position = MVP * gl_Vertex;
}
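In fully forward-compatible GL 3.x code the built-in gl_Vertex is gone as well; a generic-attribute equivalent (a sketch; the attribute name "position" is illustrative), written here as a C string you would hand to glShaderSource:

const char *vertex_src =
    "#version 140\n"
    "uniform mat4 MVP;\n"
    "in vec4 position;\n"
    "void main() { gl_Position = MVP * position; }\n";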

* The actual matrices can be found on the web, notably in the man pages for each of those functions: glRotate, glTranslate, glScale, gluPerspective, glOrtho.

Bahbar
Thank you for that very thorough answer! A detail I had missed was that you can change the MVP matrix (e.g. by declaring it uniform) in between calls to glDrawElements... Strange move from Khronos to leave the construction of these very common matrices to the programmer, though. Hope they find their way into GLU or similar soon...
Wonko
That is what middleware is for. GL 3.2 is less expressive for rapid coding, but the previous versions were the wrong level of abstraction for real applications anyway; they required a state-management framework on top. Also, say you want to keep the matrix API. That means the driver has to keep all the matrix stacks, figure out how to pass them to _all_ shaders that require them (not just vertex shaders), and figure out which flavor you need: M, MV, MVP, IT(MV), IT(M). The worst part? The app is the only one that can make the math efficient, by running many matrix operations at once. GL can't do that efficiently.
Bahbar