I made a program (in C++, using GL/GLUT) for study purposes where you can basically run around a scene in first person, with several solids placed around it. I tried running it on a different computer and the speed was completely different, so I read up on the subject and I'm currently doing something like this:

Idle function:

    start = glutGet (GLUT_ELAPSED_TIME);
    double dt = (start-end)*30/1000;

    <all the movement*dt>

    glutPostRedisplay ();

    end = glutGet (GLUT_ELAPSED_TIME);

Display function:

    <rendering for all objects>

    glutSwapBuffers ();

My question is: is this the proper way to do it? The scene is being displayed after the idle function, right?

I tried placing end = glutGet (GLUT_ELAPSED_TIME) before glutSwapBuffers () and didn't notice any change, but when I put it after glutSwapBuffers () it slows down a lot and even stops sometimes.

EDIT: I just noticed that, the way I'm thinking about it, end - start should end up being the time that passed between finishing all the drawing and starting the movement update, since idle () would be called as soon as display () ends. So is it true that the only time not being accounted for here is the time the computer takes to do all of the movement (which should be next to nothing)?

Sorry if this is too confusing.

Thanks in advance.

+3  A: 

I don't know what "Glut" is, but as a general rule of game development, I would never base movement speed off of how fast the computer can process the directives. That's what they did in the late 80's and that's why when you play an old game, things move at light speed.

I would set up a timer, and base all of my movements off of clear and specific timed events.

George
GLUT. http://www.opengl.org/resources/libraries/glut/
KennyTM
`That's what they did in the late 80's and that's why when you play an old game, things move at light speed` Ahh yes, and hence the reason for the infamous "turbo" button. If your computer was too fast you could turn turbo off to make it slow enough to be playable.
Eric Petroelje
@Eric LOL -- I didn't know that's what the turbo button was for! Finally after 20 years, I have an answer. :)
George
Well, that's what I'm trying to do. I get the time before the movement and the time after the movement. Then I calculate dt between the previous idle call and the current one, and multiply all of the movement by it, so that a fast computer would call idle more often than a slow computer, but after 10 minutes they would both be at the same point. I just don't know if it's right.
+1  A: 

Set up a high-resolution timer (e.g. QueryPerformanceCounter on Windows) and measure the time between frames. This time, called delta-time (dt), should be used in all movement calculations. E.g., every frame, set an object's position to:

obj.x += 100.0f * dt; // to move 100 units every second

Since the dt values over any one-second interval always sum to 1, the above code increments x by 100 every second, no matter what the framerate is. You should do this for all values which change over time. This way your game proceeds at the same rate on all machines (framerate independent), rather than depending on the rate at which the computer processes the logic (framerate dependent). This is also useful if the framerate starts to drop: the game doesn't suddenly start running in slow motion, it keeps going at the same speed, just rendering less frequently.
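
To make that concrete, here is a minimal, self-contained sketch of the idea, using std::chrono as a portable stand-in for QueryPerformanceCounter. The Object struct, the loop, and the exit condition are just placeholders:

    #include <chrono>

    struct Object { float x = 0.0f; };

    int main()
    {
        using clock = std::chrono::steady_clock;
        Object obj;
        auto previous = clock::now();

        while (obj.x < 1000.0f) // stand-in for your idle/display loop
        {
            auto now = clock::now();
            float dt = std::chrono::duration<float>(now - previous).count(); // seconds
            previous = now;

            obj.x += 100.0f * dt; // moves 100 units per second on any machine

            // render the scene, swap buffers, etc.
        }
    }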

AshleysBrain
That's what he's already doing.
meagar
+1  A: 

I wouldn't use a timer. Things can go wrong, and events can stack up if the PC is too slow or too busy to run at the required rate. I'd let the loop run as fast as it's allowed, and each time calculate how much time has passed and put this into your movement/logic calculations.

Internally, you might actually implement small fixed-time sub-steps, because trying to make everything work right on variable time-steps is not as simple as x+=v*dt.
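
As a rough illustration of what those fixed sub-steps could look like, here is a sketch; stepSimulation() and the 120 Hz sub-step size are just placeholders:

    const float SUB_STEP = 1.0f / 120.0f;   // fixed sub-step length in seconds

    void stepSimulation(float dt)
    {
        // x += v * dt, collision checks, etc., always called with the same dt
    }

    void advance(float frameTime)           // frameTime = measured time since the last frame
    {
        static float accumulator = 0.0f;
        accumulator += frameTime;
        while (accumulator >= SUB_STEP)     // run as many fixed sub-steps as fit
        {
            stepSimulation(SUB_STEP);
            accumulator -= SUB_STEP;
        }
    }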

Try gamedev.net for stuff like this. Lots of articles and a busy forum.

John
A: 

I'm a games programmer and have done this many times.

Most games run the AI in fixed time increments, for example 60 Hz. Also, most are synced to the monitor refresh to avoid screen tearing, so the maximum rate would be 60 even if the machine was really fast and could do 1000 fps. So if the machine was slow and running at 20 fps, it would call the AI update function 3 times per render. Doing it this way solves rounding-error problems with small values and also makes the AI deterministic across multiple machines, since the AI update rate is decoupled from the machine speed (necessary for online multiplayer games).
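
A sketch of that decoupling, for illustration only; updateAI(), renderScene() and frame() are placeholder names, the structure is the point:

    const float AI_STEP = 1.0f / 60.0f;     // fixed 60 Hz logic step

    void updateAI(float dt)  { /* deterministic game logic, dt is always AI_STEP */ }
    void renderScene()       { /* draw the current state */ }

    void frame(float frameTime)             // e.g. 0.05 s when rendering at 20 fps
    {
        static float accumulator = 0.0f;
        accumulator += frameTime;
        while (accumulator >= AI_STEP)      // at 20 fps this runs 3 times per render
        {
            updateAI(AI_STEP);
            accumulator -= AI_STEP;
        }
        renderScene();
    }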

KPexEA
Good example of why _internally_ you might still do small fixed timesteps, while still letting the app run at a different update rate. This way you get the benefit of frame-rate independence, but also the benefit of determinism when each actual step is the same interval. Collision detection is another prime example.
John
A: 

This is a very hard question.

The first thing you need to answer for yourself is: do you really want your application to run at the same speed, or just appear to run at the same speed? 99% of the time you only want it to appear to run at the same speed.

Now there are two problems: speeding up your application or slowing it down.

Speeding up your application is really hard, since that requires things like dynamic LOD that adjusts to the current speed. This means LOD in everything, not only graphics.

Slowing your application down is fairly easy. You have two options: sleeping or "busy waiting". Which one to use basically depends on your simulation's target frame time. If your frame time is well above something like 50 ms, you can sleep. The problem is that when sleeping you are dependent on the process scheduler, which on an average system works at a granularity of about 10 ms.
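
If you do go the sleeping route, a minimal sketch could look like the following; the helper name and the frame budget are just illustrative:

    #include <chrono>
    #include <thread>

    // Sleep away whatever is left of the frame budget. Subject to the scheduler
    // granularity mentioned above, so only useful for coarse budgets.
    void limitFrame(std::chrono::steady_clock::time_point frameStart,
                    std::chrono::milliseconds budget)   // e.g. 50 ms
    {
        auto elapsed = std::chrono::steady_clock::now() - frameStart;
        if (elapsed < budget)
            std::this_thread::sleep_for(budget - elapsed);
    }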

In games, busy waiting is not such a bad idea. What you do is update your simulation and render your frame, then use a time accumulator for the next frame. When rendering frames without a simulation step, you interpolate the state to get a smooth animation. A really great article on the subject can be found at http://gafferongames.com/game-physics/fix-your-timestep/.
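
A heavily condensed sketch of the accumulator-plus-interpolation scheme from that article; State, integrate() and the 60 Hz step are placeholders:

    struct State { float x = 0.0f; };

    State previous, current;
    const float STEP = 1.0f / 60.0f;        // fixed simulation step
    float accumulator = 0.0f;

    void integrate(State& s, float dt) { s.x += 100.0f * dt; }  // example motion

    void frame(float frameTime)
    {
        accumulator += frameTime;
        while (accumulator >= STEP)
        {
            previous = current;             // remember the last simulated state
            integrate(current, STEP);       // advance the simulation by a fixed step
            accumulator -= STEP;
        }

        float alpha = accumulator / STEP;   // how far we are between the two states
        State rendered;
        rendered.x = current.x * alpha + previous.x * (1.0f - alpha);
        // draw 'rendered' instead of 'current' for smooth animation
    }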

Sean Farrell
A: 

There is a perfect article about game loops that should give you all the information you need.

zoul
+1  A: 

You have plenty of answers on how to do it the "right" way, but you're using GLUT, and GLUT sometimes sacrifices the "right" way for simplicity and maintaining platform independence. The GLUT way is to register a timer callback function with glutTimerFunc().

static void timerCallback (int value)
{
    // Calculate the deltas

    glutPostRedisplay(); // Have GLUT call your display function
    glutTimerFunc(elapsedMilliseconds, timerCallback, value);
}

If you set elapsedMilliseconds to 40, this function will be called slightly less than 25 times a second. How much less depends on how long the computer takes to process your delta calculation code. If you keep that code simple, your animation will run at the same speed on all systems, as long as each system can process the display function in less than 40 milliseconds. For more flexibility, you can adjust the frame rate at runtime with a command-line option or by adding a control to your interface.

You start the timer loop by calling glutTimerFunc(elapsedMilliseconds, timerCallback, value); in your initialization process.
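
For completeness, here is one way the delta calculation and the registration could be filled in, assuming the same GLUT setup as in the question; previousTime, dt and moveEverything() are illustrative names, not part of the GLUT API:

    #include <GL/glut.h>

    static const int elapsedMilliseconds = 40;      // ~25 frames per second
    static int previousTime = 0;

    static void timerCallback (int value)
    {
        int now = glutGet(GLUT_ELAPSED_TIME);       // milliseconds since glutInit
        double dt = (now - previousTime) / 1000.0;  // seconds since the last callback
        previousTime = now;

        // moveEverything(dt);                      // your movement code, scaled by dt

        glutPostRedisplay();                        // have GLUT call your display function
        glutTimerFunc(elapsedMilliseconds, timerCallback, value);
    }

    // In your initialization code, after creating the window:
    //     previousTime = glutGet(GLUT_ELAPSED_TIME);
    //     glutTimerFunc(elapsedMilliseconds, timerCallback, 0);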

Mr. Berna