I am building a 3d game from scratch in C++ using OpenGL and SDL on linux as a hobby and to learn more about this area of programming.

Wondering about the best way to simulate time while the game is running. Obviously I have a loop that looks something like:

void main_loop()
{
    while(!quit)
    {
         handle_events();
         DrawScene();
         ...
         SDL_Delay(time_left());
    }
}

I am using the SDL_Delay and time_left() to maintain a framerate of about 33 fps.
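The `time_left()` helper isn't shown, but it is commonly implemented by tracking when the next frame is due. A minimal sketch, assuming a `next_time` variable and a 30 ms frame interval (both hypothetical; `now` would come from `SDL_GetTicks()`):

```cpp
#include <cassert>

typedef unsigned int Uint32; /* matches SDL's typedef */

const Uint32 TICK_INTERVAL = 30; /* ~33 fps */

/* Hypothetical helper: milliseconds left until the next frame is due.
   'now' would come from SDL_GetTicks(); 'next_time' is advanced by
   TICK_INTERVAL after each frame. Returns 0 when we are already late. */
Uint32 time_left(Uint32 now, Uint32 next_time)
{
    if (next_time <= now)
        return 0;           /* frame overran; don't sleep at all */
    return next_time - now; /* sleep out the rest of the slice */
}
```

In the loop you would then call `SDL_Delay(time_left(SDL_GetTicks(), next_time));` and bump `next_time += TICK_INTERVAL;` each iteration.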

I had thought that I just need a few global variables like

int current_hour = 0;
int current_mins = 0;
int num_days = 0;
Uint32 prev_ticks = 0;

Then a function like:

void handle_time()
{
    Uint32 current_ticks;
    Uint32 dticks;
    current_ticks = SDL_GetTicks();
    dticks = current_ticks - prev_ticks; // get difference since last time

    // if difference is greater than 30000 (half minute) increment game mins
    if(dticks >= 30000) {
         prev_ticks = current_ticks;
         current_mins++;
         if(current_mins >= 60) {
            current_mins = 0;
            current_hour++;
         }
         if(current_hour > 23) {
            current_hour = 0;
            num_days++;
         }
    }
}

and then call the handle_time() function in the main loop.

It compiles and runs (using printf to write the time to the console at the moment) but I am wondering if this is the best way to do it. Are there easier or more efficient ways?

A: 

I am not a Linux developer, but you might want to have a look at using Timers instead of polling for the ticks.

http://linux.die.net/man/2/timer_create

EDIT:
SDL seems to support timers: SDL_SetTimer

dtroy
While good advice in general, this is not a good idea for game development.
zildjohn01
Not even for displaying the time in the game like he does ? Do you actually use polling instead ?
dtroy
zildjohn01: why are timers a bad idea for a game? I assume they have some sort of performance impact?
Tim
This is bad because you don't want a game to have a callback that does something every (n) milliseconds -- some frames might finish a little ahead of 33 and some others might lag behind. If you have a frame that takes 37 milliseconds and then set the next frame for 33 after that, then you'll be effectively slowing down time as your two frames (representing 66ms of game time) would take place over 70ms of real time. It's much better to have the game loop running at full speed all the time -- never sleeping or pausing -- and use a high-resolution realtime clock to mark constant timesteps.
Crashworks
I never suggested handling the frames with a timer. He was asking about Time.
dtroy
That's why he wanted the time: "I am using the SDL_Delay and time_left() to maintain a framerate of about 33 fps."
Crashworks
If you look at the handle_time() function it's clear that he's not just counting minutes and hours for the fps (which is handled by DrawScene()). As far as I understand, he wanted to keep a "clock" in the game. I accept your comment about performance in game development and I won't argue with you there, as it's not my field. But, as an idea for a different implementation, I don't think there was anything fundamentally wrong with my answer.
dtroy
+1  A: 

Other than the issues already pointed out (you should use a structure for the times and pass it to handle_time(), and your minute will get incremented every half minute), your solution is fine for keeping track of the time running in the game.
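Putting those two fixes together, a sketch of the struct-based version might look like this (`GameTime` and `advance_time` are hypothetical names; the 60000 ms constant assumes one real minute per game minute):

```cpp
#include <cassert>

typedef unsigned int Uint32;

/* Bundle the clock state instead of using four globals. */
struct GameTime {
    int mins, hours, days;
    Uint32 accum; /* real milliseconds not yet converted to game minutes */
};

/* Advance the game clock by dticks milliseconds of real time.
   Uses 60000 ms per game minute, fixing the half-minute constant. */
void advance_time(GameTime &gt, Uint32 dticks)
{
    gt.accum += dticks;
    while (gt.accum >= 60000) {
        gt.accum -= 60000;
        if (++gt.mins >= 60) { gt.mins = 0; ++gt.hours; }
        if (gt.hours > 23)   { gt.hours = 0; ++gt.days; }
    }
}
```

The accumulator also means no time is lost when a frame arrives a few milliseconds past the minute boundary, unlike resetting prev_ticks to the current tick count.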

However, for most game events that need to happen every so often, you should probably base them on the main game loop rather than wall-clock time, so they happen in the same proportions at a different fps.
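A minimal sketch of that idea (hypothetical names): count loop iterations and fire an event every N frames, so spacing is measured in updates rather than milliseconds:

```cpp
#include <cassert>

/* Hypothetical frame-based scheduler: true every 'period' frames.
   At 33 updates/sec, period = 33 fires roughly once per second, and
   the proportion between events is preserved if the fps changes. */
bool frame_event_due(unsigned long frame, unsigned long period)
{
    return period != 0 && frame % period == 0;
}
```

In the main loop you would increment a frame counter each iteration and check `frame_event_due(frame, 33)` for a once-per-second event.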

DShook
+8  A: 

I've mentioned this before in other game-related threads. As always, follow the suggestions by Glenn Fiedler in his Game Physics series.

What you want to do is use a constant timestep which you get by accumulating time deltas. If you want 33 updates per second, your constant timestep should be 1/33 of a second (about 30 ms). You could also call this the update frequency. You should also decouple the game logic from the rendering, as they don't belong together. You want to be able to use a low update frequency while rendering as fast as the machine allows. Here is some sample code:

running = true;
unsigned int t = 0, t_accum = 0, lt = 0, ct = 0;
const unsigned int timestep = 1000 / 33; /* ~30 ms per logic update */
while(running){
    while(SDL_PollEvent(&event)){
        switch(event.type){
            ...
        }
    }
    ct = SDL_GetTicks();
    t_accum += ct - lt;
    lt = ct;
    while(t_accum >= timestep){
        t += timestep; /* this is our actual time, in milliseconds. */
        t_accum -= timestep;
        for(std::vector<Entity>::iterator en = entities.begin(); en != entities.end(); ++en){
            integrate(*en, (float)t * 0.001f, timestep);
        }
    }
    /* This should really be in a separate thread, synchronized with a mutex */
    std::vector<Entity> tmpEntities(entities.size());
    float alpha = (float)t_accum / (float)timestep; /* same for every entity this frame */
    for(std::vector<Entity>::size_type i = 0; i < entities.size(); ++i){
        tmpEntities[i] = interpolateState(entities[i].lastState, alpha, entities[i].currentState, 1.0f - alpha);
    }
    Render(tmpEntities);
}

This handles undersampling as well as oversampling. If you use integer arithmetic as done here, your game physics should be close to 100% deterministic, no matter how slow or fast the machine is. This is the advantage of advancing time in fixed intervals. The state used for rendering is calculated by interpolating between the previous and current states, where the leftover value inside the time accumulator is used as the interpolation factor. This ensures that the rendering is smooth no matter how large the timestep is.
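For completeness, a minimal sketch of what an interpolateState() for a single coordinate could look like. `State` and the single-alpha signature here are assumptions for illustration; the answer's version passes both weights explicitly:

```cpp
#include <cassert>

/* Hypothetical one-dimensional entity state. */
struct State { float x; };

/* Linear blend between the previous and current update states.
   alpha = t_accum / timestep, in [0, 1): 0 shows the previous
   state, values near 1 approach the newest one. */
State interpolateState(const State &prev, const State &curr, float alpha)
{
    State out;
    out.x = prev.x * (1.0f - alpha) + curr.x * alpha;
    return out;
}
```

Because alpha is the fraction of a timestep that has accumulated but not yet been simulated, the rendered position always lies on the segment between the last two simulated positions.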

Mads Elvheim
This is essentially what we do (in our commercial product). We run our game logic at a constant timestep of 10hz, but let the rendering thread run as fast as possible and draw frames as quickly as the GPU is ready for them. This means that physics and AI and entity logic isn't impacted by render speed (otherwise the world would go into slow-time when the rendering bogged down). The rendering isn't on a constant timestep, it just goes as fast as it can (up to 60hz, of course).
Crashworks
@Crashworks : Exactly. You don't care about a fixed timestep for the rendering. It would make the rendering choppy.
Mads Elvheim
One can make the case that there *should* be a constant rendering timestep, too, since a variable framerate feels choppier than a consistent one, and because the monitor is updating at 30/60hz (or 25/50 for PAL), so any frame that isn't aligned with a vsync interval will "tear" in the middle of the screen. It's very subjective though, so we made "wait for vsync" a customer-settable option.
Crashworks
Hah, the NTSC vs PAL difference was actually the reason why SEGA Genesis games for European consoles ran slower than North American ones. So even game consoles suffered from this issue, though less than PC games from the same era.
Mads Elvheim
A: 

One of Glenn's posts you will really want to read is Fix Your Timestep!. After looking up this link I noticed that Mads directed you to the same general place in his answer.

Michael Steele