views: 126
answers: 5

I'm writing an old-school ASCII DOS-prompt game. Honestly, I'm trying to emulate ZZT to learn more about this brand of game design (even if it is antiquated).

I'm doing well: my full-screen text mode works and I can create worlds and move around without problems, BUT I cannot find a decent timing method for my renders.

I know my rendering and pre-rendering code is fast because if I don't add any delay()s or (clock()-renderBegin)/CLK_TCK checks from time.h the renders are blazingly fast.

I don't want to use delay() because it is, to my knowledge, platform-specific, and on top of that I can't run any code while it delays (like user input and processing). So I decided to do something like this:

do {
    if(kbhit()) {
        input = getch();
        processInput(input);
    }

    if(clock()/CLOCKS_PER_SEC-renderTimer/CLOCKS_PER_SEC > RenderInterval) {
        renderTimer = clock();
        render();
        ballLogic();
    }
}while(input != 'p');

Which should in "theory" work just fine. The problem is that when I run this code (setting RenderInterval to 0.0333, i.e. 30fps) I don't get ANYWHERE close to 30fps; I get more like 18 at most.

I thought maybe I'd try setting RenderInterval to 0.0 to see if the performance kicked up... it did not. Even with a RenderInterval of 0.0 I was getting at most ~18-20fps.

I thought maybe that continuously calling all these clock() and "divide this by that" operations was slowing the CPU down something scary, but when I took the render and ballLogic calls out of the if statement's brackets and set RenderInterval to 0.0 I got, again, blazingly fast renders.

This doesn't make sense to me: if I leave the if check in, shouldn't it run just as slow? It still has to do all the calculations.

BTW I'm compiling with Borland's Turbo C++ V1.01

+1  A: 
clock()-renderTimer > RenderInterval * CLOCKS_PER_SEC

would compute a bit faster, possibly even faster if you pre-compute the RenderInterval * CLOCKS_PER_SEC part.

Ofir
Thank you for your response; I see the optimization. I just tried it, but even with it the issue still exists.
Parad0x13
The compiler does all sorts of optimizations on loops, which can create peculiarities (especially if they contain branches) - try to look at the generated code in both cases. In fact, the processor does some similar work (look up branch prediction).
Ofir
I checked the preprocessor but I can't find out which optimizations are enabled
Parad0x13
Try >= instead of >
Ofir
A: 

What about this: you are subtracting y (= renderTimer) from x (= clock()), and both x and y are being divided by CLOCKS_PER_SEC:

clock()/CLOCKS_PER_SEC-renderTimer/CLOCKS_PER_SEC > RenderInterval

Wouldn't it be more efficient to write:

( clock() - renderTimer ) > RenderInterval * CLOCKS_PER_SEC

The very first problem I saw with the division is that you're not going to get a real number from it, since it happens between two long ints. The second problem is that it is more efficient to precompute RenderInterval * CLOCKS_PER_SEC and this way get rid of the division, simplifying the operation.

Adding the brackets gives it more legibility. And maybe by simplifying this formula you will see more easily what's going wrong.

Baltasarq
I tried these formula optimizations but I continued to have the same results.
Parad0x13
What if you force RenderInterval = 1 ?
Baltasarq
A: 

As you've spotted with your most recent question, you're limited by clock(), which on DOS advances only about 18.2 times per second. You get at most one frame per discrete value of clock, which is why you're limited to ~18fps.

You could use the screen's vertical blanking interval for timing; it's traditional for games because it also avoids "tearing" (where half the screen shows one frame and half shows another).

pjc50
A: 

I figured out why it wasn't rendering right away. The timer I created is fine; the problem is that clock_t only advances about every 0.0549 s (the 18.2 Hz BIOS tick), so I could only render at ~18fps. The way to fix this would be to use a more accurate clock... which is a whole other story.

Parad0x13
A: 

The best gaming experience is usually achieved by synchronizing with the vertical retrace of the monitor. In addition to providing timing, this will also make the game run smoother on the screen, at least if you have a CRT monitor connected to the computer.

In 80x25 text mode, the vertical retrace (on VGA) occurs 70 times per second. I don't remember if the frequency was the same on EGA/CGA, but I am pretty sure it was 50 Hz on Hercules and MDA. By measuring the duration of, say, 20 frames, you should get a sufficiently good estimate of the frequency you are dealing with.

Let the main loop be something like:

  while (playing) {
     /* do whatever needs to be done for this particular frame */
     VSync();
  }

  ...  /* snip */

  /* Wait for the start of the next vertical retrace. Port 0x3DA is the
     VGA Input Status #1 register; bit 3 is set while the vertical
     retrace is in progress. */
  void VSync(void) {
    while (inp(0x3DA) & 0x08);     /* if mid-retrace, wait for it to end  */
    while (!(inp(0x3DA) & 0x08));  /* then wait for the next one to begin */
  }
norheim.se