Is there any way to calculate how many updates should be made to reach the desired frame rate, NOT system specific? I found a way to do it on Windows, but I would like to know if something like this exists in OpenGL itself. It should be some sort of timer.

Or how else can I prevent the FPS from dropping or rising dramatically? For now I'm testing it by drawing a big number of vertices in a line, and using Fraps I can see the frame rate go from 400 to 200 fps with evident slowing down of the drawing.

+3  A: 

You have two different ways to solve this problem:

  1. Suppose that you have a variable called maximum_fps, which contains the maximum number of frames per second you want to display.

    Then you measure the amount of time spent on the last frame (a timer will do).

    Now suppose that you want a maximum of 60 FPS in your application. Then you want each measured frame time to be no lower than 1/60 of a second. If the measured time is lower, you call sleep() for the amount of time left in the frame.

  2. Or you can have a variable called tick, which contains the current "game time" of the application. With the same timer, you increment it on each iteration of your application's main loop. Then, in your drawing routines you calculate the positions based on the tick var, since it contains the current time of the application.

    The big advantage of option 2 is that your application will be much easier to debug, since you can play around with the tick variable, going forward and back in time whenever you want. This is a big plus.
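For illustration, a minimal C++ sketch of option 1 using the standard <chrono> and <thread> facilities; update_and_render() is a hypothetical stand-in for your game logic and drawing:

#include <chrono>
#include <thread>

void update_and_render(); // hypothetical stand-in for your game logic and drawing

void run_main_loop()
{
  const double maximum_fps = 60.0;
  const std::chrono::duration<double> frame_budget(1.0 / maximum_fps);

  while (true)
  {
    auto frame_start = std::chrono::steady_clock::now();

    update_and_render();

    // If the frame finished early, sleep off the rest of its time budget.
    auto elapsed = std::chrono::steady_clock::now() - frame_start;
    if (elapsed < frame_budget)
      std::this_thread::sleep_for(frame_budget - elapsed);
  }
}

As noted in the comments below, sleep() accuracy varies by platform, so treat this as a rough cap rather than an exact one.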

Edison Gustavo Muenz
The first one only works for slowing the FPS down, and it's dangerous to rely on the accuracy of the sleep interval.
young
In some simple cases (and my current one) the 1st solution is good enough, because I just need to keep the FPS uniform while the app is running; knowing that my frame rate never drops under 200, it will be efficient. So it is maybe dangerous, but simple to get into your code without many changes, and in such cases it is as good as the second.
Raven
+1  A: 

Is there any way to calculate how many updates should be made to reach the desired frame rate, NOT system specific?

No.

There is no way to precisely calculate how many updates should be made to reach the desired framerate.

However, you can measure how much time has passed since the last frame, calculate the current framerate from it, compare it with the desired framerate, and then introduce a bit of sleeping to reduce the current framerate to the desired value. Not a precise solution, but it will work.

I found a way to do it on Windows, but I would like to know if something like this exists in OpenGL itself. It should be some sort of timer.

OpenGL is concerned only with rendering stuff, and has nothing to do with timers. Also, using Windows timers isn't a good idea. Use QueryPerformanceCounter, GetTickCount or SDL_GetTicks to measure how much time has passed, and sleep to reach the desired framerate.
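For example, a minimal cross-platform sketch with SDL_GetTicks (assuming SDL has been initialized with SDL_Init elsewhere):

#include <SDL.h>

static Uint32 previousTicks = 0;

// Returns seconds elapsed since the previous call; call once per frame.
float frame_delta()
{
  Uint32 now = SDL_GetTicks(); // milliseconds since SDL_Init
  float deltaT = (now - previousTicks) / 1000.0f;
  previousTicks = now;
  return deltaT;
}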

Or how else can I prevent the FPS from dropping or rising dramatically?

You prevent the FPS from rising by sleeping.

As for preventing FPS from dropping...

That is an insanely broad topic. It goes something like this:

  - use vertex buffer objects or display lists;
  - profile the application;
  - do not use insanely big textures;
  - do not use too much alpha-blending;
  - avoid "raw" OpenGL (glVertex3f);
  - do not render invisible objects (even if no polygons are being drawn, processing them takes time);
  - consider learning about BSPs or octrees for rendering complex scenes;
  - in parametric surfaces and curves, do not needlessly use too many primitives (if you render a circle using one million polygons, nobody will notice the difference);
  - disable vsync.

In short: reduce the number of rendering calls, rendered polygons, rendered pixels and texels read to the absolute possible minimum, read every available performance document from NVidia, and you should get a performance boost.

SigTerm
I have found that even OGL has its own timer, like this: glutTimerFunc(40, Timer, 0); Anyway, when I was talking about the Windows one, I thought that QueryPerformanceCounter and QueryPerformanceFrequency are available only on Windows.
Raven
@Raven: "glutTimerFunc" This is not an OpenGL function - glut is not a part of OpenGL. "are avalible only on windows" Yes, they're windows only. If you want cross-platform solution, use SDL_GetTicks.
SigTerm
@SigTerm: But if you are using glut (freeglut in my case), you still have a cross-platform solution, is that right? Of course the freeglut library is needed then, but glut provides valuable functions (such as that timer) in that one library...
Raven
A: 

You're asking the wrong question. Your monitor will only ever display at 60 fps (50 fps in Europe, or possibly 75 fps if you're a pro-gamer).

Instead you should be seeking to lock your fps at 60 or 30. There are OpenGL extensions that allow you to do that. However, the extensions are not cross-platform (luckily they are not video-card specific, or it'd get really scary).

These extensions are closely tied to your monitor's v-sync. Once enabled, calls to swap the OpenGL back-buffer will block until the monitor is ready for it. This is like putting a sleep in your code to enforce 60 fps (or 30, or 15, or some other number if you're not using a monitor that displays at 60 Hz). The difference is that the "sleep" is always perfectly timed, instead of an educated guess based on how long the last frame took.
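On Windows, the extension in question is WGL_EXT_swap_control (wglSwapIntervalEXT, mentioned in the comments below); a minimal sketch of enabling v-sync with it, assuming a current OpenGL rendering context already exists:

#include <windows.h>

typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);

void enable_vsync()
{
  // The function only becomes available once a rendering context
  // is current, so it must be looked up at runtime.
  PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
    (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
  if (wglSwapIntervalEXT)
    wglSwapIntervalEXT(1); // 1 = wait for one v-blank per buffer swap
}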

caspin
"You're asking the wrong question." It is a quite reasonable game-development question. "Your monitor will only ever display" Yep, but an application can easily produce 600 frames per second. Also, very high framerate is useful for capturing video and slowing it down afterwards.
SigTerm
"Instead you should be seeking to lock your fps at 60 or 30.". You definitely shouldn't do that - if you do that game will not function if hardware is not powerful enough. Properly done game should be able to run on any framerate (from 5 to 1000)."wglSwapIntervalEXT" It has very little to do with framerate.
SigTerm
I was a bit vague. Of course you should gracefully degrade if the system cannot push 30 fps. However, for a better user experience it's better to "lock" your frame rate at something consistent rather than jumping around from 20 to 200.
caspin
Also I think we are talking past each other a bit. I'm approaching this from the video game standpoint. With video games it is a common beginner mistake to push the fps as high as possible because bigger is obviously better. With video games it is always better to have a consistent fps, the one exception being when you're testing graphics performance. I'd never considered that there would be a valid reason to push a huge fps other than to show off a game engine.
caspin
@Caspin: "I'm approaching this from the video game standpoint." I'm also talking from video game standpoint. Locking fps is unreliable (you'll never get exact value) and should be avoided unless there is some kind of limitation (say, in physics engine), while supporting variable framerate isn't difficult. From my opinion, the proper way is to make framerate variable - measure how much time has passed, update scene accordingly.
SigTerm
@Caspin: "valid reason to push a huge fps" I already saw enough heated discussions about this subject, there are two main arguments - with higher fps you'll get smoother control. Even if fps is above monitor refresh rate. Another argument is for making video game videos (say, with fraps) - when framerate is above 200, you can easily make a good slow-motion video from it.
SigTerm
@Caspin: Speaking of slow motion, it will be more difficult to do slow-motion movement in a game that supports only fixed framerate; a variable framerate engine wouldn't care - to change the time flow speed you only need to multiply deltaT, and that's all. With fixed framerate you'll have to change the number of updates, and as a result you'll get jerky movement or extra CPU load (if you want everything to move faster). Also, if you want, it is very easy to convert a variable framerate engine to fixed framerate - you only need to modify the class that calculates deltaT and add a bit of sleeping.
SigTerm
@Caspin: "our monitor will only ever display at 60 fps" assuming that monitor will ever support only that framerate is incorrect. Not long ago, there were a lot of CRT monitors that could support 120hz refresh rate. If a new device appears on the market, and your game will be locked at refresh rate below monitors refresh rate, customer won't be happy. Engine should be able to support as much frames per second as it can, but a user should have an option to enable vsync. I believe this is the end of discussion.
SigTerm
Alright, let me clarify: I only disagree with you on the fps issue. Everything else is best practice for game development. We agree: game updates (engine/physics/etc.) should be independent of frame rate. To run a game in slow motion I would downscale the time passed to the updates. We agree: sampling input faster than 60 Hz is a good thing. I personally think that feeding your last frame time in as the current deltaT is a bad idea (I don't think you were advocating this, just clarifying). Instead the deltaT should be smoothed in some way in case there was an fps spike.
caspin
We agree: a frame rate cannot really be locked. We *disagree*: frame rate should prefer consistency over raw speed. I think we'd agree that the perfect situation (which will never occur) is for the engine to finish the latest screen update just as the monitor is ready for a new one. When using double-buffering, coordinating render updates with v-sync is the preferred way. When using triple-buffering, both techniques work well. By default DirectX synchronizes with v-sync even when triple-buffering.
caspin
Raw speed is required for quality slow animation in a network game, as network updates cannot be slowed. Timing updates with v-sync has the advantage of wasting less CPU time (raw speed renders screens that will never be displayed on the monitor). I will argue that consistent monitor updates are important, e.g. a consistent 30 fps is superior to an fps ranging from 55-65 on a 60 Hz monitor. That, however, does not apply to the question at hand (my answer is wrong). When frame rates are consistently larger than the monitor's refresh rate, the monitor will still update at a consistent rate.
caspin
+1  A: 

Rule #1. Do not make update() or loop() kind of functions rely on how often they get called.

You can't really get your desired FPS. You could try to boost it by skipping some expensive operations, or slow it down by calling sleep() kind of functions. However, even with those techniques, the FPS will almost always be different from the exact FPS you want.

The common way to deal with this problem is to use the elapsed time since the previous update. For example:

// Bad
void enemy::update()
{
  position.x += 10; // this enemy's moving speed depends entirely on the FPS, and you can't control it
}

// Good
void enemy::update(float elapsedTime)
{
  position.x += speedX * elapsedTime; // now you can control speedX, and it doesn't matter how often update() gets called
}
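To show where elapsedTime comes from, a hypothetical main loop feeding it in (the enemies container and the endless loop are assumptions for illustration; it reuses the enemy class above):

#include <chrono>
#include <vector>

void game_loop(std::vector<enemy>& enemies)
{
  using clock = std::chrono::steady_clock;
  auto previous = clock::now();

  for (;;) // run until the application decides to quit
  {
    auto now = clock::now();
    float elapsedTime = std::chrono::duration<float>(now - previous).count();
    previous = now;

    for (enemy& e : enemies)
      e.update(elapsedTime); // frame-rate independent movement

    // rendering would go here
  }
}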
young
Basically you are saying the same thing as Edison, but this code demonstration makes it clear for everyone now, I think. Thanks
Raven
You're welcome~ :) and yep, mine is the same as the 2nd one in Edison's answer. I just wanted to point out the weakness of the 1st one.
young