tags:
views: 574
answers: 9

What is the best way to break out of a loop as close to 30 ms as possible in C++? Polling boost::posix_time::microsec_clock? Polling QTime? Something else?

Something like:

A = now;
for (blah; blah; blah) {
    Blah();
    if (now - A > 30000)
         break;
}

It should work on Linux, OS X, and Windows.

The calculations in the loop are for updating a simulation. Every 30ms, I'd like to update the viewport.
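For example, a minimal sketch of such a loop, assuming a C++11 compiler is available (std::chrono::steady_clock replaces the Boost/Qt polling mentioned above, and Blah() is a placeholder for one simulation step):

```cpp
#include <chrono>

// Placeholder for one simulation step (the Blah() in the pseudocode above).
void Blah() { /* advance the simulation */ }

void run_for_30ms()
{
    using clock = std::chrono::steady_clock;  // monotonic, so wall-clock changes can't break it
    const auto deadline = clock::now() + std::chrono::milliseconds(30);

    while (clock::now() < deadline)
    {
        Blah();  // do real work each iteration, then re-check the clock
    }
}
```

The overshoot past 30 ms is bounded by the duration of one Blah() call plus the clock's resolution.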

A: 

See QueryPerformanceCounter and QueryPerformanceFrequency

gatorfax
Assuming you're on a Win32 platform. The OP mentioned Qt, so my guess is that the OP doesn't/can't use the Win32 API.
sheepsimulator
Yeah, I'd like to stay OS independent if possible. If I find a solution for the three mentioned OSs, I could write the platform-independent time polling code, but I was hoping someone had already done it.
Neil G
My answer was posted BEFORE the question was updated to specify the OS. Thanks.
gatorfax
+1  A: 

The code snippet example in this link pretty much does what you want:

http://www.cplusplus.com/reference/clibrary/ctime/clock/

Adapted from their example:

void runwait( int seconds )
{
   clock_t endwait = clock() + seconds * CLOCKS_PER_SEC;
   while (clock() < endwait)
   {
      /* Do stuff while waiting */
   }
}
docflabby
Is it possible to use the ctime library to approach the granularity (30ms) that he is asking for?
sheepsimulator
-1, this is busy waiting (the process will suck CPU while doing nothing). The cases where busy waiting is acceptable are very specific (in that case, it may be acceptable for 30 ms if it's the only solution, but not for a whole second).
Bastien Léonard
True, on a graphical framework this is a no-no (guessing by the Qt reference). If it's just a console app it's probably okay as the OS would take care of it.
sheepsimulator
On my machine (Intel, running OS X): #define CLOCKS_PER_SEC (__DARWIN_CLK_TCK) and #define __DARWIN_CLK_TCK 100 /* ticks per second */
Neil G
I'm not trying to wait, but polling clock() would work if CLOCKS_PER_SEC is always at least 100.
Neil G
This is what I went with, thanks.
Neil G
+2  A: 

Short answer is: you can't in general, but you can if you are running on the right OS or on the right hardware.

You can get CLOSE to 30ms on all the OS's using an assembly call on Intel systems and something else on other architectures. I'll dig up the reference and edit the answer to include the code when I find it.

The problem is the time-slicing algorithm and how close to the end of your time slice you are on a multi-tasking OS.

On some real-time OS's, there's a system call in a system library you can make, but I'm not sure what that call would be.

edit: LOL! Someone already posted a similar snippet on SO: http://stackoverflow.com/questions/275004/c-timer-function-to-provide-time-in-nano-seconds

VonC has got the comment with the CPU timer assembly code in it.

James
I haven't been ignoring your answer -- I've been looking into it. This seems like the best way to get a resolution better than 0.01s. However, it's a lot more work for me.
Neil G
Using RDTSC directly is a recipe for pain, due to SMP and power management. The OS timing functions can use a sane timer (e.g. HPET) or attempt to work around SMP clock skew (e.g. QPC() w/AMD Processor Driver).
bk1e
+2  A: 

If you need to do work until a certain time has elapsed, then docflabby's answer is spot-on. However, if you just need to wait, doing nothing, until a specified time has elapsed, then you should use usleep().

TokenMacGuy
I need to do work, but thanks for letting me know about usleep().
Neil G
+2  A: 

According to your question, every 30ms you'd like to update the viewport. I wrote a similar app once that probed hardware every 500ms for similar stuff. While this doesn't directly answer your question, I have the following followups:

  • Are you sure that Blah(), for updating the viewport, can execute in less than 30ms in every instance?
  • Seems more like running Blah() would be done better by a timer callback.
  • It's very hard to find a library timer object that will push on a 30ms interval to do updates in a graphical framework. On Windows XP I found that the standard Win32 API timer, which posts window messages when the timer interval expires, couldn't do updates any faster than a 300ms interval even on a 2GHz P4, no matter how low I set the timer interval. While there were high-performance timers available in the Win32 API, they have many restrictions; namely, you can't do any IPC (like updating UI widgets) in a loop like the one you cited above.
  • Basically, the upshot is you have to plan very carefully how you want to have updates occur. You may need to use threads, and look at how you want to update the viewport.

Just some things to think about. They caught me by surprise when I worked on my project. If you've thought these things through already, please disregard my answer :0).

sheepsimulator
Thanks for your follow-ups. Blah() is the simulation update code, not the UI update code. It will almost surely not take longer than 30ms. If it does, I can add clock polling code inside it, assuming the clock-polling code is fast. If the atomic operations inside Blah() take longer than 30ms, then tough luck, user. The UI update code is called automatically by Qt when the loop exits and the function returns. The function is called on a timer callback as you suggest. I don't think I have any IPC at all, but I could be wrong.
Neil G
A: 

If you are using Qt, here is a simple way to do this:

QTimer* t = new QTimer( parent );
t->setInterval( 30 );       // in msec
t->setSingleShot( false );
connect( t, SIGNAL( timeout() ), viewPort, SLOT( redraw() ) );

You'll need to specify viewPort and redraw(). Then start the timer with t->start().

swongu
I don't think this will interrupt the for loop, since it's all happening on one thread.
Neil G
I am, however, using a QTimer to call the function that contains the for loop.
Neil G
Why don't you just use Qt's event loop? Will you be doing anything else inside your for loop?
swongu
My loop updates the simulation. It is running within the event loop, but it has to return to the event loop periodically, not too quickly and not too slowly. That's why I'm asking.
Neil G
+9  A: 

The calculations in the loop are for updating a simulation. Every 30ms, I'd like to update the viewport.

Have you considered using threads? What you describe seems the perfect example of why you should use threads instead of timers.

The main process thread keeps taking care of the UI and has a QTimer set to 30ms to update it. It locks a QMutex to get access to the data, performs the update, and releases the mutex.

The second thread (see QThread) does the simulation. For each cycle, it locks the QMutex, does the calculations and releases the mutex when the data is in a stable state (suitable for the UI update).

With the increasing trend toward multi-core processors, you should think more and more about using threads rather than timers. Your application then automatically benefits from the increased power (multiple cores) of new processors.
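The two-thread scheme above can be sketched as follows, with std::thread and std::mutex standing in for QThread and QMutex so the sketch is self-contained (Simulation, state, and snapshot() are illustrative names, not part of any library):

```cpp
#include <atomic>
#include <mutex>
#include <thread>

// Sketch of the scheme described above: a worker thread advances the
// simulation; the UI thread copies out a consistent view every 30 ms.
struct Simulation
{
    std::mutex mutex;               // guards `state`
    long state = 0;                 // stand-in for the simulation data
    std::atomic<bool> running{true};

    void simulate()                 // runs on the worker thread
    {
        while (running)
        {
            std::lock_guard<std::mutex> lock(mutex);
            ++state;                // one step; data is stable whenever unlocked
        }
    }

    long snapshot()                 // called from the UI thread's 30 ms timer slot
    {
        std::lock_guard<std::mutex> lock(mutex);
        return state;               // consistent view for the viewport update
    }
};
```

The UI side would launch simulate() on its own thread once, then have the 30 ms timer slot call snapshot() and redraw the viewport with the result.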

Juliano
Upvoted. Thanks, I will give this some serious consideration. Right now, debugging the single-threaded app is a pain, but when it's stable I will probably go for a multi-threaded model. Thanks for detailing the plan for converting this to a multi-core application.
Neil G
If the poster was say implementing a progress control or something similar, I'd probably use a timer service/callback that checked some shared state. My normal thread would continue processing, and update a marker of some sort, and the callback thread would take care of doing the updating. On windows, you would catch the WM_TIMER message, for example.
Chris Kaminski
@darthcoder In multithreaded programs you don't just update markers; you have to use a monitor or a mutex, or you risk touching data that is not ready for consumption. Also, you should leave the main thread idle, receiving messages and doing the UI updates, and have a secondary thread (not in the event loop) doing the heavy work. That way your application stays responsive during processing. QTimer does what you proposed with WM_TIMER in a portable way (Neil G said that it had to be portable).
Juliano
+4  A: 

While this does not directly answer the question, it might offer another angle on the solution: what about placing the simulation code and the user interface in different threads? If you use Qt, the periodic update can be realized using a timer or even QThread::msleep(). You can adapt the threaded Mandelbrot example to suit your needs.

Ariya Hidayat
thanks for the pointer to the example!
Neil G
+2  A: 

You might consider just updating the viewport every N simulation steps rather than every K milliseconds. If this is (say) a serious commercial app, then you're probably going to want to go the multi-thread route suggested elsewhere, but if (say) it's for personal or limited-audience use and what you're really interested in is the details of whatever it is you're simulating, then every-N-steps is simple, portable and may well be good enough to be getting on with.
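A bare-bones sketch of the every-N-steps idea (run() and redraw_every are illustrative names; the push_back marks where a real viewport update would go):

```cpp
#include <vector>

// Update the viewport every `redraw_every` simulation steps instead of
// every K milliseconds; returns the steps at which redraws would occur.
std::vector<long> run(long total_steps, long redraw_every)
{
    std::vector<long> redraws;
    for (long step = 1; step <= total_steps; ++step)
    {
        // ... advance the simulation by one step ...
        if (step % redraw_every == 0)
            redraws.push_back(step);   // a real app would redraw here
    }
    return redraws;
}
```

The trade-off is that the redraw rate now varies with the cost of a simulation step, which may be perfectly acceptable for personal or limited-audience use.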