views: 186
answers: 4
I have an asynchronous dataflow system written in C++. In a dataflow architecture, the application is a set of component instances, which are initialized at startup and then communicate with each other via pre-defined messages. There is a component type called Pulsar, which provides a "clock signal message" to the components connected to it (e.g. Delay). It fires a message (calls the dataflow dispatcher API) every X ms, where X is the value of the "frequency" parameter, given in ms.

In short, the task is simply to call a function (method) every X ms. The question is: what's the best/official way to do it? Is there a pattern for it?

Some methods I have found:

  • Use SIGALRM. I think signalling is not suited for this purpose. Also, its resolution is 1 second, which is too coarse.
  • Use a HW interrupt. I don't need that precision, and I'm wary of HW-dependent solutions (the server is compiled for several platforms, e.g. ARM).
  • Measure the elapsed time and usleep() until the next call. I'm not sure it's a good idea to issue time-related system calls from 5 threads, 10 times per second each - but maybe I'm wrong.
  • Use real-time kernel functions. I don't know anything about them. Also, I don't need crystal-precise calls (it's not a nuclear reactor), and I can't install an RT kernel on some platforms (only a 2.6.x kernel is available).

Maybe the best answer would be a short, commented excerpt from an audio/video player's source code (which I couldn't find/understand by myself).

UPDATE (requested by @MSalters): The co-author of the DF project is using Mac OS X, so we should find a solution that works on most POSIX-compliant operating systems, not only Linux. Maybe in the future there will be a target device that uses BSD or some restricted Linux.

+3  A: 

If you do not need hard real-time guarantees, usleep should do the job. If you want hard real-time guarantees, then an interrupt-based or real-time-kernel-based function will be necessary.

doron
usleep is obsolete; you should be using nanosleep or select.
aeh
nanosleep is cool (it tells you the remaining time upon interruption), but I'd still need to call a time-reading function every round to correct the amount to sleep, and I don't want to.
ern0
+1  A: 

To be honest, I think having to have a "pulsar" in what claims to be an asynchronous dataflow system is a design flaw. Either it is asynchronous or it has a synchronizing clock event.

If you have a component that needs a delay, have it request one, through boost::asio::deadline_timer.async_wait or any of the lower-level solutions (select() / epoll() / timer_create() / etc). Either way, the most effective C++ solution is probably the boost.asio timers, since they would use whatever is most efficient on your Linux kernel version.

Cubbi
I don't use Boost or any other libs (only stdlib and pthreads). We could open a debate on the design; my goal was to keep the Dispatcher (sometimes also called broker or scheduler) as thin as possible, which is why Pulsar is a separate component. Maybe one day the Dispatcher will provide timer functions/services, but then it will feed the Pulsars. There are already other special Dispatcher functions accessible through components (one example: message ordering). I have to say that my system is in a "prototype" state: it works, but there are lots of things to do.
ern0
Well, as far as the as-is design goes, I'd go with timer_create() / timer_settime(). Don't forget to put SIGEV_THREAD in sigev_notify for timer_create, so that it delivers timer expirations to a thread rather than to the process. It would not be the "official C++ way", but it would be the "official POSIX way".
Cubbi
I didn't know signals could be so thread-friendly; I'll try it.
ern0
+2  A: 

An alternative to the previously mentioned approaches is to use the Timer FD support in Linux Kernels 2.6.25+ (pretty much any distribution that's close to "current"). Timer FDs provide a bit more flexibility than the previous approaches.

Rakis
Yep, it's so cool! But I don't want to drop BSD compatibility (my friend, the co-author of this project, is using a Mac, unfortunately).
ern0
@ern0: Please add such concerns to the question!
MSalters
+1  A: 

Neglecting the question of design (which I think is an interesting question, but deserves its own thread)...

I would start off by designing an 'interrupt' idea, using signals or some kernel function to interrupt every X usec. I would put off using sleep functions until the other ideas proved too painful.

Paul Nathan