I'm developing a scheduler for an embedded system. The scheduler calls each process every X milliseconds, and this interval can be configured separately for each process.
Everything is coded and every process is called as it should be; the problem I'm facing is this: imagine I set four processes to be called every 10, 15, 5 and 30 milliseconds respectively:
A: 10ms
B: 15ms
C: 5ms
D: 30ms
The resulting calls over time will be:
        A       A       A
            B           B
    C   C   C   C   C   C   C      processes being called
                        D
----------------------------------
0   5   10  15  20  25  30  35 ... ms
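
To make the collision concrete, here is a simplified model of the tick loop (a sketch for illustration only, not my real code; it just assumes a 1 ms tick and the four example periods above). Running it shows all four processes firing on the same tick at t = 30 ms:

    #include <stdio.h>

    /* Example periods in ms, matching the four processes above. */
    static const int period_ms[4] = { 10, 15, 5, 30 };
    static const char name[4]     = { 'A', 'B', 'C', 'D' };

    int main(void)
    {
        /* Simulate a 1 ms tick and print which processes fire on each tick. */
        for (int t = 1; t <= 35; ++t) {
            int fired = 0;
            for (int i = 0; i < 4; ++i) {
                if (t % period_ms[i] == 0) {
                    if (fired == 0)
                        printf("t=%2d ms:", t);
                    printf(" %c", name[i]);
                    fired++;
                }
            }
            if (fired > 1)
                printf("   <-- %d calls at the same tick", fired);
            if (fired)
                printf("\n");
        }
        return 0;
    }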
The problem is that when 30 ms is reached, all four processes are called at the same moment (one after the other), and this can delay correct execution from that point on.
This could be solved by adding an initial delay (offset) to each process while preserving its calling period, so that the call times stop coinciding. My problem is that I don't know how to calculate the delay to apply to each process so that the number of collisions is minimized.
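
The only thing I can think of is a brute-force/greedy search like the sketch below: over the hyperperiod (the LCM of all periods, 30 ms in the example), place one process at a time and pick the initial offset in [0, period) that collides least with the processes already placed. The names and the greedy strategy here are just my own guess, not a known algorithm:

    #include <stdio.h>

    #define N 4
    #define HYPERPERIOD 30   /* lcm(10, 15, 5, 30) for the example above */

    static const int period_ms[N] = { 10, 15, 5, 30 };
    static int offset_ms[N];     /* result: initial delay for each process */

    /* Count how many of the first 'fixed' processes (offsets already set)
     * fire at time t. */
    static int load_at(int t, int fixed)
    {
        int count = 0;
        for (int i = 0; i < fixed; ++i)
            if ((t - offset_ms[i]) % period_ms[i] == 0)
                count++;
        return count;
    }

    int main(void)
    {
        /* Greedy placement: for each process, try every offset in
         * [0, period) and keep the one that collides least with the
         * processes placed so far, over one hyperperiod. */
        for (int i = 0; i < N; ++i) {
            int best_offset = 0, best_cost = -1;
            for (int off = 0; off < period_ms[i]; ++off) {
                int cost = 0;
                for (int t = off; t < HYPERPERIOD; t += period_ms[i])
                    cost += load_at(t, i);   /* collisions this offset adds */
                if (best_cost < 0 || cost < best_cost) {
                    best_cost = cost;
                    best_offset = off;
                }
            }
            offset_ms[i] = best_offset;
            printf("process %d: period %2d ms, offset %2d ms\n",
                   i, period_ms[i], offset_ms[i]);
        }
        return 0;
    }

For the example periods this produces offsets of roughly 0, 1, 2 and 3 ms, which removes the simultaneous call at 30 ms, but I have no idea whether this kind of greedy placement is anywhere near optimal in general.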
Is there any known algorithm for this, or some mathematical guidance?
Thank you.