views:

227

answers:

2

Does the RTOS or the processor play the major role in determining the time for a context switch? Roughly what share does each of these two contribute to the context switch time?

Can anyone answer with respect to the uC/OS-II RTOS?

+6  A: 

I would say both are significant, but it is not really as simple as that:

The actual context switch time is simply a matter of the number of instruction cycles required to perform the switch; like anything in software, it may be coded efficiently or it may not. On the other hand, all other things being equal, a processor with a large register set will require more instruction cycles to save the context, though having a large register set may make other code far more efficient.

A processor may also have an architecture that directly supports fast context switching. For example, the lowly 8-bit 8051 has four duplicate register banks, so a context switch is little more than a register bank switch (so long as you have no more than four threads), and given that Silicon Labs produce 8051-based devices at 100 MIPS, that could be very fast indeed!

More sophisticated processors and operating systems may use an MMU to provide thread memory protection; this adds context switch overhead, but with benefits that may outweigh it. Such processors also generally have high clock rates, which helps.

So all in all, the processor speed, the processor architecture, the quality of the RTOS implementation, and the functionality provided by the RTOS may all affect context switch time. But in the end the easiest way to improve switch time is almost certainly to increase the clock rate.

Although it is nice to have more headroom, if context switch time is a make-or-break issue for your project on any reputable RTOS, you should consider the suitability of either your hardware or your design. You should aim for a design that minimises context switches. For example, if an ADC conversion takes 6us and a context switch takes 20us, then you would do better to busy-wait than to use a conversion-complete interrupt; better yet, use DMA transfers to avoid context switches on single data items where possible.

Clifford
@Alexandre: Fixed - thanks. You were of course at liberty to edit it yourself.
Clifford
@Clifford You need at least 2k rep to edit a normal post.
Alexandre Jasmin
@Alexandre: my error.
Clifford
I liked the busy-wait usage scenario conveyed here; interesting use of busy-wait. But based on Simon's response, I feel it depends more on the processor, since the processor also constrains the RTOS implementation. If the processor support is poor, the RTOS cannot improve on it.
S.Man
@S.Man: uC/OS-II's scheduler allows only one task per priority level, and by implication does not support round-robin scheduling. This is often adequate and probably leads to a faster context switch time than an RTOS that supports round-robin/multiple tasks per priority level. That being the case, for the same processor, the RTOS implementation is significant - hence the answer "both".
Clifford
+2  A: 

The uC/OS-II RTOS is written in C, with a few very specific sections (possibly in assembly) for processor-specific handling. The context switch is part of those processor-specific sections.

So the context switch time will depend heavily on the processor selected and on the specific port code used to adapt uC/OS-II to that processor. I believe all the source code is available, so you should be able to see how much code a context switch requires. I also think uC/OS-II has hooks (callbacks) that may allow you to add performance-measuring code.

simon
+1 for addressing the uC/OS-II part of the question, and providing accurate info to boot.
Dan