To my understanding, Thread.Sleep(0) forces the OS to perform a context switch.
I wanted to check the maximum amount of time that could pass before a thread in an application receives some CPU time.
So I built an application (in C#) that calls Thread.Sleep(0) in a while loop and measures the time that passes between each call.
When this application is the only one running on a two-core test PC, the maximum observed time is just under 1 millisecond (with an average of 0.9 microseconds) and it uses all the available CPU (100%).
When I run it alongside a CPU-filling dummy application (all with the same priority), the max time is around 25 ms and the average time is 20 ms. It behaves exactly as I expect, and the times are very stable.
Whenever it gets some CPU time it immediately gives control back to whatever else has processing to do, like a game of hot potato (its CPU usage drops to 0%). If there is no other application running, control comes back immediately.
Given this behavior, I expected this application to have minimal impact on a computer running real-life applications (and to give me the actual "latency" I could expect to see in the applications running there). But to my surprise, it negatively affected the performance of this specific system in an observable way.
Am I missing some important point concerning Thread.Sleep(0)?
As a reference, here is the code of this application:
// Tracks the longest and average gap observed around each Thread.Sleep(0) call.
private volatile bool _running = true;          // volatile so the worker thread sees updates from the UI thread
private readonly Stopwatch _timer = new Stopwatch();
private double _maxTime;
private long _count;
private double _average;
private double _current;

public Form1()
{
    InitializeComponent();
    Thread t = new Thread(Run);
    t.IsBackground = true;  // don't keep the process alive once the form closes
    t.Start();
}

public void Run()
{
    while (_running)
    {
        // Time how long it takes before this thread gets scheduled again after yielding.
        _timer.Start();
        Thread.Sleep(0);
        _timer.Stop();
        _current = _timer.Elapsed.TotalMilliseconds;
        _timer.Reset();

        // Incremental running average: new mean = old mean * (n - 1)/n + sample * 1/n.
        _count++;
        _average = _average * ((_count - 1.0) / _count) + _current * (1.0 / _count);

        if (_current > _maxTime)
        {
            _maxTime = _current;
        }
    }
}
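
In case the exact yield semantics matter here: Thread.Sleep(0) only relinquishes the remainder of the time slice to ready threads of equal priority, Thread.Yield() yields to any thread that is ready to run on the current processor, and Thread.Sleep(1) actually suspends the thread (subject to the system timer resolution). A minimal sketch of the same measurement loop parameterized by the yield call, just for comparison (the MeasureGaps helper name is mine, not part of the application above):

using System;
using System.Diagnostics;
using System.Threading;

class YieldProbe
{
    // Runs the given yield action in a loop for 'duration' and reports the
    // maximum and average gap measured around each call.
    static void MeasureGaps(string name, Action yieldCall, TimeSpan duration)
    {
        var overall = Stopwatch.StartNew();
        var gap = new Stopwatch();
        double max = 0, sum = 0;
        long count = 0;

        while (overall.Elapsed < duration)
        {
            gap.Restart();
            yieldCall();
            gap.Stop();

            double ms = gap.Elapsed.TotalMilliseconds;
            sum += ms;
            count++;
            if (ms > max) max = ms;
        }

        Console.WriteLine($"{name}: max = {max:F3} ms, avg = {sum / count:F3} ms over {count} calls");
    }

    static void Main()
    {
        var duration = TimeSpan.FromSeconds(5);
        MeasureGaps("Thread.Sleep(0)", () => Thread.Sleep(0), duration); // yields only to equal-priority ready threads
        MeasureGaps("Thread.Yield()", () => Thread.Yield(), duration);   // yields to any ready thread on this core
        MeasureGaps("Thread.Sleep(1)", () => Thread.Sleep(1), duration); // actually suspends; cost depends on timer resolution
    }
}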
Edited for clarity (purpose of the application): I am currently running a soft real-time, multi-threaded application (well, a group of applications) that needs to react to inputs roughly every 300 ms, but we do miss some deadlines from time to time (less than 1% of the time), and I'm currently trying to improve that number.
I wanted to measure the variability currently caused by the other processes on the same machine: I thought that by putting the application written above on this semi-real-time machine, the maximum time observed would tell me how much variability the system introduces. I.e., I have 300 ms, but if the maximum observed time before a thread gets some CPU time stands at 50 ms, then to improve performance I should cap my processing time at 250 ms (since I might already be 50 ms late).
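
Since the Sleep(0) loop itself perturbs the system (it keeps one logical core at 100% and floods the scheduler with context switches), one lower-impact probe I can think of is to sleep for a fixed interval and record how much longer than requested the sleep actually took; the overshoot is a rough measure of scheduling delay plus timer granularity. A sketch, with the probe interval and run length chosen arbitrarily:

using System;
using System.Diagnostics;
using System.Threading;

class LatencyProbe
{
    static void Main()
    {
        const int requestedMs = 5;               // arbitrary probe interval
        var runFor = TimeSpan.FromSeconds(30);   // arbitrary measurement window

        var overall = Stopwatch.StartNew();
        var sw = new Stopwatch();
        double maxOvershoot = 0, sumOvershoot = 0;
        long samples = 0;

        while (overall.Elapsed < runFor)
        {
            sw.Restart();
            Thread.Sleep(requestedMs);
            sw.Stop();

            // Anything beyond the requested interval is scheduling delay plus timer granularity
            // (on Windows the default timer resolution is around 15.6 ms unless something raises it).
            double overshoot = sw.Elapsed.TotalMilliseconds - requestedMs;
            if (overshoot < 0) overshoot = 0;

            sumOvershoot += overshoot;
            samples++;
            if (overshoot > maxOvershoot) maxOvershoot = overshoot;
        }

        Console.WriteLine($"max overshoot = {maxOvershoot:F3} ms, avg = {sumOvershoot / samples:F3} ms over {samples} sleeps");
    }
}

This keeps the probe's CPU usage near zero instead of pinning a core, at the cost of the measurement including the timer resolution on top of pure scheduling delay.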