Follow-up question from Multi-core usage, threads, thread-pools.

Are threads moved from one core to another during their lifetime?

Of course. Imagine you have three threads running on a dual-core system. Show me a fair schedule that doesn't involve regularly moving threads between cores.

This is my first time on this site, so I didn't have enough rep to comment I guess. I decided to just make a new question referencing the one I wanted to comment on.

What is the process of selecting a core to move a thread to? Is it like the scheduler has a list of threads that need processing time, and as one finishes it puts another one in?

Also, I was wondering if there is a reference for the statement that threads are moved between cores at all, or is it just considered "common knowledge"?

Thanks!

+1  A: 

MSDN has some articles that would probably help clarify some things: Scheduling Priorities and Multiple Processors.

Excerpt (Scheduling Priorities):

Threads are scheduled to run based on their scheduling priority. Each thread is assigned a scheduling priority. The priority levels range from zero (lowest priority) to 31 (highest priority). Only the zero-page thread can have a priority of zero. (The zero-page thread is a system thread responsible for zeroing any free pages when there are no other threads that need to run.)

The system treats all threads with the same priority as equal. The system assigns time slices in a round-robin fashion to all threads with the highest priority. If none of these threads are ready to run, the system assigns time slices in a round-robin fashion to all threads with the next highest priority. If a higher-priority thread becomes available to run, the system ceases to execute the lower-priority thread (without allowing it to finish using its time slice), and assigns a full time slice to the higher-priority thread.
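The round-robin-within-priority rule in that excerpt can be modeled in a few lines. The sketch below is a toy simulation, not the Windows scheduler: it ignores preemption by newly arriving higher-priority threads and simply drains priority levels from highest to lowest, rotating within each level.

```python
from collections import deque

def schedule(threads, quantum=1):
    """Toy model of priority round-robin: always serve the highest priority
    level that has ready threads, rotating FIFO within that level.
    `threads` is a list of (name, priority, slices_of_work_needed)."""
    queues = {}  # priority -> FIFO queue of [name, remaining_work]
    for name, prio, work in threads:
        queues.setdefault(prio, deque()).append([name, work])
    timeline = []
    while queues:
        top = max(queues)                  # highest priority with ready threads
        thread = queues[top].popleft()     # dispatch: give it one time slice
        timeline.append(thread[0])
        thread[1] -= quantum
        if thread[1] > 0:
            queues[top].append(thread)     # back of its level's queue
        if not queues[top]:
            del queues[top]                # level drained; lower ones now run
    return timeline

# Two equal-priority threads share slices round-robin; the lower-priority
# thread only runs once they have finished.
print(schedule([("A", 2, 2), ("B", 2, 2), ("C", 1, 2)]))
# -> ['A', 'B', 'A', 'B', 'C', 'C']
```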

And in regards to Multiple Processors:

Computers with multiple processors are typically designed for one of two architectures: non-uniform memory access (NUMA) or symmetric multiprocessing (SMP).

In a NUMA computer, each processor is closer to some parts of memory than others, making memory access faster for some parts of memory than other parts. Under the NUMA model, the system attempts to schedule threads on processors that are close to the memory being used. For more information about NUMA, see NUMA Support.

In an SMP computer, two or more identical processors or cores connect to a single shared main memory. Under the SMP model, any thread can be assigned to any processor. Therefore, scheduling threads on an SMP computer is similar to scheduling threads on a computer with a single processor. However, the scheduler has a pool of processors, so that it can schedule threads to run concurrently. Scheduling is still determined by thread priority, but it can be influenced by setting thread affinity and thread ideal processor, as discussed in this topic.

Donut
Good links; however, they don't really address the issue of thread migration. When I create a thread, changing the priority, thread affinity, or ideal processor will let me select which core to run the thread on, but what happens if some other thread blocks that core? Does the scheduler then take account of that and move the thread I created to an available core?
mphair
+1  A: 

Windows provides an API to set thread affinity (i.e. to set which CPUs a thread may be scheduled on). There would be no need for such an API if a thread always executed on the same core.
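On Windows that API is `SetThreadAffinityMask`; Linux exposes the analogous call to Python as `os.sched_setaffinity`. A minimal Linux-only sketch (the choice of pinning to the lowest-numbered allowed core is arbitrary):

```python
import os

# Linux analogue of Windows' SetThreadAffinityMask: restrict which cores
# the scheduler may dispatch this process on (pid 0 = the calling process).
original = os.sched_getaffinity(0)        # e.g. {0, 1, 2, 3}
os.sched_setaffinity(0, {min(original)})  # pin to a single core
print(os.sched_getaffinity(0))            # one-element set: no migration now
os.sched_setaffinity(0, original)         # restore the full mask
```

While the mask is a single core, the scheduler has no freedom to migrate the process, which is exactly why such an API only makes sense in a world where migration is the default.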

elder_george
+2  A: 

It's not that a thread lives on a particular core and there is some process of moving it to another.

The operating system simply has a list of threads (and/or processes) that are ready to execute and will dispatch them on whatever core/cpu that happens to be available.

That said, any smart scheduler will try to schedule a thread on the same core as much as possible, simply to increase performance (its data is more likely to still be in that core's cache, etc.).

Isak Savo
Is it simply a matter of "the cache was already on this core, so this one has a higher likelihood of getting the thread back"? It seemed to be more complicated than that after reading "Fast Switching of Threads Between Cores" by Strong, Tullsen et al. and "Performance Implications of Single Thread Migration on a Chip Multi-Core" by Constantinou, Sazeides et al. If it is just a matter of cache history, then is the Windows scheduler in the group of "any smart scheduler", or should some care be taken to ensure that scheduling is "smart"?
mphair
It's more like "this thread has executed on this core, and will (due to things like the CPU cache) probably execute faster if I schedule it on this core again". I haven't read the papers you cite, so I can't comment on them. The scheduling algorithms in Windows and other systems are more advanced than my generalization, but the idea is the same: it's faster to run a thread on the same core, so the scheduler is more likely to put it there again. But there are no guarantees unless you manually set thread affinity.
Isak Savo
+1  A: 

Is it like the scheduler has a list of threads that need processing time and as one finishes it puts another one in?

Almost. What you describe is called cooperative multitasking, where threads are expected to regularly yield execution back to the scheduler (e.g. by living only for a short while, or by regularly calling Thread.Sleep(0)). This is not how a modern consumer operating system works, because one rogue uncooperative thread could hog the CPU in such a system.

What happens instead is that at regular time intervals, a context switch occurs. The running thread, whether it likes it or not, is suspended. This involves storing a snapshot of the state of the CPU registers in memory. The kernel's scheduler then gets a chance to run and re-evaluates the situation, and may decide to let another thread run for a while. In this way slices of CPU time (measured in milliseconds or less) are given to the different threads. This is called pre-emptive multitasking.
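The save-and-resume cycle above can be modeled with generators, where each `yield` stands in for the timer interrupt and the generator's frame plays the role of the saved register snapshot. This is only an illustration of the control flow, not real pre-emption (genuine pre-emption needs hardware timer support):

```python
from collections import deque

def worker(name, steps):
    """Each yield models a timer interrupt: the thread's state (this
    generator's frame) is snapshotted and the scheduler takes over."""
    for i in range(steps):
        yield f"{name}:{i}"

def preemptive_run(threads):
    ready = deque(threads)       # the scheduler's ready queue
    trace = []
    while ready:
        t = ready.popleft()      # dispatch: grant one time slice
        try:
            trace.append(next(t))
            ready.append(t)      # suspended; back on the ready queue
        except StopIteration:
            pass                 # thread finished; drop it
    return trace

# Slices alternate between the two "threads" until each finishes.
print(preemptive_run([worker("A", 2), worker("B", 3)]))
# -> ['A:0', 'B:0', 'A:1', 'B:1', 'B:2']
```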

When a system has more than one CPU or multiple CPU cores, the same thing happens for each core. Execution on each core is regularly suspended, and the scheduler decides which thread to run on it next. Since each CPU core has the same registers, the scheduler can and will move a thread around between cores while it attempts to fairly allocate time slices.
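The migration described here falls out naturally once multiple cores pull from a shared ready queue, which the following hypothetical sketch shows (real schedulers use per-core queues with work stealing, but the effect is the same):

```python
from collections import deque

# Two cores take turns drawing time slices from one shared ready queue,
# so a thread's slices land on whichever core happens to dequeue it.
ready = deque(["A", "B", "C"])
placement = []                   # (core, thread) for each time slice
for tick in range(6):
    core = tick % 2              # cores alternate dispatching
    t = ready.popleft()
    placement.append((core, t))
    ready.append(t)              # thread suspended, re-queued

print(placement)
# Thread "A" runs on core 0 at tick 0 and on core 1 at tick 3: it migrated.
```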

Wim Coenen