I was reading a review of the new Intel Atom 330, where they noted that Task Manager shows 4 cores - two physical cores, plus two more simulated by Hyperthreading.

Suppose you have a program with two threads. Suppose also that these are the only threads doing any work on the PC; everything else is idle. What is the probability that the OS will put both threads on the same core? This has huge implications for program throughput.

If the answer is anything other than 0%, are there any mitigation strategies other than creating more threads?

I expect there will be different answers for Windows, Linux, and Mac OS X.


Using sk's answer as Google fodder, then following the links, I found the GetLogicalProcessorInformation function in Windows. It speaks of "logical processors that share resources. An example of this type of resource sharing would be hyperthreading scenarios." This implies that jalf is correct, but it's not quite a definitive answer.
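
For anyone who wants to poke at this directly, here is a minimal C# sketch (my own, not from the docs verbatim) that calls GetLogicalProcessorInformation and prints which logical processors share a physical core. The struct layout follows the MSDN description; the 16-byte union at the end is modeled as two ulongs since only the processor-core entries are decoded:

using System;
using System.Runtime.InteropServices;

class CoreInfo
{
    enum LOGICAL_PROCESSOR_RELATIONSHIP
    {
        RelationProcessorCore = 0,
        RelationNumaNode = 1,
        RelationCache = 2,
        RelationProcessorPackage = 3
    }

    [StructLayout(LayoutKind.Sequential)]
    struct SYSTEM_LOGICAL_PROCESSOR_INFORMATION
    {
        public UIntPtr ProcessorMask;                       // one bit per logical processor
        public LOGICAL_PROCESSOR_RELATIONSHIP Relationship;
        public ulong Reserved1;                             // union payload (unused here)
        public ulong Reserved2;
    }

    [DllImport("kernel32", SetLastError = true)]
    static extern bool GetLogicalProcessorInformation(IntPtr buffer, ref uint returnLength);

    static void Main()
    {
        uint len = 0;
        GetLogicalProcessorInformation(IntPtr.Zero, ref len);   // first call just reports the needed buffer size
        IntPtr buffer = Marshal.AllocHGlobal((int)len);
        try
        {
            if (!GetLogicalProcessorInformation(buffer, ref len))
                throw new System.ComponentModel.Win32Exception();

            int size = Marshal.SizeOf(typeof(SYSTEM_LOGICAL_PROCESSOR_INFORMATION));
            for (int offset = 0; offset + size <= len; offset += size)
            {
                var info = (SYSTEM_LOGICAL_PROCESSOR_INFORMATION)Marshal.PtrToStructure(
                    new IntPtr(buffer.ToInt64() + offset),
                    typeof(SYSTEM_LOGICAL_PROCESSOR_INFORMATION));
                if (info.Relationship == LOGICAL_PROCESSOR_RELATIONSHIP.RelationProcessorCore)
                    Console.WriteLine("physical core -> logical processor mask 0x{0:X}",
                        info.ProcessorMask.ToUInt64());
            }
        }
        finally
        {
            Marshal.FreeHGlobal(buffer);
        }
    }
}

Any RelationProcessorCore entry whose mask has more than one bit set is a hyperthreaded core, so on the Atom 330 this should print two entries with two bits set in each.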

+2  A: 

You can make sure both threads get scheduled to the same execution units by giving them a processor affinity. This can be done in either Windows or Unix, via either an API (so the program can ask for it) or via administrative interfaces (so an administrator can set it). E.g., in WinXP you can use the Task Manager to limit which logical processor(s) a process can execute on.

Otherwise, the scheduling will be essentially random and you can expect a 25% usage on each logical processor.
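
A minimal sketch of the API route from managed code (the mask value here is just an example; which bits land on which physical core varies by machine, so check with GetLogicalProcessorInformation rather than assuming):

using System;
using System.Diagnostics;

class Pin
{
    static void Main()
    {
        // Bit n of the mask = logical processor n. 0x5 selects processors
        // 0 and 2, which on some HT machines sit on different physical
        // cores; verify the topology before relying on that.
        Process.GetCurrentProcess().ProcessorAffinity = (IntPtr)0x5;
    }
}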

sk
While I've never been one who likes to leave things up to the OS, setting a thread's affinity mask can be detrimental to performance if things get busy. Would SetThreadIdealProcessor() be a better option?
NTDLS
A: 

"Premature optimization is the root of all evil"

Are you sure you need to care about this? Trust the OS to do the right thing.

Pyrolistical
Setting processor affinity is a reasonable optimization and is standard practice for single-threaded, computationally-intensive processes (e.g. Folding@Home).
sk
Setting processor affinity is usually not an optimization at all. It can be in certain border cases, but mostly, it is just a hack to fix applications that don't run correctly if they're switched to another core.
jalf
Not sure about the correctness part, but it can decrease the overhead of context switches and reduce cache thrashing when you have N threads on <N processors.
sk
There are certain problems that need all the help they can get. I can't say I'm working on one right this minute, but I have in the past and probably will in the future.
Mark Ransom
jalf: If you are running any kind of parallel application, you need to set affinity for your processes or they will get juggled around by most current Linux distros. Nearly all the major MPI implementations do this.
tgamblin
+3  A: 

The probability is essentially 0%: the OS will utilize as many physical cores as possible. Your OS isn't stupid. Its job is to schedule everything, and it knows full well what cores it has available. If it sees two CPU-intensive threads, it will make sure they run on two physical cores.

Edit: Just to elaborate a bit: for high-performance stuff, once you get into MPI or other serious parallelization frameworks, you definitely want to control what runs on each core.

The OS will make a sort of best-effort attempt to utilize all cores, but it doesn't have the long-term information that you do, such as "this thread is going to run for a very long time" or "we're going to have this many threads executing in parallel". So it can't make perfect decisions, which means that your thread will get assigned to a new core from time to time, which means you'll run into cache misses and the like, which costs a bit of time. For most purposes, it's good enough, and you won't even notice the performance difference. And it also plays nice with the rest of the system, if that matters. (On someone's desktop system, that's probably fairly important. In a grid with a few thousand CPUs dedicated to this task, you don't particularly want to play nice, you just want to use every clock cycle available.)

So for large-scale HPC stuff, yes, you'll want each thread to stay on one core, fixed. But for most smaller tasks, it won't really matter, and you can trust the OS's scheduler.

jalf
I'd like to believe that too, but a little evidence would be useful.
Mark Ransom
Evidence of what? Create a program which runs two threads in an infinite loop, and check CPU usage. You'll find that any sane OS assigns a thread to each core. Do you think it's a problem the OS designers haven't considered? Of course not. It's a fundamental issue that an OS *has* to handle.
jalf
I don't have such a system at hand to test, otherwise that's not a bad suggestion.
Mark Ransom
jalf: there are still performance issues when these things context-switch and get juggled. We see this at the national labs, and all the runtimes on parallel machines set affinity to make sure processes stay on their cores. See http://www.open-mpi.org/projects/plpa/ and my answer below.
tgamblin
Yep, I know it's not 100% optimal, but for most purposes, it comes close enough. My point was simply that the OS isn't so dumb that it'll try to schedule all the CPU-heavy threads on the same core, leaving others totally unused. Of course for MPI or similar, yes, you want full control. :)
jalf
+1  A: 

I don't know about the other platforms, but in the case of Intel, they publish a lot of info on threading on their Intel Software Network. They also have a free newsletter (The Intel Software Dispatch) you can subscribe to via email, and it has had a lot of such articles lately.

Jim Anderson
+2  A: 

Linux has quite a sophisticated thread scheduler which is HT aware. Some of its strategies include:

Passive load balancing: If a physical CPU is running more than one task, the scheduler will attempt to run any new tasks on a second physical processor.

Active load balancing: If there are 3 tasks, 2 on one physical CPU and 1 on the other, then when the second physical processor goes idle the scheduler will attempt to migrate one of the tasks to it.

It does this while attempting to keep thread affinity, because when a thread migrates to another physical processor it has to refill all levels of cache from main memory, causing a stall in the task.

So to answer your question (on Linux at least): given 2 threads on a dual-core hyperthreaded machine, each thread will run on its own physical core.

joshperry
+4  A: 

A sane OS will try to schedule computationally intensive tasks on their own cores, but problems arise when you start context switching them. Modern OSes still have a tendency to schedule things on cores where there is no work at scheduling time, but this can result in processes in parallel applications getting swapped from core to core fairly liberally. For parallel apps, you do not want this, because you lose data the process might've been using in the caches on its core. People use processor affinity to control for this, but on Linux, the semantics of sched_affinity() can vary a lot between distros/kernels/vendors, etc.

If you're on Linux, you can portably control processor affinity with the Portable Linux Processor Affinity Library (PLPA). This is what OpenMPI uses internally to make sure processes get scheduled to their own cores in multicore and multisocket systems; they've just spun off the module as a standalone project. OpenMPI is used at Los Alamos among a number of other places, so this is well-tested code. I'm not sure what the equivalent is under Windows.

tgamblin
+1, just note that the function is `sched_setaffinity`.
avakar
A: 

I have been looking for some answers on thread scheduling on Windows, and have some empirical information that I'll post here for anyone who may stumble across this post in the future.

I wrote a simple C# program that launches two threads. On my quad core Windows 7 box, I saw some surprising results.

When I did not force affinity, Windows spread the workload of the two threads across all four cores. There are two lines of code that are commented out - one that binds a thread to a CPU, and one that suggests an ideal CPU. The suggestion seemed to have no effect, but setting thread affinity did cause Windows to run each thread on its own core.

To see the results best, compile this code using the freely available compiler csc.exe that comes with the .NET Framework 4.0 client, and run it on a machine with multiple cores. With the processor affinity line commented out, Task Manager showed the threads spread across all four cores, each running at about 50%. With affinity set, the two threads maxed out two cores at 100%, with the other two cores idling (which is what I expected to see before I ran this test).
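
For reference, the invocation looks something like this (the framework path varies by install and bitness; ThreadTest.cs is just whatever name you saved the file under):

C:\Windows\Microsoft.NET\Framework\v4.0.30319\csc.exe /optimize+ ThreadTest.cs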

EDIT: I initially found some differences in performance with these two configurations, but I haven't been able to reproduce them, so I edited this post to reflect that. I still found the thread affinity behavior interesting, since it wasn't what I expected.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Diagnostics;
using System.Runtime.InteropServices;
using System.Threading.Tasks;

class Program
{
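    // Native Win32 thread id of the calling thread. ProcessThread.Id below is
    // also an OS thread id, so the two can be compared; the CLR's
    // ManagedThreadId is a different, managed-only id and would not match.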
    [DllImport("kernel32")]
    static extern int GetCurrentThreadId();

    static void Main(string[] args)
    {
        Task task1 = Task.Factory.StartNew(() => ThreadFunc(1));
        Task task2 = Task.Factory.StartNew(() => ThreadFunc(2));
        Stopwatch time = Stopwatch.StartNew();
        Task.WaitAll(task1, task2);
        Console.WriteLine(time.Elapsed);
    }

    static void ThreadFunc(int cpu)
    {
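        // Look up the OS thread this task is running on. This assumes the CLR
        // keeps the managed thread on one native thread for the duration,
        // which holds for the default host but is not formally guaranteed.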
        int cur = GetCurrentThreadId();
        var me = Process.GetCurrentProcess().Threads.Cast<ProcessThread>().Where(t => t.Id == cur).Single();
        //me.ProcessorAffinity = (IntPtr)cpu;     //uncommenting this binds each thread to its own logical processor
        //me.IdealProcessor = cpu;                //seems to have no effect

        //do some CPU / memory bound work
        List<int> ls = new List<int>();
        ls.Add(10);
        for (int j = 1; j != 30000; ++j)
        {
            ls.Add((int)ls.Average());
        }
    }
}
bart
You should be aware that if you are using Task Manager to look at the usage, Task Manager itself can be very disruptive to the system because it generally runs with a boosted priority. Try forcing Task Manager to Low Priority and see if the pattern changes.
Zan Lynx
Can you share your run times under the different configurations?
Mark Ransom