views: 323

answers: 8
I would like to use my multi-threading programming skills (I got skills), but I realize that alone is not enough. My threads may still compete for the same core if the operating system is not aware of the opportunity to run them in parallel. What OS/compiler/library combination can I use on the Intel Xeon architecture to get my threads onto separate cores?

+4  A: 

On every operating system. That's pretty much the definition of a thread.

If you create an application which starts two threads, then the OS is able to put these on two separate cores. That's true for Windows, OSX, Linux, and any other OS you can think of.
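
For example, here's a minimal sketch of that (assuming a C++11 compiler with std::thread); the OS is free to run the two workers on separate cores:

    #include <thread>
    #include <cstdio>

    // A deliberately CPU-bound task, so each thread keeps one core busy.
    void spin(int id) {
        volatile unsigned long long x = 0;
        for (unsigned long long i = 0; i < 1000000000ULL; ++i)
            x += i;
        std::printf("thread %d done\n", id);
    }

    int main() {
        std::thread t1(spin, 1);   // the OS scheduler decides which core runs each thread
        std::thread t2(spin, 2);
        t1.join();
        t2.join();
        return 0;
    }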

jalf
A: 

Pretty much every modern OS out there will schedule threads across multiple cores, AFAIK. Certainly no variant of Unix that I've ever played with has the slightest problem with it, and I'm fairly certain that all the Windowses handle it fine. The compiler isn't an issue, as native threads are an OS-level thing, so the compiler just passes the syscall on down.

There are a few languages (such as Ruby) that don't use native threads, and instead use their own "green" threads, which are implemented in the interpreter and hence look like a single thread to the OS, but they're the exception rather than the rule, and it's usually pretty obvious in the docs what's going on.

womble
A: 

Let's make a small distinction. Software that is threaded isn't necessarily going to run on two cores simultaneously.

You need to write code that is Simultaneous Multi Threading (SMT) capable. Most OSs support this without issue - the only real difference is in how your software deals with locking and resources. If your threads depend on the same memory or resources at all, there's going to be contention and points in time where one or the other is stalled waiting for a resource, memory, or other lock.

Most programming languages that have threading are capable of this as well - making sure it runs simultaneously is really up to the programmer.
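
As a minimal sketch of that (again assuming a C++11 compiler with std::thread), one way to let two threads really run simultaneously is to give each its own private data, so neither ever blocks on a shared lock:

    #include <thread>
    #include <vector>
    #include <functional>
    #include <cstdio>

    // Each worker accumulates into its own partial sum; the threads share no
    // mutable state, so there is no lock contention and no stalling.
    void partial_sum(const std::vector<int>& data, size_t begin, size_t end, long long& out) {
        long long local = 0;                 // thread-private accumulator
        for (size_t i = begin; i < end; ++i)
            local += data[i];
        out = local;
    }

    int main() {
        std::vector<int> data(10000000, 1);
        long long s1 = 0, s2 = 0;
        size_t mid = data.size() / 2;

        std::thread t1(partial_sum, std::cref(data), size_t(0), mid, std::ref(s1));
        std::thread t2(partial_sum, std::cref(data), mid, data.size(), std::ref(s2));
        t1.join();
        t2.join();

        std::printf("total = %lld\n", s1 + s2);   // results combined only after both threads finish
        return 0;
    }

Only the final addition of the partial sums touches both results, and that happens after both threads have joined, so neither thread ever waits on the other.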

You can find information on how to do this in Windows with Visual Studio C++ here:

http://msdn.microsoft.com/en-us/library/172d2hhw.aspx

There are many tutorials on this, especially for Windows (C#, C++, VB, etc) - they can be found by searching:

http://www.google.com/search?q=simultaneous+multithreading+C%2B%2B

Adam Davis
A: 

As others have stated, any modern operating system will do this for you. The way in which it does it can have large impacts on performance, however, so you'll probably want to use threads in the manner your OS intended. This Wikipedia article seems to have a decent overview of the scheduling techniques used by the major operating systems.

rmeador
which does include MSDOS, Windows 3.1 and MacOS 9 as the OSes to avoid, if you happen to have a few musty floppies in your loft.
Pete Kirkham
Emphasis on "modern"...
womble
+6  A: 

All modern OSs distribute threads across all available cores, but there are several languages and libraries that prevent this from happening. The most common issues are:

  • Green threads. These had a performance advantage when multiple CPUs were rare and OS thread implementations weren't well optimised. A couple of Java VMs boasted this as a feature; they later moved to an M:N scheme, and I think it's 1:1 everywhere now.

  • GIL: Global Interpreter Lock. Some scripting languages have a lot of global state deep in the interpreter loop, so there's a single big (mutex) lock to ensure consistency; but that prevents two threads in the same interpreter from running simultaneously. At least Python and Lua have this problem. In these cases it's preferable to use multiple processes instead of multiple threads.

Also, it's good to remember that the biggest bottleneck in most CPU-bound applications is RAM bandwidth, usually not the CPU itself, so having several threads fighting over the same memory might not be the best design. It's usually much better to refactor into several separate processes that communicate via small messages.
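
A rough sketch of that multi-process pattern on a POSIX system (assuming fork() and a pipe carrying the message) might look like this:

    #include <unistd.h>
    #include <sys/wait.h>
    #include <cstdio>

    // Parent and child are separate processes with separate address spaces,
    // so they never contend on shared memory; they exchange one small message
    // over a pipe instead.
    int main() {
        int fds[2];
        if (pipe(fds) != 0) return 1;

        pid_t pid = fork();
        if (pid == 0) {                        // child: do some work, send the result
            long long result = 0;
            for (long long i = 0; i < 100000000LL; ++i) result += i;
            write(fds[1], &result, sizeof(result));
            _exit(0);
        }

        long long from_child = 0;              // parent: read the child's message
        read(fds[0], &from_child, sizeof(from_child));
        waitpid(pid, nullptr, 0);
        std::printf("child computed %lld\n", from_child);
        return 0;
    }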

Javier
A: 

Most modern operating systems are prepared for multiprocessing, and hence for multicore. However, the scheduler is responsible for distributing the threads to the cores. One of the most efficient multicore and multiprocessing OSes is FreeBSD. Still, not every OS is capable of scheduling threads to different cores. For example, the old Windows 98 does not work with more than one core. Besides, many OSes have restrictions on the maximum number of cores.

I've read some posts on Stack Overflow from a user talking about a new book from Packt Publishing, and I found the following article on the Packt Publishing web page:

http://www.packtpub.com/article/simplifying-parallelism-complexity-c-sharp

I've read Concurrent Programming on Windows, Joe Duffy's book. Now, I am waiting for "C# 2008 and 2005 Threaded Programming", Hillar's book - http://www.amazon.com/2008-2005-Threaded-Programming-Beginners/dp/1847197108/ref=pd_rhf_p_t_2

A: 

In another post, I recommended a new book. If you are looking for a deep answer, I recommend you read the first two chapters of "C# 2008 and 2005 Threaded Programming", by Gaston C. Hillar - Packt Publishing. I didn't know the answer to your question before I bought the book 5 days ago. Now, I can watch my Core 2 Quad Q6700 reach 98% CPU usage in C# using 4 concurrent threads! It is easier than I thought. If you have multithreading knowledge, it will be even easier for you. I am impressed with the results you can achieve using many cores at the same time. I recommend the book to those who are interested in getting started with multicore or threaded programming using C#. I am not a C++ programmer, so I needed a C# beginner's book to exploit multicore using threads.

+1  A: 

Since you "got skills," I'm going to assume you already know that pretty much all modern OS's will execute your threads over multiple cores if they are available and your threads don't have some sort of locking issue that effectively makes them sequential.

So I'm going to guess that you're really asking how to bind your threads to cores so that they won't compete with one another. This is done by setting the processor affinity of the thread. Below are links to articles on this for Windows and Linux. I'm sure others exist for other flavors of Unix as well. I'll also note that this usually isn't necessary, as outside of some special cases the OS knows better where to schedule threads than you do. Remember, modern OSes are multiprocess, so your threads aren't just competing with each other, they are competing with the threads from all the other processes on the box. Depending on load, limiting your threads to a single core may actually make them faster.

http://www.microsoft.com/technet/prodtechnol/windows2000serv/reskit/core/fnef_mul_dnpl.mspx?mfr=true

http://www.ibm.com/developerworks/linux/library/l-affinity.html
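
For reference, a rough sketch of the Linux approach (assuming g++ on Linux, where the GNU extension pthread_setaffinity_np described in the IBM article is available) might look like this:

    #include <pthread.h>   // pthread_setaffinity_np (GNU extension)
    #include <sched.h>     // cpu_set_t, CPU_ZERO, CPU_SET
    #include <thread>

    void worker() {
        // ... CPU-bound work here ...
    }

    int main() {
        std::thread t(worker);

        // Pin the thread to core 0; give each thread a different CPU index
        // if you want to keep them off the same core.
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);
        pthread_setaffinity_np(t.native_handle(), sizeof(cpu_set_t), &set);

        t.join();
        return 0;
    }

As noted above, hand-pinning threads is only worth it in special cases; most of the time the scheduler's own placement does at least as well.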

Erik Engbrecht