Taken from Microsoft documentation:

By default, the thread pool has 250 worker threads per available processor. You can change this setting using the ThreadPool.SetMaxThreads method.

It's also said, as it's widely known, that there is some overhead:

Threads have some level of overhead. Therefore, if a computer has multiple processors and you split processing into two threads, you won’t see a 100 percent performance improvement.

From some experience, and more from guessing, I'd have expected something like 1 to 4 threads per CPU, not 250! Does someone know why 250? Is it a value that is supposed to give the best overall performance, or is it meant to ensure that pretty much every task you give to the thread pool gets processed without waiting for other tasks to finish?

+4  A: 

The motivation isn't performance, since, as you've mentioned, having too many threads can easily cause performance degradation (due to context switching, cache thrashing, contention, etc.).
The idea behind this magical number is the attempt to avoid deadlocks in the user's code. A developer may cause a deadlock by queuing numerous work items to the thread pool that wait on other items that were also queued to the thread pool. If a situation occurs where the thread pool has utilized its maximum number of threads (they are all in a wait state), then you've got yourself a deadlock.

Of course, there isn't anything special about the number 250, and deadlocks may still occur if the user insists on such a problematic usage pattern, but raising the limit should reduce the chance of reaching a deadlock in such scenarios.
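To make the hazard concrete, here is a small sketch of the pattern in Java (the same issue applies to any bounded pool, including the CLR thread pool). A two-thread pool stands in for a pool that has hit its maximum: each outer task submits an inner task to the same pool and blocks waiting on its result, so no thread is ever free to run the inner tasks. The class and method names are just for illustration:

```java
import java.util.concurrent.*;

public class PoolDeadlockDemo {
    // Returns "deadlocked" if the nested-submit pattern wedged the pool,
    // "completed" otherwise.
    static String demo() throws Exception {
        // A pool with only 2 threads stands in for an exhausted thread pool.
        ExecutorService pool = Executors.newFixedThreadPool(2);

        Callable<String> outer = () -> {
            // The inner task goes to the SAME pool; outer then blocks on it.
            Future<String> inner = pool.submit(() -> "done");
            return inner.get(); // no free thread to run inner -> waits forever
        };

        Future<String> a = pool.submit(outer); // occupies thread 1
        pool.submit(outer);                    // occupies thread 2
        try {
            a.get(2, TimeUnit.SECONDS);        // time out instead of hanging
            return "completed";
        } catch (TimeoutException e) {
            return "deadlocked";
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo());
    }
}
```

With a higher thread cap (say, 250 per CPU), the inner tasks would usually find a free thread and the outer tasks would complete, which is exactly the effect the CLR default is after.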

Joe Duffy explains this reasoning in more depth in his post: Why the CLR 2.0 SP1's threadpool default max thread count was increased to 250/CPU.

Liran