views:

72

answers:

1

I understand that the TPL uses work-stealing queues for its tasks when I execute things like Parallel.For and similar constructs.

If I understand this correctly, the construct will spin up a number of tasks, each of which starts processing items. If one task completes its allotted items, it will start stealing items from the other tasks that haven't yet completed theirs. This solves the problem where items 1-100 are cheap to process and items 101-200 are costly, and one of the two tasks would otherwise sit idle until the other completed. (I know this is a simplified explanation.)
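To make the scenario concrete, here is a small sketch of that uneven workload. The per-item costs and item counts are made-up values for illustration; the point is that `Parallel.For` decides at runtime which worker processes which items, so with load balancing no worker is stuck with only the expensive half:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

class WorkStealingDemo
{
    static void Main()
    {
        // Record which managed thread processed each item.
        int[] processedBy = new int[200];

        // Items 0-99 are cheap, items 100-199 are costly (simulated).
        // With a naive static split, the worker given 0-99 would finish
        // early and sit idle; the TPL instead lets idle workers take
        // work that would otherwise queue up behind the busy ones.
        Parallel.For(0, 200, i =>
        {
            int costMs = i < 100 ? 1 : 10; // arbitrary simulated cost
            Thread.Sleep(costMs);
            processedBy[i] = Thread.CurrentThread.ManagedThreadId;
        });

        // See how many distinct threads touched the expensive half.
        var threads = new HashSet<int>();
        for (int i = 100; i < 200; i++)
            threads.Add(processedBy[i]);

        Console.WriteLine($"Expensive items handled by {threads.Count} thread(s)");
    }
}
```

On a multi-core machine you should see more than one thread sharing the expensive half; on a single core, one thread may legitimately do everything.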

However, how will this scale on a terminal server or in a web application (assuming we use TPL in code that would run in the web app)? Can we risk saturating the CPUs with tasks just because there are N instances of our application running side by side?

Is there any information on this topic that I should read? I've yet to find anything in particular, but that doesn't mean there isn't any.

+1  A: 

You might be able to use the TPL to improve I/O-bound operations by moving to an asynchronous model. You might also be able to improve request latency by making use of the available idle processor capacity on a web server in low-load situations. Think about this carefully, though: under high load, where the processors are already 100% utilized, adding more parallelism will reduce the server's throughput.
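If you do use the TPL in a shared environment, you can cap how many cores a single loop will consume via `ParallelOptions.MaxDegreeOfParallelism`, rather than letting each application instance claim the whole machine. A minimal sketch (the half-the-cores divisor is an arbitrary example value, not a recommendation):

```csharp
using System;
using System.Threading.Tasks;

class CappedParallelism
{
    static void Main()
    {
        // On a shared host (terminal server, loaded web box), limit the
        // loop to a fraction of the machine's cores so N side-by-side
        // instances don't all try to saturate every CPU at once.
        var options = new ParallelOptions
        {
            MaxDegreeOfParallelism = Math.Max(1, Environment.ProcessorCount / 2)
        };

        Parallel.For(0, 100, options, i =>
        {
            // ... per-item work goes here ...
        });

        Console.WriteLine("done");
    }
}
```

This only bounds a single loop's concurrency; it doesn't coordinate between separate processes, so the oversubscription question across N instances still needs capacity planning.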

This is discussed on the Parallel Extensions Team Blog:

Using Parallel Extensions for .NET 4 in ASP.NET apps

I suspect that the same argument applies to Terminal Server applications also.

Ade Miller
Thanks for the link, that kind of information was exactly what I was looking for.
Lasse V. Karlsen