Will LINQ's parallel extensions automatically detect the number of cores and utilize them all? Conversely, if the code is run on a single-core machine, will it still work, or do I have to detect the number of cores and tell PLINQ how many to use?

Sadly, I don't have access to any single-core machines to test my code on, so I can't verify this for myself, and I haven't been able to find any useful info elsewhere...

Also, while it might at first seem obvious when to use parallelism, are there any rules of thumb for where it should, and just as importantly should not, be used?

Side note: I don't necessarily program in a specific environment. I tend to divide my time fairly equally between web, client/server apps, Windows apps, Windows services and console utilities, depending on the project at hand.

+6  A: 

Yes, it handles the core count itself, and is fine on a single core, pseudo-multi-core (hyper-threading), right up to stupid numbers of cores - but you do need to code against Parallel yourself; it doesn't simply seize control of your existing code.
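For example, a plain PLINQ query like the sketch below will spread itself over however many cores the box has, and degrades gracefully to one; the numbers are just filler to give it something to chew on:

    using System;
    using System.Linq;

    class Demo
    {
        static void Main()
        {
            // Nothing runs in parallel until you opt in with AsParallel();
            // PLINQ then sizes its workers from the machine's core count,
            // so the same query is effectively sequential on a single core.
            long sum = Enumerable.Range(1, 1000000)
                                 .AsParallel()
                                 .Select(n => (long)n * n)   // stand-in for real work
                                 .Sum();

            Console.WriteLine(sum);
            Console.WriteLine("Cores available: " + Environment.ProcessorCount);

            // Only if you want to override the automatic choice:
            // .AsParallel().WithDegreeOfParallelism(2)
        }
    }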

As for when to use parallelism... that is a huge topic. Actually, if you are doing web programming, I'd forget it and simply let IIS run different requests on different threads in parallel (rather than letting one request hog the machine).

It is mainly useful for big number crunching / data gathering - either on a dedicated app-server, or at the client.

Marc Gravell
Thanks, I think that answers my biggest question, which was "Will the parallel extensions automatically scale down to a single core okay too?"
BenAlabaster
+1  A: 

Marc's answer is right in recommending that you let the parallel nature of web requests handle much of your parallelism for you, if that's the environment you're in.

One place I've had great success using the parallel extensions was in loading data from remote sources. I had an app where I'd need to fetch one piece of data, use some information from it to fetch 30 others, and return the results. I ran the first fetch normally, then defined a Future (from the parallel extensions) for each of the other requests I needed, included the little bit of processing I needed in the anonymous method, and did a WaitAll() on the array of futures. It was very simple to implement, and all the requests (or at least several of them at a time) ran in parallel to fetch their data. Since the latency of my calls was very high (a 5-second wait for about 1 KB of data), this technique worked especially well.
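In rough outline it looked something like the snippet below (FetchMaster, FetchDetail and Process are placeholders for my real remote calls, and Task<T> here stands in for the CTP-era Future<T>):

    // Namespaces: System.Linq, System.Threading.Tasks.
    // FetchMaster, FetchDetail and Process are placeholders for the real calls.
    var master = FetchMaster();                           // the first, ordinary fetch

    var futures = master.DetailIds                        // the ~30 follow-up requests
        .Select(id => Task.Factory.StartNew(() => Process(FetchDetail(id))))
        .ToArray();

    Task.WaitAll(futures);                                // block until every fetch is done

    var results = futures.Select(f => f.Result).ToList(); // all the processed results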

Later on, I even modified it to return an IEnumerable and used WaitOne() to wait for one of the futures to complete, then yielded it from my enumerator, allowing the caller to start processing results from some of the requests before they were all complete.
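The streaming version is essentially the same thing wrapped in an iterator, waiting on one future at a time (again with placeholder names - DetailResult is whatever the fetch returns; with Task<T>, WaitAny plays the WaitOne role):

    // Hands results back as each future finishes, rather than after all of them.
    static IEnumerable<DetailResult> FetchAll(IEnumerable<int> ids)
    {
        var pending = ids
            .Select(id => Task.Factory.StartNew(() => FetchDetail(id)))
            .ToList();

        while (pending.Count > 0)
        {
            int finished = Task.WaitAny(pending.ToArray()); // block until any one completes
            var done = pending[finished];
            pending.RemoveAt(finished);
            yield return done.Result;                       // caller can start processing now
        }
    }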

Jonathan
+1 Excellent tip for fetching data
BenAlabaster