views: 575
answers: 3

I have a large multi-threaded C# application running on a multi-core, 4-way server. Currently we're using "server mode" garbage collection. However, testing has shown that workstation mode GC is quicker.

MSDN says:

Managed code applications that use the server API receive significant benefits from using the server-optimized garbage collector (GC) instead of the default workstation GC.

Workstation is the default GC mode and the only one available on single-processor computers. Workstation GC is hosted in console and Windows Forms applications. It performs full (generation 2) collections concurrently with the running program, thereby minimizing latency. This mode is useful for client applications, where perceived performance is usually more important than raw throughput.

The server GC is available only on multiprocessor computers. It creates a separate managed heap and thread for each processor and performs collections in parallel. During collection, all managed threads are paused (threads running native code are paused only when the native call returns). In this way, the server GC mode maximizes throughput (the number of requests per second) and improves performance as the number of processors increases. Performance especially shines on computers with four or more processors.

But we're not seeing performance shine!!!! Has anyone got any advice?

A: 

You should measure and compare the performance of your application as a whole, not the time spent in GC. As the MSDN documentation says, the workstation GC runs concurrently with the running program, so a quicker-looking GC does not automatically mean a faster application. For example:
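Here's a minimal sketch of that kind of measurement (RunWorkload is just a hypothetical placeholder for your application's real multi-threaded work):

    using System;
    using System.Diagnostics;

    class GcComparison
    {
        static void Main()
        {
            // Snapshot collection counts before the run.
            int gen0Before = GC.CollectionCount(0);
            int gen2Before = GC.CollectionCount(2);

            var sw = Stopwatch.StartNew();
            RunWorkload();   // placeholder for the application's actual work
            sw.Stop();

            // Compare the wall-clock time of the workload under each GC mode;
            // the collection counts are context, not the thing to optimize.
            Console.WriteLine("Elapsed: {0} ms", sw.ElapsedMilliseconds);
            Console.WriteLine("Gen 0 collections: {0}", GC.CollectionCount(0) - gen0Before);
            Console.WriteLine("Gen 2 collections: {0}", GC.CollectionCount(2) - gen2Before);
        }

        static void RunWorkload()
        {
            // Stand-in for the real multi-threaded workload.
        }
    }

Run the same workload once with workstation GC and once with server GC and compare the elapsed times.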

ammoQ
A: 

Are you enabling server GC in your app.config file? If you don't, the workstation GC will be used.
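For reference, the setting lives under the runtime element (workstation GC is the default when it's absent):

    <configuration>
      <runtime>
        <gcServer enabled="true"/>
      </runtime>
    </configuration>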

MichaelT
+2  A: 

It's not explained very well, but as far as I can tell, the server mode is synchronous per core, while the workstation mode is asynchronous.

In other words, the workstation mode is intended for a small number of long-running applications that need consistent performance. The garbage collector tries to "stay out of the way" but, as a result, is less efficient on average.

The server mode is intended for applications where each "job" is relatively short-lived and handled by a single core (edit: think multi-threaded web server). The idea is that each "job" gets all the CPU power and gets done quickly, but that occasionally the core stops handling requests and cleans up memory. So in this case the hope is that GC is more efficient on average, but the core is unavailable while it's running, so the application needs to be able to adapt to that.

In your case it sounds like, because you have a single application whose threads are relatively tightly coupled, you fit the model expected by the first mode better than the second.

But that's all just after-the-fact justification. Measure your system's performance (as ammoQ said, not your GC performance, but how well your application behaves) and use whichever mode you measure to be best.
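As a sanity check before comparing numbers, a short sketch like this confirms which collector the CLR actually loaded (GCSettings.IsServerGC reports the mode in effect, regardless of what the config file asked for):

    using System;
    using System.Runtime;

    class GcModeCheck
    {
        static void Main()
        {
            // True only if the server GC was actually loaded at startup.
            Console.WriteLine("Server GC: {0}", GCSettings.IsServerGC);
        }
    }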

andrew cooke
It's perhaps worth adding that "server" often means different things to different people. You might be thinking that "server" means "my new expensive multi-core box", while MS might be thinking "server" means "a computer used for many different, unrelated tasks at once". Those are two valid, but completely orthogonal, definitions.
andrew cooke
For me it's the former...
DanC