I’ve run into something strange about the effect of large memory allocations on the scalability of the .NET runtime. In my test application I create lots of strings in a tight loop for a fixed number of cycles and report the rate of loop iterations. The weirdness comes in when I run this loop on several threads: the rate does not increase linearly, and the problem gets even worse when you create large strings.

Let me show you the results. My machine is an 8 GB, 8-core box running Windows Server 2008 R1, 32-bit. It has two quad-core 1.83 GHz Intel Xeon (E5320) processors. The "work" performed is a set of alternating calls to ToUpper() and ToLower() on a string. I run the test for one thread, then two threads, and so on up to the maximum. The columns in the table below are:

  • Rate: The total number of loops across all threads divided by the elapsed time in milliseconds.
  • Linear Rate: The ideal rate if performance were to scale linearly. It is calculated as the rate achieved by one thread multiplied by the number of threads for that test.
  • Variance: Calculated as the percentage by which the rate falls short of the linear rate.
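The variance figure can be reproduced with a quick calculation; the values below are taken from the 8-thread row of the first table:

```csharp
using System;

// Reproduce the variance column from the first table (8-thread row).
double singleThreadRate = 322.58;   // rate measured with one thread
double measuredRate = 2051.28;      // rate measured with eight threads
int threads = 8;

double linearRate = singleThreadRate * threads;          // ideal rate for 8 threads
double variance = (1 - measuredRate / linearRate) * 100; // % shortfall from ideal
Console.WriteLine("{0:#0.00} %", variance);              // 20.51 %
```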

Example 1: 10,000 loops, 8 threads, 1024 chars per string

The first example starts off with one thread, then two threads and eventually runs the test with eight threads. Each thread creates 10,000 strings of 1024 chars each:

Creating 10000 strings per thread, 1024 chars each, using up to 8 threads
GCMode = Server

Rate          Linear Rate   % Variance    Threads
--------------------------------------------------------
322.58        322.58        0.00 %        1
689.66        645.16        -6.90 %       2
882.35        967.74        8.82 %        3
1081.08       1290.32       16.22 %       4
1388.89       1612.90       13.89 %       5
1666.67       1935.48       13.89 %       6
2000.00       2258.07       11.43 %       7
2051.28       2580.65       20.51 %       8
Done.

Example 2: 10,000 loops, 8 threads, 32,000 chars per string

In the second example I’ve increased the number of chars for each string to 32,000.

Creating 10000 strings per thread, 32000 chars each, using up to 8 threads
GCMode = Server

Rate          Linear Rate   % Variance    Threads
--------------------------------------------------------
14.10         14.10         0.00 %        1
24.36         28.21         13.64 %       2
33.15         42.31         21.66 %       3
40.98         56.42         27.36 %       4
48.08         70.52         31.83 %       5
61.35         84.63         27.51 %       6
72.61         98.73         26.45 %       7
67.85         112.84        39.86 %       8
Done.

Notice the difference in variance from the linear rate; in the second table the actual rate is 39% less than the linear rate.

My question is: Why does this app not scale linearly?

My Observations

False Sharing

I initially thought that this could be due to False Sharing but, as you’ll see in the source code, I’m not sharing any collections and the strings are quite big. The only overlap that could exist is at the beginning of one string and the end of another.
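For context, false sharing happens when two threads write to different variables that happen to sit on the same cache line. A minimal sketch of the pattern (illustrative only; this is not taken from the test code):

```csharp
using System.Threading;

long[] counters = new long[17]; // elements 0 and 16 are 128 bytes apart

// If the two threads incremented counters[0] and counters[1] instead, the
// hot values would share a cache line and ping-pong between the cores.
// Spacing them 16 elements apart keeps them on separate cache lines.
Thread a = new Thread(() => { for (int i = 0; i < 1000000; i++) counters[0]++; });
Thread b = new Thread(() => { for (int i = 0; i < 1000000; i++) counters[16]++; });
a.Start(); b.Start();
a.Join(); b.Join();
```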

Server-mode Garbage Collector

I’m using gcServer enabled=true so that each core gets its own heap and garbage collector thread.

Large Object Heap

I don't think the objects I allocate are being sent to the Large Object Heap, because they are under 85,000 bytes in size.
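A rough size check supports this (the 20-byte header figure is approximate and varies by CLR version):

```csharp
using System;

// A .NET string stores 2 bytes per char plus a small object header, so
// even the 32,000-char strings stay below the 85,000-byte LOH threshold.
int chars = 32000;
int approxBytes = chars * sizeof(char) + 20; // header size is approximate
Console.WriteLine(approxBytes);              // 64020
Console.WriteLine(approxBytes < 85000);      // True
```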

String Interning

I thought that string values might be shared under the hood due to interning (see the MSDN documentation), so I tried compiling with interning disabled. This produced worse results than those shown above.
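Note that interning applies to literals and explicitly interned strings, not to strings produced at runtime, so the strings returned by ToUpper()/ToLower() in the loop are fresh allocations either way. A quick illustration:

```csharp
using System;

string literal = "aaaa";            // literals are interned automatically
string built = new string('a', 4);  // runtime-built strings are not

Console.WriteLine(object.ReferenceEquals(literal, built));                // False
Console.WriteLine(object.ReferenceEquals(literal, string.Intern(built))); // True
```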

Other data types

I tried the same example using small and large integer arrays, in which I loop through each element and change the value. It produces similar results, following the trend of performing worse with larger allocations.

Source Code

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
using System.Diagnostics;
using System.Runtime;
using System.Runtime.CompilerServices;

namespace StackOverflowExample
{
  public class Program
  {
    private static int columnWidth = 14;

    static void Main(string[] args)
    {
      int loopCount, maxThreads, stringLength;
      loopCount = maxThreads = stringLength = 0;
      try
      {
        loopCount = args.Length > 0 ? Int32.Parse(args[0]) : 1000;
        maxThreads = args.Length > 1 ? Int32.Parse(args[1]) : 4;
        stringLength = args.Length > 2 ? Int32.Parse(args[2]) : 1024;
      }
      catch
      {
        Console.WriteLine("Usage: StackOverFlowExample.exe [loopCount] [maxThreads] [stringLength]");
        System.Environment.Exit(2);
      }

      float rate;
      float linearRate = 0;
      Stopwatch stopwatch;
      Console.WriteLine("Creating {0} strings per thread, {1} chars each, using up to {2} threads", loopCount, stringLength, maxThreads);
      Console.WriteLine("GCMode = {0}", GCSettings.IsServerGC ? "Server" : "Workstation");
      Console.WriteLine();
      PrintRow("Rate", "Linear Rate", "% Variance", "Threads");
      PrintRow(4, "".PadRight(columnWidth, '-'));

      for (int runCount = 1; runCount <= maxThreads; runCount++)
      {
        // Create the workers
        Worker[] workers = new Worker[runCount];
        workers.Length.Range().ForEach(index => workers[index] = new Worker());

        // Start timing and kick off the threads
        stopwatch = Stopwatch.StartNew();
        workers.ForEach(w => new Thread(
          new ThreadStart(
            () => w.DoWork(loopCount, stringLength)
          )
        ).Start());

        // Wait until all threads are complete
        WaitHandle.WaitAll(
          workers.Select(p => p.Complete).ToArray());
        stopwatch.Stop();

        // Print the results (note: dividing by ElapsedMilliseconds makes this loops per millisecond)
        rate = (float)loopCount * runCount / stopwatch.ElapsedMilliseconds;
        if (runCount == 1) { linearRate = rate; }

        PrintRow(String.Format("{0:#0.00}", rate),
          String.Format("{0:#0.00}", linearRate * runCount),
          String.Format("{0:#0.00} %", (1 - rate / (linearRate * runCount)) * 100),
          runCount.ToString()); 
      }
      Console.WriteLine("Done.");
    }

    private static void PrintRow(params string[] columns)
    {
      columns.ForEach(c => Console.Write(c.PadRight(columnWidth)));
      Console.WriteLine();
    }

    private static void PrintRow(int repeatCount, string column)
    {
      for (int counter = 0; counter < repeatCount; counter++)
      {
        Console.Write(column.PadRight(columnWidth));
      }
      Console.WriteLine();
    }
  }

  public class Worker
  {
    public ManualResetEvent Complete { get; private set; }

    public Worker()
    {
      Complete = new ManualResetEvent(false);
    }

    public void DoWork(int loopCount, int stringLength)
    {
      // Build the string
      string theString = "".PadRight(stringLength, 'a');
      for (int counter = 0; counter < loopCount; counter++)
      {
        // Return values are intentionally discarded; each call allocates a new string.
        if (counter % 2 == 0) { theString.ToUpper(); }
        else { theString.ToLower(); }
      }
      Complete.Set();
    }
  }

  public static class HandyExtensions
  {
    public static IEnumerable<int> Range(this int max)
    {
      for (int counter = 0; counter < max; counter++)
      {
        yield return counter;
      }
    }

    public static void ForEach<T>(this IEnumerable<T> items, Action<T> action)
    {
      foreach(T item in items)
      {
        action(item);
      }
    }
  }
}

App.Config

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <runtime>
    <gcServer enabled="true"/>
  </runtime>
</configuration>

Running the Example

To run StackOverflowExample.exe on your box, call it with these command-line parameters:

StackOverFlowExample.exe [loopCount] [maxThreads] [stringLength]

  • loopCount: The number of times each thread will manipulate the string.
  • maxThreads: The number of threads to progress to.
  • stringLength: the number of characters to fill the string with.
+4  A: 

You may want to look at this question of mine.

I ran into a similar problem that was due to the fact that the CLR performs inter-thread synchronization when allocating memory to avoid overlapping allocations. Now, with the server GC, the locking algorithm may be different - but something along those same lines may be affecting your code.

LBushkin
Luke, I would agree with LBushkin. Your code seems to have fallen into the trap detailed by Igor Ostrovsky. You seem to be allocating new memory in the DoWork method for the theString variable on every call by each thread.
scope_creep
+2  A: 

The hardware you're running this on is not capable of linear scaling of multiple processes or threads.

You have a single memory bus, and that's a bottleneck. (Multi-channel memory may improve access, but not for more processes than you have memory channels; the E5320 processor appears to support one to four memory channels.)

There is only one memory controller per physical CPU package (two in your case); that's a bottleneck.

There are two L2 caches per CPU package; that's a bottleneck. Cache coherency traffic will increase if those caches are exhausted.

This doesn't even get into the OS/RTL/VM issues of process scheduling and memory management, which also contribute to non-linear scaling.

I think you're getting pretty reasonable results: a significant speedup with multiple threads, at each increment up to 8.

Truly, have you ever read anything to suggest that commodity multi-CPU hardware is capable of linear scaling of multiple processes or threads? I haven't.

SuperMagic
This is great information, thanks!
Luke Venediger
A: 

Your initial post is fundamentally flawed - you're assuming that a linear speedup is possible through parallel execution. It isn't, and never has been. See Amdahl's Law (yes, I know, Wikipedia, but it's easier than anything else).

Your code, viewed from the abstraction the CLR provides, appears to have no dependencies - however, as LBushkin pointed out, that isn't the case. As SuperMagic pointed out, the hardware itself implies dependencies between the threads of execution. This is true of just about any problem that can be parallelized - even with independent machines and independent hardware, some portion of the problem usually requires an element of synchronization, and that synchronization prevents a linear speedup.
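Amdahl's Law puts a hard ceiling on the achievable speedup; even a small serial fraction hurts badly at higher core counts. A quick illustration (the 0.95 parallel fraction is an assumed figure, not measured from the test):

```csharp
using System;

// Amdahl's Law: speedup = 1 / ((1 - p) + p / n)
// p = parallel fraction of the work, n = number of processors.
double p = 0.95;  // assumed parallel fraction (illustrative)
int n = 8;
double speedup = 1.0 / ((1 - p) + p / n);
Console.WriteLine("{0:0.00}x", speedup);  // 5.93x -- well short of 8x
```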

Matt Jordan
@Matt Jordan, thanks, I know that true linear speedup is not possible (and should've specified this), but I'm aiming for as close as I can get!
Luke Venediger
A: 

The effect of a memory allocator on application speedup is more closely related to the number of allocations than the amount allocated. It's also more influenced by the allocation latency (amount of time to complete a single allocation on a single thread), which in the case of the CLR is extremely fast due to the use of a bump-pointer allocator (see section 3.4.3).

Your question is asking why the actual speedup is sublinear, and to answer that you should certainly review Amdahl's Law.

Going back to the Notes on the CLR Garbage Collector, you can see that an allocation context belongs to a particular thread (section 3.4.1), which reduces (but does not eliminate) the amount of synchronization required during multi-threaded allocations. If you find that allocation is truly the weak point, I would suggest trying an object pool (possibly per-thread) to reduce the load on the collector. By reducing the sheer number of allocations, you'll reduce the number of times the collector has to run. However, this will also result in more objects making it to generation 2, which is the slowest to collect when it is needed.
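A sketch of that buffer-reuse idea (names are illustrative; note this changes the benchmark from measuring allocation to measuring pure character work):

```csharp
// Reuse one char[] per thread instead of letting ToUpper()/ToLower()
// allocate a fresh string on every iteration.
char[] buffer = "".PadRight(1024, 'a').ToCharArray(); // one allocation per thread
for (int counter = 0; counter < 10000; counter++)
{
    for (int i = 0; i < buffer.Length; i++)
    {
        buffer[i] = (counter % 2 == 0) ? char.ToUpper(buffer[i])
                                       : char.ToLower(buffer[i]);
    }
}
// The last pass (counter = 9999, odd) lower-cases the buffer.
```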

Finally, Microsoft continues to improve the garbage collector in newer versions of the CLR, so you should target the most recent version you are able to (.NET 2 at a bare minimum).

280Z28
@280z28 thanks! Although in my example I allocate memory in the context of each executing thread. Is this what you're suggesting when you say: "you can see that an allocation context belongs to a particular thread" ?
Luke Venediger
It means when you allocate some memory, the allocator moves an 8kB block of memory to the requesting thread, so when future allocations on the same thread fit in that 8kB window, no cross-thread synchronization is required in the allocation process.
280Z28
A: 

Great question Luke! I'm very interested in the answer.

I suspect that you were not expecting linear scaling, but something better than a 39% variance.

NoBugz - Based on 280Z28's links, there would actually be a GC heap per core with GCMode=Server, and a GC thread per heap. Shouldn't that avoid the concurrency issues you mention?

"I ran into a similar problem that was due to the fact that the CLR performs inter-thread synchronization when allocating memory to avoid overlapping allocations. Now, with the server GC, the locking algorithm may be different - but something along those same lines may be affecting your code."

LBushkin - I think that is the key question: does GCMode=Server still cause inter-thread locking when allocating memory? Does anyone know, or can it simply be explained by the hardware limitations SuperMagic mentioned?

Carls
@Carls yep that is exactly correct, I know it's impossible to get true linear scaling (don't we wish!) but I'm having a hard time convincing the tech lead on my project of this. He insists that we should simply be able to add gerbils to get better-than-linear performance. Quite frankly, I suspect that he might just go postal and blow the place up.
Luke Venediger
Yes, one GC heap per core, so that a garbage collection can be done with all available cores. It is not one GC heap per user thread; that cannot scale.
Hans Passant