Modern x86 CPUs can support page sizes larger than the legacy 4 kB (i.e. 2 MB or 4 MB), and there are OS facilities (Linux, Windows) for accessing this functionality.
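For concreteness, one explicit Linux route is mmap with MAP_HUGETLB (a minimal sketch, assuming a kernel with hugetlbfs support and huge pages reserved beforehand, e.g. via /proc/sys/vm/nr_hugepages):

  // Ask the kernel for an explicitly huge-page-backed anonymous mapping.
  // The mmap call fails if no huge pages have been reserved.
  #include <sys/mman.h>
  #include <cstdio>
  #include <cstring>

  int main()
  {
    const size_t len = 2u << 20;                   // one 2 MB huge page
    void* p = mmap(0, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (p == MAP_FAILED) {
      std::perror("mmap(MAP_HUGETLB)");
      return 1;
    }
    std::memset(p, 1, len);                        // touch it so it's really mapped
    munmap(p, len);
    return 0;
  }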

The Microsoft link above states that large pages "increase the efficiency of the translation buffer, which can increase performance for frequently accessed memory", which isn't very helpful for predicting whether large pages will improve any given situation. I'm interested in concrete, preferably quantified, examples of where moving some program logic (or a whole application) to use huge pages has resulted in some performance improvement. Anyone got any success stories?

There's one particular case I know of myself: using huge pages can dramatically reduce the time needed to fork a large process (presumably because the number of page-table entries needing to be copied is reduced by a factor on the order of 1000). I'm interested in whether huge pages can also be a benefit in less exotic scenarios.
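A rough way to see that fork effect is to time fork() directly over a large, pre-touched mapping (a minimal sketch; setup_mapping is a hypothetical stand-in for building that mapping, e.g. with an mmap call like the one above, with or without MAP_HUGETLB):

  #include <sys/time.h>
  #include <sys/wait.h>
  #include <unistd.h>
  #include <cstdio>

  int main()
  {
    // setup_mapping();              // hypothetical: build and touch a multi-GB mapping

    timeval t0, t1;
    gettimeofday(&t0, 0);
    pid_t pid = fork();              // the parent's page tables get copied here
    if (pid == 0) _exit(0);          // child does nothing and exits
    gettimeofday(&t1, 0);
    waitpid(pid, 0, 0);

    std::printf("fork took %ld us\n",
                (t1.tv_sec - t0.tv_sec) * 1000000L +
                (t1.tv_usec - t0.tv_usec));
    return 0;
  }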

+1  A: 

I've seen improvements in some HPC/Grid scenarios - specifically physics packages with very, very large models, on machines with lots and lots of RAM. Also, the process running the model was the only thing active on the machine. I suspect, though I have not measured it, that certain DB functions (e.g. bulk imports) would benefit as well.

Personally, I think that unless you have a well-profiled and well-understood memory access pattern which does a lot of large memory accesses, it is unlikely that you will see any significant improvement.

Pat
+1  A: 

I get a ~5% speedup on servers with a lot of memory (>= 64 GB) running big processes. For example, for a 16 GB Java process that's 4M x 4 kB pages but only 4k x 4 MB pages.
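To spell out the arithmetic, and why it matters for the TLB (a sketch; the 64-entry data-TLB size below is an illustrative assumption, not a measured figure for any particular server):

  #include <cstdio>

  int main()
  {
    const unsigned long long heap  = 16ULL << 30;  // 16 GB Java heap
    const unsigned long long small = 4ULL  << 10;  // 4 kB page
    const unsigned long long large = 4ULL  << 20;  // 4 MB page
    const unsigned long long tlb   = 64;           // hypothetical data-TLB entry count

    std::printf("pages: %llu small vs %llu large\n",
                heap / small, heap / large);       // ~4M vs 4k pages to map
    std::printf("TLB coverage: %llu kB vs %llu MB\n",
                tlb * small / 1024, tlb * large / (1 << 20));
    return 0;
  }

With 4 kB pages a TLB of that size covers only a few hundred kB of a 16 GB heap; with 4 MB pages it covers a few hundred MB, so far fewer accesses pay for a page-table walk.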

Benoît
+1  A: 

I tried to contrive some code which would maximise thrashing of the TLB with 4 kB pages, in order to examine the gains possible from large pages. The code below runs 2.6 times faster (than with 4 kB pages) when 2 MB pages are provided by libhugetlbfs's malloc (Intel i7, 64-bit Debian Lenny); hopefully it's obvious what scoped_timer and random0n do.

  // Needs <vector>, <algorithm> and <cstddef>. scoped_timer is an RAII timer
  // and random0n is a functor returning a random index in [0,n) for
  // random_shuffle; both are the author's own helpers (not shown).
  // The buffer lands in 2 MB pages when its allocation goes through
  // libhugetlbfs's malloc.

  volatile char force_result;   // sink so the compiler can't discard the work

  const size_t mb=512;          // buffer size in MB
  const size_t stride=4096;     // one access per 4 kB page
  std::vector<char> src(mb<<20,0xff);
  std::vector<size_t> idx;
  for (size_t i=0;i<src.size();i+=stride) idx.push_back(i);  // one offset per page
  random0n r0n(/*seed=*/23);
  std::random_shuffle(idx.begin(),idx.end(),r0n);            // visit pages in random order

  {
    scoped_timer t
      ("TLB thrash random",mb/static_cast<float>(stride),"MegaAccess");
    char hash=0;
    for (size_t i=0;i<idx.size();++i)                        // one byte per page, random order
      hash=(hash^src[idx[i]]);
    force_result=hash;
  }

A simpler "straight line" version with just hash=hash^src[i] in address order only gained 16% from large pages, but (wild speculation) Intel's fancy prefetching hardware may be helping the 4 kB case when accesses are predictable (I suppose I could disable prefetching to investigate whether that's true).
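For reference, a minimal sketch of that straight-line variant, assuming the same buffer, helpers and one-access-per-page stride as above (whether the original stepped per page or per byte isn't spelled out, so the stride here is an assumption):

  {
    scoped_timer t
      ("TLB thrash sequential",mb/static_cast<float>(stride),"MegaAccess");
    char hash=0;
    // same one-byte-per-page walk, but in address order, so the access
    // stream is now predictable
    for (size_t i=0;i<src.size();i+=stride)
      hash=(hash^src[i]);
    force_result=hash;
  }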

timday