I would like to limit the amount of physical memory a process can use without limiting the amount of virtual memory it can use. I am doing this in an effort to measure the behavior of various algorithms under memory pressure, and I need to test their performance with many different amounts of physical memory available. So I either need to buy a crapton of memory in lots of obscure sizes, or I need an operating system that supports limiting the resident memory of a process in some way.

Unfortunately, Linux doesn't respect/enforce setrlimit(RLIMIT_RSS, ...), and neither does OSX.[1] Could you recommend an operating system that CAN do this? Any of the non-OSX BSDs? Is there a mechanism to do this in Solaris? Can some variant of Windows do this?


[1] Linux ignores the request entirely, and OSX only uses it to decide what to swap out first when physical memory is exhausted. Neither of these helps me analyze the behavior of an algorithm when only X MB of memory are available for use. It turns out Linux doesn't have any mechanism for doing this in the kernel, so it's pretty much out entirely unless someone can point me to a kernel fork that enables this. Solaris doesn't even have that option to setrlimit(), but it may have another way that I don't know about.
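
For concreteness, this is roughly the call I want the OS to honor (a minimal sketch; the 64 MB figure is just an example, and on Linux the call succeeds but the limit is never enforced):

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void) {
        /* Ask the kernel to cap the resident set size at 64 MB. */
        struct rlimit rl;
        rl.rlim_cur = 64 * 1024 * 1024;
        rl.rlim_max = 64 * 1024 * 1024;

        if (setrlimit(RLIMIT_RSS, &rl) != 0) {
            perror("setrlimit");
            return 1;
        }

        /* On Linux this returns 0, yet the process can still grow its
           resident set far past 64 MB; the limit is simply ignored. */
        printf("RLIMIT_RSS requested, but not necessarily enforced\n");
        return 0;
    }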

A: 

Figured it out. So, you pretty much can't do this with any Unix I could find.

Therefore, the thing to do is to run a virtualized OS configured with only as much memory as you want to give it. The virtual OS will get driven into swap, but your main OS will hum along just fine. Even better, if you want to count page faults rather than measure wall-clock time, you can make the swap disk for the virtual machine a ramdisk on the unvirtualized one, which makes page faults take almost no time to resolve in the virtualized machine!
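
If you go the page-fault route, here's a minimal sketch of how you might read the fault counters around the code under test inside the guest; getrusage() reports major and minor faults for the calling process, and algorithm_under_test() is just a placeholder for whatever you're measuring:

    #include <stdio.h>
    #include <sys/resource.h>

    /* Placeholder for the algorithm being measured. */
    static void algorithm_under_test(void) {
        /* ... allocate and touch lots of memory here ... */
    }

    int main(void) {
        struct rusage before, after;

        getrusage(RUSAGE_SELF, &before);
        algorithm_under_test();
        getrusage(RUSAGE_SELF, &after);

        printf("major faults: %ld\n", after.ru_majflt - before.ru_majflt);
        printf("minor faults: %ld\n", after.ru_minflt - before.ru_minflt);
        return 0;
    }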

pboothe