Every now and again I find myself doing something moderately dumb that results in my program allocating all the memory it can get and then some.

This kind of thing used to cause the program to die fairly quickly with an "out of memory" error, but these days Windows will go out of its way to give this non-existent memory to the application, and in fact is apparently prepared to commit suicide doing so. Not literally of course, but it will starve itself of usable physical RAM so badly that even running the task manager will require half an hour of swapping (after all the runaway application is still allocating more and more memory all the time).

This doesn't happen too often, but when it does it's disastrous. I usually have to reset my machine, causing data loss from time to time and generally a lot of inconvenience.

Do you have any practical advice on making the consequences of such a mistake less dire? Perhaps some registry tweak to limit the max amount of virtual memory an app is allowed to allocate? Or some CLR flag that will limit this only for the current application? (It's usually in .NET that I do this to myself.)

("Don't run out of RAM" and "Buy more RAM" are no use - the former I have no control over, and the latter I've already done.)

A: 

Use Java?

Keep track of what's doing the big allocations?

Xavier Combelle
Why the downvote? It's tagged as language-agnostic
Xavier Combelle
I didn't downvote. But "language-agnostic" means "applies to any language", not "I'm willing to switch languages for any arbitrary reason"
zildjohn01
-1 because it doesn't answer the question at all.
John Saunders
Why doesn't Java have this problem?
Dykam
This answer completely misses the point, doesn’t provide any useful ideas, and proselytises.
Timwi
Yes, we should all use Java because things haven't slowed down enough.
BoltClock
Because the JVM limits the total amount of memory made available to it, precisely to avoid such problems. I don't know why the CLR doesn't.
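(For reference, that cap is set with the JVM's -Xmx flag, e.g. java -Xmx512m MyApp caps the heap at 512 MB; MyApp is just a placeholder class name here.)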
Xavier Combelle
@Xavier: I'm going to guess that CLR doesn't because fixed memory limits on the JVM cause more confusing problems for users than they solve for developers. Which is not to say they aren't great for developers, or that it couldn't be arranged that the default is "unlimited" :-)
Steve Jessop
@Steve: A well-designed program should not use more memory than is reasonable, and so should not confuse the user.
Xavier Combelle
@Xavier. Not so. Many programs use as much memory as is necessary to handle the task given to the program by the user. When written in Java, these programs can cause user headaches, because users must get involved in configuring something they don't really understand and can't predict. Sure, for a game or something you can make a fair prediction of an upper limit how much memory it will ever need to use. For editing video, a fixed limit is completely inappropriate and hence confusing for the user. Unless the author does the thing you aren't "supposed" to do: specify an enormous limit.
Steve Jessop
A: 

I usually use Task Manager in that case to kill the process before the machine runs out of memory. Task Manager keeps running reasonably well even once the machine starts paging badly, and after the kill the machine will usually recover. Later versions of Windows (such as 7) generally survive these situations better than earlier versions. Running without DWM (turning off Aero themes in Vista and 7) also buys more time to invoke Task Manager to monitor and potentially kill off runaway processes.

+3  A: 

The obvious answer would be to run your program inside of a virtual machine until it's tested to the point that you're reasonably certain such things won't happen.

If you don't like that amount of overhead, there is a middle ground: you could run the process inside a job object with a limit set on how much memory the job is allowed to use, as in the sketch below.
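
A minimal P/Invoke sketch of the job-object approach, assuming a 512 MB per-process commit limit; the executable name is a placeholder, and a real tool would start the child suspended to close the race between launch and job assignment:

using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

class JobLimiter
{
    [StructLayout(LayoutKind.Sequential)]
    struct JOBOBJECT_BASIC_LIMIT_INFORMATION
    {
        public long PerProcessUserTimeLimit;
        public long PerJobUserTimeLimit;
        public uint LimitFlags;
        public UIntPtr MinimumWorkingSetSize;
        public UIntPtr MaximumWorkingSetSize;
        public uint ActiveProcessLimit;
        public UIntPtr Affinity;
        public uint PriorityClass;
        public uint SchedulingClass;
    }

    [StructLayout(LayoutKind.Sequential)]
    struct IO_COUNTERS
    {
        public ulong ReadOperationCount, WriteOperationCount, OtherOperationCount;
        public ulong ReadTransferCount, WriteTransferCount, OtherTransferCount;
    }

    [StructLayout(LayoutKind.Sequential)]
    struct JOBOBJECT_EXTENDED_LIMIT_INFORMATION
    {
        public JOBOBJECT_BASIC_LIMIT_INFORMATION BasicLimitInformation;
        public IO_COUNTERS IoInfo;
        public UIntPtr ProcessMemoryLimit;
        public UIntPtr JobMemoryLimit;
        public UIntPtr PeakProcessMemoryUsed;
        public UIntPtr PeakJobMemoryUsed;
    }

    const int JobObjectExtendedLimitInformation = 9;
    const uint JOB_OBJECT_LIMIT_PROCESS_MEMORY = 0x00000100;

    [DllImport("kernel32.dll", CharSet = CharSet.Unicode)]
    static extern IntPtr CreateJobObject(IntPtr lpJobAttributes, string lpName);

    [DllImport("kernel32.dll")]
    static extern bool SetInformationJobObject(IntPtr hJob, int infoClass,
        ref JOBOBJECT_EXTENDED_LIMIT_INFORMATION info, int cbSize);

    [DllImport("kernel32.dll")]
    static extern bool AssignProcessToJobObject(IntPtr hJob, IntPtr hProcess);

    static void Main()
    {
        // Create an anonymous job and cap committed memory per process at 512 MB.
        IntPtr job = CreateJobObject(IntPtr.Zero, null);
        var info = new JOBOBJECT_EXTENDED_LIMIT_INFORMATION();
        info.BasicLimitInformation.LimitFlags = JOB_OBJECT_LIMIT_PROCESS_MEMORY;
        info.ProcessMemoryLimit = (UIntPtr)(512UL * 1024 * 1024);
        SetInformationJobObject(job, JobObjectExtendedLimitInformation,
            ref info, Marshal.SizeOf(info));

        // Launch the risky program and put it in the job. Allocations beyond
        // the limit then fail inside that process instead of starving the OS.
        Process proc = Process.Start("MyRiskyApp.exe"); // placeholder name
        AssignProcessToJobObject(job, proc.Handle);
    }
}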

Jerry Coffin
The VM will die just the same if it happens, and since that's where I had been doing development, I suffer almost all the same effects. Re job objects: I hadn't come across these before; is it possible to set up Visual Studio to start debugging "via" a job object, so to speak?
romkyns
Right -- you'd have to assign it to a separate VM for that to do much good. I don't think VS supports putting debuggees into job objects, though it does seem like an obvious step.
Jerry Coffin
+1  A: 

In Windows, you can control the attributes of a process, including its memory limits, using job objects.

Shaji
+8  A: 

You could keep a command prompt open whenever you run a risky app. Then, if it starts to get out of control, you don't have to wait for Task Manager to load; just use:

taskkill /F /FI "MEMUSAGE ge 2000000"

This will (in theory) force-kill anything using more than 2,000,000 KB (roughly 2 GB) of memory; the MEMUSAGE filter is measured in kilobytes.

Use taskkill /? to get the full list of options it takes.

EDIT: Even better, run the command as a scheduled task every few minutes (see the schtasks example below). Any process that starts to blow up will get zapped automatically.
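
One possible registration, assuming a five-minute interval and an arbitrary task name (the inner quotes must be escaped):

schtasks /create /tn "KillMemoryHogs" /sc minute /mo 5 /tr "taskkill /F /FI \"MEMUSAGE ge 2000000\""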

dmb
Brilliant idea with the scheduling; this might just do the trick. I had no idea taskkill had such powerful filtering capability.
romkyns
+100. This is going in my quick launch bar. After "a few minutes" go by, it's usually too late.
zildjohn01
As long as it doesn't kill your overnight 3D renders half way through...
Merlyn Morgan-Graham
+6  A: 

There's something you can do: limit the working set size of your process. Paste this into your Main() method:

#if DEBUG
      // Requires "using System.Diagnostics;" at the top of the file.
      // Cap this process's working set at 256 MB so that a runaway allocation
      // swaps out this process rather than everything else on the machine.
      Process.GetCurrentProcess().MaxWorkingSet = new IntPtr(256 * 1024 * 1024);
#endif

That limits the amount of RAM your process can claim, preventing other processes from getting swapped out completely.

Other things you can do:

  • Add more RAM; there's no reason not to have at least 3 gigabytes these days.
  • Defrag your paging file. That requires defragging the disk first, then defragging the paging file with, say, SysInternals' PageDefrag utility.

The latter maintenance task is especially important on older machines. A fragmented paging file can dramatically worsen swapping behavior; it's common on XP machines that were never defragged and have a smallish disk that was allowed to fill up. Paging-file fragmentation causes a lot of disk head seeks, badly hurting the odds that another process can swap itself back into RAM in a reasonable amount of time.

Hans Passant
+1. I think you just made a (small) portion of my test job about 10x easier...
Merlyn Morgan-Graham