tags:

views:

567

answers:

11

In Java, is there a way to know that a StackOverflowError or OutOfMemoryError may happen soon?

The OutOfMemoryError might be the easier one to anticipate, if one can get memory usage statistics programmatically, and if one knows ahead of time how much memory can be used before the error is thrown. But are those values knowable?

For the StackOverflowError, is there a way to get the current recursion depth, and how does one know what recursion depth would cause the error to occur?

By knowing ahead of time whether these errors will happen, I feel I can recover the application more gracefully, instead of watching it crash.

+2  A: 

You should never see a StackOverflowError if your application is designed and implemented correctly!

Generally, if you get a StackOverflowError, it's a sign that there's a bug in your recursion code.

John Topley
Not necessarily true. I could, for example, have an overly ambitious anagram-finding method that simply hasn't reached an adequate exit condition before the recursion depth gets too deep.
David
But I agree that, in 90% of cases, StackOverflow reflects a problem in the code.
David
If your anagram-finding method recurses so deep on some cases that the stack overflows, wouldn't you say it's implemented incorrectly? That sounds like a pretty serious problem to me.
mquander
This doesn't answer the question. Some real code has a certain chance of a StackOverflowError, such as code which recursively parses XML which is sent by an outside client.
Avi
I disagree. If your code is parsing a tree that might be so, so deep that it could actually overflow the stack if you do it recursively, then don't parse it recursively.
mquander
@mquander no - an inefficient algorithm can still be perfectly correct
Greg
... You could of course write such code extra-carefully so as not to recurse, but recursion may be the most straightforward method; it may be acceptable to succeed 99.999% of the time, and gracefully fail for the problem cases.
Avi
@Greg: I guess if by "correct" you mean "the algorithm works in a platonic way in an ideal world" then it might be "correct," but if it stops working sometimes on real data on a real computer, then it probably isn't very practical; @Avi: If it's really acceptable to fail sometimes on real data, and very hard to do it without recursion, then I'd check whether the tree depth is in "reasonable" bounds beforehand instead of catching the error after the fact.
mquander
@mquander: I agree with you where that is possible. But in the real world it isn't always possible - sometimes large inputs aren't completely available beforehand. The point is to answer the question, even if you are right that 99% of (even recursive) algorithms should be written "safely". But take even the simple case of HTML-rendering in a web browser - it must be started before the input is available, and generally needs to be done recursively. That is the reason we have the option of catching the rare StackOverflowError and bailing out gracefully.
Avi
+2  A: 

Most StackOverflowErrors come out of bad recursion. Unfortunately, the problem of determining whether a recursion will stop is generally not decidable (this is a central concept in CS). There are cases, however, where you can get warnings; for example, some IDEs will let you know if you're invoking a function recursively with no parameters.

Uri
Curious: does an infinite loop follow the same pattern and result in a StackOverflow error too?
instantsetsuna
+4  A: 

You can anticipate out-of-memory conditions with Runtime.freeMemory() and Runtime.maxMemory(). Most times it'll be hard to recover gracefully, but I leave that to you.
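
For illustration, a minimal sketch of what such a check might look like; freeMemory() is only an approximation, and the 10% threshold and the "refuse new work" policy are assumptions, not something from this answer:

public class MemoryHeadroom {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long max = rt.maxMemory();                       // most the JVM will ever try to use
        long used = rt.totalMemory() - rt.freeMemory();  // rough current usage
        long headroom = max - used;                      // roughly how much can still be allocated

        System.out.printf("used=%d MB, headroom=%d MB%n",
                used / (1024 * 1024), headroom / (1024 * 1024));

        // Hypothetical policy: if less than 10% of max is left, stop taking on
        // new memory-hungry work instead of waiting for an OutOfMemoryError.
        if (headroom < max / 10) {
            System.err.println("Low memory - refusing new work");
        }
    }
}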

gustafc
Aren't these numbers somewhat unreliable for real-time monitoring?
Elijah
Oh absolutely, freeMemory() is said to return an approximation right there in the API.
CaptainAwesomePants
A: 

You can discover a lot about the current recursion depth by creating a Throwable object and querying its getStackTrace() method. But this is expensive to do.

If you really have a method with a small potential of throwing a StackOverflowError or an OutOfMemoryError, why not just insert try-catch blocks and catch those errors? They can be caught and handled just like checked exceptions.
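
For what it's worth, here is a minimal, self-contained sketch combining the two ideas above: sample the depth with a Throwable and bail out before the error can happen. The limit of 1000 frames is an arbitrary assumption (tune it for your -Xss), and since getStackTrace() is expensive, a real implementation would check only every N calls:

public class DepthGuard {
    private static final int MAX_DEPTH = 1000;  // assumed limit; tune for your -Xss

    static int currentDepth() {
        // expensive: builds the whole stack trace just to count the frames
        return new Throwable().getStackTrace().length;
    }

    static long countDown(long n) {
        if (currentDepth() > MAX_DEPTH) {
            throw new IllegalStateException("recursion too deep, aborting");
        }
        return n == 0 ? 0 : countDown(n - 1);
    }

    public static void main(String[] args) {
        countDown(100);  // shallow call, fine
        try {
            countDown(1000000);  // trips the depth guard instead of overflowing
        } catch (IllegalStateException e) {
            System.out.println("Recovered gracefully: " + e.getMessage());
        }
    }
}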

Avi
That's inane. If you know that your method runs the system out of memory or overflows the stack, don't catch the error; stop using so much memory and stop overflowing the stack.
mquander
You don't know this, of course. But on some crazy inputs, there may be a very small chance that it will. That is what error handling is for.
Avi
I wonder if a program can gracefully recover from a stack overflow/out of memory. I've NEVER considered it a possibility, but +1 (to neutralize the -1) for making me think about it. I may need to go out and test.
Bill K
I don't know if you can gracefully recover once the error occurs, but it sure would be nice to anticipate the error, stop doing whatever memory-intensive process you're doing, and tell the user, "Hey, this is taking a lot of memory.. let's try something else"
David
A: 

I don't know anything about working this out at run time, or what you might be able to do to avoid it once you predict it is going to happen. Better to try and avoid it occurring in the first place.

1) You could use FindBugs, which may flag potential StackOverflow errors arising from a method inadvertently calling itself.

2) You could store data that is likely to make you run out of memory behind a SoftReference, with a null check on access so it can be reloaded if it has been garbage collected (see the sketch below).
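
A rough sketch of that pattern (loadData() is a hypothetical stand-in for whatever expensive loading your application actually does):

import java.lang.ref.SoftReference;
import java.util.ArrayList;
import java.util.List;

public class SoftCache {
    private SoftReference<List<String>> cache = new SoftReference<List<String>>(null);

    List<String> getData() {
        List<String> data = cache.get();
        if (data == null) {                                // never loaded, or reclaimed by the GC
            data = loadData();
            cache = new SoftReference<List<String>>(data); // re-cache the freshly loaded copy
        }
        return data;
    }

    private List<String> loadData() {
        // hypothetical stand-in for an expensive reload from disk or a database
        List<String> rows = new ArrayList<String>();
        rows.add("example row");
        return rows;
    }
}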

If either of these things is actually an issue for you, then the solution probably isn't detecting it happening but architecting your application differently to avoid it occurring, if at all possible.

Robin
A: 

Usually memory usage is hard to predict.

Stack overflows from infinite recursion generally show up in the write/debug cycle.

Harder to catch are memory issues with things like large held collections, caches, etc. As has been pointed out, you can check the runtime's free and max memory, but be careful, as their meanings aren't obvious: "free" here means "free right now", i.e. if you ran a full GC you might get more. Unfortunately there's no way to get the "total possible free, including garbage-collectible" without running System.gc(), which is not a nice thing to do on a production application (where you're liable to have large enough data sets to cause the problem in the first place), because the entire JVM will come to a screeching halt for a few seconds (or more, in a large app). Note that even System.gc() is not guaranteed to run "now", but in my experience it has whenever I've tried it.

You can print GC activity from a running JVM by starting java with -verbose:gc, -XX:+PrintGCTimeStamps, and -XX:+PrintGCDetails; in general, if the collector starts to run more frequently, it's probably a sign that you're running out of memory.

Steve B.
+6  A: 

Anticipating Out of Memory Errors

I'm surprised I didn't see this mentioned in the other posts, but you can use ManagementFactory in Java 5/6 to get at a lot of the memory usage information.

Look at the platform MBean server documentation for more information on detecting low-memory conditions in Java. I believe you can set up notifications to call your code when memory usage reaches a certain threshold.
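
As a rough sketch of how that can be wired up (the 80% threshold is an assumption; the cast of the MemoryMXBean to NotificationEmitter follows the pattern shown in the java.lang.management documentation):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryNotificationInfo;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;
import javax.management.Notification;
import javax.management.NotificationEmitter;
import javax.management.NotificationListener;

public class LowMemoryWatcher {
    public static void install() {
        // Ask each heap pool that supports it to warn at 80% usage (assumed threshold)
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getType() == MemoryType.HEAP && pool.isUsageThresholdSupported()) {
                long max = pool.getUsage().getMax();
                if (max > 0) {
                    pool.setUsageThreshold((long) (max * 0.8));
                }
            }
        }

        // The MemoryMXBean emits a notification when a usage threshold is crossed
        NotificationEmitter emitter =
                (NotificationEmitter) ManagementFactory.getMemoryMXBean();
        emitter.addNotificationListener(new NotificationListener() {
            public void handleNotification(Notification n, Object handback) {
                if (MemoryNotificationInfo.MEMORY_THRESHOLD_EXCEEDED.equals(n.getType())) {
                    System.err.println("Heap usage threshold exceeded - time to shed load");
                }
            }
        }, null, null);
    }
}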

Elijah
A: 

For StackOverflowError:

To know the current depth, usually it's either:

  1. using a stateful function (storing the depth outside the function)
  2. using an accumulator (passing the depth as an argument to the function; see the sketch below)
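
For example, a minimal sketch of option 2 (the cutoff of 2000 is an arbitrary assumption; tune it for your -Xss):

public class DepthLimited {
    private static final int MAX_DEPTH = 2000;  // assumed cutoff

    static void descend(int depth) {
        if (depth > MAX_DEPTH) {
            throw new IllegalStateException("recursion too deep, giving up early");
        }
        descend(depth + 1);  // the accumulator argument carries the current depth
    }

    public static void main(String[] args) {
        try {
            descend(0);
        } catch (IllegalStateException e) {
            System.out.println("Stopped before overflowing: " + e.getMessage());
        }
    }
}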

Knowing the depth at which it will occur is difficult. There are several factors:

  1. The stack space allocated to the JVM (you can change this with the -Xss option)
  2. The amount of stack space already used
  3. The amount used by the current function.

Why not try it out using something like this?

public class StackDepthTest {
    static int depth = 0;

    public static void main(String[] args) {
        try {
            recurs();
        } catch (Throwable t) {
            // catching Throwable: not a good idea in production code....
        }
    }

    static void recurs() {
        System.out.println(depth++);  // print the depth reached so far
        recurs();                     // recurse until the stack overflows
    }
}

Run it several times, and also try adding dummy local variables. You'll see that even the same code may halt at different depths, and that adding more variables causes it to end earlier. So yes, it's pretty much unpredictable.

I suppose that, besides rewriting the algorithm, the only option would be to increase the stack size with the -Xss option.

For OutOfMemoryError, there's the -Xmx option.

RichN
A: 

One useful thing you can do is use SoftReferences for caches. That will give you a gradual performance slide as you run out of memory. Just don't use WeakReference even in a WeakHashMap because it will annoy me when your application dies nastily on me.

Tom Hawtin - tackline
+1  A: 

The MemoryMXBean can emit notifications if your memory reaches a certain threshold.

Clint
A: 

I don't know how to prevent those error conditions, but if you add an uncaught exception handler to the thread, at least you can log the error and maybe do some sort of recovery in another thread.
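
A minimal sketch of that idea using Thread.setDefaultUncaughtExceptionHandler (the simulated OutOfMemoryError is just for demonstration):

public class CrashLogger {
    public static void main(String[] args) throws InterruptedException {
        // Log fatal errors from any thread; the JVM keeps running, so another
        // thread could attempt cleanup or a graceful shutdown from here.
        Thread.setDefaultUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
            public void uncaughtException(Thread t, Throwable e) {
                System.err.println("Thread " + t.getName() + " died with: " + e);
            }
        });

        Thread worker = new Thread(new Runnable() {
            public void run() {
                throw new OutOfMemoryError("simulated");  // stand-in for a real failure
            }
        });
        worker.start();
        worker.join();  // wait so the handler's output is visible before exiting
    }
}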

Javamann