views: 29
answers: 2
We have a collection of objects that grows quite large over time. We have implemented a caching strategy to help alleviate this; however, we still run out of heap space at run time if not enough memory is allocated at startup.

Is there a standard mechanism to reduce the size of this cache at runtime to avoid these OutOfMemory errors? That way, if our process is started with a smaller slice of memory than normal, we can hopefully avoid the server dying.

I realize that OutOfMemoryError is an Error type and thus shouldn't be caught or handled, as it's normally indicative of a more serious issue.

Is it as simple as something like the following?

private static final long RECOMMENDED_MEMORY = 1073741824L;  // Example: 1 GiB
private static int recommendedCacheSize = 100;               // Example: 100 items

long heapSize = Runtime.getRuntime().totalMemory();
double size = Math.floor((double) heapSize / RECOMMENDED_MEMORY * recommendedCacheSize);
recommendedCacheSize = (int) size + 1;   // ++size on a double can't be assigned to an int
A: 

I suspect that the caching implementation does not maintain a cache of WeakReference or SoftReference objects, which are typically employed in a Java object cache. It is important to use soft or weak references for a cache; otherwise, an OOME (OutOfMemoryError) can be thrown.

The rationale behind using soft/weak references is that the garbage collector will attempt to collect these objects when not enough memory is available. Using strong references (normal object references) to hold objects in a cache prevents that memory from ever being reclaimed.
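As a rough illustration only (not code from either poster, and assuming Java 5+ generics and a ConcurrentHashMap as the backing store), a minimal soft-reference cache might look like this; the GC is free to clear the referents when memory runs low:

import java.lang.ref.SoftReference;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class SoftCache<K, V> {
    // Values are held only softly, so the GC may reclaim them under memory pressure.
    private final Map<K, SoftReference<V>> map = new ConcurrentHashMap<K, SoftReference<V>>();

    void put(K key, V value) {
        map.put(key, new SoftReference<V>(value));
    }

    V get(K key) {
        SoftReference<V> ref = map.get(key);
        if (ref == null) {
            return null;                 // never cached
        }
        V value = ref.get();             // null if the GC has already cleared the referent
        if (value == null) {
            map.remove(key);             // drop the stale entry so the map doesn't keep growing
        }
        return value;
    }
}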

If, however, you get OOMEs even when you're using weak/soft references, I suspect your GC is not well tuned. It appears that your application experiences sudden spurts of memory consumption that result in the OOME condition. Such a situation is unlikely, but possible, especially if the previous GC cycle did not free enough memory (which is then consumed by the next spurt).

Vineet Reynolds
Our cache of objects is using the Apache Commons KeyedObjectPool, so we should be meeting this criterion. I believe the real issue is that our default setting is too high. The idea is to allow each of the devs to have a different cache size based on their machine's capabilities.
Scott
@Scott, unless you are using the SoftReferenceObjectPool from Commons Pool, you won't be using soft references underneath. Besides, Commons Pool is a pooling implementation, not a cache. It has its own eviction algorithms in place, which are obviously different from GC. Therefore, you might want to examine whether any eviction of objects is occurring - the absence of any eviction would explain the OOME.
Vineet Reynolds
@Vineet, shouldn't this serve as a cache if we set maxIdle to 1 and whenExhaustedAction to BLOCK? We are using a GenericKeyedObjectPool.
Scott
@Scott, it's a bit difficult to say. I've taken a look only at how the pool was implemented internally, and haven't examined the evictor in detail. Out of curiosity, how are you specifying these values you've mentioned?
Vineet Reynolds
@Vineet - we are on Commons Pool 1.2, so we are using the following calls in this object's default constructor -- note that setting the size is done just after instantiating the GenericKeyedObjectPool: GenericKeyedObjectPool pool = new GenericKeyedObjectPool(this); pool.setMaxIdle(1); pool.setWhenExhaustedAction(GenericKeyedObjectPool.WHEN_EXHAUSTED_BLOCK); (this configuration is sketched more fully after this thread)
Scott
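A rough sketch (assuming Commons Pool 1.x; the factory class and the eviction intervals below are illustrative, not taken from Scott's code) of how that constructor configuration could be extended so the pool's evictor thread actually runs and releases idle objects:

import org.apache.commons.pool.BaseKeyedPoolableObjectFactory;
import org.apache.commons.pool.impl.GenericKeyedObjectPool;

// Hypothetical stand-in for Scott's factory class; only makeObject is required.
class CachedObjectFactory extends BaseKeyedPoolableObjectFactory {
    public Object makeObject(Object key) {
        return new Object();   // placeholder for the real, expensive-to-create object
    }

    GenericKeyedObjectPool buildPool() {
        GenericKeyedObjectPool pool = new GenericKeyedObjectPool(this);
        pool.setMaxIdle(1);
        pool.setWhenExhaustedAction(GenericKeyedObjectPool.WHEN_EXHAUSTED_BLOCK);
        // By default the evictor thread never runs, so idle objects below maxIdle
        // are never released by the pool on its own; these two settings enable it.
        pool.setTimeBetweenEvictionRunsMillis(30000L);   // run the evictor every 30 seconds
        pool.setMinEvictableIdleTimeMillis(60000L);      // evict objects idle for 60+ seconds
        return pool;
    }
}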
A: 

Why not use something that was designed as a real cache?

While you could use soft references to implement a memory-sensitive cache, such caches tend to leave objects in memory too long (increasing your GC load). Weak references are not suitable for caches, as they have a much higher likelihood of being cleared (in practice, they're cleared on every GC).
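The answer doesn't name a specific library, so as one JDK-only illustration (an assumption on my part, not necessarily what the answerer had in mind): a LinkedHashMap in access order can act as a simple size-bounded LRU cache, evicting the least recently used entry once a fixed limit is exceeded.

import java.util.LinkedHashMap;
import java.util.Map;

class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    LruCache(int maxEntries) {
        super(16, 0.75f, true);          // accessOrder = true keeps entries in LRU order
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries;      // evict the least recently used entry past the limit
    }
}

Something like new LruCache<String, Object>(recommendedCacheSize) would then cap the cache at the computed size regardless of how the GC behaves.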

Anon