Is there a way to specify a maximum size n such that when the (n + 1)th entry is inserted, the oldest entry is removed first, ensuring that the size of the hashtable never exceeds n?

+4  A: 

You're looking for an LRU cache, perhaps? Here's a blog post on one based on LinkedHashMap.

sblundy
I came here to mention LinkedHashMap, which I used for an LRU cache recently.
Paul Tomblin
A: 

You may want to consider using Apache Commons Collections. It has a number of LRU implementations. Otherwise, you can easily write a similar wrapper around the standard library collections; I don't think there's one you can use directly.

Uri
A: 

You could use a double-ended queue (Deque) to track insertion order, and simply remove the first item when you're at the max count.
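A minimal sketch of that idea, assuming single-threaded use (the class and method names here are illustrative, not from any library):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Sketch: a size-bounded map backed by a HashMap for lookups plus a
// Deque that remembers insertion order. Not thread-safe, and it assumes
// keys are not re-put (a re-put would leave a stale key in the deque).
class BoundedMap<K, V> {
    private final int maxSize;
    private final Map<K, V> map = new HashMap<>();
    private final Deque<K> order = new ArrayDeque<>();

    BoundedMap(int maxSize) {
        this.maxSize = maxSize;
    }

    void put(K key, V value) {
        if (!map.containsKey(key)) {
            if (map.size() >= maxSize) {
                // Evict the oldest inserted entry before adding a new one.
                map.remove(order.removeFirst());
            }
            order.addLast(key);
        }
        map.put(key, value);
    }

    V get(K key) {
        return map.get(key);
    }

    int size() {
        return map.size();
    }
}
```

Note this evicts by insertion order only; unlike a true LRU cache, calling get() does not refresh an entry's position.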

Eric H
A: 

If you're caching, you could use a WeakHashMap or WeakReference and then not have to worry about the size of the cache.

Steve Kuo
Just to clarify, to avoid propagating a common misconception: WeakHashMap is NOT suitable for LRU caching by itself; it uses WeakReferences for keys, not values. It's ideal for keeping metadata about objects that don't 'belong' to you, not for assuming entries will be thrown out when memory runs low.
Cowan
+12  A: 

LinkedHashMap does exactly that; see the javadoc for the removeEldestEntry method.

Something like this should do the trick; it removes the oldest inserted entry once the map grows past N:

Map map = new LinkedHashMap() {
    @Override
    protected boolean removeEldestEntry(Map.Entry eldest) {
        return size() > N;
    }
};

You can also remove the oldest accessed entry (true LRU order) by passing accessOrder = true as the third constructor argument:

Map map = new LinkedHashMap(16, 0.75f, true) {
    @Override
    protected boolean removeEldestEntry(Map.Entry eldest) {
        return size() > N;
    }
};
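For illustration, here is a runnable sketch of the insertion-order variant; the class name, generic types, and the fixed N = 3 are mine, not part of the answer above:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class BoundedMapDemo {
    static final int N = 3; // illustrative cap

    // An insertion-order LinkedHashMap that never grows past N entries.
    static Map<String, Integer> newBoundedMap() {
        return new LinkedHashMap<String, Integer>() {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Integer> eldest) {
                return size() > N;
            }
        };
    }

    public static void main(String[] args) {
        Map<String, Integer> map = newBoundedMap();
        map.put("a", 1);
        map.put("b", 2);
        map.put("c", 3);
        map.put("d", 4); // evicts "a", the oldest insertion
        System.out.println(map.keySet()); // prints [b, c, d]
    }
}
```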
Kristian