Is there a technique that lets me specify a number n such that, when the (n + 1)th entry is inserted, the oldest entry is removed first, ensuring that the size of the hashtable never exceeds n?
+4
A:
You're looking for an LRU cache, perhaps? Here's a blog post on one based on LinkedHashMap.
sblundy
2009-01-07 21:34:15
I came here to mention LinkedHashMap, which I used for an LRU cache recently.
Paul Tomblin
2009-01-07 21:36:09
A:
You may want to consider using Apache Commons Collections. It has a number of LRU implementations. Otherwise, you can easily write a similar wrapper around the standard library collections; I don't think there's one you can use directly.
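The "write a similar wrapper" idea can be sketched by pairing a HashMap with a Deque that tracks insertion order; the class name and shape below are illustrative, not from any library:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of a bounded map: once more than `capacity` distinct
// keys have been inserted, the oldest-inserted key is evicted first.
class BoundedMap<K, V> {
    private final int capacity;
    private final Map<K, V> map = new HashMap<>();
    private final Deque<K> insertionOrder = new ArrayDeque<>();

    BoundedMap(int capacity) {
        this.capacity = capacity;
    }

    void put(K key, V value) {
        if (!map.containsKey(key)) {
            insertionOrder.addLast(key);
            if (insertionOrder.size() > capacity) {
                // Evict the entry that was inserted earliest.
                map.remove(insertionOrder.removeFirst());
            }
        }
        map.put(key, value);
    }

    V get(K key) {
        return map.get(key);
    }

    int size() {
        return map.size();
    }
}
```

Note this evicts by insertion order only; a true LRU wrapper would also have to reorder keys on each get(), which is exactly the bookkeeping LinkedHashMap's access-order mode does for you.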
Uri
2009-01-07 21:37:30
A:
If you're caching, you could use a WeakHashMap or WeakReference values and then not have to worry about the size of the cache.
Steve Kuo
2009-01-07 21:57:30
Just to clarify, to avoid propagating a common misconception: WeakHashMap is NOT suitable for LRU caching by itself; it uses WeakReferences for KEYS, not VALUES. It's ideal for keeping metadata about objects that don't 'belong' to you, not for assuming entries will be thrown out when memory runs low.
Cowan
2009-01-08 01:10:59
+12
A:
LinkedHashMap does exactly that; see the javadoc for the removeEldestEntry method.
Something like this should do the trick; it will remove the oldest inserted entry:
Map<K, V> map = new LinkedHashMap<K, V>() {
    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > N; // N is your chosen maximum size
    }
};
You can also remove the oldest *accessed* entry (true LRU order) by enabling access order in the constructor:
Map<K, V> map = new LinkedHashMap<K, V>(16, 0.75f, true) {
    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > N; // N is your chosen maximum size
    }
};
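To make the access-order behavior concrete, here is a small self-contained sketch (the class name, factory method, and capacity of 3 are illustrative) showing that touching an entry with get() protects it from eviction:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruExample {
    // Returns a map that evicts the least-recently-accessed entry
    // once it holds more than maxEntries entries.
    static <K, V> Map<K, V> newLruCache(final int maxEntries) {
        // accessOrder = true: iteration order is least-recently-accessed first,
        // so the "eldest" entry is the LRU one.
        return new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxEntries;
            }
        };
    }

    public static void main(String[] args) {
        Map<String, Integer> cache = newLruCache(3);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.put("c", 3);
        cache.get("a");    // touch "a": it is now the most recently used
        cache.put("d", 4); // evicts "b", the least recently used
        System.out.println(cache.keySet()); // prints [c, a, d]
    }
}
```

With insertion order (the no-argument constructor), the same sequence would instead evict "a", since get() would not reorder anything.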
Kristian
2009-01-07 22:11:48