views: 913
answers: 17

I am writing an application where memory, and to a lesser extent speed, are vital. I have found from profiling that I spend a great deal of time in Map and Set operations. While I look for ways to call these methods less often, I am wondering whether anyone out there has written, or come across, implementations that significantly improve on access time or memory overhead? Or, at least, ones that can improve these things given some assumptions?

From looking at the JDK source I can't believe that it can't be made faster or leaner.

I am aware of Commons Collections, but I don't believe it has any implementation whose goal is to be faster or leaner. Same for Google Collections.

Update: Should have noted that I do not need thread safety.

A: 

It's probably not so much the Map or Set which is causing the problem, but the objects behind them. Depending upon your problem, you might want a more database-type scheme where "objects" are stored as a bunch of bytes rather than as Java objects. You could embed a database (such as Apache Derby) or do your own specialist thing. It's very dependent upon what you are actually doing. HashMap isn't deliberately big and slow...

Tom Hawtin - tackline
I don't see how the nature of the objects changes how fast a Set or Map can look them up, or why a database would be leaner and faster than a Map implementation.
Sean Owen
More memory means a harder-worked cache. The implementation of equals and hashCode is also important. If equals has to chase down various data in different allocations of memory, that is going to be slow. If hashCode causes collisions, that's going to be slow.
Tom Hawtin - tackline
+2  A: 

You can extend AbstractMap and/or AbstractSet as a starting point. I did this not too long ago to implement a binary-trie-based map (the key was an integer, and each "level" of the tree was a bit position: left child was 0 and right child was 1). This worked out well for us because the keys were EUI-64 identifiers, and most of the time the top 5 bytes were going to be the same.

To extend AbstractMap, you need at the very least to implement the entrySet() method, which returns a set of Map.Entry objects, each of which is a key/value pair.

To implement a set, you extend AbstractSet and supply implementations of size() and iterator().

That's the bare minimum, however. You will also want to implement get and put, since the default map is unmodifiable and the default implementation of get iterates through the entrySet looking for a match.
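
A minimal, hypothetical sketch of that approach (the class and names below are mine, not from the answer): a fixed-capacity, array-backed map with Integer keys and linear scans, showing entrySet() plus get/put overrides, for illustration only:

import java.util.AbstractMap;
import java.util.AbstractSet;
import java.util.Iterator;
import java.util.Map;
import java.util.Set;

public class ArrayBackedMap<V> extends AbstractMap<Integer, V> {

  private final int[] keys;
  private final Object[] values;
  private int size;

  public ArrayBackedMap(int capacity) {
    keys = new int[capacity];
    values = new Object[capacity];
  }

  @Override
  @SuppressWarnings("unchecked")
  public V get(Object key) {
    int k = (Integer) key;
    for (int i = 0; i < size; i++) {
      if (keys[i] == k) {
        return (V) values[i];
      }
    }
    return null;
  }

  @Override
  @SuppressWarnings("unchecked")
  public V put(Integer key, V value) {
    int k = key;
    for (int i = 0; i < size; i++) {
      if (keys[i] == k) {
        V old = (V) values[i];
        values[i] = value;
        return old;
      }
    }
    if (size == keys.length) {
      throw new IllegalStateException("Map is full"); // sketch: no resizing
    }
    keys[size] = k;
    values[size] = value;
    size++;
    return null;
  }

  @Override
  public Set<Map.Entry<Integer, V>> entrySet() {
    return new AbstractSet<Map.Entry<Integer, V>>() {
      @Override
      public int size() {
        return size;
      }

      @Override
      public Iterator<Map.Entry<Integer, V>> iterator() {
        return new Iterator<Map.Entry<Integer, V>>() {
          private int i;

          public boolean hasNext() {
            return i < size;
          }

          @SuppressWarnings("unchecked")
          public Map.Entry<Integer, V> next() {
            Map.Entry<Integer, V> e =
                new AbstractMap.SimpleEntry<Integer, V>(keys[i], (V) values[i]);
            i++;
            return e;
          }

          public void remove() {
            throw new UnsupportedOperationException();
          }
        };
      }
    };
  }
}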

nsayer
May have to go that way -- was hoping to hear someone had nailed this already.
Sean Owen
+5  A: 

Have you looked at Trove4J ? From the website:

Trove aims to provide fast, lightweight implementations of the java.util.Collections API.

Benchmarks provided here.

Brian Agnew
+3  A: 

Try improving the performance of your equals and hashCode methods; this could help speed up the standard containers' use of your objects.
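
For instance, a minimal sketch (hypothetical class) of a cheap hashCode/equals pair for an object keyed on a single int, so container lookups stay inexpensive:

public final class IntKey {
  private final int id;

  public IntKey(int id) {
    this.id = id;
  }

  @Override
  public int hashCode() {
    return id; // no boxing, no chasing other objects in memory
  }

  @Override
  public boolean equals(Object o) {
    return o instanceof IntKey && ((IntKey) o).id == id;
  }
}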

Tom
Yeah they are as fast as possible -- merely comparing / returning ints in my case. Good point though.
Sean Owen
+1  A: 

Check out GNU Trove:

http://trove4j.sourceforge.net/index.html

Taylor Leese
http://stackoverflow.com/questions/865423/optimized-implementations-of-java-util-map-and-java-util-set/865449#865449
erickson
A: 

Commons Collections has FastArrayList, FastHashMap and FastTreeMap but I don't know what they're worth...

Valentin Rocher
Commons Collections doesn't support generics and is old. Google Collections has been through a lot of scrutiny by a lot of smart people. I'd look there first.
erickson
Yeah good lead here but these implementations are trying to optimize away thread contention in a thread-safe implementation, in a mostly read-only environment. I should have noted I don't need thread-safety.
Sean Owen
Nowadays, I'd really just use the concurrent collections introduced in Java 5.
Neil Coffey
A: 

There is at least one implementation in commons-collections that is specifically built for speed: Flat3Map. It's pretty specific, though, in that it will only be really quick as long as there are no more than 3 elements.
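
Usage is the same as any other Map; a small sketch assuming commons-collections 3.x (which is not generified, hence the raw type):

import java.util.Map;
import org.apache.commons.collections.map.Flat3Map;

public class Flat3MapExample {
  public static void main(String[] args) {
    // Drop-in Map usage; reportedly the first three entries are held in fields
    // rather than a hash table, falling back to a normal map beyond that.
    Map map = new Flat3Map();
    map.put("x", Integer.valueOf(1));
    map.put("y", Integer.valueOf(2));
    System.out.println(map.get("x"));
  }
}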

I suspect that you may get more mileage by following @thaggie's advice and looking at the equals/hashCode method times.

Gareth Davis
+7  A: 

Normally these methods are pretty quick. There are a couple of things you should check: have you implemented your hash codes, and are they sufficiently uniform? Otherwise you'll get rubbish performance.

http://trove4j.sourceforge.net/ <-- this is a bit quicker and saves some memory. I saved a few ms on 50,000 updates

Are you sure that you're using maps/sets correctly? i.e. not trying to iterate over all the values or something similar. Also, e.g., don't do a contains and then a remove; just call remove and check its return value.

Also check whether you're using Double vs double. I noticed a few ms performance improvement on tens of thousands of checks.

Have you also set up the initial capacity correctly/appropriately?
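
A small sketch of the last two points -- checking remove()'s return value instead of contains()-then-remove(), and presizing -- with a made-up expected size:

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class MapUsageTips {
  public static void main(String[] args) {
    // If you know roughly how many entries to expect (100000 is a made-up figure),
    // size the table up front to avoid rehashing: capacity > expected / 0.75.
    int expected = 100000;
    Map<Integer, String> map = new HashMap<Integer, String>(expected * 4 / 3 + 1);
    map.put(1, "one");

    Set<Integer> set = new HashSet<Integer>();
    set.add(42);

    // Avoid: contains() followed by remove() is two lookups for one operation.
    if (set.contains(42)) {
      set.remove(42);
    }

    // Prefer: remove() already reports whether the element was present.
    set.add(42);
    boolean wasPresent = set.remove(42);
    System.out.println(wasPresent);
  }
}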

Egwor
Yeah hashCode() is OK as is equals() and yeah I'm not being too dumb (i.e. using entrySet() where applicable for instance). trove4j is a good lead.
Sean Owen
Just a thought: have you considered making your objects immutable and pre-computing the hash code?
Egwor
A: 
  • Commons Collections has an id map which compares through ==, which should be faster. Joda Primitives also has primitive collections, as does Trove. I experimented with Trove and found that its memory usage is better.
  • I was mapping collections of many small objects with a few Integers. Altering these to ints saved nearly half the memory (although it required some messier application code to compensate); see the sketch below.
  • It seems reasonable to me that sorted trees should consume less memory than hashmaps because they don't require the load factor (although if anyone can confirm this, or has a reason why it's actually dumb, please post in the comments).
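
For example, a sketch of the Integer-to-int change using Trove's int-keyed map (package layout here assumes Trove 3; older releases used gnu.trove.TIntObjectHashMap):

import gnu.trove.map.hash.TIntObjectHashMap;

public class PrimitiveKeyExample {
  public static void main(String[] args) {
    // Keys are kept in a primitive int[]: no Integer boxing and no per-entry
    // Map.Entry objects, which is where the memory saving comes from.
    TIntObjectHashMap<String> map = new TIntObjectHashMap<String>(1000);
    map.put(42, "forty-two");
    System.out.println(map.get(42));
    System.out.println(map.containsKey(7));
  }
}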
Steve B.
Sorted trees should be slower for general lookup since their structure is oriented to maintaining ordering. Hash-based implementations ought to be O(1) in comparison. You are right to think about overhead in the data structures -- that is exactly what I am concerned about. Both TreeMap and HashMap use a Map.Entry object internally for each key. HashMap I suppose has a little more overhead due to empty hash table slots but it's minor. But yeah I want to avoid all those Map.Entry objects for instance.
Sean Owen
A: 

You said you profiled some classes but have you done any timings to check their speed? I'm not sure how you'd check their memory usage. It seems like it would be nice to have some specific figures at hand when you're comparing different implementations.

lumpynose
Profiling shows significant time spent within methods of HashMap, HashSet, etc. Their absolute speed is irrelevant compared to the relative amount of time spent there. I can look at arrays and Map.Entry objects allocated from HashMap, for instance, to get a sense of the memory overhead of the data structure.
Sean Owen
+2  A: 

Here are the ones I know, in addition to Google and Commons Collections:

Of course you can always implement your own data structures which are optimized for your use cases. To be able to help better, we would need to know your access patterns and what kind of data you store in the collections.

Esko Luontola
A: 

What is it you are trying to do? Maps and Sets may be inefficient for your problem.

Thorbjørn Ravn Andersen
Nah, I definitely need key-value lookups in some cases, and simple sets. It's not a case where a List or array makes sense.
Sean Owen
A: 

There are some notes here and links to several alternative data-structure libraries: http://www.leepoint.net/notes-java/data/collections/ds-alternatives.html

I'll also throw in a strong vote for fastutil (mentioned in another response, and on that page). It has more data structures than you can shake a stick at, and versions optimized for primitive types as keys or values. (A drawback is that the jar file is huge, but you can presumably trim it to just what you need.)

Daniel Martin
+1  A: 

You can possibly save a little on memory by:

(a) using a stronger, wider hash code, and thus avoiding having to store the keys;

(b) allocating entries yourself from an array, avoiding creating a separate object per hash table entry.

In case it's useful, here's a no-frills Java implementation of the Numerical Recipes hash table that I've sometimes found useful. You can key directly on a CharSequence (including Strings), or else you must come up with a strong-ish 64-bit hash function for your objects yourself.

Remember, this implementation doesn't store the keys, so if two items have the same hash code (which you'd expect after hashing in the order of 2^32 or a couple of billion items if you have a good hash function), then one item will overwrite the other:

import java.io.Serializable;
import java.util.Arrays;

public class CompactMap<E> implements Serializable {
  static final long serialVersionUID = 1L;

  private static final int MAX_HASH_TABLE_SIZE = 1 << 24;
  private static final int MAX_HASH_TABLE_SIZE_WITH_FILL_FACTOR = 1 << 20;

  private static final long[] byteTable;
  private static final long HSTART = 0xBB40E64DA205B064L;
  private static final long HMULT = 7664345821815920749L;

  static {
    byteTable = new long[256];
    long h = 0x544B2FBACAAF1684L;
    for (int i = 0; i < 256; i++) {
      for (int j = 0; j < 31; j++) {
        h = (h >>> 7) ^ h;
        h = (h << 11) ^ h;
        h = (h >>> 10) ^ h;
      }
      byteTable[i] = h;
    }
  }

  private int maxValues;        // maximum number of entries
  private int[] table;          // bucket heads: index into the parallel arrays, or -1
  private int[] nextPtrs;       // next-entry chain within a bucket, or -1
  private long[] hashValues;    // 64-bit hash of each entry (keys are not stored)
  private E[] elements;         // value of each entry
  private int nextHashValuePos; // next free slot in the parallel arrays
  private int hashMask;         // table.length - 1
  private int size;

  @SuppressWarnings("unchecked")
  public CompactMap(int maxElements) {
    int sz = 128;
    int desiredTableSize = maxElements;
    if (desiredTableSize < MAX_HASH_TABLE_SIZE_WITH_FILL_FACTOR) {
      desiredTableSize = desiredTableSize * 4 / 3;
    }
    desiredTableSize = Math.min(desiredTableSize, MAX_HASH_TABLE_SIZE);
    while (sz < desiredTableSize) {
      sz <<= 1;
    }
    this.maxValues = maxElements;
    this.table = new int[sz];
    this.nextPtrs = new int[maxValues];
    this.hashValues = new long[maxValues];
    this.elements = (E[]) new Object[maxValues]; // indexed per entry, parallel to hashValues/nextPtrs
    Arrays.fill(table, -1);
    this.hashMask = sz-1;
  }

  public int size() {
    return size;
  }

  public E put(CharSequence key, E val) {
    return put(hash(key), val);
  }

  public E put(long hash, E val) {
    int hc = (int) hash & hashMask;
    int[] table = this.table;
    int k = table[hc];
    if (k != -1) {
      int lastk;
      do {
        if (hashValues[k] == hash) {
          E old = elements[k];
          elements[k] = val;
          return old;
        }
        lastk = k;
        k = nextPtrs[k];
      } while (k != -1);
      k = nextHashValuePos++;
      nextPtrs[lastk] = k;
    } else {
      k = nextHashValuePos++;
      table[hc] = k;
    }
    if (k >= maxValues) {
      throw new IllegalStateException("Hash table full (size " + size + ", k " + k + ")");
    }
    hashValues[k] = hash;
    nextPtrs[k] = -1;
    elements[k] = val;
    size++;
    return null;
  }

  public E get(long hash) {
    int hc = (int) hash & hashMask;
    int[] table = this.table;
    int k = table[hc];
    if (k != -1) {
      do {
        if (hashValues[k] == hash) {
          return elements[k];
        }
        k = nextPtrs[k];
      } while (k != -1);
    }
    return null;
  }

  public E get(CharSequence key) {
    return get(hash(key));
  }

  public static long hash(CharSequence cs) {
    if (cs == null) return 1L;
    long h = HSTART;
    final long hmult = HMULT;
    final long[] ht = byteTable;
    for (int i = cs.length()-1; i >= 0; i--) {
      char ch = cs.charAt(i);
      h = (h * hmult) ^ ht[ch & 0xff];
      h = (h * hmult) ^ ht[(ch >>> 8) & 0xff];
    }
    return h;
  }

}
Neil Coffey
Good call on storing only hashes, though in my case that's not an option. Yes, I have a hunch that I want an implementation that uses an array with linear probing rather than separate chaining -- that is, no linked lists of container objects.
Sean Owen
N.B. Strictly, this example isn't linear probing. We actually allocate mini-lists at each "bucket"; it's just that those mini-lists are allocated from an array.
Neil Coffey
A: 

Which version of the JVM are you using?

If you are not on 6 (although I suspect you are) then a switch to 6 may help.

If this is a server application running on Windows, try using -server to use the correct HotSpot implementation.

Fortyrunner
Yep, on Java 6, and passing -server for sure.
Sean Owen
A: 

Hey Sean,

I went through something like this a couple of years ago -- very large Maps and Sets as well as very many of them. The default Java implementations consumed way too much space. In the end I rolled my own, but only after I examined the actual usage patterns that my code required. For example, I had a known large set of objects that were created early on and some Maps were sparse while others were dense. Other structures grew monotonically (no deletes) while in other places it was faster to use a "collection" and do the occasional but harmless extra work of processing duplicate items than it was to spend the time and space on avoiding duplicates. Many of the implementations I used were array-backed and exploited the fact that my hashcodes were sequentially allocated and thus for dense maps a lookup was just an array access.

Take away messages:

  1. look at your algorithm,
  2. consider multiple implementations, and
  3. remember that most of the libraries out there are catering for general-purpose use (e.g. insert and delete, a range of sizes, neither sparse nor dense, etc.), so they're going to have overhead that you can probably avoid.

Oh, and write unit tests...
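
A hypothetical sketch of the dense, array-backed idea: if keys are ids allocated sequentially from 0, a lookup is just an array access, with no hashing, no Map.Entry objects and no boxing:

public class DenseIdMap<V> {
  private Object[] values;

  public DenseIdMap(int expectedMaxId) {
    values = new Object[expectedMaxId];
  }

  @SuppressWarnings("unchecked")
  public V get(int id) {
    return id < values.length ? (V) values[id] : null;
  }

  public void put(int id, V value) {
    if (id >= values.length) {
      // Grow geometrically; fine for structures that grow monotonically with no deletes.
      Object[] bigger = new Object[Math.max(id + 1, values.length * 2)];
      System.arraycopy(values, 0, bigger, 0, values.length);
      values = bigger;
    }
    values[id] = value;
  }
}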

A: 

At times when I have seen Map and Set operations using a high percentage of CPU, it has indicated that I had overused Map and Set, and restructuring my data almost eliminated collections from the top 10% of CPU consumers.

See if you can avoid copying collections, iterating over collections, and any other operation that results in accessing most of the elements of the collection or creating objects.

Peter Lawrey