views:

387

answers:

6

Hi guys,

I have a requirement where a Map will be constructed with roughly 50 to 200 entries (it could be more; call it "not too small" anyway). The map is written only once, and reads (via Map.get("keyName")) can occur more than 20 times per request (it's a webapp).

I'm currently going with HashMap, as I suppose it gives me the best performance (note: numerous reads per request). Not being a data-structures guy, can any of you suggest a Map implementation that's the best fit for my requirement, say from java.util, Apache Commons Collections, etc.?

yc

+1  A: 

If all of the writes are done before any of the reads, then you could use the Collections.unmodifiableMap method.
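A minimal sketch of that write-once-then-wrap pattern (the key names here are placeholders, not from the question):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class ReadOnlyLookup {
    // Populate once at startup, then hand out an unmodifiable view.
    static Map<String, String> build() {
        Map<String, String> backing = new HashMap<>();
        backing.put("host", "example.org"); // placeholder entries
        backing.put("port", "8080");
        return Collections.unmodifiableMap(backing);
    }

    public static void main(String[] args) {
        Map<String, String> config = build();
        System.out.println(config.get("host")); // prints example.org
        try {
            config.put("host", "other"); // the wrapper rejects writes
        } catch (UnsupportedOperationException expected) {
            System.out.println("map is read-only");
        }
    }
}
```

Note that the wrapper only prevents mutation; as pointed out below, it adds a level of delegation rather than speeding anything up.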

If that isn't the case then writing code to do what you want isn't terribly hard (wanders off to find the post that has the basic code in it...)

Hmm... just to be sure that the question is what I am thinking... is the read only aspect the important part or is trying to access the data fast the most important part?

Edit: (based on comment)

Have you checked, with a profiler, whether the code is actually slow? If not, you should not be worrying about it yet.

TofuBeer
Thanks for the answer. The latter (multiple reads) is more important. The reason I mentioned read-only is that I was wondering whether the hashing/indexing could somehow be improved because of it. yc
yclian
Unmodifiable map is just a wrapper around the underlying map; it will actually hurt performance to add this extra level of delegation.
erickson
yes, hence my question on whether read-only was more important than speed, and asking whether the code was profiled and known to be slow :-)
TofuBeer
+6  A: 

Unless you are actually having performance problems (and have traced them to this code) I wouldn't worry about it.

And before I tried to replace the Map I'd look at why, exactly, I need to do 4000 lookups (200 entries by 20 reads each) to generate a web page.

But at a bet I'd guess that the time to do those 4000 lookups will turn out to be negligible compared to other parts of the process.

MarkusQ
The reasoning might be that it's some sort of online Excel thingamajigger with nasty legacy code behind it. HashMap is extremely fast, though (ConcurrentHashMap is even faster!), and if the map is called 4000 times per request, the JIT will reoptimize it to its fullest on the fly anyway.
Esko
@Esko Is ConcurrentHashMap faster? It uses an Iterator to walk through all the key-value pairs vs. the hash+index lookup of HashMap. yc
yclian
+1  A: 

If this were actually a bottleneck, I would look at pre-sizing the map to reduce collisions.
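A sketch of what pre-sizing looks like: since HashMap resizes once it passes capacity × load factor, choosing an initial capacity above expectedEntries / 0.75 avoids rehashing while the table is filled (the helper name and entry count are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class PresizedMap {
    // Build a HashMap whose internal table is large enough that it never
    // needs to rehash while the expected number of entries are added.
    static <K, V> Map<K, V> presized(int expectedEntries) {
        // With the default load factor of 0.75, capacity must exceed
        // expectedEntries / 0.75 to avoid a resize during population.
        int capacity = (int) Math.ceil(expectedEntries / 0.75);
        return new HashMap<>(capacity);
    }

    public static void main(String[] args) {
        Map<String, Integer> m = presized(200);
        for (int i = 0; i < 200; i++) {
            m.put("key" + i, i); // placeholder entries
        }
        System.out.println(m.size()); // prints 200
    }
}
```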

Ron
A: 

If all your writes happen at the beginning, before any readers start, so that synchronization and sharing across threads are not issues, HashMap is your friend. When you are not sure about that: ConcurrentHashMap.

Both implementations are part of the JDK.
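A sketch of the "write in the beginning" case this answer describes: populate a plain HashMap during single-threaded startup, then publish it through a final field so reader threads see the fully built map (the class and entry names are illustrative; if writes could race with reads, you would use ConcurrentHashMap instead):

```java
import java.util.HashMap;
import java.util.Map;

// Write-once-then-read pattern: populate during single-threaded startup,
// then publish via a final field so all reader threads see it safely.
public class LookupTable {
    private final Map<String, String> table;

    LookupTable() {
        Map<String, String> m = new HashMap<>();
        m.put("greeting", "hello"); // placeholder entries
        table = m; // final-field publication guarantees visibility
    }

    String get(String key) {
        return table.get(key);
    }

    public static void main(String[] args) {
        LookupTable t = new LookupTable();
        System.out.println(t.get("greeting")); // prints hello
    }
}
```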

ReneS
A: 

Several years ago (as a school project), I implemented in C a write-once, read-many mapping (and other collections) that took an existing collection and "sealed" it by creating a much more memory- and cache-efficient representation (e.g., a graph was stored as an optimized array). We saw amazing performance differences.

Since Java supports interfaces, implementing something like this can be a lot easier (and is in fact encouraged), but I'm not familiar with an out-of-the-box implementation.
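A rough Java sketch of the "sealing" idea (my own illustration, not the poster's original C code): copy the map into two parallel arrays sorted by key, so lookups become a binary search over a compact array instead of chasing hash-bucket references:

```java
import java.util.Arrays;
import java.util.Map;
import java.util.TreeMap;

// "Seals" a map into sorted parallel arrays for compact, cache-friendly reads.
public class SealedStringMap {
    private final String[] keys;
    private final String[] values;

    SealedStringMap(Map<String, String> source) {
        TreeMap<String, String> sorted = new TreeMap<>(source); // sort by key
        keys = sorted.keySet().toArray(new String[0]);
        values = sorted.values().toArray(new String[0]);
    }

    String get(String key) {
        int i = Arrays.binarySearch(keys, key);
        return i >= 0 ? values[i] : null;
    }

    public static void main(String[] args) {
        SealedStringMap m = new SealedStringMap(Map.of("a", "1", "b", "2"));
        System.out.println(m.get("b")); // prints 2
        System.out.println(m.get("z")); // prints null
    }
}
```

Binary search is O(log n) versus HashMap's expected O(1), so for 50-200 entries the win, if any, comes from memory locality rather than asymptotics.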

Uri
+1  A: 

If it's really worth the trouble, you might try perfect hashing. I'm not familiar with any Java implementations, though.
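To illustrate the idea (a toy sketch, not a production algorithm): for a small fixed key set you can brute-force a hash seed under which every key lands in a distinct slot, so lookups never probe more than one slot. Real perfect-hashing schemes (e.g. the CHD or BDZ algorithms) scale far beyond this trial-and-error approach.

```java
// Toy minimal-perfect-hash sketch: search for a multiplier ("seed") that
// maps every key to a distinct slot of a fixed-size table.
public class TrialPerfectHash {
    private final String[] slots;
    private final int seed;

    TrialPerfectHash(String[] keys) {
        int size = keys.length * 2; // some slack makes a seed easier to find
        String[] table = null;
        int found = -1;
        for (int s = 1; s < 1_000_000 && table == null; s++) {
            String[] candidate = new String[size];
            boolean ok = true;
            for (String k : keys) {
                int slot = Math.floorMod(k.hashCode() * s, size);
                if (candidate[slot] != null) { ok = false; break; } // collision: try next seed
                candidate[slot] = k;
            }
            if (ok) { table = candidate; found = s; }
        }
        if (table == null) throw new IllegalStateException("no collision-free seed found");
        slots = table;
        seed = found;
    }

    boolean contains(String key) {
        int slot = Math.floorMod(key.hashCode() * seed, slots.length);
        return key.equals(slots[slot]); // at most one slot ever inspected
    }

    public static void main(String[] args) {
        TrialPerfectHash h = new TrialPerfectHash(new String[]{"alpha", "beta", "gamma"});
        System.out.println(h.contains("beta"));  // prints true
        System.out.println(h.contains("delta")); // prints false
    }
}
```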

Darius Bacon