views:

204

answers:

4

I read that ConcurrentHashMap performs better than Hashtable under multithreading because it locks at the bucket level rather than taking a map-wide lock. I read that at most 32 locks are possible per map. I want to know why 32, and why not more than 32 locks.

+4  A: 

The default isn't 32, it's 16. And you can override it with the constructor argument concurrencyLevel:

public ConcurrentHashMap(int initialCapacity,
                         float loadFactor,
                         int concurrencyLevel)

so you can do:

Map<String, String> map = new ConcurrentHashMap<String, String>(128, 0.75f, 64);

to change it to 64. The defaults are (as of Java 6u17):

  • initialCapacity: 16;
  • loadFactor: 0.75f;
  • concurrencyLevel: 16.
cletus
Yes, the default is 16, but the maximum allowed is 32. And I want to know why it is 32.
DKSRathore
I don't know where you're getting 32 from. I'm looking at the source (Java 6) and nowhere does it mention 32.
cletus
http://www.ibm.com/developerworks/library/j-jtp08223/
DKSRathore
That article is dated 21 Aug 2003, so it predates even Java 5 and was more of a preview than anything. Always consider information like this in the context of its date. When in doubt, go to the JDK source.
cletus
+5  A: 

If you're talking about the Java ConcurrentHashMap, then the limit is arbitrary:

Creates a new map with the same mappings as the given map. The map is created with a capacity of 1.5 times the number of mappings in the given map or 16 (whichever is greater), and a default load factor (0.75) and concurrencyLevel (16).

If you read the source code, it becomes clear that the maximum number of segments is 2^16 = 65536, which should be more than sufficient for any conceivable need in the immediate future.
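As a rough sketch of what the Java 6 source does with the concurrencyLevel argument: it clamps the value to MAX_SEGMENTS (1 << 16, quoted from the source in another answer below) and then rounds it up to the next power of two to get the actual segment count. The class and method names here are made up for illustration; only the clamp-and-round logic mirrors the JDK source.

```java
// Illustrative sketch (hypothetical class) of how the Java 6
// ConcurrentHashMap constructor turns concurrencyLevel into a segment count.
public class SegmentCount {
    static final int MAX_SEGMENTS = 1 << 16; // same bound as in the JDK source

    static int segmentsFor(int concurrencyLevel) {
        // Clamp to the hard upper bound, as the real constructor does.
        if (concurrencyLevel > MAX_SEGMENTS)
            concurrencyLevel = MAX_SEGMENTS;
        // Round up to the next power of two, so segment lookup can use bit masking.
        int ssize = 1;
        while (ssize < concurrencyLevel)
            ssize <<= 1;
        return ssize;
    }

    public static void main(String[] args) {
        System.out.println(segmentsFor(16));     // 16 (the default)
        System.out.println(segmentsFor(33));     // 64 (rounded up)
        System.out.println(segmentsFor(100000)); // 65536 (clamped)
    }
}
```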

You may have been thinking of certain alternative experimental implementations, like this one:

This class supports a hard-wired preset concurrency level of 32. This allows a maximum of 32 put and/or remove operations to proceed concurrently.

Note that in general, factors other than synchronization efficiency are usually the bottlenecks when more than 32 threads are trying to update a single ConcurrentHashMap.

John Feminella
Nice. Thanks a lot, John. This question had been scratching my head for a week.
DKSRathore
+3  A: 

According to the source of ConcurrentHashMap, the maximum allowed is 65536:

/**
 * The maximum number of segments to allow; used to bound
 * constructor arguments.
 */
static final int MAX_SEGMENTS = 1 << 16; // slightly conservative

public ConcurrentHashMap(int initialCapacity,
                         float loadFactor, int concurrencyLevel) {
    if (concurrencyLevel > MAX_SEGMENTS)
        concurrencyLevel = MAX_SEGMENTS;
mhaller
OK, I skipped over this in the Java source. Thanks, Mike.
DKSRathore
+1  A: 

To make use of all 16 segments of the default concurrency level, you need 16 cores using the map at the same moment. If you have 32 cores but each uses the map only 25% of the time, then on average only 8 of the 16 segments will be in use at once.

In summary, you need a lot of cores all using the same map and doing little else. Real programs usually do something other than access one map.

Peter Lawrey
Peter, Can you point me to some detailed link or reference for such details.
DKSRathore
It is just logic as I see it. The number of cores/hyper-threads you have determines the number of active threads you can have; call it A. Say the threads spend some fraction of their time in the map; call it P. The assumption is that you need around A × P segments (possibly more, to reduce contention). So if you have 4 cores and each spends 25% of its time in the map (which would be very high for a program that does useful work), you need about 4 × 25% = 1 segment. You can do the math for your number of cores and the fraction of time you expect to be using the map.
Peter Lawrey