Freelists are a common way to speed up allocation by reusing memory that was already allocated. Is there a way to use freelists in a concurrent allocator without incurring the overhead of a lock for every allocation (which would neutralize the intended performance gain of the freelist)?

+1  A: 

Use a lock-free linked list.

moonshadow
Is that actually a performance win over using a lock? My understanding is that lock-free data structures usually run at roughly the same speed as a locked variant, but without the potential for deadlocks, etc.
Alex Gaynor
If there is little contention, a lock-free implementation should outperform a locked one (a read-modify-write instead of lock-read-modify-write-unlock), especially if taking the lock incurs an OS call. Under heavy contention, the lock-free implementation may actually perform worse, because it has to back off and retry repeatedly. But in that situation a locked implementation gains you nothing from multiple threads either: if linked-list mutation is the majority of your work, the majority of your work becomes serialised anyway.
moonshadow
A: 

You could have thread-specific free list chunks.

Basically, there is some system which populates the free lists (e.g. a garbage collector). Each thread then gets its own free-list chunk, containing a small number of entries. Locking is used only to allocate a new chunk. With chunks of 30 entries, you would lock only once every 30 allocations. The trade-off is that with thread-specific chunks you may have to run the GC sooner, because the shared list can become empty even while some of the thread-specific chunks still hold free entries.

Thomas Pornin