Suppose we make a malloc request for a memory block of size n, where n != 2^k for any k > 0. malloc returns space for the requested block, but how is the remaining buffer from the page handled? I have read that pages are generally blocks of memory whose sizes are powers of two.

Wiki states the following:

 Like any method of memory allocation, the heap will become fragmented; that is,
 there will be sections of used and unused memory in the allocated 
 space on the heap. A good allocator will attempt to find an unused area
 of already allocated memory to use before resorting to expanding the heap.

So my question is: how is this tracked?

EDIT: How is the unused memory tracked when using malloc?

+2  A: 

This varies a lot from implementation to implementation. Some waste the space, some subdivide pages until they get the requested size (or close to it), and so on.

If you are asking out of curiosity, I suggest you read the source code for the implementation in question.

If it's because of performance worries, try to benchmark it and see what happens.

Morten Siebuhr
+3  A: 

This really depends on the specific implementation, as Morten Siebuhr pointed out already. In very simple cases, there might be a list of free, fixed-size blocks of memory (possibly all of the same size), so the unused remainder of each block is simply wasted. Note that real implementations never use such simplistic algorithms.
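
As a rough, hypothetical sketch of such a simplistic scheme (not the code of any real allocator), the unused memory can be tracked with nothing more than a singly linked free list threaded through the blocks themselves; the block size and pool size below are made-up values:

    #include <stddef.h>

    #define BLOCK_SIZE  64        /* every block is 64 bytes, regardless of the request */
    #define POOL_BLOCKS 1024

    /* A free block stores the pointer to the next free block in its own first bytes. */
    typedef struct free_block {
        struct free_block *next;
    } free_block;

    static union {
        unsigned char bytes[BLOCK_SIZE * POOL_BLOCKS];
        free_block align;         /* forces suitable alignment for the list links */
    } pool;

    static free_block *free_list;

    static void pool_init(void)
    {
        /* Thread every block onto the free list. */
        for (size_t i = 0; i < POOL_BLOCKS; i++) {
            free_block *b = (free_block *)(pool.bytes + i * BLOCK_SIZE);
            b->next = free_list;
            free_list = b;
        }
    }

    static void *pool_alloc(size_t n)
    {
        if (n > BLOCK_SIZE || free_list == NULL)
            return NULL;          /* request too big, or pool exhausted */
        free_block *b = free_list;
        free_list = b->next;      /* pop the first free block */
        return b;                 /* a request for n < BLOCK_SIZE wastes the rest */
    }

    static void pool_free(void *p)
    {
        free_block *b = p;        /* push the block back onto the free list */
        b->next = free_list;
        free_list = b;
    }

Here the "tracking" is just the chain of next pointers; asking for, say, 10 bytes still consumes a whole 64-byte block, which is exactly the wasted space mentioned above.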

This is an overview over some simple possibilities: http://www.osdcom.info/content/view/31/39/

This Wikipedia entry has several interesting links, including the one above: http://en.wikipedia.org/wiki/Dynamic_memory_allocation#Implementations

As a final remark, googling "malloc implementation" turns up a heap (pun intended) of valuable links.

Greg S
+3  A: 

A standard BSD-style memory allocator basically works like this:

  • It keeps a linked list of pre-allocated memory blocks for sizes 2^k for k<=12 (for example).
  • In reality, each list for a given k is composed of memory blocks from different areas; see below.
  • A malloc request for n bytes is serviced by calculating n', the smallest 2^k >= n, then looking up the first area in the list for k, and then returning the first free block in the free-list for that area (see the sketch after this list).
  • When there is no pre-allocated memory block of size 2^k, an area is allocated, an area being a larger piece of contiguous memory, say a 4kB piece. This memory is then chopped up into blocks of 2^k bytes each. At the beginning of the area there is book-keeping information, such as where to find the linked list of free blocks within the area. A bitmap can also be used, but a linked list typically has better cache behavior (you want the next allocation to return memory that is already in the cache).
  • The reason for using areas is that free(ptr) can be implemented efficiently: ptr & 0xfffff000 in this example points to the beginning of the area, which contains the book-keeping structures and makes it possible to link the memory block back into the area.
  • The BSD allocator wastes space by always returning a memory block 2^k bytes in size, but it can reuse the memory of a free block to hold the free-list links, which is a nice property. Allocation is also blazingly fast.
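
To make the bucket arithmetic concrete, here is a rough sketch (not actual BSD source) of the two key operations: rounding a request up to n', the smallest power of two >= n, and recovering the area header from a user pointer on free. The 4 kB area size and the header layout are assumptions chosen for illustration:

    #include <stddef.h>
    #include <stdint.h>

    #define AREA_SIZE 4096u                  /* assumed area size; must be a power of two */

    struct free_block {
        struct free_block *next;
    };

    /* Book-keeping kept at the start of every area (layout is illustrative only). */
    struct area_header {
        size_t              block_size;      /* the 2^k size of every block in this area */
        struct free_block  *free_blocks;     /* free-list of blocks within this area */
        struct area_header *next_area;       /* next area for the same bucket */
    };

    /* Round n up to n', the smallest power of two >= n (assumes 0 < n <= SIZE_MAX/2). */
    static size_t round_up_pow2(size_t n)
    {
        size_t p = 1;
        while (p < n)
            p <<= 1;
        return p;
    }

    /* Map a user pointer back to the area that contains it, as done by free(ptr).
       For 4 kB areas this is the same as ptr & 0xfffff000 on a 32-bit machine. */
    static struct area_header *area_of(void *ptr)
    {
        return (struct area_header *)((uintptr_t)ptr & ~(uintptr_t)(AREA_SIZE - 1));
    }

free(ptr) can then push the block onto area_of(ptr)->free_blocks, reusing the block's own memory for the link, which is the property mentioned in the last bullet above.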

Modifications to the above general idea include:

  • Using anonymous mmap for large allocations. This shifts the work of handling large mallocs over to the kernel and avoids wasting a lot of memory in these cases (see the sketch after this list).
  • The GNU version of malloc has special cases for non-power-of-two buckets. There is nothing inherent in the BSD allocator that requires returning 2^k memory blocks, only that there are pre-defined bucket sizes. The GNU allocator has more buckets and therefore wastes less space.
  • Sharing memory between threads is a tricky subject. Lock contention during allocation is an important consideration, so the GNU allocator, for example, will eagerly create extra areas for different threads for a given bucket size if it ever encounters lock contention during allocation.
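
As an illustration of the first point, an allocator might route any request above some threshold straight to an anonymous mmap and hand the whole mapping back to the kernel on free. The 128 KiB threshold below is only an assumed example value, not one taken from a particular allocator:

    #include <stddef.h>
    #include <sys/mman.h>

    #define MMAP_THRESHOLD (128 * 1024)   /* assumed cut-off for "large" requests */

    /* Called for requests with n >= MMAP_THRESHOLD: large allocations get their
       own anonymous mapping from the kernel. */
    static void *large_alloc(size_t n)
    {
        void *p = mmap(NULL, n, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        return p == MAP_FAILED ? NULL : p;
    }

    /* Freeing unmaps the whole region; the length has to be remembered somewhere
       (real allocators stash it in a header just before the user data). */
    static void large_free(void *p, size_t n)
    {
        munmap(p, n);
    }

This way the wasted space for a huge request is at most one page of rounding, and the memory is returned to the operating system immediately on free.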