You're right, it's a common problem [Edit: how to do fixed-size allocation, I mean. "malloc is slowing down my application" is less common than you might think].
If your code is too slow and malloc is a plausible culprit, then a simple cell allocator (or "memory pool") might improve things. You can almost certainly find one somewhere, or it's easy to write:
Allocate a large block, and put a singly-linked list node at the start of each 16-byte cell. Link them all together. To allocate, take the head off the list and return it. To free, add the cell to the head of the list. Of course if you try to allocate and the list is empty, then you have to allocate a new large block, divide it into cells, and add them all to the free list.
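Something along those lines might look like this in C (a minimal sketch: the names, the block size, and the fact that blocks are never returned to the system are all just illustrative choices):

```c
#include <stdlib.h>
#include <stddef.h>

#define CELL_SIZE       16
#define CELLS_PER_BLOCK 1024

/* While a cell is free, its first bytes hold the link to the next free cell. */
typedef struct FreeCell { struct FreeCell *next; } FreeCell;

static FreeCell *free_list = NULL;

/* Allocate a large block, carve it into cells, and push them all onto the free list. */
static int add_block(void)
{
    char *block = malloc((size_t)CELL_SIZE * CELLS_PER_BLOCK);
    if (!block) return 0;
    for (size_t i = 0; i < CELLS_PER_BLOCK; i++) {
        FreeCell *cell = (FreeCell *)(block + i * CELL_SIZE);
        cell->next = free_list;
        free_list = cell;
    }
    return 1;
}

void *cell_alloc(void)
{
    if (!free_list && !add_block()) return NULL;
    FreeCell *cell = free_list;   /* take the head off the list */
    free_list = cell->next;
    return cell;
}

void cell_free(void *p)
{
    FreeCell *cell = p;           /* add the cell back at the head of the list */
    cell->next = free_list;
    free_list = cell;
}
```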
You can avoid that big up-front work if you want. When you allocate a big block, just store a pointer to the end of it. To allocate, move your pointer back 16 bytes through the block and return the new value. Unless it was already at the start of the block[*], of course. If that happens, and the free list is also empty, you need a new large block. Free doesn't change - just add the node to the free list.
You have a choice whether to deal out of the block first, falling back to the free list once the block is exhausted, or to check the free list first, dealing off the block only when the list is empty. I don't know which tends to be faster - the good thing about a last-in-first-out free list is that it's cache-friendly, since you're reusing memory that was used recently, so I'd probably try that first.
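Here's a sketch of that lazy variant, checking the free list first (again, the names and global state are my own invention, and this version doesn't remember old blocks, so it can't free them):

```c
#include <stdlib.h>

#define CELL_SIZE       16
#define CELLS_PER_BLOCK 1024

typedef struct FreeCell { struct FreeCell *next; } FreeCell;

static FreeCell *free_list   = NULL;
static char     *block_start = NULL;   /* first cell of the current block */
static char     *block_ptr   = NULL;   /* one past the last cell dealt out */

void *cell_alloc(void)
{
    /* Free list first: a recently freed cell is likely still in cache. */
    if (free_list) {
        FreeCell *cell = free_list;
        free_list = cell->next;
        return cell;
    }
    /* Block exhausted (or no block yet): lazily grab a fresh one. */
    if (block_ptr == block_start) {
        char *fresh = malloc((size_t)CELL_SIZE * CELLS_PER_BLOCK);
        if (!fresh) return NULL;
        block_start = fresh;
        block_ptr   = fresh + (size_t)CELL_SIZE * CELLS_PER_BLOCK;
    }
    block_ptr -= CELL_SIZE;            /* deal the next cell off the block */
    return block_ptr;
}

void cell_free(void *p)
{
    FreeCell *cell = p;
    cell->next = free_list;
    free_list = cell;
}
```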
Note that the list node is not needed while the cell is allocated, so there's essentially zero overhead per cell. Quite aside from speed, this is likely to be an advantage over malloc, or other general-purpose allocators.
Do be aware that dropping the whole allocator is pretty much the only way to release memory back to the system, so users who are planning to allocate a lot of cells, use them, and free them all, should create their own allocator, use it, and then destroy it. That's both for performance (you don't have to free all the cells individually) and to avoid the fragmentation-style effect where a whole block must be kept if any of its cells are in use.

If you can't do this, your memory use will be the high-water mark over the time your program has been running. For some programs that's a problem (for instance a long-running program with occasional big spikes in memory use, on a system where memory is constrained). For others it's absolutely fine (for instance if the number of cells in use increases until very near the end of the program, or fluctuates within a range where you really don't care that you're using more memory than you strictly need). For some it's actively desirable (if you know how much memory you're going to use, you can allocate it all up-front and not have to worry about failures). For that matter, some implementations of malloc have difficulty releasing memory from the process back to the OS anyway.
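To make that create/use/destroy pattern concrete, here's a sketch in which each block carries the small header mentioned in the footnote below, so destroying the pool frees every block in one pass (all the names here are mine, not anything standard):

```c
#include <stdlib.h>

#define CELL_SIZE       16
#define CELLS_PER_BLOCK 1024

typedef struct FreeCell { struct FreeCell *next; } FreeCell;

/* Block header: links the blocks together, padded to one cell so the cells stay aligned. */
typedef union Block {
    union Block *prev;
    char pad[CELL_SIZE];
} Block;

typedef struct Pool {
    FreeCell *free_list;
    Block    *blocks;   /* newest block first */
    char     *ptr;      /* bump pointer into the newest block */
    char     *limit;    /* lowest usable address in the newest block */
} Pool;

Pool *pool_create(void) { return calloc(1, sizeof(Pool)); }

void *pool_alloc(Pool *p)
{
    if (p->free_list) {                    /* reuse a freed cell first */
        FreeCell *cell = p->free_list;
        p->free_list = cell->next;
        return cell;
    }
    if (p->ptr == p->limit) {              /* current block exhausted: chain on a new one */
        Block *b = malloc(sizeof(Block) + (size_t)CELL_SIZE * CELLS_PER_BLOCK);
        if (!b) return NULL;
        b->prev   = p->blocks;
        p->blocks = b;
        p->limit  = (char *)(b + 1);
        p->ptr    = p->limit + (size_t)CELL_SIZE * CELLS_PER_BLOCK;
    }
    p->ptr -= CELL_SIZE;
    return p->ptr;
}

void pool_free(Pool *p, void *cell)
{
    ((FreeCell *)cell)->next = p->free_list;
    p->free_list = (FreeCell *)cell;
}

/* "Dropping the whole allocator": walk the block list and free everything at once. */
void pool_destroy(Pool *p)
{
    for (Block *b = p->blocks; b != NULL; ) {
        Block *prev = b->prev;
        free(b);
        b = prev;
    }
    free(p);
}
```

Usage is then one pool_create(), a run of pool_alloc()/pool_free() calls, and a single pool_destroy() at the end, which is the "free them all" step described above.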
[*] Where "start of the the block" probably means "the start of the block, plus the size of some node used to maintain a list of all blocks, so they can all be freed when the cell allocator is destroyed".