I am looking for an algorithm that, based on the previous behavior of a system, predicts its future behavior.
I'm building a parallel memory allocator that has a public list of empty blocks. Each thread, when it needs memory, can take blocks from this list and allocate from them. The blocks are grouped into bins by allocation size (8, 12, 16 bytes... up to about 4 KB). When a block becomes empty, it is returned to the global list (with a synchronization overhead, of course). If a bin has no empty block, the thread tries to "steal" locations from blocks in other bins before taking a new empty one.
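To make the setup concrete, here is a minimal sketch of the structure described above: a table of size classes and a mutex-protected global list of empty blocks per bin. All names (`GlobalBlockList`, `size_class_for`, `Block`) are made up for illustration; this is not my actual code.

```cpp
#include <cstddef>
#include <mutex>
#include <vector>

// Illustrative size classes, 8 bytes up to 4 KB.
constexpr std::size_t kClassSizes[] = {8, 12, 16, 24, 32, 48, 64, 96, 128,
                                       192, 256, 512, 1024, 2048, 4096};
constexpr std::size_t kNumClasses =
    sizeof(kClassSizes) / sizeof(kClassSizes[0]);

// Map an allocation size to the smallest bin that fits it.
std::size_t size_class_for(std::size_t n) {
    for (std::size_t i = 0; i < kNumClasses; ++i)
        if (kClassSizes[i] >= n) return i;
    return kNumClasses;  // too large for the binned path
}

struct Block;  // a fixed-size slab carved into slots of one size class

// One global list of empty blocks per bin. Taking or returning a block
// pays the synchronization cost mentioned in the question.
struct GlobalBlockList {
    std::mutex lock;
    std::vector<Block*> empty_blocks;

    Block* take() {
        std::lock_guard<std::mutex> g(lock);  // the sync overhead
        if (empty_blocks.empty()) return nullptr;
        Block* b = empty_blocks.back();
        empty_blocks.pop_back();
        return b;
    }

    void give_back(Block* b) {
        std::lock_guard<std::mutex> g(lock);
        empty_blocks.push_back(b);
    }
};
```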
Now there are two situations that concern me:
- A thread may allocate, let's say, enough memory to fill 5 blocks. After a while it deallocates all of it (and the blocks go back to the global list). Immediately afterwards it allocates 5 blocks again, deallocates them, and so on. In this case it would be wise to keep those 5 blocks the whole time instead of returning them to the global list, as that avoids the synchronization overhead.
- If the allocator "steals" a location, it uses memory that would otherwise be wasted. But there are cases where this actually increases overall memory use.
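For the first bullet, the "keep the blocks" idea could look something like this: each thread retains up to `retain_limit` empty blocks per bin locally, so the allocate-5 / free-5 churn never touches the global lock. Again, `ThreadLocalBin` and its members are hypothetical names, just a sketch under the assumption that a per-thread cache is acceptable in the design.

```cpp
#include <cstddef>
#include <vector>

struct Block {};  // stand-in for a real block

// Per-thread, per-bin cache of empty blocks.
struct ThreadLocalBin {
    std::size_t retain_limit;      // e.g. a learned "keep at least N"
    std::vector<Block*> retained;  // empty blocks kept locally

    // Returns true if the block was cached locally (no sync needed);
    // false means the caller must push it to the global list instead.
    bool on_block_emptied(Block* b) {
        if (retained.size() < retain_limit) {
            retained.push_back(b);
            return true;
        }
        return false;
    }

    // Reuse a locally retained block if one is available.
    Block* try_take() {
        if (retained.empty()) return nullptr;
        Block* b = retained.back();
        retained.pop_back();
        return b;
    }
};
```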
I want to make a system that can observe these kinds of patterns and save the results somewhere, so that the next time the allocator knows what is wise to do and what is not (keep at least N blocks in bin X, don't "steal" from bin Y).
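One simple way to produce the "keep at least N blocks in bin X" number without any machine learning would be to track, per bin and per thread, a slowly decaying high-water mark of how many blocks were in use; the decayed peak is a direct estimate of how many blocks are worth retaining. This is only a hypothetical sketch of that idea, and the decay constant (0.99 per free) is an arbitrary assumption.

```cpp
#include <algorithm>
#include <cstddef>

// Per-bin usage statistics for one thread.
struct BinStats {
    std::size_t in_use = 0;   // blocks currently held by this thread
    double high_water = 0.0;  // decayed peak of in_use

    void on_block_taken() {
        ++in_use;
        high_water = std::max(high_water, static_cast<double>(in_use));
    }

    void on_block_freed() {
        --in_use;
        high_water *= 0.99;  // slowly forget old peaks
    }

    // How many empty blocks it is worth retaining locally: the decayed
    // peak, rounded to the nearest whole block.
    std::size_t retain_hint() const {
        return static_cast<std::size_t>(high_water + 0.5);
    }
};
```

The hints could then be persisted between runs (e.g. one integer per bin) so the allocator starts with a sensible retain count instead of learning it from scratch each time.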
Are genetic algorithms of any use here? I don't know anything about them, but I've heard they are good at machine learning, or something along those lines.
Thanks in advance!