Think about it like this.
Say you've got a 6MB memory pool with a 1MB buffer allocated in it, and you want to extend that buffer to 2MB with realloc.
After the realloc, the pool looks like either
1MB (just released) + 2MB (buffer) + 3MB (free)
or perhaps
1MB (just released) + 3MB (free) + 2MB (buffer), where the 1 and 3 merge into a single 4MB free block.
If you then want to grow the buffer to just over 3MB (which ought to be OK, since 2 + 3.x is less than 6), the first layout is going to fail: the largest single free block is only 3MB, even though 4MB is free in total.
This is fairly easy to cope with on a desktop machine with gigabytes of real memory and a virtualised address space for each process.
But if you're trying to alloc/realloc 8MB buffers on an embedded machine with anything less than say 64MB of memory pool space (not counting program code, OS code, all other data), then it's likely to get awkward in a hurry.
Is it really necessary to store a page in contiguous memory?
If most pages are, say, less than 10K, a few are 100K, and the odd-balls are 1MB, then I would do something like this:
Code:
#include <stddef.h>  /* size_t */

/* One 10K chunk of a page, linked to the next chunk. */
struct pageFragment {
    struct pageFragment *next;
    size_t usedSize;            /* bytes actually used in fragment[] */
    char fragment[10240];
};

/* A page is a linked list of fragments plus a running total. */
struct page {
    struct pageFragment *head;
    struct pageFragment *tail;  /* so appends don't walk the list */
    size_t totalSize;           /* sum of usedSize over all fragments */
};
Sure, there are more allocations (and more frees later on), but the pool manager has a much better chance of coalescing adjacent blocks into larger free blocks to satisfy future requests.
What you never end up doing is trying to grow an x MB buffer into a y MB one and finding that it doesn't fit in any free block.
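As a rough sketch of how those structs might be used (the `pageAppend` and `pageFree` names are my own invention, not from any particular library), appending only ever asks the pool for one fixed-size fragment at a time, so no request can fail for lack of a large contiguous block:

```c
#include <stdlib.h>
#include <string.h>
#include <stddef.h>

#define FRAG_CAPACITY 10240

struct pageFragment {
    struct pageFragment *next;
    size_t usedSize;            /* bytes used in fragment[] */
    char fragment[FRAG_CAPACITY];
};

struct page {
    struct pageFragment *head;
    struct pageFragment *tail;
    size_t totalSize;
};

/* Append data to a page, allocating new 10K fragments as needed.
   Returns 0 on success, -1 if a fragment allocation fails. */
int pageAppend(struct page *p, const char *data, size_t len)
{
    while (len > 0) {
        struct pageFragment *f = p->tail;
        if (f == NULL || f->usedSize == FRAG_CAPACITY) {
            f = calloc(1, sizeof *f);   /* zeroes next and usedSize */
            if (f == NULL)
                return -1;
            if (p->tail)
                p->tail->next = f;
            else
                p->head = f;
            p->tail = f;
        }
        size_t room  = FRAG_CAPACITY - f->usedSize;
        size_t chunk = len < room ? len : room;
        memcpy(f->fragment + f->usedSize, data, chunk);
        f->usedSize  += chunk;
        p->totalSize += chunk;
        data += chunk;
        len  -= chunk;
    }
    return 0;
}

/* Walk the list and free every fragment. */
void pageFree(struct page *p)
{
    struct pageFragment *f = p->head;
    while (f) {
        struct pageFragment *next = f->next;
        free(f);
        f = next;
    }
    p->head = p->tail = NULL;
    p->totalSize = 0;
}
```

Note that every call into the allocator is for the same `sizeof(struct pageFragment)`, which also makes the fragments trivially reusable from a free list if you want to avoid the allocator entirely.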