Introduction
When I first encountered malloc and free in C, they seemed like magic. These functions allowed dynamic memory allocation, but how did they really work? Where was the metadata stored, and what kind of algorithms ensured efficient allocation and deallocation? The more I learned, the more I realized these concepts relied on complex structures and algorithms like linked lists, free lists, and memory pools. This curiosity sparked a question: could I build a simpler memory management system, one that didn't rely on advanced algorithms but still worked with different kinds of memory, such as the heap or the stack? The constraints? I have exactly MEMORY_SZ bytes available, and I need to handle alloc(size) and dealloc(ptr) calls while keeping all metadata within those MEMORY_SZ bytes. Here’s the story of how I designed and implemented a minimalist memory allocator.
Understanding the Challenge
Memory allocators often store metadata outside the allocated space, but in this exercise, the metadata must fit inside the MEMORY_SZ bytes.
Additionally:
- Memory is divided into chunks, with each chunk being the smallest allocatable unit (uint64_t in our case).
- Metadata must track the status of every chunk, distinguishing between FREE and USED states.
- Metadata overhead should be minimal, leaving most of the space for user data.
To meet these constraints, I adopted a bitmap-like structure to track chunk statuses. The allocator maps each chunk to a corresponding status, where:
- FREE indicates an available chunk.
- USED1, USED2, and USED3 identify different allocations, so that neighboring allocations can be told apart at deallocation time. (This is inspired by the Four Color Theorem, which states that any planar map can be colored with at most four colors so that no two adjacent regions share a color. Memory is one-dimensional, so each allocation has at most two neighbors, and three USED colors plus FREE are enough to keep adjacent allocations distinguishable.)
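For concreteness, here is one way those four states could be encoded. The exact values are an assumption on my part; only the fact that each fits in 2 bits matters.

/* 2-bit chunk statuses (values chosen for illustration) */
#define FREE  0  // chunk is available
#define USED1 1  // chunk belongs to an allocation of "color" 1
#define USED2 2  // chunk belongs to an allocation of "color" 2
#define USED3 3  // chunk belongs to an allocation of "color" 3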
Designing the Data Structure
The core of the allocator is the Memory structure:
typedef struct Memory {
    uint8_t status[NB_STATUS];    // Metadata: chunk statuses
#if PADDING_SZ > 0
    uint8_t padding[PADDING_SZ];  // Ensure alignment
#endif
    uint64_t data[NB_CHUNK];      // User data
} Memory;
Key points:
- Each chunk's status is stored using 2 bits (FREE, USED1, USED2, USED3), enabling efficient metadata management.
- status is compact, with 1 byte representing the statuses of 4 chunks.
- Padding ensures alignment and that the structure size matches MEMORY_SZ.
#define ALIGN_SZ 8  // alignment required for the uint64_t data array
#define STATUS_SZ 1  // size in bytes of one status byte
#define CHUNK_SZ 8  // size in bytes of one chunk (sizeof(uint64_t))
#define NB_STATUS_OCTET 4  // number of 2-bit statuses packed into one status byte
#define COUPLE_STATUS_CHUNK (STATUS_SZ + (CHUNK_SZ * NB_STATUS_OCTET))  // one status byte plus its 4 chunks: 33 bytes
#define PADDING_MAX (MEMORY_SZ % COUPLE_STATUS_CHUNK)  // bytes left over after whole groups
#define USEFUL_SZ (MEMORY_SZ - PADDING_MAX)  // bytes actually split into status/chunk groups
#define NB_STATUS (USEFUL_SZ / COUPLE_STATUS_CHUNK)  // number of status bytes
#define NB_CHUNK (NB_STATUS * NB_STATUS_OCTET)  // number of allocatable chunks
#define PADDING_SZ (PADDING_MAX - (NB_STATUS % ALIGN_SZ))  // explicit padding placed before data
These calculations carve MEMORY_SZ into as many groups of one status byte plus four chunks as possible, so the metadata and the user data it tracks both fit within the available space.
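For example, picking MEMORY_SZ = 4096 purely for illustration: COUPLE_STATUS_CHUNK is 33, so PADDING_MAX = 4096 % 33 = 4, USEFUL_SZ = 4092, NB_STATUS = 124, NB_CHUNK = 496, and PADDING_SZ = 4 - (124 % 8) = 0. In other words, 124 bytes of metadata manage 496 chunks (3,968 bytes) of user data, an overhead of roughly 3%.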
Implementation
Allocation
The alloc function searches status for a run of consecutive free chunks large enough to hold the requested size. Once found, the corresponding status entries are updated, and a pointer to the allocated space in data is returned. If no such run is available, NULL is returned. (A sketch of this search loop follows the helper list below.)
To manipulate the status array, helper functions are used:
- st_access(uint8_t status, int i) to access 2-bit values.
- set(uint8_t value, uint8_t status, int i) to update 2-bit values.
- status_memset to initialize ranges of status.
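The post doesn't show their bodies, so here is a minimal sketch of how they might look, assuming slot i (0 to 3) occupies bits 2*i and 2*i+1 of a status byte and reusing the FREE/USED values sketched earlier; the status_memset signature is my guess:

#include <stddef.h>
#include <stdint.h>

// Read the 2-bit status of slot i (0-3) inside one status byte.
static uint8_t st_access(uint8_t status, int i) {
    return (status >> (2 * i)) & 0x3;
}

// Return the status byte with slot i (0-3) overwritten by value.
static uint8_t set(uint8_t value, uint8_t status, int i) {
    status &= (uint8_t)~(0x3 << (2 * i));           // clear the slot
    status |= (uint8_t)((value & 0x3) << (2 * i));  // write the new value
    return status;
}

// Mark n consecutive chunks, starting at chunk index first, with value.
static void status_memset(uint8_t *status, uint8_t value, size_t first, size_t n) {
    for (size_t c = first; c < first + n; c++)
        status[c / 4] = set(value, status[c / 4], (int)(c % 4));
}

With those helpers, the search loop in alloc could look roughly like this. It is a sketch under several assumptions: a single global Memory instance, a first-fit scan, and a USED color chosen to differ from the chunks bordering the run so adjacent allocations stay distinguishable. The Memory struct and size macros from above are assumed to be in scope.

static Memory mem;  // zero-initialized, so every status starts as FREE under the encoding above

void *alloc(uint64_t size) {
    uint64_t need = (size + CHUNK_SZ - 1) / CHUNK_SZ;  // round bytes up to whole chunks
    if (need == 0 || need > NB_CHUNK)
        return NULL;
    for (uint64_t start = 0; start + need <= NB_CHUNK; start++) {
        uint64_t run = 0;
        while (run < need &&
               st_access(mem.status[(start + run) / 4], (int)((start + run) % 4)) == FREE)
            run++;
        if (run < need) {   // hit a used chunk: resume the scan just after it
            start += run;
            continue;
        }
        // Pick a USED color that differs from both neighbors of the run.
        uint8_t left  = start > 0 ? st_access(mem.status[(start - 1) / 4], (int)((start - 1) % 4)) : FREE;
        uint64_t end  = start + need;
        uint8_t right = end < NB_CHUNK ? st_access(mem.status[end / 4], (int)(end % 4)) : FREE;
        uint8_t color = USED1;
        while (color == left || color == right)
            color++;        // USED1..USED3: one of the three is always available
        status_memset(mem.status, color, start, need);
        return &mem.data[start];
    }
    return NULL;
}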
Deallocation
Deallocating a pointer involves:
- Mapping the pointer back to its corresponding chunk in status.
- Identifying the USED type (e.g., USED1).
- Scanning status forward from that chunk and replacing the contiguous run of matching USED statuses with FREE.
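A sketch of what that could look like, reusing the helpers and the global mem from the allocation sketch; the details (in particular that only the run starting at ptr is cleared) are my reading of the post, not its actual code:

void dealloc(void *ptr) {
    if (ptr == NULL)
        return;
    uint64_t chunk = (uint64_t)((uint64_t *)ptr - mem.data);             // map pointer to chunk index
    uint8_t color = st_access(mem.status[chunk / 4], (int)(chunk % 4));  // identify the USED type
    if (color == FREE)
        return;                                                          // already free (or bad pointer)
    // Walk forward while the chunks carry the same color and mark them FREE.
    for (uint64_t c = chunk; c < NB_CHUNK; c++) {
        if (st_access(mem.status[c / 4], (int)(c % 4)) != color)
            break;
        mem.status[c / 4] = set(FREE, mem.status[c / 4], (int)(c % 4));
    }
}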
Testing the Allocator
To validate the implementation, I wrote tests like test_alloc_alloc_alloc_free_middle.
Here’s the sequence:
- Allocate three blocks of memory.
- Free the middle block.
- Allocate a new block in the freed space.
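The exact test code isn't reproduced here, but a hypothetical version could look like this; the sizes, the first-fit assumption, and the prototypes are illustrative:

#include <assert.h>
#include <stdint.h>

void *alloc(uint64_t size);   // prototypes as described above
void dealloc(void *ptr);

static void test_alloc_alloc_alloc_free_middle(void) {
    void *a = alloc(32);      // three back-to-back blocks
    void *b = alloc(32);
    void *c = alloc(32);
    assert(a != NULL && b != NULL && c != NULL);

    dealloc(b);               // free the middle block

    void *d = alloc(32);      // with a first-fit scan, the new block
    assert(d == b);           // should land in the freed gap
    dealloc(a);
    dealloc(c);
    dealloc(d);
}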
Fun facts
- Partial Deallocation: The allocator supports partial freeing. If an allocated array is no longer fully needed, the unused tail can be freed explicitly by passing a pointer to its first unneeded chunk to dealloc (sketched after this list).
- OS Independence: The Memory structure can reside on the stack, making this allocator usable in environments without an operating system.
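For example, hypothetical usage of partial freeing, assuming the sketched dealloc above that clears the run starting at the given chunk:

uint64_t *arr = alloc(10 * sizeof(uint64_t));  // ten chunks
/* ... it turns out only the first four are needed ... */
dealloc(arr + 4);                              // give the last six chunks back
/* arr[0..3] stays valid and can later be released with dealloc(arr) */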
Conclusion
Designing this allocator was a rewarding journey into the fundamentals of memory management. It’s far from a production-ready solution, but as a proof of concept, it demonstrates how simple structures and algorithms can achieve efficient memory management. This experiment underscores the value of research and development in computer science. Even small projects like this can deepen your understanding of how systems work under the hood—and they’re just plain fun to build.