Is it ever OK to *not* use free() on allocated memory?

Nick · Mar 18, 2014 · Viewed 8k times

I'm studying computer engineering, and I have some electronics courses. I heard from two of my professors (of those courses) that it is possible to avoid using the free() function (after malloc(), calloc(), etc.) because the allocated memory spaces likely won't be reused to satisfy other allocations. That is, for example, if you allocate 4 bytes and then release them, you will have 4 bytes of space that likely won't be allocated again: you will have a hole.

I think that's crazy: you can't have a non-toy program that allocates memory on the heap and never releases it. But I don't have the knowledge to explain exactly why it is so important that every malloc() be matched by a free().

So: are there ever circumstances in which it might be appropriate to use malloc() without using free()? And if not, how can I explain this to my professors?
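
For instance, here is a minimal sketch of the failure mode I have in mind (the 4 KiB request size and the endless loop are just illustrative):

    /* Leak on purpose: every iteration allocates and never frees. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        for (;;) {
            char *buf = malloc(4096);   /* 4 KiB per iteration, never freed */
            if (buf == NULL) {          /* allocator has run out of memory  */
                fprintf(stderr, "malloc failed\n");
                return EXIT_FAILURE;
            }
            memset(buf, 1, 4096);       /* touch the memory so it is really committed */
        }
    }

Depending on the OS, malloc() eventually returns NULL or the process is killed outright; either way, a long-running program that leaks like this dies even though each individual allocation is tiny.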

Answer

unwind · Mar 18, 2014

Easy: just read the source of pretty much any half-serious malloc()/free() implementation. By this I mean the actual memory manager that handles the work behind those calls; it might live in the runtime library, in a virtual machine, or in the operating system. Of course, the code is not equally accessible in all cases.

Keeping memory from becoming fragmented, by joining adjacent holes into larger holes, is very common; more serious allocators use more serious techniques to ensure it. A code sketch of this joining follows the diagrams below.

So, let's assume you do three allocations (and, later, three de-allocations) and get blocks laid out in memory in this order:

+-+-+-+
|A|B|C|
+-+-+-+

The sizes of the individual allocations don't matter. Then you free the first and the last one, A and C:

+-+-+-+
| |B| |
+-+-+-+

When you finally free B, you (initially, at least in theory) end up with:

+-+-+-+
| | | |
+-+-+-+

which can be de-fragmented into just

+-+-+-+
|     |
+-+-+-+

i.e. a single larger free block, no fragments left.
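
To make that joining concrete, here is a minimal sketch of coalescing over an address-ordered free list. The struct layout and the names are illustrative, not taken from any real allocator:

    #include <stddef.h>

    typedef struct block {
        size_t        size;   /* payload size of this block in bytes   */
        int           free;   /* nonzero if the block is not in use    */
        struct block *next;   /* next block header, in address order   */
    } block_t;

    /* Walk the list; whenever two neighbouring blocks are both free,
     * merge them into one larger block (the hole-joining above). */
    static void coalesce(block_t *head)
    {
        block_t *b = head;
        while (b && b->next) {
            if (b->free && b->next->free) {
                /* absorb the neighbour: its header and payload become ours */
                b->size += sizeof(block_t) + b->next->size;
                b->next  = b->next->next;
                /* stay on b: the new neighbour might be free as well */
            } else {
                b = b->next;
            }
        }
    }

Production allocators such as dlmalloc get the same effect more cheaply with boundary tags, so a free() can merge with its neighbours in constant time instead of walking a list, but the idea is exactly the one in the diagrams.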
