Using Dynamic Memory Allocation for Arrays

You use pointers.

Specifically, you declare a pointer, and using a standard C library function call, you ask the operating system to expand the heap so you can store what you need.

Now, the request might be refused, which you will need to handle.

The next question becomes – how do you ask for a 2D array? Well, you ask for an array of pointers, and then allocate a block for each pointer.

As an example, consider this:

int i = 0;
char** words;
words = malloc((num_words)*sizeof(char*));

if ( words == NULL )
{
    /* we have a problem */
    printf("Error: out of memory.\n");
    return;
}

for ( i=0; i<num_words; i++ )
{
    words[i] = malloc((word_size+1)*sizeof(char));
    if ( words[i] == NULL )
    {
        /* problem */
        break;
    }
}

if ( i != num_words )
{
    /* not every row was allocated; clean up the ones that were */
}

This gets you a two-dimensional array where each row words[i] can have a different size, determined at run time, just as the number of words is.

You will need to free() all of the resulting memory by looping over the array when you’re done with it:

for ( i = 0; i < num_words; i++ )
{
    free(words[i]);
}

free(words);

If you don’t, you’ll create a memory leak.

You could also use calloc. The difference is in calling convention and effect – calloc takes a count and an element size, and initialises all of the memory to zero, whereas malloc does not.

If you need to resize at runtime, use realloc – but check its return value before overwriting the old pointer, or a failed resize will leak the original block.


Also important: watch out for the word_size+1 that I have used. Strings in C are zero-terminated, and the terminator takes an extra character which you need to account for. To make sure I remember this, I set word_size to the expected length of the string and leave the +1 for the terminator explicit in the malloc. Then I know the allocated buffer can hold a string of word_size characters. Not doing this is also fine – I just like to account for the zero in an obvious way.

There is also a downside to this approach – I’ve seen it ship as a bug recently. Notice I wrote (word_size+1)*sizeof(type); imagine instead that I had written word_size*sizeof(type)+1. For sizeof(type)=1 these are the same thing, but Windows uses wchar_t very frequently, and in that case you’ll reserve one byte for your final zero rather than two – strings are terminated by a zero element of type type, not by a single zero byte. This means you’ll overrun on both read and write.

Addendum: do it whichever way you like, just watch out for those zero terminators if you’re going to pass the buffer to something that relies on them.