You could allocate the entire block at once:
int **foo;
foo = malloc(sizeof(int *) * firstCount);
foo[0] = malloc(sizeof(int) * firstCount * secondCount);
for (int i = 1; i < firstCount; i++)
{
    foo[i] = foo[0] + i * secondCount;
}
My suggestion would be to keep what you have.
Sometimes trying to optimize when there is no actual problem can hurt performance. Have you run any benchmarks to test your friend's theory?
malloc-ing one big chunk requires locating contiguous address space, and might even cause malloc to fail where your current method would succeed (address-space fragmentation, etc.).
When I said your current implementation is extensible, I meant it is trivial to resize. Should you allocate a [100][3] and later realize you need a [100][4], all you need is 100 very small reallocs, which will likely not change any addresses. However, if the macro method needs resizing, you must realloc the whole chunk, which may not exist contiguously. Even worse, the data is no longer in the right place to be accessed by the macro because the index math has changed, so you will need a series of expensive memmoves.
To generalize, I think it is important to always code with readability, maintainability, and ease of use in mind - and only optimize away from that after establishing a bottleneck.