views: 315
answers: 7

I have a malloc in C that allocates 26901^2 * sizeof(double) bytes.

This got me thinking: what is the largest value that can be passed to malloc here?

Also, would I have any problems defining a macro to access this 2D array?

 #define DN(i,j) ((int)i * ny + (int)j)

Because this doesn't seem to be working for me, or at least I'm not sure it is. I can't figure out how to make TotalView dive on a macro to tell me what A[DN(indx,jndx)] is actually looking at.

+8  A: 

That depends on your malloc implementation!

According to Wikipedia, "Since the v2.3 release, the GNU C library (glibc) uses a modified ptmalloc2, which itself is based on dlmalloc v2.7.0." dlmalloc refers to Doug Lea's malloc implementation. The important thing to note about this implementation is that large mallocs are served through the operating system's memory mapping functionality (anonymous mmap), so these blocks can be quite large indeed, without many problems finding a contiguous block.
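
For instance, here is a rough sketch of that behaviour (the exact threshold and addresses depend on your glibc version and system; a small request typically comes from the brk heap, while a request well above the mmap threshold typically gets its own anonymous mapping, so the two pointers tend to be far apart):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    void *small_block = malloc(64);                /* small request: typically from the brk heap */
    void *large_block = malloc(16 * 1024 * 1024);  /* well above the mmap threshold: typically an anonymous mapping */
    printf("small block at %p\n", small_block);
    printf("large block at %p\n", large_block);    /* usually far away from the small one */
    free(large_block);
    free(small_block);
    return 0;
}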

bowenl2
+1  A: 

The size parameter in a call to malloc is of type size_t, whose range varies by implementation. See this question for more.
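
A trivial way to check what your platform gives you (SIZE_MAX comes from <stdint.h>):

#include <stdio.h>
#include <stdint.h>   /* SIZE_MAX */

int main(void)
{
    printf("sizeof(size_t) = %zu bytes\n", sizeof(size_t));
    printf("SIZE_MAX       = %zu\n", (size_t)SIZE_MAX);
    return 0;
}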

Will
+5  A: 

The malloc question is answered (depends on OS, which you don't specify), so about that define:

#define DN(i,j) ((int)i * ny + (int)j)

is not quite safe, because someone might write DN(a+b,c), which expands to

((int)a+b * ny + (int)c)

which is probably not what you wanted. So put a lot of parentheses in there:

#define DN(i,j) ((int)(i) * ny + (int)(j))

To see what index DN(indx,jndx) evaluates to, just printf("%d\n", DN(indx,jndx));
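
A small demonstration of the difference (the macro names and the values of ny, a, b, c here are just made up; the two variants are the unparenthesized and parenthesized versions above):

#include <stdio.h>

#define DN_UNSAFE(i,j) ((int)i * ny + (int)j)
#define DN_SAFE(i,j)   ((int)(i) * ny + (int)(j))

int main(void)
{
    int ny = 100, a = 2, b = 3, c = 4;
    /* DN_UNSAFE(a+b,c) expands to ((int)a + b * ny + (int)c) = 2 + 300 + 4 = 306 */
    printf("unsafe: %d\n", DN_UNSAFE(a + b, c));
    /* DN_SAFE(a+b,c) expands to ((int)(a+b) * ny + (int)(c)) = 5 * 100 + 4 = 504 */
    printf("safe:   %d\n", DN_SAFE(a + b, c));
    return 0;
}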

mvds
Good answer. Also, the casts the OP had look unnecessary and possibly even dangerous.
Zack
If a cast is needed (and it's a big "if", really), they should be casts to an unsigned type, unless additional trickery is used to make the array support negative indices. `size_t` is a good candidate.
RBerteig
+1  A: 
Sinan Ünür
Your approach assumes that every freed block of memory can be reused immediately. Some malloc implementations might use pooling or cache allocations, which defeats your approach. Also, the repeated malloc+free may cause fragmentation, resulting in a lower than expected value. I think it's better to do it the other way around: try to allocate a huge amount (the maximum value of size_t) and then decrease it until the malloc succeeds.
Patrick
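
A minimal sketch of that top-down idea (purely illustrative; overcommit and allocator bookkeeping will still skew the number):

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>   /* SIZE_MAX */

int main(void)
{
    size_t size = SIZE_MAX;
    void *p = NULL;
    /* Start from the largest possible request and shrink by ~1% until malloc succeeds. */
    while (size > 0 && (p = malloc(size)) == NULL)
        size -= size / 100 + 1;
    if (p) {
        printf("largest successful request: %zu bytes\n", size);
        free(p);
    }
    return 0;
}
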
A: 

This got me thinking: what is the largest value that can be passed to malloc here?

26'901^2 = 723'663'801 elements. If your double is 8 bytes, that is about 5.8 GB. I see no problem at all allocating that much memory, and my apps routinely allocate (on 64-bit systems) much more. (The biggest memory consumption I have ever seen was 420GB (on a Solaris 10 NUMA system with 640GB RAM), with the largest contiguous block being ~24GB.)

The largest value is hard to pin down since it is platform dependent: as on 32-bit systems, it depends on the user-space / kernel-space split. As things stand at the moment, I think one would first hit the limit of the actual physical RAM, before reaching the limit of what libc can allocate. (And the kernel doesn't care; it just expands virtual memory, often without even considering whether there is sufficient RAM to back it.)
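
To check the arithmetic above (this assumes a 64-bit size_t; on a 32-bit system the multiplication would wrap):

#include <stdio.h>
#include <stddef.h>

int main(void)
{
    size_t n = 26901;
    size_t bytes = n * n * sizeof(double);   /* 723'663'801 elements of 8 bytes each */
    printf("%zu bytes (about %.1f GB)\n", bytes, bytes / 1e9);   /* 5789310408 bytes, about 5.8 GB */
    return 0;
}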

Dummy00001
+1  A: 

The largest memory block you can ask malloc() for is the largest size_t value - this is SIZE_MAX from <stdint.h>. The largest amount you can successfully request is obviously dependent on the operating system and the configuration of the individual machine.

Your macro is not safe. It performs the index calculation in int, which is only required to have a range of up to 32767. Any value higher than that can cause signed overflow, which results in undefined behaviour. You are probably best off doing the calculation as a size_t, since that type must be able to hold any valid array index:

#define DN(i, j) ((size_t)(i) * ny + (size_t)(j))

(Although note that if you supply negative values for i or j, you'll get an index far out of bounds).
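
For example, with the dimensions from the question (ny = 26901 is assumed here, along with a 64-bit size_t; the allocation itself may well fail on a memory-constrained machine):

#include <stdio.h>
#include <stdlib.h>

#define DN(i, j) ((size_t)(i) * ny + (size_t)(j))

int main(void)
{
    size_t ny = 26901;
    double *A = malloc(26901 * ny * sizeof *A);   /* ~5.8 GB - may well fail */
    if (A == NULL)
        return 1;
    A[DN(26900, 26900)] = 1.0;                    /* last element, index 723'663'800 */
    printf("last index = %zu\n", DN(26900, 26900));
    free(A);
    return 0;
}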

caf
+3  A: 

Observations

Assuming a typical allocator, such as the one glibc uses, there are some observations:

  1. Whether or not the memory is actually used, the region must be reserved contiguously in virtual memory.
  2. The largest free contiguous region depends on the memory usage of existing mappings, and on the availability of those regions to malloc.
  3. Mapping practices depend on the architecture and OS. Furthermore, the underlying system calls used to obtain memory regions are affected by these practices (such as malloc calling through to mmap to acquire pages).

Experiment

Here's a simple program to allocate the largest possible block (compile with gcc largest_malloc_size.c -Wall -O2):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static void *malloc_wrap(size_t size)
{
    void *p = malloc(size);
    if (p) {
        /* Cast to char * for the end-of-block address: arithmetic on void * is a GCC extension. */
        printf("Allocated %zu bytes from %p to %p\n", size, p, (void *)((char *)p + size));
    }
    else {
        printf("Failed to allocate %zu bytes\n", size);
    }
    return p;
}

int main(void)
{
    size_t step = 0x1000000;     /* start by growing in 16 MiB increments */
    size_t size = step;
    size_t best = 0;
    while (step > 0)
    {
        void *p = malloc_wrap(size);
        if (p) {
            free(p);
            best = size;         /* remember the largest size that worked */
        }
        else {
            step /= 0x10;        /* refine the step whenever a request fails */
        }
        size += step;
    }
    void *p = malloc_wrap(best); /* reallocate the best size and hold on to it */
    if (p) {
        pause();                 /* keep the process alive so /proc/<pid>/maps can be inspected */
        return 0;
    }
    else {
        return 1;
    }
}

Running the above program (./a.out) on my Linux stanley 2.6.32-24-generic-pae #39-Ubuntu SMP Wed Jul 28 07:39:26 UTC 2010 i686 GNU/Linux machine gives this result:

<snip>
Allocated 2919235584 bytes from 0x9763008 to 0xb7763008
Allocated 2936012800 bytes from 0x8763008 to 0xb7763008
Failed to allocate 2952790016 bytes
Failed to allocate 2953838592 bytes
Failed to allocate 2953904128 bytes
Failed to allocate 2953908224 bytes
Allocated 2936012800 bytes from 0x85ff008 to 0xb75ff008

This is an allocation of exactly 2800MiB. Observing the relevant mapping from /proc/[number]/maps:

<snip>
0804a000-0804b000 rw-p 00001000 08:07 3413394    /home/matt/anacrolix/public/stackoverflow/a.out
085ff000-b7600000 rw-p 00000000 00:00 0          [heap]
b7600000-b7621000 rw-p 00000000 00:00 0 
b7621000-b7700000 ---p 00000000 00:00 0 
b7764000-b7765000 rw-p 00000000 00:00 0 
b7765000-b78b8000 r-xp 00000000 08:08 916041     /lib/tls/i686/cmov/libc-2.11.1.so
<snip>
bfc07000-bfc1c000 rw-p 00000000 00:00 0          [stack]

Conclusion

It appears the heap has been expanded in the area between the program code and data and the shared library mappings, which sit snugly against the user/kernel memory space boundary (obviously a 3G/1G split on this system).

This result suggests that the maximum allocatable space using malloc is roughly equal to:

  1. The user space region (3GB in the example)
  2. Less the offset to the start of the heap (program code and data)
  3. Less the space reserved for the main thread stack
  4. Less the space occupied by all the mapped in shared libraries
  5. Finally, the largest contiguous region that can be found by the underlying system call within the region available to the heap (which may be fragmented by other mappings)
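
For completeness, the same mapping information can also be read from inside the process itself rather than from a separate shell (Linux-specific); a minimal sketch of a helper that could be called just before pause() in the program above:

#include <stdio.h>

/* Print this process's memory map, similar to the /proc/[number]/maps excerpt shown above. */
static void dump_maps(void)
{
    FILE *f = fopen("/proc/self/maps", "r");
    int c;
    if (!f)
        return;
    while ((c = fgetc(f)) != EOF)
        putchar(c);
    fclose(f);
}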

Notes

With respect to glibc and Linux implementations, the following manual snippets are of great interest:

malloc

   Normally, malloc() allocates memory from the heap, and adjusts the size
   of the heap as required, using sbrk(2).  When allocating blocks of
   memory larger than MMAP_THRESHOLD bytes, the glibc malloc()
   implementation allocates the memory as a private anonymous mapping
   using mmap(2).  MMAP_THRESHOLD is 128 kB by default, but is adjustable
   using mallopt(3).
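
For reference, the threshold can be changed at run time with mallopt(3); a minimal glibc-specific sketch (the 1 MiB value here is arbitrary):

#include <malloc.h>   /* mallopt(), M_MMAP_THRESHOLD - glibc-specific */
#include <stdlib.h>

int main(void)
{
    /* Ask glibc to keep requests below 1 MiB on the brk heap and to use
       anonymous mmap only for larger blocks. */
    mallopt(M_MMAP_THRESHOLD, 1024 * 1024);

    void *p = malloc(4 * 1024 * 1024);   /* above the new threshold, so typically an anonymous mapping */
    free(p);
    return 0;
}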

mmap

   MAP_ANONYMOUS
          The mapping is not backed by any file; its contents are
          initialized to zero.

Afterword

This test was done on an x86 kernel. I'd expect similar results from an x86_64 kernel, albeit with vastly larger memory regions returned. Other operating systems vary in their placement of mappings and in their handling of large mallocs, so results could be considerably different.

Matt Joiner