views: 940

answers: 4

What is the initial heap size typically allotted to a C++ program running on a UNIX-based OS?

How is it decided? Does the g++ compiler have a role to play in this regard at all?

+3  A: 

The heap is extended dynamically by asking the OS for more memory as needed.

It's not determined by the compiler, exactly, but by the library.

It is more typical to fix the size of the heap in dynamic languages with GC. In C and C++, it is a simple matter to ask the OS for more memory, since it is obvious when you need it. As a consequence, the initial heap size matters very little and is just an implementation decision on the part of the allocation library.
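
To illustrate the idea, here is a toy sketch only (real allocators such as glibc's malloc are far more sophisticated and use both brk and mmap): the allocation function hands out memory from a pool and asks the OS for more, via sbrk() here, only when the pool runs dry.

    // Toy allocator: serves requests from a pool and grows the heap with
    // sbrk() only when the pool is exhausted. Illustrative only.
    #include <unistd.h>    // sbrk
    #include <cstddef>
    #include <cstdint>
    #include <cstdio>

    static char*       pool      = nullptr;  // current chunk obtained from the OS
    static std::size_t pool_left = 0;        // bytes still unused in that chunk

    void* toy_alloc(std::size_t n) {
        if (n > pool_left) {
            // Pool exhausted: ask the kernel to extend the heap.
            std::size_t chunk = (n > 65536) ? n : 65536;
            void* p = sbrk(static_cast<std::intptr_t>(chunk));
            if (p == reinterpret_cast<void*>(-1)) return nullptr;  // OS refused
            pool      = static_cast<char*>(p);
            pool_left = chunk;
        }
        void* out = pool;
        pool      += n;
        pool_left -= n;
        return out;
    }

    int main() {
        void* a = toy_alloc(100);  // triggers one sbrk() call
        void* b = toy_alloc(200);  // served from the same chunk, no OS call
        std::printf("%p %p\n", a, b);
    }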

DigitalRoss
Thanks for the answer, DigitalRoss.
Arvind K
@DigitalRoss: Would you please explain "It's not determined by the compiler, exactly, but by the library."? I can understand that it is not determined by the compiler, but how is it determined by the library? Thanks.
pierr
In both C and C++ the heap policy is event driven. The compiler does generate heap allocation calls in C++, though not in C. When the compiler generates a call to an allocator, the matter is in the hands of the library, because an actual function gets called. That function attempts to allocate from the heap (perhaps something has been freed recently), but if that fails it simply calls the OS to get more memory for the process as a whole and adds that additional memory to the heap.
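
For instance (a minimal sketch, assuming a hosted Linux/glibc environment), replacing the global operator new shows that a C++ new expression really just compiles into a call to an ordinary library function:

    // "new int(42)" below compiles into a call to operator new, an ordinary
    // function that the program is even allowed to replace.
    #include <cstdio>
    #include <cstdlib>
    #include <new>

    void* operator new(std::size_t n) {
        std::printf("operator new(%zu) called\n", n);  // the library entry point
        if (void* p = std::malloc(n)) return p;
        throw std::bad_alloc();
    }

    void operator delete(void* p) noexcept { std::free(p); }

    int main() {
        int* x = new int(42);  // compiler-generated call to operator new
        delete x;
    }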
DigitalRoss
+2  A: 

For C++, no matter what the platform, the heap is almost always extended dynamically by asking the OS for more memory as needed. On some embedded platforms, or some very old platforms, this may not be true, but then you probably have a really good idea how much heap you have because of the nature of the environment.

On Unix platforms this is doubly true. Even most Unix embedded platforms work this way.

On platforms that work like this, the library usually doesn't have any kind of internal limit, but instead relies on the OS to tell it that it can't have any more memory. For a variety of reasons, though, this may happen well after you have actually asked for more memory than is available.

On most Unix systems, there is a hard limit on how much total memory a process can have. This limit can be queried with the getrlimit system call; the relevant constant is RLIMIT_AS. This limit governs the total size of the process's virtual address space and so puts an upper bound on the amount of heap space available.
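
For example, a process can query that limit itself (a minimal sketch; RLIM_INFINITY means no limit is set, which is common on desktop systems):

    // Query the address-space limit (RLIMIT_AS) for the current process.
    #include <sys/resource.h>
    #include <cstdio>

    int main() {
        struct rlimit rl;
        if (getrlimit(RLIMIT_AS, &rl) != 0) {
            std::perror("getrlimit");
            return 1;
        }
        if (rl.rlim_cur == RLIM_INFINITY)
            std::printf("RLIMIT_AS: unlimited\n");
        else
            std::printf("RLIMIT_AS: %llu bytes\n",
                        static_cast<unsigned long long>(rl.rlim_cur));
    }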

Unfortunately that limit doesn't directly say how much heap you can use. Memory pages also count against it as a result of mmap calls, to hold the program code itself, and for the process's stack.

Additionally, this limit is frequently set well in excess of the total memory available to the whole system if you add together physical memory and swap space. So in reality your program will frequently run out of memory before this limit is reached.

Lastly, some versions of Unix over-assign pages. They allow you to allocate a massive number of pages, but only actually find memory for those pages when you write to them. This means your program can be killed for running out of memory even if all the memory allocation calls succeed. The rationale for this is the ability to allocate huge arrays which will only ever be partially used.
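
A small experiment shows the effect; whether the huge request succeeds depends on the overcommit mode in effect, so treat this only as a sketch:

    // On a system that overcommits, a very large malloc can succeed even
    // though the machine has nowhere near that much RAM plus swap; the
    // pages are only backed by real memory when they are first written.
    // Assumes a 64-bit system.
    #include <cstdlib>
    #include <cstdio>

    int main() {
        std::size_t huge = std::size_t(1) << 40;  // ask for 1 TiB
        char* p = static_cast<char*>(std::malloc(huge));
        std::printf("malloc(1 TiB) %s\n", p ? "succeeded" : "failed");
        if (p) {
            p[0] = 1;      // touching every page instead could trigger the
            std::free(p);  // OOM killer long before the loop finished
        }
    }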

So, in short, there isn't a typical size, and no good way to find out what the size really is.

Omnifarious
A: 

In short, there is no definite way to configure the heap size. But we do have some ways to influence the heap memory size, as the heap is part of the total available memory.

You can get the total amount of available memory in the system with:

 cat /proc/meminfo  | grep CommitLimit 
 CommitLimit:    498080 kB

This CommitLimit is calculated with the following formula (vm.overcommit_ratio is a percentage): CommitLimit = ('vm.overcommit_ratio' / 100 * Physical RAM) + Swap

Supposing the swap is zero, you can configure the total available memory by setting the overcommit_ratio. You can set the overcommit_ratio with:

sysctl -w vm.overcommit_ratio=60

It is important to note that this limit is only adhered to if strict overcommit accounting is enabled (mode 2 in 'vm.overcommit_memory'). This can be set with:

 sysctl -w vm.overcommit_memory=2

Here is the kernel document that explains this well.

pierr
This is Linux-specific and is not about a particular process's memory, but about how the Linux kernel handles virtual memory across processes. Traditionally, Unix managed the data segment size with ulimit.
Nikolai N Fetissov
A: 

You could try writing a small program with a while(true) loop. After running it, "cat /proc/{pid}/maps" will show its initial heap size.
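
Something like this minimal sketch will do; run it, note the PID it prints, then run "cat /proc/{pid}/maps" and look for the [heap] line:

    // Idle program that stays alive so its memory map can be inspected.
    #include <unistd.h>
    #include <cstdio>

    int main() {
        std::printf("pid: %d\n", static_cast<int>(getpid()));
        while (true)
            sleep(1);  // keep the process around for /proc/{pid}/maps
    }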

EffoStaff Effo