I work on Linux for an ARM processor in a cable modem. As the job demands, I have written a tool that sends/storms customized UDP packets using raw sockets. I build each packet from scratch so that we have the flexibility to play with different options. This tool is mainly for stress-testing routers.

The details are as follows.

I have multiple interfaces created, and each interface obtains its IP address using DHCP. This is done to make the modem behave as virtual customer premises equipment (vCPE).

When the system comes up, I start as many of these processes as requested. Each process sends packets continuously: process 0 sends packets using interface 0, and so on. Each sender process can be reconfigured at run time (UDP parameters and other options can be changed), which is why I decided to use separate processes.

I start these processes using fork and exec from the modem's provisioning process.
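For context, the spawning path is essentially the following (a minimal sketch; "/bin/sender" and the interface-number argument are placeholders for the real binary and its options):

```c
#include <stdio.h>
#include <unistd.h>

/* Minimal sketch of the spawn path; "/bin/sender" and the
 * interface-number argument are placeholders. */
static pid_t spawn_sender(int ifindex)
{
    char ifarg[16];
    snprintf(ifarg, sizeof ifarg, "%d", ifindex);

    pid_t pid = fork();
    if (pid == 0) {
        /* Child: replace the (large) parent image with the sender binary. */
        execl("/bin/sender", "sender", ifarg, (char *)NULL);
        _exit(127);  /* reached only if exec fails */
    }
    return pid;      /* parent: child's pid, or -1 if fork failed */
}
```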

The problem is that each process takes up a lot of memory; starting just three such processes causes the system to crash and reboot.

I have tried the following:

  1. I had always assumed that pushing more code into shared libraries would help. But to my surprise, moving many functions into a shared library and keeping minimal code in each process made no difference.

  2. I also removed all large arrays and allocated them on the heap instead. That made no difference either. Perhaps, because the processes run continuously, it does not matter whether the memory is on the stack or the heap?

  3. I suspect that the process from which I call fork is huge, and that is why the processes I create end up huge as well. I am not sure how else to go about it. Say process A is huge and I start process B by fork and exec: B inherits A's memory area. Would having A start a process C, which in turn starts B, also not help, since C still inherits from A? I tried vfork as an alternative, but it did not help either, and I wonder why.

I would appreciate any tips that could help me reduce the memory used by each independent child process.

Please let me know if you need more details or clarification.

Thank you.

+1  A: 

Given that this is a test tool, the most efficient thing to do is to add more memory to the test machine.

Failing that:

  1. How are you measuring memory usage? Some methods don't give accurate results.
  2. Check that you don't have any memory leaks, e.g. with Valgrind on Linux x86.
  3. You could try running the different testers in a single process, as different threads, or even multiplexed in a single thread - since the network should be the limiting factor anyway (see the sketch after this list).
  4. exec() will shrink the process's memory size, as the new executable gets a fresh memory map.
  5. If you can't add physical memory, then maybe you can add swap, perhaps just for testing?
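
As a rough illustration of point 3, here is a minimal sketch of per-interface sender threads. It uses a plain UDP socket, and the destination and payload are placeholders; your raw-socket code would slot in the same way:

```c
#include <arpa/inet.h>
#include <pthread.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* One sender thread per interface instead of one process per interface.
 * Destination and payload are placeholders. */
struct sender_cfg {
    struct sockaddr_in dst;   /* destination address, filled by the caller */
    const char *payload;      /* packet payload */
};

static void *sender_thread(void *arg)
{
    struct sender_cfg *cfg = arg;
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    if (s < 0)
        return NULL;
    for (;;)                  /* storm packets until the thread is cancelled */
        sendto(s, cfg->payload, strlen(cfg->payload), 0,
               (struct sockaddr *)&cfg->dst, sizeof cfg->dst);
    return NULL;              /* not reached */
}
```

Start one thread per interface with pthread_create(); since all senders share one address space, the per-sender overhead is roughly one thread stack, which can be shrunk further with pthread_attr_setstacksize().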
Douglas Leeder
+1  A: 

Not technically answering your question, but providing a couple of alternative solutions:

If you are using Linux, have you considered pktgen? It is a flexible tool for sending UDP packets from the kernel as fast as the interface allows, which is much faster than any userspace tool.
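
For illustration, pktgen is driven by writing commands to its control files under /proc/net/pktgen. A minimal sketch in C (the device name, packet count, size, and destination are example values; the full command set is described in the kernel's Documentation/networking/pktgen.txt):

```c
#include <stdio.h>

/* Write one pktgen command to a /proc/net/pktgen control file. */
static int pg_write(const char *path, const char *cmd)
{
    FILE *f = fopen(path, "w");
    if (!f)
        return -1;
    fprintf(f, "%s\n", cmd);
    return fclose(f);
}

int main(void)
{
    /* Bind eth0 to pktgen's first kernel thread, configure, then start.
     * Device, count, size, and destination are example values. */
    pg_write("/proc/net/pktgen/kpktgend_0", "rem_device_all");
    pg_write("/proc/net/pktgen/kpktgend_0", "add_device eth0");
    pg_write("/proc/net/pktgen/eth0", "count 100000");
    pg_write("/proc/net/pktgen/eth0", "pkt_size 60");
    pg_write("/proc/net/pktgen/eth0", "dst 10.0.0.1");
    pg_write("/proc/net/pktgen/pgctrl", "start");  /* blocks while sending */
    return 0;
}
```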

Oh, and a shameless plug: I have written a multi-threaded network testing tool that can be used to spam the network with UDP packets. It can operate in multi-process mode (using fork) or multi-thread mode (using pthreads). The pthread mode might use less RAM, so it may be the better fit for you. If nothing else, the source might be worth a look; I've spent many years improving this code, and it has been able to generate enough packets to saturate a 10 Gbps interface.

bramp
A: 

What could be happening is that process A occupies a significant amount of RAM + swap (if any). When you call fork() from this process, the kernel must reserve enough RAM and swap for the child to have its own copy (copy-on-write, actually) of the parent's writable private memory, namely its stack and heap. When the child calls exec(), that memory is no longer needed, and the child process gets its own, smaller private working set.

So, the first thing to make sure of is that you never have more than one process at a time in the state between fork() and exec(). In that state, the child process must have a duplicate of its parent's virtual memory space.
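
One way to enforce that (a sketch, assuming Linux's pipe2()): give the child a close-on-exec pipe and have the parent block until the child has actually exec'd, so at most one duplicate address space exists at a time:

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

/* Fork/exec, but block the parent until the child's exec() succeeds
 * (or the child dies). The pipe's write end is close-on-exec, so the
 * parent's read() returns 0 as soon as the child's image is replaced. */
static pid_t spawn_serialized(char *const argv[])
{
    int fds[2];
    char buf;

    if (pipe2(fds, O_CLOEXEC) < 0)
        return -1;

    pid_t pid = fork();
    if (pid == 0) {
        close(fds[0]);
        execv(argv[0], argv);
        _exit(127);             /* exec failed */
    }
    close(fds[1]);
    read(fds[0], &buf, 1);      /* EOF when the child execs or exits */
    close(fds[0]);
    return pid;
}
```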

Second, try the overcommit settings, which allow the kernel to reserve more memory than actually exists; see /proc/sys/vm/overcommit_memory and /proc/sys/vm/overcommit_ratio. You can get away with overcommit because your child processes only need the extra VM space until they call exec(), and they should never actually touch the duplicated address space of the parent.

Third, in your parent process you can allocate the largest blocks in shared memory rather than on the stack or heap, which are private. When you fork, those shared memory regions are shared with the child process rather than duplicated copy-on-write.
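
For example (a minimal sketch; the 4 MiB size is just a stand-in for your large buffers):

```c
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    /* Allocate the big block as a shared anonymous mapping: after fork(),
     * parent and child reference the same pages instead of copy-on-write
     * duplicates. The size here is just an example. */
    size_t size = 4 * 1024 * 1024;
    void *big = mmap(NULL, size, PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (big == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    /* ... fill the buffer, then fork(); writes by either process are
       visible to the other and are not duplicated ... */
    return 0;
}
```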

karunski