What's the fastest, best way on modern Linux of achieving the same effect as a fork-execve combo from a large process?

My problem is that the process doing the forking is ~500 MByte, and a simple benchmark achieves only about 50 forks/s from it (cf. ~1600 forks/s from a minimally sized process), which is too slow for the intended application.

Some googling turns up vfork as having been invented as the solution to this problem... but also warnings not to use it. Modern Linux seems to have acquired the related clone and posix_spawn calls; are these likely to help? What's the modern replacement for vfork?

I'm using 64-bit Debian Lenny on an i7 (the project could move to Squeeze if posix_spawn would help).

+5  A: 

Did you actually measure how much time forks take? Quoting the page you linked,

Linux never had this problem; because Linux used copy-on-write semantics internally, Linux only copies pages when they changed (actually, there are still some tables that have to be copied; in most circumstances their overhead is not significant)

So the raw number of forks per second doesn't really show how big the overhead will be. You should measure the time actually consumed by forks, and (as generic advice) only by the forks you actually perform, rather than benchmarking maximum fork throughput.

But if you do find that forking a large process is slow, you can spawn a small ancillary process early on, pipe the master process's output to its input, and have it receive commands to exec. The small process will fork and exec those commands on the master's behalf.
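To make the idea concrete, here is a rough sketch of my own (all names and the /bin/sh -c convention are illustrative, not anything from the asker's setup): fork the helper while the parent is still small, then the large parent just writes command lines down a pipe and the tiny helper does the fork/exec.

    /* Sketch of a small exec helper: forked early (while the parent is still
     * small), it reads command lines from a pipe and fork/execs them, so the
     * large parent never has to fork itself. Error handling kept minimal. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    static int start_exec_helper(void)        /* returns the write end of the pipe */
    {
        int fds[2];
        if (pipe(fds) == -1) { perror("pipe"); exit(1); }

        pid_t pid = fork();                   /* cheap: do this before the parent grows */
        if (pid == 0) {                       /* helper process */
            close(fds[1]);
            FILE *in = fdopen(fds[0], "r");
            char line[4096];
            while (fgets(line, sizeof line, in)) {
                line[strcspn(line, "\n")] = '\0';
                if (fork() == 0) {            /* forking the tiny helper is fast */
                    execl("/bin/sh", "sh", "-c", line, (char *)NULL);
                    _exit(127);
                }
                while (waitpid(-1, NULL, WNOHANG) > 0)
                    ;                         /* reap any finished children */
            }
            _exit(0);
        }
        close(fds[0]);
        return fds[1];                        /* the parent writes commands here */
    }

    int main(void)
    {
        int cmd_fd = start_exec_helper();
        /* ... later, from the (by now large) parent, launching is just a write: */
        dprintf(cmd_fd, "ls -l /tmp\n");
        sleep(1);                             /* crude: let the example command run */
        return 0;
    }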

posix_spawn()

This function, as far as I understand, is implemented via fork/exec on desktop systems. However, on embedded systems (particularly those without an MMU on board), processes are spawned via a syscall whose interface is posix_spawn or a similar function. Quoting the informative section of the POSIX standard describing posix_spawn:

  • Swapping is generally too slow for a realtime environment.

  • Dynamic address translation is not available everywhere that POSIX might be useful.

  • Processes are too useful to simply option out of POSIX whenever it must run without address translation or other MMU services.

Thus, POSIX needs process creation and file execution primitives that can be efficiently implemented without address translation or other MMU services.

I don't think that you will benefit from this function on a desktop system if your goal is to minimize time consumption.
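For completeness, calling it is straightforward (standard API, sketched below); whether it buys anything over a hand-rolled fork/exec depends entirely on how the local glibc implements it.

    /* Minimal posix_spawn usage sketch. On desktop glibc this has
     * traditionally been a wrapper around fork/exec, so it is unlikely
     * to be faster by itself. */
    #include <spawn.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>

    extern char **environ;

    int main(void)
    {
        pid_t pid;
        char *argv[] = { "sh", "-c", "echo spawned", NULL };

        int rc = posix_spawn(&pid, "/bin/sh",
                             NULL /* file actions */, NULL /* attributes */,
                             argv, environ);
        if (rc != 0) {
            fprintf(stderr, "posix_spawn: %s\n", strerror(rc));
            return 1;
        }
        waitpid(pid, NULL, 0);
        return 0;
    }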

Pavel Shved
Yes, I was under the impression "Linux never had this problem" too... until I actually benchmarked it and got the numbers I quote above. Presumably copying those tables (which I believe are the VM page tables) takes quite a while when your process is 500 MByte big.
timday
+1, I implemented this as an 'exec' helper for a single-threaded non-blocking server. Instead of blocking in fork()/execv(), I simply pipe the request to the helper, flag the connection as waiting_for_exec_result, then do useful work while waiting for the data to become available to send back to the client.
Tim Post
+2  A: 

If you know the number of subprocesses ahead of time, it might be reasonable to pre-fork your application on startup and then distribute the execv information via a pipe. Alternatively, if there is some sort of "lull" in your program, it might be reasonable to fork a subprocess or two ahead of time for quick turnaround later. Neither of these options would directly solve the problem, but if either approach suits your app, it might allow you to side-step the issue.
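A minimal sketch of the pre-fork variant (my own illustration; the pool size, names and /bin/sh -c convention are arbitrary choices): spawn N idle children at startup, each blocked reading a command from its own pipe, so when work arrives the parent only writes the command and that child execs it directly, with no fork happening while the process is large.

    /* Pre-fork sketch: a small pool of idle children created at startup.
     * Each waits for one command on its pipe and then execs it directly. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #define POOL 4

    static int slot_fd[POOL];                 /* write ends, one per child */

    static void prefork_pool(void)            /* call before the process grows large */
    {
        for (int i = 0; i < POOL; i++) {
            int fds[2];
            if (pipe(fds) == -1) { perror("pipe"); exit(1); }
            if (fork() == 0) {                /* child: wait for a command, then exec */
                close(fds[1]);
                char cmd[4096];
                ssize_t n = read(fds[0], cmd, sizeof cmd - 1);
                if (n > 0) {
                    cmd[n] = '\0';
                    execl("/bin/sh", "sh", "-c", cmd, (char *)NULL);
                }
                _exit(127);
            }
            close(fds[0]);
            slot_fd[i] = fds[1];
        }
    }

    int main(void)
    {
        prefork_pool();
        /* later: hand a command to an unused slot */
        const char *cmd = "echo hello from a pre-forked child";
        write(slot_fd[0], cmd, strlen(cmd));
        close(slot_fd[0]);                    /* child sees the full command */
        sleep(1);                             /* crude: let the child run */
        return 0;
    }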

Rakis
+3  A: 

Outcome: I was going to go down the early-spawned helper subprocess route as suggested by the other answers here, but then I came across this, regarding using huge page support to improve fork performance.

Having tried it myself using libhugetlbfs to simply make all my app's mallocs allocate huge pages, I'm now getting around 2400 forks/s regardless of the process size (over the range I'm interested in anyway). Amazing.
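For anyone wanting to reproduce this: huge pages have to be reserved first (via /proc/sys/vm/nr_hugepages), and libhugetlbfs can then back malloc with them without source changes. Purely as an illustration of the underlying mechanism (not the libhugetlbfs API itself, and needing a newer kernel than Lenny's), a process can also ask for huge-page-backed memory directly:

    /* Illustration only: huge-page-backed anonymous mmap via MAP_HUGETLB
     * (Linux >= 2.6.32). libhugetlbfs achieves the same backing for ordinary
     * mallocs without source changes, which is what I did above. This fails
     * unless hugepages have been reserved in /proc/sys/vm/nr_hugepages. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 16UL * 1024 * 1024;      /* multiple of the 2 MB huge page size */

        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap(MAP_HUGETLB)");      /* typically: no hugepages reserved */
            return 1;
        }
        printf("got %zu bytes backed by huge pages at %p\n", len, p);
        munmap(p, len);
        return 0;
    }

The fork speedup presumably comes from the much smaller page tables: with 2 MB pages there are roughly 500 times fewer entries to copy for the same 500 MByte of address space.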

timday
Nice work.
tster