I've heard that creating a new process on a Windows box is more expensive than on Linux. Is this true? Can somebody explain the technical reasons why it's more expensive, and the historical reasons for the design decisions behind those costs?

+17  A: 

Unix has a 'fork' system call which 'splits' the current process into two, and gives you a second process that is identical to the first (modulo the return value from the fork call). Since the address space of the new process is already up and running, this should be cheaper than calling 'CreateProcess' in Windows and having it load the exe image, associated DLLs, etc.

In the fork case the OS can use 'copy-on-write' semantics for the memory pages associated with both processes, ensuring that each one gets its own copy of only the pages it subsequently modifies.
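
A minimal sketch of that 'returns twice' behaviour (plain POSIX; nothing here is specific to any particular Unix):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                       /* one call, returns in both processes     */
    if (pid == 0)
        printf("child\n");                    /* returned 0: we are the new COW copy     */
    else if (pid > 0)
        printf("parent of %d\n", (int)pid);   /* returned the child's pid: the original  */
    else
        perror("fork");                       /* negative: fork failed                   */
    return 0;
}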

Rob Walker
This argument only holds when you're really forking. If you're starting a new process, on Unix you still have to fork and exec. Both Windows and Unix have copy on write. Windows will certainly reuse a loaded EXE if you run a second copy of an app. I don't think your explanation is correct, sorry.
Joel Spolsky
More on exec() and fork() http://vipinkrsahu.blogspot.com/search/label/system%20programming
vipinsahu
+13  A: 

In addition to the answer of Rob Walker: Nowadays you have things like the Native POSIX Thread Library, if you want. But for a long time the only way to "delegate" work in the Unix world was to use fork() (and it's still preferred in many, many circumstances), e.g. for some kind of socket server:

int client = accept(listen_fd, NULL, NULL);  /* wait for an incoming connection */
pid_t pid = fork();                          /* split into parent and child     */
if (pid == 0)
    handleRequest(client);                   /* child: serve this one request   */
else
    goOnBeingParent();                       /* parent: loop back to accept()   */
Therefore the implementation of fork had to be fast, and lots of optimizations have been implemented over time. Microsoft endorsed CreateThread, or even fibers, instead of creating new processes and using interprocess communication. I think it's not "fair" to compare CreateProcess to fork, since they are not interchangeable. It's probably more appropriate to compare fork/exec to CreateProcess.
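
To make that last comparison concrete, here is a rough sketch of "run another program in a child process" on each side; the launched commands are just placeholders. On Unix it takes two steps, fork then exec, while Win32 does both in the single CreateProcess call:

#ifdef _WIN32
#include <windows.h>
int main(void)
{
    STARTUPINFOA si = { sizeof(si) };            /* cb must be set; rest zeroed          */
    PROCESS_INFORMATION pi;
    char cmd[] = "cmd.exe /c echo hello";        /* CreateProcess may modify this buffer */
    if (CreateProcessA(NULL, cmd, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi)) {
        WaitForSingleObject(pi.hProcess, INFINITE);   /* wait for the child to finish    */
        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
    }
    return 0;
}
#else
#include <unistd.h>
#include <sys/wait.h>
int main(void)
{
    pid_t pid = fork();                          /* cheap COW copy of the parent         */
    if (pid == 0) {
        execl("/bin/echo", "echo", "hello", (char *)NULL);  /* load the new image        */
        _exit(127);                              /* only reached if exec fails           */
    }
    waitpid(pid, NULL, 0);                       /* wait for the child to finish         */
    return 0;
}
#endif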

VolkerK
About your last point: fork() is not interchangeable with CreateProcess(), but one could also say that Windows should then implement fork(), because that gives more flexibility.
Blaisorblade
+8  A: 

The key to this matter is the historical usage of both systems, I think. Windows (and DOS before it) were originally single-user systems for personal computers. As such, these systems typically don't have to create a lot of processes all the time; (very) simply put, a process is only created when the one lonely user requests it (and we humans don't operate very fast, relatively speaking).

Unix-based systems were originally multi-user systems and servers. Especially for the latter it is not uncommon to have processes (e.g. mail or http daemons) that split off processes to handle specific jobs (e.g. taking care of one incoming connection). An important factor in doing this is the cheap fork method (which, as mentioned by Rob Walker (47865), initially uses the same memory for the newly created process), which is very useful as the new process immediately has all the information it needs.

It is clear that at least historically the need for Unix-based systems to have fast process creation is far greater than for Windows systems. I think this is still the case because Unix-based systems are still very process oriented, while Windows, due to its history, has probably been more thread oriented (threads being useful to make responsive applications).

Disclaimer: I'm by no means an expert on this matter, so forgive me if I got it wrong.

mweerden
+24  A: 

mweerden: NT has been designed for multi-user from day one, so this is not really a reason. However, you are right that process creation plays a less important role on NT than on Unix, as NT, in contrast to Unix, favors multithreading over multiprocessing.

Rob, it is true that fork is relatively cheap when COW is used, but as a matter of fact, fork is mostly followed by an exec. And an exec has to load all the images as well. Discussing the performance of fork is therefore only part of the truth.

When discussing the speed of process creation, it is probably a good idea to distinguish between NT and Windows/Win32. As far as NT (i.e. the kernel itself) goes, I do not think process creation (NtCreateProcess) and thread creation (NtCreateThread) are significantly slower than on the average Unix. There might be a little bit more going on, but I do not see the primary reason for the performance difference here.

If you look at Win32, however, you'll notice that it adds quite a bit of overhead to process creation. For one, it requires CSRSS to be notified about process creation, which involves LPC. It requires at least kernel32 to be loaded additionally, and it has to perform a number of additional bookkeeping tasks before the process is considered to be a full-fledged Win32 process. And let's not forget about all the additional overhead imposed by parsing manifests, checking whether the image requires a compatibility shim, checking whether software restriction policies apply, yada yada.

That said, I see the overall slowdown in the sum of all those little things that have to be done in addition to the raw creation of a process, VA space, and initial thread. But as said in the beginning -- due to the favoring of multithreading over multiprocessing, the only software that is seriously affected by this additional expense is poorly ported Unix software. Although this situation changes when software like Chrome and IE8 suddenly rediscover the benefits of multiprocessing and begin to frequently start up and tear down processes...

Johannes Passing
Fork is not always followed by exec(), and people care about fork() alone. Apache 1.3 uses fork() (without exec) on Linux and threads on Windows, even if in many cases processes are forked before they are needed and kept in a pool.
Blaisorblade
Not forgetting, of course, the 'vfork' call, which is designed for the 'just call exec' scenario you describe.
Chris Huang-Leaver
+3  A: 

All that, plus there's the fact that on the Windows machine antivirus software will most probably kick in during CreateProcess... That's usually the biggest slowdown.

gabr
+2  A: 

Uh, there seems to be a lot of "it's better this way" sort of justification going on.

I think people could benefit from reading "Showstopper", the book about the development of Windows NT.

The whole reason the services run as DLLs in one process on Windows NT was that they were too slow as separate processes.

If you got down and dirty you'd find that the library loading strategy is the problem.

On Unices (in general), the shared libraries' (DLLs') code segments are actually shared.

Windows NT loads a copy of the DLL per process, because it manipulates the library code segment (and executable code segment) after loading. (It patches the code to tell it where its data is.)

This results in code segments in libraries that are not reusable.

So, NT process creation is actually pretty expensive. And on the downside, it means DLLs give no appreciable saving in memory, but a chance for inter-app dependency problems.

Sometimes it pays in engineering to step back and say, "now, if we were going to design this to really suck, what would it look like?"

I worked with an embedded system that was quite temperamental once upon a time, and one day looked at it and realized it was a cavity magnetron, with the electronics in the microwave cavity. We made it much more stable (and less like a microwave) after that.

Tim Williscroft
Code segments are reusable as long as the DLL loads at its preferred base address. Traditionally you should ensure that you set non-conflicting base addresses for all DLLs that would load into your processes, but that doesn't work with ASLR.
Mike Dimmick
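(A hedged illustration of the point above, assuming the MSVC toolchain; mylib.obj is just a placeholder. The preferred base is chosen at link time, e.g. link /DLL /BASE:0x61000000 mylib.obj, so giving each DLL a non-conflicting base keeps its code pages free of fix-ups and therefore shareable.)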
There is some tool to rebase all DLLs, isn't there? Not sure what it does with ASLR.
Zan Lynx
Sharing of code sections works on ASLR-enabled systems as well.
Johannes Passing
+6  A: 

Adding to what JP said: most of the overhead belongs to Win32 startup for the process.

The Windows NT kernel actually does support COW fork. SFU (Microsoft's UNIX environment for Windows) uses it. However, Win32 does not support fork. SFU processes are not Win32 processes. SFU is orthogonal to Win32: they are both environment subsystems built on the same kernel.

In addition to the out-of-process LPC calls to CSRSS, in XP and later there is an out of process call to the application compatibility engine to find the program in the application compatibility database. This step causes enough overhead that Microsoft provides a group policy option to disable the compatibility engine on WS2003 for performance reasons.

The Win32 runtime libraries (kernel32.dll, etc.) also do a lot of registry reads and initialization on startup that don't apply to UNIX, SFU or native processes.

Native processes (with no environment subsystem) are very fast to create. SFU does a lot less than Win32 for process creation, so its processes are also fast to create.

Chris Smith
+1  A: 

The short answer is "software layers and components".

The Windows software architecture has a couple of additional layers and components that don't exist on Unix, or that are simplified and handled inside the kernel on Unix.

On Unix, fork and exec are direct calls to the kernel.

On Windows, the kernel API is not used directly; there is Win32 and certain other components on top of it, so process creation must go through extra layers, and then the new process must start up or connect to those layers and components.

For quite some time researchers and corporations have attempted to break up Unix in a vaguely similar way, usually basing their experiments on the Mach kernel; a well-known example is OS X. Every time they try, though, it gets so slow that they end up at least partially merging the pieces back into the kernel, either permanently or for production shipments.

DigitalRoss