If a Linux system runs out of physical memory, does it drop inactive executable code pages? I assume the answer is yes, since there is no reason to keep them in swap, so they are simply discarded and re-loaded if necessary (as far as I know, that's what FreeBSD does).

If the above is true for Linux, my question is, how does it handle executables run from network filesystems (e.g. NFS)? Does it go and fetch executable pages over the network if there is a page fault?

+3  A: 

If a Linux system runs out of physical memory, does it drop inactive executable code pages? I assume the answer is yes, since there is no reason to keep them in swap, so they are simply discarded and re-loaded if necessary (as far as I know, that's what FreeBSD does).

Yes, it does.

If the above is true for Linux, my question is, how does it handle executables run from network filesystems (e.g. NFS)? Does it go and fetch executable pages over the network if there is a page fault?

The NFS client caches stuff locally (if you're worried about speed), but possibly not the whole file (if you're worried about consistency). Here is a relevant NFS FAQ entry.
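Whether a page fault on an unreachable server blocks the process indefinitely or fails with an error is governed by the NFS mount options. A sketch of the two behaviors (hypothetical server and export names):

```
# /etc/fstab sketch -- "hard" retries requests forever, so a process
# faulting on a dead server sleeps in uninterruptible D state until the
# server comes back; "soft" gives up after retrans attempts and returns
# an I/O error instead (risking corruption for writable data).
server:/export  /mnt/nfs  nfs  hard,intr  0 0
#server:/export /mnt/nfs  nfs  soft,timeo=30,retrans=3  0 0
```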

Artelius
Does it mean there is a theoretical possibility that a running program can lock up if (1) there is a page fault, (2) the network FS driver's cache misses, and (3) the remote server becomes unavailable?
Alex B
That seems logical. And a bit scary.
Artelius