I read that Linux is a monolithic kernel. Does "monolithic kernel" mean compiling and linking the complete kernel code into a single executable? If Linux is able to support modules, why not break all the subsystems into modules and load them only when necessary? In that case, the kernel wouldn't have to load all modules initially; it could maintain an index of the functions in each module and load them on demand.
Linux can be both a monolithic and a modular kernel. It can be considered a monolithic kernel because all of its modules can be compiled into one binary executable. It is considered a modular kernel because it can be customized into smaller components that can be added/removed at runtime.
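For illustration, here is a minimal sketch of such a loadable module (the classic "hello world" module). Once built against the kernel headers, it can be loaded with insmod and removed with rmmod; the key point is that, once loaded, it runs in the same privileged kernel space as everything else.

```c
#include <linux/init.h>
#include <linux/module.h>

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Minimal example of a loadable kernel module");

/* Called when the module is inserted (e.g. via insmod) */
static int __init hello_init(void)
{
    pr_info("hello: now running in kernel space\n");
    return 0;
}

/* Called when the module is removed (e.g. via rmmod) */
static void __exit hello_exit(void)
{
    pr_info("hello: unloading\n");
}

module_init(hello_init);
module_exit(hello_exit);
```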
One would want to make the kernel monolithic to get minor performance increases. It can also shrink the overall size of the kernel. Monolithic kernels are typically found in embedded devices.
A monolithic kernel means that the whole operating system runs in kernel mode (i.e. highly privileged by the hardware). That is, no part of the OS runs in user mode (lower privilege). Only the applications on top of the OS run in user mode.
In non-monolithic kernel operating systems, such as Windows, a large part of the OS itself runs in user mode.
In either case, the OS can be highly modular.
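To make that mode boundary concrete, the sketch below is an ordinary user-mode C program. The syscall() it makes traps into the kernel, which services the request in privileged mode and then returns control to user mode:

```c
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
    /* This program executes in user mode. The syscall() below
       switches the CPU into kernel mode, the kernel runs its
       getpid handler in privileged space, and execution returns
       to user mode carrying the result. */
    long pid = syscall(SYS_getpid);
    printf("pid obtained from the kernel: %ld\n", pid);
    return 0;
}
```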
From Wikipedia:
A monolithic kernel is a kernel architecture where the entire operating system works in kernel space, alone, in supervisor mode. In contrast with other architectures,[1] the monolithic kernel alone defines a high-level virtual interface over the computer hardware, with a set of primitives or system calls to implement all operating system services (such as process management, concurrency, and memory management) itself, plus one or more device drivers as modules.
Recent versions of Windows, on the other hand, use a hybrid kernel.
A hybrid kernel is a kernel architecture based on combining aspects of microkernel and monolithic kernel architectures used in computer operating systems. The category is controversial due to its similarity to the monolithic kernel; the term has been dismissed by some as simple marketing. The traditional kernel categories are monolithic kernels and microkernels (with nanokernels and exokernels seen as more extreme versions of microkernels).
Hi Bala,
'Monolithic' in this context does not refer to there being a single large executable, and, as you say, Linux supports the dynamic loading of kernel modules at runtime. When talking about kernels, 'monolithic' means that the entire operating system runs in 'privileged' or 'supervisor' mode, as opposed to other designs such as the 'microkernel', where only a minimal set of functionality runs in privileged mode and most of the operating system runs in user space.
Proponents of microkernels say that this is better because less code means fewer bugs, and bugs running in supervisor mode can cause much greater problems than user-space code (such as a greater chance of security vulnerabilities or total system crashes in the form of a 'kernel panic'). Some microkernels are sufficiently minimal that they can be 'formally verified', which means you can mathematically prove that the kernel is 'correct' according to a specification; seL4, of the L4 family, is a good example of this.
A monolithic kernel is a kernel where all services (file system, VFS, device drivers, etc.) as well as core functionality (scheduling, memory allocation, etc.) are a tight-knit group sharing the same space. This is the direct opposite of a micro kernel.
A micro kernel prefers an approach where core functionality is isolated from system services and device drivers (which are basically just system services). For instance, VFS (virtual file system) and block device file systems (e.g. minixfs) are separate processes that run outside of the kernel's space, using IPC to communicate with the kernel, other services and user processes. In short, if it's a module in Linux, it's a service in a micro kernel, i.e. an isolated process.
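To make the message-passing idea concrete, here is a self-contained C sketch of a request/reply exchange with a file-system service. The ipc_msg format and fs_server_handle() are purely hypothetical stand-ins; every real micro kernel (L4, Mach, HelenOS) defines its own IPC primitives, and in a real system the call would cross address spaces through the kernel rather than being a direct function call.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical message format for illustration only. */
struct ipc_msg {
    int  op;                 /* requested operation, e.g. FS_OPEN */
    char payload[64];        /* operation arguments or reply data */
};

enum { FS_OPEN = 1 };

/* Stands in for the file-system server: in a micro kernel this is
   a normal user-space process receiving requests over an IPC port. */
static int fs_server_handle(const struct ipc_msg *req, struct ipc_msg *reply)
{
    if (req->op == FS_OPEN) {
        snprintf(reply->payload, sizeof(reply->payload),
                 "opened %s", req->payload);
        return 0;
    }
    return -1;  /* unknown operation */
}

int main(void)
{
    struct ipc_msg req = { .op = FS_OPEN }, reply = { 0 };
    strcpy(req.payload, "/etc/motd");

    /* Real systems would perform a kernel-mediated IPC send/receive
       here instead of a plain function call. */
    if (fs_server_handle(&req, &reply) == 0)
        printf("reply: %s\n", reply.payload);
    return 0;
}
```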
Do not confuse the term 'modular kernel' with being anything but monolithic. Some monolithic kernels can be compiled to be modular (e.g. Linux); what matters is that the module is inserted into, and runs from, the same space that handles core functionality.
The advantage of a micro kernel is that any failed service can easily be restarted; for instance, there is no kernel halt if the root file system throws an abort.
The disadvantage of a micro kernel is that asynchronous IPC messaging can become very difficult to debug, especially if fibrils are implemented. Additionally, just tracking down an FS/write issue means examining the user-space process, the block device service, the VFS service, the file system service and (possibly) the PCI service. If you draw a blank on that, it's time to look at the IPC service. This is often easier in a monolithic kernel. GNU Hurd suffers from these debugging problems (reference). I'm not even going to go into checkpointing when dealing with complex message queues. Micro kernels are not for the faint of heart.
The shortest path to a working, stable kernel is the monolithic approach. Either approach can offer a POSIX interface, at which point the design of the kernel becomes of little interest to someone who simply wants to write code to run on it.
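To illustrate that last point, here is a small sketch of plain POSIX I/O; nothing about it reveals (or depends on) whether the file system lives inside a monolithic kernel or in a user-space server behind IPC. The path /etc/hostname is just an arbitrary example file.

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Plain POSIX calls: the same code works whether the file
       system is kernel-resident or a user-space service -- the
       kernel design is hidden behind the interface. */
    char buf[128];
    int fd = open("/etc/hostname", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }
    ssize_t n = read(fd, buf, sizeof(buf) - 1);
    if (n > 0) { buf[n] = '\0'; printf("%s", buf); }
    close(fd);
    return 0;
}
```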
I use Linux (monolithic) in production, however most of my learning / hacking / tinkering with kernel development goes into a micro kernel, specifically HelenOS.
Edit
If you got this far through my very long-winded answer, you will probably have some fun reading the 'Great Torvalds-Tanenbaum debate on kernel design'. It's even funnier to read in 2009, almost 18 years after it transpired. The funniest part was Linus' signature in one of the last messages:
Linus "my first, and hopefully last flamefest" Torvalds
Obviously, that did not come true any more than Tanenbaum's prediction that x86 would soon be obsolete.
NB:
When I say "Minix", I do not imply Minix 3. Additionally, when I mention The HURD, I am referencing (mostly) the Mach micro kernel. It is not my intent to disparage the recent work of others.