I am not a master of the kernel code, but I have some basic idea of its structure. In this post we can discuss the good and bad things in the design of the kernel.

Update: No, this is not for homework. I would have mentioned that if that was the case.

See this: http://stackoverflow.com/questions/1548442/i-know-how-to-program-now-how-do-i-learn-to-design

Everybody praises the design of the Linux kernel. Let's have a list of good and bad design decisions that have been taken in the design of the kernel.

+1  A: 

I think this http://linuxhelp.blogspot.com/2006/05/monolithic-kernel-vs-microkernel-which.html should shed some light on your question.

Maxim Veksler
+5  A: 

There is a series of articles named "Linux Kernel Design Patterns" that covers various design patterns used in the kernel. One article in that series is "Linux Kernel Design Patterns - Part 1"; start from it and google for the follow-up articles.

SjB
A: 

Worst thing:

The build system (yes, I know it's nothing to do with the design of the kernel itself).
It's an absolute nightmare, unlike anything else in existence, and if for some reason (adding a new architecture, perhaps) you need to change it, you have to learn a whole new language to do so.

Best thing:

Almost everything is configurable. It's amazing that basically the same kernel is used in tiny embedded devices with no MMU, and in supercomputers with huge memories and thousands of processors.

Autopulated
These two features may not be entirely unrelated ;-)
Edmund
No, that irony wasn't lost on me!
Autopulated
GNU Make isn't the worst thing in the world to have in your toolbox of languages... It's quite useful...
xyld
But it isn't GNU Make! It's a special kernel make system: kbuild!
Autopulated
Kbuild files are GNU Make files; they just happen to be called Kbuild. Correct me if I'm wrong... there may be some helper programs here and there, but it's all GNU Make AFAIK. I.e., I don't need to first compile the kernel's "version" of make to build the kernel. GNU Make is pretty complicated at first, but it's all GNU Make.
xyld
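
For what it's worth, a typical driver's Kbuild fragment really is ordinary GNU Make syntax; here is a minimal sketch for a hypothetical out-of-tree module named `foo` (the module and file names are made up for illustration):

```make
# Hypothetical Kbuild fragment: build foo.ko from two source files.
# obj-m and foo-y are plain GNU Make variables that the kbuild
# infrastructure interprets; the syntax itself is standard Make.
obj-m := foo.o
foo-y := foo_main.o foo_util.o
```

The "whole new language" is less a new language than a set of conventions (variable names like `obj-m`, `obj-$(CONFIG_...)`) layered on top of GNU Make.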
+3  A: 

Not directly about Linux's design, but I believe the development process behind it is the most noteworthy thing. The kernel itself is constantly evolving, and does so with incredible speed. This is only possible because of distributed version control (git), which makes it possible for a very large number of developers to work simultaneously.

Also, with git bisect they accomplished something remarkable; it is now possible for non-developers to track down bugs. Here is a quote from David Miller:

What people don't get is that this is a situation where the "end node principle" applies. When you have limited resources (here: developers) you don't push the bulk of the burden upon them. Instead you push things out to the resource you have a lot of, the end nodes (here: users), so that the situation actually scales.

People reporting a bug have access to the environment where the bug happens, and "git bisect" automatically extracts relevant information from this environment. This is also a good way to gain new contributors.

Also, whenever developers want to contribute code, they absolutely have to break it down into tiny, separately applicable patches, so that each change can be easily reviewed. This way a large portion of the code can be understood by many people.

The Linux Management Style document is an interesting read. Linus tries to foster an atmosphere where you do not hide behind politeness, but clearly state what you think. This might come across as rude at times, but I am sure it keeps the code quality at a high level.

martinus
While I agree that a project like the Linux kernel needs a DVCS, there was life before git (Linux was using BitKeeper at that time), and git is not a revolution compared to BitKeeper (but it is open source).
Pascal Thivent
+2  A: 

Bad things:

  1. The rate of new features is high, while too little time is spent fixing bugs in those features.
  2. The config system should be more intelligent and categorized around common needs; a wizard-like configuration with more interaction and information while configuring would help.
  3. The big kernel lock, and other locks everywhere, without a good checking mechanism.

Good things:

  1. User mode drivers: user mode filesystems, user mode block drivers, raw HID, ...
  2. Good driver coverage: these days a large range of devices is supported by the Linux kernel.
  3. Lightweight virtualization in the kernel.
  4. Good communication mechanisms between user space and kernel space (dev, proc, sys, debugfs, ...).
  5. Very good use of the C language, with function pointers in structures to simulate some object-oriented features.
  6. User-mode Linux for debugging.
  7. Good support for embedded systems.
  8. Very good security mechanisms: SELinux, AppArmor, integrity verification, ...
chezgi
A: 

I think kernel development, rather than being a design issue with philosophical backgrounds and debates (such as micro-kernel vs. monolithic kernel), is an issue of practical code that works reliably. The variety of peripherals and protocols that a kernel must support, the wide range of hardware versions and manufacturers, and the complex problems that arise in protected-mode development (versus the user-mode development that applications use) all justify this statement. Also, do not forget the backward-compatibility issue, which is vital for an OS in practice but mostly neglected in design philosophy.

The Linux kernel is a good example of this: a monolithic kernel (though very modular) with many pragmatic hacks in it, which outperforms many well-designed academic and commercial operating systems in popularity and performance in both the server and embedded areas.

A major advantage of Linux is that many aspects of the kernel can either be built into the kernel image (usually bzImage) or added later as kernel modules. You can configure this before building the kernel image using the config tool, and any kernel module can then be easily loaded or removed at runtime (with root privileges, of course) without restarting the OS. This makes kernel development and maintenance much easier. (Just think of Windows, which needs a restart for almost every update, often for programs unrelated to the kernel.)
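
As a sketch of what such a module looks like, here is a minimal, hypothetical loadable module; note that it builds only against kernel headers via kbuild, not as a standalone userspace program:

```c
/* hello.c - a minimal, hypothetical loadable kernel module. */
#include <linux/init.h>
#include <linux/module.h>

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Minimal example module");

static int __init hello_init(void)
{
    pr_info("hello: module loaded\n");
    return 0; /* 0 = success; a negative errno would abort loading */
}

static void __exit hello_exit(void)
{
    pr_info("hello: module unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);
```

Once built to `hello.ko`, it would typically be loaded with `insmod hello.ko` and removed with `rmmod hello` (as root), with the `pr_info` messages visible via `dmesg`, all without rebooting.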

Amir Moghimi