views: 396
answers: 7

Possible Duplicate:
C# driver development?

Why do we use C for device driver development rather than C#?

+12  A: 

Because C# programs cannot run in kernel mode (Ring 0).

Darin Dimitrov
+1, but just to add: Windows now supports user-mode drivers as well, to a limited extent: http://www.microsoft.com/whdc/driver/wdf/UMDF.mspx
Brian Rasmussen
@Brian, yes that's a good remark.
Darin Dimitrov
However, there have been research projects to develop an OS primarily in a managed language like C#. See http://en.wikipedia.org/wiki/Singularity_%28operating_system%29.
Brian
+6  A: 

The main reason is that C is much better suited than C# for low-level (close to the hardware) development; C was designed as a form of portable assembly language. In addition, there are many situations where using C# would be difficult or unsafe, the key one being when drivers run at ring 0, i.e. kernel mode. Most applications are not meant to run in this mode, and that includes the .NET runtime.

While it may be theoretically possible, C is usually better suited to the task: it gives tighter control over the machine code that is produced, and being closer to the processor is almost always better when working directly with hardware.
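
For illustration, here is a minimal sketch of what that control looks like in C. The device, its base address, its register offsets, and the status bit are all invented for the example:

    #include <stdint.h>

    /* Hypothetical memory-mapped UART; the addresses and bit layout
     * are made up for illustration only. */
    #define UART_BASE    ((uintptr_t)0x10000000u)
    #define UART_STATUS  (*(volatile uint8_t *)(UART_BASE + 0x05))
    #define UART_DATA    (*(volatile uint8_t *)(UART_BASE + 0x00))
    #define TX_READY     0x20u  /* "transmit holding register empty" bit */

    /* Busy-wait until the device can accept a byte, then write it.
     * 'volatile' forces the compiler to emit a real load on every poll
     * instead of hoisting the read out of the loop. */
    static void uart_putc(uint8_t c)
    {
        while ((UART_STATUS & TX_READY) == 0)
            ;  /* spin: no scheduler, no allocation, no GC */
        UART_DATA = c;
    }

Every line here maps onto a handful of predictable loads, stores, and branches; there is no runtime underneath it.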

Earlz
+2  A: 

Because C# is a high-level language, it cannot talk to the processor directly. C code compiles straight into native code that the processor can understand.

this. __curious_geek
C# compiles indirectly to native code. Every language is "indirectly" compiled into something the processor can understand.
Earlz
@Earlz, as a long-time embedded systems and drivers guy, the idea of letting a JIT or interpreter loose inside my kernel makes me extremely uncomfortable. I have enough trouble knowing that the driver will meet its latency requirements without adding that level of complexity to the party. On the other hand, I welcome the trend towards things like user-mode drivers in Windows and FUSE in Linux.
RBerteig
A: 

C is a language well suited to interfacing with peripherals, which is why it is used to develop system software. The one drawback is that the developer has to manage memory manually, which can be a nightmare. In C#, memory management is done easily and automatically (there are any number of other differences between C and C#; try googling them).
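
To illustrate the manual bookkeeping (the function and names here are just an example):

    #include <stdlib.h>
    #include <string.h>

    /* In C the programmer owns every allocation: each malloc() must be
     * matched by exactly one free(), on every code path. */
    char *copy_name(const char *src)
    {
        char *dst = malloc(strlen(src) + 1);
        if (dst == NULL)
            return NULL;   /* caller must check for failure */
        strcpy(dst, src);
        return dst;        /* caller now owns dst and must free() it */
    }

In C# the equivalent is a plain assignment, and the garbage collector reclaims the memory for you.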

Vishnu K B
That is not correct: http://msdn.microsoft.com/en-us/library/y31yhkeb.aspx
Moberg
+2  A: 

Ignoring C#'s managed-language limitations for a moment: object-oriented languages such as C# often do things under the hood that get in the way when developing drivers. Driver development -- actually reading and manipulating bits in hardware -- often has tight timing constraints and uses programming practices that are avoided in other types of programming, such as busy waits and dereferencing pointers that were set from integer constant values.

Device drivers are often written in a mix of C and inline assembly (or use some other method of issuing instructions that C compilers don't normally produce). The low-level locking mechanisms alone (written in assembly) are enough to make using C# difficult. C# has lots of locking functionality, but when you dig down it operates at the threading level. Drivers need to be able to block interrupts as well as other threads to perform some tasks.
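
As a rough sketch of that last point, here is the kind of interrupt-level critical section C plus inline assembly makes possible. x86 and GCC inline-assembly syntax are assumed, and a real kernel would save and restore the flags register rather than unconditionally re-enabling interrupts:

    /* Only legal in kernel mode (ring 0), where cli/sti may execute. */
    static volatile unsigned long shared_counter;

    static void increment_shared(void)
    {
        __asm__ volatile ("cli");  /* mask interrupts: even an IRQ
                                      handler cannot preempt us now */
        shared_counter++;          /* the critical section */
        __asm__ volatile ("sti");  /* unmask interrupts */
    }

There is no C# construct that expresses "no interrupt may fire between these two instructions"; its locks only coordinate threads.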

Object-oriented languages also tend to allocate and deallocate (via garbage collection) lots of memory over and over. Within an interrupt handler, though, your ability to use heap allocation is severely restricted, both because heap allocation may trigger garbage collection (expensive) and because you have to deal with the possibility of interrupt code and non-interrupt code trying to allocate at the same time. Without a lot of restrictions placed on this code, on the compiler's output, and on whatever manages the code (a VM, or possibly natively compiled code that only uses libraries), you will likely end up with some very odd bugs.
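
A common pattern that sidesteps this is to pre-allocate everything so the interrupt handler never touches the heap at all. This sketch (sizes and names invented) shows the idea:

    #include <stdint.h>

    /* The memory is carved out ahead of time; the ISR only moves an
     * index, so it can never trigger allocation (or, in a managed
     * world, a garbage collection). */
    #define RX_RING_SIZE 256u

    static volatile uint8_t  rx_ring[RX_RING_SIZE];
    static volatile uint32_t rx_head;   /* written by the ISR  */
    static volatile uint32_t rx_tail;   /* written by the task */

    /* Called from interrupt context with one received byte. */
    void rx_isr(uint8_t byte)
    {
        uint32_t next = (rx_head + 1) % RX_RING_SIZE;
        if (next != rx_tail) {          /* drop the byte if full */
            rx_ring[rx_head] = byte;
            rx_head = next;
        }
    }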

Managed languages have to be managed by someone. This management probably relies on a functioning underlying operating system. This produces a bootstrap problem for using managed code for many (but not all) drivers.

It is entirely possible to write device drivers in C#. If you are driving a serial device, such as a GPS unit, it may even be trivial, since it can be done as an application (though somewhere lower down there is a UART chip or USB driver, probably written in C or assembly). If you are writing an Ethernet card driver (for a card that isn't used for network booting), it may be theoretically possible to use C# for some parts, but you would probably rely heavily on libraries written in other languages and/or on an operating system's user-space driver functionality.

C is used for drivers because it has relatively predictable output. If you know a little assembly for a processor, you can write some code in C, compile it for that processor, and have a pretty good guess at what the resulting assembly will look like. You will also know the determinism of those instructions: none of them are going to surprise you and kick off the garbage collector or call a destructor (or finalizer).
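
For example (the assembly shown is roughly what GCC produces for x86-64 at -O2; the exact instructions vary by compiler and flags, but nothing hidden will appear):

    int add(int a, int b)
    {
        return a + b;
    }

    /* Approximately:
     *   lea eax, [rdi + rsi]
     *   ret
     */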

nategoose
A: 

Here's the honest truth: people who tend to be good at hardware or hardware interfaces are usually not very sophisticated programmers. They tend to stick to simpler languages like C. Otherwise, frameworks would have evolved that allow languages like C++ or even C# to be used at kernel level. Entire OSes have been written in C++ (eCos). So, IMHO, it is mostly tradition.

Now, there are some legitimate arguments against using more sophisticated languages in demanding code like drivers and kernels. There is the visibility aspect: C# and C++ compilers do a lot behind the scenes. An innocuous-looking assignment statement may hide a shitload of code (operator overloads, properties). The cost of exceptions may not be clearly visible. Garbage collection makes the lifetime of objects and memory unclear. All the features that make programming easier are also rope to hang yourself with.

Then there is the required ecosystem for the language. If the required features pull in too many components, the size alone may become a factor. If the primitives in the language tend to be heavy (for the sake of useful software abstractions), that's a burden drivers and kernels may not be willing to carry.

Ziffusion
+1  A: 

Just to build a little on Darin Dimitrov's answer: yes, C# programs cannot run in kernel mode.

But why can't they?

In Patrick Dussud's interview for Behind the Code, he describes an attempt made during the development of Vista to include the CLR at a low level*. The wall they hit was that the CLR takes dependencies on the OS security library, which in turn takes dependencies on the UI level. They were not able to resolve this for Vista. Aside from Singularity, I don't know of any other effort to do this.

*Note that while "low level" may not have been sufficient for writing drivers in C#, it was at the very least necessary.

Conrad Frix