I am writing a client/server in the Linux kernel. (Yes, inside the kernel. It's a design decision that has been taken and finalised; it is not going to change.)

The server reads incoming packets from a raw socket. The transport protocol for these packets (on which the raw socket is listening) is custom and UDP-like. In short, I do not need to listen for incoming connections and then fork a thread to handle each connection.

I just have to process any IP datagram arriving on that raw socket, reading packets in an infinite loop. In the user-level equivalent program, I would have created a separate thread and kept it listening for incoming packets.

Now, for the kernel-level server, I have doubts about whether I should run it in a separate thread or not, because:

  1. I think read() is an I/O operation. So somewhere inside read(), the kernel must call schedule() to relinquish the processor. Thus, after calling read() on the raw socket, the currently active kernel context will be put on hold (placed on a wait queue, perhaps?) until packets are available. When packets arrive, the interrupt context will mark the sleeping read context as runnable again. I am deliberately saying 'context' here rather than 'thread'. By this argument, I should not need a separate kernel thread.

  2. On the other hand, if read() does not relinquish control, then the entire kernel will be blocked.

Can anyone provide tips on how I should design my server? And what is the fallacy in the argument presented in point 1?

A: 

I think your best bet might be to emulate the way drivers are written: think of your server as a virtual device sitting on top of the ones the requests come from. For example, a mouse driver accepts continuous input but doesn't lock up the system when programmed correctly; a network adapter is probably even closer to your case.

Dana the Sane
+2  A: 

I'm not sure you need a raw socket in the kernel at all. Inside the kernel you can add a netfilter hook, or register a packet-type handler, which will receive all packets; this might be what you want.
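To illustrate the netfilter route: a minimal sketch of a module registering a PRE_ROUTING hook, assuming a kernel recent enough to have nf_register_net_hook (older kernels use nf_register_hook instead). MY_CUSTOM_PROTO and the steal-vs-accept logic are hypothetical stand-ins for your protocol:

```c
#include <linux/module.h>
#include <linux/netfilter.h>
#include <linux/netfilter_ipv4.h>
#include <linux/ip.h>
#include <net/net_namespace.h>

#define MY_CUSTOM_PROTO 253  /* hypothetical: an experimental IP protocol number */

/* Called for every incoming IPv4 packet at PRE_ROUTING.
 * Runs in softirq context, so it must not sleep or do heavy work;
 * queue the packet somewhere and process it later if needed. */
static unsigned int my_hook_fn(void *priv, struct sk_buff *skb,
                               const struct nf_hook_state *state)
{
        struct iphdr *iph = ip_hdr(skb);

        if (iph->protocol == MY_CUSTOM_PROTO) {
                /* hand skb to your protocol engine here, then: */
                kfree_skb(skb);
                return NF_STOLEN;   /* we consumed the packet */
        }
        return NF_ACCEPT;           /* let everything else pass */
}

static struct nf_hook_ops my_ops = {
        .hook     = my_hook_fn,
        .pf       = NFPROTO_IPV4,
        .hooknum  = NF_INET_PRE_ROUTING,
        .priority = NF_IP_PRI_FIRST,
};

static int __init my_init(void)
{
        return nf_register_net_hook(&init_net, &my_ops);
}

static void __exit my_exit(void)
{
        nf_unregister_net_hook(&init_net, &my_ops);
}

module_init(my_init);
module_exit(my_exit);
MODULE_LICENSE("GPL");
```

With this approach there is no read() loop at all: the kernel calls you for each packet, which neatly sidesteps the blocking question in the original post.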

If you DID use a raw socket inside the kernel, then you'd probably need a kernel thread (i.e. one started with kernel_thread) to call read() on it. But it need not be a kernel thread; it could be a userspace thread that makes a special syscall or device call to invoke the desired kernel-mode routine.
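If you do go the kernel-thread route, a sketch of what that might look like with the modern kthread API (kthread_run rather than raw kernel_thread) and kernel_recvmsg, assuming a kernel where sock_create_kern takes a struct net argument. process_packet and MY_CUSTOM_PROTO are hypothetical:

```c
#include <linux/module.h>
#include <linux/kthread.h>
#include <linux/net.h>
#include <linux/in.h>
#include <net/sock.h>

#define MY_CUSTOM_PROTO 253  /* hypothetical IP protocol number */

static struct socket *raw_sock;
static struct task_struct *reader;

/* Hypothetical handler for your custom protocol's datagrams. */
static void process_packet(const u8 *data, int len)
{
        /* protocol logic goes here */
}

/* Reader loop: blocks inside kernel_recvmsg() until a datagram
 * arrives. Only this kthread sleeps; the rest of the kernel keeps
 * running, which is the answer to point 2 in the question. */
static int reader_fn(void *unused)
{
        u8 buf[2048];

        while (!kthread_should_stop()) {
                struct kvec iov = { .iov_base = buf, .iov_len = sizeof(buf) };
                struct msghdr msg = { .msg_flags = 0 };
                int len = kernel_recvmsg(raw_sock, &msg, &iov, 1,
                                         sizeof(buf), 0);
                if (len > 0)
                        process_packet(buf, len);
        }
        return 0;
}

static int __init server_init(void)
{
        int err = sock_create_kern(&init_net, AF_INET, SOCK_RAW,
                                   MY_CUSTOM_PROTO, &raw_sock);
        if (err)
                return err;

        reader = kthread_run(reader_fn, NULL, "myproto-reader");
        if (IS_ERR(reader)) {
                sock_release(raw_sock);
                return PTR_ERR(reader);
        }
        return 0;
}
module_init(server_init);
MODULE_LICENSE("GPL");
```

One caveat: a real implementation also needs a shutdown path, since kthread_stop() alone will not wake a thread blocked in kernel_recvmsg (e.g. release the socket, or receive with a timeout, before stopping the thread).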

If you have a hook registered, the context it's called in is probably not one that should do much processing. I don't know exactly what that context is likely to be; it may be a "bottom half handler" or "tasklet", whatever those are (these types of control structures keep changing from one kernel version to another). I hope it's not actually an interrupt service routine.


In answer to your original question:

  1. Yes, sys_read will block the calling thread, whether it's a kernel thread or a userspace one. The system will not hang. However, if the calling thread is in a state where blocking is not allowed, the kernel will panic (with a "scheduling while atomic" or "sleeping in interrupt context" type of bug).

Yes, you will need to do this in a separate thread; no, it won't hang the system. However, making system calls from kernel mode is very iffy, although it does work (sort of).

But if you installed some kind of hook instead, you wouldn't need to do any of that.

MarkR
Protocol drivers use dev_add_pack() (declared in linux/netdevice.h) to add a packet type; this registers a callback that gets called for each packet of that type. It's quite extreme and low-level, though.
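For completeness, a sketch of the dev_add_pack approach; the callback runs in softirq context for every matching packet, so the same "don't sleep, don't do heavy work" caveat applies. Registering for ETH_P_IP as shown would deliver a copy of every IP packet in addition to the normal IP stack; you would more likely use your own ethertype:

```c
#include <linux/module.h>
#include <linux/netdevice.h>
#include <linux/if_ether.h>
#include <linux/skbuff.h>

/* Called in softirq context for every packet of the registered type. */
static int my_pkt_rcv(struct sk_buff *skb, struct net_device *dev,
                      struct packet_type *pt, struct net_device *orig_dev)
{
        /* inspect skb, hand it to your protocol engine, then free it */
        kfree_skb(skb);
        return 0;
}

static struct packet_type my_ptype = {
        .type = htons(ETH_P_IP),   /* or your own ethertype */
        .func = my_pkt_rcv,
};

static int __init pkt_init(void)
{
        dev_add_pack(&my_ptype);   /* my_pkt_rcv now sees matching packets */
        return 0;
}

static void __exit pkt_exit(void)
{
        dev_remove_pack(&my_ptype);
}

module_init(pkt_init);
module_exit(pkt_exit);
MODULE_LICENSE("GPL");
```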
MarkR