I am writing a client/server in the Linux kernel. (Yes, inside the kernel. It's a design decision that has been taken and finalised; it is not going to change.)
The server reads incoming packets from a raw socket. The transport protocol for these packets (on which the raw socket is listening) is custom and UDP-like. In short, I do not have to listen for incoming connections and then fork a thread to handle each connection.
I just have to process any IP datagram arriving on that raw socket, reading packets in an infinite loop. In the equivalent user-level program, I would have created a separate thread that keeps listening for incoming packets.
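For reference, the receive loop I have in mind looks roughly like this (a minimal sketch, assuming a kernel where `sock_create_kern()` takes a `struct net *` argument; `IPPROTO_MYPROTO` is a placeholder for my custom protocol number, and error handling is abbreviated):

```c
#include <linux/net.h>
#include <linux/in.h>
#include <linux/uio.h>
#include <net/net_namespace.h>
#include <net/sock.h>

#define IPPROTO_MYPROTO 200  /* placeholder for the custom protocol number */

static struct socket *raw_sock;

static int receive_loop(void)
{
        u8 buf[1500];
        int err;

        /* In-kernel raw socket, no file descriptor involved. */
        err = sock_create_kern(&init_net, PF_INET, SOCK_RAW,
                               IPPROTO_MYPROTO, &raw_sock);
        if (err < 0)
                return err;

        for (;;) {
                struct kvec iov = { .iov_base = buf, .iov_len = sizeof(buf) };
                struct msghdr msg = { 0 };
                int len;

                /* Blocks until a datagram for our protocol arrives. */
                len = kernel_recvmsg(raw_sock, &msg, &iov, 1,
                                     sizeof(buf), 0);
                if (len < 0)
                        break;  /* socket error */
                /* process_datagram(buf, len);  -- protocol handling here */
        }
        sock_release(raw_sock);
        return 0;
}
```

This is the in-kernel analogue of the user-level read loop; `kernel_recvmsg()` plays the role that `read()`/`recvmsg()` would play in userspace.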
Now, for the kernel-level server, I am unsure whether I should run it in a separate kernel thread, because:
1. I think read() is an I/O operation, so somewhere inside read() the kernel must call schedule() to relinquish the processor. Thus, after calling read() on the raw socket, the current kernel execution context will be put to sleep (on a wait queue, perhaps) until packets are available. When packets arrive, the interrupt handler will wake the sleeping read context and make it runnable again. I am deliberately saying 'context' here rather than 'thread'. By this reasoning, I should not need a separate kernel thread.
2. On the other hand, if read() does not relinquish control, the entire kernel will be blocked.
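To make the design I am weighing concrete, here is a hedged sketch of running the receive loop in a dedicated kernel thread, so that the sleep inside the blocking receive has a process context of its own (`server_receive_one()` is a hypothetical helper that performs one blocking receive on the raw socket):

```c
#include <linux/err.h>
#include <linux/init.h>
#include <linux/kthread.h>
#include <linux/module.h>

static struct task_struct *server_task;

static int server_thread_fn(void *unused)
{
        while (!kthread_should_stop()) {
                /* Blocks waiting for a datagram: this thread sleeps,
                 * the rest of the kernel keeps running. */
                server_receive_one();  /* hypothetical helper */
        }
        return 0;
}

static int __init server_init(void)
{
        server_task = kthread_run(server_thread_fn, NULL, "myproto_srv");
        return PTR_ERR_OR_ZERO(server_task);
}

static void __exit server_exit(void)
{
        /* Caveat: kthread_stop() only returns once the thread next
         * checks kthread_should_stop(), i.e. after the blocking
         * receive returns; a real design needs a way to unblock it
         * (e.g. a receive timeout or shutting down the socket). */
        kthread_stop(server_task);
}

module_init(server_init);
module_exit(server_exit);
MODULE_LICENSE("GPL");
```

The question below is essentially whether this dedicated thread is necessary at all, or whether the sleeping behaviour described in point 1 makes it redundant.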
Can anyone provide tips on how I should design my server? And what is the fallacy in the argument presented in point 1?