Suppose an application-level protocol is implemented over UDP. Client timeouts are required, so the server needs to keep state for each client it talks to.

Also suppose `select` is used.

  1. Is a multi-threaded server always the best implementation? I figure a linked list will do the same, where the server's `select` timeout = earliest client timeout - current time. A linked list can keep each client's state just as well, while avoiding the overhead of creating new threads (though it adds some complexity, since the server must maintain client-specific timeouts itself).

  2. If multi-threading is chosen, is it then best to create a new socket for each client? That introduces system-resource overhead. But I figure the default server socket (bound to the server's well-known port) would do the same job, since it has a buffer (well... maybe not large enough for a scalable number of clients).

Thanks!

A: 

Multi-threading is definitely not a must, as you have already come up with an alternative. We can't really use absolutes like *always* or *never*, as each case has unique requirements and constraints.

Yes, adding a new thread/socket for each connection will consume more resources. It sounds like you need a good estimate of how many connections you will need to support. Then you can determine whether you will have sufficient resources or not.

If the resource constraints are not a concern, I would choose the simpler solution. Is it easier to use the tools you already have (i.e. well-tested functions to handle threads and sockets) as opposed to writing a new body of functionality (the linked-list suggestion)? What about code maintenance? If another programmer works on this project in the future, would it be easier for them to understand an implementation built on standard operating-system calls they are already familiar with, or a linked list?

semaj
Thanks for your explanation, especially the maintenance part.
Figo
A: 

Linked lists will not scale.

Using linked lists on the server side to check the clients one by one and address their needs is all well and good for 5 to 10 clients. But what happens when you have 100? 1000? What happens if one client's request takes a very long time to handle?

Threads don't just provide a way of maintaining state for individual clients. They also provide a way of simultaneously distributing the server's resources across all clients. It's as if each client has a dedicated server to itself; there is (almost) no queue: the client wants something, it asks the server, the server replies. It's near-instantaneous.

Plus, you could be wasting valuable resources with your linked list approach. What if all the clients but one want nothing? You'll be cycling repeatedly over a hundred clients, doing nothing but wasting CPU cycles, until you come across the one that does require the server's attention.

Computer Guru
That's a pretty blanket statement. I have seen EFNet servers happily handling upwards of 15,000 simultaneous clients using a single-threaded `ircd`, with essentially the method described. So it's quite possible that it will be scalable *enough*.
caf
+2  A: 
Aidan Cully
Thanks for your advice. I guess I will follow the regular model instead. But threading isn't that useful on a single-core system, right, compared to a dual-core one?
Figo
No, it is. Threading won't increase throughput on a single-core system, but it *is* useful in that it improves responsiveness and reliability, and reduces average waiting time.
Computer Guru
@Computer Guru - that's true relative to a polling architecture, but I don't think it'll be true relative to a well-designed AIO-based (or other interrupt-based) architecture. On a single core, threading basically _reduces_ to an interrupt-based architecture.
Aidan Cully
+1  A: 

I'm not going to suggest anything new that isn't in the answer by Aidan Cully; however, take a look at the theory behind Apache's Multi Processing Modules: http://www.linuxquestions.org/linux/answers/Networking/Multi_Processing_Module_in_Apache

In essence, the server is split into multiple modules, and threads/processes are created to manage connections depending on need and configuration options. It sounds like the balance described in Aidan's answer, although the Apache implementation may differ slightly.
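For a concrete picture of that balance, a hybrid (worker-MPM-style) configuration fragment looks roughly like this. The directive values below are illustrative, not taken from the linked article; `MaxRequestWorkers` is the Apache 2.4 name (older releases call it `MaxClients`):

```apache
<IfModule mpm_worker_module>
    StartServers          2
    MinSpareThreads      25
    MaxSpareThreads      75
    ThreadsPerChild      25
    MaxRequestWorkers   150
</IfModule>
```

The server pre-forks a few processes, each carrying a pool of threads, and grows or shrinks the pools between the spare-thread bounds as load changes, so neither a pure thread-per-client nor a pure single-loop model is used.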

Ninefingers
Yeah, I was somewhat active in the Apache (httpd) community when its processing model was introduced, and the discussion surrounding it had a lot to do with my thinking.
Aidan Cully
Aaaahh, an expert on the subject then - it seems like a very sensible approach, and one I intend to follow on an upcoming project I am doing at work.
Ninefingers