views: 205
answers: 3

I apologize for the weird question wording... here's the design problem:

I am developing a server (on Linux using C++, FWIW) that provides a service to many instances of a client application running on consumer PCs.

I want the following:

1) All clients first identify themselves to a "gatekeeper" server application. Consider this a login procedure, with credentials like a user name and password being passed in. Call the gatekeeper program "gserver" (for "gatekeeper" server).

2) Once each client has been validated, it is then placed into a long-term connection with one of several instances of a different server application running on the same physical server box, bound to the same server address. Call any of these instances "wserver" (for "working" server).

So, what the client sees is that a "gatekeeper" application gives it passworded access to one of several "working" servers running on the same box.

Here is the "real" challenge: we want to exclusively use a "well known" port number for the inbound server connections (like port 80 or 443, say.) Or, our own "well known" port.

We would prefer not to make the client talk to a second port on the server for the long-term connection phase with wserver(n). The problem, of course, is that only one server process at a time can be bound to a given port and server address.

This implies that the connection the client makes with gserver must also fill the role of the long-term connection. The only way I see to accomplish this is for gserver, after login, to act like a proxy and copy traffic between the client and the particular wserver(n) the client is logically bound to.

It would be ideal if a TCP/IP connection first made between client(n) and gserver could somehow be "transported", intact, to another application on the same server, and then sustained by one of the wserver(n) instances for the long-term connection.

I know that web servers do something like this to spread out server load: "load balancing". The main difference here is that the "balancing" is the allocation of a particular user to a particular wserver(n) instance. But I also have the impression that load balancing is a kind of proxying, which I am trying to avoid, since it complicates the architecture, adds overhead, and introduces a single point of failure.

This is a conceptual and design question. Don't worry about source code examples, unless they are absolutely essential to get the ideas across. If we pin down an approach, I can code it up.

Thanks!

+4  A: 

What you are looking for is file descriptor passing. See UNP (Stevens' Unix Network Programming), section 15.7. One well-known heavy user of this facility is Postfix.
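
For illustration, here is a minimal sketch of the sending (gserver) side, assuming gserver has already accepted the client's TCP connection as client_fd and holds a connected AF_UNIX stream socket, unix_fd, to the chosen wserver (both variable names are made up):

    #include <sys/socket.h>
    #include <cstring>

    // Send client_fd to the peer on unix_fd as SCM_RIGHTS ancillary data.
    // Returns true on success.
    bool send_fd(int unix_fd, int client_fd) {
        char dummy = 'F';                      // must carry at least one byte of real data
        iovec iov{ &dummy, sizeof(dummy) };

        char ctrl[CMSG_SPACE(sizeof(int))];
        std::memset(ctrl, 0, sizeof(ctrl));

        msghdr msg{};
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = ctrl;
        msg.msg_controllen = sizeof(ctrl);

        cmsghdr* cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;          // "this message carries open descriptors"
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        std::memcpy(CMSG_DATA(cmsg), &client_fd, sizeof(int));

        return sendmsg(unix_fd, &msg, 0) != -1;
    }

The wserver that receives the descriptor ends up with its own reference to the same open TCP connection, so gserver can close its copy and drop out of the conversation entirely.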

Nikolai N Fetissov
I'm investigating this. I will post again when I have verified that this sample works for this situation. Thanks!!!
Wannabe Tycoon
I just built and ran the example you suggested. It works, and it's exactly what I need here! Many thanks.
Wannabe Tycoon
Glad to hear that :)
Nikolai N Fetissov
A: 

I don't know if this applies to your design, but the usual solution (as implemented by the xinetd daemon) is to fork() and then exec() the process. For example, xinetd may serve services like rlogin, rsh, tftp, telnet, etc., which are actually served by different programs. This will not be useful to you if your wservers are processes already running in the system.
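
Purely for contrast, a rough sketch of that inetd/xinetd pattern: accept, fork, wire the connection to the child's standard descriptors, then exec the handler ("/usr/local/bin/wserver" is just a placeholder path; error handling and SIGCHLD reaping are omitted):

    #include <sys/socket.h>
    #include <unistd.h>

    void serve_inetd_style(int listen_fd) {
        for (;;) {
            int conn_fd = accept(listen_fd, nullptr, nullptr);
            if (conn_fd == -1)
                continue;
            if (fork() == 0) {                    // child: becomes the worker
                dup2(conn_fd, STDIN_FILENO);      // the connection becomes fds 0/1/2
                dup2(conn_fd, STDOUT_FILENO);
                dup2(conn_fd, STDERR_FILENO);
                close(conn_fd);
                close(listen_fd);
                execl("/usr/local/bin/wserver", "wserver", (char*)nullptr);
                _exit(1);                         // only reached if exec fails
            }
            close(conn_fd);                       // parent: the child owns the connection now
        }
    }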

Guillermo Prandi
As it turns out, clients need to be plugged into existing "wservers". So, as you stated, fork/exec can't work for this case. But thanks for pointing that out.
Wannabe Tycoon
+2  A: 

I developed such an application a long time ago. Since multiple servers can't listen on the same port, what you need is to have gserver listen on the well-known port. Once a connection is established, pass it to one of the other servers via a Unix socket. Once the connection has been passed, gserver is out of the picture: it can even die, and the other server will still be serving the connection.
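
For completeness, a minimal sketch of the receiving (wserver) side of that hand-off, again assuming unix_fd is the Unix-domain socket connected to gserver (names are illustrative):

    #include <sys/socket.h>
    #include <cstring>

    // Receive a descriptor passed over unix_fd.
    // Returns the received fd, or -1 on error.
    int recv_fd(int unix_fd) {
        char dummy;
        iovec iov{ &dummy, sizeof(dummy) };

        char ctrl[CMSG_SPACE(sizeof(int))];
        msghdr msg{};
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = ctrl;
        msg.msg_controllen = sizeof(ctrl);

        if (recvmsg(unix_fd, &msg, 0) <= 0)
            return -1;

        for (cmsghdr* c = CMSG_FIRSTHDR(&msg); c != nullptr; c = CMSG_NXTHDR(&msg, c)) {
            if (c->cmsg_level == SOL_SOCKET && c->cmsg_type == SCM_RIGHTS) {
                int fd;
                std::memcpy(&fd, CMSG_DATA(c), sizeof(fd));
                return fd;    // now an ordinary TCP socket owned by the wserver process
            }
        }
        return -1;
    }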

ZZ Coder