I have one process that receives incoming connections on port 1000 on one Linux server. However, a single process is not fast enough to handle all the incoming requests.

I want to run multiple processes on the server, but behind a single endpoint, so that clients see only one endpoint/process, not several.

I have checked LVS and other load-balancing solutions. Those seem geared towards load balancing across multiple servers.

Is there any other solution that would help in my case?

+2  A: 

The question is a little unclear to me, but I suspect the answer you are looking for is to have a single process accepting tasks from the network, and then forking off 'worker processes' to actually perform the work (before returning the result to the user).

In that way, the work which is being done does not block the acceptance of more requests.

As you point out, the term "load balancing" carries the implication of multiple servers - what you want to look for is information about how to write a Linux network daemon.

The two key system calls you'll want to look at are fork and exec.

Matt Sheppard
+1  A: 

Yes, I'm looking for something like a Linux network daemon. I wonder whether there's a readily available program/solution to do this.

Niko Gunadi
+2  A: 

It sounds like you just need to integrate your server with xinetd.

This is a server that listens on predefined ports (that you control through config) and forks off processes to handle the actual communication on that port.
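For example, a sketch of an xinetd service entry (the service name and handler path are hypothetical; with `wait = no`, xinetd forks a fresh handler process per connection, and the handler talks to the client over stdin/stdout):

```
# /etc/xinetd.d/myservice (hypothetical)
service myservice
{
    type        = UNLISTED
    port        = 1000
    socket_type = stream
    protocol    = tcp
    wait        = no          # fork one handler process per connection
    user        = nobody
    server      = /usr/local/bin/myhandler
}
```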

Thomas Vander Stichele
+2  A: 

You also may want to go with a web server like nginx. It can load balance your app against multiple ports of the same app, and is commonly used to load balance Ruby on Rails apps (which are single threaded). The downside is that you need to run multiple copies of your app (one on each port) for this load balancing to work.
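As a sketch, assuming your app speaks HTTP and you run three copies on ports 1001-1003 (all names and ports here are made up), the nginx config would look roughly like:

```nginx
# nginx listens on the single public port and round-robins
# requests across the app instances
upstream app_backends {
    server 127.0.0.1:1001;
    server 127.0.0.1:1002;
    server 127.0.0.1:1003;
}

server {
    listen 1000;
    location / {
        proxy_pass http://app_backends;
    }
}
```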

Chris Bunch
+1  A: 

You need multi-processing or multi-threading. You aren't specific about the details of the server, so I can't tell you exactly what to do. fork and exec, as Matt suggested, can be a solution, but really: what kind of protocol/server are we talking about?

Leon Timmermans
A: 

I am looking for something more like nginx, where I will need to run multiple copies of my app.

Let me try it out.

Thanks for the help.

Niko Gunadi
A: 

I am thinking of running multiple applications, similar to ypops.

Niko Gunadi
A: 

nginx is great, but if you don't fancy a whole new web server, Apache 2.2 with mod_proxy_balancer will do the same job.

Dave Verwer
A: 

Perhaps you can modify your client to round-robin across ports (say) 1000-1009 and run 10 copies of the process?

Alternatively there must be some way of internally refactoring it.

It's possible for several processes to listen to the same socket at once by having it opened before calling fork(), but (if it's a TCP socket) once accept() is called the resulting socket then belongs to whichever process successfully accepted the connection.

So essentially you could use:

  • Prefork, where you open the socket, fork a specified number of children which then share the load
  • Post-fork, where you have one master process which accepts all the connections and forks children to handle individual sockets
  • Threads - you can share the sockets in whatever way you like with those, as the file descriptors are not cloned, they're just available to any thread.
MarkR