
Hi,

My question is about the architecture of the application I am working on right now. Currently, we install a server locally on each box. That server gets data from the client, does some processing on it, and then generates output; a receipt is printed depending on the output data. That output data is stored in a centralized database via an hourly upload from the local servers on the client boxes.

My concern is whether it is good practice to install a server locally on each client box, or whether a centralized server is the better approach. When I asked, it was suggested that with a centralized server, latency, speed, and bandwidth would all become concerns: each and every client request would hit the server, increasing execution time, consuming bandwidth, and hurting latency.

Note:

The business line of the application is Shipping and Supply Chain Logistics. The application generates all the routing, rating, and other label-related information needed to ship a package from source to destination. For example, Apple and Dell ship millions and millions of packages, and this server does all the work of generating label, routing, and rating details. Hope this makes the picture clearer :)

Clients here process millions and millions of transactions, so the request rate is very high.

Thanks.

A: 

Client-server environments (the web included) have advantages and disadvantages, so the context of your application is critical. In your scenario, you have distributed servers, so the workload is balanced. However, you have a nightmare in terms of maintaining each server (software, operations, reliability, etc.). A centralized server provides better maintenance, monitoring, and so on, but also carries an increased workload.

The answer for your situation depends greatly on the needs of your application. While millions of transactions sound like a lot, well-designed applications can handle that load quite reasonably. However, you may be sending a substantial amount of data in those transactional requests, which might make that process onerous and unreliable. Again, application context is very important.

Based on the notes you've supplied, it sounds as if there is some local server processing that handles real-time transactions, but asynchronously does processed/summarized data loads to a central db on a schedule. That's certainly not a poor approach, although it does increase environmental complexity.
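That pattern can be sketched briefly. This is a minimal illustration, not your actual system: the class, field names, and the hourly interval are all assumptions for the sake of the example.

```python
import threading

class LocalServer:
    """Sketch of a local server that handles transactions in real time
    and buffers results for a scheduled batch upload to a central DB."""

    def __init__(self, upload_interval_secs=3600):
        self._buffer = []
        self._lock = threading.Lock()
        self.upload_interval_secs = upload_interval_secs  # e.g. hourly

    def process_transaction(self, request):
        # Real-time work (routing/rating/label generation) happens here.
        result = {"label": f"LBL-{request['id']}", "rated": True}
        with self._lock:
            # Buffer locally instead of blocking on a network call.
            self._buffer.append(result)
        return result

    def flush_to_central_db(self):
        # Called on a schedule; ships the whole batch in one upload.
        with self._lock:
            batch, self._buffer = self._buffer, []
        # upload(batch)  # the network call to the central DB would go here
        return batch

server = LocalServer()
server.process_transaction({"id": 1})
server.process_transaction({"id": 2})
print(len(server.flush_to_central_db()))  # 2 results uploaded in one batch
```

The key trade-off is visible even in the sketch: transactions never wait on the network, but the central database is only as fresh as the last flush.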

I will gladly edit my response if you can supply greater detail about your application.

Hope this helps.

jro
What kind of details about the project would help you give a better answer?
Rachel
A: 

It depends on what sort of system you have and what your requirements are.

One of the advantages of a centralised server model is that you can scale the number of clients and the number of servers independently, making the most of your hardware, and it also allows for redundancy in the event that one of your servers falls over. For instance, web services in an SOA environment are well suited to this model. It does come with an increase in latency, though, so if you have real-time systems with SLAs that require responses within a couple of milliseconds, this probably isn't the way to go.

Since it appears that you are after really fast response times, perhaps what you have now is quite a reasonable solution.

The syncing of the data back to the database on a schedule could be done differently if you were looking to make it closer to real time; perhaps a message queue would work. That would probably make things a little simpler as well.

Dean Johnston
A message queue at the backend or on the local server?
Rachel
A message queue between your servers and your database: the servers drop a message onto the queue and then forget about it, and a separate process running near your database plucks messages off and stores them in its own time. In a traditional client-server model the storing in the db would be done by the server, but if you want to avoid the overhead of waiting for responses, a message queue gives you an asynchronous way of still storing your data in real time, or close to it.
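The "drop it on the queue and forget about it" pattern can be sketched with an in-process queue standing in for a real broker (the broker, message shape, and `db_writer` name are all illustrative assumptions):

```python
import queue
import threading

msg_queue = queue.Queue()  # stands in for a real broker (RabbitMQ, ActiveMQ, ...)
stored = []                # stands in for the central database

def db_writer():
    # Separate worker near the database: drains the queue and
    # persists each message in its own time.
    while True:
        msg = msg_queue.get()
        if msg is None:        # shutdown sentinel
            break
        stored.append(msg)     # db.insert(msg) in a real system
        msg_queue.task_done()

writer = threading.Thread(target=db_writer)
writer.start()

# The local server's side: enqueue and move on, never waiting for the DB.
for txn_id in range(3):
    msg_queue.put({"txn": txn_id, "label": f"LBL-{txn_id}"})

msg_queue.join()               # only this demo waits; servers wouldn't
msg_queue.put(None)
writer.join()
print(len(stored))  # 3
```

In production the queue would be a durable broker on the network, so messages survive a database outage and are written once it recovers.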
Dean Johnston
A: 

Both approaches can work successfully.

The drawback of a store-and-forward system is that the central location will not have up-to-date information about what's going on at a shipping station. The technical drawbacks of a more fully-connected centralized system are not necessarily bandwidth and transaction throughput, since those can be mitigated with more resources (it's a cost problem, not a technical problem), but a fully-connected system has more points of failure and no local fallback option.

On the cost side, although fatter clients have lower bandwidth costs, administering the clients increases management costs. The management costs, while they can be mitigated, are typically labor and support costs, which often outweigh the commodity technology costs.

Cade Roux
A: 

As others have said, it all depends on what you're doing.

However, the biggest thing to look at is how many times you're crossing machine boundaries. If you can minimize that, you'll be in pretty good shape. In general, I'd avoid RPC mechanics whenever possible, as that will be two machine boundary crossings :)
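One common way to cut down boundary crossings is batching: one request carrying many items instead of one request per item. A toy sketch, where both functions are hypothetical stand-ins for a remote rating endpoint:

```python
def rate_package(pkg):
    # Chatty style: one machine-boundary crossing (request + response) per call.
    return {"pkg": pkg, "rate": 4.99}

def rate_packages(pkgs):
    # Batched style: one crossing for the whole list.
    return [{"pkg": p, "rate": 4.99} for p in pkgs]

packages = ["A", "B", "C"]

rates_chatty = [rate_package(p) for p in packages]   # 3 round trips
rates_batched = rate_packages(packages)              # 1 round trip

assert rates_chatty == rates_batched  # same answers, fewer crossings
```

The results are identical; only the number of network round trips changes, which is exactly the quantity to minimize.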

The issue with having a 'server' on each local machine is simple - how do you maintain consistent state?

Also, your network topology will be an important factor. If everything's on a local subnet (ideally on the same switch), latency won't be an issue unless you have horribly designed network code. If you're going over the cloud, it's a different story.

kyoryu