views: 157
answers: 3

Hello everyone,

I have to write an architecture case study, but there are some things I don't know, so I'd like some pointers on the following:

The website must handle 5k simultaneous users. The backend is composed of a piece of commercial software, some web services, some message queues, and a database.

I want to recommend using Spring for the backend, to tie the different elements together and to expose some REST services.
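For the REST layer, I'm picturing something along these lines (just a sketch; OrderService and the /orders path are placeholder names I invented, not part of the actual system):

    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.stereotype.Controller;
    import org.springframework.web.bind.annotation.*;

    // Hypothetical Spring MVC controller; OrderService, Order and /orders are made-up names.
    @Controller
    @RequestMapping("/orders")
    public class OrderController {

        private final OrderService orderService;

        @Autowired
        public OrderController(OrderService orderService) {
            this.orderService = orderService;
        }

        // The front end would call this over HTTP rather than talking to the
        // commercial software, the queues or the database directly.
        @RequestMapping(value = "/{id}", method = RequestMethod.GET)
        @ResponseBody
        public Order get(@PathVariable("id") long id) {
            return orderService.findById(id);
        }
    }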

I also want to recommend Wicket for the front end (not the point here).

What I don't know is: should I install the front end and the back end on the same Tomcat server, or on two different ones? I am also tempted to put two servers on the front, behind a load balancer (no need for session replication in this case). But if I have two front servers, must I also have two back servers? I don't want to create some kind of bottleneck.

Based on what I read on this blog, a really huge load is handled by a single Tomcat for the first website mentioned. But I cannot find any other info on this, so I can't tell whether it is plausible.

If you can enlighten me so I can carry on with my case study, that would be really helpful.

Thanks :)

+4  A: 

There are probably two main reasons for having multiple servers for each tier: high availability and performance. If you're not doing this for HA reasons, then the unfortunate answer is 'it depends'.

Having two front end servers doesn't force you to have two backend servers. Is the backend going to be under a sufficiently high load that it will require two servers? It will depend a lot on what it is doing, and would be best revealed by load testing and/or profiling. For a site handling 5000 simultaneous users, though, my guess would be yes...

dogbert
+2  A: 

It totally depends on your application. How heavy are your sessions? (Wicket is known for putting a lot in the session.) How heavy are your backend processes?

It might be a better idea to come up with something that can scale: a load balancer with the ability to keep adding new servers as the load grows.

Measurement is the best thing you can do. Create JMeter scripts and find out where your app breaks. Build a plan from there.
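If JMeter feels like too much machinery for a first pass, even a crude throwaway client gives you a feel for where things start to break. A rough sketch (the URL, user count and request count are invented for illustration):

    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicInteger;

    // Crude load probe: N concurrent "users" each firing a burst of requests.
    public class CrudeLoadProbe {
        public static void main(String[] args) throws Exception {
            final int clients = 200;              // concurrent users (made-up number)
            final int requestsPerClient = 50;     // requests per user (made-up number)
            final AtomicInteger failures = new AtomicInteger();
            ExecutorService pool = Executors.newFixedThreadPool(clients);

            long start = System.nanoTime();
            for (int i = 0; i < clients; i++) {
                pool.submit(new Runnable() {
                    public void run() {
                        for (int r = 0; r < requestsPerClient; r++) {
                            try {
                                HttpURLConnection c = (HttpURLConnection)
                                        new URL("http://localhost:8080/app/home").openConnection();
                                if (c.getResponseCode() != 200) failures.incrementAndGet();
                                c.disconnect();
                            } catch (Exception e) {
                                failures.incrementAndGet();
                            }
                        }
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(10, TimeUnit.MINUTES);

            long elapsedMs = (System.nanoTime() - start) / 1000000;
            int total = clients * requestsPerClient;
            System.out.printf("%d requests in %d ms, %d failures (%.0f req/s)%n",
                    total, elapsedMs, failures.get(), total * 1000.0 / elapsedMs);
        }
    }

Watch not just the averages but where the numbers fall off a cliff as you raise the client count.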

Albert
This is a theoretical exercise; there's no actual app, so I can't do any load testing. But could you talk a little more about how I should plan for scaling, please? Thanks :)
Maxime ARNSTAMM
+1  A: 

To expand on my comment: think through the typical process by which a client makes a request to your server:

  • it initiates a connection, which has an overhead for both client and server;
  • it makes one or more requests via that connection, holding on to resources on the server for the duration of the connection;
  • it closes the connection, generally releasing application resources, but still hogging a port number on your server for some number of seconds after the connection is closed.

So in designing your architecture, you need to think about things such as:

  • how many connections can you actually hold open simultaneously on your server? If you're using Tomcat or another standard server with one thread per connection, you may have issues running 5,000 simultaneous threads (a NIO-based architecture, on the other hand, can handle thousands of connections without needing one thread per connection; see the sketch after this list); if you're in a shared environment, you may simply not be able to have that many open connections;
  • if clients don't hold their connections open for the duration of a "session", what is the right balance between the number of requests and/or time per connection, bearing in mind the overhead of making and closing a connection (initialisation of an encrypted session if relevant, network overhead in creating the connection, the port "hogged" for a while after the connection is closed)?
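To make the NIO point above concrete, here is a very stripped-down sketch of the selector pattern (this is not how Tomcat's NIO connector is actually written; it just illustrates one thread servicing many connections, with the port number chosen arbitrarily):

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.ServerSocketChannel;
    import java.nio.channels.SocketChannel;
    import java.util.Iterator;

    // One thread multiplexing many connections with a Selector,
    // instead of dedicating a thread to each of the 5,000 clients.
    public class NioEchoSketch {
        public static void main(String[] args) throws IOException {
            Selector selector = Selector.open();
            ServerSocketChannel server = ServerSocketChannel.open();
            server.socket().bind(new InetSocketAddress(8080));
            server.configureBlocking(false);
            server.register(selector, SelectionKey.OP_ACCEPT);

            ByteBuffer buffer = ByteBuffer.allocate(4096);
            while (true) {
                selector.select();                       // blocks until some channel is ready
                Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
                while (keys.hasNext()) {
                    SelectionKey key = keys.next();
                    keys.remove();
                    if (key.isAcceptable()) {
                        SocketChannel client = server.accept();
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ);
                    } else if (key.isReadable()) {
                        SocketChannel client = (SocketChannel) key.channel();
                        buffer.clear();
                        if (client.read(buffer) == -1) { client.close(); continue; }
                        buffer.flip();
                        client.write(buffer);            // naive echo back to the client
                    }
                }
            }
        }
    }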

Then more generally, I'd say consider:

  • in whatever architecture you go for, how easily can you re-architect/replace specific components if they prove to be bottlenecks?
  • for each "black box" component/framework that you use, what actual problem does it solve for you, and what are its limitations? (Don't just use Tomcat because your boss's mate's best man told them about it down the pub...)

I would also agree with what other people have said: at some point you have to stop being too theoretical. Design something sensible, then run a test bed to see how it actually copes with your expected volumes of data. (You might not have the whole app built, but you can start making predictions about "we're going to have X clients sending Y requests every Z minutes, and p% of those requests will take n milliseconds and write r rows to the database"...)
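As a back-of-envelope illustration of why those predictions matter (all numbers invented, none come from the question): "5,000 simultaneous users" does not automatically mean 5,000 requests being processed at the same instant.

    // Rough sizing arithmetic; think time and response time are assumed values.
    public class BackOfEnvelope {
        public static void main(String[] args) {
            int simultaneousUsers = 5000;
            double thinkTimeSeconds = 30.0;   // assumed pause between a user's requests
            double avgResponseSeconds = 0.2;  // assumed average server time per request

            double requestsPerSecond = simultaneousUsers / thinkTimeSeconds;
            // Little's law: requests in flight = arrival rate * time each spends in the system
            double inFlight = requestsPerSecond * avgResponseSeconds;

            System.out.printf("~%.0f req/s, ~%.0f requests actually in flight at once%n",
                    requestsPerSecond, inFlight);
        }
    }

With those assumptions you get roughly 167 requests per second and only a few dozen requests in flight at any moment; halve the think time or triple the response time and the picture changes completely, which is exactly why the X/Y/Z/p/n/r figures are worth writing down.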

Neil Coffey