views: 48

answers: 2

I have a web app hosted in a cloud environment that can be scaled out to multiple web nodes to handle higher load.

What I need is to detect the situation where we start getting more HTTP requests than we can comfortably handle (assets are stored remotely). How can I do that?

The problem as I see it is that if we get more requests than the Mongrel cluster can handle, the queue in front of it will grow. And inside our Rails app we can only start measuring after Mongrel has received the request from the balancer.

Any recommendations?
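To illustrate the gap: anything measured inside the Rails app only starts once Mongrel has picked the request up, so time spent waiting in the balancer queue is invisible unless the front end stamps each request with an arrival time. Below is a rough sketch of that idea, assuming the balancer or nginx can be configured to add an X-Request-Start header (the header name, the nginx line and the one-second threshold are illustrative assumptions, not part of this setup):

require 'logger'

# Hypothetical Rack middleware: logs how long a request waited before the
# app saw it, based on a timestamp header added by the front end, e.g. in
# nginx: proxy_set_header X-Request-Start "t=${msec}";
class QueueTimeLogger
  def initialize(app, logger = Logger.new($stdout), threshold = 1.0)
    @app = app
    @logger = logger
    @threshold = threshold # seconds
  end

  def call(env)
    stamp = env['HTTP_X_REQUEST_START']
    if stamp
      queued_at = stamp.sub(/^t=/, '').to_f          # assumes epoch seconds
      wait = Time.now.to_f - queued_at
      @logger.warn("request queued for #{'%.3f' % wait}s") if wait > @threshold
    end
    @app.call(env)
  end
end

# In Rails 2.3+ this could be registered with:
#   config.middleware.use QueueTimeLogger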

+3  A: 

I would start with an around_filter, maybe something like:

around_filter :time_it

private
  def time_it
    started = DateTime.now
    yield # this is where the action (and the rest of the filter chain) runs.
    time_span = DateTime.now - started

    # Subtracting two DateTimes gives a Rational number of *days*,
    # so multiply by 86400.0 to get seconds as a Float
    # (don't truncate to an integer or you lose the fractional part).
    if time_span * 86400.0 > 4.0
      logger.debug "something is taking a little longer than expected here!"
    end
  end

Use this as a starting point. Hope that helps.

Edit: put this code in the ApplicationController so it can be used by every Controller.
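For reference, a minimal sketch of that placement (Rails 2.x-style controller; Time.now is used here instead of DateTime because subtracting two Time objects already gives seconds as a Float, and the 4-second threshold just mirrors the example above):

class ApplicationController < ActionController::Base
  around_filter :time_it

  private

  def time_it
    started = Time.now
    yield                           # run the action
    elapsed = Time.now - started    # Float seconds
    logger.warn("slow request: #{request.path} took #{'%.2f' % elapsed}s") if elapsed > 4.0
  end
end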

DJTripleThreat
This works well. I think one issue with this in general is that you have to time every request with `DateTime.now`, so it's not super efficient, but it's really the only way to get the job done.
Joseph Silvashy
Yeah, there's probably a gem that can do this better (metric_fu? or some kind of profiling gem), but I don't know of any that do exactly what he's looking for. He can run this a couple of times, find out where the bottlenecks are, and then put this code in just the controller with the problem.
DJTripleThreat
A: 

In addition to @DJTripleThreat's answer, you should have a look at NewRelic RPM to get more insight into your code's performance.
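For a Rails 2.x app like this one, wiring the agent in is roughly a one-gem affair; a sketch with placeholder values (check the gem's own docs for the exact, current steps):

# config/environment.rb (Rails 2.x style)
Rails::Initializer.run do |config|
  config.gem 'newrelic_rpm'   # the monitoring agent
end

# You would also need a config/newrelic.yml containing at least your
# license_key and an app_name; the gem's documentation has the template.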

hurikhan77