Hi

We have a site hosted on one side of the planet and a customer on the other.

It's ASP.NET and there are loads of complex business rules on the forms, so there are many instances where the user takes some action and the site posts back to alter the form based on business rules.

So now the customer is complaining about site lag.

For us, the lag is barely noticeable, so we're pretty much talking pure geographical distance here, I think.

What are the options for improving performance...

a) Put a mirrored data center nearer the customer's country
b) Rewrite the whole app, trying to implement the business rules entirely in client-side script (may not be feasible)

Outside of this, does anyone have any tips or tricks that might boost performance?

We already have heavy caching between the DB and web server, but in this case that isn't the issue since they are side by side anyway...

The problem is a 30,000-mile round trip between client and server...

(I notice the reverse is slow too: when I use websites hosted in the customer's country, they always seem slow...)

+1  A: 

The first step is to get some performance information from your client accessing your website - something like Firebug (in Firefox), which shows how long every request for each item on your page takes. You may be surprised what the bottleneck actually is. Maybe just adding a CDN (Content Delivery Network) for your images, etc. would be enough.

If your site has any external references or tracking that runs on the client (WebTrends, etc.), that may even be the bottleneck; it could be nothing to do with your site as such.

Paul Hadfield
Firebug - nice tip, hadn't thought about it for measurements. We were talking at work on Friday about setting up some comparison measurements: us to the server vs. them to the server. For us it's about 1,000 miles (we see a lag of under 1 sec); for them it's over 10,000 miles (they see a lag of up to 8 secs on the largest form).
Solyad
+1  A: 

One thing you should look at is how big the page's ViewState is. Every postback sends the ViewState back up, and if it's large and the internet lines are "slow", you will get lag.

Ways to fix this: scrutinize your code and turn off ViewState for controls which don't need it; compress the ViewState before sending it to the client, making the postbacks smaller (see the sketch below); or cache the ViewState on the server and replace it in the .aspx output with a GUID or similar, making the postback smaller still.
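
To illustrate the compression idea, here is a rough sketch (names and structure invented; one common way to do the compress-the-ViewState step) of a base page that gzips the serialized state on the way out and inflates it on postback:

    using System;
    using System.IO;
    using System.IO.Compression;
    using System.Web.UI;

    // Sketch: pages inherit from this instead of System.Web.UI.Page.
    public class CompressedViewStatePage : Page
    {
        private const string StateField = "__COMPRESSED_VIEWSTATE";

        protected override void SavePageStateToPersistenceMedium(object state)
        {
            // Serialize the state to its usual base64 form first.
            var formatter = new LosFormatter();
            var writer = new StringWriter();
            formatter.Serialize(writer, state);
            byte[] raw = Convert.FromBase64String(writer.ToString());

            // Compress and emit our own hidden field instead of __VIEWSTATE.
            using (var buffer = new MemoryStream())
            {
                using (var gzip = new GZipStream(buffer, CompressionMode.Compress))
                    gzip.Write(raw, 0, raw.Length);
                ClientScript.RegisterHiddenField(
                    StateField, Convert.ToBase64String(buffer.ToArray()));
            }
        }

        protected override object LoadPageStateFromPersistenceMedium()
        {
            // Reverse the process when the form comes back.
            byte[] compressed = Convert.FromBase64String(Request.Form[StateField]);
            using (var gzip = new GZipStream(
                new MemoryStream(compressed), CompressionMode.Decompress))
            using (var output = new MemoryStream())
            {
                byte[] chunk = new byte[4096];
                int read;
                while ((read = gzip.Read(chunk, 0, chunk.Length)) > 0)
                    output.Write(chunk, 0, read);
                return new LosFormatter().Deserialize(
                    Convert.ToBase64String(output.ToArray()));
            }
        }
    }

A nice side effect is that this also shrinks what the browser posts back up the wire, which response-side compression can't help with.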

And of course, make sure you have compression (gzip) turned on for your entire site, so that everything you send is compressed in the first place.
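
If you can't switch compression on in IIS itself (e.g. shared hosting), a rough Global.asax sketch of the same idea:

    using System;
    using System.IO.Compression;
    using System.Web;

    public class Global : HttpApplication
    {
        protected void Application_PreRequestHandlerExecute(object sender, EventArgs e)
        {
            // Only compress when the browser says it accepts gzip.
            string accept = Request.Headers["Accept-Encoding"];
            if (!string.IsNullOrEmpty(accept) && accept.Contains("gzip"))
            {
                // Wrap the response stream so everything written is gzipped.
                Response.Filter = new GZipStream(Response.Filter, CompressionMode.Compress);
                Response.AppendHeader("Content-Encoding", "gzip");
            }
        }
    }

IIS-level compression for static and dynamic content is the cleaner option where you control the server; the filter approach only covers responses that pass through ASP.NET.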

Also make sure you add cache headers to all static content so that the client caches those files (js, css, images).

Use Fiddler or something similar to monitor how much data is being sent back and forth for your application.

Mikael Svenson
Good point. Along with my comments, check for any unrequired data being sent between the browser and the server. It would also be worth making sure that IIS compression for both static and dynamic files is enabled and that the JS and CSS files have been minified.
Paul Hadfield
It helps to minify css and js, but usually they will be cached after the first request anyway, so I don't expect that to be the problem. Of course he should check that caching headers are added to js and css.
Mikael Svenson
Didn't think CSS and JS are the issue since, as you say, they get cached. In my form engine I have ViewState switched off bar the necessary controls.
Solyad
+5  A: 

I have this problem too. Some of my clients are in New Zealand and I am in the UK. That is as big a round-trip as you can get, unless you count deep space probes.

Start here:

http://www.aspnet101.com/2010/03/50-tips-to-boost-asp-net-performance-part-i/
http://www.aspnet101.com/2010/04/50-tips-to-boost-asp-net-performance-part-ii/

Many of these are server-side hints, but particular hints from these pages that might help you include:

  • disable ViewState where appropriate
  • use a CDN so that users get as much of their content as possible from nearby servers, e.g. jQuery script files, Azure services
  • change your images to sprites
  • minify your JS
  • validate all input received from the users - doing it on the client side saves unnecessary round trips; jQuery validation is excellent for this
  • use IIS compression - reduces download size
  • use AJAX wherever possible - if you don't already, this has the greatest potential to reduce your round-trip sizes; my preference is (you guessed it...) jQuery AJAX (see the sketch after this list)
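
To make the AJAX point concrete, here is a rough page-method sketch (names invented): with a ScriptManager on the page and EnablePageMethods="true", a rule like this runs server-side while only a few bytes of JSON cross the wire, instead of a full postback carrying the whole form and its ViewState:

    using System.Web.Services;
    using System.Web.UI;

    public partial class OrderForm : Page
    {
        // Callable from client script as PageMethods.GetDiscount(...).
        [WebMethod]
        public static decimal GetDiscount(string customerCode, int quantity)
        {
            // Hypothetical business rule; the request and response are each
            // a few dozen bytes of JSON.
            return quantity > 100 ? 0.15m : 0.05m;
        }
    }

On the client, PageMethods.GetDiscount(code, qty, updateDiscountLabel) fires a single small XHR, so only the discount value makes the long-haul trip.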

Also, in Firefox, install the YSlow add-on. This will give you hints on how to improve your particular pages.

If all of this is not enough and you can afford the time and investment, converting your application to ASP.NET MVC will make your pages a lot lighter on bandwidth. You can do this gradually, changing the most problematic pages first, and over time replace your site without impacting your users. But only do this after exhausting the many ideas posted in all of the answers to your question.

Another option, if you are going to do a rewrite, is to consider a Rich Internet Application using Silverlight. That way you can have the appropriate C# business rules executing in the client browser and only return to the server for small packets of data.

The most obvious short-term solution would be to buy some hosting space in the same country as your client, but you would have to consider database synchronisation if you have other clients in your home country.

Daniel Dyson
"That is as big a round-trip as you can get, unless you count deep space probes" :)
Rob Levine
Yip, our round trip is a similar distance. We have thought about MVC in a future version. I'm lucky in that I work for a major corporation that has data centers around the planet, so it should be possible to set up mirrors...
Solyad
Good plan. The mirrored architecture will enable you to have a failover plan too, in case one data center goes down.
Daniel Dyson
+1  A: 

This might sound obvious, but here goes: I'd try to limit the information interchange between the client and the server to the absolute minimum, probably by caching as much information as possible on the first call and using JavaScript.

For example: if your app calls the server when the user presses "get me a new blank form", you could instead send a "hidden" (i.e. in a JavaScript string) blank form on the first call and have JavaScript swap it in for the visible one when the user presses the button. No server round trip = big gain in perceived responsiveness.

Another example would be an AJAX service that re-renders a complex form every time the user changes one field. The inefficient (but usually easier to implement) way to do it is to have the server send the complete form as HTML. A more efficient way is to have the server return a short message (maybe encoded in JSON) and have the client build the form from that message, again with JavaScript. The trick here is that in some cases you can start rendering the form before the message is received, so perceived responsiveness will also be better.
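
As a rough server-side sketch of that short-message idea (all names invented), an HTTP handler could return a small JSON "delta" for the client script to apply:

    using System.Web;
    using System.Web.Script.Serialization;

    // The client posts the field that changed; the server runs the business
    // rules and replies with a small description of what to update, rather
    // than re-rendered HTML for the whole form.
    public class FormDeltaHandler : IHttpHandler
    {
        public bool IsReusable { get { return true; } }

        public void ProcessRequest(HttpContext context)
        {
            string changedField = context.Request["field"]; // e.g. "country"

            // ... apply the real business rules for changedField here ...
            var delta = new
            {
                show = new[] { "vatNumber" },     // controls to reveal
                hide = new[] { "usStateCode" },   // controls to hide
                set = new { currency = "EUR" }    // values to update
            };

            context.Response.ContentType = "application/json";
            context.Response.Write(new JavaScriptSerializer().Serialize(delta));
        }
    }

The re-rendered HTML for a large form can easily run to tens of kilobytes; a delta like this is usually well under one.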

Finally, see if you can cache things: if the user is asking for information that you already have, don't ask the server for it again. For example, save the "current state" of the page in a JavaScript array, so if the user presses "back" or "forward" you can restore it from there instead of calling the server again.

egarcia
+1  A: 

You are dealing with a "long fat pipe" here, meaning the bandwidth is sufficient (it can still do x KB/s) but the latency is increased. I would look at decreasing the number and frequency of requests first, before decreasing the size of each request.

You don't have to reimplement 100% of the business rules in JavaScript, but do start chipping away at the simple validation. IMHO this has the potential to give you the best bang for your buck.
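
In WebForms, the built-in validator controls are an easy first chip: with EnableClientScript (on by default) the check runs in the browser, so obviously bad input never makes the trip. A minimal sketch with invented field names - the same rules still run server-side on postback as a safety net:

    <%-- Hypothetical field: invalid input is rejected in the browser before
         any postback; ASP.NET re-checks it server-side on submit. --%>
    <asp:TextBox ID="txtQuantity" runat="server" />
    <asp:RequiredFieldValidator runat="server"
        ControlToValidate="txtQuantity"
        ErrorMessage="Quantity is required"
        EnableClientScript="true" />
    <asp:RangeValidator runat="server"
        ControlToValidate="txtQuantity"
        Type="Integer" MinimumValue="1" MaximumValue="999"
        ErrorMessage="Quantity must be between 1 and 999"
        EnableClientScript="true" />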

But of course, don't take my word for it - investigate where the bottleneck happens, i.e. response time or transfer time. Most modern browsers' developer plugins can measure that these days.

Igor Zevaka
Good phrase, "long fat pipe"! I think this is basically the issue.
Solyad
A: 

First of all, thanks guys for all the information.

Just a bit of extra background: the site is built using a dynamic form-generating engine I wrote.

Basically I created a system whereby the form layout is described in the db and rendered on the fly.

This has been a very useful system in that it allows rapid changes, and it also means our outputs - on-screen, PDF and XML - are synced to these descriptions, which I call form maps. For example, adding a new control or reordering a form is automatically reflected in all renders: form, PDF, XML and standard display.

It does, however, introduce the overhead of having to build the page on every request, including postbacks.

I've been speaking to an architect in my company, and we're probably going to need a few copies around the globe - it's not possible to run a global system for people in different countries from one European data center.

Some of the forms are huge, and the customer is unwilling to see sense and break them into smaller portions - so this is also adding to the overhead.

The only thing I have not tried yet is running something like gzip to reduce the payload being sent over and back...

Solyad