views: 153
answers: 4
I am developing an application that involves real-time interaction between multiple users. It essentially involves a lot of AJAX POST/GET requests from each user to the server, which in turn translate into database reads and writes. The real-time results returned from the server are used to update the client-side front end.

I know optimisation is quite a tricky, specialised area, but what advice would you give me to get maximum speed of operation here? Speed is of paramount importance, yet currently some of these POST requests take 20-30 seconds to return.

One way I have thought of optimising this is to batch POST requests and send them to the server in groups of 8-10, instead of firing individual requests. I am not currently using caching on the database side, don't really know much about it, and am unsure whether it would be beneficial in this case.
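Something like this is what I have in mind (a rough sketch; the /api/batch endpoint is made up and the server would need a matching handler):

    // Queue outgoing updates client-side and flush them as one POST every
    // 200 ms instead of firing a request per update. The /api/batch
    // endpoint and the payload shape are hypothetical.
    type Update = { action: string; payload: unknown };

    const queue: Update[] = [];

    function enqueue(update: Update): void {
      queue.push(update);
    }

    setInterval(() => {
      if (queue.length === 0) return;
      const batch = queue.splice(0, queue.length); // drain everything queued
      fetch("/api/batch", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(batch),
      }).catch(() => {
        queue.unshift(...batch); // re-queue on network failure
      });
    }, 200);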

Also, do the AJAX POST and GET requests incur the same overhead in terms of speed?

+1  A: 

Rather than continuously hitting the database, cache frequently used data items, with an expiry time based on how often the data changes.
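A minimal sketch of that idea; loadFromDatabase stands in for whatever query you actually run, and all names here are illustrative:

    // Read-through cache with per-entry expiry.
    const cache = new Map<string, { value: unknown; expiresAt: number }>();

    async function getCached<T>(
      key: string,
      ttlMs: number,
      loadFromDatabase: () => Promise<T>
    ): Promise<T> {
      const hit = cache.get(key);
      if (hit && hit.expiresAt > Date.now()) return hit.value as T; // fresh hit
      const value = await loadFromDatabase(); // miss or expired: reload
      cache.set(key, { value, expiresAt: Date.now() + ttlMs });
      return value;
    }

    // e.g. getCached("leaderboard", 30_000, () => queryLeaderboard())
    // where queryLeaderboard is whatever database call you already have.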

Can you reduce your communication with the server by caching some data client side?

The purpose of GET is, as its name implies, to get information; it is intended for reading data to display on the page. Browsers will cache the result of a GET request, and if the same GET request is made again they will serve the cached result rather than rerunning the entire request. This is not a flaw in browser processing but deliberate design, and it makes GET calls more efficient when they are used for their intended purpose. A GET call retrieves data to display in the page; such a call is not expected to change anything on the server, so re-requesting the same data should be expected to return the same result.

The POST method is intended for updating information on the server. Such a call is expected to change the data stored on the server, so two identical POST calls may return completely different results: the first call will have updated at least some of the values that the second call starts from. A POST call therefore always obtains its response from the server rather than from a cached copy of a prior response.
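To make the difference concrete, a small sketch using the browser's fetch API (the /api/scores URL and payload are placeholders):

    // GET responses can be served from the browser's HTTP cache; POST
    // responses never are.
    async function demo(): Promise<void> {
      // May be answered from cache without touching the server:
      await fetch("/api/scores", { cache: "force-cache" });

      // Bypasses the cache and always hits the server:
      await fetch("/api/scores", { cache: "no-store" });

      // A POST always goes to the server, since it is assumed to change state:
      await fetch("/api/scores", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ player: "p1", score: 42 }),
      });
    }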


Mitch Wheat
A: 

The optimization tricks you'd use are generally the same tricks you'd use for a normal website, just with a faster turnaround time. Some things you can look into:

  • Prefetch GET requests that have high odds of being loaded by the user (see the prefetch sketch after this list)
  • Use a caching layer in between, as Mitch Wheat suggests. Depending on your technology platform, look into memcached; it's quite common and there are client libraries for just about everything
  • Look at denormalizing data that is going to be queried at a very high frequency. Assuming reads are more common than writes, you should get a decent performance boost if you move the work to the write side of the data access (as opposed to adding database load via joins)
  • Use delayed inserts so clients don't wait on writes, and let the database server optimize the batching (see the write-buffer sketch after this list)
  • Make sure you have intelligent indexes on your tables and figure out what benefit they're providing. If you're rebuilding the indexes very frequently due to a high write:read ratio, you may want to scale back the queries
  • Look at retrieving data with more general queries and filtering it when it makes it to the business layer of the application. MySQL (for instance) uses a query cache that matches against the exact query text, so it can make sense to pull all results for a given set even if you're only going to display a fraction of them
  • For writes, look at running asynchronous queries against the database if your system allows it. Data synchronization doesn't have to be instantaneous; it just needs to appear that way (most of the time)
  • Cache common pages on disk/memory in a fully formatted state so that the server doesn't have to do much processing of them
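For the prefetch bullet above, a minimal sketch; which URLs are worth warming is application-specific, and the paths here are invented:

    // Fire-and-forget GETs so the responses are already in the HTTP cache
    // when the user actually asks for them.
    function prefetch(urls: string[]): void {
      for (const url of urls) {
        fetch(url).catch(() => {
          // Prefetch failures are non-fatal; the real request will retry.
        });
      }
    }

    prefetch(["/api/leaderboard", "/api/profile/recent"]);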
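And for the delayed/asynchronous write bullets, a sketch of a server-side write buffer; db.insertMany is a stand-in for whatever bulk-insert call your database driver actually provides:

    // Accept writes instantly, persist them in batches.
    declare const db: {
      insertMany(table: string, rows: object[]): Promise<void>;
    };

    interface ScoreRow { player: string; score: number; at: number }

    const pending: ScoreRow[] = [];

    function recordScore(row: ScoreRow): void {
      pending.push(row); // the caller returns immediately
    }

    setInterval(async () => {
      if (pending.length === 0) return;
      const rows = pending.splice(0, pending.length);
      try {
        await db.insertMany("scores", rows); // one round trip instead of N
      } catch {
        pending.unshift(...rows); // re-queue on failure; real code needs retry limits
      }
    }, 1000);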

All in all, there are lots of things you can do, and they generally come down to standard development practices applied at a more bite-sized scale.

Jeremy Stanley
A: 

The common tuning tricks would be:

  • use more indexing
  • use less indexing
  • use more or less caching at the filesystem, database, application, or content level
  • provide more bandwidth, more CPU power, or more memory for any of your components
  • minimize the overhead in any kind of communication

Of course, an alternative would be to:

  0. Develop a set of tests, preferably automated, that can determine whether your application works correctly.
  1. Measure the 'speed' of your application.
  2. Determine how fast it has to become.
  3. Identify the source of the performance problems; typical culprits are network throughput, file I/O, latency, locking issues, insufficient memory, and CPU.
  4. Fix the problem.
  5. Make sure it is actually faster.
  6. Make sure it is still working correctly (hence the tests above).
  7. Return to 1.
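For step 1, a small timing wrapper is often enough to get started; this is a generic sketch, not tied to any framework:

    // Wrap any suspect async call and log how long it took.
    async function timed<T>(label: string, work: () => Promise<T>): Promise<T> {
      const start = performance.now();
      try {
        return await work();
      } finally {
        console.log(`${label}: ${(performance.now() - start).toFixed(1)} ms`);
      }
    }

    // Usage, e.g.: const res = await timed("load scores", () => fetch("/api/scores"));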

Jens Schauder
A: 

Have you tried profiling your app?

Not sure what framework you're using (if any), but frankly from your questions I doubt you have the technical skill yet to just eyeball this and figure out where things are slowing down.

Bluntly put, you should not be messing around with complicated ways to try to solve your problem, because you don't really understand what the problem is. You're more likely to make it worse than better by doing so.

What I would recommend you do is time every step. Most likely you'll find that either

  1. you've got one or two really long-running bits, or
  2. you're running a shitton of queries because of an n+1 error or the like (see the sketch below)
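To illustrate the second case, a sketch of the n+1 pattern next to the single-query version; db.query is a placeholder for your driver's query call:

    declare const db: {
      query(sql: string, params?: unknown[]): Promise<unknown[]>;
    };

    // n+1: one query per user on top of the original list query -- slow.
    async function scoresSlow(userIds: number[]): Promise<unknown[]> {
      const out: unknown[] = [];
      for (const id of userIds) {
        out.push(await db.query("SELECT * FROM scores WHERE user_id = ?", [id]));
      }
      return out;
    }

    // One query for all users at once. Placeholder expansion for IN (...)
    // varies by driver; adjust to yours.
    async function scoresFast(userIds: number[]): Promise<unknown[]> {
      return db.query("SELECT * FROM scores WHERE user_id IN (?)", [userIds]);
    }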

When you find what's going wrong, fix it. If you don't know how, post again. ;-)

Sai Emrys