views: 119 · answers: 4
Possible Duplicate:
Very large HTTP request vs many small requests

I need a 2D array (as JSON) to be sent from server to client. It would be around 400×400 in size, with each entry around 4 characters of text, which makes it roughly 640 KB of data.

Which of the following extreme approaches is better?

  1. I make a large HTTP request of all the data at one go.
  2. I make 400 requests, each asking for a single row (around 1.6 KB).

I believe the optimal approach lies somewhere in between. Could anyone give me an idea of the optimal single-request size for this data?
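For reference, the actual wire size of such a payload can be checked with a quick sketch (the 400×400 grid of 4-character entries is taken from the question; the placeholder value "abcd" is hypothetical):

```python
import json

# 400x400 grid of 4-character entries, as described above.
grid = [["abcd"] * 400 for _ in range(400)]
payload = json.dumps(grid)

# JSON adds quotes and separators around every entry, so the serialized
# size is noticeably larger than the bare 640 KB character count.
print(len(payload))
```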

Thanks,

+6  A: 

When making a request you always incur some overhead (a DNS lookup, opening the connection, closing it). So it might be wiser to make 1 big request.

Also, you might get better gzip/deflate compression with 1 big request, since the compressor sees more of the repetitive data at once.
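A minimal sketch of that compression effect, using Python's gzip on a hypothetical, highly repetitive 400×400 payload (real data with varied entries will compress less dramatically):

```python
import gzip
import json

# Hypothetical payload: 400x400 grid of identical 4-character entries.
grid = [["abcd"] * 400 for _ in range(400)]
raw = json.dumps(grid).encode("utf-8")

# gzip exploits the repeated structure and shrinks the payload drastically.
compressed = gzip.compress(raw)
print(len(raw), len(compressed))
```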

Gertjan
You shouldn't give correct answers to those who have an accept rate below 70%. Moreover, is there a way to send JSON asynchronously?
eugeneK
@eugeneK: Who says you shouldn't give correct answers to those with an accept rate below 70%?
Dean Harding
@eugeneK: JSON is no different from a normal HTTP request. The result is parsed as JSON when it gets back from the server. So use the same technique.
Gertjan
+3  A: 

Definitely go with 1 request, and if you enable gzip compression on the server you won't be sending anything near 640 KB.

All the page-speed optimisation tools (e.g. YSlow, Google Page Speed) recommend reducing the number of requests to speed up page load times.

geoff
A: 

A small number of HTTP requests is better, so make one request.

Prav
Please also share WHY it is better in your opinion.
Gertjan
I see a paradox in the air.
Elzo Valugi
+2  A: 

Depends on the application and the effect you wish to achieve. Here are two scenarios:

  • If you are dealing with a GUI, then chunking may be a good idea: a small chunk can update the visuals early, giving the user an illusion of speed. Here you want to chunk the data logically, according to what the GUI needs to render first. The same idea applies to prioritizing any other pseudo-real-time scenario.

  • If, on the other hand, you are just dumping this data, then don't chunk: 100 six-byte requests take significantly longer overall than one 600-byte request, because each request pays the full connection overhead.
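The middle ground between those extremes is to batch rows per request, so the GUI can update incrementally without paying per-row overhead. A sketch (the row counts match the question; the batch size of 50 is a hypothetical choice):

```python
def batch_indices(total_rows, rows_per_request):
    """Yield (start, stop) row ranges covering the grid in chunks."""
    for start in range(0, total_rows, rows_per_request):
        yield start, min(start + rows_per_request, total_rows)

# 400 rows fetched 50 at a time: 8 requests instead of 400 (or 1).
batches = list(batch_indices(400, 50))
print(len(batches))  # 8
```

Each (start, stop) pair would become one HTTP request for that slice of rows; tuning rows_per_request trades request overhead against time-to-first-paint.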

Generally speaking, however, network packet chunking and delivery at the transport layer (TCP) is FAR more optimized than whatever you could come up with at the application layer (HTTP). Multiple requests/chunks mean multiple fragments.

It is generally futile to attempt transport-layer optimizations through an application-layer protocol, and IMHO it defeats the purpose of both :-)

If you have real-time requirements for whatever reason, then you should take control of the transport itself and do the optimization there (but that does not seem to be the case here).

Happy coding

Elf King