So I am in the process of setting up a load-balanced configuration for our web application with nginx.
OK
I would most probably go with sticky sessions to avoid session issues on the load-balanced setup.
So you're not going with load balancing, you're looking at load splitting?
Don't.
Done properly, load balancing means that your chances of a loss of service shrink exponentially with the number of nodes. Say the probability of an individual node being down is 0.05 (i.e. 95% uptime); then the probability of losing both nodes is 0.05 x 0.05 = 0.0025 (99.75% uptime). OTOH if you split the load as you suggest, each user is pinned to a single node, so you lose 1/N of your service whenever a node fails - work out the expected loss for 2 nodes (0.095 x 1/2 for exactly one node down, plus 0.0025 for both down) and you're back at 95% availability, i.e. the second node buys you nothing.
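To make that concrete, here's a minimal sketch of an nginx upstream along those lines - the hostnames and ports are placeholders, and the commented-out ip_hash line marks where stickiness would go if you really did want to pin users to a node:

```nginx
# Both nodes serve every request; nginx stops sending traffic to a node
# after repeated failures and retries it once fail_timeout has passed.
upstream app_backend {
    # ip_hash;   # uncomment to pin each client to one node (the "splitting" approach)
    server app1.internal:8080 max_fails=3 fail_timeout=30s;
    server app2.internal:8080 max_fails=3 fail_timeout=30s;
}

server {
    listen 80;

    location / {
        proxy_pass http://app_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```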
Regarding deployments across multiple nodes, the way I used to do it was to:
1) take a node, call it node1, offline
2) apply release to node1
3) verify that the deployment was successful
4) bring node1 back online
5) take node2 offline
6) rsync from node1 to node2
7) run rsync again to check it had completed
8) bring node2 back online
then repeat steps 5-8 for each additional node (the rsync part of that loop is sketched below)
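For steps 6 and 7, something along these lines, run on the node being updated - the paths, hostname and deploy user are placeholders for whatever your layout actually is:

```sh
#!/bin/sh
# Steps 6-7: pull the verified release from node1, then run rsync a second
# time - a second pass that transfers nothing confirms the copy completed.
set -e

SRC="deploy@node1:/var/www/app/"   # node already carrying the new release
DEST="/var/www/app/"               # local web root on the node being updated

# First pass: copy the release (preserve perms/times, remove stale files)
rsync -az --delete "$SRC" "$DEST"

# Second pass: --itemize-changes prints anything that still differs;
# empty output means the sync is complete and the node can go back online
rsync -az --delete --itemize-changes "$SRC" "$DEST"
```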
What would be the best approach to sync these user files across all the web servers?
The method above is for deployments - for user-submitted data you need to distribute the content at the time it is submitted. I use custom scripts for this. In the event that a node is offline when the update occurs, it can be resynced (steps 6-7) before making it available again.
The scripts I used sent a request to each node asking it to copy from the originator of the request - that way the scripts could run with short timeouts, and the source content was guaranteed to be available.
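Something in this spirit - the /sync endpoint, hostnames and paths are all invented for the sake of illustration; the real scripts will differ:

```sh
#!/bin/sh
# Run on the node that just accepted an upload: tell each peer to copy the
# new file from us (the originator), so the source is known to be available.
FILE="$1"                       # path of the new file, relative to the web root
SELF="$(hostname -f)"           # this node holds the authoritative copy

for peer in node2.internal node3.internal; do
    # Short timeout: a peer that is down simply misses the update and gets
    # resynced (steps 6-7 above) before it is brought back online.
    curl -fs --max-time 5 "http://$peer/sync?src=$SELF&path=$FILE" \
        || echo "WARN: $peer did not ack $FILE" >&2
done
```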
In terms of implementing the load balancing: although you can spend lots of money on sophisticated hardware, I've yet to see anything that works better than round-robin, for lots of reasons - not least that the failover is implemented transparently at the client.
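If that means round-robin DNS (the "failover at the client" part suggests it does), the whole setup is just one A record per node under the same name - example.com and the 192.0.2.x addresses below are placeholders:

```
; Zone file fragment: clients receive both addresses and most will quietly
; try the second one if the first doesn't respond.
www.example.com.  300  IN  A  192.0.2.10
www.example.com.  300  IN  A  192.0.2.11
```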
HTH
C.