views: 70
answers: 4

Hey guys,

Currently my company has a 3-server set-up: 2 web boxes behind a load-balancer and another box not behind the load-balancer (used for Admin, CMS and stats). Due to the state of funds at the moment we are looking to decommission the single box which is not behind the load-balancer. That box has our CMS on it, and a media subdomain points to /home/web/media on it. The problem is that if we remove the box and port all the code (PHP) over to the load-balanced web boxes, then when a file is uploaded in the CMS it will only land in the media directory of whichever box the user hits. So if a user hits web1 and uploads a file, that file will only be accessible in the /home/web/media directory of web1. We therefore need to somehow rsync the /media directories on web1 and web2 when a file is uploaded, or do something else.
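For example, the crude approach we have in mind is to shell out to rsync after each upload, roughly along these lines (the peer hostname and the helper name are just placeholders, not code we actually have):

    // Rough idea only: after the CMS stores an upload, mirror /home/web/media to the other box.
    // "web2.example.com" stands in for whichever web box did not receive the upload.
    function mirror_media_to_peer($peerHost) {
        system("rsync -az /home/web/media/ " . escapeshellarg($peerHost) . ":/home/web/media/");
    }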

What would you recommend to be the best way to accomplish this?

Any help would be much appreciated.

Just for information purposes we are running PHP 5.2, Red Hat Enterprise Linux and Apache 2.0.52

Regards,

Owen

+1  A: 

How about using a network share for the media, so it is available on both servers at all times?

CharlesLeaf
It would be really slow without some form of caching. Plus it kind of defeats the point of having 2 machines, in case one of them dies.
Quamis
@Quamis If you keep the network storage separate from both machines, with scheduled off-site backups, you're already one step ahead. As for caching, you could use a CDN with a (short) cache time. Good disks and network connections help as well. There are probably a lot of other options too; this is just one of them that I'd imagine.
CharlesLeaf
A: 

    // Push an uploaded file to another web box over SSH/SCP.
    function scp($username, $host, $file, $destination, $port = 22){

        // Strip the file name to get the remote directory that has to exist
        $dirs = explode("/", $destination);
        array_pop($dirs);
        $dirs = implode("/", $dirs);

        $remote = escapeshellarg($username."@".$host);

        // Create the destination directory on the remote host, then copy the file over
        system("ssh -p ".(int)$port." ".$remote." mkdir -p ".escapeshellarg($dirs));
        system("scp -P ".(int)$port." ".escapeshellarg($file)." ".$remote.":".escapeshellarg($destination));
    }

Of course you need www-data's public key authorized on both servers, and write privileges on the destination directory.
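A minimal sketch of how this could be called from an upload handler, using the signature above (the hostname, form field and paths are placeholders):

    // Hypothetical upload handler: store the file locally, then push it to the peer box.
    $destination = "/home/web/media/" . basename($_FILES["upload"]["name"]);
    if (move_uploaded_file($_FILES["upload"]["tmp_name"], $destination)) {
        // "web2.example.com" stands in for the other load-balanced box
        scp("www-data", "web2.example.com", $destination, $destination);
    }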

sathia
+1  A: 

You have a few choices (some have been already mentioned):

  • Store uploaded files in a database (not recommended for files you will need fast random access to).
  • Use a network filesystem such as NFS or SMB, and store uploaded files there. (You can also have code copy the uploaded file to the other server's filesystem exposed over NFS or SMB; see the sketch after this list.)
  • Use a clustered filesystem such as GFS or OCFS.
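A minimal sketch of the network-filesystem option, assuming /mnt/media is a share (NFS or SMB) mounted on both web boxes; the mount point and form field name are placeholders:

    // Hypothetical upload handler: write straight to the shared mount so both
    // web boxes see the file immediately. /mnt/media is assumed to be an NFS/SMB mount.
    $target = "/mnt/media/" . basename($_FILES["upload"]["name"]);
    if (!move_uploaded_file($_FILES["upload"]["tmp_name"], $target)) {
        error_log("failed to store upload on the shared media mount");
    }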
vls
A: 

If you want the file uploaded to both servers in realtime, just do that.

  1. User uploads a file on server1.
  2. server1 processes the upload and stores it in server1/media.
  3. server1 makes an authenticated request to server2/api/uploadFile with curl (see the sketch after these steps).
  4. server2 processes the upload and stores it in server2/media.
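A minimal sketch of step 3, assuming a hypothetical server2/api/uploadFile endpoint protected by a shared token (the URL, field names and token are all placeholders):

    // Forward the stored file from server1 to server2's upload API.
    $ch = curl_init("http://server2/api/uploadFile");
    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_POSTFIELDS, array(
        "token" => "SHARED_SECRET",                 // placeholder auth token
        "file"  => "@/home/web/media/example.jpg",  // "@" prefix = file upload with PHP 5.2's curl
    ));
    $response = curl_exec($ch);
    curl_close($ch);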

Hope that makes sense.

xmarcos