So, I have three servers, and the idea was to keep all media (images, files, movies) on a media server. I never got around to doing it, but I think I probably should.

So these are the three servers:

- WWW server
- DB server
- Media server

Visitors obviously connect to the WWW server, and currently image resizing and caching are done on the WWW server, since the original files are kept there. So the idea is that the image functions I have, which do all the image compositing, resizing and caching, would just pipe the command over to the media server, which would return the path to the finished file.

What I don't know is how to handle functions such as file_exists() and figuring out image dimensions when needed, before any image manipulation even comes into play. Do I pipe all these commands to the other server via HTTP? I was thinking along the lines of doing it this way:

function image(##ARGS##){
    if ($GLOBALS["media_host"] != "localhost"){
        // Ask the media server for the path and dimensions (one value per line).
        list($src, $width, $height) = file("http://{$GLOBALS['media_host']}/imgfunc.php?args=##ARGS##");
        return "<img src='$src' width='$width' height='$height'>";
    }
    .... do other stuff here
}

Am I approaching this the wrong way? Is there a better way to do this?

A: 

You need to open a port on the media server to retrieve information and that's exactly what you're doing. Your approach is fine (so long as you're ok with this functionality being made available on a public port).

I'm assuming that any media file info isn't being stored in a database.
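A minimal sketch of what such an endpoint on the media server might look like. The file name imgfunc.php mirrors the question; the media root, hostname, and helper function here are assumptions. The reply is one value per line so the caller can parse it with file():

```php
<?php
// Sketch of the media-server side (a hypothetical imgfunc.php).
// Given a file under an assumed media root, answer with the public URL,
// width and height, one per line, matching the file() call in the question.

function image_info($file, $public_base) {
    if (!file_exists($file)) {
        return null;  // caller should turn this into a 404
    }
    list($width, $height) = getimagesize($file);
    return array($public_base . basename($file), $width, $height);
}

// In imgfunc.php itself, something like:
//   $info = image_info('/var/media/' . basename($_GET['args']),  // crude sanitisation
//                      'http://media.example.com/');             // assumed hostname
//   if ($info === null) { header('HTTP/1.0 404 Not Found'); exit; }
//   echo implode("\n", $info), "\n";
```

Note that basename() on the incoming argument is only the crudest protection against directory traversal; a real endpoint would validate more carefully.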

webbiedave
+2  A: 

Not sure what you're going for, but I also keep media files on a separate server from my code:

I use Amazon S3 to store my media files, and I simply include a base tag in the head of my HTML file to make it all work. Basically, it makes all relative file paths and links resolve against the other server.

<base href="https://s3.amazonaws.com/BUCKET/PROJECT/FOLDER/" />

There are some who will probably object to my use of the base href tag in this way, but it works really well for me. It lets me direct all the image-loading bandwidth to Amazon and away from my server.

swt83
+1  A: 

Think of the media server as an S3 bucket; that will probably make it easier to understand what should happen where. Install lighttpd on the media server and serve the images directly from there. For storage: process the image on the main server, upload it to the media server, and store all the info related to the image in the database. That way, when you want to serve it, you already have all the info available, and you can assume for all the right reasons that the image is still there :)

As for the way you want to do it, I think it would cause a serious bottleneck and generate a lot of network traffic. You are, in effect, trying to implement the "messages" found in distributed systems, and we all know the pitfalls involved there. I say keep it simple!
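A hedged sketch of the "store all the info at upload time" idea, using PDO; the table and column names are made up for illustration. With the dimensions in the database, serving a page never needs file_exists() or a network round trip:

```php
<?php
// Record image metadata at upload time, then build tags from the database
// alone when serving. Table/column names and the media hostname are assumptions.

function record_image(PDO $db, $path, $width, $height) {
    $stmt = $db->prepare(
        'INSERT INTO images (path, width, height) VALUES (?, ?, ?)');
    $stmt->execute(array($path, $width, $height));
}

function image_tag(PDO $db, $path) {
    $stmt = $db->prepare('SELECT width, height FROM images WHERE path = ?');
    $stmt->execute(array($path));
    $row = $stmt->fetch(PDO::FETCH_ASSOC);
    if ($row === false) {
        return null;  // nothing was recorded for this path at upload time
    }
    // The media host would come from configuration; hard-coded for the sketch.
    return "<img src='http://media.example.com/$path'"
         . " width='{$row['width']}' height='{$row['height']}'>";
}
```

The trade-off, as the answer notes, is that you trust the upload pipeline: the database says the image is there, and you assume it still is.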

Sabeen Malik
+1  A: 

To find out whether a remote file exists, send a HEAD request:

$curl = curl_init();
curl_setopt($curl, CURLOPT_URL, "url here");
curl_setopt($curl, CURLOPT_HEADER, true);         // include response headers
curl_setopt($curl, CURLOPT_NOBODY, true);         // HEAD request: don't fetch the body
curl_setopt($curl, CURLOPT_RETURNTRANSFER, true); // return rather than echo the headers
curl_exec($curl);
$code = curl_getinfo($curl, CURLINFO_HTTP_CODE);
curl_close($curl);

if ($code == 200) {
    // The file exists.
}
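For the other half of the question (image dimensions), getimagesize() accepts a URL as well as a local path when allow_url_fopen is enabled, so the same over-HTTP approach covers that too. A sketch, with the caveat that a URL fetch may download much of the file, so the result should be cached rather than recomputed on every page view:

```php
<?php
// Returns array(width, height), or null if the source is missing, unreadable,
// or not an image. Works on local paths; also on URLs if allow_url_fopen is on.
function image_dims($src) {
    $info = @getimagesize($src);
    if ($info === false) {
        return null;
    }
    return array($info[0], $info[1]);
}

// $dims = image_dims('http://media.example.com/photo.jpg');  // hypothetical URL
```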
mattbasta
A: 

Unless you're making significant cost savings by using a dumb (non-scripting) host, this is a bad idea: it doubles the probability that your system will fail. If you've got two hosts, the best solution is to have them as exact mirrors. If you need session/data replication, this does introduce a small overhead, but it's worth the cost. Distribute the load via round-robin DNS. It also means you only need to back up one site, and it's scalable from a single server (e.g. your development box) up to... well, lots.

I'd also recommend that you don't store modified versions of submitted files (except for an initial transformation to prevent users posting something you don't want on your server). Instead, transform the content on demand (and use server-side caching).
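A minimal sketch of the transform-on-demand-with-caching pattern. The transform itself is passed in as a callback so the caching logic stands alone; the paths and the resize step in the usage comment are hypothetical:

```php
<?php
// Serve a derived version of $source from $cache_file, regenerating it via
// $transform only when the cache is missing or older than the source.
function cached_transform($source, $cache_file, $transform) {
    if (!file_exists($cache_file)
            || filemtime($cache_file) < filemtime($source)) {
        // e.g. $transform resizes $source with GD and writes it to $cache_file
        call_user_func($transform, $source, $cache_file);
    }
    return $cache_file;
}

// Usage (the resize callback is hypothetical):
// cached_transform('/media/orig/photo.jpg', '/media/cache/photo_200.jpg',
//     function ($src, $dst) { /* GD resize $src into $dst at 200px wide */ });
```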

One advantage of having content available via different DNS names is that most browsers will then run more requests in parallel, but you can still get this with cloned servers by using multiple vhosts or wildcard vhosts.

C.

symcbean