Hi, I am hoping someone can help me understand why I am getting varied results with some PHP code I have written to upload files to S3 and then call an EC2 instance to perform actions on the uploaded file.

Here is the order in which I do things:

1) Use the S3 class to put the file:

$result = $s3->putObjectFile($uploadDIR, $bucket, $name, S3::ACL_PUBLIC_READ);

2) Check $result:

if ($result === true) {
    // file made it to S3

3) Use cURL to call the EC2 instance and perform an action on the file in S3.
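For context, step 3 might look roughly like the sketch below. The endpoint URL and parameter names here are placeholders I've made up, not part of the original code:

```php
<?php
// Hypothetical processing endpoint on the EC2 instance --
// the URL and field names are placeholders.
$ch = curl_init('http://my-ec2-instance.example.com/process.php');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query(array(
    'bucket' => $bucket,
    'name'   => $name,
)));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // capture the response as a string
$response = curl_exec($ch);
curl_close($ch);
```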

I am using this with video files. When I upload a small video file (e.g. 3 MB) it works fine, but for a larger video (e.g. 80 MB) the code doesn't seem to get past step 1. The file is moved to S3 OK, but my guess is that after a while PHP gives up waiting to see whether $result is true and so never executes the rest of the code.

What is the best way to handle something like this? How can I detect that the file has been uploaded to S3 and then run some code once it has?

Many thanks, Stephen

+3  A: 

For big files, the time it takes to upload them will probably exceed the script's max_execution_time.

You could just use the set_time_limit() function, but it's still probably not a good idea for web pages, as the script would simply "hang" there without any user feedback (output to the browser).
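For reference, raising the limit for just the current request is a one-liner:

```php
<?php
// Allow this request to run longer than max_execution_time;
// 0 removes the limit entirely (use with care).
set_time_limit(0);

// ... perform the long S3 upload here ...
```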

This would probably be nicer:

  1. Use set_time_limit() so the script doesn't die.
  2. Store the file name in a temporary location (DB, session, etc.) and get a unique ID for it.
  3. Output a page with some AJAX code to (repeatedly) query a second script for the file's status (FINISHED, FAILED, UNDEFINED?)
  4. On the original script, wait for the action to finish and update the DB with the result.
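The status-polling side of the scheme above could be sketched like this, assuming the status is kept in the session keyed by the upload's unique ID (the session key and parameter names are made up for illustration):

```php
<?php
// status.php -- polled by the AJAX code from step 3.
// Key names ('upload_status', 'id') are illustrative only.
session_start();

$id = isset($_GET['id']) ? $_GET['id'] : '';
$statuses = isset($_SESSION['upload_status']) ? $_SESSION['upload_status'] : array();

// Report UNDEFINED until the upload script records FINISHED or FAILED
$status = isset($statuses[$id]) ? $statuses[$id] : 'UNDEFINED';

header('Content-Type: text/plain');
echo $status;
```

The original (long-running) script would then set `$_SESSION['upload_status'][$id] = 'FINISHED';` (or `'FAILED'`) once step 4 completes, and the AJAX poller would stop on either terminal status.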
jcinacio