views: 697
answers: 2
So, I'm using Paperclip and AWS-S3, which is awesome. And it works great. Just one problem, though: I need to upload really large files, as in over 50 megabytes, and when I do, nginx dies. So apparently Paperclip stores things to disk before going to S3?

I found this really cool article, but it also seems to be going to disk first, and then doing everything else in the background.

Ideally, I'd be able to upload the file in the background... I have a small amount of experience doing this with PHP, but nothing with Rails as of yet. Could anyone point me in a general direction, even?

+1  A: 

Maybe you have to increase the timeout in the nginx configs?
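I'm not 100% sure, but nginx also caps the request body size at client_max_body_size (1 MB by default) and answers anything bigger with a 413 error, so that may be what's actually killing your uploads, not just the timeout. Something like this in nginx.conf (the values here are just examples):

    # inside the http, server, or location block
    client_max_body_size 300m;   # raise the 1m default so big uploads aren't rejected with 413
    client_body_timeout  300s;   # give slow clients more time to send the request body
    send_timeout         300s;   # and more time for the response on the way back out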

Lichtamberg
I will look into that. Hm. Thanks.
Steve Klabnik
A: 

You might be interested in my post here:

http://www.railstoolkit.com/posts/fancyupload-amazon-s3-uploader-with-paperclip

It's about uploading multiple files (with progress bars, simultaneously) directly to S3 without hitting the server.
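Roughly, the trick behind "without hitting the server" is S3's browser-based POST uploads: your Rails app only signs a policy document, and the browser then sends the file straight to the bucket. Here's a sketch of the signing step; the bucket name, key prefix, and credentials are placeholders, and this is the old signature-v2 style that the aws-s3-era tools use:

    require 'base64'
    require 'json'
    require 'openssl'

    S3_BUCKET         = 'my-bucket'   # placeholder
    AWS_ACCESS_KEY_ID = 'AKIA...'     # placeholder
    AWS_SECRET_KEY    = 'secret'      # placeholder

    # Policy document describing what the browser is allowed to upload.
    policy_document = {
      'expiration' => (Time.now.utc + 3600).strftime('%Y-%m-%dT%H:%M:%SZ'),
      'conditions' => [
        { 'bucket' => S3_BUCKET },
        ['starts-with', '$key', 'uploads/'],
        { 'acl' => 'private' },
        ['content-length-range', 0, 500 * 1024 * 1024]  # allow files up to 500 MB
      ]
    }

    policy    = Base64.encode64(policy_document.to_json).gsub("\n", '')
    signature = Base64.encode64(
      OpenSSL::HMAC.digest(OpenSSL::Digest.new('sha1'), AWS_SECRET_KEY, policy)
    ).gsub("\n", '')

    # The upload form then POSTs to http://my-bucket.s3.amazonaws.com/ with the
    # fields key, AWSAccessKeyId, acl, policy, and signature plus the file
    # itself, so the bytes never pass through Rails at all.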

Thanks for the link! The only problem that I can see with this is that FancyUpload is in Flash, and Flash has to load the entire file into memory before starting the upload. So if I want to upload a 300 MB file, I have to have that much RAM... the Flash uploaders I tested all made my Firefox crash, and I have 4 GB in my machine. However, the article is still interesting, and I'll be sure to refer to it later...
Steve Klabnik
Oh, that's indeed a disadvantage! I didn't know about that.