views:

60

answers:

3

I have a Rails 3 app with Paperclip, with the intent to store data on S3.

In the app, Users belong to an instance.

I would like the stored data scoped per instance across all the models, and would like to prevent a user from Instance A from accessing, or being able to load, data from Instance B.

What's the best way to handle this? thanks

+1  A: 

The easiest way to do this is probably to store the file with a random, unguessable name. Then you can show the URLs to users in Instance A, but the Instance B users won't be able to guess them.

It's not bulletproof security, but it's good enough. Facebook, for instance, uses this approach for user photos.
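A minimal sketch of generating such a random, unguessable name (the original filename here is illustrative):

```ruby
require 'securerandom'

original = "report.pdf"

# 16 random bytes -> 32 hex characters; infeasible to guess by enumeration.
token = SecureRandom.hex(16)

# Keep the extension so the content type is still recognizable.
unguessable = "#{token}#{File.extname(original)}"
```

The downside noted in the comment below still applies: the downloaded file will carry the random name unless you serve it with a `Content-Disposition: attachment; filename="report.pdf"` header.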

tfe
Thanks, that's an interesting idea. My concern is that this data is more sensitive than Facebook photos. Also, users will want to download files, and having a crazy name might not be attractive to the user.
AnApprentice
+2  A: 

You could try what is said on this page:

http://thewebfellas.com/blog/2009/8/29/protecting-your-paperclip-downloads

The specifics are under the section "No more streaming, time for a redirection".

Summary: S3 has four canned access policies; by using the authenticated-read policy, S3 provides a way to generate an authenticated URL for private content that only works for a specified period of time.

I haven't actually done this, so please let me know if it works for you. :-)

(reposted from my answer here: http://stackoverflow.com/questions/4003828/aws-s3-ruby-on-rails-heroku-security-hole-in-my-app)
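As a rough illustration of the mechanism behind those authenticated URLs: S3's query-string authentication signs the request with your secret key and an expiry timestamp. This is a hedged sketch only, not a drop-in implementation; the bucket, key, and credentials are placeholders:

```ruby
require 'openssl'
require 'base64'
require 'cgi'

# Sketch of S3 query-string authentication (signature v2 style).
# All arguments here are placeholders, not real credentials.
def expiring_s3_url(bucket, key, access_key_id, secret_access_key, expires_in = 600)
  expires = Time.now.to_i + expires_in

  # S3 signs the canonical string "VERB\nContent-MD5\nContent-Type\nExpires\n/bucket/key".
  string_to_sign = "GET\n\n\n#{expires}\n/#{bucket}/#{key}"

  hmac      = OpenSSL::HMAC.digest('sha1', secret_access_key, string_to_sign)
  signature = CGI.escape(Base64.strict_encode64(hmac))

  "https://s3.amazonaws.com/#{bucket}/#{key}" \
    "?AWSAccessKeyId=#{access_key_id}&Expires=#{expires}&Signature=#{signature}"
end
```

After `Expires` passes, S3 rejects the URL with 403, so a leaked link goes stale on its own. In practice you would let your S3 library (or Paperclip, as the answer below shows) generate this for you.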

Adam21e
+1 for the in-app protected downloads. The idea is that you make sure your S3 content is private; a user then requests the file from your app, which generates a time-limited access link for S3. That URL is set as a content header (X-Sendfile) in your app's response to your web server (nginx or Apache recommended), and the web server streams the content to the user, without users seeing anything other than the original pretty URL used to request the file from your app.
Jeremy
+1  A: 

I actually just implemented authorized S3 URLs in my Ruby on Rails 3 application with Paperclip. Let me share how I accomplished this.

What I did, and what you probably want, is quite easy to implement. Let me give you an example:

FileObject model

has_attached_file :attachment,
  :path           => "files/:id/:basename.:extension",
  :storage        => :s3,
  :s3_permissions => :private,   # uploaded files are not publicly readable
  :s3_credentials => File.join(Rails.root, 'config', 's3.yml')

FileObjectsController controller

  def download
    @file_object = FileObject.find(params[:id])
    redirect_to(@file_object.attachment.expiring_url(10))
  end

I believe this is quite straightforward: you add the Paperclip attachment to the FileObject model and have an action (download, for example) in the FileObjectsController. This way you can do some application-level authorization from within your controller with a before_filter or something.
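As a sketch of the check such a before_filter might perform, given the instance-based model in the question (the structs and the `instance_id` column are stand-ins for the real models, which aren't shown):

```ruby
# Minimal stand-ins for the app's models; in the real app these would be
# ActiveRecord models, with User and FileObject each belonging to an Instance.
FileObject = Struct.new(:id, :instance_id)
User       = Struct.new(:id, :instance_id)

# A user may only access files belonging to their own instance.
def authorized?(user, file_object)
  user.instance_id == file_object.instance_id
end
```

In the controller, a before_filter would run this check against `current_user` and render a 403 (or redirect) when it returns false, before the `download` action ever asks S3 for an expiring URL.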

The expiring_url() method (provided by Paperclip) on @file_object.attachment basically requests from Amazon S3 a key which makes the file accessible with that particular key. The first argument of expiring_url() is an integer: the number of seconds after which you want the provided URL to expire.

In my application it is currently set to 10 (@file_object.attachment.expiring_url(10)), so when the user requests a file, the user ALWAYS has to go through my application (at, for example, myapp.com/file_objects/3/download) to get a new valid URL from Amazon, which the user then instantly uses to download the file, since we're using redirect_to in the download action. So basically, 10 seconds after the user hits the download action the link has already expired, while the user has (or still is) happily downloading the file, and it remains protected from any non-authorized users.

I have even tried expiring_url(1), so that the URL expires almost instantly after the user triggers the Amazon S3 request for the URL. This worked for me locally, but I never used it in production; you can try that too. However, I set it to 10 seconds to give the server a short period of time to respond. It works great so far, and I doubt anyone will hijack someone's URL within 10 seconds of its creation, let alone know what the URL is.

An extra security measure I took is to generate a secret key for every file on create, so my URLs always look like this:

has_attached_file :attachment,
  :path => "files/:id/:secret_key/:basename.:extension"

So every URL has its unique secret_key in its path, making it harder to hijack within the time the URL is accessible. Mind you that, while the URL to your file remains the same, the accessibility comes from the additional expiring parameters that Amazon S3 provides:

http://s3.amazonaws.com/mybucket/files/f5039a57acc187b36c2d/my_file.pdf?AWSAccessKeyId=AKIAIPPJ2IPWN5U3O1OA&Expires=1288526454&Signature=5i4%2B99rUwhpP2SbNsJKhT/nSzsQ%3D

Notice this part, which contains the expiring signature Amazon generates and which makes the file temporarily accessible:

my_file.pdf?AWSAccessKeyId=AKIAIPPJ2IPWN5U3O1OA&Expires=1288526454&Signature=5i4%2B99rUwhpP2SbNsJKhT/nSzsQ%3D

That's what it's all about, and it changes with every request for your file made through the download action.

Hope this helps!

Michael van Rooijen