views: 1387

answers: 2

We are building an application that will be storing a lot of images. We have a virtualized environment at GoGrid.com and we were hoping to utilize their cloud storage.

I'm not sure exactly how to word this, but if our code specifies the UNC path and credentials every time it stores or retrieves an image, that seems terribly inefficient (connect, get image, disconnect).

If we have a large volume of images, or many users doing this at once, it seems like it would bring any normal server to its knees.

So my question is: short of attaching huge drives to the server the website runs on, how should we accomplish this? Again, we are opting for the GoGrid cloud storage over Amazon S3 since everything stays under one nice umbrella. The cloud storage is accessible via a UNC path and a specific username/password.

Thanks!

A: 

If you think you might change the file-access method over time, be sure to keep it abstract. Start with a simple UNC implementation; you can then swap in a web service or REST implementation later.
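That abstraction might be sketched as follows. The thread is about .NET, but the shape is the same in any language; this Python sketch uses hypothetical names (`ImageStore`, `UncImageStore`) and a plain directory path standing in for the UNC share:

```python
import shutil
from abc import ABC, abstractmethod
from pathlib import Path

class ImageStore(ABC):
    """Abstract storage interface; callers never see UNC paths directly."""

    @abstractmethod
    def put(self, name: str, source: Path) -> None: ...

    @abstractmethod
    def get(self, name: str, destination: Path) -> None: ...

class UncImageStore(ImageStore):
    """Simple first implementation: copy files to/from a share path.

    root would be something like Path(r"\\storage\images") for a real
    UNC share; a local directory behaves identically for testing.
    """

    def __init__(self, root: Path):
        self.root = root

    def put(self, name: str, source: Path) -> None:
        shutil.copy(source, self.root / name)

    def get(self, name: str, destination: Path) -> None:
        shutil.copy(self.root / name, destination)
```

Because callers depend only on the `ImageStore` interface, a later REST- or HTTP-backed implementation can drop in without touching calling code.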

John Saunders
Would it make any sense to set up some sort of mapped drive and have the code get and put files on that mapped drive (versus connecting for each request)?
extreme
If I were you, I'd leave it as a UNC path, connecting on each request, and get the rest of the system working. Once it works, I'd check whether the file-access method is actually causing performance problems. I wouldn't spend time assuming that it does.
John Saunders
A: 

I have not worked with high-volume servers before, but it sounds to me like you want something very lightweight in front of a UNC path. Facebook's architecture is based on a lightweight HTTP implementation and load-balanced blade servers over a single 10 TB filesystem, but it doesn't sound like you're quite at that scale yet. If you want to get the most out of each connection, pull multiple files in a single connect/read/disconnect sequence. Be warned that this will make your users wait a little longer, and a 20 ms slowdown at Google reportedly led to something like a 20% drop in usage. Aside from that, I don't know of any faster way than simply accessing the path.
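The "pull multiple files per connect" idea amounts to amortizing the connection setup cost over a batch of reads. A minimal sketch (Python, hypothetical `fetch_batch` name; a directory path stands in for the UNC share):

```python
from pathlib import Path

def fetch_batch(root: Path, names: list[str]) -> dict[str, bytes]:
    """Read many images in one pass over the share instead of paying
    one connect/read/disconnect cycle per image.

    With a real UNC share, the expensive part (authentication, session
    handshake) happens once when the first path under root is touched,
    not once per file.
    """
    return {name: (root / name).read_bytes() for name in names}
```

The trade-off noted above applies: the caller waits for the whole batch before the first image is usable.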

I'm wrestling with a similar problem accessing files stored on UNC shares, where the user can connect one minute, can't the next, then can again a minute later. I suspect connections are not being closed after our app accesses the server, but we only use File.Copy and File.Exists from System.IO. How do we explicitly manage the "connect/read/disconnect" sequence?
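One way to manage the session explicitly on Windows is to open and close the share connection around the System.IO calls, for example by invoking `net use` (the P/Invoke equivalent is `WNetAddConnection2`). A hedged sketch in Python showing the command construction and the connect/act/disconnect wrapper; the share, user, and password values are placeholders:

```python
import subprocess

def connect_cmd(share: str, user: str, password: str) -> list[str]:
    """Build the 'net use' command that opens an authenticated
    session to a UNC share (e.g. share = r'\\storage\images')."""
    return ["net", "use", share, password, f"/user:{user}"]

def disconnect_cmd(share: str) -> list[str]:
    """Build the command that closes the session again."""
    return ["net", "use", share, "/delete", "/y"]

def with_share(share: str, user: str, password: str, action) -> None:
    """Run 'action' with the share connected, then always tear the
    session down, so connections can't accumulate on the server."""
    subprocess.run(connect_cmd(share, user, password), check=True)
    try:
        action()
    finally:
        subprocess.run(disconnect_cmd(share), check=True)
```

Wrapping every File.Copy/File.Exists burst in an explicit connect/disconnect like this makes the session lifetime deterministic instead of depending on the OS redirector's idle timeout.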
flipdoubt