+1  A: 

I agree with Omar Al Zabir on high availability web sites:

Do: Use Storage Area Network (SAN)

Why: Performance, scalability, reliability and extensibility. SAN is the ultimate storage solution. A SAN is a giant box running hundreds of disks inside it. It has many disk controllers, many data channels, and many cache memories. You have ultimate flexibility in RAID configuration, adding as many disks as you like to a RAID, sharing disks across multiple RAID configurations, and so on. A SAN has faster disk controllers, more parallel processing power and more disk cache memory than the regular controllers that you put inside a server, so you get better disk throughput when you use a SAN instead of local disks. You can increase and decrease volumes on-the-fly, while your app is running and using the volume. A SAN can automatically mirror disks, and upon disk failure it automatically brings up the mirrored disks and reconfigures the RAID.

Full article is at CodeProject.

Because I don't personally have the budget for a SAN right now, I rely on option 1 (ROBOCOPY) from your post. But the files I'm saving are not unique and can be recreated automatically if they are lost for some reason, so absolute fault tolerance isn't necessary in my case.
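For what it's worth, the ROBOCOPY option is usually just a scheduled mirror job, something like `robocopy \\web1\files \\web2\files /MIR /R:2 /W:5`. As a rough illustration of what `/MIR` does (copy new or changed files, delete extras from the destination), here is a minimal cross-platform Python sketch; the recursion-only logic, with no retry or locking handling, is a simplification:

```python
import filecmp
import shutil
from pathlib import Path

def mirror(src: Path, dst: Path) -> None:
    """Mirror src into dst: copy new/changed files, delete extraneous ones."""
    dst.mkdir(parents=True, exist_ok=True)
    src_names = {p.name for p in src.iterdir()}
    # Delete anything in dst that no longer exists in src (robocopy /MIR /PURGE behavior).
    for p in dst.iterdir():
        if p.name not in src_names:
            shutil.rmtree(p) if p.is_dir() else p.unlink()
    # Copy entries from src that are new or whose content differs.
    for p in src.iterdir():
        target = dst / p.name
        if p.is_dir():
            mirror(p, target)
        elif not target.exists() or not filecmp.cmp(p, target, shallow=False):
            shutil.copy2(p, target)
```

Running it again when nothing has changed is a no-op, which is why a scheduled-task setup like this is cheap to repeat.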

DavGarcia
If he already finds DFS overkill, I am not sure how a SAN can be considered less intensive and costly...
icelava
A: 

I suppose it depends on the type of download volume that you would be seeing. I am storing files in a SQL Server 2005 Image column with great success. We don't see heavy demand for these files, so performance is really not that big of an issue in our particular situation.

One of the benefits of storing the files in the database is that it makes disaster recovery a breeze. It also becomes much easier to manage file permissions, since we can control them at the database level.
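To make the round trip concrete, here is a minimal sketch of writing and reading file bytes through a database column. It uses Python with sqlite3 purely as a stand-in (a BLOB column in place of SQL Server 2005's IMAGE, or today's varbinary(max)); the table and column names are made up:

```python
import sqlite3

# Stand-in schema: SQL Server 2005 would use an IMAGE column here;
# sqlite's BLOB plays that role in this sketch.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE files (name TEXT PRIMARY KEY, content BLOB)")

def save_file(name: str, data: bytes) -> None:
    """Insert or overwrite a file's raw bytes under its name."""
    conn.execute(
        "INSERT OR REPLACE INTO files (name, content) VALUES (?, ?)",
        (name, sqlite3.Binary(data)),
    )
    conn.commit()

def load_file(name: str) -> bytes:
    """Read a file's raw bytes back out of the database."""
    row = conn.execute(
        "SELECT content FROM files WHERE name = ?", (name,)
    ).fetchone()
    return bytes(row[0])
```

Because the bytes live in the database, a normal database backup now covers the files too, which is the disaster-recovery benefit mentioned above.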

Windows Server has a File Replication Service that I would not recommend. We have used it for some time and it has caused a lot of headaches.

Jim Petkus
+1  A: 

You need a SAN with RAID. They build these machines for uptime.

This is really an IT question...

sliderhouserules
+2  A: 

When a variety of different application types share information through a central database, storing file content directly in the database is generally a good idea. But it seems you have only one application type in your system design - a web application. If only the web servers ever need to access the files, and no other application interfaces with the database, storing them in the file system rather than the database is still the better approach in general. Of course, it really depends on the detailed requirements of your system.

If you do not perceive DFS as a viable approach, you may wish to consider failover clustering of your file server tier, whereby your files are stored on external shared storage (not an expensive SAN, which I believe is overkill for your case since DFS is already out of your reach) connected to active and passive file servers. If the active file server goes down, the passive one can take over and continue reads/writes to the shared storage. The Windows Server 2008 clustering disk driver has been improved over Windows Server 2003 for this scenario (as per the article), which notes the requirement for a storage solution that supports SCSI-3 persistent reservation (PR) commands.

icelava
+1  A: 

Consider a cloud solution like AWS S3. It is pay-as-you-go, scalable, and highly available.
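As a rough sketch of that approach, the upload side with boto3 might look like the following. The bucket name, the key-building helper, and the prefix scheme are all assumptions for illustration, and it presumes boto3 is installed with AWS credentials configured:

```python
def object_key(prefix: str, filename: str) -> str:
    # Hypothetical key scheme: group downloadable files under one prefix.
    return f"{prefix.rstrip('/')}/{filename}"

def upload_download_file(path: str, bucket: str, prefix: str = "downloads") -> str:
    """Upload a local file to S3 and return the object key it was stored under."""
    import boto3  # assumes boto3 is installed and AWS credentials are configured
    key = object_key(prefix, path.rsplit("/", 1)[-1])
    boto3.client("s3").upload_file(path, bucket, key)
    return key
```

The availability story then becomes S3's problem rather than yours: the web tier only needs to hand out URLs (or presigned URLs) instead of serving the bytes itself.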

Al W
A: 

DFS is probably the easiest solution to set up, although depending on the reliability of your network it can become unsynchronized at times, which requires you to break the link and re-sync; that is quite painful, to be honest.

Given the above, I would be inclined to use a SQL Server storage solution, as this reduces the complexity of your system rather than increasing it.

Do some tests to see if performance will be an issue first.

Bravax