views: 881
answers: 10

I would like to make 2 TB or so available via NFS and CIFS. I am looking for a 2 (or more) server solution for high availability and the ability to load balance across the servers if possible. Any suggestions for clustering or high availability solutions?

This is business use, planning on growing to 5-10 TB over next few years. Our facility is almost 24 hours a day, six days a week. We could have 15-30 minutes of downtime, but we want to minimize data loss. I want to minimize 3 AM calls.

We are currently running one server with ZFS on Solaris and we are looking at AVS for the HA part, but we have had minor issues with Solaris (CIFS implementation doesn't work with Vista, etc) that have held us up.

We have started looking at

  • DRBD on top of GFS (GFS for its distributed-lock capability)
  • Gluster (needs client pieces, no native CIFS support?)
  • Windows DFS (doc says only replicates after file closes?)

We are looking for a "black box" that serves up data.

We currently snapshot the data in ZFS and send the snapshot over the net to a remote datacenter for offsite backup.
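The snapshot-shipping workflow described above amounts to something like the following sketch (pool, dataset, and host names are placeholders, and the snapshot-naming scheme is an assumption; `date -d` is GNU-specific, so on Solaris you would track the last-sent snapshot name some other way):

```
#!/bin/sh
# Sketch: ship an incremental ZFS snapshot to a remote datacenter.
# Dataset and remote host are hypothetical placeholders.
DATASET=tank/shared
REMOTE=backup.example.com
PREV=$(date -d yesterday +%Y%m%d)   # name of the last snapshot already sent
CURR=$(date +%Y%m%d)

# Take today's snapshot
zfs snapshot ${DATASET}@${CURR}

# Send only the delta since the previous snapshot, over SSH,
# and apply it on the remote pool
zfs send -i ${DATASET}@${PREV} ${DATASET}@${CURR} | \
    ssh ${REMOTE} zfs receive -F ${DATASET}
```

Incremental sends keep the transfer proportional to the day's changes rather than the full 2 TB.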

Our original plan was to have a second machine and rsync every 10 to 15 minutes. The issue on a failure would be that ongoing production processes would lose up to 15 minutes of data and be left "in the middle". It would almost be easier for them to start from the beginning than to figure out where to pick up in the middle. That is what drove us to look at HA solutions.
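For reference, that periodic-rsync plan is typically just a cron entry along these lines (the source path and standby host are hypothetical); the data-loss window is exactly the interval between runs, which is the weakness described above:

```
# /etc/crontab sketch: mirror the export to a standby every 15 minutes.
# -a preserves permissions/times/ownership; --delete propagates removals.
*/15 * * * *  root  rsync -a --delete /export/data/ standby:/export/data/
```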

A: 

Are you looking for an "enterprise" solution or a "home" solution? It is hard to tell from your question, because 2TB is very small for an enterprise and a little on the high end for a home user (especially two servers). Could you clarify the need so we can discuss tradeoffs?

David Ackerman
+1  A: 

I would recommend NAS Storage. (Network Attached Storage).

HP has some nice ones you can choose from.

http://h18006.www1.hp.com/storage/aiostorage.html

as well as Clustered versions:

http://h18006.www1.hp.com/storage/software/clusteredfs/index.html?jumpid=reg_R1002_USEN

Sev
+2  A: 

These days 2 TB fits in one machine, so you've got options, from simple to complex. These all presume Linux servers:

  • You can get poor-man's HA by setting up two machines and doing a periodic rsync from the main one to the backup.
  • You can use DRBD to mirror one from the other at the block level. This has the disadvantage of being somewhat difficult to expand in the future.
  • You can use OCFS2 to cluster the disks instead, for future expandability.
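For the DRBD option, a resource definition looks roughly like this (hostnames, disks, and addresses are placeholders; the syntax follows the DRBD 8.x `drbd.conf` format):

```
resource r0 {
  protocol C;               # synchronous: a write completes on both nodes
  on node1 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.1:7788;
    meta-disk internal;
  }
  on node2 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}
```

Protocol C is what gives you the "minimize data loss" property: the primary does not acknowledge a write until the secondary has it.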

There are also plenty of commercial solutions, but 2TB is a bit small for most of them these days.

You haven't mentioned your application yet, but if hot failover isn't necessary, and all you really want is something that will stand up to losing a disk or two, find a NAS that supports RAID-5 with at least 4 drives and hot-swap, and you should be good to go.

pjz
A: 

There are two ways to go at this. The first is to just buy a SAN or a NAS from Dell or HP and throw money at the problem. Modern storage hardware makes all of this easy to do, saving your expertise for more core problems.

If you want to roll your own, take a look at using Linux with DRBD.

http://www.drbd.org/

DRBD allows you to create networked block devices. Think RAID 1 across two servers instead of just two disks. DRBD deployments are usually done using Heartbeat for failover in case one system dies.
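A minimal Heartbeat (v1-style) configuration for that pairing might look like this (node names, interface, and service IP are assumptions); the `haresources` line says node1 normally owns the service IP, the DRBD resource, its filesystem mount, and the NFS daemon, and the survivor takes them all over on failure:

```
# /etc/ha.d/ha.cf (sketch)
keepalive 2          # heartbeat interval, seconds
deadtime 30          # declare a node dead after 30s of silence
bcast eth1           # dedicated heartbeat interface
node node1 node2

# /etc/ha.d/haresources (sketch)
node1 IPaddr::10.0.0.100 drbddisk::r0 Filesystem::/dev/drbd0::/export::ext3 nfs-kernel-server
```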

I'm not sure about load balancing, but you might investigate and see if LVS can be used to load balance across your DRBD hosts:

http://www.linuxvirtualserver.org/
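As a sketch, LVS is driven with `ipvsadm`; for example, round-robin balancing of NFS TCP traffic across two real servers (all addresses are placeholders). Note the caveat: with DRBD in a primary/secondary pair only one node can serve writes at a time, so balancing only applies if both nodes can legitimately serve the data:

```
# Create a virtual service on the cluster IP (NFS, port 2049), round-robin
ipvsadm -A -t 10.0.0.100:2049 -s rr
# Add the two file servers as real servers, using direct routing
ipvsadm -a -t 10.0.0.100:2049 -r 10.0.0.1:2049 -g
ipvsadm -a -t 10.0.0.100:2049 -r 10.0.0.2:2049 -g
```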

To conclude, let me just reiterate that you're probably going to save yourself a lot of time in the long run just forking out the money for a NAS.

bmdhacks
A: 

I assume from the body of your question that you're a business user? I purchased a 6 TB RAID-5 unit from Silicon Mechanics, attached it as a NAS, and my engineer installed NFS on our servers. Backups are performed via rsync to another large-capacity NAS.

A: 

Have a look at Amazon Simple Storage Service (Amazon S3)

http://www.amazon.com/S3-AWS-home-page-Money/b/ref=sc_fe_l_2?ie=UTF8&node=16427261&no=3435361&me=A36L942TSJ2AJA

-- This may be of interest re. High Availability

Dear AWS Customer:

Many of you have asked us to let you know ahead of time about features and services that are currently under development so that you can better plan for how that functionality might integrate with your applications. To that end, we are excited to share some early details with you about a new offering we have under development here at AWS -- a content delivery service.

This new service will provide you a high performance method of distributing content to end users, giving your customers low latency and high data transfer rates when they access your objects. The initial release will help developers and businesses who need to deliver popular, publicly readable content over HTTP connections. Our goal is to create a content delivery service that:

  • Lets developers and businesses get started easily - there are no minimum fees and no commitments. You will only pay for what you actually use.
  • Is simple and easy to use - a single, simple API call is all that is needed to get started delivering your content.
  • Works seamlessly with Amazon S3 - this gives you durable storage for the original, definitive versions of your files while making the content delivery service easier to use.
  • Has a global presence - we use a global network of edge locations on three continents to deliver your content from the most appropriate location.

You'll start by storing the original version of your objects in Amazon S3, making sure they are publicly readable. Then, you'll make a simple API call to register your bucket with the new content delivery service. This API call will return a new domain name for you to include in your web pages or application. When clients request an object using this domain name, they will be automatically routed to the nearest edge location for high performance delivery of your content. It's that simple.

We're currently working with a small group of private beta customers, and expect to have this service widely available before the end of the year. If you'd like to be notified when we launch, please let us know by clicking here.

Sincerely,

The Amazon Web Services Team

pro
S3 does not have particularly fantastic availability. It is great in many ways, but does not fit the "high availability" requirement the OP is asking for.
Stu Thompson
A: 

Your best bet may be to work with experts who do this sort of thing for a living. These guys are actually in our office complex... I've had a chance to work with them on a similar project I was lead on.

http://www.deltasquare.com/About

ben
+2  A: 
Tony Dodd
There needs to be a sort of badge out there for this sort of post.
Kent Fredric
A: 

May I suggest you visit the F5 site and check out http://www.f5.com/solutions/virtualization/file/

jm04469
A: 

You can look at Mirror File System. It does file replication at the file-system level; the same files on both the primary and backup systems are live files.

http://www.linux-ha.org/RelatedTechnologies/Filesystems

fish.ada94