We are a startup and haven't yet invested in hardware to prepare our dev and testing environment. The suggestion is to buy a high-end server, install VMware ESX, and deploy multiple VMs for build, TFS, database, etc. across the testing, staging, and dev environments. We are still not sure what specs to go with, e.g. how much RAM, whether a SAN is needed, what hard drives, which processor, etc.

Please advise.

A: 

This is a very open ended question that really has a best answer of ... "It depends".

If you have the money to get individual machines for everything you need then go that route. You can scale back a little on the hardware with this option.

If you don't have the money to get individual machines, then you may want to look at a top-end server for this. If this is your route, I would look at a quad machine with at least 8GB of RAM and multiple NICs. You can go with a server box that has multiple hard drive bays so you can set up multiple RAID arrays. I recommend RAID 5 so that you have redundancy.
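To put a rough number on the RAID 5 point: usable capacity is one disk's worth less than the raw total, since that space goes to parity. A minimal sketch, where the disk count and size are purely example figures, not a recommendation:

    def raid5_usable_tb(disk_count, disk_size_tb):
        # RAID 5 spreads one disk's worth of parity across the array,
        # so usable space is (N - 1) disks.
        if disk_count < 3:
            raise ValueError("RAID 5 needs at least 3 disks")
        return (disk_count - 1) * disk_size_tb

    # Hypothetical example: an 8-bay box filled with 2 TB drives
    print(raid5_usable_tb(8, 2.0))  # 14.0 TB usable, 2 TB spent on parity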

With something like this you can run multiple VMware sessions without much of a problem.

I set up a 10TB box at my last job. It had 2 NICs, 8GB of RAM, and was a quad machine. Everything included cost about $9.5K.

Mark
A: 

Your hardware requirements will somewhat depend on what kind of reliability you want for this stuff. If you're using this to run everything, I'd recommend having at least two machines you split the VMs over, and if you're using N servers normally, you should be able to get by on N-1 of them for the time it takes your vendor to replace the bad parts.

At the low end, that's 2 servers. If you want higher reliability (i.e. less downtime), then a SAN of some kind to store the data on is going to be required (all the live migration stuff I've seen is SAN-based). If you can live with the 'manual' method (power down both servers, move the drives from server1 to server2, power up server2, reconfigure the VMs to use less memory, and start up), then you don't really need the SAN route.
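To make that N-1 rule concrete, here is a minimal sketch that checks whether your VMs would still fit if one host were down; all the VM and host sizes below are placeholder assumptions:

    def survives_one_host_failure(vm_ram_gb, host_ram_gb, host_count):
        # True if every VM still fits in RAM with one host out for repair.
        return sum(vm_ram_gb) <= (host_count - 1) * host_ram_gb

    # Hypothetical layout: build, TFS, database, staging, dev VMs on 2 x 16 GB hosts
    vm_ram_gb = [2, 2, 4, 4, 4]
    print(survives_one_host_failure(vm_ram_gb, host_ram_gb=16, host_count=2))  # True: 16 GB fits on one host

In practice you'd also leave some headroom for the hypervisor itself rather than filling the last gigabyte.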

At the end of the day, your biggest sizing requirement will be HD and RAM. Your HD footprint will be relatively fixed (at least in most kinds of a dev/test environment), and your RAM footprint should be relatively fixed as well (though extra here is always nice). CPU is usually one thing you can skimp on a little bit if you have to, so long as you're willing to wait for builds and the like.
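One way to get at those HD and RAM numbers is to list each planned VM with its expected footprint and sum them. Every VM name and figure below is an assumption for illustration only:

    # Rough sizing sheet: (vm_name, ram_gb, disk_gb) -- placeholder figures
    planned_vms = [
        ("build",    2,  60),
        ("tfs",      2,  80),
        ("database", 4, 200),
        ("staging",  4, 100),
        ("dev",      4, 100),
    ]

    total_ram_gb  = sum(ram for _, ram, _ in planned_vms)
    total_disk_gb = sum(disk for _, _, disk in planned_vms)
    print(f"At least {total_ram_gb} GB RAM and {total_disk_gb} GB disk, plus headroom for the hypervisor")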

The other nice thing about going all virtualized is that you can start with a pair of big servers and grow out as your needs change. Need to give your dev environment more power? Get another server and split the VMs up. Need to simulate a 4-node cluster? Lower the memory usage of the existing node and spin up 3 copies.

At this point, unless I needed very high-end performance (i.e. I'd need to consider clustering high-end physical servers for performance reasons), I'd go with a virtualized environment. With the extensions on modern CPUs and OS/hypervisor support for them, the hit is not that big if done correctly.

Jonathan
A: 

You haven't really given much information to go on. It all depends on what type of applications you're developing, resource usage, need to configure different environments, etc.

Virtualization provides cost savings when you're looking to consolidate underutilized hardware. If each environment is sitting idle most of the time, then it makes sense to virtualize them.

However, if each of your build/TFS/testing/staging/dev environments will be heavily used by all developers simultaneously during the working day, then there may not be as much cost savings from virtualizing everything.
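If you want a rough rule of thumb for that trade-off, compare the combined average load of the environments against the capacity of the single host you'd consolidate onto; the utilization figures below are invented for illustration:

    def consolidation_makes_sense(avg_utilization, headroom=0.7):
        # Each value is an environment's average load as a fraction of the
        # consolidated host's capacity; leave ~30% headroom for peaks.
        return sum(avg_utilization) <= headroom

    # Mostly idle environments consolidate well...
    print(consolidation_makes_sense([0.05, 0.10, 0.15, 0.20]))  # True  (0.50 <= 0.70)
    # ...but environments that are all busy at once during the workday don't.
    print(consolidation_makes_sense([0.60, 0.70, 0.50, 0.40]))  # False (2.20 > 0.70)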

My advice would be if you're not sure, then don't do it. You can always virtualize later and reuse the hardware.

sascha