Your hardware requirements will depend mostly on how much reliability you want out of the environment. If you're going to run everything this way, I'd recommend at least two machines to split the VMs across, sized so that if you normally run on N servers, you can get by on N-1 of them for however long it takes your vendor to replace the failed parts.
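To make that N-1 rule concrete, the check is just arithmetic: the combined RAM of your VMs has to fit on one fewer host than you normally run. A minimal sketch (every number below is a made-up example, not taken from anything above):

```python
# Rough capacity check for the N-1 rule: does every VM's RAM still fit
# if one server is down for repairs? All figures are hypothetical.

def fits_on_n_minus_one(vm_ram_gb, host_ram_gb, n_hosts):
    """True if the total VM RAM fits on n_hosts - 1 machines."""
    return sum(vm_ram_gb) <= (n_hosts - 1) * host_ram_gb

vms = [8, 8, 4, 4, 2, 2]   # per-VM RAM in GB (example workload)
print(fits_on_n_minus_one(vms, host_ram_gb=32, n_hosts=2))  # True: 28 <= 32
```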
At the low end, that's two servers. If you want higher reliability (i.e. less downtime), then a SAN of some kind to store the data on is going to be required (all the live migration setups I've seen are SAN-based). If you can live with the 'manual' method (power down both servers, move the drives from server1 to server2, power up server2, reconfigure the VMs to use less memory, and start them up), then you don't really need to go the SAN route.
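The 'reconfigure the VMs to use less memory and start them up' step is easy to script. Here's a minimal sketch assuming a libvirt/KVM setup (nothing above commits to a particular hypervisor, and the VM names and memory figure are placeholders):

```python
# Manual-failover step: shrink each VM's configured memory so they all fit
# on the surviving host, then start them. Assumes libvirt/KVM; the VM names
# and the 2 GiB figure are placeholders.
import libvirt

SHRUNK_MEM_KIB = 2 * 1024 * 1024   # cut each VM down to 2 GiB, for example

conn = libvirt.open("qemu:///system")
for name in ["build01", "db01", "web01"]:   # placeholder VM names
    dom = conn.lookupByName(name)
    # Lower the persistent (config) memory limits before booting the guest.
    dom.setMaxMemory(SHRUNK_MEM_KIB)
    dom.setMemoryFlags(SHRUNK_MEM_KIB, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
    if not dom.isActive():
        dom.create()                        # boot it on the surviving host
conn.close()
```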
At the end of the day, your biggest sizing requirements will be disk and RAM. Your disk footprint will be relatively fixed (at least in most kinds of dev/test environments), and your RAM footprint should be relatively fixed as well (though extra here is always nice). CPU is usually the one thing you can skimp on a bit if you have to, so long as you're willing to wait longer for builds and the like.
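For a first sizing pass, a quick footprint tally usually does the job; something like this (all figures are hypothetical examples):

```python
# Quick sizing tally: disk and RAM are the hard requirements, CPU is where
# you can overcommit. Every figure here is a made-up example.

vms = [
    # (name, disk_gb, ram_gb, vcpus)
    ("build01", 100, 8, 4),
    ("db01",     80, 8, 2),
    ("web01",    40, 4, 2),
    ("ci01",     60, 4, 4),
]

disk_gb = sum(v[1] for v in vms)
ram_gb  = sum(v[2] for v in vms)
vcpus   = sum(v[3] for v in vms)

host_cores = 8   # physical cores on one host (example)
print(f"disk: {disk_gb} GB, RAM: {ram_gb} GB, "
      f"vCPU overcommit: {vcpus / host_cores:.1f}:1")
```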
The other nice thing about going fully virtualized is that you can start with a pair of big servers and grow as your needs change. Need to give your dev environment more power? Get another server and split the VMs across it. Need to simulate a 4-node cluster? Lower the memory allocation on the existing node and spin up three copies.
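Spinning up those copies is also scriptable. Here's a rough sketch, again assuming libvirt/KVM (the 'node01' template name is a placeholder, and the disk images still have to be cloned separately, e.g. with qemu-img):

```python
# Rough sketch: define extra copies of an existing VM to simulate a small
# cluster. Assumes libvirt/KVM; names are placeholders. Disk images are NOT
# cloned here -- copy them separately and repoint each <source> before booting.
import xml.etree.ElementTree as ET
import libvirt

conn = libvirt.open("qemu:///system")
template = conn.lookupByName("node01")      # existing, shut-off template VM
base_xml = ET.fromstring(template.XMLDesc())

for i in range(2, 5):                       # node02..node04 -> a 4-node cluster
    copy = ET.fromstring(ET.tostring(base_xml))
    copy.find("name").text = f"node{i:02d}"
    # Drop the UUID and MAC addresses so libvirt generates fresh ones.
    uuid = copy.find("uuid")
    if uuid is not None:
        copy.remove(uuid)
    for iface in copy.findall(".//devices/interface"):
        mac = iface.find("mac")
        if mac is not None:
            iface.remove(mac)
    conn.defineXML(ET.tostring(copy).decode())
conn.close()
```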
At this point, unless I needed very high-end performance (i.e. the kind where I'd be considering clustering high-end physical servers just for raw performance), I'd go with a virtualized environment. With the virtualization extensions on modern CPUs and the OS/hypervisor support for them, the performance hit isn't that big if it's done correctly.
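If you want to confirm a box actually has those extensions before leaning on it, the flags show up in /proc/cpuinfo on Linux ('vmx' for Intel VT-x, 'svm' for AMD-V); a quick sketch, assuming a Linux host:

```python
# Check for hardware virtualization extensions on a Linux host:
# 'vmx' = Intel VT-x, 'svm' = AMD-V. Assumes /proc/cpuinfo exists.
flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())

if flags & {"vmx", "svm"}:
    print("hardware virtualization extensions present")
else:
    print("no vmx/svm flag found -- check the BIOS settings or the CPU model")
```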