views: 419
answers: 2

I'm looking into using virtual machines to host multiple OSes, and I'm looking at the free solutions, of which there are a lot. I'm confused about what a hypervisor is and why hypervisors are different from, or better than, a "standard" virtual machine. By "standard" I mean my benchmark virtual machine, VMware Server 2.0.

Assume a dual-core system with 4 GB of RAM that would be capable of running a maximum of 3 VMs. Which is the better choice, hypervisor or non-hypervisor, and why? I've already read the Wikipedia article, but the technical details are over my head. I need a basic answer on what these different VM flavors can do for me.

My main question relates to how I would do testing on multiple environments. I am concerned about the isolation of the OSes, so that I can test applications on multiple OSes at the same time. Also, which flavor gives an experience closer to how a real machine operates?

I'm considering the following:

(hypervisor)

  • Xen
  • Hyper-V

(non-hypervisor)

  • VirtualBox
  • VMware Server 2.0
  • Virtual PC 2007

*The classifications of the VMs I've listed may be incorrect.

+3  A: 

The main difference is that Hyper-V doesn't run on top of the OS; instead, the OS runs alongside the guests on top of a thin layer called the hypervisor. A hypervisor is hardware-platform virtualization software that allows multiple operating systems to run on a host computer concurrently.

Many other virtualization solutions use other techniques, such as emulation. For more details, see Wikipedia.
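
As an aside, bare-metal hypervisors such as Hyper-V and Xen (in HVM mode) rely on hardware virtualization extensions, Intel VT-x or AMD-V. A minimal Python sketch to check for them, assuming a Linux host where they appear as the vmx/svm flags in /proc/cpuinfo:

    # Sketch: detect hardware virtualization support on a Linux host.
    # Assumes /proc/cpuinfo is available (Linux only); the "vmx" flag
    # means Intel VT-x, "svm" means AMD-V.

    def hw_virt_support():
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    flags = line.split(":", 1)[1].split()
                    if "vmx" in flags:
                        return "Intel VT-x"
                    if "svm" in flags:
                        return "AMD-V"
        return None

    print(hw_virt_support() or "No hardware virtualization extensions found")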

David Pokluda
So does this mean that hypervisors don't have as much isolation as a normal VM? Also, does this mean that it's possible for defects from the VM to bleed into the real OS? If so, hypervisors may not be what I want for a testing environment.
Jeremy Edwards
@Jeremy, it means completely the opposite.
Tim Post
@David you should probably also include KVM (Kernel-based Virtual Machine). It's basically a hypervisor built into the standard Linux kernel; it has been included in the mainline kernel since 2.6.20. For more info see http://www.linux-kvm.org/page/Main_Page
Evan Plaice
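
To make the KVM suggestion concrete, here is a minimal sketch using the libvirt Python bindings to enumerate running guests, assuming the libvirt package is installed, libvirtd is running, and qemu:///system is the system URI:

    # Sketch: list running KVM guests via libvirt.
    # Assumes the libvirt Python bindings and a running libvirtd;
    # qemu:///system is the usual URI for system-wide KVM/QEMU.
    import libvirt

    conn = libvirt.open("qemu:///system")
    try:
        for dom_id in conn.listDomainsID():      # IDs of running domains
            dom = conn.lookupByID(dom_id)
            state, max_mem, mem, vcpus, cpu_time = dom.info()
            print(f"{dom.name()}: {vcpus} vCPU(s), {mem // 1024} MB")
    finally:
        conn.close()
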
+2  A: 

Disclaimer: everything below is (broadly) my opinion.

It's helpful to consider a virtual machine monitor (a hypervisor) as a very small microkernel. It has very few jobs beyond accessing the underlying hardware, such as monitoring event channels and granting guest domains access to specific resources, while enforcing some kind of scheduler.

All guest machines are completely oblivious to the others; the isolation is real. Guests do not share memory with the privileged guest (or with each other). So, in this instance, you could (roughly) think of each guest (even the privileged one) as a process, as far as the VMM is concerned. Typically, the first guest gets extra privileges so that it can manage the rest. This is the ideal technology to use when virtual machines are put into production and exposed to the world.
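
To illustrate the privileged-guest idea, here is a rough sketch of dom0 managing other guests through the classic Xen xm toolstack (newer Xen releases ship xl with largely the same subcommands); the guest name is a placeholder:

    # Sketch: manage Xen guests from the privileged domain (dom0).
    # Assumes the classic "xm" toolstack is on the PATH; "guest1"
    # is a hypothetical domain name.
    import subprocess

    def xm(*args):
        return subprocess.run(["xm", *args], capture_output=True,
                              text=True, check=True).stdout

    print(xm("list"))        # enumerate all domains, dom0 included
    xm("pause", "guest1")    # freeze the guest's vCPUs
    xm("unpause", "guest1")  # resume it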

Additionally, some guests can be patched to become aware of the hypervisor (this is known as paravirtualization), which significantly increases their performance.
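
From inside a Linux guest you can often detect this relationship; a small sketch, assuming the guest exposes /sys/hypervisor/type (Xen guests do) or the CPUID-derived "hypervisor" flag in /proc/cpuinfo:

    # Sketch: detect from inside a Linux guest whether it runs under
    # a hypervisor. Assumes /sys/hypervisor/type (populated on Xen
    # guests) or the "hypervisor" flag in /proc/cpuinfo.
    import os

    def hypervisor_hint():
        if os.path.exists("/sys/hypervisor/type"):
            with open("/sys/hypervisor/type") as f:
                return f.read().strip()          # e.g. "xen"
        with open("/proc/cpuinfo") as f:
            if any("hypervisor" in line
                   for line in f if line.startswith("flags")):
                return "unknown hypervisor"
        return None

    print(hypervisor_hint() or "probably bare metal")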

On the other hand, we have things like VMware and QEMU, which rely on the host kernel to give them access to the bare metal and enough memory to exist. They assume that all guests need to be presented with a complete machine, so the limits put on the process presenting that machine (more or less) become the limits of the virtual machine. I say more or less because device-mapper QoS is not commonly implemented. This is the ideal solution for trying code on some other OS or some other architecture. A lot of people will call QEMU, Simics, or even sometimes VMware (depending on the product) a 'simulator'.
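
As a concrete example of the hosted approach, a QEMU guest is just an ordinary process the host kernel schedules and constrains like any other. A sketch that boots a guest from an ISO, assuming qemu-system-x86_64 is on the PATH (the ISO path is a placeholder):

    # Sketch: launch a QEMU guest as an ordinary host process.
    # Assumes qemu-system-x86_64 is installed; "test.iso" is a
    # placeholder path.
    import subprocess

    subprocess.run([
        "qemu-system-x86_64",
        "-m", "1024",          # 1 GB of guest RAM, carved from the host
        "-smp", "2",           # two virtual CPUs
        "-cdrom", "test.iso",  # boot medium (placeholder)
        "-enable-kvm",         # optional hardware acceleration
    ])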

For production rollouts I use Xen; for testing something I just cross-compiled, I use QEMU, Simics, or VirtualBox.

If you are just testing / rolling new code on various operating systems and architectures, I highly recommend the second (hosted) option. If your need is introspection (i.e., watching guest memory change as bad programs run in a guest)... I'd need more explanation before answering.

Tim Post