views:

123

answers:

5

Can anyone explain to me why virtualization is needed for cloud computing? A single instance of IIS and Windows Server can host multiple web applications. So why do we need to run multiple OS instances on a single machine? How can this lead to more efficient utilization of resources? How can the virtualization overhead be worth it? Is it strictly a matter of economics: I have money to buy only 100 machines, so I run virtualization to pretend I have 1000 machines?

+2  A: 

First of all, virtualization protects the underlying system from damage. Users want the environment to work transparently, with nodes added and removed seamlessly, so those nodes need to be completely bulletproof: the user software they run must not be able to make them unusable.

Other than that - yes, virtualization facilitates higher resource utilization, as well as seamless deployment and migration of software between nodes. This lets you pay only for the resources you actually use and lowers costs.

sharptooth
A: 

Virtualization usually helps to separate concerns and keep things isolated and more secure. Besides, it's much easier to support on-demand consumption scenarios in virtual environments.

These benefits are worth the higher resource consumption, even for large deployments, and they translate directly into economic benefits and cost savings.

Rinat Abdullin
+2  A: 

Because cloud computing (whatever this marketing buzzword means) is not about web hosting or email servers or any other well-defined single service.

It's about a complete server infrastructure for you and your company. It's also not a virtual private server; it's a virtual private server rack. You still have to design your IT infrastructure to work across different nodes, adding nodes on demand when the load is high.

In fact, I see cloud computing as nothing more than a more flexible accounting system for existing servers.

To get that flexibility you need an easy way to add and remove servers and to utilize the hardware as much as possible. This is only possible with virtualization. Otherwise some computers in your server farm would run idle while others are busy, and moving load from one system to the other would be impossible.
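
As a rough illustration, a few lines of Python show the consolidation effect; the per-workload CPU demands and the single-resource host capacity below are made-up numbers, and first-fit packing stands in for whatever a real scheduler actually does:

    # Sketch only: demands and capacity are invented numbers, and
    # first-fit-decreasing is a stand-in for a real VM scheduler.

    def pack(demands, capacity=1.0):
        """Place each workload on the first host with room (first-fit)."""
        hosts = []
        for d in sorted(demands, reverse=True):
            for i, load in enumerate(hosts):
                if load + d <= capacity:
                    hosts[i] += d
                    break
            else:
                hosts.append(d)  # no host has room; add a new one
        return hosts

    # One dedicated server per workload: 8 machines, mostly idle.
    demands = [0.30, 0.10, 0.25, 0.05, 0.40, 0.15, 0.20, 0.35]
    hosts = pack(demands)
    print(f"{len(demands)} dedicated servers, {sum(demands) / len(demands):.0%} average load")
    print(f"{len(hosts)} virtualized hosts, {sum(hosts) / len(hosts):.0%} average load")

The same eight workloads that kept eight dedicated servers at roughly 22% load fit on two virtualized hosts running near 90%.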

And you want this utilization without downtime. When you need to move one system to another hardware node, there is nothing else but virtualization. Sophisticated operating systems like AIX don't call it virtualization, but it's the same thing under a different name.

On a good system, the virtualization overhead is almost nonexistent. I compile a lot in a VMware Linux image on my Mac OS X system, and even in this consumer environment I can't measure a difference between the 28 seconds a compile takes in the VM and the time it takes when I boot into the Linux partition. In fact, due to caching, compiling inside the VM is sometimes faster.
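
If you want to try the comparison yourself, a minimal timing harness like this sketch works; run it once on the host and once in the guest, then compare (the make -j4 command is just a placeholder for your own build):

    # Minimal timing harness; "make -j4" is a placeholder command.
    import subprocess
    import time

    def best_time(cmd, runs=3):
        """Return the best wall-clock time in seconds over several runs."""
        best = float("inf")
        for _ in range(runs):
            start = time.perf_counter()
            subprocess.run(cmd, check=True, capture_output=True)
            best = min(best, time.perf_counter() - start)
        return best

    print(f"best of 3 builds: {best_time(['make', '-j4']):.1f}s")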

And yes, it is only about economics, because most of the time you don't need all 1000 servers. Just buy what you need. This works unless the cloud service is so expensive that letting your own servers run idle makes more sense, which is the situation I found when comparing Amazon's cloud with running the computers in our company.
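
As a back-of-the-envelope sketch of that break-even, with all prices invented for illustration rather than taken from any real Amazon or hardware quote:

    # All numbers below are assumptions, not real cloud or hardware prices.
    cloud_rate = 0.10                 # assumed cloud instance cost, $/hour
    server_cost = 2000.0              # assumed purchase price of one server, $
    server_life_hours = 3 * 365 * 24  # assume 3 years of service

    owned_rate = server_cost / server_life_hours  # ~$0.08/hour if always on

    # If the server is busy only part of the time, the effective cost per
    # *useful* hour rises, while a cloud bill tracks actual usage.
    for utilization in (1.0, 0.5, 0.1):
        effective = owned_rate / utilization
        winner = "own hardware" if effective < cloud_rate else "cloud"
        print(f"utilization {utilization:4.0%}: ${effective:.2f} per useful hour -> {winner}")

Under these assumed prices, owning wins at constant full load and the cloud wins at low utilization; plug in your own numbers to see which side of the line you're on.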

Lothar
+1  A: 

See my answer to Wastage of resources in virtualization - you're pretty much talking about the same things.

If your processes can coexist on the same system, all depend on the same libraries and configuration settings, and can be brought up, taken down, and restarted without affecting each other, then you may indeed "waste" resources by virtualizing them.

However, if you need to reboot or restart Server A without affecting Server B and both have fairly low usage, or if the two applications depend on different kernel versions, for example, then that's a good candidate for virtualization.

When you move up to enterprise-level virtualization (cloud computing) and start thinking about computing costs in cents per hour and dollars per gigabyte, this "overhead" is nothing compared to the savings and other benefits. You don't have half-empty disks, idling CPUs, wasted resources, or competition over who gets to configure what. Virtual machines can move between hosts depending on load, and you gain fault tolerance, high availability, and automated provisioning.

sascha
+1  A: 

Virtualization is convenient for cloud computing for a variety of reasons:

  1. Cloud computing is much more than a web app running in IIS. Active Directory isn't a web app. SQL Server isn't a web app. To get the full benefit of running code in the cloud, you need the option to install a wide variety of services on the cloud nodes, just as you would in your own IT data center. Many of those services are not web apps governed by IIS. If you look at the cloud only as a web app host, you'll have difficulty building anything that isn't a web app.
  2. The folks running and administering the cloud hardware underneath the covers need ultimate authority and control to shut down, suspend, and occasionally relocate your cloud code to a different physical machine. If some bit of code in your cloud app goes nuts and runs out of control, it's much more difficult to shut down that service or that machine when the code is running directly on the physical hardware than it is when the rogue code is running in a VM managed by a hypervisor.
  3. Resource utilization: multiple tenants (VMs) execute on the same physical hardware, but with much stronger isolation from each other than IIS's process walls provide. Lower cost per tenant, higher income per unit of hardware.
dthorpe