I am part of an organization in which there is contention among some very competent folks as to whether testing cross-browser behavior for JavaScript-intensive web applications on virtual machines (for IE6/7/8, FF2/3, and Chrome on XP/Vista/7) is reliable. We are using VMware Server on a Linux host.

While the discrepancies we have seen are few, in some cases it has proven difficult to tell whether a discrepancy is a product of virtualization or simply of differing machine configurations.

My question to the community is: what has your experience been with this? Is there any credence to the claim that VMs introduce inconsistencies, or are they generally spot-on reliable? Can we trust them?

+6  A: 

If the VM is running a normal OS, there should not be any issues with its browser.

It's possible that rendering and performance differences might be noticeable, but you should test for that anyway.

Remember that your end-users might also browse your site in a VM.

SLaks
+1 Good answer.
James Westgate
+3  A: 

VMs are the best way. With other options, such as the hacks for running multiple IE versions on one install, differences from a real installation are much more frequent. Personally, we use VirtualBox.

Luke
+2  A: 

There is no difference between an OS running inside a VM and an OS running on the actual metal!

Obviously, as you point out, OS and app configuration can make a side-by-side comparison difficult.

Also, there are possible performance discrepancies under virtualization, but these should not have a measurable effect on browser behavior.

I suggest working with your folks to come up with a gold-copy OS configuration script that you can follow when building up both virtual and host OSs; that should address most of the comparison issues, e.g. by recording and diffing each machine's environment, as sketched below.
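
A minimal browser-side sketch of such a comparison aid: it captures a diffable environment fingerprint from each browser/OS under test. The property list here is illustrative, not a canonical checklist.

    // Sketch: collect a diffable environment "fingerprint" from each
    // browser/OS under test; identical fingerprints rule out the most
    // common configuration-drift explanations for behavior differences.
    function environmentFingerprint() {
        return {
            userAgent: navigator.userAgent,    // browser + OS build string
            platform: navigator.platform,
            screen: screen.width + 'x' + screen.height + '@' + screen.colorDepth,
            language: navigator.language || navigator.userLanguage, // IE exposes userLanguage
            cookiesEnabled: navigator.cookieEnabled,
            javaEnabled: navigator.javaEnabled(),
            timezoneOffset: new Date().getTimezoneOffset()
        };
    }

    // Serialize for logging so the output can be compared across machines.
    var fp = environmentFingerprint();
    var parts = [];
    for (var key in fp) {
        if (fp.hasOwnProperty(key)) {
            parts.push(key + '=' + fp[key]);
        }
    }
    document.title = parts.join('; '); // or report it to your test harness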

Noel
I should caveat my first sentence: of course there are differences. The point is that the virtualized OS doesn't know about them.
Noel
+1 on this answer; it's definitely more of a configuration issue on the virtual machine than a problem with the virtualization itself. And as Noel points out, you need to build the configurations you want to support.
SBUJOLD
+3  A: 

I've never seen a behavior difference between code in a VM and code on a "real" PC that I couldn't directly ascribe to patch discrepancies between the underlying operating systems. Similarly, I've had the experience of working in a building where exactly one mysteriously configured laptop would exhibit weird behaviors, and no other machine could be found to do the same things. (Yes, weird JavaScript behaviors in a web application. I shudder now at the memory of it.)

Now, if your code involves testing things that might be affected by the workings of the video driver, then you might have cause for concern. Clearly, a VM-based approach isn't going to give you much variety in terms of video hardware. However, for a web application, that seems unlikely to be an issue.
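
As a hypothetical illustration of the kind of code that could be affected: reading pixels back from a canvas element is roughly the only common browser-side operation where the video stack can leak through. A rough sketch, noting that canvas is unavailable in IE6/7/8 without a shim:

    // Sketch: render text to a canvas and read the pixels back. The
    // resulting data URL depends on the rasterizer/video driver, so
    // diffing it across machines can expose driver-level differences
    // that ordinary DOM-manipulating JavaScript will never see.
    function renderFingerprint() {
        var canvas = document.createElement('canvas');
        if (!canvas.getContext) {
            return 'no-canvas'; // e.g. IE6/7/8 without a shim
        }
        canvas.width = 200;
        canvas.height = 50;
        var ctx = canvas.getContext('2d');
        ctx.textBaseline = 'top';
        ctx.font = '16px Arial';
        ctx.fillStyle = '#069';
        ctx.fillText('vm-vs-metal probe', 2, 2);
        return canvas.toDataURL(); // hash or diff this string across machines
    }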

Pointy
A: 

There are no differences. We run a cross-browser testing service and use VMware ESXi for all of our Windows- and Ubuntu-based configurations. Browsers under a VM image render just as they would under a "real" OS.

Ken
+1  A: 

The only differences between running in a VM and running on a real machine that you may need to watch out for are actual hardware differences. A strength of using a VM is that all OS combinations will be running against the same virtual hardware - the same vanilla VGA "adapter", the same vanilla network "card", etc. For testing, though, that is also a liability, since it means you're not testing against real-world hardware.

JavaScript running in the browser normally shouldn't care about your hardware, but there are situations where artifacts may be observable. If your JavaScript makes use of advanced browser features such as plugins, ActiveX controls, or display technologies like Flash, Silverlight, OpenGL, or canvas, or does things like video or audio playback, your code may be sensitive to hardware differences between machines that would not be apparent when testing exclusively in homogeneous VMs. Similarly, if your JavaScript does fairly low-level wire-protocol work (sending packets of data through XMLHttpRequest), differences between the VM's virtual network stack and actual network-card hardware and drivers could cause changes in packet arrival timing, dropped packets, or perhaps even out-of-order delivery.
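
A minimal sketch of the network-timing case (the /ping endpoint is an assumption; substitute any lightweight URL on your own server):

    // Sketch: measure the round-trip time of an XMLHttpRequest. Code
    // whose behavior depends on such timings may diverge between a VM's
    // virtual NIC and real network hardware and drivers.
    function timeRequest(url, callback) {
        var xhr = window.XMLHttpRequest ?
            new XMLHttpRequest() :
            new ActiveXObject('Microsoft.XMLHTTP'); // IE6 fallback
        var start = new Date().getTime();
        xhr.onreadystatechange = function () {
            if (xhr.readyState === 4) {
                callback(new Date().getTime() - start, xhr.status);
            }
        };
        xhr.open('GET', url, true);
        xhr.send(null);
    }

    timeRequest('/ping', function (elapsedMs, status) {
        // Compare the timing distributions from VM vs physical runs.
        if (window.console) {
            console.log('round trip: ' + elapsedMs + 'ms (status ' + status + ')');
        }
    });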

The best course of action, IMO, is to do most of your testing, perhaps even all of your automated testing, in VMs, simply because it's more manageable and more economical that way, but also to budget and schedule some testing on widely disparate hardware: both super-fast and super-awful video cards, built-in motherboard audio as well as aftermarket sound cards, uniprocessor and multiprocessor systems, etc.

dthorpe