I don't mean for this question to be flame bait, but I'll be using Microsoft and their Win32 API as an example of a legacy API.

What I am wondering about is that Microsoft is spending a lot of money and energy on maintaining their legacy API, including all of the "glitches/bugs/workarounds" that are needed to keep the API functioning the same. I'm aware that in Windows 7 they are providing a way for the customer to run their application in a "Windows XP" VM, which would be one way for them to start cleaning up their Win32 API, because they could then push all of the legacy applications into the "Windows XP" VM.

So what I am wondering is: is it possible to virtualize a legacy API in such a way that a customer/program can still access and use it, yet at the same time take advantage of the newer version/API? Because as far as I understand it, if the application is run in the "Windows XP" VM, it won't be able to access any of the newer APIs/features of Windows 7.

+1  A: 

This is an interesting question, at least to me; here are some of my thoughts.

Your understanding is correct: an application running in the XP VM only has access to the Win32 APIs provided by XP in the VM. One of the ways I have seen Microsoft approach enhancing specific APIs is to create new functions with the enhanced/fixed functionality and name the new function by appending Ex (or even ExEx) to the original name, for example:

GetVersion
GetVersionEx
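
A small sketch of that naming pattern, using another well-known pair (the helper function names here are just for illustration): CreateWindowEx adds an extended-style parameter up front, while the original CreateWindow keeps its old signature and behaviour.

    #include <windows.h>

    /* The "Ex" pattern: CreateWindowEx takes an extra dwExStyle parameter,
       while CreateWindow keeps its original signature (in current SDKs it
       simply forwards to CreateWindowEx with dwExStyle = 0). */
    HWND MakePlainWindow(HINSTANCE instance)
    {
        return CreateWindow(TEXT("STATIC"), TEXT("old entry point"),
                            WS_OVERLAPPEDWINDOW,
                            CW_USEDEFAULT, CW_USEDEFAULT, 300, 200,
                            NULL, NULL, instance, NULL);
    }

    HWND MakeLayeredWindow(HINSTANCE instance)
    {
        return CreateWindowEx(WS_EX_LAYERED, /* capability the old call cannot express */
                              TEXT("STATIC"), TEXT("new entry point"),
                              WS_OVERLAPPEDWINDOW,
                              CW_USEDEFAULT, CW_USEDEFAULT, 300, 200,
                              NULL, NULL, instance, NULL);
    }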

For functions that accept pointers to structures, the structures are 'versioned' by using the size of the structure to determine the functionality required: older code passes the previous, smaller size of the structure, while newer code passes in the newer, larger structure, and the API behaves accordingly.
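
A minimal sketch of that size-based versioning, using GetVersionEx as the example: the caller fills in dwOSVersionInfoSize before the call, and the API uses that value to decide which fields it is safe to write.

    #include <stdio.h>
    #include <windows.h>

    int main(void)
    {
        /* The size field doubles as a version tag: old code fills in
           sizeof(OSVERSIONINFO), newer code sizeof(OSVERSIONINFOEX), and
           GetVersionEx only touches the fields covered by that size. */
        OSVERSIONINFOEX info;
        ZeroMemory(&info, sizeof(info));
        info.dwOSVersionInfoSize = sizeof(OSVERSIONINFOEX);

        if (GetVersionEx((OSVERSIONINFO *)&info))
        {
            printf("Windows %lu.%lu, service pack %u\n",
                   info.dwMajorVersion, info.dwMinorVersion,
                   (unsigned)info.wServicePackMajor);
        }
        return 0;
    }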

I guess the problem has become that it is no longer just a matter of differences in how an API works; the changes are more integral to the functioning of the operating system, and the internal structures have changed significantly enough that arguably badly written code is effectively broken.

As to your actual question, I guess it would be quite tough. Even if one were to let the OS adjust how it executes code based on a target OS version in the PE header of the executable, what would happen if a newer DLL that targeted the latest OS were loaded into the same process? How should the OS handle that while the code is executing? IMHO, this would be a very challenging effort, fraught with pitfalls, and would ultimately fail.
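
To make that concern concrete: each module in a process carries its own target version in its PE optional header, so an OS that keyed its behaviour off that field would have to reconcile conflicting answers within a single process. A rough sketch of reading that field (the helper name is mine, and error handling is omitted):

    #include <stdio.h>
    #include <windows.h>

    /* Each loaded module records the subsystem version it was linked
       against in its PE optional header, so an EXE and a DLL in the same
       process can "target" different Windows versions. */
    static void PrintTargetVersion(HMODULE module, const char *name)
    {
        const IMAGE_DOS_HEADER *dos = (const IMAGE_DOS_HEADER *)module;
        const IMAGE_NT_HEADERS *nt =
            (const IMAGE_NT_HEADERS *)((const BYTE *)module + dos->e_lfanew);

        printf("%s was linked for subsystem version %u.%u\n", name,
               (unsigned)nt->OptionalHeader.MajorSubsystemVersion,
               (unsigned)nt->OptionalHeader.MinorSubsystemVersion);
    }

    int main(void)
    {
        PrintTargetVersion(GetModuleHandle(NULL), "this executable");
        PrintTargetVersion(GetModuleHandle(TEXT("kernel32.dll")), "kernel32.dll");
        return 0;
    }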

Of course, those are just my raw thoughts on the topic, so I might be 100% wrong and there may be some simple approach that just did not come to mind.

Chris Taylor
+1  A: 

The thing that puzzles me about this question when it comes up is that Windows has been doing this since NT came out in the mid-nineties. This is how NT runs DOS and Win16 programs, and how it always has. The NTVDM virtualization layer runs 16-bit apps under Win32 with very little special support from the core OS. This is just one example - another is WINE, which as I understand it does a pretty reasonable job of running Windows apps on top of an API set which is very different from that of Windows. So it is definitely possible.

The more pertinent question is why Microsoft would consider it. In order to think it is necessary, you have to believe two things: 1) there is something better to replace the Win32 API with, and 2) maintaining the Win32 API is a burden.

Both of these are questionable. For kernel duties, such as accessing hardware, synchronization, threads, processes, and memory, the Win32 API does a pretty good job and is ultimately quite close to what the kernel really does. If you think there is a better API, then that must mean there is also a better kernel, and I personally don't think that NT needs replacing right now. For graphics and windowing, admittedly GDI32 is a bit long in the tooth, but Microsoft solved that problem by building WPF right alongside it. This then brings in the burden question. Sure, there are two APIs to maintain, but if you virtualized GDI on top of WPF you would still have to maintain both anyway, so there is no benefit there. The advantage of running both in parallel is that GDI already exists and is already tested. All you have to do is fix the occasional bug, whereas a new virtualization layer would have to be written and tested all over again, which takes time away from making WPF better.

In terms of maintaining backward compatibility, that isn't as much of a burden as it sounds. It is mainly a testing question: you have to verify that the API behaviour doesn't change, but again, those tests have already been written, so it isn't really much extra work.

So, to answer a question with a question, why would they bother?

Stewart