I am using the Windows Media Format SDK to capture the desktop in real time and save it to a WMV file (this is an oversimplification of my project, but it is the relevant part). For encoding I am using the Windows Media Video 9 Screen codec, because it is very efficient for screen captures and because it is available to practically everybody without installing anything: the codec ships with the Windows Media Player 9 runtime (included in Windows XP SP1).
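For reference, my encoding side boils down to the standard IWMWriter path: for every frame I allocate a sample, copy the frame's pixels into it, and call WriteSample. Here is a simplified sketch (profile setup and error handling omitted; the helper name and parameters are just illustrative):

```cpp
// Sketch of how I push one captured frame into the writer. Simplified:
// assumes pWriter already has a profile using the WMV9 Screen codec
// and that BeginWriting() has been called.
#include <windows.h>
#include <string.h>
#include <wmsdk.h>

HRESULT WriteFrame(IWMWriter* pWriter,   // configured writer
                   const BYTE* pPixels,  // uncompressed RGB frame
                   DWORD      cbPixels,  // frame size in bytes
                   QWORD      cnsTime)   // sample time in 100-ns units
{
    INSSBuffer* pSample = NULL;
    HRESULT hr = pWriter->AllocateSample(cbPixels, &pSample);
    if (FAILED(hr)) return hr;

    BYTE* pBuffer = NULL;
    DWORD cbBuffer = 0;
    hr = pSample->GetBufferAndLength(&pBuffer, &cbBuffer);
    if (SUCCEEDED(hr))
    {
        memcpy(pBuffer, pPixels, cbPixels);  // full-frame copy on every sample
        pSample->SetLength(cbPixels);
        hr = pWriter->WriteSample(0, cnsTime, 0, pSample);  // video input 0
    }
    pSample->Release();
    return hr;
}
```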
I take BITMAP screenshots using GDI functions and feed those BITMAPs to the encoder. As you can guess, taking screenshots with GDI is slow, and it doesn't capture the mouse cursor, which I have to draw onto the BITMAPs manually. On top of that, the BITMAPs I get initially are DDBs, and I have to convert them to DIBs because the encoder expects RGB input, which takes even more time.
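Roughly, the capture path looks like this (a simplified sketch assuming a 32-bit display; names are illustrative and the cursor hotspot offset is ignored for brevity):

```cpp
// Rough sketch of the GDI capture + DDB->DIB conversion described above.
// 32-bit top-down DIB output, error handling omitted.
#include <windows.h>
#include <vector>

bool CaptureScreenToDib(std::vector<BYTE>& dib, int& width, int& height)
{
    width  = GetSystemMetrics(SM_CXSCREEN);
    height = GetSystemMetrics(SM_CYSCREEN);

    HDC hScreenDC = GetDC(NULL);
    HDC hMemDC    = CreateCompatibleDC(hScreenDC);

    // CreateCompatibleBitmap gives a DDB in the display's native format.
    HBITMAP hBmp = CreateCompatibleBitmap(hScreenDC, width, height);
    HGDIOBJ hOld = SelectObject(hMemDC, hBmp);

    // The slow part: copy the whole screen through GDI.
    BitBlt(hMemDC, 0, 0, width, height, hScreenDC, 0, 0, SRCCOPY);

    // BitBlt does not capture the cursor, so draw it in by hand
    // (hotspot offset ignored here for brevity).
    CURSORINFO ci = { sizeof(ci) };
    if (GetCursorInfo(&ci) && (ci.flags & CURSOR_SHOWING))
        DrawIconEx(hMemDC, ci.ptScreenPos.x, ci.ptScreenPos.y,
                   ci.hCursor, 0, 0, 0, NULL, DI_NORMAL);

    // DDB -> DIB conversion: pull the pixels out as a 32-bit top-down DIB,
    // which is the plain RGB input the encoder can accept.
    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = width;
    bmi.bmiHeader.biHeight      = -height;   // negative height = top-down
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    dib.resize((size_t)width * height * 4);
    SelectObject(hMemDC, hOld);              // deselect before GetDIBits
    int lines = GetDIBits(hMemDC, hBmp, 0, height, dib.data(), &bmi, DIB_RGB_COLORS);

    DeleteObject(hBmp);
    DeleteDC(hMemDC);
    ReleaseDC(NULL, hScreenDC);
    return lines == height;
}
```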
Running a profiler shows that about 50% of the time is spent in WMVCORE.DLL, the encoder. This is to be expected, of course, since encoding is CPU-intensive.
The thing is, there is something called Windows Media Encoder (WME) that comes with an SDK and can do screen capture with the desired codec in a simpler and more CPU-friendly way.
WME is built on top of WMF. It is a higher-level library and also has .NET bindings. I can't use it in my project because it brings in unwanted dependencies that I have to avoid.
What I am asking about is the method WME uses to feed sample data to the WMV encoder. The encoding itself happens in WME exactly as it does in my WMF-based application; WME is more efficient because it has a much more efficient way of getting video data to the encoder, one that doesn't rely on slow GDI functions and DDB-to-DIB conversions.
How is it done?
Thanks in advance,
A.