Background: I have a little video-playing app with a UI inspired by the venerable Sasami2k, just updated to use VMR9 (i.e. Direct3D9 with DirectShow) and to be less unstable. Currently it's a C++ app using raw Win32, through necessity: none of the various toolkits are worth a damn. WPF, in particular, was not possible due to its airspace restrictions.
OK, so, now that D3DImage exists it might be viable to mix and match D3D/VMR9/DirectShow and WPF. Given past frustrations with Win32's inextensibility, this seems like a good thing.
But y'know, I'm falling at the first hurdle here.
With Win32 I have created (very easily) a borderless window that's resizable, resizes proportionately, snaps to the screen edges, and takes up the whole screen (including taskbar area) when maximized. It's a video app, so these are all pretty desirable properties.
OK, so, how to do the same with WPF?
In Win32, I use:

- WM_GETMINMAXINFO to control the maximize behaviour
- WM_NCHITTEST to control the resize borders
- WM_MOVING to control the snap-to-screen-edges
- WM_SIZING to control the resize aspect ratio
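For concreteness, here's a minimal sketch of the kind of handling I mean (not lifted from my actual code; the border width, snap distance, and aspect ratio are placeholder constants, and corner hit-testing and other edge cases are omitted):

```cpp
#include <windows.h>
#include <windowsx.h>
#include <cstdlib>

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_GETMINMAXINFO:
    {
        // Maximize over the whole monitor (taskbar included), not just the work area.
        MONITORINFO mi = { sizeof(mi) };
        GetMonitorInfo(MonitorFromWindow(hwnd, MONITOR_DEFAULTTONEAREST), &mi);
        MINMAXINFO* mmi = reinterpret_cast<MINMAXINFO*>(lParam);
        mmi->ptMaxPosition.x = 0;   // relative to the monitor origin
        mmi->ptMaxPosition.y = 0;
        mmi->ptMaxSize.x = mi.rcMonitor.right - mi.rcMonitor.left;
        mmi->ptMaxSize.y = mi.rcMonitor.bottom - mi.rcMonitor.top;
        return 0;
    }
    case WM_NCHITTEST:
    {
        // Borderless window: treat the outer few pixels as resize borders,
        // everything else as the caption so the window can be dragged anywhere.
        const int border = 8; // placeholder
        POINT pt = { GET_X_LPARAM(lParam), GET_Y_LPARAM(lParam) };
        RECT rc;
        GetWindowRect(hwnd, &rc);
        if (pt.x < rc.left + border)    return HTLEFT;
        if (pt.x >= rc.right - border)  return HTRIGHT;
        if (pt.y < rc.top + border)     return HTTOP;
        if (pt.y >= rc.bottom - border) return HTBOTTOM;
        return HTCAPTION;
    }
    case WM_MOVING:
    {
        // Snap the proposed window rectangle to the work-area edges while dragging.
        const int snap = 16; // placeholder
        RECT* rc = reinterpret_cast<RECT*>(lParam);
        MONITORINFO mi = { sizeof(mi) };
        GetMonitorInfo(MonitorFromWindow(hwnd, MONITOR_DEFAULTTONEAREST), &mi);
        const int w = rc->right - rc->left, h = rc->bottom - rc->top;
        if (std::abs(rc->left - mi.rcWork.left) < snap)     { rc->left = mi.rcWork.left;     rc->right = rc->left + w; }
        if (std::abs(rc->right - mi.rcWork.right) < snap)   { rc->right = mi.rcWork.right;   rc->left = rc->right - w; }
        if (std::abs(rc->top - mi.rcWork.top) < snap)       { rc->top = mi.rcWork.top;       rc->bottom = rc->top + h; }
        if (std::abs(rc->bottom - mi.rcWork.bottom) < snap) { rc->bottom = mi.rcWork.bottom; rc->top = rc->bottom - h; }
        return TRUE;
    }
    case WM_SIZING:
    {
        // Force the proposed rectangle to the video's aspect ratio
        // (simplified: height always derived from width).
        const double aspect = 16.0 / 9.0; // placeholder; really the source video's ratio
        RECT* rc = reinterpret_cast<RECT*>(lParam);
        const int w = rc->right - rc->left;
        rc->bottom = rc->top + static_cast<int>(w / aspect);
        return TRUE;
    }
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}
```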
However, looking at WPF, it seems that the equivalent events arrive too late, unless I'm misunderstanding the documentation?
For example, I don't know when I'm mid-move, as LocationChanged says it fires only once the window has moved (which is too late). Similarly, it appears that StateChanged only fires once the window has been restored/maximized, whereas I need the information before the maximize happens, so I can tell the system the correct maximized size.
And I seem to be completely overlooking where the system tells me about resizes. Likewise the hit testing.
So, uh, am I missing something here, or do I have no choice but to drop back to hooking the WndProc of this thing anyway? Can I do what I want without that hook?
If I have to use the WndProc, I might as well stick with my existing codebase; I want simpler, cleaner UI code, and moving away from the WndProc is fundamental to that.
If I do have to hook the WndProc, I have to wonder why. Win32 has the sizing/sized, moving/moved, and poschanging/poschanged window message pairs (WM_SIZING/WM_SIZE, WM_MOVING/WM_MOVE, WM_WINDOWPOSCHANGING/WM_WINDOWPOSCHANGED), and they're all useful. Why wouldn't WPF replicate the same set of events? It seems like an unnecessary gap in functionality.
Plus, it ties WPF to a specific USER32-dependent implementation, which means MS can't (in Windows 7 or 8, say) invert the display layer to make WPF "native" and emulate HWNDs and WndProcs for legacy apps, even though that is precisely what MS should be doing.