We are developing a skinned app whose windows mostly have rounded edges. I'm using window regions (SetWindowRgn) to define the non-rectangular shapes, but nearly everyone objects to the jagged, aliased edges this produces: a region pixel can only be fully opaque or fully transparent, so there's no way to blend the border into whatever is behind the window.
I've come up with a solution using layered windows, but we want to make sure it will run (and hopefully run well) on a variety of systems, and I'd like to hear any better ideas, or ways to optimize what I'm doing. I know layered windows require Windows 2000 or later, and that's fine, since that's already a requirement for other reasons. From some basic tests it looks OK on Vista, but that's no guarantee yet.
Here's what I do: I have a window, call it A, with the controls, text, and whatever else makes up that window. Window B is created as a child of window A, except that it has the WS_POPUP style instead of WS_CHILD, so it can position itself outside A's region and is drawn on top of A's controls. Window B also has the WS_EX_LAYERED extended style, and on initialization I call UpdateLayeredWindow with the ULW_ALPHA flag and a source DC holding a 32-bit bitmap with an alpha channel, so it draws with per-pixel alpha.
The bitmap in window B's source DC is essentially just the pixels around the window's border that I want to blend smoothly from the window's background into full transparency. I would skip the whole two-window approach and use a single layered window, except that with UpdateLayeredWindow the window is drawn from a buffer kept in memory, in lieu of the usual WM_PAINT messages and all that, and getting interactive child controls (and child windows) to work well with that sounds like a remarkable hassle (and probably wouldn't work for everything anyway).
So, it's basically window A with all its child controls and whatever, with window B floating directly on top of it, drawing a nice smooth border. I respond to WM_MOVE (and similar) messages by moving window B along with A, and window B is disabled so it can never take focus or input. Clicks already pass through it, since its zero-opacity pixels, which make up most of its interior, are excluded from hit-testing.
For kicks, here's what the pieces look like, to show what I mean a little better.
- Window A's background, with cyan used to mask fully transparent pixels.
- Window B's bitmap, with an alpha channel (not pictured of course, it's a jpg) that would blend the non-black pixels into transparency.
- Combined result
So, it works, but I can't be certain it's really the best way to do this. I have two questions:
- Does this sound acceptable, or is there anything glaringly terrible about it?
- As it currently works, it seems to be using an off-screen buffer the size of the whole window (which can be up to 1024x768), even though very few of its pixels carry non-zero opacity -- would it be worth the overhead and added complexity of cutting it up into separate border pieces and compositing them together?