Hi,
I'm attempting to solve what I thought would be a very simple problem. I want to keep a QPixmap updated with the entire screen contents. You can get such a pixmap by doing this:
QDesktopWidget *w = QApplication::desktop();
if (w)
{
    QRect r = w->screenGeometry();
    QPixmap p = QPixmap::grabWindow(w->winId(), 0, 0, r.width(), r.height());
    QByteArray bitmap;
}
The problem with this is that QPixmap::grabWindow() re-grabs the entire screen contents from the X11 server every time it is called, even if nothing has changed.
I need this code to be fast, so I'm trying to do this myself. My starting point was the qx11mirror demo; however, that basically does the same thing. It uses the XDamage extension to work out when something has changed, but instead of using the damaged rectangle information to update just that part of the cached pixmap, it simply sets a "dirty" flag, which triggers a full refresh anyway.
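For reference, the damage tracking itself is set up along these lines (a rough sketch from my reading of the demo; the member names m_damage, m_damageEvent and m_window are just what my class uses):

// Needs #include <X11/extensions/Xdamage.h> and linking with -lXdamage
int damageError = 0;
if (!XDamageQueryExtension(QX11Info::display(), &m_damageEvent, &damageError))
    return; // Damage extension not available on this server
// Deliver one XDamageNotify per batch of damage, not one per raw rectangle
m_damage = XDamageCreate(QX11Info::display(), m_window, XDamageReportNonEmpty);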
So I'm trying to modify the qx11mirror example to update only the damaged portion of the window, but I can't seem to get anything to work - all I get is a blank (black) pixmap. The code I'm using is:
void QX11Mirror::x11Event(XEvent *event)
{
    if (event->type == m_damageEvent + XDamageNotify)
    {
        XDamageNotifyEvent *e = reinterpret_cast<XDamageNotifyEvent*>(event);
        XWindowAttributes attr;
        XGetWindowAttributes(QX11Info::display(), m_window, &attr);
        XRenderPictFormat *format = XRenderFindVisualFormat(QX11Info::display(), attr.visual);
        bool hasAlpha = (format->type == PictTypeDirect && format->direct.alphaMask);
        int x = attr.x;
        int y = attr.y;
        int width = attr.width;
        int height = attr.height;

        // debug output so I can see the window pos vs the damaged area:
        qDebug() << "repainting dirty area:" << x << y << width << height
                 << "vs" << e->area.x << e->area.y << e->area.width << e->area.height;

        XRenderPictureAttributes pa;
        pa.subwindow_mode = IncludeInferiors; // Don't clip child widgets

        Picture picture = XRenderCreatePicture(QX11Info::display(),
                                               m_window,
                                               format,
                                               CPSubwindowMode,
                                               &pa);

        XserverRegion region = XFixesCreateRegionFromWindow(QX11Info::display(),
                                                            m_window, WindowRegionBounding);
        XFixesTranslateRegion(QX11Info::display(), region, -x, -y);
        XFixesSetPictureClipRegion(QX11Info::display(), picture, 0, 0, region);
        XFixesDestroyRegion(QX11Info::display(), region);

        QPixmap dest(width, height);
        XRenderComposite(QX11Info::display(),               // display
                         hasAlpha ? PictOpOver : PictOpSrc, // operation mode
                         picture,                           // src drawable
                         None,                              // src mask
                         dest.x11PictureHandle(),           // dest drawable
                         e->area.x,                         // src X
                         e->area.y,                         // src Y
                         0,                                 // mask X
                         0,                                 // mask Y
                         e->area.x,                         // dest X
                         e->area.y,                         // dest Y
                         e->area.width,                     // width
                         e->area.height);                   // height
        m_px = dest;

        XDamageSubtract(QX11Info::display(), e->damage, None, None);
        emit windowChanged();
    }
    else if (event->type == ConfigureNotify)
    {
        XConfigureEvent *e = &event->xconfigure;
        m_position = QRect(e->x, e->y, e->width, e->height);
        emit positionChanged(m_position);
    }
}
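For what it's worth, the behaviour I'm trying to get to is roughly this (only a sketch of the idea, assuming m_px has already been seeded once with a full QPixmap::grabWindow() of the window and is kept the same size as the window):

// Instead of compositing into a fresh pixmap and replacing m_px wholesale,
// composite only the damaged rectangle straight into the cached pixmap:
XRenderComposite(QX11Info::display(),
                 hasAlpha ? PictOpOver : PictOpSrc,
                 picture,                       // src: the mirrored window
                 None,                          // no mask
                 m_px.x11PictureHandle(),       // dest: the cached pixmap
                 e->area.x, e->area.y,          // src X/Y
                 0, 0,                          // mask X/Y
                 e->area.x, e->area.y,          // dest X/Y
                 e->area.width, e->area.height);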
Can anyone point me in the right direction? The documentation for XRender, XDamage, and the other X11 extensions is pretty bad.
Reasons for using XRender over XCopyArea
The following text is taken from here.
It is perfectly possible to create a GC for a window and use XCopyArea() to copy the contents of the window if you want to use the core protocol, but since the Composite extension exposes new visuals (ones with alpha channels e.g.), there's no guarantee that the format of the source drawable will match that of the destination. With the core protocol that situation will result in a match error, something that won't happen with the Xrender extension.
In addition the core protocol has no understanding of alpha channels, which means that it can't composite windows that use the new ARGB visual. When the source and destination have the same format, there's also no performance advantage to using the core protocol as of X11R6.8. That release is also the first to support the new Composite extension.
So in conclusion there are no drawbacks, and only advantages to choosing Xrender over the core protocol for these operations.
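For comparison, the core-protocol approach being described above would look something like this (just a sketch; as the quote says, it only works when the window and the destination pixmap have the same depth, and gives a BadMatch error otherwise):

Display *dpy = QX11Info::display();
XWindowAttributes attr;
XGetWindowAttributes(dpy, m_window, &attr);
// Destination pixmap must match the window's depth or XCopyArea fails
Pixmap copy = XCreatePixmap(dpy, m_window, attr.width, attr.height, attr.depth);
GC gc = XCreateGC(dpy, copy, 0, 0);
XCopyArea(dpy, m_window, copy, gc, 0, 0, attr.width, attr.height, 0, 0);
XFreeGC(dpy, gc);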