Qt & double buffering - are there any neat tricks to capture pixels or manipulate the back buffer?

darron · May 21, 2009

I'm migrating an application to Qt from MFC.

The MFC app would use GDI calls to construct the window (a graph plot, basically). It would draw to a memory bitmap back buffer, and then BitBlt that to the screen. Qt, however, already does double buffering.

When the user clicks and drags in the graph, I'd like that section of the window to be inverted.

I'd like to find the best way to do this. Is there a way to do something like grabWindow() that will grab from the widget's back buffer, not the screen? ... maybe a BitBlt(..., DST_INVERT) equivalent?

I saw setCompositionMode() in QPainter, but the docs say that only works on painters operating on QImage. (Otherwise I could composite a solid rectangle image onto my widget with a fancy composition mode to get something like the invert effect)
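For reference, here is a minimal sketch of that "solid rectangle with a fancy composition mode" idea, assuming the back buffer is a QImage (the function and variable names are placeholders, not from the original code; CompositionMode_Difference is just one mode that gives an invert-like effect when filling with white):

    #include <QImage>
    #include <QPainter>
    #include <QRect>

    // Hypothetical helper: invert a selection by filling it with white
    // using the Difference composition mode on a QImage back buffer.
    void invertRegion(QImage &backBuffer, const QRect &selection)
    {
        QPainter p(&backBuffer);
        p.setCompositionMode(QPainter::CompositionMode_Difference);
        p.fillRect(selection, Qt::white);  // |white - pixel| == inverted pixel
    }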

I could do the same thing as MFC, painting to a QImage back buffer... but I read that hardware acceleration may not work this way. It seems like it'd be a waste to reimplement the double buffering already provided to you in Qt. I'm also not so sure what the side effects of turning off the widget's double-buffering may be (to avoid triple-buffering).
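If it helps, a rough sketch of that MFC-style approach, assuming a hypothetical GraphWidget with a QImage member m_backBuffer and a renderGraph() helper (none of these names are from the original post):

    void GraphWidget::paintEvent(QPaintEvent *)
    {
        // Rebuild the back buffer when the widget size changes.
        if (m_backBuffer.size() != size()) {
            m_backBuffer = QImage(size(), QImage::Format_RGB32);
            renderGraph(m_backBuffer);  // redraw the plot into the buffer
        }

        // Blit the back buffer to the widget, MFC/BitBlt style.
        QPainter p(this);
        p.drawImage(QPoint(0, 0), m_backBuffer);
    }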

At one point, I had a convoluted QPixmap::grabWidget() call with recursion-preventing flags protecting it, but that rendered everything twice and is obviously worse than just drawing to a QImage. (and it's specifically warned against in the docs)

Should I give up and draw everything to a QImage, basically doing it like I did in MFC?

EDIT:

Okay, a QPixmap painter now runs at approximately the same speed as painting directly to the widget. So, using a QPixmap back buffer seems to be the best way to do this.

The solution was not obvious to me, but perhaps if I had looked at more examples (like Ariya's Monster demo) I would have just coded it the way it was expected to be done, and it would have worked just fine.

Here's the difference. I saw help system demos using this:

    QPainter painter(this);

at the start of paintEvent(). So it seemed to follow naturally that, to double-buffer to a QPixmap and then paint it on the screen, you needed to do this:

    QPainter painter(&pixmap);
    QPainter painterWidget(this);
    ...  draw using 'painter' ...
    painterWidget.drawPixmap(QPoint(0,0), pixmap);

when in fact you are apparently supposed to do this:

    QPainter painter;
    painter.begin(&pixmap);
    ... draw using 'painter' ...
    painter.end();
    painter.begin(this);
    painter.drawPixmap(QPoint(0,0), pixmap);
    painter.end();

I can see that my way had two active painters at the same time. I'm not entirely sure why the second way is faster, but intuitively I like it better: it's a single QPainter object, and it's only doing one thing at a time. Maybe someone can explain why the first method is bad? (In terms of which assumptions it breaks in the Qt rendering engine.)
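For context, here is roughly how that single-painter pattern might sit inside paintEvent(), with the drag selection overlaid after the blit (GraphWidget, m_pixmap, m_dragRect, and drawGraph() are hypothetical names; the translucent fill stands in for the invert effect, which depends on composition-mode support in the paint engine):

    void GraphWidget::paintEvent(QPaintEvent *)
    {
        QPainter painter;

        painter.begin(&m_pixmap);            // stage 1: render the graph into the back buffer
        drawGraph(&painter);
        painter.end();

        painter.begin(this);                 // stage 2: blit the buffer, then overlay the selection
        painter.drawPixmap(QPoint(0, 0), m_pixmap);
        if (!m_dragRect.isNull())
            painter.fillRect(m_dragRect, QColor(0, 0, 0, 96));
        painter.end();
    }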

Answer

Ariya Hidayat · May 22, 2009

Assuming you don't really want to read pixel values from your offscreen buffer (but rather just draw something again on top of it and blit it to the screen again), you should use QPixmap as the buffer, not QImage. Using the latter disables all painting acceleration, as Qt falls back to its software raster engine; hence the use of QPixmap. If you use the OpenGL graphics system, you can still benefit from acceleration.
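As a sketch, and only relevant to the Qt 4.x graphics-system API (it was removed in Qt 5), opting into the OpenGL graphics system looks roughly like this; you can also pass -graphicssystem opengl on the command line:

    #include <QApplication>
    #include <QWidget>

    int main(int argc, char *argv[])
    {
        // Must be set before the QApplication is constructed (Qt 4.5+).
        QApplication::setGraphicsSystem("opengl");
        QApplication app(argc, argv);

        QWidget window;
        window.show();
        return app.exec();
    }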

For an example of how to do this, check my latest code for running the Monster demo; it needs an offscreen pixmap to do the motion-blur effect via repeated painting with the source-over composition mode.

To disable Qt's backing store (which is generally not a good idea), use the Qt::WA_PaintOnScreen attribute for your top-level widget.
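A minimal sketch of that, assuming a hypothetical top-level widget class:

    #include <QWidget>

    class PlotWindow : public QWidget
    {
    public:
        PlotWindow(QWidget *parent = 0) : QWidget(parent)
        {
            // Paint directly to the screen; Qt's backing store is bypassed.
            setAttribute(Qt::WA_PaintOnScreen);
        }
    };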

A bit unrelated, but you might want to have a look at the QRubberBand widget.
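A short sketch of QRubberBand driving the click-and-drag selection (the widget and member names are hypothetical; the pattern follows the QRubberBand documentation):

    #include <QRubberBand>
    #include <QMouseEvent>

    void GraphWidget::mousePressEvent(QMouseEvent *event)
    {
        m_origin = event->pos();
        if (!m_rubberBand)
            m_rubberBand = new QRubberBand(QRubberBand::Rectangle, this);
        m_rubberBand->setGeometry(QRect(m_origin, QSize()));
        m_rubberBand->show();
    }

    void GraphWidget::mouseMoveEvent(QMouseEvent *event)
    {
        m_rubberBand->setGeometry(QRect(m_origin, event->pos()).normalized());
    }

    void GraphWidget::mouseReleaseEvent(QMouseEvent *)
    {
        m_rubberBand->hide();
        // m_rubberBand->geometry() is the selected region in widget coordinates.
    }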