How to render a view into an image faster?

NOrder · Sep 28, 2013 · Viewed 16.1k times

I'm making a magnifier app that lets the user touch the screen and move his finger; a magnifier follows the finger path. I implemented it by taking a screenshot and assigning the image to the magnifier image view, as follows:

    // Capture the part of the view inside `frame`, scaled by `scaleFactor`.
    CGSize imageSize = frame.size;
    UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0.0);
    CGContextRef c = UIGraphicsGetCurrentContext();
    CGContextScaleCTM(c, scaleFactor, scaleFactor);
    // Shift the context so the captured region starts at the frame origin.
    CGContextConcatCTM(c, CGAffineTransformMakeTranslation(-frame.origin.x, -frame.origin.y));
    [self.layer renderInContext:c];
    UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return screenshot;

The problem is that renderInContext: is slow, so the magnifier doesn't feel smooth while the user moves his finger. I tried running renderInContext: on another thread, but then the magnifier looked wrong because its image lagged behind the finger.

Is there a better way to render a view into an image? Does renderInContext: use the GPU?

Answer

Jano · Sep 28, 2013

No. On iOS 6, renderInContext: is the only way. It is slow, and it runs on the CPU.

Ways to render UIKit content

renderInContext:

[view.layer renderInContext:UIGraphicsGetCurrentContext()];
  • Requires iOS 2.0. It runs on the CPU.
  • It doesn't capture views with non-affine transforms, OpenGL, or video content.
  • If an animation is running, you have the option of capturing:
    • view.layer, which captures the final frame of the animation.
    • view.presentationLayer, which captures the current frame of the animation.

snapshotViewAfterScreenUpdates:

UIView *snapshot = [view snapshotViewAfterScreenUpdates:YES];
  • Requires iOS 7.
  • It is the fastest method.
  • The view contents are immutable. Not good if you want to apply an effect.
  • It captures all content types (UIKit, OpenGL, or video).

resizableSnapshotViewFromRect:afterScreenUpdates:withCapInsets:

[view resizableSnapshotViewFromRect:rect afterScreenUpdates:YES withCapInsets:edgeInsets];
  • Requires iOS 7.
  • Same as snapshotViewAfterScreenUpdates:, but with resizable cap insets. The content is also immutable.

drawViewHierarchyInRect:afterScreenUpdates:

[view drawViewHierarchyInRect:rect afterScreenUpdates:YES];
  • Requires iOS 7.
  • It draws in the current context.
  • According to session 226, it is faster than renderInContext:.

See WWDC 2013 session 226, Implementing Engaging UI on iOS, for details on the new snapshotting APIs.
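Since the original question asks for a UIImage (not a snapshot view), here is a minimal sketch of capturing a view into an image with the iOS 7 fast path, assuming `view` is the view you want to capture:

    // Sketch only: capture `view` into a UIImage with drawViewHierarchyInRect:.
    // Pass NO for afterScreenUpdates so the call doesn't wait for a display pass.
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0.0);
    [view drawViewHierarchyInRect:view.bounds afterScreenUpdates:NO];
    UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();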


If it is any help, here is some code to discard capture attempts while one is still running.

This throttles block execution to one at a time and discards the others. From this SO answer.

dispatch_semaphore_t semaphore = dispatch_semaphore_create(1); // one capture at a time
dispatch_queue_t renderQueue = dispatch_queue_create("com.throttling.queue", NULL); // serial queue

- (void) capture {
    if (dispatch_semaphore_wait(semaphore, DISPATCH_TIME_NOW) == 0) {
        dispatch_async(renderQueue, ^{
            // capture
            dispatch_semaphore_signal(semaphore);
        });
    }
}

What is this doing?

  • Create a semaphore for one (1) resource.
  • Create a serial queue.
  • DISPATCH_TIME_NOW means there is no timeout: on red light (semaphore taken), dispatch_semaphore_wait returns non-zero immediately, so the body of the if is skipped.
  • On green light (semaphore available), run the block asynchronously, and signal the semaphore (green light again) when it finishes.
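Putting the throttle together with the capture: a hypothetical sketch, where `captureScreenshot` stands in for the rendering method from the question and `magnifier` is the asker's magnifier image view:

    - (void)capture {
        if (dispatch_semaphore_wait(semaphore, DISPATCH_TIME_NOW) == 0) {
            dispatch_async(renderQueue, ^{
                UIImage *screenshot = [self captureScreenshot]; // hypothetical helper
                dispatch_semaphore_signal(semaphore); // allow the next capture
                dispatch_async(dispatch_get_main_queue(), ^{
                    self.magnifier.image = screenshot; // UIKit updates on the main thread
                });
            });
        }
    }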