Is it possible to use video as a texture for GL in iOS?

eonil · Nov 21, 2010

Is it possible to use video (pre-rendered, compressed with H.264) as a texture for GL in iOS?

If so, how is it done? And are there any playback quality, frame-rate, or other limitations?

Answer

Tommy · Nov 22, 2010

As of iOS 4.0, you can use AVCaptureDeviceInput to get the camera as a device input and connect it to an AVCaptureVideoDataOutput with any object you like set as the delegate. If you set a 32bpp BGRA format on the output, the delegate object will receive each frame from the camera in a format just perfect for handing straight to glTexImage2D (or glTexSubImage2D if the device doesn't support non-power-of-two textures; I think the MBX devices fall into this category).
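To make that concrete, here's a minimal sketch of the delegate callback for that pre-iOS-5 path. It's illustrative only: it assumes your GL context is current on the delegate queue and that videoTexture is a texture name you generated with glGenTextures elsewhere.

    // Sketch: upload each BGRA camera frame straight into a GL texture.
    // Assumes the EAGL context is current here and `videoTexture` already exists.
    - (void)captureOutput:(AVCaptureOutput *)output
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection
    {
        CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        CVPixelBufferLockBaseAddress(pixelBuffer, 0);

        size_t width  = CVPixelBufferGetWidth(pixelBuffer);
        size_t height = CVPixelBufferGetHeight(pixelBuffer);
        void *baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer);

        glBindTexture(GL_TEXTURE_2D, videoTexture);
        // GL_BGRA relies on Apple's BGRA texture extension, available on iOS GPUs.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
                     (GLsizei)width, (GLsizei)height, 0,
                     GL_BGRA, GL_UNSIGNED_BYTE, baseAddress);

        CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    }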

There are a bunch of frame size and frame rate options; at a guess you'll have to tweak those depending on how much else you want to use the GPU for. I found that a completely trivial scene (just a textured quad showing the latest frame, redrawn only when a new frame arrived) could display the iPhone 4's maximum 720p 24 fps feed without any noticeable lag. I haven't benchmarked more thoroughly than that, so hopefully someone else can advise.
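For what it's worth, the coarse frame-size knob is the session preset; a minimal sketch, with the caveat that which presets a given device supports varies:

    // Illustrative only: pick 720p if the device offers it, otherwise fall back.
    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    if ([session canSetSessionPreset:AVCaptureSessionPreset1280x720]) {
        session.sessionPreset = AVCaptureSessionPreset1280x720;
    } else {
        session.sessionPreset = AVCaptureSessionPresetMedium;
    }
    // Frame rate can also be capped (e.g. via the video connection's
    // videoMinFrameDuration on iOS 5+) to leave more GPU time for your own drawing.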

In principle, per the API, frames can come back with some in-memory padding between scanlines, which would mean some shuffling of contents before posting off to GL, so you do need to implement a code path for that. In practice, speaking purely empirically, the current version of iOS never seems to return images in that form, so it isn't really a performance issue.
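If you want that defensive path, the delegate body above could compare the reported bytes-per-row against the tightly packed width and repack only when padding is actually present. A sketch, assuming the pixel buffer has already been locked with CVPixelBufferLockBaseAddress:

    size_t width       = CVPixelBufferGetWidth(pixelBuffer);
    size_t height      = CVPixelBufferGetHeight(pixelBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
    uint8_t *src       = CVPixelBufferGetBaseAddress(pixelBuffer);

    if (bytesPerRow == width * 4) {
        // Tightly packed BGRA: upload directly.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)width, (GLsizei)height,
                     0, GL_BGRA, GL_UNSIGNED_BYTE, src);
    } else {
        // Padded rows: copy scanline by scanline into a packed buffer first.
        uint8_t *packed = malloc(width * 4 * height);
        for (size_t y = 0; y < height; y++) {
            memcpy(packed + y * width * 4, src + y * bytesPerRow, width * 4);
        }
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)width, (GLsizei)height,
                     0, GL_BGRA, GL_UNSIGNED_BYTE, packed);
        free(packed);
    }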

EDIT: it's now very close to three years later. In the interim Apple has released iOS 5, 6 and 7. With 5 they introduced CVOpenGLESTexture and CVOpenGLESTextureCache, which are now the smart way to pipe video from a capture device into OpenGL. Apple supplies sample code (the GLCameraRipple project), of which the particularly interesting parts are in RippleViewController.m, specifically its setupAVCapture and captureOutput:didOutputSampleBuffer:fromConnection: (see lines 196–329). Sadly the terms and conditions prevent duplicating that code here without attaching the whole project, but the step-by-step setup is as follows (with a rough sketch of my own after the list):

  1. create a CVOpenGLESTextureCache (via CVOpenGLESTextureCacheCreate) and an AVCaptureSession;
  2. grab a suitable AVCaptureDevice for video;
  3. create an AVCaptureDeviceInput with that capture device;
  4. attach an AVCaptureVideoDataOutput and tell it to call you as a sample buffer delegate.
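Since Apple's code can't be reproduced here, this is my own rough sketch of those four steps, not theirs; error handling is omitted, and 'context' is assumed to be your EAGLContext:

    // Step 1: texture cache + session.
    CVOpenGLESTextureCacheRef textureCache;
    CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, context, NULL, &textureCache);

    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    session.sessionPreset = AVCaptureSessionPreset640x480;

    // Steps 2 and 3: camera device and device input.
    AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:nil];
    [session addInput:input];

    // Step 4: video data output delivering bi-planar Y/CbCr, which matches the
    // two-texture (luma + chroma) approach described below.
    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    output.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey :
                                  @(kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange) };
    [output setSampleBufferDelegate:self
                              queue:dispatch_queue_create("video", DISPATCH_QUEUE_SERIAL)];
    [session addOutput:output];

    [session startRunning];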

Upon receiving each sample buffer:

  1. get the CVImageBufferRef from it;
  2. use CVOpenGLESTextureCacheCreateTextureFromImage to get Y and UV CVOpenGLESTextureRefs from the CV image buffer;
  3. get texture targets and names from the CV OpenGLES texture refs in order to bind them;
  4. combine luminance and chrominance in your shader (a sketch of these steps follows).
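Again as a sketch under the same assumptions (my own names, not Apple's; 'textureCache' is the cache created during setup), the per-frame work looks roughly like this:

    - (void)captureOutput:(AVCaptureOutput *)output
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection
    {
        // Step 1: the CVImageBufferRef behind the sample buffer.
        CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        size_t width  = CVPixelBufferGetWidth(pixelBuffer);
        size_t height = CVPixelBufferGetHeight(pixelBuffer);

        CVOpenGLESTextureRef lumaTexture = NULL, chromaTexture = NULL;

        // Step 2: plane 0 is full-resolution luminance, as a one-channel texture.
        CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache,
            pixelBuffer, NULL, GL_TEXTURE_2D, GL_LUMINANCE,
            (GLsizei)width, (GLsizei)height,
            GL_LUMINANCE, GL_UNSIGNED_BYTE, 0, &lumaTexture);

        // Plane 1 is half-resolution interleaved CbCr, as a two-channel texture.
        CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache,
            pixelBuffer, NULL, GL_TEXTURE_2D, GL_LUMINANCE_ALPHA,
            (GLsizei)(width / 2), (GLsizei)(height / 2),
            GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, 1, &chromaTexture);

        // Step 3: bind via the targets and names the cache hands back.
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(CVOpenGLESTextureGetTarget(lumaTexture),
                      CVOpenGLESTextureGetName(lumaTexture));
        // Camera frames are non-power-of-two, so clamp and don't mipmap
        // (do the same for the chroma texture).
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

        glActiveTexture(GL_TEXTURE1);
        glBindTexture(CVOpenGLESTextureGetTarget(chromaTexture),
                      CVOpenGLESTextureGetName(chromaTexture));

        // ... issue your draw call here ...

        CFRelease(lumaTexture);
        CFRelease(chromaTexture);
        CVOpenGLESTextureCacheFlush(textureCache, 0);
    }

Step 4 then happens in the fragment shader: sample both textures and convert YCbCr to RGB (e.g. with the BT.601 matrix), which is what Apple's sample shader does.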