OpenGL ES to video in iOS (rendering to a texture with iOS 5 texture cache)

user1562826 · Jul 30, 2012 · Viewed 8.7k times

You know Apple's sample code with the CameraRipple effect? Well, I'm trying to record the camera output to a file after OpenGL has done all the cool water effects.

I've done it with glReadPixels, where I read all the pixels into a void * buffer, create a CVPixelBufferRef, and append it to the AVAssetWriterInputPixelBufferAdaptor, but it's too slow because glReadPixels takes tons of time. I found out that using an FBO and a texture cache you can do the same thing, but faster.
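For reference, the slow path looked roughly like this (a minimal sketch, assuming the same pixelAdapter, currentTime, frameLength, and screen dimensions used below):

// Slow path: copy every rendered frame back from the GPU with glReadPixels.
// GL_BGRA as a read format relies on iOS's BGRA read support.
GLubyte *rawPixels = (GLubyte *)malloc((size_t)_screenWidth * (size_t)_screenHeight * 4);
glReadPixels(0, 0, (GLsizei)_screenWidth, (GLsizei)_screenHeight,
             GL_BGRA, GL_UNSIGNED_BYTE, rawPixels);

// Wrap the raw bytes in a pixel buffer and hand it to the writer.
CVPixelBufferRef readbackBuffer = NULL;
CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                             (size_t)_screenWidth, (size_t)_screenHeight,
                             kCVPixelFormatType_32BGRA,
                             rawPixels,
                             (size_t)_screenWidth * 4,
                             NULL, NULL, NULL,
                             &readbackBuffer);

if ([pixelAdapter appendPixelBuffer:readbackBuffer withPresentationTime:currentTime]) {
    currentTime = CMTimeAdd(currentTime, frameLength);
}
CVPixelBufferRelease(readbackBuffer);
free(rawPixels);

Here is my code in the drawInRect: method that Apple uses: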

CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, (__bridge void *)_context, NULL, &coreVideoTextureCashe);
if (err) 
{
    NSAssert(NO, @"Error at CVOpenGLESTextureCacheCreate %d", err);
}


CFDictionaryRef empty; // empty value for attr value.
CFMutableDictionaryRef attrs2;
empty = CFDictionaryCreate(kCFAllocatorDefault, // our empty IOSurface properties dictionary
                           NULL,
                           NULL,
                           0,
                           &kCFTypeDictionaryKeyCallBacks,
                           &kCFTypeDictionaryValueCallBacks);
attrs2 = CFDictionaryCreateMutable(kCFAllocatorDefault,
                                  1,
                                  &kCFTypeDictionaryKeyCallBacks,
                                  &kCFTypeDictionaryValueCallBacks);

CFDictionarySetValue(attrs2,
                     kCVPixelBufferIOSurfacePropertiesKey,
                     empty);

//CVPixelBufferPoolCreatePixelBuffer (NULL, [assetWriterPixelBufferInput pixelBufferPool], &renderTarget);
CVPixelBufferRef pixiel_bufer4e = NULL;

CVPixelBufferCreate(kCFAllocatorDefault, 
                    (int)_screenWidth, 
                    (int)_screenHeight,
                    kCVPixelFormatType_32BGRA,
                    attrs2,
                    &pixiel_bufer4e);
CVOpenGLESTextureRef renderTexture;
CVOpenGLESTextureCacheCreateTextureFromImage (kCFAllocatorDefault,
                                              coreVideoTextureCashe, pixiel_bufer4e,
                                              NULL, // texture attributes
                                              GL_TEXTURE_2D,
                                              GL_RGBA, // opengl format
                                              (int)_screenWidth, 
                                              (int)_screenHeight,
                                              GL_BGRA, // native iOS format
                                              GL_UNSIGNED_BYTE,
                                              0,
                                              &renderTexture);
CFRelease(attrs2);
CFRelease(empty);
glBindTexture(CVOpenGLESTextureGetTarget(renderTexture), CVOpenGLESTextureGetName(renderTexture));
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, CVOpenGLESTextureGetName(renderTexture), 0);

CVPixelBufferLockBaseAddress(pixiel_bufer4e, 0);

if([pixelAdapter appendPixelBuffer:pixiel_bufer4e withPresentationTime:currentTime]) {
    float result = currentTime.value;
    NSLog(@"\n\nHere is the data, and the current time is: %f\n\n", result);
    currentTime = CMTimeAdd(currentTime, frameLength);
}

CVPixelBufferUnlockBaseAddress(pixiel_bufer4e, 0);
CVPixelBufferRelease(pixiel_bufer4e);
CFRelease(renderTexture);
CFRelease(coreVideoTextureCashe);

It records a video and it's pretty quick, yet the video is just black. I think the texture cache ref is not the right one, or I'm filling it wrong.
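One sanity check that might help here (a guess at the cause, not a confirmed fix): verify that the FBO is actually complete after the texture is attached.

// The FBO must be complete before rendering into it; one common cause of
// incompleteness is the texture's default mipmapping min filter, which
// setting GL_TEXTURE_MIN_FILTER to GL_LINEAR avoids.
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE)
{
    NSLog(@"Framebuffer incomplete after attaching render texture: %x", status);
}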

As an update, here is another way I've tried. I must be missing something. In viewDidLoad, after I set up the OpenGL context, I do this:

CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, (__bridge void *)_context, NULL, &coreVideoTextureCashe);

    if (err)
    {
        NSAssert(NO, @"Error at CVOpenGLESTextureCacheCreate %d", err);
    }

    // creates the pixel buffer

    pixel_buffer = NULL;
    CVPixelBufferPoolCreatePixelBuffer (NULL, [pixelAdapter pixelBufferPool], &pixel_buffer);

    CVOpenGLESTextureRef renderTexture;
    CVOpenGLESTextureCacheCreateTextureFromImage (kCFAllocatorDefault, coreVideoTextureCashe, pixel_buffer,
                                                  NULL, // texture attributes
                                                  GL_TEXTURE_2D,
                                                  GL_RGBA, //  opengl format
                                                   (int)screenWidth,
                                                  (int)screenHeight,
                                                  GL_BGRA, // native iOS format
                                                  GL_UNSIGNED_BYTE,
                                                  0,
                                                  &renderTexture);

    glBindTexture(CVOpenGLESTextureGetTarget(renderTexture), CVOpenGLESTextureGetName(renderTexture));
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, CVOpenGLESTextureGetName(renderTexture), 0);

Then in drawInRect: I do this:

if (isRecording && writerInput.readyForMoreMediaData) {
    CVPixelBufferLockBaseAddress(pixel_buffer, 0);

    if([pixelAdapter appendPixelBuffer:pixel_buffer withPresentationTime:currentTime]) {
        currentTime = CMTimeAdd(currentTime, frameLength);
    }
    CVPixelBufferUnlockBaseAddress(pixel_buffer, 0);
    CVPixelBufferRelease(pixel_buffer);
}

Yet it crashes with EXC_BAD_ACCESS on the renderTexture, which is not nil but 0x000000001.
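As far as I can tell, the adaptor's pixelBufferPool is NULL until the asset writer has actually started a session, so calling it in viewDidLoad can hand back a NULL buffer, and the texture creation then fails. A sketch of guards that would at least surface this (same variables as above):

CVReturn poolErr = CVPixelBufferPoolCreatePixelBuffer(NULL, [pixelAdapter pixelBufferPool], &pixel_buffer);
if (poolErr != kCVReturnSuccess || pixel_buffer == NULL)
{
    // The pool is nil before startWriting / startSessionAtSourceTime:.
    NSLog(@"Could not create pixel buffer from pool: %d", poolErr);
}

CVReturn texErr = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                               coreVideoTextureCashe,
                                                               pixel_buffer,
                                                               NULL,
                                                               GL_TEXTURE_2D,
                                                               GL_RGBA,
                                                               (int)screenWidth,
                                                               (int)screenHeight,
                                                               GL_BGRA,
                                                               GL_UNSIGNED_BYTE,
                                                               0,
                                                               &renderTexture);
if (texErr != kCVReturnSuccess || renderTexture == NULL)
{
    NSLog(@"CVOpenGLESTextureCacheCreateTextureFromImage failed: %d", texErr);
}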

UPDATE

With the code below I actually managed to get the video file, but there are some green and red flashes. I use the BGRA pixel format type.

Here I create the texture Cache:

CVReturn err2 = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, (__bridge void *)_context, NULL, &coreVideoTextureCashe);
if (err2) 
{
    NSLog(@"Error at CVOpenGLESTextureCacheCreate %d", err);
    return;
}

And then in drawInRect I call this:

if (isRecording && writerInput.readyForMoreMediaData) {
    [self cleanUpTextures];

    CFDictionaryRef empty; // empty value for attr value.
    CFMutableDictionaryRef attrs2;
    empty = CFDictionaryCreate(kCFAllocatorDefault, // our empty IOSurface properties dictionary
                           NULL,
                           NULL,
                           0,
                           &kCFTypeDictionaryKeyCallBacks,
                           &kCFTypeDictionaryValueCallBacks);
    attrs2 = CFDictionaryCreateMutable(kCFAllocatorDefault,
                                   1,
                                   &kCFTypeDictionaryKeyCallBacks,
                                   &kCFTypeDictionaryValueCallBacks);

    CFDictionarySetValue(attrs2,
                     kCVPixelBufferIOSurfacePropertiesKey,
                     empty);

//CVPixelBufferPoolCreatePixelBuffer (NULL, [assetWriterPixelBufferInput pixelBufferPool], &renderTarget);
    CVPixelBufferRef pixiel_bufer4e = NULL;

    CVPixelBufferCreate(kCFAllocatorDefault, 
                    (int)_screenWidth, 
                    (int)_screenHeight,
                    kCVPixelFormatType_32BGRA,
                    attrs2,
                    &pixiel_bufer4e);
    CVOpenGLESTextureRef renderTexture;
    CVOpenGLESTextureCacheCreateTextureFromImage (kCFAllocatorDefault,
                                              coreVideoTextureCashe, pixiel_bufer4e,
                                              NULL, // texture attributes
                                              GL_TEXTURE_2D,
                                              GL_RGBA, // opengl format
                                              (int)_screenWidth, 
                                              (int)_screenHeight,
                                              GL_BGRA, // native iOS format
                                              GL_UNSIGNED_BYTE,
                                              0,
                                              &renderTexture);
    CFRelease(attrs2);
    CFRelease(empty);
    glBindTexture(CVOpenGLESTextureGetTarget(renderTexture), CVOpenGLESTextureGetName(renderTexture));
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, CVOpenGLESTextureGetName(renderTexture), 0);

    CVPixelBufferLockBaseAddress(pixiel_bufer4e, 0);

    if([pixelAdapter appendPixelBuffer:pixiel_bufer4e withPresentationTime:currentTime]) {
        float result = currentTime.value;
        NSLog(@"\n\n\4eta danni i current time e : %f \n\n",result);
        currentTime = CMTimeAdd(currentTime, frameLength);
    }

    CVPixelBufferUnlockBaseAddress(pixiel_bufer4e, 0);
    CVPixelBufferRelease(pixiel_bufer4e);
    CFRelease(renderTexture);
  //  CFRelease(coreVideoTextureCashe);
}

I know I can optimize this a lot by not doing all these things here, but I just wanted to make it work first. In cleanUpTextures I flush the texture cache with:

 CVOpenGLESTextureCacheFlush(coreVideoTextureCashe, 0);
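For completeness, a fuller cleanUpTextures might also release the previous frame's texture ref before flushing (just a sketch, assuming the texture is kept in a hypothetical _renderTexture instance variable instead of a local):

- (void)cleanUpTextures
{
    // Release the texture wrapper from the previous frame, if any,
    // then let the cache drop its internal resources.
    if (_renderTexture)
    {
        CFRelease(_renderTexture);
        _renderTexture = NULL;
    }
    CVOpenGLESTextureCacheFlush(coreVideoTextureCashe, 0);
}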

Something might be wrong with the RGBA stuff, or maybe the cache is still somehow the wrong one.
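One thing that might explain the flashes (an assumption on my part, not something I've verified): appendPixelBuffer: can read the buffer before the GPU has finished rendering into it. Forcing the rendering to complete before the append would rule that out:

// Make sure the GPU is done rendering into the IOSurface-backed buffer
// before AVFoundation reads its contents.
glFinish();

CVPixelBufferLockBaseAddress(pixiel_bufer4e, 0);
if ([pixelAdapter appendPixelBuffer:pixiel_bufer4e withPresentationTime:currentTime]) {
    currentTime = CMTimeAdd(currentTime, frameLength);
}
CVPixelBufferUnlockBaseAddress(pixiel_bufer4e, 0);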

Answer

Brad Larson · Jul 30, 2012

For recording video, this isn't the approach I'd use. You're creating a new pixel buffer for each rendered frame, which will be slow, and you're never releasing it, so it's no surprise you're getting memory warnings.

Instead, follow what I describe in this answer. I create a pixel buffer for the cached texture once, assign that texture to the FBO I'm rendering to, then append that pixel buffer using the AVAssetWriter's pixel buffer input on every frame. It's far faster to use the single pixel buffer than recreating one every frame. You also want to leave the pixel buffer associated with your FBO's texture target, rather than associating it on every frame.
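In outline, that approach looks something like this (a sketch, not the exact code from the linked answer; width, height, context, and the writer objects are placeholders):

// One-time setup: a single IOSurface-backed pixel buffer, wrapped in a
// cached texture that stays attached to the FBO for the whole recording.
CVOpenGLESTextureCacheRef textureCache = NULL;
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, (__bridge void *)context, NULL, &textureCache);

CFDictionaryRef emptyDict = CFDictionaryCreate(kCFAllocatorDefault, NULL, NULL, 0,
                                               &kCFTypeDictionaryKeyCallBacks,
                                               &kCFTypeDictionaryValueCallBacks);
CFMutableDictionaryRef bufferAttrs = CFDictionaryCreateMutable(kCFAllocatorDefault, 1,
                                                               &kCFTypeDictionaryKeyCallBacks,
                                                               &kCFTypeDictionaryValueCallBacks);
CFDictionarySetValue(bufferAttrs, kCVPixelBufferIOSurfacePropertiesKey, emptyDict);

CVPixelBufferRef renderTarget = NULL;
CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                    kCVPixelFormatType_32BGRA, bufferAttrs, &renderTarget);

CVOpenGLESTextureRef renderTextureRef = NULL;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache,
                                             renderTarget, NULL, GL_TEXTURE_2D,
                                             GL_RGBA, width, height,
                                             GL_BGRA, GL_UNSIGNED_BYTE, 0, &renderTextureRef);

glBindTexture(CVOpenGLESTextureGetTarget(renderTextureRef), CVOpenGLESTextureGetName(renderTextureRef));
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D,
                       CVOpenGLESTextureGetName(renderTextureRef), 0);

// Per frame: render into the FBO as usual, then append the same buffer.
glFinish();
CVPixelBufferLockBaseAddress(renderTarget, 0);
[assetWriterPixelBufferInput appendPixelBuffer:renderTarget withPresentationTime:frameTime];
CVPixelBufferUnlockBaseAddress(renderTarget, 0);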

I encapsulate this recording code within the GPUImageMovieWriter in my open source GPUImage framework, if you want to see how this works in practice. As I indicate in the above-linked answer, doing the recording in this fashion leads to extremely fast encodes.
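At the API level, usage looks roughly like this (based on the framework's documented interface; exact names may differ between versions):

// Record the output of a filter chain to a movie file with GPUImageMovieWriter.
NSURL *movieURL = [NSURL fileURLWithPath:
    [NSTemporaryDirectory() stringByAppendingPathComponent:@"movie.m4v"]];
GPUImageMovieWriter *movieWriter =
    [[GPUImageMovieWriter alloc] initWithMovieURL:movieURL size:CGSizeMake(480.0, 640.0)];

[videoCamera addTarget:filter];
[filter addTarget:movieWriter];

[videoCamera startCameraCapture];
[movieWriter startRecording];
// ... later, when recording should stop ...
[movieWriter finishRecording];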