I’d like to convert a CGImage to a CMSampleBufferRef and append it to an AVAssetWriterInput using the appendSampleBuffer: method. I’ve managed to create the CMSampleBufferRef using the following code, but appendSampleBuffer: simply returns NO when I supply the resulting CMSampleBufferRef. What am I doing wrong?
- (void)appendCGImage:(CGImageRef)frame
{
    const size_t width = CGImageGetWidth(frame);
    const size_t height = CGImageGetHeight(frame);

    // Create a dummy pixel buffer to try the encoding on something simple.
    // (The image contents are not drawn into it yet.)
    CVPixelBufferRef pixelBuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
        kCVPixelFormatType_32BGRA, NULL, &pixelBuffer);
    NSParameterAssert(status == kCVReturnSuccess && pixelBuffer != NULL);

    // Sample timing info (lastSampleTime is an instance variable).
    CMTime frameTime = CMTimeMake(1, 30);
    CMTime currentTime = CMTimeAdd(lastSampleTime, frameTime);
    CMSampleTimingInfo timing = {frameTime, currentTime, kCMTimeInvalid};
    OSStatus result = 0;

    // Sample format.
    CMVideoFormatDescriptionRef videoInfo = NULL;
    result = CMVideoFormatDescriptionCreateForImageBuffer(NULL,
        pixelBuffer, &videoInfo);
    NSParameterAssert(result == 0 && videoInfo != NULL);

    // Create the sample buffer.
    CMSampleBufferRef sampleBuffer = NULL;
    result = CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault,
        pixelBuffer, true, NULL, NULL, videoInfo, &timing, &sampleBuffer);
    NSParameterAssert(result == 0 && sampleBuffer != NULL);

    // Ship out the frame.
    NSParameterAssert(CMSampleBufferDataIsReady(sampleBuffer));
    NSParameterAssert([writerInput isReadyForMoreMediaData]);
    BOOL success = [writerInput appendSampleBuffer:sampleBuffer];
    NSParameterAssert(success); // no go :(
}
P.S. I know there are memory leaks in this code; I’ve omitted some of the code for simplicity.
Aha, I’d completely missed the AVAssetWriterInputPixelBufferAdaptor class, which is made specifically for piping pixel buffers into a writer input. Now the code works, even without the messy CMSampleBuffer stuff.
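For reference, here is a minimal sketch of what the adaptor-based version might look like. The pixelBufferAdaptor, writerInput, and lastSampleTime instance variables are assumptions, and the adaptor is assumed to have been created with assetWriterInputPixelBufferAdaptorWithAssetWriterInput:sourcePixelBufferAttributes: (with BGRA attributes matching the image) and the writer session already started, since the adaptor’s pixelBufferPool is nil before then.

- (void)appendCGImage:(CGImageRef)frame
{
    const size_t width = CGImageGetWidth(frame);
    const size_t height = CGImageGetHeight(frame);

    // Grab a pixel buffer from the adaptor's pool instead of creating one by hand.
    // (Assumes the adaptor was set up with matching sourcePixelBufferAttributes.)
    CVPixelBufferRef pixelBuffer = NULL;
    CVReturn status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault,
        pixelBufferAdaptor.pixelBufferPool, &pixelBuffer);
    NSParameterAssert(status == kCVReturnSuccess && pixelBuffer != NULL);

    // Draw the CGImage into the BGRA pixel buffer.
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(
        CVPixelBufferGetBaseAddress(pixelBuffer), width, height, 8,
        CVPixelBufferGetBytesPerRow(pixelBuffer), colorSpace,
        (CGBitmapInfo)kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), frame);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

    // Append the pixel buffer directly; no CMSampleBuffer needed.
    lastSampleTime = CMTimeAdd(lastSampleTime, CMTimeMake(1, 30));
    NSParameterAssert([writerInput isReadyForMoreMediaData]);
    BOOL success = [pixelBufferAdaptor appendPixelBuffer:pixelBuffer
                                    withPresentationTime:lastSampleTime];
    NSParameterAssert(success);

    CVPixelBufferRelease(pixelBuffer);
}

Pulling buffers from the adaptor's pool (rather than CVPixelBufferCreate) also lets the writer reuse memory between frames, which is the intended usage pattern for AVAssetWriterInputPixelBufferAdaptor.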