I would like to perform a few operations on a CVPixelBufferRef and come out with a cv::Mat of type CV_8UC1. I am not sure what the most efficient order is to do this; however, I do know that all of the operations are available on an OpenCV matrix, so I would like to know how to convert it.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    cv::Mat frame = f(pixelBuffer); // how do I implement f()?
}
I found the answer in some excellent GitHub source code. I adapted it here for simplicity. It also does the greyscale conversion for me.
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
OSType format = CVPixelBufferGetPixelFormatType(pixelBuffer);

// Set the following dict on AVCaptureVideoDataOutput's videoSettings to get YUV output:
// @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) }
NSAssert(format == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange, @"Only YUV is supported");

// The first plane / channel (at index 0) is the grayscale (luma) plane.
// See more information about the YUV format:
// http://en.wikipedia.org/wiki/YUV
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
void *baseaddress = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
size_t width = CVPixelBufferGetWidthOfPlane(pixelBuffer, 0);
size_t height = CVPixelBufferGetHeightOfPlane(pixelBuffer, 0);
size_t bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);

// Pass the plane's stride as the step: a step of 0 (AUTO_STEP) assumes tightly
// packed rows, but CVPixelBuffer rows are often padded.
cv::Mat mat((int)height, (int)width, CV_8UC1, baseaddress, bytesPerRow);

// Use the mat here. It points directly into the pixel buffer, so clone() it
// if you need the data after the buffer is unlocked.
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
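To tie this back to f() from the question, here is a minimal sketch that wraps the conversion in a helper. The function name and the clone() are my additions, not from the original source: the clone costs one copy but keeps the returned Mat valid after the pixel buffer is unlocked.

// Hypothetical helper wrapping the conversion above. Returns a deep copy so the
// cv::Mat stays valid after the pixel buffer is unlocked.
static cv::Mat matFromPixelBuffer(CVPixelBufferRef pixelBuffer)
{
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    void *base = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
    size_t width = CVPixelBufferGetWidthOfPlane(pixelBuffer, 0);
    size_t height = CVPixelBufferGetHeightOfPlane(pixelBuffer, 0);
    size_t stride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
    cv::Mat wrapped((int)height, (int)width, CV_8UC1, base, stride);
    cv::Mat owned = wrapped.clone(); // deep copy before unlocking
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    return owned;
}

If you only use the Mat inside the lock/unlock pair, you can skip the clone and work on the wrapped Mat directly, as in the snippet above.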
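For the videoSettings comment in the snippet above, here is a minimal capture-setup sketch. The session variable and the queue label are placeholders, and it assumes self implements AVCaptureVideoDataOutputSampleBufferDelegate.

// Minimal sketch: configure the output to deliver bi-planar YUV frames.
AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
output.videoSettings = @{
    (id)kCVPixelBufferPixelFormatTypeKey :
        @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange)
};
dispatch_queue_t queue = dispatch_queue_create("videoQueue", DISPATCH_QUEUE_SERIAL);
[output setSampleBufferDelegate:self queue:queue];
if ([session canAddOutput:output]) {
    [session addOutput:output];
}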