How to directly rotate CVImageBuffer image in iOS 4 without converting to UIImage?

Ian Charnas · Dec 25, 2010

I am using OpenCV 2.2 on the iPhone to detect faces. I'm using iOS 4's AVCaptureSession to get access to the camera stream, as seen in the code that follows.

My challenge is that the video frames arrive as CVBufferRef (pointers to CVImageBuffer) objects, and they come in landscape orientation, 480px wide by 300px high. That's fine if you're holding the phone sideways, but when the phone is held upright I want to rotate these frames 90 degrees clockwise so that OpenCV can find the faces correctly.

I could convert the CVBufferRef to a CGImage, then to a UIImage, and then rotate, as this person is doing: Rotate CGImage taken from video frame.
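For reference, that slow path looks roughly like this. This is only a sketch, assuming the kCVPixelFormatType_32BGRA format configured in the code below; note that imageWithCGImage:scale:orientation: merely tags the image as rotated for display, so getting actually-rotated pixels for OpenCV still means redrawing the image, which is where the CPU time goes:

 // Sketch of the CVImageBuffer -> CGImage -> UIImage route (the slow path).
 // Assumes kCVPixelFormatType_32BGRA frames, in a file that imports UIKit and CoreVideo.
 UIImage* imageFromPixelBuffer(CVImageBufferRef pixelBuffer) {
  CVPixelBufferLockBaseAddress(pixelBuffer, 0);
  void *base = CVPixelBufferGetBaseAddress(pixelBuffer);
  size_t width = CVPixelBufferGetWidth(pixelBuffer);
  size_t height = CVPixelBufferGetHeight(pixelBuffer);
  size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);

  CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
  // 32BGRA corresponds to little-endian, alpha-first, premultiplied
  CGContextRef context = CGBitmapContextCreate(base, width, height, 8, bytesPerRow,
      colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
  CGImageRef cgImage = CGBitmapContextCreateImage(context);

  // UIImageOrientationRight = "display this rotated 90 degrees clockwise"
  UIImage *image = [UIImage imageWithCGImage:cgImage scale:1.0
                                 orientation:UIImageOrientationRight];

  CGImageRelease(cgImage);
  CGContextRelease(context);
  CGColorSpaceRelease(colorSpace);
  CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
  return image;
 }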

However, that wastes a lot of CPU. I'm looking for a faster way to rotate the incoming frames, ideally using the GPU to do this processing if possible.

Any ideas?

Ian

Code Sample:

 -(void) startCameraCapture {
  // Start up the face detector

  faceDetector = [[FaceDetector alloc] initWithCascade:@"haarcascade_frontalface_alt2" withFileExtension:@"xml"];

  // Create the AVCapture Session
  session = [[AVCaptureSession alloc] init];

  // create a preview layer to show the output from the camera
  AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
  previewLayer.frame = previewView.frame;
  previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;

  [previewView.layer addSublayer:previewLayer];

  // Get the default camera device
  AVCaptureDevice* camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

  // Create an AVCaptureDeviceInput with the camera device
  NSError *error=nil;
  AVCaptureInput* cameraInput = [[AVCaptureDeviceInput alloc] initWithDevice:camera error:&error];
  if (cameraInput == nil) {
   NSLog(@"Error to create camera capture:%@",error);
  }

  // Set the output
  AVCaptureVideoDataOutput* videoOutput = [[AVCaptureVideoDataOutput alloc] init];
  videoOutput.alwaysDiscardsLateVideoFrames = YES;

  // create a queue, separate from the main queue, to run the capture on
  dispatch_queue_t captureQueue = dispatch_queue_create("captureQueue", NULL);

  // setup our delegate
  [videoOutput setSampleBufferDelegate:self queue:captureQueue];

  // release our reference to the queue: setSampleBufferDelegate:queue: retains it,
  // so the output keeps the queue alive for as long as it needs it
  dispatch_release(captureQueue);

  // configure the pixel format
  videoOutput.videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
          [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA], 
          (id)kCVPixelBufferPixelFormatTypeKey,
          nil];

  // and the size of the frames we want
  // try AVCaptureSessionPresetLow if this is too slow...
  [session setSessionPreset:AVCaptureSessionPresetMedium];

  // If you wish to cap the frame rate to a known value, such as 10 fps, set 
  // minFrameDuration.
  videoOutput.minFrameDuration = CMTimeMake(1, 10);

  // Add the input and output
  [session addInput:cameraInput];
  [session addOutput:videoOutput];

  // Start the session
  [session startRunning];  
 }

 - (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
  // only run if we're not already processing an image
  if (!faceDetector.imageNeedsProcessing) {

   // Get CVImage from sample buffer
   CVImageBufferRef cvImage = CMSampleBufferGetImageBuffer(sampleBuffer);

   // Send the CVImage to the FaceDetector for later processing
   [faceDetector setImageFromCVPixelBufferRef:cvImage];

   // Trigger the image processing on the main thread
   [self performSelectorOnMainThread:@selector(processImage) withObject:nil waitUntilDone:NO];
  }
 }
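One avenue worth checking before rotating buffers by hand: the capture connection can sometimes be asked to deliver rotated frames. This is only a sketch, and hedged heavily: connectionWithMediaType: arrived in iOS 5 (on iOS 4 you would walk videoOutput.connections instead), and video data output connections frequently report rotation as unsupported, so the guard below is essential:

 // After [session addOutput:videoOutput]: ask the connection to rotate for us.
 // Support varies by device and iOS version; always check first.
 AVCaptureConnection *connection = [videoOutput connectionWithMediaType:AVMediaTypeVideo];
 if ([connection isVideoOrientationSupported]) {
  connection.videoOrientation = AVCaptureVideoOrientationPortrait;
 }
 // else: fall back to rotating the pixel buffers yourself, as in the answer below.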

Answer

Sten · Sep 7, 2012

vImage is a pretty fast way to do it. It requires iOS 5, though. The call says ARGB, but it works on the BGRA you get from the buffer, since the rotation moves whole 4-byte pixels around without caring about channel order.

This also has the advantage that you can cut out a part of the buffer and rotate just that region; see my answer here. A sketch of that cropping trick follows the code below.

- (unsigned char *)rotateBuffer:(CMSampleBufferRef)sampleBuffer
{
 // Requires the Accelerate framework: #import <Accelerate/Accelerate.h>
 CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
 CVPixelBufferLockBaseAddress(imageBuffer, 0);

 size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
 size_t width = CVPixelBufferGetWidth(imageBuffer);
 size_t height = CVPixelBufferGetHeight(imageBuffer);
 size_t currSize = bytesPerRow * height;   // source size; always >= the rotated output's size
 size_t bytesPerRowOut = 4 * height;       // rotated rows are `height` pixels of 4 bytes each

 void *srcBuff = CVPixelBufferGetBaseAddress(imageBuffer);
 unsigned char *outBuff = (unsigned char *)malloc(currSize);   // caller must free() this

 // vImage_Buffer fields are { data, height, width, rowBytes }
 vImage_Buffer ibuff = { srcBuff, height, width, bytesPerRow };
 vImage_Buffer ubuff = { outBuff, width, height, bytesPerRowOut };

 uint8_t rotConst = 1;   // 0, 1, 2, 3 = 0, 90, 180, 270 degrees counterclockwise

 Pixel_8888 backColor = { 0, 0, 0, 0 };
 vImage_Error err = vImageRotate90_ARGB8888(&ibuff, &ubuff, rotConst, backColor, 0);
 if (err != kvImageNoError) NSLog(@"%ld", err);

 CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
 return outBuff;
}
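To illustrate the cropping trick mentioned above: a vImage_Buffer is just { data, height, width, rowBytes }, so you can describe a sub-rectangle of the locked pixel buffer in place, without copying it first, and rotate only that region. A sketch under assumed names; cropX, cropY, cropW, and cropH are hypothetical parameters assumed to lie inside the frame:

// Rotate a sub-rectangle of a locked 32BGRA pixel buffer 90 degrees counterclockwise.
// Caller must free() the returned buffer.
unsigned char* rotateCroppedRegion(void *srcBase, size_t bytesPerRow,
                                   size_t cropX, size_t cropY,
                                   size_t cropW, size_t cropH)
{
 // Point the source at the region's first pixel; keeping the full frame's
 // bytesPerRow as the stride makes vImage skip the pixels outside the region.
 vImage_Buffer src = {
  (unsigned char *)srcBase + cropY * bytesPerRow + cropX * 4,
  cropH, cropW, bytesPerRow
 };

 size_t outBytesPerRow = cropH * 4;   // rotated rows are cropH pixels wide
 unsigned char *out = (unsigned char *)malloc(outBytesPerRow * cropW);
 vImage_Buffer dst = { out, cropW, cropH, outBytesPerRow };

 Pixel_8888 backColor = { 0, 0, 0, 0 };
 vImage_Error err = vImageRotate90_ARGB8888(&src, &dst, 1, backColor, 0);
 if (err != kvImageNoError) NSLog(@"%ld", err);
 return out;
}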