How to convert a UIImage to a CVPixelBuffer

Ryan · Jun 9, 2017 · Viewed 16.9k times

Apple's new Core ML framework has a prediction function that takes a CVPixelBuffer. In order to classify a UIImage, a conversion must be made between the two.
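To make the target concrete, here is a minimal sketch of the call that needs the buffer. MyModel is a hypothetical class that Xcode would generate from an .mlmodel file whose image input is named "image"; the real class name and input label depend on your model:

import CoreML

let model = MyModel()                                    // hypothetical generated model class
// pixelBuffer: CVPixelBuffer, produced by the conversion below
let output = try? model.prediction(image: pixelBuffer)  // prediction takes a CVPixelBuffer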

Conversion code I got from an Apple Engineer:

1     // image has been defined earlier
2
3     var pixelbuffer: CVPixelBuffer? = nil
4
5     CVPixelBufferCreate(kCFAllocatorDefault, Int(image.size.width), Int(image.size.height), kCVPixelFormatType_OneComponent8, nil, &pixelbuffer)
6     CVPixelBufferLockBaseAddress(pixelbuffer!, CVPixelBufferLockFlags(rawValue: 0))
7
8     let colorspace = CGColorSpaceCreateDeviceGray()
9     let bitmapContext = CGContext(data: CVPixelBufferGetBaseAddress(pixelbuffer!), width: Int(image.size.width), height: Int(image.size.height), bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pixelbuffer!), space: colorspace, bitmapInfo: 0)!
10
11    bitmapContext.draw(image.cgImage!, in: CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height))
12
13    CVPixelBufferUnlockBaseAddress(pixelbuffer!, CVPixelBufferLockFlags(rawValue: 0))

This solution is in Swift and is for a grayscale image. Depending on the type of image, the changes that must be made are (a sketch of the adapted snippet follows the list):

  • Line 5 | kCVPixelFormatType_OneComponent8 to another OSType (kCVPixelFormatType_32ARGB for RGB)
  • Line 8 | colorspace to another CGColorSpace (CGColorSpaceCreateDeviceRGB for RGB)
  • Line 9 | bitsPerComponent stays at 8 for kCVPixelFormatType_32ARGB; it is the bit depth per channel, not per pixel (4 channels × 8 bits = 32 bits per pixel)
  • Line 9 | bitmapInfo to a value that specifies the alpha layout, e.g. CGImageAlphaInfo.noneSkipFirst.rawValue (a 32-bit RGB context will not accept .none)
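Putting those changes together, here is a sketch of the same snippet adapted for a 32-bit ARGB buffer (same assumption as above: image is a UIImage defined earlier):

var pixelbuffer: CVPixelBuffer? = nil

CVPixelBufferCreate(kCFAllocatorDefault, Int(image.size.width), Int(image.size.height), kCVPixelFormatType_32ARGB, nil, &pixelbuffer)
CVPixelBufferLockBaseAddress(pixelbuffer!, CVPixelBufferLockFlags(rawValue: 0))

let colorspace = CGColorSpaceCreateDeviceRGB()
let bitmapContext = CGContext(data: CVPixelBufferGetBaseAddress(pixelbuffer!), width: Int(image.size.width), height: Int(image.size.height), bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pixelbuffer!), space: colorspace, bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)! // bitsPerComponent stays 8; the alpha byte is present but ignored

bitmapContext.draw(image.cgImage!, in: CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height))

CVPixelBufferUnlockBaseAddress(pixelbuffer!, CVPixelBufferLockFlags(rawValue: 0))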

Answer

onmyway133 · Jun 10, 2017

You can take a look at this tutorial: https://www.hackingwithswift.com/whats-new-in-ios-11. The code is in Swift 4:

import UIKit
import CoreVideo

func buffer(from image: UIImage) -> CVPixelBuffer? {
  let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue, kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
  var pixelBuffer : CVPixelBuffer?
  let status = CVPixelBufferCreate(kCFAllocatorDefault, Int(image.size.width), Int(image.size.height), kCVPixelFormatType_32ARGB, attrs, &pixelBuffer)
  guard (status == kCVReturnSuccess) else {
    return nil
  }

  CVPixelBufferLockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
  let pixelData = CVPixelBufferGetBaseAddress(pixelBuffer!)

  let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
  guard let context = CGContext(data: pixelData, width: Int(image.size.width), height: Int(image.size.height), bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer!), space: rgbColorSpace, bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue) else {
    CVPixelBufferUnlockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
    return nil
  }

  // Flip the coordinate system so the UIImage draws right side up in the CG context
  context.translateBy(x: 0, y: image.size.height)
  context.scaleBy(x: 1.0, y: -1.0)

  UIGraphicsPushContext(context)
  image.draw(in: CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height))
  UIGraphicsPopContext()
  CVPixelBufferUnlockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))

  return pixelBuffer
}
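
One practical note: Core ML image models declare a fixed input size, so the UIImage is usually resized to those dimensions before converting. A minimal sketch, assuming a model that expects 224×224 (the size is a placeholder for whatever your .mlmodel declares):

func resized(_ image: UIImage, to size: CGSize) -> UIImage {
  return UIGraphicsImageRenderer(size: size).image { _ in
    image.draw(in: CGRect(origin: .zero, size: size))
  }
}

// Usage: resize, then convert with buffer(from:) above
let photo = UIImage(named: "photo")!
if let pixelBuffer = buffer(from: resized(photo, to: CGSize(width: 224, height: 224))) {
  // pass pixelBuffer to the model's prediction call
}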