So, I wanted to explore Google's new camera API, CameraX.
What I want to do is take an image from the camera feed every second and pass it to a function that accepts a Bitmap, for machine learning purposes.
I read the CameraX documentation on the Image Analyzer:
The image analysis use case provides your app with a CPU-accessible image to perform image processing, computer vision, or machine learning inference on. The application implements an Analyzer method that is run on each frame.
...which is basically what I need. So I implemented the image analyzer like this:
imageAnalysis.setAnalyzer { image: ImageProxy, _: Int ->
    viewModel.onAnalyzeImage(image)
}
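For the "every second" part, I plan to simply throttle inside the analyzer, roughly like this sketch (lastAnalyzedTimestamp is just a field I keep on my class):

private var lastAnalyzedTimestamp = 0L

imageAnalysis.setAnalyzer { image: ImageProxy, _: Int ->
    // Only forward a frame to the ViewModel about once per second.
    val now = System.currentTimeMillis()
    if (now - lastAnalyzedTimestamp >= 1_000L) {
        lastAnalyzedTimestamp = now
        viewModel.onAnalyzeImage(image)
    }
}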
What I get is an image: ImageProxy. How can I convert this ImageProxy into a Bitmap?
I tried to solve it like this:
fun decodeBitmap(image: ImageProxy): Bitmap? {
    val buffer = image.planes[0].buffer
    val bytes = ByteArray(buffer.capacity()).also { buffer.get(it) }
    return BitmapFactory.decodeByteArray(bytes, 0, bytes.size)
}
But it returns null, presumably because decodeByteArray does not receive valid bitmap bytes. Any ideas?
You will need to check image.format to see if it is ImageFormat.YUV_420_888. If so, then you can use this extension to convert the image to a Bitmap:
import android.graphics.Bitmap
import android.graphics.BitmapFactory
import android.graphics.ImageFormat
import android.graphics.Rect
import android.graphics.YuvImage
import android.media.Image
import java.io.ByteArrayOutputStream

fun Image.toBitmap(): Bitmap {
    val yBuffer = planes[0].buffer // Y plane
    val vuBuffer = planes[2].buffer // VU plane (interleaved on most devices)
    val ySize = yBuffer.remaining()
    val vuSize = vuBuffer.remaining()

    // Pack the Y and VU planes into a single NV21 byte array.
    val nv21 = ByteArray(ySize + vuSize)
    yBuffer.get(nv21, 0, ySize)
    vuBuffer.get(nv21, ySize, vuSize)

    // Compress the NV21 frame to JPEG (quality 50), then decode it into a Bitmap.
    val yuvImage = YuvImage(nv21, ImageFormat.NV21, this.width, this.height, null)
    val out = ByteArrayOutputStream()
    yuvImage.compressToJpeg(Rect(0, 0, yuvImage.width, yuvImage.height), 50, out)
    val imageBytes = out.toByteArray()
    return BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.size)
}
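As a rough usage sketch (my own addition, not tested): in the onAnalyzeImage function from the question you could check the format, pull the underlying android.media.Image out of the ImageProxy via getImage() (marked experimental in newer CameraX releases), convert, and always close the proxy so the analyzer keeps receiving frames:

fun onAnalyzeImage(imageProxy: ImageProxy) {
    try {
        // getImage() exposes the underlying android.media.Image; it can be null.
        val mediaImage = imageProxy.image
        if (mediaImage != null && mediaImage.format == ImageFormat.YUV_420_888) {
            val bitmap = mediaImage.toBitmap()
            // ... run the ML inference on the bitmap here ...
        }
    } finally {
        // Always close the frame, otherwise the analyzer stops receiving new images.
        imageProxy.close()
    }
}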
This works for many camera configurations, because on most devices the U and V planes of a YUV_420_888 image are interleaved, so planes[2] already contains VU data in NV21 order. However, on devices where the planes have a row stride larger than the image width, or a different chroma layout, you will need a more advanced method that takes the row and pixel strides into account.
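For reference, a stride-aware variant could look roughly like the sketch below (my own rewrite, assuming the same android.media.Image receiver as above): it copies the Y plane row by row and interleaves the V and U samples while honoring rowStride and pixelStride, producing an NV21 array that can be fed to YuvImage exactly as in the extension above.

fun Image.toNv21(): ByteArray {
    val width = this.width
    val height = this.height
    val nv21 = ByteArray(width * height * 3 / 2)

    // Copy the Y plane row by row, skipping any padding at the end of each row.
    val yPlane = planes[0]
    val yBuffer = yPlane.buffer
    var outputOffset = 0
    for (row in 0 until height) {
        yBuffer.position(row * yPlane.rowStride)
        yBuffer.get(nv21, outputOffset, width)
        outputOffset += width
    }

    // Interleave V and U samples (NV21 is V first), honoring row and pixel strides.
    val uPlane = planes[1]
    val vPlane = planes[2]
    val uBuffer = uPlane.buffer
    val vBuffer = vPlane.buffer
    for (row in 0 until height / 2) {
        for (col in 0 until width / 2) {
            nv21[outputOffset++] = vBuffer.get(row * vPlane.rowStride + col * vPlane.pixelStride)
            nv21[outputOffset++] = uBuffer.get(row * uPlane.rowStride + col * uPlane.pixelStride)
        }
    }
    return nv21
}

Also keep in mind that compressing to JPEG and decoding back is convenient but lossy and fairly slow; if you run inference on every analyzed frame, a direct YUV-to-RGB conversion will generally be faster than the JPEG round trip.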