I have this situation where I use OpenCV to detect faces in front of the camera and do some ML on those faces. The issue is that once I finish all the processing and go to grab the next frame, I get the past, not the present. That is, I read what is inside the buffer, not what is actually in front of the camera. I don't care which faces came in front of the camera while processing; I care what is in front of the camera now.
I did try setting the buffer size to 1, and that helped quite a lot, but I still get at least 3 buffered reads. Setting the FPS to 1 also does not remove this situation 100%. Below is the flow that I have.
let cv = require('opencv4nodejs');
let camera = new cv.VideoCapture(camera_port);
camera.set(cv.CAP_PROP_BUFFERSIZE, 1); // keep the internal buffer at a single frame
camera.set(cv.CAP_PROP_FPS, 2);        // lower the capture frame rate
camera.set(cv.CAP_PROP_POS_FRAMES, 1);
function loop()
{
//
// <>> Grab one frame from the Camera buffer.
//
let rgb_mat = camera.read();
// Convert to grayscale
// Do face detection
// Crop the image
// Do some ML stuff
// Do what needs to be done after the results are in.
//
// <>> Release data from memory
//
rgb_mat.release();
//
// <>> Restart the loop
//
loop();
}
Is it possible to remove the buffer altogether? If so, how? If not, an explanation of why would be much appreciated.
Whether CAP_PROP_BUFFERSIZE is supported appears to be quite operating-system- and backend-specific. E.g., the 2.4 docs state it is "only supported by DC1394 [Firewire] v 2.x backend currently," and for the V4L backend, according to the code, support was added only on 9 Mar 2018.
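As a quick sanity check (a sketch only, not a guarantee of support), you can read the property back after setting it; on backends that ignore CAP_PROP_BUFFERSIZE, the value returned by get() may not reflect the request. The device index 0 below is just a placeholder:
let cv = require('opencv4nodejs');
let camera = new cv.VideoCapture(0);
camera.set(cv.CAP_PROP_BUFFERSIZE, 1);
// Read the property back; if the backend ignores CAP_PROP_BUFFERSIZE,
// this may not match the value that was just requested.
console.log('reported buffer size:', camera.get(cv.CAP_PROP_BUFFERSIZE));
Even a matching read-back is not a hard guarantee that the buffer really is limited to one frame.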
The easiest non-brittle way to disable the buffer is to use a separate thread; for details, see my comments under Piotr Kurowski's answer. Here is Python code that uses a separate thread to implement a bufferless VideoCapture (I did not have an opencv4nodejs environment):
import cv2, queue, threading, time

# bufferless VideoCapture
class VideoCapture:

    def __init__(self, name):
        self.cap = cv2.VideoCapture(name)
        self.q = queue.Queue()
        t = threading.Thread(target=self._reader)
        t.daemon = True
        t.start()

    # read frames as soon as they are available, keeping only most recent one
    def _reader(self):
        while True:
            ret, frame = self.cap.read()
            if not ret:
                break
            if not self.q.empty():
                try:
                    self.q.get_nowait()  # discard previous (unprocessed) frame
                except queue.Empty:
                    pass
            self.q.put(frame)

    def read(self):
        return self.q.get()

cap = VideoCapture(0)
while True:
    frame = cap.read()
    time.sleep(.5)  # simulate long processing
    cv2.imshow("frame", frame)
    if chr(cv2.waitKey(1) & 255) == 'q':
        break
The frame reader thread is encapsulated inside the custom VideoCapture
class, and communication with the main thread is via a queue.
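For opencv4nodejs, a rough equivalent would replace the reader thread with a background async read loop that keeps only the most recent frame, since readAsync() performs the actual grab off the main JavaScript thread. This is an untested sketch under that assumption; the class name BufferlessCapture is just a placeholder:
let cv = require('opencv4nodejs');

class BufferlessCapture {
    constructor(devicePort) {
        this.cap = new cv.VideoCapture(devicePort);
        this.latest = null;
        this.running = true;
        this._pump(); // start the background read loop
    }

    // Keep reading frames as fast as the camera delivers them,
    // always replacing the previous (unprocessed) frame.
    async _pump() {
        while (this.running) {
            let frame = await this.cap.readAsync();
            if (frame.empty) continue;
            if (this.latest) this.latest.release();
            this.latest = frame;
        }
    }

    // Return the most recent frame, or null if none has arrived yet.
    read() {
        let frame = this.latest;
        this.latest = null;
        return frame;
    }

    stop() {
        this.running = false;
    }
}
The caveat is that the pump only discards stale frames while the event loop is free; if the per-frame ML processing is long and fully synchronous, the loop is blocked during it and the camera's internal buffer can still fill up. In that case, moving the reader into a worker_threads worker or a child process would be closer to the Python version.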
This answer suggests using cap.grab()
in a reader thread, but the docs do not guarantee that grab()
clears the buffer, so this may work in some cases but not in others.