I am trying to calibrate a RaspiCam fisheye lens camera with OpenCV. I am using the Python example code, and the chessboard row and column numbers are also correct, but somehow I cannot get a successful result. I have tested with quite a lot of photos, which you can see below. My source code: https://github.com/jagracar/OpenCV-python-tests/blob/master/OpenCV-tutorials/cameraCalibration/cameraCalibration.py
My chessboard rows and columns: rows = 9, cols = 6, but I still do not get a successful result.
Edit: my solution
https://gist.github.com/mesutpiskin/0412c44bae399adf1f48007f22bdd22d
Starting with OpenCV 3, the fisheye module was introduced, which manages the calibration for fisheye-type lenses quite well (at least for those who are not familiar with the mathematics behind the calibration process).
import cv2
import numpy as np
import glob

# Checkerboard dimensions (inner corners per column and row)
CHECKERBOARD = (6,9)
subpix_criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.1)
calibration_flags = cv2.fisheye.CALIB_RECOMPUTE_EXTRINSIC + cv2.fisheye.CALIB_CHECK_COND + cv2.fisheye.CALIB_FIX_SKEW

# Object points for one checkerboard view: (0,0,0), (1,0,0), (2,0,0), ...
objp = np.zeros((1, CHECKERBOARD[0]*CHECKERBOARD[1], 3), np.float32)
objp[0,:,:2] = np.mgrid[0:CHECKERBOARD[0], 0:CHECKERBOARD[1]].T.reshape(-1, 2)

objpoints = []  # 3d points in real world space
imgpoints = []  # 2d points in image plane
# Read the calibration images and look for the checkerboard in each one
# (the glob pattern is just an example; point it at your own calibration photos)
images = glob.glob('calibration_images/*.jpg')
for fname in images:
    img = cv2.imread(fname)
    img_shape = img.shape[:2]
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Find the chessboard corners
    ret, corners = cv2.findChessboardCorners(
        gray, CHECKERBOARD,
        flags=cv2.CALIB_CB_ADAPTIVE_THRESH + cv2.CALIB_CB_FAST_CHECK + cv2.CALIB_CB_NORMALIZE_IMAGE)
    # If found, add object points and image points (after refining them)
    if ret:
        objpoints.append(objp)
        corners = cv2.cornerSubPix(gray, corners, (3,3), (-1,-1), subpix_criteria)
        imgpoints.append(corners)
# Calculate K (camera matrix) and D (distortion coefficients)
N_imm = len(objpoints)  # number of calibration images in which the checkerboard was found
K = np.zeros((3, 3))
D = np.zeros((4, 1))
rvecs = [np.zeros((1, 1, 3), dtype=np.float64) for i in range(N_imm)]
tvecs = [np.zeros((1, 1, 3), dtype=np.float64) for i in range(N_imm)]
retval, K, D, rvecs, tvecs = cv2.fisheye.calibrate(
    objpoints,
    imgpoints,
    gray.shape[::-1],
    K,
    D,
    rvecs,
    tvecs,
    calibration_flags,
    (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-6))
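The retval returned by cv2.fisheye.calibrate is the re-projection error, which is a quick sanity check on how well the calibration fit your images. As a small optional sketch (the output file name is just an example), you can print it and store K and D for later use:

print("Found " + str(N_imm) + " valid images for calibration")
print("Re-projection error: " + str(retval))
print("K = " + str(K.tolist()))
print("D = " + str(D.tolist()))
np.savez("fisheye_calib.npz", K=K, D=D)  # example file name; reload later with np.load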
And now that you have K and D, you can undistort:
img = cv2.imread(img_path)  # the image to undistort (same size as the calibration images)
DIM = gray.shape[::-1]      # (width, height) of the images used for calibration
map1, map2 = cv2.fisheye.initUndistortRectifyMap(K, D, np.eye(3), K, DIM, cv2.CV_16SC2)
undistorted_img = cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT)
This should work!
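For a quick visual check you could, for example, save or display the result (the file and window names here are just placeholders):

cv2.imwrite("undistorted.jpg", undistorted_img)  # placeholder output name
cv2.imshow("undistorted", undistorted_img)
cv2.waitKey(0)
cv2.destroyAllWindows()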
If you also want to see the hidden parts of the image (for example the portion outside the yellow box in the image above), then after the calibration you need this:
img = cv2.imread(img_path)
img_dim = img.shape[:2][::-1]  # (width, height) of the image you want to undistort
DIM = gray.shape[::-1]         # dimension of the images used for calibration
balance = 1.0                  # in [0, 1]; pick the value you want (see below)
scaled_K = K * img_dim[0] / DIM[0]  # rescale K if this image is not the same resolution as the calibration images
scaled_K[2][2] = 1.0
new_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(scaled_K, D, img_dim, np.eye(3), balance=balance)
map1, map2 = cv2.fisheye.initUndistortRectifyMap(scaled_K, D, np.eye(3), new_K, img_dim, cv2.CV_16SC2)
undist_image = cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT)
Now, by varying the balance value you can shrink or enlarge the final image (compared to the image above, practically the yellow rectangle).
From the OpenCV API:
balance: Sets the new focal length in range between the min focal length and the max focal length. Balance is in range of [0, 1].
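As an illustration of what balance does, you could re-run the last three steps for a few values and write one output image per value (the file names are just examples); the higher the balance, the more of the original fisheye view is kept:

for balance in (0.0, 0.5, 1.0):
    new_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(scaled_K, D, img_dim, np.eye(3), balance=balance)
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(scaled_K, D, np.eye(3), new_K, img_dim, cv2.CV_16SC2)
    out = cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT)
    cv2.imwrite("undistorted_balance_%.1f.jpg" % balance, out)  # e.g. undistorted_balance_0.5.jpg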