I have a numpy array that I wish to resize using OpenCV. Its values range from 0 to 255. If I opt to use cv2.INTER_CUBIC, I may get values outside this range. This is undesirable, since the resized array is supposed to still represent an image. One solution is to clip the results to [0, 255]. Another is to use a different interpolation method. It is my understanding that INTER_AREA is valid for down-sampling an image, but behaves much like nearest-neighbour interpolation when upsampling, which makes it less than optimal for my purpose.
Should I use INTER_CUBIC (and clip), INTER_AREA, or INTER_LINEAR?
An example of values outside the range produced by INTER_CUBIC:
import numpy as np
import cv2

a = np.array([0, 10, 20, 0, 5, 2, 255, 0, 255]).reshape((3, 3))
[[  0  10  20]
 [  0   5   2]
 [255   0 255]]
b = cv2.resize(a.astype(float), (4, 4), interpolation=cv2.INTER_CUBIC)
[[   0.            5.42489886   15.43670964   21.29199219]
 [ -28.01513672   -2.46422291    1.62949324  -19.30908203]
 [  91.88964844   25.07939219   24.75106835   91.19140625]
 [ 273.30322266   68.20603609   68.13853455  273.15966797]]
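For completeness, clipping the cubic result back into range is a one-liner with numpy (a minimal sketch, reusing the b computed above):

# Clamp the out-of-range cubic values back into the valid image range,
# then convert to uint8 now that no overflow is possible.
b_clipped = np.clip(b, 0, 255).astype(np.uint8)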
Edit: As berak pointed out, converting the type to float (from int64) allows for values outside the original range; the cv2.resize() function does not work with the default 'int64' type at all. However, converting to 'uint8' will automatically saturate the values to [0, 255].
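A quick way to see the saturation berak described (a minimal sketch; the exact output values depend on your OpenCV version's rounding):

# Resizing a uint8 array directly: OpenCV saturate-casts the cubic
# intermediate values, so the result stays within [0, 255].
c = cv2.resize(a.astype(np.uint8), (4, 4), interpolation=cv2.INTER_CUBIC)
print(c.min(), c.max())  # both values lie in [0, 255], no wrap-around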
Also, as pointed out by SaulloCastro, another related answer demonstrates scipy's interpolation, where the default method is cubic interpolation (with saturation).
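If you want to try the scipy route, here is a minimal sketch using scipy.ndimage.zoom (assuming that is the function the linked answer used; its default spline order is 3, i.e. cubic):

from scipy.ndimage import zoom

# Zoom the 3x3 array up to 4x4 with the default cubic (order=3) spline.
# Note: whether this function is what the linked answer used is an assumption.
d = zoom(a.astype(float), 4 / 3)
print(d.shape)  # (4, 4)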
If you are enlarging the image, you should prefer to use INTER_LINEAR or INTER_CUBIC interpolation. If you are shrinking the image, you should prefer to use INTER_AREA interpolation.
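In code, that rule of thumb might look like the following (a hypothetical helper; resize_keep_range is a name made up for illustration, not part of OpenCV):

def resize_keep_range(img, new_size):
    # new_size is (width, height), the order cv2.resize expects,
    # while img.shape is (height, width[, channels]).
    # Use INTER_AREA when shrinking, INTER_CUBIC when enlarging.
    shrinking = new_size[0] * new_size[1] < img.shape[1] * img.shape[0]
    interp = cv2.INTER_AREA if shrinking else cv2.INTER_CUBIC
    return cv2.resize(img, new_size, interpolation=interp)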
Cubic interpolation is computationally more complex, and hence slower, than linear interpolation. However, the quality of the resulting image will be higher.
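To verify the speed difference on your own machine, a rough timing sketch (the numbers will vary with hardware and OpenCV build):

import time

img = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
for name, flag in [('linear', cv2.INTER_LINEAR), ('cubic', cv2.INTER_CUBIC)]:
    t0 = time.perf_counter()
    for _ in range(100):
        cv2.resize(img, (1280, 960), interpolation=flag)
    print(name, time.perf_counter() - t0)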