We use the following code to verify the formula. First we read a color image twice: once as color (img_color) and once as grayscale (img_gray). Then we apply the equation above to convert the color image to gray (img_color2gray). Finally, we compare img_gray and img_color2gray; if they match, the equation is verified. The equation produces floating-point values, so we use rounding (np.round) to convert them to integers: np.round(1.499) = 1 and np.round(1.5) = 2. We also tried other quantization methods, such as floor and ceil, but they do not match OpenCV's output as well.
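One detail worth knowing about the rounding step: np.round implements round-half-to-even ("banker's rounding"), so exact .5 values go to the nearest even integer rather than always rounding up. A quick check:

```python
import numpy as np

# np.round rounds exact halves to the nearest even integer.
print(np.round(1.499))  # 1.0
print(np.round(1.5))    # 2.0 (2 is even)
print(np.round(0.5))    # 0.0 (0 is even, not 1)
print(np.round(2.5))    # 2.0 (2 is even, not 3)
```

This behavior differs from Python's schoolbook intuition of "round .5 up," which can matter when comparing against another library's rounding pixel by pixel.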
import cv2
import numpy as np

img_name = 'lena_color_512.tif'
img_color = cv2.imread(img_name, cv2.IMREAD_COLOR)
img_gray = cv2.imread(img_name, cv2.IMREAD_GRAYSCALE)
shape = img_gray.shape
img_color2gray = np.zeros(shape, dtype=np.int32)
# OpenCV stores channels in BGR order: index 2 is red, 1 is green, 0 is blue.
img_color2gray[:,:] = np.round(0.299*img_color[:,:,2] + 0.587*img_color[:,:,1] + 0.114*img_color[:,:,0])
img_color2gray = np.clip(img_color2gray, 0, 255)
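The comparison step itself can be sketched as below. The synthetic arrays stand in for the real img_gray and img_color2gray produced above, since the test image may not be available; the mismatch count of 352 is planted to mirror the result reported next.

```python
import numpy as np

# Hypothetical stand-ins for img_gray and img_color2gray from the code above.
rng = np.random.default_rng(0)
img_gray = rng.integers(0, 256, size=(512, 512), dtype=np.int32)
img_color2gray = img_gray.copy()
img_color2gray[0, :352] += 1  # pretend 352 pixels disagree by one gray level

# Count and report the pixels where the two grayscale images differ.
mismatch = np.count_nonzero(img_gray != img_color2gray)
total = img_gray.size
print(f"{mismatch} of {total} pixels differ ({100.0 * mismatch / total:.2f}%)")
# prints: 352 of 262144 pixels differ (0.13%)
```

With the real arrays, np.nonzero(img_gray != img_color2gray) also gives the coordinates of the unmatched pixels, which is how a scatter plot like the figure below can be produced.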
The comparison result: img_gray and img_color2gray are very similar but not identical. Out of 512*512 = 262144 pixels in total, 352 do not match (0.13% of the total). The dots in the figure below are the unmatched pixels. This leaves open the question of exactly how OpenCV implements the conversion in software. My source code can be found here.