OpenCV 4 Computer Vision Application Programming Cookbook(Fourth Edition)

There's more...

When a computation is done over a pixel neighborhood, it is common to represent this with a kernel matrix. This kernel describes how the pixels involved in the computation are combined in order to obtain the desired result. For the sharpening filter used in this recipe, the kernel would be as follows:
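    0  -1   0
   -1   5  -1
    0  -1   0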

Unless stated otherwise, the current pixel corresponds to the center of the kernel. The value in each cell of the kernel is a factor that multiplies the corresponding pixel, and the result of applying the kernel to a pixel is the sum of all these products. The size of the kernel corresponds to the size of the neighborhood (here, 3 x 3). Using this representation, it can be seen that, as required by the sharpening filter, the four horizontal and vertical neighbors of the current pixel are multiplied by -1, while the current pixel itself is multiplied by 5. This kernel representation is more than a notational convenience; it is the basis of the concept of convolution in signal processing. The kernel defines a filter that is applied to the image.
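Written out for a single pixel at row i and column j, applying this particular kernel therefore amounts to computing:

   result(i, j) = 5*image(i, j) - image(i-1, j) - image(i+1, j) - image(i, j-1) - image(i, j+1)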

Since filtering is a common operation in image processing, OpenCV defines a special function that performs this task: cv::filter2D. To use it, you just need to define a kernel in the form of a matrix. The function is then called with the image and the kernel, and it returns the filtered image. It is therefore easy to redefine our sharpening function as follows:

void sharpen2D(const cv::Mat &image, cv::Mat &result) {

   // Construct the kernel (all entries initialized to 0)
   cv::Mat kernel(3, 3, CV_32F, cv::Scalar(0));
   // assign the kernel values
   kernel.at<float>(1, 1) =  5.0;  // center pixel
   kernel.at<float>(0, 1) = -1.0;  // top neighbor
   kernel.at<float>(2, 1) = -1.0;  // bottom neighbor
   kernel.at<float>(1, 0) = -1.0;  // left neighbor
   kernel.at<float>(1, 2) = -1.0;  // right neighbor

   // filter the image
   cv::filter2D(image, result, image.depth(), kernel);
}

This implementation produces exactly the same result as the previous one (and with the same efficiency). If you input a color image, the same kernel is applied to all three channels. Note that cv::filter2D is particularly advantageous with large kernels, because in that case it switches to a more efficient algorithm.
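For completeness, here is a minimal sketch of how sharpen2D could be called from a small program; the image filename is only a placeholder, and the function is assumed to be defined as shown above:

#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>

// sharpen2D as defined above

int main() {
   // read the input image (the filename is only a placeholder)
   cv::Mat image = cv::imread("input.jpg");
   if (image.empty())
      return 1;

   // apply the kernel-based sharpening filter
   cv::Mat result;
   sharpen2D(image, result);

   // display the original and sharpened images
   cv::imshow("Original Image", image);
   cv::imshow("Sharpened Image", result);
   cv::waitKey(0);
   return 0;
}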