
Finding the most prominent depth of the image center region
Once the hand is placed roughly in the center of the screen, we can start finding all image pixels that lie on the same depth plane as the hand. This is done by following these steps:
- First, we simply need to determine the most prominent depth value of the center region of the image. The simplest approach would be to look only at the depth value of the center pixel, like this:
height, width = depth.shape
center_pixel_depth = depth[height // 2, width // 2]
- Then, create a mask in which all pixels at a depth of center_pixel_depth are white and all others are black, as follows:
import numpy as np
depth_mask = np.where(depth == center_pixel_depth, 255,
                      0).astype(np.uint8)
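A minimal sketch putting these two steps together on a synthetic depth frame (the array shape and depth values are illustrative, not from the Kinect):

```python
import numpy as np

# Synthetic 8-bit depth frame: background at depth 200,
# a flat "hand" patch at depth 100 covering the image center
depth = np.full((240, 320), 200, dtype=np.uint8)
depth[100:140, 140:180] = 100

height, width = depth.shape
center_pixel_depth = depth[height // 2, width // 2]

# Mask: white where the depth matches the center pixel exactly, black elsewhere
depth_mask = np.where(depth == center_pixel_depth, 255, 0).astype(np.uint8)

print(center_pixel_depth)    # 100
print(depth_mask[120, 160])  # 255 (inside the hand patch)
print(depth_mask[0, 0])      # 0 (background)
```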
However, this approach will not be very robust, because it can be compromised by any of the following:
- Your hand will not be placed perfectly parallel to the Kinect sensor.
- Your hand will not be perfectly flat.
- The Kinect sensor values will be noisy.
Therefore, different regions of your hand will have slightly different depth values.
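To see why exact matching is fragile, consider a small depth patch with simulated sensor noise: a single-pixel lookup matches only the pixels that happen to share its exact value, while a median plus a tolerance band recovers the whole region (the noise range and patch size here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
# "Hand" at nominal depth 100, each pixel perturbed by a few units of noise
hand = (100 + rng.integers(-5, 6, size=(21, 21))).astype(np.uint8)

# Naive approach: count pixels exactly equal to the center pixel's depth
center_pixel_depth = hand[10, 10]
exact_hits = np.count_nonzero(hand == center_pixel_depth)

# Robust approach: count pixels within a tolerance band around the median
med_val = np.median(hand)
tolerance = 14
band_hits = np.count_nonzero(np.abs(hand.astype(int) - med_val) <= tolerance)

print(exact_hits, band_hits)  # far fewer exact matches than band matches
```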
The segment_arm method takes a slightly better approach—it looks at a small neighborhood in the center of the image and determines the median depth value. This is done by following these steps:
- First, we find the center region (for example, 21 x 21 pixels) of the image frame, like this:
def segment_arm(frame: np.ndarray, abs_depth_dev: int = 14) -> np.ndarray:
    height, width = frame.shape
    # find the center (21 x 21 pixels) region of the image frame
    center_half = 10  # half-width of 21 is 21 // 2
    center = frame[height // 2 - center_half:height // 2 + center_half,
                   width // 2 - center_half:width // 2 + center_half]
- Then, we determine the median depth value, med_val, as follows:
med_val = np.median(center)
We can now compare med_val with the depth value of all pixels in the image and create a mask in which all pixels whose depth values are within a particular range [med_val-abs_depth_dev, med_val+abs_depth_dev] are white, and all other pixels are black.
However, for reasons that will become clear in a moment, let's paint the pixels gray instead of white, like this:
frame = np.where(abs(frame - med_val) <= abs_depth_dev,
                 128, 0).astype(np.uint8)
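Putting the steps together, the function so far can be exercised on a synthetic frame; the frame dimensions and depth values below are made up for illustration:

```python
import numpy as np

def segment_arm(frame: np.ndarray, abs_depth_dev: int = 14) -> np.ndarray:
    height, width = frame.shape
    # median depth of the central 21 x 21 neighborhood
    center_half = 10
    center = frame[height // 2 - center_half:height // 2 + center_half,
                   width // 2 - center_half:width // 2 + center_half]
    med_val = np.median(center)
    # gray (128) where the depth is within +/- abs_depth_dev of the
    # median, black (0) everywhere else
    return np.where(abs(frame - med_val) <= abs_depth_dev,
                    128, 0).astype(np.uint8)

# Background at depth 200, a slightly uneven "hand" near depth 100
# placed over the image center
frame = np.full((240, 320), 200, dtype=np.uint8)
frame[90:150, 130:190] = 100
frame[90:150, 160:190] = 105  # depth varies a little across the hand

mask = segment_arm(frame)
print(mask[120, 160], mask[0, 0])  # hand pixel is gray, background is black
```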
- The result will look like this:
You will note that the segmentation mask is not smooth. In particular, it contains holes at points where the depth sensor failed to make a prediction. In the next section, we will learn how to apply morphological closing to smooth the segmentation mask.