Feature detection in OpenCV
Feature detection is an essential concept in computer vision that involves identifying meaningful areas of an image, often referred to as "features". OpenCV, a popular open-source computer vision library, provides several methods for feature detection.
This Answer will walk through the concept of feature detection in OpenCV, touching on several of the main methods available.
A glance at what features are
Features are unique regions of an image that are particularly informative or interesting. These could include edges, corners, or blobs (regions that differ in brightness or color from their surroundings). These features are used in various computer vision applications such as image recognition, tracking, and image registration.
Key feature detection methods in OpenCV
OpenCV provides a variety of methods for feature detection. This Answer will focus on five of the most common methods: Harris corner detection, Shi-Tomasi corner detection, the scale-invariant feature transform (SIFT), FAST, and ORB.
Getting started with OpenCV
Before we dive into feature detection, make sure to have OpenCV installed. We can install it using the following command:
pip install opencv-python
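To confirm that the installation works and that the build includes the SIFT detector used later (available in the main opencv-python package from version 4.4 onward), a quick check such as the following can be run:
import cv2
# Print the installed OpenCV version and confirm that SIFT is available
print(cv2.__version__)
print(hasattr(cv2, 'SIFT_create'))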
Harris corner detection
Harris corner detection is a reliable approach for detecting corners in an image. Corners are points where the intensity of the image changes significantly in multiple directions.
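For each pixel, Harris builds a 2x2 matrix M of summed image-gradient products over a small neighborhood and scores it with the standard response R = det(M) − k · (trace(M))², where large positive values of R indicate corners. The 0.04 passed to cv2.cornerHarris in the code below is the sensitivity constant k, which is typically chosen between 0.04 and 0.06.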
Harris corner detection involves the following steps:
Load and convert the image to grayscale:
import cv2
import numpy as np
input_img = cv2.imread('image.jpg')
gray_img = cv2.cvtColor(input_img, cv2.COLOR_BGR2GRAY)
Calculate corner responses:
harris_resp = cv2.cornerHarris(np.float32(gray_img), 2, 3, 0.04)
harris_resp = cv2.dilate(harris_resp, None)
Identify and draw corners:
threshold = 0.01 * harris_resp.max()
img_harris = input_img.copy()
for i in range(harris_resp.shape[0]):
    for j in range(harris_resp.shape[1]):
        if harris_resp[i, j] > threshold:
            cv2.circle(img_harris, (j, i), 5, (0, 0, 255), -1)
Note: To learn about Harris corner detection in more detail, refer to this Answer.
Shi-Tomasi corner detection
The Shi-Tomasi corner detection method is a refinement of Harris corner detection. Instead of the combined Harris score, it accepts a point as a corner only when the smaller eigenvalue of the local gradient matrix exceeds a quality threshold, which generally yields corners that are more stable to track. It involves the following steps:
Calculate Shi-Tomasi corners:
shi_tomasi_pts = cv2.goodFeaturesToTrack(gray_img, 25, 0.01, 10)
shi_tomasi_pts = np.intp(shi_tomasi_pts)  # np.intp replaces np.int0, which was removed in NumPy 2.0
Identify and draw corners:
img_shi_tomasi = input_img.copy()
for pt in shi_tomasi_pts:
    x, y = pt.ravel()
    cv2.circle(img_shi_tomasi, (x, y), 3, (255, 0, 0), -1)
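The goodFeaturesToTrack function also accepts its parameters by name and can optionally score points with the Harris response instead of the Shi-Tomasi minimum-eigenvalue score. A minimal sketch of both variants (the parameter values are illustrative, not tuned):
# Shi-Tomasi scoring (the default): at most 50 corners whose quality is at least
# 1% of the strongest corner, spaced at least 10 pixels apart
shi_tomasi_pts = cv2.goodFeaturesToTrack(gray_img, maxCorners=50, qualityLevel=0.01, minDistance=10)
# Same interface, but scored with the Harris response (k matches the 0.04 used with cv2.cornerHarris)
harris_pts = cv2.goodFeaturesToTrack(gray_img, maxCorners=50, qualityLevel=0.01, minDistance=10, useHarrisDetector=True, k=0.04)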
Scale-invariant feature transform (SIFT)
The scale-invariant feature transform (SIFT) method detects and describes local features in images. It is robust to image scaling and rotation, and partially robust to affine distortion and changes in illumination.
Here's how SIFT works:
Initialize the SIFT detector:
sift_detector = cv2.SIFT_create()
Detect the keypoints and compute descriptors:
sift_keypoints, _ = sift_detector.detectAndCompute(gray_img, None)
Draw the keypoints on the image:
img_sift = cv2.drawKeypoints(input_img, sift_keypoints, None)
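The underscore in detectAndCompute above discards the descriptors. If they are kept, each keypoint comes with a 128-dimensional vector describing its local neighborhood, and SIFT_create accepts optional parameters such as nfeatures to cap the number of keypoints. A minimal sketch (the value 500 is illustrative):
# Keep only the 500 strongest keypoints and retain their descriptors
sift_detector = cv2.SIFT_create(nfeatures=500)
sift_keypoints, sift_descriptors = sift_detector.detectAndCompute(gray_img, None)
# Each row of the descriptor matrix is the 128-dimensional descriptor of one keypoint
print(len(sift_keypoints), sift_descriptors.shape)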
FAST algorithm for corner detection
The FAST (features from accelerated segment test) algorithm for corner detection is a highly efficient method designed to identify corners in an image. It's particularly useful for real-time video processing applications.
Here's how FAST works:
Initialize FAST detector:
fast_detector = cv2.FastFeatureDetector_create()
Detect the keypoints:
fast_keypoints = fast_detector.detect(gray_img, None)
Draw the keypoints on the image:
img_fast = cv2.drawKeypoints(input_img, fast_keypoints, None, color=(255, 0, 0))
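FAST's output is controlled mainly by its intensity threshold and by whether non-maximum suppression is applied; both can be set when creating the detector. A minimal sketch (the threshold of 25 is illustrative):
# A higher threshold admits fewer, stronger corners; non-maximum suppression
# (on by default) keeps only the strongest detection in each cluster
fast_default = cv2.FastFeatureDetector_create()
fast_tuned = cv2.FastFeatureDetector_create(threshold=25, nonmaxSuppression=True)
print(len(fast_default.detect(gray_img, None)), len(fast_tuned.detect(gray_img, None)))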
Oriented FAST and Rotated BRIEF (ORB)
The Oriented FAST and Rotated BRIEF (ORB) algorithm is a fast and robust method for feature detection and description. It combines the FAST keypoint detector and the BRIEF descriptor with several modifications to enhance performance.
Here's how ORB works:
Initialize the ORB detector:
orb_detector = cv2.ORB_create()
Detect the keypoints and compute descriptors:
orb_keypoints, _ = orb_detector.detectAndCompute(gray_img, None)
Draw the keypoints on the image:
img_orb = cv2.drawKeypoints(input_img, orb_keypoints, None, color=(0, 255, 0))
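Because ORB produces binary descriptors, they are usually compared with the Hamming distance. The sketch below assumes a hypothetical second image, 'image2.jpg', purely to illustrate how the descriptors can be matched:
# Detect ORB keypoints and descriptors in both images
orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(gray_img, None)
second_img = cv2.imread('image2.jpg')  # hypothetical second image for illustration
gray_img2 = cv2.cvtColor(second_img, cv2.COLOR_BGR2GRAY)
kp2, des2 = orb.detectAndCompute(gray_img2, None)
# Brute-force matcher with Hamming distance, suitable for binary descriptors
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
# Draw the 20 best matches side by side
img_matches = cv2.drawMatches(input_img, kp1, second_img, kp2, matches[:20], None)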
Implementation
Here's the combined code that incorporates all the above methods and displays the results in a single plot using subplots:
# Import necessary libraries
import cv2
import numpy as np
import matplotlib.pyplot as plt
# Load the input image using OpenCV
input_img = cv2.imread('image.jpg')
# Convert the input image to grayscale using OpenCV
gray_img = cv2.cvtColor(input_img, cv2.COLOR_BGR2GRAY)
# Calculate corner responses for Harris corner detection
harris_resp = cv2.cornerHarris(np.float32(gray_img), 2, 3, 0.04)
harris_resp = cv2.dilate(harris_resp, None)
# Apply Shi-Tomasi corner detection to find distinctive points
shi_tomasi_pts = cv2.goodFeaturesToTrack(gray_img, 25, 0.01, 10)
shi_tomasi_pts = np.intp(shi_tomasi_pts)  # np.intp replaces the removed np.int0
# Initialize the SIFT (Scale-Invariant Feature Transform) detector
sift_detector = cv2.SIFT_create()
# Detect keypoints and compute descriptors using SIFT
sift_keypoints, _ = sift_detector.detectAndCompute(gray_img, None)
# Initialize the FAST (Features from Accelerated Segment Test) detector
fast_detector = cv2.FastFeatureDetector_create()
# Detect keypoints using FAST detector
fast_keypoints = fast_detector.detect(gray_img, None)
# Initialize the ORB (Oriented FAST and Rotated BRIEF) detector
orb_detector = cv2.ORB_create()
# Detect keypoints and compute descriptors using ORB
orb_keypoints, _ = orb_detector.detectAndCompute(gray_img, None)
# Create copies of the input image to draw keypoints on
img_harris = input_img.copy()
img_shi_tomasi = input_img.copy()
img_sift = input_img.copy()
img_fast = input_img.copy()
img_orb = input_img.copy()
# Set a threshold for Harris corner response values
thresh = 0.01 * harris_resp.max()
# Iterate through Harris corner response matrix and draw circles on image
for i in range(harris_resp.shape[0]):
    for j in range(harris_resp.shape[1]):
        if harris_resp[i, j] > thresh:
            cv2.circle(img_harris, (j, i), 5, (0, 0, 255), -1)
# Iterate through Shi-Tomasi corner points and draw circles on image
for pt in shi_tomasi_pts:
    x, y = pt.ravel()
    cv2.circle(img_shi_tomasi, (x, y), 3, (255, 0, 0), -1)
# Draw SIFT keypoints on the image
img_sift = cv2.drawKeypoints(input_img, sift_keypoints, img_sift)
# Draw FAST keypoints on the image
img_fast = cv2.drawKeypoints(input_img, fast_keypoints, img_fast, color=(255, 0, 0))
# Draw ORB keypoints on the image
img_orb = cv2.drawKeypoints(input_img, orb_keypoints, img_orb, color=(0, 255, 0))
# Create subplots to display original image and processed images
fig, axes = plt.subplots(2, 3, figsize=(15, 10))
# Display original image
axes[0, 0].imshow(cv2.cvtColor(input_img, cv2.COLOR_BGR2RGB))
axes[0, 0].set_title('Original Image')
# Display Harris corner detection result
axes[0, 1].imshow(cv2.cvtColor(img_harris, cv2.COLOR_BGR2RGB))
axes[0, 1].set_title('Harris Corner Detection')
# Display Shi-Tomasi corner detection result
axes[0, 2].imshow(cv2.cvtColor(img_shi_tomasi, cv2.COLOR_BGR2RGB))
axes[0, 2].set_title('Shi-Tomasi Corner Detection')
# Display SIFT features
axes[1, 0].imshow(cv2.cvtColor(img_sift, cv2.COLOR_BGR2RGB))
axes[1, 0].set_title('SIFT Features')
# Display FAST algorithm corner detection result
axes[1, 1].imshow(cv2.cvtColor(img_fast, cv2.COLOR_BGR2RGB))
axes[1, 1].set_title('FAST Algorithm Corners')
# Display ORB features
axes[1, 2].imshow(cv2.cvtColor(img_orb, cv2.COLOR_BGR2RGB))
axes[1, 2].set_title('ORB Features')
# Adjust layout and display the plot
plt.tight_layout()
plt.show()
Code explanation
Here’s the explanation for each section of the code:
Lines 1–4: In this section, we import the necessary libraries to work with images, numerical operations, and visualization. The cv2 library provides functions for computer vision tasks, numpy is used for numerical computations, and matplotlib.pyplot allows us to create plots and visualizations.
Lines 6–7: Here, we load the input image using OpenCV's imread function and assign it to the variable input_img. This image will be used for further processing.
Lines 9–10: We convert the input image to grayscale using the cvtColor function from OpenCV. Grayscale images are easier to work with for many computer vision tasks and feature detection methods.
Lines 12–14: In this section, we perform Harris corner detection. The cornerHarris function is applied to the grayscale image, and the corner responses are computed. These responses are then dilated to enhance the visibility of corners.
Lines 16–18: Shi-Tomasi corner detection is performed in these lines. The goodFeaturesToTrack function is used to detect these points, and the results are stored in the shi_tomasi_pts variable.
Lines 20–21: Here, we initialize the SIFT (scale-invariant feature transform) detector using the cv2.SIFT_create() function.
Lines 23–24: Using the initialized SIFT detector, we detect keypoints and compute descriptors for the grayscale image. Keypoints represent distinctive points in the image, and descriptors provide information about the local image region around each keypoint.
Lines 26–27: The FAST (features from accelerated segment test) detector is initialized using the cv2.FastFeatureDetector_create() function.
Lines 29–30: Keypoints are detected using the FAST detector, and the results are stored in the fast_keypoints variable.
Lines 32–33: The ORB (Oriented FAST and Rotated BRIEF) detector is initialized using the cv2.ORB_create() function.
Lines 35–36: Keypoints and descriptors are detected and computed using the ORB detector, and the results are stored in the orb_keypoints variable.
Lines 38–66: In this section, we draw the keypoints or corners on separate copies of the input image using circles. For Harris corner detection, we iterate through the corner response matrix and draw circles for corners that exceed a certain threshold. For Shi-Tomasi corner detection, we iterate through the detected corner points and draw circles at each point.
Lines 68–69: This part of the code sets up subplots using plt.subplots() to create a grid for displaying images. We create a 2x3 grid to display the original image and the results of the different feature detection methods.
Lines 71–93: Each subplot is assigned an image and a title using the imshow() function and the set_title() method. The cvtColor() function is used to convert the images to RGB format for proper visualization.
Lines 95–97: Finally, the layout is adjusted and the plot is displayed using plt.tight_layout() and plt.show().
This code provides a comprehensive demonstration of various feature detection methods using OpenCV, allowing us to visualize the detected keypoints or corners on the original image and compare the results.
Conclusion
Feature detection is a critical component of computer vision, allowing us to identify unique points in images for various applications. OpenCV provides a range of methods, including Harris corner detection, Shi-Tomasi corner detection, SIFT, FAST, and ORB. Each method has its strengths and can be applied based on specific requirements. By understanding these methods, we can enhance our ability to extract meaningful information from images for various computer vision tasks.
Test your knowledge
Harris corner detection
Identifies distinctive points based on the corner responses of an image
Shi-Tomasi corner detection
Detects keypoints by maximizing the minimum eigenvalue of a local autocorrelation matrix
SIFT (Scale-Invariant Feature Transform)
Detects keypoints and computes descriptors, providing information about local image regions
FAST (Features from Accelerated Segment Test)
Efficiently detects pixels with intensity variations, commonly used for real-time applications
ORB (Oriented FAST and Rotated BRIEF)
Combines FAST corner detection with BRIEF descriptor extraction