This article explains the steps involved in edge detection algorithms.
1. Filtering: Edge detection algorithms are mainly based on the first and second derivatives of image intensity, but derivative computation is highly sensitive to noise. Filtering is therefore applied first to improve an edge detector's robustness to noise. Note that most filters reduce noise at the cost of some edge strength, so a trade-off must be made between enhancing edges and suppressing noise.
2. Enhancement: Edge enhancement is based on determining the intensity changes in the neighborhood of each point in the image. It highlights points whose neighborhood (local) intensity values change significantly, and is generally accomplished by computing the gradient magnitude.
3. Detection: Many points in an image have relatively large gradient magnitudes, but not all of them are edges for a given application. A criterion is therefore needed to decide which points are edge points; the simplest is thresholding the gradient magnitude.
4. Localization: If an application requires the position of an edge, that position can be estimated at sub-pixel resolution, and the edge's orientation can also be estimated.
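The first three steps above can be sketched in a few lines of pure Python. This is a minimal illustration, not a production detector: the 3x3 smoothing kernel, the Sobel derivative kernels, the toy image, and the threshold value are all illustrative assumptions.

```python
def convolve3x3(img, k):
    """Convolve a 2D list with a 3x3 kernel, leaving the 1-pixel border at 0."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(k[j][i] * img[y + j - 1][x + i - 1]
                            for j in range(3) for i in range(3))
    return out

# Step 1 -- filtering: 3x3 Gaussian-like smoothing to suppress noise.
SMOOTH = [[1/16, 2/16, 1/16], [2/16, 4/16, 2/16], [1/16, 2/16, 1/16]]
# Step 2 -- enhancement: Sobel first-derivative kernels in x and y.
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def detect_edges(img, threshold):
    smooth = convolve3x3(img, SMOOTH)
    gx = convolve3x3(smooth, SOBEL_X)
    gy = convolve3x3(smooth, SOBEL_Y)
    # Gradient magnitude highlights points with large local intensity change.
    mag = [[(gx[y][x] ** 2 + gy[y][x] ** 2) ** 0.5
            for x in range(len(img[0]))] for y in range(len(img))]
    # Step 3 -- detection: keep only points whose magnitude exceeds a threshold.
    return [[1 if m > threshold else 0 for m in row] for row in mag]

# Toy image: dark left half (0), bright right half (10) -> one vertical edge.
image = [[0, 0, 0, 10, 10, 10] for _ in range(6)]
edges = detect_edges(image, threshold=5.0)
# Pixels flanking the intensity jump are flagged; flat border regions are not.
```

A real detector would add the fourth step (sub-pixel localization, e.g. by fitting the gradient profile near each detected point), which is omitted here.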
Edge detection is a machine vision inspection technique. In practice, only the first three steps are commonly used, because in most cases it is enough for the edge detector to indicate that an edge occurs near a certain pixel, without specifying the edge's precise position or orientation.
Edge detection essentially involves using an algorithm to extract the boundary between an object and the background in an image. We define an edge as the boundary of a region in an image where the gray level changes abruptly. The change in image gray level can be reflected by the gradient of the image's gray level distribution; therefore, we can use local image differentiation techniques to obtain edge detection operators. Classical edge detection methods achieve this by constructing edge detection operators from a small neighborhood of pixels in the original image.
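The link between an abrupt gray-level change and local differentiation can be seen with a one-dimensional example. Below, a central-difference operator (a hypothetical minimal stand-in for the small-neighborhood operators mentioned above) is applied to one row of gray levels; the derivative is near zero in flat regions and peaks at the jump.

```python
# One row of gray levels with a single abrupt jump (5 -> 50).
row = [5, 5, 5, 5, 50, 50, 50, 50]

# Central-difference approximation of the first derivative at interior points:
# d[i] ~= (row[i+1] - row[i-1]) / 2
deriv = [(row[i + 1] - row[i - 1]) / 2 for i in range(1, len(row) - 1)]

# The largest-magnitude response marks where the gray level changes abruptly.
peak = max(range(len(deriv)), key=lambda i: abs(deriv[i]))
```

Classical operators such as Sobel or Prewitt extend this idea to two dimensions by differencing over a small 2D neighborhood instead of a single row.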
Edge detection is primarily used in applications such as inspecting the regularity and alignment of chip pins, target localization, and presence/defect detection. The technology provides strong technical support for high-precision inspection and dimensional measurement across many industries.