
Design Principle Analysis of Vision Recognition System for Dicing Machine

2026-04-06
1. Composition of the Visual Recognition System

The visual recognition system of the dicing machine is a computer-based real-time image processing system. As shown in Figure 1, it consists of an optical illumination system, a CCD camera, image processing software, and other components. The purpose of the system is automatic alignment. Provided the worktable itself is sufficiently accurate, the precision of the automatic visual alignment system is determined largely by the image processing algorithms, and the core component is the pattern recognition algorithm. Commonly used recognition methods include statistical pattern recognition, feature extraction, neural network recognition, and template matching. China entered this field relatively late; research has been concentrated in universities and research institutes, has focused on theory, and has had little market impact, so domestic machine vision has developed considerably more slowly than in Europe and the United States.

2. Technology Route Selection

Given this situation at home and abroad, when we set out to build our own visual recognition architecture for the dicing machine, our starting point was to leverage existing mature resources and algorithms and, based on the characteristics of the equipment itself, build a set of visual algorithms combining efficiency and practicality, forming a machine vision library specifically for the fully automatic dicing machine. We tried several approaches, including commissioning foreign machine vision companies to build complete recognition systems to our functional module requirements. The problem was that we had to bear both expensive development costs and the foreign companies' high margins, sharply increasing equipment cost.
Moreover, collaboration carried a high risk of leaking our own technical secrets. Practice proved this approach unworkable. Purchasing software development kits (SDKs) from foreign vision companies and performing secondary development was more suitable and technically easier, but it had its own problems: the SDKs were not well targeted to our application, the actual results did not fully meet site requirements, per-unit cost still increased, and problems could not be resolved at the source.

After extended exploration and comparison of several algorithms common in the industry, we finally decided to adopt a geometric feature template matching algorithm based on the OpenCV vision function library for the fully automatic dicing machine. OpenCV is Intel's open-source, cross-platform computer vision library of mid- and high-level APIs, consisting of a collection of C functions and a small number of C++ classes that implement many common image processing and computer vision algorithms. Using it spared us from re-implementing mature low-level algorithms and saved considerable time. More importantly, it is free for both non-commercial and commercial use, so it puts no pressure on our equipment costs.

Geometric feature matching of templates is a visual positioning technology that emerged on the market in the late 1990s. Many well-known semiconductor equipment manufacturers, including Japan's DISCO and Tokyo Seimitsu and the US's K&S, are understood to have adopted related technologies in the vision systems of their main equipment. Unlike traditional grayscale matching, geometric feature matching defines a region of interest, learns the geometric features of objects within it, and then searches the image for objects of similar shape.
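As a rough illustration of this shape-based idea (a toy sketch, not any of the commercial implementations mentioned above), a template can be reduced to a set of edge points that is slid over a binary edge image, scoring each offset by the fraction of template points landing on edge pixels. The function name, data, and scoring rule below are illustrative assumptions:

```python
# Toy illustration of geometric (shape-based) template matching:
# score each placement of a template point set against a binary edge image.

def match_template_points(edge_img, template_pts):
    """Return (best_score, best_offset) for the template point set.

    edge_img     -- 2D list of 0/1 edge pixels
    template_pts -- list of (row, col) feature points, relative to (0, 0)
    """
    rows, cols = len(edge_img), len(edge_img[0])
    h = max(r for r, _ in template_pts) + 1
    w = max(c for _, c in template_pts) + 1
    best = (0.0, (0, 0))
    for dr in range(rows - h + 1):
        for dc in range(cols - w + 1):
            # fraction of template points that land on edge pixels
            hits = sum(edge_img[r + dr][c + dc] for r, c in template_pts)
            score = hits / len(template_pts)
            if score > best[0]:
                best = (score, (dr, dc))
    return best

# A 6x6 edge image containing an L-shaped contour starting at (1, 2):
img = [[0] * 6 for _ in range(6)]
for r, c in [(1, 2), (2, 2), (3, 2), (3, 3), (3, 4)]:
    img[r][c] = 1

# The same L shape as a template point set anchored at (0, 0):
tpl = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
score, offset = match_template_points(img, tpl)
print(score, offset)   # perfect match (score 1.0) at offset (1, 2)
```

Because the match is scored on the shape's point set rather than on raw grayscale windows, the same idea extends naturally to scoring under rotation and scale, which is what distinguishes geometric matching from plain grayscale correlation.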
Because geometric feature matching does not rely on specific pixel grayscale values, it has inherent advantages over traditional visual positioning algorithms. The algorithm was verified during the development of a fully automatic dicing machine: it improves the visual recognition efficiency and automatic alignment capability of the machine, enabling precise positioning and automatic alignment and dicing even when the workpiece angle, size, or brightness changes.

3. Recognition System Design

3.1 Design Process

The design structure of most visual recognition systems is broadly similar; the key lies in the choice of recognition algorithm. The design flow of the dicing machine's visual recognition system is shown in Figure 2. Considering the actual conditions at the dicing machine's work site, and in order to extract the feature points of the pre-stored template image effectively, we preprocess the acquired workpiece template image before extracting its geometric features. Preprocessing mainly consists of reducing and filtering image noise and enhancing the geometric feature points to be matched. Filtering and segmentation are the two important steps performed before extracting the geometric features of the template image.

3.2 Filter Design Principle

Generally speaking, field noise appears as high-frequency signal in the image, so typical filters work by attenuating or eliminating high-frequency components in Fourier space. However, structural details of the workpiece to be diced, such as edges and corners, are also high-frequency components. How to preserve the structural features of the image as far as possible while filtering out noise has therefore long been the main direction of image filtering research.
Linear filters include moving-average and Gaussian filters; the most commonly used nonlinear filters are the median filter and the SUSAN (Smallest Univalue Segment Assimilating Nucleus) filter. SUSAN filtering can preserve the structural features of objects while filtering out image noise, which meets the noise-smoothing requirements for the positioning template image in the automatic alignment system of the fully automatic dicing machine. SUSAN is in fact a general term for a class of image processing algorithms covering filtering, edge extraction, and corner extraction, all built on the same basic principle.

SUSAN filtering is essentially a weighted mean filter whose weighting factor is a similarity test function. Equation (1) defines this similarity function, which measures the similarity between a pixel S[i, j] and each pixel S[i+m, j+n] in its neighborhood (m, n are offsets):

    c(m, n) = exp( -(m² + n²)/(2θ²) - ((S[i+m, j+n] - S[i, j])/T)² )    (1)

The similarity measure thus compares not only the difference between the gray values of S[i+m, j+n] and S[i, j] but also accounts for the distance between the two pixels. Here S[i+m, j+n] and S[i, j] are pixel gray values, and T is the threshold for judging gray-value similarity; its exact value has little impact on the filtering result. θ can be regarded as the variance of a Gaussian smoothing filter: a larger θ gives a stronger smoothing effect, while a smaller θ preserves more image detail. After repeated experiments we settled on 4.0 as an appropriate value. The filtering function defined by the similarity measure is given in formula (2):

    S'[i, j] = Σ(m,n)≠(0,0) c(m, n) · S[i+m, j+n]  /  Σ(m,n)≠(0,0) c(m, n)    (2)

where S'[i, j] is the gray value of the pixel after filtering. As formula (2) shows, a neighbor with high similarity receives a large weight and thus a greater influence on the filtered result, and vice versa.
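As a minimal sketch of this weighted-mean filtering idea, the following pure-Python function computes, for each pixel, a mean of its neighbors weighted by a similarity term that decays with both spatial distance and gray-level difference; the window radius, parameter values, and function name are illustrative assumptions:

```python
import math

def susan_filter(img, radius=1, theta=4.0, T=27.0):
    """SUSAN-style smoothing: weighted mean over the neighborhood,
    weights decaying with spatial distance and gray-level difference.
    The center pixel itself is excluded, which suppresses impulse noise."""
    rows, cols = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(rows):
        for j in range(cols):
            num = den = 0.0
            for m in range(-radius, radius + 1):
                for n in range(-radius, radius + 1):
                    if m == 0 and n == 0:
                        continue  # SUSAN excludes the center point
                    ii, jj = i + m, j + n
                    if 0 <= ii < rows and 0 <= jj < cols:
                        d = img[ii][jj] - img[i][j]
                        # similarity weight: spatial term + gray-difference term
                        w = math.exp(-(m * m + n * n) / (2 * theta * theta)
                                     - (d / T) ** 2)
                        num += w * img[ii][jj]
                        den += w
            if den > 0:
                out[i][j] = num / den
    return out

# A flat patch with a single impulse-noise pixel in the middle:
noisy = [[10.0] * 5 for _ in range(5)]
noisy[2][2] = 200.0
smoothed = susan_filter(noisy)
print(round(smoothed[2][2], 1))   # → 10.0: the spike is removed
```

Note that because the center pixel is excluded from its own average, the isolated spike contributes nothing to its replacement value, while flat and structured regions are left essentially unchanged.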
SUSAN filtering does not include the center point itself in the average, which allows it to remove impulse noise effectively.

3.3 Image Analysis Algorithm Selection

After filtering removes on-site noise interference, the next step is to separate the image into non-overlapping meaningful regions, each corresponding to the surface of some object. The classification is based on the spectral characteristics, spatial characteristics, gray values, colors, and so on of the pixels. This is an important step in the transition from image processing to image analysis, and a general computer vision technique. Image segmentation algorithms fall into two main categories: gray-level thresholding based on metric space, and segmentation based on spatial region growing. For the automatic alignment system of a fully automatic dicing machine, gray-level thresholding is more suitable; it amounts to binarizing the image. The threshold is generally computed from the gray-level histogram of the image. We used an iterative algorithm to compute the threshold for the bimodal histogram, with quite satisfactory results.

The iterative algorithm computes the segmentation threshold for a bimodal histogram as follows. First, the maximum and minimum gray values Mmax and Mmin in the image are determined, and the initial threshold is set to

    T0 = (Mmax + Mmin) / 2

According to Tk, the image is divided into two parts, target and background, and the average gray value of each part is computed:

    μ1 = Σ(i ≤ Tk) i·ni / Σ(i ≤ Tk) ni ,    μ2 = Σ(i > Tk) i·ni / Σ(i > Tk) ni

where i is a gray value and ni is the number of pixels with gray value i. This yields the new threshold

    Tk+1 = (μ1 + μ2) / 2

If Tk+1 = Tk, the iteration ends; otherwise it continues. This entire preprocessing stage can be implemented straightforwardly with the OpenCV vision function library.

The geometric feature point set is the collection of points that accurately reflects the location of the positioning marker.
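The iterative threshold computation described above can be sketched as follows; the function name and the example histogram are illustrative:

```python
def iterative_threshold(hist):
    """Iterative threshold selection for a bimodal gray-level histogram.
    hist[i] = number of pixels with gray value i. Starts from the midpoint
    of the occupied gray range and repeatedly averages the mean gray values
    of the two classes until the threshold stops changing."""
    occupied = [i for i, n in enumerate(hist) if n > 0]
    t = (occupied[0] + occupied[-1]) // 2          # T0 = (Mmin + Mmax) / 2
    while True:
        lo = [(i, n) for i, n in enumerate(hist) if n > 0 and i <= t]
        hi = [(i, n) for i, n in enumerate(hist) if n > 0 and i > t]
        m1 = sum(i * n for i, n in lo) / sum(n for _, n in lo)   # target mean
        m2 = sum(i * n for i, n in hi) / sum(n for _, n in hi)   # background mean
        t_new = int((m1 + m2) / 2)                 # Tk+1 = (mu1 + mu2) / 2
        if t_new == t:                             # converged: Tk+1 == Tk
            return t
        t = t_new

# Histogram with two well-separated peaks around 40 and 200:
hist = [0] * 256
for g in (38, 40, 42):
    hist[g] = 100
for g in (198, 200, 202):
    hist[g] = 100
print(iterative_threshold(hist))   # → 120, midway between the two modes
```

For a cleanly bimodal histogram like this one, the iteration settles quickly on a threshold between the two modes, which is exactly the property that made it satisfactory for the alignment marker images.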
The selection of features has a significant impact on the final template matching: more geometric feature points give higher matching accuracy but slower speed, while fewer points give faster speed but lower accuracy. Through numerous experiments we selected the feature points that best balance matching speed and accuracy. For this system's application, the geometric edge points of the positioning template are a good choice. To extract the geometric feature point set of the positioning template, we first segment the image using the iterative algorithm, then apply the SUSAN edge and corner extraction algorithm to obtain the geometric edge points of the template.

3.4 Principle of Geometric Edge Corner Extraction

SUSAN geometric edge extraction works by computing, over a window of given size, an initial corner response at the window's center point, then searching all initial responses for local maxima to obtain the final set of geometric edge points. The algorithm is as follows:

(1) Count the pixels n(x0, y0) in the window whose gray value is similar to that of the center pixel, using the following two formulas:

    c(x, y) = 1 if |S(x, y) − S(x0, y0)| ≤ T, otherwise 0
    n(x0, y0) = Σ(x,y) c(x, y)

(2) Obtain the initial corner response from

    R(x0, y0) = g − n(x0, y0) if n(x0, y0) < g, otherwise 0

where g is the geometric threshold.

(3) Repeat (1) and (2) for every pixel in the image, then find the local maxima of the responses to obtain the edge point set and the corner positions.

The geometric threshold g has a definite impact on the output: it affects not only the number of output corners but, more importantly, their shape. For example, reducing the geometric threshold makes the detected corners sharper. The gray-difference threshold T has little effect on the geometric shape of the output corners, but it does affect their number.
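Steps (1)–(3) above can be sketched in simplified form as follows; the window shape, the default geometric threshold of half the window, and the local-maximum test are our own illustrative choices:

```python
def susan_corners(img, radius=2, T=20, g=None):
    """Simplified SUSAN-style corner extraction: for each pixel, count the
    window neighbors whose gray value is within T of the center (the area
    n); the initial response is g - n where n < g; corners are the local
    maxima of this response."""
    rows, cols = len(img), len(img[0])
    # roughly circular window of offsets, excluding the center
    offs = [(m, n) for m in range(-radius, radius + 1)
                   for n in range(-radius, radius + 1)
            if (m, n) != (0, 0) and m * m + n * n <= radius * radius]
    if g is None:
        g = len(offs) // 2                    # geometric threshold
    resp = [[0] * cols for _ in range(rows)]
    for i in range(radius, rows - radius):
        for j in range(radius, cols - radius):
            n_area = sum(1 for m, n in offs
                         if abs(img[i + m][j + n] - img[i][j]) <= T)
            if n_area < g:
                resp[i][j] = g - n_area       # initial corner response
    # keep local maxima of the response as corner points
    return [(i, j)
            for i in range(1, rows - 1) for j in range(1, cols - 1)
            if resp[i][j] > 0 and resp[i][j] >= max(
                resp[i + m][j + n] for m in (-1, 0, 1) for n in (-1, 0, 1)
                if (m, n) != (0, 0))]

# A bright square on a dark background; its four corners should respond:
square_img = [[0] * 10 for _ in range(10)]
for r in range(3, 8):
    for c in range(3, 8):
        square_img[r][c] = 255
print(susan_corners(square_img))   # → [(3, 3), (3, 7), (7, 3), (7, 7)]
```

At a straight edge roughly half the window matches the center, so the response is zero; only at a corner does the similar area shrink below the geometric threshold, which is why lowering g sharpens the corners that survive.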
Because the gray-difference threshold defines the maximum allowable gray change within a window, and the gray change is greatest where the graphic template meets its background image in the dicing process, lowering the gray threshold lets the algorithm detect smaller geometric edge changes and output more corner points. Clearly, in the automatic alignment system of a dicing machine, using the geometric feature points of the template image as the matching basis greatly reduces the number of feature points and shortens computation time, significantly improving the speed of automatic alignment.

4. Conclusion

All of the above algorithms are readily implemented on top of the OpenCV vision function library; the entire image processing pipeline runs on a PC and was developed with VC++ 6.0. After continuous field experiments, we conclude that the feature points of the positioning template image obtained with the OpenCV library, through SUSAN filtering, iterative segmentation, and SUSAN geometric edge corner extraction, are close to ideal: they fully preserve the contour features of the graphic while greatly reducing the number of feature points, effectively improving both the accuracy and the speed of automatic alignment by image matching in the dicing machine.