
Design of a Virtual Instrument System for Automatic Recognition of IC Chip Surface Markings

2026-04-06
Application Areas: Semiconductors, Industrial Automation

Products Used: LabVIEW 7.0, IMAQ Vision, IMAQ Vision Assistant, PXI-1409, MBC-5051, etc.

Challenge: Build an automatic identification system for IC chip surface markings, capable of recognizing the English letters, numbers, and manufacturer logos on the chip surface.

Application Solution: Using NI's LabVIEW, IMAQ Vision, and IMAQ Vision Assistant software, together with image acquisition hardware (a PXI-1409 image acquisition card and an MBC-5051 CCD monochrome camera), we built an automatic identification system for IC chip surface markings that recognizes English letters, numbers, and manufacturer logos.

Introduction

Automatic recognition of chip surface markings is a requirement driven by the continuing rapid development of chip manufacturing technology. Chip surface markings mainly comprise manufacturer logos and serial numbers (English letters and digits). Because automatic identification is so important, significant human and material resources have been invested in its research, with remarkable progress. The technology can be applied to automatic chip performance testing, improving testing efficiency and thereby increasing manufacturers' production capacity, so it has very broad application prospects. This system is built with NI's vision development tools and therefore features a short development cycle and low cost. It combines image processing techniques such as sharpening, filtering, thinning, and feature recognition to implement the full chain of functions: automatic chip tracking and positioning, image acquisition, image preprocessing, skeleton extraction, and recognition.

Abstract

With the continuous development of chip manufacturing technology, developing a system that automatically recognizes chip surface markings, chiefly manufacturer logos and serial numbers, has become increasingly important.
Because this technique is of great importance, significant effort has been invested in it, yielding numerous achievements. The system can be used to automate chip testing, thereby increasing chip manufacturers' throughput. Thanks to LabVIEW and its IMAQ Vision image processing modules, the system has a very short development cycle and is extremely cost-effective. By combining techniques such as thresholding, filtering, arithmetic and logical operations, cropping, thinning, and feature matching, the system delivers all of its expected functions in practical applications.

System Introduction

We used LabVIEW, IMAQ Vision, and IMAQ Vision Assistant for system development. LabVIEW's dataflow programming, IMAQ Vision's powerful image processing capabilities, and IMAQ Vision Assistant's automatic code generation significantly shortened the development cycle and reduced costs. Figure 1 shows the workflow of the automatic identification system for IC chip surface markings. An NI PXI-1409 image acquisition card and an MBC-5051 monochrome camera acquire the images, which are then sent to a computer for processing. To make recognition more flexible, a learning module is added to the system. Like the recognition process, it includes image preprocessing, text region cropping, thinning, and feature extraction; the difference is that the learning process saves the extracted features directly to the computer, while the recognition process compares these feature values one by one with the stored values to complete the matching.

[align=center] Figure 1 Workflow of the recognition system[/align]

Automatic Chip Tracking and Positioning

Once the chip enters the camera's field of view, the system can detect it.
The system then tracks the chip automatically so that the ROI always contains it, reducing the size of the image to be processed and hence the computational load. In this process the system performs three actions: thresholding the acquired image, locating an object larger than a given size, and setting the ROI from the object's position. This positioning function is therefore easy to implement with IMAQ Vision: the user only needs to set the grayscale threshold for the first action and the object size for the second.

Image Preprocessing

The acquired images contain a large amount of noise, which greatly hinders thinning and recognition, so it must be filtered out. Figure 2 shows an image after preprocessing. After acquiring an image, the system preprocesses it automatically. The system also supports manual processing, mainly to improve its adaptability to different environments: if the user is not satisfied with the automatic result, they can open the manual processing program and adjust the parameters of the image processing functions the system provides until the image quality is satisfactory. During manual processing the system records the functions used and their parameters, and the user can save them to the computer; if the next object is processed in the same environment, these parameters can be reloaded and used for automatic processing.

[align=center] Figure 2 Chip surface markings acquired and processed by the system[/align]

Text Segmentation and Thinning

The system first separates the characters individually, then thins each one to extract its skeleton for subsequent recognition. Text segmentation exploits the gaps between text lines and between characters.
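A minimal sketch of this gap-based segmentation, using horizontal and vertical projections of a binarized image (text pixels = 1, background = 0). The function names are illustrative, and the faster two-scan variant the system actually uses is not shown:

```python
import numpy as np

def find_runs(profile, thresh=0):
    """Return (start, end) index pairs of consecutive rows/columns
    whose projection exceeds `thresh`, i.e. that contain text pixels."""
    mask = profile > thresh
    edges = np.diff(mask.astype(np.int8))     # +1 where a run starts, -1 where it ends
    starts = np.flatnonzero(edges == 1) + 1
    ends = np.flatnonzero(edges == -1) + 1
    if mask[0]:
        starts = np.r_[0, starts]
    if mask[-1]:
        ends = np.r_[ends, mask.size]
    return list(zip(starts, ends))

def segment_characters(binary):
    """binary: 2-D array with text pixels = 1, background = 0.
    Returns a list of (row0, row1, col0, col1) character boxes."""
    boxes = []
    for r0, r1 in find_runs(binary.sum(axis=1)):      # text-line boundaries
        line = binary[r0:r1]
        for c0, c1 in find_runs(line.sum(axis=0)):    # character boundaries
            boxes.append((r0, r1, c0, c1))
    return boxes
```

The row projection rises when a text line begins and falls when it ends, which corresponds to the positive and negative transitions described here.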
Since white has a grayscale value of 255 and black a value of 0, the image can be scanned row by row, comparing the sum of the grayscale values of all pixels in the current row with that of the previous row. A positive transition marks the upper boundary of a text line; a negative transition marks the lower boundary. After the line boundaries are determined, each line is scanned from left to right to find the left and right boundaries of the characters, separating them one by one. Row-by-row scanning is slow, however, so the system uses an improved two-scan method based on the same principle.

[align=center] Figure 3 Thinning process of two characters[/align]

Thinning uses the FPA thinning algorithm [1], which is simple to implement, effective, and gives ideal results. Figure 3 shows the thinning process for two characters separated from the image.

Text Recognition

The system recognizes text by template matching. Feature quantities are first extracted from the image and then matched against pre-prepared standard feature quantities: if the standard feature values of some template are closest to those of the image to be recognized (i.e., the similarity distance is smallest), the system recognizes the image as the character or logo described by that template. Feature values are extracted with the method shown in Figure 4: twenty intersecting lines, vertical, horizontal, and diagonal, numbered 1 to 20, are laid over the image plane. When a character image is placed on the plane, the number of intersections between the character's strokes and each line is counted, and these counts serve as the character's feature values.
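The intersection-count features and the template matching might look like the sketch below. The exact split of the 20 lines among vertical, horizontal, and diagonal is not given in the text, so 8 vertical, 8 horizontal, and 4 diagonal lines are assumed here, along with a sum-of-absolute-differences distance:

```python
import numpy as np

def count_crossings(pixels):
    """Number of distinct stroke runs (runs of 1s) along a 1-D pixel track."""
    p = np.r_[0, np.asarray(pixels, dtype=np.int8)]
    return int(np.count_nonzero(np.diff(p) == 1))

def feature_vector(skel):
    """skel: square 2-D array, skeleton pixels = 1.
    Returns 20 intersection counts: 8 vertical, 8 horizontal, and
    4 diagonal lines (this split is an assumption)."""
    n = skel.shape[0]
    cols = np.linspace(0, n - 1, 10, dtype=int)[1:-1]   # 8 interior columns
    rows = np.linspace(0, n - 1, 10, dtype=int)[1:-1]   # 8 interior rows
    feats = [count_crossings(skel[:, c]) for c in cols]
    feats += [count_crossings(skel[r, :]) for r in rows]
    offs = [-n // 3, n // 3]
    feats += [count_crossings(np.diagonal(skel, k)) for k in offs]
    feats += [count_crossings(np.diagonal(skel[:, ::-1], k)) for k in offs]
    return np.array(feats)

def recognize(skel, templates):
    """templates: dict mapping label -> stored feature vector.
    Returns the label with the smallest similarity distance."""
    c = feature_vector(skel)
    return min(templates, key=lambda t: int(np.abs(c - templates[t]).sum()))
```

In the learning phase the system would store `feature_vector` outputs as templates; in the recognition phase it runs `recognize` against them.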
Let the feature value array be C = {Ci | i = 1, 2, ..., 20}, where each component is the number of intersections between the correspondingly numbered feature line and the character's strokes.

[align=center] Figure 4 Recognition feature lines[/align]

Before extracting feature values, the system scales each separated character image to a square, keeping the text centered, which makes the diagonal feature lines easier to generate. Beyond the feature-line intersections, the system also matches additional character features, such as the number and positions of stroke endpoints, to improve recognition accuracy; extracting these features relies on the thinning step. The similarity distance D is computed as follows: let the feature values of the image to be recognized be C = {Ci} and those of a standard template be T = {Ti}; then the similarity distance between them is D = Σ |Ci − Ti|, summed over i = 1, 2, ..., 20. Figure 5 shows the similarity matching process.

[align=center] Figure 5 Similarity distance matching[/align]

Using the Recognition System

The main program has three parts: program start, which reads the configuration file, allocates memory (especially image memory), and initializes the hardware; an event-wait phase, in which the system responds to the buttons the user presses on the front panel; and program end, which releases resources and handles errors. Figure 6 shows the main interface of the recognition system.

[align=center]Figure 6 Automatic identification system for IC chip surface markings[/align]

The "Process" function lets users process images manually when they are not satisfied with the automatic result.
Extending this function can further improve the system and its adaptability to different environments. The main program has three image displays: the upper-left display shows the acquired image, or an image opened with "Open" (the "Open" and "Save" buttons operate on this display); the lower-left display shows the processed image for comparison with the original ("Save Image" operates on this display); and the small display on the right shows the thinned image of a single character, so the user can observe the thinning effect in real time. Before using the recognition system, the user needs to:
1. Start the acquisition devices: the image acquisition card, camera, and light source.
2. Adjust the camera's focal length and position, and the direction of the light source, to create an optimal acquisition environment. The external environment plays a crucial role in the recognition rate; a good environment improves it significantly.
3. Configure NI's image acquisition device.

Results and Conclusions

Using NI's (National Instruments') virtual instrument technologies LabVIEW, IMAQ Vision, and PXI, and the functionality of their many image processing modules, we built a complete automatic chip surface marking recognition system in a relatively short time. The system automatically locates and tracks a chip within the camera's field of view, crops it out, and extracts the characters on the chip one by one through a series of image processing steps. Each character image is then thinned, and the system derives feature information from the thinned character and matches it against standard templates to complete the recognition.
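The end-to-end flow just summarized (locate and crop the chip, preprocess, segment, thin, extract features, match) can be sketched as a driver function. The helper names and thresholds are illustrative assumptions, with NumPy standing in for the IMAQ Vision calls:

```python
import numpy as np

def locate_chip_roi(frame, gray_thresh=128, min_size=500):
    """Threshold the frame and bound the bright object, mimicking the
    threshold -> locate -> set-ROI sequence of the positioning step."""
    mask = frame > gray_thresh
    if np.count_nonzero(mask) < min_size:
        return None                      # no sufficiently large object in view
    rows = np.flatnonzero(mask.any(axis=1))
    cols = np.flatnonzero(mask.any(axis=0))
    return frame[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]

def recognize_markings(frame, templates, preprocess, segment, thin, features):
    """Pipeline driver; the four processing stages are passed in as
    callables (placeholders for the actual processing modules)."""
    roi = locate_chip_roi(frame)
    if roi is None:
        return []
    clean = preprocess(roi)              # filtering, thresholding, ...
    labels = []
    for char_img in segment(clean):      # one sub-image per character
        skel = thin(char_img)            # FPA thinning in the actual system
        c = features(skel)               # per-character feature values
        labels.append(min(templates,
                          key=lambda t: int(np.abs(c - templates[t]).sum())))
    return labels
```

Keeping the stages as separate callables mirrors the system's modular structure, where preprocessing parameters can be swapped per environment without touching the rest of the pipeline.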
The system is widely applicable, recognizing characters as well as manufacturer logos and icons, and has high practical value: combined with an automatic chip testing system, it can greatly improve production capacity and efficiency, giving it extremely broad application prospects.

Acknowledgements

I sincerely thank Professor Jiang Jianjun, who provided invaluable assistance during the system's development and offered many positive and effective solutions. I also thank the doctoral students Fan Shaochun, Liu Jiguang, Liu Wenqing, and Ming Fanhua for their significant help in completing this project.

References

[1] Keiji Taniguchi. Digital Image Processing Applications. Translated by Zhu Hong, Liao Xuecheng, and Le Jing. Beijing: Science Press, 2000.
[2] Jiaguang Sun. Computer Graphics (Third Edition). Beijing: Tsinghua University Press, 1998.
[3] Rafael C. Gonzalez. Digital Image Processing (Second Edition). Translated by Qiuqi Ruan and Yuzhi Ruan. Beijing: Electronic Industry Press, 2003.
[4] Nanning Zheng. Computer Vision and Pattern Recognition. Beijing: National Defense Industry Press, 1998.
[5] Sarp Ertürk. Digital Image Processing, February 2003 Edition. University of Kocaeli, Part Number 323604A-01.
[6] Haibing Guan and Guorong Xuan. A New Fully Parallel Thinning Algorithm. Computer Engineering, 1997, 23(1): 256-258.
[7] Rui Wang and Xin Ai. Research on Digital Recognition Technology Based on LabVIEW. Modern Electric Power, 2003, 20(3,4): 96-99.
[8] IMAQ Vision Concepts Manual, October 2000 Edition. National Instruments Corporation, Part Number 322916A-01.
[9] IMAQ Vision for LabVIEW™ User Manual, June 2003 Edition. National Instruments Corporation, Part Number 322917B-01.