Design of a Quality Inspection Machine Vision System Based on Virtual Instruments
Introduction

An image intensifier is an optoelectronic device that amplifies weak light signals, allowing an observer to see objects or targets in low-light conditions. The quality of its circuitry directly affects its overall quality: a substandard electrical system readily causes faults in use such as black spots (poor soldering), bright spots (short circuits), flashes, and flicker (unstable circuits). An image intensifier must therefore pass a reliability test before being put into service.

Current Chinese regulations and standards, however, impose long test times and complex test conditions. The standards require that image intensifier tests run over multiple test cycles. Each cycle lasts 16 hours, with a 5-minute break after every 55 minutes of operation; adjacent cycles are separated by a 2-hour interval, and the total online working time of a single test must be no less than 600 hours. In addition, various stresses (optical, electrical, etc.) must be applied to the device repeatedly during the test, and faults occurring under the various test conditions must be identified and recorded in real time for post-test analysis. Because of this complexity, China currently lacks equipment for the reliability assessment of image intensifiers.

In recent years, with the continuing development of computer technology and digital image processing, machine vision has been widely applied in medical imaging, industrial production, and quality inspection. Virtual instrument technology uses software to combine general-purpose computers with hardware, making it possible to construct reliable test or control systems quickly. Combining the two exploits the analytical power of machine vision and the control capability of virtual instruments at the same time, yielding a high performance-to-price ratio.
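As a rough illustration of the scale these requirements imply, the schedule arithmetic can be sketched as follows. This assumes the 16-hour cycle is tiled exactly with 55-minute work intervals, each followed by a 5-minute break; the function names are illustrative, not part of the standard.

```cpp
// Online (powered-on) minutes in one test cycle, assuming the cycle is
// tiled with 55-minute work intervals each followed by a 5-minute break
// (one reading of the schedule quoted above).
int online_minutes_per_cycle(int cycle_hours) {
    int blocks = cycle_hours * 60 / (55 + 5);  // complete work+break blocks
    return blocks * 55;                        // 55 online minutes per block
}

// Minimum number of 16-hour cycles needed to accumulate the required
// online hours (ceiling division).
int min_cycles(int required_online_hours) {
    int per_cycle = online_minutes_per_cycle(16);
    int required_min = required_online_hours * 60;
    return (required_min + per_cycle - 1) / per_cycle;
}
```

Under that assumption each cycle contributes 880 online minutes (about 14.7 hours), so the 600-hour online requirement alone spans at least 41 test cycles, before the 2-hour intervals between cycles are counted.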
The image intensifier reliability testing machine vision system (hereinafter, the reliability testing system), developed by combining machine vision technology with virtual instrument technology, has therefore achieved good results.

System Structure and Working Principle

The system is divided into an optomechanical subsystem and a monitoring and recording subsystem, as shown in Figure 1. The optomechanical subsystem applies optical and electrical stress to the image intensifier, simulating actual working conditions, and provides a mount for the intensifier during testing. It includes a light source, two-stage integrating spheres of different sizes, a frosted glass, an aperture, a transmittance plate, a collimator, a night-vision-device bracket, an optical-stress switching mechanism, and a luminous intensity detector.

[align=center]Figure 1. Schematic diagram of the overall system structure[/align]

The monitoring and recording subsystem identifies and records faults such as black spots, bright spots, flashes, and flicker at the eyepiece of the image intensifier in real time, and also records the test environment parameters corresponding to each fault image. These test data are then analyzed to give a sound evaluation of image intensifier quality. To meet the system's real-time and efficiency requirements, the subsystem is designed as a distributed structure: four imagers (image processing computers) and one management unit connected in a star network through a hub. The PCI-1407 image acquisition card installed in each imager is connected to a CCD camera and, together with the fault image recognition and processing software, monitors and records fault images at the corresponding intensifier eyepiece. To store fault images in real time, each imager is also fitted with a disk array controller.
The management unit is equipped with a PCI-6024E multifunction data acquisition card which, together with the management software, monitors and records the various test parameters and controls optical-stress switching, electrical-stress switching, and adjustment of the optomechanical section. A control box and adapter serve as the interface between the optomechanical subsystem and the monitoring and recording subsystem: on one hand they convert control signals from the monitoring and recording subsystem into signals the motion mechanisms can act on; on the other they convert test parameters from the optomechanical section into electrical signals the monitoring and recording subsystem can read, binding the parts into a unified system.

In operation, the operator first sets the test conditions (such as the required electrical stress) on the management unit. The management unit then coordinates a system-wide self-test (through interprocess communication over the network) to confirm that all equipment is ready, and afterwards applies the operator's settings automatically before starting the test. In each work period of a test cycle, each imager first acquires a fault-free standard image (verified by the algorithm and by operator inspection). The CCD camera attached to each imager then continuously converts the image at the intensifier eyepiece into a standard video signal and feeds it to the image acquisition card, which decomposes and digitizes the video signal and passes it to the computer. The fault image recognition and processing software on each imager processes the digital image in real time and determines whether the frame contains a fault.
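The per-frame check can be sketched as follows. This is a minimal illustration under stated assumptions, not the system's actual VC++ routine: the current frame is compared against the fault-free standard image, the binarized residual is eroded with a 3x3 kernel to suppress isolated noise pixels, and a fault is flagged when the surviving area exceeds a threshold. The image size, kernel, and threshold values here are all illustrative.

```cpp
#include <cstdlib>
#include <vector>

// An 8-bit grayscale frame stored row-major; W and H are illustrative.
using Image = std::vector<unsigned char>;
const int W = 8, H = 8;

// Binarize |frame - standard| with a grayscale threshold.
std::vector<int> diff_mask(const Image& frame, const Image& std_img,
                           int gray_thresh) {
    std::vector<int> mask(W * H, 0);
    for (int i = 0; i < W * H; ++i)
        mask[i] = (std::abs(frame[i] - std_img[i]) > gray_thresh) ? 1 : 0;
    return mask;
}

// 3x3 erosion: a pixel survives only if its whole 3x3 neighborhood is
// set, which removes isolated single-pixel noise.
std::vector<int> erode3x3(const std::vector<int>& mask) {
    std::vector<int> out(W * H, 0);
    for (int y = 1; y < H - 1; ++y)
        for (int x = 1; x < W - 1; ++x) {
            int all = 1;
            for (int dy = -1; dy <= 1 && all; ++dy)
                for (int dx = -1; dx <= 1 && all; ++dx)
                    all = mask[(y + dy) * W + (x + dx)];
            out[y * W + x] = all;
        }
    return out;
}

// A frame is faulty if the eroded residual covers more than
// area_thresh pixels.
bool is_faulty(const Image& frame, const Image& std_img,
               int gray_thresh, int area_thresh) {
    std::vector<int> m = erode3x3(diff_mask(frame, std_img, gray_thresh));
    int area = 0;
    for (int v : m) area += v;
    return area > area_thresh;
}
```

In the actual system, separate grayscale and area thresholds are applied for each fault type (black spot, bright spot, flash, flicker), as described under the technical characteristics below.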
If a fault is found, the frame is saved; otherwise the next frame is processed. Throughout the test, the management unit synchronously monitors the environment parameters corresponding to each frame and records them in the database. After each work period, the management unit switches off the electrical stress applied to the image intensifier so that it can rest, and at the same time drives the motion mechanism of the optomechanical subsystem to change the aperture and transmittance plate, so that the new optical stress is in place before the next work period begins. This sequence repeats until all test cycles of the experiment are complete.

During development, most software modules were built rapidly on the NI LabVIEW 5.0 PDS virtual instrument platform and the NI IMAQ Vision 5.0 machine vision platform, together with the NI SQL Toolkit. To raise processing speed, the underlying fault identification routine was written in VC++ 6.0 and the compiled C code was embedded in the system through LabVIEW's CIN (Code Interface Node) interface. The status data management module was developed with PowerBuilder 6.0 and MS SQL Server 7.0, and the data communication and system management modules with LabVIEW and NI DataSocket. These modules are installed on the management unit and the imagers respectively; every imager carries identical software and configuration, so the system can be expanded simply by connecting another computer, configured to the imager specification, to the network.

System Technical Characteristics

The reliability testing system has the following technical characteristics, which together ensure its normal operation.

1. The system operates under an unconventional light source.
The image intensifier amplifies weak ambient light, and the brightness of the image at its eyepiece is only on the order of tens of lux, so image noise is very high and fault identification correspondingly difficult. The solution is to use LabVIEW and IMAQ Vision to adjust the black and white levels of the image acquisition card and the CCD exposure coefficient automatically under different illumination conditions, so that fault extraction is always performed at a high signal-to-noise ratio.

2. The system has strong real-time performance.

According to the specifications, the system must complete image acquisition, preprocessing, fault identification, and image storage within 80 ms, a demanding real-time requirement. Two main measures were taken: disk array technology and fault identification software written in VC++. For the faults to be identified (black spots, bright spots, flashes, and flicker), the grayscale thresholds decrease in that order while the area thresholds increase, so grayscale and area were used as the feature parameters for fault identification. A VC++ routine applies an erosion operation to the difference between the fault image and the standard image, then classifies any fault against the preset thresholds. The routine was compiled into .lsb format and embedded in the LabVIEW program through CIN nodes. Testing showed that it typically takes 30 ms to identify one frame, fully meeting the system requirements.

3. Another measure used to improve real-time performance is RAID technology in the high-speed image streaming disk system. Depending on the trade-off among storage performance, data security, and storage cost, RAID has seven basic levels (RAID 0 to 6), plus combinations of the basic levels.
RAID 0 lets multiple disks execute a data request in parallel, distributing consecutive data across the disks for storage and access, and thus effectively removes the bottleneck between disk I/O and CPU processing speed. The hard disks of each imager are attached through RAID interface cards to improve the system's real-time performance.

4. Distributed synchronous data acquisition and control.

The whole system is operated cooperatively by one management unit and four imagers under strict timing relationships. In the communication module written with NI DataSocket, a sender must receive the receiver's confirmation of each message before continuing with subsequent work, which effectively keeps the whole system coordinated. In addition, so that each fault can be correctly attributed afterwards, the corresponding system status must be recorded whenever a fault image is saved. For this purpose a synchronization scheme of same frequency, same phase, and simultaneous start is adopted: same frequency means image acquisition and status acquisition run at the same rate; same phase means the synchronizing video signal extracted by any one image acquisition card is fed to the sync inputs of the other three CCDs, so that the video signals the four CCDs send to their acquisition cards are in phase; and simultaneous start means the trigger terminals of the image acquisition cards and the data acquisition card are wired together in the armed state, so that when any image acquisition card issues a trigger signal the whole system starts at once. Figure 2 shows a fault image containing black spots and bright spots, recorded after algorithm processing.
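The RAID 0 striping described in item 3 amounts to a simple address mapping: consecutive stripe units round-robin across the disks, so a long sequential transfer is serviced by all disks at once. A sketch follows; the disk count and stripe size are illustrative, not the system's actual configuration.

```cpp
// RAID 0 address mapping: logical block b, with n_disks disks and
// blocks_per_stripe blocks per stripe unit, maps to a (disk, local
// block) pair. Consecutive stripes land on consecutive disks, so a
// sustained sequential write is spread over all disks in parallel.
struct Location {
    int disk;
    int block_on_disk;
};

Location raid0_map(long logical_block, int n_disks, int blocks_per_stripe) {
    long stripe = logical_block / blocks_per_stripe;  // which stripe unit
    long offset = logical_block % blocks_per_stripe;  // offset inside it
    Location loc;
    loc.disk = static_cast<int>(stripe % n_disks);    // round-robin over disks
    loc.block_on_disk =
        static_cast<int>((stripe / n_disks) * blocks_per_stripe + offset);
    return loc;
}
```

With four disks and 8-block stripes, for example, logical blocks 0 to 7 land on disk 0, blocks 8 to 15 on disk 1, and so on, so a continuous image stream is written at roughly the aggregate bandwidth of all four disks. This is the disk I/O bottleneck relief the text refers to.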
[align=center]Figure 2. Fault image containing black spots and bright spots[/align]

Conclusion

In combining virtual instrument technology with machine vision technology to build the system, the fault identification part was implemented in VC++ to improve real-time performance: the finished algorithm was compiled into a format supported by the CIN interface of the LabVIEW virtual instrument development platform and embedded into the software system. In testing, the total time to process a fault image, including acquisition and storage, was no more than 40 ms with this integration method and algorithm, fully meeting the requirements. At the same time, because the virtual instrument platform supplies the control functions, developers need only concentrate on the completeness of system functionality rather than on low-level details. This fully exploits the strengths of virtual instruments, makes the system flexible and scalable, saves development cost, and improves the system's performance-to-price ratio.