Development of a vision system for pick and place machines
2026-04-06 07:40:13
Abstract: With the widespread adoption of miniature, densely spaced chip electronic components, manufacturers of electronic products have placed higher demands on pick-and-place machines in terms of placement accuracy and speed, so machine vision systems are now widely used in these machines. The pick-and-place machine vision system realizes two main functions: recognition of fiducial marks on the PCB, to establish the coordinate transformation; and identification, inspection, and alignment of chip components, to ensure correct placement. The system comprises hardware and software: the hardware's main task is to acquire high-quality images, while the software's main task is to run high-precision, high-speed algorithms.

Keywords: vision system, pick-and-place machine, LED light source

Early pick-and-place machines used mechanical alignment devices, which have been phased out because they were unreliable and easily damaged components. Current pick-and-place machines all use laser alignment or vision alignment. Given the trend toward chip components with denser pins and smaller sizes, the following error sources must be considered in order to place them accurately: (1) positioning error of the PCB; (2) centering error of the components; (3) motion error of the pick-and-place machine itself. When all of these factors accumulate, precise placement of fine-pitch components becomes difficult. Purely mechanical methods of positioning the PCB and centering the components cannot meet the accuracy required for fine-pitch components. Moreover, although placement error is closely related to the machine's own motion error, even high motion precision alone cannot guarantee the accuracy required for placing fine-pitch components.
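The fiducial-mark coordinate transformation mentioned above can be sketched as follows. This is a minimal illustration, not the article's actual algorithm: it assumes a rigid model (rotation plus translation, no scale or skew) recovered from two fiducial marks, and all function names are hypothetical.

```python
import math

def board_transform(design_pts, measured_pts):
    """Derive the rotation + translation mapping design (CAD) coordinates
    onto the machine coordinates of two measured fiducial marks.
    Assumes a rigid transform (scale = 1); illustrative only."""
    (dx1, dy1), (dx2, dy2) = design_pts
    (mx1, my1), (mx2, my2) = measured_pts
    # Rotation: angle difference between the two fiducial-pair vectors.
    theta = (math.atan2(my2 - my1, mx2 - mx1)
             - math.atan2(dy2 - dy1, dx2 - dx1))
    c, s = math.cos(theta), math.sin(theta)
    # Translation: make the first fiducial coincide after rotation.
    tx = mx1 - (c * dx1 - s * dy1)
    ty = my1 - (s * dx1 + c * dy1)
    def apply(p):
        x, y = p
        return (c * x - s * y + tx, s * x + c * y + ty)
    return apply

# Example: a board rotated 90 degrees and shifted by (10, 5).
to_machine = board_transform([(0, 0), (100, 0)], [(10, 5), (10, 105)])
```

Every placement coordinate taken from the CAD data is then passed through `to_machine` before being sent to the motion controller, which is how fiducial recognition compensates for PCB positioning error.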
A few years ago, the industry-accepted accuracy standard was 0.1 mm for chip components and 0.05 mm for fine-pitch components. The standard is now trending toward 0.05 mm (chip components) and 0.025 mm (fine-pitch components). A machine vision system is therefore necessary to meet such high accuracy requirements.

1. Placement Machine Vision System

1.1 Hardware Composition of the Vision System
The vision system hardware consists mainly of a moving camera, a stationary camera, LED light sources, an image acquisition card, and an industrial PC. The stationary camera is used for component identification, inspection, and centering. The moving camera travels with the placement head; it is used to teach the pickup and placement positions of components and to check placement quality. Each camera transmits its images to the acquisition card, which completes image capture under the control of the industrial PC. The video signal is then combined with the industrial PC's video output and sent to the monitor for display. Figure 1 shows the structure of the placement machine, and Figure 2 shows the structure of its vision system. The instantaneous image data captured by the acquisition card is processed by the industrial PC, and the results are returned to the main control program, which carries out the corresponding control actions.

1.2 Vision System Functions
The vision system provides two groups of functions: production functions and calibration functions. See Table 1 and Table 2 for details.

1.3 Vision System Workflow
The data sources during software execution are the database of chip electronic components and the PIK file of the PCB (taking a Protel design as an example). The component database contains all characteristic description values of the components; the PIK file contains the coordinates of the placed components on the PCB.
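The join between the two data sources described above can be sketched as follows. The real database schema and PIK file format are not given in the article, so the record fields below are illustrative assumptions.

```python
# Hypothetical records: the real component database and PIK formats
# are not specified in the article.
component_db = {
    "0805R":  {"type": "Chip", "body_mm": (2.0, 1.25)},
    "QFP100": {"type": "QFP",  "body_mm": (14.0, 14.0)},
}

# (designator, part, x_mm, y_mm, rotation_deg) taken from the PCB file.
pik_records = [
    ("R1", "0805R", 12.7, 30.48, 0),
    ("U1", "QFP100", 50.0, 50.0, 90),
]

def placement_jobs():
    """Join each placement coordinate with the component's characteristic
    values so the vision algorithm has both pose and package data."""
    for designator, part, x, y, rot in pik_records:
        params = component_db[part]
        yield {"designator": designator, "xy": (x, y),
               "rotation": rot, "vision": params}

jobs = list(placement_jobs())
```

Pre-joining the records this way means the recognition step can look up package dimensions in constant time instead of parsing files during placement.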
Figure 3 is a flowchart of a single placement operation, covering the judgment, identification, and alignment of one component.

2. Vision System Design and Selection

2.1 LED Light Source Design
With the development of photolithography, surface mount technology and component packaging have also advanced rapidly; there are now more than ten thousand types of surface mount component packages. A full-vision placement machine identifies, inspects, and aligns surface mount components through its vision system. Based on package form and vision algorithm, surface mount components can be divided into five types: Chip, SOIC, QFP, BGA, and ODD (odd-form). A single type of LED light source makes it difficult to obtain high-quality images of all of them; combining several types of LED sources under computer control allows high-quality images to be acquired quickly and accurately, supporting a rapid placement process. The combined light source consists of three independently controllable sources: a low-angle source, a side source, and a coaxial source. Since a black-and-white camera is used, red high-brightness LEDs are selected. Table 3 shows the correspondence between component types and light sources.

2.2 Vision System Hardware Integration
(1) When selecting a CCD, the following factors should be considered: interlaced versus progressive scanning, frame rate, resolution, synchronization method, charge overflow related to pixel size, frame/field transfer interval, electronic shutter, minimum illumination, signal-to-noise ratio, and whether gain control is available. Given the required mounting speed, an interlaced-scan CCD can be selected for a medium-speed pick-and-place machine: sensor size 6.4 mm × 4.8 mm, resolution 768 (H) × 494 (V), electronic shutter up to 1/10000 s, minimum illumination down to 0.1 lx, and signal-to-noise ratio (S/N) of 56 dB.
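Computer control of the three independently switchable sources can be sketched as below. The actual component-to-light mapping is given in Table 3 of the article and is not reproduced here; the combinations in this sketch are plausible assumptions, not the article's data.

```python
# Illustrative only: the real mapping is in Table 3 of the article.
LIGHT_COMBOS = {
    "Chip": {"side"},
    "SOIC": {"side", "low-angle"},
    "QFP":  {"low-angle"},
    "BGA":  {"coaxial"},
    "ODD":  {"side", "coaxial"},
}

def energize(component_type):
    """Return the on/off state of the three independently controllable
    sources (low-angle, side, coaxial) for a given package type."""
    combo = LIGHT_COMBOS.get(component_type)
    if combo is None:
        raise ValueError("unknown package type: " + component_type)
    return {src: (src in combo) for src in ("low-angle", "side", "coaxial")}
```

Switching the sources per component, rather than using one fixed illumination, is what lets a single black-and-white camera image everything from reflective BGA balls to matte chip bodies.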
(2) The lens should be matched to the CCD after the working parameters are determined. The main parameters to fix are the camera's working distance, the field of view, and the adjustable range of the lens's performance parameters. The working distances of the moving and stationary cameras are 50-100 mm and 100-200 mm, and their corresponding fields of view are 20 mm × 20 mm and 55 mm × 55 mm, respectively. The focal length of the lens can then be calculated from these parameters. In principle, the lens's own performance should be as high as possible: low distortion (<0.1%), large aperture (up to f/1.4), and manual focus; it should also be checked whether the lens covers the 6.4 mm × 4.8 mm sensor format and whether it has a C-mount interface.
(3) The LED light source must also be matched to the lens, especially the moving light of the pick-and-place machine, because the glossy PCB surface produces specular reflections. In principle, the larger the inner diameter of the moving ring light, the larger the reflection produced; keeping the camera's field of view within the inner diameter of the light therefore prevents the reflection from appearing in the image. As shown in Figure 4, when the camera's field of view stays within the range of θ′, the maximum illuminated area is being used; if the field of view is enlarged beyond this range, the lamp housing blocks part of the image; and if the camera is lowered too far, the lamp itself appears in the image. Based on this analysis, the installation and adjustment parameters of the moving and stationary cameras can be determined, as shown in Table 4. The installation structure is shown in Figure 5.
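The focal-length calculation mentioned above can be sketched with the standard thin-lens magnification relation (m = sensor size / field of view, f = WD·m / (1 + m)); these may not be the exact formulas the original used, but they follow directly from 1/f = 1/d_o + 1/d_i with the stated working distances and fields of view.

```python
def focal_length_mm(working_distance_mm, sensor_mm, fov_mm):
    """Estimate the required focal length from the thin-lens relation:
    magnification m = sensor / FOV, focal length f = WD * m / (1 + m)."""
    m = sensor_mm / fov_mm
    return working_distance_mm * m / (1.0 + m)

# Moving camera: WD = 75 mm (mid-range), 6.4 mm sensor width, 20 mm FOV.
f_moving = focal_length_mm(75, 6.4, 20)       # about 18 mm
# Stationary camera: WD = 150 mm, 6.4 mm sensor width, 55 mm FOV.
f_stationary = focal_length_mm(150, 6.4, 55)  # about 16 mm
```

In practice the nearest standard focal length (e.g. 16 mm or 25 mm C-mount) would be chosen and the working distance fine-tuned within the 50-100 mm or 100-200 mm range.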
(4) Matching the image acquisition card to the CCD
The selection of the image acquisition card should consider the following capabilities: digital versus analog signal processing, standard versus non-standard signals, progressive versus interlaced signals, multi-channel versus single-channel input, color versus black-and-white signals, and software compatibility. For a medium-speed placement machine, a card supporting digital, standard, interlaced, multi-channel, black-and-white signal processing can be used. VC++ was chosen as the development platform for the in-house vision software.

3. Image Acquisition and Processing Experiments
Experiment 1: Different types of chip electronic components were picked up and moved above the stationary camera. Images were captured and the position and angle deviations (Δx, Δy, θ) were calculated (see Table 6). The mounting head translated and rotated each component 20 times, and the Δx, Δy, and θ values were calculated each time. The results show that the difference between the calculated deviation and the preset movement value is less than 8 μm, meeting the specified requirements.
Experiment 2: A glass plate with circular marks at its four corners (size 200×200×3, positional deviation of the four marks less than 5 μm) was fabricated by photolithography. Components were mounted at the preset positions, and the deviation from the preset values was measured with an optical coordinate measuring machine. All results were less than 0.1 mm for chip components and 0.05 mm for fine-pitch components.

4. Conclusions
(1) The vision system captures high-quality images and is stable and reliable.
(2) By establishing a database of surface mount components, component parameters can be retrieved in real time during image recognition, which shortens recognition time.
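The (Δx, Δy, θ) computation in Experiment 1 can be sketched with image moments, a common centering approach for machine vision; the article does not state its actual algorithm, so this is an assumed illustration.

```python
import math

def centroid_and_angle(binary):
    """Compute a component's centroid and principal-axis angle from a
    binary image (list of rows of 0/1) using image moments."""
    m00 = m10 = m01 = 0.0
    for y, row in enumerate(binary):
        for x, v in enumerate(row):
            if v:
                m00 += 1
                m10 += x
                m01 += y
    cx, cy = m10 / m00, m01 / m00
    # Central second-order moments give the orientation.
    mu20 = mu02 = mu11 = 0.0
    for y, row in enumerate(binary):
        for x, v in enumerate(row):
            if v:
                mu20 += (x - cx) ** 2
                mu02 += (y - cy) ** 2
                mu11 += (x - cx) * (y - cy)
    theta = 0.5 * math.atan2(2 * mu11, mu20 - mu02)
    return cx, cy, theta

def deviation(measured, nominal):
    """Deviation (dx, dy, dtheta) of the measured pose from the nominal."""
    return tuple(m - n for m, n in zip(measured, nominal))
```

Subtracting the nozzle-center pose from the measured pose yields the correction the placement head applies before setting the component down.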
(3) Sub-images can be extracted based on the component parameters, making image processing fast and simple.
(4) Matching each surface mount component to the appropriate LED light source ensures image quality and improves recognition speed and accuracy.
(5) For irregularly shaped (odd-form) components, users can program the recognition themselves.
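The sub-image extraction of conclusion (3) can be sketched as follows: the component's body size from the database bounds a region of interest around the expected position, so only that region is processed. Function and parameter names here are illustrative assumptions.

```python
def extract_subimage(image, center_px, body_mm, mm_per_px, margin_px=10):
    """Crop a region of interest around the expected component position.
    image: list of pixel rows; center_px: (x, y) expected center;
    body_mm: component body size from the database."""
    cx, cy = center_px
    # Half the body size in pixels, plus a safety margin.
    half_w = int(body_mm[0] / (2 * mm_per_px)) + margin_px
    half_h = int(body_mm[1] / (2 * mm_per_px)) + margin_px
    x0, y0 = max(cx - half_w, 0), max(cy - half_h, 0)
    x1 = min(cx + half_w, len(image[0]))
    y1 = min(cy + half_h, len(image))
    return [row[x0:x1] for row in image[y0:y1]]
```

Because the ROI scales with the package size, a small chip resistor costs far fewer pixel operations than a full frame, which is the speed gain the conclusion refers to.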