A Brief Discussion on Vision Systems for Pick and Place Machines
2026-04-06 04:55:07 · #1
The increasing demand for smaller, lighter, thinner, and more reliable electronic devices has spurred the rapid development of new types of components, especially fine-pitch devices, which are used in ever more electronic products. This in turn places higher demands on the placement accuracy of pick-and-place machines, a key piece of equipment in surface mount technology (SMT). This article compares the vision systems of FUJI (mainly the IP3 and CP6) and SIEMENS (S80F) pick-and-place machines from an application perspective.

1. The Principle of Machine Vision Systems

A pick-and-place machine vision system is a computer-based image observation, recognition, and analysis system. Its sensing (detection) component is a camera. The camera senses the light intensity distribution of an object within a given field of view, converts it into analog electrical signals, and digitizes them into discrete values via an A/D converter; each value represents the average light intensity over a small area of the field of view. The resulting digital image is laid out on a regular spatial grid, and each grid cell is called a pixel. The object's image occupies some number of cells in this pixel array. The computer processes the pixel array containing the digital image of the object, comparing and analyzing the image features against a reference image supplied beforehand, and sends instructions to the actuator based on the results.

Grayscale resolution is crucial in machine vision. Grayscale imaging represents each pixel's brightness with one of a number of discrete levels, and the grayscale resolution is the number of such levels: the more levels available, the finer the intensity differences the system can distinguish.

2. Composition of the Vision System

A pick-and-place machine vision system consists of vision hardware and software.
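The intensity-to-gray-level quantization described above can be sketched in a few lines of Python. This is a simplified illustration only; the function and parameter names are assumptions for the example, not any vendor's API.

```python
def quantize(intensity, bits=8, v_max=1.0):
    """Map an analog light intensity in [0, v_max] to a discrete gray level.

    With 8 bits the A/D converter distinguishes 256 gray levels; more
    bits give a finer grayscale resolution for the same intensity range.
    """
    levels = 2 ** bits
    # Clamp to the valid range, then scale into [0, levels - 1].
    intensity = min(max(intensity, 0.0), v_max)
    return min(int(intensity / v_max * levels), levels - 1)

print(quantize(0.5))          # mid-gray at 8 bits -> 128
print(quantize(0.5, bits=4))  # same intensity, coarser resolution -> 8
```

A whole camera frame is then just a 2D grid of such values, one per pixel, which the computer compares against the stored reference image.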
The hardware generally comprises three parts: image detection, image storage and processing, and image display.

The camera is the sensing component of the vision system; pick-and-place machines use solid-state (CCD) cameras. The main part of a solid-state camera is an integrated circuit chip on which a CCD array of many tiny photosensitive elements is fabricated. The electrical signal output by each photosensitive element is proportional to the intensity of the light reflected from the observed target, and this signal is recorded as the gray level of a pixel; the coordinates of the photosensitive element determine the position of that point in the image. The camera acquires a large amount of information, which is processed by a microprocessor, and the results are displayed on an industrial monitor. Communication cables connect the camera to the microprocessor, and the microprocessor to the actuator and the display, typically through an RS-232 serial interface.

3. Accuracy of the Vision System

The main factors affecting the accuracy of a vision system are the camera's pixel count and the optical magnification. More pixels mean higher accuracy, and so does higher magnification, because at higher optical magnification a given area of the object is covered by more pixels. FUJI's IP3, for example, requires high magnification to place components with a lead width of 0.15 mm. However, excessive magnification makes component location more difficult, increasing the risk of losing the component from the image and reducing the placement rate, so an appropriate optical magnification should be selected based on actual needs.

4. Comparison of FUJI and SIEMENS Vision Systems
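The trade-off between pixel count and magnification can be made concrete with a small calculation: the finest detail one pixel can resolve is roughly the field of view divided by the number of pixels spanning it. The numbers below are illustrative assumptions, not FUJI or SIEMENS specifications.

```python
def pixel_resolution_mm(sensor_pixels, field_of_view_mm):
    """Smallest distance one pixel resolves across the given field of view."""
    return field_of_view_mm / sensor_pixels

# An assumed 1024-pixel sensor line viewing a 20 mm field:
coarse = pixel_resolution_mm(1024, 20.0)   # ~0.0195 mm per pixel
# Doubling the optical magnification halves the field of view,
# so the same sensor resolves detail twice as fine:
fine = pixel_resolution_mm(1024, 10.0)
assert fine == coarse / 2
```

The same arithmetic also shows the downside: at higher magnification the field of view shrinks, so a component picked up off-center is more likely to fall partly outside the image.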
1. Precise PCB Positioning

Both FUJI's IP and CP machines have a dedicated MARK CAMERA used to acquire the position, size, and shape of the fiducial marks on the PCB and to read their center positions. At least two marks (defined by X, Y coordinates) are required to position the PCB. With the TABLE in its horizontal position, the machine searches around the programmed center of each mark within a certain range; if no target is found, the search range is expanded (this can be set in the program). Once a mark's position is determined, it is compared with the coordinates in the program to obtain the deviation, expressed as X, Y, and Q (rotation) values, and the mounting coordinates are corrected accordingly. SIEMENS works in roughly the same way.

2. Component Inspection and Centering

FUJI uses two cameras, one large and one small, to identify and center different components while simultaneously performing inspection. Different illumination methods are used for different components: J-leaded packages (PLCC, SOJ, BGA) use front lighting, while others use back lighting. The nozzle on the placement head picks up the component at the feeder position specified in the program, aiming to pick it up as close to its center as possible. This is crucial, especially for larger components such as a PLCC84, because off-center pickup often causes image processing to fail. After the component is picked up and its shape imaged at a specific position, an algorithm chosen for the component type extracts edge data and determines the center position. This is compared with the data in the program to obtain the X, Y, and Q deviation values. Besides providing correction data, the system performs the following inspections: whether the actual component deviates from the component described in the PART DATA (package data, including pin count, pin position, pin length, and size), whether the pins are skewed, pin coplanarity, polarity detection, and so on.
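The fiducial-based correction described above can be sketched as follows: from two marks, derive the board's X/Y offset and rotation Q, then map each programmed placement coordinate onto the actual board. This is a simplified illustration of the general technique, not FUJI's or SIEMENS's actual algorithm, and all names are assumptions.

```python
import math

def board_correction(prog_marks, meas_marks):
    """Derive (dx, dy, dq) from two fiducial marks.

    prog_marks / meas_marks: [(x, y), (x, y)] programmed vs. camera-measured.
    dq is the rotation in radians, the X/Y/Q deviation of the text.
    """
    (px1, py1), (px2, py2) = prog_marks
    (mx1, my1), (mx2, my2) = meas_marks
    # Rotation: angle difference between the mark-to-mark vectors.
    dq = math.atan2(my2 - my1, mx2 - mx1) - math.atan2(py2 - py1, px2 - px1)
    # Translation: offset of the first mark.
    return mx1 - px1, my1 - py1, dq

def correct_point(p, prog_marks, meas_marks):
    """Map a programmed placement coordinate onto the actual board."""
    dx, dy, dq = board_correction(prog_marks, meas_marks)
    px1, py1 = prog_marks[0]
    x, y = p[0] - px1, p[1] - py1          # relative to the first mark
    c, s = math.cos(dq), math.sin(dq)
    return (px1 + dx + x * c - y * s,      # rotate about the first mark,
            py1 + dy + x * s + y * c)      # then translate
```

For example, if the two marks are programmed at (0, 0) and (10, 0) but measured at (1, 2) and (1, 12), the board is shifted and rotated 90 degrees, and `correct_point` maps every placement coordinate accordingly.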
When performing inspection, the pick-and-place machine compares the characteristics of the device under test with the stored package data. A failed inspection may indicate an incorrect package, an incorrectly loaded component, or a defective device; in that case the system instructs the placement head to send the device to the discard cassette.

FUJI provides an industrial CRT display for observing the device image. Through the on-site control panel, the machine can be operated manually to acquire images of the actual device, and several methods are available for checking the differences between the packaged component in the program and the actual component. The CRT can indicate where the error (BUG) is; when an error occurs, the screen also shows an error code, which helps in analyzing the cause and suggests corrections. The vision software uses a different VISION TYPE for each class of device, corresponding to different image-processing algorithms. Different grayscale schemes and illumination sequences are used for the pins of different devices, allowing the pin arrangement to be verified, and polarity detection can be performed for polarized devices, demonstrating the adaptability of the machine.

The SIEMENS 80F4 is also a multi-functional placement machine. It has two placement heads: a rotary head and an IC head. The rotary head carries 12 nozzles and can place components up to a PLCC44, while the IC head can place devices up to 55 mm × 55 mm in size. The SIEMENS machine has three cameras: a PCB camera, a COMPONENT camera, and an IC camera. The PCB camera is mainly used to align the machine's reference marks with the marks on the PCB. The COMPONENT camera, located above the rotary head, optically aligns small components and adjusts their placement, while the IC camera is primarily used for optical alignment of larger components.
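The inspect-and-discard decision described above amounts to comparing measured features against the programmed package data within tolerances. A minimal sketch, assuming hypothetical field names (`pin_count`, `pitch_mm`) that stand in for the PART DATA fields mentioned in the text:

```python
def inspect_component(part_data, measured, pitch_tol_mm=0.05):
    """Compare measured features with the programmed package data.

    part_data / measured: dicts with 'pin_count' and 'pitch_mm'
    (hypothetical field names for illustration). Returns (ok, reason);
    in the machine's own workflow a failing part would be sent to the
    discard cassette.
    """
    if measured["pin_count"] != part_data["pin_count"]:
        return False, "incorrect package: pin count differs from PART DATA"
    if abs(measured["pitch_mm"] - part_data["pitch_mm"]) > pitch_tol_mm:
        return False, "lead pitch out of tolerance: skewed pins or wrong part"
    return True, "pass"

spec = {"pin_count": 84, "pitch_mm": 1.27}   # e.g. a PLCC84
ok, reason = inspect_component(spec, {"pin_count": 84, "pitch_mm": 1.28})
bad, why = inspect_component(spec, {"pin_count": 68, "pitch_mm": 1.27})
```

A real machine checks many more features (pin length, coplanarity, body size, polarity), but each check follows this same compare-against-PART-DATA pattern.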
SIEMENS uses three main imaging methods: optical alignment of chip components (general chip components), of SO components, and of BGAs. When optically aligning chip components, only parallel light is used and only the component's edges are checked to find its center and calculate the adjustment the machine must make. When aligning SO components and BGAs, however, the relative position of each pin or solder ball must also be detected; both position and solder-ball brightness are included in the inspection.

SIEMENS has a significant advantage in image processing of PLCCs. FUJI's light source is parallel light, so for J-type leads only the bottom of the lead reflects, and the processing result is correspondingly limited. SIEMENS adds side lighting, which also reflects images from the angled surface of the J-lead, allowing a more comprehensive optical inspection of the PLCC. SIEMENS therefore has advantages in PLCC placement, and side lighting likewise plays an important role in the optical inspection of BGAs.

5. Conclusion

Comparing the vision systems of FUJI and SIEMENS gives a deeper understanding of the image-processing technology of pick-and-place machines. It is evident that high-precision pick-and-place machines integrate modern technologies such as computing, optics, electronics, and automatic control. With the rapid development of these technologies, pick-and-place machines are evolving toward higher speed, higher precision, and more powerful functions. This article, written to share my experience with pick-and-place machines, aims to facilitate exchange and mutual improvement among colleagues working in SMT.