1 Introduction
Machine vision uses photoelectric imaging systems to acquire images of target objects, which are then processed digitally by a computer or a dedicated image-processing module. From information such as pixel distribution, brightness, and color, the system identifies the target's dimensions, shape, and color. This combines the speed and repeatability of computers with the intelligence and abstraction capability of human vision, significantly improving the flexibility and automation of production.
2 Basic Structure of a PC-Based Machine Vision System
Figure 1 shows the application of a PC-based machine vision system in an empty bottle inspection system on a beer production line. As shown in Figure 1, the system mainly consists of six parts: a camera, lens, light source, image acquisition card, PC platform, and control unit. These parts work together to ultimately complete the quality inspection and rejection of bottles. The following section uses Figure 1 as an example to introduce the functions and selection of each component.
3 Cameras
Industrial cameras currently come in two main types: CCD and CMOS. CMOS cameras are a later development, and because their image quality has historically been poorer, they are mainly used in products where image-quality requirements are modest, such as the cameras built into most mobile phones. CCD cameras are more sensitive than CMOS cameras and perform better in low light, so they are the more common choice in industry. A CCD (charge-coupled device) is a semiconductor optical device that provides photoelectric conversion, information storage, and time delay; its high integration and low power consumption have made it widely used in solid-state imaging, information storage, and processing since its inception. When choosing a camera, the following aspects should be considered:
3.1 Camera Scanning Method
Cameras can be classified by scanning method into area scan cameras and line scan cameras.
(1) Line scan camera. A line scan camera images the object one line at a time. Line scan cameras are suitable for the following situations:
one-dimensional measurement of a stationary object;
imaging an object in motion;
unrolling the edge image of a rotating cylinder;
applications requiring a very high-resolution image of the object, where the generally higher price of a line scan system must also be weighed.
(2) Area scan camera. An area scan camera captures a complete image in a single exposure and can be further divided into interlaced-scan and progressive-scan types. Line scan systems require smooth motion, high tracking accuracy, and strong illumination, and currently reach resolutions of several thousand pixels per line with detection rates of 60 fps or even higher. An area scan camera, by contrast, acquires only one image at a time, so it is inherently less suited to continuous, high-precision inspection of dynamic targets. Based on its working principle, however, the following techniques can be employed:
choosing a frame-transfer or interline-transfer CCD;
using a high-speed (electronic) shutter;
using single-field readout;
using a high-frequency light source.
With these measures it is entirely possible to acquire dynamic images in real time, meeting the requirements of industrial online inspection.
3.2 Camera Color
Industrial cameras can be categorized by color into monochrome and color cameras. Monochrome cameras offer higher resolution and faster data acquisition than color cameras, but with advances in camera manufacturing, color cameras are increasingly used. Early color systems typically consisted of three cameras, one each for the R (red), G (green), and B (blue) channels; single-CCD color cameras are now available. Color cameras provide superior observation and discrimination capability and therefore play an important role in medicine, biology, and some industrial process-control applications.
3.3 Camera Output Interface Type
Camera output interfaces include RS-422, RS-644 (LVDS), USB, IEEE 1394, and Camera Link. When selecting an image acquisition card, check that it supports the output format of the chosen camera.
3.4 Lens
The main parameters of a lens include imaging surface size (which must match the CCD sensor format), focal length, field of view, object distance, depth of field, and angle of view.
The following factors should be considered when selecting a lens:
(1) Whether the lens's imaging surface matches the CCD camera being used. The imaging surface is determined by the lens's design and manufacture; in general, the larger the imaging surface, the better. Some manufacturers' lenses, however, have imaging surfaces too small to meet technical requirements due to design or manufacturing limitations.
(2) Determine the focal length, object distance, and field of view of the lens (mainly based on the actual working or installation environment). These parameters are related as follows: the shorter the focal length, the larger the angle of view and the field of view, and the closer the minimum object distance. Taking the three most commonly used lenses (50 mm, 25 mm, 16 mm) as examples: the 50 mm lens has the longest focal length, hence the smallest angle of view and field of view but the farthest minimum object distance; the 25 mm lens sits in between; the 16 mm lens has the shortest focal length, hence the largest angle of view and field of view and the closest minimum object distance.
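The inverse relationship between focal length and field of view can be illustrated with the thin-lens approximation FOV ≈ sensor width × working distance / focal length (valid when the working distance is much greater than the focal length). The 6.4 mm sensor width and 500 mm working distance below are illustrative assumptions, not values from this system:

```python
# Thin-lens approximation of the horizontal field of view (FOV).
# Assumes working distance >> focal length; all inputs in millimetres.
def field_of_view(sensor_width_mm, working_distance_mm, focal_length_mm):
    return sensor_width_mm * working_distance_mm / focal_length_mm

# Illustrative 1/2" sensor (6.4 mm wide) at a 500 mm working distance:
for f in (16, 25, 50):
    print(f, "mm lens ->", field_of_view(6.4, 500, f), "mm FOV")
```

The calculation confirms the relationship in the text: the 16 mm lens yields the widest field of view (200 mm here) and the 50 mm lens the narrowest (64 mm).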
4 Other components
4.1 Light source
The light source is a crucial factor affecting the input of a machine vision system: it directly determines the quality of the input data and accounts for at least 30% of an application's success. Because inspected objects differ greatly in color, material, refractive index, and other properties, an appropriate lighting scheme must be selected for each specific application to achieve the best results.
Light sources can be categorized by their illumination method into backlighting, front lighting, structured light, and stroboscopic lighting. Backlighting places the object under test between the light source and the camera, offering the advantage of high-contrast images. Front lighting places the light source and camera on the same side of the object, facilitating installation. Structured light illumination projects gratings or line light sources onto the object, demodulating its three-dimensional information based on the resulting distortions. Stroboscopic lighting illuminates the object with high-frequency light pulses; the camera must be synchronized with the light source to effectively capture images of high-speed moving objects.
The types of light sources used include halogen lamps, fluorescent lamps, and LED light sources. A comparison of their main performance characteristics is shown in the attached table.
4.2 Image Acquisition Card
An image acquisition card acts as a bridge between the camera and the computer for transmitting video signals. Currently, most cameras still output analog signals, while image acquisition cards convert various analog video signals into digital signals via A/D conversion and send them to the computer for processing, storage, and transmission.
The following aspects should be considered when selecting an image acquisition card:
(1) Video input format and data transfer rate
Most cameras use RS-422 or EIA-644 (LVDS) as their output signal format, so the image acquisition card must support the output format of the camera used in the system; for flexibility, a card supporting both is preferable. When a camera captures high-resolution images at high speed, its output data rate is high; such cameras typically output over multiple parallel signal channels, and the acquisition card must support both the multiple inputs and the camera's output rate.
(2) Data throughput
When the signal input rate of the image acquisition card is high, the bandwidth between the card and the image processing system must be considered. On a PC, the image acquisition card typically uses a PCI interface, whose theoretical peak bandwidth is 132 MB/s. In practice, however, the average transfer rate of the PCI bus on most computers is 50-90 MB/s, which may be insufficient during instantaneous transfer-rate peaks. To avoid data loss caused by contention with other PCI devices, the image acquisition card should have an onboard data buffer; under normal circumstances, 2 MB of onboard memory is sufficient for most tasks.
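The bandwidth check described above can be sketched as a quick calculation. The resolutions, frame rates, and the conservative 50 MB/s sustained PCI figure are illustrative assumptions:

```python
# Raw video data rate versus an assumed sustained PCI transfer rate.
def data_rate_mb(width, height, bytes_per_pixel, fps):
    """Required throughput in MB/s (1 MB = 1e6 bytes)."""
    return width * height * bytes_per_pixel * fps / 1e6

def fits_pci(width, height, bytes_per_pixel, fps, sustained_mb=50):
    """True if the camera's raw output fits the assumed sustained PCI rate."""
    return data_rate_mb(width, height, bytes_per_pixel, fps) <= sustained_mb

# A 640x480, 8-bit, 60 fps camera needs ~18.4 MB/s and fits comfortably;
# a 1024x1024, 8-bit, 100 fps camera needs ~104.9 MB/s and does not.
```

Running such an estimate before selecting hardware shows immediately whether onboard buffering alone is enough or a faster bus is required.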
(3) Digital I/O control
In machine vision systems, input/output control is crucial. The camera's shooting time is often determined by the processing requirements. If a resettable camera is used, a reset signal needs to be generated. In some systems, a pixel clock generator is required to set the shooting frame rate. External synchronization refers to using the same synchronization signal to ensure video signal synchronization between different video devices. This ensures that the video signals output by different devices have the same frame start and end times. To achieve external synchronization, a composite synchronization signal or composite video signal needs to be input to the camera. If the image acquisition card already has digital I/O capabilities and can generate the gating, triggering, and other electronic signals required by the camera and other electronic equipment, it is very useful for the system; otherwise, a separate digital I/O card will be needed.
4.3 PC Platform
In this system, the PC platform receives images output from the image acquisition card, which are then preprocessed, analyzed, and identified by the image processing software to determine the quality of the empty bottles. Finally, the results are sent to the PLC. Since both the image acquisition card and the image processing software consume significant system resources, a high-performance industrial PC should be selected as the PC platform to ensure fast and stable system operation.
4.4 Control Unit
This system uses a PLC as the underlying controller. Through its I/O ports, the PLC connects to the photoelectric sensor, the encoder, the ejector, and the image acquisition subsystem; it signals the image acquisition subsystem to trigger the CCD camera's shooting and directly controls the ejector's operation. The PLC also connects to an industrial computer via an RS-485 bus to receive control information and system parameters from the computer.
During system operation, the PLC is responsible for promptly notifying the image acquisition subsystem to activate the CCD camera and capture images of empty bottles at the shooting position. To achieve this, a photoelectric sensor is needed to detect the bottle's location. The system uses a reflective photoelectric sensor, which outputs a trigger signal when it does not receive a beam of light reflected from a reflector. The photoelectric sensor is installed near the CCD camera's shooting position, and its output is connected to the PLC's I/O input. When no empty bottle is passing by, the photoelectric sensor receives the reflected beam and outputs no signal. However, when an empty bottle passes by, the photoelectric sensor cannot receive the reflected beam and thus outputs a trigger signal. Upon receiving this signal from the input, the PLC determines that the empty bottle has reached the shooting position and outputs a start signal from the I/O output to the image acquisition system, activating the CCD camera. The camera then promptly captures an image of the detected empty bottle.
After the acquired image information is analyzed and processed by a dedicated information processing module, a conclusion is drawn on whether the empty bottle is of acceptable quality. If it is unacceptable, the main industrial control computer sends a control command via the RS-485 bus, requesting the PLC to control the ejector to reject the bottle. Upon receiving the ejection command, the PLC must mark the unacceptable bottle and track its position, activating the ejector when the bottle reaches the ejection position. To determine that position, an encoder is coupled to the motor driving the conveyor belt; as the motor rotates, the encoder emits pulses, and counting these pulses gives the distance traveled by the conveyor. Thus, if the distance a rejected bottle must travel to reach the ejection position is known, it can be ejected accurately. The encoder's pulse output is connected to the PLC's I/O input port in advance. Then, during calibration, an empty bottle is placed on the conveyor and allowed to pass through the detection position and the ejection position in sequence; the PLC uses a counter to record the number of pulses during this process, and this count corresponds to the distance between the detection and ejection positions.
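The encoder-based tracking described above can be sketched as follows. The calibrated pulse count and the queue-based bookkeeping are illustrative assumptions, not the article's actual PLC program:

```python
# Sketch of PLC-style reject tracking using encoder pulses.
# CALIBRATED_PULSES stands in for the count recorded during calibration,
# i.e. the pulses between the detection position and the ejection position.
CALIBRATED_PULSES = 250  # assumed calibration result

class RejectTracker:
    def __init__(self, distance_pulses):
        self.distance_pulses = distance_pulses
        self.pending = []  # encoder counts at which bad bottles were flagged

    def flag_reject(self, current_count):
        """Called when the vision system reports a bad bottle at the camera."""
        self.pending.append(current_count)

    def should_eject(self, current_count):
        """Polled on each encoder pulse; True when the oldest flagged
        bottle has travelled the calibrated distance to the ejector."""
        if self.pending and current_count - self.pending[0] >= self.distance_pulses:
            self.pending.pop(0)
            return True
        return False
```

A bottle flagged at encoder count 100 would then trigger the ejector once the count reaches 350, independently of conveyor speed, which is the point of counting pulses rather than timing.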
5 Vision Processing Software
5.1 Focus of Vision Processing Software Development
Vision processing software is an important component of PC-based machine vision systems. It primarily analyzes, processes, and recognizes images to identify specific target features. Developing vision processing software is extremely complex; starting from the ground up often requires a long development cycle, and self-written software rarely meets requirements in terms of speed and stability.
To meet the needs of system integrators and end users, image acquisition card manufacturers have developed image processing software packages for their products. This allows integrators to focus on application-level development, performing secondary development on top of these packages and saving development cost. Therefore, when selecting an image acquisition card for a complete machine vision system, the choice should be based both on the functions the system must perform and on the functionality of the software package provided by the card's manufacturer.
5.2 Image Acquisition Software Package Functions
(1) Edge finding function
Edge detection is one of the most basic and commonly used tools in image processing. By detecting edges, the target in the captured image can be distinguished from the background, reducing the number of pixels to be processed and improving the software's processing speed.
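As a minimal sketch of edge detection, the classic Sobel operator (one common choice; the article does not specify a particular algorithm) can be applied to a grayscale image stored as a list of rows:

```python
# Minimal Sobel edge-magnitude sketch on a grayscale image stored as a
# list of lists; a real system would use an optimized library instead.
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]  # border pixels stay 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out
```

A vertical black-to-white step produces a strong response along the boundary and zero response in flat regions, which is exactly the separation of target from background that the text describes.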
(2) Target positioning function
When empty bottles being tested pass through the camera's shooting area at high speed on the production line, due to the instability of the production line and errors in shooting time, each empty bottle will appear in different areas of the captured image. The target positioning function allows the region of interest (ROI) in the processing software to change as the workpiece's position in the image changes, always remaining located at the key part of the workpiece.
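One simple way to realize such target positioning, assumed here purely for illustration, is a column-projection search: sum each column's intensity, find the columns occupied by the bottle, and center the ROI on them:

```python
# Projection-based target positioning sketch: locate the object by the
# columns whose summed intensity exceeds a threshold, then centre a
# fixed-width region of interest (ROI) on it. Threshold is illustrative.
def locate_roi(img, threshold, roi_width):
    col_sums = [sum(row[x] for row in img) for x in range(len(img[0]))]
    occupied = [x for x, s in enumerate(col_sums) if s > threshold]
    if not occupied:
        return None  # no bottle in the frame
    centre = (occupied[0] + occupied[-1]) // 2
    left = max(0, centre - roi_width // 2)
    return (left, left + roi_width)
```

Because the ROI is recomputed per frame, it follows the bottle even when timing jitter shifts the bottle's position in the image, as the text requires.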
(3) Image preprocessing function
Preprocessing functions include binarization, edge sharpening, and contrast adjustment. Appropriate preprocessing algorithms highlight the target image, improve the speed of image analysis, and simplify the analysis process, making them essential functions.
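A minimal binarization sketch, assuming a global mean threshold when none is supplied (production systems often use Otsu's method or a calibrated fixed threshold instead):

```python
# Global binarization: pixels at or above the threshold become 1
# (foreground), others 0. The mean-based default is one simple choice.
def binarize(img, threshold=None):
    pixels = [p for row in img for p in row]
    if threshold is None:
        threshold = sum(pixels) / len(pixels)  # global mean as fallback
    return [[1 if p >= threshold else 0 for p in row] for row in img]
```

Reducing the image to two levels in this way is what makes subsequent steps such as edge finding and blob measurement fast and simple.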
(4) Character reading function (OCR)
This feature is particularly important for vision systems that are primarily used for reading various characters.
(5) Interface functions
The software package should be able to easily interface with other software or controls and work together.
6 Conclusion
PC-based machine vision systems are characterized by high speed, high precision, and high automation. Integrating advanced sensor, computer, digital image processing, and machine vision technologies, they can be widely applied in industrial manufacturing, electronics and semiconductors, packaging, agriculture, pharmaceuticals, and beer production, significantly raising the automation level of existing production lines, ensuring product quality, and increasing production efficiency. However, machine vision research in China started relatively late, and the domestic market currently relies mainly on imported equipment. With the continuous improvement of productivity and the increasing automation of factories, the application prospects of this technology are very broad. Only through in-depth research and exploration in both theory and practical technology can the gap with advanced foreign technology be narrowed and a foothold in the domestic machine vision market be secured. Finally, although the light source can be selected according to requirements during system design, in most cases choosing an LED light source is the trend.