
Machine vision system analysis and the impact of shutter stains and scratches

2026-04-06 05:43:16 · #1

Machine vision is an important branch of computer science that integrates optics, mechanics, electronics, and computer hardware and software, and it draws on fields such as image processing, pattern recognition, artificial intelligence, signal processing, and opto-mechatronics. In the more than 20 years since its inception, its functions and applications have steadily improved and expanded alongside industrial automation. In particular, the rapid development of digital image sensors (CMOS and CCD cameras), embedded technologies such as DSP, FPGA, and ARM, and image processing and pattern recognition has greatly propelled machine vision forward.

In short, machine vision uses machines to perform measurements and judgments in place of the human eye. On a production line, human inspectors are prone to errors from fatigue and individual differences, whereas machines work tirelessly and consistently. A machine vision system generally includes a lighting system, a lens, a camera system, and an image processing system. For each application, we must consider the required operating and image-processing speed, whether to use a color or monochrome camera, whether to measure the target's dimensions or detect its defects, and the required field of view, resolution, and contrast. Functionally, a typical machine vision system divides into an image acquisition section, an image processing section, and a motion control section.

The main working process of a complete machine vision system is as follows:

1. The workpiece positioning detector detects that the object has moved close to the center of the camera system's field of view and sends a trigger pulse to the image acquisition part.

2. The image acquisition unit sends start pulses to the camera and lighting system respectively according to the pre-set program and delay.

3. The camera aborts its current scan and begins a new frame scan; alternatively, if the camera was idle waiting for the start pulse, it begins a frame scan when the pulse arrives.

4. Before the camera starts scanning a new frame, the exposure mechanism is opened; the exposure time can be preset.

5. Another start pulse turns on the lighting, and the lighting time should match the camera's exposure time.

6. After the camera is exposed, the scanning and output of one frame of image officially begins.

7. The image acquisition section receives analog video signals and digitizes them via an A/D converter, or directly receives digital video data digitized from the camera.

8. The image acquisition section stores digital images in the processor or computer's memory.

9. The processor processes, analyzes, and identifies the image to obtain measurement results or logic control values.

10. The processing results are used to control the operation of the production line, perform positioning, correct motion errors, and so on.
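The ten steps above can be sketched as a single acquisition/processing cycle. The object names below (detector, camera, light, processor, controller) are hypothetical stand-ins for real hardware drivers, not any specific vendor's API:

```python
def run_inspection_cycle(detector, camera, light, processor, controller):
    """One pass through steps 1-10 above. All five objects are
    hypothetical stubs standing in for real hardware drivers."""
    detector.wait_for_workpiece()          # 1. positioning detector fires a trigger
    camera.arm()                           # 2-4. trigger pulse, preset exposure opens
    light.strobe(camera.exposure_time)     # 5. lighting pulse matches exposure time
    frame = camera.grab_frame()            # 6-8. scan, digitize, store the frame
    result = processor.analyze(frame)      # 9. measure / classify the image
    controller.apply(result)               # 10. drive the production line
    return result
```

In a real system each call would block on, or be driven by, the hardware's own trigger and interrupt mechanisms.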

As can be seen from the above workflow, machine vision is a relatively complex system. Because most systems monitor moving objects, the matching and coordination between the system and these moving objects is particularly important, thus imposing strict requirements on the action time and processing speed of each part of the system. In certain application areas, such as robotics and guided aerial vehicles, there are also strict requirements on the weight, size, and power consumption of the entire system or a part thereof.

The advantages of machine vision systems include:

1. Non-contact measurement does not cause any damage to either the observer or the observed, thereby improving the reliability of the system.

2. It has a wide spectral response range, for example, by using infrared measurements that are invisible to the human eye, thus expanding the visual range of the human eye.

3. Long-term stable operation: Humans find it difficult to observe the same object for a long time, while machine vision can perform measurement, analysis, and recognition tasks for extended periods.

Machine vision systems are being applied in an increasingly wide range of fields. They have been widely used in industries such as manufacturing, agriculture, defense, transportation, healthcare, finance, and even sports and entertainment, and can be said to have penetrated into all aspects of our lives, production, and work.

Inspection applications may use either continuous feeding or intermittent feeding, in which the target object pauses for a period of time to be inspected. In either case, it is necessary to know the achievable inspection speed and the maximum number of targets that can be inspected per minute. These figures can be calculated from the processing speed of the vision system.

The calculation method is as follows:

Maximum number of inspections per minute = 60 (sec) ÷ vision system processing time (sec)

For example, if the vision system's processing time is 20 ms:

Maximum number of inspections per minute = 60 sec ÷ 0.02 sec = 3,000 times/min (= 50 times/sec)

However, actual processing speed will vary depending on the camera type and detection settings of the vision system. While most simple applications can run at 20ms, it's best to test the detection performance with a real target object in practical applications.

If there are specific requirements for the processing speed of the vision system in a particular application, the following calculation method can be used to obtain the result:

Required processing time (ms) for the vision system = 1 (sec) ÷ required number of detections (times/sec) × 1000
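Both throughput formulas are easy to check in code. This sketch reproduces the 20 ms example above (function names are illustrative):

```python
def max_inspections_per_minute(processing_time_s: float) -> float:
    """Maximum inspections per minute = 60 s / processing time (s)."""
    return 60.0 / processing_time_s

def required_processing_time_ms(detections_per_sec: float) -> float:
    """Required processing time (ms) = 1 s / required detections per second x 1000."""
    return 1.0 / detections_per_sec * 1000.0
```

For a 20 ms cycle, `max_inspections_per_minute(0.02)` gives 3,000/min; conversely, a requirement of 50 detections/sec demands a processing time of at most 20 ms.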

In practical applications, when the target object is continuously moving within the camera's field of view, the camera shutter speed must also be considered; otherwise, the image will be blurred and fail to meet the detection requirements. For example, when a camera captures electronic components on a continuously moving production line, if the shutter speed (exposure time) is not fast enough for the line speed, the image will be blurry. To prevent blurring, the shutter speed should be set so that the object moves no more than 1/10 of the required tolerance during the exposure. The figure below compares images of a target object moving continuously within the camera's field of view.

[Figure: image captured with a high-speed shutter vs. a low-speed shutter]

How to calculate camera shutter speed:

Shutter speed [sec] = (Required tolerance [mm] ÷ 10) ÷ Production line speed [mm/sec]

Example: Detection tolerance = 0.2 mm

Production line speed = 200 mm/sec.

Shutter speed = 0.2 mm ÷ 10 ÷ 200 mm/sec = 1/10,000 sec

Therefore, the ideal shutter speed for this application is 1/10,000 second or faster.
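The shutter-speed rule of thumb can be wrapped in a small helper that reproduces the example above (the function name and 1/10 factor follow the text; tune per application):

```python
def max_shutter_time_s(tolerance_mm: float, line_speed_mm_s: float) -> float:
    """Slowest acceptable shutter (exposure) time in seconds.
    Rule of thumb: the object may move at most one tenth of the
    required tolerance while the shutter is open."""
    return (tolerance_mm / 10.0) / line_speed_mm_s
```

With a 0.2 mm tolerance and a 200 mm/sec line, this returns 0.0001 s, i.e. 1/10,000 second.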

If the vision system has a fast processing speed, inspection on a high-speed production line is certainly feasible. So, how long does a typical size inspection process take? This inspection time varies greatly depending on the processing power of the vision system and the settings for individual applications. The table below provides a baseline estimate of the time required to capture and process images (reference values), which users can refer to based on their specific applications.

Defect detection, dirt detection, and chip inspection are all typical applications of machine vision systems. Depending on the workpiece and production line conditions, each type of inspection requires different functionalities. This article provides a brief introduction to the principles and usage of machine vision-based dirt detection tools.

1. Segments

Vision systems detect changes in intensity data as blemishes or edges using CCD image sensors. However, processing each pixel individually is time-consuming, and noise can affect the detection results. Therefore, vision systems use the average intensity of a small region consisting of several pixels, called a "segment," and detect blemishes by comparing the average intensity of these segments.

As shown in the image above, the average intensity of a segment (4×4 pixels) is compared with that of the surrounding area; the red segment in the image is flagged as containing a blemish.
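A minimal sketch of this segment comparison, assuming non-overlapping 4×4 segments and an illustrative threshold (neither the function names nor the threshold come from any specific product):

```python
import numpy as np

def segment_means(img: np.ndarray, seg: int = 4) -> np.ndarray:
    """Average intensity of non-overlapping seg x seg segments."""
    h, w = img.shape
    img = img[:h - h % seg, :w - w % seg].astype(float)
    return img.reshape(h // seg, seg, w // seg, seg).mean(axis=(1, 3))

def flag_blemished_segments(img, seg=4, thresh=20):
    """Flag segments whose mean intensity differs from the mean of
    their 8 surrounding segments by more than `thresh`."""
    m = segment_means(img, seg)
    padded = np.pad(m, 1, mode="edge")
    # sum of the 3x3 neighbourhood of each segment, minus the centre,
    # gives the mean of the 8 surrounding segments
    neigh = (sum(padded[1 + dy:padded.shape[0] - 1 + dy,
                        1 + dx:padded.shape[1] - 1 + dx]
                 for dy in (-1, 0, 1) for dx in (-1, 0, 1)) - m) / 8.0
    return np.abs(m - neigh) > thresh
```

Averaging over segments both speeds up processing and suppresses single-pixel noise, as the text notes.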

2. The algorithm of the stain detection tool (segment comparison and calculation method)

Detection principle:

(1) When the X direction is specified as the detection direction:

The stain detection tool measures the average intensity of a specified area (segment) and shifts the segment position in steps of one-quarter of the segment size.

It determines the difference between the maximum and minimum intensity across four consecutive segments, including the standard segment (① in the figure below). This difference is taken as the stain level of the standard segment.

When a stain level exceeds the preset threshold, the standard segment is judged to be a stain. The number of times the threshold is exceeded within the tested area is called the "stain range". This process is repeated, continuously shifting the position of the standard segment across the tested area.
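Along one scan direction, the procedure can be sketched as follows (a 1-D simplification with illustrative names; a real tool scans a 2-D area):

```python
import numpy as np

def stain_levels_1d(profile: np.ndarray, seg: int = 8) -> np.ndarray:
    """Stain level along one direction: segment means are sampled at
    quarter-segment steps, and each standard segment's stain level is
    the max-min spread over four consecutive segment positions."""
    step = seg // 4
    means = np.array([profile[i:i + seg].mean()
                      for i in range(0, len(profile) - seg + 1, step)])
    return np.array([means[i:i + 4].max() - means[i:i + 4].min()
                     for i in range(len(means) - 3)])
```

On a flat intensity profile the stain level is zero; a local dip or bump raises the max-min spread of the overlapping segments, which is then compared against the preset threshold.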

(2) Detecting stains on circular workpieces:

Various circular workpieces, such as PET bottles, bearings, or O-rings, require visual inspection of circular areas. When searching for a circular area, the program simultaneously performs a polar coordinate transformation. To detect stains, it converts the circular window (inspection segment) into a rectangle and compares the segment's intensity in both the circular and radial directions.
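The polar-coordinate transformation can be sketched with plain numpy nearest-neighbour sampling (a library routine such as OpenCV's `warpPolar` does the same job; all names below are illustrative):

```python
import numpy as np

def unwrap_annulus(img, center, r_in, r_out, n_theta=360, n_r=None):
    """Map an annular inspection region to a rectangle
    (rows = radius, cols = angle) via nearest-neighbour sampling,
    so stains can be compared in the circumferential and radial
    directions as on a straight part."""
    if n_r is None:
        n_r = int(r_out - r_in)
    cy, cx = center
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    radii = np.linspace(r_in, r_out, n_r, endpoint=False)
    ys = (cy + radii[:, None] * np.sin(thetas)[None, :]).round().astype(int)
    xs = (cx + radii[:, None] * np.cos(thetas)[None, :]).round().astype(int)
    return img[ys.clip(0, img.shape[0] - 1), xs.clip(0, img.shape[1] - 1)]
```

After unwrapping, a circular feature becomes a horizontal band in the rectangle, and the same segment-comparison logic described above applies directly.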

The basic analysis process of machine vision scratch detection consists of two steps: first, determining whether there are scratches on the surface of the product being inspected; and second, after confirming that scratches exist on the image being analyzed, extracting the scratches.

Scratch detection is a common problem in industrial production. Many parts of industrial equipment operate in high-temperature and high-pressure environments, are subjected to complex loads, and are used in harsh environments, resulting in high failure rates and serious consequences. Therefore, visual inspection of defects, fatigue cracks, and their propagation in related parts is particularly important.

Scratches can generally be divided into three main categories:

The first type of scratch involves subtle changes in grayscale values, resulting in a relatively uniform grayscale across the entire image. The scratch area is also small, consisting of only a few pixels, and its grayscale is only slightly lower than the surrounding image, making it difficult to distinguish. A mean filter can be applied to the original image to obtain a smoother image. This smoother image can then be subtracted from the original image. If the absolute value of the difference exceeds a threshold, that image is designated as a target. All targets are then marked, their areas are calculated, and targets with excessively small areas are removed. The remaining targets are then marked as scratches.
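The mean-filter / difference / area-filter pipeline for this first type can be sketched as follows. This is a minimal illustration with assumed thresholds; real applications must tune `diff_thresh`, `min_area`, and the filter size `k`:

```python
import numpy as np
from collections import deque

def detect_faint_scratches(img, diff_thresh=10, min_area=5, k=7):
    """First-type scratches: smooth with a k x k mean filter, subtract
    the smoothed image from the original, threshold the absolute
    difference, then drop connected blobs smaller than min_area."""
    img = img.astype(float)
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    # k x k mean filter built from summed shifts (fine for small k)
    smooth = sum(padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                 for dy in range(k) for dx in range(k)) / (k * k)
    mask = np.abs(img - smooth) > diff_thresh
    # keep only 4-connected components with area >= min_area
    out = np.zeros_like(mask)
    seen = np.zeros_like(mask)
    for sy, sx in zip(*np.nonzero(mask)):
        if seen[sy, sx]:
            continue
        comp, q = [], deque([(sy, sx)])
        seen[sy, sx] = True
        while q:
            y, x = q.popleft()
            comp.append((y, x))
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    q.append((ny, nx))
        if len(comp) >= min_area:
            for y, x in comp:
                out[y, x] = True
    return out
```

The area filter is what separates genuine elongated scratches from isolated noise pixels, which also survive the difference threshold.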

The second type of scratch has significant differences in grayscale across its parts and is typically elongated. If a fixed threshold is used for segmentation on an image, the marked defect portion will be smaller than the actual area. Because the scratches in these images are long and narrow, relying solely on grayscale detection will miss the extended parts of the defect. For this type of image, a method combining dual thresholding and defect shape feature analysis is chosen based on its characteristics.
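One common form of dual thresholding is hysteresis segmentation: a high threshold seeds the defect, and a lower threshold recovers the faint ends of the elongated scratch only where they connect to a seed. This is a generic sketch, not the article's exact method, and shape-feature filtering would follow in practice:

```python
import numpy as np
from collections import deque

def dual_threshold(defect_map, strong, weak):
    """Hysteresis (dual-threshold) segmentation: pixels >= `strong`
    seed a defect; pixels >= `weak` are kept only if 4-connected to a
    seed, so the narrow tails of a long scratch are not missed."""
    weak_mask = defect_map >= weak
    out = np.zeros_like(weak_mask)
    q = deque(zip(*np.nonzero(defect_map >= strong)))
    for y, x in q:                       # mark all strong seeds
        out[y, x] = True
    while q:                             # grow seeds through weak pixels
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < out.shape[0] and 0 <= nx < out.shape[1]
                    and weak_mask[ny, nx] and not out[ny, nx]):
                out[ny, nx] = True
                q.append((ny, nx))
    return out
```

A single fixed threshold set at the `strong` level would recover only the darkest core of the scratch; the weak threshold extends it without also admitting isolated weak noise.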

The third type of scratch is relatively easy to identify: its appearance and grayscale differ clearly from the surrounding area, so the defects can be marked directly with a simple threshold segmentation.

Due to the diversity of images in industrial inspection, each image requires analysis and comprehensive consideration of various processing methods to achieve the desired effect. Generally, the grayscale value of the scratched area is darker than that of the surrounding normal area, meaning the grayscale value of the scratched area is smaller; moreover, it is mostly on smooth surfaces, so the overall grayscale variation of the image is very uniform and lacks texture features. Therefore, scratch detection generally uses statistical grayscale feature analysis or threshold segmentation methods to mark the scratched area.

