Abstract
A high-precision vision measurement system based on image processing offers advantages such as non-contact operation, real-time performance, flexibility, and accuracy. The system consists of a machine tool, light source, CCD camera, image acquisition card, grating ruler, grating ruler reading card, motors, motion control card, and PC. First, controlling the light source lays a solid foundation for obtaining good image quality. The CCD converts the acquired light signal into an electrical signal; the image acquisition card then captures the image of the object being measured and stores it in the PC, completing image acquisition. Next, image processing technology, spatial geometric calculations, motion control, and the acquisition and processing of grating data are used to obtain the geometric dimensions of the object and to detect the physical quantities to be measured. The entire system enables high-precision measurement of objects with simple mouse operations, a simple and effective approach that frees operators from tedious, complex, and heavy workloads.
This paper mainly studies the working principle of the image measuring instrument, as well as the overall system design and measurement principles and processes. It also introduces the key technologies used in image measurement. Finally, it analyzes the main error sources in image measurement and explains the causes and elimination methods for various errors.
Keywords: image measurement, system composition, measurement technology, error analysis
1 Introduction
1.1 Research Objectives and Significance of Image Measurement Systems
Modern science and technology are rapidly advancing towards the micro and ultra-precision fields, moving from the millimeter and micrometer scales to the nanometer scale. Improvements in industrial manufacturing technologies and processing techniques place higher demands on detection methods, speeds, and accuracy. High-precision measurement technology is the foundation and prerequisite for industrial development; the accuracy and efficiency of measurement, to a certain extent, determine the level of development in manufacturing and even science and technology. However, existing detection methods (such as calipers and microscopes) struggle to balance speed and accuracy, necessitating the search for a new detection technology to solve this problem, thus giving rise to computer vision inspection technology. With the ever-expanding demand for high-precision measuring instruments, the domestic market for such instruments has also seen significant development. However, in the image measurement industry, domestic development is limited to hardware production; the core software is entirely sourced from abroad.
2. Working principle of machine vision image measuring instrument
2.1 Definition of Machine Vision
Computer vision is the science of making machines “see”. More specifically, machine vision uses cameras and computers in place of human eyes to identify, track, and measure targets, and then performs further image processing so that the result is better suited to human observation or to transmission to inspection instruments [1].
2.2 Working principle of the image measuring instrument
Image measurement is a measurement method that uses images as the means of detection and information transmission when measuring an object; its purpose is to extract useful signals from the image. Image measurement based on image analysis, as the name suggests, centers on analyzing the image. The basic concepts of image and digital image are described below. There is no single, precise definition of the term "image." Generally speaking, an image is the visual impression formed by light emitted or reflected from an object. Because computers can only process digital information, an image cannot be processed directly by a computer; it must first be converted into digital form, becoming a digital image—that is, the image must be digitized.
A typical image measurement system mainly consists of six parts: a light source, a machine tool, a CCD camera, an image acquisition card, a motion control system, and a PC, as shown in Figure 2-1. These parts are combined to accomplish high-precision image inspection tasks in various environments.
Figure 2-1 Composition of the image measurement system
First, the workpiece to be measured is placed on the worktable. The motion control program is started, and the motion control card controls the movement of the x, y, and z axes to achieve the appropriate positions. The image of the workpiece is then clearly displayed on the CCD. The CCD converts the acquired light signal into an electrical signal, which is then acquired by the image acquisition card and sent to the PC. Next, image processing technology, spatial geometric calculations, motion control, and the acquisition and processing of grating data are used to obtain the geometric dimensions of the workpiece and the physical quantities to be measured. Finally, the measurement software completes the measurement work, obtaining the desired parameters and completing the measurement process. The measurement process is shown in Figure 2-2.
Figure 2-2 Measurement process of the image measuring instrument
Shape and size visual inspection is a new application of visual inspection technology in the field of measurement. Compared with traditional shape and size inspection technology, machine vision-based inspection technology has the following advantages:
1) Improved image quality: digital image processing can apply a variety of operations to the acquired image.
2) Improved measurement accuracy: raising the resolution of image acquisition devices such as cameras, or adjusting the magnification of the optical lens, yields more abundant and more accurate image information.
3) The ability to measure geometric quantities that are difficult to measure with traditional methods.
4) Support for high-precision calibration and error correction of the imaging system.
5) A high degree of automation.
3. Overall Scheme Design of Machine Vision Image Measuring Instrument
3.1 System Overview
The detection system designed in this project mainly consists of five parts: a frame platform, an image acquisition system, a light source control system, a motion control system, and an image processing system. The organic integration of these systems enables high-precision image detection tasks under various environments. The system structure diagram is shown in Figure 3-1.
Figure 3-1 Overall System Schematic Diagram
This inspection system is a high-precision vision inspection system based on image processing. The system first controls the upper and lower light sources, whose 12 rings are grouped into six sections that can be adjusted individually or in combination; controlling the rings separately yields the best imaging conditions. The CCD converts the light signal from the object into an analog electrical signal and feeds it to the image acquisition card, which performs A/D conversion and dynamically transfers the image signal into the memory of the host computer. The image of the object under test in PC memory is then displayed in the video area of the measurement system. High-precision measurement of tiny objects is achieved by combining the XYZ three-axis movement of the machine tool, the changing readings of the grating data, the position of the mouse click point, and geometric calculations.
3.2 Composition of the image measurement system
3.2.1 Machine Tool
The machine tool consists of a marble base, a motion platform, and a vertical support beam. The heavy marble base provides excellent shock absorption when the motion platform or lens moves, effectively ensuring the consistency of the workpiece's position on the moving platform. The motion platform has a glass layer in the middle, under which a lower light source is placed. The platform moves in the X and Y directions, and grating rulers are installed on both sides of the platform for reading the movement position. The lens, CCD, and upper light source are mounted on the vertical support beam and can move up and down along the Z-axis.
3.2.2 Image Acquisition System
An image acquisition system consists of a CCD camera and an image acquisition card. The CCD works by projecting an image of the scene onto its photosensitive surface through an optical imaging system. Through the photoelectric effect, the photosensitive surface converts the light reflected from the object into a quantity of charge carriers proportional to the brightness. Within a clock cycle, under the action of a transfer pulse, the CCD device transfers the electrons collected at its gate into the shift register of the C1800C ...
Figure 3-2: Schematic diagram of the imaging principle of a vision system based on CCD and lens
Figure 3-3 CCD and Image Acquisition Card Workflow
In the image acquisition card, the video image signal is routed through a multiplexer, decoder, and A/D converter into the data buffer. After cropping, scaling, and data-format conversion, internal control logic handles graphics overlay and data transmission. The destination of the transmitted data is determined by software and can be either video memory or computer memory.
3.2.3 Light Source Control System
The lighting environment has a significant impact on the quality of acquired images. Different working environments, such as indoors, outdoors, and under sunlight or artificial light, result in varying levels of illumination. Even under the same lighting conditions, differences in the distance between the CCD and the object being measured, as well as variations in lens magnification, will lead to different exposure results.
The light source control system consists of upper and lower light sources and a light source control card. The function of the light source control system is to optimize the illumination conditions of the object being measured by controlling the light sources. The principle of the light source control system is that the effect of the light source on the object being measured is reflected in the image acquisition system and then displayed on the display system. Manual adjustments to the light source control system are then made to achieve the best display image. During fully automatic measurement, the system can adjust according to the position of the object being measured, based on a saved script. Figure 3-4 shows the schematic diagram of the light source control system.
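The feedback idea described above — display the image, judge the illumination, adjust the rings — can be sketched in a few lines. This is an illustrative sketch, not the instrument's actual control code: the mean gray level stands in for the real illumination evaluation, and the target, gain, and intensity range are assumed values.

```python
# Illustrative sketch of one light-source feedback step: a proportional
# controller nudges the intensity of each light ring toward a target
# mean gray level measured from the acquired image.

def adjust_rings(intensities, mean_gray, target=128.0, gain=0.2,
                 lo=0.0, hi=255.0):
    """Return new ring intensities after one feedback step."""
    error = target - mean_gray          # positive -> image too dark
    step = gain * error
    return [max(lo, min(hi, i + step)) for i in intensities]

# One adjustment step: the image is too dark (mean gray 100), so all
# twelve rings are brightened a little.
rings = [120.0] * 12
rings = adjust_rings(rings, mean_gray=100.0)
```

In the real system this step would run inside the manual-adjustment or scripted fully-automatic loop described above, with the mean gray level recomputed after each change.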
Figure 3-4: Schematic diagram of the light source control system
3.2.4 Motion Control System
The motion control system consists of gratings, motors, a grating reading card, and a motion control card. The horizontal motion of the object being measured is synthesized from the platform's movements in the X and Y directions, while the vertical motion of the CCD and the upper light source is driven by a vertical motor. Grating rulers are installed in the X, Y, and Z directions, and the machine tool's position along these three axes is obtained through the grating reading card. The purpose of the motion control system is to provide three-dimensional motion control of the detection platform: it reads the grating position through the grating ruler reading card, sends the data to the computer for processing, and then drives the motors through the motion control card to move the machine tool. The schematic diagram of the motion control system is shown in Figure 3-5.
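The closed loop described above — read the grating, compare with the target, command the motor, repeat — can be sketched for a single axis. The `Axis` class and its methods are hypothetical stand-ins for the grating reading card and motion control card interfaces, which are not specified in the text; both are simulated so the loop logic runs stand-alone.

```python
# Hypothetical closed-loop positioning sketch for one axis.

class Axis:
    def __init__(self, position=0.0):
        self.position = position        # mm, as read from the grating ruler

    def read_grating(self):
        return self.position

    def move(self, step):
        self.position += step           # motor moves; grating tracks it

def move_to(axis, target, tol=0.001, max_iters=1000):
    """Drive the axis until the grating reading is within tol of target."""
    for _ in range(max_iters):
        err = target - axis.read_grating()
        if abs(err) <= tol:
            return axis.read_grating()
        axis.move(0.5 * err)            # proportional step toward target
    raise RuntimeError("axis failed to settle")

x_axis = Axis(0.0)
final = move_to(x_axis, target=25.0)    # command the X axis to 25 mm
```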
Figure 3-5 Schematic diagram of motion control system
3.2.5 Data Processing System
The data processing system mainly consists of a computer. Digital image signals acquired by a CCD camera and image acquisition card are transmitted to the computer's memory. Depending on different needs, the image information is processed, such as sharpness evaluation, lighting environment evaluation, image generation and analysis, etc. Based on the feedback from the processing, operations such as light source control, motion control, autofocus, and image processing are performed. The principle diagram of data processing is shown in Figure 3-6:
Figure 3-6 Data Processing System
3.3 Key Technologies of Image Measurement
Key technologies in image measurement systems mainly include image acquisition, high-precision system calibration, image feature extraction, and high-precision edge localization. Many key technologies affect the final results of a vision measurement system; this section briefly discusses image acquisition, image processing, system calibration, autofocus, and edge detection technologies within image measurement systems.
3.3.1 Image Acquisition
The visual images seen by the eye are continuous, while computers can only process discrete data. Therefore, the image measuring instrument first converts the continuous image function into a discrete data set; this process is called image digitization [28]. Digitizing an image produces a two-dimensional matrix in the computer, and involves three steps: scanning, sampling, and quantization. Scanning traverses the image in a fixed order, such as row-priority (raster) order; the pixel is the smallest addressable unit in this traversal, so the resulting grid is also called the scanning grid. Sampling measures the gray value at each pixel position during the traversal, yielding the gray value of each unit; it is usually performed by photoelectric sensor devices. Quantization converts the sampled gray values into discrete integer values through analog-to-digital conversion. Digital image acquisition is carried out by the image acquisition system: after imaging, sampling, and quantization, a digital image is obtained. Image acquisition thus converts the visual appearance and internal features of the measured object into a series of discrete data that the computer can process; it consists of three parts: illumination, image focusing and formation, and determination of the image to form the camera's output signal. Scanning an image over a rectangular grid generates a two-dimensional integer matrix corresponding to the image: the position of each element is determined by the scanning order, and each element's value is the sampled, quantized integer gray value of the corresponding pixel.
Therefore, the result of image acquisition is the digitization of a continuous image of nature to ultimately obtain a digital image.
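The scan–sample–quantize process described above can be illustrated with a short sketch. The continuous brightness function and the grid spacing below are invented for demonstration; only the three-step structure mirrors the text.

```python
# Scanning, sampling, and quantization: a continuous image function
# f(x, y) is sampled on a raster grid and each sample is quantized to
# an 8-bit integer gray value, producing the two-dimensional matrix.
import math

def f(x, y):
    # A made-up continuous brightness function with values in [0, 1].
    return 0.5 + 0.5 * math.sin(x) * math.cos(y)

def digitize(f, rows, cols, levels=256):
    """Sample f on a rows x cols grid; quantize to integer gray values."""
    image = []
    for r in range(rows):                   # scanning: raster order
        row = []
        for c in range(cols):
            sample = f(c * 0.1, r * 0.1)    # sampling
            gray = min(levels - 1, int(sample * levels))  # quantization
            row.append(gray)
        image.append(row)
    return image

img = digitize(f, rows=4, cols=6)
```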
3.3.2 Image Processing
In image measurement systems, image information processing mainly relies on image processing methods, including image filtering, image enhancement, edge extraction, thinning, feature extraction, and image recognition and understanding. After these processing steps, the quality of the output image is significantly improved and the image edges are more distinct, which not only improves the visual effect of the image but also facilitates computer analysis, processing, and recognition. The image processing component is the core of the entire measurement software and largely determines the measurement accuracy. As the accuracy requirements of industrial inspection and other applications increase, pixel-level accuracy can no longer meet actual measurement requirements, so higher-precision edge subdivision algorithms—subpixel algorithms—are needed [2]. Studies have shown that with different subpixel subdivision calculations, edge localization can reach 0.1 or even 0.01 pixels. This shows that using software to improve measurement accuracy is simple and effective, which is why image measurement software algorithms are receiving increasing attention.
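As a toy illustration of the filtering-then-edge-extraction chain, the following sketch runs a moving-average filter and a central-difference gradient over a 1-D gray-level profile. Real systems work on 2-D images with 2-D filters; the profile values here are made up.

```python
# Smoothing followed by edge extraction on a 1-D gray-level profile.

def mean_filter(signal, k=1):
    """Simple moving-average filter (window of 2k+1) for noise removal."""
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - k): i + k + 1]
        out.append(sum(window) / len(window))
    return out

def gradient(signal):
    """Central-difference gradient; large magnitude marks an edge."""
    return [
        (signal[min(i + 1, len(signal) - 1)] -
         signal[max(i - 1, 0)]) / 2.0
        for i in range(len(signal))
    ]

profile = [10, 10, 12, 11, 10, 80, 200, 200, 199, 200]  # noisy step edge
smoothed = mean_filter(profile)
grad = gradient(smoothed)
edge_at = max(range(len(grad)), key=lambda i: abs(grad[i]))
```

The gradient peak lands on the step between the dark and bright regions; the subpixel refinement discussed in Section 3.3.5 sharpens exactly this kind of pixel-level estimate.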
3.3.3 System Calibration
Calibration of image measurement systems is crucial. Camera calibration is the process of determining the transformation relationship between the three-dimensional object space coordinate system and the two-dimensional coordinate system of the camera image, as well as the camera's internal and external parameters. High-precision measurement systems require high-precision calibration parameters. Since lens distortion is inevitable during imaging, and the assumptions of the pinhole projection model also contain imaging errors, finding a simple yet sufficiently accurate camera calibration method is a key factor in the accuracy of visual measurements. Therefore, high precision and high efficiency are fundamental requirements for calibration methods.
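Full camera calibration (pinhole model plus distortion parameters) requires real calibration images, but the simplest calibration step in 2-D image measurement — estimating the mm-per-pixel scale factor from gauges of known length — can be sketched as a least-squares fit. The gauge values below are illustrative, not taken from the thesis.

```python
# Least-squares estimate of the scale factor k (mm per pixel) that
# minimizes sum((k * p - t)^2) over reference measurements, where p is
# a measured pixel length and t the corresponding true length in mm.

def fit_scale(pixel_lengths, true_lengths_mm):
    num = sum(p * t for p, t in zip(pixel_lengths, true_lengths_mm))
    den = sum(p * p for p in pixel_lengths)
    return num / den

# Three measurements of reference gauges (illustrative values).
pixels = [1000.0, 2001.0, 499.0]
mm     = [10.0, 20.0, 5.0]
k = fit_scale(pixels, mm)               # roughly 0.01 mm per pixel
length_mm = k * 1500.0                  # convert a new pixel measurement
```

A high-precision instrument would go further — calibrating lens distortion and the full camera model — but every measurement ultimately passes through a pixel-to-physical-unit conversion of this kind.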
3.3.4 Autofocus
In image measurement systems, the quality of the acquired image has a significant impact on the measurement and inspection results. Under fixed light-source conditions and a fixed external working environment, maintaining the best imaging distance between the lens and the object under inspection is extremely important [30]. A precise autofocus system is therefore a critical part of subsequent image measurement.
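An autofocus system needs a sharpness score that peaks at the best imaging distance. One common, simple choice — an assumption here, since the thesis does not specify its focus criterion — is the summed squared difference between neighboring pixels: a crisp edge scores high, a blurred one low.

```python
# A simple focus measure: sum of squared horizontal pixel differences.
# Autofocus would sweep the Z axis and keep the position maximizing it.

def sharpness(image):
    return sum(
        (row[c + 1] - row[c]) ** 2
        for row in image
        for c in range(len(row) - 1)
    )

sharp_img   = [[0, 0, 255, 255]] * 4    # crisp step edge
blurred_img = [[0, 85, 170, 255]] * 4   # same edge, smeared out

score_sharp = sharpness(sharp_img)
score_blurred = sharpness(blurred_img)
```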
3.3.5 Subpixel edge detection technology
With the increasing precision requirements of applications such as industrial inspection, pixel-level accuracy is no longer sufficient for practical measurement needs. Therefore, higher-precision edge extraction algorithms are required, leading to sub-pixel edge detection algorithms. Over the past two decades, in the field of optical measurement digital image processing, many researchers have attempted to solve the problem of high-precision target localization in images using software processing methods. If a software method can locate feature targets in an image at the sub-pixel level, it is equivalent to improving the accuracy of the measurement system. For example, when the algorithm's accuracy is 0.1 pixels, it is equivalent to a tenfold increase in the hardware resolution of the measurement system. Therefore, high-precision target localization in images has become one of the most important aspects of improving the accuracy of optical measurement. This sub-pixel localization technology has significant theoretical and practical implications and is one of the important and distinctive technologies in optical measurement digital image analysis.
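One widely used subpixel refinement — an illustrative choice, not necessarily the algorithm studied in this thesis — fits a parabola through the gradient magnitude at the peak pixel and its two neighbors, and takes the parabola's vertex as the subpixel edge position.

```python
# Parabolic (three-point) subpixel peak interpolation: given gradient
# magnitudes and the integer index i of the peak, return a fractional
# index refined to subpixel precision.

def subpixel_peak(values, i):
    a, b, c = values[i - 1], values[i], values[i + 1]
    denom = a - 2 * b + c
    if denom == 0:                      # flat top: no refinement possible
        return float(i)
    return i + 0.5 * (a - c) / denom    # vertex of the fitted parabola

# Gradient profile with its pixel-level peak at index 2, skewed right:
edge = subpixel_peak([1.0, 3.0, 9.0, 7.0, 2.0], 2)
```

Because the right neighbor is larger than the left, the refined position lands slightly past pixel 2 — the kind of sub-pixel shift that, at 0.1-pixel accuracy, multiplies the effective hardware resolution tenfold as the text notes.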
4. System Error Analysis of the Image Measurement Instrument
4.1 Comprehensive Analysis of Error Sources
The error of an image measuring instrument refers to the inherent error of the instrument itself. Once the instrument has been manufactured, its error is essentially fixed under the specified operating conditions.
In order to effectively analyze the accuracy of this image measuring instrument, it is necessary to first analyze and summarize the various error sources of the instrument's accuracy, especially the main errors that affect the accuracy of the measuring instrument, and then grasp their changing patterns, and finally find ways to control them and further reduce their impact on the measurement accuracy of the instrument.
The errors in this instrument are multifaceted, potentially arising at the various stages of its design, manufacturing, and use. These are referred to here as the principle error, manufacturing error, and operational error of the image measuring instrument. Because they arise at different stages, their patterns of variation differ: mathematically, principle errors are mostly systematic errors, while manufacturing and operational errors are mostly random errors.
Through detailed research and analysis of the image measurement process, we know that the sources of error include: errors caused by CCD camera distortion; errors caused by the guiding mechanism; errors caused by the measurement environment, mainly the influence of temperature; variations in error magnitude due to different measurement methods, mainly referring to different choices of image processing algorithms; and dynamic errors. The overall error classification of the image measuring instrument is shown in Figure 4-1.
Of all the aforementioned errors, the one that affects the image measuring instrument the most is undoubtedly the principle error (including the error caused by CCD camera distortion and the variation in error magnitude caused by different measurement methods). This error is the key factor affecting the accuracy of the measuring instrument, and therefore it is the focus of this paper's analysis and research.
Figure 4-1 Error classification of image measuring instruments
4.2 Principle error of the image measuring instrument
The principle error arises from the use of approximate theories, mathematical models, mechanisms, and measurement control circuits in the instrument design. It is only related to the instrument's design and not to its manufacturing or use. According to the instrument's design principles, the approach to analyzing the principle error of an image measuring instrument is to compare the actual relationships between the various components of the instrument with the theoretical relationships used in the design and calculation. If there are differences, then a principle error exists.
There are three main types of errors in the principle of image measuring instruments: First, the impact of different image processing algorithms on the accuracy of the image measuring instrument. Second, the optical error of the CCD camera, mainly referring to the lens distortion caused by the optical lens of the CCD camera, which affects the geometric accuracy of the image. Third, errors arising from different measurement methods.
4.2.1 Impact of Image Processing Algorithms on Accuracy
Three-dimensional scenes are projected onto the retina as continuous analog images. Since computers can only process discrete data, the analog image function must be converted into a discrete data set before the computer can process it. A CCD camera consists of many photosensitive pixels; after receiving incident light, these pixels undergo charge transfer, producing an output voltage proportional to the input light intensity—the analog electrical signal. Digitizing devices are circuit components that convert analog signals into digital signals; in an image measuring instrument, the digitizing device is the image acquisition card, which converts the analog video signal into a digital image. The target image acquired by the image acquisition card is sent to computer memory or saved to the hard drive. To facilitate subsequent processing, the acquired image must be preprocessed; however, various kinds of noise inevitably arise during imaging, affecting the results and thus the accuracy of the measuring instrument, so noise removal is essential during image processing. Edges are a fundamental feature of an image, reflecting the contours of objects or the boundaries between different surfaces of an object. Edge contours are an important cue by which humans recognize the shapes of objects and are also an important processing object in image processing (the processing flow is shown in Figure 4-2). Edge extraction is required during image processing, and digital image processing offers many different edge extraction methods; choosing different methods produces significant changes in the extracted edge position of the same measured object, thereby affecting the final measurement results.
Figure 4-2 Image Processing Workflow
1) Digital quantization: Converting the original video image into a digital signal, i.e., grayscale, that a computer can recognize.
2) Pixel smoothing: The purpose is to eliminate various parasitic effects that may occur during image transmission and quantization, while also minimizing the blurring of image edges and lines to facilitate subsequent processing.
3) Gray-scale statistics: count the number of pixels at each gray level within the image's known range, locating the high-gray-level pixels and providing raw data for subsequent steps.
4) Set threshold: based on the statistical gray-level information, set an appropriate gray-level threshold that retains the gray points on the contour light band and removes the remaining low-gray points; the threshold should be chosen according to the width of the light band and the illumination.
5) Remove impurities: Eliminate other high grayscale impurities besides the contour light band to ensure that image processing can proceed normally.
6) Contour Line Extraction: This is a crucial step in image processing, directly impacting the accuracy of the measuring instrument. Its purpose is to accurately extract the contour lines of the workpiece to obtain the desired measurement data and provide effective support. For example, to measure the length and width of a rectangular workpiece, effectively extracting the contour lines is essential to obtaining relatively accurate length and width values.
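Steps 3)–5) above (gray-scale statistics, thresholding, impurity removal) can be sketched on a tiny gray-level image. The fixed threshold and the naive row-based impurity removal are simplifications for illustration; as step 4) notes, a real threshold is tuned to the light-band width and illumination.

```python
# Gray-scale statistics, thresholding, and crude impurity removal.

def histogram(image, levels=256):
    hist = [0] * levels
    for row in image:
        for v in row:
            hist[v] += 1
    return hist

def threshold(image, t):
    """Keep bright (contour light-band) pixels, drop the rest."""
    return [[1 if v >= t else 0 for v in row] for row in image]

def remove_small_rows(mask, min_count=2):
    """Crude impurity removal: clear rows with fewer than min_count hits."""
    return [row if sum(row) >= min_count else [0] * len(row)
            for row in mask]

img = [
    [10,  12, 240, 241, 11],
    [ 9, 238, 239, 237, 10],
    [11,  10,  12, 250,  9],    # lone bright pixel: an impurity
]
hist = histogram(img)
mask = remove_small_rows(threshold(img, t=128))
```

The surviving bright pixels in `mask` would then feed step 6), contour line extraction.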
4.2.2 Optical Errors of CCD Cameras
In the research and application of computer vision, the instruments or equipment used generally employ optical lenses composed of multiple lens elements. In analysis, these optical systems are treated as following the ideal pinhole imaging model, which introduces model errors. As a result, the two-dimensional images exhibit varying degrees of nonlinear deformation, usually called geometric distortion. In addition to geometric distortion, other factors contribute, such as instability in the camera imaging process and the quantization error caused by limited image resolution. There is therefore a complex nonlinear relationship between a spatial point and its actual image point on the camera image plane. The main distortion errors fall into three categories: radial distortion, eccentric (decentering) distortion, and thin prism distortion [31]. The first produces only radial position deviation, while the latter two produce both radial and tangential deviation. Figure 4-3 shows the relationship between the position of the ideal, distortion-free image point and the position of the actual, distorted image point.
Figure 4-3 Ideal image points vs. actual image points    Figure 4-4 Radial distortion
1) Radial distortion: changes in the radial curvature of the optical lens are the main cause of radial distortion. This distortion moves image points radially, and it grows with distance from the image center. Positive radial distortion moves points away from the image center (an increased scale factor); negative radial distortion moves points toward the image center (a decreased scale factor), producing pincushion and barrel distortion respectively, as shown in Figure 4-4. The mathematical model is given by formula (4-1).
2) Eccentric distortion: Due to assembly errors, the optical axes of multiple optical lenses that make up the optical system cannot be completely collinear, which causes eccentric deformation. This deformation is composed of radial deformation components and tangential deformation components, and its mathematical model is represented by formula (4-2).
3) Thin prism distortion: Thin prism distortion refers to image distortion caused by manufacturing errors of optical lenses and imaging sensitive arrays, such as a small tilt angle between the lens and the image plane of the camera. This type of distortion is equivalent to adding a thin prism to the optical system. Its distortion includes radial distortion components and tangential distortion components. The mathematical model is represented by formula (4-3).
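Formulas (4-1)–(4-3) are not reproduced in this excerpt. For reference, the standard forms of the three models from the photogrammetry literature (the Brown model — an assumption about the forms the thesis uses) are, with (x, y) the distorted image coordinates and r² = x² + y²:

```latex
% Standard distortion models (assumed forms for formulas (4-1)-(4-3);
% not reproduced from the thesis itself). (x, y) are distorted image
% coordinates and r^2 = x^2 + y^2.
\begin{align}
\delta x_r &= x\,(k_1 r^2 + k_2 r^4), &
\delta y_r &= y\,(k_1 r^2 + k_2 r^4) \tag{4-1}\\
\delta x_d &= p_1 (3x^2 + y^2) + 2 p_2 x y, &
\delta y_d &= 2 p_1 x y + p_2 (x^2 + 3y^2) \tag{4-2}\\
\delta x_p &= s_1 r^2, &
\delta y_p &= s_2 r^2 \tag{4-3}
\end{align}
```

Here k₁, k₂ are radial coefficients, p₁, p₂ decentering coefficients, and s₁, s₂ thin-prism coefficients; note that (4-2) and (4-3) contain both radial and tangential components, as the text states.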
All three types of nonlinear distortion are present in images captured through optical lenses; the overall nonlinear distortion of an image is their superposition, which allows a nonlinear distortion model to be established in the image coordinate system. The ideal image point coordinates (Xu, Yu) on the image plane equal the actual image point coordinates (Xd, Yd) plus the sum of the three distortion components, i.e. Xu = Xd + δxr + δxd + δxp and Yu = Yd + δyr + δyd + δyp.
Of all the distortions, radial distortion generally has a larger impact on the measurement results, while the other two have a relatively smaller impact.
4.2.3 Errors arising from different measurement methods
The errors caused by different measurement methods mainly refer to the recognition and quantization errors brought about by different image processing techniques. The edge of the image is a basic feature of the image, which is the reflection of the outline of the object or the boundary between different surfaces of the object in the image. The edge outline is an important factor for humans to recognize the shape of the object, and it is also an important processing object in image processing. Edge extraction is required in the process of image processing, and there are many different edge extraction methods in digital image processing technology. Choosing different extraction methods will produce significant changes in the edge position of the same measured part, thus affecting the final measurement result. For example, when measuring the radius and center of a circular workpiece, when the outline of the circle changes, its radius value and center position will change accordingly [32]. It can be seen that in the process of image processing, the image processing algorithm has a very important influence on the measurement accuracy of the instrument, which is the focus of image measurement.
4.3 Manufacturing errors of the image measuring instrument
Manufacturing errors in instruments refer to errors in dimensions, shape, relative positions, and other parameters caused by imperfect manufacturing and assembly of the instrument's parts and components. Manufacturing errors all stem from imperfections in the manufacturing process. Image measuring instruments inevitably accumulate many such errors during manufacturing, which affects their accuracy. For example, clearance in the fit between inner and outer dimensions can cause skew errors in linear motion and radial runout errors in rotary motion; the out-of-roundness of shafts and sleeves can cause rotational errors in the shaft system; and surface waviness and roughness can affect the smoothness of motion.
Therefore, manufacturing errors account for a significant proportion of all instrument errors. However, it is important to note that not all manufacturing errors affect the accuracy of an instrument. Generally, only manufacturing errors related to instrument accuracy are studied; these are also known as original errors.
For image measuring instruments, the manufacturing errors affecting accuracy mainly include the linear motion positioning errors within the mechanism errors. Image measuring instruments are orthogonal-coordinate measuring instruments, which have three nominally mutually perpendicular axes—the X, Y, and Z axes—and three moving parts that travel along them, giving the CCD three-dimensional linear motion relative to the workpiece being measured. The displacement of this motion is read from the grating rulers mounted along the three axes. Owing to imperfections in the manufacture and assembly of the mechanism, the actual displacement of each moving part inevitably deviates from its nominal value; this error is often called the linear motion positioning error. In image measurement of small workpieces, the entire measurement process, from CCD calibration to final data output, is completed in a static state; since no motion is involved, the mechanism error has no impact on the measurement results and can be disregarded. For larger workpieces, however, the CCD's field of view may not cover the whole measurement: for example, if the distance between two points is too large, the measuring stage must be moved in both the X and Y directions to obtain the desired result. Because the motion axes are involved in this process, the mechanism error then has a certain impact on the final result.
Furthermore, because the measuring platform of the instrument is a large square plate of frosted glass, the workpiece must be placed on the platform within the CCD camera's field of view during measurement. The levelness of the platform and the parallelism of the CCD camera's lens to the horizontal plane therefore also affect the measurement results. If the lens is tilted relative to the horizontal plane (as shown in Figure 4-3), the workpiece on the platform and the lens form an angle, and the resulting image deviates from the actual object. This should be avoided as far as possible during manufacture and installation. First, select frosted glass with excellent flatness for the measuring platform, and treat the levelness of its support as a key performance requirement. Second, when installing the CCD camera, treat the levelness of its lens as a critical task, adjusting it repeatedly to reduce the error.
Figure 4-3 Schematic diagram of camera installation error
Let us briefly analyze the magnitude of this error. When the measuring platform and the CCD camera lens form an angle θ, geometry yields the following error formula:
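The formula itself did not survive reproduction. A plausible reconstruction from the geometry described, assuming a feature of true length $L$ lying in the platform plane is foreshortened by the tilt angle $\theta$ of the lens, is:

$$\Delta L = L - L\cos\theta = L\,(1 - \cos\theta)$$

For $\theta = 0.5^{\circ}$ this gives $1 - \cos 0.5^{\circ} \approx 3.8 \times 10^{-5}$, so a 100 mm feature would read only about 3.8 μm short, consistent with the statement that this error is very small.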
If the levelness of the measuring platform and the installation of the CCD camera are both good, the angle between them will be within 0.5°, and the resulting error is very small.
Manufacturing errors in instruments are difficult to avoid. Besides improving machining accuracy and assembly quality during manufacture, appropriate measures should also be taken during design to control them. Specific methods are as follows:
1) Rationally allocate and determine manufacturing errors. Based on the instrument's overall accuracy index, allocate errors correctly among the major links of the measurement and control process, and rationally determine the permissible manufacturing error of each link in the structural design. This is of great significance for ensuring and improving the accuracy of the instrument.
2) Correctly apply instrument design principles and design rules, such as the error averaging principle, compensation principle, Abbe principle, and minimum deformation principle, to minimize the impact of manufacturing errors on the instrument's accuracy.
3) Determine the instrument's structural parameters appropriately. While ensuring the instrument's functionality and performance, select its structural parameters with the goal of minimizing the impact of manufacturing errors on instrument accuracy.
4) Ensure good structural manufacturability. Good manufacturability facilitates machining and assembly, making manufacturing precision easier to guarantee. The structural design should follow the principle of unified datum surfaces, and the datums chosen during design should fully consider the feasibility and reliability of machining and assembly.
5) Set up appropriate adjustment and compensation mechanisms. Appropriate adjustment and compensation mechanisms can effectively reduce the impact of manufacturing errors on instrument accuracy.
4.4 Operating error of the image measuring instrument
Errors that occur during the use of an instrument are called operational errors. These include errors caused by force deformation, wear and clearance, temperature deformation, vibration, and interference.
The operational errors that affect the accuracy of an image measuring instrument include errors caused by temperature, errors caused by interference and fluctuations in the circuit, and wear.
1) Temperature-induced errors arise because changes in temperature alter the dimensions, shape, relative positions, and key characteristic parameters of the image measuring instrument's components, thereby affecting its accuracy. Temperature variations can also change electrical parameters and instrument characteristics, producing temperature sensitivity drift and zero-point drift.
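The dimensional effect of temperature on a length reading can be compensated with the linear expansion model ΔL = αLΔT. A small sketch, assuming a typical expansion coefficient for a glass grating scale and a 20 °C reference temperature (both values are illustrative, not taken from this instrument's specification):

```python
def thermal_corrected_length(raw_mm, temp_c, alpha_per_c=8e-6, ref_temp_c=20.0):
    """Correct a grating-scale reading to the reference temperature.

    Uses the linear expansion model dL = alpha * L * dT. The default
    alpha is an assumed typical coefficient for a glass scale, not a
    value from this instrument's datasheet.
    """
    dt = temp_c - ref_temp_c
    # At temp_c the scale's graduations have spread by a factor
    # (1 + alpha * dt), so the indicated length underreads and must be
    # scaled back up to the reference temperature.
    return raw_mm * (1.0 + alpha_per_c * dt)
```

For a 100 mm reading taken at 25 °C, the correction is 100 × 8e-6 × 5 = 4 μm, which shows why temperature control matters most for long measurements.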
2) Errors caused by interference and environmental fluctuations. Interference has two aspects: first, interference from the electromagnetic fields and electrical sparks of external equipment; second, internal interference between circuits at different current levels and coupling through ground wires and power supplies. Environmental fluctuations are variations in ambient temperature, humidity, atmospheric pressure, air-supply pressure, and the supply voltage of the instrument's electrical equipment during use. All of these can cause measurement errors.
When using an image measuring instrument, the refraction, scattering, and diffraction of optical elements in the CCD camera cause stray light to enter the main optical path, affecting the imaging of the object being measured and ultimately producing errors. However, these errors are negligible and therefore not within the scope of our research.
Another scenario is that after calibration, voltage fluctuations or human error can affect the brightness of the upper and lower light sources of the image measuring instrument, causing uneven system illumination. This results in shadows along the image edges after the CCD camera captures the image, leading to errors in image edge extraction. Therefore, we strive to select locations with stable voltage and ensure adequate brightness of the upper and lower light sources to measure various parameters of the workpiece. Even in the event of significant voltage fluctuations, the operator can avoid this error by waiting for the voltage to stabilize before measuring the workpiece.
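A simple software guard against this situation is to check brightness uniformity before accepting an image. The sketch below compares the mean brightness of the four corner regions of a grayscale image; the image representation (a list of pixel rows) and the corner-block heuristic are illustrative choices, not part of the instrument's software:

```python
def illumination_uniformity(image):
    """Rough uniformity check on a grayscale image (list of rows, values 0-255).

    Returns the ratio of the dimmest to the brightest mean brightness
    among the four corner regions; values well below 1.0 suggest the
    uneven lighting described above, so measurement should wait until
    the light source stabilises.
    """
    h, w = len(image), len(image[0])
    bh, bw = max(1, h // 4), max(1, w // 4)  # corner block size

    def block_mean(r0, c0):
        vals = [image[r][c] for r in range(r0, r0 + bh)
                for c in range(c0, c0 + bw)]
        return sum(vals) / len(vals)

    corners = [block_mean(0, 0), block_mean(0, w - bw),
               block_mean(h - bh, 0), block_mean(h - bh, w - bw)]
    return min(corners) / max(corners)
```

A perfectly even image returns 1.0; an image whose left half is half as bright as its right returns 0.5, flagging the shadowing that corrupts edge extraction.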
3) Wear. Wear produces dimensional, shape, and positional errors in the parts of the image measuring instrument, increases the clearance between parts, and reduces the stability of its working accuracy. Wear is closely related to friction, and the surface profiles left by different machining methods also influence how it progresses. After prolonged operation, the clearances along the X, Y, and Z axes increase, reducing working accuracy. This is especially serious for the CCD camera mounted on the Z axis: clearance in the Z axis changes the camera's angle and thus affects the image information finally obtained.
Although operational errors have only a small effect on the final measurement results, they arise during use of the image measuring instrument, so the operator must understand the conditions under which they occur in order to reduce them effectively.
An effective way to eliminate this error is to increase the number of times the image measuring instrument is calibrated, since measurement is essentially a comparison process. Frequent calibration effectively eliminates operational errors and improves the instrument's accuracy.
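Calibration as comparison reduces to a one-line computation: image a standard of certified length and divide by the number of pixels it spans to refresh the pixel equivalent. A minimal sketch (the function name and interface are illustrative):

```python
def pixel_scale_mm(gauge_length_mm, measured_pixels):
    """Pixel equivalent (mm per pixel) from a standard of known length.

    Re-running this after each warm-up or at regular intervals absorbs
    the slow drifts discussed above, since every later measurement is a
    comparison against this freshly measured scale factor.
    """
    if measured_pixels <= 0:
        raise ValueError("pixel count must be positive")
    return gauge_length_mm / measured_pixels
```

For instance, a 10 mm gauge block spanning 2000 pixels gives a pixel equivalent of 0.005 mm/pixel; repeating the capture and averaging the resulting scale factors further suppresses random error.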
5. Conclusions and Outlook
5.1 Work Summary
The increasing demand for product precision is an inevitable trend in modern industrial development. Traditional and modern testing methods suffer from drawbacks such as low efficiency, poor accuracy, high cost, and limited flexibility. Therefore, developing a highly efficient, accurate, low-cost, and flexible testing system has become an urgent need.
This paper has studied the composition of the image measuring instrument together with its measurement principles and measurement process, and has systematically analyzed the various factors affecting its accuracy. The main conclusions and contributions of this paper are as follows:
1) Reviewed the domestic and international state of computer vision inspection technology and the characteristics of computer vision inspection.
2) Systematically surveyed the applications of computer vision inspection in various fields of practical life and outlined its great prospects for the future.
3) Analyzed the basic principles of vision inspection along with its structure, characteristics, and advantages, and systematically described the specific working principles of the machine stage, image acquisition system, motion control system, and data processing system, as well as the necessity of research on vision inspection and its key technologies.
4) Systematically analyzed all possible error sources of the image measuring instrument, classified them by the stage at which they arise into principle errors, manufacturing errors, and operating errors, and proposed specific measures to eliminate or reduce each error.