
Discussion on Machine Vision-Based Intelligent Driving Systems for Automobiles

2026-04-06 05:58:47
Abstract: This paper proposes an automobile intelligent steering system that uses binocular stereo machine vision as the main means of road environment perception. The feasibility of machine vision-based intelligent driving is analyzed, and the key technical challenges that urgently need to be addressed are identified.

Keywords: Machine Vision, Intelligent Steering, Stereo Matching, 3D Reconstruction

1. Introduction

With social development and population growth, automobiles are increasingly entering daily life, leading to ever more congested traffic. Driving is a complex activity, and prolonged driving easily causes fatigue, increasing the risk of traffic accidents. Furthermore, some engineering vehicles operate in harsh environments with high labor intensity, making automated driving essential. To simplify driving, improve its comfort, and free drivers from tedious and monotonous manual labor, countries worldwide are actively researching and developing autonomous driving technology; Germany, the United States, and Japan, among others, have made significant strides in this field. The School of Mechanical and Electrical Engineering at the National University of Defense Technology in China has also been conducting research on autonomous driving technology. In June 2003, the unmanned "Hongqi" CA7460, jointly developed with FAW Group Corporation, successfully completed its test drive in Changsha, Hunan Province. It achieved a stable speed of 130 km/h (compared with a maximum of 100 km/h in the United States and 120 km/h in Germany), a top speed of 170 km/h, and the ability to overtake safely. However, that system relies primarily on onboard radar, infrared rangefinders, and image sensors to identify and measure road conditions; the resulting road environment information is limited and cannot meet the requirements of intelligent driving.
Therefore, such systems are currently applicable only to highways in good condition and are unsuitable for the more challenging conditions of lower-grade roads and urban streets.

Vision is an important means by which humans observe and understand the world. About 75% of the information humans obtain from the outside world comes from the visual system, and for drivers the proportion is estimated at 90%. Among the environmental perception methods currently used in driver assistance, visual sensors obtain more accurate and richer road structure and environment information than ultrasound, lidar, and other technologies [5]. With the development of computer technology and the maturing of image processing and recognition, machine vision has made great progress and is now widely used in three-dimensional measurement, three-dimensional reconstruction, virtual reality, moving target detection, and target recognition. In autonomous driving, a prerequisite is recognizing road conditions and detecting the distance and speed of vehicles and obstacles; only once this problem is solved can the vehicle itself be controlled. Machine vision technology integrates exactly these three-dimensional measurement and image recognition capabilities. Research on machine vision for intelligent robots is currently in full swing: Klaus Fleischer et al. proposed machine-vision-based detection and tracking of stationary infrastructural objects beside inner-city roads [3]; D. Brzakovic et al. proposed road edge detection for mobile robot navigation [2]; O. Djekoune et al. proposed vision-guided mobile robot navigation using neural networks [4]. These results have important implications for applying machine vision to intelligent driving. This paper applies machine vision technology as the main means of road condition perception in vehicle autonomous driving, providing a different perspective on realizing intelligent driving.

2. Machine Vision Technology

Since Marr's visual computing theory was proposed, machine vision has developed rapidly; it is one of the fastest-developing technologies, and one of the main research directions, in the field of intelligent driving.

2.1 Basic Principles of Machine Vision

Obtaining the distance of each point in the scene relative to the camera is one of the important tasks of a stereo vision system; these distances can be represented by a depth map. A machine vision system relies on binocular (or multi-lens) CCDs to acquire two (or more) images from different spatial positions and generates a depth map from the disparity information and imaging geometry of these images [1] (as shown in Figure 1). This paper takes the relatively simple and commonly used binocular CCD vision system as an example; its geometric relationship is shown in the figure. It consists of two identical CCD cameras whose image planes lie in the same plane, whose coordinate axes are parallel, and whose x-axes coincide. The distance between the two lens centers along the x-axis is the baseline distance B.

[align=center] Figure 1. Geometric model of binocular stereo vision[/align]

In the figure, the projections of scene point P onto the left and right image planes are P[sub]l[/sub] and P[sub]r[/sub], respectively. Let the origin of the coordinate system coincide with the center of the left lens, let (x, y, z) be the coordinates of P, and let x[sub]l[/sub] and x[sub]r[/sub] be the image-plane x-coordinates of P[sub]l[/sub] and P[sub]r[/sub]. Comparing similar triangles PMC[sub]l[/sub] and P[sub]l[/sub]LC[sub]l[/sub] gives:

x / z = x[sub]l[/sub] / F (1)

Similarly, from similar triangles PNC[sub]r[/sub] and P[sub]r[/sub]RC[sub]r[/sub]:

(x − B) / z = x[sub]r[/sub] / F (2)

Combining the two equations:

z = B · F / (x[sub]l[/sub] − x[sub]r[/sub]) (3)

where F is the focal length and x[sub]l[/sub] − x[sub]r[/sub] is the disparity. From this derivation we can see that depth information for any scene point can be recovered by calculating its disparity.
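The depth-from-disparity relation of equation (3) can be sketched directly in code. This is a minimal illustration; the baseline, focal length, and pixel coordinates below are illustrative values, not taken from the paper:

```python
# Depth from disparity for a parallel-axis binocular rig, z = B * F / (x_l - x_r).
# Baseline and focal length here are illustrative, not real system parameters.

def depth_from_disparity(x_left, x_right, baseline_m, focal_px):
    """Return the depth z (metres) of a scene point from a conjugate pair.

    x_left, x_right: x-coordinates (pixels) of the projections of the same
    scene point in the left and right images; disparity d = x_left - x_right.
    """
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return baseline_m * focal_px / disparity

# Example: 0.3 m baseline, 700 px focal length, 21 px disparity -> 10 m depth.
z = depth_from_disparity(350.0, 329.0, baseline_m=0.3, focal_px=700.0)
print(round(z, 2))  # 10.0
```

Note that depth is inversely proportional to disparity, so ranging accuracy degrades for distant objects, where one pixel of matching error causes a large depth error.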
In machine vision systems, an important prerequisite for accurately calculating disparity is finding the conjugate pairs of projection points in the left and right images (the projections of the same scene point in different images are called a conjugate pair); this is the problem of stereo matching. There are three main types of matching methods: edge feature matching, region feature matching, and phase matching. Stereo matching is an important research direction in machine vision, with many useful results; O. Djekoune et al., for example, proposed an algorithm in [4] that uses neural networks to improve the speed and accuracy of stereo matching.

2.2 Application of machine vision technology in intelligent driving

To be applicable to intelligent driving, machine vision technology must have three characteristics: real-time performance, robustness, and practicality [7]. Real-time performance requires that the data processing of the machine vision system keep pace with the vehicle driving at high speed. Robustness requires that intelligent vehicles adapt well to different road types such as highways, urban roads, and ordinary roads; to complex road conditions involving varying width, color, texture, curves, slopes, potholes, obstacles, and traffic flow; and to weather conditions such as sunny, cloudy, rainy, snowy, and foggy. Practicality means that intelligent vehicles can be accepted by ordinary users [7]. At present, machine vision is mainly used for path recognition and tracking [7]. Compared with other sensors, machine vision offers rich detection information, non-contact measurement, and the ability to build three-dimensional models of the road environment, but the data processing volume is extremely large, and system real-time performance and stability remain problems.
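The region feature matching mentioned above can be illustrated by a minimal sum-of-absolute-differences (SAD) window search along an image row; the function and its parameters below are a hypothetical sketch, not one of the cited algorithms, and the exhaustive per-pixel search also hints at why the data processing volume is so large:

```python
import numpy as np

def sad_disparity(left, right, row, x, window=3, max_disp=16):
    """Estimate the disparity of pixel (row, x) of the left image by sliding
    a small window along the same row of the right image and minimizing the
    sum of absolute differences (a simple region feature matching scheme)."""
    h = window // 2
    patch = left[row - h:row + h + 1, x - h:x + h + 1].astype(float)
    best_d, best_cost = 0, float("inf")
    for d in range(max_disp + 1):
        if x - d - h < 0:          # candidate window would leave the image
            break
        cand = right[row - h:row + h + 1, x - d - h:x - d + h + 1].astype(float)
        cost = np.abs(patch - cand).sum()
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

# Tiny synthetic pair: the right image is the left shifted 2 px leftward,
# so every conjugate pair has disparity 2.
left = np.tile(np.arange(20.0), (9, 1))
right = np.roll(left, -2, axis=1)
print(sad_disparity(left, right, row=4, x=10))  # 2
```

Even this toy search performs a window comparison for every candidate disparity at every pixel, which is why real systems need fast hardware or smarter algorithms such as the neural-network approach of [4].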
Solving these problems requires high-performance computer hardware and new algorithms. With the rapid development of computer technology and image processing, three-dimensional reconstruction of the road environment can provide powerful information for high-speed intelligent driving and is practically feasible in the near future.

The basic principle of machine vision road recognition is that the gray values, textures, and optical flow of different elements of the road environment (white road markings, road edges, road surface color, potholes, obstacles, etc.) differ in the CCD image. Based on these differences, the required path information, such as orientation deviation, lateral deviation, and the vehicle's position in the road, can be obtained after image processing. Combining this information with the vehicle's dynamic equations yields a mathematical model of the vehicle control system.

3. Structural design of the intelligent driving system

(1) Machine vision system

The hardware of the machine vision system consists mainly of two CCD cameras of identical model, parameters, and performance, two identical video acquisition cards, and video processing software on the computer. Depth information is obtained by processing the images captured by the left and right CCD cameras. It is crucial that the signals from the two cameras be synchronized; otherwise the captured images will not correspond and the depth information cannot be extracted correctly. Therefore the left and right cameras are synchronized by feeding a frame synchronization signal from the left camera's frame synchronization circuit to the right camera's, ensuring that the two image channels are always synchronized.
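The gray-value difference that underlies road recognition can be sketched as a simple per-row thresholding that segments the road surface and reports its apparent left and right edges. The function, the fixed threshold, and the assumption that the road is brighter than its surroundings are all ours, for illustration only:

```python
import numpy as np

def road_edges_by_threshold(gray, threshold=128):
    """Binarize a grayscale road image and return, for each image row, the
    leftmost and rightmost columns classified as road surface, or None if
    the row contains no road pixels. Assumes the road appears brighter
    than its surroundings (illustrative assumption)."""
    binary = gray >= threshold              # True where "road"
    edges = []
    for row in binary:
        cols = np.flatnonzero(row)
        if cols.size:
            edges.append((int(cols[0]), int(cols[-1])))
        else:
            edges.append(None)
    return edges

# Synthetic frame: a bright "road" band in columns 3..8 of a dark image.
frame = np.full((4, 12), 40, dtype=np.uint8)
frame[:, 3:9] = 200
print(road_edges_by_threshold(frame)[0])  # (3, 8)
```

In a real system the edge columns per row, combined with the depth information from stereo matching, would give the metric road width, and the offset of the band's center from the image center would give the lateral deviation mentioned above.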
The machine vision processing software is primarily responsible for extracting key information: detection and recognition of obstacles, traffic signals, traffic patterns, road edges, curve curvature, the distance and speed of vehicles ahead, and road surface potholes and slopes. Based on this information, a 3D reconstruction of the road environment is performed. The road environment information produced by the vision software is fused with information from the multiple sensors of the auxiliary systems; combined with the vehicle dynamics model (many researchers are studying the application of fuzzy control and neural network technologies to vehicle dynamics models) and the vehicle's driving state parameters, the behavior decision-making and scheduling system makes reasonable decisions, and the path planning system then generates path plans and vehicle control commands to control the vehicle.

Road edge detection is crucial for correctly identifying the road, especially on low-grade roads lacking traffic markings. Our machine vision system detects road edge and width information. Edge detection binarizes the CCD-captured image to extract the road edges; width detection stereo-matches the road images from the two CCDs to extract depth information and then calculates the road width from machine vision theory. Data from the other sensors determine the vehicle's position and driving parameters, enabling reasonable path planning, optimized control, and path tracking that prevents the vehicle from leaving the road surface.

Recognition of traffic patterns, road signs, and traffic signals is also important. Traffic patterns include the common zebra crossings, lane lines, and arrows.
These patterns have fixed colors (zebra crossings, for example, are white) and shapes, so simple image processing and comparison against a pre-built traffic pattern model allows rapid recognition. Traffic sign recognition is more complex: some signs contain text, requiring not only image processing to extract the text but also analysis of the traffic information it conveys. Traffic signals, including traffic lights and traffic police signals, have fixed operating patterns that can be modeled in advance and then detected and identified using image processing together with information from other sensors.

Detecting the distance and speed of vehicles and obstacles ahead is likewise crucial for intelligent driving, and must be done accurately and safely. It is not enough simply to identify them; their speed, direction of movement, and distance from the vehicle must also be measured. From several consecutive measurements of distance, speed, and direction, their possible trajectories should be predicted, providing reliable data for overtaking, deceleration, obstacle avoidance, and risk reduction. Machine vision offers 3D motion detection methods based on model-based corresponding-point estimation and on optical flow; both families have many mature algorithms that ease system implementation.

Camera calibration in a machine vision system determines the camera's internal and external parameters and establishes a spatial imaging model, i.e., the correspondence between object points in the world coordinate system and their image points on the image plane. Calibration is divided into internal parameter calibration and external parameter calibration.
Internal parameters describe the camera's internal geometric and optical characteristics and do not change as the camera moves; external parameters describe the three-dimensional position and orientation of the camera's image plane relative to the world coordinate system and must be recalibrated after the camera moves. In this paper the cameras move with the vehicle, but the parameters we need are all internal, so only the internal parameters need to be calibrated in advance.

[align=center] Figure 2. Block diagram of the intelligent driving system[/align]

(2) Main control system

The core of the entire intelligent driving system is the main control system, which collects and identifies the information from the various sensors, processes it, and finally makes vehicle behavior decisions and scheduling, plans paths, and generates vehicle control commands based on the processed information. The design concept of the whole system is based on simulating human driving: the main control system is the car's brain, and the machine vision system is its eyes. If the main control system crashes or its control software is unstable, a major traffic accident could result, with loss of the vehicle and of life. The main control computer also operates in a harsh environment: vibration is severe when the car drives at high speed, and the temperature near the engine is high. To ensure safe and stable operation, the main control computer should therefore be a high-performance, high-stability industrial control computer.

(3) Auxiliary ranging and positioning system

This system mainly comprises the vehicle GPS positioning system, ranging radar, and an electronic map. With the development of traffic informatization, GIS-based electronic maps have begun to be used in daily driving.
A GIS-based electronic map contains rich, hierarchical geographic information from which the macroscopic driving route can be set. The vehicle GPS system then determines the geographic position of the car's current location; comparing it with the corresponding point on the electronic map tells us where the car is along the preset macroscopic route. This prevents the car, when driving autonomously, from taking a wrong intersection, heading in the wrong direction, or otherwise deviating from the preset route. Used together, the GIS-based electronic map and the vehicle GPS positioning system thus ensure that the car follows the macroscopic route we set.

The vehicle ranging radar mainly assists the machine vision system in measuring the speed and distance of vehicles ahead and the distance of road obstacles in special environments. Like human vision, a stereo vision system built from two CCDs suffers significantly reduced recognition of vehicles and obstacles under poor-visibility conditions such as overcast skies, heavy fog, and heavy rain; its distance and speed measurements lose accuracy, and it cannot measure vehicles or obstacles beyond the visibility range. The vehicle-mounted ranging radar assists the stereo vision system in good weather by improving the accuracy of distance and speed measurement, and in bad weather it compensates for the stereo system's shortcomings and improves the reliability of the whole system.
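Locating the car along the preset macroscopic route amounts to matching the GPS fix against the map's waypoints. A minimal sketch of such matching follows; the great-circle distance formula is standard, but the route coordinates, the GPS fix, and the function names are illustrative assumptions, not data from the paper:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude points,
    using the haversine formula with a mean Earth radius of 6371 km."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_waypoint(position, route):
    """Index of the route waypoint closest to the current GPS fix."""
    return min(range(len(route)),
               key=lambda i: haversine_m(*position, *route[i]))

# Illustrative macroscopic route (lat, lon waypoints) and a GPS fix.
route = [(28.20, 112.95), (28.21, 112.97), (28.23, 113.00)]
fix = (28.212, 112.972)
print(nearest_waypoint(fix, route))  # 1
```

Knowing the nearest waypoint tells the main control system which segment of the preset route the car is on, so deviation from the route can be detected as the distance to that segment growing beyond a threshold.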
(4) Vehicle driving parameter detection system

The main function of this system is to detect key parameters of the vehicle's driving state, such as front wheel angle, rear wheel speed, and throttle position, and to provide them to the main control system for decision-making, scheduling, and path planning. The system is relatively simple: the driving state parameters are already measured by the vehicle's own instruments and only need to be read into the main control system.

(5) Actuator system

The actuator of the automatic driving system is mainly an electro-hydraulic servo system consisting of multiple servo cylinders that perform the various driving actions. Imitating a human driver, it pushes and pulls the various control levers to carry out gear shifting, acceleration, deceleration, steering, parking, engine shutdown, and similar tasks. The core components controlling the electro-hydraulic servo system are the clutch ECU (electronic control unit), gear ECU, steering ECU, throttle ECU, and brake ECU. These five ECUs each receive and execute the control commands issued by the main control computer after its decision-making and scheduling calculations; execution mainly consists of the ECU amplifying the command signals and sending them to the electro-hydraulic servo system for mechanical actuation.

4. Conclusion

Machine vision for intelligent driving is a very complex technology that requires further detailed research. The current difficulties and key points are rapid and effective stereo matching, rapid three-dimensional reconstruction of the road environment, and the real-time performance of machine vision processing.
Many scientists have already conducted in-depth research in this area, and new research results will undoubtedly promote the application of machine vision technology in intelligent driving.

References

[1] Jia Yunde, Machine Vision, Science Press, 2000.
[2] D. Brzakovic, L. Hong, Road edge detection for mobile robot navigation, Proc. 1989 IEEE International Conference on Robotics and Automation, 14-19 May 1989, vol. 2, pp. 1143-1147.
[3] Klaus Fleischer, Hans-Hellmut Nagel, Machine-vision-based detection and tracking of stationary infrastructural objects beside inner-city roads, IEEE Intelligent Transportation Systems, 25-29 Aug. 2001, pp. 525-530.
[4] O. Djekoune, K. Achour, Vision-guided mobile robot navigation using neural network, Image and Signal Processing and Analysis, 19-21 June 2001, pp. 355-361.
[5] Wang Rongben, Li Bin, Chu Jiangwei, Ji Shouwen, Research on measuring the distance to the vehicle ahead based on vehicle-mounted monocular machine vision on highways, Highway Traffic Technology, Dec. 2001, vol. 18, no. 6.
[6] Qin Guihe, Ge Anlin, Lei Yulong, Intelligent transportation systems and their vehicle control technology, Automotive Engineering, 2001, vol. 23, no. 1.
[7] Gu Baiyuan, Wang Rongben, Chu Jiangwei, Experimental high-speed intelligent vehicle navigation technology, Robotics Technology and Application, 2002, no. 5.