Design and Implementation of a Vision-Based High-Speed Line-Following Robot
Abstract: To address the line-following speed requirements of certain robotics competitions, this paper proposes a design method for a high-speed vision-based line-following robot. The robot employs a low-resolution camera as its line-following sensor to extend the look-ahead (preview) distance. The core controller is a Freescale HCS12 16-bit microcontroller, whose on-chip A/D converter performs the video capture; through video processing and steering/speed control, the robot achieves high-speed line-following movement.

Keywords: mobile robot; single-chip microcontroller; vision; line-following

1. Vehicle Body Mechanical Design

To meet the speed requirement, a model racing-car chassis is used as the mechanical platform, with rear-wheel drive and front-wheel steering to allow high-speed cornering. Had a two-wheel layout with dual-motor differential steering been used instead, the demands on motor-synchronization control at high cornering speeds would have been very difficult to satisfy. The front wheels are steered by a servo motor; the rear axle is driven by a DC motor through a mechanical differential, which prevents slippage while turning. The installation positions of the main components are shown in Figure 1.

[align=center]Figure 1 Vehicle Body and Structural Diagram[/align]

The robot uses a camera as its line-finding sensor.
To give the camera a good forward field of view, it is mounted high at the front of the vehicle body, so that it captures sufficiently rich route information ahead of the vehicle to permit route prediction. This is the key reason the vision solution greatly outperforms photoelectric-sensor solutions in line-following speed.

2. Hardware Circuit Design

This section mainly introduces the microcontroller that serves as the core controller and the circuit structure of the video acquisition module, and briefly describes the hardware of the other modules. The overall system structure is shown in Figure 2.

[align=center]Figure 2 System Hardware Structure Design Diagram[/align]

2.1 Core Controller Design

Considering video acquisition, overall cost-effectiveness, and ease of installation, Freescale's 16-bit high-performance microcontroller MC9S12DG128 (hereinafter "S12") was selected as the core controller. Its instruction clock can reach 38 MHz, and its A/D converter can be clocked at up to 16 MHz for video acquisition. It also provides 8 PWM channels to drive the steering servo and the DC motor; 8 input-capture/output-compare channels to read encoder pulses for speed sensing; a serial communication interface for wireless debugging; and up to 64 I/O lines (through pin multiplexing), sufficient for status display and parameter setting. Its 128 KB of on-chip Flash eliminates the need for memory expansion: video data can be stored and retrieved entirely on-chip. As Figure 2 shows, the whole system is built around this one device, with no additional controllers or memory, making it a true "single-chip" system.

2.2 Video Acquisition Module

Because of the speed limit of the microcontroller's A/D converter, a low-resolution monochrome camera is required.
Low resolution leaves more time per image point within each scan line, and a monochrome signal needs only a single A/D channel for video acquisition. An OmniVision OV5116-based CMOS monochrome camera with a resolution of 320×240 and a 50 Hz refresh rate was selected. An LM1881 video sync separator extracts the horizontal and vertical synchronization signals from the composite video signal; these are fed to an input-capture channel of the S12, and the captured sync pulses trigger the A/D module to acquire and store the video data.

[align=center]Figure 3 Schematic diagram of the video acquisition circuit[/align]

2.3 Motor Control and Power Supply

A Mabuchi RS-380SH DC motor serves as the main drive motor, controlled by PWM through a Freescale MC33886 full-bridge driver whose two half-bridges allow forward and reverse rotation. Reverse drive is used not for backing up but mainly for braking the vehicle. When the motor switches between forward and reverse, the drive current rises sharply with the load, so the supply regulation must be strong enough to hold the system's operating voltage and keep the microcontroller from resetting. The system has several supply requirements: the microcontroller and steering servo run on 5 V, while the CMOS camera needs 6-9 V. For ease of development, a common 7.2 V rechargeable battery pack is therefore used, with a single 5 V regulator added to supply the 5 V rail.

3. Video Acquisition and Processing

This section focuses on implementing video acquisition and processing with the S12's on-chip A/D converter.

3.1 Video Acquisition

The standard operating clock of the S12's A/D converter is 2 MHz, and each conversion takes at least 14 clock cycles, so one acquisition requires 14/2 MHz = 7 µs.
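The conversion-time arithmetic can be sketched in a few lines of C. This is a rough estimate only: the helper names are hypothetical, a standard 64 µs PAL line period is assumed, and the fixed 14-cycle figure is the minimum quoted in the text.

```c
/* Hypothetical helpers illustrating the A/D timing arithmetic quoted in
 * the text; they are not from the original firmware.
 *
 * One S12 A/D conversion takes at least 14 ATD clock cycles, so the
 * conversion time is 14 / f_clk. */
static unsigned long long conversion_time_ns(unsigned long long atd_clock_hz)
{
    return 14ULL * 1000000000ULL / atd_clock_hz;
}

/* Number of complete conversions fitting into one video line,
 * assuming the standard 64 us (64000 ns) PAL line period. */
static unsigned long long samples_per_line(unsigned long long atd_clock_hz)
{
    return 64000ULL / conversion_time_ns(atd_clock_hz);
}
```

At 2 MHz this gives 7 µs per conversion and 9 points per line, matching the text; the same 14-cycle estimate at 16 MHz gives about 73 points, slightly fewer than the 77 quoted later, the difference presumably coming from the exact conversion-length configuration.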
According to the video transmission standard and the CMOS camera's parameters, the single-line scan time is about 64 µs (standard PAL line timing). Under the default clock, therefore, the A/D module can acquire only 9 video points per line, as shown in Figure 4.

[align=center]Figure 4 Video Acquisition Effect under 2MHz A/D Clock[/align]

This is obviously insufficient for line-finding control, so the A/D clock is raised eightfold to 16 MHz, making sampling eight times faster and allowing, in theory, 77 points per line. The actual acquisition result, with a resolution of 40×76 pixels, is shown in Figure 5; this is sufficient for the line-finding accuracy requirements. (Because of the high sampling density, several sample points in each line fall inside the horizontal blanking interval, i.e. the black bands on both sides of the image.)

[align=center]Figure 5 Video Acquisition and Video Processing Effect under 16MHz A/D Clock[/align]

3.2 Video Processing

Video processing extracts the position of the black line from the image. Since the image is simple, an edge-detection algorithm is used: the difference between each pair of adjacent points in a line is computed, and the magnitude and sign of the difference give the edge positions of the black line. The distance between the two edge positions gives the width of the "black line", which is used to filter out other interference. The processing result is shown in Figure 5. To save system resources, the system does not acquire every video line but selects 40 lines for acquisition, which still meets the line-following control requirements.
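The per-line edge-detection step just described can be sketched as follows. This is a minimal illustration, not the original firmware: the function name, the difference threshold, and the width limits are hypothetical placeholders.

```c
#include <stddef.h>

#define EDGE_THRESHOLD 40   /* hypothetical grey-level difference threshold */
#define MIN_WIDTH 2         /* hypothetical plausibility limits for the    */
#define MAX_WIDTH 12        /* black line's width, in pixels               */

/* Scan one video line for a falling edge (bright -> dark) followed by a
 * rising edge (dark -> bright).  The edge pair gives the black line's
 * width; a width check filters out narrow or wide interference.
 * Returns the centre column of the black line in `row` (length `n`),
 * or -1 if no plausible line is found. */
static int find_black_line(const unsigned char *row, size_t n)
{
    int fall = -1;
    for (size_t i = 0; i + 1 < n; i++) {
        int diff = (int)row[i + 1] - (int)row[i];
        if (diff < -EDGE_THRESHOLD && fall < 0) {
            fall = (int)(i + 1);              /* bright -> dark edge */
        } else if (diff > EDGE_THRESHOLD && fall >= 0) {
            int width = (int)(i + 1) - fall;  /* dark -> bright edge */
            if (width >= MIN_WIDTH && width <= MAX_WIDTH)
                return fall + width / 2;      /* centre of the line  */
            fall = -1;                        /* implausible width: keep looking */
        }
    }
    return -1;
}
```

Running this over each of the 40 acquired lines yields the array of black-line positions that the later control stages operate on.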
Meanwhile, video processing and motion control are performed in the idle time between the acquired video lines, so acquisition, processing, and control proceed concurrently. Furthermore, the method does not store the raw video at all: only the array of black-line positions produced by video processing is kept, reducing both memory usage and program execution time.

4. Motion Control Strategy

The main design objective of this robot is to raise its line-following speed, and the camera is used precisely to extend the forward detection distance of the line and give the motion controller enough decision time; the control strategy is built on this. The system combines preview (look-ahead) control with PID control for speed and steering. From the acquired video, the system judges the road ahead of the vehicle, distinguishing straight sections from curves and estimating the curves' curvature. Because of factors such as its mechanical structure and motor characteristics, the vehicle behaves differently on different road sections: each curve has an optimal entry speed, cornering speed, and cornering path. On straights, higher speed is generally better, but deceleration before entering a curve is essential for safe cornering. This is the key speed advantage of the camera-based solution over infrared photoelectric sensors: a sufficient preview distance guarantees enough braking time and distance, giving the fastest possible cornering. The control algorithm works as follows. First, the variance of the black-line position data is computed; from it, the curvature of the black line is judged, and the track is divided simply into three types: straights, small curves, and large curves.
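The motion-control decisions of this section (variance of the black-line positions → track type → target speed, plus PD steering on the preview-point offset) can be sketched as below. All thresholds, gains, fixed-point scalings, and names are hypothetical placeholders: the paper reports the strategy, not its constants.

```c
enum track_type { STRAIGHT, SMALL_CURVE, LARGE_CURVE };

/* Variance (scaled by 100, integer-only maths) of the black-line column
 * positions collected over the acquired video lines. */
static long position_variance_x100(const int *pos, int n)
{
    long sum = 0, sq = 0;
    for (int i = 0; i < n; i++) sum += pos[i];
    long mean_x10 = sum * 10 / n;
    for (int i = 0; i < n; i++) {
        long d = (long)pos[i] * 10 - mean_x10;
        sq += d * d;
    }
    return sq / n;                            /* variance * 100 */
}

/* Map the variance to one of the three track types; the cut-off values
 * are invented for illustration. */
static enum track_type classify(long var_x100)
{
    if (var_x100 < 400)  return STRAIGHT;
    if (var_x100 < 2500) return SMALL_CURVE;
    return LARGE_CURVE;
}

/* PD steering on the black line's offset (in pixels) from the image
 * centre at the chosen preview line; returns a signed servo correction.
 * KP and KD are hypothetical gains. */
static int pd_steer(int error, int prev_error)
{
    const int KP = 8, KD = 3;
    return KP * error + KD * (error - prev_error);
}
```

In use, `classify` selects the target speed held by the closed-loop speed controller, while `pd_steer` is evaluated on the offset at a farther preview line on straights and a nearer one in curves.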
Through extensive testing, an optimal target speed is obtained for each track type, and closed-loop PID control holds the vehicle at that speed. For steering control, since the emphasis is on line-following speed rather than precise lateral control, a PD algorithm combined with a preview (pre-aiming) algorithm is used, with the preview distance adjusted dynamically to the track conditions: following fuzzy-control ideas and human driving habits, lateral control uses a farther video line on straights and a nearer one when entering curves, the steering command being a PD function of the black line's lateral offset at the chosen preview line. With this speed and steering strategy, after many practical experiments a good line-following result was finally achieved, with an average line-following speed of 2.5 m/s, well above that of ordinary line-following robot designs. Since this paper focuses on the system construction scheme, and the control algorithm varies greatly with vehicle mechanics and motors, the experimental data are not representative; only the algorithmic strategy is described here.

5. Summary and Outlook

This paper presents a vision-based robot system for high-speed line-following. A single high-performance microcontroller performs video acquisition and processing and realizes the speed and steering control needed for line-following. The system is light and agile, needs no memory expansion or other programmable devices, and is cheap to build. In the first National Undergraduate Intelligent Vehicle Competition, the system ran smoothly and achieved excellent results. Innovations: instead of the usual infrared photodiodes, a low-resolution camera serves as the line-following sensor; and, breaking with conventional approaches, a single microcontroller handles both video acquisition and processing.
Since video capture provides far more route information than infrared photoelectric sensors, this low-cost video line-finding solution leaves great flexibility for motion-control algorithm development. However, because of the microcontroller's speed limits, the system cannot yet handle color video acquisition, which prevents line-finding in complex scenes. Beyond robotics competitions, the system can be used for navigation-algorithm research on intelligent vehicles, where its simplicity and low cost address practical needs in intelligent-vehicle research; it can also serve as an excellent teaching platform for control theory and video processing.