Research on Highway Traffic Flow Detection System Based on Machine Vision
Abstract: Key data required by intelligent transportation systems include road occupancy, traffic flow, and vehicle speed. This paper presents a digital-image-based traffic flow detection system built around the TMS320DM642, describing the hardware composition, software structure, and traffic flow detection algorithm of this embedded vision system. The system was tested together with existing traffic signal controllers; it offers a high recognition rate, small size, low cost, and good real-time performance, and can detect highway traffic flow information in real time.

1. Introduction

As populations grow, the pressure on road transportation increases, which has made intelligent transportation systems a research hotspot in recent years. Traffic flow detection is a fundamental component of intelligent transportation and occupies an important position in such systems. Various detection methods exist, such as electromagnetic induction loops and ultrasonic detectors. In practice, however, the speed and type of moving vehicles change constantly, so the reflected signals are unstable and the measurement errors are large. Compared with these methods, video-based traffic flow detection has several advantages:

1. Reliable information can be extracted from video images to support road traffic monitoring, raising the automation level of roads and vehicles.
2. Installing video cameras in traffic monitoring and control systems is cheaper and less disruptive than installing other sensors.
3. Many cameras are already installed in actual road traffic systems for monitoring and control, so one installation serves two purposes.

Traditional video detection methods are based on industrial control computers, with mature algorithms and related products, but they also have disadvantages:

1. General-purpose CPUs lack the dedicated multiply-accumulate hardware of a DSP, so real-time image processing is difficult to achieve.
2. A general-purpose industrial control computer running Windows is costly, and constant attention must be paid to crashes, virus infections, and operating-system patches and upgrades.

For these two reasons, this paper proposes an embedded image recognition scheme based on the TMS320DM642 (hereinafter DM642) that addresses these problems.

2. Principle and Composition of the Traffic Flow Detection System

2.1 Working Principle of the Traffic Flow Detection System

The traffic flow detection system comprises video acquisition, digital video signal processing, traffic flow detection algorithms for different environments, and output of the detection results. The core chip of the digital image acquisition section is the TVP5150, which converts the analog video signal into a digital video signal. The DM642 then runs image algorithms on the acquired frames. The detection algorithm mainly uses an improved frame-difference method for motion detection during the day and a vehicle-headlight detection method at night. The acquired highway image is divided into four parts, one per lane, and a virtual coil (a rectangular detection area in the image) is set in each lane. When a vehicle crosses a virtual coil, the pixel values within the coil change; based on this change, an I/O port generates a corresponding pulse for that lane. After processing, the pulses are sent to the highway traffic signal controller to control the traffic lights, achieving the goal of intelligent traffic control.
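The per-lane virtual-coil test described above can be sketched in C. This is a minimal illustration of the basic frame-difference idea, not the paper's actual code (which uses an improved variant); the function name `coil_occupied` and the two thresholds are assumptions:

```c
#include <stdlib.h>

/* A virtual coil: a rectangular detection window inside the image. */
typedef struct { int x, y, w, h; } Coil;

/* Plain frame difference over the coil: count pixels whose grayscale
   value changed by more than pix_thresh between two frames, and report
   the coil as occupied when enough pixels changed. */
int coil_occupied(const unsigned char *prev, const unsigned char *cur,
                  int stride, const Coil *c,
                  int pix_thresh, int count_thresh)
{
    int changed = 0;
    for (int row = 0; row < c->h; row++) {
        const unsigned char *p = prev + (size_t)(c->y + row) * stride + c->x;
        const unsigned char *q = cur  + (size_t)(c->y + row) * stride + c->x;
        for (int col = 0; col < c->w; col++)
            if (abs(q[col] - p[col]) > pix_thresh)
                changed++;
    }
    return changed > count_thresh;  /* 1: raise this lane's output pulse */
}
```

When the result transitions from 0 to 1, the corresponding CPLD-expanded I/O line would be pulsed for that lane.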
In addition, the traffic flow information can be transmitted to the monitoring center over the network.

2.2 Traffic Flow Detection System Hardware

The DM642 is a digital signal processor designed by Texas Instruments specifically for multi-channel video input/output. Its second-generation high-performance very long instruction word (VLIW) architecture executes up to eight instructions in parallel, making the chip well suited to digital image processing. Considering the requirements of practical operation and system stability, the DM642's clock frequency was set to 600 MHz. Based on the application environment and the needs of the embedded system, besides the necessary memory and the video acquisition/playback sections, the system mainly adds multiple digital I/O channels, asynchronous serial ports, and a network interface for communication with external systems. The hardware is shown in Figure 1; the specifications are as follows:

1. External SDRAM, 4M×64 bits;
2. External Flash, 4M×8 bits;
3. Two PAL/NTSC analog video inputs (CVBS or S-Video) and one PAL/NTSC analog video output;
4. Eight digital I/O ports expanded via a CPLD for outputting per-lane traffic flow information;
5. Two UART interfaces, configurable as RS232/RS422/RS485;
6. Real-time clock (RTC) and watchdog circuit;
7. 10M/100Base-TX Ethernet interface.

Figure 1. Hardware photograph.
Figure 2. System composition diagram.

As shown in Figure 2, the TMS320DM642 expands its external memory over a 64-bit-wide EMIF bus, including 32 MB of synchronous DRAM for user code and image data at runtime. The 4 MB Flash stores the bootloader and user applications; during startup, the code and data in the Flash are loaded into main memory (SDRAM). User configuration parameters for the virtual coils can also be stored in the Flash.
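The memory regions above are mapped through the DM642's EMIF chip-enable (CE) spaces. The sketch below uses the standard C64x EMIFA base addresses (0x80000000 for CE0, 0x90000000 for CE1); the helper name and the assumption that the Flash sits at the start of CE1 are illustrative, not taken from the paper:

```c
#include <stdint.h>

/* C64x EMIFA chip-enable base addresses (per the device memory map). */
#define EMIFA_CE0_BASE 0x80000000u  /* SDRAM in this design              */
#define EMIFA_CE1_BASE 0x90000000u  /* Flash, UART, and CPLD registers   */

/* Illustrative helper: translate an offset within the Flash image into
   a CPU address, assuming the Flash is mapped at the start of CE1. */
static inline uint32_t flash_addr(uint32_t offset)
{
    return EMIFA_CE1_BASE + offset;
}
```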
The SDRAM has a data width of 64 bits and the Flash a data width of 8 bits, mapped to the CE0 and CE1 spaces of the TMS320DM642, respectively. The Universal Asynchronous Receiver/Transmitter (UART) and the CPLD are likewise connected to the DM642 over the EMIF bus. The UART expands the serial ports; in this system it provides the RS232 interface. The CPLD implements the glue logic between the Flash and the UART and expands the general-purpose digital I/O. For ease of software implementation, these two parts are also mapped into the CE1 space of the DM642, with their internal registers forming part of the CE1 address space.

2.3 Video Acquisition and Output

To collect traffic flow information at an intersection, the system provides two analog video inputs. A TVP5150 converts the analog video signal from each camera into an ITU-R BT.656 digital video stream with embedded synchronization and sends it to the VP1 and VP2 ports of the DM642. Because the horizontal and vertical synchronization signals are embedded in the EAV and SAV time-base codes of the data stream, the video ports only require the sampling clock and a sampling enable signal. Through its FIFOs, the DM642 acquires continuously into three frame buffers: while one frame is being processed, the other two buffers continue cyclic acquisition, resolving the conflict between constant-rate video capture and variable-rate image processing. The system also includes one video output for local playback, a function that can be disabled after debugging. The output is implemented with a Philips SAA7121, which converts the digital video from the DM642's VP0 port into a PAL (50 Hz) or NTSC (60 Hz) analog signal on an external video connector.

3. Software Section

3.1 Traffic Flow Statistics Algorithm

Because road-surface light intensity differs greatly between day and night, the algorithm's adaptability is crucial. To obtain traffic flow information around the clock, day and night are processed separately, and the program switches between the two algorithms automatically as lighting conditions change.

3.1.1 Selection of the Virtual Coil. The choice of virtual coil affects both the accuracy and the speed of the detection algorithm, and is influenced by the camera's mounting height, tilt angle, and depth of field. Generally, placing the virtual coil near the bottom of the image, where vehicle spacing appears larger, makes detection easier. A larger coil usually yields higher detection accuracy but a longer execution time. Since the system must adapt to various intersections and road surfaces, coil selection is left to the user: PC software developed in VC6.0 lets the user set the size and position of each lane's virtual coil over the serial port.

3.1.2 Interval Between Adjacent Detection Frames. Since the system must communicate with the traffic signal controller, the total processing time per road image must not exceed 0.25 seconds; an inter-frame interval of 0.125 seconds was therefore chosen.

3.1.3 Traffic Flow Detection Algorithm. The daytime detection algorithm has mature implementations on industrial control computers and is not elaborated here. At night, road visibility is low, so the algorithm focuses on identifying vehicle headlights: headlights are very bright at night, so as long as they can be detected correctly, vehicles can be counted.
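The nighttime pipeline first binarizes the image; the threshold is chosen with Otsu's method, which picks the gray level that maximizes the between-class variance of the histogram. A portable sketch (the function name and the convention that pixels at or below the threshold are background are assumptions):

```c
/* Otsu's method over a 256-bin grayscale histogram.  Returns the
   threshold t maximizing the between-class variance; pixels <= t are
   treated as background.  The constant 1/total^2 scale factor is
   dropped since it does not change the argmax. */
int otsu_threshold(const unsigned hist[256])
{
    unsigned long total = 0;
    double sum_all = 0.0;
    for (int i = 0; i < 256; i++) {
        total += hist[i];
        sum_all += (double)i * hist[i];
    }
    if (total == 0) return 0;

    double sum_b = 0.0;      /* weighted sum of the background class */
    unsigned long w_b = 0;   /* background pixel count               */
    double best_var = -1.0;
    int best_t = 0;

    for (int t = 0; t < 256; t++) {
        w_b += hist[t];
        if (w_b == 0) continue;
        unsigned long w_f = total - w_b;
        if (w_f == 0) break;
        sum_b += (double)t * hist[t];
        double m_b = sum_b / w_b;                 /* background mean */
        double m_f = (sum_all - sum_b) / w_f;     /* foreground mean */
        double var = (double)w_b * (double)w_f * (m_b - m_f) * (m_b - m_f);
        if (var > best_var) { best_var = var; best_t = t; }
    }
    return best_t;
}
```

Because the search scans all 256 candidate thresholds against the whole histogram, it adapts to each frame's lighting at some extra cost, which matches the tradeoff noted below.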
The main interference comes from headlight glare reflected off the road surface. Matlab simulation experiments showed that, after binarization and denoising, the bright spots in the image essentially retain the shape of vehicle headlights, while road-surface reflection areas spread out lengthwise ahead of the vehicle. The shape of the bright spots inside the detection window can therefore be used to distinguish headlights from road reflections. The binarization threshold is selected with Otsu's method, which computes the threshold from the between-class variance; compared with empirically fixed thresholds, it adapts better to different environments, at the cost of increased time and space complexity. Denoising uses a 3×3 median filter, sped up with an improved algorithm and applied only within the virtual coils. Figure 3 shows the original grayscale road image, and Figure 4 its binarized version; the rectangular area in the figure is the virtual coil, which contains two white regions. Whether a white region is a headlight is decided from its aspect ratio: the length of a headlight region is generally less than or equal to its width, as shown in Figure 5, whereas a road-reflection region is longer than it is wide, as shown in Figure 8. The white area within the virtual coil in Figure 4 is road-surface reflection.

3.2 System Software Framework Based on DSP/BIOS

The software was developed in CCS using TI's DSP/BIOS kernel and TI's RF5 software reference framework. Input, processing, and output threads are configured through DSP/BIOS, and synchronization between these threads is achieved with semaphores.
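DSP/BIOS provides semaphore (SEM) and mailbox (MBX) primitives for exactly this handshake. The sketch below is a portable, single-threaded stand-in rather than the DSP/BIOS API: it shows a plausible frame message and a small ring-buffer mailbox; all names (`FrameMsg`, `mbox_put`, `mbox_get`) are assumptions:

```c
/* Message passed from the input thread to the processing thread:
   where the captured frame lives in SDRAM and which frame it is. */
typedef struct {
    unsigned char *frame_addr;
    int            frame_no;
} FrameMsg;

#define MBOX_DEPTH 4

/* Fixed-depth FIFO mailbox.  In the real system, DSP/BIOS semaphores
   block the consumer while the box is empty; here put/get simply
   report failure so the sketch stays portable and testable. */
typedef struct {
    FrameMsg slot[MBOX_DEPTH];
    int head, tail, count;
} Mbox;

int mbox_put(Mbox *m, FrameMsg msg)
{
    if (m->count == MBOX_DEPTH) return 0;   /* mailbox full  */
    m->slot[m->tail] = msg;
    m->tail = (m->tail + 1) % MBOX_DEPTH;
    m->count++;
    return 1;
}

int mbox_get(Mbox *m, FrameMsg *out)
{
    if (m->count == 0) return 0;            /* mailbox empty */
    *out = m->slot[m->head];
    m->head = (m->head + 1) % MBOX_DEPTH;
    m->count--;
    return 1;
}
```

The input thread would `mbox_put` a message after each acquired frame, and the processing thread would `mbox_get` it to learn the frame's start address.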
With the DSP/BIOS kernel, configuration and modification are convenient, offering many advantages over traditional approaches. The RF5 (Reference Framework 5) DSP software architecture significantly shortens development time while maximizing code portability and robustness. From bottom to top, the software stack consists of the CSL (Chip Support Library), DSP/BIOS, the driver layer, the signal-processing library layer, and the algorithm standard layer; these layers constitute RF5, with the user application layer on top. Modification and maintenance are convenient because the user only needs to change the upper layers.

The input driver uses the FVID class driver provided by TI. Through structure configuration parameters, it sets up the DM642 video port and the IIC connection to the TVP5150 A/D converter, so that the TVP5150 outputs a PAL digital video stream and the video port writes each acquired image to a designated memory area through its FIFO. After a frame is acquired, a message is posted to the processing module via a semaphore; the message structure holds the starting address of the memory containing the image data. The input module then waits for a response from the output module before acquiring the next frame. The processing module executes the traffic flow statistics algorithm: it takes the image data address from the message sent by the input module, runs the image-processing algorithm, and outputs the results through the CPLD-expanded I/O ports, delivering the traffic flow information to the traffic signal controller.

3.3 Code Optimization

The program is written mainly in C, with some core code hand-optimized in assembly to meet real-time requirements. The quality of C-level optimization directly affects program efficiency; the program makes extensive use of space-for-time tradeoffs to improve execution speed.
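One concrete space-for-time tradeoff, shown here as an assumed illustration rather than the paper's actual code: precompute a 256-entry lookup table once per frame, so binarizing a pixel becomes a single table load instead of a compare-and-branch inside the hot loop:

```c
/* Space-for-time binarization: 256 bytes of table buy a branch-free
   inner loop.  Names and sizes are illustrative. */
static unsigned char bin_lut[256];

void build_bin_lut(int thresh)
{
    for (int i = 0; i < 256; i++)
        bin_lut[i] = (i > thresh) ? 255 : 0;
}

void binarize(const unsigned char *src, unsigned char *dst, int n)
{
    /* On the TI C6000 compiler, a "#pragma MUST_ITERATE(8, , 8)"
       placed here would promise the trip count is a multiple of 8,
       helping the compiler software-pipeline the loop; it is omitted
       to keep the sketch portable. */
    for (int i = 0; i < n; i++)
        dst[i] = bin_lut[src[i]];
}
```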
There are many code optimization methods; the main ones are:

1. Compiler optimization: selecting different optimization options at compile time, such as the -pm and -oe options.
2. Annotating the C code with directives; a commonly used one is #pragma MUST_ITERATE.
3. Writing linear assembly to improve execution speed.
4. Writing hand-scheduled assembly to implement software pipelining, using techniques such as dependency graphs and iteration-interval scheduling tables.

These are described in detail in TI's technical documentation and are not repeated here.

4. Experimental Results and Analysis

To verify the reliability of the traffic flow detection system, the algorithm was ported to the detection system and field tests were conducted on several highway overpasses with tripod-mounted cameras. Pedestrians on the overpasses caused slight swaying that affected camera stability, and vehicles straddling lane lines introduced some errors, but the overall detection performance remained good. One set of tests is shown in Table 1. Under natural conditions the image size was 720×576; daytime measurements were taken at 3:27 PM and evening measurements at 6:50 PM, in clear weather, on two lanes of Xueyuan Road in Haidian District, Beijing, using a 1/3-inch CCD camera with a 3.5-8 mm lens and a maximum aperture ratio of 1:1.4. Table 1 shows the traffic flow detection results. The daytime results were slightly better; at night, the varying shape and brightness of vehicle lights caused some errors, but the system's recognition accuracy remained above 80%.
The experiments demonstrate that this method offers high detection accuracy, low implementation cost, and reliable operation. The authors' innovations are as follows:

1. Previous vision-based traffic flow detection hardware relied on industrial control computers. This paper proposes a new solution, an embedded traffic flow detection system based on the TMS320DM642, which experiments show to be small, low-cost, stable, and reliable.
2. The paper introduces a nighttime traffic flow detection algorithm of low complexity and fast processing speed, satisfying both the real-time and the accuracy requirements of traffic flow detection.