
Design and Development of Machine Vision-Based Driver Assistance Systems

2026-04-06

1. Introduction

Machine vision-based driver assistance systems aim to improve drivers' perception of the surrounding environment. By monitoring the external environment and warning the driver in time in unsafe situations, the human-vehicle-road system becomes more stable, safe, and reliable, thereby enhancing the safety performance of automobiles.

Developing a machine vision-based driver assistance system presents the following challenges:

(1) The system algorithms are complex and the code base is large. Machine vision relies mainly on cameras to collect external information and convert it into digital image signals for processing. Different external environments and detection objectives change what the system must process, so the overall algorithms are extremely complex and development is slow.

(2) The testing environment requirements are stringent. In the later stages of system development, testing performance and making corrections is one of the key steps in the entire R&D process. Compared with other automotive electronic products, machine vision-based driver assistance products must consider two factors during testing: first, whether the driver's safety can be guaranteed during real-vehicle testing; second, whether the testing process is effective, reliable, and repeatable, so that problems can be detected and corrected in time.

If the above-mentioned challenges can be successfully resolved during the development of machine vision-based assisted driving systems, it will lay the groundwork for the future research and development of machine vision-based assisted driving technology products, improve the development efficiency of assisted driving technology products, promote the early mass production of assisted driving technology products, and ultimately improve the safety performance of automobiles.

2. Design Background and Design Principles

To address the aforementioned issues in the research and testing of machine vision driver assistance systems, and taking advantage of the excellent image processing capabilities of NI's EVS platform and the powerful real-time simulation testing functions of the PXI platform, a machine vision driver assistance development system based on NI EVS and PXI was designed in the LabVIEW programming language, with simulation test models integrated through the VeriStand development platform.

The NI EVS platform enables rapid implementation of machine vision-based driver assistance functions, primarily due to the following features:

(1) A high-performance multi-core processor with 2 GB of RAM, suitable for fast detection and large-scale image processing;

(2) Multiple cameras (GigE Vision and IEEE 1394 standards) can be connected for synchronized detection, supporting the development of various driver assistance functions;

(3) High-speed I/O channels suitable for industrial communication, with strong expansion capabilities;

(4) Automatic detection can be set up through configuration-based vision tools, without designing and developing low-level drivers or interface circuits;

(5) The Vision Development Module (VDM) integrates a large number of common machine vision processing building blocks, so developers can focus on integration and application to quickly implement various detection and recognition functions;

(6) Graphical programming makes it easier for developers to develop and debug complex algorithms.

When designers have new ideas, this system can quickly bring them to life, improving development efficiency. The Vision Development Module (VDM) lets designers focus on the effects of different algorithms rather than on programming effort, and comprehensive comparison of algorithms further improves system performance.

The NI PXI platform enables system testing in an effective, reliable, and reproducible environment, facilitating early problem detection and remediation. The NI PXI platform offers unique advantages in several areas:

(1) A graphical software development environment with good human-machine interaction elements lets developers focus on application development rather than low-level drivers, making user interfaces easy to build;

(2) Good real-time performance, meeting the timing and real-time requirements of data acquisition and testing while running complex vehicle models;

(3) The system has high reliability, high integration, and good scalability;

(4) It has good openness and extensibility and can integrate various models developed by other software platforms.

Combining the advantages of NI EVS and PXI, and using the LabVIEW programming language, a machine vision driver assistance system based on NI EVS and PXI was developed, with simulation test models integrated through the VeriStand development platform.

3. System Technical Principles and Design Architecture

Based on the challenges and corresponding solutions in the development of machine vision-assisted driving systems, the designed system should have the following two functions:

(1) Rapid development and implementation of machine vision systems. Using the NI EVS platform, the pre-defined functional requirements are implemented through programming, and the entire hardware system is guaranteed to meet the functional requirements.

(2) Reliable, effective, and repeatable testing of machine vision-based driver assistance functions. A virtual testing system is built on the NI PXI platform to expose problems that arise during machine vision development, so that they can be corrected in time and the system's safety performance improved.

Based on the above ideas, the system design principle architecture is shown in Figure 1.

Figure 1 System Design Principle Architecture Diagram

As shown in Figure 1, the entire platform is divided into two parts: a virtual testing system and a machine vision system. Each part consists of its own hardware and software.

In the virtual testing system, PC 1 connects to the simulation testing platform via TCP/IP protocol to configure the parameters of the simulation model. The main function of the simulation testing platform is to run the vehicle dynamics model and collect the output parameters of the virtual cockpit. The simulation test results are transmitted to PC 2 via CAN communication. The virtual reality software running on PC 2 converts the output parameters of the simulation testing platform into vehicle operation effects and displays them in the virtual cockpit.

In the machine vision system, a camera captures the virtual driving scene in the virtual cockpit and connects to the vision processing platform via the TCP/IP protocol. The machine vision functions are then programmed and implemented on the vision processing platform.

3.1 Machine Vision System

The main function of the machine vision system is to let the designer program various recognition and detection functions on the vision processing platform according to pre-defined requirements.

The core of the machine vision system is the NI EVS embedded vision development platform. One of the key reasons for using the NI EVS platform is the simplicity and intuitiveness of the LabVIEW programming language and the excellent image processing capabilities of the Vision Development Module (VDM). The Vision Development Module is dedicated to developing and configuring machine vision applications. It contains hundreds of functions, can acquire images from various cameras, and can perform various image processing operations, including image enhancement, inspection and display, feature localization, object recognition, and part measurement. With the help of the EVS hardware platform and software programming environment, the configured functions can be quickly implemented, significantly shortening development time.

The structure of the machine vision system is shown in Figure 2.

Figure 2 Machine Vision System

The road condition information simulated by the virtual reality software is displayed on an LCD screen. It is captured by the piA1000-60gc camera and transmitted to the NI EVS-1464 (Windows) embedded vision system for processing, where the pre-programmed algorithm performs the machine vision function.

3.2 Virtual Testing System

The main function of the virtual testing system is to provide an effective, reliable, and repeatable virtual environment, ensuring the real-time nature of the testing process and facilitating the timely detection and rectification of problems.

The NI PXI platform accelerates test execution, improves software development efficiency, increases processing power, and enhances scalability, greatly reducing the development investment for machine vision systems.

Based on these principles and the characteristics of the NI PXI platform, a virtual testing system was built.

The structure of the virtual testing system is shown in Figure 3.

Figure 3. Virtual Testing System Structure Diagram

The NI PXI-8513 primarily acquires steering wheel angle information. As a single-port, software-selectable Controller Area Network (CAN) PXI interface, it is suitable for developing CAN applications in NI LabVIEW, NI LabWindows/CVI, and C/C++ on Windows and the LabVIEW Real-Time operating system.

The NI PXI-7841R multifunction RIO board acquires accelerator pedal information. Its programmable FPGA chip is suited to onboard processing and flexible I/O operations. Users can configure various analog and digital functions using the NI LabVIEW graphical block diagram and the NI LabVIEW FPGA Module. The block diagram runs in hardware, giving direct and timely control of all I/O signals for superior performance.

The NI PXIe-8135 runs the vehicle dynamics model. It is a high-performance embedded controller for PXI systems based on an Intel Core i7-3610QE processor, with a 2.3 GHz base frequency (3.3 GHz in single-core Turbo Boost mode), four cores, and dual-channel 1600 MHz DDR3 memory, making it ideal for processor-intensive modular instrumentation and data acquisition applications.

The vehicle dynamics model receives inputs such as steering wheel angle and accelerator pedal position, and the vehicle model then performs the corresponding motion. To display the motion more intuitively, the output of the vehicle dynamics model is connected to the virtual reality software CarMaker. Combined with the different traffic scenarios CarMaker provides, this further improves the realism and effectiveness of the testing process.
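The vehicle dynamics model itself is not detailed here. As a rough illustration of how steering and pedal inputs drive vehicle motion, a minimal kinematic bicycle model could look like the sketch below. This is a common textbook simplification, not the model actually used in this system; the wheelbase value and the mapping from pedal/steering-wheel signals to `accel`/`steer` are illustrative assumptions.

```python
import math

def bicycle_step(x, y, yaw, v, steer, accel, dt=0.01, wheelbase=2.7):
    """One integration step of a kinematic bicycle model.

    steer: front-wheel angle (rad), derived from the steering wheel angle.
    accel: longitudinal acceleration (m/s^2), derived from the pedal position.
    Returns the updated pose (x, y, yaw) and speed v.
    """
    x += v * math.cos(yaw) * dt          # advance along the heading
    y += v * math.sin(yaw) * dt
    yaw += v / wheelbase * math.tan(steer) * dt  # yaw rate from steering
    v += accel * dt                      # speed change from the pedal
    return x, y, yaw, v
```

Driving straight at 10 m/s for one second (100 steps of 10 ms) moves the model 10 m forward, which is a quick sanity check of the integration.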

4. Software Implementation

The functions of the entire system are mainly divided into two parts, and therefore the software implementation is also divided into two parts: machine vision software implementation and virtual testing software implementation.

4.1 Machine Vision Software Implementation

Machine vision-based driver assistance systems can perform many functions, such as lane detection, pedestrian detection, traffic signal and sign recognition, and vehicle night vision systems. This development platform allows new ideas to be quickly implemented through programming and then tested.

The following section uses the implementation process of lane line detection as an example to introduce the software implementation based on the NI EVS platform.

The main function of lane detection is to provide, with the help of the machine vision platform, the lateral distance and heading angle by which the vehicle deviates from the lane center line when driving on a structured road. When a lane departure warning function is added, a warning can be issued to the driver as the vehicle is about to cross the lane boundary, helping to ensure safe driving.
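One common way to turn the deviation distance and heading angle into a warning decision is a time-to-line-crossing (TLC) check. The sketch below is an illustrative assumption, not the warning logic of this system; the lane width and the 1-second TLC threshold are placeholder values.

```python
import math

def ldw_warning(offset_m, heading_rad, speed_mps, tlc_threshold_s=1.0):
    """Warn when the vehicle is about to cross the lane boundary.

    offset_m:    signed lateral offset from the lane center, positive
                 toward the boundary being approached.
    heading_rad: heading angle relative to the lane direction.
    """
    lane_half_width = 1.75                         # half of an assumed 3.5 m lane
    dist_to_boundary = lane_half_width - offset_m  # lateral distance remaining
    lateral_speed = speed_mps * math.sin(heading_rad)
    if lateral_speed <= 0:                         # moving parallel or away
        return False
    tlc = dist_to_boundary / lateral_speed         # seconds until crossing
    return tlc < tlc_threshold_s
```

For example, a vehicle 0.25 m from the boundary, angled 0.05 rad toward it at 20 m/s, would cross in about 0.25 s and trigger the warning.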

A camera mounted on the vehicle's windshield collects road condition information ahead, which is then processed by the EVS embedded vision system. To ensure the system's real-time performance and reliability, the raw video information typically undergoes image cropping, grayscale conversion, edge detection, binarization, and line detection. When the lane line position deviation is small across 10 consecutive frames, the vehicle's trajectory can be considered relatively stable, thus allowing for a narrowing of the lane line search and detection area and further improving the system's real-time performance. The lane line detection algorithm flowchart is shown in Figure 4.

Figure 4. Lane line detection algorithm flow

Image cropping primarily removes information irrelevant to lane line detection, such as the sky, to reduce the amount of image data to process, minimize interference, and improve the system's real-time performance and accuracy. The VI used is IMAQ Extract.

The grayscale conversion step transforms the original color image into a grayscale image without affecting lane line detection, further reducing the amount of data to process. The VI used is IMAQ ExtractSingleColorPlane.
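These first two steps can be sketched outside LabVIEW as well. The fragment below is an illustrative stand-in for the IMAQ VIs, with an assumed horizon position of 45% of the frame height; it crops away the sky and converts the remaining pixels to grayscale using standard BT.601 luminance weights.

```python
import numpy as np

def crop_and_gray(frame, horizon=0.45):
    """Drop the sky region above the horizon line, then convert the
    remaining RGB pixels to grayscale (ITU-R BT.601 weights)."""
    h = frame.shape[0]
    road = frame[int(h * horizon):, :, :]       # keep the lower part only
    weights = np.array([0.299, 0.587, 0.114])   # R, G, B luminance weights
    return road @ weights                       # weighted sum over color axis
```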

The purpose of edge detection is to highlight lane line edges, as lane line detection primarily relies on information about the lane edges. The VI used is IMAQ EdgeDetection, with the Sobel algorithm selected.
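The Sobel operator estimates the image gradient with two 3×3 kernels. The following is a slow, plain-Python reference sketch of the gradient magnitude (|Gx| + |Gy|), for illustration only; it is not the optimized IMAQ implementation.

```python
import numpy as np

def sobel_magnitude(gray):
    """Approximate gradient magnitude with the Sobel operator."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T                                  # vertical-gradient kernel
    h, w = gray.shape
    mag = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = gray[i:i + 3, j:j + 3]
            mag[i, j] = abs((patch * kx).sum()) + abs((patch * ky).sum())
    return mag
```

A vertical step edge produces the maximum response 4 (the sum of the kernel column weights 1 + 2 + 1) at the transition and zero in the flat regions.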

Binarization further simplifies the image after edge detection. A threshold is set: pixels above it are assigned a grayscale value of 1, and pixels below it a value of 0. The VI used is IMAQ AutoBThreshold 2, with the between-class variance (Otsu) algorithm selected.
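The between-class variance criterion (Otsu's method) picks the threshold that best separates foreground from background. The following is a plain re-implementation for illustration, not the IMAQ code; it assumes 8-bit-range grayscale values.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: maximize the between-class variance
    w_b * w_f * (mean_b - mean_f)^2 over all candidate thresholds."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var = 0, 0.0
    w_b, sum_b = 0.0, 0.0
    for t in range(256):
        w_b += hist[t]                 # background pixel count
        if w_b == 0:
            continue
        w_f = total - w_b              # foreground pixel count
        if w_f == 0:
            break
        sum_b += t * hist[t]
        mean_b = sum_b / w_b
        mean_f = (sum_all - sum_b) / w_f
        var_between = w_b * w_f * (mean_b - mean_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def binarize(gray, t):
    """Pixels above the threshold become 1, others 0."""
    return (gray > t).astype(np.uint8)
```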

Line detection finds lane lines within a defined area according to the configured parameters. The VI used is IMAQ Find Edge, with the Hough transform selected.
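The voting scheme behind the Hough transform can be sketched as follows: each edge pixel votes for every line in normal form (rho = x·cosθ + y·sinθ) that could pass through it, and the most-voted cell identifies the dominant line. This is a brute-force reference version for illustration, not the IMAQ implementation.

```python
import numpy as np

def hough_best_line(binary, n_theta=180):
    """Return (rho, theta_deg) of the strongest line in a binary image."""
    h, w = binary.shape
    diag = int(np.ceil(np.hypot(h, w)))          # max possible |rho|
    thetas = np.deg2rad(np.arange(n_theta))      # 0 .. 179 degrees
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    ys, xs = np.nonzero(binary)
    for x, y in zip(xs, ys):
        for t_idx, theta in enumerate(thetas):
            rho = int(round(x * np.cos(theta) + y * np.sin(theta)))
            acc[rho + diag, t_idx] += 1          # shift rho to a valid index
    rho_idx, t_idx = np.unravel_index(acc.argmax(), acc.shape)
    return rho_idx - diag, float(np.rad2deg(thetas[t_idx]))
```

For a horizontal row of edge pixels at y = 5, the winning cell lies at rho = 5 with theta near 90 degrees.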

When the lane line position deviation is small across 10 consecutive frames, Kalman filtering can be used to predict the region of interest (ROI) where the lane line will appear; lane line detection is then performed only in that region. This reduces the search area and the amount of data to process, improving the system's real-time performance.
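The tracking idea can be sketched with a constant-velocity Kalman filter over a scalar lane-line position; the predicted position would serve as the center of the next frame's reduced search region. The process and measurement noise values below are illustrative assumptions, not tuned parameters from this system.

```python
import numpy as np

def kalman_track(measurements, q=1e-3, r=0.25):
    """Filter a sequence of scalar lane-line positions (e.g. pixel columns).

    State is [position, velocity] under a constant-velocity model; the
    predict step gives the ROI center for the next frame's search."""
    x = np.array([measurements[0], 0.0])       # initial state
    P = np.eye(2)                              # state covariance
    F = np.array([[1.0, 1.0], [0.0, 1.0]])     # constant-velocity transition
    H = np.array([[1.0, 0.0]])                 # we observe position only
    Q = q * np.eye(2)                          # process noise
    filtered = []
    for z in measurements:
        x = F @ x                              # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + r                    # innovation covariance
        K = P @ H.T / S                        # Kalman gain (2x1)
        x = x + (K * (z - H @ x)).ravel()      # update with measurement z
        P = (np.eye(2) - K @ H) @ P
        filtered.append(x[0])
    return filtered
```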

The LabVIEW program for lane line detection is shown in Figure 5.

Figure 5. LabVIEW program for lane detection.

4.2 Implementation of Virtual Testing Software

The virtual testing software mainly includes three parts: vehicle dynamics model building, VeriStand configuration and instrument display, and CarMaker 3D scene modeling.

The vehicle dynamics model is the foundation of the entire virtual testing platform. Building a model that matches the actual performance of vehicles ensures a more effective and reliable testing process. The vehicle dynamics model built using MATLAB/Simulink is shown in Figure 6.

Figure 6 Vehicle dynamics model

VeriStand plays an integration role in the virtual testing system, performing three main functions:

(1) Import the vehicle dynamics model into the PXI platform;

(2) Generate a virtual instrument and use the operation interface to monitor and interact with the running tasks in real time;

(3) Configure the I/O port and CAN communication data connection relationship.

The VeriStand system configuration and instrument display interface are shown in Figure 7.

Figure 7 VeriStand system configuration and instrument display interface

CarMaker's 3D scenes display the vehicle model's output as animated motion. VeriStand transmits the vehicle model's output data to CarMaker via CAN communication, and CarMaker can create different road conditions, making the testing process more diverse. A CarMaker 3D scene is shown in Figure 8.

Figure 8 CarMaker 3D scene

The virtual testing software transmits collected inputs such as throttle and steering wheel angle to the vehicle dynamics model running on the PXI. The dynamic effects of the vehicle model simulation are then displayed in the virtual reality software CarMaker. The vehicle dynamics model is built in MATLAB/Simulink and configured through the VeriStand simulation testing platform. Leveraging the advantages of the PXI platform, the simulation testing process is smoother and more real-time.

5. Integration and Application

The hardware and software of the EVS and PXI components are integrated to complete the development of a machine vision driver assistance system based on NI EVS and PXI. The physical prototype is shown in Figure 9.

Figure 9. Physical prototype of the driver assistance system

The following section, using lane detection as an example, introduces the application of a machine vision-based assisted driving technology platform.

Different road conditions were set up in CarMaker, and the detection results of the programmed machine vision system are shown in Figure 10.

Figure 10 Lane line detection results under different road conditions

Figure 10(a) shows the simplest road condition, with no interference; Figure 10(b) contains an intersection, Figure 10(c) contains road traffic signs, and Figure 10(d) shows a curve. As Figure 10 shows, lane lines are correctly detected in the absence of interference, and can still be detected when there is an intersection or traffic signs. However, lane lines cannot be accurately detected on the curve. This is mainly because the lane line detection algorithm does not handle curves, which needs to be addressed in future development.

6. Conclusions

To address the challenges in developing machine vision-based driver assistance systems, this system was developed using hardware design and software programming, based on system requirements analysis and leveraging the NI EVS and PXI platforms. The design and testing of the Lane Departure Warning (LDW) function demonstrate that this system can be applied to the development of machine vision-based driver assistance systems.

This system fully leverages the excellent image processing capabilities of the NI EVS platform and the powerful simulation testing functions of the PXI platform. During the machine vision development phase, the EVS hardware platform and software resources help designers quickly complete the modeling and programming steps, shortening the R&D cycle. The PXI platform's high reliability, good real-time performance, high hardware and software integration, and good scalability enable virtual testing of machine vision driver assistance systems with limited investment. The good compatibility between the EVS and PXI platforms accelerates the path of machine vision driver assistance technology from concept to product, laying the foundation for its early mass production and thereby improving the active safety performance of automobiles.
