A Brief Analysis of Automotive Sensor Fusion Systems


Autonomous driving relies on the coordinated efforts of the perception, control, and execution layers. Cameras, radar, and other sensors acquire information such as images, distance, and speed, acting as the eyes and ears of autonomous driving.

The control module analyzes and processes information, makes judgments, and issues commands, acting as the brain. Various components of the vehicle execute these commands, acting as the hands and feet. Environmental perception is the foundation of all this, making sensors indispensable for autonomous driving.

Three important sensors

Cameras: The Insightful Eyes of Intelligent Driving

In-vehicle cameras are fundamental to many ADAS warning and recognition functions. Among these, visual image processing is relatively mature and the most intuitive for drivers, and cameras are the foundation of any visual image processing system. This makes in-vehicle cameras indispensable for autonomous driving.

ADAS functions achievable by cameras

Many of the functions above can be achieved with the help of a camera, and some can only be achieved with one.

The price of in-vehicle cameras continues to decline, falling from over 300 yuan per unit in 2010 to around 200 yuan by 2014, which makes them easy to popularize and apply; equipping each vehicle with multiple cameras will become a trend.

Depending on the requirements of different ADAS functions, the installation location of cameras varies. Based on their installation location, cameras can be divided into four categories: front-view, side-view, rear-view, and built-in. To achieve full ADAS functionality in the future, a single vehicle will need to be equipped with at least five cameras.

Forward-facing camera

The forward-facing camera is the most frequently used, and a single camera can perform multiple functions such as dashcam recording, lane departure warning, forward collision warning, and pedestrian recognition. Forward-facing cameras typically use wide-angle lenses and are mounted at the rearview mirror or high on the windshield to achieve a longer effective range.

Side-view cameras are gradually replacing rearview mirrors. Because rearview mirrors cover a limited range, a vehicle approaching diagonally from behind can fall outside their field of view, creating a blind spot that significantly increases the likelihood of traffic accidents. Installing side-view cameras on both sides of the vehicle essentially covers this blind spot, and the system can automatically alert the driver when another vehicle enters it.

Panoramic parking system

The panoramic parking system uses multiple ultra-wide-angle cameras installed around the vehicle to capture images of the surroundings simultaneously. The image processing unit corrects and stitches these images into a panoramic top-down view of the vehicle's surroundings, which is transmitted in real time to the display on the center console.

The driver can have a "God's-eye view" of the vehicle's location and obstacles around it from inside the car.
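To make the stitching step concrete, below is a minimal Python sketch of how such a surround-view composite could be assembled, assuming each camera image has already been undistorted and that a ground-plane homography per camera is available from offline calibration. The canvas size, blending rule, and OpenCV usage here are illustrative assumptions, not a description of any particular production system.

```python
import cv2
import numpy as np

CANVAS_SIZE = (800, 800)  # width, height of the top-down composite in pixels

def stitch_surround_view(images, homographies):
    """Warp each undistorted camera frame onto a common ground plane and
    overlay the results into one bird's-eye-view composite."""
    canvas = np.zeros((CANVAS_SIZE[1], CANVAS_SIZE[0], 3), dtype=np.uint8)
    for img, H in zip(images, homographies):
        # H maps pixels of this camera onto the shared ground-plane canvas;
        # in practice it comes from extrinsic calibration of the camera.
        warped = cv2.warpPerspective(img, H, CANVAS_SIZE)
        mask = warped.any(axis=2)      # pixels actually covered by this camera
        canvas[mask] = warped[mask]    # naive overwrite blend for illustration
    return canvas
```

A real system would blend overlapping regions and correct brightness differences between cameras, but the core operation is the per-camera perspective warp shown above.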

Automotive cameras are widely used and relatively inexpensive, making them one of the most basic and common sensors. Compared with mobile phone cameras, however, automotive cameras operate under much harsher conditions and must withstand shock, magnetic interference, water, and high temperatures. Their manufacturing process is complex and technically demanding.

In particular, forward-facing cameras used for ADAS functions are critical to driving safety and must be extremely reliable, which raises the bar for their manufacturing process even further.

Automotive camera industry chain

Before a company can become a Tier 1 supplier to an OEM, its products must pass a wide range of rigorous tests. Once a supplier has been admitted into an OEM's Tier 1 system, however, it enjoys a high barrier to entry and is difficult to displace, because the cost of switching suppliers is too high: changing suppliers means the OEM must repeat the complex testing process.

Mobileye, a global leader in vision-based ADAS, began developing vision processing systems in 1999, but vehicles equipped with Mobileye products didn't hit the market until 2007. It took eight years from development to officially entering the OEM market. However, after becoming a Tier 1 supplier for numerous automakers, Mobileye has become an absolute oligopoly in this field.

Since its IPO in 2014, Mobileye has had a near 100% success rate in competing with other companies for smart car safety equipment bids from major automakers.

Millimeter-wave radar: a core sensor for ADAS

Millimeter waves have wavelengths between centimeter waves and light waves, thus combining the advantages of both microwave guidance and photoelectric guidance.

1) Compared with centimeter-wave seekers, millimeter-wave seekers have the advantages of small size, light weight and high spatial resolution;

2) Compared with optical seekers such as infrared and laser, millimeter-wave seekers have a stronger ability to penetrate fog, smoke and dust, a longer transmission distance, and are characterized by all-weather and all-time operation;

3) Stable performance, unaffected by the shape or color of the target object. Millimeter-wave radar effectively compensates for the limitations of other sensors such as infrared, laser, ultrasonic, and cameras in automotive applications.

Millimeter-wave radar typically has a detection range of 150m-250m, with some high-performance models reaching up to 300m, meeting the needs of detecting a large area while a car is moving at high speed. At the same time, millimeter-wave radar offers high detection accuracy.

Millimeter-wave radar applied to adaptive cruise

These characteristics allow millimeter-wave radar to monitor a large area around a moving vehicle and to measure the speed, acceleration, and distance of vehicles ahead more accurately. It is therefore the preferred sensor for adaptive cruise control (ACC) and automatic emergency braking (AEB).
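The distance and relative-speed figures come straight from the physics of frequency-modulated continuous-wave (FMCW) radar. As a rough illustration (the chirp parameters and readings below are hypothetical and not tied to any particular radar product), the standard FMCW relations can be written in a few lines of Python:

```python
C = 3.0e8  # speed of light, m/s

def fmcw_range(f_beat_hz, bandwidth_hz, chirp_time_s):
    """Range from the beat frequency of one chirp: R = c * f_beat / (2 * slope)."""
    slope = bandwidth_hz / chirp_time_s          # chirp slope in Hz/s
    return C * f_beat_hz / (2.0 * slope)         # metres

def fmcw_radial_velocity(f_doppler_hz, carrier_hz=77e9):
    """Relative (radial) speed from the Doppler shift: v = wavelength * f_d / 2."""
    wavelength = C / carrier_hz                  # about 3.9 mm at 77 GHz
    return wavelength * f_doppler_hz / 2.0       # m/s

# Hypothetical example: 1 GHz sweep over 50 us, 20 MHz beat, 15.4 kHz Doppler
print(fmcw_range(20e6, 1e9, 50e-6))              # ~150 m to the target
print(fmcw_radial_velocity(15.4e3))              # ~30 m/s closing speed
```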

Currently, the unit price of a 77GHz millimeter-wave radar system is around 250 euros, and the high price limits the application of millimeter-wave radar in vehicles.

LiDAR: Powerful Functions

LiDAR boasts superior performance and is considered the optimal technology for autonomous driving. LiDAR offers significantly better performance compared to other autonomous driving sensors:

High resolution. LiDAR can achieve extremely high angular, range, and velocity resolution, meaning it can obtain very clear images using Doppler imaging technology.

High precision. Lasers propagate in straight lines with good directionality and a very narrow, low-divergence beam, so LiDAR offers very high precision.

Strong resistance to active interference. Microwave and millimeter-wave radar are susceptible to the electromagnetic waves widely present in the environment, but few natural signal sources can interfere with a laser, so LiDAR resists active interference well.

Spatial modeling with LiDAR

3D LiDAR is typically mounted on the roof of a vehicle and can rotate at high speed to acquire point cloud data of the surrounding space, thereby creating a real-time 3D spatial map of the vehicle's surroundings. Simultaneously, LiDAR can measure the distance, speed, acceleration, and angular velocity of other vehicles in three directions. Combined with GPS maps, this data is used to calculate the vehicle's position. This vast and rich dataset is transmitted to the ECU for analysis and processing, enabling the vehicle to make rapid decisions.
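Each LiDAR return is essentially a range plus two angles, and turning a sweep of such returns into the 3D point cloud described above is a simple coordinate conversion. The snippet below is an illustrative sketch (the angles and ranges are made-up values), not the interface of any specific LiDAR driver:

```python
import numpy as np

def polar_to_cartesian(ranges_m, azimuth_rad, elevation_rad):
    """Convert per-return range, azimuth and elevation into x/y/z points
    in the sensor frame, producing an (N, 3) point cloud."""
    x = ranges_m * np.cos(elevation_rad) * np.cos(azimuth_rad)
    y = ranges_m * np.cos(elevation_rad) * np.sin(azimuth_rad)
    z = ranges_m * np.sin(elevation_rad)
    return np.stack([x, y, z], axis=-1)

# Hypothetical mini-sweep: four returns at 10 m, spaced 90 degrees apart
azimuth = np.deg2rad([0.0, 90.0, 180.0, 270.0])
elevation = np.zeros_like(azimuth)
cloud = polar_to_cartesian(np.full(4, 10.0), azimuth, elevation)
print(cloud)   # each row is one point; stacking many sweeps builds the 3D map
```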

LiDAR automotive solutions:

Map-centric: Autonomous driving by internet companies like Google and Baidu is map-centric, mainly because LiDAR can create high-precision maps for these companies.

Car-centric: For most automakers, they want a LiDAR product that is specifically tailored to their vehicles.

LiDAR on Baidu's self-driving cars

First, compared to the bulky "flower pot" scanners used for surveying, compact LiDAR is a better match for cars. To balance aesthetics and the drag coefficient, autonomous vehicles should not look different from ordinary cars, so LiDAR should be made as small as possible and embedded directly in the body, which means minimizing or even eliminating mechanical rotating parts.

Therefore, automotive LiDAR systems do not use large rotating structures; instead, the rotating components are integrated into the product's internal structure during manufacturing. For example, Ibeo's LUX LiDAR product uses a fixed laser source and changes the laser beam direction by rotating an internal glass plate to achieve multi-angle detection.

Quanergy's S3 is an all-solid-state product that uses optical phased array technology and has no rotating parts inside.

However, good things are expensive: LiDAR units still cost tens of thousands apiece, making them difficult to commercialize.

Finally, let's compare the performance of these three sensors:

Changes triggered by an accident

In May 2016, a Tesla in Florida, USA, with Autopilot engaged, collided with a white heavy truck, resulting in the death of the Tesla driver.

This incident, dubbed the "world's first fatal accident involving autonomous driving," has raised concerns among many about the safety of autonomous driving and cast a shadow over Tesla.

Tesla

After the accident came to light, Tesla ended its cooperation with its vision recognition system supplier Mobileye and in September pushed out the V8.0 system via an OTA update, which enhanced the role of the millimeter-wave radar and promoted it to the main control sensor.

Tesla's V7.0 autonomous driving system primarily relied on image recognition, with millimeter-wave radar serving as an auxiliary sensor. The V8.0 system significantly altered the entire technical solution: it now primarily uses millimeter-wave radar, supplemented by image recognition. The radar's detection range is now six times greater than before, greatly enhancing Tesla's ability to recognize obstacles ahead.

In October 2016, Tesla released Autopilot 2.0, announcing that all future models would carry the hardware needed for fully autonomous driving. Tesla also stated that the safety of autonomous driving based on this hardware would improve to an unprecedented level.

Hardware comparison between Autopilot 2.0 and Autopilot 1.0

Tesla's Full Self-Driving hardware system includes:

1) Eight cameras installed around the vehicle body provide visibility at ranges of up to 250 meters;

2) Equipped with 12 ultrasonic sensors to assist in detection;

3) The upgraded and enhanced millimeter-wave radar can operate in adverse weather conditions and detect vehicles ahead;

4) The performance of the onboard computer is 40 times that of the previous generation, significantly improving computing power.

The biggest hardware change in Tesla's Autopilot 2.0 fully autonomous driving system is the increase in the number of cameras, from one to eight. This indicates that Tesla's perception technology has shifted from relying on cameras to emphasizing radar, and finally back to cameras.

Tesla's constantly changing choice of main control sensors indicates that there is no completely fixed technical route for the sensing end, and Tesla itself is constantly moving forward through exploration.

Mobileye

In fact, it was Mobileye that proposed the "breakup" with Tesla.

After more than a decade of research and innovation, Mobileye has become the absolute leader in vision-based ADAS products, thanks to the advanced vision algorithms on its EyeQ series chips that enable a variety of ADAS functions.

Since the development of the first-generation EyeQ product in 2007, Mobileye has collaborated with STMicroelectronics to continuously upgrade chip technology and optimize vision algorithms. The EyeQ3 product is now 48 times faster than the first-generation product.

Mobileye's EyeQ series product upgrade status

As we can see from the table, the first three generations of products only featured a single camera. Currently, the EyeQ4 and EyeQ5 product plans have been released, with the EyeQ4 starting to use a multi-camera solution. It is expected that in the future, through chip upgrades and algorithm optimization, Mobileye's chip algorithms will integrate more sensors, launching a solution combining multi-camera, millimeter-wave radar, and LiDAR to fully support autonomous driving.

In July 2016, Mobileye announced the end of its partnership with Tesla, with EyeQ3 being the last collaboration between the two companies. Almost simultaneously, Mobileye also announced a partnership with Intel and BMW. This March, Intel acquired Mobileye at a premium of over 33%.

In fact, the deeper reason for Mobileye's termination of its cooperation with Tesla lies in:

1) Different styles and strategies. Mobileye is relatively conservative, while Tesla is relatively aggressive. Therefore, Mobileye prefers to cooperate with traditional automakers.

2) Data ownership is disputed. Mobileye proposed a concept called REM, in which data would be shared by the members who join. However, Tesla, which has accumulated the most mileage and data, is unwilling to share its data with other car manufacturers for free.

Tesla, in any case, is just one of Mobileye's many automotive customers. Through its strong alliance with Intel, Mobileye can draw on Intel's chip-level resources to build powerful vision-based algorithms, achieve sensor fusion, and keep pushing its vision algorithms toward full autonomous driving.

Trend – Multi-sensor fusion

Comparing Tesla's and Mobileye's product upgrades, we find that although the former partners have gone their separate ways, their approaches remain essentially the same: both improve autonomous driving capability by adding more sensors and fusing the data from multiple sensors.

The main causes of the Tesla accident described above are:

The millimeter-wave radar may have misjudged the range. The radar could detect a large obstacle ahead, but because the trailer sat high off the ground and presented a large reflective surface, it may have been mistaken for a traffic sign suspended above the road.

Strong light can blind the camera

The front-facing camera (running EyeQ3) may also have misjudged the scene. The trailer involved was broadside to the car, entirely white, and carried no high-visibility markings; in strong sunlight, the image recognition system could easily mistake it for a cloud.

In extreme cases, both Tesla's millimeter-wave radar and front-facing camera made misjudgments. This demonstrates that the camera + millimeter-wave radar solution lacks redundancy and has poor fault tolerance, making it difficult to fulfill the mission of autonomous driving. Multiple sensor information fusion and comprehensive judgment are necessary.

Each sensor has its own strengths and weaknesses, and none can fully replace another. To achieve autonomous driving, multiple sensors will have to work together to form the vehicle's perception system: different sensors rely on different principles, serve different functions, and play their respective roles in different scenarios.

Multiple sensors of the same or different types obtain information from different local areas and categories. This information may complement each other, or it may be redundant and contradictory. Ultimately, the control center can only issue a single correct instruction. This requires the control center to fuse the information obtained from multiple sensors and make a comprehensive judgment.

Imagine if one sensor tells the car to brake immediately, while another sensor says it's safe to continue driving, or if one sensor tells the car to turn left while another tells it to turn right. In such cases, without fusion of sensor information, the car will be "confused and at a loss," potentially leading to an accident.

Therefore, to ensure safety when using multiple sensors, information fusion is essential. Multi-sensor fusion significantly improves system redundancy and fault tolerance, thereby ensuring rapid and accurate decision-making, and is an inevitable trend in autonomous driving.

Multi-sensor fusion requirements:

1) At the hardware level, there must be enough sensors of different types to ensure sufficient information acquisition and redundancy;

2) At the software level, the algorithm must be sufficiently optimized, the data processing speed must be fast enough, and the fault tolerance must be good to ensure the speed and correctness of the final decision.

Algorithms are the core of multi-sensor fusion

Simply put, sensor fusion is the process of combining and analyzing the data and information from multiple sensors to describe the external environment more accurately and reliably, thereby improving the correctness of the system's decisions.
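In the simplest case this is just a precision-weighted average: measurements from the more trustworthy sensor are given more weight, and the fused estimate is more certain than either input alone. A toy Python sketch, with made-up radar and camera readings of the distance to a lead vehicle:

```python
def fuse_two_estimates(z1, var1, z2, var2):
    """Precision-weighted fusion of two independent estimates of the same
    quantity; the fused variance is never larger than either input variance."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)
    return fused, fused_var

# Hypothetical readings: radar 52.0 m (sigma 0.5 m), camera 55.0 m (sigma 2.0 m)
estimate, variance = fuse_two_estimates(52.0, 0.5**2, 55.0, 2.0**2)
print(estimate, variance)   # ~52.18 m, ~0.235 m^2 (sigma ~0.48 m)
```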

The basic principle of multi-sensor fusion

The basic principle of multi-sensor fusion is similar to the human brain's process of integrating environmental information. Humans perceive the external environment by transmitting information detected by their senses (various sensors) such as eyes, ears, nose, and limbs to the brain (information fusion center), where it is integrated with prior knowledge (database) to make a rapid and accurate assessment of the surrounding environment and ongoing events.

There are three multi-sensor fusion architectures: distributed, centralized, and hybrid (a toy numerical comparison of the first two follows the descriptions below).

1) Distributed. The raw data obtained from each independent sensor is first processed locally, and then the results are sent to the information fusion center for intelligent optimization and combination to obtain the final result. Distributed systems have low communication bandwidth requirements, fast computing speed, and good reliability and continuity, but the tracking accuracy is far lower than that of centralized systems.

2) Centralized. The centralized approach sends the raw data from each sensor directly to the central processing unit for fusion processing, enabling real-time fusion. It offers high data processing accuracy and flexible algorithms, but its drawbacks include high processor requirements, lower reliability, and large data volume, making it difficult to implement.

3) Hybrid Approach. In a hybrid multi-sensor information fusion framework, some sensors employ a centralized fusion method, while the remaining sensors use a distributed fusion method. The hybrid fusion framework has strong adaptability, combining the advantages of both centralized and distributed fusion, and exhibits high stability. However, the structure of the hybrid fusion method is more complex than the previous two methods, thus increasing the communication and computational costs.
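As a rough numerical illustration of the centralized-versus-distributed distinction (the sensors, noise levels, and readings below are invented for the example), the difference lies in whether raw samples or only locally reduced estimates reach the fusion center:

```python
import numpy as np

# Two hypothetical sensors observe the same scalar quantity, e.g. the distance
# to an obstacle, with different noise levels (sigma 0.2 m and 0.8 m).
raw_a = np.array([50.2, 49.8, 50.1])
raw_b = np.array([50.9, 49.1, 50.5])
var_a, var_b = 0.2**2, 0.8**2

# Centralized: every raw sample is sent to the fusion center and weighted
# by the precision (1/variance) of the sensor that produced it.
samples = np.concatenate([raw_a, raw_b])
weights = np.concatenate([np.full(raw_a.size, 1 / var_a),
                          np.full(raw_b.size, 1 / var_b)])
centralized = np.sum(weights * samples) / np.sum(weights)

# Distributed: each sensor first reduces its raw data to a local estimate,
# and only the two estimates (plus variances) travel to the fusion center,
# which cuts the required communication bandwidth.
est_a, est_var_a = raw_a.mean(), var_a / raw_a.size
est_b, est_var_b = raw_b.mean(), var_b / raw_b.size
distributed = (est_a / est_var_a + est_b / est_var_b) / (1 / est_var_a + 1 / est_var_b)

print(centralized, distributed)  # identical in this linear toy case; in practice
                                 # local preprocessing discards information, which
                                 # is why centralized fusion can track more accurately
```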

Comparison of three sensor fusion architectures

Because the use of multiple sensors greatly increases the amount of information that needs to be processed, including contradictory information, it is crucial to ensure that the system can process data quickly, filter out useless and erroneous information, and thus ensure that the system makes timely and correct decisions.

Currently, theoretical methods for multi-sensor fusion include Bayesian inference, Kalman filtering, Dempster-Shafer (DS) evidence theory, fuzzy set theory, and artificial neural networks.
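Of these, Kalman filtering is probably the most widely cited building block. Below is a minimal one-dimensional example tracking the distance and range rate of a lead vehicle from noisy range measurements; the motion model, noise values, and readings are all hypothetical and serve only to show the predict/update structure:

```python
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    """One predict/update cycle of a linear Kalman filter."""
    # Predict the state forward with the motion model
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the new measurement
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Hypothetical setup: state = [distance, range rate] of the lead vehicle,
# updated from radar range readings arriving every 50 ms.
dt = 0.05
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity motion model
Q = np.diag([0.01, 0.1])                # process noise
H = np.array([[1.0, 0.0]])              # we only measure distance
R = np.array([[0.25]])                  # measurement noise (sigma 0.5 m)
x, P = np.array([50.0, 0.0]), np.eye(2)

for z in [49.8, 49.5, 49.1]:            # made-up range readings
    x, P = kalman_step(x, P, np.array([z]), F, Q, H, R)
print(x)                                # filtered distance and range rate
```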

As our analysis above shows, multi-sensor fusion is not difficult to implement at the hardware level; the key and challenging aspects lie in the algorithm. While the hardware and software of multi-sensor fusion are difficult to separate, the algorithm is the crucial element and presents a high technological barrier. Therefore, the algorithm will occupy a major part of the value chain.

Conclusion

Driven by the wave of autonomous driving, domestic automakers have a stronger demand for intelligent and electronic technologies than joint venture automakers. This has created opportunities for domestic first- and second-tier auto parts suppliers in this field. Over the past few years, the auto parts industry has been continuously laying the groundwork and waiting for the market to open up.

Compared to the control and execution layers, which are largely controlled by internet giants, OEMs, and Tier 1 suppliers, the sensor layer has more dispersed component suppliers and relatively lower barriers to entry, with a shorter entry cycle. The sensor layer remains the easiest entry point for domestic companies to enter the autonomous driving industry.
