[Introduction] Most autonomous vehicles today rely on sensor fusion: collecting environmental information by analyzing and integrating data from multiple sensors, such as millimeter-wave radar, lidar, and cameras. As industry leaders have demonstrated, multi-sensor fusion improves the performance of autonomous driving systems and makes driving safer.
However, not all sensor fusion produces the same results. Many autonomous vehicle manufacturers rely on "target-level" sensor fusion, but only centralized front-end sensor fusion provides the information needed for optimal driving decisions. Below we explain the difference between the two approaches and why centralized front-end fusion is indispensable. The key point is that centralized front-end fusion preserves raw sensor data, enabling more accurate decisions. Autonomous driving systems rely on a dedicated set of sensors to collect low-level raw data about their environment, and each type of sensor has its own advantages and disadvantages, as shown in the figure.
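Since the figure is not reproduced here, the short sketch below summarizes the trade-offs commonly cited for these three sensor types; the specific attributes in the original figure may differ.

```python
# General summary of commonly cited sensor trade-offs, standing in for the
# figure referenced above (the original figure is not reproduced, so the
# attributes it listed may differ from these).
SENSOR_TRADEOFFS = {
    "camera": {
        "strengths":  ["high resolution", "color/texture for classification"],
        "weaknesses": ["poor in low light and glare", "no direct range or velocity"],
    },
    "millimeter-wave radar": {
        "strengths":  ["direct range and velocity", "robust in rain, fog, and darkness"],
        "weaknesses": ["low angular resolution", "limited object classification"],
    },
    "lidar": {
        "strengths":  ["precise 3D geometry", "accurate range"],
        "weaknesses": ["higher cost", "degraded in heavy rain, fog, or snow"],
    },
}

for sensor, props in SENSOR_TRADEOFFS.items():
    print(f"{sensor}: + {props['strengths']} / - {props['weaknesses']}")
```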
Integrating multiple sensors, including millimeter-wave radar, lidar, and cameras, maximizes the quality and quantity of the collected data and produces a complete picture of the environment. The advantages of multi-sensor fusion over processing each sensor individually are widely accepted by autonomous vehicle manufacturers, but this fusion typically occurs at the "target level", after processing. In that model, object data collection, processing, fusion, and classification all happen at the sensor level. Before the data ever reaches a central processor, each sensor pre-filters its own information, discarding much of the contextual information that autonomous driving decisions depend on. This makes it difficult for target-level fusion to meet the needs of future autonomous driving algorithms.

Centralized front-end sensor fusion avoids these risks. Millimeter-wave radar, lidar, and camera sensors send their raw data to the vehicle's central domain controller for processing. This approach maximizes the amount of information available to the autonomous driving system, giving algorithms access to all of the valuable information and enabling better decisions than target-level fusion. AI-enhanced millimeter-wave radar further improves the performance of autonomous driving systems through centralized processing.
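To make the distinction concrete, here is a minimal, purely illustrative sketch in Python. The sensor readings and thresholds are hypothetical; the point is only to show how per-sensor pre-filtering in target-level fusion can discard evidence that centralized raw-data fusion would retain.

```python
# Toy comparison of target-level (object-level) fusion versus centralized
# raw-data fusion. Values and thresholds are made up for illustration.

# Hypothetical "raw" confidence readings for the same three locations.
radar_raw  = [0.9, 0.4, 0.1]    # radar returns per cell
camera_raw = [0.8, 0.45, 0.05]  # camera detection scores per cell

SENSOR_THRESHOLD = 0.5   # each smart sensor only reports "objects" above this
FUSED_THRESHOLD  = 0.8   # central decision threshold on combined evidence


def target_level_fusion(radar, camera):
    """Each sensor pre-filters locally; only object lists are merged."""
    radar_objects  = {i for i, v in enumerate(radar)  if v >= SENSOR_THRESHOLD}
    camera_objects = {i for i, v in enumerate(camera) if v >= SENSOR_THRESHOLD}
    return radar_objects | camera_objects


def raw_data_fusion(radar, camera):
    """The central processor sees all raw values and combines evidence first."""
    return {i for i, (r, c) in enumerate(zip(radar, camera))
            if r + c >= FUSED_THRESHOLD}


print("target-level fusion detects cells:", target_level_fusion(radar_raw, camera_raw))
print("raw-data fusion detects cells:   ", raw_data_fusion(radar_raw, camera_raw))
# Cell 1 (radar 0.4 + camera 0.45) is lost by target-level fusion because each
# sensor filtered it out on its own, but the combined evidence clears the
# central threshold once the raw data reaches the domain controller.
```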
Autonomous driving systems already process camera data centrally, but centralized processing of millimeter-wave radar data has so far been impractical. High-performance millimeter-wave radar typically requires hundreds of antenna channels, which greatly increases the amount of raw data generated, so processing that data locally at the sensor has been the more cost-effective option. Ambarella's AI-enhanced millimeter-wave radar perception algorithms, however, improve radar angular resolution and performance without additional physical antennas. Raw radar data from fewer channels can then be transmitted to the central processor at lower cost over interfaces such as standard automotive Ethernet. When the autonomous driving system fuses this raw, AI-enhanced radar data with raw camera data, it can fully exploit these two complementary sensing modalities to build a picture of the environment that is more complete than what either sensor provides on its own.
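As a rough illustration of why channel count matters, the sketch below uses the classical rule of thumb that angular resolution scales with wavelength over array aperture for a uniform linear array. This is a generic approximation, not Ambarella's algorithm; the channel counts are hypothetical.

```python
# Rule-of-thumb sketch: classical radar angular resolution ~ wavelength / aperture.
# Fewer channels mean less raw data but coarser resolution, unless software
# (e.g. AI super-resolution) compensates.
import math

WAVELENGTH_M = 3e8 / 77e9            # ~3.9 mm for a 77 GHz automotive radar
ELEMENT_SPACING_M = WAVELENGTH_M / 2 # half-wavelength spacing, a common choice


def classical_angular_resolution_deg(num_channels: int) -> float:
    """Approximate beamwidth-limited resolution for a uniform linear array."""
    aperture = num_channels * ELEMENT_SPACING_M
    return math.degrees(WAVELENGTH_M / aperture)


for channels in (48, 96, 192):
    print(f"{channels:4d} channels -> ~{classical_angular_resolution_deg(channels):.2f} deg")
# If AI processing can recover fine angular detail from fewer physical channels,
# the radar front end produces less raw data, which is what makes streaming it
# to a central domain controller over automotive Ethernet practical.
```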
Continuous iteration of millimeter-wave radar has helped reduce cost while significantly improving the performance of autonomous driving systems. Traditional low-cost radars, once mass-produced, can cost less than $50 per unit, an order of magnitude lower than the target cost of lidar. Combined with ubiquitous low-cost camera sensors, AI-enhanced radar delivers the accuracy needed for large-scale commercial production of autonomous vehicles. The capabilities of lidar also largely overlap with those of a camera/millimeter-wave-radar fusion system running AI perception algorithms; if lidar costs continue to fall, lidar can serve as a safety redundancy alongside cameras and millimeter-wave radar in L4/L5 autonomous driving systems.
Algorithm-first, centrally processed architectures deepen sensor fusion and optimize autonomous driving system performance. Target-level sensor fusion is limited because every front-end sensor carries its own local processor, which constrains the size, power consumption, and resource allocation of each smart sensor and, in turn, the performance of the entire autonomous driving system. Processing large amounts of data in this distributed way also drains the vehicle's battery quickly and shortens its driving range. In contrast, algorithm-first, centrally processed architectures enable what we call deep, centralized front-end sensor fusion.
This technology optimizes autonomous driving system performance by leveraging state-of-the-art semiconductor process nodes. Above all, it distributes processing power dynamically across all sensors, boosting particular sensors and data flows as the driving scenario demands. With access to high-quality, low-level raw data, the central processor can make smarter and more accurate driving decisions. Autonomous vehicle manufacturers can pair low-power millimeter-wave radar and camera sensors with cutting-edge, algorithm-first application-specific processors, such as Ambarella's recently announced 5nm CV3 AI domain controller chip, which delivers leading perception and path-planning performance with high energy efficiency, extending each autonomous vehicle's driving range while reducing battery consumption.

Don't abandon sensors; invest in their fusion. Autonomous driving systems require diverse data to make correct driving decisions, and only deep, centralized sensor fusion can provide the breadth of data needed for optimal performance and safety.
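The idea of dynamically distributing processing power can be sketched as follows. The scenario names, budget figures, and allocation weights are illustrative assumptions, not a description of CV3 behavior.

```python
# Hypothetical sketch of dynamic compute allocation in a centralized,
# algorithm-first architecture: the domain controller re-weights how much of a
# fixed processing budget each sensor stream receives per driving scenario.

TOTAL_TOPS = 100  # assumed total AI compute budget on the central SoC

# Fraction of the budget given to each raw sensor stream, per scenario (made up).
ALLOCATION = {
    "highway_clear": {"camera": 0.55, "radar": 0.25, "planning": 0.20},
    "night_rain":    {"camera": 0.30, "radar": 0.50, "planning": 0.20},
    "urban_dense":   {"camera": 0.45, "radar": 0.30, "planning": 0.25},
}


def budget_for(scenario: str) -> dict:
    """Return the per-stream compute budget (in TOPS) for a scenario."""
    weights = ALLOCATION[scenario]
    return {stream: round(TOTAL_TOPS * w, 1) for stream, w in weights.items()}


print(budget_for("night_rain"))
# -> {'camera': 30.0, 'radar': 50.0, 'planning': 20.0}
# In poor visibility the controller can shift compute toward radar, something
# fixed per-sensor processors cannot do.
```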
In our ideal model:

1. Low-power, AI-enhanced millimeter-wave radar and camera sensors connect locally to embedded processors at the periphery of the autonomous vehicle.
2. The embedded processors send raw, detection-level object data to a central domain SoC.
3. Using AI, the central domain processor analyzes the combined data to identify objects and make driving decisions (this flow is sketched in the example below).

Centralized front-end sensor fusion improves on existing high-level fusion architectures, making sensor-fusion-based autonomous vehicles more powerful and reliable. To reap these benefits, autonomous vehicle manufacturers must invest in algorithm-first central processors and in AI-enabled millimeter-wave radar and camera sensors. Through these combined efforts, manufacturers can usher in the next phase of technological transformation in the development of autonomous vehicles.
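The three-step model above can be mirrored in a minimal sketch. Class and function names are hypothetical; this only traces the data flow (edge sensor to raw detections to central domain SoC to fused decision), not any real product API.

```python
# Minimal sketch of the three-step pipeline: edge nodes forward raw,
# unthresholded detections; the central SoC fuses them and decides.
from dataclasses import dataclass
from typing import List


@dataclass
class RawDetection:
    sensor: str        # "radar" or "camera"
    position_m: tuple  # (x, y) in vehicle coordinates (placeholder values)
    confidence: float  # unthresholded evidence, passed through to the SoC


class EdgeSensorNode:
    """Steps 1-2: a low-power edge processor that forwards raw detections."""
    def __init__(self, sensor: str):
        self.sensor = sensor

    def read(self) -> List[RawDetection]:
        # Placeholder measurement; a real node would stream sensor frames.
        return [RawDetection(self.sensor, (12.0, 1.5), 0.42)]


class CentralDomainSoC:
    """Step 3: fuse all raw detections and make a driving decision."""
    def fuse_and_decide(self, detections: List[RawDetection]) -> str:
        evidence = sum(d.confidence for d in detections)
        return "brake" if evidence >= 0.8 else "maintain_speed"


nodes = [EdgeSensorNode("radar"), EdgeSensorNode("camera")]
soc = CentralDomainSoC()
raw = [d for node in nodes for d in node.read()]
print(soc.fuse_and_decide(raw))  # -> "brake": combined evidence 0.84 crosses the threshold
```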