Both the SAE standard and China's national standard primarily consider two points when classifying driving automation: how many of the driving tasks the system performs for the human, and who is responsible when the system cannot handle a situation. As the level rises, the vehicle takes on more and more of the task and the human driver's supervisory role gradually weakens, until the highest-level system can complete the entire driving task independently, in any scenario and at any time.
Lower-level autonomous driving primarily assists the driver, such as maintaining a safe distance on highways or making minor lane adjustments. Intermediate levels begin to offer hands-off operation in limited scenarios: the system can handle driving for a stretch but may still request human takeover. Higher-level driving encompasses many uncertain and complex situations, requiring the system to autonomously assess and handle various emergencies. This shift from assistance to autonomy looks like quantitative accumulation, but it actually involves a qualitative leap in perception, decision-making, control, redundancy, and verification.
From perception to decision-making: capabilities improve step by step
The technological framework of autonomous driving can be roughly broken down into several main lines: perception (seeing the surroundings), localization and mapping (knowing the location), prediction and decision-making (judging the next move of others and deciding what to do), control (translating decisions into actions), and human-machine interaction and degradation strategies (how to hand over to humans or safely stop in case of anomalies). At lower levels, these modules can be relatively simple; at intermediate and higher levels, they not only need to be more precise, but also have provable reliability and fault tolerance.
Lower-level systems may get by with a single sensor for basic functions, such as radar for adaptive cruise control or a front-facing camera for lane departure warning. At higher levels, the limitations of any single sensor become increasingly apparent. Cameras are sensitive to strong backlight, nighttime conditions, and rain or snow; millimeter-wave radar trails cameras in recognizing fine shape detail; and lidar performance fluctuates under certain weather conditions. Advanced systems therefore employ multi-sensor fusion, combining data from different sources to improve robustness. Fusion is not simply a matter of piling data together; it also involves temporal alignment, coordinate transformation, confidence assessment of each sensor, and anomaly detection, all of which are substantial engineering tasks.
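The confidence-assessment and staleness-rejection steps above can be sketched with a toy inverse-variance fusion of distance estimates. The `Measurement` fields, the variance values, and the 0.1 s freshness window are illustrative assumptions, not taken from any production stack:

```python
from dataclasses import dataclass

@dataclass
class Measurement:
    value: float      # estimated distance to lead vehicle, metres
    variance: float   # sensor noise variance (lower = more trusted)
    stamp: float      # measurement timestamp, seconds

def fuse(measurements, now, max_age=0.1):
    """Inverse-variance fusion of time-aligned distance estimates.

    Stale readings are rejected before fusion, mirroring the
    temporal-alignment and confidence steps described above.
    """
    fresh = [m for m in measurements if now - m.stamp <= max_age]
    if not fresh:
        raise RuntimeError("no usable sensor data: trigger degraded mode")
    weights = [1.0 / m.variance for m in fresh]
    total = sum(weights)
    return sum(w * m.value for w, m in zip(weights, fresh)) / total

# At night the camera is noisier (higher variance), so radar dominates.
cam = Measurement(value=52.0, variance=4.0, stamp=0.98)
radar = Measurement(value=50.0, variance=1.0, stamp=1.00)
fused = fuse([cam, radar], now=1.0)
```

A real pipeline would fuse full state vectors (typically with a Kalman filter) rather than scalars, but the weighting-by-confidence idea is the same.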
Localization and mapping are also crucial for improving the level of autonomous driving. Basic positioning using GNSS and inertial navigation can meet the needs of low-speed urban assistance, but when lane-level precision is required for decision-making, high-precision positioning and high-resolution maps become necessary. Maps not only contain geometric information but also need to semantically describe lane connectivity, traffic rules, and historical events. Furthermore, maps must be up-to-date; road construction, temporary closures, and changes in lane markings can all cause map inconsistencies. The system needs to be designed with strategies to handle map inconsistencies and cannot assume that maps are always accurate.
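One concrete form of the "do not assume the map is accurate" strategy is a consistency check between mapped and perceived lane geometry. The lane-offset representation and the 0.3 m tolerance below are simplified assumptions for illustration:

```python
def check_map_consistency(map_lane_offsets, observed_offsets, tol=0.3):
    """Compare lane-centre lateral offsets (metres) from the HD map
    against what perception currently sees; a mismatch should trigger
    a fallback (e.g. trust live perception, reduce speed, or hand over)."""
    if len(map_lane_offsets) != len(observed_offsets):
        return False
    return all(abs(m - o) <= tol
               for m, o in zip(map_lane_offsets, observed_offsets))

map_lanes = [-3.5, 0.0, 3.5]
trusted = check_map_consistency(map_lanes, [-3.4, 0.1, 3.4])   # map still valid
mismatch = check_map_consistency(map_lanes, [-2.0, 1.5, 5.0])  # construction zone
```

The important design point is that the check produces an explicit signal the planner can act on, rather than silently preferring one source.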
Prediction and decision-making are the core of "intelligence" in autonomous driving. Lower-level decisions are often based on rules and simple models, while mid- to high-level decisions require trajectory prediction of dynamic targets, evaluation of the risks and benefits of multiple alternative strategies, and real-time selection. This demands more complex models, greater computing power, and stricter latency control. At this level, decision-making must also include explicit mechanisms for accident avoidance and safety constraints; it cannot simply seek the optimal path but must also ensure a safe escape route in extreme or abnormal situations.
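The idea of combining utility maximization with a hard safety constraint and a guaranteed escape route can be sketched as follows. The candidate fields, the time-to-collision floor, and the `stop_in_lane` fallback are hypothetical placeholders:

```python
def choose_maneuver(candidates, min_ttc=2.0):
    """Pick the highest-utility candidate whose worst-case
    time-to-collision stays above a hard safety floor; if none
    qualifies, fall back to a guaranteed-safe stop-in-lane option
    instead of relaxing the constraint."""
    safe = [c for c in candidates if c["worst_ttc"] >= min_ttc]
    if not safe:
        return {"name": "stop_in_lane", "utility": 0.0,
                "worst_ttc": float("inf")}
    return max(safe, key=lambda c: c["utility"])

candidates = [
    {"name": "overtake", "utility": 0.9, "worst_ttc": 1.4},  # fast but risky
    {"name": "follow",   "utility": 0.6, "worst_ttc": 3.8},  # slower but safe
]
best = choose_maneuver(candidates)
```

Note that the safety constraint is a filter, not a penalty term: no amount of utility can buy back an unsafe trajectory.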
As autonomous driving levels rise, the requirements on control evolve from smoothly executing longitudinal and lateral maneuvers to keeping the vehicle stable even through hardware failover and partial loss of communication. Control algorithms need to account for the physical limits of actuators, vehicle dynamics, and conservative fallback strategies in the event of sensor degradation. The software architecture must support hot redundancy and failover to ensure the system does not become uncontrollable when some modules fail.
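A minimal sketch of the hot-redundancy idea is a watchdog that latches over to a conservative backup controller when the primary's heartbeat times out. The 0.2 s timeout and the command strings are illustrative assumptions:

```python
class FailoverController:
    """Hot-standby sketch: a watchdog checks the primary controller's
    heartbeat each cycle and latches to a conservative backup on
    timeout, so the vehicle is never left without an active controller."""

    def __init__(self, timeout=0.2):
        self.timeout = timeout
        self.last_heartbeat = 0.0
        self.active = "primary"

    def heartbeat(self, now):
        self.last_heartbeat = now

    def command(self, now, primary_cmd, backup_cmd):
        if now - self.last_heartbeat > self.timeout:
            self.active = "backup"   # latch: do not flap back automatically
        return backup_cmd if self.active == "backup" else primary_cmd

ctrl = FailoverController()
ctrl.heartbeat(now=0.0)
cmd = ctrl.command(now=0.1, primary_cmd="steer+throttle",
                   backup_cmd="limp_home")
```

Latching matters: switching back to a possibly-faulty primary mid-maneuver could itself introduce a new hazard, which is why the sketch requires an explicit reset rather than flapping on the next heartbeat.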
Turning "able to drive" into "allowed to drive".
The ultimate goal of autonomous driving systems is not simply to enable them to drive independently in a closed environment, but to transform autonomous driving from merely "being able to do it" to "reliably and demonstrably doing it." This requires redundant design, a rigorous verification system, and supporting regulations and standards. Redundancy is not simply about adding similar equipment; it requires cross-level and cross-module design, such as redundancy at the sensor level, computing platform level, and power and communication link level. The core purpose of redundancy design is to avoid critical safety failures caused by single points of failure, while ensuring that the switching process is rapid and does not introduce new hazards.
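One classic pattern for avoiding single points of failure at the sensing or computing level is triplex 2-out-of-3 voting. The scalar channels and 0.5 agreement tolerance below are simplified assumptions; real systems vote on richer state and also diagnose which channel disagreed:

```python
def vote_2oo3(a, b, c, tol=0.5):
    """Triplex 2-out-of-3 voter: if at least two channels agree within
    tolerance, average the agreeing pair; otherwise declare the channel
    set faulted so a higher layer can degrade safely."""
    for x, y in ((a, b), (a, c), (b, c)):
        if abs(x - y) <= tol:
            return (x + y) / 2, True
    return None, False

value, ok = vote_2oo3(10.0, 10.2, 99.0)  # one channel has failed high
```

The voter masks a single fault without interrupting operation, which is exactly the "switching must not introduce new hazards" property described above.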
For autonomous driving systems, validation inevitably consumes a large share of the development cycle. Lower-level functions can be covered by scenario testing and laboratory validation, but mid- to high-level functions require massive simulation covering numerous boundary conditions, real-vehicle validation over millions of miles, and targeted testing of edge cases. Validation methods also need to be more refined, including formal verification, scenario generation, causal attribution analysis, and robustness testing. Evaluation metrics should focus not only on average performance but also on worst-case behavior and fault-recovery times under extreme conditions. This is precisely why large automakers and autonomous driving companies invest heavily in closed-loop simulation and scenario replay analysis.
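The "average versus worst case" point can be made concrete with a batch summary over simulated scenario runs. The per-run fields (`min_ttc`, `fault_injected`, `recovery_s`) are hypothetical names for illustration:

```python
def evaluate_runs(runs):
    """Summarise a batch of simulated scenario runs by both average and
    worst-case behaviour: a comfortable mean time-to-collision can hide
    the single tail run that actually blocks safety sign-off."""
    ttcs = [r["min_ttc"] for r in runs]
    recoveries = [r["recovery_s"] for r in runs if r["fault_injected"]]
    return {
        "mean_min_ttc": sum(ttcs) / len(ttcs),
        "worst_min_ttc": min(ttcs),
        "worst_recovery_s": max(recoveries) if recoveries else None,
    }

runs = [
    {"min_ttc": 4.2, "fault_injected": False, "recovery_s": 0.0},
    {"min_ttc": 3.9, "fault_injected": True,  "recovery_s": 0.8},
    {"min_ttc": 1.1, "fault_injected": True,  "recovery_s": 2.5},  # the run that matters
]
summary = evaluate_runs(runs)
```

Here the mean minimum TTC looks acceptable, but the worst run and its slow fault recovery are what the replay analysis would flag for attribution.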
Taking steady steps is more realistic than trying to achieve everything at once.
Currently, many automakers have released autonomous driving solutions for various application scenarios. To ensure that autonomous driving can better serve people, it is essential to "confine" the technology. Selecting a controllable operational design domain and limiting the system within that domain is a feasible strategy for the early deployment of autonomous driving. Closed parks, lane-limited shuttle buses, and driverless taxis being tested in specific urban areas all fall under this strategy. By keeping complexity within a certain range, the system can first address various degradation, anomaly, and disaster modes before gradually expanding the operational scenarios.
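The "confinement" strategy amounts to continuously checking the current state against the declared operational design domain. The ODD fields and limits below are invented for the sketch; a real ODD covers far more dimensions (time of day, road type, traffic density, and so on):

```python
def within_odd(state, odd):
    """Check the current driving state against the declared ODD.
    Leaving the ODD should trigger a minimal-risk manoeuvre rather
    than letting the system improvise outside its validated envelope."""
    return (
        state["speed_kph"] <= odd["max_speed_kph"]
        and state["region"] in odd["regions"]
        and state["weather"] in odd["weather"]
    )

odd = {
    "max_speed_kph": 60,
    "regions": {"campus", "downtown_pilot"},
    "weather": {"clear", "light_rain"},
}
ok = within_odd({"speed_kph": 45, "region": "campus", "weather": "clear"}, odd)
out = within_odd({"speed_kph": 45, "region": "highway", "weather": "clear"}, odd)
```

Expanding deployment then means widening `odd` deliberately, one validated dimension at a time, rather than removing the check.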
To promote advanced autonomous driving systems to large-scale mass production and widespread application, several long-term tasks cannot be ignored. First, there is the trade-off between cost and weight. High-precision sensors, large computing platforms, and redundant modules all increase vehicle cost and energy consumption, posing an obstacle to mass production. Second, there is the lifecycle management of software and data. Autonomous driving systems require frequent model updates, correction of perception parameters, and patching of vulnerabilities. Ensuring update security, reliable rollback mechanisms, and online version management remains an ongoing engineering challenge. Third, there is the issue of social and user acceptance. When the system faces emergencies, the ability of users to quickly respond to takeover requests and the level of societal tolerance for autonomous driving accidents will directly impact deployment strategies.
Technological development is not a linear process. The prudent approach at present is often a phased iteration, first achieving high automation within a controllable domain to accumulate operational experience, data, and user trust, before gradually opening up the boundaries. Simultaneously, the collaborative construction of the industry ecosystem must keep pace. This includes a team dedicated to the continuous maintenance of high-precision maps, supporting roadside intelligent infrastructure, and cross-enterprise data sharing and standardization. No single company can solve all problems alone; collaboration and standardization will undoubtedly become increasingly important in the future.
Final words
Each level of advancement in autonomous driving represents a qualitative leap in engineering. Perception must evolve from simply "seeing" to "robustly seeing in complex environments"; decision-making must shift from "following the rules" to "making safe choices even in unpredictable situations"; and the system must progress from "being able to be taken over by humans during occasional malfunctions" to "never putting the vehicle in danger under any circumstances." Achieving these goals requires simultaneous advancements in sensing, computing architecture, software engineering, verification methods, and regulatory compliance. Rather than achieving the highest level overnight, a more realistic approach is to perfect each step, design safe and reliable degradation paths, and continuously verify and iterate in the real world.