How do self-driving cars achieve accurate positioning?

2026-04-06 05:15:00

No single sensor is accurate and reliable enough on its own, so autonomous driving systems typically employ multi-sensor fusion, integrating data from sensors such as Global Navigation Satellite System (GNSS) receivers, an Inertial Measurement Unit (IMU), LiDAR, cameras, and Ultra-Wideband (UWB) radios to achieve high-precision, high-reliability vehicle positioning.

Global Navigation Satellite System (GNSS)

GNSS (Global Navigation Satellite System) is one of the fundamental positioning methods. Common systems include GPS (US), GLONASS (Russia), Galileo (EU), and BeiDou (China). A GNSS receiver measures pseudoranges or carrier phase from satellite signals and can in principle achieve meter-level accuracy. In practice, however, ionospheric delay, multipath effects, and signal blockage mean that standalone GNSS positioning often suffers errors of several meters or even tens of meters. To improve accuracy, autonomous driving systems employ Differential GNSS (DGNSS) or Real-Time Kinematic (RTK) corrections, in which a base-station network sends correction data to the onboard GNSS receiver, improving accuracy to within roughly 10 centimeters (centimeter level for RTK). Even so, in areas with weak GNSS coverage, such as urban canyons or tunnels, positioning can suffer signal interruptions or a sudden drop in accuracy and must be compensated by other sensors.
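The differential principle can be sketched in a few lines: a base station at a precisely known position observes the same satellites as the rover, so the error components they share (atmospheric delay, satellite clock drift) can be measured at the base and subtracted at the rover. A minimal illustration, with all function and variable names invented for the sketch (real receivers consume standardized RTCM correction streams):

```python
import numpy as np

def dgnss_correct(rover_pseudoranges, base_pseudoranges, base_pos, sat_positions):
    """Differential GNSS sketch: the base station, sitting at a known
    position, measures each satellite's pseudorange error, and the rover
    subtracts that common-mode error from its own measurements."""
    # True geometric range from the known base position to each satellite
    true_ranges = np.linalg.norm(sat_positions - base_pos, axis=1)
    # Common-mode error observed at the base (atmosphere, satellite clock, ...)
    corrections = base_pseudoranges - true_ranges
    return rover_pseudoranges - corrections
```

Errors that are *not* shared between base and rover (local multipath, receiver noise) are untouched by this scheme, which is why RTK additionally uses carrier-phase measurements.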

Inertial Measurement Unit (IMU)

An IMU (Inertial Measurement Unit) consists of a three-axis accelerometer and a three-axis gyroscope, measuring the vehicle's linear acceleration and angular velocity at high rates (typically above 100 Hz). Aided by Zero-velocity Updates (ZUPT) or a vehicle dynamics model, the IMU can provide smooth, continuous attitude and displacement estimates over short intervals, bridging the gaps when GNSS fails. However, the IMU suffers from cumulative error: when velocity and position are obtained by integrating acceleration, the position error grows quadratically over time. Autonomous driving systems therefore fuse GNSS and IMU data through filters, such as the Extended Kalman Filter (EKF), the Unscented Kalman Filter (UKF), or factor-graph optimization, to achieve real-time correction and state estimation, giving the positioning solution both the global reference of GNSS and the high-frequency dynamic response of the IMU.
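The GNSS/IMU fusion loop described above can be sketched with a one-dimensional Kalman filter: IMU acceleration drives the high-frequency prediction, and sparse GNSS position fixes bound the accumulated drift. For this linear toy model the EKF reduces to an ordinary Kalman filter; all noise values and rates below are made up for illustration:

```python
import numpy as np

def kf_gnss_imu(accels, gnss_pos, dt=0.01, gnss_every=100):
    """1-D GNSS/IMU fusion sketch. State is [position, velocity];
    the IMU runs at 1/dt Hz, a GNSS fix arrives every `gnss_every` steps."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity transition
    B = np.array([0.5 * dt**2, dt])         # acceleration input
    H = np.array([[1.0, 0.0]])              # GNSS observes position only
    Q = np.eye(2) * 1e-4                    # process noise (IMU integration)
    R = np.array([[1.0]])                   # GNSS noise (~1 m std, assumed)
    x, P = np.zeros(2), np.eye(2)
    estimates = []
    for k, a in enumerate(accels):
        x = F @ x + B * a                   # predict with IMU measurement
        P = F @ P @ F.T + Q
        if k % gnss_every == 0:             # GNSS correction step
            y = np.array([gnss_pos[k]]) - H @ x
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ y
            P = (np.eye(2) - K @ H) @ P
        estimates.append(x[0])
    return np.array(estimates)
```

Running this with a biased accelerometer shows the point of the fusion: pure integration drifts quadratically, while the occasional GNSS fix keeps the estimate bounded.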

LiDAR SLAM technology

LiDAR acquires high-precision 3D point clouds that capture the geometric features of the surrounding environment; registering successive point clouds enables simultaneous localization and mapping (LiDAR SLAM). Common algorithms include LOAM (LiDAR Odometry and Mapping), Cartographer, and FAST-LIO. They extract and match point-cloud features (such as planes and edges) to estimate the pose of each new frame against an existing map or local submap, using high-confidence poses as optimization constraints to continuously refine the vehicle pose and map. LiDAR SLAM is robust to illumination changes and interference, enabling continuous localization where GNSS fails or visual conditions are poor. However, it is computationally intensive, demands good point-cloud quality and feature-rich environments, and must be combined with other sensors to ensure robustness.
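The registration step at the heart of LiDAR SLAM can be illustrated with a minimal point-to-point ICP: alternate nearest-neighbour association with a closed-form (SVD/Kabsch) rigid alignment. This is a toy sketch, far simpler than LOAM or FAST-LIO, which register extracted plane/edge features rather than raw points:

```python
import numpy as np

def icp_step(src, dst):
    """One point-to-point ICP iteration: associate each source point with
    its nearest destination point, then solve the best-fit rigid transform
    in closed form via SVD (the Kabsch algorithm)."""
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]            # brute-force nearest neighbours
    mu_s, mu_m = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_m)       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                    # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_m - R @ mu_s
    return R, t

def icp(src, dst, iters=20):
    """Iterate association + alignment, accumulating the total transform."""
    R_tot, t_tot = np.eye(src.shape[1]), np.zeros(src.shape[1])
    for _ in range(iters):
        R, t = icp_step(src, dst)
        src = src @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot
```

ICP only converges when the initial guess is close, which is why SLAM systems seed it with the IMU-predicted pose; it also shows why feature-poor environments (long tunnels, open fields) are problematic.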

Visual inertial navigation and visual odometry

Cameras are low-cost, high-resolution, and information-rich, serving scene-perception tasks such as lane-line recognition, traffic-sign detection, and object recognition, while also enabling pose estimation through visual odometry (VO) or visual-inertial odometry (VIO). VO methods solve for relative motion from feature matches (e.g., SIFT, ORB, SuperPoint) across two or more image frames; VIO additionally incorporates IMU data, improving accuracy and stability through filtering or optimization frameworks such as MSCKF and VINS-Mono. Visual localization is accurate in feature-dense, richly textured scenes but is sensitive to lighting changes and weather such as rain and fog. Future work may combine deep learning to exploit semantic features for localization, improving robustness in repetitive or visually similar environments.
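The core relative-motion step of feature-based VO can be sketched with the classical linear eight-point algorithm: each pair of matched, normalized image points contributes one linear constraint on the essential matrix, from which rotation and (up-to-scale) translation can later be decomposed. Real pipelines add RANSAC for outlier rejection and nonlinear refinement; this is only the textbook linear step:

```python
import numpy as np

def essential_eight_point(x1, x2):
    """Estimate the essential matrix E from >= 8 matched, normalized
    homogeneous image points, using the epipolar constraint x2^T E x1 = 0.
    Toy linear sketch without RANSAC or refinement."""
    # Each match contributes one row; kron(p2, p1) matches row-major vec(E)
    A = np.stack([np.kron(p2, p1) for p1, p2 in zip(x1, x2)])
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)                # null vector of the constraints
    # Project onto the essential-matrix manifold: singular values (s, s, 0)
    U, _, Vt = np.linalg.svd(E)
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt
```

Note the "normalized" qualifier: the points must first be mapped through the inverse camera intrinsics, and the recovered translation has no metric scale, which is precisely the ambiguity that fusing an IMU (VIO) resolves.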

High-precision maps and positioning

High-definition (HD) maps are a crucial prerequisite for autonomous driving localization, typically containing centimeter-accurate geometric and semantic information such as lane centerlines, curbs, potholes, traffic signs, and traffic-light positions. During localization, the vehicle matches environmental features perceived by its sensors in real time against map elements, computing the current vehicle pose with methods such as ICP (Iterative Closest Point) and NDT (Normal Distributions Transform). HD maps not only compensate for insufficient sensor data but also provide redundancy checks, keeping localization reliable even when several sensor sources malfunction. However, building and updating HD maps is costly, requiring specialized surveying equipment and regular maintenance.
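Map matching can be illustrated with a deliberately naive stand-in for ICP/NDT registration: slide the live scan over a small grid of candidate offsets and keep the one whose points land closest to the map's landmarks. A real system searches over full poses (including heading) and uses the gradient-based methods named above; everything here is a toy:

```python
import numpy as np

def match_offset(scan, map_pts, x_range, y_range, step=0.1):
    """Brute-force 2-D map matching: score each candidate (dx, dy) offset
    by the summed distance from shifted scan points to their nearest
    map landmarks, and return the best-scoring offset."""
    best, best_score = None, np.inf
    for dx in np.arange(*x_range, step):
        for dy in np.arange(*y_range, step):
            shifted = scan + np.array([dx, dy])
            d2 = ((shifted[:, None] - map_pts[None]) ** 2).sum(-1)
            score = np.sqrt(d2.min(axis=1)).sum()
            if score < best_score:
                best, best_score = (dx, dy), score
    return best
```

The exhaustive search makes the cost structure obvious: refining resolution or enlarging the search window grows the work quadratically, which is why practical systems seed the search with the fused GNSS/IMU pose and then run a local gradient-based registration instead.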

Multi-sensor fusion algorithm

In autonomous driving positioning systems, multi-sensor fusion is crucial. The most common methods include the Extended Kalman Filter (EKF), the Unscented Kalman Filter (UKF), the Particle Filter (PF), and factor-graph optimization (implemented in libraries such as g2o and GTSAM). EKF-style filters suit scenarios with hard real-time requirements, while factor-graph optimization better handles nonlinear and multimodal information and supports batch optimization in the back end. The fusion framework typically has two parts: a front end (measurement preprocessing, feature extraction, and matching) and a back end (state estimation and optimization). The front end filters, denoises, and spatiotemporally aligns sensor data and extracts key features; the back end, centered on the fusion algorithm, folds the observations from each sensor, the vehicle motion model, and map priors into one optimization problem, solving it iteratively with a sparse linear solver to obtain a globally consistent optimal pose.
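The back end's information-weighted combination of observations can be shown in its simplest form: fusing independent position estimates by inverse-variance weighting. This is the maximum-likelihood solution of a degenerate factor graph with only unary factors on a single pose variable; real back ends add motion and landmark factors connecting many poses, but the weighting principle is the same:

```python
import numpy as np

def fuse(measurements, sigmas):
    """Fuse independent position estimates of one pose by inverse-variance
    (information) weighting. Returns the fused position and its std dev."""
    w = 1.0 / np.asarray(sigmas) ** 2        # information = inverse variance
    z = np.asarray(measurements)
    x = (w[:, None] * z).sum(0) / w.sum()    # information-weighted mean
    sigma = np.sqrt(1.0 / w.sum())           # fused uncertainty
    return x, sigma
```

Note two properties that carry over to full factor graphs: the most certain sensor dominates the fused estimate, and the fused uncertainty is always smaller than that of any individual sensor.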

Challenges and Outlook

While current positioning technologies meet the needs of most urban and highway scenarios, multi-sensor fusion still faces challenges such as signal obstruction, sensor failure, and limited computing resources in GNSS "shadow areas" like underground parking lots and dense high-rise districts, and under extreme weather (blizzards, dense fog). In the future, communication technologies such as 5G/6G networks, vehicle-to-everything (V2X), and ultra-wideband (UWB) will enable cooperative vehicle-to-vehicle and vehicle-to-infrastructure positioning, further improving accuracy and robustness. Artificial-intelligence techniques such as self-supervised learning and reinforcement learning applied to SLAM and localization will also enable smarter sensor fusion and adaptive error correction. Autonomous vehicle positioning will evolve toward higher accuracy, stronger robustness, and lower cost, laying a solid foundation for the large-scale commercialization of intelligent transportation and autonomous driving.
