Each technology has its own advantages and disadvantages. GNSS provides global coordinates, but its signals are easily lost in tunnels or mountainous areas. An IMU does not rely on external signals, but its errors accumulate over time. LiDAR and visual SLAM can perceive the environment in real time, but they are sensitive to lighting and environmental texture.
To truly achieve autonomous driving, vehicles must cope flexibly with all kinds of traffic environments. Yet many routes pass through tunnels, mountainous areas, and other places where satellite signals are weak and terrain may obstruct the view. In such conditions, how can autonomous vehicles still achieve centimeter-level positioning? This article examines that question.
Analysis of common positioning technologies for autonomous driving
1) GNSS/Satellite Positioning: Advantages and Limitations
GNSS (Global Navigation Satellite System) encompasses multiple national satellite navigation systems, such as GPS, BeiDou, and Galileo. It calculates a vehicle's latitude, longitude, and altitude by receiving signals from at least four satellites. Its advantage is absolute positioning with global coverage, typically at meter-level accuracy. To improve on this, augmentation techniques such as Real-Time Kinematic (RTK) positioning or Precise Point Positioning (PPP) are commonly used, raising accuracy to the decimeter or even centimeter level. For example, dual-frequency RTK can achieve a lateral error of less than 0.2 meters at 95% confidence, while single-frequency RTK achieves only about 0.4 meters. Furthermore, multi-constellation receivers (tracking GPS, BeiDou, and other systems simultaneously) and dual-frequency designs can reduce ionospheric and multipath errors, further improving reliability.
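The core computation behind any of these receivers can be sketched as a least-squares solve: given four or more pseudoranges, recover the receiver position and clock bias by Gauss-Newton iteration. The following is a toy illustration with synthetic satellite geometry, not a real receiver pipeline; the function name and all numbers are illustrative assumptions.

```python
import numpy as np

def solve_position(sat_positions, pseudoranges, iterations=10):
    """Estimate receiver position and clock bias (in meters) from >= 4
    satellite pseudoranges via Gauss-Newton least squares."""
    x = np.zeros(4)  # state: [x, y, z, clock_bias_m]
    for _ in range(iterations):
        diffs = x[:3] - sat_positions            # receiver-to-satellite vectors
        ranges = np.linalg.norm(diffs, axis=1)   # geometric ranges
        residuals = pseudoranges - (ranges + x[3])
        # Jacobian: unit line-of-sight vectors plus a clock-bias column of ones
        H = np.hstack([diffs / ranges[:, None], np.ones((len(ranges), 1))])
        x += np.linalg.lstsq(H, residuals, rcond=None)[0]
    return x[:3], x[3]

# Synthetic test: well-spread satellites around a known receiver position
sats = np.array([[2e7, 0, 0], [0, 2e7, 0], [0, 0, 2e7],
                 [-2e7, 0, 0], [1.2e7, 1.2e7, 1.2e7]])
truth = np.array([1e6, 2e6, 3e6])
clock_bias = 150.0  # meters (c * receiver clock offset)
pr = np.linalg.norm(sats - truth, axis=1) + clock_bias
pos, bias = solve_position(sats, pr)
```

With error-free pseudoranges the solver recovers the position exactly; in practice the residuals carry the ionospheric, clock, and multipath errors discussed above, which is what RTK and PPP corrections reduce.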
For autonomous driving, however, GNSS has significant limitations. It relies on satellite signal reception, and serious problems arise wherever signals are blocked or reflected. In tunnels, underground parking garages, and densely forested mountain valleys, satellite signals are frequently obstructed or arrive via multiple reflected paths. Tests have shown that when vehicles pass tall buildings, overpasses, or tunnels, the direct-path signal is often blocked and the receiver sees only reflections, causing positioning errors to grow sharply. GNSS is also affected by error sources such as ionospheric delay, satellite clock bias, and receiver noise. While these can be mitigated with dual-frequency or differential techniques, GNSS positioning fails entirely once the signal is interrupted in continuously obstructed areas. If positioning accuracy must reach the 10-centimeter level (required for safe cornering and lane-level control), performance degrades drastically whenever GNSS is lost. For instance, in a 10-kilometer tunnel, relying solely on visual odometry could accumulate a positioning error of 2.3 meters (more than half the width of a typical lane), showing that traditional GPS-based solutions cannot meet the accuracy requirements in such scenarios.
To address these issues, a common approach is to combine enhanced GNSS with auxiliary signals, such as ground-based RTK networks or satellite augmentation signals (PPP-RTK); there are also proposals to transmit augmentation signals via low-Earth orbit (LEO) satellites (such as Starlink). It is important to understand that LEO satellites do not by themselves improve positioning accuracy, but they can increase the number of visible satellites in partially obscured areas. For example, under viaducts or near buildings, navigation signals relayed via LEO satellites allow the receiver to "see" enough satellites to keep navigating reliably. Even if per-satellite accuracy is unchanged, the improved geometry from additional satellites yields a more accurate position solution. Traditional cellular networks (such as 5G/6G base stations) and future satellite internet are also expected to assist positioning through their signal coverage.
2) Inertial navigation (IMU) and odometry
When GNSS is unavailable, vehicles can still use inertial navigation for short-term positioning. An IMU (Inertial Measurement Unit), consisting of accelerometers and gyroscopes, measures the vehicle's acceleration and angular velocity along three axes. IMUs update at high frequencies (typically 100–200 Hz), allowing them to bridge the gaps between GNSS updates (around 10 Hz). The common practice is to integrate the IMU readings to obtain velocity and position increments; this is called inertial dead reckoning. If GNSS is lost, the vehicle can estimate short-term position changes from its most recent velocity and acceleration readings.
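A minimal planar dead-reckoning sketch shows both the integration and why errors accumulate: even a tiny constant accelerometer bias grows quadratically into tens of meters of drift. The function name, the 0.01 m/s² bias, and the scenario are illustrative assumptions, not measured values.

```python
import numpy as np

def dead_reckon(p0, v0, accels, gyro_z, dt):
    """Planar dead reckoning: rotate body-frame acceleration into the
    world frame using the integrated yaw, then integrate twice to get
    position. Illustrates how IMU errors compound over time."""
    p, v, yaw = np.array(p0, float), np.array(v0, float), 0.0
    for a_body, wz in zip(accels, gyro_z):
        yaw += wz * dt                       # integrate yaw rate
        c, s = np.cos(yaw), np.sin(yaw)
        a_world = np.array([c * a_body[0] - s * a_body[1],
                            s * a_body[0] + c * a_body[1]])
        v += a_world * dt                    # integrate acceleration
        p += v * dt                          # integrate velocity
    return p

# 100 s at 100 Hz, driving straight at 20 m/s, with a small constant
# accelerometer bias (well within low-cost MEMS spec)
n, dt, bias = 10_000, 0.01, 0.01
p = dead_reckon((0, 0), (20, 0), [(bias, 0.0)] * n, [0.0] * n, dt)
# drift ≈ 0.5 * bias * t**2 = 0.5 * 0.01 * 100**2 ≈ 50 m after 100 s
```

The quadratic growth (≈50 m from a bias of only 0.01 m/s² over 100 s) is exactly why dead reckoning alone is only a short-term bridge.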
The advantages of an IMU are that it does not rely on external signals and can output data continuously in any environment, including tunnels and underground garages; its high update rate also captures the short-term dynamics of high-speed motion. In many technical solutions, the IMU and GNSS are considered a "golden combination": GNSS corrects the IMU's cumulative drift, while the IMU maintains short-term position continuity when GNSS signals are unstable. Most autonomous driving systems therefore use GNSS+IMU combined navigation, fusing the two data streams through filtering algorithms.
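The fusion idea can be illustrated with a toy one-dimensional Kalman filter: the IMU drives a high-rate prediction, and each lower-rate GNSS fix corrects the accumulated drift. This is a minimal sketch under simplifying assumptions (1-D motion, hand-picked `q_acc` and `r_gnss` noise parameters), not a production filter.

```python
import numpy as np

def fuse(gnss_pos, imu_accel, dt=0.01, gnss_every=10,
         q_acc=1.0, r_gnss=1.0, x0=(0.0, 0.0)):
    """Toy 1-D GNSS+IMU fusion: IMU acceleration drives a 100 Hz
    prediction; every 10th step a GNSS position fix corrects drift."""
    x = np.array(x0, float)               # state: [position, velocity]
    P = np.eye(2)                         # state covariance
    F = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([0.5 * dt ** 2, dt])
    Q = q_acc * np.outer(B, B)            # process noise from accel noise
    H = np.array([[1.0, 0.0]])            # GNSS observes position only
    est = []
    for k, a in enumerate(imu_accel):
        x = F @ x + B * a                 # predict with IMU measurement
        P = F @ P @ F.T + Q
        if k % gnss_every == 0:           # GNSS fix available this step
            z = gnss_pos[k // gnss_every]
            S = H @ P @ H.T + r_gnss
            K = (P @ H.T) / S
            x = x + (K * (z - H @ x)).ravel()
            P = (np.eye(2) - K @ H) @ P
        est.append(x[0])
    return np.array(est)
```

Feeding this filter a biased IMU keeps the position error bounded at the sub-meter level, whereas pure dead reckoning with the same bias drifts without limit, which is the essence of the "golden combination."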
However, IMUs have an inherent limitation: their velocity and position errors accumulate over time. High-precision IMUs are expensive, while low-cost IMUs drift even faster. A common way to characterize accumulated IMU error is to let the vehicle travel 1000 meters at constant speed with no GNSS signal and measure the deviation between the dead-reckoned and actual positions. In practice, this error is typically 0.1% to 0.5% of distance traveled (1σ). This means that over 1 kilometer the drift can reach several meters (as in the tunnel cases discussed below), and accuracy degrades rapidly over longer distances. Relying solely on the IMU and odometry for more than a few hundred meters in tunnels or mountainous areas therefore requires other positioning methods to correct the accumulated error.
3) LiDAR and Visual SLAM
To maintain accurate positioning even when GNSS fails, autonomous driving systems often utilize external environmental features. LiDAR (Light Detection and Ranging) is a sensor that measures distance by emitting laser beams, generating high-density 3D point clouds. LiDAR's advantage lies in its built-in light source (laser), enabling it to detect surrounding obstacles and road boundaries at night or in low-light conditions. Tests have shown that LiDAR performs significantly better than ordinary cameras in dimly lit environments or tunnel entrances/exits because it provides a stable light source and accurate distance information. Li Xiang, CEO of Li Auto, has also publicly stated that retaining LiDAR is for safety reasons, not due to technological limitations.
Another major application of LiDAR is point-cloud map matching. Autonomous driving companies typically scan the road environment in advance with high-precision LiDAR to build high-resolution maps. While driving, the vehicle aligns its real-time LiDAR scans with this map to localize itself. Commonly used algorithms include ICP (Iterative Closest Point) and NDT (Normal Distributions Transform). By matching fixed environmental features (mountain edges, tunnel walls, tunnel portal structures, and so on), the vehicle can "find" its own location in the map. This method does not rely on satellite signals and is well suited to tunnels, indoor environments, and mountain roads.
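The ICP step can be sketched in two dimensions: pair each scan point with its nearest map point, then solve the best rigid transform in closed form via SVD (the Kabsch solution), and repeat. This is an illustrative sketch only; `icp_2d` is a hypothetical name, and a real scan matcher adds outlier rejection, downsampling, and convergence checks.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_2d(scan, ref, iters=20):
    """Point-to-point ICP: iteratively match scan points to their
    nearest reference (map) points and solve the optimal rigid
    transform (rotation R, translation t) via SVD."""
    R, t = np.eye(2), np.zeros(2)
    tree = cKDTree(ref)                  # nearest-neighbor index on the map
    cur = scan.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)         # correspondences by proximity
        matched = ref[idx]
        mu_s, mu_m = cur.mean(0), matched.mean(0)
        H = (cur - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        # reflection guard keeps R a proper rotation
        D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T
        t_step = mu_m - R_step @ mu_s
        cur = cur @ R_step.T + t_step    # apply the incremental transform
        R, t = R_step @ R, R_step @ t + t_step
    return R, t
```

Given a scan that is a slightly rotated and shifted copy of the map points, the loop recovers the transform, which is exactly the pose correction a vehicle needs inside a tunnel.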
However, LiDAR also has its drawbacks: it is costly and generates massive amounts of data. High-resolution point cloud maps produce millions of data points per second, placing enormous pressure on storage and processing. Real-time localization requires rapidly matching large amounts of point clouds, resulting in a huge computational load that may affect real-time performance. Furthermore, LiDAR matching may encounter difficulties if the road environment is relatively monotonous (e.g., the repetitive shape of the inner walls of long, straight tunnels, or the scarcity of distinctive features in dense forests).
Visual SLAM uses onboard cameras (monocular, stereo, or RGB-D) for feature extraction and matching. It can localize and map by recognizing visual features such as road signs, lane lines, and building edges. The advantages of vision systems are relatively low cost, high resolution, and the ability to capture color information. Visual localization performs well in well-lit, feature-rich scenes. Its disadvantage is susceptibility to lighting and weather: in strong light, backlight, darkness, rain, or snow, image quality deteriorates, feature extraction becomes difficult, and localization accuracy drops significantly. When lighting changes drastically, a pure vision system may fail to track the environment, potentially causing localization errors. Visual SLAM is therefore often combined with other sensors (such as an IMU) into a visual-inertial odometry (VIO) system to mitigate the drift of vision-only estimation.
4) Map matching and lane line reference
Besides real-time sensors, high-definition maps (HD maps) are also crucial for positioning in autonomous driving. HD maps record detailed 3D geometric and semantic information about roads, such as lane lines, traffic signs, and surrounding buildings. Vehicles can use map matching to align the roadside lines, curbs, and landmarks captured by sensors with corresponding elements on the map to obtain a precise location. In tunnels, if the location and shape of the tunnel entrance are marked on the map, LiDAR or vision systems can detect the tunnel entrance structure after the vehicle enters the tunnel, allowing for matching and positioning.
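Lane-level map matching ultimately reduces to geometric queries against map elements. As a minimal illustration (a hypothetical helper, not any specific map API), the following computes a vehicle's signed lateral offset from a mapped centerline stored as a polyline:

```python
import numpy as np

def lateral_offset(point, polyline):
    """Signed lateral distance from a vehicle position to a mapped lane
    centerline (polyline). Positive means left of the driving direction
    implied by the polyline's vertex order."""
    p = np.asarray(point, float)
    best = None
    for a, b in zip(polyline[:-1], polyline[1:]):
        a, b = np.asarray(a, float), np.asarray(b, float)
        ab = b - a
        # project p onto the segment, clamped to its endpoints
        s = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        foot = a + s * ab
        d = np.linalg.norm(p - foot)
        # z-component of the 2-D cross product gives the side
        cross = ab[0] * (p - a)[1] - ab[1] * (p - a)[0]
        sign = 1.0 if cross >= 0 else -1.0
        if best is None or d < abs(best):
            best = sign * d
    return best
```

Comparing this map-derived offset with the offset the camera measures from painted lane lines is one way a vehicle confirms (or corrects) its lateral position on the map.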
The use of HD maps can significantly improve positioning accuracy, but it also presents many challenges. Creating and maintaining high-precision maps is costly and requires frequent updates; furthermore, the sheer volume of map data demands substantial computing power for loading and querying. To address these issues, some solutions propose methods such as block-based loading or vehicle-to-infrastructure (V2I) map sharing to reduce the processing load on vehicles. High-precision maps are crucial prior information sources for autonomous driving, but in areas without maps, pure SLAM technology must be relied upon.
Positioning challenges in tunnels and mountainous areas
Complex environments such as tunnels and mountainous areas pose unique positioning challenges. Tunnels block satellite signals completely, forcing the vehicle to position itself with onboard sensors alone. LiDAR/visual mapping with lane-line references can partially correct positioning errors once the tunnel portals are identified, but SLAM matching becomes much harder if the tunnel has a monotonous structure, indistinct lane lines, or congested traffic. Real-world testing shows that as long as lane lines are recognized normally inside the tunnel, vehicle positioning meets safety requirements; if the lane lines are lost, however, and the dead-reckoned distance exceeds roughly 400 meters, the lateral error may exceed 0.78 meters, failing the requirements for high-speed assisted driving. Relying solely on inertial estimation for hundreds of meters inside a tunnel can likewise accumulate meter-level longitudinal error by the exit (e.g., a 3-meter longitudinal error at 99.7% confidence for 1 km inside plus 1 km outside the tunnel). In long tunnels, IMU + wheel-speed fusion alone will therefore quickly deviate from the actual lane, and special measures are required.
Mountain roads do not block the sky completely, but mountains, valleys, and forests cause multipath and intermittent loss of satellite signals. Mountain roads also have many curves and significant undulations, making the IMU's slope and roll measurements even more important. The complex surrounding scenery presents both opportunities (rich features) and challenges (shifting tree shadows, snow cover, etc.) for visual SLAM. Like urban highway positioning, mountain positioning requires multi-sensor collaboration: fully exploiting environmental features such as lane markings, road edges, tunnel and bridge structures, and roadside trees, matching them against maps, while the IMU's short-term dead reckoning keeps positioning continuous.
Typical Practices and Solutions
1) Lane lines and visual assistance
Some manufacturers install specific lane markings or RFID tags inside highway tunnels to help vehicles locate themselves. Studies have shown that as long as the lane markings are clearly visible inside the tunnel, vehicles can pass through a 1-kilometer-long tunnel smoothly; even if the lane markings are interrupted, as long as the distance is less than 400 meters, a high-precision IMU combined with wheel speed can maintain a lane positioning error within 0.8 meters.
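As a sanity check on those figures, if dead-reckoning error grows roughly linearly with distance at the 0.1%–0.5% rates quoted earlier, the tolerable gap follows directly (the linear-growth model is a simplifying assumption):

```python
def drift(rate, distance_m):
    """Expected dead-reckoning drift under a rough linear-growth model:
    error as a fixed fraction of distance travelled."""
    return rate * distance_m

# A mid-range 0.2% unit drifts about 0.8 m over a 400 m lane-line gap,
# right at the quoted lane-keeping limit; a 10 km tunnel would see ~20 m.
```

This back-of-envelope result matches the reported behavior: a 400-meter gap sits right at the edge of the 0.8-meter lane-positioning budget.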
2) Tunnel Map SLAM
For fleets that frequently travel the same tunnel (such as buses or school buses), the tunnel interior can be pre-scanned to build a 3D point-cloud map. This map is downloaded to the vehicle before it enters the tunnel, and LiDAR point-cloud matching maintains positioning inside. When the vehicle exits and reacquires GNSS, the global position is updated from satellite signals.
3) High-precision map support
Chinese autonomous driving projects such as Baidu Apollo and Pony.ai rely on detailed, high-precision maps. At overpass interchanges and mountain-road junctions, or before and after tunnel portals, these systems localize using maps and sensors simultaneously. Once the vehicle exits a tunnel, it identifies features at the portal (signs, lights) and immediately "locks" its relative position onto the map, correcting the drift accumulated inside the tunnel. In this way, high-precision map matching becomes a crucial calibration method.
4) Use of LiDAR and millimeter-wave radar
The higher the level of autonomy, the greater the reliance on radar and LiDAR. Statistics show that in L2-and-above driving systems the LiDAR installation rate exceeds 60%, approaching 100% in urban NOA scenarios. Many technical solutions propose retaining LiDAR at night or in complex environments to guarantee perception safety. Millimeter-wave radar, for its part, penetrates smoke and dust well, supports stable long-term target tracking, and is another commonly used auxiliary sensor in tunnels.
5) Multi-link redundant positioning
Internationally, companies like Waymo also employ multimodal redundant positioning, deploying landmarks that correct inertial navigation in some tunnels or underground loops and using them together with high-precision maps. Details are not public, but the underlying idea is similar. Tesla currently relies on vision + GPS positioning only; user tests in Chinese highway tunnels have reported positioning errors of up to 2.3 meters. Subsequent reports indicate that Tesla has partnered with Baidu to integrate HD-map information into its FSD system, which may improve its tunnel positioning.
Future Trend Outlook
1) Low Earth Orbit Satellite Positioning
As mentioned earlier, LEO satellites (such as SpaceX Starlink or domestic low-Earth orbit constellations) can expand satellite coverage. Huawei anticipates that by the 6G era, through the integration of terrestrial and non-terrestrial networks, positioning accuracy can be improved from meters to centimeters. While individual low-Earth orbit satellites do not directly improve accuracy, they can broadcast enhanced signals, especially increasing the number of available satellites in partially obscured areas, theoretically assisting GNSS in achieving higher availability.
2) 6G and V2X assistance
Future 6G networks may have built-in positioning capabilities (such as ultra-wideband positioning via sidelink communication units) and may enhance accuracy through vehicle-to-vehicle and vehicle-to-base-station collaborative sensing (C-V2X). Some research suggests that "6G transportation" will integrate communication and sensing, using real-time, high-bandwidth, low-latency networks to let vehicles exchange map and location information, improving safe, redundant positioning in complex environments.
3) Smart sensor upgrade
High-performance IMUs (such as fiber optic gyroscopes), novel visual radars (sensors that integrate millimeter-wave and camera technologies), and quantum gyroscopes are also under development, which may further reduce inertial drift or increase environmental feature perception in the future. Meanwhile, the application of artificial intelligence technology in SLAM is becoming increasingly mature, enabling more flexible handling of special landscapes such as tunnels and mountainous areas.
4) Cloud and edge computing
Autonomous vehicles can transmit environmental data to edge servers in real time, allowing high-performance computing resources in the cloud to participate in localization and mapping. Even with poor GNSS signals, cloud servers may be able to help vehicles correct their position using remote observations and historical data. For example, environmental radar echoes and historical traffic flow information recorded by edge base stations can serve as positioning aids.
In summary, positioning in tunnels and mountainous areas is a major challenge for autonomous driving. Several complementary solutions currently exist: IMU dead reckoning bridges short gaps, LiDAR/visual systems perform local SLAM, high-precision maps provide matching and correction, and finally all sensor data is fused to produce the best estimate. With upcoming upgrades to satellite navigation networks and advances in communication technology, the positioning accuracy of autonomous vehicles in weak-signal environments such as tunnels and mountainous areas is expected to improve further, supporting the safe operation of high-level autonomous driving.