Currently, intelligent logistics handling robots and sweeping robots have been put into practical use in some cities and households, and drones and unmanned vehicles are also being rapidly rolled out. These robots owe their quick entry into practical application largely to the development of autonomous positioning and navigation technology.
Recently, iResearch Consulting Group's iResearch.com released its "Top 10 Breakthrough AI Technologies of 2018," which included robot autonomous navigation technology based on multi-sensor cross-disciplinary fusion. What is robot autonomous positioning and navigation technology? What are the current technological means to achieve robot autonomous positioning and navigation? What are the difficulties and challenges in realizing these technologies and applications?
Fundamentals: Vision and radar are the primary sensors
Autonomous positioning and navigation technology has become a core capability and focal point of robotic products. Dr. Du Mingfang, an expert committee member of the Chinese Association of Automation and the Institute of Internet Industry at Tsinghua University, told Science and Technology Daily that autonomous navigation, broadly speaking, includes two parts: local navigation and global navigation. Local navigation refers to acquiring real-time environmental information through sensors such as vision, radar, and ultrasound, extracting features from the fused data, and processing them with intelligent algorithms to determine the currently passable area and track multiple targets. Global navigation mainly refers to using global navigation data provided by GPS for global path planning and achieving path navigation across the entire electronic map area.
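The global path planning mentioned above can be illustrated with a minimal sketch: searching an occupancy grid (the "electronic map") for a route from start to goal. Real systems use more elaborate planners such as A* on much larger maps; this toy version uses breadth-first search, which finds a shortest route on a uniform-cost grid.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search over a 2D occupancy grid.
    grid[r][c] == 1 means blocked; returns a list of cells or None."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cur = frontier.popleft()
        if cur == goal:
            # walk back through the predecessor links to recover the path
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and nxt not in came_from:
                came_from[nxt] = cur
                frontier.append(nxt)
    return None  # goal unreachable

grid = [
    [0, 0, 0],
    [1, 1, 0],  # a wall forces a detour through the right column
    [0, 0, 0],
]
path = plan_path(grid, (0, 0), (2, 0))
```

The detour around the wall yields a seven-cell path from the top-left to the bottom-left corner.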
"Currently, vision and radar are the two main sensors used in local autonomous navigation," explained Du Mingfang. As passive sensors, vision sensors have significant advantages: they acquire rich information, are well concealed and small, emit nothing into the environment (and so cause no interference), and cost less than radar. To achieve autonomous navigation, multiple sensors typically collaborate to identify various kinds of environmental information, such as road boundaries, terrain features, obstacles, and guide markers. In this way, the robot can use environmental perception to determine accessible and inaccessible areas in its direction of movement, confirm its relative position in the environment, and predict the movement of dynamic obstacles, providing a basis for local path planning.
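One simple way multiple sensors can collaborate on passability is to merge each sensor's obstacle map conservatively: a cell is treated as impassable if any sensor flags it. This is only a toy illustration (real fusion weighs sensor confidence rather than taking a plain union), but it shows the basic idea:

```python
def fuse_obstacle_maps(vision, radar):
    """Conservative fusion of two binary obstacle grids:
    a cell is impassable (1) if either sensor flags it."""
    return [[v or r for v, r in zip(vrow, rrow)]
            for vrow, rrow in zip(vision, radar)]

# 1 = obstacle detected, 0 = clear; each sensor misses one obstacle
vision = [[0, 1, 0],
          [0, 0, 0]]
radar  = [[0, 0, 0],
          [1, 0, 0]]
fused = fuse_obstacle_maps(vision, radar)
```

The fused grid marks both obstacles, even though each sensor saw only one of them.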
Du Mingfang told reporters that, based on current development, multi-sensor information fusion technology has been applied to autonomous navigation systems, and its role is related to the level of robot intelligence. "The core of this navigation technology lies in its ability to effectively process and fuse information collected from multiple sensors, improving the robot's 'resistance' to uncertain information, ensuring that more reliable information is utilized, and helping to more intuitively judge the surrounding environment," he said.
Visual navigation has been successfully applied to low-altitude aircraft navigation, drone navigation, and Mars rover landing. However, Du Mingfang also pointed out that visual sensors still have problems such as providing indirect information, high computational and storage requirements, and heavy network transmission burdens. Utilizing multi-sensor information fusion can eliminate uncertainties in robot positioning and navigation and improve accuracy, but excessive fusion can also lead to a significant increase in computational load.
How to solve these problems? Du Mingfang believes that choosing an appropriate fusion algorithm is key. Currently, "there are increasingly more practices applying fundamental theories such as intelligent computing theory and probability theory to the field of multi-sensor fusion in robotics," he said.
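A canonical instance of probability theory applied to sensor fusion is the product of two Gaussian estimates (the update rule at the heart of Kalman filtering): each sensor's reading is weighted by the inverse of its variance, and the fused estimate is more certain than either input. The numbers below are illustrative only.

```python
def fuse_gaussian(mu1, var1, mu2, var2):
    """Fuse two independent Gaussian estimates of the same quantity.
    Inverse-variance weighting; the result has lower variance than either input."""
    var = 1.0 / (1.0 / var1 + 1.0 / var2)
    mu = var * (mu1 / var1 + mu2 / var2)
    return mu, var

# vision puts an obstacle at 10.0 m (variance 4.0, less certain);
# radar puts it at 10.8 m (variance 1.0, more certain)
mu, var = fuse_gaussian(10.0, 4.0, 10.8, 1.0)
```

The fused estimate lands closer to the more reliable radar reading, which is exactly the "using more reliable information" behavior Du Mingfang describes.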
Method: Combining multiple technologies to achieve complementary advantages
What are some ways to achieve autonomous positioning and navigation for robots? In fact, autonomous vehicles and robots share some of the same positioning and navigation technologies. Qianxun Location CEO Chen Jinpei told reporters that Qianxun uses a combination of LiDAR positioning and navigation with other sensors to achieve a positioning accuracy of about 1 meter, completing initial positioning in 3 seconds.
LiDAR navigation involves installing precisely positioned laser reflectors around the robot's path. The robot emits a laser beam via a laser scanner and simultaneously collects the laser beam reflected from the reflectors to determine its current position and heading. Guidance is then achieved through continuous triangulation. Besides ranging and positioning, LiDAR also functions as an obstacle avoidance and obstacle recognition system.
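The geometry behind reflector-based positioning can be sketched with a toy solver. The version below uses ranges to three surveyed reflectors (trilateration) rather than the bearing-based triangulation the article describes, because the range form linearizes neatly; it is exact for noise-free measurements.

```python
def locate(reflectors, ranges):
    """2D position from ranges to three reflectors at known coordinates.
    Subtracting the first range equation from the others gives a linear system."""
    (x1, y1), r1 = reflectors[0], ranges[0]
    a, b = [], []
    for (xi, yi), ri in zip(reflectors[1:], ranges[1:]):
        a.append((2 * (xi - x1), 2 * (yi - y1)))
        b.append(r1**2 - ri**2 + xi**2 - x1**2 + yi**2 - y1**2)
    # solve the resulting 2x2 linear system by Cramer's rule
    (a11, a12), (a21, a22) = a
    det = a11 * a22 - a12 * a21
    x = (b[0] * a22 - a12 * b[1]) / det
    y = (a11 * b[1] - b[0] * a21) / det
    return x, y

refl = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]   # surveyed reflector positions
rng = [5.0, 65 ** 0.5, 45 ** 0.5]               # ranges as seen from (3, 4)
x, y = locate(refl, rng)
```

Running this recovers the robot's position; repeating it as the robot moves, and comparing successive fixes, yields heading as well.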
Du Mingfang said that LiDAR is an active sensor, and the perception data it provides is much simpler and more direct than visual information, requiring less computation to process; however, its disadvantages include high cost, poor concealment, emissions into the environment (as an active sensor), and less rich information.
It is understood that Suning's robots and unmanned vehicles employ a different "multi-sensor fusion positioning method, combining multi-line LiDAR, GPS, and inertial navigation." Specifically, the LiDAR first maps the environment, building an a priori point-cloud map. GPS and inertial navigation then preliminarily determine the machine's global location. Finally, the LiDAR scan data is matched against the a priori point-cloud map to obtain a more precise global location, achieving accurate positioning and autonomous navigation. At the perception level, LiDAR is fused with vision to identify pedestrians, vehicles, and obstacles in real time, providing a basis for planning the optimal detour route.
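The scan-to-map refinement step can be illustrated with a deliberately simplified toy: starting from a coarse GPS guess, estimate the remaining offset between the live scan and the prior map. Production systems use full scan matchers such as ICP or NDT that also recover rotation and find correspondences themselves; this sketch assumes known point correspondences and estimates only translation.

```python
def refine_translation(map_pts, scan_pts):
    """Translation-only scan-to-map alignment with known correspondences.
    Returns the offset that moves the scan onto the prior map."""
    n = len(map_pts)
    mx = sum(p[0] for p in map_pts) / n
    my = sum(p[1] for p in map_pts) / n
    sx = sum(p[0] for p in scan_pts) / n
    sy = sum(p[1] for p in scan_pts) / n
    return mx - sx, my - sy  # difference of centroids

map_pts = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]  # landmarks in the prior map
# the scan is the same landmarks seen from a pose whose GPS guess
# is wrong by (-1.0, +0.5)
scan_pts = [(x - 1.0, y + 0.5) for x, y in map_pts]
dx, dy = refine_translation(map_pts, scan_pts)
```

The recovered (dx, dy) is exactly the correction that upgrades the coarse GPS fix to a precise, map-relative position.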
In addition, there is inertial navigation, which involves installing gyroscopes on robots or autonomous vehicles and positioning blocks on the ground in the driving area. The robot determines its position and heading by calculating the gyroscope's deviation signal (angular rate) and collecting signals from the ground positioning blocks, thus achieving guidance. A representative from Suning stated in an interview with Science and Technology Daily that inertial navigation technology offers precise positioning, minimal ground processing workload, and high path flexibility. However, it has a higher manufacturing cost, and the accuracy and reliability of guidance are closely related to the manufacturing precision of the gyroscope and its subsequent signal processing. In short, no single technology can solve all problems; currently, autonomous robot navigation generally employs a combination of multiple technologies to achieve complementary advantages.
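The gyroscope-based dead reckoning described above reduces, at its simplest, to integrating the angular-rate signal into a heading and advancing the position along that heading. The following minimal sketch uses one Euler step per update (real systems integrate at high rates and correct drift with the ground positioning blocks):

```python
import math

def dead_reckon(pose, speed, gyro_rate, dt):
    """One Euler-integration step: advance (x, y, heading) using
    wheel speed and the gyroscope's angular-rate reading."""
    x, y, th = pose
    th += gyro_rate * dt            # integrate angular rate into heading
    x += speed * math.cos(th) * dt  # advance along the new heading
    y += speed * math.sin(th) * dt
    return x, y, th

pose = (0.0, 0.0, 0.0)
pose = dead_reckon(pose, 1.0, 0.0, 1.0)          # 1 s straight at 1 m/s
pose = dead_reckon(pose, 0.0, math.pi / 2, 1.0)  # turn in place 90 degrees
pose = dead_reckon(pose, 1.0, 0.0, 1.0)          # 1 s straight again
```

Because every step integrates the gyro's (noisy) rate, errors accumulate over time, which is exactly why guidance accuracy "is closely related to the manufacturing precision of the gyroscope."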
Challenges: Power consumption, cost, and industrialization issues remain to be solved
Currently, the applications of autonomous positioning and navigation robots mainly fall into two categories. The first is household robotic vacuum cleaners and home care/companion robots. Chen Shikai, CEO of SLAMTEC, said that this type of application can be summarized as "zero-configuration," meaning that from a consumer perspective, it needs to be as simple as possible—ready to use immediately after purchase. The other category is in commercial scenarios, which require a pre-configuration process, and this configuration must have high reliability and scalability.
Chen Shikai said that navigation and positioning systems for personal and home scenarios need to address the challenges of power consumption, size, and cost. Currently, both Simultaneous Localization and Mapping (SLAM) algorithms and path planning systems are quite complex. "A robotic vacuum cleaner's battery capacity may only be around 20 watt-hours. If it had to run SLAM algorithms at laptop-class power draw, it might run out of power in less than an hour, which is completely unacceptable."
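The arithmetic behind Chen Shikai's estimate is straightforward; the 30 W figure below is an assumed laptop-class load, not a number from the article:

```python
def runtime_hours(battery_wh, load_w):
    """Battery runtime at a constant power draw."""
    return battery_wh / load_w

# ~20 Wh vacuum battery against an assumed ~30 W laptop-class SLAM load
hours = runtime_hours(20.0, 30.0)  # about 0.67 h, i.e. under an hour
```

This is why embedded SLAM implementations must hit single-digit-watt budgets rather than laptop-class ones.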
Furthermore, when a new robot is first powered on, it knows nothing about the layout of the home, so a map must be built first. "This presents a contradiction," Chen Shikai said: people want robots to start working the moment they arrive, but mainstream algorithms still require the environment to be mapped or explored in advance. Here, "the industry needs to do some work." Chen Shikai gave an example: the robot could start from a rough initial path plan and gradually refine and improve it as it operates and explores.
In commercial or professional scenarios, the challenge for autonomous navigation systems lies in the fact that commercial maps are typically very large, sometimes exceeding tens of thousands of square meters. "Currently, SLAM systems are still quite memory- and computationally intensive. Enabling them to work in such large environments is a significant challenge for navigation and positioning systems," said Chen Shikai. He explained that the solution is to equip them with powerful hardware while simultaneously optimizing the software and algorithms. "A qualified navigation and positioning system should not only have LiDAR but also visual sensors and ultrasonic sensors, and its navigation and positioning algorithms should be integrated accordingly. This integration may not be difficult academically or algorithmically, but considering industrialization issues—for example, many ultrasonic sensors are non-standard products, and depth vision sensors vary in specifications and installation locations from manufacturer to manufacturer—finding a unified, standardized interface for easy customer use presents a challenge."