In the late 1970s, with the application of computers and the development of sensing technology, mobile robot research experienced a new surge. Especially in the mid-1980s, a wave of robot design and manufacturing swept the world. A large number of world-renowned companies began developing mobile robot platforms, primarily serving as experimental platforms for university laboratories and research institutions, thus promoting the emergence of various research directions in mobile robotics.
Mobile robots are now widely used in military, industrial, and civilian fields, and are still developing. While mobile robot technology has made encouraging progress and research results are promising, it still requires a long period of development to meet practical application needs. It is believed that with the continuous improvement of sensing, intelligence, and computing technologies, intelligent mobile robots will certainly be able to play a human-like role in production and daily life. So, what are the main technologies involved in mobile robot positioning? Currently, there are five main positioning technologies for mobile robots.
I. Ultrasonic Navigation and Positioning Technology for Mobile Robots
The working principle of ultrasonic navigation and positioning is similar to that of laser and infrared. Typically, ultrasonic sensors emit ultrasonic waves from their transmitting probes, and these waves return to the receiving device when they encounter obstacles in the medium.
The sensor receives the echo of the pulse it emitted, and from the time difference between emission and reception together with the propagation speed it computes the distance S from the obstacle to the robot: S = vT/2, where T is the time difference between emission and reception and v is the speed of the ultrasonic wave in the medium.
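As a minimal sketch, the S = vT/2 relation translates directly into code; the default wave speed below assumes sound in air at roughly 20 °C:

```python
def ultrasonic_distance(t_seconds, v=343.0):
    """Distance to an obstacle from an ultrasonic time of flight.

    t_seconds: time T between pulse emission and echo reception
    v: propagation speed of the wave in the medium, in m/s
       (about 343 m/s in air at 20 degrees C, about 1500 m/s in water)
    The pulse travels to the obstacle and back, hence the division by 2.
    """
    return v * t_seconds / 2.0

# A 10 ms round trip in air corresponds to about 1.7 m:
print(ultrasonic_distance(0.010))  # 1.715
```

The same time-of-flight relation underlies laser ranging, with v replaced by the speed of light.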
In practice, many mobile robot navigation and positioning systems use separate transmitting and receiving devices, with multiple receivers placed around the mapped environment and the transmitting probe mounted on the mobile robot.
In the navigation and positioning of mobile robots, due to the inherent defects of ultrasonic sensors, such as specular reflection and limited beam angle, it is difficult to obtain sufficient information about the surrounding environment. Therefore, an ultrasonic sensing system composed of multiple sensors is usually used to establish a corresponding environmental model. The information collected by the sensors is transmitted to the control system of the mobile robot through serial communication. The control system then uses certain algorithms to process the data based on the collected signals and the established mathematical model to obtain the robot's position and environmental information.
Due to their advantages such as low cost, fast data acquisition rate, and high distance resolution, ultrasonic sensors have long been widely used in the navigation and positioning of mobile robots. Furthermore, they do not require complex image processing techniques when acquiring environmental information, resulting in fast ranging speed and good real-time performance.
Ultrasonic sensors are also largely unaffected by external conditions such as weather, ambient light, shadows from obstacles, and surface roughness. As a result, ultrasonic navigation and positioning have been widely applied in the perception systems of various mobile robots.
II. Visual Navigation and Positioning Technology for Mobile Robots
In visual navigation and positioning systems, the most widely used method both domestically and internationally is the navigation approach based on local vision, which involves installing onboard cameras within the robot. In this method, control equipment and sensors are mounted on the robot's body, while high-level decisions such as image recognition and path planning are handled by the onboard control computer.
Visual navigation and positioning systems mainly include: cameras (or CCD image sensors), video signal digitization equipment, DSP-based high-speed signal processors, computers, and their peripherals. Many robotic systems now use CCD image sensors, whose basic components are a row of silicon imaging elements. Photosensitive elements and charge transfer devices are arranged on a substrate, and through the sequential transfer of charge, the video signals of multiple pixels are extracted in a time-division and sequential manner. For example, the resolution of images acquired by area array CCD sensors can range from 32×32 to 1024×1024 pixels.
In simple terms, the working principle of a visual navigation and positioning system is to perform optical processing on the robot's surrounding environment. First, a camera is used to collect image information, which is then compressed and fed back to a learning subsystem composed of neural networks and statistical methods. The learning subsystem then links the collected image information with the robot's actual position to complete the robot's autonomous navigation and positioning function.
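The "learning subsystem" described above can be sketched, very loosely, as a nearest-neighbour lookup: compressed image descriptors are stored together with the poses at which they were recorded, and a new view is localized to the most similar stored one. All class names, descriptors, and positions below are hypothetical illustrations, not part of any real system.

```python
import math

class AppearanceLocalizer:
    """Toy appearance-based localizer: maps image descriptors to poses."""

    def __init__(self):
        self.views = []  # list of (descriptor, (x, y)) pairs

    def add_view(self, descriptor, pose):
        """Record a compressed image descriptor and the pose where it was taken."""
        self.views.append((list(descriptor), pose))

    def localize(self, descriptor):
        """Return the pose of the stored view whose descriptor is closest
        (Euclidean distance) to the query descriptor."""
        _, pose = min(self.views,
                      key=lambda dp: math.dist(descriptor, dp[0]))
        return pose

loc = AppearanceLocalizer()
loc.add_view([0.9, 0.1, 0.0], (0.0, 0.0))  # view recorded at the origin
loc.add_view([0.1, 0.8, 0.2], (2.0, 1.0))  # view recorded at (2, 1)
print(loc.localize([0.15, 0.75, 0.1]))     # (2.0, 1.0)
```

Real systems use far richer features and interpolate between views, but the principle of linking image content to recorded positions is the same.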
III. GPS Global Positioning System
Currently, pseudorange differential dynamic positioning is commonly used in navigation and positioning for intelligent robots. A base receiver and a dynamic (rover) receiver jointly observe four GPS satellites, and the robot's three-dimensional position coordinates at a given moment are computed by a specific algorithm. Differential dynamic positioning cancels the satellite clock error; for users within about 1000 km of the base station it also largely removes tropospheric error, significantly improving dynamic positioning accuracy.
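The differential idea can be sketched as follows: the base receiver, whose position is known, computes a per-satellite correction (measured pseudorange minus true geometric range) and broadcasts it; the rover subtracts these corrections from its own measurements, cancelling errors common to both receivers. The toy scenario below uses made-up satellite positions and a single common clock-error term; it is an illustration of the cancellation, not a GPS solver.

```python
import math

def geometric_range(sat, rcv):
    """True straight-line distance between a satellite and a receiver."""
    return math.dist(sat, rcv)

def pseudorange_corrections(sat_positions, base_position, base_measurements):
    """Per-satellite corrections computed at the base station:
    measured pseudorange minus true geometric range, which captures
    satellite clock error, tropospheric delay, and similar common errors."""
    return [m - geometric_range(s, base_position)
            for s, m in zip(sat_positions, base_measurements)]

def apply_corrections(rover_measurements, corrections):
    """Rover subtracts the broadcast corrections from its own pseudoranges."""
    return [m - c for m, c in zip(rover_measurements, corrections)]

# Toy scenario (all positions in metres; values are illustrative only):
sats = [(20000.0, 0.0, 20000.0), (0.0, 20000.0, 20000.0),
        (-20000.0, 0.0, 20000.0), (0.0, -20000.0, 20000.0)]
base = (0.0, 0.0, 0.0)
rover = (100.0, 50.0, 0.0)
clock_error = 30.0  # common-mode error seen identically at both receivers

base_meas = [geometric_range(s, base) + clock_error for s in sats]
rover_meas = [geometric_range(s, rover) + clock_error for s in sats]
corrected = apply_corrections(
    rover_meas, pseudorange_corrections(sats, base, base_meas))

# After correction, the rover's pseudoranges match its true geometric ranges.
print(all(abs(c - geometric_range(s, rover)) < 1e-6
          for c, s in zip(corrected, sats)))  # True
```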
However, in mobile navigation, the positioning accuracy of a mobile GPS receiver is affected by satellite signal conditions and the road environment, as well as by many other factors such as clock errors, propagation errors, and receiver noise. Relying solely on GPS therefore yields relatively low positioning accuracy and reliability, so in robot navigation applications GPS data are typically combined with complementary sensors such as magnetic compasses and optical encoders. Furthermore, GPS navigation systems are not suitable for indoor or underwater robot navigation, or for robot systems requiring high positional accuracy.
IV. Mobile Robot Optical Reflection Navigation and Positioning Technology
Typical optical reflection navigation and positioning methods primarily utilize laser or infrared sensors for distance measurement. Both laser and infrared sensors employ optical reflection technology for navigation and positioning.
A laser global positioning system generally consists of a laser rotation mechanism, a reflector, a photoelectric receiving device, and a data acquisition and transmission device.
During operation, the laser is emitted outward through a rotating mirror mechanism. When it scans a cooperative landmark formed by a retroreflector, the reflected light is processed by a photoelectric receiving device as a detection signal. This triggers a data acquisition program to read the code disk data (the measured angle to the target) from the rotating mechanism. The data is then transmitted to a host computer for processing. Based on the known locations of the landmarks and the detected information, the sensor's current position and orientation in the landmark coordinate system can be calculated, thereby achieving navigation and positioning.
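Under the simplifying assumption that the scanner's angular zero is aligned with the world frame (i.e. heading is known), the position calculation from measured landmark angles can be sketched as intersecting the lines back-projected through two known reflector positions. The function name and numbers are illustrative; a real system uses three or more landmarks and also solves for orientation.

```python
import math

def locate_from_bearings(l1, b1, l2, b2):
    """Robot position from bearings to two known reflector landmarks.

    l1, l2: landmark (x, y) positions; b1, b2: measured bearings (radians).
    Each landmark lies on a ray leaving the robot at its bearing, so the
    robot sits at the intersection of the lines through l1 and l2 with
    directions b1 and b2.
    """
    # Line through landmark i with direction bi:
    #   sin(bi) * x - cos(bi) * y = sin(bi) * lx - cos(bi) * ly
    a1, c1 = math.sin(b1), -math.cos(b1)
    a2, c2 = math.sin(b2), -math.cos(b2)
    r1 = a1 * l1[0] + c1 * l1[1]
    r2 = a2 * l2[0] + c2 * l2[1]
    det = a1 * c2 - a2 * c1
    if abs(det) < 1e-12:
        raise ValueError("bearings are parallel; no unique fix")
    # Cramer's rule for the 2x2 linear system.
    return (r1 * c2 - r2 * c1) / det, (a1 * r2 - a2 * r1) / det

# Robot at (1, 1): landmark (4, 1) is seen at bearing 0,
# landmark (1, 5) at bearing pi/2.
print(locate_from_bearings((4.0, 1.0), 0.0, (1.0, 5.0), math.pi / 2))
```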
Laser ranging has advantages such as narrow beam, good parallelism, low scattering, and high ranging directional resolution. However, it is also greatly affected by environmental factors. Therefore, how to denoise the acquired signal when using laser ranging is a major challenge. In addition, laser ranging also has blind zones, so it is difficult to achieve navigation and positioning solely with lasers. In industrial applications, it is generally used for on-site inspection within a specific range, such as detecting cracks in pipelines.
Infrared sensing technology is frequently used in obstacle avoidance systems for multi-joint robots to form a large-area "sensitive skin" covering the surface of the robot arm, which can detect various objects encountered by the robot arm during operation.
A typical infrared sensor includes a solid-state light-emitting diode (LED) that emits infrared light and a solid-state photodiode that acts as a receiver. The infrared LED emits a modulated signal, and the photodiode receives the modulated infrared signal reflected from the target object. Ambient infrared interference is suppressed by the signal modulation and a dedicated infrared filter. Let Vo be the output voltage corresponding to the reflected light intensity; Vo is then a function of the distance between the probe and the workpiece: Vo = f(x, p), where p is the reflectance coefficient of the workpiece, which depends on the surface color and roughness of the target object, and x is the distance between the probe and the workpiece.
When the workpieces are all similar target objects with the same p value, x and Vo correspond one-to-one. x can be obtained by interpolating the proximity measurement experimental data of various target objects. In this way, the position of the robot from the target object can be measured by the infrared sensor, and then the mobile robot can be navigated and positioned by other information processing methods.
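A minimal sketch of that interpolation, assuming a hypothetical calibration table of (Vo, x) pairs measured in advance for one workpiece type (fixed p); the function name and calibration values are invented for illustration:

```python
def distance_from_ir_voltage(v_out, calibration):
    """Estimate distance x from IR output voltage Vo by linearly
    interpolating calibration data for one workpiece type (same p).

    calibration: list of (voltage, distance) pairs covering the sensor's
    working range; Vo typically falls as distance grows.
    """
    pts = sorted(calibration)  # ascending by voltage
    if not (pts[0][0] <= v_out <= pts[-1][0]):
        raise ValueError("voltage outside calibrated range")
    for (v0, x0), (v1, x1) in zip(pts, pts[1:]):
        if v0 <= v_out <= v1:
            t = (v_out - v0) / (v1 - v0)
            return x0 + t * (x1 - x0)

# Hypothetical calibration for one workpiece type (volts, centimetres):
cal = [(0.5, 40.0), (1.0, 25.0), (2.0, 12.0), (3.0, 5.0)]
print(distance_from_ir_voltage(1.5, cal))  # 18.5
```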
Although infrared sensing also offers high sensitivity, a simple structure, and low cost, its distance resolution is poor despite its high angular resolution, so in mobile robots it is mostly used as a proximity sensor: it can detect nearby or suddenly appearing obstacles, allowing the robot to stop in an emergency.
V. SLAM (Simultaneous Localization and Mapping): The Current Mainstream Localization Technology
Most leading service robot companies in the industry have adopted SLAM technology, with SLAMTEC holding a distinctive advantage in it. So what exactly is SLAM technology? Simply put, SLAM refers to the entire process by which a robot completes localization, mapping, and path planning in an unknown environment.
SLAM (Simultaneous Localization and Mapping), since its inception in 1988, has primarily been used to study the intelligent movement of robots. For completely unknown indoor environments, equipped with core sensors such as LiDAR, SLAM technology can help robots build maps of the indoor environment, facilitating autonomous movement.
The SLAM problem can be described as follows: A robot starts moving from an unknown location in an unknown environment, and during the movement, it performs self-localization based on position estimation and sensor data, while building an incremental map.
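A heavily simplified sketch of that loop: the pose is propagated by dead reckoning from motion estimates, and each range/bearing observation is converted into the world frame and appended to an incremental landmark map. A real SLAM system would also correct the pose using re-observed landmarks; that correction step is omitted here, and all names and numbers are illustrative.

```python
import math

class MiniSlam:
    """Toy sketch of the SLAM loop: dead-reckoned pose + incremental map."""

    def __init__(self):
        self.x = self.y = self.theta = 0.0  # pose starts at the unknown origin
        self.map = []                       # world-frame landmark positions

    def predict(self, forward, turn):
        """Dead-reckoning update from odometry (distance moved, heading change)."""
        self.theta += turn
        self.x += forward * math.cos(self.theta)
        self.y += forward * math.sin(self.theta)

    def observe(self, rng, bearing):
        """Convert a range/bearing measurement to the world frame and map it."""
        a = self.theta + bearing
        self.map.append((self.x + rng * math.cos(a),
                         self.y + rng * math.sin(a)))

robot = MiniSlam()
robot.predict(1.0, 0.0)            # drive 1 m along x
robot.observe(2.0, math.pi / 2)    # landmark seen 2 m to the left
print(robot.map)                   # approximately [(1.0, 2.0)]
```

The missing ingredient in this sketch, correcting the accumulated pose error when a mapped landmark is seen again, is precisely what distinguishes SLAM from plain dead reckoning.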
The main ways to implement SLAM technology include VSLAM, Wifi-SLAM, and Lidar SLAM.
1. VSLAM (Visual SLAM)
This refers to navigation and exploration in indoor environments using cameras and depth sensors such as the Kinect. Its working principle is the camera-based pipeline described earlier: a camera collects image information, which is compressed and fed back to a learning subsystem composed of neural networks and statistical methods; the subsystem then links the image information with the robot's actual position, completing the robot's autonomous navigation and localization function.
However, indoor VSLAM is still in the research stage and far from practical application. On the one hand, the computational load is too large, placing high demands on the performance of the robot system; on the other hand, the maps generated by VSLAM (mostly point clouds) cannot yet be used for robot path planning, requiring further exploration and research.
2. Wifi-SLAM
This refers to using multiple sensors in a smartphone for positioning, including Wi-Fi, GPS, gyroscope, accelerometer, and magnetometer, and then using algorithms such as machine learning and pattern recognition to create accurate indoor maps from the acquired data. The provider of this technology was acquired by Apple in 2013. Whether Apple has already implemented Wi-Fi-SLAM technology in the iPhone, essentially giving every iPhone user a built-in mapping robot, remains to be seen. Undoubtedly, more accurate positioning not only benefits maps but also makes all location-based services (LBS) more precise.
3. Lidar SLAM
This refers to using LiDAR as a sensor to acquire map data, enabling robots to achieve simultaneous localization and map building. While the technology itself is quite mature after years of validation, the high cost of LiDAR remains a significant bottleneck that needs to be addressed.
Google's self-driving cars utilize this technology, with a roof-mounted LiDAR system from the American company Velodyne, costing over $70,000. This LiDAR emits 64 laser beams while rotating at high speed. The lasers hit surrounding objects and return, allowing the system to calculate the distance between the vehicle and those objects. The computer system then uses this data to create a detailed 3D terrain map, which is combined with a high-resolution map to generate different data models for use by the onboard computer system. The LiDAR accounts for half the total cost of the vehicle, which may be one of the reasons why Google's self-driving cars have been slow to enter mass production.
LiDAR, with its strong directionality, effectively ensures navigation accuracy and is well suited to indoor environments. However, Lidar SLAM has seen limited adoption in indoor robot navigation because LiDAR remains too expensive.