
Research on Self-Localization of Mobile Robots Based on Depth Vision


Abstract : Self-localization is fundamental for mobile robots to perform path planning and autonomous navigation. Solving its problems is a prerequisite for mobile robots to complete tasks. This paper constructs an indoor positioning system based on the Kinect depth vision sensor and ultrasound, and improves and refines the trilateration algorithm. The mobile robot can accurately obtain its own location information by perceiving and analyzing artificial beacons through its own sensors.

Keywords: mobile robot; self-localization; Kinect sensor; beacon

1. Introduction

With the increasingly widespread application of mobile robots in military, aerospace, industrial, and everyday life fields, people are demanding higher levels of intelligence from robots. Robots are being used for various tasks that humans cannot or find difficult to perform, including manual labor and repetitive tasks. Only when a mobile robot knows its location and how to get from one location to another can it move purposefully and effectively complete specific tasks. This process is figuratively called the navigation problem: "Where am I?", "Where do I want to go?", "How do I get there?". The robot self-localization problem is the first problem that needs to be solved, and its solution is the key and foundation for solving the other two problems.

Beacon-based positioning systems rely on a series of beacons with known features in the environment and observe the beacons through sensors installed on the mobile robot. There are many types of sensors that can be used to observe beacons, including ultrasonic sensors, laser sensors, vision sensors, etc. Beacon positioning methods mainly include trilateration, triangulation and scene analysis [1]. Students from Hefei University of Technology proposed an ultrasonic indoor positioning system based on Zigbee. In this system, ultrasonic transmitters and Zigbee wireless modules are installed on the mobile robot, and ultrasonic receiver modules are installed on the ceiling at certain intervals. The global coordinates of the receiver modules, which serve as beacons, are known. The robot body is positioned by obtaining the distance between the receiver module and the transmitter using the trilateration principle. This positioning method can achieve a good positioning effect, but a large number of receiver modules need to be installed in the positioning area, which increases the cost of the positioning system and causes great inconvenience to installation and maintenance. Students from Harbin Institute of Technology studied an indoor robot positioning method under a sparse ultrasonic network to address the problem that ultrasonic networks require a large number of hardware facilities. However, this positioning method requires determining the initial pose of the robot and takes a lot of time during the positioning process.

This paper proposes a localization method based on the Kinect depth vision sensor. An easily identifiable artificial beacon is designed, and the beacon is registered and identified using edge detection and other methods. Then, the Kinect vision sensor is used to accurately measure the distance from the robot body to the beacon. Finally, an improved trilateration principle is applied to achieve self-localization of the mobile robot.

2. Robot Hardware System

The mobile robot used in this paper is the Pioneer3-DX, designed and manufactured by MobileRobots. Although smaller than many research platforms, the Pioneer integrates mature intelligent mobile robot technology, and its capabilities are comparable to those of far bulkier and more expensive devices. The Pioneer3-DX carries an onboard PC, making it a fully autonomous intelligent mobile robot system.

2.1 Sonar Ring

The sonar transducers on the Pioneer3-DX are fixed in position: in each ring, one transducer faces each side and six more are distributed at 20-degree intervals across the front (or rear). With both front and rear rings, this arrangement gives the robot nearly seamless 360-degree detection, as shown in Figure 1.

Figure 1 Pioneer3 sonar ring

ARCOS-based MobileRobots platforms support up to four sonar rings, each with up to eight transducers. Sonar client commands can start or stop the entire sonar system or individual rings. The command's string argument consists of a sequence of sonar numbers from 1 to 32: numbers 1-8 select, in firing order, the transducers of sonar ring 1; 9-16 those of ring 2; 17-24 those of ring 3; and 25-32 those of ring 4. A sonar number may appear two or more times in a firing sequence. Any sonar number that does not appear in the argument sequence leaves the corresponding transducer disabled.
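The mapping from sonar numbers to rings and transducers described above can be captured in a small helper. This is an illustrative sketch of the numbering scheme only, not part of the ARCOS client library; the function name is hypothetical.

```python
def sonar_ring_and_transducer(sonar_number):
    """Map an ARCOS-style sonar number (1-32) to (ring, transducer).

    Illustration only (not an ARCOS API call): numbers 1-8 address ring 1,
    9-16 ring 2, 17-24 ring 3, and 25-32 ring 4; within each ring the
    transducers are numbered 1-8 in firing order.
    """
    if not 1 <= sonar_number <= 32:
        raise ValueError("sonar number must be in the range 1 to 32")
    ring = (sonar_number - 1) // 8 + 1
    transducer = (sonar_number - 1) % 8 + 1
    return ring, transducer
```

For example, sonar number 17 addresses the first transducer of ring 3.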

2.2 Kinect

The Kinect sensor is a depth vision sensor released by Microsoft in November 2010. It includes an RGB color camera, an infrared emitter, and an infrared CMOS camera, and can be used to measure three-dimensional point data in space. The RGB camera captures color information, while the infrared emitter and infrared CMOS camera together form a structured-light depth sensor responsible for capturing depth images [2].

Kinect ranges by light coding, i.e., it uses a light source to encode the space to be measured. The light source produces laser speckle: the random diffraction pattern formed when a laser strikes a rough surface or passes through frosted glass. These speckles are highly random and their pattern changes with distance, so the speckle patterns at any two points in space differ [3]. Projecting such structured light therefore marks the entire space: once an object is placed in it, its location can be determined from the speckle pattern observed on the object. This requires recording the speckle pattern of the whole space beforehand, i.e., calibrating the light source. In the PrimeSense patent, calibration proceeds as follows [4]: at fixed intervals, a reference plane is placed and its speckle pattern recorded. If the user activity space is defined as 1 to 4 meters from the sensor and a reference plane is taken every 10 cm, calibration yields 30 stored speckle images. At measurement time, a speckle image of the scene is captured and cross-correlated in turn with the 30 saved reference images, producing 30 correlation images; wherever an object is present in the space, a correlation peak appears in the image corresponding to its distance.
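The reference-plane lookup described above can be sketched as follows. This is a simplified illustration of the idea, not Kinect's actual implementation: assuming 30 calibrated planes from 1 m to 4 m at 10 cm steps, depth is taken from whichever stored reference speckle image correlates best with the captured patch.

```python
import numpy as np

# Calibrated reference planes: 100, 110, ..., 390 cm (30 planes).
REF_DISTANCES_CM = np.arange(100, 400, 10)

def normalized_cross_correlation(a, b):
    """Zero-mean normalized cross-correlation of two equal-sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def estimate_depth_cm(patch, reference_patches):
    """Return the distance of the reference plane whose stored speckle
    pattern best matches the captured patch."""
    scores = [normalized_cross_correlation(patch, ref) for ref in reference_patches]
    return int(REF_DISTANCES_CM[int(np.argmax(scores))])
```

A real sensor correlates per-pixel windows over the whole image; this sketch collapses that to a single patch to show the lookup principle.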

Figure 2. Kinect appearance diagram

In this work, a pan-tilt unit is mounted on the Pioneer3-DX and the Kinect depth vision sensor is placed on top of it. Under control-system command, the Kinect can thus scan the surrounding environment through a full 360 degrees. The overall localization procedure is illustrated in the flowchart of Figure 3. Artificial beacons are identified using edge detection and image registration; for details, see the paper "Research on Edge Detection Algorithm Based on Sobel Operator". This paper discusses only the localization process after the artificial beacons have been identified.

Figure 3. Flowchart of robot self-localization

3. Improved Trilateration Method

The traditional trilateration method is shown in Figure 4. The coordinates of points A, B, and C are known; writing them as (x1, y1), (x2, y2), (x3, y3), and the measured distances from the unknown point P(x, y) to the three beacons as d1, d2, d3, the three circle equations (x − xi)² + (y − yi)² = di², i = 1, 2, 3, form the system of equations 3-1.
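With generic beacon coordinates (x1, y1), (x2, y2), (x3, y3) and measured distances d1, d2, d3 (placeholder symbols, not the paper's specific values), the classic trilateration solution can be sketched as follows: subtracting the circle equations pairwise eliminates the quadratic terms and leaves a 2x2 linear system in the unknown position.

```python
import numpy as np

def trilaterate(beacons, dists):
    """Classic trilateration: solve the linear system obtained by
    subtracting the circle equations pairwise."""
    (x1, y1), (x2, y2), (x3, y3) = beacons
    d1, d2, d3 = dists
    # (circle 1) - (circle 2) and (circle 1) - (circle 3)
    A = np.array([[2 * (x2 - x1), 2 * (y2 - y1)],
                  [2 * (x3 - x1), 2 * (y3 - y1)]], dtype=float)
    b = np.array([d1**2 - d2**2 - x1**2 + x2**2 - y1**2 + y2**2,
                  d1**2 - d3**2 - x1**2 + x3**2 - y1**2 + y3**2], dtype=float)
    return np.linalg.solve(A, b)  # estimated (x, y)
```

With exact distances the three circles meet at one point and this system returns it; with noisy distances it returns the intersection of the two lines through the pairwise circle intersections.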

Figure 4. Schematic diagram of the trilateration principle

In the absence of error, the system of equations 3-1 has a unique solution; that is, the circles projected onto the horizontal plane intersect at a single point. In actual measurement, however, error is unavoidable, and the three circles do not pass through a common point but instead bound a small region [6], as shown in Figure 5.

Figure 5. Measurement error formation area

From the solution process of trilateration, we find that when the three circles do not meet at a single point, the coordinates returned for the node to be measured are the intersection of two lines: line L1, passing through the two intersection points of circles A and B, and line L2, passing through the two intersection points of circles A and C. This solution does not fully exploit the known coordinates of all three nodes; it computes the position only from the intersection of two lines, so the resulting coordinates carry a relatively large error [7], as shown in Figure 6.

Figure 6. Trilateration algorithm under error conditions

To make the computed coordinates of the moving node more accurate in the presence of measurement error, earlier researchers proposed taking the centroid of the region enclosed by the three circles as the position of the node to be measured [8]. The equations of system 3-1 can be expressed as the functions fi(x, y) = (x − xi)² + (y − yi)² − di², i = 1, 2, 3.
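A minimal sketch of the centroid idea, under the assumption that each pair of circles still intersects: for each pair, take the intersection point most consistent with the third circle, then average the three points. The helper names are illustrative, not from the paper.

```python
import numpy as np

def circle_intersections(c1, r1, c2, r2):
    """Two intersection points of two circles (assumes they intersect)."""
    c1, c2 = np.asarray(c1, float), np.asarray(c2, float)
    d = np.linalg.norm(c2 - c1)
    a = (r1**2 - r2**2 + d**2) / (2 * d)   # distance from c1 to the chord
    h = np.sqrt(max(r1**2 - a**2, 0.0))    # half the chord length
    mid = c1 + a * (c2 - c1) / d
    perp = np.array([-(c2 - c1)[1], (c2 - c1)[0]]) / d
    return mid + h * perp, mid - h * perp

def centroid_estimate(centers, radii):
    """Centroid of the three pairwise intersection points that best
    agree with the remaining circle."""
    points = []
    for i, j, k in [(0, 1, 2), (0, 2, 1), (1, 2, 0)]:
        p, q = circle_intersections(centers[i], radii[i], centers[j], radii[j])
        third = np.asarray(centers[k], float)
        # keep the point whose distance to the third beacon is closest to r_k
        pick = min((p, q), key=lambda pt: abs(np.linalg.norm(pt - third) - radii[k]))
        points.append(pick)
    return np.mean(points, axis=0)
```

When the measurement errors are small, the three selected intersection points cluster tightly around the true position, so their centroid averages out much of the error.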

This paper conducted multiple independent experiments using the aforementioned algorithm, obtaining relatively abundant experimental results. Experimental data show that in most cases, the method can accurately achieve self-localization of the robot; however, there are still instances where the localized point deviates significantly from the actual robot position. The author selected a representative set of data for further analysis and proposed a method to address this problem, thus improving the aforementioned algorithm.

4. Experiments and Simulations

This paper randomly recorded ten exact robot positions (unit: cm):

1 (180, 80, 45)   2 (200, 60, 45)   3 (220, 70, 45)
4 (240, 80, 45)   5 (260, 90, 45)   6 (280, 100, 45)
7 (300, 110, 45)  8 (320, 120, 45)  9 (340, 130, 45)
10 (360, 140, 45)

The distances from each position to the three landmarks were measured by the Kinect sensor, as shown in Table 1. The improved trilateration algorithm described above is then used to achieve self-localization of the robot, and the coordinates it produces are compared with the actual positions recorded beforehand to verify the localization performance of this paper's method.

Table 1 Data Information

During the simulation experiments, the author found that as the measurement error in the robot-to-landmark distances increased, there were indeed cases in which the three circles failed to intersect at a single point, and in some of these cases the method could not directly yield the desired localization point, as shown in Figure 7.

Figure 7 MATLAB simulation results

In practical applications, the two methods can be integrated, and appropriate improvements can be made for different landmark locations to achieve precise robot localization. Figure 8 shows a simulation of the robot's self-localization using the fusion algorithm presented in this paper.

Figure 8. Robot localization comparison chart

In the figure above, the blue curve represents the robot's actual trajectory, and the red curve represents the self-localized positions obtained by applying this paper's fusion algorithm. The simulation shows that the algorithm achieves high positioning accuracy, with errors within 2 cm, and that it can accurately localize the robot in indoor structured environments.
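The accuracy comparison above can be reproduced in outline as follows. The numbers here are fabricated for illustration only, not the paper's measurements; the point is simply how per-position Euclidean error against ground truth is computed.

```python
import numpy as np

# Hypothetical ground-truth and self-localized positions (cm),
# made up for demonstration purposes.
actual = np.array([[180.0, 80.0], [200.0, 60.0], [220.0, 70.0]])
estimated = np.array([[181.2, 79.1], [199.4, 61.0], [220.8, 69.3]])

# Per-position Euclidean error and its maximum.
errors = np.linalg.norm(estimated - actual, axis=1)
print("per-position error (cm):", np.round(errors, 2))
print("max error (cm):", round(float(errors.max()), 2))
```

A localization run passes the paper's 2 cm criterion when the maximum of these errors stays below 2 cm.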

5. Conclusion

This paper constructs an indoor positioning system based on the Kinect depth vision sensor and improves and refines the trilateration algorithm. The mobile robot can accurately obtain its own position information by perceiving and analyzing artificial beacons through its own sensors. Furthermore, MATLAB simulation experiments verify the high accuracy of the proposed algorithm and its ability to accurately achieve robot self-localization.

References

[1] Sun Limin, Li Jianzhong, Chen Yu. Wireless Sensor Networks [M]. Beijing: Tsinghua University Press, 2005: 140.

[2] Min Huasong, Yang Jie. Research on robot localization algorithm integrating IMU and Kinect [D]. Wuhan: Wuhan University of Science and Technology, 2014, 5.

[3] Xia Luyi, He Chao. Simultaneous target tracking and obstacle avoidance of mobile robots based on Kinect [D]. Taiyuan: Taiyuan University of Technology, 2013, 5.

[4] Xu Xiangmin, Li Huixian, Ye Rizang. Research on the Application of 3D Reconstruction Technology Based on Kinect Depth Sensor [D]. Guangzhou: South China University of Technology, 2013, 10.

[5] Sun Baojiang, Xu Yue. Robot localization and obstacle avoidance based on ultrasonic ranging [D]. Jinan: Qilu University of Technology, 2013, 5.

[6] Zhang Shugang. Local obstacle avoidance algorithm and application of mobile robot based on ultrasound [D]. Harbin: Harbin Institute of Technology, 2013, 12.

[7] Lu Huimin, Zhang Hui, Zheng Zhiqiang. Vision-based self-localization of mobile robots [J]. Journal of Central South University (Natural Science Edition), 2009, 40(Suppl.): 128-129.

[8] Zhou Lun. Research on Ultrasonic Network Positioning Method for Indoor Mobile Robots [D]. Harbin: Harbin Institute of Technology, 2013, 7.

[9] Ibraheem M. Gyroscope-enhanced dead reckoning localization system for an intelligent walker [C] // 2010 International Conference on Information Networking and Automation (ICINA). IEEE, 2010, 1: V1-67 - V1-72.

[10] Cho B S, Moon W, Seo W J, et al. A dead reckoning localization system for mobile robots using inertial sensors and wheel revolution encoding [J]. Journal of Mechanical Science and Technology, 2011, 25(11): 2907-2917.

About the author:

Guo Tongying (1974-), female, associate professor, master's supervisor

Chen Ce (1990-), male, Master of Control Engineering
