
Robot Indoor Motion Trajectory Correction Control Algorithm

2026-04-06 07:22:27 · #1

Mobile robots integrate complex functions such as environmental perception, dynamic decision-making and intelligent planning, precise behavior control and execution, and autonomous navigation and localization; they are a current research hotspot with enormous application potential in the military, industrial, agricultural, and service sectors. At present, most mobile robots rely solely on sensors such as odometry and sonar to determine their position when moving through complex indoor environments. This produces significant deviations between the robot's actual trajectory and the preset trajectory, severely hindering deeper applications. This paper proposes a solution to the problem of inaccurate indoor robot motion trajectories.

Limiting deviations in robot motion trajectories can be considered from both hardware and software perspectives. From a hardware perspective, high-precision sensors and highly reliable motion control systems can be used to improve the accuracy of robot navigation. From a software perspective, robot navigation and positioning algorithms can be improved to obtain accurate pose parameters that are fed back to the motion control system.

This paper corrects robot motion trajectory deviations mainly from the perspective of software algorithms. First, LiDAR sensor information is converted into corrective motion control commands and fed back to the motion control terminal in real time to correct the robot's trajectory deviations. Second, the trajectory correction model parameters in the algorithm are optimized through extensive experiments to further improve the accuracy of the robot's indoor motion trajectory.

1. Indoor Mapping Motion Trajectory Control Algorithm

1.1 Robot Indoor Mapping and Localization Model

In this paper, a LiDAR sensor is mounted on a robot control platform and the open-source HectorSLAM algorithm package is used to build a SLAM functional module, achieving simultaneous localization and map creation for a mobile robot in an unknown indoor environment and collecting the robot's coordinate positioning information.

1.1.1 Motion Model

To accurately describe the motion of a mobile robot, the actual motion trajectory must be parameterized: the robot's position and orientation in the environment are represented as data, which can then be used for control, as shown in Figure 1.

Figure 1 Robot motion model

Let the robot's motion state at time k be X_k = (x_k, y_k, θ_k); the motion model based on control commands can then be expressed as follows:

x_{k+1} = x_k + V·Δt·cos θ_k + U_x
y_{k+1} = y_k + V·Δt·sin θ_k + U_y
θ_{k+1} = θ_k + W·Δt + U_θ    (1)

In the formula, V is the linear velocity; W is the angular velocity; and U = (U_x, U_y, U_θ) is the noise term, representing mechanical error or environmental noise during wheel movement. The robot's velocities are in turn obtained from the wheel speeds:

V = r·(φ_R + φ_L)/2
W = r·(φ_R - φ_L)/(2l)    (2)

In the formula, r is the wheel radius; 2l is the wheel track; and φ_L, φ_R are the left and right wheel angular speeds.
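As a concrete illustration, the motion model (1) and the wheel-speed relation (2) can be sketched in Python; the function names are ours, the noise term U is omitted, and the standard differential-drive form is assumed:

```python
import math

def wheel_to_body(phi_l, phi_r, r, l):
    """Eq. (2): body velocities from left/right wheel angular speeds.

    r is the wheel radius and 2*l the wheel track.
    """
    v = r * (phi_r + phi_l) / 2.0        # forward velocity V
    w = r * (phi_r - phi_l) / (2.0 * l)  # angular velocity W
    return v, w

def propagate(x, y, theta, v, w, dt):
    """Eq. (1) with the noise term U omitted: one pose update step."""
    return (x + v * dt * math.cos(theta),
            y + v * dt * math.sin(theta),
            theta + w * dt)
```

For example, equal wheel speeds give W = 0 and the robot moves straight along its current heading.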

1.1.2 SLAM Module

The SLAM module acquires the robot's real-time pose information and creates an environmental map in an unknown environment; the specific processing procedure is shown in Figure 2. First, the sensors scan the environment and the collected point cloud and image data are processed. Second, feature matching is performed and filter-based optimization is carried out, yielding the robot's motion trajectory and the created map of the local environment. Finally, the trajectory and map information output by the SLAM module can be used by trajectory correction algorithms to adjust the robot's motion control state.
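The processing steps above can be laid out as a structural sketch; the helper names and their trivial placeholder bodies are ours and do not correspond to the HectorSLAM API:

```python
def scan_match(scan, pose, grid_map):
    # A real system aligns the scan against the map to refine the pose.
    return pose

def filter_update(pose, trajectory):
    # A real system runs filter-based optimization over recent poses.
    trajectory.append(pose)
    return pose

def update_map(grid_map, scan, pose):
    # A real system rasterizes the scan into an occupancy grid at `pose`.
    grid_map[pose] = scan

def slam_step(scan, pose, grid_map, trajectory):
    pose = scan_match(scan, pose, grid_map)  # observation and matching
    pose = filter_update(pose, trajectory)   # filter optimization
    update_map(grid_map, scan, pose)         # map creation
    return pose                              # input to trajectory correction
```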

Figure 2 SLAM framework

In a specific environment, with the LiDAR mounted on the robot panel scanning a corridor, the map created by SLAM can be compared with the real environment, as shown in Figures 3 and 4.

Figure 3. Real-world SLAM testing environment

Figure 4. SLAM map creation result

During the process of the laser rangefinder scanning environmental information, the observed quantities are the distance and azimuth angle between environmental landmarks and the robot. Let the landmark point be i and the sensor position be s. The observation model of the system is then as follows:

ρ = √((x_i - x_k^s)² + (y_i - y_k^s)²) + v_ρ
θ = arctan((y_i - y_k^s)/(x_i - x_k^s)) - θ_k^s + v_θ    (3)

In the formula, x_k^s represents the x-coordinate of the sensor at time k; y_k^s the y-coordinate of the sensor at time k; θ_k^s the azimuth angle of the sensor at time k; v_ρ the distance observation noise; v_θ the azimuth observation noise; and (x_i, y_i) the coordinates of the i-th landmark.

Since the sensor's pose can be converted to the robot's pose, a functional relationship between the observation Z and the robot's position is obtained, i.e., the sensor observation model. During observation, the noise follows a Gaussian distribution, and the above observations can also be expressed in a polar coordinate system.
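The range-bearing observation model (3) can be sketched as follows; the function name and the default zero-noise arguments are ours:

```python
import math
import random

def observe(sensor_pose, landmark, sigma_rho=0.0, sigma_theta=0.0):
    """Distance and azimuth of landmark i seen from sensor pose s.

    sensor_pose = (x_s, y_s, theta_s); landmark = (x_i, y_i).
    Gaussian observation noise v_rho, v_theta (zero by default).
    """
    x_s, y_s, th_s = sensor_pose
    x_i, y_i = landmark
    dx, dy = x_i - x_s, y_i - y_s
    rho = math.hypot(dx, dy) + random.gauss(0.0, sigma_rho)
    theta = math.atan2(dy, dx) - th_s + random.gauss(0.0, sigma_theta)
    return rho, theta
```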

1.2 PID Trajectory Correction Control Algorithm

PID control, short for proportional-integral-derivative control, consists of three parts: P (proportional unit), I (integral unit), and D (derivative unit), and is mainly applicable to linear feedback control systems. In practical robot trajectory control, the controller compares the map and positioning data collected by the sensors with the expected reference values and feeds the difference to the control input to compute the required output; the new input drives the system parameters toward the reference values.

A PID loop consists of three parts: sensor measurement information, controller decision information, and the output device unit. The loop uses the difference between the desired result and the measured result to calculate the system's correction value, and a corresponding decision is made on that basis to eliminate the error. In the control loop, the proportional term acts on the current error, the integral term on the accumulation of past errors, and the derivative term on the predicted trend of future errors.

As shown in Figure 5, the information from the laser rangefinder mounted on the robot's panel is the feedback information source in the PID control structure, and making decisions from this source is a key part of motion control. In this paper, a PI closed-loop control algorithm is adopted for trajectory correction decision-making, using the LiDAR positioning output to regulate the robot's motion state in real time through feedback.

Figure 5 Feedback control structure
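A minimal sketch of the PI loop used for trajectory correction, assuming a discrete update at a fixed time step; the class name and gains are ours, while the paper's actual gains are tuned experimentally:

```python
class PIController:
    """Discrete PI controller: u = kp*e + ki*integral(e)."""

    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0  # accumulated past error (I term)

    def update(self, setpoint, measurement):
        error = setpoint - measurement    # deviation from the reference
        self.integral += error * self.dt  # integrate the error over time
        return self.kp * error + self.ki * self.integral
```

In the trajectory correction setting, `measurement` would be the lateral deviation reported by the LiDAR SLAM positioning and `setpoint` the preset trajectory (zero deviation); the output is the corrective motion command.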

The PI control algorithm is mainly used in the robot's trajectory correction process, which is divided into the following two parts:

1.2.1 Azimuth Correction Principle

(1) Steering setting: When the robot deviates from the preset straight trajectory, the direction of the robot's deviation can be determined by the coordinate information output by the LiDAR SLAM.

(2) Rotation Angle: The magnitude of the rotation is not arbitrary. Optimal steering parameters are determined based on the robot's hardware characteristics and through multiple experimental tests. Different steering angles are set for different coordinate thresholds to ensure the accuracy of the corrected trajectory. Let d be the distance the robot deviates from its horizontal axis (in meters); v be the movement speed (in meters per second); and θ be the adjustment angle (in degrees). Based on experimental results, the following adjustment relationships exist.

(4)
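The experimentally determined thresholds of relation (4) are not reproduced here; the piecewise structure described above can nevertheless be sketched with placeholder values (all thresholds, angles, and the sign convention below are illustrative, not the paper's tuned parameters):

```python
def adjust_angle(d):
    """Map lateral deviation d (m) to a correction angle (degrees).

    All numeric thresholds and angles are illustrative placeholders.
    """
    sign = 1.0 if d >= 0 else -1.0  # steer back toward the preset axis
    d = abs(d)
    if d < 0.02:     # within tolerance: no correction
        return 0.0
    elif d < 0.05:   # small deviation: gentle correction
        return sign * 5.0
    elif d < 0.10:   # moderate deviation
        return sign * 10.0
    else:            # large deviation: strongest correction
        return sign * 20.0
```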

1.2.2 Distance Correction Principle

(1) Direction setting: Under normal circumstances, the robot does not exactly reach the specified position during its movement; this is handled by the integral correction control mode. The difference between the robot's accumulated distance increment and the preset distance determines the direction of the corrective movement.

(2) Correction distance: the compensation applied for the difference between the distance increment output by the robot's SLAM functional module and the preset distance.
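The distance correction principle above can be summarized in a short sketch; the function name and argument layout are ours:

```python
def distance_correction(preset_distance, slam_increments):
    """Residual distance to compensate (integral correction principle).

    slam_increments: per-step distance increments reported by the SLAM
    module. Their accumulated sum is compared against the preset distance;
    the sign of the result gives the corrective movement direction.
    """
    travelled = sum(slam_increments)    # accumulated distance increment
    return preset_distance - travelled  # shortfall (+) or overshoot (-)
```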

2. Trajectory Correction Algorithm Test

Typical robot motion trajectories are divided into linear motion and curvilinear motion, and curvilinear motion can be regarded as a superposition of linear motions. Therefore, robot kinematics research mainly examines the accuracy of the robot's linear and loop motion trajectories. This paper designs corresponding experiments for these two typical trajectories and analyzes and compares the effects of the trajectory correction algorithm.

2.1 Straight-Line Trajectory Test

(1) Straight walking test: Change the azimuth adjustment parameter in the trajectory correction algorithm and compare the correction effect of the robot's straight movement of 10m under the same conditions, as shown in Figures 6 and 7.

Figure 6 Slightly corrected parameters

Figure 7 Optimal correction parameters

Comparative analysis shows that the adjusted correction parameters effectively limit the drift of the robot's trajectory, and the optimized correction parameters can limit the robot's trajectory to the centimeter level.

(2) Control the mobile robot to walk back and forth for 5m in a straight line, and compare the trajectory before and after correction. The results are shown in Figure 8 and Figure 9.

Figure 8 Comparison of linear motion trajectories before and after correction

Figure 9. Comparison of magnified motion trajectories before and after correction.

Figure 9 is a magnified view of the endpoint of the robot's trajectory in Figure 8. Analysis of Figure 9 shows that the corrected robot trajectory deviation can be effectively converged, while the uncorrected robot trajectory will continuously accumulate once drift occurs.

2.2 Loop Route Test

To test the stability of the algorithm's correction, a typical closed-loop detection [4] experiment was designed: the mobile robot was controlled to move along the indoor walls so that its trajectory formed a rectangle. The experiment was set in a corridor environment; the rectangular trajectory was 4 m long and 0.9 m wide. The specific model is shown in Figure 10.

Figure 10 Loop test trajectory model

Analysis of the experimental results shows that, under normal indoor conditions, the robot was run along the preset route both with and without trajectory correction; the comparison demonstrates the feasibility of the trajectory correction algorithm.

As shown in Figure 11, the experiment shows that the average Y-axis error with correction is 5 cm, while without correction it is 28 cm. Clearly, the robot's trajectory deviation can be effectively controlled after feedback correction. The trajectory error of a robot relying solely on odometry for positioning accumulates continuously once drift occurs; by constraining the pose with LiDAR sensor information, the error converges to the centimeter level, verifying the feasibility and reliability of the trajectory correction algorithm.

Figure 11 Indoor loop test of robot in unmanned state.

3. Conclusion

This paper designs a trajectory correction algorithm for robot motion trajectory deviation, and qualitatively and quantitatively analyzes the mathematical model and correction parameters of robot trajectory correction. An empirical model of robot trajectory correction control is constructed through extensive experimental data, yielding reliable trajectory correction control parameters. Analysis of multiple experimental results after trajectory correction shows that after correcting the preset trajectory of the mobile robot for indoor mapping using the PID trajectory control algorithm, the robot can adjust its motion state in real time during indoor movement without the need for additional manual control commands. Furthermore, the mobile robot controlled by the trajectory correction algorithm can move accurately along the preset route, exhibiting high stability and achieving the expected accuracy requirements during indoor movement. The trajectory correction control model obtained through extensive experiments demonstrates significant effectiveness, verifying the feasibility and reliability of this robot motion trajectory deviation correction scheme.
