Introduction
Research on mobile robots began in the late 1960s with Nils Nilsson and Charles Rosen at the Stanford Research Institute (SRI), who developed the autonomous mobile robot Shakey between 1966 and 1972. Mobile robot research surged again in the late 1970s, and from the mid-1980s a wave of robot design and manufacturing swept the world: a large number of well-known companies, such as General Electric, Honda, and Sony, began developing mobile robot platforms. These robots served primarily as experimental platforms in university laboratories and research institutions, spurring the emergence of various research directions in mobile robotics. With the rapid development of mobile robot technology and its widespread application in industry, the military, and other fields, a new technical discipline, robotics, rooted in the theory, design, manufacturing, and application of mobile robots, has gradually taken shape and is attracting increasingly widespread attention. Research on mobile robots is entering a new stage.
Mobile robots are a type of robotic system capable of sensing their environment and their own state through sensors, enabling them to move autonomously towards targets in obstacle-filled environments, thereby performing specific functions. Ideally, autonomous mobile robots can complete prescribed tasks in various environments without human intervention, possessing a high level of intelligence. However, currently, fully autonomous mobile robots are mostly in the experimental stage; those entering practical use are primarily semi-autonomous mobile robots, which perform various tasks in specific environments with human intervention. Remotely controlled robots, on the other hand, cannot function without human intervention.
With the continuous emergence of new intelligent control algorithms, mobile robots are developing towards intelligence, which places higher demands on the performance of motion control systems. Designing and implementing a control system for an intelligent mobile robot requires familiarity with mobile robot hardware and software development, mastery of the motion control characteristics of mobile robots, and the establishment of a feasible and stable platform for subsequent functional expansion of mobile robots. This platform can then serve as a common foundation for the development of various robots. The development of an intelligent mobile robot control system has significant practical implications and will lay a solid foundation for future mobile robot development.
1. Control System Structure and Function
The control system of a mobile robot is the executive part of the robot system: it plays a crucial role in the system's stable operation, and it can sometimes also function as a simple controller. A robot motion control system comprises computer hardware and control software, input/output devices, drivers, and sensor systems. The block diagram of an intelligent mobile robot control system is shown in Figure 1.
Figure 1. Block diagram of mobile robot control system
Implementation of the mobile robot control system: the main function of the control system is to generate motion control information for the robot and to control its movement. Trajectory tracking is one of the tasks the mobile robot must perform; in a typical working process, the robot motion controller generates motion control information from a planned path, controls the robot to execute the corresponding movements, and tracks the planned path. The input information used during motion control includes obstacle distance information from the underlying ultrasonic ranging module, robot position and speed information from the motor encoders, and video information collected and processed by the panoramic and binocular vision cameras.
The robot body is equipped with four drive motors, which serve as the robot's driving mechanism. Each drive motor carries a photoelectric encoder that provides quadrature-encoded pulse signals, used both for closed-loop speed control of the drive motor and for robot positioning. The onboard processor is mainly responsible for controlling and managing the ultrasonic ranging module, for robot positioning, and for communication with the host computer; it can be a general-purpose computer, a high-performance microcontroller, a DSP, an ARM processor, or another embedded controller.
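As a concrete illustration of how the quadrature-encoded pulses can be turned into positioning information, the sketch below decodes a stream of two-bit A/B states into a signed count and converts counts to wheel travel. The decode table is standard 4x quadrature decoding; the pulses-per-revolution and wheel-radius values are illustrative assumptions, not parameters of the robot described here.

```python
# Minimal sketch of quadrature-encoder decoding for wheel positioning.
import math

# Transition table: (previous AB state, current AB state) -> count delta.
# Valid quadrature transitions contribute +/-1; invalid ones contribute 0.
_DELTA = {
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
}

def decode(states):
    """Accumulate a signed count from a sequence of 2-bit AB states."""
    count = 0
    for prev, cur in zip(states, states[1:]):
        count += _DELTA.get((prev, cur), 0)
    return count

def counts_to_distance(count, pulses_per_rev=2000, wheel_radius=0.05):
    """Convert an accumulated count to linear wheel travel in metres.
    pulses_per_rev and wheel_radius are assumed example values."""
    return count / pulses_per_rev * 2 * math.pi * wheel_radius

# One full forward quadrature cycle (00 -> 01 -> 11 -> 10 -> 00) is 4 counts.
seq = [0b00, 0b01, 0b11, 0b10, 0b00]
print(decode(seq))  # 4
```

Counting both edges of both channels in this way gives four counts per encoder line, which improves positioning resolution without any extra hardware.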
The mobile robot's input information comprises visual input and distance detection: visual information comes from the panoramic and binocular vision cameras, and distance information comes from the laser and ultrasonic ranging modules. Based on an environment map built in advance by the developers, the mobile robot reads environmental information during movement, performs calculations in the processor according to its control rules, and outputs control information to the drive motors to control the robot's movement.
The mobile robot's onboard processor and host computer serve as the central processing unit, receiving obstacle distance information from ranging modules such as laser and ultrasonic sensors, as well as visual information from panoramic and binocular vision systems. Combined with preset functions in the host computer, the robot is controlled to perform corresponding actions by controlling the drive motors.
2. Control System Hardware Design
The control system hardware mainly comprises a main control board unit, a motor drive unit, and an ultrasonic ranging unit.
2.1 Main Control Board Unit
The main control board primarily manages module interfaces, transmits data, and controls the ultrasonic ranging module. The interfaces of the various sensor modules on the mobile robot are not all the same: panoramic vision camera modules typically use USB interfaces, for example, while binocular vision cameras generally use RS232 interfaces. The main control board is also responsible for communication with the host computer; motor drive control information is transmitted from the host computer to the motor drive controllers via the main control board. At the same time, the main control board reads the quadrature-encoded pulse signals provided by the motor encoder disks for robot positioning. Finally, the main control board manages the ultrasonic ranging module: the generation of transmitted signals, the detection and processing of received signals, and the reading of the ultrasonic time of flight are all under its control.
2.2 Motor Drive Unit
The mobile robot has four directional wheels, each driven by its own dedicated motor. To keep the closed-loop speed control of the drive motors real-time, accurate, stable, and independent, each drive motor is driven by its own motor drive controller. Each motor drive module consists of a controller with a communication interface and a motor drive unit. The motor drive module's control chip receives control commands, computes the motor's speed and direction, and outputs the control voltage. At the same time, it measures the motor's speed via a coaxial photoelectric encoder, computes the control output from the difference between the target speed and the actual speed, and outputs the control voltage value, completing the closed-loop speed control of the drive motor.
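The closed-loop speed control described above can be sketched as a discrete PID loop acting on the error between the target and measured speed. The gains, voltage limit, and the first-order motor model below are illustrative assumptions, not values from this design.

```python
class SpeedPID:
    """Discrete PID speed controller with output (voltage) saturation."""

    def __init__(self, kp, ki, kd, dt, v_max=24.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.v_max = v_max            # drive-voltage limit (V), assumed
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, target, measured):
        err = target - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        u = self.kp * err + self.ki * self.integral + self.kd * deriv
        return max(-self.v_max, min(self.v_max, u))  # clamp to supply rails

def motor_step(speed, voltage, dt, gain=2.0, tau=0.5):
    """Crude first-order motor model for demonstration only:
    speed follows gain*voltage with time constant tau."""
    return speed + dt * (gain * voltage - speed) / tau

pid = SpeedPID(kp=2.0, ki=5.0, kd=0.01, dt=0.01)
speed = 0.0
for _ in range(500):                  # 5 s of simulated closed-loop control
    u = pid.step(target=1.0, measured=speed)
    speed = motor_step(speed, u, pid.dt)
print(round(speed, 3))                # settles near the 1.0 target
```

The integral term is what drives the steady-state error to zero; in a real controller it would additionally need anti-windup when the output saturates.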
2.3 Ultrasonic Ranging Unit
The ultrasonic ranging unit detects the distance between the mobile robot and surrounding obstacles. A 40 kHz square-wave signal is generated by the main control board controller, amplified, and emitted as an ultrasonic signal via a transducer. The ultrasonic signal propagates through the air, is reflected by obstacles, and is received by the transducer, which converts it into a small-amplitude electrical signal. Because ultrasonic signals attenuate in air, this received signal is extremely weak, mostly in the millivolt range, and may contain noise, so it must be amplified, filtered, and compared (shaped) before the main control board controller can detect it. The module therefore requires a square-wave power amplifier, ultrasonic transducers, and circuitry for amplifying, filtering, and comparing the received signal. The transit time of the ultrasonic signal runs from the moment the control chip emits it to the moment the shaped echo is detected, from which the distance to the obstacle is calculated. The hardware structure diagram of the ultrasonic ranging unit is shown in Figure 2.
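The distance calculation itself is simple: the obstacle distance is half the echo's round-trip time multiplied by the speed of sound. A minimal sketch, using the standard temperature-dependent approximation for the speed of sound in air:

```python
def speed_of_sound(temp_c=20.0):
    """Approximate speed of sound in air (m/s) at temperature temp_c (deg C)."""
    return 331.4 + 0.6 * temp_c

def echo_distance(round_trip_s, temp_c=20.0):
    """Distance to the obstacle (m) from one echo's round-trip time (s)."""
    return speed_of_sound(temp_c) * round_trip_s / 2.0

# A round trip of about 5.83 ms at 20 deg C corresponds to roughly one metre.
print(round(echo_distance(5.83e-3), 3))  # 1.001
```

In practice the controller would also apply a blanking interval after transmission (to ignore direct coupling into the receiver) and a timeout for the no-echo case.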
Figure 2 Hardware structure diagram of ultrasonic ranging unit
3. Simulation Model
This paper studies the kinematic simulation of robots using MATLAB/Simulink, proposing two approaches: kinematic simulation based on the mechanism-simulation tool SimMechanics, and kinematic simulation based on MATLAB functions. A smart-car simulation platform based on SimMechanics is designed that can determine the optimal PID controller parameters according to the system's performance requirements and that uses virtual reality technology to show the smart car's state in real time during operation. This provides an excellent demonstration environment for mastering car motion control in electronic design.
Assuming the car's coordinates in the XOY coordinate system are (X, Y), and the angle between its direction of travel and the X-axis is θ, the vector [X, Y, θ] represents the car's pose, and the car's motion equations are as follows:

Ẋ = v·cosθ
Ẏ = v·sinθ
θ̇ = ω

where v = (vL + vR)/2 and ω = (vR − vL)/b.
In the formulas, b is the lateral distance between the left and right drive wheels, vL is the linear velocity of the left wheel, vR is the linear velocity of the right wheel, ω is the car's steering (yaw) rate, and v is the car's forward speed. The left and right drive wheels use the same type of motor, and the wheel friction torque Tf acts as a constant resistive torque load. If i is the reduction ratio and η is the transmission efficiency, then the load torque referred to the motor shaft is:
T = Tf / (iη)
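Using the quantities defined above, the standard differential-drive relations give the forward speed v = (vL + vR)/2 and the steering rate ω = (vR − vL)/b, from which the pose (X, Y, θ) can be integrated numerically. The sketch below does this with simple Euler steps; the wheelbase, step size, and wheel speeds are illustrative values, and load_torque implements T = Tf/(iη):

```python
import math

def load_torque(t_f, i, eta):
    """Load torque referred to the motor shaft: T = Tf / (i*eta)."""
    return t_f / (i * eta)

def pose_step(x, y, theta, v_l, v_r, b, dt):
    """Advance the pose (x, y, theta) one Euler step from wheel speeds."""
    v = (v_l + v_r) / 2.0      # forward speed
    w = (v_r - v_l) / b        # steering (yaw) rate
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + w * dt)

# Equal wheel speeds: the car moves in a straight line along its heading.
x, y, th = 0.0, 0.0, 0.0
for _ in range(100):
    x, y, th = pose_step(x, y, th, 0.5, 0.5, b=0.3, dt=0.01)
print(round(x, 3), round(y, 3), round(th, 3))  # 0.5 0.0 0.0
```

Setting vL ≠ vR makes ω nonzero and the integrated path curves, which is exactly how the simulation model turns wheel-speed commands into a trajectory.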
The use of virtual simulation technology makes the measurement and control of intelligent vehicles more intuitive, while the Simulink optimization design module makes it easier to adjust the system controller parameters. The combination of the two enables visual and interactive operation, allowing real-time observation of changes in the intelligent vehicle's motion state.
4. Controller Parameter Optimization
The performance of a conventional PID controller depends on the tuning of the parameters Kp, Ki, and Kd: well-tuned parameters yield good control performance, while poorly tuned parameters degrade it. This design adjusts the three control parameters Kp, Ki, and Kd so that the intelligent vehicle follows a given path more accurately and quickly. Figure 3 shows a kinematic simulation model based on SimMechanics, which integrates graphical, interface-based controller optimization and simulation functions and can optimize the controller parameters against set performance constraints. The PID controller output passes through a driver to control the controlled object.
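The optimization idea can be illustrated without SimMechanics: simulate a step response for each candidate gain set and keep the gains with the lowest integral-of-squared-error (ISE) cost. The first-order plant model and the gain grid below are illustrative assumptions, not the model from this design:

```python
def step_response_cost(kp, ki, kd, dt=0.01, steps=500):
    """ISE of a unit-step speed response for a first-order plant under PID."""
    speed, integral, prev_err, cost = 0.0, 0.0, 0.0, 0.0
    for _ in range(steps):
        err = 1.0 - speed
        integral += err * dt
        u = kp * err + ki * integral + kd * (err - prev_err) / dt
        prev_err = err
        speed += dt * (2.0 * u - speed) / 0.5   # plant: G(s) = 2/(0.5s + 1)
        cost += err * err * dt                  # integral of squared error
    return cost

# Coarse grid search over candidate gains; a finer grid or a gradient-based
# optimizer (as in Simulink's response-optimization tools) works the same way.
best = min(
    ((kp, ki, kd) for kp in (0.5, 1.0, 2.0)
                  for ki in (1.0, 5.0)
                  for kd in (0.0, 0.01)),
    key=lambda g: step_response_cost(*g),
)
print(best)
```

Replacing the ISE cost with penalties on overshoot or settling time changes which gains win, which is why the performance constraints have to be set before the optimization is run.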
Figure 3. Kinematic simulation model based on SimMechanics
Based on the established SimMechanics simulation model of the robot, the kinematic (or dynamic) simulation analysis of the robot can be realized by setting the analysis type in the simulation environment. The end effector trajectory of the robot measured by the photoelectric encoder is shown in Figure 4.
Figure 4. The end effector trajectory of the robot
5. Conclusion
Based on the SimMechanics simulation model, dynamic simulations can be performed simply by setting parameters and selecting the simulation type. This design completes the closed-loop speed regulation of the mobile robot's drive motors independently: the robot's movement is controlled by commands sent from the host controller, which does not need to participate in the speed loop itself. Path-tracking program control of the mobile robot is implemented using the MATLAB toolbox. The left- and right-wheel displacements recorded by the robot's photoelectric encoder disks are converted into the robot's current position and orientation, enabling tracking of the discretized path.