Bomb disposal robot control system based on binocular vision positioning


Abstract: This paper proposes a control system scheme for an automatic grasping and bomb disposal robot based on binocular vision positioning and navigation. The principle of binocular vision positioning and the design of the motion control system are introduced. A real-time control system based on xPC Target is generated using MATLAB/RTW tools and runs on a prototype equipped with a PC/104 computer, achieving good results.

Keywords: bomb disposal robot; motion control; binocular vision localization; MATLAB/RTW


1. Introduction

Bomb disposal robots, designed for use in hazardous environments, have been developed in many countries, and some products are already on the market. Most, however, are remotely operated, requiring human intervention to grasp the target object. Because multiple joints must be controlled in coordination, and grasping requires a positioning accuracy within 5 mm, operators without extensive training struggle to perform the task effectively. This paper proposes a hand-eye coordination system based on binocular vision positioning and navigation that controls the grasping process automatically. The scheme has been validated on an experimental prototype with satisfactory results.

2. System Design

The bomb disposal robot consists of three parts: a mechanical system, a binocular vision positioning system, and a motion control system. It can accurately measure and locate targets and automatically complete the grasping function.

2.1 Mechanical Structure

The robot's overall structure consists of two parts: the vehicle body and the robotic arm. A motor is installed on each side of the vehicle body, and four-wheel drive is achieved through chain transmission. By controlling the speed and direction of the motors on both sides, the vehicle can turn at any radius.
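The differential-steering kinematics behind the any-radius turning can be sketched as follows. This is an illustrative Python snippet, not the paper's implementation; the track width and wheel speeds are assumed values.

```python
# Differential-drive turning radius: with left/right wheel speeds v_l, v_r
# (m/s) and track width w (m), the vehicle turns about a point at signed
# radius R from its centerline. Equal speeds give straight-line motion;
# equal and opposite speeds spin the vehicle in place.

def turning_radius(v_l: float, v_r: float, w: float) -> float:
    """Signed turning radius; float('inf') means straight-line motion."""
    if v_l == v_r:
        return float("inf")
    return (w / 2.0) * (v_r + v_l) / (v_r - v_l)

print(turning_radius(-0.5, 0.5, 0.4))   # spin in place: radius 0.0
```

Varying the two motor speeds continuously therefore sweeps the turning radius from zero (spin in place) to infinity (straight ahead), which is what allows the vehicle to turn with any radius.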

The robotic arm has five degrees of freedom: the waist, shoulder, elbow, and wrist joints, plus a gripper rotation joint, as shown in Figure 1. Each joint is driven by a DC motor under closed-loop servo control. Two cameras for binocular vision are mounted on the forearm.

Figure 1 Schematic diagram of the robotic arm mechanism

2.2 Binocular visual localization and hand-eye coordination

The binocular vision positioning system [2] uses two cameras mounted on the forearm to capture two images of the same target. Through image matching and triangulation, the precise position of the target relative to the robot's coordinate system is obtained. The gripper's trajectory is planned so that the cameras remain pointed at the target as the gripper approaches it; this hand-eye coordination keeps the target under visual monitoring throughout. When the gripper comes within a preset distance of the target (determined during path planning), a more accurate range measurement is taken, and the target is then grasped precisely. The entire grasping process is completed automatically by the robot control system.

2.2.1 Triangulation Principle

If the geometric positions of the two cameras are known, along with the image positions of the same object in both cameras, the object's position in space can be calculated by the triangulation principle; that is, depth information is obtained by triangular ranging.

As shown in Figure 2, consider the simplest case: L and R are two pinhole cameras with identical parameters and focal length f, their optical axes are parallel, and the line joining their projection centers lies along the X-axis, with the Y-axis perpendicular to the page. Taking the camera coordinate system of L as the world coordinate system, the projection center of R is offset from the origin by b, the baseline of the stereo vision system. Let the target point be P, whose projections in the left and right images have x-coordinates x1 and x2, respectively. From the geometry of similar triangles:

Z = f*b / (x1 - x2)
X = x1*Z / f
Y = y1*Z / f

Given (x1, y1), (x2, y2), the focal length f, and the baseline length b, the three-dimensional coordinates (X, Y, Z) of point P can thus be obtained.
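The parallel-axis triangulation described above can be sketched numerically as follows. This is an illustrative Python snippet; the focal length, baseline, and image coordinates are assumed sample values, not the robot's calibration data.

```python
# Stereo triangulation for a parallel-axis rig, left camera as world frame.
# f: focal length, b: baseline; (x1, y1), (x2, y2): projections of target
# point P in the left and right images (all in metres here).

def triangulate(x1, y1, x2, f, b):
    d = x1 - x2                 # disparity; must be nonzero for finite depth
    if d == 0:
        raise ValueError("zero disparity: point at infinity")
    Z = f * b / d               # depth from similar triangles
    X = x1 * Z / f              # back-project the left-image coordinates
    Y = y1 * Z / f
    return X, Y, Z

print(triangulate(x1=0.01, y1=0.005, x2=0.002, f=0.008, b=0.12))
```

Note that depth resolution degrades as the disparity x1 - x2 shrinks, which is why the robot performs a second, more accurate range measurement once the gripper (and cameras) are close to the target.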

Figure 2. Schematic diagram of triangulation.

2.3 Motion Control System

The robot motion control system adopts a fully digital servo approach: except for the motor drive stage and everything downstream of it, which remain hardware, the interpolation algorithm, comparison unit, and controller are digital and implemented in software on a computer. Its key advantages are the flexibility to apply complex control laws and reduced hardware cost.

2.3.1 Gripper Path Planning and Interpolation

To achieve hand-eye coordination during the initial stage of automatic grasping, when the cameras must remain pointed at the target as the gripper approaches, the trajectory from the gripper's starting point to the precise ranging point should be a straight line. To extend the gripper approximately horizontally toward the target, the trajectory from the precise ranging point to the final grasping point should be another straight line. The gripper path therefore consists of two straight-line segments.
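One straight segment of such a path can be discretized by fixed-distance interpolation, as the next section describes (the paper uses ΔL = 5 mm). The sketch below is illustrative Python, not the robot's Simulink implementation; the endpoints are assumed sample values.

```python
# Fixed-distance interpolation along one straight-line segment of the
# gripper path. Points are (x, y, z) tuples in millimetres; dL is the
# spacing between successive interpolated points.

import math

def interpolate_segment(p0, p1, dL=5.0):
    """Return points from p0 toward p1 spaced ~dL apart, ending exactly at p1."""
    length = math.dist(p0, p1)
    n = max(1, math.ceil(length / dL))          # number of steps
    return [tuple(a + (b - a) * k / n for a, b in zip(p0, p1))
            for k in range(n + 1)]

pts = interpolate_segment((0, 0, 0), (30, 0, 0), dL=5.0)
print(len(pts))   # 7 points: start, 5 intermediate, end
```

Each interpolated point then becomes a target pose for the inverse kinematics, as described below.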

Gripper path planning is carried out in Cartesian space [3]. The designed gripper speed is 50 mm/s. Since the accuracy requirement on the gripper trajectory is not high, fixed-distance interpolation with ΔL = 5 mm is used to generate a dense series of points, equivalent to a coarse interpolation period of 100 ms. The inverse kinematics algorithm then converts this series of target points into an angle series for each joint, and each joint is driven to the required angle by closed-loop position servo control. The functional block diagram is shown in Figure 3.

Because the manipulator is an open-chain cantilever structure, each arm segment must be velocity-controlled to move smoothly. Speed planning is therefore performed by a further interpolation in joint space. The interpolated speed profile follows a trapezoidal velocity curve with an interpolation period of 10 ms; the control law is a digital PID algorithm; and the motor is regulated by a PWM wave generated in software with a period of 0.5 ms, i.e., a PWM frequency of 2 kHz. In addition, if the feedback loop opens or a fault occurs, the position deviation grows without bound, which would drive the arm too fast to stop at the target position. For safety, a deviation-detection protection module is therefore added. The block diagram of each joint's closed-loop position control system is shown in Figure 4.
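One joint's position loop, including the deviation-protection check, can be sketched as below. This is an illustrative Python snippet standing in for the actual Simulink/xPC implementation; the PID gains, deviation limit, and pulse counts are assumed values, not the paper's.

```python
# Digital PID position loop for one joint, stepped at the 10 ms interpolation
# period. Input/output are in encoder counts and normalized PWM duty cycle.
# MAX_DEVIATION models the deviation-protection module (assumed limit).

MAX_DEVIATION = 2000   # encoder counts; trip protection above this

class JointPID:
    def __init__(self, kp=0.5, ki=0.01, kd=0.05, dt=0.01):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, target_counts, feedback_counts):
        err = target_counts - feedback_counts
        if abs(err) > MAX_DEVIATION:     # open feedback loop or fault
            raise RuntimeError("deviation protection tripped: stop joint")
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        u = self.kp * err + self.ki * self.integral + self.kd * deriv
        return max(-1.0, min(1.0, u))    # saturate to PWM duty in [-1, 1]

pid = JointPID()
print(pid.step(100, 0))   # large initial error saturates the output at 1.0
```

The saturation models the bounded PWM duty cycle; the deviation check aborts the loop when the error exceeds what a healthy feedback path could ever produce.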

Figure 3 Block diagram of the robot motion control system

Figure 4 Block diagram of closed-loop servo control system for each joint position

2.3.2 Simplification of the Inverse Kinematics Algorithm

Inverse kinematics for five or more degrees of freedom [4] is complex and computationally expensive, but for engineering purposes it can be appropriately simplified. In the most common case the gripper grasps the target horizontally, and the gripper's rotation angle is set according to the required grasping orientation. Before grasping, the waist joint is rotated first so that the gripper is aligned with the target. This effectively removes three degrees of freedom from the inverse kinematics, leaving only two joint angles to be computed, which greatly simplifies the calculation and shortens the processing time. In addition, since position feedback uses incremental encoders, each joint must be moved to a known reference (set) position after power-on, at which a definite value is written to the reversible counter. The joint angles are shown in Figure 1.
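Once the waist has aligned the arm plane with the target, the remaining problem is a standard two-link planar inverse kinematics. The sketch below is illustrative Python; the link lengths and target point are assumed values, not the robot's dimensions.

```python
# Two-link planar IK: solve shoulder (theta1) and elbow (theta2) angles so
# that links l1, l2 reach target (x, z) in the vertical arm plane.
# Angles in radians; elbow-down branch chosen.

import math

def two_link_ik(x, z, l1, l2):
    d2 = x * x + z * z
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)   # law of cosines
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)
    theta1 = math.atan2(z, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

print(two_link_ik(0.5, 0.3, 0.4, 0.3))
```

A forward-kinematics check (l1·cos θ1 + l2·cos(θ1+θ2), l1·sin θ1 + l2·sin(θ1+θ2)) reproduces the target point, confirming the solution.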

2.3.3 xPC Target Real-Time Control System

The robot's motion control system is a real-time control system based on xPC Target [5] of MATLAB/RTW (Real-Time Workshop) [1]. xPC Target adopts a host-target architecture: the host runs Simulink [6] to build the control model, generates the target code, and downloads it to the target machine; the host also runs the operator interface program and sends operation control commands. The target machine executes the generated code on the highly compact real-time kernel that xPC provides. This robot uses a PC/104 computer as the real-time target, with an I/O expansion card for encoder signal input and motor control signal output.

Figure 5 shows part of the control system's Simulink model. The target point supplied by the vision system is first checked for reachability; if it is unreachable, an error is reported. Otherwise, gripper trajectory planning (coarse interpolation) computes the angles through which the waist and the other joints must rotate, and each angle is converted to a target encoder pulse count. The difference between the target pulse count and the encoder feedback pulse count is passed through the digital PID algorithm to the PWM generation module, whose output drives the DC motor through the motor driver. Figure 6 shows a photo of the robot grasping a water bottle in the field.

Figure 5. Local control Simulink model

Figure 6. On-site photos

3. Conclusion

The innovation of this design lies in applying a binocular vision positioning system to robot target navigation and grasping: grasping requires no human intervention and achieves a target positioning accuracy within 5 mm. The control system is a real-time xPC Target system generated using MATLAB/RTW, an approach that offers a short development cycle and easy debugging, and is a powerful tool for developing and verifying control system products.

In addition, load variations can make the robotic arm tremble at certain positions during motion, and its speed changes may not be smooth. These issues call for improved control laws (such as adding feedforward control or adopting adaptive control) and hardware improvements (such as higher-resolution encoders and reduced mechanical transmission backlash).

4. References

1. Yang Di, Li Litao, Yang Xu, et al. Real-Time Simulation Development Environment and Applications for Systems. Beijing: Tsinghua University Press, 2002.

2. Tsai, R. Y. An efficient and accurate camera calibration technique for 3D machine vision. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition, Miami Beach, FL, 1986, 364-374.

3. Cong Shuang, Li Zexiang, eds. Practical Motion Control Technology. Beijing: Electronic Industry Press, 2006.

4. Yin Jiying, He Guangping, eds. Articulated Robots. Beijing: Chemical Industry Press, 2003.

5. Xu Guozheng, Chen Yong. Data acquisition system based on MATLAB/xPC Target. Microcomputer Information, 2005, 10(33), 63-64.

6. Li Ying, Zhu Boli, Zhang Wei, eds. Fundamentals of Simulink Dynamic System Modeling and Simulation. Xi'an: Xi'an University of Electronic Science and Technology Press, 2004.
