[ Abstract ] This paper studies a search and rescue robot that can be used for detection in dangerous areas of coal mines. China is a country with frequent mining accidents. The poor geological conditions and high gas content in Chinese mines pose many hidden dangers to coal mine production. Every year there are reports of mining accidents, which bring significant negative social impact. When an accident occurs, the underground environment, including gas concentration, CO concentration, smoke levels, and visibility, is unknown, and it is unclear whether explosions or other hazards are present. It is extremely dangerous for rescuers to enter the accident site rashly; doing so could even cost them their lives. In such accident sites, with dense smoke, toxic gases, and high temperatures, robots are an ideal detection and rescue device.
Keywords: coal mine; search and rescue robot; system; motion control
1. Research Background
Search and rescue robots are mobile robots that replace human rescue personnel in search and rescue operations during natural disasters, accidents, and other emergencies. These robots can be remotely controlled or operate autonomously to penetrate complex, dangerous, and uncertain disaster sites, probe unknown environmental information, and search for and rescue trapped individuals. Search and rescue robots represent an important branch of robotics technology's practical application and a new research field with significant social value.
Search and rescue robots can be applied in many rescue scenarios, such as earthquakes, mudslides, typhoons, floods, mine disasters, firefighting, hazardous material removal, and field reconnaissance. After a disaster or accident, the on-site environment is complex and harsh, full of unknowns and uncertainties, seriously threatening the lives of search and rescue personnel and posing a severe challenge to the deployment and implementation of search and rescue operations. The first 48 hours after a disaster are critical for rescue efforts; beyond that window, the chances of survival for those trapped become very slim. Therefore, research on search and rescue robots has significant practical value and social significance, and in recent years has received close attention from countries such as the United States, Japan, Australia, and China.
The research objective of this project is to develop a stable and reliable mobile detection robot platform for underground coal mines, primarily based on master-slave remote operation and possessing a certain degree of autonomy. The platform's main task is environmental detection in hazardous areas of underground coal mines, including detecting ambient temperature, gas composition and content (CO, CH4, O2, H2S, etc.), and acquiring and uploading on-site video and audio in real time. For these hazardous areas, we focus on finite parameter detection within a limited target environment. Because coal mine accidents are diverse and complex, it is impossible to expect one or two complex mechanisms to adapt to all underground environments. Especially for extremely complex environments such as roof falls and collapses, or special environments such as water inrush, specialized mechanisms and technologies must be used. Therefore, this platform mainly targets hazardous environments such as gas outbursts, localized fires, explosions, or collapses, where access is possible. In addition, as a search and rescue robot platform, this system has reserved an interface for loading end effectors such as robotic arms, thereby providing necessary technical support for completing more complex and effective rescue work, and also providing important technical reserves for the application of this platform in other search and rescue fields (such as earthquakes, mudslides, fires and other disaster sites).
2. Research progress on search and rescue robots
Research on emergency disaster search and rescue robots began in the 1980s. After the 1995 Oklahoma City bombing and the Kobe earthquake in Japan that same year, search and rescue robots gradually gained attention as a humanitarian application of robotics.
Over the following decade, search and rescue robot technology continued to develop, but most systems remained in the laboratory stage, with few instances of robots participating in actual rescue operations and playing a significant role. The first large-scale application of search and rescue robots in on-site rescue occurred after the 9/11 attacks in the United States. Six robots from the military and research institutes (Talon, Solem, PACKBOT, VGTV, MicroTracs, and the SPAWAR Urbot) participated in the rescue efforts, as shown in Figure 1. In this rescue mission, the main tasks of the robot system included searching for spaces in the rubble where survivors might be present and monitoring structural changes to prevent collapses that could endanger rescue personnel. The search and rescue operation was divided into two phases. In the first phase, the robots did not delve deeply into the rubble, but rather played a supporting role in areas inaccessible to humans. The second phase focused on clearing the building debris and providing data for analyzing the causes of the World Trade Center tower collapse. During this phase, as operators became more proficient and gained experience on-site, the superiority of the robot system gradually became apparent. The robots conducted close-up reconnaissance and photography at the scene to determine the stability of the remaining walls and the likelihood of collapse. Simultaneously, using their onboard detectors, the robots measured the concentrations of carbon monoxide, hydrogen sulfide, and volatile organic compounds, as well as the ambient temperature, forming fundamental data on the hazardous conditions of the site. With the on-site analysis and guidance of over a dozen experts from various disciplines and fields, the rescue efforts were significantly accelerated, and personnel safety was ensured, demonstrating clear advantages.
However, the rescue operation also revealed some shortcomings in the robot system, such as deficiencies in waterproofing, heat resistance, shock absorption, and other harsh environmental capabilities, as well as limitations in the robot's own state perception and environmental description methods. In conclusion, this rescue mission was the largest and most successful in human history to involve rescue robots. During this operation, engineers and on-site experts accumulated a wealth of valuable experience in using robotic systems for disaster relief, which is a tremendous asset for future research on search and rescue robots.
Subsequently, search and rescue robots from the United States, Japan, Australia and other countries began to participate in actual disaster relief operations. Through close cooperation with disaster emergency departments, they have continuously accumulated practical disaster relief experience and improved the performance of search and rescue robots to enhance their adaptability to the search and rescue environment.
After several years of research and improvement, search and rescue robots were again used in the search and rescue efforts following the 2005 La Conchita, California mudslide and Hurricane Katrina. The La Conchita mudslide caused widespread building collapses and gas leaks. Inuktun's VGTV-Xtreme robot, specifically designed and improved for disaster relief, was deployed to the scene, but its tracks detached, rendering it unable to continue its mission. In the same year, the VGTV-Xtreme played a significant role in the relief efforts following Hurricane Katrina, one of the worst natural disasters in US history. Furthermore, RoboCup Rescue, a dedicated international search and rescue robot competition, has been established to promote research and development in this field.
Research on search and rescue robots in China started relatively late, but has developed rapidly in recent years, attracting increasing attention from research institutions. For example, Harbin Institute of Technology, Shanghai Jiao Tong University, Shenyang Institute of Automation, and Guangdong Weifu Company have all developed their own search and rescue robot systems. Several institutions, including China University of Mining and Technology and Tsinghua University, have also developed mobile robot platforms for underground coal mine rescue. However, most domestic search and rescue robots are still at the prototype stage or limited to applications such as outdoor hazard removal. There are no reports of robots participating in actual disaster rescue operations such as mine accidents, earthquakes, and building collapses. During the Wangjialing flooding accident on April 2, 2010, an underwater robot developed by the Shenyang Institute of Automation, Chinese Academy of Sciences, was brought to the site to attempt to participate in the flooding detection task. Although it was ultimately not adopted, the attempt was still worthwhile, accumulating valuable experience for flooding accident detection and rescue.
3. Key Technologies of Coal Mine Underground Search and Rescue Robots
When designing disaster relief robots, we should start from the overall system requirements: consider the robot's environmental adaptability, coordinate the technical interfaces between subsystems, carry out top-level design, and study the key technologies needed for comprehensive integration. The following subsections discuss these key technologies.
3.1 Movement mechanism
As the mobile carrier of a mobile robot, the motion mechanism directly determines the robot's mobility and terrain adaptability. The motion platform of a coal mine search and rescue robot should adapt to the varied and complex underground terrain, such as rubble, mud, sand, steps, steep slopes, and trenches, and thus requires strong terrain adaptability. It should also offer adequate speed and good kinematic stability to minimize the risk of tipping or rolling over. Many types of motion mechanisms are currently used in search and rescue robots, such as wheeled, tracked, and serpentine mechanisms, and each determines the platform's motion capabilities. Wheeled robots are fast and efficient, but have poor obstacle-crossing ability and limited adaptability to complex terrain. Tracked robots cross obstacles well, but are slower and less efficient. Snake-like robots can crawl into narrow spaces and transmit images from head-mounted cameras, but are also slow and mechanically complex. Legged robots, such as quadrupeds and hexapods, adapt strongly to terrain and can cross large trenches and steps, but most current legged mechanisms are slow and inefficient. Wheel-leg hybrid robots combine the terrain adaptability of tracked robots with the speed of wheeled robots, at the cost of a relatively complex structure and large size. In addition, various bionic robots inspired by organisms in nature have also shown promise. Considering the terrain in coal mines and the conditions likely after an accident, a hybrid tracked system with auxiliary arms, which offers strong terrain adaptability, is a relatively ideal motion mechanism.
This method has strong terrain adaptability while maintaining a small size and can pass through relatively narrow spaces.
In addition to the factors mentioned above, the design of the motion platform must be reliable to cope with complex environments. For example, the design of coal mine search and rescue robots must focus on explosion-proof, waterproof, and high-temperature resistance. Tracked robots are also prone to track derailment and detachment, rendering the robot immobile. Besides flexible mobility and reliable design, search and rescue robots should also be portable. To cope with sudden mine accidents and improve search and rescue efficiency, search and rescue robots should have strong mobility and must be deployed to the scene as quickly as possible. After searching one target location, they should be able to move to the next search and rescue location as quickly as possible. An excessively large size, besides resulting in higher energy consumption and significantly reduced platform maneuverability, will also bring difficulties to the rescue work during transportation.
3.2 Sensing System
The main functions of search and rescue robots include search and rescue, but current research on search and rescue robots worldwide is mostly focused on environmental detection and survivor search. Due to the extreme complexity of the environment and the diverse and complex difficulties faced by trapped personnel, rescue work remains extremely challenging. Therefore, environmental detection and personnel search are currently the primary functions of search and rescue robots, and their search and detection capabilities largely depend on the type and application of their onboard sensors. As the perception system of a search and rescue robot, sensors must possess functions such as information acquisition, storage and analysis, and transmission, while also requiring small size, sufficient resolution and response time, as well as good stability and reliability.
The primary purpose of environmental detection is to enable search and rescue personnel to understand the comprehensive environmental conditions underground after the accident in real time, assess the impact of the underground environment on the lives and health of survivors and search and rescue personnel, consider the feasibility of assigning rescue teams to the well to complete the rescue mission, and provide necessary and reliable underground environmental parameters for developing a scientific and efficient rescue plan. This requires detecting the temperature, gas composition (such as oxygen content, toxic gas content, and combustible gas content) underground, as well as the terrain and geological structure. Secondly, while conducting environmental detection, once the robot enters the accident site, it should have the ability to search for and locate survivors and conduct preliminary detection of their condition. Finally, to ensure that the robot can safely and effectively complete the detection task, the robot should have the ability to perceive its own condition and the environment, such as its own posture, temperature, battery level, and other body parameters, as well as information on obstacles, fire zones, water zones, and other hazardous environments, and the robot's location.
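As a minimal illustration of how such detected parameters might be screened on board, the following Python sketch classifies one set of readings against alarm thresholds. The threshold values are hypothetical placeholders for demonstration only; real limits come from mine safety regulations and the specific sensors used.

```python
# Illustrative alarm thresholds (hypothetical values, NOT regulatory limits).
THRESHOLDS = {
    "ch4_pct": 1.0,   # methane, % by volume
    "co_ppm":  24.0,  # carbon monoxide, ppm
    "o2_pct":  19.5,  # minimum acceptable oxygen, % by volume
    "temp_c":  40.0,  # ambient temperature, degrees Celsius
}

def assess_environment(reading):
    """Return a list of hazard flags for one set of sensor readings."""
    alarms = []
    if reading["ch4_pct"] >= THRESHOLDS["ch4_pct"]:
        alarms.append("CH4 high")
    if reading["co_ppm"] >= THRESHOLDS["co_ppm"]:
        alarms.append("CO high")
    if reading["o2_pct"] <= THRESHOLDS["o2_pct"]:
        alarms.append("O2 low")
    if reading["temp_c"] >= THRESHOLDS["temp_c"]:
        alarms.append("temperature high")
    return alarms
```

A real system would additionally timestamp and upload each reading so that rescuers can track trends, not just instantaneous values.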
Currently, sensors for some environmental detection and perception are relatively mature, such as temperature sensors, oxygen content sensors, combustible gas detectors, and toxic gas detectors. These sensors are small in size, have high detection accuracy, and good integration, and can basically meet the needs of downhole environmental detection. The detection of downhole terrain and geological structure mainly relies on vision systems or distance and position sensors used in conjunction with vision systems, such as sonar detectors and laser rangefinders. The search and location of personnel is carried out by life detectors, thermal imagers, and other equipment. The perception of the robot's own state mainly relies on perception units such as odometers, inertial systems, and attitude sensors to complete the perception of the robot's position and attitude and provide necessary data for navigation and motion control.
Furthermore, the underground environment, especially after an accident, is complex and prone to severe conditions such as dense smoke and dust. In such environments, many sensors, particularly vision systems, are severely affected. Far-infrared detectors, however, possess excellent smoke-penetrating capabilities and can simultaneously obtain the radiation temperature of the surface of the object being measured. Therefore, using far-infrared imagers for detection in complex environments can supplement visible light vision systems and also enable target identification by measuring the radiation temperature of specific object surfaces, such as humans, fire points, and bodies of water. Other special circumstances may also cause different sensors to fail; therefore, using multiple sensors and fusing the information is an effective solution. Multi-sensor information fusion integrates incomplete local environmental information from multiple sensors of the same or different types at different locations, eliminating redundancy and contradictions to form a relatively complete and consistent description of the environment, improving the speed and accuracy of intelligent decision-making. Common methods for multi-sensor fusion include: weighted averaging, Bayesian estimation, Kalman filtering, neural networks and fuzzy inference, and production rules with confidence factors.
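Of the fusion methods listed above, weighted averaging is the simplest. A minimal sketch, assuming independent Gaussian readings of the same quantity (for example, CH4 concentration reported by two detectors), is the inverse-variance weighted average, which is also the one-dimensional static special case of the Kalman filter measurement update:

```python
def fuse_weighted(estimates):
    """Inverse-variance weighted average of (value, variance) pairs.

    This is the optimal linear fusion rule for independent Gaussian
    measurements of the same quantity: more confident sensors (smaller
    variance) receive proportionally larger weights, and the fused
    variance is always smaller than any individual variance.
    """
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    return value, 1.0 / total  # fused value and fused variance
```

For example, fusing two equally reliable readings of 10.0 and 12.0 yields 11.0 with half the original variance; a noisier third sensor would shift the result only slightly.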
3.3 Visualized Human-Computer Interaction and Teleoperation
To enable downhole robots to work more flexibly, operators need a simple, convenient, and fully functional operating platform. In addition to operating the robot, this platform can also intuitively display various collected information as well as the robot's posture and position information, ensuring the reliable operation of the robot through various methods.
3.4 Robot explosion-proof and control system technology for harsh environments
Since robots operate underground, and primarily in areas containing flammable and explosive gases such as methane, explosion-proof design is essential and extremely important. The control system must not only be intrinsically safe, but also thermally designed, waterproof, acid mist-proof, and dust-proof, as well as designed to withstand harsh environments, including vibration and impact resistance, and interference immunity.
The dust, smoke, and scattered debris at the scene all increase the difficulty for disaster relief robots to perform their tasks. At the same time, the high temperatures at the scene are also detrimental to the robot's use, and may even cause the tracks or tires to melt and burn. Therefore, the design process should pay attention to the robot's dustproof capabilities and heat resistance, and also consider its waterproof, explosion-proof, corrosion-resistant, electromagnetic interference-resistant, and heat radiation-resistant functions. Furthermore, the complex scene environment also places new demands on the robot's control cables; sharp metal fragments or other debris pose a threat to the control cables, so their robustness must be considered when selecting them.
3.5 Autonomous Navigation, Positioning, and Motion Control for Robots in Underground Mines
During missions, disaster relief robots must avoid dangerous environments and prevent the creation of further dangers, requiring navigation capabilities. Secondly, to provide the disaster relief center with the location of survivors, the robot must be able to determine its own location and return to the center after completing the mission, necessitating localization and path planning capabilities. Navigation methods for disaster relief robots can be categorized as: map model matching navigation based on environmental information, landmark navigation based on various navigation signals, visual navigation, and olfactory navigation. Localization determines the robot's position relative to global coordinates in a two-dimensional working environment and can be divided into inertial positioning, landmark positioning, and acoustic positioning. Path planning searches for an optimal or near-optimal collision-free path from the initial state to the target state based on the robot's perceived working environment information, according to a certain performance index, and achieves reasonable and complete coverage of the required search area. It can be divided into two types: global path planning with complete environmental information and local path planning with completely or partially unknown environmental information. Furthermore, GPS can provide real-time, high-precision three-dimensional position, three-dimensional velocity, and time information for any location on the global surface and in near-Earth space. Disaster relief robots can use a combination of satellite positioning systems and electronic maps to provide real-time location information, enabling visualization of robot positioning.
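As an illustration of global path planning with complete environmental information, the following sketch runs A* search over a 4-connected occupancy grid. This is a common textbook formulation, not the specific planner of any platform discussed here; cells marked 1 are obstacles, and the Manhattan distance serves as an admissible heuristic.

```python
import heapq
import itertools

def astar(grid, start, goal):
    """A* global path planning on a 4-connected occupancy grid.

    grid[r][c] == 1 marks an obstacle. Returns the list of cells from
    start to goal (inclusive), or None if no collision-free path exists.
    """
    rows, cols = len(grid), len(grid[0])

    def h(p):  # Manhattan-distance heuristic (admissible on a 4-grid)
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    tie = itertools.count()  # tiebreaker so the heap never compares cells
    open_set = [(h(start), 0, next(tie), start, None)]
    came_from = {}
    g_cost = {start: 0}
    while open_set:
        _, g, _, cur, parent = heapq.heappop(open_set)
        if cur in came_from:          # already expanded with a better cost
            continue
        came_from[cur] = parent
        if cur == goal:               # reconstruct path by walking parents
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), ng, next(tie), nxt, cur))
    return None
```

Local planning in a partially unknown environment would instead replan incrementally as onboard sensors update the occupancy grid.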
Location information is essential for downhole exploration robots. Whether it's information on surviving personnel, the extent of tunnel damage, or local gaseous environment conditions, a relatively precise location is crucial for reference. While operators can make a general assessment of the location using forward-facing imagery, such positioning lacks precision. Moreover, in situations like fires, smoke and other contaminants make visual imagery-based positioning impossible. Therefore, fusing multiple sensor data to measure and calculate the robot's current position is absolutely necessary.
Furthermore, in complex, unstructured downhole environments, remote control via cameras alone is insufficient to guarantee the accuracy and safety of operational actions. Ensuring the robot moves accurately, efficiently, and safely along the planned path and according to issued kinematic commands is a prerequisite for successfully completing its exploration mission. Safety is paramount; even slight mishaps could cause the robot to tip over or collide. Therefore, collecting the robot's attitude information, combining it with motion parameters such as position and heading, and controlling its trajectory and attitude through a combination of local autonomous real-time monitoring and remote monitoring, along with studying the robot's kinematic model to enable it to autonomously maintain within a safe range and promptly intervene and issue alarms when the operator issues potentially dangerous commands, is a crucial and practical function.
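The attitude supervision described above can be sketched as a simple veto rule on combined tilt. The roll/pitch limits below are hypothetical; in practice they would come from the platform's measured static and dynamic stability envelope.

```python
import math

# Hypothetical safety limits (illustrative only).
MAX_SAFE_TILT_DEG = 30.0   # hard stop: veto the current motion command
WARN_TILT_DEG = 20.0       # alert the teleoperator

def check_attitude(roll_deg, pitch_deg):
    """Classify the platform attitude from roll and pitch in degrees.

    The combined tilt is the angle between the robot's vertical axis and
    gravity; for roll phi and pitch theta, cos(tilt) = cos(phi)*cos(theta).
    """
    tilt = math.degrees(math.acos(
        math.cos(math.radians(roll_deg)) * math.cos(math.radians(pitch_deg))))
    if tilt >= MAX_SAFE_TILT_DEG:
        return "STOP"
    if tilt >= WARN_TILT_DEG:
        return "WARN"
    return "OK"
```

In a full system this check would run in the onboard control loop, so that dangerous operator commands are intercepted locally even if the communication link to the surface is delayed.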
It is evident that although most existing search and rescue robots employ a master-slave operation mode, given the complexity and danger of the underground coal mine environment, especially the harsh conditions for communication systems, autonomous navigation and motion control capabilities are essential. Improving the intelligence and autonomy of search and rescue robots is also a crucial direction for development in this field. Underground coal mines present a complex environment where structured and unstructured elements coexist, and after an accident, the environment is often uneven and unstructured. Therefore, this paper focuses on the autonomous navigation, positioning, and motion control technologies of underground coal mine exploration robots, specifically addressing the challenging underground coal mine environment.
3.6 Navigation Technology for Mobile Robots on Uneven Road Surfaces
3.6.1 Mobile robot platform adapted to uneven road surfaces
Robots that work in unstructured, uneven environments are generally used in fields such as planetary exploration, field reconnaissance, agriculture, and mining operations. Some researchers abroad have termed these robots Off-Road Mobile Robots, an apt characterization.
3.6.2 Current Status of Research on Navigation Technology for Mobile Robots on Uneven Road Surfaces
In recent years, positioning and navigation technologies for mobile robots on uneven surfaces have received increasing attention, and a variety of relatively complete solutions have been developed, achieving research and application results in fields such as planetary exploration, field reconnaissance, mining, and agriculture.
Currently, the sensors used by robots for positioning, navigation, and state perception mainly include odometers, inertial navigation systems (INS), GPS receivers, ultrasonic or sonar sensors, laser rangefinders, and computer vision systems. Odometers or photoelectric encoders are widely used sensors in mobile robots, primarily for calculating mileage during dead reckoning. INS were previously used for aircraft attitude measurement and control, but in recent years they have been increasingly applied to positioning and attitude estimation for ground vehicles and mobile robots; integrated with odometers and GPS to form a combined navigation system, they have become an important means of navigation for mobile robots. GPS receivers, as absolute position sensors, are convenient to use, highly accurate, and easy to process data from. Especially after differential correction, their accuracy can reach sub-meter levels or even higher, allowing them to be used directly for robot positioning. Their main problems are signal loss in complex environments due to obstruction by buildings or large vegetation, and restrictions arising from US military control of the system. A common approach is to fuse odometry, IMU, and GPS information (typically with algorithms such as the extended Kalman filter or the unscented Kalman filter) to accurately estimate the robot's pose. Sonar or ultrasonic sensors are primarily used to detect obstacles at close range (generally within 5 meters) around the robot, and are typically mounted in arrays to maximize their detection coverage. Due to their short detection range and slow response, they are mostly used in low-speed mobile robot systems. LiDAR is also a commonly used distance measurement tool, mainly divided into two-dimensional and three-dimensional LiDAR.
Because of its long measurement range (generally reaching tens or even hundreds of meters) and high measurement speed (dozens of scans per second), it is increasingly used in mobile robot environmental perception and modeling. The visual system is the most commonly used environmental perception system and a primary means for humans to perceive and understand the world. In recent years, especially in the last two or three years, research results based on visual environmental perception have emerged in large numbers. Its computational complexity has been continuously reduced, and its engineering level has been increasing. There are also more and more research results on using vision to achieve obstacle detection, dead reckoning and attitude estimation, and even road environment modeling. In addition, there are other technologies such as millimeter-wave radar, infrared sensors, force sensors, and tactile sensors that are also applied to the perception of robot self-parameters or environmental parameters in different situations, which will not be detailed here.
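For the odometer-based dead reckoning mentioned above, the standard differential-drive (or tracked-vehicle) pose update from encoder increments can be sketched as follows. The midpoint approximation used here is one common textbook choice, not a platform-specific algorithm.

```python
import math

def dead_reckon(pose, d_left, d_right, wheel_base):
    """Update a planar pose (x, y, theta) from encoder increments.

    d_left and d_right are the distances travelled by the left and right
    tracks (or wheel sides) since the last update; wheel_base is their
    separation. Heading is advanced by the differential term, and the
    translation is applied along the midpoint heading, a standard
    approximation that is exact for straight segments and pure turns.
    """
    x, y, theta = pose
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / wheel_base
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return (x, y, theta + d_theta)
```

Because encoder errors (slip, track stretch) accumulate without bound, such dead reckoning is normally corrected by fusing IMU and, where available, absolute position fixes.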
Environmental reconstruction technology is an important research direction in information technology, especially computer vision technology. For robotic systems, it is mainly used for robot navigation, target tracking and recognition, and the reproduction of real-world scenes. Its main methods have evolved from computer vision to the combination of computer vision and LiDAR, i.e., the concept of active vision. In recent years, due to the continuous improvement of computer vision algorithms and computing platforms, there has been a return to using only computer vision, i.e., passive vision methods.
In the mid-1970s, researchers such as Marr, Barrow, and Tenenbaum proposed the theory of visual computing, the core of which is to recover the three-dimensional structure of a scene from an image. S. T. Barnard and M. A. Fischler systematically reviewed the 3D vision research from the mid-1970s to 1981, covering basic methods of stereo reconstruction, algorithm evaluation criteria, and the influential algorithms of the time. In the late 1970s and 1980s, Gennery and Moravec, working at Stanford University, first applied stereo vision 3D reconstruction technology to mobile robot navigation. On a platform called the Stanford Cart, they achieved autonomous localization and 3D detection of the surrounding environment based on stereo vision. However, due to limitations in computational speed and the shortcomings of the hardware platform, the system could not operate reliably for extended periods.
In the 1980s, researchers at Carnegie Mellon University (CMU) and NASA's JPL were at the forefront of this field. In the late 1980s, CMU researchers successfully solved the computational speed and engineering reliability problems of stereo vision on their mobile robot platform, the CMU Rover (Moravec, 1983). Their main improvements lay in upgrading the hardware platform and refining the perception algorithms. Most notably, in 1987, Matthies and Shafer first proposed a visual odometry algorithm based on stereo vision. This algorithm was the first to calculate the robot's trajectory and pose with relatively high accuracy using visual means. This opened up applications of vision-based motion estimation algorithms in Earth's field environments (Nister, 2006; Agrawal, 2007) and in NASA's Mars Exploration Rover (MER) mission for extraterrestrial exploration robots (Cheng, 2006).
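The core of any visual odometry pipeline, however the frame-to-frame motion is estimated, is chaining incremental motion estimates into a trajectory. A minimal planar (SE(2)) sketch of this composition step, with hypothetical increments, also shows why visual odometry drifts: each increment's error is carried into every later pose.

```python
import math

def compose(pose, delta):
    """Compose a global pose with a frame-to-frame motion estimate.

    pose  = (x, y, theta) in the world frame.
    delta = (dx, dy, dtheta) expressed in the robot's current frame,
            e.g. one visual-odometry increment between image frames.
    The translation is rotated into the world frame before being added.
    """
    x, y, th = pose
    dx, dy, dth = delta
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            th + dth)

def integrate(increments, start=(0.0, 0.0, 0.0)):
    """Chain a sequence of increments into a final pose."""
    pose = start
    for d in increments:
        pose = compose(pose, d)
    return pose
```

For instance, four increments of "advance 1 m, then turn 90 degrees" trace a closed square and return the robot to its starting position; with noisy real increments, the loop would fail to close, which is exactly the accumulated drift that absolute fixes or loop closure must correct.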
In subsequent research, CMU remained at the forefront globally. Their Navlab mobile robot platform employed active vision, combining a monocular camera and a LiDAR as its environmental detection solution. This successfully overcame the significant computational complexity of passive vision in matching and feature extraction at the time. From this period onward, mobile robots gradually achieved so-called real-time autonomous navigation, primarily due to continuous algorithm improvements and substantial increases in computing platform performance. For over a decade starting in the mid-1990s, environmental perception and detection methods based on active detection technologies (mainly LiDAR and millimeter-wave radar) were widely applied. In particular, the ability to rapidly model environments when combined with visual sensors made it the preferred solution for mobile robots, especially those in the field.
However, research on environmental perception and modeling based on monocular or binocular passive vision never ceased. In the late 1980s and 1990s, Matthies's research at JPL overcame the real-time limitations of stereo vision algorithms for field scenes, and in the late 1990s he first applied them to environmental detection and modeling for field robots. Since then, stereo vision has gradually gained attention and truly become a competitive technology in the field of robot 3D perception.
In the past five years, visual perception technology has been increasingly applied in mobile robots, especially for perception and environmental modeling in complex environments with uneven terrain. One such application is the Demo III autonomous field rover, which employed three pairs of stereo vision cameras: one forward-facing pair, one rear-facing pair, and one pair mounted on a servo gimbal (Matthies, 2007). Here, stereo vision enabled two basic functions: visual odometry and path planning. Similarly, on the rovers of NASA's MER program, in the absence of absolute positioning systems like GPS on Mars, the stereo vision-based visual odometry method, using relative positioning, achieved sufficient positioning accuracy (Cheng, 2005). Furthermore, stereo vision-based path planning can quickly perceive uneven terrain (Biesiadecki and Maimone, 2006) and predict changes in terrain tilt (Angelova, 2007). Meanwhile, other researchers have achieved robot pose estimation and 3D road reconstruction based on monocular or binocular vision; for example, in 2006, Nister used monocular and binocular visual odometry to estimate the pose and position of mobile robots in complex outdoor environments, achieving high accuracy, good reliability, and real-time performance.
4. Conclusion
Research on search and rescue robots for detecting dangerous areas in coal mines provides a safety guarantee for current coal mining operations. The solutions to the key technologies mentioned in the article are of great significance for the development and research of high-performance search and rescue robots.