Compared with traditional robot obstacle avoidance, drone obstacle avoidance still faces distinct challenges. In its early stages, limited by both technology and the market, drone reliability was relatively low, so manufacturers specified in their manuals that drones must be flown in open areas, away from large crowds. A single misoperation, or the triggering of the one-key (low-voltage or loss-of-control) return-to-home function near tall obstacles, could send a drone crashing into a perfectly visible obstacle, leaving the manufacturer helpless. Although drone obstacle avoidance is still maturing, the booming drone market has spurred manufacturers to invest heavily in the technology to reduce such accidents, and their common starting point is the same: measuring the distance between the drone and the obstacle.
Obstacle avoidance can be implemented on drones in several ways, each suited to different scenarios; some technologies, for example, are unsuitable for forward-looking obstacle avoidance. The obstacle detection methods currently in wide use in the drone field are ultrasonic ranging, infrared or laser ranging, and binocular vision.
Among the many trends in consumer drone technology, obstacle avoidance is a key element in achieving automation and even intelligence. A well-developed autonomous obstacle avoidance system can greatly reduce the rate of drone damage and accidents involving people and buildings caused by operational errors. Judging from the new products and technological development directions of various consumer drone manufacturers, obstacle avoidance technology will also become more sophisticated in the next few years and become a standard system for mid-to-high-end consumer drones.
Obstacle avoidance technology is the intelligent capability by which a drone autonomously avoids obstacles. An automatic obstacle avoidance system can evade obstacles in the flight path in time, greatly reducing losses caused by operational errors; beyond cutting the number of crashes, it is a significant help to novice pilots.
In the pre-obstacle-avoidance era, user manuals for consumer drones typically stated that they must be flown in open areas, away from large crowds (largely because, in the technological and market environment of the time, consumer drones were not especially reliable). A single misoperation, or triggering the one-key (low-voltage or loss-of-control) return-to-home function with tall obstacles nearby, could send the drone crashing into a perfectly obvious obstacle, an outcome that is truly irreversible. To reduce such accidents, manufacturers have worked tirelessly on obstacle avoidance technology, and in implementation their focus has converged on the same thing: measuring the distance between the drone and the obstacle.
We can imagine that if a drone can measure the distance to a potential obstacle, it can stop before crashing into it (although fixed-wing drones might disagree). While this approach is simple and crude, it still has some effectiveness. Currently, the most commonly used obstacle detection methods include:
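The "stop before crashing" idea above can be made concrete with a simple braking check. This is a minimal sketch with hypothetical numbers, not any manufacturer's actual logic: a drone travelling at speed v needs roughly v²/(2a) metres to stop at deceleration a, so it should brake once the measured obstacle distance falls inside that envelope plus a margin.

```python
# Hypothetical braking decision: compare the measured obstacle distance
# against the physical stopping distance v^2 / (2*a) plus a safety margin.

def should_brake(distance_m: float, speed_ms: float,
                 max_decel_ms2: float = 4.0, margin_m: float = 2.0) -> bool:
    """Return True if the obstacle is inside the stopping envelope."""
    stopping_distance = speed_ms ** 2 / (2.0 * max_decel_ms2)
    return distance_m <= stopping_distance + margin_m

# At 10 m/s with 4 m/s^2 of braking, the stopping distance is 12.5 m,
# so with a 2 m margin an obstacle 15 m ahead is still (barely) safe:
print(should_brake(15.0, 10.0))  # False
print(should_brake(12.0, 10.0))  # True
```

This also illustrates why fixed-wing drones "might disagree": they cannot hover, so a stop-in-place policy is meaningless for them regardless of the ranging method.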
Ultrasonic ranging:
This method is familiar to many people; the reversing radar in passenger cars uses ultrasonic waves to detect obstacles. The advantages of this method are that it is a mature technology with low cost; however, its disadvantages are its short effective range (the effective range of commonly used low-to-mid-range ultrasonic sensors is no more than 10 meters) and certain requirements for the reflective surface. Therefore, ultrasonic ranging sensors are often used to measure the distance between drones and the ground (fixed-wing drones say they fly too high and too fast for ultrasonic sensors to be effective).
Because infrared and ultrasonic technologies both rely on actively emitting light or sound waves, they place requirements on the reflecting object: infrared light is absorbed by black objects, passes through transparent ones, and suffers interference from other infrared sources, while ultrasonic waves are absorbed by materials such as sponge and are easily disturbed by propeller wash. Active ranging can also cause two drones to interfere with each other. Binocular vision, by contrast, also needs light but places far weaker demands on the reflecting object, and two drones can operate side by side without interference, making it more versatile. Laser ranging can achieve similar results, but current laser components are generally expensive, bulky, and power-hungry, so they are neither economical nor practical for consumer drones. Drone obstacle avoidance must also cope with multiple obstacles at once. In short, current drone ranging typically uses a ranging payload that emits a specific wave and computes the distance from its wavelength, propagation speed, and round-trip feedback time.
For this reason, technical staff have recommended an ultrasonic obstacle avoidance sensor: the MB1043.
Introduction to obstacle avoidance technology for drones
Ultrasonic obstacle avoidance: Ultrasonic waves are a type of sound wave; because their frequency is above 20 kHz, they are inaudible to the human ear and are more strongly directional. The principle of ultrasonic ranging is simpler than that of infrared ranging: sound waves reflect off obstacles and the speed of sound is known, so the measured distance can be calculated from the time difference between transmission and reception alone. Combined with the separation between transmitter and receiver, the actual distance to the obstacle follows.
The MB1043 ultrasonic obstacle avoidance sensor is a high-resolution (1 mm), high-accuracy, low-power ultrasonic sensor. Its design rejects interference and noise, and it compensates for targets of varying size and for changes in supply voltage; standard internal temperature compensation further improves ranging accuracy. Applications include indoor environments, pedestrian detection, small-target detection, high-sensitivity applications, and ranging and obstacle avoidance for robots and drones. It is an excellent low-cost option.
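As a hedged illustration of reading such a sensor: sensors in the MaxBotix HRLV family, which includes the MB1043, commonly stream readings over serial as ASCII frames of the form "R####\r", where the four digits give the range in millimetres. The exact format should be confirmed against the datasheet; the parser below is a sketch under that assumption.

```python
# Assumed frame format (verify against the MB1043 datasheet):
# one ASCII 'R', four decimal digits giving the range in mm, then '\r'.
from typing import Optional

def parse_maxbotix_frame(frame: bytes) -> Optional[int]:
    """Parse one 'R####\r' frame; return the range in mm, or None if malformed."""
    if len(frame) != 6 or frame[0:1] != b"R" or frame[5:6] != b"\r":
        return None
    digits = frame[1:5]
    if not digits.isdigit():
        return None
    return int(digits)

print(parse_maxbotix_frame(b"R1234\r"))  # 1234
print(parse_maxbotix_frame(b"garbage"))  # None
```

In a real system the frames would arrive over a UART (e.g. via pyserial), with the parser applied to each delimited chunk; returning None on malformed frames lets the flight controller discard noise-corrupted readings.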
In recent years, the drone market has grown rapidly, and obstacle avoidance technology, a key safeguard for flight safety, has advanced alongside it. In flight, a drone gathers information about its surroundings through its sensors, measures distances, and then issues the corresponding maneuver commands, achieving "obstacle avoidance".
Currently, the most common obstacle avoidance technologies for drones are infrared sensors, ultrasonic sensors, laser sensors, and visual sensors. So why did DJI choose binocular vision for forward-looking obstacle avoidance? This requires explaining the principles of each technology.
We are familiar with the applications of infrared light: from TV and air conditioner remote controls to hotel automatic doors, all utilize the principle of infrared sensing. Specifically, in the application of obstacle avoidance in drones, the common implementation method of infrared obstacle avoidance is the "triangulation principle".
An infrared sensor pairs an infrared emitter with a CCD detector. The emitter sends out infrared light, an object reflects it, and the CCD receives the return. Because objects at different distances D reflect at different angles, the returning spot lands at different offsets L on the detector; from these quantities the object's distance can be calculated.
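The triangulation relation just described can be written out explicitly. With an emitter and detector separated by a baseline s, a lens of focal length f, and a reflected spot offset L on the CCD, similar triangles give D = f·s / L. The numeric values below are illustrative, not taken from any particular sensor.

```python
# Triangulation: similar triangles relate the object distance D to the
# spot offset L on the detector via D = f * s / L (f: focal length,
# s: emitter-detector baseline). All values here are illustrative.

def triangulation_distance(focal_length_m: float, baseline_m: float,
                           offset_m: float) -> float:
    """Object distance from the reflected spot's offset on the CCD."""
    if offset_m <= 0:
        raise ValueError("zero offset: object at infinity or no return signal")
    return focal_length_m * baseline_m / offset_m

# A larger offset means a nearer object:
print(triangulation_distance(0.01, 0.02, 0.0004))  # ~0.5 m
print(triangulation_distance(0.01, 0.02, 0.0001))  # ~2.0 m
```

The inverse relationship between D and L is why such sensors lose resolution at long range: distant objects all produce nearly the same tiny offset.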
Ultrasound is in fact a type of sound wave; because its frequency is above 20 kHz, it is inaudible to the human ear and is more strongly directional.
The principle of ultrasonic ranging is simpler than that of infrared ranging: sound waves reflect off obstacles and the speed of sound is known, so only the time difference between transmission and reception is needed to compute the measured distance. Combined with the separation between transmitter and receiver, the actual distance to the obstacle follows.
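The calculation described above is a one-liner: sound travels to the obstacle and back, so the one-way distance is half the speed of sound times the round-trip time. The 343 m/s figure below assumes air at roughly 20 °C; real sensors apply temperature compensation because the speed of sound varies with air temperature.

```python
# Time-of-flight ranging: the echo covers the distance twice (out and back),
# so distance = speed_of_sound * round_trip_time / 2.
SPEED_OF_SOUND_MS = 343.0  # m/s in air at about 20 degrees C (assumption)

def ultrasonic_distance(round_trip_s: float) -> float:
    """Distance to the obstacle, in metres, from the echo round-trip time."""
    return SPEED_OF_SOUND_MS * round_trip_s / 2.0

# An echo received 0.02 s after transmission puts the obstacle ~3.43 m away:
print(ultrasonic_distance(0.02))
```

The division by two is the step most often forgotten; omitting it doubles every reading and makes the drone brake far too early.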
Ultrasonic ranging is cheaper than infrared ranging, but its response speed and accuracy are somewhat inferior. Because it likewise relies on actively emitted waves, its accuracy falls off for distant obstacles as the sound attenuates, and it fails outright on sound-absorbing objects such as sponge, as well as in windy conditions.
Laser obstacle avoidance resembles infrared obstacle avoidance in that it emits a beam and receives the return. Laser sensors, however, use a variety of measurement methods, including triangulation like infrared and time-of-flight like ultrasound.
Regardless of the method, laser obstacle avoidance is significantly superior to infrared and ultrasonic in terms of accuracy, feedback speed, anti-interference ability, and effective range.
However, it's important to note that ultrasonic, infrared, and laser ranging technologies are all one-dimensional sensors. They can only provide a distance value and cannot perceive the real three-dimensional world. Of course, because laser beams are extremely narrow, multiple laser beams can be used simultaneously to form an array radar. This technology has matured in recent years and is widely used in autonomous vehicles. However, due to its large size and high cost, it is not very suitable for drones.
Solving the problem of how robots "see" is what we commonly call computer vision. Its foundation is extracting three-dimensional information from two-dimensional images, and thereby understanding the three-dimensional world we live in. A visual recognition system typically includes one or two cameras. A single photograph carries only two-dimensional information; like a 2D film, it gives no direct sense of space, and we fill the gap from life experience, using cues such as occlusion and perspective. The information a single camera yields is therefore very limited and cannot directly achieve the desired effect (other methods can assist in recovering it, but these remain immature and have not been widely validated). By analogy, in machine vision the image from a single camera cannot capture the distance between each object in the scene and the lens; it lacks the third dimension.
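The binocular approach recovers that missing third dimension from disparity. Under the standard pinhole model for a rectified stereo pair, two cameras a baseline B apart see the same point at horizontal pixel positions differing by a disparity d, and depth follows as Z = f·B / d. The numbers below are illustrative, not from any particular drone's camera rig.

```python
# Stereo (binocular) depth under the pinhole model for a rectified pair:
# Z = f * B / d, with f the focal length in pixels, B the baseline in
# metres, and d the disparity in pixels. Illustrative values only.

def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth in metres of a point from its pixel disparity between cameras."""
    if disparity_px <= 0:
        raise ValueError("non-positive disparity: point too far or unmatched")
    return focal_px * baseline_m / disparity_px

# With a 700 px focal length and a 0.1 m baseline, 20 px of disparity
# corresponds to 3.5 m of depth; nearer objects show larger disparity:
print(stereo_depth(700.0, 0.1, 20.0))  # 3.5
```

The hard part in practice is not this formula but finding the matching pixel in the second image for every pixel in the first; that correspondence search is what stereo-matching algorithms spend their effort on.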