Autonomous driving relies on several advanced technologies that complement each other to perceive the surrounding environment and navigate autonomously. How exactly do these technologies work together?
Beyond well-known industry leaders like Waymo, many other companies are advancing perception capabilities in this sector. Self-driving cars must be able to recognize traffic signals and signs, as well as other cars, bicycles, and pedestrians. They must also sense the distance and speed of objects ahead in order to react accordingly.
Cameras and computer vision: Cameras are widely used in autonomous vehicles and vehicles equipped with advanced driver assistance systems (ADAS), and are an important perception device in autonomous driving environments.
Cameras can recognize colors and text, helping to detect road signs, traffic lights, and lane markings, an advantage over radar and lidar. However, cameras are far weaker than lidar at judging depth and distance.
Autonomous driving perception systems use computer vision to process the data extracted from cameras and detect objects and signals.
Computer vision software needs to identify specific details of lane boundaries (such as line colors and patterns) and apply the appropriate traffic rules in order to achieve safe, human-like autonomous driving in complex traffic scenarios. A minimal sketch of this kind of lane detection follows.
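To make this concrete, here is a minimal lane-line detection sketch using classical computer vision with OpenCV (Canny edges plus a probabilistic Hough transform). The file name road.jpg and the tuning thresholds are illustrative assumptions; production perception stacks are far more robust.

```python
# A minimal lane-line detection sketch using classical computer vision.
# The input file and tuning constants are illustrative assumptions.
import cv2
import numpy as np

def detect_lane_lines(frame):
    """Return line segments likely to be lane markings in a road image."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)              # edge map

    # Keep only the lower half of the image, where the road surface is.
    mask = np.zeros_like(edges)
    h, w = edges.shape
    mask[h // 2:, :] = 255
    roi = cv2.bitwise_and(edges, mask)

    # Probabilistic Hough transform: fit line segments to the edge pixels.
    lines = cv2.HoughLinesP(roi, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=20)
    return [] if lines is None else [l[0] for l in lines]

frame = cv2.imread("road.jpg")                       # hypothetical input image
for x1, y1, x2, y2 in detect_lane_lines(frame):
    cv2.line(frame, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 2)
```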
Among the sensors used in autonomous driving, cameras are the most common. Like human eyes, they can see colored signs and objects, read text, and distinguish traffic-light colors. But they have significant drawbacks: their vision deteriorates sharply at night or in bad weather, and they are poor at observing over long distances.
Second, there is the more controversial LiDAR, or laser radar, often mounted on the car roof like a constantly rotating hat. The principle is simple: by measuring how long emitted laser pulses take to reflect back, it builds a 3D map of surrounding obstacles. Its weakness, however, is that it cannot recognize images or colors.
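The ranging principle fits in a few lines: distance is half the round-trip time of a pulse multiplied by the speed of light. The pulse timing below is invented for illustration.

```python
# Minimal illustration of LiDAR time-of-flight ranging: distance is
# recovered from the round-trip time of a laser pulse.
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_distance(round_trip_seconds: float) -> float:
    """Distance to the reflecting surface, in meters."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse returning after ~333 nanoseconds corresponds to roughly 50 m.
print(tof_distance(333e-9))  # ≈ 49.9
```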
Millimeter-wave radar, meanwhile, is indispensable because it operates in all weather conditions. It cannot measure height, has low resolution, and struggles to form images, but its ability to penetrate dust, fog, rain, and snow has secured its place in the market.
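Radar's other strength, measuring relative speed directly, comes from the Doppler shift of the returned signal. A hedged sketch, assuming a typical 77 GHz automotive carrier and an invented shift value:

```python
# Sketch of how millimeter-wave radar recovers relative speed from the
# Doppler shift: v = f_d * wavelength / 2. The 77 GHz carrier is typical
# of automotive radar; the shift value below is invented for the example.
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def radial_speed(doppler_shift_hz: float, carrier_hz: float = 77e9) -> float:
    """Relative (radial) speed of the target in m/s."""
    wavelength = SPEED_OF_LIGHT / carrier_hz   # ≈ 3.9 mm at 77 GHz
    return doppler_shift_hz * wavelength / 2.0

# A 10 kHz Doppler shift at 77 GHz is roughly 19.5 m/s (~70 km/h).
print(radial_speed(10_000))
```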
The camera is also the sensor most closely tied to the security industry. New artificial intelligence techniques have given security cameras capabilities such as facial recognition, license plate recognition, and liveness detection, making them a transformative force in that industry. AI has likewise created demand for cameras in other fields, including autonomous driving.
Mainstream autonomous driving perception currently relies on two systems. One is LiDAR, whose high cost makes widespread adoption difficult; the other is video capture and analysis, a more mature, inexpensive, and easily deployed technology. Among autonomous driving technologies, video-based approaches stand out: real-time analysis of road conditions and of vehicle and pedestrian information helps the vehicle respond effectively and in time. The performance of the image sensor determines the quality of the captured image, and without high-quality image acquisition and transmission, accurate video analysis is impossible. Video systems and high-quality image sensors are therefore crucial for autonomous vehicles.
In the future, driven by improving camera sensor performance, falling chip costs, and advances in deep learning, each autonomous vehicle is expected to carry at least 8-10 cameras, serving applications such as rear-view/surround-view and night vision, advanced driver assistance, mirror replacement and dashcams, and driver/vehicle interfaces.
In recent years, automotive demand for video has become a driving force for the security industry's further development. Vehicle cameras stand to benefit from both autonomous driving and connected vehicle technologies, giving them huge market potential. According to IHS estimates, global shipments of vehicle cameras will grow from 28 million units in 2014 to 83 million units in 2020, a compound annual growth rate of about 20%. Security companies are entering the autonomous driving field aggressively, largely on the strength of their security technologies, particularly video surveillance.
Decision technology
Having identified its surroundings through visual perception, an autonomous vehicle's next step is to fully use that information to understand and analyze the scene and decide its next move. This task requires a powerful brain, and that brain's knowledge base can be built in two ways: with expert rule-based methods or with AI learning-based methods.
The expert rule-based approach relies on pre-written rules that must be strictly followed when making decisions. For example, before overtaking or changing lanes, conditions like the following must hold (a hypothetical expert's illustration, for reference only): the road's radius of curvature exceeds 500 m (no lane changes on tight curves); the gaps to the vehicles ahead and behind in the target lane exceed 20 m; the ego vehicle is no more than 5 km/h slower than the vehicle behind; and so on. Overtaking or lane changing is permitted only when all N such conditions hold simultaneously, as in the sketch below.
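A toy version of such a rule gate might look like the following; every threshold simply mirrors the hypothetical numbers above, and a real system would check many more conditions.

```python
# A toy rule-based lane-change gate following the hypothetical expert
# rules in the text; all thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class LaneChangeContext:
    curve_radius_m: float         # radius of curvature of the road
    gap_ahead_m: float            # distance to the vehicle ahead in the target lane
    gap_behind_m: float           # distance to the vehicle behind in the target lane
    ego_speed_kmh: float
    rear_vehicle_speed_kmh: float

def lane_change_allowed(ctx: LaneChangeContext) -> bool:
    """All rules must hold simultaneously, as in a rule-based expert system."""
    rules = (
        ctx.curve_radius_m > 500,                              # no lane change on tight curves
        ctx.gap_ahead_m > 20 and ctx.gap_behind_m > 20,        # safe gaps in the target lane
        ctx.rear_vehicle_speed_kmh - ctx.ego_speed_kmh <= 5,   # not much slower than traffic behind
    )
    return all(rules)

print(lane_change_allowed(LaneChangeContext(800, 35, 25, 60, 62)))  # True
```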
The AI learning-based approach, by contrast, mimics the human brain, using learning algorithms to understand scenarios. Experience may be accumulated through many mistakes or supplied up front by human guidance. A knowledge base built through learning makes the system's responses more flexible; the toy example below contrasts it with the rule-based gate above.
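For contrast, here is a toy learning-based version of the same decision, fitting a small decision tree to a handful of invented, labeled situations; real systems learn from vast driving logs rather than five examples.

```python
# Toy sketch of the learning-based alternative: instead of hand-written
# rules, fit a classifier to labeled driving decisions. All data below
# is invented for the example.
from sklearn.tree import DecisionTreeClassifier

# Features: [curve_radius_m, gap_ahead_m, gap_behind_m, speed_delta_kmh]
X = [
    [800, 35, 25,  2],   # safe situation
    [300, 35, 25,  2],   # tight curve
    [800, 10, 25,  2],   # car too close ahead
    [800, 35,  5,  2],   # car too close behind
    [900, 50, 40, -1],   # safe situation
]
y = [1, 0, 0, 0, 1]      # 1 = lane change allowed, 0 = not allowed

model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[700, 30, 30, 0]]))   # learned decision for a new case
```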
Positioning technology
Currently, besides mainstream positioning with GPS or GNSS (Global Navigation Satellite Systems) more broadly, there are methods such as embedding electromagnetic guide wires in the road. The biggest challenge for high-precision GPS positioning is the effect of terrain, such as mountains and tunnels, on accuracy. Position can be estimated with an IMU (Inertial Measurement Unit) during an outage, but if the GPS signal is lost for too long, the accumulated error becomes quite large, as the sketch below illustrates.
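The drift problem is easy to see in code: an uncorrected accelerometer bias gets integrated twice, so position error grows roughly with the square of the outage time. The bias and rates below are assumed values for illustration.

```python
# Why IMU dead reckoning drifts without GPS: a small, biased
# acceleration error is integrated twice into position.
dt = 0.01            # 100 Hz IMU updates (assumed)
accel_bias = 0.05    # m/s^2 of uncorrected accelerometer bias (assumed)

velocity_error = 0.0
position_error = 0.0
for _ in range(int(60 / dt)):              # 60 s without a GPS fix
    velocity_error += accel_bias * dt      # first integration
    position_error += velocity_error * dt  # second integration

# After one minute the position error is already ~90 m
# (0.5 * 0.05 * 60^2), which is why long GPS outages are a problem.
print(f"{position_error:.1f} m")
```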
In addition, 3D dynamic high-definition maps specifically designed for autonomous driving bring more possibilities to autonomous driving. With high-definition maps, it's easy to pinpoint one's location within the lane.
Recently, ARCFOX held its 2021 brand night and launch event, unveiling the all-electric sedan Alfa S in four versions priced from RMB 251,900 to RMB 344,900. Also shown was the Alfa S Huawei HI Edition, co-developed by ARCFOX and Huawei and billed as the world's first mass-produced vehicle equipped with three LiDAR sensors. The debut of this domestically developed autonomous driving technology marks the start of a new era for autonomous driving in China.
The emergence of self-driving cars has shaken the current automotive industry and brought sweeping changes to society. Tesla was among the first to bring vehicles marketed as self-driving to market, but judging from the complaints and consumer-rights disputes Tesla has faced, intelligent vehicles still confront many challenges because the technology remains immature.
What technologies are needed to achieve autonomous driving? Next, we will explain the technologies for achieving autonomous driving in simple terms.
1. Principle of autonomous driving: Autonomous vehicles are a type of intelligent vehicle, also known as wheeled mobile robots. They rely mainly on the in-vehicle computer system to give the vehicle environmental perception, path planning, and autonomous control capabilities. In other words, electronic technology controls the car to achieve human-like or fully autonomous driving.
2. Visual Perception Technology: In short, visual perception relies on one or more cameras to accurately identify lane lines, road edges, drivable areas, vehicles, pedestrians, traffic signs, and traffic lights. The visual perception module acts as the eyes of an autonomous vehicle, letting it discern its surroundings and supplying information for its behavioral decisions. It covers both the vehicle's perception of its own position and its perception of the surrounding environment; the sketch below shows the object-detection half of that job.
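As a hedged stand-in for a real perception stack, the sketch below runs an off-the-shelf pretrained detector from torchvision over a single street image; street.jpg and the 0.8 confidence cut-off are assumptions, and production systems use far more specialized models.

```python
# Sketch of camera-based object detection with a pretrained detector
# from torchvision; "street.jpg" is a hypothetical input image.
import torch
from torchvision.io import read_image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import convert_image_dtype

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = convert_image_dtype(read_image("street.jpg"), torch.float)
with torch.no_grad():
    detections = model([image])[0]   # dict with boxes, labels, scores

for box, label, score in zip(detections["boxes"],
                             detections["labels"],
                             detections["scores"]):
    if score > 0.8:                  # keep confident detections only
        print(label.item(), [round(v) for v in box.tolist()], float(score))
```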
3. LiDAR Sensing Technology: LiDAR is a sensor used to accurately obtain three-dimensional position information, determining the position, size, external shape, and even material of objects much as human eyes do. It is a general term for active sensors that detect environmental information using laser ranging. It uses laser beams to detect targets, acquire data, and generate precise digital models. A LiDAR consists of three parts: a transmitting system, a receiving system, and information processing. It works by emitting, reflecting, and receiving visible and near-infrared light (mostly infrared light near the 905 nm band) to detect objects. It can accurately detect and track traffic participants and unknown targets, and it suits scenarios such as autonomous driving and vehicle-to-everything (V2X) communication.
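Turning one return into a point in the sensor frame is simple trigonometry on the measured range and the known beam angles; real devices repeat this for hundreds of thousands of returns per second to build a point cloud. The example values are invented.

```python
# Sketch: converting one LiDAR return (range + beam angles) into a
# 3D point in the sensor frame.
import math

def lidar_return_to_xyz(range_m, azimuth_deg, elevation_deg):
    """Convert a range measurement along a known beam direction to x, y, z."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)   # forward
    y = range_m * math.cos(el) * math.sin(az)   # left
    z = range_m * math.sin(el)                  # up
    return x, y, z

# A 49.9 m return at 10 degrees azimuth and -2 degrees elevation:
print(lidar_return_to_xyz(49.9, 10.0, -2.0))
```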
4. Multi-sensor fusion technology: Multi-sensor information fusion (MSIF) is an information processing process that uses computer technology to automatically analyze and synthesize information and data from multiple sensors or sources under given criteria, in order to complete the required decision-making and estimation. Its basic principle resembles the human brain's information processing: information from various sensors is combined and optimized at multiple levels and in multiple spaces to ultimately produce a consistent interpretation of the observed environment. It allows multi-sensor data to be combined flexibly, giving the system low-latency, high-precision, fault-tolerant sensing results. A minimal sketch follows.
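Here is a minimal sketch of the idea, fusing a radar and a camera range estimate by inverse-variance weighting, which is the optimal linear combination for independent Gaussian errors; the noise figures are assumptions.

```python
# Minimal multi-sensor fusion sketch: combine independent range
# estimates by weighting each with the inverse of its variance.
def fuse(measurements):
    """measurements: list of (value, variance) pairs -> fused value, variance."""
    inv_vars = [1.0 / var for _, var in measurements]
    fused_var = 1.0 / sum(inv_vars)
    fused_val = fused_var * sum(val / var for val, var in measurements)
    return fused_val, fused_var

radar = (42.3, 0.25)    # radar: good range accuracy (variance 0.25 m^2, assumed)
camera = (40.8, 4.0)    # camera: noisier depth estimate (variance 4 m^2, assumed)
value, variance = fuse([radar, camera])
print(f"fused distance ≈ {value:.2f} m, variance {variance:.3f}")
```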
Uber launched its self-driving car service last week and began testing it in Pittsburgh, bringing even wider attention to the already hot self-driving technology.
Over the past two years, major technology companies have raced to test self-driving cars. In fact, from complex automatic cruise control to the semi-autonomous driving systems installed on some vehicles, and on to fully autonomous vehicles, self-driving technology already exists in many forms.
Adaptive cruise control system
Adaptive Cruise Control is an intelligent automatic control system that evolved from existing cruise control technology and is now found in many ordinary and luxury cars.
While the vehicle is in motion, radar, cameras, and other sensors measure the distance to other vehicles. If the gap becomes too small, the system applies appropriate braking and reduces engine power output to maintain a safe distance from the vehicle in front; the toy controller below shows the idea.
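A toy version of such a controller regulates the gap around a time-headway target; the gains, headway, and acceleration limits below are illustrative, not values from any production ACC.

```python
# Toy adaptive-cruise-control loop: a proportional controller that
# regulates the gap to the lead vehicle around a time-headway target.
def acc_command(gap_m, ego_speed_ms, lead_speed_ms,
                time_headway_s=1.8, kp_gap=0.2, kp_speed=0.6):
    """Return a desired acceleration in m/s^2 (negative = brake)."""
    desired_gap = 2.0 + time_headway_s * ego_speed_ms   # standstill margin + headway
    gap_error = gap_m - desired_gap
    speed_error = lead_speed_ms - ego_speed_ms
    accel = kp_gap * gap_error + kp_speed * speed_error
    return max(-3.0, min(2.0, accel))                   # comfort/safety limits

# Ego at 25 m/s, lead at 22 m/s, only 30 m ahead -> firm braking.
print(acc_command(gap_m=30.0, ego_speed_ms=25.0, lead_speed_ms=22.0))
```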
Its cost varies from vehicle to vehicle, and the price range of vehicles equipped with this system is very wide.