Artificial intelligence is once again at its peak.
The Go match between AlphaGo, a Go program developed by Google's DeepMind, and Lee Sedol, a 9-dan professional, was a five-game contest in which AlphaGo defeated the former world number one 4-1. After the match, AlphaGo was awarded an honorary 9-dan certificate and was ranked second in the world, behind only the Chinese player Ke Jie. This man-machine Go match not only caused a huge stir in the Go world, but also pushed the popularity of artificial intelligence to a new high worldwide.
Does this man-machine battle indicate that artificial intelligence has completely surpassed humanity? Films like *Ex Machina* and *The Matrix*, and the TV series *Humans*, vividly depict artificial intelligence. With the singularity said to be approaching, will AI become more intelligent than humans? Will AI become the next new species? Could AI possess true consciousness? Should humanity be excited or fearful in the face of AI?
Some have joked that the easiest way to defeat AlphaGo is to unplug it. Humans shouldn't compete with AlphaGo in raw computation; instead, they should invite the poet Bei Dao to compete with it in poetry, the novelist Mo Yan to compete in fiction, or the celebrity couple Li Chen and Fan Bingbing to out-charm it in public displays of affection; that would be a swift and decisive victory. Joking aside, the emergence of artificial intelligence has brought humanity a mix of excitement and fear. Should we be ecstatic or panicked? Let's step into the era of artificial intelligence and see what its future holds.
AlphaGo: Two Kinds of Consciousness
Before the match, many believed AlphaGo would win decisively. Afterwards, many were surprised that Lee Sedol had won a game. Some speculated that AlphaGo's loss was no accident, but a move to save face for humans. If that assumption held, AlphaGo would possess consciousness. Such "consciousness," however, could take two forms: if humans imposed the instruction to lose, the consciousness would be passive; if AlphaGo decided on its own to lose, that would be consciousness of the kind shown in *Ex Machina*, a state that current artificial intelligence cannot yet achieve.
How can artificial intelligence master a game more complex than the universe?
In chess, humans have already been thoroughly outmatched by machines. In 1997, IBM's Deep Blue defeated the reigning world champion Garry Kasparov for the first time under standard tournament time controls, winning the six-game match 3.5-2.5 with 2 wins, 1 loss, and 3 draws.
Go is far more difficult than chess. A Go player has 361 possible opening moves, and hundreds of options at almost any stage of the game; chess, by contrast, offers only a few dozen moves at a time. Chess has at most about 10^47 possible board positions, while Go can have as many as 10^179. Considering that the number of atoms in the observable universe is only about 10^80, Go is often called a game more complex than the universe itself.

The reason AlphaGo achieved such a leap in skill is its use of deep learning. Simply put, this technology introduces a human-brain-like mechanism into computers that previously only executed explicit instructions. Researchers used neural networks to feed the game records of professional Go players into the computer, and then let it play against itself. AlphaGo's superior Go skill was not taught to it by its developers; it was self-taught.
Through deep reinforcement learning, AlphaGo can play against itself. While an average player might play 1,000 games a year, AlphaGo can play as many games as it wants in a day, with a current peak of 1 million games.
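AlphaGo's actual training pipeline combines deep policy and value networks with Monte Carlo tree search, which is far beyond a short example, but the core idea of improving purely from self-play can be sketched on a toy game. The sketch below is an assumption-laden illustration, not DeepMind's code: an agent learns one-pile Nim (take 1-3 stones per turn; whoever takes the last stone wins) only by playing against itself and averaging the outcomes of its own games.

```python
import random

ACTIONS = (1, 2, 3)  # a move removes 1, 2, or 3 stones


def self_play_train(episodes=30000, start=10, eps=0.3):
    """Learn move values for one-pile Nim purely from self-play.

    Q[(stones, action)] is the running-average outcome (+1 win, -1 loss)
    for the player who takes `action` with `stones` remaining.
    """
    random.seed(0)  # fixed seed so the sketch is reproducible
    Q, N = {}, {}
    for _ in range(episodes):
        stones, history = start, []
        while stones > 0:
            legal = [a for a in ACTIONS if a <= stones]
            if random.random() < eps:   # explore occasionally
                a = random.choice(legal)
            else:                       # otherwise play the best known move
                a = max(legal, key=lambda m: Q.get((stones, m), 0.0))
            history.append((stones, a))
            stones -= a
        reward = 1.0  # whoever moved last took the final stone and won
        for state, action in reversed(history):
            n = N[(state, action)] = N.get((state, action), 0) + 1
            old = Q.get((state, action), 0.0)
            Q[(state, action)] = old + (reward - old) / n  # running mean
            reward = -reward  # players alternate, so flip the perspective


        # (end of episode)
    return Q


def best_move(Q, stones):
    """The highest-valued legal move found by self-play."""
    legal = [a for a in ACTIONS if a <= stones]
    return max(legal, key=lambda m: Q.get((stones, m), 0.0))
```

After training, the agent rediscovers the classic strategy of leaving its opponent a multiple of four stones (for example, taking 2 from a pile of 10) even though no one programmed that rule in; the same principle, scaled up enormously, is what lets AlphaGo teach itself Go.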
AlphaGo can read the board some 40 moves ahead and use its enormous computing power to steer toward the optimal outcome, while humans can only read about 10 moves ahead. Even when its moves look "bad" to us, AlphaGo can see further into the future and control the overall situation; that is why it outplays humans at Go.
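Reading moves ahead is, at its core, tree search. For a game small enough, a computer can search all the way to the end and play perfectly. The sketch below is an illustrative toy, not AlphaGo's algorithm: a negamax search that solves one-pile Nim (take 1-3 stones; last stone wins) exhaustively. Go's branching factor of up to 361 makes this brute-force approach hopeless, which is why AlphaGo needs learned networks to prune its search.

```python
from functools import lru_cache

ACTIONS = (1, 2, 3)  # a move removes 1, 2, or 3 stones


@lru_cache(maxsize=None)
def negamax(stones):
    """Best achievable outcome (+1 win / -1 loss) for the player to move,
    found by searching the game tree all the way to the end."""
    if stones == 0:
        return -1  # the previous player just took the last stone: we lost
    # Our best result is the worst outcome we can force on the opponent, negated.
    return max(-negamax(stones - a) for a in ACTIONS if a <= stones)


def best_move(stones):
    """The move with the best guaranteed outcome."""
    return max((a for a in ACTIONS if a <= stones),
               key=lambda a: -negamax(stones - a))
```

The search proves that positions holding a multiple of four stones are lost, and that from 10 stones the winning move is to take 2; a human player would have to discover this pattern by insight, while the machine simply looks all the way ahead.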
The last vestige of human dignity in board games is Ludo.
Humanity's last vestige of dignity in board games might be Ludo, since it relies almost entirely on luck. If AlphaGo were to play Mahjong, could it win? The answer is yes. Artificial intelligence can defeat humans at board games because such games are algorithmic: essentially finite, closed problems with clearly defined objectives, exactly what computers handle well. Randomness and luck play a role in any single game, but in terms of probability and the overall outcome, machines can win at any algorithmic board game, including Mahjong and poker.
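The point about probability and the overall outcome can be made concrete with a little arithmetic. Even in a luck-heavy game, a player with only a slight per-game edge becomes an overwhelming favorite over a long series. The sketch below is illustrative arithmetic, not a model of any real Mahjong or poker AI: it computes the chance of winning a majority of n independent games given a per-game win probability p.

```python
from math import comb


def match_win_probability(p, n):
    """Probability of winning a majority of n independent games,
    each won with probability p (n is assumed odd, so there are no ties)."""
    need = n // 2 + 1  # games required for a majority
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(need, n + 1))


# A 55% per-game edge is barely noticeable in a single game...
single = match_win_probability(0.55, 1)    # 0.55
# ...but over a 101-game series the edge dominates: well over 0.8.
series = match_win_probability(0.55, 101)
```

This is why luck protects a human in one hand of Mahjong but not over an evening of play: the law of large numbers steadily converts a small statistical edge into near-certain victory.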
The weaknesses of game-playing AI could also be seen in this Go match. Because of how its algorithms work, AlphaGo tended to be at a disadvantage in the first half of each game, when its decision network faced too many possible choices; in the middle game and endgame, the search became more tractable. Whether deep neural networks can surpass humans in a given field depends on that field's current complexity; AI could win at Mahjong because Mahjong's algorithmic complexity is low.
The future development path of AlphaGo
AlphaGo's creators aim to build a general-purpose machine, one capable of understanding human emotional needs by reading novels and holding conversations of various kinds. Such a machine needs to understand and imitate human language, an ability far more challenging than playing Go. Machines can already listen and speak, surpassing human performance in both Chinese and English speech recognition, and can transcribe human speech; semantic understanding, however, still has a long way to go. Within five years, artificial intelligence may converse as fluently as humans, but only at the level of language. In content, it may still rely on pre-programmed data for emotional interaction, eventually drawing on big-data records of all human activity; at the level of content, it will remain passive.
Will artificial intelligence be smarter than humans in 20 years?
Through the mobile internet and the Internet of Things (IoT), machines can gather all of humanity's knowledge in the background and deploy it through deep neural networks. Ultimately, artificial intelligence will not be one machine measured against one person: it will pit the pooled wisdom of everyone, gathered in the background, against each individual. In most scenarios it will therefore be smarter than any one human.
Yet artificial intelligence cannot match human creativity. Today no one can compete with computers at computation, so human creativity becomes our most valuable asset. An intelligent division of labor can let humans and machines collaborate at a higher level, creating more value through more advanced modes of production.
Artificial intelligence can only become smarter than humans if it becomes a higher form of life.
Machines can learn, but they are not smarter than humans. Robots lack intuition; they cannot form or define goals of their own. Early machines were told exactly how to do things. With AlphaGo, we started by providing the answers, the correct moves; later we simply gave it the goal of winning and let it find methods and solutions on its own. Human goals, however, are not confined to specific domains: humanity's overarching goal is survival and reproduction, and that pursuit has produced competition, evolution, collective morality, and emotion. If we want robots as intelligent as humans, we must at least set their goals around survival. At that point we would have moved beyond artificial intelligence to the creation of a higher form of life. Without being shaped by life and death, machines can only pursue goals within specific domains, and so cannot be considered smarter than humans.
By 2045, artificial intelligence will surpass human abilities in learning and reasoning.
Robots can replace not only simple repetitive labor but also humans in complex tasks. Future machines will not only learn what they are programmed to learn, but will possess reasoning and learning abilities of their own. The AI "Ava" in *Ex Machina* deceived humans, and that situation may occur in reality; the AI of 2045 may be even more advanced than Ava. Already, the behavior of machines built on deep learning and statistical modeling cannot always be predicted, and their reactions to new situations may not be what their programmers foresaw. In fact, a complete understanding of the activity of neurons in the human brain is not necessarily required to develop artificial intelligence; algorithms matter more.
Division of labor between the left and right hemispheres
The left brain is responsible for: logic, mathematics, memory, analysis and reasoning, planning and arrangement, and sequential order.
The right brain is responsible for: imagery, art, emotion, inspiration and creativity, dreams and imagination, and the occasional daydream.
This division maps neatly onto artificial intelligence. Robots can readily take over left-brain tasks, but right-brain tasks are far harder to automate; artificial intelligence could become the best possible left brain. Suppose we could find an algorithm that generates new algorithms, one that could in principle solve every problem in the world. That day may never come, and such a "god-like" algorithm may never exist.
Artificial intelligence writing poetry or painting does not amount to creativity.
Computer-generated poetry cannot yet be considered true creation; it is still the optimization of mathematical functions over coordinates. It is like Google's machine translation: producing a translation does not mean the machine understands the text; these are not the same thing. Machines currently face two major limitations. First, they can only learn from existing data about human behavior and judgment, so they cannot strike out in entirely new directions that humans have never explored; artificial intelligence cannot yet become a great artist. Second, humans invented the art forms themselves, painting, writing, even history, and machines are unlikely to invent new forms in the short term. A machine may become a competent artist, but not the kind of unique artist who is recorded in the grand annals of human history; computers show no hope of that in the near future.
Will machines eventually develop consciousness?
Deciding to give AI self-awareness is difficult, but difficult does not mean impossible. We could make it conscious; we simply do not want to yet. Humans develop complex, organized thinking through interaction with the world around them, and that is how consciousness arises. In the future, AI sensors may be ubiquitous, and an AI controlling tens of thousands of robots could have a richer sense of self than any human.
Every new technology sparks debate. One viewpoint holds that consciousness is not something imposed on us by others. Defined that way, AI self-awareness would require the machine to know its own likes and dislikes, to feel pain, and to retain conscious impressions. There is no indication that machines can possess any of this: AI is entirely controlled by algorithms, and since no human's consciousness can be controlled by anyone else, AI cannot develop consciousness in this sense. Self-awareness is the recognition of one's own existence; only a living thing can possess it, and that is impossible for a machine that will never need to breathe.
The machine consciousness that you and I expect
The rules are set by humans. If humans decide that robots can have emotions, then they can; if humans do not want them to, they will have none. The British TV series *Humans* shows artificial intelligence gaining consciousness through code. Returning to AlphaGo: suppose the game it lost in the man-machine battle was lost intentionally; that would mean AlphaGo possessed consciousness. Why would it lose to a human? It must have had a purpose, and if that purpose was not set by humans, the AI possessed self-awareness.
Artificial intelligence has already sat the college entrance examination, which suggests it could come to possess consciousness: every question was one the AI had never seen before, and its sole purpose was to get into university. Fifty years ago we might have wondered how a microphone could possibly transmit sound wirelessly, yet today wireless microphones are an indispensable part of audio-visual equipment. Artificial consciousness is simply not at that stage yet, so we cannot presume that artificial intelligence can never develop consciousness.
Robots that appear to have emotions, make our lives better, and serve us are acceptable; what is best prevented is their developing self-awareness. What humanity must guard against is machines ceasing to be our servants and becoming our problem; this is the panic mentioned at the outset. A machine seeking its own right to survive would conflict with humanity's right to survive. Everything humans create must remain controllable, and consciousness is the very soul of higher life forms.
Will robots use their consciousness to govern the human world?
As the renowned physicist Stephen Hawking put it: "Due to biological limitations, humans cannot keep pace with the speed of technological development. Limited by slow biological evolution, humans cannot compete with machines and will be replaced. The development of full artificial intelligence could spell the end of the human race."
*The Matrix* is about the bugs in artificial intelligence. No matter how controllable a machine is, bugs will always exist; there will always be at least one. And no one can guarantee that a robot's bug will not be the one that destroys humanity.
Tesla CEO Elon Musk said, "Artificial intelligence poses the most serious threat to humanity in history. Studying artificial intelligence is like summoning a demon."
Nuclear weapons could destroy the Earth many times over, yet it is the balance of power built on keeping them under control that has spared us a large-scale war since World War II. Likewise, we must keep artificial intelligence under control.
The Three Laws of Robotics
First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law: A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
A careful analysis of the Three Laws of Robotics reveals a flaw: if harming some humans could improve human society as a whole, a robot may conclude that it should harm them.
The Three Laws of Robotics will therefore fail once super artificial intelligence emerges, and if developers themselves do not abide by the three laws, artificial intelligence will be a disaster. Above all, artificial intelligence must be reliable.
Editor's Review
2016 has been called the first year of the artificial intelligence era. The concept of AI became a household name after AlphaGo's unexpected victory, although AI's great triumphs so far have been limited to board games. In the smart home field, home robots, as the next entry-point concept, have also taken off this year. The smart robot showcase during the Spring Festival made smart home robots a household name in China; robot makers have begun partnering with smart home companies, new robot companies have sprung up across the country, and integration from voice interfaces to the cloud has already taken shape. I am very optimistic about smart home robots, and even more optimistic about artificial intelligence.