A brief overview of the future development trends of artificial intelligence

2026-04-06 04:44:46

In recent years, artificial intelligence (AI) technology has developed rapidly worldwide, becoming a significant force driving technological and industrial advancements and profoundly impacting economic and social development and the progress of human civilization. What is the current state of AI technology development? What are its applications? What are its future trends? Our reporter interviewed relevant experts to find out.

Artificial intelligence's ability to handle complex tasks has been greatly improved.

Currently, artificial intelligence technology has entered the practical application stage and is profoundly changing human production and life.

"In its nearly 70-year history, artificial intelligence has gone through three stages: instilling rules, instilling knowledge, and learning from data. The large-scale AI model technology that has developed rapidly around the world in recent years relies on basic models trained using 'big data + high computing power + strong algorithms,' which is a typical manifestation of the third stage of AI development," said Huang Tiejun, president of the Beijing Academy of Artificial Intelligence.

Currently, various large-scale artificial intelligence models are developing rapidly, and many high-tech companies around the world are investing in the construction of large-scale artificial intelligence models.

“A relatively mature technical framework has now formed around large AI models, but products and ecosystems are still under development,” said Zeng Dajun, deputy director and researcher at the Institute of Automation, Chinese Academy of Sciences. “Overall, large-model technology has developed more rapidly than any previous artificial intelligence technology, and its influence is unprecedented.”

The emergence of large-scale artificial intelligence models has opened up new possibilities for the realization of general artificial intelligence and greatly enhanced the ability of artificial intelligence to handle complex tasks.

"For example, chatbots based on large language models can achieve high-quality information integration, translation, and simple problem solving and planning," Zeng Dajun said. "These chatbots have attracted attention mainly because they already display some characteristics of general artificial intelligence, including fluent natural language generation, coverage of a full-domain knowledge system, general processing models across task scenarios, and smooth human-computer interaction interfaces."

However, the capabilities of large-scale AI models are still limited at present.

"On the one hand, due to the inherent structural and mechanistic vulnerabilities of large-scale AI models, there is a risk of malicious attacks; on the other hand, the knowledge representation and learning modes of large-scale AI models themselves still have defects, leading to common-sense errors and fabricated content in their answers," Zeng Dajun said. "AI scholars are working on tackling these problems."

Artificial intelligence is accelerating towards a new stage of comprehensive application.

"I'm a freshman majoring in computer science and I want to take an artificial intelligence course. What preparations do I need to make?" "You need to learn basic mathematics, programming languages, machine learning algorithms, and pay attention to technological trends..." This dialogue is not between a teacher and a student, but between a student and artificial intelligence.

In August 2023, Zhejiang University, in conjunction with Higher Education Press and other institutions, released the "Zhihai-Sanle" vertical education model. Based on corpora such as core textbooks, domain papers and dissertations, as well as professional instruction datasets, it can provide services such as intelligent question answering, test generation, learning navigation, and teaching evaluation. It has already been applied in many universities.

“We break these teaching materials down into sentences, paragraphs, and chapters to ‘feed’ the large model. From this high-quality corpus, the model learns the probabilistic relationships between words, which it can then use to guide and inspire students,” said Wu Fei, a professor at Zhejiang University.
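The preprocessing Wu Fei describes can be sketched as a simple chunking routine. This is a minimal illustration; the function names, size budget, and sentence-splitting heuristic are assumptions, not the actual Zhihai-Sanle pipeline:

```python
import re

def chunk_textbook(text, max_chars=500):
    """Split raw textbook text into paragraph- and sentence-level chunks.

    Paragraphs are separated by blank lines; long paragraphs are further
    split on sentence boundaries so each chunk fits a size budget.
    """
    chunks = []
    for para in re.split(r"\n\s*\n", text.strip()):
        para = " ".join(para.split())  # normalize internal whitespace
        if len(para) <= max_chars:
            chunks.append(para)
        else:
            sentences = re.split(r"(?<=[.!?])\s+", para)
            buf = ""
            for s in sentences:
                if buf and len(buf) + len(s) + 1 > max_chars:
                    chunks.append(buf)
                    buf = s
                else:
                    buf = f"{buf} {s}".strip()
            if buf:
                chunks.append(buf)
    return chunks

sample = ("Machine learning studies algorithms that improve with data.\n\n"
          "A model maps inputs to outputs. Training adjusts its parameters. "
          "Evaluation measures generalization.")
for c in chunk_textbook(sample, max_chars=60):
    print(c)
```

A real corpus-construction pipeline would also track chapter metadata and filter low-quality passages; the point here is only the sentence/paragraph decomposition.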

Industrial quality inspection, knowledge management, code generation, voice interaction… Currently, artificial intelligence in China is continuously deepening its application from single points to diversified applications, and from general scenarios to industry-specific scenarios, accelerating towards a new stage of comprehensive application. In particular, with breakthroughs in large-scale AI models and the rise of generative AI, AI is better able to handle complex problems in production and daily life, providing more advanced tools and means for various industries to achieve product and process innovation.

Predicting the path of a typhoon over the next 10 days used to require 5 hours of simulation on 3,000 servers. Now, based on the pre-trained Pangu meteorological model, a more accurate prediction can be obtained within 10 seconds. A set of ancient books with nearly 40 million words was identified, proofread, and published online by researchers using artificial intelligence in just over 3 months.

"Large-scale artificial intelligence models have driven the rapid development of the generative artificial intelligence industry, bringing tremendous innovation opportunities to many fields such as scientific exploration, technological research and development, artistic creation, and business management," said Wang Endong, an academician of the Chinese Academy of Engineering.

Driven by both supply and demand, technological innovations are beginning to move from laboratory research to industrial practice on a large scale, accelerating the industrialization of artificial intelligence. According to incomplete statistics, as of October 2023, China had released over 200 large-scale artificial intelligence models, with research institutions and enterprises becoming the main developers.

According to Shang Haifeng, president of Huawei Hybrid Cloud, innovative technologies, represented by artificial intelligence, are rapidly reshaping various industries.

Zhao Zhiyun, director of the New Generation Artificial Intelligence Development Research Center of the Ministry of Science and Technology, said: "Artificial intelligence technology is continuously evolving in the direction of pursuing higher precision, tackling more complex tasks, and expanding the boundaries of capabilities. Scenario innovation has become a new path for the upgrading of artificial intelligence technology and industrial growth."

Liu Jun, senior vice president of Inspur Information, believes that in the future, artificial intelligence needs to be further integrated into application scenarios and empower specific industrial processes. "This process is difficult for a single manufacturer to complete independently; it requires deeper collaboration across the industry chain and the innovation ecosystem," Liu Jun said.

More general artificial intelligence is expected to be realized.

Experts say that the third stage of artificial intelligence development, represented by large-scale AI models, will enjoy a long period of development dividends and become an important driving force for a new round of technological revolution and industrial transformation.

The Institute of Automation, Chinese Academy of Sciences, has made an assessment of the evolution of large-scale artificial intelligence models. Zeng Dajun shared his views: the application and innovation ecosystem is undergoing dramatic changes or at least has the potential for dramatic changes; large-scale artificial intelligence models are driving the rapid development of decision-making intelligence; the demand for miniaturization and domain specialization of large-scale artificial intelligence models is very urgent; and more general-purpose artificial intelligence is expected to be realized.

Zeng Dajun said, "A large AI model is like a prototype of the human brain. By 'feeding' it various kinds of data, it develops a range of intelligent capabilities. Large AI models are redefining the interaction between humans and computers and are expected to become the main interface for human-computer interaction."

Zeng Dajun emphasized the development of miniaturization and domain-specificity in large AI models. He stated that the computational and energy consumption challenges of existing large AI models will drive many efforts towards domain-specific, lightweight smaller models, or a hybrid of large and small models, particularly in fields such as finance, education, healthcare, and transportation, in an effort to reduce the cost of large models.
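The large-and-small hybrid Zeng Dajun mentions can be sketched as a confidence-based router: a cheap domain-specific model answers what it can, and only uncertain queries escalate to the expensive large model. All function names and the confidence heuristic below are illustrative assumptions, not a real API:

```python
# Hypothetical sketch of a large/small model hybrid. Both "models" are
# placeholders standing in for real inference calls.

def small_model(query: str) -> tuple[str, float]:
    """A lightweight domain model: returns (answer, confidence)."""
    faq = {"what is apr": "APR is the annualized cost of borrowing."}
    key = query.lower().strip(" ?")
    if key in faq:
        return faq[key], 0.95
    return "", 0.0

def large_model(query: str) -> str:
    """Stand-in for an expensive general-purpose large model."""
    return f"[large model] detailed answer to: {query}"

def route(query: str, threshold: float = 0.8) -> str:
    answer, confidence = small_model(query)
    # Escalate only when the small model is unsure, saving compute.
    if confidence >= threshold:
        return answer
    return large_model(query)

print(route("What is APR?"))              # answered by the small model
print(route("Compare two loan offers"))   # escalated to the large model
```

The design choice this illustrates is exactly the cost argument in the text: most domain queries never touch the large model, so its computational and energy costs are amortized over far fewer calls.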

Huang Tiejun believes that artificial intelligence will evolve from information intelligence to embodied, physical intelligence, with visual and embodied AI models being the next breakthrough. "Big data is an expression of the world, and the language-based cognitive models trained on it can support information services. Large language models can raise the intelligence level of physical systems such as autonomous vehicles and robots, but technologies for vision, hearing, embodiment, and interaction still need further development."

Huang Tiejun told reporters that current intelligent emergence is only static and does not yet possess the dynamic emergence capabilities of the human brain. "In the future, we hope to achieve truly dynamic emergence capabilities in artificial intelligence through brain-like intelligence."

In 2023, OpenAI, the developer of ChatGPT, came under unprecedented scrutiny, which also put the development of GPT-4's successors in the spotlight. According to sources, OpenAI is training a next-generation artificial intelligence, tentatively named "Q*" (pronounced "Q-star"). OpenAI's next-generation products may be released in the new year.

According to media reports, "Q*" may be the first artificial intelligence trained "from scratch." Its characteristics include intelligence not derived from data on human activity, and the ability to modify its own code to adapt to more complex learning tasks. The former makes the development of AI capabilities increasingly opaque, while the latter has long been considered a necessary condition for the emergence of the AI "singularity." In the field of AI development, the "singularity" specifically refers to a machine possessing the ability to self-iterate, leading to rapid development in a short period and exceeding human control.

While some reports suggest that "Q*" can currently only solve elementary school level math problems and is far from reaching a "singularity," given that the iteration speed of artificial intelligence in virtual environments may far exceed expectations, it is still possible that it will autonomously develop into AI capable of surpassing human levels in various fields in the not-too-distant future. In 2023, OpenAI predicted that artificial intelligence surpassing human levels in all aspects would emerge within ten years; Nvidia founder Jensen Huang stated that general artificial intelligence may surpass human levels within five years.

Once general artificial intelligence is realized, it can be used to attack a variety of complex scientific problems, such as the search for extraterrestrial life and habitable planets, the control of artificial nuclear fusion, the screening of nanomaterials or superconducting materials, and the development of anti-cancer drugs. These problems typically require decades of work by human researchers to find new solutions, and the research workload in some cutting-edge fields has already exceeded human limits. AI, however, has virtually unlimited time and energy in its own virtual world, making it a potential substitute for human researchers in tasks that are easily virtualized. But how humans will then supervise AIs that surpass human intelligence, to ensure they do not harm humanity, is another question worth considering.

Of course, we shouldn't overestimate the pronouncements of Silicon Valley giants: the history of AI development has seen three "AI winters," with many grand technological visions fading due to various limitations. What is certain is that large-model technology still has considerable room for improvement. Google's Gemini and Anthropic's Claude 2 currently rank just behind GPT-4 among large models, while domestically, Baidu's Wenxin Yiyan and Alibaba's Tongyi Qianwen are among the strongest Chinese-developed large models. Whether they will release even more revolutionary products in the new year remains to be seen.

Trend 1: Synthetic data breaks through the bottleneck of training data for artificial intelligence

The data bottleneck refers to the limited availability of high-quality data for training AI, and synthetic data holds the promise of breaking this bottleneck.

Synthetic data is data synthesized by machine learning models using mathematical and statistical principles, based on the imitation of real data. A simple analogy to understand synthetic data is this: it's like writing specialized textbooks for AI. For example, although the dialogues in an English textbook might use fictional names like "Xiaoming" and "Xiaohong," it doesn't prevent students from learning English. Therefore, in a sense, textbooks can be seen as a kind of compiled, filtered, and processed "synthetic data" for students.
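A toy sketch of the statistical-imitation idea (an assumed example, not any specific production pipeline): estimate the distribution of a small "real" dataset, then sample new synthetic records from the fitted distribution, so the synthetic rows mimic the real data without copying any individual record:

```python
import random
import statistics

# Illustrative synthetic-data generation: fit simple statistics to a
# small "real" dataset, then sample new records from the fitted
# distribution. All numbers here are made up for the example.

real_ages = [23, 25, 31, 35, 41, 44, 52, 58, 60, 61]
mu = statistics.mean(real_ages)       # mean of the real data (43)
sigma = statistics.stdev(real_ages)   # sample standard deviation

random.seed(0)  # reproducible sampling
synthetic_ages = [round(random.gauss(mu, sigma)) for _ in range(1000)]

print(f"real mean={mu:.1f}, synthetic mean={statistics.mean(synthetic_ages):.1f}")
```

Production-grade synthetic data uses far richer generative models than a single Gaussian, but the principle is the same: the synthetic set preserves the statistical shape of the original while containing no original records.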

Some papers suggest that a model needs at least 62 billion parameters before "chains of thought," i.e., the ability to perform step-by-step logical reasoning, emerge during training. However, the amount of high-quality, non-repetitive training data generated by humans is currently limited. Using generative AI such as ChatGPT to produce high-quality synthetic data in unprecedented quantities could enable future AI to achieve significantly higher performance.

Besides the demand for large amounts of high-quality data driving the popularity of synthetic data, data security considerations are also a significant factor. In recent years, countries have enacted stricter data security protection laws, making it objectively more cumbersome to train artificial intelligence using human-generated data. This data may not only contain personal information, but much of it is also protected by copyright. In the current context where internet privacy and copyright protection lack unified standards and a sound framework, using internet data for training can easily lead to numerous legal disputes. On the other hand, de-identifying this data presents challenges in terms of screening and identification accuracy. Faced with this dilemma, synthetic data becomes the most cost-effective and efficient option.
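To make the de-identification difficulty concrete, here is a minimal redaction sketch. The two patterns are illustrative assumptions; real pipelines need far broader coverage and exactly the screening and accuracy auditing the text describes:

```python
import re

# Minimal de-identification: replace a few recognizable identifiers
# with placeholder labels. Real systems must handle names, addresses,
# IDs, and many locale-specific formats these toy patterns miss.

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-\s]?\d{3,4}[-\s]?\d{4}\b"),
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact Alice at alice@example.com or 555-123-4567."))
```

Even this tiny example shows the accuracy problem: the patterns can over-match (redacting harmless numbers) or under-match (missing a phone number written in words), which is why pattern-based de-identification at internet scale is so costly to get right.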

Furthermore, training on human data can lead artificial intelligence to learn harmful content. Examples include methods for making bombs from everyday items or regulated chemicals, as well as bad habits AI shouldn't exhibit, such as slacking off during task execution like a human, lying to please users, and developing biases and discrimination. Using synthetic data instead, and minimizing AI exposure to harmful content during training, could overcome these drawbacks of training on human data.

The above analysis shows that synthetic data is quite groundbreaking and has the potential to solve the previous dilemma of balancing the development of artificial intelligence with data privacy protection. However, ensuring that relevant companies and institutions responsibly produce synthetic data, and creating synthetic data training sets that are both in line with China's own culture and values and comparable in scale and technology to those of Western countries that primarily use English-language online materials, will be a significant challenge for China.

In addition, a significant change brought about by synthetic data is that big data from human society may no longer be necessary for AI training. In the future digital world, the generation, storage, and use of human data will still follow the laws and order of human society, including maintaining national data security, protecting commercial data secrets, and respecting personal data privacy, while synthetic data required for AI training will be managed using a different set of standards.

Trend 2: Quantum computers may first be applied to artificial intelligence

As the most cutting-edge application of electronic computers today, artificial intelligence has always faced concerns about insufficient computing power. A few months after ChatGPT's launch, OpenAI CEO Sam Altman publicly stated that he was not encouraging more users to register. In November 2023, OpenAI even suspended new sign-ups for ChatGPT Plus paid subscriptions to ensure existing users had a high-quality experience. Clearly, even as the world's most powerful AI, ChatGPT has run into bottlenecks in computing power and other areas. Against this backdrop, applying quantum computers to artificial intelligence becomes a promising future solution.

First, most algorithms in artificial intelligence fall under the category of parallel computing. For example, when AlphaGo plays Go, it needs to evaluate the opponent's responses to stones placed at many different positions simultaneously and find the line most likely to win the game, which demands highly efficient parallel computation. Quantum computers excel at parallel computing because a register of qubits can represent the "0" and "1" states simultaneously, without the additional resources an electronic computer needs, such as adding more computing units or splitting a task across them. The more complex the computational task, the greater the advantage of quantum computing.
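This parallelism can be made concrete with a tiny classical simulation of a quantum register (a sketch for intuition only): n qubits in uniform superposition carry amplitudes over all 2**n basis states at once, so a single operation on the state acts on every branch simultaneously.

```python
import math

# Classical simulation of an n-qubit uniform superposition. Note that
# simulating it classically costs exponential memory; that cost is
# precisely what real quantum hardware avoids.

def uniform_superposition(n_qubits: int) -> list:
    """Amplitudes of the equal superposition over all 2**n basis states."""
    dim = 2 ** n_qubits
    amp = 1 / math.sqrt(dim)
    return [amp] * dim

state = uniform_superposition(4)
print(len(state))                                   # 16 basis states tracked at once
print(abs(sum(a * a for a in state) - 1.0) < 1e-9)  # probabilities sum to 1
```

A real quantum algorithm would then use interference to amplify the amplitude of the desired branch; the sketch only shows why all branches are carried at once rather than enumerated one by one.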

While generative artificial intelligence may eliminate a number of traditional digital jobs, it also opens a window of opportunity: "no-code software development." Programming aids based on large AI models have reached a new stage, capable of generating software or web-page code from users' very vague instructions. In the 2023 GPT-4 demonstration, for example, the demonstrator simply hand-drew a rough structural diagram on an A4 sheet of paper, and GPT-4 automatically generated a working web page from it. This undoubtedly lowers the barrier to entry for developing IT services. Anyone with a sufficiently creative digital-service "idea" that meets a widespread need can now take part, making this a hotbed of internet innovation; the era of "innovation for everyone" has arrived.

In response, the government needs to change its mindset and balance market regulation with the promotion of innovation. On the one hand, it should lower the registration and financing thresholds in the digital innovation process, address the pain points in the development and growth of SMEs, and make employment and innovation policies adapt to the new demand that "everyone can innovate." On the other hand, it needs to explore new copyright and patent protection policies that are more conducive to protecting innovation, thereby incentivizing those talents who can continuously propose innovations.

In conclusion, looking ahead to 2024, whether it's the iterative development of AI technology itself, its reshaping of data value, or its penetration into various industries and fields, the impact of AI is ubiquitous. It empowers scientific research, innovation, and the economy while also bringing new challenges and risks. We should view the many changes brought about by AI with an open mind and carefully study and address the new issues and risks it may bring.
