
The overall status and trends of enterprise artificial intelligence application development


Even today, artificial intelligence (AI) technologies and algorithms still face numerous challenges in practical application. Looking ahead, enabling businesses to effectively utilize and sustain AI applications is a crucial aspect of the successful industrial application of AI technologies and algorithms.

The research team of Liu He, Academician of the Chinese Academy of Engineering, published an article entitled "The Current Status and Challenges of Artificial Intelligence Applications in Chinese Enterprises" in Issue 6 (2022) of *Engineering Science*, the journal of the Chinese Academy of Engineering. Focusing on the healthy and sustainable development of China's AI industry, the article starts from practical cases of AI application implementation in enterprises, outlines the current state of the industry, analyzes development challenges, explores their root causes, and proposes coping strategies. The complexity of enterprise AI implementation spans multiple dimensions, including business needs, data, algorithms, infrastructure, and supporting solutions, and application maturity depends on the level of data preparation and governance. At the national level, the article argues, it is necessary to build a more favorable AI industry ecosystem to promote the coordinated development of the entire AI industry chain; to support AI industry R&D with stronger and more concrete measures, especially for full-stack AI, AI basic platforms and tool systems, and core AI technologies, so as to improve China's independent control over core AI technologies; and to encourage enterprises to actively pursue digital transformation and adopt AI for intelligent upgrading, forming a strong coupling and two-way cycle between AI industry R&D and enterprise AI implementation innovation.

I. Introduction

The world is currently undergoing a transformation driven by digital technologies, with artificial intelligence (AI), big data, the Internet of Things (IoT), cloud computing, and 5G mobile communications reshaping the innovation landscape and the division of labor along the industrial chain. China attaches great importance to this trend and has released the "New Generation Artificial Intelligence Development Plan" (2017), aiming to promote the rapid development of AI technology and industry, accelerate its penetration into more application areas, further expand the scale of related industries, support the deep integration of AI and the real economy, build a digital China and a smart society, promote the digitalization of industries and the industrialization of digital technologies, and form internationally competitive digital industry clusters. The booming AI industry has created opportunities for industry-academia-research cooperation in the AI field, and has also raised new requirements for technological innovation and practical application.

AI, the use of computers to represent and reproduce human intelligence, has undergone decades of development with repeated ups and downs. Since 2000, statistical learning methods, represented by deep learning, have significantly enhanced the versatility of AI algorithms, improved performance on various basic tasks, and created opportunities to apply AI algorithms to a large number of practical problems. Over the past decade, AI technology innovation has profoundly changed social production and lifestyles, gradually giving rise to the AI industry. Even today, however, AI technologies and algorithms still face various difficulties in practical application, prompting us to ask: to truly enable enterprises to use AI effectively and sustainably, what issues should academia and industry focus on, and how should they collaborate to implement algorithms effectively?

This article focuses on the implementation and sustainable development of AI applications in Chinese enterprises, proposing ideas on the challenges involved and their potential solutions. Starting from real-world examples of AI applications in production, it reviews the current state of industry development and existing problems, analyzes the urgent challenges facing the AI industry, and explores their causes and coping strategies, aiming to provide a basic reference for the healthy development of China's AI industry.

II. Overall Status and Trends of Artificial Intelligence Development

(I) Development Trends of Artificial Intelligence Technology

Since the concept was first proposed in 1956, the development of AI has gone through three waves.

① The first wave (1956-1974) was the birth period of AI and focused on giving machines logical reasoning ability. Computers of this era could solve algebra word problems, prove geometric theorems, and run programs that learned and used English; the first perceptron-based neural network programs and chat (instant messaging) programs were also developed.

② The second wave (1980-1987) saw AI become more practical, with AI programs (“expert systems”) capable of solving specific problems in certain fields beginning to be used by enterprises; knowledge base systems and knowledge engineering became the main research directions at that time.

③ The third wave (1993 to present): fundamental obstacles to computing performance have been gradually overcome, and the "intelligent agent" paradigm has gained widespread acceptance; theoretical breakthroughs in deep learning have enabled AI technology development to transcend the scope of research on human intelligence. Emerging technologies such as the Internet, cloud computing, big data, chips, and the Internet of Things have provided ample data and computing power support for the development of AI technology, and business innovation models represented by "AI+" have gradually matured. The top ten hottest research topics in the field of AI from 2011 to 2020, selected from global academic databases, include: deep neural networks, feature extraction, image classification, object detection, semantic segmentation, representation learning, generative adversarial networks, semantic networks, collaborative filtering, and machine translation.

The AI industry can be divided into three layers: the foundational layer, the technology layer, and the application layer. The foundational layer mainly refers to computing platforms and data centers, such as cloud computing platforms, high-performance computing chips, and data resources, which provide data and computing power for AI applications. The technology layer refers to AI algorithms and technologies, the core components of the AI industry. The application layer refers to products and services developed for specific industries and scenarios, a natural extension of the AI industry. Many factors constrain the development of the AI industry, such as data quality, generalization ability, and interpretability. This study argues that the practical application of AI in enterprises is a complex systems engineering project, with interpretability being the primary constraint. In traditional machine learning, feature extraction is performed manually by experts or experienced personnel, embedding a large amount of professional knowledge; although the extracted features may not be comprehensive, their excellent interpretability makes them highly regarded in academia. In deep learning, feature extraction is completed automatically by neural networks with little human intervention and no need for expert involvement, and the extracted features are usually more comprehensive. However, neural network models based on deep learning cannot provide reasonable interpretations of their features; if AI cannot capture the deep logic underlying knowledge or behavior, an existing model is prone to collapse when confronted with unknown variables.
Therefore, the key to the successful application of AI in enterprises lies in integrating expert knowledge accumulated over long periods into the AI model, thereby constructing a neural network model that fuses knowledge and data and improving the model's interpretability and generalization ability.
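The fusion of knowledge and data described above can be sketched minimally. In the toy model below, every feature definition and weight is hypothetical; the point is only that interpretable expert features and stand-ins for learned features feed one shared scoring head:

```python
def expert_features(sample):
    """Hand-crafted, interpretable features encoding domain knowledge."""
    pressure, flow = sample
    return [pressure / max(flow, 1e-6),  # a known physical ratio
            pressure - flow]             # a known diagnostic difference

def learned_features(sample, weights):
    """Stand-in for features a neural network would extract from data."""
    return [sum(w * x for w, x in zip(row, sample)) for row in weights]

def fused_score(sample, weights, head):
    """One linear head over the concatenation of both feature groups."""
    feats = expert_features(sample) + learned_features(sample, weights)
    return sum(h * f for h, f in zip(head, feats))

sample = (10.0, 2.0)                 # e.g. (pressure, flow), made up
weights = [[0.5, -0.2], [0.1, 0.3]]  # hypothetical "learned" weights
head = [0.4, 0.1, 0.2, 0.3]          # hypothetical head weights
score = fused_score(sample, weights, head)
```

In a real system the learned features would come from a trained network, but the expert features remain inspectable on their own, which is the interpretability benefit the text describes.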

(II) Domestic and International Progress in Artificial Intelligence

The economic and social significance of AI development is becoming a consensus across most countries, and a series of AI development strategies and plans have been proposed and rapidly updated. Countries are actively establishing AI research institutions, guiding R&D investment, emphasizing talent cultivation, building computing power and data infrastructure, improving platform and resource utilization, and formulating industry application policies and regulations to create a favorable environment for AI application development. For example, the U.S. National Science and Technology Council's "National Artificial Intelligence Research and Development Strategic Plan: 2019 Update" highlights AI R&D investment priorities and focuses on the rapid development of cutting-edge fields. The European Commission adopted its proposal for the Artificial Intelligence Act (2021), aiming to make the EU a global hub for trustworthy AI and to strengthen AI use, investment, and innovation within the EU.

In China, AI technology continues to innovate and achieve breakthroughs, AI resources are being optimized and integrated, and the structure of the AI industry is gradually improving.

① In recent years, China has ranked first worldwide in the number of AI-related patent applications, with growth significantly faster than that of other countries; the number of papers published by Chinese scholars at top international conferences and in leading journals has steadily increased, making China an important contributor to the AI community.

② The establishment of AI computing and innovation centers provides basic data resources, computing resources, and key algorithm services to meet industry needs, significantly reducing R&D costs and entry barriers for small and medium-sized enterprises and driving industry-academia-research collaborative innovation in AI; the availability of domestic open-source AI frameworks and software continues to grow, and frameworks such as MindSpore (from the Ascend ecosystem) and PaddlePaddle are accelerating technological innovation in the industry.

③ AI is powering enterprises' digital transformation and upgrading, and the intelligent industry is accelerating its innovative development: intelligent robots, smart cities, intelligent transportation, and intelligent manufacturing are all advancing rapidly. The in-depth application of AI across industries has brought practical value to many enterprises. A review of hundreds of enterprise projects found that nearly 30% of AI algorithms enter the core production system, and that introducing AI into the core production system improves efficiency by more than 18% on average.

III. Typical Cases of Artificial Intelligence Applications in Chinese Enterprises

In China, as related technologies continue to evolve, AI algorithms are gradually moving from experimental verification to enterprise implementation. For example, AI companies represented by Huawei Cloud Computing Technology Co., Ltd. have applied AI to vertical sectors such as energy (e.g., intelligent exploration, production inspection, smart gas stations), urban governance (e.g., smart office/service, intelligent government hotlines), transportation (e.g., smart highways, intelligent connectivity, urban traffic management), industrial manufacturing (e.g., process optimization, optimization control, industrial vision), pharmaceuticals and healthcare (e.g., medical imaging, clinical assistance, drug development), and banking and finance (e.g., intelligent risk control, intelligent marketing, intelligent dual recording). Whether in enterprise-oriented services such as transportation and industrial manufacturing, or individual-facing applications such as smart healthcare and the internet, AI algorithms are entering or have already entered core production processes. While creating value, they also generate new problems, which in turn feed back into and drive AI algorithm research and development.

(I) Intelligent Oil and Gas Exploration

Intelligent oil and gas exploration is one of the key application scenarios for AI algorithms in the energy industry, and it can be argued that AI algorithms have profoundly changed this industry. In the context of intelligentization and the goals of carbon peaking and carbon neutrality, the intelligent transformation of the oil exploration and development sector is a crucial and unavoidable development model for the energy industry. This is because, with the continuous deterioration of oil resource quality, the difficulty of oil and gas resource extraction continues to increase, and extraction costs are rising year by year. Abandoning the old view of "reserves as the priority" and establishing a new concept of "data as the priority," increasing data mining efforts and enhancing data value are new challenges facing oil and gas exploration and development. Data, computing power, algorithms, and scenarios are integrated to jointly drive the "point-to-area" application of AI technology in the oil exploration and development field.

In recent years, AI technology has been gradually applied to various aspects of oil exploration and development, including sedimentary reservoir research, well logging interpretation, geophysical processing, drilling and completion, and reservoir engineering, and has made certain progress.

① In sedimentary reservoir research, some scholars have achieved accurate quantitative characterization of sedimentary reservoirs through intelligent analysis of core images; industrial application results are good and the application potential is outstanding.

② In terms of well logging, some petroleum companies and research institutions have completed basic research and preliminary applications in areas such as curve reconstruction, lithology identification, reservoir parameter prediction, oil, gas and water layer identification, intelligent stratification, and imaging logging by utilizing machine learning and deep learning.

③ In geophysical exploration, computer vision technologies such as target detection, image segmentation, and image classification have been used to realize seismic data processing and interpretation, including structural interpretation, seismic facies identification, seismic wavefield forward modeling, seismic inversion, first arrival picking, seismic data reconstruction and interpolation, and seismic attribute analysis.

④ In drilling, AI applications are mainly reflected in intelligent wellbore trajectory optimization, intelligent directional drilling, and intelligent drilling-rate optimization.

⑤ In reservoir engineering, the use of precise stratified water injection "hard data" has enabled data-driven intelligent water injection optimization for oil, gas and water wells, effectively improving recovery rates. Furthermore, preliminary results have been achieved in applications such as production prediction based on recurrent neural networks and intelligent optimization of production measures.

Specifically, intelligent thin-section rock identification is a representative case of computer vision technology applied to core analysis. Traditional thin-section identification relies on visual observation under a microscope by rock and mineral experts, which suffers from low efficiency, insufficient personnel, and difficulty in accurately quantifying identification results. Intelligent thin-section rock identification collects and labels massive amounts of thin-section images to establish a label sample library required for deep learning; it uses computer vision technology to construct intelligent models for rock particle segmentation, mineral composition identification, pore structure analysis, and rock structure analysis, making the identification results more intuitive, quantitative, and accurate (see Figure 1). Intelligent stratified water injection is another typical case of AI application in oil and gas field development. As the fourth generation of intelligent stratified water injection technology, it utilizes "hard data" from water injection development, combined with computational geometry, morphology, and other technologies to optimize production parameters, achieving data-driven intelligent water injection optimization for oil, gas, and water wells; the resulting water-driven reservoir flow field recognition technology enables the evaluation, analysis, real-time monitoring, and intelligent control of water injection effects, providing reliable decision-making basis for water injection optimization, well network layer adjustment, and deep profile modification, significantly improving recovery rates.

Figure 1. Partial results of intelligent identification of rock thin sections.
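As a minimal illustration of how a segmentation model's output becomes the quantitative identification result described above, the sketch below converts a per-pixel segmentation mask into mineral composition percentages. The class labels and mask values are invented for the example and do not reflect the actual system:

```python
from collections import Counter

# Assumed (hypothetical) mapping from segmentation class IDs to minerals
LABELS = {0: "pore", 1: "quartz", 2: "feldspar"}

def composition(mask):
    """Percentage of each class in a 2-D segmentation mask."""
    counts = Counter(px for row in mask for px in row)
    total = sum(counts.values())
    return {LABELS[k]: 100.0 * v / total for k, v in counts.items()}

# Tiny invented mask standing in for a segmented thin-section image
mask = [[1, 1, 0],
        [2, 1, 0],
        [1, 2, 0]]
report = composition(mask)
```

The same counting step, run over a particle-segmentation output, is what makes the identification result "quantitative" rather than a qualitative visual judgment.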

During the 13th Five-Year Plan period, China National Petroleum Corporation (CNPC) built a cognitive computing platform, the corresponding functional architecture of which is shown in Figure 2. The platform enables applications in several key scenarios, such as automatic acquisition of first arrival waves from seismic formations, automatic interpretation of seismic horizons, identification of oil and gas layers in well logging, diagnosis of pumping unit well conditions, and prediction of single-well production and water-cut patterns. Pilot applications have been conducted in companies such as CNPC's Southwest Oil and Gas Field Branch and Changqing Oilfield Branch, achieving good results. For example, intelligent interpretation of seismic horizons reduced the workload of approximately one month required for manual interpretation to within one week, and intelligent identification of oil and gas layers in well logging improved the interpretation accuracy rate by 10 percentage points and reduced the interpretation time for a single well from one day to 10 minutes.

Figure 2 Functional Architecture of China Petroleum Cognitive Computing Platform

Note: GPU stands for Graphics Processing Unit; FPGA stands for Field Programmable Gate Array; CPU stands for Central Processing Unit.

It should also be noted that while the application of AI technology in oil exploration and development is steadily progressing and making new strides, its progress is still relatively slow compared with general technology fields such as image recognition, motion recognition, and intelligent image editing. A significant gap remains between technological research and practical application, and the field is still at a stage where application is "easier said than done." Scenarios in oil exploration and development are complex, requiring multi-dimensional analysis with methods such as well logging, seismic analysis, and remote sensing; the task has the character of systems engineering and cannot be solved by data-driven models alone. By contrast, business scenarios in general technology fields are relatively simple and what-you-see-is-what-you-get, and the objects to be identified are everyday items. Furthermore, the oil exploration and development field faces severe data quality issues: the data used for modeling is often processed and interpreted, resulting in small sample sizes and data distortion, whereas data in general technology fields is mostly big data collected in real time, and various public datasets provide rich data sources for modeling. In terms of the R&D ecosystem, specific fields such as oil exploration and development are relatively weak: although open-source AI models and code have provided rich pre-trained models in recent years, most are built for general technology fields and are difficult to transfer directly to the oil field.

(II) Smart City Governance

Cities are an important symbol of human civilization, a product of industrial and commercial development, and represent humanity's pursuit of a higher quality of life. The digital age has laid the foundation for the digitization of all objects, services, and spaces within cities. However, the ever-increasing number of entities and elements, along with the fast-paced lifestyle, places higher demands on urban governance. "Smart city" systems, with AI algorithms at their core and supported by large-scale computing power and big data technology, can effectively connect urban elements such as buildings, schools, hospitals, and public transportation, enabling intelligent decision-making in various urban scenarios. The fundamental strategy of smart cities is to transform data into a resource for urban governance, and then apply this resource to empower grassroots governance.

Intelligent government service hotlines and urban traffic management are typical smart-city application cases.

① The continuous expansion of the traditional 12345 hotline's service scope has led to increased pressure on customer service personnel and decreased service efficiency. The intelligent hotline solution extracts data from government regulations, documents, internal and external notices, and work orders to establish a knowledge graph including event, time, location, responsible party, and solution. Utilizing related algorithms from the knowledge graph, it intelligently identifies residents' call intentions, assisting human operators in efficient communication with residents. By analyzing historical requests and handling records, it intelligently dispatches work orders, forming a hotline service model that is "more accurate in receiving calls, faster in dispatching, and more effective in handling them." In pilot cities, overall work order dispatch efficiency has increased by over 50%, work order processing time has been shortened by more than 20%, and satisfaction with work order completion has been significantly improved.

② Morning and evening rush hours pose a significant challenge to urban traffic. To alleviate scheduling difficulties caused by increased road network pressure, the intelligent traffic control scheme utilizes structured data parsed from electronic police cameras to analyze traffic flow patterns at various intersections and combines this with traffic police experience to model timing. Based on constructing objective functions such as intersection traffic flow and green light usage, the mathematical description of traffic knowledge is integrated into the optimal traffic control timing modeling process as constraints. In pilot cities, this has resulted in an 18% increase in average vehicle speed on main roads, a 20% decrease in road queue overflows, and the morning rush hour dissipating 10-15 minutes earlier, significantly alleviating traffic congestion during morning and evening rush hours.
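The knowledge-graph-driven dispatch in ① can be sketched as a lookup over event triples. The schema and entries below are hypothetical, not the actual hotline's data model; a real system would match intent with NLP rather than substring search:

```python
KNOWLEDGE_GRAPH = [
    # (event, responsible party, solution) -- illustrative entries only
    ("noise complaint", "environment bureau", "dispatch inspection"),
    ("water outage", "water utility", "open repair ticket"),
]

def dispatch(transcript):
    """Route a call by matching its text against known event triples."""
    for event, party, solution in KNOWLEDGE_GRAPH:
        if event in transcript.lower():
            return {"event": event, "assign_to": party, "action": solution}
    # no match: keep the human operator in the loop
    return {"event": None, "assign_to": "human operator", "action": "escalate"}

order = dispatch("Resident reports a water outage on Main Street")
```

The fallback branch mirrors the text's point that the system assists, rather than replaces, human operators.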
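The timing optimization in ② can be hinted at with a deliberately simplified sketch: green time is allocated in proportion to observed flow, subject to a minimum-green constraint standing in for the traffic-knowledge constraints mentioned above. All numbers are illustrative, not from the pilot cities:

```python
def allocate_green(flows, cycle=120.0, min_green=10.0):
    """Split a signal cycle among approaches, honouring a minimum green."""
    n = len(flows)
    spare = cycle - n * min_green        # time left after minimum greens
    total = sum(flows)
    return [min_green + spare * f / total for f in flows]

# vehicles/hour per approach (invented figures)
greens = allocate_green([600, 300, 100])
assert abs(sum(greens) - 120.0) < 1e-9   # the cycle length is preserved
```

A production system would optimize an explicit objective (e.g., delay or queue overflow) under many such constraints; this sketch only shows how constraints enter the allocation.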

Smart city governance presents a significant challenge for AI algorithms. The complex and rapidly changing scenarios of urban governance place higher demands on advanced metrics of AI algorithms, such as versatility and transferability. Urban governance requires the integration of data from multiple modalities, including images, language, and speech, aligning with the future trend of AI algorithms moving towards multimodal applications. However, current AI algorithms still lag behind in terms of flexibility and robustness for practical application, often requiring targeted development to adapt to different scenarios. This results in high development and application costs, making it difficult to scale up and standardize their application in other cities.

IV. Analysis of Several Key Elements Facilitating Enterprise AI Applications Based on Case Studies

(I) Scientific Research and Demand-Driven Approaches Are Equally Important in Artificial Intelligence

As a general-purpose technology, AI has yielded novel and positive results in enterprise applications, but it also faces numerous problems and challenges. Currently, AI algorithms lack standardization and development efficiency is low because of significant differences in data modalities, data volume and quality, and hardware environments across different industries and enterprises. Under traditional development models, AI algorithms have poor transferability between scenarios, requiring developers to design neural network structures and execute specific optimization and deployment processes for different scenarios. The lack of a unified and standardized development model also means that developed algorithms often fail to meet requirements for generalization, robustness, security, and interpretability. Solving these problems requires a simultaneous approach from both scientific research and enterprise application implementation.

1. Iterate and evolve AI algorithms based on enterprise technology needs.

From the perspective of training data, the vast majority of data in the era of big data lacks labels. For example, less than 1% of data obtained from the public internet carries semantic labels. To fully utilize this data and improve the quality of large models, efficient self-supervised learning algorithms are needed. Although academia has proposed self-supervised learning algorithms for specific modalities such as images and text, their training efficiency is limited and the capabilities of the trained models are difficult to evaluate. Differences between data modalities also make it difficult to align semantic knowledge learned from images and text within the same semantic space. It is necessary to design cross-modal model training and evaluation schemes and to move away from the blind pursuit of model size, so that large models can be evaluated on their own understanding of the data rather than on downstream tasks; only then can the model-production bottleneck in AI implementation be resolved.
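The core idea of self-supervised learning mentioned above, deriving labels from the data itself, can be shown with a toy masked-prediction task. The mean-of-context "model" is a deliberate stand-in for a real network; everything here is illustrative:

```python
import random

def make_task(seq, rng):
    """Mask one position; the hidden value becomes a free label."""
    i = rng.randrange(len(seq))
    context = [v for j, v in enumerate(seq) if j != i]
    return context, seq[i]

def baseline_predict(context):
    """Simplest possible 'model': predict the mean of the context."""
    return sum(context) / len(context)

rng = random.Random(0)
context, target = make_task([2.0, 4.0, 6.0], rng)
error = abs(baseline_predict(context) - target)
```

No human labeling is involved: the supervision signal (the masked value) comes entirely from the raw sequence, which is why such objectives scale to the unlabeled majority of data.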

From a model perspective, a major obstacle to deep learning lies in its generalization ability and interpretability. When a model encounters test samples outside the training data distribution range, its performance drops significantly. Furthermore, neural network models typically cannot explain their own reasoning processes, leading professionals to be skeptical of the model's predictions and limiting its application in real-world production scenarios. While research into techniques such as transfer learning, adversarial attacks and defenses, and activation unit visualization has made some progress, it is still far from achieving robust, generalizable, and interpretable AI algorithms.
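One pragmatic response to the distribution-shift problem described above is to detect when an input falls outside the training distribution and defer to a human. The range-based guard below is a simplified, hypothetical version of such a check (real systems use richer density or uncertainty estimates):

```python
def fit_range(train_inputs):
    """Record the input range the model was trained on."""
    return min(train_inputs), max(train_inputs)

def guarded_predict(model, x, bounds, margin=0.1):
    """Return None (defer to a human) for inputs far outside training data."""
    lo, hi = bounds
    span = hi - lo
    if x < lo - margin * span or x > hi + margin * span:
        return None
    return model(x)

bounds = fit_range([1.0, 2.0, 3.0])
result = guarded_predict(lambda x: 2 * x, 10.0, bounds)  # far out of range
```

Deferring on out-of-distribution inputs does not make the model generalize better, but it addresses the trust issue raised above by making the model's scope of validity explicit.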

Encouraging interdisciplinary research and promoting the integration of AI with other disciplines will foster synergistic development through mutual reinforcement. Applying AI algorithms to cutting-edge research in other disciplines (such as medical imaging, biopharmaceuticals, and scientific computing) holds immense potential, but it also places higher demands on these algorithms. For example, scientific computing differs from traditional AI applications; AI algorithm design faces substantial challenges such as the completeness and sufficiency of input data, and the rationality of the computational architecture, resulting in extremely high complexity of the solution space. Therefore, AI algorithms need to possess the ability to handle uncertain data and be combined with algorithms such as knowledge computing and automated machine learning to assist scientists across various disciplines in exploring unknown areas.

2. Organically integrate scenarios, algorithms, and platforms in the technical system and architecture design.

Deep learning-based AI algorithms require substantial computing resources, making training costly. In their technology systems and architecture design, enterprises should seek organic integration with the AI industry: deploying large models in the cloud avoids redundant training of models on general-purpose data, and, coupled with efficient toolchains, large models can be validated on private data at lower cost, facilitating deployment to edge devices and reducing overall usage costs. This development approach will allow more enterprises to benefit from AI algorithms.
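The pattern described above, a frozen cloud-trained base plus a small component tuned on private data, can be sketched as follows. The "base model" here is a trivial stand-in for a real pre-trained network, and the dataset is invented:

```python
def base_model(x):
    """Frozen 'feature extractor' (trivial stand-in): value plus a bias feature."""
    return [x, 1.0]

def tune_head(samples, lr=0.1, steps=200):
    """Fit a small linear head on the frozen base features by SGD."""
    w = [0.0, 0.0]
    for _ in range(steps):
        for x, y in samples:
            feats = base_model(x)
            err = sum(wi * f for wi, f in zip(w, feats)) - y
            w = [wi - lr * err * f for wi, f in zip(w, feats)]
    return w

# tiny "private" dataset following y = 3x (illustrative)
head = tune_head([(1.0, 3.0), (-1.0, -3.0)])
pred = sum(wi * f for wi, f in zip(head, base_model(1.5)))
```

Only the head is trained, which is why validating a large cloud model on private data can be far cheaper than training from scratch.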

As an emerging interdisciplinary field, AI's key foundations stem from specific subfields of computer science. Implementing AI algorithms in enterprise applications requires close collaboration among different types of personnel: algorithm developers and architects with professional-level AI skills, product managers who understand the fundamental logic of AI, and enterprise technical personnel who embrace the concept of AI applications. AI training should be encouraged to be integrated into enterprises, using real-world case studies to enhance the understanding of AI among enterprise technical and management personnel, thereby creating a positive collaborative environment.

(II) Promoting the Implementation of Enterprise Artificial Intelligence Applications with a Systemic Approach

Currently, levels of understanding of AI vary greatly among enterprises. Many companies even believe that "AI is just algorithms," neglecting infrastructure and supporting solutions. This leads to a severe underestimation of the complexity of AI application implementation, often resulting in poor project outcomes or outright failure. Real-world scenarios involve variable demands and diverse data, and often require combining multiple models, guided by the industry knowledge of domain experts, to integrate seamlessly with the enterprise's business production systems. When implementing AI applications, enterprises need to adopt a systems thinking approach, comprehensively considering factors across point-and-area, top-down and bottom-up, and horizontal and vertical dimensions to ensure "clear pain points, reasonable steps, and implementation within capabilities." Adhering to systems thinking in both the development and application phases is crucial for the collaborative progress of the AI industry and enterprises.

The entire AI application development process is a systematic project. Its essence is to engineer the algorithm development pipeline, creating application scenarios that reduce costs and increase efficiency. Through close cooperation among sub-processes, a small closed loop is formed (see Figure 3). Specifically: data labeling is used to label datasets to support subsequent algorithm training; data processing cleans and organizes abnormal data, increasing the data volume through data augmentation and refinement; algorithm training develops new algorithms or selects existing algorithms, using data to train and build models; model evaluation evaluates models based on test datasets, repeatedly fine-tuning them; application generation logically arranges multiple models, specifying input/output paths; application evaluation conducts further evaluation based on business datasets; and deployment and inference include application deployment, monitoring, and maintenance. For example, when an anomaly is detected during monitoring and maintenance, the system will feed the data back to the data labeling stage, label difficult examples, supplement the abnormal data, and re-initiate algorithm training.
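The small closed loop of Figure 3 can be expressed as control flow. Each stage below is a one-line stand-in for the real sub-process; only the feedback structure, failed evaluation feeding hard examples back to labeling and retraining, mirrors the text:

```python
def label(data):
    """Data labelling stage (stand-in): attach sign labels."""
    return [(x, x > 0) for x in data]

def train(labelled):
    """Algorithm training stage (stand-in): returns a fixed rule."""
    return lambda x: x > 0

def evaluate(model, tests):
    """Model/application evaluation stage."""
    return all(model(x) == y for x, y in tests)

def pipeline(data, tests, max_rounds=3):
    """Run the loop: on failure, feed hard examples back to labelling."""
    model = None
    for _ in range(max_rounds):
        model = train(label(data))
        if evaluate(model, tests):
            break
        data = data + [x for x, _ in tests]  # feedback to data labelling
    return model

model = pipeline([1, -2, 3], [(5, True), (-1, False)])
```

In a real MLOps setup each stage is a service with its own artifacts; the loop structure is what turns isolated steps into the "small closed loop" of the text.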

The integration of AI into core enterprise production environments is a systematic project, with the most crucial element being the combination of industry-specific mechanistic models and AI models. Various industries have accumulated long-term expertise, resulting in numerous mechanistic models encompassing knowledge from physics, chemistry, biology, and other fields. To facilitate application across different industries, AI algorithms should incorporate industry knowledge and enterprise-specific data, transforming general-purpose algorithms into specialized ones. This process should bridge the domain gap between general-purpose and industry-specific data, supporting the integration of large models with various industry knowledge bases and reducing the cost of fine-tuning large models across different industries.
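One common way to combine a mechanistic model with an AI model, not necessarily the article's specific method, is residual correction: the first-principles model provides the base estimate and a data-driven term corrects its systematic error. The linear forms below are purely illustrative:

```python
def mechanistic(temperature):
    """First-principles estimate (an invented linear law for illustration)."""
    return 2.0 * temperature + 5.0

def learned_residual(temperature, a=0.1, b=-0.3):
    """Data-driven correction, as if fitted to observed prediction errors."""
    return a * temperature + b

def hybrid(temperature):
    """Physics estimate plus learned correction of its residual."""
    return mechanistic(temperature) + learned_residual(temperature)

estimate = hybrid(20.0)
```

Because the mechanistic part stays explicit, domain experts can still audit the physics, while the learned term absorbs only what the mechanism fails to explain.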

Applying systems engineering methodologies, we conduct requirements decomposition, architecture design, system development, and integration verification to form an end-to-end solution. Taking an industrial optimization control project implemented at a company as an example, the AI system relies on a cloud computing platform and exposes service interfaces externally through containerized deployment. Edge applications handle protocol conversion of the data interfaces of production operation systems in the factory environment, thereby establishing data and command interaction between the AI system and the legacy systems there. Systems engineering reliability design methods are adopted to ensure the security and stability of data channels, guaranteeing the company's safe production. Given the complex and ever-changing conditions in industrial production, data-driven AI models may encounter previously unseen conditions, so feedback loops must be established through edge-cloud collaborative system capabilities to monitor and iterate on abnormal results from AI applications. This combination of AI application systems and enterprise business systems can be seen as a large-scale closed loop integrating AI with core production systems.
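The edge-side role described above can be illustrated with a minimal sketch. All class and method names are hypothetical; the point is only the pattern: convert a legacy plant record into a uniform format, call the (cloud-hosted) inference service, and queue anomalous results for the monitoring-and-iteration feedback loop.

```python
# Illustrative sketch (hypothetical classes) of the edge-cloud pattern in the
# text: protocol conversion at the edge, inference in the cloud, anomalies
# queued for model iteration.

class EdgeAdapter:
    """Bridges a legacy plant system and the containerized AI service."""

    def __init__(self, ai_service, anomaly_limit):
        self.ai = ai_service
        self.limit = anomaly_limit
        self.feedback_queue = []        # anomalies sent back for iteration

    def to_uniform(self, legacy_record):
        # protocol conversion: legacy (tag, raw_value) -> dict the AI expects
        tag, raw = legacy_record
        return {"sensor": tag, "value": float(raw)}

    def step(self, legacy_record):
        reading = self.to_uniform(legacy_record)
        setpoint = self.ai.infer(reading)            # cloud inference call
        if abs(setpoint - reading["value"]) > self.limit:
            self.feedback_queue.append(reading)      # monitor-and-iterate loop
        return setpoint

class DummyAI:
    """Stand-in for the cloud model; returns a toy 'optimized' setpoint."""
    def infer(self, reading):
        return reading["value"] * 0.98
```

A production adapter would also handle the reliability concerns the article mentions (secure channels, failover), which this sketch omits.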

Figure 3. The entire process of AI application development

Note: MLOps stands for Machine Learning Operations.

(III) Creating a favorable environment for collaborative development to facilitate the application of artificial intelligence

1. Focusing on a clearly defined enterprise application scenario is the starting point for successful application.

Discovering enterprise application scenarios is the first step in the practical application of AI. To accurately position AI within an industry, it's crucial to deeply understand the specific business scenarios and needs of the enterprise, and to have a clear understanding of the capabilities of AI algorithms and the problems they can solve. A continuously evolving virtuous cycle is reflected in: collaboration among multiple departments and professionals of various types; clarifying the boundaries of AI application implementation for specific business scenarios; assessing the feasibility and potential value of AI applications based on the enterprise's internal/external data governance and assetization levels; optimizing data-driven AI models using existing enterprise knowledge; and supplementing and refining more data in the process of realizing social and commercial value.

Enterprise AI application scenarios can be categorized into massive repetitive scenarios, expert experience transfer scenarios, and multi-domain collaborative scenarios. Scenario-based problem analysis can identify problem boundaries; introducing industry knowledge can reduce detours in the implementation process, thereby improving the quality of enterprise AI implementation. Taking intelligent oil and gas exploration as an example, the acquisition, processing, and interpretation of seismic data involve dozens of procedures. Currently, seismic horizon tracking, a fundamental and critical business step, is mainly completed manually. The large volume of seismic data makes manual interpretation extremely tedious, while using AI for automatic horizon tracking can significantly reduce repetitive professional labor, shorten the exploration cycle, and save exploration costs.

2. Enterprise digitalization is the foundation for the successful application of AI.

An enterprise's level of digitalization determines the readiness of its data elements, which is crucial to the effectiveness of AI applications. AI technologies, primarily deep learning, are highly dependent on data, especially in industry-specific scenarios. Because publicly available datasets are scarce in these domains, enterprises need to accumulate large amounts of data themselves, which places high demands on their informatization and digital infrastructure capabilities. During enterprise digital transformation and intelligent development, it is essential to promptly establish unified industry data standards to provide a foundation for data management models and trust mechanisms, and to gradually improve data intellectual property protection and sharing mechanisms to avoid the "data silo" problem caused by excessive centralization.

Data quality is also crucial. If data management is fragmented and lacks standardization, data governance must be implemented first to create a compliant data platform and data assets; only then can AI solutions be deployed. Taking the intelligent upgrade of a heating company as an example, at least three prerequisites must be met before AI implementation:

① Cleaning up basic data: Parameters from a series of equipment, including heat sources, heat exchange stations, units, unit valves, and indoor temperature sensors, must be systematically collected and modeled;

② At least one heating season's historical equipment operation data, weather data, etc., should be used as the AI training and validation set;

③ Analyze and further simulate abnormal scenarios where historical data is missing (such as data disconnection, equipment data anomalies, valve malfunctions, extreme weather, etc.) to obtain richer and more reliable datasets. Only when the prerequisites, represented by high-quality industry data, are sufficiently abundant can a complete AI control scheme be trained.
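The three prerequisites above could be captured as an automated readiness check run before any training begins. The function below is an illustrative assumption, not from the article; the 150-day season length is likewise a placeholder that varies by region.

```python
# Hypothetical pre-training readiness check mirroring the three prerequisites:
# (1) equipment parameters are modeled, (2) at least one heating season of
# history exists, (3) known abnormal scenarios are represented in the data.

HEATING_SEASON_DAYS = 150   # assumed season length; a placeholder value

def check_prerequisites(equipment, history_days,
                        covered_anomalies, required_anomalies):
    """Return a list of problems blocking AI deployment (empty = ready)."""
    problems = []
    # 1. basic data cleanup: every device class must have modeled parameters
    for device, params in equipment.items():
        if not params:
            problems.append(f"missing parameters for {device}")
    # 2. at least one heating season of historical operation data
    if history_days < HEATING_SEASON_DAYS:
        problems.append("less than one heating season of history")
    # 3. abnormal scenarios present, whether measured or simulated
    for scenario in required_anomalies - covered_anomalies:
        problems.append(f"anomaly scenario not covered: {scenario}")
    return problems
```

In practice such checks would run against the data platform itself; the sketch only shows how the prerequisites translate into concrete, testable conditions.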

3. Facilitate the interconnection and interoperability of AI application scenarios and realize value through centralized construction.

Centralized construction is a necessary model for AI systems engineering. Advance planning for collaborative AI algorithm and computing power development across different departments is crucial to avoid fragmented efforts and eliminate redundant construction and resource waste. AI capabilities need to converge from individual algorithms to a horizontally interconnected central engine layer to achieve the reuse and efficient accumulation of common capabilities across different AI algorithms. In other words, the central computing power foundation addresses the problem of fragmented physical resources (as cloud computing essentially packages and aggregates large amounts of fragmented computing power to achieve higher reliability, higher performance, and lower cost), while the central engine layer addresses the localized problems of AI algorithms. Applying this centralized thinking and model simultaneously to problem definition and AI platform construction is essential to elevating the application value of AI from localized to holistic.
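The idea of a central engine layer that registers common capabilities once, for reuse across departments, can be sketched as a toy registry. All names here are hypothetical illustrations, not an actual platform API.

```python
# Toy sketch of a central "engine layer": common AI capabilities are
# registered once and reused, instead of each department rebuilding its own
# copy (the redundant construction the text warns against).

class EngineLayer:
    def __init__(self):
        self._capabilities = {}

    def register(self, name, fn):
        # refuse duplicates: one shared build per capability
        if name in self._capabilities:
            raise ValueError(f"duplicate capability: {name}")
        self._capabilities[name] = fn

    def call(self, name, *args):
        return self._capabilities[name](*args)

engine = EngineLayer()
engine.register("ocr", lambda img: f"text-from-{img}")
# a second department reuses "ocr" via engine.call() rather than rebuilding it
```

A real engine layer would add versioning, access control, and metering on top of this lookup, but the accumulation-through-reuse principle is the same.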

Taking smart city construction as an example, most current problems stem from complex, interlinked local issues that compound into an overall problem. The effectiveness of AI applications in these scenarios largely depends on the problem definition and construction model. For urban traffic resource optimization, focusing solely on congestion at a single intersection, even if AI algorithms find an optimal solution there, will hardly improve the overall traffic situation and vehicle throughput in any fundamental way. Addressing such issues requires a centralized construction approach that expands the perspective from the local to the global level, so that local optimizations radiate outward and drive the whole system to generate greater value. Specifically, before designing AI algorithms, the interoperability of data, resources, and scenarios should be considered first; if it is lacking, interoperability issues must be resolved before considering how to apply AI to the redefined problem and system.

The "unified management network" for urban operations is a representative practical case. Unlike applications run by single management departments, the "unified management network" is essentially a centralized construction model that interconnects multiple management departments, allowing urban problems previously handled in localized, department-internal loops to be resolved more efficiently through multi-departmental collaboration. For the redefined, global issues, AI applications have expanded from merely perceiving and detecting specific events to using knowledge graphs to uncover correlations between events across management departments, enabling precise recommendation of appropriate measures and responsible parties and achieving overall optimization of efficiency and satisfaction in incident handling.

(IV) Using a developmental mindset to address challenges in the development process

The development and application of AI technology is a gradual process. A lack of tolerance for AI algorithms can lead to a crisis of trust when problems arise in practical applications, potentially hindering technological iteration and improvement, and creating additional obstacles to the implementation of AI applications. It is advisable to view the problems and challenges in the implementation of AI for enterprises with a developmental mindset: providing opportunities and time for development, as well as support in systems, processes, and mechanisms, to promote positive and sustainable capability evolution.

The application scenario of a dynamic fault detection system for train operation is representative: train inspection equipment photographs the bottom and sides of freight cars, and the images are reviewed to determine if there are any safety risks. Given the extremely high speeds of trains, the complex outdoor environment, and the hundreds of possible fault types, AI systems face the challenge of learning from small samples on a long tail of data. AI R&D companies have collaborated with railway research institutions to apply AI technology. Through iterative development and testing, false positives and false negatives are rapidly incorporated into the system, continuously improving the model's capabilities and ultimately reaching the level of expert human interpretation. In similar development processes, more open and optimized organizational processes, enabling efficient collaboration between industry experts and AI experts, and applying technologies such as active learning to improve the efficiency of human assistance, will significantly accelerate the implementation of AI applications.
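The hard-example feedback described here is close in spirit to uncertainty-based active learning. Below is a minimal sketch under two stated assumptions: fault predictions come with a confidence score in [0, 1], and scores near 0.5 mark the model's least certain cases, which are the most valuable to route to human experts.

```python
# Hedged sketch of uncertainty-based sample selection for human review.
# The 0.5-distance heuristic and function name are illustrative assumptions,
# not the method used by the railway project in the text.

def select_for_review(predictions, budget):
    """predictions: list of (sample_id, probability_of_fault) pairs.
    Returns the `budget` sample ids whose scores lie closest to 0.5,
    i.e. where the model is least certain and expert labels help most."""
    ranked = sorted(predictions, key=lambda p: abs(p[1] - 0.5))
    return [sample_id for sample_id, _ in ranked[:budget]]
```

Confident predictions (scores near 0 or 1) are skipped, so scarce expert time concentrates on the long tail of ambiguous images, which is exactly where false positives and false negatives accumulate.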

Over the past decade, AI has been one of the fastest-growing technology sectors, thanks to a virtuous cycle between technological research and industrial application. Research results create possibilities for enterprise implementation, while the benefits of enterprise implementation, in turn, fuel AI research. However, it's important to note that after a decade of explosive growth in deep learning, AI research is entering a more complex phase, with fewer easily achievable or easily improved technical points. To maintain this virtuous cycle, it's crucial to deeply explore the "connection points" between technological research and industrial application. AI applications aim to "reduce repetitive work" and "expand the boundaries of human knowledge." Aiming at this fundamental goal and maintaining a close integration between technological research and industry application is essential to balancing technological realization with the creation of social value.

V. Suggestions for the Development of Artificial Intelligence Applications in Chinese Enterprises

Human society is entering an era of pervasive intelligence. At this critical juncture, a major breakthrough in a single technology or a profound change in a single solution could reshape the competitive landscape of entire industries. All signs indicate that we are on the eve of an explosion in large-scale AI deployment. Enterprises worldwide, especially leading ones, generally attach great importance to AI, increasing investment to keep pace with the trend and gain a leading edge. If opportunities are missed at the layout stage, latecomers may find themselves in an even weaker position once the Matthew effect takes hold. Accordingly, China should attach great importance to the development of the AI industry, actively issue supporting policies for enterprise AI adoption, and guide coordinated development across domestic foundational AI technologies, basic platforms, and key industry applications, so as to better promote the industrial clustering of AI technologies and the deep integration of AI with the real economy.

It is recommended to support the development of the AI industry chain: foster full-stack domestic AI, improve AI basic platforms and tool systems, cultivate domestic foundational AI technologies, and enhance the independent controllability of core AI technologies.

① Increase R&D investment in underlying foundational technologies, build a new AI ecosystem integrating computing power, algorithms, and data, and encourage enterprises, universities, and public services to accelerate the adoption of foundational AI technologies; through joint construction by multiple core industry players, actively build internationally influential application systems for foundational AI technologies spanning several industries, creating technical depth for enterprise AI applications.

② Support continuous innovation in foundational AI technologies, guide leading enterprises in tackling R&D challenges for these technologies, support demonstration applications in industry, and promote breakthroughs in and industrialization of core technologies such as domestic AI large models, frameworks, and chips.

③ Encourage leading AI enterprises to accelerate the improvement of the common AI support technology system, including end-to-end AI basic platforms and toolchains, to substantially improve independent controllability and build certain comparative advantages.

It is recommended to support enterprises in actively adopting AI for intelligent upgrading, forming a two-way cycle between technology R&D and industrial intelligent innovation. The production factors required for enterprise intelligentization, such as data, algorithms, and computing power, are usually difficult for any single enterprise to assemble in full. A scientific and reasonable mechanism for the open sharing of computing power, data, and algorithms should be established, continuously opening up and accumulating algorithms, models, and data across industries and promoting intelligent collaboration across enterprises, so that the scale effects of these factors raise the level of enterprise AI applications.

① Local governments can leverage the development opportunities of new infrastructure construction to build or expand a system of public AI factors, including AI computing centers (public computing power), integrated big data centers (data sharing), and AI innovation centers (model sharing).

② With data and application requirements contributed by enterprises and management departments, support enterprises in carrying out demonstration projects of AI application innovation based on cloud computing and big data; encourage enterprises to adopt AI for technological upgrading, process reengineering, and service improvement, forming a development pattern in which independent technology R&D and industrial intelligent innovation reinforce each other.

③ Formulate relevant standards to regulate AI technology research and respond with a developmental mindset to problems that may arise during AI deployment; improve laws, regulations, and process systems related to enterprise AI applications to promote the healthy development of industrial intelligentization and guard against potential technological and social risks.

