Abstract: This paper first elucidates the origin of genetic algorithms, followed by a brief overview. It focuses on the basic working principles of genetic algorithms, discusses existing problems and improvement methods, and proposes research and applications of genetic algorithms in automatic control. Finally, the paper presents a general program for a genetic algorithm in Matlab.
Keywords: Genetic algorithm, automatic control, Matlab
Chinese Library Classification Number: TP301.6
The Theory and Application of Genetic Algorithm
Li Shanshan, He Yinzhou
School of Control Science and Engineering
Shandong University 250061
Genetic algorithm (GA) is a randomized search algorithm that draws on the mechanisms of natural selection and natural inheritance in the biological world [1]. It was first proposed, and its mathematical framework established, by Holland and his colleagues at the University of Michigan in the 1970s. The algorithm attracted attention as soon as it was proposed, and research on it peaked in the mid-1980s. Goldberg and Michalewicz [2, 3] gave comprehensive, systematic treatments of the algorithm and its applications and successfully applied it to optimization problems in many fields. Although genetic algorithms have achieved great success in many areas, they still suffer from slow convergence and a tendency to become trapped in local optima. How to improve their search ability and convergence speed, so that they can be better applied to practical problems, remains a major topic for researchers worldwide. This paper surveys the current state of research on the theory of genetic algorithms and discusses their application in control.
1. Origin of Genetic Algorithms
Since Darwin's theory of evolution gained acceptance, biologists have been extremely interested in the mechanisms of evolution. The fossil record shows that the complex structures of life we observe evolved in a relatively short period, a fact that has surprised many, including biologists. While the mechanisms driving this evolution are not yet fully understood, some of their characteristics are known. Evolution occurs on chromosomes, which encode biological structures, and organisms are partially generated through the decoding of these chromosomes. Below are some of the widely accepted general characteristics of evolutionary theory:
(1) Evolution occurs on chromosomes, not on the organisms they encode.
(2) Natural selection links chromosomes with the structures they decode into: chromosomes of well-adapted individuals tend to have more reproductive opportunities than those of poorly adapted individuals.
(3) Evolution takes place at the moment of reproduction. Mutation can make the chromosomes of offspring differ from those of their parents, and recombination, by combining material from the two parent chromosomes, can produce offspring chromosomes that differ greatly from both.
(4) Biological evolution has no memory. Everything needed to produce an individual that is well adapted to its environment is contained in the set of chromosomes it carries and in the structures those chromosomes encode.
Most organisms evolve through two basic processes: natural selection and sexual reproduction. Natural selection follows the principle that the fittest survive and the unfit are eliminated; it determines which individuals in a population survive and reproduce. Sexual reproduction ensures the mixing and recombination of genes in the offspring. Compared with offspring that carry a single copy of a parent's genes and can improve only through accidental mutation, offspring produced by gene recombination evolve much faster. In the 1960s, John Holland of the University of Michigan, while studying how to build machines that learn, noticed that learning can occur not only through the adaptation of a single organism but also through the evolution of a population over many generations. Inspired by Darwin's theory of evolution, he came to realize that a good machine learning algorithm cannot rely on refining a single strategy; it must also evolve a population containing many candidate strategies. Because these ideas originated in genetic evolution, Holland named the field genetic algorithms [4].
2. Overview of Genetic Algorithms
The genetic algorithm created by Holland is a probabilistic search algorithm that uses an encoding technique to act on binary strings called chromosomes; its basic idea is to simulate the evolutionary process of a population composed of such strings [5]. The algorithm recombines strings of good fitness through organized, rather than random, information exchange. In each generation it uses the high-fitness bits and segments of the previous generation's strings to build a new population of strings, and in addition it occasionally tries new bits and segments in place of the original ones. The genetic algorithm is thus a random algorithm, but not a simple random walk: it can exploit existing information effectively to search for strings expected to improve the quality of the solution. As in natural evolution, the algorithm looks for good chromosomes by acting on the genes of the chromosomes; like nature, it knows nothing about the problem itself and selects chromosomes by fitness value, so that chromosomes with good fitness have more chances to reproduce than chromosomes with poor fitness. It is an adaptive, global, probabilistic optimization search algorithm formed by simulating the genetic and evolutionary processes of organisms in their natural environment. With the genetic algorithm as a tool, multi-objective problems can be handled by the weight-coefficient method and constraints by the penalty-function method, allowing complex problems to be solved. The reason genetic algorithms have gained such wide attention and recognition lies mainly in their own characteristics. The basic characteristics of genetic algorithms [6] are as follows:
(1) Intelligence: The intelligence of genetic algorithms includes self-organization, self-adaptation, and self-learning. When applying genetic algorithms to solve problems, after determining the encoding scheme, fitness function, and genetic operators, the genetic algorithm organizes its search based on the obtained information. Since genetic algorithms are based on the natural selection strategy of "survival of the fittest," individuals with higher fitness values have a gene structure that is more adapted to the environment and have a higher probability of survival. Then, through continuous gene exchange and gene mutation, they search for offspring that are more adapted to the environment. For the optimization of complex nonlinear, multi-peak functions, it is not easy to obtain the global optimal solution using general methods. However, if an appropriate objective function is given, genetic algorithms can generally search for a near-optimal solution. This is a concrete manifestation of the inherent intelligence and self-adaptation advantages of genetic algorithms.
(2) Parallelism: The parallelism of genetic algorithms manifests in two aspects: intrinsic parallelism and implicit parallelism. Intrinsic parallelism is a concrete manifestation of the randomness of the genetic algorithm's search for solutions. Due to this characteristic of genetic algorithms, the same problem can be solved simultaneously and independently on several computers, and then the optimal individual can be selected. The implicit parallelism of genetic algorithms is due to the population search method they employ.
(3) Global Optimization: Genetic algorithms search multiple regions of the solution space simultaneously, thus greatly reducing the possibility of traditional optimization methods getting trapped in local optima. For the optimization of complex functions such as nonlinear and multi-peak functions, traditional optimization methods are prone to converge to local optima due to the limitations of their chosen search strategies. The simultaneous search method of genetic algorithms in multiple regions of the solution space and the randomness of the search region changes make it easy to escape local optima during the search process.
(4) Robustness: The robustness of an algorithm refers to its applicability and effectiveness under different conditions and environments. Since genetic algorithms utilize the fitness values of individuals to drive the evolution of the population, regardless of the structural characteristics of the problem being solved, when using genetic algorithms to solve different problems, only the corresponding fitness evaluation function needs to be designed, without modifying other parts of the algorithm. Furthermore, because genetic algorithms possess the adaptive characteristics of natural systems, the trade-off between efficiency and effectiveness allows them to be applied to different environments and achieve good results.
Genetic algorithms begin with a population that represents a set of potential solutions to the problem to be optimized. The population consists of a number of individuals encoded as genes; each individual is, in essence, an entity characterized by its chromosome. A chromosome, the primary carrier of genetic material, is a collection of genes; its internal representation (the genotype) is a particular combination of genes, which determines the individual's external appearance (the phenotype). The first step is therefore the mapping from phenotype to genotype, that is, encoding.
Once the initial population has been generated, individuals of better and better fitness emerge through generation-by-generation evolution according to the principles of survival of the fittest and natural selection. In each generation, individuals are selected according to their fitness in the problem domain, and with the help of the crossover and mutation operators of natural genetics, the selected high-fitness individuals produce the next generation's population, representing a new solution set. This process causes the population, like a natural one, to evolve so that each later generation is fitter and better adapted to its environment than the last. When the process ends, the best individual of the final generation can be decoded and taken as an approximate optimal solution to the problem [7].
3. The principle of genetic algorithms
The research on genetic algorithms mainly includes three areas[8]: the theory and technology of genetic algorithms; optimization using genetic algorithms; and machine learning of classification systems using genetic algorithms. Among them, the research on the theory and technology of genetic algorithms mainly includes problems such as encoding, crossover operation, mutation operation, selection operation, and fitness evaluation.
3.1 Basic Principles
In nature, individuals within a biological population differ, and so their abilities to adapt to and survive in the environment also differ. Following the fundamental principle of biological evolution, survival of the fittest, the weakest individuals are eliminated. Through mating, superior chromosomes and genes of the parents are passed on to the offspring, and the recombination of chromosomes and genes produces new, fitter individuals, who in turn form a new population. Under certain conditions genes mutate, producing new genes and occasionally fitter individuals. As individuals are continuously renewed, the population steadily evolves toward the optimum. Genetic algorithms simulate this natural mechanism of biological evolution to search for optimal solutions.
Unlike traditional search algorithms, genetic algorithms start the search from a set of randomly generated initial solutions, called the population. Each individual in the population is a solution to the problem, called a chromosome. These chromosomes evolve over successive iterations, a process called heredity, implemented mainly through crossover, mutation, and selection operations. Crossover and mutation generate the next generation of chromosomes, called offspring. The quality of a chromosome is measured by its fitness: based on fitness, a certain number of individuals are selected from the previous generation and the offspring to form the next population, which then continues to evolve. After a number of generations, the algorithm converges on the best chromosome, which is likely the optimal or a near-optimal solution to the problem. Fitness measures how likely each individual in the population is to lead to the optimal solution in this randomized computation; the function that measures it is called the fitness function, whose definition generally depends on the specific problem being solved [9].
3.2 Typical Algorithms and Steps
The general steps of a genetic algorithm are as follows:
(1) Randomly generate and initialize the population;
(2) Calculate the fitness of each individual in the population and check whether the convergence criterion is met;
(3) Select individuals to enter the next generation according to a rule determined by the individual's fitness value;
(4) Perform crossover operations according to probability Pc;
(5) Perform mutation operation according to probability Pm;
(6) If the stopping condition is not met, return to (2); otherwise, proceed to the next step;
(7) Output the chromosome with the best fitness in the population as the satisfactory or optimal solution to the problem.
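As an illustration, the steps above can be sketched in Python. This is a toy sketch, not code from the paper: the objective x·sin(10πx) + 2, the parameter values, and all function names are illustrative assumptions.

```python
import math
import random

random.seed(0)

N, L = 30, 16        # population size and chromosome length (bits)
PC, PM = 0.8, 0.01   # crossover probability Pc and mutation probability Pm
GENS = 60            # maximum number of generations

def decode(bits):
    """Genotype -> phenotype: map a bit list to a real x in [0, 1]."""
    return int("".join(map(str, bits)), 2) / (2 ** L - 1)

def fitness(bits):
    """Illustrative multi-peak objective, shifted so fitness stays positive."""
    x = decode(bits)
    return x * math.sin(10 * math.pi * x) + 2.0

def select(pop):
    """Roulette-wheel selection: survival chance proportional to fitness."""
    total = sum(fitness(ind) for ind in pop)
    r, acc = random.uniform(0, total), 0.0
    for ind in pop:
        acc += fitness(ind)
        if acc >= r:
            return ind
    return pop[-1]

# (1) randomly generate the initial population
pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(N)]
for gen in range(GENS):                           # (6) stopping condition
    nxt = []
    while len(nxt) < N:
        a, b = select(pop)[:], select(pop)[:]     # (3) selection
        if random.random() < PC:                  # (4) single-point crossover
            p = random.randint(1, L - 1)
            a[p:], b[p:] = b[p:], a[p:]
        for ind in (a, b):                        # (5) bit-flip mutation
            for i in range(L):
                if random.random() < PM:
                    ind[i] = 1 - ind[i]
        nxt += [a, b]
    pop = nxt[:N]                                 # (2) next generation
best = max(pop, key=fitness)                      # (7) output best chromosome
```

The loop comments mark which numbered step each statement corresponds to; after the final generation, `best` is decoded as the approximate optimum.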
The general process of the genetic algorithm [10] is shown in Figure 1.
3.3 Encoding Issues
Encoding is the first problem that must be solved when applying a genetic algorithm. Commonly used encoding methods include binary encoding, Gray code encoding, real number encoding, and symbolic encoding; different encodings suit different problems.
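For example, binary encoding quantizes a real parameter onto a bit string, and a Gray code can be derived from the binary form. This is a hedged Python sketch; the function names and the bit widths are illustrative assumptions, not part of any standard toolbox.

```python
def encode(x, lo, hi, n_bits):
    """Binary encoding: quantize a real x in [lo, hi] onto n_bits bits."""
    k = round((x - lo) / (hi - lo) * (2 ** n_bits - 1))
    return format(k, "0{}b".format(n_bits))

def decode(bits, lo, hi):
    """Inverse mapping from a bit string back to a real value."""
    return lo + int(bits, 2) / (2 ** len(bits) - 1) * (hi - lo)

def binary_to_gray(bits):
    """Gray encoding: adjacent values differ in exactly one bit."""
    k = int(bits, 2)
    return format(k ^ (k >> 1), "0{}b".format(len(bits)))
```

With 8 bits over [0, 1] the quantization error is at most 1/255, which is why the choice of encoding length trades precision against chromosome size.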
3.4 Population Setting
Genetic operations are performed simultaneously on many individuals, which together form a population. After the encoding is designed, the next task is to set up the initial population and then evolve it generation by generation until a stopping criterion terminates the process, yielding the final generation. A key issue is the population size, i.e., the number of individuals. Two factors must be considered. a. Setting the initial population: first, using whatever prior knowledge of the problem is available, estimate the region of the problem space in which the optimal solutions are likely to lie, and set the initial population within that region; then repeatedly generate a number of random individuals and add the best of them to the initial population until it reaches the predetermined size. b. Maintaining the population size during evolution: the choice of population size interacts strongly with the selection operation. Generally, the larger the population, the greater the diversity of its individuals and the lower the risk of the algorithm becoming trapped in a local solution; from the standpoint of diversity, therefore, the population should be relatively large. However, an oversized population brings drawbacks. First, computational efficiency: a larger population requires more fitness evaluations, increasing computational cost and hurting performance. Second, since the survival probability of an individual is largely proportional to its fitness, in a very large population only a small number of highly fit individuals survive while most are eliminated, which distorts the formation of the mating pool and hence the crossover operation. On the other hand, too small a population limits the search space and may cause the search to stall at an immature stage, leading to premature convergence.
Fitness function: genetic algorithms use essentially no external information during the evolutionary search; the fitness function is their only guide. The objective function need not be continuous or differentiable, and its domain can be any set. The only requirement is that, for any input, it return a non-negative value that can be compared. This property gives genetic algorithms their wide applicability [11].
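The penalty-function treatment of constraints mentioned in Section 2 fits naturally here: cost plus a weighted constraint violation is converted into a non-negative fitness to be maximized. An illustrative Python sketch follows; the objective, the constraint x ≤ 2, and the penalty weight are all assumptions chosen for the example.

```python
def raw_cost(x):
    """Illustrative objective to minimize: (x - 3)^2, subject to x <= 2."""
    return (x - 3.0) ** 2

def fitness(x, penalty=100.0):
    """Non-negative fitness to maximize: infeasible points are penalized
    in proportion to how far they violate the constraint x <= 2."""
    violation = max(0.0, x - 2.0)
    return 1.0 / (1.0 + raw_cost(x) + penalty * violation)
```

The feasible optimum x = 2 then receives the highest fitness, while the unconstrained minimum x = 3 is strongly penalized.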
3.5 Genetic Operations
Genetic operations include three basic genetic operators: crossover, mutation, and selection [12]. These three operators have the following characteristics: a. The operations of these three genetic operators are all randomized operations, therefore, the migration of individuals to the optimal solution is also random. b. The effect of genetic operations is closely related to the operation probabilities, encoding methods, population size, initial population, and fitness function settings of the three genetic operators. c. The operation methods or strategies of the three basic genetic operators are directly related to the encoding methods of individuals.
3.5.1 Crossover Operation
Crossover, or the crossover operation, refers to exchanging part of the genes of two paired chromosomes in some way, producing two new individuals. Crossover is a key feature distinguishing genetic algorithms from other evolutionary algorithms; it determines the global search capability of the algorithm and is the primary means of generating new individuals. Common crossover operators include: a. Single-point crossover: also known as simple crossover, it randomly sets one crossover point in the encoding string and exchanges the partial genes of the two paired individuals at that point. b. Two-point crossover: 1. set two crossover points in the encoding strings of the paired individuals; 2. exchange the genes lying between the two points. c. Uniform crossover: every gene of the two paired individuals is exchanged with equal probability, forming two new individuals. The operation is: 1. randomly generate a binary mask word w = w1w2…wn of the same length as the individual's encoding; 2. generate two new individuals X and Y from the parents A and B by the rule: if wi = 0, the i-th gene of X inherits the corresponding gene of A and the i-th gene of Y inherits that of B; if wi = 1, the i-th genes of A and B are exchanged to produce the i-th genes of X and Y [13].
3.5.2 Mutation Operation
Mutation, in genetic algorithms, refers to replacing certain gene values in an individual's encoding string with other values, forming a new individual. Mutation is an auxiliary means of generating new individuals, but it is essential because it determines the algorithm's local search capability. Common methods include: a) Basic bit mutation: with mutation probability p, randomly select one or more genes in the individual's encoding string and flip them. b) Uniform mutation: with a relatively small probability, replace each gene in the individual with a random number uniformly distributed over a given range. c) Binary mutation: this operation involves two chromosomes; it produces two new individuals whose genes are obtained by taking the XOR of the corresponding genes of the original chromosomes.
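The first two mutation methods can be sketched in Python as follows; the probability values and function names are illustrative assumptions.

```python
import random

def bit_flip(chrom, pm=0.01):
    """Basic bit mutation: flip each bit independently with probability pm."""
    return [1 - g if random.random() < pm else g for g in chrom]

def uniform_mutation(chrom, lo, hi, pm=0.05):
    """Uniform mutation for real-coded genes: with small probability pm,
    replace a gene by a uniform random value in [lo, hi]."""
    return [random.uniform(lo, hi) if random.random() < pm else g
            for g in chrom]
```

Setting pm near 1 would turn the search into a random walk, which is why mutation is kept small and treated as an auxiliary operator.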
3.5.3 Selection Operation
Selection is an operation that weeds out the less fit individuals in a population: individuals with high fitness are more likely to be passed on to the next generation; individuals with low fitness are less likely to be passed on. Its task is to select some individuals from the parent population and pass them on to the next generation using a certain method.
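A common concrete form of this operation is roulette-wheel (fitness-proportionate) selection, sketched here in Python; the population and fitness values in the usage are illustrative.

```python
import random

def roulette_select(pop, fitnesses, n):
    """Fitness-proportionate selection: individual i is chosen with
    probability f_i / sum(f), sampled n times with replacement."""
    total = float(sum(fitnesses))
    cum, acc = [], 0.0
    for f in fitnesses:
        acc += f
        cum.append(acc / total)   # cumulative selection probabilities
    cum[-1] = 1.0                 # guard against floating-point round-off
    chosen = []
    for _ in range(n):
        r = random.random()
        for ind, c in zip(pop, cum):
            if r <= c:
                chosen.append(ind)
                break
    return chosen
```

With fitnesses [8, 1, 1], the first individual holds 80% of the total fitness and should dominate the selected pool, while the weaker individuals still retain a nonzero chance of survival.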
4. Research and Application of Genetic Algorithms in the Field of Control
Genetic algorithms provide a general framework for solving complex system optimization problems. They are independent of the specific domain of the problem and have strong robustness to the types of problems, so they are widely used in many disciplines. Currently, the main application areas include function optimization, production scheduling problems, automatic control, machine learning, image processing, artificial life, genetic programming, data mining, etc. [14]. This paper mainly discusses the research and application of genetic algorithms in the field of control.
4.1 System Identification and Model Reduction
System identification is the foundation of control system design. Many effective methods exist, but most deal with models that are linear in the parameters and assume a continuous, differentiable search space. Current online identification methods are recursive implementations of offline methods; these are essentially local searches based on gradient techniques, and when the search space is non-differentiable or the parameters enter nonlinearly, they have difficulty finding the global optimum. By contrast, the genetic algorithm's search does not depend on gradient information and does not require the function to be differentiable; it only requires that feasible solutions can be evaluated under the problem's constraints, and it searches globally. GA therefore provides a simple and effective method for identifying nonlinear systems [15].
4.2 Nonlinear System Control
In control system design, practical problems often involve strict constraints and nonlinearity, and the performance index may be neither continuous nor differentiable; different parameter combinations may even produce the same control action. Traditional optimization methods are very sensitive to the choice of initial values and easily become trapped in local extrema near the initial solution. Genetic algorithms offer an effective way to optimize nonlinear control systems. Because a GA does not require the index function to be differentiable, design automation based on genetic algorithms and performance analysis can account for many performance requirements of practical systems and can design linear controllers directly for nonlinear plants without first linearizing them. Practice has shown this to be an effective method of control system design. Xu Diansheng discussed controller parameter optimization with GA, studied the determination of parameter values of nonlinear systems with specific structures, and showed that globally optimal controller parameters can be obtained for a given performance index [16].
4.3 Fuzzy Logic Control System
In practical engineering applications, detailed prior knowledge of the dynamic, nonlinear, and time-varying characteristics of the plant is often lacking, so an accurate model of the controlled object is very difficult or impossible to obtain. Fuzzy inference, a model-free estimation method, is therefore one of the powerful tools for control system design. These rule-based systems introduce fuzzy linguistic variables into the rule set to model human expert experience. However, the choice of rules and membership functions in a fuzzy controller is highly subjective, and as the number of inputs and outputs and the number of linguistic-variable levels increase, the number of fuzzy rules grows rapidly, roughly as the square of the number of levels. All of this makes fuzzy controllers difficult to design. To overcome these difficulties, many scholars have studied GA-assisted and fully automated design of fuzzy systems. Varsek [17] proposed a three-stage framework for learning dynamic system control: first obtain the decision table with a GA, then convert the control rules into an understandable form through machine learning, and finally optimize the numerical parameters of the rules with a GA. Park studied a new genetic fuzzy inference system to generate an optimized parameter set and obtained good performance [18].
4.4 Neural Network Control System
Artificial neural networks are models that imitate the human nervous system in both microstructure and function, and they can mimic some aspects of human figurative thinking. Their characteristics are nonlinearity, learning ability, and adaptability; they are an important route to simulating human intelligence and have been widely applied. However, the initial weights, thresholds, and Gaussian-function center vectors of a neural network are hard to determine well, and the transfer functions of the hidden-layer units may be discontinuous and non-differentiable, so traditional optimization methods may become stuck in local minima [19]. The genetic algorithm's search does not depend on gradient information and does not require differentiability; it only requires that feasible solutions can be evaluated under the problem's constraints, and it searches globally. Optimizing the connection weights, network structure, initial weights, thresholds, and Gaussian-function center vectors of a neural network with a genetic algorithm not only makes the global optimum easier to reach but also improves the network's generalization and greatly improves the accuracy, robustness, and adaptability of the system. Reference [20] proposed using a genetic algorithm in place of least squares to train the weights, thresholds, and Gaussian-function center vectors of an RBF neural network and obtained satisfactory simulation results.
5. MATLAB Implementation of Genetic Algorithm
The following main functions are required:
(1) Encoding and population generation
function [pop] = initializega(num,bounds,evalFN,evalOps,options)
% pop - the initial, evaluated, random population
% num - the size of the population, i.e. the number of individuals to create
% bounds - the number of permutations in an individual (e.g., the number of cities in a TSP)
% evalFN - the evaluation function, usually the name of the .m file used for evaluation
% evalOps - any options to be passed to the evaluation function; defaults to [ ]
% options - options for the initialize function, i.e. [eps, float/binary, prec], where eps is the epsilon value, the second entry is 1 for orderOps, and prec is the precision of the variables; defaults to [1e-6 1]
(2) Crossover
function [c1,c2] = arithXover(p1,p2,bounds,Ops)
% Arith crossover takes two parents P1,P2 and performs an interpolation along the line formed by the two parents.
% function [c1,c2] = arithXover(p1,p2,bounds,Ops)
% p1 - the first parent ( [solution string function value] )
% p2 - the second parent ( [solution string function value] )
% bounds - the bounds matrix for the solution space
% Ops - Options matrix for arith crossover [gen #ArithXovers]
(3) Selection
function [newPop] = normGeomSelect(oldPop,options)
% normGeomSelect is a ranking selection function based on the normalized geometric distribution
% newPop - the new population selected from the oldPop
% oldPop - the current population
% options - options to normGeomSelect [gen probability_of_selecting_best]
(4) Mutation
nonUnifMutation: Non-uniform mutation changes one of the parameters of the parent based on a non-uniform probability distribution. This Gaussian distribution starts wide and narrows to a point distribution as the current generation approaches the maximum generation.
(5) Some auxiliary functions:
f2b: returns the binary representation of the float number fval.
b2f: returns the float number corresponding to the binary representation bval.
maxGenTerm: returns 1 (terminating the GA) when the maximal generation has been reached, and 0 otherwise.
roulette: the traditional selection function, in which the probability that individual i survives equals the fitness of i divided by the sum of the fitnesses of all individuals.
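The ranking idea behind normGeomSelect can be sketched in Python: individuals are sorted by fitness, and rank r (best = 0) is selected with normalized geometric probability q'(1 - q)^r, where q is the probability of selecting the best. This is a sketch of the idea only, not the toolbox code, and q = 0.3 is an illustrative value.

```python
import random

def norm_geom_select(pop, fitnesses, q=0.3):
    """Ranking selection with a normalized geometric distribution:
    the best-ranked individual is picked with probability about q,
    lower ranks geometrically less often."""
    n = len(pop)
    order = sorted(range(n), key=lambda i: -fitnesses[i])   # best first
    qn = q / (1.0 - (1.0 - q) ** n)                         # normalization
    probs = [qn * (1.0 - q) ** r for r in range(n)]
    new = []
    for _ in range(n):
        r, acc = random.random(), 0.0
        pick = order[-1]                 # fallback for round-off
        for idx, p in zip(order, probs):
            acc += p
            if r <= acc:
                pick = idx
                break
        new.append(pop[pick])
    return new
```

Because selection depends only on rank, not on raw fitness values, this scheme is insensitive to the scaling of the fitness function, which is the main reason ranking selection is used in place of plain roulette selection.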
6. Conclusion
Genetic algorithms (GAs) are highly effective optimization methods; for determining certain control structures and actions, they give faster and better results than traditional optimization methods. Intelligently coupling GAs with CAD software packages for high-precision system modeling, automated controller design, and parameter optimization promises to be an effective new approach. However, the study of the algorithm's own evolution has yet to be addressed, and current GA models in essence depict the human evolutionary process only superficially: they capture neither the true evolutionary process nor the actual learning processes of the mind. GA models themselves therefore require further in-depth exploration.
References
[1] HOLLAND J H. Adaptation in natural and artificial systems[M]. Ann Arbor: University of Michigan Press, 1975.
[2] De JONG K A. The analysis of the behavior of a class of genetic adaptive systems[D]. Ann Arbor: University of Michigan, 1975.
[3] GOLDBERG D E. Genetic algorithms in search, optimization and machine learning[M]. Boston: Addison-Wesley Longman Press, 1989.
[4] Zhao Zhongrong. Research on Container Loading Optimization Method Based on Improved Genetic Algorithm [D]. 2010.
[5] Andreas Bortfeldt, Hermann Gehring. A hybrid genetic algorithm for the container loading problem[J]. European Journal of Operational Research. 2001(131): 143-161.
[6] Liu Song, Li Wenhui. A brief discussion on the application of genetic algorithm in Park system [J]. Science & Technology Communication. 2010(22): 226.
[7] Huang Xiaoming, Liu Changan. Application of improved genetic algorithm in automatic test paper generation system [J]. Science Technology and Engineering. 2010(08): 1999-2002.
[8] Zhou Xin, Ling Xinghong. A review of the theory and technology of genetic algorithms [J]. Computer and Information Technology. 2010(04): 37-39.
[9] Zhao Yipeng, Meng Lei, Peng Chengjing. A review of the principles and development direction of genetic algorithms [J]. Information Science. 2010: 79-80.
[10] Xia Yu, Long Pengfei. A neural network ensemble model based on an improved genetic algorithm [J]. Software Time and Space. 2010, 26(11-3): 206-207.
[11] Wang Chunshui, Xiao Xuezhu, Chen Hanming. Examples of applications of genetic algorithms [J]. Computer Simulation. 2005, 6.
[12] Li Huachang, Xie Shulan, Yi Zhongsheng. Principles and applications of genetic algorithms [J]. Mining and Metallurgy. 2005, 3.
[13] Davis L. Adaptive operator probability in genetic algorithms[C]. San Francisco: Morgan Kaufmann Publishers, 1989.
[14] Ding Chengmin, Zhang Chuansheng, Liu Hui. A comprehensive discussion on genetic algorithms [J]. Information and Control. 2007, 26.
[15] Xuan Guangnan, Cheng Runchuan. Genetic Algorithm and Engineering Design [M]. Beijing: Science Press, 2000.
[16] Xing Wenxun, Xie Jinxing. Modern Optimization Computation Methods [M]. Beijing: Tsinghua University Press, 1999.
[17] Varsek A, Urbancic T, Filipic B. Genetic Algorithms in Controller Design and Tuning[J]. IEEE Transactions on Systems, Man, and Cybernetics. 1993, 23(5): 1330-1339.
[18] Ge Jike, Qiu Yuhui. A review of genetic algorithm research [J]. Computer Applications Research. 2008, 25(10).
[19] Wang Hongyan, Yang Jingan. Research progress on parallel genetic algorithms [J]. Computer Science. 2009, 26(6).
[20] Liu Baokun. Research on neural network adaptive controller based on genetic algorithm [J]. Information and Control. 2007, 26(4): 311-314.
About the author:
Li Shanshan, born in March 1988, is a female master's student at the School of Control Science and Engineering, Shandong University. Her research interests include control science and engineering, and energy efficiency analysis and evaluation.
He Yinzhou, born in May 1987, is a male master's student at the School of Control Science and Engineering, Shandong University, specializing in DCS control systems.