Feedforward Neural Network Data Processing Method Based on Groundwater Level Measurement
Abstract: Nonlinear systems are widespread in nature. Artificial neural networks, with their ability to represent arbitrary nonlinear relationships and to learn from data, offer a way to handle such systems. Starting from a series of groundwater level measurements, this paper uses a radial basis function (RBF) neural network and a backpropagation (BP) neural network, builds training and test sample sets from the data, trains the corresponding networks, and compares and evaluates the simulation results.

Keywords: feedforward neural network; nonlinear; RBF neural network; BP neural network

CLC number: TP118    Document code: A

[align=center]Data Processing Method for Feedforward Neural Networks Based on Groundwater Level Measurement
ZOU Jing, YU Yang, CHEN Liang
(School of Information Science & Engineering, Shenyang Ligong University, Shenyang, Liaoning 110168, China)[/align]

A feedforward neural network has a layered structure in which information propagates from the input layer units to the units of the layer above. Every unit of the first layer is connected to every unit of the second layer, the second layer is connected in the same way to the layer above it, and there are no connections between units within the same layer [1].
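This layered, fully connected structure can be sketched in a few lines of Python (NumPy only; the layer sizes, weights, and tanh transfer function here are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def forward(x, weights, biases):
    """Propagate an input through a layered feedforward network.

    Each layer is fully connected to the next layer; there are no
    connections between units within the same layer.
    """
    a = x
    for W, b in zip(weights, biases):
        a = np.tanh(W @ a + b)  # nonlinear transfer at every unit
    return a

# Illustrative 5-11-1 network (sizes chosen to mirror the BP structure used later)
rng = np.random.default_rng(0)
weights = [rng.standard_normal((11, 5)), rng.standard_normal((1, 11))]
biases = [np.zeros(11), np.zeros(1)]
y = forward(rng.standard_normal(5), weights, biases)
print(y.shape)  # (1,)
```

The key structural point is that the loop only ever connects one layer to the next; no weight matrix links units within a layer.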
The input-output relationship of a neuron in a feedforward network can be realized either by a hard linear-threshold transfer function or by a nonlinear transfer function, and both cases use supervised learning [2]. Among feedforward networks, the RBF neural network and the BP neural network are well suited to approximating arbitrary nonlinear mappings [3]. This paper takes a series of groundwater level measurements as samples, divides them into a training set and a test set, and builds both types of feedforward neural network on them. The approximation errors of the two networks are then analyzed, compared, and evaluated.

1. Preliminary preparation

Groundwater level is mainly affected by factors such as river flow, temperature, saturation deficit, precipitation, and evaporation. 24 sets of data are summarized in Table 1.1. Sets 1-19 are used as training samples and sets 20-24 as test samples. (The data have been normalized.)

[align=center]Table 1.1 Monitoring data of groundwater level and its influencing factors[/align]

2. Creation, training, and testing of the RBF network

A radial basis function (RBF) neural network consists of three layers. The input layer nodes only pass the input signal on to the hidden layer. The hidden layer nodes use radial basis functions, such as Gaussian functions, as activation functions, while the output layer nodes are usually simple linear functions [4]. The basis function of a hidden node responds locally to the input signal: when the input falls near the center of the basis function, the node produces a large output. The network therefore has local approximation ability, and for this reason the RBF network is also called a localized receptive field network [5].

From Table 1.1, the input vector p and the target vector t are obtained.
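As a rough NumPy illustration of this structure, the sketch below builds an exact RBF fit: one Gaussian hidden unit centered on each training point and a linear output layer solved so the training error is zero. The toy one-dimensional data and the direct linear solve are assumptions that mirror, but do not reproduce, MATLAB's newrbe:

```python
import numpy as np

def gaussian(r, spread):
    # Local response: large when r is near zero, decaying with distance
    return np.exp(-(r / spread) ** 2)

def rbf_fit(X, t, spread):
    """Exact RBF design: one hidden unit per training point,
    output weights solved so the training-set error is zero."""
    D = np.abs(X[:, None] - X[None, :])   # pairwise distances between centers
    G = gaussian(D, spread)               # hidden-layer outputs on training data
    return np.linalg.solve(G, t)          # linear output layer

def rbf_predict(X_train, w, spread, X_new):
    G = gaussian(np.abs(X_new[:, None] - X_train[None, :]), spread)
    return G @ w

# Toy 1-D data (assumed for illustration only)
X = np.linspace(0.0, 1.0, 9)
t = np.sin(2 * np.pi * X)
w = rbf_fit(X, t, spread=0.3)
train_err = np.max(np.abs(rbf_predict(X, w, 0.3, X) - t))
print(train_err)  # essentially zero on the training data
```

The spread parameter plays the same role as SPREAD in the MATLAB code: a larger value widens each Gaussian and smooths the fitted function.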
(Training samples) The test inputs p_test and test targets t_test are read in as well. (Test samples)

A zero-error RBF network is created with the exact design function newrbe, which chooses the number of hidden nodes automatically:

SPREAD=1.5;
net=newrbe(p,t,SPREAD);

SPREAD is the spread of the radial basis functions; the larger its value, the smoother the fitted function. Because building the network with newrbe is itself the training process, the resulting network net is already trained. The RBF network model is shown in Figure 2.1.

[align=center]Figure 2.1 RBF neural network model[/align]

The network is then simulated on the test samples to check its prediction error, as shown in Figure 2.2:

y=sim(net,p_test)

The result is y = 0.6455 1.0844 0.3816 0.0064 0.1837.

plot(1:5,y-t_test); % the prediction error, shown in Figure 2.2

As the figure shows, the prediction error of the network for the groundwater level is not large. The value of SPREAD also affects the prediction accuracy, so the prediction error is next computed for SPREAD = 2, 3, 4, and 5:

y=rands(4,5); % preallocate the results
for i=1:4
    net=newrbe(p,t,i+1);
    y(i,:)=sim(net,p_test);
end
plot(1:5,y(1,:)-t_test,'r'); hold on;
plot(1:5,y(2,:)-t_test,'b'); hold on;
plot(1:5,y(3,:)-t_test,'g'); hold on;
plot(1:5,y(4,:)-t_test,'.'); hold on;

[align=center]Figure 2.2 RBF network training error curve    Figure 2.3 Prediction error for different values of SPREAD[/align]

As Figure 2.3 shows, the prediction error is smallest when SPREAD = 2 or 3, which gives the best results.

3. Creation, training, and testing of the BP network

A BP network is now used to predict the groundwater levels again. A 5*11*1 BP network structure is chosen; the number of hidden neurons is obtained from 5*2+1. The training function is trainlm, and the maximum number of training epochs is set to 1000.
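As a rough Python sketch of such a 5-11-1 network trained by backpropagation (plain gradient descent stands in for trainlm's Levenberg-Marquardt algorithm, and the random inputs and toy target are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((30, 5))                       # 30 samples, 5 input factors
t = X.mean(axis=1, keepdims=True)             # assumed toy target in [0, 1]

# 5-11-1 network: tansig-style hidden layer, logistic output
W1, b1 = rng.standard_normal((5, 11)) * 0.5, np.zeros(11)
W2, b2 = rng.standard_normal((11, 1)) * 0.5, np.zeros(1)

lr = 0.5
for epoch in range(1000):
    h = np.tanh(X @ W1 + b1)                  # hidden activations
    y = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # logistic output
    err = y - t
    # Backpropagate the squared-error gradient layer by layer
    d2 = err * y * (1 - y)
    d1 = (d2 @ W2.T) * (1 - h ** 2)
    W2 -= lr * h.T @ d2 / len(X); b2 -= lr * d2.mean(axis=0)
    W1 -= lr * X.T @ d1 / len(X); b1 -= lr * d1.mean(axis=0)

mse = float(np.mean((y - t) ** 2))
print(mse)  # small after training
```

Unlike the exact RBF design, the weights here are adjusted iteratively over many epochs, which is the main reason BP training is slower, as the comparison in Section 4 notes.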
A two-layer network is created; its model is shown in Figure 3.1.

[align=center]Figure 3.1 BP network structure model[/align]

net=newff([0 0.6814;0 0.9697;0 1.0000;0 0.6129;0 1.0000],[11 1],{'tansig','logsig'},'trainlm');
net.trainParam.epochs=1000;
net=train(net,p,t);

The training progress is shown in Figure 3.2.

[align=center]Figure 3.2 BP network training error[/align]

% simulate on the test samples
y=sim(net,p_test);
y_bp=y-t_test;
plot(1:5,y_bp,'*'); hold off;

The resulting error is overlaid on Figure 2.3.

4. Comparative analysis

As Figure 2.3 shows, the BP network is clearly inferior to the RBF network in prediction accuracy, and its training time is noticeably longer, that is, its training speed is slower. Because the output layer of an RBF network is a linear weighted sum of the hidden layer outputs, it avoids the lengthy iterative computation of the BP network, computes faster, extrapolates better, and still gives the network a strong nonlinear mapping capability. Compared with the BP network, the RBF network also has fewer parameters to adjust, only the one spread factor, so a suitable prediction network can be found more quickly, which is a clear computational advantage.

5. Conclusion

In theory, both the RBF network and the BP network can approximate any continuous nonlinear function. The main difference between them lies in the activation functions they use: the hidden nodes of a BP network use sigmoid functions, whose values are nonzero over an infinitely large region of the input space, while the activation functions of an RBF network are local.

References
[1] Zhang Ling, Zhang Ba. Theory and Application of Artificial Neural Networks [M].
[2] Shen Qing, Hu Dewen, Shi Chun. Application Technology of Neural Networks [M]. Changsha: National University of Defense Technology Press, 1993.
[3] Wen Xin, et al.
MATLAB Neural Network Simulation and Application [M]. Science Press, 2003.
[4] Huang Jialiang. Research on the Application of RBF Neural Networks in Fault Diagnosis of Marine Low-Speed Diesel Engines [D]. Dalian Maritime University, 2000.
[5] Cong Shuang. Neural Network Theory and Applications Based on the MATLAB Toolbox [M]. Hefei: University of Science and Technology of China Press, 1998.