time series {xp(m+1), xp(m+2), …, xp(m+s)}. This is equivalent to saying that an m-p-s network is used to approximate the function, as shown in Equation (8):

(xp(m+1), xp(m+2), …, xp(m+s))^T = F(xp1, xp2, …, xpm) (8)
When s = 1, the prediction is a one-step prediction by the neural network, as seen in Equation (9).

xp(m+1) = F(xp1, xp2, …, xpm) (9)
When s is larger than one, the prediction must be carried out step by step: the result of each prediction is used as an input to the next. {xp1, xp2, …, xpm} are the inputs of the network, and xp(m+1), the result of the first prediction, is then used as an input to predict xp(m+2) from xp2, xp3, …, xp(m+1).
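The step-by-step feedback scheme above can be sketched as follows. This is a Python illustration of the sliding-window procedure only, not the paper's MATLAB implementation; the `predict` argument stands in for the trained network F, and the naive persistence predictor used in the example is a hypothetical stand-in.

```python
# Iterative multi-step prediction: each new prediction is fed back
# as an input for the next step, as described in the text.

def multi_step_forecast(history, predict, s):
    """Predict s future values from the last m observations,
    feeding each prediction back into the input window."""
    window = list(history)              # {xp1, ..., xpm}
    forecasts = []
    for _ in range(s):
        x_next = predict(window)        # xp(m+1) = F(xp1, ..., xpm)
        forecasts.append(x_next)
        window = window[1:] + [x_next]  # slide window: drop oldest, append newest
    return forecasts

# Toy predictor: next value = last observed value (naive persistence)
naive = lambda w: w[-1]
print(multi_step_forecast([1.0, 2.0, 3.0], naive, 2))  # [3.0, 3.0]
```

Any trained network with the signature `window -> next value` can be substituted for the toy predictor.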
III. ARTIFICIAL NEURAL NETWORKS
A. Establishment
The MATLAB neural network toolbox provides various functions for designing and analyzing BP neural networks [9]. A BP network is established with the following call:

net = newff(PR, [S1, S2, …, Si, …, SN], {TF1, …, TFN}, BTF, BLF, PF); (10)
where PR is an R×2 matrix composed of the minimum and maximum values of each input vector; Si is the number of neurons in the i-th layer; TFi is the transfer function of the i-th layer, chosen from tansig, logsig and purelin, with tansig as the default. The difference between the three transfer functions is shown in Figure 2.
Figure 2. The difference between the three transfer functions
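The three transfer functions compared in Figure 2 can be written out directly. This is a Python sketch of the standard formulas behind MATLAB's tansig, logsig and purelin, given here for illustration:

```python
import math

def tansig(n):
    """Hyperbolic tangent sigmoid: output in (-1, 1); MATLAB's default."""
    return 2.0 / (1.0 + math.exp(-2.0 * n)) - 1.0

def logsig(n):
    """Log-sigmoid: output in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-n))

def purelin(n):
    """Linear transfer function: output equals input."""
    return n

print(tansig(0.0), logsig(0.0), purelin(2.5))  # 0.0 0.5 2.5
```

Note that tansig is mathematically identical to tanh; the two sigmoids differ only in their output range, while purelin is unbounded.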
BTF is the function used to train the network. MATLAB provides more than ten training functions to choose from, of which trainlm (the default), trainb, trainc, trainr, traingdx and trainscg are the most frequently used. BLF is the learning function of the network. MATLAB likewise provides many learning functions; learngdm (the default), learngd and learnwh are commonly used in practice. PF is the performance function of the network, including mae, mse (the default), msereg and sse.
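The performance functions named above measure how far the network's outputs are from the targets. A minimal Python sketch of three of them (MATLAB's mae, mse and sse), written over a list of output errors for illustration:

```python
def mae(errors):
    """Mean absolute error."""
    return sum(abs(e) for e in errors) / len(errors)

def mse(errors):
    """Mean squared error (the MATLAB default)."""
    return sum(e * e for e in errors) / len(errors)

def sse(errors):
    """Sum of squared errors."""
    return sum(e * e for e in errors)

errs = [1.0, -2.0, 1.0]
print(mae(errs), mse(errs), sse(errs))  # 1.333... 2.0 6.0
```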
B. Determination of the Layers
1) The number of the layers
A feed-forward BP neural network generally consists of three layers, as shown in Figure 1. Hornik proved that an MLP with one hidden layer can approximate any continuous function to arbitrary precision, provided the functions used in the input and output layers are linear (for example, purelin) and the function used in the hidden layer is sigmoid [10, 11]. However, this is only a theoretical conclusion, and a 3-layer network sometimes fails to reach the required precision in real applications. Increasing the number of hidden layers can reduce the network error, but it makes the network more complex, and it also means that more than one hidden layer must be configured while there is still no reliable way to specify the number of neurons in each. Compared with a 4-layer network, a 3-layer network can achieve relatively high precision by adding hidden nodes, and its training process is simpler, so the 3-layer network is our first choice.
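The 3-layer structure preferred above (sigmoid hidden layer, linear output layer) amounts to a single forward pass of the following form. This is a hedged Python illustration, not the paper's MATLAB code; the weight matrices W1, W2 and biases b1, b2 are hypothetical values standing in for trained parameters.

```python
import math

def logsig(n):
    """Log-sigmoid transfer function used in the hidden layer."""
    return 1.0 / (1.0 + math.exp(-n))

def forward(x, W1, b1, W2, b2):
    """3-layer forward pass: log-sigmoid hidden layer, linear output layer."""
    hidden = [logsig(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    output = [sum(w * hi for w, hi in zip(row, hidden)) + b
              for row, b in zip(W2, b2)]
    return output

# Hypothetical parameters: 2 inputs -> 3 hidden neurons -> 1 output
W1 = [[0.5, -0.3], [0.8, 0.1], [-0.2, 0.4]]
b1 = [0.1, -0.1, 0.0]
W2 = [[1.0, -0.5, 0.7]]
b2 = [0.05]
print(forward([0.2, 0.6], W1, b1, W2, b2))
```

Adding hidden nodes enlarges W1, b1 and the hidden layer while keeping this 3-layer structure, which is what makes it simpler to train than a deeper network.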
2) The transfer function (TFi) and performance function
(PF) between layers
The network used in the proposed system is the ideal 3-layer neural network; thus the transfer function between the input layer and the hidden layer is the nonlinear log-sigmoid function, and the transfer function of the output layer is the linear purelin function.