A Novel Fast Orthogonal Search Method for Design of Functional Link Networks and Their Use in System Identification
Hazem M. Abbas
Abstract
—In this paper, a functional link neural net (FLN) capable of performing sparse nonlinear system identification is proposed. Korenberg's Fast Orthogonal Search (FOS) is adopted to detect the proper model and its associated parameters. The FOS algorithm is modified by first sorting all possible nonlinear functional expansions of the input pattern according to their correlation with the system output. The sorted functions are divided into equal-size groups, pins, where functions with the highest correlation with the output are assigned to the first pin. Lower-correlation members go to the following pin, and so forth. During the identification process, members in earlier pins are tried first. If a solution is not found, the next pins join the candidate pool until the identification process completes within a prespecified accuracy. The modified Gram-Schmidt orthogonalization and Cholesky decomposition are applied to create orthogonal functionals that can linearly fit the identified system. The proposed architecture is tested on noise-free and noisy nonlinear systems and shown to find sparse models that can approximate the experimented systems with acceptable accuracy.
I. INTRODUCTION
Identification of nonlinear dynamic systems is of considerable importance in many engineering applications, such as echo cancellation [1], device modeling [2], and nonlinear filter design [3], among others. Feedforward neural networks have been employed in system identification using a set of delayed inputs and outputs [4]. Multilayer perceptrons (MLP) trained with the backpropagation algorithm for system modeling were presented in [5]. Other neural models include radial basis function nets [6] and orthonormal nets with Legendre functions [7].

Functional link networks (FLN) [8] replace the hidden layer in MLP by providing a nonlinear functional expansion of the network input using functional links, for example polynomial basis functions. The net output is a linear sum of the basis functions. FLNs have proved capable of approximating nonlinear mappings and have been shown to successfully model nonlinear systems [4].

In system identification, the objective is to determine the significant model terms and their associated values. Normally, most of the model terms do not contribute to the system output, resulting in a sparse model representation. When a FLN is trained to identify a nonlinear system, all candidate terms represented by the functional links are treated equally. The values of all terms are estimated although only a few terms need to be identified, resulting in a great waste of computing power. In addition,
The author is with Mentor Graphics Egypt, 51 Beirut St., Heliopolis, Cairo 11571, Egypt (email: h.abbas@ieee.org).
the estimated model parameters will become inaccurate since a large number of insignificant terms are involved.

Many methods have been introduced to address the model selection problem, such as evolutionary algorithms ([9], [10]) and orthogonal least squares methods ([11], [12]). Evolutionary techniques rely on methods borrowed from nature, applying genetic operators to find the correct model. Orthogonal techniques use the pool of candidate terms to calculate a new set of orthogonal terms that reduce the squared error between the system and the model.

This paper presents a method to construct a minimal FLN network with polynomial functional links by employing a modified FOS [11] that sorts the candidate terms according to their correlation with the identified output, so that terms with the highest correlation coefficients are used to form the orthogonal space. Analysis and simulation results demonstrate the efficacy of the method in finding the optimal design of the FLN in a much shorter time than when the conventional FOS is used.

The paper is organized as follows. The characterization of the system identification problem is presented in Section II. Section III describes the use of FLN nets for system identification. Section IV reviews the FOS algorithm, while Section V introduces the sorted version of the FOS and its use to find the minimum polynomial expansion of the FLN. Analysis and experimental results of the proposed networks are presented in Section VI.

II. THE NONLINEAR SYSTEM IDENTIFICATION PROBLEM
The nonlinear system identification problem is depicted in Fig. 1. The system has an input signal, $x(n)$, and produces an output, $y(n)$, $n = 1, \cdots, N$, where $N$ is the record length.

Fig. 1. Identification System

1-4244-0991-8/07/$25.00 ©2007 IEEE

The identification model should approximate the system output when excited with the same input by minimizing the error, $J = \sum_{i=1}^{N} \big[y(i) - \hat{y}(i)\big]^2$, between the observed output, $y(n)$, and the modeled output, $\hat{y}(n)$, within an acceptable accuracy. A model described by a nonlinear autoregressive with exogenous inputs (NARX) structure can be expressed as
$$\hat{y}(n) = F\big(y(n-1), \ldots, y(n-K);\; x(n), \ldots, x(n-L)\big) + e(n)$$

where $F(\cdot)$ is a nonlinear function to be determined, $e(n)$ is i.i.d. model error, and $K$ and $L$ are the maximum lags in the output and input, respectively. When neural nets are used as the model, the nonlinearity, $F$, is replaced by the output of the sigmoidal hidden layer(s) in MLP [4], Gaussian basis functions in RBF [13], Chebyshev basis functions [14], or higher-degree polynomials in FLN [15].

III. FUNCTIONAL LINK NETWORKS (FLN)
FLN nets approximate a desired single output of a sparse system using a small set of basis functions $\{p_i,\ i = 0, \cdots, M'\}$, $M' \ll M$, and a set of associated weights, $\{w_i,\ i = 0, \cdots, M'\}$, so that the output is expressed as

$$\hat{y}(n) = \sum_{i=0}^{M'} w_i\, p_i(n)$$

where $M$ is the total number of functional links that can be composed by the network, $M'$ is the number of functional links that can reproduce the sparse system, $p_0 = 1$, and all basis functions are linearly independent. Figure 2 shows a FLN network approximating a single output. Multidimensional function approximation using FLN nets was introduced in [16] and analyzed in more detail in [15].
Fig. 2. Functional Link Network Structure
Assume that there are $N$ input-output pattern pairs to be learned by the FLN, and that the input vector, $U \in R^d$, is composed of all possible $d$ lags in the input and output, i.e., $d = L + K$, while the resultant output, $\hat{y}$, is a scalar value. Each input pattern is passed through a functional expansion block producing a corresponding $(M+1)$-dimensional $(M \geq d)$ expanded vector. Considering all $N$ patterns, the input-output relationship may be expressed as

$$\hat{\mathbf{y}} = P\, \mathbf{w}^T$$

where $P = [\mathbf{p}_0\ \mathbf{p}_1 \cdots \mathbf{p}_M]$ is an $N \times (M+1)$-dimensional matrix, $\mathbf{p}_i$ is an $N$-dimensional vector representing the $i$th basis function, and $\hat{\mathbf{y}}$ is the $N$-dimensional estimated output. In order to find the weight vector, $\mathbf{w}$, a set of $N$ simultaneous equations needs to be solved. If the basis functions are formed as nonlinear polynomials of the lagged input and output samples, the weights can be calculated using the least-squares error method based on the vector of output signals, $\mathbf{y}$, and the regressor data represented by $P$, i.e.,

$$\mathbf{w} = (P^T P)^{-1} P^T \mathbf{y}$$
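To make the expansion and least-squares step concrete, here is a small NumPy sketch; the helper name and dimensions are illustrative assumptions, not from the paper. A degree-2 polynomial expansion of an input with $L = 2$ lags yields 10 candidate functionals, and the weights follow from a single least-squares solve.

```python
import numpy as np
from itertools import combinations_with_replacement

# Hypothetical sketch: build a polynomial functional-link regressor matrix P
# from lagged inputs, then solve the least-squares problem
# w = (P^T P)^{-1} P^T y.
def functional_expansion(x, L=2, degree=2):
    """Columns: p_0 = 1 plus all monomials of x(n), ..., x(n-L)."""
    lags = np.column_stack([np.roll(x, l) for l in range(L + 1)])
    lags = lags[L:]                      # drop rows containing wrapped samples
    cols = [np.ones(len(lags))]          # p_0 = 1
    for d in range(1, degree + 1):
        for idx in combinations_with_replacement(range(L + 1), d):
            cols.append(np.prod(lags[:, idx], axis=1))
    return np.column_stack(cols)

rng = np.random.default_rng(0)
x = rng.standard_normal(200)
P = functional_expansion(x, L=2, degree=2)       # 198 x 10 regressor matrix
# sparse target: only two of the ten candidate terms are active
y = 1.5 * P[:, 1] - 0.7 * P[:, 5]
w, *_ = np.linalg.lstsq(P, y, rcond=None)        # estimates ALL ten weights
```

Only two entries of $\mathbf{w}$ are nonzero here, yet the solve pays for all ten columns, which is exactly the waste in sparse systems that the paper discusses.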
The least-squares method considers each polynomial term as equally important and estimates all associated weights even when only a small subset of the polynomials needs to be identified (which is the case for sparse systems); this amounts to a huge waste of computational resources. Additionally, estimating a large number of insignificant terms introduces inaccuracies into the estimated terms. Orthogonal least squares (OLS) search techniques [12] alleviate this problem by applying forward stepwise model selection to find the significant terms. All terms are examined to determine how much each contributes to modeling the desired system. Several methods have been proposed to improve on OLS techniques, such as Korenberg's Fast Orthogonal Search (FOS) [11], [17]. In this work, a variation of the FOS is proposed to make it more computationally efficient in finding the exact model.

IV. FAST ORTHOGONAL SEARCH (FOS) [11]
The FLN output can be expressed as a time series

$$\hat{y}(n) = \sum_{i=0}^{M} w_i\, p_i(n) + e(n) \quad (1)$$

where $p_0(n) = 1$ and, for $m \geq 1$, $p_m(n) = x(n-l_1) \cdots x(n-l_j)\, y(n-k_1) \cdots y(n-k_i)$, with $0 \leq l_v \leq L$ and $0 \leq k_v \leq K$. Using the orthogonal search method, the model in (1) is expressed as

$$\hat{y}(n) = \sum_{i=0}^{M} g_i\, s_i(n) + e(n) \quad (2)$$
The new orthogonal basis functions, $s_i(n)$, are constructed from the $p_i(n)$ using the modified Gram-Schmidt procedure so that they are orthogonal over the observation period of the output. The parameters, $g_i$, are selected to minimize the mean-squared error over the interval, i.e.,

$$\overline{e^2(n)} = \overline{\Big[y(n) - \sum_{i=0}^{M} g_i\, s_i(n)\Big]^2} = \overline{y^2(n)} - \sum_{i=0}^{M} g_i^2\, \overline{s_i^2(n)} \quad (3)$$

The overbar denotes the time average. It can easily be shown that the addition of a new term,
$w_r p_r(n)$, will reduce the error by the amount

$$Q(r) = g_r^2\, \overline{s_r^2(n)}$$

where

$$g_r = \frac{\overline{y(n)\, s_r(n)}}{\overline{s_r^2(n)}}. \quad (4)$$
In order to expand the model, it is required to calculate the quantity $Q$ for all candidates and choose the one for which $Q$ is the greatest. Note also that the construction of the orthogonal functions, $s_i(n)$, is computationally intensive as it must be done for each candidate term [18]. The Fast Orthogonal Search avoids this problem. Using Gram-Schmidt orthogonalization, the functions $s_i(n)$ are created as follows:

$$s_0(n) = 1$$
$$s_r(n) = p_r(n) - \sum_{i=0}^{r-1} \alpha_{ri}\, s_i(n), \quad r = 1, \cdots, M \quad (5)$$

where

$$\alpha_{ri} = \frac{\overline{p_r(n)\, s_i(n)}}{\overline{s_i^2(n)}}, \quad i = 0, \cdots, r-1$$
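As an illustration only (not the paper's implementation), the construction in (5) can be written out directly in NumPy; the nested loop over all earlier functions is the per-candidate cost that motivates the fast recursions.

```python
import numpy as np

# Explicit construction of s_r(n) per eq (5) with time-averaged inner
# products; illustrative sketch, deliberately naive.
def gram_schmidt_series(P):
    """Columns of P are p_0(n) = 1, p_1(n), ...; returns S and alpha."""
    N, M1 = P.shape
    S = np.zeros((N, M1))
    alpha = np.zeros((M1, M1))
    S[:, 0] = 1.0
    for r in range(1, M1):
        for i in range(r):   # alpha_ri = <p_r s_i> / <s_i^2>
            alpha[r, i] = np.mean(P[:, r] * S[:, i]) / np.mean(S[:, i] ** 2)
        S[:, r] = P[:, r] - S[:, :r] @ alpha[r, :r]      # eq (5)
    return S, alpha

rng = np.random.default_rng(0)
x = rng.standard_normal(300)
P = np.column_stack([np.ones(300), x, x ** 2, x ** 3])
S, alpha = gram_schmidt_series(P)
cross = S.T @ S / 300    # time-averaged <s_m s_r>; off-diagonals vanish
```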
Define $D(m,r) = \overline{p_m(n)\, s_r(n)}$. Using (5),

$$D(m,r) = \overline{p_m(n)\, p_r(n)} - \sum_{i=0}^{r-1} \alpha_{ri}\, D(m,i), \quad (6)$$

$$D(m,0) = \overline{p_m(n)}, \qquad r = 1, \cdots, M;\; m = 1, \cdots, M \quad (7)$$

Similarly,
$$\overline{s_m^2(n)} = \overline{p_m^2(n)} - \sum_{r=0}^{m-1} \alpha_{mr}^2\, \overline{s_r^2(n)}, \quad m = 1, \cdots, M \quad (8)$$

and

$$E(m) = \overline{s_m^2(n)}, \quad E(0) = 1, \quad m = 0, \cdots, M \quad (9)$$

This results in

$$\alpha_{mr} = \frac{D(m,r)}{E(r)}, \quad r = 0, \cdots, m-1;\; m = 1, \cdots, M \quad (10)$$

This provides two recursive equations:
$$D(m,r) = \overline{p_m(n)\, p_r(n)} - \sum_{i=0}^{r-1} \frac{D(r,i)\, D(m,i)}{E(i)}, \quad r = 1, \cdots, m-1;\; m = 2, \cdots, M \quad (11)$$

$$E(m) = \overline{p_m^2(n)} - \sum_{r=0}^{m-1} \frac{D^2(m,r)}{E(r)}, \quad m = 1, \cdots, M \quad (12)$$
By repeating a similar procedure with the output, $y$, one obtains another recursive equation,

$$C(m) = \overline{y(n)\, p_m(n)} - \sum_{r=0}^{m-1} \alpha_{mr}\, C(r), \quad m = 1, \cdots, M \quad (13)$$

with $C(0) = \overline{y(n)}$.
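A minimal sketch of the recursions (11)-(13), assuming the candidates are the columns of a matrix `P` with $p_0(n) = 1$; this naive version recomputes the time averages directly and does not reproduce the paper's additional speed-up from lagged correlations. It also forms the ratios $g_m$ and error reductions $Q(m)$ used for term selection.

```python
import numpy as np

# Illustrative sketch of the FOS recursions (11)-(13): the tables D, E, C
# are built from time-averaged correlations of the candidates p_m(n), so
# the orthogonal series s_m(n) is never formed explicitly.
def fos_tables(P, y):
    """Columns of P are the candidates p_0(n) = 1, p_1(n), ..., p_M(n)."""
    M1 = P.shape[1]
    D = np.zeros((M1, M1))    # D(m, r) = <p_m s_r>
    E = np.zeros(M1)          # E(m)    = <s_m^2>
    C = np.zeros(M1)          # C(m)    = <y s_m>
    E[0] = 1.0
    C[0] = np.mean(y)
    for m in range(1, M1):
        D[m, 0] = np.mean(P[:, m])
        for r in range(1, m):
            D[m, r] = np.mean(P[:, m] * P[:, r]) - sum(
                D[r, i] * D[m, i] / E[i] for i in range(r))      # eq (11)
        E[m] = np.mean(P[:, m] ** 2) - sum(
            D[m, r] ** 2 / E[r] for r in range(m))               # eq (12)
        C[m] = np.mean(y * P[:, m]) - sum(
            D[m, r] / E[r] * C[r] for r in range(m))             # eq (13)
    g = C / E                 # g_m = C(m)/E(m)
    Q = g ** 2 * E            # error reduction contributed by term m
    return g, Q, E

rng = np.random.default_rng(0)
x = rng.standard_normal(400)
P = np.column_stack([np.ones(400), x, x ** 2])
y = 2.0 + 3.0 * x
g, Q, E = fos_tables(P, y)
```

For this target, which lies in the span of the first two candidates, the residual $\overline{y^2(n)} - \sum_m g_m^2 E(m)$ vanishes and $Q(1)$ dominates $Q(2)$.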
Using (13) and (12),

$$g_m = \frac{C(m)}{E(m)}, \quad m = 0, \cdots, M \quad (14)$$

$$Q(m) = g_m^2\, E(m) \quad (15)$$

$$\overline{e^2(n)} = \overline{y^2(n)} - \sum_{m=0}^{M} g_m^2\, E(m) \quad (16)$$

The $M'$ selected basis functions are those that produce the largest $Q(m)$ in (15). Finally, the weight values associated with the selected functional links, $w_i$, can be calculated as follows [19]:

$$w_m = \sum_{i=m}^{M'} g_i\, v_i \quad (17)$$

$$v_m = 1, \qquad v_i = -\sum_{r=m}^{i-1} \alpha_{ir}\, v_r \quad (18)$$

This completes the identification of the FLN basis functions needed to represent the sparse system and their associated weight values.

The speed-up offered by this algorithm is achieved by exploiting the lagged nature of the difference equation in the $p_i(n)$ terms, which makes it possible to accelerate the calculation of the different time averages in the FOS algorithm. This is done by relating the time averages to input and output means and correlations and then making small corrections for the finite record length. More details can be found in [20]. The method has been shown to save substantial computation time and memory storage. Also, the recursive equations in (12) and (13) require calculations to be performed only for the current candidate, while values for previous candidates are reused.
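Equations (17) and (18) amount to a back-substitution from the orthogonal-series coefficients $g_i$ to the weights $w_m$ of the original basis. A sketch under assumed conventions (`alpha[i, r]` holds $\alpha_{ir}$; this is not the paper's code), with a small self-contained Gram-Schmidt demo to exercise it:

```python
import numpy as np

def weights_from_g(g, alpha):
    """Back-substitution (17)-(18): recover w_m from g_i and alpha_ir."""
    M1 = len(g)
    w = np.zeros(M1)
    for m in range(M1):
        v = np.zeros(M1)
        v[m] = 1.0                                              # v_m = 1
        for i in range(m + 1, M1):
            v[i] = -sum(alpha[i, r] * v[r] for r in range(m, i))  # eq (18)
        w[m] = sum(g[i] * v[i] for i in range(m, M1))             # eq (17)
    return w

# demo: orthogonalize three polynomial terms, fit, and recover the weights
rng = np.random.default_rng(0)
x = rng.standard_normal(400)
P = np.column_stack([np.ones(400), x, x ** 2])
y = 1.0 + 2.0 * x - 0.5 * x ** 2
S = np.array(P, dtype=float)
alpha = np.zeros((3, 3))
for r in range(1, 3):
    for i in range(r):
        alpha[r, i] = np.mean(P[:, r] * S[:, i]) / np.mean(S[:, i] ** 2)
    S[:, r] = P[:, r] - S[:, :r] @ alpha[r, :r]     # eq (5)
g = np.array([np.mean(y * S[:, m]) / np.mean(S[:, m] ** 2) for m in range(3)])
w = weights_from_g(g, alpha)   # should recover [1.0, 2.0, -0.5]
```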
V. SORTED FAST ORTHOGONAL SEARCH (SFOS)

The FOS can be enhanced further through different variations. One way is to arrange the basis function terms so that the most probable
terms are selected first. A method was proposed in [21] where the terms are grouped into disjoint subsets that are searched sequentially: the set with linear $x$ terms is searched first, followed by the $y$ terms, then by the nonlinear $xx$ terms, the $yy$ terms, and finally by the $xy$ terms. A similar approach was proposed in [22] in their genetic evolution of a FLN by favoring simple models first; more complex nonlinear individuals are searched only when the simpler linear terms become unable to fit the required mapping.

This work presents another modification of the FOS algorithm. The main idea is to exploit the fact that basis functions with the highest correlation with the output are more likely to be principal system terms [10]. The algorithm starts by sorting all $M$ candidate basis functions in descending order according to their correlation with the output and grouping them into a number of pins, $V$. Each pin is assigned an equal number of candidate functions, $R = M/V$, where candidates with the highest correlation with the output go to the first pin, lower-correlation terms are assigned to the following one, and so forth. The conventional FOS is then applied to the candidates in the first pin. The majority of the candidates needed to fully represent the system will be picked there, and a great reduction in the representation error is expected. If a solution within the acceptable accuracy cannot be attained after testing all first-pin candidates, the members of the second pin are added to the candidate pool, and the FOS algorithm continues until a final acceptable solution is reached.

Since the term with the largest error-reduction contribution,
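The sorting-and-pinning step can be sketched as follows; the function name, pin layout, and toy data are illustrative assumptions, not the paper's code.

```python
import numpy as np

def sort_into_pins(P, y, R):
    """Rank candidate columns of P by squared correlation with y, then
    split the ranking into pins of R candidates each."""
    rho2 = np.zeros(P.shape[1])
    yc = y - y.mean()
    for m in range(1, P.shape[1]):          # p_0 = 1 carries no correlation
        pc = P[:, m] - P[:, m].mean()
        denom = np.mean(pc ** 2) * np.mean(yc ** 2)
        rho2[m] = np.mean(pc * yc) ** 2 / denom if denom > 0 else 0.0
    order = np.argsort(-rho2)               # descending rho^2
    return [order[i:i + R] for i in range(0, len(order), R)]

rng = np.random.default_rng(0)
x = rng.standard_normal(300)
P = np.column_stack([np.ones(300), x ** 2, x])   # candidates p_0, p_1, p_2
pins = sort_into_pins(P, x, R=2)                 # output y = x here
```

FOS is then run on `pins[0]` only; later pins join the candidate pool when the target accuracy is not met.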
$Q(r)$, should be selected first, moving the terms that are highly correlated with the output higher up the candidate list will result in much faster convergence to the minimum basis-function architecture. To demonstrate the validity of this conjecture, let us examine the procedure of selecting the first term. To calculate $Q(1)$, we start with the values for the first, constant basis function:

$$m = 0, \quad E(0) = 1, \quad C(0) = \overline{y}, \quad D(1,0) = \overline{p_1(n)}$$
The reduction in error by the chosen term is

$$Q(1) = g_1^2\, \overline{s_1^2(n)} = g_1^2\, E(1) = \frac{C^2(1)}{E(1)}$$
Using (13) and (12),

$$C(1) = \overline{y(n)\, p_1(n)} - \alpha_{10}\, C(0) = \overline{y(n)\, p_1(n)} - \overline{y(n)}\;\overline{p_1(n)}$$

$$E(1) = \overline{p_1^2(n)} - \alpha_{10}\, D(1,0) = \overline{p_1^2(n)} - \big[\overline{p_1(n)}\big]^2$$
This gives

$$Q(1) = \frac{\Big[\overline{y(n)\, p_1(n)} - \overline{y(n)}\;\overline{p_1(n)}\Big]^2}{\overline{p_1^2(n)} - \big[\overline{p_1(n)}\big]^2} = \frac{\Big[\overline{y(n)\, p_1(n)} - \overline{y(n)}\;\overline{p_1(n)}\Big]^2}{\sigma_{p_1}^2} \quad (19)$$

where $\sigma_{p_1}^2$ is the variance of the term $p_1(n)$. The correlation coefficient, $\rho(y, p_1)$, between the output and the tested term is defined as
$$\rho^2(y, p_1) = \frac{\Big[\overline{\big(y(n) - \overline{y(n)}\big)\big(p_1(n) - \overline{p_1(n)}\big)}\Big]^2}{\sigma_{p_1}^2\, \sigma_y^2}$$
It is straightforward to show that

$$\sigma_{p_1}^2\, Q(1) = \rho^2(y, p_1)\, \sigma_y^2\, \sigma_{p_1}^2$$

and hence

$$Q(1) \propto \rho^2(y, p_1) \quad (20)$$

Therefore, the candidate with the highest $Q(1)$ is the term with the maximum $\rho^2(y, p_1)$ with respect to the observed output. This finding justifies the proposed ordering of the candidates. By removing this candidate's contribution from the original output and repeating the process with the remaining candidates, the above analysis will always select the candidate with the highest correlation with the new output. It should be noted that such an ordering does not guarantee that all significant candidates will be placed high on the list. However, the CPU time needed by the FOS algorithm is an upper bound for the proposed SFOS. As will be shown in the experiments, the SFOS performs much faster than the FOS.
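A quick numerical check of (19)-(20) on simulated data; the signals here are arbitrary, chosen only to exercise the identity.

```python
import numpy as np

# Verify that Q(1) computed per eq (19) equals rho^2(y, p_1) * var(y),
# i.e., Q(1) is proportional to the squared correlation coefficient.
rng = np.random.default_rng(1)
p1 = rng.standard_normal(1000)
y = 0.8 * p1 + rng.standard_normal(1000)          # arbitrary test signal

cov = np.mean(y * p1) - np.mean(y) * np.mean(p1)
var_p = np.mean(p1 ** 2) - np.mean(p1) ** 2
var_y = np.mean(y ** 2) - np.mean(y) ** 2

Q1 = cov ** 2 / var_p                             # eq (19)
rho2 = cov ** 2 / (var_p * var_y)                 # squared correlation
```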
TABLE I
EXAMPLE 1: ORDERING OF BASIS FUNCTIONS BEFORE AND AFTER SORTING

No.   Candidate location   Location after sorting
 1          20                      3
 2         118                      7
 3         282                     78
 4         357                     12
 5         403                      4
 6         411                     13
 7         715                      1
 8         822                     76
 9         831                      5
10        1495                      2
VI. SIMULATION RESULTS AND ANALYSIS
The proposed FLN-SFOS identification algorithm has been tested using a set of experiments. We describe two examples that demonstrate the performance of the algorithm in identifying a sparse 3rd-order system. The locations of the principal basis functions are chosen randomly. The corresponding weight values of the chosen functions are generated from a uniform distribution bounded by the upper and lower values of each kernel $([-2.5, 2.5])$. The first example demonstrates the ability of the algorithm to correctly identify a system driven by a white Gaussian input with noise-free measurement. The second example describes the algorithm's performance in the presence of white Gaussian measurement noise. In the experiments, the input sequence, $x(n)$, is drawn from a zero-mean unit-variance white Gaussian distribution and the output is observed for a record length of $500 + (K + L)$. The first $K + L$ samples are discarded to eliminate any error that might result from calculating the time averages.
A. Example 1: A third-order system with no measurement noise
The first example is of a 3rd-order system with time delays equal to $(L = 21;\ K = 0)$. This amounts to a total number of basis functions of $M = 2024$. Only 10 candidates $(M' = 10 \ll M)$ have been used to generate the output. A pin size of $R = 10$ is used, which results in 21 pins to try. Table I shows the ordering of the selected basis functions before and after the correlation-based sorting process. One basis function is linear and the rest are combinations of nonlinear 2nd- and 3rd-order terms of $x$. The sorting process resulted in all significant terms being grouped within the first pin (the highest candidate is in the 78th position). If the conventional FOS is applied to this data, 1495 terms have to be tested in order to completely reconstruct the original system. With the SFOS, the process terminates after the 78th term is tested, even before trying all candidates in the pin. The order of selection of the basis functions by the SFOS is 1, 2, 3, 7, 5, 4, 12, 13, 78, and 76; the terms with the highest $\rho^2(y, p_i)$ were selected first. When the FOS was applied to the system, it required 2.2969 sec of computing time. The proposed SFOS-FLN algorithm took only 0.093 sec of computing time, a significant speed-up factor of 24. Expectedly, both algorithms reached the same representation error.
B. Example 2: A third-order system with Gaussian noise added

The previous experiment was repeated after adding white Gaussian noise to the output with an SNR equal to 10 dB. As in the first example, the input is 500 data points drawn from a white Gaussian noise with zero mean and unit variance. The measurement noise is independent of the input signal. Due to the noise, it is expected that extra basis functions are added to the final solution in an attempt to fit the noise. The FOS managed to identify the original basis functions. However, the corresponding weights, $w_i$, were different from the original ones. Moreover, nine extra basis functions were added to the model in order to achieve a final mse equal to 1.528. After candidate sorting, once again all significant candidates were placed in the first pin. Naturally, the exact locations within the pin differed from those in the noise-free case due to the new correlation values caused by the noisy output. The SFOS managed to identify the 10 original candidates with exceptional accuracy in the weight values and extracted just one extra false candidate, with a very small weight value. However, the final mse was 1.7, an 11% increase over the mse obtained by the FOS. The CPU time needed for the noisy case was 0.1094 sec for the SFOS and 2.2969 sec for the FOS, a speed-up of about 22. It is worth mentioning that the speed-up factor depends on the number of candidate functionals, $M$, and the number of pins, $V$.

The SFOS-FLN also needs to be tested on classification problems, to assess the capability of the algorithm in producing nonlinear discriminant functions based on the higher-order polynomials offered by the FLN.

VII. CONCLUSIONS
In this paper, an approach, SFOS, for constructing FLNs for sparse polynomial system identification is presented. The proposed algorithm exploits the fact that FLN basis functions with the highest correlation with the output are more likely to be principal system terms. The algorithm sorts all available candidates in descending order according to their correlation with the output and assigns them to different pins of fixed size. It has been shown that such an ordering leads to principal terms being selected first. The FOS algorithm is applied to pick the correct candidates and calculate their associated weight values using orthogonal search and Cholesky decomposition. The SFOS algorithm first examines the basis-function candidates in the first pin. If the selected functions are unable to produce an acceptable solution, subsequent candidates in the following pins are added to the pool. The process ends when a solution is found and the identified system is successfully reproduced. In the absence of measurement noise, the algorithm produced exact results when applied to sparse second- and third-order systems. For noisy outputs, the algorithm managed to detect the correct terms with a small error in the kernel values. In either case, the SFOS has been shown to provide a considerable speed-up over the FOS.

REFERENCES

[1] O. Agazzi and D. G. Messerschmitt, "Nonlinear echo cancellation of data signals," IEEE Transactions on Communications, vol. 30, pp. 2421-2433, 1982.
[2] G. Stegmayer, "Volterra series and neural networks to model an electronic device nonlinear behavior," in Proceedings of the IEEE International Joint Conference on Neural Networks (IJCNN), 2004, pp. 2907-2910.
[3] J. D. Taft, "Quadratic linear filters for signal detection," IEEE Transactions on Signal Processing, vol. 39, pp. 2557-2559, 1991.
[4] S. Chen, S. Billings, and P. Grant, "Nonlinear system identification using neural networks," International Journal of Control, vol. 51, no. 6, pp. 1191-1214, 1990.
[5] K. Narendra and K. Parthasarathy, "Identification and control of dynamical systems using neural networks," IEEE Transactions on Neural Networks, vol. 1, pp. 4-27, 1990.
[6] S. Elanyar and Y. Shin, "Radial basis function neural network for approximation and estimation of nonlinear stochastic dynamic systems," IEEE Transactions on Neural Networks, vol. 5, pp. 594-603, 1994.
[7] S. Yang and C. Tseng, "An orthonormal neural network for function approximation," IEEE Transactions on Systems, Man and Cybernetics, Part B, vol. 26, pp. 779-785, 1996.
[8] Y.-H. Pao, S. Phillips, and D. Sobajic, "Neural-net computing and intelligent control systems," International Journal of Control, vol. 56, no. 2, pp. 263-289, 1992.
[9] L. Yao, "Genetic algorithm based identification of nonlinear systems by sparse Volterra filters," IEEE Transactions on Signal Processing, vol. 47, no. 12, pp. 3433-3435, 1999.
[10] H. Abbas and M. Bayoumi, "Volterra system identification using adaptive real-coded genetic algorithm," IEEE Transactions on Systems, Man and Cybernetics, Part A, vol. 36, no. 4, pp. 671-684, 2006.
[11] M. J. Korenberg, "A robust orthogonal algorithm for system identification and time-series analysis," Biological Cybernetics, vol. 60, pp. 267-276, 1989.
[12] S. Chen, C. Cowan, and P. Grant, "Orthogonal least squares learning for radial basis function networks," IEEE Transactions on Neural Networks, vol. 2, no. 2, pp. 302-309, 1991.
[13] S. Chen, S. Ching, and K. Alkadhimi, "Regularized orthogonal least squares algorithm for constructing radial basis function networks," International Journal of Control, vol. 64, no. 5, pp. 829-837, 1996.
[14] S. Purwar, I. Kar, and A. Jha, "On-line system identification of complex systems using Chebyshev neural networks," Applied Soft Computing, vol. 7, no. 1, pp. 364-372, 2007.
[15] J. Patra, R. Pal, B. Chatterji, and G. Panda, "Identification of nonlinear dynamic systems using functional link artificial neural networks," IEEE Transactions on Systems, Man and Cybernetics, Part B, vol. 29, pp. 254-262, 1999.
[16] T. Yamada and T. Yabuta, "Dynamic system identification using neural networks," IEEE Transactions on Systems, Man and Cybernetics, vol. 23, pp. 204-211, 1993.
[17] E. Chng, S. Chen, and B. Mulgrew, "Efficient computational schemes for the orthogonal least squares algorithm," IEEE Transactions on Signal Processing, vol. 43, pp. 373-376, 1995.
[18] M. J. Korenberg and L. D. Paarmann, "Orthogonal approaches to time-series analysis and system identification," IEEE Signal Processing Magazine, pp. 29-43, July 1991.
[19] M. J. Korenberg, S. Bruder, and P. McIlroy, "Exact orthogonal kernel estimation from finite data records: Extending Wiener's identification of nonlinear systems," Annals of Biomedical Engineering, vol. 16, pp. 201-214, 1988.
[20] M. J. Korenberg, "Identifying nonlinear difference equation and functional expansion representations: The fast orthogonal algorithm," Annals of Biomedical Engineering, vol. 16, pp. 123-142, 1988.
[21] P. McIlroy, "Applications of nonlinear system identification," Master's thesis, Queen's University at Kingston, Ontario, Canada, 1986.
[22] A. Sierra, J. A. Macias, and F. Corbacho, "Evolution of functional link networks," IEEE Transactions on Evolutionary Computation, vol. 5, no. 1, pp. 54-65, 2001.
