A variant of the particle swarm optimization for the improvement of fault diagnosis in industrial systems via faults estimation

Lídice Camps Echevarría (a), Orestes Llanes Santiago (a,*), Juan Alberto Hernández Fajardo (a), Antônio J. Silva Neto (a,b), Doniel Jiménez Sánchez (a)

(a) Instituto Superior Politécnico José Antonio Echeverría, Cujae, La Habana, Cuba
(b) Instituto Politécnico, IPRJ/UERJ, RJ, Brazil
Article info

Article history: Received 13 January 2012; Received in revised form 6 November 2013; Accepted 6 November 2013; Available online 9 December 2013.

Keywords: Ant colony optimization; Fault diagnosis; Industrial systems; Particle swarm optimization; Robust diagnosis; Sensitive diagnosis.
Abstract

This paper proposes an approach for Fault Detection and Isolation (FDI) in industrial systems via fault estimation. FDI is formulated as an optimization problem, which is solved with the Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) algorithms. A study of the influence of some PSO and ACO parameters on the desirable characteristics of FDI, i.e. robustness and sensitivity, is also presented. As a consequence, the Particle Swarm Optimization with Memory (PSOM) algorithm, a new variant of PSO, was developed. PSOM has the objective of reducing the number of iterations/generations that PSO needs to execute in order to provide a diagnosis of reasonable quality. The proposed approach is tested using simulated data from a DC Motor benchmark. The results and their analysis indicate the suitability of both the approach and the PSOM algorithm.

© 2013 Elsevier Ltd. All rights reserved.
1. Introduction
A fault is an unpermitted deviation of at least one characteristic property or parameter of a system from the acceptable, usual or standard operating condition (Simani et al., 2002). Faults can cause economic losses as well as damage to human capital and the environment. There is an increasing interest in the development of new methods for fault detection and isolation (FDI), also known as fault diagnosis, in relation to reliability, safety and efficiency (Isermann, 2005).
FDI methods are responsible for detecting, isolating and establishing the causes of the faults affecting the system. They should also guarantee the fast detection of incipient faults (sensitivity to faults) and the rejection of false alarms attributable to disturbances or spurious signals (robustness). FDI methods are broken down into three general groups: the process-history-based methods (Venkatasubramanian et al., 2002c), those based on qualitative models (Venkatasubramanian et al., 2002b), and the quantitative-model-based methods, also known as analytical methods (Venkatasubramanian et al., 2002a).
The quantitative-model-based methods make use of an analytical or computational model of the system. The great variety of proposed model-based methods comes down to a few basic concepts: the parity space, the observer approach, and the parameter identification or estimation approach (Isermann, 1984; Frank, 1990, 1996; Isermann, 2005).
Many papers and books have been devoted to describing and establishing links among the different approaches for model-based diagnosis (Frank, 1990; Venkatasubramanian et al., 2002a; Simani et al., 2002; Witczak, 2007; Metenidin et al., 2011). A clear description of each approach and its limitations is presented in Witczak (2007). In Witczak (2007) and Metenidin et al. (2011) it is recognized that the observer and parity space approaches do not always allow the isolation of actuator faults. For nonlinear models, the complexity of the observer design increases, while an exact model of the system is necessary for the parity space approach (Witczak, 2007; Metenidin et al., 2011).

The parameter estimation approach requires knowledge of the relationships between such parameters and the physical coefficients of the system, as well as of the influence of the faults on these coefficients (Frank, 1990, 1996; Isermann, 2005). This approach does not provide a good diagnosis in the case of sensor faults. Furthermore, it usually demands a high computing time, which makes it infeasible for most situations (Isermann, 2005; Witczak, 2007).
The topics of robustness and sensitivity are of high interest in FDI. Thus, many robust analytical methods have been developed (Isermann, 1984, 2005; Frank, 1990; Chen and Patton, 1999; Patton et al., 2000). However, the unavoidable process disturbances and the modelling errors make most FDI methods unfeasible in practical applications (Simani et al., 2002; Simani and Patton, 2008). Therefore, further research on the topic of robust and sensitive FDI methods is indispensable (Isermann, 1984; Simani et al., 2002; Simani and Patton, 2008).

* Corresponding author. Tel.: +53 7 2663204. E-mail address: orestes@electrica.cujae.edu.cu (O. Llanes Santiago).

Engineering Applications of Artificial Intelligence 28 (2014) 36–51. 0952-1976/$ - see front matter © 2013 Elsevier Ltd. All rights reserved. http://dx.doi.org/10.1016/j.engappai.2013.11.007
The FDI methods based on observers or parity space demand large efforts to generate robust and sensitive residuals for fault detection. The generation of a residual is highly dependent on the model that describes the system.

This paper proposes an approach for FDI in industrial systems via fault estimation. The proposal allows diagnosing the system based on a direct fault estimation. Soft computing techniques are recognized as attractive tools for solving various problems related to modern FDI (Witczak, 2007; Metenidin et al., 2011). This approach considers the use of metaheuristics in order to obtain a robust and sensitive diagnosis. Some FDI methods that use metaheuristics are reported in Witczak (2007), Yang et al. (2007), Wang et al. (2008), Camps Echevarría et al. (2010) and Metenidin et al. (2011).
The metaheuristics Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) have simple structures. Moreover, they have recently been applied to FDI (Liu et al., 2008; Liu, 2009; Samanta and Nataraj, 2009; Duarte and Quiroga, 2010; Metenidin et al., 2011). Therefore, they were selected for this approach.

This work is also aimed at studying the influence of PSO and ACO parameters on robustness and sensitivity. This study is the basis for the development of a new variant of PSO, named Particle Swarm Optimization with Memory (PSOM). The new algorithm has the objective of reducing the number of iterations/generations that PSO needs to execute in order to discover solutions of reasonable quality. This means less computational time, which allows a faster diagnosis. PSOM can be easily extended to other optimization problems.

The proposed approach also permits the direct estimation of the faults, regardless of whether they take place in the actuators, the process or the sensors. This estimation is based on the residual, which is obtained directly between the measurements of the output of the system and the output estimated by the model. This avoids restrictions on the nature of the model.

The main contributions of this paper can be summarized as follows: the study of a new approach for the development of robust and sensitive FDI methods based on direct fault estimation with the metaheuristics PSO and ACO; the study of the influence of their parameters in order to increase robustness and sensitivity; and the development of the new variant PSOM, which uses a pheromone matrix from ACO for storing the history of PSO, useful for reducing the computational cost required by PSO. The viability of the proposal is demonstrated by diagnosing simulated data from a DC Motor.

This paper is organized as follows. The second section introduces the modelling of faults and the model-based FDI methods via parameter estimation. The proposed approach for FDI is also described in this section. The third and fourth sections give a brief description of the PSO and ACO algorithms, respectively. The fifth section justifies and describes the PSOM algorithm. Afterward, the next section details the DC Motor case study and its simulations. The following sections present the experimental methodology, experiments and results, in that order. The tenth section presents a comparison between the parity space approach, diagnostic observers and our proposal for FDI. Finally, some concluding comments and remarks are presented.
2. Modelling faults and FDI based on direct fault estimation
FDI based on model parameters, which are partially unknown, requires online parameter estimation methods. For that purpose the input vector $u(t) \in \mathbb{R}^m$, the output vector $y(t) \in \mathbb{R}^p$ and the basic model structure must be known (Isermann, 2005).
The models for describing the systems depend on the dynamics of the process and on the objective to be reached with the simulation. The most used model is the linear time invariant (LTI) one, which has two representations: the transfer function (or transfer matrix), and the state space representation. The latter representation is also valid for nonlinear models. Let us express the input/output behavior of SISO (single input, single output) processes by means of ordinary linear differential equations:

$$y(t) = \psi^T(t)\,\Theta(t) \qquad (1)$$

where

$$\Theta(t) = [a_1 \;\ldots\; a_n \;\; b_0 \;\ldots\; b_m] \qquad (2)$$

and

$$\psi^T(t) = [-y^{(1)}(t) \;\ldots\; -y^{(n)}(t) \;\; u(t) \;\ldots\; u^{(m)}(t)] \qquad (3)$$
where $y^{(n)}(t)$ and $u^{(m)}(t)$ denote derivatives ($y^{(n)}(t) = d^n y(t)/dt^n$). The respective transfer function becomes, through the Laplace transformation:

$$G_p(s) = \frac{y(s)}{u(s)} = \frac{B(s)}{A(s)} = \frac{b_0 + b_1 s + \cdots + b_m s^m}{1 + a_1 s + \cdots + a_n s^n} \qquad (4)$$
The faults affecting the system may eventually change one or several parameters in the vector $\Theta(t)$. FDI based on model parameters is divided into two steps. The first is meant for the estimation of the model parameters vector $\Theta(t)$. The second is for detecting and isolating the faults based on known relationships between the model parameters, the physical coefficients of the system and the faults (Isermann, 1984, 2005).
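The first step of this classical scheme can be illustrated with a small equation-error least-squares fit. The sketch below is not from the paper: it assumes a hypothetical first-order discrete-time model $y[t] = a\,y[t-1] + b\,u[t-1]$ and recovers $\Theta = [a\; b]$ from simulated data.

```python
import numpy as np

def estimate_theta(u, y):
    """Equation-error least squares for the assumed first-order model
    y[t] = a*y[t-1] + b*u[t-1]; returns Theta = [a, b]."""
    psi = np.column_stack([y[:-1], u[:-1]])   # regressors psi[t] = [y[t-1], u[t-1]]
    theta, *_ = np.linalg.lstsq(psi, y[1:], rcond=None)
    return theta

# Simulated data with known parameters a = 0.9, b = 0.5.
rng = np.random.default_rng(0)
u = rng.uniform(-1.0, 1.0, 200)     # persistently exciting input
y = np.zeros(200)
for t in range(1, 200):
    y[t] = 0.9 * y[t - 1] + 0.5 * u[t - 1]

a_hat, b_hat = estimate_theta(u, y)
print(round(a_hat, 3), round(b_hat, 3))   # recovers a = 0.9, b = 0.5
```

Note that the fit only succeeds because the input is persistently exciting, which is exactly the limitation of the classical approach discussed below.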
The main drawback of this approach is that the model parameters should have physical meaning, i.e., they should correspond with the parameters of the system. In such situations, the detection and isolation of faults are very straightforward. Otherwise, it is usually difficult to distinguish a fault from a change in the parameters vector $\Theta(t)$. Moreover, the process of fault isolation may become extremely difficult because model parameters do not uniquely correspond with those of the system. It should also be pointed out that the detection of faults in sensors and actuators is possible but rather complicated (Witczak, 2007; Metenidin et al., 2011).

The two approaches commonly used for estimating the model parameters $\Theta(t)$ are classified with respect to the minimization function they use (Frank, 1990; Isermann, 2005):

• sum of least squares of the equation error;
• sum of least squares of the output error.

FDI based on parameter estimation considering the minimization of the sum of least squares of the output error requires numerical optimization methods. These methods give more precise parameter estimates, but the computational effort is bigger, and online applications are, in general, not possible (Isermann, 2005). Another typical limitation of parameter-estimation-based approaches is that the input signal should be persistently exciting (Witczak, 2007; Metenidin et al., 2011).

Instead of estimating the model parameters vector $\Theta$, let us consider explicitly the faults in a SISO system in a closed loop described by an LTI model as
$$y(s) = G_{yw}(s)\,w(s) + G_{yf_u}(s)\,f_u(s) + G_{yf_y}(s)\,f_y(s) + G_{yf_p}(s)\,f_p(s) \qquad (5)$$
where $w(s) \in \mathbb{R}$ is the reference signal of the control system, and $f_u, f_p, f_y \in \mathbb{R}$ are faults in the actuator, process and output sensors, respectively. The transfer function $G_{yw}(s)$ represents the dynamics of the system, while $G_{yf_u}(s)$, $G_{yf_p}(s)$ and $G_{yf_y}(s)$ are the transfer functions that represent the faults $f_u$, $f_p$ and $f_y$, respectively (Ding, 2008).

The proposed approach considers the estimation of the faulty parameters vector $\theta_f = [f_u \; f_y \; f_p]$ instead of $\Theta(t)$. Therefore, it requires a model that directly represents the effect of the faults in the actuator, process and sensors of the system. This kind of model is widely used in other model-based FDI methods, such as those based on observers or on parity spaces (Frank, 1990; Isermann, 2005; Ding, 2008).
The estimation of $\theta_f$ allows diagnosing the system in a direct way: from the minimization of the sum of the squares of the output errors. The optimization problem is described as follows:

$$\min F(\hat{\theta}_f) = \sum_{t=1}^{I} \left[ y_t(\theta_f) - \hat{y}_t(\hat{\theta}_f) \right]^2 \quad \text{s.t.} \quad \theta_f(\min) \le \hat{\theta}_f \le \theta_f(\max) \qquad (6)$$
where $I$ is the number of sampling instants and $\hat{y}_t(\hat{\theta}_f)$ is the estimated output at each time instant $t$. $\hat{y}_t(\hat{\theta}_f)$ is obtained from the solution of the model given by Eq. (5), and $y_t(\theta_f)$ is the output measured by the sensors at the same instant (Ding, 2008).

The approach proposed in this paper permits the direct estimation of faults taking place in the actuator, process or sensors. This alleviates one of the limitations of the model-based methods for FDI (Witczak, 2007; Metenidin et al., 2011).
In order to obtain robustness to model uncertainties, disturbances or noise, the ACO and PSO algorithms are applied to the fault estimation. These two algorithms have shown robustness in other applications, and their parameters can be manipulated in order to increase this characteristic. Depending on the model used with the proposed approach and on the application of PSO and ACO, there is no need for additional efforts in the generation of a robust residual. Thus, this proposal avoids another limitation of the model-based FDI methods (Witczak, 2007; Metenidin et al., 2011).

The proposed approach requires the following assumptions to be met:

• There is a known model of the system that represents the dynamics of the faults.
• The faults cannot be intermittent.

The steps of the proposed methodology are:

1. Formulate the optimization problem, see Eq. (6).
2. Take the vector $\hat{\theta}_f = \vec{0}$ as a solution of the optimization problem and compute the objective function $F(\hat{\theta}_f)$. If $F(\hat{\theta}_f) < 0.01$ then the system is not under the influence of faults. Otherwise, go to step 3.
3. Apply ACO, PSO or PSOM to solve the optimization problem, obtaining an estimate of $\theta_f$ (PSOM is recommended).
4. Diagnosis: if any component of $\hat{\theta}_f$ is different from zero, then the fault that corresponds to that component is affecting the system. The magnitude of the fault coincides with the value of the estimate.

Due to model uncertainties, noise in the measurements and other disturbances, the value of the objective function is not equal to zero, even when the estimate of the fault vector $\hat{\theta}_f$ coincides with the real fault vector $\theta_f$. Therefore, the estimated fault vector can be different from zero even when the system is not affected by faults. This causes uncertainty in the decision.

Step 2 of the methodology avoids this disadvantage: a threshold for the objective function is determined, under the consideration that the system is not affected by faults and that the measurements are affected by noise of up to 8%. If this threshold is exceeded, then it is decided that the system is under the effect of faults.
3. Particle swarm optimization
Many strategies that mimic different natural behaviors have been proposed for handling difficult optimization problems. Swarm Intelligence brings together optimization algorithms that are based on the observation of simplified social models. This is the case of Particle Swarm Optimization (PSO), which was introduced by Kennedy and Eberhart in 1995 (Kennedy and Eberhart, 1995; Kennedy, 1997). PSO is based on the social behavior of flocking birds and schooling fishes.

PSO has been applied in different fields requiring parameter optimization in a high-dimensional space. This is a result of its simplicity, high efficiency in searching, easy implementation and fast convergence to the global optimum (Kennedy and Eberhart, 1995; Kameyama, 2009). All these advantages, in addition to its applications in automatic control and system identification, and some recent results in the FDI area (Poli, 2007; Samanta and Nataraj, 2009; Liu, 2009; Duarte and Quiroga, 2010), motivated the selection of PSO in this study.
3.1. Description of the PSO algorithm
PSO works with a group or population (swarm) of $Z$ agents (particles), which are interested in finding a good approximation to the global minimum $x_0$ of the objective function $f : D \subseteq \mathbb{R}^n \to \mathbb{R}$. Each agent moves throughout the search space $D$. The position of the $z$-th particle is identified with a solution of the optimization problem. On each $l$-th iteration, its value is updated and it is represented by a vector $X_z^l \in \mathbb{R}^n$.

Each particle accumulates its historical best position $X_z(pbest)$, which represents the achieved individual experience. The best position achieved along the iterative procedure among all the particles, $X_{gbest}$, represents the collective experience.

The generation of the new position needs the current velocity of the particle $V_z^l \in \mathbb{R}^n$ and the previous position $X_z^{l-1}$:

$$X_z^l = X_z^{l-1} + V_z^l \qquad (7)$$
The vector $V_z^l$ is updated according to the following expression:

$$V_z^l = V_z^{l-1} + c_1 \Xi \left(X_z(pbest) - X_z^{l-1}\right) + c_2 \Xi \left(X_{gbest} - X_z^{l-1}\right) \qquad (8)$$
where $V_z^{l-1}$ is the previous velocity of the $z$-th particle; $\Xi$ denotes a diagonal matrix with random numbers in the interval [0,1]; and $c_1$, $c_2$ are the parameters that characterize the trend during the velocity update (Kennedy, 1997; Kameyama, 2009). They are called the cognitive and the social parameter, respectively, and represent how the individual and the social experience influence the next agent decision. Some studies have been carried out in order to determine the best values for $c_1$ and $c_2$. The values $c_1 = c_2 = 2$, $c_1 = c_2 = 2.05$, or $c_1 > c_2$ with $c_1 + c_2 \le 4.10$, are recommended (Kennedy, 1998; Carlisle and Dozier, 2001; Beielstein et al., 2002).
Some variants of the algorithm have been developed with the objective of improving characteristics of PSO such as velocity, stability and convergence. Eqs. (7) and (8) represent the canonical implementation of PSO.

Another well-known variant is the one with inertial weight, which considers either a constant inertial weight or an inertial weight reduction. The idea behind this variant is to add an inertial factor $\omega$ for balancing the importance of the local and the global search (Beielstein et al., 2002; Kameyama, 2009). This parameter $\omega$ affects the update of each particle velocity by the expression

$$V_z^l = \omega V_z^{l-1} + c_1 \Xi \left(X_z(pbest) - X_z^{l-1}\right) + c_2 \Xi \left(X_{gbest} - X_z^{l-1}\right) \qquad (9)$$
Nowadays, the most accepted strategy for $\omega$ is to establish $\omega \in [\omega_{\min}, \omega_{\max}]$ and to reduce its value according to the number of the current iteration $l$ by means of

$$\omega = \omega_{\max} - \frac{\omega_{\max} - \omega_{\min}}{Itr_{\max}}\, l \qquad (10)$$

where $Itr_{\max}$ is the maximum number of iterations to be reached. It is recommended to take $\omega_{\max} = 0.9$ and $\omega_{\min} = 0.4$ (Liang et al., 2006).
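Eq. (10) amounts to a linear interpolation between $\omega_{\max}$ and $\omega_{\min}$ over the run; a minimal sketch with the recommended values:

```python
def inertia_weight(l, itr_max, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight of Eq. (10)."""
    return w_max - (w_max - w_min) / itr_max * l

# Starts near w_max (favouring global search) and ends at w_min (local search).
schedule = [round(inertia_weight(l, 100), 3) for l in (0, 50, 100)]
print(schedule)   # [0.9, 0.65, 0.4]
```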
The basic PSO is recognized as a particular case of the inertial weight alternative, obtained by assigning $\omega = 1$ during the entire execution of the algorithm (Beielstein et al., 2002; Kameyama, 2009).

The parameter called constriction factor $\chi$ (Clerc and Kennedy, 2002; Kameyama, 2009) was introduced in order to facilitate the control of the particle velocity:

$$V_z^l = \chi \left[ V_z^{l-1} + c_1 \Xi \left(X_z(pbest) - X_z^{l-1}\right) + c_2 \Xi \left(X_{gbest} - X_z^{l-1}\right) \right] \qquad (11)$$

where

$$\chi = \frac{2}{\left| 2 - \varphi - \sqrt{\varphi^2 - 4\varphi} \right|} \qquad (12)$$
and $\varphi = c_1 + c_2 > 4$. The literature recommends setting $\chi = 0.729$ with $c_1 = c_2 = 2.05$. This is equivalent to using the inertia weight variant with $\omega = 0.729$ through the entire procedure and establishing $c_1 = c_2 = 1.49$ (Eberhart and Shi, 2001).

There are different topologies for PSO. In this work, the Gbest topology is used. It determines that all the particles are connected to each other and are part of a unique neighborhood (Kameyama, 2009). A pseudocode for PSO is given in Fig. 1.
4. Ant colony optimization
Ant Colony Optimization (ACO) was initially proposed for integer programming problems (Dorigo and Caro, 1992). ACO is inspired by the behavior of ants seeking a path between their colony and a source of food. This behavior is due to the deposit and evaporation of pheromone. ACO was successfully extended to continuous optimization problems (Dorigo and Blum, 2005; Silva Neto and Becceneri, 2009; Socha and Dorigo, 2008). An advantage of this algorithm is that its parameters can be manipulated in order to achieve more diversification or intensification during the search. This allows an efficient hybridization with other algorithms.
4.1. Description of the algorithm
For the continuous case the idea of ACO is to mimic the behavior of ants (Dorigo and Blum, 2005; Silva Neto and Becceneri, 2009; Socha and Dorigo, 2008). In this work, the adaptation to the continuous case reported in Silva Neto and Becceneri (2009) was applied. This variant has been applied to other problems (Becceneri and Zinober, 2001; Souto et al., 2005). In this variant, the first step is to divide the feasible interval of each variable of the problem into $k$ possible values $\chi_{kn}$. At each iteration the algorithm generates a family of $Z$ new ants. This generation uses the information obtained from the previous ants, which is saved in the pheromone accumulative probability matrix $PC$ (the matrix has dimensions $n \times k$, where $n$ is the number of variables of the problem). This matrix is updated at each iteration $l$ as

$$pc_{ij}(l) = \frac{\sum_{g=1}^{j} f_{ig}(l)}{\sum_{g=1}^{k} f_{ig}(l)} \qquad (13)$$
where $f_{ij}$ are the elements of the pheromone matrix $F \in M_{n \times k}(\mathbb{R})$ and they express the pheromone level of the $j$-th discrete value of the $i$-th variable. This matrix is updated at each iteration based on the evaporation factor $C_{evap}$ and the incremental factor $C_{inc}$ as

$$f_{ij}(l) = (1 - C_{evap})\, f_{ij}(l-1) + \delta_{ij,best}\, C_{inc}\, f_{ij}(l-1) \qquad (14)$$
where

$$\delta_{ij,best} = \begin{cases} 1 & \text{if } \chi_{ji} = x_i^{(best)} \\ 0 & \text{otherwise} \end{cases} \qquad (15)$$

$x_i^{(best)}$ being the $i$-th component of the best ant $X_{gbest}$.
The scheme for generating a new ant $X_z^l$ at iteration $l$ needs $n$ random numbers $q_{rand,1}, q_{rand,2}, \ldots, q_{rand,n}$. For each component $x_n^{(z)}$ of the ant $X_z^l$ to be generated, the following generation mechanism is set:

$$x_n^{(z)} = \begin{cases} \chi_{mn} & \text{if } q_{rand,n} < q_0 \\ \chi_{\hat{m}n} & \text{if } q_{rand,n} \ge q_0 \end{cases} \qquad (16)$$
where

$$m : f_{nm} \ge f_{n\bar{m}} \quad \forall\, \bar{m} = 1, 2, \ldots, k \qquad (17)$$

and

$$\hat{m} : \left(pc_{n\hat{m}} > q_{rand,n}\right) \wedge \left(pc_{n\hat{m}} \le pc_{nm}\right) \quad \forall\, m \ge \hat{m} \qquad (18)$$

that is, $m$ indexes the discrete value with the highest pheromone level (exploitation), while $\hat{m}$ is the first discrete value whose accumulated probability exceeds $q_{rand,n}$ (roulette-wheel exploration). The control parameter $q_0$ allows controlling the level of randomness during the ant generation. The pseudocode for ACO is given in Fig. 2.
Fig. 1. Pseudocode for the PSO algorithm.

Fig. 2. Pseudocode for the ACO algorithm.
5. New variant: Particle swarm optimization with memory (PSOM)
PSO has been successfully hybridized with other optimization methods. The interest in combining PSO with other methods is mostly based on its recognized characteristic of not improving the quality of the solutions when the number of iterations is increased (Angeline, 1998). Instead, PSO can find good enough solutions faster than other evolutionary algorithms (Angeline, 1998). In the particular case of PSO and ACO, some works have focused on the hybridization between them. For example, in Shelokar et al. (2007) an algorithm is proposed in which ACO is used to perform a local search around each particle of PSO at each iteration.

Our proposal, called Particle Swarm Optimization with Memory (PSOM), has the objective of reducing the number of iterations/generations that PSO needs to execute in order to discover solutions of reasonable quality. This means less computational time, which allows a faster diagnosis. This is also a desirable characteristic for online diagnosis systems in industrial processes. The implementation of the algorithm consists of two stages:

• First stage: A swarm explores the search space, i.e. PSO is applied. A pheromone matrix, as described in Section 4, stores the information of the search at each iteration.
• Second stage: Another swarm performs an intensification of the promising regions of the search space. For that purpose, its initial position is generated using the generation scheme of ACO, described in Section 4. For this generation scheme, the algorithm uses the pheromone matrix achieved in the first stage.

It can be concluded that PSOM uses the memory of ACO for storing the historical behavior of the agents of PSO during the first stage. In the second stage, the algorithm uses this memory for addressing the search of another swarm.
5.1. Description of the algorithm
This part describes the implementation of PSOM. In the first stage, it applies PSO: a swarm of $Z_1$ agents explores the search space $D$ following the structure of PSO. The new position $X_z^l \in \mathbb{R}^n$ and the new velocity $V_z^l$ of each agent $z$, with $z = 1, 2, \ldots, Z_1$, are updated at iteration $l$ following Eqs. (7) and (9), respectively. The values of the PSO parameters are based on the presented study of their influence on the robust and sensitive diagnosis. Following the idea of ACO for the continuous case (see Section 4), we divide the permissible interval of each variable into $k$ possible values $x_{nk}$, and a pheromone matrix $F$, with dimensions $n \times k$, is generated and updated at each iteration. In order to update the pheromone matrix $F$, we propose the following strategy:
• On each iteration $l$, each component of the vector $X_{gbest}$ is identified with only one of the $k$ discrete values assigned to the variable that corresponds with that ($n$-th) component. This connection generates the vector $X_{gbest}(d)$. We define a vectorial function $G : \mathbb{R}^n \to \mathbb{R}^n$. Let $x_{gbest,n}$ and $x_{gbest(d),n}$ be the $n$-th components of the vectors $X_{gbest}$ and $X_{gbest}(d)$, respectively; then it is established that

$$G_n(X_{gbest}) = x_{gbest(d),n} = x_{mn} \qquad (19)$$

where

$$m : \left| x_{gbest,n} - x_{mn} \right| = \min_{\bar{m}} \left| x_{gbest,n} - x_{\bar{m}n} \right|, \quad \bar{m} = 1, 2, \ldots, k \qquad (20)$$
With the new vector $G(X_{gbest}) = X_{gbest}(d)$, the matrix $F$ is updated as in ACO, see Eq. (14).

The second stage considers a swarm of $Z_2$ ($Z_2 < Z_1$) agents that perform an intensification of the promising regions of the search space $D$. For that purpose, the algorithm generates an initial swarm using the information stored in the pheromone matrix $F$ from the first stage. The mechanism for generating the initial swarm is the same as described in Eqs. (16)–(18). The pseudocode for the PSOM method is given in Fig. 3.
6. Benchmark DC motor
This section describes the main characteristics of the DC Motor control system DR300 (Ding, 2008). This system has been widely used for studying and testing new FDI methods due to its similarity with high-speed industrial control systems (Ding, 2008).

The system is formed by a permanent-magnet DC motor, which is coupled to a DC generator. The main function of this generator is to simulate the effect of a fault that results when a load torque is applied to the axis of the motor. The speed is measured by a tachometer that feeds the signal to a PI (proportional-integral) speed controller. Fig. 4 shows the block diagram of the DC Motor control system AMIRA DR300.

The voltage $U_T$ (Volts) is proportional to the rotational speed of the motor's axis $W$ (rad/s). $U_T$ is compared with $U_{ref}$ (Volts) in order to use the error for computing the control signal $U_C$ (Volts) of the PI speed controller. The AMIRA DR300 system also includes an
Fig. 3. Pseudocode for the PSOM algorithm.

Fig. 4. Block diagram of the DC Motor control system AMIRA DR300.