Computers & Operations Research 32 (2005) 1499–1514
www.elsevier.com/locate/dsw
A line up evolutionary algorithm for solving nonlinear constrained optimization problems

Haralambos Sarimveis∗, Athanassios Nikolakopoulos

School of Chemical Engineering, National Technical University of Athens, 9, Heroon Polytechniou str., Zografou Campus, Athens 15780, Greece
Abstract
In this work a complete framework is presented for solving nonlinear constrained optimization problems, based on the line-up differential evolution (LUDE) algorithm, which is proposed for solving unconstrained problems. Linear and/or nonlinear constraints are handled by embodying them in an augmented Lagrangian function, where the penalty parameters and multipliers are adapted as the execution of the algorithm proceeds. The LUDE algorithm maintains a population of solutions, which is continuously improved as it thrives from generation to generation. In each generation the solutions are lined up according to the corresponding objective function values. The positions in the line are very important, since they determine to what extent the crossover and the mutation operators are applied to each particular solution. The efficiency of the proposed methodology is illustrated by solving numerous unconstrained and constrained optimization problems and comparing it with other optimization techniques that can be found in the literature.
© 2003 Elsevier Ltd. All rights reserved.
Keywords:
Evolutionary algorithms; Constrained optimization; Penalty adaptation; Nonlinear programming
1. Introduction
Many problems in several scientific areas are formulated as nonlinear programming (NLP) problems. In most cases they consist not only of a nonlinear objective function that has to be optimized, but of a number of linear and/or nonlinear constraints as well that must be satisfied by the solution. Due to the complex nature of many of these problems, conventional optimization algorithms are often unable to provide even a feasible solution. For example, gradient optimization techniques have only been able to tackle special formulations, where continuity and convexity must be imposed. Obviously, the development of efficient algorithms for handling complex nonlinear optimization problems is of
∗ Corresponding author. Tel.: +302107723237; fax: +302107723138.
E-mail addresses: hsarimv@chemeng.ntua.gr, hsarimv@central.ntua.gr (H. Sarimveis).
0305-0548/$ - see front matter © 2003 Elsevier Ltd. All rights reserved.
doi:10.1016/j.cor.2003.11.015
great importance. In this work we present a new framework for solving such problems that belongs to the family of stochastic search algorithms known as evolutionary algorithms. Evolutionary algorithms are truly continuous counterparts of genetic algorithms [1]. They are direct parallel search techniques that use the greedy criterion to decide on their moves, but are also characterized by some built-in safeguards to prevent misconvergence. The greedy criterion is often relaxed by incorporating the simulated annealing concept [2,3], which occasionally permits an uphill move.
Evolutionary algorithms have many advantages compared to the traditional nonlinear programming techniques, among which the following three are the most important:
(i) They can be applied to problems that consist of discontinuous, non-differentiable and nonconvex objective functions and/or constraints.
(ii) They do not require the computation of the gradients of the cost function and the constraints.
(iii) They can easily escape from local optima.
Nonetheless, until very recently evolutionary algorithms had not been widely accepted, due to their poor performance in handling constraints. An excellent study comparing evolutionary algorithms for constrained optimization problems has been published by Michalewicz and Schoenauer [4], who grouped the methods into the following four categories:
(i) Methods based on preserving feasibility of solutions.
(ii) Methods based on penalty functions.
(iii) Methods based on a search for feasible solutions.
(iv) Other hybrid methods.
In the same paper a number of benchmark problems were presented and tested on the different evolutionary algorithms. The obtained results confirmed that evolutionary algorithms have difficulties in solving constrained optimization problems. In fact, there was no single algorithm that succeeded in solving all the benchmark problems.
Among the evolutionary algorithms, the methods based on penalty functions have proven to be the most popular. These methods augment the cost function, so that it includes the squared or absolute values of the constraint violations multiplied by penalty coefficients. However, penalty function methods are also characterized by serious drawbacks, since small values of the penalty coefficients drive the search outside the feasible region and often produce infeasible solutions [5], while imposing very severe penalties makes it difficult to drive the population to the optimum [5–7].
The above observations drove the research towards methods that are able to adapt the penalty parameters as the algorithm proceeds. The benefits of using such adaptive penalty strategies were already observed in traditional nonlinear programming [8]. The first attempts to adapt the penalty parameters are summarized in the review paper of Michalewicz and Schoenauer [4]. Later, researchers observed that the penalty methods can be significantly improved by using augmented Lagrange functions. Kim and Myung [9] introduced a two-phase evolutionary algorithm where the penalty method is implemented in the first phase, while during the second phase an augmented Lagrangian function is applied on the best solution of the first phase. Tahk and Sun [10] proposed the co-evolutionary augmented Lagrangian method (CEALM), which uses an evolution of two
populations with opposite objectives to solve constrained optimization problems. It is in fact a Lagrangian approach that transforms the constrained optimization problem into a zero-sum game, for which the Lagrange multiplier vector is the maximizing player and the parameter vector is the minimizing one. In two recent publications [11,12] the hybrid differential evolution (HDE) method was presented for solving unconstrained optimization problems, by adding two operators (acceleration and migration) to the differential evolution (DE) method [13]. In the same publications the HDE method was also used as the basis for solving constrained optimization problems. The approach presented in [11,12] used an augmented Lagrangian function and treated the problem as a min–max problem, where in the minimization phase the Lagrange multipliers are fixed and the HDE algorithm searches for the best values of the decision variables, while in the maximization phase the Lagrange multipliers are updated. Tang et al. [14] proposed a special hybrid genetic algorithm (HGA) with penalty function and gradient direction search, which uses mutation along the weighted gradient direction as the main operator and only in the later generations utilizes an arithmetic combinatorial crossover. Fung, Tang and Wang [15] presented the extended hybrid genetic algorithm (EHGA), which is a fuzzy-based methodology that embeds the information of the infeasible points into the evaluation function.

In this work we introduce the line-up differential evolution (LUDE) algorithm, which is an iterative stochastic methodology that starts with a random population of possible solutions. The fitness of each solution is measured by computing the corresponding value of the objective function. New generations are then produced by lining up the solutions according to their fitness and applying the LUDE crossover and mutation operators. The positions in the line are very important, since they determine to what extent the crossover and the mutation operators are applied to each particular solution. Linear and/or nonlinear constraints are handled by embodying them in an augmented Lagrangian function, where the penalty parameters and multipliers are adapted during the execution of the algorithm. The efficiency of the proposed framework is illustrated by solving numerous unconstrained and constrained optimization problems and comparing the results with those obtained by applying other evolutionary techniques from the literature to the same problems.

The rest of the paper is synthesized as follows: In the next section the LUDE algorithm is introduced. In Section 3, the complete framework for solving constrained optimization problems is presented. Section 4 compares the proposed method with other evolutionary algorithms on a number of benchmark unconstrained and constrained nonlinear optimization problems. The paper ends with the concluding remarks.
2. The LUDE algorithm
The LUDE algorithm aims at approximating the global optimum of a nonlinear objective function of N continuous decision variables, where the only constraints that are taken into account are the restrictions of the decision variables between upper and lower limits. Without loss of generality, in the rest of the paper the minimization problem will be considered:

min_x f(x)   (1)
subject to

x ∈ X = {x | x ∈ R^N; x_lo ≤ x ≤ x_up},   (2)

where f(x) is the objective function of the problem and x_lo, x_up are the lower and upper bounds that are posed on the variables of the problem.

The algorithm starts with a population of L possible solutions, which are selected assuming a uniform probability distribution for each variable. In some cases we can use an initial guess that is believed to be located close to the optimum. For example, in model predictive control problems, the solution obtained at each time point can be used as a guess at the next point. In such cases, the initial guess is taken as a nominal solution to which normally distributed random deviations are added to produce the starting population.

The innovation of the proposed method lies in the way it produces new generations of solutions. The idea is to line up the solutions in each generation in descending order according to the corresponding values of the objective function. The crossover and mutation operations are then applied to the solutions as follows: During the crossover operation, for each adjacent pair the difference vector is computed, weighted by a random number between 0 and 1 and added to the first vector of the pair. The produced solution replaces the first vector, if it produces an objective function value that is lower than the fitness value of the second vector. At the end of the crossover operation, the solutions are lined up again. Then the mutation operation is applied, taking into account that the worse members of the population should be altered substantially, while only small changes should be made to the best solutions. In order to achieve this, a different probability of mutation is calculated for each solution in the list, which is reduced from the top to the bottom of the line. This probability defines the number of variables in each solution that will undergo the mutation operation. Non-uniform mutation is utilized, since it adds more local search capabilities to the algorithm. Using this approach, in the first iterations the variables which are mutated can be located anywhere in the input space. As the algorithm proceeds, more conservative moves are preferred and thus the search is conducted on a more local level.

A detailed description of the LUDE algorithm follows next, assuming that we have preselected the maximum number of iterations mxiter and set the number of iterations iter = 0:

(1) Generate a population of L solutions x_i, i = 1, …, L, where the values for each decision variable are chosen randomly between the respective lower and upper bounds, assuming a uniform distribution.
(2) Increase the number of iterations by 1, iter = iter + 1.
(3) Compute the objective function value corresponding to each solution, f(x_i), i = 1, …, L.
(4) Arrange the solutions so that they form a line in descending order: x_1, x_2, x_3, …, x_L, where x_i precedes x_j if f(x_i) ≥ f(x_j), i, j = 1, …, L.
(5) Apply the crossover operator:
FOR i = 1, …, L − 1
    x_{i,new} = x_i + r (x_{i+1} − x_i), where r is a random number between 0 and 1
    IF f(x_{i,new}) < f(x_{i+1}) THEN x_i = x_{i,new}
END
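The line-up and crossover steps above can be sketched in code as follows. This is a minimal illustration, not the authors' implementation: the function names, the use of NumPy vectors, and the generic objective function f are our assumptions.

```python
import numpy as np

def line_up(pop, f):
    """Step (4): sort the population in descending order of objective
    value, so that the worst solution stands first in the line."""
    order = np.argsort([f(x) for x in pop])[::-1]
    return [pop[i] for i in order]

def crossover(pop, f, rng):
    """Step (5): for each adjacent pair, add the weighted difference
    vector to the first (worse) member of the pair; keep the move only
    if the new solution beats the second (better) member."""
    for i in range(len(pop) - 1):
        r = rng.random()                          # random weight in (0, 1)
        x_new = pop[i] + r * (pop[i + 1] - pop[i])
        if f(x_new) < f(pop[i + 1]):
            pop[i] = x_new
    return pop
```

Note that because the line is in descending order, an accepted move satisfies f(x_{i,new}) < f(x_{i+1}) ≤ f(x_i), so the crossover step never worsens a solution.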
(6) Arrange the solutions so that they form a line in descending order: x_1, x_2, x_3, …, x_L, where x_i precedes x_j if f(x_i) ≥ f(x_j), i, j = 1, …, L.
(7) Apply the non-uniform mutation operation:
FOR i = 1, …, L
    p_{m,i} = (L − i + 1)/L   (3)
    FOR j = 1, …, N
        Generate a random number r between 0 and 1
        IF r < p_{m,i}
            Generate a binary random number b and a random number r between 0 and 1
            IF b = 0 THEN x_{i,new}(j) = x_i(j) + (x_up(j) − x_i(j)) r d e^(−2·iter/mxiter)   (4)
            IF b = 1 THEN x_{i,new}(j) = x_i(j) − (x_i(j) − x_lo(j)) r d e^(−2·iter/mxiter)   (5)
        END
    END
    IF f(x_{i,new}) < f(x_i) THEN x_i = x_{i,new}
END
(8) Replace the solution corresponding to the maximum objective function value with the best solution found so far and increase the number of iterations by 1.
(9) If the number of iterations is equal to the maximum number of iterations mxiter, STOP. Otherwise return to step 2.
Remark 1. In the non-uniform mutation equations (4) and (5) the parameter d is set equal to 1. The value of d is modified when the LUDE algorithm is used for constrained optimization problems, as will be shown in the next section.
Remark 2. It is interesting to note that the only parameters in the algorithm that must be adjusted by the user are the size of the population and the termination criterion, i.e. the maximum number of iterations. This is a great advantage of the proposed method compared to other algorithms, whose performance is sensitive to the values of the design parameters.
3. Using the LUDE algorithm to solve constrained optimization problems
In this section we will consider the more general nonlinear constrained optimization problem, where in addition to the objective function (1) and the upper and lower bounds on the decision variables (2), the solution must also satisfy a number of equality and/or inequality constraints:

h_m(x) = 0,   m = 1, …, M,   (6)

g_k(x) ≥ 0,   k = 1, …, K.   (7)

As mentioned in the introduction, most EAs that have been implemented to solve constrained optimization problems use penalty function approaches in such a way that the fitness function F(x) is