
International Conference
24th Mini EURO Conference
“Continuous Optimization and Information-Based Technologies in the Financial Sector”
(MEC EurOPT 2010)
June 23–26, 2010, Izmir, TURKEY
ISBN 978-9955-28-598-4 R. Kasımbeyli, C. Dinçer, S. Özpeynirci and L. Sakalauskas (Eds.): MEC EurOPT 2010 Selected papers. Vilnius, 2010, pp. 76–80
© Izmir University of Economics, Turkey, 2010 © Vilnius Gediminas Technical University, Lithuania, 2010
A NOVEL FORECASTING HYBRID METHOD FOR UNCONSTRAINED OPTIMIZATION

Eleftheria N. Malihoutsaki¹, George S. Androulakis², Theodoula N. Grapsa¹

¹ University of Patras, Department of Mathematics, Rio, 265 04, Greece
² University of Patras, Department of Business Administration, Rio, 265 04, Greece
Abstract:
In unconstrained optimization, iterative methods are used to locate optimal points. By considering each iteration of the optimization process as a time step, the sequence of iterates may be regarded as a time series. In finance, there are many techniques for forecasting future values from historical data. In this paper, a novel hybrid method is introduced that properly combines the above techniques. This hybrid approach may improve the performance of an optimization technique, especially in cases of slow convergence. Preliminary results are discussed.
Keywords:
unconstrained optimization, ARMA models, time series forecasting.
1. Introduction
Optimization is concerned with the practical computational task of finding minima and/or maxima of functions of one, several, or even thousands of variables. The appropriate computational method depends crucially on the nature of the function being optimized, the nature of the variables, and the number of variables. In general, the optimization problem is
$$\min f(x) \quad \text{s.t.} \quad x \in \Omega \subseteq \mathbb{R}^n \qquad (1)$$

Here $f(x)$ is called the objective function, the set $\Omega \subseteq \mathbb{R}^n$ is called the constraint or feasible set, and the independent variables of the vector $x = (x_1, x_2, \ldots, x_n) \in \mathbb{R}^n$ are referred to as decision variables. The local or global minimizer of the optimization problem (1) is denoted by $x_{opt}$.
A class of methods for optimizing the objective function $f(x)$ is of iterative form, given by

$$x^{k+1} = x^k + a_k d^k \qquad (2)$$

where $x^k$ is the current iterate, $d^k$ a proper direction and $a_k$ the step length along this direction, chosen to ensure reduction of the objective function (Dennis and Schnabel 1983, Nocedal and Wright 2006). A feature of iterative processes of the form (2) is that they cannot use information from all previous points. Moreover, the sequence of points generated by an iterative process depends crucially on the nature of the objective function and the optimization method used.

A time series is a sequence of data points, typically measured at successive times spaced at uniform intervals. A common notation for a time series $Y$ is

$$Y = \{Y_0, Y_1, \ldots\} \qquad (3)$$

where $Y$ is indexed by the natural numbers $\mathbb{N}$ (Gujarati 2003). Time series forecasting is the use of a model to forecast future events based on known past events: to predict data points before they are measured. Models for time series forecasting include the autoregressive (AR) models, the integrated (I) models, and the moving average (MA) models (Mills 1990). These three classes depend linearly on previous data points. Combinations of these ideas produce autoregressive moving average (ARMA) and autoregressive integrated moving average (ARIMA) models (Box and Jenkins 1976). Among non-linear time series models, there are models that represent the changes of variance along time (heteroskedasticity). These models are called autoregressive conditional heteroskedasticity (ARCH) models, and the collection comprises a wide variety of representations (GARCH, TARCH, EGARCH, FIGARCH, CGARCH, etc.) (Engle 1982, Bollerslev 1986, Gujarati 2003).
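As an illustration of this view, the sketch below runs the iteration (2) with a steepest-descent direction and collects each coordinate's history as a separate time series. The quadratic objective, the direction $d^k = -\nabla f(x^k)$ and the fixed step $a_k = 0.1$ are illustrative assumptions of this sketch, not choices made in the paper.

```python
# Sketch: the iterates of scheme (2), x^{k+1} = x^k + a_k d^k, viewed as
# one time series per coordinate.

def descent_iterates(x0, grad, step=0.1, iters=20):
    """Return the sequence {x^0, x^1, ..., x^iters} of iterates of (2)
    with d^k = -grad(x^k) and a fixed step length."""
    xs = [list(x0)]
    for _ in range(iters):
        g = grad(xs[-1])
        xs.append([xi - step * gi for xi, gi in zip(xs[-1], g)])
    return xs

# Illustrative objective f(x) = x_1^2 + 2 x_2^2, so grad f = (2 x_1, 4 x_2).
grad_f = lambda x: [2.0 * x[0], 4.0 * x[1]]
xs = descent_iterates([1.0, 1.0], grad_f)

# Each coordinate's history {x_i^0, x_i^1, ...} is treated as a time series.
series = [[p[i] for p in xs] for i in range(2)]
```

Each inner list of `series` is exactly the kind of per-coordinate sequence that the forecasting model of the next sections operates on.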
Inspired by the idea of time series forecasting, in this paper we treat the iterative points produced by (2) over m steps as the known past events for the forecasting model. The proposed approach then obtains the next iterate of the optimization process, in each direction, in a hybrid way, using both the iterative process (2) and the forecasting model of the time series (3). Since time series forecasting is used as an intermediate step in the optimization process, it is necessary to take into account the complexity and the computational cost of the forecasting model. Thus, the simple ARMA models seem to be a good choice. Moreover, in order to avoid recalculating the ARMA coefficients at every iteration, an idea derived from trust region optimization methods is used: the recalculation is performed only when the ARMA model does not give enough information to the minimization process.

In section 2 the ARMA(p,q) models for time series forecasting are described. Section 3 gives a description of the proposed methodology, a graphical representation of the new algorithm, and the convergence theorem of the new method. The implementation of the proposed algorithm on test optimization problems is presented in section 4. Finally, section 5 presents the conclusions of the paper and some priorities for future research.
2. ARMA models for forecasting
In statistics and signal processing, autoregressive moving average (ARMA) models, sometimes called Box-Jenkins models after the iterative Box-Jenkins methodology usually used to estimate them, are typically applied to time series data (Box and Jenkins 1976, Tsay 2005).

Given a time series of data $Y_t$, (3), the ARMA model is a tool for predicting future values in this series. The model consists of two parts, an autoregressive (AR) part and a moving average (MA) part, given by

$$Y_t = \varepsilon_t + \sum_{i=1}^{p} \varphi_i Y_{t-i} + \sum_{i=1}^{q} \theta_i \varepsilon_{t-i} \qquad (4)$$

where $\varphi_1, \varphi_2, \ldots, \varphi_p$ are the parameters of the autoregressive part of the model, $\theta_1, \theta_2, \ldots, \theta_q$ are the parameters of the moving average part, and $\varepsilon_t, \varepsilon_{t-1}, \ldots$ are white noise error terms. The moving average model is essentially a finite impulse response filter with some additional interpretation placed on it. A common notation is the ARMA($p$,$q$) model, where $p$ is the order of the autoregressive part and $q$ is the order of the moving average part.
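Assuming the coefficients $\varphi$ and $\theta$ have already been estimated (e.g. by the Box-Jenkins methodology), a one-step forecast from model (4) replaces the unknown future shock by its expected value 0. The following minimal sketch (function name, data and coefficient values are illustrative, not from the paper) computes such a forecast:

```python
def arma_one_step(y, eps, phi, theta):
    """One-step forecast of Y_{t+1} from the ARMA(p,q) model (4), with the
    unknown future shock eps_{t+1} replaced by its expected value 0:

        Yhat_{t+1} = sum_{i=1}^p phi_i Y_{t+1-i} + sum_{i=1}^q theta_i eps_{t+1-i}

    y    : past observations, most recent last (y[-1] = Y_t)
    eps  : past residuals, most recent last (eps[-1] = eps_t)
    phi  : AR coefficients phi_1..phi_p (assumed already estimated)
    theta: MA coefficients theta_1..theta_q (assumed already estimated)
    """
    ar = sum(phi[i] * y[-1 - i] for i in range(len(phi)))
    ma = sum(theta[i] * eps[-1 - i] for i in range(len(theta)))
    return ar + ma

# ARMA(1,1) example with made-up coefficients and history:
yhat = arma_one_step([1.0, 2.0], [0.5, -0.25], [0.5], [0.3])
# yhat = 0.5 * 2.0 + 0.3 * (-0.25) = 0.925
```

Estimating the coefficients themselves (the expensive part that the paper's trust-region-like criterion avoids repeating) is deliberately outside this sketch.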
Fig. 1. The proposed direction $r^k$
3. The proposed methodology
The basic idea of the new methodology is to combine the direction of the minimization method (2) with another, new direction, seeking a reduction of the objective function $f(x)$. Each coordinate of the iterative points resulting from the optimization process (2) may be considered as a time series, $\{x_i^0, x_i^1, \ldots, x_i^k\}, \ i = 1, \ldots, n$. Thus, to each coordinate's time series an ARMA(p,q) model, outlined in section 2, may be fitted, given by:
$$x_i^k = \varepsilon_i^k + \sum_{j=1}^{p} \varphi_i^j x_i^{k-j} + \sum_{j=1}^{q} \theta_i^j \varepsilon_i^{k-j}, \quad i = 1, \ldots, n \qquad (5)$$

Therefore, for each coordinate $i$, the solution of equation (5) produces the corresponding coefficients $\varphi_i^j, \ j = 1, \ldots, p$ and $\theta_i^j, \ j = 1, \ldots, q$ of the ARMA(p,q) model. Using these ARMA models for the prediction of $x_i^{k+1}$ for each coordinate, the differences $x_i^{k+1} - x_i^k$ are calculated via equation (5) by

$$u_i = x_i^{k+1} - x_i^k = (\varphi_i^1 - 1)\, x_i^k + \varepsilon_i^{k+1} + \sum_{j=2}^{p} \varphi_i^j x_i^{k+1-j} + \sum_{j=1}^{q} \theta_i^j \varepsilon_i^{k+1-j}, \quad i = 1, \ldots, n \qquad (6)$$
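A hedged sketch of equation (6): given an assumed, already-estimated per-coordinate ARMA model, the step $u_i$ is the predicted next value minus the current one, with the unknown shock $\varepsilon_i^{k+1}$ set to its expected value 0. The helper name and all numeric values below are illustrative, not from the paper:

```python
def arma_step_difference(x_hist, eps_hist, phi, theta):
    """Per-coordinate step u_i = xhat_i^{k+1} - x_i^k from equation (6),
    with the unknown shock eps_i^{k+1} replaced by its expected value 0:

        u = (phi_1 - 1) x^k + sum_{j=2}^p phi_j x^{k+1-j}
                            + sum_{j=1}^q theta_j eps^{k+1-j}

    For p = 0 the AR part degenerates to -x^k."""
    p, q = len(phi), len(theta)
    u = (phi[0] - 1.0) * x_hist[-1] if p else -x_hist[-1]
    for j in range(2, p + 1):
        u += phi[j - 1] * x_hist[-j]       # x^{k+1-j}
    for j in range(1, q + 1):
        u += theta[j - 1] * eps_hist[-j]   # eps^{k+1-j}
    return u

# ARMA(2,1) example: x^{k-1} = 1.0, x^k = 2.0, eps^k = 0.1
u = arma_step_difference([1.0, 2.0], [0.1], [0.9, 0.05], [0.3])
# u = (0.9 - 1)*2.0 + 0.05*1.0 + 0.3*0.1 = -0.12
```

Applying this helper once per coordinate yields the full vector $u$ used by the criterion below.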
It is obvious that the computational cost of solving the systems resulting from equation (5) is prohibitive at each iteration. Therefore, it is necessary to find a way to recalculate the solution of equations (5) only when it is needed. The vector $u$ obtained from equation (6) is utilized only if $CR_{ARMA} = 1$, defined according to the following ARMA criterion:

$$CR_{ARMA} = \begin{cases} 1, & \|u^k\| > \|a_k d^k\| \\ 0, & \text{otherwise} \end{cases} \qquad (7)$$
When the $CR_{ARMA}$ criterion is not satisfied, the iterative scheme (2) is applied for the next iteration. Moreover, only in this case is a recalculation of the ARMA model coefficients needed. If the $CR_{ARMA}$ criterion (7) is met, then the next approximation of the optimal point is obtained by

$$x^{k+1} = x^k + \lambda\, a_k d^k + (1 - \lambda)\, u^k, \quad 0 \le \lambda < 1. \qquad (8)$$

The proper selection of $\lambda$ optimizes the objective function (1), and the corresponding approximation of the optimal point $x^{k+1}$ lies on the line segment $SQ$ (see Fig. 1 for details) defined by the point given by the iterative scheme (2) and the ARMA one, (5). From Figure 1, equation (8) may be written in the form:

$$x^{k+1} = y^k + \beta_k r^k, \quad 0 \le \beta_k < 1 \qquad (9)$$

where $r^k$ is the direction of the line segment $SQ$ and $y^k$ is the auxiliary point given by the optimization method (2). Overall, the new iterate given by equation (9) should be used only when there is a reduction of the objective function (1). In our case, this is always guaranteed, because the worst case is the point $x^{k+1} = y^k$, which is an acceptable point of the minimization process (2).

Fig. 1 shows the graphical representation of the new proposed method when the $CR_{ARMA}$ criterion is met. The blue segment represents the iteration of the optimization method given by equation (2), while the green segment represents the direction of the ARMA model given by equation (6). The dark red line is the linear combination of the above two vectors, denoted by $r^k$. Finally, the light red line represents the optimal point depicted along the $SQ$ line segment. Notice that the iterative scheme given by equation (8) reduces to the iterative process (2) for $\lambda = 1$, while for $\lambda = 0$ it results in the ARMA prediction.

The basic idea behind the definition of the $CR_{ARMA}$ criterion (7) is akin to the increment or decrement of the trust region radius depending on the progress of the iterative algorithm. In this way the new method self-regulates the increment or decrement of the radius, considering only the $CR_{ARMA}$ criterion, without using further function evaluations. The flow chart of the new proposed algorithm, named UAT (Unconstrained optimization with ARMA prediction as a Trust region part), is shown in Fig. 2.
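One UAT-style step combining criterion (7) with update (8) might be sketched as follows. The norm-based reading of (7) and the fixed value of $\lambda$ are assumptions of this sketch; the paper selects $\lambda$ by optimizing the objective along the segment $SQ$:

```python
import math

def uat_step(x, ad, u, lam=0.5):
    """One hybrid step.  x: current point x^k; ad: the step a_k d^k of
    scheme (2); u: the ARMA step of equation (6); lam: the lambda of (8).

    Criterion (7), as reconstructed here, compares the sizes of the two
    steps: the ARMA information is used only if ||u^k|| > ||a_k d^k||.
    Returns (next point, CR_ARMA flag)."""
    norm = lambda v: math.sqrt(sum(vi * vi for vi in v))
    if norm(u) > norm(ad):                          # CR_ARMA = 1: use (8)
        nxt = [xi + lam * adi + (1.0 - lam) * ui
               for xi, adi, ui in zip(x, ad, u)]
        return nxt, 1
    return [xi + adi for xi, adi in zip(x, ad)], 0  # CR_ARMA = 0: plain (2)
```

With `lam=1` the hybrid step would coincide with scheme (2), and with `lam=0` it would return the pure ARMA prediction, mirroring the limiting cases noted above.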
Theorem 1: Under the assumptions of the convergence theorem of the iterative scheme (2), the sequence $x^k$ generated by algorithm UAT is finite and terminates at a desirable point $x_{opt}$.

Proof. When $\beta_k = 0$, equation (9) reduces to the iterative scheme (2), which converges. In all other cases, $0 < \beta_k < 1$, the iterative point is accepted only when the reduction of the objective function is guaranteed. Thus the theorem is proved.
4. Numerical application
As analyzed in the previous section, the new methodology can be applied to many optimization problems and can cooperate with any iterative method. In order to examine the efficiency of the new hybrid method, we implemented the new algorithm, Fig. 2, in the open source software R (R Development Core Team 2009). The convergence criteria for all the tested algorithms were: (a)

$$|f(x^{k+1}) - f(x^k)| < 10^{-8}$$

and (b) the maximum number of iterations is set equal to 2000.

A well-known test function for optimization is the Stenger function (Stenger 1975, Androulakis and Vrahatis 1996). The objective function is given by

$$f(x_1, x_2) = (x_1^2 - 4x_2)^2 + (x_2^2 - 2x_1 + 4x_2)^2 \qquad (10)$$

This function has two minima, at $x_{opt}^1 = (0, 0)$ and $x_{opt}^2 = (1.695415, 0.7186072)$, with $f(x_{opt}^i) = 0, \ i = 1, 2$.

Fig. 2. The flow chart of the UAT algorithm

We tested the new hybrid methodology on a box of initial points $x \in [-1, 4] \times [-1, 4]$ using: (a) the optimization method of Nelder and Mead (NM), a relatively slow but robust method; (b) the Fletcher-Reeves (FR) method from the class of conjugate gradient methods; and (c) the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method from the class of quasi-Newton methods (Nocedal and Wright 2006).

The numerical results in Table 1 show that the proposed UAT methodology accelerates the convergence of the underlying optimization method. For example, using the FR method in the UAT algorithm, with only 1−2 ARMA(0,2) recalculations a 50% reduction of function evaluations is achieved in comparison with the classical FR method. Table 2 presents the results of the optimization methods for specific initial values. Numerical results for several test functions with several variables will be analyzed in future work.
Table 1. Reduction of function evaluations using the UAT methodology

Optimization method   ARMA(0,2) recalculations   Reduction of function evaluations using UAT
NM                    2−3                        60%
FR                    1−2                        50%
BFGS                  1−2                        90%
Table 2. Numerical results for some initial values

Initial point   Optimization method   Iterations   UAT: ARMA(0,2) recalculations   UAT: Iterations
(−1, −1)        NM                    119          2                               44
                FR                    31           1                               15
                BFGS                  11           1                               10
(1.37, 0.03)    NM                    9            1                               8
                FR                    142          1                               25
                BFGS                  8            1                               7
5. Conclusion and further research
In this paper a new hybrid method for unconstrained or constrained optimization is presented. The proposed method can cooperate with any iterative optimization process. The innovative idea of the new method is to use time series forecasting techniques to predict future points arising from the optimization process. Since time series forecasting methods add computational cost, a criterion is adopted for deciding when a recalculation of the coefficients of the forecasting method is needed. For all the above reasons we selected ARMA(p,q) models, with small p and q values, for time series forecasting. The convergence of the proposed methodology is proved.

In future work, we will investigate the possibility of finding optimal values for the parameters p and q of the ARMA models. Moreover, the choice of initial values that enlarges the basin of convergence will also be investigated in future work.
References
Androulakis, G. S.; Vrahatis, M. N. 1996. OPTAC: A portable software package for analyzing and comparing optimization methods by visualization, Journal of Computational and Applied Mathematics 72: 41−62. doi:10.1016/0377-0427(95)00244-8
Bollerslev, T. 1986. Generalized autoregressive conditional heteroskedasticity, Journal of Econometrics 31: 307−327. doi:10.1016/0304-4076(86)90063-1
Box, G. E. P.; Jenkins, G. M. 1976. Time series analysis: Forecasting and control. 2nd edition, Holden-Day, San Francisco.
Dennis, J. E. Jr.; Schnabel, R. B. 1983. Numerical methods for unconstrained optimization and nonlinear equations. Englewood Cliffs, New Jersey: Prentice-Hall, Inc.
Engle, R. F. 1982. Autoregressive conditional heteroskedasticity with estimates of the variance of United Kingdom inflation, Econometrica 50: 987−1008. doi:10.2307/1912773
Gujarati, D. N. 2003. Basic econometrics. 4th edition, McGraw-Hill, New York.
Mills, T. C. 1990. Time series techniques for economists. Cambridge University Press.
Nocedal, J.; Wright, S. J. 2006. Numerical Optimization. 2nd edition, New York, USA: Springer Science+Business Media.
R Development Core Team. 2009. R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria.
Stenger, F. 1975. Computing the topological degree of a mapping in Rⁿ, Numer. Math. 25: 23−38. doi:10.1007/BF01419526
Tsay, R. S. 2005. Analysis of financial time series. 2nd edition, John Wiley & Sons, New Jersey. doi:10.1002/0471746193
