Stability Conditions for Observer Based Output Feedback Stabilization with Nonlinear Model Predictive Control

Rolf Findeisen†, Lars Imsland‡, Frank Allgöwer†, Bjarne A. Foss‡

†Institute for Systems Theory in Engineering, University of Stuttgart, 70550 Stuttgart, Germany, {findeise,allgower}@ist.uni-stuttgart.de
‡Department of Engineering Cybernetics, Norwegian University of Science and Technology, 7491 Trondheim, Norway, {Lars.Imsland,Bjarne.Foss}@itk.ntnu.no
Abstract—We consider the output feedback problem for continuous time systems using state feedback nonlinear model predictive control in combination with suitable state observers. Specifically we derive, for a broad class of state feedback nonlinear model predictive controllers, conditions on the observer that guarantee that the closed loop is semiglobally practically stable. The derived results are based on the fact that predictive controllers that possess a continuous value function are to some extent inherently robust. To achieve semiglobal practical stability one must basically require that the observer used achieves a sufficiently fast convergence of the estimation error. Since this is in general a very stringent condition, we show that a series of observers, such as high-gain observers, moving horizon observers and observers with finite convergence time, do satisfy it.

Keywords: nonlinear predictive control, output feedback, semiglobal practical stability
I. INTRODUCTION
Motivated by the success of linear model predictive control, nonlinear model predictive control (NMPC) has received considerable attention over the past years. Key questions such as how to achieve stability of the closed loop in the state feedback case and how to efficiently solve the underlying optimal control problem have been examined; see for example [1]. In this paper we are concerned with the output feedback problem for sampled-data NMPC for continuous time systems. Sampled-data NMPC here means that, based on the state information at discrete sampling times, an optimal control problem is solved online and the resulting input signal is applied open-loop to the system until the next sampling time, where the whole process is repeated.

One of the key obstacles for the application of NMPC is that it is inherently a state feedback control scheme, since the applied input is based on the optimization of the future system behavior. In many applications, however, the system state cannot be fully measured. Thus, to apply predictive control methods the state must be recovered from the measured outputs using state observers. However, even if the state feedback NMPC controller and the observer used are both stable, there is no guarantee that the overall closed loop is stable, since no general separation principle for nonlinear systems exists. Various researchers have addressed the question of closed-loop stability of NMPC using observers for state recovery; see for example [2], [3], [4]. In this paper we follow along the lines of the approach presented in [5], [6], where the combination of stabilizing state feedback controllers with high-gain observers is proposed to achieve semiglobal practical stability of the closed loop. Semiglobal practical stability means that for any compact set of state initial conditions that is a subset of the region of attraction of the state feedback controller, and any bounded region of observer initial conditions, there exist an observer gain and a sampling time such that the system state converges in finite time into any desired region of convergence around the origin. In this paper we show that the semiglobal stability result is not limited to high-gain observers. For this purpose we derive explicit conditions on the estimation error such that the closed loop is semiglobally practically stable. Basically we require that the observer error can be made as small as desired in any desired time. While this condition is in principle very stringent, we outline a series of observer designs that achieve the desired properties. Examples are standard high-gain observers [7], optimization based moving horizon observers with contraction constraint [2], observers that possess a linear error dynamics where the poles can be chosen arbitrarily (e.g. observers based on normal form considerations and output injection [8], [9]), or observers that achieve finite convergence time, such as sliding mode observers [10] or the approach presented in [11], [12].

The approach we present is based on the fact that sampled-data predictive controllers that possess a continuous value function are inherently robust to small disturbances, i.e. we will consider the estimation error as a disturbance acting on the closed loop. This inherent robustness property of NMPC is closely connected to recent results on the robustness properties of discontinuous feedback via sample and hold [13]. However, here we consider the specific case of sampled-data NMPC controllers, and we do not assume that the applied input is realized via a hold element.

The paper is structured as follows: In the first part, Section II, we state the considered system class and the considered class of sampled-data NMPC controllers. Section III outlines the basic idea, before we present in Section IV the semiglobal practical stability result. In Section V we show that several existing observer designs satisfy the required assumptions and thus can be used in combination with a stabilizing NMPC controller to achieve semiglobal practical stability.

[Proceedings of the 42nd IEEE Conference on Decision and Control, Maui, Hawaii, USA, December 2003. 0-7803-7924-1/03/$17.00 ©2003 IEEE]

II. SYSTEM CLASS AND STATE FEEDBACK NMPC

We consider the stabilization of time-invariant nonlinear systems of the form

$\dot{x}(t) = f(x(t), u(t)), \quad x(0) = x_0$   (1a)
$y = h(x, u)$   (1b)
where $x(t) \in \mathbb{R}^n$ is the system state, $u(t) \in \mathbb{R}^m$ is the input vector and $y \in \mathbb{R}^p$ are the measured outputs. Besides stabilization we require that the inputs stay in the compact set $U \subset \mathbb{R}^m$ and that the states stay in the connected set $X \subseteq \mathbb{R}^n$, i.e. $u(t) \in U$, $x(t) \in X$, $\forall t \ge 0$. We furthermore assume that $(0,0) \in X \times U$. With respect to the vector field $f: \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^n$ we assume that it is locally Lipschitz continuous and satisfies $f(0,0) = 0$.

We do not state explicit observability assumptions on the system or conditions on the function $h: \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^p$, since they depend on the observer used for state recovery. Note that, as outlined in Section V, a whole series of different observers satisfy the conditions required for the semiglobal practical stability result derived in Section IV. Furthermore, we do not state an explicit controllability assumption. Instead, as often done in NMPC, the controllability is contained in the feasibility assumption of the optimal control problem.
A. State Feedback NMPC
In state feedback sampled-data NMPC an open-loop optimal control problem is solved at discrete sampling instants $t_i$ based on the current state information $x(t_i)$. The sampling instants $t_i$ are given by a partition $\pi$ of the time axis.
Definition 1 (Partition): Every series $\pi = (t_i)$, $i \in \mathbb{N}$, of positive real numbers such that $t_0 = 0$, $t_i < t_{i+1}$ and $t_i \to \infty$ for $i \to \infty$ is called a partition. Furthermore, let $\bar\pi := \sup_{i \in \mathbb{N}}(t_{i+1} - t_i)$ be the upper diameter and $\underline\pi := \inf_{i \in \mathbb{N}}(t_{i+1} - t_i)$ be the lower diameter of $\pi$.
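For illustration, the two diameters of Definition 1 can be computed for a finite portion of a partition; the sampling instants below are arbitrary example values, not from the paper:

```python
# Illustration of Definition 1: compute the upper diameter (sup of the
# sampling intervals) and the lower diameter (inf of the sampling
# intervals) for a finite, non-uniform partition of the time axis.
t = [0.0, 0.1, 0.25, 0.3, 0.5, 0.65]      # t_0 = 0, strictly increasing
gaps = [b - a for a, b in zip(t, t[1:])]  # the intervals t_{i+1} - t_i
pi_upper = max(gaps)                      # plays the role of the sup
pi_lower = min(gaps)                      # plays the role of the inf
print(pi_upper, pi_lower)
```

For a uniform partition the two diameters coincide; the theory below only requires $\bar\pi$ to be small enough and $\underline\pi > 0$.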
For a given $t$, $t_i$ should be taken as the nearest sampling instant with $t_i < t$. In sampled-data NMPC, the open-loop input applied in between the sampling instants is given by the solution of the optimal control problem

$\min_{\bar u(\cdot)} J(\bar u(\cdot); x(t_i))$   (2a)

subject to

$\dot{\bar x}(\tau) = f(\bar x(\tau), \bar u(\tau)), \quad \bar x(t_i) = x(t_i)$   (2b)
$\bar u(\tau) \in U, \ \bar x(\tau) \in X, \quad \tau \in [t_i, t_i + T_p]$   (2c)
$\bar x(t_i + T_p) \in E$.   (2d)

The bar denotes predicted variables, i.e. $\bar x(\cdot)$ is the solution of (2b) driven by the input $\bar u(\cdot): [t_i, t_i + T_p] \to U$ with the initial condition $x(t_i)$. The cost functional $J$, minimized over the control horizon $T_p \ge \bar\pi > 0$, is given by

$J(\bar u(\cdot); x(t_i)) := \int_{t_i}^{t_i + T_p} F(\bar x(\tau), \bar u(\tau))\, d\tau + E(\bar x(t_i + T_p))$,   (3)

where the stage cost $F: \mathbb{R}^n \times U \to \mathbb{R}$ is assumed to be continuous, satisfies $F(0,0) = 0$, and is lower bounded by a class $K$ function¹ $\alpha_F$: $\alpha_F(\|x\|) \le F(x,u)$ $\forall (x,u) \in \mathbb{R}^n \times U$. The terminal region constraint $E$ and the terminal penalty term $E$ might be present or not. They are often used to enforce stability of the closed loop [1], [14]. The solution of the optimal control problem (2) is denoted by $\bar u(\cdot; x(t_i))$. It defines the open-loop input that is applied to the system until the next sampling instant $t_{i+1}$:

$u(t; x(t_i)) = \bar u(t; x(t_i)), \quad t \in [t_i, t_{i+1})$.   (4)
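As an illustration of the scheme (2)–(4), the following self-contained sketch simulates the sampled-data NMPC loop for a hypothetical double integrator, with a quadratic stage cost, a quadratic terminal penalty standing in for the terminal constraint (2d), and a uniform partition; the model, weights, horizon, and input bound are illustrative assumptions, not from the paper:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical example system: double integrator, xdot = (x2, u)
def f(x, u):
    return np.array([x[1], u])

def simulate(x0, u_seq, dt):
    # Euler integration of the prediction model (2b) over the horizon
    x, traj = np.array(x0, dtype=float), []
    for u in u_seq:
        traj.append(x.copy())
        x = x + dt * f(x, u)
    traj.append(x.copy())
    return np.array(traj)

def cost(u_seq, x0, dt):
    # Cost (3): quadratic stage cost F(x,u) = x'x + 0.1 u^2 (an assumption)
    traj = simulate(x0, u_seq, dt)
    J = sum(dt * (x @ x + 0.1 * u**2) for x, u in zip(traj[:-1], u_seq))
    # quadratic terminal penalty E standing in for the terminal constraint
    return J + 10.0 * traj[-1] @ traj[-1]

def nmpc_input(x_est, N=20, dt=0.1, u_max=2.0):
    # solve (2) from the current state and return the first input segment
    res = minimize(cost, np.zeros(N), args=(x_est, dt),
                   bounds=[(-u_max, u_max)] * N, method="SLSQP")
    return res.x[0]

# Closed loop: recompute at every sampling instant (uniform partition)
x = np.array([1.0, 0.0])
for _ in range(50):
    u = nmpc_input(x)
    x = x + 0.1 * f(x, u)   # plant step over one sampling interval
print(np.linalg.norm(x))    # state driven toward the origin
```

Since the input is recomputed from the current state at every sampling instant, the first segment of each open-loop solution realizes the feedback (4).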
The control $u(t; x(t_i))$ is a feedback, since it is recalculated at each sampling instant using the new state measurement. We denote in the following the solution of (1a), starting at time $t_1$ from an initial state $x(t_1)$ and applying an input $u: [t_1, t_2] \to \mathbb{R}^m$, by $x(\tau; u(\cdot), x(t_1))$, $\tau \in [t_1, t_2]$. For clarity of presentation we limit ourselves to input signals that are piecewise continuous. Furthermore, in the following we often refer to the so-called value function $V(x)$, which is defined as the minimal value of the cost for the state $x$: $V(x) := J(\bar u^*(\cdot; x); x)$.

Instead of going into details on how to achieve closed-loop stability using state feedback NMPC, we explicitly state the assumptions we require on the NMPC scheme.
Assumption 1: There exists a simply connected region $\mathcal{R} \subseteq X \subseteq \mathbb{R}^n$ with $0 \in \mathcal{R}$ (nominal region of attraction) such that:

1) Along solution trajectories starting at a sampling instant $t_i$ at $x(t_i) \in \mathcal{R}$, the value function satisfies for all $\tau \ge 0$:
$V(x(t_i + \tau)) - V(x(t_i)) \le -\int_{t_i}^{t_i + \tau} F(x(s), u(s; x(t_i)))\, ds$.
2) The value function $V(x)$ is uniformly continuous.
3) For all compact subsets $S \subset \mathcal{R}$ there is at least one level set $\Omega_c = \{x \in \mathcal{R} \mid V(x) \le c\}$ such that $S \subset \Omega_c$.
Note that the given assumptions imply stability of the NMPC scheme [14], [1]. Assumption 1.1 is typically satisfied for stabilizing NMPC schemes. However, in general there is no guarantee that a stabilizing NMPC scheme satisfies Assumptions 1.2 and 1.3, especially if state constraints are present. As is well known [15], [14], [16], NMPC can also stabilize systems that cannot be stabilized by feedback that is continuous in the state. This in general also implies a discontinuous value function. Many NMPC schemes, however, satisfy this assumption at least locally around the origin [17], [18], [1]. Furthermore, for example, NMPC schemes that are based on control Lyapunov functions [19] and that are not subject to constraints on the states and inputs satisfy Assumption 1.

¹A continuous function $\alpha: [0, \infty) \to [0, \infty)$ is a $K$ function if it is strictly increasing and $\alpha(0) = 0$.
Remark 1: Note that the uniform continuity assumption on $V(x)$ implies that for any compact subset $S \subset \mathcal{R}$ there exists a $K$ function $\alpha_V$ such that for any $x_1, x_2 \in S$: $|V(x_1) - V(x_2)| \le \alpha_V(\|x_1 - x_2\|)$.
III. PROBLEM STATEMENT AND BASIC IDEA
We assume that instead of the real system state $x(t_i)$, at every sampling instant only a state estimate $\hat x(t_i)$ is available. Thus, instead of the optimal feedback (4) the following "disturbed" feedback is applied:

$u(\tau; x(t_i)) = \bar u(\tau; \hat x(t_i)), \quad \tau \in [t_i, t_{i+1})$.   (5)

Note that the estimated state $\hat x(t_i)$ can be outside the feasible set $\mathcal{R}$. To avoid feasibility problems we assume that in this case the input is fixed to an arbitrary, but bounded, value. We derive in the next section semiglobal practical stability assuming that, after an initial phase, the observer error at the sampling instants can be made sufficiently small, i.e. we assume that:
Assumption 2 (Observer error convergence): For any desired maximum estimation error $e_{\max} > 0$ there exist observer parameters such that

$\|x(t_i) - \hat x(t_i)\| \le e_{\max}, \quad \forall t_i \ge k_{\mathrm{conv}} \bar\pi$.   (6)

Here $k_{\mathrm{conv}} > 0$ is a freely chosen, but fixed, number of sampling instants after which the observer error has to satisfy (6).
Remark 2: Depending on the observer used, further conditions on the system (e.g. observability assumptions) might be necessary; see Section V. Note that the observer does not have to operate continuously, since the state information is only necessary at the sampling times.
Since we do not assume that the observer error converges to zero, we can certainly not achieve asymptotic stability of the origin, nor can we render the region of attraction of the state feedback controller invariant. Thus, we consider in the following the question whether, under the assumption that for any maximum error $e_{\max}$ observer parameters exist such that (6) holds, the system state in the closed loop can be rendered semiglobally practically stable. Here semiglobally practically stable means that for arbitrary sets $\Omega_\alpha \subset \Omega_{c_0} \subset \Omega_c \subset \mathcal{R}$, $0 < \alpha < c_0 < c$, there exist observer parameters and a maximum sampling time $\bar\pi$ such that $\forall x(0) \in \Omega_{c_0}$: 1. $x(t) \in \Omega_c$, $t > 0$; 2. $\exists T_\alpha > 0$ s.t. $x(t) \in \Omega_\alpha$, $\forall t \ge T_\alpha$. For clarification see Fig. 1.

Fig. 1. Set of initial conditions $\Omega_{c_0}$, desired maximum attainable set $\Omega_c$ and desired region of convergence $\Omega_\alpha$.

Note that in the following we only consider level sets for the desired set of initial conditions ($\Omega_{c_0}$), the maximum attainable set ($\Omega_c$) and the set of desired convergence ($\Omega_\alpha$). We do this purely to simplify the presentation. In principle one can consider arbitrary compact sets which contain the origin and are subsets of each other and of $\mathcal{R}$, since due to Assumption 1.3 it is always possible to find suitable covering level sets.
a) Basic Idea: The derived results are based on the observation that small state estimation errors lead to a (small) difference between the predicted state $\bar x$ and the real developing state (as long as both of them are contained in the set $\Omega_c$). As shown, the influence of the estimation error (after the convergence time $k_{\mathrm{conv}} \bar\pi$ of the observer) can in principle be bounded by

$V(x(t_{i+1})) - V(x(t_i)) \le \varepsilon(\|x(t_i) - \hat x(t_i)\|) - \int_{t_i}^{t_{i+1}} F(\bar x(\tau; \bar u^*(\cdot; \hat x(t_i)), x(t_i)), \bar u^*(\tau; \hat x(t_i)))\, d\tau$,   (7)

where $\varepsilon$ corresponds to the state estimation error contribution. Note that the integral contribution is strictly negative. Thus, if $\varepsilon$ "scales" with the size of the observer error (it certainly also scales with the sampling time $t_{i+1} - t_i$), one can achieve contraction. However, the bounding is only possible after a certain time, since in the initial phase the estimation error is not bounded. To avoid that the system state leaves the set $\Omega_c$ during this time, we thus have to decrease the maximum sampling time $\bar\pi$. To bound the integral contribution on the right side of (7), we state the following fact:
Fact 1: For any $c > \alpha > 0$ with $\Omega_c \subset \mathcal{R}$ and $T_p > \delta > 0$, the lower bound $V_{\min}(c, \alpha, \delta)$ on the value function exists and is nontrivial for all $x_0 \in \Omega_c \setminus \Omega_\alpha$:

$0 < V_{\min}(c, \alpha, \delta) := \min_{x_0 \in \Omega_c \setminus \Omega_\alpha} \int_0^\delta F(\bar x(s; \bar u^*(\cdot; x_0), x_0), \bar u^*(s; x_0))\, ds < \infty$.
IV. SEMIGLOBAL PRACTICAL STABILITY

Under the given setup the following theorem holds.
Theorem 1: Given arbitrary level sets $\Omega_\alpha \subset \Omega_{c_0} \subset \Omega_c \subset \mathcal{R}$. Then there exist a maximum allowable observer error $e_{\max}$ and a maximum sampling time $\bar\pi$ such that for all initial conditions $x_0 \in \Omega_{c_0}$: $x(\tau) \in \Omega_c$ $\forall \tau \ge 0$, and the state $x(\tau)$ converges in finite time to the set $\Omega_\alpha$ and stays there.

Proof: The proof is divided into three parts. In the first part it is ensured that the system state does not leave the maximum admissible set $\Omega_c$ during the convergence time $k_{\mathrm{conv}} \bar\pi$ of the observer. This is achieved by decreasing the maximum sampling time $\bar\pi$ sufficiently. In the second part it is shown that, by requiring a sufficiently small $e_{\max}$, the system state converges into the set $\Omega_{\alpha/2}$. In the third part it is shown that the state will not leave the set $\Omega_\alpha$ once it has entered it at a sampling time.
First part ($x(t_i + \tau) \in \Omega_c$ $\forall x(t_i) \in \Omega_{c_0}$): Note that $\Omega_{c_0}$ is strictly contained in $\Omega_c$ and thus also in $\Omega_{c_1}$, with $c_1 := c_0 + (c - c_0)/2$. Thus, there exists a time $T_{c_1}$ such that $x(\tau) \in \Omega_{c_1}$, $\forall\, 0 \le \tau \le T_{c_1}$. The existence is guaranteed since, as long as $x(t) \in \Omega_c$,

$\|x(t) - x_0\| \le \int_0^t \|f(x(s), u(s))\|\, ds \le k_{\Omega_c} t$,

where $k_{\Omega_c}$ is a constant depending on the Lipschitz constant of $f$ and on the bounds on $u$. We take $T_{c_1}$ as the smallest (worst case) time to reach the boundary of $\Omega_{c_1}$ from any point $x_0 \in \Omega_{c_0}$, allowing $u(s)$ to take any value in $U$. We now pick the maximum sampling time $\bar\pi$ as $\bar\pi \le T_{c_1}/k_{\mathrm{conv}}$; it will be fixed for the remainder of the proof. Note that there always exist observer parameters such that after this time the observer error is smaller than any desired $e_{\max}$.

In the following we consider the difference in the value function between the initial state $x(t_i) \in \Omega_{c_1}$ at a sampling time and the developing state $x(t_i + \tau; x(t_i), u_{\hat x})$. For simplicity of notation, $u_{\hat x}$ denotes the optimal input resulting from $\hat x(t_i)$ and $u_x$ denotes the input that corresponds to the real state $x(t_i)$. Furthermore, $x_i = x(t_i)$ and $\hat x_i = \hat x(t_i)$. The following equality is valid as long as the states stay within $\Omega_c$:

$V(x(\tau; x_i, u_{\hat x})) - V(x_i) = V(x(\tau; x_i, u_{\hat x})) - V(x(\tau; \hat x_i, u_{\hat x})) + V(x(\tau; \hat x_i, u_{\hat x})) - V(\hat x_i) + V(\hat x_i) - V(x_i)$.   (8)

We can bound the last two terms since $V$ is uniformly continuous on compact subsets of $\mathcal{R} \supset \Omega_c$. Furthermore, note that the third and fourth terms start from the same $\hat x_i$, and that the first term can be bounded via $\alpha_V$:

$V(x(\tau; x_i, u_{\hat x})) - V(x_i) \le \alpha_V(e^{L_{fx}(\tau - t_i)} \|\hat x_i - x_i\|) - \int_{t_i}^{\tau} F(x(s; \hat x_i, u_{\hat x}), u_{\hat x})\, ds + \alpha_V(\|\hat x_i - x_i\|)$.

Here we used an upper bound for $\|x(\tau; x_i, u_{\hat x}) - x(\tau; \hat x_i, u_{\hat x})\|$ based on the Gronwall–Bellman lemma, with $L_{fx}$ denoting the Lipschitz constant of $f$ with respect to $x$. From this equation it follows (skipping the negative contribution of the integral) that if one chooses the observer parameters such that

$\alpha_V(e^{L_{fx} \bar\pi} e_{\max}) + \alpha_V(e_{\max}) \le c - c_1$,   (9)

then $x(t_i + \tau) \in \Omega_c$ $\forall \tau \le t_{i+1} - t_i$.
Second part (finite time convergence to $\Omega_{\alpha/2}$): We assume that (9) holds and that $x(t_i) \in \Omega_{c_1}$. Note that (9) assures that $x(t_i + \tau) \in \Omega_c$ $\forall \tau \in [0, t_{i+1} - t_i]$. Assuming that $x_i \notin \Omega_{\alpha/2}$, we obtain from (8) that

$V(x(\tau; x_i, u_{\hat x})) - V(x_i) \le -V_{\min}(c, \alpha/2, \tau - t_i) + \alpha_V(e^{L_{fx} \bar\pi} \|\hat x_i - x_i\|) + \alpha_V(\|\hat x_i - x_i\|)$.

To achieve convergence to the set $\Omega_{\alpha/2}$ in finite time we need the right hand side to be strictly less than zero. One possibility to obtain this is to require that the observer parameters are chosen such that

$\alpha_V(e^{L_{fx} \bar\pi} \|\hat x_i - x_i\|) + \alpha_V(\|\hat x_i - x_i\|) - V_{\min}(c, \alpha/2, \underline\pi) \le -V_{\min}(c, \alpha/2, \underline\pi) + V_{\min}(c, \alpha/4, \underline\pi)$.

Note that $-V_{\min}(c, \alpha/2, \underline\pi) + V_{\min}(c, \alpha/4, \underline\pi) < 0$ since $\alpha/4 < \alpha/2$. Thus, if we choose the observer parameters such that

$\alpha_V(e^{L_{fx} \bar\pi} e_{\max}) + \alpha_V(e_{\max}) \le V_{\min}(c, \alpha/4, \underline\pi)$,

we achieve finite time convergence from any point $x(t_i) \in \Omega_{c_0}$ to the set $\Omega_{\alpha/2}$.
Third part ($x(t_{i+1}) \in \Omega_\alpha$ $\forall x(t_i) \in \Omega_{\alpha/2}$): This is trivially satisfied following the arguments in the first part of the proof, assuming that

$\alpha_V(e^{L_{fx} \bar\pi} e_{\max}) + \alpha_V(e_{\max}) \le \alpha/2$.
Combining all three steps, we obtain the Theorem if

$\bar\pi \le T_{c_1}/k_{\mathrm{conv}}$   (10)

and if we choose the observer error $e_{\max}$ such that

$\alpha_V(e^{L_{fx} \bar\pi} e_{\max}) + \alpha_V(e_{\max}) \le \min\{c - c_1,\ V_{\min}(c, \alpha/4, \underline\pi),\ \alpha/2\}$.   (11)
Remark 3: Explicitly designing an observer based on (11) and (10) is in general not possible. However, the theorem underpins that, if the observer error can be decreased sufficiently fast, the closed-loop system state will be semiglobally practically stable.
V. POSSIBLE OBSERVER DESIGNS

Theorem 1 lays the basis for the design of observer-based output feedback NMPC controllers that achieve semiglobal practical stability. While in principle Assumption 2 is very difficult to satisfy, we outline in this section a series of observer designs that achieve the desired properties. We will go into details for standard high-gain observers [7] and optimization based moving horizon observers with contraction constraint [2]. Note that further observer designs that satisfy the assumptions are, for example, observers that possess a linear error dynamics where the poles can be chosen arbitrarily (e.g. based on normal form considerations and output injection [8], [9]), or observers that achieve finite convergence time, such as sliding mode observers [10] or the approach presented in [11], [12].
b) High-Gain Observers: One possible observer design that satisfies Assumption 2 is the high-gain observer. Basically, high-gain observers obtain a state estimate based on approximated derivatives of the output signals. They are in general based on the assumption that the system is uniformly completely observable. Uniform complete observability is defined in terms of the observability map $H$, which is given by successive differentiation of the output $y$:

$Y^T = \left[ y_1, \dot y_1, \ldots, y_1^{(r_1)}, y_2, \ldots, y_p, \ldots, y_p^{(r_p)} \right] =: H(x)^T$.

Here $Y$ is the vector of output derivatives. Note that we assume for simplicity of presentation that $H$ does not depend on the input and its derivatives. More general results, allowing $H$ to depend on the input and its derivatives, can be found in [6]. We assume that the system is uniformly completely observable, i.e.:

Assumption 3: The system (1a) is uniformly completely observable in the sense that there exists a set of indices $\{r_1, \ldots, r_p\}$ such that the mapping $Y = H(x)$ depends only on $x$, is smooth with respect to $x$, and its inverse from $Y$ to $x$ is smooth and onto.
The system state is recovered by a high-gain observer. Application of the coordinate transformation $\zeta := H(x)$, where $H$ is the observability mapping, to the system (1a) leads to the system in observability normal form in $\zeta$ coordinates:

$\dot\zeta = A\zeta + B\phi(\zeta, u), \quad y = C\zeta$.   (12)

The matrices $A$, $B$ and $C$ have the following block-diagonal structure:

$A = \mathrm{blockdiag}[A_1, \ldots, A_p], \quad A_i = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & \cdots & \cdots & 0 & 1 \\ 0 & \cdots & \cdots & \cdots & 0 \end{bmatrix} \in \mathbb{R}^{r_i \times r_i}$,

$B = \mathrm{blockdiag}[B_1, \ldots, B_p], \quad B_i = [0 \ \cdots \ 0 \ 1]^T \in \mathbb{R}^{r_i \times 1}$,
$C = \mathrm{blockdiag}[C_1, \ldots, C_p], \quad C_i = [1 \ 0 \ \cdots \ 0] \in \mathbb{R}^{1 \times r_i}$,

and $\phi: \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^p$ is the "system nonlinearity" in observability normal form. The high-gain observer²

$\dot{\hat\zeta} = A\hat\zeta + H_\varepsilon(y - C\hat\zeta) + B\hat\phi(\hat\zeta, u)$   (13)

allows recovery of the states $\zeta$ from information of $y(t)$ [7], [20], assuming that:

Assumption 4: $\hat\phi$ in (13) is globally bounded.

The function $\hat\phi$ is the approximation of $\phi$ that is used in the observer. The observer gain matrix $H_\varepsilon$ is given by $H_\varepsilon = \mathrm{blockdiag}[H_{\varepsilon,1}, \ldots, H_{\varepsilon,p}]$, with $H_{\varepsilon,i}^T = [\alpha_1^{(i)}/\varepsilon,\ \alpha_2^{(i)}/\varepsilon^2,\ \ldots,\ \alpha_{r_i}^{(i)}/\varepsilon^{r_i}]$, where $\varepsilon$ is the so-called high-gain parameter, since $1/\varepsilon$ goes to infinity for $\varepsilon \to 0$. The $\alpha_j^{(i)}$ are design parameters and must be chosen such that the polynomials

$s^{r_i} + \alpha_1^{(i)} s^{r_i - 1} + \cdots + \alpha_{r_i - 1}^{(i)} s + \alpha_{r_i}^{(i)}, \quad i = 1, \ldots, p$,

are Hurwitz. Note that estimates obtained in $\zeta$ coordinates can be transformed back to the $x$ coordinates by $\hat x = H^{-1}(\hat\zeta)$. As for example shown in [20] and utilized in [6], under the assumption that the initial observer error lies in a compact set and that the system state stays in a bounded region, for any desired $e_{\max}$ and any convergence time $k_{\mathrm{conv}} \bar\pi$ there exists a maximum $\varepsilon^*$ such that for any $\varepsilon \le \varepsilon^*$ the observer error stays bounded and satisfies $\|x(\tau) - \hat x(\tau)\| \le e_{\max}$ $\forall \tau \ge k_{\mathrm{conv}} \bar\pi$. Thus, the high-gain observer satisfies Assumption 2. Further details can be found in [6], [21].

²We use hatted variables for the observer states and variables.
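The role of the high-gain parameter can be seen in simulation. The following sketch is an illustrative construction, not the design from the paper: it implements observer (13) for a hypothetical single-output second-order system already in observability normal form, using the crude (and trivially globally bounded) approximation $\hat\phi \equiv 0$; shrinking $\varepsilon$ shrinks the residual estimation error, so any error bound $e_{\max}$ of Assumption 2 can be met by tuning $\varepsilon$:

```python
import numpy as np

# High-gain observer sketch for a system with n = r_1 = 2 in observability
# normal form: z1' = z2, z2' = phi(z, u), y = z1. The nonlinearity phi and
# the gains alpha_1, alpha_2 are illustrative choices.

def phi(z, u):
    return -np.sin(z[0]) - 0.5 * z[1] + u       # example nonlinearity

def estimation_error(eps, T=2.0, dt=1e-4):
    a1, a2 = 2.0, 1.0                           # s^2 + a1 s + a2 is Hurwitz
    H = np.array([a1 / eps, a2 / eps**2])       # observer gain H_eps
    z = np.array([1.0, 0.0])                    # true state
    zh = np.zeros(2)                            # observer state, wrong init
    worst = 0.0
    for k in range(int(T / dt)):
        u = 0.1 * np.sin(np.pi * k * dt)
        y = z[0]                                # measured output y = C zeta
        # true system and observer (13) with phi_hat = 0, explicit Euler
        z = z + dt * np.array([z[1], phi(z, u)])
        zh = zh + dt * (np.array([zh[1], 0.0]) + H * (y - zh[0]))
        if k * dt > 1.0:                        # skip the peaking transient
            worst = max(worst, np.linalg.norm(z - zh))
    return worst

# Smaller eps -> smaller residual estimation error
print(estimation_error(0.1), estimation_error(0.01))
```

Note the initial "peaking" of the estimate, a well-known feature of high-gain observers; the error is evaluated only after this transient has died out.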
c) Moving Horizon Observers: Moving horizon estimators (MHE) are optimization based observers, i.e. the state estimate is obtained by the solution of a dynamic optimization problem in which the deviation between the measured output and the simulated output starting from the estimated initial state is minimized. Various approaches to moving horizon state estimation exist [2], [22], [23], [24]. We focus here on the MHE scheme with contraction constraint as introduced in [2], since it satisfies the assumptions needed. In the approach proposed in [2], basically at all sampling instants a dynamic optimization problem is solved, considering the output measurements spanning over a certain estimation window in the past. Assuming that certain reconstructability assumptions hold and that no disturbances are present, one could in principle estimate the system state by solving one single dynamic optimization problem. However, since this would involve the solution of a global optimization problem in real time, it is proposed in [2] to only improve the estimate at every sampling time by requiring that the integrated error between the measured output and the simulated output is decreased from sampling instant to sampling instant. Since the contraction rate directly corresponds to the convergence of the state estimation error, and since it can in principle be chosen freely, this MHE scheme satisfies the assumptions on the state estimator. Thus, it can be employed together with a state feedback NMPC controller to achieve semiglobal practical stability.

VI. CONCLUSIONS
Deriving stable output feedback NMPC schemes is of practical as well as of theoretical relevance. In this paper we expanded the sampled-data output feedback NMPC approach for continuous time systems using high-gain observers, as presented in [6], [21], by stating explicit conditions on the observer error such that the closed loop is semiglobally practically stable. As shown, a series of observer designs satisfy the required conditions. Examples are moving horizon observers with contraction constraint [2] and observers that possess a linear error dynamics where the poles can be chosen arbitrarily [8], [9].