Stability Conditions for Observer Based Output Feedback Stabilization with Nonlinear Model Predictive Control

Rolf Findeisen†, Lars Imsland‡, Frank Allgöwer†, Bjarne A. Foss‡

† Institute for Systems Theory in Engineering, University of Stuttgart, 70550 Stuttgart, Germany, {findeise,allgower}@ist.uni-stuttgart.de
‡ Department of Engineering Cybernetics, Norwegian University of Science and Technology, 7491 Trondheim, Norway, {Lars.Imsland,Bjarne.Foss}@itk.ntnu.no

Abstract — We consider the output feedback problem for continuous time systems using state feedback nonlinear model predictive control in combination with suitable state observers. Specifically we derive, for a broad class of state feedback nonlinear model predictive controllers, conditions on the observer that guarantee that the closed loop is semi-globally practically stable. The derived results are based on the fact that predictive controllers that possess a continuous value function are to some extent inherently robust. To achieve semi-global practical stability one must basically require that the observer used achieves a sufficiently fast convergence of the estimation error. Since this is in general a very stringent condition, we show that a series of observers such as high-gain observers, moving horizon observers and observers with finite convergence time do satisfy it.

Keywords: nonlinear predictive control, output feedback, semi-global practical stability

I. INTRODUCTION

Motivated by the success of linear model predictive control, nonlinear model predictive control (NMPC) has received considerable attention over the past years. Key questions such as how to achieve stability of the closed loop in the state feedback case and how to efficiently solve the underlying optimal control problem have been examined, see for example [1].

In this paper we are concerned with the output feedback problem for sampled-data NMPC for continuous time systems. Sampled-data NMPC here means that, based on the state information at discrete sampling times, an optimal control problem is solved on-line and the resulting input signal is applied open-loop to the system until the next sampling time, when the whole process is repeated.

One of the key obstacles for the application of nonlinear model predictive control (NMPC) is that it is inherently a state feedback control scheme, since the applied input is based on the optimization of the future system behavior. In many applications, however, the system state cannot be fully measured. Thus, to apply predictive control methods the state must be recovered from the measured outputs using state observers. However, even if the state feedback NMPC controller and the observer used are both stable, there is no guarantee that the overall closed loop is stable, since no general separation principle for nonlinear systems exists.

Various researchers have addressed the question of closed loop stability of NMPC using observers for state recovery, see for example [2], [3], [4]. In this paper we follow along the lines of the approach presented in [5], [6], where the combination of stabilizing state feedback controllers with high-gain observers is proposed to achieve semi-global practical stability of the closed loop.
Semi-global practical stability means that for any compact set of state initial conditions that is a subset of the region of attraction of the state feedback controller, and any bounded region of observer initial conditions, there exist an observer gain and a sampling time such that the system state converges in finite time into any desired region of convergence around the origin.

In this paper we show that the semi-global stability result is not limited to high-gain observers. For this purpose we derive explicit conditions on the estimation error such that the closed loop is semi-globally practically stable. Basically we require that the observer error can be made as small as desired in any desired time. While this condition is in principle very stringent, we outline a series of observer designs that achieve the desired properties. Examples are standard high-gain observers [7], optimization based moving horizon observers with contraction constraint [2], observers that possess linear error dynamics where the poles can be chosen arbitrarily (e.g. observers based on normal form considerations and output injection [8], [9]), or observers that achieve finite convergence time such as sliding mode observers [10] or the approach presented in [11], [12].

The approach we present is based on the fact that sampled-data predictive controllers that possess a continuous value function are inherently robust to small disturbances, i.e. we will consider the estimation error as a disturbance acting on the closed loop. This inherent robustness property of NMPC is closely connected to recent results on the robustness properties of discontinuous feedback via sample and hold [13]. However, here we consider the specific case of sampled-data NMPC controllers and we do not assume that the applied input is realized via a hold element.

The paper is structured as follows: In the first part, Section II, we state the considered system class and the considered class of sampled-data NMPC controllers. Section III outlines the basic idea, before we present in Section IV the semi-global practical stability result. In Section V we show that several existing observer designs satisfy the required assumptions and thus can be used in combination with a stabilizing NMPC controller to achieve semi-global practical stability.

II. SYSTEM CLASS AND STATE FEEDBACK NMPC

We consider the stabilization of time-invariant nonlinear systems of the form

    \dot{x}(t) = f(x(t), u(t)), \quad x(0) = x_0,   (1a)
    y = h(x, u),                                    (1b)

where $x(t) \in \mathbb{R}^n$ is the system state, $u(t) \in \mathbb{R}^m$ is the input vector and $y \in \mathbb{R}^p$ are the measured outputs. Besides stabilization we require that the inputs stay in the compact set $U \subset \mathbb{R}^m$ and that the states stay in the connected set $X \subseteq \mathbb{R}^n$, i.e. $u(t) \in U$, $x(t) \in X$ for all $t \ge 0$. We furthermore assume that $(0,0) \in X \times U$.
With respect to the vector field $f : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^n$ we assume that it is locally Lipschitz continuous and satisfies $f(0,0) = 0$. We do not state explicit observability assumptions on the system or conditions on the output function $h : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^p$, since they depend on the observer used for state recovery. Note that, as outlined in Section V, a whole series of different observers satisfy the conditions required for the semi-global practical stability result derived in Section IV. Furthermore, we do not state an explicit controllability assumption. Instead, as often done in NMPC, controllability is contained in the feasibility assumption of the optimal control problem.

A. State Feedback NMPC

In state feedback sampled-data NMPC an open-loop optimal control problem is solved at discrete sampling instants $t_i$ based on the current state information $x(t_i)$. The sampling instants $t_i$ are given by a partition $\pi$ of the time axis.

Definition 1 (Partition): Every series $\pi = (t_i)$, $i \in \mathbb{N}$, of positive real numbers such that $t_0 = 0$, $t_i < t_{i+1}$ and $t_i \to \infty$ for $i \to \infty$ is called a partition. Furthermore, $\bar{\pi} := \sup_{i \in \mathbb{N}} (t_{i+1} - t_i)$ is called the upper diameter and $\underline{\pi} := \inf_{i \in \mathbb{N}} (t_{i+1} - t_i)$ the lower diameter of $\pi$.

For a given $t$, $t_i$ is to be taken as the nearest sampling instant with $t_i < t$. In sampled-data NMPC, the open-loop input applied between the sampling instants is given by the solution of the optimal control problem

    \min_{\bar{u}(\cdot)} J(\bar{u}(\cdot); x(t_i))                                              (2a)
    subject to
    \dot{\bar{x}}(\tau) = f(\bar{x}(\tau), \bar{u}(\tau)), \quad \bar{x}(t_i) = x(t_i),          (2b)
    \bar{u}(\tau) \in U, \quad \bar{x}(\tau) \in X, \quad \tau \in [t_i, t_i + T_p],             (2c)
    \bar{x}(t_i + T_p) \in \mathcal{E}.                                                          (2d)

The bar denotes predicted variables, i.e. $\bar{x}(\cdot)$ is the solution of (2b) driven by the input $\bar{u}(\cdot) : [t_i, t_i + T_p] \to U$ with the initial condition $x(t_i)$. The cost functional $J$ minimized over the control horizon $T_p \ge \bar{\pi} > 0$ is given by

    J(\bar{u}(\cdot); x(t_i)) := \int_{t_i}^{t_i + T_p} F(\bar{x}(\tau), \bar{u}(\tau)) \, d\tau + E(\bar{x}(t_i + T_p)),   (3)

where the stage cost $F : \mathbb{R}^n \times U \to \mathbb{R}$ is assumed to be continuous, satisfies $F(0,0) = 0$, and is lower bounded by a class $\mathcal{K}$ function $\alpha_F$ (a continuous function $\alpha : [0,\infty) \to [0,\infty)$ is a $\mathcal{K}$ function if it is strictly increasing and $\alpha(0) = 0$): $\alpha_F(\|x\|) \le F(x,u)$ for all $(x,u) \in \mathbb{R}^n \times U$. The terminal region constraint $\mathcal{E}$ and the terminal penalty term $E$ might be present or not. They are often used to enforce stability of the closed loop [1], [14]. The solution of the optimal control problem (2) is denoted by $\bar{u}^*(\cdot; x(t_i))$. It defines the open-loop input that is applied to the system until the next sampling instant $t_{i+1}$:

    u(t; x(t_i)) = \bar{u}^*(t; x(t_i)), \quad t \in [t_i, t_{i+1}).   (4)

The control $u(t; x(t_i))$ is a feedback, since it is recalculated at each sampling instant using the new state measurement. We denote in the following the solution of (1a), starting at time $t_1$ from an initial state $x(t_1)$ and applying an input $u : [t_1, t_2] \to \mathbb{R}^m$, by $x(\tau; u(\cdot), x(t_1))$, $\tau \in [t_1, t_2]$. For clarity of presentation we limit ourselves to input signals that are piecewise continuous.
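For illustration only, the following minimal Python sketch shows how the open-loop problem (2)-(3) could be solved numerically by direct single shooting with a piecewise-constant input parametrization. The example dynamics, horizon, cost weights, input bounds and the quadratic terminal penalty are assumptions made purely for the sketch; the state constraint in (2c) and the terminal region constraint (2d) are omitted for brevity, and the nested integration makes this slow but easy to follow.

    # Minimal single-shooting sketch of the open-loop NMPC problem (2)-(3).
    # Example system, horizon, weights and input bounds are illustrative assumptions.
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import minimize

    def f(x, u):
        # assumed example dynamics (controlled Van der Pol oscillator), f(0,0) = 0
        return np.array([x[1], -x[0] + (1.0 - x[0]**2) * x[1] + u])

    T_p, N = 2.0, 20                  # prediction horizon and number of input intervals
    dt = T_p / N                      # piecewise-constant input parametrization

    def F(x, u):                      # stage cost, F(0,0) = 0, lower bounded in ||x||
        return x @ x + 0.1 * u**2

    def J(u_seq, x_ti):
        # integrate (2b) under piecewise-constant inputs and accumulate the cost (3)
        x, cost = np.array(x_ti, float), 0.0
        for u in u_seq:
            cost += F(x, u) * dt      # rectangle-rule approximation of the integral
            sol = solve_ivp(lambda t, xx: f(xx, u), (0.0, dt), x, rtol=1e-6)
            x = sol.y[:, -1]
        return cost + 10.0 * (x @ x)  # quadratic terminal penalty E(x(t_i + T_p))

    def solve_ocp(x_ti, u_guess=None):
        u0 = np.zeros(N) if u_guess is None else u_guess
        bounds = [(-1.0, 1.0)] * N    # compact input set U = [-1, 1]
        res = minimize(J, u0, args=(x_ti,), bounds=bounds, method="SLSQP")
        return res.x                  # open-loop input sequence, playing the role of u_bar*( . ; x(t_i))
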
Furthermore, in the following we often refer to the so-called value function $V(x)$, which is defined as the minimal value of the cost for the state $x$: $V(x) := J(\bar{u}^*(\cdot; x); x)$.

Instead of going into details on how to achieve closed loop stability using state feedback NMPC, we explicitly state the assumptions we require on the NMPC scheme.

Assumption 1: There exists a simply connected region $\mathcal{R} \subseteq X \subseteq \mathbb{R}^n$ with $0 \in \mathcal{R}$ (nominal region of attraction) such that:
1) Along solution trajectories starting at a sampling instant $t_i$ at $x(t_i) \in \mathcal{R}$, the value function satisfies for all $\tau \ge 0$
    V(x(t_i + \tau)) - V(x(t_i)) \le - \int_{t_i}^{t_i + \tau} F(x(s), u(s; x(s_i))) \, ds.
2) The value function $V(x)$ is uniformly continuous.
3) For all compact subsets $\mathcal{S} \subset \mathcal{R}$ there is at least one level set $\Omega_c = \{x \in \mathcal{R} \mid V(x) \le c\}$ such that $\mathcal{S} \subset \Omega_c$.

Note that the given assumptions imply stability of the NMPC scheme [14], [1]. Assumption 1.1 is typically satisfied for stabilizing NMPC schemes. However, in general there is no guarantee that a stabilizing NMPC scheme satisfies Assumptions 1.2 and 1.3, especially if state constraints are present. As is well known [15], [14], [16], NMPC can also stabilize systems that cannot be stabilized by a feedback that is continuous in the state. This in general also implies a discontinuous value function. Many NMPC schemes, however, satisfy this assumption at least locally around the origin [17], [18], [1]. Furthermore, for example, NMPC schemes that are based on control Lyapunov functions [19] and that are not subject to constraints on the states and inputs satisfy Assumption 1.

Remark 1: Note that the uniform continuity assumption on $V(x)$ implies that for any compact subset $\mathcal{S} \subset \mathcal{R}$ there exists a $\mathcal{K}$-function $\alpha_V$ such that for any $x_1, x_2 \in \mathcal{S}$: $V(x_1) - V(x_2) \le \alpha_V(\|x_1 - x_2\|)$.

III. PROBLEM STATEMENT AND BASIC IDEA

We assume that instead of the real system state $x(t_i)$, only a state estimate $\hat{x}(t_i)$ is available at every sampling instant. Thus, instead of the optimal feedback (4) the following "disturbed" feedback is applied:

    u(\tau; x(t_i)) = \bar{u}^*(\tau; \hat{x}(t_i)), \quad \tau \in [t_i, t_{i+1}).   (5)

Note that the estimated state $\hat{x}(t_i)$ can be outside the feasible set $\mathcal{R}$. To avoid feasibility problems we assume that in this case the input is fixed to an arbitrary, but bounded value. We derive in the next section semi-global practical stability assuming that, after an initial phase, the observer error at the sampling instants can be made sufficiently small, i.e. we assume that:

Assumption 2 (Observer error convergence): For any desired maximum estimation error $e_{\max} > 0$ there exist observer parameters such that

    \|x(t_i) - \hat{x}(t_i)\| \le e_{\max}, \quad \forall t_i \ge k_{conv} \bar{\pi}.   (6)

Here $k_{conv} > 0$ is a freely chosen, but fixed, number of sampling instants after which the observer error has to satisfy (6).

Remark 2: Depending on the observer used, further conditions on the system (e.g. observability assumptions) might be necessary, see Section V. Note that the observer does not have to operate continuously, since the state information is only needed at the sampling times.
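Continuing the sketch above (and reusing its f, dt and solve_ocp), the disturbed feedback (5) amounts to the following certainty-equivalence loop: at every sampling instant the open-loop problem is solved for the current estimate instead of the true state, and the resulting input is applied open-loop until the next sampling instant. The observer_update argument is a hypothetical placeholder for any observer satisfying Assumption 2, and the output map y = x_1 is an assumption for the example.

    # Certainty-equivalence output feedback loop corresponding to (5).
    # Reuses f, dt and solve_ocp from the single-shooting sketch above.
    import numpy as np
    from scipy.integrate import solve_ivp

    def nmpc_output_feedback(x0, xhat0, observer_update, sampling_time, t_end):
        """Simulate the sampled-data closed loop driven by the estimated state."""
        x, xhat, t = np.array(x0, float), np.array(xhat0, float), 0.0
        while t < t_end:
            u_seq = solve_ocp(xhat)                    # optimal open-loop input for the estimate
            n_apply = max(1, int(round(sampling_time / dt)))
            for u in u_seq[:n_apply]:                  # apply it open-loop until t_{i+1}
                sol = solve_ivp(lambda s, xx: f(xx, u), (0.0, dt), x, rtol=1e-6)
                x = sol.y[:, -1]
            t += n_apply * dt
            y = x[0]                                   # assumed measured output y = h(x, u) = x_1
            xhat = observer_update(xhat, y, u_seq[:n_apply], n_apply * dt)
        return x

    # observer_update is a stand-in: any observer satisfying Assumption 2 (for example
    # the high-gain observer of Section V) can be plugged in here.
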
Since we do not assume that the observer error converges to zero, we can certainly not achieve asymptotic stability of the origin, nor can we render the region of attraction of the state feedback controller invariant. Thus, we consider in the following the question whether, under the assumption that for any maximum error $e_{\max}$ observer parameters exist such that (6) holds, the system state in the closed loop can be rendered semi-globally practically stable. Here semi-globally practically stable means that for arbitrary level sets $\Omega_\alpha \subset \Omega_{c_0} \subset \Omega_c \subset \mathcal{R}$, $0 < \alpha < c_0 < c$, there exist observer parameters and a maximum sampling time $\bar{\pi}$ such that for all $x(0) \in \Omega_{c_0}$:
1. $x(t) \in \Omega_c$ for all $t > 0$,
2. there exists a $T_\alpha > 0$ such that $x(t) \in \Omega_\alpha$ for all $t \ge T_\alpha$.
For clarification see Fig. 1.

[Fig. 1. Set of initial conditions $\Omega_{c_0}$, desired maximum attainable set $\Omega_c$ and desired region of convergence $\Omega_\alpha$, shown as nested level sets inside $\mathcal{R}$ with $x(0) \in \Omega_{c_0}$.]

Note that in the following we only consider level sets for the desired set of initial conditions ($\Omega_{c_0}$), the maximum attainable set ($\Omega_c$) and the set of desired convergence ($\Omega_\alpha$). We do this purely to simplify the presentation. In principle one can consider arbitrary compact sets which contain the origin and are subsets of each other and of $\mathcal{R}$, since due to Assumption 1.3 it is always possible to find suitable covering level sets.

a) Basic Idea: The derived results are based on the observation that small state estimation errors lead to a (small) difference between the predicted state $\bar{x}$ and the real evolving state (as long as both of them are contained in the set $\Omega_c$). As shown below, the influence of the estimation error (after the convergence time $k_{conv}\bar{\pi}$ of the observer) can in principle be bounded by

    V(x(t_{i+1})) - V(x(t_i)) \le \varepsilon(\|x(t_i) - \hat{x}(t_i)\|)
        - \int_{t_i}^{t_{i+1}} F(\bar{x}(\tau; \bar{u}^*(\cdot; \hat{x}(t_i)), x(t_i)), \bar{u}^*(\tau; \hat{x}(t_i))) \, d\tau,   (7)

where $\varepsilon$ corresponds to the state estimation error contribution. Note that the integral contribution is strictly negative. Thus, if $\varepsilon$ "scales" with the size of the observer error (it certainly also scales with the sampling time $t_{i+1} - t_i$), one can achieve contraction. However, this bound holds only after a certain time, since in the initial phase the estimation error is not bounded. To avoid that the system state leaves the set $\Omega_c$ during this time, we thus have to decrease the maximum sampling time $\bar{\pi}$.

To bound the integral contribution on the right side of (7), we state the following fact:

Fact 1: For any $c > \alpha > 0$ with $\Omega_c \subset \mathcal{R}$ and $T_p > \delta > 0$, the lower bound $V_{\min}(c, \alpha, \delta)$ on the value function exists and is non-trivial for all $x_0 \in \Omega_c \setminus \Omega_\alpha$:

    0 < V_{\min}(c, \alpha, \delta) := \min_{x_0 \in \Omega_c \setminus \Omega_\alpha} \int_0^{\delta} F(\bar{x}(s; \bar{u}^*(\cdot; x_0), x_0), \bar{u}^*(s; x_0)) \, ds < \infty.
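Schematically, the contraction mechanism behind (7) and Fact 1 can be summarized as follows (the precise version, using the intermediate level sets $\Omega_{\alpha/2}$ and the bound $V_{\min}(c, \alpha/4, \underline{\pi})$, is carried out in the proof of Theorem 1 below):

\[
x(t_i) \in \Omega_c \setminus \Omega_\alpha, \quad \varepsilon(e_{\max}) < V_{\min}(c, \alpha, \underline{\pi})
\;\Longrightarrow\;
V(x(t_{i+1})) - V(x(t_i)) \;\le\; \varepsilon(e_{\max}) - V_{\min}(c, \alpha, \underline{\pi}) \;<\; 0,
\]

so outside $\Omega_\alpha$ the value function decreases by at least a fixed amount over every sampling interval, forcing entry into $\Omega_\alpha$ after finitely many sampling instants.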
IV. SEMI-GLOBAL PRACTICAL STABILITY

Under the given setup the following theorem holds.

Theorem 1: Given arbitrary level sets $\Omega_\alpha \subset \Omega_{c_0} \subset \Omega_c \subset \mathcal{R}$. Then there exist a maximum allowable observer error $e_{\max}$ and a maximum sampling time $\bar{\pi}$ such that for all initial conditions $x_0 \in \Omega_{c_0}$ we have $x(\tau) \in \Omega_c$ for all $\tau \ge 0$, and the state $x(\tau)$ converges in finite time to the set $\Omega_\alpha$ and stays there.

Proof: The proof is divided into three parts. In the first part it is ensured that the system state does not leave the maximum admissible set $\Omega_c$ during the convergence time $k_{conv}\bar{\pi}$ of the observer. This is achieved by decreasing the maximum sampling time $\bar{\pi}$ sufficiently. In the second part it is shown that, by requiring a sufficiently small $e_{\max}$, the system state converges into the set $\Omega_{\alpha/2}$. In the third part it is shown that the state will not leave the set $\Omega_\alpha$ once it has entered it at a sampling time.

First part ($x(t_i + \tau) \in \Omega_c$ for all $x(t_i) \in \Omega_{c_0}$): Note that $\Omega_{c_0}$ is strictly contained in $\Omega_c$ and thus also in $\Omega_{c_1}$, with $c_1 := c_0 + (c - c_0)/2$. Thus, there exists a time $T_{c_1}$ such that $x(\tau) \in \Omega_{c_1}$ for all $0 \le \tau \le T_{c_1}$. The existence is guaranteed since, as long as $x(t) \in \Omega_c$,

    \|x(t) - x_0\| \le \int_0^t \|f(x(s), u(s))\| \, ds \le k_{\Omega_c} t,

where $k_{\Omega_c}$ is a constant depending on the Lipschitz constant of $f$ and on the bounds on $u$. We take $T_{c_1}$ as the smallest (worst case) time to reach the boundary of $\Omega_{c_1}$ from any point $x_0 \in \Omega_{c_0}$, allowing $u(s)$ to take any value in $U$. We now pick the maximum sampling time $\bar{\pi}$ as $\bar{\pi} \le T_{c_1}/k_{conv}$; it will be fixed for the remainder of the proof. Note that there always exist observer parameters such that after this time the observer error is smaller than any desired $e_{\max}$.

In the following we consider the difference in the value function between the initial state $x(t_i) \in \Omega_{c_1}$ at a sampling time and the evolving state $x(t_i + \tau; x(t_i), u_{\hat{x}})$. For simplicity of notation, $u_{\hat{x}}$ denotes the optimal input resulting from $\hat{x}(t_i)$ and $u_x$ denotes the input that corresponds to the real state $x(t_i)$. Furthermore, $x_i = x(t_i)$ and $\hat{x}_i = \hat{x}(t_i)$. The following equality is valid as long as the states stay within $\Omega_c$:

    V(x(\tau; x_i, u_{\hat{x}})) - V(x_i) = V(x(\tau; x_i, u_{\hat{x}})) - V(x(\tau; \hat{x}_i, u_{\hat{x}}))
        + V(x(\tau; \hat{x}_i, u_{\hat{x}})) - V(\hat{x}_i) + V(\hat{x}_i) - V(x_i).   (8)

We can bound the last two terms since $V$ is uniformly continuous on compact subsets of $\mathcal{R} \supset \Omega_c$. Furthermore, note that the third and fourth terms start from the same $\hat{x}_i$, and that the first difference can be bounded via $\alpha_V$:

    V(x(\tau; x_i, u_{\hat{x}})) - V(x_i) \le \alpha_V(e^{L_{f_x}(\tau - t_i)} \|\hat{x}_i - x_i\|)
        - \int_{t_i}^{\tau} F(x(s; \hat{x}_i, u_{\hat{x}}), u_{\hat{x}}) \, ds + \alpha_V(\|\hat{x}_i - x_i\|).

Here we used an upper bound for $\|x(\tau; x_i, u_{\hat{x}}) - x(\tau; \hat{x}_i, u_{\hat{x}})\|$ based on the Gronwall-Bellman lemma. From this inequality it follows (skipping the negative contribution of the integral) that if one chooses the observer parameters such that

    \alpha_V(e^{L_{f_x}\bar{\pi}} e_{\max}) + \alpha_V(e_{\max}) \le c - c_1,   (9)

then $x(t_i + \tau) \in \Omega_c$ for all $\tau \le t_{i+1} - t_i$.
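For completeness, the Gronwall-Bellman estimate used above follows from the local Lipschitz continuity of $f$ with respect to $x$ on $\Omega_c$ (with constant $L_{f_x}$, uniformly in $u \in U$), since both trajectories are driven by the same input $u_{\hat{x}}$:

\[
\|x(\tau; x_i, u_{\hat{x}}) - x(\tau; \hat{x}_i, u_{\hat{x}})\|
\;\le\; \|x_i - \hat{x}_i\| + \int_{t_i}^{\tau} L_{f_x}\, \|x(s; x_i, u_{\hat{x}}) - x(s; \hat{x}_i, u_{\hat{x}})\| \, ds
\;\le\; e^{L_{f_x}(\tau - t_i)}\, \|x_i - \hat{x}_i\|.
\]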
Second part (finite time convergence to $\Omega_{\alpha/2}$): We assume that (9) holds and that $x(t_i) \in \Omega_{c_1}$. Note that (9) assures that $x(t_i + \tau) \in \Omega_c$ for all $\tau \in [0, t_{i+1} - t_i]$. Assuming that $x_i \notin \Omega_{\alpha/2}$, we obtain from (8) that

    V(x(\tau; x_i, u_{\hat{x}})) - V(x_i) \le - V_{\min}(c, \alpha/2, \tau - t_i)
        + \alpha_V(e^{L_{f_x}\bar{\pi}} \|\hat{x}_i - x_i\|) + \alpha_V(\|\hat{x}_i - x_i\|).

To achieve convergence to the set $\Omega_{\alpha/2}$ in finite time we need the right hand side to be strictly less than zero. One possibility to obtain this is to require that the observer parameters are chosen such that

    \alpha_V(e^{L_{f_x}\bar{\pi}} \|\hat{x}_i - x_i\|) + \alpha_V(\|\hat{x}_i - x_i\|) - V_{\min}(c, \alpha/2, \underline{\pi})
        \le - V_{\min}(c, \alpha/2, \underline{\pi}) + V_{\min}(c, \alpha/4, \underline{\pi}).

Note that $-V_{\min}(c, \alpha/2, \underline{\pi}) + V_{\min}(c, \alpha/4, \underline{\pi}) < 0$ since $\alpha/4 < \alpha/2$. Thus, if we choose the observer parameters such that

    \alpha_V(e^{L_{f_x}\bar{\pi}} e_{\max}) + \alpha_V(e_{\max}) \le V_{\min}(c, \alpha/4, \underline{\pi}),

we achieve finite time convergence from any point $x(t_i) \in \Omega_{c_0}$ to the set $\Omega_{\alpha/2}$.

Third part ($x(t_{i+1}) \in \Omega_\alpha$ for all $x(t_i) \in \Omega_{\alpha/2}$): This is trivially satisfied following the arguments in the first part of the proof, assuming that

    \alpha_V(e^{L_{f_x}\bar{\pi}} e_{\max}) + \alpha_V(e_{\max}) \le \alpha/2.

Combining all three steps, we obtain the theorem if

    \bar{\pi} \le T_{c_1}/k_{conv}   (10)

and if we choose the observer error $e_{\max}$ such that

    \alpha_V(e^{L_{f_x}\bar{\pi}} e_{\max}) + \alpha_V(e_{\max}) \le \min\{c - c_1,\; V_{\min}(c, \alpha/4, \underline{\pi}),\; \alpha/2\}.   (11)

Remark 3: Explicitly designing an observer based on (11) and (10) is in general not possible. However, the theorem underpins that if the observer error can be decreased sufficiently fast, then the closed loop system state will be semi-globally practically stable.

V. POSSIBLE OBSERVER DESIGNS

Theorem 1 lays the basis for the design of observer based output feedback NMPC controllers that achieve semi-global practical stability. While in principle Assumption 2 is very difficult to satisfy, we outline in this section a series of observer designs that achieve the desired properties. We go into details for standard high-gain observers [7] and optimization based moving horizon observers with contraction constraint [2]. Note that further observer designs that satisfy the assumptions are, for example, observers that possess linear error dynamics where the poles can be chosen arbitrarily (e.g. based on normal form considerations and output injection [8], [9]), or observers that achieve finite convergence time such as sliding mode observers [10] or the approach presented in [11], [12].

b) High-Gain Observers: One possible observer design that satisfies Assumption 2 is the high-gain observer. Basically, high-gain observers obtain a state estimate based on approximated derivatives of the output signals. They are in general based on the assumption that the system is uniformly completely observable. Uniform complete observability is defined in terms of the observability map $\mathcal{H}$, which is given by successive differentiation of the output $y$:

    Y^T = \big[ y_1, \dot{y}_1, \ldots, y_1^{(r_1)}, y_2, \ldots, y_p, \ldots, y_p^{(r_p)} \big] =: \mathcal{H}(x)^T.

Here $Y$ is the vector of output derivatives. Note that we assume for simplicity of presentation that $\mathcal{H}$ does not depend on the input and its derivatives. More general results allowing $\mathcal{H}$ to depend on the input and its derivatives can be found in [6]. We assume that the system is uniformly completely observable, i.e.:

Assumption 3: The system (1a) is uniformly completely observable in the sense that there exists a set of indices $\{r_1, \ldots, r_p\}$ such that the mapping $Y = \mathcal{H}(x)$ depends only on $x$, is smooth with respect to $x$, and its inverse from $Y$ to $x$ is smooth and onto.

The system state is recovered by a high-gain observer. Application of the coordinate transformation $\zeta := \mathcal{H}(x)$, where $\mathcal{H}$ is the observability map, to the system (1a) leads to the system in observability normal form in $\zeta$ coordinates:

    \dot{\zeta} = A\zeta + B\phi(\zeta, u), \qquad y = C\zeta.

The matrices $A$, $B$ and $C$ have the following structure:

    A = \mathrm{blockdiag}[A_1, \ldots, A_p], \quad
    A_i = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & \cdots & \cdots & 0 & 1 \\ 0 & \cdots & \cdots & \cdots & 0 \end{pmatrix} \in \mathbb{R}^{r_i \times r_i},

    B = \mathrm{blockdiag}[B_1, \ldots, B_p], \quad B_i = \begin{pmatrix} 0 & \cdots & 0 & 1 \end{pmatrix}^T \in \mathbb{R}^{r_i \times 1},

    C = \mathrm{blockdiag}[C_1, \ldots, C_p], \quad C_i = \begin{pmatrix} 1 & 0 & \cdots & 0 \end{pmatrix} \in \mathbb{R}^{1 \times r_i},

and $\phi : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^p$ is the "system nonlinearity" in observability normal form. The high-gain observer (hatted variables denote observer states and variables)

    \dot{\hat{\zeta}} = A\hat{\zeta} + H_\varepsilon (y - C\hat{\zeta}) + B\hat{\phi}(\hat{\zeta}, u)   (13)

allows recovery of the states $\zeta$ from information of $y(t)$ [7], [20], assuming that:

Assumption 4: $\hat{\phi}$ in (13) is globally bounded.

The function $\hat{\phi}$ is the approximation of $\phi$ that is used in the observer. The observer gain matrix $H_\varepsilon$ is given by $H_\varepsilon = \mathrm{blockdiag}[H_{\varepsilon,1}, \ldots, H_{\varepsilon,p}]$, with $H_{\varepsilon,i}^T = [\alpha_1^{(i)}/\varepsilon,\; \alpha_2^{(i)}/\varepsilon^2,\; \ldots,\; \alpha_{r_i}^{(i)}/\varepsilon^{r_i}]$, where $\varepsilon$ is the so-called high-gain parameter, since $1/\varepsilon$ goes to infinity for $\varepsilon \to 0$. The $\alpha_j^{(i)}$ are design parameters and must be chosen such that the polynomials

    s^{r_i} + \alpha_1^{(i)} s^{r_i - 1} + \cdots + \alpha_{r_i - 1}^{(i)} s + \alpha_{r_i}^{(i)} = 0, \quad i = 1, \ldots, p,

are Hurwitz. Note that estimates obtained in $\zeta$ coordinates can be transformed back to the $x$ coordinates by $\hat{x} = \mathcal{H}^{-1}(\hat{\zeta})$.

As for example shown in [20] and utilized in [6], under the assumption that the initial observer error lies in a compact set and that the system state stays in a bounded region, for any desired $e_{\max}$ and any convergence time $k_{conv}\bar{\pi}$ there exists a maximum $\varepsilon^*$ such that for any $\varepsilon \le \varepsilon^*$ the observer error stays bounded and satisfies $\|x(\tau) - \hat{x}(\tau)\| \le e_{\max}$ for all $\tau \ge k_{conv}\bar{\pi}$. Thus, the high-gain observer satisfies Assumption 2. Further details can be found in [6], [21].
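As an illustration (not from the paper), the sketch below implements the observer (13) for a hypothetical second-order, single-output example that is already in observability normal form, so that $r_1 = n = 2$ and $\mathcal{H}$ is the identity. The nonlinearity, its bounded approximation, the coefficients $\alpha_j^{(1)}$, the value of $\varepsilon$ and the zero-order hold on the measured output over each sampling interval are all assumptions chosen for the example.

    # High-gain observer (13) for an assumed 2nd-order single-output example that is
    # already in observability normal form (zeta = x), integrated between sampling instants.
    import numpy as np
    from scipy.integrate import solve_ivp

    eps = 0.01                               # high-gain parameter (smaller -> faster error decay)
    alpha1, alpha2 = 2.0, 1.0                # s^2 + 2 s + 1 is Hurwitz
    H_eps = np.array([alpha1 / eps, alpha2 / eps**2])
    A = np.array([[0.0, 1.0], [0.0, 0.0]])
    B = np.array([0.0, 1.0])
    C = np.array([1.0, 0.0])

    def phi_hat(zeta, u):
        # globally bounded approximation of the system nonlinearity (Assumption 4);
        # the saturation level is an assumption made for the example
        return np.clip(-zeta[0] + (1.0 - zeta[0]**2) * zeta[1] + u, -10.0, 10.0)

    def observer_step(zeta_hat, y_meas, u, dt):
        """Integrate the observer dynamics (13) over one interval of length dt,
        holding the measured output y and the applied input u constant."""
        def rhs(t, z):
            return A @ z + H_eps * (y_meas - C @ z) + B * phi_hat(z, u)
        sol = solve_ivp(rhs, (0.0, dt), zeta_hat, rtol=1e-8)
        return sol.y[:, -1]
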
c) Moving Horizon Observers: Moving horizon estimators (MHE) are optimization based observers, i.e. the state estimate is obtained as the solution of a dynamic optimization problem in which the deviation between the measured output and the simulated output starting from the estimated initial state is minimized. Various approaches to moving horizon state estimation exist [2], [22], [23], [24]. We focus here on the MHE scheme with contraction constraint as introduced in [2], since it satisfies the assumptions needed. In the approach proposed in [2], basically at every sampling instant a dynamic optimization problem is solved, considering the output measurements spanning over a certain estimation window in the past. Assuming that certain reconstructability assumptions hold and that no disturbances are present, one could in principle estimate the system state by solving one single dynamic optimization problem. However, since this would involve the solution of a global optimization problem in real time, it is proposed in [2] to only improve the estimate at every sampling time by requiring that the integrated error between the measured output and the simulated output decreases from sampling instant to sampling instant. Since the contraction rate directly corresponds to the convergence rate of the state estimation error, and since it can in principle be chosen freely, this MHE scheme satisfies the assumptions on the state estimator. Thus, it can be employed together with a state feedback NMPC controller to achieve semi-global practical stability.
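The following loose sketch indicates how a moving horizon estimate with a contraction test in the spirit of [2] could be organized. It is a simplification: the contraction requirement is checked a posteriori rather than imposed as a constraint of the optimization, and the window handling, the contraction factor beta, and the output map y = x_1 are assumptions for the example. It reuses f and dt from the sketches above.

    # Simplified moving-horizon estimation step with an a-posteriori contraction test.
    # Decision variable: the state estimate at the beginning of the measurement window.
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import minimize

    beta = 0.8        # assumed contraction factor, 0 < beta < 1

    def simulate_outputs(x_start, u_window):
        # simulate the model over the window and record the predicted outputs y = x_1
        x, ys = np.array(x_start, float), []
        for u in u_window:
            ys.append(x[0])
            x = solve_ivp(lambda t, xx: f(xx, u), (0.0, dt), x, rtol=1e-6).y[:, -1]
        return np.array(ys), x

    def mhe_step(x_window_guess, u_window, y_window, prev_cost):
        """y_window: array of sampled measurements over the window; returns the
        estimate at the current time and the achieved output-fit cost."""
        def fit_cost(x_start):
            ys, _ = simulate_outputs(x_start, u_window)
            return float(np.sum((ys - y_window) ** 2)) * dt
        res = minimize(fit_cost, x_window_guess, method="Nelder-Mead")
        new_cost = fit_cost(res.x)
        if new_cost <= beta * prev_cost:           # contraction achieved: accept the update
            return simulate_outputs(res.x, u_window)[1], new_cost
        # otherwise keep the propagated previous estimate for this step
        return simulate_outputs(x_window_guess, u_window)[1], prev_cost
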
VI. CONCLUSIONS

To derive stable output feedback NMPC schemes is of practical as well as of theoretical relevance. In this paper we expanded the sampled-data output feedback NMPC approach for continuous time systems using high-gain observers, as presented in [6], [21], by stating explicit conditions on the observer error such that the closed loop is semi-globally practically stable. As shown, a series of observer designs satisfy the required conditions. Examples are moving horizon observers with contraction constraint [2], observers that possess linear error dynamics where the poles can be chosen arbitrarily [8], [9], and observers that achieve finite convergence time [10], [11], [12].