A Framework for Reactive Motion and Sensing Planning: A Critical Events-Based Approach

R. Murrieta-Cid¹, A. Sarmiento², T. Muppirala², S. Hutchinson², R. Monroy¹, M. Alencastre¹, L. Muñoz¹, and R. Swain¹

¹ Tec de Monterrey Campus Estado de México, {rafael.murrieta, raulm, malencastre, lmunoz, rswain}@itesm.mx
² University of Illinois at Urbana-Champaign, {asarmien, muppiral, seth}@uiuc.edu

Abstract. We propose a framework for reactive motion and sensing planning based on critical events. A critical event amounts to crossing a critical curve, which divides the environment. We have applied our approach to two different problems: i) object finding and ii) pursuit-evasion. We claim that the proposed framework is useful in general for reactive motion planning based on information provided by sensors. We generalize and formalize the approach and suggest other possible applications.

1 Introduction

We propose a framework for reactive motion and sensing planning based on critical events. A critical event amounts to crossing a critical curve, which divides the environment. We work at the frontier between computational geometry algorithms and control algorithms; the originality and strength of this project lie in bringing the two together.

We divide the environment into finitely many parts using a discretization function that takes sensor information as input. Thus, in our approach, planning corresponds to switching among a finite number of control actions in response to sensor input. This approach naturally allows us to deal with obstacles.

We have applied our approach to several different problems; for lack of space we present only two here: i) object finding and ii) pursuit-evasion. In object finding, our approach produces a continuous path that is optimal in that it minimizes the expected time taken to find the object.
In pursuit-evasion, we have dealt with computing the motions of a mobile robot pursuer in order to maintain visibility of a moving evader in an environment with obstacles.

Our solutions to these two problems have been published elsewhere. In this paper we show that these solutions actually rely on the same general framework. We claim that the proposed framework is useful in general for reactive motion planning based on information provided by sensors. We generalize and formalize the approach and suggest other possible applications.

A. Gelbukh, A. de Albornoz, and H. Terashima (Eds.): MICAI 2005, LNAI 3789, pp. 990–1000, 2005. © Springer-Verlag Berlin Heidelberg 2005

2 A General Framework

The crux of our approach consists in relating critical events with both the controls to be applied to the robot and the robot's representation of its environment. A critical event signals that the robot has crossed a critical curve drawn in the robot workspace W. It corresponds to a change in the sensor readings that drive our algorithms.

We use C and U to denote, respectively, the robot configuration space and the robot control space (velocity vectors applied to the robot). The robot phase space P ⊂ C × U is the cross product of C and U. Critical curves are projections of P onto the workspace. This means that even if a configuration is valid for accomplishing a task, it may become invalid because of the velocity associated with it. Hence, the critical curves may change location according to the robot's velocity.

Let Y denote the observation space, which corresponds to all possible sensor readings. The robot state space is X ⊂ C × E, in which E is the set of all possible environments where the robot might be [12]. Evidently, there is a relation between the robot state x(t) and the robot observation y(t), both functions of time.
Thus, y(t) = h(x(t)), where y ∈ Y and x ∈ X. The robot information state is defined as i_t = (u_0, ..., u_{t-1}, y_0, ..., y_t); i_t is the history of all sensor readings up to time t and all controls that have been applied to the robot up to time t − 1 [2]. The information space I is defined as the set of all possible information states [12]. We stress that the critical events and the robot objective lie over I. That is, a robot objective amounts to achieving a specific task defined on the information state. Two example robot objectives are maintaining visibility of a moving evader and reaching a robot configuration given in terms of a specific robot sensor reading.

Critical events may be of several types, and each type of critical event is systematically associated with a type of control. Accomplishing a robotic task mainly amounts to answering the following question: what control should be applied to the robot given some i_t? Thus, planning corresponds to a discrete mapping ce : i_t → u between i_t and u, triggered by the critical event ce. The output controls are, in the worst case, locally optimal policies that solve the robotic task.

Note that instead of using the robot state space X, we use the critical events to decide which control should be applied. ce actually encodes the most relevant information about X. In addition, it relates observations with the best control that can be applied. ce is built using local information but, if necessary, it may involve global information.

We want to use our framework to generate, whenever possible, optimal controls to accomplish a robotic task. As mentioned earlier, planning corresponds to relating a critical event with a control. However, some problems may be history dependent: the performance of a control depends not only on the current action and sensor reading, but also on all previous sensor readings and their associated controls.
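The discrete mapping ce : i_t → u described above can be pictured as a finite switching rule: a detector watches the information state, reports which type of critical curve (if any) has just been crossed, and the planner looks up the control associated with that event type. A minimal sketch in Python (the event names, threshold detector, and control labels are hypothetical illustrations, not the paper's implementation):

```python
# Hypothetical sketch: planning as switching among finitely many controls,
# triggered by critical events detected from the information state i_t.

def detect_critical_event(info_state):
    """Return the type of critical curve just crossed, or None.
    Toy detector based only on the latest range reading."""
    y = info_state["observations"][-1]
    if y < 1.0:
        return "obstacle_curve"      # too close to an obstacle boundary
    if y > 5.0:
        return "visibility_curve"    # target about to leave sensor range
    return None

# Each critical-event type is systematically associated with a control.
CONTROL_TABLE = {
    "obstacle_curve": "turn_away",
    "visibility_curve": "speed_up",
}

def plan_step(info_state, default_control="go_straight"):
    """Discrete mapping ce : i_t -> u."""
    event = detect_critical_event(info_state)
    return CONTROL_TABLE.get(event, default_control)

i_t = {"observations": [3.2, 2.1, 0.8], "controls": ["go_straight", "go_straight"]}
u = plan_step(i_t)
print(u)  # the event 'obstacle_curve' fires, so the control is 'turn_away'
```

The point of the table-driven form is that planning never consults the full state space X: only the event type, recovered from sensor history, selects the control.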
In history-dependent problems, the concatenation of locally optimal controls triggered by independent critical events does not necessarily generate a globally optimal solution. For instance, we have shown that object finding is history dependent and, moreover, NP-hard.

To deal with history-dependent problems, we have proposed a two-layer approach. The high-level, combinatoric layer attempts to find a "suitable" order of reaching critical events. The low-level, continuous layer takes the ordering supplied by the upper layer and finds how best to visit the regions defined by the critical curves. This decoupling makes the problem tractable, but at the expense of giving up global optimality. For the combinatorial level, we have proposed a utility-function-based heuristic, given as the ratio of a gain over a cost. This utility function drives a greedy algorithm in a reduced search space that is able to explore several steps ahead without evaluating all possible combinations.

In problems that are not history dependent, such as finding a minimal-length path in an environment without holes [11], Bellman's principle of optimality holds, and thus the concatenation of locally optimal paths results in a globally optimal one. The navigation approach presented in [11] is also based on critical events but, unlike the ones presented in this paper, it relies on closed-loop sensor feedback.

3 Object Finding

We have used critical events to find time-optimal search paths in known environments. In particular, we have searched a known environment for an object whose unknown location is characterized by a known probability density function (pdf).

In this problem, we deal with continuous sensing in a continuous space: we assume that the robot senses the environment as it moves. A continuous trajectory is said to cover [9] a polygon P if each point p ∈ P is visible from some point along the trajectory.
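The combinatoric layer's utility-driven greedy ordering, described above, can be sketched as follows (the one-step lookahead and the particular gain and cost functions are illustrative placeholders; the paper specifies only that utility is a gain/cost ratio used by a greedy algorithm that can look several steps ahead):

```python
# Hypothetical sketch of the combinatoric layer: greedily order regions
# (e.g., corner guard regions) by a utility = gain / cost heuristic.

def greedy_order(regions, start, gain, cost):
    """Visit regions in decreasing utility, recomputing utility from the
    current location at every step (one-step lookahead for brevity)."""
    order, current, remaining = [], start, set(regions)
    while remaining:
        best = max(remaining, key=lambda r: gain(r) / cost(current, r))
        order.append(best)
        remaining.remove(best)
        current = best
    return order

# Toy instance: regions on a line; gain = probability mass revealed there,
# cost = travel distance from the current position (start 'S' sits at 0).
positions = {"A": 1.0, "B": 4.0, "C": 2.0}
mass = {"A": 0.2, "B": 0.5, "C": 0.3}
order = greedy_order(["A", "B", "C"], start="S",
                     gain=lambda r: mass[r],
                     cost=lambda c, r: abs(positions.get(c, 0.0) - positions[r]))
print(order)  # → ['A', 'C', 'B']: cheap, reasonably likely regions come first
```

The ratio form is what makes the heuristic myopic yet informative: a distant region with high probability mass can still outrank a nearby one with little mass.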
Any trajectory that covers a simple (without holes) polygon must visit each subset of the polygon that is bounded by the aspect-graph lines associated with non-convex vertices of the polygon. We call the areas bounded by these aspect-graph lines the corner guard regions. A continuous trajectory that covers a simple polygon needs at least one point inside the region associated with each "outlying" non-convex vertex (a non-convex vertex in a polygon ear), like A and C in Fig. 1 a). Since these points need to be connected by a continuous path, a covering trajectory will cross all the other corner guard regions, like the one associated with vertex B.

Since a continuous trajectory needs to visit all the corner guard regions, it is important to decide in which order they are visited. The problem can be abstracted to finding a specific order of visiting nodes in a graph that minimizes the expected time to find an object; [6] shows that the discrete version of this problem is NP-hard. For this reason, to generate continuous trajectories we propose a two-layer approach that solves specific parts of the problem; it is described below (see 3.4).

3.1 Continuous Sensing in the Base Case

The simplest case for a continuous sensing robot is shown in Fig. 1 b): the robot has to move around a non-convex vertex (corner) to explore the unseen area A'. For now, we assume that this is the only unseen portion of the environment.

As the robot follows a given trajectory S, it senses new portions of the environment. The rate at which new environment is seen determines the expected value of the time required to find the object along that route. In particular, consider the following definition of expectation for a non-negative random variable [5]:

E[T \mid S] = \int_0^{\infty} P(T > t) \, dt.   (1)

3.2 Expected Value of Time Along Any Trajectory

In the simple environment shown in Fig.
1 b), the robot's trajectory is expressed as a function in polar coordinates with the origin at the non-convex vertex. We assume a starting position from which the robot's line of sight sweeps only the horizontal edge E_1. As mentioned before, the expected value of the time to find an object depends on the area A' not yet seen by the robot.

Fig. 1. a) convex corners; b) base case

Assuming that the probability density function of the object's location over the environment is constant, the probability of not having seen the object at time t is

P(T > t) = \frac{A'(t)}{A} = \frac{Q_y^2}{2A \tan(\theta(t))},   (2)

where A is the area of the whole environment (for more details, see [7]). Note that the reference frame used to define equation (2) is local: it is defined with respect to the reflex vertex (the one whose interior angle is larger than π). From (1) and (2),

E[T \mid S] = \frac{Q_y^2}{2A} \int_0^{t_f} \frac{dt}{\tan(\theta(t))}.   (3)

Equation (3) is useful for calculating the expected value of the time to find an object given a robot trajectory S expressed as a parametric function θ(t).

3.3 Minimization Using Calculus of Variations

The calculus of variations [3] is a mathematical tool employed to find stationary values (usually a minimum or a maximum) of integrals of the form

I = \int_a^b F(x, y, y') \, dx,   (4)

where x and y are the independent and dependent variables, respectively.

The integral in (4) has a stationary value if and only if the Euler–Lagrange equation is satisfied:

\frac{\partial F}{\partial y} - \frac{d}{dx} \frac{\partial F}{\partial y'} = 0.   (5)

It is possible to express the differential of time as a function of a differential of θ. This allows us to rewrite the parametric equation as a function in which θ and r are the independent and dependent variables, respectively. The resulting equation is

E[T \mid S] = \frac{Q_y^2}{2A} \int_{\theta_i}^{\theta_f} \frac{1}{\tan(\theta)} \left( r'^2 + r^2 \right)^{1/2} d\theta.   (6)
To find stationary values of (6), we use (5) with x = θ, y = r, and F = \frac{1}{\tan\theta} (r'^2 + r^2)^{1/2}. After simplification, this yields the following second-order non-linear differential equation:

r'' = r + \frac{2 r'^2}{r} + \frac{2}{\sin(2\theta)} \left( r' + \frac{r'^3}{r^2} \right).   (7)

This equation describes the route to move around a non-convex vertex (corner) so as to search the area on the other side optimally (according to the expected value of time). We have solved equation (7) numerically using an adaptive step-size Runge–Kutta method, coupled with a globally convergent Newton–Raphson method [7].
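Once r and r' are fixed at θ_i, equation (7) becomes an initial-value problem that any Runge–Kutta scheme can integrate; the boundary-value version solved in the paper additionally shoots for the correct initial r' with Newton–Raphson. A minimal sketch using a fixed-step classical Runge–Kutta method (the initial conditions and angular interval are arbitrary illustrations, and the paper's own integrator is adaptive-step):

```python
import math

def rhs(theta, r, rp):
    """Right-hand side of (7): returns r'' given theta, r, r'."""
    return (r + 2.0 * rp ** 2 / r
            + (2.0 / math.sin(2.0 * theta)) * (rp + rp ** 3 / r ** 2))

def integrate_route(theta_i, theta_f, r0, rp0, n=2000):
    """Classical 4th-order Runge-Kutta on the first-order system (r, r').
    The interval must avoid the singularities of 1/sin(2*theta)."""
    h = (theta_f - theta_i) / n
    th, r, rp = theta_i, r0, rp0
    path = [(th, r)]
    for _ in range(n):
        k1r, k1p = rp, rhs(th, r, rp)
        k2r = rp + 0.5 * h * k1p
        k2p = rhs(th + 0.5 * h, r + 0.5 * h * k1r, k2r)
        k3r = rp + 0.5 * h * k2p
        k3p = rhs(th + 0.5 * h, r + 0.5 * h * k2r, k3r)
        k4r = rp + h * k3p
        k4p = rhs(th + h, r + h * k3r, k4r)
        r += h * (k1r + 2 * k2r + 2 * k3r + k4r) / 6.0
        rp += h * (k1p + 2 * k2p + 2 * k3p + k4p) / 6.0
        th += h
        path.append((th, r))
    return path

# Illustrative run: sweep theta from 0.4 to 1.2 rad starting at r = 1.
route = integrate_route(theta_i=0.4, theta_f=1.2, r0=1.0, rp0=-0.2)
print(route[-1])  # (theta_f, r at theta_f)
```

A shooting loop would wrap `integrate_route`, adjusting `rp0` with Newton–Raphson until the endpoint condition of the boundary-value problem is met.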