A fly-locust based neuronal control system applied to an unmanned aerial vehicle: the invertebrate neuronal principles for course stabilization, altitude control and collision avoidance

The International Journal of Robotics Research, Vol. 26, No. 7, July 2007, pp. 759–772. DOI: 10.1177/0278364907080253. Published by SAGE Publications (http://www.sagepublications.com). The online version of this article can be found at: http://ijr.sagepub.com/cgi/content/abstract/26/7/759. Downloaded from http://ijr.sagepub.com at Biblioteca de la Universitat Pompeu Fabra on July 17, 2007. © 2007 SAGE Publications. Figures 1–4 and 6–13 appear in color online: http://ijr.sagepub.com

Sergi Bermúdez i Badia
Laboratory for Synthetic Perceptive, Emotive and Cognitive Systems, Universitat Pompeu Fabra, Ocata num. 1, 08003 Barcelona, Spain
Institute of Neuroinformatics, ETH/University of Zurich, Winterthurerstr. 190, CH-8057 Zurich, Switzerland
sergi.bermudez@upf.edu

Pawel Pyk
Institute of Neuroinformatics, ETH/University of Zurich, Winterthurerstr. 190, CH-8057 Zurich, Switzerland

Paul F.M.J. Verschure
Laboratory for Synthetic Perceptive, Emotive and Cognitive Systems, Universitat Pompeu Fabra, Ocata num. 1, 08003 Barcelona, Spain
ICREA & Technology Department, University Pompeu Fabra, Passeig de Circumval.lació 8, 08003 Barcelona, Spain

A fly-locust based neuronal control system applied to an unmanned aerial vehicle: the invertebrate neuronal principles for course stabilization, altitude control and collision avoidance

Abstract

The most versatile and robust flying machines are still those produced by nature through evolution. The solutions to the 6 DOF control problem faced by these machines are implemented in extremely small neuronal structures comprising thousands of neurons. Hence, the biological principles of flight control are not only very effective but also efficient in terms of their implementation. An important question is to what extent these principles can be generalized to man-made flying platforms. Here, this question is investigated in relation to the computational and behavioral principles of the opto-motor system of the fly and locust. The aim is to provide a control infrastructure based only on biologically plausible and realistic neuronal models of the insect opto-motor system. It is shown that, relying solely on vision, biologically constrained neuronal models of the fly visual system suffice for course stabilization and altitude control of a blimp-based UAV. Moreover, the system is augmented with a collision avoidance model based on the Lobula Giant Movement Detector neuron of the locust. It is shown that the biologically constrained course stabilization model is highly robust and that the combined model is able to perform autonomous indoor flight.

KEY WORDS—biologically based, insect vision, optic flow, neural model, LGMD, EMD, UAV, autonomous flight, blimp-based, altitude control, course stabilization, Reichardt correlation

1. Introduction

Nature has produced highly versatile and robust flying machines (Dudley 2000). Indeed, it was bird flight itself that inspired the Wright brothers in the construction of the first fixed-wing air plane (Kelly 1950). In addition to having informed the construction of flying platforms themselves, nature can also provide us with solutions to the 6 DOF control problem faced by flying machines (Dudley 2000). For instance, dragonflies can fly in cluttered environments at a speed of 10² body lengths/s (Marden 2005), while a blowfly can make rapid flight maneuvers at up to 1.2 m/s, with accelerations of up to 20 m/s² (Schilstra and van Hateren 1999). These flight maneuvers are mainly generated by the opto-motor system of a brain of about 1 mm³ comprising about 200,000 neurons (Posey et al. 2001). Although it is not the only system used for navigation, approximately two-thirds of the fly brain is dedicated to visual processing (Strausfeld 1976). Hence, the brain of flying insects includes principles of visual flight control that are not only very effective but also efficient in terms of their implementation. An important question is to what extent these principles can be generalized to man-made flying platforms. Here we investigate this question in relation to the computational and behavioral principles of the opto-motor system of the fly and locust.
We will evaluate to what extent a biologically constrained neuronal model of this system is able to control an Unmanned Aerial Vehicle (UAV). A large number of models for insect-based opto-motor behavior have been proposed, and many of these show reasonably good results (Harrison 2005; Netter and Franceschini 2002; Martin and Franceschini 1994; Franceschini et al. 1992). However, occasionally biologically unrealistic sensors are included in the control system to achieve a functional result (Zufferey et al. 2002; Ichikawa et al. 2001). Instead of resorting to additional sensors, we aim at providing a control infrastructure based only on vision and biologically plausible and realistic neuronal models of the insect opto-motor system. In addition, a considerable amount of work has been done on obstacle avoidance, homing and trajectory following using ground-based robots (Netter and Franceschini 2002; Harrison 2005; Blanchard and Verschure 1999; Blanchard et al. 2001; Hafner et al. 2002; Hafner and Salomon 2002; Martin and Franceschini 1994; Franceschini et al. 1992). However, despite the relevance of these robot experiments, using ground-based mobile robots as a platform reduces the 6 DOF problem that flying insects need to solve to a 3 DOF problem. Flying systems, in contrast, must negotiate complex dynamics that include inertia and 6 DOF, which lead to problems of course stabilization, altitude and position control that ground-based systems do not have to face. Hence, we will use flying robots to investigate and reconstruct the principles underlying biological flight control systems. The insect flight control system is of particular interest because of its ability to show robust flight stabilization, collision avoidance, secure takeoff, landing and so on using relatively simple visual mechanisms (Tammero and Dickinson 2002; Srinivasan et al. 1996; Egelhaaf and Borst 1993; Egelhaaf 1985; Reichardt 1961).
In the field many types of UAVs are and have been used, most of these helicopter, fixed-wing or blimp based (Skafidas 2002; Iida 2001; Zufferey et al. 2002; Musial et al. 2000; Saripalli et al. 2003; Netter and Franceschini 2002). Since our opto-motor model is a component of a larger system that has to serve chemical localization and search (Pyk et al. 2006), we limit its application to a dirigible that will exert a minimal effect on the structure of the chemical plumes in its environment.

2. Methods and Materials

2.1. Setup

We have developed a blimp-based robot designed to work within indoor environments (Figure 1). The dimensions of the hull are 30 by 120 cm (radius × length), which provides for a payload of about 250 g at 600 m above sea level. Four propellers are mounted in a lightweight balsa wood structure, providing the robot with independent control of altitude and translation. The propellers (DIDEL SA, Belmont, Switzerland, www.didel.com) are hand-made and optimized for the combination of motors (08GS – 8 mm motor, API-Portescap, La Chaux-de-Fonds, Switzerland, www.portescap.com) and 1:7 gearboxes (8R78 mm, DIDEL SA, Belmont, Switzerland, www.didel.com), providing 20 g thrust per motor at full speed. The robot is powered using a 10 g lithium-polymer battery (West-technik, Germany) providing for about one hour of autonomous flight.

The UAV is equipped with two CCD color cameras ("Module 3", Conrad Electronics, Germany) mounted on the front part, separated by 110° and pointing to the left and right side respectively. Each of these lightweight high-resolution cameras (628 [H] × 582 [V] pixels) is equipped with a wide-angle lens (2.5 mm lenses, Conrad Electronics, Switzerland). Given the opening angle of the cameras (100 [H] × 87 [V] degrees), the combined camera system covers over 180° of the frontal horizontal sphere.
The images acquired with the cameras are transmitted via two lightweight PAL transmitters (SDX-21LP video transmitters working in the 2.4 GHz band, produced by RF-Video, Canada) to our ground station where they are further processed. An on-board radio receiver allows the remote control of the speed of all the motors independently via a radio link. Serial communication with the flying platform is established using a pair of BIM433-F transceivers (Wireless World AG, Switzerland), allowing for up to 115200 baud. The ground station setup consists of two PAL receivers (Wavecom, RF-Video, Canada) that receive the signals from the on-board cameras; a quad combiner (Grand Virtual Guard, ARP Datacon) that combines them into a single image; and a USB frame grabber (Lifeview USB Capview, Lifeview, Taiwan) that allows a laptop or PC to acquire the video stream.

2.2. Experimental Room and 3D Tracking System

Experiments are performed in a 5 × 4 × 4.5 m room with randomly distributed solid black squares on the walls and the floor as visual cues (Figure 1). In order to analyze and quantify the trajectory and the behavior of the UAV accurately, we have developed a 3D real-time visual tracking system that provides us with the position (x, y, z), heading direction (x_h, y_h, z_h) and linear velocity of the UAV. The tracking system uses two infrared cameras (2.5 mm lenses, Conrad Electronics, Switzerland) mounted at the ceiling at 4.5 m and uses stereoscopy to infer the 3D position of two IR LEDs mounted on top of the hull of the UAV. The transformation of the two camera pixel coordinates into a pair of three-dimensional positions is achieved with a multilayer perceptron. A large number of regularly spaced pre-mapped positions in the room are used as reference points for interpolation. The tracking system achieves an accuracy of up to 5 cm.
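The pixel-to-position mapping described above can be sketched as a one-hidden-layer perceptron forward pass. This is a minimal illustration of the mapping's shape only, not the system's actual network: the layer sizes, the random placeholder weights and all names are ours; in the real system the weights would be fitted to the grid of pre-mapped reference positions.

```python
import math
import random

random.seed(0)

def mlp_forward(x, W1, b1, W2, b2):
    """One-hidden-layer perceptron: maps a feature vector to an output vector."""
    # Hidden layer with tanh nonlinearity.
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    # Linear output layer.
    return [sum(w * hi for w, hi in zip(row, h)) + b
            for row, b in zip(W2, b2)]

# Shapes for the tracking task: 4 inputs (pixel coordinates of one LED in
# both cameras) -> hidden layer -> 3 outputs (x, y, z in the room frame).
# The weights below are random placeholders, not fitted values.
n_in, n_hidden, n_out = 4, 8, 3
W1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)]
b1 = [0.0] * n_hidden
W2 = [[random.uniform(-1, 1) for _ in range(n_hidden)] for _ in range(n_out)]
b2 = [0.0] * n_out

pos = mlp_forward([0.3, 0.7, 0.4, 0.6], W1, b1, W2, b2)  # -> [x, y, z] estimate
```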
Furthermore, the tracking data is acquired synchronously with the internal states and responses of the neural model that controls the flight behavior. Hence, we are able to directly correlate the neuronal states of our model to the behaviors produced by the UAV. The data acquisition is performed at 5 Hz and the data analysis is performed using Matlab (Mathworks, USA) and our simulation environment IQR421 (Bernardet et al. 2002).

Fig. 1. Image of the blimp-based UAV in our test arena. The walls and the floor of the 5 × 4 × 4.5 m room are covered with randomly distributed black squares to provide visual cues to the UAV. A vision-based tracking system is mounted at the ceiling of the room, providing us with the position, orientation and velocity of the UAV.

3. Models

Our neuronal model includes components for course stabilization, altitude control and collision avoidance and is derived from our current understanding of the insect opto-motor system (Reichardt 1961; Egelhaaf and Borst 1993; Srinivasan et al. 1996; Kern et al. 2001; Braitenberg 1967). Our model comprises several processes performed in different layers of the insect visual system (Figure 2). These systems are considered to have a hierarchical organization where the signals from the photoreceptors are ultimately integrated in the response of so-called wide-field neurons that are tuned to specific properties of the visual input. For this reason these high-level neurons are also called matched filters.
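The matched-filter idea — summing many local motion estimates into a few wide-field signals that drive compensatory action — can be sketched as follows. This is a minimal illustration, assuming a grid of signed local motion estimates; the function names, gain values and sign conventions are ours, not the paper's.

```python
# Hypothetical EMD grid: emd_h[y][x] and emd_v[y][x] hold signed local motion
# estimates (positive = rightward / downward) from a 25 x 25 sampling grid.
N = 25

def widefield_response(emd_h, emd_v):
    """Sum local EMD outputs into HS/VS-like wide-field signals (a sketch)."""
    hs = sum(emd_h[y][x] for y in range(N) for x in range(N))  # net horizontal flow
    vs = sum(emd_v[y][x] for y in range(N) for x in range(N))  # net vertical flow
    return hs, vs

def opto_motor_command(hs, vs, gain_yaw=0.1, gain_lift=0.1):
    """Map wide-field drift signals to compensatory motor commands.
    Gains are illustrative placeholders."""
    # Counter-rotate against horizontal drift; counter-thrust against vertical drift.
    return -gain_yaw * hs, -gain_lift * vs

# Uniform rightward drift of 0.2 per EMD should yield a leftward yaw correction.
emd_h = [[0.2] * N for _ in range(N)]
emd_v = [[0.0] * N for _ in range(N)]
hs, vs = widefield_response(emd_h, emd_v)
yaw, lift = opto_motor_command(hs, vs)
```

The point of the sketch is the matched-filter structure: a single scalar per wide-field unit summarizes the whole flow field, so a uniform drift produces a large response while uncorrelated local noise tends to cancel.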
The first step in the processing hierarchy occurs at the level of the lamina, where the luminance signal acquired by the photoreceptors is normalized using a logarithmic compression (Levine 1985):

  I_photo = k_i · log(k_j · I_input + const)    (1)

where I_photo is the photoreceptor response, I_input the input luminance level, and k_i, k_j, const scaling constants. Subsequently, an edge enhancement is performed in the lamina using a centre/surround inhibition, a method similar to a difference-of-Gaussians based zero-crossing edge extraction (Gonzalez and Woods 1992):

  Edge_image = Input_image * DiffGauss_Kernel    (2)

where Input_image is an input image, DiffGauss_Kernel a difference-of-Gaussians kernel, * the convolution operation and Edge_image the image resulting from the convolution:

  DiffGauss_Kernel = f(μ, σ1) − f(μ, σ2)    (3)

  f(μ, σ) = 1/(√(2π)·σ) · exp(−(x − μ)²/(2σ²))    (4)

with μ the mean value, and σ1, σ2 fixed standard deviation values.

After the isolation of the contrast information, three parallel processing streams deal with extracting the optic flow information relevant for flight stabilization, altitude control and collision avoidance. There are two different priority levels: stabilization and altitude control responses are inhibited whenever a collision is detected; hence, an avoidance action is always prioritized.

3.1. Course Stabilization and Altitude Control: Elementary Motion Detectors and the HS/VS System

Course stabilization and altitude control are achieved by reacting to any drift or perturbation of basic optic flow patterns (Srinivasan et al. 1996). These optic flow patterns are detected by the so-called wide-field Horizontal and Vertical System neurons (HS and VS respectively) located in the Lobula plate layer (Hengstenberg 1982).
These cells are known to be motion sensitive: they respond maximally to a stimulus moving in a certain preferred direction, whereas they show a decrease in membrane potential for stimuli moving in the opposite direction, i.e. the null direction (Egelhaaf and Borst 1993; Egelhaaf 1985). The responses of these visual neurons result from the integration of the activity of local visual motion sensitive cells called Elementary Motion Detectors (EMDs). Since these neurons are able to extract egocentric motion information, they are good candidates for a set of tasks such as course stabilization, altitude control, odometry, etc. (Srinivasan et al. 1996; Tammero and Dickinson 2002; Franceschini et al. 1992). Both HS and VS cells are neurons of the visual system believed to be involved in providing relevant visual information that is used in flight control (Egelhaaf 1985). These cells encode the direction of rotation of the animal largely independent of the spatial layout and texture of the environment (Hengstenberg 1982). Only when the animal is very close to an object are the responses affected (Kern et al. 2001; Tammero and Dickinson 2002).

Fig. 2. Functional (left) and anatomical (right) structure of a prototypical insect visual system based on the locust. See text for further explanation.

In the specific case of Drosophila, there are about 800 ommatidia and 8 photoreceptors (R1–R8) per ommatidium (Ready et al. 1976). In our implementation a slightly lower resolution is used instead: 25 × 25 input pixels (625 pixels). Furthermore, flies have about 50–60 wide-field motion sensitive neurons or tangential cells that encode motion information (Hengstenberg 1982).
Instead, our implementation makes use of only four wide-field motion sensitive neurons. In our model, these cells integrate the responses of 25 × 25 local motion sensitive cells (EMDs). A higher resolution is impractical since the computational cost would slow down the system and would provoke aliasing-related problems. In contrast, a lower resolution would reduce the motion sensitivity of the model. Since each one of the EMDs is sensitive only to local motion in the visual field (Egelhaaf and Borst 1993; Egelhaaf 1985; Douglas and Strausfeld 1996; Reichardt 1961), the integration of the EMDs over the whole visual field produces a VS/HS cell type of response that encodes the ego-motion of the insect. Using the information provided by the HS/VS cells, we can generate the motor actions that will compensate for any drift of the UAV or maintain a specific altitude.

Up to now we have described a hierarchical structure that goes from the photoreceptors to the selection of the opto-motor action. Every layer described above performs an important operation that only makes sense in the given context of the neural structure. Therefore, the output of the HS/VS cells can only be understood as the integration of EMDs, and those in turn only as a pairwise processing of neuronal responses in the lamina, and so on.

The so-called "correlation model" of the EMDs, or Reichardt correlator, was proposed long ago (Reichardt 1961), and this model only requires a few elaborations to reflect the specific physiological features of the fly's motion detection system (Egelhaaf and Borst 1993; Higgins et al. 2004) (Figure 3).

The Reichardt correlation model is applied at the pixel level between neighboring pixels (I_a and I_b in Figure 3) separated by a certain distance D. There are two branches, the null and preferred outputs, which are computed independently.
Given a translating object moving from pixel a to b at speed v, the Reichardt correlation Rcorr(I_a, I_b) is defined as:

  Rcorr(I_a, I_b) = Out_preferred(I_a, I_b) − Out_null(I_a, I_b)    (5)

  Out_preferred(I_a, I_b) = I_a(t − τ) · I_b(t)    (6)

  Out_null(I_a, I_b) = I_b(t − τ) · I_a(t)    (7)

Given the speed v and a pixel separation of D,

  I_b(t) = I_a(t − D/v)    (8)

then,

  Out_preferred(I_a, I_a(t − D/v)) = I_a(t − τ) · I_a(t − D/v).    (9)

We find that Out_preferred and Out_null are maximal when

  ∂Out_preferred/∂τ = 0, for τ = D/v.    (10)
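The two-branch correlator of Eqs (5)–(7) can be exercised directly on discrete photoreceptor time series. Below is a minimal pure-Python sketch, assuming a delay τ of one time step and an impulse-like stimulus; the function and variable names are ours, not the paper's.

```python
def reichardt(Ia, Ib, tau=1):
    """Reichardt correlation between two photoreceptor time series, Eqs (5)-(7).
    Ia, Ib: luminance samples over time; tau: delay in time steps."""
    out = []
    for t in range(tau, len(Ia)):
        preferred = Ia[t - tau] * Ib[t]  # delayed a correlates with current b
        null = Ib[t - tau] * Ia[t]       # delayed b correlates with current a
        out.append(preferred - null)     # Eq (5): opponent subtraction
    return out

# A bright blob passing first over receptor a, then one step later over b:
Ia = [0, 1, 0, 0, 0, 0]
Ib = [0, 0, 1, 0, 0, 0]
forward = sum(reichardt(Ia, Ib))   # detector tuned a -> b: positive response
backward = sum(reichardt(Ib, Ia))  # detector tuned b -> a: negative response
```

Swapping the inputs reverses the preferred direction, so the same stimulus yields a response of opposite sign — the opponent, direction-selective property that the wide-field HS/VS cells integrate. The response peaks when the stimulus travel time D/v matches the delay τ, as Eq (10) states.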