Recognition and Teaching of Robot Skills by Fuzzy Time-Modeling
Conference Paper · January 2009 · Source: DBLP

Rainer Palm, Bourhane Kadmiry, Boyko Iliev, and Dimiter Driankov
AASS, Dept. of Technology, Örebro University, SE-70182 Örebro, Sweden
Emails: rub.palm@t-online.de, {bourhane.kadmiry, boyko.iliev, dimiter.driankov}@aass.oru.se

Abstract – Robot skills are low-level motion and/or grasping capabilities that constitute the basic building blocks from which tasks are built. Teaching and recognition of such skills can be done by the Programming-by-Demonstration approach. A human operator demonstrates certain skills while his motions are recorded by a data-capturing device and modeled, in our case, via fuzzy clustering and the Takagi-Sugeno modeling technique.
The resulting skill models use time as input and the operator's actions and reactions as outputs. Given a test skill performed by the human operator, the robot control system recognizes the individual phases of the skill and determines the type of skill shown by the operator.

Keywords – Fuzzy modeling, time clustering, robot skills, Programming-by-Demonstration

1 Introduction

Robot skills are low-level motion and/or grasping capabilities that constitute the basic building blocks from which tasks are built. One major challenge in robot programming is to be able to program skills in an easy and fast manner with high accuracy. Programming by Demonstration (PbD) is one such approach that has the aforementioned properties. In PbD a human who demonstrates the skills is equipped with data-capturing devices (e.g., data glove, cameras, haptic devices, etc.). The demonstrator performs a skill while the robot captures the associated motion data, analyzes it, and generates a robot-centered model of the demonstrated skill, that is, a corresponding robot skill. Once the robot has acquired a number of robot skills from demonstrations, it is able to recognize a demonstrated human skill as one of its already available robot skills. In a final step, when a task is demonstrated, the robot recognizes the robot skills that constitute it and thus creates a program consisting of these skills. This approach can be used not only for industrial robots but also in the fields of prosthetics, humanoid service robots, remote control and teleoperation in hazardous and dangerous environments, and last but not least in the entertainment industry. However, such applications are relatively few so far due to the lack of appropriate sensor systems and some unsolved problems with man-robot interaction. Selected skills are:
- contour following
- assembly (peg-in-hole insertion)
- handling of objects
- grasping of objects

Different techniques for recognition of skills have been applied for PbD.
For the manipulation domain, Morrow and Khosla describe the construction of a library of robot capabilities by analysis and identification of tasks using a camera and a force-torque sensor [1]. Their approach is to develop a sensorimotor layer which integrates sensing into the robot programming primitives. In [2] Kaiser and Dillmann describe a neural-net approach for initial skill learning via reinforcement learning, skill refinement, and adaptation. Experimental results were shown on an insertion example and a door-opening experiment. In the context of task learning, Geib et al. proposed an approach to integrating high-level artificial intelligence planning technology with low-level robotic control [3]. Kwun Han and M. Veloso describe an automated recognition of the behavior of robots using HMMs to represent and recognize strategic behaviors of robotic agents [4]. In the field of recognition of robot behaviors the following publications are important: Zoellner et al. [5] use a data glove with integrated tactile sensors for behavior recognition based on support vector machines (SVM). Ekvall and Kragic [6] apply Hidden Markov Models (HMM) and address the PbD problem using the arm trajectory as an additional feature for grasp classification. Li et al. [7] use the singular value decomposition (SVD) for the generation of feature vectors of human grasps, and support vector machines (SVM) are applied to the classification problem. A fuzzy logic approach for gesture recognition was published by Bimber [8]. The method is applied to 6-d.o.f. (degrees of freedom) trajectories of a human arm but cannot cope with more than 6 dimensions. Despite the advanced state of the art, the cited methods on dynamic classification do not consider the evolution of a robot behavior in time and space. This appears to be a disadvantage because neither the estimation of the occurrence of a specific skill nor the problem of segmentation can be solved in a general way.
The method presented in this paper tries to overcome some of these drawbacks by modeling trajectories in time and space. Its advantage over other methods like HMM or Gaussian Mixture Models is discussed in [9] and [10]. To solve a 'fuzzy' task like skill recognition, fuzzy modeling turns out to be a suitable approach. The main focus of this method is programming by demonstration and recognition of human skills using fuzzy clustering and Takagi-Sugeno (TS) fuzzy modeling (see also [11] and [12]). The paper is organized as follows: In Section 2 a general approach to learning skills by partitioning them into phases is discussed. Section 3 deals with fuzzy time-modeling and the segmentation principle. Section 4 describes the recognition of phases and the decision process for the classification of skills. Section 5 presents simulations and experimental results. The final Section 6 draws some conclusions and directions for future work.

ISBN: 978-989-95079-6-8, IFSA-EUSFLAT 2009

2 Programming of robot skills by human demonstration

Programming of robot skills requires the building of a library of models of skills taught or trained by a human demonstrator. In a next step, newly demonstrated skills lead to test models which are then compared with the training models (skills) of the library. By such a comparison the robot is able to recognize these newly demonstrated skills. Finally, a robot task including the recognized skills can automatically be generated (see Fig. 1). For the training phase, two main tasks need to be performed: segmentation of human demonstrations into skill phases, and phase modeling of the skill phases. Segmentation means a partition of the data record into a sequence of episodes, where each one contains a single skill phase. For the test phase, three main tasks need to be performed: segmentation of the human test demonstrations, phase recognition, and skill classification.
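The three test-phase tasks form a simple pipeline. A minimal sketch, with hypothetical function names standing in for the procedures of Sections 3 and 4:

```python
# Sketch of the test-phase task chain: segmentation, phase recognition,
# and skill classification. The three stage functions are placeholders
# for the actual procedures described later in the paper.

def recognize_skill(record, segment, recognize_phase, classify_skill):
    """Run a demonstration record through the three test-phase tasks."""
    episodes = segment(record)                         # one skill phase per episode
    phases = [recognize_phase(ep) for ep in episodes]  # label each episode
    return classify_skill(phases)                      # map phase sequence to a skill
```

Any concrete segmentation, phase-recognition, or skill-classification routine can be plugged in for the three stage arguments.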
Phase recognition means recognizing the phase performed in each episode. The third task is to connect the recognized skill phases in such a way that a full skill can be identified.

Figure 1: Learning skills from human demonstrations

3 Fuzzy time-modeling and the segmentation principle

Let a skill be partitioned into a sequence of phases as described above. Each phase starts and ends with a discrete event coming either from a discrete sensor or from some appropriate preprocessing of continuous sensor signals. The structure of a skill can be described most appropriately by a hybrid automaton in which nodes represent continuous phases and arcs the discrete transitions (switches) between them. Figure 2 shows an example of such a hybrid automaton. The hybrid process is event-controlled (see [13]) and is assumed to be stable both within the individual phases and with respect to the switching behavior between them. The purpose of segmentation, on the other hand, is to identify the discrete instants of time at which the switches occur, in order to cut the whole skill into phases during demonstration. The following subsection deals with fuzzy time-modeling in general, which is used both for phase modeling and for segmentation. In the same context, the training of time cluster models using new data is described. After that, the segmentation procedure is presented.

Figure 2: Hybrid automaton of a skill

3.1 Fuzzy time-modeling

Let us for the time being concentrate only on the modeling of a skill phase. The recognition of a skill phase is achieved by a model that reflects the behavior of the robot end-effector in time during the episode considered. Each demonstration is repeated several times to collect enough samples of every particular skill phase. From those data, models for each individual phase are developed using fuzzy clustering and Takagi-Sugeno fuzzy modeling ([14, 11]). We consider time instants as model inputs and end-effector coordinates as model outputs.
Define the end-effector coordinate by

x(t) = f(t)    (1)

where x(t) ∈ R^3, f ∈ R^3, and t ∈ R^+. Further, linearize (1) at selected time points t_i:

x(t) = x(t_i) + (Δf(t)/Δt)|_{t_i} · (t − t_i)    (2)

which is a linear equation in t:

x(t) = A_i · t + d_i    (3)

where A_i = (Δf(t)/Δt)|_{t_i} ∈ R^3 and d_i = x(t_i) − (Δf(t)/Δt)|_{t_i} · t_i ∈ R^3. Using (3) as a local linear model, one can express (1) in terms of a Takagi-Sugeno fuzzy model [15]:

x(t) = Σ_{i=1..c} w_i(t) · (A_i · t + d_i)    (4)

where w_i(t) ∈ [0, 1] is the degree of membership of a time point t in the cluster with cluster center t_i, c is the number of clusters, and Σ_{i=1..c} w_i(t) = 1.

Let x = [x_1, x_2, x_3]^T be the three end-effector coordinates and t the time. The general clustering and modeling steps are as follows:
- Select an appropriate number of local linear models (data clusters) c.
- Find c cluster centers (t_i, x_{1i}, x_{2i}, x_{3i}), i = 1...c, in the product space of the data quadruples (t, x_1, x_2, x_3) by fuzzy c-elliptotype clustering.
- Find the corresponding fuzzy regions in the space of input data (t) by projection of the clusters of the product space into Gustafson-Kessel (GK) clusters within the input space [16].
- Calculate the c local linear (affine) models (4) using the GK clusters from the projection step.

The membership degree w_i(t) of an input data point t in an input cluster C_i is then calculated by

w_i(t) = 1 / Σ_{j=1..c} [ ((t − t_i)^T M_i^{pro} (t − t_i)) / ((t − t_j)^T M_j^{pro} (t − t_j)) ]^{1/(m̃_pro − 1)}    (5)

The projected cluster centers t_i and the induced matrices M_i^{pro} define the input clusters C_i (i = 1...c). The parameter m̃_pro > 1 determines the fuzziness of an individual cluster. The spheres in Fig. 3 covering the trajectories represent the local fuzzy models.
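Evaluating the fitted model amounts to computing the memberships (5) and blending the local affine models (4). A minimal sketch for the scalar time input; the cluster centers, induced "matrices" M_i (scalars in the 1-D input space), and local parameters A_i, d_i are assumed to come from a prior clustering step and are made up here:

```python
import numpy as np

def memberships(t, centers, M, m_fuzz=2.0):
    """w_i(t) per Eq. (5) for a scalar time input t."""
    d2 = M * (t - centers) ** 2                  # squared cluster distances
    if np.any(d2 == 0):                          # t hits a center exactly
        w = (d2 == 0).astype(float)
        return w / w.sum()
    ratios = (d2[:, None] / d2[None, :]) ** (1.0 / (m_fuzz - 1.0))
    return 1.0 / ratios.sum(axis=1)              # sums to 1 over the clusters

def ts_model(t, centers, M, A, d):
    """x(t) = sum_i w_i(t) * (A_i * t + d_i), Eq. (4)."""
    w = memberships(t, centers, M)
    return w @ (A * t + d)                       # blend the local affine models

# Toy model: c = 3 clusters along a constant-velocity trajectory in x
centers = np.array([0.0, 1.0, 2.0])              # input cluster centers t_i
M = np.ones(3)                                   # induced (scalar) matrices
A = np.tile(np.array([1.0, 0.0, 0.0]), (3, 1))   # local slopes A_i
d = np.zeros((3, 3))                             # local offsets d_i
x_hat = ts_model(1.0, centers, M, A, d)          # ~ [1, 0, 0]
```

The blending weights always sum to one, so the model interpolates smoothly between the local affine segments.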
The stripes along the time coordinate represent projections of the local fuzzy models onto the time line.

Figure 3: Time-clustering principle for the end-effector and its motion in (x, y)

3.1.1 Training of time cluster models using new data

A skill model can be built in several ways:
- A single user trains the model by repeating the same skill n times.
- m users train the model by repeating the same skill n times.

The first model is generated by the time sequences [(t_1, t_2, ..., t_N)_1 ... (t_1, t_2, ..., t_M)_n] and the output (end-effector position) sequences [(x_1, x_2, ..., x_N)_1 ... (x_1, x_2, ..., x_M)_n]. The second model is generated by the time sequences [((t_1, ..., t_N)_{11} ... (t_1, ..., t_M)_{1n}) ... ((t_1, ..., t_N)_{m1} ... (t_1, ..., t_M)_{mn})] and the output sequences [((x_1, ..., x_N)_{11} ... (x_1, ..., x_M)_{1n}) ... ((x_1, ..., x_N)_{m1} ... (x_1, ..., x_M)_{mn})], where m is the number of users in the training process and N, M are lengths of time sequences with N ≈ M.

Once a particular skill model has been generated, it might be necessary to take new data into account. These data may originate from different human operators to cover several ways of performing the same type of skill. Let for simplicity the old model be built from a time sequence [t_1, t_2, ..., t_N] and a respective output sequence [x_1, x_2, ..., x_N]. The old model is then represented by the input cluster centers t_i and the output cluster centers x_i (i = 1...c). It is also described by the parameters A_i and d_i of the local linear models. Let [t̃_1, t̃_2, ..., t̃_M], [x̃_1, x̃_2, ..., x̃_M] be new training data. A new model can be built by "chaining" old and new training data, leading for the time sequences to [t_1, t_2, ..., t_N, t̃_1, t̃_2, ..., t̃_M] and for the output sequences to [x_1, x_2, ..., x_N, x̃_1, x̃_2, ..., x̃_M].
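The "chaining" step itself is a plain concatenation of old and new sequences before the clustering is re-run on the combined record. A small sketch with made-up data (the re-clustering step is out of scope here):

```python
import numpy as np

# Sketch of the chaining update: concatenate the old and the new
# (time, position) training sequences and refit the time clusters
# on the combined record.

def chain_sequences(t_old, x_old, t_new, x_new):
    """Concatenate old and new training sequences for a refit."""
    t = np.concatenate([t_old, t_new])   # [t_1..t_N, t~_1..t~_M]
    x = np.vstack([x_old, x_new])        # matching end-effector rows
    return t, x

# Toy data: N = 4 old samples, M = 3 new samples, 3-D positions
t_old, x_old = np.linspace(0.0, 1.0, 4), np.zeros((4, 3))
t_new, x_new = np.linspace(1.0, 1.5, 3), np.ones((3, 3))
t_all, x_all = chain_sequences(t_old, x_old, t_new, x_new)   # 7 samples total
```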
The result is a model that involves properties of the old model and the new data. If the old sequence of data is not available, a corresponding sequence can be generated by running the old model with the time instants [t_1, t_2, ..., t_N] as inputs and the end-effector positions [x_1, x_2, ..., x_N] as outputs.

3.2 Segmentation principle

Let for simplicity the signals of a skill be the end-effector coordinates x(t) ∈ R^3 and the forces f(t) ∈ R^3 at the end-effector. In order to generate the 'events' determining the time bounds for the phases, x(t) is differentiated twice and f(t) only once with respect to time. The absolute values of the resulting vectors are collected in

X(t) = [|ẍ(t)|^T, |ḟ(t)|^T]^T.    (6)

For segmentation we need the time-discrete case

X̃ = [X(t_1) ... X(t_n)] ∈ R^{6×n}.    (7)

Further, define a vector of bounds B > 0 ∈ R^6 above which the X(t_i) are counted as 'events'. Then a vector I = [I_1 ... I_k ... I_m]^T is generated, where the I_k are discrete time stamps t_i for which at least one component of X(t_i) lies above the corresponding component of the vector of bounds B:

I_k = t_i if X(t_i) > B.    (8)

The next step is to select the number of time clusters c and find the clusters by time clustering for the data Y = ([X(I_1); I_1] ... [X(I_k); I_k] ... [X(I_m); I_m]). Y is a combination of 'events' X(t_i) and their corresponding time instants I_k. Once the time clusters are found, the skill can be cut at these time instants into phases. However, for complicated skills the number of phases c might not be known in advance. Therefore we choose a higher number c and merge those cluster centers which are located close to each other into one.

4 Recognition of robot skills

In this section the recognition of phases or sub-skills is discussed first.
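Returning briefly to the segmentation rule of Section 3.2: a minimal sketch of the event detection (6)-(8), with the final time-clustering pass simplified to a merge of time stamps that lie close together. All signal values and bounds below are made up for illustration:

```python
import numpy as np

def event_stamps(X, t, B):
    """Time stamps t_i where any component of X(t_i) exceeds its bound B, Eq. (8)."""
    mask = np.any(X > B[:, None], axis=0)        # X: 6 x n per Eq. (7), B: length 6
    return t[mask]

def merge_close(stamps, eps):
    """Merge stamps closer than eps into single switch instants (their means)."""
    groups, current = [], [stamps[0]]
    for s in stamps[1:]:
        if s - current[-1] <= eps:
            current.append(s)                    # same switch event
        else:
            groups.append(float(np.mean(current)))
            current = [s]                        # start a new event
    groups.append(float(np.mean(current)))
    return groups

# Toy record: spikes in |x''| at t = 3, 4 and in |f'| at t = 8
t = np.arange(10.0)
X = np.zeros((6, 10))
X[0, 3], X[0, 4], X[3, 8] = 5.0, 5.0, 5.0
switches = merge_close(event_stamps(X, t, np.ones(6)), eps=1.5)  # two switch instants
```

Merging nearby stamps plays the role of collapsing an over-chosen number of cluster centers c into the actual number of phase boundaries.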
In a second step, on the basis of this knowledge (number and type of phases) the construction of the full skill is presented.

4.1 Recognition of phases using the distance between fuzzy clusters

Let the model of each phase have the same number of clusters i = 1...c, so that each duration T_l (l = 1...L) of the l-th phase is divided into c − 1 time intervals Δt_i, i = 2...c, of the same length. Let the phases be executed in an environment comparable with the modeled phase, in order to avoid calibration and re-scaling procedures. Furthermore, let

V_model_l = [X_1, ..., X_i, ..., X_c]^{model,l},  X_i = [x, y, z, f_x, f_y, f_z]_i^T

where the matrix V_model_l includes the output cluster centers X_i for the l-th phase model. A model of the phase to be classified is built by the matrix

V_test = [X_1, ..., X_i, ..., X_c]^{test,l}    (9)

A decision about which phase is present is made by applying the Euclidean matrix norm

N_l = || V_model_l − V_test ||    (10)

Once the unknown phase is assigned to the phase model with the smallest norm min(N_l), l = 1...L, the recognition of the phase is finished.

4.2 Recognition of skills using phase models

Once the phases of a test skill are recognized (identified), one should be able to recognize the skill as a whole and finally to reconstruct the hybrid automaton that represents the skill (see Fig. 2). For this purpose a list of possible skills and their phases should be produced. In the following we will discuss these robot skills:
- handling
- contour following
- assembly

The corresponding phases can be found in Table 1.

Table 1: Skills and phases

Phase                      Handling  Contour  Assembly
1. Grasp object               x                  x
2. Free motion                x
3. Search contact             x         x        x
4. Keep contact                         x
5. Follow with contact        x         x        x
6. Peg-in-hole                                   x
7. Release object/contact     x         x        x

Figures 4, 5, and 6 show the correspondence between the skills and their individual phases.

Figure 4: Handling skill
Figure 5: Contour following skill
Figure 6: Assembly skill

5 Experiments and simulations

In this section an experimental evaluation of the recognition of phases is presented. The experimental platform comprises a data glove with diodes mounted at the fingertips and links (see Fig. 7). A system of 4 stereo cameras records the positions of the diodes so that the position of the hand and its fingers can be tracked. In addition, tactile sensors are mounted at each fingertip in order to detect the contact between the fingertips and an object or a surface, respectively. In the experiment only the tip of the index finger is tracked, in order to identify the contour performed by this finger. The experiments described here cover only contour-following examples, but with different contours run at different speeds. Each example consists of three phases: the approach phase, the contour-following phase, and the retract phase. The experiment starts with the index fingertip in contact with a defined start location at a distance from the contour. The approach phase then follows without any contact with an object, ending at the beginning of the contour to be followed. In the contour-following phase the index finger follows the contour while the contact is preserved, until the end of the contour is reached and the retract phase starts. During the retract phase there is no contact until the index finger reaches the start location again. The experiments can be divided into 3 groups:

A Straight lines
  1: slow speed (see Fig. 8)
  2: fast speed
  3: ramp downhill slow
  4: ramp downhill fast
  5: ramp uphill slow
  6: ramp uphill fast
B Meander
  7: meander slow (see Fig. 9)
  8: meander fast
C Loops
  9: loop slow 1 (see Fig. 10)
  10: loop slow 2
  11: loop fast 1

Three modeling examples are shown in Figs. 8-10.
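The phase recognition of Section 4.1 reduces to a nearest-matrix search under the norm (10). A minimal sketch; the toy model matrices below are random placeholders for the 6 × c cluster-center matrices:

```python
import numpy as np

def classify_phase(V_test, phase_models):
    """Return the index l minimizing ||V_model_l - V_test|| (Frobenius), Eq. (10)."""
    norms = [np.linalg.norm(V_model - V_test) for V_model in phase_models]
    return int(np.argmin(norms))

# Three toy phase models with c = 4 output cluster centers each,
# rows ordered as [x, y, z, fx, fy, fz]
rng = np.random.default_rng(0)
models = [rng.normal(size=(6, 4)) for _ in range(3)]
V_test = models[1] + 0.01 * rng.normal(size=(6, 4))  # noisy copy of model 1
label = classify_phase(V_test, models)               # recovers index 1
```

`np.linalg.norm` on a matrix defaults to the Frobenius norm, which matches the Euclidean matrix norm of (10).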
The blue curves represent the modeled phases, whereas the red curves represent the original data. Each phase of a skill is modeled by 15 cluster centers; the crosses depict the cluster centers. It can be observed that the modeling/approximation quality of the fuzzy time models is excellent. The partition of the skill into phases has been done by means of the forces applied to the tip of the index finger. Figure 11 shows the time plots for the meander experiment 7. By means of the force f applied to the fingertip and its derivative df a segmen-