
Naming and gesturing spatial relations: evidence from focal brain-injured individuals

Tilbe Göksun, Matthew Lehet, Katsiaryna Malykhina, Anjan Chatterjee
Department of Neurology and Center for Cognitive Neuroscience, University of Pennsylvania, Philadelphia, PA 19104, United States

Article history: Received 24 September 2012; received in revised form 21 February 2013; accepted 7 May 2013; available online 14 May 2013.

Keywords: Lesion studies; Spatial relations; Prepositions; Co-speech gestures

Abstract

Spatial language helps us to encode relations between objects and organize our thinking. Little is known about the neural instantiations of spatial language. Using voxel-lesion symptom mapping (VLSM), we tested the hypothesis that focal brain-injured patients who had damage to left fronto-parietal peri-Sylvian regions would have difficulty naming spatial relations between objects. We also investigated the relationship between impaired verbalization of spatial relations and spontaneous gesture production. Patients with left or right hemisphere damage and elderly control participants were asked to name static (e.g., an apple on a book) and dynamic (e.g., a pen moves over a box) locative relations depicted in brief video clips. The correct use of prepositions in each task and gestures that represented the spatial relations were coded. Damage to the left posterior middle frontal gyrus, the left inferior frontal gyrus, and the left anterior superior temporal gyrus was related to impairment in naming spatial relations. Production of spatial gestures correlated negatively with naming accuracy, suggesting that gestures might help with or compensate for difficulty in lexical access. Additional analyses suggested that left hemisphere patients who had damage to the left posterior middle frontal gyrus and the left inferior frontal gyrus gestured less than expected if gestures are used to compensate for impairments in retrieving prepositions.
© 2013 Elsevier Ltd. All rights reserved.

1. Introduction

Spatial language, such as words for locative relations and actions, helps us to encode spatial information in the environment and organize our thinking (Chatterjee, 2001, 2008). Despite its significance in framing our thinking, few studies have investigated the neural underpinnings of spatial language (Amorapanth, Widick, & Chatterjee, 2009; Chatterjee, 2008; Damasio, Grabowski, Tranel, Ponto, Hichwa, & Damasio, 2001; Kemmerer, 2006). The current study is motivated by the hypothesis that perceptual and lexical-semantic spatial information have a parallel organization in the brain. Based on the putative neural organization of the perception of locative relations, we predict that patients with focal brain injury to left fronto-parietal peri-Sylvian regions will have difficulty naming spatial relations between objects.

People gesture spontaneously when they speak. Virtually nothing is known about the spontaneous use of spatial gestures in the setting of neurological disease. It is possible that people rely on spontaneous gestures when they have difficulty communicating verbally; we see this behavior commonly among travelers who use gestures to communicate with people with whom they do not share a language. Alternatively, deficits in expressing spatial relations verbally might generalize to deficits in expressing spatial relations gesturally. In this study, we also explore these possible consequences of focal brain injury on the production of spontaneous spatial gestures.

Spatial language comprises terms for a range of spatial relations. Here, we focus on locative prepositions, which describe spatial relations between a figure (the object to be located) and its ground (the reference object) (Talmy, 1983). For example, in the sentence "the book is on the shelf," the book is the figure and the shelf is the ground.
The preposition "on" expresses the spatial relationship between the figure and the ground. Thus, locative prepositions describe "extrinsic relations" in which an object (figure) is related to an external referent (ground) (Chatterjee, 2008). In the following sections, we first review our current understanding of the neural basis of locative information. We then discuss the relation between speech and gesture, and how gesture might compensate for impaired speech, before presenting the current study.

[Correspondence to: T. Göksun, University of Pennsylvania, Department of Neurology, 3400 Spruce Street, 3 Gates, Philadelphia, PA 19107, United States. Tel.: +1 215 573 7031; fax: +1 215 898 1982.]
Neuropsychologia 51 (2013) 1518-1527.

1.1. The neural correlates of locative prepositions

The presumed neural correlates of the perception of spatial relations follow from a fundamental tenet of visual neuroscience (Ungerleider & Mishkin, 1982): visual processing segregates into two pathways. The ventral stream ('what' pathway) processes information about object properties, such as the color, shape, or size of an object. The dorsal stream ('where' pathway) processes spatial information, such as the location and motion of an object. Even though these pathways interact, studies of nonhuman primates (e.g., Orban, Van Essen, & Vanduffel, 2004; Wang, Tanaka, & Tanifuji, 1996) and human adults (e.g., Bly & Kosslyn, 1997; Haxby et al., 1991) support this division of labor in visual processing.

Consistent with this two-stream hypothesis, brain damage to fronto-parietal circuits can produce profound spatial deficits such as spatial neglect and simultanagnosia.
Germane to our investigation, both fMRI studies in healthy participants and behavioral studies in patients with focal brain damage confirm a fronto-parietal circuit for knowledge of locative relations (e.g., Amorapanth et al., 2009; Wu, Waller, & Chatterjee, 2007). The intraparietal sulcus and the posterior middle frontal gyrus seem to be critical nodes mediating this knowledge.

We previously proposed that spatial perception and language have a parallel organizational structure within the brain (Chatterjee, 2008). For example, the perception of actions relies on posterior temporal-occipital regions including area MT/MST, and the lexical expression of these actions (action verbs) activates areas just anterior and dorsal to this area (Kable, Kan, Wilson, Thompson-Schill, & Chatterjee, 2005). The general hypothesis is that there is a perceptual-to-verbal gradient organized within the left hemisphere of right-handed individuals, such that perceptual nodes serve as points of entry for their lexical counterparts, which are shifted centripetally towards peri-Sylvian cortex (Chatterjee, 2008). As suggested by Kemmerer (2010), the areas related to lexical-semantic encoding of spatial relations can be close to, but distinguishable from, the representations of spatial relations dedicated to perception.

Recent empirical findings support this parallel organization of spatial perception and language (e.g., Amorapanth et al., 2009, 2012; Baciu, Koenig, Vernier, Bedoin, Rubin, & Segebarth, 1999; Damasio et al., 2001; Emmorey et al., 2002; Kemmerer, 2006; Noordzij, Neggers, Ramsey, & Postma, 2008; Tranel & Kemmerer, 2004; Wu et al., 2007). For example, Tranel and Kemmerer (2004) examined brain-injured patients' knowledge of locative prepositions. Participants were presented with groups of three pictures; each set had two objects and involved 12 different spatial relations. They were then asked to point to the picture that involved a different categorical spatial relation than the other two.
They found that damage to the white matter underlying the left supramarginal gyrus and frontal operculum was associated with deficits in matching these spatial relations (see also Kemmerer & Tranel, 2000). Amorapanth et al. (2009) extended these findings: damage to the left supramarginal and angular gyri, the left posterior middle and inferior frontal gyri, and the left superior temporal gyrus was associated with deficits in matching categorical spatial relations (see also Amorapanth et al., 2012; Wu et al., 2007). Neuroimaging studies corroborate these findings (Amorapanth et al., 2009; Baciu et al., 1999; Noordzij et al., 2008).

The growing literature on the neural basis of locative prepositions has focused on comprehension. Only a few studies have investigated the neural underpinnings of producing locative prepositions. These studies demonstrated that the lexical and semantic organization of spatial language might be similar neurally to the perception of spatial relations (Damasio et al., 2001; Emmorey et al., 2002; Kemmerer, 2006; MacSweeney et al., 2002; Tranel, Manzel, Asp, & Kemmerer, 2008). For example, using PET imaging, Damasio et al. (2001) found that naming static spatial relations between objects from drawings activated the left supramarginal gyrus, the inferior prefrontal cortex, the left inferior temporal lobe, and right parietal regions. Case studies with aphasic patients show similar patterns of neural involvement in producing locative prepositions (e.g., Friederici, 1982; Kemmerer & Tranel, 2000; Tesak & Hummer, 1994; Tranel & Kemmerer, 2004).

Here we examine focal brain-injured patients' production of locative prepositions using voxel-lesion symptom mapping (VLSM) analysis. VLSM is a powerful technique for examining brain-behavior relationships in patients with focal brain injury (Bates et al., 2003; Kimberg, Coslett, & Schwartz, 2007).
Unlike traditional lesion mapping methods, VLSM does not classify patients based on lesion site, clinical diagnosis, or behavioral performance. One need not make categorical distinctions about whether a patient has a deficit, since performance on tasks is treated as a continuous variable. VLSM adds specificity to lesion analysis by increasing the possibility of detecting neuroanatomical regions underlying a cognitive process that might be missed by coarser traditional lesion mapping methods. Furthermore, the inferential strengths of lesion methods offer an important constraint on neural hypotheses generated by functional neuroimaging methods (Chatterjee, 2005; Fellows, Heberlein, Morales, Shivde, Waller, & Wu, 2005).

Our focus on the production of locative information raises additional questions about alternate means of communication, such as the use of gestures. Do gestures simply accompany speech? Do they help to compensate when verbal communication is impaired, or are they also impaired? In the next section, we briefly review the interactions between speech and gesture to motivate our investigation of the relationship between spontaneous gesture and impaired speech.

1.2. Associations between speech and gesture

People gesture spontaneously when they talk. The hand movements of co-speech gestures are typically related to the accompanying language by their form and function. Gestures can be classified into four main categories: deictic gestures (i.e., pointing to an object, person, or location); beat gestures (i.e., quick hand movements highlighting the prosody of speech, without semantic meaning); iconic gestures, which represent objects or events, such as moving the hand in an arc to indicate the direction of an action; and metaphoric gestures, which refer to abstract ideas (McNeill, 1992).
In this paper, we examine only iconic gestures, as relevant to the communication of spatial information.

McNeill (1992) claims that speech and gesture are complementary processes that form a tightly integrated language system (see also Alibali, Kita, & Young, 2000; Feyereisen, 1983; Goldin-Meadow, 2003; Kita & Özyürek, 2003; McNeill, 2005). Without speech, many iconic gestures might not have an obvious meaning. But in combination with speech, gestures can clarify or emphasize spatial aspects of the propositional content of speech. Despite considerable behavioral evidence of a close relationship between speech and gesture, we know relatively little about the neural correlates of co-speech gestures (Holle, Gunter, Rueschemeyer, Hennenlotter, & Iacoboni, 2008; Skipper, Goldin-Meadow, Nusbaum, & Small, 2007; Willems, Özyürek, & Hagoort, 2007; for a review see Willems & Hagoort, 2007). For example, Willems et al. (2007) reported that co-speech gestures and language processing recruit overlapping areas in the left inferior frontal gyrus (BA 45), suggesting a pivotal role of Broca's area in processing both types of information (but see Skipper et al., 2007).

Most research on the neural correlates of co-speech gesture production has focused on patients with aphasia (e.g., Ahlsén, 1991; Béland & Ska, 1992; Cicone, Wapner, Foldi, Zurif, & Gardner, 1979; Cocks, Dipper, Middleton, & Morgan, 2011; Cocks, Sautin, Kita, Morgan, & Zlotowitz, 2009; Dipper, Cocks, Rowe, & Morgan, 2011; Feyereisen, 1983; Friederici, 1981, 1982; Glosser, Wiener, & Kaplan, 1986; Hadar, Burstein, Krauss, & Soroker, 1998; Kemmerer, Chandrasekaran, & Tranel, 2007; Le May, David, & Thomas, 1988), patients with Parkinson's disease (e.g., Cleary, Poliakoff, Galpin, Dick, & Holler, 2011), or split-brain patients (e.g., Lausberg, Kita, Zaidel, & Ptito, 2003; Kita & Lausberg, 2008).
These studies try to determine whether gestures compensate for impaired speech. Some studies suggest that speech impairment is associated with gesture impairment (e.g., Cicone et al., 1979; Glosser et al., 1986; McNeill, 1985). Others indicate that aphasic patients use more iconic gestures than healthy controls (e.g., Feyereisen, 1983; Hadar et al., 1998; Kemmerer et al., 2007; Lanyon & Rose, 2009; Le May et al., 1988). In an early study, Feyereisen (1983) showed that even though Broca's aphasics gesture less per minute than healthy controls, they used co-speech gestures per word more often than controls. Hermann, Reichle, Lucius-Hoene, Wallesch, and Johannsen-Horbach (1988) also reported that severely aphasic patients communicated more frequently using nonverbal means, such as iconic gestures, than healthy controls. These findings accord with behavioral studies of healthy participants suggesting that gestures help with lexical access (Hadar & Butterworth, 1997).

Studies also demonstrate that the type and severity of aphasia, and patients' other neuropsychological deficits, produce variations in gesture production (e.g., Ahlsén, 1991; Béland & Ska, 1992; Cicone et al., 1979; Duffy & Duffy, 1981; Duffy, Duffy, & Pearson, 1975; Glosser et al., 1986; Hermann et al., 1988). For example, Ahlsén (1991) showed that a Wernicke's aphasic used compensatory body communications to overcome speech problems. Comparing Broca's and Wernicke's aphasics, Le May et al. (1988) found that Wernicke's aphasics produced many kinetographic gestures (i.e., dynamic movements of the hand that represent, for example, the action of slicing), whereas Broca's aphasics gestured significantly more overall than Wernicke's aphasics and controls.

Other studies have focused on how damage to the right hemisphere is associated with gesture production (e.g., Cocks et al., 2007; Hadar & Krauss, 1999; Hadar et al., 1998; Kita & Lausberg, 2008; Lausberg, Zaidel, Cruz, & Ptito, 2007; McNeill & Pedelty, 1995). For example, McNeill and Pedelty (1995) suggested that damage to the right hemisphere led to a reduction in the use of gestures because of an impairment in visuo-spatial imagery. Yet a more recent study by Cocks et al. (2007) found that right hemisphere patients varied in their use of gestures based on the nature of their discourse; in particular, discourse samples with high emotional content resulted in less gesture production than other discourse types.

Although these neuropsychological studies are informative, the inferences drawn about the relationship between speech and spontaneous gesture, and their neural correlates, come from case studies and small series. Such studies typically tally the total number of gestures rather than analyze the specific content of gestures, thus attenuating the observable relationship between impaired speech and spontaneous gesture.

1.3. Summary and predictions

The aims of our study are twofold. We examine (1) the contribution of fronto-parietal peri-Sylvian regions ('where' pathway) to naming locative prepositions by testing focal brain-injured patients, and (2) the relationship of impaired naming of spatial relations to spontaneous gesture production in these patients. In this study we use VLSM analysis to test the naming of locative relations in a relatively large sample of left hemisphere damaged (LHD) and right hemisphere damaged (RHD) patients.
We predict that LHD patients who have damage in peri-Sylvian fronto-parietal regions will be impaired in correctly naming spatial relations between objects.

We also varied the way spatial relations were displayed. Most studies use static pictures as stimuli for presenting spatial relations (e.g., Damasio et al., 2001; Emmorey et al., 2002). A more recent study by Tranel et al. (2008) investigated the influence of static vs. dynamic stimuli on naming actions and found overlapping neuroanatomical correlates for naming both types of stimuli. Moreover, using dynamic stimuli, Wu, Morganti, and Chatterjee (2008) showed that attention to 'where' an object moves in space (i.e., dynamic prepositions) activated bilateral parietal and frontal areas, as reported for the processing of locative prepositions. Even though these findings suggest that spatial relations are treated similarly in the brain regardless of whether the context is static or dynamic, in this study we directly compare naming of these two types of spatial relations.

Lastly, we probe the relationship between speech deficits and spontaneous gestures. Patients who have difficulty naming spatial relations might use iconic spatial gestures to compensate for their impairments. Alternatively, as in some aphasic patients (Cicone et al., 1979; McNeill, 1985), gesture production might also be impaired, and these patients would not use spatial gestures to compensate for their speech deficits. In such patients the naming deficit may reflect deficits at a conceptual level, or deficits in both speech and limb motor production systems.

2. Materials and methods

2.1. Participants

Thirty-two patients with chronic unilateral lesions (16 LHD and 16 RHD patients) were recruited from the Focal Lesion Subject Database at the University of Pennsylvania (Fellows, Stark, Berg, & Chatterjee, 2008). Patients were not chosen based on specific lesion locations or behavioral criteria.
The database excludes patients with a history of other neurological disorders, psychiatric disorders, or substance abuse. LHD patients ranged in age from 37 to 79 (M = 64.69, SD = 11.49, 10 females) and RHD patients ranged in age from 45 to 87 (M = 63.50, SD = 11.99, 11 females). The LHD and RHD patients had an average of 13.6 (SD = 2.02) and 15.1 (SD = 3.44) years of education, respectively. Thirteen age-matched (range: 38-77, M = 60.85, SD = 11.05, 9 females) and education-matched (M = 16, SD = 2.12) older adults served as a healthy control (HC) group. The three groups did not differ in age or years of education, ps > 0.05. In addition, LHD and RHD patients did not differ in lesion size, p > 0.05. Fig. 1 displays lesion overlap maps of the patients. All participants were right-handed, native English speakers, and provided written informed consent in accordance with the policies of the University of Pennsylvania's Institutional Review Board. Participants received $15/h for volunteering their time. Table 1 presents detailed demographic data for each patient.

2.2. Tasks and stimuli

2.2.1. Neuropsychological tasks

Patients were administered the language comprehension and language production subtests of the Western Aphasia Battery (WAB; Kertesz, 1982). The scores from these neuropsychological tasks are presented in Table 1. Patients were also administered the Object and Action Naming Battery (OANB; Druks, 2000). In this task, each patient named 50 pictures of actions and 81 pictures of objects.

2.2.2. Experimental tasks

Two experimental tasks, consisting of static pictures and dynamic movie clips of different spatial relations between two objects, were created. The static spatial relations task had 24 pictures depicting four different spatial relations: two topologic (in, on) and two projective (above, below) relations between two objects. A male hand illustrated the spatial relation in each picture. The pictures were taken with a Sony digital camera on a white table (see Fig.
2A for a sample stimulus). The final set of 24 pictures was selected from 36 based on ratings of familiarity and naming of the spatial relations by 18 native English speakers with a mean age of 21.88 (range: 18-27, SD = 2.76). After seeing each picture, individuals first rated the familiarity of the objects in the picture on a 5-point scale (1 = not familiar at all, 5 = very familiar). Then they named the spatial relation between the two objects. To ensure that participants used both above and below in their descriptions of the spatial relations, the experimenter started the sentences that the participants were asked to complete. For example, when participants saw the picture of a cup on a book, they first rated the familiarity of the two objects by simply pressing 1-5 on the keyboard. Then the experimenter said "The cup…" and the participant finished the sentence by saying "The cup is on the book." Two practice trials were presented before the start of the task. Stimuli were presented on a MacBook Air computer using Matlab 2007 Psychtoolbox. Pictures below an average familiarity rating of 3.5 and below 97% naming agreement were eliminated.

In the dynamic spatial relations task, 28 short movie clips depicting five different spatial relations (put in, put on, move over, move under, move across) between two objects were used. In each clip, one object was always stationary on a table, and the other object was moved in relation to the stationary object. A male hand illustrated the spatial relation in each movie clip. The clips were filmed with a Sony digital camera in front of a white background on a table (see Fig. 2B for a still picture from a movie clip). Final movie clips were edited using iMovie. Each movie lasted 3 s. The same volunteers from the previous task rated the familiarity of the objects in the movie clips on a 5-point scale (1 = not familiar at all, 5 = very familiar) and then named the relation between the two objects by describing what the moving object did. A total of 54 movie clips were shown to the volunteers. For example, when participants saw the movie clip of an orange being put in a bowl, they first rated the familiarity of the two objects by pressing 1-5 on the keyboard. Then they described the relation between the two objects by saying "the orange was put in a bowl." Two practice trials were presented before the start of the task. Stimuli were presented on a MacBook Air computer using Matlab 2007 Psychtoolbox. The final set of 28 movie clips was selected based on agreement among the participants; movie clips below an average familiarity rating of 3.5 and below 97% naming agreement were eliminated.

Fig. 1. Coverage map indicating the lesion locations for all participants. The colored scale represents the number of lesions for each pixel. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

Table 1. Patient demographic and neuropsychological data.

Patient  Gender  Age  Educ. (yrs)  Side  Location  Lesion size (# voxels)  Cause     Chronicity (mo)  WAB (AQ)  OANB (action)  OANB (object)
LT_85    F       63   15           L     I         13,079                  Stroke    177              –         –              –
CD_141   F       52   16           L     Pe        21,605                  Stroke    143              98.8      100            96
KG_215   M       61   14           L     F         17,422                  Stroke    145              94.4      96             93.8
TO_221   F       77   13           L     O         5886                    Stroke    160              100       100            100
BC_236   M       65   18           L     FP        155,982                 Stroke    210              90.8      88             94
XK_342   F       57   12           L     OT        42,144                  Stroke    125              93.4      94             93
TD_360   M       58   12           L     T BG      38,063                  Stroke    118              65.3      52             –
IG_363   M       74   16           L     F         16,845                  Stroke    117              91.4      96             95
KD_493   M       68   14           L     ACA       22,404                  Aneurysm  101              92.1      98             95
DR_529   F       66   12           L     PA F      8969                    Stroke    100              –         –              –
DR_565   F       53   12           L     PA F      14,517                  Aneurysm  103              99.8      98             97.5
MC_577   F       79   11           L     C         4191                    Stroke    50               85.3      82             79
NS_604   F       37   12           L     PO        79,231                  AVM       113              –         100            98
UD_618   M       77   15           L     F         48,743                  Stroke    47               93.6      76             85
KM_642   M       77   12           L     P         7996                    Stroke    109              96.8      94             98
FC_83    M       70   12           R     FTP       8040                    Stroke    169              99.8      96             98
MB_101   F       58   18           R     T BG      10,543                  Stroke    426              98.4      98             98
NC_112   F       48   16           R     O         4733                    Stroke    178              100       98             –
RT_309   F       66   21           R     T         79,691                  Hematoma  128              –         –              –
DF_316   F       87   12           R     P         2981                    Stroke    126              97.1      88             93
DC_392   M       56   10           R     PT        39,068                  Stroke    108              97.6      98             95
DX_444   F       80   12           R     PT        41,172                  Stroke    106              95.5      94             93
TS_474   F       51   11           R     P         22,208                  Stroke    100              95.1      98             95
NS_569   F       72   18           R     FT BG     37,366                  Stroke    77               100       100            99
DG_592   F       45   12           R     PT        130,552                 Stroke    127              97.8      98             98
KG_593   F       49   12           R     FTP BG    170,128                 Stroke    58               100       90             95
KS_605   M       63   18           R     C         23,217                  Stroke    76               –         –              –
ND_640   F       70   18           R     PT        64,603                  Stroke    54               96.8      100            100
CS_657   M       75   18           R     PO        33,568                  Stroke    43               99.2      98             100
KN_675   M       64   18           R     FT        23,779                  Stroke    32               –         –              –
MN_738   F       62   16           R     C         32,154                  Stroke    25               98.4      100            100

Key: F, frontal; T, temporal; P, parietal; O, occipital; BG, basal ganglia; C, cerebellum; I, insula; Pe, peri-Sylvian; PA, pericallosal artery; ACA, anterior cerebral artery; MCA, middle cerebral artery; AVM, arteriovenous malformation. WAB-AQ is a composite language score with a maximum possible score of 100. OANB (action) and OANB (object) index knowledge of verbs and nouns, each with a maximum possible score of 100.

2.3. Procedure

Participants were tested individually in the laboratory or in their homes. In each session, the static spatial relations task was presented before the dynamic spatial relations task. In the static spatial relations task, after two practice trials, each participant received 24 test trials in random order. When a picture was shown on the screen, the experimenter asked the participant to describe the relation between the two objects. As in the norming phase, the experimenter started the sentences and the participants completed them. In the dynamic spatial relations task, two practice trials were followed by 28 test trials in random order.
After each short movie clip, the experimenter asked the participant to describe what the moving object did in relation to the stationary object. The experimenter presented the pictures and movie clips on a MacBook Air computer using Matlab 2007 and advanced the trials when the participant was ready. Each session was videotaped for later transcription of speech and gesture. The experimenter did not mention gestures to the participants or influence their gesturing during the tasks. The neuropsychological tasks were administered in a different testing session, either before or after the experimental tasks.

2.4. Coding

2.4.1. Speech

Native English speakers transcribed all speech verbatim from participants' responses to each trial. In both tasks, the speech for each trial was then coded for the correct preposition.

2.4.2. Gesture

Each participant's spontaneous gestures were transcribed for each trial. A change in the shape of the hand or its motion signaled the end of a gesture. For each trial, the coders first decided whether a gesture was produced. Gestures were classified as (1) static gestures or (2) dynamic gestures. Static gestures referred either to objects or to their locative properties. These gestures included pointing at the objects in the pictures and movie clips, showing a property of the objects (e.g., making a curved hand shape with the palm facing up to refer to the bowl), or illustrating the static spatial relation between objects (e.g., making a flat hand shape with the palm facing down over the other hand to refer to the preposition 'above'). Dynamic gestures involved movement of the hand along one directional axis (e.g., from left to right, or back and forth) or circular movements of the hand. These dynamic gestures mainly represented dynamic spatial relations between objects, such as the index finger moving in an arc from left to right to illustrate the preposition 'over'.
For the purposes of this study, static and dynamic gestures that referred to spatial relations in a given trial were included in the analyses. In particular, these gestures are iconic and depict spatial relations between objects statically (e.g., the palm facing down placed onto the other hand to refer to the preposition 'on') or dynamically (e.g., the palm facing down moved from left to right over the other hand to refer to the preposition 'move over').

2.5. Reliability

To test the reliability of the coding system, a second coder conducted two types of reliability coding. First, she randomly chose and coded all responses, both speech and gesture, of 20% of the participants. For the static spatial relations task, agreement between coders was 98.0% (κ = 0.897, n = 144 trials) for naming spatial relations, 98.4% (κ = 0.872, n = 120 trials) for detecting gestures, and 96.0% (κ = 0.832, n = 120 trials) for coding gestures that referred to correct spatial relations. For the dynamic spatial relations task, agreement between coders was 98.8% (κ = 0.923, n = 168 trials) for naming spatial relations, 99.7% (κ = 0.951, n = 168 trials) for detecting gestures, and 96.0% (κ = 0.836, n = 168 trials) for coding gestures that referred to correct spatial relations.

Second, 20% of each participant's responses, both speech and gesture, were randomly chosen and coded. For the static spatial relations task (n = 220), agreement between coders was 97.3% (κ = 0.973) for naming spatial relations, 93.4% (κ = 0.891) for gesture identification, 96.4% (κ = 0.942) for gesture category (static vs. dynamic), and 92.5% (κ = 0.912) for coding gestures that referred to spatial relations. For the dynamic spatial relations task (n = 232), agreement between coders was 94.0% (κ = 0.940) for naming spatial relations, 97.8% (κ = 0.978) for gesture identification, 95.2% (κ = 0.931) for gesture category (static vs. dynamic), and 98.7% (κ = 0.987) for coding gestures that referred to spatial relations.

2.6.
Analyses

2.6.1. Behavioral analyses

For speech, the dependent variable was the accuracy of naming static or dynamic spatial relations in each task; the percentage of correct responses was calculated for each patient. For gesture, two dependent variables were measured: the percentage of trials in which participants produced at least one gesture and, the real focus of our study, the percentage of trials in which participants produced spatial iconic gestures.

2.6.2. Neuroanatomical analyses

CT or MRI scans for all patients were rendered to a common anatomical space (Colin27). Voxel-based lesion-symptom mapping (VLSM; Bates et al., 2003) analyses were then conducted using the Voxbo brain-imaging analysis software developed at the University of Pennsylvania. VLSM assessed the relationship between behavioral measures and brain lesions on a voxel-by-voxel basis. The analyses were restricted to voxels in which at least two patients had lesions. The analyses resulted in statistical t-maps of lesioned brain areas that were significantly related to impaired behavioral performance. We conducted VLSM analyses for the speech and gesture dependent variables separately in both tasks. One-tailed t-tests for speech and two-tailed t-tests for gesture compared behavioral scores between patients with and without lesions at every voxel. The t-map for each analysis was thresholded at q < 0.05 using the false discovery rate (FDR) to control for multiple comparisons (Benjamini & Hochberg, 1995; Genovese, Lazar, & Nichols, 2002).

3. Results

3.1. Neuropsychological analyses

Even though most patients were not overtly aphasic, WAB scores were lower for the LHD patients compared to the RHD patients, F(1,

Fig. 2. Sample stimuli from the static spatial relations task (A) and the dynamic spatial relations task (B). The target preposition for the picture on the left (A) was in (i.e., the pumpkin is in the bowl). The still frame on the right (B) represents the put onto spatial relation. The arrow indicates the direction of the moving object (i.e., the pumpkin was put onto the book).
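The reliability figures in Section 2.5 pair raw percent agreement with Cohen's kappa, which corrects agreement for chance. As a minimal illustration of how such a kappa is computed from two coders' trial-by-trial labels (the function name and example labels are ours, not taken from the paper's data):

```python
from collections import Counter

def cohens_kappa(coder1, coder2):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(coder1) == len(coder2) and coder1
    n = len(coder1)
    # Observed agreement: fraction of trials where the coders match
    observed = sum(a == b for a, b in zip(coder1, coder2)) / n
    # Chance agreement: sum over labels of the product of marginal proportions
    c1, c2 = Counter(coder1), Counter(coder2)
    expected = sum(c1[label] * c2[label] for label in c1) / (n * n)
    if expected == 1.0:  # both raters used one identical label throughout
        return 1.0
    return (observed - expected) / (1.0 - expected)

# Toy example: 10 trials coded gesture ('g') vs. no gesture ('n');
# the raters disagree on one trial, so raw agreement is 90%
rater_a = ['g', 'g', 'n', 'g', 'n', 'n', 'g', 'n', 'g', 'g']
rater_b = ['g', 'g', 'n', 'n', 'n', 'n', 'g', 'n', 'g', 'g']
print(round(cohens_kappa(rater_a, rater_b), 3))  # -> 0.8
```

Note how 90% raw agreement shrinks to κ = 0.8 once chance matches are discounted, which is why the paper reports both numbers.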
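The voxelwise logic of Section 2.6.2 (compare behavioral scores between patients with and without a lesion at each voxel, restrict to voxels lesioned in at least two patients, then threshold at FDR q < 0.05) can be sketched as follows. This is an illustrative pure-Python mock-up under our own function names, not the Voxbo implementation the authors used; converting each t statistic to a p-value (via the t distribution or a permutation scheme) is omitted here.

```python
import math

def t_stat(lesioned, spared):
    """Pooled-variance two-sample t statistic (lesioned minus spared scores)."""
    n1, n2 = len(lesioned), len(spared)
    m1, m2 = sum(lesioned) / n1, sum(spared) / n2
    ss1 = sum((x - m1) ** 2 for x in lesioned)
    ss2 = sum((x - m2) ** 2 for x in spared)
    pooled = (ss1 + ss2) / (n1 + n2 - 2)  # pooled variance estimate
    return (m1 - m2) / math.sqrt(pooled * (1 / n1 + 1 / n2))

def bh_fdr(pvals, q=0.05):
    """Benjamini-Hochberg step-up: True where a p-value survives FDR level q."""
    m = len(pvals)
    ranked = sorted(range(m), key=lambda i: pvals[i])
    cutoff = None
    for rank, i in enumerate(ranked, start=1):
        if pvals[i] <= q * rank / m:
            cutoff = pvals[i]  # largest p meeting its step-up criterion
    return [cutoff is not None and p <= cutoff for p in pvals]

def vlsm_t_map(lesion_matrix, scores, min_lesions=2):
    """t statistic per voxel; None where too few patients have damage there."""
    t_map = []
    for voxel in zip(*lesion_matrix):  # rows = patients, columns = voxels
        lesioned = [s for has, s in zip(voxel, scores) if has]
        spared = [s for has, s in zip(voxel, scores) if not has]
        if len(lesioned) < min_lesions or len(spared) < 2:
            t_map.append(None)  # excluded, as in the paper's >= 2-lesion rule
        else:
            t_map.append(t_stat(lesioned, spared))
    return t_map

# Example: 6 patients, 3 voxels; lesions at voxel 0 co-occur with low scores
lesions = [[1, 0, 1], [1, 0, 0], [0, 1, 0], [0, 1, 0], [0, 0, 0], [0, 0, 0]]
scores = [10, 20, 50, 60, 55, 65]     # naming accuracy (% correct)
t_map = vlsm_t_map(lesions, scores)   # t_map[2] is None: only one lesion there
```

With p-values in hand, `bh_fdr` keeps every p no larger than the largest p(k) satisfying p(k) ≤ qk/m, the Benjamini-Hochberg rule the paper cites for thresholding its t-maps.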