Publications

  • Acheson, D. J., Hamidi, M., Binder, J. R., & Postle, B. R. (2011). A common neural substrate for language production and verbal working memory. Journal of Cognitive Neuroscience, 23(6), 1358-1367. doi:10.1162/jocn.2010.21519.

    Abstract

    Verbal working memory (VWM), the ability to maintain and manipulate representations of speech sounds over short periods, is held by some influential models to be independent from the systems responsible for language production and comprehension [e.g., Baddeley, A. D. Working memory, thought, and action. New York, NY: Oxford University Press, 2007]. We explore the alternative hypothesis that maintenance in VWM is subserved by temporary activation of the language production system [Acheson, D. J., & MacDonald, M. C. Verbal working memory and language production: Common approaches to the serial ordering of verbal information. Psychological Bulletin, 135, 50–68, 2009b]. Specifically, we hypothesized that for stimuli lacking a semantic representation (e.g., nonwords such as mun), maintenance in VWM can be achieved by cycling information back and forth between the stages of phonological encoding and articulatory planning. First, fMRI was used to identify regions associated with two different stages of language production planning: the posterior superior temporal gyrus (pSTG) for phonological encoding (critical for VWM of nonwords) and the middle temporal gyrus (MTG) for lexical–semantic retrieval (not critical for VWM of nonwords). Next, in the same subjects, these regions were targeted with repetitive transcranial magnetic stimulation (rTMS) during language production and VWM task performance. Results showed that rTMS to the pSTG, but not the MTG, increased error rates on paced reading (a language production task) and on delayed serial recall of nonwords (a test of VWM). Performance on a lexical–semantic retrieval task (picture naming), in contrast, was significantly sensitive to rTMS of the MTG. Because rTMS was guided by language production-related activity, these results provide the first causal evidence that maintenance in VWM directly depends on the long-term representations and processes used in speech production.
  • Acheson, D. J., & MacDonald, M. C. (2011). The rhymes that the reader perused confused the meaning: Phonological effects during on-line sentence comprehension. Journal of Memory and Language, 65, 193-207. doi:10.1016/j.jml.2011.04.006.

    Abstract

    Research on written language comprehension has generally assumed that the phonological properties of a word have little effect on sentence comprehension beyond the processes of word recognition. Two experiments investigated this assumption. Participants silently read relative clauses in which two pairs of words either did or did not have a high degree of phonological overlap. Participants were slower reading and less accurate comprehending the overlap sentences compared to the non-overlapping controls, even though sentences were matched for plausibility and differed by only two words across overlap conditions. A comparison across experiments showed that the overlap effects were larger in the more difficult object relative than in subject relative sentences. The reading patterns showed that phonological representations affect not only memory for recently encountered sentences but also the developing sentence interpretation during on-line processing. Implications for theories of sentence processing and memory are discussed.

    Highlights

    • The work investigates the role of phonological information in sentence comprehension, which is poorly understood.
    • Subjects read object and subject relative clauses with and without phonological overlap in two pairs of words.
    • Unique features of the study were online reading measures and pinpointed overlap locations.
    • Phonological overlap slowed reading speed and impaired sentence comprehension, especially for object relatives.
    • The results show a key role for phonological information during online comprehension, not just later sentence memory.
  • Acheson, D. J., Postle, B. R., & MacDonald, M. C. (2011). The effect of concurrent semantic categorization on delayed serial recall. Journal of Experimental Psychology: Learning, Memory, and Cognition, 37, 44-59. doi:10.1037/a0021205.

    Abstract

    The influence of semantic processing on the serial ordering of items in short-term memory was explored using a novel dual-task paradigm. Participants engaged in 2 picture-judgment tasks while simultaneously performing delayed serial recall. List material varied in the presence of phonological overlap (Experiments 1 and 2) and in semantic content (concrete words in Experiment 1 and 3; nonwords in Experiments 2 and 3). Picture judgments varied in the extent to which they required accessing visual semantic information (i.e., semantic categorization and line orientation judgments). Results showed that, relative to line-orientation judgments, engaging in semantic categorization judgments increased the proportion of item-ordering errors for concrete lists but did not affect error proportions for nonword lists. Furthermore, although more ordering errors were observed for phonologically similar relative to dissimilar lists, no interactions were observed between the phonological overlap and picture-judgment task manipulations. These results demonstrate that lexical-semantic representations can affect the serial ordering of items in short-term memory. Furthermore, the dual-task paradigm provides a new method for examining when and how semantic representations affect memory performance.
  • Araújo, S., Faísca, L., Bramão, I., Inácio, F., Petersson, K. M., & Reis, A. (2011). Object naming in dyslexic children: More than a phonological deficit. The Journal of General Psychology, 138, 215-228. doi:10.1080/00221309.2011.582525.

    Abstract

    In the present study, the authors investigate how some visual factors related to early stages of visual-object naming modulate naming performance in dyslexia. The performance of dyslexic children was compared with 2 control groups—normal readers matched for age and normal readers matched for reading level—while performing a discrete naming task in which color and dimensionality of the visually presented objects were manipulated. The results showed that 2-dimensional naming performance improved for color representations in control readers but not in dyslexics. In contrast to control readers, dyslexics were also insensitive to the stimulus's dimensionality. These findings are unlikely to be explained by a phonological processing problem related to phonological access or retrieval but suggest that dyslexics have a lower capacity for coding and decoding visual surface features of 2-dimensional representations or problems with the integration of visual information stored in long-term memory.
  • Araújo, S., Inácio, F., Francisco, A., Faísca, L., Petersson, K. M., & Reis, A. (2011). Component processes subserving rapid automatized naming in dyslexic and non-dyslexic readers. Dyslexia, 17, 242-255. doi:10.1002/dys.433.

    Abstract

    The current study investigated which time components of rapid automatized naming (RAN) predict group differences between dyslexic and non-dyslexic readers (matched for age and reading level), and how these components relate to different reading measures. Subjects performed two RAN tasks (letters and objects), and data were analyzed through a response time analysis. Our results demonstrated that impaired RAN performance in dyslexic readers stems mainly from enhanced inter-item pause times and not from difficulties at the level of post-access motor production (expressed as articulation rates). Moreover, inter-item pause times account for a significant proportion of variance in reading ability in addition to the effect of phonological awareness in the dyslexic group. This suggests that non-phonological factors may lie at the root of the association between RAN inter-item pauses and reading ability. In normal readers, RAN performance was associated with reading ability only at early ages (i.e., in the reading-matched controls), and again it was the RAN inter-item pause times that explained the association.
  • Araújo, S., Faísca, L., Petersson, K. M., & Reis, A. (2011). What does rapid naming tell us about dyslexia? Avances en Psicología Latinoamericana, 29, 199-213.

    Abstract

    This article summarizes some of the important findings from research evaluating the relationship between poor rapid naming and impaired reading performance. Substantial evidence shows that dyslexic readers have problems with rapid naming of visual items. Early research assumed that this was a consequence of phonological processing deficits, but recent findings suggest that non-phonological processes may lie at the root of the association between slow naming speed and poor reading. The hypothesis that rapid naming reflects an independent core deficit in dyslexia is supported by three main findings: (1) some dyslexics are characterized by rapid naming difficulties but intact phonological skills; (2) rapid naming is independently associated with reading competence in dyslexic readers when the effect of phonological skills is controlled; and (3) rapid naming and phonological processing measures are not reliably correlated. Recent research also reveals greater predictive power of rapid naming, in particular the inter-item pause time, for high-frequency word reading compared to pseudoword reading in developmental dyslexia. Altogether, the results are more consistent with the view that a phonological component alone cannot account for rapid naming performance in dyslexia. Rather, rapid naming problems may emerge from inefficiencies in visual-orthographic processing as well as in phonological processing.
  • Baggio, G., & Hagoort, P. (2011). The balance between memory and unification in semantics: A dynamic account of the N400. Language and Cognitive Processes, 26, 1338-1367. doi:10.1080/01690965.2010.542671.

    Abstract

    At least three cognitive brain components are necessary in order for us to be able to produce and comprehend language: a Memory repository for the lexicon, a Unification buffer where lexical information is combined into novel structures, and a Control apparatus presiding over executive function in language. Here we describe the brain networks that support Memory and Unification in semantics. A dynamic account of their interactions is presented, in which a balance between the two components is sought at each word-processing step. We use the theory to provide an explanation of the N400 effect.
  • Bottini, R., & Casasanto, D. (2011). Space and time in the child’s mind: Further evidence for a cross-dimensional asymmetry [Abstract]. In L. Carlson, C. Hölscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Conference of the Cognitive Science Society (pp. 3010). Austin, TX: Cognitive Science Society.

    Abstract

    Space and time appear to be related asymmetrically in the child’s mind: temporal representations depend on spatial representations more than vice versa, as predicted by space-time metaphors in language. In a study supporting this conclusion, spatial information interfered with children’s temporal judgments more than vice versa (Casasanto, Fotakopoulou, & Boroditsky, 2010, Cognitive Science). In this earlier study, however, spatial information was available to participants for more time than temporal information was (as is often the case when people observe natural events), suggesting a skeptical explanation for the observed effect. Here we conducted a stronger test of the hypothesized space-time asymmetry, controlling spatial and temporal aspects of the stimuli even more stringently than they are generally 'controlled' in the natural world. Results replicated those of Casasanto and colleagues, validating their finding of a robust representational asymmetry between space and time, and extending it to children (4-10 y.o.) who speak Dutch and Brazilian Portuguese.
  • Bramão, I., Reis, A., Petersson, K. M., & Faísca, L. (2011). The role of color in object recognition: A review and meta-analysis. Acta Psychologica, 138, 244-253. doi:10.1016/j.actpsy.2011.06.010.

    Abstract

    In this study, we systematically review the scientific literature on the effect of color on object recognition. Thirty-five independent experiments, comprising 1535 participants, were included in a meta-analysis. We found a moderate effect of color on object recognition (d = 0.28). Specific effects of moderator variables were analyzed and we found that color diagnosticity is the factor with the greatest moderator effect on the influence of color in object recognition; studies using color diagnostic objects showed a significant color effect (d = 0.43), whereas a marginal color effect was found in studies that used non-color diagnostic objects (d = 0.18). The present study did not permit the drawing of specific conclusions about the moderator effect of the object recognition task; while the meta-analytic review showed that color information improves object recognition mainly in studies using naming tasks (d = 0.36), the literature review revealed a large body of evidence showing positive effects of color information on object recognition in studies using a large variety of visual recognition tasks. We also found that color is important for the ability to recognize artifacts and natural objects, to recognize objects presented as types (line-drawings) or as tokens (photographs), and to recognize objects that are presented without surface details, such as texture or shadow. Taken together, the results of the meta-analysis strongly support the contention that color plays a role in object recognition. This suggests that the role of color should be taken into account in models of visual object recognition.

  • Bramão, I., Inácio, F., Faísca, L., Reis, A., & Petersson, K. M. (2011). The influence of color information on the recognition of color diagnostic and noncolor diagnostic objects. The Journal of General Psychology, 138(1), 49-65. doi:10.1080/00221309.2010.533718.

    Abstract

    In the present study, the authors explore in detail the level of visual object recognition at which perceptual color information improves the recognition of color diagnostic and noncolor diagnostic objects. To address this issue, 3 object recognition tasks, with different cognitive demands, were designed: (a) an object verification task; (b) a category verification task; and (c) a name verification task. They found that perceptual color information improved color diagnostic object recognition mainly in tasks for which access to the semantic knowledge about the object was necessary to perform the task; that is, in category and name verification. In contrast, the authors found that perceptual color information facilitates noncolor diagnostic object recognition when access to the object’s structural description from long-term memory was necessary—that is, object verification. In summary, the present study shows that the role of perceptual color information in object recognition is dependent on color diagnosticity.
  • Brookshire, G., & Casasanto, D. (2011). Motivation and motor action: Hemispheric specialization for motivation reverses with handedness. In L. Carlson, C. Holscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Meeting of the Cognitive Science Society (pp. 2610-2615). Austin, TX: Cognitive Science Society.
  • Casasanto, D., & Lupyan, G. (2011). Ad hoc cognition [Abstract]. In L. Carlson, C. Hölscher, & T. F. Shipley (Eds.), Proceedings of the 33rd Annual Conference of the Cognitive Science Society (pp. 826). Austin, TX: Cognitive Science Society.

    Abstract

    If concepts, categories, and word meanings are stable, how can people use them so flexibly? Here we explore a possible answer: maybe this stability is an illusion. Perhaps all concepts, categories, and word meanings (CC&Ms) are constructed ad hoc, each time we use them. On this proposal, all words are infinitely polysemous, all communication is 'good enough', and no idea is ever the same twice. The details of people’s ad hoc CC&Ms are determined by the way retrieval cues interact with the physical, social, and linguistic context. We argue that even the most stable-seeming CC&Ms are instantiated via the same processes as those that are more obviously ad hoc, and vary (a) from one microsecond to the next within a given instantiation, (b) from one instantiation to the next within an individual, and (c) from person to person and group to group as a function of people’s experiential history.
  • Casasanto, D. (2011). Bodily relativity: The body-specificity of language and thought. In L. Carlson, C. Holscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Meeting of the Cognitive Science Society (pp. 1258-1259). Austin, TX: Cognitive Science Society.
  • Casasanto, D. (2011). Different bodies, different minds: The body-specificity of language and thought. Current Directions in Psychological Science, 20, 378-383. doi:10.1177/0963721411422058.

    Abstract

    Do people with different kinds of bodies think differently? According to the body-specificity hypothesis (Casasanto, 2009), they should. In this article, I review evidence that right- and left-handers, who perform actions in systematically different ways, use correspondingly different areas of the brain for imagining actions and representing the meanings of action verbs. Beyond concrete actions, the way people use their hands also influences the way they represent abstract ideas with positive and negative emotional valence like “goodness,” “honesty,” and “intelligence,” and how they communicate about them in spontaneous speech and gesture. Changing how people use their right and left hands can cause them to think differently, suggesting that motoric differences between right- and left-handers are not merely correlated with cognitive differences. Body-specific patterns of motor experience shape the way we think, communicate, and make decisions.
  • Casasanto, D., & Chrysikou, E. G. (2011). When left is "Right": Motor fluency shapes abstract concepts. Psychological Science, 22, 419-422. doi:10.1177/0956797611401755.

    Abstract

    Right- and left-handers implicitly associate positive ideas like "goodness" and "honesty" more strongly with their dominant side of space, the side on which they can act more fluently, and negative ideas more strongly with their nondominant side. Here we show that right-handers’ tendency to associate "good" with "right" and "bad" with "left" can be reversed as a result of both long- and short-term changes in motor fluency. Among patients who were right-handed prior to unilateral stroke, those with disabled left hands associated "good" with "right," but those with disabled right hands associated "good" with "left," as natural left-handers do. A similar pattern was found in healthy right-handers whose right or left hand was temporarily handicapped in the laboratory. Even a few minutes of acting more fluently with the left hand can change right-handers’ implicit associations between space and emotional valence, causing a reversal of their usual judgments. Motor experience plays a causal role in shaping abstract thought.
  • Casasanto, D., & De Bruin, A. (2011). Word Up! Directed motor action improves word learning [Abstract]. In L. Carlson, C. Hölscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Conference of the Cognitive Science Society (pp. 1902). Austin, TX: Cognitive Science Society.

    Abstract

    Can simple motor actions help people expand their vocabulary? Here we show that word learning depends on where students place their flash cards after studying them. In Experiment 1, participants learned the definitions of ”alien words” with positive or negative emotional valence. After studying each card, they placed it in one of two boxes (top or bottom), according to its valence. Participants who were instructed to place positive cards in the top box, consistent with Good is Up metaphors, scored about 10.
  • Chen, A., & Lai, V. T. (2011). Comb or coat: The role of intonation in online reference resolution in a second language. In W. Zonneveld, & H. Quené (Eds.), Sound and Sounds. Studies presented to M.E.H. (Bert) Schouten on the occasion of his 65th birthday (pp. 57-68). Utrecht: UiL OTS.

    Abstract

    In spoken sentence processing, listeners do not wait until the end of a sentence to decipher what message is conveyed. Rather, they make predictions on the most plausible interpretation at every possible point in the auditory signal on the basis of all kinds of linguistic information (e.g., Eberhard et al. 1995; Altmann and Kamide 1999, 2007). Intonation is one such kind of linguistic information that is efficiently used in spoken sentence processing. The evidence comes primarily from recent work on online reference resolution conducted in the visual-world eyetracking paradigm (e.g., Tanenhaus et al. 1995). In this paradigm, listeners are shown a visual scene containing a number of objects and listen to one or two short sentences about the scene. They are asked to either inspect the visual scene while listening or to carry out the action depicted in the sentence(s) (e.g., 'Touch the blue square'). Listeners' eye movements directed to each object in the scene are monitored and time-locked to pre-defined time points in the auditory stimulus. Their predictions on the upcoming referent, and the sources for those predictions in the auditory signal, are examined by analysing fixations to the relevant objects in the visual scene before the acoustic information on the referent is available.
  • Chu, M., & Kita, S. (2011). Microgenesis of gestures during mental rotation tasks recapitulates ontogenesis. In G. Stam, & M. Ishino (Eds.), Integrating gestures: The interdisciplinary nature of gesture (pp. 267-276). Amsterdam: John Benjamins.

    Abstract

    People spontaneously produce gestures when they solve problems or explain their solutions to a problem. In this chapter, we review and discuss evidence on the role of representational gestures in problem solving. The focus is on our recent experiments (Chu & Kita, 2008), in which we used Shepard-Metzler-type mental rotation tasks to investigate how spontaneous gestures revealed the development of problem-solving strategy over the course of the experiment and what role gesture played in the development process. We found that when solving novel problems regarding the physical world, adults go through similar symbolic distancing (Werner & Kaplan, 1963) and internalization (Piaget, 1968) processes as those that occur during young children’s cognitive development, and that gesture facilitates such processes.
  • Cleary, R. A., Poliakoff, E., Galpin, A., Dick, J. P., & Holler, J. (2011). An investigation of co-speech gesture production during action description in Parkinson’s disease. Parkinsonism & Related Disorders, 17, 753-756. doi:10.1016/j.parkreldis.2011.08.001.

    Abstract

    Methods: The present study provides a systematic analysis of co-speech gestures which spontaneously accompany the description of actions in a group of PD patients (N = 23, Hoehn and Yahr Stage III or less) and age-matched healthy controls (N = 22). The analysis considers different co-speech gesture types, using established classification schemes from the field of gesture research, and focuses on the rate of these gestures as well as on their qualitative nature. In doing so, the analysis attempts to overcome several methodological shortcomings of research in this area.

    Results: Contrary to expectation, gesture rate was not significantly affected in our patient group with relatively mild PD, indicating that co-speech gestures could compensate for speech problems. However, while gesture rate seems unaffected, the qualitative precision of gestures representing actions was significantly reduced.

    Conclusions: This study demonstrates the feasibility of carrying out fine-grained, detailed analyses of gestures in PD and offers insights into an as yet neglected facet of communication in patients with PD. Based on the present findings, an important next step is the closer investigation of the qualitative changes in gesture (including different communicative situations) and an analysis of the heterogeneity in co-speech gesture production in PD.
  • Davids, N., Segers, E., Van den Brink, D., Mitterer, H., van Balkom, H., Hagoort, P., & Verhoeven, L. (2011). The nature of auditory discrimination problems in children with specific language impairment: An MMN study. Neuropsychologia, 49, 19-28. doi:10.1016/j.neuropsychologia.2010.11.001.

    Abstract

    Many children with Specific Language Impairment (SLI) show impairments in discriminating auditorily presented stimuli. The present study investigates whether these discrimination problems are speech specific or of a general auditory nature. This was studied by using a linguistic and a nonlinguistic contrast that were matched for acoustic complexity in an active behavioral task and a passive ERP paradigm, known to elicit the mismatch negativity (MMN). In addition, attention skills and a variety of language skills were measured. Participants were 25 five-year-old Dutch children with SLI having receptive as well as productive language problems and 25 control children with typical speech and language development. At the behavioral level, the SLI group was impaired in discriminating the linguistic contrast as compared to the control group, while both groups were unable to distinguish the non-linguistic contrast. Moreover, the SLI group tended to have impaired attention skills, which correlated with performance on most of the language tests. At the neural level, the SLI group, in contrast to the control group, did not show an MMN in response to either the linguistic or the nonlinguistic contrast. The MMN data are consistent with an account that relates the symptoms in children with SLI to non-speech processing difficulties.
  • Dediu, D. (2011). A Bayesian phylogenetic approach to estimating the stability of linguistic features and the genetic biasing of tone. Proceedings of the Royal Society of London/B, 278(1704), 474-479. doi:10.1098/rspb.2010.1595.

    Abstract

    Language is a hallmark of our species and understanding linguistic diversity is an area of major interest. Genetic factors influencing the cultural transmission of language provide a powerful and elegant explanation for aspects of the present day linguistic diversity and a window into the emergence and evolution of language. In particular, it has recently been proposed that linguistic tone—the usage of voice pitch to convey lexical and grammatical meaning—is biased by two genes involved in brain growth and development, ASPM and Microcephalin. This hypothesis predicts that tone is a stable characteristic of language because of its ‘genetic anchoring’. The present paper tests this prediction using a Bayesian phylogenetic framework applied to a large set of linguistic features and language families, using multiple software implementations, data codings, stability estimations, linguistic classifications and outgroup choices. The results of these different methods and datasets show a large agreement, suggesting that this approach produces reliable estimates of the stability of linguistic data. Moreover, linguistic tone is found to be stable across methods and datasets, providing suggestive support for the hypothesis of genetic influences on its distribution.
  • Dolscheid, S., Shayan, S., Majid, A., & Casasanto, D. (2011). The thickness of musical pitch: Psychophysical evidence for the Whorfian hypothesis. In L. Carlson, C. Hölscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Conference of the Cognitive Science Society (pp. 537-542). Austin, TX: Cognitive Science Society.
  • Dufau, S., Duñabeitia, J. A., Moret-Tatay, C., McGonigal, A., Peeters, D., Alario, F.-X., Balota, D. A., Brysbaert, M., Carreiras, M., Ferrand, L., Ktori, M., Perea, M., Rastle, K., Sasburg, O., Yap, M. J., Ziegler, J. C., & Grainger, J. (2011). Smart phone, smart science: How the use of smartphones can revolutionize research in cognitive science. PLoS One, 6(9), e24974. doi:10.1371/journal.pone.0024974.

    Abstract

    Investigating human cognitive faculties such as language, attention, and memory most often relies on testing small and homogeneous groups of volunteers coming to research facilities where they are asked to participate in behavioral experiments. We show that this limitation and sampling bias can be overcome by using smartphone technology to collect data in cognitive science experiments from thousands of subjects from all over the world. This mass coordinated use of smartphones creates a novel and powerful scientific "instrument" that yields the data necessary to test universal theories of cognition. This increase in power represents a potential revolution in cognitive science.
  • Fitz, H., Chang, F., & Christiansen, M. H. (2011). A connectionist account of the acquisition and processing of relative clauses. In E. Kidd (Ed.), The acquisition of relative clauses. Processing, typology and function (pp. 39-60). Amsterdam: Benjamins.

    Abstract

    Relative clause processing depends on the grammatical role of the head noun in the subordinate clause. This has traditionally been explained in terms of cognitive limitations. We suggest that structure-related processing differences arise from differences in experience with these structures. We present a connectionist model which learns to produce utterances with relative clauses from exposure to message-sentence pairs. The model shows how various factors such as frequent subsequences, structural variations, and meaning conspire to create differences in the processing of these structures. The predictions of this learning-based account have been confirmed in behavioral studies with adults. This work shows that structural regularities that govern relative clause processing can be explained within a usage-based approach to recursion.
  • Flecken, M. (2011). Assessing bilingual attainment: macrostructural planning in narratives. International Journal of Bilingualism, 15(2), 164-186. doi:10.1177/1367006910381187.

    Abstract

    The present study addresses questions concerning bilinguals’ attainment in the two languages by investigating the extent to which early bilinguals manage to apply the information structure required in each language when producing a complex text. In re-narrating the content of a film, speakers have to break down the perceived series of dynamic situations and structure relevant information into units that are suited for linguistic expression. The analysis builds on typological studies of Germanic and Romance languages which investigate the role of grammaticized concepts in determining core features in information structure. It takes a global perspective in that it focuses on factors that determine information selection and information structure in macrostructural terms for the text as a whole (factors driving information selection, the temporal frame used to locate events on the time line, and the means used in reference management). A first comparison focuses on Dutch and German monolingual native speakers and shows that despite overall typological similarities, there are subtle though systematic differences between the two languages in the aforementioned areas of information structure. The analyses of the bilinguals focus on their narratives in both languages and compare the patterns found to those in the monolingual narratives. Findings show that the method provides insights into the individual bilingual’s attainment in the two languages, identifying balanced levels of attainment, patterns showing higher conformity with one of the languages, or bilingual-specific patterns of performance.
  • Flecken, M. (2011). What native speaker judgments tell us about the grammaticalization of a progressive aspectual marker in Dutch. Linguistics, 49(3), 479-524. doi:10.1515/LING.2011.015.

    Abstract

    This paper focuses on native speaker judgments of a construction in Dutch that functions as a progressive aspectual marker (aan het X zijn, referred to as aan het-construction) and represents an event as in progression at the time of speech. The method was chosen in order to investigate how native speakers assess the scope and conditions of use of a construction which is in the process of grammaticalization. It allows for the inclusion of a large group of participants of different age groups and an investigation of potential age-related differences. The study systematically covers a range of temporal variables that were shown to be relevant in elicitation and corpus-based studies on the grammaticalization of progressive aspect constructions. The results provide insights into the selectional preferences and constraints of the aan het-construction in contemporary Dutch, as judged by native speakers, and the extent to which they correlate with production tasks.
  • Folia, V., Forkstam, C., Ingvar, M., Hagoort, P., & Petersson, K. M. (2011). Implicit artificial syntax processing: Genes, preference, and bounded recursion. Biolinguistics, 5(1/2), 105-132.

    Abstract

    The first objective of this study was to compare the brain network engaged by preference classification and the standard grammaticality classification after implicit artificial syntax acquisition by re-analyzing previously reported event-related fMRI data. The results show that preference and grammaticality classification engage virtually identical brain networks, including Broca’s region, consistent with previous behavioral findings. Moreover, the results showed that the effects related to artificial syntax in Broca’s region were essentially the same when masked with variability related to natural syntax processing in the same participants. The second objective was to explore CNTNAP2-related effects in implicit artificial syntax learning by analyzing behavioral and event-related fMRI data from a subsample. The CNTNAP2 gene has been linked to specific language impairment and is controlled by the FOXP2 transcription factor. CNTNAP2 is expressed in language related brain networks in the developing human brain and the FOXP2–CNTNAP2 pathway provides a mechanistic link between clinically distinct syndromes involving disrupted language. Finally, we discuss the implication of taking natural language to be a neurobiological system in terms of bounded recursion and suggest that the left inferior frontal region is a generic on-line sequence processor that unifies information from various sources in an incremental and recursive manner.
  • De La Fuente, J., Casasanto, D., Román, A., & Santiago, J. (2011). Searching for cultural influences on the body-specific association of preferred hand and emotional valence. In L. Carlson, C. Holscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Meeting of the Cognitive Science Society (pp. 2616-2620). Austin, TX: Cognitive Science Society.
  • Habets, B., Kita, S., Shao, Z., Ozyurek, A., & Hagoort, P. (2011). The role of synchrony and ambiguity in speech–gesture integration during comprehension. Journal of Cognitive Neuroscience, 23, 1845-1854. doi:10.1162/jocn.2010.21462.

    Abstract

    During face-to-face communication, one not only hears speech but also sees a speaker's communicative hand movements. It has been shown that such hand gestures play an important role in communication, with the two modalities influencing each other's interpretation. A gesture typically overlaps in time with coexpressive speech, but the gesture is often initiated before (but not after) the coexpressive speech. The present ERP study investigated what degree of asynchrony between speech and gesture onsets is optimal for semantic integration of the concurrent gesture and speech. Videos of a person gesturing were combined with speech segments that were either semantically congruent or incongruent with the gesture. Although gesture and speech always overlapped in time, gesture and speech were presented with three different degrees of asynchrony. In the SOA 0 condition, the gesture onset and the speech onset were simultaneous. In the SOA 160 and 360 conditions, speech was delayed by 160 and 360 msec, respectively. ERPs time locked to speech onset showed a significant difference between semantically congruent versus incongruent gesture–speech combinations on the N400 for the SOA 0 and 160 conditions. No significant difference was found for the SOA 360 condition. These results imply that speech and gesture are integrated most efficiently when the differences in onsets do not exceed a certain time span, because iconic gestures need speech to be disambiguated in a way relevant to the speech context.
  • Hagoort, P. (2011). The binding problem for language, and its consequences for the neurocognition of comprehension. In E. A. Gibson, & N. J. Pearlmutter (Eds.), The processing and acquisition of reference (pp. 403-436). Cambridge, MA: MIT Press.
  • Hagoort, P. (2011). The neuronal infrastructure for unification at multiple levels. In G. Gaskell, & P. Zwitserlood (Eds.), Lexical representation: A multidisciplinary approach (pp. 231-242). Berlin: De Gruyter Mouton.
  • Harbusch, K., & Kempen, G. (2011). Automatic online writing support for L2 learners of German through output monitoring by a natural-language paraphrase generator. In M. Levy, F. Blin, C. Bradin Siskin, & O. Takeuchi (Eds.), WorldCALL: International perspectives on computer-assisted language learning (pp. 128-143). New York: Routledge.

    Abstract

    Students who are learning to write in a foreign language often want feedback on the grammatical quality of the sentences they produce. The usual NLP approach to this problem is based on parsing student-generated text. Here, we propose a generation-based approach aiming at preventing errors ("scaffolding"). In our ICALL system, the student constructs sentences by composing syntactic trees out of lexically anchored "treelets" via a graphical drag & drop user interface. A natural-language generator computes all possible grammatically well-formed sentences entailed by the student-composed tree. It provides positive feedback if the student-composed tree belongs to the well-formed set, and negative feedback otherwise. If so requested by the student, it can substantiate the positive or negative feedback based on a comparison between the student-composed tree and its own trees (informative feedback on demand). In case of negative feedback, the system refuses to build the structure attempted by the student. Frequently occurring errors are handled in terms of "malrules." The system we describe is a prototype (implemented in Java and C++) which can be parameterized with respect to L1 and L2, the size of the lexicon, and the level of detail of the visually presented grammatical structures.
  • Haun, D. B. M., Rapold, C. J., Janzen, G., & Levinson, S. C. (2011). Plasticity of human spatial memory: Spatial language and cognition covary across cultures. Cognition, 119, 70-80. doi:10.1016/j.cognition.2010.12.009.

    Abstract

    The present paper explores cross-cultural variation in spatial cognition by comparing spatial reconstruction tasks by Dutch and Namibian elementary school children. These two communities differ in the way they predominantly express spatial relations in language. Four experiments investigate cognitive strategy preferences across different levels of task-complexity and instruction. Data show a correlation between dominant linguistic spatial frames of reference and performance patterns in non-linguistic spatial memory tasks. This correlation is shown to be stable across an increase of complexity in the spatial array. When instructed to use their respective non-habitual cognitive strategy, participants were not easily able to switch between strategies and their attempts to do so impaired their performance. These results indicate a difference not only in preference but also in competence and suggest that spatial language and non-linguistic preferences and competences in spatial cognition are systematically aligned across human populations.

  • Holler, J., & Wilkin, K. (2011). Co-speech gesture mimicry in the process of collaborative referring during face-to-face dialogue. Journal of Nonverbal Behavior, 35, 133-153. doi:10.1007/s10919-011-0105-6.

    Abstract

    Mimicry has been observed regarding a range of nonverbal behaviors, but only recently have researchers started to investigate mimicry in co-speech gestures. These gestures are considered to be crucially different from other aspects of nonverbal behavior due to their tight link with speech. This study provides evidence of mimicry in co-speech gestures in face-to-face dialogue, the most common forum of everyday talk. In addition, it offers an analysis of the functions that mimicked co-speech gestures fulfill in the collaborative process of creating a mutually shared understanding of referring expressions. The implications bear on theories of gesture production, research on grounding, and the mechanisms underlying behavioral mimicry.
  • Holler, J., Tutton, M., & Wilkin, K. (2011). Co-speech gestures in the process of meaning coordination. In Proceedings of the 2nd GESPIN - Gesture & Speech in Interaction Conference, Bielefeld, 5-7 Sep 2011.

    Abstract

    This study uses a classical referential communication task to investigate the role of co-speech gestures in the process of coordination. The study manipulates both the common ground between the interlocutors and the visibility of the gestures they use. The findings show that co-speech gestures are an integral part of the referential utterances speakers produced with regard to both initial and repeated references, and that the availability of gestures appears to impact on interlocutors’ referential coordination. The results are discussed with regard to past research on common ground as well as theories of gesture production.
  • Holler, J., & Wilkin, K. (2011). An experimental investigation of how addressee feedback affects co-speech gestures accompanying speakers’ responses. Journal of Pragmatics, 43, 3522-3536. doi:10.1016/j.pragma.2011.08.002.

    Abstract

    There is evidence that co-speech gestures communicate information to addressees and that they are often communicatively intended. However, we still know comparatively little about the role of gestures in the actual process of communication. The present study offers a systematic investigation of speakers’ gesture use before and after addressee feedback. The findings show that when speakers responded to addressees’ feedback, gesture rate remained constant when this feedback encouraged clarification, elaboration or correction. However, speakers gestured proportionally less often after feedback when providing confirmatory responses. That is, speakers may not be drawing on gesture in response to addressee feedback per se, but particularly with responses that enhance addressees’ understanding. Further, the large majority of speakers’ gestures changed in their form. They tended to be more precise, larger, or more visually prominent after feedback. Some changes in gesture viewpoint were also observed. In addition, we found that speakers used deixis in speech and gaze to increase the salience of gestures occurring in response to feedback. Speakers appear to conceive of gesture as a useful modality in redesigning utterances to make them more accessible to addressees. The findings further our understanding of recipient design and co-speech gestures in face-to-face dialogue.
  • Holler, J. (2011). Verhaltenskoordination, Mimikry und sprachbegleitende Gestik in der Interaktion. Psychotherapie - Wissenschaft: Special issue: "Sieh mal, wer da spricht" - der Koerper in der Psychotherapie Teil IV, 1(1), 56-64. Retrieved from http://www.psychotherapie-wissenschaft.info/index.php/psy-wis/article/view/13/65.
  • Jasmin, K., & Casasanto, D. (2011). The QWERTY effect: How stereo-typing shapes the mental lexicon. In L. Carlson, C. Holscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Conference of the Cognitive Science Society. Austin, TX: Cognitive Science Society.
  • Johnson, J. S., Sutterer, D. W., Acheson, D. J., Lewis-Peacock, J. A., & Postle, B. R. (2011). Increased alpha-band power during the retention of shapes and shape-location associations in visual short-term memory. Frontiers in Psychology, 2(128), 1-9. doi:10.3389/fpsyg.2011.00128.

    Abstract

    Studies exploring the role of neural oscillations in cognition have revealed sustained increases in alpha-band (∼8–14 Hz) power during the delay period of delayed-recognition short-term memory tasks. These increases have been proposed to reflect the inhibition, for example, of cortical areas representing task-irrelevant information, or of potentially interfering representations from previous trials. Another possibility, however, is that elevated delay-period alpha-band power (DPABP) reflects the selection and maintenance of information, rather than, or in addition to, the inhibition of task-irrelevant information. In the present study, we explored these possibilities using a delayed-recognition paradigm in which the presence and task relevance of shape information was systematically manipulated across trial blocks and electroencephalography was used to measure alpha-band power. In the first trial block, participants remembered locations marked by identical black circles. The second block featured the same instructions, but locations were marked by unique shapes. The third block featured the same stimulus presentation as the second, but with pretrial instructions indicating, on a trial-by-trial basis, whether memory for shape or location was required, the other dimension being irrelevant. In the final block, participants remembered the unique pairing of shape and location for each stimulus. Results revealed minimal DPABP in each of the location-memory conditions, whether locations were marked with identical circles or with unique task-irrelevant shapes. In contrast, alpha-band power increases were observed in both the shape-memory condition, in which location was task irrelevant, and in the critical final condition, in which both shape and location were task relevant. These results provide support for the proposal that alpha-band oscillations reflect the retention of shape information and/or shape–location associations in short-term memory.
  • Junge, C. (2011). The relevance of early word recognition: Insights from the infant brain. PhD Thesis, Radboud University Nijmegen, Nijmegen.

    Abstract

    Babies understand words before they can say them. This stage has received little attention because it is difficult to observe. Caroline Junge investigated the skills needed for learning one's first words: concept recognition, word recognition, and linking word to meaning. To this end, she studied the brain potentials of the infant brain while infants listened to words. Junge establishes that nine-month-old babies already show word comprehension, which is much earlier than previously known. When babies heard a word that did not match the picture they were seeing, they showed an N400 effect, a classic brain potential. Earlier German research had suggested that twelve-month-olds do not yet show this effect because their brains were thought not to be mature enough; Junge's research refutes this. She also shows that babies' ability to recognize words within sentences is important for their later language development, which may eventually lead to new therapies for language disorders.
  • Kelly, S., Byrne, K., & Holler, J. (2011). Raising the stakes of communication: Evidence for increased gesture production as predicted by the GSA framework. Information, 2(4), 579-593. doi:10.3390/info2040579.

    Abstract

    Theorists of language have argued that co-speech hand gestures are an intentional part of social communication. The present study provides evidence for these claims by showing that speakers adjust their gesture use according to its perceived relevance to the audience. Participants were asked to read about items that were and were not useful in a wilderness survival scenario, under the pretense that they would then explain (on camera) what they had learned to one of two different audiences. For one audience (a group of college students in a dormitory orientation activity), the stakes of successful communication were low; for the other audience (a group of students preparing for a rugged camping trip in the mountains), the stakes were high. In their explanations to the camera, participants in the high-stakes condition produced three times as many representational gestures, and spent three times as much time gesturing, as participants in the low-stakes condition. This study extends previous research by showing that the anticipated consequences of one’s communication—namely, the degree to which information may be useful to an intended recipient—influence speakers’ use of gesture.
  • Koenigs, M., Acheson, D. J., Barbey, A. K., Soloman, J., Postle, B. R., & Grafman, J. (2011). Areas of left perisylvian cortex mediate auditory-verbal short-term memory. Neuropsychologia, 49(13), 3612-3619. doi:10.1016/j.neuropsychologia.2011.09.013.

    Abstract

    A contentious issue in memory research is whether verbal short-term memory (STM) depends on a neural system specifically dedicated to the temporary maintenance of information, or instead relies on the same brain areas subserving the comprehension and production of language. In this study, we examined a large sample of adults with acquired brain lesions to identify the critical neural substrates underlying verbal STM and the relationship between verbal STM and language processing abilities. We found that patients with damage to selective regions of left perisylvian cortex – specifically the inferior frontal and posterior temporal sectors – were impaired on auditory–verbal STM performance (digit span), as well as on tests requiring the production and/or comprehension of language. These results support the conclusion that verbal STM and language processing are mediated by the same areas of left perisylvian cortex.

  • Kokal, I., Engel, A., Kirschner, S., & Keysers, C. (2011). Synchronized drumming enhances activity in the caudate and facilitates prosocial commitment - If the rhythm comes easily. PLoS One, 6(11), e27272. doi:10.1371/journal.pone.0027272.

    Abstract

    Why does chanting, drumming or dancing together make people feel united? Here we investigate the neural mechanisms underlying interpersonal synchrony and its subsequent effects on prosocial behavior among synchronized individuals. We hypothesized that areas of the brain associated with the processing of reward would be active when individuals experience synchrony during drumming, and that these reward signals would increase prosocial behavior toward the synchronous drum partner. Eighteen female non-musicians were scanned with functional magnetic resonance imaging while they drummed a rhythm, in alternating blocks, with two different experimenters: one drumming in-synchrony and the other out-of-synchrony relative to the participant. In the last scanning part, which served as the experimental manipulation for the following prosocial behavioral test, one of the experimenters drummed with one half of the participants in-synchrony and with the other half out-of-synchrony. After scanning, this experimenter "accidentally" dropped eight pencils, and the number of pencils collected by the participants was used as a measure of prosocial commitment. Results revealed that participants who mastered the novel rhythm easily before scanning showed increased activity in the caudate during synchronous drumming. The same area also responded to monetary reward in a localizer task with the same participants. The activity in the caudate during synchronous drumming also predicted the number of pencils the participants later collected to help the synchronous experimenter of the manipulation run. In addition, participants collected more pencils to help the experimenter when she had drummed in-synchrony than out-of-synchrony during the manipulation run. By showing an overlap in activated areas during synchronized drumming and monetary reward, our findings suggest that interpersonal synchrony is related to the brain's reward system.
  • Lai, V. T., Hagoort, P., & Casasanto, D. (2011). Affective and non-affective meaning in words and pictures. In L. Carlson, C. Holscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Meeting of the Cognitive Science Society (pp. 390-395). Austin, TX: Cognitive Science Society.
  • Menenti, L., Gierhan, S., Segaert, K., & Hagoort, P. (2011). Shared language: Overlap and segregation of the neuronal infrastructure for speaking and listening revealed by functional MRI. Psychological Science, 22, 1173-1182. doi:10.1177/0956797611418347.

    Abstract

    Whether the brain’s speech-production system is also involved in speech comprehension is a topic of much debate. Research has focused on whether motor areas are involved in listening, but overlap between speaking and listening might occur not only at primary sensory and motor levels, but also at linguistic levels (where semantic, lexical, and syntactic processes occur). Using functional MRI adaptation during speech comprehension and production, we found that the brain areas involved in semantic, lexical, and syntactic processing are mostly the same for speaking and for listening. Effects of primary processing load (indicative of sensory and motor processes) overlapped in auditory cortex and left inferior frontal cortex, but not in motor cortex, where processing load affected activity only in speaking. These results indicate that the linguistic parts of the language system are used for both speaking and listening, but that the motor system does not seem to provide a crucial contribution to listening.
  • Ozyurek, A. (2011). Language in our hands: The role of the body in language, cognition and communication [Inaugural lecture]. Nijmegen: Radboud University Nijmegen.

    Abstract

    Even though most studies of language have focused on the speech channel and/or viewed language as an amodal abstract system, there is growing evidence on the role our bodily actions/perceptions play in language and communication. In this context, Özyürek discusses what our meaningful visible bodily actions reveal about our language capacity. Conducting cross-linguistic, behavioral, and neurobiological research, she shows that co-speech gestures reflect the imagistic, iconic aspects of events talked about and at the same time interact with language production and comprehension processes. Sign languages can also be characterized as having an abstract system of linguistic categories as well as using iconicity in several aspects of language structure and its processing. Studying language multimodally reveals how grounded language is in our visible bodily actions and opens up new lines of research to study language in its situated, natural face-to-face context.
  • Ozyurek, A., & Perniss, P. M. (2011). Event representations in signed languages. In J. Bohnemeyer, & E. Pederson (Eds.), Event representations in language and cognition (pp. 84-107). New York: Cambridge University Press.
  • Perniss, P. M., Zwitserlood, I., & Ozyurek, A. (2011). Does space structure spatial language? Linguistic encoding of space in sign languages. In L. Carlson, C. Holscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Meeting of the Cognitive Science Society (pp. 1595-1600). Austin, TX: Cognitive Science Society.
  • Petersson, K. M., Forkstam, C., Inácio, F., Bramão, I., Araújo, S., Souza, A. C., Silva, S., & Castro, S. L. (2011). Artificial language learning. In A. Trevisan, & V. Wannmacher Pereira (Eds.), Alfabetização e cognição (pp. 71-90). Porto Alegre, Brasil: Edipucrs.

    Abstract

    In this article we briefly review current behavioral and functional neuroimaging research on artificial language learning in children and adults. In the final section, we discuss a possible association between dyslexia and implicit learning. Recent results suggest that a deficit in implicit learning may contribute to the reading and writing difficulties observed in dyslexic individuals.
  • Pijnacker, J., Geurts, B., Van Lambalgen, M., Buitelaar, J., & Hagoort, P. (2011). Reasoning with exceptions: An event-related brain potentials study. Journal of Cognitive Neuroscience, 23, 471-480. doi:10.1162/jocn.2009.21360.

    Abstract

    Defeasible inferences are inferences that can be revised in the light of new information. Although defeasible inferences are pervasive in everyday communication, little is known about how and when they are processed by the brain. This study examined the electrophysiological signature of defeasible reasoning using a modified version of the suppression task. Participants were presented with conditional inferences (of the type “if p, then q; p, therefore q”) that were preceded by a congruent or a disabling context. The disabling context contained a possible exception or precondition that prevented people from drawing the conclusion. Acceptability of the conclusion was indeed lower in the disabling condition compared to the congruent condition. Further, we found a large sustained negativity at the conclusion of the disabling condition relative to the congruent condition, which started around 250 msec and was persistent throughout the entire epoch. Possible accounts for the observed effect are discussed.
  • Reis, A., Faísca, L., & Petersson, K. M. (2011). Literacia: Modelo para o estudo dos efeitos de uma aprendizagem específica na cognição e nas suas bases cerebrais. In A. Trevisan, J. J. Mouriño Mosquera, & V. Wannmacher Pereira (Eds.), Alfabetização e cognição (pp. 23-36). Porto Alegre, Brasil: Edipucrs.

    Abstract

    The acquisition of reading and writing skills can be viewed as a formal process of cultural transmission in which neurobiological and cultural factors interact. The systematic training required to learn to read and write may produce quantitative and qualitative changes at both the cognitive level and the level of brain organization. Studying illiterate and literate subjects thus offers an opportunity to investigate the effects of a specific kind of learning on cognitive development and its brain bases. In this paper, we review a set of behavioral and brain-imaging studies indicating that literacy has an impact on our cognitive functions and on brain organization. More specifically, we discuss differences between literate and illiterate participants in verbal and non-verbal cognitive domains, suggesting that cognitive architecture is shaped, in part, by learning to read and write. Functional and structural neuroimaging data also indicate that the acquisition of an alphabetic orthography affects the organization and lateralization of cognitive functions.
  • Scheeringa, R., Fries, P., Petersson, K. M., Oostenveld, R., Grothe, I., Norris, D. G., Hagoort, P., & Bastiaansen, M. C. M. (2011). Neuronal dynamics underlying high- and low- frequency EEG oscillations contribute independently to the human BOLD signal. Neuron, 69, 572-583. doi:10.1016/j.neuron.2010.11.044.

    Abstract

    Work on animals indicates that BOLD is preferentially sensitive to local field potentials, and that it correlates most strongly with gamma band neuronal synchronization. Here we investigate how the BOLD signal in humans performing a cognitive task is related to neuronal synchronization across different frequency bands. We simultaneously recorded EEG and BOLD while subjects engaged in a visual attention task known to induce sustained changes in neuronal synchronization across a wide range of frequencies. Trial-by-trial BOLD fluctuations correlated positively with trial-by-trial fluctuations in high EEG gamma power (60–80 Hz) and negatively with alpha and beta power. Gamma power on the one hand, and alpha and beta power on the other hand, independently contributed to explaining BOLD variance. These results indicate that the BOLD-gamma coupling observed in animals can be extrapolated to humans performing a task and that the neuronal dynamics underlying high- and low-frequency synchronization contribute independently to the BOLD signal.

  • Scheeringa, R. (2011). On the relation between oscillatory EEG activity and the BOLD signal. PhD Thesis, Radboud University Nijmegen, Nijmegen.

    Abstract

    Functional Magnetic Resonance Imaging (fMRI) and Electroencephalography (EEG) are the two techniques most often used to study the working brain. With the first technique, we use the MRI machine to measure with high precision where in the brain the supply of oxygenated blood increases as a result of increased neural activity. The temporal resolution of this measure, however, is limited to a few seconds. With EEG, we measure the electrical activity of the brain with millisecond precision by placing electrodes on the skin of the head. We can think of the EEG signal as consisting of multiple superimposed frequencies that vary in strength over time and with the performance of a cognitive task. Since we measure EEG at the level of the scalp, it is difficult to know where in the brain the signals originate. For about a decade it has been possible to measure fMRI and EEG at the same time, which potentially enables us to combine the superior spatial resolution of fMRI with the superior temporal resolution of EEG. To make this possible, we need to understand how the EEG signal is related to the fMRI signal, which is the central theme of this thesis. The main finding of this thesis is that increases in the strength of EEG frequencies below 30 Hz are related to a decrease in fMRI signal strength, while increases in the strength of frequencies above 40 Hz are related to an increase in fMRI signal strength. Changes in the strength of low EEG frequencies are, however, not coupled to changes in high frequencies. Changes in the strength of low and high EEG frequencies therefore contribute independently to changes in the fMRI signal.
  • Segaert, K., Menenti, L., Weber, K., & Hagoort, P. (2011). A paradox of syntactic priming: Why response tendencies show priming for passives, and response latencies show priming for actives. PLoS One, 6(10), e24209. doi:10.1371/journal.pone.0024209.

    Abstract

    Speakers tend to repeat syntactic structures across sentences, a phenomenon called syntactic priming. Although it has been suggested that repeating syntactic structures should result in speeded responses, previous research has focused on effects in response tendencies. We investigated syntactic priming effects simultaneously in response tendencies and response latencies for active and passive transitive sentences in a picture description task. In Experiment 1, there were priming effects in response tendencies for passives and in response latencies for actives. However, when participants' pre-existing preference for actives was altered in Experiment 2, syntactic priming occurred for both actives and passives in response tendencies as well as in response latencies. This is the first investigation of the effects of structure frequency on both response tendencies and latencies in syntactic priming. We discuss the implications of these data for current theories of syntactic processing.

    Supplementary material

    Segaert_2011_Supporting_Info.doc
  • Small, S. L., Hickok, G., Nusbaum, H. C., Blumstein, S., Coslett, H. B., Dell, G., Hagoort, P., Kutas, M., Marantz, A., Pylkkanen, L., Thompson-Schill, S., Watkins, K., & Wise, R. J. (2011). The neurobiology of language: Two years later [Editorial]. Brain and Language, 116(3), 103-104. doi:10.1016/j.bandl.2011.02.004.
  • Staum Casasanto, L., Gijssels, T., & Casasanto, D. (2011). The Reverse-Chameleon Effect: Negative social consequences of anatomical mimicry [Abstract]. In L. Carlson, C. Hölscher, & T. F. Shipley (Eds.), Proceedings of the 33rd Annual Conference of the Cognitive Science Society (pp. 1103). Austin, TX: Cognitive Science Society.

    Abstract

    Mirror mimicry has well-known consequences for the person being mimicked: it increases how positively they feel about the mimicker (the Chameleon Effect). Here we show that anatomical mimicry has the opposite social consequences: a Reverse-Chameleon Effect. To equate mirror and anatomical mimicry, we asked participants to have a face-to-face conversation with a digital human (VIRTUO), in a fully-immersive virtual environment. Participants’ spontaneous head movements were tracked, and VIRTUO mimicked them at a 2-second delay, either mirror-wise, anatomically, or not at all (instead enacting another participant’s movements). Participants who were mimicked mirror-wise rated their social interaction with VIRTUO to be significantly more positive than those who were mimicked anatomically. Participants who were not mimicked gave intermediate ratings. Beyond its practical implications, the Reverse-Chameleon Effect constrains theoretical accounts of how mimicry affects social perception.
  • Tesink, C. M. J. Y., Buitelaar, J. K., Petersson, K. M., Van der Gaag, R. J., Teunisse, J.-P., & Hagoort, P. (2011). Neural correlates of language comprehension in autism spectrum disorders: When language conflicts with world knowledge. Neuropsychologia, 49, 1095-1104. doi:10.1016/j.neuropsychologia.2011.01.018.

    Abstract

    In individuals with ASD, difficulties with language comprehension are most evident when higher-level semantic-pragmatic language processing is required, for instance when context has to be used to interpret the meaning of an utterance. Until now, it has been unclear at what level of processing and for what type of context these difficulties in language comprehension occur. Therefore, in the current fMRI study, we investigated the neural correlates of the integration of contextual information during auditory language comprehension in 24 adults with ASD and 24 matched control participants. Different levels of context processing were manipulated by using spoken sentences that were correct or contained either a semantic or world knowledge anomaly. Our findings demonstrated significant differences between the groups in inferior frontal cortex that were only present for sentences with a world knowledge anomaly. Relative to the ASD group, the control group showed significantly increased activation in left inferior frontal gyrus (LIFG) for sentences with a world knowledge anomaly compared to correct sentences. This effect possibly indicates reduced integrative capacities of the ASD group. Furthermore, world knowledge anomalies elicited significantly stronger activation in right inferior frontal gyrus (RIFG) in the control group compared to the ASD group. This additional RIFG activation probably reflects revision of the situation model after new, conflicting information. The lack of recruitment of RIFG is possibly related to difficulties with exception handling in the ASD group.

  • Van Leeuwen, T. (2011). How one can see what is not there: Neural mechanisms of grapheme-colour synaesthesia. PhD Thesis, Radboud University Nijmegen, Nijmegen.

    Abstract

    People with grapheme-colour synaesthesia experience colour for letters of the alphabet or digits; A can be red and B can be green. How can it be that people automatically see a colour when only black letters are printed on the paper? With brain scans (fMRI) I showed that (black) letters activate the colour area of the brain (V4) and also a brain area that is important for combining different types of information (SPL). We found that the location where synaesthetes subjectively experience their colours is related to the order in which these brain areas become active. Some synaesthetes see their colour ‘projected onto the letter’, similar to real colour experiences, and in this case colour area V4 becomes active first. If the colours appear like a strong association without a fixed location in space, SPL becomes active first, similar to what happens for normal memories. In a final experiment we showed that in synaesthetes, attention is captured very strongly by real colour, more strongly than in control participants. Perhaps this attention effect of colour can explain how letters and colours become coupled in synaesthetes.
  • Van Leeuwen, T. M., Den Ouden, H. E. M., & Hagoort, P. (2011). Effective connectivity determines the nature of subjective experience in grapheme-color synesthesia. Journal of Neuroscience, 31, 9879-9884. doi:10.1523/JNEUROSCI.0569-11.2011.

    Abstract

    Synesthesia provides an elegant model to investigate neural mechanisms underlying individual differences in subjective experience in humans. In grapheme–color synesthesia, written letters induce color sensations, accompanied by activation of color area V4. Competing hypotheses suggest that enhanced V4 activity during synesthesia is either induced by direct bottom-up cross-activation from grapheme processing areas within the fusiform gyrus, or indirectly via higher-order parietal areas. Synesthetes differ in the way synesthetic color is perceived: “projector” synesthetes experience color externally colocalized with a presented grapheme, whereas “associators” report an internally evoked association. Using dynamic causal modeling for fMRI, we show that V4 cross-activation during synesthesia was induced via a bottom-up pathway (within fusiform gyrus) in projector synesthetes, but via a top-down pathway (via parietal lobe) in associators. These findings show how altered coupling within the same network of active regions leads to differences in subjective experience. Our findings reconcile the two most influential cross-activation accounts of synesthesia.
  • Van der Linden, M. (2011). Experience-based cortical plasticity in object category representation. PhD Thesis, Radboud University Nijmegen, Nijmegen.

    Abstract

    Marieke van der Linden investigated the neural mechanisms underlying category formation in the human brain. The research in her thesis provides novel insights into how the brain learns, stores, and uses category knowledge, enabling humans to become skilled in categorization. The studies reveal the neural mechanisms through which perceptual as well as conceptual category knowledge is created and shaped by experience. The results clearly show that neuronal sensitivity to object features is affected by categorization training. These findings fill in a missing link between electrophysiological recordings from monkey cortex demonstrating learning-induced sharpening of neuronal selectivity and brain imaging data showing category-specific representations in the human brain. Moreover, she showed that it is specifically the features of an object that are relevant for its categorization that induce selectivity in neuronal populations. Category learning requires collaboration between many different brain areas. Together these can be seen as the neural correlates of the key components of categorization: discrimination and generalization. The occipitotemporal cortex represents those characteristic features of objects that define their category. The narrowly shape-tuned properties of this area enable fine-grained discrimination of perceptually similar objects. In addition, the superior temporal sulcus forms associations between members or properties (i.e., sound and shape) of a category. This allows the generalization of perceptually different but conceptually similar objects. Last but not least, the prefrontal cortex is involved in coding behaviourally relevant category information and thus enables the explicit retrieval of category membership.
  • Van Berkum, J. J. A. (2011). Zonder gevoel geen taal [Inaugural lecture].

    Abstract

    Research on language and communication has in the past focused far too much on language as a system for encoding messages, a kind of TCP/IP (a network protocol for communication between computers). That must change, argues Prof. Jos van Berkum, professor of Discourse, Cognition and Communication, in the inaugural lecture he will deliver at Utrecht University on 30 September. He calls for more research into the strong interweaving of language and emotion.
  • De Vries, M., Christiansen, M. H., & Petersson, K. M. (2011). Learning recursion: Multiple nested and crossed dependencies. Biolinguistics, 5(1/2), 010-035.

    Abstract

    Language acquisition in both natural and artificial language learning settings crucially depends on extracting information from sequence input. A shared sequence learning mechanism is thus assumed to underlie both natural and artificial language learning. A growing body of empirical evidence is consistent with this hypothesis. By means of artificial language learning experiments, we may therefore gain more insight into this shared mechanism. In this paper, we review empirical evidence from artificial language learning and computational modelling studies, as well as natural language data, and suggest that there are two key factors that help determine processing complexity in sequence learning, and thus in natural language processing. We propose that the specific ordering of non-adjacent dependencies (i.e., nested or crossed), as well as the number of non-adjacent dependencies to be resolved simultaneously (i.e., two or three), are important factors in probing the boundaries of human sequence learning, and thus also of natural language processing. The implications for theories of linguistic competence are discussed.
  • Wang, L. (2011). The influence of information structure on language comprehension: A neurocognitive perspective. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Wang, L., Bastiaansen, M. C. M., Yang, Y., & Hagoort, P. (2011). The influence of information structure on the depth of semantic processing: How focus and pitch accent determine the size of the N400 effect. Neuropsychologia, 49, 813-820. doi:10.1016/j.neuropsychologia.2010.12.035.

    Abstract

    To highlight relevant information in dialogues, both wh-question context and pitch accent in answers can be used, such that focused information gains more attention and is processed more elaborately. To evaluate the relative influence of context and pitch accent on the depth of semantic processing, we measured Event-Related Potentials (ERPs) to auditorily presented wh-question-answer pairs. A semantically incongruent word in the answer occurred either in focus or non-focus position as determined by the context, and this word was either accented or unaccented. Semantic incongruency elicited different N400 effects in different conditions. The largest N400 effect was found when the question-marked focus was accented, while the other three conditions elicited smaller N400 effects. The results suggest that context and accentuation interact. Thus accented focused words were processed more deeply than in conditions where focus and accentuation mismatched, or where the new information had no marking. In addition, there seem to be sex differences in the depth of semantic processing. For males, a significant N400 effect was observed only when the question-marked focus was accented; reduced N400 effects were found in the other dialogues. In contrast, females produced similar N400 effects in all the conditions. These results suggest that regardless of external cues, females tend to engage in more elaborate semantic processing than males.
  • Wilkin, K., & Holler, J. (2011). Speakers’ use of ‘action’ and ‘entity’ gestures with definite and indefinite references. In G. Stam, & M. Ishino (Eds.), Integrating gestures: The interdisciplinary nature of gesture (pp. 293-308). Amsterdam: John Benjamins.

    Abstract

    Common ground is an essential prerequisite for coordination in social interaction, including language use. When referring back to a referent in discourse, this referent is ‘given information’ and therefore in the interactants’ common ground. When a referent is being referred to for the first time, a speaker introduces ‘new information’. The analyses reported here are on gestures that accompany such references when they include definite and indefinite grammatical determiners. The main finding from these analyses is that referents referred to by definite and indefinite articles were equally often accompanied by gesture, but speakers tended to accompany definite references with gestures focusing on action information and indefinite references with gestures focusing on entity information. The findings suggest that speakers use speech and gesture together to design utterances appropriate for speakers with whom they share common ground.

  • Willems, R. M., Labruna, L., D'Esposito, M., Ivry, R., & Casasanto, D. (2011). A functional role for the motor system in language understanding: Evidence from Theta-Burst Transcranial Magnetic Stimulation. Psychological Science, 22, 849 -854. doi:10.1177/0956797611412387.

    Abstract

    Does language comprehension depend, in part, on neural systems for action? In previous studies, motor areas of the brain were activated when people read or listened to action verbs, but it remains unclear whether such activation is functionally relevant for comprehension. In the experiments reported here, we used off-line theta-burst transcranial magnetic stimulation to investigate whether a causal relationship exists between activity in premotor cortex and action-language understanding. Right-handed participants completed a lexical decision task, in which they read verbs describing manual actions typically performed with the dominant hand (e.g., “to throw,” “to write”) and verbs describing nonmanual actions (e.g., “to earn,” “to wander”). Responses to manual-action verbs (but not to nonmanual-action verbs) were faster after stimulation of the hand area in left premotor cortex than after stimulation of the hand area in right premotor cortex. These results suggest that premotor cortex has a functional role in action-language understanding.

    Supplementary material

    Supplementary materials Willems.pdf
  • Willems, R. M., Clevis, K., & Hagoort, P. (2011). Add a picture for suspense: Neural correlates of the interaction between language and visual information in the perception of fear. Social, Cognitive and Affective Neuroscience, 6, 404-416. doi:10.1093/scan/nsq050.

    Abstract

    We investigated how visual and linguistic information interact in the perception of emotion. We borrowed a phenomenon from film theory which states that presentation of an as such neutral visual scene intensifies the percept of fear or suspense induced by a different channel of information, such as language. Our main aim was to investigate how neutral visual scenes can enhance responses to fearful language content in parts of the brain involved in the perception of emotion. Healthy participants’ brain activity was measured (using functional magnetic resonance imaging) while they read fearful and less fearful sentences presented with or without a neutral visual scene. The main idea is that the visual scenes intensify the fearful content of the language by subtly implying and concretizing what is described in the sentence. Activation levels in the right anterior temporal pole were selectively increased when a neutral visual scene was paired with a fearful sentence, compared to reading the sentence alone, as well as to reading of non-fearful sentences presented with the same neutral scene. We conclude that the right anterior temporal pole serves a binding function of emotional information across domains such as visual and linguistic information.
  • Willems, R. M., Benn, Y., Hagoort, P., Tonia, I., & Varley, R. (2011). Communicating without a functioning language system: Implications for the role of language in mentalizing. Neuropsychologia, 49, 3130-3135. doi:10.1016/j.neuropsychologia.2011.07.023.

    Abstract

    A debated issue in the relationship between language and thought is how our linguistic abilities are involved in understanding the intentions of others (‘mentalizing’). The results of both theoretical and empirical work have been used to argue that linguistic, and more specifically, grammatical, abilities are crucial in representing the mental states of others. Here we contribute to this debate by investigating how damage to the language system influences the generation and understanding of intentional communicative behaviors. Four patients with pervasive language difficulties (severe global or agrammatic aphasia) engaged in an experimentally controlled non-verbal communication paradigm, which required signaling and understanding a communicative message. Despite their profound language problems they were able to engage in recipient design as well as intention recognition, showing similar indicators of mentalizing as have been observed in the neurologically healthy population. Our results show that aspects of the ability to communicate remain present even when core capacities of the language system are dysfunctional.
  • Willems, R. M. (2011). Re-appreciating the why of cognition: 35 years after Marr and Poggio. Frontiers in Psychology, 2, 244. doi:10.3389/fpsyg.2011.00244.

    Abstract

    Marr and Poggio’s levels of description are one of the most well-known theoretical constructs of twentieth century cognitive science. It entails that behavior can and should be considered at three different levels: computation, algorithm, and implementation. In this contribution the focus is on the computational level of description, the level that describes the “why” of cognition. I argue that the computational level should be taken as a starting point in devising experiments in cognitive (neuro)science. Instead, the starting point in empirical practice often is a focus on the stimulus or on some capacity of the cognitive system. The “why” of cognition tends to be ignored when designing research, and is not considered in subsequent inference from experimental results. The overall aim of this manuscript is to show how re-appreciation of the computational level of description as a starting point for experiments can lead to more informative experimentation.
  • Willems, R. M., & Casasanto, D. (2011). Flexibility in embodied language understanding. Frontiers in Psychology, 2, 116. doi:10.3389/fpsyg.2011.00116.

    Abstract

    Do people use sensori-motor cortices to understand language? Here we review neurocognitive studies of language comprehension in healthy adults and evaluate their possible contributions to theories of language in the brain. We start by sketching the minimal predictions that an embodied theory of language understanding makes for empirical research, and then survey studies that have been offered as evidence for embodied semantic representations. We explore four debated issues: first, does activation of sensori-motor cortices during action language understanding imply that action semantics relies on mirror neurons? Second, what is the evidence that activity in sensori-motor cortices plays a functional role in understanding language? Third, to what extent do responses in perceptual and motor areas depend on the linguistic and extra-linguistic context? And finally, can embodied theories accommodate language about abstract concepts? Based on the available evidence, we conclude that sensori-motor cortices are activated during a variety of language comprehension tasks, for both concrete and abstract language. Yet, this activity depends on the context in which perception and action words are encountered. Although modality-specific cortical activity is not a sine qua non of language processing even for language about perception and action, sensori-motor regions of the brain appear to make functional contributions to the construction of meaning, and should therefore be incorporated into models of the neurocognitive architecture of language.
  • Aziz-Zadeh, L., Casasanto, D., Feldman, J., Saxe, R., & Talmy, L. (2008). Discovering the conceptual primitives. In B. C. Love, K. McRae, & V. M. Sloutsky (Eds.), Proceedings of the 30th Annual Conference of the Cognitive Science Society (pp. 27-28). Austin, TX: Cognitive Science Society.
  • Baggio, G., Van Lambalgen, M., & Hagoort, P. (2008). Computing and recomputing discourse models: An ERP study. Journal of Memory and Language, 59, 36-53. doi:10.1016/j.jml.2008.02.005.

    Abstract

    While syntactic reanalysis has been extensively investigated in psycholinguistics, comparatively little is known about reanalysis in the semantic domain. We used event-related brain potentials (ERPs) to keep track of semantic processes involved in understanding short narratives such as ‘The girl was writing a letter when her friend spilled coffee on the paper’. We hypothesize that these sentences are interpreted in two steps: (1) when the progressive clause is processed, a discourse model is computed in which the goal state (a complete letter) is predicted to hold; (2) when the subordinate clause is processed, the initial representation is recomputed to the effect that, in the final discourse structure, the goal state is not satisfied. Critical sentences evoked larger sustained anterior negativities (SANs) compared to controls, starting around 400 ms following the onset of the sentence-final word, and lasting for about 400 ms. The amplitude of the SAN was correlated with the frequency with which participants, in an offline probe-selection task, responded that the goal state was not attained. Our results raise the possibility that the brain supports some form of non-monotonic recomputation to integrate information which invalidates previously held assumptions.
  • Bastiaansen, M. C. M., Oostenveld, R., Jensen, O., & Hagoort, P. (2008). I see what you mean: Theta power increases are involved in the retrieval of lexical semantic information. Brain and Language, 106(1), 15-28. doi:10.1016/j.bandl.2007.10.006.

    Abstract

    An influential hypothesis regarding the neural basis of the mental lexicon is that semantic representations are neurally implemented as distributed networks carrying sensory, motor and/or more abstract functional information. This work investigates whether the semantic properties of words partly determine the topography of such networks. Subjects performed a visual lexical decision task while their EEG was recorded. We compared the EEG responses to nouns with either visual semantic properties (VIS, referring to colors and shapes) or with auditory semantic properties (AUD, referring to sounds). A time–frequency analysis of the EEG revealed power increases in the theta (4–7 Hz) and lower-beta (13–18 Hz) frequency bands, and an early power increase and subsequent decrease for the alpha (8–12 Hz) band. In the theta band we observed a double dissociation: temporal electrodes showed larger theta power increases in the AUD condition, while occipital leads showed larger theta responses in the VIS condition. The results support the notion that semantic representations are stored in functional networks with a topography that reflects the semantic properties of the stored items, and provide further evidence that oscillatory brain dynamics in the theta frequency range are functionally related to the retrieval of lexical semantic information.
  • De Bree, E., Van Alphen, P. M., Fikkert, P., & Wijnen, F. (2008). Metrical stress in comprehension and production of Dutch children at risk of dyslexia. In H. Chan, H. Jacob, & E. Kapia (Eds.), Proceedings of the 32nd Annual Boston University Conference on Language Development (pp. 60-71). Somerville, Mass: Cascadilla Press.

    Abstract

    The present study compared the role of metrical stress in comprehension and production of three-year-old children with a familial risk of dyslexia with that of normally developing children to further explore the phonological deficit in dyslexia. A visual fixation task with stress (mis-)matches in bisyllabic words, as well as a non-word repetition task with bisyllabic targets, were presented to the control and at-risk children. Results show that the at-risk group was less sensitive to stress mismatches in word recognition than the control group. Correct production of metrical stress patterns did not differ significantly between the groups, but the percentages of phonemes produced correctly were lower for the at-risk than the control group. These findings suggest that production of metrical stress is not impaired in at-risk children, but that this group cannot exploit metrical stress in word recognition. This study demonstrates the importance of including suprasegmental skills in dyslexia research.
  • Casasanto, D., & Boroditsky, L. (2008). Time in the mind: Using space to think about time. Cognition, 106, 579-593. doi:10.1016/j.cognition.2007.03.004.

    Abstract

    How do we construct abstract ideas like justice, mathematics, or time-travel? In this paper we investigate whether mental representations that result from physical experience underlie people’s more abstract mental representations, using the domains of space and time as a testbed. People often talk about time using spatial language (e.g., a long vacation, a short concert). Do people also think about time using spatial representations, even when they are not using language? Results of six psychophysical experiments revealed that people are unable to ignore irrelevant spatial information when making judgments about duration, but not the converse. This pattern, which is predicted by the asymmetry between space and time in linguistic metaphors, was demonstrated here in tasks that do not involve any linguistic stimuli or responses. These findings provide evidence that the metaphorical relationship between space and time observed in language also exists in our more basic representations of distance and duration. Results suggest that our mental representations of things we can never see or touch may be built, in part, out of representations of physical experiences in perception and motor action.
  • Casasanto, D. (2008). Who's afraid of the big bad Whorf? Crosslinguistic differences in temporal language and thought. In P. Indefrey, & M. Gullberg (Eds.), Time to speak: Cognitive and neural prerequisites for time in language (pp. 63-79). Oxford: Wiley.

    Abstract

    The idea that language shapes the way we think, often associated with Benjamin Whorf, has long been decried as not only wrong but also fundamentally wrong-headed. Yet, experimental evidence has reopened debate about the extent to which language influences nonlinguistic cognition, particularly in the domain of time. In this article, I will first analyze an influential argument against the Whorfian hypothesis and show that its anti-Whorfian conclusion is in part an artifact of conflating two distinct questions: Do we think in language? and Does language shape thought? Next, I will discuss crosslinguistic differences in spatial metaphors for time and describe experiments that demonstrate corresponding differences in nonlinguistic mental representations. Finally, I will sketch a simple learning mechanism by which some linguistic relativity effects appear to arise. Although people may not think in language, speakers of different languages develop distinctive conceptual repertoires as a consequence of ordinary and presumably universal neural and cognitive processes.
  • Casasanto, D. (2008). Who's afraid of the big bad Whorf? Crosslinguistic differences in temporal language and thought. Language Learning, 58(suppl. 1), 63-79. doi:10.1111/j.1467-9922.2008.00462.x.

    Abstract

    The idea that language shapes the way we think, often associated with Benjamin Whorf, has long been decried as not only wrong but also fundamentally wrong-headed. Yet, experimental evidence has reopened debate about the extent to which language influences nonlinguistic cognition, particularly in the domain of time. In this article, I will first analyze an influential argument against the Whorfian hypothesis and show that its anti-Whorfian conclusion is in part an artifact of conflating two distinct questions: Do we think in language? and Does language shape thought? Next, I will discuss crosslinguistic differences in spatial metaphors for time and describe experiments that demonstrate corresponding differences in nonlinguistic mental representations. Finally, I will sketch a simple learning mechanism by which some linguistic relativity effects appear to arise. Although people may not think in language, speakers of different languages develop distinctive conceptual repertoires as a consequence of ordinary and presumably universal neural and cognitive processes.
  • Casasanto, D. (2008). Similarity and proximity: When does close in space mean close in mind? Memory & Cognition, 36(6), 1047-1056. doi:10.3758/MC.36.6.1047.

    Abstract

    People often describe things that are similar as close and things that are dissimilar as far apart. Does the way people talk about similarity reveal something fundamental about the way they conceptualize it? Three experiments tested the relationship between similarity and spatial proximity that is encoded in metaphors in language. Similarity ratings for pairs of words or pictures varied as a function of how far apart the stimuli appeared on the computer screen, but the influence of distance on similarity differed depending on the type of judgments the participants made. Stimuli presented closer together were rated more similar during conceptual judgments of abstract entities or unseen object properties but were rated less similar during perceptual judgments of visual appearance. These contrasting results underscore the importance of testing predictions based on linguistic metaphors experimentally and suggest that our sense of similarity arises from our ability to combine available perceptual information with stored knowledge of experiential regularities.
  • Dijkstra, K., & Casasanto, D. (2008). Autobiographical memory and motor action [Abstract]. In B. C. Love, K. McRae, & V. M. Sloutsky (Eds.), Proceedings of the 30th Annual Conference of the Cognitive Science Society (pp. 1549). Austin, TX: Cognitive Science Society.

    Abstract

    Retrieval of autobiographical memories is facilitated by activation of perceptuo-motor aspects of the experience, for example a congruent body position at the time of the experiencing and the time of retelling (Dijkstra, Kaschak, & Zwaan, 2007). The present study examined whether similar retrieval facilitation occurs when the direction of motor action is congruent with the valence of emotional memories. Consistent with evidence that people mentally represent emotions spatially (Casasanto, in press), participants moved marbles between vertically stacked boxes at a higher rate when the direction of movement was congruent with the valence of the memory they retrieved (e.g., upward for positive memories, downward for negative memories) than when direction and valence were incongruent (t(22)=4.24, p<.001). In addition, valence-congruent movements facilitated access to these memories, resulting in shorter retrieval times (t(22)=2.43, p<.05). Results demonstrate bidirectional influences between the emotional content of autobiographical memories and irrelevant motor actions.
  • Folia, V., Uddén, J., Forkstam, C., Ingvar, M., Hagoort, P., & Petersson, K. M. (2008). Implicit learning and dyslexia. Annals of the New York Academy of Sciences, 1145, 132-150. doi:10.1196/annals.1416.012.

    Abstract

    Several studies have reported an association between dyslexia and implicit learning deficits. It has been suggested that the weakness in implicit learning observed in dyslexic individuals may be related to sequential processing and implicit sequence learning. In the present article, we review the current literature on implicit learning and dyslexia. We describe a novel, forced-choice structural "mere exposure" artificial grammar learning paradigm and characterize this paradigm in normal readers in relation to the standard grammaticality classification paradigm. We argue that preference classification is a more optimal measure of the outcome of implicit acquisition since in the preference version participants are kept completely unaware of the underlying generative mechanism, while in the grammaticality version, the subjects have, at least in principle, been informed about the existence of an underlying complex set of rules at the point of classification (but not during acquisition). On the basis of the "mere exposure effect," we tested the prediction that the development of preference will correlate with the grammaticality status of the classification items. In addition, we examined the effects of grammaticality (grammatical/nongrammatical) and associative chunk strength (ACS; high/low) on the classification tasks (preference/grammaticality). Using a balanced ACS design in which the factors of grammaticality (grammatical/nongrammatical) and ACS (high/low) were independently controlled in a 2 × 2 factorial design, we confirmed our predictions. We discuss the suitability of this task for further investigation of the implicit learning characteristics in dyslexia.
  • Forkstam, C., Elwér, A., Ingvar, M., & Petersson, K. M. (2008). Instruction effects in implicit artificial grammar learning: A preference for grammaticality. Brain Research, 1221, 80-92. doi:10.1016/j.brainres.2008.05.005.

    Abstract

    Human implicit learning can be investigated with implicit artificial grammar learning, a paradigm that has been proposed as a simple model for aspects of natural language acquisition. In the present study we compared the typical yes–no grammaticality classification, with yes–no preference classification. In the case of preference instruction no reference to the underlying generative mechanism (i.e., grammar) is needed and the subjects are therefore completely uninformed about an underlying structure in the acquisition material. In experiment 1, subjects engaged in a short-term memory task using only grammatical strings without performance feedback for 5 days. As a result of the 5 acquisition days, classification performance was independent of instruction type and both the preference and the grammaticality group acquired relevant knowledge of the underlying generative mechanism to a similar degree. Changing the grammatical strings to random strings in the acquisition material (experiment 2) resulted in classification being driven by local substring familiarity. Contrasting repeated vs. non-repeated preference classification (experiment 3) showed that the effect of local substring familiarity decreases with repeated classification. This was not the case for repeated grammaticality classifications. We conclude that classification performance is largely independent of instruction type and that forced-choice preference classification is equivalent to the typical grammaticality classification.
  • Goldin-Meadow, S., Chee So, W., Ozyurek, A., & Mylander, C. (2008). The natural order of events: how speakers of different languages represent events nonverbally. Proceedings of the National Academy of Sciences of the USA, 105(27), 9163-9168. doi:10.1073/pnas.0710060105.

    Abstract

    To test whether the language we speak influences our behavior even when we are not speaking, we asked speakers of four languages differing in their predominant word orders (English, Turkish, Spanish, and Chinese) to perform two nonverbal tasks: a communicative task (describing an event by using gesture without speech) and a noncommunicative task (reconstructing an event with pictures). We found that the word orders speakers used in their everyday speech did not influence their nonverbal behavior. Surprisingly, speakers of all four languages used the same order on both nonverbal tasks. This order, actor–patient–act, is analogous to the subject–object–verb pattern found in many languages of the world and, importantly, in newly developing gestural languages. The findings provide evidence for a natural order that we impose on events when describing and reconstructing them nonverbally and exploit when constructing language anew.

    Supplementary material

    GoldinMeadow_2008_naturalSuppl.pdf
  • Hagoort, P. (2008). Mijn omweg naar de filosofie. Algemeen Nederlands Tijdschrift voor Wijsbegeerte, 100(4), 303-310.
  • Hagoort, P., Ramsey, N. F., & Jensen, O. (2008). De gereedschapskist van de cognitieve neurowetenschap. In F. Wijnen, & F. Verstraten (Eds.), Het brein te kijk: Verkenning van de cognitieve neurowetenschap (pp. 41-75). Amsterdam: Harcourt Assessment.
  • Li, X., Hagoort, P., & Yang, Y. (2008). Event-related potential evidence on the influence of accentuation in spoken discourse comprehension in Chinese. Journal of Cognitive Neuroscience, 20(5), 906-915. doi:10.1162/jocn.2008.20512.

    Abstract

    In an event-related potential experiment with Chinese discourses as material, we investigated how and when accentuation influences spoken discourse comprehension in relation to the different information states of the critical words. These words could either provide new or old information. It was shown that variation of accentuation influenced the amplitude of the N400, with a larger amplitude for accented than deaccented words. In addition, there was an interaction between accentuation and information state. The N400 amplitude difference between accented and deaccented new information was smaller than that between accented and deaccented old information. The results demonstrate that, during spoken discourse comprehension, listeners rapidly extract the semantic consequences of accentuation in relation to the previous discourse context. Moreover, our results show that the N400 amplitude can be larger for correct (new, accented words) than incorrect (new, deaccented words) information. This, we argue, proves that the N400 does not react to semantic anomaly per se, but rather to semantic integration load, which is higher for new information.
  • Hagoort, P. (2008). Über Broca, Gehirn und Bindung. In Jahrbuch 2008: Tätigkeitsberichte der Institute. München: Generalverwaltung der Max-Planck-Gesellschaft. Retrieved from http://www.mpg.de/306524/forschungsSchwerpunkt1?c=166434.

    Abstract

    In speaking and in language comprehension, word meanings are retrieved from memory and combined into larger units (unification). Such unification operations take place at different levels of language processing. This contribution proposes a framework in which psycholinguistic models are linked to the neurobiological study of language. According to this proposal, the left inferior frontal gyrus (LIFG) plays a key role in unification.
  • Hagoort, P. (2008). The fractionation of spoken language understanding by measuring electrical and magnetic brain signals. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 363, 1055-1069. doi:10.1098/rstb.2007.2159.

    Abstract

    This paper focuses on what electrical and magnetic recordings of human brain activity reveal about spoken language understanding. Based on the high temporal resolution of these recordings, a fine-grained temporal profile of different aspects of spoken language comprehension can be obtained. Crucial aspects of speech comprehension are lexical access, selection and semantic integration. Results show that for words spoken in context, there is no ‘magic moment’ when lexical selection ends and semantic integration begins. Irrespective of whether words have early or late recognition points, semantic integration processing is initiated before words can be identified on the basis of the acoustic information alone. Moreover, for one particular event-related brain potential (ERP) component (the N400), equivalent impact of sentence- and discourse-semantic contexts is observed. This indicates that in comprehension, a spoken word is immediately evaluated relative to the widest interpretive domain available. In addition, this happens very quickly. Findings are discussed that show that often an unfolding word can be mapped onto discourse-level representations well before the end of the word. Overall, the time course of the ERP effects is compatible with the view that the different information types (lexical, syntactic, phonological, pragmatic) are processed in parallel and influence the interpretation process incrementally, that is as soon as the relevant pieces of information are available. This is referred to as the immediacy principle.
  • Hagoort, P. (2008). Should psychology ignore the language of the brain? Current Directions in Psychological Science, 17(2), 96-101. doi:10.1111/j.1467-8721.2008.00556.x.

    Abstract

    Claims that neuroscientific data do not contribute to our understanding of psychological functions have been made recently. Here I argue that these criticisms are solely based on an analysis of functional magnetic resonance imaging (fMRI) studies. However, fMRI is only one of the methods in the toolkit of cognitive neuroscience. I provide examples from research on event-related brain potentials (ERPs) that have contributed to our understanding of the cognitive architecture of human language functions. In addition, I provide evidence of (possible) contributions from fMRI measurements to our understanding of the functional architecture of language processing. Finally, I argue that a neurobiology of human language that integrates information about the necessary genetic and neural infrastructures will allow us to answer certain questions that are not answerable if all we have is evidence from behavior.
  • Janzen, G., Jansen, C., & Van Turennout, M. (2008). Memory consolidation of landmarks in good navigators. Hippocampus, 18, 40-47.

    Abstract

    Landmarks play an important role in successful navigation. To successfully find your way around an environment, navigationally relevant information needs to be stored and become available at later moments in time. Evidence from functional magnetic resonance imaging (fMRI) studies shows that the human parahippocampal gyrus encodes the navigational relevance of landmarks. In the present event-related fMRI experiment, we investigated memory consolidation of navigationally relevant landmarks in the medial temporal lobe after route learning. Sixteen right-handed volunteers viewed two film sequences through a virtual museum with objects placed at locations relevant (decision points) or irrelevant (nondecision points) for navigation. To investigate consolidation effects, one film sequence was seen in the evening before scanning, the other one was seen the following morning, directly before scanning. Event-related fMRI data were acquired during an object recognition task. Participants decided whether they had seen the objects in the previously shown films. After scanning, participants answered standardized questions about their navigational skills, and were divided into groups of good and bad navigators, based on their scores. An effect of memory consolidation was obtained in the hippocampus: Objects that were seen the evening before scanning (remote objects) elicited more activity than objects seen directly before scanning (recent objects). This increase in activity in bilateral hippocampus for remote objects was observed in good navigators only. In addition, a spatial-specific effect of memory consolidation for navigationally relevant objects was observed in the parahippocampal gyrus. Remote decision point objects induced increased activity as compared with recent decision point objects, again in good navigators only. 
The results provide initial evidence for a connection between memory consolidation and navigational ability that can provide a basis for successful navigation.
  • Kho, K. H., Indefrey, P., Hagoort, P., Van Veelen, C. W. M., Van Rijen, P. C., & Ramsey, N. F. (2008). Unimpaired sentence comprehension after anterior temporal cortex resection. Neuropsychologia, 46(4), 1170-1178. doi:10.1016/j.neuropsychologia.2007.10.014.

    Abstract

    Functional imaging studies have demonstrated involvement of the anterior temporal cortex in sentence comprehension. It is unclear, however, whether the anterior temporal cortex is essential for this function. We studied two aspects of sentence comprehension, namely syntactic and prosodic comprehension in temporal lobe epilepsy patients who were candidates for resection of the anterior temporal lobe. Methods: Temporal lobe epilepsy patients (n = 32) with normal (left) language dominance were tested on syntactic and prosodic comprehension before and after removal of the anterior temporal cortex. The prosodic comprehension test was also compared with performance of healthy control subjects (n = 47) before surgery. Results: Overall, temporal lobe epilepsy patients did not differ from healthy controls in syntactic and prosodic comprehension before surgery. They did perform less well on an affective prosody task. Post-operative testing revealed that syntactic and prosodic comprehension did not change after removal of the anterior temporal cortex. Discussion: The unchanged performance on syntactic and prosodic comprehension after removal of the anterior temporal cortex suggests that this area is not indispensable for sentence comprehension functions in temporal epilepsy patients. Potential implications for the postulated role of the anterior temporal lobe in the healthy brain are discussed.
  • Ladd, D. R., Dediu, D., & Kinsella, A. R. (2008). Reply to Bowles (2008). Biolinguistics, 2(2), 256-259.
  • De Lange, F. P., Koers, A., Kalkman, J. S., Bleijenberg, G., Hagoort, P., Van der Meer, J. W. M., & Toni, I. (2008). Increase in prefrontal cortical volume following cognitive behavioural therapy in patients with chronic fatigue syndrome. Brain, 131, 2172-2180. doi:10.1093/brain/awn140.

    Abstract

    Chronic fatigue syndrome (CFS) is a disabling disorder, characterized by persistent or relapsing fatigue. Recent studies have detected a decrease in cortical grey matter volume in patients with CFS, but it is unclear whether this cerebral atrophy constitutes a cause or a consequence of the disease. Cognitive behavioural therapy (CBT) is an effective behavioural intervention for CFS, which combines a rehabilitative approach of a graded increase in physical activity with a psychological approach that addresses thoughts and beliefs about CFS which may impair recovery. Here, we test the hypothesis that cerebral atrophy may be a reversible state that can ameliorate with successful CBT. We have quantified cerebral structural changes in 22 CFS patients that underwent CBT and 22 healthy control participants. At baseline, CFS patients had significantly lower grey matter volume than healthy control participants. CBT intervention led to a significant improvement in health status, physical activity and cognitive performance. Crucially, CFS patients showed a significant increase in grey matter volume, localized in the lateral prefrontal cortex. This change in cerebral volume was related to improvements in cognitive speed in the CFS patients. Our findings indicate that the cerebral atrophy associated with CFS is partially reversed after effective CBT. This result provides an example of macroscopic cortical plasticity in the adult human brain, demonstrating a surprisingly dynamic relation between behavioural state and cerebral anatomy. Furthermore, our results reveal a possible neurobiological substrate of psychotherapeutic treatment.
  • Nieuwland, M. S., & Van Berkum, J. J. A. (2008). The interplay between semantic and referential aspects of anaphoric noun phrase resolution: Evidence from ERPs. Brain & Language, 106, 119-131. doi:10.1016/j.bandl.2008.05.001.

    Abstract

    In this event-related brain potential (ERP) study, we examined how semantic and referential aspects of anaphoric noun phrase resolution interact during discourse comprehension. We used a full factorial design that crossed referential ambiguity with semantic incoherence. Ambiguous anaphors elicited a sustained negative shift (Nref effect), and incoherent anaphors elicited an N400 effect. Simultaneously ambiguous and incoherent anaphors elicited an ERP pattern resembling that of the incoherent anaphors. These results suggest that semantic incoherence can preclude readers from engaging in anaphoric inferencing. Furthermore, approximately half of our participants unexpectedly showed common late positive effects to the three types of problematic anaphors. We relate the latter finding to recent accounts of what the P600 might reflect, and to the role of individual differences therein.
  • Nieuwland, M. S., & Van Berkum, J. J. A. (2008). The neurocognition of referential ambiguity in language comprehension. Language and Linguistics Compass, 2(4), 603-630. doi:10.1111/j.1749-818x.2008.00070.x.

    Abstract

    Referential ambiguity arises whenever readers or listeners are unable to select a unique referent for a linguistic expression out of multiple candidates. In the current article, we review a series of neurocognitive experiments from our laboratory that examine the neural correlates of referential ambiguity, and that employ the brain signature of referential ambiguity to derive functional properties of the language comprehension system. The results of our experiments converge to show that referential ambiguity resolution involves making an inference to evaluate the referential candidates. These inferences only take place when both referential candidates are, at least initially, equally plausible antecedents. Whether comprehenders make these anaphoric inferences is strongly context dependent and co-determined by characteristics of the reader. In addition, readers appear to disregard referential ambiguity when the competing candidates are each semantically incoherent, suggesting that, under certain circumstances, semantic analysis can proceed even when referential analysis has not yielded a unique antecedent. Finally, results from a functional neuroimaging study suggest that whereas the neural systems that deal with referential ambiguity partially overlap with those that deal with referential failure, they show an inverse coupling with the neural systems associated with semantic processing, possibly reflecting the relative contributions of semantic and episodic processing to re-establish semantic and referential coherence, respectively.
  • Otten, M., & Van Berkum, J. J. A. (2008). Discourse-based word anticipation during language processing: Prediction or priming? Discourse Processes, 45, 464-496. doi:10.1080/01638530802356463.

    Abstract

    Language is an intrinsically open-ended system. This fact has led to the widely shared assumption that readers and listeners do not predict upcoming words, at least not in a way that goes beyond simple priming between words. Recent evidence, however, suggests that readers and listeners do anticipate upcoming words “on the fly” as a text unfolds. In 2 event-related potentials experiments, this study examined whether these predictions are based on the exact message conveyed by the prior discourse or on simpler word-based priming mechanisms. Participants read texts that strongly supported the prediction of a specific word, mixed with non-predictive control texts that contained the same prime words. In Experiment 1A, anomalous words that replaced a highly predictable (as opposed to a non-predictable but coherent) word elicited a long-lasting positive shift, suggesting that the prior discourse had indeed led people to predict specific words. In Experiment 1B, adjectives whose suffix mismatched the predictable noun's syntactic gender elicited a short-lived late negativity in predictive stories but not in prime control stories. Taken together, these findings reveal that the conceptual basis for predicting specific upcoming words during reading is the exact message conveyed by the discourse and not the mere presence of prime words.
  • Ozyurek, A., Kita, S., Allen, S., Brown, A., Furman, R., & Ishizuka, T. (2008). Development of cross-linguistic variation in speech and gesture: motion events in English and Turkish. Developmental Psychology, 44(4), 1040-1054. doi:10.1037/0012-1649.44.4.1040.

    Abstract

    The way adults express manner and path components of a motion event varies across typologically different languages both in speech and cospeech gestures, showing that language specificity in event encoding influences gesture. The authors tracked when and how this multimodal cross-linguistic variation develops in children learning Turkish and English, 2 typologically distinct languages. They found that children learn to speak in language-specific ways from age 3 onward (i.e., English speakers used 1 clause and Turkish speakers used 2 clauses to express manner and path). In contrast, English- and Turkish-speaking children’s gestures looked similar at ages 3 and 5 (i.e., separate gestures for manner and path), differing from each other only at age 9 and in adulthood (i.e., English speakers used 1 gesture, but Turkish speakers used separate gestures for manner and path). The authors argue that this pattern of the development of cospeech gestures reflects a gradual shift to language-specific representations during speaking and shows that looking at speech alone may not be sufficient to understand the full process of language acquisition.
  • Patel, A. D., Iversen, J. R., Wassenaar, M., & Hagoort, P. (2008). Musical syntactic processing in agrammatic Broca's aphasia. Aphasiology, 22(7/8), 776-789. doi:10.1080/02687030701803804.

    Abstract

    Background: Growing evidence for overlap in the syntactic processing of language and music in non-brain-damaged individuals leads to the question of whether aphasic individuals with grammatical comprehension problems in language also have problems processing structural relations in music. Aims: The current study sought to test musical syntactic processing in individuals with Broca's aphasia and grammatical comprehension deficits, using both explicit and implicit tasks. Methods & Procedures: Two experiments were conducted. In the first experiment 12 individuals with Broca's aphasia (and 14 matched controls) were tested for their sensitivity to grammatical and semantic relations in sentences, and for their sensitivity to musical syntactic (harmonic) relations in chord sequences. An explicit task (acceptability judgement of novel sequences) was used. The second experiment, with 9 individuals with Broca's aphasia (and 12 matched controls), probed musical syntactic processing using an implicit task (harmonic priming). Outcomes & Results: In both experiments the aphasic group showed impaired processing of musical syntactic relations. Control experiments indicated that this could not be attributed to low-level problems with the perception of pitch patterns or with auditory short-term memory for tones. Conclusions: The results suggest that musical syntactic processing in agrammatic aphasia deserves systematic investigation, and that such studies could help probe the nature of the processing deficits underlying linguistic agrammatism. Methodological suggestions are offered for future work in this little-explored area.
  • Perniss, P. M., & Ozyurek, A. (2008). Representations of action, motion and location in sign space: A comparison of German (DGS) and Turkish (TID) sign language narratives. In J. Quer (Ed.), Signs of the time: Selected papers from TISLR 8 (pp. 353-376). Seedorf: Signum Press.
  • Petersson, K. M. (2008). On cognition, structured sequence processing, and adaptive dynamical systems. American Institute of Physics Conference Proceedings, 1060(1), 195-200.

    Abstract

    Cognitive neuroscience approaches the brain as a cognitive system: a system that is functionally conceptualized in terms of information processing. We outline some aspects of this concept and consider a physical system to be an information processing device when a subclass of its physical states can be viewed as representational/cognitive and transitions between these can be conceptualized as a process operating on these states by implementing operations on the corresponding representational structures. We identify a generic and fundamental problem in cognition: sequentially organized structured processing. Structured sequence processing provides the brain, in an essential sense, with its processing logic. In an approach addressing this problem, we illustrate how to integrate levels of analysis within a framework of adaptive dynamical systems. We note that the dynamical system framework lends itself to a description of asynchronous event-driven devices, which is likely to be important in cognition because the brain appears to be an asynchronous processing system. We use the human language faculty and natural language processing as a concrete example throughout.
  • Scheeringa, R., Bastiaansen, M. C. M., Petersson, K. M., Oostenveld, R., Norris, D. G., & Hagoort, P. (2008). Frontal theta EEG activity correlates negatively with the default mode network in resting state. International Journal of Psychophysiology, 67, 242-251. doi:10.1016/j.ijpsycho.2007.05.017.

    Abstract

    We used simultaneously recorded EEG and fMRI to investigate in which areas the BOLD signal correlates with frontal theta power changes, while subjects were quietly lying resting in the scanner with their eyes open. To obtain a reliable estimate of frontal theta power we applied ICA on band-pass filtered (2–9 Hz) EEG data. For each subject we selected the component that best matched the mid-frontal scalp topography associated with the frontal theta rhythm. We applied a time-frequency analysis on this component and used the time course of the frequency bin with the highest overall power to form a regressor that modeled spontaneous fluctuations in frontal theta power. No significant positive BOLD correlations with this regressor were observed. Extensive negative correlations were observed in the areas that together form the default mode network. We conclude that frontal theta activity can be seen as an EEG index of default mode network activity.