Publications

  • Tesink, C. M. J. Y., Buitelaar, J. K., Petersson, K. M., Van der Gaag, R. J., Teunisse, J.-P., & Hagoort, P. (2011). Neural correlates of language comprehension in autism spectrum disorders: When language conflicts with world knowledge. Neuropsychologia, 49, 1095-1104. doi:10.1016/j.neuropsychologia.2011.01.018.

    Abstract

    In individuals with ASD, difficulties with language comprehension are most evident when higher-level semantic-pragmatic language processing is required, for instance when context has to be used to interpret the meaning of an utterance. Until now, it is unclear at what level of processing and for what type of context these difficulties in language comprehension occur. Therefore, in the current fMRI study, we investigated the neural correlates of the integration of contextual information during auditory language comprehension in 24 adults with ASD and 24 matched control participants. Different levels of context processing were manipulated by using spoken sentences that were correct or contained either a semantic or world knowledge anomaly. Our findings demonstrated significant differences between the groups in inferior frontal cortex that were only present for sentences with a world knowledge anomaly. Relative to the ASD group, the control group showed significantly increased activation in left inferior frontal gyrus (LIFG) for sentences with a world knowledge anomaly compared to correct sentences. This effect possibly indicates reduced integrative capacities of the ASD group. Furthermore, world knowledge anomalies elicited significantly stronger activation in right inferior frontal gyrus (RIFG) in the control group compared to the ASD group. This additional RIFG activation probably reflects revision of the situation model after new, conflicting information. The lack of recruitment of RIFG is possibly related to difficulties with exception handling in the ASD group.

  • Tesink, C. M. J. Y., Buitelaar, J. K., Petersson, K. M., Van der Gaag, R. J., Kan, C. C., Tendolkar, I., & Hagoort, P. (2009). Neural correlates of pragmatic language comprehension in autism spectrum disorders. Brain, 132, 1941-1952. doi:10.1093/brain/awp103.

    Abstract

    Difficulties with pragmatic aspects of communication are universal across individuals with autism spectrum disorders (ASDs). Here we focused on an aspect of pragmatic language comprehension that is relevant to social interaction in daily life: the integration of speaker characteristics inferred from the voice with the content of a message. Using functional magnetic resonance imaging (fMRI), we examined the neural correlates of the integration of voice-based inferences about the speaker’s age, gender or social background, and sentence content in adults with ASD and matched control participants. Relative to the control group, the ASD group showed increased activation in right inferior frontal gyrus (RIFG; Brodmann area 47) for speaker-incongruent sentences compared to speaker-congruent sentences. Given that both groups performed behaviourally at a similar level on a debriefing interview outside the scanner, the increased activation in RIFG for the ASD group was interpreted as being compensatory in nature. It presumably reflects spill-over processing from the language dominant left hemisphere due to higher task demands faced by the participants with ASD when integrating speaker characteristics and the content of a spoken sentence. Furthermore, only the control group showed decreased activation for speaker-incongruent relative to speaker-congruent sentences in right ventral medial prefrontal cortex (vMPFC; Brodmann area 10), including right anterior cingulate cortex (ACC; Brodmann area 24/32). Since vMPFC is involved in self-referential processing related to judgments and inferences about self and others, the absence of such a modulation in vMPFC activation in the ASD group possibly points to atypical default self-referential mental activity in ASD. Our results show that in ASD compensatory mechanisms are necessary in implicit, low-level inferential processes in spoken language understanding. This indicates that pragmatic language problems in ASD are not restricted to high-level inferential processes, but encompass the most basic aspects of pragmatic language processing.
  • Tesink, C. M. J. Y., Petersson, K. M., Van Berkum, J. J. A., Van den Brink, D., Buitelaar, J. K., & Hagoort, P. (2009). Unification of speaker and meaning in language comprehension: An fMRI study. Journal of Cognitive Neuroscience, 21, 2085-2099. doi:10.1162/jocn.2008.21161.

    Abstract

    When interpreting a message, a listener takes into account several sources of linguistic and extralinguistic information. Here we focused on one particular form of extralinguistic information, certain speaker characteristics as conveyed by the voice. Using functional magnetic resonance imaging, we examined the neural structures involved in the unification of sentence meaning and voice-based inferences about the speaker's age, sex, or social background. We found enhanced activation in the inferior frontal gyrus bilaterally (BA 45/47) during listening to sentences whose meaning was incongruent with inferred speaker characteristics. Furthermore, our results showed an overlap in brain regions involved in unification of speaker-related information and those used for the unification of semantic and world knowledge information [inferior frontal gyrus bilaterally (BA 45/47) and left middle temporal gyrus (BA 21)]. These findings provide evidence for a shared neural unification system for linguistic and extralinguistic sources of information and extend the existing knowledge about the role of inferior frontal cortex as a crucial component for unification during language comprehension.
  • Theakston, A., & Rowland, C. F. (2009). Introduction to Special Issue: Cognitive approaches to language acquisition. Cognitive Linguistics, 20(3), 477-480. doi:10.1515/COGL.2009.021.
  • Theakston, A. L., & Rowland, C. F. (2009). The acquisition of auxiliary syntax: A longitudinal elicitation study. Part 1: Auxiliary BE. Journal of Speech, Language, and Hearing Research, 52, 1449-1470. doi:10.1044/1092-4388(2009/08-0037).

    Abstract

    Purpose: The question of how and when English-speaking children acquire auxiliaries is the subject of extensive debate. Some researchers posit the existence of innately given Universal Grammar principles to guide acquisition, although some aspects of the auxiliary system must be learned from the input. Others suggest that auxiliaries can be learned without Universal Grammar, citing evidence of piecemeal learning in their support. This study represents a unique attempt to trace the development of auxiliary syntax by using a longitudinal elicitation methodology. Method: Twelve English-speaking children participated in 3 tasks designed to elicit auxiliary BE in declaratives and yes/no and wh-questions. They completed each task 6 times in total between the ages of 2;10 (years;months) and 3;6. Results: The children’s levels of correct use of 2 forms of BE (is,are) differed according to auxiliary form and sentence structure, and these relations changed over development. An analysis of the children’s errors also revealed complex interactions between these factors. Conclusion: These data are problematic for existing accounts of auxiliary acquisition and highlight the need for researchers working within both generativist and constructivist frameworks to develop more detailed theories of acquisition that directly predict the pattern of acquisition observed.
  • Theakston, A. L., Lieven, E. V., Pine, J. M., & Rowland, C. F. (2004). Semantic generality, input frequency and the acquisition of syntax. Journal of Child Language, 31(1), 61-99. doi:10.1017/S0305000903005956.

    Abstract

    In many areas of language acquisition, researchers have suggested that semantic generality plays an important role in determining the order of acquisition of particular lexical forms. However, generality is typically confounded with the effects of input frequency and it is therefore unclear to what extent semantic generality or input frequency determines the early acquisition of particular lexical items. The present study evaluates the relative influence of semantic status and properties of the input on the acquisition of verbs and their argument structures in the early speech of 9 English-speaking children from 2;0 to 3;0. The children's early verb utterances are examined with respect to (1) the order of acquisition of particular verbs in three different constructions, (2) the syntactic diversity of use of individual verbs, (3) the relative proportional use of semantically general verbs as a function of total verb use, and (4) their grammatical accuracy. The data suggest that although measures of semantic generality correlate with various measures of early verb use, once the effects of verb use in the input are removed, semantic generality is not a significant predictor of early verb use. The implications of these results for semantic-based theories of verb argument structure acquisition are discussed.
  • Thiebaut de Schotten, M., Dell'Acqua, F., Forkel, S. J., Simmons, A., Vergani, F., Murphy, D. G. M., & Catani, M. (2011). A lateralized brain network for visuospatial attention. Nature Neuroscience, 14, 1245-1246. doi:10.1038/nn.2905.

    Abstract

    Right hemisphere dominance for visuospatial attention is characteristic of most humans, but its anatomical basis remains unknown. We report the first evidence in humans for a larger parieto-frontal network in the right than left hemisphere, and a significant correlation between the degree of anatomical lateralization and asymmetry of performance on visuospatial tasks. Our results suggest that hemispheric specialization is associated with an unbalanced speed of visuospatial processing.

    Additional information

    supplementary material
  • Timpson, N. J., Tobias, J. H., Richards, J. B., Soranzo, N., Duncan, E. L., Sims, A.-M., Whittaker, P., Kumanduri, V., Zhai, G., Glaser, B., Eisman, J., Jones, G., Nicholson, G., Prince, R., Seeman, E., Spector, T. D., Brown, M. A., Peltonen, L., Smith, G. D., Deloukas, P., & Evans, D. M. (2009). Common variants in the region around Osterix are associated with bone mineral density and growth in childhood. Human Molecular Genetics, 18(8), 1510-1517. doi:10.1093/hmg/ddp052.

    Abstract

    Peak bone mass achieved in adolescence is a determinant of bone mass in later life. In order to identify genetic variants affecting bone mineral density (BMD), we performed a genome-wide association study of BMD and related traits in 1518 children from the Avon Longitudinal Study of Parents and Children (ALSPAC). We compared results with a scan of 134 adults with high or low hip BMD. We identified associations with BMD in an area of chromosome 12 containing the Osterix (SP7) locus, a transcription factor responsible for regulating osteoblast differentiation (ALSPAC: P = 5.8 x 10^-4; Australia: P = 3.7 x 10^-4). This region has previously shown evidence of association with adult hip and lumbar spine BMD in an Icelandic population, as well as nominal association in a UK population. A meta-analysis of these existing studies revealed strong association between SNPs in the Osterix region and adult lumbar spine BMD (P = 9.9 x 10^-11). In light of these findings, we genotyped a further 3692 individuals from ALSPAC who had whole body BMD and confirmed the association in children as well (P = 5.4 x 10^-5). Moreover, all SNPs were related to height in ALSPAC children, but not weight or body mass index, and when height was included as a covariate in the regression equation, the association with total body BMD was attenuated. We conclude that genetic variants in the region of Osterix are associated with BMD in children and adults probably through primary effects on growth.
  • Torreira, F., & Ernestus, M. (2011). Realization of voiceless stops and vowels in conversational French and Spanish. Laboratory Phonology, 2(2), 331-353. doi:10.1515/LABPHON.2011.012.

    Abstract

    The present study compares the realization of intervocalic voiceless stops and vowels surrounded by voiceless stops in conversational Spanish and French. Our data reveal significant differences in how these segments are realized in each language. Spanish voiceless stops tend to have shorter stop closures, display incomplete closures more often, and exhibit more voicing than French voiceless stops. As for vowels, more cases of complete devoicing and greater degrees of partial devoicing were found in French than in Spanish. Moreover, all French vowel types exhibit significantly lower F1 values than their Spanish counterparts. These findings indicate that the extent of reduction that a segment type can undergo in conversational speech can vary significantly across languages. Language differences in coarticulatory strategies and “base-of-articulation” are discussed as possible causes of our observations.
  • Torreira, F., & Ernestus, M. (2011). Vowel elision in casual French: The case of vowel /e/ in the word c’était. Journal of Phonetics, 39(1), 50-58. doi:10.1016/j.wocn.2010.11.003.

    Abstract

    This study investigates the reduction of vowel /e/ in the French word c’était /setε/ ‘it was’. This reduction phenomenon appeared to be highly frequent, as more than half of the occurrences of this word in a corpus of casual French contained few or no acoustic traces of a vowel between [s] and [t]. All our durational analyses clearly supported a categorical absence of vowel /e/ in a subset of c’était tokens. This interpretation was also supported by our finding that the occurrence of complete elision and [e] duration in non-elision tokens were conditioned by different factors. However, spectral measures were consistent with the possibility that a highly reduced /e/ vowel is still present in elision tokens in spite of the durational evidence for categorical elision. We discuss how these findings can be reconciled, and conclude that acoustic analysis of uncontrolled materials can provide valuable information about the mechanisms underlying reduction phenomena in casual speech.
  • Trilsbeek, P. (2004). Report from DoBeS training week. Language Archive Newsletter, 1(3), 12-12.
  • Trilsbeek, P. (2004). DoBeS Training Course. Language Archive Newsletter, 1(2), 6-6.
  • Trilsbeek, P., & Van Uytvanck, D. (2009). Regional archives and community portals. IASA Journal, 32, 69-73.
  • Tufvesson, S. (2011). Analogy-making in the Semai sensory world. The Senses & Society, 6(1), 86-95. doi:10.2752/174589311X12893982233876.

    Abstract

    In the interplay between language, culture, and perception, iconicity structures our representations of what we experience. By examining secondary iconicity in sensory vocabulary, this study draws attention to diagrammatic qualities in human interaction with, and representation of, the sensory world. In Semai (Mon-Khmer, Aslian), spoken on Peninsular Malaysia, sensory experiences are encoded by expressives. Expressives display a diagrammatic iconic structure whereby related sensory experiences receive related linguistic forms. Through this type of form-meaning mapping, gradient relationships in the perceptual world receive gradient linguistic representations. Form-meaning mapping such as this enables speakers to categorize sensory events into types and subtypes of perceptions, and provide sensory specifics of various kinds. This study illustrates how a diagrammatic iconic structure within sensory vocabulary creates networks of relational sensory knowledge. Through analogy, speakers draw on this knowledge to comprehend sensory referents and create new unconventional forms, which are easily understood by other members of the community. Analogy-making such as this allows speakers to capture fine-grained differences between sensory events, and effectively guide each other through the Semai sensory landscape.
  • Tuinman, A., Mitterer, H., & Cutler, A. (2011). Perception of intrusive /r/ in English by native, cross-language and cross-dialect listeners. Journal of the Acoustical Society of America, 130, 1643-1652. doi:10.1121/1.3619793.

    Abstract

    In sequences such as law and order, speakers of British English often insert /r/ between law and and. Acoustic analyses revealed such “intrusive” /r/ to be significantly shorter than canonical /r/. In a 2AFC experiment, native listeners heard British English sentences in which /r/ duration was manipulated across a word boundary [e.g., saw (r)ice], and orthographic and semantic factors were varied. These listeners responded categorically on the basis of acoustic evidence for /r/ alone, reporting ice after short /r/s, rice after long /r/s; orthographic and semantic factors had no effect. Dutch listeners proficient in English who heard the same materials relied less on durational cues than the native listeners, and were affected by both orthography and semantic bias. American English listeners produced intermediate responses to the same materials, being sensitive to duration (less so than native, more so than Dutch listeners), and to orthography (less so than the Dutch), but insensitive to the semantic manipulation. Listeners from language communities without common use of intrusive /r/ may thus interpret intrusive /r/ as canonical /r/, with a language difference increasing this propensity more than a dialect difference. Native listeners, however, efficiently distinguish intrusive from canonical /r/ by exploiting the relevant acoustic variation.
  • Tyler, M., & Cutler, A. (2009). Cross-language differences in cue use for speech segmentation. Journal of the Acoustical Society of America, 126, 367-376. doi:10.1121/1.3129127.

    Abstract

    Two artificial-language learning experiments directly compared English, French, and Dutch listeners’ use of suprasegmental cues for continuous-speech segmentation. In both experiments, listeners heard unbroken sequences of consonant-vowel syllables, composed of recurring three- and four-syllable “words.” These words were demarcated by (a) no cue other than transitional probabilities induced by their recurrence, (b) a consistent left-edge cue, or (c) a consistent right-edge cue. Experiment 1 examined a vowel lengthening cue. All three listener groups benefited from this cue in right-edge position; none benefited from it in left-edge position. Experiment 2 examined a pitch-movement cue. English listeners used this cue in left-edge position, French listeners used it in right-edge position, and Dutch listeners used it in both positions. These findings are interpreted as evidence of both language-universal and language-specific effects. Final lengthening is a language-universal effect expressing a more general (non-linguistic) mechanism. Pitch movement expresses prominence which has characteristically different placements across languages: typically at right edges in French, but at left edges in English and Dutch. Finally, stress realization in English versus Dutch encourages greater attention to suprasegmental variation by Dutch than by English listeners, allowing Dutch listeners to benefit from an informative pitch-movement cue even in an uncharacteristic position.
  • De Vaan, L., Ernestus, M., & Schreuder, R. (2011). The lifespan of lexical traces for novel morphologically complex words. The Mental Lexicon, 6, 374-392. doi:10.1075/ml.6.3.02dev.

    Abstract

    This study investigates the lifespans of lexical traces for novel morphologically complex words. In two visual lexical decision experiments, a neologism was either primed by itself or by its stem. The target occurred 40 trials after the prime (Experiments 1 & 2), after a 12 hour delay (Experiment 1), or after a one week delay (Experiment 2). Participants recognized neologisms more quickly if they had seen them before in the experiment. These results show that memory traces for novel morphologically complex words already come into existence after a very first exposure and that they last for at least a week. We did not find evidence for a role of sleep in the formation of memory traces. Interestingly, Base Frequency appeared to play a role in the processing of the neologisms also when they were presented a second time and had their own memory traces.
  • Van Berkum, J. J. A., Holleman, B., Nieuwland, M. S., Otten, M., & Murre, J. (2009). Right or wrong? The brain's fast response to morally objectionable statements. Psychological Science, 20, 1092-1099. doi:10.1111/j.1467-9280.2009.02411.x.

    Abstract

    How does the brain respond to statements that clash with a person's value system? We recorded event-related brain potentials while respondents from contrasting political-ethical backgrounds completed an attitude survey on drugs, medical ethics, social conduct, and other issues. Our results show that value-based disagreement is unlocked by language extremely rapidly, within 200 to 250 ms after the first word that indicates a clash with the reader's value system (e.g., "I think euthanasia is an acceptable/unacceptable…"). Furthermore, strong disagreement rapidly influences the ongoing analysis of meaning, which indicates that even very early processes in language comprehension are sensitive to a person's value system. Our results testify to rapid reciprocal links between neural systems for language and for valuation.

    Additional information

    Critical survey statements (in Dutch)
  • Van den Brink, D., & Hagoort, P. (2004). The influence of semantic and syntactic context constraints on lexical selection and integration in spoken-word comprehension as revealed by ERPs. Journal of Cognitive Neuroscience, 16(6), 1068-1084. doi:10.1162/0898929041502670.

    Abstract

    An event-related brain potential experiment was carried out to investigate the influence of semantic and syntactic context constraints on lexical selection and integration in spoken-word comprehension. Subjects were presented with constraining spoken sentences that contained a critical word that was either (a) congruent, (b) semantically and syntactically incongruent, but beginning with the same initial phonemes as the congruent critical word, or (c) semantically and syntactically incongruent, beginning with phonemes that differed from the congruent critical word. Relative to the congruent condition, an N200 effect reflecting difficulty in the lexical selection process was obtained in the semantically and syntactically incongruent condition where word onset differed from that of the congruent critical word. Both incongruent conditions elicited a large N400 followed by a left anterior negativity (LAN) time-locked to the moment of word category violation and a P600 effect. These results would best fit within a cascaded model of spoken-word processing, proclaiming an optimal use of contextual information during spoken-word identification by allowing for semantic and syntactic processing to take place in parallel after bottom-up activation of a set of candidates, and lexical integration to proceed with a limited number of candidates that still match the acoustic input.
  • Van Turennout, M., Hagoort, P., & Brown, C. M. (1998). Brain activity during speaking: From syntax to phonology in 40 milliseconds. Science, 280(5363), 572-574. doi:10.1126/science.280.5363.572.

    Abstract

    In normal conversation, speakers translate thoughts into words at high speed. To enable this speed, the retrieval of distinct types of linguistic knowledge has to be orchestrated with millisecond precision. The nature of this orchestration is still largely unknown. This report presents dynamic measures of the real-time activation of two basic types of linguistic knowledge, syntax and phonology. Electrophysiological data demonstrate that during noun-phrase production speakers retrieve the syntactic gender of a noun before its abstract phonological properties. This two-step process operates at high speed: the data show that phonological information is already available 40 milliseconds after syntactic properties have been retrieved.
  • Van Wijk, C., & Kempen, G. (1987). A dual system for producing self-repairs in spontaneous speech: Evidence from experimentally elicited corrections. Cognitive Psychology, 19, 403-440. doi:10.1016/0010-0285(87)90014-4.

    Abstract

    This paper presents a cognitive theory on the production and shaping of self-repairs during speaking. In an extensive experimental study, a new technique is tried out: artificial elicitation of self-repairs. The data clearly indicate that two mechanisms for computing the shape of self-repairs should be distinguished. One is based on the repair strategy called reformulation, the second one on lemma substitution. W. Levelt’s (1983, Cognition, 14, 41-104) well-formedness rule, which connects self-repairs to coordinate structures, is shown to apply only to reformulations. In case of lemma substitution, a totally different set of rules is at work. The linguistic unit of central importance in reformulations is the major syntactic constituent; in lemma substitutions it is a prosodic unit, the phonological phrase. A parametrization of the model yielded a very satisfactory fit between observed and reconstructed scores.
  • Van Alphen, P. M., De Bree, E., Gerrits, E., De Jong, J., Wilsenach, C., & Wijnen, F. (2004). Early language development in children with a genetic risk of dyslexia. Dyslexia, 10, 265-288. doi:10.1002/dys.272.

    Abstract

    We report on a prospective longitudinal research programme exploring the connection between language acquisition deficits and dyslexia. The language development profile of children at-risk for dyslexia is compared to that of age-matched controls as well as of children who have been diagnosed with specific language impairment (SLI). The experiments described concern the perception and production of grammatical morphology, categorical perception of speech sounds, phonological processing (non-word repetition), mispronunciation detection, and rhyme detection. The results of each of these indicate that the at-risk children as a group underperform in comparison to the controls, and that, in most cases, they approach the SLI group. It can be concluded that dyslexia most likely has precursors in language development, also in domains other than those traditionally considered conditional for the acquisition of literacy skills. The dyslexia-SLI connection awaits further, particularly qualitative, analyses.
  • Van Leeuwen, T. M., Den Ouden, H. E. M., & Hagoort, P. (2011). Effective connectivity determines the nature of subjective experience in grapheme-color synesthesia. Journal of Neuroscience, 31, 9879-9884. doi:10.1523/JNEUROSCI.0569-11.2011.

    Abstract

    Synesthesia provides an elegant model to investigate neural mechanisms underlying individual differences in subjective experience in humans. In grapheme–color synesthesia, written letters induce color sensations, accompanied by activation of color area V4. Competing hypotheses suggest that enhanced V4 activity during synesthesia is either induced by direct bottom-up cross-activation from grapheme processing areas within the fusiform gyrus, or indirectly via higher-order parietal areas. Synesthetes differ in the way synesthetic color is perceived: “projector” synesthetes experience color externally colocalized with a presented grapheme, whereas “associators” report an internally evoked association. Using dynamic causal modeling for fMRI, we show that V4 cross-activation during synesthesia was induced via a bottom-up pathway (within fusiform gyrus) in projector synesthetes, but via a top-down pathway (via parietal lobe) in associators. These findings show how altered coupling within the same network of active regions leads to differences in subjective experience. Our findings reconcile the two most influential cross-activation accounts of synesthesia.
  • Van Alphen, P. M., & Smits, R. (2004). Acoustical and perceptual analysis of the voicing distinction in Dutch initial plosives: The role of prevoicing. Journal of Phonetics, 32(4), 455-491. doi:10.1016/j.wocn.2004.05.001.

    Abstract

    Three experiments investigated the voicing distinction in Dutch initial labial and alveolar plosives. The difference between voiced and voiceless Dutch plosives is generally described in terms of the presence or absence of prevoicing (negative voice onset time). Experiment 1 showed, however, that prevoicing was absent in 25% of voiced plosive productions across 10 speakers. The production of prevoicing was influenced by place of articulation of the plosive, by whether the plosive occurred in a consonant cluster or not, and by speaker sex. Experiment 2 was a detailed acoustic analysis of the voicing distinction, which identified several acoustic correlates of voicing. Prevoicing appeared to be by far the best predictor. Perceptual classification data revealed that prevoicing was indeed the strongest cue that listeners use when classifying plosives as voiced or voiceless. In the cases where prevoicing was absent, other acoustic cues influenced classification, such that some of these tokens were still perceived as being voiced. These secondary cues were different for the two places of articulation. We discuss the paradox raised by these findings: although prevoicing is the most reliable cue to the voicing distinction for listeners, it is not reliably produced by speakers.
  • Van de Meerendonk, N., Indefrey, P., Chwilla, D. J., & Kolk, H. H. (2011). Monitoring in language perception: Electrophysiological and hemodynamic responses to spelling violations. Neuroimage, 54, 2350-2363. doi:10.1016/j.neuroimage.2010.10.022.

    Abstract

    The monitoring theory of language perception proposes that competing representations that are caused by strong expectancy violations can trigger a conflict which elicits reprocessing of the input to check for possible processing errors. This monitoring process is thought to be reflected by the P600 component in the EEG. The present study further investigated this monitoring process by comparing syntactic and spelling violations in an EEG and an fMRI experiment. To assess the effect of conflict strength, misspellings were embedded in sentences that were weakly or strongly predictive of a critical word. In support of the monitoring theory, syntactic and spelling violations elicited similarly distributed P600 effects. Furthermore, the P600 effect was larger to misspellings in the strongly compared to the weakly predictive sentences. The fMRI results showed that both syntactic and spelling violations increased activation in the left inferior frontal gyrus (lIFG), while only the misspellings activated additional areas. Conflict strength did not affect the hemodynamic response to spelling violations. These results extend the idea that the lIFG is involved in implementing cognitive control in the presence of representational conflicts in general to the processing of errors in language perception.
  • Van de Ven, M., & Gussenhoven, C. (2011). On the timing of the final rise in Dutch falling-rising intonation contours. Journal of Phonetics, 39, 225-236. doi:10.1016/j.wocn.2011.01.006.

    Abstract

    A corpus of Dutch falling-rising intonation contours with early nuclear accent was elicited from nine speakers with a view to establishing the extent to which the low F0 target immediately preceding the final rise, was attracted by a post-nuclear stressed syllable (PNS) in either of the last two words or by Second Occurrence Contrastive Focus (SOCF) on either of these words. We found a small effect of foot type, which we interpret as due to a rhythmic 'trochaic enhancement' effect. The results show that neither PNS nor SOCF influences the location of the low F0 target, which appears consistently to be timed with reference to the utterance end. It is speculated that there are two ways in which postnuclear tones can be timed. The first is by means of a phonological association with a post-nuclear stressed syllable, as in Athenian Greek and Roermond Dutch. The second is by a fixed distance from the utterance end or from the target of an adjacent tone. Accordingly, two phonological mechanisms are defended, association and edge alignment, such that all tones edge-align, but only some associate. Specifically, no evidence was found for a third situation that can be envisaged, in which a post-nuclear tone is gradiently attracted to a post-nuclear stress.

  • Van Gijn, R. (2011). Pronominal affixes, the best of both worlds: The case of Yurakaré. Transactions of the Philological Society, 109(1), 41-58. doi:10.1111/j.1467-968X.2011.01249.x.

    Abstract

    Pronominal affixes in polysynthetic languages have an ambiguous status in the sense that they have characteristics normally associated with free pronouns as well as characteristics associated with agreement markers. This situation arises because pronominal affixes represent intermediate stages in a diachronic development from independent pronouns to agreement markers. Because this diachronic change is not abrupt, pronominal affixes can show different characteristics from language to language. By presenting an in-depth discussion of the pronominal affixes of Yurakaré, an unclassified language from Bolivia, I argue that these so-called intermediate stages as typically attested in polysynthetic languages actually represent economical systems that combine advantages of agreement markers and of free pronouns. In terms of diachronic development, such ‘intermediate’ systems, being functionally well-adapted, appear to be rather stable, and can even be reinforced by subsequent diachronic developments.
  • Van Gijn, R. (2011). Subjects and objects: A semantic account of Yurakaré argument structure. International Journal of American Linguistics, 77, 595-621. doi:10.1086/662158.

    Abstract

    Yurakaré (unclassified, central Bolivia) marks core arguments on the verb by means of pronominal affixes. Subjects are suffixed, objects are prefixed. There are six types of head-marked objects in Yurakaré, each with its own morphosyntactic and semantic properties. Distributional patterns suggest that the six objects can be divided into two larger groups reminiscent of the typologically recognized direct vs. indirect object distinction. This paper looks at the interaction of this complex system of participant marking and verbal semantics. By investigating the participant-marking patterns of nine verb classes (four representing a gradual decrease of patienthood of the P participant, five a gradual decrease of agentivity of the A participant), I come to the conclusion that grammatical roles in Yurakaré can be defined semantically, and case frames are to a high degree determined by verbal semantics.
  • Van Leeuwen, E. J. C., Zimmerman, E., & Davila Ross, M. (2011). Responding to inequities: Gorillas try to maintain their competitive advantage during play fights. Biology Letters, 7(1), 39-42. doi:10.1098/rsbl.2010.0482.

    Abstract

    Humans respond to unfair situations in various ways. Experimental research has revealed that non-human species also respond to unequal situations in the form of inequity aversions when they have the disadvantage. The current study focused on play fights in gorillas to explore for the first time, to our knowledge, if/how non-human species respond to inequities in natural social settings. Hitting causes a naturally occurring inequity among individuals and here it was specifically assessed how the hitters and their partners engaged in play chases that followed the hitting. The results of this work showed that the hitters significantly more often moved first to run away immediately after the encounter than their partners. These findings provide evidence that non-human species respond to inequities by trying to maintain their competitive advantages. We conclude that non-human primates, like humans, may show different responses to inequities and that they may modify them depending on whether they have the advantage or the disadvantage.
  • Van de Ven, M., Tucker, B. V., & Ernestus, M. (2011). Semantic context effects in the comprehension of reduced pronunciation variants. Memory & Cognition, 39, 1301-1316. doi:10.3758/s13421-011-0103-2.

    Abstract

    Listeners require context to understand the highly reduced words that occur in casual speech. The present study reports four auditory lexical decision experiments in which the role of semantic context in the comprehension of reduced versus unreduced speech was investigated. Experiments 1 and 2 showed semantic priming for combinations of unreduced, but not reduced, primes and low-frequency targets. In Experiment 3, we crossed the reduction of the prime with the reduction of the target. Results showed no semantic priming from reduced primes, regardless of the reduction of the targets. Finally, Experiment 4 showed that reduced and unreduced primes facilitate upcoming low-frequency related words equally if the interstimulus interval is extended. These results suggest that semantically related words need more time to be recognized after reduced primes, but once reduced primes have been fully (semantically) processed, these primes can facilitate the recognition of upcoming words as well as do unreduced primes.
  • Van Gijn, R. (2009). The phonology of mixed languages. Journal of Pidgin and Creole Languages, 24(1), 91-117. doi:10.1075/jpcl.24.1.04gij.

    Abstract

    Mixed languages are said to be the result of a process of intertwining (e.g. Bakker & Muysken 1995, Bakker 1997), a regular process in which the grammar of one language is combined with the lexicon of another. However, the outcome of this process differs from language pair to language pair. As far as morphosyntax is concerned, people have discussed these different outcomes and the reasons for them extensively, e.g. Bakker 1997 for Michif, Mous 2003 for Ma’a, Muysken 1997a for Media Lengua and 1997b for Callahuaya. The issue of phonology, however, has not generated a large debate. This paper compares the phonological systems of the mixed languages Media Lengua, Callahuaya, Mednyj Aleut, and Michif. It will be argued that the outcome of the process of intertwining, as far as phonology is concerned, is at least partly determined by the extent to which unmixed phonological domains exist.
  • Vandeberg, L., Guadalupe, T., & Zwaan, R. A. (2011). How verbs can activate things: Cross-language activation across word classes. Acta Psychologica, 138, 68-73. doi:10.1016/j.actpsy.2011.05.007.

    Abstract

    The present study explored whether language-nonselective access in bilinguals occurs across word classes in a sentence context. Dutch–English bilinguals were auditorily presented with English (L2) sentences while looking at a visual world. The sentences contained interlingual homophones from distinct lexical categories (e.g., the English verb spoke, which overlaps phonologically with the Dutch noun for ghost, spook). Eye movement recordings showed that depictions of referents of the Dutch (L1) nouns attracted more visual attention than unrelated distractor pictures in sentences containing homophones. This finding shows that native language objects are activated during second language verb processing despite the structural information provided by the sentence context. Research highlights: We show that native language words are activated during second language sentence processing. We tested this in a visual world setting on homophones with a different word class across languages. Fixations show that processing second language verbs activated native language nouns.
  • Vartiainen, J., Aggujaro, S., Lehtonen, M., Hulten, A., Laine, M., & Salmelin, R. (2009). Neural dynamics of reading morphologically complex words. NeuroImage, 47, 2064-2072. doi:10.1016/j.neuroimage.2009.06.002.

    Abstract

    Despite considerable research interest, it is still an open issue as to how morphologically complex words such as “car+s” are represented and processed in the brain. We studied the neural correlates of the processing of inflected nouns in the morphologically rich Finnish language. Previous behavioral studies in Finnish have yielded a robust inflectional processing cost, i.e., inflected words are harder to recognize than otherwise matched morphologically simple words. Theoretically this effect could stem either from decomposition of inflected words into a stem and a suffix at input level and/or from subsequent recombination at the semantic–syntactic level to arrive at an interpretation of the word. To shed light on this issue, we used magnetoencephalography to reveal the time course and localization of neural effects of morphological structure and frequency of written words. Ten subjects silently read high- and low-frequency Finnish words in inflected and monomorphemic form. Morphological complexity was accompanied by stronger and longer-lasting activation of the left superior temporal cortex from 200 ms onwards. Earlier effects of morphology were not found, supporting the view that the well-established behavioral processing cost for inflected words stems from the semantic–syntactic level rather than from early decomposition. Since the effect of morphology was detected throughout the range of word frequencies employed, the majority of inflected Finnish words appears to be represented in decomposed form and only very high-frequency inflected words may acquire full-form representations.
  • Verdonschot, R. G., La Heij, W., Paolieri, D., Zhang, Q., & Schiller, N. O. (2011). Homophonic context effects when naming Japanese kanji: Evidence for processing costs. Quarterly Journal of Experimental Psychology, 64(9), 1836-1849. doi:10.1080/17470218.2011.585241.

    Abstract

    The current study investigated the effects of phonologically related context pictures on the naming latencies of target words in Japanese and Chinese. Reading bare words in alphabetic languages has been shown to be rather immune to effects of context stimuli, even when these stimuli are presented in advance of the target word (e. g., Glaser & Dungelhoff, 1984; Roelofs, 2003). However, recently, semantic context effects of distractor pictures on the naming latencies of Japanese kanji (but not Chinese hanzi) words have been observed (Verdonschot, La Heij, & Schiller, 2010). In the present study, we further investigated this issue using phonologically related (i.e., homophonic) context pictures when naming target words in either Chinese or Japanese. We found that pronouncing bare nouns in Japanese is sensitive to phonologically related context pictures, whereas this is not the case in Chinese. The difference between these two languages is attributed to processing costs caused by multiple pronunciations for Japanese kanji.
  • Verdonschot, R. G., Kiyama, S., Tamaoka, K., Kinoshita, S., La Heij, W., & Schiller, N. O. (2011). The functional unit of Japanese word naming: Evidence from masked priming. Journal of Experimental Psychology: Learning, Memory, and Cognition, 37(6), 1458-1473. doi:10.1037/a0024491.

    Abstract

    Theories of language production generally describe the segment as the basic unit in phonological encoding (e.g., Dell, 1988; Levelt, Roelofs, & Meyer, 1999). However, there is also evidence that such a unit might be language specific. Chen, Chen, and Dell (2002), for instance, found no effect of single segments when using a preparation paradigm. To shed more light on the functional unit of phonological encoding in Japanese, a language often described as being mora based, we report the results of 4 experiments using word reading tasks and masked priming. Experiment 1 demonstrated using Japanese kana script that primes, which overlapped in the whole mora with target words, sped up word reading latencies but not when just the onset overlapped. Experiments 2 and 3 investigated a possible role of script by using combinations of romaji (Romanized Japanese) and hiragana; again, facilitation effects were found only when the whole mora and not the onset segment overlapped. Experiment 4 distinguished mora priming from syllable priming and revealed that the mora priming effects obtained in the first 3 experiments are also obtained when a mora is part of a syllable. Again, no priming effect was found for single segments. Our findings suggest that the mora and not the segment (phoneme) is the basic functional phonological unit in Japanese language production planning.
  • Verhagen, J., & Schimke, S. (2009). Differences or fundamental differences? Zeitschrift für Sprachwissenschaft, 28(1), 97-106. doi:10.1515/ZFSW.2009.011.
  • Verhagen, J. (2009). Temporal adverbials, negation and finiteness in Dutch as a second language: A scope-based account. IRAL, 47(2), 209-237. doi:10.1515/iral.2009.009.

    Abstract

    This study investigates the acquisition of post-verbal (temporal) adverbials and post-verbal negation in L2 Dutch. It is based on previous findings for L2 French that post-verbal negation poses less of a problem for L2 learners than post-verbal adverbial placement (Hawkins, Towell, Bazergui, Second Language Research 9: 189-233, 1993; Herschensohn, Minimally raising the verb issue: 325-336, Cascadilla Press, 1998). The current data show that, at first sight, Moroccan and Turkish learners of Dutch also have fewer problems with post-verbal negation than with post-verbal adverbials. However, when a distinction is made between different types of adverbials, it seems that this holds for adverbials of position such as 'today' but not for adverbials of contrast such as 'again'. To account for this difference, it is argued that different types of adverbial occupy different positions in the L2 data for reasons of scope marking. Moreover, the placement of adverbials such as 'again' interacts with the acquisition of finiteness marking (resulting in post-verbal placement), while there is no such interaction between adverbials such as 'today' and finiteness marking.
  • Verhagen, J. (2011). Verb placement in second language acquisition: Experimental evidence for the different behavior of auxiliary and lexical verbs. Applied Psycholinguistics, 32, 821-858. doi:10.1017/S0142716411000087.

    Abstract

    This study investigates the acquisition of verb placement by Moroccan and Turkish second language (L2) learners of Dutch. Elicited production data corroborate earlier findings from L2 German that learners who do not produce auxiliaries do not raise lexical verbs over negation, whereas learners who produce auxiliaries do. Data from elicited imitation and sentence matching support this pattern and show that learners can have grammatical knowledge of auxiliary placement before they can produce auxiliaries. With lexical verbs, they do not show such knowledge. These results present further evidence for the different behavior of auxiliary and lexical verbs in early stages of L2 acquisition.
  • Verhoeven, L., Baayen, R. H., & Schreuder, R. (2004). Orthographic constraints and frequency effects in complex word identification. Written Language and Literacy, 7(1), 49-59.

    Abstract

    In an experimental study we explored the role of word frequency and orthographic constraints in the reading of Dutch bisyllabic words. Although Dutch orthography is highly regular, several deviations from a one-to-one correspondence occur. In polysyllabic words, the grapheme E may represent three different vowels: /ε/, /e/, or /œ/. In the experiment, skilled adult readers were presented lists of bisyllabic words containing the vowel E in the initial syllable and the same grapheme or another vowel in the second syllable. We expected word frequency to be related to word latency scores. On the basis of general word frequency data, we also expected the interpretation of the initial syllable as a stressed /e/ to be facilitated as compared to the interpretation of an unstressed /œ/. We found a strong negative correlation between word frequency and latency scores. Moreover, for words with E in either syllable we found a preference for a stressed /e/ interpretation, indicating a lexical frequency effect. The results are discussed with reference to a parallel dual-route model of word decoding.
  • Vernes, S. C., MacDermot, K. D., Monaco, A. P., & Fisher, S. E. (2009). Assessing the impact of FOXP1 mutations on developmental verbal dyspraxia. European Journal of Human Genetics, 17(10), 1354-1358. doi:10.1038/ejhg.2009.43.

    Abstract

    Neurodevelopmental disorders that disturb speech and language are highly heritable. Isolation of the underlying genetic risk factors has been hampered by complexity of the phenotype and potentially large number of contributing genes. One exception is the identification of rare heterozygous mutations of the FOXP2 gene in a monogenic syndrome characterised by impaired sequencing of articulatory gestures, disrupting speech (developmental verbal dyspraxia, DVD), as well as multiple deficits in expressive and receptive language. The protein encoded by FOXP2 belongs to a divergent subgroup of forkhead-box transcription factors, with a distinctive DNA-binding domain and motifs that mediate hetero- and homodimerisation. FOXP1, the most closely related member of this subgroup, can directly interact with FOXP2 and is co-expressed in neural structures relevant to speech and language disorders. Moreover, investigations of songbird orthologues indicate that combinatorial actions of the two proteins may play important roles in vocal learning, leading to the suggestion that human FOXP1 should be considered a strong candidate for involvement in DVD. Thus, in this study, we screened the entire coding region of FOXP1 (exons and flanking intronic sequence) for nucleotide changes in a panel of probands used earlier to detect novel mutations in FOXP2. A non-synonymous coding change was identified in a single proband, yielding a proline-to-alanine change (P215A). However, this was also found in a random control sample. Analyses of non-coding SNP changes did not find any correlation with affection status. We conclude that FOXP1 mutations are unlikely to represent a major cause of DVD.

    Additional information

    ejhg200943x1.pdf
  • Vernes, S. C., Oliver, P. L., Spiteri, E., Lockstone, H. E., Puliyadi, R., Taylor, J. M., Ho, J., Mombereau, C., Brewer, A., Lowy, E., Nicod, J., Groszer, M., Baban, D., Sahgal, N., Cazier, J.-B., Ragoussis, J., Davies, K. E., Geschwind, D. H., & Fisher, S. E. (2011). Foxp2 regulates gene networks implicated in neurite outgrowth in the developing brain. PLoS Genetics, 7(7): e1002145. doi:10.1371/journal.pgen.1002145.

    Abstract

    Forkhead-box protein P2 is a transcription factor that has been associated with intriguing aspects of cognitive function in humans, non-human mammals, and song-learning birds. Heterozygous mutations of the human FOXP2 gene cause a monogenic speech and language disorder. Reduced functional dosage of the mouse version (Foxp2) causes deficient cortico-striatal synaptic plasticity and impairs motor-skill learning. Moreover, the songbird orthologue appears critically important for vocal learning. Across diverse vertebrate species, this well-conserved transcription factor is highly expressed in the developing and adult central nervous system. Very little is known about the mechanisms regulated by Foxp2 during brain development. We used an integrated functional genomics strategy to robustly define Foxp2-dependent pathways, both direct and indirect targets, in the embryonic brain. Specifically, we performed genome-wide in vivo ChIP–chip screens for Foxp2-binding and thereby identified a set of 264 high-confidence neural targets under strict, empirically derived significance thresholds. The findings, coupled to expression profiling and in situ hybridization of brain tissue from wild-type and mutant mouse embryos, strongly highlighted gene networks linked to neurite development. We followed up our genomics data with functional experiments, showing that Foxp2 impacts on neurite outgrowth in primary neurons and in neuronal cell models. Our data indicate that Foxp2 modulates neuronal network formation, by directly and indirectly regulating mRNAs involved in the development and plasticity of neuronal connections
  • Vernes, S. C., & Fisher, S. E. (2009). Unravelling neurogenetic networks implicated in developmental language disorders. Biochemical Society Transactions (London), 37, 1263-1269. doi:10.1042/BST0371263.

    Abstract

    Childhood syndromes disturbing language development are common and display high degrees of heritability. In most cases, the underlying genetic architecture is likely to be complex, involving multiple chromosomal loci and substantial heterogeneity, which makes it difficult to track down the crucial genomic risk factors. Investigation of rare Mendelian phenotypes offers a complementary route for unravelling key neurogenetic pathways. The value of this approach is illustrated by the discovery that heterozygous FOXP2 (where FOX is forkhead box) mutations cause an unusual monogenic disorder, characterized by problems with articulating speech along with deficits in expressive and receptive language. FOXP2 encodes a regulatory protein, belonging to the forkhead box family of transcription factors, known to play important roles in modulating gene expression in development and disease. Functional genetics using human neuronal models suggest that the different FOXP2 isoforms generated by alternative splicing have distinct properties and may act to regulate each other's activity. Such investigations have also analysed the missense and nonsense mutations found in cases of speech and language disorder, showing that they alter intracellular localization, DNA binding and transactivation capacity of the mutated proteins. Moreover, in the brains of mutant mice, aetiological mutations have been found to disrupt the synaptic plasticity of Foxp2-expressing circuitry. Finally, although mutations of FOXP2 itself are rare, the downstream networks which it regulates in the brain appear to be broadly implicated in typical forms of language impairment. Thus, through ongoing identification of regulated targets and interacting co-factors, this gene is providing the first molecular entry points into neural mechanisms that go awry in language-related disorders
  • Vigliocco, G., Vinson, D. P., Indefrey, P., Levelt, W. J. M., & Hellwig, F. M. (2004). Role of grammatical gender and semantics in German word production. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30(2), 483-497. doi:10.1037/0278-7393.30.2.483.

    Abstract

    Semantic substitution errors (e.g., saying "arm" when "leg" is intended) are among the most common types of errors occurring during spontaneous speech. It has been shown that grammatical gender of German target nouns is preserved in the errors (E. Marx, 1999). In 3 experiments, the authors explored different accounts of the grammatical gender preservation effect in German. In all experiments, semantic substitution errors were induced using a continuous naming paradigm. In Experiment 1, it was found that gender preservation disappeared when speakers produced bare nouns. Gender preservation was found when speakers produced phrases with determiners marked for gender (Experiment 2) but not when the produced determiners were not marked for gender (Experiment 3). These results are discussed in the context of models of lexical retrieval during production.
  • De Vignemont, F., Majid, A., Jola, C., & Haggard, P. (2009). Segmenting the body into parts: Evidence from biases in tactile perception. Quarterly Journal of Experimental Psychology, 62, 500-512. doi:10.1080/17470210802000802.

    Abstract

    How do we individuate body parts? Here, we investigated the effect of body segmentation between hand and arm in tactile and visual perception. In a first experiment, we showed that two tactile stimuli felt farther away when they were applied across the wrist than when they were applied within a single body part (palm or forearm), indicating a “category boundary effect”. In the following experiments, we excluded two hypotheses, which attributed tactile segmentation to other, nontactile factors. In Experiment 2, we showed that the boundary effect does not arise from motor cues. The effect was reduced during a motor task involving flexion and extension movements of the wrist joint. Action brings body parts together into functional units, instead of pulling them apart. In Experiments 3 and 4, we showed that the effect does not arise from perceptual cues of visual discontinuities. We did not find any segmentation effect for the visual percept of the body in Experiment 3, nor for a neutral shape in Experiment 4. We suggest that the mental representation of the body is structured in categorical body parts delineated by joints, and that this categorical representation modulates tactile spatial perception.
  • Voermans, N. C., Petersson, K. M., Daudey, L., Weber, B., Van Spaendonck, K. P., Kremer, H. P. H., & Fernández, G. (2004). Interaction between the Human Hippocampus and the Caudate Nucleus during Route Recognition. Neuron, 43, 427-435. doi:10.1016/j.neuron.2004.07.009.

    Abstract

    Navigation through familiar environments can rely upon distinct neural representations that are related to different memory systems with either the hippocampus or the caudate nucleus at their core. However, it is a fundamental question whether and how these systems interact during route recognition. To address this issue, we combined a functional neuroimaging approach with a naturally occurring, well-controlled human model of caudate nucleus dysfunction (i.e., pre-clinical and early-stage Huntington’s disease). Our results reveal a noncompetitive interaction so that the hippocampus compensates for gradual caudate nucleus dysfunction with a gradual activity increase, maintaining normal behavior. Furthermore, we revealed an interaction between medial temporal and caudate activity in healthy subjects, which was adaptively modified in Huntington patients to allow compensatory hippocampal processing. Thus, the two memory systems contribute in a noncompetitive, cooperative manner to route recognition, which enables the hippocampus to compensate seamlessly for the functional degradation of the caudate nucleus.
  • Vonk, W., Hustinx, L. G., & Simons, W. H. (1992). The use of referential expressions in structuring discourse. Language and Cognitive Processes, 301-333. doi:10.1080/01690969208409389.

    Abstract

    Referential expressions that refer to entities that occur in a text differ in lexical specificity. It is claimed that if these anaphoric expressions are more specific than necessary for their identificational function, they not only relate the current information to the intended referent, but also contribute to the expression of the thematic structure of the discourse and to the comprehension of the thematic structure. In two controlled production experiments, it is demonstrated that thematic shifts are produced when one has to make use of such an overspecified expression, and that overspecified referential expressions are produced when one has to formulate a thematic shift. In two comprehension experiments, using a probe recognition technique, it is shown that an overspecified referential expression decreases the availability of information contained in a sentence that precedes the overspecification. This finding is interpreted in terms of the thematic structuring function of referential expressions in the understanding of discourse.
  • De Vos, C. (2011). A signers' village in Bali, Indonesia. Minpaku Anthropology Newsletter, 33, 4-5.
  • De Vos, C. (2009). [Review of the book Language complexity as an evolving variable ed. by Geoffrey Sampson, David Gil and Peter Trudgill]. LINGUIST List, 20.4275. Retrieved from http://linguistlist.org/issues/20/20-4275.html.
  • De Vos, C., Van der Kooij, E., & Crasborn, O. (2009). Mixed signals: Combining linguistic and affective functions of eyebrows in questions in Sign Language of the Netherlands. Language and Speech, 52(2/3), 315-339. doi:10.1177/0023830909103177.

    Abstract

    The eyebrows are used as conversational signals in face-to-face spoken interaction (Ekman, 1979). In Sign Language of the Netherlands (NGT), the eyebrows are typically furrowed in content questions, and raised in polar questions (Coerts, 1992). On the other hand, these eyebrow positions are also associated with anger and surprise, respectively, in general human communication (Ekman, 1993). This overlap in the functional load of the eyebrow positions results in a potential conflict for NGT signers when combining these functions simultaneously. In order to investigate the effect of the simultaneous realization of both functions on the eyebrow position we elicited instances of both question types with neutral affect and with various affective states. The data were coded using the Facial Action Coding System (FACS: Ekman, Friesen, & Hager, 2002) for type of brow movement as well as for intensity. FACS allows for the coding of muscle groups, which are termed Action Units (AUs) and which produce facial appearance changes. The results show that linguistic and affective functions of eyebrows may influence each other in NGT. That is, in surprised polar questions and angry content questions a phonetic enhancement takes place of raising and furrowing, respectively. In the items with contrasting eyebrow movements, the grammatical and affective AUs are either blended (occur simultaneously) or they are realized sequentially. Interestingly, the absence of eyebrow raising (marked by AU 1+2) in angry polar questions, and the presence of eyebrow furrowing (realized by AU 4) in surprised content questions, suggest that in general AU 4 may be phonetically stronger than AU 1 and AU 2, independent of its linguistic or affective function.
  • De Vos, C. (2011). Kata Kolok color terms and the emergence of lexical signs in rural signing communities. The Senses & Society, 6(1), 68-76. doi:10.2752/174589311X12893982233795.

    Abstract

    How do new languages develop systematic ways to talk about sensory experiences, such as color? To what extent is the evolution of color terms guided by societal factors? This paper describes the color lexicon of a rural sign language called Kata Kolok which emerged approximately one century ago in a Balinese village. Kata Kolok has four color signs: black, white, red and a blue-green term. In addition, two non-conventionalized means are used to provide color descriptions: naming relevant objects, and pointing to objects in the vicinity. Comparison with Balinese culture and spoken Balinese brings to light discrepancies between the systems, suggesting that neither cultural practices nor language contact have driven the formation of color signs in Kata Kolok. The few lexicographic investigations from other rural sign languages report limitations in the domain of color. On the other hand, larger, urban signed languages have extensive systems; for example, Australian Sign Language has up to nine color terms (Woodward 1989: 149). These comparisons support the finding that rural sign languages like Kata Kolok fail to provide the societal pressures for the lexicon to expand further.
  • De Vos, C. (2004). Over de biologische functie van taal: Pinker vs. Chomsky. Honours Review, 2(1), 20-25.

    Abstract

    How did the complex language of humans come into being? Gradually, through natural selection, because growing grammatical abilities gave humans an evolutionary advantage? Or suddenly, as an unintended by-product or side effect of a genetic mutation, without any adaptive process being involved? In this article I set the arguments of Pinker and Bloom for the first position against the arguments of Chomsky and Gould for the second. I then show that these two extreme positions leave room for other options that merit further investigation. Genetic research in the coming decades, for instance, may yield information that makes it necessary to qualify both positions.
  • Vosse, T., & Kempen, G. (2009). In defense of competition during syntactic ambiguity resolution. Journal of Psycholinguistic Research, 38(1), 1-9. doi:10.1007/s10936-008-9075-1.

    Abstract

    In a recent series of publications (Traxler et al., J Mem Lang 39:558–592, 1998; Van Gompel et al., J Mem Lang 52:284–307, 2005; see also Van Gompel et al., in Kennedy et al. (Eds.), Reading as a perceptual process, Oxford: Elsevier, pp. 621–648, 2000; Van Gompel et al., J Mem Lang 45:225–258, 2001) eye tracking data are reported showing that globally ambiguous (GA) sentences are read faster than locally ambiguous (LA) counterparts. They argue that these data rule out “constraint-based” models where syntactic and conceptual processors operate concurrently and syntactic ambiguity resolution is accomplished by competition. Such models predict the opposite pattern of reading times. However, this argument against competition is valid only in conjunction with two standard assumptions in current constraint-based models of sentence comprehension: (1) that syntactic competitions (e.g., Which is the best attachment site of the incoming constituent?) are pooled together with conceptual competitions (e.g., Which attachment site entails the most plausible meaning?), and (2) that the duration of a competition is a function of the overall (pooled) quality score obtained by each competitor. We argue that it is not necessary to abandon competition as a successful basis for explaining parsing phenomena and that the above-mentioned reading time data can be accounted for by a parallel-interactive model with conceptual and syntactic processors that do not pool their quality scores together. Within the individual linguistic modules, decision-making can very well be competition-based.
  • Vosse, T., & Kempen, G. (2009). The Unification Space implemented as a localist neural net: Predictions and error-tolerance in a constraint-based parser. Cognitive Neurodynamics, 3, 331-346. doi:10.1007/s11571-009-9094-0.

    Abstract

    We introduce a novel computer implementation of the Unification-Space parser (Vosse & Kempen 2000) in the form of a localist neural network whose dynamics is based on interactive activation and inhibition. The wiring of the network is determined by Performance Grammar (Kempen & Harbusch 2003), a lexicalist formalism with feature unification as binding operation. While the network is processing input word strings incrementally, the evolving shape of parse trees is represented in the form of changing patterns of activation in nodes that code for syntactic properties of words and phrases, and for the grammatical functions they fulfill. The system is capable, at least in a qualitative and rudimentary sense, of simulating several important dynamic aspects of human syntactic parsing, including garden-path phenomena and reanalysis, effects of complexity (various types of clause embeddings), fault-tolerance in case of unification failures and unknown words, and predictive parsing (expectation-based analysis, surprisal effects). English is the target language of the parser described.
  • De Vries, M., Christiansen, M. H., & Petersson, K. M. (2011). Learning recursion: Multiple nested and crossed dependencies. Biolinguistics, 5(1/2), 010-035.

    Abstract

    Language acquisition in both natural and artificial language learning settings crucially depends on extracting information from sequence input. A shared sequence learning mechanism is thus assumed to underlie both natural and artificial language learning. A growing body of empirical evidence is consistent with this hypothesis. By means of artificial language learning experiments, we may therefore gain more insight into this shared mechanism. In this paper, we review empirical evidence from artificial language learning and computational modelling studies, as well as natural language data, and suggest that there are two key factors that help determine processing complexity in sequence learning, and thus in natural language processing. We propose that the specific ordering of non-adjacent dependencies (i.e., nested or crossed), as well as the number of non-adjacent dependencies to be resolved simultaneously (i.e., two or three), are important factors in gaining more insight into the boundaries of human sequence learning, and thus also of natural language processing. The implications for theories of linguistic competence are discussed.
  • Vuong, L., & Martin, R. C. (2011). LIFG-based attentional control and the resolution of lexical ambiguities in sentence context. Brain and Language, 116, 22-32. doi:10.1016/j.bandl.2010.09.012.

    Abstract

    The role of attentional control in lexical ambiguity resolution was examined in two patients with damage to the left inferior frontal gyrus (LIFG) and one control patient with non-LIFG damage. Experiment 1 confirmed that the LIFG patients had attentional control deficits compared to normal controls while the non-LIFG patient was relatively unimpaired. Experiment 2 showed that all three patients did as well as normal controls in using biasing sentence context to resolve lexical ambiguities involving balanced ambiguous words, but only the LIFG patients took an abnormally long time on lexical ambiguities that resolved toward a subordinate meaning of biased ambiguous words. Taken together, the results suggest that attentional control plays an important role in the resolution of certain lexical ambiguities – those that induce strong interference from context-inappropriate meanings (i.e., dominant meanings of biased ambiguous words).
  • Waller, D., Loomis, J. M., & Haun, D. B. M. (2004). Body-based senses enhance knowledge of directions in large-scale environments. Psychonomic Bulletin & Review, 11(1), 157-163.

    Abstract

    Previous research has shown that inertial cues resulting from passive transport through a large environment do not necessarily facilitate acquiring knowledge about its layout. Here we examine whether the additional body-based cues that result from active movement facilitate the acquisition of spatial knowledge. Three groups of participants learned locations along an 840-m route. One group walked the route during learning, allowing access to body-based cues (i.e., vestibular, proprioceptive, and efferent information). Another group learned by sitting in the laboratory, watching videos made from the first group. A third group watched a specially made video that minimized potentially confusing head-on-trunk rotations of the viewpoint. All groups were tested on their knowledge of directions in the environment as well as on its configural properties. Having access to body-based information reduced pointing error by a small but significant amount. Regardless of the sensory information available during learning, participants exhibited strikingly common biases.
  • Wang, L., Hagoort, P., & Yang, Y. (2009). Semantic illusion depends on information structure: ERP evidence. Brain Research, 1282, 50-56. doi:10.1016/j.brainres.2009.05.069.

    Abstract

    Next to propositional content, speakers distribute information in their utterances in such a way that listeners can make a distinction between new (focused) and given (non-focused) information. This is referred to as information structure. We measured event-related potentials (ERPs) to explore the role of information structure in semantic processing. Following different questions in wh-question-answer pairs (e.g. What kind of vegetable did Ming buy for cooking today? /Who bought the vegetables for cooking today?), the answer sentences (e.g., Ming bought eggplant/beef to cook today.) contained a critical word, which was either semantically appropriate (eggplant) or inappropriate (beef), and occurred either in focus or in non-focus position. The results showed a full N400 effect only when the critical words were in focus position. In non-focus position a strongly reduced N400 effect was observed, in line with the well-known semantic illusion effect. The results suggest that information structure facilitates semantic processing by devoting more resources to focused information.
  • Wang, L., Bastiaansen, M. C. M., Yang, Y., & Hagoort, P. (2011). The influence of information structure on the depth of semantic processing: How focus and pitch accent determine the size of the N400 effect. Neuropsychologia, 49, 813-820. doi:10.1016/j.neuropsychologia.2010.12.035.

    Abstract

    To highlight relevant information in dialogues, both wh-question context and pitch accent in answers can be used, such that focused information gains more attention and is processed more elaborately. To evaluate the relative influence of context and pitch accent on the depth of semantic processing, we measured Event-Related Potentials (ERPs) to auditorily presented wh-question-answer pairs. A semantically incongruent word in the answer occurred either in focus or non-focus position as determined by the context, and this word was either accented or unaccented. Semantic incongruency elicited different N400 effects in different conditions. The largest N400 effect was found when the question-marked focus was accented, while the other three conditions elicited smaller N400 effects. The results suggest that context and accentuation interact. Thus accented focused words were processed more deeply compared to conditions where focus and accentuation mismatched, or when the new information had no marking. In addition, there seem to be sex differences in the depth of semantic processing. For males, a significant N400 effect was observed only when the question-marked focus was accented, while reduced N400 effects were found in the other dialogues. In contrast, females produced similar N400 effects in all the conditions. These results suggest that regardless of external cues, females tend to engage in more elaborate semantic processing compared to males.
  • Warner, N., Fountain, A., & Tucker, B. V. (2009). Cues to perception of reduced flaps. Journal of the Acoustical Society of America, 125(5), 3317-3327. doi:10.1121/1.3097773.

    Abstract

    Natural, spontaneous speech (and even quite careful speech) often shows extreme reduction in many speech segments, even resulting in apparent deletion of consonants. Where the flap ([ɾ]) allophone of /t/ and /d/ is expected in American English, one frequently sees an approximant-like or even vocalic pattern, rather than a clear flap. Still, the /t/ or /d/ is usually perceived, suggesting the acoustic characteristics of a reduced flap are sufficient for perception of a consonant. This paper identifies several acoustic characteristics of reduced flaps based on previous acoustic research (size of intensity dip, consonant duration, and F4 valley) and presents phonetic identification data for continua that manipulate these acoustic characteristics of reduction. The results indicate that the most obvious types of acoustic variability seen in natural flaps do affect listeners' percept of a consonant, but not sufficiently to completely account for the percept. Listeners are affected by the acoustic characteristics of consonant reduction, but they are also very skilled at evaluating variability along the acoustic dimensions that realize reduction.

  • Warner, N., Jongman, A., Sereno, J., & Kemps, R. J. J. K. (2004). Incomplete neutralization and other sub-phonemic durational differences in production and perception: Evidence from Dutch. Journal of Phonetics, 32(2), 251-276. doi:10.1016/S0095-4470(03)00032-9.

    Abstract

    Words which are expected to contain the same surface string of segments may, under identical prosodic circumstances, sometimes be realized with slight differences in duration. Some researchers have attributed such effects to differences in the words’ underlying forms (incomplete neutralization), while others have suggested orthographic influence and extremely careful speech as the cause. In this paper, we demonstrate such sub-phonemic durational differences in Dutch, a language which some past research has found not to have such effects. Past literature has also shown that listeners can often make use of incomplete neutralization to distinguish apparent homophones. We extend perceptual investigations of this topic, and show that listeners can perceive even durational differences which are not consistently observed in production. We further show that a difference which is primarily orthographic rather than underlying can also create such durational differences. We conclude that a wide variety of factors, in addition to underlying form, can induce speakers to produce slight durational differences which listeners can also use in perception.
  • Warner, N., Luna, Q., Butler, L., & Van Volkinburg, H. (2009). Revitalization in a scattered language community: Problems and methods from the perspective of Mutsun language revitalization. International Journal of the Sociology of Language, 198, 135-148. doi:10.1515/IJSL.2009.031.

    Abstract

    This article addresses revitalization of a dormant language whose prospective speakers live in scattered geographical areas. In comparison to increasing the usage of an endangered language, revitalizing a dormant language (one with no living speakers) requires different methods to gain knowledge of the language. Language teaching for a dormant language with a scattered community presents different problems from other teaching situations. In this article, we discuss the types of tasks that must be accomplished for dormant-language revitalization, with particular focus on development of teaching materials. We also address the role of computer technologies, arguing that each use of technology should be evaluated for how effectively it increases fluency. We discuss methods for achieving semi-fluency for the first new speakers of a dormant language, and for spreading the language through the community.
  • Wassenaar, M., Brown, C. M., & Hagoort, P. (2004). ERP-effects of subject-verb agreement violations in patients with Broca's aphasia. Journal of Cognitive Neuroscience, 16(4), 553-576. doi:10.1162/089892904323057290.

    Abstract

    This article presents electrophysiological data on on-line syntactic processing during auditory sentence comprehension in patients with Broca's aphasia. Event-related brain potentials (ERPs) were recorded from the scalp while subjects listened to sentences that were either syntactically correct or contained violations of subject-verb agreement. Three groups of subjects were tested: Broca patients (n = 10), nonaphasic patients with a right-hemisphere (RH) lesion (n = 5), and healthy age-matched controls (n = 12). The healthy control subjects showed a P600/SPS effect in response to the agreement violations. The nonaphasic patients with an RH lesion showed essentially the same pattern. The overall group of Broca patients did not show this sensitivity. However, the sensitivity was modulated by the severity of the syntactic comprehension impairment. The largest deviation from the standard P600/SPS effect was found in the patients with the relatively more severe syntactic comprehension impairment. In addition, ERPs to tones in a classical tone oddball paradigm were also recorded. Similar to the normal control subjects and RH patients, the group of Broca patients showed a P300 effect in the tone oddball condition. This indicates that aphasia in itself does not lead to a general reduction in all cognitive ERP effects. It was concluded that deviations from the standard P600/SPS effect in the Broca patients reflected difficulties with on-line maintenance of number information across clausal boundaries for establishing subject-verb agreement.
  • Weber, A., & Cutler, A. (2004). Lexical competition in non-native spoken-word recognition. Journal of Memory and Language, 50(1), 1-25. doi:10.1016/S0749-596X(03)00105-0.

    Abstract

    Four eye-tracking experiments examined lexical competition in non-native spoken-word recognition. Dutch listeners hearing English fixated longer on distractor pictures with names containing vowels that Dutch listeners are likely to confuse with vowels in a target picture name (pencil, given target panda) than on less confusable distractors (beetle, given target bottle). English listeners showed no such viewing time difference. The confusability was asymmetric: given pencil as target, panda did not distract more than distinct competitors. Distractors with Dutch names phonologically related to English target names (deksel, ‘lid,’ given target desk) also received longer fixations than distractors with phonologically unrelated names. Again, English listeners showed no differential effect. With the materials translated into Dutch, Dutch listeners showed no activation of the English words (desk, given target deksel). The results motivate two conclusions: native phonemic categories capture second-language input even when stored representations maintain a second-language distinction; and lexical competition is greater for non-native than for native listeners.
  • Weber, A., Broersma, M., & Aoyagi, M. (2011). Spoken-word recognition in foreign-accented speech by L2 listeners. Journal of Phonetics, 39, 479-491. doi:10.1016/j.wocn.2010.12.004.

    Abstract

    Two cross-modal priming studies investigated the recognition of English words spoken with a foreign accent. Auditory English primes were either typical of a Dutch accent or typical of a Japanese accent in English and were presented to both Dutch and Japanese L2 listeners. Lexical-decision times to subsequent visual target words revealed that foreign-accented words can facilitate word recognition for L2 listeners if at least one of two requirements is met: the foreign-accented production is in accordance with the language background of the L2 listener, or the foreign accent is perceptually confusable with the standard pronunciation for the L2 listener. If neither one of the requirements is met, no facilitatory effect of foreign accents on L2 word recognition is found. Taken together, these findings suggest that linguistic experience with a foreign accent affects the ability to recognize words carrying this accent, and there is furthermore a general benefit for L2 listeners for recognizing foreign-accented words that are perceptually confusable with the standard pronunciation.
  • Weber, K., & Indefrey, P. (2009). Syntactic priming in German–English bilinguals during sentence comprehension. Neuroimage, 46, 1164-1172. doi:10.1016/j.neuroimage.2009.03.040.

    Abstract

    A longstanding question in bilingualism is whether syntactic information is shared between the two language processing systems. We used an fMRI repetition suppression paradigm to investigate syntactic priming in reading comprehension in German–English late-acquisition bilinguals. In comparison to conventional subtraction analyses in bilingual experiments, repetition suppression has the advantage of being able to detect neuronal populations that are sensitive to properties that are shared by consecutive stimuli. In this study, we manipulated the syntactic structure between prime and target sentences. A sentence with a passive sentence structure in English was preceded either by a passive or by an active sentence in English or German. We looked for repetition suppression effects in left inferior frontal, left precentral and left middle temporal regions of interest. These regions were defined by a contrast of all non-target sentences in German and English versus the baseline of sentence-format consonant strings. We found decreases in activity (repetition suppression effects) in these regions of interest following the repetition of syntactic structure from the first to the second language and within the second language.
    Moreover, a separate behavioural experiment using a word-by-word reading paradigm similar to the fMRI experiment showed faster reading times for primed compared to unprimed English target sentences regardless of whether they were preceded by an English or a German sentence of the same structure.
    We conclude that there is interaction between the language processing systems and that at least some syntactic information is shared between a bilingual's languages with similar syntactic structures.

  • Wells, J. B., Christiansen, M. H., Race, D. S., Acheson, D. J., & MacDonald, M. C. (2009). Experience and sentence processing: Statistical learning and relative clause comprehension. Cognitive Psychology, 58(2), 250-271. doi:10.1016/j.cogpsych.2008.08.002.

    Abstract

    Many explanations of the difficulties associated with interpreting object relative clauses appeal to the demands that object relatives make on working memory. MacDonald and Christiansen [MacDonald, M. C., & Christiansen, M. H. (2002). Reassessing working memory: Comment on Just and Carpenter (1992) and Waters and Caplan (1996). Psychological Review, 109, 35-54] pointed to variations in reading experience as a source of differences, arguing that the unique word order of object relatives makes their processing more difficult and more sensitive to the effects of previous experience than the processing of subject relatives. This hypothesis was tested in a large-scale study manipulating reading experiences of adults over several weeks. The group receiving relative clause experience increased reading speeds for object relatives more than for subject relatives, whereas a control experience group did not. The reading time data were compared to performance of a computational model given different amounts of experience. The results support claims for experience-based individual differences and an important role for statistical learning in sentence comprehension processes.
  • Whitehouse, A. J., Bishop, D. V., Ang, Q., Pennell, C. E., & Fisher, S. E. (2011). CNTNAP2 variants affect early language development in the general population. Genes, Brain and Behavior, 10, 451-456. doi:10.1111/j.1601-183X.2011.00684.x.

    Abstract

    Early language development is known to be under genetic influence, but the genes affecting normal variation in the general population remain largely elusive. Recent studies of disorder reported that variants of the CNTNAP2 gene are associated both with language deficits in specific language impairment (SLI) and with language delays in autism. We tested the hypothesis that these CNTNAP2 variants affect communicative behavior, measured at 2 years of age in a large epidemiological sample, the Western Australian Pregnancy Cohort (Raine) Study. Single-point analyses of 1149 children (606 males, 543 females) revealed patterns of association which were strikingly reminiscent of those observed in previous investigations of impaired language, centered on the same genetic markers, and with a consistent direction of effect (rs2710102, p = .0239; rs759178, p = .0248). Based on these findings we performed analyses of four-marker haplotypes of rs2710102-rs759178-rs17236239-rs2538976, and identified significant association (haplotype TTAA, p = .049; haplotype GCAG, p = .0014). Our study suggests that common variants in the exon 13-15 region of CNTNAP2 influence early language acquisition, as assessed at age 2, in the general population. We propose that these CNTNAP2 variants increase susceptibility to SLI or autism when they occur together with other risk factors.

    Additional information

    Whitehouse_Additional_Information.doc
  • Widlok, T. (2004). Ethnography in Language Documentation. Language Archive Newsletter, 1(3), 4-6.
  • Willems, R. M., Toni, I., Hagoort, P., & Casasanto, D. (2009). Body-specific motor imagery of hand actions: Neural evidence from right- and left-handers. Frontiers in Human Neuroscience, 3: 39. doi:10.3389/neuro.09.039.2009.

    Abstract

    If motor imagery uses neural structures involved in action execution, then the neural correlates of imagining an action should differ between individuals who tend to execute the action differently. Here we report fMRI data showing that motor imagery is influenced by the way people habitually perform motor actions with their particular bodies; that is, motor imagery is ‘body-specific’ (Casasanto, 2009). During mental imagery for complex hand actions, activation of cortical areas involved in motor planning and execution was left-lateralized in right-handers but right-lateralized in left-handers. We conclude that motor imagery involves the generation of an action plan that is grounded in the participant’s motor habits, not just an abstract representation at the level of the action’s goal. People with different patterns of motor experience form correspondingly different neurocognitive representations of imagined actions.
  • Willems, R. M., & Hagoort, P. (2009). Broca's region: Battles are not won by ignoring half of the facts. Trends in Cognitive Sciences, 13(3), 101. doi:10.1016/j.tics.2008.12.001.
  • Willems, R. M., Labruna, L., D'Esposito, M., Ivry, R., & Casasanto, D. (2011). A functional role for the motor system in language understanding: Evidence from Theta-Burst Transcranial Magnetic Stimulation. Psychological Science, 22, 849-854. doi:10.1177/0956797611412387.

    Abstract

    Does language comprehension depend, in part, on neural systems for action? In previous studies, motor areas of the brain were activated when people read or listened to action verbs, but it remains unclear whether such activation is functionally relevant for comprehension. In the experiments reported here, we used off-line theta-burst transcranial magnetic stimulation to investigate whether a causal relationship exists between activity in premotor cortex and action-language understanding. Right-handed participants completed a lexical decision task, in which they read verbs describing manual actions typically performed with the dominant hand (e.g., “to throw,” “to write”) and verbs describing nonmanual actions (e.g., “to earn,” “to wander”). Responses to manual-action verbs (but not to nonmanual-action verbs) were faster after stimulation of the hand area in left premotor cortex than after stimulation of the hand area in right premotor cortex. These results suggest that premotor cortex has a functional role in action-language understanding.

    Additional information

    Supplementary materials Willems.pdf
  • Willems, R. M., Clevis, K., & Hagoort, P. (2011). Add a picture for suspense: Neural correlates of the interaction between language and visual information in the perception of fear. Social Cognitive and Affective Neuroscience, 6, 404-416. doi:10.1093/scan/nsq050.

    Abstract

    We investigated how visual and linguistic information interact in the perception of emotion. We borrowed a phenomenon from film theory which states that presentation of a visual scene that is in itself neutral intensifies the percept of fear or suspense induced by a different channel of information, such as language. Our main aim was to investigate how neutral visual scenes can enhance responses to fearful language content in parts of the brain involved in the perception of emotion. Healthy participants’ brain activity was measured (using functional magnetic resonance imaging) while they read fearful and less fearful sentences presented with or without a neutral visual scene. The main idea is that the visual scenes intensify the fearful content of the language by subtly implying and concretizing what is described in the sentence. Activation levels in the right anterior temporal pole were selectively increased when a neutral visual scene was paired with a fearful sentence, compared to reading the sentence alone, as well as to reading non-fearful sentences presented with the same neutral scene. We conclude that the right anterior temporal pole serves a binding function of emotional information across domains such as visual and linguistic information.
  • Willems, R. M., Ozyurek, A., & Hagoort, P. (2009). Differential roles for left inferior frontal and superior temporal cortex in multimodal integration of action and language. Neuroimage, 47, 1992-2004. doi:10.1016/j.neuroimage.2009.05.066.

    Abstract

    Several studies indicate that both posterior superior temporal sulcus/middle temporal gyrus (pSTS/MTG) and left inferior frontal gyrus (LIFG) are involved in integrating information from different modalities. Here we investigated the respective roles of these two areas in integration of action and language information. We exploited the fact that the semantic relationship between language and different forms of action (i.e. co-speech gestures and pantomimes) is radically different. Speech and co-speech gestures are always produced together, and gestures are not unambiguously understood without speech. On the contrary, pantomimes are not necessarily produced together with speech and can be easily understood without speech. We presented speech together with these two types of communicative hand actions in matching or mismatching combinations to manipulate semantic integration load. Left and right pSTS/MTG were only involved in semantic integration of speech and pantomimes. Left IFG on the other hand was involved in integration of speech and co-speech gestures as well as of speech and pantomimes. Effective connectivity analyses showed that depending upon the semantic relationship between language and action, LIFG modulates activation levels in left pSTS.

    This suggests that integration in pSTS/MTG involves the matching of two input streams for which there is a relatively stable common object representation, whereas integration in LIFG is better characterized as the on-line construction of a new and unified representation of the input streams. In conclusion, pSTS/MTG and LIFG are differentially involved in multimodal integration, crucially depending upon the semantic relationship between the input streams.

    Additional information

    Supplementary table S1
  • Willems, R. M., Benn, Y., Hagoort, P., Toni, I., & Varley, R. (2011). Communicating without a functioning language system: Implications for the role of language in mentalizing. Neuropsychologia, 49, 3130-3135. doi:10.1016/j.neuropsychologia.2011.07.023.

    Abstract

    A debated issue in the relationship between language and thought is how our linguistic abilities are involved in understanding the intentions of others (‘mentalizing’). The results of both theoretical and empirical work have been used to argue that linguistic, and more specifically, grammatical, abilities are crucial in representing the mental states of others. Here we contribute to this debate by investigating how damage to the language system influences the generation and understanding of intentional communicative behaviors. Four patients with pervasive language difficulties (severe global or agrammatic aphasia) engaged in an experimentally controlled non-verbal communication paradigm, which required signaling and understanding a communicative message. Despite their profound language problems they were able to engage in recipient design as well as intention recognition, showing similar indicators of mentalizing as have been observed in the neurologically healthy population. Our results show that aspects of the ability to communicate remain present even when core capacities of the language system are dysfunctional.
  • Willems, R. M., & Casasanto, D. (2011). Flexibility in embodied language understanding. Frontiers in Psychology, 2, 116. doi:10.3389/fpsyg.2011.00116.

    Abstract

    Do people use sensori-motor cortices to understand language? Here we review neurocognitive studies of language comprehension in healthy adults and evaluate their possible contributions to theories of language in the brain. We start by sketching the minimal predictions that an embodied theory of language understanding makes for empirical research, and then survey studies that have been offered as evidence for embodied semantic representations. We explore four debated issues: first, does activation of sensori-motor cortices during action language understanding imply that action semantics relies on mirror neurons? Second, what is the evidence that activity in sensori-motor cortices plays a functional role in understanding language? Third, to what extent do responses in perceptual and motor areas depend on the linguistic and extra-linguistic context? And finally, can embodied theories accommodate language about abstract concepts? Based on the available evidence, we conclude that sensori-motor cortices are activated during a variety of language comprehension tasks, for both concrete and abstract language. Yet, this activity depends on the context in which perception and action words are encountered. Although modality-specific cortical activity is not a sine qua non of language processing even for language about perception and action, sensori-motor regions of the brain appear to make functional contributions to the construction of meaning, and should therefore be incorporated into models of the neurocognitive architecture of language.
  • Willems, R. M., & Hagoort, P. (2009). Hand preference influences neural correlates of action observation. Brain Research, 1269, 90-104. doi:10.1016/j.brainres.2009.02.057.

    Abstract

    It has been argued that we map observed actions onto our own motor system. Here we added to this issue by investigating whether hand preference influences the neural correlates of action observation of simple, essentially meaningless hand actions. Such an influence would argue for an intricate neural coupling between action production and action observation, which goes beyond effects of motor repertoire or explicit motor training, as has been suggested before. Indeed, parts of the human motor system exhibited a close coupling between action production and action observation. Ventral premotor and inferior and superior parietal cortices showed differential activation for left- and right-handers that was similar during action production as well as during action observation. This suggests that mapping observed actions onto the observer's own motor system is a core feature of action observation - at least for actions that do not have a clear goal or meaning. Basic differences in the way we act upon the world are not only reflected in neural correlates of action production, but can also influence the brain basis of action observation.
  • Willems, R. M. (2011). Re-appreciating the why of cognition: 35 years after Marr and Poggio. Frontiers in Psychology, 2, 244. doi:10.3389/fpsyg.2011.00244.

    Abstract

    Marr and Poggio’s levels of description are among the best-known theoretical constructs of twentieth-century cognitive science. The framework entails that behavior can and should be considered at three different levels: computation, algorithm, and implementation. In this contribution, the focus is on the computational level of description, the level that describes the “why” of cognition. I argue that the computational level should be taken as a starting point in devising experiments in cognitive (neuro)science. Instead, the starting point in empirical practice is often a focus on the stimulus or on some capacity of the cognitive system. The “why” of cognition tends to be ignored when designing research, and is not considered in subsequent inference from experimental results. The overall aim of this manuscript is to show how re-appreciation of the computational level of description as a starting point for experiments can lead to more informative experimentation.
  • Wittenburg, P., Skiba, R., & Trilsbeek, P. (2004). Technology and Tools for Language Documentation. Language Archive Newsletter, 1(4), 3-4.
  • Wittenburg, P. (2004). Training Course in Lithuania. Language Archive Newsletter, 1(2), 6-6.
  • Wittenburg, P., Dirksmeyer, R., Brugman, H., & Klaas, G. (2004). Digital formats for images, audio and video. Language Archive Newsletter, 1(1), 3-6.
  • Wittenburg, P. (2004). International Expert Meeting on Access Management for Distributed Language Archives. Language Archive Newsletter, 1(3), 12-12.
  • Wittenburg, P. (2004). Final review of INTERA. Language Archive Newsletter, 1(4), 11-12.
  • Wittenburg, P. (2004). LinguaPax Forum on Language Diversity, Sustainability, and Peace. Language Archive Newsletter, 1(3), 13-13.
  • Wittenburg, P. (2004). LREC conference 2004. Language Archive Newsletter, 1(3), 12-13.
  • Wittenburg, P. (2004). News from the Archive of the Max Planck Institute for Psycholinguistics. Language Archive Newsletter, 1(4), 12-12.
  • Zeshan, U. (2004). Interrogative constructions in sign languages - Cross-linguistic perspectives. Language, 80(1), 7-39.

    Abstract

    This article reports on results from a broad crosslinguistic study based on data from thirty-five signed languages around the world. The study is the first of its kind, and the typological generalizations presented here cover the domain of interrogative structures as they appear across a wide range of geographically and genetically distinct signed languages. Manual and nonmanual ways of marking basic types of questions in signed languages are investigated. As a result, it becomes clear that the range of crosslinguistic variation is extensive for some subparameters, such as the structure of question-word paradigms, while other parameters, such as the use of nonmanual expressions in questions, show more similarities across signed languages. Finally, it is instructive to compare the findings from signed language typology to relevant data from spoken languages at a more abstract, crossmodality level.
  • Zeshan, U. (2004). Hand, head and face - negative constructions in sign languages. Linguistic Typology, 8(1), 1-58. doi:10.1515/lity.2004.003.

    Abstract

    This article presents a typology of negative constructions across a substantial number of sign languages from around the globe. After situating the topic within the wider context of linguistic typology, the main negation strategies found across sign languages are described. Nonmanual negation includes the use of head movements and facial expressions for negation and is of great importance in sign languages as well as particularly interesting from a typological point of view. As far as manual signs are concerned, independent negative particles represent the dominant strategy, but there are also instances of irregular negation in most sign languages. Irregular negatives may take the form of suppletion, cliticisation, affixing, or internal modification of a sign. The results of the study lead to interesting generalisations about similarities and differences between negatives in signed and spoken languages.
  • Zwitserlood, I. (2011). Gebruiksgemak van het eerste Nederlandse Gebarentaal woordenboek kan beter [Book review]. Levende Talen Magazine, 4, 46-47.

    Abstract

    Review: User friendliness of the first dictionary of Sign Language of the Netherlands can be improved
  • Zwitserlood, I. (2011). Gevraagd: medewerkers verzorgingshuis met een goede oog-handcoördinatie. Het meten van NGT-vaardigheid. Levende Talen Magazine, 1, 44-46.

    Abstract

    (Needed: staff for residential care home with good eye-hand coordination. Measuring NGT-skills.)
  • Zwitserlood, I. (2009). Het Corpus NGT. Levende Talen Magazine, 6, 44-45.

    Abstract

    The Corpus NGT
  • Zwitserlood, I. (2011). Het Corpus NGT en de dagelijkse lespraktijk. Levende Talen Magazine, 6, 46.

    Abstract

    (The Corpus NGT and the daily practice of language teaching)
  • Zwitserlood, I. (2009). Het Corpus NGT en de dagelijkse lespraktijk (1). Levende Talen Magazine, 8, 40-41.
  • Zwitserlood, I. (2011). Het Corpus NGT en de opleiding leraar/tolk NGT. Levende Talen Magazine, 1, 40-41.

    Abstract

    (The Corpus NGT and teacher NGT/interpreter NGT training)
