Publications

  • Tuinman, A. (2011). Processing casual speech in native and non-native language. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Tuinman, A., Mitterer, H., & Cutler, A. (2011). Perception of intrusive /r/ in English by native, cross-language and cross-dialect listeners. Journal of the Acoustical Society of America, 130, 1643-1652. doi:10.1121/1.3619793.

    Abstract

    In sequences such as law and order, speakers of British English often insert /r/ between law and and. Acoustic analyses revealed such “intrusive” /r/ to be significantly shorter than canonical /r/. In a 2AFC experiment, native listeners heard British English sentences in which /r/ duration was manipulated across a word boundary [e.g., saw (r)ice], and orthographic and semantic factors were varied. These listeners responded categorically on the basis of acoustic evidence for /r/ alone, reporting ice after short /r/s, rice after long /r/s; orthographic and semantic factors had no effect. Dutch listeners proficient in English who heard the same materials relied less on durational cues than the native listeners, and were affected by both orthography and semantic bias. American English listeners produced intermediate responses to the same materials, being sensitive to duration (less so than native, more so than Dutch listeners), and to orthography (less so than the Dutch), but insensitive to the semantic manipulation. Listeners from language communities without common use of intrusive /r/ may thus interpret intrusive /r/ as canonical /r/, with a language difference increasing this propensity more than a dialect difference. Native listeners, however, efficiently distinguish intrusive from canonical /r/ by exploiting the relevant acoustic variation.
  • Tyler, M., & Cutler, A. (2009). Cross-language differences in cue use for speech segmentation. Journal of the Acoustical Society of America, 126, 367-376. doi:10.1121/1.3129127.

    Abstract

    Two artificial-language learning experiments directly compared English, French, and Dutch listeners’ use of suprasegmental cues for continuous-speech segmentation. In both experiments, listeners heard unbroken sequences of consonant-vowel syllables, composed of recurring three- and four-syllable “words.” These words were demarcated by (a) no cue other than transitional probabilities induced by their recurrence, (b) a consistent left-edge cue, or (c) a consistent right-edge cue. Experiment 1 examined a vowel lengthening cue. All three listener groups benefited from this cue in right-edge position; none benefited from it in left-edge position. Experiment 2 examined a pitch-movement cue. English listeners used this cue in left-edge position, French listeners used it in right-edge position, and Dutch listeners used it in both positions. These findings are interpreted as evidence of both language-universal and language-specific effects. Final lengthening is a language-universal effect expressing a more general (non-linguistic) mechanism. Pitch movement expresses prominence, which has characteristically different placements across languages: typically at right edges in French, but at left edges in English and Dutch. Finally, stress realization in English versus Dutch encourages greater attention to suprasegmental variation by Dutch than by English listeners, allowing Dutch listeners to benefit from an informative pitch-movement cue even in an uncharacteristic position.
  • Uddén, J., Folia, V., Forkstam, C., Ingvar, M., Fernández, G., Overeem, S., Van Elswijk, G., Hagoort, P., & Petersson, K. M. (2008). The inferior frontal cortex in artificial syntax processing: An rTMS study. Brain Research, 1224, 69-78. doi:10.1016/j.brainres.2008.05.070.

    Abstract

    The human capacity to implicitly acquire knowledge of structured sequences has recently been investigated in artificial grammar learning using functional magnetic resonance imaging. It was found that the left inferior frontal cortex (IFC; Brodmann's area (BA) 44/45) was related to classification performance. The objective of this study was to investigate whether the IFC (BA 44/45) is causally related to classification of artificial syntactic structures by means of an off-line repetitive transcranial magnetic stimulation (rTMS) paradigm. We manipulated the stimulus material in a 2 × 2 factorial design with grammaticality status and local substring familiarity as factors. The participants showed a reliable effect of grammaticality on classification of novel items after 5 days of exposure to grammatical exemplars without performance feedback in an implicit acquisition task. The results show that rTMS of BA 44/45 improves syntactic classification performance by increasing the rejection rate of non-grammatical items and by shortening reaction times of correct rejections specifically after left-sided stimulation. A similar pattern of results is observed in fMRI experiments on artificial syntactic classification. These results suggest that activity in the inferior frontal region is causally related to artificial syntax processing.
  • De Vaan, L., Ernestus, M., & Schreuder, R. (2011). The lifespan of lexical traces for novel morphologically complex words. The Mental Lexicon, 6, 374-392. doi:10.1075/ml.6.3.02dev.

    Abstract

    This study investigates the lifespans of lexical traces for novel morphologically complex words. In two visual lexical decision experiments, a neologism was either primed by itself or by its stem. The target occurred 40 trials after the prime (Experiments 1 & 2), after a 12-hour delay (Experiment 1), or after a one-week delay (Experiment 2). Participants recognized neologisms more quickly if they had seen them before in the experiment. These results show that memory traces for novel morphologically complex words come into existence after the very first exposure and that they last for at least a week. We did not find evidence for a role of sleep in the formation of memory traces. Interestingly, Base Frequency appeared to play a role in the processing of the neologisms even when they were presented a second time and already had their own memory traces.
  • Van Berkum, J. J. A., Holleman, B., Nieuwland, M. S., Otten, M., & Murre, J. (2009). Right or wrong? The brain's fast response to morally objectionable statements. Psychological Science, 20, 1092-1099. doi:10.1111/j.1467-9280.2009.02411.x.

    Abstract

    How does the brain respond to statements that clash with a person's value system? We recorded event-related brain potentials while respondents from contrasting political-ethical backgrounds completed an attitude survey on drugs, medical ethics, social conduct, and other issues. Our results show that value-based disagreement is unlocked by language extremely rapidly, within 200 to 250 ms after the first word that indicates a clash with the reader's value system (e.g., "I think euthanasia is an acceptable/unacceptable…"). Furthermore, strong disagreement rapidly influences the ongoing analysis of meaning, which indicates that even very early processes in language comprehension are sensitive to a person's value system. Our results testify to rapid reciprocal links between neural systems for language and for valuation.

    Additional information

    Critical survey statements (in Dutch)
  • Van Berkum, J. J. A., Van den Brink, D., Tesink, C. M. J. Y., Kos, M., & Hagoort, P. (2008). The neural integration of speaker and message. Journal of Cognitive Neuroscience, 20(4), 580-591. doi:10.1162/jocn.2008.20054.

    Abstract

    When do listeners take into account who the speaker is? We asked people to listen to utterances whose content sometimes did not match inferences based on the identity of the speaker (e.g., “If only I looked like Britney Spears” in a male voice, or “I have a large tattoo on my back” spoken with an upper-class accent). Event-related brain responses revealed that the speaker's identity is taken into account as early as 200–300 msec after the beginning of a spoken word, and is processed by the same early interpretation mechanism that constructs sentence meaning based on just the words. This finding is difficult to reconcile with standard “Gricean” models of sentence interpretation in which comprehenders initially compute a local, context-independent meaning for the sentence (“semantics”) before working out what it really means given the wider communicative context and the particular speaker (“pragmatics”). Because the observed brain response hinges on voice-based and usually stereotype-dependent inferences about the speaker, it also shows that listeners rapidly classify speakers on the basis of their voices and bring the associated social stereotypes to bear on what is being said. According to our event-related potential results, language comprehension takes very rapid account of the social context, and the construction of meaning based on language alone cannot be separated from the social aspects of language use. The linguistic brain relates the message to the speaker immediately.
  • Van de Ven, M. A. M. (2011). The role of acoustic detail and context in the comprehension of reduced pronunciation variants. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Van Berkum, J. J. A. (2008). Understanding sentences in context: What brain waves can tell us. Current Directions in Psychological Science, 17(6), 376-380. doi:10.1111/j.1467-8721.2008.00609.x.

    Abstract

    Language comprehension looks pretty easy. You pick up a novel and simply enjoy the plot, or ponder the human condition. You strike up a conversation and listen to whatever the other person has to say. Although what you're taking in is a bunch of letters and sounds, what you really perceive—if all goes well—is meaning. But how do you get from one to the other so easily? The experiments with brain waves (event-related brain potentials or ERPs) reviewed here show that the linguistic brain rapidly draws upon a wide variety of information sources, including prior text and inferences about the speaker. Furthermore, people anticipate what might be said about whom, they use heuristics to arrive at the earliest possible interpretation, and if it makes sense, they sometimes even ignore the grammar. Language comprehension is opportunistic, proactive, and, above all, immediately context-dependent.
  • van Kuijk, D., & Boves, L. (1999). Acoustic characteristics of lexical stress in continuous telephone speech. Speech Communication, 27(2), 95-111. doi:10.1016/S0167-6393(98)00069-7.

    Abstract

    In this paper we investigate acoustic differences between vowels in syllables that do or do not carry lexical stress. In doing so, we concentrated on segmental acoustic phonetic features that are conventionally assumed to differ between stressed and unstressed syllables, viz. Duration, Energy and Spectral Tilt. The speech material in this study differs from the type of material used in previous research: instead of specially constructed sentences we used phonetically rich sentences from the Dutch POLYPHONE corpus. Most of the Duration, Energy and Spectral Tilt features that we used in the investigation show statistically significant differences for the population means of stressed and unstressed vowels. However, it also appears that the distributions overlap to such an extent that automatic detection of stressed and unstressed syllables yields correct classifications of 72.6% at best. It is argued that this result is due to the large variety in the ways in which the abstract linguistic feature 'lexical stress' is realized in the acoustic speech signal. Our findings suggest that a lexical stress detector is of little use for a single-pass decoder in an automatic speech recognition (ASR) system, but could still play a useful role as an additional knowledge source in a multi-pass decoder.
  • Van Berkum, J. J. A., Brown, C. M., & Hagoort, P. (1999). Early referential context effects in sentence processing: Evidence from event-related brain potentials. Journal of Memory and Language, 41(2), 147-182. doi:10.1006/jmla.1999.2641.

    Abstract

    An event-related brain potentials experiment was carried out to examine the interplay of referential and structural factors during sentence processing in discourse. Subjects read (Dutch) sentences beginning like “David told the girl that … ” in short story contexts that had introduced either one or two referents for a critical singular noun phrase (“the girl”). The waveforms showed that within 280 ms after onset of the critical noun the reader had already determined whether the noun phrase had a unique referent in earlier discourse. Furthermore, this referential information was immediately used in parsing the rest of the sentence, which was briefly ambiguous between a complement clause (“ … that there would be some visitors”) and a relative clause (“ … that had been on the phone to hang up”). A consistent pattern of P600/SPS effects elicited by various subsequent disambiguations revealed that a two-referent discourse context had led the parser to initially pursue the relative-clause alternative to a larger extent than a one-referent context. Together, the results suggest that during the processing of sentences in discourse, structural and referential sources of information interact on a word-by-word basis.
  • Van Leeuwen, T. M., Den Ouden, H. E. M., & Hagoort, P. (2011). Effective connectivity determines the nature of subjective experience in grapheme-color synesthesia. Journal of Neuroscience, 31, 9879-9884. doi:10.1523/JNEUROSCI.0569-11.2011.

    Abstract

    Synesthesia provides an elegant model to investigate neural mechanisms underlying individual differences in subjective experience in humans. In grapheme–color synesthesia, written letters induce color sensations, accompanied by activation of color area V4. Competing hypotheses suggest that enhanced V4 activity during synesthesia is either induced by direct bottom-up cross-activation from grapheme processing areas within the fusiform gyrus, or indirectly via higher-order parietal areas. Synesthetes differ in the way synesthetic color is perceived: “projector” synesthetes experience color externally colocalized with a presented grapheme, whereas “associators” report an internally evoked association. Using dynamic causal modeling for fMRI, we show that V4 cross-activation during synesthesia was induced via a bottom-up pathway (within fusiform gyrus) in projector synesthetes, but via a top-down pathway (via parietal lobe) in associators. These findings show how altered coupling within the same network of active regions leads to differences in subjective experience. Our findings reconcile the two most influential cross-activation accounts of synesthesia.
  • Van den Bos, E., & Poletiek, F. H. (2008). Effects of grammar complexity on artificial grammar learning. Memory & Cognition, 36(6), 1122-1131. doi:10.3758/MC.36.6.1122.

    Abstract

    The present study identified two aspects of complexity that have been manipulated in the implicit learning literature and investigated how they affect implicit and explicit learning of artificial grammars. Ten finite state grammars were used to vary complexity. The results indicated that dependency length is more relevant to the complexity of a structure than is the number of associations that have to be learned. Although implicit learning led to better performance on a grammaticality judgment test than did explicit learning, it was negatively affected by increasing complexity: Performance decreased as there was an increase in the number of previous letters that had to be taken into account to determine whether or not the next letter was a grammatical continuation. In particular, the results suggested that implicit learning of higher order dependencies is hampered by the presence of longer dependencies. Knowledge of first-order dependencies was acquired regardless of complexity and learning mode.
  • Van Berkum, J. J. A., Hijne, H., De Jong, T., Van Joolingen, W. R., & Njoo, M. (1991). Aspects of computer simulations in education. Education & Computing, 6(3/4), 231-239.

    Abstract

    Computer simulations in an instructional context can be characterized according to four aspects (themes): simulation models, learning goals, learning processes and learner activity. The present paper provides an outline of these four themes. The main classification criterion for simulation models is quantitative vs. qualitative models. For quantitative models a further subdivision can be made by classifying the independent and dependent variables as continuous or discrete. A second criterion is whether one of the independent variables is time, thus distinguishing dynamic and static models. Qualitative models on the other hand use propositions about non-quantitative properties of a system or they describe quantitative aspects in a qualitative way. Related to the underlying model is the interaction with it. When this interaction has a normative counterpart in the real world we call it a procedure. The second theme of learning with computer simulation concerns learning goals. A learning goal is principally classified along three dimensions, which specify different aspects of the knowledge involved. The first dimension, knowledge category, indicates that a learning goal can address principles, concepts and/or facts (conceptual knowledge) or procedures (performance sequences). The second dimension, knowledge representation, captures the fact that knowledge can be represented in a more declarative (articulate, explicit), or in a more compiled (implicit) format, each one having its own advantages and drawbacks. The third dimension, knowledge scope, involves the learning goal's relation with the simulation domain; knowledge can be specific to a particular domain, or generalizable over classes of domains (generic). A more or less separate type of learning goal refers to knowledge acquisition skills that are pertinent to learning in an exploratory environment. Learning processes constitute the third theme. Learning processes are defined as cognitive actions of the learner. Learning processes can be classified using a multilevel scheme. The first (highest) of these levels gives four main categories: orientation, hypothesis generation, testing and evaluation. Examples of more specific processes are model exploration and output interpretation. The fourth theme of learning with computer simulations is learner activity. Learner activity is defined as the ‘physical’ interaction of the learner with the simulations (as opposed to the mental interaction that was described in the learning processes). Five main categories of learner activity are distinguished: defining experimental settings (variables, parameters etc.), interaction process choices (deciding a next step), collecting data, choice of data presentation and metacontrol over the simulation.
  • Van der Lugt, A. (1999). From speech to words. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.2057645.
  • Van Heuven, W. J. B., Schriefers, H., Dijkstra, T., & Hagoort, P. (2008). Language conflict in the bilingual brain. Cerebral Cortex, 18(11), 2706-2716. doi:10.1093/cercor/bhn030.

    Abstract

    The large majority of humankind is more or less fluent in 2 or even more languages. This raises the fundamental question of how the language network in the brain is organized such that the correct target language is selected at a particular occasion. Here we present behavioral and functional magnetic resonance imaging data showing that bilingual processing leads to language conflict in the bilingual brain even when the bilinguals’ task only required target language knowledge. This finding demonstrates that the bilingual brain cannot avoid language conflict, because words from the target and nontarget languages become automatically activated during reading. Importantly, stimulus-based language conflict was found in brain regions in the left inferior prefrontal cortex (LIPC) associated with phonological and semantic processing, whereas response-based language conflict was only found in the pre-supplementary motor area/anterior cingulate cortex when language conflict leads to response conflicts.
  • Van de Weijer, J. (1999). Language input for word discovery. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.2057670.
  • Van Donselaar, W., Kuijpers, C. T., & Cutler, A. (1999). Facilitatory effects of vowel epenthesis on word processing in Dutch. Journal of Memory and Language, 41, 59-77. doi:10.1006/jmla.1999.2635.

    Abstract

    We report a series of experiments examining the effects on word processing of insertion of an optional epenthetic vowel in word-final consonant clusters in Dutch. Such epenthesis turns film, for instance, into fil[ə]m. In a word-reversal task listeners treated words with and without epenthesis alike, as monosyllables, suggesting that the variant forms both activate the same canonical representation, that of a monosyllabic word without epenthesis. In both lexical decision and word spotting, response times to recognize words were significantly faster when epenthesis was present than when the word was presented in its canonical form without epenthesis. It is argued that addition of the epenthetic vowel makes the liquid consonants constituting the first member of a cluster more perceptible; a final phoneme-detection experiment confirmed that this was the case. These findings show that a transformed variant of a word, although it contacts the lexicon via the representation of the canonical form, can be more easily perceived than that canonical form.
  • Van Wijk, C., & Kempen, G. (1980). Functiewoorden: Een inventarisatie voor het Nederlands [Function words: An inventory for Dutch]. ITL: Review of Applied Linguistics, 53-68.
  • Van Leeuwen, T. (2011). How one can see what is not there: Neural mechanisms of grapheme-colour synaesthesia. PhD Thesis, Radboud University Nijmegen, Nijmegen.

    Abstract

    People with grapheme-colour synaesthesia experience colour for letters of the alphabet or digits; A can be red and B can be green. How can it be that people automatically see a colour where only black letters are printed on the paper? With brain scans (fMRI) I showed that (black) letters activate the colour area of the brain (V4) and also a brain area that is important for combining different types of information (SPL). We found that the location where synaesthetes subjectively experience their colours is related to the order in which these brain areas become active. Some synaesthetes see their colour ‘projected onto the letter’, similar to real colour experiences, and in this case colour area V4 becomes active first. If the colours appear as a strong association without a fixed location in space, SPL becomes active first, similar to what happens for normal memories. In a final experiment we showed that in synaesthetes, attention is captured very strongly by real colour, more strongly than in control participants. Perhaps this attention effect of colour can explain how letters and colours become coupled in synaesthetes.
  • Van Berkum, J. J. A., & De Jong, T. (1991). Instructional environments for simulations. Education & Computing, 6(3/4), 305-358.

    Abstract

    The use of computer simulations in education and training can have substantial advantages over other approaches. In comparison with alternatives such as textbooks, lectures, and tutorial courseware, a simulation-based approach offers the opportunity to learn in a relatively realistic problem-solving context, to practise task performance without stress, to systematically explore both realistic and hypothetical situations, to change the time-scale of events, and to interact with simplified versions of the process or system being simulated. However, learners are often unable to cope with the freedom offered by, and the complexity of, a simulation. As a result many of them resort to an unsystematic, unproductive mode of exploration. There is evidence that simulation-based learning can be improved if the learner is supported while working with the simulation. Constructing such an instructional environment around simulations seems to run counter to the freedom the learner is allowed in ‘stand-alone’ simulations. The present article explores instructional measures that allow optimal freedom for the learner. An extensive discussion of learning goals brings two main types of learning goals to the fore: conceptual knowledge and operational knowledge. A third type of learning goal refers to the knowledge acquisition (exploratory learning) process. Cognitive theory has implications for the design of instructional environments around simulations. Most of these implications are quite general, but they can also be related to the three types of learning goals. For conceptual knowledge the sequence and choice of models and problems is important, as is providing the learner with explanations and minimizing error. For operational knowledge cognitive theory recommends learning in a problem-solving context, explicit tracing of the learner's behaviour, immediate feedback, and minimization of working memory load. For knowledge acquisition goals, it is recommended that the tutor take the role of a model and coach, and that learning take place together with a companion. A second source of inspiration for designing instructional environments can be found in Instructional Design Theories. Reviewing these shows that interacting with a simulation can be part of a more comprehensive instructional strategy, in which, for example, prerequisite knowledge is also taught. Moreover, information present in a simulation can also be represented in a more structural or static way; with both forms of presentation, learners can be provoked to perform specific learning processes and learner activities by tutor-controlled variations in the simulation and by tutor-initiated prodding techniques. Finally, instructional design theories show that complex models and procedures can be taught by starting with central, simple elements of these models and procedures and subsequently presenting more complex versions. Most of the recent simulation-based intelligent tutoring systems involve troubleshooting of complex technical systems. Learners are supposed to acquire knowledge of particular system principles, of troubleshooting procedures, or of both. Commonly encountered instructional features include (a) the sequencing of increasingly complex problems to be solved, (b) the availability of a range of help information on request, (c) the presence of an expert troubleshooting module which can step in to provide criticism on learner performance, hints on the problem nature, or suggestions on how to proceed, (d) the option of having the expert module demonstrate optimal performance afterwards, and (e) the use of different ways of depicting the simulated system. A selection of findings is summarized by placing them under the four themes we consider characteristic of learning with computer simulations (see de Jong, this volume).
  • Van den Bos, E., & Poletiek, F. H. (2008). Intentional artificial grammar learning: When does it work? European Journal of Cognitive Psychology, 20(4), 793-806. doi:10.1080/09541440701554474.

    Abstract

    Actively searching for the rules of an artificial grammar has often been shown to produce no more knowledge than memorising exemplars without knowing that they have been generated by a grammar. The present study investigated whether this ineffectiveness of intentional learning could be overcome by removing dual task demands and providing participants with more specific instructions. The results showed a positive effect of intentional learning only for participants specifically instructed to find out which letters are allowed to follow each other. These participants were also unaffected by a salient feature. In contrast, for participants who did not know what kind of structure to expect, intentional learning was not more effective than incidental learning and knowledge acquisition was guided by salience.
  • Van de Meerendonk, N., Indefrey, P., Chwilla, D. J., & Kolk, H. H. (2011). Monitoring in language perception: Electrophysiological and hemodynamic responses to spelling violations. Neuroimage, 54, 2350-2363. doi:10.1016/j.neuroimage.2010.10.022.

    Abstract

    The monitoring theory of language perception proposes that competing representations that are caused by strong expectancy violations can trigger a conflict which elicits reprocessing of the input to check for possible processing errors. This monitoring process is thought to be reflected by the P600 component in the EEG. The present study further investigated this monitoring process by comparing syntactic and spelling violations in an EEG and an fMRI experiment. To assess the effect of conflict strength, misspellings were embedded in sentences that were weakly or strongly predictive of a critical word. In support of the monitoring theory, syntactic and spelling violations elicited similarly distributed P600 effects. Furthermore, the P600 effect was larger for misspellings in the strongly predictive than in the weakly predictive sentences. The fMRI results showed that both syntactic and spelling violations increased activation in the left inferior frontal gyrus (lIFG), while only the misspellings activated additional areas. Conflict strength did not affect the hemodynamic response to spelling violations. These results extend the idea that the lIFG is involved in implementing cognitive control in the presence of representational conflicts in general to the processing of errors in language perception.
  • Van der Linden, M. (2011). Experience-based cortical plasticity in object category representation. PhD Thesis, Radboud University Nijmegen, Nijmegen.

    Abstract

    Marieke van der Linden investigated the neural mechanisms underlying category formation in the human brain. The research in her thesis provides novel insights in how the brain learns, stores, and uses category knowledge, enabling humans to become skilled in categorization. The studies reveal the neural mechanisms through which perceptual as well as conceptual category knowledge is created and shaped by experience. The results clearly show that neuronal sensitivity to object features is affected by categorization training. These findings fill in a missing link between electrophysiological recordings from monkey cortex demonstrating learning-induced sharpening of neuronal selectivity and brain imaging data showing category-specific representations in the human brain. Moreover, she showed that it is specifically the features of an object that are relevant for its categorization that induce selectivity in neuronal populations. Category-learning requires collaboration between many different brain areas. Together these can be seen as the neural correlates of the key points of categorization: discrimination and generalization. The occipitotemporal cortex represents those characteristic features of objects that define its category. The narrowly shape-tuned properties of this area enable fine-grained discrimination of perceptually similar objects. In addition, the superior temporal sulcus forms associations between members or properties (i.e. sound and shape) of a category. This allows the generalization of perceptually different but conceptually similar objects. Last but not least is the prefrontal cortex which is involved in coding behaviourally-relevant category information and thus enables the explicit retrieval of category membership.
  • Van de Ven, M., & Gussenhoven, C. (2011). On the timing of the final rise in Dutch falling-rising intonation contours. Journal of Phonetics, 39, 225-236. doi:10.1016/j.wocn.2011.01.006.

    Abstract

    A corpus of Dutch falling-rising intonation contours with early nuclear accent was elicited from nine speakers with a view to establishing the extent to which the low F0 target immediately preceding the final rise was attracted by a post-nuclear stressed syllable (PNS) in either of the last two words or by Second Occurrence Contrastive Focus (SOCF) on either of these words. We found a small effect of foot type, which we interpret as due to a rhythmic 'trochaic enhancement' effect. The results show that neither PNS nor SOCF influences the location of the low F0 target, which appears consistently to be timed with reference to the utterance end. It is speculated that there are two ways in which post-nuclear tones can be timed. The first is by means of a phonological association with a post-nuclear stressed syllable, as in Athenian Greek and Roermond Dutch. The second is by a fixed distance from the utterance end or from the target of an adjacent tone. Accordingly, two phonological mechanisms are defended, association and edge alignment, such that all tones edge-align, but only some associate. Specifically, no evidence was found for a third situation that can be envisaged, in which a post-nuclear tone is gradiently attracted to a post-nuclear stress.

  • Van Wingen, G. A., Van Broekhoven, F., Verkes, R. J., Petersson, K. M., Bäckström, T., Buitelaar, J. K., & Fernández, G. (2008). Progesterone selectively increases amygdala reactivity in women. Molecular Psychiatry, 13, 325-333. doi:10.1038/sj.mp.4002030.

    Abstract

    The acute neural effects of progesterone are mediated by its neuroactive metabolites allopregnanolone and pregnanolone. These neurosteroids potentiate the inhibitory actions of γ-aminobutyric acid (GABA). Progesterone is known to produce anxiolytic effects in animals, but recent animal studies suggest that pregnanolone increases anxiety after a period of low allopregnanolone concentration. This effect is potentially mediated by the amygdala and related to the negative mood symptoms in humans that are observed during increased allopregnanolone levels. Therefore, we investigated with functional magnetic resonance imaging (fMRI) whether a single progesterone administration to healthy young women in their follicular phase modulates the amygdala response to salient, biologically relevant stimuli. The progesterone administration increased the plasma concentrations of progesterone and allopregnanolone to levels that are reached during the luteal phase and early pregnancy. The imaging results show that progesterone selectively increased amygdala reactivity. Furthermore, functional connectivity analyses indicate that progesterone modulated functional coupling of the amygdala with distant brain regions. These results reveal a neural mechanism by which progesterone may mediate adverse effects on anxiety and mood.
  • Van Gijn, R. (2011). Pronominal affixes, the best of both worlds: The case of Yurakaré. Transactions of the Philological Society, 109(1), 41-58. doi:10.1111/j.1467-968X.2011.01249.x.

    Abstract

    Pronominal affixes in polysynthetic languages have an ambiguous status in the sense that they have characteristics normally associated with free pronouns as well as characteristics associated with agreement markers. This situation arises because pronominal affixes represent intermediate stages in a diachronic development from independent pronouns to agreement markers. Because this diachronic change is not abrupt, pronominal affixes can show different characteristics from language to language. By presenting an in-depth discussion of the pronominal affixes of Yurakaré, an unclassified language from Bolivia, I argue that these so-called intermediate stages as typically attested in polysynthetic languages actually represent economical systems that combine advantages of agreement markers and of free pronouns. In terms of diachronic development, such ‘intermediate’ systems, being functionally well-adapted, appear to be rather stable, and can even be reinforced by subsequent diachronic developments.
  • Van Putten, S. (2009). Talking about motion in Avatime. Master Thesis, Leiden University.
  • Van Gijn, R. (2011). Subjects and objects: A semantic account of Yurakaré argument structure. International Journal of American Linguistics, 77, 595-621. doi:10.1086/662158.

    Abstract

    Yurakaré (unclassified, central Bolivia) marks core arguments on the verb by means of pronominal affixes. Subjects are suffixed, objects are prefixed. There are six types of head-marked objects in Yurakaré, each with its own morphosyntactic and semantic properties. Distributional patterns suggest that the six objects can be divided into two larger groups reminiscent of the typologically recognized direct vs. indirect object distinction. This paper looks at the interaction of this complex system of participant marking and verbal semantics. By investigating the participant-marking patterns of nine verb classes (four representing a gradual decrease of patienthood of the P participant, five a gradual decrease of agentivity of the A participant), I come to the conclusion that grammatical roles in Yurakaré can be defined semantically, and case frames are to a high degree determined by verbal semantics.
  • Van Leeuwen, E. J. C., Zimmerman, E., & Davila Ross, M. (2011). Responding to inequities: Gorillas try to maintain their competitive advantage during play fights. Biology Letters, 7(1), 39-42. doi:10.1098/rsbl.2010.0482.

    Abstract

    Humans respond to unfair situations in various ways. Experimental research has revealed that non-human species also respond to unequal situations in the form of inequity aversions when they have the disadvantage. The current study focused on play fights in gorillas to explore for the first time, to our knowledge, if/how non-human species respond to inequities in natural social settings. Hitting causes a naturally occurring inequity among individuals and here it was specifically assessed how the hitters and their partners engaged in play chases that followed the hitting. The results of this work showed that the hitters significantly more often moved first to run away immediately after the encounter than their partners. These findings provide evidence that non-human species respond to inequities by trying to maintain their competitive advantages. We conclude that non-human primates, like humans, may show different responses to inequities and that they may modify them depending on if they have the advantage or the disadvantage.
  • Van de Ven, M., Tucker, B. V., & Ernestus, M. (2011). Semantic context effects in the comprehension of reduced pronunciation variants. Memory & Cognition, 39, 1301-1316. doi:10.3758/s13421-011-0103-2.

    Abstract

    Listeners require context to understand the highly reduced words that occur in casual speech. The present study reports four auditory lexical decision experiments in which the role of semantic context in the comprehension of reduced versus unreduced speech was investigated. Experiments 1 and 2 showed semantic priming for combinations of unreduced, but not reduced, primes and low-frequency targets. In Experiment 3, we crossed the reduction of the prime with the reduction of the target. Results showed no semantic priming from reduced primes, regardless of the reduction of the targets. Finally, Experiment 4 showed that reduced and unreduced primes facilitate upcoming low-frequency related words equally if the interstimulus interval is extended. These results suggest that semantically related words need more time to be recognized after reduced primes, but once reduced primes have been fully (semantically) processed, these primes can facilitate the recognition of upcoming words as well as do unreduced primes.
  • Van Berkum, J. J. A., Hagoort, P., & Brown, C. M. (1999). Semantic integration in sentences and discourse: Evidence from the N400. Journal of Cognitive Neuroscience, 11(6), 657-671. doi:10.1162/089892999563724.

    Abstract

    In two ERP experiments we investigated how and when the language comprehension system relates an incoming word to semantic representations of an unfolding local sentence and a wider discourse. In experiment 1, subjects were presented with short stories. The last sentence of these stories occasionally contained a critical word that, although acceptable in the local sentence context, was semantically anomalous with respect to the wider discourse (e.g., "Jane told the brother that he was exceptionally slow" in a discourse context where he had in fact been very quick). Relative to coherent control words (e.g., "quick"), these discourse-dependent semantic anomalies elicited a large N400 effect that began at about 200-250 ms after word onset. In experiment 2, the same sentences were presented without their original story context. Although the words that had previously been anomalous in discourse still elicited a slightly larger average N400 than the coherent words, the resulting N400 effect was much reduced, showing that the large effect observed in stories was related to the wider discourse. In the same experiment, single sentences that contained a clear local semantic anomaly elicited a standard sentence-dependent N400 effect (e.g., Kutas & Hillyard, 1980). The N400 effects elicited in discourse and in single sentences had the same time course, overall morphology, and scalp distribution. We argue that these findings are most compatible with models of language processing in which there is no fundamental distinction between the integration of a word in its local (sentence-level) and its global (discourse-level) semantic context.
  • Van der Veer, G. C., Bagnara, S., & Kempen, G. (1991). Preface. Acta Psychologica, 78, ix. doi:10.1016/0001-6918(91)90002-H.
  • Van Gijn, R. (2009). The phonology of mixed languages. Journal of Pidgin and Creole Languages, 24(1), 91-117. doi:10.1075/jpcl.24.1.04gij.

    Abstract

    Mixed languages are said to be the result of a process of intertwining (e.g. Bakker & Muysken 1995, Bakker 1997), a regular process in which the grammar of one language is combined with the lexicon of another. However, the outcome of this process differs from language pair to language pair. As far as morphosyntax is concerned, people have discussed these different outcomes and the reasons for them extensively, e.g. Bakker 1997 for Michif, Mous 2003 for Ma’a, Muysken 1997a for Media Lengua and 1997b for Callahuaya. The issue of phonology, however, has not generated a large debate. This paper compares the phonological systems of the mixed languages Media Lengua, Callahuaya, Mednyj Aleut, and Michif. It will be argued that the outcome of the process of intertwining, as far as phonology is concerned, is at least partly determined by the extent to which unmixed phonological domains exist.
  • Van Berkum, J. J. A., Brown, C. M., & Hagoort, P. (1999). When does gender constrain parsing? Evidence from ERPs. Journal of Psycholinguistic Research, 28(5), 555-566. doi:10.1023/A:1023224628266.

    Abstract

    We review the implications of recent ERP evidence for when and how grammatical gender agreement constrains sentence parsing. In some theories of parsing, gender is assumed to immediately and categorically block gender-incongruent phrase structure alternatives from being pursued. In other theories, the parser initially ignores gender altogether. The ERP evidence we discuss suggests an intermediate position, in which grammatical gender does not immediately block gender-incongruent phrase structures from being considered, but is used to dispose of them shortly thereafter.
  • Van Turennout, M., Hagoort, P., & Brown, C. M. (1999). The time course of grammatical and phonological processing during speaking: evidence from event-related brain potentials. Journal of Psycholinguistic Research, 28(6), 649-676. doi:10.1023/A:1023221028150.

    Abstract

    Motor-related brain potentials were used to examine the time course of grammatical and phonological processes during noun phrase production in Dutch. In the experiments, participants named colored pictures using a no-determiner noun phrase. On half of the trials a syntactic-phonological classification task had to be performed before naming. Depending on the outcome of the classifications, a left or a right push-button response was given (go trials), or no push-button response was given (no-go trials). Lateralized readiness potentials (LRPs) were derived to test whether syntactic and phonological information affected the motor system at separate moments in time. The results showed that when syntactic information determined the response-hand decision, an LRP developed on no-go trials. However, no such effect was observed when phonological information determined response hand. On the basis of the data, it can be estimated that an additional period of at least 40 ms is needed to retrieve a word's initial phoneme once its lemma has been retrieved. These results provide evidence for the view that during speaking, grammatical processing precedes phonological processing in time.
  • Vandeberg, L., Guadalupe, T., & Zwaan, R. A. (2011). How verbs can activate things: Cross-language activation across word classes. Acta Psychologica, 138, 68-73. doi:10.1016/j.actpsy.2011.05.007.

    Abstract

    The present study explored whether language-nonselective access in bilinguals occurs across word classes in a sentence context. Dutch–English bilinguals were auditorily presented with English (L2) sentences while looking at a visual world. The sentences contained interlingual homophones from distinct lexical categories (e.g., the English verb spoke, which overlaps phonologically with the Dutch noun for ghost, spook). Eye movement recordings showed that depictions of referents of the Dutch (L1) nouns attracted more visual attention than unrelated distractor pictures in sentences containing homophones. This finding shows that native language objects are activated during second language verb processing despite the structural information provided by the sentence context. Research highlights: We show that native language words are activated during second language sentence processing. We tested this in a visual world setting on homophones with a different word class across languages. Fixations show that processing second language verbs activated native language nouns.
  • Vartiainen, J., Aggujaro, S., Lehtonen, M., Hulten, A., Laine, M., & Salmelin, R. (2009). Neural dynamics of reading morphologically complex words. NeuroImage, 47, 2064-2072. doi:10.1016/j.neuroimage.2009.06.002.

    Abstract

    Despite considerable research interest, it is still an open issue as to how morphologically complex words such as “car+s” are represented and processed in the brain. We studied the neural correlates of the processing of inflected nouns in the morphologically rich Finnish language. Previous behavioral studies in Finnish have yielded a robust inflectional processing cost, i.e., inflected words are harder to recognize than otherwise matched morphologically simple words. Theoretically this effect could stem either from decomposition of inflected words into a stem and a suffix at input level and/or from subsequent recombination at the semantic–syntactic level to arrive at an interpretation of the word. To shed light on this issue, we used magnetoencephalography to reveal the time course and localization of neural effects of morphological structure and frequency of written words. Ten subjects silently read high- and low-frequency Finnish words in inflected and monomorphemic form. Morphological complexity was accompanied by stronger and longer-lasting activation of the left superior temporal cortex from 200 ms onwards. Earlier effects of morphology were not found, supporting the view that the well-established behavioral processing cost for inflected words stems from the semantic–syntactic level rather than from early decomposition. Since the effect of morphology was detected throughout the range of word frequencies employed, the majority of inflected Finnish words appears to be represented in decomposed form and only very high-frequency inflected words may acquire full-form representations.
  • Verdonschot, R. G., La Heij, W., Paolieri, D., Zhang, Q., & Schiller, N. O. (2011). Homophonic context effects when naming Japanese kanji: Evidence for processing costs. Quarterly Journal of Experimental Psychology, 64(9), 1836-1849. doi:10.1080/17470218.2011.585241.

    Abstract

    The current study investigated the effects of phonologically related context pictures on the naming latencies of target words in Japanese and Chinese. Reading bare words in alphabetic languages has been shown to be rather immune to effects of context stimuli, even when these stimuli are presented in advance of the target word (e.g., Glaser & Düngelhoff, 1984; Roelofs, 2003). However, recently, semantic context effects of distractor pictures on the naming latencies of Japanese kanji (but not Chinese hanzi) words have been observed (Verdonschot, La Heij, & Schiller, 2010). In the present study, we further investigated this issue using phonologically related (i.e., homophonic) context pictures when naming target words in either Chinese or Japanese. We found that pronouncing bare nouns in Japanese is sensitive to phonologically related context pictures, whereas this is not the case in Chinese. The difference between these two languages is attributed to processing costs caused by multiple pronunciations for Japanese kanji.
  • Verdonschot, R. G., Kiyama, S., Tamaoka, K., Kinoshita, S., La Heij, W., & Schiller, N. O. (2011). The functional unit of Japanese word naming: Evidence from masked priming. Journal of Experimental Psychology: Learning, Memory, and Cognition, 37(6), 1458-1473. doi:10.1037/a0024491.

    Abstract

    Theories of language production generally describe the segment as the basic unit in phonological encoding (e.g., Dell, 1988; Levelt, Roelofs, & Meyer, 1999). However, there is also evidence that such a unit might be language specific. Chen, Chen, and Dell (2002), for instance, found no effect of single segments when using a preparation paradigm. To shed more light on the functional unit of phonological encoding in Japanese, a language often described as being mora based, we report the results of 4 experiments using word reading tasks and masked priming. Experiment 1 demonstrated using Japanese kana script that primes, which overlapped in the whole mora with target words, sped up word reading latencies but not when just the onset overlapped. Experiments 2 and 3 investigated a possible role of script by using combinations of romaji (Romanized Japanese) and hiragana; again, facilitation effects were found only when the whole mora and not the onset segment overlapped. Experiment 4 distinguished mora priming from syllable priming and revealed that the mora priming effects obtained in the first 3 experiments are also obtained when a mora is part of a syllable. Again, no priming effect was found for single segments. Our findings suggest that the mora and not the segment (phoneme) is the basic functional phonological unit in Japanese language production planning.
  • Verdonschot, R. G. (2011). Word processing in languages using non-alphabetic scripts: The cases of Japanese and Chinese. PhD Thesis, Leiden University, Leiden, The Netherlands.

    Abstract

    This thesis investigates the processing of words written in Japanese kanji and Chinese hànzì, i.e. logographic scripts. Special attention is given to the fact that the majority of Japanese kanji have multiple pronunciations (generally depending on the combination a kanji forms with other characters). First, using masked priming, it is established that upon presentation of a Japanese kanji multiple pronunciations are activated. In subsequent experiments using word naming with context pictures it is concluded that both Chinese hànzì and Japanese kanji are read out loud via a direct route from orthography to phonology. However, only Japanese kanji become susceptible to semantic or phonological context effects as a result of a cost due to the processing of multiple pronunciations. Finally, zooming in on the size of the articulatory planning unit in Japanese, it is concluded that the mora as a phonological unit best complies with the observed data pattern, and not the phoneme or the syllable.
  • Verhagen, J., & Schimke, S. (2009). Differences or fundamental differences? Zeitschrift für Sprachwissenschaft, 28(1), 97-106. doi:10.1515/ZFSW.2009.011.
  • Verhagen, J. (2009). Finiteness in Dutch as a second language. PhD Thesis, VU University, Amsterdam.
  • Verhagen, J. (2009). Temporal adverbials, negation and finiteness in Dutch as a second language: A scope-based account. IRAL, 47(2), 209-237. doi:10.1515/iral.2009.009.

    Abstract

    This study investigates the acquisition of post-verbal (temporal) adverbials and post-verbal negation in L2 Dutch. It is based on previous findings for L2 French that post-verbal negation poses less of a problem for L2 learners than post-verbal adverbial placement (Hawkins, Towell, Bazergui, Second Language Research 9: 189-233, 1993; Herschensohn, Minimally raising the verb issue: 325-336, Cascadilla Press, 1998). The current data show that, at first sight, Moroccan and Turkish learners of Dutch also have fewer problems with post-verbal negation than with post-verbal adverbials. However, when a distinction is made between different types of adverbials, it seems that this holds for adverbials of position such as 'today' but not for adverbials of contrast such as 'again'. To account for this difference, it is argued that different types of adverbial occupy different positions in the L2 data for reasons of scope marking. Moreover, the placement of adverbials such as 'again' interacts with the acquisition of finiteness marking (resulting in post-verbal placement), while there is no such interaction between adverbials such as 'today' and finiteness marking.
  • Verhagen, J. (2011). Verb placement in second language acquisition: Experimental evidence for the different behavior of auxiliary and lexical verbs. Applied Psycholinguistics, 32, 821-858. doi:10.1017/S0142716411000087.

    Abstract

    This study investigates the acquisition of verb placement by Moroccan and Turkish second language (L2) learners of Dutch. Elicited production data corroborate earlier findings from L2 German that learners who do not produce auxiliaries do not raise lexical verbs over negation, whereas learners who produce auxiliaries do. Data from elicited imitation and sentence matching support this pattern and show that learners can have grammatical knowledge of auxiliary placement before they can produce auxiliaries. With lexical verbs, they do not show such knowledge. These results present further evidence for the different behavior of auxiliary and lexical verbs in early stages of L2 acquisition.
  • Vernes, S. C., MacDermot, K. D., Monaco, A. P., & Fisher, S. E. (2009). Assessing the impact of FOXP1 mutations on developmental verbal dyspraxia. European Journal of Human Genetics, 17(10), 1354-1358. doi:10.1038/ejhg.2009.43.

    Abstract

    Neurodevelopmental disorders that disturb speech and language are highly heritable. Isolation of the underlying genetic risk factors has been hampered by complexity of the phenotype and potentially large number of contributing genes. One exception is the identification of rare heterozygous mutations of the FOXP2 gene in a monogenic syndrome characterised by impaired sequencing of articulatory gestures, disrupting speech (developmental verbal dyspraxia, DVD), as well as multiple deficits in expressive and receptive language. The protein encoded by FOXP2 belongs to a divergent subgroup of forkhead-box transcription factors, with a distinctive DNA-binding domain and motifs that mediate hetero- and homodimerisation. FOXP1, the most closely related member of this subgroup, can directly interact with FOXP2 and is co-expressed in neural structures relevant to speech and language disorders. Moreover, investigations of songbird orthologues indicate that combinatorial actions of the two proteins may play important roles in vocal learning, leading to the suggestion that human FOXP1 should be considered a strong candidate for involvement in DVD. Thus, in this study, we screened the entire coding region of FOXP1 (exons and flanking intronic sequence) for nucleotide changes in a panel of probands used earlier to detect novel mutations in FOXP2. A non-synonymous coding change was identified in a single proband, yielding a proline-to-alanine change (P215A). However, this was also found in a random control sample. Analyses of non-coding SNP changes did not find any correlation with affection status. We conclude that FOXP1 mutations are unlikely to represent a major cause of DVD.

    Additional information

    ejhg200943x1.pdf
  • Vernes, S. C., Newbury, D. F., Abrahams, B. S., Winchester, L., Nicod, J., Groszer, M., Alarcón, M., Oliver, P. L., Davies, K. E., Geschwind, D. H., Monaco, A. P., & Fisher, S. E. (2008). A functional genetic link between distinct developmental language disorders. New England Journal of Medicine, 359(22), 2337-2345. doi:10.1056/NEJMoa0802828.

    Abstract

    BACKGROUND: Rare mutations affecting the FOXP2 transcription factor cause a monogenic speech and language disorder. We hypothesized that neural pathways downstream of FOXP2 influence more common phenotypes, such as specific language impairment. METHODS: We performed genomic screening for regions bound by FOXP2 using chromatin immunoprecipitation, which led us to focus on one particular gene that was a strong candidate for involvement in language impairments. We then tested for associations between single-nucleotide polymorphisms (SNPs) in this gene and language deficits in a well-characterized set of 184 families affected with specific language impairment. RESULTS: We found that FOXP2 binds to and dramatically down-regulates CNTNAP2, a gene that encodes a neurexin and is expressed in the developing human cortex. On analyzing CNTNAP2 polymorphisms in children with typical specific language impairment, we detected significant quantitative associations with nonsense-word repetition, a heritable behavioral marker of this disorder (peak association, P=5.0x10(-5) at SNP rs17236239). Intriguingly, this region coincides with one associated with language delays in children with autism. CONCLUSIONS: The FOXP2-CNTNAP2 pathway provides a mechanistic link between clinically distinct syndromes involving disrupted language.

    Additional information

    nejm_vernes_2337sa1.pdf
  • Vernes, S. C., Oliver, P. L., Spiteri, E., Lockstone, H. E., Puliyadi, R., Taylor, J. M., Ho, J., Mombereau, C., Brewer, A., Lowy, E., Nicod, J., Groszer, M., Baban, D., Sahgal, N., Cazier, J.-B., Ragoussis, J., Davies, K. E., Geschwind, D. H., & Fisher, S. E. (2011). Foxp2 regulates gene networks implicated in neurite outgrowth in the developing brain. PLoS Genetics, 7(7): e1002145. doi:10.1371/journal.pgen.1002145.

    Abstract

    Forkhead-box protein P2 is a transcription factor that has been associated with intriguing aspects of cognitive function in humans, non-human mammals, and song-learning birds. Heterozygous mutations of the human FOXP2 gene cause a monogenic speech and language disorder. Reduced functional dosage of the mouse version (Foxp2) causes deficient cortico-striatal synaptic plasticity and impairs motor-skill learning. Moreover, the songbird orthologue appears critically important for vocal learning. Across diverse vertebrate species, this well-conserved transcription factor is highly expressed in the developing and adult central nervous system. Very little is known about the mechanisms regulated by Foxp2 during brain development. We used an integrated functional genomics strategy to robustly define Foxp2-dependent pathways, both direct and indirect targets, in the embryonic brain. Specifically, we performed genome-wide in vivo ChIP–chip screens for Foxp2-binding and thereby identified a set of 264 high-confidence neural targets under strict, empirically derived significance thresholds. The findings, coupled to expression profiling and in situ hybridization of brain tissue from wild-type and mutant mouse embryos, strongly highlighted gene networks linked to neurite development. We followed up our genomics data with functional experiments, showing that Foxp2 impacts on neurite outgrowth in primary neurons and in neuronal cell models. Our data indicate that Foxp2 modulates neuronal network formation, by directly and indirectly regulating mRNAs involved in the development and plasticity of neuronal connections.
  • Vernes, S. C., & Fisher, S. E. (2009). Unravelling neurogenetic networks implicated in developmental language disorders. Biochemical Society Transactions (London), 37, 1263-1269. doi:10.1042/BST0371263.

    Abstract

    Childhood syndromes disturbing language development are common and display high degrees of heritability. In most cases, the underlying genetic architecture is likely to be complex, involving multiple chromosomal loci and substantial heterogeneity, which makes it difficult to track down the crucial genomic risk factors. Investigation of rare Mendelian phenotypes offers a complementary route for unravelling key neurogenetic pathways. The value of this approach is illustrated by the discovery that heterozygous FOXP2 (where FOX is forkhead box) mutations cause an unusual monogenic disorder, characterized by problems with articulating speech along with deficits in expressive and receptive language. FOXP2 encodes a regulatory protein, belonging to the forkhead box family of transcription factors, known to play important roles in modulating gene expression in development and disease. Functional genetics using human neuronal models suggests that the different FOXP2 isoforms generated by alternative splicing have distinct properties and may act to regulate each other's activity. Such investigations have also analysed the missense and nonsense mutations found in cases of speech and language disorder, showing that they alter intracellular localization, DNA binding and transactivation capacity of the mutated proteins. Moreover, in the brains of mutant mice, aetiological mutations have been found to disrupt the synaptic plasticity of Foxp2-expressing circuitry. Finally, although mutations of FOXP2 itself are rare, the downstream networks which it regulates in the brain appear to be broadly implicated in typical forms of language impairment. Thus, through ongoing identification of regulated targets and interacting co-factors, this gene is providing the first molecular entry points into neural mechanisms that go awry in language-related disorders.
  • Viaro, M., Bercelli, F., & Rossano, F. (2008). Una relazione terapeutica: Il terapeuta allenatore. Connessioni: Rivista di consulenza e ricerca sui sistemi umani, 20, 95-105.
  • De Vignemont, F., Majid, A., Jola, C., & Haggard, P. (2009). Segmenting the body into parts: Evidence from biases in tactile perception. Quarterly Journal of Experimental Psychology, 62, 500-512. doi:10.1080/17470210802000802.

    Abstract

    How do we individuate body parts? Here, we investigated the effect of body segmentation between hand and arm in tactile and visual perception. In a first experiment, we showed that two tactile stimuli felt farther away when they were applied across the wrist than when they were applied within a single body part (palm or forearm), indicating a “category boundary effect”. In the following experiments, we excluded two hypotheses, which attributed tactile segmentation to other, nontactile factors. In Experiment 2, we showed that the boundary effect does not arise from motor cues. The effect was reduced during a motor task involving flexion and extension movements of the wrist joint. Action brings body parts together into functional units, instead of pulling them apart. In Experiments 3 and 4, we showed that the effect does not arise from perceptual cues of visual discontinuities. We did not find any segmentation effect for the visual percept of the body in Experiment 3, nor for a neutral shape in Experiment 4. We suggest that the mental representation of the body is structured in categorical body parts delineated by joints, and that this categorical representation modulates tactile spatial perception.
  • De Vos, C. (2011). A signers' village in Bali, Indonesia. Minpaku Anthropology Newsletter, 33, 4-5.
  • De Vos, C. (2009). [Review of the book Language complexity as an evolving variable ed. by Geoffrey Sampson, David Gil and Peter Trudgill]. LINGUIST List, 20.4275. Retrieved from http://linguistlist.org/issues/20/20-4275.html.
  • De Vos, C., Van der Kooij, E., & Crasborn, O. (2009). Mixed signals: Combining linguistic and affective functions of eyebrows in questions in Sign Language of the Netherlands. Language and Speech, 52(2/3), 315-339. doi:10.1177/0023830909103177.

    Abstract

    The eyebrows are used as conversational signals in face-to-face spoken interaction (Ekman, 1979). In Sign Language of the Netherlands (NGT), the eyebrows are typically furrowed in content questions, and raised in polar questions (Coerts, 1992). On the other hand, these eyebrow positions are also associated with anger and surprise, respectively, in general human communication (Ekman, 1993). This overlap in the functional load of the eyebrow positions results in a potential conflict for NGT signers when combining these functions simultaneously. In order to investigate the effect of the simultaneous realization of both functions on the eyebrow position, we elicited instances of both question types with neutral affect and with various affective states. The data were coded using the Facial Action Coding System (FACS: Ekman, Friesen, & Hager, 2002) for type of brow movement as well as for intensity. FACS allows for the coding of muscle groups, which are termed Action Units (AUs) and which produce facial appearance changes. The results show that linguistic and affective functions of eyebrows may influence each other in NGT. That is, in surprised polar questions and angry content questions, a phonetic enhancement of raising and furrowing, respectively, takes place. In the items with contrasting eyebrow movements, the grammatical and affective AUs are either blended (occur simultaneously) or they are realized sequentially. Interestingly, the absence of eyebrow raising (marked by AU 1+2) in angry polar questions, and the presence of eyebrow furrowing (realized by AU 4) in surprised content questions, suggest that in general AU 4 may be phonetically stronger than AU 1 and AU 2, independent of its linguistic or affective function.
  • De Vos, C. (2008). Janger Kolok: de Balinese dovendans. Woord en Gebaar, 12-13.
  • De Vos, C. (2011). Kata Kolok color terms and the emergence of lexical signs in rural signing communities. The Senses & Society, 6(1), 68-76. doi:10.2752/174589311X12893982233795.

    Abstract

    How do new languages develop systematic ways to talk about sensory experiences, such as color? To what extent is the evolution of color terms guided by societal factors? This paper describes the color lexicon of a rural sign language called Kata Kolok, which emerged approximately one century ago in a Balinese village. Kata Kolok has four color signs: black, white, red and a blue-green term. In addition, two non-conventionalized means are used to provide color descriptions: naming relevant objects, and pointing to objects in the vicinity. Comparison with Balinese culture and spoken Balinese brings to light discrepancies between the systems, suggesting that neither cultural practices nor language contact have driven the formation of color signs in Kata Kolok. The few lexicographic investigations from other rural sign languages report limitations in the domain of color. On the other hand, larger, urban signed languages have extensive systems; for example, Australian Sign Language has up to nine color terms (Woodward 1989: 149). These comparisons support the finding that rural sign languages like Kata Kolok fail to provide the societal pressures for the lexicon to expand further.
  • Vosse, T., & Kempen, G. (2009). In defense of competition during syntactic ambiguity resolution. Journal of Psycholinguistic Research, 38(1), 1-9. doi:10.1007/s10936-008-9075-1.

    Abstract

    In a recent series of publications (Traxler et al. J Mem Lang 39:558–592, 1998; Van Gompel et al. J Mem Lang 52:284–307, 2005; see also Van Gompel et al. in Kennedy et al. (Eds.), Reading as a perceptual process, pp. 621–648, Oxford: Elsevier, 2000; Van Gompel et al. J Mem Lang 45:225–258, 2001), eye-tracking data are reported showing that globally ambiguous (GA) sentences are read faster than locally ambiguous (LA) counterparts. They argue that these data rule out “constraint-based” models where syntactic and conceptual processors operate concurrently and syntactic ambiguity resolution is accomplished by competition. Such models predict the opposite pattern of reading times. However, this argument against competition is valid only in conjunction with two standard assumptions in current constraint-based models of sentence comprehension: (1) that syntactic competitions (e.g., Which is the best attachment site of the incoming constituent?) are pooled together with conceptual competitions (e.g., Which attachment site entails the most plausible meaning?), and (2) that the duration of a competition is a function of the overall (pooled) quality score obtained by each competitor. We argue that it is not necessary to abandon competition as a successful basis for explaining parsing phenomena and that the above-mentioned reading time data can be accounted for by a parallel-interactive model with conceptual and syntactic processors that do not pool their quality scores together. Within the individual linguistic modules, decision-making can very well be competition-based.
  • Vosse, T., & Kempen, G. (2009). The Unification Space implemented as a localist neural net: Predictions and error-tolerance in a constraint-based parser. Cognitive Neurodynamics, 3, 331-346. doi:10.1007/s11571-009-9094-0.

    Abstract

    We introduce a novel computer implementation of the Unification-Space parser (Vosse & Kempen 2000) in the form of a localist neural network whose dynamics is based on interactive activation and inhibition. The wiring of the network is determined by Performance Grammar (Kempen & Harbusch 2003), a lexicalist formalism with feature unification as binding operation. While the network is processing input word strings incrementally, the evolving shape of parse trees is represented in the form of changing patterns of activation in nodes that code for syntactic properties of words and phrases, and for the grammatical functions they fulfill. The system is capable, at least in a qualitative and rudimentary sense, of simulating several important dynamic aspects of human syntactic parsing, including garden-path phenomena and reanalysis, effects of complexity (various types of clause embeddings), fault-tolerance in case of unification failures and unknown words, and predictive parsing (expectation-based analysis, surprisal effects). English is the target language of the parser described.
  • De Vries, M., Christiansen, M. H., & Petersson, K. M. (2011). Learning recursion: Multiple nested and crossed dependencies. Biolinguistics, 5(1/2), 010-035.

    Abstract

    Language acquisition in both natural and artificial language learning settings crucially depends on extracting information from sequence input. A shared sequence learning mechanism is thus assumed to underlie both natural and artificial language learning. A growing body of empirical evidence is consistent with this hypothesis. By means of artificial language learning experiments, we may therefore gain more insight into this shared mechanism. In this paper, we review empirical evidence from artificial language learning and computational modelling studies, as well as natural language data, and suggest that there are two key factors that help determine processing complexity in sequence learning, and thus in natural language processing. We propose that the specific ordering of non-adjacent dependencies (i.e., nested or crossed), as well as the number of non-adjacent dependencies to be resolved simultaneously (i.e., two or three), are important factors in gaining more insight into the boundaries of human sequence learning, and thus also into natural language processing. The implications for theories of linguistic competence are discussed.
  • Vuong, L., & Martin, R. C. (2011). LIFG-based attentional control and the resolution of lexical ambiguities in sentence context. Brain and Language, 116, 22-32. doi:10.1016/j.bandl.2010.09.012.

    Abstract

    The role of attentional control in lexical ambiguity resolution was examined in two patients with damage to the left inferior frontal gyrus (LIFG) and one control patient with non-LIFG damage. Experiment 1 confirmed that the LIFG patients had attentional control deficits compared to normal controls while the non-LIFG patient was relatively unimpaired. Experiment 2 showed that all three patients did as well as normal controls in using biasing sentence context to resolve lexical ambiguities involving balanced ambiguous words, but only the LIFG patients took an abnormally long time on lexical ambiguities that resolved toward a subordinate meaning of biased ambiguous words. Taken together, the results suggest that attentional control plays an important role in the resolution of certain lexical ambiguities – those that induce strong interference from context-inappropriate meanings (i.e., dominant meanings of biased ambiguous words).
  • Wagner, A. (2008). Phoneme inventories and patterns of speech sound perception. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Wagner, A., & Ernestus, M. (2008). Identification of phonemes: Differences between phoneme classes and the effect of class size. Phonetica, 65(1-2), 106-127. doi:10.1159/000132389.

    Abstract

    This study reports general and language-specific patterns in phoneme identification. In a series of phoneme monitoring experiments, Castilian Spanish, Catalan, Dutch, English, and Polish listeners identified vowel, fricative, and stop consonant targets that are phonemic in all these languages, embedded in nonsense words. Fricatives were generally identified more slowly than vowels, while the speed of identification for stop consonants was highly dependent on the onset of the measurements. Moreover, listeners' response latencies and accuracy in detecting a phoneme correlated with the number of categories within that phoneme's class in the listener's native phoneme repertoire: more native categories slowed listeners down and decreased their accuracy. We excluded the possibility that this effect stems from differences in the frequencies of occurrence of the phonemes in the different languages. Rather, the effect of the number of categories can be explained by general properties of the perception system, which cause language-specific patterns in speech processing.
  • Wang, L. (2011). The influence of information structure on language comprehension: A neurocognitive perspective. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Wang, L., Hagoort, P., & Yang, Y. (2009). Semantic illusion depends on information structure: ERP evidence. Brain Research, 1282, 50-56. doi:10.1016/j.brainres.2009.05.069.

    Abstract

    Next to propositional content, speakers distribute information in their utterances in such a way that listeners can make a distinction between new (focused) and given (non-focused) information. This is referred to as information structure. We measured event-related potentials (ERPs) to explore the role of information structure in semantic processing. Following different questions in wh-question-answer pairs (e.g. What kind of vegetable did Ming buy for cooking today? /Who bought the vegetables for cooking today?), the answer sentences (e.g., Ming bought eggplant/beef to cook today.) contained a critical word, which was either semantically appropriate (eggplant) or inappropriate (beef), and either focus or non-focus. The results showed a full N400 effect only when the critical words were in focus position. In non-focus position a strongly reduced N400 effect was observed, in line with the well-known semantic illusion effect. The results suggest that information structure facilitates semantic processing by devoting more resources to focused information.
  • Wang, L., Bastiaansen, M. C. M., Yang, Y., & Hagoort, P. (2011). The influence of information structure on the depth of semantic processing: How focus and pitch accent determine the size of the N400 effect. Neuropsychologia, 49, 813-820. doi:10.1016/j.neuropsychologia.2010.12.035.

    Abstract

    To highlight relevant information in dialogues, both wh-question context and pitch accent in answers can be used, such that focused information gains more attention and is processed more elaborately. To evaluate the relative influence of context and pitch accent on the depth of semantic processing, we measured Event-Related Potentials (ERPs) to auditorily presented wh-question-answer pairs. A semantically incongruent word in the answer occurred either in focus or non-focus position as determined by the context, and this word was either accented or unaccented. Semantic incongruency elicited different N400 effects in different conditions. The largest N400 effect was found when the question-marked focus was accented, while the other three conditions elicited smaller N400 effects. The results suggest that context and accentuation interact. Thus, accented focused words were processed more deeply compared to conditions where focus and accentuation mismatched, or when the new information had no marking. In addition, there seem to be sex differences in the depth of semantic processing. For males, a significant N400 effect was observed only when the question-marked focus was accented, while reduced N400 effects were found in the other dialogues. In contrast, females produced similar N400 effects in all the conditions. These results suggest that, regardless of external cues, females tend to engage in more elaborate semantic processing compared to males.
  • Warner, N., Fountain, A., & Tucker, B. V. (2009). Cues to perception of reduced flaps. Journal of the Acoustical Society of America, 125(5), 3317-3327. doi:10.1121/1.3097773.

    Abstract

    Natural, spontaneous speech (and even quite careful speech) often shows extreme reduction in many speech segments, even resulting in apparent deletion of consonants. Where the flap ([ɾ]) allophone of /t/ and /d/ is expected in American English, one frequently sees an approximant-like or even vocalic pattern, rather than a clear flap. Still, the /t/ or /d/ is usually perceived, suggesting the acoustic characteristics of a reduced flap are sufficient for perception of a consonant. This paper identifies several acoustic characteristics of reduced flaps based on previous acoustic research (size of intensity dip, consonant duration, and F4 valley) and presents phonetic identification data for continua that manipulate these acoustic characteristics of reduction. The results indicate that the most obvious types of acoustic variability seen in natural flaps do affect listeners' percept of a consonant, but not sufficiently to completely account for the percept. Listeners are affected by the acoustic characteristics of consonant reduction, but they are also very skilled at evaluating variability along the acoustic dimensions that realize reduction.

  • Warner, N., Luna, Q., Butler, L., & Van Volkinburg, H. (2009). Revitalization in a scattered language community: Problems and methods from the perspective of Mutsun language revitalization. International Journal of the Sociology of Language, 198, 135-148. doi:10.1515/IJSL.2009.031.

    Abstract

    This article addresses revitalization of a dormant language whose prospective speakers live in scattered geographical areas. In comparison to increasing the usage of an endangered language, revitalizing a dormant language (one with no living speakers) requires different methods to gain knowledge of the language. Language teaching for a dormant language with a scattered community presents different problems from other teaching situations. In this article, we discuss the types of tasks that must be accomplished for dormant-language revitalization, with particular focus on development of teaching materials. We also address the role of computer technologies, arguing that each use of technology should be evaluated for how effectively it increases fluency. We discuss methods for achieving semi-fluency for the first new speakers of a dormant language, and for spreading the language through the community.
  • Weber, A., Broersma, M., & Aoyagi, M. (2011). Spoken-word recognition in foreign-accented speech by L2 listeners. Journal of Phonetics, 39, 479-491. doi:10.1016/j.wocn.2010.12.004.

    Abstract

    Two cross-modal priming studies investigated the recognition of English words spoken with a foreign accent. Auditory English primes were either typical of a Dutch accent or typical of a Japanese accent in English and were presented to both Dutch and Japanese L2 listeners. Lexical-decision times to subsequent visual target words revealed that foreign-accented words can facilitate word recognition for L2 listeners if at least one of two requirements is met: the foreign-accented production is in accordance with the language background of the L2 listener, or the foreign accent is perceptually confusable with the standard pronunciation for the L2 listener. If neither one of the requirements is met, no facilitatory effect of foreign accents on L2 word recognition is found. Taken together, these findings suggest that linguistic experience with a foreign accent affects the ability to recognize words carrying this accent, and there is furthermore a general benefit for L2 listeners for recognizing foreign-accented words that are perceptually confusable with the standard pronunciation.
  • Weber, K., & Lavric, A. (2008). Syntactic anomaly elicits a lexico-semantic (N400) ERP effect in the second but not in the first language. Psychophysiology, 45(6), 920-925. doi:10.1111/j.1469-8986.2008.00691.x.

    Abstract

    Recent brain potential research into first versus second language (L1 vs. L2) processing revealed striking responses to morphosyntactic features absent in the mother tongue. The aim of the present study was to establish whether the presence of comparable morphosyntactic features in L1 leads to more similar electrophysiological L1 and L2 profiles. ERPs were acquired while German-English bilinguals and native speakers of English read sentences. Some sentences were meaningful and well formed, whereas others contained morphosyntactic or semantic violations in the final word. In addition to the expected P600 component, morphosyntactic violations in L2 but not L1 led to an enhanced N400. This effect may suggest either that resolution of morphosyntactic anomalies in L2 relies on the lexico-semantic system or that the weaker/slower morphological mechanisms in L2 lead to greater sentence wrap-up difficulties known to result in N400 enhancement.
  • Weber, K., & Indefrey, P. (2009). Syntactic priming in German–English bilinguals during sentence comprehension. Neuroimage, 46, 1164-1172. doi:10.1016/j.neuroimage.2009.03.040.

    Abstract

    A longstanding question in bilingualism is whether syntactic information is shared between the two language processing systems. We used an fMRI repetition suppression paradigm to investigate syntactic priming in reading comprehension in German–English late-acquisition bilinguals. In comparison to conventional subtraction analyses in bilingual experiments, repetition suppression has the advantage of being able to detect neuronal populations that are sensitive to properties that are shared by consecutive stimuli. In this study, we manipulated the syntactic structure between prime and target sentences. A sentence with a passive sentence structure in English was preceded either by a passive or by an active sentence in English or German. We looked for repetition suppression effects in left inferior frontal, left precentral and left middle temporal regions of interest. These regions were defined by a contrast of all non-target sentences in German and English versus the baseline of sentence-format consonant strings. We found decreases in activity (repetition suppression effects) in these regions of interest following the repetition of syntactic structure from the first to the second language and within the second language.
    Moreover, a separate behavioural experiment using a word-by-word reading paradigm similar to the fMRI experiment showed faster reading times for primed compared to unprimed English target sentences regardless of whether they were preceded by an English or a German sentence of the same structure.
    We conclude that there is interaction between the language processing systems and that at least some syntactic information is shared between a bilingual's languages with similar syntactic structures.

  • Wegener, C. (2008). A grammar of Savosavo: A Papuan language of the Solomon Islands. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Wells, J. B., Christiansen, M. H., Race, D. S., Acheson, D. J., & MacDonald, M. C. (2009). Experience and sentence processing: Statistical learning and relative clause comprehension. Cognitive Psychology, 58(2), 250-271. doi:10.1016/j.cogpsych.2008.08.002.

    Abstract

    Many explanations of the difficulties associated with interpreting object relative clauses appeal to the demands that object relatives make on working memory. MacDonald and Christiansen [MacDonald, M. C., & Christiansen, M. H. (2002). Reassessing working memory: Comment on Just and Carpenter (1992) and Waters and Caplan (1996). Psychological Review, 109, 35-54] pointed to variations in reading experience as a source of differences, arguing that the unique word order of object relatives makes their processing more difficult and more sensitive to the effects of previous experience than the processing of subject relatives. This hypothesis was tested in a large-scale study manipulating reading experiences of adults over several weeks. The group receiving relative clause experience increased reading speeds for object relatives more than for subject relatives, whereas a control experience group did not. The reading time data were compared to performance of a computational model given different amounts of experience. The results support claims for experience-based individual differences and an important role for statistical learning in sentence comprehension processes.
  • Whitehouse, A. J., Bishop, D. V., Ang, Q., Pennell, C. E., & Fisher, S. E. (2011). CNTNAP2 variants affect early language development in the general population. Genes, Brain and Behavior, 10, 451-456. doi:10.1111/j.1601-183X.2011.00684.x.

    Abstract

    Early language development is known to be under genetic influence, but the genes affecting normal variation in the general population remain largely elusive. Recent studies of disorder reported that variants of the CNTNAP2 gene are associated both with language deficits in specific language impairment (SLI) and with language delays in autism. We tested the hypothesis that these CNTNAP2 variants affect communicative behavior, measured at 2 years of age in a large epidemiological sample, the Western Australian Pregnancy Cohort (Raine) Study. Single-point analyses of 1149 children (606 males, 543 females) revealed patterns of association which were strikingly reminiscent of those observed in previous investigations of impaired language, centered on the same genetic markers, and with a consistent direction of effect (rs2710102, p = .0239; rs759178, p = .0248). Based on these findings, we performed analyses of four-marker haplotypes of rs2710102-rs759178-rs17236239-rs2538976, and identified significant association (haplotype TTAA, p = .049; haplotype GCAG, p = .0014). Our study suggests that common variants in the exon 13-15 region of CNTNAP2 influence early language acquisition, as assessed at age 2, in the general population. We propose that these CNTNAP2 variants increase susceptibility to SLI or autism when they occur together with other risk factors.

    Additional information

    Whitehouse_Additional_Information.doc
  • Widlok, T. (2008). Landscape unbounded: Space, place, and orientation in ≠Akhoe Hai//om and beyond. Language Sciences, 30(2/3), 362-380. doi:10.1016/j.langsci.2006.12.002.

    Abstract

    Even before it became commonplace to assume that “the Eskimo have a hundred words for snow”, the languages of hunting and gathering people have played an important role in debates about linguistic relativity concerning geographical ontologies. Evidence from languages of hunter-gatherers has been used in radical relativist challenges to the overall notion of a comparative typology of generic natural forms and landscapes as terms of reference. It has been invoked to emphasize a personalized relationship between humans and the non-human world. It is against this background that this contribution discusses the landscape terminology of ≠Akhoe Hai//om, a Khoisan language spoken by “Bushmen” in Namibia. Landscape vocabulary is ubiquitous in ≠Akhoe Hai//om due to the fact that the landscape plays a critical role in directionals and other forms of “topographical gossip” and due to merges between landscape and group terminology. This system of landscape-cum-group terminology is outlined and related to the use of place names in the area.
  • Willems, R. M., Ozyurek, A., & Hagoort, P. (2008). Seeing and hearing meaning: ERP and fMRI evidence of word versus picture integration into a sentence context. Journal of Cognitive Neuroscience, 20, 1235-1249. doi:10.1162/jocn.2008.20085.

    Abstract

    Understanding language always occurs within a situational context and, therefore, often implies combining streams of information from different domains and modalities. One such combination is that of spoken language and visual information, which are perceived together in a variety of ways during everyday communication. Here we investigate whether and how words and pictures differ in terms of their neural correlates when they are integrated into a previously built-up sentence context. This is assessed in two experiments looking at the time course (measuring event-related potentials, ERPs) and the locus (using functional magnetic resonance imaging, fMRI) of this integration process. We manipulated the ease of semantic integration of word and/or picture to a previous sentence context to increase the semantic load of processing. In the ERP study, an increased semantic load led to an N400 effect which was similar for pictures and words in terms of latency and amplitude. In the fMRI study, we found overlapping activations to both picture and word integration in the left inferior frontal cortex. Specific activations for the integration of a word were observed in the left superior temporal cortex. We conclude that despite obvious differences in representational format, semantic information coming from pictures and words is integrated into a sentence context in similar ways in the brain. This study adds to the growing insight that the language system incorporates (semantic) information coming from linguistic and extralinguistic domains with the same neural time course and by recruitment of overlapping brain areas.
  • Willems, R. M., Toni, I., Hagoort, P., & Casasanto, D. (2009). Body-specific motor imagery of hand actions: Neural evidence from right- and left-handers. Frontiers in Human Neuroscience, 3: 39. doi:10.3389/neuro.09.039.2009.

    Abstract

    If motor imagery uses neural structures involved in action execution, then the neural correlates of imagining an action should differ between individuals who tend to execute the action differently. Here we report fMRI data showing that motor imagery is influenced by the way people habitually perform motor actions with their particular bodies; that is, motor imagery is ‘body-specific’ (Casasanto, 2009). During mental imagery for complex hand actions, activation of cortical areas involved in motor planning and execution was left-lateralized in right-handers but right-lateralized in left-handers. We conclude that motor imagery involves the generation of an action plan that is grounded in the participant’s motor habits, not just an abstract representation at the level of the action’s goal. People with different patterns of motor experience form correspondingly different neurocognitive representations of imagined actions.
  • Willems, R. M., & Hagoort, P. (2009). Broca's region: Battles are not won by ignoring half of the facts. Trends in Cognitive Sciences, 13(3), 101. doi:10.1016/j.tics.2008.12.001.
  • Willems, R. M., Labruna, L., D'Esposito, M., Ivry, R., & Casasanto, D. (2011). A functional role for the motor system in language understanding: Evidence from Theta-Burst Transcranial Magnetic Stimulation. Psychological Science, 22, 849-854. doi:10.1177/0956797611412387.

    Abstract

    Does language comprehension depend, in part, on neural systems for action? In previous studies, motor areas of the brain were activated when people read or listened to action verbs, but it remains unclear whether such activation is functionally relevant for comprehension. In the experiments reported here, we used off-line theta-burst transcranial magnetic stimulation to investigate whether a causal relationship exists between activity in premotor cortex and action-language understanding. Right-handed participants completed a lexical decision task, in which they read verbs describing manual actions typically performed with the dominant hand (e.g., “to throw,” “to write”) and verbs describing nonmanual actions (e.g., “to earn,” “to wander”). Responses to manual-action verbs (but not to nonmanual-action verbs) were faster after stimulation of the hand area in left premotor cortex than after stimulation of the hand area in right premotor cortex. These results suggest that premotor cortex has a functional role in action-language understanding.

    Additional information

    Supplementary materials Willems.pdf
  • Willems, R. M., Clevis, K., & Hagoort, P. (2011). Add a picture for suspense: Neural correlates of the interaction between language and visual information in the perception of fear. Social, Cognitive and Affective Neuroscience, 6, 404-416. doi:10.1093/scan/nsq050.

    Abstract

    We investigated how visual and linguistic information interact in the perception of emotion. We borrowed a phenomenon from film theory which states that presentation of a visual scene that is neutral in itself intensifies the percept of fear or suspense induced by a different channel of information, such as language. Our main aim was to investigate how neutral visual scenes can enhance responses to fearful language content in parts of the brain involved in the perception of emotion. Healthy participants’ brain activity was measured (using functional magnetic resonance imaging) while they read fearful and less fearful sentences presented with or without a neutral visual scene. The main idea is that the visual scenes intensify the fearful content of the language by subtly implying and concretizing what is described in the sentence. Activation levels in the right anterior temporal pole were selectively increased when a neutral visual scene was paired with a fearful sentence, compared to reading the sentence alone, as well as to reading of non-fearful sentences presented with the same neutral scene. We conclude that the right anterior temporal pole serves a binding function of emotional information across domains such as visual and linguistic information.
  • Willems, R. M., Ozyurek, A., & Hagoort, P. (2009). Differential roles for left inferior frontal and superior temporal cortex in multimodal integration of action and language. Neuroimage, 47, 1992-2004. doi:10.1016/j.neuroimage.2009.05.066.

    Abstract

    Several studies indicate that both posterior superior temporal sulcus/middle temporal gyrus (pSTS/MTG) and left inferior frontal gyrus (LIFG) are involved in integrating information from different modalities. Here we investigated the respective roles of these two areas in integration of action and language information. We exploited the fact that the semantic relationship between language and different forms of action (i.e. co-speech gestures and pantomimes) is radically different. Speech and co-speech gestures are always produced together, and gestures are not unambiguously understood without speech. On the contrary, pantomimes are not necessarily produced together with speech and can be easily understood without speech. We presented speech together with these two types of communicative hand actions in matching or mismatching combinations to manipulate semantic integration load. Left and right pSTS/MTG were only involved in semantic integration of speech and pantomimes. Left IFG on the other hand was involved in integration of speech and co-speech gestures as well as of speech and pantomimes. Effective connectivity analyses showed that depending upon the semantic relationship between language and action, LIFG modulates activation levels in left pSTS.

    This suggests that integration in pSTS/MTG involves the matching of two input streams for which there is a relatively stable common object representation, whereas integration in LIFG is better characterized as the on-line construction of a new and unified representation of the input streams. In conclusion, pSTS/MTG and LIFG are differentially involved in multimodal integration, crucially depending upon the semantic relationship between the input streams.

    Additional information

    Supplementary table S1
  • Willems, R. M., Benn, Y., Hagoort, P., Toni, I., & Varley, R. (2011). Communicating without a functioning language system: Implications for the role of language in mentalizing. Neuropsychologia, 49, 3130-3135. doi:10.1016/j.neuropsychologia.2011.07.023.

    Abstract

    A debated issue in the relationship between language and thought is how our linguistic abilities are involved in understanding the intentions of others (‘mentalizing’). The results of both theoretical and empirical work have been used to argue that linguistic, and more specifically, grammatical, abilities are crucial in representing the mental states of others. Here we contribute to this debate by investigating how damage to the language system influences the generation and understanding of intentional communicative behaviors. Four patients with pervasive language difficulties (severe global or agrammatic aphasia) engaged in an experimentally controlled non-verbal communication paradigm, which required signaling and understanding a communicative message. Despite their profound language problems, they were able to engage in recipient design as well as intention recognition, showing similar indicators of mentalizing as have been observed in the neurologically healthy population. Our results show that aspects of the ability to communicate remain present even when core capacities of the language system are dysfunctional.
  • Willems, R. M., Oostenveld, R., & Hagoort, P. (2008). Early decreases in alpha and gamma band power distinguish linguistic from visual information during spoken sentence comprehension. Brain Research, 1219, 78-90. doi:10.1016/j.brainres.2008.04.065.

    Abstract

    Language is often perceived together with visual information. This raises the question of how the brain integrates information conveyed in visual and/or linguistic format during spoken language comprehension. In this study we investigated the dynamics of semantic integration of visual and linguistic information by means of time-frequency analysis of the EEG signal. A modified version of the N400 paradigm with either a word or a picture of an object being semantically incongruous with respect to the preceding sentence context was employed. Event-Related Potential (ERP) analysis showed qualitatively similar N400 effects for integration of either word or picture. Time-frequency analysis revealed early specific decreases in alpha and gamma band power for linguistic and visual information respectively. We argue that these reflect a rapid context-based analysis of acoustic (word) or visual (picture) form information. We conclude that although full semantic integration of linguistic and visual information occurs through a common mechanism, early differences in oscillations in specific frequency bands reflect the format of the incoming information and, importantly, an early context-based detection of its congruity with respect to the preceding language context.
  • Willems, R. M., & Casasanto, D. (2011). Flexibility in embodied language understanding. Frontiers in Psychology, 2, 116. doi:10.3389/fpsyg.2011.00116.

    Abstract

    Do people use sensori-motor cortices to understand language? Here we review neurocognitive studies of language comprehension in healthy adults and evaluate their possible contributions to theories of language in the brain. We start by sketching the minimal predictions that an embodied theory of language understanding makes for empirical research, and then survey studies that have been offered as evidence for embodied semantic representations. We explore four debated issues: first, does activation of sensori-motor cortices during action language understanding imply that action semantics relies on mirror neurons? Second, what is the evidence that activity in sensori-motor cortices plays a functional role in understanding language? Third, to what extent do responses in perceptual and motor areas depend on the linguistic and extra-linguistic context? And finally, can embodied theories accommodate language about abstract concepts? Based on the available evidence, we conclude that sensori-motor cortices are activated during a variety of language comprehension tasks, for both concrete and abstract language. Yet, this activity depends on the context in which perception and action words are encountered. Although modality-specific cortical activity is not a sine qua non of language processing even for language about perception and action, sensori-motor regions of the brain appear to make functional contributions to the construction of meaning, and should therefore be incorporated into models of the neurocognitive architecture of language.
  • Willems, R. M., & Hagoort, P. (2009). Hand preference influences neural correlates of action observation. Brain Research, 1269, 90-104. doi:10.1016/j.brainres.2009.02.057.

    Abstract

    It has been argued that we map observed actions onto our own motor system. Here we added to this issue by investigating whether hand preference influences the neural correlates of action observation of simple, essentially meaningless hand actions. Such an influence would argue for an intricate neural coupling between action production and action observation, which goes beyond effects of motor repertoire or explicit motor training, as has been suggested before. Indeed, parts of the human motor system exhibited a close coupling between action production and action observation. Ventral premotor and inferior and superior parietal cortices showed differential activation for left- and right-handers that was similar during action production as well as during action observation. This suggests that mapping observed actions onto the observer's own motor system is a core feature of action observation - at least for actions that do not have a clear goal or meaning. Basic differences in the way we act upon the world are not only reflected in neural correlates of action production, but can also influence the brain basis of action observation.
  • Willems, R. M. (2009). Neural reflections of meaning in gesture, language, and action. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Willems, R. M. (2011). Re-appreciating the why of cognition: 35 years after Marr and Poggio. Frontiers in Psychology, 2, 244. doi:10.3389/fpsyg.2011.00244.

    Abstract

    Marr and Poggio’s levels of description are one of the most well-known theoretical constructs of twentieth-century cognitive science. The framework entails that behavior can and should be considered at three different levels: computation, algorithm, and implementation. In this contribution, the focus is on the computational level of description, the level that describes the “why” of cognition. I argue that the computational level should be taken as a starting point in devising experiments in cognitive (neuro)science. Instead, the starting point in empirical practice often is a focus on the stimulus or on some capacity of the cognitive system. The “why” of cognition tends to be ignored when designing research, and is not considered in subsequent inference from experimental results. The overall aim of this manuscript is to show how re-appreciation of the computational level of description as a starting point for experiments can lead to more informative experimentation.
  • Williams, N. M., Williams, H., Majounie, E., Norton, N., Glaser, B., Morris, H. R., Owen, M. J., & O'Donovan, M. C. (2008). Analysis of copy number variation using quantitative interspecies competitive PCR. Nucleic Acids Research, 36(17): e112. doi:10.1093/nar/gkn495.

    Abstract

    Over recent years small submicroscopic DNA copy-number variants (CNVs) have been highlighted as an important source of variation in the human genome, human phenotypic diversity and disease susceptibility. Consequently, there is a pressing need for the development of methods that allow the efficient, accurate and cheap measurement of genomic copy number polymorphisms in clinical cohorts. We have developed a simple competitive PCR based method to determine DNA copy number which uses the entire genome of a single chimpanzee as a competitor thus eliminating the requirement for competitive sequences to be synthesized for each assay. This results in the requirement for only a single reference sample for all assays and dramatically increases the potential for large numbers of loci to be analysed in multiplex. In this study we establish proof of concept by accurately detecting previously characterized mutations at the PARK2 locus and then demonstrating the potential of quantitative interspecies competitive PCR (qicPCR) to accurately genotype CNVs in association studies by analysing chromosome 22q11 deletions in a sample of previously characterized patients and normal controls.
  • Wittenburg, P. (2008). Die CLARIN Forschungsinfrastruktur. ÖGAI-journal (Österreichische Gesellschaft für Artificial Intelligence), 27, 10-17.
  • Wolters, G., & Poletiek, F. H. (2008). Beslissen over aangiftes van seksueel misbruik bij kinderen. De Psycholoog, 43, 29-29.
  • Li, X., Yang, Y., & Hagoort, P. (2008). Pitch accent and lexical tone processing in Chinese discourse comprehension: An ERP study. Brain Research, 1222, 192-200. doi:10.1016/j.brainres.2008.05.031.

    Abstract

    In the present study, event-related brain potentials (ERP) were recorded to investigate the role of pitch accent and lexical tone in spoken discourse comprehension. Chinese was used as material to explore the potential difference in the nature and time course of brain responses to sentence meaning as indicated by pitch accent and to lexical meaning as indicated by tone. In both cases, the pitch contour of critical words was varied. The results showed that both inconsistent pitch accent and inconsistent lexical tone yielded N400 effects, and there was no interaction between them. The negativity evoked by inconsistent pitch accent had the same topography as that evoked by inconsistent lexical tone violation, with a maximum over central–parietal electrodes. Furthermore, the effect for the combined violations was the sum of effects for pure pitch accent and pure lexical tone violation. However, the effect for the lexical tone violation appeared approximately 90 ms earlier than the effect of the pitch accent violation. It is suggested that there might be a correspondence between the neural mechanism underlying pitch accent and lexical meaning processing in context. They both reflect the integration of the current information into a discourse context, independent of whether the current information was sentence meaning indicated by accentuation, or lexical meaning indicated by tone. In addition, lexical meaning was processed earlier than sentence meaning conveyed by pitch accent during spoken language processing.
  • Zwitserlood, I. (2008). Grammatica-vertaalmethode en nederlandse gebarentaal. Levende Talen Magazine, 95(5), 28-29.
  • Zwitserlood, I. (2011). Gebruiksgemak van het eerste Nederlandse Gebarentaal woordenboek kan beter [Book review]. Levende Talen Magazine, 4, 46-47.

    Abstract

    Review: User friendliness of the first dictionary of Sign Language of the Netherlands can be improved
  • Zwitserlood, I. (2011). Gevraagd: medewerkers verzorgingshuis met een goede oog-handcoördinatie. Het meten van NGT-vaardigheid. Levende Talen Magazine, 1, 44-46.

    Abstract

    (Needed: staff for residential care home with good eye-hand coordination. Measuring NGT-skills.)
  • Zwitserlood, I. (2009). Het Corpus NGT. Levende Talen Magazine, 6, 44-45.

    Abstract

    The Corpus NGT
  • Zwitserlood, I. (2011). Het Corpus NGT en de dagelijkse lespraktijk. Levende Talen Magazine, 6, 46.

    Abstract

    (The Corpus NGT and the daily practice of language teaching)
  • Zwitserlood, I. (2009). Het Corpus NGT en de dagelijkse lespraktijk (1). Levende Talen Magazine, 8, 40-41.
  • Zwitserlood, I. (2011). Het Corpus NGT en de opleiding leraar/tolk NGT. Levende Talen Magazine, 1, 40-41.

    Abstract

    (The Corpus NGT and NGT teacher/interpreter training)
