Publications

  • Vosse, T., & Kempen, G. (2009). In defense of competition during syntactic ambiguity resolution. Journal of Psycholinguistic Research, 38(1), 1-9. doi:10.1007/s10936-008-9075-1.

    Abstract

    In a recent series of publications (Traxler et al. J Mem Lang 39:558–592, 1998; Van Gompel et al. J Mem Lang 52:284–307, 2005; see also Van Gompel et al. (In: Kennedy et al. (Eds.), Reading as a perceptual process, Oxford: Elsevier, pp. 621–648, 2000); Van Gompel et al. J Mem Lang 45:225–258, 2001) eye tracking data are reported showing that globally ambiguous (GA) sentences are read faster than locally ambiguous (LA) counterparts. They argue that these data rule out “constraint-based” models where syntactic and conceptual processors operate concurrently and syntactic ambiguity resolution is accomplished by competition. Such models predict the opposite pattern of reading times. However, this argument against competition is valid only in conjunction with two standard assumptions in current constraint-based models of sentence comprehension: (1) that syntactic competitions (e.g., Which is the best attachment site of the incoming constituent?) are pooled together with conceptual competitions (e.g., Which attachment site entails the most plausible meaning?), and (2) that the duration of a competition is a function of the overall (pooled) quality score obtained by each competitor. We argue that it is not necessary to abandon competition as a successful basis for explaining parsing phenomena and that the above-mentioned reading time data can be accounted for by a parallel-interactive model with conceptual and syntactic processors that do not pool their quality scores together. Within the individual linguistic modules, decision-making can very well be competition-based.
  • Vosse, T., & Kempen, G. (2000). Syntactic structure assembly in human parsing: A computational model based on competitive inhibition and a lexicalist grammar. Cognition, 75, 105-143.

    Abstract

    We present the design, implementation and simulation results of a psycholinguistic model of human syntactic processing that meets major empirical criteria. The parser operates in conjunction with a lexicalist grammar and is driven by syntactic information associated with heads of phrases. The dynamics of the model are based on competition by lateral inhibition ('competitive inhibition'). Input words activate lexical frames (i.e. elementary trees anchored to input words) in the mental lexicon, and a network of candidate 'unification links' is set up between frame nodes. These links represent tentative attachments that are graded rather than all-or-none. Candidate links that, due to grammatical or 'treehood' constraints, are incompatible, compete for inclusion in the final syntactic tree by sending each other inhibitory signals that reduce the competitor's attachment strength. The outcome of these local and simultaneous competitions is controlled by dynamic parameters, in particular by the Entry Activation and the Activation Decay rate of syntactic nodes, and by the Strength and Strength Build-up rate of Unification links. In case of a successful parse, a single syntactic tree is returned that covers the whole input string and consists of lexical frames connected by winning Unification links. Simulations are reported of a significant range of psycholinguistic parsing phenomena in both normal and aphasic speakers of English: (i) various effects of linguistic complexity (single versus double, center versus right-hand self-embeddings of relative clauses; the difference between relative clauses with subject and object extraction; the contrast between a complement clause embedded within a relative clause versus a relative clause embedded within a complement clause); (ii) effects of local and global ambiguity, and of word-class and syntactic ambiguity (including recency and length effects); (iii) certain difficulty-of-reanalysis effects (contrasts between local ambiguities that are easy to resolve versus ones that lead to serious garden-path effects); (iv) effects of agrammatism on parsing performance, in particular the performance of various groups of aphasic patients on several sentence types.
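
    Illustration (not part of the publication): the competitive-inhibition dynamics described in this abstract can be sketched as a simple iterative update in which each candidate unification link grows toward full strength while being suppressed by incompatible competitors. The parameter names (build_up, inhibition), their values, and the toy PP-attachment example below are invented for illustration; the published model additionally couples link strength to node entry activation and activation decay.

        # Minimal sketch of competitive inhibition among candidate attachments
        # ("unification links"). Not the authors' implementation; parameters and
        # the example are hypothetical.

        def compete(links, incompatible, build_up=0.15, inhibition=0.25,
                    steps=200, threshold=0.9):
            """Iterate lateral inhibition until each conflict set has a clear winner.

            links        -- dict: link name -> initial attachment strength (0..1)
            incompatible -- set of frozensets naming mutually exclusive links
            """
            strength = dict(links)
            for _ in range(steps):
                new = {}
                for name, s in strength.items():
                    # Total inhibitory signal received from incompatible competitors.
                    pressure = sum(strength[other]
                                   for pair in incompatible if name in pair
                                   for other in pair if other != name)
                    # Build up toward full strength, pushed down by lateral inhibition.
                    delta = build_up * (1.0 - s) - inhibition * pressure
                    new[name] = min(1.0, max(0.0, s + delta))
                strength = new
                # Stop once every link is either clearly winning or clearly losing.
                if all(s > threshold or s < 1.0 - threshold for s in strength.values()):
                    break
            return strength

        # Toy PP-attachment ambiguity: the PP may attach to the verb or to the noun;
        # the initial strengths stand in for graded lexical/recency preferences.
        links = {"PP->verb": 0.55, "PP->noun": 0.45}
        print(compete(links, {frozenset(links)}))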
  • Vosse, T., & Kempen, G. (2009). The Unification Space implemented as a localist neural net: Predictions and error-tolerance in a constraint-based parser. Cognitive Neurodynamics, 3, 331-346. doi:10.1007/s11571-009-9094-0.

    Abstract

    We introduce a novel computer implementation of the Unification-Space parser (Vosse & Kempen 2000) in the form of a localist neural network whose dynamics is based on interactive activation and inhibition. The wiring of the network is determined by Performance Grammar (Kempen & Harbusch 2003), a lexicalist formalism with feature unification as binding operation. While the network is processing input word strings incrementally, the evolving shape of parse trees is represented in the form of changing patterns of activation in nodes that code for syntactic properties of words and phrases, and for the grammatical functions they fulfill. The system is capable, at least in a qualitative and rudimentary sense, of simulating several important dynamic aspects of human syntactic parsing, including garden-path phenomena and reanalysis, effects of complexity (various types of clause embeddings), fault-tolerance in case of unification failures and unknown words, and predictive parsing (expectation-based analysis, surprisal effects). English is the target language of the parser described.
  • De Vries, M., Barth, A. C. R., Maiworm, S., Knecht, S., Zwitserlood, P., & Flöel, A. (2010). Electrical stimulation of Broca’s area enhances implicit learning of an artificial grammar. Journal of Cognitive Neuroscience, 22, 2427-2436. doi:10.1162/jocn.2009.21385.

    Abstract

    Artificial grammar learning constitutes a well-established model for the acquisition of grammatical knowledge in a natural setting. Previous neuroimaging studies demonstrated that Broca's area (left BA 44/45) is similarly activated by natural syntactic processing and artificial grammar learning. The current study was conducted to investigate the causal relationship between Broca's area and learning of an artificial grammar by means of transcranial direct current stimulation (tDCS). Thirty-eight healthy subjects participated in a between-subject design, with either anodal tDCS (20 min, 1 mA) or sham stimulation, over Broca's area during the acquisition of an artificial grammar. Performance during the acquisition phase, presented as a working memory task, was comparable between groups. In the subsequent classification task, detecting syntactic violations, and specifically, those where no cues to superficial similarity were available, improved significantly after anodal tDCS, resulting in an overall better performance. A control experiment where 10 subjects received anodal tDCS over an area unrelated to artificial grammar learning further supported the specificity of these effects to Broca's area. We conclude that Broca's area is specifically involved in rule-based knowledge, and here, in an improved ability to detect syntactic violations. The results cannot be explained by better tDCS-induced working memory performance during the acquisition phase. This is the first study that demonstrates that tDCS may facilitate acquisition of grammatical knowledge, a finding of potential interest for rehabilitation of aphasia.
  • De Vries, M., Ulte, C., Zwitserlood, P., Szymanski, B., & Knecht, S. (2010). Increasing dopamine levels in the brain improves feedback-based procedural learning in healthy participants: An artificial-grammar-learning experiment. Neuropsychologia, 48, 3193-3197. doi:10.1016/j.neuropsychologia.2010.06.024.

    Abstract

    Recently, an increasing number of studies have suggested a role for the basal ganglia and related dopamine inputs in procedural learning, specifically when learning occurs through trial-by-trial feedback (Shohamy, Myers, Kalanithi, & Gluck. (2008). Basal ganglia and dopamine contributions to probabilistic category learning. Neuroscience and Biobehavioral Reviews, 32, 219–236). A necessary relationship has however only been demonstrated in patient studies. In the present study, we show for the first time that increasing dopamine levels in the brain improves the gradual acquisition of complex information in healthy participants. We implemented two artificial-grammar-learning tasks, one with and one without performance feedback. Learning was improved after levodopa intake for the feedback-based learning task only, suggesting that dopamine plays a specific role in trial-by-trial feedback-based learning. This provides promising directions for future studies on dopaminergic modulation of cognitive functioning.
  • Wang, L., Hagoort, P., & Yang, Y. (2009). Semantic illusion depends on information structure: ERP evidence. Brain Research, 1282, 50-56. doi:10.1016/j.brainres.2009.05.069.

    Abstract

    Next to propositional content, speakers distribute information in their utterances in such a way that listeners can make a distinction between new (focused) and given (non-focused) information. This is referred to as information structure. We measured event-related potentials (ERPs) to explore the role of information structure in semantic processing. Following different questions in wh-question-answer pairs (e.g. What kind of vegetable did Ming buy for cooking today? / Who bought the vegetables for cooking today?), the answer sentences (e.g., Ming bought eggplant/beef to cook today.) contained a critical word, which was either semantically appropriate (eggplant) or inappropriate (beef), and occurred either in focus or in non-focus position. The results showed a full N400 effect only when the critical words were in focus position. In non-focus position a strongly reduced N400 effect was observed, in line with the well-known semantic illusion effect. The results suggest that information structure facilitates semantic processing by devoting more resources to focused information.
  • Warner, N., Fountain, A., & Tucker, B. V. (2009). Cues to perception of reduced flaps. Journal of the Acoustical Society of America, 125(5), 3317-3327. doi:10.1121/1.3097773.

    Abstract

    Natural, spontaneous speech (and even quite careful speech) often shows extreme reduction in many speech segments, even resulting in apparent deletion of consonants. Where the flap ([ɾ]) allophone of /t/ and /d/ is expected in American English, one frequently sees an approximant-like or even vocalic pattern, rather than a clear flap. Still, the /t/ or /d/ is usually perceived, suggesting the acoustic characteristics of a reduced flap are sufficient for perception of a consonant. This paper identifies several acoustic characteristics of reduced flaps based on previous acoustic research (size of intensity dip, consonant duration, and F4 valley) and presents phonetic identification data for continua that manipulate these acoustic characteristics of reduction. The results indicate that the most obvious types of acoustic variability seen in natural flaps do affect listeners' percept of a consonant, but not sufficiently to completely account for the percept. Listeners are affected by the acoustic characteristics of consonant reduction, but they are also very skilled at evaluating variability along the acoustic dimensions that realize reduction.

  • Warner, N., Otake, T., & Arai, A. (2010). Intonational structure as a word-boundary cue in Tokyo Japanese. Language and Speech, 53, 107-131. doi:10.1177/0023830909351235.

    Abstract

    While listeners are recognizing words from the connected speech stream, they are also parsing information from the intonational contour. This contour may contain cues to word boundaries, particularly if a language has boundary tones that occur at a large proportion of word onsets. We investigate how useful the pitch rise at the beginning of an accentual phrase (APR) would be as a potential word-boundary cue for Japanese listeners. A corpus study shows that it should allow listeners to locate approximately 40–60% of word onsets, while causing less than 1% false positives. We then present a word-spotting study which shows that Japanese listeners can, indeed, use accentual phrase boundary cues during segmentation. This work shows that the prosodic patterns that have been found in the production of Japanese also impact listeners’ processing.
  • Warner, N., & Weber, A. (2001). Perception of epenthetic stops. Journal of Phonetics, 29(1), 53-87. doi:10.1006/jpho.2001.0129.

    Abstract

    In processing connected speech, listeners must parse a highly variable signal. We investigate processing of a particular type of production variability, namely epenthetic stops between nasals and obstruents. Using a phoneme monitoring task and a dictation task, we test listeners' perception of epenthetic stops (which are not part of the string of segments intended by the speaker). We confirm that the epenthetic stop perceived is the one predicted by articulatory accounts of how such stops are produced, and that the likelihood of an epenthetic stop being perceived as a real stop is related to the strength of acoustic cues in the signal. We show that the probability of listeners mis-parsing epenthetic stops as real is influenced by language-specific syllable structure constraints, and depends on processing demands. We further show, through reaction time data, that even when epenthetic stops are perceived, they impose a greater processing load than stops which were intended by the speaker. These results show that processing of phonetic variability is affected by several factors, including language-specific phonology, even though the mis-timing of articulations that creates epenthetic stops is universally possible.
  • Warner, N., Luna, Q., Butler, L., & Van Volkinburg, H. (2009). Revitalization in a scattered language community: Problems and methods from the perspective of Mutsun language revitalization. International Journal of the Sociology of Language, 198, 135-148. doi:10.1515/IJSL.2009.031.

    Abstract

    This article addresses revitalization of a dormant language whose prospective speakers live in scattered geographical areas. In comparison to increasing the usage of an endangered language, revitalizing a dormant language (one with no living speakers) requires different methods to gain knowledge of the language. Language teaching for a dormant language with a scattered community presents different problems from other teaching situations. In this article, we discuss the types of tasks that must be accomplished for dormant-language revitalization, with particular focus on development of teaching materials. We also address the role of computer technologies, arguing that each use of technology should be evaluated for how effectively it increases fluency. We discuss methods for achieving semi-fluency for the first new speakers of a dormant language, and for spreading the language through the community.
  • Warner, N., Jongman, A., Cutler, A., & Mücke, D. (2001). The phonological status of Dutch epenthetic schwa. Phonology, 18, 387-420. doi:10.1017/S0952675701004213.

    Abstract

    In this paper, we use articulatory measures to determine whether Dutch schwa epenthesis is an abstract phonological process or a concrete phonetic process depending on articulatory timing. We examine tongue position during /l/ before underlying schwa and epenthetic schwa and in coda position. We find greater tip raising before both types of schwa, indicating light /l/ before schwa and dark /l/ in coda position. We argue that the ability of epenthetic schwa to condition the /l/ alternation shows that Dutch schwa epenthesis is an abstract phonological process involving insertion of some unit, and cannot be accounted for within Articulatory Phonology.
  • Warner, N., & Arai, T. (2001). The role of the mora in the timing of spontaneous Japanese speech. The Journal of the Acoustical Society of America, 109, 1144-1156. doi:10.1121/1.1344156.

    Abstract

    This study investigates whether the mora is used in controlling timing in Japanese speech, or is instead a structural unit in the language not involved in timing. Unlike most previous studies of mora-timing in Japanese, this article investigates timing in spontaneous speech. Predictability of word duration from number of moras is found to be much weaker than in careful speech. Furthermore, the number of moras predicts word duration only slightly better than number of segments. Syllable structure also has a significant effect on word duration. Finally, comparison of the predictability of whole words and arbitrarily truncated words shows better predictability for truncated words, which would not be possible if the truncated portion were compensating for remaining moras. The results support an accumulative model of variance with a final lengthening effect, and do not indicate the presence of any compensation related to mora-timing. It is suggested that the rhythm of Japanese derives from several factors about the structure of the language, not from durational compensation.
  • Warren, C. M., Tona, K. D., Ouwekerk, L., Van Paridon, J., Poletiek, F. H., Bosch, J. A., & Nieuwenhuis, S. (2019). The neuromodulatory and hormonal effects of transcutaneous vagus nerve stimulation as evidenced by salivary alpha amylase, salivary cortisol, pupil diameter, and the P3 event-related potential. Brain Stimulation, 12(3), 635-642. doi:10.1016/j.brs.2018.12.224.

    Abstract

    Background

    Transcutaneous vagus nerve stimulation (tVNS) is a new, non-invasive technique being investigated as an intervention for a variety of clinical disorders, including epilepsy and depression. It is thought to exert its therapeutic effect by increasing central norepinephrine (NE) activity, but the evidence supporting this notion is limited.
    Objective

    In order to test for an impact of tVNS on psychophysiological and hormonal indices of noradrenergic function, we applied tVNS in concert with assessment of salivary alpha amylase (SAA) and cortisol, pupil size, and electroencephalograph (EEG) recordings.
    Methods

    Across three experiments, we applied real and sham tVNS to 61 healthy participants while they performed a set of simple stimulus-discrimination tasks. Before and after the task, as well as during one break, participants provided saliva samples and had their pupil size recorded. EEG was recorded throughout the task. The target for tVNS was the cymba conchae, which is heavily innervated by the auricular branch of the vagus nerve. Sham stimulation was applied to the ear lobe.
    Results

    P3 amplitude was not affected by tVNS (Experiment 1A: N=24; Experiment 1B: N=20; Bayes factor supporting null model=4.53), nor was pupil size (Experiment 2: N=16; interaction of treatment and time: p=0.79). However, tVNS increased SAA (Experiments 1A and 2: N=25) and attenuated the decline of salivary cortisol compared to sham (Experiment 2: N=17), as indicated by significant interactions involving treatment and time (p=.023 and p=.040, respectively).
    Conclusion

    These findings suggest that tVNS modulates hormonal indices but not psychophysiological indices of noradrenergic function.
  • Wassenaar, M., & Hagoort, P. (2001). Het matchen van zinnen bij plaatjes door Broca afasiepatiënten: een hersenpotentiaal studie [Matching sentences to pictures in Broca's aphasia patients: A brain-potential study]. Afasiologie, 23, 122-126.
  • Weber, A. (2002). Assimilation violation and spoken-language processing: A supplementary report. Language and Speech, 45, 37-46. doi:10.1177/00238309020450010201.

    Abstract

    Previous studies have shown that spoken-language processing is inhibited by violation of obligatory regressive assimilation. Weber (2001) replicated this inhibitory effect in a phoneme-monitoring study examining regressive place assimilation of nasals, but found facilitation for violation of progressive assimilation. German listeners detected the velar fricative [x] more quickly when fricative assimilation was violated (e.g., *[bIxt] or *[blInx@n]) than when no violation occurred (e.g., [baxt] or [blu:x@n]). It was argued that a combination of two factors caused facilitation: (1) progressive assimilation creates different restrictions for the monitoring target than regressive assimilation does, and (2) the sequences violating assimilation (e.g., *[Ix]) are novel for German listeners and therefore facilitate fricative detection (novel popout). The present study tested progressive assimilation violation in non-novel sequences using the palatal fricative [C]. Stimuli either violated fricative assimilation (e.g., *[ba:C@l]) or did not (e.g., [bi:C@l]). This manipulation does not create novel sequences: sequences like *[a:C] can occur across word boundaries, while *[Ix] cannot. No facilitation was found. However, violation also did not significantly inhibit processing. The results confirm that facilitation depends on the combination of progressive assimilation with novelty of the sequence.
  • Weber, A. (2001). Help or hindrance: How violation of different assimilation rules affects spoken-language processing. Language and Speech, 44(1), 95-118. doi:10.1177/00238309010440010401.

    Abstract

    Four phoneme-detection studies tested the conclusion from recent research that spoken-language processing is inhibited by violation of obligatory assimilation processes in the listeners’ native language. In Experiment 1, native listeners of German detected a target fricative in monosyllabic Dutch nonwords, half of which violated progressive German fricative place assimilation. In contrast to the earlier findings, listeners detected the fricative more quickly when assimilation was violated than when no violation occurred. This difference was not due to purely acoustic factors, since in Experiment 2 native Dutch listeners, presented with the same materials, showed no such effect. In Experiment 3, German listeners again detected the fricative more quickly when violation occurred in both monosyllabic and bisyllabic native nonwords, further ruling out explanations based on non-native input or on syllable structure. Finally, Experiment 4 tested whether the direction in which the rule operates (progressive or regressive) controls the direction of the effect on phoneme detection responses. When regressive German place assimilation for nasals was violated, German listeners detected stops more slowly, exactly as had been observed in previous studies of regressive assimilation. It is argued that a combination of low expectations in progressive assimilation and novel popout causes facilitation of processing, whereas not fulfilling high expectations in regressive assimilation causes inhibition.
  • Weber, K., Christiansen, M., Indefrey, P., & Hagoort, P. (2019). Primed from the start: Syntactic priming during the first days of language learning. Language Learning, 69(1), 198-221. doi:10.1111/lang.12327.

    Abstract

    New linguistic information must be integrated into our existing language system. Using a novel experimental task that incorporates a syntactic priming paradigm into artificial language learning, we investigated how new grammatical regularities and words are learned. This innovation allowed us to control the language input the learner received, while the syntactic priming paradigm provided insight into the nature of the underlying syntactic processing machinery. The results of the present study pointed to facilitatory syntactic processing effects within the first days of learning: Syntactic and lexical priming effects revealed participants’ sensitivity to both novel words and word orders. This suggested that novel syntactic structures and their meaning (form–function mapping) can be acquired rapidly through incidental learning. More generally, our study indicated similar mechanisms for learning and processing in both artificial and natural languages, with implications for the relationship between first and second language learning.
  • Weber, K., Micheli, C., Ruigendijk, E., & Rieger, J. (2019). Sentence processing is modulated by the current linguistic environment and a priori information: An fMRI study. Brain and Behavior, 9(7): e01308. doi:10.1002/brb3.1308.

    Abstract

    Introduction
    Words are not processed in isolation but in rich contexts that are used to modulate and facilitate language comprehension. Here, we investigate distinct neural networks underlying two types of contexts, the current linguistic environment and verb‐based syntactic preferences.

    Methods
    We had two main manipulations. The first was the current linguistic environment, where the relative frequencies of two syntactic structures (prepositional object [PO] and double‐object [DO]) would either follow everyday linguistic experience or not. The second concerned the preference toward one or the other structure depending on the verb; learned in everyday language use and stored in memory. German participants were reading PO and DO sentences in German while brain activity was measured with functional magnetic resonance imaging.

    Results
    First, the anterior cingulate cortex (ACC) showed a pattern of activation that integrated the current linguistic environment with everyday linguistic experience. When the input did not match everyday experience, the unexpected frequent structure showed higher activation in the ACC than the other conditions and more connectivity from the ACC to posterior parts of the language network. Second, verb‐based surprisal of seeing a structure given a verb (PO verb preference but DO structure presentation) resulted, within the language network (left inferior frontal and left middle/superior temporal gyrus) and the precuneus, in increased activation compared to a predictable verb‐structure pairing.

    Conclusion
    In conclusion, (1) beyond the canonical language network, brain areas engaged in prediction and error signaling, such as the ACC, might use the statistics of syntactic structures to modulate language processing, (2) the language network is directly engaged in processing verb preferences. These two networks show distinct influences on sentence processing.

  • Weber, K., & Indefrey, P. (2009). Syntactic priming in German–English bilinguals during sentence comprehension. Neuroimage, 46, 1164-1172. doi:10.1016/j.neuroimage.2009.03.040.

    Abstract

    A longstanding question in bilingualism is whether syntactic information is shared between the two language processing systems. We used an fMRI repetition suppression paradigm to investigate syntactic priming in reading comprehension in German–English late-acquisition bilinguals. In comparison to conventional subtraction analyses in bilingual experiments, repetition suppression has the advantage of being able to detect neuronal populations that are sensitive to properties that are shared by consecutive stimuli. In this study, we manipulated the syntactic structure between prime and target sentences. A sentence with a passive sentence structure in English was preceded either by a passive or by an active sentence in English or German. We looked for repetition suppression effects in left inferior frontal, left precentral and left middle temporal regions of interest. These regions were defined by a contrast of all non-target sentences in German and English versus the baseline of sentence-format consonant strings. We found decreases in activity (repetition suppression effects) in these regions of interest following the repetition of syntactic structure from the first to the second language and within the second language.
    Moreover, a separate behavioural experiment using a word-by-word reading paradigm similar to the fMRI experiment showed faster reading times for primed compared to unprimed English target sentences regardless of whether they were preceded by an English or a German sentence of the same structure.
    We conclude that there is interaction between the language processing systems and that at least some syntactic information is shared between a bilingual's languages with similar syntactic structures.

  • Wells, J. B., Christiansen, M. H., Race, D. S., Acheson, D. J., & MacDonald, M. C. (2009). Experience and sentence processing: Statistical learning and relative clause comprehension. Cognitive Psychology, 58(2), 250-271. doi:10.1016/j.cogpsych.2008.08.002.

    Abstract

    Many explanations of the difficulties associated with interpreting object relative clauses appeal to the demands that object relatives make on working memory. MacDonald and Christiansen [MacDonald, M. C., & Christiansen, M. H. (2002). Reassessing working memory: Comment on Just and Carpenter (1992) and Waters and Caplan (1996). Psychological Review, 109, 35-54] pointed to variations in reading experience as a source of differences, arguing that the unique word order of object relatives makes their processing more difficult and more sensitive to the effects of previous experience than the processing of subject relatives. This hypothesis was tested in a large-scale study manipulating reading experiences of adults over several weeks. The group receiving relative clause experience increased reading speeds for object relatives more than for subject relatives, whereas a control experience group did not. The reading time data were compared to performance of a computational model given different amounts of experience. The results support claims for experience-based individual differences and an important role for statistical learning in sentence comprehension processes.
  • Willems, R. M., Toni, I., Hagoort, P., & Casasanto, D. (2009). Body-specific motor imagery of hand actions: Neural evidence from right- and left-handers. Frontiers in Human Neuroscience, 3: 39. doi:10.3389/neuro.09.039.2009.

    Abstract

    If motor imagery uses neural structures involved in action execution, then the neural correlates of imagining an action should differ between individuals who tend to execute the action differently. Here we report fMRI data showing that motor imagery is influenced by the way people habitually perform motor actions with their particular bodies; that is, motor imagery is ‘body-specific’ (Casasanto, 2009). During mental imagery for complex hand actions, activation of cortical areas involved in motor planning and execution was left-lateralized in right-handers but right-lateralized in left-handers. We conclude that motor imagery involves the generation of an action plan that is grounded in the participant’s motor habits, not just an abstract representation at the level of the action’s goal. People with different patterns of motor experience form correspondingly different neurocognitive representations of imagined actions.
  • Willems, R. M., Hagoort, P., & Casasanto, D. (2010). Body-specific representations of action verbs: Neural evidence from right- and left-handers. Psychological Science, 21, 67-74. doi:10.1177/0956797609354072.

    Abstract

    According to theories of embodied cognition, understanding a verb like throw involves unconsciously simulating the action of throwing, using areas of the brain that support motor planning. If understanding action words involves mentally simulating one’s own actions, then the neurocognitive representation of word meanings should differ for people with different kinds of bodies, who perform actions in systematically different ways. In a test of the body-specificity hypothesis, we used functional magnetic resonance imaging to compare premotor activity correlated with action verb understanding in right- and left-handers. Right-handers preferentially activated the left premotor cortex during lexical decisions on manual-action verbs (compared with nonmanual-action verbs), whereas left-handers preferentially activated right premotor areas. This finding helps refine theories of embodied semantics, suggesting that implicit mental simulation during language processing is body specific: Right- and left-handers, who perform actions differently, use correspondingly different areas of the brain for representing action verb meanings.
  • Willems, R. M., & Hagoort, P. (2009). Broca's region: Battles are not won by ignoring half of the facts. Trends in Cognitive Sciences, 13(3), 101. doi:10.1016/j.tics.2008.12.001.
  • Willems, R. M., Peelen, M. V., & Hagoort, P. (2010). Cerebral lateralization of face-selective and body-selective visual areas depends on handedness. Cerebral Cortex, 20, 1719-1725. doi:10.1093/cercor/bhp234.

    Abstract

    The left-hemisphere dominance for language is a core example of the functional specialization of the cerebral hemispheres. The degree of left-hemisphere dominance for language depends on hand preference: Whereas the majority of right-handers show left-hemispheric language lateralization, this number is reduced in left-handers. Here, we assessed whether handedness analogously has an influence upon lateralization in the visual system. Using functional magnetic resonance imaging, we localized 4 more or less specialized extrastriate areas in left- and right-handers, namely fusiform face area (FFA), extrastriate body area (EBA), fusiform body area (FBA), and human motion area (human middle temporal [hMT]). We found that lateralization of FFA and EBA depends on handedness: These areas were right lateralized in right-handers but not in left-handers. A similar tendency was observed in FBA but not in hMT. We conclude that the relationship between handedness and hemispheric lateralization extends to functionally lateralized parts of visual cortex, indicating a general coupling between cerebral lateralization and handedness. Our findings indicate that hemispheric specialization is not fixed but can vary considerably across individuals even in areas engaged relatively early in the visual system.
  • Willems, R. M., De Boer, M., De Ruiter, J. P., Noordzij, M. L., Hagoort, P., & Toni, I. (2010). A dissociation between linguistic and communicative abilities in the human brain. Psychological Science, 21, 8-14. doi:10.1177/0956797609355563.

    Abstract

    Although language is an effective vehicle for communication, it is unclear how linguistic and communicative abilities relate to each other. Some researchers have argued that communicative message generation involves perspective taking (mentalizing), and—crucially—that mentalizing depends on language. We employed a verbal communication paradigm to directly test whether the generation of a communicative action relies on mentalizing and whether the cerebral bases of communicative message generation are distinct from parts of cortex sensitive to linguistic variables. We found that dorsomedial prefrontal cortex, a brain area consistently associated with mentalizing, was sensitive to the communicative intent of utterances, irrespective of linguistic difficulty. In contrast, left inferior frontal cortex, an area known to be involved in language, was sensitive to the linguistic demands of utterances, but not to communicative intent. These findings show that communicative and linguistic abilities rely on cerebrally (and computationally) distinct mechanisms.
  • Willems, R. M., Ozyurek, A., & Hagoort, P. (2009). Differential roles for left inferior frontal and superior temporal cortex in multimodal integration of action and language. Neuroimage, 47, 1992-2004. doi:10.1016/j.neuroimage.2009.05.066.

    Abstract

    Several studies indicate that both posterior superior temporal sulcus/middle temporal gyrus (pSTS/MTG) and left inferior frontal gyrus (LIFG) are involved in integrating information from different modalities. Here we investigated the respective roles of these two areas in integration of action and language information. We exploited the fact that the semantic relationship between language and different forms of action (i.e. co-speech gestures and pantomimes) is radically different. Speech and co-speech gestures are always produced together, and gestures are not unambiguously understood without speech. On the contrary, pantomimes are not necessarily produced together with speech and can be easily understood without speech. We presented speech together with these two types of communicative hand actions in matching or mismatching combinations to manipulate semantic integration load. Left and right pSTS/MTG were only involved in semantic integration of speech and pantomimes. Left IFG on the other hand was involved in integration of speech and co-speech gestures as well as of speech and pantomimes. Effective connectivity analyses showed that depending upon the semantic relationship between language and action, LIFG modulates activation levels in left pSTS.

    This suggests that integration in pSTS/MTG involves the matching of two input streams for which there is a relatively stable common object representation, whereas integration in LIFG is better characterized as the on-line construction of a new and unified representation of the input streams. In conclusion, pSTS/MTG and LIFG are differentially involved in multimodal integration, crucially depending upon the semantic relationship between the input streams.

  • Willems, R. M., & Hagoort, P. (2009). Hand preference influences neural correlates of action observation. Brain Research, 1269, 90-104. doi:10.1016/j.brainres.2009.02.057.

    Abstract

    It has been argued that we map observed actions onto our own motor system. Here we added to this issue by investigating whether hand preference influences the neural correlates of action observation of simple, essentially meaningless hand actions. Such an influence would argue for an intricate neural coupling between action production and action observation, which goes beyond effects of motor repertoire or explicit motor training, as has been suggested before. Indeed, parts of the human motor system exhibited a close coupling between action production and action observation. Ventral premotor and inferior and superior parietal cortices showed differential activation for left- and right-handers that was similar during action production as well as during action observation. This suggests that mapping observed actions onto the observer's own motor system is a core feature of action observation - at least for actions that do not have a clear goal or meaning. Basic differences in the way we act upon the world are not only reflected in neural correlates of action production, but can also influence the brain basis of action observation.
  • Willems, R. M., Toni, I., Hagoort, P., & Casasanto, D. (2010). Neural dissociations between action verb understanding and motor imagery. Journal of Cognitive Neuroscience, 22(10), 2387-2400. doi:10.1162/jocn.2009.21386.

    Abstract

    According to embodied theories of language, people understand a verb like throw, at least in part, by mentally simulating throwing. This implicit simulation is often assumed to be similar or identical to motor imagery. Here we used fMRI to test whether implicit simulations of actions during language understanding involve the same cortical motor regions as explicit motor imagery. Healthy participants were presented with verbs related to hand actions (e.g., to throw) and nonmanual actions (e.g., to kneel). They either read these verbs (lexical decision task) or actively imagined performing the actions named by the verbs (imagery task). Primary motor cortex showed effector-specific activation during imagery, but not during lexical decision. Parts of premotor cortex distinguished manual from nonmanual actions during both lexical decision and imagery, but there was no overlap or correlation between regions activated during the two tasks. These dissociations suggest that implicit simulation and explicit imagery cued by action verbs may involve different types of motor representations and that the construct of “mental simulation” should be distinguished from “mental imagery” in embodied theories of language.
  • Willems, R. M., & Varley, R. (2010). Neural insights into the relation between language and communication. Frontiers in Human Neuroscience, 4, 203. doi:10.3389/fnhum.2010.00203.

    Abstract

    The human capacity to communicate has been hypothesized to be causally dependent upon language. Intuitively this seems plausible since most communication relies on language. Moreover, intention recognition abilities (as a necessary prerequisite for communication) and language development seem to co-develop. Here we review evidence from neuroimaging as well as from neuropsychology to evaluate the relationship between communicative and linguistic abilities. Our review indicates that communicative abilities are best considered as neurally distinct from language abilities. This conclusion is based upon evidence showing that humans rely on different cortical systems when designing a communicative message for someone else as compared to when performing core linguistic tasks, as well as upon observations of individuals with severe language loss after extensive lesions to the language system, who are still able to perform tasks involving intention understanding.
  • Wirthlin, M., Chang, E. F., Knörnschild, M., Krubitzer, L. A., Mello, C. V., Miller, C. T., Pfenning, A. R., Vernes, S. C., Tchernichovski, O., & Yartsev, M. M. (2019). A modular approach to vocal learning: Disentangling the diversity of a complex behavioral trait. Neuron, 104(1), 87-99. doi:10.1016/j.neuron.2019.09.036.

    Abstract

    Vocal learning is a behavioral trait in which the social and acoustic environment shapes the vocal repertoire of individuals. Over the past century, the study of vocal learning has progressed at the intersection of ecology, physiology, neuroscience, molecular biology, genomics, and evolution. Yet, despite the complexity of this trait, vocal learning is frequently described as a binary trait, with species being classified as either vocal learners or vocal non-learners. As a result, studies have largely focused on a handful of species for which strong evidence for vocal learning exists. Recent studies, however, suggest a continuum in vocal learning capacity across taxa. Here, we further suggest that vocal learning is a multi-component behavioral phenotype comprised of distinct yet interconnected modules. Discretizing the vocal learning phenotype into its constituent modules would facilitate integration of findings across a wider diversity of species, taking advantage of the ways in which each excels in a particular module, or in a specific combination of features. Such comparative studies can improve understanding of the mechanisms and evolutionary origins of vocal learning. We propose an initial set of vocal learning modules supported by behavioral and neurobiological data and highlight the need for diversifying the field in order to disentangle the complexity of the vocal learning phenotype.

  • Witteman, M. J., & Segers, E. (2010). The modality effect tested in children in a user-paced multimedia environment. Journal of Computer Assisted Learning, 26, 132-142. doi:10.1111/j.1365-2729.2009.00335.x.

    Abstract

    The modality learning effect, according to Mayer (2001), proposes that learning is enhanced when information is presented in both the visual and auditory domain (e.g., pictures and spoken information), compared to presenting information solely in the visual channel (e.g., pictures and written text). Most of the evidence for this effect comes from adults in a laboratory setting. Therefore, we tested the modality effect with 80 children in the highest grade of elementary school, in a naturalistic setting. In a between-subjects design children either saw representational pictures with speech or representational pictures with text. Retention and transfer knowledge was tested at three moments: immediately after the intervention, one day after, and after one week. The present study did not find any evidence for a modality effect in children when the lesson is learner-paced. Instead, we found a reversed modality effect directly after the intervention for retention. A reversed modality effect was also found for the transfer questions one day later. This effect was robust, even when controlling for individual differences.
  • Wittenburg, P. (2010). Archiving and accessing language resources. Concurrency and Computation: Practice and Experience, 22(17), 2354-2368. doi:10.1002/cpe.1605.

    Abstract

    Languages are among the most complex systems that evolution has created. With an unforeseen speed, many of these unique results of evolution are currently disappearing: every two weeks one of the 6500 still spoken languages dies, and many are subject to extreme changes due to globalization. Experts understood the need to document the languages and preserve the cultural and linguistic treasures embedded in them for future generations. Linguistic theory will also need to consider the variation of the linguistic systems encoded in languages to improve our understanding of how human minds process language material; accessibility to all types of resources is therefore increasingly crucial. Deeper insights into human language processing and a higher degree of integration and interoperability between resources will also improve our language processing technology. The DOBES programme focuses on the documentation and preservation of language material. The Max Planck Institute developed the Language Archiving Technology to help researchers when creating, archiving and accessing language resources. The recently started CLARIN research infrastructure has as its main goals broad visibility and easy accessibility of language resources.
  • Wolf, M. C., Muijselaar, M. M. L., Boonstra, A. M., & De Bree, E. H. (2019). The relationship between reading and listening comprehension: Shared and modality-specific components. Reading and Writing, 32(7), 1747-1767. doi:10.1007/s11145-018-9924-8.

    Abstract

    This study aimed to increase our understanding on the relationship between reading and listening comprehension. Both in comprehension theory and in educational practice, reading and listening comprehension are often seen as interchangeable, overlooking modality-specific aspects of them separately. Three questions were addressed. First, it was examined to what extent reading and listening comprehension comprise modality-specific, distinct skills or an overlapping, domain-general skill in terms of the amount of explained variance in one comprehension type by the opposite comprehension type. Second, general and modality-unique subskills of reading and listening comprehension were sought by assessing the contributions of the foundational skills word reading fluency, vocabulary, memory, attention, and inhibition to both comprehension types. Lastly, the practice of using either listening comprehension or vocabulary as a proxy of general comprehension was investigated. Reading and listening comprehension tasks with the same format were assessed in 85 second and third grade children. Analyses revealed that reading comprehension explained 34% of the variance in listening comprehension, and listening comprehension 40% of reading comprehension. Vocabulary and word reading fluency were found to be shared contributors to both reading and listening comprehension. None of the other cognitive skills contributed significantly to reading or listening comprehension. These results indicate that only part of the comprehension process is indeed domain-general and not influenced by the modality in which the information is provided. Especially vocabulary seems to play a large role in this domain-general part. The findings warrant a more prominent focus of modality-specific aspects of both reading and listening comprehension in research and education.
  • Xiang, H.-D., Fonteijn, H. M., Norris, D. G., & Hagoort, P. (2010). Topographical functional connectivity pattern in the perisylvian language networks. Cerebral Cortex, 20, 549-560. doi:10.1093/cercor/bhp119.

    Abstract

    We performed a resting-state functional connectivity study to investigate directly the functional correlations within the perisylvian language networks by seeding from 3 subregions of Broca's complex (pars opercularis, pars triangularis, and pars orbitalis) and their right hemisphere homologues. A clear topographical functional connectivity pattern in the left middle frontal, parietal, and temporal areas was revealed for the 3 left seeds. This is the first demonstration that a functional connectivity topology can be observed in the perisylvian language networks. The results support the assumption of the functional division for phonology, syntax, and semantics of Broca's complex as proposed by the memory, unification, and control (MUC) model and indicated a topographical functional organization in the perisylvian language networks, which suggests a possible division of labor for phonological, syntactic, and semantic function in the left frontal, parietal, and temporal areas.
  • Zavala, R. (2001). Entre consejos, diablos y vendedores de caca, rasgos gramaticales del oluteco en tres de sus cuentos [Among counsels, devils, and shit-sellers: Grammatical features of Olutec in three of its stories]. Tlalocan. Revista de Fuentes para el Conocimiento de las Culturas Indígenas de México, XIII, 335-414.

    Abstract

    The three Olutec stories from Oluta, Veracruz, were narrated by Antonio Asistente Maldonado. Roberto Zavala presents a morpheme-by-morpheme analysis of the texts with a sketch of the major grammatical and typological features of this language. Olutec is spoken by three dozen speakers. The grammatical structure of this language has not been described before. The sketch contains information on verb and noun morphology, verb classes, clause types, inverse/direct patterns, grammaticalization processes, applicatives, incorporation, word order type, and discontinuous expressions. The stories presented here are the first Olutec texts ever published. The motifs of the stories are well known throughout Middle America. The story of "the Rabbit who wants to be big" explains why one of the main protagonists of Middle American folktales acquired long ears. The story of "the Devil who is inebriated by the people of a village" explains how the inhabitants of a village discover the true identity of a man who likes to dance huapango and decide to get rid of him. Finally, the story of "the shit-sellers" presents two compadres, one who is lazy and the other who works hard. The hard-worker asks the lazy compadre how he survives without working. The latter lies to him that he sells shit in the neighboring village. The hard-working compadre decides to become a shit-seller and in the process realizes that the lazy compadre deceived him. However, he is lucky and meets with the Devil, who offers him money in compensation for having been deceived. When the lazy compadre realizes that the hard-working compadre has become rich, he tries to do the same business but gets beaten in the process.
  • Zheng, X., & Lemhöfer, K. (2019). The “semantic P600” in second language processing: When syntax conflicts with semantics. Neuropsychologia, 127, 131-147. doi:10.1016/j.neuropsychologia.2019.02.010.

    Abstract

    In sentences like “the mouse that chased the cat was hungry”, the syntactically correct interpretation (the mouse chases the cat) is contradicted by semantic and pragmatic knowledge. Previous research has shown that L1 speakers sometimes base sentence interpretation on this type of knowledge (so-called “shallow” or “good-enough” processing). We made use of both behavioural and ERP measurements to investigate whether L2 learners differ from native speakers in the extent to which they engage in “shallow” syntactic processing. German learners of Dutch as well as Dutch native speakers read sentences containing relative clauses (as in the example above) for which the plausible thematic roles were or were not reversed, and made plausibility judgments. The results show that behaviourally, L2 learners had more difficulties than native speakers to discriminate plausible from implausible sentences. In the ERPs, we replicated the previously reported finding of a “semantic P600” for semantic reversal anomalies in native speakers, probably reflecting the effort to resolve the syntax-semantics conflict. In L2 learners, though, this P600 was largely attenuated and surfaced only in those trials that were judged correctly for plausibility. These results generally point at a more prevalent, but not exclusive occurrence of shallow syntactic processing in L2 learners.
  • Zhernakova, A., Elbers, C. C., Ferwerda, B., Romanos, J., Trynka, G., Dubois, P. C., De Kovel, C. G. F., Franke, L., Oosting, M., Barisani, D., Bardella, M. T., Joosten, L. A. B., Saavalainen, P., van Heel, D. A., Catassi, C., Netea, M. G., Wijmenga, C., & the Finnish Celiac Disease Study Group (2010). Evolutionary and functional analysis of celiac risk loci reveals SH2B3 as a protective factor against bacterial infection. American Journal of Human Genetics, 86(6), 970-977. doi:10.1016/j.ajhg.2010.05.004.

    Abstract

    Celiac disease (CD) is an intolerance to dietary proteins of wheat, barley, and rye. CD may have substantial morbidity, yet it is quite common with a prevalence of 1%-2% in Western populations. It is not clear why the CD phenotype is so prevalent despite its negative effects on human health, especially because appropriate treatment in the form of a gluten-free diet has only been available since the 1950s, when dietary gluten was discovered to be the triggering factor. The high prevalence of CD might suggest that genes underlying this disease may have been favored by the process of natural selection. We assessed signatures of selection for ten confirmed CD-associated loci in several genome-wide data sets, comprising 8154 controls from four European populations and 195 individuals from a North African population, by studying haplotype lengths via the integrated haplotype score (iHS) method. Consistent signs of positive selection for CD-associated derived alleles were observed in three loci: IL12A, IL18RAP, and SH2B3. For the SH2B3 risk allele, we also show a difference in allele frequency distribution (FST) between HapMap phase II populations. Functional investigation of the effect of the SH2B3 genotype in response to lipopolysaccharide and muramyl dipeptide revealed that carriers of the SH2B3 rs3184504*A risk allele showed stronger activation of the NOD2 recognition pathway. This suggests that SH2B3 plays a role in protection against bacterial infection, and it provides a possible explanation for the selective sweep on SH2B3, which occurred sometime between 1200 and 1700 years ago.
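
    Note (not part of the publication): in outline, the iHS statistic referenced above compares the integrated extended haplotype homozygosity (iHH) of haplotypes carrying the ancestral (A) versus the derived (D) allele at a core SNP,

        \mathrm{iHS}_{\mathrm{unstd}} = \ln\!\left(\frac{\mathrm{iHH}_{A}}{\mathrm{iHH}_{D}}\right),

    and the resulting scores are standardized within bins of derived-allele frequency, so that strongly negative values indicate unusually long haplotypes around the derived allele, the signature of recent positive selection probed in this study.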
  • Zhu, Z., Bastiaansen, M. C. M., Hakun, J. G., Petersson, K. M., Wang, S., & Hagoort, P. (2019). Semantic unification modulates N400 and BOLD signal change in the brain: A simultaneous EEG-fMRI study. Journal of Neurolinguistics, 52: 100855. doi:10.1016/j.jneuroling.2019.100855.

    Abstract

    Semantic unification during sentence comprehension has been associated with amplitude change of the N400 in event-related potential (ERP) studies, and activation in the left inferior frontal gyrus (IFG) in functional magnetic resonance imaging (fMRI) studies. However, the specificity of this activation to semantic unification remains unknown. To more closely examine the brain processes involved in semantic unification, we employed simultaneous EEG-fMRI to time-lock the semantic-unification-related N400 change, and integrated trial-by-trial variation in both N400 and BOLD change beyond the condition-level BOLD change difference measured in traditional fMRI analyses. Participants read sentences in which semantic unification load was parametrically manipulated by varying cloze probability. Separately, ERP and fMRI results replicated previous findings, in that semantic unification load parametrically modulated the amplitude of N400 and cortical activation. Integrated EEG-fMRI analyses revealed a different pattern in which functional activity in the left IFG and bilateral supramarginal gyrus (SMG) was associated with N400 amplitude, with the left IFG activation and bilateral SMG activation being selective to the condition-level and trial-level of semantic unification load, respectively. By employing the integrated EEG-fMRI analyses, this study is among the first to shed light on how to integrate trial-level variation in language comprehension.
  • Zora, H., Riad, T., & Ylinen, S. (2019). Prosodically controlled derivations in the mental lexicon. Journal of Neurolinguistics, 52: 100856. doi:10.1016/j.jneuroling.2019.100856.

    Abstract

    Swedish morphemes are classified as prosodically specified or prosodically unspecified, depending on lexical or phonological stress, respectively. Here, we investigate the allomorphy of the suffix -(i)sk, which indicates the distinction between lexical and phonological stress; if attached to a lexically stressed morpheme, it takes a non-syllabic form (-sk), whereas if attached to a phonologically stressed morpheme, an epenthetic vowel is inserted (-isk). Using mismatch negativity (MMN), we explored the neural processing of this allomorphy across lexically stressed and phonologically stressed morphemes. In an oddball paradigm, participants were occasionally presented with congruent and incongruent derivations, created by the suffix -(i)sk, within the repetitive presentation of their monomorphemic stems. The results indicated that the congruent derivation of the lexically stressed stem elicited a larger MMN than the incongruent sequences of the same stem and the derivational suffix, whereas after the phonologically stressed stem a non-significant tendency towards an opposite pattern was observed. We argue that the significant MMN response to the congruent derivation in the lexical stress condition is in line with lexical MMN, indicating a holistic processing of the sequence of lexically stressed stem and derivational suffix. The enhanced MMN response to the incongruent derivation in the phonological stress condition, on the other hand, is suggested to reflect combinatorial processing of the sequence of phonologically stressed stem and derivational suffix. These findings bring a new aspect to the dual-system approach to neural processing of morphologically complex words, namely the specification of word stress.
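    For readers unfamiliar with how a mismatch negativity is quantified in an oddball design such as the one above, the sketch below simulates standard and deviant epochs, forms the deviant-minus-standard difference wave, and averages it over a post-stimulus window. All amplitudes, epoch counts, and window boundaries are invented for illustration and are not taken from the study.

```python
# Toy MMN quantification: deviant-minus-standard difference wave in an oddball design.
import numpy as np

fs = 500                                  # sampling rate (Hz)
t = np.arange(-0.1, 0.5, 1 / fs)          # epoch from -100 to 500 ms
rng = np.random.default_rng(1)

def simulate(n_epochs, mmn_amp):
    """Epochs with a negative deflection of size mmn_amp around 200 ms plus noise."""
    bump = -mmn_amp * np.exp(-((t - 0.2) ** 2) / (2 * 0.03 ** 2))
    return bump + rng.normal(0, 1.0, (n_epochs, t.size))

standards = simulate(400, mmn_amp=0.0)    # repetitive stems
deviants = simulate(60, mmn_amp=2.0)      # occasional derived forms

difference = deviants.mean(axis=0) - standards.mean(axis=0)
window = (t >= 0.1) & (t <= 0.25)
print(f"mean MMN amplitude 100-250 ms: {difference[window].mean():.2f} (a.u.)")
```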
  • Zormpa, E., Meyer, A. S., & Brehm, L. (2019). Slow naming of pictures facilitates memory for their names. Psychonomic Bulletin & Review, 26(5), 1675-1682. doi:10.3758/s13423-019-01620-x.

    Abstract

    Speakers remember their own utterances better than those of their interlocutors, suggesting that language production is beneficial to memory. This may be partly explained by a generation effect: The act of generating a word is known to lead to a memory advantage (Slamecka & Graf, 1978). In earlier work, we showed a generation effect for recognition of images (Zormpa, Brehm, Hoedemaker, & Meyer, 2019). Here, we tested whether the recognition of their names would also benefit from name generation. Testing whether picture naming improves memory for words was our primary aim, as it serves to clarify whether the representations affected by generation are visual or conceptual/lexical. A secondary aim was to assess the influence of processing time on memory. Fifty-one participants named pictures in three conditions: after hearing the picture name (identity condition), backward speech, or an unrelated word. A day later, recognition memory was tested in a yes/no task. Memory in the backward speech and unrelated conditions, which required generation, was superior to memory in the identity condition, which did not require generation. The time taken by participants for naming was a good predictor of memory, such that words that took longer to be retrieved were remembered better. Importantly, that was the case only when generation was required: In the no-generation (identity) condition, processing time was not related to recognition memory performance. This work has shown that generation affects conceptual/lexical representations, making an important contribution to the understanding of the relationship between memory and language.
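    The key result in this abstract is an interaction: naming latency predicts later recognition only when generation was required. A hypothetical analysis of that form, a logistic regression with a condition-by-latency interaction fitted to simulated data; variable names, effect sizes, and the data-generating assumptions are all invented for illustration.

```python
# Illustrative recognition-memory model: recognized ~ condition * naming latency.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 300
condition = rng.choice(["identity", "generation"], n)   # no-generation vs. generation naming
naming_rt = rng.normal(1.0, 0.3, n)                      # naming latency in seconds
# Simulate the reported pattern: latency helps memory only in the generation condition.
logit_p = -0.2 + np.where(condition == "generation", 1.2 * (naming_rt - 1.0) + 0.5, 0.0)
recognized = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

df = pd.DataFrame({"recognized": recognized, "condition": condition, "naming_rt": naming_rt})
model = smf.logit("recognized ~ C(condition) * naming_rt", data=df).fit(disp=False)
print(model.summary().tables[1])
```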
  • Zormpa, E., Brehm, L., Hoedemaker, R. S., & Meyer, A. S. (2019). The production effect and the generation effect improve memory in picture naming. Memory, 27(3), 340-352. doi:10.1080/09658211.2018.1510966.

    Abstract

    The production effect (better memory for words read aloud than words read silently) and the picture superiority effect (better memory for pictures than words) both improve item memory in a picture naming task (Fawcett, J. M., Quinlan, C. K., & Taylor, T. L. (2012). Interplay of the production and picture superiority effects: A signal detection analysis. Memory (Hove, England), 20(7), 655–666. doi:10.1080/09658211.2012.693510). Because picture naming requires coming up with an appropriate label, the generation effect (better memory for generated than read words) may contribute to the latter effect. In two forced-choice memory experiments, we tested the role of generation in a picture naming task on later recognition memory. In Experiment 1, participants named pictures silently or aloud with the correct name or an unreadable label superimposed. We observed a generation effect, a production effect, and an interaction between the two. In Experiment 2, unreliable labels were included to ensure full picture processing in all conditions. In this experiment, we observed a production and a generation effect but no interaction, implying the effects are dissociable. This research demonstrates the separable roles of generation and production in picture naming and their impact on memory. As such, it informs the link between memory and language production and has implications for memory asymmetries between language production and comprehension.

    Additional information

    pmem_a_1510966_sm9257.pdf
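    The signal-detection framing cited in the entry above (Fawcett et al., 2012) quantifies recognition memory with d'. As a generic reminder of that measure only, and not a reproduction of any analysis in the paper, a simple d' computation from hit and false-alarm counts might look like this; the counts are made up.

```python
# Generic d' from a yes/no recognition test: d' = z(hit rate) - z(false-alarm rate).
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' with a simple correction to avoid rates of exactly 0 or 1."""
    def rate(x, n):
        return min(max(x / n, 0.5 / n), 1 - 0.5 / n)
    hr = rate(hits, hits + misses)
    far = rate(false_alarms, false_alarms + correct_rejections)
    return norm.ppf(hr) - norm.ppf(far)

# e.g., memory for generated vs. lure items (hypothetical counts)
print(round(d_prime(hits=78, misses=22, false_alarms=18, correct_rejections=82), 2))
```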
  • Zwitserlood, I., van den Bogaerde, B., & Terpstra, A. (2010). De Nederlandse Gebarentaal en het ERK [Sign Language of the Netherlands and the CEFR]. Levende Talen Magazine, 2010(5), 50-51.
  • Zwitserlood, I. (2010). De Nederlandse Gebarentaal, het Corpus NGT en het ERK [Sign Language of the Netherlands, the Corpus NGT, and the CEFR]. Levende Talen Magazine, 2010(8), 44-45.
  • Zwitserlood, I. (2010). Laat je vingers spreken: NGT en vingerspelling [Let your fingers speak: NGT and fingerspelling]. Levende Talen Magazine, 2010(2), 46-47.
  • Zwitserlood, I. (2009). Het Corpus NGT [The Corpus NGT]. Levende Talen Magazine, 6, 44-45.
  • Zwitserlood, I. (2010). Het Corpus NGT en de dagelijkse lespraktijk (2) [The Corpus NGT and everyday teaching practice (2)]. Levende Talen Magazine, 2010(3), 47-48.
  • Zwitserlood, I. (2009). Het Corpus NGT en de dagelijkse lespraktijk (1) [The Corpus NGT and everyday teaching practice (1)]. Levende Talen Magazine, 8, 40-41.
  • Zwitserlood, I. (2010). Sign language lexicography in the early 21st century and a recently published dictionary of Sign Language of the Netherlands. International Journal of Lexicography, 23, 443-476. doi:10.1093/ijl/ecq031.

    Abstract

    Sign language lexicography has thus far been a relatively obscure area in the world of lexicography. This article therefore provides background information on signed languages and the communities in which they are used, on the lexicography of sign languages, and on the situation in the Netherlands, as well as a review of a sign language dictionary that was recently published in the Netherlands.
  • Zwitserlood, I., & Crasborn, O. (2010). Wat kunnen we leren uit een Corpus Nederlandse Gebarentaal? [What can we learn from a Corpus of Sign Language of the Netherlands?]. WAP Nieuwsbrief, 28(2), 16-18.
  • Zwitserlood, I. (2010). Verlos ons van de glos [Deliver us from the gloss]. Levende Talen Magazine, 2010(7), 40-41.
