Publications

  • Van Gijn, R., Haude, K., & Muysken, P. (2011). Subordination in South America: An overview. In R. Van Gijn, K. Haude, & P. Muysken (Eds.), Subordination in native South-American languages (pp. 1-24). Amsterdam: Benjamins.
  • Van Wijk, C., & Kempen, G. (1982). Syntactische formuleervaardigheid en het schrijven van opstellen. Pedagogische Studiën, 59, 126-136.

    Abstract

    Several attempts have been made to measure syntactic formulation skill directly and objectively on the basis of spoken or written texts. The starting point was usually the syntactic complexity of the produced utterances. However, this has not yielded a plausible, clearly defined, and practically usable index. Following a critical discussion of the notion of complexity, this article proposes a new criterion: the connectivity of the utterances, that is, the explicit marking of logical-semantic relations between propositions. Connectivity is easy to score on the basis of function words that mark various forms of coordinate and subordinate clause linkage. This new index avoids the criticism that could be levelled at complexity, is shown to discriminate clearly between groups of pupils differing in age and educational level, and fits recent psycholinguistic and sociolinguistic theory. Finally, some educational implications are outlined.
  • Van Leeuwen, E. J. C., Zimmerman, E., & Davila Ross, M. (2011). Responding to inequities: Gorillas try to maintain their competitive advantage during play fights. Biology Letters, 7(1), 39-42. doi:10.1098/rsbl.2010.0482.

    Abstract

    Humans respond to unfair situations in various ways. Experimental research has revealed that non-human species also respond to unequal situations in the form of inequity aversions when they have the disadvantage. The current study focused on play fights in gorillas to explore for the first time, to our knowledge, if/how non-human species respond to inequities in natural social settings. Hitting causes a naturally occurring inequity among individuals and here it was specifically assessed how the hitters and their partners engaged in play chases that followed the hitting. The results of this work showed that the hitters significantly more often moved first to run away immediately after the encounter than their partners. These findings provide evidence that non-human species respond to inequities by trying to maintain their competitive advantages. We conclude that non-human primates, like humans, may show different responses to inequities and that they may modify them depending on if they have the advantage or the disadvantage.
  • Van Gijn, R. (2011). Semantic and grammatical integration in Yurakaré subordination. In R. Van Gijn, K. Haude, & P. Muysken (Eds.), Subordination in native South-American languages (pp. 169-192). Amsterdam: Benjamins.

    Abstract

    Yurakaré (unclassified, central Bolivia) has five subordination strategies (on the basis of a morphosyntactic definition). In this paper I argue that the use of these different strategies is conditioned by the degree of conceptual synthesis of the two events, relating to temporal integration and participant integration. The most integrated events are characterized by shared time reference; morphosyntactically they are serial verb constructions, with syntactically fused predicates. The other constructions are characterized by less grammatical integration, which correlates either with a low degree of temporal integration of the dependent predicate and the main predicate, or with participant discontinuity.
  • Van de Ven, M., Tucker, B. V., & Ernestus, M. (2011). Semantic context effects in the comprehension of reduced pronunciation variants. Memory & Cognition, 39, 1301-1316. doi:10.3758/s13421-011-0103-2.

    Abstract

    Listeners require context to understand the highly reduced words that occur in casual speech. The present study reports four auditory lexical decision experiments in which the role of semantic context in the comprehension of reduced versus unreduced speech was investigated. Experiments 1 and 2 showed semantic priming for combinations of unreduced, but not reduced, primes and low-frequency targets. In Experiment 3, we crossed the reduction of the prime with the reduction of the target. Results showed no semantic priming from reduced primes, regardless of the reduction of the targets. Finally, Experiment 4 showed that reduced and unreduced primes facilitate upcoming low-frequency related words equally if the interstimulus interval is extended. These results suggest that semantically related words need more time to be recognized after reduced primes, but once reduced primes have been fully (semantically) processed, these primes can facilitate the recognition of upcoming words as well as do unreduced primes.
  • Van Berkum, J. J. A., Hagoort, P., & Brown, C. M. (1999). Semantic integration in sentences and discourse: Evidence from the N400. Journal of Cognitive Neuroscience, 11(6), 657-671. doi:10.1162/089892999563724.

    Abstract

    In two ERP experiments we investigated how and when the language comprehension system relates an incoming word to semantic representations of an unfolding local sentence and a wider discourse. In experiment 1, subjects were presented with short stories. The last sentence of these stories occasionally contained a critical word that, although acceptable in the local sentence context, was semantically anomalous with respect to the wider discourse (e.g., "Jane told the brother that he was exceptionally slow" in a discourse context where he had in fact been very quick). Relative to coherent control words (e.g., "quick"), these discourse-dependent semantic anomalies elicited a large N400 effect that began at about 200-250 ms after word onset. In experiment 2, the same sentences were presented without their original story context. Although the words that had previously been anomalous in discourse still elicited a slightly larger average N400 than the coherent words, the resulting N400 effect was much reduced, showing that the large effect observed in stories was related to the wider discourse. In the same experiment, single sentences that contained a clear local semantic anomaly elicited a standard sentence-dependent N400 effect (e.g., Kutas & Hillyard, 1980). The N400 effects elicited in discourse and in single sentences had the same time course, overall morphology, and scalp distribution. We argue that these findings are most compatible with models of language processing in which there is no fundamental distinction between the integration of a word in its local (sentence-level) and its global (discourse-level) semantic context.
  • Van Berkum, J. J. A., Brown, C. M., & Hagoort, P. (1999). When does gender constrain parsing? Evidence from ERPs. Journal of Psycholinguistic Research, 28(5), 555-566. doi:10.1023/A:1023224628266.

    Abstract

    We review the implications of recent ERP evidence for when and how grammatical gender agreement constrains sentence parsing. In some theories of parsing, gender is assumed to immediately and categorically block gender-incongruent phrase structure alternatives from being pursued. In other theories, the parser initially ignores gender altogether. The ERP evidence we discuss suggests an intermediate position, in which grammatical gender does not immediately block gender-incongruent phrase structures from being considered, but is used to dispose of them shortly thereafter.
  • Van Turennout, M., Hagoort, P., & Brown, C. M. (1999). The time course of grammatical and phonological processing during speaking: evidence from event-related brain potentials. Journal of Psycholinguistic Research, 28(6), 649-676. doi:10.1023/A:1023221028150.

    Abstract

    Motor-related brain potentials were used to examine the time course of grammatical and phonological processes during noun phrase production in Dutch. In the experiments, participants named colored pictures using a no-determiner noun phrase. On half of the trials a syntactic-phonological classification task had to be performed before naming. Depending on the outcome of the classifications, a left or a right push-button response was given (go trials), or no push-button response was given (no-go trials). Lateralized readiness potentials (LRPs) were derived to test whether syntactic and phonological information affected the motor system at separate moments in time. The results showed that when syntactic information determined the response-hand decision, an LRP developed on no-go trials. However, no such effect was observed when phonological information determined response hand. On the basis of the data, it can be estimated that an additional period of at least 40 ms is needed to retrieve a word's initial phoneme once its lemma has been retrieved. These results provide evidence for the view that during speaking, grammatical processing precedes phonological processing in time.
  • Van Berkum, J. J. A. (2011). Zonder gevoel geen taal [Inaugural lecture].

    Abstract

    Research on language and communication has in the past focused far too much on language as a system for encoding messages, a kind of TCP/IP (the network protocol for communication between computers). That has to change, argues Prof. Jos van Berkum, professor of Discourse, Cognition and Communication, in the inaugural lecture he will deliver at Utrecht University on 30 September. He calls for more research into the strong interweaving of language and emotion.
  • Van Valin Jr., R. D. (1995). Toward a functionalist account of so-called ‘extraction constraints’. In B. Devriendt (Ed.), Complex structures: A functionalist perspective (pp. 29-60). Berlin: Mouton de Gruyter.
  • Vandeberg, L., Guadalupe, T., & Zwaan, R. A. (2011). How verbs can activate things: Cross-language activation across word classes. Acta Psychologica, 138, 68-73. doi:10.1016/j.actpsy.2011.05.007.

    Abstract

    The present study explored whether language-nonselective access in bilinguals occurs across word classes in a sentence context. Dutch–English bilinguals were auditorily presented with English (L2) sentences while looking at a visual world. The sentences contained interlingual homophones from distinct lexical categories (e.g., the English verb spoke, which overlaps phonologically with the Dutch noun for ghost, spook). Eye movement recordings showed that depictions of referents of the Dutch (L1) nouns attracted more visual attention than unrelated distractor pictures in sentences containing homophones. This finding shows that native language objects are activated during second language verb processing despite the structural information provided by the sentence context. Research highlights: We show that native language words are activated during second language sentence processing. We tested this in a visual world setting on homophones with a different word class across languages. Fixations show that processing second language verbs activated native language nouns.
  • Vapnarsky, V., & Le Guen, O. (2011). The guardians of space: Understanding ecological and historical relations of the contemporary Yucatec Mayas to their landscape. In C. Isendahl, & B. Liljefors Persson (Eds.), Ecology, Power, and Religion in Maya Landscapes: Proceedings of the 11th European Maya Conference (Acta Mesoamericana, vol. 23). Markt Schwaben: Saurwein.
  • Verdonschot, R. G., La Heij, W., Paolieri, D., Zhang, Q., & Schiller, N. O. (2011). Homophonic context effects when naming Japanese kanji: Evidence for processing costs. Quarterly Journal of Experimental Psychology, 64(9), 1836-1849. doi:10.1080/17470218.2011.585241.

    Abstract

    The current study investigated the effects of phonologically related context pictures on the naming latencies of target words in Japanese and Chinese. Reading bare words in alphabetic languages has been shown to be rather immune to effects of context stimuli, even when these stimuli are presented in advance of the target word (e.g., Glaser & Düngelhoff, 1984; Roelofs, 2003). However, recently, semantic context effects of distractor pictures on the naming latencies of Japanese kanji (but not Chinese hanzi) words have been observed (Verdonschot, La Heij, & Schiller, 2010). In the present study, we further investigated this issue using phonologically related (i.e., homophonic) context pictures when naming target words in either Chinese or Japanese. We found that pronouncing bare nouns in Japanese is sensitive to phonologically related context pictures, whereas this is not the case in Chinese. The difference between these two languages is attributed to processing costs caused by multiple pronunciations for Japanese kanji.
  • Verdonschot, R. G., Kiyama, S., Tamaoka, K., Kinoshita, S., La Heij, W., & Schiller, N. O. (2011). The functional unit of Japanese word naming: Evidence from masked priming. Journal of Experimental Psychology: Learning, Memory, and Cognition, 37(6), 1458-1473. doi:10.1037/a0024491.

    Abstract

    Theories of language production generally describe the segment as the basic unit in phonological encoding (e.g., Dell, 1988; Levelt, Roelofs, & Meyer, 1999). However, there is also evidence that such a unit might be language specific. Chen, Chen, and Dell (2002), for instance, found no effect of single segments when using a preparation paradigm. To shed more light on the functional unit of phonological encoding in Japanese, a language often described as being mora based, we report the results of 4 experiments using word reading tasks and masked priming. Experiment 1 demonstrated using Japanese kana script that primes, which overlapped in the whole mora with target words, sped up word reading latencies but not when just the onset overlapped. Experiments 2 and 3 investigated a possible role of script by using combinations of romaji (Romanized Japanese) and hiragana; again, facilitation effects were found only when the whole mora and not the onset segment overlapped. Experiment 4 distinguished mora priming from syllable priming and revealed that the mora priming effects obtained in the first 3 experiments are also obtained when a mora is part of a syllable. Again, no priming effect was found for single segments. Our findings suggest that the mora and not the segment (phoneme) is the basic functional phonological unit in Japanese language production planning.
  • Verdonschot, R. G. (2011). Word processing in languages using non-alphabetic scripts: The cases of Japanese and Chinese. PhD Thesis, Leiden University, Leiden, The Netherlands.

    Abstract

    This thesis investigates the processing of words written in Japanese kanji and Chinese hànzì, i.e. logographic scripts. Special attention is given to the fact that the majority of Japanese kanji have multiple pronunciations (generally depending on the combination a kanji forms with other characters). First, using masked priming, it is established that upon presentation of a Japanese kanji multiple pronunciations are activated. In subsequent experiments using word naming with context pictures it is concluded that both Chinese hànzì and Japanese kanji are read out loud via a direct route from orthography to phonology. However, only Japanese kanji become susceptible to semantic or phonological context effects as a result of a cost due to the processing of multiple pronunciations. Finally, zooming in on the size of the articulatory planning unit in Japanese it is concluded that the mora as a phonological unit best complies with the observed data pattern and not the phoneme or the syllable.
  • Verhagen, J. (2011). Verb placement in second language acquisition: Experimental evidence for the different behavior of auxiliary and lexical verbs. Applied Psycholinguistics, 32, 821-858. doi:10.1017/S0142716411000087.

    Abstract

    This study investigates the acquisition of verb placement by Moroccan and Turkish second language (L2) learners of Dutch. Elicited production data corroborate earlier findings from L2 German that learners who do not produce auxiliaries do not raise lexical verbs over negation, whereas learners who produce auxiliaries do. Data from elicited imitation and sentence matching support this pattern and show that learners can have grammatical knowledge of auxiliary placement before they can produce auxiliaries. With lexical verbs, they do not show such knowledge. These results present further evidence for the different behavior of auxiliary and lexical verbs in early stages of L2 acquisition.
  • Vernes, S. C., Oliver, P. L., Spiteri, E., Lockstone, H. E., Puliyadi, R., Taylor, J. M., Ho, J., Mombereau, C., Brewer, A., Lowy, E., Nicod, J., Groszer, M., Baban, D., Sahgal, N., Cazier, J.-B., Ragoussis, J., Davies, K. E., Geschwind, D. H., & Fisher, S. E. (2011). Foxp2 regulates gene networks implicated in neurite outgrowth in the developing brain. PLoS Genetics, 7(7): e1002145. doi:10.1371/journal.pgen.1002145.

    Abstract

    Forkhead-box protein P2 is a transcription factor that has been associated with intriguing aspects of cognitive function in humans, non-human mammals, and song-learning birds. Heterozygous mutations of the human FOXP2 gene cause a monogenic speech and language disorder. Reduced functional dosage of the mouse version (Foxp2) causes deficient cortico-striatal synaptic plasticity and impairs motor-skill learning. Moreover, the songbird orthologue appears critically important for vocal learning. Across diverse vertebrate species, this well-conserved transcription factor is highly expressed in the developing and adult central nervous system. Very little is known about the mechanisms regulated by Foxp2 during brain development. We used an integrated functional genomics strategy to robustly define Foxp2-dependent pathways, both direct and indirect targets, in the embryonic brain. Specifically, we performed genome-wide in vivo ChIP–chip screens for Foxp2-binding and thereby identified a set of 264 high-confidence neural targets under strict, empirically derived significance thresholds. The findings, coupled to expression profiling and in situ hybridization of brain tissue from wild-type and mutant mouse embryos, strongly highlighted gene networks linked to neurite development. We followed up our genomics data with functional experiments, showing that Foxp2 impacts on neurite outgrowth in primary neurons and in neuronal cell models. Our data indicate that Foxp2 modulates neuronal network formation, by directly and indirectly regulating mRNAs involved in the development and plasticity of neuronal connections.
  • Vernes, S. C., & Fisher, S. E. (2011). Functional genomic dissection of speech and language disorders. In J. D. Clelland (Ed.), Genomics, proteomics, and the nervous system (pp. 253-278). New York: Springer.

    Abstract

    Mutations of the human FOXP2 gene have been shown to cause severe difficulties in learning to make coordinated sequences of articulatory gestures that underlie speech (developmental verbal dyspraxia or DVD). Affected individuals are impaired in multiple aspects of expressive and receptive linguistic processing and display abnormal grey matter volume and functional activation patterns in cortical and subcortical brain regions. The protein encoded by FOXP2 belongs to a divergent subgroup of forkhead-box transcription factors, with a distinctive DNA-binding domain and motifs that mediate hetero- and homodimerization. This chapter describes the successful use of FOXP2 as a unique molecular window into neurogenetic pathways that are important for speech and language development, adopting several complementary strategies. These include direct functional investigations of FOXP2 splice variants and the effects of etiological mutations. FOXP2’s role as a transcription factor also enabled the development of functional genomic routes for dissecting neurogenetic mechanisms that may be relevant for speech and language. By identifying downstream target genes regulated by FOXP2, it was possible to identify common regulatory themes in modulating synaptic plasticity, neurodevelopment, and axon guidance. These targets represent novel entry points into in vivo pathways that may be disturbed in speech and language disorders. The identification of FOXP2 target genes has also led to the discovery of a shared neurogenetic pathway between clinically distinct language disorders: the rare Mendelian form of DVD and a complex and more common form of language disorder known as Specific Language Impairment.

  • Versteegh, M., Ten Bosch, L., & Boves, L. (2011). Modelling novelty preference in word learning. In Proceedings of the 12th Annual Conference of the International Speech Communication Association (Interspeech 2011), Florence, Italy (pp. 761-764).

    Abstract

    This paper investigates the effects of novel words on a cognitively plausible computational model of word learning. The model is first familiarized with a set of words, achieving high recognition scores and subsequently offered novel words for training. We show that the model is able to recognize the novel words as different from the previously seen words, based on a measure of novelty that we introduce. We then propose a procedure analogous to novelty preference in infants. Results from simulations of word learning show that adding this procedure to our model speeds up training and helps the model attain higher recognition rates.
  • Verweij, H., Windhouwer, M., & Wittenburg, P. (2011). Knowledge management for small languages. In V. Luzar-Stiffler, I. Jarec, & Z. Bekic (Eds.), Proceedings of the ITI 2011 33rd Int. Conf. on Information Technology Interfaces, June 27-30, 2011, Cavtat, Croatia (pp. 213-218). Zagreb, Croatia: University Computing Centre, University of Zagreb.

    Abstract

    In this paper an overview of the knowledge components needed for extensive documentation of small languages is given. The Language Archive is striving to offer all these tools to the linguistic community. The major tools in relation to the knowledge components are described, followed by a discussion of what is currently lacking and of possible strategies to move forward.
  • Virpioja, S., Lehtonen, M., Hulten, A., Salmelin, R., & Lagus, K. (2011). Predicting reaction times in word recognition by unsupervised learning of morphology. In T. Honkela, W. Duch, M. Girolami, & S. Kaski (Eds.), Artificial Neural Networks and Machine Learning – ICANN 2011 (pp. 275-282). Berlin: Springer.

    Abstract

    A central question in the study of the mental lexicon is how morphologically complex words are processed. We consider this question from the viewpoint of statistical models of morphology. As an indicator of the mental processing cost in the brain, we use reaction times to words in a visual lexical decision task on Finnish nouns. Statistical correlation between a model and reaction times is employed as a goodness measure of the model. In particular, we study Morfessor, an unsupervised method for learning concatenative morphology. The results for a set of inflected and monomorphemic Finnish nouns reveal that the probabilities given by Morfessor, especially the Categories-MAP version, show considerably higher correlations to the reaction times than simple word statistics such as frequency, morphological family size, or length. These correlations are also higher than when any individual test subject is viewed as a model.
  • De Vos, C. (2011). A signers' village in Bali, Indonesia. Minpaku Anthropology Newsletter, 33, 4-5.
  • De Vos, C. (2011). Kata Kolok color terms and the emergence of lexical signs in rural signing communities. The Senses & Society, 6(1), 68-76. doi:10.2752/174589311X12893982233795.

    Abstract

    How do new languages develop systematic ways to talk about sensory experiences, such as color? To what extent is the evolution of color terms guided by societal factors? This paper describes the color lexicon of a rural sign language called Kata Kolok which emerged approximately one century ago in a Balinese village. Kata Kolok has four color signs: black, white, red and a blue-green term. In addition, two non-conventionalized means are used to provide color descriptions: naming relevant objects, and pointing to objects in the vicinity. Comparison with Balinese culture and spoken Balinese brings to light discrepancies between the systems, suggesting that neither cultural practices nor language contact have driven the formation of color signs in Kata Kolok. The few lexicographic investigations from other rural sign languages report limitations in the domain of color. On the other hand, larger, urban signed languages have extensive systems, for example, Australian Sign Language has up to nine color terms (Woodward 1989: 149). These comparisons support the finding that rural sign languages like Kata Kolok fail to provide the societal pressures for the lexicon to expand further.
  • De Vries, M., Christiansen, M. H., & Petersson, K. M. (2011). Learning recursion: Multiple nested and crossed dependencies. Biolinguistics, 5(1/2), 010-035.

    Abstract

    Language acquisition in both natural and artificial language learning settings crucially depends on extracting information from sequence input. A shared sequence learning mechanism is thus assumed to underlie both natural and artificial language learning. A growing body of empirical evidence is consistent with this hypothesis. By means of artificial language learning experiments, we may therefore gain more insight in this shared mechanism. In this paper, we review empirical evidence from artificial language learning and computational modelling studies, as well as natural language data, and suggest that there are two key factors that help determine processing complexity in sequence learning, and thus in natural language processing. We propose that the specific ordering of non-adjacent dependencies (i.e., nested or crossed), as well as the number of non-adjacent dependencies to be resolved simultaneously (i.e., two or three) are important factors in gaining more insight into the boundaries of human sequence learning; and thus, also in natural language processing. The implications for theories of linguistic competence are discussed.
  • Vuong, L., & Martin, R. C. (2011). LIFG-based attentional control and the resolution of lexical ambiguities in sentence context. Brain and Language, 116, 22-32. doi:10.1016/j.bandl.2010.09.012.

    Abstract

    The role of attentional control in lexical ambiguity resolution was examined in two patients with damage to the left inferior frontal gyrus (LIFG) and one control patient with non-LIFG damage. Experiment 1 confirmed that the LIFG patients had attentional control deficits compared to normal controls while the non-LIFG patient was relatively unimpaired. Experiment 2 showed that all three patients did as well as normal controls in using biasing sentence context to resolve lexical ambiguities involving balanced ambiguous words, but only the LIFG patients took an abnormally long time on lexical ambiguities that resolved toward a subordinate meaning of biased ambiguous words. Taken together, the results suggest that attentional control plays an important role in the resolution of certain lexical ambiguities – those that induce strong interference from context-inappropriate meanings (i.e., dominant meanings of biased ambiguous words).
  • Vuong, L., Meyer, A. S., & Christiansen, M. H. (2011). Simultaneous online tracking of adjacent and non-adjacent dependencies in statistical learning. In L. Carlson, C. Hölscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Conference of the Cognitive Science Society (pp. 964-969). Austin, TX: Cognitive Science Society.
  • Wagner, M., Tran, D., Togneri, R., Rose, P., Powers, D., Onslow, M., Loakes, D., Lewis, T., Kuratate, T., Kinoshita, Y., Kemp, N., Ishihara, S., Ingram, J., Hajek, J., Grayden, D., Göcke, R., Fletcher, J., Estival, D., Epps, J., Dale, R., Cutler, A., Cox, F., Chetty, G., Cassidy, S., Butcher, A., Burnham, D., Bird, S., Best, C., Bennamoun, M., Arciuli, J., & Ambikairajah, E. (2011). The Big Australian Speech Corpus (The Big ASC). In M. Tabain, J. Fletcher, D. Grayden, J. Hajek, & A. Butcher (Eds.), Proceedings of the Thirteenth Australasian International Conference on Speech Science and Technology (pp. 166-170). Melbourne: ASSTA.
  • Walsh Dickey, L. (1999). Syllable count and Tzeltal segmental allomorphy. In J. Rennison, & K. Kühnhammer (Eds.), Phonologica 1996. Proceedings of the 8th International Phonology Meeting (pp. 323-334). Holland Academic Graphics.

    Abstract

    Tzeltal, a Mayan language spoken in southern Mexico, exhibits allomorphy of an unusual type. The vowel quality of the perfective suffix is determined by the number of syllables in the stem to which it is attaching. This paper presents previously unpublished data of this allomorphy and demonstrates that a syllable-count analysis of the phenomenon is the proper one. This finding is put in a more general context of segment-prosody interaction in allomorphy.
  • Wang, L. (2011). The influence of information structure on language comprehension: A neurocognitive perspective. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Wang, L., Bastiaansen, M. C. M., Yang, Y., & Hagoort, P. (2011). The influence of information structure on the depth of semantic processing: How focus and pitch accent determine the size of the N400 effect. Neuropsychologia, 49, 813-820. doi:10.1016/j.neuropsychologia.2010.12.035.

    Abstract

    To highlight relevant information in dialogues, both wh-question context and pitch accent in answers can be used, such that focused information gains more attention and is processed more elaborately. To evaluate the relative influence of context and pitch accent on the depth of semantic processing, we measured Event-Related Potentials (ERPs) to auditorily presented wh-question-answer pairs. A semantically incongruent word in the answer occurred either in focus or non-focus position as determined by the context, and this word was either accented or unaccented. Semantic incongruency elicited different N400 effects in different conditions. The largest N400 effect was found when the question-marked focus was accented, while the other three conditions elicited smaller N400 effects. The results suggest that context and accentuation interact. Thus, accented focused words were processed more deeply compared to conditions where focus and accentuation mismatched, or when the new information had no marking. In addition, there seem to be sex differences in the depth of semantic processing. For males, a significant N400 effect was observed only when the question-marked focus was accented, whereas reduced N400 effects were found in the other dialogues. In contrast, females produced similar N400 effects in all the conditions. These results suggest that regardless of external cues, females tend to engage in more elaborate semantic processing compared to males.
  • Weber, A., Broersma, M., & Aoyagi, M. (2011). Spoken-word recognition in foreign-accented speech by L2 listeners. Journal of Phonetics, 39, 479-491. doi:10.1016/j.wocn.2010.12.004.

    Abstract

    Two cross-modal priming studies investigated the recognition of English words spoken with a foreign accent. Auditory English primes were either typical of a Dutch accent or typical of a Japanese accent in English and were presented to both Dutch and Japanese L2 listeners. Lexical-decision times to subsequent visual target words revealed that foreign-accented words can facilitate word recognition for L2 listeners if at least one of two requirements is met: the foreign-accented production is in accordance with the language background of the L2 listener, or the foreign accent is perceptually confusable with the standard pronunciation for the L2 listener. If neither one of the requirements is met, no facilitatory effect of foreign accents on L2 word recognition is found. Taken together, these findings suggest that linguistic experience with a foreign accent affects the ability to recognize words carrying this accent, and there is furthermore a general benefit for L2 listeners for recognizing foreign-accented words that are perceptually confusable with the standard pronunciation.
  • Wegener, C. (2011). Expression of reciprocity in Savosavo. In N. Evans, A. Gaby, S. C. Levinson, & A. Majid (Eds.), Reciprocals and semantic typology (pp. 213-224). Amsterdam: Benjamins.

    Abstract

    This paper describes how reciprocity is expressed in the Papuan (i.e. non-Austronesian) language Savosavo, spoken in the Solomon Islands. The main strategy is to use the reciprocal nominal mapamapa, which can occur in different NP positions and always triggers default third person singular masculine agreement, regardless of the number and gender of the referents. After a description of this as well as another strategy that is occasionally used (the ‘joint activity construction’), the paper will provide a detailed analysis of data elicited with a set of video stimuli and show that the main strategy is used to describe even clearly asymmetric situations, as long as more than one person acts on more than one person in a joint activity.
  • Weissenborn, J., & Stralka, R. (1984). Das Verstehen von Mißverständnissen. Eine ontogenetische Studie. In Zeitschrift für Literaturwissenschaft und Linguistik (pp. 113-134). Stuttgart: Metzler.
  • Weissenborn, J. (1984). La genèse de la référence spatiale en langue maternelle et en langue seconde: similarités et différences. In G. Extra, & M. Mittner (Eds.), Studies in second language acquisition by adult immigrants (pp. 262-286). Tilburg: Tilburg University.
  • Weissenborn, J. (1986). Learning how to become an interlocutor. The verbal negotiation of common frames of reference and actions in dyads of 7–14 year old children. In J. Cook-Gumperz, W. A. Corsaro, & J. Streeck (Eds.), Children's worlds and children's language (pp. 377-404). Berlin: Mouton de Gruyter.
  • Wheeldon, L. R., & Levelt, W. J. M. (1995). Monitoring the time course of phonological encoding. Journal of Memory and Language, 34(3), 311-334. doi:10.1006/jmla.1995.1014.

    Abstract

    Three experiments examined the time course of phonological encoding in speech production. A new methodology is introduced in which subjects are required to monitor their internal speech production for prespecified target segments and syllables. Experiment 1 demonstrated that word initial target segments are monitored significantly faster than second syllable initial target segments. The addition of a concurrent articulation task (Experiment 1b) had a limited effect on performance, excluding the possibility that subjects are monitoring a subvocal articulation of the carrier word. Moreover, no relationship was observed between the pattern of monitoring latencies and the timing of the targets in subjects' overt speech. Subjects are not, therefore, monitoring an internal phonetic representation of the carrier word. Experiment 2 used the production monitoring task to replicate the syllable monitoring effect observed in speech perception experiments: responses to targets were faster when they corresponded to the initial syllable of the carrier word than when they did not. We conclude that subjects are monitoring their internal generation of a syllabified phonological representation. Experiment 3 provides more detailed evidence concerning the time course of the generation of this representation by comparing monitoring latencies to targets within, as well as between, syllables. Some amendments to current models of phonological encoding are suggested in light of these results.
  • Whitehouse, A. J., Bishop, D. V., Ang, Q., Pennell, C. E., & Fisher, S. E. (2011). CNTNAP2 variants affect early language development in the general population. Genes, Brain and Behavior, 10, 451-456. doi:10.1111/j.1601-183X.2011.00684.x.

    Abstract

    Early language development is known to be under genetic influence, but the genes affecting normal variation in the general population remain largely elusive. Recent studies of disorder reported that variants of the CNTNAP2 gene are associated both with language deficits in specific language impairment (SLI) and with language delays in autism. We tested the hypothesis that these CNTNAP2 variants affect communicative behavior, measured at 2 years of age in a large epidemiological sample, the Western Australian Pregnancy Cohort (Raine) Study. Single-point analyses of 1149 children (606 males, 543 females) revealed patterns of association which were strikingly reminiscent of those observed in previous investigations of impaired language, centered on the same genetic markers, and with a consistent direction of effect (rs2710102, p = .0239; rs759178, p = .0248). Based on these findings we performed analyses of four-marker haplotypes of rs2710102-rs759178-rs17236239-rs2538976, and identified significant association (haplotype TTAA, p = .049; haplotype GCAG, p = .0014). Our study suggests that common variants in the exon 13-15 region of CNTNAP2 influence early language acquisition, as assessed at age 2, in the general population. We propose that these CNTNAP2 variants increase susceptibility to SLI or autism when they occur together with other risk factors.

    Additional information

    Whitehouse_Additional_Information.doc
  • Wilkin, K., & Holler, J. (2011). Speakers’ use of ‘action’ and ‘entity’ gestures with definite and indefinite references. In G. Stam, & M. Ishino (Eds.), Integrating gestures: The interdisciplinary nature of gesture (pp. 293-308). Amsterdam: John Benjamins.

    Abstract

    Common ground is an essential prerequisite for coordination in social interaction, including language use. When referring back to a referent in discourse, this referent is ‘given information’ and therefore in the interactants’ common ground. When a referent is being referred to for the first time, a speaker introduces ‘new information’. The analyses reported here are on gestures that accompany such references when they include definite and indefinite grammatical determiners. The main finding from these analyses is that referents referred to by definite and indefinite articles were equally often accompanied by gesture, but speakers tended to accompany definite references with gestures focusing on action information and indefinite references with gestures focusing on entity information. The findings suggest that speakers use speech and gesture together to design utterances appropriate for speakers with whom they share common ground.

  • Wilkins, D. (1995). Towards a Socio-Cultural Profile of the Communities We Work With. In D. Wilkins (Ed.), Extensions of space and beyond: manual for field elicitation for the 1995 field season (pp. 70-79). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3513481.

    Abstract

    Field data are drawn from a particular speech community at a certain place and time. The intent of this survey is to enrich understanding of the various socio-cultural contexts in which linguistic and “cognitive” data may have been collected, so that we can explore the role which societal, cultural and contextual factors may play in this material. The questionnaire gives guidelines concerning types of ethnographic information that are important to cross-cultural and cross-linguistic enquiry, and will be especially useful to researchers who do not have specialised training in anthropology.
  • Wilkins, D., Pederson, E., & Levinson, S. C. (1995). Background questions for the "enter"/"exit" research. In D. Wilkins (Ed.), Extensions of space and beyond: manual for field elicitation for the 1995 field season (pp. 14-16). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3003935.

    Abstract

    How do languages encode different kinds of movement, and what features do people pay attention to when describing motion events? This document outlines topics concerning the investigation of “enter” and “exit” events. It helps contextualise research tasks that examine this domain (see 'Motion Elicitation' and 'Enter/Exit animation') and gives some pointers about what other questions can be explored.
  • Wilkins, D. (1999). A questionnaire on motion lexicalisation and motion description. In D. Wilkins (Ed.), Manual for the 1999 Field Season (pp. 96-115). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3002706.

    Abstract

    How do languages express ideas of movement, and how do they package features that can be part of motion, such as path and cause? This questionnaire is used to gain a picture of the lexical resources a language draws on for motion expressions. It targets issues of semantic conflation (i.e., what other semantic information besides motion may be encoded in a verb root) and patterns of semantic distribution (i.e., what types of information are encoded in the morphemes that come together to build a description of a motion event). It was originally designed for Australian languages, but has since been used around the world.
  • Wilkins, D. (1999). Eliciting contrastive use of demonstratives for objects within close personal space (all objects well within arm’s reach). In D. Wilkins (Ed.), Manual for the 1999 Field Season (pp. 25-28). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.2573796.

    Abstract

    Contrastive reference, where a speaker presents or identifies one item in explicit contrast to another (I like this book but that one is boring), has special communicative and information structure properties. This can be reflected in rules of demonstrative use. For example, in some languages, terms equivalent to this and that can be used for contrastive reference in almost any spatial context. But other two-term languages stick more closely to “distance rules” for demonstratives, allowing a this-like term in close space only. This task elicits data concerning one context of contrastive reference, focusing on whether (and how) non-proximal demonstratives can be used to distinguish objects within a proximal area. The task runs like a memory game, with the consultant being asked to identify the locations of two or three hidden items arranged within arm’s reach.
  • Wilkins, D. (1995). Motion elicitation: "moving 'in(to)'" and "moving 'out (of)'". In D. Wilkins (Ed.), Extensions of space and beyond: manual for field elicitation for the 1995 field season (pp. 4-12). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3003391.

    Abstract

    How do languages encode different kinds of movement, and what features do people pay attention to when describing motion events? This task investigates the expression of “enter” and “exit” activities, that is, events involving motion in(to) and motion out (of) container-like items. The researcher first uses particular stimuli (a ball, a cup, rice, etc.) to elicit descriptions of enter/exit events from one consultant, and then asks another consultant to demonstrate the event based on these descriptions. See also the related entries Enter/Exit Animation and Background Questions for Enter/Exit Research.
  • Wilkins, D. (1999). The 1999 demonstrative questionnaire: “This” and “that” in comparative perspective. In D. Wilkins (Ed.), Manual for the 1999 Field Season (pp. 1-24). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.2573775.

    Abstract

    Demonstrative terms (e.g., this and that) are key to understanding how a language constructs and interprets spatial relationships. They are tricky to pin down, typically having functions that do not match “idealized” uses, and that can become invisible in narrow elicitation settings. This questionnaire is designed to identify the range(s) of use of certain spatial demonstrative terms, and help assess the roles played by gesture, access, attention, and addressee knowledge in demonstrative use. The stimuli consist of 25 diagrammed “elicitation settings” to be created by the researcher.
  • Wilkins, D. P., & Hill, D. (1995). When "go" means "come": Questioning the basicness of basic motion verbs. Cognitive Linguistics, 6, 209-260. doi:10.1515/cogl.1995.6.2-3.209.

    Abstract

    The purpose of this paper is to question some of the basic assumptions concerning motion verbs. In particular, it examines the assumption that "come" and "go" are lexical universals which manifest a universal deictic opposition. Against the background of five working hypotheses about the nature of "come" and "go", this study presents a comparative investigation of two unrelated languages—Mparntwe Arrernte (Pama-Nyungan, Australian) and Longgu (Oceanic, Austronesian). Although the pragmatic and deictic "suppositional" complexity of "come" and "go" expressions has long been recognized, we argue that in any given language the analysis of these expressions is much more semantically and systemically complex than has been assumed in the literature. Languages vary at the lexical semantic level as to what is entailed by these expressions, as well as differing as to what constitutes the prototype and categorial structure for such expressions. The data also strongly suggest that, if there is a lexical universal "go", then this cannot be an inherently deictic expression. However, due to systemic opposition with "come", non-deictic "go" expressions often take on a deictic interpretation through pragmatic attribution. Thus, this crosslinguistic investigation of "come" and "go" highlights the need to consider semantics and pragmatics as modularly separate.
  • Willems, R. M., Labruna, L., D'Esposito, M., Ivry, R., & Casasanto, D. (2011). A functional role for the motor system in language understanding: Evidence from Theta-Burst Transcranial Magnetic Stimulation. Psychological Science, 22, 849 -854. doi:10.1177/0956797611412387.

    Abstract

    Does language comprehension depend, in part, on neural systems for action? In previous studies, motor areas of the brain were activated when people read or listened to action verbs, but it remains unclear whether such activation is functionally relevant for comprehension. In the experiments reported here, we used off-line theta-burst transcranial magnetic stimulation to investigate whether a causal relationship exists between activity in premotor cortex and action-language understanding. Right-handed participants completed a lexical decision task, in which they read verbs describing manual actions typically performed with the dominant hand (e.g., “to throw,” “to write”) and verbs describing nonmanual actions (e.g., “to earn,” “to wander”). Responses to manual-action verbs (but not to nonmanual-action verbs) were faster after stimulation of the hand area in left premotor cortex than after stimulation of the hand area in right premotor cortex. These results suggest that premotor cortex has a functional role in action-language understanding.

    Additional information

    Supplementary materials Willems.pdf
  • Willems, R. M., Clevis, K., & Hagoort, P. (2011). Add a picture for suspense: Neural correlates of the interaction between language and visual information in the perception of fear. Social, Cognitive and Affective Neuroscience, 6, 404-416. doi:10.1093/scan/nsq050.

    Abstract

    We investigated how visual and linguistic information interact in the perception of emotion. We borrowed a phenomenon from film theory which states that presentation of an as such neutral visual scene intensifies the percept of fear or suspense induced by a different channel of information, such as language. Our main aim was to investigate how neutral visual scenes can enhance responses to fearful language content in parts of the brain involved in the perception of emotion. Healthy participants’ brain activity was measured (using functional magnetic resonance imaging) while they read fearful and less fearful sentences presented with or without a neutral visual scene. The main idea is that the visual scenes intensify the fearful content of the language by subtly implying and concretizing what is described in the sentence. Activation levels in the right anterior temporal pole were selectively increased when a neutral visual scene was paired with a fearful sentence, compared to reading the sentence alone, as well as to reading of non-fearful sentences presented with the same neutral scene. We conclude that the right anterior temporal pole serves a binding function of emotional information across domains such as visual and linguistic information.
  • Willems, R. M., Benn, Y., Hagoort, P., Tonia, I., & Varley, R. (2011). Communicating without a functioning language system: Implications for the role of language in mentalizing. Neuropsychologia, 49, 3130-3135. doi:10.1016/j.neuropsychologia.2011.07.023.

    Abstract

    A debated issue in the relationship between language and thought is how our linguistic abilities are involved in understanding the intentions of others (‘mentalizing’). The results of both theoretical and empirical work have been used to argue that linguistic, and more specifically, grammatical, abilities are crucial in representing the mental states of others. Here we contribute to this debate by investigating how damage to the language system influences the generation and understanding of intentional communicative behaviors. Four patients with pervasive language difficulties (severe global or agrammatic aphasia) engaged in an experimentally controlled non-verbal communication paradigm, which required signaling and understanding a communicative message. Despite their profound language problems they were able to engage in recipient design as well as intention recognition, showing similar indicators of mentalizing as have been observed in the neurologically healthy population. Our results show that aspects of the ability to communicate remain present even when core capacities of the language system are dysfunctional.
  • Willems, R. M., & Casasanto, D. (2011). Flexibility in embodied language understanding. Frontiers in Psychology, 2, 116. doi:10.3389/fpsyg.2011.00116.

    Abstract

    Do people use sensori-motor cortices to understand language? Here we review neurocognitive studies of language comprehension in healthy adults and evaluate their possible contributions to theories of language in the brain. We start by sketching the minimal predictions that an embodied theory of language understanding makes for empirical research, and then survey studies that have been offered as evidence for embodied semantic representations. We explore four debated issues: first, does activation of sensori-motor cortices during action language understanding imply that action semantics relies on mirror neurons? Second, what is the evidence that activity in sensori-motor cortices plays a functional role in understanding language? Third, to what extent do responses in perceptual and motor areas depend on the linguistic and extra-linguistic context? And finally, can embodied theories accommodate language about abstract concepts? Based on the available evidence, we conclude that sensori-motor cortices are activated during a variety of language comprehension tasks, for both concrete and abstract language. Yet, this activity depends on the context in which perception and action words are encountered. Although modality-specific cortical activity is not a sine qua non of language processing even for language about perception and action, sensori-motor regions of the brain appear to make functional contributions to the construction of meaning, and should therefore be incorporated into models of the neurocognitive architecture of language.
  • Willems, R. M. (2011). Re-appreciating the why of cognition: 35 years after Marr and Poggio. Frontiers in Psychology, 2, 244. doi:10.3389/fpsyg.2011.00244.

    Abstract

    Marr and Poggio’s levels of description are one of the most well-known theoretical constructs of twentieth century cognitive science. It entails that behavior can and should be considered at three different levels: computation, algorithm, and implementation. In this contribution focus is on the computational level of description, the level that describes the “why” of cognition. I argue that the computational level should be taken as a starting point in devising experiments in cognitive (neuro)science. Instead, the starting point in empirical practice often is a focus on the stimulus or on some capacity of the cognitive system. The “why” of cognition tends to be ignored when designing research, and is not considered in subsequent inference from experimental results. The overall aim of this manuscript is to show how re-appreciation of the computational level of description as a starting point for experiments can lead to more informative experimentation.
  • Wittek, A. (1999). Zustandsveränderungsverben im Deutschen - wie lernt das Kind die komplexe Semantik? In J. Meibauer, & M. Rothweiler (Eds.), Das Lexikon im Spracherwerb (pp. 278-296). Tübingen: Francke.

    Abstract

    Angelika Wittek investigated change-of-state verbs in four- to six-year-old children. Up to the age of 8, English-speaking children understand these verbs as motion verbs and ignore the fact that they additionally carry information about an end state, in the sense of the negation of the initial state. Wittek showed that, contrary to expectation, transparent, morphologically complex forms (wachmachen), in which the particle makes the end state explicit, are not understood better than simplex verbs (wecken). She also discussed to what extent the use of the adverb wieder in its restitutive reading can shed light on the acquisition of these verbs.
  • Witteman, M. J., Bardhan, N. P., Weber, A., & McQueen, J. M. (2011). Adapting to foreign-accented speech: The role of delay in testing. Journal of the Acoustical Society of America. Program abstracts of the 162nd Meeting of the Acoustical Society of America, 130(4), 2443.

    Abstract

    Understanding speech usually seems easy, but it can become noticeably harder when the speaker has a foreign accent. This is because foreign accents add considerable variation to speech. Research on foreign-accented speech shows that participants are able to adapt quickly to this type of variation. Less is known, however, about longer-term maintenance of adaptation. The current study focused on long-term adaptation by exposing native listeners to foreign-accented speech on Day 1, and testing them on comprehension of the accent one day later. Comprehension was thus not tested immediately, but only after a 24 hour period. On Day 1, native Dutch listeners listened to the speech of a Hebrew learner of Dutch while performing a phoneme monitoring task that did not depend on the talker’s accent. In particular, shortening of the long vowel /i/ into /ɪ/ (e.g., lief [li:f], ‘sweet’, pronounced as [lɪf]) was examined. These mispronunciations did not create lexical ambiguities in Dutch. On Day 2, listeners participated in a cross-modal priming task to test their comprehension of the accent. The results will be contrasted with results from an experiment without delayed testing and related to accounts of how listeners maintain adaptation to foreign-accented speech.
  • Witteman, M. J., Weber, A., & McQueen, J. M. (2011). On the relationship between perceived accentedness, acoustic similarity, and processing difficulty in foreign-accented speech. In Proceedings of the 12th Annual Conference of the International Speech Communication Association (Interspeech 2011), Florence, Italy (pp. 2229-2232).

    Abstract

    Foreign-accented speech is often perceived as more difficult to understand than native speech. What causes this potential difficulty, however, remains unknown. In the present study, we compared acoustic similarity and accent ratings of American-accented Dutch with a cross-modal priming task designed to measure online speech processing. We focused on two Dutch diphthongs: ui and ij. Though both diphthongs deviated from standard Dutch to varying degrees and perceptually varied in accent strength, native Dutch listeners recognized words containing the diphthongs easily. Thus, not all foreign-accented speech hinders comprehension, and acoustic similarity and perceived accentedness are not always predictive of processing difficulties.
  • Zavala, R. M. (1999). External possessor in Oluta Popoluca (Mixean): Applicatives and incorporation of relational terms. In D. L. Payne, & I. Barshi (Eds.), External possession (pp. 339-372). Amsterdam: Benjamins.
  • Zeshan, U., & Panda, S. (2011). Reciprocal constructions in Indo-Pakistani Sign Language. In N. Evans, & A. Gaby (Eds.), Reciprocals and semantic typology (pp. 91-113). Amsterdam: Benjamins.

    Abstract

    Indo-Pakistani Sign Language (IPSL) is the sign language used by deaf communities in a large region across India and Pakistan. This visual-gestural language has a dedicated construction for specifically expressing reciprocal relationships, which can be applied to agreement verbs and to auxiliaries. The reciprocal construction relies on a change in the movement pattern of the signs it applies to. In addition, IPSL has a number of other strategies which can have a reciprocal interpretation, and the IPSL lexicon includes a good number of inherently reciprocal signs. All reciprocal expressions can be modified in complex ways that rely on the grammatical use of the sign space. Considering grammaticalisation and lexicalisation processes linking some of these constructions is also important for a better understanding of reciprocity in IPSL.
  • Zwitserlood, I. (2011). Gebruiksgemak van het eerste Nederlandse Gebarentaal woordenboek kan beter [Book review]. Levende Talen Magazine, 4, 46-47.

    Abstract

    Review: User friendliness of the first dictionary of Sign Language of the Netherlands can be improved
  • Zwitserlood, I. (2011). Gevraagd: medewerkers verzorgingshuis met een goede oog-handcoördinatie. Het meten van NGT-vaardigheid. Levende Talen Magazine, 1, 44-46.

    Abstract

    (Needed: staff for residential care home with good eye-hand coordination. Measuring NGT-skills.)
  • Zwitserlood, I. (2011). Het Corpus NGT en de dagelijkse lespraktijk. Levende Talen Magazine, 6, 46.

    Abstract

    (The Corpus NGT and the daily practice of language teaching)
  • Zwitserlood, I. (2011). Het Corpus NGT en de opleiding leraar/tolk NGT. Levende Talen Magazine, 1, 40-41.

    Abstract

    (The Corpus NGT and teacher NGT/interpreter NGT training)
