Publications

  • Van Valin Jr., R. D., & LaPolla, R. J. (1997). Syntax: Structure, meaning and function. Cambridge University Press.
  • Van Valin Jr., R. D. (2009). Role and reference grammar. In F. Brisard, J.-O. Östman, & J. Verschueren (Eds.), Grammar, meaning, and pragmatics (pp. 239-249). Amsterdam: Benjamins.
  • Van de Ven, M., Tucker, B. V., & Ernestus, M. (2009). Semantic context effects in the recognition of acoustically unreduced and reduced words. In Proceedings of the 10th Annual Conference of the International Speech Communication Association (pp. 1867-1870). Causal Productions Pty Ltd.

    Abstract

    Listeners require context to understand the casual pronunciation variants of words that are typical of spontaneous speech (Ernestus et al., 2002). The present study reports two auditory lexical decision experiments, investigating listeners' use of semantic contextual information in the comprehension of unreduced and reduced words. We found a strong semantic priming effect for low-frequency unreduced words, whereas there was no such effect for reduced words. Word frequency was facilitatory for all words. These results show that semantic context is relevant especially for the comprehension of unreduced words, which is unexpected given the listener-driven explanation of reduction in spontaneous speech.
  • van Hell, J. G., & Witteman, M. J. (2009). The neurocognition of switching between languages: A review of electrophysiological studies. In L. Isurin, D. Winford, & K. de Bot (Eds.), Multidisciplinary approaches to code switching (pp. 53-84). Philadelphia: John Benjamins.

    Abstract

    The seemingly effortless switching between languages and the merging of two languages into a coherent utterance is a hallmark of bilingual language processing, and reveals the flexibility of human speech and skilled cognitive control. That skill appears to be available not only to speakers when they produce language-switched utterances, but also to listeners and readers when presented with mixed language information. In this chapter, we review electrophysiological studies in which Event-Related Potentials (ERPs) are derived from recordings of brain activity to examine the neurocognitive aspects of comprehending and producing mixed language. Topics we discuss include the time course of brain activity associated with language switching between single stimuli and language switching of words embedded in a meaningful sentence context. The majority of ERP studies report that switching between languages incurs neurocognitive costs, but, more interestingly, ERP patterns differ as a function of L2 proficiency and the amount of daily experience with language switching, the direction of switching (switching into L2 is typically associated with higher switching costs than switching into L1), the type of language switching task, and the predictability of the language switch. Finally, we outline some future directions for this relatively new approach to the study of language switching.
  • Van Gijn, R. (2009). The phonology of mixed languages. Journal of Pidgin and Creole Languages, 24(1), 91-117. doi:10.1075/jpcl.24.1.04gij.

    Abstract

    Mixed languages are said to be the result of a process of intertwining (e.g. Bakker & Muysken 1995, Bakker 1997), a regular process in which the grammar of one language is combined with the lexicon of another. However, the outcome of this process differs from language pair to language pair. As far as morphosyntax is concerned, people have discussed these different outcomes and the reasons for them extensively, e.g. Bakker 1997 for Michif, Mous 2003 for Ma’a, Muysken 1997a for Media Lengua and 1997b for Callahuaya. The issue of phonology, however, has not generated a large debate. This paper compares the phonological systems of the mixed languages Media Lengua, Callahuaya, Mednyj Aleut, and Michif. It will be argued that the outcome of the process of intertwining, as far as phonology is concerned, is at least partly determined by the extent to which unmixed phonological domains exist.
  • Van Putten, S. (2013). The meaning of the Avatime additive particle tsye. In M. Balbach, L. Benz, S. Genzel, M. Grubic, A. Renans, S. Schalowski, M. Stegenwallner, & A. Zeldes (Eds.), Information structure: Empirical perspectives on theory (pp. 55-74). Potsdam: Universitätsverlag Potsdam. Retrieved from http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:de:kobv:517-opus-64804.
  • Van Valin Jr., R. D. (1995). Toward a functionalist account of so-called ‘extraction constraints’. In B. Devriendt (Ed.), Complex structures: A functionalist perspective (pp. 29-60). Berlin: Mouton de Gruyter.
  • Vartiainen, J., Aggujaro, S., Lehtonen, M., Hulten, A., Laine, M., & Salmelin, R. (2009). Neural dynamics of reading morphologically complex words. NeuroImage, 47, 2064-2072. doi:10.1016/j.neuroimage.2009.06.002.

    Abstract

    Despite considerable research interest, it is still an open issue as to how morphologically complex words such as “car+s” are represented and processed in the brain. We studied the neural correlates of the processing of inflected nouns in the morphologically rich Finnish language. Previous behavioral studies in Finnish have yielded a robust inflectional processing cost, i.e., inflected words are harder to recognize than otherwise matched morphologically simple words. Theoretically this effect could stem either from decomposition of inflected words into a stem and a suffix at input level and/or from subsequent recombination at the semantic–syntactic level to arrive at an interpretation of the word. To shed light on this issue, we used magnetoencephalography to reveal the time course and localization of neural effects of morphological structure and frequency of written words. Ten subjects silently read high- and low-frequency Finnish words in inflected and monomorphemic form. Morphological complexity was accompanied by stronger and longer-lasting activation of the left superior temporal cortex from 200 ms onwards. Earlier effects of morphology were not found, supporting the view that the well-established behavioral processing cost for inflected words stems from the semantic–syntactic level rather than from early decomposition. Since the effect of morphology was detected throughout the range of word frequencies employed, the majority of inflected Finnish words appears to be represented in decomposed form and only very high-frequency inflected words may acquire full-form representations.
  • Vaughn, C., & Brouwer, S. (2013). Perceptual integration of indexical information in bilingual speech. Proceedings of Meetings on Acoustics, 19: 060208. doi:10.1121/1.4800264.

    Abstract

    The present research examines how different types of indexical information, namely talker information and the language being spoken, are perceptually integrated in bilingual speech. Using a speeded classification paradigm (Garner, 1974), variability in characteristics of the talker (gender in Experiment 1 and specific talker in Experiment 2) and in the language being spoken (Mandarin vs. English) was manipulated. Listeners from two different language backgrounds, English monolinguals and Mandarin-English bilinguals, were asked to classify short, meaningful sentences obtained from different Mandarin-English bilingual talkers on these indexical dimensions. Results for the gender-language classification (Exp. 1) showed a significant, symmetrical interference effect for both listener groups, indicating that gender information and language are processed in an integral manner. For talker-language classification (Exp. 2), language interfered more with talker than vice versa for the English monolinguals, but symmetrical interference was found for the Mandarin-English bilinguals. These results suggest both that talker-specificity is not fully segregated from language-specificity, and that bilinguals exhibit more balanced classification along various indexical dimensions of speech. Currently, follow-up studies investigate this talker-language dependency for bilingual listeners who do not speak Mandarin in order to disentangle the role of bilingualism versus language familiarity.
  • Verdonschot, R. G., La Heij, W., Tamaoka, K., Kiyama, S., You, W.-P., & Schiller, N. O. (2013). The multiple pronunciations of Japanese kanji: A masked priming investigation. Quarterly Journal of Experimental Psychology, 66(10), 2023-2038. doi:10.1080/17470218.2013.773050.

    Abstract

    English words with an inconsistent grapheme-to-phoneme conversion or with more than one pronunciation (homographic heterophones; e.g., lead: /lɛd/, /liːd/) are read aloud more slowly than matched controls, presumably due to competition processes. In Japanese kanji, the majority of the characters have multiple readings for the same orthographic unit: the native Japanese reading (KUN) and the derived Chinese reading (ON). This leads to the question of whether reading these characters also shows processing costs. Studies examining this issue have provided mixed evidence. The current study addressed the question of whether processing of these kanji characters leads to the simultaneous activation of their KUN and ON reading. This was measured in a direct way in a masked priming paradigm. In addition, we assessed whether the relative frequencies of the KUN and ON pronunciations (dominance ratio, measured in compound words) affect the amount of priming. The results of two experiments showed that: (a) a single kanji, presented as a masked prime, facilitates the reading of the (katakana transcriptions of) their KUN and ON pronunciations; however, (b) this was most consistently found when the dominance ratio was around 50% (no strong dominance towards either pronunciation) and when the dominance was towards the ON reading (high-ON group). When the dominance was towards the KUN reading (high-KUN group), no significant priming for the ON reading was observed. Implications for models of kanji processing are discussed.
  • Verdonschot, R. G., Nakayama, M., Zhang, Q., Tamaoka, K., & Schiller, N. O. (2013). The proximate phonological unit of Chinese-English bilinguals: Proficiency matters. PLoS One, 8(4): e61454. doi:10.1371/journal.pone.0061454.

    Abstract

    According to the language production model by Levelt, Roelofs, and Meyer, an essential step in creating phonology is to assemble phonemes into a metrical frame. However, recently, it has been proposed that different languages may rely on different grain sizes of phonological units to construct phonology. For instance, it has been proposed that, instead of phonemes, Mandarin Chinese uses syllables and Japanese uses moras to fill the metrical frame. In this study, we used a masked priming-naming task to investigate how bilinguals assemble their phonology for each language when the two languages differ in grain size. Highly proficient Mandarin Chinese-English bilinguals showed a significant masked onset priming effect in English (L2), and a significant masked syllabic priming effect in Mandarin Chinese (L1). These results suggest that their proximate unit is phonemic in L2 (English), and that bilinguals may use different phonological units depending on the language that is being processed. Additionally, under some conditions, a significant sub-syllabic priming effect was observed even in Mandarin Chinese, which indicates that L2 phonology exerts influences on L1 target processing as a consequence of having a good command of English.

    Additional information

    English stimuli Chinese stimuli
  • Verga, L., & Kotz, S. A. (2013). How relevant is social interaction in second language learning? Frontiers in Human Neuroscience, 7: 550. doi:10.3389/fnhum.2013.00550.

    Abstract

    Verbal language is the most widespread mode of human communication, and an intrinsically social activity. This claim is strengthened by evidence emerging from different fields, which clearly indicates that social interaction influences human communication, and more specifically, language learning. Indeed, research conducted with infants and children shows that interaction with a caregiver is necessary to acquire language. Further evidence on the influence of sociality on language comes from social and linguistic pathologies, in which deficits in social and linguistic abilities are tightly intertwined, as is the case for Autism, for example. However, studies on adult second language (L2) learning have been mostly focused on individualistic approaches, partly because of methodological constraints, especially of imaging methods. The question as to whether social interaction should be considered as a critical factor impacting upon adult language learning still remains underspecified. Here, we review evidence in support of the view that sociality plays a significant role in communication and language learning, in an attempt to emphasize factors that could facilitate this process in adult language learning. We suggest that sociality should be considered as a potentially influential factor in adult language learning and that future studies in this domain should explicitly target this factor.
  • Verhagen, J., & Schimke, S. (2009). Differences or fundamental differences? Zeitschrift für Sprachwissenschaft, 28(1), 97-106. doi:10.1515/ZFSW.2009.011.
  • Verhagen, J. (2009). Finiteness in Dutch as a second language. PhD Thesis, VU University, Amsterdam.
  • Verhagen, J. (2009). Light verbs and the acquisition of finiteness and negation in Dutch as a second language. In C. Dimroth, & P. Jordens (Eds.), Functional categories in learner language (pp. 203-234). Berlin: Mouton de Gruyter.
  • Verhagen, J. (2009). Temporal adverbials, negation and finiteness in Dutch as a second language: A scope-based account. IRAL, 47(2), 209-237. doi:10.1515/iral.2009.009.

    Abstract

    This study investigates the acquisition of post-verbal (temporal) adverbials and post-verbal negation in L2 Dutch. It is based on previous findings for L2 French that post-verbal negation poses less of a problem for L2 learners than post-verbal adverbial placement (Hawkins, Towell, Bazergui, Second Language Research 9: 189-233, 1993; Herschensohn, Minimally raising the verb issue: 325-336, Cascadilla Press, 1998). The current data show that, at first sight, Moroccan and Turkish learners of Dutch also have fewer problems with post-verbal negation than with post-verbal adverbials. However, when a distinction is made between different types of adverbials, it seems that this holds for adverbials of position such as 'today' but not for adverbials of contrast such as 'again'. To account for this difference, it is argued that different types of adverbial occupy different positions in the L2 data for reasons of scope marking. Moreover, the placement of adverbials such as 'again' interacts with the acquisition of finiteness marking (resulting in post-verbal placement), while there is no such interaction between adverbials such as 'today' and finiteness marking.
  • Verhoeven, V. J. M., Hysi, P. G., Wojciechowski, R., Fan, Q., Guggenheim, J. A., Höhn, R., MacGregor, S., Hewitt, A. W., Nag, A., Cheng, C.-Y., Yonova-Doing, E., Zhou, X., Ikram, M. K., Buitendijk, G. H. S., McMahon, G., Kemp, J. P., St Pourcain, B., Simpson, C. L., Mäkelä, K.-M., Lehtimäki, T., Kähönen, M., Paterson, A. D., Hosseini, S. M., Wong, H. S., Xu, L., Jonas, J. B., Pärssinen, O., Wedenoja, J., Yip, S. P., Ho, D. W. H., Pang, C. P., Chen, L. J., Burdon, K. P., Craig, J. E., Klein, B. E. K., Klein, R., Haller, T., Metspalu, A., Khor, C.-C., Tai, E.-S., Aung, T., Vithana, E., Tay, W.-T., Barathi, V. A., Chen, P., Li, R., Liao, J., Zheng, Y., Ong, R. T., Döring, A., Evans, D. M., Timpson, N. J., Verkerk, A. J. M. H., Meitinger, T., Raitakari, O., Hawthorne, F., Spector, T. D., Karssen, L. C., Pirastu, M., Murgia, F., Ang, W., Mishra, A., Montgomery, G. W., Pennell, C. E., Cumberland, P. M., Cotlarciuc, I., Mitchell, P., Wang, J. J., Schache, M., Janmahasatian, S., Janmahasathian, S., Igo, R. P., Lass, J. H., Chew, E., Iyengar, S. K., Gorgels, T. G. M. F., Rudan, I., Hayward, C., Wright, A. F., Polasek, O., Vatavuk, Z., Wilson, J. F., Fleck, B., Zeller, T., Mirshahi, A., Müller, C., Uitterlinden, A. G., Rivadeneira, F., Vingerling, J. R., Hofman, A., Oostra, B. A., Amin, N., Bergen, A. A. B., Teo, Y.-Y., Rahi, J. S., Vitart, V., Williams, C., Baird, P. N., Wong, T.-Y., Oexle, K., Pfeiffer, N., Mackey, D. A., Young, T. L., van Duijn, C. M., Saw, S.-M., Bailey-Wilson, J. E., Stambolian, D., Klaver, C. C., Hammond, C. J., Consortium for Refractive Error and Myopia (CREAM), The Diabetes Control and Complications Trial/Epidemiology of Diabetes Interventions and Complications (DCCT/EDIC) Research Group, Wellcome Trust Case Control Consortium 2 (WTCCC2), & The Fuchs' Genetics Multi-Center Study Group (2013). Genome-wide meta-analyses of multiancestry cohorts identify multiple new susceptibility loci for refractive error and myopia. Nature Genetics, 45(3), 314-318. doi:10.1038/ng.2554.

    Abstract

    Refractive error is the most common eye disorder worldwide and is a prominent cause of blindness. Myopia affects over 30% of Western populations and up to 80% of Asians. The CREAM consortium conducted genome-wide meta-analyses, including 37,382 individuals from 27 studies of European ancestry and 8,376 from 5 Asian cohorts. We identified 16 new loci for refractive error in individuals of European ancestry, of which 8 were shared with Asians. Combined analysis identified 8 additional associated loci. The new loci include candidate genes with functions in neurotransmission (GRIA4), ion transport (KCNQ5), retinoic acid metabolism (RDH5), extracellular matrix remodeling (LAMA2 and BMP2) and eye development (SIX6 and PRSS56). We also confirmed previously reported associations with GJD2 and RASGRF1. Risk score analysis using associated SNPs showed a tenfold increased risk of myopia for individuals carrying the highest genetic load. Our results, based on a large meta-analysis across independent multiancestry studies, considerably advance understanding of the mechanisms involved in refractive error and myopia.
  • Verkerk, A. (2009). A semantic map of secondary predication. In B. Botma, & J. Van Kampen (Eds.), Linguistics in the Netherlands 2009 (pp. 115-126).
  • Verkerk, A., & Frostad, B. H. (2013). The encoding of manner predications and resultatives in Oceanic: A typological and historical overview. Oceanic Linguistics, 52, 1-35. doi:10.1353/ol.2013.0010.

    Abstract

    This paper is concerned with the encoding of resultatives and manner predications in Oceanic languages. Our point of departure is a typological overview of the encoding strategies and their geographical distribution, and we investigate their historical traits by the use of phylogenetic comparative methods. A full theory of the historical pathways is not always accessible for all the attested encoding strategies, given the data available for this study. However, tentative theories about the development and origin of the attested strategies are given. One of the most frequent strategy types used to encode both manner predications and resultatives has been given special emphasis. This is a construction in which a reflex form of the Proto-Oceanic causative *pa-/*paka- modifies the second verb in serial verb constructions.

    Additional information

    52.1.verkerk_supp01.pdf
  • Verkerk, A. (2013). Scramble, scurry and dash: The correlation between motion event encoding and manner verb lexicon size in Indo-European. Language Dynamics and Change, 3, 169-217. doi:10.1163/22105832-13030202.

    Abstract

    In recent decades, much has been discovered about the different ways in which people can talk about motion (Talmy, 1985, 1991; Slobin, 1996, 1997, 2004). Slobin (1997) has suggested that satellite-framed languages typically have a larger and more diverse lexicon of manner of motion verbs (such as run, fly, and scramble) when compared to verb-framed languages. Slobin (2004) has claimed that larger manner of motion verb lexicons originate over time because codability factors increase the accessibility of manner in satellite-framed languages. In this paper I investigate the dependency between the use of the satellite-framed encoding construction and the size of the manner verb lexicon. The data used come from 20 Indo-European languages. The methodology applied is a range of phylogenetic comparative methods adopted from biology, which allow for an investigation of this dependency while taking into account the shared history between these 20 languages. The results provide evidence that Slobin’s hypothesis was correct, and indeed there seems to be a relationship between the use of the satellite-framed construction and the size of the manner verb lexicon.
  • Vernes, S. C., MacDermot, K. D., Monaco, A. P., & Fisher, S. E. (2009). Assessing the impact of FOXP1 mutations on developmental verbal dyspraxia. European Journal of Human Genetics, 17(10), 1354-1358. doi:10.1038/ejhg.2009.43.

    Abstract

    Neurodevelopmental disorders that disturb speech and language are highly heritable. Isolation of the underlying genetic risk factors has been hampered by complexity of the phenotype and potentially large number of contributing genes. One exception is the identification of rare heterozygous mutations of the FOXP2 gene in a monogenic syndrome characterised by impaired sequencing of articulatory gestures, disrupting speech (developmental verbal dyspraxia, DVD), as well as multiple deficits in expressive and receptive language. The protein encoded by FOXP2 belongs to a divergent subgroup of forkhead-box transcription factors, with a distinctive DNA-binding domain and motifs that mediate hetero- and homodimerisation. FOXP1, the most closely related member of this subgroup, can directly interact with FOXP2 and is co-expressed in neural structures relevant to speech and language disorders. Moreover, investigations of songbird orthologues indicate that combinatorial actions of the two proteins may play important roles in vocal learning, leading to the suggestion that human FOXP1 should be considered a strong candidate for involvement in DVD. Thus, in this study, we screened the entire coding region of FOXP1 (exons and flanking intronic sequence) for nucleotide changes in a panel of probands used earlier to detect novel mutations in FOXP2. A non-synonymous coding change was identified in a single proband, yielding a proline-to-alanine change (P215A). However, this was also found in a random control sample. Analyses of non-coding SNP changes did not find any correlation with affection status. We conclude that FOXP1 mutations are unlikely to represent a major cause of DVD.

    Additional information

    ejhg200943x1.pdf
  • Vernes, S. C., & Fisher, S. E. (2013). Genetic pathways implicated in speech and language. In S. Helekar (Ed.), Animal models of speech and language disorders (pp. 13-40). New York: Springer. doi:10.1007/978-1-4614-8400-4_2.

    Abstract

    Disorders of speech and language are highly heritable, providing strong support for a genetic basis. However, the underlying genetic architecture is complex, involving multiple risk factors. This chapter begins by discussing genetic loci associated with common multifactorial language-related impairments and goes on to detail the only gene (known as FOXP2) to be directly implicated in a rare monogenic speech and language disorder. Although FOXP2 was initially uncovered in humans, model systems have been invaluable in progressing our understanding of the function of this gene and its associated pathways in language-related areas of the brain. Research in species from mouse to songbird has revealed effects of this gene on relevant behaviours including acquisition of motor skills and learned vocalisations, and demonstrated a role for Foxp2 in neuronal connectivity and signalling, particularly in the striatum. Animal models have also facilitated the identification of wider neurogenetic networks thought to be involved in language development and disorder, and allowed the investigation of new candidate genes for disorders involving language, such as CNTNAP2 and FOXP1. Ongoing work in animal models promises to yield new insights into the genetic and neural mechanisms underlying human speech and language.
  • Vernes, S. C., & Fisher, S. E. (2009). Unravelling neurogenetic networks implicated in developmental language disorders. Biochemical Society Transactions (London), 37, 1263-1269. doi:10.1042/BST0371263.

    Abstract

    Childhood syndromes disturbing language development are common and display high degrees of heritability. In most cases, the underlying genetic architecture is likely to be complex, involving multiple chromosomal loci and substantial heterogeneity, which makes it difficult to track down the crucial genomic risk factors. Investigation of rare Mendelian phenotypes offers a complementary route for unravelling key neurogenetic pathways. The value of this approach is illustrated by the discovery that heterozygous FOXP2 (where FOX is forkhead box) mutations cause an unusual monogenic disorder, characterized by problems with articulating speech along with deficits in expressive and receptive language. FOXP2 encodes a regulatory protein, belonging to the forkhead box family of transcription factors, known to play important roles in modulating gene expression in development and disease. Functional genetics using human neuronal models suggest that the different FOXP2 isoforms generated by alternative splicing have distinct properties and may act to regulate each other's activity. Such investigations have also analysed the missense and nonsense mutations found in cases of speech and language disorder, showing that they alter intracellular localization, DNA binding and transactivation capacity of the mutated proteins. Moreover, in the brains of mutant mice, aetiological mutations have been found to disrupt the synaptic plasticity of Foxp2-expressing circuitry. Finally, although mutations of FOXP2 itself are rare, the downstream networks which it regulates in the brain appear to be broadly implicated in typical forms of language impairment. Thus, through ongoing identification of regulated targets and interacting co-factors, this gene is providing the first molecular entry points into neural mechanisms that go awry in language-related disorders.
  • De Vignemont, F., Majid, A., Jola, C., & Haggard, P. (2009). Segmenting the body into parts: Evidence from biases in tactile perception. Quarterly Journal of Experimental Psychology, 62, 500-512. doi:10.1080/17470210802000802.

    Abstract

    How do we individuate body parts? Here, we investigated the effect of body segmentation between hand and arm in tactile and visual perception. In a first experiment, we showed that two tactile stimuli felt farther away when they were applied across the wrist than when they were applied within a single body part (palm or forearm), indicating a “category boundary effect”. In the following experiments, we excluded two hypotheses, which attributed tactile segmentation to other, nontactile factors. In Experiment 2, we showed that the boundary effect does not arise from motor cues. The effect was reduced during a motor task involving flexion and extension movements of the wrist joint. Action brings body parts together into functional units, instead of pulling them apart. In Experiments 3 and 4, we showed that the effect does not arise from perceptual cues of visual discontinuities. We did not find any segmentation effect for the visual percept of the body in Experiment 3, nor for a neutral shape in Experiment 4. We suggest that the mental representation of the body is structured in categorical body parts delineated by joints, and that this categorical representation modulates tactile spatial perception.
  • von Stutterheim, C., Flecken, M., & Carroll, M. (2013). Introduction: Conceptualizing in a second language. International Review of Applied Linguistics in Language Teaching, 51(2), 77-85. doi:10.1515/iral-2013-0004.
  • Von Stutterheim, C., Carroll, M., & Klein, W. (2009). New perspectives in analyzing aspectual distinctions across languages. In W. Klein, & P. Li (Eds.), The expression of time (pp. 195-216). Berlin: Mouton de Gruyter.
  • von Stutterheim, C., & Flecken, M. (Eds.). (2013). Principles of information organization in L2 discourse [Special Issue]. International Review of Applied Linguistics in Language Teaching (IRAL), 51(2).
  • Vonk, W., Hustinx, L. G., & Simons, W. H. (1992). The use of referential expressions in structuring discourse. Language and Cognitive Processes, 301-333. doi:10.1080/01690969208409389.

    Abstract

    Referential expressions that refer to entities that occur in a text differ in lexical specificity. It is claimed that if these anaphoric expressions are more specific than necessary for their identificational function, they not only relate the current information to the intended referent, but also contribute to the expression of the thematic structure of the discourse and to the comprehension of the thematic structure. In two controlled production experiments, it is demonstrated that thematic shifts are produced when one has to make use of such an overspecified expression, and that overspecified referential expressions are produced when one has to formulate a thematic shift. In two comprehension experiments, using a probe recognition technique, it is shown that an overspecified referential expression decreases the availability of information contained in a sentence that precedes the overspecification. This finding is interpreted in terms of the thematic structuring function of referential expressions in the understanding of discourse.
  • De Vos, C. (2009). [Review of the book Language complexity as an evolving variable ed. by Geoffrey Sampson, David Gil and Peter Trudgill]. LINGUIST List, 20.4275. Retrieved from http://linguistlist.org/issues/20/20-4275.html.
  • De Vos, C., Van der Kooij, E., & Crasborn, O. (2009). Mixed signals: Combining linguistic and affective functions of eyebrows in questions in Sign Language of the Netherlands. Language and Speech, 52(2/3), 315-339. doi:10.1177/0023830909103177.

    Abstract

    The eyebrows are used as conversational signals in face-to-face spoken interaction (Ekman, 1979). In Sign Language of the Netherlands (NGT), the eyebrows are typically furrowed in content questions, and raised in polar questions (Coerts, 1992). On the other hand, these eyebrow positions are also associated with anger and surprise, respectively, in general human communication (Ekman, 1993). This overlap in the functional load of the eyebrow positions results in a potential conflict for NGT signers when combining these functions simultaneously. In order to investigate the effect of the simultaneous realization of both functions on the eyebrow position we elicited instances of both question types with neutral affect and with various affective states. The data were coded using the Facial Action Coding System (FACS: Ekman, Friesen, & Hager, 2002) for type of brow movement as well as for intensity. FACS allows for the coding of muscle groups, which are termed Action Units (AUs) and which produce facial appearance changes. The results show that linguistic and affective functions of eyebrows may influence each other in NGT. That is, in surprised polar questions and angry content questions a phonetic enhancement takes place of raising and furrowing, respectively. In the items with contrasting eyebrow movements, the grammatical and affective AUs are either blended (occur simultaneously) or they are realized sequentially. Interestingly, the absence of eyebrow raising (marked by AU 1+2) in angry polar questions, and the presence of eyebrow furrowing (realized by AU 4) in surprised content questions, suggest that in general AU 4 may be phonetically stronger than AU 1 and AU 2, independent of its linguistic or affective function.
  • De Vos, C. (2013). Sign-Spatiality in Kata Kolok: How a village sign language of Bali inscribes its signing space [Dissertation abstract]. Sign Language & Linguistics, 16(2), 277-284. doi:10.1075/sll.16.2.08vos.
  • Vosse, T., & Kempen, G. (2009). In defense of competition during syntactic ambiguity resolution. Journal of Psycholinguistic Research, 38(1), 1-9. doi:10.1007/s10936-008-9075-1.

    Abstract

    In a recent series of publications (Traxler et al. J Mem Lang 39:558–592, 1998; Van Gompel et al. J Mem Lang 52:284–307, 2005; see also Van Gompel et al. (In: Kennedy, et al.(eds) Reading as a perceptual process, Oxford, Elsevier pp 621–648, 2000); Van Gompel et al. J Mem Lang 45:225–258, 2001) eye tracking data are reported showing that globally ambiguous (GA) sentences are read faster than locally ambiguous (LA) counterparts. They argue that these data rule out “constraint-based” models where syntactic and conceptual processors operate concurrently and syntactic ambiguity resolution is accomplished by competition. Such models predict the opposite pattern of reading times. However, this argument against competition is valid only in conjunction with two standard assumptions in current constraint-based models of sentence comprehension: (1) that syntactic competitions (e.g., Which is the best attachment site of the incoming constituent?) are pooled together with conceptual competitions (e.g., Which attachment site entails the most plausible meaning?), and (2) that the duration of a competition is a function of the overall (pooled) quality score obtained by each competitor. We argue that it is not necessary to abandon competition as a successful basis for explaining parsing phenomena and that the above-mentioned reading time data can be accounted for by a parallel-interactive model with conceptual and syntactic processors that do not pool their quality scores together. Within the individual linguistic modules, decision-making can very well be competition-based.
  • Vosse, T., & Kempen, G. (2009). The Unification Space implemented as a localist neural net: Predictions and error-tolerance in a constraint-based parser. Cognitive Neurodynamics, 3, 331-346. doi:10.1007/s11571-009-9094-0.

    Abstract

    We introduce a novel computer implementation of the Unification-Space parser (Vosse & Kempen 2000) in the form of a localist neural network whose dynamics is based on interactive activation and inhibition. The wiring of the network is determined by Performance Grammar (Kempen & Harbusch 2003), a lexicalist formalism with feature unification as binding operation. While the network is processing input word strings incrementally, the evolving shape of parse trees is represented in the form of changing patterns of activation in nodes that code for syntactic properties of words and phrases, and for the grammatical functions they fulfill. The system is capable, at least in a qualitative and rudimentary sense, of simulating several important dynamic aspects of human syntactic parsing, including garden-path phenomena and reanalysis, effects of complexity (various types of clause embeddings), fault-tolerance in case of unification failures and unknown words, and predictive parsing (expectation-based analysis, surprisal effects). English is the target language of the parser described.
  • De Vriend, F., Broeder, D., Depoorter, G., van Eerten, L., & Van Uytvanck, D. (2013). Creating & Testing CLARIN Metadata Components. Language Resources and Evaluation, 47(4), 1315-1326. doi:10.1007/s10579-013-9231-6.

    Abstract

    The CLARIN Metadata Infrastructure (CMDI) that is being developed in Common Language Resources and Technology Infrastructure (CLARIN) is a computer-supported framework that combines a flexible component approach with the explicit declaration of semantics. The goal of the Dutch CLARIN project “Creating & Testing CLARIN Metadata Components” was to create metadata components and profiles for a wide variety of existing resources housed at two data centres according to the CMDI specifications. In doing so the principles of the framework were tested. The results of the project are of benefit to other CLARIN-projects that are expected to adhere to the CMDI framework and its accompanying tools.
  • Wagensveld, B., Segers, E., Van Alphen, P. M., & Verhoeven, L. (2013). The role of lexical representations and phonological overlap in rhyme judgments of beginning, intermediate and advanced readers. Learning and Individual Differences, 23, 64-71. doi:10.1016/j.lindif.2012.09.007.

    Abstract

    Studies have shown that prereaders find globally similar non-rhyming pairs (i.e., bell–ball) difficult to judge. Although this effect has been explained as a result of ill-defined lexical representations, others have suggested that it is part of an innate tendency to respond to phonological overlap. In the present study we examined this effect over time. Beginning, intermediate and advanced readers were presented with a rhyme judgment task containing rhyming, phonologically similar, and unrelated non-rhyming pairs. To examine the role of lexical representations, participants were presented with both words and pseudowords. Outcomes showed that pseudoword processing was difficult for children but not for adults. The global similarity effect was present in both children and adults. The findings imply that holistic representations cannot explain the incapacity to ignore similarity relations during rhyming. Instead, the data provide more evidence for the idea that global similarity processing is part of a more fundamental innate phonological processing capacity.
  • Wagensveld, B., Van Alphen, P. M., Segers, E., Hagoort, P., & Verhoeven, L. (2013). The neural correlates of rhyme awareness in preliterate and literate children. Clinical Neurophysiology, 124, 1336-1345. doi:10.1016/j.clinph.2013.01.022.

    Abstract

    Objective Most rhyme awareness assessments do not encompass measures of the global similarity effect (i.e., children who are able to perform simple rhyme judgments get confused when presented with globally similar non-rhyming pairs). The present study examines the neural nature of this effect by studying the N450 rhyme effect. Methods Behavioral and electrophysiological responses of Dutch pre-literate kindergartners and literate second graders were recorded while they made rhyme judgments of word pairs in three conditions; phonologically rhyming (e.g., wijn-pijn), overlapping non-rhyming (e.g., pen-pijn) and unrelated non-rhyming pairs (e.g., boom-pijn). Results Behaviorally, both groups had difficulty judging overlapping but not rhyming and unrelated pairs. The neural data of second graders showed overlapping pairs were processed in a similar fashion as unrelated pairs; both showed a more negative deflection of the N450 component than rhyming items. Kindergartners did not show a typical N450 rhyme effect. However, some other interesting ERP differences were observed, indicating preliterates are sensitive to rhyme at a certain level. Significance Rhyme judgments of globally similar items rely on the same process as rhyme judgments of rhyming and unrelated items. Therefore, incorporating a globally similar condition in rhyme assessments may lead to a more in-depth measure of early phonological awareness skills. Highlights Behavioral and electrophysiological responses were recorded while (pre)literate children made rhyme judgments of rhyming, overlapping and unrelated words. Behaviorally both groups had difficulty judging overlapping pairs as non-rhyming while overlapping and unrelated neural patterns were similar in literates. Preliterates show a different pattern indicating a developing phonological system.
  • Wagner, A. (2013). Cross-language similarities and differences in the uptake of place information. Journal of the Acoustical Society of America, 133, 4256-4267. doi:10.1121/1.4802904.

    Abstract

    Cross-language differences in the use of coarticulatory cues for the identification of fricatives have been demonstrated in a phoneme detection task: Listeners with perceptually similar fricative pairs in their native phoneme inventories (English, Polish, Spanish) relied more on cues from vowels than listeners with perceptually more distinct fricative contrasts (Dutch and German). The present gating study further investigated these cross-language differences and addressed three questions. (1) Are there cross-language differences in informativeness of parts of the speech signal regarding place of articulation for fricative identification? (2) Are such cross-language differences fricative-specific, or do they extend to the perception of place of articulation for plosives? (3) Is such language-specific uptake of information based on cues preceding or following the consonantal constriction? Dutch, Italian, Polish, and Spanish listeners identified fricatives and plosives in gated CV and VC syllables. The results showed cross-language differences in the informativeness of coarticulatory cues for fricative identification: Spanish and Polish listeners extracted place of articulation information from shorter portions of VC syllables. No language-specific differences were found for plosives, suggesting that greater reliance on coarticulatory cues did not generalize to other phoneme types. The language-specific differences for fricatives were based on coarticulatory cues into the consonant.
  • Walters, J., Rujescu, D., Franke, B., Giegling, I., Vasquez, A., Hargreaves, A., Russo, G., Morris, D., Hoogman, M., Da Costa, A., Moskvina, V., Fernandez, G., Gill, M., Corvin, A., O'Donovan, M., Donohoe, G., & Owen, M. (2013). The role of the major histocompatibility complex region in cognition and brain structure: A schizophrenia GWAS follow-up. American Journal of Psychiatry, 170, 877-885. doi:10.1176/appi.ajp.2013.12020226.

    Abstract

    Objective The authors investigated the effects of recently identified genome-wide significant schizophrenia genetic risk variants on cognition and brain structure. Method A panel of six single-nucleotide polymorphisms (SNPs) was selected to represent genome-wide significant loci from three recent genome-wide association studies (GWAS) for schizophrenia and was tested for association with cognitive measures in 346 patients with schizophrenia and 2,342 healthy comparison subjects. Nominally significant results were evaluated for replication in an independent case-control sample. For SNPs showing evidence of association with cognition, associations with brain structural volumes were investigated in a large independent healthy comparison sample. Results Five of the six SNPs showed no significant association with any cognitive measure. One marker in the major histocompatibility complex (MHC) region, rs6904071, showed independent, replicated evidence of association with delayed episodic memory and was significant when both samples were combined. In the combined sample of up to 3,100 individuals, this SNP was associated with widespread effects across cognitive domains, although these additional associations were no longer significant after adjusting for delayed episodic memory. In the large independent structural imaging sample, the same SNP was also associated with decreased hippocampal volume. Conclusions The authors identified a SNP in the MHC region that was associated with cognitive performance in patients with schizophrenia and healthy comparison subjects. This SNP, rs6904071, showed a replicated association with episodic memory and hippocampal volume. These findings implicate the MHC region in hippocampal structure and functioning, consistent with the role of MHC proteins in synaptic development and function. Follow-up of these results has the potential to provide insights into the pathophysiology of schizophrenia and cognition.

    Additional information

    Hoogman_2013_JourAmePsy.supp.pdf
  • Wang, L., Bastiaansen, M. C. M., Yang, Y., & Hagoort, P. (2013). ERP evidence on the interaction between information structure and emotional salience of words. Cognitive, Affective and Behavioral Neuroscience, 13, 297-310. doi:10.3758/s13415-012-0146-2.

    Abstract

    Both emotional words and words focused by information structure can capture attention. This study examined the interplay between emotional salience and information structure in modulating attentional resources in the service of integrating emotional words into sentence context. Event-related potentials (ERPs) to affectively negative, neutral, and positive words, which were either focused or nonfocused in question–answer pairs, were evaluated during sentence comprehension. The results revealed an early negative effect (90–200 ms), a P2 effect, as well as an effect in the N400 time window, for both emotional salience and information structure. Moreover, an interaction between emotional salience and information structure occurred within the N400 time window over right posterior electrodes, showing that information structure influences the semantic integration only for neutral words, but not for emotional words. This might reflect the fact that the linguistic salience of emotional words can override the effect of information structure on the integration of words into context. The interaction provides evidence for attention–emotion interactions at a later stage of processing. In addition, the absence of interaction in the early time window suggests that the processing of emotional information is highly automatic and independent of context. The results suggest independent attention capture systems of emotional salience and information structure at the early stage but an interaction between them at a later stage, during the semantic integration of words.
  • Wang, L., Zhu, Z., Bastiaansen, M. C. M., Hagoort, P., & Yang, Y. (2013). Recognizing the emotional valence of names: An ERP study. Brain and Language, 125, 118-127. doi:10.1016/j.bandl.2013.01.006.

    Abstract

    Unlike common nouns, person names refer to unique entities and generally have a referring function. We used event-related potentials to investigate the time course of identifying the emotional meaning of nouns and names. The emotional valence of names and nouns were manipulated separately. The results show early N1 effects in response to emotional valence only for nouns. This might reflect automatic attention directed towards emotional stimuli. The absence of such an effect for names supports the notion that the emotional meaning carried by names is accessed after word recognition and person identification. In addition, both names with negative valence and emotional nouns elicited late positive effects, which have been associated with evaluation of emotional significance. This positive effect started earlier for nouns than for names, but with similar durations. Our results suggest that distinct neural systems are involved in the retrieval of names’ and nouns’ emotional meaning.
  • Wang, L., Hagoort, P., & Yang, Y. (2009). Semantic illusion depends on information structure: ERP evidence. Brain Research, 1282, 50-56. doi:10.1016/j.brainres.2009.05.069.

    Abstract

    Next to propositional content, speakers distribute information in their utterances in such a way that listeners can make a distinction between new (focused) and given (non-focused) information. This is referred to as information structure. We measured event-related potentials (ERPs) to explore the role of information structure in semantic processing. Following different questions in wh-question-answer pairs (e.g. What kind of vegetable did Ming buy for cooking today? /Who bought the vegetables for cooking today?), the answer sentences (e.g., Ming bought eggplant/beef to cook today.) contained a critical word, which was either semantically appropriate (eggplant) or inappropriate (beef), and either focus or non-focus. The results showed a full N400 effect only when the critical words were in focus position. In non-focus position a strongly reduced N400 effect was observed, in line with the well-known semantic illusion effect. The results suggest that information structure facilitates semantic processing by devoting more resources to focused information.
  • Wang, L., & Chu, M. (2013). The role of beat gesture and pitch accent in semantic processing: An ERP study. Neuropsychologia, 51(13), 2847-2855. doi:10.1016/j.neuropsychologia.2013.09.027.

    Abstract

    The present study investigated whether and how beat gesture (small baton-like hand movements used to emphasize information in speech) influences semantic processing as well as its interaction with pitch accent during speech comprehension. Event-related potentials were recorded as participants watched videos of a person gesturing and speaking simultaneously. The critical words in the spoken sentences were accompanied by a beat gesture, a control hand movement, or no hand movement, and were expressed either with or without pitch accent. We found that both beat gesture and control hand movement induced smaller negativities in the N400 time window than when no hand movement was presented. The reduced N400s indicate that both beat gesture and control movement facilitated the semantic integration of the critical word into the sentence context. In addition, the words accompanied by beat gesture elicited smaller negativities in the N400 time window than those accompanied by control hand movement over right posterior electrodes, suggesting that beat gesture has a unique role for enhancing semantic processing during speech comprehension. Finally, no interaction was observed between beat gesture and pitch accent, indicating that they affect semantic processing independently.
  • Warmelink, L., Vrij, A., Mann, S., Leal, S., & Poletiek, F. H. (2013). The effects of unexpected questions on detecting familiar and unfamiliar lies. Psychiatry, Psychology and law, 20(1), 29-35. doi:10.1080/13218719.2011.619058.

    Abstract

    Previous research suggests that lie detection can be improved by asking the interviewee unexpected questions. The present experiment investigates the effect of two types of unexpected questions: background questions and detail questions, on detecting lies about topics with which the interviewee is (a) familiar or (b) unfamiliar. In this experiment, 66 participants read interviews in which interviewees answered background or detail questions, either truthfully or deceptively. Those who answered deceptively could be lying about a topic they were familiar with or about a topic they were unfamiliar with. The participants were asked to judge whether the interviewees were lying. The results revealed that background questions distinguished truths from both types of lies, while the detail questions distinguished truths from unfamiliar lies, but not from familiar lies. The implications of these findings are discussed.
  • Warner, N., Fountain, A., & Tucker, B. V. (2009). Cues to perception of reduced flaps. Journal of the Acoustical Society of America, 125(5), 3317-3327. doi:10.1121/1.3097773.

    Abstract

    Natural, spontaneous speech (and even quite careful speech) often shows extreme reduction in many speech segments, even resulting in apparent deletion of consonants. Where the flap ([ɾ]) allophone of /t/ and /d/ is expected in American English, one frequently sees an approximant-like or even vocalic pattern, rather than a clear flap. Still, the /t/ or /d/ is usually perceived, suggesting the acoustic characteristics of a reduced flap are sufficient for perception of a consonant. This paper identifies several acoustic characteristics of reduced flaps based on previous acoustic research (size of intensity dip, consonant duration, and F4 valley) and presents phonetic identification data for continua that manipulate these acoustic characteristics of reduction. The results indicate that the most obvious types of acoustic variability seen in natural flaps do affect listeners' percept of a consonant, but not sufficiently to completely account for the percept. Listeners are affected by the acoustic characteristics of consonant reduction, but they are also very skilled at evaluating variability along the acoustic dimensions that realize reduction.

  • Warner, N., Luna, Q., Butler, L., & Van Volkinburg, H. (2009). Revitalization in a scattered language community: Problems and methods from the perspective of Mutsun language revitalization. International Journal of the Sociology of Language, 198, 135-148. doi:10.1515/IJSL.2009.031.

    Abstract

    This article addresses revitalization of a dormant language whose prospective speakers live in scattered geographical areas. In comparison to increasing the usage of an endangered language, revitalizing a dormant language (one with no living speakers) requires different methods to gain knowledge of the language. Language teaching for a dormant language with a scattered community presents different problems from other teaching situations. In this article, we discuss the types of tasks that must be accomplished for dormant-language revitalization, with particular focus on development of teaching materials. We also address the role of computer technologies, arguing that each use of technology should be evaluated for how effectively it increases fluency. We discuss methods for achieving semi-fluency for the first new speakers of a dormant language, and for spreading the language through the community.
  • Wassenaar, M., Hagoort, P., & Brown, C. M. (1997). Syntactic ERP effects in Broca's aphasics with agrammatic comprehension. Brain and Language, 60, 61-64. doi:10.1006/brln.1997.1911.
  • Weber, A. (1998). Listening to nonnative language which violates native assimilation rules. In D. Duez (Ed.), Proceedings of the European Scientific Communication Association workshop: Sound patterns of Spontaneous Speech (pp. 101-104).

    Abstract

    Recent studies using phoneme detection tasks have shown that spoken-language processing is neither facilitated nor interfered with by optional assimilation, but is inhibited by violation of obligatory assimilation. Interpretation of these results depends on an assessment of their generality, specifically, whether they also obtain when listeners are processing nonnative language. Two separate experiments are presented in which native listeners of German and native listeners of Dutch had to detect a target fricative in legal monosyllabic Dutch nonwords. All of the nonwords were correct realisations in standard Dutch. For German listeners, however, half of the nonwords contained phoneme strings which violate the German fricative assimilation rule. Whereas the Dutch listeners showed no significant effects, German listeners detected the target fricative faster when the German fricative assimilation was violated than when no violation occurred. The results might suggest that violation of assimilation rules does not have to make processing more difficult per se.
  • Weber, K., & Indefrey, P. (2009). Syntactic priming in German–English bilinguals during sentence comprehension. Neuroimage, 46, 1164-1172. doi:10.1016/j.neuroimage.2009.03.040.

    Abstract

    A longstanding question in bilingualism is whether syntactic information is shared between the two language processing systems. We used an fMRI repetition suppression paradigm to investigate syntactic priming in reading comprehension in German–English late-acquisition bilinguals. In comparison to conventional subtraction analyses in bilingual experiments, repetition suppression has the advantage of being able to detect neuronal populations that are sensitive to properties that are shared by consecutive stimuli. In this study, we manipulated the syntactic structure between prime and target sentences. A sentence with a passive sentence structure in English was preceded either by a passive or by an active sentence in English or German. We looked for repetition suppression effects in left inferior frontal, left precentral and left middle temporal regions of interest. These regions were defined by a contrast of all non-target sentences in German and English versus the baseline of sentence-format consonant strings. We found decreases in activity (repetition suppression effects) in these regions of interest following the repetition of syntactic structure from the first to the second language and within the second language.
    Moreover, a separate behavioural experiment using a word-by-word reading paradigm similar to the fMRI experiment showed faster reading times for primed compared to unprimed English target sentences regardless of whether they were preceded by an English or a German sentence of the same structure.
    We conclude that there is interaction between the language processing systems and that at least some syntactic information is shared between a bilingual's languages with similar syntactic structures.

  • Weber, A. (2009). The role of linguistic experience in lexical recognition [Abstract]. Journal of the Acoustical Society of America, 125, 2759.

    Abstract

    Lexical recognition is typically slower in L2 than in L1. Part of the difficulty comes from insufficiently precise processing of L2 phonemes. Consequently, L2 listeners fail to eliminate candidate words that L1 listeners can exclude from competing for recognition. For instance, the inability to distinguish /r/ from /l/ in rocket and locker makes both words possible candidates for Japanese listeners when hearing their onset (e.g., Cutler, Weber, and Otake, 2006). The L2 disadvantage can, however, be dispelled: For L2 listeners, but not L1 listeners, L2 speech from a non-native talker with the same language background is known to be as intelligible as L2 speech from a native talker (e.g., Bent and Bradlow, 2003). A reason for this may be that L2 listeners have ample experience with segmental deviations that are characteristic of their own accent. On this account, only phonemic deviations that are typical for the listeners' own accent will cause spurious lexical activation in L2 listening (e.g., English magic pronounced as megic for Dutch listeners). In this talk, I will present evidence from cross-modal priming studies with a variety of L2 listener groups, showing how the processing of phonemic deviations is accent-specific but withstands fine phonetic differences.
  • Weissenborn, J. (1988). Von der demonstratio ad oculos zur Deixis am Phantasma. Die Entwicklung der lokalen Referenz bei Kindern. In Karl Bühler's Theory of Language. Proceedings of the Conference held at Kirchberg, August 26, 1984 and Essen, November 21–24, 1984 (pp. 257-276). Amsterdam: Benjamins.
  • Wells, J. B., Christiansen, M. H., Race, D. S., Acheson, D. J., & MacDonald, M. C. (2009). Experience and sentence processing: Statistical learning and relative clause comprehension. Cognitive Psychology, 58(2), 250-271. doi:10.1016/j.cogpsych.2008.08.002.

    Abstract

    Many explanations of the difficulties associated with interpreting object relative clauses appeal to the demands that object relatives make on working memory. MacDonald and Christiansen [MacDonald, M. C., & Christiansen, M. H. (2002). Reassessing working memory: Comment on Just and Carpenter (1992) and Waters and Caplan (1996). Psychological Review, 109, 35-54] pointed to variations in reading experience as a source of differences, arguing that the unique word order of object relatives makes their processing more difficult and more sensitive to the effects of previous experience than the processing of subject relatives. This hypothesis was tested in a large-scale study manipulating reading experiences of adults over several weeks. The group receiving relative clause experience increased reading speeds for object relatives more than for subject relatives, whereas a control experience group did not. The reading time data were compared to performance of a computational model given different amounts of experience. The results support claims for experience-based individual differences and an important role for statistical learning in sentence comprehension processes.
  • Wheeldon, L. R., & Levelt, W. J. M. (1995). Monitoring the time course of phonological encoding. Journal of Memory and Language, 34(3), 311-334. doi:10.1006/jmla.1995.1014.

    Abstract

    Three experiments examined the time course of phonological encoding in speech production. A new methodology is introduced in which subjects are required to monitor their internal speech production for prespecified target segments and syllables. Experiment 1 demonstrated that word initial target segments are monitored significantly faster than second syllable initial target segments. The addition of a concurrent articulation task (Experiment 1b) had a limited effect on performance, excluding the possibility that subjects are monitoring a subvocal articulation of the carrier word. Moreover, no relationship was observed between the pattern of monitoring latencies and the timing of the targets in subjects' overt speech. Subjects are not, therefore, monitoring an internal phonetic representation of the carrier word. Experiment 2 used the production monitoring task to replicate the syllable monitoring effect observed in speech perception experiments: responses to targets were faster when they corresponded to the initial syllable of the carrier word than when they did not. We conclude that subjects are monitoring their internal generation of a syllabified phonological representation. Experiment 3 provides more detailed evidence concerning the time course of the generation of this representation by comparing monitoring latencies to targets within, as well as between, syllables. Some amendments to current models of phonological encoding are suggested in light of these results.
  • Whitmarsh, S., Udden, J., Barendregt, H., & Petersson, K. M. (2013). Mindfulness reduces habitual responding based on implicit knowledge: Evidence from artificial grammar learning. Consciousness and Cognition, (3), 833-845. doi:10.1016/j.concog.2013.05.007.

    Abstract

    Participants were unknowingly exposed to complex regularities in a working memory task. The existence of implicit knowledge was subsequently inferred from a preference for stimuli with similar grammatical regularities. Several affective traits have been shown to influence artificial grammar learning (AGL) performance positively, many of which are related to a tendency for automatic responding. We therefore tested whether the mindfulness trait predicted a reduction of grammatically congruent preferences, and used emotional primes to explore the influence of affect. Mindfulness was shown to correlate negatively with grammatically congruent responses. Negative primes were shown to result in faster and more negative evaluations. We conclude that grammatically congruent preference ratings rely on habitual responses, and that our findings provide empirical evidence for the non-reactive disposition of the mindfulness trait.
  • Wilkins, D. (1995). Towards a Socio-Cultural Profile of the Communities We Work With. In D. Wilkins (Ed.), Extensions of space and beyond: manual for field elicitation for the 1995 field season (pp. 70-79). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3513481.

    Abstract

    Field data are drawn from a particular speech community at a certain place and time. The intent of this survey is to enrich understanding of the various socio-cultural contexts in which linguistic and “cognitive” data may have been collected, so that we can explore the role which societal, cultural and contextual factors may play in this material. The questionnaire gives guidelines concerning types of ethnographic information that are important to cross-cultural and cross-linguistic enquiry, and will be especially useful to researchers who do not have specialised training in anthropology.
  • Wilkins, D., Pederson, E., & Levinson, S. C. (1995). Background questions for the "enter"/"exit" research. In D. Wilkins (Ed.), Extensions of space and beyond: manual for field elicitation for the 1995 field season (pp. 14-16). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3003935.

    Abstract

    How do languages encode different kinds of movement, and what features do people pay attention to when describing motion events? This document outlines topics concerning the investigation of “enter” and “exit” events. It helps contextualise research tasks that examine this domain (see 'Motion Elicitation' and 'Enter/Exit animation') and gives some pointers about what other questions can be explored.
  • Wilkins, D. (1995). Motion elicitation: "moving 'in(to)'" and "moving 'out (of)'". In D. Wilkins (Ed.), Extensions of space and beyond: manual for field elicitation for the 1995 field season (pp. 4-12). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3003391.

    Abstract

    How do languages encode different kinds of movement, and what features do people pay attention to when describing motion events? This task investigates the expression of “enter” and “exit” activities, that is, events involving motion in(to) and motion out (of) container-like items. The researcher first uses particular stimuli (a ball, a cup, rice, etc.) to elicit descriptions of enter/exit events from one consultant, and then asks another consultant to demonstrate the event based on these descriptions. See also the related entries Enter/Exit Animation and Background Questions for Enter/Exit Research.
  • Wilkins, D. P., & Hill, D. (1995). When "go" means "come": Questioning the basicness of basic motion verbs. Cognitive Linguistics, 6, 209-260. doi:10.1515/cogl.1995.6.2-3.209.

    Abstract

    The purpose of this paper is to question some of the basic assumptions concerning motion verbs. In particular, it examines the assumption that "come" and "go" are lexical universals which manifest a universal deictic opposition. Against the background of five working hypotheses about the nature of "come" and "go", this study presents a comparative investigation of two unrelated languages—Mparntwe Arrernte (Pama-Nyungan, Australian) and Longgu (Oceanic, Austronesian). Although the pragmatic and deictic "suppositional" complexity of "come" and "go" expressions has long been recognized, we argue that in any given language the analysis of these expressions is much more semantically and systemically complex than has been assumed in the literature. Languages vary at the lexical semantic level as to what is entailed by these expressions, as well as differing as to what constitutes the prototype and categorial structure for such expressions. The data also strongly suggest that, if there is a lexical universal "go", then this cannot be an inherently deictic expression. However, due to systemic opposition with "come", non-deictic "go" expressions often take on a deictic interpretation through pragmatic attribution. Thus, this crosslinguistic investigation of "come" and "go" highlights the need to consider semantics and pragmatics as modularly separate.
  • Willems, R. M., Toni, I., Hagoort, P., & Casasanto, D. (2009). Body-specific motor imagery of hand actions: Neural evidence from right- and left-handers. Frontiers in Human Neuroscience, 3: 39. doi:10.3389/neuro.09.039.2009.

    Abstract

    If motor imagery uses neural structures involved in action execution, then the neural correlates of imagining an action should differ between individuals who tend to execute the action differently. Here we report fMRI data showing that motor imagery is influenced by the way people habitually perform motor actions with their particular bodies; that is, motor imagery is ‘body-specific’ (Casasanto, 2009). During mental imagery for complex hand actions, activation of cortical areas involved in motor planning and execution was left-lateralized in right-handers but right-lateralized in left-handers. We conclude that motor imagery involves the generation of an action plan that is grounded in the participant’s motor habits, not just an abstract representation at the level of the action’s goal. People with different patterns of motor experience form correspondingly different neurocognitive representations of imagined actions.
  • Willems, R. M., & Hagoort, P. (2009). Broca's region: Battles are not won by ignoring half of the facts. Trends in Cognitive Sciences, 13(3), 101. doi:10.1016/j.tics.2008.12.001.
  • Willems, R. M. (2013). Can literary studies contribute to cognitive neuroscience? Journal of literary semantics, 42(2), 217-222. doi:10.1515/jls-2013-0011.
  • Willems, R. M., Ozyurek, A., & Hagoort, P. (2009). Differential roles for left inferior frontal and superior temporal cortex in multimodal integration of action and language. Neuroimage, 47, 1992-2004. doi:10.1016/j.neuroimage.2009.05.066.

    Abstract

    Several studies indicate that both posterior superior temporal sulcus/middle temporal gyrus (pSTS/MTG) and left inferior frontal gyrus (LIFG) are involved in integrating information from different modalities. Here we investigated the respective roles of these two areas in integration of action and language information. We exploited the fact that the semantic relationship between language and different forms of action (i.e. co-speech gestures and pantomimes) is radically different. Speech and co-speech gestures are always produced together, and gestures are not unambiguously understood without speech. On the contrary, pantomimes are not necessarily produced together with speech and can be easily understood without speech. We presented speech together with these two types of communicative hand actions in matching or mismatching combinations to manipulate semantic integration load. Left and right pSTS/MTG were only involved in semantic integration of speech and pantomimes. Left IFG on the other hand was involved in integration of speech and co-speech gestures as well as of speech and pantomimes. Effective connectivity analyses showed that depending upon the semantic relationship between language and action, LIFG modulates activation levels in left pSTS.

    This suggests that integration in pSTS/MTG involves the matching of two input streams for which there is a relatively stable common object representation, whereas integration in LIFG is better characterized as the on-line construction of a new and unified representation of the input streams. In conclusion, pSTS/MTG and LIFG are differentially involved in multimodal integration, crucially depending upon the semantic relationship between the input streams.

    Additional information

    Supplementary table S1
  • Willems, R. M., & Hagoort, P. (2009). Hand preference influences neural correlates of action observation. Brain Research, 1269, 90-104. doi:10.1016/j.brainres.2009.02.057.

    Abstract

    It has been argued that we map observed actions onto our own motor system. Here we added to this issue by investigating whether hand preference influences the neural correlates of action observation of simple, essentially meaningless hand actions. Such an influence would argue for an intricate neural coupling between action production and action observation, which goes beyond effects of motor repertoire or explicit motor training, as has been suggested before. Indeed, parts of the human motor system exhibited a close coupling between action production and action observation. Ventral premotor and inferior and superior parietal cortices showed differential activation for left- and right-handers that was similar during action production as well as during action observation. This suggests that mapping observed actions onto the observer's own motor system is a core feature of action observation - at least for actions that do not have a clear goal or meaning. Basic differences in the way we act upon the world are not only reflected in neural correlates of action production, but can also influence the brain basis of action observation.
  • Willems, R. M. (2009). Neural reflections of meaning in gesture, language, and action. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Windhouwer, M., Petro, J., Newskaya, I., Drude, S., Aristar-Dry, H., & Gippert, J. (2013). Creating a serialization of LMF: The experience of the RELISH project. In G. Francopoulo (Ed.), LMF - Lexical Markup Framework (pp. 215-226). London: Wiley.
  • Windhouwer, M., & Wright, S. E. (2013). LMF and the Data Category Registry: Principles and application. In G. Francopoulo (Ed.), LMF: Lexical Markup Framework (pp. 41-50). London: Wiley.
  • Wittek, A. (1998). Learning verb meaning via adverbial modification: Change-of-state verbs in German and the adverb "wieder" again. In A. Greenhill, M. Hughes, H. Littlefield, & H. Walsh (Eds.), Proceedings of the 22nd Annual Boston University Conference on Language Development (pp. 779-790). Somerville, MA: Cascadilla Press.
  • Witteman, M. J. (2013). Lexical processing of foreign-accented speech: Rapid and flexible adaptation. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Witteman, M. J., Weber, A., & McQueen, J. M. (2013). Foreign accent strength and listener familiarity with an accent co-determine speed of perceptual adaptation. Attention, Perception & Psychophysics, 75, 537-556. doi:10.3758/s13414-012-0404-y.

    Abstract

    We investigated how the strength of a foreign accent and varying types of experience with foreign-accented speech influence the recognition of accented words. In Experiment 1, native Dutch listeners with limited or extensive prior experience with German-accented Dutch completed a cross-modal priming experiment with strongly, medium, and weakly accented words. Participants with limited experience were primed by the medium and weakly accented words, but not by the strongly accented words. Participants with extensive experience were primed by all accent types. In Experiments 2 and 3, Dutch listeners with limited experience listened to a short story before doing the cross-modal priming task. In Experiment 2, the story was spoken by the priming task speaker and either contained strongly accented words or did not. Strongly accented exposure led to immediate priming by novel strongly accented words, while exposure to the speaker without strongly accented tokens led to priming only in the experiment’s second half. In Experiment 3, listeners listened to the story with strongly accented words spoken by a different German-accented speaker. Listeners were primed by the strongly accented words, but again only in the experiment’s second half. Together, these results show that adaptation to foreign-accented speech is rapid but depends on accent strength and on listener familiarity with those strongly accented words.
  • Wittenburg, P., & Ringersma, J. (2013). Metadata description for lexicons. In R. H. Gouws, U. Heid, W. Schweickard, & H. E. Wiegand (Eds.), Dictionaries: An international encyclopedia of lexicography: Supplementary volume: Recent developments with focus on electronic and computational lexicography (pp. 1329-1335). Berlin: Mouton de Gruyter.
  • Won, S.-O., Hu, I., Kim, M.-Y., Bae, J.-M., Kim, Y.-M., & Byun, K.-S. (2009). Theory and practice of Sign Language interpretation. Pyeongtaek: Korea National College of Rehabilitation & Welfare.
  • Wood, N. (2009). Field recording for dummies. In A. Majid (Ed.), Field manual volume 12 (pp. V). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Wright, S. E., Windhouwer, M., Schuurman, I., & Kemps-Snijders, M. (2013). Community efforts around the ISOcat Data Category Registry. In I. Gurevych, & J. Kim (Eds.), The People's Web meets NLP: Collaboratively constructed language resources (pp. 349-374). New York: Springer.

    Abstract

    The ISOcat Data Category Registry provides a community computing environment for creating, storing, retrieving, harmonizing and standardizing data category specifications (DCs), used to register linguistic terms used in various fields. This chapter recounts the history of DC documentation in TC 37, beginning from paper-based lists created for lexicographers and terminologists and progressing to the development of a web-based resource for a much broader range of users. While describing the considerable strides that have been made to collect a very large comprehensive collection of DCs, it also outlines difficulties that have arisen in developing a fully operative web-based computing environment for achieving consensus on data category names, definitions, and selections and describes efforts to overcome some of the present shortcomings and to establish positive working procedures designed to engage a wide range of people involved in the creation of language resources.
  • Wright, S. E., & Windhouwer, M. (2013). ISOcat - im Reich der Datenkategorien. eDITion: Fachzeitschrift für Terminologie, 9(1), 8-12.

    Abstract

    The ISOcat Data Category Registry (www.isocat.org) of ISO Technical Committee ISO/TC 37 (Terminology and other language and content resources) documents field names and values for language resources. Recommended field names and reliable definitions are intended to help make language data reusable independently of applications, platforms, and communities of practice (CoP). Data Category Selections can be viewed, printed, and exported, and, after free registration, new ones can be created.
  • Xiao, M., Kong, X., Liu, J., & Ning, J. (2009). TMBF: Bloom filter algorithms of time-dependent multi bit-strings for incremental set. In Proceedings of the 2009 International Conference on Ultra Modern Telecommunications & Workshops.

    Abstract

    Sets are widely used as a basic data structure. However, when a set holds large-scale data, the costs of storage, search, and transport become substantial. A Bloom filter uses a fixed-size bit string to represent the elements of a static set, which reduces storage space and makes the search cost a fixed constant. This time-space efficiency comes at the cost of a small probability of false positives in membership queries; for many applications, however, the space savings and constant lookup time outweigh this drawback. The dynamic bloom filter (DBF) supports concise representation and approximate membership queries of dynamic sets rather than static sets. It has been proved that DBF not only possesses the advantages of the standard Bloom filter, but also has better properties when dealing with dynamic sets. This paper proposes a time-dependent multiple bit-strings bloom filter (TMBF), which is rooted in the DBF and targets dynamic incremental sets. TMBF uses multiple bit-strings in time order to represent a dynamically increasing set and uses backward searching to test whether an element is in the set. An evaluation based on system logs from a real P2P file-sharing system shows a 20% reduction in search cost compared to DBF.
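    The core idea described in the abstract—appending a fresh bit-string as the set grows and querying the bit-strings from newest to oldest—can be illustrated with a minimal sketch. This is not the authors' implementation; the class names, parameters, and hash scheme below are hypothetical choices made for illustration only.

    ```python
    import hashlib

    class BloomFilter:
        """A minimal fixed-size Bloom filter over one bit-string."""
        def __init__(self, size=1024, hashes=3):
            self.size = size
            self.hashes = hashes
            self.bits = 0  # a Python int used as a bit array

        def _positions(self, item):
            # Derive k independent bit positions from salted SHA-256 digests.
            for i in range(self.hashes):
                h = hashlib.sha256(f"{i}:{item}".encode()).digest()
                yield int.from_bytes(h[:8], "big") % self.size

        def add(self, item):
            for p in self._positions(item):
                self.bits |= 1 << p

        def __contains__(self, item):
            return all((self.bits >> p) & 1 for p in self._positions(item))

    class TMBF:
        """Sketch of a time-ordered multi-bit-string filter: when the
        current bit-string reaches capacity, a new one is appended, so the
        list of filters represents the set in time order."""
        def __init__(self, capacity=100, size=1024, hashes=3):
            self.capacity = capacity
            self.size = size
            self.hashes = hashes
            self.filters = [BloomFilter(size, hashes)]
            self.count = 0  # items inserted into the newest filter

        def add(self, item):
            if self.count >= self.capacity:
                self.filters.append(BloomFilter(self.size, self.hashes))
                self.count = 0
            self.filters[-1].add(item)
            self.count += 1

        def __contains__(self, item):
            # "Backward searching": scan newest-to-oldest, so recently
            # inserted elements are found after checking few bit-strings.
            return any(item in f for f in reversed(self.filters))
    ```

    The newest-first scan is what gives the claimed saving for incremental workloads: if queries are biased toward recently added elements, most lookups terminate after inspecting only the most recent bit-string.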
  • Zavala, R. (1997). Functional analysis of Akatek voice constructions. International Journal of American Linguistics, 63(4), 439-474.

    Abstract

    The author studies the correlations between syntactic structure and pragmatic function in voice alternations in Akatek, a Mayan language belonging to the Q'anjob'alan subgroup. Pragmatic voice alternations are the mechanisms by which languages encode the differing degrees of topicality of the two main participants in a semantically transitive event, the agent and the patient. Using a quantitative analysis, the author assesses the topicality of these participants and identifies the syntactic structures that express the four main voice functions in Akatek: active-direct, inverse, passive, and antipassive.
  • Zeshan, U., Escobedo Delgado, C. E., Dikyuva, H., Panda, S., & De Vos, C. (2013). Cardinal numerals in rural sign languages: Approaching cross-modal typology. Linguistic Typology, 17(3), 357-396. doi:10.1515/lity-2013-0019.

    Abstract

    This article presents data on cardinal numerals in three sign languages from small-scale communities with hereditary deafness. The unusual features found in these data considerably extend the known range of typological variety across sign languages. Some features, such as non-decimal numeral bases, are unattested in sign languages, but familiar from spoken languages, while others, such as subtractive sub-systems, are rare in sign and speech. We conclude that for a complete typological appraisal of a domain, an approach to cross-modal typology, which includes a typologically diverse range of sign languages in addition to spoken languages, is both instructive and feasible.
  • De Zubicaray, G. I., Acheson, D. J., & Hartsuiker, R. J. (Eds.). (2013). Mind what you say - general and specific mechanisms for monitoring in speech production [Research topic] [Special Issue]. Frontiers in Human Neuroscience. Retrieved from http://www.frontiersin.org/human_neuroscience/researchtopics/mind_what_you_say_-_general_an/1197.

    Abstract

    Psycholinguistic research has typically portrayed speech production as a relatively automatic process. This is because when errors are made, they occur as seldom as one in every thousand words we utter. However, it has long been recognised that we need some form of control over what we are currently saying and what we plan to say. This capacity to both monitor our inner speech and self-correct our speech output has often been assumed to be a property of the language comprehension system. More recently, it has been demonstrated that speech production benefits from interfacing with more general cognitive processes such as selective attention, short-term memory (STM) and online response monitoring to resolve potential conflict and successfully produce the output of a verbal plan. The conditions and levels of representation according to which these more general planning, monitoring and control processes are engaged during speech production remain poorly understood. Moreover, there remains a paucity of information about their neural substrates, despite some of the first evidence of more general monitoring having come from electrophysiological studies of error related negativities (ERNs). While aphasic speech errors continue to be a rich source of information, there has been comparatively little research focus on instances of speech repair. The purpose of this Frontiers Research Topic is to provide a forum for researchers to contribute investigations employing behavioural, neuropsychological, electrophysiological, neuroimaging and virtual lesioning techniques. In addition, while the focus of the research topic is on novel findings, we welcome submission of computational simulations, review articles and methods papers.
  • Zwitserlood, I. (2009). Het Corpus NGT. Levende Talen Magazine, 6, 44-45.

    Abstract

    The Corpus NGT
  • Zwitserlood, I. (2009). Het Corpus NGT en de dagelijkse lespraktijk (1). Levende Talen Magazine, 8, 40-41.
  • Zwitserlood, I., Perniss, P. M., & Ozyurek, A. (2013). Expression of multiple entities in Turkish Sign Language (TİD). In E. Arik (Ed.), Current Directions in Turkish Sign Language Research (pp. 272-302). Newcastle upon Tyne: Cambridge Scholars Publishing.

    Abstract

    This paper reports on an exploration of the ways in which multiple entities are expressed in Turkish Sign Language (TİD). The (descriptive and quantitative) analyses provided are based on a corpus of both spontaneous data and specifically elicited data, in order to provide as comprehensive an account as possible. We have found several devices in TİD for expression of multiple entities, in particular localization, spatial plural predicate inflection, and a specific form used to express multiple entities that are side by side in the same configuration (not reported for any other sign language to date), as well as numerals and quantifiers. In contrast to some other signed languages, TİD does not appear to have a productive system of plural reduplication. We argue that none of the devices encountered in the TİD data is a genuine plural marking device and that the plural interpretation of multiple entity localizations and plural predicate inflections is a by-product of the use of space to indicate the existence or the involvement in an event of multiple entities.
