  • Alferink, I. (2015). Dimensions of convergence in bilingual speech and gesture. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Asaridou, S. S. (2015). An ear for pitch: On the effects of experience and aptitude in processing pitch in language and music. PhD Thesis, Radboud University Nijmegen, Nijmegen.

  • Azar, Z., & Ozyurek, A. (2015). Discourse Management: Reference tracking in speech and gesture in Turkish narratives. Dutch Journal of Applied Linguistics, 4(2), 222-240. doi:10.1075/dujal.4.2.06aza.

    Abstract

    Speakers achieve coherence in discourse by alternating between different lexical forms (e.g. noun phrase, pronoun, and null form) in accordance with the accessibility of the entities they refer to, i.e. whether they introduce an entity into discourse for the first time or continue referring to an entity they have already mentioned. Moreover, tracking of entities in discourse is a multimodal phenomenon. Studies show that speakers are sensitive to the informational structure of discourse and use fuller forms (e.g. full noun phrases) in speech and gesture more when re-introducing an entity, while they use attenuated forms (e.g. pronouns) in speech and gesture less when maintaining a referent. However, those studies focus mainly on non-pro-drop languages (e.g. English, German and French). The present study investigates whether the same pattern holds for pro-drop languages, drawing on elicited narratives from adult native speakers of Turkish. We find that Turkish speakers mostly use fuller forms to code subject referents in the re-introduction context and the null form in the maintenance context, and that they point to gesture space for referents more in the re-introduction context than in the maintenance context. Hence we provide supporting evidence for the inverse relation between the accessibility of a discourse referent and its coding in speech and gesture. As a novel contribution, we also find that the third person pronoun is used in the re-introduction context only when the referent was previously mentioned as the object argument of the immediately preceding clause.
  • Bank, R., Crasborn, O., & Van Hout, R. (2015). Alignment of two languages: The spreading of mouthings in Sign Language of the Netherlands. International Journal of Bilingualism, 19, 40-55. doi:10.1177/1367006913484991.

    Abstract

    Mouthings and mouth gestures are omnipresent in Sign Language of the Netherlands (NGT). Mouthings in NGT are mouth actions that have their origin in spoken Dutch, and are usually time-aligned with the signs they co-occur with. Frequently, however, they spread over one or more adjacent signs, so that one mouthing co-occurs with multiple manual signs. We conducted a corpus study to explore how frequently this occurs in NGT and whether there is any sociolinguistic variation in the use of spreading. Further, we looked at the circumstances under which spreading occurs. Answers to these questions may give us insight into the prosodic structure of sign languages. We investigated a sample of the Corpus NGT containing 5929 mouthings by 46 participants. We found that spreading over an adjacent sign is independent of social factors. Further, mouthings that spread are longer than non-spreading mouthings, whether measured in syllables or in milliseconds. By using a relatively large amount of natural data, we succeeded in gaining more insight into the way mouth actions are utilised in sign languages.
  • Bank, R. (2015). The ubiquity of mouthings in NGT: A corpus study. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Becker, M., Devanna, P., Fisher, S. E., & Vernes, S. C. (2015). A chromosomal rearrangement in a child with severe speech and language disorder separates FOXP2 from a functional enhancer. Molecular Cytogenetics, 8: 69. doi:10.1186/s13039-015-0173-0.

    Abstract

    Mutations of FOXP2 in 7q31 cause a rare disorder involving speech apraxia, accompanied by expressive and receptive language impairments. A recent report described a child with speech and language deficits, and a genomic rearrangement affecting chromosomes 7 and 11. One breakpoint mapped to 7q31 and, although outside its coding region, was hypothesised to disrupt FOXP2 expression. We identified an element 2 kb downstream of this breakpoint with epigenetic characteristics of an enhancer. We show that this element drives reporter gene expression in human cell-lines. Thus, displacement of this element by translocation may disturb gene expression, contributing to the observed language phenotype.
  • Brucato, N., Guadalupe, T., Franke, B., Fisher, S. E., & Francks, C. (2015). A schizophrenia-associated HLA locus affects thalamus volume and asymmetry. Brain, Behavior, and Immunity, 46, 311-318. doi:10.1016/j.bbi.2015.02.021.

    Abstract

    Genes of the Major Histocompatibility Complex (MHC) have recently been shown to have neuronal functions in the thalamus and hippocampus. Common genetic variants in the Human Leukocyte Antigens (HLA) region, the human homologue of the MHC locus, are associated with small effects on susceptibility to schizophrenia, while volumetric changes of the thalamus and hippocampus have also been linked to schizophrenia. We therefore investigated whether common variants of the HLA would affect volumetric variation of the thalamus and hippocampus. We analyzed thalamus and hippocampus volumes, as measured using structural magnetic resonance imaging, in 1,265 healthy participants. These participants had also been genotyped using genome-wide single nucleotide polymorphism (SNP) arrays. We imputed genotypes for single nucleotide polymorphisms at high density across the HLA locus, as well as HLA allotypes and HLA amino acids, by use of a reference population dataset that was specifically targeted to the HLA region. We detected a significant association of the SNP rs17194174 with thalamus volume (nominal P=0.0000017, corrected P=0.0039), as well as additional SNPs within the same region of linkage disequilibrium. This effect was largely lateralized to the left thalamus and is localized within a genomic region previously associated with schizophrenia. The associated SNPs are also clustered within a potential regulatory element, and a region of linkage disequilibrium that spans genes expressed in the thalamus, including HLA-A. Our data indicate that genetic variation within the HLA region influences the volume and asymmetry of the human thalamus. The molecular mechanisms underlying this association may relate to HLA influences on susceptibility to schizophrenia.
  • Collins, J. (2015). ‘Give’ and semantic maps. In B. Nolan, G. Rawoens, & E. Diedrichsen (Eds.), Causation, permission, and transfer: Argument realisation in GET, TAKE, PUT, GIVE and LET verbs (pp. 129-146). Amsterdam: John Benjamins.
  • Croijmans, I., & Majid, A. (2015). Odor naming is difficult, even for wine and coffee experts. In D. C. Noelle, R. Dale, A. S. Warlaumont, J. Yoshimi, T. Matlock, C. D. Jennings, & P. P. Maglio (Eds.), Proceedings of the 37th Annual Meeting of the Cognitive Science Society (CogSci 2015) (pp. 483-488). Austin, TX: Cognitive Science Society. Retrieved from https://mindmodeling.org/cogsci2015/papers/0092/index.html.

    Abstract

    Odor naming is difficult for people, but recent cross-cultural research suggests this difficulty is culture-specific. Jahai speakers (hunter-gatherers from the Malay Peninsula) name odors as consistently as colors, and much better than English speakers (Majid & Burenhult, 2014). In Jahai the linguistic advantage for smells correlates with a cultural interest in odors. Here we ask whether sub-cultures in the West with odor expertise also show superior odor naming. We tested wine and coffee experts (who have specialized odor training) in an odor naming task. Both wine and coffee experts were no more accurate or consistent than novices when naming odors. Although there were small differences in naming strategies, experts and non-experts alike relied overwhelmingly on source-based descriptions. So the specific language experts speak continues to constrain their ability to express odors. This suggests expertise alone is not sufficient to overcome the limits of language in the domain of smell.
  • Dingemanse, M., Roberts, S. G., Baranova, J., Blythe, J., Drew, P., Floyd, S., Gisladottir, R. S., Kendrick, K. H., Levinson, S. C., Manrique, E., Rossi, G., & Enfield, N. J. (2015). Universal Principles in the Repair of Communication Problems. PLoS One, 10(9): e0136100. doi:10.1371/journal.pone.0136100.

    Abstract

    There would be little adaptive value in a complex communication system like human language if there were no ways to detect and correct problems. A systematic comparison of conversation in a broad sample of the world’s languages reveals a universal system for the real-time resolution of frequent breakdowns in communication. In a sample of 12 languages of 8 language families of varied typological profiles we find a system of ‘other-initiated repair’, where the recipient of an unclear message can signal trouble and the sender can repair the original message. We find that this system is frequently used (on average about once per 1.4 minutes in any language), and that it has detailed common properties, contrary to assumptions of radical cultural variation. Unrelated languages share the same three functionally distinct types of repair initiator for signalling problems and use them in the same kinds of contexts. People prefer to choose the most specific type possible, a principle that minimizes cost both for the sender being asked to fix the problem and for the dyad as a social unit. Disruption to the conversation is kept to a minimum, with the two-utterance repair sequence being on average no longer than the single utterance which is being fixed. The findings, controlled for historical relationships, situation types and other dependencies, reveal the fundamentally cooperative nature of human communication and offer support for the pragmatic universals hypothesis: while languages may vary in the organization of grammar and meaning, key systems of language use may be largely similar across cultural groups. They also provide a fresh perspective on controversies about the core properties of language, by revealing a common infrastructure for social interaction which may be the universal bedrock upon which linguistic diversity rests.

  • Drijvers, L., Zaadnoordijk, L., & Dingemanse, M. (2015). Sound-symbolism is disrupted in dyslexia: Implications for the role of cross-modal abstraction processes. In D. C. Noelle, R. Dale, A. S. Warlaumont, J. Yoshimi, T. Matlock, C. D. Jennings, & P. P. Maglio (Eds.), Proceedings of the 37th Annual Meeting of the Cognitive Science Society (CogSci 2015) (pp. 602-607). Austin, TX: Cognitive Science Society.

    Abstract

    Research into sound-symbolism has shown that people can consistently associate certain pseudo-words with certain referents; for instance, pseudo-words with rounded vowels and sonorant consonants are linked to round shapes, while pseudo-words with unrounded vowels and obstruents (with a noncontinuous airflow) are associated with sharp shapes. Such sound-symbolic associations have been proposed to arise from cross-modal abstraction processes. Here we assess the link between sound-symbolism and cross-modal abstraction by testing dyslexic individuals’ ability to make sound-symbolic associations. Dyslexic individuals are known to have deficiencies in cross-modal processing. We find that dyslexic individuals are impaired in their ability to make sound-symbolic associations relative to the controls. Our results shed light on the cognitive underpinnings of sound-symbolism by providing novel evidence for the role (and disruptability) of cross-modal abstraction processes in sound-symbolic effects.
  • Franken, M. K., McQueen, J. M., Hagoort, P., & Acheson, D. J. (2015). Assessing the link between speech perception and production through individual differences. In Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). Glasgow, UK: University of Glasgow.

    Abstract

    This study aims to test a prediction of recent theoretical frameworks in speech motor control: if speech production targets are specified in auditory terms, people with better auditory acuity should have more precise speech targets. To investigate this, we had participants perform speech perception and production tasks in a counterbalanced order. To assess speech perception acuity, we used an adaptive speech discrimination task. To assess variability in speech production, participants performed a pseudo-word reading task; formant values were measured for each recording. We predicted that speech production variability would correlate inversely with discrimination performance. The results suggest that people do vary in their production and perceptual abilities, and that better discriminators have more distinctive vowel production targets, confirming our prediction. This study highlights the importance of individual differences in the study of speech motor control, and sheds light on speech production-perception interaction.
  • Franken, M. K., Hagoort, P., & Acheson, D. J. (2015). Modulations of the auditory M100 in an Imitation Task. Brain and Language, 142, 18-23. doi:10.1016/j.bandl.2015.01.001.

    Abstract

    Models of speech production explain event-related suppression of the auditory cortical response as reflecting a comparison between auditory predictions and feedback. The present MEG study was designed to test two predictions from this framework: 1) whether the reduced auditory response varies as a function of the mismatch between prediction and feedback; 2) whether individual variation in this response is predictive of speech-motor adaptation. Participants alternated between online imitation and listening tasks. In the imitation task, participants began each trial producing the same vowel (/e/) and subsequently listened to and imitated auditorily presented vowels varying in acoustic distance from /e/. Results replicated suppression, with a smaller M100 during speaking than listening. Although we did not find unequivocal support for the first prediction, participants with less M100 suppression were better at the imitation task. These results are consistent with the enhancement of M100 serving as an error signal to drive subsequent speech-motor adaptation.
  • Gialluisi, A. (2015). Investigating the genetic basis of reading and language skills. PhD Thesis, Radboud University Nijmegen, Nijmegen.

  • Gisladottir, R. S. (2015). Other-initiated repair in Icelandic. Open Linguistics, 1(1), 309-328. doi:10.1515/opli-2015-0004.

    Abstract

    The ability to repair problems with hearing or understanding in conversation is critical for successful communication. This article describes the linguistic practices of other-initiated repair (OIR) in Icelandic through quantitative and qualitative analysis of a corpus of video-recorded conversations. The study draws on the conceptual distinctions developed in the comparative project on repair described in the introduction to this issue. The main aim is to give an overview of the formats for OIR in Icelandic and the type of repair practices engendered by them. The use of repair initiations in social actions not aimed at solving comprehension problems is also briefly discussed. In particular, the interjection ha has a rich usage extending beyond open other-initiation of repair. By describing the linguistic machinery for other-initiated repair in Icelandic, this study contributes to the typology of conversational structure and to the still nascent field of Icelandic social interaction studies.
  • Gisladottir, R. S., Chwilla, D., & Levinson, S. C. (2015). Conversation electrified: ERP correlates of speech act recognition in underspecified utterances. PLoS One, 10(3): e0120068. doi:10.1371/journal.pone.0120068.

    Abstract

    The ability to recognize speech acts (verbal actions) in conversation is critical for everyday interaction. However, utterances are often underspecified for the speech act they perform, requiring listeners to rely on the context to recognize the action. The goal of this study was to investigate the time-course of auditory speech act recognition in action-underspecified utterances and explore how sequential context (the prior action) impacts this process. We hypothesized that speech acts are recognized early in the utterance to allow for quick transitions between turns in conversation. Event-related potentials (ERPs) were recorded while participants listened to spoken dialogues and performed an action categorization task. The dialogues contained target utterances, each of which could deliver three distinct speech acts depending on the prior turn. The targets were identical across conditions, but differed in the type of speech act performed and how it fit into the larger action sequence. The ERP results show an early effect of action type, reflected by frontal positivities as early as 200 ms after target utterance onset. This indicates that speech act recognition begins early in the turn when the utterance has only been partially processed. Providing further support for early speech act recognition, actions in highly constraining contexts did not elicit an ERP effect to the utterance-final word. We take this to show that listeners can recognize the action before the final word through predictions at the speech act level. However, additional processing based on the complete utterance is required in more complex actions, as reflected by a posterior negativity at the final word when the speech act is in a less constraining context and a new action sequence is initiated. These findings demonstrate that sentence comprehension in conversational contexts crucially involves recognition of verbal action, which begins as soon as it can.
  • Gisladottir, R. S. (2015). Conversation electrified: The electrophysiology of spoken speech act recognition. PhD Thesis, Radboud University Nijmegen, Nijmegen.

  • Guadalupe, T., Zwiers, M. P., Wittfeld, K., Teumer, A., Vasquez, A. A., Hoogman, M., Hagoort, P., Fernandez, G., Buitelaar, J., van Bokhoven, H., Hegenscheid, K., Völzke, H., Franke, B., Fisher, S. E., Grabe, H. J., & Francks, C. (2015). Asymmetry within and around the human planum temporale is sexually dimorphic and influenced by genes involved in steroid hormone receptor activity. Cortex, 62, 41-55. doi:10.1016/j.cortex.2014.07.015.

    Abstract

    The genetic determinants of cerebral asymmetries are unknown. Sex differences in asymmetry of the planum temporale, that overlaps Wernicke’s classical language area, have been inconsistently reported. Meta-analysis of previous studies has suggested that publication bias established this sex difference in the literature. Using probabilistic definitions of cortical regions we screened over the cerebral cortex for sexual dimorphisms of asymmetry in 2337 healthy subjects, and found the planum temporale to show the strongest sex-linked asymmetry of all regions, which was supported by two further datasets, and also by analysis with the Freesurfer package that performs automated parcellation of cerebral cortical regions. We performed a genome-wide association scan meta-analysis of planum temporale asymmetry in a pooled sample of 3095 subjects, followed by a candidate-driven approach which measured a significant enrichment of association in genes of the 'steroid hormone receptor activity' and 'steroid metabolic process' pathways. Variants in the genes and pathways identified may affect the role of the planum temporale in language cognition.
  • Hammond, J. (2015). Switch reference in Whitesands. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Hanique, I., Aalders, E., & Ernestus, M. (2015). How robust are exemplar effects in word comprehension? In G. Jarema, & G. Libben (Eds.), Phonological and phonetic considerations of lexical processing (pp. 15-39). Amsterdam: Benjamins.

    Abstract

    This paper studies the robustness of exemplar effects in word comprehension by means of four long-term priming experiments with lexical decision tasks in Dutch. A prime and target represented the same word type and were presented with the same or different degree of reduction. In Experiment 1, participants heard only a small number of trials, a large proportion of repeated words, and stimuli produced by only one speaker. They recognized targets more quickly if these represented the same degree of reduction as their primes, which forms additional evidence for the exemplar effects reported in the literature. Similar effects were found for two speakers who differ in their pronunciations. In Experiment 2, with a smaller proportion of repeated words and more trials between prime and target, participants recognized targets preceded by primes with the same or a different degree of reduction equally quickly. Also, in Experiments 3 and 4, in which listeners were not exposed to one but two types of pronunciation variation (reduction degree and speaker voice), no exemplar effects arose. We conclude that the role of exemplars in speech comprehension during natural conversations, which typically involve several speakers and few repeated content words, may be smaller than previously assumed.
  • Hibar, D. P., Stein, J. L., Renteria, M. E., Arias-Vasquez, A., Desrivières, S., Jahanshad, N., Toro, R., Wittfeld, K., Abramovic, L., Andersson, M., Aribisala, B. S., Armstrong, N. J., Bernard, M., Bohlken, M. M., Boks, M. P., Bralten, J., Brown, A. A., Chakravarty, M. M., Chen, Q., Ching, C. R. K., Cuellar-Partida, G., den Braber, A., Giddaluru, S., Goldman, A. L., Grimm, O., Guadalupe, T., Hass, J., Woldehawariat, G., Holmes, A. J., Hoogman, M., Janowitz, D., Jia, T., Kim, S., Klein, M., Kraemer, B., Lee, P. H., Olde Loohuis, L. M., Luciano, M., Macare, C., Mather, K. A., Mattheisen, M., Milaneschi, Y., Nho, K., Papmeyer, M., Ramasamy, A., Risacher, S. L., Roiz-Santiañez, R., Rose, E. J., Salami, A., Sämann, P. G., Schmaal, L., Schork, A. J., Shin, J., Strike, L. T., Teumer, A., Van Donkelaar, M. M. J., Van Eijk, K. R., Walters, R. K., Westlye, L. T., Whelan, C. D., Winkler, A. M., Zwiers, M. P., Alhusaini, S., Athanasiu, L., Ehrlich, S., Hakobjan, M. M. H., Hartberg, C. B., Haukvik, U. K., Heister, A. J. G. A. M., Hoehn, D., Kasperaviciute, D., Liewald, D. C. M., Lopez, L. M., Makkinje, R. R. R., Matarin, M., Naber, M. A. M., McKay, D. R., Needham, M., Nugent, A. C., Pütz, B., Royle, N. A., Shen, L., Sprooten, E., Trabzuni, D., Van der Marel, S. S. L., Van Hulzen, K. J. E., Walton, E., Wolf, C., Almasy, L., Ames, D., Arepalli, S., Assareh, A. A., Bastin, M. E., Brodaty, H., Bulayeva, K. B., Carless, M. A., Cichon, S., Corvin, A., Curran, J. E., Czisch, M., De Zubicaray, G. I., Dillman, A., Duggirala, R., Dyer, T. D., Erk, S., Fedko, I. O., Ferrucci, L., Foroud, T. M., Fox, P. T., Fukunaga, M., Gibbs, J. R., Göring, H. H. H., Green, R. 
C., Guelfi, S., Hansell, N. K., Hartman, C. A., Hegenscheid, K., Heinz, A., Hernandez, D. G., Heslenfeld, D. J., Hoekstra, P. J., Holsboer, F., Homuth, G., Hottenga, J.-J., Ikeda, M., Jack, C. R., Jenkinson, M., Johnson, R., Kanai, R., Keil, M., Kent, J. W., Kochunov, P., Kwok, J. B., Lawrie, S. M., Liu, X., Longo, D. L., McMahon, K. L., Meisenzahl, E., Melle, I., Mohnke, S., Montgomery, G. W., Mostert, J. C., Mühleisen, T. W., Nalls, M. A., Nichols, T. E., Nilsson, L. G., Nöthen, M. M., Ohi, K., Olvera, R. L., Perez-Iglesias, R., Pike, G. B., Potkin, S. G., Reinvang, I., Reppermund, S., Rietschel, M., Romanczuk-Seiferth, N., Rosen, G. D., Rujescu, D., Schnell, K., Schofield, P. R., Smith, C., Steen, V. M., Sussmann, J. E., Thalamuthu, A., Toga, A. W., Traynor, B. J., Troncoso, J., Turner, J. A., Valdes Hernández, M. C., van 't Ent, D., Van der Brug, M., Van der Wee, N. J. A., Van Tol, M.-J., Veltman, D. J., Wassink, T. H., Westman, E., Zielke, R. H., Zonderman, A. B., Ashbrook, D. G., Hager, R., Lu, L., McMahon, F. J., Morris, D. W., Williams, R. W., Brunner, H. G., Buckner, R. L., Buitelaar, J. K., Cahn, W., Calhoun, V. D., Cavalleri, G. L., Crespo-Facorro, B., Dale, A. M., Davies, G. E., Delanty, N., Depondt, C., Djurovic, S., Drevets, W. C., Espeseth, T., Gollub, R. L., Ho, B.-C., Hoffmann, W., Hosten, N., Kahn, R. S., Le Hellard, S., Meyer-Lindenberg, A., Müller-Myhsok, B., Nauck, M., Nyberg, L., Pandolfo, M., Penninx, B. W. J. H., Roffman, J. L., Sisodiya, S. M., Smoller, J. W., Van Bokhoven, H., Van Haren, N. E. M., Völzke, H., Walter, H., Weiner, M. W., Wen, W., White, T., Agartz, I., Andreassen, O. A., Blangero, J., Boomsma, D. I., Brouwer, R. M., Cannon, D. M., Cookson, M. R., De Geus, E. J. C., Deary, I. J., Donohoe, G., Fernández, G., Fisher, S. E., Francks, C., Glahn, D. C., Grabe, H. J., Gruber, O., Hardy, J., Hashimoto, R., Hulshoff Pol, H. E., Jönsson, E. G., Kloszewska, I., Lovestone, S., Mattay, V. S., Mecocci, P., McDonald, C., McIntosh, A. 
M., Ophoff, R. A., Paus, T., Pausova, Z., Ryten, M., Sachdev, P. S., Saykin, A. J., Simmons, A., Singleton, A., Soininen, H., Wardlaw, J. M., Weale, M. E., Weinberger, D. R., Adams, H. H. H., Launer, L. J., Seiler, S., Schmidt, R., Chauhan, G., Satizabal, C. L., Becker, J. T., Yanek, L., van der Lee, S. J., Ebling, M., Fischl, B., Longstreth, W. T., Greve, D., Schmidt, H., Nyquist, P., Vinke, L. N., Van Duijn, C. M., Xue, L., Mazoyer, B., Bis, J. C., Gudnason, V., Seshadri, S., Ikram, M. A., The Alzheimer’s Disease Neuroimaging Initiative, The CHARGE Consortium, EPIGEN, IMAGEN, SYS, Martin, N. G., Wright, M. J., Schumann, G., Franke, B., Thompson, P. M., & Medland, S. E. (2015). Common genetic variants influence human subcortical brain structures. Nature, 520, 224-229. doi:10.1038/nature14101.

    Abstract

    The highly complex structure of the human brain is strongly shaped by genetic influences. Subcortical brain regions form circuits with cortical areas to coordinate movement, learning, memory and motivation, and altered circuits can lead to abnormal behaviour and disease. To investigate how common genetic variants affect the structure of these brain regions, here we conduct genome-wide association studies of the volumes of seven subcortical regions and the intracranial volume derived from magnetic resonance images of 30,717 individuals from 50 cohorts. We identify five novel genetic variants influencing the volumes of the putamen and caudate nucleus. We also find stronger evidence for three loci with previously established influences on hippocampal volume and intracranial volume. These variants show specific volumetric effects on brain structures rather than global effects across structures. The strongest effects were found for the putamen, where a novel intergenic locus with replicable influence on volume (rs945270; P = 1.08 × 10^-33; 0.52% variance explained) showed evidence of altering the expression of the KTN1 gene in both brain and blood tissue. Variants influencing putamen volume clustered near developmental genes that regulate apoptosis, axon guidance and vesicle transport. Identification of these genetic variants provides insight into the causes of variability in human brain development, and may help to determine mechanisms of neuropsychiatric dysfunction.

  • Hintz, F. (2015). Predicting language in different contexts: The nature and limits of mechanisms in anticipatory language processing. PhD Thesis, Radboud University Nijmegen, Nijmegen.

  • Hintz, F., & Meyer, A. S. (2015). Prediction and production of simple mathematical equations: Evidence from anticipatory eye movements. PLoS One, 10(7): e0130766. doi:10.1371/journal.pone.0130766.

    Abstract

    The relationship between the production and the comprehension systems has recently become a topic of interest for many psycholinguists. It has been argued that these systems are tightly linked and in particular that listeners use the production system to predict upcoming content. In this study, we tested how similar production and prediction processes are in a novel version of the visual world paradigm. Dutch-speaking participants (native speakers in Experiment 1; German-Dutch bilinguals in Experiment 2) listened to mathematical equations while looking at a clock face featuring the numbers 1 to 12. On alternating trials, they either heard a complete equation ("three plus eight is eleven") or they heard the first part ("three plus eight is") and had to produce the result ("eleven") themselves. Participants were encouraged to look at the relevant numbers throughout the trial. Their eye movements were recorded and analyzed. We found that the participants' eye movements in the two tasks were overall very similar. They fixated the first and second number of the equations shortly after they were mentioned, and fixated the result number well before they named it on production trials and well before the recorded speaker named it on comprehension trials. However, all fixation latencies were shorter on production than on comprehension trials. These findings suggest that the processes involved in planning to say a word and anticipating hearing a word are quite similar, but that people are more aroused or engaged when they intend to respond than when they merely listen to another person.

  • Hintz, F., & Huettig, F. (2015). The complexity of the visual environment modulates language-mediated eye gaze. In R. Mishra, N. Srinivasan, & F. Huettig (Eds.), Attention and Vision in Language Processing (pp. 39-55). Berlin: Springer. doi:10.1007/978-81-322-2443-3_3.

    Abstract

    Three eye-tracking experiments investigated the impact of the complexity of the visual environment on the likelihood of word-object mapping taking place at phonological, semantic and visual levels of representation during language-mediated visual search. Dutch participants heard spoken target words while looking at four objects embedded in displays of different complexity and indicated the presence or absence of the target object. During filler trials the target objects were present, but during experimental trials they were absent and the display contained various competitor objects. For example, given the target word “beaker”, the display contained a phonological (a beaver, bever), a shape (a bobbin, klos), a semantic (a fork, vork) competitor, and an unrelated distractor (an umbrella, paraplu). When objects were presented in simple four-object displays (Experiment 2), there were clear attentional biases to all three types of competitors replicating earlier research (Huettig and McQueen, 2007). When the objects were embedded in complex scenes including four human-like characters or four meaningless visual shapes (Experiments 1, 3), there were biases in looks to visual and semantic but not to phonological competitors. In both experiments, however, we observed evidence for inhibition in looks to phonological competitors, which suggests that the phonological forms of the objects nevertheless had been retrieved. These findings suggest that phonological word-object mapping is contingent upon the nature of the visual environment and add to a growing body of evidence that the nature of our visual surroundings induces particular modes of processing during language-mediated visual search.
  • Hoey, E. (2015). Lapses: How people arrive at, and deal with, discontinuities in talk. Research on Language and Social Interaction, 48(4), 430-453. doi:10.1080/08351813.2015.1090116.

    Abstract

    Interaction includes moments of silence. When all participants forgo the option to speak, the silence can be called a “lapse.” This article builds on existing work on lapses and other kinds of silences (gaps, pauses, and so on) to examine how participants reach a point where lapsing is a possibility and how they orient to the lapse that subsequently develops. Drawing from a wide range of activities and settings, I will show that participants may treat lapses as (a) the relevant cessation of talk, (b) the allowable development of silence, or (c) the conspicuous absence of talk. Data are in American and British English.
  • Janssen, R., Moisik, S. R., & Dediu, D. (2015). Bézier modelling and high accuracy curve fitting to capture hard palate variation. In Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). Glasgow, UK: University of Glasgow.

    Abstract

    The human hard palate shows between-subject variation that is known to influence articulatory strategies. In order to link such variation to human speech, we are conducting a cross-sectional MRI study on multiple populations. A model based on Bézier curves using only three parameters was fitted to hard palate MRI tracings using evolutionary computation. The fits produced are consistently highly accurate. For future research, this new method may be used to classify our MRI data on ethnic origins using, e.g., cluster analyses. Furthermore, we may integrate our model into three-dimensional representations of the vocal tract in order to investigate its effect on acoustics and cultural transmission.
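    As a rough illustration of the kind of model the abstract describes, the sketch below evaluates a quadratic Bézier curve and fits its middle control point to a set of traced points. All names here are hypothetical, and the simple random-search hill climb is only a toy stand-in for the paper's evolutionary computation, not the authors' actual procedure.

    ```python
    import random

    def bezier(p0, p1, p2, t):
        """Evaluate a quadratic Bezier curve at parameter t in [0, 1]."""
        return tuple(
            (1 - t) ** 2 * a + 2 * (1 - t) * t * b + t ** 2 * c
            for a, b, c in zip(p0, p1, p2)
        )

    def fit_control_point(points, p0, p2, iterations=4000, seed=1):
        """Fit the middle control point p1 so the curve passes close to the
        traced points, by a crude random-search hill climb (a toy stand-in
        for evolutionary computation)."""
        rng = random.Random(seed)
        ts = [i / (len(points) - 1) for i in range(len(points))]

        def error(p1):
            # Sum of squared distances between curve samples and tracings.
            return sum(
                (x - px) ** 2 + (y - py) ** 2
                for t, (px, py) in zip(ts, points)
                for x, y in [bezier(p0, p1, p2, t)]
            )

        best = (0.0, 0.0)
        best_err = error(best)
        for _ in range(iterations):
            cand = (best[0] + rng.uniform(-1, 1), best[1] + rng.uniform(-1, 1))
            e = error(cand)
            if e < best_err:
                best, best_err = cand, e
        return best, best_err
    ```

    Because a quadratic Bézier is linear in each control point, the squared-error surface is convex in `p1`, so even this naive search converges reliably on clean data.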
  • Jiang, J., Chen, C., Dai, B., Shi, G., Liu, L., & Lu, C. (2015). Leader emergence through interpersonal neural synchronization. Proceedings of the National Academy of Sciences of the United States of America, 112(14), 4274-4279. doi:10.1073/pnas.1422930112.

    Abstract

    The neural mechanism of leader emergence is not well understood. This study investigated (i) whether interpersonal neural synchronization (INS) plays an important role in leader emergence, and (ii) whether INS and leader emergence are associated with the frequency or the quality of communications. Eleven three-member groups were asked to perform a leaderless group discussion (LGD) task, and their brain activities were recorded via functional near infrared spectroscopy (fNIRS)-based hyperscanning. Video recordings of the discussions were coded for leadership and communication. Results showed that the INS for the leader–follower (LF) pairs was higher than that for the follower–follower (FF) pairs in the left temporo-parietal junction (TPJ), an area important for social mentalizing. Although communication frequency was higher for the LF pairs than for the FF pairs, the frequency of leader-initiated and follower-initiated communication did not differ significantly. Moreover, INS for the LF pairs was significantly higher during leader-initiated communication than during follower-initiated communications. In addition, INS for the LF pairs during leader-initiated communication was significantly correlated with the leaders’ communication skills and competence, but not their communication frequency. Finally, leadership could be successfully predicted based on INS as well as communication frequency early during the LGD (before half a minute into the task). In sum, this study found that leader emergence was characterized by high-level neural synchronization between the leader and followers and that the quality, rather than the frequency, of communications was associated with synchronization. These results suggest that leaders emerge because they are able to say the right things at the right time.
  • Jongman, S. R., Meyer, A. S., & Roelofs, A. (2015). The role of sustained attention in the production of conjoined noun phrases: An individual differences study. PLoS One, 10(9): e0137557. doi:10.1371/journal.pone.0137557.

    Abstract

    It has previously been shown that language production, performed simultaneously with a nonlinguistic task, involves sustained attention. Sustained attention concerns the ability to maintain alertness over time. Here, we aimed to replicate the previous finding by showing that individuals call upon sustained attention when they plan single noun phrases (e.g., "the carrot") and perform a manual arrow categorization task. In addition, we investigated whether speakers also recruit sustained attention when they produce conjoined noun phrases (e.g., "the carrot and the bucket") describing two pictures, that is, when both the first and second task are linguistic. We found that sustained attention correlated with the proportion of abnormally slow phrase-production responses. Individuals with poor sustained attention displayed a greater number of very slow responses than individuals with better sustained attention. Importantly, this relationship was obtained both for the production of single phrases while performing a nonlinguistic manual task, and the production of noun phrase conjunctions in referring to two spatially separated objects. Inhibition and updating abilities were also measured. These scores did not correlate with our measure of sustained attention, suggesting that sustained attention and executive control are distinct. Overall, the results suggest that planning conjoined noun phrases involves sustained attention, and that language production happens less automatically than has often been assumed.
  • Jongman, S. R., Roelofs, A., & Meyer, A. S. (2015). Sustained attention in language production: An individual differences investigation. Quarterly Journal of Experimental Psychology, 68, 710-730. doi:10.1080/17470218.2014.964736.

    Abstract

    Whereas it has long been assumed that most linguistic processes underlying language production happen automatically, accumulating evidence suggests that some form of attention is required. Here, we investigated the contribution of sustained attention, which is the ability to maintain alertness over time. First, the sustained attention ability of participants was measured using auditory and visual continuous performance tasks. Next, the participants described pictures using simple noun phrases while their response times (RTs) and gaze durations were measured. Earlier research has suggested that gaze duration reflects language planning processes up to and including phonological encoding. Individual differences in sustained attention ability correlated with individual differences in the magnitude of the tail of the RT distribution, reflecting the proportion of very slow responses, but not with individual differences in gaze duration. These results suggest that language production requires sustained attention, especially after phonological encoding.
  • Klein, M., Van der Vloet, M., Harich, B., Van Hulzen, K. J., Onnink, A. M. H., Hoogman, M., Guadalupe, T., Zwiers, M., Groothuismink, J. M., Verberkt, A., Nijhof, B., Castells-Nobau, A., Faraone, S. V., Buitelaar, J. K., Schenck, A., Arias-Vasquez, A., Franke, B., & Psychiatric Genomics Consortium ADHD Working Group (2015). Converging evidence does not support GIT1 as an ADHD risk gene. American Journal of Medical Genetics Part B: Neuropsychiatric Genetics, 168, 492-507. doi:10.1002/ajmg.b.32327.

    Abstract

    Attention-Deficit/Hyperactivity Disorder (ADHD) is a common neuropsychiatric disorder with a complex genetic background. The G protein-coupled receptor kinase interacting ArfGAP 1 (GIT1) gene was previously associated with ADHD. We aimed at replicating the association of GIT1 with ADHD and investigated its role in cognitive and brain phenotypes. Gene-wide and single variant association analyses for GIT1 were performed for three cohorts: (1) the ADHD meta-analysis data set of the Psychiatric Genomics Consortium (PGC, N=19,210), (2) the Dutch cohort of the International Multicentre persistent ADHD CollaboraTion (IMpACT-NL, N=225), and (3) the Brain Imaging Genetics cohort (BIG, N=1,300). Furthermore, functionality of the rs550818 variant as an expression quantitative trait locus (eQTL) for GIT1 was assessed in human blood samples. By using Drosophila melanogaster as a biological model system, we manipulated Git expression according to the outcome of the expression result and studied the effect of Git knockdown on neuronal morphology and locomotor activity. Association of rs550818 with ADHD was not confirmed, nor did a combination of variants in GIT1 show association with ADHD or any related measures in either of the investigated cohorts. However, the rs550818 risk-genotype did reduce GIT1 expression level. Git knockdown in Drosophila caused abnormal synapse and dendrite morphology, but did not affect locomotor activity. In summary, we could not confirm GIT1 as an ADHD candidate gene, while rs550818 was found to be an eQTL for GIT1. Despite GIT1's regulation of neuronal morphology, alterations in gene expression do not appear to have ADHD-related behavioral consequences.
  • Koch, X., & Janse, E. (2015). Effects of age and hearing loss on articulatory precision for sibilants. In M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). London: International Phonetic Association.

    Abstract

    This study investigates the effects of adult age and speaker abilities on articulatory precision for sibilant productions. Normal-hearing young adults with better sibilant discrimination have been shown to produce greater spectral sibilant contrasts. As reduced auditory feedback may gradually impact on feedforward commands, we investigate whether articulatory precision as indexed by spectral mean for [s] and [ʃ] decreases with age, and more particularly with age-related hearing loss. Younger, middle-aged and older adults read aloud words starting with the sibilants [s] or [ʃ]. Possible effects of cognitive, perceptual, linguistic and sociolinguistic background variables on the sibilants' acoustics were also investigated. Sibilant contrasts were less pronounced for male than female speakers. Most importantly, for the fricative [s], the spectral mean was modulated by individual high-frequency hearing loss, but not age. These results underscore that even mild hearing loss already affects articulatory precision.
  • Kruspe, N., Burenhult, N., & Wnuk, E. (2015). Northern Aslian. In P. Sidwell, & M. Jenny (Eds.), Handbook of Austroasiatic Languages (pp. 419-474). Leiden: Brill.
  • Kunert, R., & Slevc, L. R. (2015). A commentary on: “Neural overlap in processing music and speech”. Frontiers in Human Neuroscience, 9: 330. doi:10.3389/fnhum.2015.00330.
  • Kunert, R., Willems, R. M., Casasanto, D., Patel, A. D., & Hagoort, P. (2015). Music and language syntax interact in Broca’s Area: An fMRI study. PLoS One, 10(11): e0141069. doi:10.1371/journal.pone.0141069.

    Abstract

    Instrumental music and language are both syntactic systems, employing complex, hierarchically-structured sequences built using implicit structural norms. This organization allows listeners to understand the role of individual words or tones in the context of an unfolding sentence or melody. Previous studies suggest that the brain mechanisms of syntactic processing may be partly shared between music and language. However, functional neuroimaging evidence for anatomical overlap of brain activity involved in linguistic and musical syntactic processing has been lacking. In the present study we used functional magnetic resonance imaging (fMRI) in conjunction with an interference paradigm based on sung sentences. We show that the processing demands of musical syntax (harmony) and language syntax interact in Broca’s area in the left inferior frontal gyrus (without leading to music and language main effects). A language main effect in Broca’s area only emerged in the complex music harmony condition, suggesting that (with our stimuli and tasks) a language effect only becomes visible under conditions of increased demands on shared neural resources. In contrast to previous studies, our design allows us to rule out that the observed neural interaction is due to: (1) general attention mechanisms, as a psychoacoustic auditory anomaly behaved unlike the harmonic manipulation, (2) error processing, as the language and the music stimuli contained no structural errors. The current results thus suggest that two different cognitive domains—music and language—might draw on the same high level syntactic integration resources in Broca’s area.
  • Lam, K. J. Y., Dijkstra, T., & Rueschemeyer, S.-A. (2015). Feature activation during word recognition: action, visual, and associative-semantic priming effects. Frontiers in Psychology, 6: 659. doi:10.3389/fpsyg.2015.00659.

    Abstract

    Embodied theories of language postulate that language meaning is stored in modality-specific brain areas generally involved in perception and action in the real world. However, the temporal dynamics of the interaction between modality-specific information and lexical-semantic processing remain unclear. We investigated the relative timing at which two types of modality-specific information (action-based and visual-form information) contribute to lexical-semantic comprehension. To this end, we applied a behavioral priming paradigm in which prime and target words were related with respect to (1) action features, (2) visual features, or (3) semantically associative information. Using a Go/No-Go lexical decision task, priming effects were measured across four different inter-stimulus intervals (ISI = 100, 250, 400, and 1000 ms) to determine the relative time course of the different features. Notably, action priming effects were found in ISIs of 100, 250, and 1000 ms whereas a visual priming effect was seen only in the ISI of 1000 ms. Importantly, our data suggest that features follow different time courses of activation during word recognition. In this regard, feature activation is dynamic, measurable in specific time windows but not in others. Thus the current study (1) demonstrates how multiple ISIs can be used within an experiment to help chart the time course of feature activation and (2) provides new evidence for embodied theories of language.
  • Lartseva, A., Dijkstra, T., & Buitelaar, J. (2015). Emotional language processing in Autism Spectrum Disorders: A systematic review. Frontiers in Human Neuroscience, 8: 991. doi:10.3389/fnhum.2014.00991.

    Abstract

    In his first description of Autism Spectrum Disorders (ASD), Kanner emphasized emotional impairments by characterizing children with ASD as indifferent to other people, self-absorbed, emotionally cold, distanced, and retracted. Thereafter, emotional impairments became regarded as part of the social impairments of ASD, and research mostly focused on understanding how individuals with ASD recognize visual expressions of emotions from faces and body postures. However, it still remains unclear how emotions are processed outside of the visual domain. This systematic review aims to fill this gap by focusing on impairments of emotional language processing in ASD. We systematically searched PubMed for papers published between 1990 and 2013 using standardized search terms. Studies show that people with ASD are able to correctly classify emotional language stimuli as emotionally positive or negative. However, processing of emotional language stimuli in ASD is associated with atypical patterns of attention and memory performance, as well as abnormal physiological and neural activity. Particularly, younger children with ASD have difficulties in acquiring and developing emotional concepts, and avoid using these in discourse. These emotional language impairments were not consistently associated with age, IQ, or level of development of language skills. We discuss how emotional language impairments fit with existing cognitive theories of ASD, such as central coherence, executive dysfunction, and weak Theory of Mind. We conclude that emotional impairments in ASD may be broader than just a mere consequence of social impairments, and should receive more attention in future research.
  • Lewis, A. G., & Bastiaansen, M. C. M. (2015). A predictive coding framework for rapid neural dynamics during sentence-level language comprehension. Cortex, 68, 155-168. doi:10.1016/j.cortex.2015.02.014.

    Abstract

    There is a growing literature investigating the relationship between oscillatory neural dynamics measured using EEG and/or MEG, and sentence-level language comprehension. Recent proposals have suggested a strong link between predictive coding accounts of the hierarchical flow of information in the brain, and oscillatory neural dynamics in the beta and gamma frequency ranges. We propose that findings relating beta and gamma oscillations to sentence-level language comprehension might be unified under such a predictive coding account. Our suggestion is that oscillatory activity in the beta frequency range may reflect both the active maintenance of the current network configuration responsible for representing the sentence-level meaning under construction, and the top-down propagation of predictions to hierarchically lower processing levels based on that representation. In addition, we suggest that oscillatory activity in the low and middle gamma range reflects the matching of top-down predictions with bottom-up linguistic input, while evoked high gamma might reflect the propagation of bottom-up prediction errors to higher levels of the processing hierarchy. We also discuss some of the implications of this predictive coding framework, and we outline ideas for how these might be tested experimentally.
  • Lewis, A. G., Wang, L., & Bastiaansen, M. C. M. (2015). Fast oscillatory dynamics during language comprehension: Unification versus maintenance and prediction? Brain and Language, 148, 51-63. doi:10.1016/j.bandl.2015.01.003.

    Abstract

    The role of neuronal oscillations during language comprehension is not yet well understood. In this paper we review and reinterpret the functional roles of beta- and gamma-band oscillatory activity during language comprehension at the sentence and discourse level. We discuss the evidence in favor of a role for beta and gamma in unification (the unification hypothesis), and in light of mounting evidence that cannot be accounted for under this hypothesis, we explore an alternative proposal linking beta and gamma oscillations to maintenance and prediction (respectively) during language comprehension. Our maintenance/prediction hypothesis is able to account for most of the findings that are currently available relating beta and gamma oscillations to language comprehension, and is in good agreement with other proposals about the roles of beta and gamma in domain-general cognitive processing. In conclusion we discuss proposals for further testing and comparing the prediction and unification hypotheses.
  • Lockwood, G., & Dingemanse, M. (2015). Iconicity in the lab: A review of behavioural, developmental, and neuroimaging research into sound-symbolism. Frontiers in Psychology, 6: 1246. doi:10.3389/fpsyg.2015.01246.

    Abstract

    This review covers experimental approaches to sound-symbolism—from infants to adults, and from Sapir’s foundational studies to twenty-first century product naming. It synthesizes recent behavioral, developmental, and neuroimaging work into a systematic overview of the cross-modal correspondences that underpin iconic links between form and meaning. It also identifies open questions and opportunities, showing how the future course of experimental iconicity research can benefit from an integrated interdisciplinary perspective. Combining insights from psychology and neuroscience with evidence from natural languages provides us with opportunities for the experimental investigation of the role of sound-symbolism in language learning, language processing, and communication. The review finishes by describing how hypothesis-testing and model-building will help contribute to a cumulative science of sound-symbolism in human language.
  • Lockwood, G., & Tuomainen, J. (2015). Ideophones in Japanese modulate the P2 and late positive complex responses. Frontiers in Psychology, 6: 933. doi:10.3389/fpsyg.2015.00933.

    Abstract

    Sound-symbolism, or the direct link between sound and meaning, is typologically and behaviorally attested across languages. However, neuroimaging research has mostly focused on artificial non-words or individual segments, which do not represent sound-symbolism in natural language. We used EEG to compare Japanese ideophones, which are phonologically distinctive sound-symbolic lexical words, and arbitrary adverbs during a sentence reading task. Ideophones elicit a larger visual P2 response and a sustained late positive complex in comparison to arbitrary adverbs. These results and previous literature suggest that the larger P2 may indicate the integration of sound and sensory information by association in response to the distinctive phonology of ideophones. The late positive complex may reflect the facilitated lexical retrieval of ideophones in comparison to arbitrary words. This account provides new evidence that ideophones exhibit similar cross-modal correspondences to those which have been proposed for non-words and individual sounds, and that these effects are detectable in natural language.
  • Manrique, E., & Enfield, N. J. (2015). Suspending the next turn as a form of repair initiation: Evidence from Argentine Sign Language. Frontiers in Psychology, 6: 1326. doi:10.3389/fpsyg.2015.01326.

    Abstract

    Practices of other-initiated repair deal with problems of hearing or understanding what another person has said in the fast-moving turn-by-turn flow of conversation. As such, other-initiated repair plays a fundamental role in the maintenance of intersubjectivity in social interaction. This study finds and analyses a special type of other-initiated repair that is used in turn-by-turn conversation in a sign language: Argentine Sign Language (Lengua de Señas Argentina or LSA). We describe a type of response termed a "freeze-look," which occurs when a person has just been asked a direct question: instead of answering the question in the next turn position, the person holds still while looking directly at the questioner. In these cases it is clear that the person is aware of having just been addressed and is not otherwise accounting for their delay in responding (e.g., by displaying a "thinking" face or hesitation, etc.). We find that this behavior functions as a way for an addressee to initiate repair by the person who asked the question. The "freeze-look" results in the questioner "re-doing" their action of asking a question, for example by repeating or rephrasing it. Thus, we argue that the "freeze-look" is a practice for other-initiation of repair. In addition, we argue that it is an "off-record" practice, thus contrasting with known on-record practices such as saying "Huh?" or equivalents. The findings aim to contribute to research on human understanding in everyday turn-by-turn conversation by looking at an understudied sign language, with possible implications for our understanding of visual bodily communication in spoken languages as well.

    Supplementary material

    Manrique_Enfield_2015_supp.pdf
  • Moers, C., Janse, E., & Meyer, A. S. (2015). Probabilistic reduction in reading aloud: A comparison of younger and older adults. In M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). London: International Phonetic Association.

    Abstract

    Frequent and predictable words are generally pronounced with less effort and are therefore acoustically more reduced than less frequent or unpredictable words. Local predictability can be operationalised by Transitional Probability (TP), which indicates how likely a word is to occur given its immediate context. We investigated whether and how probabilistic reduction effects on word durations change with adult age when reading aloud content words embedded in sentences. The results showed equally large frequency effects on verb and noun durations for both younger (Mage = 20 years) and older (Mage = 68 years) adults. Backward TP also affected word duration for younger and older adults alike. Forward TP, however, had no significant effect on word duration in either age group. Our results resemble earlier findings of more robust Backward TP effects compared to Forward TP effects. Furthermore, unlike the often-reported decline in predictive processing with aging, probabilistic reduction effects remain stable across adulthood.
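    Transitional Probability, as operationalised in the abstract, can be sketched with simple bigram counts: Forward TP conditions a word pair on the first word, Backward TP on the second. The function name and the raw count-based estimate below are illustrative assumptions, not the authors' implementation.

    ```python
    from collections import Counter

    def transitional_probabilities(tokens):
        """Forward and backward transitional probabilities for each
        adjacent word pair in a token sequence.

        Forward TP(w1, w2)  = count(w1 w2) / count(w1)
        Backward TP(w1, w2) = count(w1 w2) / count(w2)
        """
        unigrams = Counter(tokens)
        bigrams = Counter(zip(tokens, tokens[1:]))
        forward = {pair: c / unigrams[pair[0]] for pair, c in bigrams.items()}
        backward = {pair: c / unigrams[pair[1]] for pair, c in bigrams.items()}
        return forward, backward
    ```

    For example, in "the cat sat on the mat", Forward TP for ("the", "cat") is 0.5, because "the" occurs twice but is followed by "cat" only once, while Backward TP for the same pair is 1.0, because every occurrence of "cat" is preceded by "the".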
  • Monaghan, P., Mattock, K., Davies, R., & Smith, A. C. (2015). Gavagai is as gavagai does: Learning nouns and verbs from cross-situational statistics. Cognitive Science, 39, 1099-1112. doi:10.1111/cogs.12186.

    Abstract

    Learning to map words onto their referents is difficult, because there are multiple possibilities for forming these mappings. Cross-situational learning studies have shown that word-object mappings can be learned across multiple situations, as can verbs when presented in a syntactic context. However, these previous studies have presented either nouns or verbs in ambiguous contexts and thus bypass much of the complexity of multiple grammatical categories in speech. We show that noun word-learning in adults is robust when objects are moving, and that verbs can also be learned from similar scenes without additional syntactic information. Furthermore, we show that both nouns and verbs can be acquired simultaneously, thus resolving category-level as well as individual word level ambiguity. However, nouns were learned more accurately than verbs, and we discuss this in light of previous studies investigating the noun advantage in word learning.
  • Morano, L., Ernestus, M., & Ten Bosch, L. (2015). Schwa reduction in low-proficiency L2 speakers: Learning and generalization. In Scottish consortium for ICPhS, M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). Glasgow: University of Glasgow.

    Abstract

    This paper investigated the learnability and generalizability of French schwa alternation by Dutch low-proficiency second language learners. We trained 40 participants on 24 new schwa words by exposing them equally often to the reduced and full forms of these words. We then assessed participants' accuracy and reaction times to these newly learnt words as well as 24 previously encountered schwa words with an auditory lexical decision task. Our results show learning of the new words in both forms. This suggests that lack of exposure is probably the main cause of learners' difficulties with reduced forms. Nevertheless, the full forms were slightly better recognized than the reduced ones, possibly due to phonetic and phonological properties of the reduced forms. We also observed no generalization to previously encountered words, suggesting that our participants stored both of the learnt word forms and did not create a rule that applies to all schwa words.
  • Moreno, I., De Vega, M., León, I., Bastiaansen, M. C. M., Lewis, A. G., & Magyari, L. (2015). Brain dynamics in the comprehension of action-related language. A time-frequency analysis of mu rhythms. Neuroimage, 109, 50-62. doi:10.1016/j.neuroimage.2015.01.018.

    Abstract

    EEG mu rhythms (8-13 Hz) recorded at fronto-central electrodes are generally considered as markers of motor cortical activity in humans, because they are modulated when participants perform an action, when they observe another's action or even when they imagine performing an action. In this study, we analyzed the time-frequency (TF) modulation of mu rhythms while participants read action language ("You will cut the strawberry cake"), abstract language ("You will doubt the patient's argument"), and perceptive language ("You will notice the bright day"). The results indicated that mu suppression at fronto-central sites is associated with action language rather than with abstract or perceptive language. Also, the largest difference between conditions occurred quite late in the sentence, while reading the first noun (contrast Action vs. Abstract), or the second noun following the action verb (contrast Action vs. Perceptive). This suggests that motor activation is associated with the integration of words across the sentence beyond the lexical processing of the action verb. Source reconstruction localized mu suppression associated with action sentences in premotor cortex (BA 6). The present study suggests (1) that the understanding of action language activates motor networks in the human brain, and (2) that this activation occurs online based on semantic integration across multiple words in the sentence.
  • Neger, T. M., Rietveld, T., & Janse, E. (2015). Adult age effects in auditory statistical learning. In M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). London: International Phonetic Association.

    Abstract

    Statistical learning plays a key role in language processing, e.g., for speech segmentation. Older adults have been reported to show less statistical learning on the basis of visual input than younger adults. Given age-related changes in perception and cognition, we investigated whether statistical learning is also impaired in the auditory modality in older compared to younger adults and whether individual learning ability is associated with measures of perceptual (i.e., hearing sensitivity) and cognitive functioning in both age groups. Thirty younger and thirty older adults performed an auditory artificial-grammar-learning task to assess their statistical learning ability. In younger adults, perceptual effort came at the cost of processing resources required for learning. Inhibitory control (as indexed by Stroop color-naming performance) did not predict auditory learning. Overall, younger and older adults showed the same amount of auditory learning, indicating that statistical learning ability is preserved over the adult life span.
  • Nijveld, A., Ten Bosch, L., & Ernestus, M. (2015). Exemplar effects arise in a lexical decision task, but only under adverse listening conditions. In Scottish consortium for ICPhS, M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). Glasgow: University of Glasgow.

    Abstract

    This paper studies the influence of adverse listening conditions on exemplar effects in priming experiments that do not instruct participants to use their episodic memories. We conducted two lexical decision experiments, in which a prime and a target represented the same word type and could be spoken by the same or a different speaker. In Experiment 1, participants listened to clear speech, and showed no exemplar effects: they recognised repetitions by the same speaker as quickly as different speaker repetitions. In Experiment 2, the stimuli contained noise, and exemplar effects did arise. Importantly, Experiment 1 elicited longer average RTs than Experiment 2, a result that contradicts the time-course hypothesis, according to which exemplars only play a role when processing is slow. Instead, our findings support the hypothesis that exemplar effects arise under adverse listening conditions, when participants are stimulated to use their episodic memories in addition to their mental lexicons.
  • Peeters, D. (2015). A social and neurobiological approach to pointing in speech and gesture. PhD Thesis, Radboud University, Nijmegen.

    Supplementary material

    Full Text (via Radboud)
  • Peeters, D., Hagoort, P., & Ozyurek, A. (2015). Electrophysiological evidence for the role of shared space in online comprehension of spatial demonstratives. Cognition, 136, 64-84. doi:10.1016/j.cognition.2014.10.010.

    Abstract

    A fundamental property of language is that it can be used to refer to entities in the extra-linguistic physical context of a conversation in order to establish a joint focus of attention on a referent. Typological and psycholinguistic work across a wide range of languages has put forward at least two different theoretical views on demonstrative reference. Here we contrasted and tested these two accounts by investigating the electrophysiological brain activity underlying the construction of indexical meaning in comprehension. In two EEG experiments, participants watched pictures of a speaker who referred to one of two objects using speech and an index-finger pointing gesture. In contrast with separately collected native speakers’ linguistic intuitions, N400 effects showed a preference for a proximal demonstrative when speaker and addressee were in a face-to-face orientation and all possible referents were located in the shared space between them, irrespective of the physical proximity of the referent to the speaker. These findings reject egocentric proximity-based accounts of demonstrative reference, support a sociocentric approach to deixis, suggest that interlocutors construe a shared space during conversation, and imply that the psychological proximity of a referent may be more important than its physical proximity.
  • Piai, V., Roelofs, A., & Roete, I. (2015). Semantic interference in picture naming during dual-task performance does not vary with reading ability. Quarterly Journal of Experimental Psychology, 68(9), 1758-68. doi:10.1080/17470218.2014.985689.

    Abstract

    Previous dual-task studies examining the locus of semantic interference of distractor words in picture naming have obtained diverging results. In these studies, participants manually responded to tones and named pictures while ignoring distractor words (picture-word interference, PWI) with varying stimulus onset asynchrony (SOA) between tone and PWI stimulus. Whereas some studies observed no semantic interference at short SOAs, other studies observed effects of similar magnitude at short and long SOAs. The absence of semantic interference in some studies may perhaps be due to better reading skill of participants in these than in the other studies. According to such a reading-ability account, participants' reading skill should be predictive of the magnitude of their interference effect at short SOAs. To test this account, we conducted a dual-task study with tone discrimination and PWI tasks and measured participants' reading ability. The semantic interference effect was of similar magnitude at both short and long SOAs. Participants' reading ability was predictive of their naming speed but not of their semantic interference effect, contrary to the reading ability account. We conclude that the magnitude of semantic interference in picture naming during dual-task performance does not depend on reading skill.
  • Rivero, O., Selten, M. M., Sich, S., Popp, S., Bacmeister, L., Amendola, E., Negwer, M., Schubert, D., Proft, F., Kiser, D., Schmitt, A. G., Gross, C., Kolk, S. M., Strekalova, T., van den Hove, D., Resink, T. J., Nadif Kasri, N., & Lesch, K. P. (2015). Cadherin-13, a risk gene for ADHD and comorbid disorders, impacts GABAergic function in hippocampus and cognition. Translational Psychiatry, 5: e655. doi:10.1038/tp.2015.152.

    Abstract

    Cadherin-13 (CDH13), a unique glycosylphosphatidylinositol-anchored member of the cadherin family of cell adhesion molecules, has been identified as a risk gene for attention-deficit/hyperactivity disorder (ADHD) and various comorbid neurodevelopmental and psychiatric conditions, including depression, substance abuse, autism spectrum disorder and violent behavior, while the mechanism whereby CDH13 dysfunction influences pathogenesis of neuropsychiatric disorders remains elusive. Here we explored the potential role of CDH13 in the inhibitory modulation of brain activity by investigating synaptic function of GABAergic interneurons. Cellular and subcellular distribution of CDH13 was analyzed in the murine hippocampus and a mouse model with a targeted inactivation of Cdh13 was generated to evaluate how CDH13 modulates synaptic activity of hippocampal interneurons and behavioral domains related to psychopathologic (endo)phenotypes. We show that CDH13 expression in the cornu ammonis (CA) region of the hippocampus is confined to distinct classes of interneurons. Specifically, CDH13 is expressed by numerous parvalbumin and somatostatin-expressing interneurons located in the stratum oriens, where it localizes to both the soma and the presynaptic compartment. Cdh13−/− mice show an increase in basal inhibitory, but not excitatory, synaptic transmission in CA1 pyramidal neurons. Associated with these alterations in hippocampal function, Cdh13−/− mice display deficits in learning and memory. Taken together, our results indicate that CDH13 is a negative regulator of inhibitory synapses in the hippocampus, and provide insights into how CDH13 dysfunction may contribute to the excitatory/inhibitory imbalance observed in neurodevelopmental disorders, such as ADHD and autism.
  • Rojas-Berscia, L. M. (2015). Mayna, the lost Kawapanan language. LIAMES, 15, 393-407. Retrieved from http://revistas.iel.unicamp.br/index.php/liames/article/view/4549.

    Abstract

    The origins of the Mayna language, formerly spoken in northwest Peruvian Amazonia, remain a mystery for most scholars. Several discussions on it took place at the end of the 19th century and the beginning of the 20th; however, none arrived at a consensus. Apart from an article by Taylor & Descola (1981) suggesting a relationship with the Jivaroan language family, little to nothing has been said about it in the second half of the 20th century and the last decades. In the present article, a summary of the principal accounts of the language and its people between the 19th and the 20th century will be given, followed by a corpus analysis in which the materials available in Mayna and Kawapanan, mainly prayers collected by Hervás (1787) and Teza (1868), will be analysed and compared for the first time in light of recent analyses in the newborn field of Kawapanan linguistics (Barraza de García 2005a,b; Valenzuela-Bismarck 2011a,b; Valenzuela 2013; Rojas-Berscia 2013, 2014; Madalengoitia-Barúa 2013; Farfán-Reto 2012), in order to test its affiliation to the Kawapanan language family, as claimed by Beuchat & Rivet (1909), and to account for its place in the dialectology of this language family.
  • Rojas-Berscia, L. M., & Ghavami Dicker, S. (2015). Teonimia en el Alto Amazonas, el caso de Kanpunama. Escritura y Pensamiento, 18(36), 117-146.
  • Rommers, J., Meyer, A. S., & Huettig, F. (2015). Verbal and nonverbal predictors of language-mediated anticipatory eye movements. Attention, Perception & Psychophysics, 77(3), 720-730. doi:10.3758/s13414-015-0873-x.

    Abstract

    During language comprehension, listeners often anticipate upcoming information. This can draw listeners’ overt attention to visually presented objects before the objects are referred to. We investigated to what extent the anticipatory mechanisms involved in such language-mediated attention rely on specific verbal factors and on processes shared with other domains of cognition. Participants listened to sentences ending in a highly predictable word (e.g., “In 1969 Neil Armstrong was the first man to set foot on the moon”) while viewing displays containing three unrelated distractor objects and a critical object, which was either the target object (e.g., a moon), or an object with a similar shape (e.g., a tomato), or an unrelated control object (e.g., rice). Language-mediated anticipatory eye movements to targets and shape competitors were observed. Importantly, looks to the shape competitor were systematically related to individual differences in anticipatory attention, as indexed by a spatial cueing task: Participants whose responses were most strongly facilitated by predictive arrow cues also showed the strongest effects of predictive language input on their eye movements. By contrast, looks to the target were related to individual differences in vocabulary size and verbal fluency. The results suggest that verbal and nonverbal factors contribute to different types of language-mediated eye movement. The findings are consistent with multiple-mechanism accounts of predictive language processing.
  • Rossi, G. (2015). Other-initiated repair in Italian. Open Linguistics, 1(1), 256-282. doi:10.1515/opli-2015-0002.

    Abstract

    This article describes the interactional patterns and linguistic structures associated with other-initiated repair, as observed in a corpus of video recorded conversation in the Italian language (Romance). The article reports findings specific to the Italian language from the comparative project that is the topic of this special issue. While giving an overview of all the major practices for other-initiation of repair found in this language, special attention is given to (i) the functional distinctions between different open strategies (interjection, question words, formulaic), and (ii) the role of intonation in discriminating alternative restricted strategies, with a focus on different contour types used to produce repetitions.
  • Rossi, G. (2015). The request system in Italian interaction. PhD Thesis, Radboud University, Nijmegen.

    Abstract

    People across the world make requests every day. We constantly rely on others to get by in the small and big practicalities of everyday life, be it getting the salt, moving a sofa, or cooking a meal. It has long been noticed that when we ask others for help we use a wide range of forms drawing on various resources afforded by our language and body. To get another to pass the salt, for example, we may say ‘Pass the salt’, or ask ‘Can you pass me the salt?’, or simply point to the salt. What do different forms of requesting give us? The short answer is that they allow us to manage different social relations. But what kind of relations? While prior research has mostly emphasised the role of long-term asymmetries like people’s social distance and relative power, this thesis puts at centre stage social relations and dimensions emerging in the moment-by-moment flow of everyday interaction. These include how easy or hard the action requested is to anticipate for the requestee, whether the action requested contributes to a joint project or serves an individual one, whether the requestee may be unwilling to do it, and how obvious or equivocal it is that a certain person or another should be involved in the action. The study focuses on requests made in everyday informal interactions among speakers of Italian. It involves over 500 instances of requests sampled from a diverse corpus of video recordings, and draws on methods from conversation analysis, linguistics and multimodal analysis. A qualitative analysis of the data is supported by quantitative measures of the distribution of linguistic and interactional features, and by the use of inferential statistics to test the generalizability of some of the patterns observed. The thesis aims to contribute to our understanding of both language and social interaction by showing that forms of requesting constitute a system, organised by a set of recurrent social-interactional concerns.

    Supplementary material

    Full Text (via Radboud)
  • Rossi, G. (2015). Responding to pre-requests: The organization of hai x ‘do you have x’ sequences in Italian. Journal of Pragmatics, 82, 5-22. doi:10.1016/j.pragma.2015.03.008.

    Abstract

    Among the strategies used by people to request others to do things, there is a particular family defined as pre-requests. The typical function of a pre-request is to check whether some precondition obtains for a request to be successfully made. A form like the Italian interrogative hai x ‘do you have x’, for example, is used to ask if an object is available — a requirement for the object to be transferred or manipulated. But what does it mean exactly to make a pre-request? What difference does it make compared to issuing a request proper? In this article, I address these questions by examining the use of hai x ‘do you have x’ interrogatives in a corpus of informal Italian interaction. Drawing on methods from conversation analysis and linguistics, I show that the status of hai x as a pre-request is reflected in particular properties in the domains of preference and sequence organisation, specifically in the design of blocking responses to the pre-request, and in the use of go-ahead responses, which lead to the expansion of the request sequence. This study contributes to current research on requesting as well as on sequence organisation by demonstrating the response affordances of pre-requests and by furthering our understanding of the processes of sequence expansion.
  • San Roque, L., Kendrick, K. H., Norcliffe, E., Brown, P., Defina, R., Dingemanse, M., Dirksmeyer, T., Enfield, N. J., Floyd, S., Hammond, J., Rossi, G., Tufvesson, S., Van Putten, S., & Majid, A. (2015). Vision verbs dominate in conversation across cultures, but the ranking of non-visual verbs varies. Cognitive Linguistics, 26, 31-60. doi:10.1515/cog-2014-0089.

    Abstract

    To what extent does perceptual language reflect universals of experience and cognition, and to what extent is it shaped by particular cultural preoccupations? This paper investigates the universality~relativity of perceptual language by examining the use of basic perception terms in spontaneous conversation across 13 diverse languages and cultures. We analyze the frequency of perception words to test two universalist hypotheses: that sight is always a dominant sense, and that the relative ranking of the senses will be the same across different cultures. We find that references to sight outstrip references to the other senses, suggesting a pan-human preoccupation with visual phenomena. However, the relative frequency of the other senses was found to vary cross-linguistically. Cultural relativity was conspicuous as exemplified by the high ranking of smell in Semai, an Aslian language. Together these results suggest a place for both universal constraints and cultural shaping of the language of perception.
  • Schepens, J. (2015). Bridging linguistic gaps: The effects of linguistic distance on adult learnability of Dutch as an additional language. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Schubotz, L., Holler, J., & Ozyurek, A. (2015). Age-related differences in multi-modal audience design: Young, but not old speakers, adapt speech and gestures to their addressee's knowledge. In G. Ferré, & M. Tutton (Eds.), Proceedings of the 4th GESPIN - Gesture & Speech in Interaction Conference (pp. 211-216). Nantes: Université of Nantes.

    Abstract

    Speakers can adapt their speech and co-speech gestures for addressees. Here, we investigate whether this ability is modulated by age. Younger and older adults participated in a comic narration task in which one participant (the speaker) narrated six short comic stories to another participant (the addressee). One half of each story was known to both participants, the other half only to the speaker. Younger but not older speakers used more words and gestures when narrating novel story content as opposed to known content. We discuss cognitive and pragmatic explanations of these findings and relate them to theories of gesture production.
  • Schubotz, L., Oostdijk, N., & Ernestus, M. (2015). Y’know vs. you know: What phonetic reduction can tell us about pragmatic function. In S. Lestrade, P. De Swart, & L. Hogeweg (Eds.), Addenda: Artikelen voor Ad Foolen (pp. 361-380). Nijmegen: Radboud University.
  • Schuerman, W. L., Nagarajan, S., & Houde, J. (2015). Changes in consonant perception driven by adaptation of vowel production to altered auditory feedback. In M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). London: International Phonetic Association.

    Abstract

    Adaptation to altered auditory feedback has been shown to induce subsequent shifts in perception. However, it is uncertain whether these perceptual changes may generalize to other speech sounds. In this experiment, we tested whether exposing the production of a vowel to altered auditory feedback affects perceptual categorization of a consonant distinction. In two sessions, participants produced CVC words containing the vowel /i/, while intermittently categorizing stimuli drawn from a continuum between "see" and "she." In the first session feedback was unaltered, while in the second session the formants of the vowel were shifted 20% towards /u/. Adaptation to the altered vowel was found to reduce the proportion of perceived /ʃ/ stimuli. We suggest that this reflects an alteration to the sensorimotor mapping that is shared between vowels and consonants.
  • Schuerman, W. L., Meyer, A. S., & McQueen, J. M. (2015). Do we perceive others better than ourselves? A perceptual benefit for noise-vocoded speech produced by an average speaker. PLoS One, 10(7): e0129731. doi:10.1371/journal.pone.0129731.

    Abstract

    In different tasks involving action perception, performance has been found to be facilitated when the presented stimuli were produced by the participants themselves rather than by another participant. These results suggest that the same mental representations are accessed during both production and perception. However, with regard to spoken word perception, evidence also suggests that listeners’ representations for speech reflect the input from their surrounding linguistic community rather than their own idiosyncratic productions. Furthermore, speech perception is heavily influenced by indexical cues that may lead listeners to frame their interpretations of incoming speech signals with regard to speaker identity. In order to determine whether word recognition evinces similar self-advantages as found in action perception, it was necessary to eliminate indexical cues from the speech signal. We therefore asked participants to identify noise-vocoded versions of Dutch words that were based on either their own recordings or those of a statistically average speaker. The majority of participants were more accurate for the average speaker than for themselves, even after taking into account differences in intelligibility. These results suggest that the speech representations accessed during perception of noise-vocoded speech are more reflective of the input of the speech community, and hence that speech perception is not necessarily based on representations of one’s own speech.
  • Smith, A. C. (2015). Modelling multimodal language processing. PhD Thesis, Radboud University Nijmegen, Nijmegen.

    Supplementary material

    Full Text (via Radboud)
  • Sumer, B. (2015). Acquisition of spatial language by signing and speaking children: A comparison of Turkish Sign Language (TID) and Turkish. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Torreira, F., & Valtersson, E. (2015). Phonetic and visual cues to questionhood in French conversation. Phonetica, 72, 20-42. doi:10.1159/000381723.

    Abstract

    We investigate the extent to which French polar questions and continuation statements, two types of utterances with similar morphosyntactic and intonational forms but different pragmatic functions, can be distinguished in conversational data based on phonetic and visual bodily information. We show that the two utterance types can be distinguished well over chance level by automatic classification models including several phonetic and visual cues. We also show that a considerable amount of relevant phonetic and visual information is present before the last portion of the utterances, potentially assisting early speech act recognition by addressees. These findings indicate that bottom-up phonetic and visual cues may play an important role during the production and recognition of speech acts alongside top-down contextual information.
  • Tsuji, S., Mazuka, R., Cristia, A., & Fikkert, P. (2015). Even at 4 months, a labial is a good enough coronal, but not vice versa. Cognition, 134, 252-256. doi:10.1016/j.cognition.2014.10.009.

    Abstract

    Numerous studies have revealed an asymmetry tied to the perception of coronal place of articulation: participants accept a labial mispronunciation of a coronal target, but not vice versa. Whether or not this asymmetry is based on language-general properties or arises from language-specific experience has been a matter of debate. The current study suggests a bias of the first type by documenting an early, cross-linguistic asymmetry related to coronal place of articulation. Japanese and Dutch 4- and 6-month-old infants showed evidence of discrimination if they were habituated to a labial and then tested on a coronal sequence, but not vice versa. This finding has important implications for both phonological theories and infant speech perception research.

    Supplementary material

    Tsuji_etal_suppl_2014.xlsx
  • Unsworth, S., Persson, L., Prins, T., & De Bot, K. (2015). An investigation of factors affecting early foreign language learning in the Netherlands. Applied Linguistics, 36(5), 527-548. doi:10.1093/applin/amt052.
  • Van de Velde, M., Kempen, G., & Harbusch, K. (2015). Dative alternation and planning scope in spoken language: A corpus study on effects of verb bias in VO and OV clauses of Dutch. Lingua, 165, 92-108. doi:10.1016/j.lingua.2015.07.006.

    Abstract

    The syntactic structure of main and subordinate clauses is determined to a considerable extent by verb biases. For example, some English and Dutch ditransitive verbs have a preference for the prepositional object dative, whereas others are typically used with the double object dative. In this study, we compare the effect of these biases on structure selection in (S)VO and (S)OV dative clauses in the Corpus of Spoken Dutch (CGN). This comparison allowed us to make inferences about the size of the advance planning scope during spontaneous speaking: If the verb is an obligatory component of clause-level advance planning scope, as is claimed by the hypothesis of hierarchical incrementality, then biases should exert their influence on structure choices, regardless of early (VO) or late (OV) position of the verb in the clause. Conversely, if planning proceeds in a piecemeal fashion, strictly guided by lexical availability, as claimed by linear incrementality, then the verb and its associated biases can only influence structure choices in VO sentences. We tested these predictions by analyzing structure choices in the CGN, using mixed logit models. Our results support a combination of linear and hierarchical incrementality, showing a significant influence of verb bias on structure choices in VO, and a weaker (but still significant) effect in OV clauses.
  • Van de Velde, M. (2015). Incrementality and flexibility in sentence production. PhD Thesis, Radboud University Nijmegen, Nijmegen.

    Supplementary material

    Full Text (via Radboud)
  • Van Rhijn, J. R., & Vernes, S. C. (2015). Retinoic acid signaling: A new piece in the spoken language puzzle. Frontiers in Psychology, 6: 1816. doi:10.3389/fpsyg.2015.01816.

    Abstract

    Speech requires precise motor control and rapid sequencing of highly complex vocal musculature. Despite its complexity, most people produce spoken language effortlessly. This is due to activity in distributed neuronal circuitry including cortico-striato-thalamic loops that control speech-motor output. Understanding the neuro-genetic mechanisms that encode these pathways will shed light on how humans can effortlessly and innately use spoken language and could elucidate what goes wrong in speech-language disorders. FOXP2 was the first single gene identified to cause speech and language disorder. Individuals with FOXP2 mutations display a severe speech deficit that also includes receptive and expressive language impairments. The underlying neuro-molecular mechanisms controlled by FOXP2, which will give insight into our capacity for speech-motor control, are only beginning to be unraveled. Recently FOXP2 was found to regulate genes involved in retinoic acid signaling and to modify the cellular response to retinoic acid, a key regulator of brain development. Herein we explore the evidence that FOXP2 and retinoic acid signaling function in the same pathways. We present evidence at molecular, cellular and behavioral levels that suggest an interplay between FOXP2 and retinoic acid that may be important for fine motor control and speech-motor output. We propose that retinoic acid signaling is an exciting new angle from which to investigate how neurogenetic mechanisms can contribute to the (spoken) language-ready brain.
  • Verhees, M. W. F. T., Chwilla, D. J., Tromp, J., & Vissers, C. T. W. M. (2015). Contributions of emotional state and attention to the processing of syntactic agreement errors: evidence from P600. Frontiers in Psychology, 6: 388. doi:10.3389/fpsyg.2015.00388.

    Abstract

    The classic account of language is that language processing occurs in isolation from other cognitive systems, like perception, motor action, and emotion. The central theme of this paper is the relationship between a participant’s emotional state and language comprehension. Does emotional context affect how we process neutral words? Recent studies showed that processing of word meaning, traditionally conceived as an automatic process, is affected by emotional state. The influence of emotional state on syntactic processing is less clear. One study reported a mood-related P600 modulation, while another study did not observe an effect of mood on syntactic processing. The goals of this study were: first, to clarify whether, and if so how, mood affects syntactic processing; second, to shed light on the underlying mechanisms by separating possible effects of mood from those of attention on syntactic processing. Event-related potentials (ERPs) were recorded while participants read syntactically correct or incorrect sentences. Mood (happy vs. sad) was manipulated by presenting film clips. Attention was manipulated by directing attention to syntactic features vs. physical features. The mood induction was effective. Interactions between mood, attention and syntactic correctness were obtained, showing that mood and attention modulated the P600. The mood manipulation led to a reduction in P600 for sad as compared to happy mood when attention was directed at syntactic features. The attention manipulation led to a reduction in P600 when attention was directed at physical features compared to syntactic features for happy mood. From this we draw two conclusions: first, emotional state does affect syntactic processing; we propose mood-related differences in the reliance on heuristics as the underlying mechanism. Second, attention can contribute to emotion-related ERP effects in syntactic language processing. Therefore, future studies on the relation between language and emotion will have to control for effects of attention.
  • Viebahn, M., Ernestus, M., & McQueen, J. M. (2015). Syntactic predictability in the recognition of carefully and casually produced speech. Journal of Experimental Psychology: Learning, Memory, and Cognition, 41(6), 1684-1702. doi:10.1037/a0039326.

    Files private

    Request files
  • Witteman, M. J., Bardhan, N. P., Weber, A., & McQueen, J. M. (2015). Automaticity and stability of adaptation to foreign-accented speech. Language and Speech, 58(2), 168-189. doi:10.1177/0023830914528102.

    Abstract

    In three cross-modal priming experiments we asked whether adaptation to a foreign-accented speaker is automatic, and whether adaptation can be seen after a long delay between initial exposure and test. Dutch listeners were exposed to a Hebrew-accented Dutch speaker with two types of Dutch words: those that contained [ɪ] (globally accented words), and those in which the Dutch [i] was shortened to [ɪ] (specific accent marker words). Experiment 1, which served as a baseline, showed that native Dutch participants showed facilitatory priming for globally accented, but not specific accent, words. In Experiment 2, participants performed a 3.5-minute phoneme monitoring task, and were tested on their comprehension of the accented speaker 24 hours later using the same cross-modal priming task as in Experiment 1. During the phoneme monitoring task, listeners were asked to detect a consonant that was not strongly accented. In Experiment 3, the delay between exposure and test was extended to 1 week. Listeners in Experiments 2 and 3 showed facilitatory priming for both globally accented and specific accent marker words. Together, these results show that adaptation to a foreign-accented speaker can be rapid and automatic, and can be observed after a prolonged delay in testing.
  • Zhou, W. (2015). Assessing birth language memory in young adoptees. PhD Thesis, Radboud University Nijmegen, Nijmegen.

    Supplementary material

    Full Text (via Radboud)
  • Asaridou, S. S., & McQueen, J. M. (2013). Speech and music shape the listening brain: Evidence for shared domain-general mechanisms. Frontiers in Psychology, 4: 321. doi:10.3389/fpsyg.2013.00321.

    Abstract

    Are there bi-directional influences between speech perception and music perception? An answer to this question is essential for understanding the extent to which the speech and music that we hear are processed by domain-general auditory processes and/or by distinct neural auditory mechanisms. This review summarizes a large body of behavioral and neuroscientific findings which suggest that the musical experience of trained musicians does modulate speech processing, and a sparser set of data, largely on pitch processing, which suggest in addition that linguistic experience, in particular learning a tone language, modulates music processing. Although research has focused mostly on the effects of music on speech, we argue that both directions of influence need to be studied, and conclude that the picture which thus emerges is one of mutual interaction across domains. In particular, it is not simply that experience with spoken language has some effects on music perception, and vice versa, but that because of shared domain-general subcortical and cortical networks, experiences in both domains influence behavior in both domains.
  • Bergmann, C., Ten Bosch, L., Fikkert, P., & Boves, L. (2013). A computational model to investigate assumptions in the headturn preference procedure. Frontiers in Psychology, 4: 676. doi:10.3389/fpsyg.2013.00676.

    Abstract

    In this paper we use a computational model to investigate four assumptions that are tacitly present in interpreting the results of studies on infants' speech processing abilities using the Headturn Preference Procedure (HPP): (1) behavioral differences originate in different processing; (2) processing involves some form of recognition; (3) words are segmented from connected speech; and (4) differences between infants should not affect overall results. In addition, we investigate the impact of two potentially important aspects in the design and execution of the experiments: (a) the specific voices used in the two parts of HPP experiments (familiarization and test) and (b) the experimenter's criterion for what is a sufficient headturn angle. The model is designed to maximize cognitive plausibility. It takes real speech as input, and it contains a module that converts the output of internal speech processing and recognition into headturns that can yield real-time listening preference measurements. Internal processing is based on distributed episodic representations in combination with a matching procedure based on the assumption that complex episodes can be decomposed as positive weighted sums of simpler constituents. Model simulations show that the first two assumptions hold under two different definitions of recognition. However, explicit segmentation is not necessary to simulate the behaviors observed in infant studies. Differences in attention span between infants can affect the outcomes of an experiment. The same holds for the experimenter's decision criterion. The speakers used in experiments affect outcomes in complex ways that require further investigation. The paper ends with recommendations for future studies using the HPP.
  • Carrion Castillo, A., Franke, B., & Fisher, S. E. (2013). Molecular genetics of dyslexia: An overview. Dyslexia, 19(4), 214-240. doi:10.1002/dys.1464.

    Abstract

    Dyslexia is a highly heritable learning disorder with a complex underlying genetic architecture. Over the past decade, researchers have pinpointed a number of candidate genes that may contribute to dyslexia susceptibility. Here, we provide an overview of the state of the art, describing how studies have moved from mapping potential risk loci, through identification of associated gene variants, to characterization of gene function in cellular and animal model systems. Work thus far has highlighted some intriguing mechanistic pathways, such as neuronal migration, axon guidance, and ciliary biology, but it is clear that we still have much to learn about the molecular networks that are involved. We end the review by highlighting the past, present, and future contributions of the Dutch Dyslexia Programme to studies of genetic factors. In particular, we emphasize the importance of relating genetic information to intermediate neurobiological measures, as well as the value of incorporating longitudinal and developmental data into molecular designs.
  • Dolscheid, S. (2013). High pitches and thick voices: The role of language in space-pitch associations. PhD Thesis, Radboud University Nijmegen, Nijmegen.

    Supplementary material

    Full Text (via Radboud)
  • Dolscheid, S., Shayan, S., Majid, A., & Casasanto, D. (2013). The thickness of musical pitch: Psychophysical evidence for linguistic relativity. Psychological Science, 24, 613-621. doi:10.1177/0956797612457374.

    Abstract

    Do people who speak different languages think differently, even when they are not using language? To find out, we used nonlinguistic psychophysical tasks to compare mental representations of musical pitch in native speakers of Dutch and Farsi. Dutch speakers describe pitches as high (hoog) or low (laag), whereas Farsi speakers describe pitches as thin (na-zok) or thick (koloft). Differences in language were reflected in differences in performance on two pitch-reproduction tasks, even though the tasks used simple, nonlinguistic stimuli and responses. To test whether experience using language influences mental representations of pitch, we trained native Dutch speakers to describe pitch in terms of thickness, as Farsi speakers do. After the training, Dutch speakers’ performance on a nonlinguistic psychophysical task resembled the performance of native Farsi speakers. People who use different linguistic space-pitch metaphors also think about pitch differently. Language can play a causal role in shaping nonlinguistic representations of musical pitch.

    Supplementary material

    DS_10.1177_0956797612457374.pdf
  • Dolscheid, S., Graver, C., & Casasanto, D. (2013). Spatial congruity effects reveal metaphors, not markedness. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 2213-2218). Austin,TX: Cognitive Science Society. Retrieved from http://mindmodeling.org/cogsci2013/papers/0405/index.html.

    Abstract

    Spatial congruity effects have often been interpreted as evidence for metaphorical thinking, but an alternative markedness-based account challenges this view. In two experiments, we directly compared metaphor and markedness explanations for spatial congruity effects, using musical pitch as a testbed. English speakers who talk about pitch in terms of spatial height were tested in speeded space-pitch compatibility tasks. To determine whether space-pitch congruency effects could be elicited by any marked spatial continuum, participants were asked to classify high- and low-frequency pitches as 'high' and 'low' or as 'front' and 'back' (both pairs of terms constitute cases of marked continuums). We found congruency effects in high/low conditions but not in front/back conditions, indicating that markedness is not sufficient to account for congruity effects (Experiment 1). A second experiment showed that congruency effects were specific to spatial words that cued a vertical schema (tall/short), and that congruity effects were not an artifact of polysemy (e.g., 'high' referring both to space and pitch). Together, these results suggest that congruency effects reveal metaphorical uses of spatial schemas, not markedness effects.
  • Enfield, N. J., Dingemanse, M., Baranova, J., Blythe, J., Brown, P., Dirksmeyer, T., Drew, P., Floyd, S., Gipper, S., Gisladottir, R. S., Hoymann, G., Kendrick, K. H., Levinson, S. C., Magyari, L., Manrique, E., Rossi, G., San Roque, L., & Torreira, F. (2013). Huh? What? – A first survey in 21 languages. In M. Hayashi, G. Raymond, & J. Sidnell (Eds.), Conversational repair and human understanding (pp. 343-380). New York: Cambridge University Press.

    Abstract

    Introduction A comparison of conversation in twenty-one languages from around the world reveals commonalities and differences in the way that people do open-class other-initiation of repair (Schegloff, Jefferson, and Sacks, 1977; Drew, 1997). We find that speakers of all of the spoken languages in the sample make use of a primary interjection strategy (in English it is Huh?), where the phonetic form of the interjection is strikingly similar across the languages: a monosyllable featuring an open non-back vowel [a, æ, ə, ʌ], often nasalized, usually with rising intonation and sometimes an [h-] onset. We also find that most of the languages have another strategy for open-class other-initiation of repair, namely the use of a question word (usually “what”). Here we find significantly more variation across the languages. The phonetic form of the question word involved is completely different from language to language: e.g., English [wɑt] versus Cha'palaa [ti] versus Duna [aki]. Furthermore, the grammatical structure in which the repair-initiating question word can or must be expressed varies within and across languages. In this chapter we present data on these two strategies – primary interjections like Huh? and question words like What? – with discussion of possible reasons for the similarities and differences across the languages. We explore some implications for the notion of repair as a system, in the context of research on the typology of language use. The general outline of this chapter is as follows. We first discuss repair as a system across languages and then introduce the focus of the chapter: open-class other-initiation of repair. A discussion of the main findings follows, where we identify two alternative strategies in the data: an interjection strategy (Huh?) and a question word strategy (What?). Formal features and possible motivations are discussed for the interjection strategy and the question word strategy in order. 
A final section discusses bodily behavior including posture, eyebrow movements and eye gaze, both in spoken languages and in a sign language.
  • Gialluisi, A., Incollu, S., Pippucci, T., Lepori, M. B., Zappu, A., Loudianos, G., & Romeo, G. (2013). The homozygosity index (HI) approach reveals high allele frequency for Wilson disease in the Sardinian population. European Journal of Human Genetics, 21, 1308-1311. doi:10.1038/ejhg.2013.43.

    Abstract

    Wilson disease (WD) is an autosomal recessive disorder resulting in pathological progressive copper accumulation in liver and other tissues. The worldwide prevalence (P) is about 30/million, while in Sardinia it is in the order of 1/10 000. However, all of these estimates are likely to suffer from an underdiagnosis bias. Indeed, a recent molecular neonatal screening in Sardinia reported a WD prevalence of 1:2707. In this study, we used a new approach that makes it possible to estimate the allelic frequency (q) of an autosomal recessive disorder if one knows the proportion between homozygous and compound heterozygous patients (the homozygosity index or HI) and the inbreeding coefficient (F) in a sample of affected individuals. We applied the method to a set of 178 Sardinian individuals (3 of whom born to consanguineous parents), each with a clinical and molecular diagnosis of WD. Taking into account the geographical provenance of the parents of every patient within Sardinia (to make F computation more precise), we obtained q=0.0191 (F=7.8 × 10⁻⁴, HI=0.476) and a corresponding prevalence P=1:2732. This result confirms that the prevalence of WD is largely underestimated in Sardinia. On the other hand, the general reliability and applicability of the HI approach to other autosomal recessive disorders is confirmed, especially if one is interested in the genetic epidemiology of populations with high frequency of consanguineous marriages.
  • Gussenhoven, C., & Zhou, W. (2013). Revisiting pitch slope and height effects on perceived duration. In Proceedings of INTERSPEECH 2013: 14th Annual Conference of the International Speech Communication Association (pp. 1365-1369).

    Abstract

    The shape of pitch contours has been shown to have an effect on the perceived duration of vowels. For instance, vowels with high level pitch and vowels with falling contours sound longer than vowels with low level pitch. Depending on whether the comparison is between level pitches or between level and dynamic contours, these findings have been interpreted in two ways. For inter-level comparisons, where the duration results are the reverse of production results, a hypercorrection strategy in production has been proposed [1]. By contrast, for comparisons between level pitches and dynamic contours, the longer production data for dynamic contours have been held responsible. We report an experiment with Dutch and Chinese listeners which aimed to show that production data and perception data are each other’s opposites for high, low, falling and rising contours. We explain the results, which are consistent with earlier findings, in terms of the compensatory listening strategy of [2], arguing that the perception effects are due to a perceptual compensation of articulatory strategies and constraints, rather than that differences in production compensate for psycho-acoustic perception effects.
  • Hanique, I., Aalders, E., & Ernestus, M. (2013). How robust are exemplar effects in word comprehension? The Mental Lexicon, 8, 269-294. doi:10.1075/ml.8.3.01han.

    Abstract

    This paper studies the robustness of exemplar effects in word comprehension by means of four long-term priming experiments with lexical decision tasks in Dutch. A prime and target represented the same word type and were presented with the same or different degree of reduction. In Experiment 1, participants heard only a small number of trials, a large proportion of repeated words, and stimuli produced by only one speaker. They recognized targets more quickly if these represented the same degree of reduction as their primes, which forms additional evidence for the exemplar effects reported in the literature. Similar effects were found for two speakers who differ in their pronunciations. In Experiment 2, with a smaller proportion of repeated words and more trials between prime and target, participants recognized targets preceded by primes with the same or a different degree of reduction equally quickly. Also, in Experiments 3 and 4, in which listeners were not exposed to one but two types of pronunciation variation (reduction degree and speaker voice), no exemplar effects arose. We conclude that the role of exemplars in speech comprehension during natural conversations, which typically involve several speakers and few repeated content words, may be smaller than previously assumed.
  • Holler, J., Schubotz, L., Kelly, S., Schuetze, M., Hagoort, P., & Ozyurek, A. (2013). Here's not looking at you, kid! Unaddressed recipients benefit from co-speech gestures when speech processing suffers. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 2560-2565). Austin, TX: Cognitive Science Society. Retrieved from http://mindmodeling.org/cogsci2013/papers/0463/index.html.

    Abstract

    In human face-to-face communication, language comprehension is a multi-modal, situated activity. However, little is known about how we combine information from these different modalities, and how perceived communicative intentions, often signaled through visual signals, such as eye gaze, may influence this processing. We address this question by simulating a triadic communication context in which a speaker alternated her gaze between two different recipients. Participants thus viewed speech-only or speech+gesture object-related utterances when being addressed (direct gaze) or unaddressed (averted gaze). Two object images followed each message and participants’ task was to choose the object that matched the message. Unaddressed recipients responded significantly slower than addressees for speech-only utterances. However, perceiving the same speech accompanied by gestures sped them up to a level identical to that of addressees. That is, when speech processing suffers due to not being addressed, gesture processing remains intact and enhances the comprehension of a speaker’s message.
  • Johnson, E. K., Lahey, M., Ernestus, M., & Cutler, A. (2013). A multimodal corpus of speech to infant and adult listeners. Journal of the Acoustical Society of America, 134, EL534-EL540. doi:10.1121/1.4828977.

    Abstract

    An audio and video corpus of speech addressed to 28 11-month-olds is described. The corpus allows comparisons between adult speech directed towards infants, familiar adults and unfamiliar adult addressees, as well as of caregivers’ word teaching strategies across word classes. Summary data show that infant-directed speech differed more from speech to unfamiliar than familiar adults; that word teaching strategies for nominals versus verbs and adjectives differed; that mothers mostly addressed infants with multi-word utterances; and that infants’ vocabulary size was unrelated to speech rate, but correlated positively with predominance of continuous caregiver speech (not of isolated words) in the input.
  • Kupisch, T., Akpinar, D., & Stoehr, A. (2013). Gender assignment and gender agreement in adult bilinguals and second language learners of French. Linguistic Approaches to Bilingualism, 3, 150-179. doi:10.1075/lab.3.2.02kup.
  • Mulder, K., Schreuder, R., & Dijkstra, T. (2013). Morphological family size effects in L1 and L2 processing: An electrophysiological study. Language and Cognitive Processes, 27, 1004-1035. doi:10.1080/01690965.2012.733013.

    Abstract

    The present study examined Morphological Family Size effects in first and second language processing. Items with a high or low Dutch (L1) Family Size were contrasted in four experiments involving Dutch–English bilinguals. In two experiments, reaction times (RTs) were collected in English (L2) and Dutch (L1) lexical decision tasks; in two other experiments, an L1 and L2 go/no-go lexical decision task were performed while Event-Related Potentials (ERPs) were recorded. Two questions were addressed. First, is the ERP signal sensitive to the morphological productivity of words? Second, does nontarget language activation in L2 processing spread beyond the item itself, to the morphological family of the activated nontarget word? The two behavioural experiments both showed a facilitatory effect of Dutch Family Size, indicating that the morphological family in the L1 is activated regardless of language context. In the two ERP experiments, Family Size effects were found to modulate the N400 component. Less negative waveforms were observed for words with a high L1 Family Size compared to words with a low L1 Family Size in the N400 time window, in both the L1 and L2 task. In addition, these Family Size effects persisted in later time windows. The data are discussed in light of the Morphological Family Resonance Model (MFRM) of morphological processing and the BIA+ model.
  • Mulder, K. (2013). Family and neighbourhood relations in the mental lexicon: A cross-language perspective. PhD Thesis, Radboud University Nijmegen, Nijmegen.

    Abstract

    We read and hear thousands of words every day, seemingly without any effort. Yet all the while a complex mental process unfolds in our brain, in which numerous words beyond the one actually presented also become active. This happens especially when those other words resemble the presented word in spelling, pronunciation, or meaning. This similarity-driven activation even extends to other languages: similar words become active there too. Where are the limits of this activation process? When processing the English word 'steam', do you also activate the Dutch word 'stram' (a so-called 'neighbour')? And does 'clock' activate both 'clockwork' and the Dutch 'klokhuis' (two morphological family members from different languages)? Kimberley Mulder investigated how such relations influence the reading process of Dutch-English bilinguals. In several experimental studies she found that bilinguals activate morphological family members and orthographic neighbours not only from the language they are currently reading in, but also from the other language they know. Reading a word is thus by no means limited to what you actually see: it activates an entire network of words in your brain.
  • Peeters, D., Chu, M., Holler, J., Ozyurek, A., & Hagoort, P. (2013). Getting to the point: The influence of communicative intent on the kinematics of pointing gestures. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 1127-1132). Austin, TX: Cognitive Science Society.

    Abstract

    In everyday communication, people not only use speech but also hand gestures to convey information. One intriguing question in gesture research has been why gestures take the specific form they do. Previous research has identified the speaker-gesturer’s communicative intent as one factor shaping the form of iconic gestures. Here we investigate whether communicative intent also shapes the form of pointing gestures. In an experimental setting, twenty-four participants produced pointing gestures identifying a referent for an addressee. The communicative intent of the speaker-gesturer was manipulated by varying the informativeness of the pointing gesture. A second independent variable was the presence or absence of concurrent speech. As a function of their communicative intent and irrespective of the presence of speech, participants varied the durations of the stroke and the post-stroke hold-phase of their gesture. These findings add to our understanding of how the communicative context influences the form that a gesture takes.
  • Peeters, D., Dijkstra, T., & Grainger, J. (2013). The representation and processing of identical cognates by late bilinguals: RT and ERP effects. Journal of Memory and Language, 68, 315-332. doi:10.1016/j.jml.2012.12.003.

    Abstract

    Across the languages of a bilingual, translation equivalents can have the same orthographic form and shared meaning (e.g., TABLE in French and English). How such words, called orthographically identical cognates, are processed and represented in the bilingual brain is not well understood. In the present study, late French–English bilinguals processed such identical cognates and control words in an English lexical decision task. Both behavioral and electrophysiological data were collected. Reaction times to identical cognates were shorter than for non-cognate controls and depended on both English and French frequency. Cognates with a low English frequency showed a larger cognate advantage than those with a high English frequency. In addition, N400 amplitude was found to be sensitive to cognate status and both the English and French frequency of the cognate words. Theoretical consequences for the processing and representation of identical cognates are discussed.
  • Piai, V., Roelofs, A., Acheson, D. J., & Takashima, A. (2013). Attention for speaking: Neural substrates of general and specific mechanisms for monitoring and control. Frontiers in Human Neuroscience, 7: 832. doi:10.3389/fnhum.2013.00832.

    Abstract

    Accumulating evidence suggests that some degree of attentional control is required to regulate and monitor processes underlying speaking. Although progress has been made in delineating the neural substrates of the core language processes involved in speaking, substrates associated with regulatory and monitoring processes have remained relatively underspecified. We report the results of an fMRI study examining the neural substrates related to performance in three attention-demanding tasks varying in the amount of linguistic processing: vocal picture naming while ignoring distractors (picture-word interference, PWI); vocal color naming while ignoring distractors (Stroop); and manual object discrimination while ignoring spatial position (Simon task). All three tasks had congruent and incongruent stimuli, while PWI and Stroop also had neutral stimuli. Analyses focusing on common activation across tasks identified a portion of the dorsal anterior cingulate cortex (ACC) that was active in incongruent trials for all three tasks, suggesting that this region subserves a domain-general attentional control function. In the language tasks, this area showed increased activity for incongruent relative to congruent stimuli, consistent with the involvement of domain-general mechanisms of attentional control in word production. The two language tasks also showed activity in anterior-superior temporal gyrus (STG). Activity increased for neutral PWI stimuli (picture and word did not share the same semantic category) relative to incongruent (categorically related) and congruent stimuli. This finding is consistent with the involvement of language-specific areas in word production, possibly related to retrieval of lexical-semantic information from memory. The current results thus suggest that in addition to engaging language-specific areas for core linguistic processes, speaking also engages the ACC, a region that is likely implementing domain-general attentional control.
  • Piai, V., Roelofs, A., Jensen, O., Schoffelen, J.-M., & Bonnefond, M. (2013). Distinct patterns of brain activity characterize lexical activation and competition in speech production [Abstract]. Journal of Cognitive Neuroscience, 25 Suppl., 106.

    Abstract

    A fundamental ability of speakers is to quickly retrieve words from long-term memory. According to a prominent theory, concepts activate multiple associated words, which enter into competition for selection. Previous electrophysiological studies have provided evidence for the activation of multiple alternative words, but did not identify brain responses reflecting competition. We report a magnetoencephalography study examining the timing and neural substrates of lexical activation and competition. The degree of activation of competing words was manipulated by presenting pictures (e.g., dog) simultaneously with distractor words. The distractors were semantically related to the picture name (cat), unrelated (pin), or identical (dog). Semantic distractors are stronger competitors to the picture name, because they receive additional activation from the picture, whereas unrelated distractors do not. Picture naming times were longer with semantic than with unrelated and identical distractors. The patterns of phase-locked and non-phase-locked activity were distinct but temporally overlapping. Phase-locked activity in left middle temporal gyrus, peaking at 400 ms, was larger on unrelated than semantic and identical trials, suggesting differential effort in processing the alternative words activated by the picture-word stimuli. Non-phase-locked activity in the 4-10 Hz range between 400-650 ms in left superior frontal gyrus was larger on semantic than unrelated and identical trials, suggesting different degrees of effort in resolving the competition among the alternative words, as reflected in the naming times. These findings characterize distinct patterns of brain activity associated with lexical activation and competition respectively, and their temporal relation, supporting the theory that words are selected by competition.
  • Piai, V., & Roelofs, A. (2013). Working memory capacity and dual-task interference in picture naming. Acta Psychologica, 142, 332-342. doi:10.1016/j.actpsy.2013.01.006.
  • Piai, V., Meyer, L., Schreuder, R., & Bastiaansen, M. C. M. (2013). Sit down and read on: Working memory and long-term memory in particle-verb processing. Brain and Language, 127(2), 296-306. doi:10.1016/j.bandl.2013.09.015.

    Abstract

    Particle verbs (e.g., look up) are lexical items for which particle and verb share a single lexical entry. Using event-related brain potentials, we examined working memory and long-term memory involvement in particle-verb processing. Dutch participants read sentences with head verbs that allow zero, two, or more than five particles to occur downstream. Additionally, sentences were presented for which the encountered particle was semantically plausible, semantically implausible, or forming a non-existing particle verb. An anterior negativity was observed at the verbs that potentially allow for a particle downstream relative to verbs that do not, possibly indexing storage of the verb until the dependency with its particle can be closed. Moreover, a graded N400 was found at the particle (smallest amplitude for plausible particles and largest for particles forming non-existing particle verbs), suggesting that lexical access to a shared lexical entry occurred at two separate time points.
  • Poellmann, K. (2013). The many ways listeners adapt to reductions in casual speech. PhD Thesis, Radboud University Nijmegen, Nijmegen.

    Supplementary material

    Full Text (via Radboud)
  • Roelofs, A., & Piai, V. (2013). Associative facilitation in the Stroop task: Comment on Mahon et al. Cortex, 49, 1767-1769. doi:10.1016/j.cortex.2013.03.001.

    Abstract

    First paragraph: A fundamental issue in psycholinguistics concerns how speakers retrieve intended words from long-term memory. According to a selection by competition account (e.g., Levelt et al., 1999), conceptually driven word retrieval involves the activation of a set of candidate words and a competitive selection of the intended word from this set.
  • Roelofs, A., Dijkstra, T., & Gerakaki, S. (2013). Modeling of word translation: Activation flow from concepts to lexical items. Bilingualism: Language and Cognition, 16, 343-353. doi:10.1017/S1366728912000612.

    Abstract

    Whereas most theoretical and computational models assume a continuous flow of activation from concepts to lexical items in spoken word production, one prominent model assumes that the mapping of concepts onto words happens in a discrete fashion (Bloem & La Heij, 2003). Semantic facilitation of context pictures on word translation has been taken to support the discrete-flow model. Here, we report results of computer simulations with the continuous-flow WEAVER++ model (Roelofs, 1992, 2006) demonstrating that the empirical observation taken to be in favor of discrete models is, in fact, only consistent with those models and equally compatible with more continuous models of word production by monolingual and bilingual speakers. Continuous models are specifically and independently supported by other empirical evidence on the effect of context pictures on native word production.
  • Roelofs, A., Piai, V., & Schriefers, H. (2013). Context effects and selective attention in picture naming and word reading: Competition versus response exclusion. Language and Cognitive Processes, 28, 655-671. doi:10.1080/01690965.2011.615663.

    Abstract

    For several decades, context effects in picture naming and word reading have been extensively investigated. However, researchers have found no agreement on the explanation of the effects. Whereas it has long been assumed that several types of effect reflect competition in word selection, recently it has been argued that these effects reflect the exclusion of articulatory responses from an output buffer. Here, we first critically evaluate the findings on context effects in picture naming that have been taken as evidence against the competition account, and we argue that the findings are, in fact, compatible with the competition account. Moreover, some of the findings appear to challenge rather than support the response exclusion account. Next, we compare the response exclusion and competition accounts with respect to their ability to explain data on word reading. It appears that response exclusion does not account well for context effects on word reading times, whereas computer simulations reveal that a competition model like WEAVER++ accounts for the findings.

