Publications

  • Levelt, W. J. M. (1991). Die konnektionistische Mode. Sprache und Kognition, 10(2), 61-72.
  • Levelt, W. J. M. (1968). R. C. Oldfield and J. C. Marshall (eds.), Language: Selected readings [Book review]. Nederlands tijdschrift voor de psychologie, 23, 474.
  • Levelt, W. J. M. (1968). Hans Hörmann, Psychologie der Sprache [Book review]. Lingua, 20, 93-97. doi:10.1016/0024-3841(68)90133-2.
  • Levelt, W. J. M. (2018). Is language natural to man? Some historical considerations. Current Opinion in Behavioral Sciences, 21, 127-131. doi:10.1016/j.cobeha.2018.04.003.

    Abstract

    Since the Enlightenment period, natural theories of speech and language evolution have flourished in the language sciences. Four recurring core issues are highlighted in this paper: First, is language natural to man, or just an invention? Second, is language a specific human ability (a ‘language instinct’), or does it arise from general cognitive capacities we share with other animals? Third, has the evolution of language been a gradual process, or did it arise rather suddenly, due to some ‘evolutionary twist’? Last, is the child's language acquisition an appropriate model for language evolution?
  • Levelt, W. J. M., Schriefers, H., Vorberg, D., Meyer, A. S., Pechmann, T., & Havinga, J. (1991). Normal and deviant lexical processing: Reply to Dell and O'Seaghdha. Psychological Review, 98(4), 615-618. doi:10.1037/0033-295X.98.4.615.

    Abstract

    In their comment, Dell and O'Seaghdha (1991) attributed any effect on phonological probes for semantic alternatives to the activation of these probes in the lexical network. We argue that this interpretation is false and, in addition, that the model still cannot account for our data. Furthermore, unlike Dell and O'Seaghdha, we attribute semantic rebound to the lemma level, where it is so substantial that it should have shown up in our data. Finally, we question the function of feedback in a lexical network (other than eliciting speech errors) and discuss Dell's (1988) notion of a unified production-comprehension system.
  • Levelt, W. J. M. (1968). Om der wille van het taalpsychologisch experiment. Forum der Letteren, 21, 927-928.
  • Levelt, W. J. M. (1968). R. Quirk and J. Svartvik (eds.), Investigating linguistic acceptability [Book review]. Nederlands tijdschrift voor de psychologie, 32(6), 692.
  • Levelt, W. J. M., Schreuder, R., & Hoenkamp, E. (1976). Struktur und Gebrauch von Bewegungsverben. Zeitschrift für Literaturwissenschaft und Linguistik, 6(23/24), 131-152.
  • Levelt, W. J. M., & Bonarius, M. (1968). Suffixes as deep structure clues. Heymans Bulletins Psychologische Instituten RU Groningen, HB-68-22EX.
  • Levelt, W. J. M., Schriefers, H., Vorberg, D., Meyer, A. S., Pechmann, T., & Havinga, J. (1991). The time course of lexical access in speech production: A study of picture naming. Psychological Review, 98(1), 122-142. doi:10.1037/0033-295X.98.1.122.
  • Levinson, S. C. (2012). Authorship: Include all institutes in publishing index [Correspondence]. Nature, 485, 582. doi:10.1038/485582c.
  • Levinson, S. C., & Senft, G. (1991). Forschungsgruppe für Kognitive Anthropologie - Eine neue Forschungsgruppe in der Max-Planck-Gesellschaft. Linguistische Berichte, 133, 244-246.
  • Levinson, S. C. (2012). Kinship and human thought. Science, 336(6084), 988-989. doi:10.1126/science.1222691.

    Abstract

    Language and communication are central to shaping concepts such as kinship categories.
  • Levinson, S. C., & Senft, G. (1991). Research group for cognitive anthropology - A new research group of the Max Planck Society. Cognitive Linguistics, 2, 311-312.
  • Levinson, S. C. (2018). Spatial cognition, empathy and language evolution. Studies in Pragmatics, 20, 16-21.

    Abstract

    The evolution of language and spatial cognition may have been deeply interconnected. The argument goes as follows: 1. Human native spatial abilities are poor, but we make up for them with linguistic and cultural prostheses; 2. The explanation for the loss of native spatial abilities may be that language has cannibalized the hippocampus, the mammalian mental ‘GPS’; 3. Consequently, language may have borrowed conceptual primitives from spatial cognition (in line with ‘localism’), these being differentially combined in different languages; 4. The hippocampus may have been colonized because: (a) space was prime subject matter for communication, and (b) gesture uses space to represent space and was likely a precursor to language. To explain why the other great apes have not gone in the same direction, we need to invoke other factors, notably the ‘interaction engine’, the ensemble of interactional abilities that make cooperative communication possible and provide the matrix for the evolution and learning of language.
  • Levinson, S. C. (1991). Pragmatic reduction of the Binding Conditions revisited. Journal of Linguistics, 27, 107-161. doi:10.1017/S0022226700012433.

    Abstract

    In an earlier article (Levinson, 1987b), I raised the possibility that a Gricean theory of implicature might provide a systematic partial reduction of the Binding Conditions; the briefest of outlines is given in Section 2.1 below but the argumentation will be found in the earlier article. In this article I want, first, to show how that account might be further justified and extended, but then to introduce a radical alternative. This alternative uses the same pragmatic framework, but gives an account better adjusted to some languages. Finally, I shall attempt to show that both accounts can be combined by taking a diachronic perspective. The attraction of the combined account is that, suddenly, many facts about long-range reflexives and their associated logophoricity fall into place.
  • Levinson, S. C. (2012). The original sin of cognitive science. Topics in Cognitive Science, 4, 396-403. doi:10.1111/j.1756-8765.2012.01195.x.

    Abstract

    Classical cognitive science was launched on the premise that the architecture of human cognition is uniform and universal across the species. This premise is biologically impossible and is being actively undermined by, for example, imaging genomics. Anthropology (including archaeology, biological anthropology, linguistics, and cultural anthropology) is, in contrast, largely concerned with the diversification of human culture, language, and biology across time and space—it belongs fundamentally to the evolutionary sciences. The new cognitive sciences that will emerge from the interactions with the biological sciences will focus on variation and diversity, opening the door for rapprochement with anthropology.
  • Levinson, S. C., & Gray, R. D. (2012). Tools from evolutionary biology shed new light on the diversification of languages. Trends in Cognitive Sciences, 16(3), 167-173. doi:10.1016/j.tics.2012.01.007.

    Abstract

    Computational methods have revolutionized evolutionary biology. In this paper we explore the impact these methods are now having on our understanding of the forces that both affect the diversification of human languages and shape human cognition. We show how these methods can illuminate problems ranging from the nature of constraints on linguistic variation to the role that social processes play in determining the rate of linguistic change. Throughout the paper we argue that the cognitive sciences should move away from an idealized model of human cognition, to a more biologically realistic model where variation is central.
  • Levshina, N. (2018). Probabilistic grammar and constructional predictability: Bayesian generalized additive models of help. Glossa: A Journal of General Linguistics, 3(1): 55. doi:10.5334/gjgl.294.

    Abstract

    The present study investigates the construction with help followed by the bare or to-infinitive in seven varieties of web-based English from Australia, Ghana, Great Britain, Hong Kong, India, Jamaica and the USA. In addition to various factors known from the literature, such as register, minimization of cognitive complexity and avoidance of identity (horror aequi), it studies the effect of the predictability of the infinitive given help (and vice versa) on the language user’s choice between the constructional variants. These probabilistic constraints are tested in a series of Bayesian generalized additive mixed-effects regression models. The results demonstrate that the to-infinitive is particularly frequent in contexts with low predictability, or, in information-theoretic terms, with high information content. This tendency is interpreted as communicatively efficient behaviour, when more predictable units of discourse get less formal marking, and less predictable ones get more formal marking. However, the strength, shape and directionality of predictability effects exhibit variation across the countries, which demonstrates the importance of the cross-lectal perspective in research on communicative efficiency and other universal functional principles.
  • Lewis, A. G., Schriefers, H., Bastiaansen, M., & Schoffelen, J.-M. (2018). Assessing the utility of frequency tagging for tracking memory-based reactivation of word representations. Scientific Reports, 8: 7897. doi:10.1038/s41598-018-26091-3.

    Abstract

    Reinstatement of memory-related neural activity measured with high temporal precision potentially provides a useful index for real-time monitoring of the timing of activation of memory content during cognitive processing. The utility of such an index extends to any situation where one is interested in the (relative) timing of activation of different sources of information in memory, a paradigm case of which is tracking lexical activation during language processing. Essential for this approach is that memory reinstatement effects are robust, so that their absence (in the average) definitively indicates that no lexical activation is present. We used electroencephalography to test the robustness of a reported subsequent memory finding involving reinstatement of frequency-specific entrained oscillatory brain activity during subsequent recognition. Participants learned lists of words presented on a background flickering at either 6 or 15 Hz to entrain a steady-state brain response. Target words subsequently presented on a non-flickering background that were correctly identified as previously seen exhibited reinstatement effects at both entrainment frequencies. Reliability of these statistical inferences was however critically dependent on the approach used for multiple comparisons correction. We conclude that effects are not robust enough to be used as a reliable index of lexical activation during language processing.

    Additional information

    Lewis_etal_2018sup.docx
  • Liang, S., Vega, R., Kong, X., Deng, W., Wang, Q., Ma, X., Li, M., Hu, X., Greenshaw, A. J., Greiner, R., & Li, T. (2018). Neurocognitive Graphs of First-Episode Schizophrenia and Major Depression Based on Cognitive Features. Neuroscience Bulletin, 34(2), 312-320. doi:10.1007/s12264-017-0190-6.

    Abstract

    Neurocognitive deficits are frequently observed in patients with schizophrenia and major depressive disorder (MDD). The relations between cognitive features may be represented by neurocognitive graphs based on cognitive features, modeled as Gaussian Markov random fields. However, it is unclear whether it is possible to differentiate between phenotypic patterns associated with the differential diagnosis of schizophrenia and depression using this neurocognitive graph approach. In this study, we enrolled 215 first-episode patients with schizophrenia (FES), 125 with MDD, and 237 demographically-matched healthy controls (HCs). The cognitive performance of all participants was evaluated using a battery of neurocognitive tests. The graphical LASSO model was trained with a one-vs-one scenario to learn the conditional independent structure of neurocognitive features of each group. Participants in the holdout dataset were classified into different groups with the highest likelihood. A partial correlation matrix was transformed from the graphical model to further explore the neurocognitive graph for each group. The classification approach identified the diagnostic class for individuals with an average accuracy of 73.41% for FES vs HC, 67.07% for MDD vs HC, and 59.48% for FES vs MDD. Both of the neurocognitive graphs for FES and MDD had more connections and higher node centrality than those for HC. The neurocognitive graph for FES was less sparse and had more connections than that for MDD. Thus, neurocognitive graphs based on cognitive features are promising for describing endophenotypes that may discriminate schizophrenia from depression.

    Additional information

    Liang_etal_2017sup.pdf
  • Liebal, K., & Haun, D. B. M. (2012). The importance of comparative psychology for developmental science [Review Article]. International Journal of Developmental Science, 6, 21-23. doi:10.3233/DEV-2012-11088.

    Abstract

    The aim of this essay is to elucidate the relevance of cross-species comparisons for the investigation of human behavior and its development. The focus is on the comparison of human children and another group of primates, the non-human great apes, with special attention to their cognitive skills. Integrating a comparative and developmental perspective, we argue, can provide additional answers to central and elusive questions about human behavior in general and its development in particular: What are the heritable predispositions of the human mind? What cognitive traits are uniquely human? In this sense, Developmental Science would benefit from results of Comparative Psychology.
  • Ligthart, S., Vaez, A., Võsa, U., Stathopoulou, M. G., De Vries, P. S., Prins, B. P., Van der Most, P. J., Tanaka, T., Naderi, E., Rose, L. M., Wu, Y., Karlsson, R., Barbalic, M., Lin, H., Pool, R., Zhu, G., Macé, A., Sidore, C., Trompet, S., Mangino, M., and 267 more (2018). Genome Analyses of >200,000 Individuals Identify 58 Loci for Chronic Inflammation and Highlight Pathways that Link Inflammation and Complex Disorders. The American Journal of Human Genetics, 103(5), 691-706. doi:10.1016/j.ajhg.2018.09.009.

    Abstract

    C-reactive protein (CRP) is a sensitive biomarker of chronic low-grade inflammation and is associated with multiple complex diseases. The genetic determinants of chronic inflammation remain largely unknown, and the causal role of CRP in several clinical outcomes is debated. We performed two genome-wide association studies (GWASs), on HapMap and 1000 Genomes imputed data, of circulating amounts of CRP by using data from 88 studies comprising 204,402 European individuals. Additionally, we performed in silico functional analyses and Mendelian randomization analyses with several clinical outcomes. The GWAS meta-analyses of CRP revealed 58 distinct genetic loci (p < 5 × 10−8). After adjustment for body mass index in the regression analysis, the associations at all except three loci remained. The lead variants at the distinct loci explained up to 7.0% of the variance in circulating amounts of CRP. We identified 66 gene sets that were organized in two substantially correlated clusters, one mainly composed of immune pathways and the other characterized by metabolic pathways in the liver. Mendelian randomization analyses revealed a causal protective effect of CRP on schizophrenia and a risk-increasing effect on bipolar disorder. Our findings provide further insights into the biology of inflammation and could lead to interventions for treating inflammation and its clinical consequences.
  • Linkenauger, S. A., Lerner, M. D., Ramenzoni, V. C., & Proffitt, D. R. (2012). A perceptual-motor deficit predicts social and communicative impairments in individuals with autism spectrum disorders. Autism Research, 5, 352-362. doi:10.1002/aur.1248.

    Abstract

    Individuals with autism spectrum disorders (ASDs) have known impairments in social and motor skills. Identifying putative underlying mechanisms of these impairments could lead to improved understanding of the etiology of core social/communicative deficits in ASDs, and identification of novel intervention targets. The ability to perceptually integrate one's physical capacities with one's environment (affordance perception) may be such a mechanism. This ability has been theorized to be impaired in ASDs, but this question has never been directly tested. Crucially, affordance perception has been shown to be amenable to learning; thus, if it is implicated in deficits in ASDs, it may be a valuable unexplored intervention target. The present study compared affordance perception in adolescents and adults with ASDs to typically developing (TD) controls. Two groups of individuals (adolescents and adults) with ASDs and age-matched TD controls completed well-established action capability estimation tasks (reachability, graspability, and aperture passability). Their caregivers completed a measure of their lifetime social/communicative deficits. Compared with controls, individuals with ASDs showed unprecedented gross impairments in relating information about their bodies' action capabilities to visual information specifying the environment. The magnitude of these deficits strongly predicted the magnitude of social/communicative impairments in individuals with ASDs. Thus, social/communicative impairments in ASDs may derive, at least in part, from deficits in basic perceptual–motor processes (e.g. action capability estimation). Such deficits may impair the ability to maintain and calibrate the relationship between oneself and one's social and physical environments, and present a fruitful, novel, and unexplored target for intervention.
  • Liszkowski, U., Brown, P., Callaghan, T., Takada, A., & De Vos, C. (2012). A prelinguistic gestural universal of human communication. Cognitive Science, 36, 698-713. doi:10.1111/j.1551-6709.2011.01228.x.

    Abstract

    Several cognitive accounts of human communication argue for a language-independent, prelinguistic basis of human communication and language. The current study provides evidence for the universality of a prelinguistic gestural basis for human communication. We used a standardized, semi-natural elicitation procedure in seven very different cultures around the world to test for the existence of preverbal pointing in infants and their caregivers. Results were that by 10–14 months of age, infants and their caregivers pointed in all cultures in the same basic situation with similar frequencies and the same proto-typical morphology of the extended index finger. Infants’ pointing was best predicted by age and caregiver pointing, but not by cultural group. Further analyses revealed a strong relation between the temporal unfolding of caregivers’ and infants’ pointing events, uncovering a structure of early prelinguistic gestural conversation. Findings support the existence of a gestural, language-independent universal of human communication that forms a culturally shared, prelinguistic basis for diversified linguistic communication.
  • Liu, X., Gao, Y., Di, Q., Hu, J., Lu, C., Nan, Y., Booth, J. R., & Liu, L. (2018). Differences between child and adult large-scale functional brain networks for reading tasks. Human Brain Mapping, 39(2), 662-679. doi:10.1002/hbm.23871.

    Abstract

    Reading is an important high‐level cognitive function of the human brain, requiring interaction among multiple brain regions. Revealing differences between children's large‐scale functional brain networks for reading tasks and those of adults helps us to understand how the functional network changes over reading development. Here we used functional magnetic resonance imaging data of 17 adults (19–28 years old) and 16 children (11–13 years old), and graph theoretical analyses to investigate age‐related changes in large‐scale functional networks during rhyming and meaning judgment tasks on pairs of visually presented Chinese characters. We found that: (1) adults had stronger inter‐regional connectivity and nodal degree in occipital regions, while children had stronger inter‐regional connectivity in temporal regions, suggesting that adults rely more on visual orthographic processing whereas children rely more on auditory phonological processing during reading. (2) Only adults showed between‐task differences in inter‐regional connectivity and nodal degree, whereas children showed no task differences, suggesting the topological organization of adults’ reading network is more specialized. (3) Children showed greater inter‐regional connectivity and nodal degree than adults in multiple subcortical regions; the hubs in children were more distributed in subcortical regions while the hubs in adults were more distributed in cortical regions. These findings suggest that reading development is manifested by a shift from reliance on subcortical to cortical regions. Taken together, our study suggests that Chinese reading development is supported by developmental changes in brain connectivity properties, and some of these changes may be domain‐general while others may be specific to the reading domain.
  • Xu, S., Liu, P., Chen, Y., Chen, Y., Zhang, W., Zhao, H., Cao, Y., Wang, F., Jiang, N., Lin, S., Li, B., Zhang, Z., Wei, Z., Fan, Y., Jin, Y., He, L., Zhou, R., Dekker, J. D., Tucker, H. O., Fisher, S. E., Yao, Z., Liu, Q., Xia, X., & Guo, X. (2018). Foxp2 regulates anatomical features that may be relevant for vocal behaviors and bipedal locomotion. Proceedings of the National Academy of Sciences of the United States of America, 115(35), 8799-8804. doi:10.1073/pnas.1721820115.

    Abstract

    Fundamental human traits, such as language and bipedalism, are associated with a range of anatomical adaptations in craniofacial shaping and skeletal remodeling. However, it is unclear how such morphological features arose during hominin evolution. FOXP2 is a brain-expressed transcription factor implicated in a rare disorder involving speech apraxia and language impairments. Analysis of its evolutionary history suggests that this gene may have contributed to the emergence of proficient spoken language. In the present study, through analyses of skeleton-specific knockout mice, we identified roles of Foxp2 in skull shaping and bone remodeling. Selective ablation of Foxp2 in cartilage disrupted pup vocalizations in a similar way to that of global Foxp2 mutants, which may be due to pleiotropic effects on craniofacial morphogenesis. Our findings also indicate that Foxp2 helps to regulate strength and length of hind limbs and maintenance of joint cartilage and intervertebral discs, which are all anatomical features that are susceptible to adaptations for bipedal locomotion. In light of the known roles of Foxp2 in brain circuits that are important for motor skills and spoken language, we suggest that this gene may have been well placed to contribute to coevolution of neural and anatomical adaptations related to speech and bipedal locomotion.

  • Long, M., Horton, W. S., Rohde, H., & Sorace, A. (2018). Individual differences in switching and inhibition predict perspective-taking across the lifespan. Cognition, 170, 25-30. doi:10.1016/j.cognition.2017.09.004.

    Abstract

    Studies exploring the influence of executive functions (EF) on perspective-taking have focused on inhibition and working memory in young adults or clinical populations. Less consideration has been given to more complex capacities that also involve switching attention between perspectives, or to changes in EF and concomitant effects on perspective-taking across the lifespan. To address this, we assessed whether individual differences in inhibition and attentional switching in healthy adults (ages 17–84) predict performance on a task in which speakers identified targets for a listener with size-contrasting competitors in common or privileged ground. Modification differences across conditions decreased with age. Further, perspective taking interacted with EF measures: youngest adults’ sensitivity to perspective was best captured by their inhibitory performance; oldest adults’ sensitivity was best captured by switching performance. Perspective-taking likely involves multiple aspects of EF, as revealed by considering a wider range of EF tasks and individual capacities across the lifespan.
  • De Loor, G. P., Jurriens, A. A., Levelt, W. J. M., & Van de Geer, J. P. (1968). Line scan imagery interpretation. Photogrammetric Engineering, 28, 502-510.
  • Ludwig, A., Vernesi, C., Lieckfeldt, D., Lattenkamp, E. Z., Wiethölter, A., & Lutz, W. (2012). Origin and patterns of genetic diversity of German fallow deer as inferred from mitochondrial DNA. European Journal of Wildlife Research, 58(2), 495-501. doi:10.1007/s10344-011-0571-5.

    Abstract

    Although not native to Germany, fallow deer (Dama dama) are commonly found today, but their origin as well as the genetic structure of the founding members is still unclear. In order to address these aspects, we sequenced ~400 bp of the mitochondrial d-loop of 365 animals from 22 locations in nine German Federal States. Nine new haplotypes were detected and archived in GenBank. Our data produced evidence for a Turkish origin of the German founders. However, German fallow deer populations have complex patterns of mtDNA variation. In particular, three distinct clusters were identified: Schleswig-Holstein, Brandenburg/Hesse/Rhineland and Saxony/Lower Saxony/Mecklenburg/Westphalia/Anhalt. Signatures of recent demographic expansions were found for the latter two. An overall pattern of reduced genetic variation was therefore accompanied by a relatively strong genetic structure, as highlighted by an overall Φct value of 0.74 (P < 0.001).
  • Lum, J. A., & Kidd, E. (2012). An examination of the associations among multiple memory systems, past tense, and vocabulary in typically developing 5-year-old children. Journal of Speech, Language, and Hearing Research, 55(4), 989-1006. doi:10.1044/1092-4388(2011/10-0137).
  • Lumaca, M., Ravignani, A., & Baggio, G. (2018). Music evolution in the laboratory: Cultural transmission meets neurophysiology. Frontiers in Neuroscience, 12: 246. doi:10.3389/fnins.2018.00246.

    Abstract

    In recent years, there has been renewed interest in the biological and cultural evolution of music, and specifically in the role played by perceptual and cognitive factors in shaping core features of musical systems, such as melody, harmony, and rhythm. One proposal originates in the language sciences. It holds that aspects of musical systems evolve by adapting gradually, in the course of successive generations, to the structural and functional characteristics of the sensory and memory systems of learners and “users” of music. This hypothesis has found initial support in laboratory experiments on music transmission. In this article, we first review some of the most important theoretical and empirical contributions to the field of music evolution. Next, we identify a major current limitation of these studies, i.e., the lack of direct neural support for the hypothesis of cognitive adaptation. Finally, we discuss a recent experiment in which this issue was addressed by using event-related potentials (ERPs). We suggest that the introduction of neurophysiology in cultural transmission research may provide novel insights on the micro-evolutionary origins of forms of variation observed in cultural systems.
  • Lutzenberger, H. (2018). Manual and nonmanual features of name signs in Kata Kolok and Sign Language of the Netherlands. Sign Language Studies, 18(4), 546-569. doi:10.1353/sls.2018.0016.

    Abstract

    Name signs are based on descriptions, initialization, and loan translations. Nyst and Baker (2003) have found crosslinguistic similarities in the phonology of name signs, such as a preference for one-handed signs and for the head location. Studying Kata Kolok (KK), a rural sign language without indigenous fingerspelling, strongly suggests that one-handedness is not correlated to initialization, but represents a more general feature of name sign phonology. As in other sign languages, the head location is used frequently in both KK and Sign Language of the Netherlands (NGT) name signs. The use of nonmanuals, however, is strikingly different. NGT name signs are always accompanied by mouthings, which are absent in KK. Instead, KK name signs may use mouth gestures; these may disambiguate manually identical name signs, and even form independent name signs without any manual features.
  • MacLean, E. L., Matthews, L. J., Hare, B. A., Nunn, C. L., Anderson, R. C., Aureli, F., Brannon, E. M., Call, J., Drea, C. M., Emery, N. J., Haun, D. B. M., Herrmann, E., Jacobs, L. F., Platt, M. L., Rosati, A. G., Sandel, A. A., Schroepfer, K. K., Seed, A. M., Tan, J., Van Schaik, C. P., & Wobber, V. (2012). How does cognition evolve? Phylogenetic comparative psychology. Animal Cognition, 15, 223-238. doi:10.1007/s10071-011-0448-8.

    Abstract

    Now more than ever animal studies have the potential to test hypotheses regarding how cognition evolves. Comparative psychologists have developed new techniques to probe the cognitive mechanisms underlying animal behavior, and they have become increasingly skillful at adapting methodologies to test multiple species. Meanwhile, evolutionary biologists have generated quantitative approaches to investigate the phylogenetic distribution and function of phenotypic traits, including cognition. In particular, phylogenetic methods can quantitatively (1) test whether specific cognitive abilities are correlated with life history (e.g., lifespan), morphology (e.g., brain size), or socio-ecological variables (e.g., social system), (2) measure how strongly phylogenetic relatedness predicts the distribution of cognitive skills across species, and (3) estimate the ancestral state of a given cognitive trait using measures of cognitive performance from extant species. Phylogenetic methods can also be used to guide the selection of species comparisons that offer the strongest tests of a priori predictions of cognitive evolutionary hypotheses (i.e., phylogenetic targeting). Here, we explain how an integration of comparative psychology and evolutionary biology will answer a host of questions regarding the phylogenetic distribution and history of cognitive traits, as well as the evolutionary processes that drove their evolution.
  • Magyari, L., & De Ruiter, J. P. (2012). Prediction of turn-ends based on anticipation of upcoming words. Frontiers in Psychology, 3, 376. doi:10.3389/fpsyg.2012.00376.

    Abstract

    During conversation listeners have to perform several tasks simultaneously. They have to comprehend their interlocutor’s turn, while also having to prepare their own next turn. Moreover, a careful analysis of the timing of natural conversation reveals that next speakers also time their turns very precisely. This is possible only if listeners can predict accurately when the speaker’s turn is going to end. But how are people able to predict when a turn ends? We propose that people know when a turn ends, because they know how it ends. We conducted a gating study to examine if better turn-end predictions coincide with more accurate anticipation of the last words of a turn. We used turns from an earlier button-press experiment where people had to press a button exactly when a turn ended. We show that the proportion of correct guesses in our experiment is higher when a turn’s end was estimated better in time in the button-press experiment. When people were too late in their anticipation in the button-press experiment, they also anticipated more words in our gating study. We conclude that people made predictions in advance about the upcoming content of a turn and used this prediction to estimate the duration of the turn. We suggest an economical model of turn-end anticipation that is based on anticipation of words and syntactic frames in comprehension.
  • Majid, A. (2012). Current emotion research in the language sciences. Emotion Review, 4, 432-443. doi:10.1177/1754073912445827.

    Abstract

    When researchers think about the interaction between language and emotion, they typically focus on descriptive emotion words. This review demonstrates that emotion can interact with language at many levels of structure, from the sound patterns of a language to its lexicon and grammar, and beyond to how it appears in conversation and discourse. Findings are considered from diverse subfields across the language sciences, including cognitive linguistics, psycholinguistics, linguistic anthropology, and conversation analysis. Taken together, it is clear that emotional expression is finely tuned to language-specific structures. Future emotion research can better exploit cross-linguistic variation to unravel possible universal principles operating between language and emotion.
  • Majid, A., Roberts, S. G., Cilissen, L., Emmorey, K., Nicodemus, B., O'Grady, L., Woll, B., LeLan, B., De Sousa, H., Cansler, B. L., Shayan, S., De Vos, C., Senft, G., Enfield, N. J., Razak, R. A., Fedden, S., Tufvesson, S., Dingemanse, M., Ozturk, O., Brown, P., Hill, C., Le Guen, O., Hirtzel, V., Van Gijn, R., Sicoli, M. A., & Levinson, S. C. (2018). Differential coding of perception in the world’s languages. Proceedings of the National Academy of Sciences of the United States of America, 115(45), 11369-11376. doi:10.1073/pnas.1720419115.

    Abstract

    Is there a universal hierarchy of the senses, such that some senses (e.g., vision) are more accessible to consciousness and linguistic description than others (e.g., smell)? The long-standing presumption in Western thought has been that vision and audition are more objective than the other senses, serving as the basis of knowledge and understanding, whereas touch, taste, and smell are crude and of little value. This predicts that humans ought to be better at communicating about sight and hearing than the other senses, and decades of work based on English and related languages certainly suggests this is true. However, how well does this reflect the diversity of languages and communities worldwide? To test whether there is a universal hierarchy of the senses, stimuli from the five basic senses were used to elicit descriptions in 20 diverse languages, including 3 unrelated sign languages. We found that languages differ fundamentally in which sensory domains they linguistically code systematically, and how they do so. The tendency for better coding in some domains can be explained in part by cultural preoccupations. Although languages seem free to elaborate specific sensory domains, some general tendencies emerge: for example, with some exceptions, smell is poorly coded. The surprise is that, despite the gradual phylogenetic accumulation of the senses, and the imbalances in the neural tissue dedicated to them, no single hierarchy of the senses imposes itself upon language.
  • Majid, A. (2018). Humans are neglecting our sense of smell. Here's what we could gain by fixing that. Time, March 7, 2018: 5130634.
  • Majid, A., & Kruspe, N. (2018). Hunter-gatherer olfaction is special. Current Biology, 28(3), 409-413. doi:10.1016/j.cub.2017.12.014.

    Abstract

    People struggle to name odors, but this limitation is not universal. Majid and Kruspe investigate whether superior olfactory performance is due to subsistence, ecology, or language family. By comparing closely related communities in the Malay Peninsula, they find that only hunter-gatherers are proficient odor namers, suggesting that subsistence is crucial.

    Additional information

    The data are archived at RWAAI.
  • Majid, A., Burenhult, N., Stensmyr, M., De Valk, J., & Hansson, B. S. (2018). Olfactory language and abstraction across cultures. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 373: 20170139. doi:10.1098/rstb.2017.0139.

    Abstract

    Olfaction presents a particularly interesting arena to explore abstraction in language. Like other abstract domains, such as time, odours can be difficult to conceptualize. An odour cannot be seen or held, it can be difficult to locate in space, and for most people odours are difficult to verbalize. On the other hand, odours give rise to primary sensory experiences. Every time we inhale we are using olfaction to make sense of our environment. We present new experimental data from 30 Jahai hunter-gatherers from the Malay Peninsula and 30 matched Dutch participants from the Netherlands in an odour naming experiment. Participants smelled monomolecular odorants and named odours while reaction times, odour descriptors and facial expressions were measured. We show that while Dutch speakers relied on concrete descriptors, i.e. they referred to odour sources (e.g. smells like lemon), the Jahai used abstract vocabulary to name the same odours (e.g. musty). Despite this differential linguistic categorization, analysis of facial expressions showed that the two groups, nevertheless, had the same initial emotional reactions to odours. Critically, these cross-cultural data present a challenge for how to think about abstraction in language.
  • Majid, A. (2012). The role of language in a science of emotion [Comment]. Emotion Review, 4, 380-381. doi:10.1177/1754073912445819.

    Abstract

    Emotion scientists often take an ambivalent stance concerning the role of language in a science of emotion. However, it is important for emotion researchers to contemplate some of the consequences of current practices for their theory building. There is a danger of an overreliance on the English language as a transparent window into emotion categories. More consideration has to be given to cross-linguistic comparison in the future so that models of language acquisition and of the language–cognition interface fit better the extant variation found in today’s peoples.
  • Majid, A., Boroditsky, L., & Gaby, A. (Eds.). (2012). Time in terms of space [Research topic] [Special Issue]. Frontiers in cultural psychology. Retrieved from http://www.frontiersin.org/cultural_psychology/researchtopics/Time_in_terms_of_space/755.

    Abstract

    This Research Topic explores the question: what is the relationship between representations of time and space in cultures around the world? This question touches on the broader issue of how humans come to represent and reason about abstract entities – things we cannot see or touch. Time is a particularly opportune domain to investigate this topic. Across cultures, people use spatial representations for time, for example in graphs, time-lines, clocks, sundials, hourglasses, and calendars. In language, time is also heavily related to space, with spatial terms often used to describe the order and duration of events. In English, for example, we might move a meeting forward, push a deadline back, attend a long concert or go on a short break. People also make consistent spatial gestures when talking about time, and appear to spontaneously invoke spatial representations when processing temporal language. A large body of evidence suggests a close correspondence between temporal and spatial language and thought. However, the ways that people spatialize time can differ dramatically across languages and cultures. This research topic identifies and explores some of the sources of this variation, including patterns in spatial thinking, patterns in metaphor, gesture and other cultural systems. This Research Topic explores how speakers of different languages talk about time and space and how they think about these domains, outside of language. The Research Topic invites papers exploring the following issues: 1. Do the linguistic representations of space and time share the same lexical and morphosyntactic resources? 2. To what extent does the conceptualization of time follow the conceptualization of space?
  • Mamus, E., & Boduroglu, A. (2018). The role of context on boundary extension. Visual Cognition, 26(2), 115-130. doi:10.1080/13506285.2017.1399947.

    Abstract

    Boundary extension (BE) is a memory error in which observers remember more of a scene than they actually viewed. This error reflects one’s prediction that a scene naturally continues and is driven by scene schema and contextual knowledge. In two separate experiments we investigated the necessity of context and scene schema in BE. In Experiment 1, observers viewed scenes that either contained semantically consistent or inconsistent objects as well as objects on white backgrounds. In both types of scenes and in the no-background condition there was a BE effect; critically, semantic inconsistency in scenes reduced the magnitude of BE. In Experiment 2 when we used abstract shapes instead of meaningful objects, there was no BE effect. We suggest that although scene schema is necessary to elicit BE, contextual consistency is not required.
  • Manahova, M. E., Mostert, P., Kok, P., Schoffelen, J.-M., & De Lange, F. P. (2018). Stimulus familiarity and expectation jointly modulate neural activity in the visual ventral stream. Journal of Cognitive Neuroscience, 30(9), 1366-1377. doi:10.1162/jocn_a_01281.

    Abstract

    Prior knowledge about the visual world can change how a visual stimulus is processed. Two forms of prior knowledge are often distinguished: stimulus familiarity (i.e., whether a stimulus has been seen before) and stimulus expectation (i.e., whether a stimulus is expected to occur, based on the context). Neurophysiological studies in monkeys have shown suppression of spiking activity both for expected and for familiar items in object-selective inferotemporal cortex. It is an open question, however, if and how these types of knowledge interact in their modulatory effects on the sensory response. To address this issue and to examine whether previous findings generalize to noninvasively measured neural activity in humans, we separately manipulated stimulus familiarity and expectation while noninvasively recording human brain activity using magnetoencephalography. We observed independent suppression of neural activity by familiarity and expectation, specifically in the lateral occipital complex, the putative human homologue of monkey inferotemporal cortex. Familiarity also led to sharpened response dynamics, which was predominantly observed in early visual cortex. Together, these results show that distinct types of sensory knowledge jointly determine the amount of neural resources dedicated to object processing in the visual ventral stream.
  • Mandy, W., Pellicano, L., St Pourcain, B., Skuse, D., & Heron, J. (2018). The development of autistic social traits across childhood and adolescence in males and females. The Journal of Child Psychology and Psychiatry, 59(11), 1143-1151. doi:10.1111/jcpp.12913.

    Abstract

    Background

    Autism is a dimensional condition, representing the extreme end of a continuum of social competence that extends throughout the general population. Currently, little is known about how autistic social traits (ASTs), measured across the full spectrum of severity, develop during childhood and adolescence, including whether there are developmental differences between boys and girls. Therefore, we sought to chart the trajectories of ASTs in the general population across childhood and adolescence, with a focus on gender differences.
    Methods

    Participants were 9,744 males (n = 4,784) and females (n = 4,960) from ALSPAC, a UK birth cohort study. ASTs were assessed when participants were aged 7, 10, 13 and 16 years, using the parent‐report Social Communication Disorders Checklist. Data were modelled using latent growth curve analysis.
    Results

    Developmental trajectories of males and females were nonlinear, showing a decline from 7 to 10 years, followed by an increase between 10 and 16 years. At 7 years, males had higher levels of ASTs than females (mean raw score difference = 0.88, 95% CI [.72, 1.04]), and were more likely (odds ratio [OR] = 1.99; 95% CI, 1.82, 2.16) to score in the clinical range on the SCDC. By 16 years this gender difference had disappeared: males and females had, on average, similar levels of ASTs (mean difference = 0.00, 95% CI [−0.19, 0.19]) and were equally likely to score in the SCDC's clinical range (OR = 0.91, 95% CI, 0.73, 1.10). This was the result of an increase in females’ ASTs between 10 and 16 years.
    Conclusions

    There are gender‐specific trajectories of autistic social impairment, with females more likely than males to experience an escalation of ASTs during early‐ and midadolescence. It remains to be discovered whether the observed female adolescent increase in ASTs represents the genuine late onset of social difficulties or earlier, subtle, pre‐existing difficulties becoming more obvious.

    Additional information

    jcpp12913-sup-0001-supinfo.docx
  • Mani, N., & Huettig, F. (2012). Prediction during language processing is a piece of cake - but only for skilled producers. Journal of Experimental Psychology: Human Perception and Performance, 38(4), 843-847. doi:10.1037/a0029284.

    Abstract

    Are there individual differences in children’s prediction of upcoming linguistic input and what do these differences reflect? Using a variant of the preferential looking paradigm (Golinkoff et al., 1987), we found that, upon hearing a sentence like “The boy eats a big cake”, two-year-olds fixate edible objects in a visual scene (a cake) soon after they hear the semantically constraining verb, eats, and prior to hearing the word, cake. Importantly, children’s prediction skills were significantly correlated with their productive vocabulary size – Skilled producers (i.e., children with large production vocabularies) showed evidence of predicting upcoming linguistic input while low producers did not. Furthermore, we found that children’s prediction ability is tied specifically to their production skills and not to their comprehension skills. Prediction is really a piece of cake, but only for skilled producers.
  • Martin, A. E. (2018). Cue integration during sentence comprehension: Electrophysiological evidence from ellipsis. PLoS One, 13(11): e0206616. doi:10.1371/journal.pone.0206616.

    Abstract

    Language processing requires us to integrate incoming linguistic representations with representations of past input, often across intervening words and phrases. This computational situation has been argued to require retrieval of the appropriate representations from memory via a set of features or representations serving as retrieval cues. However, even within a cue-based retrieval account of language comprehension, both the structure of retrieval cues and the particular computation that underlies direct-access retrieval are still underspecified. Evidence from two event-related brain potential (ERP) experiments that show cue-based interference from different types of linguistic representations during ellipsis comprehension are consistent with an architecture wherein different cue types are integrated, and where the interaction of cues with the recent contents of memory determines processing outcome, including expression of the interference effect in ERP componentry. I conclude that retrieval likely includes a computation where cues are integrated with the contents of memory via a linear weighting scheme, and I propose vector addition as a candidate formalization of this computation. I attempt to account for these effects and other related phenomena within a broader cue-based framework of language processing.
  • Martin, A. E., Nieuwland, M. S., & Carreiras, M. (2012). Event-related brain potentials index cue-based retrieval interference during sentence comprehension. NeuroImage, 59(2), 1859-1869. doi:10.1016/j.neuroimage.2011.08.057.

    Abstract

    Successful language use requires access to products of past processing within an evolving discourse. A central issue for any neurocognitive theory of language then concerns the role of memory variables during language processing. Under a cue-based retrieval account of language comprehension, linguistic dependency resolution (e.g., retrieving antecedents) is subject to interference from other information in the sentence, especially information that occurs between the words that form the dependency (e.g., between the antecedent and the retrieval site). Retrieval interference may then shape processing complexity as a function of the match of the information at retrieval with the antecedent versus other recent or similar items in memory. To address these issues, we studied the online processing of ellipsis in Castilian Spanish, a language with morphological gender agreement. We recorded event-related brain potentials while participants read sentences containing noun-phrase ellipsis indicated by the determiner otro/a (‘another’). These determiners had a grammatically correct or incorrect gender with respect to their antecedent nouns that occurred earlier in the sentence. Moreover, between each antecedent and determiner, another noun phrase occurred that was structurally unavailable as an antecedent and that matched or mismatched the gender of the antecedent (i.e., a local agreement attractor). In contrast to extant P600 results on agreement violation processing, and inconsistent with predictions from neurocognitive models of sentence processing, grammatically incorrect determiners evoked a sustained, broadly distributed negativity compared to correct ones between 400 and 1000 ms after word onset, possibly related to sustained negativities as observed for referential processing difficulties. Crucially, this effect was modulated by the attractor: an increased negativity was observed for grammatically correct determiners that did not match the gender of the attractor, suggesting that structurally unavailable noun phrases were at least temporarily considered for grammatically correct ellipsis. These results constitute the first ERP evidence for cue-based retrieval interference during comprehension of grammatical sentences.
  • Martin, A. E., & McElree, B. (2018). Retrieval cues and syntactic ambiguity resolution: Speed-accuracy tradeoff evidence. Language, Cognition and Neuroscience, 33(6), 769-783. doi:10.1080/23273798.2018.1427877.

    Abstract

    Language comprehension involves coping with ambiguity and recovering from misanalysis. Syntactic ambiguity resolution is associated with increased reading times, a classic finding that has shaped theories of sentence processing. However, reaction times conflate the time it takes a process to complete with the quality of the behavior-related information available to the system. We therefore used the speed-accuracy tradeoff procedure (SAT) to derive orthogonal estimates of processing time and interpretation accuracy, and tested whether stronger retrieval cues (via semantic relatedness: neighed->horse vs. fell->horse) aid interpretation during recovery. On average, ambiguous sentences took 250 ms longer (SAT rate) to interpret than unambiguous controls, demonstrating veridical differences in processing time. Retrieval cues more strongly related to the true subject always increased accuracy, regardless of ambiguity. These findings are consistent with a language processing architecture where cue-driven operations give rise to interpretation, and wherein diagnostic cues aid retrieval, regardless of parsing difficulty or structural uncertainty.
  • Maslowski, M., Meyer, A. S., & Bosker, H. R. (2018). Listening to yourself is special: Evidence from global speech rate tracking. PLoS One, 13(9): e0203571. doi:10.1371/journal.pone.0203571.

    Abstract

    Listeners are known to use adjacent contextual speech rate in processing temporally ambiguous speech sounds. For instance, an ambiguous vowel between short /A/ and long /a:/ in Dutch sounds relatively long (i.e., as /a:/) embedded in a fast precursor sentence, but short in a slow sentence. Besides the local speech rate, listeners also track talker-specific global speech rates. However, it is yet unclear whether other talkers' global rates are encoded with reference to a listener's self-produced rate. Three experiments addressed this question. In Experiment 1, one group of participants was instructed to speak fast, whereas another group had to speak slowly. The groups were compared on their perception of ambiguous /A/-/a:/ vowels embedded in neutral rate speech from another talker. In Experiment 2, the same participants listened to playback of their own speech and again evaluated target vowels in neutral rate speech. Neither of these experiments provided support for the involvement of self-produced speech in perception of another talker's speech rate. Experiment 3 repeated Experiment 2 but with a new participant sample that was unfamiliar with the participants from Experiment 2. This experiment revealed fewer /a:/ responses in neutral speech in the group also listening to a fast rate, suggesting that neutral speech sounds slow in the presence of a fast talker and vice versa. Taken together, the findings show that self-produced speech is processed differently from speech produced by others. They carry implications for our understanding of the perceptual and cognitive mechanisms involved in rate-dependent speech perception in dialogue settings.
  • Matić, D. (2012). Review of: Assertion by Mark Jary, Palgrave Macmillan, 2010 [Web Post]. The LINGUIST List. Retrieved from http://linguistlist.org/pubs/reviews/get-review.cfm?SubID=4547242.

    Abstract

    Even though assertion has held centre stage in much philosophical and linguistic theorising on language, Mark Jary’s ‘Assertion’ represents the first book-length treatment of the topic. The content of the book is aptly described by the author himself: ''This book has two aims. One is to bring together and discuss in a systematic way a range of perspectives on assertion: philosophical, linguistic and psychological. [...] The other is to present a view of the pragmatics of assertion, with particular emphasis on the contribution of the declarative mood to the process of utterance interpretation.'' (p. 1). The promise contained in this introductory note is to a large extent fulfilled: the first seven chapters of the book discuss many of the relevant philosophical and linguistic approaches to assertion and at the same time provide the background for the presentation of Jary's own view on the pragmatics of declaratives, presented in the last (and longest) chapter.
  • McQueen, J. M., & Huettig, F. (2012). Changing only the probability that spoken words will be distorted changes how they are recognized. Journal of the Acoustical Society of America, 131(1), 509-517. doi:10.1121/1.3664087.

    Abstract

    An eye-tracking experiment examined contextual flexibility in speech processing in response to distortions in spoken input. Dutch participants heard Dutch sentences containing critical words and saw four-picture displays. The name of one picture either had the same onset phonemes as the critical word or had a different first phoneme and rhymed. Participants fixated onset-overlap more than rhyme-overlap pictures, but this tendency varied with speech quality. Relative to a baseline with noise-free sentences, participants looked less at onset-overlap and more at rhyme-overlap pictures when phonemes in the sentences (but not in the critical words) were replaced by noises like those heard on a badly-tuned AM radio. The position of the noises (word-initial or word-medial) had no effect. Noises elsewhere in the sentences apparently made evidence about the critical word less reliable: Listeners became less confident of having heard the onset-overlap name but also less sure of having not heard the rhyme-overlap name. The same acoustic information has different effects on spoken-word recognition as the probability of distortion changes.
  • McQueen, J. M., Tyler, M., & Cutler, A. (2012). Lexical retuning of children’s speech perception: Evidence for knowledge about words’ component sounds. Language Learning and Development, 8, 317-339. doi:10.1080/15475441.2011.641887.

    Abstract

    Children hear new words from many different talkers; to learn words most efficiently, they should be able to represent them independently of talker-specific pronunciation detail. However, do children know what the component sounds of words should be, and can they use that knowledge to deal with different talkers' phonetic realizations? Experiment 1 replicated prior studies on lexically guided retuning of speech perception in adults, with a picture-verification methodology suitable for children. One participant group heard an ambiguous fricative ([s/f]) replacing /f/ (e.g., in words like giraffe); another group heard [s/f] replacing /s/ (e.g., in platypus). The first group subsequently identified more tokens on a Simpie-[s/f]impie-Fimpie toy-name continuum as Fimpie. Experiments 2 and 3 found equivalent lexically guided retuning effects in 12- and 6-year-olds. Children aged 6 have all that is needed for adjusting to talker variation in speech: detailed and abstract phonological representations and the ability to apply them during spoken-word recognition.

  • Mei, C., Fedorenko, E., Amor, D. J., Boys, A., Hoeflin, C., Carew, P., Burgess, T., Fisher, S. E., & Morgan, A. T. (2018). Deep phenotyping of speech and language skills in individuals with 16p11.2 deletion. European journal of human genetics, 26(5), 676-686. doi:10.1038/s41431-018-0102-x.

    Abstract

    Recurrent deletions of a ~600-kb region of 16p11.2 have been associated with a highly penetrant form of childhood apraxia of speech (CAS). Yet prior findings have been based on a small, potentially biased sample using retrospectively collected data. We examine the prevalence of CAS in a larger cohort of individuals with 16p11.2 deletion using a prospectively designed assessment battery. The broader speech and language phenotype associated with carrying this deletion was also examined. 55 participants with 16p11.2 deletion (47 children, 8 adults) underwent deep phenotyping to test for the presence of CAS and other speech and language diagnoses. Standardized tests of oral motor functioning, speech production, language, and non-verbal IQ were conducted. The majority of children (77%) and half of adults (50%) met criteria for CAS. Other speech outcomes were observed including articulation or phonological errors (i.e., phonetic and cognitive-linguistic errors, respectively), dysarthria (i.e., neuromuscular speech disorder), minimal verbal output, and even typical speech in some. Receptive and expressive language impairment was present in 73% and 70% of children, respectively. Co-occurring neurodevelopmental conditions (e.g., autism) and non-verbal IQ did not correlate with the presence of CAS. Findings indicate that CAS is highly prevalent in children with 16p11.2 deletion with symptoms persisting into adulthood for many. Yet CAS occurs in the context of a broader speech and language profile and other neurobehavioral deficits. Further research will elucidate specific genetic and neural pathways leading to speech and language deficits in individuals with 16p11.2 deletions, resulting in more targeted speech therapies addressing etiological pathways.
  • Mellem, M. S., Bastiaansen, M. C. M., Pilgrim, L. K., Medvedev, A. V., & Friedman, R. B. (2012). Word class and context affect alpha-band oscillatory dynamics in an older population. Frontiers in Psychology, 3, 97. doi:10.3389/fpsyg.2012.00097.

    Abstract

    Differences in the oscillatory EEG dynamics of reading open class (OC) and closed class (CC) words have previously been found (Bastiaansen et al., 2005) and are thought to reflect differences in lexical-semantic content between these word classes. In particular, the theta-band (4–7 Hz) seems to play a prominent role in lexical-semantic retrieval. We tested whether this theta effect is robust in an older population of subjects. Additionally, we examined how the context of a word can modulate the oscillatory dynamics underlying retrieval for the two different classes of words. Older participants (mean age 55) read words presented in either syntactically correct sentences or in a scrambled order (“scrambled sentence”) while their EEG was recorded. We performed time–frequency analysis to examine how power varied based on the context or class of the word. We observed larger power decreases in the alpha (8–12 Hz) band between 200–700 ms for the OC compared to CC words, but this was true only for the scrambled sentence context. We did not observe differences in theta power between these conditions. Context exerted an effect on the alpha and low beta (13–18 Hz) bands between 0 and 700 ms. These results suggest that the previously observed word class effects on theta power changes in a younger participant sample do not seem to be a robust effect in this older population. Though this is an indirect comparison between studies, it may suggest the existence of aging effects on word retrieval dynamics for different populations. Additionally, the interaction between word class and context suggests that word retrieval mechanisms interact with sentence-level comprehension mechanisms in the alpha-band.
  • Menenti, L., Petersson, K. M., & Hagoort, P. (2012). From reference to sense: How the brain encodes meaning for speaking. Frontiers in Psychology, 2, 384. doi:10.3389/fpsyg.2011.00384.

    Abstract

    In speaking, semantic encoding is the conversion of a non-verbal mental representation (the reference) into a semantic structure suitable for expression (the sense). In this fMRI study on sentence production we investigate how the speaking brain accomplishes this transition from non-verbal to verbal representations. In an overt picture description task, we manipulated repetition of sense (the semantic structure of the sentence) and reference (the described situation) separately. By investigating brain areas showing response adaptation to repetition of each of these sentence properties, we disentangle the neuronal infrastructure for these two components of semantic encoding. We also performed a control experiment with the same stimuli and design but without any linguistic task to identify areas involved in perception of the stimuli per se. The bilateral inferior parietal lobes were selectively sensitive to repetition of reference, while left inferior frontal gyrus showed selective suppression to repetition of sense. Strikingly, a widespread network of areas associated with language processing (left middle frontal gyrus, bilateral superior parietal lobes and bilateral posterior temporal gyri) all showed repetition suppression to both sense and reference processing. These areas are probably involved in mapping reference onto sense, the crucial step in semantic encoding. These results enable us to track the transition from non-verbal to verbal representations in our brains.
  • Menenti, L., Segaert, K., & Hagoort, P. (2012). The neuronal infrastructure of speaking. Brain and Language, 122, 71-80. doi:10.1016/j.bandl.2012.04.012.

    Abstract

    Models of speaking distinguish producing meaning, words and syntax as three different linguistic components of speaking. Nevertheless, little is known about the brain’s integrated neuronal infrastructure for speech production. We investigated semantic, lexical and syntactic aspects of speaking using fMRI. In a picture description task, we manipulated repetition of sentence meaning, words, and syntax separately. By investigating brain areas showing response adaptation to repetition of each of these sentence properties, we disentangle the neuronal infrastructure for these processes. We demonstrate that semantic, lexical and syntactic processes are carried out in partly overlapping and partly distinct brain networks and show that the classic left-hemispheric dominance for language is present for syntax but not semantics.
  • Menenti, L., Pickering, M. J., & Garrod, S. C. (2012). Towards a neural basis of interactive alignment in conversation. Frontiers in Human Neuroscience, 6, 185. doi:10.3389/fnhum.2012.00185.

    Abstract

    The interactive-alignment account of dialogue proposes that interlocutors achieve conversational success by aligning their understanding of the situation under discussion. Such alignment occurs because they prime each other at different levels of representation (e.g., phonology, syntax, semantics), and this is possible because these representations are shared across production and comprehension. In this paper, we briefly review the behavioral evidence, and then consider how findings from cognitive neuroscience might lend support to this account, on the assumption that alignment of neural activity corresponds to alignment of mental states. We first review work supporting representational parity between production and comprehension, and suggest that neural activity associated with phonological, lexical, and syntactic aspects of production and comprehension are closely related. We next consider evidence for the neural bases of the activation and use of situation models during production and comprehension, and how these demonstrate the activation of non-linguistic conceptual representations associated with language use. We then review evidence for alignment of neural mechanisms that are specific to the act of communication. Finally, we suggest some avenues of further research that need to be explored to test crucial predictions of the interactive alignment account.
  • Meyer, A. S., Wheeldon, L. R., Van der Meulen, F., & Konopka, A. E. (2012). Effects of speech rate and practice on the allocation of visual attention in multiple object naming. Frontiers in Psychology, 3, 39. doi:10.3389/fpsyg.2012.00039.

    Abstract

    Earlier studies had shown that speakers naming several objects typically look at each object until they have retrieved the phonological form of its name and therefore look longer at objects with long names than at objects with shorter names. We examined whether this tight eye-to-speech coordination was maintained at different speech rates and after increasing amounts of practice. Participants named the same set of objects with monosyllabic or disyllabic names on up to 20 successive trials. In Experiment 1, they spoke as fast as they could, whereas in Experiment 2 they had to maintain a fixed moderate or faster speech rate. In both experiments, the durations of the gazes to the objects decreased with increasing speech rate, indicating that at higher speech rates, the speakers spent less time planning the object names. The eye-speech lag (the time interval between the shift of gaze away from an object and the onset of its name) was independent of the speech rate but became shorter with increasing practice. Consistent word length effects on the durations of the gazes to the objects and the eye speech lags were only found in Experiment 2. The results indicate that shifts of eye gaze are often linked to the completion of phonological encoding, but that speakers can deviate from this default coordination of eye gaze and speech, for instance when the descriptive task is easy and they aim to speak fast.
  • Meyer, A. S., & Schriefers, H. (1991). Phonological facilitation in picture-word interference experiments: Effects of stimulus onset asynchrony and types of interfering stimuli. Journal of Experimental Psychology: Learning, Memory, and Cognition, 17, 1146-1160. doi:10.1037/0278-7393.17.6.1146.

    Abstract

    Subjects named pictures while hearing distractor words that shared word-initial or word-final segments with the picture names or were unrelated to the picture names. The relative timing of distractor and picture presentation was varied. Compared with unrelated distractors, both types of related distractors facilitated picture naming under certain timing conditions. Begin-related distractors facilitated the naming responses if the shared segments began 150 ms before, at, or 150 ms after picture onset. By contrast, end-related distractors only facilitated the responses if the shared segments began at or 150 ms after picture onset. The results suggest that the phonological encoding of the beginning of a word is initiated before the encoding of its end.
  • Meyer, A. S. (1991). The time course of phonological encoding in language production: Phonological encoding inside a syllable. Journal of Memory and Language, 30, 69-89. doi:10.1016/0749-596X(91)90011-8.

    Abstract

    Eight experiments were carried out investigating whether different parts of a syllable must be phonologically encoded in a specific order or whether they can be encoded in any order. A speech production task was used in which the subjects in each test trial had to utter one out of three or five response words as quickly as possible. In the so-called homogeneous condition these words were related in form, while in the heterogeneous condition they were unrelated in form. For monosyllabic response words shorter reaction times were obtained in the homogeneous than in the heterogeneous condition when the words had the same onset, but not when they had the same rhyme. Similarly, for disyllabic response words, the reaction times were shorter in the homogeneous than in the heterogeneous condition when the words shared only the onset of the first syllable, but not when they shared only its rhyme. Furthermore, a stronger facilitatory effect was observed when the words had the entire first syllable in common than when they only shared the onset, or the onset and the nucleus, but not the coda of the first syllable. These results suggest that syllables are phonologically encoded in two ordered steps, the first of which is dedicated to the onset and the second to the rhyme.
  • Meyer, A. S., Alday, P. M., Decuyper, C., & Knudsen, B. (2018). Working together: Contributions of corpus analyses and experimental psycholinguistics to understanding conversation. Frontiers in Psychology, 9: 525. doi:10.3389/fpsyg.2018.00525.

    Abstract

    As conversation is the most important way of using language, linguists and psychologists should combine forces to investigate how interlocutors deal with the cognitive demands arising during conversation. Linguistic analyses of corpora of conversation are needed to understand the structure of conversations, and experimental work is indispensable for understanding the underlying cognitive processes. We argue that joint consideration of corpus and experimental data is most informative when the utterances elicited in a lab experiment match those extracted from a corpus in relevant ways. This requirement to compare like with like seems obvious but is not trivial to achieve. To illustrate this approach, we report two experiments where responses to polar (yes/no) questions were elicited in the lab and the response latencies were compared to gaps between polar questions and answers in a corpus of conversational speech. We found, as expected, that responses were given faster when they were easy to plan and planning could be initiated earlier than when they were harder to plan and planning was initiated later. Overall, in all but one condition, the latencies were longer than one would expect based on the analyses of corpus data. We discuss the implications of this partial match between the data sets and, more generally, how corpus and experimental data can best be combined in studies of conversation.

    Additional information

    Data_Sheet_1.pdf
  • Minagawa-Kawai, Y., Cristià, A., & Dupoux, E. (2012). Erratum to “Cerebral lateralization and early speech acquisition: A developmental scenario” [Dev. Cogn. Neurosci. 1 (2011) 217–232]. Developmental Cognitive Neuroscience, 2(1), 194-195. doi:10.1016/j.dcn.2011.07.011.

    Abstract

    Refers to Yasuyo Minagawa-Kawai, Alejandrina Cristià, Emmanuel Dupoux "Cerebral lateralization and early speech acquisition: A developmental scenario" Developmental Cognitive Neuroscience, Volume 1, Issue 3, July 2011, Pages 217-232
  • Mishra, R. K., Singh, N., Pandey, A., & Huettig, F. (2012). Spoken language-mediated anticipatory eye movements are modulated by reading ability: Evidence from Indian low and high literates. Journal of Eye Movement Research, 5(1): 3, pp. 1-10. doi:10.16910/jemr.5.1.3.

    Abstract

    We investigated whether levels of reading ability attained through formal literacy are related to anticipatory language-mediated eye movements. Indian low and high literates listened to simple spoken sentences containing a target word (e.g., "door") while at the same time looking at a visual display of four objects (a target, i.e. the door, and three distractors). The spoken sentences were constructed in such a way that participants could use semantic, associative, and syntactic information from adjectives and particles (preceding the critical noun) to anticipate the visual target objects. High literates started to shift their eye gaze to the target objects well before target word onset. In the low literacy group this shift of eye gaze occurred only when the target noun (i.e. "door") was heard, more than a second later. Our findings suggest that formal literacy may be important for the fine-tuning of language-mediated anticipatory mechanisms, abilities which proficient language users can then exploit for other cognitive activities such as spoken language-mediated eye gaze. In the conclusion, we discuss three potential mechanisms of how reading acquisition and practice may contribute to the differences in predictive spoken language processing between low and high literates.
  • Mitterer, H. (Ed.). (2012). Ecological aspects of speech perception [Research topic] [Special Issue]. Frontiers in Cognition.

    Abstract

    Our knowledge of speech perception is largely based on experiments conducted with carefully recorded clear speech presented under good listening conditions to undistracted listeners - a near-ideal situation, in other words. But reality poses a different set of challenges. First of all, listeners may need to divide their attention between speech comprehension and another task (e.g., driving). Outside the laboratory, the speech signal is often slurred by less than careful pronunciation and the listener has to deal with background noise. Moreover, in a globalized world, listeners need to understand speech in more than their native language. Relatedly, the speakers we listen to often have a different language background, so we have to deal with a foreign or regional accent we are not familiar with. Finally, outside the laboratory, speech perception is not an end in itself, but rather a means of contributing to a conversation. Listeners not only need to understand the speech they are hearing; they also need to use this information to plan and time their own responses. For this special topic, we invite papers that address any of these ecological aspects of speech perception.
  • Mitterer, H., Reinisch, E., & McQueen, J. M. (2018). Allophones, not phonemes in spoken-word recognition. Journal of Memory and Language, 98, 77-92. doi:10.1016/j.jml.2017.09.005.

    Abstract

    What are the phonological representations that listeners use to map information about the segmental content of speech onto the mental lexicon during spoken-word recognition? Recent evidence from perceptual-learning paradigms seems to support (context-dependent) allophones as the basic representational units in spoken-word recognition. But recent evidence from a selective-adaptation paradigm seems to suggest that context-independent phonemes also play a role. We present three experiments using selective adaptation that constitute strong tests of these representational hypotheses. In Experiment 1, we tested generalization of selective adaptation using different allophones of Dutch /r/ and /l/ – a case where generalization has not been found with perceptual learning. In Experiments 2 and 3, we tested generalization of selective adaptation using German back fricatives in which allophonic and phonemic identity were varied orthogonally. In all three experiments, selective adaptation was observed only if adaptors and test stimuli shared allophones. Phonemic identity, in contrast, was neither necessary nor sufficient for generalization of selective adaptation to occur. These findings and other recent data using the perceptual-learning paradigm suggest that pre-lexical processing during spoken-word recognition is based on allophones, and not on context-independent phonemes.
  • Mitterer, H., & Tuinman, A. (2012). The role of native-language knowledge in the perception of casual speech in a second language. Frontiers in Psychology, 3, 249. doi:10.3389/fpsyg.2012.00249.

    Abstract

    Casual speech processes, such as /t/-reduction, make word recognition harder. Additionally, word recognition is also harder in a second language (L2). Combining these challenges, we investigated whether L2 learners have recourse to knowledge from their native language (L1) when dealing with casual-speech processes in their L2. In three experiments, production and perception of /t/-reduction was investigated. An initial production experiment showed that /t/-reduction occurred in both languages and patterned similarly in proper nouns but differed when /t/ was a verbal inflection. Two perception experiments compared the performance of German learners of Dutch with that of native speakers for nouns and verbs. Mirroring the production patterns, German learners' performance strongly resembled that of native Dutch listeners when the reduced /t/ was part of a word stem, but deviated where /t/ was a verbal inflection. These results suggest that a casual speech process in a second language is problematic for learners when the process is not known from the learner's native language, similar to what has been observed for phoneme contrasts.
  • Monster, I., & Lev-Ari, S. (2018). The effect of social network size on hashtag adoption on Twitter. Cognitive Science, 42(8), 3149-3158. doi:10.1111/cogs.12675.

    Abstract

    Propagation of novel linguistic terms is an important aspect of language use and language change. Here, we test how social network size influences people’s likelihood of adopting novel labels by examining hashtag use on Twitter. Specifically, we test whether following fewer Twitter users leads to more varied and malleable hashtag use on Twitter, because each followed user is ascribed greater weight and thus exerts greater influence on the following user. Focusing on Dutch users tweeting about the terrorist attack in Brussels in 2016, we show that people who follow fewer other users use a larger number of unique hashtags to refer to the event, reflecting greater malleability and variability in use. These results have implications for theories of language learning, language use, and language change.
  • Morgan, A. T., van Haaften, L., van Hulst, K., Edley, C., Mei, C., Tan, T. Y., Amor, D., Fisher, S. E., & Koolen, D. A. (2018). Early speech development in Koolen de Vries syndrome limited by oral praxis and hypotonia. European journal of human genetics, 26, 75-84. doi:10.1038/s41431-017-0035-9.

    Abstract

    Communication disorder is common in Koolen de Vries syndrome (KdVS), yet its specific symptomatology has not been examined, limiting prognostic counselling and application of targeted therapies. Here we examine the communication phenotype associated with KdVS. Twenty-nine participants (12 males, 4 with KANSL1 variants, 25 with 17q21.31 microdeletion), aged 1.0–27.0 years, were assessed for oral-motor, speech, language, literacy, and social functioning. Early history included hypotonia and feeding difficulties. Speech and language development was delayed and atypical from onset of first words (2;5–3;5 years of age on average). Speech was characterised by apraxia (100%) and dysarthria (93%), with stuttering in some (17%). Speech therapy and multi-modal communication (e.g., sign-language) was critical in preschool. Receptive and expressive language abilities were typically commensurate (79%), both being severely affected relative to peers. Children were sociable with a desire to communicate, although some (36%) had pragmatic impairments in domains where higher-level language was required. A common phenotype was identified, including an overriding ‘double hit’ of oral hypotonia and apraxia in infancy and preschool, associated with severely delayed speech development. Remarkably, however, speech prognosis was positive; apraxia resolved, and although dysarthria persisted, children were intelligible by mid-to-late childhood. In contrast, language and literacy deficits persisted, and pragmatic deficits were apparent. Children with KdVS require early, intensive, speech motor and language therapy, with targeted literacy and social language interventions as developmentally appropriate. Greater understanding of the linguistic phenotype may help unravel the relevance of KANSL1 to child speech and language development.

    Additional information

    41431_2017_35_MOESM1_ESM.docx
  • Moseley, R., Carota, F., Hauk, O., Mohr, B., & Pulvermüller, F. (2012). A role for the motor system in binding abstract emotional meaning. Cerebral Cortex, 22(7), 1634-1647. doi:10.1093/cercor/bhr238.

    Abstract

    Sensorimotor areas activate to action- and object-related words, but their role in abstract meaning processing is still debated. Abstract emotion words denoting body internal states are a critical test case because they lack referential links to objects. If actions expressing emotion are crucial for learning correspondences between word forms and emotions, emotion word–evoked activity should emerge in motor brain systems controlling the face and arms, which typically express emotions. To test this hypothesis, we recruited 18 native speakers and used event-related functional magnetic resonance imaging to compare brain activation evoked by abstract emotion words to that by face- and arm-related action words. In addition to limbic regions, emotion words indeed sparked precentral cortex, including body-part–specific areas activated somatotopically by face words or arm words. Control items, including hash mark strings and animal words, failed to activate precentral areas. We conclude that, similar to their role in action word processing, activation of frontocentral motor systems in the dorsal stream reflects the semantic binding of sign and meaning of abstract words denoting emotions and possibly other body internal states.
  • Mostert, P., Albers, A. M., Brinkman, L., Todorova, L., Kok, P., & De Lange, F. P. (2018). Eye movement-related confounds in neural decoding of visual working memory representations. eNeuro, 5(4): ENEURO.0401-17.2018. doi:10.1523/ENEURO.0401-17.2018.

    Abstract

    A relatively new analysis technique, known as neural decoding or multivariate pattern analysis (MVPA), has become increasingly popular for cognitive neuroimaging studies over recent years. These techniques promise to uncover the representational contents of neural signals, as well as the underlying code and the dynamic profile thereof. A field in which these techniques have led to novel insights in particular is that of visual working memory (VWM). In the present study, we subjected human volunteers to a combined VWM/imagery task while recording their neural signals using magnetoencephalography (MEG). We applied multivariate decoding analyses to uncover the temporal profile underlying the neural representations of the memorized item. Analysis of gaze position however revealed that our results were contaminated by systematic eye movements, suggesting that the MEG decoding results from our originally planned analyses were confounded. In addition to the eye movement analyses, we also present the original analyses to highlight how these might have readily led to invalid conclusions. Finally, we demonstrate a potential remedy, whereby we train the decoders on a functional localizer that was specifically designed to target bottom-up sensory signals and as such avoids eye movements. We conclude by arguing for more awareness of the potentially pervasive and ubiquitous effects of eye movement-related confounds.
  • Mulder, K., Van Heuven, W. J., & Dijkstra, T. (2018). Revisiting the neighborhood: How L2 proficiency and neighborhood manipulation affect bilingual processing. Frontiers in Psychology, 9: 1860. doi:10.3389/fpsyg.2018.01860.

    Abstract

    We conducted three neighborhood experiments with Dutch-English bilinguals to test effects of L2 proficiency and neighborhood characteristics within and between languages. In the past 20 years, the English (L2) proficiency of this population has considerably increased. To consider the impact of this development on neighborhood effects, we conducted a strict replication of the English lexical decision task by van Heuven, Dijkstra, & Grainger (1998, Exp. 4). In line with our prediction, English characteristics (neighborhood size, word and bigram frequency) dominated the word and nonword responses, while the nonwords also revealed an interaction of English and Dutch neighborhood size.
    The prominence of English was tested again in two experiments introducing a stronger neighborhood manipulation. In English lexical decision and progressive demasking, English items with no orthographic neighbors at all were contrasted with items having neighbors in English or Dutch (‘hermits’) only, or in both languages. In both tasks, target processing was affected strongly by the presence of English neighbors, but only weakly by Dutch neighbors. Effects are interpreted in terms of two underlying processing mechanisms: language-specific global lexical activation and lexical competition.
  • Mulhern, M. S., Stumpel, C., Stong, N., Brunner, H. G., Bier, L., Lippa, N., Riviello, J., Rouhl, R. P. W., Kempers, M., Pfundt, R., Stegmann, A. P. A., Kukolich, M. K., Telegrafi, A., Lehman, A., Lopez-Rangel, E., Houcinat, N., Barth, M., Den Hollander, N., Hoffer, M. J. V., Weckhuysen, S., Roovers, J., Djemie, T., Barca, D., Ceulemans, B., Craiu, D., Lemke, J. R., Korff, C., Mefford, H. C., Meyers, C. T., Siegler, Z., Hiatt, S. M., Cooper, G. M., Bebin, E. M., Snijders Blok, L., Veenstra-Knol, H. E., Baugh, E. H., Brilstra, E. H., Volker-Touw, C. M. L., Van Binsbergen, E., Revah-Politi, A., Pereira, E., McBrian, D., Pacault, M., Isidor, B., Le Caignec, C., Gilbert-Dussardier, B., Bilan, F., Heinzen, E. L., Goldstein, D. B., Stevens, S. J. C., & Sands, T. T. (2018). NBEA: Developmental disease gene with early generalized epilepsy phenotypes. Annals of Neurology, 84(5), 788-795. doi:10.1002/ana.25350.

    Abstract

    NBEA is a candidate gene for autism, and de novo variants have been reported in neurodevelopmental disease (NDD) cohorts. However, NBEA has not been rigorously evaluated as a disease gene, and associated phenotypes have not been delineated. We identified 24 de novo NBEA variants in patients with NDD, establishing NBEA as an NDD gene. Most patients had epilepsy with onset in the first few years of life, often characterized by generalized seizure types, including myoclonic and atonic seizures. Our data show a broader phenotypic spectrum than previously described, including a myoclonic-astatic epilepsy–like phenotype in a subset of patients.

  • Nieuwland, M. S., Martin, A. E., & Carreiras, M. (2012). Brain regions that process case: Evidence from Basque. Human Brain Mapping, 33(11), 2509-2520. doi:10.1002/hbm.21377.

    Abstract

    The aim of this event-related fMRI study was to investigate the cortical networks involved in case processing, an operation that is crucial to language comprehension yet whose neural underpinnings are not well understood. What is the relationship of these networks to those that serve other aspects of syntactic and semantic processing? Participants read Basque sentences that contained case violations, number agreement violations or semantic anomalies, or that were both syntactically and semantically correct. Case violations elicited activity increases, compared to correct control sentences, in a set of parietal regions including the posterior cingulate, the precuneus, and the left and right inferior parietal lobules. Number agreement violations also elicited activity increases in left and right inferior parietal regions, and additional activations in the left and right middle frontal gyrus. Regions-of-interest analyses showed that almost all of the clusters that were responsive to case or number agreement violations did not differentiate between these two. In contrast, the left and right anterior inferior frontal gyrus and the dorsomedial prefrontal cortex were only sensitive to semantic violations. Our results suggest that whereas syntactic and semantic anomalies clearly recruit distinct neural circuits, case and number violations recruit largely overlapping neural circuits, and that the distinction between the two rests on the relative contributions of parietal and prefrontal regions, respectively. Furthermore, our results are consistent with recently reported contributions of bilateral parietal and dorsolateral brain regions to syntactic processing, pointing towards potential extensions of current neurocognitive theories of language.
  • Nieuwland, M. S. (2012). Establishing propositional truth-value in counterfactual and real-world contexts during sentence comprehension: Differential sensitivity of the left and right inferior frontal gyri. NeuroImage, 59(4), 3433-3440. doi:10.1016/j.neuroimage.2011.11.018.

    Abstract

    What makes a proposition true or false has traditionally played an essential role in philosophical and linguistic theories of meaning. A comprehensive neurobiological theory of language must ultimately be able to explain the combined contributions of real-world truth-value and discourse context to sentence meaning. This fMRI study investigated the neural circuits that are sensitive to the propositional truth-value of sentences about counterfactual worlds, aiming to reveal differential hemispheric sensitivity of the inferior prefrontal gyri to counterfactual truth-value and real-world truth-value. Participants read true or false counterfactual conditional sentences (“If N.A.S.A. had not developed its Apollo Project, the first country to land on the moon would be Russia/America”) and real-world sentences (“Because N.A.S.A. developed its Apollo Project, the first country to land on the moon has been America/Russia”) that were matched on contextual constraint and truth-value. ROI analyses showed that whereas the left BA 47 showed similar activity increases to counterfactual false sentences and to real-world false sentences (compared to true sentences), the right BA 47 showed a larger increase for counterfactual false sentences. Moreover, whole-brain analyses revealed a distributed neural circuit for dealing with propositional truth-value. These results constitute the first evidence for hemispheric differences in processing counterfactual truth-value and real-world truth-value, and point toward additional right hemisphere involvement in counterfactual comprehension.
  • Nieuwland, M. S., Politzer-Ahles, S., Heyselaar, E., Segaert, K., Darley, E., Kazanina, N., Von Grebmer Zu Wolfsthurn, S., Bartolozzi, F., Kogan, V., Ito, A., Mézière, D., Barr, D. J., Rousselet, G., Ferguson, H. J., Busch-Moreno, S., Fu, X., Tuomainen, J., Kulakova, E., Husband, E. M., Donaldson, D. I., Kohút, Z., Rueschemeyer, S.-A., & Huettig, F. (2018). Large-scale replication study reveals a limit on probabilistic prediction in language comprehension. eLife, 7: e33468. doi:10.7554/eLife.33468.

    Abstract

    Do people routinely pre-activate the meaning and even the phonological form of upcoming words? The most acclaimed evidence for phonological prediction comes from a 2005 Nature Neuroscience publication by DeLong, Urbach and Kutas, who observed a graded modulation of electrical brain potentials (N400) to nouns and preceding articles by the probability that people use a word to continue the sentence fragment (‘cloze’). In our direct replication study spanning 9 laboratories (N=334), pre-registered replication analyses and exploratory Bayes factor analyses successfully replicated the noun results but, crucially, not the article results. Pre-registered single-trial analyses also yielded a statistically significant effect for the nouns but not the articles. Exploratory Bayesian single-trial analyses showed that the article effect may be non-zero but is likely far smaller than originally reported and too small to observe without very large sample sizes. Our results do not support the view that readers routinely pre-activate the phonological form of predictable words.

  • Nieuwland, M. S., & Martin, A. E. (2012). If the real world were irrelevant, so to speak: The role of propositional truth-value in counterfactual sentence comprehension. Cognition, 122(1), 102-109. doi:10.1016/j.cognition.2011.09.001.

    Abstract

    Propositional truth-value can be a defining feature of a sentence’s relevance to the unfolding discourse, and establishing propositional truth-value in context can be key to successful interpretation. In the current study, we investigate its role in the comprehension of counterfactual conditionals, which describe imaginary consequences of hypothetical events, and are thought to require keeping in mind both what is true and what is false. Pre-stored real-world knowledge may therefore intrude upon and delay counterfactual comprehension, which is predicted by some accounts of discourse comprehension, and has been observed during online comprehension. The impact of propositional truth-value may thus be delayed in counterfactual conditionals, as also claimed for sentences containing other types of logical operators (e.g., negation, scalar quantifiers). In an event-related potential (ERP) experiment, we investigated the impact of propositional truth-value when described consequences are both true and predictable given the counterfactual premise. False words elicited larger N400 ERPs than true words, in negated counterfactual sentences (e.g., “If N.A.S.A. had not developed its Apollo Project, the first country to land on the moon would have been Russia/America”) and real-world sentences (e.g., “Because N.A.S.A. developed its Apollo Project, the first country to land on the moon was America/Russia”) alike. These indistinguishable N400 effects of propositional truth-value, elicited by opposite word pairs, argue against disruptions by real-world knowledge during counterfactual comprehension, and suggest that incoming words are mapped onto the counterfactual context without any delay. Thus, provided a sufficiently constraining context, propositional truth-value rapidly impacts ongoing semantic processing, be the proposition factual or counterfactual.
  • Niso, G., Gorgolewski, K. J., Bock, E., Brooks, T. L., Flandin, G., Gramfort, A., Henson, R. N., Jas, M., Litvak, V., Moreau, J. T., Oostenveld, R., Schoffelen, J.-M., Tadel, F., Wexler, J., & Baillet, S. (2018). MEG-BIDS, the brain imaging data structure extended to magnetoencephalography. Scientific Data, 5: 180110. doi:10.1038/sdata.2018.110.

    Abstract

    We present a significant extension of the Brain Imaging Data Structure (BIDS) to support the specific aspects of magnetoencephalography (MEG) data. MEG measures brain activity with millisecond temporal resolution and unique source imaging capabilities. So far, BIDS was a solution to organise magnetic resonance imaging (MRI) data. The nature and acquisition parameters of MRI and MEG data are strongly dissimilar. Although there is no standard data format for MEG, we propose MEG-BIDS as a principled solution to store, organise, process and share the multidimensional data volumes produced by the modality. The standard also includes well-defined metadata, to facilitate future data harmonisation and sharing efforts. This responds to unmet needs from the multimodal neuroimaging community and paves the way to further integration of other techniques in electrophysiology. MEG-BIDS builds on MRI-BIDS, extending BIDS to a multimodal data structure. We feature several data-analytics software packages that have adopted MEG-BIDS, and a diverse sample of open MEG-BIDS data resources available to everyone.
  • Noordenbos, M., Segers, E., Serniclaes, W., Mitterer, H., & Verhoeven, L. (2012). Allophonic mode of speech perception in Dutch children at risk for dyslexia: A longitudinal study. Research in Developmental Disabilities, 33, 1469-1483. doi:10.1016/j.ridd.2012.03.021.

    Abstract

    There is ample evidence that individuals with dyslexia have a phonological deficit. A growing body of research also suggests that individuals with dyslexia have problems with categorical perception, as evidenced by weaker discrimination of between-category differences and better discrimination of within-category differences compared to average readers. Whether the categorical perception problems of individuals with dyslexia are a result of their reading problems or a cause has yet to be determined. Whether the observed perception deficit relates to a more general auditory deficit or is specific to speech also has yet to be determined. To shed more light on these issues, the categorical perception abilities of children at risk for dyslexia and chronological age controls were investigated before and after the onset of formal reading instruction in a longitudinal study. Both identification and discrimination data were collected using identical paradigms for speech and non-speech stimuli. Results showed the children at risk for dyslexia to shift from an allophonic mode of perception in kindergarten to a phonemic mode of perception in first grade, while the control group showed a phonemic mode already in kindergarten. The children at risk for dyslexia thus showed an allophonic perception deficit in kindergarten, which was later suppressed by phonemic perception as a result of formal reading instruction in first grade; allophonic perception in kindergarten can thus be treated as a clinical marker for the possibility of later reading problems.
  • Noordenbos, M., Segers, E., Serniclaes, W., Mitterer, H., & Verhoeven, L. (2012). Neural evidence of allophonic perception in children at risk for dyslexia. Neuropsychologia, 50, 2010-2017. doi:10.1016/j.neuropsychologia.2012.04.026.

    Abstract

    Learning to read is a complex process that develops normally in the majority of children and requires the mapping of graphemes to their corresponding phonemes. Problems with the mapping process nevertheless occur in about 5% of the population and are typically attributed to poor phonological representations, which are, in turn, attributed to underlying speech processing difficulties. We examined auditory discrimination of speech sounds in 6-year-old beginning readers with a familial risk of dyslexia (n=31) and no such risk (n=30) using the mismatch negativity (MMN). MMNs were recorded for stimuli belonging to either the same phoneme category (acoustic variants of /bə/) or different phoneme categories (/bə/ vs. /də/). Stimuli from different phoneme categories elicited MMNs in both the control and at-risk children, but the MMN amplitude was clearly lower in the at-risk children. In contrast, the stimuli from the same phoneme category elicited an MMN in only the children at risk for dyslexia. These results show children at risk for dyslexia to be sensitive to acoustic properties that are irrelevant in their language. Our findings thus suggest a possible cause of dyslexia in that they show 6-year-old beginning readers with at least one parent diagnosed with dyslexia to have a neural sensitivity to speech contrasts that are irrelevant in the ambient language. This sensitivity clearly hampers the development of stable phonological representations and thus leads to significant reading impairment later in life.
  • Noppeney, U., Jones, S. A., Rohe, T., & Ferrari, A. (2018). See what you hear – How the brain forms representations across the senses. Neuroforum, 24(4), 257-271. doi:10.1515/nf-2017-A066.

    Abstract

    Our senses are constantly bombarded with a myriad of signals. To make sense of this cacophony, the brain needs to integrate signals emanating from a common source, but segregate signals originating from different sources. Thus, multisensory perception relies critically on inferring the world’s causal structure (i.e., one common vs. multiple independent sources). Behavioural research has shown that the brain arbitrates between sensory integration and segregation consistent with the principles of Bayesian Causal Inference. At the neural level, recent functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) studies have shown that the brain accomplishes Bayesian Causal Inference by dynamically encoding multiple perceptual estimates across the sensory processing hierarchies. Only at the top of the hierarchy in anterior parietal cortices did the brain form perceptual estimates that take into account the observer’s uncertainty about the world’s causal structure consistent with Bayesian Causal Inference.
  • Nora, A., Hultén, A., Karvonen, L., Kim, J.-Y., Lehtonen, M., Yli-Kaitala, H., Service, E., & Salmelin, R. (2012). Long-term phonological learning begins at the level of word form. NeuroImage, 63, 789-799. doi:10.1016/j.neuroimage.2012.07.026.

    Abstract

    Incidental learning of phonological structures through repeated exposure is an important component of native and foreign-language vocabulary acquisition that is not well understood at the neurophysiological level. It is also not settled when this type of learning occurs at the level of word forms as opposed to phoneme sequences. Here, participants listened to and repeated back foreign phonological forms (Korean words) and new native-language word forms (Finnish pseudowords) on two days. Recognition performance was improved, repetition latency became shorter and repetition accuracy increased when phonological forms were encountered multiple times. Cortical magnetoencephalography responses occurred bilaterally but the experimental effects only in the left hemisphere. Superior temporal activity at 300–600 ms, probably reflecting acoustic-phonetic processing, lasted longer for foreign phonology than for native phonology. Formation of longer-term auditory-motor representations was evidenced by a decrease of a spatiotemporally separate left temporal response and correlated increase of left frontal activity at 600–1200 ms on both days. The results point to item-level learning of novel whole-word representations.
  • Norris, D., McQueen, J. M., & Cutler, A. (2018). Commentary on “Interaction in spoken word recognition models". Frontiers in Psychology, 9: 1568. doi:10.3389/fpsyg.2018.01568.
  • Oliver, G., Gullberg, M., Hellwig, F., Mitterer, H., & Indefrey, P. (2012). Acquiring L2 sentence comprehension: A longitudinal study of word monitoring in noise. Bilingualism: Language and Cognition, 15, 841-857. doi:10.1017/S1366728912000089.

    Abstract

    This study investigated the development of second language online auditory processing with ab initio German learners of Dutch. We assessed the influence of different levels of background noise and different levels of semantic and syntactic target word predictability on word-monitoring latencies. There was evidence of syntactic, but not lexical-semantic, transfer from the L1 to the L2 from the onset of L2 learning. An initial stronger adverse effect of noise on syntactic compared to phonological processing disappeared after two weeks of learning Dutch, suggesting a change towards more robust syntactic processing. At the same time, the L2 learners started to exploit semantic constraints predicting upcoming target words. The use of semantic predictability remained less efficient compared to native speakers until the end of the observation period. The improvement and the persistent problems in semantic processing we found were independent of noise and rather seem to reflect the need for more context information to build up online semantic representations in L2 listening.
  • Ostarek, M., Ishag, I., Joosen, D., & Huettig, F. (2018). Saccade trajectories reveal dynamic interactions of semantic and spatial information during the processing of implicitly spatial words. Journal of Experimental Psychology: Learning, Memory, and Cognition, 44(10), 1658-1670. doi:10.1037/xlm0000536.

    Abstract

    Implicit up/down words, such as bird and foot, systematically influence performance on visual tasks involving immediately following targets in compatible vs. incompatible locations. Recent studies have observed that the semantic relation between prime words and target pictures can strongly influence the size and even the direction of the effect: Semantically related targets are processed faster in congruent vs. incongruent locations (location-specific priming), whereas unrelated targets are processed slower in congruent locations. Here, we used eye-tracking to investigate the moment-to-moment processes underlying this pattern. Our reaction time results for related targets replicated the location-specific priming effect and showed a trend towards interference for unrelated targets. We then used growth curve analysis to test how up/down words and their match vs. mismatch with immediately following targets in terms of semantics and vertical location influence concurrent saccadic eye movements. There was a strong main effect of spatial association on linear growth, with up words biasing changes in y-coordinates over time upwards relative to down words (and vice versa). Similar to the RT data, this effect was strongest for semantically related targets and reversed for unrelated targets. Intriguingly, all conditions showed a bias in the congruent direction in the initial stage of the saccade. Then, at around halfway into the saccade, the effect kept increasing in the semantically related condition and reversed in the unrelated condition. These results suggest that online processing of up/down words triggers direction-specific oculomotor processes that are dynamically modulated by the semantic relation between prime words and targets.
  • Ozker, M., Yoshor, D., & Beauchamp, M. (2018). Converging evidence from electrocorticography and BOLD fMRI for a sharp functional boundary in superior temporal gyrus related to multisensory speech processing. Frontiers in Human Neuroscience, 12: 141. doi:10.3389/fnhum.2018.00141.

    Abstract

    Although humans can understand speech using the auditory modality alone, in noisy environments visual speech information from the talker’s mouth can rescue otherwise unintelligible auditory speech. To investigate the neural substrates of multisensory speech perception, we compared neural activity from the human superior temporal gyrus (STG) in two datasets. One dataset consisted of direct neural recordings (electrocorticography, ECoG) from surface electrodes implanted in epilepsy patients (this dataset has been previously published). The second dataset consisted of indirect measures of neural activity using blood oxygen level dependent functional magnetic resonance imaging (BOLD fMRI). Both ECoG and fMRI participants viewed the same clear and noisy audiovisual speech stimuli and performed the same speech recognition task. Both techniques demonstrated a sharp functional boundary in the STG, spatially coincident with an anatomical boundary defined by the posterior edge of Heschl’s gyrus. Cortex on the anterior side of the boundary responded more strongly to clear audiovisual speech than to noisy audiovisual speech while cortex on the posterior side of the boundary did not. For both ECoG and fMRI measurements, the transition between the functionally distinct regions happened within 10 mm of anterior-to-posterior distance along the STG. We relate this boundary to the multisensory neural code underlying speech perception and propose that it represents an important functional division within the human speech perception network.
  • Ozker, M., Yoshor, D., & Beauchamp, M. (2018). Frontal cortex selects representations of the talker’s mouth to aid in speech perception. eLife, 7: e30387. doi:10.7554/eLife.30387.
  • Palva, J. M., Wang, S. H., Palva, S., Zhigalov, A., Monto, S., Brookes, M. J., & Schoffelen, J.-M. (2018). Ghost interactions in MEG/EEG source space: A note of caution on inter-areal coupling measures. NeuroImage, 173, 632-643. doi:10.1016/j.neuroimage.2018.02.032.

    Abstract

    When combined with source modeling, magneto- (MEG) and electroencephalography (EEG) can be used to study long-range interactions among cortical processes non-invasively. Estimation of such inter-areal connectivity is nevertheless hindered by instantaneous field spread and volume conduction, which artificially introduce linear correlations and impair source separability in cortical current estimates. To overcome the inflating effects of linear source mixing inherent to standard interaction measures, alternative phase- and amplitude-correlation based connectivity measures, such as imaginary coherence and orthogonalized amplitude correlation, have been proposed. Being by definition insensitive to zero-lag correlations, these techniques have become increasingly popular in the identification of correlations that cannot be attributed to field spread or volume conduction. We show here, however, that while these measures are immune to the direct effects of linear mixing, they may still reveal large numbers of spurious false positive connections through field spread in the vicinity of true interactions. This fundamental problem affects both region-of-interest-based analyses and all-to-all connectome mappings. Most importantly, beyond defining and illustrating the problem of spurious, or “ghost” interactions, we provide a rigorous quantification of this effect through extensive simulations. Additionally, we further show that signal mixing also significantly limits the separability of neuronal phase and amplitude correlations. We conclude that spurious correlations must be carefully considered in connectivity analyses in MEG/EEG source space even when using measures that are immune to zero-lag correlations.
  • Pascucci, D., Hervais-Adelman, A., & Plomp, G. (2018). Gating by induced A-Gamma asynchrony in selective attention. Human Brain Mapping, 39(10), 3854-3870. doi:10.1002/hbm.24216.

    Abstract

    Visual selective attention operates through top-down mechanisms of signal enhancement and suppression, mediated by α-band oscillations. The effects of such top-down signals on local processing in primary visual cortex (V1) remain poorly understood. In this work, we characterize the interplay between large-scale interactions and local activity changes in V1 that orchestrates selective attention, using Granger-causality and phase-amplitude coupling (PAC) analysis of EEG source signals. The task required participants to either attend to or ignore oriented gratings. Results from time-varying, directed connectivity analysis revealed frequency-specific effects of attentional selection: bottom-up γ-band influences from visual areas increased rapidly in response to attended stimuli while distributed top-down α-band influences originated from parietal cortex in response to ignored stimuli. Importantly, the results revealed a critical interplay between top-down parietal signals and α–γ PAC in visual areas. Parietal α-band influences disrupted the α–γ coupling in visual cortex, which in turn reduced the amount of γ-band outflow from visual areas. Our results are a first demonstration of how directed interactions affect cross-frequency coupling in downstream areas depending on task demands. These findings suggest that parietal cortex realizes selective attention by disrupting cross-frequency coupling at target regions, which prevents them from propagating task-irrelevant information.
  • Paternoster, L., Zhurov, A., Toma, A., Kemp, J., St Pourcain, B., Timpson, N., McMahon, G., McArdle, W., Ring, S., Smith, G., Richmond, S., & Evans, D. (2012). Genome-wide Association Study of Three-Dimensional Facial Morphology Identifies a Variant in PAX3 Associated with Nasion Position. The American Journal of Human Genetics, 90(3), 478-485. doi:10.1016/j.ajhg.2011.12.021.

    Abstract

    Craniofacial morphology is highly heritable, but little is known about which genetic variants influence normal facial variation in the general population. We aimed to identify genetic variants associated with normal facial variation in a population-based cohort of 15-year-olds from the Avon Longitudinal Study of Parents and Children. 3D high-resolution images were obtained with two laser scanners, these were merged and aligned, and 22 landmarks were identified and their x, y, and z coordinates used to generate 54 3D distances reflecting facial features. 14 principal components (PCs) were also generated from the landmark locations. We carried out genome-wide association analyses of these distances and PCs in 2,185 adolescents and attempted to replicate any significant associations in a further 1,622 participants. In the discovery analysis no associations were observed with the PCs, but we identified four associations with the distances, and one of these, the association between rs7559271 in PAX3 and the nasion to midendocanthion distance (n-men), was replicated (p = 4 × 10⁻⁷). In a combined analysis, each G allele of rs7559271 was associated with an increase in n-men distance of 0.39 mm (p = 4 × 10⁻¹⁶), explaining 1.3% of the variance. Independent associations were observed in both the z (nasion prominence) and y (nasion height) dimensions (p = 9 × 10⁻⁹ and p = 9 × 10⁻¹⁰, respectively), suggesting that the locus primarily influences growth in the yz plane. Rare variants in PAX3 are known to cause Waardenburg syndrome, which involves deafness, pigmentary abnormalities, and facial characteristics including a broad nasal bridge. Our findings show that common variants within this gene also influence normal craniofacial development.
  • Peeters, D. (2018). A standardized set of 3D-objects for virtual reality research and applications. Behavior Research Methods, 50(3), 1047-1054. doi:10.3758/s13428-017-0925-3.

    Abstract

    The use of immersive virtual reality as a research tool is rapidly increasing in numerous scientific disciplines. By combining ecological validity with strict experimental control, immersive virtual reality provides the potential to develop and test scientific theory in rich environments that closely resemble everyday settings. This article introduces the first standardized database of colored three-dimensional (3D) objects that can be used in virtual reality and augmented reality research and applications. The 147 objects have been normed for name agreement, image agreement, familiarity, visual complexity, and corresponding lexical characteristics of the modal object names. The availability of standardized 3D-objects for virtual reality research is important, as reaching valid theoretical conclusions critically hinges on the use of well controlled experimental stimuli. Sharing standardized 3D-objects across different virtual reality labs will allow for science to move forward more quickly.
  • Peeters, D., & Dijkstra, T. (2018). Sustained inhibition of the native language in bilingual language production: A virtual reality approach. Bilingualism: Language and Cognition, 21(5), 1035-1061. doi:10.1017/S1366728917000396.

    Abstract

    Bilinguals often switch languages as a function of the language background of their addressee. The control mechanisms supporting bilinguals' ability to select the contextually appropriate language are heavily debated. Here we present four experiments in which unbalanced bilinguals named pictures in their first language Dutch and their second language English in mixed and blocked contexts. Immersive virtual reality technology was used to increase the ecological validity of the cued language-switching paradigm. Behaviorally, we consistently observed symmetrical switch costs, reversed language dominance, and asymmetrical mixing costs. These findings indicate that unbalanced bilinguals apply sustained inhibition to their dominant L1 in mixed language settings. Consequent enhanced processing costs for the L1 in a mixed versus a blocked context were reflected by a sustained positive component in event-related potentials. Methodologically, the use of virtual reality opens up a wide range of possibilities to study language and communication in bilingual and other communicative settings.
  • Perlman, M., Little, H., Thompson, B., & Thompson, R. L. (2018). Iconicity in signed and spoken vocabulary: A comparison between American Sign Language, British Sign Language, English, and Spanish. Frontiers in Psychology, 9: 1433. doi:10.3389/fpsyg.2018.01433.

    Abstract

    Considerable evidence now shows that all languages, signed and spoken, exhibit a significant amount of iconicity. We examined how the visual-gestural modality of signed languages facilitates iconicity for different kinds of lexical meanings compared to the auditory-vocal modality of spoken languages. We used iconicity ratings of hundreds of signs and words to compare iconicity across the vocabularies of two signed languages – American Sign Language and British Sign Language, and two spoken languages – English and Spanish. We examined (1) the correlation in iconicity ratings between the languages; (2) the relationship between iconicity and an array of semantic variables (ratings of concreteness, sensory experience, imageability, perceptual strength of vision, audition, touch, smell and taste); (3) how iconicity varies between broad lexical classes (nouns, verbs, adjectives, grammatical words and adverbs); and (4) between more specific semantic categories (e.g., manual actions, clothes, colors). The results show several notable patterns that characterize how iconicity is spread across the four vocabularies. There were significant correlations in the iconicity ratings between the four languages, including English with ASL, BSL, and Spanish. The highest correlation was between ASL and BSL, suggesting iconicity may be more transparent in signs than words. In each language, iconicity was distributed according to the semantic variables in ways that reflect the semiotic affordances of the modality (e.g., more concrete meanings more iconic in signs, not words; more auditory meanings more iconic in words, not signs; more tactile meanings more iconic in both signs and words). Analysis of the 220 meanings with ratings in all four languages further showed characteristic patterns of iconicity across broad and specific semantic domains, including those that distinguished between signed and spoken languages (e.g., verbs more iconic in ASL, BSL, and English, but not Spanish; manual actions especially iconic in ASL and BSL; adjectives more iconic in English and Spanish; color words especially low in iconicity in ASL and BSL). These findings provide the first quantitative account of how iconicity is spread across the lexicons of signed languages in comparison to spoken languages.
  • Perniss, P. M., Vinson, D., Seifart, F., & Vigliocco, G. (2012). Speaking of shape: The effects of language-specific encoding on semantic representations. Language and Cognition, 4, 223-242. doi:10.1515/langcog-2012-0012.

    Abstract

    The question of whether different linguistic patterns differentially influence semantic and conceptual representations is of central interest in cognitive science. In this paper, we investigate whether the regular encoding of shape within a nominal classification system leads to an increased salience of shape in speakers' semantic representations by comparing English, (Amazonian) Spanish, and Bora, a shape-based classifier language spoken in the Amazonian regions of Colombia and Peru. Crucially, in displaying obligatory use, pervasiveness in grammar, high discourse frequency, and phonological variability of forms corresponding to particular shape features, the Bora classifier system differs in important ways from those in previous studies investigating effects of nominal classification, thereby allowing better control of factors that may have influenced previous findings. In addition, the inclusion of Spanish monolinguals living in the Bora village allowed control for the possibility that differences found between English and Bora speakers may be attributed to their very different living environments. We found that shape is more salient in the semantic representation of objects for speakers of Bora, which systematically encodes shape, than for speakers of English and Spanish, which do not. Our results are consistent with assumptions that semantic representations are shaped and modulated by our specific linguistic experiences.
  • Perry, L. K., Perlman, M., Winter, B., Massaro, D. W., & Lupyan, G. (2018). Iconicity in the speech of children and adults. Developmental Science, 21: e12572. doi:10.1111/desc.12572.

    Abstract

    Iconicity – the correspondence between form and meaning – may help young children learn to use new words. Early-learned words are higher in iconicity than later learned words. However, it remains unclear what role iconicity may play in actual language use. Here, we ask whether iconicity relates not just to the age at which words are acquired, but also to how frequently children and adults use the words in their speech. If iconicity serves to bootstrap word learning, then we would expect that children should say highly iconic words more frequently than less iconic words, especially early in development. We would also expect adults to use iconic words more often when speaking to children than to other adults. We examined the relationship between frequency and iconicity for approximately 2000 English words. Replicating previous findings, we found that more iconic words are learned earlier. Moreover, we found that more iconic words tend to be used more by younger children, and adults use more iconic words when speaking to children than to other adults. Together, our results show that young children not only learn words rated high in iconicity earlier than words low in iconicity, but they also produce these words more frequently in conversation – a pattern that is reciprocated by adults when speaking with children. Thus, the earliest conversations of children are relatively higher in iconicity, suggesting that this iconicity scaffolds the production and comprehension of spoken language during early development.
  • Petersson, K. M., & Hagoort, P. (2012). The neurobiology of syntax: Beyond string-sets [Review article]. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 367, 1971-1983. doi:10.1098/rstb.2012.0101.

    Abstract

    The human capacity to acquire language is an outstanding scientific challenge to understand. Somehow our language capacities arise from the way the human brain processes, develops and learns in interaction with its environment. To set the stage, we begin with a summary of what is known about the neural organization of language and what our artificial grammar learning (AGL) studies have revealed. We then review the Chomsky hierarchy in the context of the theory of computation and formal learning theory. Finally, we outline a neurobiological model of language acquisition and processing based on an adaptive, recurrent, spiking network architecture. This architecture implements an asynchronous, event-driven, parallel system for recursive processing. We conclude that the brain represents grammars (or more precisely, the parser/generator) in its connectivity, and its ability for syntax is based on neurobiological infrastructure for structured sequence processing. The acquisition of this ability is accounted for in an adaptive dynamical systems framework. Artificial language learning (ALL) paradigms might be used to study the acquisition process within such a framework, as well as the processing properties of the underlying neurobiological infrastructure. However, it is necessary to combine and constrain the interpretation of ALL results by theoretical models and empirical studies on natural language processing. Given that the faculty of language is captured by classical computational models to a significant extent, and that these can be embedded in dynamic network architectures, there is hope that significant progress can be made in understanding the neurobiology of the language faculty.
  • Petersson, K. M., Folia, V., & Hagoort, P. (2012). What artificial grammar learning reveals about the neurobiology of syntax. Brain and Language, 120, 83-95. doi:10.1016/j.bandl.2010.08.003.

    Abstract

    In this paper we examine the neurobiological correlates of syntax, the processing of structured sequences, by comparing FMRI results on artificial and natural language syntax. We discuss these and similar findings in the context of formal language and computability theory. We used a simple right-linear unification grammar in an implicit artificial grammar learning paradigm in 32 healthy Dutch university students (natural language FMRI data were already acquired for these participants). We predicted that artificial syntax processing would engage the left inferior frontal region (BA 44/45) and that this activation would overlap with syntax-related variability observed in the natural language experiment. The main findings of this study show that the left inferior frontal region centered on BA 44/45 is active during artificial syntax processing of well-formed (grammatical) sequences independent of local subsequence familiarity. The same region is engaged to a greater extent when a syntactic violation is present and structural unification becomes difficult or impossible. The effects related to artificial syntax in the left inferior frontal region (BA 44/45) were essentially identical when we masked these with activity related to natural syntax in the same subjects. Finally, the medial temporal lobe was deactivated during this operation, consistent with the view that implicit processing does not rely on declarative memory mechanisms that engage the medial temporal lobe. In the context of recent FMRI findings, we raise the question whether Broca's region (or subregions) is specifically related to syntactic movement operations or the processing of hierarchically nested non-adjacent dependencies in the discussion section. We conclude that this is not the case. Instead, we argue that the left inferior frontal region is a generic on-line sequence processor that unifies information from various sources in an incremental and recursive manner, independent of whether there are any processing requirements related to syntactic movement or hierarchically nested structures. In addition, we argue that the Chomsky hierarchy is not directly relevant for neurobiological systems.
  • Pettenati, P., Sekine, K., Congestrì, E., & Volterra, V. (2012). A comparative study on representational gestures in Italian and Japanese children. Journal of Nonverbal Behavior, 36(2), 149-164. doi:10.1007/s10919-011-0127-0.

    Abstract

    This study compares words and gestures produced in a controlled experimental setting by children raised in different linguistic/cultural environments to examine the robustness of gesture use at an early stage of lexical development. Twenty-two Italian and twenty-two Japanese toddlers (age range 25–37 months) performed the same picture-naming task. Italians produced more spoken correct labels than Japanese but a similar amount of representational gestures temporally matched with words. However, Japanese gestures reproduced more closely the action represented in the picture. Results confirm that gestures are linked to motor actions similarly for all children, suggesting a common developmental stage, only minimally influenced by culture.
  • Piai, V., Rommers, J., & Knight, R. T. (2018). Lesion evidence for a critical role of left posterior but not frontal areas in alpha–beta power decreases during context-driven word production. European Journal of Neuroscience, 48(7), 2622-2629. doi:10.1111/ejn.13695.

    Abstract

    Different frequency bands in the electroencephalogram are postulated to support distinct language functions. Studies have suggested that alpha–beta power decreases may index word-retrieval processes. In context-driven word retrieval, participants hear lead-in sentences that either constrain the final word (‘He locked the door with the’) or not (‘She walked in here with the’). The last word is shown as a picture to be named. Previous studies have consistently found alpha–beta power decreases prior to picture onset for constrained relative to unconstrained sentences, localised to the left lateral-temporal and lateral-frontal lobes. However, the relative contribution of temporal versus frontal areas to alpha–beta power decreases is unknown. We recorded the electroencephalogram from patients with stroke lesions encompassing the left lateral-temporal and inferior-parietal regions or the left lateral-frontal lobe, and from matched controls. Individual participant analyses revealed a behavioural sentence context facilitation effect in all participants, except for the two patients with extensive lesions to the temporal and inferior parietal lobes. We replicated the alpha–beta power decreases prior to picture onset in all participants, except for the same two patients with extensive posterior lesions. Thus, whereas posterior lesions eliminated the behavioural and oscillatory context effect, frontal lesions did not. Hierarchical clustering analyses of all patients’ lesion profiles, behavioural effects, and electrophysiological effects identified those two patients as having a unique combination of lesion distribution and context effects. These results indicate a critical role for the left lateral-temporal and inferior parietal lobes, but not frontal cortex, in generating the alpha–beta power decreases underlying context-driven word production.
  • Piai, V., Roelofs, A., & Schriefers, H. (2012). Distractor strength and selective attention in picture-naming performance. Memory & Cognition, 40, 614-627. doi:10.3758/s13421-011-0171-3.

    Abstract

    Whereas it has long been assumed that competition plays a role in lexical selection in word production (e.g., Levelt, Roelofs, & Meyer, 1999), recently Finkbeiner and Caramazza (2006) argued against the competition assumption on the basis of their observation that visible distractors yield semantic interference in picture naming, whereas masked distractors yield semantic facilitation. We examined an alternative account of these findings that preserves the competition assumption. According to this account, the interference and facilitation effects of distractor words reflect whether or not distractors are strong enough to exceed a threshold for entering the competition process. We report two experiments in which distractor strength was manipulated by means of coactivation and visibility. Naming performance was assessed in terms of mean response time (RT) and RT distributions. In Experiment 1, with low coactivation, semantic facilitation was obtained from clearly visible distractors, whereas poorly visible distractors yielded no semantic effect. In Experiment 2, with high coactivation, semantic interference was obtained from both clearly and poorly visible distractors. These findings support the competition threshold account of the polarity of semantic effects in naming.