Publications

Displaying 301 - 400 of 626
  • Lev-Ari, S., & Keysar, B. (2010). Why don't we believe non-native speakers? The influence of accent on credibility. Journal of Experimental Social Psychology, 46(6), 1093-1096. doi:10.1016/j.jesp.2010.05.025.

    Abstract

    Non-native speech is harder to understand than native speech. We demonstrate that this “processing difficulty” causes non-native speakers to sound less credible. People judged trivia statements such as “Ants don't sleep” as less true when spoken by a non-native than a native speaker. When people were made aware of the source of their difficulty they were able to correct when the accent was mild but not when it was heavy. This effect was not due to stereotypes or prejudice against foreigners because it occurred even though speakers were merely reciting statements provided by a native speaker. Such reduction of credibility may have an insidious impact on millions of people who routinely communicate in a language which is not their native tongue.
  • Levelt, W. J. M. (2000). Uit talloos veel miljoenen. Natuur & Techniek, 68(11), 90.
  • Levelt, C. C., Schiller, N. O., & Levelt, W. J. M. (1999). A developmental grammar for syllable structure in the production of child language. Brain and Language, 68, 291-299.

    Abstract

    The order of acquisition of Dutch syllable types by first language learners is analyzed as following from an initial ranking and subsequent rerankings of constraints in an optimality theoretic grammar. Initially, structural constraints are all ranked above faithfulness constraints, leading to core syllable (CV) productions only. Subsequently, faithfulness gradually rises to the highest position in the ranking, allowing more and more marked syllable types to appear in production. Local conjunctions of structural constraints allow for a more detailed analysis.
  • Levelt, W. J. M., Roelofs, A., & Meyer, A. S. (1999). A theory of lexical access in speech production. Behavioral and Brain Sciences, 22, 1-38. doi:10.1017/S0140525X99001776.

    Abstract

    Preparing words in speech production is normally a fast and accurate process. We generate them two or three per second in fluent conversation; and overtly naming a clear picture of an object can easily be initiated within 600 msec after picture onset. The underlying process, however, is exceedingly complex. The theory reviewed in this target article analyzes this process as staged and feedforward. After a first stage of conceptual preparation, word generation proceeds through lexical selection, morphological and phonological encoding, phonetic encoding, and articulation itself. In addition, the speaker exerts some degree of output control, by monitoring of self-produced internal and overt speech. The core of the theory, ranging from lexical selection to the initiation of phonetic encoding, is captured in a computational model, called WEAVER++. Both the theory and the computational model have been developed in interaction with reaction time experiments, particularly in picture naming or related word production paradigms, with the aim of accounting for the real-time processing in normal word production. A comprehensive review of theory, model, and experiments is presented. The model can handle some of the main observations in the domain of speech errors (the major empirical domain for most other theories of lexical access), and the theory opens new ways of approaching the cerebral organization of speech production by way of high-temporal-resolution imaging.
  • Levelt, W. J. M. (2000). Dyslexie. Natuur & Techniek, 68(4), 64.
  • Levelt, W. J. M. (1999). Models of word production. Trends in Cognitive Sciences, 3, 223-232.

    Abstract

    Research on spoken word production has been approached from two angles. In one research tradition, the analysis of spontaneous or induced speech errors led to models that can account for speech error distributions. In another tradition, the measurement of picture naming latencies led to chronometric models accounting for distributions of reaction times in word production. Both kinds of models are, however, dealing with the same underlying processes: (1) the speaker’s selection of a word that is semantically and syntactically appropriate; (2) the retrieval of the word’s phonological properties; (3) the rapid syllabification of the word in context; and (4) the preparation of the corresponding articulatory gestures. Models of both traditions explain these processes in terms of activation spreading through a localist, symbolic network. By and large, they share the main levels of representation: conceptual/semantic, syntactic, phonological and phonetic. They differ in various details, such as the amount of cascading and feedback in the network. These research traditions have begun to merge in recent years, leading to highly constructive experimentation. Currently, they are like two similar knives honing each other. A single pair of scissors is in the making.
  • Levelt, W. J. M., Roelofs, A., & Meyer, A. S. (1999). Multiple perspectives on lexical access [authors' response]. Behavioral and Brain Sciences, 22, 61-72. doi:10.1017/S0140525X99451775.
  • Levelt, W. J. M. (2000). Links en rechts: Waarom hebben we zo vaak problemen met die woorden? Natuur & Techniek, 68(7/8), 90.
  • Levelt, W. J. M. (2018). Is language natural to man? Some historical considerations. Current Opinion in Behavioral Sciences, 21, 127-131. doi:10.1016/j.cobeha.2018.04.003.

    Abstract

    Since the Enlightenment period, natural theories of speech and language evolution have flourished in the language sciences. Four ever-returning core issues are highlighted in this paper: Firstly, is language natural to man or just an invention? Secondly, is language a specific human ability (a ‘language instinct’) or does it arise from general cognitive capacities we share with other animals? Thirdly, has the evolution of language been a gradual process or did it rather suddenly arise, due to some ‘evolutionary twist’? Lastly, is the child's language acquisition an appropriate model for language evolution?
  • Levelt, C. C., Schiller, N. O., & Levelt, W. J. M. (2000). The acquisition of syllable types. Language Acquisition, 8(3), 237-263. doi:10.1207/S15327817LA0803_2.

    Abstract

    In this article, we present an account of developmental data regarding the acquisition of syllable types. The data come from a longitudinal corpus of phonetically transcribed speech of 12 children acquiring Dutch as their first language. A developmental order of acquisition of syllable types was deduced by aligning the syllabified data on a Guttman scale. This order could be analyzed as following from an initial ranking and subsequent rerankings in the grammar of the structural constraints ONSET, NO-CODA, *COMPLEX-O, and *COMPLEX-C; some local conjunctions of these constraints; and a faithfulness constraint FAITH. The syllable type frequencies in the speech surrounding the language learner are also considered. An interesting correlation is found between the frequencies and the order of development of the different syllable types.
  • Levelt, W. J. M. (2000). The brain does not serve linguistic theory so easily [Commentary to target article by Grodzinsky]. Behavioral and Brain Sciences, 23(1), 40-41.
  • Levelt, W. J. M., & Meyer, A. S. (2000). Word for word: Multiple lexical access in speech production. European Journal of Cognitive Psychology, 12(4), 433-452. doi:10.1080/095414400750050178.

    Abstract

    It is quite normal for us to produce one or two million word tokens every year. Speaking is a dear occupation and producing words is at the core of it. Still, producing even a single word is a highly complex affair. Recently, Levelt, Roelofs, and Meyer (1999) reviewed their theory of lexical access in speech production, which dissects the word-producing mechanism as a staged application of various dedicated operations. The present paper begins by presenting a bird's-eye view of this mechanism. We then square the complexity by asking how speakers control multiple access in generating simple utterances such as a table and a chair. In particular, we address two issues. The first one concerns dependency: Do temporally contiguous access procedures interact in any way, or do they run in modular fashion? The second issue concerns temporal alignment: How much temporal overlap of processing does the system tolerate in accessing multiple content words, such as table and chair? Results from picture-word interference and eye tracking experiments provide evidence for restricted cases of dependency as well as for constraints on the temporal alignment of access procedures.
  • Levinson, S. C. (2010). Advancing our grasp of constrained variation in a crucial cognitive domain [Comment on Doug Jones]. Behavioral and Brain Sciences, 33, 391-392. doi:10.1017/S0140525X1000141X.

    Abstract

    Jones's system of constraints promises interesting insights into the typology of kin term systems. Three problems arise: (1) the conflation of categories with algorithms that assign them threatens to weaken the typological predictions; (2) OT-type constraints have little psychological plausibility; (3) the conflation of kin-term systems and kinship systems may underplay the "utility function" character of real kinship in action.
  • Levinson, S. C. (1999). Maxim. Journal of Linguistic Anthropology, 9, 144-147. doi:10.1525/jlin.1999.9.1-2.144.
  • Levinson, S. C. (2010). Questions and responses in Yélî Dnye, the Papuan language of Rossel Island. Journal of Pragmatics, 42, 2741-2755. doi:10.1016/j.pragma.2010.04.009.

    Abstract

    A corpus of 350 naturally-occurring questions in videotaped interaction shows that questions and their responses in Yélî Dnye (the Papuan language of Rossel Island) both conform to clear universal expectations but also have a number of language-specific peculiarities. They conform in that polar and wh-questions are unrelated in form, wh-questions have the usual sort of special forms, and responses show the same priorities as in other languages (for fast cooperative, adequate answers). But, less expected perhaps, Yélî Dnye polar questions (excepting tags) are unmarked in both morphosyntax and prosody, and the responses include conventional facial expressions, conforming to the propositional response system type (so that assent to ‘He didn’t come?’ means ‘no, he didn’t’). These visual signals are facilitated by high levels of mutual gaze making rapid early responses possible. Tags can occur with non-interrogative illocutionary forces, and could be held to perform speech acts of their own. Wh-questions utilize about a dozen wh-forms, which are only optionally fronted, and there are some interesting specializations of forms (e.g. ‘who’ for any named entities other than places). Most questions of all types are genuinely information seeking, with 27% (mostly tags) seeking confirmation, 19% requesting repair.
  • Levinson, S. C. (2018). Spatial cognition, empathy and language evolution. Studies in Pragmatics, 20, 16-21.

    Abstract

    The evolution of language and spatial cognition may have been deeply interconnected. The argument goes as follows: 1. Human native spatial abilities are poor, but we make up for it with linguistic and cultural prostheses; 2. The explanation for the loss of native spatial abilities may be that language has cannibalized the hippocampus, the mammalian mental ‘GPS’; 3. Consequently, language may have borrowed conceptual primitives from spatial cognition (in line with ‘localism’), these being differentially combined in different languages; 4. The hippocampus may have been colonized because: (a) space was prime subject matter for communication, (b) gesture uses space to represent space, and was likely precursor to language. In order to explain why the other great apes haven’t gone in the same direction, we need to invoke other factors, notably the ‘interaction engine’, the ensemble of interactional abilities that make cooperative communication possible and provide the matrix for the evolution and learning of language.
  • Levinson, S. C., & Evans, N. (2010). Time for a sea-change in linguistics: Response to comments on 'The myth of language universals'. Lingua, 120, 2733-2758. doi:10.1016/j.lingua.2010.08.001.

    Abstract

    This paper argues that the language sciences are on the brink of major changes in primary data, methods and theory. Reactions to ‘The myth of language universals’ ([Evans and Levinson, 2009a] and [Evans and Levinson, 2009b]) divide in response to these new challenges. Chomskyan-inspired ‘C-linguists’ defend a status quo, based on intuitive data and disparate universalizing abstract frameworks, reflecting 30 years of changing models. Linguists driven by interests in richer data and linguistic diversity, ‘D-linguists’, though more responsive to the new developments, have tended to lack an integrating framework. Here we outline such an integrative framework of the kind we were presupposing in ‘Myth’, namely a coevolutionary model of the interaction between mind and cultural linguistic traditions which puts variation central at all levels – a model that offers the right kind of response to the new challenges. In doing so we traverse the fundamental questions raised by the commentary in this special issue: What constitutes the data, what is the place of formal representations, how should linguistic comparison be done, what counts as explanation, what is the source of design in language? Radical changes in data, methods and theory are upon us. The future of the discipline will depend on responses to these changes: either the field turns in on itself and atrophies, or it modernizes, and tries to capitalize on the way language lies at the intersection of all the disciplines interested in human nature.
  • Levinson, S. C. (2000). Yélî Dnye and the theory of basic color terms. Journal of Linguistic Anthropology, 10(1), 3-55. doi:10.1525/jlin.2000.10.1.3.

    Abstract

    The theory of basic color terms was a crucial factor in the demise of linguistic relativity. The theory is now once again under scrutiny and fundamental revision. This article details a case study that undermines one of the central claims of the classical theory, namely that languages universally treat color as a unitary domain, to be exhaustively named. Taken together with other cases, the study suggests that a number of languages have only an incipient color terminology, raising doubts about the linguistic universality of such terminology.
  • Levshina, N. (2018). Probabilistic grammar and constructional predictability: Bayesian generalized additive models of help. Glossa: a journal of general linguistics, 3(1): 55. doi:10.5334/gjgl.294.

    Abstract

    The present study investigates the construction with help followed by the bare or to-infinitive in seven varieties of web-based English from Australia, Ghana, Great Britain, Hong Kong, India, Jamaica and the USA. In addition to various factors known from the literature, such as register, minimization of cognitive complexity and avoidance of identity (horror aequi), it studies the effect of predictability of the infinitive given help and the other way round on the language user’s choice between the constructional variants. These probabilistic constraints are tested in a series of Bayesian generalized additive mixed-effects regression models. The results demonstrate that the to-infinitive is particularly frequent in contexts with low predictability, or, in information-theoretic terms, with high information content. This tendency is interpreted as communicatively efficient behaviour, when more predictable units of discourse get less formal marking, and less predictable ones get more formal marking. However, the strength, shape and directionality of predictability effects exhibit variation across the countries, which demonstrates the importance of the cross-lectal perspective in research on communicative efficiency and other universal functional principles.
  • Lewis, A. G., Schriefers, H., Bastiaansen, M., & Schoffelen, J.-M. (2018). Assessing the utility of frequency tagging for tracking memory-based reactivation of word representations. Scientific Reports, 8: 7897. doi:10.1038/s41598-018-26091-3.

    Abstract

    Reinstatement of memory-related neural activity measured with high temporal precision potentially provides a useful index for real-time monitoring of the timing of activation of memory content during cognitive processing. The utility of such an index extends to any situation where one is interested in the (relative) timing of activation of different sources of information in memory, a paradigm case of which is tracking lexical activation during language processing. Essential for this approach is that memory reinstatement effects are robust, so that their absence (in the average) definitively indicates that no lexical activation is present. We used electroencephalography to test the robustness of a reported subsequent memory finding involving reinstatement of frequency-specific entrained oscillatory brain activity during subsequent recognition. Participants learned lists of words presented on a background flickering at either 6 or 15 Hz to entrain a steady-state brain response. Target words subsequently presented on a non-flickering background that were correctly identified as previously seen exhibited reinstatement effects at both entrainment frequencies. Reliability of these statistical inferences was however critically dependent on the approach used for multiple comparisons correction. We conclude that effects are not robust enough to be used as a reliable index of lexical activation during language processing.

    Additional information

    Lewis_etal_2018sup.docx
  • Liang, S., Vega, R., Kong, X., Deng, W., Wang, Q., Ma, X., Li, M., Hu, X., Greenshaw, A. J., Greiner, R., & Li, T. (2018). Neurocognitive Graphs of First-Episode Schizophrenia and Major Depression Based on Cognitive Features. Neuroscience Bulletin, 34(2), 312-320. doi:10.1007/s12264-017-0190-6.

    Abstract

    Neurocognitive deficits are frequently observed in patients with schizophrenia and major depressive disorder (MDD). The relations between cognitive features may be represented by neurocognitive graphs based on cognitive features, modeled as Gaussian Markov random fields. However, it is unclear whether it is possible to differentiate between phenotypic patterns associated with the differential diagnosis of schizophrenia and depression using this neurocognitive graph approach. In this study, we enrolled 215 first-episode patients with schizophrenia (FES), 125 with MDD, and 237 demographically-matched healthy controls (HCs). The cognitive performance of all participants was evaluated using a battery of neurocognitive tests. The graphical LASSO model was trained with a one-vs-one scenario to learn the conditional independent structure of neurocognitive features of each group. Participants in the holdout dataset were classified into different groups with the highest likelihood. A partial correlation matrix was transformed from the graphical model to further explore the neurocognitive graph for each group. The classification approach identified the diagnostic class for individuals with an average accuracy of 73.41% for FES vs HC, 67.07% for MDD vs HC, and 59.48% for FES vs MDD. Both of the neurocognitive graphs for FES and MDD had more connections and higher node centrality than those for HC. The neurocognitive graph for FES was less sparse and had more connections than that for MDD. Thus, neurocognitive graphs based on cognitive features are promising for describing endophenotypes that may discriminate schizophrenia from depression.

    Additional information

    Liang_etal_2017sup.pdf
  • Ligthart, S., Vaez, A., Võsa, U., Stathopoulou, M. G., De Vries, P. S., Prins, B. P., Van der Most, P. J., Tanaka, T., Naderi, E., Rose, L. M., Wu, Y., Karlsson, R., Barbalic, M., Lin, H., Pool, R., Zhu, G., Macé, A., Sidore, C., Trompet, S., Mangino, M., Sabater-Lleal, M., Kemp, J. P., Abbasi, A., Kacprowski, T., Verweij, N., Smith, A. V., Huang, T., Marzi, C., Feitosa, M. F., Lohman, K. K., Kleber, M. E., Milaneschi, Y., Mueller, C., Huq, M., Vlachopoulou, E., Lyytikäinen, L.-P., Oldmeadow, C., Deelen, J., Perola, M., Zhao, J. H., Feenstra, B., LifeLines Cohort Study, Amini, M., CHARGE Inflammation Working Group, Lahti, J., Schraut, K. E., Fornage, M., Suktitipat, B., Chen, W.-M., Li, X., Nutile, T., Malerba, G., Luan, J., Bak, T., Schork, N., Del Greco M., F., Thiering, E., Mahajan, A., Marioni, R. E., Mihailov, E., Eriksson, J., Ozel, A. B., Zhang, W., Nethander, M., Cheng, Y.-C., Aslibekyan, S., Ang, W., Gandin, I., Yengo, L., Portas, L., Kooperberg, C., Hofer, E., Rajan, K. B., Schurmann, C., Den Hollander, W., Ahluwalia, T. S., Zhao, J., Draisma, H. H. M., Ford, I., Timpson, N., Teumer, A., Huang, H., Wahl, S., Liu, Y., Huang, J., Uh, H.-W., Geller, F., Joshi, P. K., Yanek, L. R., Trabetti, E., Lehne, B., Vozzi, D., Verbanck, M., Biino, G., Saba, Y., Meulenbelt, I., O’Connell, J. R., Laakso, M., Giulianini, F., Magnusson, P. K. E., Ballantyne, C. M., Hottenga, J. J., Montgomery, G. W., Rivadineira, F., Rueedi, R., Steri, M., Herzig, K.-H., Stott, D. J., Menni, C., Franberg, M., St Pourcain, B., Felix, S. B., Pers, T. H., Bakker, S. J. L., Kraft, P., Peters, A., Vaidya, D., Delgado, G., Smit, J. H., Großmann, V., Sinisalo, J., Seppälä, I., Williams, S. R., Holliday, E.
G., Moed, M., Langenberg, C., Räikkönen, K., Ding, J., Campbell, H., Sale, M. M., Chen, Y.-D.-I., James, A. L., Ruggiero, D., Soranzo, N., Hartman, C. A., Smith, E. N., Berenson, G. S., Fuchsberger, C., Hernandez, D., Tiesler, C. M. T., Giedraitis, V., Liewald, D., Fischer, K., Mellström, D., Larsson, A., Wang, Y., Scott, W. R., Lorentzon, M., Beilby, J., Ryan, K. A., Pennell, C. E., Vuckovic, D., Balkau, B., Concas, M. P., Schmidt, R., Mendes de Leon, C. F., Bottinger, E. P., Kloppenburg, M., Paternoster, L., Boehnke, M., Musk, A. W., Willemsen, G., Evans, D. M., Madden, P. A. F., Kähönen, M., Kutalik, Z., Zoledziewska, M., Karhunen, V., Kritchevsky, S. B., Sattar, N., Lachance, G., Clarke, R., Harris, T. B., Raitakari, O. T., Attia, J. R., Van Heemst, D., Kajantie, E., Sorice, R., Gambaro, G., Scott, R. A., Hicks, A. A., Ferrucci, L., Standl, M., Lindgren, C. M., Starr, J. M., Karlsson, M., Lind, L., Li, J. Z., Chambers, J. C., Mori, T. A., De Geus, E. J. C. N., Heath, A. C., Martin, N. G., Auvinen, J., Buckley, B. M., De Craen, A. J. M., Waldenberger, M., Strauch, K., Meitinger, T., Scott, R. J., McEvoy, M., Beekman, M., Bombieri, C., Ridker, P. M., Mohlke, K. L., Pedersen, N. L., Morrison, A. C., Boomsma, D. I., Whitfield, J. B., Strachan, D. P., Hofman, A., Vollenweider, P., Cucca, F., Jarvelin, M.-R., Jukema, J. W., Spector, T. D., Hamsten, A., Zeller, T., Uitterlinden, A. G., Nauck, M., Gudnason, V., Qi, L., Grallert, H., Borecki, I. B., Rotter, J. I., März, W., Wild, P. S., Lokki, M.-L., Boyle, M., Salomaa, V., Melbye, M., Eriksson, J. G., Wilson, J. F., Penninx, B. W. J. H., Becker, D. M., Worrall, B. B., Gibson, G., Krauss, R. M., Ciullo, M., Zaza, G., Wareham, N. J., Oldehinkel, A. J., Palmer, L. J., Murray, S. S., Pramstaller, P. P., Bandinelli, S., Heinrich, J., Ingelsson, E., Deary, I. J., Mägi, R., Vandenput, L., Van der Harst, P., Desch, K. C., Kooner, J. S., Ohlsson, C., Hayward, C., Lehtimäki, T., Shuldiner, A. R., Arnett, D. K., Beilin, L.
J., Robino, A., Froguel, P., Pirastu, M., Jess, T., Koenig, W., Loos, R. J. F., Evans, D. A., Schmidt, H., Smith, G. D., Slagboom, P. E., Eiriksdottir, G., Morris, A. P., Psaty, B. M., Tracy, R. P., Nolte, I. M., Boerwinkle, E., Visvikis-Siest, S., Reiner, A. P., Gross, M., Bis, J. C., Franke, L., Franco, O. H., Benjamin, E. J., Chasman, D. I., Dupuis, J., Snieder, H., Dehghan, A., & Alizadeh, B. Z. (2018). Genome Analyses of >200,000 Individuals Identify 58 Loci for Chronic Inflammation and Highlight Pathways that Link Inflammation and Complex Disorders. The American Journal of Human Genetics, 103(5), 691-706. doi:10.1016/j.ajhg.2018.09.009.

    Abstract

    C-reactive protein (CRP) is a sensitive biomarker of chronic low-grade inflammation and is associated with multiple complex diseases. The genetic determinants of chronic inflammation remain largely unknown, and the causal role of CRP in several clinical outcomes is debated. We performed two genome-wide association studies (GWASs), on HapMap and 1000 Genomes imputed data, of circulating amounts of CRP by using data from 88 studies comprising 204,402 European individuals. Additionally, we performed in silico functional analyses and Mendelian randomization analyses with several clinical outcomes. The GWAS meta-analyses of CRP revealed 58 distinct genetic loci (p < 5 × 10−8). After adjustment for body mass index in the regression analysis, the associations at all except three loci remained. The lead variants at the distinct loci explained up to 7.0% of the variance in circulating amounts of CRP. We identified 66 gene sets that were organized in two substantially correlated clusters, one mainly composed of immune pathways and the other characterized by metabolic pathways in the liver. Mendelian randomization analyses revealed a causal protective effect of CRP on schizophrenia and a risk-increasing effect on bipolar disorder. Our findings provide further insights into the biology of inflammation and could lead to interventions for treating inflammation and its clinical consequences.
  • Liszkowski, U. (2010). Deictic and other gestures in infancy. Acción psicológica, 7(2), 21-33. doi:10.5944/ap.7.2.212.
  • Liu, X., Gao, Y., Di, Q., Hu, J., Lu, C., Nan, Y., Booth, J. R., & Liu, L. (2018). Differences between child and adult large-scale functional brain networks for reading tasks. Human Brain Mapping, 39(2), 662-679. doi:10.1002/hbm.23871.

    Abstract

    Reading is an important high‐level cognitive function of the human brain, requiring interaction among multiple brain regions. Revealing differences between children's large‐scale functional brain networks for reading tasks and those of adults helps us to understand how the functional network changes over reading development. Here we used functional magnetic resonance imaging data of 17 adults (19–28 years old) and 16 children (11–13 years old), and graph theoretical analyses to investigate age‐related changes in large‐scale functional networks during rhyming and meaning judgment tasks on pairs of visually presented Chinese characters. We found that: (1) adults had stronger inter‐regional connectivity and nodal degree in occipital regions, while children had stronger inter‐regional connectivity in temporal regions, suggesting that adults rely more on visual orthographic processing whereas children rely more on auditory phonological processing during reading. (2) Only adults showed between‐task differences in inter‐regional connectivity and nodal degree, whereas children showed no task differences, suggesting the topological organization of adults’ reading network is more specialized. (3) Children showed greater inter‐regional connectivity and nodal degree than adults in multiple subcortical regions; the hubs in children were more distributed in subcortical regions while the hubs in adults were more distributed in cortical regions. These findings suggest that reading development is manifested by a shift from reliance on subcortical to cortical regions. Taken together, our study suggests that Chinese reading development is supported by developmental changes in brain connectivity properties, and some of these changes may be domain‐general while others may be specific to the reading domain.
  • Xu, S., Liu, P., Chen, Y., Chen, Y., Zhang, W., Zhao, H., Cao, Y., Wang, F., Jiang, N., Lin, S., Li, B., Zhang, Z., Wei, Z., Fan, Y., Jin, Y., He, L., Zhou, R., Dekker, J. D., Tucker, H. O., Fisher, S. E., Yao, Z., Liu, Q., Xia, X., & Guo, X. (2018). Foxp2 regulates anatomical features that may be relevant for vocal behaviors and bipedal locomotion. Proceedings of the National Academy of Sciences of the United States of America, 115(35), 8799-8804. doi:10.1073/pnas.1721820115.

    Abstract

    Fundamental human traits, such as language and bipedalism, are associated with a range of anatomical adaptations in craniofacial shaping and skeletal remodeling. However, it is unclear how such morphological features arose during hominin evolution. FOXP2 is a brain-expressed transcription factor implicated in a rare disorder involving speech apraxia and language impairments. Analysis of its evolutionary history suggests that this gene may have contributed to the emergence of proficient spoken language. In the present study, through analyses of skeleton-specific knockout mice, we identified roles of Foxp2 in skull shaping and bone remodeling. Selective ablation of Foxp2 in cartilage disrupted pup vocalizations in a similar way to that of global Foxp2 mutants, which may be due to pleiotropic effects on craniofacial morphogenesis. Our findings also indicate that Foxp2 helps to regulate strength and length of hind limbs and maintenance of joint cartilage and intervertebral discs, which are all anatomical features that are susceptible to adaptations for bipedal locomotion. In light of the known roles of Foxp2 in brain circuits that are important for motor skills and spoken language, we suggest that this gene may have been well placed to contribute to coevolution of neural and anatomical adaptations related to speech and bipedal locomotion.

  • Liu, J. Z., Tozzi, F., Waterworth, D. M., Pillai, S. G., Muglia, P., Middleton, L., Berrettini, W., Knouff, C. W., Yuan, X., Waeber, G., Vollenweider, P., Preisig, M., Wareham, N. J., Zhao, J. H., Loos, R. J. F., Barroso, I., Khaw, K.-T., Grundy, S., Barter, P., Mahley, R., Kesaniemi, A., McPherson, R., Vincent, J. B., Strauss, J., Kennedy, J. L., Farmer, A., McGuffin, P., Day, R., Matthews, K., Bakke, P., Gulsvik, A., Lucae, S., Ising, M., Brueckl, T., Horstmann, S., Wichmann, H.-E., Rawal, R., Dahmen, N., Lamina, C., Polasek, O., Zgaga, L., Huffman, J., Campbell, S., Kooner, J., Chambers, J. C., Burnett, M. S., Devaney, J. M., Pichard, A. D., Kent, K. M., Satler, L., Lindsay, J. M., Waksman, R., Epstein, S., Wilson, J. F., Wild, S. H., Campbell, H., Vitart, V., Reilly, M. P., Li, M., Qu, L., Wilensky, R., Matthai, W., Hakonarson, H. H., Rader, D. J., Franke, A., Wittig, M., Schäfer, A., Uda, M., Terracciano, A., Xiao, X., Busonero, F., Scheet, P., Schlessinger, D., St. Clair, D., Rujescu, D., Abecasis, G. R., Grabe, H. J., Teumer, A., Völzke, H., Petersmann, A., John, U., Rudan, I., Hayward, C., Wright, A. F., Kolcic, I., Wright, B. J., Thompson, J. R., Balmforth, A. J., Hall, A. S., Samani, N. J., Anderson, C. A., Ahmad, T., Mathew, C. G., Parkes, M., Satsangi, J., Caulfield, M., Munroe, P. B., Farrall, M., Dominiczak, A., Worthington, J., Thomson, W., Eyre, S., Barton, A., Mooser, V., Francks, C., & Marchini, J. (2010). Meta-analysis and imputation refines the association of 15q25 with smoking quantity. Nature Genetics, 42(5), 436-440. doi:10.1038/ng.572.

    Abstract

    Smoking is a leading global cause of disease and mortality. We established the Oxford-GlaxoSmithKline study (Ox-GSK) to perform a genome-wide meta-analysis of SNP association with smoking-related behavioral traits. Our final data set included 41,150 individuals drawn from 20 disease, population and control cohorts. Our analysis confirmed an effect on smoking quantity at a locus on 15q25 (P = 9.45 x 10(-19)) that includes CHRNA5, CHRNA3 and CHRNB4, three genes encoding neuronal nicotinic acetylcholine receptor subunits. We used data from the 1000 Genomes project to investigate the region using imputation, which allowed for analysis of virtually all common SNPs in the region and offered a fivefold increase in marker density over HapMap2 (ref. 2) as an imputation reference panel. Our fine-mapping approach identified a SNP showing the highest significance, rs55853698, located within the promoter region of CHRNA5. Conditional analysis also identified a secondary locus (rs6495308) in CHRNA3.
  • Long, M., Horton, W. S., Rohde, H., & Sorace, A. (2018). Individual differences in switching and inhibition predict perspective-taking across the lifespan. Cognition, 170, 25-30. doi:10.1016/j.cognition.2017.09.004.

    Abstract

    Studies exploring the influence of executive functions (EF) on perspective-taking have focused on inhibition and working memory in young adults or clinical populations. Less consideration has been given to more complex capacities that also involve switching attention between perspectives, or to changes in EF and concomitant effects on perspective-taking across the lifespan. To address this, we assessed whether individual differences in inhibition and attentional switching in healthy adults (ages 17–84) predict performance on a task in which speakers identified targets for a listener with size-contrasting competitors in common or privileged ground. Modification differences across conditions decreased with age. Further, perspective taking interacted with EF measures: youngest adults’ sensitivity to perspective was best captured by their inhibitory performance; oldest adults’ sensitivity was best captured by switching performance. Perspective-taking likely involves multiple aspects of EF, as revealed by considering a wider range of EF tasks and individual capacities across the lifespan.
  • Lum, J., Kidd, E., Davis, S., & Conti-Ramsden, G. (2010). Longitudinal study of declarative and procedural memory in primary school-aged children. Australian Journal of Psychology, 62(3), 139-148. doi:10.1080/00049530903150547.

    Abstract

    This study examined the development of declarative and procedural memory longitudinally in primary school-aged children. At present, although there is a general consensus that age-related improvements during this period can be found for declarative memory, there are conflicting data on the developmental trajectory of the procedural memory system. At Time 1 children aged around 5½ years were presented with measures of declarative and procedural memory. The tasks were then administered 12 months later. Performance on the declarative memory task was found to improve at a faster rate in comparison to the procedural memory task. The findings of the study support the view that multiple memory systems reach functional maturity at different points in development.
  • Lumaca, M., Ravignani, A., & Baggio, G. (2018). Music evolution in the laboratory: Cultural transmission meets neurophysiology. Frontiers in Neuroscience, 12: 246. doi:10.3389%2Ffnins.2018.00246.

    Abstract

    In recent years, there has been renewed interest in the biological and cultural evolution of music, and specifically in the role played by perceptual and cognitive factors in shaping core features of musical systems, such as melody, harmony, and rhythm. One proposal originates in the language sciences. It holds that aspects of musical systems evolve by adapting gradually, in the course of successive generations, to the structural and functional characteristics of the sensory and memory systems of learners and “users” of music. This hypothesis has found initial support in laboratory experiments on music transmission. In this article, we first review some of the most important theoretical and empirical contributions to the field of music evolution. Next, we identify a major current limitation of these studies, i.e., the lack of direct neural support for the hypothesis of cognitive adaptation. Finally, we discuss a recent experiment in which this issue was addressed by using event-related potentials (ERPs). We suggest that the introduction of neurophysiology in cultural transmission research may provide novel insights on the micro-evolutionary origins of forms of variation observed in cultural systems.
  • Lutzenberger, H. (2018). Manual and nonmanual features of name signs in Kata Kolok and sign language of the Netherlands. Sign Language Studies, 18(4), 546-569. doi:10.1353/sls.2018.0016.

    Abstract

    Name signs are based on descriptions, initialization, and loan translations. Nyst and Baker (2003) have found crosslinguistic similarities in the phonology of name signs, such as a preference for one-handed signs and for the head location. Studying Kata Kolok (KK), a rural sign language without indigenous fingerspelling, strongly suggests that one-handedness is not correlated with initialization, but represents a more general feature of name sign phonology. As in other sign languages, the head location is used frequently in both KK and Sign Language of the Netherlands (NGT) name signs. The use of nonmanuals, however, is strikingly different. NGT name signs are always accompanied by mouthings, which are absent in KK. Instead, KK name signs may use mouth gestures; these may disambiguate manually identical name signs, and even form independent name signs without any manual features.
  • Maguire, W., McMahon, A., Heggarty, P., & Dediu, D. (2010). The past, present, and future of English dialects: Quantifying convergence, divergence, and dynamic equilibrium. Language Variation and Change, 22, 69-104. doi:10.1017/S0954394510000013.

    Abstract

    This article reports on research which seeks to compare and measure the similarities between phonetic transcriptions in the analysis of relationships between varieties of English. It addresses the question of whether these varieties have been converging, diverging, or maintaining equilibrium as a result of endogenous and exogenous phonetic and phonological changes. We argue that it is only possible to identify such patterns of change by the simultaneous comparison of a wide range of varieties of a language across a data set that has not been specifically selected to highlight those changes that are believed to be important. Our analysis suggests that although there has been an obvious reduction in regional variation with the loss of traditional dialects of English and Scots, there has not been any significant convergence (or divergence) of regional accents of English in recent decades, despite the rapid spread of a number of features such as TH-fronting.
  • Majid, A., Roberts, S. G., Cilissen, L., Emmorey, K., Nicodemus, B., O'Grady, L., Woll, B., LeLan, B., De Sousa, H., Cansler, B. L., Shayan, S., De Vos, C., Senft, G., Enfield, N. J., Razak, R. A., Fedden, S., Tufvesson, S., Dingemanse, M., Ozturk, O., Brown, P., Hill, C., Le Guen, O., Hirtzel, V., Van Gijn, R., Sicoli, M. A., & Levinson, S. C. (2018). Differential coding of perception in the world’s languages. Proceedings of the National Academy of Sciences of the United States of America, 115(45), 11369-11376. doi:10.1073/pnas.1720419115.

    Abstract

    Is there a universal hierarchy of the senses, such that some senses (e.g., vision) are more accessible to consciousness and linguistic description than others (e.g., smell)? The long-standing presumption in Western thought has been that vision and audition are more objective than the other senses, serving as the basis of knowledge and understanding, whereas touch, taste, and smell are crude and of little value. This predicts that humans ought to be better at communicating about sight and hearing than the other senses, and decades of work based on English and related languages certainly suggests this is true. However, how well does this reflect the diversity of languages and communities worldwide? To test whether there is a universal hierarchy of the senses, stimuli from the five basic senses were used to elicit descriptions in 20 diverse languages, including 3 unrelated sign languages. We found that languages differ fundamentally in which sensory domains they linguistically code systematically, and how they do so. The tendency for better coding in some domains can be explained in part by cultural preoccupations. Although languages seem free to elaborate specific sensory domains, some general tendencies emerge: for example, with some exceptions, smell is poorly coded. The surprise is that, despite the gradual phylogenetic accumulation of the senses, and the imbalances in the neural tissue dedicated to them, no single hierarchy of the senses imposes itself upon language.
  • Majid, A. (2018). Humans are neglecting our sense of smell. Here's what we could gain by fixing that. Time, March 7, 2018: 5130634.
  • Majid, A., & Kruspe, N. (2018). Hunter-gatherer olfaction is special. Current Biology, 28(3), 409-413. doi:10.1016/j.cub.2017.12.014.

    Abstract

    People struggle to name odors, but this limitation is not universal. Majid and Kruspe investigate whether superior olfactory performance is due to subsistence, ecology, or language family. By comparing closely related communities in the Malay Peninsula, they find that only hunter-gatherers are proficient odor namers, suggesting that subsistence is crucial.

    Additional information

    The data are archived at RWAAI.
  • Majid, A., Burenhult, N., Stensmyr, M., De Valk, J., & Hansson, B. S. (2018). Olfactory language and abstraction across cultures. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 373: 20170139. doi:10.1098/rstb.2017.0139.

    Abstract

    Olfaction presents a particularly interesting arena to explore abstraction in language. Like other abstract domains, such as time, odours can be difficult to conceptualize. An odour cannot be seen or held, it can be difficult to locate in space, and for most people odours are difficult to verbalize. On the other hand, odours give rise to primary sensory experiences. Every time we inhale we are using olfaction to make sense of our environment. We present new experimental data from 30 Jahai hunter-gatherers from the Malay Peninsula and 30 matched Dutch participants from the Netherlands in an odour naming experiment. Participants smelled monomolecular odorants and named odours while reaction times, odour descriptors and facial expressions were measured. We show that while Dutch speakers relied on concrete descriptors, i.e. they referred to odour sources (e.g. smells like lemon), the Jahai used abstract vocabulary to name the same odours (e.g. musty). Despite this differential linguistic categorization, analysis of facial expressions showed that the two groups, nevertheless, had the same initial emotional reactions to odours. Critically, these cross-cultural data present a challenge for how to think about abstraction in language.
  • Majid, A., & Levinson, S. C. (2010). WEIRD languages have misled us, too [Comment on Henrich et al.]. Behavioral and Brain Sciences, 33(2-3), 103. doi:10.1017/S0140525X1000018X.

    Abstract

    The linguistic and cognitive sciences have severely underestimated the degree of linguistic diversity in the world. Part of the reason for this is that we have projected assumptions based on English and familiar languages onto the rest. We focus on some distortions this has introduced, especially in the study of semantics.
  • Malpass, D., & Meyer, A. S. (2010). The time course of name retrieval during multiple-object naming: Evidence from extrafoveal-on-foveal effects. Journal of Experimental Psychology: Learning, Memory, and Cognition, 36, 523-537. doi:10.1037/a0018522.

    Abstract

    The goal of the study was to examine whether speakers naming pairs of objects would retrieve the names of the objects in parallel or in sequence. To this end, we recorded the speakers’ eye movements and determined whether the difficulty of retrieving the name of the 2nd object affected the duration of the gazes to the 1st object. Two experiments, which differed in the spatial arrangement of the objects, showed that the speakers looked longer at the 1st object when the name of the 2nd object was easy than when it was more difficult to retrieve. Thus, the easy 2nd-object names interfered more with the processing of the 1st object than the more difficult 2nd-object names. In the 3rd experiment, the processing of the 1st object was rendered more difficult by presenting it upside down. No effect of 2nd-object difficulty on the gaze duration for the 1st object was found. These results suggest that speakers can retrieve the names of a foveated and an extrafoveal object in parallel, provided that the processing of the foveated object is not too demanding.
  • Mamus, E., & Boduroglu, A. (2018). The role of context on boundary extension. Visual Cognition, 26(2), 115-130. doi:10.1080/13506285.2017.1399947.

    Abstract

    Boundary extension (BE) is a memory error in which observers remember more of a scene than they actually viewed. This error reflects one’s prediction that a scene naturally continues and is driven by scene schema and contextual knowledge. In two separate experiments we investigated the necessity of context and scene schema in BE. In Experiment 1, observers viewed scenes that either contained semantically consistent or inconsistent objects as well as objects on white backgrounds. In both types of scenes and in the no-background condition there was a BE effect; critically, semantic inconsistency in scenes reduced the magnitude of BE. In Experiment 2 when we used abstract shapes instead of meaningful objects, there was no BE effect. We suggest that although scene schema is necessary to elicit BE, contextual consistency is not required.
  • Manahova, M. E., Mostert, P., Kok, P., Schoffelen, J.-M., & De Lange, F. P. (2018). Stimulus familiarity and expectation jointly modulate neural activity in the visual ventral stream. Journal of Cognitive Neuroscience, 30(9), 1366-1377. doi:10.1162/jocn_a_01281.

    Abstract

    Prior knowledge about the visual world can change how a visual stimulus is processed. Two forms of prior knowledge are often distinguished: stimulus familiarity (i.e., whether a stimulus has been seen before) and stimulus expectation (i.e., whether a stimulus is expected to occur, based on the context). Neurophysiological studies in monkeys have shown suppression of spiking activity both for expected and for familiar items in object-selective inferotemporal cortex. It is an open question, however, if and how these types of knowledge interact in their modulatory effects on the sensory response. To address this issue and to examine whether previous findings generalize to noninvasively measured neural activity in humans, we separately manipulated stimulus familiarity and expectation while noninvasively recording human brain activity using magnetoencephalography. We observed independent suppression of neural activity by familiarity and expectation, specifically in the lateral occipital complex, the putative human homologue of monkey inferotemporal cortex. Familiarity also led to sharpened response dynamics, which was predominantly observed in early visual cortex. Together, these results show that distinct types of sensory knowledge jointly determine the amount of neural resources dedicated to object processing in the visual ventral stream.
  • Mandy, W., Pellicano, L., St Pourcain, B., Skuse, D., & Heron, J. (2018). The development of autistic social traits across childhood and adolescence in males and females. The Journal of Child Psychology and Psychiatry, 59(11), 1143-1151. doi:10.1111/jcpp.12913.

    Abstract

    Background

    Autism is a dimensional condition, representing the extreme end of a continuum of social competence that extends throughout the general population. Currently, little is known about how autistic social traits (ASTs), measured across the full spectrum of severity, develop during childhood and adolescence, including whether there are developmental differences between boys and girls. Therefore, we sought to chart the trajectories of ASTs in the general population across childhood and adolescence, with a focus on gender differences.
    Methods

    Participants were 9,744 males (n = 4,784) and females (n = 4,960) from ALSPAC, a UK birth cohort study. ASTs were assessed when participants were aged 7, 10, 13 and 16 years, using the parent‐report Social Communication Disorders Checklist. Data were modelled using latent growth curve analysis.
    Results

    Developmental trajectories of males and females were nonlinear, showing a decline from 7 to 10 years, followed by an increase between 10 and 16 years. At 7 years, males had higher levels of ASTs than females (mean raw score difference = 0.88, 95% CI [.72, 1.04]), and were more likely (odds ratio [OR] = 1.99; 95% CI, 1.82, 2.16) to score in the clinical range on the SCDC. By 16 years this gender difference had disappeared: males and females had, on average, similar levels of ASTs (mean difference = 0.00, 95% CI [−0.19, 0.19]) and were equally likely to score in the SCDC's clinical range (OR = 0.91, 95% CI, 0.73, 1.10). This was the result of an increase in females’ ASTs between 10 and 16 years.
    Conclusions

    There are gender‐specific trajectories of autistic social impairment, with females more likely than males to experience an escalation of ASTs during early‐ and midadolescence. It remains to be discovered whether the observed female adolescent increase in ASTs represents the genuine late onset of social difficulties or earlier, subtle, pre‐existing difficulties becoming more obvious.

    Additional information

    jcpp12913-sup-0001-supinfo.docx
  • Martin, A. E. (2018). Cue integration during sentence comprehension: Electrophysiological evidence from ellipsis. PLoS One, 13(11): e0206616. doi:10.1371/journal.pone.0206616.

    Abstract

    Language processing requires us to integrate incoming linguistic representations with representations of past input, often across intervening words and phrases. This computational situation has been argued to require retrieval of the appropriate representations from memory via a set of features or representations serving as retrieval cues. However, even within a cue-based retrieval account of language comprehension, both the structure of retrieval cues and the particular computation that underlies direct-access retrieval are still underspecified. Evidence from two event-related brain potential (ERP) experiments that show cue-based interference from different types of linguistic representations during ellipsis comprehension are consistent with an architecture wherein different cue types are integrated, and where the interaction of cue with the recent contents of memory determines processing outcome, including expression of the interference effect in ERP componentry. I conclude that retrieval likely includes a computation where cues are integrated with the contents of memory via a linear weighting scheme, and I propose vector addition as a candidate formalization of this computation. I attempt to account for these effects and other related phenomena within a broader cue-based framework of language processing.
  • Martin, A. E., & McElree, B. (2018). Retrieval cues and syntactic ambiguity resolution: Speed-accuracy tradeoff evidence. Language, Cognition and Neuroscience, 33(6), 769-783. doi:10.1080/23273798.2018.1427877.

    Abstract

    Language comprehension involves coping with ambiguity and recovering from misanalysis. Syntactic ambiguity resolution is associated with increased reading times, a classic finding that has shaped theories of sentence processing. However, reaction times conflate the time it takes a process to complete with the quality of the behavior-related information available to the system. We therefore used the speed-accuracy tradeoff procedure (SAT) to derive orthogonal estimates of processing time and interpretation accuracy, and tested whether stronger retrieval cues (via semantic relatedness: neighed->horse vs. fell->horse) aid interpretation during recovery. On average, ambiguous sentences took 250ms longer (SAT rate) to interpret than unambiguous controls, demonstrating veridical differences in processing time. Retrieval cues more strongly related to the true subject always increased accuracy, regardless of ambiguity. These findings are consistent with a language processing architecture where cue-driven operations give rise to interpretation, and wherein diagnostic cues aid retrieval, regardless of parsing difficulty or structural uncertainty.
  • Martin-Ordas, G., Haun, D. B. M., Colmenares, F., & Call, J. (2010). Keeping track of time: Evidence for episodic-like memory in great apes. Animal Cognition, 13, 331-340. doi:10.1007/s10071-009-0282-4.

    Abstract

    Episodic memory, as defined by Tulving, can be described in terms of behavioural elements (what, where and when information) but it is also accompanied by an awareness of one’s past (chronesthesia) and a subjective conscious experience (autonoetic awareness). Recent experiments have shown that corvids and rodents recall the where, what and when of an event. This capability has been called episodic-like memory because it only fulfils the behavioural criteria for episodic memory. We tested seven chimpanzees, three orangutans and two bonobos of various ages by adapting two paradigms, originally developed by Clayton and colleagues to test scrub jays. In Experiment 1, subjects were fed preferred but perishable food (frozen juice) and less preferred but non-perishable food (grape). After the food items were hidden, subjects could choose one of them either after 5 min or 1 h. The frozen juice was still available after 5 min but melted after 1 h and became unobtainable. Apes chose the frozen juice significantly more after 5 min and the grape after 1 h. In Experiment 2, subjects faced two baiting events happening at different times, yet they formed an integrated memory for the location and time of the baiting event for particular food items. We also included a memory task that required no temporal encoding. Our results showed that apes remember in an integrated fashion what, where and when (i.e., how long ago) an event happened; that is, apes distinguished between different events in which the same food items were hidden in different places at different times. The temporal control of their choices was not dependent on the familiarity of the platforms where the food was hidden. Chimpanzees’ and bonobos’ performance in the temporal encoding task was age-dependent, following an inverted U-shaped distribution. Age had no effect on the performance of the subjects in the task that required no temporal encoding.
  • Maslowski, M., Meyer, A. S., & Bosker, H. R. (2018). Listening to yourself is special: Evidence from global speech rate tracking. PLoS One, 13(9): e0203571. doi:10.1371/journal.pone.0203571.

    Abstract

    Listeners are known to use adjacent contextual speech rate in processing temporally ambiguous speech sounds. For instance, an ambiguous vowel between short /A/ and long /a:/ in Dutch sounds relatively long (i.e., as /a:/) embedded in a fast precursor sentence, but short in a slow sentence. Besides the local speech rate, listeners also track talker-specific global speech rates. However, it is yet unclear whether other talkers' global rates are encoded with reference to a listener's self-produced rate. Three experiments addressed this question. In Experiment 1, one group of participants was instructed to speak fast, whereas another group had to speak slowly. The groups were compared on their perception of ambiguous /A/-/a:/ vowels embedded in neutral rate speech from another talker. In Experiment 2, the same participants listened to playback of their own speech and again evaluated target vowels in neutral rate speech. Neither of these experiments provided support for the involvement of self-produced speech in perception of another talker's speech rate. Experiment 3 repeated Experiment 2 but with a new participant sample that was unfamiliar with the participants from Experiment 2. This experiment revealed fewer /a:/ responses in neutral speech in the group also listening to a fast rate, suggesting that neutral speech sounds slow in the presence of a fast talker and vice versa. Taken together, the findings show that self-produced speech is processed differently from speech produced by others. They carry implications for our understanding of the perceptual and cognitive mechanisms involved in rate-dependent speech perception in dialogue settings.
  • Matic, D. (2010). [Review of "A Historical Dictionary of Kolyma Yukaghir" by Irina Nikolaeva, Berlin: Mouton de Gruyter, 2006]. eLanguage. Book notices. Retrieved from http://elanguage.net/blogs/booknotices/?p=481.
  • McCauley, R. N., & Cohen, E. (2010). Cognitive science and the naturalness of religion. Philosophy Compass, 5, 779-792. doi:10.1111/j.1747-9991.2010.00326.x.

    Abstract

    Cognitive approaches to religious phenomena have attracted considerable interdisciplinary attention since their emergence a couple of decades ago. Proponents offer explanatory accounts of the content and transmission of religious thought and behavior in terms of underlying cognition. A central claim is that the cross-cultural recurrence and historical persistence of religion is attributable to the cognitive naturalness of religious ideas, i.e., attributable to the readiness, the ease, and the speed with which human minds acquire and process popular religious representations. In this article, we primarily provide an introductory summary of foundational questions, assumptions, and hypotheses in this field, including some discussion of features distinguishing cognitive science approaches to religion from established psychological approaches. Relevant ethnographic and experimental evidence illustrate and substantiate core claims. Finally, we briefly consider the broader implications of these cognitive approaches for the appropriateness of ‘religion’ as an explanatorily useful category in the social sciences.
  • McDaniell, R., Lee, B.-K., Song, L., Liu, Z., Boyle, A. P., Erdos, M. R., Scott, L. J., Morken, M. A., Kucera, K. S., Battenhouse, A., Keefe, D., Collins, F. S., Willard, H. F., Lieb, J. D., Furey, T. S., Crawford, G. E., Iyer, V. R., & Birney, E. (2010). Heritable individual-specific and allele-specific chromatin signatures in humans. Science, 328(5975), 235-239. doi:10.1126/science.1184655.

    Abstract

    The extent to which variation in chromatin structure and transcription factor binding may influence gene expression, and thus underlie or contribute to variation in phenotype, is unknown. To address this question, we cataloged both individual-to-individual variation and differences between homologous chromosomes within the same individual (allele-specific variation) in chromatin structure and transcription factor binding in lymphoblastoid cells derived from individuals of geographically diverse ancestry. Ten percent of active chromatin sites were individual-specific; a similar proportion were allele-specific. Both individual-specific and allele-specific sites were commonly transmitted from parent to child, which suggests that they are heritable features of the human genome. Our study shows that heritable chromatin status and transcription factor binding differ as a result of genetic variation and may underlie phenotypic variation in humans.

    Additional information

    McDaniell.SOM.pdf
  • McQueen, J. M., Norris, D., & Cutler, A. (1999). Lexical influence in phonetic decision-making: Evidence from subcategorical mismatches. Journal of Experimental Psychology: Human Perception and Performance, 25, 1363-1389. doi:10.1037/0096-1523.25.5.1363.

    Abstract

    In 5 experiments, listeners heard words and nonwords, some cross-spliced so that they contained acoustic-phonetic mismatches. Performance was worse on mismatching than on matching items. Words cross-spliced with words and words cross-spliced with nonwords produced parallel results. However, in lexical decision and 1 of 3 phonetic decision experiments, performance on nonwords cross-spliced with words was poorer than on nonwords cross-spliced with nonwords. A gating study confirmed that there were misleading coarticulatory cues in the cross-spliced items; a sixth experiment showed that the earlier results were not due to interitem differences in the strength of these cues. Three models of phonetic decision making (the Race model, the TRACE model, and a postlexical model) did not explain the data. A new bottom-up model is outlined that accounts for the findings in terms of lexical involvement at a dedicated decision-making stage.
  • Medland, S. E., Zayats, T., Glaser, B., Nyholt, D. R., Gordon, S. D., Wright, M. J., Montgomery, G. W., Campbell, M. J., Henders, A. K., Timpson, N. J., Peltonen, L., Wolke, D., Ring, S. M., Deloukas, P., Martin, N. G., Smith, G. D., & Evans, D. M. (2010). A variant in LIN28B is associated with 2D:4D finger-length ratio, a putative retrospective biomarker of prenatal testosterone exposure. American Journal of Human Genetics, 86(4), 519-525. doi:10.1016/j.ajhg.2010.02.017.

    Abstract

    The ratio of the lengths of an individual's second to fourth digit (2D:4D) is commonly used as a noninvasive retrospective biomarker for prenatal androgen exposure. In order to identify the genetic determinants of 2D:4D, we applied a genome-wide association approach to 1507 11-year-old children from the Avon Longitudinal Study of Parents and Children (ALSPAC) in whom 2D:4D ratio had been measured, as well as a sample of 1382 12- to 16-year-olds from the Brisbane Adolescent Twin Study. A meta-analysis of the two scans identified a single variant in the LIN28B gene that was strongly associated with 2D:4D (rs314277: p = 4.1 x 10(-8)) and was subsequently independently replicated in an additional 3659 children from the ALSPAC cohort (p = 1.53 x 10(-6)). The minor allele of the rs314277 variant has previously been linked to increased height and delayed age at menarche, but in our study it was associated with increased 2D:4D in the direction opposite to that of previous reports on the correlation between 2D:4D and age at menarche. Our findings call into question the validity of 2D:4D as a simplistic retrospective biomarker for prenatal testosterone exposure.
  • Mei, C., Fedorenko, E., Amor, D. J., Boys, A., Hoeflin, C., Carew, P., Burgess, T., Fisher, S. E., & Morgan, A. T. (2018). Deep phenotyping of speech and language skills in individuals with 16p11.2 deletion. European journal of human genetics, 26(5), 676-686. doi:10.1038/s41431-018-0102-x.

    Abstract

    Recurrent deletions of a ~600-kb region of 16p11.2 have been associated with a highly penetrant form of childhood apraxia of speech (CAS). Yet prior findings have been based on a small, potentially biased sample using retrospectively collected data. We examine the prevalence of CAS in a larger cohort of individuals with 16p11.2 deletion using a prospectively designed assessment battery. The broader speech and language phenotype associated with carrying this deletion was also examined. 55 participants with 16p11.2 deletion (47 children, 8 adults) underwent deep phenotyping to test for the presence of CAS and other speech and language diagnoses. Standardized tests of oral motor functioning, speech production, language, and non-verbal IQ were conducted. The majority of children (77%) and half of adults (50%) met criteria for CAS. Other speech outcomes were observed including articulation or phonological errors (i.e., phonetic and cognitive-linguistic errors, respectively), dysarthria (i.e., neuromuscular speech disorder), minimal verbal output, and even typical speech in some. Receptive and expressive language impairment was present in 73% and 70% of children, respectively. Co-occurring neurodevelopmental conditions (e.g., autism) and non-verbal IQ did not correlate with the presence of CAS. Findings indicate that CAS is highly prevalent in children with 16p11.2 deletion with symptoms persisting into adulthood for many. Yet CAS occurs in the context of a broader speech and language profile and other neurobehavioral deficits. Further research will elucidate specific genetic and neural pathways leading to speech and language deficits in individuals with 16p11.2 deletions, resulting in more targeted speech therapies addressing etiological pathways.
  • Merolla, D., & Ameka, F. K. (2010). Hogbetsotso: Celebration and songs of the Ewe migration story. Interview with Dr. Datey-Kumodzie. Verba Africana series - Video documentation and Digital Materials, 4.
  • Merritt, D. J., Casasanto, D., & Brannon, E. M. (2010). Do monkeys think in metaphors? Representations of space and time in monkeys and humans. Cognition, 117, 191-202. doi:10.1016/j.cognition.2010.08.011.

    Abstract

    Research on the relationship between the representation of space and time has produced two contrasting proposals. ATOM posits that space and time are represented via a common magnitude system, suggesting a symmetrical relationship between space and time. According to metaphor theory, however, representations of time depend on representations of space asymmetrically. Previous findings in humans have supported metaphor theory. Here, we investigate the relationship between time and space in a nonverbal species, by testing whether non-human primates show space–time interactions consistent with metaphor theory or with ATOM. We tested two rhesus monkeys and 16 adult humans in a nonverbal task that assessed the influence of an irrelevant dimension (time or space) on a relevant dimension (space or time). In humans, spatial extent had a large effect on time judgments whereas time had a small effect on spatial judgments. In monkeys, both spatial and temporal manipulations showed large bi-directional effects on judgments. In contrast to humans, spatial manipulations in monkeys did not produce a larger effect on temporal judgments than the reverse. Thus, consistent with previous findings, human adults showed asymmetrical space–time interactions that were predicted by metaphor theory. In contrast, monkeys showed patterns that were more consistent with ATOM.
  • Meulenbroek, O., Kessels, R. P. C., De Rover, M., Petersson, K. M., Olde Rikkert, M. G. M., Rijpkema, M., & Fernández, G. (2010). Age-effects on associative object-location memory. Brain Research, 1315, 100-110. doi:10.1016/j.brainres.2009.12.011.

    Abstract

    Aging is accompanied by an impairment of associative memory. The medial temporal lobe and fronto-striatal network, both involved in associative memory, are known to decline functionally and structurally with age, leading to the so-called associative binding deficit and the resource deficit. Because the MTL and fronto-striatal network interact, they might also be able to support each other. We therefore employed an episodic memory task probing memory for sequences of object–location associations, where the demand on self-initiated processing was manipulated during encoding: either all the objects were visible simultaneously (rich environmental support) or every object became visible transiently (poor environmental support). Following the concept of resource deficit, we hypothesised that the elderly probably have difficulty using their declarative memory system when demands on self-initiated processing are high (poor environmental support). Our behavioural study showed that only the young use the rich environmental support in a systematic way, by placing the objects next to each other. With the task adapted for fMRI, we found that elderly showed stronger activity than young subjects during retrieval of environmentally richly encoded information in the basal ganglia, thalamus, left middle temporal/fusiform gyrus and right medial temporal lobe (MTL). These results indicate that rich environmental support leads to recruitment of the declarative memory system in addition to the fronto-striatal network in elderly, while the young use more posterior brain regions likely related to imagery. We propose that elderly try to solve the task by additional recruitment of stimulus-response associations, which might partly compensate their limited attentional resources.
  • Meyer, A. S., & Levelt, W. J. M. (2000). Merging speech perception and production [Comment on Norris, McQueen and Cutler]. Behavioral and Brain Sciences, 23(3), 339-340. doi:10.1017/S0140525X00373241.

    Abstract

    A comparison of Merge, a model of comprehension, and WEAVER, a model of production, raises five issues: (1) merging models of comprehension and production necessarily creates feedback; (2) neither model is a comprehensive account of word processing; (3) the models are incomplete in different ways; (4) the models differ in their handling of competition; (5) as opposed to WEAVER, Merge is a model of metalinguistic behavior.
  • Meyer, A. S., & Van der Meulen, F. (2000). Phonological priming effects on speech onset latencies and viewing times in object naming. Psychonomic Bulletin & Review, 7, 314-319.
  • Meyer, A. S., & Bock, K. (1999). Representations and processes in the production of pronouns: Some perspectives from Dutch. Journal of Memory and Language, 41(2), 281-301. doi:10.1006/jmla.1999.2649.

    Abstract

    The production and interpretation of pronouns involves the identification of a mental referent and, in connected speech or text, a discourse antecedent. One of the few overt signals of the relationship between a pronoun and its antecedent is agreement in features such as number and grammatical gender. To examine how speakers create these signals, two experiments tested conceptual, lexical, and morphophonological accounts of pronoun production in Dutch. The experiments employed sentence completion and continuation tasks with materials containing noun phrases that conflicted or agreed in grammatical gender. The noun phrases served as the antecedents for demonstrative pronouns (in Experiment 1) and relative pronouns (in Experiment 2) that required gender marking. Gender errors were used to assess the nature of the processes that established the link between pronouns and antecedents. There were more gender errors when candidate antecedents conflicted in grammatical gender, counter to the predictions of a pure conceptual hypothesis. Gender marking on candidate antecedents did not change the magnitude of this interference effect, counter to the predictions of an overt-morphology hypothesis. Mirroring previous findings about pronoun comprehension, the results suggest that speakers of gender-marking languages call on specific linguistic information about antecedents in order to select pronouns and that the information consists of specifications of grammatical gender associated with the lemmas of words.
  • Meyer, A. S., Alday, P. M., Decuyper, C., & Knudsen, B. (2018). Working together: Contributions of corpus analyses and experimental psycholinguistics to understanding conversation. Frontiers in Psychology, 9: 525. doi:10.3389/fpsyg.2018.00525.

    Abstract

    As conversation is the most important way of using language, linguists and psychologists should combine forces to investigate how interlocutors deal with the cognitive demands arising during conversation. Linguistic analyses of corpora of conversation are needed to understand the structure of conversations, and experimental work is indispensable for understanding the underlying cognitive processes. We argue that joint consideration of corpus and experimental data is most informative when the utterances elicited in a lab experiment match those extracted from a corpus in relevant ways. This requirement to compare like with like seems obvious but is not trivial to achieve. To illustrate this approach, we report two experiments where responses to polar (yes/no) questions were elicited in the lab and the response latencies were compared to gaps between polar questions and answers in a corpus of conversational speech. We found, as expected, that responses were given faster when they were easy to plan and planning could be initiated earlier than when they were harder to plan and planning was initiated later. Overall, in all but one condition, the latencies were longer than one would expect based on the analyses of corpus data. We discuss the implication of this partial match between the data sets and more generally how corpus and experimental data can best be combined in studies of conversation.

    Additional information

    Data_Sheet_1.pdf
  • Mitterer, H., & Jesse, A. (2010). Correlation versus causation in multisensory perception. Psychonomic Bulletin & Review, 17, 329-334. doi:10.3758/PBR.17.3.329.

    Abstract

    Events are often perceived in multiple modalities. The co-occurring proximal visual and auditory stimulus events are mostly also causally linked to the distal event. This makes it difficult to evaluate whether learned correlation or perceived causation guides binding in multisensory perception. Piano tones are an interesting exception: Piano tones are associated with seeing key strokes but are directly caused by hammers that hit strings hidden from observation. We examined the influence of seeing the hammer or the key stroke on auditory temporal order judgments (TOJ). Participants judged the temporal order of a dog bark and a piano tone, while seeing the piano stroke shifted temporally relative to its audio signal. Visual lead increased "piano-first" responses in auditory TOJ, but more so if only the associated key stroke than if the sound-producing hammer was visible, though both were equally visually salient. This provides evidence for a learning account of audiovisual perception.
  • Mitterer, H., Reinisch, E., & McQueen, J. M. (2018). Allophones, not phonemes in spoken-word recognition. Journal of Memory and Language, 98, 77-92. doi:10.1016/j.jml.2017.09.005.

    Abstract

    What are the phonological representations that listeners use to map information about the segmental content of speech onto the mental lexicon during spoken-word recognition? Recent evidence from perceptual-learning paradigms seems to support (context-dependent) allophones as the basic representational units in spoken-word recognition. But recent evidence from a selective-adaptation paradigm seems to suggest that context-independent phonemes also play a role. We present three experiments using selective adaptation that constitute strong tests of these representational hypotheses. In Experiment 1, we tested generalization of selective adaptation using different allophones of Dutch /r/ and /l/ – a case where generalization has not been found with perceptual learning. In Experiments 2 and 3, we tested generalization of selective adaptation using German back fricatives in which allophonic and phonemic identity were varied orthogonally. In all three experiments, selective adaptation was observed only if adaptors and test stimuli shared allophones. Phonemic identity, in contrast, was neither necessary nor sufficient for generalization of selective adaptation to occur. These findings and other recent data using the perceptual-learning paradigm suggest that pre-lexical processing during spoken-word recognition is based on allophones, and not on context-independent phonemes.
  • Moisik, S. R., Esling, J. H., & Crevier-Buchman, L. (2010). A high-speed laryngoscopic investigation of aryepiglottic trilling. The Journal of the Acoustical Society of America, 127(3), 1548-1558. doi:10.1121/1.3299203.

    Abstract

    Six aryepiglottic trills with varied laryngeal parameters were recorded using high-speed laryngoscopy to investigate the nature of the oscillatory behavior of the upper margin of the epilaryngeal tube. Image analysis techniques were applied to extract data about the patterns of aryepiglottic fold oscillation, with a focus on the oscillatory frequencies of the folds. The acoustic impact of aryepiglottic trilling is also considered, along with possible interactions between the aryepiglottic vibration and vocal fold vibration during the voiced trill. Overall, aryepiglottic trilling is deemed to be correctly labeled as a trill in phonetic terms, while also acting as a means to alter the quality of voicing to be auditorily harsh. In terms of its characterization, aryepiglottic vibration is considerably irregular, but it shows indications of contributing quasi-harmonic excitation of the vocal tract, particularly noticeable under conditions of glottal voicelessness. Aryepiglottic vibrations appear to be largely independent of glottal vibration in terms of oscillatory frequency but can be increased in frequency by increasing overall laryngeal constriction. There is evidence that aryepiglottic vibration induces an alternating vocal fold vibration pattern. It is concluded that aryepiglottic trilling, like ventricular phonation, should be regarded as a complex, if highly irregular, sound source.
  • Monster, I., & Lev-Ari, S. (2018). The effect of social network size on hashtag adoption on Twitter. Cognitive Science, 42(8), 3149-3158. doi:10.1111/cogs.12675.

    Abstract

    Propagation of novel linguistic terms is an important aspect of language use and language change. Here, we test how social network size influences people’s likelihood of adopting novel labels by examining hashtag use on Twitter. Specifically, we test whether following fewer Twitter users leads to more varied and malleable hashtag use on Twitter, because each followed user is ascribed greater weight and thus exerts greater influence on the following user. Focusing on Dutch users tweeting about the terrorist attack in Brussels in 2016, we show that people who follow fewer other users use a larger number of unique hashtags to refer to the event, reflecting greater malleability and variability in use. These results have implications for theories of language learning, language use, and language change.
  • Morgan, A. T., van Haaften, L., van Hulst, K., Edley, C., Mei, C., Tan, T. Y., Amor, D., Fisher, S. E., & Koolen, D. A. (2018). Early speech development in Koolen de Vries syndrome limited by oral praxis and hypotonia. European Journal of Human Genetics, 26, 75-84. doi:10.1038/s41431-017-0035-9.

    Abstract

    Communication disorder is common in Koolen de Vries syndrome (KdVS), yet its specific symptomatology has not been examined, limiting prognostic counselling and application of targeted therapies. Here we examine the communication phenotype associated with KdVS. Twenty-nine participants (12 males, 4 with KANSL1 variants, 25 with 17q21.31 microdeletion), aged 1.0–27.0 years were assessed for oral-motor, speech, language, literacy, and social functioning. Early history included hypotonia and feeding difficulties. Speech and language development was delayed and atypical from onset of first words (2;5–3;5 years of age on average). Speech was characterised by apraxia (100%) and dysarthria (93%), with stuttering in some (17%). Speech therapy and multi-modal communication (e.g., sign-language) was critical in preschool. Receptive and expressive language abilities were typically commensurate (79%), both being severely affected relative to peers. Children were sociable with a desire to communicate, although some (36%) had pragmatic impairments in domains where higher-level language was required. A common phenotype was identified, including an overriding ‘double hit’ of oral hypotonia and apraxia in infancy and preschool, associated with severely delayed speech development. Remarkably however, speech prognosis was positive; apraxia resolved, and although dysarthria persisted, children were intelligible by mid-to-late childhood. In contrast, language and literacy deficits persisted, and pragmatic deficits were apparent. Children with KdVS require early, intensive, speech motor and language therapy, with targeted literacy and social language interventions as developmentally appropriate. Greater understanding of the linguistic phenotype may help unravel the relevance of KANSL1 to child speech and language development.

    Additional information

    41431_2017_35_MOESM1_ESM.docx
  • Mostert, P., Albers, A. M., Brinkman, L., Todorova, L., Kok, P., & De Lange, F. P. (2018). Eye movement-related confounds in neural decoding of visual working memory representations. eNeuro, 5(4): ENEURO.0401-17.2018. doi:10.1523/ENEURO.0401-17.2018.

    Abstract

    A relatively new analysis technique, known as neural decoding or multivariate pattern analysis (MVPA), has become increasingly popular for cognitive neuroimaging studies over recent years. These techniques promise to uncover the representational contents of neural signals, as well as the underlying code and the dynamic profile thereof. A field in which these techniques have led to novel insights in particular is that of visual working memory (VWM). In the present study, we subjected human volunteers to a combined VWM/imagery task while recording their neural signals using magnetoencephalography (MEG). We applied multivariate decoding analyses to uncover the temporal profile underlying the neural representations of the memorized item. Analysis of gaze position however revealed that our results were contaminated by systematic eye movements, suggesting that the MEG decoding results from our originally planned analyses were confounded. In addition to the eye movement analyses, we also present the original analyses to highlight how these might have readily led to invalid conclusions. Finally, we demonstrate a potential remedy, whereby we train the decoders on a functional localizer that was specifically designed to target bottom-up sensory signals and as such avoids eye movements. We conclude by arguing for more awareness of the potentially pervasive and ubiquitous effects of eye movement-related confounds.
  • Muglia, P., Tozzi, F., Galwey, N. W., Francks, C., Upmanyu, R., Kong, X., Antoniades, A., Domenici, E., Perry, J., Rothen, S., Vandeleur, C. L., Mooser, V., Waeber, G., Vollenweider, P., Preisig, M., Lucae, S., Muller-Myhsok, B., Holsboer, F., Middleton, L. T., & Roses, A. D. (2010). Genome-wide association study of recurrent major depressive disorder in two European case-control cohorts. Molecular Psychiatry, 15(6), 589-601. doi:10.1038/mp.2008.131.

    Abstract

    Major depressive disorder (MDD) is a highly prevalent disorder with substantial heritability. Heritability has been shown to be substantial and higher in the variant of MDD characterized by recurrent episodes of depression. Genetic studies have thus far failed to identify clear and consistent evidence of genetic risk factors for MDD. We conducted a genome-wide association study (GWAS) in two independent datasets. The first GWAS was performed on 1022 recurrent MDD patients and 1000 controls genotyped on the Illumina 550 platform. The second was conducted on 492 recurrent MDD patients and 1052 controls selected from a population-based collection, genotyped on the Affymetrix 5.0 platform. Neither GWAS identified any SNP that achieved GWAS significance. We obtained imputed genotypes at the Illumina loci for the individuals genotyped on the Affymetrix platform, and performed a meta-analysis of the two GWASs for this common set of approximately half a million SNPs. The meta-analysis did not yield genome-wide significant results either. The results from our study suggest that SNPs with substantial odds ratio are unlikely to exist for MDD, at least in our datasets and among the relatively common SNPs genotyped or tagged by the half-million-loci arrays. Meta-analysis of larger datasets is warranted to identify SNPs with smaller effects or with rarer allele frequencies that contribute to the risk of MDD.
  • Mulder, K., Van Heuven, W. J., & Dijkstra, T. (2018). Revisiting the neighborhood: How L2 proficiency and neighborhood manipulation affect bilingual processing. Frontiers in Psychology, 9: 1860. doi:10.3389/fpsyg.2018.01860.

    Abstract

    We conducted three neighborhood experiments with Dutch-English bilinguals to test effects of L2 proficiency and neighborhood characteristics within and between languages. In the past 20 years, the English (L2) proficiency of this population has considerably increased. To consider the impact of this development on neighborhood effects, we conducted a strict replication of the English lexical decision task by van Heuven, Dijkstra, & Grainger (1998, Exp. 4). In line with our prediction, English characteristics (neighborhood size, word and bigram frequency) dominated the word and nonword responses, while the nonwords also revealed an interaction of English and Dutch neighborhood size.
    The prominence of English was tested again in two experiments introducing a stronger neighborhood manipulation. In English lexical decision and progressive demasking, English items with no orthographic neighbors at all were contrasted with items having neighbors in English or Dutch (‘hermits’) only, or in both languages. In both tasks, target processing was affected strongly by the presence of English neighbors, but only weakly by Dutch neighbors. Effects are interpreted in terms of two underlying processing mechanisms: language-specific global lexical activation and lexical competition.
  • Mulhern, M. S., Stumpel, C., Stong, N., Brunner, H. G., Bier, L., Lippa, N., Riviello, J., Rouhl, R. P. W., Kempers, M., Pfundt, R., Stegmann, A. P. A., Kukolich, M. K., Telegrafi, A., Lehman, A., Lopez-Rangel, E., Houcinat, N., Barth, M., Den Hollander, N., Hoffer, M. J. V., Weckhuysen, S., Roovers, J., Djemie, T., Barca, D., Ceulemans, B., Craiu, D., Lemke, J. R., Korff, C., Mefford, H. C., Meyers, C. T., Siegler, Z., Hiatt, S. M., Cooper, G. M., Bebin, E. M., Snijders Blok, L., Veenstra-Knol, H. E., Baugh, E. H., Brilstra, E. H., Volker-Touw, C. M. L., Van Binsbergen, E., Revah-Politi, A., Pereira, E., McBrian, D., Pacault, M., Isidor, B., Le Caignec, C., Gilbert-Dussardier, B., Bilan, F., Heinzen, E. L., Goldstein, D. B., Stevens, S. J. C., & Sands, T. T. (2018). NBEA: Developmental disease gene with early generalized epilepsy phenotypes. Annals of Neurology, 84(5), 788-795. doi:10.1002/ana.25350.

    Abstract

    NBEA is a candidate gene for autism, and de novo variants have been reported in neurodevelopmental disease (NDD) cohorts. However, NBEA has not been rigorously evaluated as a disease gene, and associated phenotypes have not been delineated. We identified 24 de novo NBEA variants in patients with NDD, establishing NBEA as an NDD gene. Most patients had epilepsy with onset in the first few years of life, often characterized by generalized seizure types, including myoclonic and atonic seizures. Our data show a broader phenotypic spectrum than previously described, including a myoclonic-astatic epilepsy–like phenotype in a subset of patients.

  • Newbury, D. F., Fisher, S. E., & Monaco, A. P. (2010). Recent advances in the genetics of language impairment. Genome Medicine, 2, 6. doi:10.1186/gm127.

    Abstract

    Specific language impairment (SLI) is defined as an unexpected and persistent impairment in language ability despite adequate opportunity and intelligence and in the absence of any explanatory medical conditions. This condition is highly heritable and affects between 5% and 8% of pre-school children. Over the past few years, investigations have begun to uncover genetic factors that may contribute to susceptibility to language impairment. So far, variants in four specific genes have been associated with spoken language disorders - forkhead box P2 (FOXP2) and contactin-associated protein-like 2 (CNTNAP2) on chromosome 7 and calcium-transporting ATPase 2C2 (ATP2C2) and c-MAF inducing protein (CMIP) on chromosome 16. Here, we describe the different ways in which these genes were identified as candidates for language impairment. We discuss how characterization of these genes, and the pathways in which they are involved, may enhance our understanding of language disorders and improve our understanding of the biological foundations of language acquisition.
  • Nieuwland, M. S., Politzer-Ahles, S., Heyselaar, E., Segaert, K., Darley, E., Kazanina, N., Von Grebmer Zu Wolfsthurn, S., Bartolozzi, F., Kogan, V., Ito, A., Mézière, D., Barr, D. J., Rousselet, G., Ferguson, H. J., Busch-Moreno, S., Fu, X., Tuomainen, J., Kulakova, E., Husband, E. M., Donaldson, D. I., Kohút, Z., Rueschemeyer, S.-A., & Huettig, F. (2018). Large-scale replication study reveals a limit on probabilistic prediction in language comprehension. eLife, 7: e33468. doi:10.7554/eLife.33468.

    Abstract

    Do people routinely pre-activate the meaning and even the phonological form of upcoming words? The most acclaimed evidence for phonological prediction comes from a 2005 Nature Neuroscience publication by DeLong, Urbach and Kutas, who observed a graded modulation of electrical brain potentials (N400) to nouns and preceding articles by the probability that people use a word to continue the sentence fragment (‘cloze’). In our direct replication study spanning 9 laboratories (N=334), pre-registered replication-analyses and exploratory Bayes factor analyses successfully replicated the noun-results but, crucially, not the article-results. Pre-registered single-trial analyses also yielded a statistically significant effect for the nouns but not the articles. Exploratory Bayesian single-trial analyses showed that the article-effect may be non-zero but is likely far smaller than originally reported and too small to observe without very large sample sizes. Our results do not support the view that readers routinely pre-activate the phonological form of predictable words.

    Additional information

    Data sets
  • Nieuwland, M. S., Ditman, T., & Kuperberg, G. R. (2010). On the incrementality of pragmatic processing: An ERP investigation of informativeness and pragmatic abilities. Journal of Memory and Language, 63(3), 324-346. doi:10.1016/j.jml.2010.06.005.

    Abstract

    In two event-related potential (ERP) experiments, we determined to what extent Grice’s maxim of informativeness as well as pragmatic ability contributes to the incremental build-up of sentence meaning, by examining the impact of underinformative versus informative scalar statements (e.g. “Some people have lungs/pets, and…”) on the N400 event-related potential (ERP), an electrophysiological index of semantic processing. In Experiment 1, only pragmatically skilled participants (as indexed by the Autism Quotient Communication subscale) showed a larger N400 to underinformative statements. In Experiment 2, this effect disappeared when the critical words were unfocused so that the local underinformativeness went unnoticed (e.g., “Some people have lungs that…”). Our results suggest that, while pragmatic scalar meaning can incrementally contribute to sentence comprehension, this contribution is dependent on contextual factors, whether these are derived from individual pragmatic abilities or the overall experimental context.
  • Niso, G., Gorgolewski, K. J., Bock, E., Brooks, T. L., Flandin, G., Gramfort, A., Henson, R. N., Jas, M., Litvak, V., Moreau, J. T., Oostenveld, R., Schoffelen, J.-M., Tadel, F., Wexler, J., & Baillet, S. (2018). MEG-BIDS, the brain imaging data structure extended to magnetoencephalography. Scientific Data, 5: 180110. doi:10.1038/sdata.2018.110.

    Abstract

    We present a significant extension of the Brain Imaging Data Structure (BIDS) to support the specific aspects of magnetoencephalography (MEG) data. MEG measures brain activity with millisecond temporal resolution and unique source imaging capabilities. So far, BIDS was a solution to organise magnetic resonance imaging (MRI) data. The nature and acquisition parameters of MRI and MEG data are strongly dissimilar. Although there is no standard data format for MEG, we propose MEG-BIDS as a principled solution to store, organise, process and share the multidimensional data volumes produced by the modality. The standard also includes well-defined metadata, to facilitate future data harmonisation and sharing efforts. This responds to unmet needs from the multimodal neuroimaging community and paves the way to further integration of other techniques in electrophysiology. MEG-BIDS builds on MRI-BIDS, extending BIDS to a multimodal data structure. We feature several data-analytics software that have adopted MEG-BIDS, and a diverse sample of open MEG-BIDS data resources available to everyone.
  • Nitschke, S., Kidd, E., & Serratrice, L. (2010). First language transfer and long-term structural priming in comprehension. Language and Cognitive Processes, 25(1), 94-114. doi:10.1080/01690960902872793.

    Abstract

    The present study investigated L1 transfer effects in L2 sentence processing and syntactic priming through comprehension in speakers of German and Italian. L1 and L2 speakers of both languages participated in a syntactic priming experiment that aimed to shift their preferred interpretation of ambiguous relative clause constructions. The results suggested that L1 transfer affects L2 processing but not the strength of structural priming, and therefore does not hinder the acquisition of L2 parsing strategies. We also report evidence that structural priming through comprehension can persist in L1 and L2 speakers over an experimental phase without further exposure to primes. Finally, we observed that priming can occur for what are essentially novel form-meaning pairings for L2 learners, suggesting that adult learners can rapidly associate existing forms with new meanings.
  • Noble, J., De Ruiter, J. P., & Arnold, K. (2010). From monkey alarm calls to human language: How simulations can fill the gap. Adaptive Behavior, 18, 66-82. doi:10.1177/1059712309350974.

    Abstract

    Observations of alarm calling behavior in putty-nosed monkeys are suggestive of a link with human language evolution. However, as is often the case in studies of animal behavior and cognition, competing theories are underdetermined by the available data. We argue that computational modeling, and in particular the use of individual-based simulations, is an effective way to reduce the size of the pool of candidate explanations. Simulation achieves this both through the classification of evolutionary trajectories as either plausible or implausible, and by putting lower bounds on the cognitive complexity required to perform particular behaviors. A case is made for using both of these strategies to understand the extent to which the alarm calls of putty-nosed monkeys are likely to be a good model for human language evolution.
  • Noordzij, M. L., Newman-Norlund, S. E., De Ruiter, J. P., Hagoort, P., Levinson, S. C., & Toni, I. (2010). Neural correlates of intentional communication. Frontiers in Neuroscience, 4, E188. doi:10.3389/fnins.2010.00188.

    Abstract

    We know a great deal about the neurophysiological mechanisms supporting instrumental actions, i.e. actions designed to alter the physical state of the environment. In contrast, little is known about our ability to select communicative actions, i.e. actions directly designed to modify the mental state of another agent. We have recently provided novel empirical evidence for a mechanism in which a communicator selects his actions on the basis of a prediction of the communicative intentions that an addressee is most likely to attribute to those actions. The main novelty of those findings was that this prediction of intention recognition is cerebrally implemented within the intention recognition system of the communicator, is modulated by the ambiguity in meaning of the communicative acts, and not by their sensorimotor complexity. The characteristics of this predictive mechanism support the notion that human communicative abilities are distinct from both sensorimotor and linguistic processes.
  • Noppeney, U., Jones, S. A., Rohe, T., & Ferrari, A. (2018). See what you hear – How the brain forms representations across the senses. Neuroforum, 24(4), 257-271. doi:10.1515/nf-2017-A066.

    Abstract

    Our senses are constantly bombarded with a myriad of signals. To make sense of this cacophony, the brain needs to integrate signals emanating from a common source, but segregate signals originating from different sources. Thus, multisensory perception relies critically on inferring the world’s causal structure (i.e., one common vs. multiple independent sources). Behavioural research has shown that the brain arbitrates between sensory integration and segregation consistent with the principles of Bayesian Causal Inference. At the neural level, recent functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) studies have shown that the brain accomplishes Bayesian Causal Inference by dynamically encoding multiple perceptual estimates across the sensory processing hierarchies. Only at the top of the hierarchy in anterior parietal cortices did the brain form perceptual estimates that take into account the observer’s uncertainty about the world’s causal structure consistent with Bayesian Causal Inference.
  • Norris, D., McQueen, J. M., & Cutler, A. (2018). Commentary on “Interaction in spoken word recognition models". Frontiers in Psychology, 9: 1568. doi:10.3389/fpsyg.2018.01568.
  • Norris, D., McQueen, J. M., & Cutler, A. (2000). Feedback on feedback on feedback: It’s feedforward. (Response to commentators). Behavioral and Brain Sciences, 23, 352-370.

    Abstract

    The central thesis of the target article was that feedback is never necessary in spoken word recognition. The commentaries present no new data and no new theoretical arguments which lead us to revise this position. In this response we begin by clarifying some terminological issues which have led to a number of significant misunderstandings. We provide some new arguments to support our case that the feedforward model Merge is indeed more parsimonious than the interactive alternatives, and that it provides a more convincing account of the data than alternative models. Finally, we extend the arguments to deal with new issues raised by the commentators such as infant speech perception and neural architecture.
  • Norris, D., McQueen, J. M., & Cutler, A. (2000). Merging information in speech recognition: Feedback is never necessary. Behavioral and Brain Sciences, 23, 299-325.

    Abstract

    Top-down feedback does not benefit speech recognition; on the contrary, it can hinder it. No experimental data imply that feedback loops are required for speech recognition. Feedback is accordingly unnecessary and spoken word recognition is modular. To defend this thesis, we analyse lexical involvement in phonemic decision making. TRACE (McClelland & Elman 1986), a model with feedback from the lexicon to prelexical processes, is unable to account for all the available data on phonemic decision making. The modular Race model (Cutler & Norris 1979) is likewise challenged by some recent results, however. We therefore present a new modular model of phonemic decision making, the Merge model. In Merge, information flows from prelexical processes to the lexicon without feedback. Because phonemic decisions are based on the merging of prelexical and lexical information, Merge correctly predicts lexical involvement in phonemic decisions in both words and nonwords. Computer simulations show how Merge is able to account for the data through a process of competition between lexical hypotheses. We discuss the issue of feedback in other areas of language processing and conclude that modular models are particularly well suited to the problems and constraints of speech recognition.
  • Orfanidou, E., Adam, R., Morgan, G., & McQueen, J. M. (2010). Recognition of signed and spoken language: Different sensory inputs, the same segmentation procedure. Journal of Memory and Language, 62(3), 272-283. doi:10.1016/j.jml.2009.12.001.

    Abstract

    Signed languages are articulated through simultaneous upper-body movements and are seen; spoken languages are articulated through sequential vocal-tract movements and are heard. But word recognition in both language modalities entails segmentation of a continuous input into discrete lexical units. According to the Possible Word Constraint (PWC), listeners segment speech so as to avoid impossible words in the input. We argue here that the PWC is a modality-general principle. Deaf signers of British Sign Language (BSL) spotted real BSL signs embedded in nonsense-sign contexts more easily when the nonsense signs were possible BSL signs than when they were not. A control experiment showed that there were no articulatory differences between the different contexts. A second control experiment on segmentation in spoken Dutch strengthened the claim that the main BSL result likely reflects the operation of a lexical-viability constraint. It appears that signed and spoken languages, in spite of radical input differences, are segmented so as to leave no residues of the input that cannot be words.
  • Ortega, G., & Morgan, G. (2010). Comparing child and adult development of a visual phonological system. Language, Interaction and Acquisition, 1(1), 67-81. doi:10.1075/lia.1.1.05ort.

    Abstract

    Research has documented systematic articulation differences in young children’s first signs compared with the adult input. Explanations range from the implementation of phonological processes to cognitive limitations and motor immaturity. One way of disentangling these possible explanations is to investigate signing articulation in adults who do not know any sign language but have mature cognitive and motor development. Some preliminary observations are provided on signing accuracy in a group of adults using a sign repetition methodology. First, adults make the most errors with marked handshapes and produce movement and location errors akin to those reported for child signers. Second, there are both positive and negative influences of sign iconicity on sign repetition in adults. Possible reasons are discussed for these iconicity effects based on gesture.
  • Ortega, G. (2010). MSJE TXT: Un evento social. Lectura y vida: Revista latinoamericana de lectura, 4, 44-53.
  • Ostarek, M., Ishag, I., Joosen, D., & Huettig, F. (2018). Saccade trajectories reveal dynamic interactions of semantic and spatial information during the processing of implicitly spatial words. Journal of Experimental Psychology: Learning, Memory, and Cognition, 44(10), 1658-1670. doi:10.1037/xlm0000536.

    Abstract

    Implicit up/down words, such as bird and foot, systematically influence performance on visual tasks involving immediately following targets in compatible vs. incompatible locations. Recent studies have observed that the semantic relation between prime words and target pictures can strongly influence the size and even the direction of the effect: Semantically related targets are processed faster in congruent vs. incongruent locations (location-specific priming), whereas unrelated targets are processed slower in congruent locations. Here, we used eye-tracking to investigate the moment-to-moment processes underlying this pattern. Our reaction time results for related targets replicated the location-specific priming effect and showed a trend towards interference for unrelated targets. We then used growth curve analysis to test how up/down words and their match vs. mismatch with immediately following targets in terms of semantics and vertical location influence concurrent saccadic eye movements. There was a strong main effect of spatial association on linear growth, with up words biasing changes in y-coordinates over time upwards relative to down words (and vice versa). Similar to the RT data, this effect was strongest for semantically related targets and reversed for unrelated targets. Intriguingly, all conditions showed a bias in the congruent direction in the initial stage of the saccade. Then, at around halfway into the saccade, the effect kept increasing in the semantically related condition and reversed in the unrelated condition. These results suggest that online processing of up/down words triggers direction-specific oculomotor processes that are dynamically modulated by the semantic relation between prime words and targets.
  • Osterhout, L., & Hagoort, P. (1999). A superficial resemblance does not necessarily mean you are part of the family: Counterarguments to Coulson, King and Kutas (1998) in the P600/SPS-P300 debate. Language and Cognitive Processes, 14, 1-14. doi:10.1080/016909699386356.

    Abstract

    Two recent studies (Coulson et al., 1998; Osterhout et al., 1996) examined the relationship between the event-related brain potential (ERP) responses to linguistic syntactic anomalies (P600/SPS) and domain-general unexpected events (P300). Coulson et al. concluded that these responses are highly similar, whereas Osterhout et al. concluded that they are distinct. In this comment, we evaluate the relative merits of these claims. We conclude that the available evidence indicates that the ERP response to syntactic anomalies is at least partially distinct from the ERP response to unexpected anomalies that do not involve a grammatical violation.
  • Otake, T., & Cutler, A. (1999). Perception of suprasegmental structure in a nonnative dialect. Journal of Phonetics, 27, 229-253. doi:10.1006/jpho.1999.0095.

    Abstract

    Two experiments examined the processing of Tokyo Japanese pitch-accent distinctions by native speakers of Japanese from two accentless-variety areas. In both experiments, listeners were presented with Tokyo Japanese speech materials used in an earlier study with Tokyo Japanese listeners, who clearly exploited the pitch-accent information in spoken-word recognition. In the first experiment, listeners judged from which of two words, differing in accentual structure, isolated syllables had been extracted. Both new groups were, overall, as successful at this task as Tokyo Japanese speakers had been, but their response patterns differed from those of the Tokyo Japanese, for instance in that a bias towards H judgments in the Tokyo Japanese responses was weakened in the present groups' responses. In a second experiment, listeners heard word fragments and guessed what the words were; in this task, the speakers from accentless areas again performed significantly above chance, but their responses showed less sensitivity to the information in the input, and greater bias towards vocabulary distribution frequencies, than had been observed with the Tokyo Japanese listeners. The results suggest that experience with a local accentless dialect affects the processing of accent for word recognition in Tokyo Japanese, even for listeners with extensive exposure to Tokyo Japanese.
  • Ozker, M., Yoshor, D., & Beauchamp, M. (2018). Converging evidence from electrocorticography and BOLD fMRI for a sharp functional boundary in superior temporal gyrus related to multisensory speech processing. Frontiers in Human Neuroscience, 12: 141. doi:10.3389/fnhum.2018.00141.

    Abstract

    Although humans can understand speech using the auditory modality alone, in noisy environments visual speech information from the talker’s mouth can rescue otherwise unintelligible auditory speech. To investigate the neural substrates of multisensory speech perception, we compared neural activity from the human superior temporal gyrus (STG) in two datasets. One dataset consisted of direct neural recordings (electrocorticography, ECoG) from surface electrodes implanted in epilepsy patients (this dataset has been previously published). The second dataset consisted of indirect measures of neural activity using blood oxygen level dependent functional magnetic resonance imaging (BOLD fMRI). Both ECoG and fMRI participants viewed the same clear and noisy audiovisual speech stimuli and performed the same speech recognition task. Both techniques demonstrated a sharp functional boundary in the STG, spatially coincident with an anatomical boundary defined by the posterior edge of Heschl’s gyrus. Cortex on the anterior side of the boundary responded more strongly to clear audiovisual speech than to noisy audiovisual speech while cortex on the posterior side of the boundary did not. For both ECoG and fMRI measurements, the transition between the functionally distinct regions happened within 10 mm of anterior-to-posterior distance along the STG. We relate this boundary to the multisensory neural code underlying speech perception and propose that it represents an important functional division within the human speech perception network.
  • Ozker, M., Yoshor, D., & Beauchamp, M. (2018). Frontal cortex selects representations of the talker’s mouth to aid in speech perception. eLife, 7: e30387. doi:10.7554/eLife.30387.
  • Ozyurek, A., Zwitserlood, I., & Perniss, P. M. (2010). Locative expressions in signed languages: A view from Turkish Sign Language (TID). Linguistics, 48(5), 1111-1145. doi:10.1515/LING.2010.036.

    Abstract

    Locative expressions encode the spatial relationship between two (or more) entities. In this paper, we focus on locative expressions in signed languages, which use the visual-spatial modality for linguistic expression, specifically in Turkish Sign Language (Türk İşaret Dili, henceforth TİD). We show that TİD uses various strategies in discourse to encode the relation between a Ground entity (i.e., a bigger and/or backgrounded entity) and a Figure entity (i.e., a smaller entity, which is in the focus of attention). Some of these strategies exploit affordances of the visual modality for analogue representation and provide evidence for modality-specific effects on locative expressions in sign languages. However, other modality-specific strategies, e.g., the simultaneous expression of Figure and Ground, which have been reported for many other sign languages, occur only sparsely in TİD. Furthermore, TİD uses categorical as well as analogical structures in locative expressions. On the basis of these findings, we discuss differences and similarities between signed and spoken languages to broaden our understanding of the range of structures used in natural language (i.e., in both the visual-spatial and oral-aural modalities) to encode locative relations. A general linguistic theory of spatial relations, and specifically of locative expressions, must take all structures that might arise in both modalities into account before it can generalize over the human language faculty.
  • Palva, J. M., Wang, S. H., Palva, S., Zhigalov, A., Monto, S., Brookes, M. J., & Schoffelen, J.-M. (2018). Ghost interactions in MEG/EEG source space: A note of caution on inter-areal coupling measures. NeuroImage, 173, 632-643. doi:10.1016/j.neuroimage.2018.02.032.

    Abstract

    When combined with source modeling, magneto- (MEG) and electroencephalography (EEG) can be used to study long-range interactions among cortical processes non-invasively. Estimation of such inter-areal connectivity is nevertheless hindered by instantaneous field spread and volume conduction, which artificially introduce linear correlations and impair source separability in cortical current estimates. To overcome the inflating effects of linear source mixing inherent to standard interaction measures, alternative phase- and amplitude-correlation based connectivity measures, such as imaginary coherence and orthogonalized amplitude correlation, have been proposed. Being by definition insensitive to zero-lag correlations, these techniques have become increasingly popular in the identification of correlations that cannot be attributed to field spread or volume conduction. We show here, however, that while these measures are immune to the direct effects of linear mixing, they may still reveal large numbers of spurious false positive connections through field spread in the vicinity of true interactions. This fundamental problem affects both region-of-interest-based analyses and all-to-all connectome mappings. Most importantly, beyond defining and illustrating the problem of spurious, or “ghost” interactions, we provide a rigorous quantification of this effect through extensive simulations. Additionally, we further show that signal mixing also significantly limits the separability of neuronal phase and amplitude correlations. We conclude that spurious correlations must be carefully considered in connectivity analyses in MEG/EEG source space even when using measures that are immune to zero-lag correlations.
  • Pascucci, D., Hervais-Adelman, A., & Plomp, G. (2018). Gating by induced A-Gamma asynchrony in selective attention. Human Brain Mapping, 39(10), 3854-3870. doi:10.1002/hbm.24216.

    Abstract

    Visual selective attention operates through top-down mechanisms of signal enhancement and suppression, mediated by α-band oscillations. The effects of such top-down signals on local processing in primary visual cortex (V1) remain poorly understood. In this work, we characterize the interplay between large-scale interactions and local activity changes in V1 that orchestrates selective attention, using Granger-causality and phase-amplitude coupling (PAC) analysis of EEG source signals. The task required participants to either attend to or ignore oriented gratings. Results from time-varying, directed connectivity analysis revealed frequency-specific effects of attentional selection: bottom-up γ-band influences from visual areas increased rapidly in response to attended stimuli, while distributed top-down α-band influences originated from parietal cortex in response to ignored stimuli. Importantly, the results revealed a critical interplay between top-down parietal signals and α–γ PAC in visual areas. Parietal α-band influences disrupted the α–γ coupling in visual cortex, which in turn reduced the amount of γ-band outflow from visual areas. Our results are a first demonstration of how directed interactions affect cross-frequency coupling in downstream areas depending on task demands. These findings suggest that parietal cortex realizes selective attention by disrupting cross-frequency coupling at target regions, which prevents them from propagating task-irrelevant information.
  • Peeters, D. (2018). A standardized set of 3D-objects for virtual reality research and applications. Behavior Research Methods, 50(3), 1047-1054. doi:10.3758/s13428-017-0925-3.

    Abstract

    The use of immersive virtual reality as a research tool is rapidly increasing in numerous scientific disciplines. By combining ecological validity with strict experimental control, immersive virtual reality provides the potential to develop and test scientific theory in rich environments that closely resemble everyday settings. This article introduces the first standardized database of colored three-dimensional (3D) objects that can be used in virtual reality and augmented reality research and applications. The 147 objects have been normed for name agreement, image agreement, familiarity, visual complexity, and corresponding lexical characteristics of the modal object names. The availability of standardized 3D-objects for virtual reality research is important, as reaching valid theoretical conclusions critically hinges on the use of well controlled experimental stimuli. Sharing standardized 3D-objects across different virtual reality labs will allow for science to move forward more quickly.
  • Peeters, D., & Dijkstra, T. (2018). Sustained inhibition of the native language in bilingual language production: A virtual reality approach. Bilingualism: Language and Cognition, 21(5), 1035-1061. doi:10.1017/S1366728917000396.

    Abstract

    Bilinguals often switch languages as a function of the language background of their addressee. The control mechanisms supporting bilinguals' ability to select the contextually appropriate language are heavily debated. Here we present four experiments in which unbalanced bilinguals named pictures in their first language Dutch and their second language English in mixed and blocked contexts. Immersive virtual reality technology was used to increase the ecological validity of the cued language-switching paradigm. Behaviorally, we consistently observed symmetrical switch costs, reversed language dominance, and asymmetrical mixing costs. These findings indicate that unbalanced bilinguals apply sustained inhibition to their dominant L1 in mixed language settings. Consequent enhanced processing costs for the L1 in a mixed versus a blocked context were reflected by a sustained positive component in event-related potentials. Methodologically, the use of virtual reality opens up a wide range of possibilities to study language and communication in bilingual and other communicative settings.
  • Perlman, M., Little, H., Thompson, B., & Thompson, R. L. (2018). Iconicity in signed and spoken vocabulary: A comparison between American Sign Language, British Sign Language, English, and Spanish. Frontiers in Psychology, 9: 1433. doi:10.3389/fpsyg.2018.01433.

    Abstract

    Considerable evidence now shows that all languages, signed and spoken, exhibit a significant amount of iconicity. We examined how the visual-gestural modality of signed languages facilitates iconicity for different kinds of lexical meanings compared to the auditory-vocal modality of spoken languages. We used iconicity ratings of hundreds of signs and words to compare iconicity across the vocabularies of two signed languages – American Sign Language and British Sign Language, and two spoken languages – English and Spanish. We examined (1) the correlation in iconicity ratings between the languages; (2) the relationship between iconicity and an array of semantic variables (ratings of concreteness, sensory experience, imageability, perceptual strength of vision, audition, touch, smell and taste); (3) how iconicity varies between broad lexical classes (nouns, verbs, adjectives, grammatical words and adverbs); and (4) between more specific semantic categories (e.g., manual actions, clothes, colors). The results show several notable patterns that characterize how iconicity is spread across the four vocabularies. There were significant correlations in the iconicity ratings between the four languages, including English with ASL, BSL, and Spanish. The highest correlation was between ASL and BSL, suggesting iconicity may be more transparent in signs than words. In each language, iconicity was distributed according to the semantic variables in ways that reflect the semiotic affordances of the modality (e.g., more concrete meanings more iconic in signs, not words; more auditory meanings more iconic in words, not signs; more tactile meanings more iconic in both signs and words). Analysis of the 220 meanings with ratings in all four languages further showed characteristic patterns of iconicity across broad and specific semantic domains, including those that distinguished between signed and spoken languages (e.g., verbs more iconic in ASL, BSL, and English, but not Spanish; manual actions especially iconic in ASL and BSL; adjectives more iconic in English and Spanish; color words especially low in iconicity in ASL and BSL). These findings provide the first quantitative account of how iconicity is spread across the lexicons of signed languages in comparison to spoken languages.
  • Perniss, P. M., Thompson, R. L., & Vigliocco, G. (2010). Iconicity as a general property of language: Evidence from spoken and signed languages [Review article]. Frontiers in Psychology, 1, E227. doi:10.3389/fpsyg.2010.00227.

    Abstract

    Current views about language are dominated by the idea of arbitrary connections between linguistic form and meaning. However, if we look beyond the more familiar Indo-European languages and also include both spoken and signed language modalities, we find that motivated, iconic form-meaning mappings are, in fact, pervasive in language. In this paper, we review the different types of iconic mappings that characterize languages in both modalities, including the predominantly visually iconic mappings in signed languages. Having shown that iconic mappings are present across languages, we then proceed to review evidence showing that language users (signers and speakers) exploit iconicity in language processing and language acquisition. While not discounting the presence and importance of arbitrariness in language, we put forward the idea that iconicity also needs to be recognized as a general property of language, which may serve the function of reducing the gap between linguistic form and conceptual representation to allow the language system to “hook up” to motor and perceptual experience.
  • Perry, L. K., Perlman, M., Winter, B., Massaro, D. W., & Lupyan, G. (2018). Iconicity in the speech of children and adults. Developmental Science, 21: e12572. doi:10.1111/desc.12572.

    Abstract

    Iconicity – the correspondence between form and meaning – may help young children learn to use new words. Early-learned words are higher in iconicity than later learned words. However, it remains unclear what role iconicity may play in actual language use. Here, we ask whether iconicity relates not just to the age at which words are acquired, but also to how frequently children and adults use the words in their speech. If iconicity serves to bootstrap word learning, then we would expect that children should say highly iconic words more frequently than less iconic words, especially early in development. We would also expect adults to use iconic words more often when speaking to children than to other adults. We examined the relationship between frequency and iconicity for approximately 2000 English words. Replicating previous findings, we found that more iconic words are learned earlier. Moreover, we found that more iconic words tend to be used more by younger children, and adults use more iconic words when speaking to children than to other adults. Together, our results show that young children not only learn words rated high in iconicity earlier than words low in iconicity, but they also produce these words more frequently in conversation – a pattern that is reciprocated by adults when speaking with children. Thus, the earliest conversations of children are relatively higher in iconicity, suggesting that this iconicity scaffolds the production and comprehension of spoken language during early development.
  • Petersson, K. M., Elfgren, C., & Ingvar, M. (1999). Dynamic changes in the functional anatomy of the human brain during recall of abstract designs related to practice. Neuropsychologia, 37, 567-587.

    Abstract

    In the present PET study we explore some functional aspects of the interaction between attentional/control processes and learning/memory processes. The network of brain regions supporting recall of abstract designs was studied in a less practiced and in a well practiced state. The results indicate that automaticity, i.e., a decreased dependence on attentional and working memory resources, develops as a consequence of practice. This corresponds to the practice related decreases of activity in the prefrontal, anterior cingulate, and posterior parietal regions. In addition, the activity of the medial temporal regions decreased as a function of practice. This indicates an inverse relation between the strength of encoding and the activation of the MTL during retrieval. Furthermore, the pattern of practice related increases in the auditory, posterior insular-opercular extending into perisylvian supramarginal region, and the right mid occipito-temporal region, may reflect a lower degree of inhibitory attentional modulation of task irrelevant processing and more fully developed representations of the abstract designs, respectively. We also suggest that free recall is dependent on bilateral prefrontal processing, in particular non-automatic free recall. The present results confirm previous functional neuroimaging studies of memory retrieval indicating that recall is subserved by a network of interacting brain regions. Furthermore, the results indicate that some components of the neural network subserving free recall may have a dynamic role and that there is a functional restructuring of the information processing networks during the learning process.
  • Petersson, K. M., Reis, A., Castro-Caldas, A., & Ingvar, M. (1999). Effective auditory-verbal encoding activates the left prefrontal and the medial temporal lobes: A generalization to illiterate subjects. NeuroImage, 10, 45-54. doi:10.1006/nimg.1999.0446.

    Abstract

    Recent event-related fMRI studies indicate that the prefrontal (PFC) and the medial temporal lobe (MTL) regions are more active during effective encoding than during ineffective encoding. The within-subject design and the use of well-educated young college students in these studies makes it important to replicate these results in other study populations. In this PET study, we used an auditory word-pair association cued-recall paradigm and investigated a group of healthy upper middle-aged/older illiterate women. We observed a positive correlation between cued-recall success and the regional cerebral blood flow of the left inferior PFC (BA 47) and the MTLs. Specifically, we used the cued-recall success as a covariate in a general linear model and the results confirmed that the left inferior PFC and the MTL are more active during effective encoding than during ineffective encoding. These effects were observed during encoding of both semantically and phonologically related word pairs, indicating that these effects are robust in the studied population, that is, reproducible within group. These results generalize the results of Brewer et al. (1998, Science 281, 1185–1187) and Wagner et al. (1998, Science 281, 1188–1191) to an upper middle-aged/older illiterate population. In addition, the present study indicates that effective relational encoding correlates positively with the activity of the anterior medial temporal lobe regions.
  • Petersson, K. M., Reis, A., Askelöf, S., Castro-Caldas, A., & Ingvar, M. (2000). Language processing modulated by literacy: A network analysis of verbal repetition in literate and illiterate subjects. Journal of Cognitive Neuroscience, 12(3), 364-382. doi:10.1162/089892900562147.
  • Petersson, K. M., Elfgren, C., & Ingvar, M. (1999). Learning-related effects and functional neuroimaging. Human Brain Mapping, 7, 234-243. doi:10.1002/(SICI)1097-0193(1999)7:4<234:AID-HBM2>3.0.CO;2-O.

    Abstract

    A fundamental problem in the study of learning is that learning-related changes may be confounded by nonspecific time effects. There are several strategies for handling this problem. This problem may be of greater significance in functional magnetic resonance imaging (fMRI) compared to positron emission tomography (PET). Using the general linear model, we describe, compare, and discuss two approaches for separating learning-related from nonspecific time effects. The first approach makes assumptions on the general behavior of nonspecific effects and explicitly models these effects, i.e., nonspecific time effects are incorporated as a linear or nonlinear confounding covariate in the statistical model. The second strategy makes no a priori assumption concerning the form of nonspecific time effects, but implicitly controls for nonspecific effects using an interaction approach, i.e., learning effects are assessed with an interaction contrast. The two approaches depend on specific assumptions and have specific limitations. With certain experimental designs, both approaches may be used and the results compared, lending particular support to effects that are independent of the method used. A third and perhaps better approach that sometimes may be practically unfeasible is to use a completely temporally balanced experimental design. The choice of approach may be of particular importance when learning related effects are studied with fMRI.
  • Petersson, K. M., Nichols, T. E., Poline, J.-B., & Holmes, A. P. (1999). Statistical limitations in functional neuroimaging I: Non-inferential methods and statistical models. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 354, 1239-1260.
  • Petersson, K. M., Nichols, T. E., Poline, J.-B., & Holmes, A. P. (1999). Statistical limitations in functional neuroimaging II: Signal detection and statistical inference. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 354, 1261-1282.
  • Petrovic, P., Ingvar, M., Stone-Elander, S., Petersson, K. M., & Hansson, P. (1999). A PET activation study of dynamic mechanical allodynia in patients with mononeuropathy. Pain, 83, 459-470.

    Abstract

    The objective of this study was to investigate the central processing of dynamic mechanical allodynia in patients with mononeuropathy. Regional cerebral blood flow, as an indicator of neuronal activity, was measured with positron emission tomography. Paired comparisons were made between three different states: rest, allodynia during brushing of the painful skin area, and brushing of the homologous contralateral area. Bilateral activations were observed in the primary somatosensory cortex (S1) and the secondary somatosensory cortex (S2) during allodynia compared to rest. The S1 activation contralateral to the site of the stimulus was more pronounced during allodynia than during innocuous touch. Significant activations of the contralateral posterior parietal cortex, the periaqueductal gray (PAG), the thalamus bilaterally, and motor areas were also observed in the allodynic state compared to both non-allodynic states. In the anterior cingulate cortex (ACC) there was only a suggested activation when the allodynic state was compared with the non-allodynic states. In order to account for the individual variability in the intensity of allodynia and ongoing spontaneous pain, rCBF was regressed on the individually reported pain intensity, and significant covariations were observed in the ACC and the right anterior insula. Significantly decreased regional blood flow was observed bilaterally in the medial and lateral temporal lobe as well as in the occipital and posterior cingulate cortices when the allodynic state was compared to the non-painful conditions. This finding is consistent with previous studies suggesting attentional modulation and a central coping strategy for known and expected painful stimuli. Involvement of the medial pain system has previously been reported in patients with mononeuropathy during ongoing spontaneous pain. This study reveals a bilateral activation of the lateral pain system as well as involvement of the medial pain system during dynamic mechanical allodynia in patients with mononeuropathy.
