Publications

  • Hu, C.-P., Kong, X., Wagenmakers, E.-J., Ly, A., & Peng, K. (2018). The Bayes factor and its implementation in JASP: A practical primer. Advances in Psychological Science, 26(6), 951-965. doi:10.3724/SP.J.1042.2018.00951.

    Abstract

    Statistical inference plays a critical role in modern scientific research; however, the dominant method for statistical inference in science, null hypothesis significance testing (NHST), is often misunderstood and misused, which leads to unreproducible findings. To address this issue, researchers propose adopting the Bayes factor as an alternative to NHST. The Bayes factor is a principled Bayesian tool for model selection and hypothesis testing, and can be interpreted as the strength of evidence for either the null hypothesis H0 or the alternative hypothesis H1 provided by the current data. Compared to NHST, the Bayes factor has the following advantages: it quantifies the evidence that the data provide for both H0 and H1, it is not “violently biased” against H0, it allows one to monitor the evidence as the data accumulate, and it does not depend on sampling plans. Importantly, the recently developed open-source software JASP makes the calculation of Bayes factors accessible to most researchers in psychology, as we demonstrate for the t-test. Given these advantages, adopting the Bayes factor will improve psychological researchers’ statistical inferences. Nevertheless, to make the analysis more reproducible, researchers should keep their data analysis transparent and open.
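
    Note

    For reference, the Bayes factor discussed in this abstract is standardly defined as the ratio of marginal likelihoods (this is the textbook definition, not a formula quoted from the paper): BF10 = p(data | H1) / p(data | H0). A value of BF10 = 10, for example, means the observed data are ten times more likely under H1 than under H0, while BF10 = 0.1 favours H0 by the same factor.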
  • Konishi, M., Verdonschot, R. G., & Kakimoto, N. (2021). An investigation of tooth loss factors in elderly patients using panoramic radiographs. Oral Radiology, 37(3), 436-442. doi:10.1007/s11282-020-00475-6.

    Abstract

    Objectives: The aim of this study was to observe the dental condition of a group of elderly patients over a period of 10 years in order to clarify important risk factors.
    Materials and methods: Participants were elderly patients (in their eighties) who had panoramic radiographs taken between 2015 and 2016, and for whom panoramic radiographs taken around 10 years earlier were also available. The number of remaining and lost teeth, the Eichner Index, the presence or absence of molar occlusion, and the respective condition of dental pulp, dental crowns, alveolar bone resorption, and periapical lesions were investigated through the analysis of panoramic radiographs. Additionally, other important variables were collected from patients' medical records. From the obtained panoramic radiograph sets, the patients' dental condition was investigated and a systematic comparison was conducted.
    Results: The analysis of the panoramic radiographs showed that the number of remaining teeth decreased from an average of 20.8 to 15.5, and the percentage of patients with 20 or more teeth decreased from 69.2% to 26.9%. A factor analysis investigating tooth loss risk suggested that tooth loss was associated with bridges, P2 or greater resorption of the alveolar bone, apical lesions, and gender (with males having a higher risk than females).
    Conclusions: Teeth showing P2 or greater alveolar bone resorption, bridges, and apical lesions on panoramic radiographs are most likely to be lost in an elderly patient's near future. Consequently, this group should be encouraged to visit their dental clinics regularly and receive comprehensive instruction on individual self-care methods.
  • Konishi, M., Fujita, M., Shimabukuro, K., Wongratwanich, P., Verdonschot, R. G., & Kakimoto, N. (2021). Intraoral ultrasonographic features of tongue cancer and the incidence of cervical lymph node metastasis. Journal of Oral and Maxillofacial Surgery, 79(4), 932-939. doi:10.1016/j.joms.2020.09.006.

    Abstract

    Purpose: The purpose of this study was to investigate the relationship between the visual characteristics of tongue lesion images obtained through intraoral ultrasonographic examination and the occurrence of late cervical lymph node metastasis in patients with tongue cancer.
    Patients and Methods: This study investigated patients with primary tongue cancer who were examined using intraoral ultrasonography at Hiroshima University Hospital between January 2014 and December 2017. The inclusion criteria were squamous cell carcinoma, curative treatment administration, lateral side of tongue, surgery or brachytherapy alone, no cervical lymph node or distant metastasis as primary treatment, and treatment in our hospital. The exclusion criteria were carcinoma in situ, palliative treatment, dorsum of tongue, and multiple primary cancers. The follow-up period was more than 1 year. The primary endpoint was the occurrence of late cervical lymph node metastasis, and the primary predictor variables were age, gender, longest diameter, thickness, margin or border shapes of the lesion, and treatment methods. The relationship between the occurrence of late cervical lymph node metastasis and the longest diameter, thickness, margin types, and border types as evaluated through intraoral ultrasonography were assessed. The data were collected through a retrospective chart review.
    Results: Fifty-four patients were included in this study. The analysis indicated that irregular lesion margins were significantly associated with the occurrence of late cervical lymph node metastasis (P < .0001). The cutoff value for late cervical lymph node metastasis was 21.2 mm for the longest diameter and 3.9 mm for the thickness.
    Conclusions: The results of this study indicate that an irregular lesion margin assessed using intraoral ultrasonography may serve as an effective predictor of late cervical lymph node metastasis in N0 cases.
  • Konishi, M., Fujita, M., Takeuchi, Y., Kubo, K., Imano, N., Nishibuchi, I., Murakami, Y., Shimabukuro, K., Wongratwanich, P., Verdonschot, R. G., Kakimoto, N., & Nagata, Y. (2021). Treatment outcomes of real-time intraoral sonography-guided implantation technique of 198Au grain brachytherapy for T1 and T2 tongue cancer. Journal of Radiation Research, 62(5), 871-876. doi:10.1093/jrr/rrab059.

    Abstract

    It is often challenging to determine the accurate size and shape of oral lesions through computed tomography (CT) or magnetic resonance imaging (MRI) when they are very small or obscured by metallic artifacts, such as dental prostheses. Intraoral ultrasonography (IUS) has been shown to be beneficial in obtaining precise information about total tumor extension, as well as the exact location and guiding the insertion of catheters during interstitial brachytherapy. We evaluated the role of IUS in assessing the clinical outcomes of interstitial brachytherapy with 198Au grains in tongue cancer through a retrospective medical chart review. The data from 45 patients with T1 (n = 21) and T2 (n = 24) tongue cancer, who were mainly treated with 198Au grain implants between January 2005 and April 2019, were included in this study. 198Au grain implantations were carried out, and positioning of the implants was confirmed by IUS, to ensure that 198Au grains were appropriately placed for the deep border of the tongue lesion. The five-year local control rates of T1 and T2 tongue cancers were 95.2% and 95.5%, respectively. We propose that the use of IUS to identify the extent of lesions and the position of implanted grains is effective when performing brachytherapy with 198Au grains.
  • Konopka, A., Meyer, A. S., & Forest, T. A. (2018). Planning to speak in L1 and L2. Cognitive Psychology, 102, 72-104. doi:10.1016/j.cogpsych.2017.12.003.

    Abstract

    The leading theories of sentence planning – Hierarchical Incrementality and Linear Incrementality – differ in their assumptions about the coordination of processes that map preverbal information onto language. Previous studies showed that, in native (L1) speakers, this coordination can vary with the ease of executing the message-level and sentence-level processes necessary to plan and produce an utterance. We report the first series of experiments to systematically examine how linguistic experience influences sentence planning in native (L1) speakers (i.e., speakers with life-long experience using the target language) and non-native (L2) speakers (i.e., speakers with less experience using the target language). In all experiments, speakers spontaneously generated one-sentence descriptions of simple events in Dutch (L1) and English (L2). Analyses of eye-movements across early and late time windows (pre- and post-400 ms) compared the extent of early message-level encoding and the onset of linguistic encoding. In Experiment 1, speakers were more likely to engage in extensive message-level encoding and to delay sentence-level encoding when using their L2. Experiments 2–4 selectively facilitated encoding of the preverbal message, encoding of the agent character (i.e., the first content word in active sentences), and encoding of the sentence verb (i.e., the second content word in active sentences), respectively. Experiment 2 showed that there is no delay in the onset of L2 linguistic encoding when speakers are familiar with the events. Experiments 3 and 4 showed that the delay in the onset of L2 linguistic encoding is not due to speakers delaying encoding of the agent, but due to a preference to encode information needed to select a suitable verb early in the formulation process. Overall, speakers prefer to temporally separate message-level from sentence-level encoding and to prioritize encoding of relational information when planning L2 sentences, consistent with Hierarchical Incrementality.
  • Kösem, A., Bosker, H. R., Takashima, A., Meyer, A. S., Jensen, O., & Hagoort, P. (2018). Neural entrainment determines the words we hear. Current Biology, 28, 2867-2875. doi:10.1016/j.cub.2018.07.023.

    Abstract

    Low-frequency neural entrainment to rhythmic input has been hypothesized as a canonical mechanism that shapes sensory perception in time. Neural entrainment is deemed particularly relevant for speech analysis, as it would contribute to the extraction of discrete linguistic elements from continuous acoustic signals. However, its causal influence in speech perception has been difficult to establish. Here, we provide evidence that oscillations build temporal predictions about the duration of speech tokens that affect perception. Using magnetoencephalography (MEG), we studied neural dynamics during listening to sentences that changed in speech rate. We observed neural entrainment to preceding speech rhythms persisting for several cycles after the change in rate. The sustained entrainment was associated with changes in the perceived duration of the last word’s vowel, resulting in the perception of words with different meanings. These findings support oscillatory models of speech processing, suggesting that neural oscillations actively shape speech perception.
  • Köster, O., Hess, M. M., Schiller, N. O., & Künzel, H. J. (1998). The correlation between auditory speech sensitivity and speaker recognition ability. Forensic Linguistics: The international Journal of Speech, Language and the Law, 5, 22-32.

    Abstract

    In various applications of forensic phonetics the question arises as to how far aural-perceptual speaker recognition performance is reliable. Therefore, it is necessary to examine the relationship between speaker recognition results and human perception/production abilities like musicality or speech sensitivity. In this study, performance in a speaker recognition experiment and a speech sensitivity test are correlated. The results show a moderately significant positive correlation between the two tasks. Generally, performance in the speaker recognition task was better than in the speech sensitivity test. Professionals in speech and singing yielded a more homogeneous correlation than non-experts. Training in speech as well as choir-singing seems to have a positive effect on performance in speaker recognition. It may be concluded, firstly, that in cases where the reliability of voice line-up results or the credibility of a testimony have to be considered, the speech sensitivity test could be a useful indicator. Secondly, the speech sensitivity test might be integrated into the canon of possible procedures for the accreditation of forensic phoneticians. Both tests may also be used in combination.
  • Kotz, S. A., Ravignani, A., & Fitch, W. T. (2018). The evolution of rhythm processing. Trends in Cognitive Sciences, 22(10), 896-910. doi:10.1016/j.tics.2018.08.002.
  • Kouwenhoven, H., Van Mulken, M., & Ernestus, M. (2018). Communication strategy use by Spanish speakers of English in formal and informal speech. International Journal of Bilingualism, 22(3), 285-305. doi:10.1177/1367006916672946.

    Abstract

    Research questions:

    Are emergent bilinguals sensitive to register variation in their use of communication strategies? What strategies do LX speakers, in casu Spanish speakers of English, use as a function of situational context? What role do individual differences play?
    Methodology:

    This within-speaker study compares Spanish second-language English speakers’ communication strategy use in an informal, peer-to-peer conversation and a formal interview.
    Data and analysis:

    The 15 hours of informal and 9.5 hours of formal speech from the Nijmegen Corpus of Spanish English were coded for 19 different communication strategies.
    Findings/conclusions:

    Overall, speakers prefer self-reliant strategies, which allow them to continue communication without their interlocutor’s help. Of the self-reliant strategies, least effort strategies such as code-switching are used more often in informal speech, whereas relatively more effortful strategies (e.g. reformulations) are used more in formal speech, where the need to be unambiguously understood is felt to be more important. Individual differences played a role: some speakers were more affected by a change in formality than others.
    Originality:

    Sensitivity to register variation has not yet been studied within communicative strategy use.
    Implications:

    General principles of communication govern speakers’ strategy selection, notably the protection of positive face and the least effort and cooperative principles.

  • Kouwenhoven, H., Ernestus, M., & Van Mulken, M. (2018). Register variation by Spanish users of English. The Nijmegen Corpus of Spanish English. Corpus Linguistics and Linguistic Theory, 14(1), 35-63. doi:10.1515/cllt-2013-0054.

    Abstract

    English serves as a lingua franca in situations with varying degrees of formality. How formality affects non-native speech has rarely been studied. We investigated register variation by Spanish users of English by comparing formal and informal speech from the Nijmegen Corpus of Spanish English that we created. This corpus comprises speech from thirty-four Spanish speakers of English in interaction with Dutch confederates in two speech situations. Formality affected the amount of laughter and overlapping speech and the number of Spanish words. Moreover, formal speech had a more informational character than informal speech. We discuss how our findings relate to register variation in Spanish.

  • De Kovel, C. G. F., Lisgo, S. N., Fisher, S. E., & Francks, C. (2018). Subtle left-right asymmetry of gene expression profiles in embryonic and foetal human brains. Scientific Reports, 8: 12606. doi:10.1038/s41598-018-29496-2.

    Abstract

    Left-right laterality is an important aspect of human –and in fact all vertebrate– brain organization for which the genetic basis is poorly understood. Using RNA sequencing data we contrasted gene expression in left- and right-sided samples from several structures of the anterior central nervous systems of post mortem human embryos and foetuses. While few individual genes stood out as significantly lateralized, most structures showed evidence of laterality of their overall transcriptomic profiles. These left-right differences showed overlap with age-dependent changes in expression, indicating lateralized maturation rates, but not consistently in left-right orientation over all structures. Brain asymmetry may therefore originate in multiple locations, or if there is a single origin, it is earlier than 5 weeks post conception, with structure-specific lateralized processes already underway by this age. This pattern is broadly consistent with the weak correlations reported between various aspects of adult brain laterality, such as language dominance and handedness.
  • De Kovel, C. G. F., Lisgo, S. N., & Francks, C. (2018). Transcriptomic analysis of left-right differences in human embryonic forebrain and midbrain. Scientific Data, 5: 180164. doi:10.1038/sdata.2018.164.

    Abstract

    Left-right asymmetry is subtle but pervasive in the human central nervous system. This asymmetry is initiated early during development, but its mechanisms are poorly known. Forebrains and midbrains were dissected from six human embryos at Carnegie stages 15 or 16, one of which was female. The structures were divided into left and right sides, and RNA was isolated. RNA was sequenced with 100 base-pair paired ends using an Illumina HiSeq 4000. After quality control, five paired brain sides were available for midbrain and forebrain. A paired analysis between left and right sides of a given brain structure across the embryos identified left-right differences. The dataset, consisting of Fastq files and a read count table, can be further used to study early development of the human brain.
  • Krämer, I. (1998). Children's interpretations of indefinite object noun phrases. Linguistics in the Netherlands, 1998, 163-174. doi:10.1075/avt.15.15kra.
  • Kreuzer, H. (Ed.). (1971). Methodische Perspektiven [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (1/2).
  • Kuerbitz, J., Arnett, M., Ehrman, S., Williams, M. T., Voorhees, C. V., Fisher, S. E., Garratt, A. N., Muglia, L. J., Waclaw, R. R., & Campbell, K. (2018). Loss of intercalated cells (ITCs) in the mouse amygdala of Tshz1 mutants correlates with fear, depression and social interaction phenotypes. The Journal of Neuroscience, 38, 1160-1177. doi:10.1523/JNEUROSCI.1412-17.2017.

    Abstract

    The intercalated cells (ITCs) of the amygdala have been shown to be critical regulatory components of amygdalar circuits, which control appropriate fear responses. Despite this, the molecular processes guiding ITC development remain poorly understood. Here we establish the zinc finger transcription factor Tshz1 as a marker of ITCs during their migration from the dorsal lateral ganglionic eminence through maturity. Using germline and conditional knock-out (cKO) mouse models, we show that Tshz1 is required for the proper migration and differentiation of ITCs. In the absence of Tshz1, migrating ITC precursors fail to settle in their stereotypical locations encapsulating the lateral amygdala and BLA. Furthermore, they display reductions in the ITC marker Foxp2 and ectopic persistence of the dorsal lateral ganglionic eminence marker Sp8. Tshz1 mutant ITCs show increased cell death at postnatal time points, leading to a dramatic reduction by 3 weeks of age. In line with this, Foxp2-null mutants also show a loss of ITCs at postnatal time points, suggesting that Foxp2 may function downstream of Tshz1 in the maintenance of ITCs. Behavioral analysis of male Tshz1 cKOs revealed defects in fear extinction as well as an increase in floating during the forced swim test, indicative of a depression-like phenotype. Moreover, Tshz1 cKOs display significantly impaired social interaction (i.e., increased passivity) regardless of partner genetics. Together, these results suggest that Tshz1 plays a critical role in the development of ITCs and that fear, depression-like and social behavioral deficits arise in their absence. SIGNIFICANCE STATEMENT We show here that the zinc finger transcription factor Tshz1 is expressed during development of the intercalated cells (ITCs) within the mouse amygdala. These neurons have previously been shown to play a crucial role in fear extinction. Tshz1 mouse mutants exhibit severely reduced numbers of ITCs as a result of abnormal migration, differentiation, and survival of these neurons. Furthermore, the loss of ITCs in mouse Tshz1 mutants correlates well with defects in fear extinction as well as the appearance of depression-like and abnormal social interaction behaviors reminiscent of depressive disorders observed in human patients with distal 18q deletions, including the Tshz1 locus.
  • Lakens, D., Adolfi, F. G., Albers, C. J., Anvari, F., Apps, M. A. J., Argamon, S. E., Baguley, T., Becker, R. B., Benning, S. D., Bradford, D. E., Buchanan, E. M., Caldwell, A. R., Van Calster, B., Carlsson, R., Chen, S.-C., Chung, B., Colling, L. J., Collins, G. S., Crook, Z., Cross, E. S., Daniels, S., Danielsson, H., DeBruine, L., Dunleavy, D. J., Earp, B. D., Feist, M. I., Ferrelle, J. D., Field, J. G., Fox, N. W., Friesen, A., Gomes, C., Gonzalez-Marquez, M., Grange, J. A., Grieve, A. P., Guggenberger, R., Grist, J., Van Harmelen, A.-L., Hasselman, F., Hochard, K. D., Hoffarth, M. R., Holmes, N. P., Ingre, M., Isager, P. M., Isotalus, H. K., Johansson, C., Juszczyk, K., Kenny, D. A., Khalil, A. A., Konat, B., Lao, J., Larsen, E. G., Lodder, G. M. A., Lukavský, J., Madan, C. R., Manheim, D., Martin, S. R., Martin, A. E., Mayo, D. G., McCarthy, R. J., McConway, K., McFarland, C., Nio, A. Q. X., Nilsonne, G., De Oliveira, C. L., De Xivry, J.-J.-O., Parsons, S., Pfuhl, G., Quinn, K. A., Sakon, J. J., Saribay, S. A., Schneider, I. K., Selvaraju, M., Sjoerds, Z., Smith, S. G., Smits, T., Spies, J. R., Sreekumar, V., Steltenpohl, C. N., Stenhouse, N., Świątkowski, W., Vadillo, M. A., Van Assen, M. A. L. M., Williams, M. N., Williams, S. E., Williams, D. R., Yarkoni, T., Ziano, I., & Zwaan, R. A. (2018). Justify your alpha. Nature Human Behaviour, 2, 168-171. doi:10.1038/s41562-018-0311-x.

    Abstract

    In response to recommendations to redefine statistical significance to P ≤ 0.005, we propose that researchers should transparently report and justify all choices they make when designing a study, including the alpha level.
  • Lam, N. H. L., Hulten, A., Hagoort, P., & Schoffelen, J.-M. (2018). Robust neuronal oscillatory entrainment to speech displays individual variation in lateralisation. Language, Cognition and Neuroscience, 33(8), 943-954. doi:10.1080/23273798.2018.1437456.

    Abstract

    Neural oscillations may be instrumental for the tracking and segmentation of continuous speech. Earlier work has suggested that delta, theta and gamma oscillations entrain to the speech rhythm. We used magnetoencephalography and a large sample of 102 participants to investigate oscillatory entrainment to speech, and observed robust entrainment of delta and theta activity, and weak group-level gamma entrainment. We show that the peak frequency and the hemispheric lateralisation of the entrainment are subject to considerable individual variability. The first finding may support the involvement of intrinsic oscillations in entrainment, and the second finding suggests that there is no systematic default right-hemispheric bias for processing acoustic signals on a slow time scale. Although low frequency entrainment to speech is a robust phenomenon, the characteristics of entrainment vary across individuals, and this variation is important for understanding the underlying neural mechanisms of entrainment, as well as its functional significance.
  • Lattenkamp, E. Z., Hörpel, S. G., Mengede, J., & Firzlaff, U. (2021). A researcher’s guide to the comparison of vocal production learning. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 376: 20200237. doi:10.1098/rstb.2020.0237.

    Abstract

    Vocal production learning (VPL) is the capacity to learn to produce new vocalizations, which is a rare ability in the animal kingdom and thus far has only been identified in a handful of mammalian taxa and three groups of birds. Over the last few decades, approaches to the demonstration of VPL have varied among taxa, sound production systems and functions. These discrepancies strongly impede direct comparisons between studies. In the light of the growing number of experimental studies reporting VPL, the need for comparability is becoming more and more pressing. The comparative evaluation of VPL across studies would be facilitated by unified and generalized reporting standards, which would allow a better positioning of species on any proposed VPL continuum. In this paper, we specifically highlight five factors influencing the comparability of VPL assessments: (i) comparison to an acoustic baseline, (ii) comprehensive reporting of acoustic parameters, (iii) extended reporting of training conditions and durations, (iv) investigating VPL function via behavioural, perception-based experiments and (v) validation of findings on a neuronal level. These guidelines emphasize the importance of comparability between studies in order to unify the field of vocal learning.
  • Lattenkamp, E. Z., Linnenschmidt, M., Mardus, E., Vernes, S. C., Wiegrebe, L., & Schutte, M. (2021). The vocal development of the pale spear-nosed bat is dependent on auditory feedback. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 376: 20200253. doi:10.1098/rstb.2020.0253.

    Abstract

    Human vocal development and speech learning require acoustic feedback, and humans who are born deaf do not acquire a normal adult speech capacity. Most other mammals display a largely innate vocal repertoire. Like humans, bats are thought to be one of the few taxa capable of vocal learning as they can acquire new vocalizations by modifying vocalizations according to auditory experiences. We investigated the effect of acoustic deafening on the vocal development of the pale spear-nosed bat. Three juvenile pale spear-nosed bats were deafened, and their vocal development was studied in comparison with an age-matched, hearing control group. The results show that during development the deafened bats increased their vocal activity, and their vocalizations were substantially altered, being much shorter, higher in pitch, and more aperiodic than the vocalizations of the control animals. The pale spear-nosed bat relies on auditory feedback for vocal development and, in the absence of auditory input, species-atypical vocalizations are acquired. This work serves as a basis for further research using the pale spear-nosed bat as a mammalian model for vocal learning, and contributes to comparative studies on hearing impairment across species. This article is part of the theme issue ‘Vocal learning in animals and humans’.
  • Lattenkamp, E. Z., Kaiser, S., Kaucic, R., Großmann, M., Koselj, K., & Goerlitz, H. R. (2018). Environmental acoustic cues guide the biosonar attention of a highly specialised echolocator. Journal of Experimental Biology, 221(8): jeb165696. doi:10.1242/jeb.165696.

    Abstract

    Sensory systems experience a trade-off between maximizing the detail and amount of sampled information. This trade-off is particularly pronounced in sensory systems that are highly specialised for a single task and thus experience limitations in other tasks. We hypothesised that combining sensory input from multiple streams of information may resolve this trade-off and improve detection and sensing reliability. Specifically, we predicted that perceptive limitations experienced by animals reliant on specialised active echolocation can be compensated for by the phylogenetically older and less specialised process of passive hearing. We tested this hypothesis in greater horseshoe bats, which possess morphological and neural specialisations allowing them to identify fluttering prey in dense vegetation using echolocation only. At the same time, their echolocation system is both spatially and temporally severely limited. Here, we show that greater horseshoe bats employ passive hearing to initially detect and localise prey-generated and other environmental sounds, and then raise vocalisation level and concentrate the scanning movements of their sonar beam on the sound source for further investigation with echolocation. These specialised echolocators thus supplement echo-acoustic information with environmental acoustic cues, enlarging perceived space beyond their biosonar range. Contrary to our predictions, we did not find consistent preferences for prey-related acoustic stimuli, indicating the use of passive acoustic cues also for detection of non-prey objects. Our findings suggest that even specialised echolocators exploit a wide range of environmental information, and that phylogenetically older sensory systems can support the evolution of sensory specialisations by compensating for their limitations.
  • Lattenkamp, E. Z., Nagy, M., Drexl, M., Vernes, S. C., Wiegrebe, L., & Knörnschild, M. (2021). Hearing sensitivity and amplitude coding in bats are differentially shaped by echolocation calls and social calls. Proceedings of the Royal Society B: Biological Sciences, 288(1942): 20202600. doi:10.1098/rspb.2020.2600.

    Abstract

    Differences in auditory perception between species are influenced by phylogenetic origin and the perceptual challenges imposed by the natural environment, such as detecting prey- or predator-generated sounds and communication signals. Bats are well suited for comparative studies on auditory perception since they predominantly rely on echolocation to perceive the world, while their social calls and most environmental sounds have low frequencies. We tested if hearing sensitivity and stimulus level coding in bats differ between high and low-frequency ranges by measuring auditory brainstem responses (ABRs) of 86 bats belonging to 11 species. In most species, auditory sensitivity was equally good at both high- and low-frequency ranges, while amplitude was more finely coded for higher frequency ranges. Additionally, we conducted a phylogenetic comparative analysis by combining our ABR data with published data on 27 species. Species-specific peaks in hearing sensitivity correlated with peak frequencies of echolocation calls and pup isolation calls, suggesting that changes in hearing sensitivity evolved in response to frequency changes of echolocation and social calls. Overall, our study provides the most comprehensive comparative assessment of bat hearing capacities to date and highlights the evolutionary pressures acting on their sensory perception.

  • Lattenkamp, E. Z., & Vernes, S. C. (2018). Vocal learning: A language-relevant trait in need of a broad cross-species approach. Current Opinion in Behavioral Sciences, 21, 209-215. doi:10.1016/j.cobeha.2018.04.007.

    Abstract

    Although humans are unmatched in their capacity to produce speech and learn language, comparative approaches in diverse animal models are able to shed light on the biological underpinnings of language-relevant traits. In the study of vocal learning, a trait crucial for spoken language, passerine birds have been the dominant models, driving invaluable progress in understanding the neurobiology and genetics of vocal learning despite being only distantly related to humans. To date, there is sparse evidence that our closest relatives, nonhuman primates, have the capability to learn new vocalisations. However, a number of other mammals have shown the capacity for vocal learning, such as some cetaceans, pinnipeds, elephants, and bats, and we anticipate that with further study more species will gain membership to this (currently) select club. A broad, cross-species comparison of vocal learning, coupled with careful consideration of the components underlying this trait, is crucial to determine how human speech and spoken language is biologically encoded and how it evolved. We emphasise the need to draw on the pool of promising species that have thus far been understudied or neglected. This is by no means a call for fewer studies in songbirds, or an unfocused treasure-hunt, but rather an appeal for structured comparisons across a range of species, considering phylogenetic relationships, ecological and morphological constraints, developmental and social factors, and neurogenetic underpinnings. Herein, we promote a comparative approach highlighting the importance of studying vocal learning in a broad range of model species, and describe a common framework for targeted cross-taxon studies to shed light on the biology and evolution of vocal learning.
  • Lattenkamp, E. Z., Vernes, S. C., & Wiegrebe, L. (2018). Volitional control of social vocalisations and vocal usage learning in bats. Journal of Experimental Biology, 221(14): jeb.180729. doi:10.1242/jeb.180729.

    Abstract

    Bats are gregarious, highly vocal animals that possess a broad repertoire of social vocalisations. For in-depth studies of their vocal behaviours, including vocal flexibility and vocal learning, it is necessary to gather repeatable evidence from controlled laboratory experiments on isolated individuals. However, such studies are rare for one simple reason: eliciting social calls in isolation and under operant control is challenging and has rarely been achieved. To overcome this limitation, we designed an automated setup that allows conditioning of social vocalisations in a new context, and tracks spectro-temporal changes in the recorded calls over time. Using this setup, we were able to reliably evoke social calls from temporarily isolated lesser spear-nosed bats (Phyllostomus discolor). When we adjusted the call criteria that could result in food reward, bats responded by adjusting temporal and spectral call parameters. This was achieved without the help of an auditory template or social context to direct the bats. Our results demonstrate vocal flexibility and vocal usage learning in bats. Our setup provides a new paradigm that allows the controlled study of the production and learning of social vocalisations in isolated bats, overcoming limitations that have, until now, prevented in-depth studies of these behaviours.

  • Law, R., & Pylkkänen, L. (2021). Lists with and without syntax: A new approach to measuring the neural processing of syntax. The Journal of Neuroscience, 41(10), 2186-2196. doi:10.1523/JNEUROSCI.1179-20.2021.

    Abstract

    In the neurobiology of language, a fundamental challenge is deconfounding syntax from semantics. Changes in syntactic structure usually correlate with changes in meaning. We approached this challenge from a new angle. We deployed word lists, which are usually the unstructured control in studies of syntax, as both the test and the control stimulus. Three-noun lists (lamps, dolls, guitars) were embedded in sentences (The eccentric man hoarded lamps, dolls, guitars…) and in longer lists (forks, pen, toilet, rodeo, graves, drums, mulch, lamps, dolls, guitars…). This allowed us to perfectly control both lexical characteristics and local combinatorics: the same words occurred in both conditions and in neither case did the list items locally compose into phrases (e.g. ‘lamps’ and ‘dolls’ do not form a phrase). But in one case, the list partakes in a syntactic tree, while in the other, it does not. Being embedded inside a syntactic tree increased source-localized MEG activity at ~250-300ms from word onset in the left inferior frontal cortex, at ~300-350ms in the left anterior temporal lobe and, most reliably, at ~330-400ms in left posterior temporal cortex. In contrast, effects of semantic association strength, which we also varied, localized in left temporo-parietal cortex, with high associations increasing activity at around 400ms. This dissociation offers a novel characterization of the structure vs. meaning contrast in the brain: The fronto-temporal network that is familiar from studies of sentence processing can be driven by the sheer presence of global sentence structure, while associative semantics has a more posterior neural signature.

  • Lee, J. J., Wedow, R., Okbay, A., Kong, E., Maghzian, O., Zacher, M., Nguyen-Viet, T. A., Bowers, P., Sidorenko, J., Linnér, R. K., Fontana, M. A., Kundu, T., Lee, C., Li, H., Li, R., Royer, R., Timshel, P. N., Walters, R. K., Willoughby, E. A., Yengo, L., 23andMe Research Team, COGENT (Cognitive Genomics Consortium), Social Science Genetic Association Consortium, Alver, M., Bao, Y., Clark, D. W., Day, F. R., Furlotte, N. A., Joshi, P. K., Kemper, K. E., Kleinman, A., Langenberg, C., Mägi, R., Trampush, J. W., Verma, S. S., Wu, Y., Lam, M., Zhao, J. H., Zheng, Z., Boardman, J. D., Campbell, H., Freese, J., Harris, K. M., Hayward, C., Herd, P., Kumari, M., Lencz, T., Luan, J., Malhotra, A. K., Metspalu, A., Milani, L., Ong, K. K., Perry, J. R. B., Porteous, D. J., Ritchie, M. D., Smart, M. C., Smith, B. H., Tung, J. Y., Wareham, N. J., Wilson, J. F., Beauchamp, J. P., Conley, D. C., Esko, T., Lehrer, S. F., Magnusson, P. K. E., Oskarsson, S., Pers, T. H., Robinson, M. R., Thom, K., Watson, C., Chabris, C. F., Meyer, M. N., Laibson, D. I., Yang, J., Johannesson, M., Koellinger, P. D., Turley, P., Visscher, P. M., Benjamin, D. J., & Cesarini, D. (2018). Gene discovery and polygenic prediction from a genome-wide association study of educational attainment in 1.1 million individuals. Nature Genetics, 50(8), 1112-1121. doi:10.1038/s41588-018-0147-3.

    Abstract

    Here we conducted a large-scale genetic association analysis of educational attainment in a sample of approximately 1.1 million individuals and identify 1,271 independent genome-wide-significant SNPs. For the SNPs taken together, we found evidence of heterogeneous effects across environments. The SNPs implicate genes involved in brain-development processes and neuron-to-neuron communication. In a separate analysis of the X chromosome, we identify 10 independent genome-wide-significant SNPs and estimate a SNP heritability of around 0.3% in both men and women, consistent with partial dosage compensation. A joint (multi-phenotype) analysis of educational attainment and three related cognitive phenotypes generates polygenic scores that explain 11–13% of the variance in educational attainment and 7–10% of the variance in cognitive performance. This prediction accuracy substantially increases the utility of polygenic scores as tools in research.
  • Lemen, H., Lieven, E., & Theakston, A. (2021). A comparison of the pragmatic patterns in the spontaneous because- and if-sentences produced by children and their caregivers. Journal of Pragmatics, 185, 15-34. doi:10.1016/j.pragma.2021.07.016.

    Abstract

    Findings from corpus (e.g. Diessel, 2004) and comprehension (e.g. De Ruiter et al., 2018) studies show that children produce the adverbial connectives because and if long before they seem able to understand them. However, although children's comprehension is typically tested on sentences expressing the pragmatic relationship which Sweetser (1990) calls “Content”, children also hear and produce sentences expressing “Speech–Act” relationships (e.g. De Ruiter et al., 2021; Kyratzis et al., 1990). To better understand the possible influence of pragmatic variation on 2- to 4- year-old children's acquisition of these connectives, we coded the because and if Speech–Act sentences of 14 British English-speaking mother-child dyads for the type of illocutionary act they contained, as well as the phrasing following the connective. Analyses revealed that children's because Speech–Act sentences were primarily explanations of Statements/Claims, while their if Speech–Act sentences typically related to permission and politeness. While children's because-sentences showed a great deal of individuality, their if-sentences closely resembled their mothers’, containing a high proportion of recurring phrases which appear to be abstracted from input. We discuss how these patterns might help shape children's understanding of each connective and contribute to the children's overall difficulty with because and if.
  • Lemhöfer, K., Huestegge, L., & Mulder, K. (2018). Another cup of TEE? The processing of second language near-cognates in first language reading. Language, Cognition and Neuroscience, 33(8), 968-991. doi:10.1080/23273798.2018.1433863.

    Abstract

    A still unresolved issue is in how far native language (L1) processing in bilinguals is influenced by the second language (L2). We investigated this in two word recognition experiments in L1, using homophonic near-cognates that are spelled in L2. In a German lexical decision task (Experiment 1), German-Dutch bilinguals had more difficulties to reject these Dutch-spelled near-cognates than other misspellings, while this was not the case for non-Dutch speaking Germans. In Experiment 2, the same materials were embedded in German sentences. Analyses of eye movements during reading showed that only non-Dutch speaking Germans, but not Dutch-speaking participants were slowed down by the Dutch cognate misspellings. Additionally, in both experiments, bilinguals with larger vocabulary sizes in Dutch tended to show larger near-cognate effects. Thus, Dutch word knowledge influenced word recognition in L1 German in both task contexts, suggesting that L1 word recognition in bilinguals is non-selective with respect to L2.
  • De León, L., & Levinson, S. C. (Eds.). (1992). Space in Mesoamerican languages [Special Issue]. Zeitschrift für Phonetik, Sprachwissenschaft und Kommunikationsforschung, 45(6).
  • Lev-Ari, S. (2018). Social network size can influence linguistic malleability and the propagation of linguistic change. Cognition, 176, 31-39. doi:10.1016/j.cognition.2018.03.003.

    Abstract

    We learn language from our social environment, but the more sources we have, the less informative each source is, and therefore, the less weight we ascribe its input. According to this principle, people with larger social networks should give less weight to new incoming information, and should therefore be less susceptible to the influence of new speakers. This paper tests this prediction, and shows that speakers with smaller social networks indeed have more malleable linguistic representations. In particular, they are more likely to adjust their lexical boundary following exposure to a new speaker. Experiment 2 uses computational simulations to test whether this greater malleability could lead people with smaller social networks to be important for the propagation of linguistic change despite the fact that they interact with fewer people. The results indicate that when innovators were connected with people with smaller rather than larger social networks, the population exhibited greater and faster diffusion. Together these experiments show that the properties of people’s social networks can influence individuals’ learning and use as well as linguistic phenomena at the community level.
  • Lev-Ari, S. (2018). The influence of social network size on speech perception. Quarterly Journal of Experimental Psychology, 71(10), 2249-2260. doi:10.1177/1747021817739865.

    Abstract

    Infants and adults learn new phonological varieties better when exposed to multiple rather than a single speaker. This article tests whether having a larger social network similarly facilitates phonological performance. Experiment 1 shows that people with larger social networks are better at vowel perception in noise, indicating that the benefit of laboratory exposure to multiple speakers extends to real life experience and to adults tested in their native language. Furthermore, the experiment shows that this association is not due to differences in amount of input or to cognitive differences between people with different social network sizes. Follow-up computational simulations reveal that the benefit of larger social networks is mostly due to increased input variability. Additionally, the simulations show that the boost that larger social networks provide is independent of the amount of input received but is larger if the population is more heterogeneous. Finally, a comparison of “adult” and “child” simulations reconciles previous conflicting findings by suggesting that input variability along the relevant dimension might be less useful at the earliest stages of learning. Together, this article shows when and how the size of our social network influences our speech perception. It thus shows how aspects of our lifestyle can influence our linguistic performance.

  • Lev-Ari, S., Ho, E., & Keysar, B. (2018). The unforeseen consequences of interacting with non-native speakers. Topics in Cognitive Science, 10, 835-849. doi:10.1111/tops.12325.

    Abstract

    Sociolinguistic research shows that listeners' expectations of speakers influence their interpretation of the speech, yet this is often ignored in cognitive models of language comprehension. Here, we focus on the case of interactions between native and non-native speakers. Previous literature shows that listeners process the language of non-native speakers in less detail, because they expect them to have lower linguistic competence. We show that processing the language of non-native speakers increases lexical competition and access in general, not only of the non-native speaker's speech, and that this leads to poorer memory of one's own speech during the interaction. We further find that the degree to which people adjust their processing to non-native speakers is related to the degree to which they adjust their speech to them. We discuss implications for cognitive models of language processing and sociolinguistic research on attitudes.
  • Levelt, W. J. M. (1992). Accessing words in speech production: Stages, processes and representations. Cognition, 42, 1-22. doi:10.1016/0010-0277(92)90038-J.

    Abstract

    This paper introduces a special issue of Cognition on lexical access in speech production. Over the last quarter century, the psycholinguistic study of speaking, and in particular of accessing words in speech, received a major new impetus from the analysis of speech errors, dysfluencies and hesitations, from aphasiology, and from new paradigms in reaction time research. The emerging theoretical picture partitions the accessing process into two subprocesses, the selection of an appropriate lexical item (a “lemma”) from the mental lexicon, and the phonological encoding of that item, that is, the computation of a phonetic program for the item in the context of utterance. These two theoretical domains are successively introduced by outlining some core issues that have been or still have to be addressed. The final section discusses the controversial question whether phonological encoding can affect lexical selection. This partitioning is also followed in this special issue as a whole. There are, first, four papers on lexical selection, then three papers on phonological encoding, and finally one on the interaction between selection and phonological encoding.
  • Levelt, W. J. M., & Wheeldon, L. (1994). Do speakers have access to a mental syllabary? Cognition, 50, 239-269. doi:10.1016/0010-0277(94)90030-2.

    Abstract

    The first, theoretical part of this paper sketches a framework for phonological encoding in which the speaker successively generates phonological syllables in connected speech. The final stage of this process, phonetic encoding, consists of accessing articulatory gestural scores for each of these syllables in a "mental syllabary". The second, experimental part studies various predictions derived from this theory. The main finding is a syllable frequency effect: words ending in a high-frequent syllable are named faster than words ending in a low-frequent syllable. As predicted, this syllable frequency effect is independent of and additive to the effect of word frequency on naming latency. The effect, moreover, is not due to the complexity of the word-final syllable. In the General Discussion, the syllabary model is further elaborated with respect to phonological underspecification and activation spreading. Alternative accounts of the empirical findings in terms of core syllables and demisyllables are considered.
  • Levelt, W. J. M., Praamstra, P., Meyer, A. S., Helenius, P., & Salmelin, R. (1998). An MEG study of picture naming. Journal of Cognitive Neuroscience, 10(5), 553-567. doi:10.1162/089892998562960.

    Abstract

    The purpose of this study was to relate a psycholinguistic processing model of picture naming to the dynamics of cortical activation during picture naming. The activation was recorded from eight Dutch subjects with a whole-head neuromagnetometer. The processing model, based on extensive naming latency studies, is a stage model. In preparing a picture's name, the speaker performs a chain of specific operations. They are, in this order, computing the visual percept, activating an appropriate lexical concept, selecting the target word from the mental lexicon, phonological encoding, phonetic encoding, and initiation of articulation. The time windows for each of these operations are reasonably well known and could be related to the peak activity of dipole sources in the individual magnetic response patterns. The analyses showed a clear progression over these time windows from early occipital activation, via parietal and temporal to frontal activation. The major specific findings were that (1) a region in the left posterior temporal lobe, agreeing with the location of Wernicke's area, showed prominent activation starting about 200 msec after picture onset and peaking at about 350 msec, (i.e., within the stage of phonological encoding), and (2) a consistent activation was found in the right parietal cortex, peaking at about 230 msec after picture onset, thus preceding and partly overlapping with the left temporal response. An interpretation in terms of the management of visual attention is proposed.
  • Levelt, W. J. M. (1992). Fairness in reviewing: A reply to O'Connell. Journal of Psycholinguistic Research, 21, 401-403.
  • Levelt, W. J. M. (1982). Het lineariseringsprobleem van de spreker. Tijdschrift voor Taal- en Tekstwetenschap (TTT), 2(1), 1-15.
  • Levelt, W. J. M. (1994). Hoofdstukken uit de psychologie. Nederlands tijdschrift voor de psychologie, 49, 1-14.
  • Levelt, W. J. M. (2018). Is language natural to man? Some historical considerations. Current Opinion in Behavioral Sciences, 21, 127-131. doi:10.1016/j.cobeha.2018.04.003.

    Abstract

    Since the Enlightenment period, natural theories of speech and language evolution have flourished in the language sciences. Four ever-returning core issues are highlighted in this paper: firstly, is language natural to man or just an invention? Secondly, is language a specific human ability (a ‘language instinct’) or does it arise from general cognitive capacities we share with other animals? Thirdly, has the evolution of language been a gradual process or did it rather suddenly arise, due to some ‘evolutionary twist’? Lastly, is the child's language acquisition an appropriate model for language evolution?
  • Levelt, W. J. M., & Schiller, N. O. (1998). Is the syllable frame stored? [Commentary on the BBS target article 'The frame/content theory of evolution of speech production' by Peter F. McNeilage]. Behavioral and Brain Sciences, 21, 520.

    Abstract

    This commentary discusses whether abstract metrical frames are stored. For stress-assigning languages (e.g., Dutch and English), which have a dominant stress pattern, metrical frames are stored only for words that deviate from the default stress pattern. The majority of the words in these languages are produced without retrieving any independent syllabic or metrical frame.
  • Levelt, W. J. M. (1973). Recente ontwikkelingen in de taalpsychologie. Forum der Letteren, 14(4), 235-254.
  • Levelt, W. J. M. (1992). Sprachliche Musterbildung und Mustererkennung. Nova Acta Leopoldina NF, 67(281), 357-370.
  • Levelt, W. J. M., & Bonarius, M. (1973). Suffixes as deep structure clues. Methodology and Science, 6(1), 7-37.

    Abstract

    Recent work on sentence recognition suggests that listeners use their knowledge of the language to directly infer deep structure syntactic relations from surface structure markers. Suffixes may be such clues, especially in agglutinative languages. A cross-language (Dutch-Finnish) experiment is reported, designed to investigate whether the suffix structure of Finnish words (as opposed to suffixless Dutch words) can facilitate prompted recall of sentences in case these suffixes differentiate between possible deep structures. The experiment, in which 80 subjects recall sentences at the occasion of prompt words, gives only slight confirmatory evidence. Meanwhile, another prompted recall effect (Blumenthal's) could not be replicated.
  • Levelt, W. J. M., & Kelter, S. (1982). Surface form and memory in question answering. Cognitive Psychology, 14, 78-106. doi:10.1016/0010-0285(82)90005-6.

    Abstract

    Speakers tend to repeat materials from previous talk. This tendency is experimentally established and manipulated in various question-answering situations. It is shown that a question's surface form can affect the format of the answer given, even if this form has little semantic or conversational consequence, as in the pair Q: (At) what time do you close? A: “(At) five o'clock.” Answerers tend to match the utterance to the prepositional (nonprepositional) form of the question. This “correspondence effect” may diminish or disappear when, following the question, additional verbal material is presented to the answerer. The experiments show that neither the articulatory buffer nor long-term memory is normally involved in this retention of recent speech. Retaining recent speech in working memory may fulfill a variety of functions for speaker and listener, among them the correct production and interpretation of surface anaphora. Reusing recent materials may, moreover, be more economical than regenerating speech anew from a semantic base, and thus contribute to fluency. But the realization of this strategy requires a production system in which linguistic formulation can take place relatively independent of, and parallel to, conceptual planning.
  • Levelt, W. J. M. (1982). Science policy: Three recent idols, and a goddess. IPO Annual Progress Report, 17, 32-35.
  • Levelt, W. J. M. (1992). The perceptual loop theory not disconfirmed: A reply to MacKay. Consciousness and Cognition, 1, 226-230. doi:10.1016/1053-8100(92)90062-F.

    Abstract

    In his paper, MacKay reviews his Node Structure theory of error detection, but precedes it with a critical discussion of the Perceptual Loop theory of self-monitoring proposed in Levelt (1983, 1989). The present commentary is concerned with this latter critique and shows that there are more than casual problems with MacKay’s argumentation.
  • Levelt, W. J. M. (1998). The genetic perspective in psycholinguistics, or: Where do spoken words come from? Journal of Psycholinguistic Research, 27(2), 167-180. doi:10.1023/A:1023245931630.

    Abstract

    The core issue in the 19th-century sources of psycholinguistics was the question, "Where does language come from?" This genetic perspective unified the study of the ontogenesis, the phylogenesis, the microgenesis, and to some extent the neurogenesis of language. This paper makes the point that this original perspective is still a valid and attractive one. It is exemplified by a discussion of the genesis of spoken words.
  • Levelt, W. J. M. (1982). Zelfcorrecties in het spreekproces. KNAW: Mededelingen van de afdeling letterkunde, nieuwe reeks, 45(8), 215-228.
  • Levinson, S. C., & Brown, P. (1994). Immanuel Kant among the Tenejapans: Anthropology as empirical philosophy. Ethos, 22(1), 3-41. Retrieved from http://www.jstor.org/stable/640467.

    Abstract

    This paper confronts Kant’s (1768) view of human conceptions of space as fundamentally divided along the three planes of the human body with an empirical case study in the Mayan community of Tenejapa in southern Mexico, whose inhabitants do not use left/right distinctions to project regions in space. Tenejapans have names for the left hand and the right hand, and also a term for hand/arm in general, but they do not generalize the distinction to spatial regions -- there is no linguistic expression glossing as 'to the left' or 'on the left-hand side', for example. Tenejapans also show a remarkable indifference to incongruous counterparts. Nor is there any system of value associations with the left and the right. The Tenejapan evidence that speaks to these Kantian themes points in two directions: (a) Kant was wrong to think that the structure of spatial regions founded on the human frame, and in particular the distinctions based on left and right, are in some sense essential human intuitions; (b) Kant may have been right to think that the left/right opposition, the perception of enantiomorphs, clockwiseness, East-West dichotomies, etc., are intimately connected to an overall system of spatial conception.
  • Levinson, S. C., & Haviland, J. B. (1994). Introduction: Spatial conceptualization in Mayan languages. Linguistics, 32(4/5), 613-622.
  • Levinson, S. C. (1992). Primer for the field investigation of spatial description and conception. Pragmatics, 2(1), 5-47.
  • Levinson, S. C., & Haviland, J. B. (Eds.). (1994). Space in Mayan languages [Special Issue]. Linguistics, 32(4/5).
  • Levinson, S. C. (2018). Spatial cognition, empathy and language evolution. Studies in Pragmatics, 20, 16-21.

    Abstract

    The evolution of language and spatial cognition may have been deeply interconnected. The argument goes as follows: 1. Human native spatial abilities are poor, but we make up for this with linguistic and cultural prostheses; 2. The explanation for the loss of native spatial abilities may be that language has cannibalized the hippocampus, the mammalian mental ‘GPS’; 3. Consequently, language may have borrowed conceptual primitives from spatial cognition (in line with ‘localism’), these being differentially combined in different languages; 4. The hippocampus may have been colonized because: (a) space was prime subject matter for communication, and (b) gesture uses space to represent space, and was likely a precursor to language. In order to explain why the other great apes haven’t gone in the same direction, we need to invoke other factors, notably the ‘interaction engine’, the ensemble of interactional abilities that make cooperative communication possible and provide the matrix for the evolution and learning of language.
  • Levinson, S. C. (1998). Studying spatial conceptualization across cultures: Anthropology and cognitive science. Ethos, 26(1), 7-24. doi:10.1525/eth.1998.26.1.7.

    Abstract

    Philosophers, psychologists, and linguists have argued that spatial conception is pivotal to cognition in general, providing a general, egocentric, and universal framework for cognition as well as metaphors for conceptualizing many other domains. But in an aboriginal community in Northern Queensland, a system of cardinal directions informs not only language, but also memory for arbitrary spatial arrays and directions. This work suggests that fundamental cognitive parameters, like the system of coding spatial locations, can vary cross-culturally, in line with the language spoken by a community. This opens up the prospect of a fruitful dialogue between anthropology and the cognitive sciences on the complex interaction between cultural and universal factors in the constitution of mind.
  • Levinson, S. C. (1994). Vision, shape and linguistic description: Tzeltal body-part terminology and object description. Linguistics, 32(4/5), 791-856.
  • Levshina, N. (2021). Cross-linguistic trade-offs and causal relationships between cues to grammatical subject and object, and the problem of efficiency-related explanations. Frontiers in Psychology, 12: 648200. doi:10.3389/fpsyg.2021.648200.

    Abstract

    Cross-linguistic studies focus on inverse correlations (trade-offs) between linguistic variables that reflect different cues to linguistic meanings. For example, if a language has no case marking, it is likely to rely on word order as a cue for identification of grammatical roles. Such inverse correlations are interpreted as manifestations of language users’ tendency to use language efficiently. The present study argues that this interpretation is problematic. Linguistic variables, such as the presence of case, or flexibility of word order, are aggregate properties, which do not represent the use of linguistic cues in context directly. Still, such variables can be useful for circumscribing the potential role of communicative efficiency in language evolution, if we move from cross-linguistic trade-offs to multivariate causal networks. This idea is illustrated by a case study of linguistic variables related to four types of Subject and Object cues: case marking, rigid word order of Subject and Object, tight semantics and verb-medial order. The variables are obtained from online language corpora in thirty languages, annotated with the Universal Dependencies. The causal model suggests that the relationships between the variables can be explained predominantly by sociolinguistic factors, leaving little space for a potential impact of efficient linguistic behavior.
  • Levshina, N., & Moran, S. (2021). Efficiency in human languages: Corpus evidence for universal principles. Linguistics Vanguard, 7(s3): 20200081. doi:10.1515/lingvan-2020-0081.

    Abstract

    Over the last few years, there has been a growing interest in communicative efficiency. It has been argued that language users act efficiently, saving effort for processing and articulation, and that language structure and use reflect this tendency. The emergence of new corpus data has brought to life numerous studies on efficient language use in the lexicon, in morphosyntax, and in discourse and phonology in different languages. In this introductory paper, we discuss communicative efficiency in human languages, focusing on evidence of efficient language use found in multilingual corpora. The evidence suggests that efficiency is a universal feature of human language. We provide an overview of different manifestations of efficiency on different levels of language structure, and we discuss the major questions and findings so far, some of which are addressed for the first time in the contributions in this special collection.
  • Levshina, N., & Moran, S. (Eds.). (2021). Efficiency in human languages: Corpus evidence for universal principles [Special Issue]. Linguistics Vanguard, 7(s3).
  • Levshina, N. (2021). Communicative efficiency and differential case marking: A reverse-engineering approach. Linguistics Vanguard, 7(s3): 20190087. doi:10.1515/lingvan-2019-0087.
  • Levshina, N. (2018). Probabilistic grammar and constructional predictability: Bayesian generalized additive models of help. GLOSSA-a journal of general linguistics, 3(1): 55. doi:10.5334/gjgl.294.

    Abstract

    The present study investigates the construction with help followed by the bare or to-infinitive in seven varieties of web-based English from Australia, Ghana, Great Britain, Hong Kong, India, Jamaica and the USA. In addition to various factors known from the literature, such as register, minimization of cognitive complexity and avoidance of identity (horror aequi), it studies the effect of the predictability of the infinitive given help, and vice versa, on the language user’s choice between the constructional variants. These probabilistic constraints are tested in a series of Bayesian generalized additive mixed-effects regression models. The results demonstrate that the to-infinitive is particularly frequent in contexts with low predictability, or, in information-theoretic terms, with high information content. This tendency is interpreted as communicatively efficient behaviour, when more predictable units of discourse get less formal marking, and less predictable ones get more formal marking. However, the strength, shape and directionality of predictability effects exhibit variation across the countries, which demonstrates the importance of the cross-lectal perspective in research on communicative efficiency and other universal functional principles.
  • Lewis, A. G., Schriefers, H., Bastiaansen, M., & Schoffelen, J.-M. (2018). Assessing the utility of frequency tagging for tracking memory-based reactivation of word representations. Scientific Reports, 8: 7897. doi:10.1038/s41598-018-26091-3.

    Abstract

    Reinstatement of memory-related neural activity measured with high temporal precision potentially provides a useful index for real-time monitoring of the timing of activation of memory content during cognitive processing. The utility of such an index extends to any situation where one is interested in the (relative) timing of activation of different sources of information in memory, a paradigm case of which is tracking lexical activation during language processing. Essential for this approach is that memory reinstatement effects are robust, so that their absence (in the average) definitively indicates that no lexical activation is present. We used electroencephalography to test the robustness of a reported subsequent memory finding involving reinstatement of frequency-specific entrained oscillatory brain activity during subsequent recognition. Participants learned lists of words presented on a background flickering at either 6 or 15 Hz to entrain a steady-state brain response. Target words subsequently presented on a non-flickering background that were correctly identified as previously seen exhibited reinstatement effects at both entrainment frequencies. Reliability of these statistical inferences was however critically dependent on the approach used for multiple comparisons correction. We conclude that effects are not robust enough to be used as a reliable index of lexical activation during language processing.
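
    The entrainment logic described above lends itself to a compact illustration. Below is a minimal sketch (not the authors' analysis pipeline) of how power at a tagged frequency such as 6 or 15 Hz could be read out from an epoched EEG segment with NumPy; the channel count, sampling rate, and the use of a single FFT bin are illustrative assumptions.

```python
import numpy as np

def tagged_power(epoch, sfreq, tag_freq, n_fft=None):
    """Spectral power of one EEG epoch (channels x samples) at a
    frequency-tagging frequency (e.g., 6 or 15 Hz)."""
    n_fft = n_fft or epoch.shape[-1]
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / sfreq)
    spectrum = np.abs(np.fft.rfft(epoch, n=n_fft, axis=-1)) ** 2
    bin_idx = np.argmin(np.abs(freqs - tag_freq))  # FFT bin closest to the tag
    return spectrum[..., bin_idx].mean()           # average over channels

# Hypothetical usage: an epoch of 64 channels, 2 s sampled at 500 Hz,
# probed at the 6 Hz tagging frequency.
rng = np.random.default_rng(0)
epoch = rng.standard_normal((64, 1000))
print(tagged_power(epoch, sfreq=500, tag_freq=6.0))
```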

    Additional information

    Lewis_etal_2018sup.docx
  • Liang, S., Vega, R., Kong, X., Deng, W., Wang, Q., Ma, X., Li, M., Hu, X., Greenshaw, A. J., Greiner, R., & Li, T. (2018). Neurocognitive Graphs of First-Episode Schizophrenia and Major Depression Based on Cognitive Features. Neuroscience Bulletin, 34(2), 312-320. doi:10.1007/s12264-017-0190-6.

    Abstract

    Neurocognitive deficits are frequently observed in patients with schizophrenia and major depressive disorder (MDD). The relations between cognitive features may be represented by neurocognitive graphs based on cognitive features, modeled as Gaussian Markov random fields. However, it is unclear whether it is possible to differentiate between phenotypic patterns associated with the differential diagnosis of schizophrenia and depression using this neurocognitive graph approach. In this study, we enrolled 215 first-episode patients with schizophrenia (FES), 125 with MDD, and 237 demographically-matched healthy controls (HCs). The cognitive performance of all participants was evaluated using a battery of neurocognitive tests. The graphical LASSO model was trained with a one-vs-one scenario to learn the conditional independent structure of neurocognitive features of each group. Participants in the holdout dataset were classified into different groups with the highest likelihood. A partial correlation matrix was transformed from the graphical model to further explore the neurocognitive graph for each group. The classification approach identified the diagnostic class for individuals with an average accuracy of 73.41% for FES vs HC, 67.07% for MDD vs HC, and 59.48% for FES vs MDD. Both of the neurocognitive graphs for FES and MDD had more connections and higher node centrality than those for HC. The neurocognitive graph for FES was less sparse and had more connections than that for MDD. Thus, neurocognitive graphs based on cognitive features are promising for describing endophenotypes that may discriminate schizophrenia from depression.
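
    For readers unfamiliar with the classification scheme sketched in this abstract, the following is a minimal illustration of the general idea: fit one sparse Gaussian graphical model per diagnostic group with scikit-learn's GraphicalLasso and assign a held-out participant to the group under which their cognitive feature vector is most likely. The regularization strength, feature count, random data, and the use of SciPy's multivariate normal log-density are assumptions for illustration, not the authors' exact one-vs-one pipeline.

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.covariance import GraphicalLasso

def fit_group_graphs(feature_dict, alpha=0.1):
    """Fit one sparse Gaussian graphical model per group.
    feature_dict maps group label -> (n_participants x n_features) array."""
    return {label: GraphicalLasso(alpha=alpha).fit(X)
            for label, X in feature_dict.items()}

def classify(models, x):
    """Assign a held-out feature vector to the group whose fitted
    Gaussian gives it the highest log-likelihood."""
    scores = {label: multivariate_normal.logpdf(x, mean=m.location_, cov=m.covariance_)
              for label, m in models.items()}
    return max(scores, key=scores.get)

# Hypothetical example: two groups, 8 cognitive features, random data.
rng = np.random.default_rng(1)
groups = {"FES": rng.standard_normal((100, 8)),
          "HC": rng.standard_normal((120, 8)) + 0.5}
models = fit_group_graphs(groups)
print(classify(models, rng.standard_normal(8)))
```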

    Additional information

    Liang_etal_2017sup.pdf
  • Ligthart, S., Vaez, A., Võsa, U., Stathopoulou, M. G., De Vries, P. S., Prins, B. P., Van der Most, P. J., Tanaka, T., Naderi, E., Rose, L. M., Wu, Y., Karlsson, R., Barbalic, M., Lin, H., Pool, R., Zhu, G., Macé, A., Sidore, C., Trompet, S., Mangino, M., and 267 more authors (2018). Genome Analyses of >200,000 Individuals Identify 58 Loci for Chronic Inflammation and Highlight Pathways that Link Inflammation and Complex Disorders. The American Journal of Human Genetics, 103(5), 691-706. doi:10.1016/j.ajhg.2018.09.009.

    Abstract

    C-reactive protein (CRP) is a sensitive biomarker of chronic low-grade inflammation and is associated with multiple complex diseases. The genetic determinants of chronic inflammation remain largely unknown, and the causal role of CRP in several clinical outcomes is debated. We performed two genome-wide association studies (GWASs), on HapMap and 1000 Genomes imputed data, of circulating amounts of CRP by using data from 88 studies comprising 204,402 European individuals. Additionally, we performed in silico functional analyses and Mendelian randomization analyses with several clinical outcomes. The GWAS meta-analyses of CRP revealed 58 distinct genetic loci (p < 5 × 10⁻⁸). After adjustment for body mass index in the regression analysis, the associations at all except three loci remained. The lead variants at the distinct loci explained up to 7.0% of the variance in circulating amounts of CRP. We identified 66 gene sets that were organized in two substantially correlated clusters, one mainly composed of immune pathways and the other characterized by metabolic pathways in the liver. Mendelian randomization analyses revealed a causal protective effect of CRP on schizophrenia and a risk-increasing effect on bipolar disorder. Our findings provide further insights into the biology of inflammation and could lead to interventions for treating inflammation and its clinical consequences.
  • Liu, X., Gao, Y., Di, Q., Hu, J., Lu, C., Nan, Y., Booth, J. R., & Liu, L. (2018). Differences between child and adult large-scale functional brain networks for reading tasks. Human Brain Mapping, 39(2), 662-679. doi:10.1002/hbm.23871.

    Abstract

    Reading is an important high‐level cognitive function of the human brain, requiring interaction among multiple brain regions. Revealing differences between children's large‐scale functional brain networks for reading tasks and those of adults helps us to understand how the functional network changes over reading development. Here we used functional magnetic resonance imaging data of 17 adults (19–28 years old) and 16 children (11–13 years old), and graph theoretical analyses to investigate age‐related changes in large‐scale functional networks during rhyming and meaning judgment tasks on pairs of visually presented Chinese characters. We found that: (1) adults had stronger inter‐regional connectivity and nodal degree in occipital regions, while children had stronger inter‐regional connectivity in temporal regions, suggesting that adults rely more on visual orthographic processing whereas children rely more on auditory phonological processing during reading. (2) Only adults showed between‐task differences in inter‐regional connectivity and nodal degree, whereas children showed no task differences, suggesting the topological organization of adults’ reading network is more specialized. (3) Children showed greater inter‐regional connectivity and nodal degree than adults in multiple subcortical regions; the hubs in children were more distributed in subcortical regions while the hubs in adults were more distributed in cortical regions. These findings suggest that reading development is manifested by a shift from reliance on subcortical to cortical regions. Taken together, our study suggests that Chinese reading development is supported by developmental changes in brain connectivity properties, and some of these changes may be domain‐general while others may be specific to the reading domain.
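
    As a rough illustration of the graph theoretical measures mentioned above (inter-regional connectivity and nodal degree), the sketch below thresholds an inter-regional correlation matrix into an undirected graph with NetworkX and ranks regions by degree to pick out hubs. The threshold value, region count, and hub criterion are assumptions; the study's construction of task-based functional networks is more involved.

```python
import numpy as np
import networkx as nx

def functional_network(corr, threshold=0.3):
    """Binarize an inter-regional correlation matrix into an undirected graph."""
    adj = (np.abs(corr) > threshold).astype(int)
    np.fill_diagonal(adj, 0)          # no self-connections
    return nx.from_numpy_array(adj)

def nodal_degree(corr, threshold=0.3):
    """Nodal degree per region; hubs are often taken to be high-degree nodes."""
    return dict(functional_network(corr, threshold).degree())

# Hypothetical usage with a random symmetric "correlation" matrix for 90 regions.
rng = np.random.default_rng(2)
m = rng.uniform(-1, 1, size=(90, 90))
corr = (m + m.T) / 2
degrees = nodal_degree(corr)
hubs = sorted(degrees, key=degrees.get, reverse=True)[:10]
print(hubs)
```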
  • Xu, S., Liu, P., Chen, Y., Chen, Y., Zhang, W., Zhao, H., Cao, Y., Wang, F., Jiang, N., Lin, S., Li, B., Zhang, Z., Wei, Z., Fan, Y., Jin, Y., He, L., Zhou, R., Dekker, J. D., Tucker, H. O., Fisher, S. E., Yao, Z., Liu, Q., Xia, X., & Guo, X. (2018). Foxp2 regulates anatomical features that may be relevant for vocal behaviors and bipedal locomotion. Proceedings of the National Academy of Sciences of the United States of America, 115(35), 8799-8804. doi:10.1073/pnas.1721820115.

    Abstract

    Fundamental human traits, such as language and bipedalism, are associated with a range of anatomical adaptations in craniofacial shaping and skeletal remodeling. However, it is unclear how such morphological features arose during hominin evolution. FOXP2 is a brain-expressed transcription factor implicated in a rare disorder involving speech apraxia and language impairments. Analysis of its evolutionary history suggests that this gene may have contributed to the emergence of proficient spoken language. In the present study, through analyses of skeleton-specific knockout mice, we identified roles of Foxp2 in skull shaping and bone remodeling. Selective ablation of Foxp2 in cartilage disrupted pup vocalizations in a similar way to that of global Foxp2 mutants, which may be due to pleiotropic effects on craniofacial morphogenesis. Our findings also indicate that Foxp2 helps to regulate strength and length of hind limbs and maintenance of joint cartilage and intervertebral discs, which are all anatomical features that are susceptible to adaptations for bipedal locomotion. In light of the known roles of Foxp2 in brain circuits that are important for motor skills and spoken language, we suggest that this gene may have been well placed to contribute to coevolution of neural and anatomical adaptations related to speech and bipedal locomotion.

  • Long, M., Horton, W. S., Rohde, H., & Sorace, A. (2018). Individual differences in switching and inhibition predict perspective-taking across the lifespan. Cognition, 170, 25-30. doi:10.1016/j.cognition.2017.09.004.

    Abstract

    Studies exploring the influence of executive functions (EF) on perspective-taking have focused on inhibition and working memory in young adults or clinical populations. Less consideration has been given to more complex capacities that also involve switching attention between perspectives, or to changes in EF and concomitant effects on perspective-taking across the lifespan. To address this, we assessed whether individual differences in inhibition and attentional switching in healthy adults (ages 17–84) predict performance on a task in which speakers identified targets for a listener with size-contrasting competitors in common or privileged ground. Modification differences across conditions decreased with age. Further, perspective taking interacted with EF measures: youngest adults’ sensitivity to perspective was best captured by their inhibitory performance; oldest adults’ sensitivity was best captured by switching performance. Perspective-taking likely involves multiple aspects of EF, as revealed by considering a wider range of EF tasks and individual capacities across the lifespan.
  • Long, M., Moore, I., Mollica, F., & Rubio-Fernandez, P. (2021). Contrast perception as a visual heuristic in the formulation of referential expressions. Cognition, 217: 104879. doi:10.1016/j.cognition.2021.104879.

    Abstract

    We hypothesize that contrast perception works as a visual heuristic, such that when speakers perceive a significant degree of contrast in a visual context, they tend to produce the corresponding adjective to describe a referent. The contrast perception heuristic supports efficient audience design, allowing speakers to produce referential expressions with minimum expenditure of cognitive resources, while facilitating the listener's visual search for the referent. We tested the perceptual contrast hypothesis in three language-production experiments. Experiment 1 revealed that speakers overspecify color adjectives in polychrome displays, whereas in monochrome displays they overspecified other properties that were contrastive. Further support for the contrast perception hypothesis comes from a re-analysis of previous work, which confirmed that color contrast elicits color overspecification when detected in a given display, but not when detected across monochrome trials. Experiment 2 revealed that even atypical colors (which are often overspecified) are only mentioned if there is color contrast. In Experiment 3, participants named a target color faster in monochrome than in polychrome displays, suggesting that the effect of color contrast is not analogous to ease of production. We conclude that the tendency to overspecify color in polychrome displays is not a bottom-up effect driven by the visual salience of color as a property, but possibly a learned communicative strategy. We discuss the implications of our account for pragmatic theories of referential communication and models of audience design, challenging the view that overspecification is a form of egocentric behavior.

    Additional information

    supplementary data
  • Long, M., Shukla, V., & Rubio-Fernandez, P. (2021). The development of simile comprehension: From similarity to scalar implicature. Child Development, 92(4), 1439-1457. doi:10.1111/cdev.13507.

    Abstract

    Similes require two different pragmatic skills: appreciating the intended similarity and deriving a scalar implicature (e.g., “Lucy is like a parrot” normally implies that Lucy is not a parrot), but previous studies overlooked this second skill. In Experiment 1, preschoolers (N = 48; ages 3–5) understood “X is like a Y” as an expression of similarity. In Experiment 2 (N = 99; ages 3–6, 13) and Experiment 3 (N = 201; ages 3–5 and adults), participants received metaphors (“Lucy is a parrot”) or similes (“Lucy is like a parrot”) as clues to select one of three images (a parrot, a girl or a parrot-looking girl). An early developmental trend revealed that 3-year-olds started deriving the implicature “X is not a Y,” whereas 5-year-olds performed like adults.
  • Lopopolo, A., Van de Bosch, A., Petersson, K. M., & Willems, R. M. (2021). Distinguishing syntactic operations in the brain: Dependency and phrase-structure parsing. Neurobiology of Language, 2(1), 152-175. doi:10.1162/nol_a_00029.

    Abstract

    Finding the structure of a sentence — the way its words hold together to convey meaning — is a fundamental step in language comprehension. Several brain regions, including the left inferior frontal gyrus, the left posterior superior temporal gyrus, and the left anterior temporal pole, are supposed to support this operation. The exact role of these areas is nonetheless still debated. In this paper we investigate the hypothesis that different brain regions could be sensitive to different kinds of syntactic computations. We compare the fit of phrase-structure and dependency structure descriptors to activity in brain areas using fMRI. Our results show a division between areas with regard to the type of structure computed, with the left ATP and left IFG favouring dependency structures and left pSTG favouring phrase structures.
  • Lowndes, R., Molz, B., Warriner, L., Herbik, A., De Best, P. B., Raz, N., Gouws, A., Ahmadi, K., McLean, R. J., Gottlob, I., Kohl, S., Choritz, L., Maguire, J., Kanowski, M., Käsmann-Kellner, B., Wieland, I., Banin, E., Levin, N., Hoffmann, M. B., Morland, A. B., & Baseler, H. A. (2021). Structural differences across multiple visual cortical regions in the absence of cone function in congenital achromatopsia. Frontiers in Neuroscience, 15: 718958. doi:10.3389/fnins.2021.718958.

    Abstract

    Most individuals with congenital achromatopsia (ACHM) carry mutations that affect the retinal phototransduction pathway of cone photoreceptors, fundamental to both high acuity vision and colour perception. As the central fovea is occupied solely by cones, achromats have an absence of retinal input to the visual cortex and a small central area of blindness. Additionally, those with complete ACHM have no colour perception, and colour processing regions of the ventral cortex also lack typical chromatic signals from the cones. This study examined the cortical morphology (grey matter volume, cortical thickness, and cortical surface area) of multiple visual cortical regions in ACHM (n = 15) compared to normally sighted controls (n = 42) to determine the cortical changes that are associated with the retinal characteristics of ACHM. Surface-based morphometry was applied to T1-weighted MRI in atlas-defined early, ventral and dorsal visual regions of interest. Reduced grey matter volume in V1, V2, V3, and V4 was found in ACHM compared to controls, driven by a reduction in cortical surface area as there was no significant reduction in cortical thickness. Cortical surface area (but not thickness) was reduced in a wide range of areas (V1, V2, V3, TO1, V4, and LO1). Reduction in early visual areas with large foveal representations (V1, V2, and V3) suggests that the lack of foveal input to the visual cortex was a major driving factor in morphological changes in ACHM. However, the significant reduction in ventral area V4 coupled with the lack of difference in dorsal areas V3a and V3b suggest that deprivation of chromatic signals to visual cortex in ACHM may also contribute to changes in cortical morphology. This research shows that the congenital lack of cone input to the visual cortex can lead to widespread structural changes across multiple visual areas.

    Additional information

    table S1
  • Lumaca, M., Ravignani, A., & Baggio, G. (2018). Music evolution in the laboratory: Cultural transmission meets neurophysiology. Frontiers in Neuroscience, 12: 246. doi:10.3389/fnins.2018.00246.

    Abstract

    In recent years, there has been renewed interest in the biological and cultural evolution of music, and specifically in the role played by perceptual and cognitive factors in shaping core features of musical systems, such as melody, harmony, and rhythm. One proposal originates in the language sciences. It holds that aspects of musical systems evolve by adapting gradually, in the course of successive generations, to the structural and functional characteristics of the sensory and memory systems of learners and “users” of music. This hypothesis has found initial support in laboratory experiments on music transmission. In this article, we first review some of the most important theoretical and empirical contributions to the field of music evolution. Next, we identify a major current limitation of these studies, i.e., the lack of direct neural support for the hypothesis of cognitive adaptation. Finally, we discuss a recent experiment in which this issue was addressed by using event-related potentials (ERPs). We suggest that the introduction of neurophysiology in cultural transmission research may provide novel insights on the micro-evolutionary origins of forms of variation observed in cultural systems.
  • Lutzenberger, H., De Vos, C., Crasborn, O., & Fikkert, P. (2021). Formal variation in the Kata Kolok lexicon. Glossa: a journal of general linguistics, 6. doi:10.16995/glossa.5880.

    Abstract

    Sign language lexicons incorporate phonological specifications. Evidence from emerging sign languages suggests that phonological structure emerges gradually in a new language. In this study, we investigate variation in the form of signs across 20 deaf adult signers of Kata Kolok, a sign language that emerged spontaneously in a Balinese village community. Combining methods previously used for sign comparisons, we introduce a new numeric measure of variation. Our nuanced yet comprehensive approach to form variation integrates three levels (iconic motivation, surface realisation, feature differences) and allows for refinement through weighting the variation score by token and signer frequency. We demonstrate that variation in the form of signs appears in different degrees at different levels. Token frequency in a given dataset greatly affects how much variation can surface, suggesting caution in interpreting previous findings. Different sign variants have different scopes of use among the signing population, with some more widely used than others. Both frequency weightings (token and signer) identify dominant sign variants, i.e., sign forms that are produced frequently or by many signers. We argue that variation does not equal the absence of conventionalisation. Indeed, especially in micro-community sign languages, variation may be key to understanding patterns of language emergence.
  • Lutzenberger, H. (2018). Manual and nonmanual features of name signs in Kata Kolok and sign language of the Netherlands. Sign Language Studies, 18(4), 546-569. doi:10.1353/sls.2018.0016.

    Abstract

    Name signs are based on descriptions, initialization, and loan translations. Nyst and Baker (2003) have found crosslinguistic similarities in the phonology of name signs, such as a preference for one-handed signs and for the head location. Studying Kata Kolok (KK), a rural sign language without indigenous fingerspelling, strongly suggests that one-handedness is not correlated with initialization, but represents a more general feature of name sign phonology. As in other sign languages, the head location is used frequently in both KK and Sign Language of the Netherlands (NGT) name signs. The use of nonmanuals, however, is strikingly different. NGT name signs are always accompanied by mouthings, which are absent in KK. Instead, KK name signs may use mouth gestures; these may disambiguate manually identical name signs, and even form independent name signs without any manual features.
  • Majid, A., Roberts, S. G., Cilissen, L., Emmorey, K., Nicodemus, B., O'Grady, L., Woll, B., LeLan, B., De Sousa, H., Cansler, B. L., Shayan, S., De Vos, C., Senft, G., Enfield, N. J., Razak, R. A., Fedden, S., Tufvesson, S., Dingemanse, M., Ozturk, O., Brown, P., Hill, C., Le Guen, O., Hirtzel, V., Van Gijn, R., Sicoli, M. A., & Levinson, S. C. (2018). Differential coding of perception in the world’s languages. Proceedings of the National Academy of Sciences of the United States of America, 115(45), 11369-11376. doi:10.1073/pnas.1720419115.

    Abstract

    Is there a universal hierarchy of the senses, such that some senses (e.g., vision) are more accessible to consciousness and linguistic description than others (e.g., smell)? The long-standing presumption in Western thought has been that vision and audition are more objective than the other senses, serving as the basis of knowledge and understanding, whereas touch, taste, and smell are crude and of little value. This predicts that humans ought to be better at communicating about sight and hearing than the other senses, and decades of work based on English and related languages certainly suggests this is true. However, how well does this reflect the diversity of languages and communities worldwide? To test whether there is a universal hierarchy of the senses, stimuli from the five basic senses were used to elicit descriptions in 20 diverse languages, including 3 unrelated sign languages. We found that languages differ fundamentally in which sensory domains they linguistically code systematically, and how they do so. The tendency for better coding in some domains can be explained in part by cultural preoccupations. Although languages seem free to elaborate specific sensory domains, some general tendencies emerge: for example, with some exceptions, smell is poorly coded. The surprise is that, despite the gradual phylogenetic accumulation of the senses, and the imbalances in the neural tissue dedicated to them, no single hierarchy of the senses imposes itself upon language.
  • Majid, A. (2018). Humans are neglecting our sense of smell. Here's what we could gain by fixing that. Time, March 7, 2018: 5130634.
  • Majid, A., & Kruspe, N. (2018). Hunter-gatherer olfaction is special. Current Biology, 28(3), 409-413. doi:10.1016/j.cub.2017.12.014.

    Abstract

    People struggle to name odors, but this limitation is not universal. Majid and Kruspe investigate whether superior olfactory performance is due to subsistence, ecology, or language family. By comparing closely related communities in the Malay Peninsula, they find that only hunter-gatherers are proficient odor namers, suggesting that subsistence is crucial.

    Additional information

    The data are archived at RWAAI
  • Majid, A., Burenhult, N., Stensmyr, M., De Valk, J., & Hansson, B. S. (2018). Olfactory language and abstraction across cultures. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 373: 20170139. doi:10.1098/rstb.2017.0139.

    Abstract

    Olfaction presents a particularly interesting arena to explore abstraction in language. Like other abstract domains, such as time, odours can be difficult to conceptualize. An odour cannot be seen or held, it can be difficult to locate in space, and for most people odours are difficult to verbalize. On the other hand, odours give rise to primary sensory experiences. Every time we inhale we are using olfaction to make sense of our environment. We present new experimental data from 30 Jahai hunter-gatherers from the Malay Peninsula and 30 matched Dutch participants from the Netherlands in an odour naming experiment. Participants smelled monomolecular odorants and named odours while reaction times, odour descriptors and facial expressions were measured. We show that while Dutch speakers relied on concrete descriptors, i.e. they referred to odour sources (e.g. smells like lemon), the Jahai used abstract vocabulary to name the same odours (e.g. musty). Despite this differential linguistic categorization, analysis of facial expressions showed that the two groups, nevertheless, had the same initial emotional reactions to odours. Critically, these cross-cultural data present a challenge for how to think about abstraction in language.
  • Mak, M., & Willems, R. M. (2021). Eyelit: Eye movement and reader response data during literary reading. Journal of open humanities data, 7: 25. doi:10.5334/johd.49.

    Abstract

    An eye-tracking data set is described of 102 participants reading three Dutch literary short stories each (7790 words in total per participant). The pre-processed data set includes (1) Fixation report, (2) Saccade report, (3) Interest Area report, (4) Trial report (aggregated data for each page), (5) Sample report (sampling rate = 500 Hz), (6) Questionnaire data on reading experiences and participant characteristics, and (7) word characteristics for all words (with the potential of calculating additional word characteristics). It is stored on DANS, and can be used to study word characteristics or literary reading and all facets of eye movements.
  • Mamus, E., & Boduroglu, A. (2018). The role of context on boundary extension. Visual Cognition, 26(2), 115-130. doi:10.1080/13506285.2017.1399947.

    Abstract

    Boundary extension (BE) is a memory error in which observers remember more of a scene than they actually viewed. This error reflects one’s prediction that a scene naturally continues and is driven by scene schema and contextual knowledge. In two separate experiments we investigated the necessity of context and scene schema in BE. In Experiment 1, observers viewed scenes that either contained semantically consistent or inconsistent objects as well as objects on white backgrounds. In both types of scenes and in the no-background condition there was a BE effect; critically, semantic inconsistency in scenes reduced the magnitude of BE. In Experiment 2 when we used abstract shapes instead of meaningful objects, there was no BE effect. We suggest that although scene schema is necessary to elicit BE, contextual consistency is not required.
  • Manahova, M. E., Mostert, P., Kok, P., Schoffelen, J.-M., & De Lange, F. P. (2018). Stimulus familiarity and expectation jointly modulate neural activity in the visual ventral stream. Journal of Cognitive Neuroscience, 30(9), 1366-1377. doi:10.1162/jocn_a_01281.

    Abstract

    Prior knowledge about the visual world can change how a visual stimulus is processed. Two forms of prior knowledge are often distinguished: stimulus familiarity (i.e., whether a stimulus has been seen before) and stimulus expectation (i.e., whether a stimulus is expected to occur, based on the context). Neurophysiological studies in monkeys have shown suppression of spiking activity both for expected and for familiar items in object-selective inferotemporal cortex. It is an open question, however, if and how these types of knowledge interact in their modulatory effects on the sensory response. To address this issue and to examine whether previous findings generalize to noninvasively measured neural activity in humans, we separately manipulated stimulus familiarity and expectation while noninvasively recording human brain activity using magnetoencephalography. We observed independent suppression of neural activity by familiarity and expectation, specifically in the lateral occipital complex, the putative human homologue of monkey inferotemporal cortex. Familiarity also led to sharpened response dynamics, which was predominantly observed in early visual cortex. Together, these results show that distinct types of sensory knowledge jointly determine the amount of neural resources dedicated to object processing in the visual ventral stream.
  • Mandy, W., Pellicano, L., St Pourcain, B., Skuse, D., & Heron, J. (2018). The development of autistic social traits across childhood and adolescence in males and females. The Journal of Child Psychology and Psychiatry, 59(11), 1143-1151. doi:10.1111/jcpp.12913.

    Abstract

    Background

    Autism is a dimensional condition, representing the extreme end of a continuum of social competence that extends throughout the general population. Currently, little is known about how autistic social traits (ASTs), measured across the full spectrum of severity, develop during childhood and adolescence, including whether there are developmental differences between boys and girls. Therefore, we sought to chart the trajectories of ASTs in the general population across childhood and adolescence, with a focus on gender differences.
    Methods

    Participants were 9,744 males (n = 4,784) and females (n = 4,960) from ALSPAC, a UK birth cohort study. ASTs were assessed when participants were aged 7, 10, 13 and 16 years, using the parent‐report Social Communication Disorders Checklist. Data were modelled using latent growth curve analysis.
    Results

    Developmental trajectories of males and females were nonlinear, showing a decline from 7 to 10 years, followed by an increase between 10 and 16 years. At 7 years, males had higher levels of ASTs than females (mean raw score difference = 0.88, 95% CI [.72, 1.04]), and were more likely (odds ratio [OR] = 1.99; 95% CI, 1.82, 2.16) to score in the clinical range on the SCDC. By 16 years this gender difference had disappeared: males and females had, on average, similar levels of ASTs (mean difference = 0.00, 95% CI [−0.19, 0.19]) and were equally likely to score in the SCDC's clinical range (OR = 0.91, 95% CI, 0.73, 1.10). This was the result of an increase in females’ ASTs between 10 and 16 years.
    Conclusions

    There are gender‐specific trajectories of autistic social impairment, with females more likely than males to experience an escalation of ASTs during early‐ and midadolescence. It remains to be discovered whether the observed female adolescent increase in ASTs represents the genuine late onset of social difficulties or earlier, subtle, pre‐existing difficulties becoming more obvious.

    Additional information

    jcpp12913-sup-0001-supinfo.docx
  • Manhardt, F., Brouwer, S., & Ozyurek, A. (2021). A tale of two modalities: Sign and speech influence each other in bimodal bilinguals. Psychological Science, 32(3), 424-436. doi:10.1177/0956797620968789.

    Abstract

    Bimodal bilinguals are hearing individuals fluent in a sign and a spoken language. Can the two languages influence each other in such individuals despite differences in the visual (sign) and vocal (speech) modalities of expression? We investigated cross-linguistic influences on bimodal bilinguals’ expression of spatial relations. Unlike spoken languages, sign uses iconic linguistic forms that resemble physical features of objects in a spatial relation and thus expresses specific semantic information. Hearing bimodal bilinguals (n = 21) fluent in Dutch and Sign Language of the Netherlands and their hearing nonsigning and deaf signing peers (n = 20 each) described left/right relations between two objects. Bimodal bilinguals expressed more specific information about physical features of objects in speech than nonsigners, showing influence from sign language. They also used fewer iconic signs with specific semantic information than deaf signers, demonstrating influence from speech. Bimodal bilinguals’ speech and signs are shaped by two languages from different modalities.

    Additional information

    supplementary materials
  • Martin, A. E. (2018). Cue integration during sentence comprehension: Electrophysiological evidence from ellipsis. PLoS One, 13(11): e0206616. doi:10.1371/journal.pone.0206616.

    Abstract

    Language processing requires us to integrate incoming linguistic representations with representations of past input, often across intervening words and phrases. This computational situation has been argued to require retrieval of the appropriate representations from memory via a set of features or representations serving as retrieval cues. However, even within a cue-based retrieval account of language comprehension, both the structure of retrieval cues and the particular computation that underlies direct-access retrieval are still underspecified. Evidence from two event-related brain potential (ERP) experiments that show cue-based interference from different types of linguistic representations during ellipsis comprehension is consistent with an architecture wherein different cue types are integrated, and where the interaction of cues with the recent contents of memory determines processing outcome, including expression of the interference effect in ERP componentry. I conclude that retrieval likely includes a computation where cues are integrated with the contents of memory via a linear weighting scheme, and I propose vector addition as a candidate formalization of this computation. I attempt to account for these effects and other related phenomena within a broader cue-based framework of language processing.
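
    The abstract's closing proposal, cue integration as a linear weighting of cues with vector addition as a candidate formalization, can be illustrated with a small sketch. The vector dimensionality, the weights, and the dot-product matching step against memory items are illustrative assumptions, not the model in the paper.

```python
import numpy as np

def integrate_cues(cues, weights=None):
    """Combine retrieval cues by linear weighting (vector addition)."""
    cues = np.asarray(cues, dtype=float)
    weights = np.ones(len(cues)) if weights is None else np.asarray(weights, float)
    return (weights[:, None] * cues).sum(axis=0)

def retrieval_scores(probe, memory_items):
    """Match the integrated cue against memory contents; a normalized dot
    product stands in here for the (unspecified) similarity computation."""
    probe = probe / np.linalg.norm(probe)
    items = memory_items / np.linalg.norm(memory_items, axis=1, keepdims=True)
    return items @ probe

# Hypothetical example: two cue vectors (e.g., syntactic and semantic features)
# matched against three items currently held in memory.
rng = np.random.default_rng(3)
cues = rng.standard_normal((2, 10))
memory = rng.standard_normal((3, 10))
print(retrieval_scores(integrate_cues(cues), memory))
```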
  • Martin, A. E., & McElree, B. (2018). Retrieval cues and syntactic ambiguity resolution: Speed-accuracy tradeoff evidence. Language, Cognition and Neuroscience, 33(6), 769-783. doi:10.1080/23273798.2018.1427877.

    Abstract

    Language comprehension involves coping with ambiguity and recovering from misanalysis. Syntactic ambiguity resolution is associated with increased reading times, a classic finding that has shaped theories of sentence processing. However, reaction times conflate the time it takes a process to complete with the quality of the behavior-related information available to the system. We therefore used the speed-accuracy tradeoff procedure (SAT) to derive orthogonal estimates of processing time and interpretation accuracy, and tested whether stronger retrieval cues (via semantic relatedness: neighed->horse vs. fell->horse) aid interpretation during recovery. On average, ambiguous sentences took 250ms longer (SAT rate) to interpret than unambiguous controls, demonstrating veridical differences in processing time. Retrieval cues more strongly related to the true subject always increased accuracy, regardless of ambiguity. These findings are consistent with a language processing architecture where cue-driven operations give rise to interpretation, and wherein diagnostic cues aid retrieval, regardless of parsing difficulty or structural uncertainty.
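
    For readers unfamiliar with the SAT procedure, accuracy in this paradigm is commonly modeled as an exponential approach to an asymptote, so that processing dynamics (rate and intercept) can be estimated separately from the quality of the final interpretation. A minimal sketch of that standard function follows; the parameter values are arbitrary, and the fitting procedure used in the paper is not reproduced here.

```python
import numpy as np

def sat_curve(t, lam, beta, delta):
    """Speed-accuracy tradeoff function: accuracy (d') rises from chance at
    intercept delta toward asymptote lam at rate beta:
    d'(t) = lam * (1 - exp(-beta * (t - delta))) for t > delta, else 0."""
    t = np.asarray(t, dtype=float)
    return np.where(t > delta, lam * (1.0 - np.exp(-beta * (t - delta))), 0.0)

# Hypothetical parameters: dynamics (beta, delta) index processing speed,
# the asymptote (lam) indexes interpretation accuracy.
times = np.linspace(0.0, 3.0, 7)   # hypothetical response-signal lags in seconds
print(sat_curve(times, lam=2.5, beta=2.0, delta=0.4))
```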
  • Maslowski, M., Meyer, A. S., & Bosker, H. R. (2018). Listening to yourself is special: Evidence from global speech rate tracking. PLoS One, 13(9): e0203571. doi:10.1371/journal.pone.0203571.

    Abstract

    Listeners are known to use adjacent contextual speech rate in processing temporally ambiguous speech sounds. For instance, an ambiguous vowel between short /A/ and long /a:/ in Dutch sounds relatively long (i.e., as /a:/) embedded in a fast precursor sentence, but short in a slow sentence. Besides the local speech rate, listeners also track talker-specific global speech rates. However, it is yet unclear whether other talkers' global rates are encoded with reference to a listener's self-produced rate. Three experiments addressed this question. In Experiment 1, one group of participants was instructed to speak fast, whereas another group had to speak slowly. The groups were compared on their perception of ambiguous /A/-/a:/ vowels embedded in neutral rate speech from another talker. In Experiment 2, the same participants listened to playback of their own speech and again evaluated target vowels in neutral rate speech. Neither of these experiments provided support for the involvement of self-produced speech in perception of another talker's speech rate. Experiment 3 repeated Experiment 2 but with a new participant sample that was unfamiliar with the participants from Experiment 2. This experiment revealed fewer /a:/ responses in neutral speech in the group also listening to a fast rate, suggesting that neutral speech sounds slow in the presence of a fast talker and vice versa. Taken together, the findings show that self-produced speech is processed differently from speech produced by others. They carry implications for our understanding of the perceptual and cognitive mechanisms involved in rate-dependent speech perception in dialogue settings.
  • McConnell, K., & Blumenthal-Dramé, A. (2021). Usage-Based Individual Differences in the Probabilistic Processing of Multi-Word Sequences. Frontiers in Communication, 6: 703351. doi:10.3389/fcomm.2021.703351.

    Abstract

    While it is widely acknowledged that both predictive expectations and retrodictive integration influence language processing, the individual differences that affect these two processes and the best metrics for observing them have yet to be fully described. The present study aims to contribute to the debate by investigating the extent to which experience-based variables modulate the processing of word pairs (bigrams). Specifically, we investigate how age and reading experience correlate with lexical anticipation and integration, and how this effect can be captured by the metrics of forward and backward transition probability (TP). Participants read more and less strongly associated bigrams, paired to control for known lexical covariates such as bigram frequency and meaning (i.e., absolute control, total control, absolute silence, total silence), in a self-paced reading (SPR) task. They additionally completed assessments of exposure to print text (Author Recognition Test, Shipley vocabulary assessment, Words that Go Together task) and provided their age. Results show that both older age and lesser reading experience individually correlate with stronger TP effects. Moreover, TP effects differ across the spillover region (the two words following the noun in the bigram).
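
    The forward and backward transition probability metrics discussed above reduce to simple conditional probabilities over bigram counts. The toy corpus below (reusing the abstract's example items) only illustrates the definitions; it is not the corpus-based estimation used in the study.

```python
from collections import Counter

def transition_probabilities(word_pairs):
    """Forward TP  = P(w2 | w1) = count(w1 w2) / count(w1 in first position)
       Backward TP = P(w1 | w2) = count(w1 w2) / count(w2 in second position)"""
    bigrams = Counter(word_pairs)
    first = Counter(w1 for w1, _ in word_pairs)
    second = Counter(w2 for _, w2ord in [(None, p[1]) for p in word_pairs] for w2 in [w2ord])
    # simpler equivalent of the line above:
    second = Counter(w2 for _, w2 in word_pairs)
    fwd = {(w1, w2): c / first[w1] for (w1, w2), c in bigrams.items()}
    bwd = {(w1, w2): c / second[w2] for (w1, w2), c in bigrams.items()}
    return fwd, bwd

# Hypothetical toy corpus of bigram tokens, using the stimulus items named
# in the abstract (absolute/total control, absolute/total silence).
pairs = ([("absolute", "control")] * 3 + [("total", "control")] +
         [("absolute", "silence")] + [("total", "silence")] * 3)
fwd, bwd = transition_probabilities(pairs)
print(fwd[("absolute", "control")], bwd[("absolute", "control")])
```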
  • McQueen, J. M., Norris, D., & Cutler, A. (1994). Competition in spoken word recognition: Spotting words in other words. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20, 621-638.

    Abstract

    Although word boundaries are rarely clearly marked, listeners can rapidly recognize the individual words of spoken sentences. Some theories explain this in terms of competition between multiply activated lexical hypotheses; others invoke sensitivity to prosodic structure. We describe a connectionist model, SHORTLIST, in which recognition by activation and competition is successful with a realistically sized lexicon. Three experiments are then reported in which listeners detected real words embedded in nonsense strings, some of which were themselves the onsets of longer words. Effects both of competition between words and of prosodic structure were observed, suggesting that activation and competition alone are not sufficient to explain word recognition in continuous speech. However, the results can be accounted for by a version of SHORTLIST that is sensitive to prosodic structure.
  • Mei, C., Fedorenko, E., Amor, D. J., Boys, A., Hoeflin, C., Carew, P., Burgess, T., Fisher, S. E., & Morgan, A. T. (2018). Deep phenotyping of speech and language skills in individuals with 16p11.2 deletion. European journal of human genetics, 26(5), 676-686. doi:10.1038/s41431-018-0102-x.

    Abstract

    Recurrent deletions of a ~600-kb region of 16p11.2 have been associated with a highly penetrant form of childhood apraxia of speech (CAS). Yet prior findings have been based on a small, potentially biased sample using retrospectively collected data. We examine the prevalence of CAS in a larger cohort of individuals with 16p11.2 deletion using a prospectively designed assessment battery. The broader speech and language phenotype associated with carrying this deletion was also examined. 55 participants with 16p11.2 deletion (47 children, 8 adults) underwent deep phenotyping to test for the presence of CAS and other speech and language diagnoses. Standardized tests of oral motor functioning, speech production, language, and non-verbal IQ were conducted. The majority of children (77%) and half of adults (50%) met criteria for CAS. Other speech outcomes were observed including articulation or phonological errors (i.e., phonetic and cognitive-linguistic errors, respectively), dysarthria (i.e., neuromuscular speech disorder), minimal verbal output, and even typical speech in some. Receptive and expressive language impairment was present in 73% and 70% of children, respectively. Co-occurring neurodevelopmental conditions (e.g., autism) and non-verbal IQ did not correlate with the presence of CAS. Findings indicate that CAS is highly prevalent in children with 16p11.2 deletion with symptoms persisting into adulthood for many. Yet CAS occurs in the context of a broader speech and language profile and other neurobehavioral deficits. Further research will elucidate specific genetic and neural pathways leading to speech and language deficits in individuals with 16p11.2 deletions, resulting in more targeted speech therapies addressing etiological pathways.
  • Melnychuk, T., Galke, L., Seidlmayer, E., Förster, K. U., Tochtermann, K., & Schultz, C. (2021). Früherkennung wissenschaftlicher Konvergenz im Hochschulmanagement. Hochschulmanagement, 16(1), 24-28.

    Abstract

    It is crucial for universities to recognize early signals of scientific convergence. Scientific convergence describes a dynamic pattern in which the distance between different fields of knowledge shrinks over time. Such a converging knowledge space is conducive to radical innovations and promising new research topics. Research in converging areas of knowledge can therefore allow universities to establish a leading position in the science community. The Q-AKTIV project develops a new machine-learning-based approach to identify scientific convergence at an early stage. In this work, we briefly present this approach and the first results of its empirical validation. We discuss the benefits of an instrument built on our approach for the strategic management of universities and other research institutes.
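    Purely as an illustration of the "shrinking distance between knowledge fields" idea, and not as a description of the Q-AKTIV method, one could embed each field's publications per year and track the distance between the field centroids over time; a decreasing trend would signal convergence. All function names and data in the sketch below are hypothetical.

        # Hypothetical sketch: convergence of two fields measured as the shrinking
        # cosine distance between their yearly publication-embedding centroids.
        import numpy as np

        def centroid(embeddings: np.ndarray) -> np.ndarray:
            # Mean embedding of one field's publications in a given year.
            return embeddings.mean(axis=0)

        def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
            return 1.0 - float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

        def convergence_trend(field_a_by_year, field_b_by_year):
            # Inputs: dicts mapping year -> (n_papers, dim) arrays of document embeddings
            # (hypothetical data from any text-embedding model).
            years = sorted(set(field_a_by_year) & set(field_b_by_year))
            distances = [cosine_distance(centroid(field_a_by_year[y]),
                                         centroid(field_b_by_year[y])) for y in years]
            slope = np.polyfit(years, distances, 1)[0]
            return years, distances, slope  # a negative slope indicates converging fields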
  • Menks, W. M., Fehlbaum, L. V., Borbás, R., Sterzer, P., Stadler, C., & Raschle, N. M. (2021). Eye gaze patterns and functional brain responses during emotional face processing in adolescents with conduct disorder. NeuroImage: Clinical, 29: 102519. doi:10.1016/j.nicl.2020.102519.

    Abstract

    Background: Conduct disorder (CD) is characterized by severe aggressive and antisocial behavior. Initial evidence suggests neural deficits and aberrant eye gaze patterns during emotion processing in CD; the two, however, have not yet been studied simultaneously. The present study assessed the functional brain correlates of emotional face processing with and without consideration of concurrent eye gaze behavior in adolescents with CD compared to typically developing (TD) adolescents. Methods: 58 adolescents (23 CD/35 TD; average age = 16 years, range = 14–19 years) underwent an implicit emotional face processing task. Neuroimaging analyses were conducted for a priori-defined regions of interest (insula, amygdala, and medial orbitofrontal cortex), using a full-factorial design assessing the main effects of emotion (neutral, anger, fear) and group, and their interaction (cluster-level, p < .05 FWE-corrected), with and without consideration of concurrent eye gaze behavior (i.e., time spent on the eye region). Results: Adolescents with CD showed significant hypo-activations during emotional face processing in the right anterior insula compared to TD adolescents, independent of the emotion presented. In-scanner eye-tracking data revealed that adolescents with CD spent significantly less time on the eye region, but not the mouth region. Correcting for eye gaze behavior during emotional face processing reduced the group differences previously observed for the right insula. Conclusions: Atypical insula activation during emotional face processing in adolescents with CD may partly be explained by attentional mechanisms (i.e., reduced gaze allocation to the eyes, independent of the emotion presented). An increased understanding of the mechanisms underlying the emotion processing deficits observed in CD may ultimately aid the development of personalized intervention programs.

    Additional information

    1-s2.0-S2213158220303569-mmc1.doc
  • He, J., Meyer, A. S., Creemers, A., & Brehm, L. (2021). Conducting language production research online: A web-based study of semantic context and name agreement effects in multi-word production. Collabra: Psychology, 7(1): 29935. doi:10.1525/collabra.29935.

    Abstract

    Few web-based experiments have explored spoken language production, perhaps due to concerns of data quality, especially for measuring onset latencies. The present study highlights how speech production research can be done outside of the laboratory by measuring utterance durations and speech fluency in a multiple-object naming task when examining two effects related to lexical selection: semantic context and name agreement. A web-based modified blocked-cyclic naming paradigm was created, in which participants named a total of sixteen simultaneously presented pictures on each trial. The pictures were either four tokens from the same semantic category (homogeneous context), or four tokens from different semantic categories (heterogeneous context). Name agreement of the pictures was varied orthogonally (high, low). In addition to onset latency, five dependent variables were measured to index naming performance: accuracy, utterance duration, total pause time, the number of chunks (word groups pronounced without intervening pauses), and first chunk length. Bayesian analyses showed effects of semantic context and name agreement for some of the dependent measures, but no interaction. We discuss the methodological implications of the current study and make best practice recommendations for spoken language production research in an online environment.
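    The chunk-based fluency measures described above can be made concrete with a small sketch that groups word-level timestamps into chunks separated by pauses. The 250 ms pause threshold and the input format are assumptions for illustration, not the study's exact criteria.

        # Illustrative computation of chunk-based fluency measures from word timestamps.
        PAUSE_THRESHOLD = 0.250  # seconds of silence that ends a chunk (assumed)

        def chunk_measures(words):
            # words: list of (label, onset_s, offset_s) tuples in utterance order.
            if not words:
                return {"n_chunks": 0, "first_chunk_length": 0, "total_pause_time": 0.0}
            chunks = [[words[0]]]
            total_pause = 0.0
            for prev, cur in zip(words, words[1:]):
                gap = cur[1] - prev[2]          # silence between consecutive words
                if gap >= PAUSE_THRESHOLD:
                    total_pause += gap
                    chunks.append([cur])        # a long enough pause starts a new chunk
                else:
                    chunks[-1].append(cur)
            return {
                "n_chunks": len(chunks),
                "first_chunk_length": len(chunks[0]),  # words before the first pause
                "total_pause_time": round(total_pause, 3),
            }

        # Example: four words with one 400 ms pause -> 2 chunks, the first of length 2.
        print(chunk_measures([("cat", 0.00, 0.40), ("dog", 0.45, 0.85),
                              ("chair", 1.25, 1.70), ("table", 1.75, 2.20)]))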
  • He, J., Meyer, A. S., & Brehm, L. (2021). Concurrent listening affects speech planning and fluency: The roles of representational similarity and capacity limitation. Language, Cognition and Neuroscience, 36(10), 1258-1280. doi:10.1080/23273798.2021.1925130.

    Abstract

    In a novel continuous speaking-listening paradigm, we explored how speech planning was affected by concurrent listening. In Experiment 1, Dutch speakers named pictures with high versus low name agreement while ignoring Dutch speech, Chinese speech, or eight-talker babble. Both name agreement and type of auditory input influenced response timing and chunking, suggesting that representational similarity impacts lexical selection and the scope of advance planning in utterance generation. In Experiment 2, Dutch speakers named pictures with high or low name agreement while either ignoring Dutch words, or attending to them for a later memory test. Both name agreement and attention demand influenced response timing and chunking, suggesting that attention demand impacts lexical selection and the planned utterance units in each response. The study indicates that representational similarity and attention demand play important roles in linguistic dual-task interference, and the interference can be managed by adapting when and how to plan speech.

    Additional information

    supplemental material
  • Meyer, A. S. (1992). Investigation of phonological encoding through speech error analyses: Achievements, limitations, and alternatives. Cognition, 42, 181-211. doi:10.1016/0010-0277(92)90043-H.

    Abstract

    Phonological encoding in language production can be defined as a set of processes generating utterance forms on the basis of semantic and syntactic information. Most evidence about these processes stems from analyses of sound errors. In section 1 of this paper, certain important results of these analyses are reviewed. Two prominent models of phonological encoding, which are mainly based on speech error evidence, are discussed in section 2. In section 3, limitations of speech error analyses are discussed, and it is argued that detailed and comprehensive models of phonological encoding cannot be derived solely on the basis of error analyses. As is argued in section 4, a new research strategy is required. Instead of using the properties of errors to draw inferences about the generation of correct word forms, future research should directly investigate the normal process of phonological encoding.
  • Meyer, A. S., & Bock, K. (1992). The tip-of-the-tongue phenomenon: Blocking or partial activation? Memory & Cognition, 20(6), 715-726.

    Abstract

    Tip-of-the-tongue states may represent the momentary unavailability of an otherwise accessible word or the weak activation of an otherwise inaccessible word. In three experiments designed to address these alternative views, subjects attempted to retrieve rare target words from their definitions. The definitions were followed by cues that were related to the targets in sound, by cues that were related in meaning, and by cues that were not related to the targets. Experiment 1 found that compared with unrelated cues, related cue words that were presented immediately after target definitions helped rather than hindered lexical retrieval, and that sound cues were more effective retrieval aids than meaning cues. Experiment 2 replicated these results when cues were presented after an initial target-retrieval attempt. These findings reverse a previous result (Jones, 1989), which was reproduced in Experiment 3 and shown to stem from a small group of unusually difficult target definitions.
  • Meyer, A. S. (1994). Timing in sentence production. Journal of Memory and Language, 33, 471-492. doi:10.1006/jmla.1994.1022.

    Abstract

    Recently, a new theory of timing in sentence production has been proposed by Ferreira (1993). This theory assumes that at the phonological level, each syllable of an utterance is assigned one or more abstract timing units depending on its position in the prosodic structure. The number of timing units associated with a syllable determines the time interval between its onset and the onset of the next syllable. An interesting prediction from the theory, which was confirmed in Ferreira's experiments with speakers of American English, is that the time intervals between syllable onsets should only depend on the syllables' positions in the prosodic structure, but not on their segmental content. However, in the present experiments, which were carried out in Dutch, the intervals between syllable onsets were consistently longer for phonetically long syllables than for short syllables. The implications of this result for models of timing in sentence production are discussed.
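    The timing-unit account summarized above reduces to a simple mapping: the onset-to-onset interval of a syllable is a function of the number of abstract timing units it receives from the prosodic structure, independent of its segmental content. The sketch below spells out that prediction with a hypothetical linear mapping and invented unit counts; the Dutch results reported in the paper are precisely what this prediction fails to capture, since segmentally long syllables produced longer intervals.

        # Hypothetical rendering of the timing-unit prediction, not Ferreira's (1993) actual model.
        MS_PER_UNIT = 180  # assumed duration of one abstract timing unit, in ms

        def predicted_intervals(timing_units_per_syllable):
            # The onset-to-onset interval depends only on the syllable's timing-unit count.
            return [units * MS_PER_UNIT for units in timing_units_per_syllable]

        # E.g., a phrase-final syllable might receive an extra unit:
        print(predicted_intervals([1, 1, 2]))  # -> [180, 180, 360]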
  • Meyer, A. S., Alday, P. M., Decuyper, C., & Knudsen, B. (2018). Working together: Contributions of corpus analyses and experimental psycholinguistics to understanding conversation. Frontiers in Psychology, 9: 525. doi:10.3389/fpsyg.2018.00525.

    Abstract

    As conversation is the most important way of using language, linguists and psychologists should combine forces to investigate how interlocutors deal with the cognitive demands arising during conversation. Linguistic analyses of corpora of conversation are needed to understand the structure of conversations, and experimental work is indispensable for understanding the underlying cognitive processes. We argue that joint consideration of corpus and experimental data is most informative when the utterances elicited in a lab experiment match those extracted from a corpus in relevant ways. This requirement to compare like with like seems obvious but is not trivial to achieve. To illustrate this approach, we report two experiments where responses to polar (yes/no) questions were elicited in the lab and the response latencies were compared to gaps between polar questions and answers in a corpus of conversational speech. We found, as expected, that responses were given faster when they were easy to plan and planning could be initiated earlier than when they were harder to plan and planning was initiated later. Overall, in all but one condition, the latencies were longer than one would expect based on the analyses of corpus data. We discuss the implication of this partial match between the data sets and more generally how corpus and experimental data can best be combined in studies of conversation.

    Additional information

    Data_Sheet_1.pdf
  • Meyer, A. S., Sleiderink, A. M., & Levelt, W. J. M. (1998). Viewing and naming objects: Eye movements during noun phrase production. Cognition, 66(2), B25-B33. doi:10.1016/S0010-0277(98)00009-2.

    Abstract

    Eye movements have been shown to reflect word recognition and language comprehension processes occurring during reading and auditory language comprehension. The present study examines whether the eye movements speakers make during object naming similarly reflect speech planning processes. In Experiment 1, speakers named object pairs saying, for instance, 'scooter and hat'. The objects were presented as ordinary line drawings or with partly deleted contours and had high- or low-frequency names. Contour type and frequency both significantly affected the mean naming latencies and the mean time spent looking at the objects. The frequency effects disappeared in Experiment 2, in which the participants categorized the objects instead of naming them. This suggests that the frequency effects of Experiment 1 arose during lexical retrieval. We conclude that eye movements during object naming indeed reflect linguistic planning processes and that the speakers' decision to move their eyes from one object to the next is contingent upon the retrieval of the phonological form of the object names.
  • Mickan, A., McQueen, J. M., Valentini, B., Piai, V., & Lemhöfer, K. (2021). Electrophysiological evidence for cross-language interference in foreign-language attrition. Neuropsychologia, 155: 107795. doi:10.1016/j.neuropsychologia.2021.107795.

    Abstract

    Foreign language attrition (FLA) appears to be driven by interference from other, more recently-used languages (Mickan et al., 2020). Here we tracked these interference dynamics electrophysiologically to further our understanding of the underlying processes. Twenty-seven Dutch native speakers learned 70 new Italian words over two days. On a third day, EEG was recorded as they performed naming tasks on half of these words in English and, finally, as their memory for all the Italian words was tested in a picture-naming task. Replicating Mickan et al., recall was slower and tended to be less complete for Italian words that were interfered with (i.e., named in English) than for words that were not. These behavioral interference effects were accompanied by an enhanced frontal N2 and a decreased late positivity (LPC) for interfered compared to not-interfered items. Moreover, interfered items elicited more theta power. We also found an increased N2 during the interference phase for items that participants were later slower to retrieve in Italian. We interpret the N2 and theta effects as markers of interference, in line with the idea that Italian retrieval at final test is hampered by competition from recently practiced English translations. The LPC, in turn, reflects the consequences of interference: the reduced accessibility of interfered Italian labels. Finally, that retrieval ease at final test was related to the degree of interference during previous English retrieval shows that FLA is already set in motion during the interference phase, and hence can be the direct consequence of using other languages.

    Additional information

    data via Donders Repository
  • Misersky, J., Slivac, K., Hagoort, P., & Flecken, M. (2021). The State of the Onion: Grammatical aspect modulates object representation during event comprehension. Cognition, 214: 104744. doi:10.1016/j.cognition.2021.104744.

    Abstract

    The present ERP study assessed whether grammatical aspect is used as a cue in online event comprehension, in particular when reading about events in which an object is visually changed. While perfective aspect cues holistic event representations, including an event's endpoint, progressive aspect highlights intermediate phases of an event. In a 2 × 3 design, participants read SVO sentences describing a change-of-state event (e.g., to chop an onion), with grammatical Aspect manipulated (perfective “chopped” vs progressive “was chopping”). Thereafter, they saw a Picture of an object either having undergone substantial state-change (SC; a chopped onion), no state-change (NSC; an onion in its original state) or an unrelated object (U; a cactus, acting as control condition). Their task was to decide whether the object in the Picture was mentioned in the sentence. We focused on N400 modulation, with ERPs time-locked to picture onset. U pictures elicited an N400 response as expected, suggesting detection of categorical mismatches in object type. For SC and NSC pictures, a whole-head follow-up analysis revealed a P300, implying people were engaged in detailed evaluation of pictures of matching objects. SC pictures received most positive responses overall. Crucially, there was an interaction of Aspect and Picture: SC pictures resulted in a higher amplitude P300 after sentences in the perfective compared to the progressive. Thus, while the perfective cued for a holistic event representation, including the resultant state of the affected object (i.e., the chopped onion) constraining object representations online, the progressive defocused event completion and object-state change. Grammatical aspect thus guided online event comprehension by cueing the visual representation(s) of an object's state.
  • Misra, S. (2021). Real-time dynamic fur and hair simulation using Verlet integration. International Journal of Scientific and Research Publications (IJSRP), 11(2), 444-450. doi:10.29322/IJSRP.11.02.2021.p11053.

    Abstract

    Throughout the history of game development, real-time hair simulation has remained a challenge because of the computational resources it demands. Unlike offline animation rendering, which has no real-time requirement, in-game hair physics must use computational resources efficiently. A cylinder or capsule mesh is the obvious choice for modeling a hair strand, despite the comparatively high number of draw calls and resources it requires. This paper proposes an efficient alternative: quad polygons whose normals face the renderer, combined with Verlet integration, which delivers stable results while keeping the frames per second (FPS) steady. The proposed physics also allows external forces, such as gravity and wind, to affect hair movement and simulates a natural curl in the hair strand.
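    The core numerical technique named in the title, Verlet integration with distance constraints between successive strand particles, can be sketched in a few lines. This is a generic illustration of position-based strand dynamics, not the paper's implementation; particle count, segment length, timestep, force values, and iteration count are all assumptions chosen for readability. At render time, each quad of the strand ribbon would be stretched between consecutive particle positions.

        # Minimal sketch of a single hair strand simulated with Verlet integration.
        import math

        N_PARTICLES = 16        # particles along the strand (assumed)
        SEGMENT_LENGTH = 0.05   # rest distance between neighbouring particles (assumed)
        GRAVITY = (0.0, -9.81, 0.0)
        WIND = (0.5, 0.0, 0.0)  # constant wind force (assumed)
        DT = 1.0 / 60.0         # fixed timestep: one frame at 60 FPS
        CONSTRAINT_ITERS = 8    # relaxation passes per frame (assumed)

        # Each particle stores its current and previous position; velocity is implicit.
        pos = [(0.0, -i * SEGMENT_LENGTH, 0.0) for i in range(N_PARTICLES)]
        prev = list(pos)

        def add(a, b): return (a[0] + b[0], a[1] + b[1], a[2] + b[2])
        def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
        def scale(a, s): return (a[0] * s, a[1] * s, a[2] * s)
        def length(a): return math.sqrt(a[0] ** 2 + a[1] ** 2 + a[2] ** 2)

        def step(root):
            # Advance the strand one frame; `root` is the scalp attachment point.
            global pos, prev
            accel = add(GRAVITY, WIND)
            # Verlet integration: x_new = 2x - x_prev + a * dt^2 (velocity stays implicit).
            new_pos = [add(sub(scale(p, 2.0), q), scale(accel, DT * DT))
                       for p, q in zip(pos, prev)]
            prev, pos = pos, new_pos
            # Constraint relaxation keeps neighbouring particles SEGMENT_LENGTH apart.
            for _ in range(CONSTRAINT_ITERS):
                pos[0] = root  # pin the first particle to the scalp
                for i in range(N_PARTICLES - 1):
                    delta = sub(pos[i + 1], pos[i])
                    dist = length(delta) or 1e-9
                    correction = scale(delta, 0.5 * (dist - SEGMENT_LENGTH) / dist)
                    pos[i] = add(pos[i], correction)
                    pos[i + 1] = sub(pos[i + 1], correction)
            pos[0] = root

        # step(root=(0.0, 0.0, 0.0)) would be called once per rendered frame.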
  • Mitterer, H., Reinisch, E., & McQueen, J. M. (2018). Allophones, not phonemes in spoken-word recognition. Journal of Memory and Language, 98, 77-92. doi:10.1016/j.jml.2017.09.005.

    Abstract

    What are the phonological representations that listeners use to map information about the segmental content of speech onto the mental lexicon during spoken-word recognition? Recent evidence from perceptual-learning paradigms seems to support (context-dependent) allophones as the basic representational units in spoken-word recognition. But recent evidence from a selective-adaptation paradigm seems to suggest that context-independent phonemes also play a role. We present three experiments using selective adaptation that constitute strong tests of these representational hypotheses. In Experiment 1, we tested generalization of selective adaptation using different allophones of Dutch /r/ and /l/ – a case where generalization has not been found with perceptual learning. In Experiments 2 and 3, we tested generalization of selective adaptation using German back fricatives in which allophonic and phonemic identity were varied orthogonally. In all three experiments, selective adaptation was observed only if adaptors and test stimuli shared allophones. Phonemic identity, in contrast, was neither necessary nor sufficient for generalization of selective adaptation to occur. These findings and other recent data using the perceptual-learning paradigm suggest that pre-lexical processing during spoken-word recognition is based on allophones, and not on context-independent phonemes.
