Publications

  • Frey, V., De Mulder, H. N. M., Ter Bekke, M., Struiksma, M. E., Van Berkum, J. J. A., & Buskens, V. (2022). Do self-talk phrases affect behavior in ultimatum games? Mind & Society, 21, 89-119. doi:10.1007/s11299-022-00286-8.

    Abstract

    The current study investigates whether self-talk phrases can influence behavior in Ultimatum Games. In our three self-talk treatments, participants were instructed to tell themselves (i) to keep their own interests in mind, (ii) to also think of the other person, or (iii) to take some time to contemplate their decision. We investigate how such so-called experimenter-determined strategic self-talk phrases affect behavior and emotions in comparison to a control treatment without instructed self-talk. The results demonstrate that other-focused self-talk can nudge proposers towards fair behavior, as offers were higher in this group than in the other conditions. For responders, self-talk tended to increase acceptance rates of unfair offers as compared to the condition without self-talk. This effect is significant for both other-focused and contemplation-inducing self-talk but not for self-focused self-talk. In the self-focused condition, responders were most dissatisfied with unfair offers. These findings suggest that use of self-talk can increase acceptance rates in responders, and that focusing on personal interests can undermine this effect as it negatively impacts the responders’ emotional experience. In sum, our study shows that strategic self-talk interventions can be used to affect behavior in bargaining situations.

    Additional information

    data and analysis files
  • Frost, R. L. A., Monaghan, P., & Christiansen, M. H. (2019). Mark my words: High frequency marker words impact early stages of language learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 45(10), 1883-1898. doi:10.1037/xlm0000683.

    Abstract

    High frequency words have been suggested to benefit both speech segmentation and grammatical categorization of the words around them. Despite utilizing similar information, these tasks are usually investigated separately in studies examining learning. We determined whether including high frequency words in continuous speech could support categorization when words are being segmented for the first time. We familiarized learners with continuous artificial speech comprising repetitions of target words, which were preceded by high-frequency marker words. Crucially, marker words distinguished targets into 2 distributionally defined categories. We measured learning with segmentation and categorization tests and compared performance against a control group that heard the artificial speech without these marker words (i.e., just the targets, with no cues for categorization). Participants segmented the target words from speech in both conditions, but critically when the marker words were present, they influenced acquisition of word-referent mappings in a subsequent transfer task, with participants demonstrating better early learning for mappings that were consistent (rather than inconsistent) with the distributional categories. We propose that high-frequency words may assist early grammatical categorization, while speech segmentation is still being learned.

    Additional information

    Supplemental Material
  • Fuhrmann, D., Ravignani, A., Marshall-Pescini, S., & Whiten, A. (2014). Synchrony and motor mimicking in chimpanzee observational learning. Scientific Reports, 4: 5283. doi:10.1038/srep05283.

    Abstract

    Cumulative tool-based culture underwrote our species' evolutionary success and tool-based nut-cracking is one of the strongest candidates for cultural transmission in our closest relatives, chimpanzees. However, the social learning processes that may explain both the similarities and differences between the species remain unclear. A previous study of nut-cracking by initially naïve chimpanzees suggested that a learning chimpanzee holding no hammer nevertheless replicated hammering actions it witnessed. This observation has potentially important implications for the nature of the social learning processes and underlying motor coding involved. In the present study, model and observer actions were quantified frame-by-frame and analysed with stringent statistical methods, demonstrating synchrony between the observer's and model's movements, cross-correlation of these movements above chance level and a unidirectional transmission process from model to observer. These results provide the first quantitative evidence for motor mimicking underlain by motor coding in apes, with implications for mirror neuron function.

    Additional information

    Supplementary Information
  • Furman, R., Kuntay, A., & Ozyurek, A. (2014). Early language-specificity of children's event encoding in speech and gesture: Evidence from caused motion in Turkish. Language, Cognition and Neuroscience, 29, 620-634. doi:10.1080/01690965.2013.824993.

    Abstract

    Previous research on language development shows that children are tuned early on to the language-specific semantic and syntactic encoding of events in their native language. Here we ask whether language-specificity is also evident in children's early representations in gesture accompanying speech. In a longitudinal study, we examined the spontaneous speech and cospeech gestures of eight Turkish-speaking children aged one to three and focused on their caused motion event expressions. In Turkish, unlike in English, the main semantic elements of caused motion such as Action and Path can be encoded in the verb (e.g. sok- ‘put in’) and the arguments of a verb can be easily omitted. We found that Turkish-speaking children's speech indeed displayed these language-specific features and focused on verbs to encode caused motion. More interestingly, we found that their early gestures also manifested specificity. Children used iconic cospeech gestures (from 19 months onwards) as often as pointing gestures and represented semantic elements such as Action with Figure and/or Path that reinforced or supplemented speech in language-specific ways until the age of three. In the light of previous reports on the scarcity of iconic gestures in English-speaking children's early productions, we argue that the language children learn shapes gestures and how they get integrated with speech in the first three years of life.
  • Galbiati, A., Verga, L., Giora, E., Zucconi, M., & Ferini-Strambi, L. (2019). The risk of neurodegeneration in REM sleep behavior disorder: A systematic review and meta-analysis of longitudinal studies. Sleep Medicine Reviews, 43, 37-46. doi:10.1016/j.smrv.2018.09.008.

    Abstract

    Several studies report an association between REM Sleep Behavior Disorder (RBD) and neurodegenerative diseases, in particular synucleinopathies. Interestingly, the onset of RBD precedes the development of neurodegeneration by several years. This review and meta-analysis aims to establish the rate of conversion of RBD into neurodegenerative diseases. Longitudinal studies were searched from the PubMed, Web of Science, and SCOPUS databases. Using random-effect modeling, we performed a meta-analysis on the rate of RBD conversions into neurodegeneration. Furthermore, we fitted a Kaplan-Meier analysis and compared the differences between survival curves of different diseases with log-rank tests. The risk for developing neurodegenerative diseases was 33.5% at five years of follow-up, 82.4% at 10.5 years, and 96.6% at 14 years. The average conversion rate was 31.95% after a mean duration of follow-up of 4.75 ± 2.43 years. The majority of RBD patients converted to Parkinson's Disease (43%), followed by Dementia with Lewy Bodies (25%). The estimated risk for RBD patients to develop a neurodegenerative disease over a long-term follow-up is more than 90%. Future studies should include control groups for the evaluation of REM sleep without atonia as a marker for neurodegeneration also in non-clinical populations, and should target RBD as a precursor of neurodegeneration to develop protective trials.
  • Gamba, M., Torti, V., De Gregorio, C., Raimondi, T., Miaretsoa, L., Carugati, F., Cristiano, W., Randrianarison, R. M., Bonadonna, G., Zanoli, A., Friard, O., Valente, D., Ravignani, A., & Giacoma, C. (2022). Caractéristiques rythmiques du chant de l'indri et nouvelles perspectives pour une évaluation comparative du rythme chez les primates non humains [Rhythmic characteristics of the indri's song and new perspectives for a comparative assessment of rhythm in non-human primates]. Revue de primatologie, 13. doi:10.4000/primatologie.14989.

    Abstract

    Since the discovery that rhythmic abilities are universal in humans, temporal features of vocal communication have greatly interested researchers studying animal communication. Rhythmic patterns are a valuable tool for species discrimination, mate choice, and individual recognition. A recent study showed that bird songs and human music share rhythmic categories when a signal's temporal intervals are distributed categorically rather than uniformly. Following that study, we aimed to investigate whether songs of indris (Indri indri), the only singing lemur, may show similar features. We measured the inter-onset intervals (tk), delimited by the onsets of two consecutive units, and the rhythmic ratios between these intervals (rk), calculated by dividing an interval by the sum of itself and its adjacent interval, and found a three-cluster distribution. Two clusters corresponded to rhythmic categories at 1:1 and 1:2, and the third approached a 2:1 ratio. Our results demonstrated for the first time that another primate besides humans produces categorical rhythms, an ability likely evolved convergently among singing species such as songbirds, indris, and humans. Understanding which communicative features are shared with other species is fundamental to understanding how they have evolved. In this regard, thanks to the simplicity of data processing and interpretation, our study relied on an accessible analytical approach that could open up new branches of investigation into primate communication, leading the way to reconstruct a phylogeny of rhythm abilities across the entire order.
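    The ratio measure described in this abstract has a compact form: each rhythmic ratio is r_k = t_k / (t_k + t_{k+1}), where the t_k are inter-onset intervals. The following minimal Python sketch, using hypothetical onset times rather than data from the study, illustrates the computation; values near 0.5 correspond to a 1:1 rhythm, while values near 1/3 or 2/3 correspond to 1:2 or 2:1 rhythms.

      import numpy as np

      # Hypothetical onset times (in seconds) of consecutive song units, for illustration only.
      onsets = np.array([0.00, 0.40, 0.80, 1.60, 2.00, 2.80])

      # Inter-onset intervals t_k: time between the onsets of consecutive units.
      t = np.diff(onsets)

      # Rhythmic ratios r_k = t_k / (t_k + t_{k+1}).
      r = t[:-1] / (t[:-1] + t[1:])

      print(t)  # [0.4 0.4 0.8 0.4 0.8]
      print(r)  # [0.5 0.33333333 0.66666667 0.33333333]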
  • Ganushchak, L., Konopka, A. E., & Chen, Y. (2014). What the eyes say about planning of focused referents during sentence formulation: a cross-linguistic investigation. Frontiers in Psychology, 5: 1124. doi:10.3389/fpsyg.2014.01124.

    Abstract

    This study investigated how sentence formulation is influenced by a preceding discourse context. In two eye-tracking experiments, participants described pictures of two-character transitive events in Dutch (Experiment 1) and Chinese (Experiment 2). Focus was manipulated by presenting questions before each picture. In the Neutral condition, participants first heard ‘What is happening here?’ In the Object or Subject Focus conditions, the questions asked about the Object or Subject character (What is the policeman stopping? Who is stopping the truck?). The target response was the same in all conditions (The policeman is stopping the truck). In both experiments, sentence formulation in the Neutral condition showed the expected pattern of speakers fixating the subject character (policeman) before the object character (truck). In contrast, in the focus conditions speakers rapidly directed their gaze preferentially only to the character they needed to encode to answer the question (the new, or focused, character). The timing of gaze shifts to the new character varied by language group (Dutch vs. Chinese): shifts to the new character occurred earlier when information in the question could be repeated in the response with the same syntactic structure (in Chinese but not in Dutch). The results show that discourse affects the timecourse of linguistic formulation in simple sentences and that these effects can be modulated by language-specific linguistic structures such as parallels in the syntax of questions and declarative sentences.
  • Ganushchak, L. Y., & Acheson, D. J. (Eds.). (2014). What's to be learned from speaking aloud? - Advances in the neurophysiological measurement of overt language production. [Research topic] [Special Issue]. Frontiers in Language Sciences. Retrieved from http://www.frontiersin.org/Language_Sciences/researchtopics/What_s_to_be_Learned_from_Spea/1671.

    Abstract

    Researchers have long avoided neurophysiological experiments on overt speech production due to the suspicion that artifacts caused by muscle activity may lead to a poor signal-to-noise ratio in the measurements. However, the need to actually produce speech may influence earlier processing and qualitatively change speech production processes and what we can infer from neurophysiological measures thereof. Recently, however, overt speech has been successfully investigated using EEG, MEG, and fMRI. The aim of this Research Topic is to draw together recent research on the neurophysiological basis of language production, with the aim of developing and extending theoretical accounts of the language production process. In this Research Topic of Frontiers in Language Sciences, we invite both experimental and review papers, as well as those about the latest methods in acquisition and analysis of overt language production data. All aspects of language production are welcome: i.e., from conceptualization to articulation during native as well as multilingual language production. Focus should be placed on using the neurophysiological data to inform questions about the processing stages of language production. In addition, emphasis should be placed on the extent to which the identified components of the electrophysiological signal (e.g., ERP/ERF, neuronal oscillations, etc.), brain areas or networks are related to language comprehension and other cognitive domains. By bringing together electrophysiological and neuroimaging evidence on language production mechanisms, a more complete picture of the locus of language production processes and their temporal and neurophysiological signatures will emerge.
  • Gao, Y., Meng, X., Bai, Z., Liu, X., Zhang, M., Li, H., Ding, G., Liu, L., & Booth, J. R. (2022). Left and right arcuate fasciculi are uniquely related to word reading skills in Chinese-English bilingual children. Neurobiology of Language, 3(1), 109-131. doi:10.1162/nol_a_00051.

    Abstract

    Whether reading in different writing systems recruits language-unique or language-universal neural processes is a long-standing debate. Many studies have shown the left arcuate fasciculus (AF) to be involved in phonological and reading processes. In contrast, little is known about the role of the right AF in reading, but some have suggested that it may play a role in visual spatial aspects of reading or the prosodic components of language. The right AF may be more important for reading in Chinese due to its logographic and tonal properties, but this hypothesis has yet to be tested. We recruited a group of Chinese-English bilingual children (8.2 to 12.0 years old) to explore the common and unique relation of reading skill in English and Chinese to fractional anisotropy (FA) in the bilateral AF. We found that both English and Chinese reading skills were positively correlated with FA in the rostral part of the left AF-direct segment. Additionally, English reading skill was positively correlated with FA in the caudal part of the left AF-direct segment, which was also positively correlated with phonological awareness. In contrast, Chinese reading skill was positively correlated with FA in certain segments of the right AF, which was positively correlated with visual spatial ability, but not tone discrimination ability. Our results suggest that there are language universal substrates of reading across languages, but that certain left AF nodes support phonological mechanisms important for reading in English, whereas certain right AF nodes support visual spatial mechanisms important for reading in Chinese.

    Additional information

    supplementary materials
  • Gao, Y., Zheng, L., Liu, X., Nichols, E. S., Zhang, M., Shang, L., Ding, G., Meng, Z., & Liu, L. (2019). First and second language reading difficulty among Chinese–English bilingual children: The prevalence and influences from demographic characteristics. Frontiers in Psychology, 10: 2544. doi:10.3389/fpsyg.2019.02544.

    Abstract

    Learning to read a second language (L2) can pose a great challenge for children who have already been struggling to read in their first language (L1). Moreover, it is not clear whether, to what extent, and under what circumstances L1 reading difficulty increases the risk of L2 reading difficulty. This study investigated Chinese (L1) and English (L2) reading skills in a large representative sample of 1,824 Chinese–English bilingual children in Grades 4 and 5 from both urban and rural schools in Beijing. We examined the prevalence of reading difficulty in Chinese only (poor Chinese readers, PC), English only (poor English readers, PE), and both Chinese and English (poor bilingual readers, PB) and calculated the co-occurrence, that is, the chances of becoming a poor reader in English given that the child was already a poor reader in Chinese. We then conducted a multinomial logistic regression analysis and compared the prevalence of PC, PE, and PB between children in Grade 4 versus Grade 5, in urban versus rural areas, and in boys versus girls. Results showed that compared to girls, boys demonstrated significantly higher risk of PC, PE, and PB. Meanwhile, compared to the 5th graders, the 4th graders demonstrated significantly higher risk of PC and PB. In addition, children enrolled in the urban schools were more likely to become better second language readers, thus leading to a concerning rural–urban gap in the prevalence of L2 reading difficulty. Finally, among these Chinese–English bilingual children, regardless of sex and school location, poor reading skill in Chinese significantly increased the risk of also being a poor English reader, with a considerable and stable co-occurrence of approximately 36%. In sum, this study suggests that despite striking differences between alphabetic and logographic writing systems, L1 reading difficulty still significantly increases the risk of L2 reading difficulty. This indicates the shared meta-linguistic skills in reading different writing systems and the importance of understanding the universality and the interdependent relationship of reading between different writing systems. Furthermore, the male disadvantage (in both L1 and L2) and the urban–rural gap (in L2) found in the prevalence of reading difficulty call for special attention to disadvantaged populations in educational practice.
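    The co-occurrence reported in this abstract is a conditional probability: the proportion of poor Chinese (L1) readers who are also poor English (L2) readers. A minimal sketch of that calculation, using made-up counts chosen only to illustrate the ~36% figure (not the study's data):

      # Hypothetical counts, for illustration only (the study reports a co-occurrence of ~36%).
      poor_l1_and_poor_l2 = 90    # poor readers in both Chinese (L1) and English (L2)
      poor_l1_only = 160          # poor readers in Chinese but not in English

      # Co-occurrence = P(poor L2 reader | poor L1 reader)
      co_occurrence = poor_l1_and_poor_l2 / (poor_l1_and_poor_l2 + poor_l1_only)
      print(f"co-occurrence = {co_occurrence:.0%}")  # co-occurrence = 36%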
  • Gao, X., Dera, J., Nijhoff, A. D., & Willems, R. M. (2019). Is less readable liked better? The case of font readability in poetry appreciation. PLoS One, 14(12): e0225757. doi:10.1371/journal.pone.0225757.

    Abstract

    Previous research shows conflicting findings for the effect of font readability on comprehension and memory for language. It has been found that, perhaps counterintuitively, a hard-to-read font can be beneficial for language comprehension, especially for difficult language. Here we test how font readability influences the subjective experience of poetry reading. In three experiments we tested the influence of poem difficulty and font readability on the subjective experience of poems. We specifically predicted that font readability would have opposite effects on the subjective experience of easy versus difficult poems. Participants read poems which could be more or less difficult in terms of conceptual or structural aspects, and which were presented in a font that was either easy or more difficult to read. Participants read existing poems and subsequently rated their subjective experience (measured through four dependent variables: overall liking, perceived flow of the poem, perceived topic clarity, and perceived structure). In line with previous literature we observed a Poem Difficulty x Font Readability interaction effect for subjective measures of poetry reading. We found that participants rated easy poems as nicer when presented in an easy-to-read font, as compared to when presented in a hard-to-read font. Despite the presence of the interaction effect, we did not observe the predicted opposite effect for more difficult poems. We conclude that font readability can influence reading of easy and more difficult poems differentially, with the strongest effects for easy poems.

    Additional information

    https://osf.io/jwcqt/
  • Gao, X., & Jiang, T. (2018). Sensory constraints on perceptual simulation during sentence reading. Journal of Experimental Psychology: Human Perception and Performance, 44(6), 848-855. doi:10.1037/xhp0000475.

    Abstract

    Resource-constrained models of language processing predict that perceptual simulation during language understanding would be compromised by sensory limitations (such as reading text in unfamiliar/difficult font), whereas strong versions of embodied theories of language would predict that simulating perceptual symbols in language would not be impaired even under sensory-constrained situations. In 2 experiments, sensory decoding difficulty was manipulated by using easy and hard fonts to study perceptual simulation during sentence reading (Zwaan, Stanfield, & Yaxley, 2002). Results indicated that simulating perceptual symbols in language was not compromised by surface-form decoding challenges such as difficult font, suggesting relative resilience of embodied language processing in the face of certain sensory constraints. Further implications for learning from text and individual differences in language processing will be discussed.
  • Garcia, R., Dery, J. E., Roeser, J., & Höhle, B. (2018). Word order preferences of Tagalog-speaking adults and children. First Language, 38(6), 617-640. doi:10.1177/0142723718790317.

    Abstract

    This article investigates the word order preferences of Tagalog-speaking adults and five- and seven-year-old children. The participants were asked to complete sentences to describe pictures depicting actions between two animate entities. Adults preferred agent-initial constructions in the patient voice but not in the agent voice, while the children produced mainly agent-initial constructions regardless of voice. This agent-initial preference, despite the lack of a close link between the agent and the subject in Tagalog, shows that this word order preference is not merely syntactically-driven (subject-initial preference). Additionally, the children’s agent-initial preference in the agent voice, contrary to the adults’ lack of preference, shows that children do not respect the subject-last principle of ordering Tagalog full noun phrases. These results suggest that language-specific optional features like a subject-last principle take longer to be acquired.
  • Garcia, R., Roeser, J., & Kidd, E. (2022). Online data collection to address language sampling bias: Lessons from the COVID-19 pandemic. Linguistics Vanguard. Advance online publication. doi:10.1515/lingvan-2021-0040.

    Abstract

    The COVID-19 pandemic has massively limited how linguists can collect data, and out of necessity, researchers across several disciplines have moved data collection online. Here we argue that the rising popularity of remote web-based experiments also provides an opportunity for widening the context of linguistic research by facilitating data collection from understudied populations. We discuss collecting production data from adult native speakers of Tagalog using an unsupervised web-based experiment. Compared to equivalent lab experiments, data collection went quicker, and the sample was more diverse, without compromising data quality. However, there were also technical and human issues that came with this method. We discuss these challenges and provide suggestions on how to overcome them.
  • Garcia, R., & Kidd, E. (2022). Acquiring verb-argument structure in Tagalog: A multivariate corpus analysis of caregiver and child speech. Linguistics, 60(6), 1855-1906. doi:10.1515/ling-2021-0107.

    Abstract

    Western Austronesian languages have typologically rare but theoretically important voice systems that raise many questions about their learnability. While these languages have been featured prominently in the descriptive and typological literature, data on acquisition is sparse. In the current paper, we report on a variationist analysis of Tagalog child-directed speech using a newly collected corpus of caregiver-child interaction. We determined the constraints that condition voice use, voice selection, argument position, and thematic role assignment, thus providing the first quantitative analysis of verb argument structure variation in the language. We also examined whether children are sensitive to the constraints on variability. Our analyses showed that, despite the diversity of structures that children have to learn under Tagalog’s voice system, there are unique factors that strongly predict the speakers’ choice between the voice and word order alternations, with children’s choices related to structure alternations being similar to what is available in their input. The results thus suggest that input distributions provide many cues to the acquisition of the Tagalog voice system, making it eminently learnable despite its apparent complexity.
  • Garcia, R., Roeser, J., & Höhle, B. (2019). Thematic role assignment in the L1 acquisition of Tagalog: Use of word order and morphosyntactic markers. Language Acquisition, 26(3), 235-261. doi:10.1080/10489223.2018.1525613.

    Abstract

    It is a common finding across languages that young children have problems in understanding patient-initial sentences. We used Tagalog, a verb-initial language with a reliable voice-marking system and highly frequent patient voice constructions, to test the predictions of several accounts that have been proposed to explain this difficulty: the frequency account, the Competition Model, and the incremental processing account. Study 1 presents an analysis of Tagalog child-directed speech, which showed that the dominant argument order is agent-before-patient and that morphosyntactic markers are highly valid cues to thematic role assignment. In Study 2, we used a combined self-paced listening and picture verification task to test how Tagalog-speaking adults and 5- and 7-year-old children process reversible transitive sentences. Results showed that adults performed well in all conditions, while children’s accuracy and listening times for the first noun phrase indicated more difficulty in interpreting patient-initial sentences in the agent voice compared to the patient voice. The patient voice advantage is partly explained by both the frequency account and incremental processing account.
  • Gaskell, M. G., Warker, J., Lindsay, S., Frost, R. L. A., Guest, J., Snowdon, R., & Stackhouse, A. (2014). Sleep Underpins the Plasticity of Language Production. Psychological Science, 25(7), 1457-1465. doi:10.1177/0956797614535937.

    Abstract

    The constraints that govern acceptable phoneme combinations in speech perception and production have considerable plasticity. We addressed whether sleep influences the acquisition of new constraints and their integration into the speech-production system. Participants repeated sequences of syllables in which two phonemes were artificially restricted to syllable onset or syllable coda, depending on the vowel in that sequence. After 48 sequences, participants either had a 90-min nap or remained awake. Participants then repeated 96 sequences so implicit constraint learning could be examined, and then were tested for constraint generalization in a forced-choice task. The sleep group, but not the wake group, produced speech errors at test that were consistent with restrictions on the placement of phonemes in training. Furthermore, only the sleep group generalized their learning to new materials. Polysomnography data showed that implicit constraint learning was associated with slow-wave sleep. These results show that sleep facilitates the integration of new linguistic knowledge with existing production constraints. These data have relevance for systems-consolidation models of sleep.

    Additional information

    https://osf.io/zqg9y/
  • Gasparini, L., Tsuji, S., & Bergmann, C. (2022). Ten easy steps to conducting transparent, reproducible meta‐analyses for infant researchers. Infancy, 27(4), 736-764. doi:10.1111/infa.12470.

    Abstract

    Meta-analyses provide researchers with an overview of the body of evidence in a topic, with quantified estimates of effect sizes and the role of moderators, and weighting studies according to their precision. We provide a guide for conducting a transparent and reproducible meta-analysis in the field of developmental psychology within the framework of the MetaLab platform, in 10 steps: (1) Choose a topic for your meta-analysis, (2) Formulate your research question and specify inclusion criteria, (3) Preregister and document all stages of your meta-analysis, (4) Conduct the literature search, (5) Collect and screen records, (6) Extract data from eligible studies, (7) Read the data into analysis software and compute effect sizes, (8) Visualize your data, (9) Create meta-analytic models to assess the strength of the effect and investigate possible moderators, (10) Write up and promote your meta-analysis. Meta-analyses can inform future studies, through power calculations, by identifying robust methods and exposing research gaps. By adding a new meta-analysis to MetaLab, datasets across multiple topics of developmental psychology can be synthesized, and the dataset can be maintained as a living, community-augmented meta-analysis to which researchers add new data, allowing for a cumulative approach to evidence synthesis.
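    Steps 7 and 9 of the workflow above (computing effect sizes and fitting a meta-analytic model) come down to precision-weighted averaging of per-study effect sizes; in practice this is done with dedicated tools (e.g., R packages such as metafor), and random-effects models additionally estimate between-study variance. The sketch below is only an illustrative fixed-effect computation on hypothetical effect sizes, not the MetaLab pipeline.

      import numpy as np

      # Hypothetical per-study effect sizes (e.g., Cohen's d) and their sampling variances.
      effects = np.array([0.35, 0.10, 0.48, 0.22])
      variances = np.array([0.02, 0.05, 0.01, 0.03])

      # Inverse-variance weights: more precise studies contribute more to the pooled estimate.
      weights = 1.0 / variances
      pooled = np.sum(weights * effects) / np.sum(weights)
      pooled_se = np.sqrt(1.0 / np.sum(weights))

      # A random-effects model would add an estimated between-study variance (tau^2)
      # to each study's variance before computing the weights.
      low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
      print(f"pooled effect = {pooled:.2f}, 95% CI = [{low:.2f}, {high:.2f}]")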
  • Gehrig, J., Michalareas, G., Forster, M.-T., Lei, J., Hok, P., Laufs, H., Senft, C., Seifert, V., Schoffelen, J.-M., Hanslmayr, H., & Kell, C. A. (2019). Low-frequency oscillations code speech during verbal working memory. The Journal of Neuroscience, 39(33), 6498-6512. doi:10.1523/JNEUROSCI.0018-19.2019.

    Abstract

    The way the human brain represents speech in memory is still unknown. An obvious characteristic of speech is its evolvement over time. During speech processing, neural oscillations are modulated by the temporal properties of the acoustic speech signal, but also acquired knowledge on the temporal structure of language influences speech perception-related brain activity. This suggests that speech could be represented in the temporal domain, a form of representation that the brain also uses to encode autobiographic memories. Empirical evidence for such a memory code is lacking. We investigated the nature of speech memory representations using direct cortical recordings in the left perisylvian cortex during delayed sentence reproduction in female and male patients undergoing awake tumor surgery. Our results reveal that the brain endogenously represents speech in the temporal domain. Temporal pattern similarity analyses revealed that the phase of frontotemporal low-frequency oscillations, primarily in the beta range, represents sentence identity in working memory. The positive relationship between beta power during working memory and task performance suggests that working memory representations benefit from increased phase separation.
  • Genon, S., & Forkel, S. J. (2022). How do different parts of brain white matter develop after birth in humans? Neuron, 110(23), 3860-3863. doi:10.1016/j.neuron.2022.11.011.

    Abstract

    Understanding human white matter development is vital to characterize typical brain organization and developmental neurocognitive disorders. In this issue of Neuron, Nazeri and colleagues [1] identify different parts of white matter in the neonatal brain and show their maturational trajectories in line with microstructural feature development.
  • Gerrits, F., Senft, G., & Wisse, D. (2018). Bomiyoyeva and bomduvadoya: Two rare structures on the Trobriand Islands exclusively reserved for Tabalu chiefs. Anthropos, 113, 93-113. doi:10.5771/0257-9774-2018-1-93.

    Abstract

    This article presents information about two so far undescribed buildings made by the Trobriand Islanders, the bomiyoyeva and the bomduvadova. These structures are connected to the highest-ranking chiefs living in Labai and Omarakana on Kiriwina Island. They highlight the power and eminence of these chiefs. After a brief report on the history of this project, the structure of the two houses, their function, and their use are described, and information on their construction and their mythical background is provided. Finally, everyday as well as ritual, social, and political functions of both buildings are discussed. [Melanesia, Trobriand Islands, Tabalu chiefs, yams houses, bomiyoyeva, bomduvadova, authoritative capacities]

    Additional information

    link to journal
  • Gialluisi, A., Andlauer, T. F. M., Mirza-Schreiber, N., Moll, K., Becker, J., Hoffmann, P., Ludwig, K. U., Czamara, D., St Pourcain, B., Brandler, W., Honbolygó, F., Tóth, D., Csépe, V., Huguet, G., Morris, A. P., Hulslander, J., Willcutt, E. G., DeFries, J. C., Olson, R. K., Smith, S. D., Pennington, B. F., Vaessen, A., Maurer, U., Lyytinen, H., Peyrard-Janvid, M., Leppänen, P. H. T., Brandeis, D., Bonte, M., Stein, J. F., Talcott, J. B., Fauchereau, F., Wilcke, A., Francks, C., Bourgeron, T., Monaco, A. P., Ramus, F., Landerl, K., Kere, J., Scerri, T. S., Paracchini, S., Fisher, S. E., Schumacher, J., Nöthen, M. M., Müller-Myhsok, B., & Schulte-Körne, G. (2019). Genome-wide association scan identifies new variants associated with a cognitive predictor of dyslexia. Translational Psychiatry, 9(1): 77. doi:10.1038/s41398-019-0402-0.

    Abstract

    Developmental dyslexia (DD) is one of the most prevalent learning disorders, with high impact on school and psychosocial development and high comorbidity with conditions like attention-deficit hyperactivity disorder (ADHD), depression, and anxiety. DD is characterized by deficits in different cognitive skills, including word reading, spelling, rapid naming, and phonology. To investigate the genetic basis of DD, we conducted a genome-wide association study (GWAS) of these skills within one of the largest studies available, including nine cohorts of reading-impaired and typically developing children of European ancestry (N = 2562–3468). We observed a genome-wide significant effect (p < 1 × 10⁻⁸) on rapid automatized naming of letters (RANlet) for variants on 18q12.2, within MIR924HG (micro-RNA 924 host gene; rs17663182, p = 4.73 × 10⁻⁹), and a suggestive association on 8q12.3 within NKAIN3 (encoding a cation transporter; rs16928927, p = 2.25 × 10⁻⁸). rs17663182 (18q12.2) also showed genome-wide significant multivariate associations with RAN measures (p = 1.15 × 10⁻⁸) and with all the cognitive traits tested (p = 3.07 × 10⁻⁸), suggesting (relational) pleiotropic effects of this variant. A polygenic risk score (PRS) analysis revealed significant genetic overlaps of some of the DD-related traits with educational attainment (EDUyears) and ADHD. Reading and spelling abilities were positively associated with EDUyears (p ~ [10⁻⁵–10⁻⁷]) and negatively associated with ADHD PRS (p ~ [10⁻⁸–10⁻¹⁷]). This corroborates a long-standing hypothesis on the partly shared genetic etiology of DD and ADHD, at the genome-wide level. Our findings suggest new candidate DD susceptibility genes and provide new insights into the genetics of dyslexia and its comorbidities.
  • Gialluisi, A., Newbury, D. F., Wilcutt, E. G., Olson, R. K., DeFries, J. C., Brandler, W. M., Pennington, B. F., Smith, S. D., Scerri, T. S., Simpson, N. H., The SLI Consortium, Luciano, M., Evans, D. M., Bates, T. C., Stein, J. F., Talcott, J. B., Monaco, A. P., Paracchini, S., Francks, C., & Fisher, S. E. (2014). Genome-wide screening for DNA variants associated with reading and language traits. Genes, Brain and Behavior, 13, 686-701. doi:10.1111/gbb.12158.

    Abstract

    Reading and language abilities are heritable traits that are likely to share some genetic influences with each other. To identify pleiotropic genetic variants affecting these traits, we first performed a Genome-wide Association Scan (GWAS) meta-analysis using three richly characterised datasets comprising individuals with histories of reading or language problems, and their siblings. GWAS was performed in a total of 1862 participants using the first principal component computed from several quantitative measures of reading- and language-related abilities, both before and after adjustment for performance IQ. We identified novel suggestive associations at the SNPs rs59197085 and rs5995177 (uncorrected p ≈ 10⁻⁷ for each SNP), located respectively at the CCDC136/FLNC and RBFOX2 genes. Each of these SNPs then showed evidence for effects across multiple reading and language traits in univariate association testing against the individual traits. FLNC encodes a structural protein involved in cytoskeleton remodelling, while RBFOX2 is an important regulator of alternative splicing in neurons. The CCDC136/FLNC locus showed association with a comparable reading/language measure in an independent sample of 6434 participants from the general population, although involving distinct alleles of the associated SNP. Our datasets will form an important part of on-going international efforts to identify genes contributing to reading and language skills.
  • Gialluisi, A., Pippucci, T., & Romeo, G. (2014). Reply to ten Kate et al. European Journal of Human Genetics, 22, 157-158. doi:10.1038/ejhg.2013.153.
  • Giglio, L., Ostarek, M., Weber, K., & Hagoort, P. (2022). Commonalities and asymmetries in the neurobiological infrastructure for language production and comprehension. Cerebral Cortex, 32(7), 1405-1418. doi:10.1093/cercor/bhab287.

    Abstract

    The neurobiology of sentence production has been largely understudied compared to the neurobiology of sentence comprehension, due to difficulties with experimental control and motion-related artifacts in neuroimaging. We studied the neural response to constituents of increasing size and specifically focused on the similarities and differences in the production and comprehension of the same stimuli. Participants had to either produce or listen to stimuli in a gradient of constituent size based on a visual prompt. Larger constituent sizes engaged the left inferior frontal gyrus (LIFG) and middle temporal gyrus (LMTG) extending to inferior parietal areas in both production and comprehension, confirming that the neural resources for syntactic encoding and decoding are largely overlapping. An ROI analysis in LIFG and LMTG also showed that production elicited larger responses to constituent size than comprehension and that the LMTG was more engaged in comprehension than production, while the LIFG was more engaged in production than comprehension. Finally, increasing constituent size was characterized by later BOLD peaks in comprehension but earlier peaks in production. These results show that syntactic encoding and parsing engage overlapping areas, but there are asymmetries in the engagement of the language network due to the specific requirements of production and comprehension.

    Additional information

    supplementary material
  • Gisladottir, R. S., Bögels, S., & Levinson, S. C. (2018). Oscillatory brain responses reflect anticipation during comprehension of speech acts in spoken dialogue. Frontiers in Human Neuroscience, 12: 34. doi:10.3389/fnhum.2018.00034.

    Abstract

    Everyday conversation requires listeners to quickly recognize verbal actions, so-called speech acts, from the underspecified linguistic code and prepare a relevant response within the tight time constraints of turn-taking. The goal of this study was to determine the time-course of speech act recognition by investigating oscillatory EEG activity during comprehension of spoken dialogue. Participants listened to short, spoken dialogues with target utterances that delivered three distinct speech acts (Answers, Declinations, Pre-offers). The targets were identical across conditions at lexico-syntactic and phonetic/prosodic levels but differed in the pragmatic interpretation of the speech act performed. Speech act comprehension was associated with reduced power in the alpha/beta bands just prior to Declination speech acts, relative to Answers and Pre-offers. In addition, we observed reduced power in the theta band during the beginning of Declinations, relative to Answers. Based on the role of alpha and beta desynchronization in anticipatory processes, the results are taken to indicate that anticipation plays a role in speech act recognition. Anticipation of speech acts could be critical for efficient turn-taking, allowing interactants to quickly recognize speech acts and respond within the tight time frame characteristic of conversation. The results show that anticipatory processes can be triggered by the characteristics of the interaction, including the speech act type.

    Additional information

    data sheet 1.pdf
  • Glock, P., Raum, B., Heermann, T., Kretschmer, S., Schweizer, J., Mücksch, J., Alagöz, G., & Schwille, P. (2019). Stationary patterns in a two-protein reaction-diffusion system. ACS Synthetic Biology, 8(1), 148-157. doi:10.1021/acssynbio.8b00415.

    Abstract

    Patterns formed by reaction-diffusion mechanisms are crucial for the development or sustenance of most organisms in nature. Patterns include dynamic waves, but are more often found as static distributions, such as animal skin patterns. Yet, a simplistic biological model system to reproduce and quantitatively investigate static reaction-diffusion patterns has been missing so far. Here, we demonstrate that the Escherichia coli Min system, known for its oscillatory behavior between the cell poles, is under certain conditions capable of transitioning to quasi-stationary protein distributions on membranes closely resembling Turing patterns. We systematically titrated both proteins, MinD and MinE, and found that removing all purification tags and linkers from the N-terminus of MinE was critical for static patterns to occur. At small bulk heights, dynamic patterns dominate, such as in rod-shaped microcompartments. We see implications of this work for studying pattern formation in general, but also for creating artificial gradients as downstream cues in synthetic biology applications.
  • Goldrick, M., McClain, R., Cibelli, E., Adi, Y., Gustafson, E., Moers, C., & Keshet, J. (2019). The influence of lexical selection disruptions on articulation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 45(6), 1107-1141. doi:10.1037/xlm0000633.

    Abstract

    Interactive models of language production predict that it should be possible to observe long-distance interactions; effects that arise at one level of processing influence multiple subsequent stages of representation and processing. We examine the hypothesis that disruptions arising in nonform-based levels of planning—specifically, lexical selection—should modulate articulatory processing. A novel automatic phonetic analysis method was used to examine productions in a paradigm yielding both general disruptions to formulation processes and, more specifically, overt errors during lexical selection. This analysis method allowed us to examine articulatory disruptions at multiple levels of analysis, from whole words to individual segments. Baseline performance by young adults was contrasted with young speakers’ performance under time pressure (which previous work has argued increases interaction between planning and articulation) and performance by older adults (who may have difficulties inhibiting nontarget representations, leading to heightened interactive effects). The results revealed the presence of interactive effects. Our new analysis techniques revealed these effects were strongest in initial portions of responses, suggesting that speech is initiated as soon as the first segment has been planned. Interactive effects did not increase under response pressure, suggesting interaction between planning and articulation is relatively fixed. Unexpectedly, lexical selection disruptions appeared to yield some degree of facilitation in articulatory processing (possibly reflecting semantic facilitation of target retrieval) and older adults showed weaker, not stronger interactive effects (possibly reflecting weakened connections between lexical and form-level representations).
  • Goncharova, M. V., & Klenova, A. V. (2019). Siberian crane chick calls reflect their thermal state. Bioacoustics, 28, 115-128. doi:10.1080/09524622.2017.1399827.

    Abstract

    Chicks can convey information about their needs with calls. But it is still unknown if there are any universal need indicators in chick vocalizations. Previous studies have shown that in some species vocal activity and/or temporal-frequency variables of calls are related to the chick state, whereas other studies did not confirm it. Here, we tested experimentally whether vocal activity and temporal-frequency variables of calls change with cooling. We studied 10 human-raised Siberian crane (Grus leucogeranus) chicks at 3–15 days of age. We found that the cooled chicks produced calls higher in fundamental frequency and power variables, longer in duration and at a higher calling rate than the control chicks. However, we did not find significant changes in the level of entropy and occurrence of non-linear phenomena in chick calls recorded during the experimental cooling. We suggest that the level of vocal activity is a universal indicator of need for warmth in precocial and semi-precocial birds (e.g. cranes), but not in altricial ones. We also assume that coding of needs via temporal-frequency variables of calls is typical in species whose adults could not confuse their chicks with other chicks. Siberian cranes stay on separate territories during their breeding season, so parents do not need to check the individuality of their offspring in the home area. In this case, all call characteristics are available for other purposes and serve to communicate chicks’ vital needs.
  • Gonzalez Gomez, N., Hayashi, A., Tsuji, S., Mazuka, R., & Nazzi, T. (2014). The role of the input on the development of the LC bias: A crosslinguistic comparison. Cognition, 132(3), 301-311. doi:10.1016/j.cognition.2014.04.004.

    Abstract

    Previous studies have described the existence of a phonotactic bias called the Labial–Coronal (LC) bias, corresponding to a tendency to produce more words beginning with a labial consonant followed by a coronal consonant (i.e. “bat”) than the opposite CL pattern (i.e. “tap”). This bias has initially been interpreted in terms of articulatory constraints of the human speech production system. However, more recently, it has been suggested that this presumably language-general LC bias in production might be accompanied by LC and CL biases in perception, acquired in infancy on the basis of the properties of the linguistic input. The present study investigates the origins of these perceptual biases, testing infants learning Japanese, a language that has been claimed to possess more CL than LC sequences, and comparing them with infants learning French, a language showing a clear LC bias in its lexicon. First, a corpus analysis of Japanese IDS and ADS revealed the existence of an overall LC bias, except for plosive sequences in ADS, which show a CL bias across counts. Second, speech preference experiments showed a perceptual preference for CL over LC plosive sequences (all recorded by a Japanese speaker) in 13- but not in 7- and 10-month-old Japanese-learning infants (Experiment 1), while revealing the emergence of an LC preference between 7 and 10 months in French-learning infants, using the exact same stimuli. These crosslinguistic behavioral differences, obtained with the same stimuli, thus reflect differences in processing in two populations of infants, which can be linked to differences in the properties of the lexicons of their respective native languages. These findings establish that the emergence of a CL/LC bias is related to exposure to a linguistic input.
  • Goodhew, S. C., Reynolds, K., Edwards, M., & Kidd, E. (2022). The content of gender stereotypes embedded in language use. Journal of Language and Social Psychology, 41(2), 219-231. doi:10.1177/0261927X211033930.

    Abstract

    Gender stereotypes have endured despite substantial change in gender roles. Previous work has assessed how gender stereotypes affect language production in particular interactional contexts. Here, we assessed communication biases where context was less specified: written texts to diffuse audiences. We used Latent Semantic Analysis (LSA) to computationally quantify the similarity in meaning between gendered names and stereotype-linked terms in these communications. This revealed that female names were more similar in meaning to the proscriptive (undesirable) masculine terms, such as emotional.
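    In LSA, "similarity in meaning" is typically quantified as the cosine similarity between word vectors in the reduced semantic space. The sketch below uses hypothetical, hand-picked low-dimensional vectors (not the trained LSA space from the study) purely to illustrate the measure.

      import numpy as np

      def cosine_similarity(a, b):
          """Cosine of the angle between two word vectors (closer to 1 = more similar meaning)."""
          return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

      # Hypothetical low-dimensional vectors, for illustration only.
      vectors = {
          "female_name": np.array([0.8, 0.1, 0.3]),
          "male_name":   np.array([0.2, 0.9, 0.4]),
          "emotional":   np.array([0.7, 0.2, 0.2]),
      }

      print(cosine_similarity(vectors["female_name"], vectors["emotional"]))  # ~0.99
      print(cosine_similarity(vectors["male_name"], vectors["emotional"]))    # ~0.53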
  • Goodhew, S. C., McGaw, B., & Kidd, E. (2014). Why is the sunny side always up? Explaining the spatial mapping of concepts by language use. Psychonomic Bulletin & Review, 21(5), 1287-1293. doi:10.3758/s13423-014-0593-6.

    Abstract

    Humans appear to rely on spatial mappings to represent and describe concepts. The conceptual cuing effect describes the tendency for participants to orient attention to a spatial location following the presentation of an unrelated cue word (e.g., orienting attention upward after reading the word sky). To date, such effects have predominately been explained within the embodied cognition framework, according to which people’s attention is oriented on the basis of prior experience (e.g., sky → up via perceptual simulation). However, this does not provide a compelling explanation for how abstract words have the same ability to orient attention. Why, for example, does dream also orient attention upward? We report on an experiment that investigated the role of language use (specifically, collocation between concept words and spatial words for up and down dimensions) and found that it predicted the cuing effect. The results suggest that language usage patterns may be instrumental in explaining conceptual cuing.
  • Gordon, J. K., & Clough, S. (2022). How do clinicians judge fluency in aphasia? Journal of Speech, Language, and Hearing Research, 65(4), 1521-1542. doi:10.1044/2021_JSLHR-21-00484.

    Abstract

    Purpose: Aphasia fluency is multiply determined by underlying impairments in lexical retrieval, grammatical formulation, and speech production. This poses challenges for establishing a reliable and feasible tool to measure fluency in the clinic. We examine the reliability and validity of perceptual ratings and clinical perspectives on the utility and relevance of methods used to assess fluency.
    Method: In an online survey, 112 speech-language pathologists rated spontaneous speech samples from 181 people with aphasia (PwA) on eight perceptual rating scales (overall fluency, speech rate, pausing, effort, melody, phrase length, grammaticality, and lexical retrieval) and answered questions about their current practices for assessing fluency in the clinic.
    Results: Interrater reliability for the eight perceptual rating scales ranged from fair to good. The most reliable scales were speech rate, pausing, and phrase length. Similarly, clinicians' perceived fluency ratings were most strongly correlated to objective measures of speech rate and utterance length but were also related to grammatical complexity, lexical diversity, and phonological errors. Clinicians' ratings reflected expected aphasia subtype patterns: Individuals with Broca's and transcortical motor aphasia were rated below average on fluency, whereas those with anomic, conduction, and Wernicke's aphasia were rated above average. Most respondents reported using multiple methods in the clinic to measure fluency but relying most frequently on subjective judgments.
    Conclusions: This study lends support for the use of perceptual rating scales as valid assessments of speech-language production but highlights the need for a more reliable method for clinical use. We describe next steps for developing such a tool that is clinically feasible and helps to identify the underlying deficits disrupting fluency to inform treatment targets.
  • Gori, M., Vercillo, T., Sandini, G., & Burr, D. (2014). Tactile feedback improves auditory spatial localization. Frontiers in Psychology, 5: 1121. doi:10.3389/fpsyg.2014.01121.

    Abstract

    Our recent studies suggest that congenitally blind adults have severely impaired thresholds in an auditory spatial bisection task, pointing to the importance of vision in constructing complex auditory spatial maps (Gori et al., 2014). To explore strategies that may improve the auditory spatial sense in visually impaired people, we investigated the impact of tactile feedback on spatial auditory localization in 48 blindfolded sighted subjects. We measured auditory spatial bisection thresholds before and after training, either with tactile feedback, verbal feedback, or no feedback. Audio thresholds were first measured with a spatial bisection task: subjects judged whether the second sound of a three sound sequence was spatially closer to the first or the third sound. The tactile feedback group underwent two audio-tactile feedback sessions of 100 trials, where each auditory trial was followed by the same spatial sequence played on the subject's forearm; auditory spatial bisection thresholds were evaluated after each session. In the verbal feedback condition, the positions of the sounds were verbally reported to the subject after each feedback trial. The no feedback group did the same sequence of trials, with no feedback. Performance improved significantly only after audio-tactile feedback. The results suggest that direct tactile feedback interacts with the auditory spatial localization system, possibly by a process of cross-sensory recalibration. Control tests with the subject rotated suggested that this effect occurs only when the tactile and acoustic sequences are spatially congruent. Our results suggest that the tactile system can be used to recalibrate the auditory sense of space. These results encourage the possibility of designing rehabilitation programs to help blind persons establish a robust auditory sense of space, through training with the tactile modality.
  • Goriot, C., Broersma, M., McQueen, J. M., Unsworth, S., & Van Hout, R. (2018). Language balance and switching ability in children acquiring English as a second language. Journal of Experimental Child Psychology, 173, 168-186. doi:10.1016/j.jecp.2018.03.019.

    Abstract

    This study investigated whether relative lexical proficiency in Dutch and English in child second language (L2) learners is related to executive functioning. Participants were Dutch primary school pupils of three different age groups (4–5, 8–9, and 11–12 years) who either were enrolled in an early-English schooling program or were age-matched controls not on that early-English program. Participants performed tasks that measured switching, inhibition, and working memory. Early-English program pupils had greater knowledge of English vocabulary and more balanced Dutch–English lexicons. In both groups, lexical balance, a ratio measure obtained by dividing vocabulary scores in English by those in Dutch, was related to switching but not to inhibition or working memory performance. These results show that for children who are learning an L2 in an instructional setting, and for whom managing two languages is not yet an automatized process, language balance may be more important than L2 proficiency in influencing the relation between childhood bilingualism and switching abilities.
  • De Grauwe, S., Willems, R. M., Rüschemeyer, S.-A., Lemhöfer, K., & Schriefers, H. (2014). Embodied language in first- and second-language speakers: Neural correlates of processing motor verbs. Neuropsychologia, 56, 334-349. doi:10.1016/j.neuropsychologia.2014.02.003.

    Abstract

    The involvement of neural motor and sensory systems in the processing of language has so far mainly been studied in native (L1) speakers. In an fMRI experiment, we investigated whether non-native (L2) semantic representations are rich enough to allow for activation in motor and somatosensory brain areas. German learners of Dutch and a control group of Dutch native speakers made lexical decisions about visually presented Dutch motor and non-motor verbs. Region-of-interest (ROI) and whole-brain analyses indicated that L2 speakers, like L1 speakers, showed significantly increased activation for simple motor compared to non-motor verbs in motor and somatosensory regions. This effect was not restricted to Dutch-German cognate verbs, but was also present for non-cognate verbs. These results indicate that L2 semantic representations are rich enough for motor-related activations to develop in motor and somatosensory areas.
  • De Grauwe, S., Lemhöfer, K., Willems, R. M., & Schriefers, H. (2014). L2 speakers decompose morphologically complex verbs: fMRI evidence from priming of transparent derived verbs. Frontiers in Human Neuroscience, 8: 802. doi:10.3389/fnhum.2014.00802.

    Abstract

    In this functional magnetic resonance imaging (fMRI) long-lag priming study, we investigated the processing of Dutch semantically transparent, derived prefix verbs. In such words, the meaning of the word as a whole can be deduced from the meanings of its parts, e.g., wegleggen “put aside.” Many behavioral and some fMRI studies suggest that native (L1) speakers decompose transparent derived words. The brain region usually implicated in morphological decomposition is the left inferior frontal gyrus (LIFG). In non-native (L2) speakers, the processing of transparent derived words has hardly been investigated, especially in fMRI studies, and results are contradictory: some studies find more reliance on holistic (i.e., non-decompositional) processing by L2 speakers; some find no difference between L1 and L2 speakers. In this study, we wanted to find out whether Dutch transparent derived prefix verbs are decomposed or processed holistically by German L2 speakers of Dutch. Half of the derived verbs (e.g., omvallen “fall down”) were preceded by their stem (e.g., vallen “fall”) with a lag of 4–6 words (“primed”); the other half (e.g., inslapen “fall asleep”) were not (“unprimed”). L1 and L2 speakers of Dutch made lexical decisions on these visually presented verbs. Both region of interest analyses and whole-brain analyses showed that there was a significant repetition suppression effect for primed compared to unprimed derived verbs in the LIFG. This was true both for the analyses over L2 speakers only and for the analyses over the two language groups together. The latter did not reveal any interaction with language group (L1 vs. L2) in the LIFG. Thus, L2 speakers show a clear priming effect in the LIFG, an area that has been associated with morphological decomposition. Our findings are consistent with the idea that L2 speakers engage in decomposition of transparent derived verbs rather than processing them holistically.

    Additional information

    Data Sheet 1.docx
  • Gray, R., & Jordan, F. (2000). Language trees support the express-train sequence of Austronesian expansion. Nature, 405, 1052-1055. doi:10.1038/35016575.

    Abstract

    Languages, like molecules, document evolutionary history. Darwin (1) observed that evolutionary change in languages greatly resembled the processes of biological evolution: inheritance from a common ancestor and convergent evolution operate in both. Despite many suggestions (2-4), few attempts have been made to apply the phylogenetic methods used in biology to linguistic data. Here we report a parsimony analysis of a large language data set. We use this analysis to test competing hypotheses - the "express-train" (5) and the "entangled-bank" (6,7) models - for the colonization of the Pacific by Austronesian-speaking peoples. The parsimony analysis of a matrix of 77 Austronesian languages with 5,185 lexical items produced a single most-parsimonious tree. The express-train model was converted into an ordered geographical character and mapped onto the language tree. We found that the topology of the language tree was highly compatible with the express-train model.
  • Grey, S., Schubel, L. C., McQueen, J. M., & Van Hell, J. G. (2019). Processing foreign-accented speech in a second language: Evidence from ERPs during sentence comprehension in bilinguals. Bilingualism: Language and Cognition, 22(5), 912-929. doi:10.1017/S1366728918000937.

    Abstract

    This study examined electrophysiological correlates of sentence comprehension of native-accented and foreign-accented speech in a second language (L2), for sentences produced in a foreign accent different from that associated with the listeners’ L1. Bilingual speaker-listeners process different accents in their L2 conversations, but the effects on real-time L2 sentence comprehension are unknown. Dutch–English bilinguals listened to native American-English accented sentences and foreign (and for them unfamiliarly-accented) Chinese-English accented sentences while EEG was recorded. Behavioral sentence comprehension was highly accurate for both native-accented and foreign-accented sentences. ERPs showed different patterns for L2 grammar and semantic processing of native- and foreign-accented speech. For grammar, only native-accented speech elicited an Nref. For semantics, both native- and foreign-accented speech elicited an N400 effect, but with a delayed onset across both accent conditions. These findings suggest that the way listeners comprehend native- and foreign-accented sentences in their L2 depends on their familiarity with the accent.
  • Griffin, Z. M., & Bock, K. (2000). What the eyes say about speaking. Psychological Science, 11(4), 274-279. doi:10.1111/1467-9280.00255.

    Abstract

    To study the time course of sentence formulation, we monitored the eye movements of speakers as they described simple events. The similarity between speakers' initial eye movements and those of observers performing a nonverbal event-comprehension task suggested that response-relevant information was rapidly extracted from scenes, allowing speakers to select grammatical subjects based on comprehended events rather than salience. When speaking extemporaneously, speakers began fixating pictured elements less than a second before naming them within their descriptions, a finding consistent with incremental lexical encoding. Eye movements anticipated the order of mention despite changes in picture orientation, in who-did-what-to-whom, and in sentence structure. The results support Wundt's theory of sentence production.

  • Groen, I. I. A., Jahfari, S., Seijdel, N., Ghebreab, S., Lamme, V. A. F., & Scholte, H. S. (2018). Scene complexity modulates degree of feedback activity during object detection in natural scenes. PLoS Computational Biology, 14: e1006690. doi:10.1371/journal.pcbi.1006690.

    Abstract

    Selective brain responses to objects arise within a few hundred milliseconds of neural processing, suggesting that visual object recognition is mediated by rapid feed-forward activations. Yet disruption of neural responses in early visual cortex beyond feed-forward processing stages affects object recognition performance. Here, we unite these discrepant findings by reporting that object recognition involves enhanced feedback activity (recurrent processing within early visual cortex) when target objects are embedded in natural scenes that are characterized by high complexity. Human participants performed an animal target detection task on natural scenes with low, medium or high complexity as determined by a computational model of low-level contrast statistics. Three converging lines of evidence indicate that feedback was selectively enhanced for high complexity scenes. First, functional magnetic resonance imaging (fMRI) activity in early visual cortex (V1) was enhanced for target objects in scenes with high, but not low or medium complexity. Second, event-related potentials (ERPs) evoked by target objects were selectively enhanced at feedback stages of visual processing (from ~220 ms onwards) for high complexity scenes only. Third, behavioral performance for high complexity scenes deteriorated when participants were pressed for time and thus less able to incorporate the feedback activity. Modeling of the reaction time distributions using drift diffusion revealed that object information accumulated more slowly for high complexity scenes, with evidence accumulation being coupled to trial-to-trial variation in the EEG feedback response. Together, these results suggest that while feed-forward activity may suffice to recognize isolated objects, the brain employs recurrent processing more adaptively in naturalistic settings, using minimal feedback for simple scenes and increasing feedback for complex scenes.

    Additional information

    data via OSF
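
    As a reminder of the drift diffusion framework used in the Groen et al. entry above (given here in its standard textbook form; the parameter names are generic, not those of the authors' fitted model), noisy evidence x(t) accumulates at drift rate v with noise scale s until it reaches a decision boundary:

        dx = v\,dt + s\,dW, \qquad \text{respond when } x(t) \geq a \ \text{(target present)} \ \text{or} \ x(t) \leq 0 \ \text{(target absent)}

    In this framework, the reported slower evidence accumulation for high-complexity scenes corresponds to a lower drift rate v.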
  • Grove, J., Ripke, S., Als, T. D., Mattheisen, M., Walters, R., Won, H., Pallesen, J., Agerbo, E., Andreassen, O. A., Anney, R., Belliveau, R., Bettella, F., Buxbaum, J. D., Bybjerg-Grauholm, J., Bækved-Hansen, M., Cerrato, F., Chambert, K., Christensen, J. H., Churchhouse, C., Dellenvall, K., Demontis, D., De Rubeis, S., Devlin, B., Djurovic, S., Dumont, A., Goldstein, J., Hansen, C. S., Hauberg, M. E., Hollegaard, M. V., Hope, S., Howrigan, D. P., Huang, H., Hultman, C., Klei, L., Maller, J., Martin, J., Martin, A. R., Moran, J., Nyegaard, M., Nærland, T., Palmer, D. S., Palotie, A., Pedersen, C. B., Pedersen, M. G., Poterba, T., Poulsen, J. B., St Pourcain, B., Qvist, P., Rehnström, K., Reichenberg, A., Reichert, J., Robinson, E. B., Roeder, K., Roussos, P., Saemundsen, E., Sandin, S., Satterstrom, F. K., Smith, G. D., Stefansson, H., Stefansson, K., Steinberg, S., Stevens, C., Sullivan, P. F., Turley, P., Walters, G. B., Xu, X., Autism Spectrum Disorders Working Group of The Psychiatric Genomics Consortium, BUPGEN, Major Depressive Disorder Working Group of the Psychiatric Genomics Consortium, Me Research Team, Geschwind, D., Nordentoft, M., Hougaard, D. M., Werge, T., Mors, O., Mortensen, P. B., Neale, B. M., Daly, M. J., & Børglum, A. D. (2019). Identification of common genetic risk variants for autism spectrum disorder. Nature Genetics, 51, 431-444. doi:10.1038/s41588-019-0344-8.

    Abstract

    Autism spectrum disorder (ASD) is a highly heritable and heterogeneous group of neurodevelopmental phenotypes diagnosed in more than 1% of children. Common genetic variants contribute substantially to ASD susceptibility, but to date no individual variants have been robustly associated with ASD. With a marked sample-size increase from a unique Danish population resource, we report a genome-wide association meta-analysis of 18,381 individuals with ASD and 27,969 controls that identified five genome-wide-significant loci. Leveraging GWAS results from three phenotypes with significantly overlapping genetic architectures (schizophrenia, major depression, and educational attainment), we identified seven additional loci shared with other traits at equally strict significance levels. Dissecting the polygenic architecture, we found both quantitative and qualitative polygenic heterogeneity across ASD subtypes. These results highlight biological insights, particularly relating to neuronal function and corticogenesis, and establish that GWAS performed at scale will be much more productive in the near term in ASD.

    Additional information

    Supplementary Text and Figures
  • Guadalupe, T., Kong, X., Akkermans, S. E. A., Fisher, S. E., & Francks, C. (2022). Relations between hemispheric asymmetries of grey matter and auditory processing of spoken syllables in 281 healthy adults. Brain Structure & Function, 227, 561-572. doi:10.1007/s00429-021-02220-z.

    Abstract

    Most people have a right-ear advantage for the perception of spoken syllables, consistent with left hemisphere dominance for speech processing. However, there is considerable variation, with some people showing left-ear advantage. The extent to which this variation is reflected in brain structure remains unclear. We tested for relations between hemispheric asymmetries of auditory processing and of grey matter in 281 adults, using dichotic listening and voxel-based morphometry. This was the largest study of this issue to date. Per-voxel asymmetry indexes were derived for each participant following registration of brain magnetic resonance images to a template that was symmetrized. The asymmetry index derived from dichotic listening was related to grey matter asymmetry in clusters of voxels corresponding to the amygdala and cerebellum lobule VI. There was also a smaller, non-significant cluster in the posterior superior temporal gyrus, a region of auditory cortex. These findings contribute to the mapping of asymmetrical structure–function links in the human brain and suggest that subcortical structures should be investigated in relation to hemispheric dominance for speech processing, in addition to auditory cortex.

    Additional information

    supplementary information
  • Guadalupe, T., Willems, R. M., Zwiers, M., Arias Vasquez, A., Hoogman, M., Hagoort, P., Fernández, G., Buitelaar, J., Franke, B., Fisher, S. E., & Francks, C. (2014). Differences in cerebral cortical anatomy of left- and right-handers. Frontiers in Psychology, 5: 261. doi:10.3389/fpsyg.2014.00261.

    Abstract

    The left and right sides of the human brain are specialized for different kinds of information processing, and much of our cognition is lateralized to an extent towards one side or the other. Handedness is a reflection of nervous system lateralization. Roughly ten percent of people are mixed- or left-handed, and they show an elevated rate of reductions or reversals of some cerebral functional asymmetries compared to right-handers. Brain anatomical correlates of left-handedness have also been suggested. However, the relationships of left-handedness to brain structure and function remain far from clear. We carried out a comprehensive analysis of cortical surface area differences between 106 left-handed subjects and 1960 right-handed subjects, measured using an automated method of regional parcellation (FreeSurfer, Destrieux atlas). This is the largest study sample that has so far been used in relation to this issue. No individual cortical region showed an association with left-handedness that survived statistical correction for multiple testing, although there was a nominally significant association with the surface area of a previously implicated region: the left precentral sulcus. Identifying brain structural correlates of handedness may prove useful for genetic studies of cerebral asymmetries, as well as providing new avenues for the study of relations between handedness, cerebral lateralization and cognition.
  • Guadalupe, T., Zwiers, M. P., Teumer, A., Wittfeld, K., Arias Vasquez, A., Hoogman, M., Hagoort, P., Fernández, G., Buitelaar, J., Hegenscheid, K., Völzke, H., Franke, B., Fisher, S. E., Grabe, H. J., & Francks, C. (2014). Measurement and genetics of human subcortical and hippocampal asymmetries in large datasets. Human Brain Mapping, 35(7), 3277-3289. doi:10.1002/hbm.22401.

    Abstract

    Functional and anatomical asymmetries are prevalent features of the human brain, linked to gender, handedness, and cognition. However, little is known about the neurodevelopmental processes involved. In zebrafish, asymmetries arise in the diencephalon before extending within the central nervous system. We aimed to identify genes involved in the development of subtle, left-right volumetric asymmetries of human subcortical structures using large datasets. We first tested the feasibility of measuring left-right volume differences in such large-scale samples, as assessed by two automated methods of subcortical segmentation (FSL|FIRST and FreeSurfer), using data from 235 subjects who had undergone MRI twice. We tested the agreement between the first and second scan, and the agreement between the segmentation methods, for measures of bilateral volumes of six subcortical structures and the hippocampus, and their volumetric asymmetries. We also tested whether there were biases introduced by left-right differences in the regional atlases used by the methods, by analyzing left-right flipped images. While many bilateral volumes were measured well (scan-rescan r = 0.6-0.8), most asymmetries, with the exception of the caudate nucleus, showed lower repeatabilities. We meta-analyzed genome-wide association scan results for caudate nucleus asymmetry in a combined sample of 3,028 adult subjects but did not detect associations at genome-wide significance (P < 5 × 10−8). There was no enrichment of genetic association in genes involved in left-right patterning of the viscera. Our results provide important information for researchers who are currently aiming to carry out large-scale genome-wide studies of subcortical and hippocampal volumes, and their asymmetries.
  • Guerra, E., & Knoeferle, P. (2014). Spatial distance effects on incremental semantic interpretation of abstract sentences: Evidence from eye tracking. Cognition, 133(3), 535-552. doi:10.1016/j.cognition.2014.07.007.

    Abstract

    A large body of evidence has shown that visual context information can rapidly modulate language comprehension for concrete sentences and when it is mediated by a referential or a lexical-semantic link. What has not yet been examined is whether visual context can also modulate comprehension of abstract sentences incrementally when it is neither referenced by, nor lexically associated with, the sentence. Three eye-tracking reading experiments examined the effects of spatial distance between words (Experiment 1) and objects (Experiments 2 and 3) on participants’ reading times for sentences that convey similarity or difference between two abstract nouns (e.g., ‘Peace and war are certainly different...’). Before reading the sentence, participants inspected a visual context with two playing cards that moved either far apart or close together. In Experiment 1, the cards turned and showed the first two nouns of the sentence (e.g., ‘peace’, ‘war’). In Experiments 2 and 3, they turned but remained blank. Participants’ reading times at the adjective (Experiment 1: first-pass reading time; Experiment 2: total times) and at the second noun phrase (Experiment 3: first-pass times) were faster for sentences that expressed similarity when the preceding words/objects were close together (vs. far apart) and for sentences that expressed dissimilarity when the preceding words/objects were far apart (vs. close together). Thus, spatial distance between words or entirely unrelated objects can rapidly and incrementally modulate the semantic interpretation of abstract sentences.

    Additional information

    mmc1.doc
  • Guest, O., Kanayet, F. J., & Love, B. C. (2019). Gerrymandering and computational redistricting. Journal of Computational Social Science, 2, 119-131. doi:10.1007/s42001-019-00053-9.

    Abstract

    Partisan gerrymandering poses a threat to democracy. Moreover, the complexity of the districting task may exceed human capacities. One potential solution is using computational models to automate the districting process by optimizing objective and open criteria, such as how spatially compact districts are. We formulated one such model that minimised pairwise distance between voters within a district. Using US Census Bureau data, we confirmed our prediction that the difference in compactness between the computed and actual districts would be greatest for states that are large and, therefore, difficult for humans to properly district given their limited capacities. The computed solutions highlighted differences in how humans and machines solve this task with machine solutions more fully optimised and displaying emergent properties not evident in human solutions. These results suggest a division of labour in which humans debate and formulate districting criteria whereas machines optimise the criteria to draw the district boundaries. We discuss how criteria can be expanded beyond notions of compactness to include other factors, such as respecting municipal boundaries, historic communities, and relevant legislation.
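
    To make the compactness objective described above concrete, the following sketch (an illustration under assumed toy inputs, not the authors' implementation) scores a hypothetical districting plan by the mean pairwise distance between voters assigned to the same district; lower scores indicate more compact districts.

      # Illustrative sketch, not the authors' code: score a districting plan by the
      # mean pairwise distance between voters within each district (the compactness
      # criterion described above). Voter coordinates and assignments are toy inputs.
      import math
      from itertools import combinations

      def mean_pairwise_distance(voters):
          """Mean Euclidean distance over all pairs of voters in one district."""
          pairs = list(combinations(voters, 2))
          if not pairs:
              return 0.0
          return sum(math.dist(a, b) for a, b in pairs) / len(pairs)

      def plan_compactness(voters, assignment):
          """Average the within-district mean pairwise distance across districts."""
          districts = {}
          for voter, district in zip(voters, assignment):
              districts.setdefault(district, []).append(voter)
          return sum(mean_pairwise_distance(v) for v in districts.values()) / len(districts)

      # Toy usage: six voters split into two districts (arbitrary coordinates).
      voters = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (10, 11)]
      assignment = [0, 0, 0, 1, 1, 1]
      print(plan_compactness(voters, assignment))  # lower = more compact

    An automated districter would then search over assignments to minimize this score, subject to constraints such as equal district populations.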
  • Guggenheim, J. A., Williams, C., Northstone, K., Howe, L. D., Tilling, K., St Pourcain, B., McMahon, G., & Lawlor, D. A. (2014). Does Vitamin D Mediate the Protective Effects of Time Outdoors On Myopia? Findings From a Prospective Birth Cohort. Investigative Ophthalmology & Visual Science, 55(12), 8550-8558. doi:10.1167/iovs.14-15839.
  • Gullberg, M. (1995). Giving language a hand: gesture as a cue based communicative strategy. Working Papers, Lund University, Dept. of Linguistics, 44, 41-60.

    Abstract

    All accounts of communicative behaviour in general, and communicative strategies in particular, mention gesture in relation to language acquisition (cf. Faerch & Kasper 1983 for an overview). However, few attempts have been made to investigate how spoken language and spontaneous gesture combine to determine discourse referents. Referential gesture and referential discourse will be of particular interest, since communicative strategies in second language discourse often involve labelling problems.

    This paper will focus on two issues:

    1) Within a cognitive account of communicative strategies, gesture will be seen to be part of conceptual or analysis-based strategies, in that relational features in the referents are exploited;

    2) It will be argued that communication strategies can be seen in terms of cue manipulation in the same sense as sentence processing has been analysed in terms of competing cues. Strategic behaviour, and indeed the process of referring in general, are seen in terms of cues, combining or competing to determine discourse referents. Gesture can then be regarded as being such a cue at the discourse level, and as a cue-based communicative strategy, in that gesture functions by exploiting physically based cues which can be recognised as being part of the referent. The question of iconicity and motivation vs. the arbitrary qualities of gesture as a strategic cue will be addressed in connection with this.
  • Gunz, P., Tilot, A. K., Wittfeld, K., Teumer, A., Shapland, C. Y., Van Erp, T. G. M., Dannemann, M., Vernot, B., Neubauer, S., Guadalupe, T., Fernandez, G., Brunner, H., Enard, W., Fallon, J., Hosten, N., Völker, U., Profico, A., Di Vincenzo, F., Manzi, G., Kelso, J., St Pourcain, B., Hublin, J.-J., Franke, B., Pääbo, S., Macciardi, F., Grabe, H. J., & Fisher, S. E. (2019). Neandertal introgression sheds light on modern human endocranial globularity. Current Biology, 29(1), 120-127. doi:10.1016/j.cub.2018.10.065.

    Abstract

    One of the features that distinguishes modern humans from our extinct relatives and ancestors is a globular shape of the braincase [1-4]. As the endocranium closely mirrors the outer shape of the brain, these differences might reflect altered neural architecture [4,5]. However, in the absence of fossil brain tissue the underlying neuroanatomical changes as well as their genetic bases remain elusive. To better understand the biological foundations of modern human endocranial shape, we turn to our closest extinct relatives, the Neandertals. Interbreeding between modern humans and Neandertals has resulted in introgressed fragments of Neandertal DNA in the genomes of present-day non-Africans [6,7]. Based on shape analyses of fossil skull endocasts, we derive a measure of endocranial globularity from structural magnetic resonance imaging (MRI) scans of thousands of modern humans, and study the effects of introgressed fragments of Neandertal DNA on this phenotype. We find that Neandertal alleles on chromosomes 1 and 18 are associated with reduced endocranial globularity. These alleles influence expression of two nearby genes, UBR4 and PHLPP1, which are involved in neurogenesis and myelination, respectively. Our findings show how integration of fossil skull data with archaic genomics and neuroimaging can suggest developmental mechanisms that may contribute to the unique modern human endocranial shape.

    Additional information

    mmc1.pdf mmc2.xlsx
  • Gur, C., & Sumer, B. (2022). Learning to introduce referents in narration is resilient to the effects of late sign language exposure. Sign Language & Linguistics, 25(2), 205-234. doi:10.1075/sll.21004.gur.

    Abstract

    The present study investigates the effects of late sign language exposure on narrative development in Turkish Sign Language (TİD) by focusing on the introductions of main characters and the linguistic strategies used in these introductions. We study these domains by comparing narrations produced by native and late signers in TİD. The results of our study reveal that late sign language exposure does not hinder the acquisition of linguistic devices to introduce main characters in narrations. Thus, their acquisition seems to be resilient to the effects of late language exposure. Our study further suggests that a two-year exposure to sign language facilitates the acquisition of these skills in signing children even in the case of late language exposure, thus providing further support for the importance of sign language exposure to develop linguistic skills for signing children.
  • Gussenhoven, C., Lu, Y.-A., Lee-Kim, S.-I., Liu, C., Rahmani, H., Riad, T., & Zora, H. (2022). The sequence recall task and lexicality of tone: Exploring tone “deafness”. Frontiers in Psychology, 13: 902569. doi:10.3389/fpsyg.2022.902569.

    Abstract

    Many perception and processing effects of the lexical status of tone have been found in behavioral, psycholinguistic, and neuroscientific research, often pitting varieties of tonal Chinese against non-tonal Germanic languages. While the linguistic and cognitive evidence for lexical tone is therefore beyond dispute, the word prosodic systems of many languages continue to escape the categorizations of typologists. One controversy concerns the existence of a typological class of “pitch accent languages,” another the underlying phonological nature of surface tone contrasts, which in some cases have been claimed to be metrical rather than tonal. We address the question whether the Sequence Recall Task (SRT), which has been shown to discriminate between languages with and without word stress, can distinguish languages with and without lexical tone. Using participants from non-tonal Indonesian, semi-tonal Swedish, and two varieties of tonal Mandarin, we ran SRTs with monosyllabic tonal contrasts to test the hypothesis that high performance in a tonal SRT indicates the lexical status of tone. An additional question concerned the extent to which accuracy scores depended on phonological and phonetic properties of a language’s tone system, like its complexity, the existence of an experimental contrast in a language’s phonology, and the phonetic salience of a contrast. The results suggest that a tonal SRT is not likely to discriminate between tonal and non-tonal languages within a typologically varied group, because of the effects of specific properties of their tone systems. Future research should therefore address the first hypothesis with participants from otherwise similar tonal and non-tonal varieties of the same language, where results from a tonal SRT may make a useful contribution to the typological debate on word prosody.

    Additional information

    also published as book chapter (2023)
  • Haagen, T., Dona, L., Bosscha, S., Zamith, B., Koetschruyter, R., & Wijnholds, G. (2022). Noun Phrase and Verb Phrase Ellipsis in Dutch: Identifying Subject-Verb Dependencies with BERTje. Computational Linguistics in the Netherlands Journal, 12, 49-63.

    Abstract

    Previous research has set out to quantify the syntactic capacity of BERTje (the Dutch equivalent of BERT) in the context of phenomena such as control verb nesting and verb raising in Dutch. Another complex language phenomenon is ellipsis, where a constituent is omitted from a sentence and can be recovered using context. Like verb raising and control verb nesting, ellipsis is suitable for evaluating BERTje’s linguistic capacity since it requires the processing of syntactic and lexical cues to recover the elided phrases. This work outlines an approach to identify subject-verb dependencies in Dutch sentences with verb phrase and noun phrase ellipsis using BERTje. Results will inform about BERTje’s capability of capturing syntactic information and its ability to capture ellipsis in particular. Understanding more about how computational models process ellipsis and how it can be improved is crucial for boosting the performance of language models, as natural language contains many instances of ellipsis. Using training data from Lassy, converted to contextualized embeddings using BERTje, a probe model is trained to identify subject-verb dependencies. The model is tested on sentences generated using a Context Free Grammar (CFG), which is designed to generate sentences containing ellipsis. These sentences are also converted to contextualized representations using BERTje. Results show that BERTje’s syntactic abilities are lacking, shown by accuracy drops compared to baseline measures.

    Additional information

    direct link to journal
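
    As a rough illustration of the first step described in the Haagen et al. entry above, the snippet below obtains contextualized token embeddings from BERTje with the Hugging Face transformers library; the model identifier and the toy sentence are assumptions made for this sketch, and the probe itself is not shown.

      # Hedged sketch: contextualized embeddings from BERTje (Dutch BERT) for one
      # sentence, the kind of representation a subject-verb dependency probe would
      # be trained on. Model name and example sentence are illustrative assumptions.
      import torch
      from transformers import AutoModel, AutoTokenizer

      tokenizer = AutoTokenizer.from_pretrained("GroNLP/bert-base-dutch-cased")
      model = AutoModel.from_pretrained("GroNLP/bert-base-dutch-cased")
      model.eval()

      sentence = "Jan leest een boek en Marie ook."  # toy sentence with ellipsis
      inputs = tokenizer(sentence, return_tensors="pt")
      with torch.no_grad():
          outputs = model(**inputs)

      # One 768-dimensional vector per word-piece token; a probe (e.g., a logistic
      # regression over token-pair vectors) could then be trained on these to flag
      # subject-verb dependencies.
      embeddings = outputs.last_hidden_state.squeeze(0)
      print(embeddings.shape)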
  • Hagoort, P., & Brown, C. M. (2000). ERP effects of listening to speech compared to reading: the P600/SPS to syntactic violations in spoken sentences and rapid serial visual presentation. Neuropsychologia, 38, 1531-1549.

    Abstract

    In this study, event-related brain potential effects of speech processing are obtained and compared to similar effects in sentence reading. In two experiments sentences were presented that contained three different types of grammatical violations. In one experiment sentences were presented word by word at a rate of four words per second. The grammatical violations elicited a Syntactic Positive Shift (P600/SPS), 500 ms after the onset of the word that rendered the sentence ungrammatical. The P600/SPS consisted of two phases, an early phase with a relatively equal anterior-posterior distribution and a later phase with a strong posterior distribution. We interpret the first phase as an indication of structural integration complexity, and the second phase as an indication of failing parsing operations and/or an attempt at reanalysis. In the second experiment the same syntactic violations were presented in sentences spoken at a normal rate and with normal intonation. These violations elicited a P600/SPS with the same onset as was observed for the reading of these sentences. In addition two of the three violations showed a preceding frontal negativity, most clearly over the left hemisphere.
  • Hagoort, P., & Brown, C. M. (2000). ERP effects of listening to speech: semantic ERP effects. Neuropsychologia, 38, 1518-1530.

    Abstract

    In this study, event-related brain potential effects of speech processing are obtained and compared to similar effects in sentence reading. In two experiments spoken sentences were presented with semantic violations in sentence-final or mid-sentence positions. For these violations N400 effects were obtained that were very similar to N400 effects obtained in reading. However, the N400 effects in speech were preceded by an earlier negativity (N250). This negativity is not commonly observed with written input. The early effect is explained as a manifestation of a mismatch between the word forms expected on the basis of the context, and the actual cohort of activated word candidates that is generated on the basis of the speech signal.
  • Hagoort, P. (2002). De koninklijke verloving tussen psychologie en neurowetenschap [The royal engagement between psychology and neuroscience]. De Psycholoog, 37, 107-113.
  • Hagoort, P. (2014). Nodes and networks in the neural architecture for language: Broca's region and beyond. Current Opinion in Neurobiology, 28, 136-141. doi:10.1016/j.conb.2014.07.013.

    Abstract

    Current views on the neurobiological underpinnings of language are discussed that deviate in a number of ways from the classical Wernicke–Lichtheim–Geschwind model. More areas than Broca's and Wernicke's region are involved in language. Moreover, a division along the axis of language production and language comprehension does not seem to be warranted. Instead, for central aspects of language processing neural infrastructure is shared between production and comprehension. Three different accounts of the role of Broca's area in language are discussed. Arguments are presented in favor of a dynamic network view, in which the functionality of a region is co-determined by the network of regions in which it is embedded at particular moments in time. Finally, core regions of language processing need to interact with other networks (e.g. the attentional networks and the ToM network) to establish full functionality of language and communication.
  • Hagoort, P. (2018). Prerequisites for an evolutionary stance on the neurobiology of language. Current Opinion in Behavioral Sciences, 21, 191-194. doi:10.1016/j.cobeha.2018.05.012.
  • Hagoort, P., Brown, C. M., & Swaab, T. Y. (1995). Semantic deficits in right hemisphere patients. Brain and Language, 51, 161-163. doi:10.1006/brln.1995.1058.
  • Hagoort, P. (2019). The meaning making mechanism(s) behind the eyes and between the ears. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 375: 20190301. doi:10.1098/rstb.2019.0301.

    Abstract

    In this contribution, the following four questions are discussed: (i) where is meaning?; (ii) what is meaning?; (iii) what is the meaning of mechanism?; (iv) what are the mechanisms of meaning? I will argue that meanings are in the head. Meanings have multiple facets, but minimally one needs to make a distinction between single word meanings (lexical meaning) and the meanings of multi-word utterances. The latter ones cannot be retrieved from memory, but need to be constructed on the fly. A mechanistic account of the meaning-making mind requires an analysis at both a functional and a neural level, the reason being that these levels are causally interdependent. I will show that an analysis exclusively focusing on patterns of brain activation lacks explanatory power. Finally, I shall present an initial sketch of how the dynamic interaction between temporo-parietal areas and inferior frontal cortex might instantiate the interpretation of linguistic utterances in the context of a multimodal setting and ongoing discourse information.
  • Hagoort, P. (2019). The neurobiology of language beyond single word processing. Science, 366(6461), 55-58. doi:10.1126/science.aax0289.

    Abstract

    In this Review, I propose a multiple-network view for the neurobiological basis of distinctly human language skills. A much more complex picture of interacting brain areas emerges than in the classical neurobiological model of language. This is because using language is more than single-word processing, and much goes on beyond the information given in the acoustic or orthographic tokens that enter primary sensory cortices. This requires the involvement of multiple networks with functionally nonoverlapping contributions.

  • Hagoort, P., & Indefrey, P. (2014). The neurobiology of language beyond single words. Annual Review of Neuroscience, 37, 347-362. doi:10.1146/annurev-neuro-071013-013847.

    Abstract

    A hallmark of human language is that we combine lexical building blocks retrieved from memory in endless new ways. This combinatorial aspect of language is referred to as unification. Here we focus on the neurobiological infrastructure for syntactic and semantic unification. Unification is characterized by a high-speed temporal profile including both prediction and integration of retrieved lexical elements. A meta-analysis of numerous neuroimaging studies reveals a clear dorsal/ventral gradient in both left inferior frontal cortex and left posterior temporal cortex, with dorsal foci for syntactic processing and ventral foci for semantic processing. In addition to core areas for unification, further networks need to be recruited to realize language-driven communication to its full extent. One example is the theory of mind network, which allows listeners and readers to infer the intended message (speaker meaning) from the coded meaning of the linguistic utterance. This indicates that sensorimotor simulation cannot handle all of language processing.
  • Hagoort, P. (2000). What we shall know only tomorrow. Brain and Language, 71, 89-92. doi:10.1006/brln.1999.2221.
  • Hahn, L. E., Benders, T., Snijders, T. M., & Fikkert, P. (2018). Infants' sensitivity to rhyme in songs. Infant Behavior and Development, 52, 130-139. doi:10.1016/j.infbeh.2018.07.002.

    Abstract

    Children’s songs often contain rhyming words at phrase endings. In this study, we investigated whether infants can already recognize this phonological pattern in songs. Earlier studies using lists of spoken words were equivocal on infants’ spontaneous processing of rhymes (Hayes, Slater, & Brown, 2000; Jusczyk, Goodman, & Baumann, 1999). Songs, however, constitute an ecologically valid rhyming stimulus, which could allow for spontaneous processing of this phonological pattern in infants. Novel children’s songs with rhyming and non-rhyming lyrics using pseudo-words were presented to 35 9-month-old Dutch infants using the Headturn Preference Procedure. Infants on average listened longer to the non-rhyming songs, with around half of the infants however exhibiting a preference for the rhyming songs. These results highlight that infants have the processing abilities to benefit from their natural rhyming input for the development of their phonological abilities.
  • Hammarström, H., & Güldemann, T. (2014). Quantifying geographical determinants of large-scale distributions of linguistic features. Language Dynamics and Change, 4, 87-115. doi:10.1163/22105832-00401002.

    Abstract

    In the recent past the work on large-scale linguistic distributions across the globe has intensified considerably. Work on macro-areal relationships in Africa (Güldemann, 2010) suggests that the shape of convergence areas may be determined by climatic factors and geophysical features such as mountains, water bodies, coastlines, etc. Worldwide data is now available for geophysical features as well as linguistic features, including numeral systems and basic constituent order. We explore the possibility that the shape of areal aggregations of individual features in these two linguistic domains correlates with Köppen-Geiger climate zones. Furthermore, we test the hypothesis that the shape of such areal feature aggregations is determined by the contour of adjacent geophysical features like mountain ranges or coastlines. In these first basic tests, we do not find clear evidence that either Köppen-Geiger climate zones or the contours of geophysical features are major predictors for the linguistic data at hand.

  • Hammarström, H., & Donohue, M. (2014). Some principles on the use of macro-areas in typological comparison. Language Dynamics and Change, 4, 167-187. doi:10.1163/22105832-00401001.

    Abstract

    While the notion of the ‘area’ or ‘Sprachbund’ has a long history in linguistics, with geographically-defined regions frequently cited as a useful means to explain typological distributions, the problem of delimiting areas has not been well addressed. Lists of general-purpose, largely independent ‘macro-areas’ (typically continent size) have been proposed as a step to rule out contact as an explanation for various large-scale linguistic phenomena. This squib points out some problems in some of the currently widely-used predetermined areas, those found in the World Atlas of Language Structures (Haspelmath et al., 2005). Instead, we propose a principled division of the world’s landmasses into six macro-areas that arguably have better geographical independence properties.
  • Hammarström, H. (2014). [Review of the book A grammar of the Great Andamanese language: An ethnolinguistic study by Anvita Abbi]. Journal of South Asian Languages and Linguistics, 1, 111-116. doi:10.1515/jsall-2014-0007.
  • Han, J.-I., & Verdonschot, R. G. (2019). Spoken-word production in Korean: A non-word masked priming and phonological Stroop task investigation. Quarterly Journal of Experimental Psychology, 72(4), 901-912. doi:10.1177/1747021818770989.

    Abstract

    Speech production studies have shown that the phonological unit initially used to fill the metrical frame during phonological encoding is language specific, that is, a phoneme for English and Dutch, an atonal syllable for Mandarin Chinese, and a mora for Japanese. However, only a few studies chronometrically investigated speech production in Korean, and they obtained mixed results. Korean is particularly interesting as there might be both phonemic and syllabic influences during phonological encoding. The purpose of this study is to further examine the initial phonological preparation unit in Korean, employing a masked priming task (Experiment 1) and a phonological Stroop task (Experiment 2). The results showed that significant onset (and onset-plus, that is, consonant-vowel [CV]) effects were found in both experiments, but there was no compelling evidence for a prominent role for the syllable. When the prime words were presented in three different forms related to the targets, namely, without any change, with re-syllabified codas, and with nasalised codas, there were no significant differences in facilitation among the three forms. Alternatively, it is also possible that participants may not have had sufficient time to process the primes up to the point that re-syllabification or nasalisation could have been carried out. In addition, the results of a Stroop task demonstrated that the onset phoneme effect was not driven by any orthographic influence. These findings suggest that the onset segment and not the syllable is the initial (or proximate) phonological unit used in the segment-to-frame encoding process during speech planning in Korean.

    Additional information

    stimuli for experiment 1 and 2
  • Härle, M., Dobel, C., Cohen, R., & Rockstroh, B. (2002). Brain activity during syntactic and semantic processing - a magnetoencephalographic study. Brain Topography, 15(1), 3-11. doi:10.1023/A:1020070521429.

    Abstract

    Drawings of objects were presented in series of 54 each to 14 German speaking subjects with the tasks to indicate by button presses a) whether the grammatical gender of an object name was masculine ("der") or feminine ("die") and b) whether the depicted object was man-made or nature-made. The magnetoencephalogram (MEG) was recorded with a whole-head neuromagnetometer and task-specific patterns of brain activity were determined in the source space (Minimum Norm Estimates, MNE). A left-temporal focus of activity 150-275 ms after stimulus onset in the gender decision compared to the semantic classification task was discussed as indicating the retrieval of syntactic information, while a more expanded left hemispheric activity in the gender relative to the semantic task 300-625 ms after stimulus onset was discussed as indicating phonological encoding. A predominance of activity in the semantic task was observed over right fronto-central region 150-225 ms after stimulus-onset, suggesting that semantic and syntactic processes are prominent in this stage of lexical selection.
  • Harmon, Z., Idemaru, K., & Kapatsinski, V. (2019). Learning mechanisms in cue reweighting. Cognition, 189, 76-88. doi:10.1016/j.cognition.2019.03.011.

    Abstract

    Feedback has been shown to be effective in shifting attention across perceptual cues to a phonological contrast in speech perception (Francis, Baldwin & Nusbaum, 2000). However, the learning mechanisms behind this process remain obscure. We compare the predictions of supervised error-driven learning (Rescorla & Wagner, 1972) and reinforcement learning (Sutton & Barto, 1998) using computational simulations. Supervised learning predicts downweighting of an informative cue when the learner receives evidence that it is no longer informative. In contrast, reinforcement learning suggests that a reduction in cue weight requires positive evidence for the informativeness of an alternative cue. Experimental evidence supports the latter prediction, implicating reinforcement learning as the mechanism behind the effect of feedback on cue weighting in speech perception. Native English listeners were exposed to either bimodal or unimodal VOT distributions spanning the unaspirated/aspirated boundary (bear/pear). VOT is the primary cue to initial stop voicing in English. However, lexical feedback in training indicated that VOT was no longer predictive of voicing. Reduction in the weight of VOT was observed only when participants could use an alternative cue, F0, to predict voicing. Frequency distributions had no effect on learning. Overall, the results suggest that attention shifting in learning the phonetic cues to phonological categories is accomplished using simple reinforcement learning principles that also guide the choice of actions in other domains.
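
    For orientation, the two learning rules contrasted above can be summarized by their textbook update equations (general forms, not the authors' exact simulation settings). Supervised error-driven (Rescorla–Wagner) learning updates every present cue in proportion to the shared prediction error, whereas simple reinforcement learning updates a value only from the reward r that follows the chosen action:

        \Delta V_X = \alpha_X \beta \left( \lambda - \sum_{i \in \text{present cues}} V_i \right) \qquad \text{(Rescorla--Wagner)}

        V(a) \leftarrow V(a) + \alpha \bigl( r - V(a) \bigr) \qquad \text{(simple reinforcement learning)}

    This difference is what drives the contrasting predictions: under Rescorla–Wagner a cue loses weight as soon as it stops reducing prediction error, while under reinforcement learning downweighting depends on rewarded use of an alternative cue.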
  • Harneit, A., Braun, U., Geiger, L. S., Zang, Z., Hakobjan, M., Van Donkelaar, M. M. J., Schweiger, J. I., Schwarz, K., Gan, G., Erk, S., Heinz, A., Romanczuk‐Seiferth, N., Witt, S., Rietschel, M., Walter, H., Franke, B., Meyer‐Lindenberg, A., & Tost, H. (2019). MAOA-VNTR genotype affects structural and functional connectivity in distributed brain networks. Human Brain Mapping, 40(18), 5202-5212. doi:10.1002/hbm.24766.

    Abstract

    Previous studies have linked the low expression variant of a variable number of tandem repeat polymorphism in the monoamine oxidase A gene (MAOA‐L) to the risk for impulsivity and aggression, brain developmental abnormalities, altered cortico‐limbic circuit function, and an exaggerated neural serotonergic tone. However, the neurobiological effects of this variant on human brain network architecture are incompletely understood. We studied healthy individuals and used multimodal neuroimaging (sample size range: 219–284 across modalities) and network‐based statistics (NBS) to probe the specificity of MAOA‐L‐related connectomic alterations to cortical‐limbic circuits and the emotion processing domain. We assessed the spatial distribution of affected links across several neuroimaging tasks and data modalities to identify potential alterations in network architecture. Our results revealed a distributed network of node links with a significantly increased connectivity in MAOA‐L carriers compared to the carriers of the high expression (H) variant. The hyperconnectivity phenotype primarily consisted of between‐lobe (“anisocoupled”) network links and showed a pronounced involvement of frontal‐temporal connections. Hyperconnectivity was observed across functional magnetic resonance imaging (fMRI) of implicit emotion processing (p(FWE) = .037), resting‐state fMRI (p(FWE) = .022), and diffusion tensor imaging (p(FWE) = .044) data, while no effects were seen in fMRI data of another cognitive domain, that is, spatial working memory (p(FWE) = .540). These observations are in line with prior research on the MAOA‐L variant and complement these existing data by novel insights into the specificity and spatial distribution of the neurogenetic effects. Our work highlights the value of multimodal network connectomic approaches for imaging genetics.
  • Hasson, U., Egidi, G., Marelli, M., & Willems, R. M. (2018). Grounding the neurobiology of language in first principles: The necessity of non-language-centric explanations for language comprehension. Cognition, 180(1), 135-157. doi:10.1016/j.cognition.2018.06.018.

    Abstract

    Recent decades have ushered in tremendous progress in understanding the neural basis of language. Most of our current knowledge on language and the brain, however, is derived from lab-based experiments that are far removed from everyday language use, and that are inspired by questions originating in linguistic and psycholinguistic contexts. In this paper we argue that in order to make progress, the field needs to shift its focus to understanding the neurobiology of naturalistic language comprehension. We present here a new conceptual framework for understanding the neurobiological organization of language comprehension. This framework is non-language-centered in the computational/neurobiological constructs it identifies, and focuses strongly on context. Our core arguments address three general issues: (i) the difficulty in extending language-centric explanations to discourse; (ii) the necessity of taking context as a serious topic of study, modeling it formally and acknowledging the limitations on external validity when studying language comprehension outside context; and (iii) the tenuous status of the language network as an explanatory construct. We argue that adopting this framework means that neurobiological studies of language will be less focused on identifying correlations between brain activity patterns and mechanisms postulated by psycholinguistic theories. Instead, they will be less self-referential and increasingly more inclined towards integration of language with other cognitive systems, ultimately doing more justice to the neurobiological organization of language and how it supports language as it is used in everyday life.
  • Haun, D. B. M., Rekers, Y., & Tomasello, M. (2014). Children conform to the behavior of peers; Other great apes stick with what they know. Psychological Science, 25, 2160-2167. doi:10.1177/0956797614553235.

    Abstract

    All primates learn things from conspecifics socially, but it is not clear whether they conform to the behavior of these conspecifics—if conformity is defined as overriding individually acquired behavioral tendencies in order to copy peers’ behavior. In the current study, chimpanzees, orangutans, and 2-year-old human children individually acquired a problem-solving strategy. They then watched several conspecific peers demonstrate an alternative strategy. The children switched to this new, socially demonstrated strategy in roughly half of all instances, whereas the other two great-ape species almost never adjusted their behavior to the majority’s. In a follow-up study, children switched much more when the peer demonstrators were still present than when they were absent, which suggests that their conformity arose at least in part from social motivations. These results demonstrate an important difference between the social learning of humans and great apes, a difference that might help to account for differences in human and nonhuman cultures

    Additional information

    Haun_Rekers_Tomasello_2014_supp.pdf
  • Havron, N., Raviv, L., & Arnon, I. (2018). Literate and preliterate children show different learning patterns in an artificial language learning task. Journal of Cultural Cognitive Science, 2, 21-33. doi:10.1007/s41809-018-0015-9.

    Abstract

    Literacy affects many aspects of cognitive and linguistic processing. Among them, it increases the salience of words as units of linguistic processing. Here, we explored the impact of literacy acquisition on children’s learning of an artificial language. Recent accounts of L1–L2 differences relate adults’ greater difficulty with language learning to their smaller reliance on multiword units. In particular, multiword units are claimed to be beneficial for learning opaque grammatical relations like grammatical gender. Since literacy impacts the reliance on words as units of processing, we ask if and how acquiring literacy may change children’s language-learning results. We looked at children’s success in learning novel noun labels relative to their success in learning article-noun gender agreement, before and after learning to read. We found that preliterate first graders were better at learning agreement (larger units) than at learning nouns (smaller units), and that the difference between the two trial types significantly decreased after these children acquired literacy. In contrast, literate third graders were as good in both trial types. These findings suggest that literacy affects not only language processing, but also leads to important differences in language learning. They support the idea that some of children’s advantage in language learning comes from their previous knowledge and experience with language—and specifically, their lack of experience with written texts.
  • Haworth, S., Shapland, C. Y., Hayward, C., Prins, B. P., Felix, J. F., Medina-Gomez, C., Rivadeneira, F., Wang, C., Ahluwalia, T. S., Vrijheid, M., Guxens, M., Sunyer, J., Tachmazidou, I., Walter, K., Iotchkova, V., Jackson, A., Cleal, L., Huffmann, J., Min, J. L., Sass, L., Timmers, P. R. H. J., UK10K consortium, Davey Smith, G., Fisher, S. E., Wilson, J. F., Cole, T. J., Fernandez-Orth, D., Bønnelykke, K., Bisgaard, H., Pennell, C. E., Jaddoe, V. W. V., Dedoussis, G., Timpson, N. J., Zeggini, E., Vitart, V., & St Pourcain, B. (2019). Low-frequency variation in TP53 has large effects on head circumference and intracranial volume. Nature Communications, 10: 357. doi:10.1038/s41467-018-07863-x.

    Abstract

    Cranial growth and development is a complex process which affects the closely related traits of head circumference (HC) and intracranial volume (ICV). The underlying genetic influences affecting these traits during the transition from childhood to adulthood are little understood, but might include both age-specific genetic influences and low-frequency genetic variation. To understand these influences, we model the developmental genetic architecture of HC, showing this is genetically stable and correlated with genetic determinants of ICV. Investigating up to 46,000 children and adults of European descent, we identify association with final HC and/or final ICV+HC at 9 novel common and low-frequency loci, illustrating that genetic variation from a wide allele frequency spectrum contributes to cranial growth. The largest effects are reported for low-frequency variants within TP53, with 0.5 cm wider heads in increaser-allele carriers versus non-carriers during mid-childhood, suggesting a previously unrecognized role of TP53 transcripts in human cranial development.

    Additional information

    Supplementary Information
  • Hebebrand, J., Peters, T., Schijven, D., Hebebrand, M., Grasemann, C., Winkler, T. W., Heid, I. M., Antel, J., Föcker, M., Tegeler, L., Brauner, L., Adan, R. A., Luykx, J. J., Correll, C. U., König, I. R., Hinney, A., & Libuda, L. (2018). The role of genetic variation of human metabolism for BMI, mental traits and mental disorders. Molecular Metabolism, 12, 1-11. doi:10.1016/j.molmet.2018.03.015.

    Abstract

    Objective
    The aim was to assess whether loci associated with metabolic traits also have a significant role in BMI and mental traits/disorders
    Methods
    We first assessed the number of single nucleotide polymorphisms (SNPs) with genome-wide significance for human metabolism (NHGRI-EBI Catalog). These 516 SNPs (216 independent loci) were looked-up in genome-wide association studies for association with body mass index (BMI) and the mental traits/disorders educational attainment, neuroticism, schizophrenia, well-being, anxiety, depressive symptoms, major depressive disorder, autism-spectrum disorder, attention-deficit/hyperactivity disorder, Alzheimer's disease, bipolar disorder, aggressive behavior, and internalizing problems. A strict significance threshold of p < 6.92 × 10−6 was based on the correction for 516 SNPs and all 14 phenotypes, a second less conservative threshold (p < 9.69 × 10−5) on the correction for the 516 SNPs only.
    Results
    19 SNPs located in nine independent loci revealed p-values < 6.92 × 10⁻⁶; the less strict criterion was met by 41 SNPs in 24 independent loci. BMI and schizophrenia showed the most pronounced genetic overlap with human metabolism, with three loci each meeting the strict significance threshold. Overall, genetic variation associated with estimated glomerular filtration rate showed up frequently; single metabolite SNPs were associated with more than one phenotype. Replications in independent samples were obtained for BMI and educational attainment.
    Conclusions
    Approximately 5–10% of the regions involved in the regulation of blood/urine metabolite levels seem to also play a role in BMI and mental traits/disorders and related phenotypes. If validated in metabolomic studies of the respective phenotypes, the associated blood/urine metabolites may enable novel preventive and therapeutic strategies.
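    The two significance thresholds quoted in the Methods follow directly from a Bonferroni correction; the short calculation below (Python, assuming an uncorrected α = 0.05, which the abstract does not state explicitly) reproduces the reported values.

    # Bonferroni-corrected significance thresholds for 516 SNPs and 14 phenotypes.
    # Assumes an uncorrected alpha of 0.05 (not stated in the abstract).
    alpha = 0.05
    n_snps = 516
    n_phenotypes = 14

    strict = alpha / (n_snps * n_phenotypes)   # correction for SNPs x phenotypes
    lenient = alpha / n_snps                   # correction for SNPs only

    print(f"strict threshold:  p < {strict:.2e}")   # ~6.92e-06
    print(f"lenient threshold: p < {lenient:.2e}")  # ~9.69e-05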
  • Heesen, R., Fröhlich, M., Sievers, C., Woensdregt, M., & Dingemanse, M. (2022). Coordinating social action: A primer for the cross-species investigation of communicative repair. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 377(1859): 20210110. doi:10.1098/rstb.2021.0110.

    Abstract

    Human joint action is inherently cooperative, manifested in the collaborative efforts of participants to minimize communicative trouble through interactive repair. Although interactive repair requires sophisticated cognitive abilities, it can be dissected into basic building blocks shared with non-human animal species. A review of the primate literature shows that interactionally contingent signal sequences are at least common among species of nonhuman great apes, suggesting a gradual evolution of repair. To pioneer a cross-species assessment of repair, this paper aims at (i) identifying necessary precursors of human interactive repair; (ii) proposing a coding framework for its comparative study in humans and non-human species; and (iii) using this framework to analyse examples of interactions of humans (adults/children) and non-human great apes. We hope this paper will serve as a primer for cross-species comparisons of communicative breakdowns and how they are repaired.
  • Heilbron, M., Armeni, K., Schoffelen, J.-M., Hagoort, P., & De Lange, F. P. (2022). A hierarchy of linguistic predictions during natural language comprehension. Proceedings of the National Academy of Sciences of the United States of America, 119(32): e2201968119. doi:10.1073/pnas.2201968119.

    Abstract

    Understanding spoken language requires transforming ambiguous acoustic streams into a hierarchy of representations, from phonemes to meaning. It has been suggested that the brain uses prediction to guide the interpretation of incoming input. However, the role of prediction in language processing remains disputed, with disagreement about both the ubiquity and representational nature of predictions. Here, we address both issues by analyzing brain recordings of participants listening to audiobooks, and using a deep neural network (GPT-2) to precisely quantify contextual predictions. First, we establish that brain responses to words are modulated by ubiquitous predictions. Next, we disentangle model-based predictions into distinct dimensions, revealing dissociable neural signatures of predictions about syntactic category (parts of speech), phonemes, and semantics. Finally, we show that high-level (word) predictions inform low-level (phoneme) predictions, supporting hierarchical predictive processing. Together, these results underscore the ubiquity of prediction in language processing, showing that the brain spontaneously predicts upcoming language at multiple levels of abstraction.

    Additional information

    supporting information
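    The Heilbron et al. study above quantifies contextual predictions with GPT-2. As a generic illustration only (not the authors' preprocessing or regression pipeline), the sketch below computes per-word surprisal for a short sentence with the Hugging Face transformers library; the model choice ("gpt2") and the example sentence are assumptions.

    # Generic sketch: per-token surprisal (in bits) under GPT-2.
    # Illustrative only; not the analysis code used in the paper.
    import math
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    text = "The children listened to the story before bedtime."
    ids = tokenizer(text, return_tensors="pt").input_ids

    with torch.no_grad():
        logits = model(ids).logits                          # (1, seq_len, vocab)

    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)   # prob. of each next token
    targets = ids[:, 1:]
    token_logp = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)

    for tok, lp in zip(tokenizer.convert_ids_to_tokens(targets[0].tolist()), token_logp[0]):
        print(f"{tok:>12}  {-lp.item() / math.log(2):6.2f} bits")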
  • Hersh, T. A., Dimond, A. L., Ruth, B. A., Lupica, N. V., Bruce, J. C., Kelley, J. M., King, B. L., & Lutton, B. V. (2018). A role for the CXCR4-CXCL12 axis in the little skate, Leucoraja erinacea. American Journal of Physiology-Regulatory, Integrative and Comparative Physiology, 315, R218-R229. doi:10.1152/ajpregu.00322.2017.

    Abstract

    The interaction between C-X-C chemokine receptor type 4 (CXCR4) and its cognate ligand C-X-C motif chemokine ligand 12 (CXCL12) plays a critical role in regulating hematopoietic stem cell activation and subsequent cellular mobilization. Extensive studies of these genes have been conducted in mammals, but much less is known about the expression and function of CXCR4 and CXCL12 in non-mammalian vertebrates. In the present study, we identify simultaneous expression of CXCR4 and CXCL12 orthologs in the epigonal organ (the primary hematopoietic tissue) of the little skate, Leucoraja erinacea. Genetic and phylogenetic analyses were functionally supported by significant mobilization of leukocytes following administration of Plerixafor, a CXCR4 antagonist and clinically important drug. Our results provide evidence that, as in humans, Plerixafor disrupts CXCR4/CXCL12 binding in the little skate, facilitating release of leukocytes into the bloodstream. Our study illustrates the value of the little skate as a model organism, particularly in studies of hematopoiesis and potentially for preclinical research on hematological and vascular disorders.

  • Hersh, T., King, B., & Lutton, B. V. (2014). Novel bioinformatics tools for analysis of gene expression in the skate, Leucoraja erinacea. The Bulletin, MDI Biological Laboratory, 53, 16-18.
  • Hersh, T. A., Gero, S., Rendell, L., Cantor, M., Weilgart, L., Amano, M., Dawson, S. M., Slooten, E., Johnson, C. M., Kerr, I., Payne, R., Rogan, A., Antunes, R., Andrews, O., Ferguson, E. L., Hom-Weaver, C. A., Norris, T. F., Barkley, Y. M., Merkens, K. P., Oleson, E. M., Doniol-Valcroze, T., Pilkington, J. F., Gordon, J., Fernandes, M., Guerra, M., Hickmott, L., & Whitehead, H. (2022). Evidence from sperm whale clans of symbolic marking in non-human cultures. Proceedings of the National Academy of Sciences of the United States of America, 119(37): e2201692119. doi:10.1073/pnas.2201692119.

    Abstract

    Culture, a pillar of the remarkable ecological success of humans, is increasingly recognized as a powerful force structuring nonhuman animal populations. A key gap between these two types of culture is quantitative evidence of symbolic markers—seemingly arbitrary traits that function as reliable indicators of cultural group membership to conspecifics. Using acoustic data collected from 23 Pacific Ocean locations, we provide quantitative evidence that certain sperm whale acoustic signals exhibit spatial patterns consistent with a symbolic marker function. Culture segments sperm whale populations into behaviorally distinct clans, which are defined based on dialects of stereotyped click patterns (codas). We classified 23,429 codas into types using contaminated mixture models and hierarchically clustered coda repertoires into seven clans based on similarities in coda usage; then we evaluated whether coda usage varied with geographic distance within clans or with spatial overlap between clans. Similarities in within-clan usage of both “identity codas” (coda types diagnostic of clan identity) and “nonidentity codas” (coda types used by multiple clans) decrease as space between repertoire recording locations increases. However, between-clan similarity in identity, but not nonidentity, coda usage decreases as clan spatial overlap increases. This matches expectations if sympatry is related to a measurable pressure to diversify to make cultural divisions sharper, thereby providing evidence that identity codas function as symbolic markers of clan identity. Our study provides quantitative evidence of arbitrary traits, resembling human ethnic markers, conveying cultural identity outside of humans, and highlights remarkable similarities in the distributions of human ethnolinguistic groups and sperm whale clans.
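    The clan analysis summarized above hinges on grouping coda repertoires by similarity in coda-type usage. The toy sketch below shows plain average-linkage hierarchical clustering of made-up usage profiles with SciPy; it is a schematic stand-in and does not implement the contaminated mixture models or the repertoire-similarity measures used in the study.

    # Toy example: hierarchical clustering of coda repertoires by usage similarity.
    # All numbers are invented; the real analysis is far more involved.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import pdist

    rng = np.random.default_rng(0)
    counts = rng.integers(1, 50, size=(10, 6)).astype(float)   # 10 repertoires x 6 coda types
    repertoires = counts / counts.sum(axis=1, keepdims=True)   # usage proportions

    dist = pdist(repertoires, metric="cosine")   # dissimilarity of usage profiles
    tree = linkage(dist, method="average")

    labels = fcluster(tree, t=3, criterion="maxclust")   # cut into 3 "clans"
    print(labels)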
  • Hervais-Adelman, A., Kumar, U., Mishra, R., Tripathi, V., Guleria, A., Singh, J. P., & Huettig, F. (2022). How does literacy affect speech processing? Not by enhancing cortical responses to speech, but by promoting connectivity of acoustic-phonetic and graphomotor cortices. Journal of Neuroscience, 42(47), 8826-8841. doi:10.1523/JNEUROSCI.1125-21.2022.

    Abstract

    Previous research suggests that literacy, specifically learning alphabetic letter-to-phoneme mappings, modifies online speech processing, and enhances brain responses, as indexed by the blood-oxygenation level dependent signal (BOLD), to speech in auditory areas associated with phonological processing (Dehaene et al., 2010). However, alphabets are not the only orthographic systems in use in the world, and hundreds of millions of individuals speak languages that are not written using alphabets. In order to make claims that literacy per se has broad and general consequences for brain responses to speech, one must seek confirmatory evidence from non-alphabetic literacy. To this end, we conducted a longitudinal fMRI study in India probing the effect of literacy in Devanagari, an abugida, on functional connectivity and cerebral responses to speech in 91 variously literate Hindi-speaking male and female human participants. Twenty-two completely illiterate participants underwent six months of reading and writing training. Devanagari literacy increases functional connectivity between acoustic-phonetic and graphomotor brain areas, but we find no evidence that literacy changes brain responses to speech, either in cross-sectional or longitudinal analyses. These findings show that a dramatic reconfiguration of the neurofunctional substrates of online speech processing may not be a universal result of learning to read, and suggest that the influence of writing on speech processing should also be investigated.
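    For readers unfamiliar with the connectivity measure referred to above: functional connectivity between two brain regions is often summarized as the correlation between their BOLD time series, sometimes Fisher z-transformed before group statistics. The sketch below illustrates that idea on synthetic data and is not the specific connectivity analysis reported in the paper.

    # Minimal functional-connectivity sketch: correlation between two ROI
    # time series (synthetic data standing in for BOLD signals).
    import numpy as np

    rng = np.random.default_rng(1)
    n_timepoints = 200
    roi_a = rng.standard_normal(n_timepoints)                      # e.g., acoustic-phonetic ROI
    roi_b = 0.6 * roi_a + 0.8 * rng.standard_normal(n_timepoints)  # e.g., graphomotor ROI

    r = np.corrcoef(roi_a, roi_b)[0, 1]
    z = np.arctanh(r)   # Fisher z-transform, common before group-level tests
    print(f"r = {r:.2f}, z = {z:.2f}")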
  • Hervais-Adelman, A., Egorova, N., & Golestani, N. (2018). Beyond bilingualism: Multilingual experience correlates with caudate volume. Brain Structure and Function, 223(7), 3495-3502. doi:10.1007/s00429-018-1695-0.

    Abstract

    The multilingual brain implements mechanisms that serve to select the appropriate language as a function of the communicative environment. Engaging these mechanisms on a regular basis appears to have consequences for brain structure and function. Studies have implicated the caudate nuclei as important nodes in polyglot language control processes, and have also shown structural differences in the caudate nuclei in bilingual compared to monolingual populations. However, the majority of published work has focused on the categorical differences between monolingual and bilingual individuals, and little is known about whether these findings extend to multilingual individuals, who have even greater language control demands. In the present paper, we present an analysis of the volume and morphology of the caudate nuclei, putamen, pallidum and thalami in 75 multilingual individuals who speak three or more languages. Volumetric analyses revealed a significant relationship between multilingual experience and right caudate volume, as well as a marginally significant relationship with left caudate volume. Vertex-wise analyses revealed a significant enlargement of dorsal and anterior portions of the left caudate nucleus, known to have connectivity with executive brain regions, as a function of multilingual expertise. These results suggest that multilingual expertise might exercise a continuous impact on brain structure, and that as additional languages beyond a second are acquired, the additional demands for linguistic and cognitive control result in modifications to brain structures associated with language management processes.
  • Hervais-Adelman, A., Pefkou, M., & Golestani, N. (2014). Bilingual speech-in-noise: Neural bases of semantic context use in the native language. Brain and Language, 132, 1-6. doi:10.1016/j.bandl.2014.01.009.

    Abstract

    Bilingual listeners comprehend speech-in-noise better in their native than non-native language. This native-language benefit is thought to arise from greater use of top-down linguistic information to assist degraded speech comprehension. Using functional magnetic resonance imaging, we recently showed that left angular gyrus activation is modulated when semantic context is used to assist native language speech-in-noise comprehension (Golestani, Hervais-Adelman, Obleser, & Scott, 2013). Here, we extend the previous work by reanalyzing the previous data alongside the results obtained in the non-native language of the same late bilingual participants. We found a behavioral benefit of semantic context in processing speech-in-noise in the native language only, and the imaging results also revealed a native language context effect in the left angular gyrus. We also find a complementary role of lower-level auditory regions during stimulus-driven processing. Our findings help to elucidate the neural basis of the established native language behavioral benefit of speech-in-noise processing.
  • Hervais-Adelman, A., Moser-Mercer, B., & Golestani, N. (2018). Commentary: Broca pars triangularis constitutes a “hub” of the language-control network during simultaneous language translation. Frontiers in Human Neuroscience, 12: 22. doi:10.3389/fnhum.2018.00022.

    Abstract

    A commentary on
    Broca Pars Triangularis Constitutes a “Hub” of the Language-Control Network during Simultaneous Language Translation

    by Elmer, S. (2016). Front. Hum. Neurosci. 10:491. doi: 10.3389/fnhum.2016.00491

    Elmer (2016) conducted an fMRI investigation of “simultaneous language translation” in five participants. The article presents group and individual analyses of German-to-Italian and Italian-to-German translation, confined to a small set of anatomical regions previously reported to be involved in multilingual control. Here we take the opportunity to discuss concerns regarding certain aspects of the study.
  • Hervais-Adelman, A., Kumar, U., Mishra, R. K., Tripathi, V. N., Guleria, A., Singh, J. P., Eisner, F., & Huettig, F. (2019). Learning to read recycles visual cortical networks without destruction. Science Advances, 5(9): eaax0262. doi:10.1126/sciadv.aax0262.

    Abstract

    Learning to read is associated with the appearance of an orthographically sensitive brain region known as the visual word form area. It has been claimed that development of this area proceeds by impinging upon territory otherwise available for the processing of culturally relevant stimuli such as faces and houses. In a large-scale functional magnetic resonance imaging study of a group of individuals of varying degrees of literacy (from completely illiterate to highly literate), we examined cortical responses to orthographic and nonorthographic visual stimuli. We found that literacy enhances responses to other visual input in early visual areas and enhances representational similarity between text and faces, without reducing the extent of response to nonorthographic input. Thus, acquisition of literacy in childhood recycles existing object representation mechanisms but without destructive competition.

    Additional information

    aax0262_SM.pdf
  • Hessels, R. S., Hooge, I., Snijders, T. M., & Kemner, C. (2014). Is there a limit to the superiority of individuals with ASD in visual search? Journal of Autism and Developmental Disorders, 44, 443-451. doi:10.1007/s10803-013-1886-8.

    Abstract

    Superiority in visual search for individuals diagnosed with autism spectrum disorder (ASD) is a well-reported finding. We administered two visual search tasks to individuals with ASD and matched controls. One showed no difference between the groups, and one did show the expected superior performance for individuals with ASD. These results offer an explanation, formulated in terms of load theory. We suggest that there is a limit to the superiority in visual search for individuals with ASD, related to the perceptual load of the stimuli. When perceptual load becomes so high that no additional task-(ir)relevant information can be processed, performance will be based on single stimulus identification, in which no differences between individuals with ASD and controls have been demonstrated.
  • Heyne, H. O., Singh, T., Stamberger, H., Jamra, R. A., Caglayan, H., Craiu, D., Guerrini, R., Helbig, K. L., Koeleman, B. P. C., Kosmicki, J. A., Linnankivi, T., May, P., Muhle, H., Møller, R. S., Neubauer, B. A., Palotie, A., Pendziwiat, M., Striano, P., Tang, S., Wu, S., EuroEPINOMICS RES Consortium, De Kovel, C. G. F., Poduri, A., Weber, Y. G., Weckhuysen, S., Sisodiya, S. M., Daly, M. J., Helbig, I., Lal, D., & Lemke, J. R. (2018). De novo variants in neurodevelopmental disorders with epilepsy. Nature Genetics, 50, 1048-1053. doi:10.1038/s41588-018-0143-7.

    Abstract

    Epilepsy is a frequent feature of neurodevelopmental disorders (NDDs), but little is known about genetic differences between NDDs with and without epilepsy. We analyzed de novo variants (DNVs) in 6,753 parent–offspring trios ascertained to have different NDDs. In the subset of 1,942 individuals with NDDs with epilepsy, we identified 33 genes with a significant excess of DNVs, of which SNAP25 and GABRB2 had previously only limited evidence of disease association. Joint analysis of all individuals with NDDs also implicated CACNA1E as a novel disease-associated gene. Comparing NDDs with and without epilepsy, we found missense DNVs, DNVs in specific genes, age of recruitment, and severity of intellectual disability to be associated with epilepsy. We further demonstrate the extent to which our results affect current genetic testing as well as treatment, emphasizing the benefit of accurate genetic diagnosis in NDDs with epilepsy.
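    To illustrate what an "excess of de novo variants" means statistically, the sketch below runs a one-sided Poisson test of an observed per-gene DNV count against the count expected from a mutation-rate model, a common approach in this literature. The mutation rate and observed count are invented, and this is not the authors' exact statistical model.

    # Generic excess test: are there more de novo variants in a gene than expected
    # under a Poisson mutation model? All rates and counts are hypothetical.
    from scipy.stats import poisson

    n_trios = 1942            # NDD-with-epilepsy trios, from the abstract
    per_copy_rate = 2e-4      # hypothetical de novo rate per gene copy per trio
    expected = 2 * n_trios * per_copy_rate   # two gene copies per child

    observed = 5              # hypothetical observed DNV count in that gene

    p_value = poisson.sf(observed - 1, expected)   # P(X >= observed)
    print(f"expected {expected:.2f}, observed {observed}, p = {p_value:.3g}")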
  • Heyselaar, E., Mazaheri, A., Hagoort, P., & Segaert, K. (2018). Changes in alpha activity reveal that social opinion modulates attention allocation during face processing. NeuroImage, 174, 432-440. doi:10.1016/j.neuroimage.2018.03.034.

    Abstract

    Participants’ performance differs when conducting a task in the presence of a secondary individual; moreover, the opinion the participant has of this individual also plays a role. Using EEG, we investigated how previous interactions with, and evaluations of, an avatar in virtual reality subsequently influenced attentional allocation to the face of that avatar. We focused on changes in the alpha activity as an index of attentional allocation. We found that the onset of the face of an avatar with whom the participant had developed a rapport induced greater alpha suppression. This suggests greater attentional resources are allocated to the interacted-with avatars. The evaluative ratings of the avatar induced a U-shaped change in alpha suppression, such that participants paid most attention when the avatar was rated as average. These results suggest that attentional allocation is an important element of how behaviour is altered in the presence of a secondary individual and is modulated by our opinion of that individual.

    Additional information

    mmc1.docx
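    The alpha suppression discussed above is derived from spectral power in roughly the 8–12 Hz band. Below is a minimal, generic estimate of alpha power for one synthetic EEG channel using Welch's method in SciPy; the sampling rate and signal are assumptions, and the authors' actual time-frequency pipeline is not reproduced.

    # Minimal sketch: alpha-band (8-12 Hz) power of one EEG channel via Welch's method.
    # Synthetic data; not the time-frequency analysis used in the paper.
    import numpy as np
    from scipy.signal import welch

    fs = 500                               # sampling rate in Hz (assumed)
    t = np.arange(0, 10, 1 / fs)
    rng = np.random.default_rng(2)
    eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)  # 10 Hz rhythm + noise

    freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)
    alpha = (freqs >= 8) & (freqs <= 12)
    print(f"mean alpha power: {psd[alpha].mean():.3f}")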
  • Heyselaar, E., & Segaert, K. (2019). Memory encoding of syntactic information involves domain-general attentional resources. Evidence from dual-task studies. Quarterly Journal of Experimental Psychology, 72(6), 1285-1296. doi:10.1177/1747021818801249.

    Abstract

    We investigate the type of attention (domain-general or language-specific) used during syntactic processing. We focus on syntactic priming: In this task, participants listen to a sentence that describes a picture (prime sentence), followed by a picture the participants need to describe (target sentence). We measure the proportion of times participants use the syntactic structure they heard in the prime sentence to describe the current target sentence as a measure of syntactic processing. Participants simultaneously conducted a motion-object tracking (MOT) task, a task commonly used to tax domain-general attentional resources. We manipulated the number of objects the participant had to track; we thus measured participants’ ability to process syntax while their attention is not, slightly, or overly taxed. Performance in the MOT task was significantly worse when it was conducted as a dual task than as a single task. We observed an inverted U-shaped curve on priming magnitude when conducting the MOT task concurrently with prime sentences (i.e., memory encoding), but no effect when it was conducted with target sentences (i.e., memory retrieval). Our results illustrate how, during the encoding of syntactic information, domain-general attention differentially affects syntactic processing, whereas during the retrieval of syntactic information domain-general attention does not influence syntactic processing.
  • Hickman, L. J., Keating, C. T., Ferrari, A., & Cook, J. L. (2022). Skin conductance as an index of alexithymic traits in the general population. Psychological Reports, 125(3), 1363-1379. doi:10.1177/00332941211005118.

    Abstract

    Alexithymia concerns a difficulty identifying and communicating one’s own emotions, and a tendency towards externally-oriented thinking. Recent work argues that such alexithymic traits are due to altered arousal response and poor subjective awareness of “objective” arousal responses. Although there are individual differences within the general population in identifying and describing emotions, extant research has focused on highly alexithymic individuals. Here we investigated whether mean arousal and concordance between subjective and objective arousal underpin individual differences in alexithymic traits in a general population sample. Participants rated subjective arousal responses to 60 images from the International Affective Picture System whilst their skin conductance was recorded. The Autism Quotient was employed to control for autistic traits in the general population. Analysis using linear models demonstrated that mean arousal significantly predicted Toronto Alexithymia Scale scores above and beyond autistic traits, but concordance scores did not. This indicates that, whilst objective arousal is a useful predictor in populations that are both above and below the cut-off values for alexithymia, concordance scores between objective and subjective arousal do not predict variation in alexithymic traits in the general population.
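    The "above and beyond" claim in the abstract above corresponds to a hierarchical regression: alexithymia (TAS-20) scores are regressed on mean arousal while autistic traits (AQ) are controlled for. The sketch below shows that structure with statsmodels on invented data; variable names and effect sizes are assumptions.

    # Generic sketch: does mean arousal predict TAS-20 scores over and above AQ?
    # Synthetic data only; not the authors' dataset or exact model.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)
    n = 120
    df = pd.DataFrame({
        "aq": rng.normal(20, 6, n),               # autistic traits
        "mean_arousal": rng.normal(0.4, 0.1, n),  # mean skin-conductance response
    })
    df["tas"] = 45 + 0.5 * df["aq"] + 30 * df["mean_arousal"] + rng.normal(0, 5, n)

    control = smf.ols("tas ~ aq", data=df).fit()               # AQ only
    full = smf.ols("tas ~ aq + mean_arousal", data=df).fit()   # AQ + arousal

    print(full.params)
    print(f"R^2 change: {full.rsquared - control.rsquared:.3f}")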
  • Hilverman, C., Clough, S., Duff, M. C., & Cook, S. W. (2018). Patients with hippocampal amnesia successfully integrate gesture and speech. Neuropsychologia, 117, 332-338. doi:10.1016/j.neuropsychologia.2018.06.012.

    Abstract

    During conversation, people integrate information from co-speech hand gestures with information in spoken language. For example, after hearing the sentence, "A piece of the log flew up and hit Carl in the face" while viewing a gesture directed at the nose, people tend to later report that the log hit Carl in the nose (information only in gesture) rather than in the face (information in speech). The cognitive and neural mechanisms that support the integration of gesture with speech are unclear. One possibility is that the hippocampus, known for its role in relational memory and information integration, is necessary for integrating gesture and speech. To test this possibility, we examined how patients with hippocampal amnesia and healthy and brain-damaged comparison participants express information from gesture in a narrative retelling task. Participants watched videos of an experimenter telling narratives that included hand gestures that contained supplementary information. Participants were asked to retell the narratives and their spoken retellings were assessed for the presence of information from gesture. For features that had been accompanied by supplementary gesture, patients with amnesia retold fewer of these features overall and produced fewer retellings that matched the speech from the narrative. Yet their retellings included features that contained information that had been present uniquely in gesture in amounts that were not reliably different from comparison groups. Thus, a functioning hippocampus is not necessary for gesture-speech integration over short timescales. Providing unique information in gesture may enhance communication for individuals with declarative memory impairment, possibly via non-declarative memory mechanisms.
  • Hoedemaker, R. S., & Gordon, P. C. (2014). Embodied language comprehension: Encoding-based and goal-driven processes. Journal of Experimental Psychology: General, 143(2), 914-929. doi:10.1037/a0032348.

    Abstract

    Theories of embodied language comprehension have proposed that language is understood through perceptual simulation of the sensorimotor characteristics of its meaning. Strong support for this claim requires demonstration of encoding-based activation of sensorimotor representations that is distinct from task-related or goal-driven processes. Participants in 3 eye-tracking experiments were presented with triplets of either numbers or object and animal names. In Experiment 1, participants indicated whether the size of the referent of the middle object or animal name was in between the size of the 2 outer items. In Experiment 2, the object and animal names were encoded for an immediate recognition memory task. In Experiment 3, participants completed the same comparison task of Experiment 1 for both words and numbers. During the comparison tasks, word and number decision times showed a symbolic distance effect, such that response time was inversely related to the size difference between the items. A symbolic distance effect was also observed for animal and object encoding times in cases where encoding time likely reflected some goal-driven processes as well. When semantic size was irrelevant to the task (Experiment 2), it had no effect on word encoding times. Number encoding times showed a numerical distance priming effect: Encoding time increased with numerical difference between items. Together these results suggest that while activation of numerical magnitude representations is encoding-based as well as goal-driven, activation of size information associated with words is goal-driven and does not occur automatically during encoding. This conclusion challenges strong theories of embodied cognition which claim that language comprehension consists of activation of analog sensorimotor representations irrespective of higher level processes related to context or task-specific goals.
  • Hoedemaker, R. S., & Gordon, P. C. (2014). It takes time to prime: Semantic priming in the ocular lexical decision task. Journal of Experimental Psychology: Human Perception and Performance, 40(6), 2179-2197. doi:10.1037/a0037677.

    Abstract

    Two eye-tracking experiments were conducted in which the manual response mode typically used in lexical decision tasks (LDTs) was replaced with an eye-movement response through a sequence of 3 words. This ocular LDT combines the explicit control of task goals found in LDTs with the highly practiced ocular response used in reading text. In Experiment 1, forward saccades indicated an affirmative lexical decision (LD) on each word in the triplet. In Experiment 2, LD responses were delayed until all 3 letter strings had been read. The goal of the study was to evaluate the contribution of task goals and response mode to semantic priming. Semantic priming is very robust in tasks that involve recognition of words in isolation, such as LDT, but limited during text reading, as measured using eye movements. Gaze durations in both experiments showed robust semantic priming even though ocular response times were much shorter than manual LDs for the same words in the English Lexicon Project. Ex-Gaussian distribution fits revealed that the priming effect was concentrated in estimates of tau (τ), meaning that priming was most pronounced in the slow tail of the distribution. This pattern shows differential use of the prime information, which may be more heavily recruited in cases in which the LD is difficult, as indicated by longer response times. Compared with the manual LD responses, ocular LDs provide a more sensitive measure of this task-related influence on word recognition as measured by the LDT.
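    Since the result above is stated in terms of the ex-Gaussian τ parameter, a brief generic illustration may help: SciPy's exponnorm distribution is an ex-Gaussian, and its fitted parameters convert to the familiar μ, σ, and τ. The simulated response times and fitting choices below are assumptions and do not reproduce the authors' distributional analysis.

    # Generic ex-Gaussian fit to simulated response times with SciPy's exponnorm.
    # Not the authors' analysis; parameter values are invented.
    import numpy as np
    from scipy.stats import exponnorm

    rng = np.random.default_rng(4)
    rts = rng.normal(500, 50, size=1000) + rng.exponential(150, size=1000)  # RTs in ms

    # exponnorm uses K = tau / sigma, loc = mu, scale = sigma.
    K, mu, sigma = exponnorm.fit(rts)
    tau = K * sigma
    print(f"mu = {mu:.1f} ms, sigma = {sigma:.1f} ms, tau = {tau:.1f} ms")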
  • Hoedemaker, R. S., & Meyer, A. S. (2019). Planning and coordination of utterances in a joint naming task. Journal of Experimental Psychology: Learning, Memory, and Cognition, 45(4), 732-752. doi:10.1037/xlm0000603.

    Abstract

    Dialogue requires speakers to coordinate. According to the model of dialogue as joint action, interlocutors achieve this coordination by corepresenting their own and each other’s task share in a functionally equivalent manner. In two experiments, we investigated this corepresentation account using an interactive joint naming task in which pairs of participants took turns naming sets of objects on a shared display. Speaker A named the first, or the first and third object, and Speaker B named the second object. In control conditions, Speaker A named one, two, or all three objects and Speaker B remained silent. We recorded the timing of the speakers’ utterances and Speaker A’s eye movements. Interturn pause durations indicated that the speakers effectively coordinated their utterances in time. Speaker A’s speech onset latencies depended on the number of objects they named, but were unaffected by Speaker B’s naming task. This suggests speakers were not fully incorporating their partner’s task into their own speech planning. Moreover, Speaker A’s eye movements indicated that they were much less likely to attend to objects their partner named than to objects they named themselves. When speakers did inspect their partner’s objects, viewing times were too short to suggest that speakers were retrieving these object names as if they were planning to name the objects themselves. These results indicate that speakers prioritized planning their own responses over attending to their interlocutor’s task and suggest that effective coordination can be achieved without full corepresentation of the partner’s task.
  • Hoeks, J. C. J., Vonk, W., & Schriefers, H. (2002). Processing coordinated structures in context: The effect of topic-structure on ambiguity resolution. Journal of Memory and Language, 46(1), 99-119. doi:10.1006/jmla.2001.2800.

    Abstract

    When a sentence such as The model embraced the designer and the photographer laughed is read, the noun phrase the photographer is temporarily ambiguous: It can be either one of the objects of embraced (NP-coordination) or the subject of a new, conjoined sentence (S-coordination). It has been shown for a number of languages, including Dutch (the language used in this study), that readers prefer NP-coordination over S-coordination, at least in isolated sentences. In the present paper, it will be suggested that NP-coordination is preferred because it is the simpler of the two options in terms of topic-structure; in NP-coordinations there is only one topic, whereas S-coordinations contain two. Results from off-line (sentence completion) and online studies (a self-paced reading and an eye tracking experiment) support this topic-structure explanation. The processing difficulty associated with S-coordinated sentences disappeared when these sentences followed contexts favoring a two-topic continuation. This finding establishes topic-structure as an important factor in online sentence processing.
  • Hoey, E. (2018). How speakers continue with talk after a lapse in conversation. Research on Language and Social Interaction, 51(3), 329-346. doi:10.1080/08351813.2018.1485234.

    Abstract

    How do conversational participants continue with turn-by-turn talk after a momentary lapse? If all participants forgo the option to speak at possible sequence completion, an extended silence may emerge that can indicate a lack of anything to talk about next. For the interaction to proceed recognizably as a conversation, the postlapse turn needs to implicate more talk. Using conversation analysis, I examine three practical alternatives regarding sequentially implicative postlapse turns: Participants may move to end the interaction, continue with some prior matter, or start something new. Participants are shown using resources grounded in the interaction’s overall structural organization, the materials from the interaction-so-far, the mentionables they bring to interaction, and the situated environment itself. Comparing these alternatives, there’s suggestive quantitative evidence for a preference for continuation. The analysis of lapse resolution shows lapses as places for the management of multiple possible courses of action. Data are in U.S. and UK English.
  • Hoey, E. (2014). Sighing in interaction: Somatic, semiotic, and social. Research on Language and Social Interaction, 47(2), 175-200. doi:10.1080/08351813.2014.900229.

    Abstract

    Participants in interaction routinely orient to gaze, bodily comportment, and nonlexical vocalizations as salient for developing an analysis of the unfolding course of action. In this article, I address the respiratory phenomenon of sighing, the aim being to describe sighing as a situated practice that contributes to the achievement of particular actions in interaction. I report on the various actions sighs implement or construct and how their positioning and delivery informs participants’ understandings of their significance for interaction. Data are in American English.
  • Hogan-Brown, A. L., Hoedemaker, R. S., Gordon, P. C., & Losh, M. (2014). Eye-voice span during rapid automatized naming: Evidence of reduced automaticity in individuals with autism spectrum disorder and their siblings. Journal of Neurodevelopmental Disorders, 6(1): 33. doi:10.1186/1866-1955-6-33.

    Abstract

    Background: Individuals with autism spectrum disorder (ASD) and their parents demonstrate impaired performance in rapid automatized naming (RAN), a task that recruits a variety of linguistic and executive processes. Though the basic processes that contribute to RAN differences remain unclear, eye-voice relationships, as measured through eye tracking, can provide insight into cognitive and perceptual processes contributing to RAN performance. For example, in RAN, eye-voice span (EVS), the distance ahead the eyes are when articulation of a target item's label begins, is an indirect measure of automaticity of the processes underlying RAN. The primary objective of this study was to investigate automaticity in naming processes, as indexed by EVS during RAN. The secondary objective was to characterize RAN difficulties in individuals with ASD and their siblings. Methods: Participants (aged 15 – 33 years) included 21 individuals with ASD, 23 siblings of individuals with ASD, and 24 control subjects, group-matched on chronological age. Naming time, frequency of errors, and EVS were measured during a RAN task and compared across groups. Results: A stepwise pattern of RAN performance was observed, with individuals with ASD demonstrating the slowest naming across all RAN conditions, controls demonstrating the fastest naming, and siblings demonstrating intermediate performance. Individuals with ASD exhibited smaller EVSs than controls on all RAN conditions, and siblings exhibited smaller EVSs during number naming (the most highly automatized type of naming). EVSs were correlated with naming times in controls only, and only in the more automatized conditions. Conclusions: These results suggest that reduced automaticity in the component processes of RAN may underpin differences in individuals with ASD and their siblings. These findings also provide further support that RAN abilities are impacted by genetic liability to ASD. This study has important implications for understanding the underlying skills contributing to language-related deficits in ASD.
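    Eye-voice span, as defined in the abstract above, can be made concrete with a toy computation: at the speech onset of each named item, find the item currently being fixated and take the difference in item positions. The fixation and onset data below are invented solely to show the bookkeeping.

    # Toy eye-voice span (EVS) computation on invented fixation and speech-onset data.
    fixations = [          # (onset_ms, offset_ms, index of item being fixated)
        (0, 300, 0), (300, 650, 1), (650, 900, 2), (900, 1300, 3), (1300, 1700, 4),
    ]
    speech_onsets = [(0, 250), (1, 700), (2, 1150), (3, 1600)]   # (item index, onset_ms)

    def fixated_item(t_ms):
        for onset, offset, item in fixations:
            if onset <= t_ms < offset:
                return item
        return None   # no fixation at that moment (not handled in this toy example)

    for item, onset in speech_onsets:
        evs = fixated_item(onset) - item   # items ahead of the voice
        print(f"item {item}: EVS = {evs}")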
  • Holler, J., Drijvers, L., Rafiee, A., & Majid, A. (2022). Embodied space-pitch associations are shaped by language. Cognitive Science, 46(2): e13083. doi:10.1111/cogs.13083.

    Abstract

    Height-pitch associations are claimed to be universal and independent of language, but this claim remains controversial. The present study sheds new light on this debate with a multimodal analysis of individual sound and melody descriptions obtained in an interactive communication paradigm with speakers of Dutch and Farsi. The findings reveal that, in contrast to Dutch speakers, Farsi speakers do not use a height-pitch metaphor consistently in speech. Both Dutch and Farsi speakers’ co-speech gestures did reveal a mapping of higher pitches to higher space and lower pitches to lower space, and this gesture space-pitch mapping tended to co-occur with corresponding spatial words (high-low). However, this mapping was much weaker in Farsi speakers than Dutch speakers. This suggests that cross-linguistic differences shape the conceptualization of pitch and further calls into question the universality of height-pitch associations.

    Additional information

    supporting information
