Publications

  • Ter Bekke, M., Drijvers, L., & Holler, J. (2024). Hand gestures have predictive potential during conversation: An investigation of the timing of gestures in relation to speech. Cognitive Science, 48(1): e13407. doi:10.1111/cogs.13407.

    Abstract

    During face-to-face conversation, transitions between speaker turns are incredibly fast. These fast turn exchanges seem to involve next speakers predicting upcoming semantic information, such that next turn planning can begin before a current turn is complete. Given that face-to-face conversation also involves the use of communicative bodily signals, an important question is how bodily signals such as co-speech hand gestures play into these processes of prediction and fast responding. In this corpus study, we found that hand gestures that depict or refer to semantic information started before the corresponding information in speech, which held both for the onset of the gesture as a whole and for the onset of the stroke (the most meaningful part of the gesture). This early timing potentially allows listeners to use the gestural information to predict the corresponding semantic information to be conveyed in speech. Moreover, we provided further evidence that questions with gestures got faster responses than questions without gestures. However, we found no evidence for the idea that how much a gesture precedes its lexical affiliate (i.e., its predictive potential) relates to how fast responses were given. The findings presented here highlight the importance of the temporal relation between speech and gesture and help to illuminate the potential mechanisms underpinning multimodal language processing during face-to-face conversation.
  • Ter Bekke, M., Drijvers, L., & Holler, J. (2024). Gestures speed up responses to questions. Language, Cognition and Neuroscience, 39(4), 423-430. doi:10.1080/23273798.2024.2314021.

    Abstract

    Most language use occurs in face-to-face conversation, which involves rapid turn-taking. Seeing communicative bodily signals in addition to hearing speech may facilitate such fast responding. We tested whether this holds for co-speech hand gestures by investigating whether these gestures speed up button press responses to questions. Sixty native speakers of Dutch viewed videos in which an actress asked yes/no-questions, either with or without a corresponding iconic hand gesture. Participants answered the questions as quickly and accurately as possible via button press. Gestures did not impact response accuracy, but crucially, gestures sped up responses, suggesting that response planning may be finished earlier when gestures are seen. How much gestures sped up responses was not related to their timing in the question or their timing with respect to the corresponding information in speech. Overall, these results are in line with the idea that multimodality may facilitate fast responding during face-to-face conversation.
  • Ter Bekke, M., Levinson, S. C., Van Otterdijk, L., Kühn, M., & Holler, J. (2024). Visual bodily signals and conversational context benefit the anticipation of turn ends. Cognition, 248: 105806. doi:10.1016/j.cognition.2024.105806.

    Abstract

    The typical pattern of alternating turns in conversation seems trivial at first sight. But a closer look quickly reveals the cognitive challenges involved, with much of it resulting from the fast-paced nature of conversation. One core ingredient to turn coordination is the anticipation of upcoming turn ends so as to be able to ready oneself for providing the next contribution. Across two experiments, we investigated two variables inherent to face-to-face conversation, the presence of visual bodily signals and preceding discourse context, in terms of their contribution to turn end anticipation. In a reaction time paradigm, participants anticipated conversational turn ends better when seeing the speaker and their visual bodily signals than when they did not, especially so for longer turns. Likewise, participants were better able to anticipate turn ends when they had access to the preceding discourse context than when they did not, and especially so for longer turns. Critically, the two variables did not interact, showing that visual bodily signals retain their influence even in the context of preceding discourse. In a pre-registered follow-up experiment, we manipulated the visibility of the speaker's head, eyes and upper body (i.e. torso + arms). Participants were better able to anticipate turn ends when the speaker's upper body was visible, suggesting a role for manual gestures in turn end anticipation. Together, these findings show that seeing the speaker during conversation may critically facilitate turn coordination in interaction.
  • Terporten, R., Huizeling, E., Heidlmayr, K., Hagoort, P., & Kösem, A. (2024). The interaction of context constraints and predictive validity during sentence reading. Journal of Cognitive Neuroscience, 36(2), 225-238. doi:10.1162/jocn_a_02082.

    Abstract

    Words are not processed in isolation; instead, they are commonly embedded in phrases and sentences. The sentential context influences the perception and processing of a word. However, how this is achieved by brain processes and whether predictive mechanisms underlie this process remain a debated topic. Here, we employed an experimental paradigm in which we orthogonalized sentence context constraints and predictive validity, which was defined as the ratio of congruent to incongruent sentence endings within the experiment. While recording electroencephalography, participants read sentences with three levels of sentential context constraints (high, medium, and low). Participants were also separated into two groups that differed in their ratio of valid congruent to incongruent target words that could be predicted from the sentential context. For both groups, we investigated modulations of alpha power before, and N400 amplitude modulations after, target word onset. The results reveal that the N400 amplitude gradually decreased with higher context constraints and cloze probability. In contrast, alpha power was not significantly affected by context constraint. Neither the N400 nor alpha power was significantly affected by changes in predictive validity.
  • Thothathiri, M., Basnakova, J., Lewis, A. G., & Briand, J. M. (2024). Fractionating difficulty during sentence comprehension using functional neuroimaging. Cerebral Cortex, 34(2): bhae032. doi:10.1093/cercor/bhae032.

    Abstract

    Sentence comprehension is highly practiced and largely automatic, but this belies the complexity of the underlying processes. We used functional neuroimaging to investigate garden-path sentences that cause difficulty during comprehension, in order to unpack the different processes used to support sentence interpretation. By investigating garden-path and other types of sentences within the same individuals, we functionally profiled different regions within the temporal and frontal cortices in the left hemisphere. The results revealed that different aspects of comprehension difficulty are handled by left posterior temporal, left anterior temporal, ventral left frontal, and dorsal left frontal cortices. The functional profiles of these regions likely lie along a spectrum of specificity to generality, including language-specific processing of linguistic representations, more general conflict resolution processes operating over linguistic representations, and processes for handling difficulty in general. These findings suggest that difficulty is not unitary and that there is a role for a variety of linguistic and non-linguistic processes in supporting comprehension.

    Additional information

    supplementary information
  • Tilmatine, M., Hubers, F., & Hintz, F. (2021). Exploring individual differences in recognizing idiomatic expressions in context. Journal of Cognition, 4(1): 37. doi:10.5334/joc.183.

    Abstract

    Written language comprehension requires readers to integrate incoming information with stored mental knowledge to construct meaning. Literally plausible idiomatic expressions can activate both figurative and literal interpretations, which convey different meanings. Previous research has shown that contexts biasing the figurative or literal interpretation of an idiom can facilitate its processing. Moreover, there is evidence that processing of idiomatic expressions is subject to individual differences in linguistic knowledge and cognitive-linguistic skills. It is therefore conceivable that individuals vary in the extent to which they experience context-induced facilitation in processing idiomatic expressions. To explore the interplay between reader-related variables and contextual facilitation, we conducted a self-paced reading experiment. We recruited participants who had recently completed a battery of 33 behavioural tests measuring individual differences in linguistic knowledge, general cognitive skills and linguistic processing skills. In the present experiment, a subset of these participants read idiomatic expressions that were either presented in isolation or preceded by a figuratively or literally biasing context. We conducted analyses on the reading times of idiom-final nouns and the word thereafter (spill-over region) across the three conditions, including participants’ scores from the individual differences battery. Our results showed no main effect of the preceding context, but substantial variation in contextual facilitation between readers. We observed main effects of participants’ word reading ability and non-verbal intelligence on reading times as well as an interaction between condition and linguistic knowledge. We encourage interested researchers to exploit the present dataset for follow-up studies on individual differences in idiom processing.
  • Tilot, A. K., Khramtsova, E. A., Liang, D., Grasby, K. L., Jahanshad, N., Painter, J., Colodro-Conde, L., Bralten, J., Hibar, D. P., Lind, P. A., Liu, S., Brotman, S. M., Thompson, P. M., Medland, S. E., Macciardi, F., Stranger, B. E., Davis, L. K., Fisher, S. E., & Stein, J. L. (2021). The evolutionary history of common genetic variants influencing human cortical surface area. Cerebral Cortex, 31(4), 1873-1887. doi:10.1093/cercor/bhaa327.

    Abstract

    Structural brain changes along the lineage leading to modern Homo sapiens contributed to our distinctive cognitive and social abilities. However, the evolutionarily relevant molecular variants impacting key aspects of neuroanatomy are largely unknown. Here, we integrate evolutionary annotations of the genome at diverse timescales with common variant associations from large-scale neuroimaging genetic screens. We find that alleles with evidence of recent positive polygenic selection over the past 2000–3000 years are associated with increased surface area (SA) of the entire cortex, as well as specific regions, including those involved in spoken language and visual processing. Therefore, polygenic selective pressures impact the structure of specific cortical areas even over relatively recent timescales. Moreover, common sequence variation within human gained enhancers active in the prenatal cortex is associated with postnatal global SA. We show that such variation modulates the function of a regulatory element of the developmentally relevant transcription factor HEY2 in human neural progenitor cells and is associated with structural changes in the inferior frontal cortex. These results indicate that non-coding genomic regions active during prenatal cortical development are involved in the evolution of human brain structure and identify novel regulatory elements and genes impacting modern human brain structure.
  • Titus, A., Dijkstra, T., Willems, R. M., & Peeters, D. (2024). Beyond the tried and true: How virtual reality, dialog setups, and a focus on multimodality can take bilingual language production research forward. Neuropsychologia, 193: 108764. doi:10.1016/j.neuropsychologia.2023.108764.

    Abstract

    Bilinguals possess the ability of expressing themselves in more than one language, and typically do so in contextually rich and dynamic settings. Theories and models have indeed long considered context factors to affect bilingual language production in many ways. However, most experimental studies in this domain have failed to fully incorporate linguistic, social, or physical context aspects, let alone combine them in the same study. Indeed, most experimental psycholinguistic research has taken place in isolated and constrained lab settings with carefully selected words or sentences, rather than under rich and naturalistic conditions. We argue that the most influential experimental paradigms in the psycholinguistic study of bilingual language production fall short of capturing the effects of context on language processing and control presupposed by prominent models. This paper therefore aims to enrich the methodological basis for investigating context aspects in current experimental paradigms and thereby move the field of bilingual language production research forward theoretically. After considering extensions of existing paradigms proposed to address context effects, we present three far-ranging innovative proposals, focusing on virtual reality, dialog situations, and multimodality in the context of bilingual language production.
  • Torres Borda, L., Jadoul, Y., Rasilo, H., Salazar-Casals, A., & Ravignani, A. (2021). Vocal plasticity in harbour seal pups. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 376(1840): 20200456. doi:10.1098/rstb.2020.0456.

    Abstract

    Vocal plasticity can occur in response to environmental and biological factors, including conspecifics' vocalizations and noise. Pinnipeds are one of the few mammalian groups capable of vocal learning, and are therefore relevant to understanding the evolution of vocal plasticity in humans and other animals. Here, we investigate the vocal plasticity of harbour seals (Phoca vitulina), a species with vocal learning abilities observed in adulthood but not puppyhood. To evaluate early mammalian vocal development, we tested 1- to 3-week-old seal pups. We tailored noise playbacks to this species and age to induce seal pups to shift their fundamental frequency (f0), rather than adapt call amplitude or temporal characteristics. We exposed individual pups to low- and high-intensity bandpass-filtered noise, which spanned—and masked—their typical range of f0; simultaneously, we recorded pups' spontaneous calls. Unlike most mammals, pups modified their vocalizations by lowering their f0 in response to increased noise. This modulation was precise and adapted to the particular experimental manipulation of the noise condition. In addition, higher levels of noise induced less dispersion around the mean f0, suggesting that pups may have actively focused their phonatory efforts to target lower frequencies. Noise did not seem to affect call amplitude. However, one seal showed two characteristics of the Lombard effect known for human speech in noise: a significant increase in call amplitude and a flattening of spectral tilt. Our relatively low noise levels may have favoured f0 modulation while inhibiting amplitude adjustments. This lowering of f0 is unusual, as most animals commonly display no such f0 shift. Our data represent a relatively rare case in mammalian neonates, and have implications for the evolution of vocal plasticity and vocal learning across species, including humans.

    Additional information

    supplement
  • Tourtouri, E. N., Delogu, F., & Crocker, M. W. (2021). Rational redundancy in referring expressions: Evidence from event-related potentials. Cognitive Science, 45(12): e13071. doi:10.1111/cogs.13071.

    Abstract

    In referential communication, Grice's Maxim of Quantity is thought to imply that utterances conveying unnecessary information should incur comprehension difficulties. There is, however, considerable evidence that speakers frequently encode redundant information in their referring expressions, raising the question as to whether such overspecifications hinder listeners' processing. Evidence from previous work is inconclusive, and mostly comes from offline studies. In this article, we present two event-related potential (ERP) experiments, investigating the real-time comprehension of referring expressions that contain redundant adjectives in complex visual contexts. Our findings provide support for both Gricean and bounded-rational accounts. We argue that these seemingly incompatible results can be reconciled if common ground is taken into account. We propose a bounded-rational account of overspecification, according to which even redundant words can be beneficial to comprehension to the extent that they facilitate the reduction of listeners' uncertainty regarding the target referent.
  • Trompenaars, T., Kaluge, T. A., Sarabi, R., & De Swart, P. (2021). Cognitive animacy and its relation to linguistic animacy: Evidence from Japanese and Persian. Language Sciences, 86: 101399. doi:10.1016/j.langsci.2021.101399.

    Abstract

    Animacy, commonly defined as the distinction between living and non-living entities, is a useful notion in cognitive science and linguistics employed to describe and predict variation in psychological and linguistic behaviour. In the (psycho)linguistics literature we find linguistic animacy dichotomies which are (implicitly) assumed to correspond to biological dichotomies. We argue this is problematic, as it leaves us without a cognitively grounded, universal description for non-prototypical cases. We show that ‘animacy’ in language can be better understood as universally emerging from a gradual, cognitive property by collecting animacy ratings for a great range of nouns from Japanese and Persian. We used these cognitive ratings in turn to predict linguistic variation in these languages traditionally explained through dichotomous distinctions. We show that whilst (speakers of) languages may subtly differ in their conceptualisation of animacy, universality may be found in the process of mapping conceptual animacy to linguistic variation.
  • Trujillo, J. P., & Holler, J. (2021). The kinematics of social action: Visual signals provide cues for what interlocutors do in conversation. Brain Sciences, 11: 996. doi:10.3390/brainsci11080996.

    Abstract

    During natural conversation, people must quickly understand the meaning of what the other speaker is saying. This concerns not just the semantic content of an utterance, but also the social action (i.e., what the utterance is doing—requesting information, offering, evaluating, checking mutual understanding, etc.) that the utterance is performing. The multimodal nature of human language raises the question of whether visual signals may contribute to the rapid processing of such social actions. However, while previous research has shown that how we move reveals the intentions underlying instrumental actions, we do not know whether the intentions underlying fine-grained social actions in conversation are also revealed in our bodily movements. Using a corpus of dyadic conversations combined with manual annotation and motion tracking, we analyzed the kinematics of the torso, head, and hands during the asking of questions. Manual annotation categorized these questions into six more fine-grained social action types (i.e., request for information, other-initiated repair, understanding check, stance or sentiment, self-directed, active participation). We demonstrate, for the first time, that the kinematics of the torso, head and hands differ between some of these different social action categories based on a 900 ms time window that captures movements starting slightly prior to or within 600 ms after utterance onset. These results provide novel insights into the extent to which our intentions shape the way that we move, and provide new avenues for understanding how this phenomenon may facilitate the fast communication of meaning in conversational interaction and social action.

    Additional information

    analyses scripts
  • Trujillo, J. P., Ozyurek, A., Holler, J., & Drijvers, L. (2021). Speakers exhibit a multimodal Lombard effect in noise. Scientific Reports, 11: 16721. doi:10.1038/s41598-021-95791-0.

    Abstract

    In everyday conversation, we are often challenged with communicating in non-ideal settings, such as in noise. Increased speech intensity and larger mouth movements are used to overcome noise in constrained settings (the Lombard effect). How we adapt to noise in face-to-face interaction, the natural environment of human language use, where manual gestures are ubiquitous, is currently unknown. We asked Dutch adults to wear headphones with varying levels of multi-talker babble while attempting to communicate action verbs to one another. Using quantitative motion capture and acoustic analyses, we found that (1) noise is associated with increased speech intensity and enhanced gesture kinematics and mouth movements, and (2) acoustic modulation only occurs when gestures are not present, while kinematic modulation occurs regardless of co-occurring speech. Thus, in face-to-face encounters the Lombard effect is not constrained to speech but is a multimodal phenomenon where the visual channel carries most of the communicative burden.

    Additional information

    supplementary material
  • Trujillo, J. P., Ozyurek, A., Kan, C. C., Sheftel-Simanova, I., & Bekkering, H. (2021). Differences in the production and perception of communicative kinematics in autism. Autism Research, 14(12), 2640-2653. doi:10.1002/aur.2611.

    Abstract

    In human communication, social intentions and meaning are often revealed in the way we move. In this study, we investigate the flexibility of human communication in terms of kinematic modulation in a clinical population, namely, autistic individuals. The aim of this study was twofold: to assess (a) whether communicatively relevant kinematic features of gestures differ between autistic and neurotypical individuals, and (b) if autistic individuals use communicative kinematic modulation to support gesture recognition. We tested autistic and neurotypical individuals on a silent gesture production task and a gesture comprehension task. We measured movement during the gesture production task using a Kinect motion tracking device in order to determine if autistic individuals differed from neurotypical individuals in their gesture kinematics. For the gesture comprehension task, we assessed whether autistic individuals used communicatively relevant kinematic cues to support recognition. This was done by using stick-light figures as stimuli and testing for a correlation between the kinematics of these videos and recognition performance. We found that (a) silent gestures produced by autistic and neurotypical individuals differ in communicatively relevant kinematic features, such as the number of meaningful holds between movements, and (b) while autistic individuals are overall unimpaired at recognizing gestures, they processed repetition and complexity, measured as the amount of submovements perceived, differently than neurotypicals do. These findings highlight how subtle aspects of neurotypical behavior can be experienced differently by autistic individuals. They further demonstrate the relationship between movement kinematics and social interaction in high-functioning autistic individuals.

    Additional information

    supporting information
  • Trujillo, J. P., & Holler, J. (2024). Information distribution patterns in naturalistic dialogue differ across languages. Psychonomic Bulletin & Review. Advance online publication. doi:10.3758/s13423-024-02452-0.

    Abstract

    The natural ecology of language is conversation, with individuals taking turns speaking to communicate in a back-and-forth fashion. Language in this context involves strings of words that a listener must process while simultaneously planning their own next utterance. It would thus be highly advantageous if language users distributed information within an utterance in a way that may facilitate this processing–planning dynamic. While some studies have investigated how information is distributed at the level of single words or clauses, or in written language, little is known about how information is distributed within spoken utterances produced during naturalistic conversation. It also is not known how information distribution patterns of spoken utterances may differ across languages. We used a set of matched corpora (CallHome) containing 898 telephone conversations conducted in six different languages (Arabic, English, German, Japanese, Mandarin, and Spanish), analyzing more than 58,000 utterances, to assess whether there is evidence of distinct patterns of information distributions at the utterance level, and whether these patterns are similar or differed across the languages. We found that English, Spanish, and Mandarin typically showed a back-loaded distribution, with higher information (i.e., surprisal) in the last half of utterances compared with the first half, while Arabic, German, and Japanese showed front-loaded distributions, with higher information in the first half compared with the last half. Additional analyses suggest that these patterns may be related to word order and rate of noun and verb usage. We additionally found that back-loaded languages have longer turn transition times (i.e., time between speaker turns).

    Additional information

    Data availability
  • Trujillo, J. P., & Holler, J. (2024). Conversational facial signals combine into compositional meanings that change the interpretation of speaker intentions. Scientific Reports, 14: 2286. doi:10.1038/s41598-024-52589-0.

    Abstract

    Human language is extremely versatile, combining a limited set of signals in an unlimited number of ways. However, it is unknown whether conversational visual signals feed into the composite utterances with which speakers communicate their intentions. We assessed whether different combinations of visual signals lead to different intent interpretations of the same spoken utterance. Participants viewed a virtual avatar uttering spoken questions while producing single visual signals (i.e., head turn, head tilt, eyebrow raise) or combinations of these signals. After each video, participants classified the communicative intention behind the question. We found that composite utterances combining several visual signals conveyed different meaning compared to utterances accompanied by the single visual signals. However, responses to combinations of signals were more similar to the responses to related, rather than unrelated, individual signals, indicating a consistent influence of the individual visual signals on the whole. This study therefore provides first evidence for compositional, non-additive (i.e., Gestalt-like) perception of multimodal language.

    Additional information

    supplementary material
  • Tsoukala, C., Frank, S. L., Van Den Bosch, A., Valdés Kroff, J., & Broersma, M. (2021). Modeling the auxiliary phrase asymmetry in code-switched Spanish–English. Bilingualism: Language and Cognition, 24(2), 271-280. doi:10.1017/S1366728920000449.

    Abstract

    Spanish–English bilinguals rarely code-switch in the perfect structure between the Spanish auxiliary haber (“to have”) and the participle (e.g., “Ella ha voted”; “She has voted”). However, they are somewhat likely to switch in the progressive structure between the Spanish auxiliary estar (“to be”) and the participle (“Ella está voting”; “She is voting”). This phenomenon is known as the “auxiliary phrase asymmetry”. One hypothesis as to why this occurs is that estar has more semantic weight as it also functions as an independent verb, whereas haber is almost exclusively used as an auxiliary verb. To test this hypothesis, we employed a connectionist model that produces spontaneous code-switches. Through simulation experiments, we showed that i) the asymmetry emerges in the model and that ii) the asymmetry disappears when using haber also as a main verb, which adds semantic weight. Therefore, the lack of semantic weight of haber may indeed cause the asymmetry.
  • Tsoukala, C., Broersma, M., Van den Bosch, A., & Frank, S. L. (2021). Simulating code-switching using a neural network model of bilingual sentence production. Computational Brain & Behavior, 4, 87-100. doi:10.1007/s42113-020-00088-6.

    Abstract

    Code-switching is the alternation from one language to the other during bilingual speech. We present a novel method of researching this phenomenon using computational cognitive modeling. We trained a neural network of bilingual sentence production to simulate early balanced Spanish–English bilinguals, late speakers of English who have Spanish as a dominant native language, and late speakers of Spanish who have English as a dominant native language. The model produced code-switches even though it was not exposed to code-switched input. The simulations predicted how code-switching patterns differ between early balanced and late non-balanced bilinguals; the balanced bilingual simulation code-switches considerably more frequently, which is in line with what has been observed in human speech production. Additionally, we compared the patterns produced by the simulations with two corpora of spontaneous bilingual speech and identified noticeable commonalities and differences. To our knowledge, this is the first computational cognitive model simulating the code-switched production of non-balanced bilinguals and comparing the simulated production of balanced and non-balanced bilinguals with that of human bilinguals.

    Additional information

    dual-path model
  • Vágvölgyi, R., Bergström, K., Bulajić, A., Klatte, M., Fernandes, T., Grosche, M., Huettig, F., Rüsseler, J., & Lachmann, T. (2021). Functional illiteracy and developmental dyslexia: Looking for common roots. A systematic review. Journal of Cultural Cognitive Science, 5, 159-179. doi:10.1007/s41809-021-00074-9.

    Abstract

    A considerable proportion of the population in more economically developed countries is functionally illiterate (i.e., low literate). Despite some years of schooling and basic reading skills, these individuals cannot properly read and write and, as a consequence, have problems understanding even short texts. An often-discussed approach (Greenberg et al., 1997) assumes weak phonological processing skills coupled with untreated developmental dyslexia as possible causes of functional illiteracy. Although there is some data suggesting commonalities between low literacy and developmental dyslexia, it is still not clear whether these reflect shared consequences (i.e., cognitive and behavioral profile) or shared causes. The present systematic review aims at exploring the similarities and differences identified in empirical studies investigating both functional illiterate and developmental dyslexic samples. Nine electronic databases were searched in order to identify all quantitative studies published in English or German. Although a broad search strategy and few limitations were applied, only 5 studies were identified as adequate from the resulting 9269 references. The results point to the lack of studies directly comparing functional illiterate with developmental dyslexic samples. Moreover, a huge variance was identified between the studies in how they approached the concept of functional illiteracy, particularly when it came to critical categories such as the applied definition, terminology, criteria for inclusion in the sample, research focus, and outcome measures. The available data highlight the need for more direct comparisons in order to understand to what extent functional illiteracy and dyslexia share common characteristics.

    Additional information

    supplementary materials
  • Van Bergen, G., & Hogeweg, L. (2021). Managing interpersonal discourse expectations: a comparative analysis of contrastive discourse particles in Dutch. Linguistics, 59(2), 333-360. doi:10.1515/ling-2021-0020.

    Abstract

    In this article we investigate how speakers manage discourse expectations in dialogue by comparing the meaning and use of three Dutch discourse particles, i.e. wel, toch and eigenlijk, which all express a contrast between their host utterance and a discourse-based expectation. The core meanings of toch, wel and eigenlijk are formally distinguished on the basis of two intersubjective parameters: (i) whether the particle marks alignment or misalignment between speaker and addressee discourse beliefs, and (ii) whether the particle requires an assessment of the addressee’s representation of mutual discourse beliefs. By means of a quantitative corpus study, we investigate to what extent the intersubjective meaning distinctions between wel, toch and eigenlijk are reflected in statistical usage patterns across different social situations. Results suggest that wel, toch and eigenlijk are lexicalizations of distinct generalized politeness strategies when expressing contrast in social interaction. Our findings call for an interdisciplinary approach to discourse particles in order to enhance our understanding of their functions in language.
  • Van Heukelum, S., Tulva, K., Geers, F. E., van Dulm, S., Ruisch, I. H., Mill, J., Viana, J. F., Beckmann, C. F., Buitelaar, J. K., Poelmans, G., Glennon, J. C., Vogt, B. A., Havenith, M. N., & França, A. S. (2021). A central role for anterior cingulate cortex in the control of pathological aggression. Current Biology, 31, 2321-2333.e5. doi:10.1016/j.cub.2021.03.062.

    Abstract

    Controlling aggression is a crucial skill in social species like rodents and humans and has been associated with anterior cingulate cortex (ACC). Here, we directly link the failed regulation of aggression in BALB/cJ mice to ACC hypofunction. We first show that ACC in BALB/cJ mice is structurally degraded: neuron density is decreased, with pervasive neuron death and reactive astroglia. Gene-set enrichment analysis suggested that this process is driven by neuronal degeneration, which then triggers toxic astrogliosis. cFos expression across ACC indicated functional consequences: during aggressive encounters, ACC was engaged in control mice, but not BALB/cJ mice. Chemogenetically activating ACC during aggressive encounters drastically suppressed pathological aggression but left species-typical aggression intact. The network effects of our chemogenetic perturbation suggest that this behavioral rescue is mediated by suppression of amygdala and hypothalamus and activation of mediodorsal thalamus. Together, these findings highlight the central role of ACC in curbing pathological aggression.
  • Ip, H. F., Van der Laan, C. M., Krapohl, E. M. L., Brikell, I., Sánchez-Mora, C., Nolte, I. M., St Pourcain, B., Bolhuis, K., Palviainen, T., Zafarmand, H., Colodro-Conde, L., Gordon, S., Zayats, T., Aliev, F., Jiang, C., Wang, C. A., Saunders, G., Karhunen, V., Hammerschlag, A. R., Adkins, D. E., Border, R., Peterson, R. E., Prinz, J. A., Thiering, E., Seppälä, I., Vilor-Tejedor, N., Ahluwalia, T. S., Day, F. R., Hottenga, J.-J., Allegrini, A. G., Rimfeld, K., Chen, Q., Lu, Y., Martin, J., Soler Artigas, M., Rovira, P., Bosch, R., Español, G., Ramos Quiroga, J. A., Neumann, A., Ensink, J., Grasby, K., Morosoli, J. J., Tong, X., Marrington, S., Middeldorp, C., Scott, J. G., Vinkhuyzen, A., Shabalin, A. A., Corley, R., Evans, L. M., Sugden, K., Alemany, S., Sass, L., Vinding, R., Ruth, K., Tyrrell, J., Davies, G. E., Ehli, E. A., Hagenbeek, F. A., De Zeeuw, E., Van Beijsterveldt, T. C., Larsson, H., Snieder, H., Verhulst, F. C., Amin, N., Whipp, A. M., Korhonen, T., Vuoksimaa, E., Rose, R. J., Uitterlinden, A. G., Heath, A. C., Madden, P., Haavik, J., Harris, J. R., Helgeland, Ø., Johansson, S., Knudsen, G. P. S., Njolstad, P. R., Lu, Q., Rodriguez, A., Henders, A. K., Mamun, A., Najman, J. M., Brown, S., Hopfer, C., Krauter, K., Reynolds, C., Smolen, A., Stallings, M., Wadsworth, S., Wall, T. L., Silberg, J. L., Miller, A., Keltikangas-Järvinen, L., Hakulinen, C., Pulkki-Råback, L., Havdahl, A., Magnus, P., Raitakari, O. T., Perry, J. R. B., Llop, S., Lopez-Espinosa, M.-J., Bønnelykke, K., Bisgaard, H., Sunyer, J., Lehtimäki, T., Arseneault, L., Standl, M., Heinrich, J., Boden, J., Pearson, J., Horwood, L. J., Kennedy, M., Poulton, R., Eaves, L. J., Maes, H. H., Hewitt, J., Copeland, W. E., Costello, E. J., Williams, G. M., Wray, N., Järvelin, M.-R., McGue, M., Iacono, W., Caspi, A., Moffitt, T. E., Whitehouse, A., Pennell, C. E., Klump, K. L., Burt, S. A., Dick, D. M., Reichborn-Kjennerud, T., Martin, N. G., Medland, S. E., Vrijkotte, T., Kaprio, J., Tiemeier, H., Davey Smith, G., Hartman, C. A., Oldehinkel, A. J., Casas, M., Ribasés, M., Lichtenstein, P., Lundström, S., Plomin, R., Bartels, M., Nivard, M. G., & Boomsma, D. I. (2021). Genetic association study of childhood aggression across raters, instruments, and age. Translational Psychiatry, 11: 413. doi:10.1038/s41398-021-01480-x.
  • van der Burght, C. L., Friederici, A. D., Goucha, T., & Hartwigsen, G. (2021). Pitch accents create dissociable syntactic and semantic expectations during sentence processing. Cognition, 212: 104702. doi:10.1016/j.cognition.2021.104702.

    Abstract

    The language system uses syntactic, semantic, as well as prosodic cues to efficiently guide auditory sentence comprehension. Prosodic cues, such as pitch accents, can build expectations about upcoming sentence elements. This study investigates to what extent syntactic and semantic expectations generated by pitch accents can be dissociated and if so, which cues take precedence when contradictory information is present. We used sentences in which one out of two nominal constituents was placed in contrastive focus with a third one. All noun phrases carried overt syntactic information (case-marking of the determiner) and semantic information (typicality of the thematic role of the noun). Two experiments (a sentence comprehension and a sentence completion task) show that focus, marked by pitch accents, established expectations in both syntactic and semantic domains. However, only the syntactic expectations, when violated, were strong enough to interfere with sentence comprehension. Furthermore, when contradictory cues occurred in the same sentence, the local syntactic cue (case-marking) took precedence over the semantic cue (thematic role), and overwrote previous information cued by prosody. The findings indicate that during auditory sentence comprehension the processing system integrates different sources of information for argument role assignment, yet primarily relies on syntactic information.
  • Van Paridon, J., Ostarek, M., Arunkumar, M., & Huettig, F. (2021). Does neuronal recycling result in destructive competition? The influence of learning to read on the recognition of faces. Psychological Science, 32, 459-465. doi:10.1177/0956797620971652.

    Abstract

    Written language, a human cultural invention, is far too recent for dedicated neural infrastructure to have evolved in its service. Culturally newly acquired skills (e.g., reading) thus ‘recycle’ evolutionarily older circuits that originally evolved for different, but similar, functions (e.g., visual object recognition). The destructive competition hypothesis predicts that this neuronal recycling has detrimental behavioral effects on the cognitive functions a cortical network originally evolved for. In a study with 97 literate, low-literate, and illiterate participants from the same socioeconomic background, we find that even after adjusting for cognitive ability and test-taking familiarity, learning to read is associated with an increase, rather than a decrease, in object recognition abilities. These results are incompatible with the claim that neuronal recycling results in destructive competition and are consistent with the possibility that learning to read instead fine-tunes general object recognition mechanisms, a hypothesis that needs further neuroscientific investigation.

    Additional information

    supplemental material
  • Van Leeuwen, T. M., Wilsson, L., Norrman, H. N., Dingemanse, M., Bölte, S., & Neufeld, J. (2021). Perceptual processing links autism and synesthesia: A co-twin control study. Cortex, 145, 236-249. doi:10.1016/j.cortex.2021.09.016.
  • Van Turennout, M., Hagoort, P., & Brown, C. M. (1998). Brain activity during speaking: From syntax to phonology in 40 milliseconds. Science, 280(5363), 572-574. doi:10.1126/science.280.5363.572.

    Abstract

    In normal conversation, speakers translate thoughts into words at high speed. To enable this speed, the retrieval of distinct types of linguistic knowledge has to be orchestrated with millisecond precision. The nature of this orchestration is still largely unknown. This report presents dynamic measures of the real-time activation of two basic types of linguistic knowledge, syntax and phonology. Electrophysiological data demonstrate that during noun-phrase production speakers retrieve the syntactic gender of a noun before its abstract phonological properties. This two-step process operates at high speed: the data show that phonological information is already available 40 milliseconds after syntactic properties have been retrieved.
  • Van Turennout, M., Hagoort, P., & Brown, C. M. (1997). Electrophysiological evidence on the time course of semantic and phonological processes in speech production. Journal of Experimental Psychology: Learning, Memory, and Cognition, 23(4), 787-806.

    Abstract

    The temporal properties of semantic and phonological processes in speech production were investigated in a new experimental paradigm using movement-related brain potentials. The main experimental task was picture naming. In addition, a 2-choice reaction go/no-go procedure was included, involving a semantic and a phonological categorization of the picture name. Lateralized readiness potentials (LRPs) were derived to test whether semantic and phonological information activated motor processes at separate moments in time. An LRP was only observed on no-go trials when the semantic (not the phonological) decision determined the response hand. Varying the position of the critical phoneme in the picture name did not affect the onset of the LRP but rather influenced when the LRP began to differ on go and no-go trials and allowed the duration of phonological encoding of a word to be estimated. These results provide electrophysiological evidence for early semantic activation and later phonological encoding.
  • Van de Geer, J. P., & Levelt, W. J. M. (1963). Detection of visual patterns disturbed by noise: An exploratory study. Quarterly Journal of Experimental Psychology, 15, 192-204. doi:10.1080/17470216308416324.

    Abstract

    An introductory study of the perception of stochastically specified events is reported. The initial problem was to determine whether the perceiver can split visual input data of this kind into random and determined components. The inability of subjects to do so with the stimulus material used (a filmlike sequence of dot patterns), led to the more general question of how subjects code this kind of visual material. To meet the difficulty of defining the subjects' responses, two experiments were designed. In both, patterns were presented as a rapid sequence of dots on a screen. The patterns were more or less disturbed by “noise,” i.e. the dots did not appear exactly at their proper places. In the first experiment the response was a rating on a semantic scale, in the second an identification from among a set of alternative patterns. The results of these experiments give some insight into the coding systems adopted by the subjects. First, noise appears to be detrimental to pattern recognition, especially to patterns with little spread. Second, this shows connections with the factors obtained from analysis of the semantic ratings, e.g. easily disturbed patterns show a large drop in the semantic regularity factor, when only a little noise is added.
  • Van Ooijen, B., Cutler, A., & Norris, D. (1991). Detection times for vowels versus consonants. In Eurospeech 91: Vol. 3 (pp. 1451-1454). Genova: Istituto Internazionale delle Comunicazioni.

    Abstract

    This paper reports two experiments with vowels and consonants as phoneme detection targets in real words. In the first experiment, two relatively distinct vowels were compared with two confusable stop consonants. Response times to the vowels were longer than to the consonants. Response times correlated negatively with target phoneme length. In the second, two relatively distinct vowels were compared with their corresponding semivowels. This time, the vowels were detected faster than the semivowels. We conclude that response time differences between vowels and stop consonants in this task may reflect differences between phoneme categories in the variability of tokens, both in the acoustic realisation of targets and in the representation of targets by subjects.
  • Van Berkum, J. J. A., Hijne, H., De Jong, T., Van Joolingen, W. R., & Njoo, M. (1991). Aspects of computer simulations in education. Education & Computing, 6(3/4), 231-239.

    Abstract

    Computer simulations in an instructional context can be characterized according to four aspects (themes): simulation models, learning goals, learning processes and learner activity. The present paper provides an outline of these four themes. The main classification criterion for simulation models is quantitative vs. qualitative models. For quantitative models a further subdivision can be made by classifying the independent and dependent variables as continuous or discrete. A second criterion is whether one of the independent variables is time, thus distinguishing dynamic and static models. Qualitative models on the other hand use propositions about non-quantitative properties of a system or they describe quantitative aspects in a qualitative way. Related to the underlying model is the interaction with it. When this interaction has a normative counterpart in the real world we call it a procedure. The second theme of learning with computer simulation concerns learning goals. A learning goal is principally classified along three dimensions, which specify different aspects of the knowledge involved. The first dimension, knowledge category, indicates that a learning goal can address principles, concepts and/or facts (conceptual knowledge) or procedures (performance sequences). The second dimension, knowledge representation, captures the fact that knowledge can be represented in a more declarative (articulate, explicit), or in a more compiled (implicit) format, each one having its own advantages and drawbacks. The third dimension, knowledge scope, involves the learning goal's relation with the simulation domain; knowledge can be specific to a particular domain, or generalizable over classes of domains (generic). A more or less separate type of learning goal refers to knowledge acquisition skills that are pertinent to learning in an exploratory environment. Learning processes constitute the third theme. 
Learning processes are defined as cognitive actions of the learner. Learning processes can be classified using a multilevel scheme. The first (highest) of these levels gives four main categories: orientation, hypothesis generation, testing and evaluation. Examples of more specific processes are model exploration and output interpretation. The fourth theme of learning with computer simulations is learner activity. Learner activity is defined as the ‘physical’ interaction of the learner with the simulations (as opposed to the mental interaction that was described in the learning processes). Five main categories of learner activity are distinguished: defining experimental settings (variables, parameters etc.), interaction process choices (deciding a next step), collecting data, choice of data presentation and metacontrol over the simulation.
  • Van Valin Jr., R. D. (2000). Focus structure or abstract syntax? A role and reference grammar account of some ‘abstract’ syntactic phenomena. In Z. Estrada Fernández, & I. Barreras Aguilar (Eds.), Memorias del V Encuentro Internacional de Lingüística en el Noroeste: (2 v.) Estudios morfosintácticos (pp. 39-62). Hermosillo: Editorial Unison.
  • Van Berkum, J. J. A., & De Jong, T. (1991). Instructional environments for simulations. Education & Computing, 6(3/4), 305-358.

    Abstract

    The use of computer simulations in education and training can have substantial advantages over other approaches. In comparison with alternatives such as textbooks, lectures, and tutorial courseware, a simulation-based approach offers the opportunity to learn in a relatively realistic problem-solving context, to practise task performance without stress, to systematically explore both realistic and hypothetical situations, to change the time-scale of events, and to interact with simplified versions of the process or system being simulated. However, learners are often unable to cope with the freedom offered by, and the complexity of, a simulation. As a result many of them resort to an unsystematic, unproductive mode of exploration. There is evidence that simulation-based learning can be improved if the learner is supported while working with the simulation. Constructing such an instructional environment around simulations seems to run counter to the freedom the learner is allowed in ‘stand-alone’ simulations. The present article explores instructional measures that allow optimal freedom for the learner. An extensive discussion of learning goals brings two main types of learning goals to the fore: conceptual knowledge and operational knowledge. A third type of learning goal refers to the knowledge acquisition (exploratory learning) process. Cognitive theory has implications for the design of instructional environments around simulations. Most of these implications are quite general, but they can also be related to the three types of learning goals. For conceptual knowledge, the sequence and choice of models and problems is important, as is providing the learner with explanations and minimization of error. For operational knowledge, cognitive theory recommends learning to take place in a problem-solving context, the explicit tracing of the behaviour of the learner, providing immediate feedback, and minimization of working memory load. For knowledge acquisition goals, it is recommended that the tutor takes the role of a model and coach, and that learning takes place together with a companion. A second source of inspiration for designing instructional environments can be found in Instructional Design Theories. Reviewing these shows that interacting with a simulation can be part of a more comprehensive instructional strategy, in which, for example, prerequisite knowledge is also taught. Moreover, information present in a simulation can also be represented in a more structural or static way. Learners can be provoked to perform specific learning processes and learner activities by tutor-controlled variations in the simulation, and by tutor-initiated prodding techniques. Finally, instructional design theories show that complex models and procedures can be taught by starting with their central, simple elements and subsequently presenting more complex models and procedures. Most of the recent simulation-based intelligent tutoring systems involve troubleshooting of complex technical systems. Learners are supposed to acquire knowledge of particular system principles, of troubleshooting procedures, or of both. Commonly encountered instructional features include (a) the sequencing of increasingly complex problems to be solved, (b) the availability of a range of help information on request, (c) the presence of an expert troubleshooting module which can step in to provide criticism on learner performance, hints on the problem nature, or suggestions on how to proceed, (d) the option of having the expert module demonstrate optimal performance afterwards, and (e) the use of different ways of depicting the simulated system. A selection of findings is summarized by placing them under the four themes we think to be characteristic of learning with computer simulations (see de Jong, this volume).
  • Van de Weijer, J. (1997). Language input to a prelingual infant. In A. Sorace, C. Heycock, & R. Shillcock (Eds.), Proceedings of the GALA '97 conference on language acquisition (pp. 290-293). Edinburgh University Press.

    Abstract

    Pitch, intonation, and speech rate were analyzed in a collection of everyday speech heard by one Dutch infant between the ages of six and nine months. Components of each of these variables were measured in the speech of three adult speakers (mother, father, baby-sitter) when they addressed the infant, and when they addressed another adult. The results are in line with previously reported findings which are usually based on laboratory or prearranged settings: infant-directed speech in a natural setting exhibits more pitch variation, a larger number of simple intonation contours, and slower speech rate than does adult-directed speech.
  • Van Heuven, V. J., Haan, J., Janse, E., & Van der Torre, E. J. (1997). Perceptual identification of sentence type and the time-distribution of prosodic interrogativity markers in Dutch. In Proceedings of the ESCA Tutorial and Research Workshop on Intonation: Theory, Models and Applications, Athens, Greece, 1997 (pp. 317-320).

    Abstract

    Dutch distinguishes at least four sentence types: statements and questions, the latter type being subdivided into wh-questions (beginning with a question word), yes/no-questions (with inversion of subject and finite verb), and declarative questions (lexico-syntactically identical to statements). Acoustically, each of these (sub)types was found to have clearly distinct global F0-patterns, as well as a characteristic distribution of final rises [1,2]. The present paper explores the separate contribution of parameters of global downtrend and size of accent-lending pitch movements versus aspects of the terminal rise to the human identification of the four sentence (sub)types, at various positions in the time-course of the utterance. The results show that interrogativity in Dutch can be identified at an early point in the utterance. However, wh-questions are not distinct from statements.
  • Van de Geer, J. P., Levelt, W. J. M., & Plomp, R. (1962). The connotation of musical consonance. Acta Psychologica, 20, 308-319.

    Abstract

    As a preliminary to further research on musical consonance, an exploratory investigation was made into the different modes of judgment of musical intervals. This was done by way of a semantic differential. Subjects rated 23 intervals against 10 scales. In a factor analysis three factors appeared: pitch, evaluation and fusion. The relation between these factors and some physical characteristics has been investigated. The scale consonant-dissonant proved to be purely evaluative (in opposition to Stumpf's theory). This evaluative connotation is not in accordance with the musicological meaning of consonance. Suggestions to account for this difference are given.
  • Van Tiel, B., Deliens, G., Geelhand, P., Murillo Oosterwijk, A., & Kissine, M. (2021). Strategic deception in adults with autism spectrum disorder. Journal of Autism and Developmental Disorders, 51, 255-266. doi:10.1007/s10803-020-04525-0.

    Abstract

    Autism Spectrum Disorder (ASD) is often associated with impaired perspective-taking skills. Deception is an important indicator of perspective-taking, and therefore may be thought to pose difficulties to people with ASD (e.g., Baron-Cohen in J Child Psychol Psychiatry 3:1141–1155, 1992). To test this hypothesis, we asked participants with and without ASD to play a computerised deception game. We found that participants with ASD were equally likely—and in complex cases of deception even more likely—to deceive and detect deception, and learned deception at a faster rate. However, participants with ASD initially deceived less frequently, and were slower at detecting deception. These results suggest that people with ASD readily engage in deception but may do so through conscious and effortful reasoning about other people’s perspectives.
  • Van Paridon, J., & Thompson, B. (2021). subs2vec: Word embeddings from subtitles in 55 languages. Behavior Research Methods, 53(2), 629-655. doi:10.3758/s13428-020-01406-3.

    Abstract

    This paper introduces a novel collection of word embeddings, numerical representations of lexical semantics, in 55 languages, trained on a large corpus of pseudo-conversational speech transcriptions from television shows and movies. The embeddings were trained on the OpenSubtitles corpus using the fastText implementation of the skipgram algorithm. Performance comparable with (and in some cases exceeding) embeddings trained on non-conversational (Wikipedia) text is reported on standard benchmark evaluation datasets. A novel evaluation method of particular relevance to psycholinguists is also introduced: prediction of experimental lexical norms in multiple languages. The models, as well as code for reproducing the models and all analyses reported in this paper (implemented as a user-friendly Python package), are freely available at: https://github.com/jvparidon/subs2vec.

    Additional information

    https://github.com/jvparidon/subs2vec
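    The lexical-norm prediction described above ultimately rests on simple geometric operations over the embedding vectors; semantic similarity between two words, for instance, is typically computed as the cosine of the angle between their vectors. A minimal sketch of that computation in plain Python (the toy 3-dimensional vectors and the function name are illustrative only, not part of the subs2vec package, whose actual vectors are 300-dimensional):

    ```python
    import math

    def cosine_similarity(u, v):
        """Cosine of the angle between two equal-length vectors:
        dot(u, v) / (|u| * |v|). Ranges from -1 to 1; 1 means identical direction."""
        dot = sum(a * b for a, b in zip(u, v))
        norm_u = math.sqrt(sum(a * a for a in u))
        norm_v = math.sqrt(sum(b * b for b in v))
        return dot / (norm_u * norm_v)

    # Toy "embeddings" for two related words; similar vectors give a value near 1.
    vec_cat = [0.9, 0.1, 0.3]
    vec_dog = [0.8, 0.2, 0.4]
    print(cosine_similarity(vec_cat, vec_dog))
    ```

    With real subs2vec vectors the same arithmetic applies; only the dimensionality changes.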
  • Van Berkum, J. J. A. (1997). Syntactic processes in speech production: The retrieval of grammatical gender. Cognition, 64(2), 115-152. doi:10.1016/S0010-0277(97)00026-7.

    Abstract

    Jescheniak and Levelt (Jescheniak, J.-D., Levelt, W.J.M. 1994. Journal of Experimental Psychology: Learning, Memory and Cognition 20 (4), 824–843) have suggested that the speed with which native speakers of a gender-marking language retrieve the grammatical gender of a noun from their mental lexicon may depend on the recency of earlier access to that same noun's gender, as the result of a mechanism that is dedicated to facilitate gender-marked anaphoric reference to recently introduced discourse entities. This hypothesis was tested in two picture naming experiments. Recent gender access did not facilitate the production of gender-marked adjective noun phrases (Experiment 1), nor that of gender-marked definite article noun phrases (Experiment 2), even though naming times for the latter utterances were sensitive to the gender of a written distractor word superimposed on the picture to be named. This last result replicates and extends earlier gender-specific picture-word interference results (Schriefers, H. 1993. Journal of Experimental Psychology: Learning, Memory, and Cognition 19 (4), 841–850), showing that one can selectively tap into the production of grammatical gender agreement during speaking. The findings are relevant to theories of speech production and the representation of grammatical gender for that process.
  • Van der Veer, G. C., Bagnara, S., & Kempen, G. (1991). Preface. Acta Psychologica, 78, ix. doi:10.1016/0001-6918(91)90002-H.
  • Van Berkum, J. J. A., Hagoort, P., & Brown, C. M. (2000). The use of referential context and grammatical gender in parsing: A reply to Brysbaert and Mitchell. Journal of Psycholinguistic Research, 29(5), 467-481. doi:10.1023/A:1005168025226.

    Abstract

    Based on the results of an event-related brain potentials (ERP) experiment (van Berkum, Brown, & Hagoort, 1999a, b), we have recently argued that discourse-level referential context can be taken into account extremely rapidly by the parser. Moreover, our ERP results indicated that local grammatical gender information, although available within a few hundred milliseconds from word onset, is not always used quickly enough to prevent the parser from considering a discourse-supported, but agreement-violating, syntactic analysis. In a comment on our work, Brysbaert and Mitchell (2000) have raised concerns about the methodology of our ERP experiment and have challenged our interpretation of the results. In this reply, we argue that these concerns are unwarranted and that, in contrast to our own interpretation, the alternative explanations provided by Brysbaert and Mitchell do not account for the full pattern of ERP results.
  • Van Geert, E., Ding, R., & Wagemans, J. (2024). A cross-cultural comparison of aesthetic preferences for neatly organized compositions: Native Chinese- versus Native Dutch-speaking samples. Empirical Studies of the Arts. Advance online publication. doi:10.1177/02762374241245917.

    Abstract

    Do aesthetic preferences for images of neatly organized compositions (e.g., images collected on blogs like Things Organized Neatly©) generalize across cultures? In an earlier study, focusing on stimulus and personal properties related to order and complexity, Western participants indicated their preference for one of two simultaneously presented images (100 pairs). In the current study, we compared the data of the native Dutch-speaking participants from this earlier sample (N = 356) to newly collected data from a native Chinese-speaking sample (N = 220). Overall, aesthetic preferences were quite similar across cultures. When relating preferences for each sample to ratings of order, complexity, soothingness, and fascination collected from a Western, mainly Dutch-speaking sample, the results hint at a cross-culturally consistent preference for images that Western participants rate as more ordered, but a cross-culturally diverse relation between preferences and complexity.
  • Van der Werff, J., Ravignani, A., & Jadoul, Y. (2024). thebeat: A Python package for working with rhythms and other temporal sequences. Behavior Research Methods, 56, 3725-3736. doi:10.3758/s13428-023-02334-8.

    Abstract

    thebeat is a Python package for working with temporal sequences and rhythms in the behavioral and cognitive sciences, as well as in bioacoustics. It provides functionality for creating experimental stimuli, and for visualizing and analyzing temporal data. Sequences, sounds, and experimental trials can be generated using single lines of code. thebeat contains functions for calculating common rhythmic measures, such as interval ratios, and for producing plots, such as circular histograms. thebeat saves researchers time when creating experiments, and provides the first steps in collecting widely accepted methods for use in timing research. thebeat is an open-source, on-going, and collaborative project, and can be extended for use in specialized subfields. thebeat integrates easily with the existing Python ecosystem, allowing one to combine our tested code with custom-made scripts. The package was specifically designed to be useful for both skilled and novice programmers. thebeat provides a foundation for working with temporal sequences onto which additional functionality can be built. This combination of specificity and plasticity should facilitate research in multiple research contexts and fields of study.
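    One of the rhythmic measures mentioned above, the interval ratio, can be illustrated independently of the package. A common definition in timing research relates each inter-onset interval (IOI) to the sum of itself and the following one, r_k = I_k / (I_k + I_{k+1}); a sketch in plain Python (the function below is our own illustration, not thebeat's API):

    ```python
    def interval_ratios(iois):
        """Ratio of each inter-onset interval to the sum of itself and the
        next interval: r_k = I_k / (I_k + I_{k+1}).
        An isochronous sequence yields 0.5 for every ratio."""
        return [iois[k] / (iois[k] + iois[k + 1]) for k in range(len(iois) - 1)]

    # Isochronous sequence (all IOIs 500 ms): every ratio is 0.5.
    print(interval_ratios([500, 500, 500, 500]))

    # A long-short alternating pattern deviates from 0.5 in both directions.
    print(interval_ratios([600, 300, 600, 300]))
    ```

    In thebeat itself such measures are computed directly from its sequence objects; this stand-alone version only shows the arithmetic involved.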
  • Varola*, M., Verga*, L., Sroka, M., Villanueva, S., Charrier, I., & Ravignani, A. (2021). Can harbor seals (Phoca vitulina) discriminate familiar conspecific calls after long periods of separation? PeerJ, 9: e12431. doi:10.7717/peerj.12431.

    Abstract

    * indicates joint first authorship
    The ability to discriminate between familiar and unfamiliar calls may play a key role in pinnipeds’ communication and survival, as in the case of mother-pup interactions. Vocal discrimination abilities have been suggested to be more developed in pinniped species with the highest selective pressure such as the otariids; yet, in some group-living phocids, such as harbor seals (Phoca vitulina), mothers are also able to recognize their pup’s voice. Conspecifics’ vocal recognition in pups has never been investigated; however, the repeated interaction occurring between pups within the breeding season suggests that long-term vocal discrimination may occur. Here we explored this hypothesis by presenting three rehabilitated seal pups with playbacks of vocalizations from unfamiliar or familiar pups. It is uncommon for seals to come into rehabilitation for a second time in their lifespan, and this study took advantage of these rare cases. A simple visual inspection of the data plots seemed to show more reactions, and of longer duration, in response to familiar as compared to unfamiliar playbacks in two out of three pups. However, statistical analyses revealed no significant difference between the experimental conditions. We also found no significant asymmetry in orientation (left vs. right) towards familiar and unfamiliar sounds. While statistics do not support the hypothesis of an established ability to discriminate familiar vocalizations from unfamiliar ones in harbor seal pups, further investigations with a larger sample size are needed to confirm or refute this hypothesis.

    Additional information

    dataset
  • Vega-Mendoza, M., Pickering, M. J., & Nieuwland, M. S. (2021). Concurrent use of animacy and event-knowledge during comprehension: Evidence from event-related potentials. Neuropsychologia, 152: 107724. doi:10.1016/j.neuropsychologia.2020.107724.

    Abstract

    In two ERP experiments, we investigated whether readers prioritize animacy over real-world event-knowledge during sentence comprehension. We used the paradigm of Paczynski and Kuperberg (2012), who argued that animacy is prioritized based on the observations that the ‘related anomaly effect’ (reduced N400s for context-related anomalous words compared to unrelated words) does not occur for animacy violations, and that animacy violations but not relatedness violations elicit P600 effects. Participants read passive sentences with plausible agents (e.g., The prescription for the mental disorder was written by the psychiatrist) or implausible agents that varied in animacy and semantic relatedness (schizophrenic/guard/pill/fence). In Experiment 1 (with a plausibility judgment task), plausible sentences elicited smaller N400s relative to all types of implausible sentences. Crucially, animate words elicited smaller N400s than inanimate words, and related words elicited smaller N400s than unrelated words, but Bayesian analysis revealed substantial evidence against an interaction between animacy and relatedness. Moreover, at the P600 time-window, we observed more positive ERPs for animate than inanimate words and for related than unrelated words at anterior regions. In Experiment 2 (without judgment task), we observed an N400 effect with animacy violations, but no other effects. Taken together, the results of our experiments fail to support a prioritized role of animacy information over real-world event-knowledge, but they support an interactive, constraint-based view on incremental semantic processing.
  • Verdonschot, R. G., Han, J.-I., & Kinoshita, S. (2021). The proximate unit in Korean speech production: Phoneme or syllable? Quarterly Journal of Experimental Psychology, 74, 187-198. doi:10.1177/1747021820950239.

    Abstract

    We investigated the “proximate unit” in Korean, that is, the initial phonological unit selected in speech production by Korean speakers. Previous studies have shown mixed evidence indicating either a phoneme-sized or a syllable-sized unit. We conducted two experiments in which participants named pictures while ignoring superimposed non-words. In English, for this task, when the picture (e.g., dog) and distractor phonology (e.g., dark) initially overlap, typically the picture target is named faster. We used a range of conditions (in Korean) varying from onset overlap to syllabic overlap, and the results indicated an important role for the syllable, but not the phoneme. We suggest that the basic unit used in phonological encoding in Korean is different from Germanic languages such as English and Dutch and also from Japanese and possibly also Chinese. Models dealing with the architecture of language production can use these results when providing a framework suitable for all languages in the world, including Korean.
  • Verga, L., & Ravignani, A. (2021). Strange seal sounds: Claps, slaps, and multimodal pinniped rhythms. Frontiers in Ecology and Evolution, 9: 644497. doi:10.3389/fevo.2021.644497.
  • Verga, L., Schwartze, M., Stapert, S., Winkens, I., & Kotz, S. A. (2021). Dysfunctional timing in traumatic brain injury patients: Co-occurrence of cognitive, motor, and perceptual deficits. Frontiers in Psychology, 12: 731898. doi:10.3389/fpsyg.2021.731898.

    Abstract

    Timing is an essential part of human cognition and of everyday life activities, such as walking or holding a conversation. Previous studies showed that traumatic brain injury (TBI) often affects cognitive functions such as processing speed and time-sensitive abilities, causing long-term sequelae as well as daily impairments. However, the existing evidence on timing capacities in TBI is mostly limited to perception and the processing of isolated intervals. It is therefore open whether the observed deficits extend to motor timing and to continuous dynamic tasks that more closely match daily life activities. The current study set out to answer these questions by assessing audio motor timing abilities and their relationship with cognitive functioning in a group of TBI patients (n=15) and healthy matched controls. We employed a comprehensive set of tasks aiming at testing timing abilities across perception and production and from single intervals to continuous auditory sequences. In line with previous research, we report functional impairments in TBI patients concerning cognitive processing speed and perceptual timing. Critically, these deficits extended to motor timing: The ability to adjust to tempo changes in an auditory pacing sequence was impaired in TBI patients, and this motor timing deficit covaried with measures of processing speed. These findings confirm previous evidence on perceptual and cognitive timing deficits resulting from TBI and provide first evidence for comparable deficits in motor behavior. This suggests basic co-occurring perceptual and motor timing impairments that may factor into a wide range of daily activities. Our results thus place TBI into the wider range of pathologies with well-documented timing deficits (such as Parkinson’s disease) and encourage the search for novel timing-based therapeutic interventions (e.g., employing dynamic and/or musical stimuli) with high transfer potential to everyday life activities.

    Additional information

    supplementary material
  • Verhoef, T., & Ravignani, A. (2021). Melodic universals emerge or are sustained through cultural evolution. Frontiers in Psychology, 12: 668300. doi:10.3389/fpsyg.2021.668300.

    Abstract

    To understand why music is structured the way it is, we need an explanation that accounts for both the universality and variability found in musical traditions. Here we test whether statistical universals that have been identified for melodic structures in music can emerge as a result of cultural adaptation to human biases through iterated learning. We use data from an experiment in which artificial whistled systems, where sounds were produced with a slide whistle, were learned by human participants and transmitted multiple times from person to person. These sets of whistled signals needed to be memorized and recalled and the reproductions of one participant were used as the input set for the next. We tested for the emergence of seven different melodic features, such as discrete pitches, motivic patterns, or phrase repetition, and found some evidence for the presence of most of these statistical universals. We interpret this as promising evidence that, similarly to rhythmic universals, iterated learning experiments can also unearth melodic statistical universals. More, ideally cross-cultural, experiments are nonetheless needed. Simulating the cultural transmission of artificial proto-musical systems can help unravel the origins of universal tendencies in musical structures.
  • Verhoef, E., Grove, J., Shapland, C. Y., Demontis, D., Burgess, S., Rai, D., Børglum, A. D., & St Pourcain, B. (2021). Discordant associations of educational attainment with ASD and ADHD implicate a polygenic form of pleiotropy. Nature Communications, 12: 6534. doi:10.1038/s41467-021-26755-1.

    Abstract

    Autism Spectrum Disorder (ASD) and Attention-Deficit/Hyperactivity Disorder (ADHD) are complex co-occurring neurodevelopmental conditions. Their genetic architectures reveal striking similarities but also differences, including strong, discordant polygenic associations with educational attainment (EA). To study genetic mechanisms that present as ASD-related positive and ADHD-related negative genetic correlations with EA, we carry out multivariable regression analyses using genome-wide summary statistics (N = 10,610–766,345). Our results show that EA-related genetic variation is shared across ASD and ADHD architectures, involving identical marker alleles. However, the polygenic association profile with EA, across shared marker alleles, is discordant for ASD versus ADHD risk, indicating independent effects. At the single-variant level, our results suggest either biological pleiotropy or co-localisation of different risk variants, implicating MIR19A/19B microRNA mechanisms. At the polygenic level, they point to a polygenic form of pleiotropy that contributes to the detectable genome-wide correlation between ASD and ADHD and is consistent with effect cancellation across EA-related regions.

    Additional information

    supplementary information
  • Verhoef, E., Shapland, C. Y., Fisher, S. E., Dale, P. S., & St Pourcain, B. (2021). The developmental origins of genetic factors influencing language and literacy: Associations with early-childhood vocabulary. Journal of Child Psychology and Psychiatry, 62(6), 728-738. doi:10.1111/jcpp.13327.

    Abstract

    Background

    The heritability of language and literacy skills increases from early‐childhood to adolescence. The underlying mechanisms are little understood and may involve (a) the amplification of genetic influences contributing to early language abilities, and/or (b) the emergence of novel genetic factors (innovation). Here, we investigate the developmental origins of genetic factors influencing mid‐childhood/early‐adolescent language and literacy. We evaluate evidence for the amplification of early‐childhood genetic factors for vocabulary, in addition to genetic innovation processes.
    Methods

    Expressive and receptive vocabulary scores at 38 months, thirteen language‐ and literacy‐related abilities and nonverbal cognition (7–13 years) were assessed in unrelated children from the Avon Longitudinal Study of Parents and Children (ALSPAC, N ≤ 6,092). We investigated the multivariate genetic architecture underlying early‐childhood expressive and receptive vocabulary, and each of 14 mid‐childhood/early‐adolescent language, literacy or cognitive skills with trivariate structural equation (Cholesky) models as captured by genome‐wide genetic relationship matrices. The individual path coefficients of the resulting structural models were finally meta‐analysed to evaluate evidence for overarching patterns.
    Results

    We observed little support for the emergence of novel genetic sources for language, literacy or cognitive abilities during mid‐childhood or early adolescence. Instead, genetic factors of early‐childhood vocabulary, especially those unique to receptive skills, were amplified and represented the majority of genetic variance underlying many of these later complex skills (≤99%). The most predictive early genetic factor accounted for 29.4%(SE = 12.9%) to 45.1%(SE = 7.6%) of the phenotypic variation in verbal intelligence and literacy skills, but also for 25.7%(SE = 6.4%) in performance intelligence, while explaining only a fraction of the phenotypic variation in receptive vocabulary (3.9%(SE = 1.8%)).
    Conclusions

    Genetic factors contributing to many complex skills during mid‐childhood and early adolescence, including literacy, verbal cognition and nonverbal cognition, originate developmentally in early‐childhood and are captured by receptive vocabulary. This suggests developmental genetic stability and overarching aetiological mechanisms.

    Additional information

    supporting information
  • Verhoef, E., Shapland, C. Y., Fisher, S. E., Dale, P. S., & St Pourcain, B. (2021). The developmental genetic architecture of vocabulary skills during the first three years of life: Capturing emerging associations with later-life reading and cognition. PLoS Genetics, 17(2): e1009144. doi:10.1371/journal.pgen.1009144.

    Abstract

    Individual differences in early-life vocabulary measures are heritable and associated with subsequent reading and cognitive abilities, although the underlying mechanisms are little understood. Here, we (i) investigate the developmental genetic architecture of expressive and receptive vocabulary in early-life and (ii) assess timing of emerging genetic associations with mid-childhood verbal and non-verbal skills. We studied longitudinally assessed early-life vocabulary measures (15–38 months) and later-life verbal and non-verbal skills (7–8 years) in up to 6,524 unrelated children from the population-based Avon Longitudinal Study of Parents and Children (ALSPAC) cohort. We dissected the phenotypic variance of rank-transformed scores into genetic and residual components by fitting multivariate structural equation models to genome-wide genetic-relationship matrices. Our findings show that the genetic architecture of early-life vocabulary involves multiple distinct genetic factors. Two of these genetic factors are developmentally stable and also contribute to genetic variation in mid-childhood skills: One genetic factor emerging with expressive vocabulary at 24 months (path coefficient: 0.32(SE = 0.06)) was also related to later-life reading (path coefficient: 0.25(SE = 0.12)) and verbal intelligence (path coefficient: 0.42(SE = 0.13)), explaining up to 17.9% of the phenotypic variation. A second, independent genetic factor emerging with receptive vocabulary at 38 months (path coefficient: 0.15(SE = 0.07)), was more generally linked to verbal and non-verbal cognitive abilities in mid-childhood (reading path coefficient: 0.57(SE = 0.07); verbal intelligence path coefficient: 0.60(0.10); performance intelligence path coefficient: 0.50(SE = 0.08)), accounting for up to 36.1% of the phenotypic variation and the majority of genetic variance in these later-life traits (≥66.4%). Thus, the genetic foundations of mid-childhood reading and cognitive abilities are diverse. 
They involve at least two independent genetic factors that emerge at different developmental stages during early language development and may implicate differences in cognitive processes that are already detectable during toddlerhood.

    Additional information

    supporting information
  • Verhoef, E., Allegrini, A. G., Jansen, P. R., Lange, K., Wang, C. A., Morgan, A. T., Ahluwalia, T. S., Symeonides, C., EAGLE-Working Group, Eising, E., Franken, M.-C., Hypponen, E., Mansell, T., Olislagers, M., Omerovic, E., Rimfeld, K., Schlag, F., Selzam, S., Shapland, C. Y., Tiemeier, H., Whitehouse, A. J. O., Saffery, R., Bønnelykke, K., Reilly, S., Pennell, C. E., Wake, M., Cecil, C. A., Plomin, R., Fisher, S. E., & St Pourcain, B. (2024). Genome-wide analyses of vocabulary size in infancy and toddlerhood: Associations with Attention-Deficit/Hyperactivity Disorder and cognition-related traits. Biological Psychiatry, 95(1), 859-869. doi:10.1016/j.biopsych.2023.11.025.

    Abstract

    Background

    The number of words children produce (expressive vocabulary) and understand (receptive vocabulary) changes rapidly during early development, partially due to genetic factors. Here, we performed a meta–genome-wide association study of vocabulary acquisition and investigated polygenic overlap with literacy, cognition, developmental phenotypes, and neurodevelopmental conditions, including attention-deficit/hyperactivity disorder (ADHD).

    Methods

    We studied 37,913 parent-reported vocabulary size measures (English, Dutch, Danish) for 17,298 children of European descent. Meta-analyses were performed for early-phase expressive (infancy, 15–18 months), late-phase expressive (toddlerhood, 24–38 months), and late-phase receptive (toddlerhood, 24–38 months) vocabulary. Subsequently, we estimated single nucleotide polymorphism–based heritability (SNP-h2) and genetic correlations (rg) and modeled underlying factor structures with multivariate models.

    Results

    Early-life vocabulary size was modestly heritable (SNP-h2 = 0.08–0.24). Genetic overlap between infant expressive and toddler receptive vocabulary was negligible (rg = 0.07), although each measure was moderately related to toddler expressive vocabulary (rg = 0.69 and rg = 0.67, respectively), suggesting a multifactorial genetic architecture. Both infant and toddler expressive vocabulary were genetically linked to literacy (e.g., spelling: rg = 0.58 and rg = 0.79, respectively), underlining genetic similarity. However, a genetic association of early-life vocabulary with educational attainment and intelligence emerged only during toddlerhood (e.g., receptive vocabulary and intelligence: rg = 0.36). Increased ADHD risk was genetically associated with larger infant expressive vocabulary (rg = 0.23). Multivariate genetic models in the ALSPAC (Avon Longitudinal Study of Parents and Children) cohort confirmed this finding for ADHD symptoms (e.g., at age 13; rg = 0.54) but showed that the association effect reversed for toddler receptive vocabulary (rg = −0.74), highlighting developmental heterogeneity.

    Conclusions

    The genetic architecture of early-life vocabulary changes during development, shaping polygenic association patterns with later-life ADHD, literacy, and cognition-related traits.
  • Vernes, S. C., Kriengwatana, B. P., Beeck, V. C., Fischer, J., Tyack, P. L., Ten Cate, C., & Janik, V. M. (2021). The multi-dimensional nature of vocal learning. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 376: 20200236. doi:10.1098/rstb.2020.0236.

    Abstract

    How learning affects vocalizations is a key question in the study of animal communication and human language. Parallel efforts in birds and humans have taught us much about how vocal learning works on a behavioural and neurobiological level. Subsequent efforts have revealed a variety of cases among mammals in which experience also has a major influence on vocal repertoires. Janik and Slater (Anim. Behav. 60, 1–11. (doi:10.1006/anbe.2000.1410)) introduced the distinction between vocal usage and production learning, providing a general framework to categorize how different types of learning influence vocalizations. This idea was built on by Petkov and Jarvis (Front. Evol. Neurosci. 4, 12. (doi:10.3389/fnevo.2012.00012)) to emphasize a more continuous distribution between limited and more complex vocal production learners. Yet, with more studies providing empirical data, the limits of the initial frameworks become apparent. We build on these frameworks to refine the categorization of vocal learning in light of advances made since their publication and widespread agreement that vocal learning is not a binary trait. We propose a novel classification system, based on the definitions by Janik and Slater, that deconstructs vocal learning into key dimensions to aid in understanding the mechanisms involved in this complex behaviour. We consider how vocalizations can change without learning, and a usage learning framework that considers context specificity and timing. We identify dimensions of vocal production learning, including the copying of auditory models (convergence/divergence on model sounds, accuracy of copying), the degree of change (type and breadth of learning) and timing (when learning takes place, the length of time it takes and how long it is retained). We consider grey areas of classification and current mechanistic understanding of these behaviours. Our framework identifies research needs and will help to inform neurobiological and evolutionary studies endeavouring to uncover the multi-dimensional nature of vocal learning.
    This article is part of the theme issue ‘Vocal learning in animals and humans’.
  • Vernes, S. C., Janik, V. M., Fitch, W. T., & Slater, P. J. B. (Eds.). (2021). Vocal learning in animals and humans [Special Issue]. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 376.
  • Vernes, S. C., Janik, V. M., Fitch, W. T., & Slater, P. J. B. (2021). Vocal learning in animals and humans. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 376: 20200234. doi:10.1098/rstb.2020.0234.
  • Von Holzen, K., & Bergmann, C. (2021). The development of infants’ responses to mispronunciations: A meta-analysis. Developmental Psychology, 57(1), 1-18. doi:10.1037/dev0001141.

    Abstract

    As they develop into mature speakers of their native language, infants must not only learn words but also the sounds that make up those words. To do so, they must strike a balance between accepting speaker dependent variation (e.g. mood, voice, accent), but appropriately rejecting variation when it (potentially) changes a word's meaning (e.g. cat vs. hat). This meta-analysis focuses on studies investigating infants' ability to detect mispronunciations in familiar words, or mispronunciation sensitivity. Our goal was to evaluate the development of infants' phonological representations for familiar words as well as explore the role of experimental manipulations related to theoretical questions and analysis choices. The results show that although infants are sensitive to mispronunciations, they still accept these altered forms as labels for target objects. Interestingly, this ability is not modulated by age or vocabulary size, suggesting that a mature understanding of native language phonology may be present in infants from an early age, possibly before the vocabulary explosion. These results also support several theoretical assumptions made in the literature, such as sensitivity to mispronunciation size and position of the mispronunciation. We also shed light on the impact of data analysis choices that may lead to different conclusions regarding the development of infants' mispronunciation sensitivity. Our paper concludes with recommendations for improved practice in testing infants' word and sentence processing on-line.
  • Vosse, T., & Kempen, G. (1991). A hybrid model of human sentence processing: Parsing right-branching, center-embedded and cross-serial dependencies. In M. Tomita (Ed.), Proceedings of the Second International Workshop on Parsing Technologies.
  • Vosse, T., & Kempen, G. (2000). Syntactic structure assembly in human parsing: A computational model based on competitive inhibition and a lexicalist grammar. Cognition, 75, 105-143.

    Abstract

    We present the design, implementation and simulation results of a psycholinguistic model of human syntactic processing that meets major empirical criteria. The parser operates in conjunction with a lexicalist grammar and is driven by syntactic information associated with heads of phrases. The dynamics of the model are based on competition by lateral inhibition ('competitive inhibition'). Input words activate lexical frames (i.e. elementary trees anchored to input words) in the mental lexicon, and a network of candidate 'unification links' is set up between frame nodes. These links represent tentative attachments that are graded rather than all-or-none. Candidate links that, due to grammatical or 'treehood' constraints, are incompatible, compete for inclusion in the final syntactic tree by sending each other inhibitory signals that reduce the competitor's attachment strength. The outcome of these local and simultaneous competitions is controlled by dynamic parameters, in particular by the Entry Activation and the Activation Decay rate of syntactic nodes, and by the Strength and Strength Build-up rate of Unification links. In case of a successful parse, a single syntactic tree is returned that covers the whole input string and consists of lexical frames connected by winning Unification links. 
Simulations are reported of a significant range of psycholinguistic parsing phenomena in both normal and aphasic speakers of English: (i) various effects of linguistic complexity (single versus double, center versus right-hand self-embeddings of relative clauses; the difference between relative clauses with subject and object extraction; the contrast between a complement clause embedded within a relative clause versus a relative clause embedded within a complement clause); (ii) effects of local and global ambiguity, and of word-class and syntactic ambiguity (including recency and length effects); (iii) certain difficulty-of-reanalysis effects (contrasts between local ambiguities that are easy to resolve versus ones that lead to serious garden-path effects); (iv) effects of agrammatism on parsing performance, in particular the performance of various groups of aphasic patients on several sentence types.
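The competitive-inhibition dynamic at the heart of this model can be illustrated with a toy simulation: candidate unification links receive input proportional to their lexical support, lose activation through decay, and inhibit one another until one attachment wins. This is only a sketch under assumed parameter values, not the actual equations or parameter settings of Vosse and Kempen (2000):

```python
def compete(inputs, decay=0.2, inhibition=0.5, steps=100):
    """Toy lateral-inhibition race between candidate attachment links.

    Each link's strength grows with its lexical input, decays over time,
    and is reduced in proportion to the summed strength of its competitors;
    strengths are clamped at zero. Parameter values are illustrative only.
    """
    s = [0.0] * len(inputs)
    for _ in range(steps):
        total = sum(s)
        s = [max(0.0, v + inp - decay * v - inhibition * (total - v))
             for v, inp in zip(s, inputs)]
    return s

# The link with slightly stronger support suppresses its competitor entirely,
# yielding a single winning attachment.
strengths = compete([0.6, 0.5])
```

With these values the stronger candidate settles at its input/decay equilibrium while the weaker one is driven to zero, mirroring how graded, tentative attachments resolve into a single syntactic tree.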
  • Wagner, M. A., Broersma, M., McQueen, J. M., Dhaene, S., & Lemhöfer, K. (2021). Phonetic convergence to non-native speech: Acoustic and perceptual evidence. Journal of Phonetics, 88: 101076. doi:10.1016/j.wocn.2021.101076.

    Abstract

    While the tendency of speakers to align their speech acoustic-phonetically to that of others has been widely studied among native speakers, very few studies have examined whether natives phonetically converge to non-native speakers. Here we measured native Dutch speakers’ convergence to a non-native speaker with an unfamiliar accent in a novel non-interactive task. Furthermore, we assessed the role of participants’ perceptions of the non-native accent in their tendency to converge. In addition to a perceptual measure (AXB ratings), we examined convergence on different acoustic dimensions (e.g., vowel spectra, fricative CoG, speech rate, overall f0) to determine what dimensions, if any, speakers converge to. We further combined these two types of measures to discover what dimensions weighed in raters’ judgments of convergence. The results reveal overall convergence to our non-native speaker, as indexed by both perceptual and acoustic measures. However, the ratings suggest that the stronger participants rated the non-native accent to be, the less likely they were to converge. Our findings add to the growing body of evidence that natives can phonetically converge to non-native speech, even without any apparent socio-communicative motivation to do so. We argue that our results are hard to integrate with a purely social view of convergence.
  • Wang, M.-Y., Korbmacher, M., Eikeland, R., Craven, A. R., & Specht, K. (2024). The intra‐individual reliability of 1H‐MRS measurement in the anterior cingulate cortex across 1 year. Human Brain Mapping, 45(1): e26531. doi:10.1002/hbm.26531.

    Abstract

    Magnetic resonance spectroscopy (MRS) is the primary method that can measure the levels of metabolites in the brain in vivo. To achieve its potential in clinical usage, the reliability of the measurement requires further articulation. Although there are many studies that investigate the reliability of gamma-aminobutyric acid (GABA), comparatively few studies have investigated the reliability of other brain metabolites, such as glutamate (Glu), N-acetyl-aspartate (NAA), creatine (Cr), phosphocreatine (PCr), or myo-inositol (mI), which all play a significant role in brain development and functions. In addition, previous studies which predominately used only two measurements (two data points) failed to provide the details of the time effect (e.g., time-of-day) on MRS measurement within subjects. Therefore, in this study, MRS data located in the anterior cingulate cortex (ACC) were repeatedly recorded across 1 year leading to at least 25 sessions for each subject with the aim of exploring the variability of other metabolites by using the index coefficient of variability (CV); the smaller the CV, the more reliable the measurements. We found that the metabolites of NAA, tNAA, and tCr showed the smallest CVs (between 1.43% and 4.90%), and the metabolites of Glu, Glx, mI, and tCho showed modest CVs (between 4.26% and 7.89%). Furthermore, we found that the concentration reference of the ratio to water results in smaller CVs compared to the ratio to tCr. In addition, we did not find any time-of-day effect on the MRS measurements. Collectively, the results of this study indicate that the MRS measurement is reasonably reliable in quantifying the levels of metabolites.
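The reliability index used here, the coefficient of variability (CV), is commonly computed as the standard deviation divided by the mean, expressed as a percentage; a minimal sketch of that computation (the function name is illustrative):

```python
from statistics import mean, stdev

def cv_percent(measurements):
    """Coefficient of variability: sample SD as a percentage of the mean.

    Smaller values indicate more reliable (less variable) repeated
    measurements, e.g., a metabolite's level across many MRS sessions.
    """
    return 100.0 * stdev(measurements) / mean(measurements)
```

Under this definition, the NAA-like CVs reported above (1.43%–4.90%) correspond to session-to-session fluctuations of only a few percent of the mean concentration.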

    Additional information

    tables and figures data
  • Wang, X., Jahagirdar, S., Bakker, W., Lute, C., Kemp, B., Knegsel, A. v., & Saccenti, E. (2024). Discrimination of Lipogenic or Glucogenic Diet Effects in Early-Lactation Dairy Cows Using Plasma Metabolite Abundances and Ratios in Combination with Machine Learning. Metabolites, 14(4): 230. doi:10.3390/metabo14040230.

    Abstract

    During early lactation, dairy cows have a negative energy balance since their energy demands exceed their energy intake: in this study, we aimed to investigate the association between diet and plasma metabolomic profiles and how these relate to the energy unbalance of cows in the early-lactation stage. Holstein-Friesian cows were randomly assigned to a glucogenic (n = 15) or lipogenic (n = 15) diet in early lactation. Blood was collected in week 2 and week 4 after calving. Plasma metabolite profiles were detected using liquid chromatography–mass spectrometry (LC-MS), and a total of 39 metabolites were identified. Two plasma metabolomic profiles were available for each cow (week 2 and week 4). Metabolite abundances and metabolite ratios were used for the analysis with the XGBoost algorithm to discriminate between diet treatments and lactation weeks. Using metabolite ratios resulted in better discrimination performance compared with the metabolite abundances in assigning cows to a lipogenic diet or a glucogenic diet. The quality of the discrimination between lipogenic and glucogenic diet effects improved from 0.606 to 0.753 in week 2 and from 0.696 to 0.842 in week 4 (as measured by the area under the curve, AUC) when metabolite abundance ratios were used instead of abundances. The top discriminating ratios for diet were the ratio of arginine to tyrosine and the ratio of aspartic acid to valine in week 2 and week 4, respectively. For cows fed the lipogenic diet, choline and the ratio of creatinine to tryptophan were top features to discriminate cows in week 2 vs. week 4. For cows fed the glucogenic diet, methionine and the ratio of 4-hydroxyproline to choline were top features to discriminate dietary effects in week 2 or week 4. This study shows the added value of using metabolite abundance ratios to discriminate between lipogenic and glucogenic diets and between lactation weeks in early-lactation cows when using metabolomics data.
The application of this research will help to accurately regulate the nutrition of lactating dairy cows and promote sustainable agricultural development.
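    The ratio-based approach described in this abstract can be illustrated with a minimal sketch: expand each animal's metabolite abundances into all pairwise ratios, then score a candidate feature with a Mann-Whitney estimate of the AUC. This is not the study's pipeline (which used XGBoost on 39 measured metabolites); the data below are invented for illustration.

    ```python
    from itertools import combinations

    def ratio_features(abundances):
        """Expand a dict of metabolite abundances into all pairwise ratios."""
        return {f"{a}/{b}": abundances[a] / abundances[b]
                for a, b in combinations(sorted(abundances), 2)}

    def auc(pos, neg):
        """Mann-Whitney estimate of the area under the ROC curve."""
        wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
        return wins / (len(pos) * len(neg))

    # Hypothetical plasma profiles for two diet groups (arbitrary units).
    lipogenic = [{"arginine": 2.1, "tyrosine": 0.9}, {"arginine": 2.4, "tyrosine": 1.0}]
    glucogenic = [{"arginine": 1.1, "tyrosine": 1.2}, {"arginine": 1.3, "tyrosine": 1.1}]

    feat = "arginine/tyrosine"
    lip_scores = [ratio_features(c)[feat] for c in lipogenic]
    glu_scores = [ratio_features(c)[feat] for c in glucogenic]
    print(auc(lip_scores, glu_scores))  # 1.0: this ratio separates the two groups perfectly
    ```

    A ratio can discriminate even when neither abundance does on its own, which is the intuition behind the AUC gains reported above.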
  • Wassenaar, M., Hagoort, P., & Brown, C. M. (1997). Syntactic ERP effects in Broca's aphasics with agrammatic comprehension. Brain and Language, 60, 61-64. doi:10.1006/brln.1997.1911.
  • Weber, A. (1998). Listening to nonnative language which violates native assimilation rules. In D. Duez (Ed.), Proceedings of the European Scientific Communication Association workshop: Sound patterns of Spontaneous Speech (pp. 101-104).

    Abstract

    Recent studies using phoneme detection tasks have shown that spoken-language processing is neither facilitated nor interfered with by optional assimilation, but is inhibited by violation of obligatory assimilation. Interpretation of these results depends on an assessment of their generality, specifically, whether they also obtain when listeners are processing nonnative language. Two separate experiments are presented in which native listeners of German and native listeners of Dutch had to detect a target fricative in legal monosyllabic Dutch nonwords. All of the nonwords were correct realisations in standard Dutch. For German listeners, however, half of the nonwords contained phoneme strings which violate the German fricative assimilation rule. Whereas the Dutch listeners showed no significant effects, German listeners detected the target fricative faster when the German fricative assimilation was violated than when no violation occurred. The results might suggest that violation of assimilation rules does not have to make processing more difficult per se.
  • Weber, A. (2000). Phonotactic and acoustic cues for word segmentation in English. In Proceedings of the 6th International Conference on Spoken Language Processing (ICSLP 2000) (pp. 782-785).

    Abstract

    This study investigates the influence of both phonotactic and acoustic cues on the segmentation of spoken English. Listeners detected embedded English words in nonsense sequences (word spotting). Words aligned with phonotactic boundaries were easier to detect than words without such alignment. Acoustic cues to boundaries could also have signaled word boundaries, especially when word onsets lacked phonotactic alignment. However, only one of several durational boundary cues showed a marginally significant correlation with response times (RTs). The results suggest that word segmentation in English is influenced primarily by phonotactic constraints and only secondarily by acoustic aspects of the speech signal.
  • Weber, A. (2000). The role of phonotactics in the segmentation of native and non-native continuous speech. In A. Cutler, J. M. McQueen, & R. Zondervan (Eds.), Proceedings of SWAP, Workshop on Spoken Word Access Processes. Nijmegen: MPI for Psycholinguistics.

    Abstract

    Previous research has shown that listeners make use of their knowledge of phonotactic constraints to segment speech into individual words. The present study investigates the influence of phonotactics when segmenting a non-native language. German and English listeners detected embedded English words in nonsense sequences. German listeners also had knowledge of English, but English listeners had no knowledge of German. Word onsets were either aligned with a syllable boundary or not, according to the phonotactics of the two languages. Words aligned with either German or English phonotactic boundaries were easier for German listeners to detect than words without such alignment. Responses of English listeners were influenced primarily by English phonotactic alignment. The results suggest that both native and non-native phonotactic constraints influence lexical segmentation of a non-native, but familiar, language.
  • Wesseldijk, L. W., Henechowicz, T. L., Baker, D. J., Bignardi, G., Karlsson, R., Gordon, R. L., Mosing, M. A., Ullén, F., & Fisher, S. E. (2024). Notes from Beethoven’s genome. Current Biology, 34(6), R233-R234. doi:10.1016/j.cub.2024.01.025.

    Abstract

    Rapid advances over the last decade in DNA sequencing and statistical genetics enable us to investigate the genomic makeup of individuals throughout history. In a recent notable study, Begg et al. used Ludwig van Beethoven’s hair strands for genome sequencing and explored genetic predispositions for some of his documented medical issues. Given that it was arguably Beethoven’s skills as a musician and composer that made him an iconic figure in Western culture, we here extend the approach and apply it to musicality. We use this as an example to illustrate the broader challenges of individual-level genetic predictions.
  • Wilkinson, G. S., Adams, D. M., Haghani, A., Lu, A. T., Zoller, J., Breeze, C. E., Arnold, B. D., Ball, H. C., Carter, G. G., Cooper, L. N., Dechmann, D. K. N., Devanna, P., Fasel, N. J., Galazyuk, A. V., Günther, L., Hurme, E., Jones, G., Knörnschild, M., Lattenkamp, E. Z., Li, C. Z., Mayer, F., Reinhardt, J. A., Medellin, R. A., Nagy, M., Pope, B., Power, M. L., Ransome, R. D., Teeling, E. C., Vernes, S. C., Zamora-Mejías, D., Zhang, J., Faure, P. A., Greville, L. J., Herrera M., L. G., Flores-Martínez, J. J., & Horvath, S. (2021). DNA methylation predicts age and provides insight into exceptional longevity of bats. Nature Communications, 12: 1615. doi:10.1038/s41467-021-21900-2.

    Abstract

    Exceptionally long-lived species, including many bats, rarely show overt signs of aging, making it difficult to determine why species differ in lifespan. Here, we use DNA methylation (DNAm) profiles from 712 known-age bats, representing 26 species, to identify epigenetic changes associated with age and longevity. We demonstrate that DNAm accurately predicts chronological age. Across species, longevity is negatively associated with the rate of DNAm change at age-associated sites. Furthermore, analysis of several bat genomes reveals that hypermethylated age- and longevity-associated sites are disproportionately located in promoter regions of key transcription factors (TF) and enriched for histone and chromatin features associated with transcriptional regulation. Predicted TF binding site motifs and enrichment analyses indicate that age-related methylation change is influenced by developmental processes, while longevity-related DNAm change is associated with innate immunity or tumorigenesis genes, suggesting that bat longevity results from augmented immune response and cancer suppression.
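    The core idea of a DNAm clock, predicting chronological age from methylation levels at age-associated sites, can be illustrated with a minimal least-squares fit. The published clock is fitted with penalized regression over many CpG sites; the single-predictor model and all values below are invented for illustration.

    ```python
    def fit_linear(xs, ys):
        """Ordinary least-squares fit of y = a*x + b; returns (a, b)."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
        return a, my - a * mx

    # Hypothetical mean methylation (beta values) at age-associated CpG sites,
    # paired with the known chronological ages of the sampled bats.
    meth = [0.20, 0.30, 0.40, 0.50]
    age = [1.0, 3.0, 5.0, 7.0]

    slope, intercept = fit_linear(meth, age)
    predict_age = lambda m: slope * m + intercept
    print(round(predict_age(0.35), 6))  # 4.0: interpolated age for an unseen sample
    ```

    The "rate of DNAm change" compared across species in the abstract corresponds to slopes like this one, estimated per species from known-age individuals.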
  • Willems, R. M., & Peelen, M. V. (2021). How context changes the neural basis of perception and language. iScience, 24(5): 102392. doi:10.1016/j.isci.2021.102392.

    Abstract

    Cognitive processes—from basic sensory analysis to language understanding—are typically contextualized. While the importance of considering context for understanding cognition has long been recognized in psychology and philosophy, it has not yet had much impact on cognitive neuroscience research, where cognition is often studied in decontextualized paradigms. Here, we present examples of recent studies showing that context changes the neural basis of diverse cognitive processes, including perception, attention, memory, and language. Within the domains of perception and language, we review neuroimaging results showing that context interacts with stimulus processing, changes activity in classical perception and language regions, and recruits additional brain regions that contribute crucially to naturalistic perception and language. We discuss how contextualized cognitive neuroscience will allow for discovering new principles of the mind and brain.
  • Winter, B., Lupyan, G., Perry, L. K., Dingemanse, M., & Perlman, M. (2024). Iconicity ratings for 14,000+ English words. Behavior Research Methods, 56, 1640-1655. doi:10.3758/s13428-023-02112-6.

    Abstract

    Iconic words and signs are characterized by a perceived resemblance between aspects of their form and aspects of their meaning. For example, in English, iconic words include peep and crash, which mimic the sounds they denote, and wiggle and zigzag, which mimic motion. As a semiotic property of words and signs, iconicity has been demonstrated to play a role in word learning, language processing, and language evolution. This paper presents the results of a large-scale norming study for more than 14,000 English words conducted with over 1400 American English speakers. We demonstrate the utility of these ratings by replicating a number of existing findings showing that iconicity ratings are related to age of acquisition, sensory modality, semantic neighborhood density, structural markedness, and playfulness. We discuss possible use cases and limitations of the rating dataset, which is made publicly available.
  • Wittek, A. (1998). Learning verb meaning via adverbial modification: Change-of-state verbs in German and the adverb "wieder" again. In A. Greenhill, M. Hughes, H. Littlefield, & H. Walsh (Eds.), Proceedings of the 22nd Annual Boston University Conference on Language Development (pp. 779-790). Somerville, MA: Cascadilla Press.
  • Woensdregt, M., Cummins, C., & Smith, K. (2021). A computational model of the cultural co-evolution of language and mindreading. Synthese, 199, 1347-1385. doi:10.1007/s11229-020-02798-7.

    Abstract

    Several evolutionary accounts of human social cognition posit that language has co-evolved with the sophisticated mindreading abilities of modern humans. It has also been argued that these mindreading abilities are the product of cultural, rather than biological, evolution. Taken together, these claims suggest that the evolution of language has played an important role in the cultural evolution of human social cognition. Here we present a new computational model which formalises the assumptions that underlie this hypothesis, in order to explore how language and mindreading interact through cultural evolution. This model treats communicative behaviour as an interplay between the context in which communication occurs, an agent’s individual perspective on the world, and the agent’s lexicon. However, each agent’s perspective and lexicon are private mental representations, not directly observable to other agents. Learners are therefore confronted with the task of jointly inferring the lexicon and perspective of their cultural parent, based on their utterances in context. Simulation results show that given these assumptions, an informative lexicon evolves not just under a pressure to be successful at communicating, but also under a pressure for accurate perspective-inference. When such a lexicon evolves, agents become better at inferring others’ perspectives; not because their innate ability to learn about perspectives changes, but because sharing a language (of the right type) with others helps them to do so.
  • Wolf, M. C., Meyer, A. S., Rowland, C. F., & Hintz, F. (2021). The effects of input modality, word difficulty and reading experience on word recognition accuracy. Collabra: Psychology, 7(1): 24919. doi:10.1525/collabra.24919.

    Abstract

    Language users encounter words in at least two different modalities. Arguably, the most frequent encounters are in spoken or written form. Previous research has shown that – compared to the spoken modality – written language features more difficult words. Thus, frequent reading might have effects on word recognition. In the present study, we investigated 1) whether input modality (spoken, written, or bimodal) has an effect on word recognition accuracy, 2) whether this modality effect interacts with word difficulty, 3) whether the interaction of word difficulty and reading experience impacts word recognition accuracy, and 4) whether this interaction is influenced by input modality. To do so, we re-analysed a dataset that was collected in the context of a vocabulary test development to assess in which modality test words should be presented. Participants had carried out a word recognition task, where non-words and words of varying difficulty were presented in auditory, visual and audio-visual modalities. In addition to this main experiment, participants had completed a receptive vocabulary and an author recognition test to measure their reading experience. Our re-analyses did not reveal evidence for an effect of input modality on word recognition accuracy, nor for interactions with word difficulty or language experience. Word difficulty interacted with reading experience in that frequent readers were more accurate in recognizing difficult words than individuals who read less frequently. Practical implications are discussed.
  • Wolna, A., Szewczyk, J., Diaz, M., Domagalik, A., Szwed, M., & Wodniecka, Z. (2024). Domain-general and language-specific contributions to speech production in a second language: An fMRI study using functional localizers. Scientific Reports, 14: 57. doi:10.1038/s41598-023-49375-9.

    Abstract

    For bilinguals, speaking in a second language (L2) compared to the native language (L1) is usually more difficult. In this study we asked whether the difficulty in L2 production reflects increased demands imposed on domain-general or core language mechanisms. We compared the brain response to speech production in L1 and L2 within two functionally-defined networks in the brain: the Multiple Demand (MD) network and the language network. We found that speech production in L2 was linked to a widespread increase of brain activity in the domain-general MD network. The language network did not show similarly robust differences in processing speech in the two languages; however, we found an increased response to L2 production in the language-specific portion of the left inferior frontal gyrus (IFG). To further explore our results, we looked at domain-general and language-specific responses within the brain structures postulated to form a Bilingual Language Control (BLC) network. Within this network, we found a robust increase in response to L2 in domain-general voxels, but also in some language-specific voxels, including in the left IFG. Our findings show that L2 production strongly engages domain-general mechanisms, but only affects language-sensitive portions of the left IFG. These results put constraints on the current model of bilingual language control by precisely disentangling the domain-general and language-specific contributions to the difficulty in speech production in L2.
  • Wolna, A., Szewczyk, J., Diaz, M., Domagalik, A., Szwed, M., & Wodniecka, Z. (2024). Tracking components of bilingual language control in speech production: An fMRI study using functional localizers. Neurobiology of Language, 5(2), 315-340. doi:10.1162/nol_a_00128.

    Abstract

    When bilingual speakers switch back to speaking in their native language (L1) after having used their second language (L2), they often experience difficulty in retrieving words in their L1. This phenomenon is referred to as the L2 after-effect. We used the L2 after-effect as a lens to explore the neural bases of bilingual language control mechanisms. Our goal was twofold: first, to explore whether bilingual language control draws on domain-general or language-specific mechanisms; second, to investigate the precise mechanism(s) that drive the L2 after-effect. We used a precision fMRI approach based on functional localizers to measure the extent to which the brain activity that reflects the L2 after-effect overlaps with the language network (Fedorenko et al., 2010) and the domain-general multiple demand network (Duncan, 2010), as well as three task-specific networks that tap into interference resolution, lexical retrieval, and articulation. Forty-two Polish–English bilinguals participated in the study. Our results show that the L2 after-effect reflects increased engagement of domain-general but not language-specific resources. Furthermore, contrary to previously proposed interpretations, we did not find evidence that the effect reflects increased difficulty related to lexical access, articulation, and the resolution of lexical interference. We propose that difficulty of speech production in the picture naming paradigm—manifested as the L2 after-effect—reflects interference at a nonlinguistic level of task schemas or a general increase of cognitive control engagement during speech production in L1 after L2.
  • Wongratwanich, P., Shimabukuro, K., Konishi, M., Nagasaki, T., Ohtsuka, M., Suei, Y., Nakamoto, T., Verdonschot, R. G., Kanesaki, T., Sutthiprapaporn, P., & Kakimoto, N. (2021). Do various imaging modalities provide potential early detection and diagnosis of medication-related osteonecrosis of the jaw? A review. Dentomaxillofacial Radiology, 50: 20200417. doi:10.1259/dmfr.20200417.

    Abstract


    Objective: Patients with medication-related osteonecrosis of the jaw (MRONJ) often visit their dentists at advanced stages and subsequently require treatments that greatly affect quality of life. Currently, no clear diagnostic criteria exist to assess MRONJ, and the definitive diagnosis solely relies on clinical bone exposure. This ambiguity leads to a diagnostic delay, complications, and unnecessary burden. This article aims to identify imaging modalities' usage and findings of MRONJ to provide possible approaches for early detection.

    Methods: Literature searches were conducted using PubMed, Web of Science, Scopus, and Cochrane Library to review all diagnostic imaging modalities for MRONJ.

    Results: Panoramic radiography offers a fundamental understanding of the lesions. Imaging findings were comparable between non-exposed and exposed MRONJ, showing osteolysis, osteosclerosis, and thickened lamina dura. Mandibular cortex index Class II could be a potential early MRONJ indicator. Three-dimensional modalities, CT and CBCT, were able to show more features unique to MRONJ, such as a solid-type periosteal reaction, buccal predominance of cortical perforation, and a bone-within-bone appearance. MRI signal intensities of vital bone are hypointense on T1WI and hyperintense on T2WI and STIR, whereas necrotic bone shows hypointensity on all of T1WI, T2WI, and STIR. Functional imaging is the most sensitive method but is usually performed for metastasis detection rather than as a diagnostic tool for early MRONJ.

    Conclusion: Currently, MRONJ-specific imaging features cannot be firmly established. However, the current data are valuable as they may lead to a more efficient diagnostic procedure along with a more suitable selection of imaging modalities.
  • Yoshihara, M., Nakayama, M., Verdonschot, R. G., Hino, Y., & Lupker, S. J. (2021). Orthographic properties of distractors do influence phonological Stroop effects: Evidence from Japanese Romaji distractors. Memory & Cognition, 49(3), 600-612. doi:10.3758/s13421-020-01103-8.

    Abstract

    In attempting to understand mental processes, it is important to use a task that appropriately reflects the underlying processes being investigated. Recently, Verdonschot and Kinoshita (Memory & Cognition, 46, 410-425, 2018) proposed that a variant of the Stroop task, the "phonological Stroop task", might be a suitable tool for investigating speech production. The major advantage of this task is that it is apparently not affected by the orthographic properties of the stimuli, unlike other commonly used tasks (e.g., associative-cuing and word-reading tasks). The viability of this proposal was examined in the present experiments by manipulating the script types of Japanese distractors. For Romaji distractors (e.g., "kushi"), color-naming responses were faster when the initial phoneme was shared between the color name and the distractor than when the initial phonemes were different, thereby showing a phoneme-based phonological Stroop effect (Experiment 1). In contrast, no such effect was observed when the same distractors were presented in Katakana (e.g., "クシ"), replicating Verdonschot and Kinoshita's original results (Experiment 2). A phoneme-based effect was again found when the Katakana distractors used in Verdonschot and Kinoshita's original study were transcribed and presented in Romaji (Experiment 3). Because the observation of a phonemic effect directly depended on the orthographic properties of the distractor stimuli, we conclude that the phonological Stroop task is also susceptible to orthographic influences.
  • Zaadnoordijk, L., Buckler, H., Cusack, R., Tsuji, S., & Bergmann, C. (2021). A global perspective on testing infants online: Introducing ManyBabies-AtHome. Frontiers in Psychology, 12: 703234. doi:10.3389/fpsyg.2021.703234.

    Abstract

    Online testing holds great promise for infant scientists. It could increase participant diversity, improve reproducibility and collaborative possibilities, and reduce costs for researchers and participants. However, despite the rise of platforms and participant databases, little work has been done to overcome the challenges of making this approach available to researchers across the world. In this paper, we elaborate on the benefits of online infant testing from a global perspective and identify challenges for the international community that have been outside of the scope of previous literature. Furthermore, we introduce ManyBabies-AtHome, an international, multi-lab collaboration that is actively working to facilitate practical and technical aspects of online testing as well as address ethical concerns regarding data storage and protection, and cross-cultural variation. The ultimate goal of this collaboration is to improve the method of testing infants online and make it globally available.
  • Zavala, R. (1997). Functional analysis of Akatek voice constructions. International Journal of American Linguistics, 63(4), 439-474.

    Abstract

    The author examines the correlations between syntactic structure and pragmatic function in the voice alternations of Akatek, a Mayan language of the Q'anjob'alan subgroup. Pragmatic voice alternations are the mechanisms by which languages encode the differing degrees of topicality of the two principal participants in a semantically transitive event, the agent and the patient. Using a quantitative analysis, the author assesses the topicality of these participants and identifies the syntactic structures that express the four principal voice functions in Akatek: active-direct, inverse, passive, and antipassive.
  • Zettersten, M., Cox, C., Bergmann, C., Tsui, A. S. M., Soderstrom, M., Mayor, J., Lundwall, R. A., Lewis, M., Kosie, J. E., Kartushina, N., Fusaroli, R., Frank, M. C., Byers-Heinlein, K., Black, A. K., & Mathur, M. B. (2024). Evidence for infant-directed speech preference is consistent across large-scale, multi-site replication and meta-analysis. Open Mind, 8, 439-461. doi:10.1162/opmi_a_00134.

    Abstract

    There is substantial evidence that infants prefer infant-directed speech (IDS) to adult-directed speech (ADS). The strongest evidence for this claim has come from two large-scale investigations: i) a community-augmented meta-analysis of published behavioral studies and ii) a large-scale multi-lab replication study. In this paper, we aim to improve our understanding of the IDS preference and its boundary conditions by combining and comparing these two data sources across key population and design characteristics of the underlying studies. Our analyses reveal that both the meta-analysis and multi-lab replication show moderate effect sizes (d ≈ 0.35 for each estimate) and that both of these effects persist when relevant study-level moderators are added to the models (i.e., experimental methods, infant ages, and native languages). However, while the overall effect size estimates were similar, the two sources diverged in the effects of key moderators: both infant age and experimental method predicted IDS preference in the multi-lab replication study, but showed no effect in the meta-analysis. These results demonstrate that the IDS preference generalizes across a variety of experimental conditions and sampling characteristics, while simultaneously identifying key differences in the empirical picture offered by each source individually and pinpointing areas where substantial uncertainty remains about the influence of theoretically central moderators on IDS preference. Overall, our results show how meta-analyses and multi-lab replications can be used in tandem to understand the robustness and generalizability of developmental phenomena.
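    Pooled estimates like the d ≈ 0.35 reported above rest on inverse-variance weighting of per-study effect sizes. A minimal fixed-effect version of that computation is sketched below; the actual analyses use random-effects models with moderators, and the per-study inputs here are made up.

    ```python
    def pooled_effect(effects, variances):
        """Inverse-variance (fixed-effect) pooled estimate and its variance."""
        weights = [1.0 / v for v in variances]
        pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
        return pooled, 1.0 / sum(weights)

    # Hypothetical per-study IDS-preference effect sizes (Cohen's d)
    # and their sampling variances.
    d = [0.20, 0.40, 0.50]
    v = [0.01, 0.02, 0.04]

    est, est_var = pooled_effect(d, v)
    se = est_var ** 0.5
    print(round(est, 3), round(se, 3))  # pooled d and its standard error
    ```

    Precise studies (small variance) dominate the pooled estimate, which is why a few large multi-lab samples can shift a meta-analytic mean.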
  • Zhang, Y., Ding, R., Frassinelli, D., Tuomainen, J., Klavinskis-Whiting, S., & Vigliocco, G. (2021). Electrophysiological signatures of second language multimodal comprehension. In T. Fitch, C. Lamm, H. Leder, & K. Teßmar-Raible (Eds.), Proceedings of the 43rd Annual Conference of the Cognitive Science Society (CogSci 2021) (pp. 2971-2977). Vienna: Cognitive Science Society.

    Abstract

    Language is multimodal: non-linguistic cues, such as prosody, gestures and mouth movements, are always present in face-to-face communication and interact to support processing. In this paper, we ask whether and how multimodal cues affect L2 processing by recording EEG for highly proficient bilinguals when watching naturalistic materials. For each word, we quantified surprisal and the informativeness of prosody, gestures, and mouth movements. We found that each cue modulates the N400: prosodic accentuation, meaningful gestures, and informative mouth movements all reduce N400. Further, effects of meaningful gestures but not mouth informativeness are enhanced by prosodic accentuation, whereas effects of mouth are enhanced by meaningful gestures but reduced by beat gestures. Compared with L1, L2 participants benefit less from cues and their interactions, except for meaningful gestures and mouth movements. Thus, in real-world language comprehension, L2 comprehenders use multimodal cues just as L1 speakers do, albeit to a lesser extent.
  • Yu, C., Zhang, Y., Slone, L. K., & Smith, L. B. (2021). The infant’s view redefines the problem of referential uncertainty in early word learning. Proceedings of the National Academy of Sciences of the United States of America, 118(52): e2107019118. doi:10.1073/pnas.2107019118.

    Abstract

    The learning of first object names is deemed a hard problem due to the uncertainty inherent in mapping a heard name to the intended referent in a cluttered and variable world. However, human infants readily solve this problem. Despite considerable theoretical discussion, relatively little is known about the uncertainty infants face in the real world. We used head-mounted eye tracking during parent–infant toy play and quantified the uncertainty by measuring the distribution of infant attention to the potential referents when a parent named both familiar and unfamiliar toy objects. The results show that infant gaze upon hearing an object name is often directed to a single referent which is equally likely to be a wrong competitor or the intended target. This bimodal gaze distribution clarifies and redefines the uncertainty problem and constrains possible solutions.
  • Zhang, Y., Yurovsky, D., & Yu, C. (2021). Cross-situational learning from ambiguous egocentric input is a continuous process: Evidence using the human simulation paradigm. Cognitive Science, 45(7): e13010. doi:10.1111/cogs.13010.

    Abstract

    Recent laboratory experiments have shown that both infant and adult learners can acquire word-referent mappings using cross-situational statistics. The vast majority of the work on this topic has used unfamiliar objects presented on neutral backgrounds as the visual contexts for word learning. However, these laboratory contexts are much different than the real-world contexts in which learning occurs. Thus, the feasibility of generalizing cross-situational learning beyond the laboratory is in question. Adapting the Human Simulation Paradigm, we conducted a series of experiments examining cross-situational learning from children's egocentric videos captured during naturalistic play. Focusing on individually ambiguous naming moments that naturally occur during toy play, we asked how statistical learning unfolds in real time through accumulating cross-situational statistics in naturalistic contexts. We found that even when learning situations were individually ambiguous, learners' performance gradually improved over time. This improvement was driven in part by learners' use of partial knowledge acquired from previous learning situations, even when they had not yet discovered correct word-object mappings. These results suggest that word learning is a continuous process by means of real-time information integration.
  • Zhang, Y., Amatuni, A., Cain, E., Wang, X., Crandall, D., & Yu, C. (2021). Human learners integrate visual and linguistic information in cross-situational verb learning. In T. Fitch, C. Lamm, H. Leder, & K. Teßmar-Raible (Eds.), Proceedings of the 43rd Annual Conference of the Cognitive Science Society (CogSci 2021) (pp. 2267-2273). Vienna: Cognitive Science Society.

    Abstract

    Learning verbs is challenging because it is difficult to infer the precise meaning of a verb when there are a multitude of relations that one can derive from a single event. To study this verb learning challenge, we used children's egocentric view collected from naturalistic toy-play interaction as learning materials and investigated how visual and linguistic information provided in individual naming moments as well as cross-situational information provided from multiple learning moments can help learners resolve this mapping problem using the Human Simulation Paradigm. Our results show that learners benefit from seeing children's egocentric views compared to third-person observations. In addition, linguistic information can help learners identify the correct verb meaning by eliminating possible meanings that do not belong to the linguistic category. Learners are also able to integrate visual and linguistic information both within and across learning situations to reduce the ambiguity in the space of possible verb meanings.
  • Zhong, S., Wei, L., Zhao, C., Yang, L., Di, Z., Francks, C., & Gong, G. (2021). Interhemispheric relationship of genetic influence on human brain connectivity. Cerebral Cortex, 31(1), 77-88. doi:10.1093/cercor/bhaa207.

    Abstract

    To understand the origins of interhemispheric differences and commonalities/coupling in human brain wiring, it is crucial to determine how homologous interregional connectivities of the left and right hemispheres are genetically determined and related. To address this, in the present study, we analyzed human twin and pedigree samples with high-quality diffusion magnetic resonance imaging tractography and estimated the heritability and genetic correlation of homologous left and right white matter (WM) connections. The results showed that the heritability of WM connectivity was similar and coupled between the 2 hemispheres and that the degree of overlap in genetic factors underlying homologous WM connectivity (i.e., interhemispheric genetic correlation) varied substantially across the human brain: from complete overlap to complete nonoverlap. Particularly, the heritability was significantly stronger and the chance of interhemispheric complete overlap in genetic factors was higher in subcortical WM connections than in cortical WM connections. In addition, the heritability and interhemispheric genetic correlations were stronger for long-range connections than for short-range connections. These findings highlight the determinants of the genetics underlying WM connectivity and its interhemispheric relationships, and provide insight into genetic basis of WM connectivity asymmetries in both healthy and disease states.

  • Zhou, W., Broersma, M., & Cutler, A. (2021). Asymmetric memory for birth language perception versus production in young international adoptees. Cognition, 213: 104788. doi:10.1016/j.cognition.2021.104788.

    Abstract

    Adults who as children were adopted into a different linguistic community retain knowledge of their birth language. The possession (without awareness) of such knowledge is known to facilitate the (re)learning of birth-language speech patterns; this perceptual learning predicts such adults' production success as well, indicating that the retained linguistic knowledge is abstract in nature. Adoptees' acquisition of their adopted language is fast and complete; birth-language mastery disappears rapidly, although this latter process has been little studied. Here, 46 international adoptees from China aged four to 10 years, with Dutch as their new language, plus 47 matched non-adopted Dutch-native controls and 40 matched non-adopted Chinese controls, undertook across a two-week period 10 blocks of training in perceptually identifying Chinese speech contrasts (one segmental, one tonal) which were unlike any Dutch contrasts. Chinese controls easily accomplished all these tasks. The same participants also provided speech production data in an imitation task. In perception, adoptees and Dutch controls scored equivalently poorly at the outset of training; with training, the adoptees significantly improved while the Dutch controls did not. In production, adoptees' imitations both before and after training could be better identified, and received higher goodness ratings, than those of Dutch controls. The perception results confirm that birth-language knowledge is stored and can facilitate re-learning in post-adoption childhood; the production results suggest that although processing of phonological category detail appears to depend on access to the stored knowledge, general articulatory dimensions can at this age also still be remembered, and may facilitate spoken imitation.

  • Zhou, H., Van der Ham, S., De Boer, B., Bogaerts, L., & Raviv, L. (2024). Modality and stimulus effects on distributional statistical learning: Sound vs. sight, time vs. space. Journal of Memory and Language, 138: 104531. doi:10.1016/j.jml.2024.104531.

    Abstract

    Statistical learning (SL) is postulated to play an important role in the process of language acquisition as well as in other cognitive functions. It was found to enable learning of various types of statistical patterns across different sensory modalities. However, few studies have distinguished distributional SL (DSL) from sequential and spatial SL, or examined DSL across modalities using comparable tasks. Considering the relevance of such findings to the nature of SL, the current study investigated the modality- and stimulus-specificity of DSL. Using a within-subject design we compared DSL performance in auditory and visual modalities. For each sensory modality, two stimulus types were used: linguistic versus non-linguistic auditory stimuli and temporal versus spatial visual stimuli. In each condition, participants were exposed to stimuli that varied in their length as they were drawn from two categories (short versus long). DSL was assessed using a categorization task and a production task. Results showed that learners’ performance was only correlated for tasks in the same sensory modality. Moreover, participants were better at categorizing the temporal signals in the auditory conditions than in the visual condition, where in turn an advantage of the spatial condition was observed. In the production task participants exaggerated signal length more for linguistic signals than non-linguistic signals. Together, these findings suggest that DSL is modality- and stimulus-sensitive.

  • Zimianiti, E. (2021). Adjective-noun constructions in Griko: Focusing on measuring adjectives and their placement in the nominal domain. LingUU Journal, 5(2), 62-75.

    Abstract

    This paper examines adjectival placement in Griko, an Italian-Greek language variety. Guardiano and Stavrou (2014, 2019) have argued that there is a gap of evidence in the diachrony of adjectives in prenominal position, and in particular of measuring adjectives. Evidence is presented in this paper contradicting the aforementioned claims. After considering the placement of adjectives in Greek and Italian, and their similarities and differences, the adjectival pattern of Griko is analysed. The analysis is based mostly on written data from the early 20th century, proving the prenominal position of adjectives and adding to the diachronic schema of adjectival placement in Griko.
  • Zimianiti, E., Dimitrakopoulou, M., & Tsangalidis, A. (2021). Thematic roles in dementia: The case of psychological verbs. In A. Botinis (Ed.), ExLing 2021: Proceedings of the 12th International Conference of Experimental Linguistics (pp. 269-272). Athens, Greece: ExLing Society.

    Abstract

    This study investigates the difficulty of people with Mild Cognitive Impairment (MCI), mild and moderate Alzheimer’s disease (AD) in the production and comprehension of psychological verbs, as thematic realization may involve both the canonical and non-canonical realization of arguments. More specifically, we aim to examine whether there is a deficit in the mapping of syntactic and semantic representations in psych-predicates regarding Greek-speaking individuals with MCI and AD, and whether the linguistic abilities associated with θ-role assignment decrease as the disease progresses. Moreover, given the decline of cognitive abilities in people with MCI and AD, we explore the effects of components of memory (Semantic, Episodic, and Working Memory) on the assignment of thematic roles in constructions with psychological verbs.
  • Zinken, J., Kaiser, J., Weidner, M., Mondada, L., Rossi, G., & Sorjonen, M.-L. (2021). Rule talk: Instructing proper play with impersonal deontic statements. Frontiers in Communication, 6: 660394. doi:10.3389/fcomm.2021.660394.

    Abstract

    The present paper explores how rules are enforced and talked about in everyday life. Drawing on a corpus of board game recordings across European languages, we identify a sequential and praxeological context for rule talk. After a game rule is breached, a participant enforces proper play and then formulates a rule with an impersonal deontic statement (e.g. ‘It’s not allowed to do this’). Impersonal deontic statements express what may or may not be done without tying the obligation to a particular individual. Our analysis shows that such statements are used as part of multi-unit and multi-modal turns where rule talk is accomplished through both grammatical and embodied means. Impersonal deontic statements serve multiple interactional goals: they account for having changed another’s behavior in the moment and at the same time impart knowledge for the future. We refer to this complex action as an “instruction”. The results of this study advance our understanding of rules and rule-following in everyday life, and of how resources of language and the body are combined to enforce and formulate rules.
  • Zioga, I., Zhou, Y. J., Weissbart, H., Martin, A. E., & Haegens, S. (2024). Alpha and beta oscillations differentially support word production in a rule-switching task. eNeuro, 11(4): ENEURO.0312-23.2024. doi:10.1523/ENEURO.0312-23.2024.

    Abstract

    Research into the role of brain oscillations in basic perceptual and cognitive functions has suggested that the alpha rhythm reflects functional inhibition while the beta rhythm reflects neural ensemble (re)activation. However, little is known regarding the generalization of these proposed fundamental operations to linguistic processes, such as speech comprehension and production. Here, we recorded magnetoencephalography in participants performing a novel rule-switching paradigm. Specifically, Dutch native speakers had to produce an alternative exemplar from the same category or a feature of a given target word embedded in spoken sentences (e.g., for the word “tuna”, an exemplar from the same category—“seafood”—would be “shrimp”, and a feature would be “pink”). A cue indicated the task rule—exemplar or feature—either before (pre-cue) or after (retro-cue) listening to the sentence. Alpha power during the working memory delay was lower for retro-cue compared with that for pre-cue in the left hemispheric language-related regions. Critically, alpha power negatively correlated with reaction times, suggestive of alpha facilitating task performance by regulating inhibition in regions linked to lexical retrieval. Furthermore, we observed a different spatiotemporal pattern of beta activity for exemplars versus features in the right temporoparietal regions, in line with the proposed role of beta in recruiting neural networks for the encoding of distinct categories. Overall, our study provides evidence for the generalizability of the role of alpha and beta oscillations from perceptual to more complex, linguistic processes and offers a novel task to investigate links between rule-switching, working memory, and word production.
  • Zora, H., Riad, T., Ylinen, S., & Csépe, V. (2021). Phonological variations are compensated at the lexical level: Evidence from auditory neural activity. Frontiers in Human Neuroscience, 15: 622904. doi:10.3389/fnhum.2021.622904.

    Abstract

    Dealing with phonological variations is important for speech processing. This article addresses whether phonological variations introduced by assimilatory processes are compensated for at the pre-lexical or lexical level, and whether the nature of variation and the phonological context influence this process. To this end, Swedish nasal regressive place assimilation was investigated using the mismatch negativity (MMN) component. In nasal regressive assimilation, the coronal nasal assimilates to the place of articulation of a following segment, most clearly with a velar or labial place of articulation, as in utan mej “without me” > [ʉːtam mɛjː]. In a passive auditory oddball paradigm, 15 Swedish speakers were presented with Swedish phrases with attested and unattested phonological variations and contexts for nasal assimilation. Attested variations – a coronal-to-labial change as in utan “without” > [ʉːtam] – were contrasted with unattested variations – a labial-to-coronal change as in utom “except” > ∗[ʉːtɔn] – in appropriate and inappropriate contexts created by mej “me” [mɛjː] and dej “you” [dɛjː]. Given that the MMN amplitude depends on the degree of variation between two stimuli, the MMN responses were expected to indicate to what extent the distance between variants was tolerated by the perceptual system. Since the MMN response reflects not only low-level acoustic processing but also higher-level linguistic processes, the results were predicted to indicate whether listeners process assimilation at the pre-lexical and lexical levels. The results indicated no significant interactions across variations, suggesting that variations in phonological forms do not incur any cost in lexical retrieval; hence such variation is compensated for at the lexical level. However, since the MMN response reached significance only for a labial-to-coronal change in a labial context and for a coronal-to-labial change in a coronal context, the compensation might have been influenced by the nature of variation and the phonological context. It is therefore concluded that while assimilation is compensated for at the lexical level, there is also some influence from pre-lexical processing. The present results reveal not only signal-based perception of phonological units, but also higher-level lexical processing, and are thus able to reconcile the bottom-up and top-down models of speech processing.
  • Zora, H., & Csépe, V. (2021). Perception of prosodic modulations of linguistic and paralinguistic origin: Evidence from early auditory event-related potentials. Frontiers in Neuroscience, 15: 797487. doi:10.3389/fnins.2021.797487.

    Abstract

    How listeners handle prosodic cues of linguistic and paralinguistic origin is a central question for spoken communication. In the present EEG study, we addressed this question by examining neural responses to variations in pitch accent (linguistic) and affective (paralinguistic) prosody in Swedish words, using a passive auditory oddball paradigm. The results indicated that changes in pitch accent and affective prosody elicited mismatch negativity (MMN) responses at around 200 ms, confirming the brain’s pre-attentive response to any prosodic modulation. The MMN amplitude was, however, statistically larger to the deviation in affective prosody in comparison to the deviation in pitch accent and affective prosody combined, which is in line with previous research indicating not only a larger MMN response to affective prosody in comparison to neutral prosody but also a smaller MMN response to multidimensional deviants than unidimensional ones. The results, further, showed a significant P3a response to the affective prosody change in comparison to the pitch accent change at around 300 ms, in accordance with previous findings showing an enhanced positive response to emotional stimuli. The present findings provide evidence for distinct neural processing of different prosodic cues, and statistically confirm the intrinsic perceptual and motivational salience of paralinguistic information in spoken communication.