Publications

  • Mak, M., Faber, M., & Willems, R. M. (2023). Different kinds of simulation during literary reading: Insights from a combined fMRI and eye-tracking study. Cortex, 162, 115-135. doi:10.1016/j.cortex.2023.01.014.

    Abstract

    Mental simulation is an important aspect of narrative reading. In a previous study, we found that gaze durations are differentially impacted by different kinds of mental simulation. Motor simulation, perceptual simulation, and mentalizing as elicited by literary short stories influenced eye movements in distinguishable ways (Mak & Willems, 2019). In the current study, we investigated the existence of a common neural locus for these different kinds of simulation. We additionally investigated whether individual differences during reading, as indexed by the eye movements, are reflected in domain-specific activations in the brain. We found a variety of brain areas activated by simulation-eliciting content, both modality-specific brain areas and a general simulation area. Individual variation in percent signal change in activated areas was related to measures of story appreciation as well as personal characteristics (i.e., transportability, perspective taking). Taken together, these findings suggest that mental simulation is supported by both domain-specific processes grounded in previous experiences, and by the neural mechanisms that underlie higher-order language processing (e.g., situation model building, event indexing, integration).

    Additional information

    figures, localizer tasks, appendix C1
  • Mamus, E., Speed, L. J., Ozyurek, A., & Majid, A. (2021). Sensory modality of input influences encoding of motion events in speech but not co-speech gestures. In T. Fitch, C. Lamm, H. Leder, & K. Teßmar-Raible (Eds.), Proceedings of the 43rd Annual Conference of the Cognitive Science Society (CogSci 2021) (pp. 376-382). Vienna: Cognitive Science Society.

    Abstract

    Visual and auditory channels have different affordances and this is mirrored in what information is available for linguistic encoding. The visual channel has high spatial acuity, whereas the auditory channel has better temporal acuity. These differences may lead to different conceptualizations of events and affect multimodal language production. Previous studies of motion events typically present visual input to elicit speech and gesture. The present study compared events presented as audio-only, visual-only, or multimodal (visual+audio) input and assessed speech and co-speech gesture for path and manner of motion in Turkish. Speakers with audio-only input mentioned path more and manner less in verbal descriptions, compared to speakers who had visual input. There was no difference in the type or frequency of gestures across conditions, and gestures were dominated by path-only gestures. This suggests that input modality influences speakers’ encoding of path and manner of motion events in speech, but not in co-speech gestures.
  • Mamus, E., Speed, L. J., Rissman, L., Majid, A., & Özyürek, A. (2023). Lack of visual experience affects multimodal language production: Evidence from congenitally blind and sighted people. Cognitive Science, 47(1): e13228. doi:10.1111/cogs.13228.

    Abstract

    The human experience is shaped by information from different perceptual channels, but it is still debated whether and how differential experience influences language use. To address this, we compared congenitally blind, blindfolded, and sighted people's descriptions of the same motion events experienced auditorily by all participants (i.e., via sound alone) and conveyed in speech and gesture. Comparison of blind and sighted participants to blindfolded participants helped us disentangle the effects of a lifetime experience of being blind versus the task-specific effects of experiencing a motion event by sound alone. Compared to sighted people, blind people's speech focused more on path and less on manner of motion, and encoded paths in a more segmented fashion using more landmarks and path verbs. Gestures followed the speech, such that blind people pointed to landmarks more and depicted manner less than sighted people. This suggests that visual experience affects how people express spatial events in the multimodal language and that blindness may enhance sensitivity to paths of motion due to changes in event construal. These findings have implications for the claims that language processes are deeply rooted in our sensory experiences.
  • Mamus, E., Speed, L., Özyürek, A., & Majid, A. (2023). The effect of input sensory modality on the multimodal encoding of motion events. Language, Cognition and Neuroscience, 38(5), 711-723. doi:10.1080/23273798.2022.2141282.

    Abstract

    Each sensory modality has different affordances: vision has higher spatial acuity than audition, whereas audition has better temporal acuity. This may have consequences for the encoding of events and its subsequent multimodal language production—an issue that has received relatively little attention to date. In this study, we compared motion events presented as audio-only, visual-only, or multimodal (visual + audio) input and measured speech and co-speech gesture depicting path and manner of motion in Turkish. Input modality affected speech production. Speakers with audio-only input produced more path descriptions and fewer manner descriptions in speech compared to speakers who received visual input. In contrast, the type and frequency of gestures did not change across conditions. Path-only gestures dominated throughout. Our results suggest that while speech is more susceptible to auditory vs. visual input in encoding aspects of motion events, gesture is less sensitive to such differences.

    Additional information

    Supplemental material
  • Manhardt, F. (2021). A tale of two modalities. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Manhardt, F., Brouwer, S., & Ozyurek, A. (2021). A tale of two modalities: Sign and speech influence each other in bimodal bilinguals. Psychological Science, 32(3), 424-436. doi:10.1177/0956797620968789.

    Abstract

    Bimodal bilinguals are hearing individuals fluent in a sign and a spoken language. Can the two languages influence each other in such individuals despite differences in the visual (sign) and vocal (speech) modalities of expression? We investigated cross-linguistic influences on bimodal bilinguals’ expression of spatial relations. Unlike spoken languages, sign uses iconic linguistic forms that resemble physical features of objects in a spatial relation and thus expresses specific semantic information. Hearing bimodal bilinguals (n = 21) fluent in Dutch and Sign Language of the Netherlands and their hearing nonsigning and deaf signing peers (n = 20 each) described left/right relations between two objects. Bimodal bilinguals expressed more specific information about physical features of objects in speech than nonsigners, showing influence from sign language. They also used fewer iconic signs with specific semantic information than deaf signers, demonstrating influence from speech. Bimodal bilinguals’ speech and signs are shaped by two languages from different modalities.

    Additional information

    supplementary materials
  • Manhardt, F., Brouwer, S., Van Wijk, E., & Özyürek, A. (2023). Word order preference in sign influences speech in hearing bimodal bilinguals but not vice versa: Evidence from behavior and eye-gaze. Bilingualism: Language and Cognition, 26(1), 48-61. doi:10.1017/S1366728922000311.

    Abstract

    We investigated cross-modal influences between speech and sign in hearing bimodal bilinguals, proficient in a spoken and a sign language, and its consequences on visual attention during message preparation using eye-tracking. We focused on spatial expressions in which sign languages, unlike spoken languages, have a modality-driven preference to mention grounds (big objects) prior to figures (smaller objects). We compared hearing bimodal bilinguals’ spatial expressions and visual attention in Dutch and Dutch Sign Language (N = 18) to those of their hearing non-signing (N = 20) and deaf signing peers (N = 18). In speech, hearing bimodal bilinguals expressed more ground-first descriptions and fixated grounds more than hearing non-signers, showing influence from sign. In sign, they used as many ground-first descriptions as deaf signers and fixated grounds equally often, demonstrating no influence from speech. Cross-linguistic influence of word order preference and visual attention in hearing bimodal bilinguals appears to be one-directional modulated by modality-driven differences.
  • Maskalenka, K., Alagöz, G., Krueger, F., Wright, J., Rostovskaya, M., Nakhuda, A., Bendall, A., Krueger, C., Walker, S., Scally, A., & Rugg-Gunn, P. J. (2023). NANOGP1, a tandem duplicate of NANOG, exhibits partial functional conservation in human naïve pluripotent stem cells. Development, 150(2): dev201155. doi:10.1242/dev.201155.

    Abstract

    Gene duplication events can drive evolution by providing genetic material for new gene functions, and they create opportunities for diverse developmental strategies to emerge between species. To study the contribution of duplicated genes to human early development, we examined the evolution and function of NANOGP1, a tandem duplicate of the transcription factor NANOG. We found that NANOGP1 and NANOG have overlapping but distinct expression profiles, with high NANOGP1 expression restricted to early epiblast cells and naïve-state pluripotent stem cells. Sequence analysis and epitope-tagging revealed that NANOGP1 is protein coding with an intact homeobox domain. The duplication that created NANOGP1 occurred earlier in primate evolution than previously thought and has been retained only in great apes, whereas Old World monkeys have disabled the gene in different ways, including homeodomain point mutations. NANOGP1 is a strong inducer of naïve pluripotency; however, unlike NANOG, it is not required to maintain the undifferentiated status of human naïve pluripotent cells. By retaining expression, sequence and partial functional conservation with its ancestral copy, NANOGP1 exemplifies how gene duplication and subfunctionalisation can contribute to transcription factor activity in human pluripotency and development.
  • Mazzini, S., Holler, J., & Drijvers, L. (2023). Studying naturalistic human communication using dual-EEG and audio-visual recordings. STAR Protocols, 4(3): 102370. doi:10.1016/j.xpro.2023.102370.

    Abstract

    We present a protocol to study naturalistic human communication using dual-EEG and audio-visual recordings. We describe preparatory steps for data collection including setup preparation, experiment design, and piloting. We then describe the data collection process in detail which consists of participant recruitment, experiment room preparation, and data collection. We also outline the kinds of research questions that can be addressed with the current protocol, including several analysis possibilities, from conversational to advanced time-frequency analyses.
    For complete details on the use and execution of this protocol, please refer to Drijvers and Holler (2022).
  • McConnell, K. (2023). Individual Differences in Holistic and Compositional Language Processing. Journal of Cognition, 6. doi:10.5334/joc.283.

    Abstract

    Individual differences in cognitive abilities are ubiquitous across the spectrum of proficient language users. Although speakers differ with regard to their memory capacity, ability for inhibiting distraction, and ability to shift between different processing levels, comprehension is generally successful. However, this does not mean it is identical across individuals; listeners and readers may rely on different processing strategies to exploit distributional information in the service of efficient understanding. In the following psycholinguistic reading experiment, we investigate potential sources of individual differences in the processing of co-occurring words. Participants read modifier-noun bigrams like absolute silence in a self-paced reading task. Backward transition probability (BTP) between the two lexemes was used to quantify the prominence of the bigram as a whole in comparison to the frequency of its parts. Of five individual difference measures (processing speed, verbal working memory, cognitive inhibition, global-local scope shifting, and personality), two proved to be significantly associated with the effect of BTP on reading times. Participants who could inhibit a distracting global environment in order to more efficiently retrieve a single part and those that preferred the local level in the shifting task showed greater effects of the co-occurrence probability of the parts. We conclude that some participants are more likely to retrieve bigrams via their parts and their co-occurrence statistics whereas others more readily retrieve the two words together as a single chunked unit.
  • McConnell, K., & Blumenthal-Dramé, A. (2021). Usage-Based Individual Differences in the Probabilistic Processing of Multi-Word Sequences. Frontiers in Communication, 6: 703351. doi:10.3389/fcomm.2021.703351.

    Abstract

    While it is widely acknowledged that both predictive expectations and retrodictive integration influence language processing, the individual differences that affect these two processes and the best metrics for observing them have yet to be fully described. The present study aims to contribute to the debate by investigating the extent to which experienced-based variables modulate the processing of word pairs (bigrams). Specifically, we investigate how age and reading experience correlate with lexical anticipation and integration, and how this effect can be captured by the metrics of forward and backward transition probability (TP). Participants read more and less strongly associated bigrams, paired to control for known lexical covariates such as bigram frequency and meaning (i.e., absolute control, total control, absolute silence, total silence) in a self-paced reading (SPR) task. They additionally completed assessments of exposure to print text (Author Recognition Test, Shipley vocabulary assessment, Words that Go Together task) and provided their age. Results show that both older age and lesser reading experience individually correlate with stronger TP effects. Moreover, TP effects differ across the spillover region (the two words following the noun in the bigram).
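    The forward and backward transition probabilities referred to in the two abstracts above are plain conditional probabilities over bigram counts. As a minimal illustrative sketch (not the authors' code; the corpus and counts below are hypothetical):

```python
# Illustrative sketch of transition probability metrics for a bigram (w1, w2).
# Forward TP  = P(w2 | w1) = freq(w1 w2) / freq(w1)
# Backward TP = P(w1 | w2) = freq(w1 w2) / freq(w2)

def forward_tp(bigram_count: int, w1_count: int) -> float:
    """How predictable the second word is, given the first."""
    return bigram_count / w1_count

def backward_tp(bigram_count: int, w2_count: int) -> float:
    """How predictable the first word is, given the second."""
    return bigram_count / w2_count

# Hypothetical counts for a bigram like 'absolute silence':
print(forward_tp(50, 1000))  # 0.05
print(backward_tp(50, 400))  # 0.125
```

    A high backward TP relative to the parts' frequencies is what marks a bigram as prominent "as a whole" in the studies above.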
  • McLean, B., Dunn, M., & Dingemanse, M. (2023). Two measures are better than one: Combining iconicity ratings and guessing experiments for a more nuanced picture of iconicity in the lexicon. Language and Cognition, 15(4), 719-739. doi:10.1017/langcog.2023.9.

    Abstract

    Iconicity in language is receiving increased attention from many fields, but our understanding of iconicity is only as good as the measures we use to quantify it. We collected iconicity measures for 304 Japanese words from English-speaking participants, using rating and guessing tasks. The words included ideophones (structurally marked depictive words) along with regular lexical items from similar semantic domains (e.g., fuwafuwa ‘fluffy’, yawarakai ‘soft’). The two measures correlated, speaking to their validity. However, ideophones received consistently higher iconicity ratings than other items, even when guessed at the same accuracies, suggesting the rating task is more sensitive to cues like structural markedness that frame words as iconic. These cues did not always guide participants to the meanings of ideophones in the guessing task, but they did make them more confident in their guesses, even when they were wrong. Consistently poor guessing results reflect the role different experiences play in shaping construals of iconicity. Using multiple measures in tandem allows us to explore the interplay between iconicity and these external factors. To facilitate this, we introduce a reproducible workflow for creating rating and guessing tasks from standardised wordlists, while also making improvements to the robustness, sensitivity and discriminability of previous approaches.
  • McQueen, J. M., Jesse, A., & Mitterer, H. (2023). Lexically mediated compensation for coarticulation still as elusive as a white christmash. Cognitive Science, 47(9): e13342. doi:10.1111/cogs.13342.

    Abstract

    Luthra, Peraza-Santiago, Beeson, Saltzman, Crinnion, and Magnuson (2021) present data from the lexically mediated compensation for coarticulation paradigm that they claim provides conclusive evidence in favor of top-down processing in speech perception. We argue here that this evidence does not support that conclusion. The findings are open to alternative explanations, and we give data in support of one of them (that there is an acoustic confound in the materials). Lexically mediated compensation for coarticulation thus remains elusive, while prior data from the paradigm instead challenge the idea that there is top-down processing in online speech recognition.

    Additional information

    supplementary materials
  • Melnychuk, T., Galke, L., Seidlmayer, E., Förstner, K. U., Tochtermann, K., & Schultz, C. (2021). Früherkennung wissenschaftlicher Konvergenz im Hochschulmanagement [Early detection of scientific convergence in higher education management]. Hochschulmanagement, 16(1), 24-28.

    Abstract

    It is crucial for universities to recognize early signals of scientific convergence. Scientific convergence describes a dynamic pattern where the distance between different fields of knowledge shrinks over time. This knowledge space is beneficial to radical innovations and new promising research topics. Research in converging areas of knowledge can therefore allow universities to establish a leading position in the science community. The Q-AKTIV project develops a new approach on the basis of machine learning to identify scientific convergence at an early stage. In this work, we briefly present this approach and the first results of empirical validation. We discuss the benefits of an instrument building on our approach for the strategic management of universities and other research institutes.
  • Melnychuk, T., Galke, L., Seidlmayer, E., Bröring, S., Förstner, K. U., Tochtermann, K., & Schultz, C. (2023). Development of similarity measures from graph-structured bibliographic metadata: An application to identify scientific convergence. IEEE Transactions on Engineering Management. Advance online publication. doi:10.1109/TEM.2023.3308008.

    Abstract

    Scientific convergence is a phenomenon where the distance between hitherto distinct scientific fields narrows and the fields gradually overlap over time. It is creating important potential for research, development, and innovation. Although scientific convergence is crucial for the development of radically new technology, the identification of emerging scientific convergence is particularly difficult since the underlying knowledge flows are rather fuzzy and unstable in the early convergence stage. Nevertheless, novel scientific publications emerging at the intersection of different knowledge fields may reflect convergence processes. Thus, in this article, we exploit the growing number of research and digital libraries providing bibliographic metadata to propose an automated analysis of science dynamics. We utilize and adapt machine-learning methods (DeepWalk) to automatically learn a similarity measure between scientific fields from graphs constructed on bibliographic metadata. With a time-based perspective, we apply our approach to analyze the trajectories of evolving similarities between scientific fields. We validate the learned similarity measure by evaluating it within the well-explored case of cholesterol-lowering ingredients in which scientific convergence between the distinct scientific fields of nutrition and pharmaceuticals has partially taken place. Our results confirm that the similarity trajectories learned by our approach resemble the expected behavior, indicating that our approach may allow researchers and practitioners to detect and predict scientific convergence early.
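    Once field embeddings have been learned from bibliographic graphs (the abstract above adapts DeepWalk for this), similarity between two fields is typically read off as the cosine of their embedding vectors. A minimal sketch, with hypothetical toy vectors standing in for learned embeddings:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical 3-dimensional embeddings for two scientific fields,
# as a DeepWalk-style model might produce from metadata graphs.
nutrition = [0.8, 0.1, 0.3]
pharmaceuticals = [0.7, 0.2, 0.4]

print(round(cosine_similarity(nutrition, pharmaceuticals), 2))  # 0.98
```

    Tracking this value over time-sliced graphs gives the similarity trajectories the article uses to detect convergence.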
  • Menks, W. M., Fehlbaum, L. V., Borbás, R., Sterzer, P., Stadler, C., & Raschle, N. M. (2021). Eye gaze patterns and functional brain responses during emotional face processing in adolescents with conduct disorder. NeuroImage: Clinical, 29: 102519. doi:10.1016/j.nicl.2020.102519.

    Abstract

    Background: Conduct disorder (CD) is characterized by severe aggressive and antisocial behavior. Initial evidence suggests neural deficits and aberrant eye gaze patterns during emotion processing in CD; both concepts, however, have not yet been studied simultaneously. The present study assessed the functional brain correlates of emotional face processing with and without consideration of concurrent eye gaze behavior in adolescents with CD compared to typically developing (TD) adolescents.
    Methods: 58 adolescents (23 CD / 35 TD; average age = 16 years, range = 14–19 years) underwent an implicit emotional face processing task. Neuroimaging analyses were conducted for a priori-defined regions of interest (insula, amygdala, and medial orbitofrontal cortex) and using a full-factorial design assessing the main effects of emotion (neutral, anger, fear), group, and the interaction thereof (cluster-level, p < .05 FWE-corrected), with and without consideration of concurrent eye gaze behavior (i.e., time spent on the eye region).
    Results: Adolescents with CD showed significant hypo-activations during emotional face processing in the right anterior insula compared to TD adolescents, independent of the emotion presented. In-scanner eye-tracking data revealed that adolescents with CD spent significantly less time on the eye, but not the mouth, region. Correcting for eye gaze behavior during emotional face processing reduced the group differences previously observed for the right insula.
    Conclusions: Atypical insula activation during emotional face processing in adolescents with CD may partly be explained by attentional mechanisms (i.e., reduced gaze allocation to the eyes, independent of the emotion presented). An increased understanding of the mechanisms causal for the emotion processing deficits observed in CD may ultimately aid the development of personalized intervention programs.

    Additional information

    1-s2.0-S2213158220303569-mmc1.doc
  • Merkx, D., & Frank, S. L. (2021). Human sentence processing: Recurrence or attention? In E. Chersoni, N. Hollenstein, C. Jacobs, Y. Oseki, L. Prévot, & E. Santus (Eds.), Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2021) (pp. 12-22). Stroudsburg, PA, USA: Association for Computational Linguistics (ACL). doi:10.18653/v1/2021.cmcl-1.2.

    Abstract

    Recurrent neural networks (RNNs) have long been an architecture of interest for computational models of human sentence processing. The recently introduced Transformer architecture outperforms RNNs on many natural language processing tasks, but little is known about its ability to model human language processing. We compare Transformer- and RNN-based language models’ ability to account for measures of human reading effort. Our analysis shows Transformers to outperform RNNs in explaining self-paced reading times and neural activity during reading of English sentences, challenging the widely held idea that human sentence processing involves recurrent and immediate processing, and providing evidence for cue-based retrieval.
  • Merkx, D., Frank, S. L., & Ernestus, M. (2021). Semantic sentence similarity: Size does not always matter. In Proceedings of Interspeech 2021 (pp. 4393-4397). doi:10.21437/Interspeech.2021-1464.

    Abstract

    This study addresses the question whether visually grounded speech recognition (VGS) models learn to capture sentence semantics without access to any prior linguistic knowledge. We produce synthetic and natural spoken versions of a well-known semantic textual similarity database and show that our VGS model produces embeddings that correlate well with human semantic similarity judgements. Our results show that a model trained on a small image-caption database outperforms two models trained on much larger databases, indicating that database size is not all that matters. We also investigate the importance of having multiple captions per image and find that this is indeed helpful even if the total number of images is lower, suggesting that paraphrasing is a valuable learning signal. While the general trend in the field is to create ever larger datasets to train models on, our findings indicate that other characteristics of the database can be just as important.
  • He, J., Meyer, A. S., Creemers, A., & Brehm, L. (2021). Conducting language production research online: A web-based study of semantic context and name agreement effects in multi-word production. Collabra: Psychology, 7(1): 29935. doi:10.1525/collabra.29935.

    Abstract

    Few web-based experiments have explored spoken language production, perhaps due to concerns of data quality, especially for measuring onset latencies. The present study highlights how speech production research can be done outside of the laboratory by measuring utterance durations and speech fluency in a multiple-object naming task when examining two effects related to lexical selection: semantic context and name agreement. A web-based modified blocked-cyclic naming paradigm was created, in which participants named a total of sixteen simultaneously presented pictures on each trial. The pictures were either four tokens from the same semantic category (homogeneous context), or four tokens from different semantic categories (heterogeneous context). Name agreement of the pictures was varied orthogonally (high, low). In addition to onset latency, five dependent variables were measured to index naming performance: accuracy, utterance duration, total pause time, the number of chunks (word groups pronounced without intervening pauses), and first chunk length. Bayesian analyses showed effects of semantic context and name agreement for some of the dependent measures, but no interaction. We discuss the methodological implications of the current study and make best practice recommendations for spoken language production research in an online environment.
  • He, J., Meyer, A. S., & Brehm, L. (2021). Concurrent listening affects speech planning and fluency: The roles of representational similarity and capacity limitation. Language, Cognition and Neuroscience, 36(10), 1258-1280. doi:10.1080/23273798.2021.1925130.

    Abstract

    In a novel continuous speaking-listening paradigm, we explored how speech planning was affected by concurrent listening. In Experiment 1, Dutch speakers named pictures with high versus low name agreement while ignoring Dutch speech, Chinese speech, or eight-talker babble. Both name agreement and type of auditory input influenced response timing and chunking, suggesting that representational similarity impacts lexical selection and the scope of advance planning in utterance generation. In Experiment 2, Dutch speakers named pictures with high or low name agreement while either ignoring Dutch words, or attending to them for a later memory test. Both name agreement and attention demand influenced response timing and chunking, suggesting that attention demand impacts lexical selection and the planned utterance units in each response. The study indicates that representational similarity and attention demand play important roles in linguistic dual-task interference, and the interference can be managed by adapting when and how to plan speech.

    Additional information

    supplemental material
  • Meyer, A. S. (2023). Timing in conversation. Journal of Cognition, 6(1), 1-17. doi:10.5334/joc.268.

    Abstract

    Turn-taking in everyday conversation is fast, with median latencies in corpora of conversational speech often reported to be under 300 ms. This seems like magic, given that experimental research on speech planning has shown that speakers need much more time to plan and produce even the shortest of utterances. This paper reviews how language scientists have combined linguistic analyses of conversations and experimental work to understand the skill of swift turn-taking and proposes a tentative solution to the riddle of fast turn-taking.
  • Mickan, A., McQueen, J. M., Brehm, L., & Lemhöfer, K. (2023). Individual differences in foreign language attrition: A 6-month longitudinal investigation after a study abroad. Language, Cognition and Neuroscience, 38(1), 11-39. doi:10.1080/23273798.2022.2074479.

    Abstract

    While recent laboratory studies suggest that the use of competing languages is a driving force in foreign language (FL) attrition (i.e. forgetting), research on “real” attriters has failed to demonstrate such a relationship. We addressed this issue in a large-scale longitudinal study, following German students throughout a study abroad in Spain and their first six months back in Germany. Monthly, percentage-based frequency of use measures enabled a fine-grained description of language use. L3 Spanish forgetting rates were indeed predicted by the quantity and quality of Spanish use, and correlated negatively with L1 German and positively with L2 English letter fluency. Attrition rates were furthermore influenced by prior Spanish proficiency, but not by motivation to maintain Spanish or non-verbal long-term memory capacity. Overall, this study highlights the importance of language use for FL retention and sheds light on the complex interplay between language use and other determinants of attrition.
  • Mickan, A., McQueen, J. M., Valentini, B., Piai, V., & Lemhöfer, K. (2021). Electrophysiological evidence for cross-language interference in foreign-language attrition. Neuropsychologia, 155: 107795. doi:10.1016/j.neuropsychologia.2021.107795.

    Abstract

    Foreign language attrition (FLA) appears to be driven by interference from other, more recently-used languages (Mickan et al., 2020). Here we tracked these interference dynamics electrophysiologically to further our understanding of the underlying processes. Twenty-seven Dutch native speakers learned 70 new Italian words over two days. On a third day, EEG was recorded as they performed naming tasks on half of these words in English and, finally, as their memory for all the Italian words was tested in a picture-naming task. Replicating Mickan et al., recall was slower and tended to be less complete for Italian words that were interfered with (i.e., named in English) than for words that were not. These behavioral interference effects were accompanied by an enhanced frontal N2 and a decreased late positivity (LPC) for interfered compared to not-interfered items. Moreover, interfered items elicited more theta power. We also found an increased N2 during the interference phase for items that participants were later slower to retrieve in Italian. We interpret the N2 and theta effects as markers of interference, in line with the idea that Italian retrieval at final test is hampered by competition from recently practiced English translations. The LPC, in turn, reflects the consequences of interference: the reduced accessibility of interfered Italian labels. Finally, that retrieval ease at final test was related to the degree of interference during previous English retrieval shows that FLA is already set in motion during the interference phase, and hence can be the direct consequence of using other languages.

    Additional information

    data via Donders Repository
  • Mickan, A. (2021). What was that Spanish word again? Investigations into the cognitive mechanisms underlying foreign language attrition. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Misersky, J., Slivac, K., Hagoort, P., & Flecken, M. (2021). The State of the Onion: Grammatical aspect modulates object representation during event comprehension. Cognition, 214: 104744. doi:10.1016/j.cognition.2021.104744.

    Abstract

    The present ERP study assessed whether grammatical aspect is used as a cue in online event comprehension, in particular when reading about events in which an object is visually changed. While perfective aspect cues holistic event representations, including an event's endpoint, progressive aspect highlights intermediate phases of an event. In a 2 × 3 design, participants read SVO sentences describing a change-of-state event (e.g., to chop an onion), with grammatical Aspect manipulated (perfective “chopped” vs progressive “was chopping”). Thereafter, they saw a Picture of an object either having undergone substantial state-change (SC; a chopped onion), no state-change (NSC; an onion in its original state) or an unrelated object (U; a cactus, acting as control condition). Their task was to decide whether the object in the Picture was mentioned in the sentence. We focused on N400 modulation, with ERPs time-locked to picture onset. U pictures elicited an N400 response as expected, suggesting detection of categorical mismatches in object type. For SC and NSC pictures, a whole-head follow-up analysis revealed a P300, implying people were engaged in detailed evaluation of pictures of matching objects. SC pictures received most positive responses overall. Crucially, there was an interaction of Aspect and Picture: SC pictures resulted in a higher amplitude P300 after sentences in the perfective compared to the progressive. Thus, while the perfective cued for a holistic event representation, including the resultant state of the affected object (i.e., the chopped onion) constraining object representations online, the progressive defocused event completion and object-state change. Grammatical aspect thus guided online event comprehension by cueing the visual representation(s) of an object's state.
  • Mishra, C., Offrede, T., Fuchs, S., Mooshammer, C., & Skantze, G. (2023). Does a robot’s gaze aversion affect human gaze aversion? Frontiers in Robotics and AI, 10: 1127626. doi:10.3389/frobt.2023.1127626.

    Abstract

    Gaze cues serve an important role in facilitating human conversations and are generally considered to be one of the most important non-verbal cues. Gaze cues are used to manage turn-taking, coordinate joint attention, regulate intimacy, and signal cognitive effort. In particular, it is well established that gaze aversion is used in conversations to avoid prolonged periods of mutual gaze. Given the numerous functions of gaze cues, there has been extensive work on modelling these cues in social robots. Researchers have also tried to identify the impact of robot gaze on human participants. However, the influence of robot gaze behavior on human gaze behavior has been less explored. We conducted a within-subjects user study (N = 33) to verify if a robot’s gaze aversion influenced human gaze aversion behavior. Our results show that participants tend to avert their gaze more when the robot keeps staring at them as compared to when the robot exhibits well-timed gaze aversions. We interpret our findings in terms of intimacy regulation: humans try to compensate for the robot’s lack of gaze aversion.
  • Mishra, C., Verdonschot, R. G., Hagoort, P., & Skantze, G. (2023). Real-time emotion generation in human-robot dialogue using large language models. Frontiers in Robotics and AI, 10: 1271610. doi:10.3389/frobt.2023.1271610.

    Abstract

    Affective behaviors enable social robots not only to establish better connections with humans but also to serve as a tool for the robots to express their internal states. It has been well established that emotions are important for signaling understanding in Human-Robot Interaction (HRI). This work aims to harness the power of Large Language Models (LLMs) and proposes an approach to control the affective behavior of robots. By interpreting emotion appraisal as an Emotion Recognition in Conversation (ERC) task, we used GPT-3.5 to predict the emotion of a robot’s turn in real-time, using the dialogue history of the ongoing conversation. The robot signaled the predicted emotion using facial expressions. The model was evaluated in a within-subjects user study (N = 47) where the model-driven emotion generation was compared against conditions where the robot did not display any emotions and where it displayed incongruent emotions. The participants interacted with the robot by playing a card sorting game that was specifically designed to evoke emotions. The results indicated that the emotions were reliably generated by the LLM and the participants were able to perceive the robot’s emotions. The robot expressing congruent, model-driven facial emotion expressions was perceived to be significantly more human-like and emotionally appropriate, and elicited a more positive impression. Participants also scored significantly better in the card sorting game when the robot displayed congruent facial expressions. From a technical perspective, the study shows that LLMs can be used to control the affective behavior of robots reliably in real-time. Additionally, our results could inform the design of novel human-robot interactions, making robots more effective in roles where emotional interaction is important, such as therapy, companionship, or customer service.
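    The ERC framing described in this abstract can be sketched as prompt construction over the dialogue history. This is a hedged illustration, not the paper's implementation: `query_llm` stands in for a real GPT-3.5 call, and the label set and prompt wording are assumptions.

```python
# Illustrative sketch of Emotion Recognition in Conversation (ERC) prompting.
# All names, labels, and wording here are assumptions for illustration.

EMOTIONS = ["happiness", "sadness", "surprise", "anger", "neutral"]

def build_erc_prompt(dialogue_history, robot_turn):
    """Frame emotion appraisal as ERC: ask the model to label the
    robot's upcoming turn given the conversation so far."""
    lines = [f"{speaker}: {utterance}" for speaker, utterance in dialogue_history]
    lines.append(f"Robot: {robot_turn}")
    return (
        "Given the conversation below, answer with one word: the emotion "
        f"of the robot's last turn, chosen from {', '.join(EMOTIONS)}.\n\n"
        + "\n".join(lines)
    )

def predict_robot_emotion(dialogue_history, robot_turn, query_llm):
    """query_llm is a placeholder for an actual LLM API call."""
    reply = query_llm(build_erc_prompt(dialogue_history, robot_turn))
    label = reply.strip().lower()
    # Fall back to a neutral face if the model answers off-list.
    return label if label in EMOTIONS else "neutral"
```

    The predicted label would then drive the robot's facial expression for that turn; running the prediction per turn keeps the loop real-time, since only one short completion is needed.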
  • Misra, S. (2021). Real-time dynamic fur and hair simulation using verlet integration. International Journal of Scientific and Research Publication (IJSRP), 11(2), 444-450. doi:10.29322/IJSRP.11.02.2021.p11053.

    Abstract

    Throughout the history of game development, the physics behind real-time hair simulation has continued to pose a challenge due to the lack of computational resources available to the system. Unlike rendering an animation, where the requirement of real-time simulation is absent, game hair physics needs more efficiency when it comes to the utilization of computational resources. Generally, for making a hair strand mesh, a cylinder or a capsule mesh is an obvious choice despite its requirement of a higher number of draw calls or resources. This paper proposes an innovative and highly efficient use of quad polygons, whose normals face the camera, in conjunction with Verlet integration, which delivers optimal results by keeping the frames per second (FPS) stable. Additionally, the proposed physics allows physical forces, such as gravity and wind, to affect hair movement, as well as simulating a natural curl in the hair strand.
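    The strand physics summarized above can be sketched as a chain of particles advanced by Verlet steps with fixed-distance constraints. This is a minimal 2D illustrative sketch under stated assumptions, not the paper's implementation; `step`, the constants, and the relaxation scheme are all assumptions.

```python
# Minimal position-based Verlet sketch for one hair strand (2D, illustrative).
# Verlet stores current and previous positions; velocity is implicit:
#   x_new = 2*x - x_prev + a * dt^2

GRAVITY = -9.81
DT = 1.0 / 60.0          # one frame at 60 FPS
SEGMENT_LENGTH = 0.1     # rest length between neighbouring particles
ITERATIONS = 5           # constraint-relaxation passes per frame

def step(positions, prev_positions):
    """Advance the strand one frame; positions is a list of (x, y) tuples."""
    # 1. Verlet integration with gravity.
    new_pos = [(2 * x - px, 2 * y - py + GRAVITY * DT * DT)
               for (x, y), (px, py) in zip(positions, prev_positions)]
    # 2. Pin the root particle (attached to the scalp).
    new_pos[0] = positions[0]
    # 3. Iteratively restore segment rest lengths.
    for _ in range(ITERATIONS):
        for i in range(len(new_pos) - 1):
            (x1, y1), (x2, y2) = new_pos[i], new_pos[i + 1]
            dx, dy = x2 - x1, y2 - y1
            dist = (dx * dx + dy * dy) ** 0.5 or 1e-9
            diff = (dist - SEGMENT_LENGTH) / dist
            if i == 0:
                # Root is pinned: move only the free end.
                new_pos[i + 1] = (x2 - dx * diff, y2 - dy * diff)
            else:
                # Split the correction between both particles.
                new_pos[i] = (x1 + 0.5 * dx * diff, y1 + 0.5 * dy * diff)
                new_pos[i + 1] = (x2 - 0.5 * dx * diff, y2 - 0.5 * dy * diff)
    return new_pos, positions  # new state plus previous for the next frame
```

    Each frame calls `step` once; wind could be added as an extra acceleration term alongside gravity. Because velocity is implicit, the simulation stays stable even when the relaxation pass clamps stretched segments.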
  • Monaghan, P., Donnelly, S., Alcock, K., Bidgood, A., Cain, K., Durrant, S., Frost, R. L. A., Jago, L. S., Peter, M. S., Pine, J. M., Turnbull, H., & Rowland, C. F. (2023). Learning to generalise but not segment an artificial language at 17 months predicts children’s language skills 3 years later. Cognitive Psychology, 147: 101607. doi:10.1016/j.cogpsych.2023.101607.

    Abstract

    We investigated whether learning an artificial language at 17 months was predictive of children’s natural language vocabulary and grammar skills at 54 months. Children at 17 months listened to an artificial language containing non-adjacent dependencies, and were then tested on their learning to segment and to generalise the structure of the language. At 54 months, children were then tested on a range of standardised natural language tasks that assessed receptive and expressive vocabulary and grammar. A structural equation model demonstrated that learning the artificial language generalisation at 17 months predicted language abilities – a composite of vocabulary and grammar skills – at 54 months, whereas artificial language segmentation at 17 months did not predict language abilities at this age. Artificial language learning tasks – especially those that probe grammar learning – provide a valuable tool for uncovering the mechanisms driving children’s early language development.

    Additional information

    supplementary data
  • Montero-Melis, G. (2021). Consistency in motion event encoding across languages. Frontiers in Psychology, 12: 625153. doi:10.3389/fpsyg.2021.625153.

    Abstract

    Syntactic templates serve as schemas, allowing speakers to describe complex events in a systematic fashion. Motion events have long served as a prime example of how different languages favor different syntactic frames, in turn biasing their speakers towards different event conceptualizations. However, there is also variability in how motion events are syntactically framed within languages. Here we measure the consistency in event encoding in two languages, Spanish and Swedish. We test a dominant account in the literature, namely that variability within a language can be explained by specific properties of the events. This event-properties account predicts that descriptions of one and the same event should be consistent within a language, even in languages where there is overall variability in the use of syntactic frames. Spanish and Swedish speakers (N=84) described 32 caused motion events. While the most frequent syntactic framing in each language was as expected based on typology (Spanish: verb-framed, Swedish: satellite-framed, cf. Talmy, 2000), Swedish descriptions were substantially more consistent than Spanish descriptions. Swedish speakers almost invariably encoded all events with a single syntactic frame and systematically conveyed manner of motion. Spanish descriptions, in contrast, varied much more regarding syntactic framing and expression of manner. Crucially, variability in Spanish descriptions was not mainly a function of differences between events, as predicted by the event-properties account. Rather, Spanish variability in syntactic framing was driven by speaker biases. A similar picture arose for whether Spanish descriptions expressed manner information or not: Even after accounting for the effect of syntactic choice, a large portion of the variance in Spanish manner encoding remained attributable to differences among speakers. The results show that consistency in motion event encoding starkly differs across languages: Some languages (like Swedish) bias their speakers towards a particular linguistic event schema much more than others (like Spanish). Implications of these findings are discussed with respect to the typology of event framing, theories on the relationship between language and thought, and speech planning. In addition, the tools employed here to quantify variability can be applied to other domains of language.

    Additional information

    data and analysis scripts
  • Moreno Santillán, D. D., Lama, T. M., Gutierrez Guerrero, Y. T., Brown, A. M., Donat, P., Zhao, H., Rossiter, S. J., Yohe, L. R., Potter, J. H., Teeling, E. C., Vernes, S. C., Davies, K. T. J., Myers, E., Hughes, G. M., Huang, Z., Hoffmann, F., Corthals, A. P., Ray, D. A., & Dávalos, L. M. (2021). Large‐scale genome sampling reveals unique immunity and metabolic adaptations in bats. Molecular Ecology, 30(23), 6449-6467. doi:10.1111/mec.16027.

    Abstract

    Comprising more than 1,400 species, bats possess adaptations unique among mammals, including powered flight, unexpected longevity, and extraordinary immunity. Some of the molecular mechanisms underlying these unique adaptations include DNA repair, metabolism and immunity. However, analyses have been limited to a few divergent lineages, reducing the scope of inferences on gene family evolution across the Order Chiroptera. We conducted an exhaustive comparative genomic study of 37 bat species, one generated in this study, encompassing a large number of lineages, with a particular emphasis on multi-gene family evolution across immune and metabolic genes. In agreement with previous analyses, we found lineage-specific expansions of the APOBEC3 and MHC-I gene families, and loss of the proinflammatory PYHIN gene family. We inferred more than 1,000 gene losses unique to bats, including genes involved in the regulation of inflammasome pathways such as epithelial defense receptors, the natural killer gene complex and the interferon-gamma induced pathway. Gene set enrichment analyses revealed that genes lost in bats are involved in defense response against pathogen-associated molecular patterns and damage-associated molecular patterns. Gene family evolution and selection analyses indicate bats have evolved fundamental functional differences compared to other mammals in both the innate and adaptive immune systems, with the potential to enhance anti-viral immune response while dampening inflammatory signaling. In addition, metabolic genes have experienced repeated expansions related to convergent shifts to plant-based diets. Our analyses support the hypothesis that, in tandem with flight, ancestral bats had evolved a unique set of immune adaptations whose functional implications remain to be explored.

    Additional information

    supplementary material table S1-S18
  • Morgan, A., Braden, R., Wong, M. M. K., Colin, E., Amor, D., Liégeois, F., Srivastava, S., Vogel, A., Bizaoui, V., Ranguin, K., Fisher, S. E., & Van Bon, B. W. (2021). Speech and language deficits are central to SETBP1 haploinsufficiency disorder. European Journal of Human Genetics, 29, 1216-1225. doi:10.1038/s41431-021-00894-x.

    Abstract

    Expressive communication impairment is associated with haploinsufficiency of SETBP1, as reported in small case series. Heterozygous pathogenic loss-of-function (LoF) variants in SETBP1 have also been identified in independent cohorts ascertained for childhood apraxia of speech (CAS), warranting further investigation of the roles of this gene in speech development. Thirty-one participants (12 males, aged 0;8–23;2 years, 28 with pathogenic SETBP1 LoF variants, 3 with 18q12.3 deletions) were assessed for speech, language and literacy abilities. Broader development was examined with standardised motor, social and daily life skills assessments. Gross and fine motor deficits (94%) and intellectual impairments (68%) were common. Protracted and aberrant speech development was consistently seen, regardless of motor or intellectual ability. We expand the linguistic phenotype associated with SETBP1 LoF syndrome (SETBP1 haploinsufficiency disorder), revealing a striking speech presentation that implicates both motor (CAS, dysarthria) and language (phonological errors) systems, with CAS (80%) being the most common diagnosis. In contrast to past reports, the understanding of language was rarely better preserved than language expression (29%). Language was typically low to moderately impaired, with commensurate expression and comprehension ability. Children were sociable with a strong desire to communicate. Minimally verbal children (32%) augmented speech with sign language, gestures or digital devices. Overall, relative to general development, spoken language and literacy were poorer than social, daily living, motor and adaptive behaviour skills. Our findings show that poor communication is a central feature of SETBP1 haploinsufficiency disorder, confirming this gene as a strong candidate for speech and language disorders.
  • Morison, L., Meffert, E., Stampfer, M., Steiner-Wilke, I., Vollmer, B., Schulze, K., Briggs, T., Braden, R., Vogel, A. P., Thompson-Lake, D., Patel, C., Blair, E., Goel, H., Turner, S., Moog, U., Riess, A., Liegeois, F., Koolen, D. A., Amor, D. J., Kleefstra, T., Fisher, S. E., Zweier, C., & Morgan, A. T. (2023). In-depth characterisation of a cohort of individuals with missense and loss-of-function variants disrupting FOXP2. Journal of Medical Genetics, 60(6), 597-607. doi:10.1136/jmg-2022-108734.

    Abstract

    Background
    Heterozygous disruptions of FOXP2 were the first identified molecular cause for severe speech disorder, childhood apraxia of speech (CAS); yet few cases have been reported, limiting knowledge of the condition.

    Methods
    Here we phenotyped 29 individuals from 18 families with pathogenic FOXP2-only variants (13 loss-of-function, 5 missense variants; 14 males; aged 2 years to 62 years). Health and development (cognitive, motor, social domains) were examined, including speech and language outcomes with the first cross-linguistic analysis of English and German.

    Results
    Speech disorders were prevalent (24/26, 92%) and CAS was most common (23/26, 89%), with similar speech presentations across English and German. Speech was still impaired in adulthood and some speech sounds (e.g. ‘th’, ‘r’, ‘ch’, ‘j’) were never acquired. Language impairments (22/26, 85%) ranged from mild to severe. Comorbidities included feeding difficulties in infancy (10/27, 37%), fine (14/27, 52%) and gross (14/27, 52%) motor impairment, anxiety (6/28, 21%), depression (7/28, 25%), and sleep disturbance (11/15, 44%). Physical features were common (23/28, 82%) but with no consistent pattern. Cognition ranged from average to mildly impaired, and was incongruent with language ability; for example, seven participants with severe language disorder had average non-verbal cognition.

    Conclusions
    Although we identify increased prevalence of conditions like anxiety, depression and sleep disturbance, we confirm that the consequences of FOXP2 dysfunction remain relatively specific to speech disorder, as compared to other recently identified monogenic conditions associated with CAS. Thus, our findings reinforce that FOXP2 provides a valuable entrypoint for examining the neurobiological bases of speech disorder.
  • Mudd, K., Lutzenberger, H., De Vos, C., & De Boer, B. (2021). Social structure and lexical uniformity: A case study of gender differences in the Kata Kolok community. In T. Fitch, C. Lamm, H. Leder, & K. Teßmar-Raible (Eds.), Proceedings of the 43rd Annual Conference of the Cognitive Science Society (CogSci 2021) (pp. 2692-2698). Vienna: Cognitive Science Society.

    Abstract

    Language emergence is characterized by a high degree of lexical variation. It has been suggested that the speed at which lexical conventionalization occurs depends partially on social structure. In large communities, individuals receive input from many sources, creating a pressure for lexical convergence. In small, insular communities, individuals can remember idiolects and share common ground with interlocutors, allowing these communities to retain a high degree of lexical variation. We look at lexical variation in Kata Kolok, a sign language which emerged six generations ago in a Balinese village, where women tend to have more tightly-knit social networks than men. We test if there are differing degrees of lexical uniformity between women and men by reanalyzing a picture description task in Kata Kolok. We find that women’s productions exhibit less lexical uniformity than men’s. One possible explanation of this finding is that women’s more tightly-knit social networks allow for remembering idiolects, alleviating the pressure for lexical convergence, but social network data from the Kata Kolok community is needed to support this explanation.
  • Muhinyi, A., & Rowland, C. F. (2023). Contributions of abstract extratextual talk and interactive style to preschoolers’ vocabulary development. Journal of Child Language, 50(1), 198-213. doi:10.1017/S0305000921000696.

    Abstract

    Caregiver abstract talk during shared reading predicts preschool-age children’s vocabulary development. However, previous research has focused on level of abstraction with less consideration of the style of extratextual talk. Here, we investigated the relation between these two dimensions of extratextual talk, and their contributions to variance in children’s vocabulary skills. Caregiver level of abstraction was associated with an interactive reading style. Controlling for socioeconomic status and child age, high interactivity predicted children’s concurrent vocabulary skills whereas abstraction did not. Controlling for earlier vocabulary skills, neither dimension of the extratextual talk predicted later vocabulary. Theoretical and practical relevance are discussed.
  • Nabrotzky, J., Ambrazaitis, G., Zellers, M., & House, D. (2023). Temporal alignment of manual gestures’ phase transitions with lexical and post-lexical accentual F0 peaks in spontaneous Swedish interaction. In W. Pouw, J. Trujillo, H. R. Bosker, L. Drijvers, M. Hoetjes, J. Holler, S. Kadava, L. Van Maastricht, E. Mamus, & A. Ozyurek (Eds.), Gesture and Speech in Interaction (GeSpIn) Conference. doi:10.17617/2.3527194.

    Abstract

    Many studies investigating the temporal alignment of co-speech gestures to acoustic units in the speech signal find a close coupling of gestural landmarks and pitch accents or the stressed syllable of pitch-accented words. In English, a pitch accent is anchored in the lexically stressed syllable. Hence, it is unclear whether it is the lexical phonological dimension of stress, or the phrase-level prominence, that determines the details of speech-gesture synchronization. This paper explores the relation between gestural phase transitions and accentual F0 peaks in Stockholm Swedish, which exhibits a lexical pitch accent distinction. When produced with phrase-level prominence, there are three different configurations of lexicality of F0 peaks and the status of the syllables they are aligned with. Through analyzing the alignment of the different F0 peaks with gestural onsets in spontaneous dyadic conversations, we aim to contribute to our understanding of the role of lexical prosodic phonology in the co-production of speech and gesture. The results, though limited by a small dataset, suggest differences between the three types of peaks concerning which types of gesture phase onsets they tend to align with, and how well these landmarks align with each other, although these differences did not reach significance.
  • Nielsen, A. K. S., & Dingemanse, M. (2021). Iconicity in word learning and beyond: A critical review. Language and Speech, 64(1), 52-72. doi:10.1177/0023830920914339.

    Abstract

    Interest in iconicity (the resemblance-based mapping between aspects of form and meaning) is in the midst of a resurgence, and a prominent focus in the field has been the possible role of iconicity in language learning. Here we critically review theory and empirical findings in this domain. We distinguish local learning enhancement (where the iconicity of certain lexical items influences the learning of those items) and general learning enhancement (where the iconicity of certain lexical items influences the later learning of non-iconic items or systems). We find that evidence for local learning enhancement is quite strong, though not as clear cut as it is often described and based on a limited sample of languages. Despite common claims about broader facilitatory effects of iconicity on learning, we find that current evidence for general learning enhancement is lacking. We suggest a number of productive avenues for future research and specify what types of evidence would be required to show a role for iconicity in general learning enhancement. We also review evidence for functions of iconicity beyond word learning: iconicity enhances comprehension by providing complementary representations, supports communication about sensory imagery, and expresses affective meanings. Even if learning benefits may be modest or cross-linguistically varied, on balance, iconicity emerges as a vital aspect of language.
  • Nieuwland, M. S. (2021). How ‘rational’ is semantic prediction? A critique and re-analysis of Delaney-Busch et al. (2019). Cognition, 215: 104848. doi:10.1016/j.cognition.2021.104848.

    Abstract

    In a recent article in Cognition, Delaney-Busch et al. (2019) claim evidence for ‘rational’, Bayesian adaptation of semantic predictions, using ERP data from Lau, Holcomb, and Kuperberg (2013). Participants read associatively related and unrelated prime-target word pairs in a first block with only 10% related trials and a second block with 50%. Related words elicited smaller N400s than unrelated words, and this difference was strongest in the second block, suggesting greater engagement in predictive processing. Using a rational adaptor model, Delaney-Busch et al. argue that the stronger N400 reduction for related words in the second block developed as a function of the number of related trials, and concluded therefore that participants predicted related words more strongly when their predictions were fulfilled more often. In this critique, I discuss two critical flaws in their analyses, namely the confounding of prediction effects with those of lexical frequency and the neglect of data from the first block. Re-analyses suggest a different picture: related words by themselves did not yield support for their conclusion, and the effect of relatedness gradually strengthened in the two blocks in a similar way. Therefore, the N400 did not yield evidence that participants rationally adapted their semantic predictions. Within the framework proposed by Delaney-Busch et al., presumed semantic predictions may even be thought of as ‘irrational’. While these results yielded no evidence for rational or probabilistic prediction, they do suggest that participants became increasingly better at predicting target words from prime words.
  • Nieuwland, M. S. (2021). Commentary: Rational adaptation in lexical prediction: The influence of prediction strength. Frontiers in Psychology, 12: 735849. doi:10.3389/fpsyg.2021.735849.
  • Norris, D., & Cutler, A. (2021). More why, less how: What we need from models of cognition. Cognition, 213: 104688. doi:10.1016/j.cognition.2021.104688.

    Abstract

    Science regularly experiences periods in which simply describing the world is prioritised over attempting to explain it. Cognition, this journal, came into being some 45 years ago as an attempt to lay one such period to rest; without doubt, it has helped create the current cognitive science climate in which theory is decidedly welcome. Here we summarise the reasons why a theoretical approach is imperative in our field, and call attention to some potentially counter-productive trends in which cognitive models are concerned too exclusively with how processes work at the expense of why the processes exist in the first place and thus what the goal of modelling them must be.
  • Nota, N., Trujillo, J. P., & Holler, J. (2021). Facial signals and social actions in multimodal face-to-face interaction. Brain Sciences, 11(8): 1017. doi:10.3390/brainsci11081017.

    Abstract

    In a conversation, recognising the speaker’s social action (e.g., a request) early may help the potential following speakers understand the intended message quickly, and plan a timely response. Human language is multimodal, and several studies have demonstrated the contribution of the body to communication. However, comparatively few studies have investigated (non-emotional) conversational facial signals and very little is known about how they contribute to the communication of social actions. Therefore, we investigated how facial signals map onto the expressions of two fundamental social actions in conversations: asking questions and providing responses. We studied the distribution and timing of 12 facial signals across 6778 questions and 4553 responses, annotated holistically in a corpus of 34 dyadic face-to-face Dutch conversations. Moreover, we analysed facial signal clustering to find out whether there are specific combinations of facial signals within questions or responses. Results showed a high proportion of facial signals, with a qualitatively different distribution in questions versus responses. Additionally, clusters of facial signals were identified. Most facial signals occurred early in the utterance, and had earlier onsets in questions. Thus, facial signals may critically contribute to the communication of social actions in conversation by providing social action-specific visual information.
  • Nota, N., Trujillo, J. P., & Holler, J. (2023). Specific facial signals associate with categories of social actions conveyed through questions. PLoS One, 18(7): e0288104. doi:10.1371/journal.pone.0288104.

    Abstract

    The early recognition of fundamental social actions, like questions, is crucial for understanding the speaker’s intended message and planning a timely response in conversation. Questions themselves may express more than one social action category (e.g., an information request “What time is it?”, an invitation “Will you come to my party?” or a criticism “Are you crazy?”). Although human language use occurs predominantly in a multimodal context, prior research on social actions has mainly focused on the verbal modality. This study breaks new ground by investigating how conversational facial signals may map onto the expression of different types of social actions conveyed through questions. The distribution, timing, and temporal organization of facial signals across social actions was analysed in a rich corpus of naturalistic, dyadic face-to-face Dutch conversations. These social actions were: Information Requests, Understanding Checks, Self-Directed questions, Stance or Sentiment questions, Other-Initiated Repairs, Active Participation questions, questions for Structuring, Initiating or Maintaining Conversation, and Plans and Actions questions. This is the first study to reveal differences in distribution and timing of facial signals across different types of social actions. The findings raise the possibility that facial signals may facilitate social action recognition during language processing in multimodal face-to-face interaction.

    Additional information

    supporting information
  • Nota, N., Trujillo, J. P., Jacobs, V., & Holler, J. (2023). Facilitating question identification through natural intensity eyebrow movements in virtual avatars. Scientific Reports, 13: 21295. doi:10.1038/s41598-023-48586-4.

    Abstract

    In conversation, recognizing social actions (similar to ‘speech acts’) early is important to quickly understand the speaker’s intended message and to provide a fast response. Fast turns are typical for fundamental social actions like questions, since a long gap can indicate a dispreferred response. In multimodal face-to-face interaction, visual signals may contribute to this fast dynamic. The face is an important source of visual signalling, and previous research found that prevalent facial signals such as eyebrow movements facilitate the rapid recognition of questions. We aimed to investigate whether early eyebrow movements with natural movement intensities facilitate question identification, and whether specific intensities are more helpful in detecting questions. Participants were instructed to view videos of avatars where the presence of eyebrow movements (eyebrow frown or raise vs. no eyebrow movement) was manipulated, and to indicate whether the utterance in the video was a question or statement. Results showed higher accuracies for questions with eyebrow frowns, and faster response times for questions with eyebrow frowns and eyebrow raises. No additional effect was observed for the specific movement intensity. This suggests that eyebrow movements that are representative of naturalistic multimodal behaviour facilitate question recognition.
  • Nota, N., Trujillo, J. P., & Holler, J. (2023). Conversational eyebrow frowns facilitate question identification: An online study using virtual avatars. Cognitive Science, 47(12): e13392. doi:10.1111/cogs.13392.

    Abstract

    Conversation is a time-pressured environment. Recognizing a social action (the ‘‘speech act,’’ such as a question requesting information) early is crucial in conversation to quickly understand the intended message and plan a timely response. Fast turns between interlocutors are especially relevant for responses to questions since a long gap may be meaningful by itself. Human language is multimodal, involving speech as well as visual signals from the body, including the face. But little is known about how conversational facial signals contribute to the communication of social actions. Some of the most prominent facial signals in conversation are eyebrow movements. Previous studies found links between eyebrow movements and questions, suggesting that these facial signals could contribute to the rapid recognition of questions. Therefore, we aimed to investigate whether early eyebrow movements (eyebrow frown or raise vs. no eyebrow movement) facilitate question identification. Participants were instructed to view videos of avatars where the presence of eyebrow movements accompanying questions was manipulated. Their task was to indicate whether the utterance was a question or a statement as accurately and quickly as possible. Data were collected using the online testing platform Gorilla. Results showed higher accuracies and faster response times for questions with eyebrow frowns, suggesting a facilitative role of eyebrow frowns for question identification. This means that facial signals can critically contribute to the communication of social actions in conversation by signaling social action-specific visual information and providing visual cues to speakers’ intentions.

    Additional information

    link to preprint
  • Nota, N. (2023). Talking faces: The contribution of conversational facial signals to language use and processing. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Nozais, V., Forkel, S. J., Petit, L., Talozzi, L., Corbetta, M., Thiebaut de Schotten, M., & Joliot, M. (2023). Atlasing white matter and grey matter joint contributions to resting-state networks in the human brain. Communications Biology, 6: 726. doi:10.1038/s42003-023-05107-3.

    Abstract

    Over the past two decades, the study of resting-state functional magnetic resonance imaging has revealed that functional connectivity within and between networks is linked to cognitive states and pathologies. However, the white matter connections supporting this connectivity remain only partially described. We developed a method to jointly map the white and grey matter contributing to each resting-state network (RSN). Using the Human Connectome Project, we generated an atlas of 30 RSNs. The method also highlighted the overlap between networks, which revealed that most of the brain’s white matter (89%) is shared between multiple RSNs, with 16% shared by at least 7 RSNs. These overlaps, especially the existence of regions shared by numerous networks, suggest that white matter lesions in these areas might strongly impact the communication within networks. We provide an atlas and an open-source software to explore the joint contribution of white and grey matter to RSNs and facilitate the study of the impact of white matter damage to these networks. In a first application of the software with clinical data, we were able to link stroke patients and impacted RSNs, showing that their symptoms aligned well with the estimated functions of the networks.
  • Nozais, V., Forkel, S. J., Foulon, C., Petit, L., & Thiebaut de Schotten, M. (2021). Functionnectome as a framework to analyse the contribution of brain circuits to fMRI. Communications Biology, 4: 1035. doi:10.1038/s42003-021-02530-2.

    Abstract

    In recent years, the field of functional neuroimaging has moved away from a pure localisationist approach of isolated functional brain regions to a more integrated view of these regions within functional networks. However, the methods used to investigate functional networks rely on local signals in grey matter and are limited in identifying anatomical circuitries supporting the interaction between brain regions. Mapping the brain circuits mediating the functional signal between brain regions would propel our understanding of the brain’s functional signatures and dysfunctions. We developed a method to unravel the relationship between brain circuits and functions: The Functionnectome. The Functionnectome combines the functional signal from fMRI with white matter circuits’ anatomy to unlock and chart the first maps of functional white matter. To showcase this method’s versatility, we provide the first functional white matter maps revealing the joint contribution of connected areas to motor, working memory, and language functions. The Functionnectome comes with an open-source companion software and opens new avenues into studying functional networks by applying the method to already existing datasets and beyond task fMRI.

    Additional information

    supplementary information
  • Ntemou, E., Ohlerth, A.-K., Ille, S., Krieg, S., Bastiaanse, R., & Rofes, A. (2021). Mapping Verb Retrieval With nTMS: The Role of Transitivity. Frontiers in Human Neuroscience, 15: 719461. doi:10.3389/fnhum.2021.719461.

    Abstract

    Navigated Transcranial Magnetic Stimulation (nTMS) is used to understand the cortical organization of language in preparation for the surgical removal of a brain tumor. Action naming with finite verbs can be employed for that purpose, providing additional information to object naming. However, little research has focused on the properties of the verbs that are used in action naming tasks, such as their status as transitive (taking an object; e.g., to read) or intransitive (not taking an object; e.g., to wink). Previous neuroimaging data show higher activation for transitive compared to intransitive verbs in posterior perisylvian regions bilaterally. In the present study, we employed nTMS and production of finite verbs to investigate the cortical underpinnings of transitivity. Twenty neurologically healthy native speakers of German participated in the study. They underwent language mapping in both hemispheres with nTMS. The action naming task with finite verbs consisted of transitive (e.g., The man reads the book) and intransitive verbs (e.g., The woman winks) and was controlled for relevant psycholinguistic variables. Errors were classified in four different error categories (i.e., non-linguistic errors, grammatical errors, lexico-semantic errors, and errors at the sound level) and were analyzed quantitatively. We found more nTMS-positive points in the left hemisphere, particularly in the left parietal lobe for the production of transitive compared to intransitive verbs. These positive points most commonly corresponded to lexico-semantic errors. Our findings are in line with previous aphasia and neuroimaging studies, suggesting that a more widespread network is used for the production of verbs with a larger number of arguments (i.e., transitives). The higher number of lexico-semantic errors with transitive compared to intransitive verbs in the left parietal lobe supports previous claims for the role of left posterior areas in the retrieval of argument structure information.
  • Numssen, O., van der Burght, C. L., & Hartwigsen, G. (2023). Revisiting the focality of non-invasive brain stimulation - implications for studies of human cognition. Neuroscience and Biobehavioral Reviews, 149: 105154. doi:10.1016/j.neubiorev.2023.105154.

    Abstract

    Non-invasive brain stimulation techniques are popular tools to investigate brain function in health and disease. Although transcranial magnetic stimulation (TMS) is widely used in cognitive neuroscience research to probe causal structure-function relationships, studies often yield inconclusive results. To improve the effectiveness of TMS studies, we argue that the cognitive neuroscience community needs to revise the stimulation focality principle – the spatial resolution with which TMS can differentially stimulate cortical regions. In the motor domain, TMS can differentiate between cortical muscle representations of adjacent fingers. However, this high degree of spatial specificity cannot be obtained in all cortical regions due to the influences of cortical folding patterns on the TMS-induced electric field. The region-dependent focality of TMS should be assessed a priori to estimate the experimental feasibility. Post-hoc simulations allow modeling of the relationship between cortical stimulation exposure and behavioral modulation by integrating data across stimulation sites or subjects.

  • Offrede, T., Mishra, C., Skantze, G., Fuchs, S., & Mooshammer, C. (2023). Do Humans Converge Phonetically When Talking to a Robot? In R. Skarnitzl, & J. Volin (Eds.), Proceedings of the 20th International Congress of Phonetic Sciences (pp. 3507-3511). Prague: GUARANT International.

    Abstract

    Phonetic convergence—i.e., adapting one’s speech towards that of an interlocutor—has been shown to occur in human-human conversations as well as human-machine interactions. Here, we investigate the hypothesis that human-to-robot convergence is influenced by the human’s perception of the robot and by the conversation’s topic. We conducted a within-subjects experiment in which 33 participants interacted with two robots differing in their eye gaze behavior—one looked constantly at the participant; the other produced gaze aversions, similarly to a human’s behavior. Additionally, the robot asked questions with increasing intimacy levels. We observed that the speakers tended to converge on F0 to the robots. However, this convergence to the robots was not modulated by how the speakers perceived them or by the topic’s intimacy. Interestingly, speakers produced lower F0 means when talking about more intimate topics. We discuss these findings in terms of current theories of conversational convergence.
  • Ohlerth, A.-K., Bastiaanse, R., Negwer, C., Sollmann, N., Schramm, S., Schroder, A., & Krieg, S. M. (2021). Benefit of action naming over object naming for visualization of subcortical language pathways in navigated transcranial magnetic stimulation-based diffusion tensor imaging-fiber tracking. Frontiers in Human Neuroscience, 15: 748274. doi:10.3389/fnhum.2021.748274.

    Abstract

    Visualization of functionally significant subcortical white matter fibers is needed in neurosurgical procedures in order to avoid damage to the language network during resection. In an effort to achieve this, positive cortical points revealed during preoperative language mapping with navigated transcranial magnetic stimulation (nTMS) can be employed as regions of interest (ROIs) for diffusion tensor imaging (DTI) fiber tracking. However, the effect that the use of different language tasks has on nTMS mapping and subsequent DTI-fiber tracking remains unexplored. The visualization of ventral stream tracts with an assumed lexico-semantic role may especially benefit from ROIs delivered by the lexico-semantically demanding verb task, Action Naming. In a first step, bihemispheric nTMS language mapping was administered in 18 healthy participants using the standard task Object Naming and the novel task Action Naming to trigger verbs in a small sentence context. Cortical areas in which nTMS induced language errors were identified as language-positive cortical sites. In a second step, nTMS-based DTI-fiber tracking was conducted using solely these language-positive points as ROIs. The ability of the two tasks’ ROIs to visualize the dorsal tracts Arcuate Fascicle and Superior Longitudinal Fascicle, the ventral tracts Inferior Longitudinal Fascicle, Uncinate Fascicle, and Inferior Fronto-Occipital Fascicle, the speech-articulatory Cortico-Nuclear Tract, and interhemispheric commissural fibers was compared in both hemispheres. In the left hemisphere, ROIs of Action Naming led to a significantly higher fraction of overall visualized tracts, specifically in the ventral stream’s Inferior Fronto-Occipital and Inferior Longitudinal Fascicle. No difference was found between tracking with Action Naming vs. Object Naming seeds for dorsal stream tracts, neither for the speech-articulatory tract nor the inter-hemispheric connections. While the two tasks appeared equally demanding for phonological-articulatory processes, ROI seeding through the task Action Naming seemed to better visualize lexico-semantic tracts in the ventral stream. This distinction was not evident in the right hemisphere. However, the distribution of tracts exposed was, overall, mirrored relative to those in the left hemisphere network. In presurgical practice, mapping and tracking of language pathways may profit from these findings and should consider inclusion of the Action Naming task, particularly for lesions in ventral subcortical regions.
  • Ohlerth, A.-K., Bastiaanse, R., Negwer, C., Sollmann, N., Schramm, S., Schroder, A., & Krieg, S. (2021). Bihemispheric Navigated Transcranial Magnetic Stimulation Mapping for Action Naming Compared to Object Naming in Sentence Context. Brain Sciences, 11(9): 1190. doi:10.3390/brainsci11091190.

    Abstract

    Preoperative language mapping with navigated transcranial magnetic stimulation (nTMS) is currently based on the disruption of performance during object naming. The resulting cortical language maps, however, lack accuracy when compared to intraoperative mapping. The question arises whether nTMS results can be improved, when another language task is considered, involving verb retrieval in sentence context. Twenty healthy German speakers were tested with object naming and a novel action naming task during nTMS language mapping. Error rates and categories in both hemispheres were compared. Action naming showed a significantly higher error rate than object naming in both hemispheres. Error category comparison revealed that this discrepancy stems from more lexico-semantic errors during action naming, indicating lexico-semantic retrieval of the verb being more affected than noun retrieval. In an area-wise comparison, higher error rates surfaced in multiple right-hemisphere areas, but only trends in the left ventral postcentral gyrus and middle superior temporal gyrus. Hesitation errors contributed significantly to the error count, but did not dull the mapping results. Inclusion of action naming coupled with a detailed error analysis may be favorable for nTMS mapping and ultimately improve accuracy in preoperative planning. Moreover, the results stress the recruitment of both left- and right-hemispheric areas during naming.
  • Oliveira‑Stahl, G., Farboud, S., Sterling, M. L., Heckman, J. J., Van Raalte, B., Lenferink, D., Van der Stam, A., Smeets, C. J. L. M., Fisher, S. E., & Englitz, B. (2023). High-precision spatial analysis of mouse courtship vocalization behavior reveals sex and strain differences. Scientific Reports, 13: 5219. doi:10.1038/s41598-023-31554-3.

    Abstract

    Mice display a wide repertoire of vocalizations that varies with sex, strain, and context. Especially during social interaction, including sexually motivated dyadic interaction, mice emit sequences of ultrasonic vocalizations (USVs) of high complexity. As animals of both sexes vocalize, a reliable attribution of USVs to their emitter is essential. The state-of-the-art in sound localization for USVs in 2D allows spatial localization at a resolution of multiple centimeters. However, animals interact at closer ranges, e.g. snout-to-snout. Hence, improved algorithms are required to reliably assign USVs. We present a novel algorithm, SLIM (Sound Localization via Intersecting Manifolds), that achieves a 2–3-fold improvement in accuracy (13.1–14.3 mm) using only 4 microphones and extends to many microphones and localization in 3D. This accuracy allows reliable assignment of 84.3% of all USVs in our dataset. We apply SLIM to courtship interactions between adult C57Bl/6J wildtype mice and those carrying a heterozygous Foxp2 variant (R552H). The improved spatial accuracy reveals that vocalization behavior is dependent on the spatial relation between the interacting mice. Female mice vocalized more in close snout-to-snout interaction while male mice vocalized more when the male snout was in close proximity to the female's ano-genital region. Further, we find that the acoustic properties of the ultrasonic vocalizations (duration, Wiener Entropy, and sound level) are dependent on the spatial relation between the interacting mice as well as on the genotype. In conclusion, the improved attribution of vocalizations to their emitters provides a foundation for better understanding social vocal behaviors.

    Additional information

    supplementary movies and figures
  • Onnis, L., & Huettig, F. (2021). Can prediction and retrodiction explain whether frequent multi-word phrases are accessed ’precompiled’ from memory or compositionally constructed on the fly? Brain Research, 1772: 147674. doi:10.1016/j.brainres.2021.147674.

    Abstract

    An important debate on the architecture of the language faculty has been the extent to which it relies on a compositional system that constructs larger units from morphemes to words to phrases to utterances on the fly and in real time using grammatical rules; or a system that chunks large preassembled, stored units of language from memory; or some combination of both approaches. Good empirical evidence exists for both ’computed’ and ’large stored’ forms in language, but little is known about what shapes multi-word storage / access or compositional processing. Here we explored whether predictive and retrodictive processes are a likely determinant of multi-word storage / processing. Our results suggest that forward and backward predictability are independently informative in determining the lexical cohesiveness of multi-word phrases. In addition, our results call for a reevaluation of the role of retrodiction in contemporary language processing accounts (cf. Ferreira and Chantavarin 2018).
  • Ortega, G., & Ostarek, M. (2021). Evidence for visual simulation during sign language processing. Journal of Experimental Psychology: General, 150(10), 2158-2166. doi:10.1037/xge0001041.

    Abstract

    What are the mental processes that allow us to understand the meaning of words? A large body of evidence suggests that when we process speech, we engage a process of perceptual simulation whereby sensorimotor states are activated as a source of semantic information. But does the same process take place when words are expressed with the hands and perceived through the eyes? To date, it is not known whether perceptual simulation is also observed in sign languages, the manual-visual languages of deaf communities. Continuous flash suppression is a method that addresses this question by measuring the effect of language on detection sensitivity to images that are suppressed from awareness. In spoken languages, it has been reported that listening to a word (e.g., “bottle”) activates visual features of an object (e.g., the shape of a bottle), and this in turn facilitates image detection. An interesting but untested question is whether the same process takes place when deaf signers see signs. We found that processing signs boosted the detection of congruent images, making otherwise invisible pictures visible. A boost of visual processing was observed only for signers but not for hearing nonsigners, suggesting that the penetration of the visual system through signs requires a fully fledged manual language. Iconicity did not modulate the effect of signs on detection, neither in signers nor in hearing nonsigners. This suggests that visual simulation during language processing occurs regardless of language modality (sign vs. speech) or iconicity, pointing to a foundational role of simulation for language comprehension.

    Additional information

    supplementary material
  • Ostarek, M., & Bottini, R. (2021). Towards strong inference in research on embodiment – Possibilities and limitations of causal paradigms. Journal of Cognition, 4(1): 5. doi:10.5334/joc.139.

    Abstract

    A central question in the cognitive sciences is which role embodiment plays for high-level cognitive functions, such as conceptual processing. Here, we propose that one reason why progress regarding this question has been slow is a lacking focus on what Platt (1964) called “strong inference”. Strong inference is possible when results from an experimental paradigm are not merely consistent with a hypothesis, but they provide decisive evidence for one particular hypothesis compared to competing hypotheses. We discuss how causal paradigms, which test the functional relevance of sensory-motor processes for high-level cognitive functions, can move the field forward. In particular, we explore how congenital sensory-motor disorders, acquired sensory-motor deficits, and interference paradigms with healthy participants can be utilized as an opportunity to better understand the role of sensory experience in conceptual processing. Whereas all three approaches can bring about valuable insights, we highlight that the study of congenital and acquired sensorimotor disorders is particularly effective in the case of conceptual domains with a strong unimodal basis (e.g., colors), whereas interference paradigms with healthy participants have a broader application, avoid many of the practical and interpretational limitations of patient studies, and allow a systematic and step-wise progressive inference approach to causal mechanisms.
  • Ota, M., San Jose, A., & Smith, K. (2021). The emergence of word-internal repetition through iterated learning: Explaining the mismatch between learning biases and language design. Cognition, 210: 104585. doi:10.1016/j.cognition.2021.104585.

    Abstract

    The idea that natural language is shaped by biases in learning plays a key role in our understanding of how human language is structured, but its corollary that there should be a correspondence between typological generalisations and ease of acquisition is not always supported. For example, natural languages tend to avoid close repetitions of consonants within a word, but developmental evidence suggests that, if anything, words containing sound repetitions are more, not less, likely to be acquired than those without. In this study, we use word-internal repetition as a test case to provide a cultural evolutionary explanation of when and how learning biases impact on language design. Two artificial language experiments showed that adult speakers possess a bias for both consonant and vowel repetitions when learning novel words, but the effects of this bias were observable in language transmission only when there was a relatively high learning pressure on the lexicon. Based on these results, we argue that whether the design of a language reflects biases in learning depends on the relative strength of pressures from learnability and communication efficiency exerted on the linguistic system during cultural transmission.

    Additional information

    supplementary data
  • Özer, D., Karadöller, D. Z., Özyürek, A., & Göksun, T. (2023). Gestures cued by demonstratives in speech guide listeners' visual attention during spatial language comprehension. Journal of Experimental Psychology: General, 152(9), 2623-2635. doi:10.1037/xge0001402.

    Abstract

    Gestures help speakers and listeners during communication and thinking, particularly for visual-spatial information. Speakers tend to use gestures to complement the accompanying spoken deictic constructions, such as demonstratives, when communicating spatial information (e.g., saying “The candle is here” and gesturing to the right side to express that the candle is on the speaker's right). Visual information conveyed by gestures enhances listeners’ comprehension. Whether and how listeners allocate overt visual attention to gestures in different speech contexts is mostly unknown. We asked if (a) listeners gazed at gestures more when they complement demonstratives in speech (“here”) compared to when they express redundant information to speech (e.g., “right”) and (b) gazing at gestures related to listeners’ information uptake from those gestures. We demonstrated that listeners fixated gestures more when they expressed complementary than redundant information in the accompanying speech. Moreover, overt visual attention to gestures did not predict listeners’ comprehension. These results suggest that the heightened communicative value of gestures as signaled by external cues, such as demonstratives, guides listeners’ visual attention to gestures. However, overt visual attention does not seem to be necessary to extract the cued information from the multimodal message.
  • Ozyurek, A. (2021). Considering the nature of multimodal language from a crosslinguistic perspective. Journal of Cognition, 4(1): 42. doi:10.5334/joc.165.

    Abstract

    Language in its primary face-to-face context is multimodal (e.g., Holler and Levinson, 2019; Perniss, 2018). Thus, understanding how expressions in the vocal and visual modalities together contribute to our notions of language structure, use, processing, and transmission (i.e., acquisition, evolution, emergence) in different languages and cultures should be a fundamental goal of language sciences. This requires a new framework of language that brings together how arbitrary and non-arbitrary and motivated semiotic resources of language relate to each other. The current commentary evaluates such a proposal by Murgiano et al. (2021) from a crosslinguistic perspective, taking variation as well as systematicity in multimodal utterances into account.
  • Papoutsi*, C., Zimianiti*, E., Bosker, H. R., & Frost, R. L. A. (2023). Statistical learning at a virtual cocktail party. Psychonomic Bulletin & Review. Advance online publication. doi:10.3758/s13423-023-02384-1.

    Abstract

    * These two authors contributed equally to this study
    Statistical learning – the ability to extract distributional regularities from input – is suggested to be key to language acquisition. Yet, evidence for the human capacity for statistical learning comes mainly from studies conducted in carefully controlled settings without auditory distraction. While such conditions permit careful examination of learning, they do not reflect the naturalistic language learning experience, which is replete with auditory distraction – including competing talkers. Here, we examine how statistical language learning proceeds in a virtual cocktail party environment, where the to-be-learned input is presented alongside a competing speech stream with its own distributional regularities. During exposure, participants in the Dual Talker group concurrently heard two novel languages, one produced by a female talker and one by a male talker, with each talker virtually positioned at opposite sides of the listener (left/right) using binaural acoustic manipulations. Selective attention was manipulated by instructing participants to attend to only one of the two talkers. At test, participants were asked to distinguish words from part-words for both the attended and the unattended languages. Results indicated that participants’ accuracy was significantly higher for trials from the attended vs. unattended language. Further, the performance of this Dual Talker group was no different compared to a control group who heard only one language from a single talker (Single Talker group). We thus conclude that statistical learning is modulated by selective attention, being relatively robust against the additional cognitive load provided by competing speech, emphasizing its efficiency in naturalistic language learning situations.

    Additional information

    supplementary file
  • Parente, F., Conklin, K., Guy, J. M., & Scott, R. (2021). The role of empirical methods in investigating readers’ constructions of authorial creativity in literary reading. Language and Literature: International Journal of Stylistics, 30(1), 21-36. doi:10.1177/0963947020952200.

    Abstract

    The popularity of literary biographies and the importance publishers place on author publicity materials suggest the concept of an author’s creative intentions is important to readers’ appreciation of literary works. However, the question of how this kind of contextual information informs literary interpretation is contentious. One area of dispute concerns the extent to which readers’ constructions of an author’s creative intentions are text-centred and therefore can adequately be understood by linguistic evidence alone. The current study shows how the relationship between linguistic and contextual factors in readers’ constructions of an author’s creative intentions may be investigated empirically. We use eye-tracking to determine whether readers’ responses to textual features (changes to lexis and punctuation) are affected by prior, extra-textual prompts concerning information about an author’s creative intentions. We showed participants pairs of sentences from Oscar Wilde and Henry James while monitoring their eye movements. The first sentence was followed by a prompt denoting a different attribution (Authorial, Editorial/Publisher and Typographic) for the change that, if present, would appear in the second sentence. After reading the second sentence, participants were asked whether they had detected a change and, if so, to describe it. If the concept of an author’s creative intentions is implicated in literary reading this should influence participants’ reading behaviour and ability to accurately report a change based on the prompt. The findings showed that readers’ noticing of textual variants was sensitive to the prior prompt about its authorship, in the sense of producing an effect on attention and re-reading times. But they also showed that these effects did not follow the pattern predicted of them, based on prior assumptions about readers’ cultures. This last finding points to the importance, as well as the challenges, of further investigating the role of contextual information in readers’ constructions of an author’s creative intentions.
  • Parlatini, V., Itahashi, T., Lee, Y., Liu, S., Nguyen, T. T., Aoki, Y. Y., Forkel, S. J., Catani, M., Rubia, K., Zhou, J. H., Murphy, D. G., & Cortese, S. (2023). White matter alterations in Attention-Deficit/Hyperactivity Disorder (ADHD): a systematic review of 129 diffusion imaging studies with meta-analysis. Molecular Psychiatry, 28, 4098-4123. doi:10.1038/s41380-023-02173-1.

    Abstract

    Aberrant anatomical brain connections in attention-deficit/hyperactivity disorder (ADHD) are reported inconsistently across diffusion weighted imaging (DWI) studies. Based on a pre-registered protocol (Prospero: CRD42021259192), we searched PubMed, Ovid, and Web of Knowledge until 26/03/2022 to conduct a systematic review of DWI studies. We performed a quality assessment based on imaging acquisition, preprocessing, and analysis. Using signed differential mapping, we meta-analyzed a subset of the retrieved studies amenable to quantitative evidence synthesis, i.e., tract-based spatial statistics (TBSS) studies, in individuals of any age and, separately, in children, adults, and high-quality datasets. Finally, we conducted meta-regressions to test the effect of age, sex, and medication-naïvety. We included 129 studies (6739 ADHD participants and 6476 controls), of which 25 TBSS studies provided peak coordinates for case-control differences in fractional anisotropy (FA) (32 datasets) and 18 in mean diffusivity (MD) (23 datasets). The systematic review highlighted white matter alterations (especially reduced FA) in projection, commissural and association pathways of individuals with ADHD, which were associated with symptom severity and cognitive deficits. The meta-analysis showed a consistent reduced FA in the splenium and body of the corpus callosum, extending to the cingulum. Lower FA was related to older age, and case-control differences did not survive in the pediatric meta-analysis. About 68% of studies were of low quality, mainly due to acquisitions with non-isotropic voxels or lack of motion correction; and the sensitivity analysis in high-quality datasets yielded no significant results. Findings suggest prominent alterations in posterior interhemispheric connections subserving cognitive and motor functions affected in ADHD, although these might be influenced by non-optimal acquisition parameters/preprocessing. Absence of findings in children may be related to the late development of callosal fibers, which may enhance case-control differences in adulthood. Clinicodemographic and methodological differences were major barriers to consistency and comparability among studies, and should be addressed in future investigations.
  • Hu, Y., Lv, Q., Pascual, E., Liang, J., & Huettig, F. (2021). Syntactic priming in illiterate and literate older Chinese adults. Journal of Cultural Cognitive Science, 5, 267-286. doi:10.1007/s41809-021-00082-9.

    Abstract

    Does life-long literacy experience modulate syntactic priming in spoken language processing? Such a postulated influence is compatible with usage-based theories of language processing that propose that all linguistic skills are a function of accumulated experience with language across life. Here we investigated the effect of literacy experience on syntactic priming in Mandarin in sixty Chinese older adults from Hebei province. Thirty participants were completely illiterate and thirty were literate Mandarin speakers of similar age and socioeconomic background. We first observed usage differences: literates produced robustly more prepositional object (PO) constructions than illiterates. This replicates, with a different sample, language, and cultural background, previous findings that literacy experience affects (baseline) usage of PO and double-object (DO) transitive alternates. We also observed robust syntactic priming for DO, but not PO, dative alternations in both groups. The magnitude of this DO priming, however, was higher in literates than in illiterates. We also observed that cumulative adaptation in syntactic priming differed as a function of literacy. Cumulative syntactic priming in literates appears to be related mostly to comprehending others, whereas in illiterates it is also associated with repeating self-productions. Further research is needed to confirm this interpretation.
  • Passmore, S., Barth, W., Greenhill, S. J., Quinn, K., Sheard, C., Argyriou, P., Birchall, J., Bowern, C., Calladine, J., Deb, A., Diederen, A., Metsäranta, N. P., Araujo, L. H., Schembri, R., Hickey-Hall, J., Honkola, T., Mitchell, A., Poole, L., Rácz, P. M., Roberts, S. G., Ross, R. M., Thomas-Colquhoun, E., Evans, N., & Jordan, F. M. (2023). Kinbank: A global database of kinship terminology. PLOS ONE, 18: e0283218. doi:10.1371/journal.pone.0283218.

    Abstract

    For a single species, human kinship organization is both remarkably diverse and strikingly organized. Kinship terminology is the structured vocabulary used to classify, refer to, and address relatives and family. Diversity in kinship terminology has been analyzed by anthropologists for over 150 years, although recurrent patterning across cultures remains incompletely explained. Despite the wealth of kinship data in the anthropological record, comparative studies of kinship terminology are hindered by data accessibility. Here we present Kinbank, a new database of 210,903 kinterms from a global sample of 1,229 spoken languages. Using open-access and transparent data provenance, Kinbank offers an extensible resource for kinship terminology, enabling researchers to explore the rich diversity of human family organization and to test longstanding hypotheses about the origins and drivers of recurrent patterns. We illustrate our contribution with two examples. We demonstrate strong gender bias in the phonological structure of parent terms across 1,022 languages, and we show that there is no evidence for a coevolutionary relationship between cross-cousin marriage and bifurcate-merging terminology in Bantu languages. Analysing kinship data is notoriously challenging; Kinbank aims to eliminate data accessibility issues from that challenge and provide a platform to build an interdisciplinary understanding of kinship.

    Additional information

    Supporting Information
  • Paulat, N. S., Storer, J. M., Moreno-Santillán, D. D., Osmanski, A. B., Sullivan, K. A. M., Grimshaw, J. R., Korstian, J., Halsey, M., Garcia, C. J., Crookshanks, C., Roberts, J., Smit, A. F. A., Hubley, R., Rosen, J., Teeling, E. C., Vernes, S. C., Myers, E., Pippel, M., Brown, T., Hiller, M., Zoonomia Consortium, Rojas, D., Dávalos, L. M., Lindblad-Toh, K., Karlsson, E. K., & Ray, D. A. (2023). Chiropterans are a hotspot for horizontal transfer of DNA transposons in Mammalia. Molecular Biology and Evolution, 40(5): msad092. doi:10.1093/molbev/msad092.

    Abstract

    Horizontal transfer of transposable elements (TEs) is an important mechanism contributing to genetic diversity and innovation. Bats (order Chiroptera) have repeatedly been shown to experience horizontal transfer of TEs at what appears to be a high rate compared with other mammals. We investigated the occurrence of horizontally transferred (HT) DNA transposons involving bats. We found over 200 putative HT elements within bats; 16 transposons were shared across distantly related mammalian clades, and 2 other elements were shared with a fish and two lizard species. Our results indicate that bats are a hotspot for horizontal transfer of DNA transposons. These events broadly coincide with the diversification of several bat clades, supporting the hypothesis that DNA transposon invasions have contributed to genetic diversification of bats.

    Additional information

    supplemental methods supplemental tables
  • Pazoki, R., Lin, B. D., Van Eijk, K. R., Schijven, D., De Zwarte, S., GROUP Investigators, Guloksuz, S., & Luykx, J. J. (2021). Phenome-wide and genome-wide analyses of quality of life in schizophrenia. BJPsych Open, 7(1): e13. doi:10.1192/bjo.2020.140.

    Abstract

    Background
    Schizophrenia negatively affects quality of life (QoL). A handful of variables from small studies have been reported to influence QoL in patients with schizophrenia, but a study comprehensively dissecting the genetic and non-genetic contributing factors to QoL in these patients is currently lacking.

    Aims
    We adopted a hypothesis-generating approach to assess the phenotypic and genotypic determinants of QoL in schizophrenia.

    Method
    The study population comprised 1119 patients with a psychotic disorder, 1979 relatives and 586 healthy controls. Using linear regression, we tested >100 independent demographic, cognitive and clinical phenotypes for their association with QoL in patients. We then performed genome-wide association analyses of QoL and examined the association between polygenic risk scores for schizophrenia, major depressive disorder and subjective well-being and QoL.

    Results
    We found nine phenotypes to be significantly and independently associated with QoL in patients, the most significant ones being negative (β = −1.17; s.e. 0.05; P = 1 × 10⁻⁸³; r² = 38%), depressive (β = −1.07; s.e. 0.05; P = 2 × 10⁻⁷⁹; r² = 36%) and emotional distress (β = −0.09; s.e. 0.01; P = 4 × 10⁻⁵⁹; r² = 25%) symptoms. Schizophrenia and subjective well-being polygenic risk scores, using various P-value thresholds, were significantly and consistently associated with QoL (lowest association P-value = 6.8 × 10⁻⁶). Several sensitivity analyses confirmed the results.

    Conclusions
    Various clinical phenotypes of schizophrenia, as well as schizophrenia and subjective well-being polygenic risk scores, are associated with QoL in patients with schizophrenia and their relatives. These may be targeted by clinicians to more easily identify vulnerable patients with schizophrenia for further social and clinical interventions to improve their QoL.
  • Peeters, D., Krahmer, E., & Maes, A. (2021). A conceptual framework for the study of demonstrative reference. Psychonomic Bulletin & Review, 28, 409-433. doi:10.3758/s13423-020-01822-8.

    Abstract

    Language allows us to efficiently communicate about the things in the world around us. Seemingly simple words like this and that are a cornerstone of our capability to refer, as they contribute to guiding the attention of our addressee to the specific entity we are talking about. Such demonstratives are acquired early in life, ubiquitous in everyday talk, often closely tied to our gestural communicative abilities, and present in all spoken languages of the world. Based on a review of recent experimental work, we here introduce a new conceptual framework of demonstrative reference. In the context of this framework, we argue that several physical, psychological, and referent-intrinsic factors dynamically interact to influence whether a speaker will use one demonstrative form (e.g., this) or another (e.g., that) in a given setting. However, the relative influence of these factors themselves is argued to be a function of the cultural language setting at hand, the theory-of-mind capacities of the speaker, and the affordances of the specific context in which the speech event takes place. It is demonstrated that the framework has the potential to reconcile findings in the literature that previously seemed irreconcilable. We show that the framework may to a large extent generalize to instances of endophoric reference (e.g., anaphora) and speculate that it may also describe the specific form and kinematics a speaker’s pointing gesture takes. Testable predictions and novel research questions derived from the framework are presented and discussed.
  • Pender, R., Fearon, P., St Pourcain, B., Heron, J., & Mandy, W. (2023). Developmental trajectories of autistic social traits in the general population. Psychological Medicine, 53(3), 814-822. doi:10.1017/S0033291721002166.

    Abstract

    Background

    Autistic people show diverse trajectories of autistic traits over time, a phenomenon labelled ‘chronogeneity’. For example, some show a decrease in symptoms, whilst others experience an intensification of difficulties. Autism spectrum disorder (ASD) is a dimensional condition, representing one end of a trait continuum that extends throughout the population. To date, no studies have investigated chronogeneity across the full range of autistic traits. We investigated the nature and clinical significance of autism trait chronogeneity in a large, general population sample.
    Methods

    Autistic social/communication traits (ASTs) were measured in the Avon Longitudinal Study of Parents and Children using the Social and Communication Disorders Checklist (SCDC) at ages 7, 10, 13 and 16 (N = 9744). We used Growth Mixture Modelling (GMM) to identify groups defined by their AST trajectories. Measures of ASD diagnosis, sex, IQ and mental health (internalising and externalising) were used to investigate external validity of the derived trajectory groups.
    Results

    The selected GMM model identified four AST trajectory groups: (i) Persistent High (2.3% of sample), (ii) Persistent Low (83.5%), (iii) Increasing (7.3%) and (iv) Decreasing (6.9%) trajectories. The Increasing group, in which females were a slight majority (53.2%), showed dramatic increases in SCDC scores during adolescence, accompanied by escalating internalising and externalising difficulties. Two-thirds (63.6%) of the Decreasing group were male.
    Conclusions

    Clinicians should note that for some young people autism-trait-like social difficulties first emerge during adolescence accompanied by problems with mood, anxiety, conduct and attention. A converse, majority-male group shows decreasing social difficulties during adolescence.
  • Pereira Soares, S. M., Kubota, M., Rossi, E., & Rothman, J. (2021). Determinants of bilingualism predict dynamic changes in resting state EEG oscillations. Brain and Language, 223: 105030. doi:10.1016/j.bandl.2021.105030.

    Abstract

    This study uses resting state EEG data from 103 bilinguals to understand how determinants of bilingualism may reshape the mind/brain. Participants completed the LSBQ, which quantifies language use and crucially the division of labor of dual-language use in diverse activities and settings over the lifespan. We hypothesized correlations between the degree of active bilingualism and the power of neural oscillations in specific frequency bands. Moreover, we anticipated levels of mean coherence (connectivity between brain regions) to vary by degree of bilingual language experience. Results demonstrated effects of Age of L2/2L1 onset on high beta and gamma powers. Higher usage of the non-societal language at home and in society modulated indices of functional connectivity in theta, alpha and gamma frequencies. Results add to the emerging literature on the neuromodulatory effects of bilingualism for rs-EEG, and are in line with claims that bilingualism effects are modulated by degree of engagement with dual-language experiential factors.
  • Pereira Soares, S. M., Chaouch-Orozco, A., & González Alonso, J. (2023). Innovations and challenges in acquisition and processing methodologies for L3/Ln. In J. Cabrelli, A. Chaouch-Orozco, J. González Alonso, S. M. Pereira Soares, E. Puig-Mayenco, & J. Rothman (Eds.), The Cambridge handbook of third language acquisition (pp. 661-682). Cambridge: Cambridge University Press. doi:10.1017/9781108957823.026.

    Abstract

    The advent of psycholinguistic and neurolinguistic methodologies has provided new insights into theories of language acquisition. Sequential multilingualism is no exception, and some of the most recent work on the subject has incorporated a particular focus on language processing. This chapter surveys some of the work on the processing of lexical and morphosyntactic aspects of third or further languages, with different offline and online methodologies. We also discuss how, while increasingly sophisticated techniques and experimental designs have improved our understanding of third language acquisition and processing, simpler but clever designs can answer pressing questions in our theoretical debate. We provide examples of both sophistication and clever simplicity in experimental design, and argue that the field would benefit from incorporating a combination of both concepts into future work.
  • Petras, K., Ten Oever, S., Dalal, S. S., & Goffaux, V. (2021). Information redundancy across spatial scales modulates early visual cortical processing. NeuroImage, 244: 118613. doi:10.1016/j.neuroimage.2021.118613.

    Abstract

    Visual images contain redundant information across spatial scales where low spatial frequency contrast is informative towards the location and likely content of high spatial frequency detail. Previous research suggests that the visual system makes use of those redundancies to facilitate efficient processing. In this framework, a fast, initial analysis of low-spatial frequency (LSF) information guides the slower and later processing of high spatial frequency (HSF) detail. Here, we used multivariate classification as well as time-frequency analysis of MEG responses to the viewing of intact and phase scrambled images of human faces to demonstrate that the availability of redundant LSF information, as found in broadband intact images, correlates with a reduction in HSF representational dominance in both early and higher-level visual areas as well as a reduction of gamma-band power in early visual cortex. Our results indicate that the cross spatial frequency information redundancy that can be found in all natural images might be a driving factor in the efficient integration of fine image details.

    Additional information

    supplementary materials
  • Piai, V., & Eikelboom, D. (2023). Brain areas critical for picture naming: A systematic review and meta-analysis of lesion-symptom mapping studies. Neurobiology of Language, 4(2), 280-296. doi:10.1162/nol_a_00097.

    Abstract

    Lesion-symptom mapping (LSM) studies have revealed brain areas critical for naming, typically finding significant associations between damage to left temporal, inferior parietal, and inferior frontal regions and impoverished naming performance. However, specific subregions found in the available literature vary. Hence, the aim of this study was to perform a systematic review and meta-analysis of published lesion-based findings, obtained from studies with unique cohorts investigating brain areas critical for accuracy in naming in stroke patients at least 1 month post-onset. An anatomic likelihood estimation (ALE) meta-analysis of these LSM studies was performed. Ten papers entered the ALE meta-analysis, with similar lesion coverage over left temporal and left inferior frontal areas. This small number is a major limitation of the present study. Clusters were found in left anterior temporal lobe, posterior temporal lobe extending into inferior parietal areas, in line with the arcuate fasciculus, and in pre- and postcentral gyri and middle frontal gyrus. No clusters were found in left inferior frontal gyrus. These results were further substantiated by examining five naming studies that investigated performance beyond global accuracy, corroborating the ALE meta-analysis results. The present review and meta-analysis highlight the involvement of left temporal and inferior parietal cortices in naming, and of mid to posterior portions of the temporal lobe in particular in conceptual-lexical retrieval for speaking.

    Additional information

    data
  • Di Pisa, G., Pereira Soares, S. M., & Rothman, J. (2021). Brain, mind and linguistic processing insights into the dynamic nature of bilingualism and its outcome effects. Journal of Neurolinguistics, 58: 100965. doi:10.1016/j.jneuroling.2020.100965.
  • Pliatsikas, C., Pereira Soares, S. M., Voits, T., Deluca, V., & Rothman, J. (2021). Bilingualism is a long-term cognitively challenging experience that modulates metabolite concentrations in the healthy brain. Scientific Reports, 11: 7090. doi:10.1038/s41598-021-86443-4.

    Abstract

    Cognitively demanding experiences, including complex skill acquisition and processing, have been shown to induce brain adaptations, at least at the macroscopic level, e.g. on brain volume and/or functional connectivity. However, the neurobiological bases of these adaptations, including at the cellular level, are unclear and understudied. Here we use bilingualism as a case study to investigate the metabolic correlates of experience-based brain adaptations. We employ Magnetic Resonance Spectroscopy to measure metabolite concentrations in the basal ganglia, a region critical to language control which is reshaped by bilingualism. Our results show increased myo-Inositol and decreased N-acetyl aspartate concentrations in bilinguals compared to monolinguals. Both metabolites are linked to synaptic pruning, a process underlying experience-based brain restructuring. Interestingly, both concentrations correlate with relative amount of bilingual engagement. This suggests that degree of long-term cognitive experiences matters at the level of metabolic concentrations, which might accompany, if not drive, macroscopic brain adaptations.

    Additional information

    41598_2021_86443_MOESM1_ESM.pdf
  • Poletiek, F. H., Monaghan, P., van de Velde, M., & Bocanegra, B. R. (2021). The semantics-syntax interface: Learning grammatical categories and hierarchical syntactic structure through semantics. Journal of Experimental Psychology: Learning, Memory, and Cognition, 47(7), 1141-1155. doi:10.1037/xlm0001044.

    Abstract

    Language is infinitely productive because syntax defines dependencies between grammatical categories of words and constituents, so there is interchangeability of these words and constituents within syntactic structures. Previous laboratory-based studies of language learning have shown that complex language structures like hierarchical center embeddings (HCEs) are very hard to learn, but these studies tend to simplify the language learning task, omitting semantics and focusing either on learning dependencies between individual words or on acquiring the category membership of those words. We tested whether categories of words and dependencies between these categories and between constituents could be learned simultaneously in an artificial language with HCEs, when accompanied by scenes illustrating the sentence’s intended meaning. Across four experiments, we showed that participants were able to learn the HCE language, varying words across categories and category-dependencies, and constituents across constituent-dependencies. They were also able to generalize the learned structure to novel sentences and novel scenes that they had not previously experienced. This simultaneous learning, resulting in a productive complex language system, may be a consequence of grounding complex syntax acquisition in semantics.
  • Postema, M. (2021). Left-right asymmetry of the human brain: Associations with neurodevelopmental disorders and genetic factors. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Postema, M., Hoogman, M., Ambrosino, S., Asherson, P., Banaschewski, T., Bandeira, C. E., Baranov, A., Bau, C. H. D., Baumeister, S., Baur-Streubel, R., Bellgrove, M. A., Biederman, J., Bralten, J., Brandeis, D., Brem, S., Buitelaar, J. K., Busatto, G. F., Castellanos, F. X., Cercignani, M., Chaim-Avancini, T. M., Chantiluke, K. C., Christakou, A., Coghill, D., Conzelmann, A., Cubillo, A. I., Cupertino, R. B., De Zeeuw, P., Doyle, A. E., Durston, S., Earl, E. A., Epstein, J. N., Ethofer, T., Fair, D. A., Fallgatter, A. J., Faraone, S. V., Frodl, T., Gabel, M. C., Gogberashvili, T., Grevet, E. H., Haavik, J., Harrison, N. A., Hartman, C. A., Heslenfeld, D. J., Hoekstra, P. J., Hohmann, S., Høvik, M. F., Jernigan, T. L., Kardatzki, B., Karkashadze, G., Kelly, C., Kohls, G., Konrad, K., Kuntsi, J., Lazaro, L., Lera-Miguel, S., Lesch, K.-P., Louza, M. R., Lundervold, A. J., Malpas, C. B., Mattos, P., McCarthy, H., Namazova-Baranova, L., Nicolau, R., Nigg, J. T., Novotny, S. E., Oberwelland Weiss, E., O'Gorman Tuura, R. L., Oosterlaan, J., Oranje, B., Paloyelis, Y., Pauli, P., Picon, F. A., Plessen, K. J., Ramos-Quiroga, J. A., Reif, A., Reneman, L., Rosa, P. G. P., Rubia, K., Schrantee, A., Schweren, L. J. S., Seitz, J., Shaw, P., Silk, T. J., Skokauskas, N., Soliva Vila, J. C., Stevens, M. C., Sudre, G., Tamm, L., Tovar-Moll, F., Van Erp, T. G. M., Vance, A., Vilarroya, O., Vives-Gilabert, Y., Von Polier, G. G., Walitza, S., Yoncheva, Y. N., Zanetti, M. V., Ziegler, G. C., Glahn, D. C., Jahanshad, N., Medland, S. E., ENIGMA ADHD Working Group, Thompson, P. M., Fisher, S. E., Franke, B., & Francks, C. (2021). Analysis of structural brain asymmetries in Attention-Deficit/Hyperactivity Disorder in 39 datasets. Journal of Child Psychology and Psychiatry, 62(10), 1202-1219. doi:10.1111/jcpp.13396.

    Abstract

    Objective: Some studies have suggested alterations of structural brain asymmetry in attention-deficit/hyperactivity disorder (ADHD), but findings have been contradictory and based on small samples. Here we performed the largest-ever analysis of brain left-right asymmetry in ADHD, using 39 datasets of the ENIGMA consortium.
    Methods: We analyzed asymmetry of subcortical and cerebral cortical structures in up to 1,933 people with ADHD and 1,829 unaffected controls. Asymmetry Indexes (AIs) were calculated per participant for each bilaterally paired measure, and linear mixed effects modelling was applied separately in children, adolescents, adults, and the total sample, to test exhaustively for potential associations of ADHD with structural brain asymmetries.
    Results: There was no evidence for altered caudate nucleus asymmetry in ADHD, in contrast to prior literature. In children, there was less rightward asymmetry of the total hemispheric surface area compared to controls (t=2.1, P=0.04). Lower rightward asymmetry of medial orbitofrontal cortex surface area in ADHD (t=2.7, P=0.01) was similar to a recent finding for autism spectrum disorder. There were also some differences in cortical thickness asymmetry across age groups. In adults with ADHD, globus pallidus asymmetry was altered compared to those without ADHD. However, all effects were small (Cohen’s d from -0.18 to 0.18) and would not survive study-wide correction for multiple testing.
    Conclusion: Prior studies of altered structural brain asymmetry in ADHD were likely under-powered to detect the small effects reported here. Altered structural asymmetry is unlikely to provide a useful biomarker for ADHD, but may provide neurobiological insights into the trait.

    Additional information

    jcpp13396-sup-0001-supinfo.pdf
  • Pouw, W., Dingemanse, M., Motamedi, Y., & Ozyurek, A. (2021). A systematic investigation of gesture kinematics in evolving manual languages in the lab. Cognitive Science, 45(7): e13014. doi:10.1111/cogs.13014.

    Abstract

    Silent gestures consist of complex multi-articulatory movements but are now primarily studied through categorical coding of the referential gesture content. The relation of categorical linguistic content with continuous kinematics is therefore poorly understood. Here, we reanalyzed the video data from a gestural evolution experiment (Motamedi, Schouwstra, Smith, Culbertson, & Kirby, 2019), which showed increases in the systematicity of gesture content over time. We applied computer vision techniques to quantify the kinematics of the original data. Our kinematic analyses demonstrated that gestures become more efficient and less complex in their kinematics over generations of learners. We further detect the systematicity of gesture form on the level of the gesture kinematic interrelations, which directly scales with the systematicity obtained on semantic coding of the gestures. Thus, from continuous kinematics alone, we can tap into linguistic aspects that were previously only approachable through categorical coding of meaning. Finally, going beyond issues of systematicity, we show how unique gesture kinematic dialects emerged over generations as isolated chains of participants gradually diverged over iterations from other chains. We thereby conclude that gestures can come to embody the linguistic system at the level of interrelationships between communicative tokens, which should calibrate our theories about form and linguistic content.
  • Pouw, W., Wit, J., Bögels, S., Rasenberg, M., Milivojevic, B., & Ozyurek, A. (2021). Semantically related gestures move alike: Towards a distributional semantics of gesture kinematics. In V. G. Duffy (Ed.), Digital human modeling and applications in health, safety, ergonomics and risk management. Human body, motion and behavior: 12th International Conference, DHM 2021, Held as Part of the 23rd HCI International Conference, HCII 2021 (pp. 269-287). Berlin: Springer. doi:10.1007/978-3-030-77817-0_20.
  • Pouw, W., Proksch, S., Drijvers, L., Gamba, M., Holler, J., Kello, C., Schaefer, R. S., & Wiggins, G. A. (2021). Multilevel rhythms in multimodal communication. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 376: 20200334. doi:10.1098/rstb.2020.0334.

    Abstract

    It is now widely accepted that the brunt of animal communication is conducted via several modalities, e.g. acoustic and visual, either simultaneously or sequentially. This is a laudable multimodal turn relative to traditional accounts of temporal aspects of animal communication which have focused on a single modality at a time. However, the fields that are currently contributing to the study of multimodal communication are highly varied, and still largely disconnected given their sole focus on a particular level of description or their particular concern with human or non-human animals. Here, we provide an integrative overview of converging findings that show how multimodal processes occurring at neural, bodily, as well as social interactional levels each contribute uniquely to the complex rhythms that characterize communication in human and non-human animals. Though we address findings for each of these levels independently, we conclude that the most important challenge in this field is to identify how processes at these different levels connect.
  • Pouw, W., De Jonge-Hoekstra, L., Harrison, S. J., Paxton, A., & Dixon, J. A. (2021). Gesture-speech physics in fluent speech and rhythmic upper limb movements. Annals of the New York Academy of Sciences, 1491(1), 89-105. doi:10.1111/nyas.14532.

    Abstract

    Communicative hand gestures are often coordinated with prosodic aspects of speech, and salient moments of gestural movement (e.g., quick changes in speed) often co-occur with salient moments in speech (e.g., near peaks in fundamental frequency and intensity). A common understanding is that such gesture and speech coordination is culturally and cognitively acquired, rather than having a biological basis. Recently, however, the biomechanical physical coupling of arm movements to speech movements has been identified as a potentially important factor in understanding the emergence of gesture-speech coordination. Specifically, in the case of steady-state vocalization and mono-syllable utterances, forces produced during gesturing are transferred onto the tensioned body, leading to changes in respiratory-related activity and thereby affecting vocalization F0 and intensity. In the current experiment (N = 37), we extend this previous line of work to show that gesture-speech physics impacts fluent speech, too. Compared with non-movement, participants who are producing fluent self-formulated speech, while rhythmically moving their limbs, demonstrate heightened F0 and amplitude envelope, and such effects are more pronounced for higher-impulse arm versus lower-impulse wrist movement. We replicate that acoustic peaks arise especially during moments of peak-impulse (i.e., the beat) of the movement, namely around deceleration phases of the movement. Finally, higher deceleration rates of higher-mass arm movements were related to higher peaks in acoustics. These results confirm a role for physical-impulses of gesture affecting the speech system. We discuss the implications of gesture-speech physics for understanding the emergence of communicative gesture, both ontogenetically and phylogenetically.

    Additional information

    data and analyses
  • Preisig, B., Riecke, L., Sjerps, M. J., Kösem, A., Kop, B. R., Bramson, B., Hagoort, P., & Hervais-Adelman, A. (2021). Selective modulation of interhemispheric connectivity by transcranial alternating current stimulation influences binaural integration. Proceedings of the National Academy of Sciences of the United States of America, 118(7): e2015488118. doi:10.1073/pnas.2015488118.

    Abstract

    Brain connectivity plays a major role in the encoding, transfer, and integration of sensory information. Interregional synchronization of neural oscillations in the γ-frequency band has been suggested as a key mechanism underlying perceptual integration. In a recent study, we found evidence for this hypothesis showing that the modulation of interhemispheric oscillatory synchrony by means of bihemispheric high-density transcranial alternating current stimulation (HD-tACS) affects binaural integration of dichotic acoustic features. Here, we aimed to establish a direct link between oscillatory synchrony, effective brain connectivity, and binaural integration. We experimentally manipulated oscillatory synchrony (using bihemispheric γ-tACS with different interhemispheric phase lags) and assessed the effect on effective brain connectivity and binaural integration (as measured with functional MRI and a dichotic listening task, respectively). We found that tACS reduced intrahemispheric connectivity within the auditory cortices and that antiphase (interhemispheric phase lag 180°) tACS modulated connectivity between the two auditory cortices. Importantly, the changes in intra- and interhemispheric connectivity induced by tACS were correlated with changes in perceptual integration. Our results indicate that γ-band synchronization between the two auditory cortices plays a functional role in binaural integration, supporting the proposed role of interregional oscillatory synchrony in perceptual integration.
  • Pronina, M., Hübscher, I., Holler, J., & Prieto, P. (2021). Interactional training interventions boost children’s expressive pragmatic abilities: Evidence from a novel multidimensional testing approach. Cognitive Development, 57: 101003. doi:10.1016/j.cogdev.2020.101003.

    Abstract

    This study investigates the effectiveness of training preschoolers in order to enhance their social cognition and pragmatic skills. Eighty-three 3–4-year-olds were divided into three groups and listened to stories enriched with mental state terms. Then, whereas the control group engaged in non-reflective activities, the two experimental groups were guided by a trainer to reflect on mental states depicted in the stories. In one of these groups, the children were prompted to not only talk about these states but also “embody” them through prosodic and gestural cues. Results showed that while there were no significant effects on Theory of Mind, emotion understanding, and mental state verb comprehension, the experimental groups significantly improved their pragmatic skill scores pretest-to-posttest. These results suggest that interactional interventions can contribute to preschoolers’ pragmatic development, demonstrate the value of the new embodied training, and highlight the importance of multidimensional testing for the evaluation of intervention effects.
  • Puebla, G., Martin, A. E., & Doumas, L. A. A. (2021). The relational processing limits of classic and contemporary neural network models of language processing. Language, Cognition and Neuroscience, 36(2), 240-254. doi:10.1080/23273798.2020.1821906.

    Abstract

    Whether neural networks can capture relational knowledge is a matter of long-standing controversy. Recently, some researchers have argued that (1) classic connectionist models can handle relational structure and (2) the success of deep learning approaches to natural language processing suggests that structured representations are unnecessary to model human language. We tested the Story Gestalt model, a classic connectionist model of text comprehension, and a Sequence-to-Sequence with Attention model, a modern deep learning architecture for natural language processing. Both models were trained to answer questions about stories based on abstract thematic roles. Two simulations varied the statistical structure of new stories while keeping their relational structure intact. The performance of each model fell below chance at least under one manipulation. We argue that both models fail our tests because they can't perform dynamic binding. These results cast doubts on the suitability of traditional neural networks for explaining relational reasoning and language processing phenomena.

    Additional information

    supplementary material
  • Quaresima, A., Fitz, H., Duarte, R., Van den Broek, D., Hagoort, P., & Petersson, K. M. (2023). The Tripod neuron: A minimal structural reduction of the dendritic tree. The Journal of Physiology, 601(15), 3007-3437. doi:10.1113/JP283399.

    Abstract

    Neuron models with explicit dendritic dynamics have shed light on mechanisms for coincidence detection, pathway selection and temporal filtering. However, it is still unclear which morphological and physiological features are required to capture these phenomena. In this work, we introduce the Tripod neuron model and propose a minimal structural reduction of the dendritic tree that is able to reproduce these computations. The Tripod is a three-compartment model consisting of two segregated passive dendrites and a somatic compartment modelled as an adaptive, exponential integrate-and-fire neuron. It incorporates dendritic geometry, membrane physiology and receptor dynamics as measured in human pyramidal cells. We characterize the response of the Tripod to glutamatergic and GABAergic inputs and identify parameters that support supra-linear integration, coincidence-detection and pathway-specific gating through shunting inhibition. Following NMDA spikes, the Tripod neuron generates plateau potentials whose duration depends on the dendritic length and the strength of synaptic input. When fitted with distal compartments, the Tripod encodes previous activity into a dendritic depolarized state. This dendritic memory allows the neuron to perform temporal binding, and we show that it solves transition and sequence detection tasks on which a single-compartment model fails. Thus, the Tripod can account for dendritic computations previously explained only with more detailed neuron models or neural networks. Due to its simplicity, the Tripod neuron can be used efficiently in simulations of larger cortical circuits.
  • Raghavan, R., Raviv, L., & Peeters, D. (2023). What's your point? Insights from virtual reality on the relation between intention and action in the production of pointing gestures. Cognition, 240: 105581. doi:10.1016/j.cognition.2023.105581.

    Abstract

    Human communication involves the process of translating intentions into communicative actions. But how exactly do our intentions surface in the visible communicative behavior we display? Here we focus on pointing gestures, a fundamental building block of everyday communication, and investigate whether and how different types of underlying intent modulate the kinematics of the pointing hand and the brain activity preceding the gestural movement. In a dynamic virtual reality environment, participants pointed at a referent to either share attention with their addressee, inform their addressee, or get their addressee to perform an action. Behaviorally, it was observed that these different underlying intentions modulated how long participants kept their arm and finger still, both prior to starting the movement and when keeping their pointing hand in apex position. In early planning stages, a neurophysiological distinction was observed between a gesture that is used to share attitudes and knowledge with another person versus a gesture that mainly uses that person as a means to perform an action. Together, these findings suggest that our intentions influence our actions from the earliest neurophysiological planning stages to the kinematic endpoint of the movement itself.
  • Raimondi, T., Di Panfilo, G., Pasquali, M., Zarantonello, M., Favaro, L., Savini, T., Gamba, M., & Ravignani, A. (2023). Isochrony and rhythmic interaction in ape duetting. Proceedings of the Royal Society B: Biological Sciences, 290: 20222244. doi:10.1098/rspb.2022.2244.

    Abstract

    How did rhythm originate in humans, and other species? One cross-cultural universal, frequently found in human music, is isochrony: when note onsets repeat regularly like the ticking of a clock. Another universal consists in synchrony (e.g. when individuals coordinate their notes so that they are sung at the same time). An approach to biomusicology focuses on similarities and differences across species, trying to build phylogenies of musical traits. Here we test for the presence of, and a link between, isochrony and synchrony in a non-human animal. We focus on the songs of one of the few singing primates, the lar gibbon (Hylobates lar), extracting temporal features from their solo songs and duets. We show that another ape exhibits one rhythmic feature at the core of human musicality: isochrony. We show that an enhanced call rate overall boosts isochrony, suggesting that respiratory physiological constraints play a role in determining the song's rhythmic structure. However, call rate alone cannot explain the flexible isochrony we witness. Isochrony is plastic and modulated depending on the context of emission: gibbons are more isochronous when duetting than singing solo. We present evidence for rhythmic interaction: we find statistical causality between one individual's note onsets and the co-singer's onsets, and a higher-than-chance degree of synchrony in the duets. Finally, we find a sex-specific trade-off between individual isochrony and synchrony. Gibbons' plasticity for isochrony and rhythmic overlap may suggest a potential shared selective pressure for interactive vocal displays in singing primates. This pressure may have convergently shaped human and gibbon musicality while acting on a common neural primate substrate. Beyond humans, singing primates are promising models to understand how music and, specifically, a sense of rhythm originated in the primate phylogeny.
  • Räsänen, O., Seshadri, S., Lavechin, M., Cristia, A., & Casillas, M. (2021). ALICE: An open-source tool for automatic measurement of phoneme, syllable, and word counts from child-centered daylong recordings. Behavior Research Methods, 53, 818-835. doi:10.3758/s13428-020-01460-x.

    Abstract

    Recordings captured by wearable microphones are a standard method for investigating young children’s language environments. A key measure to quantify from such data is the amount of speech present in children’s home environments. To this end, the LENA recorder and software—a popular system for measuring linguistic input—estimates the number of adult words that children may hear over the course of a recording. However, word count estimation is challenging to do in a language-independent manner; the relationship between observable acoustic patterns and language-specific lexical entities is far from uniform across human languages. In this paper, we ask whether some alternative linguistic units, namely phone(me)s or syllables, could be measured instead of, or in parallel with, words in order to achieve improved cross-linguistic applicability and comparability of an automated system for measuring child language input. We discuss the advantages and disadvantages of measuring different units from theoretical and technical points of view. We also investigate the practical applicability of measuring such units using a novel system called Automatic LInguistic unit Count Estimator (ALICE) together with audio from seven child-centered daylong audio corpora from diverse cultural and linguistic environments. We show that language-independent measurement of phoneme counts is somewhat more accurate than syllables or words, but all three are highly correlated with human annotations on the same data. We share an open-source implementation of ALICE for use by the language research community, allowing automatic phoneme, syllable, and word count estimation from child-centered audio recordings.
  • Rasenberg, M. (2023). Mutual understanding from a multimodal and interactional perspective. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Rasenberg, M., Amha, A., Coler, M., van Koppen, M., van Miltenburg, E., de Rijk, L., Stommel, W., & Dingemanse, M. (2023). Reimagining language: Towards a better understanding of language by including our interactions with non-humans. Linguistics in the Netherlands, 40, 309-317. doi:10.1075/avt.00095.ras.

    Abstract

    What is language and who or what can be said to have it? In this essay we consider this question in the context of interactions with non-humans, specifically: animals and computers. While perhaps an odd pairing at first glance, here we argue that these domains can offer contrasting perspectives through which we can explore and reimagine language. The interactions between humans and animals, as well as between humans and computers, reveal both the essence and the boundaries of language: from examining the role of sequence and contingency in human-animal interaction, to unravelling the challenges of natural interactions with “smart” speakers and language models. By bringing together disparate fields around foundational questions, we push the boundaries of linguistic inquiry and uncover new insights into what language is and how it functions in diverse non-human-exclusive contexts.
  • Rasing, N. B., Van de Geest-Buit, W., Chan, O. Y. A., Mul, K., Lanser, A., Erasmus, C. E., Groothuis, J. T., Holler, J., Ingels, K. J. A. O., Post, B., Siemann, I., & Voermans, N. C. (2023). Psychosocial functioning in patients with altered facial expression: A scoping review in five neurological diseases. Disability and Rehabilitation. Advance online publication. doi:10.1080/09638288.2023.2259310.

    Abstract

    Purpose

    To perform a scoping review to investigate the psychosocial impact of having an altered facial expression in five neurological diseases.
    Methods

    A systematic literature search was performed. Studies were on Bell’s palsy, facioscapulohumeral muscular dystrophy (FSHD), Moebius syndrome, myotonic dystrophy type 1, or Parkinson’s disease patients; had a focus on altered facial expression; and had any form of psychosocial outcome measure. Data extraction focused on psychosocial outcomes.
    Results

    Bell’s palsy, myotonic dystrophy type 1, and Parkinson’s disease patients more often experienced some degree of psychosocial distress than healthy controls. In FSHD, facial weakness negatively influenced communication and was experienced as a burden. The psychosocial distress applied especially to women (Bell’s palsy and Parkinson’s disease) and to patients with more severely altered facial expression (Bell’s palsy), but not to Moebius syndrome patients. Furthermore, Parkinson’s disease patients with more pronounced hypomimia were perceived more negatively by observers. Various strategies were reported to compensate for altered facial expression.
    Conclusions

    This review showed that patients with altered facial expression in four of five included neurological diseases had reduced psychosocial functioning. Future research recommendations include studies on observers’ judgements of patients during social interactions and on the effectiveness of compensation strategies in enhancing psychosocial functioning.
    Implications for rehabilitation

    Negative effects of altered facial expression on psychosocial functioning are common, and are more pronounced in women and in more severely affected patients across various neurological disorders.

    Health care professionals should be alert to psychosocial distress in patients with altered facial expression.

    Learning of compensatory strategies could be a beneficial therapy for patients with psychosocial distress due to an altered facial expression.
  • Ravignani, A. (2021). Isochrony, vocal learning and the acquisition of rhythm and melody. Behavioral and Brain Sciences, 44: e88. doi:10.1017/S0140525X20001478.

    Abstract

    A cross-species perspective can extend and provide testable predictions for Savage et al.’s framework. Rhythm and melody, I argue, could bootstrap each other in the evolution of musicality. Isochrony may function as a temporal grid to support rehearsing and learning modulated, pitched vocalizations. Once this melodic plasticity is acquired, focus can shift back to refining rhythm processing and beat induction.
  • Ravignani, A., & Herbst, C. T. (2023). Voices in the ocean: Toothed whales evolved a third way of making sounds similar to that of land mammals and birds. Science, 379(6635), 881-882. doi:10.1126/science.adg5256.
  • Ravignani, A., & De Boer, B. (2021). Joint origins of speech and music: Testing evolutionary hypotheses on modern humans. Semiotica, 239, 169-176. doi:10.1515/sem-2019-0048.

    Abstract

    How music and speech evolved is a mystery. Several hypotheses on their origins, including one on their joint origins, have been put forward but rarely tested. Here we report and comment on the first experiment testing the hypothesis that speech and music bifurcated from a common system. We highlight strengths of the reported experiment, point out its relatedness to animal work, and suggest three alternative interpretations of its results. We conclude by sketching a future empirical programme extending this work.
  • Raviv, L., De Heer Kloots, M., & Meyer, A. S. (2021). What makes a language easy to learn? A preregistered study on how systematic structure and community size affect language learnability. Cognition, 210: 104620. doi:10.1016/j.cognition.2021.104620.

    Abstract

    Cross-linguistic differences in morphological complexity could have important consequences for language learning. Specifically, it is often assumed that languages with more regular, compositional, and transparent grammars are easier to learn by both children and adults. Moreover, it has been shown that such grammars are more likely to evolve in bigger communities. Together, this suggests that some languages are acquired faster than others, and that this advantage can be traced back to community size and to the degree of systematicity in the language. However, the causal relationship between systematic linguistic structure and language learnability has not been formally tested, despite its potential importance for theories on language evolution, second language learning, and the origin of linguistic diversity. In this pre-registered study, we experimentally tested the effects of community size and systematic structure on adult language learning. We compared the acquisition of different yet comparable artificial languages that were created by big or small groups in a previous communication experiment, which varied in their degree of systematic linguistic structure. We asked (a) whether more structured languages were easier to learn; and (b) whether languages created by the bigger groups were easier to learn. We found that highly systematic languages were learned faster and more accurately by adults, but that the relationship between language learnability and linguistic structure was typically non-linear: high systematicity was advantageous for learning, but learners did not benefit from partly or semi-structured languages. Community size did not affect learnability: languages that evolved in big and small groups were equally learnable, and there was no additional advantage for languages created by bigger groups beyond their degree of systematic structure. Furthermore, our results suggested that predictability is an important advantage of systematic structure: participants who learned more structured languages were better at generalizing these languages to new, unfamiliar meanings, and different participants who learned the same more structured languages were more likely to produce similar labels. That is, systematic structure may allow speakers to converge effortlessly, such that strangers can immediately understand each other.
  • Raviv, L., & Kirby, S. (2023). Self domestication and the cultural evolution of language. In J. J. Tehrani, J. Kendal, & R. Kendal (Eds.), The Oxford Handbook of Cultural Evolution. Oxford: Oxford University Press. doi:10.1093/oxfordhb/9780198869252.013.60.

    Abstract

    The structural design features of human language emerge in the process of cultural evolution, shaping languages over the course of communication, learning, and transmission. What role does this leave biological evolution? This chapter highlights the biological bases and preconditions that underlie the particular type of prosocial behaviours and cognitive inference abilities that are required for languages to emerge via cultural evolution to begin with.
  • Raviv, L., Jacobson, S. L., Plotnik, J. M., Bowman, J., Lynch, V., & Benítez-Burraco, A. (2023). Elephants as an animal model for self-domestication. Proceedings of the National Academy of Sciences of the United States of America, 120(15): e2208607120. doi:10.1073/pnas.2208607120.

    Abstract

    Humans are unique in their sophisticated culture and societal structures, their complex languages, and their extensive tool use. According to the human self-domestication hypothesis, this unique set of traits may be the result of an evolutionary process of self-induced domestication, in which humans evolved to be less aggressive and more cooperative. However, the only other species that has been argued to be self-domesticated besides humans so far is bonobos, resulting in a narrow scope for investigating this theory limited to the primate order. Here, we propose an animal model for studying self-domestication: the elephant. First, we support our hypothesis with an extensive cross-species comparison, which suggests that elephants indeed exhibit many of the features associated with self-domestication (e.g., reduced aggression, increased prosociality, extended juvenile period, increased playfulness, socially regulated cortisol levels, and complex vocal behavior). Next, we present genetic evidence to reinforce our proposal, showing that genes positively selected in elephants are enriched in pathways associated with domestication traits and include several candidate genes previously associated with domestication. We also discuss several explanations for what may have triggered a self-domestication process in the elephant lineage. Our findings support the idea that elephants, like humans and bonobos, may be self-domesticated. Since the most recent common ancestor of humans and elephants is likely the most recent common ancestor of all placental mammals, our findings have important implications for convergent evolution beyond the primate taxa, and constitute an important advance toward understanding how and why self-domestication shaped humans’ unique cultural niche.

    Additional information

    supporting information
  • Rebuschat, P., Monaghan, P., & Schoetensack, C. (2021). Learning vocabulary and grammar from cross-situational statistics. Cognition, 206: 104475. doi:10.1016/j.cognition.2020.104475.

    Abstract

    Across multiple situations, child and adult learners are sensitive to co-occurrences between individual words and their referents in the environment, which provide a means by which the ambiguity of word-world mappings may be resolved (Monaghan & Mattock, 2012; Scott & Fisher, 2012; Smith & Yu, 2008; Yu & Smith, 2007). In three studies, we tested whether cross-situational learning is sufficiently powerful to support simultaneous learning of the referents for words from multiple grammatical categories, a more realistic reflection of more complex natural language learning situations. In Experiment 1, adult learners heard sentences comprising nouns, verbs, adjectives, and grammatical markers indicating subject and object roles, and viewed a dynamic scene to which the sentence referred. In Experiments 2 and 3, we further increased the uncertainty of the referents by presenting two scenes alongside each sentence. In all studies, we found that cross-situational statistical learning was sufficiently powerful to facilitate acquisition of both vocabulary and grammar from complex sentence-to-scene correspondences, simulating the situations that more closely resemble the challenge facing the language learner.

    Additional information

    supplementary material
  • Redl, T. (2021). Masculine generic pronouns: Investigating the processing of an unintended gender cue. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Redl, T., Frank, S. L., De Swart, P., & De Hoop, H. (2021). The male bias of a generically-intended masculine pronoun: Evidence from eye-tracking and sentence evaluation. PLoS One, 16(4): e0249309. doi:10.1371/journal.pone.0249309.

    Abstract

    Two experiments tested whether the Dutch possessive pronoun zijn ‘his’ gives rise to a gender inference and thus causes a male bias when used generically in sentences such as Everyone was putting on his shoes. Experiment 1 (N = 120, 48 male) was a conceptual replication of a previous eye-tracking study that had not found evidence of a male bias. The results of the current eye-tracking experiment showed the generically-intended masculine pronoun to trigger a gender inference and cause a male bias, but for male participants and in stereotypically neutral contexts only. No evidence for a male bias was thus found in stereotypically female and male contexts, nor for female participants altogether. Experiment 2 (N = 80, 40 male) used the same stimuli as Experiment 1, but employed the sentence evaluation paradigm. No evidence of a male bias was found in Experiment 2. Taken together, the results suggest that the generically-intended masculine pronoun zijn ‘his’ can cause a male bias for male participants even when the referents are previously introduced by the inclusive and grammatically gender-unmarked iedereen ‘everyone’. This male bias surfaces with eye-tracking, which taps directly into early language processing, but not in offline sentence evaluations. Furthermore, the results suggest that the intended generic reading of the masculine possessive pronoun zijn ‘his’ is more readily available for women than for men.

    Additional information

    data
