Publications

  • Tamariz, M., Roberts, S. G., Martínez, J. I., & Santiago, J. (2018). The Interactive Origin of Iconicity. Cognitive Science, 42, 334-349. doi:10.1111/cogs.12497.

    Abstract

    We investigate the emergence of iconicity, specifically a bouba-kiki effect in miniature artificial languages under different functional constraints: when the languages are reproduced and when they are used communicatively. We ran transmission chains of (a) participant dyads who played an interactive communicative game and (b) individual participants who played a matched learning game. An analysis of the languages over six generations in an iterated learning experiment revealed that in the Communication condition, but not in the Reproduction condition, words for spiky shapes tend to be rated by naive judges as more spiky than the words for round shapes. This suggests that iconicity may not only be the outcome of innovations introduced by individuals, but, crucially, the result of interlocutor negotiation of new communicative conventions. We interpret our results as an illustration of cultural evolution by random mutation and selection (as opposed to by guided variation).
  • Tan, Y., & Martin, R. C. (2018). Verbal short-term memory capacities and executive function in semantic and syntactic interference resolution during sentence comprehension: Evidence from aphasia. Neuropsychologia, 113, 111-125. doi:10.1016/j.neuropsychologia.2018.03.001.

    Abstract

    This study examined the role of verbal short-term memory (STM) and executive function (EF) underlying semantic and syntactic interference resolution during sentence comprehension for persons with aphasia (PWA) with varying degrees of STM and EF deficits. Semantic interference was manipulated by varying the semantic plausibility of the intervening NP as subject of the verb and syntactic interference was manipulated by varying whether the NP was another subject or an object. Nine PWA were assessed on sentence reading times and on comprehension question performance. PWA showed exaggerated semantic and syntactic interference effects relative to healthy age-matched control subjects. Importantly, correlational analyses showed that while answering comprehension questions, PWA’s semantic STM capacity related to their ability to resolve semantic but not syntactic interference. In contrast, PWA’s EF abilities related to their ability to resolve syntactic but not semantic interference. Phonological STM deficits were not related to the ability to resolve either type of interference. The results for semantic STM are consistent with prior findings indicating a role for semantic but not phonological STM in sentence comprehension, specifically with regard to maintaining semantic information prior to integration. The results for syntactic interference are consistent with the recent findings suggesting that EF is critical for syntactic processing.
  • Tatsumi, T., & Sala, G. (2023). Learning conversational dependency: Children’s response using un in Japanese. Journal of Child Language, 50(5), 1226-1244. doi:10.1017/S0305000922000344.

    Abstract

    This study investigates how Japanese-speaking children learn interactional dependencies in conversations that determine the use of un, a token typically used as a positive response to yes-no questions, backchannel, and acknowledgement. We hypothesise that children learn to produce un appropriately by recognising different types of cues occurring in the immediately preceding turns. We built a set of generalised linear models on the longitudinal conversation data from seven children aged 1 to 5 years and their caregivers. Our models revealed that children not only increased their un production, but also learned to attend to relevant cues in the preceding turns to understand when to respond by producing un. Children increasingly produced un when their interlocutors asked a yes-no question or signalled the continuation of their own speech. These results illustrate how children learn the probabilistic dependency between adjacent turns, and become able to participate in conversational interactions.
  • Teeling, E., Vernes, S. C., Davalos, L. M., Ray, D. A., Gilbert, M. T. P., Myers, E., & Bat1K Consortium (2018). Bat biology, genomes, and the Bat1K project: To generate chromosome-level genomes for all living bat species. Annual Review of Animal Biosciences, 6, 23-46. doi:10.1146/annurev-animal-022516-022811.

    Abstract

    Bats are unique among mammals, possessing some of the rarest mammalian adaptations, including true self-powered flight, laryngeal echolocation, exceptional longevity, unique immunity, contracted genomes, and vocal learning. They provide key ecosystem services, pollinating tropical plants, dispersing seeds, and controlling insect pest populations, thus driving healthy ecosystems. They account for more than 20% of all living mammalian diversity, and their crown-group evolutionary history dates back to the Eocene. Despite their great numbers and diversity, many species are threatened and endangered. Here we announce Bat1K, an initiative to sequence the genomes of all living bat species (n∼1,300) to chromosome-level assembly. The Bat1K genome consortium unites bat biologists (>132 members as of writing), computational scientists, conservation organizations, genome technologists, and any interested individuals committed to a better understanding of the genetic and evolutionary mechanisms that underlie the unique adaptations of bats. Our aim is to catalog the unique genetic diversity present in all living bats to better understand the molecular basis of their unique adaptations; uncover their evolutionary history; link genotype with phenotype; and ultimately better understand, promote, and conserve bats. Here we review the unique adaptations of bats and highlight how chromosome-level genome assemblies can uncover the molecular basis of these traits. We present a novel sequencing and assembly strategy and review the striking societal and scientific benefits that will result from the Bat1K initiative.
  • Ten Bosch, L., Oostdijk, N., & De Ruiter, J. P. (2004). Turn-taking in social talk dialogues: Temporal, formal and functional aspects. In 9th International Conference Speech and Computer (SPECOM'2004) (pp. 454-461).

    Abstract

    This paper presents a quantitative analysis of the turn-taking mechanism evidenced in 93 telephone dialogues that were taken from the 9-million-word Spoken Dutch Corpus. While the first part of the paper focuses on the temporal phenomena of turn taking, such as durations of pauses and overlaps of turns in the dialogues, the second part explores the discourse-functional aspects of utterances in a subset of 8 dialogues that were annotated especially for this purpose. The results show that speakers adapt their turn-taking behaviour to the interlocutor’s behaviour. Furthermore, the results indicate that male-male dialogues show a higher proportion of overlapping turns than female-female dialogues.
  • Ten Bosch, L., Ernestus, M., & Boves, L. (2018). Analyzing reaction time sequences from human participants in auditory experiments. In Proceedings of Interspeech 2018 (pp. 971-975). doi:10.21437/Interspeech.2018-1728.

    Abstract

    Sequences of reaction times (RT) produced by participants in an experiment are not only influenced by the stimuli, but by many other factors as well, including fatigue, attention, experience, IQ, handedness, etc. These confounding factors result in long-term effects (such as a participant’s overall reaction capability) and in short- and medium-time fluctuations in RTs (often referred to as ‘local speed effects’). Because stimuli are usually presented in a random sequence different for each participant, local speed effects affect the underlying ‘true’ RTs of specific trials in different ways across participants. To be able to focus statistical analysis on the effects of the cognitive process under study, it is necessary to reduce the effect of confounding factors as much as possible. In this paper we propose and compare techniques and criteria for doing so, with a focus on reducing (‘filtering’) the local speed effects. We show that filtering matters substantially for the significance analyses of predictors in linear mixed effect regression models. The performance of filtering is assessed by the average between-participant correlation between filtered RT sequences and by Akaike’s Information Criterion, an important measure of the goodness-of-fit of linear mixed effect regression models.
  • Ten Bosch, L., Oostdijk, N., & De Ruiter, J. P. (2004). Durational aspects of turn-taking in spontaneous face-to-face and telephone dialogues. In P. Sojka, I. Kopecek, & K. Pala (Eds.), Text, Speech and Dialogue: Proceedings of the 7th International Conference TSD 2004 (pp. 563-570). Heidelberg: Springer.

    Abstract

    On the basis of two-speaker spontaneous conversations, it is shown that the distributions of both pauses and speech-overlaps of telephone and face-to-face dialogues have different statistical properties. Pauses in a face-to-face dialogue last up to 4 times longer than pauses in telephone conversations in functionally comparable conditions. There is a high correlation (0.88 or larger) between the average pause duration for the two speakers across face-to-face dialogues and telephone dialogues. The data provided form a first quantitative analysis of the complex turn-taking mechanism evidenced in the dialogues available in the 9-million-word Spoken Dutch Corpus.
  • Ten Bosch, L., & Boves, L. (2018). Information encoding by deep neural networks: what can we learn? In Proceedings of Interspeech 2018 (pp. 1457-1461). doi:10.21437/Interspeech.2018-1896.

    Abstract

    The recent advent of deep learning techniques in speech technology and in particular in automatic speech recognition has yielded substantial performance improvements. This suggests that deep neural networks (DNNs) are able to capture structure in speech data that older methods for acoustic modeling, such as Gaussian Mixture Models and shallow neural networks, fail to uncover. In image recognition it is possible to link representations on the first couple of layers in DNNs to structural properties of images, and to representations on early layers in the visual cortex. This raises the question whether it is possible to accomplish a similar feat with representations on DNN layers when processing speech input. In this paper we present three different experiments in which we attempt to untangle how DNNs encode speech signals, and to relate these representations to phonetic knowledge, with the aim to advance conventional phonetic concepts and to choose the topology of a DNN more efficiently. Two experiments investigate representations formed by auto-encoders. A third experiment investigates representations on convolutional layers that treat speech spectrograms as if they were images. The results lay the basis for future experiments with recursive networks.
  • Ten Oever, S., & Martin, A. E. (2024). Interdependence of “what” and “when” in the brain. Journal of Cognitive Neuroscience, 36(1), 167-186. doi:10.1162/jocn_a_02067.

    Abstract

    From a brain's-eye-view, when a stimulus occurs and what it is are interrelated aspects of interpreting the perceptual world. Yet in practice, the putative perceptual inferences about sensory content and timing are often dichotomized and not investigated as an integrated process. We here argue that neural temporal dynamics can influence what is perceived, and in turn, stimulus content can influence the time at which perception is achieved. This computational principle results from the highly interdependent relationship of what and when in the environment. Both brain processes and perceptual events display strong temporal variability that is not always modeled; we argue that understanding—and, minimally, modeling—this temporal variability is key for theories of how the brain generates unified and consistent neural representations and that we ignore temporal variability in our analysis practice at the peril of both data interpretation and theory-building. Here, we review what and when interactions in the brain, demonstrate via simulations how temporal variability can result in misguided interpretations and conclusions, and outline how to integrate and synthesize what and when in theories and models of brain computation.
  • Ten Oever, S., Titone, L., te Rietmolen, N., & Martin, A. E. (2024). Phase-dependent word perception emerges from region-specific sensitivity to the statistics of language. Proceedings of the National Academy of Sciences of the United States of America, 121(3): e2320489121. doi:10.1073/pnas.2320489121.

    Abstract

    Neural oscillations reflect fluctuations in excitability, which biases the percept of ambiguous sensory input. Why this bias occurs is still not fully understood. We hypothesized that neural populations representing likely events are more sensitive, and thereby become active on earlier oscillatory phases, when the ensemble itself is less excitable. Perception of ambiguous input presented during less-excitable phases should therefore be biased toward frequent or predictable stimuli that have lower activation thresholds. Here, we show such a frequency bias in spoken word recognition using psychophysics, magnetoencephalography (MEG), and computational modelling. With MEG, we found a double dissociation, where the phase of oscillations in the superior temporal gyrus and medial temporal gyrus biased word-identification behavior based on phoneme and lexical frequencies, respectively. This finding was reproduced in a computational model. These results demonstrate that oscillations provide a temporal ordering of neural activity based on the sensitivity of separable neural populations.
  • Ter Bekke, M., Drijvers, L., & Holler, J. (2024). Hand gestures have predictive potential during conversation: An investigation of the timing of gestures in relation to speech. Cognitive Science, 48(1): e13407. doi:10.1111/cogs.13407.

    Abstract

    During face-to-face conversation, transitions between speaker turns are incredibly fast. These fast turn exchanges seem to involve next speakers predicting upcoming semantic information, such that next turn planning can begin before a current turn is complete. Given that face-to-face conversation also involves the use of communicative bodily signals, an important question is how bodily signals such as co-speech hand gestures play into these processes of prediction and fast responding. In this corpus study, we found that hand gestures that depict or refer to semantic information started before the corresponding information in speech, which held both for the onset of the gesture as a whole, as well as the onset of the stroke (the most meaningful part of the gesture). This early timing potentially allows listeners to use the gestural information to predict the corresponding semantic information to be conveyed in speech. Moreover, we provided further evidence that questions with gestures got faster responses than questions without gestures. However, we found no evidence for the idea that how much a gesture precedes its lexical affiliate (i.e., its predictive potential) relates to how fast responses were given. The findings presented here highlight the importance of the temporal relation between speech and gesture and help to illuminate the potential mechanisms underpinning multimodal language processing during face-to-face conversation.
  • Ter Bekke, M., Drijvers, L., & Holler, J. (2024). Gestures speed up responses to questions. Language, Cognition and Neuroscience, 39(4), 423-430. doi:10.1080/23273798.2024.2314021.

    Abstract

    Most language use occurs in face-to-face conversation, which involves rapid turn-taking. Seeing communicative bodily signals in addition to hearing speech may facilitate such fast responding. We tested whether this holds for co-speech hand gestures by investigating whether these gestures speed up button press responses to questions. Sixty native speakers of Dutch viewed videos in which an actress asked yes/no-questions, either with or without a corresponding iconic hand gesture. Participants answered the questions as quickly and accurately as possible via button press. Gestures did not impact response accuracy, but crucially, gestures sped up responses, suggesting that response planning may be finished earlier when gestures are seen. How much gestures sped up responses was not related to their timing in the question or their timing with respect to the corresponding information in speech. Overall, these results are in line with the idea that multimodality may facilitate fast responding during face-to-face conversation.
  • Ter Bekke, M., Levinson, S. C., Van Otterdijk, L., Kühn, M., & Holler, J. (2024). Visual bodily signals and conversational context benefit the anticipation of turn ends. Cognition, 248: 105806. doi:10.1016/j.cognition.2024.105806.

    Abstract

    The typical pattern of alternating turns in conversation seems trivial at first sight. But a closer look quickly reveals the cognitive challenges involved, with much of it resulting from the fast-paced nature of conversation. One core ingredient to turn coordination is the anticipation of upcoming turn ends so as to be able to ready oneself for providing the next contribution. Across two experiments, we investigated two variables inherent to face-to-face conversation, the presence of visual bodily signals and preceding discourse context, in terms of their contribution to turn end anticipation. In a reaction time paradigm, participants anticipated conversational turn ends better when seeing the speaker and their visual bodily signals than when they did not, especially so for longer turns. Likewise, participants were better able to anticipate turn ends when they had access to the preceding discourse context than when they did not, and especially so for longer turns. Critically, the two variables did not interact, showing that visual bodily signals retain their influence even in the context of preceding discourse. In a pre-registered follow-up experiment, we manipulated the visibility of the speaker's head, eyes and upper body (i.e. torso + arms). Participants were better able to anticipate turn ends when the speaker's upper body was visible, suggesting a role for manual gestures in turn end anticipation. Together, these findings show that seeing the speaker during conversation may critically facilitate turn coordination in interaction.
  • Terporten, R., Huizeling, E., Heidlmayr, K., Hagoort, P., & Kösem, A. (2024). The interaction of context constraints and predictive validity during sentence reading. Journal of Cognitive Neuroscience, 36(2), 225-238. doi:10.1162/jocn_a_02082.

    Abstract

    Words are not processed in isolation; instead, they are commonly embedded in phrases and sentences. The sentential context influences the perception and processing of a word. However, how this is achieved by brain processes and whether predictive mechanisms underlie this process remain a debated topic. Here, we employed an experimental paradigm in which we orthogonalized sentence context constraints and predictive validity, which was defined as the ratio of congruent to incongruent sentence endings within the experiment. While recording electroencephalography, participants read sentences with three levels of sentential context constraints (high, medium, and low). Participants were also separated into two groups that differed in their ratio of valid congruent to incongruent target words that could be predicted from the sentential context. For both groups, we investigated modulations of alpha power before, and N400 amplitude modulations after target word onset. The results reveal that the N400 amplitude gradually decreased with higher context constraints and cloze probability. In contrast, alpha power was not significantly affected by context constraint. Neither the N400 nor alpha power were significantly affected by changes in predictive validity.
  • Terrill, A. (1998). Biri. München: Lincom Europa.

    Abstract

    This work presents a salvage grammar of the Biri language of Eastern Central Queensland, a Pama-Nyungan language belonging to the large Maric subgroup. As the language is no longer used, the grammatical description is based on old written sources and on recordings made by linguists in the 1960s and 1970s. Biri is in many ways typical of the Pama-Nyungan languages of Southern Queensland. It has split case marking systems, marking nouns according to an ergative/absolutive system and pronouns according to a nominative/accusative system. Unusually for its area, Biri also has bound pronouns on its verb, cross-referencing the person, number and case of core participants. As far as it is possible, the grammatical discussion is ‘theory neutral’. The first four chapters deal with the phonology, morphology, and syntax of the language. The last two chapters contain a substantial discussion of Biri’s place in the Pama-Nyungan family. In chapter 6 the numerous dialects of the Biri language are discussed. In chapter 7 the close linguistic relationship between Biri and the surrounding languages is examined.
  • Terrill, A. (2004). Coordination in Lavukaleve. In M. Haspelmath (Ed.), Coordinating Constructions. (pp. 427-443). Amsterdam: John Benjamins.
  • Tezcan, F., Weissbart, H., & Martin, A. E. (2023). A tradeoff between acoustic and linguistic feature encoding in spoken language comprehension. eLife, 12: e82386. doi:10.7554/eLife.82386.

    Abstract

    When we comprehend language from speech, the phase of the neural response aligns with particular features of the speech input, resulting in a phenomenon referred to as neural tracking. In recent years, a large body of work has demonstrated the tracking of the acoustic envelope and abstract linguistic units at the phoneme and word levels, and beyond. However, the degree to which speech tracking is driven by acoustic edges of the signal, or by internally-generated linguistic units, or by the interplay of both, remains contentious. In this study, we used naturalistic story-listening to investigate (1) whether phoneme-level features are tracked over and above acoustic edges, (2) whether word entropy, which can reflect sentence- and discourse-level constraints, impacted the encoding of acoustic and phoneme-level features, and (3) whether the tracking of acoustic edges was enhanced or suppressed during comprehension of a first language (Dutch) compared to a statistically familiar but uncomprehended language (French). We first show that encoding models with phoneme-level linguistic features, in addition to acoustic features, uncovered an increased neural tracking response; this signal was further amplified in a comprehended language, putatively reflecting the transformation of acoustic features into internally generated phoneme-level representations. Phonemes were tracked more strongly in a comprehended language, suggesting that language comprehension functions as a neural filter over acoustic edges of the speech signal as it transforms sensory signals into abstract linguistic units. We then show that word entropy enhances neural tracking of both acoustic and phonemic features when sentence- and discourse-level context is less constraining. When the language was not comprehended, acoustic features, but not phonemic ones, were more strongly modulated; in contrast, when a native language was comprehended, phonemic features were more strongly modulated. Taken together, our findings highlight the flexible modulation of acoustic and phonemic features by sentence- and discourse-level constraints in language comprehension, and document the neural transformation from speech perception to language comprehension, consistent with an account of language processing as a neural filter from sensory to abstract representations.
  • Theakston, A. L., Lieven, E. V., Pine, J. M., & Rowland, C. F. (2004). Semantic generality, input frequency and the acquisition of syntax. Journal of Child Language, 31(1), 61-99. doi:10.1017/S0305000903005956.

    Abstract

    In many areas of language acquisition, researchers have suggested that semantic generality plays an important role in determining the order of acquisition of particular lexical forms. However, generality is typically confounded with the effects of input frequency and it is therefore unclear to what extent semantic generality or input frequency determines the early acquisition of particular lexical items. The present study evaluates the relative influence of semantic status and properties of the input on the acquisition of verbs and their argument structures in the early speech of 9 English-speaking children from 2;0 to 3;0. The children's early verb utterances are examined with respect to (1) the order of acquisition of particular verbs in three different constructions, (2) the syntactic diversity of use of individual verbs, (3) the relative proportional use of semantically general verbs as a function of total verb use, and (4) their grammatical accuracy. The data suggest that although measures of semantic generality correlate with various measures of early verb use, once the effects of verb use in the input are removed, semantic generality is not a significant predictor of early verb use. The implications of these results for semantic-based theories of verb argument structure acquisition are discussed.
  • Thompson, B., & Lupyan, G. (2018). Automatic estimation of lexical concreteness in 77 languages. In C. Kalish, M. Rau, J. Zhu, & T. T. Rogers (Eds.), Proceedings of the 40th Annual Conference of the Cognitive Science Society (CogSci 2018) (pp. 1122-1127). Austin, TX: Cognitive Science Society.

    Abstract

    We estimate lexical concreteness for millions of words across 77 languages. Using a simple regression framework, we combine vector-based models of lexical semantics with experimental norms of concreteness in English and Dutch. By applying techniques to align vector-based semantics across distinct languages, we compute and release concreteness estimates at scale in numerous languages for which experimental norms are not currently available. This paper lays out the technique and its efficacy. Although this is a difficult dataset to evaluate immediately, concreteness estimates computed from English correlate with Dutch experimental norms at ρ = .75 in the vocabulary at large, increasing to ρ = .8 among nouns. Our predictions also recapitulate attested relationships with word frequency. The approach we describe can be readily applied to numerous lexical measures beyond concreteness.
  • Thompson, B., Roberts, S., & Lupyan, G. (2018). Quantifying semantic similarity across languages. In C. Kalish, M. Rau, J. Zhu, & T. T. Rogers (Eds.), Proceedings of the 40th Annual Conference of the Cognitive Science Society (CogSci 2018) (pp. 2551-2556). Austin, TX: Cognitive Science Society.

    Abstract

    Do all languages convey semantic knowledge in the same way? If language simply mirrors the structure of the world, the answer should be a qualified “yes”. If, however, languages impose structure as much as reflect it, then even ostensibly the “same” word in different languages may mean quite different things. We provide a first pass at a large-scale quantification of cross-linguistic semantic alignment of approximately 1000 meanings in 55 languages. We find that the translation equivalents in some domains (e.g., Time, Quantity, and Kinship) exhibit high alignment across languages while the structure of other domains (e.g., Politics, Food, Emotions, and Animals) exhibits substantial cross-linguistic variability. Our measure of semantic alignment correlates with known phylogenetic distances between languages: more phylogenetically distant languages have less semantic alignment. We also find semantic alignment to correlate with cultural distances between societies speaking the languages, suggesting a rich co-adaptation of language and culture even in domains of experience that appear most constrained by the natural world.
  • Thorin, J., Sadakata, M., Desain, P., & McQueen, J. M. (2018). Perception and production in interaction during non-native speech category learning. The Journal of the Acoustical Society of America, 144(1), 92-103. doi:10.1121/1.5044415.

    Abstract

    Establishing non-native phoneme categories can be a notoriously difficult endeavour—in both speech perception and speech production. This study asks how these two domains interact in the course of this learning process. It investigates the effect of perceptual learning and related production practice of a challenging non-native category on the perception and/or production of that category. A four-day perceptual training protocol on the British English /æ/-/ɛ/ vowel contrast was combined with either related or unrelated production practice. After feedback on perceptual categorisation of the contrast, native Dutch participants in the related production group (N = 19) pronounced the trial's correct answer, while participants in the unrelated production group (N = 19) pronounced similar but phonologically unrelated words. Comparison of pre- and post-tests showed significant improvement over the course of training in both perception and production, but no differences between the groups were found. The lack of an effect of production practice is discussed in the light of previous, competing results and models of second-language speech perception and production. This study confirms that, even in the context of related production practice, perceptual training boosts production learning.
  • Thothathiri, M., Basnakova, J., Lewis, A. G., & Briand, J. M. (2024). Fractionating difficulty during sentence comprehension using functional neuroimaging. Cerebral Cortex, 34(2): bhae032. doi:10.1093/cercor/bhae032.

    Abstract

    Sentence comprehension is highly practiced and largely automatic, but this belies the complexity of the underlying processes. We used functional neuroimaging to investigate garden-path sentences that cause difficulty during comprehension, in order to unpack the different processes used to support sentence interpretation. By investigating garden-path and other types of sentences within the same individuals, we functionally profiled different regions within the temporal and frontal cortices in the left hemisphere. The results revealed that different aspects of comprehension difficulty are handled by left posterior temporal, left anterior temporal, ventral left frontal, and dorsal left frontal cortices. The functional profiles of these regions likely lie along a spectrum of specificity to generality, including language-specific processing of linguistic representations, more general conflict resolution processes operating over linguistic representations, and processes for handling difficulty in general. These findings suggest that difficulty is not unitary and that there is a role for a variety of linguistic and non-linguistic processes in supporting comprehension.
  • Tian, X., Ding, N., Teng, X., Bai, F., & Poeppel, D. (2018). Imagined speech influences perceived loudness of sound. Nature Human Behaviour, 2, 225-234. doi:10.1038/s41562-018-0305-8.

    Abstract

    The way top-down and bottom-up processes interact to shape our perception and behaviour is a fundamental question and remains highly controversial. How early in a processing stream do such interactions occur, and what factors govern such interactions? The degree of abstractness of a perceptual attribute (for example, orientation versus shape in vision, or loudness versus sound identity in hearing) may determine the locus of neural processing and interaction between bottom-up and internal information. Using an imagery-perception repetition paradigm, we find that imagined speech affects subsequent auditory perception, even for a low-level attribute such as loudness. This effect is observed in early auditory responses in magnetoencephalography and electroencephalography that correlate with behavioural loudness ratings. The results suggest that the internal reconstruction of neural representations without external stimulation is flexibly regulated by task demands, and that such top-down processes can interact with bottom-up information at an early perceptual stage to modulate perception.
  • Tilot, A. K., Kucera, K. S., Vino, A., Asher, J. E., Baron-Cohen, S., & Fisher, S. E. (2018). Rare variants in axonogenesis genes connect three families with sound–color synesthesia. Proceedings of the National Academy of Sciences of the United States of America, 115(12), 3168-3173. doi:10.1073/pnas.1715492115.

    Abstract

    Synesthesia is a rare nonpathological phenomenon where stimulation of one sense automatically provokes a secondary perception in another. Hypothesized to result from differences in cortical wiring during development, synesthetes show atypical structural and functional neural connectivity, but the underlying molecular mechanisms are unknown. The trait also appears to be more common among people with autism spectrum disorder and savant abilities. Previous linkage studies searching for shared loci of large effect size across multiple families have had limited success. To address the critical lack of candidate genes, we applied whole-exome sequencing to three families with sound–color (auditory–visual) synesthesia affecting multiple relatives across three or more generations. We identified rare genetic variants that fully cosegregate with synesthesia in each family, uncovering 37 genes of interest. Consistent with reports indicating genetic heterogeneity, no variants were shared across families. Gene ontology analyses highlighted six genes—COL4A1, ITGA2, MYO10, ROBO3, SLC9A6, and SLIT2—associated with axonogenesis and expressed during early childhood when synesthetic associations are formed. These results are consistent with neuroimaging-based hypotheses about the role of hyperconnectivity in the etiology of synesthesia and offer a potential entry point into the neurobiology that organizes our sensory experiences.

    Additional information

    Tilot_etal_2018SI.pdf
  • Titus, A., Dijkstra, T., Willems, R. M., & Peeters, D. (2024). Beyond the tried and true: How virtual reality, dialog setups, and a focus on multimodality can take bilingual language production research forward. Neuropsychologia, 193: 108764. doi:10.1016/j.neuropsychologia.2023.108764.

    Abstract

    Bilinguals possess the ability of expressing themselves in more than one language, and typically do so in contextually rich and dynamic settings. Theories and models have indeed long considered context factors to affect bilingual language production in many ways. However, most experimental studies in this domain have failed to fully incorporate linguistic, social, or physical context aspects, let alone combine them in the same study. Indeed, most experimental psycholinguistic research has taken place in isolated and constrained lab settings with carefully selected words or sentences, rather than under rich and naturalistic conditions. We argue that the most influential experimental paradigms in the psycholinguistic study of bilingual language production fall short of capturing the effects of context on language processing and control presupposed by prominent models. This paper therefore aims to enrich the methodological basis for investigating context aspects in current experimental paradigms and thereby move the field of bilingual language production research forward theoretically. After considering extensions of existing paradigms proposed to address context effects, we present three far-ranging innovative proposals, focusing on virtual reality, dialog situations, and multimodality in the context of bilingual language production.
  • Tkalcec, A., Bierlein, M., Seeger‐Schneider, G., Walitza, S., Jenny, B., Menks, W. M., Felhbaum, L. V., Borbas, R., Cole, D. M., Raschle, N., Herbrecht, E., Stadler, C., & Cubillo, A. (2023). Empathy deficits, callous‐unemotional traits and structural underpinnings in autism spectrum disorder and conduct disorder youth. Autism Research, 16(10), 1946-1962. doi:10.1002/aur.2993.

    Abstract

    Distinct empathy deficits are often described in patients with conduct disorder (CD) and autism spectrum disorder (ASD), yet their neural underpinnings and the influence of comorbid Callous-Unemotional (CU) traits are unclear. This study compares the cognitive (CE) and affective empathy (AE) abilities of youth with CD and ASD, their potential neuroanatomical correlates, and the influence of CU traits on empathy. Adolescents and parents/caregivers completed empathy questionnaires (N = 148 adolescents, mean age = 15.16 years) and T1 weighted images were obtained from a subsample (N = 130). Group differences in empathy and the influence of CU traits were investigated using Bayesian analyses and Voxel-Based Morphometry with Threshold-Free Cluster Enhancement focusing on regions involved in AE (insula, amygdala, inferior frontal gyrus and cingulate cortex) and CE processes (ventromedial prefrontal cortex, temporoparietal junction, superior temporal gyrus, and precuneus). The ASD group showed lower parent-reported AE and CE scores and lower self-reported CE scores while the CD group showed lower parent-reported CE scores than controls. When accounting for the influence of CU traits, no AE deficits in ASD and no CE deficits in CD were found, but CE deficits in ASD remained. Across all participants, CU traits were negatively associated with gray matter volumes in the anterior cingulate, extending into the mid cingulate, ventromedial prefrontal cortex, and precuneus. Thus, although co-occurring CU traits have been linked to global empathy deficits in reports and underlying brain structures, their influence on aspects of empathy might be disorder-specific. Investigating the subdimensions of empathy may therefore help to identify disorder-specific empathy deficits.
  • Tomasek, M., Ravignani, A., Boucherie, P. H., Van Meyel, S., & Dufour, V. (2023). Spontaneous vocal coordination of vocalizations to water noise in rooks (Corvus frugilegus): An exploratory study. Ecology and Evolution, 13(2): e9791. doi:10.1002/ece3.9791.

    Abstract

    The ability to control one's vocal production is a major advantage in acoustic communication. Yet not all species have the same level of control over their vocal output. Several bird species can interrupt their song upon hearing an external stimulus, but there is little evidence of how flexible this behavior is. Most research on corvids focuses on their cognitive abilities, and few studies explore their vocal aptitudes. Recent research shows that crows can be experimentally trained to vocalize in response to a brief visual stimulus. Our study investigated vocal control abilities with a more ecologically embedded approach in rooks. We show that two rooks could spontaneously coordinate their vocalizations with a long-lasting stimulus (the sound of their small bathing pool being filled with a water hose), one of them roughly adjusting its vocalizations (in the second range) as the stimulus began and stopped. This exploratory study adds to the literature showing that corvids, a group of species known for their cognitive prowess, are indeed able to display good vocal control abilities.
  • Torreira, F., & Grice, M. (2018). Melodic constructions in Spanish: Metrical structure determines the association properties of intonational tones. Journal of the International Phonetic Association, 48(1), 9-32. doi:10.1017/S0025100317000603.

    Abstract

    This paper explores phrase-length-related alternations in the association of tones to positions in metrical structure in two melodic constructions of Spanish. An imitation-and-completion task eliciting (a) the low–falling–rising contour and (b) the circumflex contour on intonation phrases (IPs) of one, two, and three prosodic words revealed that, although the focus structure and pragmatic context is constant across conditions, phrases containing one prosodic word differ in their nuclear (i.e. final) pitch accents and edge tones from phrases containing more than one prosodic word. For contour (a), short intonation phrases (e.g. [Manolo]IP) were produced with a low accent followed by a high edge tone (L* H% in ToBI notation), whereas longer phrases (e.g. [El hermano de la amiga de Manolo]IP ‘Manolo’s friend’s brother’) had a low accent on the first stressed syllable, a rising accent on the last stressed syllable, and a low edge tone (L* L+H* L%). For contour (b), short phrases were produced with a high–rise (L+H* ¡H%), whereas longer phrases were produced with an initial accentual rise followed by an upstepped rise–fall (L+H* ¡H* L%). These findings imply that the common practice of describing the structure of intonation contours as consisting of a constant nuclear pitch accent and following edge tone is not adequate for modeling Spanish intonation. To capture the observed melodic alternations, we argue for a clearer separation between tones and metrical structure, whereby intonational tones do not necessarily have an intrinsic culminative or delimitative function (i.e. as pitch accents or as edge tones). Instead, this function results from melody-specific principles of tonal–metrical association.
  • Tourtouri, E. N., Delogu, F., & Crocker, M. W. (2018). Specificity and entropy reduction in situated referential processing. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017) (pp. 3356-3361). Austin: Cognitive Science Society.

    Abstract

    In situated communication, reference to an entity in the shared visual context can be established using either an expression that conveys precise (minimally specified) or redundant (over-specified) information. There is, however, a long-lasting debate in psycholinguistics concerning whether the latter hinders referential processing. We present evidence from an eyetracking experiment recording fixations as well as the Index of Cognitive Activity – a novel measure of cognitive workload – supporting the view that over-specifications facilitate processing. We further present original evidence that, above and beyond the effect of specificity, referring expressions that uniformly reduce referential entropy also benefit processing.
  • Tribushinina, E., Mak, M., Dubinkina, E., & Mak, W. M. (2018). Adjective production by Russian-speaking children with developmental language disorder and Dutch–Russian simultaneous bilinguals: Disentangling the profiles. Applied Psycholinguistics, 39(5), 1033-1064. doi:10.1017/S0142716418000115.

    Abstract

    Bilingual children with reduced exposure to one or both languages may have language profiles that are apparently similar to those of children with developmental language disorder (DLD). Children with DLD receive enough input, but have difficulty using this input for acquisition due to processing deficits. The present investigation aims to determine aspects of adjective production that are differentially affected by reduced input (in bilingualism) and reduced intake (in DLD). Adjectives were elicited from Dutch–Russian simultaneous bilinguals with limited exposure to Russian and Russian-speaking monolinguals with and without DLD. An antonym elicitation task was used to assess the size of adjective vocabularies, and a degree task was employed to compare the preferences of the three groups in the use of morphological, lexical, and syntactic degree markers. The results revealed that adjective–noun agreement is affected to the same extent by both reduced input and reduced intake. The size of adjective lexicons is also negatively affected by both, but more so by reduced exposure. However, production of morphological degree markers and learning of semantic paradigms are areas of relative strength in which bilinguals outperform monolingual children with DLD. We suggest that reduced input might be counterbalanced by linguistic and cognitive advantages of bilingualism.
  • Trilsbeek, P. (2004). Report from DoBeS training week. Language Archive Newsletter, 1(3), 12-12.
  • Trilsbeek, P. (2004). DoBeS Training Course. Language Archive Newsletter, 1(2), 6-6.
  • Tromp, J. (2018). Indirect request comprehension in different contexts. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Tromp, J., Peeters, D., Meyer, A. S., & Hagoort, P. (2018). The combined use of Virtual Reality and EEG to study language processing in naturalistic environments. Behavior Research Methods, 50(2), 862-869. doi:10.3758/s13428-017-0911-9.

    Abstract

    When we comprehend language, we often do this in rich settings in which we can use many cues to understand what someone is saying. However, it has traditionally been difficult to design experiments with rich three-dimensional contexts that resemble our everyday environments, while maintaining control over the linguistic and non-linguistic information that is available. Here we test the validity of combining electroencephalography (EEG) and Virtual Reality (VR) to overcome this problem. We recorded electrophysiological brain activity during language processing in a well-controlled three-dimensional virtual audiovisual environment. Participants were immersed in a virtual restaurant, while wearing EEG equipment. In the restaurant participants encountered virtual restaurant guests. Each guest was seated at a separate table with an object on it (e.g. a plate with salmon). The restaurant guest would then produce a sentence (e.g. “I just ordered this salmon.”). The noun in the spoken sentence could either match (“salmon”) or mismatch (“pasta”) with the object on the table, creating a situation in which the auditory information was either appropriate or inappropriate in the visual context. We observed a reliable N400 effect as a consequence of the mismatch. This finding validates the combined use of VR and EEG as a tool to study the neurophysiological mechanisms of everyday language comprehension in rich, ecologically valid settings.
  • Trompenaars, T. (2018). Empathy for the inanimate. Linguistics in the Netherlands, 35, 125-138. doi:10.1075/avt.00009.tro.

    Abstract

    Narrative fiction may invite us to share the perspective of characters which are very much unlike ourselves. Inanimate objects featuring as protagonists or narrators are an extreme example of this. The way readers experience these characters was examined by means of a narrative immersion study. Participants (N = 200) judged narratives containing animate or inanimate characters in predominantly Agent or Experiencer roles. Narratives with inanimate characters were judged to be less emotionally engaging. This effect was influenced by the dominant thematic role associated with the character: inanimate Agents led to more defamiliarization compared to their animate counterparts than inanimate Experiencers. I argue for an integrated account of thematic roles and animacy in literary experience and linguistics in general.
  • Trompenaars, T., Hogeweg, L., Stoop, W., & De Hoop, H. (2018). The language of an inanimate narrator. Open Linguistics, 4, 707-721. doi:10.1515/opli-2018-0034.

    Abstract

    We show by means of a corpus study that the language used by the inanimate first person narrator in the novel Specht en zoon deviates from what we would expect on the basis of the fact that the narrator is inanimate, but at the same time also differs from the language of a human narrator in the novel De wijde blik on several linguistic dimensions. Whereas the human narrator is associated strongly with action verbs, preferring the Agent role, the inanimate narrator is much more limited to the Experiencer role, predominantly associated with cognition and sensory verbs. Our results show that animacy as a linguistic concept may be refined by taking into account the myriad ways in which an entity’s conceptual animacy may be expressed: we accept the conceptual animacy of the inanimate narrator despite its inability to act on its environment, showing this need not be a requirement for animacy.
  • Trujillo, J. P., Simanova, I., Bekkering, H., & Ozyurek, A. (2018). Communicative intent modulates production and perception of actions and gestures: A Kinect study. Cognition, 180, 38-51. doi:10.1016/j.cognition.2018.04.003.

    Abstract

    Actions may be used to directly act on the world around us, or as a means of communication. Effective communication requires the addressee to recognize the act as being communicative. Humans are sensitive to ostensive communicative cues, such as direct eye gaze (Csibra & Gergely, 2009). However, there may be additional cues present in the action or gesture itself. Here we investigate features that characterize the initiation of a communicative interaction in both production and comprehension.

    We asked 40 participants to perform 31 pairs of object-directed actions and representational gestures in more- or less- communicative contexts. Data were collected using motion capture technology for kinematics and video recording for eye-gaze. With these data, we focused on two issues. First, if and how actions and gestures are systematically modulated when performed in a communicative context. Second, if observers exploit such kinematic information to classify an act as communicative.

    Our study showed that during production the communicative context modulates space–time dimensions of kinematics and elicits an increase in addressee-directed eye-gaze. Naïve participants detected communicative intent in actions and gestures preferentially using eye-gaze information, only utilizing kinematic information when eye-gaze was unavailable.

    Our study highlights the general communicative modulation of action and gesture kinematics during production but also shows that addressees only exploit this modulation to recognize communicative intention in the absence of eye-gaze. We discuss these findings in terms of distinctive but potentially overlapping functions of addressee directed eye-gaze and kinematic modulations within the wider context of human communication and learning.
  • Trujillo, J. P., & Holler, J. (2023). Interactionally embedded gestalt principles of multimodal human communication. Perspectives on Psychological Science, 18(5), 1136-1159. doi:10.1177/17456916221141422.

    Abstract

    Natural human interaction requires us to produce and process many different signals, including speech, hand and head gestures, and facial expressions. These communicative signals, which occur in a variety of temporal relations with each other (e.g., parallel or temporally misaligned), must be rapidly processed as a coherent message by the receiver. In this contribution, we introduce the notion of interactionally embedded, affordance-driven gestalt perception as a framework that can explain how this rapid processing of multimodal signals is achieved as efficiently as it is. We discuss empirical evidence showing how basic principles of gestalt perception can explain some aspects of unimodal phenomena such as verbal language processing and visual scene perception but require additional features to explain multimodal human communication. We propose a framework in which high-level gestalt predictions are continuously updated by incoming sensory input, such as unfolding speech and visual signals. We outline the constituent processes that shape high-level gestalt perception and their role in perceiving relevance and prägnanz. Finally, we provide testable predictions that arise from this multimodal interactionally embedded gestalt-perception framework. This review and framework therefore provide a theoretically motivated account of how we may understand the highly complex, multimodal behaviors inherent in natural social interaction.
  • Trujillo, J. P., Dideriksen, C., Tylén, K., Christiansen, M. H., & Fusaroli, R. (2023). The dynamic interplay of kinetic and linguistic coordination in Danish and Norwegian conversation. Cognitive Science, 47(6): e13298. doi:10.1111/cogs.13298.

    Abstract

    In conversation, individuals work together to achieve communicative goals, complementing and aligning language and body with each other. An important emerging question is whether interlocutors entrain with one another equally across linguistic levels (e.g., lexical, syntactic, and semantic) and modalities (i.e., speech and gesture), or whether there are complementary patterns of behaviors, with some levels or modalities diverging and others converging in coordinated fashions. This study assesses how kinematic and linguistic entrainment interact with one another across levels of measurement, and according to communicative context. We analyzed data from two matched corpora of dyadic interaction between—respectively—Danish and Norwegian native speakers engaged in affiliative conversations and task-oriented conversations. We assessed linguistic entrainment at the lexical, syntactic, and semantic level, and kinetic alignment of the head and hands using video-based motion tracking and dynamic time warping. We tested whether—across the two languages—linguistic alignment correlates with kinetic alignment, and whether these kinetic-linguistic associations are modulated either by the type of conversation or by the language spoken. We found that kinetic entrainment was positively associated with low-level linguistic (i.e., lexical) entrainment, while negatively associated with high-level linguistic (i.e., semantic) entrainment, in a cross-linguistically robust way. Our findings suggest that conversation makes use of a dynamic coordination of similarity and complementarity both between individuals as well as between different communicative modalities, and provides evidence for a multimodal, interpersonal synergy account of interaction.
  • Trujillo, J. P. (2024). Motion-tracking technology for the study of gesture. In A. Cienki (Ed.), The Cambridge Handbook of Gesture Studies. Cambridge: Cambridge University Press.
  • Trujillo, J. P., & Holler, J. (2024). Conversational facial signals combine into compositional meanings that change the interpretation of speaker intentions. Scientific Reports, 14: 2286. doi:10.1038/s41598-024-52589-0.

    Abstract

    Human language is extremely versatile, combining a limited set of signals in an unlimited number of ways. However, it is unknown whether conversational visual signals feed into the composite utterances with which speakers communicate their intentions. We assessed whether different combinations of visual signals lead to different intent interpretations of the same spoken utterance. Participants viewed a virtual avatar uttering spoken questions while producing single visual signals (i.e., head turn, head tilt, eyebrow raise) or combinations of these signals. After each video, participants classified the communicative intention behind the question. We found that composite utterances combining several visual signals conveyed different meaning compared to utterances accompanied by the single visual signals. However, responses to combinations of signals were more similar to the responses to related, rather than unrelated, individual signals, indicating a consistent influence of the individual visual signals on the whole. This study therefore provides first evidence for compositional, non-additive (i.e., Gestalt-like) perception of multimodal language.

    Additional information

    41598_2024_52589_MOESM1_ESM.docx
  • Trujillo, J. P., & Holler, J. (2024). Information distribution patterns in naturalistic dialogue differ across languages. Psychonomic Bulletin & Review, 31, 1723-1734. doi:10.3758/s13423-024-02452-0.

    Abstract

    The natural ecology of language is conversation, with individuals taking turns speaking to communicate in a back-and-forth fashion. Language in this context involves strings of words that a listener must process while simultaneously planning their own next utterance. It would thus be highly advantageous if language users distributed information within an utterance in a way that may facilitate this processing–planning dynamic. While some studies have investigated how information is distributed at the level of single words or clauses, or in written language, little is known about how information is distributed within spoken utterances produced during naturalistic conversation. It also is not known how information distribution patterns of spoken utterances may differ across languages. We used a set of matched corpora (CallHome) containing 898 telephone conversations conducted in six different languages (Arabic, English, German, Japanese, Mandarin, and Spanish), analyzing more than 58,000 utterances, to assess whether there is evidence of distinct patterns of information distribution at the utterance level, and whether these patterns were similar or differed across the languages. We found that English, Spanish, and Mandarin typically show a back-loaded distribution, with higher information (i.e., surprisal) in the last half of utterances compared with the first half, while Arabic, German, and Japanese showed front-loaded distributions, with higher information in the first half compared with the last half. Additional analyses suggest that these patterns may be related to word order and rate of noun and verb usage. We additionally found that back-loaded languages have longer turn transition times (i.e., time between speaker turns).

    Additional information

    Data availability
  • Trupp, M. D., Bignardi, G., Specker, E., Vessel, E. A., & Pelowski, M. (2023). Who benefits from online art viewing, and how: The role of pleasure, meaningfulness, and trait aesthetic responsiveness in computer-based art interventions for well-being. Computers in Human Behavior, 145: 107764. doi:10.1016/j.chb.2023.107764.

    Abstract

    When experienced in person, engagement with art has been associated with positive outcomes in well-being and mental health. However, especially in the last decade, art viewing, cultural engagement, and even ‘trips’ to museums have begun to take place online, via computers, smartphones, tablets, or in virtual reality. Similarly to what has been reported for in-person visits, online art engagements—easily accessible from personal devices—have also been associated with well-being impacts. However, a broader understanding of for whom and how online-delivered art might have well-being impacts is still lacking. In the present study, we used a Monet interactive art exhibition from Google Arts and Culture to deepen our understanding of the role of pleasure, meaning, and individual differences in the responsiveness to art. Beyond replicating the previous group-level effects, we confirmed our pre-registered hypothesis that trait-level inter-individual differences in aesthetic responsiveness predict some of the benefits that online art viewing has on well-being, and further that such inter-individual differences at the trait level were mediated by subjective experiences of pleasure and especially meaningfulness felt during the online art intervention. The role that participants' experiences play as a possible mechanism during art interventions is discussed in light of recent theoretical models.

    Additional information

    supplementary material
  • Udden, J., & Männel, C. (2018). Artificial grammar learning and its neurobiology in relation to language processing and development. In S.-A. Rueschemeyer, & M. G. Gaskell (Eds.), The Oxford Handbook of Psycholinguistics (2nd ed., pp. 755-783). Oxford: Oxford University Press.

    Abstract

    The artificial grammar learning (AGL) paradigm enables systematic investigation of the acquisition of linguistically relevant structures. It is a paradigm of interest for language processing research, interfacing with theoretical linguistics, and for comparative research on language acquisition and evolution. This chapter presents a key for understanding major variants of the paradigm. An unbiased summary of neuroimaging findings of AGL is presented, using meta-analytic methods, pointing to the crucial involvement of the bilateral frontal operculum and regions in the right lateral hemisphere. Against a background of robust posterior temporal cortex involvement in processing complex syntax, the evidence for involvement of the posterior temporal cortex in AGL is reviewed. Infant AGL studies testing for neural substrates are reviewed, covering the acquisition of adjacent and non-adjacent dependencies as well as algebraic rules. The language acquisition data suggest that comparisons of learnability of complex grammars performed with adults may now also be possible with children.
  • Uhrig, P., Payne, E., Pavlova, I., Burenko, I., Dykes, N., Baltazani, M., Burrows, E., Hale, S., Torr, P., & Wilson, A. (2023). Studying time conceptualisation via speech, prosody, and hand gesture: Interweaving manual and computational methods of analysis. In W. Pouw, J. Trujillo, H. R. Bosker, L. Drijvers, M. Hoetjes, J. Holler, S. Kadava, L. Van Maastricht, E. Mamus, & A. Ozyurek (Eds.), Gesture and Speech in Interaction (GeSpIn) Conference. doi:10.17617/2.3527220.

    Abstract

    This paper presents a new interdisciplinary methodology for the analysis of future conceptualisations in big messy media data. More specifically, it focuses on the depictions of post-Covid futures by RT during the pandemic, i.e. on data which are of interest not just from the perspective of academic research but also of policy engagement. The methodology has been developed to support the scaling up of fine-grained data-driven analysis of discourse utterances larger than individual lexical units which are centred around ‘will’ + the infinitive. It relies on the true integration of manual analytical and computational methods and tools in researching three modalities – textual, prosodic, and gestural. The paper describes the process of building a computational infrastructure for the collection and processing of video data, which aims to empower the manual analysis. It also shows how manual analysis can motivate the development of computational tools. The paper presents individual computational tools to demonstrate how the combination of human and machine approaches to analysis can reveal new manifestations of cohesion between gesture and prosody. To illustrate the latter, the paper shows how the boundaries of prosodic units can help determine the boundaries of gestural units for future conceptualisations.
  • Ullman, M. T., Bulut, T., & Walenski, M. (2024). Hijacking limitations of working memory load to test for composition in language. Cognition, 251: 105875. doi:10.1016/j.cognition.2024.105875.

    Abstract

    Although language depends on storage and composition, just what is stored or (de)composed remains unclear. We leveraged working memory load limitations to test for composition, hypothesizing that decomposed forms should particularly tax working memory. We focused on a well-studied paradigm, English inflectional morphology. We predicted that (compositional) regulars should be harder to maintain in working memory than (non-compositional) irregulars, using a 3-back production task. Frequency, phonology, orthography, and other potentially confounding factors were controlled for. Compared to irregulars, regulars and their accompanying −s/−ing-affixed filler items yielded more errors. Underscoring the decomposition of only regulars, regulars yielded more bare-stem (e.g., walk) and stem affixation errors (walks/walking) than irregulars, whereas irregulars yielded more past-tense-form affixation errors (broughts/tolded). In line with previous evidence that regulars can be stored under certain conditions, the regular-irregular difference held specifically for phonologically consistent (not inconsistent) regulars, in particular for both low and high frequency consistent regulars in males, but only for low frequency consistent regulars in females. Sensitivity analyses suggested the findings were robust. The study further elucidates the computation of inflected forms, and introduces a simple diagnostic for linguistic composition.

    Additional information

    Data availability
  • Uluşahin, O., Bosker, H. R., McQueen, J. M., & Meyer, A. S. (2023). No evidence for convergence to sub-phonemic F2 shifts in shadowing. In R. Skarnitzl, & J. Volín (Eds.), Proceedings of the 20th International Congress of the Phonetic Sciences (ICPhS 2023) (pp. 96-100). Prague: Guarant International.

    Abstract

    Over the course of a conversation, interlocutors sound more and more like each other in a process called convergence. However, the automaticity and grain size of convergence are not well established. This study therefore examined whether female native Dutch speakers converge to large yet sub-phonemic shifts in the F2 of the vowel /e/. Participants first performed a short reading task to establish baseline F2s for the vowel /e/, then shadowed 120 target words (alongside 360 fillers) which contained one instance of a manipulated vowel /e/ where the F2 had been shifted down to that of the vowel /ø/. Consistent exposure to large (sub-phonemic) downward shifts in F2 did not result in convergence. The results raise issues for theories which view convergence as a product of automatic integration between perception and production.
  • Uluşahin, O., Bosker, H. R., McQueen, J. M., & Meyer, A. S. (2024). Knowledge of a talker’s f0 affects subsequent perception of voiceless fricatives. In Y. Chen, A. Chen, & A. Arvaniti (Eds.), Proceedings of Speech Prosody 2024 (pp. 432-436).

    Abstract

    The human brain deals with the infinite variability of speech through multiple mechanisms. Some of them rely solely on information in the speech input (i.e., signal-driven) whereas some rely on linguistic or real-world knowledge (i.e., knowledge-driven). Many signal-driven perceptual processes rely on the enhancement of acoustic differences between incoming speech sounds, producing contrastive adjustments. For instance, when an ambiguous voiceless fricative is preceded by a high fundamental frequency (f0) sentence, the fricative is perceived as having a lower spectral center of gravity (CoG). However, it is not clear whether knowledge of a talker’s typical f0 can lead to similar contrastive effects. This study investigated a possible talker f0 effect on fricative CoG perception. In the exposure phase, two groups of participants (N=16 each) heard the same talker at high or low f0 for 20 minutes. Later, in the test phase, participants rated fixed-f0 /?ɔk/ tokens as being /sɔk/ (i.e., high CoG) or /ʃɔk/ (i.e., low CoG), where /?/ represents a fricative from a 5-step /s/-/ʃ/ continuum. Surprisingly, the data revealed the opposite of our contrastive hypothesis, whereby hearing high f0 instead biased perception towards high CoG. Thus, we demonstrated that talker f0 information affects fricative CoG perception.
  • Ünal, E., & Papafragou, A. (2018). Evidentials, information sources and cognition. In A. Y. Aikhenvald (Ed.), The Oxford Handbook of Evidentiality (pp. 175-184). Oxford University Press.
  • Ünal, E., & Papafragou, A. (2018). The relation between language and mental state reasoning. In J. Proust, & M. Fortier (Eds.), Metacognitive diversity: An interdisciplinary approach (pp. 153-169). Oxford: Oxford University Press.
  • Ünal, E., Mamus, E., & Özyürek, A. (2023). Multimodal encoding of motion events in speech, gesture, and cognition. Language and Cognition. Advance online publication. doi:10.1017/langcog.2023.61.

    Abstract

    How people communicate about motion events and how this is shaped by language typology are mostly studied with a focus on linguistic encoding in speech. Yet, human communication typically involves an interactional exchange of multimodal signals, such as hand gestures that have different affordances for representing event components. Here, we review recent empirical evidence on multimodal encoding of motion in speech and gesture to gain a deeper understanding of whether and how language typology shapes linguistic expressions in different modalities, and how this changes across different sensory modalities of input and interacts with other aspects of cognition. Empirical evidence strongly suggests that Talmy’s typology of event integration predicts multimodal event descriptions in speech and gesture and visual attention to event components prior to producing these descriptions. Furthermore, variability within the event itself, such as type and modality of stimuli, may override the influence of language typology, especially for expression of manner.
  • Ung, D. C., Iacono, G., Méziane, H., Blanchard, E., Papon, M.-A., Selten, M., van Rhijn, J.-R., Montjean, R., Rucci, J., Martin, S., Fleet, A., Birling, M.-C., Marouillat, S., Roepman, R., Selloum, M., Lux, A., Thépault, R.-A., Hamel, P., Mittal, K., Vincent, J. B., Dorseuil, O., Stunnenberg, H. G., Billuart, P., Nadif Kasri, N., Hérault, Y., & Laumonnier, F. (2018). Ptchd1 deficiency induces excitatory synaptic and cognitive dysfunctions in mouse. Molecular Psychiatry, 23, 1356-1367. doi:10.1038/mp.2017.39.

    Abstract

    Synapse development and neuronal activity represent fundamental processes for the establishment of cognitive function. Structural organization as well as signalling pathways from receptor stimulation to gene expression regulation are mediated by synaptic activity and misregulated in neurodevelopmental disorders such as autism spectrum disorder (ASD) and intellectual disability (ID). Deleterious mutations in the PTCHD1 (Patched domain containing 1) gene have been described in male patients with X-linked ID and/or ASD. The structure of PTCHD1 protein is similar to the Patched (PTCH1) receptor; however, the cellular mechanisms and pathways associated with PTCHD1 in the developing brain are poorly determined. Here we show that PTCHD1 displays a C-terminal PDZ-binding motif that binds to the postsynaptic proteins PSD95 and SAP102. We also report that PTCHD1 is unable to rescue the canonical sonic hedgehog (SHH) pathway in cells depleted of PTCH1, suggesting that both proteins are involved in distinct cellular signalling pathways. We find that Ptchd1 deficiency in male mice (Ptchd1−/y) induces global changes in synaptic gene expression, affects the expression of the immediate-early expression genes Egr1 and Npas4 and finally impairs excitatory synaptic structure and neuronal excitatory activity in the hippocampus, leading to cognitive dysfunction, motor disabilities and hyperactivity. Thus, our results support the conclusion that PTCHD1 deficiency induces a neurodevelopmental disorder causing excitatory synaptic dysfunction.

    Additional information

    mp201739x1.pdf
  • Uzbas, F., & O’Neill, A. (2023). Spatial Centrosome Proteomic Profiling of Human iPSC-derived Neural Cells. BIO-PROTOCOL, 13(17): e4812. doi:10.21769/BioProtoc.4812.

    Abstract

    The centrosome governs many pan-cellular processes including cell division, migration, and cilium formation. However, very little is known about its cell type-specific protein composition and the sub-organellar domains where these protein interactions take place. Here, we outline a protocol for the spatial interrogation of the centrosome proteome in human cells, such as those differentiated from induced pluripotent stem cells (iPSCs), through co-immunoprecipitation of protein complexes around selected baits that are known to reside at different structural parts of the centrosome, followed by mass spectrometry. The protocol describes expansion and differentiation of human iPSCs to dorsal forebrain neural progenitors and cortical projection neurons, harvesting and lysis of cells for protein isolation, co-immunoprecipitation with antibodies against selected bait proteins, preparation for mass spectrometry, processing the mass spectrometry output files using MaxQuant software, and statistical analysis using Perseus software to identify the enriched proteins by each bait. Given the large number of cells needed for the isolation of centrosome proteins, this protocol can be scaled up or down by modifying the number of bait proteins and can also be carried out in batches. It can potentially be adapted for other cell types, organelles, and species as well.
  • Vagliano, I., Galke, L., Mai, F., & Scherp, A. (2018). Using adversarial autoencoders for multi-modal automatic playlist continuation. In C.-W. Chen, P. Lamere, M. Schedl, & H. Zamani (Eds.), RecSys Challenge '18: Proceedings of the ACM Recommender Systems Challenge 2018 (pp. 5.1-5.6). New York: ACM. doi:10.1145/3267471.3267476.

    Abstract

    The task of automatic playlist continuation is generating a list of recommended tracks that can be added to an existing playlist. By suggesting appropriate tracks, i.e., songs to add to a playlist, a recommender system can increase the user engagement by making playlist creation easier, as well as extending listening beyond the end of the current playlist. The ACM Recommender Systems Challenge 2018 focuses on this task. Spotify released a dataset of playlists, which includes a large number of playlists and associated track listings. Given a set of playlists from which a number of tracks have been withheld, the goal is to predict the missing tracks in those playlists. We participated in the challenge as the team Unconscious Bias and, in this paper, we present our approach. We extend adversarial autoencoders to the problem of automatic playlist continuation. We show how multiple input modalities, such as the playlist titles as well as track titles, artists and albums, can be incorporated in the playlist continuation task.
  • Valentin, B., Verga, L., Benoit, C.-E., Kotz, S. A., & Dalla Bella, S. (2018). Test-retest reliability of the battery for the assessment of auditory sensorimotor and timing abilities (BAASTA). Annals of Physical and Rehabilitation Medicine, 61(6), 395-400. doi:10.1016/j.rehab.2018.04.001.

    Abstract

    Perceptual and sensorimotor timing skills can be thoroughly assessed with the Battery for the Assessment of Auditory Sensorimotor and Timing Abilities (BAASTA). The battery has been used for testing rhythmic skills in healthy adults and patient populations (e.g., with Parkinson disease), showing sensitivity to timing and rhythm deficits. Here we assessed the test-retest reliability of the BAASTA in 20 healthy adults. Participants were tested twice with the BAASTA, implemented on a tablet interface, with a 2-week interval. They completed 4 perceptual tasks, namely, duration discrimination, anisochrony detection with tones and music, and the Beat Alignment Test (BAT). Moreover, they completed motor tasks via finger tapping, including unpaced and paced tapping with tones and music, synchronization-continuation, and adaptive tapping to a sequence with a tempo change. Despite high variability among individuals, the results showed good test-retest reliability in most tasks. A slight but significant improvement from test to retest was found in tapping with music, which may reflect a learning effect. In general, the BAASTA was found to be a reliable tool for evaluating timing and rhythm skills.
  • Van Alphen, P. M. (2004). Perceptual relevance of prevoicing in Dutch. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.58551.

    Abstract

    In this dissertation the perceptual relevance of prevoicing in Dutch was investigated. Prevoicing is the presence of vocal fold vibration during the closure of initial voiced plosives (negative voice onset time). The presence or absence of prevoicing is generally used to describe the difference between voiced and voiceless Dutch plosives. The first experiment described in this dissertation showed that prevoicing is frequently absent in Dutch and that several factors affect the production of prevoicing. A detailed acoustic analysis of the voicing distinction identified several acoustic correlates of voicing. Prevoicing appeared to be by far the best predictor. Perceptual classification data revealed that prevoicing was indeed the strongest cue that listeners use when classifying plosives as voiced or voiceless. In the cases where prevoicing was absent, other acoustic cues influenced classification, such that some of these tokens were still perceived as being voiced. In the second part of this dissertation the influence of prevoicing variation on spoken-word recognition was examined. In several cross-modal priming experiments two types of prevoicing variation were contrasted: a difference between the presence and absence of prevoicing (6 versus 0 periods of prevoicing) and a difference in the amount of prevoicing (12 versus 6 periods). All these experiments indicated that primes with 12 and 6 periods of prevoicing had the same effect on lexical decisions to the visual targets. The primes without prevoicing had a different effect, but only when their voiceless counterparts were real words. Phonetic detail appears to influence lexical access only when it is useful: In Dutch, the presence versus absence of prevoicing is informative, while the amount of prevoicing is not.

    Additional information

    full text via Radboud Repository
  • Van den Brink, D., & Hagoort, P. (2004). The influence of semantic and syntactic context constraints on lexical selection and integration in spoken-word comprehension as revealed by ERPs. Journal of Cognitive Neuroscience, 16(6), 1068-1084. doi:10.1162/0898929041502670.

    Abstract

    An event-related brain potential experiment was carried out to investigate the influence of semantic and syntactic context constraints on lexical selection and integration in spoken-word comprehension. Subjects were presented with constraining spoken sentences that contained a critical word that was either (a) congruent, (b) semantically and syntactically incongruent, but beginning with the same initial phonemes as the congruent critical word, or (c) semantically and syntactically incongruent, beginning with phonemes that differed from the congruent critical word. Relative to the congruent condition, an N200 effect reflecting difficulty in the lexical selection process was obtained in the semantically and syntactically incongruent condition where word onset differed from that of the congruent critical word. Both incongruent conditions elicited a large N400 followed by a left anterior negativity (LAN) time-locked to the moment of word category violation and a P600 effect. These results would best fit within a cascaded model of spoken-word processing, proclaiming an optimal use of contextual information during spoken-word identification by allowing for semantic and syntactic processing to take place in parallel after bottom-up activation of a set of candidates, and lexical integration to proceed with a limited number of candidates that still match the acoustic input.
  • van der Burght, C. L., Numssen, O., Schlaak, B., Goucha, T., & Hartwigsen, G. (2023). Differential contributions of inferior frontal gyrus subregions to sentence processing guided by intonation. Human Brain Mapping, 44(2), 585-598. doi:10.1002/hbm.26086.

    Abstract

    Auditory sentence comprehension involves processing content (semantics), grammar (syntax), and intonation (prosody). The left inferior frontal gyrus (IFG) is involved in sentence comprehension guided by these different cues, with neuroimaging studies preferentially locating syntactic and semantic processing in separate IFG subregions. However, this regional specialisation and its functional relevance has yet to be confirmed. This study probed the role of the posterior IFG (pIFG) for syntactic processing and the anterior IFG (aIFG) for semantic processing with repetitive transcranial magnetic stimulation (rTMS) in a task that required the interpretation of the sentence’s prosodic realisation. Healthy participants performed a sentence completion task with syntactic and semantic decisions, while receiving 10 Hz rTMS over either left aIFG, pIFG, or vertex (control). Initial behavioural analyses showed an inhibitory effect on accuracy without task-specificity. However, electrical field simulations revealed differential effects for both subregions. In the aIFG, stronger stimulation led to slower semantic processing, with no effect of pIFG stimulation. In contrast, we found a facilitatory effect on syntactic processing in both aIFG and pIFG, where higher stimulation strength was related to faster responses. Our results provide first evidence for the functional relevance of left aIFG in semantic processing guided by intonation. The stimulation effect on syntactic responses emphasises the importance of the IFG for syntax processing, without supporting the hypothesis of a pIFG-specific involvement. Together, the results support the notion of functionally specialised IFG subregions for diverse but fundamental cues for language processing.

    Additional information

    supplementary information
  • Van Hoey, T., Thompson, A. L., Do, Y., & Dingemanse, M. (2023). Iconicity in ideophones: Guessing, memorizing, and reassessing. Cognitive Science, 47(4): e13268. doi:10.1111/cogs.13268.

    Abstract

    Iconicity, or the resemblance between form and meaning, is often ascribed to a special status and contrasted with default assumptions of arbitrariness in spoken language. But does iconicity in spoken language have a special status when it comes to learnability? A simple way to gauge learnability is to see how well something is retrieved from memory. We can further contrast this with guessability, to see (1) whether the ease of guessing the meanings of ideophones outperforms the rate at which they are remembered; and (2) how willing participants are to reassess what they were taught in a prior task—a novel contribution of this study. We replicate prior guessing and memory tasks using ideophones and adjectives from Japanese, Korean, and Igbo. Our results show that although native Cantonese speakers guessed ideophone meanings above chance level, they memorized both ideophones and adjectives with comparable accuracy. However, response time data show that participants took significantly longer to respond correctly to adjective–meaning pairs—indicating a discrepancy in cognitive effort that favored the recognition of ideophones. In a follow-up reassessment task, participants who were taught foil translations were more likely to choose the true translations for ideophones rather than adjectives. By comparing the findings from our guessing and memory tasks, we conclude that iconicity is more accessible if a task requires participants to actively seek out sound-meaning associations.
  • Van Wonderen, E., & Nieuwland, M. S. (2023). Lexical prediction does not rationally adapt to prediction error: ERP evidence from pre-nominal articles. Journal of Memory and Language, 132: 104435. doi:10.1016/j.jml.2023.104435.

    Abstract

    People sometimes predict upcoming words during language comprehension, but debate remains on when and to what extent such predictions indeed occur. The rational adaptation hypothesis holds that predictions develop with expected utility: people predict more strongly when predictions are frequently confirmed (low prediction error) rather than disconfirmed. However, supporting evidence is mixed thus far and has only involved measuring responses to supposedly predicted nouns, not to preceding articles that may also be predicted. The current, large-sample (N = 200) ERP study on written discourse comprehension in Dutch therefore employs the well-known ‘pre-nominal prediction effect’: enhanced N400-like ERPs for articles that are unexpected given a likely upcoming noun’s gender (i.e., the neuter gender article ‘het’ when people expect the common gender noun phrase ‘de krant’, the newspaper) compared to expected articles. We investigated whether the pre-nominal prediction effect is larger when most of the presented stories contain predictable article-noun combinations (75% predictable, 25% unpredictable) compared to when most stories contain unpredictable combinations (25% predictable, 75% unpredictable). Our results show the pre-nominal prediction effect in both contexts, with little evidence to suggest that this effect depended on the percentage of predictable combinations. Moreover, the little evidence suggesting such a dependence was primarily observed for unexpected, neuter-gender articles (‘het’), which is inconsistent with the rational adaptation hypothesis. In line with recent demonstrations (Nieuwland, 2021a,b), our results suggest that linguistic prediction is less ‘rational’ or Bayes optimal than is often suggested.
  • Van Turennout, M., Hagoort, P., & Brown, C. M. (1998). Brain activity during speaking: From syntax to phonology in 40 milliseconds. Science, 280(5363), 572-574. doi:10.1126/science.280.5363.572.

    Abstract

    In normal conversation, speakers translate thoughts into words at high speed. To enable this speed, the retrieval of distinct types of linguistic knowledge has to be orchestrated with millisecond precision. The nature of this orchestration is still largely unknown. This report presents dynamic measures of the real-time activation of two basic types of linguistic knowledge, syntax and phonology. Electrophysiological data demonstrate that during noun-phrase production speakers retrieve the syntactic gender of a noun before its abstract phonological properties. This two-step process operates at high speed: the data show that phonological information is already available 40 milliseconds after syntactic properties have been retrieved.
  • Van den Broek, G., Takashima, A., Segers, E., & Verhoeven, L. (2018). Contextual Richness and Word Learning: Context Enhances Comprehension but Retrieval Enhances Retention. Language Learning, 68(2), 546-585. doi:10.1111/lang.12285.

    Abstract

    Learning new vocabulary from context typically requires multiple encounters during which word meaning can be retrieved from memory or inferred from context. We compared the effect of memory retrieval and context inferences on short‐ and long‐term retention in three experiments. Participants studied novel words and then practiced the words either in an uninformative context that required the retrieval of word meaning from memory (“I need the funguo”) or in an informative context from which word meaning could be inferred (“I want to unlock the door: I need the funguo”). The informative context facilitated word comprehension during practice. However, later recall of word form and meaning and word recognition in a new context were better after successful retrieval practice and retrieval practice with feedback than after context‐inference practice. These findings suggest benefits of retrieval during contextualized vocabulary learning whereby the uninformative context enhanced word retention by triggering memory retrieval.
  • Van Alphen, P. M., De Bree, E., Gerrits, E., De Jong, J., Wilsenach, C., & Wijnen, F. (2004). Early language development in children with a genetic risk of dyslexia. Dyslexia, 10, 265-288. doi:10.1002/dys.272.

    Abstract

    We report on a prospective longitudinal research programme exploring the connection between language acquisition deficits and dyslexia. The language development profile of children at-risk for dyslexia is compared to that of age-matched controls as well as of children who have been diagnosed with specific language impairment (SLI). The experiments described concern the perception and production of grammatical morphology, categorical perception of speech sounds, phonological processing (non-word repetition), mispronunciation detection, and rhyme detection. The results of each of these indicate that the at-risk children as a group underperform in comparison to the controls, and that, in most cases, they approach the SLI group. It can be concluded that dyslexia most likely has precursors in language development, also in domains other than those traditionally considered conditional for the acquisition of literacy skills. The dyslexia-SLI connection awaits further, particularly qualitative, analyses.
  • Van de Geer, J. P., & Levelt, W. J. M. (1963). Detection of visual patterns disturbed by noise: An exploratory study. Quarterly Journal of Experimental Psychology, 15, 192-204. doi:10.1080/17470216308416324.

    Abstract

    An introductory study of the perception of stochastically specified events is reported. The initial problem was to determine whether the perceiver can split visual input data of this kind into random and determined components. The inability of subjects to do so with the stimulus material used (a filmlike sequence of dot patterns), led to the more general question of how subjects code this kind of visual material. To meet the difficulty of defining the subjects' responses, two experiments were designed. In both, patterns were presented as a rapid sequence of dots on a screen. The patterns were more or less disturbed by “noise,” i.e. the dots did not appear exactly at their proper places. In the first experiment the response was a rating on a semantic scale, in the second an identification from among a set of alternative patterns. The results of these experiments give some insight into the coding systems adopted by the subjects. First, noise appears to be detrimental to pattern recognition, especially to patterns with little spread. Second, this shows connections with the factors obtained from analysis of the semantic ratings, e.g. easily disturbed patterns show a large drop in the semantic regularity factor, when only a little noise is added.
  • Van den Brink, D. (2004). Contextual influences on spoken-word processing: An electrophysiological approach. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.57773.

    Abstract

    The aim of this thesis was to gain more insight into spoken-word comprehension and the influence of sentence-contextual information on these processes using ERPs. By manipulating critical words in semantically constraining sentences, in semantic or syntactic sense, and examining the consequences in the electrophysiological signal (e.g., elicitation of ERP components such as the N400, N200, LAN, and P600), three questions were tackled: (I) At which moment is context information used in the spoken-word recognition process? (II) What is the temporal relationship between lexical selection and integration of the meaning of a spoken word into a higher-order representation of the preceding sentence? (III) What is the time course of the processing of different sources of linguistic information obtained from the context, such as phonological, semantic and syntactic information, during spoken-word comprehension? From the results of this thesis it can be concluded that sentential context already exerts an influence on spoken-word processing at approximately 200 ms after word onset. In addition, semantic integration is attempted before a spoken word can be selected on the basis of the acoustic signal, i.e. before lexical selection is completed. Finally, knowledge of the syntactic category of a word is not needed before semantic integration can take place. These findings, therefore, were interpreted as providing evidence for an account of cascaded spoken-word processing that proclaims an optimal use of contextual information during spoken-word identification. Optimal use is accomplished by allowing for semantic and syntactic processing to take place in parallel after bottom-up activation of a set of candidates, and lexical integration to proceed with a limited number of candidates that still match the acoustic input.

    Additional information

    full text via Radboud Repository
  • Van Alphen, P. M., & Smits, R. (2004). Acoustical and perceptual analysis of the voicing distinction in Dutch initial plosives: The role of prevoicing. Journal of Phonetics, 32(4), 455-491. doi:10.1016/j.wocn.2004.05.001.

    Abstract

    Three experiments investigated the voicing distinction in Dutch initial labial and alveolar plosives. The difference between voiced and voiceless Dutch plosives is generally described in terms of the presence or absence of prevoicing (negative voice onset time). Experiment 1 showed, however, that prevoicing was absent in 25% of voiced plosive productions across 10 speakers. The production of prevoicing was influenced by place of articulation of the plosive, by whether the plosive occurred in a consonant cluster or not, and by speaker sex. Experiment 2 was a detailed acoustic analysis of the voicing distinction, which identified several acoustic correlates of voicing. Prevoicing appeared to be by far the best predictor. Perceptual classification data revealed that prevoicing was indeed the strongest cue that listeners use when classifying plosives as voiced or voiceless. In the cases where prevoicing was absent, other acoustic cues influenced classification, such that some of these tokens were still perceived as being voiced. These secondary cues were different for the two places of articulation. We discuss the paradox raised by these findings: although prevoicing is the most reliable cue to the voicing distinction for listeners, it is not reliably produced by speakers.
  • Van Rhijn, J. R., Fisher, S. E., Vernes, S. C., & Nadif Kasri, N. (2018). Foxp2 loss of function increases striatal direct pathway inhibition via increased GABA release. Brain Structure and Function, 223(9), 4211-4226. doi:10.1007/s00429-018-1746-6.

    Abstract

    Heterozygous mutations of the Forkhead-box protein 2 (FOXP2) gene in humans cause childhood apraxia of speech. Loss of Foxp2 in mice is known to affect striatal development and impair motor skills. However, it is unknown if striatal excitatory/inhibitory balance is affected during development and if the imbalance persists into adulthood. We investigated the effect of reduced Foxp2 expression, via a loss-of-function mutation, on striatal medium spiny neurons (MSNs). Our data show that heterozygous loss of Foxp2 decreases excitatory (AMPA receptor-mediated) and increases inhibitory (GABA receptor-mediated) currents in D1 dopamine receptor positive MSNs of juvenile and adult mice. Furthermore, reduced Foxp2 expression increases GAD67 expression, leading to both increased presynaptic content and release of GABA. Finally, pharmacological blockade of inhibitory activity in vivo partially rescues motor skill learning deficits in heterozygous Foxp2 mice. Our results suggest a novel role for Foxp2 in the regulation of striatal direct pathway activity through managing inhibitory drive.

    Additional information

    429_2018_1746_MOESM1_ESM.docx
  • Van Wijk, C., & Kempen, G. (1980). Functiewoorden: Een inventarisatie voor het Nederlands. ITL: Review of Applied Linguistics, 53-68.
  • Van Bergen, G., & Bosker, H. R. (2018). Linguistic expectation management in online discourse processing: An investigation of Dutch inderdaad 'indeed' and eigenlijk 'actually'. Journal of Memory and Language, 103, 191-209. doi:10.1016/j.jml.2018.08.004.

    Abstract

    Interpersonal discourse particles (DPs), such as Dutch inderdaad (≈‘indeed’) and eigenlijk (≈‘actually’) are highly frequent in everyday conversational interaction. Despite extensive theoretical descriptions of their polyfunctionality, little is known about how they are used by language comprehenders. In two visual world eye-tracking experiments involving an online dialogue completion task, we asked to what extent inderdaad, confirming an inferred expectation, and eigenlijk, contrasting with an inferred expectation, influence real-time understanding of dialogues. Answers in the dialogues contained a DP or a control adverb, and a critical discourse referent was replaced by a beep; participants chose the most likely dialogue completion by clicking on one of four referents in a display. Results show that listeners make rapid and fine-grained situation-specific inferences about the use of DPs, modulating their expectations about how the dialogue will unfold. Findings further specify and constrain theories about the conversation-managing function and polyfunctionality of DPs.
  • Van Geenhoven, V. (1998). On the Argument Structure of some Noun Incorporating Verbs in West Greenlandic. In M. Butt, & W. Geuder (Eds.), The Projection of Arguments - Lexical and Compositional Factors (pp. 225-263). Stanford, CA, USA: CSLI Publications.
  • Van Campen, A. D., Kunert, R., Van den Wildenberg, W. P. M., & Ridderinkhof, K. R. (2018). Repetitive transcranial magnetic stimulation over inferior frontal cortex impairs the suppression (but not expression) of action impulses during action conflict. Psychophysiology, 55(3): e13003. doi:10.1111/psyp.13003.

    Abstract

    In the recent literature, the effects of noninvasive neurostimulation on cognitive functioning appear to lack consistency and replicability. We propose that such effects may be concealed unless dedicated, sensitive, and process-specific dependent measures are used. The expression and subsequent suppression of response capture are often studied using conflict tasks. Response-time distribution analyses have been argued to provide specific measures of the susceptibility to make fast impulsive response errors, as well as the proficiency of the selective suppression of these impulses. These measures of response capture and response inhibition are particularly sensitive to experimental manipulations and clinical deficiencies that are typically obfuscated in commonly used overall performance analyses. Recent work using structural and functional imaging techniques links these behavioral outcome measures to the integrity of frontostriatal networks. These studies suggest that the presupplementary motor area (pre-SMA) is linked to the susceptibility to response capture whereas the right inferior frontal cortex (rIFC) is associated with the selective suppression of action impulses. Here, we used repetitive transcranial magnetic stimulation (rTMS) to test the causal involvement of these two cortical areas in response capture and inhibition in the Simon task. Disruption of rIFC function specifically impaired selective suppression of conflicting action tendencies, whereas the anticipated increase of fast impulsive errors after perturbing pre-SMA function was not confirmed. These results provide a proof of principle of the notion that the selection of appropriate dependent measures is perhaps crucial to establish the effects of neurostimulation on specific cognitive functions.
  • Van Valin Jr., R. D. (1998). The acquisition of WH-questions and the mechanisms of language acquisition. In M. Tomasello (Ed.), The new psychology of language: Cognitive and functional approaches to language structure (pp. 221-249). Mahwah, New Jersey: Erlbaum.
  • Van de Geer, J. P., Levelt, W. J. M., & Plomp, R. (1962). The connotation of musical consonance. Acta Psychologica, 20, 308-319.

    Abstract

    As a preliminary to further research on musical consonance, an exploratory investigation was made of the different modes of judgment of musical intervals. This was done by way of a semantic differential: subjects rated 23 intervals on 10 scales. A factor analysis yielded three factors: pitch, evaluation, and fusion. The relation between these factors and some physical characteristics was investigated. The scale consonant-dissonant proved to be purely evaluative (in opposition to Stumpf's theory). This evaluative connotation is not in accordance with the musicological meaning of consonance; suggestions to account for this difference are given.
  • Van Leeuwen, E. J. C., Cohen, E., Collier-Baker, E., Rapold, C. J., Schäfer, M., Schütte, S., & Haun, D. B. M. (2018). The development of human social learning across seven societies. Nature Communications, 9: 2076. doi:10.1038/s41467-018-04468-2.

    Abstract

    Social information use is a pivotal characteristic of the human species. Avoiding the cost of individual exploration, social learning confers substantial fitness benefits under a wide variety of environmental conditions, especially when the process is governed by biases toward relative superiority (e.g., experts, the majority). Here, we examine the development of social information use in children aged 4–14 years (n = 605) across seven societies in a standardised social learning task. We measured two key aspects of social information use: general reliance on social information and majority preference. We show that the extent to which children rely on social information depends on children’s cultural background. The extent of children’s majority preference also varies cross-culturally, but in contrast to social information use, the ontogeny of majority preference follows a U-shaped trajectory across all societies. Our results demonstrate both cultural continuity and diversity in the realm of human social learning.

    Additional information

    VanLeeuwen_etal_2018sup.pdf
  • Van Berkum, J. J. A. (2004). Sentence comprehension in a wider discourse: Can we use ERPs to keep track of things? In M. Carreiras, Jr., & C. Clifton (Eds.), The on-line study of sentence comprehension: eyetracking, ERPs and beyond (pp. 229-270). New York: Psychology Press.
  • Van Donkelaar, M. M. J., Hoogman, M., Pappa, I., Tiemeier, H., Buitelaar, J. K., Franke, B., & Bralten, J. (2018). Pleiotropic Contribution of MECOM and AVPR1A to Aggression and Subcortical Brain Volumes. Frontiers in Behavioral Neuroscience, 12: 61. doi:10.3389/fnbeh.2018.00061.

    Abstract

    Reactive and proactive subtypes of aggression have been recognized to help parse etiological heterogeneity of this complex phenotype. With a heritability of about 50%, genetic factors play a role in the development of aggressive behavior. Imaging studies implicate brain structures related to social behavior in aggression etiology, most notably the amygdala and striatum. This study aimed to gain more insight into the pathways from genetic risk factors for aggression to aggression phenotypes. To this end, we conducted genome-wide gene-based cross-trait meta-analyses of aggression with the volumes of amygdala, nucleus accumbens and caudate nucleus to identify genes influencing both aggression and aggression-related brain volumes. We used data of large-scale genome-wide association studies (GWAS) of: (a) aggressive behavior in children and adolescents (EAGLE, N = 18,988); and (b) Magnetic Resonance Imaging (MRI)-based volume measures of aggression-relevant subcortical brain regions (ENIGMA2, N = 13,171). Second, the identified genes were further investigated in a sample of healthy adults (mean age (SD) = 25.28 (4.62) years; 43% male) who had genome-wide genotyping data and questionnaire data on aggression subtypes available (Brain Imaging Genetics, BIG, N = 501) to study their effect on reactive and proactive subtypes of aggression. Our meta-analysis identified two genes, MECOM and AVPR1A, significantly associated with both aggression risk and nucleus accumbens (MECOM) and amygdala (AVPR1A) brain volume. Subsequent in-depth analysis of these genes in healthy adults (BIG), including sex as an interaction term in the model, revealed no significant subtype-specific gene-wide associations. Using cross-trait meta-analysis of brain measures and psychiatric phenotypes, this study generated new hypotheses about specific links between genes, the brain and behavior. Results indicate that MECOM and AVPR1A may exert an effect on aggression through mechanisms involving nucleus accumbens and amygdala volumes, respectively.
  • Van Leeuwen, E. J. C., Cronin, K. A., & Haun, D. B. M. (2018). Population-specific social dynamics in chimpanzees. Proceedings of the National Academy of Sciences of the United States of America, 115(45), 11393-11400. doi:10.1073/pnas.1722614115.

    Abstract

    Understanding intraspecific variation in sociality is essential for characterizing the flexibility and evolution of social systems, yet its study in nonhuman animals is rare. Here, we investigated whether chimpanzees exhibit population-level differences in sociality that cannot be easily explained by differences in genetics or ecology. We compared social proximity and grooming tendencies across four semiwild populations of chimpanzees living in the same ecological environment over three consecutive years, using both linear mixed models and social network analysis. Results indicated temporally stable, population-level differences in dyadic-level sociality. Moreover, group cohesion measures capturing network characteristics beyond dyadic interactions (clustering, modularity, and social differentiation) showed population-level differences consistent with the dyadic indices. Subsequently, we explored whether the observed intraspecific variation in sociality could be attributed to cultural processes by ruling out alternative sources of variation including the influences of ecology, genetics, and differences in population demographics. We conclude that substantial variation in social behavior exists across neighboring populations of chimpanzees and that this variation is in part shaped by cultural processes.

    Additional information

    pnas.1722614115.sapp.pdf
  • Van de Ven, M., & Ernestus, M. (2018). The role of segmental and durational cues in the processing of reduced words. Language and Speech, 61(3), 358-383. doi:10.1177/0023830917727774.

    Abstract

    In natural conversations, words are generally shorter and they often lack segments. It is unclear to what extent such durational and segmental reductions affect word recognition. The present study investigates to what extent reduction in the initial syllable hinders word comprehension, which types of segments listeners mostly rely on, and whether listeners use word duration as a cue in word recognition. We conducted three experiments in Dutch, in which we adapted the gating paradigm to study the comprehension of spontaneously uttered conversational speech by aligning the gates with the edges of consonant clusters or vowels. Participants heard the context and some segmental and/or durational information from reduced target words with unstressed initial syllables. The initial syllable varied in its degree of reduction, and in half of the stimuli the vowel was not clearly present. Participants gave answers that were too short when provided with only durational information from the target words, which shows that listeners are unaware of the reductions that can occur in spontaneous speech. More importantly, listeners required fewer segments to recognize target words if the vowel in the initial syllable was absent. This result strongly suggests that this vowel hardly plays a role in word comprehension, and that its presence may even delay this process. More important are the consonants and the stressed vowel.
  • Van der Werf, O. J., Schuhmann, T., De Graaf, T., Ten Oever, S., & Sack, A. T. (2023). Investigating the role of task relevance during rhythmic sampling of spatial locations. Scientific Reports, 13: 12707. doi:10.1038/s41598-023-38968-z.

    Abstract

    Recently it has been discovered that visuospatial attention operates rhythmically, rather than being stably employed over time. A low-frequency 7–8 Hz rhythmic mechanism coordinates periodic windows to sample relevant locations and to shift towards other, less relevant locations in a visual scene. Rhythmic sampling theories would predict that when two locations are relevant, the 8 Hz sampling mechanism splits into two, effectively resulting in a 4 Hz sampling frequency at each location. Therefore, it is expected that rhythmic sampling is influenced by the relative importance of locations for the task at hand. To test this, we employed an orienting task with an arrow cue, where participants were asked to respond to a target presented in one visual field. The cue-to-target interval was systematically varied, allowing us to assess whether performance follows a rhythmic pattern across cue-to-target delays. We manipulated a location’s task relevance by altering the validity of the cue, thereby predicting the correct location in 60%, 80% or 100% of trials. Results revealed significant 4 Hz performance fluctuations for cued right visual field targets with low cue validity (60%), suggesting regular sampling of both locations. With high cue validity (80%), we observed a peak at 8 Hz towards non-cued targets, although this was not significant. These results were in line with our hypothesis suggesting a goal-directed balancing of attentional sampling (cued location) and shifting (non-cued location) depending on the relevance of locations in a visual scene. However, considering the hemifield specificity of the effect, together with the absence of the expected effects for cued trials in the high-validity condition, we further discuss the interpretation of the data.

    Additional information

    supplementary information
  • Van Geert, E., Ding, R., & Wagemans, J. (2024). A cross-cultural comparison of aesthetic preferences for neatly organized compositions: Native Chinese- versus Native Dutch-speaking samples. Empirical Studies of the Arts. Advance online publication. doi:10.1177/02762374241245917.

    Abstract

    Do aesthetic preferences for images of neatly organized compositions (e.g., images collected on blogs like Things Organized Neatly©) generalize across cultures? In an earlier study, focusing on stimulus and personal properties related to order and complexity, Western participants indicated their preference for one of two simultaneously presented images (100 pairs). In the current study, we compared the data of the native Dutch-speaking participants from this earlier sample (N = 356) to newly collected data from a native Chinese-speaking sample (N = 220). Overall, aesthetic preferences were quite similar across cultures. When relating preferences for each sample to ratings of order, complexity, soothingness, and fascination collected from a Western, mainly Dutch-speaking sample, the results hint at a cross-culturally consistent preference for images that Western participants rate as more ordered, but a cross-culturally diverse relation between preferences and complexity.
  • van der Burght, C. L., Friederici, A. D., Maran, M., Papitto, G., Pyatigorskaya, E., Schroen, J., Trettenbrein, P., & Zaccarella, E. (2023). Cleaning up the brickyard: How theory and methodology shape experiments in cognitive neuroscience of language. Journal of Cognitive Neuroscience, 35(12), 2067-2088. doi:10.1162/jocn_a_02058.

    Abstract

    The capacity for language is a defining property of our species, yet despite decades of research, evidence on its neural basis is still mixed and a generalized consensus is difficult to achieve. We suggest that this is partly caused by researchers defining “language” in different ways, with focus on a wide range of phenomena, properties, and levels of investigation. Accordingly, there is very little agreement amongst cognitive neuroscientists of language on the operationalization of fundamental concepts to be investigated in neuroscientific experiments. Here, we review chains of derivation in the cognitive neuroscience of language, focusing on how the hypothesis under consideration is defined by a combination of theoretical and methodological assumptions. We first attempt to disentangle the complex relationship between linguistics, psychology, and neuroscience in the field. Next, we focus on how conclusions that can be drawn from any experiment are inherently constrained by auxiliary assumptions, both theoretical and methodological, on which the validity of the conclusions drawn rests. These issues are discussed in the context of classical experimental manipulations as well as study designs that employ novel approaches such as naturalistic stimuli and computational modelling. We conclude by proposing that a highly interdisciplinary field such as the cognitive neuroscience of language requires researchers to form explicit statements concerning the theoretical definitions, methodological choices, and other constraining factors involved in their work.
  • Van der Werff, J., Ravignani, A., & Jadoul, Y. (2024). thebeat: A Python package for working with rhythms and other temporal sequences. Behavior Research Methods, 56, 3725-3736. doi:10.3758/s13428-023-02334-8.

    Abstract

    thebeat is a Python package for working with temporal sequences and rhythms in the behavioral and cognitive sciences, as well as in bioacoustics. It provides functionality for creating experimental stimuli, and for visualizing and analyzing temporal data. Sequences, sounds, and experimental trials can be generated using single lines of code. thebeat contains functions for calculating common rhythmic measures, such as interval ratios, and for producing plots, such as circular histograms. thebeat saves researchers time when creating experiments, and provides the first steps in collecting widely accepted methods for use in timing research. thebeat is an open-source, ongoing, and collaborative project, and can be extended for use in specialized subfields. thebeat integrates easily with the existing Python ecosystem, allowing one to combine our tested code with custom-made scripts. The package was specifically designed to be useful for both skilled and novice programmers. thebeat provides a foundation for working with temporal sequences onto which additional functionality can be built. This combination of specificity and plasticity should facilitate research in multiple research contexts and fields of study.
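    The abstract mentions interval ratios as an example of a common rhythmic measure. As a plain-Python illustration of that measure itself (not of thebeat's own API), the ratio of each inter-onset interval to the sum of itself and its successor can be computed as follows; a ratio of 0.5 marks two equal, isochronous intervals:

    ```python
    def interval_ratios(iois):
        """Compute rhythmic interval ratios r_k = I_k / (I_k + I_{k+1}).

        `iois` is a list of inter-onset intervals (e.g., in milliseconds);
        each ratio compares an interval with the one that follows it, so a
        ratio of 0.5 indicates local isochrony.
        """
        return [iois[k] / (iois[k] + iois[k + 1]) for k in range(len(iois) - 1)]

    # An isochronous sequence yields ratios of exactly 0.5:
    print(interval_ratios([500, 500, 500]))  # → [0.5, 0.5]

    # A long-short alternating rhythm yields ratios of 2/3 and 1/3:
    print(interval_ratios([600, 300, 600]))
    ```

    A helper along these lines makes it easy to summarize a recorded rhythm (for instance, via a histogram of ratios) regardless of overall tempo, since the ratios are invariant under uniform speeding up or slowing down of the sequence.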
  • van der Burght, C. L., & Meyer, A. S. (2024). Interindividual variation in weighting prosodic and semantic cues during sentence comprehension – a partial replication of Van der Burght et al. (2021). In Y. Chen, A. Chen, & A. Arvaniti (Eds.), Proceedings of Speech Prosody 2024 (pp. 792-796). doi:10.21437/SpeechProsody.2024-160.

    Abstract

    Contrastive pitch accents can mark sentence elements occupying parallel roles. In “Mary kissed John, not Peter”, a pitch accent on Mary or John cues the implied syntactic role of Peter. Van der Burght, Friederici, Goucha, and Hartwigsen (2021) showed that listeners can build expectations concerning syntactic and semantic properties of upcoming words, derived from pitch accent information they heard previously. To further explore these expectations, we attempted a partial replication of the original German study in Dutch. In the experimental sentences “Yesterday, the police officer arrested the thief, not the inspector/murderer”, a pitch accent on subject or object cued the subject/object role of the ellipsis clause. Contrasting elements were additionally cued by the thematic role typicality of the nouns. Participants listened to sentences in which the ellipsis clause was omitted and selected the most plausible sentence-final noun (presented visually) via button press. Replicating the original study results, listeners based their sentence-final preference on the pitch accent information available in the sentence. However, as in the original study, individual differences between listeners were found, with some following prosodic information and others relying on a structural bias. The results complement the literature on ellipsis resolution and on interindividual variability in cue weighting.
  • Vanderauwera, J., De Vos, A., Forkel, S. J., Catani, M., Wouters, J., Vandermosten, M., & Ghesquière, P. (2018). Neural organization of ventral white matter tracts parallels the initial steps of reading development: A DTI tractography study. Brain and Language, 183, 32-40. doi:10.1016/j.bandl.2018.05.007.

    Abstract

    Insight in the developmental trajectory of the neuroanatomical reading correlates is important to understand related cognitive processes and disorders. In adults, a dual pathway model has been suggested encompassing a dorsal phonological and a ventral orthographic white matter system. This dichotomy seems not present in pre-readers, and the specific role of ventral white matter in reading remains unclear. Therefore, the present longitudinal study investigated the relation between ventral white matter and cognitive processes underlying reading in children with a broad range of reading skills (n = 61). Ventral pathways of the reading network were manually traced using diffusion tractography: the inferior fronto-occipital fasciculus (IFOF), inferior longitudinal fasciculus (ILF) and uncinate fasciculus (UF). Pathways were examined pre-reading (5–6 years) and after two years of reading acquisition (7–8 years). Dimension reduction for the cognitive measures resulted in one component for pre-reading cognitive measures and a separate phonological and orthographic component for the early reading measures. Regression analyses revealed a relation between the pre-reading cognitive component and bilateral IFOF and left ILF. Interestingly, exclusively the left IFOF was related to the orthographic component, whereas none of the pathways was related to the phonological component. Hence, the left IFOF seems to serve as the lexical reading route, already in the earliest reading stages.
  • Vanlangendonck, F., Takashima, A., Willems, R. M., & Hagoort, P. (2018). Distinguishable memory retrieval networks for collaboratively and non-collaboratively learned information. Neuropsychologia, 111, 123-132. doi:10.1016/j.neuropsychologia.2017.12.008.

    Abstract

    Learning often occurs in communicative and collaborative settings, yet almost all research into the neural basis of memory relies on participants encoding and retrieving information on their own. We investigated whether learning linguistic labels in a collaborative context at least partly relies on cognitively and neurally distinct representations, as compared to learning in an individual context. Healthy human participants learned labels for sets of abstract shapes in three different tasks. They came up with labels with another person in a collaborative communication task (collaborative condition), by themselves (individual condition), or were given pre-determined unrelated labels to learn by themselves (arbitrary condition). Immediately after learning, participants retrieved and produced the labels aloud during a communicative task in the MRI scanner. The fMRI results show that the retrieval of collaboratively generated labels as compared to individually learned labels engages brain regions involved in understanding others (mentalizing or theory of mind) and autobiographical memory, including the medial prefrontal cortex, the right temporoparietal junction and the precuneus. This study is the first to show that collaboration during encoding affects the neural networks involved in retrieval.
  • Vanlangendonck, F., Willems, R. M., & Hagoort, P. (2018). Taking common ground into account: Specifying the role of the mentalizing network in communicative language production. PLoS One, 13(10): e0202943. doi:10.1371/journal.pone.0202943.
  • Varma, S., Daselaar, S. M., Kessels, R. P. C., & Takashima, A. (2018). Promotion and suppression of autobiographical thinking differentially affect episodic memory consolidation. PLoS One, 13(8): e0201780. doi:10.1371/journal.pone.0201780.

    Abstract

    During a post-encoding delay period, the ongoing consolidation of recently acquired memories can suffer interference if the delay period involves encoding of new memories, or sensory stimulation tasks. Interestingly, two recent independent studies suggest that (i) autobiographical thinking also interferes markedly with ongoing consolidation of recently learned wordlist material, while (ii) a 2-Back task might not interfere with ongoing consolidation, possibly due to the suppression of autobiographical thinking. In this study, we directly compare these conditions against a quiet wakeful rest baseline to test whether the promotion (via familiar sound-cues) or suppression (via a 2-Back task) of autobiographical thinking during the post-encoding delay period can affect consolidation of studied wordlists in a negative or a positive way, respectively. Our results successfully replicate previous studies and show a significant interference effect (as compared to the rest condition) when learning is followed by familiar sound-cues that promote autobiographical thinking, whereas no interference effect is observed when learning is followed by the 2-Back task. Results from a post-experimental experience-sampling questionnaire further show significant differences in the degree of autobiographical thinking reported during the three post-encoding periods: highest in the presence of sound-cues and lowest during the 2-Back task. In conclusion, our results suggest that varying levels of autobiographical thought during the post-encoding period may modulate episodic memory consolidation.
  • Verdonschot, R. G., & Kinoshita, S. (2018). Mora or more? The phonological unit of Japanese word production in the Stroop color naming task. Memory & Cognition, 46(3), 410-425. doi:10.3758/s13421-017-0774-4.

    Abstract

    In English, Dutch, and other European languages, it is well established that the fundamental phonological unit in word production is the phoneme; in contrast, recent studies have shown that in Chinese it is the (atonal) syllable and in Japanese the mora. The present study investigated whether this cross-language variation in the size of the unit of word production is due to the type of script used in the language (i.e., alphabetic, morphosyllabic, or moraic). Capitalizing on the multiscriptal nature of Japanese, and using the Stroop color naming task, we show that the overlap in the initial mora between the color name and the written distractor facilitates color naming independent of script type. These results confirm the mora as the phonological unit of word production in Japanese, and establish the Stroop color naming task as a useful task for investigating the fundamental (or "proximate") phonological unit used in speech production.
  • Verdonschot, R. G., Van der Wal, J., Lewis, A. G., Knudsen, B., Von Grebmer zu Wolfsthurn, S., Schiller, N. O., & Hagoort, P. (2024). Information structure in Makhuwa: Electrophysiological evidence for a universal processing account. Proceedings of the National Academy of Sciences of the United States of America, 121(30): e2315438121. doi:10.1073/pnas.2315438121.

    Abstract

    There is evidence from both behavior and brain activity that the way information is structured, through the use of focus, can up-regulate processing of focused constituents, likely to give prominence to the relevant aspects of the input. This is hypothesized to be universal, regardless of the different ways in which languages encode focus. In order to test this universalist hypothesis, we need to go beyond the more familiar linguistic strategies for marking focus, such as by means of intonation or specific syntactic structures (e.g., it-clefts). Therefore, in this study, we examine Makhuwa-Enahara, a Bantu language spoken in northern Mozambique, which uniquely marks focus through verbal conjugation. The participants were presented with sentences that consisted of either a semantically anomalous constituent or a semantically nonanomalous constituent. Moreover, focus on this particular constituent could be either present or absent. We observed a consistent pattern: Focused information generated a more negative N400 response than the same information in nonfocus position. This demonstrates that regardless of how focus is marked, its consequence seems to result in an upregulation of processing of information that is in focus.

    Additional information

    supplementary materials
  • Verga, L., D’Este, G., Cassani, S., Leitner, C., Kotz, S. A., Ferini-Strambi, L., & Galbiati, A. (2023). Sleeping with time in mind? A literature review and a proposal for a screening questionnaire on self-awakening. PLoS One, 18(3): e0283221. doi:10.1371/journal.pone.0283221.

    Abstract

    Some people report being able to spontaneously “time” the end of their sleep. This ability to self-awaken challenges the idea of sleep as a passive cognitive state. Yet, current evidence on this phenomenon is limited, partly because of the varied definitions of self-awakening and experimental approaches used to study it. Here, we provide a review of the literature on self-awakening. Our aim is to i) contextualise the phenomenon, ii) propose an operating definition, and iii) summarise the scientific approaches used so far. The literature review identified 17 studies on self-awakening. Most of them adopted an objective sleep evaluation (76%), targeted nocturnal sleep (76%), and used a single criterion to define the success of awakening (82%); for most studies, this corresponded to awakening occurring in a time window of 30 minutes around the expected awakening time. Out of 715 total participants, 125 (17%) reported to be self-awakeners, with an average age of 23.24 years and a slight predominance of males compared to females. These results reveal self-awakening as a relatively rare phenomenon. To facilitate the study of self-awakening, and based on the results of the literature review, we propose a quick paper-and-pencil screening questionnaire for self-awakeners and provide an initial validation for it. Taken together, the combined results of the literature review and the proposed questionnaire help in characterising a theoretical framework for self-awakenings, while providing a useful tool and empirical suggestions for future experimental studies, which should ideally employ objective measurements.
  • Verga, L., Kotz, S. A., & Ravignani, A. (2023). The evolution of social timing. Physics of Life Reviews, 46, 131-151. doi:10.1016/j.plrev.2023.06.006.

    Abstract

    Sociality and timing are tightly interrelated in human interaction as seen in turn-taking or synchronised dance movements. Sociality and timing also show in communicative acts of other species that might be pleasurable, but also necessary for survival. Sociality and timing often co-occur, but their shared phylogenetic trajectory is unknown: How, when, and why did they become so tightly linked? Answering these questions is complicated by several constraints; these include the use of divergent operational definitions across fields and species, the focus on diverse mechanistic explanations (e.g., physiological, neural, or cognitive), and the frequent adoption of anthropocentric theories and methodologies in comparative research. These limitations hinder the development of an integrative framework on the evolutionary trajectory of social timing and make comparative studies not as fruitful as they could be. Here, we outline a theoretical and empirical framework to test contrasting hypotheses on the evolution of social timing with species-appropriate paradigms and consistent definitions. To facilitate future research, we introduce an initial set of representative species and empirical hypotheses. The proposed framework aims at building and contrasting evolutionary trees of social timing toward and beyond the crucial branch represented by our own lineage. Given the integration of cross-species and quantitative approaches, this research line might lead to an integrated empirical-theoretical paradigm and, as a long-term goal, explain why humans are such socially coordinated animals.
  • Verga, L., Schwartze, M., & Kotz, S. A. (2023). Neurophysiology of language pathologies. In M. Grimaldi, E. Brattico, & Y. Shtyrov (Eds.), Language Electrified: Neuromethods (pp. 753-776). New York, NY: Springer US. doi:10.1007/978-1-0716-3263-5_24.

    Abstract

    Language- and speech-related disorders are among the most frequent consequences of developmental and acquired pathologies. While classical approaches to the study of these disorders typically employed the lesion method to unveil one-to-one correspondence between locations, the extent of the brain damage, and corresponding symptoms, recent advances advocate the use of online methods of investigation. For example, the use of electrophysiology or magnetoencephalography—especially when combined with anatomical measures—allows for in vivo tracking of real-time language and speech events, and thus represents a particularly promising venue for future research targeting rehabilitative interventions. In this chapter, we provide a comprehensive overview of language and speech pathologies arising from cortical and/or subcortical damage, and their corresponding neurophysiological and pathological symptoms. Building upon the reviewed evidence and literature, we aim at providing a description of how the neurophysiology of the language network changes as a result of brain damage. We will conclude by summarizing the evidence presented in this chapter, while suggesting directions for future research.
  • Verheijen, J., Van der Zee, J., Gijselinck, I., Van den Bossche, T., Dillen, L., Heeman, B., Gómez-Tortosa, E., Lladó, A., Sanchez-Valle, R., Graff, C., Pastor, P., Pastor, M. A., Benussi, L., Ghidoni, R., Binetti, G., Clarimon, J., De Mendonça, A., Gelpi, E., Tsolaki, M., Diehl-Schmid, J., Nacmias, B., Almeida, M. R., Borroni, B., Matej, R., Ruiz, A., Engelborghs, S., Vandenberghe, R., De Deyn, P. P., Cruts, M., Van Broeckhoven, C., Sleegers, K., BELNEU Consortium, & EU EOD Consortium (2018). Common and rare TBK1 variants in early-onset Alzheimer disease in a European cohort. Neurobiology of Aging, 62, 245.e1-245.e7. doi:10.1016/j.neurobiolaging.2017.10.012.

    Abstract

    TANK-binding kinase 1 (TBK1) loss-of-function (LoF) mutations are known to cause frontotemporal dementia (FTD) and amyotrophic lateral sclerosis (ALS), often combined with memory deficits early in the disease course. We performed targeted resequencing of TBK1 in 1253 early-onset Alzheimer's disease (EOAD) patients from 8 European countries to investigate whether pathogenic TBK1 mutations are enriched among patients with a clinical diagnosis of EOAD. Variant frequencies were compared against 2117 origin-matched controls. We identified only 1 LoF mutation (p.Thr79del), in a patient clinically diagnosed with Alzheimer's disease and with a positive family history of ALS. We did not observe enrichment of rare variants in EOAD patients compared to controls, nor of rare variants affecting NFκB induction. Of 3 common coding variants, rs7486100 showed evidence of association (OR 1.46 [95% CI 1.13–1.9]; p-value 0.01). Homozygous carriers of the risk allele showed reduced expression of TBK1 (p-value 0.03). Our findings are not indicative of a significant role for TBK1 mutations in EOAD. The association between common variants in TBK1, disease risk, and reduced TBK1 expression warrants follow-up in FTD/ALS cohorts.

    Additional information

    Supplementary data
  • Verheijen, J., & Sleegers, K. (2018). Understanding Alzheimer Disease at the interface between genetics and transcriptomics. Trends in Genetics, 34(6), 434-447. doi:10.1016/j.tig.2018.02.007.

    Abstract

    Over 25 genes are known to affect the risk of developing Alzheimer disease (AD), the most common neurodegenerative dementia. However, mechanistic insights and improved disease management remain limited, due to difficulties in determining the functional consequences of genetic associations. Transcriptomics is increasingly being used to corroborate or enhance interpretation of genetic discoveries. These approaches, which include second and third generation sequencing, single-cell sequencing, and bioinformatics, reveal allele-specific events connecting AD risk genes to expression profiles, and provide converging evidence of pathophysiological pathways underlying AD. Simultaneously, they highlight brain region- and cell-type-specific expression patterns, and alternative splicing events that affect the straightforward relation between a genetic variant and AD, re-emphasizing the need for an integrated approach of genetics and transcriptomics in understanding AD.
  • Verhoef, E., Allegrini, A. G., Jansen, P. R., Lange, K., Wang, C. A., Morgan, A. T., Ahluwalia, T. S., Symeonides, C., EAGLE-Working Group, Eising, E., Franken, M.-C., Hypponen, E., Mansell, T., Olislagers, M., Omerovic, E., Rimfeld, K., Schlag, F., Selzam, S., Shapland, C. Y., Tiemeier, H., Whitehouse, A. J. O., Saffery, R., Bønnelykke, K., Reilly, S., Pennell, C. E., Wake, M., Cecil, C. A., Plomin, R., Fisher, S. E., & St Pourcain, B. (2024). Genome-wide analyses of vocabulary size in infancy and toddlerhood: Associations with Attention-Deficit/Hyperactivity Disorder and cognition-related traits. Biological Psychiatry, 95(1), 859-869. doi:10.1016/j.biopsych.2023.11.025.

    Abstract

    Background

    The number of words children produce (expressive vocabulary) and understand (receptive vocabulary) changes rapidly during early development, partially due to genetic factors. Here, we performed a meta–genome-wide association study of vocabulary acquisition and investigated polygenic overlap with literacy, cognition, developmental phenotypes, and neurodevelopmental conditions, including attention-deficit/hyperactivity disorder (ADHD).

    Methods

    We studied 37,913 parent-reported vocabulary size measures (English, Dutch, Danish) for 17,298 children of European descent. Meta-analyses were performed for early-phase expressive (infancy, 15–18 months), late-phase expressive (toddlerhood, 24–38 months), and late-phase receptive (toddlerhood, 24–38 months) vocabulary. Subsequently, we estimated single nucleotide polymorphism–based heritability (SNP-h2) and genetic correlations (rg) and modeled underlying factor structures with multivariate models.

    Results

    Early-life vocabulary size was modestly heritable (SNP-h2 = 0.08–0.24). Genetic overlap between infant expressive and toddler receptive vocabulary was negligible (rg = 0.07), although each measure was moderately related to toddler expressive vocabulary (rg = 0.69 and rg = 0.67, respectively), suggesting a multifactorial genetic architecture. Both infant and toddler expressive vocabulary were genetically linked to literacy (e.g., spelling: rg = 0.58 and rg = 0.79, respectively), underlining genetic similarity. However, a genetic association of early-life vocabulary with educational attainment and intelligence emerged only during toddlerhood (e.g., receptive vocabulary and intelligence: rg = 0.36). Increased ADHD risk was genetically associated with larger infant expressive vocabulary (rg = 0.23). Multivariate genetic models in the ALSPAC (Avon Longitudinal Study of Parents and Children) cohort confirmed this finding for ADHD symptoms (e.g., at age 13; rg = 0.54) but showed that the association effect reversed for toddler receptive vocabulary (rg = −0.74), highlighting developmental heterogeneity.

    Conclusions

    The genetic architecture of early-life vocabulary changes during development, shaping polygenic association patterns with later-life ADHD, literacy, and cognition-related traits.
  • Verhoeven, L., Baayen, R. H., & Schreuder, R. (2004). Orthographic constraints and frequency effects in complex word identification. Written Language and Literacy, 7(1), 49-59.

    Abstract

    In an experimental study we explored the role of word frequency and orthographic constraints in the reading of Dutch bisyllabic words. Although Dutch orthography is highly regular, several deviations from a one-to-one correspondence occur. In polysyllabic words, the grapheme E may represent three different vowels: /ɛ/, /e/, or /ə/. In the experiment, skilled adult readers were presented with lists of bisyllabic words containing the vowel E in the initial syllable and the same grapheme or another vowel in the second syllable. We expected word frequency to be related to word latency scores. On the basis of general word frequency data, we also expected the interpretation of the initial syllable as a stressed /e/ to be facilitated as compared to the interpretation of an unstressed /ə/. We found a strong negative correlation between word frequency and latency scores. Moreover, for words with E in either syllable we found a preference for a stressed /e/ interpretation, indicating a lexical frequency effect. The results are discussed with reference to a parallel dual-route model of word decoding.
  • Vernes, S. C. (2018). Vocal learning in bats: From genes to behaviour. In C. Cuskley, M. Flaherty, H. Little, L. McCrohon, A. Ravignani, & T. Verhoef (Eds.), Proceedings of the 12th International Conference on the Evolution of Language (EVOLANG XII) (pp. 516-518). Toruń, Poland: NCU Press. doi:10.12775/3991-1.128.
  • Vessel, E. A., Pasqualette, L., Uran, C., Koldehoff, S., Bignardi, G., & Vinck, M. (2023). Self-relevance predicts the aesthetic appeal of real and synthetic artworks generated via neural style transfer. Psychological Science, 34(9), 1007-1023. doi:10.1177/09567976231188107.

    Abstract

    What determines the aesthetic appeal of artworks? Recent work suggests that aesthetic appeal can, to some extent, be predicted from a visual artwork’s image features. Yet a large fraction of variance in aesthetic ratings remains unexplained and may relate to individual preferences. We hypothesized that an artwork’s aesthetic appeal depends strongly on self-relevance. In a first study (N = 33 adults, online replication N = 208), rated aesthetic appeal for real artworks was positively predicted by rated self-relevance. In a second experiment (N = 45 online), we created synthetic, self-relevant artworks using deep neural networks that transferred the style of existing artworks to photographs. Style transfer was applied to self-relevant photographs selected to reflect participant-specific attributes such as autobiographical memories. Self-relevant, synthetic artworks were rated as more aesthetically appealing than matched control images, at a level similar to human-made artworks. Thus, self-relevance is a key determinant of aesthetic appeal, independent of artistic skill and image features.

    Additional information

    supplementary materials
  • Viebahn, M., McQueen, J. M., Ernestus, M., Frauenfelder, U. H., & Bürki, A. (2018). How much does orthography influence the processing of reduced word forms? Evidence from novel-word learning about French schwa deletion. The Quarterly Journal of Experimental Psychology, 71(11), 2378-2394. doi:10.1177/1747021817741859.

    Abstract

    This study examines the influence of orthography on the processing of reduced word forms. For this purpose, we compared the impact of phonological variation with the impact of spelling-sound consistency on the processing of words that may be produced with or without the vowel schwa. Participants learnt novel French words in which the vowel schwa was present or absent in the first syllable. In Experiment 1, the words were consistently produced without schwa or produced in a variable manner (i.e., sometimes produced with and sometimes produced without schwa). In Experiment 2, words were always produced in a consistent manner, but an orthographic exposure phase was included in which words that were produced without schwa were either spelled with or without the letter ⟨e⟩. Results from naming and eye-tracking tasks suggest that both phonological variation and spelling-sound consistency influence the processing of spoken novel words. However, the influence of phonological variation outweighs the effect of spelling-sound consistency. Our findings therefore suggest that the influence of orthography on the processing of reduced word forms is relatively small.
