Publications

Displaying 1401 - 1500 of 1664
  • Stassen, H., & Levelt, W. J. M. (1976). Systemen, automaten en grammatica's. In J. Michon, E. Eijkman, & L. De Klerk (Eds.), Handboek der psychonomie (pp. 100-127). Deventer: Van Loghum Slaterus.
  • Stehouwer, H., & van Zaanen, M. (2009). Language models for contextual error detection and correction. In Proceedings of the EACL 2009 Workshop on Computational Linguistic Aspects of Grammatical Inference (pp. 41-48). Association for Computational Linguistics.

    Abstract

    The problem of identifying and correcting confusibles, i.e. context-sensitive spelling errors, in text is typically tackled using specifically trained machine learning classifiers. For each different set of confusibles, a specific classifier is trained and tuned. In this research, we investigate a more generic approach to context-sensitive confusible correction. Instead of using specific classifiers, we use one generic classifier based on a language model. This model measures the likelihood of sentences with different possible solutions of a confusible in place. The advantage of this approach is that all confusible sets are handled by a single model. Preliminary results show that the performance of the generic classifier approach is only slightly worse than that of the specific classifier approach.
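The generic approach this abstract describes, scoring each confusible alternative in context with a single language model and keeping the likeliest sentence, can be sketched as follows. The toy bigram model, add-one smoothing, and mini-corpus below are illustrative assumptions, not the authors' implementation.

```python
from collections import Counter

class BigramLM:
    """Toy bigram language model with add-one smoothing (illustrative only)."""
    def __init__(self, corpus):
        self.unigrams = Counter()
        self.bigrams = Counter()
        for sent in corpus:
            toks = ["<s>"] + sent.split() + ["</s>"]
            self.unigrams.update(toks)
            self.bigrams.update(zip(toks, toks[1:]))
        self.vocab_size = len(self.unigrams)

    def prob(self, sentence):
        """Smoothed probability of a whole sentence under the bigram model."""
        toks = ["<s>"] + sentence.split() + ["</s>"]
        p = 1.0
        for a, b in zip(toks, toks[1:]):
            p *= (self.bigrams[(a, b)] + 1) / (self.unigrams[a] + self.vocab_size)
        return p

def correct(lm, sentence, slot, confusible_set):
    """Fill the marked slot with each alternative; keep the likeliest sentence."""
    candidates = [sentence.replace(slot, alt) for alt in confusible_set]
    return max(candidates, key=lm.prob)

corpus = ["i see the car", "you see the car", "the car is red"]
lm = BigramLM(corpus)
print(correct(lm, "you X the car", "X", {"see", "sea"}))  # prints "you see the car"
```

Because one model scores any alternative set, no confusible-specific training or tuning is needed, which is the advantage the abstract highlights.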
  • Stehouwer, H., & van Zaanen, M. (2009). Token merging in language model-based confusible disambiguation. In T. Calders, K. Tuyls, & M. Pechenizkiy (Eds.), Proceedings of the 21st Benelux Conference on Artificial Intelligence (pp. 241-248).

    Abstract

    In the context of confusible disambiguation (spelling correction that requires context), the synchronous back-off strategy combined with traditional n-gram language models performs well. However, when alternatives consist of a different number of tokens, this classification technique cannot be applied directly, because the computation of the probabilities is skewed. Previous work already showed that probabilities based on different order n-grams should not be compared directly. In this article, we propose new probability metrics in which the size of the n is varied according to the number of tokens of the confusible alternative. This requires access to n-grams of variable length. Results show that the synchronous back-off method is extremely robust. We discuss the use of suffix trees as a technique to store variable length n-gram information efficiently.
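Storing n-grams of variable length, as the abstract's last sentence suggests, can be approximated with a count trie over token sequences. This is a simplified stand-in for the suffix trees the authors discuss; the corpus and helper names are hypothetical.

```python
class NGramTrie:
    """Trie over tokens: counts for n-grams of any length up to a chosen maximum."""
    def __init__(self):
        self.children = {}
        self.count = 0

    def add(self, tokens):
        # Walk/extend the trie, counting every prefix of the inserted n-gram.
        node = self
        for tok in tokens:
            node = node.children.setdefault(tok, NGramTrie())
            node.count += 1

    def lookup(self, tokens):
        # Return the corpus count of the given n-gram (0 if unseen).
        node = self
        for tok in tokens:
            node = node.children.get(tok)
            if node is None:
                return 0
        return node.count

def index_corpus(trie, sentences, max_n=3):
    # Insert every n-gram of length 1..max_n starting at each position.
    for sent in sentences:
        toks = sent.split()
        for i in range(len(toks)):
            trie.add(toks[i:i + max_n])

trie = NGramTrie()
index_corpus(trie, ["the cat sat", "the cat ran"])
print(trie.lookup(["the", "cat"]))         # prints 2
print(trie.lookup(["the", "cat", "sat"]))  # prints 1
```

One lookup structure then serves probability estimates for whichever n the confusible alternative's token length requires.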
  • Stevelink, R., Luykx, J. J., Lin, B. D., Leu, C., Lal, D., Smith, A. W., Schijven, D., Carpay, J. A., Rademaker, K., Baldez, R. A. R., Devinsky, O., Braun, K. P. J., Jansen, F. E., Smit, D. J. A., Koeleman, B. P. C., International League Against Epilepsy Consortium on Complex Epilepsies, & Epi25 Collaborative (2021). Shared genetic basis between genetic generalized epilepsy and background electroencephalographic oscillations. Epilepsia, 62(7), 1518-1527. doi:10.1111/epi.16922.

    Abstract

    Objective: Paroxysmal epileptiform abnormalities on electroencephalography (EEG) are the hallmark of epilepsies, but it is uncertain to what extent epilepsy and background EEG oscillations share neurobiological underpinnings. Here, we aimed to assess the genetic correlation between epilepsy and background EEG oscillations. Methods: Confounding factors, including the heterogeneous etiology of epilepsies and medication effects, hamper studies on background brain activity in people with epilepsy. To overcome this limitation, we compared genetic data from a genome-wide association study (GWAS) on epilepsy (n = 12,803 people with epilepsy and 24,218 controls) with that from a GWAS on background EEG (n = 8425 subjects without epilepsy), in which background EEG oscillation power was quantified in four different frequency bands: alpha, beta, delta, and theta. We replicated our findings in an independent epilepsy replication dataset (n = 4851 people with epilepsy and 20,428 controls). To assess the genetic overlap between these phenotypes, we performed genetic correlation analyses using linkage disequilibrium score regression, polygenic risk scores, and Mendelian randomization analyses. Results: Our analyses show strong genetic correlations of genetic generalized epilepsy (GGE) with background EEG oscillations, primarily in the beta frequency band. Furthermore, we show that subjects with higher beta and theta polygenic risk scores have a significantly higher risk of having generalized epilepsy. Mendelian randomization analyses suggest a causal effect of GGE genetic liability on beta oscillations. Significance: Our results point to shared biological mechanisms underlying background EEG oscillations and the susceptibility for GGE, opening avenues to investigate the clinical utility of background EEG oscillations in the diagnostic workup of epilepsy.

    Additional information

    supporting information
  • Stewart, A. J., Kidd, E., & Haigh, M. (2009). Early sensitivity to discourse-level anomalies: Evidence from self-paced reading. Discourse Processes, 46(1), 46-69. doi:10.1080/01638530802629091.

    Abstract

    Two word-by-word, self-paced reading experiments investigated the speed with which readers were sensitive to discourse-level anomalies. An account arguing for delayed sensitivity (Guzman & Klin, 2000) was contrasted with one allowing for rapid sensitivity (Myers & O'Brien, 1998). Anomalies related to spatial information (Experiment 1) and character-attribute information (Experiment 2) were examined. Both experiments found that readers displayed rapid sensitivity to the anomalous information. A reading time penalty was observed for the region of text containing the anomalous information. This finding is most compatible with an account of text processing whereby incoming words are rapidly evaluated with respect to prior context. The results are not consistent with an account that argues for delayed integration. Results are discussed in light of their implications for competing models of text processing.
  • Stewart, A. J., Haigh, M., & Kidd, E. (2009). An investigation into the online processing of counterfactual and indicative conditionals. Quarterly Journal of Experimental Psychology, 62(11), 2113-2125. doi:10.1080/17470210902973106.

    Abstract

    The ability to represent conditional information is central to human cognition. In two self-paced reading experiments we investigated how readers process counterfactual conditionals (e.g., If Darren had been athletic, he could probably have played on the rugby team) and indicative conditionals (e.g., If Darren is athletic, he probably plays on the rugby team). In Experiment 1 we focused on how readers process counterfactual conditional sentences. We found that processing of the antecedent of counterfactual conditionals was rapidly constrained by prior context (i.e., knowing whether Darren was or was not athletic). A reading-time penalty was observed for the critical region of text comprising the last word of the antecedent and the first word of the consequent when the information in the antecedent did not fit with prior context. In Experiment 2 we contrasted counterfactual conditionals with indicative conditionals. For counterfactual conditionals we found the same effect on the critical region as we found in Experiment 1. In contrast, however, we found no evidence that processing of the antecedent of indicative conditionals was constrained by prior context. For indicative conditionals (but not for counterfactual conditionals), the results we report are consistent with the suppositional account of conditionals. We propose that current theories of conditionals need to be able to account for online processing differences between indicative and counterfactual conditionals.
  • Stivers, T. (2004). Potilaan vastarinta: Keino vaikuttaa lääkärin hoitopäätökseen [Patient resistance: A means of influencing the physician's treatment decision]. Sosiaalilääketieteellinen Aikakauslehti, 41, 199-213.
  • Stivers, T. (2004). "No no no" and other types of multiple sayings in social interaction. Human Communication Research, 30(2), 260-293. doi:10.1111/j.1468-2958.2004.tb00733.x.

    Abstract

    Relying on the methodology of conversation analysis, this article examines a practice in ordinary conversation characterized by the resaying of a word, phrase, or sentence. The article shows that multiple sayings such as "No no no" or "Alright alright alright" are systematic in both their positioning relative to the interlocutor's talk and in their function. Specifically, the findings are that multiple sayings are a resource speakers have to display that their turn is addressing an in-progress course of action rather than only the just-prior utterance. Speakers of multiple sayings communicate their stance that the prior speaker has persisted unnecessarily in the prior course of action and should properly halt that course of action.
  • Stivers, T. (2004). Question sequences in interaction. In A. Majid (Ed.), Field Manual Volume 9 (pp. 45-47). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.506967.

    Abstract

    When people request information, they have a variety of means for eliciting it. In English, two of the primary resources for eliciting information are asking questions and making statements about one's interlocutor (thereby inviting confirmation or revision). But within these types there are a variety of ways that these information elicitors can be designed. The goal of this task is to examine how different languages seek and provide information, the extent to which syntactic vs. prosodic resources are used (e.g., in questions), and the extent to which the design of information-seeking actions and their responses displays a structural preference to promote social solidarity.
  • Stivers, T. (1998). Prediagnostic commentary in veterinarian-client interaction. Research on Language and Social Interaction, 31(2), 241-277. doi:10.1207/s15327973rlsi3102_4.
  • Stivers, T., Enfield, N. J., Brown, P., Englert, C., Hayashi, M., Heinemann, T., Hoymann, G., Rossano, F., De Ruiter, J. P., Yoon, K.-E., & Levinson, S. C. (2009). Universals and cultural variation in turn-taking in conversation. Proceedings of the National Academy of Sciences of the United States of America, 106(26), 10587-10592. doi:10.1073/pnas.0903616106.

    Abstract

    Informal verbal interaction is the core matrix for human social life. A mechanism for coordinating this basic mode of interaction is a system of turn-taking that regulates who is to speak and when. Yet relatively little is known about how this system varies across cultures. The anthropological literature reports significant cultural differences in the timing of turn-taking in ordinary conversation. We test these claims and show that in fact there are striking universals in the underlying pattern of response latency in conversation. Using a worldwide sample of 10 languages drawn from traditional indigenous communities to major world languages, we show that all of the languages tested provide clear evidence for a general avoidance of overlapping talk and a minimization of silence between conversational turns. In addition, all of the languages show the same factors explaining within-language variation in speed of response. We do, however, find differences across the languages in the average gap between turns, within a range of 250 ms from the cross-language mean. We believe that a natural sensitivity to these tempo differences leads to a subjective perception of dramatic or even fundamental differences as offered in ethnographic reports of conversational style. Our empirical evidence suggests robust human universals in this domain, where local variations are quantitative only, pointing to a single shared infrastructure for language use with likely ethological foundations.

    Additional information

    Stivers_2009_universals_suppl.pdf
  • Stolker, C. J. J. M., & Poletiek, F. H. (1998). Smartengeld - Wat zijn we eigenlijk aan het doen? Naar een juridische en psychologische evaluatie. In F. Stadermann (Ed.), Bewijs en letselschade (pp. 71-86). Lelystad, The Netherlands: Koninklijke Vermande.
  • Sumer, B., & Ozyurek, A. (2020). No effects of modality in development of locative expressions of space in signing and speaking children. Journal of Child Language, 47(6), 1101-1131. doi:10.1017/S0305000919000928.

    Abstract

    Linguistic expressions of locative spatial relations in sign languages are mostly visually motivated representations of space involving mapping of entities and spatial relations between them onto the hands and the signing space. These are also morphologically complex forms. It is debated whether modality-specific aspects of spatial expressions modulate spatial language development differently in signing compared to speaking children. In a picture description task, we compared the use of locative expressions for containment, support and occlusion relations by deaf children acquiring Turkish Sign Language and hearing children acquiring Turkish (3;5-9;11 years). Unlike previous reports suggesting a boosting effect of iconicity and/or a hindering effect of morphological complexity of the locative forms in sign languages, our results show similar developmental patterns for signing and speaking children's acquisition of these forms. Our results suggest the primacy of cognitive development guiding the acquisition of locative expressions by speaking and signing children.
  • Suppes, P., Böttner, M., & Liang, L. (1998). Machine Learning of Physics Word Problems: A Preliminary Report. In A. Aliseda, R. van Glabbeek, & D. Westerståhl (Eds.), Computing Natural Language (pp. 141-154). Stanford, CA, USA: CSLI Publications.
  • Sutcliffe, D. J., Dinasarapu, A. R., Visser, J. E., Den Hoed, J., Seifar, F., Joshi, P., Ceballos-Picot, I., Sardar, T., Hess, E. J., Sun, Y. V., Wen, Z., Zwick, M. E., & Jinnah, H. A. (2021). Induced pluripotent stem cells from subjects with Lesch-Nyhan disease. Scientific Reports, 11: 8523. doi:10.1038/s41598-021-87955-9.

    Abstract

    Lesch-Nyhan disease (LND) is an inherited disorder caused by pathogenic variants in the HPRT1 gene, which encodes the purine recycling enzyme hypoxanthine–guanine phosphoribosyltransferase (HGprt). We generated 6 induced pluripotent stem cell (iPSC) lines from 3 individuals with LND, along with 6 control lines from 3 normal individuals. All 12 lines had the characteristics of pluripotent stem cells, as assessed by immunostaining for pluripotency markers, expression of pluripotency genes, and differentiation into the 3 primary germ cell layers. Gene expression profiling with RNAseq demonstrated significant heterogeneity among the lines. Despite this heterogeneity, several anticipated abnormalities were readily detectable across all LND lines, including reduced HPRT1 mRNA. Several unexpected abnormalities were also consistently detectable across the LND lines, including decreases in FAR2P1 and increases in RNF39. Shotgun proteomics also demonstrated several expected abnormalities in the LND lines, such as absence of HGprt protein. The proteomics study also revealed several unexpected abnormalities across the LND lines, including increases in GNAO1 and decreases in NSE4A. There was a good but partial correlation between abnormalities revealed by the RNAseq and proteomics methods. Finally, functional studies demonstrated that the LND lines had no HGprt enzyme activity and were resistant to the toxic pro-drug 6-thioguanine. Intracellular purines in the LND lines were normal, but they did not recycle hypoxanthine. These cells provide a novel resource to reveal insights into the relevance of heterogeneity among iPSC lines and applications for modeling LND.

    Additional information

    supplementary material
  • Swaab, T., Brown, C. M., & Hagoort, P. (1995). Delayed integration of lexical ambiguities in Broca's aphasics: Evidence from event-related potentials. Brain and Language, 51, 159-161. doi:10.1006/brln.1995.1058.
  • Swaab, T. Y., Brown, C. M., & Hagoort, P. (1998). Understanding ambiguous words in sentence contexts: Electrophysiological evidence for delayed contextual selection in Broca's aphasia. Neuropsychologia, 36(8), 737-761. doi:10.1016/S0028-3932(97)00174-7.

    Abstract

    This study investigates whether spoken sentence comprehension deficits in Broca's aphasics result from their inability to access the subordinate meaning of ambiguous words (e.g. bank), or alternatively, from a delay in their selection of the contextually appropriate meaning. Twelve Broca's aphasics and twelve elderly controls were presented with lexical ambiguities in three context conditions, each followed by the same target words. In the concordant condition, the sentence context biased the meaning of the sentence-final ambiguous word that was related to the target. In the discordant condition, the sentence context biased the meaning of the sentence-final ambiguous word that was incompatible with the target. In the unrelated condition, the sentence-final word was unambiguous and unrelated to the target. The task of the subjects was to listen attentively to the stimuli. The activational status of the ambiguous sentence-final words was inferred from the amplitude of the N400 to the targets at two inter-stimulus intervals (ISIs) (100 ms and 1250 ms). At the short ISI, the Broca's aphasics showed clear evidence of activation of the subordinate meaning. In contrast to elderly controls, however, the Broca's aphasics were not successful at selecting the appropriate meaning of the ambiguity in the short ISI version of the experiment. But at the long ISI, in accordance with the performance of the elderly controls, the patients were able to successfully complete the contextual selection process. These results indicate that Broca's aphasics are delayed in the process of contextual selection. It is argued that this finding of delayed selection is compatible with the idea that comprehension deficits in Broca's aphasia result from a delay in the process of integrating lexical information.
  • Swift, M. (1998). [Book review of LOUIS-JACQUES DORAIS, La parole inuit: Langue, culture et société dans l'Arctique nord-américain]. Language in Society, 27, 273-276. doi:10.1017/S0047404598282042.

    Abstract

    This volume on Inuit speech follows the evolution of a native language of the North American Arctic, from its historical roots to its present-day linguistic structure and patterns of use from Alaska to Greenland. Drawing on a wide range of research from the fields of linguistics, anthropology, and sociology, Dorais integrates these diverse perspectives in a comprehensive view of native language development, maintenance, and use under conditions of marginalization due to social transition.
  • Tagliapietra, L., Fanari, R., De Candia, C., & Tabossi, P. (2009). Phonotactic regularities in the segmentation of spoken Italian. Quarterly Journal of Experimental Psychology, 62(2), 392-415. doi:10.1080/17470210801907379.

    Abstract

    Five word-spotting experiments explored the role of consonantal and vocalic phonotactic cues in the segmentation of spoken Italian. The first set of experiments tested listeners' sensitivity to phonotactic constraints cueing syllable boundaries. Participants were slower in spotting words in nonsense strings when target onsets were misaligned (e.g., lago in ri.blago) than when they were aligned (e.g., lago in rin.lago) with phonotactically determined syllabic boundaries. This effect held also for sequences that occur only word-medially (e.g., /tl/ in ri.tlago), and competition effects could not account for the disadvantage in the misaligned condition. Similarly, target detections were slower when their offsets were misaligned (e.g., città in cittàu.ba) than when they were aligned (e.g., città in città.oba) with a phonotactic syllabic boundary. The second set of experiments tested listeners' sensitivity to phonotactic cues, which specifically signal lexical (and not just syllable) boundaries. Results corroborate the role of syllabic information in speech segmentation and suggest that Italian listeners make little use of additional phonotactic information that specifically cues word boundaries.
  • Tagliapietra, L., Fanari, R., Collina, S., & Tabossi, P. (2009). Syllabic effects in Italian lexical access. Journal of Psycholinguistic Research, 38(6), 511-526. doi:10.1007/s10936-009-9116-4.

    Abstract

    Two cross-modal priming experiments tested whether lexical access is constrained by syllabic structure in Italian. Results extend the available Italian data on the processing of stressed syllables, showing that syllabic information restricts the set of candidates to those structurally consistent with the intended word (Experiment 1). Lexical access, however, takes place as soon as possible and is not delayed until the incoming input corresponds to the first syllable of the word. Moreover, the initially activated set includes candidates whose syllabic structure does not match the intended word (Experiment 2). The present data challenge the early hypothesis that in Romance languages syllables are the units for lexical access during spoken word recognition. The implications of the results for our understanding of the role of syllabic information in language processing are discussed.
  • Takashima, A., Konopka, A. E., Meyer, A. S., Hagoort, P., & Weber, K. (2020). Speaking in the brain: The interaction between words and syntax in sentence production. Journal of Cognitive Neuroscience, 32(8), 1466-1483. doi:10.1162/jocn_a_01563.

    Abstract

    This neuroimaging study investigated the neural infrastructure of sentence-level language production. We compared brain activation patterns, as measured with BOLD-fMRI, during production of sentences that differed in verb argument structures (intransitives, transitives, ditransitives) and the lexical status of the verb (known verbs or pseudoverbs). The experiment consisted of 30 mini-blocks of six sentences each. Each mini-block started with an example for the type of sentence to be produced in that block. On each trial in the mini-blocks, participants were first given the (pseudo-)verb followed by three geometric shapes to serve as verb arguments in the sentences. Production of sentences with known verbs yielded greater activation compared to sentences with pseudoverbs in the core language network of the left inferior frontal gyrus, the left posterior middle temporal gyrus, and a more posterior middle temporal region extending into the angular gyrus, analogous to effects observed in language comprehension. Increasing the number of verb arguments led to greater activation in an overlapping left posterior middle temporal gyrus/angular gyrus area, particularly for known verbs, as well as in the bilateral precuneus. Thus, producing sentences with more complex structures using existing verbs leads to increased activation in the language network, suggesting some reliance on memory retrieval of stored lexical–syntactic information during sentence production. This study thus provides evidence from sentence-level language production in line with functional models of the language network that have so far been mainly based on single-word production, comprehension, and language processing in aphasia.
  • Tan, Y., & Hagoort, P. (2020). Catecholaminergic modulation of semantic processing in sentence comprehension. Cerebral Cortex, 30(12), 6426-6443. doi:10.1093/cercor/bhaa204.

    Abstract

    Catecholamine (CA) function has been widely implicated in cognitive functions that are tied to the prefrontal cortex and striatal areas. The present study investigated the effects of methylphenidate, which is a CA agonist, on the electroencephalogram (EEG) response related to semantic processing using a double-blind, placebo-controlled, randomized, crossover, within-subject design. Forty-eight healthy participants read semantically congruent or incongruent sentences after receiving 20-mg methylphenidate or a placebo while their brain activity was monitored with EEG. To probe whether the catecholaminergic modulation is task-dependent, in one condition participants had to focus on comprehending the sentences, while in the other condition, they only had to attend to the font size of the sentence. The results demonstrate that methylphenidate has a task-dependent effect on semantic processing. Compared to placebo, when semantic processing was task-irrelevant, methylphenidate enhanced the detection of semantic incongruence as indexed by a larger N400 amplitude in the incongruent sentences; when semantic processing was task-relevant, methylphenidate induced a larger N400 amplitude in the semantically congruent condition, which was followed by a larger late positive complex effect. These results suggest that CA-related neurotransmitters influence language processing, possibly through the projections between the prefrontal cortex and the striatum, which contain many CA receptors.
  • Tartaro, G., Takashima, A., & McQueen, J. M. (2021). Consolidation as a mechanism for word learning in sequential bilinguals. Bilingualism: Language and Cognition, 24(5), 864-878. doi:10.1017/S1366728921000286.

    Abstract

    First-language research suggests that new words, after initial episodic-memory encoding, are consolidated and hence become lexically integrated. We asked here if lexical consolidation, about word forms and meanings, occurs in a second language. Italian–English sequential bilinguals learned novel English-like words (e.g., apricon, taught to mean “stapler”). fMRI analyses failed to reveal a predicted shift, after consolidation time, from hippocampal to temporal neocortical activity. In a pause-detection task, responses to existing phonological competitors of learned words (e.g., apricot for apricon) were slowed down if the words had been learned two days earlier (i.e., after consolidation time) but not if they had been learned the same day. In a lexical-decision task, new words primed responses to semantically-related existing words (e.g., apricon-paper) whether the words were learned that day or two days earlier. Consolidation appears to support integration of words into the bilingual lexicon, possibly more rapidly for meanings than for forms.

    Additional information

    materials, procedure, results
  • Ten Bosch, L., Oostdijk, N., & De Ruiter, J. P. (2004). Turn-taking in social talk dialogues: Temporal, formal and functional aspects. In 9th International Conference Speech and Computer (SPECOM'2004) (pp. 454-461).

    Abstract

    This paper presents a quantitative analysis of the turn-taking mechanism evidenced in 93 telephone dialogues that were taken from the 9-million-word Spoken Dutch Corpus. While the first part of the paper focuses on the temporal phenomena of turn-taking, such as durations of pauses and overlaps of turns in the dialogues, the second part explores the discourse-functional aspects of utterances in a subset of 8 dialogues that were annotated especially for this purpose. The results show that speakers adapt their turn-taking behaviour to the interlocutor's behaviour. Furthermore, the results indicate that male-male dialogues show a higher proportion of overlapping turns than female-female dialogues.
  • Ten Oever, S., & Martin, A. E. (2021). An oscillating computational model can track pseudo-rhythmic speech by using linguistic predictions. eLife, 10: e68066. doi:10.7554/eLife.68066.

    Abstract

    Neuronal oscillations putatively track speech in order to optimize sensory processing. However, it is unclear how isochronous brain oscillations can track pseudo-rhythmic speech input. Here we propose that oscillations can track pseudo-rhythmic speech when considering that speech time is dependent on content-based predictions flowing from internal language models. We show that temporal dynamics of speech are dependent on the predictability of words in a sentence. A computational model including oscillations, feedback, and inhibition is able to track pseudo-rhythmic speech input. As the model processes, it generates temporal phase codes, which are a candidate mechanism for carrying information forward in time. The model is optimally sensitive to the natural temporal speech dynamics and can explain empirical data on temporal speech illusions. Our results suggest that speech tracking does not have to rely only on the acoustics but could also exploit ongoing interactions between oscillations and constraints flowing from internal language models.
  • Ten Oever, S., Sack, A. T., Oehrn, C. R., & Axmacher, N. (2021). An engram of intentionally forgotten information. Nature Communications, 12: 6443. doi:10.1038/s41467-021-26713-x.

    Abstract

    Successful forgetting of unwanted memories is crucial for goal-directed behavior and mental wellbeing. While memory retention strengthens memory traces, it is unclear what happens to memory traces of events that are actively forgotten. Using intracranial EEG recordings from lateral temporal cortex, we find that memory traces for actively forgotten information are partially preserved and exhibit unique neural signatures. Memory traces of successfully remembered items show stronger encoding-retrieval similarity in gamma frequency patterns. By contrast, encoding-retrieval similarity of item-specific memory traces of actively forgotten items depends on activity at alpha/beta frequencies commonly associated with functional inhibition. Additional analyses revealed selective modification of item-specific patterns of connectivity and top-down information flow from dorsolateral prefrontal cortex to lateral temporal cortex in memory traces of intentionally forgotten items. These results suggest that intentional forgetting relies more on inhibitory top-down connections than intentional remembering, resulting in inhibitory memory traces with unique neural signatures and representational formats.

    Additional information

    supplementary figures
  • Ten Bosch, L., Oostdijk, N., & De Ruiter, J. P. (2004). Durational aspects of turn-taking in spontaneous face-to-face and telephone dialogues. In P. Sojka, I. Kopecek, & K. Pala (Eds.), Text, Speech and Dialogue: Proceedings of the 7th International Conference TSD 2004 (pp. 563-570). Heidelberg: Springer.

    Abstract

    On the basis of two-speaker spontaneous conversations, it is shown that the distributions of both pauses and speech-overlaps of telephone and face-to-face dialogues have different statistical properties. Pauses in a face-to-face dialogue last up to 4 times longer than pauses in telephone conversations in functionally comparable conditions. There is a high correlation (0.88 or larger) between the average pause duration for the two speakers across face-to-face dialogues and telephone dialogues. The data provided form a first quantitative analysis of the complex turn-taking mechanism evidenced in the dialogues available in the 9-million-word Spoken Dutch Corpus.
  • Ten Oever, S., Meierdierks, T., Duecker, F., De Graaf, T., & Sack, A. (2020). Phase-coded oscillatory ordering promotes the separation of closely matched representations to optimize perceptual discrimination. iScience, 23(7): 101282. doi:10.1016/j.isci.2020.101282.

    Abstract

    Low-frequency oscillations are proposed to be involved in separating neuronal representations belonging to different items. Although item-specific neuronal activity was found to cluster on different oscillatory phases, the influence of this mechanism on perception is unknown. Here, we investigated the perceptual consequences of neuronal item separation through oscillatory clustering. In an electroencephalographic experiment, participants categorized sounds parametrically varying in pitch, relative to an arbitrary pitch boundary. Pre-stimulus theta and alpha phase biased near-boundary sound categorization to one category or the other. Phase also modulated whether evoked neuronal responses contributed more strongly to the fit of the sound envelope of one or another category. Intriguingly, participants with stronger oscillatory clustering (phase strongly biasing sound categorization) in the theta, but not alpha, range had steeper perceptual psychometric slopes (sharper sound category discrimination). These results indicate that neuronal sorting by phase directly influences subsequent perception and has a positive impact on discrimination performance.

    Additional information

    Supplemental Information
  • Ten Oever, S., De Weerd, P., & Sack, A. T. (2020). Phase-dependent amplification of working memory content and performance. Nature Communications, 11: 1832. doi:10.1038/s41467-020-15629-7.

    Abstract

    Successful working memory performance has been related to oscillatory mechanisms operating in low-frequency ranges. Yet, their mechanistic interaction with the distributed neural activity patterns representing the content of the memorized information remains unclear. Here, we record EEG during a working memory retention interval, while a task-irrelevant, high-intensity visual impulse stimulus is presented to boost the read-out of distributed neural activity related to the content held in working memory. Decoding of this activity with a linear classifier reveals significant modulations of classification accuracy by oscillatory phase in the theta/alpha ranges at the moment of impulse presentation. Additionally, behavioral accuracy is highest at the phases showing maximized decoding accuracy. At those phases, behavioral accuracy is higher in trials with the impulse compared to no-impulse trials. This constitutes the first evidence in humans that working memory information is maximized within limited phase ranges, and that phase-selective, sensory impulse stimulation can improve working memory.
  • Teng, X., Ma, M., Yang, J., Blohm, S., Cai, Q., & Tian, X. (2020). Constrained structure of ancient Chinese poetry facilitates speech content grouping. Current Biology, 30, 1299-1305. doi:10.1016/j.cub.2020.01.059.

    Abstract

    Ancient Chinese poetry is constituted by structured language that deviates from ordinary language usage [1, 2]; its poetic genres impose unique combinatory constraints on linguistic elements [3]. How does the constrained poetic structure facilitate speech segmentation when common linguistic [4, 5, 6, 7, 8] and statistical cues [5, 9] are unreliable to listeners in poems? We generated artificial Jueju, which arguably has the most constrained structure in ancient Chinese poetry, and presented each poem twice as an isochronous sequence of syllables to native Mandarin speakers while conducting magnetoencephalography (MEG) recording. We found that listeners deployed their prior knowledge of Jueju to build the line structure and to establish the conceptual flow of Jueju. Unprecedentedly, we found a phase precession phenomenon indicating predictive processes of speech segmentation—the neural phase advanced faster after listeners acquired knowledge of incoming speech. The statistical co-occurrence of monosyllabic words in Jueju negatively correlated with speech segmentation, which provides an alternative perspective on how statistical cues facilitate speech segmentation. Our findings suggest that constrained poetic structures serve as a temporal map for listeners to group speech contents and to predict incoming speech signals. Listeners can parse speech streams by using not only grammatical and statistical cues but also their prior knowledge of the form of language.

    Additional information

    Supplemental Information
  • Ter Bekke, M., Drijvers, L., & Holler, J. (2020). The predictive potential of hand gestures during conversation: An investigation of the timing of gestures in relation to speech. In Proceedings of the 7th GESPIN - Gesture and Speech in Interaction Conference. Stockholm: KTH Royal Institute of Technology.

    Abstract

    In face-to-face conversation, recipients might use the bodily movements of the speaker (e.g. gestures) to facilitate language processing. It has been suggested that one way through which this facilitation may happen is prediction. However, for this to be possible, gestures would need to precede speech, and it is unclear whether this is true during natural conversation.
    In a corpus of Dutch conversations, we annotated hand gestures that represent semantic information and occurred during questions, and the word(s) which corresponded most closely to the gesturally depicted meaning. Thus, we tested whether representational gestures temporally precede their lexical affiliates. Further, to see whether preceding gestures may indeed facilitate language processing, we asked whether the gesture-speech asynchrony predicts the response time to the question the gesture is part of.
    Gestures and their strokes (most meaningful movement component) indeed preceded the corresponding lexical information, thus demonstrating their predictive potential. However, while questions with gestures got faster responses than questions without, there was no evidence that questions with larger gesture-speech asynchronies got faster responses. These results suggest that gestures indeed have the potential to facilitate predictive language processing, but further analyses on larger datasets are needed to test for links between asynchrony and processing advantages.
  • Ter Hark, S. E., Jamain, S., Schijven, D., Lin, B. D., Bakker, M. K., Boland-Auge, A., Deleuze, J.-F., Troudet, R., Malhotra, A. K., Gülöksüz, S., Vinkers, C. H., Ebdrup, B. H., Kahn, R. S., Leboyer, M., & Luykx, J. J. (2020). A new genetic locus for antipsychotic-induced weight gain: A genome-wide study of first-episode psychosis patients using amisulpride (from the OPTiMiSE cohort). Journal of Psychopharmacology, 34(5), 524-531. doi:10.1177/0269881120907972.

    Abstract

    Background: Antipsychotic-induced weight gain is a common and debilitating side effect of antipsychotics. Although genome-wide association studies of antipsychotic-induced weight gain have been performed, few genome-wide loci have been discovered. Moreover, these genome-wide association studies have included a wide variety of antipsychotic compounds.

    Aims: We aim to gain more insight into the genomic loci affecting antipsychotic-induced weight gain. Given the variable pharmacological properties of antipsychotics, we hypothesized that targeting a single antipsychotic compound would provide new clues about genomic loci affecting antipsychotic-induced weight gain.

    Methods: All subjects included for this genome-wide association study (n=339) were first-episode schizophrenia spectrum disorder patients treated with amisulpride and were minimally medicated (defined as antipsychotic use <2 weeks in the previous year and/or <6 weeks lifetime). Weight gain was defined as the increase in body mass index from before until approximately 1 month after amisulpride treatment.

    Results: Our genome-wide association analyses for antipsychotic-induced weight gain yielded one genome-wide significant hit (rs78310016; β = 1.05; p = 3.66 × 10⁻⁸; n=206) in a locus not previously associated with antipsychotic-induced weight gain or body mass index. Minor allele carriers had an odds ratio of 3.98 (p = 1.0 × 10⁻³) for clinically meaningful antipsychotic-induced weight gain (≥7% of baseline weight). In silico analysis elucidated a chromatin interaction with 3-Hydroxy-3-Methylglutaryl-CoA Synthase 1. In an attempt to replicate single-nucleotide polymorphisms previously associated with antipsychotic-induced weight gain, we found none were associated with amisulpride-induced weight gain.

    Conclusion: Our findings suggest the involvement of rs78310016 and possibly 3-Hydroxy-3-Methylglutaryl-CoA Synthase 1 in antipsychotic-induced weight gain. In line with the unique binding profile of this atypical antipsychotic, our findings furthermore hint that the biological mechanisms underlying amisulpride-induced weight gain differ from those underlying weight gain induced by other atypical antipsychotics.
  • Ter Avest, I. J., & Mulder, K. (2009). The acquisition of gender agreement in the determiner phrase by bilingual children. Toegepaste Taalwetenschap in Artikelen, 81(1), 133-142.
  • Terband, H., Rodd, J., & Maas, E. (2020). Testing hypotheses about the underlying deficit of Apraxia of Speech (AOS) through computational neural modelling with the DIVA model. International Journal of Speech-Language Pathology, 22(4), 475-486. doi:10.1080/17549507.2019.1669711.

    Abstract

    Purpose: A recent behavioural experiment featuring a noise masking paradigm suggests that Apraxia of Speech (AOS) reflects a disruption of feedforward control, whereas feedback control is spared and plays a more prominent role in achieving and maintaining segmental contrasts. The present study set out to validate the interpretation of AOS as a possible feedforward impairment using computational neural modelling with the DIVA (Directions Into Velocities of Articulators) model.

    Method: In a series of computational simulations with the DIVA model featuring a noise-masking paradigm mimicking the behavioural experiment, we investigated the effect of a feedforward, feedback, feedforward + feedback, and an upper motor neuron dysarthria impairment on average vowel spacing and dispersion in the production of six /bVt/ speech targets.

    Result: The simulation results indicate that the output of the model with the simulated feedforward deficit best resembled the group findings for the human speakers with AOS.

    Conclusion: These results provide support for the interpretation of the human observations, corroborating the notion that AOS can be conceptualised as a deficit in feedforward control.
  • Terporten, R. (2020). The power of context: How linguistic contextual information shapes brain dynamics during sentence processing. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Terrill, A. (1998). Biri. München: Lincom Europa.

    Abstract

    This work presents a salvage grammar of the Biri language of Eastern Central Queensland, a Pama-Nyungan language belonging to the large Maric subgroup. As the language is no longer used, the grammatical description is based on old written sources and on recordings made by linguists in the 1960s and 1970s. Biri is in many ways typical of the Pama-Nyungan languages of Southern Queensland. It has split case marking systems, marking nouns according to an ergative/absolutive system and pronouns according to a nominative/accusative system. Unusually for its area, Biri also has bound pronouns on its verb, cross-referencing the person, number and case of core participants. As far as it is possible, the grammatical discussion is ‘theory neutral’. The first four chapters deal with the phonology, morphology, and syntax of the language. The last two chapters contain a substantial discussion of Biri’s place in the Pama-Nyungan family. In chapter 6 the numerous dialects of the Biri language are discussed. In chapter 7 the close linguistic relationship between Biri and the surrounding languages is examined.
  • Terrill, A. (2009). [Review of Felix K. Ameka, Alan Dench, and Nicholas Evans (eds). 2006. Catching language: The standing challenge of grammar writing]. Language Documentation & Conservation, 3(1), 132-137. Retrieved from http://hdl.handle.net/10125/4432.
  • Terrill, A. (2004). Coordination in Lavukaleve. In M. Haspelmath (Ed.), Coordinating Constructions. (pp. 427-443). Amsterdam: John Benjamins.
  • Tesink, C. M. J. Y., Buitelaar, J. K., Petersson, K. M., Van der Gaag, R. J., Kan, C. C., Tendolkar, I., & Hagoort, P. (2009). Neural correlates of pragmatic language comprehension in autism disorders. Brain, 132, 1941-1952. doi:10.1093/brain/awp103.

    Abstract

    Difficulties with pragmatic aspects of communication are universal across individuals with autism spectrum disorders (ASDs). Here we focused on an aspect of pragmatic language comprehension that is relevant to social interaction in daily life: the integration of speaker characteristics inferred from the voice with the content of a message. Using functional magnetic resonance imaging (fMRI), we examined the neural correlates of the integration of voice-based inferences about the speaker’s age, gender or social background, and sentence content in adults with ASD and matched control participants. Relative to the control group, the ASD group showed increased activation in right inferior frontal gyrus (RIFG; Brodmann area 47) for speaker-incongruent sentences compared to speaker-congruent sentences. Given that both groups performed behaviourally at a similar level on a debriefing interview outside the scanner, the increased activation in RIFG for the ASD group was interpreted as being compensatory in nature. It presumably reflects spill-over processing from the language dominant left hemisphere due to higher task demands faced by the participants with ASD when integrating speaker characteristics and the content of a spoken sentence. Furthermore, only the control group showed decreased activation for speaker-incongruent relative to speaker-congruent sentences in right ventral medial prefrontal cortex (vMPFC; Brodmann area 10), including right anterior cingulate cortex (ACC; Brodmann area 24/32). Since vMPFC is involved in self-referential processing related to judgments and inferences about self and others, the absence of such a modulation in vMPFC activation in the ASD group possibly points to atypical default self-referential mental activity in ASD. Our results show that in ASD compensatory mechanisms are necessary in implicit, low-level inferential processes in spoken language understanding.
This indicates that pragmatic language problems in ASD are not restricted to high-level inferential processes, but encompass the most basic aspects of pragmatic language processing.
  • Tesink, C. M. J. Y., Petersson, K. M., Van Berkum, J. J. A., Van den Brink, D., Buitelaar, J. K., & Hagoort, P. (2009). Unification of speaker and meaning in language comprehension: An fMRI study. Journal of Cognitive Neuroscience, 21, 2085-2099. doi:10.1162/jocn.2008.21161.

    Abstract

    When interpreting a message, a listener takes into account several sources of linguistic and extralinguistic information. Here we focused on one particular form of extralinguistic information, certain speaker characteristics as conveyed by the voice. Using functional magnetic resonance imaging, we examined the neural structures involved in the unification of sentence meaning and voice-based inferences about the speaker's age, sex, or social background. We found enhanced activation in the inferior frontal gyrus bilaterally (BA 45/47) during listening to sentences whose meaning was incongruent with inferred speaker characteristics. Furthermore, our results showed an overlap in brain regions involved in unification of speaker-related information and those used for the unification of semantic and world knowledge information [inferior frontal gyrus bilaterally (BA 45/47) and left middle temporal gyrus (BA 21)]. These findings provide evidence for a shared neural unification system for linguistic and extralinguistic sources of information and extend the existing knowledge about the role of inferior frontal cortex as a crucial component for unification during language comprehension.
  • Theakston, A., & Rowland, C. F. (2009). Introduction to Special Issue: Cognitive approaches to language acquisition. Cognitive Linguistics, 20(3), 477-480. doi:10.1515/COGL.2009.021.
  • Theakston, A. L., & Rowland, C. F. (2009). The acquisition of auxiliary syntax: A longitudinal elicitation study. Part 1: Auxiliary BE. Journal of Speech, Language, and Hearing Research, 52, 1449-1470. doi:10.1044/1092-4388(2009/08-0037).

    Abstract

    Purpose: The question of how and when English-speaking children acquire auxiliaries is the subject of extensive debate. Some researchers posit the existence of innately given Universal Grammar principles to guide acquisition, although some aspects of the auxiliary system must be learned from the input. Others suggest that auxiliaries can be learned without Universal Grammar, citing evidence of piecemeal learning in their support. This study represents a unique attempt to trace the development of auxiliary syntax by using a longitudinal elicitation methodology. Method: Twelve English-speaking children participated in 3 tasks designed to elicit auxiliary BE in declaratives and yes/no and wh-questions. They completed each task 6 times in total between the ages of 2;10 (years;months) and 3;6. Results: The children’s levels of correct use of 2 forms of BE (is, are) differed according to auxiliary form and sentence structure, and these relations changed over development. An analysis of the children’s errors also revealed complex interactions between these factors. Conclusion: These data are problematic for existing accounts of auxiliary acquisition and highlight the need for researchers working within both generativist and constructivist frameworks to develop more detailed theories of acquisition that directly predict the pattern of acquisition observed.
  • Theakston, A. L., Lieven, E. V., Pine, J. M., & Rowland, C. F. (2004). Semantic generality, input frequency and the acquisition of syntax. Journal of Child Language, 31(1), 61-99. doi:10.1017/S0305000903005956.

    Abstract

    In many areas of language acquisition, researchers have suggested that semantic generality plays an important role in determining the order of acquisition of particular lexical forms. However, generality is typically confounded with the effects of input frequency and it is therefore unclear to what extent semantic generality or input frequency determines the early acquisition of particular lexical items. The present study evaluates the relative influence of semantic status and properties of the input on the acquisition of verbs and their argument structures in the early speech of 9 English-speaking children from 2;0 to 3;0. The children's early verb utterances are examined with respect to (1) the order of acquisition of particular verbs in three different constructions, (2) the syntactic diversity of use of individual verbs, (3) the relative proportional use of semantically general verbs as a function of total verb use, and (4) their grammatical accuracy. The data suggest that although measures of semantic generality correlate with various measures of early verb use, once the effects of verb use in the input are removed, semantic generality is not a significant predictor of early verb use. The implications of these results for semantic-based theories of verb argument structure acquisition are discussed.
  • Thomassen, A., & Kempen, G. (1976). Geheugen. In J. A. Michon, E. Eijkman, & L. F. De Klerk (Eds.), Handboek der Psychonomie (pp. 354-387). Deventer: Van Loghum Slaterus.
  • Thompson, B., Raviv, L., & Kirby, S. (2020). Complexity can be maintained in small populations: A model of lexical variability in emerging sign languages. In A. Ravignani, C. Barbieri, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, M. Martins, K. Mudd, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 13th International Conference (Evolang13) (pp. 440-442). Nijmegen: The Evolution of Language Conferences.
  • Thompson, P. M., Jahanshad, N., Ching, C. R. K., Salminen, L. E., Thomopoulos, S. I., Bright, J., Baune, B. T., Bertolín, S., Bralten, J., Bruin, W. B., Bülow, R., Chen, J., Chye, Y., Dannlowski, U., De Kovel, C. G. F., Donohoe, G., Eyler, L. T., Faraone, S. V., Favre, P., Filippi, C. A., Frodl, T., Garijo, D., Gil, Y., Grabe, H. J., Grasby, K. L., Hajek, T., Han, L. K. M., Hatton, S. N., Hilbert, K., Ho, T. C., Holleran, L., Homuth, G., Hosten, N., Houenou, J., Ivanov, I., Jia, T., Kelly, S., Klein, M., Kwon, J. S., Laansma, M. A., Leerssen, J., Lueken, U., Nunes, A., O'Neill, J., Opel, N., Piras, F., Piras, F., Postema, M., Pozzi, E., Shatokhina, N., Soriano-Mas, C., Spalletta, G., Sun, D., Teumer, A., Tilot, A. K., Tozzi, L., Van der Merwe, C., Van Someren, E. J. W., Van Wingen, G. A., Völzke, H., Walton, E., Wang, L., Winkler, A. M., Wittfeld, K., Wright, M. J., Yun, J.-Y., Zhang, G., Zhang-James, Y., Adhikari, B. M., Agartz, I., Aghajani, M., Aleman, A., Althoff, R. R., Altmann, A., Andreassen, O. A., Baron, D. A., Bartnik-Olson, B. L., Bas-Hoogendam, J. M., Baskin-Sommers, A. R., Bearden, C. E., Berner, L. A., Boedhoe, P. S. W., Brouwer, R. M., Buitelaar, J. K., Caeyenberghs, K., Cecil, C. A. M., Cohen, R. A., Cole, J. H., Conrod, P. J., De Brito, S. A., De Zwarte, S. M. C., Dennis, E. L., Desrivieres, S., Dima, D., Ehrlich, S., Esopenko, C., Fairchild, G., Fisher, S. E., Fouche, J.-P., Francks, C., Frangou, S., Franke, B., Garavan, H. P., Glahn, D. C., Groenewold, N. A., Gurholt, T. P., Gutman, B. A., Hahn, T., Harding, I. H., Hernaus, D., Hibar, D. P., Hillary, F. G., Hoogman, M., Hulshoff Pol, H. E., Jalbrzikowski, M., Karkashadze, G. A., Klapwijk, E. T., Knickmeyer, R. C., Kochunov, P., Koerte, I. K., Kong, X., Liew, S.-L., Lin, A. P., Logue, M. W., Luders, E., Macciardi, F., Mackey, S., Mayer, A. R., McDonald, C. R., McMahon, A. B., Medland, S. E., Modinos, G., Morey, R. A., Mueller, S. C., Mukherjee, P., Namazova-Baranova, L., Nir, T. M., Olsen, A., Paschou, P., Pine, D. S., Pizzagalli, F., Rentería, M. E., Rohrer, J. D., Sämann, P. G., Schmaal, L., Schumann, G., Shiroishi, M. S., Sisodiya, S. M., Smit, D. J. A., Sønderby, I. E., Stein, D. J., Stein, J. L., Tahmasian, M., Tate, D. F., Turner, J. A., Van den Heuvel, O. A., Van der Wee, N. J. A., Van der Werf, Y. D., Van Erp, T. G. M., Van Haren, N. E. M., Van Rooij, D., Van Velzen, L. S., Veer, I. M., Veltman, D. J., Villalon-Reina, J. E., Walter, H., Whelan, C. D., Wilde, E. A., Zarei, M., Zelman, V., & Enigma Consortium (2020). ENIGMA and global neuroscience: A decade of large-scale studies of the brain in health and disease across more than 40 countries. Translational Psychiatry, 10(1): 100. doi:10.1038/s41398-020-0705-1.

    Abstract

    This review summarizes the last decade of work by the ENIGMA (Enhancing NeuroImaging Genetics through Meta Analysis) Consortium, a global alliance of over 1400 scientists across 43 countries, studying the human brain in health and disease. Building on large-scale genetic studies that discovered the first robustly replicated genetic loci associated with brain metrics, ENIGMA has diversified into over 50 working groups (WGs), pooling worldwide data and expertise to answer fundamental questions in neuroscience, psychiatry, neurology, and genetics. Most ENIGMA WGs focus on specific psychiatric and neurological conditions, other WGs study normal variation due to sex and gender differences, or development and aging; still other WGs develop methodological pipelines and tools to facilitate harmonized analyses of “big data” (i.e., genetic and epigenetic data, multimodal MRI, and electroencephalography data). These international efforts have yielded the largest neuroimaging studies to date in schizophrenia, bipolar disorder, major depressive disorder, post-traumatic stress disorder, substance use disorders, obsessive-compulsive disorder, attention-deficit/hyperactivity disorder, autism spectrum disorders, epilepsy, and 22q11.2 deletion syndrome. More recent ENIGMA WGs have formed to study anxiety disorders, suicidal thoughts and behavior, sleep and insomnia, eating disorders, irritability, brain injury, antisocial personality and conduct disorder, and dissociative identity disorder. Here, we summarize the first decade of ENIGMA’s activities and ongoing projects, and describe the successes and challenges encountered along the way. We highlight the advantages of collaborative large-scale coordinated data analyses for testing reproducibility and robustness of findings, offering the opportunity to identify brain systems involved in clinical syndromes across diverse samples and associated genetic, environmental, demographic, cognitive, and psychosocial factors.

    Additional information

    41398_2020_705_MOESM1_ESM.pdf
  • Thompson, P. A., Bishop, D. V. M., Eising, E., Fisher, S. E., & Newbury, D. F. (2020). Generalized Structured Component Analysis in candidate gene association studies: Applications and limitations [version 2; peer review: 3 approved]. Wellcome Open Research, 4: 142. doi:10.12688/wellcomeopenres.15396.2.

    Abstract

    Background: Generalized Structured Component Analysis (GSCA) is a component-based alternative to traditional covariance-based structural equation modelling. This method has previously been applied to test for association between candidate genes and clinical phenotypes, contrasting with traditional genetic association analyses that adopt univariate testing of many individual single nucleotide polymorphisms (SNPs) with correction for multiple testing.
    Methods: We first evaluate the ability of the GSCA method to replicate two previous findings from a genetics association study of developmental language disorders. We then present the results of a simulation study to test the validity of the GSCA method under more restrictive data conditions, using smaller sample sizes and larger numbers of SNPs than have previously been investigated. Finally, we compare GSCA performance against univariate association analysis conducted using PLINK v1.9.
    Results: Results from simulations show that power to detect effects depends not just on sample size, but also on the ratio of SNPs with effect to number of SNPs tested within a gene. Inclusion of many SNPs in a model dilutes true effects.
    Conclusions: We propose that GSCA is a useful method for replication studies, when candidate SNPs have been identified, but should not be used for exploratory analysis.

    Additional information

    data via OSF
  • Thorin, J. (2020). Can you hear what you cannot say? The interactions of speech perception and production during non-native phoneme learning. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Tilmatine, M., Hubers, F., & Hintz, F. (2021). Exploring individual differences in recognizing idiomatic expressions in context. Journal of Cognition, 4(1): 37. doi:10.5334/joc.183.

    Abstract

    Written language comprehension requires readers to integrate incoming information with stored mental knowledge to construct meaning. Literally plausible idiomatic expressions can activate both figurative and literal interpretations, which convey different meanings. Previous research has shown that contexts biasing the figurative or literal interpretation of an idiom can facilitate its processing. Moreover, there is evidence that processing of idiomatic expressions is subject to individual differences in linguistic knowledge and cognitive-linguistic skills. It is therefore conceivable that individuals vary in the extent to which they experience context-induced facilitation in processing idiomatic expressions. To explore the interplay between reader-related variables and contextual facilitation, we conducted a self-paced reading experiment. We recruited participants who had recently completed a battery of 33 behavioural tests measuring individual differences in linguistic knowledge, general cognitive skills and linguistic processing skills. In the present experiment, a subset of these participants read idiomatic expressions that were either presented in isolation or preceded by a figuratively or literally biasing context. We conducted analyses on the reading times of idiom-final nouns and the word thereafter (spill-over region) across the three conditions, including participants’ scores from the individual differences battery. Our results showed no main effect of the preceding context, but substantial variation in contextual facilitation between readers. We observed main effects of participants’ word reading ability and non-verbal intelligence on reading times as well as an interaction between condition and linguistic knowledge. We encourage interested researchers to exploit the present dataset for follow-up studies on individual differences in idiom processing.
  • Tilot, A. K., Khramtsova, E. A., Liang, D., Grasby, K. L., Jahanshad, N., Painter, J., Colodro-Conde, L., Bralten, J., Hibar, D. P., Lind, P. A., Liu, S., Brotman, S. M., Thompson, P. M., Medland, S. E., Macciardi, F., Stranger, B. E., Davis, L. K., Fisher, S. E., & Stein, J. L. (2021). The evolutionary history of common genetic variants influencing human cortical surface area. Cerebral Cortex, 31(4), 1873-1887. doi:10.1093/cercor/bhaa327.

    Abstract

    Structural brain changes along the lineage leading to modern Homo sapiens contributed to our distinctive cognitive and social abilities. However, the evolutionarily relevant molecular variants impacting key aspects of neuroanatomy are largely unknown. Here, we integrate evolutionary annotations of the genome at diverse timescales with common variant associations from large-scale neuroimaging genetic screens. We find that alleles with evidence of recent positive polygenic selection over the past 2000–3000 years are associated with increased surface area (SA) of the entire cortex, as well as specific regions, including those involved in spoken language and visual processing. Therefore, polygenic selective pressures impact the structure of specific cortical areas even over relatively recent timescales. Moreover, common sequence variation within human gained enhancers active in the prenatal cortex is associated with postnatal global SA. We show that such variation modulates the function of a regulatory element of the developmentally relevant transcription factor HEY2 in human neural progenitor cells and is associated with structural changes in the inferior frontal cortex. These results indicate that non-coding genomic regions active during prenatal cortical development are involved in the evolution of human brain structure and identify novel regulatory elements and genes impacting modern human brain structure.
  • Timpson, N. J., Tobias, J. H., Richards, J. B., Soranzo, N., Duncan, E. L., Sims, A.-M., Whittaker, P., Kumanduri, V., Zhai, G., Glaser, B., Eisman, J., Jones, G., Nicholson, G., Prince, R., Seeman, E., Spector, T. D., Brown, M. A., Peltonen, L., Smith, G. D., Deloukas, P., & Evans, D. M. (2009). Common variants in the region around Osterix are associated with bone mineral density and growth in childhood. Human Molecular Genetics, 18(8), 1510-1517. doi:10.1093/hmg/ddp052.

    Abstract

    Peak bone mass achieved in adolescence is a determinant of bone mass in later life. In order to identify genetic variants affecting bone mineral density (BMD), we performed a genome-wide association study of BMD and related traits in 1518 children from the Avon Longitudinal Study of Parents and Children (ALSPAC). We compared results with a scan of 134 adults with high or low hip BMD. We identified associations with BMD in an area of chromosome 12 containing the Osterix (SP7) locus, a transcription factor responsible for regulating osteoblast differentiation (ALSPAC: P = 5.8 x 10(-4); Australia: P = 3.7 x 10(-4)). This region has previously shown evidence of association with adult hip and lumbar spine BMD in an Icelandic population, as well as nominal association in a UK population. A meta-analysis of these existing studies revealed strong association between SNPs in the Osterix region and adult lumbar spine BMD (P = 9.9 x 10(-11)). In light of these findings, we genotyped a further 3692 individuals from ALSPAC who had whole body BMD and confirmed the association in children as well (P = 5.4 x 10(-5)). Moreover, all SNPs were related to height in ALSPAC children, but not weight or body mass index, and when height was included as a covariate in the regression equation, the association with total body BMD was attenuated. We conclude that genetic variants in the region of Osterix are associated with BMD in children and adults probably through primary effects on growth.
  • Todorova, L., & Neville, D. A. (2020). Associative and identity words promote the speed of visual categorization: A hierarchical drift diffusion account. Frontiers in Psychology, 11: 955. doi:10.3389/fpsyg.2020.00955.

    Abstract

    Words can either boost or hinder the processing of visual information, which can lead to facilitation or interference of the behavioral response. We investigated the stage (response execution or target processing) of verbal interference/facilitation in the response priming paradigm with a gender categorization task. Participants in our study were asked to judge whether the presented stimulus was a female or male face that was briefly preceded by a gender word either congruent (prime: “man,” target: “man”), incongruent (prime: “woman,” target: “man”) or neutral (prime: “day,” target: “man”) with respect to the face stimulus. We investigated whether related word-picture pairs resulted in faster reaction times in comparison to the neutral word-picture pairs (facilitation) and whether unrelated word-picture pairs resulted in slower reaction times in comparison to neutral word-picture pairs (interference). We further examined whether these effects (if any) map onto response conflict or aspects of target processing. In addition, identity (“man,” “woman”) and associative (“tie,” “dress”) primes were introduced to investigate the cognitive mechanisms of semantic and Stroop-like effects in response priming (introduced respectively by associations and identity words). We analyzed responses and reaction times using the drift diffusion model to examine the effect of facilitation and/or interference as a function of the prime type. We found that regardless of prime type words introduce a facilitatory effect, which maps to the processes of visual attention and response execution.
  • Todorova, L., Neville, D. A., & Piai, V. (2020). Lexical-semantic and executive deficits revealed by computational modelling: A drift diffusion model perspective. Neuropsychologia, 146: 107560. doi:10.1016/j.neuropsychologia.2020.107560.

    Abstract

    Flexible language use requires coordinated functioning of two systems: conceptual representations and control. The interaction between the two systems can be observed when people are asked to match a word to a picture. Participants are slower and less accurate for related word-picture pairs (word: banana, picture: apple) relative to unrelated pairs (word: banjo, picture: apple). The mechanism underlying interference however is still unclear. We analyzed word-picture matching (WPM) performance of patients with stroke-induced lesions to the left-temporal (N = 5) or left-frontal cortex (N = 5) and matched controls (N = 12) using the drift diffusion model (DDM). In DDM, the process of making a decision is described as the stochastic accumulation of evidence towards a response. The parameters of the DDM model that characterize this process are decision threshold, drift rate, starting point and non-decision time, each of which bears cognitive interpretability. We compared the estimated model parameters from controls and patients to investigate the mechanisms of WPM interference. WPM performance in controls was explained by the amount of information needed to make a decision (decision threshold): a higher threshold was associated with related word-picture pairs relative to unrelated ones. No difference was found in the quality of the evidence (drift rate). This suggests an executive rather than semantic mechanism underlying WPM interference. Both patients with temporal and frontal lesions exhibited both increased drift rate and decision threshold for unrelated pairs relative to related ones. Left-frontal and temporal damage affected the computations required by WPM similarly, resulting in systematic deficits across lexical-semantic memory and executive functions. These results support a diverse but interactive role of lexical-semantic memory and semantic control mechanisms.

    Additional information

    supplementary material
  • Todorova, L. (2021). Language bias in visually driven decisions: Computational neurophysiological mechanisms. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Torreira, F., & Ernestus, M. (2009). Probabilistic effects on French [t] duration. In Proceedings of the 10th Annual Conference of the International Speech Communication Association (Interspeech 2009) (pp. 448-451). Causal Productions Pty Ltd.

    Abstract

    The present study shows that [t] consonants are affected by probabilistic factors in a syllable-timed language such as French, in spontaneous as well as in journalistic speech. Study 1 showed a word bigram frequency effect in spontaneous French, but its exact nature depended on the corpus on which the probabilistic measures were based. Study 2 investigated journalistic speech and showed an effect of the joint frequency of the test word and its following word. We discuss the possibility that these probabilistic effects are due to the speaker’s planning of upcoming words, and to the speaker’s adaptation to the listener’s needs.
  • Torres Borda, L., Jadoul, Y., Rasilo, H., Salazar-Casals, A., & Ravignani, A. (2021). Vocal plasticity in harbour seal pups. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 376(1840): 20200456. doi:10.1098/rstb.2020.0456.

    Abstract

    Vocal plasticity can occur in response to environmental and biological factors, including conspecifics' vocalizations and noise. Pinnipeds are one of the few mammalian groups capable of vocal learning, and are therefore relevant to understanding the evolution of vocal plasticity in humans and other animals. Here, we investigate the vocal plasticity of harbour seals (Phoca vitulina), a species with vocal learning abilities observed in adulthood but not puppyhood. To evaluate early mammalian vocal development, we tested 1- to 3-week-old seal pups. We tailored noise playbacks to this species and age to induce seal pups to shift their fundamental frequency (f0), rather than adapt call amplitude or temporal characteristics. We exposed individual pups to low- and high-intensity bandpass-filtered noise, which spanned—and masked—their typical range of f0; simultaneously, we recorded pups' spontaneous calls. Unlike most mammals, pups modified their vocalizations by lowering their f0 in response to increased noise. This modulation was precise and adapted to the particular experimental manipulation of the noise condition. In addition, higher levels of noise induced less dispersion around the mean f0, suggesting that pups may have actively focused their phonatory efforts to target lower frequencies. Noise did not seem to affect call amplitude. However, one seal showed two characteristics of the Lombard effect known for human speech in noise: significant increase in call amplitude and flattening of spectral tilt. Our relatively low noise levels may have favoured f0 modulation while inhibiting amplitude adjustments. This lowering of f0 is unusual, as most animals commonly display no such f0 shift. Our data represent a relatively rare case in mammalian neonates, and have implications for the evolution of vocal plasticity and vocal learning across species, including humans.

    Additional information

    supplement
  • Tourtouri, E. N., Delogu, F., & Crocker, M. W. (2021). Rational Redundancy in Referring Expressions: Evidence from Event-related Potentials. Cognitive Science, 45(12): e13071. doi:10.1111/cogs.13071.

    Abstract

    In referential communication, Grice's Maxim of Quantity is thought to imply that utterances conveying unnecessary information should incur comprehension difficulties. There is, however, considerable evidence that speakers frequently encode redundant information in their referring expressions, raising the question as to whether such overspecifications hinder listeners' processing. Evidence from previous work is inconclusive, and mostly comes from offline studies. In this article, we present two event-related potential (ERP) experiments, investigating the real-time comprehension of referring expressions that contain redundant adjectives in complex visual contexts. Our findings provide support for both Gricean and bounded-rational accounts. We argue that these seemingly incompatible results can be reconciled if common ground is taken into account. We propose a bounded-rational account of overspecification, according to which even redundant words can be beneficial to comprehension to the extent that they facilitate the reduction of listeners' uncertainty regarding the target referent.
  • Tourtouri, E. N. (2020). Rational redundancy in situated communication. PhD Thesis, Saarland University, Saarbrücken.

    Abstract

    Contrary to the Gricean maxims of Quantity (Grice, 1975), it has been repeatedly shown that speakers often include redundant information in their utterances (over- specifications). Previous research on referential communication has long debated whether this redundancy is the result of speaker-internal or addressee-oriented processes, while it is also unclear whether referential redundancy hinders or facilitates comprehension. We present a bounded-rational account of referential redundancy, according to which any word in an utterance, even if it is redundant, can be beneficial to comprehension, to the extent that it facilitates the reduction of listeners’ uncertainty regarding the target referent in a co-present visual scene. Information-theoretic metrics, such as Shannon’s entropy (Shannon, 1948), were employed in order to quantify this uncertainty in bits of information, and gain an estimate of the cognitive effort related to referential processing. Under this account, speakers may, therefore, utilise redundant adjectives in order to reduce the visually-determined entropy (and thereby their listeners’ cognitive effort) more uniformly across their utterances. In a series of experiments, we examined both the comprehension and the production of over-specifications in complex visual contexts. Our findings are in line with the bounded-rational account. 
Specifically, we present evidence that: (a) in view of complex visual scenes, listeners’ processing and identification of the target referent may be facilitated by the use of redundant adjectives, as well as by a more uniform reduction of uncertainty across the utterance, and (b) that, while both speaker-internal and addressee-oriented processes are at play in the production of over-specifications, listeners’ processing concerns may also influence the encoding of redundant adjectives, at least for some speakers, who encode redundant adjectives more frequently when these adjectives contribute to a more uniform reduction of referential entropy.
  • Trilsbeek, P. (2004). Report from DoBeS training week. Language Archive Newsletter, 1(3), 12-12.
  • Trilsbeek, P. (2004). DoBeS Training Course. Language Archive Newsletter, 1(2), 6-6.
  • Trilsbeek, P., & Van Uytvanck, D. (2009). Regional archives and community portals. IASA Journal, 32, 69-73.
  • Trompenaars, T., Kaluge, T. A., Sarabi, R., & De Swart, P. (2021). Cognitive animacy and its relation to linguistic animacy: Evidence from Japanese and Persian. Language Sciences, 86: 101399. doi:10.1016/j.langsci.2021.101399.

    Abstract

    Animacy, commonly defined as the distinction between living and non-living entities, is a useful notion in cognitive science and linguistics employed to describe and predict variation in psychological and linguistic behaviour. In the (psycho)linguistics literature we find linguistic animacy dichotomies which are (implicitly) assumed to correspond to biological dichotomies. We argue this is problematic, as it leaves us without a cognitively grounded, universal description for non-prototypical cases. We show that ‘animacy’ in language can be better understood as universally emerging from a gradual, cognitive property by collecting animacy ratings for a great range of nouns from Japanese and Persian. We used these cognitive ratings in turn to predict linguistic variation in these languages traditionally explained through dichotomous distinctions. We show that whilst (speakers of) languages may subtly differ in their conceptualisation of animacy, universality may be found in the process of mapping conceptual animacy to linguistic variation.
  • Trompenaars, T. (2021). Bringing stories to life: Animacy in narrative and processing. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Trujillo, J. P., & Holler, J. (2021). The kinematics of social action: Visual signals provide cues for what interlocutors do in conversation. Brain Sciences, 11: 996. doi:10.3390/brainsci11080996.

    Abstract

    During natural conversation, people must quickly understand the meaning of what the other speaker is saying. This concerns not just the semantic content of an utterance, but also the social action (i.e., what the utterance is doing—requesting information, offering, evaluating, checking mutual understanding, etc.) that the utterance is performing. The multimodal nature of human language raises the question of whether visual signals may contribute to the rapid processing of such social actions. However, while previous research has shown that how we move reveals the intentions underlying instrumental actions, we do not know whether the intentions underlying fine-grained social actions in conversation are also revealed in our bodily movements. Using a corpus of dyadic conversations combined with manual annotation and motion tracking, we analyzed the kinematics of the torso, head, and hands during the asking of questions. Manual annotation categorized these questions into six more fine-grained social action types (i.e., request for information, other-initiated repair, understanding check, stance or sentiment, self-directed, active participation). We demonstrate, for the first time, that the kinematics of the torso, head and hands differ between some of these different social action categories based on a 900 ms time window that captures movements starting slightly prior to or within 600 ms after utterance onset. These results provide novel insights into the extent to which our intentions shape the way that we move, and provide new avenues for understanding how this phenomenon may facilitate the fast communication of meaning in conversational interaction, social action, and conversation.

    Additional information

    analyses scripts
  • Trujillo, J. P., Ozyurek, A., Holler, J., & Drijvers, L. (2021). Speakers exhibit a multimodal Lombard effect in noise. Scientific Reports, 11: 16721. doi:10.1038/s41598-021-95791-0.

    Abstract

    In everyday conversation, we are often challenged with communicating in non-ideal settings, such as in noise. Increased speech intensity and larger mouth movements are used to overcome noise in constrained settings (the Lombard effect). How we adapt to noise in face-to-face interaction, the natural environment of human language use, where manual gestures are ubiquitous, is currently unknown. We asked Dutch adults to wear headphones with varying levels of multi-talker babble while attempting to communicate action verbs to one another. Using quantitative motion capture and acoustic analyses, we found that (1) noise is associated with increased speech intensity and enhanced gesture kinematics and mouth movements, and (2) acoustic modulation only occurs when gestures are not present, while kinematic modulation occurs regardless of co-occurring speech. Thus, in face-to-face encounters the Lombard effect is not constrained to speech but is a multimodal phenomenon where the visual channel carries most of the communicative burden.

    Additional information

    supplementary material
  • Trujillo, J. P., Ozyurek, A., Kan, C. C., Sheftel-Simanova, I., & Bekkering, H. (2021). Differences in the production and perception of communicative kinematics in autism. Autism Research, 14(12), 2640-2653. doi:10.1002/aur.2611.

    Abstract

    In human communication, social intentions and meaning are often revealed in the way we move. In this study, we investigate the flexibility of human communication in terms of kinematic modulation in a clinical population, namely, autistic individuals. The aim of this study was twofold: to assess (a) whether communicatively relevant kinematic features of gestures differ between autistic and neurotypical individuals, and (b) if autistic individuals use communicative kinematic modulation to support gesture recognition. We tested autistic and neurotypical individuals on a silent gesture production task and a gesture comprehension task. We measured movement during the gesture production task using a Kinect motion tracking device in order to determine if autistic individuals differed from neurotypical individuals in their gesture kinematics. For the gesture comprehension task, we assessed whether autistic individuals used communicatively relevant kinematic cues to support recognition. This was done by using stick-light figures as stimuli and testing for a correlation between the kinematics of these videos and recognition performance. We found that (a) silent gestures produced by autistic and neurotypical individuals differ in communicatively relevant kinematic features, such as the number of meaningful holds between movements, and (b) while autistic individuals are overall unimpaired at recognizing gestures, they processed repetition and complexity, measured as the amount of submovements perceived, differently than neurotypicals do. These findings highlight how subtle aspects of neurotypical behavior can be experienced differently by autistic individuals. They further demonstrate the relationship between movement kinematics and social interaction in high-functioning autistic individuals.

    Additional information

    supporting information
  • Trujillo, J. P., Levinson, S. C., & Holler, J. (2021). Visual information in computer-mediated interaction matters: Investigating the association between the availability of gesture and turn transition timing in conversation. In M. Kurosu (Ed.), Human-Computer Interaction. Design and User Experience Case Studies. HCII 2021 (pp. 643-657). Cham: Springer. doi:10.1007/978-3-030-78468-3_44.

    Abstract

    Natural human interaction involves the fast-paced exchange of speaker turns. Crucially, if a next speaker waited with planning their turn until the current speaker was finished, language production models would predict much longer turn transition times than what we observe. Next speakers must therefore prepare their turn in parallel to listening. Visual signals likely play a role in this process, for example by helping the next speaker to process the ongoing utterance and thus prepare an appropriately-timed response.

    To understand how visual signals contribute to the timing of turn-taking, and to move beyond the mostly qualitative studies of gesture in conversation, we examined unconstrained, computer-mediated conversations between 20 pairs of participants while systematically manipulating speaker visibility. Using motion tracking and manual gesture annotation, we assessed 1) how visibility affected the timing of turn transitions, and 2) whether use of co-speech gestures and 3) the communicative kinematic features of these gestures were associated with changes in turn transition timing.

    We found that 1) decreased visibility was associated with less tightly timed turn transitions, and 2) the presence of gestures was associated with more tightly timed turn transitions across visibility conditions. Finally, 3) structural and salient kinematics contributed to gesture’s facilitatory effect on turn transition times.

    Our findings suggest that speaker visibility--and especially the presence and kinematic form of gestures--during conversation contributes to the temporal coordination of conversational turns in computer-mediated settings. Furthermore, our study demonstrates that it is possible to use naturalistic conversation and still obtain controlled results.
  • Trujillo, J. P. (2020). Movement speaks for itself: The kinematic and neural dynamics of communicative action and gesture. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Trujillo, J. P., Simanova, I., Bekkering, H., & Ozyurek, A. (2020). The communicative advantage: How kinematic signaling supports semantic comprehension. Psychological Research, 84, 1897-1911. doi:10.1007/s00426-019-01198-y.

    Abstract

    Humans are unique in their ability to communicate information through representational gestures which visually simulate an action (e.g., moving hands as if opening a jar). Previous research indicates that the intention to communicate modulates the kinematics (e.g., velocity, size) of such gestures. If and how this modulation influences addressees’ comprehension of gestures has not been investigated. Here we ask whether communicative kinematic modulation enhances semantic comprehension (i.e., identification) of gestures. We additionally investigate whether any comprehension advantage is due to enhanced early identification or late identification. Participants (n = 20) watched videos of representational gestures produced in a more- (n = 60) or less-communicative (n = 60) context and performed a forced-choice recognition task. We tested the isolated role of kinematics by removing visibility of actors’ faces in Experiment I, and by reducing the stimuli to stick-light figures in Experiment II. Three video lengths were used to disentangle early identification from late identification. Accuracy and response time quantified main effects. Kinematic modulation was tested for correlations with task performance. We found higher gesture identification performance in more- compared to less-communicative gestures. However, early identification was only enhanced within a full visual context, while late identification occurred even when viewing isolated kinematics. Additionally, temporally segmented acts with more post-stroke holds were associated with higher accuracy. Our results demonstrate that communicative signaling, interacting with other visual cues, generally supports gesture identification, while kinematic modulation specifically enhances late identification in the absence of other cues. Results provide insights into mutual understanding processes as well as creating artificial communicative agents.

    Additional information

    Supplementary material
  • Trujillo, J. P., Simanova, I., Ozyurek, A., & Bekkering, H. (2020). Seeing the unexpected: How brains read communicative intent through kinematics. Cerebral Cortex, 30(3), 1056-1067. doi:10.1093/cercor/bhz148.

    Abstract

    Social interaction requires us to recognize subtle cues in behavior, such as kinematic differences in actions and gestures produced with different social intentions. Neuroscientific studies indicate that the putative mirror neuron system (pMNS) in the premotor cortex and mentalizing system (MS) in the medial prefrontal cortex support inferences about contextually unusual actions. However, little is known regarding the brain dynamics of these systems when viewing communicatively exaggerated kinematics. In an event-related functional magnetic resonance imaging experiment, 28 participants viewed stick-light videos of pantomime gestures, recorded in a previous study, which contained varying degrees of communicative exaggeration. Participants made either social or nonsocial classifications of the videos. Using participant responses and pantomime kinematics, we modeled the probability of each video being classified as communicative. Interregion connectivity and activity were modulated by kinematic exaggeration, depending on the task. In the Social Task, communicativeness of the gesture increased activation of several pMNS and MS regions and modulated top-down coupling from the MS to the pMNS, but engagement of the pMNS and MS was not found in the nonsocial task. Our results suggest that expectation violations can be a key cue for inferring communicative intention, extending previous findings from wholly unexpected actions to more subtle social signaling.
  • Tsoukala, C., Frank, S. L., Van Den Bosch, A., Valdés Kroff, J., & Broersma, M. (2021). Modeling the auxiliary phrase asymmetry in code-switched Spanish–English. Bilingualism: Language and Cognition, 24(2), 271-280. doi:10.1017/S1366728920000449.

    Abstract

    Spanish–English bilinguals rarely code-switch in the perfect structure between the Spanish auxiliary haber (“to have”) and the participle (e.g., “Ella ha voted”; “She has voted”). However, they are somewhat likely to switch in the progressive structure between the Spanish auxiliary estar (“to be”) and the participle (“Ella está voting”; “She is voting”). This phenomenon is known as the “auxiliary phrase asymmetry”. One hypothesis as to why this occurs is that estar has more semantic weight as it also functions as an independent verb, whereas haber is almost exclusively used as an auxiliary verb. To test this hypothesis, we employed a connectionist model that produces spontaneous code-switches. Through simulation experiments, we showed that i) the asymmetry emerges in the model and that ii) the asymmetry disappears when using haber also as a main verb, which adds semantic weight. Therefore, the lack of semantic weight of haber may indeed cause the asymmetry.
  • Tsoukala, C., Broersma, M., Van den Bosch, A., & Frank, S. L. (2021). Simulating code-switching using a neural network model of bilingual sentence production. Computational Brain & Behavior, 4, 87-100. doi:10.1007/s42113-020-00088-6.

    Abstract

    Code-switching is the alternation from one language to the other during bilingual speech. We present a novel method of researching this phenomenon using computational cognitive modeling. We trained a neural network of bilingual sentence production to simulate early balanced Spanish–English bilinguals, late speakers of English who have Spanish as a dominant native language, and late speakers of Spanish who have English as a dominant native language. The model produced code-switches even though it was not exposed to code-switched input. The simulations predicted how code-switching patterns differ between early balanced and late non-balanced bilinguals; the balanced bilingual simulation code-switches considerably more frequently, which is in line with what has been observed in human speech production. Additionally, we compared the patterns produced by the simulations with two corpora of spontaneous bilingual speech and identified noticeable commonalities and differences. To our knowledge, this is the first computational cognitive model simulating the code-switched production of non-balanced bilinguals and comparing the simulated production of balanced and non-balanced bilinguals with that of human bilinguals.

    Additional information

    dual-path model
  • Tsoukala, C., Frank, S. L., Van den Bosch, A., Kroff, J. V., & Broersma, M. (2020). Simulating Spanish-English code-switching: El modelo está generating code-switches. In E. Chersoni, C. Jacobs, Y. Oseki, L. Prévot, & E. Santus (Eds.), Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics (pp. 20-29). Stroudsburg, PA, USA: Association for Computational Linguistics (ACL).

    Abstract

    Multilingual speakers are able to switch from one language to the other (“code-switch”) between or within sentences. Because the underlying cognitive mechanisms are not well understood, in this study we use computational cognitive modeling to shed light on the process of code-switching. We employed the Bilingual Dual-path model, a Recurrent Neural Network of bilingual sentence production (Tsoukala et al., 2017) and simulated sentence production in simultaneous Spanish-English bilinguals. Our first goal was to investigate whether the model would code-switch without being exposed to code-switched training input. The model indeed produced code-switches even without any exposure to such input and the patterns of code-switches are in line with earlier linguistic work (Poplack, 1980). The second goal of this study was to investigate an auxiliary phrase asymmetry that exists in Spanish-English code-switched production. Using this cognitive model, we examined a possible cause for this asymmetry. To our knowledge, this is the first computational cognitive model that aims to simulate code-switched sentence production.
  • Tsoukala, C. (2021). Bilingual sentence production and code-switching: Neural network simulations. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Tsuji, S., Cristia, A., Frank, M. C., & Bergmann, C. (2020). Addressing publication bias in Meta-Analysis: Empirical findings from community-augmented meta-analyses of infant language development. Zeitschrift für Psychologie, 228(1), 50-61. doi:10.1027/2151-2604/a000393.

    Abstract

    Meta-analyses are an indispensable research synthesis tool for characterizing bodies of literature and advancing theories. One important open question concerns the inclusion of unpublished data into meta-analyses. Finding such studies can be effortful, but their exclusion potentially leads to consequential biases like overestimation of a literature’s mean effect. We address two questions about unpublished data using MetaLab, a collection of community-augmented meta-analyses focused on developmental psychology. First, we assess to what extent MetaLab datasets include gray literature, and by what search strategies they are unearthed. We find that an average of 11% of datapoints are from unpublished literature; standard search strategies like database searches, complemented with individualized approaches like including authors’ own data, contribute the majority of this literature. Second, we analyze the effect of including versus excluding unpublished literature on estimates of effect size and publication bias, and find this decision does not affect outcomes. We discuss lessons learned and implications.

    Additional information

    Link to Dataset on PsychArchives
  • Tulling, M., Law, R., Cournane, A., & Pylkkänen, L. (2020). Neural correlates of modal displacement and discourse-updating under (un)certainty. eNeuro, 8(1): 0290-20.2020. doi:10.1523/ENEURO.0290-20.2020.

    Abstract

    A hallmark of human thought is the ability to think about not just the actual world, but also about alternative ways the world could be. One way to study this contrast is through language. Language has grammatical devices for expressing possibilities and necessities, such as the words might or must. With these devices, called “modal expressions,” we can study the actual vs. possible contrast in a highly controlled way. While factual utterances such as “There is a monster under my bed” update the here-and-now of a discourse model, a modal version of this sentence, “There might be a monster under my bed,” displaces from the here-and-now and merely postulates a possibility. We used magnetoencephalography (MEG) to test whether the processes of discourse updating and modal displacement dissociate in the brain. Factual and modal utterances were embedded in short narratives, and across two experiments, factual expressions increased the measured activity over modal expressions. However, the localization of the increase appeared to depend on perspective: signal localizing in right temporo-parietal areas increased when updating others’ beliefs, while frontal medial areas seem sensitive to updating one’s own beliefs. The presence of modal displacement did not elevate MEG signal strength in any of our analyses. In sum, this study identifies potential neural signatures of the process by which facts get added to our mental representation of the world.

    Additional information

    Link to Preprint on BioRxiv
  • Tyler, M., & Cutler, A. (2009). Cross-language differences in cue use for speech segmentation. Journal of the Acoustical Society of America, 126, 367-376. doi:10.1121/1.3129127.

    Abstract

    Two artificial-language learning experiments directly compared English, French, and Dutch listeners’ use of suprasegmental cues for continuous-speech segmentation. In both experiments, listeners heard unbroken sequences of consonant-vowel syllables, composed of recurring three- and four-syllable “words.” These words were demarcated by (a) no cue other than transitional probabilities induced by their recurrence, (b) a consistent left-edge cue, or (c) a consistent right-edge cue. Experiment 1 examined a vowel lengthening cue. All three listener groups benefited from this cue in right-edge position; none benefited from it in left-edge position. Experiment 2 examined a pitch-movement cue. English listeners used this cue in left-edge position, French listeners used it in right-edge position, and Dutch listeners used it in both positions. These findings are interpreted as evidence of both language-universal and language-specific effects. Final lengthening is a language-universal effect expressing a more general (non-linguistic) mechanism. Pitch movement expresses prominence which has characteristically different placements across languages: typically at right edges in French, but at left edges in English and Dutch. Finally, stress realization in English versus Dutch encourages greater attention to suprasegmental variation by Dutch than by English listeners, allowing Dutch listeners to benefit from an informative pitch-movement cue even in an uncharacteristic position.
  • Uddén, J., Araújo, S., Forkstam, C., Ingvar, M., Hagoort, P., & Petersson, K. M. (2009). A matter of time: Implicit acquisition of recursive sequence structures. In N. Taatgen, & H. Van Rijn (Eds.), Proceedings of the Thirty-First Annual Conference of the Cognitive Science Society (pp. 2444-2449).

    Abstract

    A dominant hypothesis in empirical research on the evolution of language is the following: the fundamental difference between animal and human communication systems is captured by the distinction between regular and more complex non-regular grammars. Studies reporting successful artificial grammar learning of nested recursive structures and imaging studies of the same have methodological shortcomings since they typically allow explicit problem solving strategies and this has been shown to account for the learning effect in subsequent behavioral studies. The present study overcomes these shortcomings by using subtle violations of agreement structure in a preference classification task. In contrast to the studies conducted so far, we use an implicit learning paradigm, allowing the time needed for both abstraction processes and consolidation to take place. Our results demonstrate robust implicit learning of recursively embedded structures (context-free grammar) and recursive structures with cross-dependencies (context-sensitive grammar) in an artificial grammar learning task spanning 9 days.

    Keywords: Implicit artificial grammar learning; centre embedded; cross-dependency; implicit learning; context-sensitive grammar; context-free grammar; regular grammar; non-regular grammar
  • Uhlmann, M. (2020). Neurobiological models of sentence processing. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Ullas, S., Formisano, E., Eisner, F., & Cutler, A. (2020). Interleaved lexical and audiovisual information can retune phoneme boundaries. Attention, Perception & Psychophysics, 82, 2018-2026. doi:10.3758/s13414-019-01961-8.

    Abstract

    To adapt to situations in which speech perception is difficult, listeners can adjust boundaries between phoneme categories using perceptual learning. Such adjustments can draw on lexical information in surrounding speech, or on visual cues via speech-reading. In the present study, listeners proved they were able to flexibly adjust the boundary between two plosive/stop consonants, /p/-/t/, using both lexical and speech-reading information and given the same experimental design for both cue types. Videos of a speaker pronouncing pseudo-words and audio recordings of Dutch words were presented in alternating blocks of either stimulus type. Listeners were able to switch between cues to adjust phoneme boundaries, and resulting effects were comparable to results from listeners receiving only a single source of information. Overall, audiovisual cues (i.e., the videos) produced the stronger effects, commensurate with their applicability for adapting to noisy environments. Lexical cues were able to induce effects with fewer exposure stimuli and a changing phoneme bias, in a design unlike most prior studies of lexical retuning. While lexical retuning effects were relatively weaker compared to audiovisual recalibration, this discrepancy could reflect how lexical retuning may be more suitable for adapting to speakers than to environments. Nonetheless, the presence of the lexical retuning effects suggests that it may be invoked at a faster rate than previously seen. In general, this technique has further illuminated the robustness of adaptability in speech perception, and offers the potential to enable further comparisons across differing forms of perceptual learning.
  • Ullas, S., Formisano, E., Eisner, F., & Cutler, A. (2020). Audiovisual and lexical cues do not additively enhance perceptual adaptation. Psychonomic Bulletin & Review, 27, 707-715. doi:10.3758/s13423-020-01728-5.

    Abstract

    When listeners experience difficulty in understanding a speaker, lexical and audiovisual (or lipreading) information can be a helpful source of guidance. These two types of information embedded in speech can also guide perceptual adjustment, also known as recalibration or perceptual retuning. With retuning or recalibration, listeners can use these contextual cues to temporarily or permanently reconfigure internal representations of phoneme categories to adjust to and understand novel interlocutors more easily. These two types of perceptual learning, previously investigated in large part separately, are highly similar in allowing listeners to use speech-external information to make phoneme boundary adjustments. This study explored whether the two sources may work in conjunction to induce adaptation, thus emulating real life, in which listeners are indeed likely to encounter both types of cue together. Listeners who received combined audiovisual and lexical cues showed perceptual learning effects similar to listeners who only received audiovisual cues, while listeners who received only lexical cues showed weaker effects compared with the two other groups. The combination of cues did not lead to additive retuning or recalibration effects, suggesting that lexical and audiovisual cues operate differently with regard to how listeners use them for reshaping perceptual categories. Reaction times did not significantly differ across the three conditions, so none of the forms of adjustment were either aided or hindered by processing time differences. Mechanisms underlying these forms of perceptual learning may diverge in numerous ways despite similarities in experimental applications.

    Additional information

    Data and materials
  • Ullas, S., Hausfeld, L., Cutler, A., Eisner, F., & Formisano, E. (2020). Neural correlates of phonetic adaptation as induced by lexical and audiovisual context. Journal of Cognitive Neuroscience, 32(11), 2145-2158. doi:10.1162/jocn_a_01608.

    Abstract

    When speech perception is difficult, one way listeners adjust is by reconfiguring phoneme category boundaries, drawing on contextual information. Both lexical knowledge and lipreading cues are used in this way, but it remains unknown whether these two differing forms of perceptual learning are similar at a neural level. This study compared phoneme boundary adjustments driven by lexical or audiovisual cues, using ultra-high-field 7-T fMRI. During imaging, participants heard exposure stimuli and test stimuli. Exposure stimuli for lexical retuning were audio recordings of words, and those for audiovisual recalibration were audio–video recordings of lip movements during utterances of pseudowords. Test stimuli were ambiguous phonetic strings presented without context, and listeners reported what phoneme they heard. Reports reflected phoneme biases in preceding exposure blocks (e.g., more reported /p/ after /p/-biased exposure). Analysis of corresponding brain responses indicated that both forms of cue use were associated with a network of activity across the temporal cortex, plus parietal, insula, and motor areas. Audiovisual recalibration also elicited significant occipital cortex activity despite the lack of visual stimuli. Activity levels in several ROIs also covaried with strength of audiovisual recalibration, with greater activity accompanying larger recalibration shifts. Similar activation patterns appeared for lexical retuning, but here, no significant ROIs were identified. Audiovisual and lexical forms of perceptual learning thus induce largely similar brain response patterns. However, audiovisual recalibration involves additional visual cortex contributions, suggesting that previously acquired visual information (on lip movements) is retrieved and deployed to disambiguate auditory perception.
  • Ünal, E., & Papafragou, A. (2020). Relations between language and cognition: Evidentiality and sources of knowledge. Topics in Cognitive Science, 12(1), 115-135. doi:10.1111/tops.12355.

    Abstract

    Understanding and acquiring language involve mapping language onto conceptual representations. Nevertheless, several issues remain unresolved with respect to (a) how such mappings are performed, and (b) whether conceptual representations are susceptible to cross‐linguistic influences. In this article, we discuss these issues focusing on the domain of evidentiality and sources of knowledge. Empirical evidence in this domain yields growing support for the proposal that linguistic categories of evidentiality are tightly linked to, build on, and reflect conceptual representations of sources of knowledge that are shared across speakers of different languages.
  • Urbanus, B. H. A., Peter, S., Fisher, S. E., & De Zeeuw, C. I. (2020). Region-specific Foxp2 deletions in cortex, striatum or cerebellum cannot explain vocalization deficits observed in spontaneous global knockouts. Scientific Reports, 10: 21631. doi:10.1038/s41598-020-78531-8.

    Abstract

    FOXP2 has been identified as a gene related to speech in humans, based on rare mutations that yield significant impairments in speech at the level of both motor performance and language comprehension. Disruptions of the murine orthologue Foxp2 in mouse pups have been shown to interfere with production of ultrasonic vocalizations (USVs). However, it remains unclear which structures are responsible for these deficits. Here, we show that conditional knockout mice with selective Foxp2 deletions targeting the cerebral cortex, striatum or cerebellum, three key sites of motor control with robust neural gene expression, do not recapture the profile of pup USV deficits observed in mice with global disruptions of this gene. Moreover, we observed that global Foxp2 knockout pups show substantive reductions in USV production as well as an overproduction of short broadband noise “clicks”, which was not present in the brain region-specific knockouts. These data indicate that deficits of Foxp2 expression in the cortex, striatum or cerebellum cannot solely explain the disrupted vocalization behaviours in global Foxp2 knockouts. Our findings raise the possibility that the impact of Foxp2 disruption on USV is mediated at least in part by effects of this gene on the anatomical prerequisites for vocalizing.
  • Vágvölgyi, R., Bergström, K., Bulajić, A., Klatte, M., Fernandes, T., Grosche, M., Huettig, F., Rüsseler, J., & Lachmann, T. (2021). Functional illiteracy and developmental dyslexia: Looking for common roots. A systematic review. Journal of Cultural Cognitive Science, 5, 159-179. doi:10.1007/s41809-021-00074-9.

    Abstract

    A considerable proportion of the population in more economically developed countries is functionally illiterate (i.e., low literate). Despite some years of schooling and basic reading skills, these individuals cannot properly read and write and, as a consequence, have problems understanding even short texts. An often-discussed approach (Greenberg et al., 1997) assumes weak phonological processing skills coupled with untreated developmental dyslexia as possible causes of functional illiteracy. Although there is some data suggesting commonalities between low literacy and developmental dyslexia, it is still not clear whether these reflect shared consequences (i.e., cognitive and behavioral profile) or shared causes. The present systematic review aims at exploring the similarities and differences identified in empirical studies investigating both functional illiterate and developmental dyslexic samples. Nine electronic databases were searched in order to identify all quantitative studies published in English or German. Although a broad search strategy and few limitations were applied, only 5 studies were identified as adequate from the resulting 9269 references. The results point to the lack of studies directly comparing functional illiterate with developmental dyslexic samples. Moreover, a huge variance has been identified between the studies in how they approached the concept of functional illiteracy, particularly when it came to critical categories such as the applied definition, terminology, criteria for inclusion in the sample, research focus, and outcome measures. The available data highlight the need for more direct comparisons in order to understand to what extent functional illiteracy and dyslexia share common characteristics.

    Additional information

    supplementary materials
  • Vainio, M., Suni, A., Raitio, T., Nurminen, J., Järvikivi, J., & Alku, P. (2009). New method for delexicalization and its application to prosodic tagging for text-to-speech synthesis. In Proceedings of the 10th Annual Conference of the International Speech Communication Association (Interspeech 2009) (pp. 1703-1706).

    Abstract

    This paper describes a new flexible delexicalization method based on glottal excited parametric speech synthesis scheme. The system utilizes inverse filtered glottal flow and all-pole modelling of the vocal tract. The method provides a possibility to retain and manipulate all relevant prosodic features of any kind of speech. Most importantly, the features include voice quality, which has not been properly modeled in earlier delexicalization methods. The functionality of the new method was tested in a prosodic tagging experiment aimed at providing word prominence data for a text-to-speech synthesis system. The experiment confirmed the usefulness of the method and further corroborated earlier evidence that linguistic factors influence the perception of prosodic prominence.
  • Van Berkum, J. J. A., Holleman, B., Nieuwland, M. S., Otten, M., & Murre, J. (2009). Right or wrong? The brain's fast response to morally objectionable statements. Psychological Science, 20, 1092-1099. doi:10.1111/j.1467-9280.2009.02411.x.

    Abstract

    How does the brain respond to statements that clash with a person's value system? We recorded event-related brain potentials while respondents from contrasting political-ethical backgrounds completed an attitude survey on drugs, medical ethics, social conduct, and other issues. Our results show that value-based disagreement is unlocked by language extremely rapidly, within 200 to 250 ms after the first word that indicates a clash with the reader's value system (e.g., "I think euthanasia is an acceptable/unacceptable…"). Furthermore, strong disagreement rapidly influences the ongoing analysis of meaning, which indicates that even very early processes in language comprehension are sensitive to a person's value system. Our results testify to rapid reciprocal links between neural systems for language and for valuation.

    Additional information

    Critical survey statements (in Dutch)
  • Van Alphen, P. M. (2004). Perceptual relevance of prevoicing in Dutch. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.58551.

    Abstract

    In this dissertation the perceptual relevance of prevoicing in Dutch was investigated. Prevoicing is the presence of vocal fold vibration during the closure of initial voiced plosives (negative voice onset time). The presence or absence of prevoicing is generally used to describe the difference between voiced and voiceless Dutch plosives. The first experiment described in this dissertation showed that prevoicing is frequently absent in Dutch and that several factors affect the production of prevoicing. A detailed acoustic analysis of the voicing distinction identified several acoustic correlates of voicing. Prevoicing appeared to be by far the best predictor. Perceptual classification data revealed that prevoicing was indeed the strongest cue that listeners use when classifying plosives as voiced or voiceless. In the cases where prevoicing was absent, other acoustic cues influenced classification, such that some of these tokens were still perceived as being voiced. In the second part of this dissertation the influence of prevoicing variation on spoken-word recognition was examined. In several cross-modal priming experiments two types of prevoicing variation were contrasted: a difference between the presence and absence of prevoicing (6 versus 0 periods of prevoicing) and a difference in the amount of prevoicing (12 versus 6 periods). All these experiments indicated that primes with 12 and 6 periods of prevoicing had the same effect on lexical decisions to the visual targets. The primes without prevoicing had a different effect, but only when their voiceless counterparts were real words. Phonetic detail appears to influence lexical access only when it is useful: In Dutch, the presence versus absence of prevoicing is informative, while the amount of prevoicing is not.

    Additional information

    full text via Radboud Repository
  • Van Berkum, J. J. A. (2009). The neuropragmatics of 'simple' utterance comprehension: An ERP review. In U. Sauerland, & K. Yatsushiro (Eds.), Semantics and pragmatics: From experiment to theory (pp. 276-316). Basingstoke: Palgrave Macmillan.

    Abstract

    In this chapter, I review my EEG research on comprehending sentences in context from a pragmatics-oriented perspective. The review is organized around four questions: (1) When and how do extra-sentential factors such as the prior text, identity of the speaker, or value system of the comprehender affect the incremental sentence interpretation processes indexed by the so-called N400 component of the ERP? (2) When and how do people identify the referents for expressions such as “he” or “the review”, and how do referential processes interact with sense and syntax? (3) How directly pragmatic are the interpretation-relevant ERP effects reported here? (4) Do readers and listeners anticipate upcoming information? One important claim developed in the chapter is that the well-known N400 component, although often associated with ‘semantic integration’, only indirectly reflects the sense-making involved in structure-sensitive dynamic composition of the type studied in semantics and pragmatics. According to the multiple-cause intensified retrieval (MIR) account -- essentially an extension of the memory retrieval account proposed by Kutas and colleagues -- the amplitude of the word-elicited N400 reflects the computational resources used in retrieving the relatively invariant coded meaning stored in semantic long-term memory for, and made available by, the word at hand. Such retrieval becomes more resource-intensive when the coded meanings cued by this word do not match with expectations raised by the relevant interpretive context, but also when certain other relevance signals, such as strong affective connotation or a marked delivery, indicate the need for deeper processing. The most important consequence of this account is that pragmatic modulations of the N400 come about not because the N400 at hand directly reflects a rich compositional-semantic and/or Gricean analysis to make sense of the word’s coded meaning in this particular context, but simply because the semantic and pragmatic implications of the preceding words have already been computed, and now define a less or more helpful interpretive background within which to retrieve coded meaning for the critical word.
  • Van den Brink, D., & Hagoort, P. (2004). The influence of semantic and syntactic context constraints on lexical selection and integration in spoken-word comprehension as revealed by ERPs. Journal of Cognitive Neuroscience, 16(6), 1068-1084. doi:10.1162/0898929041502670.

    Abstract

    An event-related brain potential experiment was carried out to investigate the influence of semantic and syntactic context constraints on lexical selection and integration in spoken-word comprehension. Subjects were presented with constraining spoken sentences that contained a critical word that was either (a) congruent, (b) semantically and syntactically incongruent, but beginning with the same initial phonemes as the congruent critical word, or (c) semantically and syntactically incongruent, beginning with phonemes that differed from the congruent critical word. Relative to the congruent condition, an N200 effect reflecting difficulty in the lexical selection process was obtained in the semantically and syntactically incongruent condition where word onset differed from that of the congruent critical word. Both incongruent conditions elicited a large N400 followed by a left anterior negativity (LAN) time-locked to the moment of word category violation and a P600 effect. These results would best fit within a cascaded model of spoken-word processing, proclaiming an optimal use of contextual information during spoken-word identification by allowing for semantic and syntactic processing to take place in parallel after bottom-up activation of a set of candidates, and lexical integration to proceed with a limited number of candidates that still match the acoustic input.
  • Van Bergen, G., & Hogeweg, L. (2021). Managing interpersonal discourse expectations: a comparative analysis of contrastive discourse particles in Dutch. Linguistics, 59(2), 333-360. doi:10.1515/ling-2021-0020.

    Abstract

    In this article we investigate how speakers manage discourse expectations in dialogue by comparing the meaning and use of three Dutch discourse particles, i.e. wel, toch and eigenlijk, which all express a contrast between their host utterance and a discourse-based expectation. The core meanings of toch, wel and eigenlijk are formally distinguished on the basis of two intersubjective parameters: (i) whether the particle marks alignment or misalignment between speaker and addressee discourse beliefs, and (ii) whether the particle requires an assessment of the addressee’s representation of mutual discourse beliefs. By means of a quantitative corpus study, we investigate to what extent the intersubjective meaning distinctions between wel, toch and eigenlijk are reflected in statistical usage patterns across different social situations. Results suggest that wel, toch and eigenlijk are lexicalizations of distinct generalized politeness strategies when expressing contrast in social interaction. Our findings call for an interdisciplinary approach to discourse particles in order to enhance our understanding of their functions in language.
  • Van Heukelum, S., Tulva, K., Geers, F. E., van Dulm, S., Ruisch, I. H., Mill, J., Viana, J. F., Beckmann, C. F., Buitelaar, J. K., Poelmans, G., Glennon, J. C., Vogt, B. A., Havenith, M. N., & França, A. S. (2021). A central role for anterior cingulate cortex in the control of pathological aggression. Current Biology, 31, 2321-2333.e5. doi:10.1016/j.cub.2021.03.062.

    Abstract

    Controlling aggression is a crucial skill in social species like rodents and humans and has been associated with anterior cingulate cortex (ACC). Here, we directly link the failed regulation of aggression in BALB/cJ mice to ACC hypofunction. We first show that ACC in BALB/cJ mice is structurally degraded: neuron density is decreased, with pervasive neuron death and reactive astroglia. Gene-set enrichment analysis suggested that this process is driven by neuronal degeneration, which then triggers toxic astrogliosis. cFos expression across ACC indicated functional consequences: during aggressive encounters, ACC was engaged in control mice, but not BALB/cJ mice. Chemogenetically activating ACC during aggressive encounters drastically suppressed pathological aggression but left species-typical aggression intact. The network effects of our chemogenetic perturbation suggest that this behavioral rescue is mediated by suppression of amygdala and hypothalamus and activation of mediodorsal thalamus. Together, these findings highlight the central role of ACC in curbing pathological aggression.
  • Ip, H. F., Van der Laan, C. M., Krapohl, E. M. L., Brikell, I., Sánchez-Mora, C., Nolte, I. M., St Pourcain, B., Bolhuis, K., Palviainen, T., Zafarmand, H., Colodro-Conde, L., Gordon, S., Zayats, T., Aliev, F., Jiang, C., Wang, C. A., Saunders, G., Karhunen, V., Hammerschlag, A. R., Adkins, D. E., Border, R., Peterson, R. E., Prinz, J. A., Thiering, E., Seppälä, I., Vilor-Tejedor, N., Ahluwalia, T. S., Day, F. R., Hottenga, J.-J., Allegrini, A. G., Rimfeld, K., Chen, Q., Lu, Y., Martin, J., Soler Artigas, M., Rovira, P., Bosch, R., Español, G., Ramos Quiroga, J. A., Neumann, A., Ensink, J., Grasby, K., Morosoli, J. J., Tong, X., Marrington, S., Middeldorp, C., Scott, J. G., Vinkhuyzen, A., Shabalin, A. A., Corley, R., Evans, L. M., Sugden, K., Alemany, S., Sass, L., Vinding, R., Ruth, K., Tyrrell, J., Davies, G. E., Ehli, E. A., Hagenbeek, F. A., De Zeeuw, E., Van Beijsterveldt, T. C., Larsson, H., Snieder, H., Verhulst, F. C., Amin, N., Whipp, A. M., Korhonen, T., Vuoksimaa, E., Rose, R. J., Uitterlinden, A. G., Heath, A. C., Madden, P., Haavik, J., Harris, J. R., Helgeland, Ø., Johansson, S., Knudsen, G. P. S., Njolstad, P. R., Lu, Q., Rodriguez, A., Henders, A. K., Mamun, A., Najman, J. M., Brown, S., Hopfer, C., Krauter, K., Reynolds, C., Smolen, A., Stallings, M., Wadsworth, S., Wall, T. L., Silberg, J. L., Miller, A., Keltikangas-Järvinen, L., Hakulinen, C., Pulkki-Råback, L., Havdahl, A., Magnus, P., Raitakari, O. T., Perry, J. R. B., Llop, S., Lopez-Espinosa, M.-J., Bønnelykke, K., Bisgaard, H., Sunyer, J., Lehtimäki, T., Arseneault, L., Standl, M., Heinrich, J., Boden, J., Pearson, J., Horwood, L. J., Kennedy, M., Poulton, R., Eaves, L. J., Maes, H. H., Hewitt, J., Copeland, W. E., Costello, E. J., Williams, G. M., Wray, N., Järvelin, M.-R., McGue, M., Iacono, W., Caspi, A., Moffitt, T. E., Whitehouse, A., Pennell, C. E., Klump, K. L., Burt, S. A., Dick, D. M., Reichborn-Kjennerud, T., Martin, N. G., Medland, S. E., Vrijkotte, T., Kaprio, J., Tiemeier, H., Davey Smith, G., Hartman, C. A., Oldehinkel, A. J., Casas, M., Ribasés, M., Lichtenstein, P., Lundström, S., Plomin, R., Bartels, M., Nivard, M. G., & Boomsma, D. I. (2021). Genetic association study of childhood aggression across raters, instruments, and age. Translational Psychiatry, 11: 413. doi:10.1038/s41398-021-01480-x.
  • Van Dijk, C. N. (2021). Cross-linguistic influence during real-time sentence processing in bilingual children and adults. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • van der Burght, C. L., Friederici, A. D., Goucha, T., & Hartwigsen, G. (2021). Pitch accents create dissociable syntactic and semantic expectations during sentence processing. Cognition, 212: 104702. doi:10.1016/j.cognition.2021.104702.

    Abstract

    The language system uses syntactic, semantic, as well as prosodic cues to efficiently guide auditory sentence comprehension. Prosodic cues, such as pitch accents, can build expectations about upcoming sentence elements. This study investigates to what extent syntactic and semantic expectations generated by pitch accents can be dissociated and if so, which cues take precedence when contradictory information is present. We used sentences in which one out of two nominal constituents was placed in contrastive focus with a third one. All noun phrases carried overt syntactic information (case-marking of the determiner) and semantic information (typicality of the thematic role of the noun). Two experiments (a sentence comprehension and a sentence completion task) show that focus, marked by pitch accents, established expectations in both syntactic and semantic domains. However, only the syntactic expectations, when violated, were strong enough to interfere with sentence comprehension. Furthermore, when contradictory cues occurred in the same sentence, the local syntactic cue (case-marking) took precedence over the semantic cue (thematic role), and overwrote previous information cued by prosody. The findings indicate that during auditory sentence comprehension the processing system integrates different sources of information for argument role assignment, yet primarily relies on syntactic information.
  • van der Burght, C. L. (2021). The central contribution of prosody to sentence processing: Evidence from behavioural and neuroimaging studies. PhD Thesis, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig.
  • Van den Heuvel, H., Oostdijk, N., Rowland, C. F., & Trilsbeek, P. (2020). The CLARIN Knowledge Centre for Atypical Communication Expertise. In N. Calzolari, F. Béchet, P. Blache, K. Choukri, C. Cieri, T. Declerck, S. Goggi, H. Isahara, B. Maegaard, J. Mariani, H. Mazo, A. Moreno, J. Odijk, & S. Piperidis (Eds.), Proceedings of the 12th Language Resources and Evaluation Conference (LREC 2020) (pp. 3312-3316). Marseille, France: European Language Resources Association.

    Abstract

    This paper introduces a new CLARIN Knowledge Center which is the K-Centre for Atypical Communication Expertise (ACE for short) which has been established at the Centre for Language and Speech Technology (CLST) at Radboud University. Atypical communication is an umbrella term used here to denote language use by second language learners, people with language disorders or those suffering from language disabilities, but also more broadly by bilinguals and users of sign languages. It involves multiple modalities (text, speech, sign, gesture) and encompasses different developmental stages. ACE closely collaborates with The Language Archive (TLA) at the Max Planck Institute for Psycholinguistics in order to safeguard GDPR-compliant data storage and access. We explain the mission of ACE and show its potential on a number of showcases and a use case.
  • Van Paridon, J., Ostarek, M., Arunkumar, M., & Huettig, F. (2021). Does neuronal recycling result in destructive competition? The influence of learning to read on the recognition of faces. Psychological Science, 32, 459-465. doi:10.1177/0956797620971652.

    Abstract

    Written language, a human cultural invention, is far too recent for dedicated neural infrastructure to have evolved in its service. Culturally newly acquired skills (e.g. reading) thus ‘recycle’ evolutionarily older circuits that originally evolved for different, but similar functions (e.g. visual object recognition). The destructive competition hypothesis predicts that this neuronal recycling has detrimental behavioral effects on the cognitive functions a cortical network originally evolved for. In a study with 97 literate, low-literate, and illiterate participants from the same socioeconomic background, we find that even after adjusting for cognitive ability and test-taking familiarity, learning to read is associated with an increase, rather than a decrease, in object recognition abilities. These results are incompatible with the claim that neuronal recycling results in destructive competition and consistent with the possibility that learning to read instead fine-tunes general object recognition mechanisms, a hypothesis that needs further neuroscientific investigation.

    Additional information

    supplemental material
  • Van Paridon, J. (2021). Speaking while listening: Language processing in speech shadowing and translation. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Van Es, M. W. J. (2020). On the role of oscillatory synchrony in neural processing and behavior. PhD Thesis, Radboud University Nijmegen, Nijmegen.
