Publications

  • Ravignani, A., & Sonnweber, R. (2017). Chimpanzees process structural isomorphisms across sensory modalities. Cognition, 161, 74-79. doi:10.1016/j.cognition.2017.01.005.
  • Ravignani, A., Gross, S., Garcia, M., Rubio-Garcia, A., & De Boer, B. (2017). How small could a pup sound? The physical bases of signaling body size in harbor seals. Current Zoology, 63(4), 457-465. doi:10.1093/cz/zox026.

    Abstract

    Vocal communication is a crucial aspect of animal behavior. The mechanism which most mammals use to vocalize relies on three anatomical components. First, air overpressure is generated inside the lower vocal tract. Second, as the airstream goes through the glottis, sound is produced via vocal fold vibration. Third, this sound is further filtered by the geometry and length of the upper vocal tract. Evidence from mammalian anatomy and bioacoustics suggests that some of these three components may covary with an animal’s body size. The framework provided by acoustic allometry suggests that, because vocal tract length (VTL) is more strongly constrained by the growth of the body than vocal fold length (VFL), VTL generates more reliable acoustic cues to an animal’s size. This hypothesis is often tested acoustically but rarely anatomically, especially in pinnipeds. Here, we test the anatomical bases of the acoustic allometry hypothesis in harbor seal pups Phoca vitulina. We dissected and measured vocal tract, vocal folds, and other anatomical features of 15 harbor seals post-mortem. We found that, while VTL correlates with body size, VFL does not. This suggests that, while body growth puts anatomical constraints on how vocalizations are filtered by harbor seals’ vocal tract, no such constraints appear to exist on vocal folds, at least during puppyhood. It is particularly interesting to find anatomical constraints on harbor seals’ vocal tracts, the same anatomical region partially enabling pups to produce individually distinctive vocalizations.
  • Ravignani, A., & Norton, P. (2017). Measuring rhythmic complexity: A primer to quantify and compare temporal structure in speech, movement, and animal vocalizations. Journal of Language Evolution, 2(1), 4-19. doi:10.1093/jole/lzx002.

    Abstract

    Research on the evolution of human speech and phonology benefits from the comparative approach: structural, spectral, and temporal features can be extracted and compared across species in an attempt to reconstruct the evolutionary history of human speech. Here we focus on analytical tools to measure and compare temporal structure in human speech and animal vocalizations. We introduce the reader to a range of statistical methods usable, on the one hand, to quantify rhythmic complexity in single vocalizations, and on the other hand, to compare rhythmic structure between multiple vocalizations. These methods include: time series analysis, distributional measures, variability metrics, Fourier transform, auto- and cross-correlation, phase portraits, and circular statistics. Using computer-generated data, we apply a range of techniques, walking the reader through the necessary software and its functions. We describe which techniques are most appropriate to test particular hypotheses on rhythmic structure, and provide possible interpretations of the tests. These techniques can be equally well applied to find rhythmic structure in gesture, movement, and any other behavior developing over time, when the research focus lies on its temporal structure. This introduction to quantitative techniques for rhythm and timing analysis will hopefully spur additional comparative research, and will produce comparable results across all disciplines working on the evolution of speech, ultimately advancing the field.

    Additional information

    lzx002_Supp.docx
  • Ravignani, A. (2017). Interdisciplinary debate: Agree on definitions of synchrony [Correspondence]. Nature, 545, 158. doi:10.1038/545158c.
  • Ravignani, A., & Madison, G. (2017). The paradox of isochrony in the evolution of human rhythm. Frontiers in Psychology, 8: 1820. doi:10.3389/fpsyg.2017.01820.

    Abstract

    Isochrony is crucial to the rhythm of human music. Some neural, behavioral and anatomical traits underlying rhythm perception and production are shared with a broad range of species. These may either have a common evolutionary origin, or have evolved into similar traits under different evolutionary pressures. Other traits underlying rhythm are rare across species, only found in humans and few other animals. Isochrony, or stable periodicity, is common to most human music, but isochronous behaviors are also found in many species. It appears paradoxical that humans are particularly good at producing and perceiving isochronous patterns, although this ability does not conceivably confer any evolutionary advantage to modern humans. This article will attempt to solve this conundrum. To this end, we define the concept of isochrony from the present functional perspective of physiology, cognitive neuroscience, signal processing, and interactive behavior, and review available evidence on isochrony in the signals of humans and other animals. We then attempt to resolve the paradox of isochrony by expanding an evolutionary hypothesis about the function that isochronous behavior may have had in early hominids. Finally, we propose avenues for empirical research to examine this hypothesis and to understand the evolutionary origin of isochrony in general.
  • Ravignani, A. (2017). Visualizing and interpreting rhythmic patterns using phase space plots. Music Perception, 34(5), 557-568. doi:10.1525/MP.2017.34.5.557.

    Abstract

    Structure in musical rhythm can be measured using a number of analytical techniques. While some techniques—like circular statistics or grammar induction—rely on strong top-down assumptions, assumption-free techniques can only provide limited insights on higher-order rhythmic structure. I suggest that research in music perception and performance can benefit from systematically adopting phase space plots, a visualization technique originally developed in mathematical physics that overcomes the aforementioned limitations. By jointly plotting adjacent interonset intervals (IOI), the motivic rhythmic structure of musical phrases, if present, is visualized geometrically without making any a priori assumptions concerning isochrony, beat induction, or metrical hierarchies. I provide visual examples and describe how particular features of rhythmic patterns correspond to geometrical shapes in phase space plots. I argue that research on music perception and systematic musicology stands to benefit from this descriptive tool, particularly in comparative analyses of rhythm production. Phase space plots can be employed as an initial assumption-free diagnostic to find higher order structures (i.e., beyond distributional regularities) before proceeding to more specific, theory-driven analyses.
  • Reifegerste, J., Meyer, A. S., & Zwitserlood, P. (2017). Inflectional complexity and experience affect plural processing in younger and older readers of Dutch and German. Language, Cognition and Neuroscience, 32(4), 471-487. doi:10.1080/23273798.2016.1247213.

    Abstract

    According to dual-route models of morphological processing, regular inflected words can be retrieved as whole-word forms or decomposed into morphemes. Baayen, Dijkstra, and Schreuder [(1997). Singulars and plurals in Dutch: Evidence for a parallel dual-route model. Journal of Memory and Language, 37, 94–117. doi:10.1006/jmla.1997.2509] proposed a dual-route model according to which plurals of singular-dominant words (e.g. “brides”) are decomposed, while plurals of plural-dominant words (e.g. “peas”) are accessed as whole-word units. We report two lexical-decision experiments investigating how plural processing is influenced by participants’ age (a proxy for experience with word forms) and morphological complexity of the language (German versus Dutch). For both Dutch participant groups and older German participants, we replicated the interaction between number and dominance reported by Baayen and colleagues. Younger German participants showed a main effect of number, indicating access of all plurals via decomposition. Access to stored forms seems to depend on morphological richness and experience with word forms. The data pattern fits neither full-decomposition nor full-storage models, but is compatible with dual-route models.

    Additional information

    plcp_a_1247213_sm6144.pdf
  • Roberts, S. G., & Levinson, S. C. (2017). Conversation, cognition and cultural evolution: A model of the cultural evolution of word order through pressures imposed from turn taking in conversation. Interaction Studies, 18(3), 402-429. doi:10.1075/is.18.3.06rob.

    Abstract

    This paper outlines a first attempt to model the special constraints that arise in language processing in conversation, and to explore the implications such functional considerations may have on language typology and language change. In particular, we focus on processing pressures imposed by conversational turn-taking and their consequences for the cultural evolution of the structural properties of language. We present an agent-based model of cultural evolution where agents take turns at talk in conversation. When the start of planning for the next turn is constrained by the position of the verb, the stable distribution of dominant word orders across languages evolves to match the actual distribution reasonably well. We suggest that the interface of cognition and interaction should be a more central part of the story of language evolution.
  • De Roeck, A., Van den Bossche, T., Van der Zee, J., Verheijen, J., De Coster, W., Van Dongen, J., Dillen, L., Baradaran-Heravi, Y., Heeman, B., Sanchez-Valle, R., Lladó, A., Nacmias, B., Sorbi, S., Gelpi, E., Grau-Rivera, O., Gómez-Tortosa, E., Pastor, P., Ortega-Cubero, S., Pastor, M. A., Graff, C., Thonberg, H., Benussi, L., Ghidoni, R., Binetti, G., de Mendonça, A., Martins, M., Borroni, B., Padovani, A., Almeida, M. R., Santana, I., Diehl-Schmid, J., Alexopoulos, P., Clarimon, J., Lleó, A., Fortea, J., Tsolaki, M., Koutroumani, M., Matěj, R., Rohan, Z., De Deyn, P., Engelborghs, S., Cras, P., Van Broeckhoven, C., Sleegers, K., & European Early-Onset Dementia (EU EOD) consortium (2017). Deleterious ABCA7 mutations and transcript rescue mechanisms in early onset Alzheimer’s disease. Acta Neuropathologica, 134, 475-487. doi:10.1007/s00401-017-1714-x.

    Abstract

    Premature termination codon (PTC) mutations in the ATP-Binding Cassette, Sub-Family A, Member 7 gene (ABCA7) have recently been identified as intermediate-to-high penetrant risk factor for late-onset Alzheimer’s disease (LOAD). High variability, however, is observed in downstream ABCA7 mRNA and protein expression, disease penetrance, and onset age, indicative of unknown modifying factors. Here, we investigated the prevalence and disease penetrance of ABCA7 PTC mutations in a large early onset AD (EOAD)—control cohort, and examined the effect on transcript level with comprehensive third-generation long-read sequencing. We characterized the ABCA7 coding sequence with next-generation sequencing in 928 EOAD patients and 980 matched control individuals. With MetaSKAT rare variant association analysis, we observed a fivefold enrichment (p = 0.0004) of PTC mutations in EOAD patients (3%) versus controls (0.6%). Ten novel PTC mutations were only observed in patients, and PTC mutation carriers in general had an increased familial AD load. In addition, we observed nominal risk reducing trends for three common coding variants. Seven PTC mutations were further analyzed using targeted long-read cDNA sequencing on an Oxford Nanopore MinION platform. PTC-containing transcripts for each investigated PTC mutation were observed at varying proportion (5–41% of the total read count), implying incomplete nonsense-mediated mRNA decay (NMD). Furthermore, we distinguished and phased several previously unknown alternative splicing events (up to 30% of transcripts). In conjunction with PTC mutations, several of these novel ABCA7 isoforms have the potential to rescue deleterious PTC effects. In conclusion, ABCA7 PTC mutations play a substantial role in EOAD, warranting genetic screening of ABCA7 in genetically unexplained patients. Long-read cDNA sequencing revealed both varying degrees of NMD and transcript-modifying events, which may influence ABCA7 dosage, disease severity, and may create opportunities for therapeutic interventions in AD.

    Additional information

    Supplementary material
  • Roelofs, A., & Shitova, N. (2017). Importance of response time in assessing the cerebral dynamics of spoken word production: Comment on Munding et al. Language, Cognition and Neuroscience, 32(8), 1064-1067. doi:10.1080/23273798.2016.1274415.
  • Rojas-Berscia, L. M., & Bourdeau, C. (2017). Optional or syntactic ergativity in Shawi? Distribution and possible origins. Linguistic Discovery, 15(1), 50-65. doi:10.1349/PS1.1537-0852.A.481.

    Abstract

    In this article we provide a preliminary description and analysis of the most common ergative constructions in Shawi, a Kawapanan language spoken in Northwestern Amazonia. We offer a comparison with its sister language, Shiwilu, for which an optional ergativity-marking pattern has been claimed (Valenzuela, 2008, 2011). There is not enough evidence, however, to claim the exact same for Shawi. Ergativity in the language is driven by mere syntactic motivations. One of the most common constituent orders in the language where the ergative marker is obligatory is OAV. We close the article with a tentative proposal on the passive origins of OAV ergative constructions in the language, via a by-phrase-like incorporation, and eventual grammaticalisation, resorting to the formal syntactic theory known as Semantic Syntax (Seuren, 1996).
  • Rommers, J., Dickson, D. S., Norton, J. J. S., Wlotko, E. W., & Federmeier, K. D. (2017). Alpha and theta band dynamics related to sentential constraint and word expectancy. Language, Cognition and Neuroscience, 32(5), 576-589. doi:10.1080/23273798.2016.1183799.

    Abstract

    Despite strong evidence for prediction during language comprehension, the underlying mechanisms, and the extent to which they are specific to language, remain unclear. Re-analysing an event-related potentials study, we examined responses in the time-frequency domain to expected and unexpected (but plausible) words in strongly and weakly constraining sentences, and found results similar to those reported in nonverbal domains. Relative to expected words, unexpected words elicited an increase in the theta band (4–7 Hz) in strongly constraining contexts, suggesting the involvement of control processes to deal with the consequences of having a prediction disconfirmed. Prior to critical word onset, strongly constraining sentences exhibited a decrease in the alpha band (8–12 Hz) relative to weakly constraining sentences, suggesting that comprehenders can take advantage of predictive sentence contexts to prepare for the input. The results suggest that the brain recruits domain-general preparation and control mechanisms when making and assessing predictions during sentence comprehension.
  • Rommers, J., Meyer, A. S., & Praamstra, P. (2017). Lateralized electrical brain activity reveals covert attention allocation during speaking. Neuropsychologia, 95, 101-110. doi:10.1016/j.neuropsychologia.2016.12.013.

    Abstract

    Speakers usually begin to speak while only part of the utterance has been planned. Earlier work has shown that speech planning processes are reflected in speakers’ eye movements as they describe visually presented objects. However, to-be-named objects can be processed to some extent before they have been fixated upon, presumably because attention can be allocated to objects covertly, without moving the eyes. The present study investigated whether EEG could track speakers’ covert attention allocation as they produced short utterances to describe pairs of objects (e.g., “dog and chair”). The processing difficulty of each object was varied by presenting it in upright orientation (easy) or in upside down orientation (difficult). Background squares flickered at different frequencies in order to elicit steady-state visual evoked potentials (SSVEPs). The N2pc component, associated with the focusing of attention on an item, was detectable not only prior to speech onset, but also during speaking. The time course of the N2pc showed that attention shifted to each object in the order of mention prior to speech onset. Furthermore, greater processing difficulty increased the time speakers spent attending to each object. This demonstrates that the N2pc can track covert attention allocation in a naming task. In addition, an effect of processing difficulty at around 200–350 ms after stimulus onset revealed early attention allocation to the second to-be-named object. The flickering backgrounds elicited SSVEPs, but SSVEP amplitude was not influenced by processing difficulty. These results help complete the picture of the coordination of visual information uptake and motor output during speaking.
  • Rose, M. L., Mok, Z., & Sekine, K. (2017). Communicative effectiveness of pantomime gesture in people with aphasia. International Journal of Language & Communication Disorders, 52(2), 227-237. doi:10.1111/1460-6984.12268.

    Abstract

    Background: Human communication occurs through both verbal and visual/motoric modalities. Simultaneous conversational speech and gesture occurs across all cultures and age groups. When verbal communication is compromised, more of the communicative load can be transferred to the gesture modality. Although people with aphasia produce meaning-laden gestures, the communicative value of these has not been adequately investigated.

    Aims: To investigate the communicative effectiveness of pantomime gesture produced spontaneously by individuals with aphasia during conversational discourse.

    Methods & Procedures: Sixty-seven undergraduate students wrote down the messages conveyed by 11 people with aphasia that produced pantomime while engaged in conversational discourse. Students were presented with a speech-only, a gesture-only and a combined speech and gesture condition and guessed messages in both a free description and a multiple-choice task.

    Outcomes & Results: As hypothesized, listener comprehension was more accurate in the combined pantomime gesture and speech condition as compared with the gesture- or speech-only conditions. Participants achieved greater accuracy in the multiple-choice task as compared with the free-description task, but only in the gesture-only condition. The communicative effectiveness of the pantomime gestures increased as the fluency of the participants with aphasia decreased.

    Conclusions & Implications: These results indicate that when pantomime gesture was presented with aphasic speech, the combination had strong communicative effectiveness. Future studies could investigate how pantomimes can be integrated into interventions for people with aphasia, particularly emphasizing elicitation of pantomimes in as natural a context as possible and highlighting the opportunity for efficient message repair.
  • Rougier, N. P., Hinsen, K., Alexandre, F., Arildsen, T., Barba, L. A., Benureau, F. C. Y., Brown, C. T., De Buyl, P., Caglayan, O., Davison, A. P., Delsuc, M.-A., Detorakis, G., Diem, A. K., Drix, D., Enel, P., Girard, B., Guest, O., Hall, M. G., Henriques, R. N., Hinaut, X., Jaron, K. S., Khamassi, M., Klein, A., Manninen, T., Marchesi, P., McGlinn, D., Metzner, C., Petchey, O., Plesser, H. E., Poisot, T., Ram, K., Ram, Y., Roesch, E., Rossant, C., Rostami, V., Shifman, A., Stachelek, J., Stimberg, M., Stollmeier, F., Vaggi, F., Viejo, G., Vitay, J., Vostinar, A. E., Yurchak, R., & Zito, T. (2017). Sustainable computational science. PeerJ Computer Science, 3: e142. doi:10.7717/peerj-cs.142.

    Abstract

    Computer science offers a large set of tools for prototyping, writing, running, testing, validating, sharing and reproducing results; however, computational science lags behind. In the best case, authors may provide their source code as a compressed archive and they may feel confident their research is reproducible. But this is not exactly true. James Buckheit and David Donoho proposed more than two decades ago that an article about computational results is advertising, not scholarship. The actual scholarship is the full software environment, code, and data that produced the result. This implies new workflows, in particular in peer-reviews. Existing journals have been slow to adapt: source codes are rarely requested and are hardly ever actually executed to check that they produce the results advertised in the article. ReScience is a peer-reviewed journal that targets computational research and encourages the explicit replication of already published research, promoting new and open-source implementations in order to ensure that the original research can be replicated from its description. To achieve this goal, the whole publishing chain is radically different from other traditional scientific journals. ReScience resides on GitHub where each new implementation of a computational study is made available together with comments, explanations, and software tests.
  • Rowland, C. F., & Monaghan, P. (2017). Developmental psycholinguistics teaches us that we need multi-method, not single-method, approaches to the study of linguistic representation. Commentary on Branigan and Pickering "An experimental approach to linguistic representation". Behavioral and Brain Sciences, 40: e308. doi:10.1017/S0140525X17000565.

    Abstract

    In developmental psycholinguistics, we have, for many years, been generating and testing theories that propose both descriptions of adult representations and explanations of how those representations develop. We have learnt that restricting ourselves to any one methodology yields only incomplete data about the nature of linguistic representations. We argue that we need a multi-method approach to the study of representation.
  • Rubianes, M., Drijvers, L., Muñoz, F., Jiménez-Ortega, L., Almeida-Rivera, T., Sánchez-García, J., Fondevila, S., Casado, P., & Martín-Loeches, M. (2024). The self-reference effect can modulate language syntactic processing even without explicit awareness: An electroencephalography study. Journal of Cognitive Neuroscience, 36(3), 460-474. doi:10.1162/jocn_a_02104.

    Abstract

    Although it is well established that self-related information can rapidly capture our attention and bias cognitive functioning, whether this self-bias can affect language processing remains largely unknown. In addition, there is an ongoing debate as to the functional independence of language processes, notably regarding the syntactic domain. Hence, this study investigated the influence of self-related content on syntactic speech processing. Participants listened to sentences that could contain morphosyntactic anomalies while the masked face identity (self, friend, or unknown faces) was presented for 16 msec preceding the critical word. The language-related ERP components (left anterior negativity [LAN] and P600) appeared for all identity conditions. However, the largest LAN effect followed by a reduced P600 effect was observed for self-faces, whereas a larger LAN with no reduction of the P600 was found for friend faces compared with unknown faces. These data suggest that both early and late syntactic processes can be modulated by self-related content. In addition, alpha power was more suppressed over the left inferior frontal gyrus only when self-faces appeared before the critical word. This may reflect higher semantic demands concomitant to early syntactic operations (around 150–550 msec). Our data also provide further evidence of self-specific response, as reflected by the N250 component. Collectively, our results suggest that identity-related information is rapidly decoded from facial stimuli and may impact core linguistic processes, supporting an interactive view of syntactic processing. This study provides evidence that the self-reference effect can be extended to syntactic processing.
  • Rubio-Fernández, P. (2017). Can we forget what we know in a false‐belief task? An investigation of the true‐belief default. Cognitive Science: a multidisciplinary journal, 41, 218-241. doi:10.1111/cogs.12331.

    Abstract

    It has been generally assumed in the Theory of Mind literature of the past 30 years that young children fail standard false-belief tasks because they attribute their own knowledge to the protagonist (what Leslie and colleagues called a “true-belief default”). Contrary to the traditional view, we have recently proposed that the children's bias is task induced. This alternative view was supported by studies showing that 3 year olds are able to pass a false-belief task that allows them to focus on the protagonist, without drawing their attention to the target object in the test phase. For a more accurate comparison of these two accounts, the present study tested the true-belief default with adults. Four experiments measuring eye movements and response inhibition revealed that (a) adults do not have an automatic tendency to respond to the false-belief question according to their own knowledge and (b) the true-belief response need not be inhibited in order to correctly predict the protagonist's actions. The positive results observed in the control conditions confirm the accuracy of the various measures used. I conclude that the results of this study undermine the true-belief default view and those models that posit mechanisms of response inhibition in false-belief reasoning. Alternatively, the present study with adults and recent studies with children suggest that participants' focus of attention in false-belief tasks may be key to their performance.
  • Rubio-Fernández, P. (2017). Why are bilinguals better than monolinguals at false-belief tasks? Psychonomic Bulletin & Review, 24, 987-998. doi:10.3758/s13423-016-1143-1.

    Abstract

    In standard Theory of Mind tasks, such as the Sally-Anne, children have to predict the behaviour of a mistaken character, which requires attributing the character a false belief. Hundreds of developmental studies in the last 30 years have shown that children under 4 fail standard false-belief tasks. However, recent studies have revealed that bilingual children and adults outperform their monolingual peers in this type of tasks. Bilinguals’ better performance in false-belief tasks has generally been interpreted as a result of their better inhibitory control; that is, bilinguals are allegedly better than monolinguals at inhibiting the erroneous response to the false-belief question. In this review, I challenge the received view and argue instead that bilinguals’ better false-belief performance results from more effective attention management. This challenge ties in with two independent lines of research: on the one hand, recent studies on the role of attentional processes in false-belief tasks with monolingual children and adults; and on the other, current research on bilinguals’ performance in different Executive Function tasks. The review closes with an exploratory discussion of further benefits of bilingual cognition to Theory of Mind development and pragmatics, which may be independent from Executive Function.
  • Rubio-Fernández, P., Geurts, B., & Cummins, C. (2017). Is an apple like a fruit? A study on comparison and categorisation statements. Review of Philosophy and Psychology, 8, 367-390. doi:10.1007/s13164-016-0305-4.

    Abstract

    Categorisation models of metaphor interpretation are based on the premiss that categorisation statements (e.g., ‘Wilma is a nurse’) and comparison statements (e.g., ‘Betty is like a nurse’) are fundamentally different types of assertion. Against this assumption, we argue that the difference is merely a quantitative one: ‘x is a y’ unilaterally entails ‘x is like a y’, and therefore the latter is merely weaker than the former. Moreover, if ‘x is like a y’ licenses the inference that x is not a y, then that inference is a scalar implicature. We defend these claims partly on theoretical grounds and partly on the basis of experimental evidence. A suite of experiments indicates both that ‘x is a y’ unilaterally entails that x is like a y, and that in several respects the non-y inference behaves exactly as one should expect from a scalar implicature. We discuss the implications of our view of categorisation and comparison statements for categorisation models of metaphor interpretation.
  • Rubio-Fernández, P. (2017). The director task: A test of Theory-of-Mind use or selective attention? Psychonomic Bulletin & Review, 24, 1121-1128. doi:10.3758/s13423-016-1190-7.

    Abstract

    Over two decades, the director task has increasingly been employed as a test of the use of Theory of Mind in communication, first in psycholinguistics and more recently in social cognition research. A new version of this task was designed to test two independent hypotheses. First, optimal performance in the director task, as established by the standard metrics of interference, is possible by using selective attention alone, and not necessarily Theory of Mind. Second, pragmatic measures of Theory-of-Mind use can reveal that people actively represent the director’s mental states, contrary to recent claims that they only use domain-general cognitive processes to perform this task. The results of this study support both hypotheses and provide a new interactive paradigm to reliably test Theory-of-Mind use in referential communication.
  • Rubio-Fernández, P., Jara-Ettinger, J., & Gibson, E. (2017). Can processing demands explain toddlers’ performance in false-belief tasks? [Response to Setoh et al. (2016, PNAS)]. Proceedings of the National Academy of Sciences of the United States of America, 114(19): E3750. doi:10.1073/pnas.1701286114.
  • Rubio-Fernández, P. (2024). Cultural evolutionary pragmatics: Investigating the codevelopment and coevolution of language and social cognition. Psychological Review, 131(1), 18-35. doi:10.1037/rev0000423.

    Abstract

    Language and social cognition come together in communication, but their relation has been intensely contested. Here, I argue that these two distinctively human abilities are connected in a positive feedback loop, whereby the development of one cognitive skill boosts the development of the other. More specifically, I hypothesize that language and social cognition codevelop in ontogeny and coevolve in diachrony through the acquisition, mature use, and cultural evolution of reference systems (e.g., demonstratives: “this” vs. “that”; articles: “a” vs. “the”; pronouns: “I” vs. “you”). I propose to study the connection between reference systems and communicative social cognition across three parallel timescales—language acquisition, language use, and language change, as a new research program for cultural evolutionary pragmatics. Within that framework, I discuss the coevolution of language and communicative social cognition as cognitive gadgets, and introduce a new methodological approach to study how universals and cross-linguistic differences in reference systems may result in different developmental pathways to human social cognition.
  • San Roque, L., Floyd, S., & Norcliffe, E. (2017). Evidentiality and interrogativity. Lingua, 186-187, 120-143. doi:10.1016/j.lingua.2014.11.003.

    Abstract

    Understanding of evidentials is incomplete without consideration of their behaviour in interrogative contexts. We discuss key formal, semantic, and pragmatic features of cross-linguistic variation concerning the use of evidential markers in interrogative clauses. Cross-linguistic data suggest that an exclusively speaker-centric view of evidentiality is not sufficient to explain the semantics of information source marking, as in many languages it is typical for evidentials in questions to represent addressee perspective. Comparison of evidentiality and the related phenomenon of egophoricity emphasises how knowledge-based linguistic systems reflect attention to the way knowledge is distributed among participants in the speech situation.
  • Sauppe, S. (2017). Symmetrical and asymmetrical voice systems and processing load: Pupillometric evidence from sentence production in Tagalog and German. Language, 93(2), 288-313. doi:10.1353/lan.2017.0015.

    Abstract

    The voice system of Tagalog has been proposed to be symmetrical in the sense that there are no morphologically unmarked voice forms. This stands in contrast to asymmetrical voice systems which exhibit unmarked and marked voices (e.g., active and passive in German). This paper investigates the psycholinguistic processing consequences of the symmetrical and asymmetrical nature of the Tagalog and German voice systems by analyzing changes in cognitive load during sentence production. Tagalog and German native speakers' pupil diameters were recorded while they produced sentences with different voice markings. Growth curve analyses of the shape of task-evoked pupillary responses revealed that processing load changes were similar for different voices in the symmetrical voice system of Tagalog. By contrast, actives and passives in the asymmetrical voice system of German exhibited different patterns of processing load changes during sentence production. This is interpreted as supporting the notion of symmetry in the Tagalog voice system. Mental effort during sentence planning changes in different ways in the two languages because the grammatical architecture of their voice systems is different. Additionally, an anti-Patient bias in sentence production was found in Tagalog: cognitive load increased at the same time and at the same rate but was maintained for a longer time when the patient argument was the subject, as compared to agent subjects. This indicates that while both voices in Tagalog afford similar planning operations, linking patients to the subject function is more effortful. This anti-Patient bias in production adds converging evidence to “subject preferences” reported in the sentence comprehension literature.
  • Sauppe, S. (2017). Word order and voice influence the timing of verb planning in German sentence production. Frontiers in Psychology, 8: 1648. doi:10.3389/fpsyg.2017.01648.

    Abstract

    Theories of incremental sentence production make different assumptions about when speakers encode information about described events and when verbs are selected, accordingly. An eye tracking experiment on German testing the predictions from linear and hierarchical incrementality about the timing of event encoding and verb planning is reported. In the experiment, participants described depictions of two-participant events with sentences that differed in voice and word order. Verb-medial active sentences and actives and passives with sentence-final verbs were compared. Linear incrementality predicts that sentences with verbs placed early differ from verb-final sentences because verbs are assumed to only be planned shortly before they are articulated. By contrast, hierarchical incrementality assumes that speakers start planning with relational encoding of the event. A weak version of hierarchical incrementality assumes that only the action is encoded at the outset of formulation and selection of lexical verbs only occurs shortly before they are articulated, leading to the prediction of different fixation patterns for verb-medial and verb-final sentences. A strong version of hierarchical incrementality predicts no differences between verb-medial and verb-final sentences because it assumes that verbs are always lexically selected early in the formulation process. Based on growth curve analyses of fixations to agent and patient characters in the described pictures, and the influence of character humanness and the lack of an influence of the visual salience of characters on speakers' choice of active or passive voice, the current results suggest that while verb planning does not necessarily occur early during formulation, speakers of German always create an event representation early.
  • Schijven, D., Soheili-Nezhad, S., Fisher, S. E., & Francks, C. (2024). Exome-wide analysis implicates rare protein-altering variants in human handedness. Nature Communications, 15: 2632. doi:10.1038/s41467-024-46277-w.

    Abstract

    Handedness is a manifestation of brain hemispheric specialization. Left-handedness occurs at increased rates in neurodevelopmental disorders. Genome-wide association studies have identified common genetic effects on handedness or brain asymmetry, which mostly involve variants outside protein-coding regions and may affect gene expression. Implicated genes include several that encode tubulins (microtubule components) or microtubule-associated proteins. Here we examine whether left-handedness is also influenced by rare coding variants (frequencies ≤ 1%), using exome data from 38,043 left-handed and 313,271 right-handed individuals from the UK Biobank. The beta-tubulin gene TUBB4B shows exome-wide significant association, with a rate of rare coding variants 2.7 times higher in left-handers than right-handers. The TUBB4B variants are mostly heterozygous missense changes, but include two frameshifts found only in left-handers. Other TUBB4B variants have been linked to sensorineural and/or ciliopathic disorders, but not the variants found here. Among genes previously implicated in autism or schizophrenia by exome screening, DSCAM and FOXP1 show evidence for rare coding variant association with left-handedness. The exome-wide heritability of left-handedness due to rare coding variants was 0.91%. This study reveals a role for rare, protein-altering variants in left-handedness, providing further evidence for the involvement of microtubules and disorder-relevant genes.
  • Schoffelen, J.-M., Hulten, A., Lam, N. H. L., Marquand, A. F., Udden, J., & Hagoort, P. (2017). Frequency-specific directed interactions in the human brain network for language. Proceedings of the National Academy of Sciences of the United States of America, 114(30), 8083-8088. doi:10.1073/pnas.1703155114.

    Abstract

    The brain’s remarkable capacity for language requires bidirectional interactions between functionally specialized brain regions. We used magnetoencephalography to investigate interregional interactions in the brain network for language while 102 participants were reading sentences. Using Granger causality analysis, we identified inferior frontal cortex and anterior temporal regions to receive widespread input and middle temporal regions to send widespread output. This fits well with the notion that these regions play a central role in language processing. Characterization of the functional topology of this network, using data-driven matrix factorization, which allowed for partitioning into a set of subnetworks, revealed directed connections at distinct frequencies of interaction. Connections originating from temporal regions peaked at alpha frequency, whereas connections originating from frontal and parietal regions peaked at beta frequency. These findings indicate that the information flow between language-relevant brain areas, which is required for linguistic processing, may depend on the contributions of distinct brain rhythms.

    Additional information

    pnas.201703155SI.pdf
  • Schuerman, W. L., Meyer, A. S., & McQueen, J. M. (2017). Mapping the speech code: Cortical responses linking the perception and production of vowels. Frontiers in Human Neuroscience, 11: 161. doi:10.3389/fnhum.2017.00161.

    Abstract

    The acoustic realization of speech is constrained by the physical mechanisms by which it is produced. Yet for speech perception, the degree to which listeners utilize experience derived from speech production has long been debated. In the present study, we examined how sensorimotor adaptation during production may affect perception, and how this relationship may be reflected in early vs. late electrophysiological responses. Participants first performed a baseline speech production task, followed by a vowel categorization task during which EEG responses were recorded. In a subsequent speech production task, half the participants received shifted auditory feedback, leading most to alter their articulations. This was followed by a second, post-training vowel categorization task. We compared changes in vowel production to both behavioral and electrophysiological changes in vowel perception. No differences in phonetic categorization were observed between groups receiving altered or unaltered feedback. However, exploratory analyses revealed correlations between vocal motor behavior and phonetic categorization. EEG analyses revealed correlations between vocal motor behavior and cortical responses in both early and late time windows. These results suggest that participants' recent production behavior influenced subsequent vowel perception. We suggest that the change in perception can be best characterized as a mapping of acoustics onto articulation.
  • Schuerman, W. L., Nagarajan, S., McQueen, J. M., & Houde, J. (2017). Sensorimotor adaptation affects perceptual compensation for coarticulation. The Journal of the Acoustical Society of America, 141(4), 2693-2704. doi:10.1121/1.4979791.

    Abstract

    A given speech sound will be realized differently depending on the context in which it is produced. Listeners have been found to compensate perceptually for these coarticulatory effects, yet it is unclear to what extent this effect depends on actual production experience. In this study, whether changes in motor-to-sound mappings induced by adaptation to altered auditory feedback can affect perceptual compensation for coarticulation is investigated. Specifically, whether altering how the vowel [i] is produced can affect the categorization of a stimulus continuum between an alveolar and a palatal fricative whose interpretation is dependent on vocalic context is tested. It was found that participants could be sorted into three groups based on whether they tended to oppose the direction of the shifted auditory feedback, to follow it, or a mixture of the two, and that these articulatory responses, not the shifted feedback the participants heard, correlated with changes in perception. These results indicate that sensorimotor adaptation to altered feedback can affect the perception of unaltered yet coarticulatorily-dependent speech sounds, suggesting a modulatory role of sensorimotor experience on speech perception.
  • Seijdel, N., Schoffelen, J.-M., Hagoort, P., & Drijvers, L. (2024). Attention drives visual processing and audiovisual integration during multimodal communication. The Journal of Neuroscience, 44(10): e0870232023. doi:10.1523/JNEUROSCI.0870-23.2023.

    Abstract

    During communication in real-life settings, our brain often needs to integrate auditory and visual information, and at the same time actively focus on the relevant sources of information, while ignoring interference from irrelevant events. The interaction between integration and attention processes remains poorly understood. Here, we use rapid invisible frequency tagging (RIFT) and magnetoencephalography (MEG) to investigate how attention affects auditory and visual information processing and integration, during multimodal communication. We presented human participants (male and female) with videos of an actress uttering action verbs (auditory; tagged at 58 Hz) accompanied by two movie clips of hand gestures on both sides of fixation (attended stimulus tagged at 65 Hz; unattended stimulus tagged at 63 Hz). Integration difficulty was manipulated by a lower-order auditory factor (clear/degraded speech) and a higher-order visual semantic factor (matching/mismatching gesture). We observed an enhanced neural response to the attended visual information during degraded speech compared to clear speech. For the unattended information, the neural response to mismatching gestures was enhanced compared to matching gestures. Furthermore, signal power at the intermodulation frequencies of the frequency tags, indexing non-linear signal interactions, was enhanced in left frontotemporal and frontal regions. Focusing on LIFG (Left Inferior Frontal Gyrus), this enhancement was specific for the attended information, for those trials that benefitted from integration with a matching gesture. Together, our results suggest that attention modulates audiovisual processing and interaction, depending on the congruence and quality of the sensory input.

    Additional information

    link to preprint
  • Sekine, K., & Kita, S. (2017). The listener automatically uses spatial story representations from the speaker's cohesive gestures when processing subsequent sentences without gestures. Acta Psychologica, 179, 89-95. doi:10.1016/j.actpsy.2017.07.009.

    Abstract

    This study examined spatial story representations created by speaker's cohesive gestures. Participants were presented with three-sentence discourse with two protagonists. In the first and second sentences, gestures consistently located the two protagonists in the gesture space: one to the right and the other to the left. The third sentence (without gestures) referred to one of the protagonists, and the participants responded with one of the two keys to indicate the relevant protagonist. The response keys were either spatially congruent or incongruent with the gesturally established locations for the two participants. Though the cohesive gestures did not provide any clue for the correct response, they influenced performance: the reaction time in the congruent condition was faster than that in the incongruent condition. Thus, cohesive gestures automatically establish spatial story representations and the spatial story representations remain activated in a subsequent sentence without any gesture.
  • Sekine, K., & Özyürek, A. (2024). Children benefit from gestures to understand degraded speech but to a lesser extent than adults. Frontiers in Psychology, 14: 1305562. doi:10.3389/fpsyg.2023.1305562.

    Abstract

    The present study investigated to what extent children, compared to adults, benefit from gestures to disambiguate degraded speech by manipulating speech signals and manual modality. Dutch-speaking adults (N = 20) and 6- and 7-year-old children (N = 15) were presented with a series of video clips in which an actor produced a Dutch action verb with or without an accompanying iconic gesture. Participants were then asked to repeat what they had heard. The speech signal was either clear or altered into 4- or 8-band noise-vocoded speech. Children had more difficulty than adults in disambiguating degraded speech in the speech-only condition. However, when presented with both speech and gestures, children reached a comparable level of accuracy to that of adults in the degraded-speech-only condition. Furthermore, for adults, the enhancement of gestures was greater in the 4-band condition than in the 8-band condition, whereas children showed the opposite pattern. Gestures help children to disambiguate degraded speech, but children need more phonological information than adults to benefit from use of gestures. Children’s multimodal language integration needs to further develop to adapt flexibly to challenging situations such as degraded speech, as tested in our study, or instances where speech is heard with environmental noise or through a face mask.

    Additional information

    supplemental material
  • Senft, G. (2017). Absolute frames of spatial reference in Austronesian languages. Russian Journal of Linguistics, 21, 686-705. doi:10.22363/2312-9182-2017-21-4-686-705.

    Abstract

    This paper provides a brief survey on various absolute frames of spatial reference that can be observed in a number of Austronesian languages – with an emphasis on languages of the Oceanic subgroup. It is based on research of conceptions of space and systems of spatial reference that was initiated by the “space project” of the Cognitive Anthropology Research Group (now the Department of Language and Cognition) at the Max Planck Institute for Psycholinguistics and by my anthology “Referring to Space” (Senft 1997a; see Keller 2002: 250). The examples illustrating these different absolute frames of spatial reference reveal once more that earlier generalizations within the domain of “SPACE” were strongly biased by research on Indo-European languages; they also reveal how complex some of these absolute frames of spatial reference found in these languages are. The paper ends with a summary of Wegener’s (2002) preliminary typology of these absolute frames of spatial reference.
  • Senft, G. (2017). Acquiring Kilivila Pragmatics - the Role of the Children's (Play-)Groups in the first 7 Years of their Lives on the Trobriand Islands in Papua New Guinea. Studies in Pragmatics, 19, 40-53.

    Abstract

    Trobriand children are breastfed until they can walk; then they are abruptly weaned and the parents dramatically reduce the pervasive loving care that their children experienced before. The children have to find a place within the children’s groups in their villages. They learn to behave according to their community’s rules and regulations which find their expression in forms of verbal and non-verbal behavior. They acquire their culture specific pragmatics under the control of older members of their groups. The children's “small republic” is the primary institution of verbal and cultural socialization. Attempts of parental education are confined to a minimum.
  • Senft, G. (1994). Ein Vorschlag, wie man standardisiert Daten zum Thema 'Sprache, Kognition und Konzepte des Raumes' in verschiedenen Kulturen erheben kann. Linguistische Berichte, 154, 413-429.
  • Senft, G. (1994). [Review of the book Language, culture and society: An introduction by Zdenek Salzmann]. Man, 29, 756-757.
  • Senft, G. (1994). Grammaticalisation of body-part terms in Kilivila. Language and Linguistics in Melanesia, 25, 98-99.
  • Senft, G. (1994). Spatial reference in Kilivila: The Tinkertoy Matching Games - A case study. Language and Linguistics in Melanesia, 25, 55-93.
  • Senft, G. (1994). These 'Procrustean' feelings: Some of my problems in describing Kilivila. Semaian, 11, 86-105.
  • Seuren, P. A. M. (1982). De spellingsproblematiek in Suriname: Een inleiding. OSO, 1(1), 71-79.
  • Seuren, P. A. M. (1994). [Review of the Dictionary of St. Lucian Creole, part 1: Kweyol-English, part 2: English-Kweyol compiled by Jones E. Mondesir and ed. by Lawrence D. Carrington]. Linguistics, 32(1), 157-158. doi:10.1515/ling.1991.29.4.719.
  • Seuren, P. A. M. (1983). [Review of the book The inheritance of presupposition by J. Dinsmore]. Journal of Semantics, 2(3/4), 356-358. doi:10.1093/semant/2.3-4.356.
  • Seuren, P. A. M. (1983). [Review of the book Thirty million theories of grammar by J. McCawley]. Journal of Semantics, 2(3/4), 325-341. doi:10.1093/semant/2.3-4.325.
  • Seuren, P. A. M. (1983). In memoriam Jan Voorhoeve. Bijdragen tot de Taal-, Land- en Volkenkunde, 139(4), 403-406.
  • Seuren, P. A. M. (1982). Internal variability in competence. Linguistische Berichte, 77, 1-31.
  • Seuren, P. A. M. (1983). Overwegingen bij de spelling van het Sranan en een spellingsvoorstel. OSO, 2(1), 67-81.
  • Seuren, P. A. M. (1994). Soaps and serials. Journal of Pidgin and Creole Languages, 9(1), 131-149. doi:10.1075/jpcl.9.1.18seu.
  • Severijnen, G. G. A., Bosker, H. R., & McQueen, J. M. (2024). Your “VOORnaam” is not my “VOORnaam”: An acoustic analysis of individual talker differences in word stress in Dutch. Journal of Phonetics, 103: 101296. doi:10.1016/j.wocn.2024.101296.

    Abstract

    Different talkers speak differently, even within the same homogeneous group. These differences lead to acoustic variability in speech, causing challenges for correct perception of the intended message. Because previous descriptions of this acoustic variability have focused mostly on segments, talker variability in prosodic structures is not yet well documented. The present study therefore examined acoustic between-talker variability in word stress in Dutch. We recorded 40 native Dutch talkers from a participant sample with minimal dialectal variation and balanced gender, producing segmentally overlapping words (e.g., VOORnaam vs. voorNAAM; ‘first name’ vs. ‘respectable’, capitalization indicates lexical stress), and measured different acoustic cues to stress. Each individual participant’s acoustic measurements were analyzed using Linear Discriminant Analyses, which provide coefficients for each cue, reflecting the strength of each cue in a talker’s productions. On average, talkers primarily used mean F0, intensity, and duration. Moreover, each participant also employed a unique combination of cues, illustrating large prosodic variability between talkers. In fact, classes of cue-weighting tendencies emerged, differing in which cue was used as the main cue. These results offer the most comprehensive acoustic description, to date, of word stress in Dutch, and illustrate that large prosodic variability is present between individual talkers.
  • Shan, W., Zhang, Y., Zhao, J., Wu, S., Zhao, L., Ip, P., Tucker, J. D., & Jiang, F. (2024). Positive parent–child interactions moderate certain maltreatment effects on psychosocial well-being in 6-year-old children. Pediatric Research, 95, 802-808. doi:10.1038/s41390-023-02842-5.

    Abstract

    Background: Positive parental interactions may buffer maltreated children from poor psychosocial outcomes. The study aims to evaluate the associations between various types of maltreatment and psychosocial outcomes in early childhood, and examine the moderating effect of positive parent-child interactions on them.

    Methods: Data were from a representative Chinese 6-year-old children sample (n = 17,088). Caregivers reported the history of child maltreatment perpetrated by any individuals, completed the Strengths and Difficulties Questionnaire as a proxy for psychosocial well-being, and reported the frequency of their interactions with children by the Chinese Parent-Child Interaction Scale.

    Results: Physical abuse, emotional abuse, neglect, and sexual abuse were all associated with higher odds of psychosocial problems (aOR = 1.90 [95% CI: 1.57-2.29], aOR = 1.92 [95% CI: 1.75-2.10], aOR = 1.64 [95% CI: 1.17-2.30], aOR = 2.03 [95% CI: 1.30-3.17]). Positive parent-child interactions were associated with lower odds of psychosocial problems after accounting for different types of maltreatment. The moderating effect of frequent parent-child interactions was found only in the association between occasional only physical abuse and psychosocial outcomes (interaction term: aOR = 0.34, 95% CI: 0.15-0.77).

    Conclusions: Maltreatment and positive parent-child interactions have impacts on psychosocial well-being in early childhood. Positive parent-child interactions could only buffer the adverse effect of occasional physical abuse on psychosocial outcomes. More frequent parent-child interactions may be an important intervention opportunity among some children.

    Impact: It provides the first data on the prevalence of different single types and combinations of maltreatment in early childhood in Shanghai, China by drawing on a city-level population-representative sample. It adds to evidence that different forms and degrees of maltreatment were all associated with a higher risk of psychosocial problems in early childhood. Among them, sexual abuse posed the highest risk, followed by emotional abuse. It innovatively found that higher frequencies of parent-child interactions may provide buffering effects only to children who are exposed to occasional physical abuse. It provides a potential intervention opportunity, especially for physically abused children.
  • Shitova, N., Roelofs, A., Schriefers, H., Bastiaansen, M. C. M., & Schoffelen, J.-M. (2017). Control adjustments in speaking: Electrophysiology of the Gratton effect in picture naming. Cortex, 92, 289-303. doi:10.1016/j.cortex.2017.04.017.

    Abstract

    Accumulating evidence suggests that spoken word production requires different amounts of top-down control depending on the prevailing circumstances. For example, during Stroop-like tasks, the interference in response time (RT) is typically larger following congruent trials than following incongruent trials. This effect is called the Gratton effect, and has been taken to reflect top-down control adjustments based on the previous trial type. Such control adjustments have been studied extensively in Stroop and Eriksen flanker tasks (mostly using manual responses), but not in the picture–word interference (PWI) task, which is a workhorse of language production research. In one of the few studies of the Gratton effect in PWI, Van Maanen and Van Rijn (2010) examined the effect in picture naming RTs during dual-task performance. Based on PWI effect differences between dual-task conditions, they argued that the functional locus of the PWI effect differs between post-congruent trials (i.e., locus in perceptual and conceptual encoding) and post-incongruent trials (i.e., locus in word planning). However, the dual-task procedure may have contaminated the results. We therefore performed an electroencephalography (EEG) study on the Gratton effect in a regular PWI task. We observed a PWI effect in the RTs, in the N400 component of the event-related brain potentials, and in the midfrontal theta power, regardless of the previous trial type. Moreover, the RTs, N400, and theta power reflected the Gratton effect. These results provide evidence that the PWI effect arises at the word planning stage following both congruent and incongruent trials, while the amount of top-down control changes depending on the previous trial type.
  • Shitova, N., Roelofs, A., Coughler, C., & Schriefers, H. (2017). P3 event-related brain potential reflects allocation and use of central processing capacity in language production. Neuropsychologia, 106, 138-145. doi:10.1016/j.neuropsychologia.2017.09.024.

    Abstract

    Allocation and use of central processing capacity have been associated with the P3 event-related brain potential amplitude in a large variety of non-linguistic tasks. However, little is known about the P3 in spoken language production. Moreover, the few studies that are available report opposing P3 effects when task complexity is manipulated. We investigated allocation and use of central processing capacity in a spoken phrase production task: Participants switched every second trial between describing pictures using noun phrases with one adjective (size only; simple condition, e.g., “the big desk”) or two adjectives (size and color; complex condition, e.g., “the big red desk”). Capacity allocation was manipulated by complexity, and capacity use by switching. Response time (RT) was longer for complex than for simple trials. Moreover, complexity and switching interacted: RTs were longer on switch than on repeat trials for simple phrases but shorter on switch than on repeat trials for complex phrases. P3 amplitude increased with complexity. Moreover, complexity and switching interacted: The complexity effect was larger on the switch trials than on the repeat trials. These results provide evidence that the allocation and use of central processing capacity in language production are differentially reflected in the P3 amplitude.
  • Silva, S., Inácio, F., Folia, V., & Petersson, K. M. (2017). Eye movements in implicit artificial grammar learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(9), 1387-1402. doi:10.1037/xlm0000350.

    Abstract

    Artificial grammar learning (AGL) has been probed with forced-choice behavioral tests (active tests). Recent attempts to probe the outcomes of learning (implicitly acquired knowledge) with eye-movement responses (passive tests) have shown null results. However, these latter studies have not tested for sensitivity effects, for example, increased eye movements on a printed violation. In this study, we tested for sensitivity effects in AGL tests with (Experiment 1) and without (Experiment 2) concurrent active tests (preference- and grammaticality classification) in an eye-tracking experiment. Eye movements discriminated between sequence types in passive tests and more so in active tests. The eye-movement profile did not differ between preference and grammaticality classification, and it resembled sensitivity effects commonly observed in natural syntax processing. Our findings show that the outcomes of implicit structured sequence learning can be characterized in eye tracking. More specifically, whole trial measures (dwell time, number of fixations) showed robust AGL effects, whereas first-pass measures (first-fixation duration) did not. Furthermore, our findings strengthen the link between artificial and natural syntax processing, and they shed light on the factors that determine performance differences in preference and grammaticality classification tests.
  • Silva, S., Petersson, K. M., & Castro, S. L. (2017). The effects of ordinal load on incidental temporal learning. Quarterly Journal of Experimental Psychology, 70(4), 664-674. doi:10.1080/17470218.2016.1146909.

    Abstract

    How can we grasp the temporal structure of events? A few studies have indicated that representations of temporal structure are acquired when there is an intention to learn, but not when learning is incidental. Response-to-stimulus intervals, uncorrelated temporal structures, unpredictable ordinal information, and lack of metrical organization have been pointed out as key obstacles to incidental temporal learning, but the literature includes piecemeal demonstrations of learning under all these circumstances. We suggest that the unacknowledged effects of ordinal load may help reconcile these conflicting findings, ordinal load referring to the cost of identifying the sequence of events (e.g., tones, locations) where a temporal pattern is embedded. In a first experiment, we manipulated ordinal load into simple and complex levels. Participants learned ordinal-simple sequences, despite their uncorrelated temporal structure and lack of metrical organization. They did not learn ordinal-complex sequences, even though there were neither response-to-stimulus intervals nor unpredictable ordinal information. In a second experiment, we probed learning of ordinal-complex sequences with strong metrical organization, and again there was no learning. We conclude that ordinal load is a key obstacle to incidental temporal learning. Further analyses showed that the effect of ordinal load is to mask the expression of temporal knowledge, rather than to prevent learning.
  • Silva, S., Folia, V., Hagoort, P., & Petersson, K. M. (2017). The P600 in Implicit Artificial Grammar Learning. Cognitive Science, 41(1), 137-157. doi:10.1111/cogs.12343.

    Abstract

    The suitability of the Artificial Grammar Learning (AGL) paradigm to capture relevant aspects of the acquisition of linguistic structures has been empirically tested in a number of EEG studies. Some have shown a syntax-related P600 component, but it has not been ruled out that the AGL P600 effect is a response to surface features (e.g., subsequence familiarity) rather than the underlying syntax structure. Therefore, in this study, we controlled for the surface characteristics of the test sequences (associative chunk strength) and recorded the EEG before (baseline preference classification) and after (preference and grammaticality classification) exposure to a grammar. A typical, centroparietal P600 effect was elicited by grammatical violations after exposure, suggesting that the AGL P600 effect signals a response to structural irregularities. Moreover, preference and grammaticality classification showed a qualitatively similar ERP profile, strengthening the idea that the implicit structural mere exposure paradigm in combination with preference classification is a suitable alternative to the traditional grammaticality classification test.
  • Silverstein, P., Bergmann, C., & Syed, M. (Eds.). (2024). Open science and metascience in developmental psychology [Special Issue]. Infant and Child Development, 33(1).
  • Silverstein, P., Bergmann, C., & Syed, M. (2024). Open science and metascience in developmental psychology: Introduction to the special issue. Infant and Child Development, 33(1): e2495. doi:10.1002/icd.2495.
  • Simon, E., & Sjerps, M. J. (2017). Phonological category quality in the mental lexicon of child and adult learners. International Journal of Bilingualism, 21(4), 474-499. doi:10.1177/1367006915626589.

    Abstract

    Aims and objectives: The aim was to identify which criteria children use to decide on the category membership of native and non-native vowels, and to get insight into the organization of phonological representations in the bilingual mind.

    Methodology: The study consisted of two cross-language mispronunciation detection tasks in which L2 vowels were inserted into L1 words and vice versa. In Experiment 1, 10- to 12-year-old Dutch-speaking children were presented with Dutch words which were either pronounced with the target Dutch vowel or with an English vowel inserted in the Dutch consonantal frame. Experiment 2 was a mirror of the first, with English words which were pronounced “correctly” or which were “mispronounced” with a Dutch vowel.

    Data and analysis: Analyses focused on the extent to which child and adult listeners accepted substitutions of Dutch vowels by English ones, and vice versa.

    Findings: The results of Experiment 1 revealed that between the ages of ten and twelve, children have well-established phonological vowel categories in their native language. However, Experiment 2 showed that in their non-native language, children tended to accept mispronounced items which involve sounds from their native language. At the same time, though, they did not fully rely on their native phonemic inventory because the children accepted most of the correctly pronounced English items.

    Originality: While many studies have examined native and non-native perception by infants and adults, studies on first and second language perception of school-age children are rare. This study adds to the body of literature aimed at expanding our knowledge in this area.

    Implications: The study has implications for models of the organization of the bilingual mind: while proficient adult non-native listeners generally have clearly separated sets of phonological representations for their two languages, for non-proficient child learners the L1 phonology still exerts a strong influence on the L2 phonology.
  • Skeide, M. A., Kumar, U., Mishra, R. K., Tripathi, V. N., Guleria, A., Singh, J. P., Eisner, F., & Huettig, F. (2017). Learning to read alters cortico-subcortical crosstalk in the visual system of illiterates. Science Advances, 3(5): e1602612. doi:10.1126/sciadv.1602612.

    Abstract

    Learning to read is known to result in a reorganization of the developing cerebral cortex. In this longitudinal resting-state functional magnetic resonance imaging study in illiterate adults we show that only 6 months of literacy training can lead to neuroplastic changes in the mature brain. We observed that literacy-induced neuroplasticity is not confined to the cortex but increases the functional connectivity between the occipital lobe and subcortical areas in the midbrain and the thalamus. Individual rates of connectivity increase were significantly related to the individual decoding skill gains. These findings crucially complement current neurobiological concepts of normal and impaired literacy acquisition.
  • Skirgard, H., Roberts, S. G., & Yencken, L. (2017). Why are some languages confused for others? Investigating data from the Great Language Game. PLoS One, 12(4): e0165934. doi:10.1371/journal.pone.0165934.

    Abstract

    In this paper we explore the results of a large-scale online game called ‘the Great Language Game’, in which people listen to an audio speech sample and make a forced-choice guess about the identity of the language from 2 or more alternatives. The data include 15 million guesses from 400 audio recordings of 78 languages. We investigate which languages are confused for which in the game, and if this correlates with the similarities that linguists identify between languages. This includes shared lexical items, similar sound inventories and established historical relationships. Our findings are, as expected, that players are more likely to confuse two languages that are objectively more similar. We also investigate factors that may affect players’ ability to accurately select the target language, such as how many people speak the language, how often the language is mentioned in written materials and the economic power of the target language community. We see that non-linguistic factors affect players’ ability to accurately identify the target. For example, languages with wider ‘global reach’ are more often identified correctly. This suggests that both linguistic and cultural knowledge influence the perception and recognition of languages and their similarity.
  • Slonimska, A., & Roberts, S. G. (2017). A case for systematic sound symbolism in pragmatics: Universals in wh-words. Journal of Pragmatics, 116, 1-20. doi:10.1016/j.pragma.2017.04.004.

    Abstract

    This study investigates whether there is a universal tendency for content interrogative words (wh-words) within a language to sound similar in order to facilitate pragmatic inference in conversation. Gaps between turns in conversation are very short, meaning that listeners must begin planning their turn as soon as possible. While previous research has shown that paralinguistic features such as prosody and eye gaze provide cues to the pragmatic function of upcoming turns, we hypothesise that a systematic phonetic cue that marks interrogative words would also help early recognition of questions (allowing early preparation of answers), for instance wh-words sounding similar within a language. We analyzed 226 languages from 66 different language families by means of permutation tests. We found that initial segments of wh-words were more similar within a language than between languages, also when controlling for language family, geographic area (stratified permutation) and analyzability (compound phrases excluded). Random samples tests revealed that initial segments of wh-words were more similar than initial segments of randomly selected word sets and conceptually related word sets (e.g., body parts, actions, pronouns). Finally, we hypothesized that this cue would be more useful at the beginning of a turn, so the similarity of the initial segment of wh-words should be greater in languages that place them at the beginning of a clause. We gathered typological data on 110 languages, and found the predicted trend, although statistical significance was not attained. While there may be several mechanisms that bring about this pattern (e.g., common derivation), we suggest that the ultimate explanation of the similarity of interrogative words is to facilitate early speech-act recognition. Importantly, this hypothesis can be tested empirically, and the current results provide a sound basis for future experimental tests.
  • Slonimska, A. (2024). The role of iconicity and simultaneity in efficient communication in the visual modality: Evidence from LIS (Italian Sign Language) [Dissertation Abstract]. Sign Language & Linguistics. Advance online publication. doi:10.1075/sll.00084.slo.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2017). The multimodal nature of spoken word processing in the visual world: Testing the predictions of alternative models of multimodal integration. Journal of Memory and Language, 93, 276-303. doi:10.1016/j.jml.2016.08.005.

    Abstract

    Ambiguity in natural language is ubiquitous, yet spoken communication is effective due to integration of information carried in the speech signal with information available in the surrounding multimodal landscape. Language mediated visual attention requires visual and linguistic information integration and has thus been used to examine properties of the architecture supporting multimodal processing during spoken language comprehension. In this paper we test predictions generated by alternative models of this multimodal system. A model (TRACE) in which multimodal information is combined at the point of the lexical representations of words generated predictions of a stronger effect of phonological rhyme relative to semantic and visual information on gaze behaviour, whereas a model in which sub-lexical information can interact across modalities (MIM) predicted a greater influence of visual and semantic information, compared to phonological rhyme. Two visual world experiments designed to test these predictions offer support for sub-lexical multimodal interaction during online language processing.
  • Soheili-Nezhad, S., Ibáñez-Solé, O., Izeta, A., Hoeijmakers, J. H. J., & Stoeger, T. (2024). Time is ticking faster for long genes in aging. Trends in Genetics, 40(4), 299-312. doi:10.1016/j.tig.2024.01.009.

    Abstract

    Recent studies of aging organisms have identified a systematic phenomenon, characterized by a negative correlation between gene length and their expression in various cell types, species, and diseases. We term this phenomenon gene-length-dependent transcription decline (GLTD) and suggest that it may represent a bottleneck in the transcription machinery and thereby significantly contribute to aging as an etiological factor. We review potential links between GLTD and key aging processes such as DNA damage and explore their potential in identifying disease modification targets. Notably, in Alzheimer’s disease, GLTD spotlights extremely long synaptic genes at chromosomal fragile sites (CFSs) and their vulnerability to postmitotic DNA damage. We suggest that GLTD is an integral element of biological aging.
  • Sollis, E., Deriziotis, P., Saitsu, H., Miyake, N., Matsumoto, N., Hoffer, M. J. V., Ruivenkamp, C. A., Alders, M., Okamoto, N., Bijlsma, E. K., Plomp, A. S., & Fisher, S. E. (2017). Equivalent missense variant in the FOXP2 and FOXP1 transcription factors causes distinct neurodevelopmental disorders. Human Mutation, 38(11), 1542-1554. doi:10.1002/humu.23303.

    Abstract

    The closely related paralogues FOXP2 and FOXP1 encode transcription factors with shared functions in the development of many tissues, including the brain. However, while mutations in FOXP2 lead to a speech/language disorder characterized by childhood apraxia of speech (CAS), the clinical profile of FOXP1 variants includes a broader neurodevelopmental phenotype with global developmental delay, intellectual disability and speech/language impairment. Using clinical whole-exome sequencing, we report an identical de novo missense FOXP1 variant identified in three unrelated patients. The variant, p.R514H, is located in the forkhead-box DNA-binding domain and is equivalent to the well-studied p.R553H FOXP2 variant that co-segregates with CAS in a large UK family. We present here for the first time a direct comparison of the molecular and clinical consequences of the same mutation affecting the equivalent residue in FOXP1 and FOXP2. Detailed functional characterization of the two variants in cell model systems revealed very similar molecular consequences, including aberrant subcellular localization, disruption of transcription factor activity and deleterious effects on protein interactions. Nonetheless, clinical manifestations were broader and more severe in the three cases carrying the p.R514H FOXP1 variant than in individuals with the p.R553H variant related to CAS, highlighting divergent roles of FOXP2 and FOXP1 in neurodevelopment.

    Additional information

    humu23303-sup-0001-SuppMat.pdf
  • Soutschek, A., Burke, C. J., Beharelle, A. R., Schreiber, R., Weber, S. C., Karipidis, I. I., Ten Velden, J., Weber, B., Haker, H., Kalenscher, T., & Tobler, P. N. (2017). The dopaminergic reward system underpins gender differences in social preferences. Nature Human Behaviour, 1, 819-827. doi:10.1038/s41562-017-0226-y.

    Abstract

    Women are known to have stronger prosocial preferences than men, but it remains an open question as to how these behavioural differences arise from differences in brain functioning. Here, we provide a neurobiological account for the hypothesized gender difference. In a pharmacological study and an independent neuroimaging study, we tested the hypothesis that the neural reward system encodes the value of sharing money with others more strongly in women than in men. In the pharmacological study, we reduced receptor type-specific actions of dopamine, a neurotransmitter related to reward processing, which resulted in more selfish decisions in women and more prosocial decisions in men. Converging findings from an independent neuroimaging study revealed gender-related activity in neural reward circuits during prosocial decisions. Thus, the neural reward system appears to be more sensitive to prosocial rewards in women than in men, providing a neurobiological account for why women often behave more prosocially than men.

    A large body of evidence suggests that women are often more prosocial (for example, generous, altruistic and inequality averse) than men, at least when other factors such as reputation and strategic considerations are excluded [1,2,3]. This dissociation could result from cultural expectations and gender stereotypes, because in Western societies women are more strongly expected to be prosocial [4,5,6] and sensitive to variations in social context than men [1]. It remains an open question, however, whether and how on a neurobiological level the social preferences of women and men arise from differences in brain functioning. The assumption of gender differences in social preferences predicts that the neural reward system’s sensitivity to prosocial and selfish rewards should differ between women and men. Specifically, the hypothesis would be that the neural reward system is more sensitive to prosocial than selfish rewards in women and more sensitive to selfish than prosocial rewards in men. The goal of the current study was to test in two independent experiments for the hypothesized gender differences on both a pharmacological and a haemodynamic level. In particular, we examined the functions of the neurotransmitter dopamine using a dopamine receptor antagonist, and the role of the striatum (a brain region strongly innervated by dopamine neurons) during social decision-making in women and men using neuroimaging.

    The neurotransmitter dopamine is thought to play a key role in neural reward processing [7,8]. Recent evidence suggests that dopaminergic activity is sensitive not only to rewards for oneself but to rewards for others as well [9]. The assumption that dopamine is sensitive to both self- and other-related outcomes is consistent with the finding that the striatum shows activation for both selfish and shared rewards [10,11,12,13,14,15]. The dopaminergic response may represent a net signal encoding the difference between the value of preferred and unpreferred rewards [8]. Regarding the hypothesized gender differences in social preferences, this account makes the following predictions. If women prefer shared (prosocial) outcomes [2], women’s dopaminergic signals to shared rewards will be stronger than to non-shared (selfish) rewards, so reducing dopaminergic activity should bias women to make more selfish decisions. In line with this hypothesis, a functional imaging study reported enhanced striatal activation in female participants during charitable donations [11]. In contrast, if men prefer selfish over prosocial rewards, dopaminergic activity should be enhanced to selfish compared to prosocial rewards. In line with this view, upregulating dopaminergic activity in a sample of exclusively male participants increased selfish behaviour in a bargaining game [16]. Thus, contrary to the hypothesized effect in women, reducing dopaminergic neurotransmission should render men more prosocial. Taken together, the current study tested the following three predictions: we expected the dopaminergic reward system (1) to be more sensitive to prosocial than selfish rewards in women and (2) to be more sensitive to selfish than prosocial rewards in men. As a consequence of these two predictions, we also predicted (3) dopaminoceptive regions such as the striatum to show stronger activation to prosocial relative to selfish rewards in women than in men.

    To test these predictions, we conducted a pharmacological study in which we reduced dopaminergic neurotransmission with amisulpride. Amisulpride is a dopamine antagonist that is highly specific for dopaminergic D2/D3 receptors [17]. After receiving amisulpride or placebo, participants performed an interpersonal decision task [18,19,20], in which they made choices between a monetary reward only for themselves (selfish reward option) and sharing money with others (prosocial reward option). We expected that blocking dopaminergic neurotransmission with amisulpride, relative to placebo, would result in fewer prosocial choices in women and more prosocial choices in men. To investigate whether potential gender-related effects of dopamine are selective for social decision-making, we also tested the effects of amisulpride on time preferences in a non-social control task that was matched to the interpersonal decision task in terms of choice structure.

    In addition, because dopaminergic neurotransmission plays a crucial role in brain regions involved in value processing, such as the striatum [21], a gender-related role of dopaminergic activity for social decision-making should also be reflected by dissociable activity patterns in the striatum. Therefore, to further test our hypothesis, we investigated the neural correlates of social decision-making in a functional imaging study. In line with our predictions for the pharmacological study, we expected to find stronger striatum activity during prosocial relative to selfish decisions in women, whereas men should show enhanced activity in the striatum for selfish relative to prosocial choices.

    Additional information

    Supplementary Information
  • Speed, L. J., & Majid, A. (2017). Dutch modality exclusivity norms: Simulating perceptual modality in space. Behavior Research Methods, 49(6), 2204-2218. doi:10.3758/s13428-017-0852-3.

    Abstract

    Perceptual information is important for the meaning of nouns. We present modality exclusivity norms for 485 Dutch nouns rated on visual, auditory, haptic, gustatory, and olfactory associations. We found these nouns are highly multimodal. They were rated most dominant in vision, and least in olfaction. A factor analysis identified two main dimensions: one loaded strongly on olfaction and gustation (reflecting joint involvement in flavor), and a second loaded strongly on vision and touch (reflecting joint involvement in manipulable objects). In a second study, we validated the ratings with similarity judgments. As expected, words from the same dominant modality were rated more similar than words from different dominant modalities; but – more importantly – this effect was enhanced when word pairs had high modality strength ratings. We further demonstrated the utility of our ratings by investigating whether perceptual modalities are differentially experienced in space, in a third study. Nouns were categorized into their dominant modality and used in a lexical decision experiment where the spatial position of words was either in proximal or distal space. We found words dominant in olfaction were processed faster in proximal than distal space compared to the other modalities, suggesting olfactory information is mentally simulated as “close” to the body. Finally, we collected ratings of emotion (valence, dominance, and arousal) to assess its role in perceptual space simulation, but the valence did not explain the data. So, words are processed differently depending on their perceptual associations, and strength of association is captured by modality exclusivity ratings.

    Additional information

    13428_2017_852_MOESM1_ESM.xlsx
  • Stergiakouli, E., Martin, J., Hamshere, M. L., Heron, J., St Pourcain, B., Timpson, N. J., Thapar, A., & Smith, G. D. (2017). Association between polygenic risk scores for attention-deficit hyperactivity disorder and educational and cognitive outcomes in the general population. International Journal of Epidemiology, 46(2), 421-428. doi:10.1093/ije/dyw216.

    Abstract

    Background: Children with a diagnosis of attention-deficit hyperactivity disorder (ADHD) have lower cognitive ability and are at risk of adverse educational outcomes; ADHD genetic risks have been found to predict childhood cognitive ability and other neurodevelopmental traits in the general population; thus genetic risks might plausibly also contribute to cognitive ability later in development and to educational underachievement.

    Methods: We generated ADHD polygenic risk scores in the Avon Longitudinal Study of Parents and Children participants (maximum N: 6928 children and 7280 mothers) based on the results of a discovery clinical sample, a genome-wide association study of 727 cases with ADHD diagnosis and 5081 controls. We tested if ADHD polygenic risk scores were associated with educational outcomes and IQ in adolescents and their mothers.

    Results: High ADHD polygenic scores in adolescents were associated with worse educational outcomes at Key Stage 3 [national tests conducted at age 13–14 years; β = −1.4 (−2.0 to −0.8), P = 2.3 × 10⁻⁶], at General Certificate of Secondary Education exams at age 15–16 years [β = −4.0 (−6.1 to −1.9), P = 1.8 × 10⁻⁴], reduced odds of sitting Key Stage 5 examinations at age 16–18 years [odds ratio (OR) = 0.90 (0.88 to 0.97), P = 0.001] and lower IQ scores at age 15.5 [β = −0.8 (−1.2 to −0.4), P = 2.4 × 10⁻⁴]. Moreover, maternal ADHD polygenic scores were associated with lower maternal educational achievement [β = −0.09 (−0.10 to −0.06), P = 0.005] and lower maternal IQ [β = −0.6 (−1.2 to −0.1), P = 0.03].

    Conclusions: ADHD diagnosis risk alleles impact on functional outcomes in two generations (mother and child) and likely have intergenerational environmental effects.
  • Stergiakouli, E., Smith, G. D., Martin, J., Skuse, D. H., Viechtbauer, W., Ring, S. M., Ronald, A., Evans, D. E., Fisher, S. E., Thapar, A., & St Pourcain, B. (2017). Shared genetic influences between dimensional ASD and ADHD symptoms during child and adolescent development. Molecular Autism, 8: 18. doi:10.1186/s13229-017-0131-2.

    Abstract

    Background: Shared genetic influences between attention-deficit/hyperactivity disorder (ADHD) symptoms and autism spectrum disorder (ASD) symptoms have been reported. Cross-trait genetic relationships are, however, subject to dynamic changes during development. We investigated the continuity of genetic overlap between ASD and ADHD symptoms in a general population sample during childhood and adolescence. We also studied uni- and cross-dimensional trait-disorder links with respect to genetic ADHD and ASD risk.

    Methods: Social-communication difficulties (N ≤ 5551, Social and Communication Disorders Checklist, SCDC) and combined hyperactive-impulsive/inattentive ADHD symptoms (N ≤ 5678, Strengths and Difficulties Questionnaire, SDQ-ADHD) were repeatedly measured in a UK birth cohort (ALSPAC, age 7 to 17 years). Genome-wide summary statistics on clinical ASD (5305 cases; 5305 pseudo-controls) and ADHD (4163 cases; 12,040 controls/pseudo-controls) were available from the Psychiatric Genomics Consortium. Genetic trait variances and genetic overlap between phenotypes were estimated using genome-wide data.

    Results: In the general population, genetic influences for SCDC and SDQ-ADHD scores were shared throughout development. Genetic correlations across traits reached a similar strength and magnitude (cross-trait rg ≤ 1, pmin = 3 × 10⁻⁴) as those between repeated measures of the same trait (within-trait rg ≤ 0.94, pmin = 7 × 10⁻⁴). Shared genetic influences between traits, especially during later adolescence, may implicate variants in K-RAS signalling upregulated genes (p-meta = 6.4 × 10⁻⁴). Uni-dimensionally, each population-based trait mapped to the expected behavioural continuum: risk-increasing alleles for clinical ADHD were persistently associated with SDQ-ADHD scores throughout development (marginal regression R² = 0.084%). An age-specific genetic overlap between clinical ASD and social-communication difficulties during childhood was also shown, as per previous reports. Cross-dimensionally, however, neither SCDC nor SDQ-ADHD scores were linked to genetic risk for disorder.

    Conclusions: In the general population, genetic aetiologies between social-communication difficulties and ADHD symptoms are shared throughout child and adolescent development and may implicate similar biological pathways that co-vary during development. Within both the ASD and the ADHD dimension, population-based traits are also linked to clinical disorder, although much larger clinical discovery samples are required to reliably detect cross-dimensional trait-disorder relationships.
  • Stivers, T., Chalfoun, A., & Rossi, G. (2024). To err is human but to persist is diabolical: Toward a theory of interactional policing. Frontiers in Sociology: Sociological Theory, 9: 1369776. doi:10.3389/fsoc.2024.1369776.

    Abstract

    Social interaction is organized around norms and preferences that guide our construction of actions and our interpretation of those of others, creating a reflexive moral order. Sociological theory suggests two possibilities for the type of moral order that underlies the policing of interactional norm and preference violations: a morality that focuses on the nature of violations themselves and a morality that focuses on the positioning of actors as they maintain the comprehensibility of their conduct, even when they depart from norms and preferences. We find that actors are more likely to reproach interactional violations for which an account is not provided by the transgressor, and that actors weakly reproach or let pass first offenses while more strongly policing violators who persist in bad behavior. Based on these findings, we outline a theory of interactional policing that rests not on the nature of the violation but rather on actors' moral positioning.
  • Stoehr, A., Benders, T., Van Hell, J. G., & Fikkert, P. (2017). Second language attainment and first language attrition: The case of VOT in immersed Dutch–German late bilinguals. Second Language Research, 33(4), 483-518. doi:10.1177/0267658317704261.

    Abstract

    Speech of late bilinguals has frequently been described in terms of cross-linguistic influence (CLI) from the native language (L1) to the second language (L2), but CLI from the L2 to the L1 has received relatively little attention. This article addresses L2 attainment and L1 attrition in voicing systems through measures of voice onset time (VOT) in two groups of Dutch–German late bilinguals in the Netherlands. One group comprises native speakers of Dutch and the other group comprises native speakers of German, and the two groups further differ in their degree of L2 immersion. The L1-German–L2-Dutch bilinguals (N = 23) are exposed to their L2 at home and outside the home, and the L1-Dutch–L2-German bilinguals (N = 18) are only exposed to their L2 at home. We tested L2 attainment by comparing the bilinguals’ L2 to the other bilinguals’ L1, and L1 attrition by comparing the bilinguals’ L1 to Dutch monolinguals (N = 29) and German monolinguals (N = 27). Our findings indicate that complete L2 immersion may be advantageous in L2 acquisition, but at the same time it may cause L1 phonetic attrition. We discuss how the results match the predictions made by Flege’s Speech Learning Model and explore how far bilinguals’ success in acquiring L2 VOT and maintaining L1 VOT depends on the immersion context, articulatory constraints and the risk of sounding foreign accented.
  • Ye, Z., Stolk, A., Toni, I., & Hagoort, P. (2017). Oxytocin modulates semantic integration in speech comprehension. Journal of Cognitive Neuroscience, 29, 267-276. doi:10.1162/jocn_a_01044.

    Abstract

    Listeners interpret utterances by integrating information from multiple sources including word level semantics and world knowledge. When the semantics of an expression is inconsistent with his or her knowledge about the world, the listener may have to search through the conceptual space for alternative possible world scenarios that can make the expression more acceptable. Such cognitive exploration requires considerable computational resources and might depend on motivational factors. This study explores whether and how oxytocin, a neuropeptide known to influence social motivation by reducing social anxiety and enhancing affiliative tendencies, can modulate the integration of world knowledge and sentence meanings. The study used a between-participant double-blind randomized placebo-controlled design. Semantic integration, indexed with magnetoencephalography through the N400m marker, was quantified while 45 healthy male participants listened to sentences that were either congruent or incongruent with facts of the world, after receiving intranasally delivered oxytocin or placebo. Compared with congruent sentences, world knowledge incongruent sentences elicited a stronger N400m signal from the left inferior frontal and anterior temporal regions and medial pFC (the N400m effect) in the placebo group. Oxytocin administration significantly attenuated the N400m effect at both sensor and cortical source levels throughout the experiment, in a state-like manner. Additional electrophysiological markers suggest that the absence of the N400m effect in the oxytocin group is unlikely due to the lack of early sensory or semantic processing or a general downregulation of attention. These findings suggest that oxytocin drives listeners to resolve challenges of semantic integration, possibly by promoting the cognitive exploration of alternative possible world scenarios.
  • Tachmazidou, I., Süveges, D., Min, J. L., Ritchie, G. R. S., Steinberg, J., Walter, K., Iotchkova, V., Schwartzentruber, J., Huang, J., Memari, Y., McCarthy, S., Crawford, A. A., Bombieri, C., Cocca, M., Farmaki, A.-E., Gaunt, T. R., Jousilahti, P., Kooijman, M. N., Lehne, B., Malerba, G., Männistö, S., Matchan, A., Medina-Gomez, C., Metrustry, S. J., Nag, A., Ntalla, I., Paternoster, L., Rayner, N. W., Sala, C., Scott, W. R., Shihab, H. A., Southam, L., St Pourcain, B., Traglia, M., Trajanoska, K., Zaza, G., Zhang, W., Artigas, M. S., Bansal, N., Benn, M., Chen, Z., Danecek, P., Lin, W.-Y., Locke, A., Luan, J., Manning, A. K., Mulas, A., Sidore, C., Tybjaerg-Hansen, A., Varbo, A., Zoledziewska, M., Finan, C., Hatzikotoulas, K., Hendricks, A. E., Kemp, J. P., Moayyeri, A., Panoutsopoulou, K., Szpak, M., Wilson, S. G., Boehnke, M., Cucca, F., Di Angelantonio, E., Langenberg, C., Lindgren, C., McCarthy, M. I., Morris, A. P., Nordestgaard, B. G., Scott, R. A., Tobin, M. D., Wareham, N. J., Burton, P., Chambers, J. C., Smith, G. D., Dedoussis, G., Felix, J. F., Franco, O. H., Gambaro, G., Gasparini, P., Hammond, C. J., Hofman, A., Jaddoe, V. W. V., Kleber, M., Kooner, J. S., Perola, M., Relton, C., Ring, S. M., Rivadeneira, F., Salomaa, V., Spector, T. D., Stegle, O., Toniolo, D., Uitterlinden, A. G., Barroso, I., Greenwood, C. M. T., Perry, J. R. B., Walker, B. R., Butterworth, A. S., Xue, Y., Durbin, R., Small, K. S., Soranzo, N., Timpson, N. J., & Zeggini, E. (2017). Whole-Genome Sequencing coupled to imputation discovers genetic signals for anthropometric traits. The American Journal of Human Genetics, 100(6), 865-884. doi:10.1016/j.ajhg.2017.04.014.

    Abstract

    Deep sequence-based imputation can enhance the discovery power of genome-wide association studies by assessing previously unexplored variation across the common- and low-frequency spectra. We applied a hybrid whole-genome sequencing (WGS) and deep imputation approach to examine the broader allelic architecture of 12 anthropometric traits associated with height, body mass, and fat distribution in up to 267,616 individuals. We report 106 genome-wide significant signals that have not been previously identified, including 9 low-frequency variants pointing to functional candidates. Of the 106 signals, 6 are in genomic regions that have not been implicated with related traits before, 28 are independent signals at previously reported regions, and 72 represent previously reported signals for a different anthropometric trait. 71% of signals reside within genes and fine mapping resolves 23 signals to one or two likely causal variants. We confirm genetic overlap between human monogenic and polygenic anthropometric traits and find signal enrichment in cis expression QTLs in relevant tissues. Our results highlight the potential of WGS strategies to enhance biologically relevant discoveries across the frequency spectrum.
  • Takashima, A., Bakker, I., Van Hell, J. G., Janzen, G., & McQueen, J. M. (2017). Interaction between episodic and semantic memory networks in the acquisition and consolidation of novel spoken words. Brain and Language, 167, 44-60. doi:10.1016/j.bandl.2016.05.009.

    Abstract

    When a novel word is learned, its memory representation is thought to undergo a process of consolidation and integration. In this study, we tested whether the neural representations of novel words change as a function of consolidation by observing brain activation patterns just after learning and again after a delay of one week. Words learned with meanings were remembered better than those learned without meanings. Both episodic (hippocampus-dependent) and semantic (dependent on distributed neocortical areas) memory systems were utilised during recognition of the novel words. The extent to which the two systems were involved changed as a function of time and the amount of associated information, with more involvement of both systems for the meaningful words than for the form-only words after the one-week delay. These results suggest that the reason the meaningful words were remembered better is that their retrieval can benefit more from these two complementary memory systems.
  • Takashima, A., Carota, F., Schoots, V., Redmann, A., Jehee, J., & Indefrey, P. (2024). Tomatoes are red: The perception of achromatic objects elicits retrieval of associated color knowledge. Journal of Cognitive Neuroscience, 36(1), 24-45. doi:10.1162/jocn_a_02068.

    Abstract

    When preparing to name an object, semantic knowledge about the object and its attributes is activated, including perceptual properties. It is unclear, however, whether semantic attribute activation contributes to lexical access or is a consequence of activating a concept irrespective of whether that concept is to be named or not. In this study, we measured neural responses using fMRI while participants named objects that are typically green or red, presented in black line drawings. Furthermore, participants underwent two other tasks with the same objects, color naming and semantic judgment, to see if the activation pattern we observe during picture naming is (a) similar to that of a task that requires accessing the color attribute and (b) distinct from that of a task that requires accessing the concept but not its name or color. We used representational similarity analysis to detect brain areas that show similar patterns within the same color category, but show different patterns across the two color categories. In all three tasks, activation in the bilateral fusiform gyri (“Human V4”) correlated with a representational model encoding the red–green distinction weighted by the importance of color feature for the different objects. This result suggests that when seeing objects whose color attribute is highly diagnostic, color knowledge about the objects is retrieved irrespective of whether the color or the object itself has to be named.
  • Tamaoka, K., Makioka, S., Sanders, S., & Verdonschot, R. G. (2017). www.kanjidatabase.com: A new interactive online database for psychological and linguistic research on Japanese kanji and their compound words. Psychological Research, 81(3), 696-708. doi:10.1007/s00426-016-0764-3.

    Abstract

    Most experimental research making use of the Japanese language has involved the 1,945 officially standardized kanji (Japanese logographic characters) in the Joyo kanji list (originally announced by the Japanese government in 1981). However, this list was extensively modified in 2010: five kanji were removed and 196 kanji were added; the latest revision of the list now has a total of 2136 kanji. Using an up-to-date corpus consisting of 11 years' worth of articles printed in the Mainichi Newspaper (2000-2010), we have constructed two novel databases that can be used in psychological research using the Japanese language: (1) a database containing a wide variety of properties on the latest 2136 Joyo kanji, and (2) a novel database containing 27,950 two-kanji compound words (or jukugo). Based on these two databases, we have created an interactive website (www.kanjidatabase.com) to retrieve and store linguistic information to be used in psychological and linguistic experiments. The present paper reports the most important characteristics for the new databases, as well as their value for experimental psychological and linguistic research.
  • Tamaoka, K., Yu, S., Zhang, J., Otsuka, Y., Lim, H., Koizumi, M., & Verdonschot, R. G. (2024). Syntactic structures in motion: Investigating word order variations in verb-final (Korean) and verb-initial (Tongan) languages. Frontiers in Psychology, 15: 1360191. doi:10.3389/fpsyg.2024.1360191.

    Abstract

    This study explored sentence processing in two typologically distinct languages: Korean, a verb-final language, and Tongan, a verb-initial language. The first experiment revealed that in Korean, sentences arranged in the scrambled OSV (Object, Subject, Verb) order were processed more slowly than those in the canonical SOV order, highlighting a scrambling effect. It also found that sentences with subject topicalization in the SOV order were processed as swiftly as those in the canonical form, whereas sentences with object topicalization in the OSV order were processed with speeds and accuracy comparable to scrambled sentences. However, since topicalization and scrambling in Korean use the same OSV order, independently distinguishing the effects of topicalization is challenging. In contrast, Tongan allows for a clear separation of word orders for topicalization and scrambling, facilitating an independent evaluation of topicalization effects. The second experiment, employing a maze task, confirmed that Tongan’s canonical VSO order was processed more efficiently than the VOS scrambled order, thereby verifying a scrambling effect. The third experiment investigated the effects of both scrambling and topicalization in Tongan, finding that the canonical VSO order was processed most efficiently in terms of speed and accuracy, unlike the VOS scrambled and SVO topicalized orders. Notably, the OVS object-topicalized order was processed as efficiently as the VSO canonical order, while the SVO subject-topicalized order was slower than VSO but faster than VOS. By independently assessing the effects of topicalization apart from scrambling, this study demonstrates that both subject and object topicalization in Tongan facilitate sentence processing, contradicting the predictions based on movement-based anticipation.

    Additional information

    appendix 1-3
  • Tan, Y., Martin, R. C., & Van Dyke, J. A. (2017). Semantic and syntactic interference in sentence comprehension: A comparison of working memory models. Frontiers in Psychology, 8: 198. doi:10.3389/fpsyg.2017.00198.

    Abstract

    This study investigated the nature of the underlying working memory system supporting sentence processing through examining individual differences in sensitivity to retrieval interference effects during sentence comprehension. Interference effects occur when readers incorrectly retrieve sentence constituents which are similar to those required during integrative processes. We examined interference arising from a partial match between distracting constituents and syntactic and semantic cues, and related these interference effects to performance on working memory, short-term memory (STM), vocabulary, and executive function tasks. For online sentence comprehension, as measured by self-paced reading, the magnitude of individuals' syntactic interference effects was predicted by general WM capacity and the relation remained significant when partialling out vocabulary, indicating that the effects were not due to verbal knowledge. For offline sentence comprehension, as measured by responses to comprehension questions, both general WM capacity and vocabulary knowledge interacted with semantic interference for comprehension accuracy, suggesting that both general WM capacity and the quality of semantic representations played a role in determining how well interference was resolved offline. For comprehension question reaction times, a measure of semantic STM capacity interacted with semantic but not syntactic interference. However, a measure of phonological capacity (digit span) and a general measure of resistance to response interference (Stroop effect) did not predict individuals' interference resolution abilities in either online or offline sentence comprehension. The results are discussed in relation to the multiple capacities account of working memory (e.g., Martin and Romani, 1994; Martin and He, 2004), and the cue-based retrieval parsing approach (e.g., Lewis et al., 2006; Van Dyke et al., 2014). While neither approach was fully supported, a possible means of reconciling the two approaches and directions for future research are proposed.
  • Tanner, J. E., & Perlman, M. (2017). Moving beyond ‘meaning’: Gorillas combine gestures into sequences for creative display. Language & Communication, 54, 56-72. doi:10.1016/j.langcom.2016.10.006.

    Abstract

    The great apes produce gestures intentionally and flexibly, and sometimes they combine their gestures into sequences, producing two or more gestures in close succession. We reevaluate previous findings related to ape gesture sequences and present qualitative analysis of videotaped gorilla interaction. We present evidence that gorillas produce at least two different kinds of gesture sequences: some sequences are largely composed of gestures that depict motion in an iconic manner, typically requesting particular action by the partner; others are multimodal and contain gestures – often percussive in nature – that are performed in situations of play or display. Display sequences seem to primarily exhibit the performer’s emotional state and physical fitness but have no immediate functional goal. Analysis reveals that some gorilla play and display sequences can be 1) organized hierarchically into longer bouts and repetitions; 2) innovative and individualized, incorporating objects and environmental features; and 3) highly interactive between partners. It is illuminating to look beyond ‘meaning’ in the conventional linguistic sense and look at the possibility that characteristics of music and dance, as well as those of language, are included in the gesturing of apes.
  • Ten Oever, S., Schroeder, C. E., Poeppel, D., Van Atteveldt, N., Mehta, A. D., Megevand, P., Groppe, D. M., & Zion-Golumbic, E. (2017). Low-frequency cortical oscillations entrain to subthreshold rhythmic auditory stimuli. The Journal of Neuroscience, 37(19), 4903-4912. doi:10.1523/JNEUROSCI.3658-16.2017.

    Abstract

    Many environmental stimuli contain temporal regularities, a feature that can help predict forthcoming input. Phase locking (entrainment) of ongoing low-frequency neuronal oscillations to rhythmic stimuli is proposed as a potential mechanism for enhancing neuronal responses and perceptual sensitivity, by aligning high-excitability phases to events within a stimulus stream. Previous experiments show that rhythmic structure has a behavioral benefit even when the rhythm itself is below perceptual detection thresholds (ten Oever et al., 2014). It is not known whether this "inaudible" rhythmic sound stream also induces entrainment. Here we tested this hypothesis using magnetoencephalography and electrocorticography in humans to record changes in neuronal activity as subthreshold rhythmic stimuli gradually became audible. We found that significant phase locking to the rhythmic sounds preceded participants' detection of them. Moreover, no significant auditory-evoked responses accompanied this prethreshold entrainment. These auditory-evoked responses, distinguished by robust, broad-band increases in intertrial coherence, only appeared after sounds were reported as audible. Taken together with the reduced perceptual thresholds observed for rhythmic sequences, these findings support the proposition that entrainment of low-frequency oscillations serves a mechanistic role in enhancing perceptual sensitivity for temporally predictive sounds. This framework has broad implications for understanding the neural mechanisms involved in generating temporal predictions and their relevance for perception, attention, and awareness.
  • Ten Oever, S., & Martin, A. E. (2024). Interdependence of “what” and “when” in the brain. Journal of Cognitive Neuroscience, 36(1), 167-186. doi:10.1162/jocn_a_02067.

    Abstract

    From a brain's-eye-view, when a stimulus occurs and what it is are interrelated aspects of interpreting the perceptual world. Yet in practice, the putative perceptual inferences about sensory content and timing are often dichotomized and not investigated as an integrated process. We here argue that neural temporal dynamics can influence what is perceived, and in turn, stimulus content can influence the time at which perception is achieved. This computational principle results from the highly interdependent relationship of what and when in the environment. Both brain processes and perceptual events display strong temporal variability that is not always modeled; we argue that understanding—and, minimally, modeling—this temporal variability is key for theories of how the brain generates unified and consistent neural representations and that we ignore temporal variability in our analysis practice at the peril of both data interpretation and theory-building. Here, we review what and when interactions in the brain, demonstrate via simulations how temporal variability can result in misguided interpretations and conclusions, and outline how to integrate and synthesize what and when in theories and models of brain computation.
  • Ter Bekke, M., Drijvers, L., & Holler, J. (2024). Hand gestures have predictive potential during conversation: An investigation of the timing of gestures in relation to speech. Cognitive Science, 48(1): e13407. doi:10.1111/cogs.13407.

    Abstract

    During face-to-face conversation, transitions between speaker turns are incredibly fast. These fast turn exchanges seem to involve next speakers predicting upcoming semantic information, such that next turn planning can begin before a current turn is complete. Given that face-to-face conversation also involves the use of communicative bodily signals, an important question is how bodily signals such as co-speech hand gestures play into these processes of prediction and fast responding. In this corpus study, we found that hand gestures that depict or refer to semantic information started before the corresponding information in speech, which held both for the onset of the gesture as a whole and for the onset of the stroke (the most meaningful part of the gesture). This early timing potentially allows listeners to use the gestural information to predict the corresponding semantic information to be conveyed in speech. Moreover, we provided further evidence that questions with gestures received faster responses than questions without gestures. However, we found no evidence for the idea that how much a gesture precedes its lexical affiliate (i.e., its predictive potential) relates to how fast responses were given. The findings presented here highlight the importance of the temporal relation between speech and gesture and help to illuminate the potential mechanisms underpinning multimodal language processing during face-to-face conversation.
  • Ter Bekke, M., Drijvers, L., & Holler, J. (2024). Gestures speed up responses to questions. Language, Cognition and Neuroscience. Advance online publication. doi:10.1080/23273798.2024.2314021.

    Abstract

    Most language use occurs in face-to-face conversation, which involves rapid turn-taking. Seeing communicative bodily signals in addition to hearing speech may facilitate such fast responding. We tested whether this holds for co-speech hand gestures by investigating whether these gestures speed up button press responses to questions. Sixty native speakers of Dutch viewed videos in which an actress asked yes/no-questions, either with or without a corresponding iconic hand gesture. Participants answered the questions as quickly and accurately as possible via button press. Gestures did not impact response accuracy, but crucially, gestures sped up responses, suggesting that response planning may be finished earlier when gestures are seen. How much gestures sped up responses was not related to their timing in the question or their timing with respect to the corresponding information in speech. Overall, these results are in line with the idea that multimodality may facilitate fast responding during face-to-face conversation.
  • Ter Bekke, M., Levinson, S. C., Van Otterdijk, L., Kühn, M., & Holler, J. (2024). Visual bodily signals and conversational context benefit the anticipation of turn ends. Cognition, 248: 105806. doi:10.1016/j.cognition.2024.105806.

    Abstract

    The typical pattern of alternating turns in conversation seems trivial at first sight. But a closer look quickly reveals the cognitive challenges involved, with much of it resulting from the fast-paced nature of conversation. One core ingredient to turn coordination is the anticipation of upcoming turn ends so as to be able to ready oneself for providing the next contribution. Across two experiments, we investigated two variables inherent to face-to-face conversation, the presence of visual bodily signals and preceding discourse context, in terms of their contribution to turn end anticipation. In a reaction time paradigm, participants anticipated conversational turn ends better when seeing the speaker and their visual bodily signals than when they did not, especially so for longer turns. Likewise, participants were better able to anticipate turn ends when they had access to the preceding discourse context than when they did not, and especially so for longer turns. Critically, the two variables did not interact, showing that visual bodily signals retain their influence even in the context of preceding discourse. In a pre-registered follow-up experiment, we manipulated the visibility of the speaker's head, eyes and upper body (i.e. torso + arms). Participants were better able to anticipate turn ends when the speaker's upper body was visible, suggesting a role for manual gestures in turn end anticipation. Together, these findings show that seeing the speaker during conversation may critically facilitate turn coordination in interaction.
  • Terporten, R., Huizeling, E., Heidlmayr, K., Hagoort, P., & Kösem, A. (2024). The interaction of context constraints and predictive validity during sentence reading. Journal of Cognitive Neuroscience, 36(2), 225-238. doi:10.1162/jocn_a_02082.

    Abstract

    Words are not processed in isolation; instead, they are commonly embedded in phrases and sentences. The sentential context influences the perception and processing of a word. However, how this is achieved by brain processes, and whether predictive mechanisms underlie this process, remains a matter of debate. Here, we employed an experimental paradigm in which we orthogonalized sentence context constraints and predictive validity, which was defined as the ratio of congruent to incongruent sentence endings within the experiment. While recording electroencephalography, participants read sentences with three levels of sentential context constraints (high, medium, and low). Participants were also separated into two groups that differed in their ratio of valid congruent to incongruent target words that could be predicted from the sentential context. For both groups, we investigated modulations of alpha power before, and N400 amplitude modulations after, target word onset. The results reveal that the N400 amplitude gradually decreased with higher context constraints and cloze probability. In contrast, alpha power was not significantly affected by context constraint. Neither the N400 nor alpha power was significantly affected by changes in predictive validity.
  • Thompson, P. M., Andreassen, O. A., Arias-Vasquez, A., Bearden, C. E., Boedhoe, P. S., Brouwer, R. M., Buckner, R. L., Buitelaar, J. K., Bulaeva, K. B., Cannon, D. M., Cohen, R. A., Conrod, P. J., Dale, A. M., Deary, I. J., Dennis, E. L., De Reus, M. A., Desrivieres, S., Dima, D., Donohoe, G., Fisher, S. E., Fouche, J.-P., Francks, C., Frangou, S., Franke, B., Ganjgahi, H., Garavan, H., Glahn, D. C., Grabe, H. J., Guadalupe, T., Gutman, B. A., Hashimoto, R., Hibar, D. P., Holland, D., Hoogman, M., Pol, H. E. H., Hosten, N., Jahanshad, N., Kelly, S., Kochunov, P., Kremen, W. S., Lee, P. H., Mackey, S., Martin, N. G., Mazoyer, B., McDonald, C., Medland, S. E., Morey, R. A., Nichols, T. E., Paus, T., Pausova, Z., Schmaal, L., Schumann, G., Shen, L., Sisodiya, S. M., Smit, D. J., Smoller, J. W., Stein, D. J., Stein, J. L., Toro, R., Turner, J. A., Van den Heuvel, M., Van den Heuvel, O. A., Van Erp, T. G., Van Rooij, D., Veltman, D. J., Walter, H., Wang, Y., Wardlaw, J. M., Whelan, C. D., Wright, M. J., & Ye, J. (2017). ENIGMA and the individual: Predicting factors that affect the brain in 35 countries worldwide. NeuroImage, 145, 389-408. doi:10.1016/j.neuroimage.2015.11.057.
  • Thompson, J. R., Minelli, C., Bowden, J., Del Greco, F. M., Gill, D., Jones, E. M., Shapland, C. Y., & Sheehan, N. A. (2017). Mendelian randomization incorporating uncertainty about pleiotropy. Statistics in Medicine, 36(29), 4627-4645. doi:10.1002/sim.7442.

    Abstract

    Mendelian randomization (MR) requires strong assumptions about the genetic instruments, of which the most difficult to justify relate to pleiotropy. In a two-sample MR, different methods of analysis are available if we are able to assume, M1: no pleiotropy (fixed effects meta-analysis), M2: that there may be pleiotropy but that the average pleiotropic effect is zero (random effects meta-analysis), and M3: that the average pleiotropic effect is nonzero (MR-Egger). In the latter 2 cases, we also require that the size of the pleiotropy is independent of the size of the effect on the exposure. Selecting one of these models without good reason would run the risk of misrepresenting the evidence for causality. The most conservative strategy would be to use M3 in all analyses as this makes the weakest assumptions, but such an analysis gives much less precise estimates and so should be avoided whenever stronger assumptions are credible. We consider the situation of a two-sample design when we are unsure which of these 3 pleiotropy models is appropriate. The analysis is placed within a Bayesian framework and Bayesian model averaging is used. We demonstrate that even large samples of the scale used in genome-wide meta-analysis may be insufficient to distinguish the pleiotropy models based on the data alone. Our simulations show that Bayesian model averaging provides a reasonable trade-off between bias and precision. Bayesian model averaging is recommended whenever there is uncertainty about the nature of the pleiotropy.

    Additional information

    sim7442-sup-0001-Supplementary.pdf
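    The model-averaging step described in this abstract can be illustrated with a toy calculation. The sketch below (Python/NumPy) only shows how per-model causal-effect estimates would be combined once posterior model probabilities are in hand; the three estimates, their standard deviations, and the model probabilities are invented numbers, and this is a generic mixture computation rather than the authors' actual Bayesian implementation.

        import numpy as np

        # Hypothetical posterior summaries (mean, SD) of the causal effect under each pleiotropy model;
        # M1: fixed-effects IVW, M2: random-effects IVW, M3: MR-Egger. All values are made up.
        models = {
            "M1 (no pleiotropy)":         (0.20, 0.03),
            "M2 (balanced pleiotropy)":   (0.19, 0.05),
            "M3 (directional, MR-Egger)": (0.10, 0.12),
        }
        # Hypothetical posterior model probabilities; in a real analysis these come from
        # the Bayesian model comparison itself, not from hand-picked values.
        post_prob = np.array([0.5, 0.3, 0.2])

        means = np.array([m for m, _ in models.values()])
        sds = np.array([s for _, s in models.values()])

        # Model-averaged posterior mean and variance (standard mixture formulas)
        bma_mean = np.sum(post_prob * means)
        bma_var = np.sum(post_prob * (sds ** 2 + means ** 2)) - bma_mean ** 2
        print(f"BMA estimate: {bma_mean:.3f} (SD {np.sqrt(bma_var):.3f})")

    The trade-off noted in the abstract is visible even in this toy case: the averaged estimate sits between the precise but assumption-heavy M1 value and the imprecise but robust M3 value, with a spread wider than M1's alone to reflect model uncertainty.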
  • Thothathiri, M., Basnakova, J., Lewis, A. G., & Briand, J. M. (2024). Fractionating difficulty during sentence comprehension using functional neuroimaging. Cerebral Cortex, 34(2): bhae032. doi:10.1093/cercor/bhae032.

    Abstract

    Sentence comprehension is highly practiced and largely automatic, but this belies the complexity of the underlying processes. We used functional neuroimaging to investigate garden-path sentences that cause difficulty during comprehension, in order to unpack the different processes used to support sentence interpretation. By investigating garden-path and other types of sentences within the same individuals, we functionally profiled different regions within the temporal and frontal cortices in the left hemisphere. The results revealed that different aspects of comprehension difficulty are handled by left posterior temporal, left anterior temporal, ventral left frontal, and dorsal left frontal cortices. The functional profiles of these regions likely lie along a spectrum of specificity to generality, including language-specific processing of linguistic representations, more general conflict resolution processes operating over linguistic representations, and processes for handling difficulty in general. These findings suggest that difficulty is not unitary and that there is a role for a variety of linguistic and non-linguistic processes in supporting comprehension.

    Additional information

    supplementary information
  • Titus, A., Dijkstra, T., Willems, R. M., & Peeters, D. (2024). Beyond the tried and true: How virtual reality, dialog setups, and a focus on multimodality can take bilingual language production research forward. Neuropsychologia, 193: 108764. doi:10.1016/j.neuropsychologia.2023.108764.

    Abstract

    Bilinguals possess the ability of expressing themselves in more than one language, and typically do so in contextually rich and dynamic settings. Theories and models have indeed long considered context factors to affect bilingual language production in many ways. However, most experimental studies in this domain have failed to fully incorporate linguistic, social, or physical context aspects, let alone combine them in the same study. Indeed, most experimental psycholinguistic research has taken place in isolated and constrained lab settings with carefully selected words or sentences, rather than under rich and naturalistic conditions. We argue that the most influential experimental paradigms in the psycholinguistic study of bilingual language production fall short of capturing the effects of context on language processing and control presupposed by prominent models. This paper therefore aims to enrich the methodological basis for investigating context aspects in current experimental paradigms and thereby move the field of bilingual language production research forward theoretically. After considering extensions of existing paradigms proposed to address context effects, we present three far-ranging innovative proposals, focusing on virtual reality, dialog situations, and multimodality in the context of bilingual language production.
  • Travis, C. E., Cacoullos, R. T., & Kidd, E. (2017). Cross-language priming: A view from bilingual speech. Bilingualism: Language and Cognition, 20(2), 283-298. doi:10.1017/S1366728915000127.

    Abstract

    In the current paper we report on a study of priming of variable Spanish 1sg subject expression in spontaneous Spanish–English bilingual speech (based on the New Mexico Spanish–English Bilingual corpus, Torres Cacoullos & Travis, in preparation). We show both within- and cross-language Coreferential Subject Priming; however, cross-language priming from English to Spanish is weaker and shorter lived than within-language Spanish-to-Spanish priming, a finding that appears not to be attributable to lexical boost. Instead, interactions with subject continuity and verb type show that the strength of priming depends on co-occurring contextual features and particular [pronoun + verb] constructions, from the more lexically specific to the more schematically general. Quantitative patterns in speech thus offer insights unavailable from experimental work into the scope and locus of priming effects, suggesting that priming in bilingual discourse can serve to gauge degrees of strength of within- and cross-language associations between usage-based constructions.
  • Troncoso Ruiz, A., & Elordieta, G. (2017). Prosodic accommodation and salience: The nuclear contours of Andalusian Spanish speakers in Asturias. Loquens, 4(2): e403. doi:10.3989/loquens.2017.043.

    Abstract

    This study investigates the convergent accommodating behaviour of Andalusian speakers (Southern Spain) relocated in Asturias (Northern Spain), a context of dialect contact, in terms of intonation. We aim to address three research questions: (1) Is there evidence for accommodation? (2) Do social factors determine accommodation? (3) Does salience predict which prosodic features are more likely to be adopted by relocated speakers? We compiled a corpus of spontaneous speech including an experimental group of Andalusian speakers in Asturias and two control groups of Asturian and Andalusian people. The relocated Andalusians were interviewed by a speaker of Andalusian Spanish and a speaker of Amestáu (a hybrid variety between Asturian and Spanish), and their intonation patterns were compared to the ones found in the control populations. During the interviews, we also gathered data about how integrated these relocated speakers were in Asturias. We found that all participants show a tendency towards convergent accommodation to the Amestáu interlocutor, producing late falling pitch contours in nuclear position in declaratives and final falling contours in absolute interrogatives. The most integrated speakers in the Asturian community are the ones showing more features of the varieties spoken in the area. Finally, the most salient features to an Andalusian ear—the presence of final falls in Asturian, Asturian Spanish and Amestáu absolute interrogatives as opposed to final rises in Andalusian and Standard Peninsular Spanish—were the ones showing the highest percentages of adoption in relocated speakers. We conclude, then, that the most salient prosodic features are acquired more easily by the most integrated relocated speakers.
  • Trujillo, J. P., & Holler, J. (2024). Information distribution patterns in naturalistic dialogue differ across languages. Psychonomic Bulletin & Review. Advance online publication. doi:10.3758/s13423-024-02452-0.

    Abstract

    The natural ecology of language is conversation, with individuals taking turns speaking to communicate in a back-and-forth fashion. Language in this context involves strings of words that a listener must process while simultaneously planning their own next utterance. It would thus be highly advantageous if language users distributed information within an utterance in a way that may facilitate this processing–planning dynamic. While some studies have investigated how information is distributed at the level of single words or clauses, or in written language, little is known about how information is distributed within spoken utterances produced during naturalistic conversation. It is also not known how information distribution patterns of spoken utterances may differ across languages. We used a set of matched corpora (CallHome) containing 898 telephone conversations conducted in six different languages (Arabic, English, German, Japanese, Mandarin, and Spanish), analyzing more than 58,000 utterances, to assess whether there is evidence of distinct patterns of information distribution at the utterance level, and whether these patterns are similar or differ across the languages. We found that English, Spanish, and Mandarin typically show a back-loaded distribution, with higher information (i.e., surprisal) in the last half of utterances compared with the first half, while Arabic, German, and Japanese show front-loaded distributions, with higher information in the first half compared with the last half. Additional analyses suggest that these patterns may be related to word order and rate of noun and verb usage. We additionally found that back-loaded languages have longer turn transition times (i.e., time between speaker turns).

    Additional information

    Data availability
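    A minimal sketch of the utterance-level comparison described above (mean surprisal in the second half of an utterance versus the first half) is given below in Python/NumPy. The per-word surprisal values are invented for illustration; in the study itself surprisal was estimated over the CallHome transcripts, and this is not the authors' analysis code.

        import numpy as np

        # Hypothetical per-word surprisal values (bits) for a few utterances; in practice
        # these would come from a language model applied to conversation transcripts.
        utterances = [
            [2.1, 3.0, 4.2, 6.5, 7.1],       # information rises toward the end -> back-loaded
            [6.8, 5.9, 3.2, 2.5],            # information falls toward the end -> front-loaded
            [3.3, 3.1, 5.0, 5.8, 6.0, 6.4],
        ]

        def front_back_difference(surprisals):
            """Mean surprisal of the second half minus the first half of an utterance.
            Positive values indicate back-loading, negative values front-loading."""
            s = np.asarray(surprisals, dtype=float)
            mid = len(s) // 2
            return s[mid:].mean() - s[:mid].mean()

        diffs = [front_back_difference(u) for u in utterances]
        print("Per-utterance differences:", np.round(diffs, 2))
        print("Overall tendency:", "back-loaded" if np.mean(diffs) > 0 else "front-loaded")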
  • Trujillo, J. P., & Holler, J. (2024). Conversational facial signals combine into compositional meanings that change the interpretation of speaker intentions. Scientific Reports, 14: 2286. doi:10.1038/s41598-024-52589-0.

    Abstract

    Human language is extremely versatile, combining a limited set of signals in an unlimited number of ways. However, it is unknown whether conversational visual signals feed into the composite utterances with which speakers communicate their intentions. We assessed whether different combinations of visual signals lead to different intent interpretations of the same spoken utterance. Participants viewed a virtual avatar uttering spoken questions while producing single visual signals (i.e., head turn, head tilt, eyebrow raise) or combinations of these signals. After each video, participants classified the communicative intention behind the question. We found that composite utterances combining several visual signals conveyed different meaning compared to utterances accompanied by the single visual signals. However, responses to combinations of signals were more similar to the responses to related, rather than unrelated, individual signals, indicating a consistent influence of the individual visual signals on the whole. This study therefore provides first evidence for compositional, non-additive (i.e., Gestalt-like) perception of multimodal language.

    Additional information

    41598_2024_52589_MOESM1_ESM.docx
  • Tsuji, S., Fikkert, P., Minagawa, Y., Dupoux, E., Filippin, L., Versteegh, M., Hagoort, P., & Cristia, A. (2017). The more, the better? Behavioral and neural correlates of frequent and infrequent vowel exposure. Developmental Psychobiology, 59, 603-612. doi:10.1002/dev.21534.

    Abstract

    A central assumption in the perceptual attunement literature holds that exposure to a speech sound contrast leads to improvement in native speech sound processing. However, whether the amount of exposure matters for this process has not been put to a direct test. We elucidated indicators of frequency-dependent perceptual attunement by comparing 5–8-month-old Dutch infants’ discrimination of tokens containing a highly frequent [hɪt-he:t] and a highly infrequent [hʏt-hø:t] native vowel contrast as well as a non-native [hɛt-hæt] vowel contrast in a behavioral visual habituation paradigm (Experiment 1). Infants discriminated both native contrasts similarly well, but did not discriminate the non-native contrast. We sought further evidence for subtle differences in the processing of the two native contrasts using near-infrared spectroscopy and a within-participant design (Experiment 2). The neuroimaging data did not provide additional evidence that responses to native contrasts are modulated by frequency of exposure. These results suggest that even large differences in exposure to a native contrast may not directly translate to behavioral and neural indicators of perceptual attunement, raising the possibility that frequency of exposure does not influence improvements in discriminating native contrasts.

    Additional information

    dev21534-sup-0001-SuppInfo-S1.docx
  • Udden, J., Ingvar, M., Hagoort, P., & Petersson, K. M. (2017). Broca’s region: A causal role in implicit processing of grammars with crossed non-adjacent dependencies. Cognition, 164, 188-198. doi:10.1016/j.cognition.2017.03.010.

    Abstract

    Non-adjacent dependencies are challenging for the language learning machinery and are acquired later than adjacent dependencies. In this transcranial magnetic stimulation (TMS) study, we show that participants successfully discriminated between grammatical and non-grammatical sequences after having implicitly acquired an artificial language with crossed non-adjacent dependencies. Subsequent to transcranial magnetic stimulation of Broca’s region, discrimination was impaired compared to when a language-irrelevant control region (vertex) was stimulated. These results support the view that Broca’s region is engaged in structured sequence processing and extend previous functional neuroimaging results on artificial grammar learning (AGL) in two directions: first, the results establish that Broca’s region is a causal component in the processing of non-adjacent dependencies, and second, they show that implicit processing of non-adjacent dependencies engages Broca’s region. Since patients with lesions in Broca’s region do not always show grammatical processing difficulties, the result that Broca’s region is causally linked to processing of non-adjacent dependencies is a step towards clarification of the exact nature of syntactic deficits caused by lesions or perturbation to Broca’s region. Our findings are consistent with previous results and support a role for Broca’s region in general structured sequence processing, rather than a specific role for the processing of hierarchically organized sentence structure.
  • Udden, J., Snijders, T. M., Fisher, S. E., & Hagoort, P. (2017). A common variant of the CNTNAP2 gene is associated with structural variation in the left superior occipital gyrus. Brain and Language, 172, 16-21. doi:10.1016/j.bandl.2016.02.003.

    Abstract

    The CNTNAP2 gene encodes a cell-adhesion molecule that influences the properties of neural networks and the morphology and density of neurons and glial cells. Previous studies have shown association of CNTNAP2 variants with language-related phenotypes in health and disease. Here, we report associations of a common CNTNAP2 polymorphism (rs7794745) with variation in grey matter in a region in the dorsal visual stream. We tried to replicate an earlier study on 314 subjects by Tan and colleagues (2010), but now in a substantially larger group of more than 1700 subjects. Carriers of the T allele showed reduced grey matter volume in the left superior occipital gyrus, while we did not replicate associations with grey matter volume in other regions identified by Tan et al. (2010). Our work illustrates the importance of independent replication in neuroimaging genetic studies of language-related candidate genes.
  • De Vaan, L., Van Krieken, K., Van den Bosch, W., Schreuder, R., & Ernestus, M. (2017). The traces that novel morphologically complex words leave in memory are abstract in nature. The Mental Lexicon, 12(2), 181-218. doi:10.1075/ml.16006.vaa.

    Abstract

    Previous work has shown that novel morphologically complex words (henceforth neologisms) leave traces in memory after just one encounter. This study addressed the question of whether these traces are abstract in nature or exemplars. In three experiments, neologisms were either primed by themselves or by their stems. The primes occurred in the visual modality whereas the targets were presented in the auditory modality (Experiment 1) or vice versa (Experiments 2 and 3). The primes were presented in sentences in a self-paced reading task (Experiment 1) or in stories in a listening comprehension task (Experiments 2 and 3). The targets were incorporated in lexical decision tasks, auditory or visual (Experiment 1 and Experiment 2, respectively), or in stories in a self-paced reading task (Experiment 3). The experimental part containing the targets followed the familiarization phase with the primes either immediately (Experiment 1) or after a one-week delay (Experiments 2 and 3). In all experiments, participants recognized neologisms faster if they had encountered them before (identity priming) than if the familiarization phase only contained the neologisms’ stems (stem priming). These results show that the priming effects are robust despite substantial differences between the primes and the targets. This suggests that the traces novel morphologically complex words leave in memory after just one encounter are abstract in nature.
  • De Valk, J. M., Wnuk, E., Huisman, J. L. A., & Majid, A. (2017). Odor-color associations differ with verbal descriptors for odors: A comparison of three linguistically diverse groups. Psychonomic Bulletin & Review, 24(4), 1171-1179. doi:10.3758/s13423-016-1179-2.

    Abstract

    People appear to have systematic associations between odors and colors. Previous research has emphasized the perceptual nature of these associations, but little attention has been paid to what role language might play. It is possible that odor–color associations arise through a process of labeling; that is, participants select a descriptor for an odor and then choose a color accordingly (e.g., banana odor → “banana” label → yellow). If correct, this would predict that odor–color associations would differ as odor descriptions differ. We compared speakers of Dutch (who overwhelmingly describe odors by referring to the source; e.g., smells like banana) with speakers of Maniq and Thai (who also describe odors with dedicated, abstract smell vocabulary; e.g., musty), and tested whether the type of descriptor mattered for odor–color associations. Participants were asked to select a color that they associated with an odor on two separate occasions (to test for consistency), and finally to label the odors. We found the hunter-gatherer Maniq showed few, if any, consistent or accurate odor–color associations. More importantly, we found that the types of descriptors used to name the smells were related to the odor–color associations. When people used abstract smell terms to describe odors, they were less likely to choose a color match, but when they described an odor with a source-based term, their color choices more accurately reflected the odor source, particularly when the odor source was named correctly (e.g., banana odor → yellow). This suggests language is an important factor in odor–color cross-modal associations.

    Additional information

    13423_2016_1179_MOESM1_ESM.docx
  • Van der Ven, F., Takashima, A., Segers, A., & Verhoeven, L. (2017). Semantic priming in Dutch children: Word meaning integration and study modality effects. Language Learning, 67(3), 546-568. doi:10.1111/lang.12235.
