Publications

  • Mekki, Y., Guillemot, V., Lemaître, H., Carrión-Castillo, A., Forkel, S. J., Frouin, V., & Philippe, C. (2022). The genetic architecture of language functional connectivity. NeuroImage, 249: 118795. doi:10.1016/j.neuroimage.2021.118795.

    Abstract

    Language is a unique trait of the human species, whose genetic architecture remains largely unknown. Through studies of language disorders, many candidate genes have been identified. However, such a complex and multifactorial trait is unlikely to be driven by only a few genes, and case-control studies, suffering from a lack of power, struggle to uncover significant variants. In parallel, neuroimaging has contributed significantly to the understanding of structural and functional aspects of language in the human brain, and the recent availability of large-scale cohorts like UK Biobank has made it possible to study language via image-derived endophenotypes in the general population. Because of its strong relationship with task-based fMRI (tbfMRI) activations and its ease of acquisition, resting-state functional MRI (rsfMRI) has become increasingly popular, making it a good surrogate of functional neuronal processes. Taking advantage of such a synergistic system by aggregating effects across spatially distributed traits, we performed a multivariate genome-wide association study (mvGWAS) between genetic variations and resting-state functional connectivity (FC) of classical brain language areas in the inferior frontal (pars opercularis, triangularis and orbitalis), temporal and inferior parietal lobes (angular and supramarginal gyri), in 32,186 participants from UK Biobank. Twenty genomic loci were found associated with language FCs, of which three were replicated in an independent replication sample. A locus in 3p11.1, regulating EPHA3 gene expression, was found associated with FCs of the semantic component of the language network, while a locus in 15q14, regulating THBS1 gene expression, was found associated with FCs of perceptual-motor language processing, bringing novel insights into the neurobiology of language.
  • Menks, W. M., Ekerdt, C., Janzen, G., Kidd, E., Lemhöfer, K., Fernández, G., & McQueen, J. M. (2022). Study protocol: A comprehensive multi-method neuroimaging approach to disentangle developmental effects and individual differences in second language learning. BMC Psychology, 10: 169. doi:10.1186/s40359-022-00873-x.

    Abstract

    Background

    While it is well established that second language (L2) learning success changes with age and across individuals, the underlying neural mechanisms responsible for this developmental shift and these individual differences are largely unknown. We will study the behavioral and neural factors that subserve new grammar and word learning in a large cross-sectional developmental sample. This study falls under the NWO (Nederlandse Organisatie voor Wetenschappelijk Onderzoek [Dutch Research Council]) Language in Interaction consortium (website: https://www.languageininteraction.nl/).
    Methods

    We will sample 360 healthy individuals across a broad age range between 8 and 25 years. In this paper, we describe the study design and protocol, which involves multiple study visits covering a comprehensive behavioral battery and extensive magnetic resonance imaging (MRI) protocols. On the basis of these measures, we will create behavioral and neural fingerprints that capture age-based and individual variability in new language learning. The behavioral fingerprint will be based on first and second language proficiency, memory systems, and executive functioning. We will map the neural fingerprint for each participant using the following MRI modalities: T1‐weighted, diffusion-weighted, resting-state functional MRI, and multiple functional-MRI paradigms. With respect to the functional MRI measures, half of the sample will learn grammatical features and half will learn words of a new language. Combining all individual fingerprints allows us to explore the neural maturation effects on grammar and word learning.
    Discussion

    This will be one of the largest neuroimaging studies to date that investigates the developmental shift in L2 learning covering preadolescence to adulthood. Our comprehensive approach of combining behavioral and neuroimaging data will contribute to the understanding of the mechanisms influencing this developmental shift and individual differences in new language learning. We aim to answer: (I) do these fingerprints differ according to age and can these explain the age-related differences observed in new language learning? And (II) which aspects of the behavioral and neural fingerprints explain individual differences (across and within ages) in grammar and word learning? The results of this study provide a unique opportunity to understand how the development of brain structure and function influence new language learning success.
  • Menn, K. H., Ward, E., Braukmann, R., Van den Boomen, C., Buitelaar, J., Hunnius, S., & Snijders, T. M. (2022). Neural tracking in infancy predicts language development in children with and without family history of autism. Neurobiology of Language, 3(3), 495-514. doi:10.1162/nol_a_00074.

    Abstract

    During speech processing, neural activity in non-autistic adults and infants tracks the speech envelope. Recent research in adults indicates that this neural tracking relates to linguistic knowledge and may be reduced in autism. Such reduced tracking, if present already in infancy, could impede language development. In the current study, we focused on children with a family history of autism, who often show a delay in first language acquisition. We investigated whether differences in tracking of sung nursery rhymes during infancy relate to language development and autism symptoms in childhood. We assessed speech-brain coherence at either 10 or 14 months of age in a total of 22 infants with high likelihood of autism due to family history and 19 infants without family history of autism. We analyzed the relationship between speech-brain coherence in these infants and their vocabulary at 24 months as well as autism symptoms at 36 months. Our results showed significant speech-brain coherence in the 10- and 14-month-old infants. We found no evidence for a relationship between speech-brain coherence and later autism symptoms. Importantly, speech-brain coherence at the stressed syllable rate (1–3 Hz) predicted later vocabulary. Follow-up analyses showed evidence for a relationship between tracking and vocabulary only in 10-month-olds but not 14-month-olds and indicated possible differences between the likelihood groups. Thus, early tracking of sung nursery rhymes is related to language development in childhood.
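    The speech-brain coherence measure used in this study is, at its core, a spectral coherence between the speech amplitude envelope and the infant EEG, averaged over a low-frequency band. The sketch below illustrates that computation with SciPy; the sampling rate, the single simulated channel, and the envelope extraction are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch: coherence between a speech envelope and one EEG channel
# in the 1-3 Hz (stressed-syllable rate) band. Assumes both signals are
# already time-aligned and share a sampling rate; not the authors' pipeline.
import numpy as np
from scipy.signal import hilbert, coherence

fs = 250                                             # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
speech = rng.standard_normal(fs * 60)                # stand-in speech waveform
eeg = 0.3 * speech + rng.standard_normal(fs * 60)    # stand-in EEG channel

# Amplitude envelope of the speech signal via the Hilbert transform
envelope = np.abs(hilbert(speech))

# Magnitude-squared coherence, averaged over the 1-3 Hz band
freqs, coh = coherence(envelope, eeg, fs=fs, nperseg=fs * 4)
band = (freqs >= 1.0) & (freqs <= 3.0)
print("speech-brain coherence (1-3 Hz):", coh[band].mean())
```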
  • Meyer, A. S., Roelofs, A., & Levelt, W. J. M. (2003). Word length effects in object naming: The role of a response criterion. Journal of Memory and Language, 48(1), 131-147. doi:10.1016/S0749-596X(02)00509-0.

    Abstract

    According to Levelt, Roelofs, and Meyer (1999) speakers generate the phonological and phonetic representations of successive syllables of a word in sequence and only begin to speak after having fully planned at least one complete phonological word. Therefore, speech onset latencies should be longer for long than for short words. We tested this prediction in four experiments in which Dutch participants named or categorized objects with monosyllabic or disyllabic names. Experiment 1 yielded a length effect on production latencies when objects with long and short names were tested in separate blocks, but not when they were mixed. Experiment 2 showed that the length effect was not due to a difference in the ease of object recognition. Experiment 3 replicated the results of Experiment 1 using a within-participants design. In Experiment 4, the long and short target words appeared in a phrasal context. In addition to the speech onset latencies, we obtained the viewing times for the target objects, which have been shown to depend on the time necessary to plan the form of the target names. We found word length effects for both dependent variables, but only when objects with short and long names were presented in separate blocks. We argue that in pure and mixed blocks speakers used different response deadlines, which they tried to meet by either generating the motor programs for one syllable or for all syllables of the word before speech onset. Computer simulations using WEAVER++ support this view.
  • Meyer, A. S., & Schriefers, H. (1991). Phonological facilitation in picture-word interference experiments: Effects of stimulus onset asynchrony and types of interfering stimuli. Journal of Experimental Psychology: Learning, Memory, and Cognition, 17, 1146-1160. doi:10.1037/0278-7393.17.6.1146.

    Abstract

    Subjects named pictures while hearing distractor words that shared word-initial or word-final segments with the picture names or were unrelated to the picture names. The relative timing of distractor and picture presentation was varied. Compared with unrelated distractors, both types of related distractors facilitated picture naming under certain timing conditions. Begin-related distractors facilitated the naming responses if the shared segments began 150 ms before, at, or 150 ms after picture onset. By contrast, end-related distractors only facilitated the responses if the shared segments began at or 150 ms after picture onset. The results suggest that the phonological encoding of the beginning of a word is initiated before the encoding of its end.
  • Meyer, A. S. (1991). The time course of phonological encoding in language production: Phonological encoding inside a syllable. Journal of Memory and Language, 30, 69-89. doi:10.1016/0749-596X(91)90011-8.

    Abstract

    Eight experiments were carried out investigating whether different parts of a syllable must be phonologically encoded in a specific order or whether they can be encoded in any order. A speech production task was used in which the subjects in each test trial had to utter one out of three or five response words as quickly as possible. In the so-called homogeneous condition these words were related in form, while in the heterogeneous condition they were unrelated in form. For monosyllabic response words shorter reaction times were obtained in the homogeneous than in the heterogeneous condition when the words had the same onset, but not when they had the same rhyme. Similarly, for disyllabic response words, the reaction times were shorter in the homogeneous than in the heterogeneous condition when the words shared only the onset of the first syllable, but not when they shared only its rhyme. Furthermore, a stronger facilitatory effect was observed when the words had the entire first syllable in common than when they only shared the onset, or the onset and the nucleus, but not the coda of the first syllable. These results suggest that syllables are phonologically encoded in two ordered steps, the first of which is dedicated to the onset and the second to the rhyme.
  • Meyer, A. S. (1994). Timing in sentence production. Journal of Memory and Language, 33, 471-492. doi:10.1006/jmla.1994.1022.

    Abstract

    Recently, a new theory of timing in sentence production has been proposed by Ferreira (1993). This theory assumes that at the phonological level, each syllable of an utterance is assigned one or more abstract timing units depending on its position in the prosodic structure. The number of timing units associated with a syllable determines the time interval between its onset and the onset of the next syllable. An interesting prediction from the theory, which was confirmed in Ferreira's experiments with speakers of American English, is that the time intervals between syllable onsets should only depend on the syllables' positions in the prosodic structure, but not on their segmental content. However, in the present experiments, which were carried out in Dutch, the intervals between syllable onsets were consistently longer for phonetically long syllables than for short syllables. The implications of this result for models of timing in sentence production are discussed.
  • Meyer, A. S., Sleiderink, A. M., & Levelt, W. J. M. (1998). Viewing and naming objects: Eye movements during noun phrase production. Cognition, 66(2), B25-B33. doi:10.1016/S0010-0277(98)00009-2.

    Abstract

    Eye movements have been shown to reflect word recognition and language comprehension processes occurring during reading and auditory language comprehension. The present study examines whether the eye movements speakers make during object naming similarly reflect speech planning processes. In Experiment 1, speakers named object pairs saying, for instance, 'scooter and hat'. The objects were presented as ordinary line drawings or with partly deleted contours and had high or low frequency names. Contour type and frequency both significantly affected the mean naming latencies and the mean time spent looking at the objects. The frequency effects disappeared in Experiment 2, in which the participants categorized the objects instead of naming them. This suggests that the frequency effects of Experiment 1 arose during lexical retrieval. We conclude that eye movements during object naming indeed reflect linguistic planning processes and that the speakers' decision to move their eyes from one object to the next is contingent upon the retrieval of the phonological form of the object names.
  • Misersky, J., Peeters, D., & Flecken, M. (2022). The potential of immersive virtual reality for the study of event perception. Frontiers in Virtual Reality, 3: 697934. doi:10.3389/frvir.2022.697934.

    Abstract

    In everyday life, we actively engage in different activities from a first-person perspective. However, experimental psychological research in the field of event perception is often limited to relatively passive, third-person computer-based paradigms. In the present study, we tested the feasibility of using immersive virtual reality in combination with eye tracking with participants in active motion. Behavioral research has shown that speakers of aspectual and non-aspectual languages attend to goals (endpoints) in motion events differently, with speakers of non-aspectual languages showing relatively more attention to goals (endpoint bias). In the current study, native speakers of German (non-aspectual) and English (aspectual) walked on a treadmill across 3-D terrains in VR, while their eye gaze was continuously tracked. Participants encountered landmark objects on the side of the road, and potential endpoint objects at the end of it. Using growth curve analysis to analyze fixation patterns over time, we found no differences in eye gaze behavior between German and English speakers. This absence of cross-linguistic differences was also observed in behavioral tasks with the same participants. Methodologically, based on the quality of the data, we conclude that our dynamic eye-tracking setup can be reliably used to study what people look at while moving through rich and dynamic environments that resemble the real world.
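    Growth curve analysis, as used in this study to model fixation patterns over time, typically regresses a fixation measure on polynomial time terms and their interactions with group. A minimal sketch with statsmodels follows; the column names, the quadratic order, the simulated data, and the use of ordinary least squares rather than the mixed-effects models usually preferred for such data are all simplifying assumptions.

```python
# Minimal growth-curve-style sketch: fixation proportion regressed on
# centered linear and quadratic time terms and their interaction with
# language group. Column names and the simple OLS fit are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
time = np.tile(np.arange(20), 40)                  # 20 time bins x 40 trials
group = np.repeat(["German", "English"], 20 * 20)  # two language groups
fixation = rng.random(time.size)                   # stand-in fixation proportions

df = pd.DataFrame({"fixation": fixation, "time": time, "group": group})

# Centered linear and quadratic time terms (a simple stand-in for the
# orthogonal polynomials typically used in growth curve analysis)
t = (df["time"] - df["time"].mean()) / df["time"].std()
df["ot1"] = t
df["ot2"] = t ** 2 - (t ** 2).mean()

model = smf.ols("fixation ~ (ot1 + ot2) * group", data=df).fit()
print(model.summary().tables[1])
```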
  • Molz, B., Herbik, A., Baseler, H. A., de Best, P. B., Vernon, R. W., Raz, N., Gouws, A. D., Ahmadi, K., Lowndes, R., McLean, R. J., Gottlob, I., Kohl, S., Choritz, L., Maguire, J., Kanowski, M., Käsmann-Kellner, B., Wieland, I., Banin, E., Levin, N., Hoffmann, M. B., & Morland, A. B. (2022). Structural changes to primary visual cortex in the congenital absence of cone input in achromatopsia. NeuroImage: Clinical, 33: 102925. doi:10.1016/j.nicl.2021.102925.

    Abstract

    Autosomal recessive Achromatopsia (ACHM) is a rare inherited disorder associated with dysfunctional cone photoreceptors resulting in a congenital absence of cone input to visual cortex. This might lead to distinct changes in cortical architecture with a negative impact on the success of gene augmentation therapies. To investigate the status of the visual cortex in these patients, we performed a multi-centre study focusing on the cortical structure of regions that normally receive predominantly cone input. Using high-resolution T1-weighted MRI scans and surface-based morphometry, we compared cortical thickness, surface area and grey matter volume in foveal, parafoveal and paracentral representations of primary visual cortex in 15 individuals with ACHM and 42 normally sighted, healthy controls (HC). In ACHM, surface area was reduced in all tested representations, while thickening of the cortex was found highly localized to the most central representation. These results were comparable to more widespread changes in brain structure reported in congenitally blind individuals, suggesting similar developmental processes, i.e., irrespective of the underlying cause and extent of vision loss. The cortical differences we report here could limit the success of treatment of ACHM in adulthood. Interventions earlier in life when cortical structure is not different from normal would likely offer better visual outcomes for those with ACHM.
  • Montero-Melis, G., Van Paridon, J., Ostarek, M., & Bylund, E. (2022). No evidence for embodiment: The motor system is not needed to keep action words in working memory. Cortex, 150, 108-125. doi:10.1016/j.cortex.2022.02.006.

    Abstract

    Increasing evidence implicates the sensorimotor systems with high-level cognition, but the extent to which these systems play a functional role remains debated. Using an elegant design, Shebani and Pulvermüller (2013) reported that carrying out a demanding rhythmic task with the hands led to selective impairment of working memory for hand-related words (e.g., clap), while carrying out the same task with the feet led to selective memory impairment for foot-related words (e.g., kick). Such a striking double dissociation is acknowledged even by critics to constitute strong evidence for an embodied account of working memory. Here, we report on an attempt at a direct replication of this important finding. We followed a sequential sampling design and stopped data collection at N=77 (more than five times the original sample size), at which point the evidence for the lack of the critical selective interference effect was very strong (BF01 = 91). This finding constitutes strong evidence against a functional contribution of the motor system to keeping action words in working memory. Our finding fits into the larger emerging picture in the field of embodied cognition that sensorimotor simulations are neither required nor automatic in high-level cognitive processes, but that they may play a role depending on the task. Importantly, we urge researchers to engage in transparent, high-powered, and fully pre-registered experiments like the present one to ensure the field advances on a solid basis.
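    The sequential sampling design mentioned above keeps adding participants until the Bayes factor favouring the null (BF01) or the alternative (BF10) crosses a pre-set bound. The sketch below illustrates the logic of such a stopping rule; it assumes pingouin's JZS Bayes factor for a paired t-test, and the thresholds, batch size, and simulated data are invented for illustration, so this is not the authors' analysis code.

```python
# Minimal sketch of a sequential-sampling stopping rule based on Bayes factors.
# Thresholds, batch size, and the simulated data are illustrative assumptions.
import numpy as np
from scipy import stats
import pingouin as pg

rng = np.random.default_rng(2)
BF01_STOP = 10.0      # stop when evidence for the null is this strong
BF10_STOP = 10.0      # ...or when evidence for the effect is this strong
BATCH = 10            # participants added per sampling step

interference = np.array([])   # per-participant interference scores
while True:
    interference = np.concatenate([interference, rng.normal(0.0, 1.0, BATCH)])
    t = stats.ttest_1samp(interference, 0.0).statistic
    n = interference.size
    bf10 = float(pg.bayesfactor_ttest(t, nx=n, paired=True))
    bf01 = 1.0 / bf10
    if bf01 >= BF01_STOP or bf10 >= BF10_STOP or n >= 200:
        break

print(f"N = {n}, BF01 = {bf01:.1f}")
```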
  • Morey, R. D., Kaschak, M. P., Díez-Álamo, A. M., Glenberg, A. M., Zwaan, R. A., Lakens, D., Ibáñez, A., García, A., Gianelli, C., Jones, J. L., Madden, J., Alifano, F., Bergen, B., Bloxsom, N. G., Bub, D. N., Cai, Z. G., Chartier, C. R., Chatterjee, A., Conwell, E., Cook, S. W., Davis, J. D., Evers, E., Girard, S., Harter, D., Hartung, F., Herrera, E., Huettig, F., Humphries, S., Juanchich, M., Kühne, K., Lu, S., Lynes, T., Masson, M. E. J., Ostarek, M., Pessers, S., Reglin, R., Steegen, S., Thiessen, E. D., Thomas, L. E., Trott, S., Vandekerckhove, J., Vanpaemel, W., Vlachou, M., Williams, K., & Ziv-Crispel, N. (2022). A pre-registered, multi-lab non-replication of the Action-sentence Compatibility Effect (ACE). Psychonomic Bulletin & Review, 29, 613-626. doi:10.3758/s13423-021-01927-8.

    Abstract

    The Action-sentence Compatibility Effect (ACE) is a well-known demonstration of the role of motor activity in the comprehension of language. Participants are asked to make sensibility judgments on sentences by producing movements toward the body or away from the body. The ACE is the finding that movements are faster when the direction of the movement (e.g., toward) matches the direction of the action in the to-be-judged sentence (e.g., Art gave you the pen describes action toward you). We report on a pre-registered, multi-lab replication of one version of the ACE. The results show that none of the 18 labs involved in the study observed a reliable ACE, and that the meta-analytic estimate of the size of the ACE was essentially zero.
  • Murphy, E., Woolnough, O., Rollo, P. S., Roccaforte, Z., Segaert, K., Hagoort, P., & Tandon, N. (2022). Minimal phrase composition revealed by intracranial recordings. The Journal of Neuroscience, 42(15), 3216-3227. doi:10.1523/JNEUROSCI.1575-21.2022.

    Abstract

    The ability to comprehend phrases is an essential integrative property of the brain. Here we evaluate the neural processes that enable the transition from single word processing to a minimal compositional scheme. Previous research has reported conflicting timing effects of composition, and disagreement persists with respect to inferior frontal and posterior temporal contributions. To address these issues, 19 patients (10 male, 9 female) implanted with penetrating depth or surface subdural intracranial electrodes heard auditory recordings of adjective-noun, pseudoword-noun and adjective-pseudoword phrases and judged whether the phrase matched a picture. Stimulus-dependent alterations in broadband gamma activity, low frequency power and phase-locking values across the language-dominant left hemisphere were derived. This revealed a mosaic located on the lower bank of the posterior superior temporal sulcus (pSTS), in which closely neighboring cortical sites displayed exclusive sensitivity to either lexicality or phrase structure, but not both. Distinct timings were found for effects of phrase composition (210–300 ms) and pseudoword processing (approximately 300–700 ms), and these were localized to neighboring electrodes in pSTS. The pars triangularis and temporal pole encoded anticipation of composition in broadband low frequencies, and both regions exhibited greater functional connectivity with pSTS during phrase composition. Our results suggest that the pSTS is a highly specialized region comprised of sparsely interwoven heterogeneous constituents that encodes both lower and higher level linguistic features. This hub in pSTS for minimal phrase processing may form the neural basis for the human-specific computational capacity for forming hierarchically organized linguistic structures.
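    Among the measures derived in this study, phase-locking values quantify how consistent the phase difference between two recording sites is over time. The sketch below shows the textbook PLV computation via band-pass filtering and the Hilbert transform; the frequency band, sampling rate, and simulated signals are assumptions for illustration, not the study's actual processing pipeline.

```python
# Minimal sketch of a phase-locking value (PLV) between two electrode signals.
# Filter band, sampling rate, and the simulated data are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000                                           # assumed sampling rate (Hz)
rng = np.random.default_rng(3)
shared = rng.standard_normal(fs * 10)
x = shared + 0.5 * rng.standard_normal(fs * 10)     # stand-in electrode 1
y = shared + 0.5 * rng.standard_normal(fs * 10)     # stand-in electrode 2

# Band-pass both signals in a low-frequency band (here 4-8 Hz)
b, a = butter(4, [4, 8], btype="bandpass", fs=fs)
phase_x = np.angle(hilbert(filtfilt(b, a, x)))
phase_y = np.angle(hilbert(filtfilt(b, a, y)))

# PLV: magnitude of the mean phase-difference vector (0 = no locking, 1 = perfect)
plv = np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))
print("PLV:", round(plv, 3))
```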
  • Narasimhan, B. (2003). Motion events and the lexicon: The case of Hindi. Lingua, 113(2), 123-160. doi:10.1016/S0024-3841(02)00068-2.

    Abstract

    English, and a variety of Germanic languages, allow constructions such as the bottle floated into the cave , whereas languages such as Spanish, French, and Hindi are highly restricted in allowing manner of motion verbs to occur with path phrases. This typological observation has been accounted for in terms of the conflation of complex meaning in basic or derived verbs [Talmy, L., 1985. Lexicalization patterns: semantic structure in lexical forms. In: Shopen, T. (Ed.), Language Typology and Syntactic Description 3: Grammatical Categories and the Lexicon. Cambridge University Press, Cambridge, pp. 57–149; Levin, B., Rappaport-Hovav, M., 1995. Unaccusativity: At the Syntax–Lexical Semantics Interface. MIT Press, Cambridge, MA], or the presence of path “satellites” with special grammatical properties in the lexicon of languages such as English, which allow such phrasal combinations [cf. Talmy, L., 1985. Lexicalization patterns: semantic structure in lexical forms. In: Shopen, T. (Ed.), Language Typology and Syntactic Description 3: Grammatical Categories and the Lexicon. Cambridge University Press, Cambridge, pp. 57–149; Talmy, L., 1991. Path to realisation: via aspect and result. In: Proceedings of the Seventeenth Annual Meeting of the Berkeley Linguistics Society. Berkeley Linguistics Society, Berkeley, pp. 480–520]. I use data from Hindi to show that there is little empirical support for the claim that the constraint on the phrasal combination is correlated with differences in verb meaning or the presence of satellites in the lexicon of a language. However, proposals which eschew lexicalization accounts for more general aspectual constraints on the manner verb + path phrase combination in Spanish-type languages (Aske, J., 1989. Path Predicates in English and Spanish: A Closer look. In: Proceedings of the Fifteenth Annual Meeting of the Berkeley Linguistics Society. Berkeley Linguistics Society, Berkeley, pp. 1–14) cannot account for the full range of data in Hindi either. On the basis of these facts, I argue that an empirically adequate account can be formulated in terms of a general mapping constraint, formulated in terms of whether the lexical requirements of the verb strictly or weakly constrain its syntactic privileges of occurrence. In Hindi, path phrases can combine with manner of motion verbs only to the degree that they are compatible with the semantic profile of the verb. Path phrases in English, on the other hand, can extend the verb's “semantic profile” subject to certain constraints. I suggest that path phrases are licensed in English by the semantic requirements of the “construction” in which they appear rather than by the selectional requirements of the verb (Fillmore, C., Kay, P., O'Connor, M.C., 1988, Regularity and idiomaticity in grammatical constructions. Language 64, 501–538; Jackendoff, 1990, Semantic Structures. MIT Press, Cambridge, MA; Goldberg, 1995, Constructions: A Construction Grammar Approach to Argument Structure. University of Chicago Press, Chicago and London).
  • Nayak, S., Coleman, P. L., Ladányi, E., Nitin, R., Gustavson, D. E., Fisher, S. E., Magne, C. L., & Gordon, R. L. (2022). The Musical Abilities, Pleiotropy, Language, and Environment (MAPLE) framework for understanding musicality-language links across the lifespan. Neurobiology of Language, 3(4), 615-664. doi:10.1162/nol_a_00079.

    Abstract

    Using individual differences approaches, a growing body of literature finds positive associations between musicality and language-related abilities, complementing prior findings of links between musical training and language skills. Despite these associations, musicality has been often overlooked in mainstream models of individual differences in language acquisition and development. To better understand the biological basis of these individual differences, we propose the Musical Abilities, Pleiotropy, Language, and Environment (MAPLE) framework. This novel integrative framework posits that musical and language-related abilities likely share some common genetic architecture (i.e., genetic pleiotropy) in addition to some degree of overlapping neural endophenotypes, and genetic influences on musically and linguistically enriched environments. Drawing upon recent advances in genomic methodologies for unraveling pleiotropy, we outline testable predictions for future research on language development and how its underlying neurobiological substrates may be supported by genetic pleiotropy with musicality. In support of the MAPLE framework, we review and discuss findings from over seventy behavioral and neural studies, highlighting that musicality is robustly associated with individual differences in a range of speech-language skills required for communication and development. These include speech perception-in-noise, prosodic perception, morphosyntactic skills, phonological skills, reading skills, and aspects of second/foreign language learning. Overall, the current work provides a clear agenda and framework for studying musicality-language links using individual differences approaches, with an emphasis on leveraging advances in the genomics of complex musicality and language traits.
  • Neumann, A., Nolte, I. M., Pappa, I., Ahluwalia, T. S., Pettersson, E., Rodriguez, A., Whitehouse, A., Van Beijsterveldt, C. E. M., Benyamin, B., Hammerschlag, A. R., Helmer, Q., Karhunen, V., Krapohl, E., Lu, Y., Van der Most, P. J., Palviainen, T., St Pourcain, B., Seppälä, I., Suarez, A., Vilor-Tejedor, N., Tiesler, C. M. T., Wang, C., Wills, A., Zhou, A., Alemany, S., Bisgaard, H., Bønnelykke, K., Davies, G. E., Hakulinen, C., Henders, A. K., Hyppönen, E., Stokholm, J., Bartels, M., Hottenga, J.-J., Heinrich, J., Hewitt, J., Keltikangas-Järvinen, L., Korhonen, T., Kaprio, J., Lahti, J., Lahti-Pulkkinen, M., Lehtimäki, T., Middeldorp, C. M., Najman, J. M., Pennell, C., Power, C., Oldehinkel, A. J., Plomin, R., Räikkönen, K., Raitakari, O. T., Rimfeld, K., Sass, L., Snieder, H., Standl, M., Sunyer, J., Williams, G. M., Bakermans-Kranenburg, M. J., Boomsma, D. I., Van IJzendoorn, M. H., Hartman, C. A., & Tiemeier, H. (2022). A genome-wide association study of total child psychiatric problems scores. PLOS ONE, 17(8): e0273116. doi:10.1371/journal.pone.0273116.

    Abstract

    Substantial genetic correlations have been reported across psychiatric disorders and numerous cross-disorder genetic variants have been detected. To identify the genetic variants underlying general psychopathology in childhood, we performed a genome-wide association study using a total psychiatric problem score. We analyzed 6,844,199 common SNPs in 38,418 school-aged children from 20 population-based cohorts participating in the EAGLE consortium. The SNP heritability of total psychiatric problems was 5.4% (SE = 0.01) and two loci reached genome-wide significance: rs10767094 and rs202005905. We also observed an association of SBF2, a gene associated with neuroticism in previous GWAS, with total psychiatric problems. The genetic effects underlying the total score were shared with common psychiatric disorders only (attention-deficit/hyperactivity disorder, anxiety, depression, insomnia) (rG > 0.49), but not with autism or the less common adult disorders (schizophrenia, bipolar disorder, or eating disorders) (rG < 0.01). Importantly, the total psychiatric problem score also showed at least a moderate genetic correlation with intelligence, educational attainment, wellbeing, smoking, and body fat (rG > 0.29). The results suggest that many common genetic variants are associated with childhood psychiatric symptoms and related phenotypes in general instead of with specific symptoms. Further research is needed to establish causality and pleiotropic mechanisms between related traits.

    Additional information

    Full summary results
  • Niarchou, M., Gustavson, D. E., Sathirapongsasuti, J. F., Anglada-Tort, M., Eising, E., Bell, E., McArthur, E., Straub, P., The 23andMe Research Team, McAuley, J. D., Capra, J. A., Ullén, F., Creanza, N., Mosing, M. A., Hinds, D., Davis, L. K., Jacoby, N., & Gordon, R. L. (2022). Genome-wide association study of musical beat synchronization demonstrates high polygenicity. Nature Human Behaviour, 6(9), 1292-1309. doi:10.1038/s41562-022-01359-x.

    Abstract

    Moving in synchrony to the beat is a fundamental component of musicality. Here we conducted a genome-wide association study to identify common genetic variants associated with beat synchronization in 606,825 individuals. Beat synchronization exhibited a highly polygenic architecture, with 69 loci reaching genome-wide significance (P < 5 × 10⁻⁸) and single-nucleotide-polymorphism-based heritability (on the liability scale) of 13%–16%. Heritability was enriched for genes expressed in brain tissues and for fetal and adult brain-specific gene regulatory elements, underscoring the role of central-nervous-system-expressed genes linked to the genetic basis of the trait. We performed validations of the self-report phenotype (through separate experiments) and of the genome-wide association study (polygenic scores for beat synchronization were associated with patients algorithmically classified as musicians in medical records of a separate biobank). Genetic correlations with breathing function, motor function, processing speed and chronotype suggest shared genetic architecture with beat synchronization and provide avenues for new phenotypic and genetic explorations.

    Additional information

    supplementary information
  • Nijveld, A., Ten Bosch, L., & Ernestus, M. (2022). The use of exemplars differs between native and non-native listening. Bilingualism: Language and Cognition, 25(5), 841-855. doi:10.1017/S1366728922000116.

    Abstract

    This study compares the role of exemplars in native and non-native listening. Two English identity priming experiments were conducted with native English, Dutch non-native, and Spanish non-native listeners. In Experiment 1, primes and targets were spoken in the same or a different voice. Only the native listeners showed exemplar effects. In Experiment 2, primes and targets had the same or a different degree of vowel reduction. The Dutch, but not the Spanish, listeners were familiar with this reduction pattern from their L1 phonology. In this experiment, exemplar effects only arose for the Spanish listeners. We propose that in these lexical decision experiments the use of exemplars is co-determined by listeners’ available processing resources, which is modulated by the familiarity with the variation type from their L1 phonology. The use of exemplars differs between native and non-native listening, suggesting qualitative differences between native and non-native speech comprehension processes.
  • Noordman, L. G. M., & Vonk, W. (1998). Memory-based processing in understanding causal information. Discourse Processes, 191-212. doi:10.1080/01638539809545044.

    Abstract

    The reading process depends both on the text and on the reader. When we read a text, propositions in the current input are matched to propositions in the memory representation of the previous discourse but also to knowledge structures in long‐term memory. Therefore, memory‐based text processing refers both to the bottom‐up processing of the text and to the top‐down activation of the reader's knowledge. In this article, we focus on the role of cognitive structures in the reader's knowledge. We argue that causality is an important category in structuring human knowledge and that this property has consequences for text processing. Some research is discussed that illustrates that the more the information in the text reflects causal categories, the more easily the information is processed.
  • Nordlinger, R., Garrido Rodriguez, G., & Kidd, E. (2022). Sentence planning and production in Murrinhpatha, an Australian 'free word order' language. Language, 98(2), 187-220. Retrieved from https://muse.jhu.edu/article/857152.

    Abstract

    Psycholinguistic theories are based on a very small set of unrepresentative languages, so it is as yet unclear how typological variation shapes mechanisms supporting language use. In this article we report the first on-line experimental study of sentence production in an Australian free word order language: Murrinhpatha. Forty-six adult native speakers of Murrinhpatha described a series of unrelated transitive scenes that were manipulated for humanness (±human) in the agent and patient roles while their eye movements were recorded. Speakers produced a large range of word orders, consistent with the language having flexible word order, with variation significantly influenced by agent and patient humanness. An analysis of eye movements showed that Murrinhpatha speakers' first fixation on an event character did not alone determine word order; rather, early in speech planning participants rapidly encoded both event characters and their relationship to each other. That is, they engaged in relational encoding, laying down a very early conceptual foundation for the word order they eventually produced. These results support a weakly hierarchical account of sentence production and show that speakers of a free word order language encode the relationships between event participants during earlier stages of sentence planning than is typically observed for languages with fixed word orders.
  • Norris, D., McQueen, J. M., & Cutler, A. (2003). Perceptual learning in speech. Cognitive Psychology, 47(2), 204-238. doi:10.1016/S0010-0285(03)00006-9.

    Abstract

    This study demonstrates that listeners use lexical knowledge in perceptual learning of speech sounds. Dutch listeners first made lexical decisions on Dutch words and nonwords. The final fricative of 20 critical words had been replaced by an ambiguous sound, between [f] and [s]. One group of listeners heard ambiguous [f]-final words (e.g., [wɪtlo?], from witlof, chicory) and unambiguous [s]-final words (e.g., naaldbos, pine forest). Another group heard the reverse (e.g., ambiguous [na:ldbo?], unambiguous witlof). Listeners who had heard [?] in [f]-final words were subsequently more likely to categorize ambiguous sounds on an [f]–[s] continuum as [f] than those who heard [?] in [s]-final words. Control conditions ruled out alternative explanations based on selective adaptation and contrast. Lexical information can thus be used to train categorization of speech. This use of lexical information differs from the on-line lexical feedback embodied in interactive models of speech perception. In contrast to on-line feedback, lexical feedback for learning is of benefit to spoken word recognition (e.g., in adapting to a newly encountered dialect).
  • Nyberg, L., Marklund, P., Persson, J., Cabeza, R., Forkstam, C., Petersson, K. M., & Ingvar, M. (2003). Common prefrontal activations during working memory, episodic memory, and semantic memory. Neuropsychologia, 41(3), 371-377. doi:10.1016/S0028-3932(02)00168-9.

    Abstract

    Regions of the prefrontal cortex (PFC) are typically activated in many different cognitive functions. In most studies, the focus has been on the role of specific PFC regions in specific cognitive domains, but more recently similarities in PFC activations across cognitive domains have been stressed. Such similarities may suggest that a region mediates a common function across a variety of cognitive tasks. In this study, we compared the activation patterns associated with tests of working memory, semantic memory and episodic memory. The results converged on a general involvement of four regions across memory tests. These were located in left frontopolar cortex, left mid-ventrolateral PFC, left mid-dorsolateral PFC and dorsal anterior cingulate cortex. These findings provide evidence that some PFC regions are engaged during many different memory tests. The findings are discussed in relation to theories about the functional contribution of the PFC regions and the architecture of memory.
  • Nyberg, L., Sandblom, J., Jones, S., Stigsdotter Neely, A., Petersson, K. M., Ingvar, M., & Bäckman, L. (2003). Neural correlates of training-related memory improvement in adulthood and aging. Proceedings of the National Academy of Sciences of the United States of America, 100(23), 13728-13733. doi:10.1073/pnas.1735487100.

    Abstract

    Cognitive studies show that both younger and older adults can increase their memory performance after training in using a visuospatial mnemonic, although age-related memory deficits tend to be magnified rather than reduced after training. Little is known about the changes in functional brain activity that accompany training-induced memory enhancement, and whether age-related activity changes are associated with the size of training-related gains. Here, we demonstrate that younger adults show increased activity during memory encoding in occipito-parietal and frontal brain regions after learning the mnemonic. Older adults did not show increased frontal activity, and only those elderly persons who benefited from the mnemonic showed increased occipitoparietal activity. These findings suggest that age-related differences in cognitive reserve capacity may reflect both a frontal processing deficiency and a posterior production deficiency.
  • O'Brien, D. P., & Bowerman, M. (1998). Martin D. S. Braine (1926–1996): Obituary. American Psychologist, 53, 563. doi:10.1037/0003-066X.53.5.563.

    Abstract

    Memorializes Martin D. S. Braine, whose research on child language acquisition and on both child and adult thinking and reasoning had a major influence on modern cognitive psychology. Addressing meaning as well as position, Braine argued that children start acquiring language by learning narrow-scope positional formulas that map components of meaning to positions in the utterance. These proposals were critical in starting discussions of the possible universality of the pivot-grammar stage and of the role of syntax, semantics, and pragmatics in children's early grammar and were pivotal to the rise of approaches in which cognitive development in language acquisition is stressed.
  • Ogdie, M. N., MacPhie, I. L., Minassian, S. L., Yang, M., Fisher, S. E., Francks, C., Cantor, R. M., McCracken, J. T., McGough, J. J., Nelson, S. F., Monaco, A. P., & Smalley, S. L. (2003). A genomewide scan for Attention-Deficit/Hyperactivity Disorder in an extended sample: Suggestive linkage on 17p11. American Journal of Human Genetics, 72(5), 1268-1279. doi:10.1086/375139.

    Abstract

    Attention-deficit/hyperactivity disorder (ADHD [MIM 143465]) is a common, highly heritable neurobehavioral disorder of childhood onset, characterized by hyperactivity, impulsivity, and/or inattention. As part of an ongoing study of the genetic etiology of ADHD, we have performed a genomewide linkage scan in 204 nuclear families comprising 853 individuals and 270 affected sibling pairs (ASPs). Previously, we reported genomewide linkage analysis of a “first wave” of these families composed of 126 ASPs. A follow-up investigation of one region on 16p yielded significant linkage in an extended sample. The current study extends the original sample of 126 ASPs to 270 ASPs and provides linkage analyses of the entire sample, using polymorphic microsatellite markers that define an ∼10-cM map across the genome. Maximum LOD score (MLS) analysis identified suggestive linkage for 17p11 (MLS=2.98) and four nominal regions with MLS values >1.0, including 5p13, 6q14, 11q25, and 20q13. These data, taken together with the fine mapping on 16p13, suggest two regions as highly likely to harbor risk genes for ADHD: 16p13 and 17p11. Interestingly, both regions, as well as 5p13, have been highlighted in genomewide scans for autism.
  • Ohlerth, A.-K., Bastiaanse, R., Nickels, L., Neu, B., Zhang, W., Ille, S., Sollmann, N., & Krieg, S. M. (2022). Dual-task nTMS mapping to visualize the cortico-subcortical language network and capture postoperative outcome—A patient series in neurosurgery. Frontiers in Oncology, 11: 788122. doi:10.3389/fonc.2021.788122.

    Abstract

    Background: Perioperative assessment of language function in brain tumor patients commonly relies on administration of object naming during stimulation mapping. Ample research, however, points to the benefit of adding verb tasks to the testing paradigm in order to delineate and preserve postoperative language function more comprehensively. This research uses a case series approach to explore the feasibility and added value of a dual-task protocol that includes both a noun task (object naming) and a verb task (action naming) in perioperative delineation of language functions.

    Materials and Methods: Seven neurosurgical cases underwent perioperative language assessment with both object and action naming. This entailed preoperative baseline testing, preoperative stimulation mapping with navigated Transcranial Magnetic Stimulation (nTMS) with subsequent white matter visualization, intraoperative mapping with Direct Electrical Stimulation (DES) in 4 cases, and postoperative imaging and examination of language change.

    Results: We observed a divergent pattern of language organization and decline between cases who showed lesions close to the delineated language network and hence underwent DES mapping, and those that did not. The latter displayed no new impairment postoperatively consistent with an unharmed network for the neural circuits of both object and action naming. For the cases who underwent DES, on the other hand, a higher sensitivity was found for action naming over object naming. Firstly, action naming preferentially predicted the overall language state compared to aphasia batteries. Secondly, it more accurately predicted intraoperative positive language areas as revealed by DES. Thirdly, double dissociations between postoperatively unimpaired object naming and impaired action naming and vice versa indicate segregated skills and neural representation for noun versus verb processing, especially in the ventral stream. Overlaying postoperative imaging with object and action naming networks revealed that dual-task nTMS mapping can explain the drop in performance in those cases where the network appeared in proximity to the resection cavity.

    Conclusion: Using a dual-task protocol for visualization of cortical and subcortical language areas through nTMS mapping proved to be able to capture network-to-deficit relations in our case series. Ultimately, adding action naming to clinical nTMS and DES mapping may help prevent postoperative deficits of this seemingly segregated skill.

    Additional information

    table 1 and table 2
  • Okbay, A., Wu, Y., Wang, N., Jayashankar, H., Bennett, M., Nehzati, S. M., Sidorenko, J., Kweon, H., Goldman, G., Gjorgjieva, T., Jiang, Y., Hicks, B., Tian, C., Hinds, D. A., Ahlskog, R., Magnusson, P. K. E., Oskarsson, S., Hayward, C., Campbell, A., Porteous, D. J., Freese, J., Herd, P., 23andMe Research Team, Social Science Genetic Association Consortium, Watson, C., Jala, J., Conley, D., Koellinger, P. D., Johannesson, M., Laibson, D., Meyer, M. N., Lee, J. J., Kong, A., Yengo, L., Cesarini, D., Turley, P., Visscher, P. M., Beauchamp, J. P., Benjamin, D. J., & Young, A. I. (2022). Polygenic prediction of educational attainment within and between families from genome-wide association analyses in 3 million individuals. Nature Genetics, 54, 437-449. doi:10.1038/s41588-022-01016-z.

    Abstract

    We conduct a genome-wide association study (GWAS) of educational attainment (EA) in a sample of ~3 million individuals and identify 3,952 approximately uncorrelated genome-wide-significant single-nucleotide polymorphisms (SNPs). A genome-wide polygenic predictor, or polygenic index (PGI), explains 12–16% of EA variance and contributes to risk prediction for ten diseases. Direct effects (i.e., controlling for parental PGIs) explain roughly half the PGI’s magnitude of association with EA and other phenotypes. The correlation between mate-pair PGIs is far too large to be consistent with phenotypic assortment alone, implying additional assortment on PGI-associated factors. In an additional GWAS of dominance deviations from the additive model, we identify no genome-wide-significant SNPs, and a separate X-chromosome additive GWAS identifies 57.

    Additional information

    supplementary information
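    A polygenic index (PGI) of the kind described in the abstract above is, in essence, a weighted sum of an individual's allele dosages with weights taken from GWAS effect-size estimates, and its predictive value is summarized as the share of phenotypic variance it explains. The sketch below illustrates both steps on simulated data; the numbers of SNPs and individuals, the effect-size distribution, and the use of a simple squared correlation in place of a covariate-adjusted incremental R² are illustrative assumptions.

```python
# Minimal sketch: build a polygenic index (PGI) from GWAS weights and estimate
# the share of phenotype variance it explains. All data here are simulated.
import numpy as np

rng = np.random.default_rng(4)
n_people, n_snps = 5000, 1000

betas = rng.normal(0, 0.02, n_snps)                      # stand-in GWAS effect sizes
freqs = rng.uniform(0.05, 0.95, n_snps)                  # allele frequencies
genotypes = rng.binomial(2, freqs, (n_people, n_snps))   # 0/1/2 allele dosages

# Polygenic index: weighted sum of dosages, then standardized
pgi = genotypes @ betas
pgi = (pgi - pgi.mean()) / pgi.std()

# Simulated phenotype with a modest PGI contribution plus noise
phenotype = 0.35 * pgi + rng.standard_normal(n_people)

# Variance explained by the PGI (squared correlation; no covariates in this sketch)
r2 = np.corrcoef(pgi, phenotype)[0, 1] ** 2
print(f"variance explained by the PGI: {r2:.1%}")
```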
  • Onnis, L., Lim, A., Cheung, S., & Huettig, F. (2022). Is the mind inherently predicting? Exploring forward and backward looking in language processing. Cognitive Science, 46(10): e13201. doi:10.1111/cogs.13201.

    Abstract

    Prediction is one characteristic of the human mind. But what does it mean to say the mind is a ‘prediction machine’ and inherently forward looking, as is frequently claimed? In natural languages, many contexts are not easily predictable in a forward fashion. In English, for example, many frequent verbs do not carry unique meaning on their own, but instead rely on another word or words that follow them to become meaningful. Upon reading ‘take a’ the processor often cannot easily predict ‘walk’ as the next word. But the system can ‘look back’ and integrate ‘walk’ more easily when it follows ‘take a’ (e.g., as opposed to ‘make|get|have a walk’). In the present paper we provide further evidence for the importance of both forward and backward looking in language processing. In two self-paced reading tasks and an eye-tracking reading task, we found evidence that adult English native speakers’ sensitivity to word forward and backward conditional probability significantly explained variance in reading times over and above psycholinguistic predictors of reading latencies. We conclude that both forward- and backward-looking (prediction and integration) appear to be important characteristics of language processing. Our results thus suggest that it makes just as much sense to call the mind an ‘integration machine’, which is inherently backward looking.

    Additional information

    Open Data and Open Materials
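    The forward and backward conditional probabilities examined in this paper are simple bigram statistics: forward probability asks how predictable the next word is given the current word, and backward probability asks how predictable the current word's predecessor is given the current word. A minimal sketch over a toy corpus follows; the corpus and the whitespace tokenization are obviously toy assumptions.

```python
# Minimal sketch of forward and backward word-pair conditional probabilities.
# The toy corpus and whitespace tokenization are illustrative assumptions.
from collections import Counter

corpus = "take a walk take a break take a walk make a cake".split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def forward_prob(w1, w2):
    """P(w2 | w1): how predictable the next word is from the current one."""
    return bigrams[(w1, w2)] / unigrams[w1]

def backward_prob(w1, w2):
    """P(w1 | w2): how predictable the previous word is from the current one."""
    return bigrams[(w1, w2)] / unigrams[w2]

print("P(walk | a) =", forward_prob("a", "walk"))   # forward-looking
print("P(a | walk) =", backward_prob("a", "walk"))  # backward-looking
```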
  • Oswald, J. N., Van Cise, A. M., Dassow, A., Elliott, T., Johnson, M. T., Ravignani, A., & Podos, J. (2022). A collection of best practices for the collection and analysis of bioacoustic data. Applied Sciences, 12(23): 12046. doi:10.3390/app122312046.

    Abstract

    The field of bioacoustics is rapidly developing and characterized by diverse methodologies, approaches and aims. For instance, bioacoustics encompasses studies on the perception of pure tones in meticulously controlled laboratory settings, documentation of species’ presence and activities using recordings from the field, and analyses of circadian calling patterns in animal choruses. Newcomers to the field are confronted with a vast and fragmented literature, and a lack of accessible reference papers or textbooks. In this paper we contribute towards filling this gap. Instead of a classical list of “dos” and “don’ts”, we review some key papers which, we believe, embody best practices in several bioacoustic subfields. In the first three case studies, we discuss how bioacoustics can help identify the ‘who’, ‘where’ and ‘how many’ of animals within a given ecosystem. Specifically, we review cases in which bioacoustic methods have been applied with success to draw inferences regarding species identification, population structure, and biodiversity. In the fourth and fifth case studies, we highlight how structural properties in signal evolution can emerge via ecological constraints or cultural transmission. Finally, in a sixth example, we discuss acoustic methods that have been used to infer predator–prey dynamics in cases where direct observation was not feasible. Across all these examples, we emphasize the importance of appropriate recording parameters and experimental design. We conclude by highlighting common best practices across studies as well as caveats about our own overview. We hope our efforts spur a more general effort in standardizing best practices across the subareas we’ve highlighted in order to increase compatibility among bioacoustic studies and inspire cross-pollination across the discipline.
  • Owoyele, B., Trujillo, J. P., De Melo, G., & Pouw, W. (2022). Masked-Piper: Masking personal identities in visual recordings while preserving multimodal information. SoftwareX, 20: 101236. doi:10.1016/j.softx.2022.101236.

    Abstract

    In this increasingly data-rich world, visual recordings of human behavior are often unable to be shared due to concerns about privacy. Consequently, data sharing in fields such as behavioral science, multimodal communication, and human movement research is often limited. In addition, in legal and other non-scientific contexts, privacy-related concerns may preclude the sharing of video recordings and thus remove the rich multimodal context that humans recruit to communicate. Minimizing the risk of identity exposure while preserving critical behavioral information would maximize utility of public resources (e.g., research grants) and time invested in audio–visual research. Here we present an open-source computer vision tool that masks the identities of humans while maintaining rich information about communicative body movements. Furthermore, this masking tool can be easily applied to many videos, leveraging computational tools to augment the reproducibility and accessibility of behavioral research. The tool is designed for researchers and practitioners engaged in kinematic and affective research. Application areas include teaching/education, communication and human movement research, CCTV, and legal contexts.

    Additional information

    setup and usage
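    Masked-Piper's aim of hiding who a person is while keeping how they move can be approximated, in its simplest form, by detecting faces and blurring them frame by frame. The OpenCV sketch below illustrates only that general idea; it is not the Masked-Piper pipeline itself (which masks bodies and overlays full pose estimates), and the detector choice and file names are placeholder assumptions.

```python
# Minimal sketch: blur detected faces in a video, frame by frame, with OpenCV.
# This illustrates identity masking in general, not the Masked-Piper tool;
# input/output file names are placeholders.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture("input.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter("masked.mp4", cv2.VideoWriter_fourcc(*"mp4v"),
                      fps, (width, height))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
        # Replace each detected face region with a heavy blur
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(
            frame[y:y + h, x:x + w], (51, 51), 0)
    out.write(frame)

cap.release()
out.release()
```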
  • Ozker, M., Doyle, W., Devinsky, O., & Flinker, A. (2022). A cortical network processes auditory error signals during human speech production to maintain fluency. PLoS Biology, 20: e3001493. doi:10.1371/journal.pbio.3001493.

    Abstract

    Hearing one’s own voice is critical for fluent speech production as it allows for the detection and correction of vocalization errors in real time. This behavior, known as the auditory feedback control of speech, is impaired in various neurological disorders ranging from stuttering to aphasia; however, the underlying neural mechanisms are still poorly understood. Computational models of speech motor control suggest that, during speech production, the brain uses an efference copy of the motor command to generate an internal estimate of the speech output. When actual feedback differs from this internal estimate, an error signal is generated to correct the internal estimate and update necessary motor commands to produce intended speech. We were able to localize the auditory error signal using electrocorticographic recordings from neurosurgical participants during a delayed auditory feedback (DAF) paradigm. In this task, participants heard their voice with a time delay as they produced words and sentences (similar to an echo on a conference call), which is well known to disrupt fluency by causing slow and stutter-like speech in humans. We observed a significant response enhancement in auditory cortex that scaled with the duration of feedback delay, indicating an auditory speech error signal. Immediately following auditory cortex, dorsal precentral gyrus (dPreCG), a region that has not been implicated in auditory feedback processing before, exhibited a markedly similar response enhancement, suggesting a tight coupling between the 2 regions. Critically, response enhancement in dPreCG occurred only during articulation of long utterances due to a continuous mismatch between produced speech and reafferent feedback. These results suggest that dPreCG plays an essential role in processing auditory error signals during speech production to maintain fluency.

    Additional information

    data and code
  • Park, B.-y., Larivière, S., Rodríguez-Cruces, R., Royer, J., Tavakol, S., Wang, Y., Caciagli, L., Caligiuri, M. E., Gambardella, A., Concha, L., Keller, S. S., Cendes, F., Alvim, M. K. M., Yasuda, C., Bonilha, L., Gleichgerrcht, E., Focke, N. K., Kreilkamp, B. A. K., Domin, M., Von Podewils, F., Langner, S., Rummel, C., Rebsamen, M., Wiest, R., Martin, P., Kotikalapudi, R., Bender, B., O’Brien, T. J., Law, M., Sinclair, B., Vivash, L., Desmond, P. M., Malpas, C. B., Lui, E., Alhusaini, S., Doherty, C. P., Cavalleri, G. L., Delanty, N., Kälviäinen, R., Jackson, G. D., Kowalczyk, M., Mascalchi, M., Semmelroch, M., Thomas, R. H., Soltanian-Zadeh, H., Davoodi-Bojd, E., Zhang, J., Lenge, M., Guerrini, R., Bartolini, E., Hamandi, K., Foley, S., Weber, B., Depondt, C., Absil, J., Carr, S. J. A., Abela, E., Richardson, M. P., Devinsky, O., Severino, M., Striano, P., Parodi, C., Tortora, D., Hatton, S. N., Vos, S. B., Duncan, J. S., Galovic, M., Whelan, C. D., Bargalló, N., Pariente, J., Conde, E., Vaudano, A. E., Tondelli, M., Meletti, S., Kong, X., Francks, C., Fisher, S. E., Caldairou, B., Ryten, M., Labate, A., Sisodiya, S. M., Thompson, P. M., McDonald, C. R., Bernasconi, A., Bernasconi, N., & Bernhardt, B. C. (2022). Topographic divergence of atypical cortical asymmetry and atrophy patterns in temporal lobe epilepsy. Brain, 145(4), 1285-1298. doi:10.1093/brain/awab417.

    Abstract

    Temporal lobe epilepsy (TLE), a common drug-resistant epilepsy in adults, is primarily a limbic network disorder associated with predominant unilateral hippocampal pathology. Structural MRI has provided an in vivo window into whole-brain grey matter structural alterations in TLE relative to controls, by either mapping (i) atypical inter-hemispheric asymmetry or (ii) regional atrophy. However, similarities and differences of both atypical asymmetry and regional atrophy measures have not been systematically investigated.

    Here, we addressed this gap using the multi-site ENIGMA-Epilepsy dataset comprising MRI brain morphological measures in 732 TLE patients and 1,418 healthy controls. We compared spatial distributions of grey matter asymmetry and atrophy in TLE, contextualized their topographies relative to spatial gradients in cortical microstructure and functional connectivity calculated using 207 healthy controls from the Human Connectome Project and an independent dataset containing 23 TLE patients and 53 healthy controls, and examined clinical associations using machine learning.

    We identified a marked divergence in the spatial distribution of atypical inter-hemispheric asymmetry and regional atrophy mapping. The former revealed a temporo-limbic disease signature while the latter showed diffuse and bilateral patterns. Our findings were robust across individual sites and patients. Cortical atrophy was significantly correlated with disease duration and age at seizure onset, while degrees of asymmetry did not show a significant relationship to these clinical variables.

    Our findings highlight that the mapping of atypical inter-hemispheric asymmetry and regional atrophy tap into two complementary aspects of TLE-related pathology, with the former revealing primary substrates in ipsilateral limbic circuits and the latter capturing bilateral disease effects. These findings refine our notion of the neuropathology of TLE and may inform future discovery and validation of complementary MRI biomarkers in TLE.

    Additional information

    awab417_supplementary_data.pdf
  • Paterson, K. B., Liversedge, S. P., Rowland, C. F., & Filik, R. (2003). Children's comprehension of sentences with focus particles. Cognition, 89(3), 263-294. doi:10.1016/S0010-0277(03)00126-4.

    Abstract

    We report three studies investigating children’s and adults’ comprehension of sentences containing the focus particle only. In Experiments 1 and 2, four groups of participants (6–7 years, 8–10 years, 11–12 years and adult) compared sentences with only in different syntactic positions against pictures that matched or mismatched events described by the sentence. Contrary to previous findings (Crain, S., Ni, W., & Conway, L. (1994). Learning, parsing and modularity. In C. Clifton, L. Frazier, & K. Rayner (Eds.), Perspectives on sentence processing. Hillsdale, NJ: Lawrence Erlbaum; Philip, W., & Lynch, E. (1999). Felicity, relevance, and acquisition of the grammar of every and only. In S. C. Howell, S. A. Fish, & T. Keith-Lucas (Eds.), Proceedings of the 24th annual Boston University conference on language development. Somerville, MA: Cascadilla Press), we found that young children predominantly made errors by failing to process contrast information rather than errors in which they failed to use syntactic information to restrict the scope of the particle. Experiment 3 replicated these findings with pre-schoolers.
  • Pearson, L., & Pouw, W. (2022). Gesture–vocal coupling in Karnatak music performance: A neuro–bodily distributed aesthetic entanglement. Annals of the New York Academy of Sciences, 1515(1), 219-236. doi:10.1111/nyas.14806.

    Abstract

    In many musical styles, vocalists manually gesture while they sing. Coupling between gesture kinematics and vocalization has been examined in speech contexts, but it is an open question how these couple in music making. We examine this in a corpus of South Indian, Karnatak vocal music that includes motion-capture data. Through peak magnitude analysis (linear mixed regression) and continuous time-series analyses (generalized additive modeling), we assessed whether vocal trajectories around peaks in vertical velocity, speed, or acceleration were coupling with changes in vocal acoustics (namely, F0 and amplitude). Kinematic coupling was stronger for F0 change versus amplitude, pointing to F0's musical significance. Acceleration was the most predictive for F0 change and had the most reliable magnitude coupling, showing a one-third power relation. That acceleration, rather than other kinematics, is maximally predictive for vocalization is interesting because acceleration entails force transfers onto the body. As a theoretical contribution, we argue that gesturing in musical contexts should be understood in relation to the physical connections between gesturing and vocal production that are brought into harmony with the vocalists’ (enculturated) performance goals. Gesture–vocal coupling should, therefore, be viewed as a neuro–bodily distributed aesthetic entanglement.

    Additional information

    tables
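
    Illustrative note (not part of the publication above): the one-third power relation between peak acceleration and F0 change reported in this abstract can be illustrated with a small Python sketch. A power law y = a·x^b is linear on a log-log scale with slope b, so the exponent can be estimated with a simple least-squares fit; the data below are simulated, and ordinary least squares stands in for the mixed-effects and generalized additive models actually used in the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical gesture peaks: acceleration magnitude and the F0 change
        # around each peak, generated to follow a one-third power law plus noise.
        peak_acceleration = rng.uniform(1.0, 50.0, size=200)             # arbitrary units
        f0_change = 2.0 * peak_acceleration ** (1 / 3) * rng.lognormal(0.0, 0.1, 200)

        # log y = log a + b * log x, so the slope of a log-log fit estimates b.
        slope, intercept = np.polyfit(np.log(peak_acceleration), np.log(f0_change), 1)
        print(f"estimated exponent b = {slope:.2f} (generating value: 0.33)")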
  • Pederson, E., Danziger, E., Wilkins, D. G., Levinson, S. C., Kita, S., & Senft, G. (1998). Semantic typology and spatial conceptualization. Language, 74(3), 557-589. doi:10.2307/417793.
  • Pereira Soares, S. M., Kupisch, T., & Rothman, J. (2022). Testing potential transfer effects in heritage and adult L2 bilinguals acquiring a mini grammar as an additional language: An ERP approach. Brain Sciences, 12: 669. doi:10.3390/brainsci12050669.

    Abstract

    Models on L3/Ln acquisition differ with respect to how they envisage the degree (holistic vs. selective transfer of the L1, L2 or both) and/or timing (initial stages vs. development) of how the influence of source languages unfolds. This study uses EEG/ERPs to examine these models, bringing together two types of bilinguals: heritage speakers (HSs) (Italian-German, n = 15) compared to adult L2 learners (L1 German, L2 English, n = 28) learning L3/Ln Latin. Participants were trained on a selected Latin lexicon over two sessions and, afterward, on two grammatical properties: case (similar between German and Latin) and adjective–noun order (similar between Italian and Latin). Neurophysiological findings show an N200/N400 deflection for the HSs in case morphology and a P600 effect for the German L2 group in adjectival position. None of the current L3/Ln models predict the observed results, which questions the appropriateness of this methodology. Nevertheless, the results are illustrative of differences in how HSs and L2 learners approach the very initial stages of additional language learning, the implications of which are discussed.
  • Pereira Soares, S. M., Prystauka, Y., DeLuca, V., & Rothman, J. (2022). Type of bilingualism conditions individual differences in the oscillatory dynamics of inhibitory control. Frontiers in Human Neuroscience, 16: 910910. doi:10.3389/fnhum.2022.910910.

    Abstract

    The present study uses EEG time-frequency representations (TFRs) with a Flanker task to investigate if and how individual differences in bilingual language experience modulate neurocognitive outcomes (oscillatory dynamics) in two bilingual group types: late bilinguals (L2 learners) and early bilinguals (heritage speakers—HSs). TFRs were computed for both incongruent and congruent trials. The difference between the two (Flanker effect vis-à-vis cognitive interference) was then (1) compared between the HSs and the L2 learners, (2) modeled as a function of individual differences with bilingual experience within each group separately and (3) probed for its potential (a)symmetry between brain and behavioral data. We found no differences at the behavioral and neural levels for the between-groups comparisons. However, oscillatory dynamics (mainly theta increase and alpha suppression) of inhibition and cognitive control were found to be modulated by individual differences in bilingual language experience, albeit distinctly within each bilingual group. While the results indicate adaptations toward differential brain recruitment in line with bilingual language experience variation overall, this does not manifest uniformly. Rather, earlier versus later onset to bilingualism—the bilingual type—seems to constitute an independent qualifier to how individual differences play out.

    Additional information

    supplementary material
  • Perfors, A., & Kidd, E. (2022). The role of stimulus‐specific perceptual fluency in statistical learning. Cognitive Science, 46(2): e13100. doi:10.1111/cogs.13100.

    Abstract

    Humans have the ability to learn surprisingly complicated statistical information in a variety of modalities and situations, often based on relatively little input. These statistical learning (SL) skills appear to underlie many kinds of learning, but despite their ubiquity, we still do not fully understand precisely what SL is and what individual differences on SL tasks reflect. Here, we present experimental work suggesting that at least some individual differences arise from stimulus-specific variation in perceptual fluency: the ability to rapidly or efficiently code and remember the stimuli that SL occurs over. Experiment 1 demonstrates that participants show improved SL when the stimuli are simple and familiar; Experiment 2 shows that this improvement is not evident for simple but unfamiliar stimuli; and Experiment 3 shows that for the same stimuli (Chinese characters), SL is higher for people who are familiar with them (Chinese speakers) than those who are not (English speakers matched on age and education level). Overall, our findings indicate that performance on a standard SL task varies substantially within the same (visual) modality as a function of whether the stimuli involved are familiar or not, independent of stimulus complexity. Moreover, test–retest correlations of performance in an SL task using stimuli of the same level of familiarity (but distinct items) are stronger than correlations across the same task with stimuli of different levels of familiarity. Finally, we demonstrate that SL performance is predicted by an independent measure of stimulus-specific perceptual fluency that contains no SL component at all. Our results suggest that a key component of SL performance may be related to stimulus-specific processing and familiarity.
  • Petersson, K. M. (1998). Comments on a Monte Carlo approach to the analysis of functional neuroimaging data. NeuroImage, 8, 108-112.
  • Petersson, K. M., Sandblom, J., Elfgren, C., & Ingvar, M. (2003). Instruction-specific brain activations during episodic encoding: A generalized level of processing effect. Neuroimage, 20, 1795-1810. doi:10.1016/S1053-8119(03)00414-2.

    Abstract

    In a within-subject design we investigated the levels-of-processing (LOP) effect using visual material in a behavioral and a corresponding PET study. In the behavioral study we characterize a generalized LOP effect, using pleasantness and graphical quality judgments in the encoding situation, with two types of visual material, figurative and nonfigurative line drawings. In the PET study we investigate the related pattern of brain activations along these two dimensions. The behavioral results indicate that instruction and material contribute independently to the level of recognition performance. Therefore the LOP effect appears to stem both from the relative relevance of the stimuli (encoding opportunity) and an altered processing of stimuli brought about by the explicit instruction (encoding mode). In the PET study, encoding of visual material under the pleasantness (deep) instruction yielded left-lateralized frontoparietal and anterior temporal activations, while surface-based perceptually oriented processing (shallow instruction) yielded right-lateralized frontoparietal, posterior temporal, and occipitotemporal activations. The result that deep encoding was related to the left prefrontal cortex while shallow encoding was related to the right prefrontal cortex, holding the material constant, is not consistent with the HERA model. In addition, we suggest that the anterior medial superior frontal region is related to aspects of self-referential semantic processing and that the inferior parts of the anterior cingulate as well as the medial orbitofrontal cortex are related to affective processing, in this case pleasantness evaluation of the stimuli regardless of explicit semantic content. Finally, the left medial temporal lobe appears more actively engaged by elaborate meaning-based processing, and the complex response pattern observed in different subregions of the MTL lends support to the suggestion that this region is functionally segregated.
  • Pijls, F., Daelemans, W., & Kempen, G. (1987). Artificial intelligence tools for grammar and spelling instruction. Instructional Science, 16(4), 319-336. doi:10.1007/BF00117750.

    Abstract

    In The Netherlands, grammar teaching is an especially important subject in the curriculum of children aged 10-15 for several reasons. However, in spite of all attention and time invested, the results are poor. This article describes the problems and our attempt to overcome them by developing an intelligent computational instructional environment consisting of: a linguistic expert system, containing a module representing grammar and spelling rules and a number of modules to manipulate these rules; a didactic module; and a student interface with special facilities for grammar and spelling. Three prototypes of the functionality are discussed: BOUWSTEEN and COGO, which are programs for constructing and analyzing Dutch sentences; and TDTDT, a program for the conjugation of Dutch verbs.
  • Pijls, F., & Kempen, G. (1987). Kennistechnologische leermiddelen in het grammatica- en spellingonderwijs [Knowledge-technology learning tools in grammar and spelling education]. Nederlands Tijdschrift voor de Psychologie, 42, 354-363.
  • Pine, J. M., Lieven, E. V., & Rowland, C. F. (1998). Comparing different models of the development of the English verb category. Linguistics, 36(4), 807-830. doi:10.1515/ling.1998.36.4.807.

    Abstract

    In this study, data from the first six months of 12 children's multiword speech were used to test the validity of Valian's (1991) syntactic performance-limitation account and Tomasello's (1992) verb-island account of early multiword speech, with particular reference to the development of the English verb category. The results provide evidence for appropriate use of verb morphology, auxiliary verb structures, pronoun case marking, and SVO word order from quite early in development. However, they also demonstrate a great deal of lexical specificity in the children's use of these systems, evidenced by a lack of overlap in the verbs to which different morphological markers were applied, a lack of overlap in the verbs with which different auxiliary verbs were used, a disproportionate use of the first person singular nominative pronoun I, and a lack of overlap in the lexical items that served as the subjects and direct objects of transitive verbs. These findings raise problems for both a syntactic performance-limitation account and a strong verb-island account of the data and suggest the need to develop a more general lexicalist account of early multiword speech that explains why some words come to function as "islands" of organization in the child's grammar and others do not.
  • Poletiek, F. H. (1998). De geest van de jury. Psychologie en Maatschappij, 4, 376-378.
  • Poort, E. D., & Rodd, J. M. (2022). Cross-lingual priming of cognates and interlingual homographs from L2 to L1. Glossa Psycholinguistics, 1(1): 11. doi:10.5070/G601147.

    Abstract

    Many word forms exist in multiple languages, and can have either the same meaning (cognates) or a different meaning (interlingual homographs). Previous experiments have shown that processing of interlingual homographs in a bilingual’s second language is slowed down by recent experience with these words in the bilingual’s native language, while processing of cognates can be speeded up (Poort et al., 2016; Poort & Rodd, 2019a). The current experiment replicated Poort and Rodd’s (2019a) Experiment 2 but switched the direction of priming: Dutch–English bilinguals (n = 106) made Dutch semantic relatedness judgements to probes related to cognates (n = 50), interlingual homographs (n = 50) and translation equivalents (n = 50) they had seen 15 minutes previously embedded in English sentences. The current experiment is the first to show that a single encounter with an interlingual homograph in one’s second language can also affect subsequent processing in one’s native language. Cross-lingual priming did not affect the cognates. The experiment also extended Poort and Rodd’s (2019a) finding of a large interlingual homograph inhibition effect in a semantic relatedness task from the participants’ L2 to their L1, but again found no evidence for a cognate facilitation effect in a semantic relatedness task. These findings extend the growing literature that emphasises the high level of interaction in a bilingual’s mental lexicon, by demonstrating the influence of L2 experience on the processing of L1 words. Data, scripts, materials and pre-registration available via https://osf.io/2swyg/?view_only=b2ba2e627f6f4eaeac87edab2b59b236.
  • Postema, A., Van Mierlo, H., Bakker, A. B., & Barendse, M. T. (2022). Study-to-sports spillover among competitive athletes: A field study. International Journal of Sport and Exercise Psychology. Advance online publication. doi:10.1080/1612197X.2022.2058054.

    Abstract

    Combining academics and athletics is challenging but important for the psychological and psychosocial development of those involved. However, little is known about how experiences in academics spill over and relate to athletics. Drawing on the enrichment mechanisms proposed by the Work-Home Resources model, we posit that study crafting behaviours are positively related to volatile personal resources, which, in turn, are related to higher athletic achievement. Via structural equation modelling, we examine a path model among 243 student-athletes, incorporating study crafting behaviours and personal resources (i.e., positive affect and study engagement), and self- and coach-rated athletic achievement measured two weeks later. Results show that optimising the academic environment by crafting challenging study demands relates positively to positive affect and study engagement. In turn, positive affect related positively to self-rated athletic achievement, whereas – unexpectedly – study engagement related negatively to coach-rated athletic achievement. Optimising the academic environment through cognitive crafting and crafting social study resources did not relate to athletic outcomes. We discuss how these findings offer new insights into the interplay between academics and athletics.
  • Poulton, V. R., & Nieuwland, M. S. (2022). Can you hear what’s coming? Failure to replicate ERP evidence for phonological prediction. Neurobiology of Language, 3(4), 556-574. doi:10.1162/nol_a_00078.

    Abstract

    Prediction-based theories of language comprehension assume that listeners predict both the meaning and phonological form of likely upcoming words. In alleged event-related potential (ERP) demonstrations of phonological prediction, prediction-mismatching words elicit a phonological mismatch negativity (PMN), a frontocentral negativity that precedes the centroparietal N400 component. However, classification and replicability of the PMN has proven controversial, with ongoing debate on whether the PMN is a distinct component or merely an early part of the N400. In this electroencephalography (EEG) study, we therefore attempted to replicate the PMN effect and its separability from the N400, using a participant sample size (N = 48) that was more than double that of previous studies. Participants listened to sentences containing either a predictable word or an unpredictable word with/without phonological overlap with the predictable word. Preregistered analyses revealed a widely distributed negative-going ERP in response to unpredictable words in both the early (150–250 ms) and the N400 (300–500 ms) time windows. Bayes factor analysis yielded moderate evidence against a different scalp distribution of the effects in the two time windows. Although our findings do not speak against phonological prediction during sentence comprehension, they do speak against the PMN effect specifically as a marker of phonological prediction mismatch. Instead of a PMN effect, our results demonstrate the early onset of the auditory N400 effect associated with unpredictable words. Our failure to replicate further highlights the risk associated with commonly employed data-contingent analyses (e.g., analyses involving time windows or electrodes that were selected based on visual inspection) and small sample sizes in the cognitive neuroscience of language.
  • Pouw, W., & Holler, J. (2022). Timing in conversation is dynamically adjusted turn by turn in dyadic telephone conversations. Cognition, 222: 105015. doi:10.1016/j.cognition.2022.105015.

    Abstract

    Conversational turn taking in humans involves incredibly rapid responding. The timing mechanisms underpinning such responses have been heavily debated, including questions such as who is doing the timing. Similar to findings on rhythmic tapping to a metronome, we show that floor transfer offsets (FTOs) in telephone conversations are serially dependent, such that FTOs are lag-1 negatively autocorrelated. Finding this serial dependence on a turn-by-turn basis (lag-1), rather than on the basis of two or more turns, suggests a counter-adjustment mechanism operating at the level of the dyad in FTOs during telephone conversations, rather than a more individualistic self-adjustment within speakers. This finding, if replicated, has major implications for models describing turn taking, and confirms the joint, dyadic nature of human conversational dynamics. Future research is needed to see how pervasive serial dependencies in FTOs are, for example in richer communicative face-to-face contexts where visual signals affect conversational timing.
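
    Illustrative note (not part of the publication above): the lag-1 serial dependence of floor transfer offsets (FTOs) described in this abstract amounts to correlating each FTO with the next one within a dyad. A minimal Python sketch with made-up values:

        import numpy as np

        def lag1_autocorrelation(x):
            """Pearson correlation between a series and itself shifted by one turn."""
            x = np.asarray(x, dtype=float)
            return np.corrcoef(x[:-1], x[1:])[0, 1]

        # Hypothetical FTOs in milliseconds for one dyad (negative = overlap,
        # positive = gap); long gaps tend to be followed by shorter ones.
        ftos = [220, -80, 150, 40, -120, 300, -60, 90, 10, 180]
        print(f"lag-1 autocorrelation: {lag1_autocorrelation(ftos):.2f}")   # negative here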
  • Pouw, W., & Dixon, J. A. (2022). What you hear and see specifies the perception of a limb-respiratory-vocal act. Proceedings of the Royal Society B: Biological Sciences, 289(1979): 20221026. doi:10.1098/rspb.2022.1026.
  • Pouw, W., Harrison, S. J., & Dixon, J. A. (2022). The importance of visual control and biomechanics in the regulation of gesture-speech synchrony for an individual deprived of proprioceptive feedback of body position. Scientific Reports, 12: 14775. doi:10.1038/s41598-022-18300-x.

    Abstract

    Do communicative actions such as gestures fundamentally differ in their control mechanisms from other actions? Evidence for such fundamental differences comes from a classic gesture-speech coordination experiment performed with a person (IW) with deafferentation (McNeill, 2005). Although IW has lost both his primary source of information about body position (i.e., proprioception) and discriminative touch from the neck down, his gesture-speech coordination has been reported to be largely unaffected, even if his vision is blocked. This is surprising because, without vision, his object-directed actions almost completely break down. We examine the hypothesis that IW’s gesture-speech coordination is supported by the biomechanical effects of gesturing on head posture and speech. We find that when vision is blocked, there are micro-scale increases in gesture-speech timing variability, consistent with IW’s reported experience that gesturing is difficult without vision. Supporting the hypothesis that IW exploits biomechanical consequences of the act of gesturing, we find that: (1) gestures with larger physical impulses co-occur with greater head movement, (2) gesture-speech synchrony relates to larger gesture-concurrent head movements (i.e. for bimanual gestures), (3) when vision is blocked, gestures generate more physical impulse, and (4) moments of acoustic prominence couple more with peaks of physical impulse when vision is blocked. It can be concluded that IW’s gesturing ability is not based on a specialized language-based feedforward control as originally concluded from previous research, but is still dependent on a varied means of recurrent feedback from the body.

    Additional information

    supplementary tables
  • Pouw, W., & Fuchs, S. (2022). Origins of vocal-entangled gesture. Neuroscience and Biobehavioral Reviews, 141: 104836. doi:10.1016/j.neubiorev.2022.104836.

    Abstract

    Gestures during speaking are typically understood in a representational framework: they represent absent or distal states of affairs by means of pointing, resemblance, or symbolic replacement. However, humans also gesture along with the rhythm of speaking, which is amenable to a non-representational perspective. Such a perspective centers on the phenomenon of vocal-entangled gestures and builds on evidence showing that when an upper limb with a certain mass decelerates/accelerates sufficiently, it yields impulses on the body that cascade in various ways into the respiratory–vocal system. It entails a physical entanglement between body motions, respiration, and vocal activities. It is shown that vocal-entangled gestures are realized in infant vocal–motor babbling before any representational use of gesture develops. Similarly, an overview is given of vocal-entangled processes in non-human animals. They can frequently be found in rats, bats, birds, and a range of other species that developed even earlier in the phylogenetic tree. Thus, the origins of human gesture lie in biomechanics, emerging early in ontogeny and running deep in phylogeny.
  • Praamstra, P., Meyer, A. S., & Levelt, W. J. M. (1994). Neurophysiological manifestations of auditory phonological processing: Latency variation of a negative ERP component timelocked to phonological mismatch. Journal of Cognitive Neuroscience, 6(3), 204-219. doi:10.1162/jocn.1994.6.3.204.

    Abstract

    Two experiments examined phonological priming effects on reaction times, error rates, and event-related brain potential (ERP) measures in an auditory lexical decision task. In Experiment 1 related prime-target pairs rhymed, and in Experiment 2 they alliterated (i.e., shared the consonantal onset and vowel). Event-related potentials were recorded in a delayed response task. Reaction times and error rates were obtained both for the delayed and an immediate response task. The behavioral data of Experiment 1 provided evidence for phonological facilitation of word, but not of nonword decisions. The brain potentials were more negative to unrelated than to rhyming word-word pairs between 450 and 700 msec after target onset. This negative enhancement was not present for word-nonword pairs. Thus, the ERP results match the behavioral data. The behavioral data of Experiment 2 provided no evidence for phonological facilitation. However, between 250 and 450 msec after target onset, i.e., considerably earlier than in Experiment 1, brain potentials were more negative for unrelated than for alliterating word-word and word-nonword pairs. It is argued that the ERP effects in the two experiments could be modulations of the same underlying component, possibly the N400. The difference in the timing of the effects is likely to be due to the fact that the shared segments in related stimulus pairs appeared in different word positions in the two experiments.
  • Praamstra, P., Stegeman, D. F., Cools, A. R., Meyer, A. S., & Horstink, M. W. I. M. (1998). Evidence for lateral premotor and parietal overactivity in Parkinson's disease during sequential and bimanual movements: A PET study. Brain, 121, 769-772. doi:10.1093/brain/121.4.769.
  • Praamstra, P., Hagoort, P., Maassen, B., & Crul, T. (1991). Word deafness and auditory cortical function: A case history and hypothesis. Brain, 114, 1197-1225. doi:10.1093/brain/114.3.1197.

    Abstract

    A patient who already had Wernicke's aphasia due to a left temporal lobe lesion suffered a severe deterioration specifically of auditory language comprehension, subsequent to right temporal lobe infarction. A detailed comparison of his new condition with his language status before the second stroke revealed that the newly acquired deficit was limited to tasks related to auditory input. Further investigations demonstrated a speech perceptual disorder, which we analysed as due to deficits both at the level of general auditory processes and at the level of phonetic analysis. We discuss some arguments related to hemisphere specialization of phonetic processing and to the disconnection explanation of word deafness that support the hypothesis of word deafness being generally caused by mixed deficits.
  • Preisig, B., & Hervais-Adelman, A. (2022). The predictive value of individual electric field modeling for transcranial alternating current stimulation induced brain modulation. Frontiers in Cellular Neuroscience, 16: 818703. doi:10.3389/fncel.2022.818703.

    Abstract

    There is considerable individual variability in the reported effectiveness of non-invasive brain stimulation. This variability has often been ascribed to differences in neuroanatomy and resulting differences in the induced electric field inside the brain. In this study, we addressed the question of whether individual differences in the induced electric field can predict the neurophysiological and behavioral consequences of gamma band tACS. In a within-subject experiment, bi-hemispheric gamma band tACS and sham stimulation were applied in alternating blocks to the participants’ superior temporal lobe, while task-evoked auditory brain activity was measured with concurrent functional magnetic resonance imaging (fMRI) and a dichotic listening task. Gamma tACS was applied with different interhemispheric phase lags. In a recent study, we could show that anti-phase tACS (180° interhemispheric phase lag), but not in-phase tACS (0° interhemispheric phase lag), selectively modulates interhemispheric brain connectivity. Using a T1 structural image of each participant’s brain, an individual simulation of the induced electric field was computed. From these simulations, we derived two predictor variables: maximal strength (average of the 10,000 voxels with the largest electric field values) and precision of the electric field (spatial correlation between the electric field and the task-evoked brain activity during sham stimulation). We found considerable variability in the individual strength and precision of the electric fields. Importantly, the strength of the electric field over the right hemisphere predicted individual differences in tACS-induced brain connectivity changes. Moreover, we found in both hemispheres a statistical trend for the effect of electric field strength on tACS-induced BOLD signal changes. In contrast, the precision of the electric field did not predict any neurophysiological measure. Further, neither strength nor precision predicted interhemispheric integration. In conclusion, we found evidence for a dose-response relationship between individual differences in electric fields and tACS-induced activity and connectivity changes in concurrent fMRI. However, the fact that this relationship was stronger in the right hemisphere suggests that the relationship between electric field parameters, neurophysiology, and behavior may be more complex for bi-hemispheric tACS.
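
    Illustrative note (not part of the publication above): the two simulation-derived predictors defined in this abstract are straightforward to compute once voxel-wise maps are available. The Python sketch below uses randomly generated arrays (not the study's simulation pipeline) to show how maximal strength (mean of the 10,000 voxels with the largest electric field values) and precision (spatial correlation between the field and a task-evoked activity map) could be derived.

        import numpy as np

        rng = np.random.default_rng(1)

        # Hypothetical voxel-wise maps within a brain mask, flattened to 1-D.
        n_voxels = 200_000
        efield = rng.gamma(shape=2.0, scale=0.05, size=n_voxels)          # simulated field (V/m)
        task_activity = 0.3 * efield + rng.normal(0.0, 0.05, n_voxels)    # sham-block activation

        # Predictor 1: "maximal strength" = mean of the 10,000 largest field values.
        top_k = 10_000
        strength = np.mean(np.sort(efield)[-top_k:])

        # Predictor 2: "precision" = spatial (Pearson) correlation between the maps.
        precision = np.corrcoef(efield, task_activity)[0, 1]

        print(f"field strength (top {top_k} voxels): {strength:.3f} V/m")
        print(f"precision (spatial correlation):     {precision:.2f}")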
  • Preisig, B., Riecke, L., & Hervais-Adelman, A. (2022). Speech sound categorization: The contribution of non-auditory and auditory cortical regions. NeuroImage, 258: 119375. doi:10.1016/j.neuroimage.2022.119375.

    Abstract

    Which processes in the human brain lead to the categorical perception of speech sounds? Investigation of this question is hampered by the fact that categorical speech perception is normally confounded by acoustic differences in the stimulus. By using ambiguous sounds, however, it is possible to dissociate acoustic from perceptual stimulus representations. Twenty-seven normally hearing individuals took part in an fMRI study in which they were presented with an ambiguous syllable (intermediate between /da/ and /ga/) in one ear and with a disambiguating acoustic feature (the third formant, F3) in the other ear. Multi-voxel pattern searchlight analysis was used to identify brain areas that consistently differentiated between response patterns associated with different syllable reports. By comparing responses to different stimuli with identical syllable reports and identical stimuli with different syllable reports, we disambiguated whether these regions primarily differentiated the acoustics of the stimuli or the syllable report. We found that BOLD activity patterns in left perisylvian regions (STG, SMG), left inferior frontal regions (vMC, IFG, AI), left supplementary motor cortex (SMA/pre-SMA), and right motor and somatosensory regions (M1/S1) represent listeners’ syllable report irrespective of stimulus acoustics. Most of these regions are outside of what is traditionally regarded as auditory or phonological processing areas. Our results indicate that the process of speech sound categorization implicates decision-making mechanisms and auditory-motor transformations.

    Additional information

    figures and table
  • Price, K. M., Wigg, K. G., Eising, E., Feng, Y., Blokland, K., Wilkinson, M., Kerr, E. N., Guger, S. L., Quantitative Trait Working Group of the GenLang Consortium, Fisher, S. E., Lovett, M. W., Strug, L. J., & Barr, C. L. (2022). Hypothesis-driven genome-wide association studies provide novel insights into genetics of reading disabilities. Translational Psychiatry, 12: 495. doi:10.1038/s41398-022-02250-z.

    Abstract

    Reading Disability (RD) is often characterized by difficulties in the phonology of the language. While the molecular mechanisms underlying it are largely undetermined, loci are being revealed by genome-wide association studies (GWAS). In a previous GWAS for word reading (Price, 2020), we observed that top single-nucleotide polymorphisms (SNPs) were located near to or in genes involved in neuronal migration/axon guidance (NM/AG) or loci implicated in autism spectrum disorder (ASD). A prominent theory of RD etiology posits that it involves disturbed neuronal migration, while potential links between RD and ASD have not been extensively investigated. To improve power to identify associated loci, we up-weighted variants involved in NM/AG or ASD, separately, and performed a new Hypothesis-Driven (HD)–GWAS. The approach was applied to a Toronto RD sample and a meta-analysis of the GenLang Consortium. For the Toronto sample (n = 624), no SNPs reached significance; however, by gene-set analysis, the joint contribution of ASD-related genes passed the threshold (p ≈ 1.45 × 10⁻², threshold = 2.5 × 10⁻²). For the GenLang Cohort (n = 26,558), SNPs in DOCK7 and CDH4 showed significant association for the NM/AG hypothesis (sFDR q = 1.02 × 10⁻²). To make the GenLang dataset more similar to Toronto, we repeated the analysis restricting to samples selected for reading/language deficits (n = 4152). In this GenLang selected subset, we found significant association for a locus intergenic between BTG3-C21orf91 for both hypotheses (sFDR q < 9.00 × 10⁻⁴). This study contributes candidate loci to the genetics of word reading. Data also suggest that, although different variants may be involved, alleles implicated in ASD risk may be found in the same genes as those implicated in word reading. This finding is limited to the Toronto sample, suggesting that ascertainment influences genetic associations.
  • Rasenberg, M., Pouw, W., Özyürek, A., & Dingemanse, M. (2022). The multimodal nature of communicative efficiency in social interaction. Scientific Reports, 12: 19111. doi:10.1038/s41598-022-22883-w.

    Abstract

    How does communicative efficiency shape language use? We approach this question by studying it at the level of the dyad, and in terms of multimodal utterances. We investigate whether and how people minimize their joint speech and gesture efforts in face-to-face interactions, using linguistic and kinematic analyses. We zoom in on other-initiated repair—a conversational microcosm where people coordinate their utterances to solve problems with perceiving or understanding. We find that efforts in the spoken and gestural modalities are wielded in parallel across repair turns of different types, and that people repair conversational problems in the most cost-efficient way possible, minimizing the joint multimodal effort for the dyad as a whole. These results are in line with the principle of least collaborative effort in speech and with the reduction of joint costs in non-linguistic joint actions. The results extend our understanding of those coefficiency principles by revealing that they pertain to multimodal utterance design.

    Additional information

    Data and analysis scripts
  • Rasenberg, M., Özyürek, A., Bögels, S., & Dingemanse, M. (2022). The primacy of multimodal alignment in converging on shared symbols for novel referents. Discourse Processes, 59(3), 209-236. doi:10.1080/0163853X.2021.1992235.

    Abstract

    When people establish shared symbols for novel objects or concepts, they have been shown to rely on the use of multiple communicative modalities as well as on alignment (i.e., cross-participant repetition of communicative behavior). Yet these interactional resources have rarely been studied together, so little is known about if and how people combine multiple modalities in alignment to achieve joint reference. To investigate this, we systematically track the emergence of lexical and gestural alignment in a referential communication task with novel objects. Quantitative analyses reveal that people frequently use a combination of lexical and gestural alignment, and that such multimodal alignment tends to emerge earlier compared to unimodal alignment. Qualitative analyses of the interactional contexts in which alignment emerges reveal how people flexibly deploy lexical and gestural alignment (independently, simultaneously or successively) to adjust to communicative pressures.
  • Ravignani, A., & Garcia, M. (2022). A cross-species framework to identify vocal learning abilities in mammals. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 377: 20200394. doi:10.1098/rstb.2020.0394.

    Abstract

    Vocal production learning (VPL) is the experience-driven ability to produce novel vocal signals through imitation or modification of existing vocalizations. A parallel strand of research investigates acoustic allometry, namely how information about body size is conveyed by acoustic signals. Recently, we proposed that deviation from acoustic allometry principles as a result of sexual selection may have been an intermediate step towards the evolution of vocal learning abilities in mammals. Adopting a more hypothesis-neutral stance, here we perform phylogenetic regressions and other analyses further testing a potential link between VPL and being an allometric outlier. We find that multiple species belonging to VPL clades deviate from allometric scaling but in the opposite direction to that expected from size exaggeration mechanisms. In other words, our correlational approach finds an association between VPL and being an allometric outlier. However, the direction of this association, contra our original hypothesis, may indicate that VPL did not necessarily emerge via sexual selection for size exaggeration: VPL clades show higher vocalization frequencies than expected. In addition, our approach allows us to identify species with potential for VPL abilities: we hypothesize that those outliers from acoustic allometry lying above the regression line may be VPL species. Our results may help better understand the cross-species diversity, variability and aetiology of VPL, which among other things is a key underpinning of speech in our species.

    This article is part of the theme issue ‘Voice modulation: from origin and mechanism to social impact (Part II)’.

    Additional information

    Raw data Supplementary material
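
    Illustrative note (not part of the publication above): the notion of an allometric outlier used in this abstract can be sketched as a residual check on a log-log regression of call frequency on body size. The example below uses invented species values and ordinary least squares rather than the phylogenetic regressions applied in the paper; species well above the regression line are flagged as candidate outliers.

        import numpy as np

        # Hypothetical species: body mass (kg) and dominant call frequency (Hz).
        body_mass = np.array([2, 8, 30, 70, 150, 400, 1200, 3000], dtype=float)
        call_freq = np.array([3200, 1800, 900, 650, 420, 260, 700, 95], dtype=float)
        # The 1200 kg species calls much higher than its size predicts (made-up outlier).

        # Acoustic allometry: frequency scales with body size as a power law,
        # so fit log(frequency) ~ log(mass) and inspect the residuals.
        slope, intercept = np.polyfit(np.log(body_mass), np.log(call_freq), 1)
        residuals = np.log(call_freq) - (intercept + slope * np.log(body_mass))

        for mass, res in zip(body_mass, residuals):
            flag = "  <- above the regression line (candidate outlier)" if res > 0.5 else ""
            print(f"mass {mass:7.0f} kg, residual {res:+.2f}{flag}")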
  • Ravignani, A. (2022). Language evolution: Sound meets gesture? [Review of the book From signal to symbol: The evolution of language by R. Planer and K. Sterelny]. Evolutionary Anthropology, 31, 317-318. doi:10.1002/evan.21961.
  • Raviv, L., Lupyan, G., & Green, S. C. (2022). How variability shapes learning and generalization. Trends in Cognitive Sciences, 26(6), 462-483. doi:10.1016/j.tics.2022.03.007.

    Abstract

    Learning is using past experiences to inform new behaviors and actions. Because all experiences are unique, learning always requires some generalization. An effective way of improving generalization is to expose learners to more variable (and thus often more representative) input. More variability tends to make initial learning more challenging, but eventually leads to more general and robust performance. This core principle has been repeatedly rediscovered and renamed in different domains (e.g., contextual diversity, desirable difficulties, variability of practice). Reviewing this basic result as it has been formulated in different domains allows us to identify key patterns, distinguish between different kinds of variability, discuss the roles of varying task-relevant versus irrelevant dimensions, and examine the effects of introducing variability at different points in training.
  • Raviv, L., Peckre, L. R., & Boeckx, C. (2022). What is simple is actually quite complex: A critical note on terminology in the domain of language and communication. Journal of Comparative Psychology, 136(4), 215-220. doi:10.1037/com0000328.

    Abstract

    On the surface, the fields of animal communication and human linguistics have arrived at conflicting theories and conclusions with respect to the effect of social complexity on communicative complexity. For example, an increase in group size is argued to have opposite consequences on human versus animal communication systems: although an increase in human community size leads to some types of language simplification, an increase in animal group size leads to an increase in signal complexity. But do human and animal communication systems really show such a fundamental discrepancy? Our key message is that the tension between these two adjacent fields is the result of (a) a focus on different levels of analysis (namely, signal variation or grammar-like rules) and (b) an inconsistent use of terminology (namely, the terms “simple” and “complex”). By disentangling and clarifying these terms with respect to different measures of communicative complexity, we show that although animal and human communication systems indeed show some contradictory effects with respect to signal variability, they actually display essentially the same patterns with respect to grammar-like structure. This is despite the fact that the definitions of complexity and simplicity are actually aligned for signal variability, but diverge for grammatical structure. We conclude by advocating for the use of more objective and descriptive terms instead of terms such as “complexity,” which can be applied uniformly for human and animal communication systems—leading to comparable descriptions of findings across species and promoting a more productive dialogue between fields.
  • Redl, T., Szuba, A., de Swart, P., Frank, S. L., & de Hoop, H. (2022). Masculine generic pronouns as a gender cue in generic statements. Discourse Processes, 59, 828-845. doi:10.1080/0163853X.2022.2148071.

    Abstract

    An eye-tracking experiment was conducted with speakers of Dutch (N = 84, 36 male), a language that falls between grammatical and natural-gender languages. We tested whether a masculine generic pronoun causes a male bias when used in generic statements—that is, in the absence of a specific referent. We tested two types of generic statements by varying conceptual number, hypothesizing that the pronoun zijn “his” was more likely to cause a male bias with a conceptually singular than a conceptually plural antecedent (e.g., Someone (conceptually singular)/Everyone (conceptually plural) with perfect pitch can tune his instrument quickly). We found male participants to exhibit a male bias but with the conceptually singular antecedent only. Female participants showed no signs of a male bias. The results show that the generically intended masculine pronoun zijn “his” leads to a male bias in conceptually singular generic contexts but that this further depends on participant gender.

    Additional information

    Data availability
  • Reinisch, E., & Bosker, H. R. (2022). Encoding speech rate in challenging listening conditions: White noise and reverberation. Attention, Perception & Psychophysics, 84, 2303-2318. doi:10.3758/s13414-022-02554-8.

    Abstract

    Temporal contrasts in speech are perceived relative to the speech rate of the surrounding context. That is, following a fast context sentence, listeners interpret a given target sound as longer than following a slow context, and vice versa. This rate effect, often referred to as “rate-dependent speech perception,” has been suggested to be the result of a robust, low-level perceptual process, typically examined in quiet laboratory settings. However, speech perception often occurs in more challenging listening conditions. Therefore, we asked whether rate-dependent perception would be (partially) compromised by signal degradation relative to a clear listening condition. Specifically, we tested effects of white noise and reverberation, with the latter specifically distorting temporal information. We hypothesized that signal degradation would reduce the precision of encoding the speech rate in the context and thereby reduce the rate effect relative to a clear context. This prediction was borne out for both types of degradation in Experiment 1, where the context sentences but not the subsequent target words were degraded. However, in Experiment 2, which compared rate effects when contexts and targets were coherent in terms of signal quality, no reduction of the rate effect was found. This suggests that, when confronted with coherently degraded signals, listeners adapt to challenging listening situations, eliminating the difference between rate-dependent perception in clear and degraded conditions. Overall, the present study contributes towards understanding the consequences of different types of listening environments on the functioning of low-level perceptual processes that listeners use during speech perception.

    Additional information

    Data availability
  • Reis, A., Guerreiro, M., & Petersson, K. M. (2003). A sociodemographic and neuropsychological characterization of an illiterate population. Applied Neuropsychology, 10, 191-204. doi:10.1207/s15324826an1004_1.

    Abstract

    The objectives of this article are to characterize the performance and to discuss the performance differences between literate and illiterate participants in a well-defined study population. We describe the participant-selection procedure used to investigate this population. Three groups with similar sociocultural backgrounds living in a relatively homogeneous fishing community in southern Portugal were characterized in terms of socioeconomic and sociocultural background variables and compared on a simple neuropsychological test battery; specifically, a literate group with more than 4 years of education (n = 9), a literate group with 4 years of education (n = 26), and an illiterate group (n = 31) were included in this study. We compare and discuss our results with other similar studies on the effects of literacy and illiteracy. The results indicate that naming and identification of real objects, verbal fluency using ecologically relevant semantic criteria, verbal memory, and orientation are not affected by literacy or level of formal education. In contrast, verbal working memory assessed with digit span, verbal abstraction, long-term semantic memory, and calculation (i.e., multiplication) are significantly affected by the level of literacy. We indicate that it is possible, with proper participant-selection procedures, to exclude general cognitive impairment and to control important sociocultural factors that potentially could introduce bias when studying the specific effects of literacy and level of formal education on cognitive brain function.
  • Reis, A., & Petersson, K. M. (2003). Educational level, socioeconomic status and aphasia research: A comment on Connor et al. (2001)- Effect of socioeconomic status on aphasia severity and recovery. Brain and Language, 87, 449-452. doi:10.1016/S0093-934X(03)00140-8.

    Abstract

    Is there a relation between socioeconomic factors and aphasia severity and recovery? Connor, Obler, Tocco, Fitzpatrick, and Albert (2001) describe correlations between the educational level and socioeconomic status of aphasic subjects with aphasia severity and subsequent recovery. As stated in the introduction by Connor et al. (2001), studies of the influence of educational level and literacy (or illiteracy) on aphasia severity have yielded conflicting results, while no significant link between socioeconomic status and aphasia severity and recovery has been established. In this brief note, we will comment on their findings and conclusions, beginning first with a brief review of literacy and aphasia research, and complexities encountered in these fields of investigation. This serves as a general background to our specific comments on Connor et al. (2001), which will be focusing on methodological issues and the importance of taking normative values in consideration when subjects with different socio-cultural or socio-economic backgrounds are assessed.
  • de Reus, K., Carlson, D., Lowry, A., Gross, S., Garcia, M., Rubio-Garcia, A., Salazar-Casals, A., & Ravignani, A. (2022). Vocal tract allometry in a mammalian vocal learner. Journal of Experimental Biology, 225(8): jeb243766. doi:10.1242/jeb.243766.

    Abstract

    Acoustic allometry occurs when features of animal vocalisations can be predicted from body size measurements. Despite this being considered the norm, allometry sometimes breaks, resulting in species sounding smaller or larger than expected. A recent hypothesis suggests that allometry-breaking animals cluster into two groups: those with anatomical adaptations to their vocal tracts and those capable of learning new sounds (vocal learners). Here we test this hypothesis by probing vocal tract allometry in a proven mammalian vocal learner, the harbour seal (Phoca vitulina). We test whether vocal tract structures and body size scale allometrically in 68 individuals. We find that both body length and body weight accurately predict vocal tract length and one tracheal dimension. Independently, body length predicts vocal fold length while body weight predicts a second tracheal dimension. All vocal tract measures are larger in weaners than in pups and some structures are sexually dimorphic within age classes. We conclude that harbour seals do comply with allometric constraints, lending support to our hypothesis. However, allometry between body size and vocal fold length seems to emerge after puppyhood, suggesting that ontogeny may modulate the anatomy-learning distinction previously hypothesised as clear-cut. Species capable of producing non-allometric signals while their vocal tract scales allometrically, like seals, may then use non-morphological allometry-breaking mechanisms. We suggest that seals, and potentially other vocal learning mammals, may achieve allometry-breaking through developed neural control over their vocal organs.
  • Rinker, T., Papadopoulou, D., Ávila-Varela, D., Bosch, J., Castro, S., Olioumtsevits, K., Pereira Soares, S. M., Wodniecka, Z., & Marinis, T. (2022). Does multilingualism bring benefits?: What do teachers think about multilingualism? The Multilingual Mind: Policy Reports 2022, 3. doi:10.48787/kops/352-2-1m7py02eqd0b56.
  • Roelofs, A. (2003). Shared phonological encoding processes and representations of languages in bilingual speakers. Language and Cognitive Processes, 18(2), 175-204. doi:10.1080/01690960143000515.

    Abstract

    Four form-preparation experiments investigated whether aspects of phonological encoding processes and representations are shared between languages in bilingual speakers. The participants were Dutch–English bilinguals. Experiment 1 showed that the basic rightward incrementality revealed in studies for the first language is also observed for second-language words. In Experiments 2 and 3, speakers were given words to produce that did or did not share onset segments, and that came or did not come from different languages. It was found that when onsets were shared among the response words, those onsets were prepared, even when the words came from different languages. Experiment 4 showed that preparation requires prior knowledge of the segments and that knowledge about their phonological features yields no effect. These results suggest that both first- and second-language words are phonologically planned through the same serial order mechanism and that the representations of segments common to the languages are shared.
  • Roelofs, A., Meyer, A. S., & Levelt, W. J. M. (1998). A case for the lemma/lexeme distinction in models of speaking: Comment on Caramazza and Miozzo (1997). Cognition, 69(2), 219-230. doi:10.1016/S0010-0277(98)00056-0.

    Abstract

    In a recent series of papers, Caramazza and Miozzo [Caramazza, A., 1997. How many levels of processing are there in lexical access? Cognitive Neuropsychology 14, 177-208; Caramazza, A., Miozzo, M., 1997. The relation between syntactic and phonological knowledge in lexical access: evidence from the 'tip-of-the-tongue' phenomenon. Cognition 64, 309-343; Miozzo, M., Caramazza, A., 1997. On knowing the auxiliary of a verb that cannot be named: evidence for the independence of grammatical and phonological aspects of lexical knowledge. Journal of Cognitive Neuropsychology 9, 160-166] argued against the lemma/lexeme distinction made in many models of lexical access in speaking, including our network model [Roelofs, A., 1992. A spreading-activation theory of lemma retrieval in speaking. Cognition 42, 107-142; Levelt, W.J.M., Roelofs, A., Meyer, A.S., 1998. A theory of lexical access in speech production. Behavioral and Brain Sciences, (in press)]. Their case was based on the observations that grammatical class deficits of brain-damaged patients and semantic errors may be restricted to either spoken or written forms and that the grammatical gender of a word and information about its form can be independently available in tip-of-the-tongue states (TOTs). In this paper, we argue that though our model is about speaking, not taking position on writing, extensions to writing are possible that are compatible with the evidence from aphasia and speech errors. Furthermore, our model does not predict a dependency between gender and form retrieval in TOTs. Finally, we argue that Caramazza and Miozzo have not accounted for important parts of the evidence motivating the lemma/lexeme distinction, such as word frequency effects in homophone production, the strict ordering of gender and phoneme access in LRP data, and the chronometric and speech error evidence for the production of complex morphology.
  • Roelofs, A. (2003). Goal-referenced selection of verbal action: Modeling attentional control in the Stroop task. Psychological Review, 110(1), 88-125.

    Abstract

    This article presents a new account of the color-word Stroop phenomenon (J. R. Stroop, 1935) based on an implemented model of word production, WEAVER++ (W. J. M. Levelt, A. Roelofs, & A. S. Meyer, 1999b; A. Roelofs, 1992, 1997c). Stroop effects are claimed to arise from processing interactions within the language-production architecture and explicit goal-referenced control. WEAVER++ successfully simulates 16 classic data sets, mostly taken from the review by C. M. MacLeod (1991), including incongruency, congruency, reverse-Stroop, response-set, semantic-gradient, time-course, stimulus, spatial, multiple-task, manual, bilingual, training, age, and pathological effects. Three new experiments tested the account against alternative explanations. It is shown that WEAVER++ offers a more satisfactory account of the data than other models.
  • Roelofs, A., & Meyer, A. S. (1998). Metrical structure in planning the production of spoken words. Journal of Experimental Psychology: Learning, Memory, and Cognition, 24, 922-939. doi:10.1037/0278-7393.24.4.922.

    Abstract

    According to most models of speech production, the planning of spoken words involves the independent retrieval of segments and metrical frames followed by segment-to-frame association. In some models, the metrical frame includes a specification of the number and ordering of consonants and vowels, but in the word-form encoding by activation and verification (WEAVER) model (A. Roelofs, 1997), the frame specifies only the stress pattern across syllables. In 6 implicit priming experiments, on each trial, participants produced 1 word out of a small set as quickly as possible. In homogeneous sets, the response words shared word-initial segments, whereas in heterogeneous sets, they did not. Priming effects from shared segments depended on all response words having the same number of syllables and stress pattern, but not on their having the same number of consonants and vowels. No priming occurred when the response words had only the same metrical frame but shared no segments. Computer simulations demonstrated that WEAVER accounts for the findings.
  • Roelofs, A. (1998). Rightward incrementality in encoding simple phrasal forms in speech production. Journal of Experimental Psychology: Learning, Memory, and Cognition, 24, 904-921. doi:10.1037/0278-7393.24.4.904.

    Abstract

    This article reports 7 experiments investigating whether utterances are planned in a parallel or rightward incremental fashion during language production. The experiments examined the role of linear order, length, frequency, and repetition in producing Dutch verb–particle combinations. On each trial, participants produced 1 utterance out of a set of 3 as quickly as possible. The responses shared part of their form or not. For particle-initial infinitives, facilitation was obtained when the responses shared the particle but not when they shared the verb. For verb-initial imperatives, however, facilitation was obtained for the verbs but not for the particles. The facilitation increased with length, decreased with frequency, and was independent of repetition. A simple rightward incremental model accounts quantitatively for the results.
  • Rohde, H., & Rubio-Fernández, P. (2022). Color interpretation is guided by informativity expectations, not by world knowledge about colors. Journal of Memory and Language, 127: 104371. doi:10.1016/j.jml.2022.104371.

    Abstract

    When people hear words for objects with prototypical colors (e.g., ‘banana’), they look at objects of the same color (e.g., lemon), suggesting a link in comprehension between objects and their prototypical colors. However, that link does not carry over to production: The experimental record also shows that when people speak, they tend to omit prototypical colors, using color adjectives when it is informative (e.g., when referring to clothes, which have no prototypical color). These findings yield an interesting prediction, which we tested here: while prior work shows that people look at yellow objects when hearing ‘banana’, they should look away from bananas when hearing ‘yellow’. The results of an offline sentence-completion task (N = 100) and an online eye-tracking task (N = 41) confirmed that when presented with truncated color descriptions (e.g., ‘Click on the yellow…’), people anticipate clothing items rather than stereotypical fruits. A corpus analysis ruled out the possibility that this association between color and clothing arises from simple context-free co-occurrence statistics. We conclude that comprehenders make linguistic predictions based not only on what they know about the world (e.g., which objects are yellow) but also on what speakers tend to say about the world (i.e., what content would be informative).

    Additional information

    supplementary data 1
  • Rojas-Berscia, L. M., Lehecka, T., Claassen, S. A., Peute, A. A. K., Escobedo, M. P., Escobedo, S. P., Tangoa, A. H., & Pizango, E. Y. (2022). Embedding in Shawi narrations: A quantitative analysis of embedding in a post-colonial Amazonian indigenous society. Language in Society, 51(3), 427-451. doi:10.1017/S0047404521000634.

    Abstract

    In this article, we provide the first quantitative account of the frequent use of embedding in Shawi, a Kawapanan language spoken in Peruvian Northwestern Amazonia. We collected a corpus of ninety-two Frog Stories (Mayer 1969) from three different field sites in 2015 and 2016. Using the glossed corpus as our data, we conducted a generalised mixed model analysis, where we predicted the use of embedding with several macrosocial variables, such as gender, age, and education level. We show that bilingualism (Amazonian Spanish-Shawi) and education, mostly restricted by complex gender differences in Shawi communities, play a significant role in the establishment of linguistic preferences in narration. Moreover, we argue that the use of embedding reflects the impact of the mestizo society from the nineteenth century until today in Santa Maria de Cahuapanas, reshaping not only Shawi demographics but also linguistic practices.
  • Rothman, J., Bayram, F., DeLuca, V., Di Pisa, G., Duñabeitia, J. A., Gharibi, K., Hao, J., Kolb, N., Kubota, M., Kupisch, T., Laméris, T., Luque, A., Van Osch, B., Pereira Soares, S. M., Prystauka, Y., Tat, D., Tomić, A., Voits, T., & Wulff, S. (2022). Monolingual comparative normativity in bilingualism research is out of “control”: Arguments and alternatives. Applied Psycholinguistics, 44(3), 316-329. doi:10.1017/S0142716422000315.

    Abstract

    Herein, we contextualize, problematize, and offer some insights for moving beyond the problem of monolingual comparative normativity in (psycho)linguistic research on bilingualism. We argue that, in the vast majority of cases, juxtaposing (functional) monolinguals to bilinguals fails to offer what the comparison is supposedly intended to do: meet the standards of empirical control in line with the scientific method. Instead, the default nature of monolingual comparative normativity has historically contributed to inequalities in many facets of bilingualism research and continues to impede progress on multiple levels. Beyond framing our views on the matter, we offer some epistemological considerations and methodological alternatives to this standard practice that improve empirical rigor while fostering increased diversity, inclusivity, and equity in our field.
  • Rowland, C. F., Pine, J. M., Lieven, E. V., & Theakston, A. L. (2003). Determinants of acquisition order in wh-questions: Re-evaluating the role of caregiver speech. Journal of Child Language, 30(3), 609-635. doi:10.1017/S0305000903005695.

    Abstract

    Accounts that specify semantic and/or syntactic complexity as the primary determinant of the order in which children acquire particular words or grammatical constructions have been highly influential in the literature on question acquisition. One explanation of wh-question acquisition in particular suggests that the order in which English-speaking children acquire wh-questions is determined by two interlocking linguistic factors: the syntactic function of the wh-word that heads the question and the semantic generality (or ‘lightness’) of the main verb (Bloom, Merkin & Wootten, 1982; Bloom, 1991). Another more recent view, however, is that acquisition is influenced by the relative frequency with which children hear particular wh-words and verbs in their input (e.g. Rowland & Pine, 2000). In the present study, over 300 hours of naturalistic data from twelve two- to three-year-old children and their mothers were analysed in order to assess the relative contribution of complexity and input frequency to wh-question acquisition. The analyses revealed, first, that the acquisition order of wh-questions could be predicted successfully from the frequency with which particular wh-words and verbs occurred in the children's input and, second, that syntactic and semantic complexity did not reliably predict acquisition once input frequency was taken into account. These results suggest that the relationship between acquisition and complexity may be a by-product of the high correlation between complexity and the frequency with which mothers use particular wh-words and verbs. We interpret the results in terms of a constructivist view of language acquisition.
  • Rowland, C. F., & Pine, J. M. (2003). The development of inversion in wh-questions: a reply to Van Valin. Journal of Child Language, 30(1), 197-212. doi:10.1017/S0305000902005445.

    Abstract

    Van Valin (Journal of Child Language 29, 2002, 161–75) presents a critique of Rowland & Pine (Journal of Child Language 27, 2000, 157–81) and argues that the wh-question data from Adam (in Brown, A first language, Cambridge, MA, 1973) cannot be explained in terms of input frequencies as we suggest. Instead, he suggests that the data can be more successfully accounted for in terms of Role and Reference Grammar. In this note we re-examine the pattern of inversion and uninversion in Adam's wh-questions and argue that the RRG explanation cannot account for some of the developmental facts it was designed to explain.
  • Rubio-Fernandez, P., Long, M., Shukla, V., Bhatia, V., & Sinha, P. (2022). Visual perspective taking is not automatic in a simplified Dot task: Evidence from newly sighted children, primary school children and adults. Neuropsychologia, 172: 108256. doi:10.1016/j.neuropsychologia.2022.108256.

    Abstract

    In the Dot task, children and adults involuntarily compute an avatar’s visual perspective, which has been interpreted by some as automatic Theory of Mind. This interpretation has been challenged by other researchers arguing that the task reveals automatic attentional orienting. Here we tested a new interpretation of previous findings: the seemingly automatic processes revealed by the Dot task result from the high Executive Control demands of this verification paradigm, which taxes short-term memory and imposes perspective-switching costs. We tested this hypothesis in three experiments conducted in India with newly sighted children (Experiment 1; N = 5; all girls), neurotypical children (Experiment 2; ages 5–10; N = 90; 38 girls) and adults (Experiment 3; N = 30; 18 women) in a highly simplified version of the Dot task. No evidence of automatic perspective-taking was observed, although all groups revealed perspective-taking costs. A newly sighted child and the youngest children in our sample also showed an egocentric bias, which disappeared by age 10, confirming that visual perspective taking develops during the school years. We conclude that the standard Dot task imposes such methodological demands on both children and adults that the alleged evidence of automatic processes (either mindreading or domain general) may simply reveal limitations in Executive Control.

    Additional information

    1-s2.0-S0028393222001154-mmc1.docx
  • Rubio-Fernández, P., Shukla, V., Bhatia, V., Ben-Ami, S., & Sinha, P. (2022). Head turning is an effective cue for gaze following: Evidence from newly sighted individuals, school children and adults. Neuropsychologia, 174: 108330. doi:10.1016/j.neuropsychologia.2022.108330.

    Abstract

    In referential communication, gaze is often interpreted as a social cue that facilitates comprehension and enables word learning. Here we investigated the degree to which head turning facilitates gaze following. We presented participants with static pictures of a man looking at a target object in a first and third block of trials (pre- and post-intervention), while they saw short videos of the same man turning towards the target in the second block of trials (intervention). In Experiment 1, newly sighted individuals (treated for congenital cataracts; N = 8) benefited from the motion cues, both when comparing their initial performance with static gaze cues to their performance with dynamic head turning, and their performance with static cues before and after the videos. In Experiment 2, neurotypical school children (ages 5–10 years; N = 90) and adults (N = 30) also revealed improved performance with motion cues, although most participants had started to follow the static gaze cues before they saw the videos. Our results confirm that head turning is an effective social cue when interpreting new words, offering new insights for a pathways approach to development.
  • Rubio-Fernández, P., Wienholz, A., Ballard, C. M., Kirby, S., & Lieberman, A. M. (2022). Adjective position and referential efficiency in American Sign Language: Effects of adjective semantics, sign type and age of sign exposure. Journal of Memory and Language, 126: 104348. doi:10.1016/j.jml.2022.104348.

    Abstract

    Previous research has pointed at communicative efficiency as a possible constraint on language structure. Here we investigated adjective position in American Sign Language (ASL), a language with relatively flexible word order, to test the incremental efficiency hypothesis, according to which both speakers and signers try to produce efficient referential expressions that are sensitive to the word order of their languages. The results of three experiments using a standard referential communication task confirmed that deaf ASL signers tend to produce absolute adjectives, such as color or material, in prenominal position, while scalar adjectives tend to be produced in prenominal position when expressed as lexical signs, but in postnominal position when expressed as classifiers. Age of ASL exposure also had an effect on referential choice, with early-exposed signers producing more classifiers than late-exposed signers, in some cases. Overall, our results suggest that linguistic, pragmatic and developmental factors affect referential choice in ASL, supporting the hypothesis that communicative efficiency is an important factor in shaping language structure and use.
  • Rubio-Fernandez, P. (2022). Demonstrative systems: From linguistic typology to social cognition. Cognitive Psychology, 139: 101519. doi:10.1016/j.cogpsych.2022.101519.

    Abstract

    This study explores the connection between language and social cognition by empirically testing different typological analyses of various demonstrative systems. Linguistic typology classifies demonstrative systems as distance-oriented or person-oriented, depending on whether they indicate the location of a referent relative only to the speaker, or to both the speaker and the listener. From the perspective of social cognition, speakers of languages with person-oriented systems must monitor their listener’s spatial location in order to accurately use their demonstratives, while speakers of languages with distance-oriented systems can use demonstratives from their own, egocentric perspective. Resolving an ongoing controversy around the nature of the Spanish demonstrative system, the results of Experiment 1 confirmed that this demonstrative system is person oriented, while the English system is distance oriented. Experiment 2 revealed that not all three-way demonstrative systems are person oriented, with Japanese speakers showing sensitivity to the listener’s spatial location, while Turkish speakers did not show such an effect in their demonstrative choice. In Experiment 3, Catalan-Spanish bilinguals showed sensitivity to listener position in their choice of the Spanish distal form, but not in their choice of the medial form. These results were interpreted as a transfer effect from Catalan, which revealed analogous results to English. Experiment 4 investigated the use of demonstratives to redirect a listener’s attention to the intended referent, which is a universal function of demonstratives that also hinges on social cognition. Japanese and Spanish speakers chose between their proximal and distal demonstratives flexibly, depending on whether the listener was looking closer or further from the referent, whereas Turkish speakers chose their medial form for attention correction. In conclusion, the results of this study support the view that investigating how speakers of different languages jointly use language and social cognition in communication has the potential to unravel the deep connection between these two fundamentally human capacities.
  • Ruggeri, K., Panin, A., Vdovic, M., Većkalov, B., Abdul-Salaam, N., Achterberg, J., Akil, C., Amatya, J., Amatya, K., Andersen, T. L., Aquino, S. D., Arunasalam, A., Ashcroft-Jones, S., Askelund, A. D., Ayacaxli, N., Bagheri Sheshdeh, A., Bailey, A., Barea Arroyo, P., Basulto Mejía, G., Benvenuti, M., Berge, M. L., Bermaganbet, A., Bibilouri, K., Bjørndal, L. D., Black, S., Blomster Lyshol, J. K., Brik, T., Buabang, E. K., Burghart, M., Bursalıoğlu, A., Buzayu, N. M., Čadek, M., De Carvalho, N. M., Cazan, A.-M., Çetinçelik, M., Chai, V. E., Chen, P., Chen, S., Clay, G., D’Ambrogio, S., Damnjanović, K., Duffy, G., Dugue, T., Dwarkanath, T., Envuladu, E. A., Erceg, N., Esteban-Serna, C., Farahat, E., Farrokhnia, R. A., Fawad, M., Fedryansyah, M., Feng, D., Filippi, S., Fonollá, M. A., Freichel, R., Freira, L., Friedemann, M., Gao, Z., Ge, S., Geiger, S. J., George, L., Grabovski, I., Gracheva, A., Gracheva, A., Hajian, A., Hasan, N., Hecht, M., Hong, X., Hubená, B., Ikonomeas, A. G. F., Ilić, S., Izydorczyk, D., Jakob, L., Janssens, M., Jarke, H., Kácha, O., Kalinova, K. N., Kapingura, F. M., Karakasheva, R., Kasdan, D. O., Kemel, E., Khorrami, P., Krawiec, J. M., Lagidze, N., Lazarević, A., Lazić, A., Lee, H. S., Lep, Ž., Lins, S., Lofthus, I. S., Macchia, L., Mamede, S., Mamo, M. A., Maratkyzy, L., Mareva, S., Marwaha, S., McGill, L., McParland, S., Melnic, A., Meyer, S. A., Mizak, S., Mohammed, A., Mukhyshbayeva, A., Navajas, J., Neshevska, D., Niazi, S. J., Nieves, A. E. N., Nippold, F., Oberschulte, J., Otto, T., Pae, R., Panchelieva, T., Park, S. Y., Pascu, D. S., Pavlović, I., Petrović, M. B., Popović, D., Prinz, G. M., Rachev, N. R., Ranc, P., Razum, J., Rho, C. E., Riitsalu, L., Rocca, F., Rosenbaum, R. S., Rujimora, J., Rusyidi, B., Rutherford, C., Said, R., Sanguino, I., Sarikaya, A. K., Say, N., Schuck, J., Shiels, M., Shir, Y., Sievert, E. D. C., Soboleva, I., Solomonia, T., Soni, S., Soysal, I., Stablum, F., Sundström, F. T. A., Tang, X., Tavera, F., Taylor, J., Tebbe, A.-L., Thommesen, K. K., Tobias-Webb, J., Todsen, A. L., Toscano, F., Tran, T., Trinh, J., Turati, A., Ueda, K., Vacondio, M., Vakhitov, V., Valencia, A. J., Van Reyn, C., Venema, T. A. G., Verra, S. E., Vintr, J., Vranka, M. A., Wagner, L., Wu, X., Xing, K. Y., Xu, K., Xu, S., Yamada, Y., Yosifova, A., Zupan, Z., & García-Garzon, E. (2022). The globalizability of temporal discounting. Nature Human Behaviour, 6, 1386-1397. doi:10.1038/s41562-022-01392-w.

    Abstract

    Economic inequality is associated with preferences for smaller, immediate gains over larger, delayed ones. Such temporal discounting may feed into rising global inequality, yet it is unclear whether it is a function of choice preferences or norms, or rather the absence of sufficient resources for immediate needs. It is also not clear whether these reflect true differences in choice patterns between income groups. We tested temporal discounting and five intertemporal choice anomalies using local currencies and value standards in 61 countries (N = 13,629). Across a diverse sample, we found consistent, robust rates of choice anomalies. Lower-income groups were not significantly different, but economic inequality and broader financial circumstances were clearly correlated with population choice patterns.
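
    The notion of temporal discounting summarized above can be made concrete with a worked example. The sketch below is purely illustrative (it is not taken from the study, and the discount-rate values are arbitrary assumptions): it contrasts exponential discounting, V = A * exp(-r * D), with hyperbolic discounting, V = A / (1 + k * D), the two standard ways of formalizing how the present value of a delayed reward shrinks with delay D.

        # Illustrative sketch only, not the study's analysis code.
        # Exponential discounting: V = A * exp(-r * D); hyperbolic: V = A / (1 + k * D).
        # The rate parameters are arbitrary assumptions chosen for demonstration.
        import math

        def exponential_value(amount, delay_days, daily_rate=0.004):
            return amount * math.exp(-daily_rate * delay_days)

        def hyperbolic_value(amount, delay_days, k=0.004):
            return amount / (1 + k * delay_days)

        # A respondent who prefers 60 now over 100 in a year is discounting steeply.
        for delay in (0, 30, 180, 365):
            print(delay,
                  round(exponential_value(100, delay), 1),
                  round(hyperbolic_value(100, delay), 1))

    Intertemporal choice anomalies, such as the magnitude effect, appear as systematic deviations from either curve.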
  • De Ruiter, J. P., Rossignol, S., Vuurpijl, L., Cunningham, D. W., & Levelt, W. J. M. (2003). SLOT: A research platform for investigating multimodal communication. Behavior Research Methods, Instruments, & Computers, 35(3), 408-419.

    Abstract

    In this article, we present the spatial logistics task (SLOT) platform for investigating multimodal communication between 2 human participants. Presented are the SLOT communication task and the software and hardware that have been developed to run SLOT experiments and record the participants’ multimodal behavior. SLOT offers a high level of flexibility in varying the context of the communication and is particularly useful in studies of the relationship between pen gestures and speech. We illustrate the use of the SLOT platform by discussing the results of some early experiments. The first is an experiment on negotiation with a one-way mirror between the participants, and the second is an exploratory study of automatic recognition of spontaneous pen gestures. The results of these studies demonstrate the usefulness of the SLOT platform for conducting multimodal communication research in both human–human and human–computer interactions.
  • Sainburg, T., Mai, A., & Gentner, T. Q. (2022). Long-range sequential dependencies precede complex syntactic production in language acquisition. Proceedings of the Royal Society B: Biological Sciences, 289: 20212657. doi:10.1098/rspb.2021.2657.

    Abstract

    To convey meaning, human language relies on hierarchically organized, long-range relationships spanning words, phrases, sentences and discourse. As the distances between elements (e.g. phonemes, characters, words) in human language sequences increase, the strength of the long-range relationships between those elements decays following a power law. This power-law relationship has been attributed variously to long-range sequential organization present in human language syntax, semantics and discourse structure. However, non-linguistic behaviours in numerous phylogenetically distant species, ranging from humpback whale song to fruit fly motility, also demonstrate similar long-range statistical dependencies. Therefore, we hypothesized that long-range statistical dependencies in human speech may occur independently of linguistic structure. To test this hypothesis, we measured long-range dependencies in several speech corpora from children (aged 6 months–12 years). We find that adult-like power-law statistical dependencies are present in human vocalizations at the earliest detectable ages, prior to the production of complex linguistic structure. These linguistic structures cannot, therefore, be the sole cause of long-range statistical dependencies in language.
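
    As a rough illustration of the kind of measurement described above (and not the authors' own analysis pipeline), the sketch below estimates the mutual information between symbols separated by a distance d in a toy character sequence and derives a crude power-law exponent from its decay. The toy text and function name are invented for this example; real analyses of child speech corpora would operate on phoneme or word sequences and use bias-corrected estimators.

        # Minimal, hypothetical sketch of quantifying long-range statistical
        # dependencies: estimate mutual information (MI) between symbols d
        # positions apart, then check whether MI decays roughly as a power law.
        import math
        from collections import Counter

        def mutual_information_at_distance(sequence, d):
            """Plug-in MI estimate (in bits) between elements d positions apart."""
            pairs = list(zip(sequence, sequence[d:]))
            pair_counts = Counter(pairs)
            x_counts = Counter(x for x, _ in pairs)
            y_counts = Counter(y for _, y in pairs)
            n = len(pairs)
            mi = 0.0
            for (x, y), c in pair_counts.items():
                p_xy = c / n
                mi += p_xy * math.log2(p_xy / ((x_counts[x] / n) * (y_counts[y] / n)))
            return mi

        # Toy data: characters stand in for phonemes or words.
        text = "the cat sat on the mat and then the dog sat on the log " * 40
        distances = [1, 2, 4, 8, 16, 32, 64]
        mi_values = [mutual_information_at_distance(text, d) for d in distances]

        # Under MI(d) ~ c * d**(-alpha), log MI is linear in log d; a two-point
        # slope between the smallest and largest distances gives a crude exponent.
        alpha = -math.log(mi_values[-1] / mi_values[0]) / math.log(distances[-1] / distances[0])
        for d, mi in zip(distances, mi_values):
            print(f"d={d:3d}  MI={mi:.3f}")
        print(f"approximate power-law exponent: {alpha:.2f}")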
  • Salazar-Casals, A., de Reus, K., Greskewitz, N., Havermans, J., Geut, M., Villanueva, S., & Rubio-Garcia, A. (2022). Increased incidence of entanglements and ingested marine debris in Dutch seals from 2010 to 2020. Oceans, 3(3), 389-400. doi:10.3390/oceans3030026.

    Abstract

    In recent decades, the amount of marine debris has increased in our oceans. As wildlife interactions with debris increase, so does the number of entangled animals, impairing normal behavior and potentially affecting the survival of these individuals. The current study summarizes data on two phocid species, harbor (Phoca vitulina) and gray seals (Halichoerus grypus), affected by marine debris in Dutch waters from 2010 to 2020. The findings indicate that the annual entanglement rate (13.2 entanglements/year) has quadrupled compared with previous studies. Young seals, particularly gray seals, are the most affected individuals, with most animals found or sighted with fishing nets wrapped around their necks. Interestingly, harbor seals showed a higher incidence of ingested debris. Species differences with regard to behavior, foraging strategies, and habitat preferences may explain these findings. The lack of consistency across reports suggests that it is important to standardize data collection from now on. Despite increased public awareness about the adverse environmental effects of marine debris, more initiatives and policies are needed to ensure the protection of the marine environment in the Netherlands.
  • Salverda, A. P., Dahan, D., & McQueen, J. M. (2003). The role of prosodic boundaries in the resolution of lexical embedding in speech comprehension. Cognition, 90(1), 51-89. doi:10.1016/S0010-0277(03)00139-2.

    Abstract

    Participants' eye movements were monitored as they heard sentences and saw four pictured objects on a computer screen. Participants were instructed to click on the object mentioned in the sentence. There were more transitory fixations to pictures representing monosyllabic words (e.g. ham) when the first syllable of the target word (e.g. hamster) had been replaced by a recording of the monosyllabic word than when it came from a different recording of the target word. This demonstrates that a phonemically identical sequence can contain cues that modulate its lexical interpretation. This effect was governed by the duration of the sequence, rather than by its origin (i.e. which type of word it came from). The longer the sequence, the more monosyllabic-word interpretations it generated. We argue that cues to lexical-embedding disambiguation, such as segmental lengthening, result from the realization of a prosodic boundary that often but not always follows monosyllabic words, and that lexical candidates whose word boundaries are aligned with prosodic boundaries are favored in the word-recognition process.
  • Scharenborg, O., ten Bosch, L., Boves, L., & Norris, D. (2003). Bridging automatic speech recognition and psycholinguistics: Extending Shortlist to an end-to-end model of human speech recognition [Letter to the editor]. Journal of the Acoustical Society of America, 114, 3032-3035. doi:10.1121/1.1624065.

    Abstract

    This letter evaluates potential benefits of combining human speech recognition (HSR) and automatic speech recognition by building a joint model of an automatic phone recognizer (APR) and a computational model of HSR, viz., Shortlist [Norris, Cognition 52, 189–234 (1994)]. Experiments based on “real-life” speech highlight critical limitations posed by some of the simplifying assumptions made in models of human speech recognition. These limitations could be overcome by avoiding hard phone decisions at the output side of the APR, and by using a match between the input and the internal lexicon that flexibly copes with deviations from canonical phonemic representations.
  • Scharenborg, O., Ten Bosch, L., & Boves, L. (2003). ‘Early recognition’ of words in continuous speech. Automatic Speech Recognition and Understanding, 2003 IEEE Workshop, 61-66. doi:10.1109/ASRU.2003.1318404.

    Abstract

    In this paper, we present an automatic speech recognition (ASR) system based on the combination of an automatic phone recogniser and a computational model of human speech recognition – SpeM – that is capable of computing ‘word activations’ during the recognition process, in addition to doing normal speech recognition, a task in which conventional ASR architectures only provide output after the end of an utterance. We explain the notion of word activation and show that it can be used for ‘early recognition’, i.e. recognising a word before the end of the word is available. Our ASR system was tested on 992 continuous speech utterances, each containing at least one target word: a city name of at least two syllables. The results show that early recognition was obtained for 72.8% of the target words that were recognised correctly. Also, it is shown that word activation can be used as an effective confidence measure.
  • Schiller, N. O., Münte, T. F., Horemans, I., & Jansma, B. M. (2003). The influence of semantic and phonological factors on syntactic decisions: An event-related brain potential study. Psychophysiology, 40(6), 869-877. doi:10.1111/1469-8986.00105.

    Abstract

    During language production and comprehension, information about a word's syntactic properties is sometimes needed. While the decision about the grammatical gender of a word requires access to syntactic knowledge, it has also been hypothesized that semantic (i.e., biological gender) or phonological information (i.e., sound regularities) may influence this decision. Event-related potentials (ERPs) were measured while native speakers of German processed written words that were or were not semantically and/or phonologically marked for gender. Behavioral and ERP results showed that participants were faster in making a gender decision when words were semantically and/or phonologically gender marked than when this was not the case, although the phonological effects were less clear. In conclusion, our data provide evidence that even though participants performed a grammatical gender decision, this task can be influenced by semantic and phonological factors.
  • Schiller, N. O., Bles, M., & Jansma, B. M. (2003). Tracking the time course of phonological encoding in speech production: An event-related brain potential study on internal monitoring. Cognitive Brain Research, 17(3), 819-831. doi:10.1016/S0926-6410(03)00204-0.

    Abstract

    This study investigated the time course of phonological encoding during speech production planning. Previous research has shown that conceptual/semantic information precedes syntactic information in the planning of speech production and that syntactic information is available earlier than phonological information. Here, we studied the relative time courses of the two different processes within phonological encoding, i.e. metrical encoding and syllabification. According to one prominent theory of language production, metrical encoding involves the retrieval of the stress pattern of a word, while syllabification is carried out to construct the syllabic structure of a word. However, the relative timing of these two processes is underspecified in the theory. We employed an implicit picture naming task and recorded event-related brain potentials to obtain fine-grained temporal information about metrical encoding and syllabification. Results revealed that both tasks generated effects that fall within the time window of phonological encoding. However, there was no timing difference between the two effects, suggesting that they occur approximately at the same time.
  • Schiller, N. O., & Caramazza, A. (2003). Grammatical feature selection in noun phrase production: Evidence from German and Dutch. Journal of Memory and Language, 48(1), 169-194. doi:10.1016/S0749-596X(02)00508-9.

    Abstract

    In this study, we investigated grammatical feature selection during noun phrase production in German and Dutch. More specifically, we studied the conditions under which different grammatical genders select either the same or different determiners or suffixes. Pictures of one or two objects paired with a gender-congruent or a gender-incongruent distractor word were presented. Participants named the pictures using a singular or plural noun phrase with the appropriate determiner and/or adjective in German or Dutch. Significant effects of gender congruency were only obtained in the singular condition where the selection of determiners is governed by the target’s gender, but not in the plural condition where the determiner is identical for all genders. When different suffixes were to be selected in the gender-incongruent condition, no gender congruency effect was obtained. The results suggest that the so-called gender congruency effect is really a determiner congruency effect. The overall pattern of results is interpreted as indicating that grammatical feature selection is an automatic consequence of lexical node selection and therefore not subject to interference from other grammatical features. This implies that lexical node and grammatical feature selection operate with distinct principles.
  • Schiller, N. O. (1998). The effect of visually masked syllable primes on the naming latencies of words and pictures. Journal of Memory and Language, 39, 484-507. doi:10.1006/jmla.1998.2577.

    Abstract

    To investigate the role of the syllable in Dutch speech production, five experiments were carried out to examine the effect of visually masked syllable primes on the naming latencies for written words and pictures. Targets had clear syllable boundaries and began with a CV syllable (e.g., ka.no) or a CVC syllable (e.g., kak.tus), or had ambiguous syllable boundaries and began with a CV[C] syllable (e.g., ka[pp]er). In the syllable match condition, bisyllabic Dutch nouns or verbs were preceded by primes that were identical to the target’s first syllable. In the syllable mismatch condition, the prime was either shorter or longer than the target’s first syllable. A neutral condition was also included. None of the experiments showed a syllable priming effect. Instead, all related primes facilitated the naming of the targets. It is concluded that the syllable does not play a role in the process of phonological encoding in Dutch. Because the amount of facilitation increased with increasing overlap between prime and target, the priming effect is accounted for by a segmental overlap hypothesis.
  • Schlag, F., Allegrini, A. G., Buitelaar, J., Verhoef, E., Van Donkelaar, M. M. J., Plomin, R., Rimfeld, K., Fisher, S. E., & St Pourcain, B. (2022). Polygenic risk for mental disorder reveals distinct association profiles across social behaviour in the general population. Molecular Psychiatry, 27, 1588-1598. doi:10.1038/s41380-021-01419-0.

    Abstract

    Many mental health conditions present a spectrum of social difficulties that overlaps with social behaviour in the general population including shared but little characterised genetic links. Here, we systematically investigate heterogeneity in shared genetic liabilities with attention-deficit/hyperactivity disorder (ADHD), autism spectrum disorders (ASD), bipolar disorder (BP), major depression (MD) and schizophrenia across a spectrum of different social symptoms. Longitudinally assessed low-prosociality and peer-problem scores in two UK population-based cohorts (4–17 years; parent- and teacher-reports; Avon Longitudinal Study of Parents and Children (ALSPAC): N ≤ 6,174; Twins Early Development Study (TEDS): N ≤ 7,112) were regressed on polygenic risk scores for disorder, as informed by genome-wide summary statistics from large consortia, using negative binomial regression models. Across ALSPAC and TEDS, we replicated univariate polygenic associations between social behaviour and risk for ADHD, MD and schizophrenia. Modelling variation in univariate genetic effects jointly using random-effect meta-regression revealed evidence for polygenic links between social behaviour and ADHD, ASD, MD, and schizophrenia risk, but not BP. Differences in age, reporter and social trait captured 45–88% in univariate effect variation. Cross-disorder adjusted analyses demonstrated that age-related heterogeneity in univariate effects is shared across mental health conditions, while reporter- and social trait-specific heterogeneity captures disorder-specific profiles. In particular, ADHD, MD, and ASD polygenic risk were more strongly linked to peer problems than low prosociality, while schizophrenia was associated with low prosociality only. The identified association profiles suggest differences in the social genetic architecture across mental disorders when investigating polygenic overlap with population-based social symptoms spanning 13 years of child and adolescent development.
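
    As a purely hypothetical sketch of the modelling approach named above (regressing count-like social-behaviour scores on polygenic risk scores with negative binomial models), and not the study's actual analysis code, the example below fits such a model to simulated data with statsmodels; the variable names, sample size and effect sizes are invented for illustration.

        # Hypothetical sketch with simulated data, not the study's analysis.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        n = 2000
        df = pd.DataFrame({
            "prs_adhd": rng.standard_normal(n),      # standardized polygenic risk score
            "age": rng.integers(4, 18, size=n),      # assessment age in years
        })
        # Simulated peer-problem counts with a small positive PRS effect.
        expected = np.exp(0.3 + 0.10 * df["prs_adhd"] + 0.02 * df["age"])
        df["peer_problems"] = rng.poisson(expected)

        # Negative binomial regression of the social score on polygenic risk and age.
        model = smf.negativebinomial("peer_problems ~ prs_adhd + age", data=df).fit(disp=False)
        print(model.params)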
  • Schoenmakers, G.-J., Poortvliet, M., & Schaeffer, J. (2022). Topicality and anaphoricity in Dutch scrambling. Natural Language & Linguistic Theory, 40, 541-571. doi:10.1007/s11049-021-09516-z.

    Abstract

    Direct objects in Dutch can precede or follow adverbs, a phenomenon commonly referred to as scrambling. The linguistic literature agrees in its assumption that scrambling is regulated by the topicality and anaphoricity status of definite objects, but theories vary as to what kinds of objects exactly are predicted to scramble. This study reports experimental data from a sentence completion experiment with adult native speakers of Dutch, showing that topics are scrambled more often than foci, and that anaphoric objects are scrambled more often than non-anaphoric objects. However, while the data provide support for the assumption that topicality and anaphoricity play an important role in scrambling, they also indicate that the discourse status of the object in and of itself cannot explain the full scrambling variation.
  • Schubotz, L., Özyürek, A., & Holler, J. (2022). Individual differences in working memory and semantic fluency predict younger and older adults' multimodal recipient design in an interactive spatial task. Acta Psychologica, 229: 103690. doi:10.1016/j.actpsy.2022.103690.

    Abstract

    Aging appears to impair the ability to adapt speech and gestures based on knowledge shared with an addressee (common ground-based recipient design) in narrative settings. Here, we test whether this extends to spatial settings and is modulated by cognitive abilities. Younger and older adults gave instructions on how to assemble 3D-models from building blocks on six consecutive trials. We induced mutually shared knowledge by either showing speaker and addressee the model beforehand, or not. Additionally, shared knowledge accumulated across the trials. Younger and crucially also older adults provided recipient-designed utterances, indicated by a significant reduction in the number of words and of gestures when common ground was present. Additionally, we observed a reduction in semantic content and a shift in cross-modal distribution of information across trials. Rather than age, individual differences in verbal and visual working memory and semantic fluency predicted the extent of addressee-based adaptations. Thus, in spatial tasks, individual cognitive abilities modulate the interactive language use of both younger and older adults.

    Additional information

    1-s2.0-S0001691822002050-mmc1.docx
  • Scott, D. R., & Cutler, A. (1984). Segmental phonology and the perception of syntactic structure. Journal of Verbal Learning and Verbal Behavior, 23, 450-466. Retrieved from http://www.sciencedirect.com/science//journal/00225371.

    Abstract

    Recent research in speech production has shown that syntactic structure is reflected in segmental phonology--the application of certain phonological rules of English (e.g., palatalization and alveolar flapping) is inhibited across phrase boundaries. We examined whether such segmental effects can be used in speech perception as cues to syntactic structure, and the relation between the use of these segmental features as syntactic markers in production and perception. Speakers of American English (a dialect in which the above segmental effects occur) could indeed use the segmental cues in syntax perception; speakers of British English (in which the effects do not occur) were unable to make use of them, while speakers of British English who were long-term residents of the United States showed intermediate performance.
  • Seifart, F. (2003). Marqueurs de classe généraux et spécifiques en Miraña [General and specific class markers in Miraña]. Faits de Langues, 21, 121-132.
  • Senft, G. (1998). Body and mind in the Trobriand Islands. Ethos, 26, 73-104. doi:10.1525/eth.1998.26.1.73.

    Abstract

    This article discusses how the Trobriand Islanders speak about body and mind. It addresses the following questions: do the linguistic data fit into theories about lexical universals of body-part terminology? Can we make inferences about the Trobrianders' conceptualization of psychological and physical states on the basis of these data? If a Trobriand Islander sees these idioms as external manifestations of inner states, then can we interpret them as a kind of ethnopsychological theory about the body and its role for emotions, knowledge, thought, memory, and so on? Can these idioms be understood as representation of Trobriand ethnopsychological theory?
