Publications

  • McConnell, K., & Blumenthal-Dramé, A. (2022). Effects of task and corpus-derived association scores on the online processing of collocations. Corpus Linguistics and Linguistic Theory, 18, 33-76. doi:10.1515/cllt-2018-0030.

    Abstract

    In the following self-paced reading study, we assess the cognitive realism of six widely used corpus-derived measures of association strength between words (collocated modifier–noun combinations like vast majority): MI, MI3, Dice coefficient, T-score, Z-score, and log-likelihood. The ability of these collocation metrics to predict reading times is tested against predictors of lexical processing cost that are widely established in the psycholinguistic and usage-based literature, respectively: forward/backward transition probability and bigram frequency. In addition, the experiment includes the treatment variable of task: it is split into two blocks which only differ in the format of interleaved comprehension questions (multiple choice vs. typed free response). Results show that the traditional corpus-linguistic metrics are outperformed by both backward transition probability and bigram frequency. Moreover, the multiple-choice condition elicits faster overall reading times than the typed condition, and the two winning metrics show stronger facilitation on the critical word (i.e. the noun in the bigrams) in the multiple-choice condition. In the typed condition, we find an effect that is weaker and, in the case of bigram frequency, longer lasting, continuing into the first spillover word. We argue that insufficient attention to task effects might have obscured the cognitive correlates of association scores in earlier research.
  • McCurdy, R., Clough, S., Edwards, M., & Duff, M. (2022). The lesion method: What individual patients can teach us about the brain. Frontiers for Young Minds, 10: 869030. doi:10.3389/frym.2022.869030.

    Abstract

    Scientists who study the brain try to understand how it performs everyday behaviors like language, memory, and emotion. Scientists learn a lot by studying how these behaviors change when the brain is damaged. Over the past 200 years, they have made many discoveries by studying individuals with brain damage. For example, one patient could not form sentences after damaging a specific area of his brain. The scientist who studied him concluded that the damaged brain area was important for producing speech. This approach is called the lesion method, and it has taught us a lot about the brain. In this article, we introduce five patients throughout history who forever changed our understanding of the brain. We describe how researchers use these early discoveries to ask new questions about the brain, and we conclude by discussing how the lesion method is used today.
  • McQueen, J. M., Cutler, A., Briscoe, T., & Norris, D. (1995). Models of continuous speech recognition and the contents of the vocabulary. Language and Cognitive Processes, 10, 309-331. doi:10.1080/01690969508407098.

    Abstract

    Several models of spoken word recognition postulate that recognition is achieved via a process of competition between lexical hypotheses. Competition not only provides a mechanism for isolated word recognition, it also assists in continuous speech recognition, since it offers a means of segmenting continuous input into individual words. We present statistics on the pattern of occurrence of words embedded in the polysyllabic words of the English vocabulary, showing that an overwhelming majority (84%) of polysyllables have shorter words embedded within them. Positional analyses show that these embeddings are most common at the onsets of the longer words. Although both phonological and syntactic constraints could rule out some embedded words, they do not remove the problem. Lexical competition provides a means of dealing with lexical embedding. It is also supported by a growing body of experimental evidence. We present results which indicate that competition operates both between word candidates that begin at the same point in the input and candidates that begin at different points (McQueen, Norris, & Cutler, 1994, Norris, McQueen, & Cutler, in press). We conclude that lexical competition is an essential component in models of continuous speech recognition.
  • McQueen, J. M., Norris, D., & Cutler, A. (1999). Lexical influence in phonetic decision-making: Evidence from subcategorical mismatches. Journal of Experimental Psychology: Human Perception and Performance, 25, 1363-1389. doi:10.1037/0096-1523.25.5.1363.

    Abstract

    In 5 experiments, listeners heard words and nonwords, some cross-spliced so that they contained acoustic-phonetic mismatches. Performance was worse on mismatching than on matching items. Words cross-spliced with words and words cross-spliced with nonwords produced parallel results. However, in lexical decision and 1 of 3 phonetic decision experiments, performance on nonwords cross-spliced with words was poorer than on nonwords cross-spliced with nonwords. A gating study confirmed that there were misleading coarticulatory cues in the cross-spliced items; a sixth experiment showed that the earlier results were not due to interitem differences in the strength of these cues. Three models of phonetic decision making (the Race model, the TRACE model, and a postlexical model) did not explain the data. A new bottom-up model is outlined that accounts for the findings in terms of lexical involvement at a dedicated decision-making stage.
  • Mehler, J., & Cutler, A. (1990). Psycholinguistic implications of phonological diversity among languages. In M. Piattelli-Palmerini (Ed.), Cognitive science in Europe: Issues and trends (pp. 119-134). Rome: Golem.
  • Mekki, Y., Guillemot, V., Lemaître, H., Carrión-Castillo, A., Forkel, S. J., Frouin, V., & Philippe, C. (2022). The genetic architecture of language functional connectivity. NeuroImage, 249: 118795. doi:10.1016/j.neuroimage.2021.118795.

    Abstract

    Language is a unique trait of the human species, whose genetic architecture remains largely unknown. Studies of language disorders have identified many candidate genes. However, such a complex and multifactorial trait is unlikely to be driven by only a few genes, and case-control studies, suffering from a lack of power, struggle to uncover significant variants. In parallel, neuroimaging has significantly contributed to the understanding of structural and functional aspects of language in the human brain, and the recent availability of large-scale cohorts like UK Biobank has made it possible to study language via image-derived endophenotypes in the general population. Because of its strong relationship with task-based fMRI (tbfMRI) activations and its ease of acquisition, resting-state functional MRI (rsfMRI) has become more widely used, making it a good surrogate of functional neuronal processes. Taking advantage of such a synergistic system by aggregating effects across spatially distributed traits, we performed a multivariate genome-wide association study (mvGWAS) between genetic variations and resting-state functional connectivity (FC) of classical brain language areas in the inferior frontal (pars opercularis, triangularis and orbitalis), temporal and inferior parietal lobes (angular and supramarginal gyri), in 32,186 participants from UK Biobank. Twenty genomic loci were found associated with language FCs, of which three were replicated in an independent replication sample. A locus in 3p11.1, regulating EPHA3 gene expression, is associated with FCs of the semantic component of the language network, while a locus in 15q14, regulating THBS1 gene expression, is associated with FCs of perceptual-motor language processing, bringing novel insights into the neurobiology of language.
  • Menks, W. M., Ekerdt, C., Janzen, G., Kidd, E., Lemhöfer, K., Fernández, G., & McQueen, J. M. (2022). Study protocol: A comprehensive multi-method neuroimaging approach to disentangle developmental effects and individual differences in second language learning. BMC Psychology, 10: 169. doi:10.1186/s40359-022-00873-x.

    Abstract

    Background

    While it is well established that second language (L2) learning success changes with age and across individuals, the underlying neural mechanisms responsible for this developmental shift and these individual differences are largely unknown. We will study the behavioral and neural factors that subserve new grammar and word learning in a large cross-sectional developmental sample. This study falls under the NWO (Nederlandse Organisatie voor Wetenschappelijk Onderzoek [Dutch Research Council]) Language in Interaction consortium (website: https://www.languageininteraction.nl/).
    Methods

    We will sample 360 healthy individuals across a broad age range between 8 and 25 years. In this paper, we describe the study design and protocol, which involves multiple study visits covering a comprehensive behavioral battery and extensive magnetic resonance imaging (MRI) protocols. On the basis of these measures, we will create behavioral and neural fingerprints that capture age-based and individual variability in new language learning. The behavioral fingerprint will be based on first and second language proficiency, memory systems, and executive functioning. We will map the neural fingerprint for each participant using the following MRI modalities: T1‐weighted, diffusion-weighted, resting-state functional MRI, and multiple functional-MRI paradigms. With respect to the functional MRI measures, half of the sample will learn grammatical features and half will learn words of a new language. Combining all individual fingerprints allows us to explore the neural maturation effects on grammar and word learning.
    Discussion

    This will be one of the largest neuroimaging studies to date that investigates the developmental shift in L2 learning covering preadolescence to adulthood. Our comprehensive approach of combining behavioral and neuroimaging data will contribute to the understanding of the mechanisms influencing this developmental shift and individual differences in new language learning. We aim to answer: (I) do these fingerprints differ according to age and can these explain the age-related differences observed in new language learning? And (II) which aspects of the behavioral and neural fingerprints explain individual differences (across and within ages) in grammar and word learning? The results of this study provide a unique opportunity to understand how the development of brain structure and function influence new language learning success.
  • Menn, K. H., Ward, E., Braukmann, R., Van den Boomen, C., Buitelaar, J., Hunnius, S., & Snijders, T. M. (2022). Neural tracking in infancy predicts language development in children with and without family history of autism. Neurobiology of Language, 3(3), 495-514. doi:10.1162/nol_a_00074.

    Abstract

    During speech processing, neural activity in non-autistic adults and infants tracks the speech envelope. Recent research in adults indicates that this neural tracking relates to linguistic knowledge and may be reduced in autism. Such reduced tracking, if present already in infancy, could impede language development. In the current study, we focused on children with a family history of autism, who often show a delay in first language acquisition. We investigated whether differences in tracking of sung nursery rhymes during infancy relate to language development and autism symptoms in childhood. We assessed speech-brain coherence at either 10 or 14 months of age in a total of 22 infants with high likelihood of autism due to family history and 19 infants without family history of autism. We analyzed the relationship between speech-brain coherence in these infants and their vocabulary at 24 months as well as autism symptoms at 36 months. Our results showed significant speech-brain coherence in the 10- and 14-month-old infants. We found no evidence for a relationship between speech-brain coherence and later autism symptoms. Importantly, speech-brain coherence in the stressed syllable rate (1–3 Hz) predicted later vocabulary. Follow-up analyses showed evidence for a relationship between tracking and vocabulary only in 10-month-olds but not 14-month-olds and indicated possible differences between the likelihood groups. Thus, early tracking of sung nursery rhymes is related to language development in childhood.
  • Merkx, D., Frank, S. L., & Ernestus, M. (2022). Seeing the advantage: Visually grounding word embeddings to better capture human semantic knowledge. In E. Chersoni, N. Hollenstein, C. Jacobs, Y. Oseki, L. Prévot, & E. Santus (Eds.), Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2022) (pp. 1-11). Stroudsburg, PA, USA: Association for Computational Linguistics (ACL).

    Abstract

    Distributional semantic models capture word-level meaning that is useful in many natural language processing tasks and have even been shown to capture cognitive aspects of word meaning. The majority of these models are purely text based, even though the human sensory experience is much richer. In this paper we create visually grounded word embeddings by combining English text and images and compare them to popular text-based methods, to see if visual information allows our model to better capture cognitive aspects of word meaning. Our analysis shows that visually grounded embedding similarities are more predictive of the human reaction times in a large priming experiment than the purely text-based embeddings. The visually grounded embeddings also correlate well with human word similarity ratings. Importantly, in both experiments we show that the grounded embeddings account for a unique portion of explained variance, even when we include text-based embeddings trained on huge corpora. This shows that visual grounding allows our model to capture information that cannot be extracted using text as the only source of information.
  • Merkx, D. (2022). Modelling multi-modal language learning: From sentences to words. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Meyer, A. S., Ouellet, M., & Häcker, C. (2008). Parallel processing of objects in a naming task. Journal of Experimental Psychology: Learning, Memory, and Cognition, 34, 982-987. doi:10.1037/0278-7393.34.4.982.

    Abstract

    The authors investigated whether speakers who named several objects processed them sequentially or in parallel. Speakers named object triplets, arranged in a triangle, in the order left, right, and bottom object. The left object was easy or difficult to identify and name. During the saccade from the left to the right object, the right object shown at trial onset (the interloper) was replaced by a new object (the target), which the speakers named. Interloper and target were identical or unrelated objects, or they were conceptually unrelated objects with the same name (e.g., bat [animal] and [baseball] bat). The mean duration of the gazes to the target was shorter when interloper and target were identical or had the same name than when they were unrelated. The facilitatory effects of identical and homophonous interlopers were significantly larger when the left object was easy to process than when it was difficult to process. This interaction demonstrates that the speakers processed the left and right objects in parallel.
  • Meyer, A. S., & Bock, K. (1999). Representations and processes in the production of pronouns: Some perspectives from Dutch. Journal of Memory and Language, 41(2), 281-301. doi:10.1006/jmla.1999.2649.

    Abstract

    The production and interpretation of pronouns involves the identification of a mental referent and, in connected speech or text, a discourse antecedent. One of the few overt signals of the relationship between a pronoun and its antecedent is agreement in features such as number and grammatical gender. To examine how speakers create these signals, two experiments tested conceptual, lexical, and morphophonological accounts of pronoun production in Dutch. The experiments employed sentence completion and continuation tasks with materials containing noun phrases that conflicted or agreed in grammatical gender. The noun phrases served as the antecedents for demonstrative pronouns (in Experiment 1) and relative pronouns (in Experiment 2) that required gender marking. Gender errors were used to assess the nature of the processes that established the link between pronouns and antecedents. There were more gender errors when candidate antecedents conflicted in grammatical gender, counter to the predictions of a pure conceptual hypothesis. Gender marking on candidate antecedents did not change the magnitude of this interference effect, counter to the predictions of an overt-morphology hypothesis. Mirroring previous findings about pronoun comprehension, the results suggest that speakers of gender-marking languages call on specific linguistic information about antecedents in order to select pronouns and that the information consists of specifications of grammatical gender associated with the lemmas of words.
  • Meyer, A. S. (1990). The time course of phonological encoding in language production: The encoding of successive syllables of a word. Journal of Memory and Language, 29, 524-545. doi:10.1016/0749-596X(90)90050-A.

    Abstract

    A series of experiments was carried out investigating the time course of phonological encoding in language production, i.e., the question of whether all parts of the phonological form of a word are created in parallel, or whether they are created in a specific order. A speech production task was used in which the subjects in each test trial had to say one out of three or five response words as quickly as possible. In one condition, information was provided about part of the forms of the words to be uttered; in another condition this was not the case. The production of disyllabic words was speeded by information about their first syllable, but not by information about their second syllable. Experiments using trisyllabic words showed that a facilitatory effect could be obtained from information about the second syllable of the words, provided that the first syllable was also known. These findings suggest that the syllables of a word must be encoded strictly sequentially, according to their order in the word.
  • Misersky, J. (2022). About time: Exploring the role of grammatical aspect in event cognition. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Misersky, J., Peeters, D., & Flecken, M. (2022). The potential of immersive virtual reality for the study of event perception. Frontiers in Virtual Reality, 3: 697934. doi:10.3389/frvir.2022.697934.

    Abstract

    In everyday life, we actively engage in different activities from a first-person perspective. However, experimental psychological research in the field of event perception is often limited to relatively passive, third-person computer-based paradigms. In the present study, we tested the feasibility of using immersive virtual reality in combination with eye tracking with participants in active motion. Behavioral research has shown that speakers of aspectual and non-aspectual languages attend to goals (endpoints) in motion events differently, with speakers of non-aspectual languages showing relatively more attention to goals (endpoint bias). In the current study, native speakers of German (non-aspectual) and English (aspectual) walked on a treadmill across 3-D terrains in VR, while their eye gaze was continuously tracked. Participants encountered landmark objects on the side of the road, and potential endpoint objects at the end of it. Using growth curve analysis to analyze fixation patterns over time, we found no differences in eye gaze behavior between German and English speakers. This absence of cross-linguistic differences was also observed in behavioral tasks with the same participants. Methodologically, based on the quality of the data, we conclude that our dynamic eye-tracking setup can be reliably used to study what people look at while moving through rich and dynamic environments that resemble the real world.
  • Mishra, C., & Skantze, G. (2022). Knowing where to look: A planning-based architecture to automate the gaze behavior of social robots. In Proceedings of the 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) (pp. 1201-1208). doi:10.1109/RO-MAN53752.2022.9900740.

    Abstract

    Gaze cues play an important role in human communication and are used to coordinate turn-taking and joint attention, as well as to regulate intimacy. In order to have fluent conversations with people, social robots need to exhibit humanlike gaze behavior. Previous Gaze Control Systems (GCS) in HRI have automated robot gaze using data-driven or heuristic approaches. However, these systems tend to be mainly reactive in nature. Planning the robot gaze ahead of time could help in achieving more realistic gaze behavior and better eye-head coordination. In this paper, we propose and implement a novel planning-based GCS. We evaluate our system in a comparative within-subjects user study (N=26) between a reactive system and our proposed system. The results show that the users preferred the proposed system and that it was significantly more interpretable and better at regulating intimacy.
  • Mitterer, H., & De Ruiter, J. P. (2008). Recalibrating color categories using world knowledge. Psychological Science, 19(7), 629-634. doi:10.1111/j.1467-9280.2008.02133.x.

    Abstract

    When the perceptual system uses color to facilitate object recognition, it must solve the color-constancy problem: The light an object reflects to an observer's eyes confounds properties of the source of the illumination with the surface reflectance of the object. Information from the visual scene (bottom-up information) is insufficient to solve this problem. We show that observers use world knowledge about objects and their prototypical colors as a source of top-down information to improve color constancy. Specifically, observers use world knowledge to recalibrate their color categories. Our results also suggest that similar effects previously observed in language perception are the consequence of a general perceptual process.
  • Mitterer, H., & Ernestus, M. (2008). The link between speech perception and production is phonological and abstract: Evidence from the shadowing task. Cognition, 109(1), 168-173. doi:10.1016/j.cognition.2008.08.002.

    Abstract

    This study reports a shadowing experiment, in which one has to repeat a speech stimulus as fast as possible. We tested claims about a direct link between perception and production based on speech gestures, and obtained two types of counterevidence. First, shadowing is not slowed down by a gestural mismatch between stimulus and response. Second, phonetic detail is more likely to be imitated in a shadowing task if it is phonologically relevant. This is consistent with the idea that speech perception and speech production are only loosely coupled, on an abstract phonological level.
  • Mitterer, H., Yoneyama, K., & Ernestus, M. (2008). How we hear what is hardly there: Mechanisms underlying compensation for /t/-reduction in speech comprehension. Journal of Memory and Language, 59, 133-152. doi:10.1016/j.jml.2008.02.004.

    Abstract

    In four experiments, we investigated how listeners compensate for reduced /t/ in Dutch. Mitterer and Ernestus [Mitterer, H., & Ernestus, M. (2006). Listeners recover /t/s that speakers lenite: evidence from /t/-lenition in Dutch. Journal of Phonetics, 34, 73–103] showed that listeners are biased to perceive a /t/ more easily after /s/ than after /n/, compensating for the tendency of speakers to reduce word-final /t/ after /s/ in spontaneous conversations. We tested the robustness of this phonological context effect in perception with three very different experimental tasks: an identification task, a discrimination task with native listeners and with non-native listeners who do not have any experience with /t/-reduction, and a passive listening task (using electrophysiological dependent measures). The context effect was generally robust against these experimental manipulations, although we also observed some deviations from the overall pattern. Our combined results show that the context effect in compensation for reduced /t/ results from a complex process involving auditory constraints, phonological learning, and lexical constraints.
  • Mitterer, H. (2008). How are words reduced in spontaneous speech? In A. Botonis (Ed.), Proceedings of ISCA Tutorial and Research Workshop On Experimental Linguistics (pp. 165-168). Athens: University of Athens.

    Abstract

    Words are reduced in spontaneous speech. If reductions are constrained by functional (i.e., perception and production) constraints, they should not be arbitrary. This hypothesis was tested by examining the pronunciations of high- to mid-frequency words in a Dutch and a German spontaneous speech corpus. In logistic-regression models the "reduction likelihood" of a phoneme was predicted by fixed-effect predictors such as position within the word, word length, word frequency, and stress, as well as random effects such as phoneme identity and word. The models for Dutch and German show many commonalities. This is in line with the assumption that similar functional constraints influence reductions in both languages.
  • Molz, B., Herbik, A., Baseler, H. A., de Best, P. B., Vernon, R. W., Raz, N., Gouws, A. D., Ahmadi, K., Lowndes, R., McLean, R. J., Gottlob, I., Kohl, S., Choritz, L., Maguire, J., Kanowski, M., Käsmann-Kellner, B., Wieland, I., Banin, E., Levin, N., Hoffmann, M. B., & Morland, A. B. (2022). Structural changes to primary visual cortex in the congenital absence of cone input in achromatopsia. NeuroImage: Clinical, 33: 102925. doi:10.1016/j.nicl.2021.102925.

    Abstract

    Autosomal recessive Achromatopsia (ACHM) is a rare inherited disorder associated with dysfunctional cone photoreceptors resulting in a congenital absence of cone input to visual cortex. This might lead to distinct changes in cortical architecture with a negative impact on the success of gene augmentation therapies. To investigate the status of the visual cortex in these patients, we performed a multi-centre study focusing on the cortical structure of regions that normally receive predominantly cone input. Using high-resolution T1-weighted MRI scans and surface-based morphometry, we compared cortical thickness, surface area and grey matter volume in foveal, parafoveal and paracentral representations of primary visual cortex in 15 individuals with ACHM and 42 normally sighted, healthy controls (HC). In ACHM, surface area was reduced in all tested representations, while thickening of the cortex was found highly localized to the most central representation. These results were comparable to more widespread changes in brain structure reported in congenitally blind individuals, suggesting similar developmental processes, i.e., irrespective of the underlying cause and extent of vision loss. The cortical differences we report here could limit the success of treatment of ACHM in adulthood. Interventions earlier in life when cortical structure is not different from normal would likely offer better visual outcomes for those with ACHM.
  • Montero-Melis, G., Van Paridon, J., Ostarek, M., & Bylund, E. (2022). No evidence for embodiment: The motor system is not needed to keep action words in working memory. Cortex, 150, 108-125. doi:10.1016/j.cortex.2022.02.006.

    Abstract

    Increasing evidence implicates the sensorimotor systems with high-level cognition, but the extent to which these systems play a functional role remains debated. Using an elegant design, Shebani and Pulvermüller (2013) reported that carrying out a demanding rhythmic task with the hands led to selective impairment of working memory for hand-related words (e.g., clap), while carrying out the same task with the feet led to selective memory impairment for foot-related words (e.g., kick). Such a striking double dissociation is acknowledged even by critics to constitute strong evidence for an embodied account of working memory. Here, we report on an attempt at a direct replication of this important finding. We followed a sequential sampling design and stopped data collection at N=77 (more than five times the original sample size), at which point the evidence for the lack of the critical selective interference effect was very strong (BF01 = 91). This finding constitutes strong evidence against a functional contribution of the motor system to keeping action words in working memory. Our finding fits into the larger emerging picture in the field of embodied cognition that sensorimotor simulations are neither required nor automatic in high-level cognitive processes, but that they may play a role depending on the task. Importantly, we urge researchers to engage in transparent, high-powered, and fully pre-registered experiments like the present one to ensure the field advances on a solid basis.
  • Morey, R. D., Kaschak, M. P., Díez-Álamo, A. M., Glenberg, A. M., Zwaan, R. A., Lakens, D., Ibáñez, A., García, A., Gianelli, C., Jones, J. L., Madden, J., Alifano, F., Bergen, B., Bloxsom, N. G., Bub, D. N., Cai, Z. G., Chartier, C. R., Chatterjee, A., Conwell, E., Cook, S. W., Davis, J. D., Evers, E., Girard, S., Harter, D., Hartung, F., Herrera, E., Huettig, F., Humphries, S., Juanchich, M., Kühne, K., Lu, S., Lynes, T., Masson, M. E. J., Ostarek, M., Pessers, S., Reglin, R., Steegen, S., Thiessen, E. D., Thomas, L. E., Trott, S., Vandekerckhove, J., Vanpaemel, W., Vlachou, M., Williams, K., & Ziv-Crispel, N. (2022). A pre-registered, multi-lab non-replication of the Action-sentence Compatibility Effect (ACE). Psychonomic Bulletin & Review, 29, 613-626. doi:10.3758/s13423-021-01927-8.

    Abstract

    The Action-sentence Compatibility Effect (ACE) is a well-known demonstration of the role of motor activity in the comprehension of language. Participants are asked to make sensibility judgments on sentences by producing movements toward the body or away from the body. The ACE is the finding that movements are faster when the direction of the movement (e.g., toward) matches the direction of the action in the to-be-judged sentence (e.g., Art gave you the pen describes action toward you). We report on a pre-registered, multi-lab replication of one version of the ACE. The results show that none of the 18 labs involved in the study observed a reliable ACE, and that the meta-analytic estimate of the size of the ACE was essentially zero.
  • Morgan, J. L., Van Elswijk, G., & Meyer, A. S. (2008). Extrafoveal processing of objects in a naming task: Evidence from word probe experiments. Psychonomic Bulletin & Review, 15, 561-565. doi:10.3758/PBR.15.3.561.

    Abstract

    In two experiments, we investigated the processing of extrafoveal objects in a double-object naming task. On most trials, participants named two objects; but on some trials, the objects were replaced shortly after trial onset by a written word probe, which participants had to name instead of the objects. In Experiment 1, the word was presented in the same location as the left object either 150 or 350 msec after trial onset and was either phonologically related or unrelated to that object name. Phonological facilitation was observed at the later but not at the earlier SOA. In Experiment 2, the word was either phonologically related or unrelated to the right object and was presented 150 msec after the speaker had begun to inspect that object. In contrast with Experiment 1, phonological facilitation was found at this early SOA, demonstrating that the speakers had begun to process the right object prior to fixation.
  • Mortensen, L., Meyer, A. S., & Humphreys, G. W. (2008). Speech planning during multiple-object naming: Effects of ageing. Quarterly Journal of Experimental Psychology, 61, 1217-1238. doi:10.1080/17470210701467912.

    Abstract

    Two experiments were conducted with younger and older speakers. In Experiment 1, participants named single objects that were intact or visually degraded, while hearing distractor words that were phonologically related or unrelated to the object name. In both younger and older participants naming latencies were shorter for intact than for degraded objects and shorter when related than when unrelated distractors were presented. In Experiment 2, the single objects were replaced by object triplets, with the distractors being phonologically related to the first object's name. Naming latencies and gaze durations for the first object showed degradation and relatedness effects that were similar to those in single-object naming. Older participants were slower than younger participants when naming single objects and slower and less fluent on the second but not the first object when naming object triplets. The results of these experiments indicate that both younger and older speakers plan object names sequentially, but that older speakers use this planning strategy less efficiently.
  • Murphy, E., Woolnough, O., Rollo, P. S., Roccaforte, Z., Segaert, K., Hagoort, P., & Tandon, N. (2022). Minimal phrase composition revealed by intracranial recordings. The Journal of Neuroscience, 42(15), 3216-3227. doi:10.1523/JNEUROSCI.1575-21.2022.

    Abstract

    The ability to comprehend phrases is an essential integrative property of the brain. Here we evaluate the neural processes that enable the transition from single word processing to a minimal compositional scheme. Previous research has reported conflicting timing effects of composition, and disagreement persists with respect to inferior frontal and posterior temporal contributions. To address these issues, 19 patients (10 male, 9 female) implanted with penetrating depth or surface subdural intracranial electrodes heard auditory recordings of adjective-noun, pseudoword-noun and adjective-pseudoword phrases and judged whether the phrase matched a picture. Stimulus-dependent alterations in broadband gamma activity, low frequency power and phase-locking values across the language-dominant left hemisphere were derived. This revealed a mosaic located on the lower bank of the posterior superior temporal sulcus (pSTS), in which closely neighboring cortical sites displayed exclusive sensitivity to either lexicality or phrase structure, but not both. Distinct timings were found for effects of phrase composition (210–300 ms) and pseudoword processing (approximately 300–700 ms), and these were localized to neighboring electrodes in pSTS. The pars triangularis and temporal pole encoded anticipation of composition in broadband low frequencies, and both regions exhibited greater functional connectivity with pSTS during phrase composition. Our results suggest that the pSTS is a highly specialized region comprised of sparsely interwoven heterogeneous constituents that encodes both lower and higher level linguistic features. This hub in pSTS for minimal phrase processing may form the neural basis for the human-specific computational capacity for forming hierarchically organized linguistic structures.
  • Narasimhan, B., & Dimroth, C. (2008). Word order and information status in child language. Cognition, 107, 317-329. doi:10.1016/j.cognition.2007.07.010.

    Abstract

    In expressing rich, multi-dimensional thought in language, speakers are influenced by a range of factors that influence the ordering of utterance constituents. A fundamental principle that guides constituent ordering in adults has to do with information status, the accessibility of referents in discourse. Typically, adults order previously mentioned referents (“old” or accessible information) first, before they introduce referents that have not yet been mentioned in the discourse (“new” or inaccessible information) at both sentential and phrasal levels. Here we ask whether a similar principle influences ordering patterns at the phrasal level in children who are in the early stages of combining words productively. Prior research shows that when conveying semantic relations, children reproduce language-specific ordering patterns in the input, suggesting that they do not have a bias for any particular order to describe “who did what to whom”. But our findings show that when they label “old” versus “new” referents, 3- to 5-year-old children prefer an ordering pattern opposite to that of adults (Study 1). Children’s ordering preference is not derived from input patterns, as “old-before-new” is also the preferred order in caregivers’ speech directed to young children (Study 2). Our findings demonstrate that a key principle governing ordering preferences in adults does not originate in early childhood, but develops: from “new-before-old” to “old-before-new”.
  • Nas, G., Kempen, G., & Hudson, P. (1984). De rol van spelling en klank bij woordherkenning tijdens het lezen [The role of spelling and sound in word recognition during reading]. In A. Thomassen, L. Noordman, & P. Elling (Eds.), Het leesproces [The reading process]. Lisse: Swets & Zeitlinger.
  • Nayak, S., Coleman, P. L., Ladányi, E., Nitin, R., Gustavson, D. E., Fisher, S. E., Magne, C. L., & Gordon, R. L. (2022). The Musical Abilities, Pleiotropy, Language, and Environment (MAPLE) framework for understanding musicality-language links across the lifespan. Neurobiology of Language, 3(4), 615-664. doi:10.1162/nol_a_00079.

    Abstract

    Using individual differences approaches, a growing body of literature finds positive associations between musicality and language-related abilities, complementing prior findings of links between musical training and language skills. Despite these associations, musicality has been often overlooked in mainstream models of individual differences in language acquisition and development. To better understand the biological basis of these individual differences, we propose the Musical Abilities, Pleiotropy, Language, and Environment (MAPLE) framework. This novel integrative framework posits that musical and language-related abilities likely share some common genetic architecture (i.e., genetic pleiotropy) in addition to some degree of overlapping neural endophenotypes, and genetic influences on musically and linguistically enriched environments. Drawing upon recent advances in genomic methodologies for unraveling pleiotropy, we outline testable predictions for future research on language development and how its underlying neurobiological substrates may be supported by genetic pleiotropy with musicality. In support of the MAPLE framework, we review and discuss findings from over seventy behavioral and neural studies, highlighting that musicality is robustly associated with individual differences in a range of speech-language skills required for communication and development. These include speech perception-in-noise, prosodic perception, morphosyntactic skills, phonological skills, reading skills, and aspects of second/foreign language learning. Overall, the current work provides a clear agenda and framework for studying musicality-language links using individual differences approaches, with an emphasis on leveraging advances in the genomics of complex musicality and language traits.
  • Need, A. C., Attix, D. K., McEvoy, J. M., Cirulli, E. T., Linney, K. N., Wagoner, A. P., Gumbs, C. E., Giegling, I., Möller, H.-J., Francks, C., Muglia, P., Roses, A., Gibson, G., Weale, M. E., Rujescu, D., & Goldstein, D. B. (2008). Failure to replicate effect of Kibra on human memory in two large cohorts of European origin. American Journal of Medical Genetics Part B: Neuropsychiatric Genetics, 147B, 667-668. doi:10.1002/ajmg.b.30658.

    Abstract

    It was recently suggested that the Kibra polymorphism rs17070145 has a strong effect on multiple episodic memory tasks in humans. We attempted to replicate this using two cohorts of European genetic origin (n = 319 and n = 365). We found no association with either the original SNP or a set of tagging SNPs in the Kibra gene with multiple verbal memory tasks, including one that was an exact replication (Auditory Verbal Learning Task, AVLT). These results suggest that Kibra does not have a strong and general effect on human memory.

    Additional information

    SupplementaryMethodsIAmJMedGen.doc
  • Neumann, A., Nolte, I. M., Pappa, I., Ahluwalia, T. S., Pettersson, E., Rodriguez, A., Whitehouse, A., Van Beijsterveldt, C. E. M., Benyamin, B., Hammerschlag, A. R., Helmer, Q., Karhunen, V., Krapohl, E., Lu, Y., Van der Most, P. J., Palviainen, T., St Pourcain, B., Seppälä, I., Suarez, A., Vilor-Tejedor, N., Tiesler, C. M. T., Wang, C., Wills, A., Zhou, A., Alemany, S., Bisgaard, H., Bønnelykke, K., Davies, G. E., Hakulinen, C., Henders, A. K., Hyppönen, E., Stokholm, J., Bartels, M., Hottenga, J.-J., Heinrich, J., Hewitt, J., Keltikangas-Järvinen, L., Korhonen, T., Kaprio, J., Lahti, J., Lahti-Pulkkinen, M., Lehtimäki, T., Middeldorp, C. M., Najman, J. M., Pennell, C., Power, C., Oldehinkel, A. J., Plomin, R., Räikkönen, K., Raitakari, O. T., Rimfeld, K., Sass, L., Snieder, H., Standl, M., Sunyer, J., Williams, G. M., Bakermans-Kranenburg, M. J., Boomsma, D. I., Van IJzendoorn, M. H., Hartman, C. A., & Tiemeier, H. (2022). A genome-wide association study of total child psychiatric problems scores. PLOS ONE, 17(8): e0273116. doi:10.1371/journal.pone.0273116.

    Abstract

    Substantial genetic correlations have been reported across psychiatric disorders and numerous cross-disorder genetic variants have been detected. To identify the genetic variants underlying general psychopathology in childhood, we performed a genome-wide association study using a total psychiatric problem score. We analyzed 6,844,199 common SNPs in 38,418 school-aged children from 20 population-based cohorts participating in the EAGLE consortium. The SNP heritability of total psychiatric problems was 5.4% (SE = 0.01) and two loci reached genome-wide significance: rs10767094 and rs202005905. We also observed an association of SBF2, a gene associated with neuroticism in previous GWAS, with total psychiatric problems. The genetic effects underlying the total score were shared with common psychiatric disorders only (attention-deficit/hyperactivity disorder, anxiety, depression, insomnia) (rG > 0.49), but not with autism or the less common adult disorders (schizophrenia, bipolar disorder, or eating disorders) (rG < 0.01). Importantly, the total psychiatric problem score also showed at least a moderate genetic correlation with intelligence, educational attainment, wellbeing, smoking, and body fat (rG > 0.29). The results suggest that many common genetic variants are associated with childhood psychiatric symptoms and related phenotypes in general instead of with specific symptoms. Further research is needed to establish causality and pleiotropic mechanisms between related traits.

    Additional information

    Full summary results
  • Niarchou, M., Gustavson, D. E., Sathirapongsasuti, J. F., Anglada-Tort, M., Eising, E., Bell, E., McArthur, E., Straub, P., The 23andMe Research Team, McAuley, J. D., Capra, J. A., Ullén, F., Creanza, N., Mosing, M. A., Hinds, D., Davis, L. K., Jacoby, N., & Gordon, R. L. (2022). Genome-wide association study of musical beat synchronization demonstrates high polygenicity. Nature Human Behaviour, 6(9), 1292-1309. doi:10.1038/s41562-022-01359-x.

    Abstract

    Moving in synchrony to the beat is a fundamental component of musicality. Here we conducted a genome-wide association study to identify common genetic variants associated with beat synchronization in 606,825 individuals. Beat synchronization exhibited a highly polygenic architecture, with 69 loci reaching genome-wide significance (P < 5 × 10⁻⁸) and single-nucleotide-polymorphism-based heritability (on the liability scale) of 13%–16%. Heritability was enriched for genes expressed in brain tissues and for fetal and adult brain-specific gene regulatory elements, underscoring the role of central-nervous-system-expressed genes linked to the genetic basis of the trait. We performed validations of the self-report phenotype (through separate experiments) and of the genome-wide association study (polygenic scores for beat synchronization were associated with patients algorithmically classified as musicians in medical records of a separate biobank). Genetic correlations with breathing function, motor function, processing speed and chronotype suggest shared genetic architecture with beat synchronization and provide avenues for new phenotypic and genetic explorations.

    Additional information

    supplementary information
  • Nieuwland, M. S., & Van Berkum, J. J. A. (2008). The neurocognition of referential ambiguity in language comprehension. Language and Linguistics Compass, 2(4), 603-630. doi:10.1111/j.1749-818x.2008.00070.x.

    Abstract

    Referential ambiguity arises whenever readers or listeners are unable to select a unique referent for a linguistic expression out of multiple candidates. In the current article, we review a series of neurocognitive experiments from our laboratory that examine the neural correlates of referential ambiguity, and that employ the brain signature of referential ambiguity to derive functional properties of the language comprehension system. The results of our experiments converge to show that referential ambiguity resolution involves making an inference to evaluate the referential candidates. These inferences only take place when both referential candidates are, at least initially, equally plausible antecedents. Whether comprehenders make these anaphoric inferences is strongly context dependent and co-determined by characteristics of the reader. In addition, readers appear to disregard referential ambiguity when the competing candidates are each semantically incoherent, suggesting that, under certain circumstances, semantic analysis can proceed even when referential analysis has not yielded a unique antecedent. Finally, results from a functional neuroimaging study suggest that whereas the neural systems that deal with referential ambiguity partially overlap with those that deal with referential failure, they show an inverse coupling with the neural systems associated with semantic processing, possibly reflecting the relative contributions of semantic and episodic processing to re-establish semantic and referential coherence, respectively.
  • Nieuwland, M. S., & Van Berkum, J. J. A. (2008). The interplay between semantic and referential aspects of anaphoric noun phrase resolution: Evidence from ERPs. Brain & Language, 106, 119-131. doi:10.1016/j.bandl.2008.05.001.

    Abstract

    In this event-related brain potential (ERP) study, we examined how semantic and referential aspects of anaphoric noun phrase resolution interact during discourse comprehension. We used a full factorial design that crossed referential ambiguity with semantic incoherence. Ambiguous anaphors elicited a sustained negative shift (Nref effect), and incoherent anaphors elicited an N400 effect. Simultaneously ambiguous and incoherent anaphors elicited an ERP pattern resembling that of the incoherent anaphors. These results suggest that semantic incoherence can preclude readers from engaging in anaphoric inferencing. Furthermore, approximately half of our participants unexpectedly showed common late positive effects to the three types of problematic anaphors. We relate the latter finding to recent accounts of what the P600 might reflect, and to the role of individual differences therein.
  • Nieuwland, M. S., & Kuperberg, G. R. (2008). When the truth is not too hard to handle: An event-related potential study on the pragmatics of negation. Psychological Science, 19(12), 1213-1218. doi:10.1111/j.1467-9280.2008.02226.x.

    Abstract

    Our brains rapidly map incoming language onto what we hold to be true. Yet there are claims that such integration and verification processes are delayed in sentences containing negation words like not. However, studies have often confounded whether a statement is true and whether it is a natural thing to say during normal communication. In an event-related potential (ERP) experiment, we aimed to disentangle effects of truth value and pragmatic licensing on the comprehension of affirmative and negated real-world statements. As in affirmative sentences, false words elicited a larger N400 ERP than did true words in pragmatically licensed negated sentences (e.g., “In moderation, drinking red wine isn't bad/good…”), whereas true and false words elicited similar responses in unlicensed negated sentences (e.g., “A baby bunny's fur isn't very hard/soft…”). These results suggest that negation poses no principled obstacle for readers to immediately relate incoming words to what they hold to be true.
  • Nijhof, S., & Zwitserlood, I. (1999). Pluralization in Sign Language of the Netherlands (NGT). In J. Don, & T. Sanders (Eds.), OTS Yearbook 1998-1999 (pp. 58-78). Utrecht: UiL OTS.
  • Nijveld, A., Ten Bosch, L., & Ernestus, M. (2022). The use of exemplars differs between native and non-native listening. Bilingualism: Language and Cognition, 25(5), 841-855. doi:10.1017/S1366728922000116.

    Abstract

    This study compares the role of exemplars in native and non-native listening. Two English identity priming experiments were conducted with native English, Dutch non-native, and Spanish non-native listeners. In Experiment 1, primes and targets were spoken in the same or a different voice. Only the native listeners showed exemplar effects. In Experiment 2, primes and targets had the same or a different degree of vowel reduction. The Dutch, but not the Spanish, listeners were familiar with this reduction pattern from their L1 phonology. In this experiment, exemplar effects only arose for the Spanish listeners. We propose that in these lexical decision experiments the use of exemplars is co-determined by listeners’ available processing resources, which is modulated by the familiarity with the variation type from their L1 phonology. The use of exemplars differs between native and non-native listening, suggesting qualitative differences between native and non-native speech comprehension processes.
  • Nobe, S., Furuyama, N., Someya, Y., Sekine, K., Suzuki, M., & Hayashi, K. (2008). A longitudinal study on gesture of simultaneous interpreter. The Japanese Journal of Speech Sciences, 8, 63-83.
  • Nordlinger, R., Garrido Rodriguez, G., & Kidd, E. (2022). Sentence planning and production in Murrinhpatha, an Australian 'free word order' language. Language, 98(2), 187-220. Retrieved from https://muse.jhu.edu/article/857152.

    Abstract

    Psycholinguistic theories are based on a very small set of unrepresentative languages, so it is as yet unclear how typological variation shapes mechanisms supporting language use. In this article we report the first on-line experimental study of sentence production in an Australian free word order language: Murrinhpatha. Forty-six adult native speakers of Murrinhpatha described a series of unrelated transitive scenes that were manipulated for humanness (±human) in the agent and patient roles while their eye movements were recorded. Speakers produced a large range of word orders, consistent with the language having flexible word order, with variation significantly influenced by agent and patient humanness. An analysis of eye movements showed that Murrinhpatha speakers' first fixation on an event character did not alone determine word order; rather, early in speech planning participants rapidly encoded both event characters and their relationship to each other. That is, they engaged in relational encoding, laying down a very early conceptual foundation for the word order they eventually produced. These results support a weakly hierarchical account of sentence production and show that speakers of a free word order language encode the relationships between event participants during earlier stages of sentence planning than is typically observed for languages with fixed word orders.
  • Norris, D., & McQueen, J. M. (2008). Shortlist B: A Bayesian model of continuous speech recognition. Psychological Review, 115(2), 357-395. doi:10.1037/0033-295X.115.2.357.

    Abstract

    A Bayesian model of continuous speech recognition is presented. It is based on Shortlist (D. Norris, 1994; D. Norris, J. M. McQueen, A. Cutler, & S. Butterfield, 1997) and shares many of its key assumptions: parallel competitive evaluation of multiple lexical hypotheses, phonologically abstract prelexical and lexical representations, a feedforward architecture with no online feedback, and a lexical segmentation algorithm based on the viability of chunks of the input as possible words. Shortlist B is radically different from its predecessor in two respects. First, whereas Shortlist was a connectionist model based on interactive-activation principles, Shortlist B is based on Bayesian principles. Second, the input to Shortlist B is no longer a sequence of discrete phonemes; it is a sequence of multiple phoneme probabilities over 3 time slices per segment, derived from the performance of listeners in a large-scale gating study. Simulations are presented showing that the model can account for key findings: data on the segmentation of continuous speech, word frequency effects, the effects of mispronunciations on word recognition, and evidence on lexical involvement in phonemic decision making. The success of Shortlist B suggests that listeners make optimal Bayesian decisions during spoken-word recognition.
  • Norris, D., McQueen, J. M., & Cutler, A. (1995). Competition and segmentation in spoken word recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 21, 1209-1228.

    Abstract

    Spoken utterances contain few reliable cues to word boundaries, but listeners nonetheless experience little difficulty identifying words in continuous speech. The authors present data and simulations that suggest that this ability is best accounted for by a model of spoken-word recognition combining competition between alternative lexical candidates and sensitivity to prosodic structure. In a word-spotting experiment, stress pattern effects emerged most clearly when there were many competing lexical candidates for part of the input. Thus, competition between simultaneously active word candidates can modulate the size of prosodic effects, which suggests that spoken-word recognition must be sensitive both to prosodic structure and to the effects of competition. A version of the Shortlist model (D. G. Norris, 1994b) incorporating the Metrical Segmentation Strategy (A. Cutler & D. Norris, 1988) accurately simulates the results using a lexicon of more than 25,000 words.
  • Obleser, J., Eisner, F., & Kotz, S. A. (2008). Bilateral speech comprehension reflects differential sensitivity to spectral and temporal features. Journal of Neuroscience, 28(32), 8116-8124. doi:10.1523/JNEUROSCI.1290-08.2008.

    Abstract

    Speech comprehension has been shown to be a strikingly bilateral process, but the differential contributions of the subfields of left and right auditory cortices have remained elusive. The hypothesis that left auditory areas engage predominantly in decoding fast temporal perturbations of a signal whereas the right areas are relatively more driven by changes of the frequency spectrum has not been directly tested in speech or music. This brain-imaging study independently manipulated the speech signal itself along the spectral and the temporal domain using noise-band vocoding. In a parametric design with five temporal and five spectral degradation levels in word comprehension, a functional distinction of the left and right auditory association cortices emerged: increases in the temporal detail of the signal were most effective in driving brain activation of the left anterolateral superior temporal sulcus (STS), whereas the right homolog areas exhibited stronger sensitivity to the variations in spectral detail. In accordance with behavioral measures of speech comprehension acquired in parallel, change of spectral detail exhibited a stronger coupling with the STS BOLD signal. The relative pattern of lateralization (quantified using lateralization quotients) proved reliable in a jack-knifed iterative reanalysis of the group functional magnetic resonance imaging model. This study supplies direct evidence to the often implied functional distinction of the two cerebral hemispheres in speech processing. Applying direct manipulations to the speech signal rather than to low-level surrogates, the results lend plausibility to the notion of complementary roles for the left and right superior temporal sulci in comprehending the speech signal.
  • Ohlerth, A.-K., Bastiaanse, R., Nickels, L., Neu, B., Zhang, W., Ille, S., Sollmann, N., & Krieg, S. M. (2022). Dual-task nTMS mapping to visualize the cortico-subcortical language network and capture postoperative outcome—A patient series in neurosurgery. Frontiers in Oncology, 11: 788122. doi:10.3389/fonc.2021.788122.

    Abstract

    Background: Perioperative assessment of language function in brain tumor patients commonly relies on administration of object naming during stimulation mapping. Ample research, however, points to the benefit of adding verb tasks to the testing paradigm in order to delineate and preserve postoperative language function more comprehensively. This research uses a case series approach to explore the feasibility and added value of a dual-task protocol that includes both a noun task (object naming) and a verb task (action naming) in perioperative delineation of language functions.

    Materials and Methods: Seven neurosurgical cases underwent perioperative language assessment with both object and action naming. This entailed preoperative baseline testing, preoperative stimulation mapping with navigated Transcranial Magnetic Stimulation (nTMS) with subsequent white matter visualization, intraoperative mapping with Direct Electrical Stimulation (DES) in 4 cases, and postoperative imaging and examination of language change.

    Results: We observed a divergent pattern of language organization and decline between cases who showed lesions close to the delineated language network and hence underwent DES mapping, and those that did not. The latter displayed no new impairment postoperatively consistent with an unharmed network for the neural circuits of both object and action naming. For the cases who underwent DES, on the other hand, a higher sensitivity was found for action naming over object naming. Firstly, action naming preferentially predicted the overall language state compared to aphasia batteries. Secondly, it more accurately predicted intraoperative positive language areas as revealed by DES. Thirdly, double dissociations between postoperatively unimpaired object naming and impaired action naming and vice versa indicate segregated skills and neural representation for noun versus verb processing, especially in the ventral stream. Overlaying postoperative imaging with object and action naming networks revealed that dual-task nTMS mapping can explain the drop in performance in those cases where the network appeared in proximity to the resection cavity.

    Conclusion: Using a dual-task protocol for visualization of cortical and subcortical language areas through nTMS mapping proved to be able to capture network-to-deficit relations in our case series. Ultimately, adding action naming to clinical nTMS and DES mapping may help prevent postoperative deficits of this seemingly segregated skill.

    Additional information

    table 1 and table 2
  • Okbay, A., Wu, Y., Wang, N., Jayashankar, H., Bennett, M., Nehzati, S. M., Sidorenko, J., Kweon, H., Goldman, G., Gjorgjieva, T., Jiang, Y., Hicks, B., Tian, C., Hinds, D. A., Ahlskog, R., Magnusson, P. K. E., Oskarsson, S., Hayward, C., Campbell, A., Porteous, D. J., Freese, J., Herd, P., 23andMe Research Team, Social Science Genetic Association Consortium, Watson, C., Jala, J., Conley, D., Koellinger, P. D., Johannesson, M., Laibson, D., Meyer, M. N., Lee, J. J., Kong, A., Yengo, L., Cesarini, D., Turley, P., Visscher, P. M., Beauchamp, J. P., Benjamin, D. J., & Young, A. I. (2022). Polygenic prediction of educational attainment within and between families from genome-wide association analyses in 3 million individuals. Nature Genetics, 54, 437-449. doi:10.1038/s41588-022-01016-z.

    Abstract

    We conduct a genome-wide association study (GWAS) of educational attainment (EA) in a sample of ~3 million individuals and identify 3,952 approximately uncorrelated genome-wide-significant single-nucleotide polymorphisms (SNPs). A genome-wide polygenic predictor, or polygenic index (PGI), explains 12–16% of EA variance and contributes to risk prediction for ten diseases. Direct effects (i.e., controlling for parental PGIs) explain roughly half the PGI’s magnitude of association with EA and other phenotypes. The correlation between mate-pair PGIs is far too large to be consistent with phenotypic assortment alone, implying additional assortment on PGI-associated factors. In an additional GWAS of dominance deviations from the additive model, we identify no genome-wide-significant SNPs, and a separate X-chromosome additive GWAS identifies 57.

    Additional information

    supplementary information
  • O’Neill, A. C., Uzbas, F., Antognolli, G., Merino, F., Draganova, K., Jäck, A., Zhang, S., Pedini, G., Schessner, J. P., Cramer, K., Schepers, A., Metzger, F., Esgleas, M., Smialowski, P., Guerrini, R., Falk, S., Feederle, R., Freytag, S., Wang, Z., Bahlo, M., Jungmann, R., Bagni, C., Borner, G. H. H., Robertson, S. P., Hauck, S. M., & Götz, M. (2022). Spatial centrosome proteome of human neural cells uncovers disease-relevant heterogeneity. Science, 376(6599): eabf9088. doi:10.1126/science.abf9088.

    Abstract

    The centrosome provides an intracellular anchor for the cytoskeleton, regulating cell division, cell migration, and cilia formation. We used spatial proteomics to elucidate protein interaction networks at the centrosome of human induced pluripotent stem cell–derived neural stem cells (NSCs) and neurons. Centrosome-associated proteins were largely cell type–specific, with protein hubs involved in RNA dynamics. Analysis of neurodevelopmental disease cohorts identified a significant overrepresentation of NSC centrosome proteins with variants in patients with periventricular heterotopia (PH). Expressing the PH-associated mutant pre-mRNA-processing factor 6 (PRPF6) reproduced the periventricular misplacement in the developing mouse brain, highlighting missplicing of transcripts of a microtubule-associated kinase with centrosomal location as essential for the phenotype. Collectively, cell type–specific centrosome interactomes explain how genetic variants in ubiquitous proteins may convey brain-specific phenotypes.
  • Onnis, L., Lim, A., Cheung, S., & Huettig, F. (2022). Is the mind inherently predicting? Exploring forward and backward looking in language processing. Cognitive Science, 46(10): e13201. doi:10.1111/cogs.13201.

    Abstract

    Prediction is one characteristic of the human mind. But what does it mean to say the mind is a ‘prediction machine’ and inherently forward looking, as is frequently claimed? In natural languages, many contexts are not easily predictable in a forward fashion. In English, for example, many frequent verbs do not carry unique meaning on their own, but instead rely on another word or words that follow them to become meaningful. Upon reading take a, the processor often cannot easily predict walk as the next word. But the system can ‘look back’ and integrate walk more easily when it follows take a (e.g., as opposed to make|get|have a walk). In the present paper we provide further evidence for the importance of both forward and backward looking in language processing. In two self-paced reading tasks and an eye-tracking reading task, we found evidence that adult English native speakers’ sensitivity to forward and backward word conditional probability significantly explained variance in reading times over and above psycholinguistic predictors of reading latencies. We conclude that both forward looking (prediction) and backward looking (integration) appear to be important characteristics of language processing. Our results thus suggest that it makes just as much sense to call the mind an ‘integration machine’, which is inherently backward looking.

    Additional information

    Open Data and Open Materials
  • Osterhout, L., & Hagoort, P. (1999). A superficial resemblance does not necessarily mean you are part of the family: Counterarguments to Coulson, King and Kutas (1998) in the P600/SPS-P300 debate. Language and Cognitive Processes, 14, 1-14. doi:10.1080/016909699386356.

    Abstract

    Two recent studies (Coulson et al., 1998; Osterhout et al., 1996) examined the relationship between the event-related brain potential (ERP) responses to linguistic syntactic anomalies (P600/SPS) and domain-general unexpected events (P300). Coulson et al. concluded that these responses are highly similar, whereas Osterhout et al. concluded that they are distinct. In this comment, we evaluate the relative merits of these claims. We conclude that the available evidence indicates that the ERP response to syntactic anomalies is at least partially distinct from the ERP response to unexpected anomalies that do not involve a grammatical violation.
  • Oswald, J. N., Van Cise, A. M., Dassow, A., Elliott, T., Johnson, M. T., Ravignani, A., & Podos, J. (2022). A collection of best practices for the collection and analysis of bioacoustic data. Applied Sciences, 12(23): 12046. doi:10.3390/app122312046.

    Abstract

    The field of bioacoustics is rapidly developing and characterized by diverse methodologies, approaches and aims. For instance, bioacoustics encompasses studies on the perception of pure tones in meticulously controlled laboratory settings, documentation of species’ presence and activities using recordings from the field, and analyses of circadian calling patterns in animal choruses. Newcomers to the field are confronted with a vast and fragmented literature, and a lack of accessible reference papers or textbooks. In this paper we contribute towards filling this gap. Instead of a classical list of “dos” and “don’ts”, we review some key papers which, we believe, embody best practices in several bioacoustic subfields. In the first three case studies, we discuss how bioacoustics can help identify the ‘who’, ‘where’ and ‘how many’ of animals within a given ecosystem. Specifically, we review cases in which bioacoustic methods have been applied with success to draw inferences regarding species identification, population structure, and biodiversity. In the fourth and fifth case studies, we highlight how structural properties in signal evolution can emerge via ecological constraints or cultural transmission. Finally, in a sixth example, we discuss acoustic methods that have been used to infer predator–prey dynamics in cases where direct observation was not feasible. Across all these examples, we emphasize the importance of appropriate recording parameters and experimental design. We conclude by highlighting common best practices across studies as well as caveats about our own overview. We hope our efforts spur a more general effort in standardizing best practices across the subareas we’ve highlighted in order to increase compatibility among bioacoustic studies and inspire cross-pollination across the discipline.
  • Otake, T., Davis, S. M., & Cutler, A. (1995). Listeners’ representations of within-word structure: A cross-linguistic and cross-dialectal investigation. In J. Pardo (Ed.), Proceedings of EUROSPEECH 95: Vol. 3 (pp. 1703-1706). Madrid: European Speech Communication Association.

    Abstract

    Japanese, British English and American English listeners were presented with spoken words in their native language, and asked to mark on a written transcript of each word the first natural division point in the word. The results showed clear and strong patterns of consensus, indicating that listeners have available to them conscious representations of within-word structure. Orthography did not play a strongly deciding role in the results. The patterns of response were at variance with results from on-line studies of speech segmentation, suggesting that the present task taps not those representations used in on-line listening, but levels of representation which may involve much richer knowledge of word-internal structure.
  • Otake, T., & Cutler, A. (1999). Perception of suprasegmental structure in a nonnative dialect. Journal of Phonetics, 27, 229-253. doi:10.1006/jpho.1999.0095.

    Abstract

    Two experiments examined the processing of Tokyo Japanese pitch-accent distinctions by native speakers of Japanese from two accentless-variety areas. In both experiments, listeners were presented with Tokyo Japanese speech materials used in an earlier study with Tokyo Japanese listeners, who clearly exploited the pitch-accent information in spoken-word recognition. In the first experiment, listeners judged from which of two words, differing in accentual structure, isolated syllables had been extracted. Both new groups were, overall, as successful at this task as Tokyo Japanese speakers had been, but their response patterns differed from those of the Tokyo Japanese, for instance in that a bias towards H judgments in the Tokyo Japanese responses was weakened in the present groups’ responses. In a second experiment, listeners heard word fragments and guessed what the words were; in this task, the speakers from accentless areas again performed significantly above chance, but their responses showed less sensitivity to the information in the input, and greater bias towards vocabulary distribution frequencies, than had been observed with the Tokyo Japanese listeners. The results suggest that experience with a local accentless dialect affects the processing of accent for word recognition in Tokyo Japanese, even for listeners with extensive exposure to Tokyo Japanese.
  • Otten, M., & Van Berkum, J. J. A. (2008). Discourse-based word anticipation during language processing: Prediction or priming? Discourse Processes, 45, 464-496. doi:10.1080/01638530802356463.

    Abstract

    Language is an intrinsically open-ended system. This fact has led to the widely shared assumption that readers and listeners do not predict upcoming words, at least not in a way that goes beyond simple priming between words. Recent evidence, however, suggests that readers and listeners do anticipate upcoming words “on the fly” as a text unfolds. In 2 event-related potentials experiments, this study examined whether these predictions are based on the exact message conveyed by the prior discourse or on simpler word-based priming mechanisms. Participants read texts that strongly supported the prediction of a specific word, mixed with non-predictive control texts that contained the same prime words. In Experiment 1A, anomalous words that replaced a highly predictable (as opposed to a non-predictable but coherent) word elicited a long-lasting positive shift, suggesting that the prior discourse had indeed led people to predict specific words. In Experiment 1B, adjectives whose suffix mismatched the predictable noun's syntactic gender elicited a short-lived late negativity in predictive stories but not in prime control stories. Taken together, these findings reveal that the conceptual basis for predicting specific upcoming words during reading is the exact message conveyed by the discourse and not the mere presence of prime words.
  • Owoyele, B., Trujillo, J. P., De Melo, G., & Pouw, W. (2022). Masked-Piper: Masking personal identities in visual recordings while preserving multimodal information. SoftwareX, 20: 101236. doi:10.1016/j.softx.2022.101236.

    Abstract

    In this increasingly data-rich world, visual recordings of human behavior are often unable to be shared due to concerns about privacy. Consequently, data sharing in fields such as behavioral science, multimodal communication, and human movement research is often limited. In addition, in legal and other non-scientific contexts, privacy-related concerns may preclude the sharing of video recordings and thus remove the rich multimodal context that humans recruit to communicate. Minimizing the risk of identity exposure while preserving critical behavioral information would maximize the utility of public resources (e.g., research grants) and time invested in audio–visual research. Here we present an open-source computer vision tool that masks the identities of humans while maintaining rich information about communicative body movements. Furthermore, this masking tool can be easily applied to many videos, leveraging computational tools to augment the reproducibility and accessibility of behavioral research. The tool is designed for researchers and practitioners engaged in kinematic and affective research. Application areas include teaching/education, communication and human movement research, CCTV, and legal contexts.

    Additional information

    setup and usage
  • Ozker, M., Doyle, W., Devinsky, O., & Flinker, A. (2022). A cortical network processes auditory error signals during human speech production to maintain fluency. PLoS Biology, 20: e3001493. doi:10.1371/journal.pbio.3001493.

    Abstract

    Hearing one’s own voice is critical for fluent speech production, as it allows for the detection and correction of vocalization errors in real time. This behavior, known as the auditory feedback control of speech, is impaired in various neurological disorders ranging from stuttering to aphasia; however, the underlying neural mechanisms are still poorly understood. Computational models of speech motor control suggest that, during speech production, the brain uses an efference copy of the motor command to generate an internal estimate of the speech output. When actual feedback differs from this internal estimate, an error signal is generated to correct the internal estimate and update the necessary motor commands to produce the intended speech. We were able to localize the auditory error signal using electrocorticographic recordings from neurosurgical participants during a delayed auditory feedback (DAF) paradigm. In this task, participants heard their voice with a time delay as they produced words and sentences (similar to an echo on a conference call), which is well known to disrupt fluency by causing slow and stutter-like speech in humans. We observed a significant response enhancement in auditory cortex that scaled with the duration of feedback delay, indicating an auditory speech error signal. Immediately following auditory cortex, dorsal precentral gyrus (dPreCG), a region that has not been implicated in auditory feedback processing before, exhibited a markedly similar response enhancement, suggesting a tight coupling between the 2 regions. Critically, response enhancement in dPreCG occurred only during articulation of long utterances due to a continuous mismatch between produced speech and reafferent feedback. These results suggest that dPreCG plays an essential role in processing auditory error signals during speech production to maintain fluency.

    Additional information

    data and code
  • Ozturk, O., & Papafragou, A. (2008). Acquisition of evidentiality and source monitoring. In H. Chan, H. Jacob, & E. Kapia (Eds.), Proceedings from the 32nd Annual Boston University Conference on Language Development [BUCLD 32] (pp. 368-377). Somerville, Mass.: Cascadilla Press.
  • Ozyurek, A., Kita, S., Allen, S., Brown, A., Furman, R., & Ishizuka, T. (2008). Development of cross-linguistic variation in speech and gesture: motion events in English and Turkish. Developmental Psychology, 44(4), 1040-1054. doi:10.1037/0012-1649.44.4.1040.

    Abstract

    The way adults express manner and path components of a motion event varies across typologically different languages both in speech and cospeech gestures, showing that language specificity in event encoding influences gesture. The authors tracked when and how this multimodal cross-linguistic variation develops in children learning Turkish and English, 2 typologically distinct languages. They found that children learn to speak in language-specific ways from age 3 onward (i.e., English speakers used 1 clause and Turkish speakers used 2 clauses to express manner and path). In contrast, English- and Turkish-speaking children’s gestures looked similar at ages 3 and 5 (i.e., separate gestures for manner and path), differing from each other only at age 9 and in adulthood (i.e., English speakers used 1 gesture, but Turkish speakers used separate gestures for manner and path). The authors argue that this pattern of the development of cospeech gestures reflects a gradual shift to language-specific representations during speaking and shows that looking at speech alone may not be sufficient to understand the full process of language acquisition.
  • Ozyurek, A., & Kita, S. (1999). Expressing manner and path in English and Turkish: Differences in speech, gesture, and conceptualization. In M. Hahn, & S. C. Stoness (Eds.), Proceedings of the Twenty-first Annual Conference of the Cognitive Science Society (pp. 507-512). London: Erlbaum.
  • Park, B.-y., Larivière, S., Rodríguez-Cruces, R., Royer, J., Tavakol, S., Wang, Y., Caciagli, L., Caligiuri, M. E., Gambardella, A., Concha, L., Keller, S. S., Cendes, F., Alvim, M. K. M., Yasuda, C., Bonilha, L., Gleichgerrcht, E., Focke, N. K., Kreilkamp, B. A. K., Domin, M., Von Podewils, F., Langner, S., Rummel, C., Rebsamen, M., Wiest, R., Martin, P., Kotikalapudi, R., Bender, B., O’Brien, T. J., Law, M., Sinclair, B., Vivash, L., Desmond, P. M., Malpas, C. B., Lui, E., Alhusaini, S., Doherty, C. P., Cavalleri, G. L., Delanty, N., Kälviäinen, R., Jackson, G. D., Kowalczyk, M., Mascalchi, M., Semmelroch, M., Thomas, R. H., Soltanian-Zadeh, H., Davoodi-Bojd, E., Zhang, J., Lenge, M., Guerrini, R., Bartolini, E., Hamandi, K., Foley, S., Weber, B., Depondt, C., Absil, J., Carr, S. J. A., Abela, E., Richardson, M. P., Devinsky, O., Severino, M., Striano, P., Parodi, C., Tortora, D., Hatton, S. N., Vos, S. B., Duncan, J. S., Galovic, M., Whelan, C. D., Bargalló, N., Pariente, J., Conde, E., Vaudano, A. E., Tondelli, M., Meletti, S., Kong, X., Francks, C., Fisher, S. E., Caldairou, B., Ryten, M., Labate, A., Sisodiya, S. M., Thompson, P. M., McDonald, C. R., Bernasconi, A., Bernasconi, N., & Bernhardt, B. C. (2022). Topographic divergence of atypical cortical asymmetry and atrophy patterns in temporal lobe epilepsy. Brain, 145(4), 1285-1298. doi:10.1093/brain/awab417.

    Abstract

    Temporal lobe epilepsy (TLE), a common drug-resistant epilepsy in adults, is primarily a limbic network disorder associated with predominant unilateral hippocampal pathology. Structural MRI has provided an in vivo window into whole-brain grey matter structural alterations in TLE relative to controls, by either mapping (i) atypical inter-hemispheric asymmetry or (ii) regional atrophy. However, similarities and differences of both atypical asymmetry and regional atrophy measures have not been systematically investigated.

    Here, we addressed this gap using the multi-site ENIGMA-Epilepsy dataset, comprising MRI brain morphological measures in 732 TLE patients and 1,418 healthy controls. We compared spatial distributions of grey matter asymmetry and atrophy in TLE, contextualized their topographies relative to spatial gradients in cortical microstructure and functional connectivity calculated using 207 healthy controls obtained from the Human Connectome Project and an independent dataset containing 23 TLE patients and 53 healthy controls, and examined clinical associations using machine learning.

    We identified a marked divergence in the spatial distribution of atypical inter-hemispheric asymmetry and regional atrophy mapping. The former revealed a temporo-limbic disease signature while the latter showed diffuse and bilateral patterns. Our findings were robust across individual sites and patients. Cortical atrophy was significantly correlated with disease duration and age at seizure onset, while degrees of asymmetry did not show a significant relationship to these clinical variables.

    Our findings highlight that the mapping of atypical inter-hemispheric asymmetry and regional atrophy tap into two complementary aspects of TLE-related pathology, with the former revealing primary substrates in ipsilateral limbic circuits and the latter capturing bilateral disease effects. These findings refine our notion of the neuropathology of TLE and may inform future discovery and validation of complementary MRI biomarkers in TLE.

    Additional information

    awab417_supplementary_data.pdf
  • Patel, A. D., Iversen, J. R., Wassenaar, M., & Hagoort, P. (2008). Musical syntactic processing in agrammatic Broca's aphasia. Aphasiology, 22(7/8), 776-789. doi:10.1080/02687030701803804.

    Abstract

    Background: Growing evidence for overlap in the syntactic processing of language and music in non-brain-damaged individuals leads to the question of whether aphasic individuals with grammatical comprehension problems in language also have problems processing structural relations in music.

    Aims: The current study sought to test musical syntactic processing in individuals with Broca's aphasia and grammatical comprehension deficits, using both explicit and implicit tasks.

    Methods & Procedures: Two experiments were conducted. In the first experiment 12 individuals with Broca's aphasia (and 14 matched controls) were tested for their sensitivity to grammatical and semantic relations in sentences, and for their sensitivity to musical syntactic (harmonic) relations in chord sequences. An explicit task (acceptability judgement of novel sequences) was used. The second experiment, with 9 individuals with Broca's aphasia (and 12 matched controls), probed musical syntactic processing using an implicit task (harmonic priming).

    Outcomes & Results: In both experiments the aphasic group showed impaired processing of musical syntactic relations. Control experiments indicated that this could not be attributed to low-level problems with the perception of pitch patterns or with auditory short-term memory for tones.

    Conclusions: The results suggest that musical syntactic processing in agrammatic aphasia deserves systematic investigation, and that such studies could help probe the nature of the processing deficits underlying linguistic agrammatism. Methodological suggestions are offered for future work in this little-explored area.
  • Pearson, L., & Pouw, W. (2022). Gesture–vocal coupling in Karnatak music performance: A neuro–bodily distributed aesthetic entanglement. Annals of the New York Academy of Sciences, 1515(1), 219-236. doi:10.1111/nyas.14806.

    Abstract

    In many musical styles, vocalists manually gesture while they sing. Coupling between gesture kinematics and vocalization has been examined in speech contexts, but it is an open question how these couple in music making. We examine this in a corpus of South Indian, Karnatak vocal music that includes motion-capture data. Through peak magnitude analysis (linear mixed regression) and continuous time-series analyses (generalized additive modeling), we assessed whether vocal trajectories around peaks in vertical velocity, speed, or acceleration were coupling with changes in vocal acoustics (namely, F0 and amplitude). Kinematic coupling was stronger for F0 change versus amplitude, pointing to F0's musical significance. Acceleration was the most predictive for F0 change and had the most reliable magnitude coupling, showing a one-third power relation. That acceleration, rather than other kinematics, is maximally predictive for vocalization is interesting because acceleration entails force transfers onto the body. As a theoretical contribution, we argue that gesturing in musical contexts should be understood in relation to the physical connections between gesturing and vocal production that are brought into harmony with the vocalists’ (enculturated) performance goals. Gesture–vocal coupling should, therefore, be viewed as a neuro–bodily distributed aesthetic entanglement.

    Additional information

    tables
  • Pederson, E. (1995). Questionnaire on event realization. In D. Wilkins (Ed.), Extensions of space and beyond: manual for field elicitation for the 1995 field season (pp. 54-60). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3004359.

    Abstract

    "Event realisation" refers to the normal final state of the affected entity of an activity described by a verb. For example, the sentence John killed the mosquito entails that the mosquito is afterwards dead – this is the full realisation of a killing event. By contrast, a sentence such as John hit the mosquito does not entail the mosquito’s death (even though we might assume this to be a likely result). In using a certain verb, which features of event realisation are entailed and which are just likely? This questionnaire supports cross-linguistic exploration of event realisation for a range of event types.
  • Pereira Soares, S. M., Kupisch, T., & Rothman, J. (2022). Testing potential transfer effects in heritage and adult L2 bilinguals acquiring a mini grammar as an additional language: An ERP approach. Brain Sciences, 12: 669. doi:10.3390/brainsci12050669.

    Abstract

    Models on L3/Ln acquisition differ with respect to how they envisage degree (holistic vs. selective transfer of the L1, L2 or both) and/or timing (initial stages vs. development) of how the influence of source languages unfolds. This study uses EEG/ERPs to examine these models, bringing together two types of bilinguals: heritage speakers (HSs) (Italian-German, n = 15) compared to adult L2 learners (L1 German, L2 English, n = 28) learning L3/Ln Latin. Participants were trained on a selected Latin lexicon over two sessions and, afterward, on two grammatical properties: case (similar between German and Latin) and adjective–noun order (similar between Italian and Latin). Neurophysiological findings show an N200/N400 deflection for the HSs in case morphology and a P600 effect for the German L2 group in adjectival position. None of the current L3/Ln models predict the observed results, which questions the appropriateness of this methodology. Nevertheless, the results are illustrative of differences in how HSs and L2 learners approach the very initial stages of additional language learning, the implications of which are discussed.
  • Pereira Soares, S. M., Prystauka, Y., DeLuca, V., & Rothman, J. (2022). Type of bilingualism conditions individual differences in the oscillatory dynamics of inhibitory control. Frontiers in Human Neuroscience, 16: 910910. doi:10.3389/fnhum.2022.910910.

    Abstract

    The present study uses EEG time-frequency representations (TFRs) with a Flanker task to investigate if and how individual differences in bilingual language experience modulate neurocognitive outcomes (oscillatory dynamics) in two bilingual group types: late bilinguals (L2 learners) and early bilinguals (heritage speakers—HSs). TFRs were computed for both incongruent and congruent trials. The difference between the two (Flanker effect vis-à-vis cognitive interference) was then (1) compared between the HSs and the L2 learners, (2) modeled as a function of individual differences with bilingual experience within each group separately and (3) probed for its potential (a)symmetry between brain and behavioral data. We found no differences at the behavioral and neural levels for the between-groups comparisons. However, oscillatory dynamics (mainly theta increase and alpha suppression) of inhibition and cognitive control were found to be modulated by individual differences in bilingual language experience, albeit distinctly within each bilingual group. While the results indicate adaptations toward differential brain recruitment in line with bilingual language experience variation overall, this does not manifest uniformly. Rather, earlier versus later onset to bilingualism—the bilingual type—seems to constitute an independent qualifier to how individual differences play out.

    Additional information

    supplementary material
  • Perfors, A., & Kidd, E. (2022). The role of stimulus‐specific perceptual fluency in statistical learning. Cognitive Science, 46(2): e13100. doi:10.1111/cogs.13100.

    Abstract

    Humans have the ability to learn surprisingly complicated statistical information in a variety of modalities and situations, often based on relatively little input. These statistical learning (SL) skills appear to underlie many kinds of learning, but despite their ubiquity, we still do not fully understand precisely what SL is and what individual differences on SL tasks reflect. Here, we present experimental work suggesting that at least some individual differences arise from stimulus-specific variation in perceptual fluency: the ability to rapidly or efficiently code and remember the stimuli that SL occurs over. Experiment 1 demonstrates that participants show improved SL when the stimuli are simple and familiar; Experiment 2 shows that this improvement is not evident for simple but unfamiliar stimuli; and Experiment 3 shows that for the same stimuli (Chinese characters), SL is higher for people who are familiar with them (Chinese speakers) than those who are not (English speakers matched on age and education level). Overall, our findings indicate that performance on a standard SL task varies substantially within the same (visual) modality as a function of whether the stimuli involved are familiar or not, independent of stimulus complexity. Moreover, test–retest correlations of performance in an SL task using stimuli of the same level of familiarity (but distinct items) are stronger than correlations across the same task with stimuli of different levels of familiarity. Finally, we demonstrate that SL performance is predicted by an independent measure of stimulus-specific perceptual fluency that contains no SL component at all. Our results suggest that a key component of SL performance may be related to stimulus-specific processing and familiarity.
  • Perniss, P. M., & Ozyurek, A. (2008). Representations of action, motion and location in sign space: A comparison of German (DGS) and Turkish (TID) sign language narratives. In J. Quer (Ed.), Signs of the time: Selected papers from TISLR 8 (pp. 353-376). Seedorf: Signum Press.
  • Perniss, P. M., & Zeshan, U. (2008). Possessive and existential constructions in Kata Kolok (Bali). In Possessive and existential constructions in sign languages. Nijmegen: Ishara Press.
  • Perniss, P. M., & Zeshan, U. (2008). Possessive and existential constructions: Introduction and overview. In Possessive and existential constructions in sign languages (pp. 1-31). Nijmegen: Ishara Press.
  • Petersson, K. M. (2008). On cognition, structured sequence processing, and adaptive dynamical systems. American Institute of Physics Conference Proceedings, 1060(1), 195-200.

    Abstract

    Cognitive neuroscience approaches the brain as a cognitive system: a system that functionally is conceptualized in terms of information processing. We outline some aspects of this concept and consider a physical system to be an information processing device when a subclass of its physical states can be viewed as representational/cognitive and transitions between these can be conceptualized as a process operating on these states by implementing operations on the corresponding representational structures. We identify a generic and fundamental problem in cognition: sequentially organized structured processing. Structured sequence processing provides the brain, in an essential sense, with its processing logic. In an approach addressing this problem, we illustrate how to integrate levels of analysis within a framework of adaptive dynamical systems. We note that the dynamical system framework lends itself to a description of asynchronous event-driven devices, which is likely to be important in cognition because the brain appears to be an asynchronous processing system. We use the human language faculty and natural language processing as a concrete example throughout.
  • Petersson, K. M., Elfgren, C., & Ingvar, M. (1999). Dynamic changes in the functional anatomy of the human brain during recall of abstract designs related to practice. Neuropsychologia, 37, 567-587.

    Abstract

    In the present PET study we explore some functional aspects of the interaction between attentional/control processes and learning/memory processes. The network of brain regions supporting recall of abstract designs was studied in a less practiced and in a well practiced state. The results indicate that automaticity, i.e., a decreased dependence on attentional and working memory resources, develops as a consequence of practice. This corresponds to the practice-related decreases of activity in the prefrontal, anterior cingulate, and posterior parietal regions. In addition, the activity of the medial temporal regions decreased as a function of practice. This indicates an inverse relation between the strength of encoding and the activation of the MTL during retrieval. Furthermore, the pattern of practice-related increases in the auditory, posterior insular-opercular extending into perisylvian supramarginal region, and the right mid occipito-temporal region, may reflect a lower degree of inhibitory attentional modulation of task-irrelevant processing and more fully developed representations of the abstract designs, respectively. We also suggest that free recall is dependent on bilateral prefrontal processing, in particular non-automatic free recall. The present results confirm previous functional neuroimaging studies of memory retrieval indicating that recall is subserved by a network of interacting brain regions. Furthermore, the results indicate that some components of the neural network subserving free recall may have a dynamic role and that there is a functional restructuring of the information processing networks during the learning process.
  • Petersson, K. M., Reis, A., Castro-Caldas, A., & Ingvar, M. (1999). Effective auditory-verbal encoding activates the left prefrontal and the medial temporal lobes: A generalization to illiterate subjects. NeuroImage, 10, 45-54. doi:10.1006/nimg.1999.0446.

    Abstract

    Recent event-related fMRI studies indicate that the prefrontal (PFC) and the medial temporal lobe (MTL) regions are more active during effective encoding than during ineffective encoding. The within-subject design and the use of well-educated young college students in these studies make it important to replicate these results in other study populations. In this PET study, we used an auditory word-pair association cued-recall paradigm and investigated a group of healthy upper middle-aged/older illiterate women. We observed a positive correlation between cued-recall success and the regional cerebral blood flow of the left inferior PFC (BA 47) and the MTLs. Specifically, we used the cued-recall success as a covariate in a general linear model and the results confirmed that the left inferior PFC and the MTL are more active during effective encoding than during ineffective encoding. These effects were observed during encoding of both semantically and phonologically related word pairs, indicating that these effects are robust in the studied population, that is, reproducible within group. These results generalize the results of Brewer et al. (1998, Science 281, 1185–1187) and Wagner et al. (1998, Science 281, 1188–1191) to an upper middle-aged/older illiterate population. In addition, the present study indicates that effective relational encoding correlates positively with the activity of the anterior medial temporal lobe regions.
  • Petersson, K. M., Elfgren, C., & Ingvar, M. (1999). Learning-related effects and functional neuroimaging. Human Brain Mapping, 7, 234-243. doi:10.1002/(SICI)1097-0193(1999)7:4<234:AID-HBM2>3.0.CO;2-O.

    Abstract

    A fundamental problem in the study of learning is that learning-related changes may be confounded by nonspecific time effects. There are several strategies for handling this problem. This problem may be of greater significance in functional magnetic resonance imaging (fMRI) compared to positron emission tomography (PET). Using the general linear model, we describe, compare, and discuss two approaches for separating learning-related from nonspecific time effects. The first approach makes assumptions about the general behavior of nonspecific effects and explicitly models these effects, i.e., nonspecific time effects are incorporated as a linear or nonlinear confounding covariate in the statistical model. The second strategy makes no a priori assumption concerning the form of nonspecific time effects, but implicitly controls for nonspecific effects using an interaction approach, i.e., learning effects are assessed with an interaction contrast. The two approaches depend on specific assumptions and have specific limitations. With certain experimental designs, both approaches may be used and the results compared, lending particular support to effects that are independent of the method used. A third and perhaps better approach that sometimes may be practically unfeasible is to use a completely temporally balanced experimental design. The choice of approach may be of particular importance when learning-related effects are studied with fMRI.
  • Petersson, K. M., Nichols, T. E., Poline, J.-B., & Holmes, A. P. (1999). Statistical limitations in functional neuroimaging I: Non-inferential methods and statistical models. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 354, 1239-1260.
  • Petersson, K. M., Nichols, T. E., Poline, J.-B., & Holmes, A. P. (1999). Statistical limitations in functional neuroimaging II: Signal detection and statistical inference. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 354, 1261-1282.
  • Petrovic, P., Ingvar, M., Stone-Elander, S., Petersson, K. M., & Hansson, P. (1999). A PET activation study of dynamic mechanical allodynia in patients with mononeuropathy. Pain, 83, 459-470.

    Abstract

    The objective of this study was to investigate the central processing of dynamic mechanical allodynia in patients with mononeuropathy. Regional cerebral blood flow, as an indicator of neuronal activity, was measured with positron emission tomography. Paired comparisons were made between three different states: rest, allodynia during brushing of the painful skin area, and brushing of the homologous contralateral area. Bilateral activations were observed in the primary somatosensory cortex (S1) and the secondary somatosensory cortex (S2) during allodynia compared to rest. The S1 activation contralateral to the site of the stimulus was more pronounced during allodynia than during innocuous touch. Significant activations of the contralateral posterior parietal cortex, the periaqueductal gray (PAG), the thalamus bilaterally and motor areas were also observed in the allodynic state compared to both non-allodynic states. In the anterior cingulate cortex (ACC) there was only a suggested activation when the allodynic state was compared with the non-allodynic states. In order to account for the individual variability in the intensity of allodynia and ongoing spontaneous pain, rCBF was regressed on the individually reported pain intensity, and significant covariations were observed in the ACC and the right anterior insula. Significantly decreased regional blood flow was observed bilaterally in the medial and lateral temporal lobe as well as in the occipital and posterior cingulate cortices when the allodynic state was compared to the non-painful conditions. This finding is consistent with previous studies suggesting attentional modulation and a central coping strategy for known and expected painful stimuli. Involvement of the medial pain system has previously been reported in patients with mononeuropathy during ongoing spontaneous pain. This study reveals a bilateral activation of the lateral pain system as well as involvement of the medial pain system during dynamic mechanical allodynia in patients with mononeuropathy.
  • Pijls, F., Kempen, G., & Janner, E. (1990). Intelligent modules for Dutch grammar instruction. In J. Pieters, P. Simons, & L. De Leeuw (Eds.), Research on computer-based instruction. Amsterdam: Swets & Zeitlinger.
  • Poletiek, F. H. (2008). Het probleem van escalerende beschuldigingen [Boekbespreking van Kindermishandeling door H. Crombag en den Hartog]. Maandblad voor Geestelijke Volksgezondheid, (2), 163-166.
  • Poort, E. D., & Rodd, J. M. (2022). Cross-lingual priming of cognates and interlingual homographs from L2 to L1. Glossa Psycholinguistics, 1(1): 11. doi:10.5070/G601147.

    Abstract

    Many word forms exist in multiple languages, and can have either the same meaning (cognates) or a different meaning (interlingual homographs). Previous experiments have shown that processing of interlingual homographs in a bilingual’s second language is slowed down by recent experience with these words in the bilingual’s native language, while processing of cognates can be speeded up (Poort et al., 2016; Poort & Rodd, 2019a). The current experiment replicated Poort and Rodd’s (2019a) Experiment 2 but switched the direction of priming: Dutch–English bilinguals (n = 106) made Dutch semantic relatedness judgements to probes related to cognates (n = 50), interlingual homographs (n = 50) and translation equivalents (n = 50) they had seen 15 minutes previously embedded in English sentences. The current experiment is the first to show that a single encounter with an interlingual homograph in one’s second language can also affect subsequent processing in one’s native language. Cross-lingual priming did not affect the cognates. The experiment also extended Poort and Rodd’s (2019a) finding of a large interlingual homograph inhibition effect in a semantic relatedness task in the participants’ L2 to their L1, but again found no evidence for a cognate facilitation effect in a semantic relatedness task. These findings extend the growing literature that emphasises the high level of interaction in a bilingual’s mental lexicon, by demonstrating the influence of L2 experience on the processing of L1 words. Data, scripts, materials and pre-registration available via https://osf.io/2swyg/?view_only=b2ba2e627f6f4eaeac87edab2b59b236.
  • Postema, A., Van Mierlo, H., Bakker, A. B., & Barendse, M. T. (2022). Study-to-sports spillover among competitive athletes: A field study. International Journal of Sport and Exercise Psychology. Advance online publication. doi:10.1080/1612197X.2022.2058054.

    Abstract

    Combining academics and athletics is challenging but important for the psychological and psychosocial development of those involved. However, little is known about how experiences in academics spill over and relate to athletics. Drawing on the enrichment mechanisms proposed by the Work-Home Resources model, we posit that study crafting behaviours are positively related to volatile personal resources, which, in turn, are related to higher athletic achievement. Via structural equation modelling, we examine a path model among 243 student-athletes, incorporating study crafting behaviours and personal resources (i.e., positive affect and study engagement), and self- and coach-rated athletic achievement measured two weeks later. Results show that optimising the academic environment by crafting challenging study demands relates positively to positive affect and study engagement. In turn, positive affect related positively to self-rated athletic achievement, whereas – unexpectedly – study engagement related negatively to coach-rated athletic achievement. Optimising the academic environment through cognitive crafting and crafting social study resources did not relate to athletic outcomes. We discuss how these findings offer new insights into the interplay between academics and athletics.
  • Poulton, V. R., & Nieuwland, M. S. (2022). Can you hear what’s coming? Failure to replicate ERP evidence for phonological prediction. Neurobiology of Language, 3(4), 556-574. doi:10.1162/nol_a_00078.

    Abstract

    Prediction-based theories of language comprehension assume that listeners predict both the meaning and phonological form of likely upcoming words. In alleged event-related potential (ERP) demonstrations of phonological prediction, prediction-mismatching words elicit a phonological mismatch negativity (PMN), a frontocentral negativity that precedes the centroparietal N400 component. However, classification and replicability of the PMN has proven controversial, with ongoing debate on whether the PMN is a distinct component or merely an early part of the N400. In this electroencephalography (EEG) study, we therefore attempted to replicate the PMN effect and its separability from the N400, using a participant sample size (N = 48) that was more than double that of previous studies. Participants listened to sentences containing either a predictable word or an unpredictable word with/without phonological overlap with the predictable word. Preregistered analyses revealed a widely distributed negative-going ERP in response to unpredictable words in both the early (150–250 ms) and the N400 (300–500 ms) time windows. Bayes factor analysis yielded moderate evidence against a different scalp distribution of the effects in the two time windows. Although our findings do not speak against phonological prediction during sentence comprehension, they do speak against the PMN effect specifically as a marker of phonological prediction mismatch. Instead of a PMN effect, our results demonstrate the early onset of the auditory N400 effect associated with unpredictable words. Our failure to replicate further highlights the risk associated with commonly employed data-contingent analyses (e.g., analyses involving time windows or electrodes that were selected based on visual inspection) and small sample sizes in the cognitive neuroscience of language.
  • Pouw, W., & Holler, J. (2022). Timing in conversation is dynamically adjusted turn by turn in dyadic telephone conversations. Cognition, 222: 105015. doi:10.1016/j.cognition.2022.105015.

    Abstract

    Conversational turn taking in humans involves incredibly rapid responding. The timing mechanisms underpinning such responses have been heavily debated, including questions such as who is doing the timing. Similar to findings on rhythmic tapping to a metronome, we show that floor transfer offsets (FTOs) in telephone conversations are serially dependent, such that FTOs are lag-1 negatively autocorrelated. Finding this serial dependence on a turn-by-turn basis (lag-1) rather than on the basis of two or more turns, suggests a counter-adjustment mechanism operating at the level of the dyad in FTOs during telephone conversations, rather than a more individualistic self-adjustment within speakers. This finding, if replicated, has major implications for models describing turn taking, and confirms the joint, dyadic nature of human conversational dynamics. Future research is needed to see how pervasive serial dependencies in FTOs are, such as for example in richer communicative face-to-face contexts where visual signals affect conversational timing.
  • Pouw, W., & Dixon, J. A. (2022). What you hear and see specifies the perception of a limb-respiratory-vocal act. Proceedings of the Royal Society B: Biological Sciences, 289(1979): 20221026. doi:10.1098/rspb.2022.1026.
  • Pouw, W., Harrison, S. J., & Dixon, J. A. (2022). The importance of visual control and biomechanics in the regulation of gesture-speech synchrony for an individual deprived of proprioceptive feedback of body position. Scientific Reports, 12: 14775. doi:10.1038/s41598-022-18300-x.

    Abstract

    Do communicative actions such as gestures fundamentally differ in their control mechanisms from other actions? Evidence for such fundamental differences comes from a classic gesture-speech coordination experiment performed with a person (IW) with deafferentation (McNeill, 2005). Although IW has lost both his primary source of information about body position (i.e., proprioception) and discriminative touch from the neck down, his gesture-speech coordination has been reported to be largely unaffected, even if his vision is blocked. This is surprising because, without vision, his object-directed actions almost completely break down. We examine the hypothesis that IW’s gesture-speech coordination is supported by the biomechanical effects of gesturing on head posture and speech. We find that when vision is blocked, there are micro-scale increases in gesture-speech timing variability, consistent with IW’s reported experience that gesturing is difficult without vision. Supporting the hypothesis that IW exploits biomechanical consequences of the act of gesturing, we find that: (1) gestures with larger physical impulses co-occur with greater head movement, (2) gesture-speech synchrony relates to larger gesture-concurrent head movements (i.e. for bimanual gestures), (3) when vision is blocked, gestures generate more physical impulse, and (4) moments of acoustic prominence couple more with peaks of physical impulse when vision is blocked. It can be concluded that IW’s gesturing ability is not based on a specialized language-based feedforward control as originally concluded from previous research, but is still dependent on a varied means of recurrent feedback from the body.

    Additional information

    supplementary tables
  • Pouw, W., & Fuchs, S. (2022). Origins of vocal-entangled gesture. Neuroscience and Biobehavioral Reviews, 141: 104836. doi:10.1016/j.neubiorev.2022.104836.

    Abstract

    Gestures during speaking are typically understood in a representational framework: they represent absent or distal states of affairs by means of pointing, resemblance, or symbolic replacement. However, humans also gesture along with the rhythm of speaking, which is amenable to a non-representational perspective. Such a perspective centers on the phenomenon of vocal-entangled gestures and builds on evidence showing that when an upper limb with a certain mass decelerates/accelerates sufficiently, it yields impulses on the body that cascade in various ways into the respiratory–vocal system. It entails a physical entanglement between body motions, respiration, and vocal activities. It is shown that vocal-entangled gestures are realized in infant vocal–motor babbling before any representational use of gesture develops. Similarly, an overview is given of vocal-entangled processes in non-human animals. They can frequently be found in rats, bats, birds, and a range of other species that developed even earlier in the phylogenetic tree. Thus, the origins of human gesture lie in biomechanics, emerging early in ontogeny and running deep in phylogeny.
  • Praamstra, P., Plat, E. M., Meyer, A. S., & Horstink, M. W. I. M. (1999). Motor cortex activation in Parkinson's disease: Dissociation of electrocortical and peripheral measures of response generation. Movement Disorders, 14, 790-799. doi:10.1002/1531-8257(199909)14:5<790:AID-MDS1011>3.0.CO;2-A.

    Abstract

    This study investigated characteristics of motor cortex activation and response generation in Parkinson's disease with measures of electrocortical activity (lateralized readiness potential [LRP]), electromyographic activity (EMG), and isometric force in a noise-compatibility task. When presented with stimuli consisting of incompatible target and distracter elements asking for responses of opposite hands, patients were less able than control subjects to suppress activation of the motor cortex controlling the wrong response hand. This was manifested in the pattern of reaction times and in an incorrect lateralization of the LRP. Onset latency and rise time of the LRP did not differ between patients and control subjects, but EMG and response force developed more slowly in patients. Moreover, in patients but not in control subjects, the rate of development of EMG and response force decreased as reaction time increased. We hypothesize that this dissociation between electrocortical activity and peripheral measures in Parkinson's disease is the result of changes in motor cortex function that alter the relation between signal-related and movement-related neural activity in the motor cortex. In the LRP, this altered balance may obscure an abnormal development of movement-related neural activity.
  • Preisig, B., & Hervais-Adelman, A. (2022). The predictive value of individual electric field modeling for transcranial alternating current stimulation induced brain modulation. Frontiers in Cellular Neuroscience, 16: 818703. doi:10.3389/fncel.2022.818703.

    Abstract

    There is considerable individual variability in the reported effectiveness of non-invasive brain stimulation. This variability has often been ascribed to differences in the neuroanatomy and resulting differences in the induced electric field inside the brain. In this study, we addressed the question of whether individual differences in the induced electric field can predict the neurophysiological and behavioral consequences of gamma band tACS. In a within-subject experiment, bi-hemispheric gamma band tACS and sham stimulation were applied in alternating blocks to the participants’ superior temporal lobe, while task-evoked auditory brain activity was measured with concurrent functional magnetic resonance imaging (fMRI) and a dichotic listening task. Gamma tACS was applied with different interhemispheric phase lags. In a recent study, we could show that anti-phase tACS (180° interhemispheric phase lag), but not in-phase tACS (0° interhemispheric phase lag), selectively modulates interhemispheric brain connectivity. Using a T1 structural image of each participant’s brain, an individual simulation of the induced electric field was computed. From these simulations, we derived two predictor variables: maximal strength (average of the 10,000 voxels with largest electric field values) and precision of the electric field (spatial correlation between the electric field and the task-evoked brain activity during sham stimulation). We found considerable variability in the individual strength and precision of the electric fields. Importantly, the strength of the electric field over the right hemisphere predicted individual differences in tACS-induced brain connectivity changes. Moreover, we found in both hemispheres a statistical trend for the effect of electric field strength on tACS-induced BOLD signal changes. In contrast, the precision of the electric field did not predict any neurophysiological measure. Further, neither strength nor precision predicted interhemispheric integration. In conclusion, we found evidence for the dose-response relationship between individual differences in electric fields and tACS-induced activity and connectivity changes in concurrent fMRI. However, the fact that this relationship was stronger in the right hemisphere suggests that the relationship between the electric field parameters, neurophysiology, and behavior may be more complex for bi-hemispheric tACS.
  • Preisig, B., Riecke, L., & Hervais-Adelman, A. (2022). Speech sound categorization: The contribution of non-auditory and auditory cortical regions. NeuroImage, 258: 119375. doi:10.1016/j.neuroimage.2022.119375.

    Abstract

    Which processes in the human brain lead to the categorical perception of speech sounds? Investigation of this question is hampered by the fact that categorical speech perception is normally confounded by acoustic differences in the stimulus. By using ambiguous sounds, however, it is possible to dissociate acoustic from perceptual stimulus representations. Twenty-seven normally hearing individuals took part in an fMRI study in which they were presented with an ambiguous syllable (intermediate between /da/ and /ga/) in one ear and with a disambiguating acoustic feature (third formant, F3) in the other ear. Multi-voxel pattern searchlight analysis was used to identify brain areas that consistently differentiated between response patterns associated with different syllable reports. By comparing responses to different stimuli with identical syllable reports and identical stimuli with different syllable reports, we disambiguated whether these regions primarily differentiated the acoustics of the stimuli or the syllable report. We found that BOLD activity patterns in left perisylvian regions (STG, SMG), left inferior frontal regions (vMC, IFG, AI), left supplementary motor cortex (SMA/pre-SMA), and right motor and somatosensory regions (M1/S1) represent listeners’ syllable report irrespective of stimulus acoustics. Most of these regions are outside of what is traditionally regarded as auditory or phonological processing areas. Our results indicate that the process of speech sound categorization implicates decision-making mechanisms and auditory-motor transformations.

    Additional information

    figures and table
  • Price, K. M., Wigg, K. G., Eising, E., Feng, Y., Blokland, K., Wilkinson, M., Kerr, E. N., Guger, S. L., Quantitative Trait Working Group of the GenLang Consortium, Fisher, S. E., Lovett, M. W., Strug, L. J., & Barr, C. L. (2022). Hypothesis-driven genome-wide association studies provide novel insights into genetics of reading disabilities. Translational Psychiatry, 12: 495. doi:10.1038/s41398-022-02250-z.

    Abstract

    Reading Disability (RD) is often characterized by difficulties in the phonology of the language. While the molecular mechanisms underlying it are largely undetermined, loci are being revealed by genome-wide association studies (GWAS). In a previous GWAS for word reading (Price, 2020), we observed that top single-nucleotide polymorphisms (SNPs) were located near to or in genes involved in neuronal migration/axon guidance (NM/AG) or loci implicated in autism spectrum disorder (ASD). A prominent theory of RD etiology posits that it involves disturbed neuronal migration, while potential links between RD-ASD have not been extensively investigated. To improve power to identify associated loci, we up-weighted variants involved in NM/AG or ASD, separately, and performed a new Hypothesis-Driven (HD)–GWAS. The approach was applied to a Toronto RD sample and a meta-analysis of the GenLang Consortium. For the Toronto sample (n = 624), no SNPs reached significance; however, by gene-set analysis, the joint contribution of ASD-related genes passed the threshold (p ≈ 1.45 × 10⁻², threshold = 2.5 × 10⁻²). For the GenLang Cohort (n = 26,558), SNPs in DOCK7 and CDH4 showed significant association for the NM/AG hypothesis (sFDR q = 1.02 × 10⁻²). To make the GenLang dataset more similar to Toronto, we repeated the analysis restricting to samples selected for reading/language deficits (n = 4152). In this GenLang selected subset, we found significant association for a locus intergenic between BTG3-C21orf91 for both hypotheses (sFDR q < 9.00 × 10⁻⁴). This study contributes candidate loci to the genetics of word reading. Data also suggest that, although different variants may be involved, alleles implicated in ASD risk may be found in the same genes as those implicated in word reading. This finding is limited to the Toronto sample suggesting that ascertainment influences genetic associations.
  • Proios, H., Asaridou, S. S., & Brugger, P. (2008). Random number generation in patients with aphasia: A test of executive functions. Acta Neuropsychologica, 6(2), 157-168.

    Abstract

    Randomization performance was studied using the "Mental Dice Task" in 20 patients with aphasia (APH) and 101 elderly normal control subjects (NC). The produced sequences were compared to 100 computer-generated pseudorandom sequences with respect to 7 measures of sequential bias. The performance of APH differed significantly from NC participants, according to all but one measure, i.e. Turning Point Index (points of change between ascending and descending sequences). NC participants differed significantly from the computer-generated sequences, according to all measures of randomness. Finally, APH differed significantly from the computer simulator, according to all measures but mean Repetition Gap score (gap between a digit and its reoccurrence). Despite the heterogeneity of our APH group, there were no significant differences in randomization performance between patients with different language impairments. All the APH displayed a distinct performance profile, with more response stereotypy, counting tendencies, and inhibition problems, as hypothesised, while at the same time responding more randomly than NC by showing less of a cycling strategy and more number repetitions.
  • Rapold, C. J., & Widlok, T. (2008). Dimensions of variability in Northern Khoekhoe language and culture. Southern African Humanities, 20, 133-161. Retrieved from http://www.sahumanities.org.za/RapoldWidlok_203.aspx.

    Abstract

    This article takes an interdisciplinary route towards explaining the complex history of Hai//om culture and language. We begin this article with a short review of ideas relating to 'origins' and historical reconstructions as they are currently played out among Khoekhoe groups in Namibia, in particular with regard to the Hai//om. We then take a comparative look at parts of the kinship system and the tonology of ≠Âkhoe Hai//om and other variants of Khoekhoe. With regard to the kinship and naming system, we see patterns that show similarities with Nama and Damara on the one hand but also with 'San' groups on the other hand. With regard to tonology, new data from three northern Khoekhoe varieties shows similarities as well as differences with Standard Namibian Khoekhoe and Ju and Tuu varieties. The historical scenarios that might explain these facts suggest different centres of innovations and opposite directions of diffusion. The anthropological and linguistic data demonstrates that only a fine-grained and multi-layered approach that goes far beyond any simplistic dichotomies can do justice to the Hai//om riddle.
  • Rasenberg, M., Pouw, W., Özyürek, A., & Dingemanse, M. (2022). The multimodal nature of communicative efficiency in social interaction. Scientific Reports, 12: 19111. doi:10.1038/s41598-022-22883-w.

    Abstract

    How does communicative efficiency shape language use? We approach this question by studying it at the level of the dyad, and in terms of multimodal utterances. We investigate whether and how people minimize their joint speech and gesture efforts in face-to-face interactions, using linguistic and kinematic analyses. We zoom in on other-initiated repair—a conversational microcosm where people coordinate their utterances to solve problems with perceiving or understanding. We find that efforts in the spoken and gestural modalities are wielded in parallel across repair turns of different types, and that people repair conversational problems in the most cost-efficient way possible, minimizing the joint multimodal effort for the dyad as a whole. These results are in line with the principle of least collaborative effort in speech and with the reduction of joint costs in non-linguistic joint actions. The results extend our understanding of those coefficiency principles by revealing that they pertain to multimodal utterance design.

    Additional information

    Data and analysis scripts
  • Rasenberg, M., Özyürek, A., Bögels, S., & Dingemanse, M. (2022). The primacy of multimodal alignment in converging on shared symbols for novel referents. Discourse Processes, 59(3), 209-236. doi:10.1080/0163853X.2021.1992235.

    Abstract

    When people establish shared symbols for novel objects or concepts, they have been shown to rely on the use of multiple communicative modalities as well as on alignment (i.e., cross-participant repetition of communicative behavior). Yet these interactional resources have rarely been studied together, so little is known about if and how people combine multiple modalities in alignment to achieve joint reference. To investigate this, we systematically track the emergence of lexical and gestural alignment in a referential communication task with novel objects. Quantitative analyses reveal that people frequently use a combination of lexical and gestural alignment, and that such multimodal alignment tends to emerge earlier compared to unimodal alignment. Qualitative analyses of the interactional contexts in which alignment emerges reveal how people flexibly deploy lexical and gestural alignment (independently, simultaneously or successively) to adjust to communicative pressures.
  • Ravignani, A., & Garcia, M. (2022). A cross-species framework to identify vocal learning abilities in mammals. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 377: 20200394. doi:10.1098/rstb.2020.0394.

    Abstract

    Vocal production learning (VPL) is the experience-driven ability to produce novel vocal signals through imitation or modification of existing vocalizations. A parallel strand of research investigates acoustic allometry, namely how information about body size is conveyed by acoustic signals. Recently, we proposed that deviation from acoustic allometry principles as a result of sexual selection may have been an intermediate step towards the evolution of vocal learning abilities in mammals. Adopting a more hypothesis-neutral stance, here we perform phylogenetic regressions and other analyses further testing a potential link between VPL and being an allometric outlier. We find that multiple species belonging to VPL clades deviate from allometric scaling but in the opposite direction to that expected from size exaggeration mechanisms. In other words, our correlational approach finds an association between VPL and being an allometric outlier. However, the direction of this association, contra our original hypothesis, may indicate that VPL did not necessarily emerge via sexual selection for size exaggeration: VPL clades show higher vocalization frequencies than expected. In addition, our approach allows us to identify species with potential for VPL abilities: we hypothesize that those outliers from acoustic allometry lying above the regression line may be VPL species. Our results may help better understand the cross-species diversity, variability and aetiology of VPL, which among other things is a key underpinning of speech in our species.

    This article is part of the theme issue ‘Voice modulation: from origin and mechanism to social impact (Part II)’.

  • Ravignani, A., Asano, R., Valente, D., Ferretti, F., Hartmann, S., Hayashi, M., Jadoul, Y., Martins, M., Oseki, Y., Rodrigues, E. D., Vasileva, O., & Wacewicz, S. (Eds.). (2022). The evolution of language: Proceedings of the Joint Conference on Language Evolution (JCoLE). Nijmegen: Joint Conference on Language Evolution (JCoLE). doi:10.17617/2.3398549.
  • Ravignani, A. (2022). Language evolution: Sound meets gesture? [Review of the book From signal to symbol: The evolution of language by R. Planer and K. Sterelny]. Evolutionary Anthropology, 31, 317-318. doi:10.1002/evan.21961.
  • Raviv, L., Lupyan, G., & Green, S. C. (2022). How variability shapes learning and generalization. Trends in Cognitive Sciences, 26(6), 462-483. doi:10.1016/j.tics.2022.03.007.

    Abstract

    Learning is using past experiences to inform new behaviors and actions. Because all experiences are unique, learning always requires some generalization. An effective way of improving generalization is to expose learners to more variable (and thus often more representative) input. More variability tends to make initial learning more challenging, but eventually leads to more general and robust performance. This core principle has been repeatedly rediscovered and renamed in different domains (e.g., contextual diversity, desirable difficulties, variability of practice). Reviewing this basic result as it has been formulated in different domains allows us to identify key patterns, distinguish between different kinds of variability, discuss the roles of varying task-relevant versus irrelevant dimensions, and examine the effects of introducing variability at different points in training.
  • Raviv, L., Jacobson, S. L., Plotnik, J. M., Bowman, J., Lynch, V., & Benítez-Burraco, A. (2022). Elephants as a new animal model for studying the evolution of language as a result of self-domestication. In A. Ravignani, R. Asano, D. Valente, F. Ferretti, S. Hartmann, M. Hayashi, Y. Jadoul, M. Martins, Y. Oseki, E. D. Rodrigues, O. Vasileva, & S. Wacewicz (Eds.), The evolution of language: Proceedings of the Joint Conference on Language Evolution (JCoLE) (pp. 606-608). Nijmegen: Joint Conference on Language Evolution (JCoLE).
  • Raviv, L., Peckre, L. R., & Boeckx, C. (2022). What is simple is actually quite complex: A critical note on terminology in the domain of language and communication. Journal of Comparative Psychology, 136(4), 215-220. doi:10.1037/com0000328.

    Abstract

    On the surface, the fields of animal communication and human linguistics have arrived at conflicting theories and conclusions with respect to the effect of social complexity on communicative complexity. For example, an increase in group size is argued to have opposite consequences on human versus animal communication systems: although an increase in human community size leads to some types of language simplification, an increase in animal group size leads to an increase in signal complexity. But do human and animal communication systems really show such a fundamental discrepancy? Our key message is that the tension between these two adjacent fields is the result of (a) a focus on different levels of analysis (namely, signal variation or grammar-like rules) and (b) an inconsistent use of terminology (namely, the terms “simple” and “complex”). By disentangling and clarifying these terms with respect to different measures of communicative complexity, we show that although animal and human communication systems indeed show some contradictory effects with respect to signal variability, they actually display essentially the same patterns with respect to grammar-like structure. This is despite the fact that the definitions of complexity and simplicity are actually aligned for signal variability, but diverge for grammatical structure. We conclude by advocating for the use of more objective and descriptive terms instead of terms such as “complexity,” which can be applied uniformly for human and animal communication systems—leading to comparable descriptions of findings across species and promoting a more productive dialogue between fields.
  • Razafindrazaka, H., & Brucato, N. (2008). Esclavage et diaspora Africaine. In É. Crubézy, J. Braga, & G. Larrouy (Eds.), Anthropobiologie: Évolution humaine (pp. 326-328). Issy-les-Moulineaux: Elsevier Masson.
  • Razafindrazaka, H., Brucato, N., & Mazières, S. (2008). Les Noirs marrons. In É. Crubézy, J. Braga, & G. Larrouy (Eds.), Anthropobiologie: Évolution humaine (pp. 319-320). Issy-les-Moulineaux: Elsevier Masson.
  • Redl, T., Szuba, A., de Swart, P., Frank, S. L., & de Hoop, H. (2022). Masculine generic pronouns as a gender cue in generic statements. Discourse Processes, 59, 828-845. doi:10.1080/0163853X.2022.2148071.

    Abstract

    An eye-tracking experiment was conducted with speakers of Dutch (N = 84, 36 male), a language that falls between grammatical and natural-gender languages. We tested whether a masculine generic pronoun causes a male bias when used in generic statements—that is, in the absence of a specific referent. We tested two types of generic statements by varying conceptual number, hypothesizing that the pronoun zijn “his” was more likely to cause a male bias with a conceptually singular than a conceptually plural antecedent (e.g., Someone (conceptually singular)/Everyone (conceptually plural) with perfect pitch can tune his instrument quickly). We found that male participants exhibited a male bias, but only with the conceptually singular antecedent. Female participants showed no signs of a male bias. The results show that the generically intended masculine pronoun zijn “his” leads to a male bias in conceptually singular generic contexts, but that this further depends on participant gender.

  • Reinisch, E., Jesse, A., & McQueen, J. M. (2008). The strength of stress-related lexical competition depends on the presence of first-syllable stress. In Proceedings of Interspeech 2008 (pp. 1954-1954).

    Abstract

    Dutch listeners' looks to printed words were tracked while they listened to instructions to click with their mouse on one of them. When presented with targets from word pairs where the first two syllables were segmentally identical but differed in stress location, listeners used stress information to recognize the target before segmental information disambiguated the words. Furthermore, the amount of lexical competition was influenced by the presence or absence of word-initial stress.
