Publications

  • Abbott, M. J., Angele, B., Ahn, D., & Rayner, K. (2015). Skipping syntactically illegal the previews: The role of predictability. Journal of Experimental Psychology: Learning, Memory, and Cognition, 41(6), 1703-1714. doi:10.1037/xlm0000142.

    Abstract

    Readers tend to skip words, particularly when they are short, frequent, or predictable. Angele and Rayner (2013) recently reported that readers are often unable to detect syntactic anomalies in parafoveal vision. In the present study, we manipulated target word predictability to assess whether contextual constraint modulates the-skipping behavior. The results provide further evidence that readers frequently skip the article the when infelicitous in context. Readers skipped predictable words more often than unpredictable words, even when the, which was syntactically illegal and unpredictable from the prior context, was presented as a parafoveal preview. The results of the experiment were simulated using E-Z Reader 10 by assuming that cloze probability can be dissociated from parafoveal visual input. It appears that when a short word is predictable in context, a decision to skip it can be made even if the information available parafoveally conflicts both visually and syntactically with those predictions.
  • Abdel Rahman, R., Van Turennout, M., & Levelt, W. J. M. (2003). Phonological encoding is not contingent on semantic feature retrieval: An electrophysiological study on object naming. Journal of Experimental Psychology: Learning, Memory, and Cognition, 29(5), 850-860. doi:10.1037/0278-7393.29.5.850.

    Abstract

    In the present study, the authors examined with event-related brain potentials whether phonological encoding in picture naming is mediated by basic semantic feature retrieval or proceeds independently. In a manual 2-choice go/no-go task the choice response depended on a semantic classification (animal vs. object) and the execution decision was contingent on a classification of name phonology (vowel vs. consonant). The introduction of a semantic task mixing procedure allowed for selectively manipulating the speed of semantic feature retrieval. Serial and parallel models were tested on the basis of their differential predictions for the effect of this manipulation on the lateralized readiness potential and N200 component. The findings indicate that phonological code retrieval is not strictly contingent on prior basic semantic feature processing.
  • Abdel Rahman, R., & Sommer, W. (2003). Does phonological encoding in speech production always follow the retrieval of semantic knowledge?: Electrophysiological evidence for parallel processing. Cognitive Brain Research, 16(3), 372-382. doi:10.1016/S0926-6410(02)00305-1.

    Abstract

    In this article a new approach to the distinction between serial/contingent and parallel/independent processing in the human cognitive system is applied to semantic knowledge retrieval and phonological encoding of the word form in picture naming. In two-choice go/no-go tasks, pictures of objects were manually classified on the basis of semantic and phonological information. An additional manipulation of the duration of the faster and presumably mediating process (semantic retrieval) allowed us to derive differential predictions from the two alternative models. These predictions were tested with two event-related brain potentials (ERPs), the lateralized readiness potential (LRP) and the N200. The findings indicate that phonological encoding can proceed in parallel with the retrieval of semantic features. A suggestion is made as to how these findings can be accommodated within models of speech production.
  • Acheson, D. J., & Hagoort, P. (2014). Twisting tongues to test for conflict monitoring in speech production. Frontiers in Human Neuroscience, 8: 206. doi:10.3389/fnhum.2014.00206.

    Abstract

    A number of recent studies have hypothesized that monitoring in speech production may occur via domain-general mechanisms responsible for the detection of response conflict. Outside of language, two ERP components have consistently been elicited in conflict-inducing tasks (e.g., the flanker task): the stimulus-locked N2 on correct trials, and the response-locked error-related negativity (ERN). The present investigation used these electrophysiological markers to test whether a common response conflict monitor is responsible for monitoring in speech and non-speech tasks. Electroencephalography (EEG) was recorded while participants performed a tongue twister (TT) task and a manual version of the flanker task. In the TT task, people rapidly read sequences of four nonwords arranged in TT and non-TT patterns three times. In the flanker task, people responded with a left/right button press to a center-facing arrow, and conflict was manipulated by the congruency of the flanking arrows. Behavioral results showed typical effects of both tasks, with increased error rates and slower speech onset times for TT relative to non-TT trials and for incongruent relative to congruent flanker trials. In the flanker task, stimulus-locked EEG analyses replicated previous results, with a larger N2 for incongruent relative to congruent trials, and a response-locked ERN. In the TT task, stimulus-locked analyses revealed broad, frontally-distributed differences beginning around 50 ms and lasting until just before speech initiation, with TT trials more negative than non-TT trials; response-locked analyses revealed an ERN. Correlation analyses across these measures showed some associations within each task, but little evidence of systematic cross-task correlation. Although the present results do not speak against conflict signals from the production system serving as cues to self-monitoring, they are not consistent with signatures of response conflict being mediated by a single, domain-general conflict monitor.
  • Agus, T., Carrion Castillo, A., Pressnitzer, D., & Ramus, F. (2014). Perceptual learning of acoustic noise by individuals with dyslexia. Journal of Speech, Language, and Hearing Research, 57, 1069-1077. doi:10.1044/1092-4388(2013/13-0020).

    Abstract

    Purpose: A phonological deficit is thought to affect most individuals with developmental dyslexia. The present study addresses whether the phonological deficit is caused by difficulties with perceptual learning of fine acoustic details. Method: A demanding test of nonverbal auditory memory, “noise learning,” was administered to both adults with dyslexia and control adult participants. On each trial, listeners had to decide whether a stimulus was a 1-s noise token or 2 abutting presentations of the same 0.5-s noise token (repeated noise). Without the listener’s knowledge, the exact same noise tokens were presented over many trials. An improved ability to perform the task for such “reference” noises reflects learning of their acoustic details. Results: Listeners with dyslexia did not differ from controls in any aspect of the task, qualitatively or quantitatively. They required the same amount of training to achieve discrimination of repeated from nonrepeated noises, and they learned the reference noises as often and as rapidly as the control group. However, they did show all the hallmarks of dyslexia, including a well-characterized phonological deficit. Conclusion: The data did not support the hypothesis that deficits in basic auditory processing or nonverbal learning and memory are the cause of the phonological deficit in dyslexia.
  • Ahlsson, F., Åkerud, H., Schijven, D., Olivier, J., & Sundström-Poromaa, I. (2015). Gene expression in placentas from nondiabetic women giving birth to large for gestational age infants. Reproductive Sciences, 22(10), 1281-1288. doi:10.1177/1933719115578928.

    Abstract

    Gestational diabetes, obesity, and excessive weight gain are known independent risk factors for the birth of a large for gestational age (LGA) infant. However, only 1 in 10 infants born LGA is born to a mother with diabetes or obesity. Thus, the aim of the present study was to compare placental gene expression between healthy, nondiabetic mothers (n = 22) giving birth to LGA infants and body mass index-matched mothers (n = 24) giving birth to appropriate for gestational age infants. In the whole gene expression analysis, only 29 genes were found to be differentially expressed in LGA placentas. Top upregulated genes included insulin-like growth factor binding protein 1, aminolevulinate δ synthase 2, and prolactin, whereas top downregulated genes comprised leptin, gametocyte-specific factor 1, and collagen type XVII α 1. Two enriched gene networks were identified, namely, (1) lipid metabolism, small molecule biochemistry, and organismal development and (2) cellular development, cellular growth, proliferation, and tumor morphology.
  • Ahluwalia, T. S., Prins, B. P., Abdollahi, M., Armstrong, N. J., Aslibekyan, S., Bain, L., Jefferis, B., Baumert, J., Beekman, M., Ben-Shlomo, Y., Bis, J. C., Mitchell, B. D., De Geus, E., Delgado, G. E., Marek, D., Eriksson, J., Kajantie, E., Kanoni, S., Kemp, J. P., Lu, C., Marioni, R. E., McLachlan, S., Milaneschi, Y., Nolte, I. M., Petrelis, A. M., Porcu, E., Sabater-Lleal, M., Naderi, E., Seppälä, I., Shah, T., Singhal, G., Standl, M., Teumer, A., Thalamuthu, A., Thiering, E., Trompet, S., Ballantyne, C. M., Benjamin, E. J., Casas, J. P., Toben, C., Dedoussis, G., Deelen, J., Durda, P., Engmann, J., Feitosa, M. F., Grallert, H., Hammarstedt, A., Harris, S. E., Homuth, G., Hottenga, J.-J., Jalkanen, S., Jamshidi, Y., Jawahar, M. C., Jess, T., Kivimaki, M., Kleber, M. E., Lahti, J., Liu, Y., Marques-Vidal, P., Mellström, D., Mooijaart, S. P., Müller-Nurasyid, M., Penninx, B., Revez, J. A., Rossing, P., Räikkönen, K., Sattar, N., Scharnagl, H., Sennblad, B., Silveira, A., St Pourcain, B., Timpson, N. J., Trollor, J., CHARGE Inflammation Working Group, Van Dongen, J., Van Heemst, D., Visvikis-Siest, S., Vollenweider, P., Völker, U., Waldenberger, M., Willemsen, G., Zabaneh, D., Morris, R. W., Arnett, D. K., Baune, B. T., Boomsma, D. I., Chang, Y.-P.-C., Deary, I. J., Deloukas, P., Eriksson, J. G., Evans, D. M., Ferreira, M. A., Gaunt, T., Gudnason, V., Hamsten, A., Heinrich, J., Hingorani, A., Humphries, S. E., Jukema, J. W., Koenig, W., Kumari, M., Kutalik, Z., Lawlor, D. A., Lehtimäki, T., März, W., Mather, K. A., Naitza, S., Nauck, M., Ohlsson, C., Price, J. F., Raitakari, O., Rice, K., Sachdev, P. S., Slagboom, E., Sørensen, T. I. A., Spector, T., Stacey, D., Stathopoulou, M. G., Tanaka, T., Wannamethee, S. G., Whincup, P., Rotter, J. I., Dehghan, A., Boerwinkle, E., Psaty, B. M., Snieder, H., & Alizadeh, B. Z. (2021). Genome-wide association study of circulating interleukin 6 levels identifies novel loci. Human Molecular Genetics, 5(1), 393-409. doi:10.1093/hmg/ddab023.

    Abstract

    Interleukin 6 (IL-6) is a multifunctional cytokine with both pro- and anti-inflammatory properties with a heritability estimate of up to 61%. The circulating levels of IL-6 in blood have been associated with an increased risk of complex disease pathogenesis. We conducted a two-staged, discovery and replication meta genome-wide association study (GWAS) of circulating serum IL-6 levels comprising up to 67 428 (n_discovery = 52 654 and n_replication = 14 774) individuals of European ancestry. The inverse variance fixed effects based discovery meta-analysis, followed by replication, led to the identification of two independent loci: IL1F10/IL1RN rs6734238 on chromosome (Chr) 2q14 (P_combined = 1.8 × 10⁻¹¹) and HLA-DRB1/DRB5 rs660895 on Chr6p21 (P_combined = 1.5 × 10⁻¹⁰) in the combined meta-analyses of all samples. We also replicated the IL6R rs4537545 locus on Chr1q21 (P_combined = 1.2 × 10⁻¹²²). Our study identifies novel loci for circulating IL-6 levels uncovering new immunological and inflammatory pathways that may influence IL-6 pathobiology.
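
    The discovery stage described above pools per-cohort estimates by inverse-variance fixed-effects meta-analysis. The sketch below shows that pooling step under invented numbers; the effect sizes, standard errors, and cohort count are placeholders for illustration, not values from the study.

```python
import numpy as np
from scipy import stats

def fixed_effects_meta(betas, ses):
    """Inverse-variance weighted fixed-effects meta-analysis for one variant.

    betas: per-cohort effect estimates; ses: their standard errors.
    Returns the pooled estimate, its standard error, and a two-sided p-value.
    """
    betas, ses = np.asarray(betas, float), np.asarray(ses, float)
    w = 1.0 / ses**2                              # inverse-variance weights
    beta_pooled = np.sum(w * betas) / np.sum(w)
    se_pooled = np.sqrt(1.0 / np.sum(w))
    z = beta_pooled / se_pooled
    return beta_pooled, se_pooled, 2.0 * stats.norm.sf(abs(z))

# Hypothetical per-cohort summary statistics for a single SNP.
print(fixed_effects_meta([0.06, 0.04, 0.08], [0.02, 0.03, 0.025]))
```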
  • Ahn, D., Ferreira, V. S., & Gollan, T. H. (2021). Selective activation of language specific structural representations: Evidence from extended picture-word interference. Journal of Memory and Language, 120: 104249. doi:10.1016/j.jml.2021.104249.

    Abstract

    How do bilingual speakers represent and use information that guides the assembly of the words into phrases and sentences (i.e., sentence structures) for languages that have different word orders? Cross-language syntactic priming effects provide mixed evidence on whether bilinguals access sentence structures from both languages even when speaking just one. Here, we compared English monolinguals, Korean-immersed Korean-English bilinguals, and English-immersed Korean-English bilinguals while they produced noun phrases (“the lemon below the lobster”), which have different word orders in English and Korean (the Korean translation word order is [lobster][below][lemon]). We examined when speakers plan each noun using an extended picture-word interference paradigm, by measuring articulation times for each word in the phrase with either the distractor word “apple” (which slows the planning of “lemon”) or “crab” (which slows the planning of “lobster”). Results suggest that for phrases that are different in linear word order across languages, bilinguals only access the sentence structure of the one language they are actively speaking at the time, even when switching languages between trials.
  • Akker, E., & Cutler, A. (2003). Prosodic cues to semantic structure in native and nonnative listening. Bilingualism: Language and Cognition, 6(2), 81-96. doi:10.1017/S1366728903001056.

    Abstract

    Listeners efficiently exploit sentence prosody to direct attention to words bearing sentence accent. This effect has been explained as a search for focus, furthering rapid apprehension of semantic structure. A first experiment supported this explanation: English listeners detected phoneme targets in sentences more rapidly when the target-bearing words were in accented position or in focussed position, but the two effects interacted, consistent with the claim that the effects serve a common cause. In a second experiment a similar asymmetry was observed with Dutch listeners and Dutch sentences. In a third and a fourth experiment, proficient Dutch users of English heard English sentences; here, however, the two effects did not interact. The results suggest that less efficient mapping of prosody to semantics may be one way in which nonnative listening fails to equal native listening.
  • Alario, F.-X., Schiller, N. O., Domoto-Reilly, K., & Caramazza, A. (2003). The role of phonological and orthographic information in lexical selection. Brain and Language, 84(3), 372-398. doi:10.1016/S0093-934X(02)00556-4.

    Abstract

    We report the performance of two patients with lexico-semantic deficits following left MCA CVA. Both patients produced similar numbers of semantic paraphasias in naming tasks, but presented one crucial difference: grapheme-to-phoneme and phoneme-to-grapheme conversion procedures were available only to one of them. We investigated the impact of this availability on the process of lexical selection during word production. The patient for whom conversion procedures were not operational produced semantic errors in transcoding tasks such as reading and writing to dictation; furthermore, when asked to name a given picture in multiple output modalities—e.g., to say the name of a picture and immediately after to write it down—he produced lexically inconsistent responses. By contrast, the patient for whom conversion procedures were available did not produce semantic errors in transcoding tasks and did not produce lexically inconsistent responses in multiple picture-naming tasks. These observations are interpreted in the context of the summation hypothesis (Hillis & Caramazza, 1991), according to which the activation of lexical entries for production would be made on the basis of semantic information and, when available, on the basis of form-specific information. The implementation of this hypothesis in models of lexical access is discussed in detail.
  • Alday, P. M. (2015). Be Careful When Assuming the Obvious: Commentary on “The Placement of the Head that Minimizes Online Memory: A Complex Systems Approach”. Language Dynamics and Change, 5(1), 138-146. doi:10.1163/22105832-00501008.

    Abstract

    Ferrer-i-Cancho (this volume) presents a mathematical model of both the synchronic and diachronic nature of word order based on the assumption that memory costs are a never decreasing function of distance and a few very general linguistic assumptions. However, even these minimal and seemingly obvious assumptions are not as safe as they appear in light of recent typological and psycholinguistic evidence. The interaction of word order and memory has further depths to be explored.
  • Alday, P. M., Schlesewsky, M., & Bornkessel-Schlesewsky, I. (2015). Discovering prominence and its role in language processing: An individual (differences) approach. Linguistics Vanguard, 1(1), 201-213. doi:10.1515/lingvan-2014-1013.

    Abstract

    It has been suggested that, during real time language comprehension, the human language processing system attempts to identify the argument primarily responsible for the state of affairs (the “actor”) as quickly and unambiguously as possible. However, previous work on a prominence (e.g. animacy, definiteness, case marking) based heuristic for actor identification has suffered from underspecification of the relationship between different cue hierarchies. Qualitative work has yielded a partial ordering of many features (e.g. MacWhinney et al. 1984), but a precise quantification has remained elusive due to difficulties in exploring the full feature space in a particular language. Feature pairs tend to correlate strongly in individual languages for semantic-pragmatic reasons (e.g., animate arguments tend to be actors and actors tend to be morphosyntactically privileged), and it is thus difficult to create acceptable stimuli for a fully factorial design even for binary features. Moreover, the number of conditions grows exponentially with the number of features, so a fully crossed factorial design covering the entire feature space would be prohibitively long for a purely within-subjects design. Here, we demonstrate the feasibility of parameter estimation in a short experiment. We are able to estimate parameters at a single subject level for the parameters animacy, case and number. This opens the door for research into individual differences and population variation. Moreover, the framework we introduce here can be used in the field to measure more “exotic” languages and populations, even with small sample sizes. Finally, pooled single-subject results are used to reduce the number of free parameters in previous work based on the extended Argument Dependency Model (Bornkessel-Schlesewsky and Schlesewsky 2006, 2009, 2013, in press; Alday et al. 2014).
  • Alday, P. M. (2019). How much baseline correction do we need in ERP research? Extended GLM model can replace baseline correction while lifting its limits. Psychophysiology, 56(12): e13451. doi:10.1111/psyp.13451.

    Abstract

    Baseline correction plays an important role in past and current methodological debates in ERP research (e.g., the Tanner vs. Maess debate in the Journal of Neuroscience Methods), serving as a potential alternative to strong high-pass filtering. However, the very assumptions that underlie traditional baseline correction also undermine it, implying a reduction in the signal-to-noise ratio. In other words, traditional baseline correction is statistically unnecessary and even undesirable. Including the baseline interval as a predictor in a GLM-based statistical approach allows the data to determine how much baseline correction is needed, including both full traditional and no baseline correction as special cases. This reduces the amount of variance in the residual error term and thus has the potential to increase statistical power.
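
    The GLM argument can be made concrete with a toy single-channel simulation: rather than subtracting the baseline mean from each trial, the baseline mean enters the model as a predictor, so the estimated slope tells you how much correction the data support. This is only a sketch of the general idea with simulated data and invented variable names, not the paper's analysis code, which would additionally use mixed-effects models over subjects and channels.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200  # trials

# Simulate single-trial data: mean pre-stimulus (baseline) voltage and mean
# voltage in a post-stimulus window of interest, for two conditions.
condition = rng.choice(["A", "B"], size=n)
baseline = rng.normal(0, 2, size=n)                     # pre-stimulus mean
effect = np.where(condition == "B", 3.0, 0.0)           # true condition effect
signal = 0.4 * baseline + effect + rng.normal(0, 2, n)  # drift partly carries over

df = pd.DataFrame({"signal": signal, "baseline": baseline, "condition": condition})

# Traditional baseline correction forces a slope of exactly 1 on `baseline`
# (i.e., modelling signal - baseline); estimating the slope from the data
# instead nests both full correction and no correction as special cases.
model = smf.ols("signal ~ condition * baseline", data=df).fit()
print(model.summary())
```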
  • Alday, P. M. (2019). M/EEG analysis of naturalistic stories: a review from speech to language processing. Language, Cognition and Neuroscience, 34(4), 457-473. doi:10.1080/23273798.2018.1546882.

    Abstract

    M/EEG research using naturally spoken stories as stimuli has focused largely on speech and not language processing. The temporal resolution of M/EEG is a two-edged sword, allowing for the study of the fine acoustic structure of speech, yet easily overwhelmed by the temporal noise of variation in constituent length. Recent theories on the neural encoding of linguistic structure require the temporal resolution of M/EEG, yet suffer from confounds when studied on traditional, heavily controlled stimuli. Recent methodological advances allow for synthesising naturalistic designs and traditional, controlled designs into effective M/EEG research on naturalistic language. In this review, we highlight common threads throughout the at-times distinct research traditions of speech and language processing. We conclude by examining the tradeoffs and successes of three M/EEG studies on fully naturalistic language paradigms and the future directions they suggest.
  • Alday, P. M., & Kretzschmar, F. (2019). Speed-accuracy tradeoffs in brain and behavior: Testing the independence of P300 and N400 related processes in behavioral responses to sentence categorization. Frontiers in Human Neuroscience, 13: 285. doi:10.3389/fnhum.2019.00285.

    Abstract

    Although the N400 was originally discovered in a paradigm designed to elicit a P300 (Kutas and Hillyard, 1980), its relationship with the P300 and how both overlapping event-related potentials (ERPs) determine behavioral profiles is still elusive. Here we conducted an ERP (N = 20) and a multiple-response speed-accuracy tradeoff (SAT) experiment (N = 16) on distinct participant samples using an antonym paradigm (The opposite of black is white/nice/yellow with acceptability judgment). We hypothesized that SAT profiles incorporate processes of task-related decision-making (P300) and stimulus-related expectation violation (N400). We replicated previous ERP results (Roehm et al., 2007): in the correct condition (white), the expected target elicits a P300, while both expectation violations engender an N400 [reduced for related (yellow) vs. unrelated targets (nice)]. Using multivariate Bayesian mixed-effects models, we modeled the P300 and N400 responses simultaneously and found that correlation between residuals and subject-level random effects of each response window was minimal, suggesting that the components are largely independent. For the SAT data, we found that antonyms and unrelated targets had a similar slope (rate of increase in accuracy over time) and an asymptote at ceiling, while related targets showed both a lower slope and a lower asymptote, reaching only approximately 80% accuracy. Using a GLMM-based approach (Davidson and Martin, 2013), we modeled these dynamics using response time and condition as predictors. Replacing the predictor for condition with the averaged P300 and N400 amplitudes from the ERP experiment, we achieved identical model performance. We then examined the piecewise contribution of the P300 and N400 amplitudes with partial effects (see Hohenstein and Kliegl, 2015). Unsurprisingly, the P300 amplitude was the strongest contributor to the SAT-curve in the antonym condition and the N400 was the strongest contributor in the unrelated condition. In brief, this is the first demonstration of how overlapping ERP responses in one sample of participants predict behavioral SAT profiles of another sample. The P300 and N400 reflect two independent but interacting processes and the competition between these processes is reflected differently in behavioral parameters of speed and accuracy.
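
    The asymptote and rate parameters mentioned above correspond to the standard shifted-exponential SAT function. Below is a minimal sketch of fitting that function to simulated accuracy-by-time data; the numbers are invented, and the study itself modeled the dynamics with a GLMM rather than this least-squares fit.

```python
import numpy as np
from scipy.optimize import curve_fit

def sat_curve(t, asymptote, rate, intercept):
    """Shifted-exponential speed-accuracy tradeoff function.

    Accuracy rises from chance (0.5 for a yes/no judgment) toward
    `asymptote` at speed `rate` once processing time exceeds `intercept`.
    """
    chance = 0.5
    rising = chance + (asymptote - chance) * (1.0 - np.exp(-rate * (t - intercept)))
    return np.where(t > intercept, rising, chance)

# Simulated proportion correct at a series of response deadlines (seconds).
rng = np.random.default_rng(0)
t = np.linspace(0.2, 2.0, 12)
acc = sat_curve(t, 0.80, 3.0, 0.35) + rng.normal(0, 0.02, t.size)

params, _ = curve_fit(sat_curve, t, acc, p0=[0.9, 2.0, 0.3])
print(dict(zip(["asymptote", "rate", "intercept"], params)))
```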

  • Alday, P. M., Schlesewsky, M., & Bornkessel-Schlesewsky, I. (2014). Towards a Computational Model of Actor-Based Language Comprehension. Neuroinformatics, 12(1), 143-179. doi:10.1007/s12021-013-9198-x.

    Abstract

    Neurophysiological data from a range of typologically diverse languages provide evidence for a cross-linguistically valid, actor-based strategy of understanding sentence-level meaning. This strategy seeks to identify the participant primarily responsible for the state of affairs (the actor) as quickly and unambiguously as possible, thus resulting in competition for the actor role when there are multiple candidates. Due to its applicability across languages with vastly different characteristics, we have proposed that the actor strategy may derive from more basic cognitive or neurobiological organizational principles, though it is also shaped by distributional properties of the linguistic input (e.g. the morphosyntactic coding strategies for actors in a given language). Here, we describe an initial computational model of the actor strategy and how it interacts with language-specific properties. Specifically, we contrast two distance metrics derived from the output of the computational model (one weighted and one unweighted) as potential measures of the degree of competition for actorhood by testing how well they predict modulations of electrophysiological activity engendered by language processing. To this end, we present an EEG study on word order processing in German and use linear mixed-effects models to assess the effect of the various distance metrics. Our results show that a weighted metric, which takes into account the weighting of an actor-identifying feature in the language under consideration, outperforms an unweighted distance measure. We conclude that actor competition effects cannot be reduced to feature overlap between multiple sentence participants and thereby to the notion of similarity-based interference, which is prominent in current memory-based models of language processing. Finally, we argue that, in addition to illuminating the underlying neurocognitive mechanisms of actor competition, the present model can form the basis for a more comprehensive, neurobiologically plausible computational model of constructing sentence-level meaning.
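
    To make the weighted versus unweighted contrast concrete, the sketch below compares two sentence arguments on a few prominence features, once with equal weighting and once with language-specific weights. The feature set, coding, and weight values are placeholders invented for illustration; they are not the features or weights estimated in the paper.

```python
import numpy as np

# Prominence features coded 1 (actor-supporting) or 0 for each argument;
# hypothetical values for a German object-initial sentence.
features = ["nominative", "animate", "first_position"]
arg1 = np.array([0, 1, 1])   # e.g. an animate, sentence-initial accusative
arg2 = np.array([1, 1, 0])   # e.g. an animate, non-initial nominative

# Hypothetical language-specific weights: case outweighs the other cues.
weights = np.array([0.6, 0.25, 0.15])

def unweighted_distance(a, b):
    """Mean absolute feature difference, all cues counted equally."""
    return np.abs(a - b).mean()

def weighted_distance(a, b, w):
    """Weighted absolute feature difference, normalised by total weight."""
    return np.sum(w * np.abs(a - b)) / np.sum(w)

# Smaller distance = more similar actor profiles = stronger competition.
print("unweighted:", unweighted_distance(arg1, arg2))
print("weighted:  ", weighted_distance(arg1, arg2, weights))
```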
  • Alferink, I., & Gullberg, M. (2014). French-Dutch bilinguals do not maintain obligatory semantic distinctions: Evidence from placement verbs. Bilingualism: Language and Cognition, 17, 22-37. doi:10.1017/S136672891300028X.

    Abstract

    It is often said that bilinguals are not the sum of two monolinguals but that bilingual systems represent a third pattern. This study explores the exact nature of this pattern. We ask whether there is evidence of a merged system when one language makes an obligatory distinction that the other one does not, namely in the case of placement verbs in French and Dutch, and whether such a merged system is realised as a more general or a more specific system. The results show that in elicited descriptions Belgian French-Dutch bilinguals drop one of the categories in one of the languages, resulting in a more general semantic system in comparison with the non-contact variety. They do not uphold the obligatory distinction neither in the verb nor elsewhere, despite its communicative relevance. This raises important questions regarding how widespread these differences are and what drives these patterns.
  • Alhama, R. G., & Zuidema, W. (2019). A review of computational models of basic rule learning: The neural-symbolic debate and beyond. Psychonomic Bulletin & Review, 26(4), 1174-1194. doi:10.3758/s13423-019-01602-z.

    Abstract

    We present a critical review of computational models of generalization of simple grammar-like rules, such as ABA and ABB. In particular, we focus on models attempting to account for the empirical results of Marcus et al. (Science, 283(5398), 77–80, 1999). In that study, evidence is reported of generalization behavior by 7-month-old infants, using an Artificial Language Learning paradigm. The authors fail to replicate this behavior in neural network simulations, and claim that this failure reveals inherent limitations of a whole class of neural networks: those that do not incorporate symbolic operations. A great number of computational models were proposed in follow-up studies, fuelling a heated debate about what is required for a model to generalize. Twenty years later, this debate is still not settled. In this paper, we review a large number of the proposed models. We present a critical analysis of those models, in terms of how they contribute to answering the most relevant questions raised by the experiment. After identifying which aspects require further research, we propose a list of desiderata for advancing our understanding of generalization.
  • Alibali, M. W., Flevares, L. M., & Goldin-Meadow, S. (1997). Assessing knowledge conveyed in gesture: Do teachers have the upper hand? Journal of Educational Psychology, 89(1), 183-193. doi:10.1037/0022-0663.89.1.183.

    Abstract

    Children's gestures can reveal important information about their problem-solving strategies. This study investigated whether the information children express only in gesture is accessible to adults not trained in gesture coding. Twenty teachers and 20 undergraduates viewed videotaped vignettes of 12 children explaining their solutions to equations. Six children expressed the same strategy in speech and gesture, and 6 expressed different strategies. After each vignette, adults described the child's reasoning. For children who expressed different strategies in speech and gesture, both teachers and undergraduates frequently described strategies that children had not expressed in speech. These additional strategies could often be traced to the children's gestures. Sensitivity to gesture was comparable for teachers and undergraduates. Thus, even without training, adults glean information not only from children's words but also from their hands.
  • Ambridge, B., Kidd, E., Rowland, C. F., & Theakston, A. L. (2015). Authors' response [The ubiquity of frequency effects in first language acquisition]. Journal of Child Language, 42(2), 316-322. doi:10.1017/S0305000914000841.

    Abstract

    Our target paper argued for the ubiquity of frequency effects in acquisition, and that any comprehensive theory must take into account the multiplicity of ways that frequently occurring and co-occurring linguistic units affect the acquisition process. The commentaries on the paper provide a largely unanimous endorsement of this position, but raise additional issues likely to frame further discussion and theoretical development. Specifically, while most commentators did not deny the importance of frequency effects, all saw this as the tip of the theoretical iceberg. In this short response we discuss common themes raised in the commentaries, focusing on the broader issue of what frequency effects mean for language acquisition.

  • Ambridge, B., Pine, J. M., Rowland, C. F., Freudenthal, D., & Chang, F. (2014). Avoiding dative overgeneralisation errors: semantics, statistics or both? Language, Cognition and Neuroscience, 29(2), 218-243. doi:10.1080/01690965.2012.738300.

    Abstract

    How do children eventually come to avoid the production of overgeneralisation errors, in particular, those involving the dative (e.g., *I said her “no”)? The present study addressed this question by obtaining from adults and children (5–6, 9–10 years) judgements of well-formed and over-general datives with 301 different verbs (44 for children). A significant effect of pre-emption—whereby the use of a verb in the prepositional-object (PO)-dative construction constitutes evidence that double-object (DO)-dative uses are not permitted—was observed for every age group. A significant effect of entrenchment—whereby the use of a verb in any construction constitutes evidence that unattested dative uses are not permitted—was also observed for every age group, with both predictors also accounting for developmental change between ages 5–6 and 9–10 years. Adults demonstrated knowledge of a morphophonological constraint that prohibits Latinate verbs from appearing in the DO-dative construction (e.g., *I suggested her the trip). Verbs’ semantic properties (supplied by independent adult raters) explained additional variance for all groups and developmentally, with the relative influence of narrow- vs broad-range semantic properties increasing with age. We conclude by outlining an account of the formation and restriction of argument-structure generalisations designed to accommodate these findings.
  • Ambridge, B., Bidgood, A., Twomey, K. E., Pine, J. M., Rowland, C. F., & Freudenthal, D. (2015). Preemption versus Entrenchment: Towards a Construction-General Solution to the Problem of the Retreat from Verb Argument Structure Overgeneralization. PLoS One, 10(4): e0123723. doi:10.1371/journal.pone.0123723.

    Abstract

    Participants aged 5;2-6;8, 9;2-10;6 and 18;1-22;2 (72 at each age) rated verb argument structure overgeneralization errors (e.g., *Daddy giggled the baby) using a five-point scale. The study was designed to investigate the feasibility of two proposed construction-general solutions to the question of how children retreat from, or avoid, such errors. No support was found for the prediction of the preemption hypothesis that the greater the frequency of the verb in the single most nearly synonymous construction (for this example, the periphrastic causative; e.g., Daddy made the baby giggle), the lower the acceptability of the error. Support was found, however, for the prediction of the entrenchment hypothesis that the greater the overall frequency of the verb, regardless of construction, the lower the acceptability of the error, at least for the two older groups. Thus while entrenchment appears to be a robust solution to the problem of the retreat from error, and one that generalizes across different error types, we did not find evidence that this is the case for preemption. The implication is that the solution to the retreat from error lies not with specialized mechanisms, but rather in a probabilistic process of construction competition.
  • Ambridge, B., Kidd, E., Rowland, C. F., & Theakston, A. L. (2015). The ubiquity of frequency effects in first language acquisition. Journal of Child Language, 42(2), 239-273. doi:10.1017/S030500091400049X.

    Abstract

    This review article presents evidence for the claim that frequency effects are pervasive in children's first language acquisition, and hence constitute a phenomenon that any successful account must explain. The article is organized around four key domains of research: children's acquisition of single words, inflectional morphology, simple syntactic constructions, and more advanced constructions. In presenting this evidence, we develop five theses. (i) There exist different types of frequency effect, from effects at the level of concrete lexical strings to effects at the level of abstract cues to thematic-role assignment, as well as effects of both token and type, and absolute and relative, frequency. High-frequency forms are (ii) early acquired and (iii) prevent errors in contexts where they are the target, but also (iv) cause errors in contexts in which a competing lower-frequency form is the target. (v) Frequency effects interact with other factors (e.g. serial position, utterance length), and the patterning of these interactions is generally informative with regard to the nature of the learning mechanism. We conclude by arguing that any successful account of language acquisition, from whatever theoretical standpoint, must be frequency sensitive to the extent that it can explain the effects documented in this review, and outline some types of account that do and do not meet this criterion.

  • Araújo, S., Huettig, F., & Meyer, A. S. (2021). What underlies the deficit in rapid automatized naming (RAN) in adults with dyslexia? Evidence from eye movements. Scientific Studies of Reading, 25(6), 534-549. doi:10.1080/10888438.2020.1867863.

    Abstract

    This eye-tracking study explored how phonological encoding and speech production planning for successive words are coordinated in adult readers with dyslexia (N = 22) and control readers (N = 25) during rapid automatized naming (RAN). Using an object-RAN task, we orthogonally manipulated the word-form frequency and phonological neighborhood density of the object names and assessed the effects on speech and eye movements and their temporal coordination. In both groups, there was a significant interaction between word frequency and neighborhood density: shorter fixations for dense than for sparse neighborhoods were observed for low-, but not for high-frequency words. This finding does not suggest a specific difficulty in lexical phonological access in dyslexia. However, in readers with dyslexia only, these lexical effects percolated to the late processing stages, indicated by longer offset eye-speech lags. We close by discussing potential reasons for this finding, including suboptimal specification of phonological representations and deficits in attention control or in multi-item coordination.
  • Araújo, S., Fernandes, T., & Huettig, F. (2019). Learning to read facilitates retrieval of phonological representations in rapid automatized naming: Evidence from unschooled illiterate, ex-illiterate, and schooled literate adults. Developmental Science, 22(4): e12783. doi:10.1111/desc.12783.

    Abstract

    Rapid automatized naming (RAN) of visual items is a powerful predictor of reading skills. However, the direction and locus of the association between RAN and reading is still largely unclear. Here we investigated whether literacy acquisition directly bolsters RAN efficiency for objects, adopting a strong methodological design, by testing three groups of adults matched in age and socioeconomic variables, who differed only in literacy/schooling: unschooled illiterate and ex-illiterate, and schooled literate adults. To investigate in a fine-grained manner whether and how literacy facilitates lexical retrieval, we orthogonally manipulated the word-form frequency (high vs. low) and phonological neighborhood density (dense vs. sparse) of the objects’ names. We observed that literacy experience enhances the automaticity with which visual stimuli (e.g., objects) can be retrieved and named: relative to readers (ex-illiterate and literate), illiterate adults performed worse on RAN. Crucially, the group difference was exacerbated and significant only for those items that were of low frequency and from sparse neighborhoods. These results thus suggest that, regardless of schooling and age at which literacy was acquired, learning to read facilitates the access to and retrieval of phonological representations, especially of difficult lexical items.
  • Araújo, S., Faísca, L., Bramão, I., Petersson, K. M., & Reis, A. (2014). Lexical and phonological processes in dyslexic readers: Evidences from a visual lexical decision task. Dyslexia, 20, 38-53. doi:10.1002/dys.1461.

    Abstract

    The aim of the present study was to investigate whether reading failure in the context of an orthography of intermediate consistency is linked to inefficient use of the lexical orthographic reading procedure. The performance of typically developing and dyslexic Portuguese-speaking children was examined in a lexical decision task, where the stimulus lexicality, word frequency and length were manipulated. Both lexicality and length effects were larger in the dyslexic group than in controls, although the interaction between group and frequency disappeared when the data were transformed to control for general performance factors. Children with dyslexia were influenced in lexical decision making by the stimulus length of words and pseudowords, whereas age-matched controls were influenced by the length of pseudowords only. These findings suggest that non-impaired readers rely mainly on lexical orthographic information, but children with dyslexia preferentially use the phonological decoding procedure—albeit poorly—most likely because they struggle to process orthographic inputs as a whole, as controls do. Accordingly, dyslexic children showed significantly poorer performance than controls for all types of stimuli, including words that could be considered over-learned, such as high-frequency words. This suggests that their orthographic lexical entries are less established in the orthographic lexicon.
  • Araújo, S., Faísca, L., Bramão, I., Reis, A., & Petersson, K. M. (2015). Lexical and sublexical orthographic processing: An ERP study with skilled and dyslexic adult readers. Brain and Language, 141, 16-27. doi:10.1016/j.bandl.2014.11.007.

    Abstract

    This ERP study investigated the cognitive nature of the P1–N1 components during orthographic processing. We used an implicit reading task with various types of stimuli involving different amounts of sublexical or lexical orthographic processing (words, pseudohomophones, pseudowords, nonwords, and symbols), and tested average and dyslexic readers. An orthographic regularity effect (pseudowords–nonwords contrast) was observed in the average but not in the dyslexic group. This suggests an early sensitivity to the dependencies among letters in word-forms that reflect orthographic structure, while the dyslexic brain apparently fails to be appropriately sensitive to these complex features. Moreover, in the adults the N1-response may already reflect lexical access: (i) the N1 was sensitive to the familiar vs. less familiar orthographic sequence contrast; and (ii) early effects of the phonological form (words–pseudohomophones contrast) were also found. Finally, the later N320 component was attenuated in the dyslexics, suggesting suboptimal processing in later stages of phonological analysis.
  • Araújo, S., Reis, A., Petersson, K. M., & Faísca, L. (2015). Rapid automatized naming and reading performance: A meta-analysis. Journal of Educational Psychology, 107(3), 868-883. doi:10.1037/edu0000006.

    Abstract

    Evidence that rapid naming skill is associated with reading ability has become increasingly prevalent in recent years. However, there is considerable variation in the literature concerning the magnitude of this relationship. The objective of the present study was to provide a comprehensive analysis of the evidence on the relationship between rapid automatized naming (RAN) and reading performance. To this end, we conducted a meta-analysis of the correlational relationship between these 2 constructs to (a) determine the overall strength of the RAN–reading association and (b) identify variables that systematically moderate this relationship. A random-effects model analysis of data from 137 studies (857 effect sizes; 28,826 participants) indicated a moderate-to-strong relationship between RAN and reading performance (r = .43, I² = 68.40). Further analyses revealed that RAN contributes to the 4 measures of reading (word reading, text reading, non-word reading, and reading comprehension), but higher coefficients emerged in favor of real word reading and text reading. RAN stimulus type and type of reading score were the factors with the greatest moderator effect on the magnitude of the RAN–reading relationship. The consistency of orthography and the subjects’ grade level were also found to impact this relationship, although the effect was contingent on reading outcome. It was less evident whether the subjects’ reading proficiency played a role in the relationship. Implications for future studies are discussed.
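
    A random-effects pooling of correlations of the kind reported above is typically done on Fisher z-transformed effect sizes with a DerSimonian-Laird estimate of between-study variance. The sketch below illustrates that computation on a handful of invented study correlations and sample sizes; it is not the 137-study dataset or the analysis code used in the paper.

```python
import numpy as np

def random_effects_r(rs, ns):
    """Random-effects meta-analysis of correlations via Fisher's z."""
    rs, ns = np.asarray(rs, float), np.asarray(ns, float)
    z = np.arctanh(rs)                  # Fisher z transform
    v = 1.0 / (ns - 3.0)                # within-study variance of z
    w = 1.0 / v                         # fixed-effect weights
    z_fixed = np.sum(w * z) / np.sum(w)
    # DerSimonian-Laird estimate of between-study variance tau^2
    q = np.sum(w * (z - z_fixed) ** 2)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    df = len(rs) - 1
    tau2 = max(0.0, (q - df) / c)
    w_star = 1.0 / (v + tau2)           # random-effects weights
    z_random = np.sum(w_star * z) / np.sum(w_star)
    i2 = max(0.0, (q - df) / q) * 100.0 # heterogeneity, in percent
    return np.tanh(z_random), i2

# Hypothetical study-level correlations and sample sizes.
print(random_effects_r([0.35, 0.48, 0.51, 0.40], [120, 85, 200, 150]))
```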
  • Armeni, K., Willems, R. M., Van den Bosch, A., & Schoffelen, J.-M. (2019). Frequency-specific brain dynamics related to prediction during language comprehension. NeuroImage, 198, 283-295. doi:10.1016/j.neuroimage.2019.04.083.

    Abstract

    The brain's remarkable capacity to process spoken language virtually in real time requires fast and efficient information processing machinery. In this study, we investigated how frequency-specific brain dynamics relate to models of probabilistic language prediction during auditory narrative comprehension. We recorded MEG activity while participants were listening to auditory stories in Dutch. Using trigram statistical language models, we estimated for every word in a story its conditional probability of occurrence. On the basis of word probabilities, we computed how unexpected the current word is given its context (word perplexity) and how (un)predictable the current linguistic context is (word entropy). We then evaluated whether source-reconstructed MEG oscillations at different frequency bands are modulated as a function of these language processing metrics. We show that theta-band source dynamics are increased in high relative to low entropy states, likely reflecting lexical computations. Beta-band dynamics are increased in situations of low word entropy and perplexity possibly reflecting maintenance of ongoing cognitive context. These findings lend support to the idea that the brain engages in the active generation and evaluation of predicted language based on the statistical properties of the input signal.
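
    As a sketch of the per-word metrics described above, the snippet below builds an unsmoothed trigram model from a toy corpus and computes a word's conditional probability, its surprisal in bits (one common convention takes per-word perplexity as 2 raised to the surprisal), and the entropy over possible continuations of a context. The toy corpus and the absence of smoothing are simplifications; the study used trigram models trained on large Dutch corpora.

```python
import math
from collections import Counter, defaultdict

# Toy corpus standing in for the large training corpora used in the study.
corpus = "the cat sat on the mat the cat ate the fish".split()

bigrams = Counter(zip(corpus, corpus[1:]))
trigrams = Counter(zip(corpus, corpus[1:], corpus[2:]))
continuations = defaultdict(Counter)
for (w1, w2, w3), c in trigrams.items():
    continuations[(w1, w2)][w3] += c

def cond_prob(w1, w2, w3):
    """P(w3 | w1 w2) by maximum likelihood (no smoothing)."""
    return trigrams[(w1, w2, w3)] / bigrams[(w1, w2)]

def surprisal(w1, w2, w3):
    """How unexpected w3 is in its context, in bits."""
    return -math.log2(cond_prob(w1, w2, w3))

def entropy(w1, w2):
    """Uncertainty about the upcoming word given the context, in bits."""
    dist = continuations[(w1, w2)]
    total = sum(dist.values())
    return -sum((c / total) * math.log2(c / total) for c in dist.values())

print(surprisal("the", "cat", "sat"), entropy("the", "cat"))
```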

  • Arunkumar, M., Van Paridon, J., Ostarek, M., & Huettig, F. (2021). Do illiterates have illusions? A conceptual (non)replication of Luria (1976). Journal of Cultural Cognitive Science, 5, 143-158. doi:10.1007/s41809-021-00080-x.

    Abstract

    Luria (1976) famously observed that people who never learnt to read and write do not perceive visual illusions. We conducted a conceptual replication of the Luria study of the effect of literacy on the processing of visual illusions. We designed two carefully controlled experiments with 161 participants with varying literacy levels ranging from complete illiterates to high literates in Chennai, India. Accuracy and reaction time in the identification of two types of visual shape and color illusions and the identification of appropriate control images were measured. Separate statistical analyses of Experiments 1 and 2 as well as pooled analyses of both experiments do not provide any support for the notion that literacy affects the perception of visual illusions. Our large sample, carefully controlled study strongly suggests that literacy does not meaningfully affect the identification of visual illusions and raises questions about other reports of cultural effects on illusion perception.
  • Asaridou, S. S., Hagoort, P., & McQueen, J. M. (2015). Effects of early bilingual experience with a tone and a non-tone language on speech-music integration. PLoS One, 10(12): e0144225. doi:10.1371/journal.pone.0144225.

    Abstract

    We investigated music and language processing in a group of early bilinguals who spoke a tone language and a non-tone language (Cantonese and Dutch). We assessed online speech-music processing interactions, that is, interactions that occur when speech and music are processed simultaneously in songs, with a speeded classification task. In this task, participants judged sung pseudowords either musically (based on the direction of the musical interval) or phonologically (based on the identity of the sung vowel). We also assessed longer-term effects of linguistic experience on musical ability, that is, the influence of extensive prior experience with language when processing music. These effects were assessed with a task in which participants had to learn to identify musical intervals and with four pitch-perception tasks. Our hypothesis was that due to their experience in two different languages using lexical versus intonational tone, the early Cantonese-Dutch bilinguals would outperform the Dutch control participants. In online processing, the Cantonese-Dutch bilinguals processed speech and music more holistically than controls. This effect seems to be driven by experience with a tone language, in which integration of segmental and pitch information is fundamental. Regarding longer-term effects of linguistic experience, we found no evidence for a bilingual advantage in either the music-interval learning task or the pitch-perception tasks. Together, these results suggest that being a Cantonese-Dutch bilingual does not have any measurable longer-term effects on pitch and music processing, but does have consequences for how speech and music are processed jointly.

  • Athanasopoulos, P., Bylund, E., Montero-Melis, G., Damjanovic, L., Schartner, A., Kibbe, A., Riches, N., & Thierry, G. (2015). Two languages, two minds: Flexible cognitive processing driven by language of operation. Psychological Science, 26(4), 518-526. doi:10.1177/0956797614567509.

    Abstract

    People make sense of objects and events around them by classifying them into identifiable categories. The extent to which language affects this process has been the focus of a long-standing debate: Do different languages cause their speakers to behave differently? Here, we show that fluent German-English bilinguals categorize motion events according to the grammatical constraints of the language in which they operate. First, as predicted from cross-linguistic differences in motion encoding, bilingual participants functioning in a German testing context prefer to match events on the basis of motion completion to a greater extent than do bilingual participants in an English context. Second, when bilingual participants experience verbal interference in English, their categorization behavior is congruent with that predicted for German; when bilingual participants experience verbal interference in German, their categorization becomes congruent with that predicted for English. These findings show that language effects on cognition are context-bound and transient, revealing unprecedented levels of malleability in human cognition.

  • Azar, Z., & Ozyurek, A. (2015). Discourse Management: Reference tracking in speech and gesture in Turkish narratives. Dutch Journal of Applied Linguistics, 4(2), 222-240. doi:10.1075/dujal.4.2.06aza.

    Abstract

    Speakers achieve coherence in discourse by alternating between differential lexical forms (e.g. noun phrase, pronoun, and null form) in accordance with the accessibility of the entities they refer to, i.e. whether they introduce an entity into discourse for the first time or continue referring to an entity they already mentioned before. Moreover, tracking of entities in discourse is a multimodal phenomenon. Studies show that speakers are sensitive to the informational structure of discourse and use fuller forms (e.g. full noun phrases) in speech and gesture more when re-introducing an entity while they use attenuated forms (e.g. pronouns) in speech and gesture less when maintaining a referent. However, those studies focus mainly on non-pro-drop languages (e.g. English, German and French). The present study investigates whether the same pattern holds for pro-drop languages. It draws data from adult native speakers of Turkish using elicited narratives. We find that Turkish speakers mostly use fuller forms to code subject referents in re-introduction context and the null form in maintenance context, and they point to gesture space for referents more in re-introduction context compared to maintained context. Hence we provide supportive evidence for the inverse correlation between the accessibility of a discourse referent and its coding in speech and gesture. We also find that, as a novel contribution, the third person pronoun is used in re-introduction context only when the referent was previously mentioned as the object argument of the immediately preceding clause.
  • Azar, Z., Backus, A., & Ozyurek, A. (2019). General and language specific factors influence reference tracking in speech and gesture in discourse. Discourse Processes, 56(7), 553-574. doi:10.1080/0163853X.2018.1519368.

    Abstract

    Referent accessibility influences expressions in speech and gestures in similar ways. Speakers mostly use richer forms such as noun phrases (NPs) in speech and gesture more when referents have low accessibility, whereas they use reduced forms such as pronouns more often and gesture less when referents have high accessibility. We investigated the relationships between speech and gesture during reference tracking in a pro-drop language—Turkish. Overt pronouns were not strongly associated with accessibility but with pragmatic context (i.e., marking similarity, contrast). Nevertheless, speakers gestured more when referents were re-introduced versus maintained and when referents were expressed with NPs versus pronouns. Pragmatic context did not influence gestures. Further, pronouns in low-accessibility contexts were accompanied with gestures—possibly for reference disambiguation—more often than previously found for non-pro-drop languages in such contexts. These findings enhance our understanding of the relationships between speech and gesture at the discourse level.
  • Baayen, R. H., Dijkstra, T., & Schreuder, R. (1997). Singulars and Plurals in Dutch: Evidence for a Parallel Dual-Route Model. Journal of Memory and Language, 37(1), 94-117. doi:10.1006/jmla.1997.2509.

    Abstract

    Are regular morphologically complex words stored in the mental lexicon? Answers to this question have ranged from full listing to parsing for every regular complex word. We investigated the roles of storage and parsing in the visual domain for the productive Dutch plural suffix -en. Two experiments are reported that show that storage occurs for high-frequency noun plurals. A mathematical formalization of a parallel dual-route race model is presented that accounts for the patterns in the observed reaction time data with essentially one free parameter, the speed of the parsing route. Parsing for noun plurals appears to be a time-costly process, which we attribute to the ambiguity of -en, a suffix that is predominantly used as a verbal ending. A third experiment contrasted nouns and verbs. This experiment revealed no effect of surface frequency for verbs, but again a solid effect for nouns. Together, our results suggest that many noun plurals are stored in order to avoid the time-costly resolution of the subcategorization conflict that arises when the -en suffix is attached to nouns.
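
    The parallel dual-route race model can be caricatured in a few lines: a frequency-sensitive whole-form retrieval route and a parsing route run in parallel, and the faster one determines the response. The distributions, parameter values, and the simulation itself are invented for illustration; the paper's formalization is analytical and fitted to observed latencies rather than simulated.

```python
import numpy as np

rng = np.random.default_rng(42)

def decision_latency(log_freq, parsing_speed, n_trials=10_000):
    """Race between a frequency-sensitive direct route and a parsing route.

    log_freq      : log surface frequency of the plural form
                    (higher = faster whole-form retrieval)
    parsing_speed : placeholder for the model's one free parameter;
                    higher = faster decomposition into stem + -en
    """
    direct = rng.normal(700 - 40 * log_freq, 60, n_trials)   # whole-form retrieval (ms)
    parsing = rng.normal(900 / parsing_speed, 60, n_trials)  # decomposition route (ms)
    return np.minimum(direct, parsing).mean()                # winner of the race

# High-frequency plurals win via storage; low-frequency plurals lean on parsing.
for log_freq in (1, 3, 5):
    print(log_freq, round(decision_latency(log_freq, parsing_speed=1.2)))
```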

  • Baayen, R. H. (1997). The pragmatics of the 'tenses' in biblical Hebrew. Studies in Language, 21(2), 245-285. doi:10.1075/sl.21.2.02baa.

    Abstract

    In this paper, I present an analysis of the so-called tense forms of Biblical Hebrew. While there is fairly broad consensus on the interpretation of the yiqtol tense form, the interpretation of the qātal tense form has led to considerable controversy. I will argue that the qātal form has no intrinsic semantic value and that it serves a pragmatic function only, namely, signaling to the hearer that the event or state expressed by the verb cannot be tightly integrated into the discourse representation of the hearer, given the speaker's estimate of their common ground.
  • Baayen, R. H., Lieber, R., & Schreuder, R. (1997). The morphological complexity of simplex nouns. Linguistics, 35, 861-877. doi:10.1515/ling.1997.35.5.861.
  • Baayen, R. H., & Lieber, R. (1997). Word frequency distributions and lexical semantics. Computers and the Humanities, 30, 281-291.

    Abstract

    This paper addresses the relation between meaning, lexical productivity, and frequency of use. Using density estimation as a visualization tool, we show that differences in semantic structure can be reflected in probability density functions estimated for word frequency distributions. We call attention to an example of a bimodal density, and suggest that bimodality arises when distributions of well-entrenched lexical items, which appear to be lognormal, are mixed with distributions of productively created nonce formations.
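
    A minimal sketch of the density-estimation idea: estimate the probability density of log word frequencies for a morphological category and check for bimodality, which the paper interprets as a mixture of well-entrenched forms and productive nonce formations. The frequencies below are simulated from a two-component lognormal mixture purely for illustration; they are not data from the paper.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Simulated token frequencies: an entrenched (high-frequency) component
# mixed with productively coined, mostly low-frequency formations.
entrenched = rng.lognormal(mean=5.0, sigma=0.8, size=400)
nonce = rng.lognormal(mean=0.5, sigma=0.6, size=600)
freqs = np.concatenate([entrenched, nonce])

# Kernel density estimate over log frequency; two peaks suggest a mixture.
log_f = np.log(freqs)
kde = gaussian_kde(log_f)
grid = np.linspace(log_f.min(), log_f.max(), 200)
density = kde(grid)

# Count interior local maxima of the estimated density.
peaks = (density[1:-1] > density[:-2]) & (density[1:-1] > density[2:])
print("number of local density maxima:", int(peaks.sum()))
```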
  • Baggio, G., van Lambalgen, M., & Hagoort, P. (2015). Logic as Marr's computational level: Four case studies. Topics in Cognitive Science, 7, 287-298. doi:10.1111/tops.12125.

    Abstract

    We sketch four applications of Marr's levels-of-analysis methodology to the relations between logic and experimental data in the cognitive neuroscience of language and reasoning. The first part of the paper illustrates the explanatory power of computational level theories based on logic. We show that a Bayesian treatment of the suppression task in reasoning with conditionals is ruled out by EEG data, supporting instead an analysis based on defeasible logic. Further, we describe how results from an EEG study on temporal prepositions can be reanalyzed using formal semantics, addressing a potential confound. The second part of the article demonstrates the predictive power of logical theories drawing on EEG data on processing progressive constructions and on behavioral data on conditional reasoning in people with autism. Logical theories can constrain processing hypotheses all the way down to neurophysiology, and conversely neuroscience data can guide the selection of alternative computational level models of cognition.
  • Bakker, I., Takashima, A., Van Hell, J. G., & McQueen, J. M. (2015). Changes in theta and beta oscillations as signatures of novel word consolidation. Journal of Cognitive Neuroscience, 27(7), 1286-1297. doi:10.1162/jocn_a_00801.

    Abstract

    The complementary learning systems account of word learning states that novel words, like other types of memories, undergo an offline consolidation process during which they are gradually integrated into the neocortical memory network. A fundamental change in the neural representation of a novel word should therefore occur in the hours after learning. The present EEG study tested this hypothesis by investigating whether novel words learned before a 24-hr consolidation period elicited more word-like oscillatory responses than novel words learned immediately before testing. In line with previous studies indicating that theta synchronization reflects lexical access, unfamiliar novel words elicited lower power in the theta band (4–8 Hz) than existing words. Recently learned words still showed a marginally lower theta increase than existing words, but theta responses to novel words that had been acquired 24 hr earlier were indistinguishable from responses to existing words. Consistent with evidence that beta desynchronization (16–21 Hz) is related to lexical-semantic processing, we found that both unfamiliar and recently learned novel words elicited less beta desynchronization than existing words. In contrast, no difference was found between novel words learned 24 hr earlier and existing words. These data therefore suggest that an offline consolidation period enables novel words to acquire lexically integrated, word-like neural representations.
  • Bakker, I., Takashima, A., van Hell, J. G., Janzen, G., & McQueen, J. M. (2014). Competition from unseen or unheard novel words: Lexical consolidation across modalities. Journal of Memory and Language, 73, 116-139. doi:10.1016/j.jml.2014.03.002.

    Abstract

    In four experiments we investigated the formation of novel word memories across modalities, using competition between novel words and their existing phonological/orthographic neighbours as a test of lexical integration. Auditorily acquired novel words entered into competition both in the spoken modality (Experiment 1) and in the written modality (Experiment 4) after a consolidation period of 24 h. Words acquired from print, on the other hand, showed competition effects after 24 h in a visual word recognition task (Experiment 3) but required additional training and a consolidation period of a week before entering into spoken-word competition (Experiment 2). These cross-modal effects support the hypothesis that lexicalised rather than episodic representations underlie post-consolidation competition effects. We suggest that sublexical phoneme–grapheme conversion during novel word encoding and/or offline consolidation enables the formation of modality-specific lexemes in the untrained modality, which subsequently undergo the same cortical integration process as explicitly perceived word forms in the trained modality. Although conversion takes place in both directions, speech input showed an advantage over print both in terms of lexicalisation and explicit memory performance. In conclusion, the brain is able to integrate and consolidate internally generated lexical information as well as external perceptual input.
  • Bakker, I., Takashima, A., van Hell, J. G., Janzen, G., & McQueen, J. M. (2015). Tracking lexical consolidation with ERPs: Lexical and semantic-priming effects on N400 and LPC responses to newly-learned words. Neuropsychologia, 79, 33-41. doi:10.1016/j.neuropsychologia.2015.10.020.
  • Bakker-Marshall, I., Takashima, A., Fernandez, C. B., Janzen, G., McQueen, J. M., & Van Hell, J. G. (2021). Overlapping and distinct neural networks supporting novel word learning in bilinguals and monolinguals. Bilingualism: Language and Cognition, 24(3), 524-536. doi:10.1017/S1366728920000589.

    Abstract

    This study investigated how bilingual experience alters neural mechanisms supporting novel word learning. We hypothesised that novel words elicit increased semantic activation in the larger bilingual lexicon, potentially stimulating stronger memory integration than in monolinguals. English monolinguals and Spanish–English bilinguals were trained on two sets of written Swahili–English word pairs, one set on each of two consecutive days, and performed a recognition task in the MRI-scanner. Lexical integration was measured through visual primed lexical decision. Surprisingly, no group difference emerged in explicit word memory, and priming occurred only in the monolingual group. This difference in lexical integration may indicate an increased need for slow neocortical interleaving of old and new information in the denser bilingual lexicon. The fMRI data were consistent with increased use of cognitive control networks in monolinguals and of articulatory motor processes in bilinguals, providing further evidence for experience-induced neural changes: monolinguals and bilinguals reached largely comparable behavioural performance levels in novel word learning, but did so by recruiting partially overlapping but non-identical neural systems to acquire novel words.
  • Balakrishnan, B., Verheijen, J., Lupo, A., Raymond, K., Turgeon, C., Yang, Y., Carter, K. L., Whitehead, K. J., Kozicz, T., Morava, E., & Lai, K. (2019). A novel phosphoglucomutase-deficient mouse model reveals aberrant glycosylation and early embryonic lethality. Journal of Inherited Metabolic Disease, 42(5), 998-1007. doi:10.1002/jimd.12110.

    Abstract

    Patients with phosphoglucomutase (PGM1) deficiency, a congenital disorder of glycosylation (CDG), suffer from multiple disease phenotypes. Midline cleft defects are present at birth. Over time, additional clinical phenotypes emerge, including severe hypoglycemia, hepatopathy, growth retardation, hormonal deficiencies, hemostatic anomalies, and frequently lethal, early-onset dilated cardiomyopathy and myopathy, reflecting the central roles of the enzyme in (glycogen) metabolism and glycosylation. To delineate the pathophysiology of the tissue-specific disease phenotypes, we constructed a constitutive Pgm2 (mouse ortholog of human PGM1)-knockout (KO) mouse model using CRISPR-Cas9 technology. After multiple crosses between heterozygous parents, we were unable to identify homozygous life births in 78 newborn pups (P = 1.59897E-06), suggesting an embryonic lethality phenotype in the homozygotes. Ultrasound studies of the course of pregnancy confirmed Pgm2-deficient pups succumb before E9.5. Oral galactose supplementation (9 mg/mL drinking water) did not rescue the lethality. Biochemical studies of tissues and skin fibroblasts harvested from heterozygous animals confirmed reduced Pgm2 enzyme activity and abundance, but no change in glycogen content. However, glycomics analyses in serum revealed an abnormal glycosylation pattern in the Pgm2(+/-) animals, similar to that seen in PGM1-CDG.
  • Bank, R., Crasborn, O., & Van Hout, R. (2015). Alignment of two languages: The spreading of mouthings in Sign Language of the Netherlands. International Journal of Bilingualism, 19, 40-55. doi:10.1177/1367006913484991.

    Abstract

    Mouthings and mouth gestures are omnipresent in Sign Language of the Netherlands (NGT). Mouthings in NGT are mouth actions that have their origin in spoken Dutch, and are usually time aligned with the signs they co-occur with. Frequently, however, they spread over one or more adjacent signs, so that one mouthing co-occurs with multiple manual signs. We conducted a corpus study to explore how frequently this occurs in NGT and whether there is any sociolinguistic variation in the use of spreading. Further, we looked at the circumstances under which spreading occurs. Answers to these questions may give us insight into the prosodic structure of sign languages. We investigated a sample of the Corpus NGT containing 5929 mouthings by 46 participants. We found that spreading over an adjacent sign is independent of social factors. Further, mouthings that spread are longer than non-spreading mouthings, whether measured in syllables or in milliseconds. By using a relatively large amount of natural data, we succeeded in gaining more insight into the way mouth actions are utilised in sign languages.
  • Baranova, J. (2015). Other-initiated repair in Russian. Open linguistics, 1(1), 555-577. doi:10.1515/opli-2015-0019.

    Abstract

    This article describes the interactional patterns and linguistic structures associated with other-initiated repair, as observed in a corpus of video-recorded conversations in Russian. In the discussion of various repair cases, special attention is given to the modifications that the trouble source turn undergoes in response to an open versus a restricted repair initiation. Speakers often modify their problematic turn in multiple ways at once when responding to an open repair initiation. They can alter the word order of the problematic turn, change the prosodic contour of the utterance, omit redundant elements and add more specific ones. By contrast, restricted repair initiations usually receive specific repair solutions that target only one problem at a time.
  • Barendse, M. T., Albers, C. J., Oort, F. J., & Timmerman, M. E. (2014). Measurement bias detection through Bayesian factor analysis. Frontiers in Psychology, 5: 1087. doi:10.3389/fpsyg.2014.01087.

    Abstract

    Measurement bias has been defined as a violation of measurement invariance. Potential violators—variables that possibly violate measurement invariance—can be investigated through restricted factor analysis (RFA). The purpose of the present paper is to investigate a Bayesian approach to estimate RFA models with interaction effects, in order to detect uniform and nonuniform measurement bias. Because modeling nonuniform bias requires an interaction term, it is more complicated than modeling uniform bias. The Bayesian approach seems especially suited for such complex models. In a simulation study we vary the type of bias (uniform, nonuniform), the type of violator (observed continuous, observed dichotomous, latent continuous), and the correlation between the trait and the violator (0.0, 0.5). For each condition, 100 sets of data are generated and analyzed. We examine the accuracy of the parameter estimates and the performance of two bias detection procedures, based on the DIC fit statistic, in Bayesian RFA. Results show that the accuracy of the estimated parameters is satisfactory. Bias detection rates are high in all conditions with an observed violator, and still satisfactory in all other conditions.
  • Barendse, M. T., Oort, F. J., & Timmerman, M. E. (2015). Using exploratory factor analysis to determine the dimensionality of discrete responses. Structural Equation Modeling: A Multidisciplinary Journal, 22(1), 87-101. doi:10.1080/10705511.2014.934850.

    Abstract

    Exploratory factor analysis (EFA) is commonly used to determine the dimensionality of continuous data. In a simulation study we investigate its usefulness with discrete data. We vary response scales (continuous, dichotomous, polytomous), factor loadings (medium, high), sample size (small, large), and factor structure (simple, complex). For each condition, we generate 1,000 data sets and apply EFA with 5 estimation methods (maximum likelihood [ML] of covariances, ML of polychoric correlations, robust ML, weighted least squares [WLS], and robust WLS) and 3 fit criteria (chi-square test, root mean square error of approximation, and root mean square residual). The various EFA procedures recover more factors when sample size is large, factor loadings are high, factor structure is simple, and response scales have more options. Robust WLS of polychoric correlations is the preferred method, as it is theoretically justified and shows fewer convergence problems than the other estimation methods.
  • Barlas, P., Kyriakou, K., Guest, O., Kleanthous, S., & Otterbacher, J. (2021). To "see" is to stereotype: Image tagging algorithms, gender recognition, and the accuracy-fairness trade-off. Proceedings of the ACM on Human Computer Interaction, 4(CSCW3): 32. doi:10.1145/3432931.

    Abstract

    Machine-learned computer vision algorithms for tagging images are increasingly used by developers and researchers, having become popularized as easy-to-use "cognitive services." Yet these tools struggle with gender recognition, particularly when processing images of women, people of color and non-binary individuals. Socio-technical researchers have cited data bias as a key problem; training datasets often over-represent images of people and contexts that convey social stereotypes. The social psychology literature explains that people learn social stereotypes, in part, by observing others in particular roles and contexts, and can inadvertently learn to associate gender with scenes, occupations and activities. Thus, we study the extent to which image tagging algorithms mimic this phenomenon. We design a controlled experiment, to examine the interdependence between algorithmic recognition of context and the depicted person's gender. In the spirit of auditing to understand machine behaviors, we create a highly controlled dataset of people images, imposed on gender-stereotyped backgrounds. Our methodology is reproducible and our code publicly available. Evaluating five proprietary algorithms, we find that in three, gender inference is hindered when a background is introduced. Of the two that "see" both backgrounds and gender, it is the one whose output is most consistent with human stereotyping processes that is superior in recognizing gender. We discuss the accuracy–fairness trade-off, as well as the importance of auditing black boxes in better understanding this double-edged sword.
  • Baron-Cohen, S., Murphy, L., Chakrabarti, B., Craig, I., Mallya, U., Lakatosova, S., Rehnstrom, K., Peltonen, L., Wheelwright, S., Allison, C., Fisher, S. E., & Warrier, V. (2014). A genome wide association study of mathematical ability reveals an association at chromosome 3q29, a locus associated with autism and learning difficulties: A preliminary study. PLoS One, 9(5): e96374. doi:10.1371/journal.pone.0096374.

    Abstract

    Mathematical ability is heritable, but few studies have directly investigated its molecular genetic basis. Here we aimed to identify specific genetic contributions to variation in mathematical ability. We carried out a genome wide association scan using pooled DNA in two groups of U.K. samples, based on end of secondary/high school national academic exam achievement: high (n = 419) versus low (n = 183) mathematical ability while controlling for their verbal ability. Significant differences in allele frequencies between these groups were searched for in 906,600 SNPs using the Affymetrix GeneChip Human Mapping version 6.0 array. After meeting a threshold of p < 1.5 × 10⁻⁵, 12 SNPs from the pooled association analysis were individually genotyped in 542 of the participants and analyzed to validate the initial associations (lowest p-value 1.14 × 10⁻⁶). In this analysis, one of the SNPs (rs789859) showed significant association after Bonferroni correction, and four (rs10873824, rs4144887, rs12130910, rs2809115) were nominally significant (lowest p-value 3.278 × 10⁻⁴). Three of the SNPs of interest are located within, or near to, known genes (FAM43A, SFT2D1, C14orf64). The SNP that showed the strongest association, rs789859, is located in a region on chromosome 3q29 that has been previously linked to learning difficulties and autism. rs789859 lies 1.3 kbp downstream of LSG1, and 700 bp upstream of FAM43A, mapping within the potential promoter/regulatory region of the latter. To our knowledge, this is only the second study to investigate the association of genetic variants with mathematical ability, and it highlights a number of interesting markers for future study.
  • Barthel, M., & Sauppe, S. (2019). Speech planning at turn transitions in dialogue is associated with increased processing load. Cognitive Science, 43(7): e12768. doi:10.1111/cogs.12768.

    Abstract

    Speech planning is a sophisticated process. In dialog, it regularly starts in overlap with an incoming turn by a conversation partner. We show that planning spoken responses in overlap with incoming turns is associated with higher processing load than planning in silence. In a dialogic experiment, participants took turns with a confederate describing lists of objects. The confederate’s utterances (to which participants responded) were pre‐recorded and varied in whether they ended in a verb or an object noun and whether this ending was predictable or not. We found that response planning in overlap with sentence‐final verbs evokes larger task‐evoked pupillary responses, while end predictability had no effect. This finding indicates that planning in overlap leads to higher processing load for next speakers in dialog and that next speakers do not proactively modulate the time course of their response planning based on their predictions of turn endings. The turn‐taking system exerts pressure on the language processing system by pushing speakers to plan in overlap despite the ensuing increase in processing load.
  • Bartolozzi, F., Jongman, S. R., & Meyer, A. S. (2021). Concurrent speech planning does not eliminate repetition priming from spoken words: Evidence from linguistic dual-tasking. Journal of Experimental Psychology: Learning, Memory, and Cognition, 47(3), 466-480. doi:10.1037/xlm0000944.

    Abstract

    In conversation, production and comprehension processes may overlap, causing interference. In 3 experiments, we investigated whether repetition priming can work as a supporting device, reducing costs associated with linguistic dual-tasking. Experiment 1 established the rate of decay of repetition priming from spoken words to picture naming for primes embedded in sentences. Experiments 2 and 3 investigated whether the rate of decay was faster when participants comprehended the prime while planning to name unrelated pictures. In all experiments, the primed picture followed the sentences featuring the prime on the same trial, or 10 or 50 trials later. The results of the 3 experiments were strikingly similar: robust repetition priming was observed when the primed picture followed the prime sentence. Thus, repetition priming was observed even when the primes were processed while the participants prepared an unrelated spoken utterance. Priming might, therefore, support utterance planning in conversation, where speakers routinely listen while planning their utterances.

    Additional information

    supplemental material
  • Basnakova, J., Weber, K., Petersson, K. M., Van Berkum, J. J. A., & Hagoort, P. (2014). Beyond the language given: The neural correlates of inferring speaker meaning. Cerebral Cortex, 24(10), 2572-2578. doi:10.1093/cercor/bht112.

    Abstract

    Even though language allows us to say exactly what we mean, we often use language to say things indirectly, in a way that depends on the specific communicative context. For example, we can use an apparently straightforward sentence like "It is hard to give a good presentation" to convey deeper meanings, like "Your talk was a mess!" One of the big puzzles in language science is how listeners work out what speakers really mean, which is a skill absolutely central to communication. However, most neuroimaging studies of language comprehension have focused on the arguably much simpler, context-independent process of understanding direct utterances. To examine the neural systems involved in getting at contextually constrained indirect meaning, we used functional magnetic resonance imaging as people listened to indirect replies in spoken dialog. Relative to direct control utterances, indirect replies engaged dorsomedial prefrontal cortex, right temporo-parietal junction and insula, as well as bilateral inferior frontal gyrus and right medial temporal gyrus. This suggests that listeners take the speaker's perspective on both cognitive (theory of mind) and affective (empathy-like) levels. In line with classic pragmatic theories, our results also indicate that currently popular "simulationist" accounts of language comprehension fail to explain how listeners understand the speaker's intended message.
  • Bašnákova, J., Van Berkum, J. J. A., Weber, K., & Hagoort, P. (2015). A job interview in the MRI scanner: How does indirectness affect addressees and overhearers? Neuropsychologia, 76, 79-91. doi:10.1016/j.neuropsychologia.2015.03.030.

    Abstract

    In using language, people not only exchange information, but also navigate their social world – for example, they can express themselves indirectly to avoid losing face. In this functional magnetic resonance imaging study, we investigated the neural correlates of interpreting face-saving indirect replies, in a situation where participants only overheard the replies as part of a conversation between two other people, as well as in a situation where the participants were directly addressed themselves. We created a fictional job interview context where indirect replies serve as a natural communicative strategy to attenuate one’s shortcomings, and asked fMRI participants to either pose scripted questions and receive answers from three putative job candidates (addressee condition) or to listen to someone else interview the same candidates (overhearer condition). In both cases, the need to evaluate the candidate ensured that participants had an active interest in comprehending the replies. Relative to direct replies, face-saving indirect replies increased activation in medial prefrontal cortex, bilateral temporo-parietal junction (TPJ), bilateral inferior frontal gyrus and bilateral middle temporal gyrus, in active overhearers and active addressees alike, with similar effect size, and comparable to findings obtained in an earlier passive listening study (Bašnáková et al., 2013). In contrast, indirectness effects in bilateral anterior insula and pregenual ACC, two regions implicated in emotional salience and empathy, were reliably stronger in addressees than in active overhearers. Our findings indicate that understanding face-saving indirect language requires additional cognitive perspective-taking and other discourse-relevant cognitive processing, to a comparable extent in active overhearers and addressees. Furthermore, they indicate that face-saving indirect language draws upon affective systems more in addressees than in overhearers, presumably because the addressee is the one being managed by a face-saving reply. In all, face-saving indirectness provides a window on the cognitive as well as affect-related neural systems involved in human communication.
  • Bastiaansen, M. C. M., & Hagoort, P. (2003). Event-induced theta responses as a window on the dynamics of memory. Cortex, 39(4-5), 967-972. doi:10.1016/S0010-9452(08)70873-6.

    Abstract

    An important, but often ignored distinction in the analysis of EEG signals is that between evoked activity and induced activity. Whereas evoked activity reflects the summation of transient post-synaptic potentials triggered by an event, induced activity, which is mainly oscillatory in nature, is thought to reflect changes in parameters controlling dynamic interactions within and between brain structures. We hypothesize that induced activity may yield information about the dynamics of cell assembly formation, activation and subsequent uncoupling, which may play a prominent role in different types of memory operations. We then describe a number of analysis tools that can be used to study the reactivity of induced rhythmic activity, both in terms of amplitude changes and of phase variability.

    We briefly discuss how alpha, gamma and theta rhythms are thought to be generated, paying special attention to the hypothesis that the theta rhythm reflects dynamic interactions between the hippocampal system and the neocortex. This hypothesis would imply that studying the reactivity of scalp-recorded theta may provide a window on the contribution of the hippocampus to memory functions.

    We review studies investigating the reactivity of scalp-recorded theta in paradigms engaging episodic memory, spatial memory and working memory. In addition, we review studies that relate theta reactivity to processes at the interface of memory and language. Despite many unknowns, the experimental evidence largely supports the hypothesis that theta activity plays a functional role in cell assembly formation, a process which may constitute the neural basis of memory formation and retrieval. The available data provide only highly indirect support for the hypothesis that scalp-recorded theta yields information about hippocampal functioning. It is concluded that studying induced rhythmic activity holds promise as an additional important way to study brain function.
  • Bastiaansen, M. C. M., & Hagoort, P. (2015). Frequency-based segregation of syntactic and semantic unification during online sentence level language comprehension. Journal of Cognitive Neuroscience, 27(11), 2095-2107. doi:10.1162/jocn_a_00829.

    Abstract

    During sentence level language comprehension, semantic and syntactic unification are functionally distinct operations. Nevertheless, both recruit roughly the same brain areas (spatially overlapping networks in the left frontotemporal cortex) and happen at the same time (in the first few hundred milliseconds after word onset). We tested the hypothesis that semantic and syntactic unification are segregated by means of neuronal synchronization of the functionally relevant networks in different frequency ranges: gamma (40 Hz and up) for semantic unification and lower beta (10–20 Hz) for syntactic unification. EEG power changes were quantified as participants read either correct sentences, syntactically correct though meaningless sentences (syntactic prose), or sentences that did not contain any syntactic structure (random word lists). Other sentences contained either a semantic anomaly or a syntactic violation at a critical word in the sentence. Larger EEG gamma-band power was observed for semantically coherent than for semantically anomalous sentences. Similarly, beta-band power was larger for syntactically correct sentences than for incorrect ones. These results confirm the existence of a functional dissociation in EEG oscillatory dynamics during sentence level language comprehension that is compatible with the notion of a frequency-based segregation of syntactic and semantic unification.
  • Bastos, A. M., Vezoli, J., Bosman, C. A., Schoffelen, J.-M., Oostenveld, R., Dowdall, J. R., De Weerd, P., Kennedy, H., & Fries, P. (2015). Visual areas exert feedforward and feedback influences through distinct frequency channels. Neuron, 85(2), 390-401. doi:10.1016/j.neuron.2014.12.018.

    Abstract

    Visual cortical areas subserve cognitive functions by interacting in both feedforward and feedback directions. While feedforward influences convey sensory signals, feedback influences modulate feedforward signaling according to the current behavioral context. We investigated whether these interareal influences are subserved differentially by rhythmic synchronization. We correlated frequency-specific directed influences among 28 pairs of visual areas with anatomical metrics of the feedforward or feedback character of the respective interareal projections. This revealed that in the primate visual system, feedforward influences are carried by theta-band (approximately 4 Hz) and gamma-band (approximately 60-80 Hz) synchronization, and feedback influences by beta-band (approximately 14-18 Hz) synchronization. The functional directed influences constrain a functional hierarchy similar to the anatomical hierarchy, but exhibiting task-dependent dynamic changes in particular with regard to the hierarchical positions of frontal areas. Our results demonstrate that feedforward and feedback signaling use distinct frequency channels, suggesting that they subserve differential communication requirements.
  • Bauer, B. L. M. (2019). Language contact and language borrowing? Compound verb forms in the Old French translation of the Gospel of St. Mark. Belgian Journal of Linguistics, 33, 210-250. doi:10.1075/bjl.00028.bau.

    Abstract

    This study investigates the potential influence of Latin syntax on the development of analytic verb forms in a well-defined and concrete instance of language contact, the Old French translation of a Latin Gospel. The data show that the formation of verb forms in the Old French was remarkably independent from the Latin original. While the Old French text closely follows the narrative of the Latin Gospel, its usage of compound verb forms is not dictated by the source text, as reflected e.g. in the quasi-omnipresence of the relative sequence finite verb + pp, which – with a few exceptions – all trace back to a different structure in the Latin text. Another important innovative difference in the Old French is the widespread use of aveir ‘have’ as an auxiliary, unknown in Latin. The article examines in detail the relation between the verbal forms in the two texts, showing that the translation is in line with the grammar of Old French. The usage of compound verb forms in the Old French Gospel is therefore autonomous rather than contact stimulated, let alone contact induced. The results challenge Blatt’s (1957) assumption identifying compound verb forms as a shared feature in European languages that should be ascribed to Latin influence.

  • Bauer, B. L. M. (2015). Origins of grammatical forms and evidence from Latin. Journal of Indo-European studies, 43, 201-235.

    Abstract

    This article submits that the instances of incipient grammaticalization that are found in the later stages of Latin and that resulted in new grammatical forms in Romance, reflect a major linguistic innovation. While the new grammatical forms are created out of lexical or mildly grammatical autonomous elements, earlier processes seem to primarily involve particles with a certain semantic value and freezing. This fundamental difference explains why the attempts of early Indo-Europeanists such as Franz Bopp at tracing the lexical origins of Indo-European inflected forms were unsuccessful and strongly criticized by the Neo-Grammarians.
  • Bauer, B. L. M. (1997). Response to David Lightfoot’s Review of The Emergence and Development of SVO Patterning in Latin and French: Diachronic and Psycholinguistic Perspectives. Language, 73(2), 352-358.
  • Bavin, E. L., Kidd, E., Prendergast, L., Baker, E., Dissanayake, C., & Prior, M. (2014). Severity of autism is related to children's language processing. Autism Research, 7(6), 687-694. doi:10.1002/aur.1410.

    Abstract

    Problems in language processing have been associated with autism spectrum disorder (ASD), with some research attributing the problems to overall language skills rather than a diagnosis of ASD. Lexical access was assessed in a looking-while-listening task in three groups of 5- to 7-year-old children; two had high-functioning ASD (HFA), an ASD severe (ASD-S) group (n = 16) and an ASD moderate (ASD-M) group (n = 21). The third group were typically developing (TD) (n = 48). Participants heard sentences of the form “Where's the x?” and their eye movements to targets (e.g., train), phonological competitors (e.g., tree), and distractors were recorded. Proportions of looking time at target were analyzed within 200 ms intervals. Significant group differences were found between the ASD-S and TD groups only, at time intervals 1000–1200 and 1200–1400 ms postonset. The TD group was more likely to be fixated on target. These differences were maintained after adjusting for language, verbal and nonverbal IQ, and attention scores. An analysis using parent report of autistic-like behaviors showed higher scores to be associated with lower proportions of looking time at target, regardless of group. Further analysis showed fixation for the TD group to be significantly faster than for the ASD-S. In addition, incremental processing was found for all groups. The study findings suggest that severity of autistic behaviors will impact significantly on children's language processing in real life situations when exposed to syntactically complex material. They also show the value of using online methods for understanding how young children with ASD process language. Autism Res 2014, 7: 687–694.
  • Becker, M., Devanna, P., Fisher, S. E., & Vernes, S. C. (2015). A chromosomal rearrangement in a child with severe speech and language disorder separates FOXP2 from a functional enhancer. Molecular Cytogenetics, 8: 69. doi:10.1186/s13039-015-0173-0.

    Abstract

    Mutations of FOXP2 in 7q31 cause a rare disorder involving speech apraxia, accompanied by expressive and receptive language impairments. A recent report described a child with speech and language deficits, and a genomic rearrangement affecting chromosomes 7 and 11. One breakpoint mapped to 7q31 and, although outside its coding region, was hypothesised to disrupt FOXP2 expression. We identified an element 2 kb downstream of this breakpoint with epigenetic characteristics of an enhancer. We show that this element drives reporter gene expression in human cell-lines. Thus, displacement of this element by translocation may disturb gene expression, contributing to the observed language phenotype.
  • Bekemeier, N., Brenner, D., Klepp, A., Biermann-Ruben, K., & Indefrey, P. (2019). Electrophysiological correlates of concept type shifts. PLoS One, 14(3): e0212624. doi:10.1371/journal.pone.0212624.

    Abstract

    A recent semantic theory of nominal concepts by Löbner [1] posits that, due to their inherent uniqueness and relationality properties, noun concepts can be classified into four concept types (CTs): sortal, individual, relational, functional. For sortal nouns the default determination is indefinite (a stone), for individual nouns it is definite (the sun), for relational and functional nouns it is possessive (his ear, his father). Incongruent determination leads to a concept type shift: his father (functional concept: unique, relational)–a father (sortal concept: non-unique, non-relational). Behavioral studies on CT shifts have demonstrated a CT congruence effect, with congruent determiners triggering faster lexical decision times on the subsequent noun than incongruent ones [2, 3]. The present ERP study investigated electrophysiological correlates of congruent and incongruent determination in German noun phrases, and specifically, whether the CT congruence effect could be indexed by such classic ERP components as N400, LAN or P600. If incongruent determination affects the lexical retrieval or semantic integration of the noun, it should be reflected in the amplitude of the N400 component. If, however, CT congruence is processed by the same neuronal mechanisms that underlie morphosyntactic processing, incongruent determination should trigger LAN and/or P600. These predictions were tested in two ERP studies. In Experiment 1, participants just listened to noun phrases. In Experiment 2, they performed a well-formedness judgment task. The processing of (in)congruent CTs (his sun vs. the sun) was compared to the processing of morphosyntactic and semantic violations in control conditions. Whereas the control conditions elicited classic electrophysiological violation responses (N400, LAN, & P600), CT-incongruences did not. Instead, they showed novel concept-type specific response patterns. The absence of the classic ERP components suggests that CT-incongruent determination is not perceived as a violation of the semantic or morphosyntactic structure of the noun phrase.

    Additional information

    dataset
  • Benetti, S., Zonca, J., Ferrari, A., Rezk, M., Rabini, G., & Collignon, O. (2021). Visual motion processing recruits regions selective for auditory motion in early deaf individuals. NeuroImage, 230: 117816. doi:10.1016/j.neuroimage.2021.117816.

    Abstract

    In early deaf individuals, the auditory deprived temporal brain regions become engaged in visual processing. In our study we tested further the hypothesis that intrinsic functional specialization guides the expression of cross-modal responses in the deprived auditory cortex. We used functional MRI to characterize the brain response to horizontal, radial and stochastic visual motion in early deaf and hearing individuals matched for the use of oral or sign language. Visual motion showed enhanced response in the ‘deaf’ mid-lateral planum temporale, a region selective to auditory motion as demonstrated by a separate auditory motion localizer in hearing people. Moreover, multivariate pattern analysis revealed that this reorganized temporal region showed enhanced decoding of motion categories in the deaf group, while visual motion-selective region hMT+/V5 showed reduced decoding when compared to hearing people. Dynamic Causal Modelling revealed that the ‘deaf’ motion-selective temporal region shows a specific increase of its functional interactions with hMT+/V5 and is now part of a large-scale visual motion selective network. In addition, we observed preferential responses to radial, compared to horizontal, visual motion in the ‘deaf’ right superior temporal cortex region that also show preferential response to approaching/receding sounds in the hearing brain. Overall, our results suggest that the early experience of auditory deprivation interacts with intrinsic constraints and triggers a large-scale reallocation of computational load between auditory and visual brain regions that typically support the multisensory processing of motion information.

    Additional information

    supplementary materials
  • Bentum, M., Ten Bosch, L., Van den Bosch, A., & Ernestus, M. (2019). Do speech registers differ in the predictability of words? International Journal of Corpus Linguistics, 24(1), 98-130. doi:10.1075/ijcl.17062.ben.

    Abstract

    Previous research has demonstrated that language use can vary depending on the context of situation. The present paper extends this finding by comparing word predictability differences between 14 speech registers ranging from highly informal conversations to read-aloud books. We trained 14 statistical language models to compute register-specific word predictability and trained a register classifier on the perplexity score vector of the language models. The classifier distinguishes perfectly between samples from all speech registers and this result generalizes to unseen materials. We show that differences in vocabulary and sentence length cannot explain the speech register classifier’s performance. The combined results show that speech registers differ in word predictability.
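    A minimal sketch of the pipeline described in this abstract, with invented toy data and add-one-smoothed unigram models standing in for the study's statistical language models: one model per register, each sample mapped to a vector of register-specific perplexities, and a simple classifier over those vectors (the study's actual models, registers, and classifier differ).

```python
import math
from collections import Counter

def train_unigram(sentences):
    """Add-one-smoothed unigram model: (counts, total tokens, vocab size)."""
    counts = Counter(tok for s in sentences for tok in s.split())
    total = sum(counts.values())
    vocab = len(counts) + 1  # +1 reserves probability mass for unseen tokens
    return counts, total, vocab

def perplexity(model, sentence):
    counts, total, vocab = model
    tokens = sentence.split()
    log_prob = sum(math.log((counts[t] + 1) / (total + vocab)) for t in tokens)
    return math.exp(-log_prob / max(len(tokens), 1))

# Invented toy corpora for two "registers".
registers = {
    "conversation": ["well you know it was nice",
                     "yeah I think so too"],
    "read_aloud": ["the committee approved the annual report",
                   "the minister addressed the assembled delegates"],
}
models = {name: train_unigram(sents) for name, sents in registers.items()}

def perplexity_vector(sentence):
    """One perplexity score per register-specific language model."""
    return [perplexity(models[name], sentence) for name in sorted(models)]

# Nearest-centroid classifier over perplexity vectors (a stand-in for the
# classifier used in the study).
centroids = {
    name: [sum(dim) / len(dim)
           for dim in zip(*(perplexity_vector(s) for s in sents))]
    for name, sents in registers.items()
}

def classify(sentence):
    vec = perplexity_vector(sentence)
    return min(centroids, key=lambda name: sum((a - b) ** 2
                                               for a, b in zip(vec, centroids[name])))

print(classify("yeah well I think it was nice"))
```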
  • Benyamin, B., St Pourcain, B., Davis, O. S., Davies, G., Hansell, N. K., Brion, M.-J., Kirkpatrick, R. M., Cents, R. A. M., Franić, S., Miller, M. B., Haworth, C. M. A., Meaburn, E., Price, T. S., Evans, D. M., Timpson, N., Kemp, J., Ring, S., McArdle, W., Medland, S. E., Yang, J., Harris, S. E., Liewald, D. C., Scheet, P., Xiao, X., Hudziak, J. J., de Geus, E. J. C., Jaddoe, V. W. V., Starr, J. M., Verhulst, F. C., Pennell, C., Tiemeier, H., Iacono, W. G., Palmer, L. J., Montgomery, G. W., Martin, N. G., Boomsma, D. I., Posthuma, D., McGue, M., Wright, M. J., Davey Smith, G., Deary, I. J., Plomin, R., & Visscher, P. M. (2014). Childhood intelligence is heritable, highly polygenic and associated with FNBP1L. Molecular Psychiatry, 19(2), 253-258. doi:10.1038/mp.2012.184.

    Abstract

    Intelligence in childhood, as measured by psychometric cognitive tests, is a strong predictor of many important life outcomes, including educational attainment, income, health and lifespan. Results from twin, family and adoption studies are consistent with general intelligence being highly heritable and genetically stable throughout the life course. No robustly associated genetic loci or variants for childhood intelligence have been reported. Here, we report the first genome-wide association study (GWAS) on childhood intelligence (age range 6–18 years) from 17 989 individuals in six discovery and three replication samples. Although no individual single-nucleotide polymorphisms (SNPs) were detected with genome-wide significance, we show that the aggregate effects of common SNPs explain 22–46% of phenotypic variation in childhood intelligence in the three largest cohorts (P = 3.9 × 10⁻¹⁵, 0.014 and 0.028). FNBP1L, previously reported to be the most significantly associated gene for adult intelligence, was also significantly associated with childhood intelligence (P = 0.003). Polygenic prediction analyses resulted in a significant correlation between predictor and outcome in all replication cohorts. The proportion of childhood intelligence explained by the predictor reached 1.2% (P = 6 × 10⁻⁵), 3.5% (P = 10⁻³) and 0.5% (P = 6 × 10⁻⁵) in three independent validation cohorts. Given the sample sizes, these genetic prediction results are consistent with expectations if the genetic architecture of childhood intelligence is like that of body mass index or height. Our study provides molecular support for the heritability and polygenic nature of childhood intelligence. Larger sample sizes will be required to detect individual variants with genome-wide significance.
  • Bergelson*, E., Casillas*, M., Soderstrom, M., Seidl, A., Warlaumont, A. S., & Amatuni, A. (2019). What Do North American Babies Hear? A large-scale cross-corpus analysis. Developmental Science, 22(1): e12724. doi:10.1111/desc.12724.

    Abstract

    (* indicates joint first authorship.) A range of demographic variables influence how much speech young children hear. However, because studies have used vastly different sampling methods, quantitative comparison of interlocking demographic effects has been nearly impossible, across or within studies. We harnessed a unique collection of existing naturalistic, day-long recordings from 61 homes across four North American cities to examine language input as a function of age, gender, and maternal education. We analyzed adult speech heard by 3- to 20-month-olds who wore audio recorders for an entire day. We annotated speaker gender and speech register (child-directed or adult-directed) for 10,861 utterances from female and male adults in these recordings. Examining age, gender, and maternal education collectively in this ecologically-valid dataset, we find several key results. First, the speaker gender imbalance in the input is striking: children heard 2–3 times more speech from females than males. Second, children in higher-maternal-education homes heard more child-directed speech than those in lower-maternal-education homes. Finally, our analyses revealed a previously unreported effect: the proportion of child-directed speech in the input increases with age, due to a decrease in adult-directed speech with age. This large-scale analysis is an important step forward in collectively examining demographic variables that influence early development, made possible by pooled, comparable, day-long recordings of children's language environments. The audio recordings, annotations, and annotation software are readily available for re-use and re-analysis by other researchers.

    Additional information

    desc12724-sup-0001-supinfo.pdf
  • Berghuis, B., De Kovel, C. G. F., van Iterson, L., Lamberts, R. J., Sander, J. W., Lindhout, D., & Koeleman, B. P. C. (2015). Complex SCN8A DNA-abnormalities in an individual with therapy resistant absence epilepsy. Epilepsy Research, 115, 141-144. doi:10.1016/j.eplepsyres.2015.06.007.

    Abstract

    Background: De novo SCN8A missense mutations have been identified as a rare dominant cause of epileptic encephalopathy. We described a person with epileptic encephalopathy associated with a mosaic deletion of the SCN8A gene. Methods: Array comparative genome hybridization was used to identify chromosomal abnormalities. Next Generation Sequencing was used to screen for variants in known and candidate epilepsy genes. A single nucleotide polymorphism array was used to test whether the SCN8A variants were in cis or in trans. Results: We identified a de novo mosaic deletion of exons 2–14 of SCN8A, and a rare maternally inherited missense variant on the other allele in a woman presenting with absence seizures, challenging behavior, intellectual disability and QRS-fragmentation on the ECG. We also found a variant in SCN5A. Conclusions: The combination of a rare missense variant with a de novo mosaic deletion of a large part of the SCN8A gene suggests that other possible mechanisms for SCN8A mutations may cause epilepsy; loss of function, genetic modifiers and cellular interference may play a role. This case expands the phenotype associated with SCN8A mutations, with absence epilepsy and regression in language and memory skills.
  • Bergmann, C., Bosch, L. t., Fikkert, P., & Boves, L. (2015). Modelling the Noise-Robustness of Infants’ Word Representations: The Impact of Previous Experience. PLoS One, 10(7): e0132245. doi:10.1371/journal.pone.0132245.

    Abstract

    During language acquisition, infants frequently encounter ambient noise. We present a computational model to address whether specific acoustic processing abilities are necessary to detect known words in moderate noise—an ability attested experimentally in infants. The model implements a general purpose speech encoding and word detection procedure. Importantly, the model contains no dedicated processes for removing or cancelling out ambient noise, and it can replicate the patterns of results obtained in several infant experiments. In addition to noise, we also addressed the role of previous experience with particular target words: does the frequency of a word matter, and does it play a role whether that word has been spoken by one or multiple speakers? The simulation results show that both factors affect noise robustness. We also investigated how robust word detection is to changes in speaker identity by comparing words spoken by known versus unknown speakers during the simulated test. This factor interacted with both noise level and past experience, showing that an increase in exposure is only helpful when a familiar speaker provides the test material. Added variability proved helpful only when encountering an unknown speaker. Finally, we addressed whether infants need to recognise specific words, or whether a more parsimonious explanation of infant behaviour, which we refer to as matching, is sufficient. Recognition involves a focus of attention on a specific target word, while matching only requires finding the best correspondence of acoustic input to a known pattern in the memory. Attending to a specific target word proves to be more noise robust, but a general word matching procedure can be sufficient to simulate experimental data stemming from young infants. A change from acoustic matching to targeted recognition provides an explanation of the improvements observed in infants around their first birthday. In summary, we present a computational model incorporating only the processes infants might employ when hearing words in noise. Our findings show that a parsimonious interpretation of behaviour is sufficient and we offer a formal account of emerging abilities.
  • Bertamini, M., Rampone, G., Makin, A. D. J., & Jessop, A. (2019). Symmetry preference in shapes, faces, flowers and landscapes. PeerJ, 7: e7078. doi:10.7717/peerj.7078.

    Abstract

    Most people like symmetry, and symmetry has been extensively used in visual art and architecture. In this study, we compared preference for images of abstract and familiar objects in the original format or when containing perfect bilateral symmetry. We created pairs of images for different categories: male faces, female faces, polygons, smoothed version of the polygons, flowers, and landscapes. This design allows us to compare symmetry preference in different domains. Each observer saw all categories randomly interleaved but saw only one of the two images in a pair. After recording preference, we recorded a rating of how salient the symmetry was for each image, and measured how quickly observers could decide which of the two images in a pair was symmetrical. Results reveal a general preference for symmetry in the case of shapes and faces. For landscapes, natural (no perfect symmetry) images were preferred. Correlations with judgments of saliency were present but generally low, and for landscapes the salience of symmetry was negatively related to preference. However, even within the category where symmetry was not liked (landscapes), the separate analysis of original and modified stimuli showed an interesting pattern: Salience of symmetry was correlated positively (artificial) or negatively (original) with preference, suggesting different effects of symmetry within the same class of stimuli based on context and categorization.

    Additional information

    Supplemental Information
  • Besharati, S., Forkel, S. J., Kopelman, M., Solms, M., Jenkinson, P. M., & Fotopoulou, A. (2014). The affective modulation of motor awareness in anosognosia for hemiplegia: Behavioural and lesion evidence. Cortex, 61, 127-140. doi:10.1016/j.cortex.2014.08.016.

    Abstract

    The possible role of emotion in anosognosia for hemiplegia (i.e., denial of motor deficits contralateral to a brain lesion), has long been debated between psychodynamic and neurocognitive theories. However, there are only a handful of case studies focussing on this topic, and the precise role of emotion in anosognosia for hemiplegia requires empirical investigation. In the present study, we aimed to investigate how negative and positive emotions influence motor awareness in anosognosia. Positive and negative emotions were induced under carefully-controlled experimental conditions in right-hemisphere stroke patients with anosognosia for hemiplegia (n = 11) and controls with clinically normal awareness (n = 10). Only the negative, emotion induction condition resulted in a significant improvement of motor awareness in anosognosic patients compared to controls; the positive emotion induction did not. Using lesion overlay and voxel-based lesion-symptom mapping approaches, we also investigated the brain lesions associated with the diagnosis of anosognosia, as well as with performance on the experimental task. Anatomical areas that are commonly damaged in AHP included the right-hemisphere motor and sensory cortices, the inferior frontal cortex, and the insula. Additionally, the insula, putamen and anterior periventricular white matter were associated with less awareness change following the negative emotion induction. This study suggests that motor unawareness and the observed lack of negative emotions about one's disabilities cannot be adequately explained by either purely motivational or neurocognitive accounts. Instead, we propose an integrative account in which insular and striatal lesions result in weak interoceptive and motivational signals. These deficits lead to faulty inferences about the self, involving a difficulty to personalise new sensorimotor information, and an abnormal adherence to premorbid beliefs about the body.

    Additional information

    supplementary file
  • Bidgood, A., Pine, J., Rowland, C. F., Sala, G., Freudenthal, D., & Ambridge, B. (2021). Verb argument structure overgeneralisations for the English intransitive and transitive constructions: Grammaticality judgments and production priming. Language and Cognition, 13(3), 397-437. doi:10.1017/langcog.2021.8.

    Abstract

    We used a multi-method approach to investigate how children avoid (or retreat from) argument structure overgeneralisation errors (e.g., *You giggled me). Experiment 1 investigated how semantic and statistical constraints (preemption and entrenchment) influence children’s and adults’ judgments of the grammatical acceptability of 120 verbs in transitive and intransitive sentences. Experiment 2 used syntactic priming to elicit overgeneralisation errors from children (aged 5–6) to investigate whether the same constraints operate in production. For judgments, the data showed effects of preemption, entrenchment, and semantics for all ages. For production, only an effect of preemption was observed, and only for transitivisation errors with intransitive-only verbs (e.g., *The man laughed the girl). We conclude that preemption, entrenchment, and semantic effects are real, but are obscured by particular features of the present production task.

    Additional information

    supplementary material
  • Bidgood, A., Ambridge, B., Pine, J. M., & Rowland, C. F. (2014). The retreat from locative overgeneralisation errors: A novel verb grammaticality judgment study. PLoS One, 9(5): e97634. doi:10.1371/journal.pone.0097634.

    Abstract

    Whilst some locative verbs alternate between the ground- and figure-locative constructions (e.g. Lisa sprayed the flowers with water/Lisa sprayed water onto the flowers), others are restricted to one construction or the other (e.g. *Lisa filled water into the cup/*Lisa poured the cup with water). The present study investigated two proposals for how learners (aged 5–6, 9–10 and adults) acquire this restriction, using a novel-verb-learning grammaticality-judgment paradigm. In support of the semantic verb class hypothesis, participants in all age groups used the semantic properties of novel verbs to determine the locative constructions (ground/figure/both) in which they could and could not appear. In support of the frequency hypothesis, participants' tolerance of overgeneralisation errors decreased with each increasing level of verb frequency (novel/low/high). These results underline the need to develop an integrated account of the roles of semantics and frequency in the retreat from argument structure overgeneralisation.
  • Bielczyk, N. Z., Piskała, K., Płomecka, M., Radziński, P., Todorova, L., & Foryś, U. (2019). Time-delay model of perceptual decision making in cortical networks. PLoS One, 14: e0211885. doi:10.1371/journal.pone.0211885.

    Abstract

    It is known that cortical networks operate on the edge of instability, in which oscillations can appear. However, the influence of this dynamic regime on performance in decision making is not well understood. In this work, we propose a population model of decision making based on a winner-take-all mechanism. Using this model, we demonstrate that local slow inhibition within the competing neuronal populations can lead to Hopf bifurcation. At the edge of instability, the system exhibits ambiguity in the decision making, which can account for the perceptual switches observed in human experiments. We further validate this model with fMRI datasets from an experiment on semantic priming in perception of ambivalent (male versus female) faces. We demonstrate that the model can correctly predict the drop in the variance of the BOLD signal within the Superior Parietal Area and Inferior Parietal Area while watching ambiguous visual stimuli. (An illustrative sketch of a delayed winner-take-all model follows this entry.)

    Additional information

    supporting information
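    The Bielczyk et al. (2019) entry above couples a winner-take-all population model with slow, time-delayed inhibition that can drive a Hopf bifurcation. The sketch below is a generic two-population rate model with mutual and delayed self-inhibition, integrated with a plain Euler scheme; all equations and parameter values are illustrative assumptions, not the model or parameters from the paper.

```python
import numpy as np

def simulate(w_cross=2.0, w_self=2.0, delay_ms=60.0, dt=0.5, tau=20.0,
             t_max=2000.0, inputs=(1.00, 1.02), noise_sd=0.01, seed=0):
    """Two competing populations with mutual and delayed self-inhibition.

    dr_i/dt = (-r_i + relu(I_i - w_cross * r_j(t) - w_self * r_i(t - delay))) / tau

    With fast inhibition the system settles into a winner-take-all state;
    sufficiently slow (delayed) and strong self-inhibition can destabilise
    that state and produce oscillations instead.
    """
    rng = np.random.default_rng(seed)
    steps = int(t_max / dt)
    d = max(int(delay_ms / dt), 1)   # delay expressed in Euler steps
    r = np.zeros((steps, 2))          # firing rates; history starts at zero

    for t in range(1, steps):
        delayed = r[t - d] if t >= d else r[0]
        drive = np.array(inputs) - w_cross * r[t - 1][::-1] - w_self * delayed
        dr = (-r[t - 1] + np.maximum(drive, 0.0)) / tau
        r[t] = r[t - 1] + dt * dr + noise_sd * rng.standard_normal(2)

    return r

rates = simulate()
# Late-window variability of each population's rate: close to zero when a
# stable winner has emerged, large when the delayed inhibition keeps the
# rates oscillating (the "edge of instability" regime).
print(rates[-800:].std(axis=0))
```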
  • Bierwisch, M. (1997). Universal Grammar and the Basic Variety. Second Language Research, 13(4), 348-366. doi:10.1177/026765839701300403.

    Abstract

    The Basic Variety (BV) as conceived by Klein and Perdue (K&P) is a relatively stable state in the process of spontaneous (adult) second language acquisition, characterized by a small set of phrasal, semantic and pragmatic principles. These principles are derived by inductive generalization from a fairly large body of data. They are considered by K&P as roughly equivalent to those of Universal Grammar (UG) in the sense of Chomsky's Minimalist Program, with the proviso that the BV allows for only weak (or unmarked) formal features. The present article first discusses the viability of the BV principles proposed by K&P, arguing that some of them are in need of clarification when confronted with learner varieties, and that they are, in any case, not likely to be part of UG, as they exclude phenomena (e.g., so-called psych verbs) that cannot be ruled out even from the core of natural language. The article also considers the proposal that learner varieties of the BV type are completely unmarked instantiations of UG. Putting aside problems arising from the Minimalist Program, especially the question whether a grammar with only weak features would be a factual possibility and what it would look like, it is argued that the BV as characterized by K&P must be considered as the result of a process that crucially differs from first language acquisition as furnished by UG for a number of reasons, including properties of the BV itself. As a matter of fact, several of the properties claimed for the BV by K&P are more likely the result of general learning strategies than of language-specific principles. If this is correct, the characterization of the BV is a fairly interesting result, albeit of a rather different type than K&P suggest.
  • Birhane, A., & Guest, O. (2021). Towards decolonising computational sciences. Kvinder, Køn & Forskning, 29(2), 60-73. doi:10.7146/kkf.v29i2.124899.

    Abstract

    This article sets out our perspective on how to begin the journey of decolonising computational fields, such as data and cognitive sciences. We see this struggle as requiring two basic steps: a) realisation that the present-day system has inherited, and still enacts, hostile, conservative, and oppressive behaviours and principles towards women of colour; and b) rejection of the idea that centring individual people is a solution to system-level problems. The longer we ignore these two steps, the more “our” academic system maintains its toxic structure, excludes, and harms Black women and other minoritised groups. This also keeps the door open to discredited pseudoscience, like eugenics and physiognomy. We propose that grappling with our fields’ histories and heritage holds the key to avoiding mistakes of the past. In contrast to, for example, initiatives such as “diversity boards”, which can be harmful because they superficially appear reformatory but nonetheless center whiteness and maintain the status quo. Building on the work of many women of colour, we hope to advance the dialogue required to build both a grass-roots and a top-down re-imagining of computational sciences — including but not limited to psychology, neuroscience, cognitive science, computer science, data science, statistics, machine learning, and artificial intelligence. We aspire to progress away from these fields’ stagnant, sexist, and racist shared past into an ecosystem that welcomes and nurtures demographically diverse researchers and ideas that critically challenge the status quo.
  • Blackwell, N. L., Perlman, M., & Fox Tree, J. E. (2015). Quotation as a multimodal construction. Journal of Pragmatics, 81, 1-7. doi:10.1016/j.pragma.2015.03.004.

    Abstract

    Quotations are a means to report a broad range of events in addition to speech, and often involve both vocal and bodily demonstration. The present study examined the use of quotation to report a variety of multisensory events (i.e., containing salient visible and audible elements) as participants watched and then described a set of video clips including human speech and animal vocalizations. We examined the relationship between demonstrations conveyed through the vocal versus bodily modality, comparing them across four common quotation devices (be like, go, say, and zero quotatives), as well as across direct and non-direct quotations and retellings. We found that direct quotations involved high levels of both vocal and bodily demonstration, while non-direct quotations involved lower levels in both these channels. In addition, there was a strong positive correlation between vocal and bodily demonstration for direct quotation. This result supports a Multimodal Hypothesis where information from the two channels arises from one central concept.
  • Blasi, D. E., Moran, S., Moisik, S. R., Widmer, P., Dediu, D., & Bickel, B. (2019). Human sound systems are shaped by post-Neolithic changes in bite configuration. Science, 363(6432): eaav3218. doi:10.1126/science.aav3218.

    Abstract

    Linguistic diversity, now and in the past, is widely regarded to be independent of biological changes that took place after the emergence of Homo sapiens. We show converging evidence from paleoanthropology, speech biomechanics, ethnography, and historical linguistics that labiodental sounds (such as “f” and “v”) were innovated after the Neolithic. Changes in diet attributable to food-processing technologies modified the human bite from an edge-to-edge configuration to one that preserves adolescent overbite and overjet into adulthood. This change favored the emergence and maintenance of labiodentals. Our findings suggest that language is shaped not only by the contingencies of its history, but also by culturally induced changes in human biology.

  • Bluijs, S., Dera, J., & Peeters, D. (2021). Waarom digitale literatuur in het literatuuronderwijs thuishoort [Why digital literature belongs in literature education]. Tijdschrift voor Nederlandse Taal- en Letterkunde, 137(2), 150-163. doi:10.5117/TNTL2021.2.003.BLUI.
  • Blythe, J. (2015). Other-initiated repair in Murrinh-Patha. Open Linguistics, 1, 283-308. doi:10.1515/opli-2015-0003.

    Abstract

    The range of linguistic structures and interactional practices associated with other-initiated repair (OIR) is surveyed for the Northern Australian language Murrinh-Patha. By drawing on a video corpus of informal Murrinh-Patha conversation, the OIR formats are compared in terms of their utility and versatility. Certain “restricted” formats have semantic properties that point to prior trouble source items. While these make the restricted repair initiators more specialised, the “open” formats are less well resourced semantically, which makes them more versatile. They tend to be used when the prior talk is potentially problematic in more ways than one. The open formats (especially thangku, “what?”) tend to solicit repair operations on each potential source of trouble, such that the resultant repair solution improves upon the trouble-source turn in several ways.
  • Bocanegra, B. R., Poletiek, F. H., Ftitache, B., & Clark, A. (2019). Intelligent problem-solvers externalize cognitive operations. Nature Human Behaviour, 3, 136-142. doi:10.1038/s41562-018-0509-y.

    Abstract

    Humans are nature’s most intelligent and prolific users of external props and aids (such as written texts, slide-rules and software packages). Here we introduce a method for investigating how people make active use of their task environment during problem-solving and apply this approach to the non-verbal Raven Advanced Progressive Matrices test for fluid intelligence. We designed a click-and-drag version of the Raven test in which participants could create different external spatial configurations while solving the puzzles. In our first study, we observed that the click-and-drag test was better than the conventional static test at predicting academic achievement of university students. This pattern of results was partially replicated in a novel sample. Importantly, environment-altering actions were clustered in between periods of apparent inactivity, suggesting that problem-solvers were delicately balancing the execution of internal and external cognitive operations. We observed a systematic relationship between this critical phasic temporal signature and improved test performance. Our approach is widely applicable and offers an opportunity to quantitatively assess a powerful, although understudied, feature of human intelligence: our ability to use external objects, props and aids to solve complex problems.
  • Bock, K., Irwin, D. E., Davidson, D. J., & Levelt, W. J. M. (2003). Minding the clock. Journal of Memory and Language, 48, 653-685. doi:10.1016/S0749-596X(03)00007-X.

    Abstract

    Telling time is an exercise in coordinating language production with visual perception. By coupling different ways of saying times with different ways of seeing them, the performance of time-telling can be used to track cognitive transformations from visual to verbal information in connected speech. To accomplish this, we used eyetracking measures along with measures of speech timing during the production of time expressions. Our findings suggest that an effective interface between what has been seen and what is to be said can be constructed within 300 ms. This interface underpins a preverbal plan or message that appears to guide a comparatively slow, strongly incremental formulation of phrases. The results begin to trace the divide between seeing and saying, or thinking and speaking, that must be bridged during the creation of even the most prosaic utterances of a language.
  • Böckler, A., Hömke, P., & Sebanz, N. (2014). Invisible Man: Exclusion from shared attention affects gaze behavior and self-reports. Social Psychological and Personality Science, 5(2), 140-148. doi:10.1177/1948550613488951.

    Abstract

    Social exclusion results in lowered satisfaction of basic needs and shapes behavior in subsequent social situations. We investigated participants’ immediate behavioral response during exclusion from an interaction that consisted of establishing eye contact. A newly developed eye-tracker-based “looking game” was employed; participants exchanged looks with two virtual partners in an exchange where the player who had just been looked at chose whom to look at next. While some participants received as many looks as the virtual players (included), others were ignored after two initial looks (excluded). Excluded participants reported lower basic need satisfaction, lower evaluation of the interaction, and devaluated their interaction partners more than included participants, demonstrating that people are sensitive to epistemic ostracism. In line with Williams’ need-threat model, eye-tracking results revealed that excluded participants did not withdraw from the unfavorable interaction, but increased the number of looks to the player who could potentially reintegrate them.
  • Bode, S., Feuerriegel, D., Bennett, D., & Alday, P. M. (2019). The Decision Decoding ToolBOX (DDTBOX) – A multivariate pattern analysis toolbox for event-related potentials. Neuroinformatics, 17(1), 27-42. doi:10.1007/s12021-018-9375-z.

    Abstract

    In recent years, neuroimaging research in cognitive neuroscience has increasingly used multivariate pattern analysis (MVPA) to investigate higher cognitive functions. Here we present DDTBOX, an open-source MVPA toolbox for electroencephalography (EEG) data. DDTBOX runs under MATLAB and is well integrated with the EEGLAB/ERPLAB and Fieldtrip toolboxes (Delorme and Makeig 2004; Lopez-Calderon and Luck 2014; Oostenveld et al. 2011). It trains support vector machines (SVMs) on patterns of event-related potential (ERP) amplitude data, following or preceding an event of interest, for classification or regression of experimental variables. These amplitude patterns can be extracted across space/electrodes (spatial decoding), time (temporal decoding), or both (spatiotemporal decoding). DDTBOX can also extract SVM feature weights, generate empirical chance distributions based on shuffled-labels decoding for group-level statistical testing, provide estimates of the prevalence of decodable information in the population, and perform a variety of corrections for multiple comparisons. It also includes plotting functions for single subject and group results. DDTBOX complements conventional analyses of ERP components, as subtle multivariate patterns can be detected that would be overlooked in standard analyses. It further allows for a more explorative search for information when no ERP component is known to be specifically linked to a cognitive process of interest. In summary, DDTBOX is an easy-to-use and open-source toolbox that allows for characterising the time-course of information related to various perceptual and cognitive processes. It can be applied to data from a large number of experimental paradigms and could therefore be a valuable tool for the neuroimaging community.
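    Illustrative note: DDTBOX itself is a MATLAB toolbox, and the sketch below does not use its API. It is a minimal Python/scikit-learn illustration of the general technique the abstract describes (temporal decoding of ERP amplitude patterns with a linear SVM); the array shapes and variable names are assumptions chosen for the example.

```python
# Minimal temporal-decoding sketch in the spirit of ERP MVPA: train a
# linear SVM on the spatial pattern (all channels) at each time point.
# This does NOT use DDTBOX; data shapes and names are hypothetical.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_epochs, n_channels, n_times = 120, 64, 50
erp = rng.normal(size=(n_epochs, n_channels, n_times))   # simulated ERP amplitudes
labels = rng.integers(0, 2, size=n_epochs)                # two experimental conditions

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
accuracy = np.array([
    cross_val_score(clf, erp[:, :, t], labels, cv=5).mean()  # decode at each time point
    for t in range(n_times)
])
print("peak cross-validated decoding accuracy:", accuracy.max())
```

    Group-level inference in this style of analysis is typically tested against an empirical chance distribution obtained from shuffled-labels decoding, which is one of the steps the toolbox automates.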
  • De Boer, B., & Perlman, M. (2014). Physical mechanisms may be as important as brain mechanisms in evolution of speech [Commentary on Ackerman, Hage, & Ziegler. Brain Mechanisms of acoustic communication in humans and nonhuman primates: an evolutionary perspective]. Behavioral and Brain Sciences, 37(6), 552-553. doi:10.1017/S0140525X13004007.

    Abstract

    We present two arguments why physical adaptations for vocalization may be as important as neural adaptations. First, fine control over vocalization is not easy for physical reasons, and modern humans may be exceptional. Second, we present an example of a gorilla that shows rudimentary voluntary control over vocalization, indicating that some neural control is already shared with great apes.
  • Bögels, S., & Torreira, F. (2021). Turn-end estimation in conversational turn-taking: The roles of context and prosody. Discourse Processes, 58(10), 903-924. doi:10.1080/0163853X.2021.1986664.

    Abstract

    This study investigated the role of contextual and prosodic information in turn-end estimation by means of a button-press task. We presented participants with turns extracted from a corpus of telephone calls visually (i.e., in transcribed form, word-by-word) and auditorily, and asked them to anticipate turn ends by pressing a button. The availability of the previous conversational context was generally helpful for turn-end estimation in short turns only, and more clearly so in the visual task than in the auditory task. To investigate the role of prosody, we examined whether participants in the auditory task pressed the button close to turn-medial points likely to constitute turn ends based on lexico-syntactic information alone. We observed that the vast majority of such button presses occurred in the presence of an intonational boundary rather than in its absence. These results are consistent with the view that prosodic cues in the proximity of turn ends play a relevant role in turn-end estimation.
  • Bögels, S., Barr, D., Garrod, S., & Kessler, K. (2015). Conversational interaction in the scanner: Mentalizing during language processing as revealed by MEG. Cerebral Cortex, 25(9), 3219-3234. doi:10.1093/cercor/bhu116.

    Abstract

    Humans are especially good at taking another’s perspective — representing what others might be thinking or experiencing. This “mentalizing” capacity is apparent in everyday human interactions and conversations. We investigated its neural basis using magnetoencephalography. We focused on whether mentalizing was engaged spontaneously and routinely to understand an utterance’s meaning or largely on-demand, to restore "common ground" when expectations were violated. Participants conversed with 1 of 2 confederate speakers and established tacit agreements about objects’ names. In a subsequent “test” phase, some of these agreements were violated by either the same or a different speaker. Our analysis of the neural processing of test phase utterances revealed recruitment of neural circuits associated with language (temporal cortex), episodic memory (e.g., medial temporal lobe), and mentalizing (temporo-parietal junction and ventro-medial prefrontal cortex). Theta oscillations (3 - 7 Hz) were modulated most prominently, and we observed phase coupling between functionally distinct neural circuits. The episodic memory and language circuits were recruited in anticipation of upcoming referring expressions, suggesting that context-sensitive predictions were spontaneously generated. In contrast, the mentalizing areas were recruited on-demand, as a means for detecting and resolving perceived pragmatic anomalies, with little evidence they were activated to make partner-specific predictions about upcoming linguistic utterances.
  • Bögels, S., & Torreira, F. (2015). Listeners use intonational phrase boundaries to project turn ends in spoken interaction. Journal of Phonetics, 52, 46-57. doi:10.1016/j.wocn.2015.04.004.

    Abstract

    In conversation, turn transitions between speakers often occur smoothly, usually within a time window of a few hundred milliseconds. It has been argued, on the basis of a button-press experiment [De Ruiter, J. P., Mitterer, H., & Enfield, N. J. (2006). Projecting the end of a speaker's turn: A cognitive cornerstone of conversation. Language, 82(3):515–535], that participants in conversation rely mainly on lexico-syntactic information when timing and producing their turns, and that they do not need to make use of intonational cues to achieve smooth transitions and avoid overlaps. In contrast to this view, but in line with previous observational studies, our results from a dialogue task and a button-press task involving questions and answers indicate that the identification of the end of intonational phrases is necessary for smooth turn-taking. In both tasks, participants never responded to questions (i.e., gave an answer or pressed a button to indicate a turn end) at turn-internal points of syntactic completion in the absence of an intonational phrase boundary. Moreover, in the button-press task, they often pressed the button at the same point of syntactic completion when the final word of an intonational phrase was cross-spliced at that location. Furthermore, truncated stimuli ending in a syntactic completion point but lacking an intonational phrase boundary led to significantly delayed button presses. In light of these results, we argue that earlier claims that intonation is not necessary for correct turn-end projection are misguided, and that research on turn-taking should continue to consider intonation as a source of turn-end cues along with other linguistic and communicative phenomena.
  • Bögels, S., Magyari, L., & Levinson, S. C. (2015). Neural signatures of response planning occur midway through an incoming question in conversation. Scientific Reports, 5: 12881. doi:10.1038/srep12881.

    Abstract

    A striking puzzle about language use in everyday conversation is that turn-taking latencies are usually very short, whereas planning language production takes much longer. This implies overlap between language comprehension and production processes, but the nature and extent of such overlap has never been studied directly. Combining an interactive quiz paradigm with EEG measurements in an innovative way, we show that production planning processes start as soon as possible, that is, within half a second after the answer to a question can be retrieved (up to several seconds before the end of the question). Localization of ERP data shows early activation even of brain areas related to late stages of production planning (e.g., syllabification). Finally, oscillation results suggest an attention switch from comprehension to production around the same time frame. This perspective from interactive language use throws new light on the performance characteristics that language competence involves.
  • Bögels, S., Kendrick, K. H., & Levinson, S. C. (2015). Never say no… How the brain interprets the pregnant pause in conversation. PLoS One, 10(12): e0145474. doi:10.1371/journal.pone.0145474.

    Abstract

    In conversation, negative responses to invitations, requests, offers, and the like are more likely to occur with a delay – conversation analysts talk of them as dispreferred. Here we examine the contrastive cognitive load ‘yes’ and ‘no’ responses make, either when relatively fast (300 ms after question offset) or delayed (1000 ms). Participants heard short dialogues contrasting in speed and valence of response while having their EEG recorded. We found that a fast ‘no’ evokes an N400-effect relative to a fast ‘yes’; however this contrast disappeared in the delayed responses. 'No' responses however elicited a late frontal positivity both if they were fast and if they were delayed. We interpret these results as follows: a fast ‘no’ evoked an N400 because an immediate response is expected to be positive – this effect disappears as the response time lengthens because now in ordinary conversation the probability of a ‘no’ has increased. However, regardless of the latency of response, a ‘no’ response is associated with a late positivity, since a negative response is always dispreferred. Together these results show that negative responses to social actions exact a higher cognitive load, but especially when least expected, in immediate response.

    Additional information

    Data availability
  • Bohnemeyer, J. (2003). Invisible time lines in the fabric of events: Temporal coherence in Yukatek narratives. Journal of Linguistic Anthropology, 13(2), 139-162. doi:10.1525/jlin.2003.13.2.139.

    Abstract

    This article examines how narratives are structured in a language in which event order is largely not coded. Yucatec Maya lacks both tense inflections and temporal connectives corresponding to English after and before. It is shown that the coding of events in Yucatec narratives is subject to a strict iconicity constraint within paragraph boundaries. Aspectual viewpoint shifting is used to reconcile iconicity preservation with the requirements of a more flexible narrative structure.
  • Bolton, J. L., Hayward, C., Direk, N., Lewis, J. G., Hammond, G. L., Hill, L. A., Anderson, A., Huffman, J., Wilson, J. F., Campbell, H., Rudan, I., Wright, A., Hastie, N., Wild, S. H., Velders, F. P., Hofman, A., Uitterlinden, A. G., Lahti, J., Räikkönen, K., Kajantie, E., Widen, E., Palotie, A., Eriksson, J. G., Kaakinen, M., Järvelin, M.-R., Timpson, N. J., Davey Smith, G., Ring, S. M., Evans, D. M., St Pourcain, B., Tanaka, T., Milaneschi, Y., Bandinelli, S., Ferrucci, L., van der Harst, P., Rosmalen, J. G. M., Bakker, S. J. L., Verweij, N., Dullaart, R. P. F., Mahajan, A., Lindgren, C. M., Morris, A., Lind, L., Ingelsson, E., Anderson, L. N., Pennell, C. E., Lye, S. J., Matthews, S. G., Eriksson, J., Mellstrom, D., Ohlsson, C., Price, J. F., Strachan, M. W. J., Reynolds, R. M., Tiemeier, H., Walker, B. R., & CORtisol NETwork (CORNET) Consortium (2014). Genome wide association identifies common variants at the SERPINA6/SERPINA1 locus influencing plasma cortisol and corticosteroid binding globulin. PLoS Genetics, 10(7): e1004474. doi:10.1371/journal.pgen.1004474.

    Abstract

    Variation in plasma levels of cortisol, an essential hormone in the stress response, is associated in population-based studies with cardio-metabolic, inflammatory and neuro-cognitive traits and diseases. Heritability of plasma cortisol is estimated at 30-60% but no common genetic contribution has been identified. The CORtisol NETwork (CORNET) consortium undertook genome wide association meta-analysis for plasma cortisol in 12,597 Caucasian participants, replicated in 2,795 participants. The results indicate that <1% of variance in plasma cortisol is accounted for by genetic variation in a single region of chromosome 14. This locus spans SERPINA6, encoding corticosteroid binding globulin (CBG, the major cortisol-binding protein in plasma), and SERPINA1, encoding α1-antitrypsin (which inhibits cleavage of the reactive centre loop that releases cortisol from CBG). Three partially independent signals were identified within the region, represented by common SNPs; detailed biochemical investigation in a nested sub-cohort showed all these SNPs were associated with variation in total cortisol binding activity in plasma, but some variants influenced total CBG concentrations while the top hit (rs12589136) influenced the immunoreactivity of the reactive centre loop of CBG. Exome chip and 1000 Genomes imputation analysis of this locus in the CROATIA-Korcula cohort identified missense mutations in SERPINA6 and SERPINA1 that did not account for the effects of common variants. These findings reveal a novel common genetic source of variation in binding of cortisol by CBG, and reinforce the key role of CBG in determining plasma cortisol levels. In turn this genetic variation may contribute to cortisol-associated degenerative diseases.
  • Bornkessel-Schlesewsky, I., Alday, P. M., Kretzschmar, F., Grewe, T., Gumpert, M., Schumacher, P. B., & Schlesewsky, M. (2015). Age-related changes in predictive capacity versus internal model adaptability: Electrophysiological evidence that individual differences outweigh effects of age. Frontiers in Aging Neuroscience, 7: 217. doi:10.3389/fnagi.2015.00217.

    Abstract

    Hierarchical predictive coding has been identified as a possible unifying principle of brain function, and recent work in cognitive neuroscience has examined how it may be affected by age–related changes. Using language comprehension as a test case, the present study aimed to dissociate age-related changes in prediction generation versus internal model adaptation following a prediction error. Event-related brain potentials (ERPs) were measured in a group of older adults (60–81 years; n = 40) as they read sentences of the form “The opposite of black is white/yellow/nice.” Replicating previous work in young adults, results showed a target-related P300 for the expected antonym (“white”; an effect assumed to reflect a prediction match), and a graded N400 effect for the two incongruous conditions (i.e. a larger N400 amplitude for the incongruous continuation not related to the expected antonym, “nice,” versus the incongruous associated condition, “yellow”). These effects were followed by a late positivity, again with a larger amplitude in the incongruous non-associated versus incongruous associated condition. Analyses using linear mixed-effects models showed that the target-related P300 effect and the N400 effect for the incongruous non-associated condition were both modulated by age, thus suggesting that age-related changes affect both prediction generation and model adaptation. However, effects of age were outweighed by the interindividual variability of ERP responses, as reflected in the high proportion of variance captured by the inclusion of by-condition random slopes for participants and items. We thus argue that – at both a neurophysiological and a functional level – the notion of general differences between language processing in young and older adults may only be of limited use, and that future research should seek to better understand the causes of interindividual variability in the ERP responses of older adults and its relation to cognitive performance.
  • Bosker, H. R. (2021). Using fuzzy string matching for automated assessment of listener transcripts in speech intelligibility studies. Behavior Research Methods, 53(5), 1945-1953. doi:10.3758/s13428-021-01542-4.

    Abstract

    Many studies of speech perception assess the intelligibility of spoken sentence stimuli by means of transcription tasks (‘type out what you hear’). The intelligibility of a given stimulus is then often expressed in terms of percentage of words correctly reported from the target sentence. Yet scoring the participants’ raw responses for words correctly identified from the target sentence is a time-consuming task, and hence resource-intensive. Moreover, there is no consensus among speech scientists about what specific protocol to use for the human scoring, limiting the reliability of human scores. The present paper evaluates various forms of fuzzy string matching between participants’ responses and target sentences, as automated metrics of listener transcript accuracy. We demonstrate that one particular metric, the Token Sort Ratio, is a consistent, highly efficient, and accurate metric for automated assessment of listener transcripts, as evidenced by high correlations with human-generated scores (best correlation: r = 0.940) and a strong relationship to acoustic markers of speech intelligibility. Thus, fuzzy string matching provides a practical tool for assessment of listener transcript accuracy in large-scale speech intelligibility studies. See https://tokensortratio.netlify.app for an online implementation.
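    Illustrative note: one way to compute a Token Sort Ratio score in practice is with an off-the-shelf fuzzy-matching library. The Python sketch below uses the rapidfuzz package; the normalisation steps shown are assumptions for illustration and may differ from the scoring protocol used in the paper.

```python
# Scoring a listener transcript against a target sentence with the
# Token Sort Ratio (order-insensitive fuzzy match), via rapidfuzz.
# The normalisation below (lowercasing, punctuation stripping) is an
# illustrative assumption, not necessarily the paper's exact pipeline.
import re
from rapidfuzz import fuzz

def normalize(sentence: str) -> str:
    """Lowercase and strip punctuation so only word content is compared."""
    return re.sub(r"[^\w\s]", "", sentence.lower()).strip()

def token_sort_score(response: str, target: str) -> float:
    """Return a 0-100 similarity score between transcript and target."""
    return fuzz.token_sort_ratio(normalize(response), normalize(target))

if __name__ == "__main__":
    target = "The boy quickly ran to the old wooden bridge"
    response = "the boy ran quickly to the wooden bridge."
    print(token_sort_score(response, target))  # high score despite word swap and omission
```

    Because the metric sorts tokens before comparing strings, it tolerates word-order differences in the response while still penalising missing or misheard words.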
  • Bosker, H. R., Badaya, E., & Corley, M. (2021). Discourse markers activate their, like, cohort competitors. Discourse Processes, 58(9), 837-851. doi:10.1080/0163853X.2021.1924000.

    Abstract

    Speech in everyday conversations is riddled with discourse markers (DMs), such as well, you know, and like. However, in many lab-based studies of speech comprehension, such DMs are typically absent from the carefully articulated and highly controlled speech stimuli. As such, little is known about how these DMs influence online word recognition. The present study specifically investigated the online processing of DM like and how it influences the activation of words in the mental lexicon. We specifically targeted the cohort competitor (CC) effect in the Visual World Paradigm: Upon hearing spoken instructions to “pick up the beaker,” human listeners also typically fixate—next to the target object—referents that overlap phonologically with the target word (cohort competitors such as beetle; CCs). However, several studies have argued that CC effects are constrained by syntactic, semantic, pragmatic, and discourse constraints. Therefore, the present study investigated whether DM like influences online word recognition by activating its cohort competitors (e.g., lightbulb). In an eye-tracking experiment using the Visual World Paradigm, we demonstrate that when participants heard spoken instructions such as “Now press the button for the, like … unicycle,” they showed anticipatory looks to the CC referent (lightbulb) well before hearing the target. This CC effect was sustained for a relatively long period of time, even despite hearing disambiguating information (i.e., the /k/ in like). Analysis of the reaction times also showed that participants were significantly faster to select CC targets (lightbulb) when preceded by DM like. These findings suggest that seemingly trivial DMs, such as like, activate their CCs, impacting online word recognition. Thus, we advocate a more holistic perspective on spoken language comprehension in naturalistic communication, including the processing of DMs.
  • Bosker, H. R., & Peeters, D. (2021). Beat gestures influence which speech sounds you hear. Proceedings of the Royal Society B: Biological Sciences, 288: 20202419. doi:10.1098/rspb.2020.2419.

    Abstract

    Beat gestures—spontaneously produced biphasic movements of the hand—are among the most frequently encountered co-speech gestures in human communication. They are closely temporally aligned to the prosodic characteristics of the speech signal, typically occurring on lexically stressed syllables. Despite their prevalence across speakers of the world’s languages, how beat gestures impact spoken word recognition is unclear. Can these simple ‘flicks of the hand’ influence speech perception? Across a range of experiments, we demonstrate that beat gestures influence the explicit and implicit perception of lexical stress (e.g. distinguishing OBject from obJECT), and in turn can influence what vowels listeners hear. Thus, we provide converging evidence for a manual McGurk effect: relatively simple and widely occurring hand movements influence which speech sounds we hear.

    Additional information

    example stimuli and experimental data
  • Bosker, H. R., Van Os, M., Does, R., & Van Bergen, G. (2019). Counting 'uhm's: how tracking the distribution of native and non-native disfluencies influences online language comprehension. Journal of Memory and Language, 106, 189-202. doi:10.1016/j.jml.2019.02.006.

    Abstract

    Disfluencies, like 'uh', have been shown to help listeners anticipate reference to low-frequency words. The associative account of this 'disfluency bias' proposes that listeners learn to associate disfluency with low-frequency referents based on prior exposure to non-arbitrary disfluency distributions (i.e., greater probability of low-frequency words after disfluencies). However, there is limited evidence for listeners actually tracking disfluency distributions online. The present experiments are the first to show that adult listeners, exposed to a typical or more atypical disfluency distribution (i.e., hearing a talker unexpectedly say uh before high-frequency words), flexibly adjust their predictive strategies to the disfluency distribution at hand (e.g., learn to predict high-frequency referents after disfluency). However, when listeners were presented with the same atypical disfluency distribution but produced by a non-native speaker, no adjustment was observed. This suggests pragmatic inferences can modulate distributional learning, revealing the flexibility of, and constraints on, distributional learning in incremental language comprehension.
  • Bosker, H. R., Quené, H., Sanders, T. J. M., & de Jong, N. H. (2014). Native 'um's elicit prediction of low-frequency referents, but non-native 'um's do not. Journal of Memory and Language, 75, 104-116. doi:10.1016/j.jml.2014.05.004.

    Abstract

    Speech comprehension involves extensive use of prediction. Linguistic prediction may be guided by the semantics or syntax, but also by the performance characteristics of the speech signal, such as disfluency. Previous studies have shown that listeners, when presented with the filler uh, exhibit a disfluency bias for discourse-new or unknown referents, drawing inferences about the source of the disfluency. The goal of the present study is to study the contrast between native and non-native disfluencies in speech comprehension. Experiment 1 presented listeners with pictures of high-frequency (e.g., a hand) and low-frequency objects (e.g., a sewing machine) and with fluent and disfluent instructions. Listeners were found to anticipate reference to low-frequency objects when encountering disfluency, thus attributing disfluency to speaker trouble in lexical retrieval. Experiment 2 showed that, when participants listened to disfluent non-native speech, no anticipation of low-frequency referents was observed. We conclude that listeners can adapt their predictive strategies to the (non-native) speaker at hand, extending our understanding of the role of speaker identity in speech comprehension.
  • Bosker, H. R., Quené, H., Sanders, T. J. M., & de Jong, N. H. (2014). The perception of fluency in native and non-native speech. Language Learning, 64, 579-614. doi:10.1111/lang.12067.

    Abstract

    Where native speakers supposedly are fluent by default, non-native speakers often have to strive hard to achieve a native-like fluency level. However, disfluencies (such as pauses, fillers, repairs, etc.) occur in both native and non-native speech and it is as yet unclear how fluency raters weigh the fluency characteristics of native and non-native speech. Two rating experiments compared the way raters assess the fluency of native and non-native speech. The fluency characteristics of native and non-native speech were controlled by using phonetic manipulations in pause (Experiment 1) and speed characteristics (Experiment 2). The results show that the ratings on manipulated native and non-native speech were affected in a similar fashion. This suggests that there is no difference in the way listeners weigh the fluency characteristics of native and non-native speakers.
  • Braden, R. O., Amor, D. J., Fisher, S. E., Mei, C., Myers, C. T., Mefford, H., Gill, D., Srivastava, S., Swanson, L. C., Goel, H., Scheffer, I. E., & Morgan, A. T. (2021). Severe speech impairment is a distinguishing feature of FOXP1-related disorder. Developmental Medicine & Child Neurology, 63(12), 1417-1426. doi:10.1111/dmcn.14955.

    Abstract

    Aim
    To delineate the speech and language phenotype of a cohort of individuals with FOXP1-related disorder.

    Method
    We administered a standardized test battery to examine speech and oral motor function, receptive and expressive language, non-verbal cognition, and adaptive behaviour. Clinical history and cognitive assessments were analysed together with speech and language findings.

    Results
    Twenty-nine patients (17 females, 12 males; mean age 9y 6mo; median age 8y [range 2y 7mo–33y]; SD 6y 5mo) with pathogenic FOXP1 variants (14 truncating, three missense, three splice site, one in-frame deletion, eight cytogenetic deletions; 28 out of 29 were de novo variants) were studied. All had atypical speech, with 21 being verbal and eight minimally verbal. All verbal patients had dysarthric and apraxic features, with phonological deficits in most (14 out of 16). Language scores were low overall. In the 21 individuals who carried truncating or splice site variants and small deletions, expressive abilities were relatively preserved compared with comprehension.

    Interpretation
    FOXP1-related disorder is characterized by a complex speech and language phenotype with prominent dysarthria, broader motor planning and programming deficits, and linguistic-based phonological errors. Diagnosis of the speech phenotype associated with FOXP1-related dysfunction will inform early targeted therapy.

    Additional information

    figure S1 table S1
