Publications

  • Devaraju, K., Barnabé-Heider, F., Kokaia, Z., & Lindvall, O. (2013). FoxJ1-expressing cells contribute to neurogenesis in forebrain of adult rats: Evidence from in vivo electroporation combined with piggyBac transposon. Experimental Cell Research, 319(18), 2790-2800. doi:10.1016/j.yexcr.2013.08.028.

    Abstract

    Ependymal cells in the lateral ventricular wall are considered to be post-mitotic but can give rise to neuroblasts and astrocytes after stroke in adult mice due to insult-induced suppression of Notch signaling. The transcription factor FoxJ1, which has been used to characterize mouse ependymal cells, is also expressed by a subset of astrocytes. Cells expressing FoxJ1, which drives the expression of motile cilia, contribute to early postnatal neurogenesis in mouse olfactory bulb. The distribution and progeny of FoxJ1-expressing cells in rat forebrain are unknown. Here we show using immunohistochemistry that the overall majority of FoxJ1-expressing cells in the lateral ventricular wall of adult rats are ependymal cells with a minor population being astrocytes. To allow for long-term fate mapping of FoxJ1-derived cells, we used the piggyBac system for in vivo gene transfer with electroporation. Using this method, we found that FoxJ1-expressing cells, presumably the astrocytes, give rise to neuroblasts and mature neurons in the olfactory bulb both in intact and stroke-damaged brain of adult rats. No significant contribution of FoxJ1-derived cells to stroke-induced striatal neurogenesis was detected. These data indicate that in the adult rat brain, FoxJ1-expressing cells contribute to the formation of new neurons in the olfactory bulb but are not involved in the cellular repair after stroke.
  • Díaz-Caneja, C. M., Alloza, C., Gordaliza, P. M., Fernández Pena, A., De Hoyos, L., Santonja, J., Buimer, E. E. L., Van Haren, N. E. M., Cahn, W., Arango, C., Kahn, R. S., Hulshoff Pol, H. E., Schnack, H. G., & Janssen, J. (2021). Sex differences in lifespan trajectories and variability of human sulcal and gyral morphology. Cerebral Cortex, 31(11), 5107-5120. doi:10.1093/cercor/bhab145.

    Abstract

    Sex differences in development and aging of human sulcal morphology have been understudied. We charted sex differences in trajectories and inter-individual variability of global sulcal depth, width, and length, pial surface area, exposed (hull) gyral surface area, unexposed sulcal surface area, cortical thickness, and cortex volume across the lifespan in a longitudinal sample (700 scans; 194 participants with two scans, 104 with three; age range: 16-70 years) of neurotypical males and females. After adjusting for brain volume, females had thicker cortex and steeper thickness decline until age 40 years; trajectories converged thereafter. Across sexes, sulcal shortening was faster before age 40, while sulcal shallowing and widening were faster thereafter. While hull area remained stable, sulcal surface area declined and was more strongly associated with sulcal shortening than with sulcal shallowing and widening. Males showed greater variability for cortex volume and thickness and lower variability for sulcal width. Across sexes, variability decreased with age for all measures except for cortical volume and thickness. Our findings highlight the association of loss of sulcal area, notably through sulcal shortening, with cortex volume loss. Studying sex differences in lifespan trajectories may improve knowledge of individual differences in brain development and the pathophysiology of neuropsychiatric conditions.

    Additional information

    supplementary data
  • Dimroth, C. (1998). Indiquer la portée en allemand L2: Une étude longitudinale de l'acquisition des particules de portée. AILE (Acquisition et Interaction en Langue étrangère), 11, 11-34.
  • Dingemanse, M. (2013). Ideophones and gesture in everyday speech. Gesture, 13, 143-165. doi:10.1075/gest.13.2.02din.

    Abstract

    This article examines the relation between ideophones and gestures in a corpus of everyday discourse in Siwu, a richly ideophonic language spoken in Ghana. The overall frequency of ideophone-gesture couplings in everyday speech is lower than previously suggested, but two findings shed new light on the relation between ideophones and gesture. First, discourse type makes a difference: ideophone-gesture couplings are more frequent in narrative contexts, a finding that explains earlier claims, which were based not on everyday language use but on elicited narratives. Second, there is a particularly strong coupling between ideophones and one type of gesture: iconic gestures. This coupling allows us to better understand iconicity in relation to the affordances of meaning and modality. Ultimately, the connection between ideophones and iconic gestures is explained by reference to the depictive nature of both. Ideophone and iconic gesture are two aspects of the process of depiction.
  • Dingemanse, M., Torreira, F., & Enfield, N. J. (2013). Is “Huh?” a universal word? Conversational infrastructure and the convergent evolution of linguistic items. PLoS One, 8(11): e78273. doi:10.1371/journal.pone.0078273.

    Abstract

    A word like Huh? (used as a repair initiator when, for example, one has not clearly heard what someone just said) is found in roughly the same form and function in spoken languages across the globe. We investigate it in naturally occurring conversations in ten languages and present evidence and arguments for two distinct claims: that Huh? is universal, and that it is a word. In support of the first, we show that the similarities in form and function of this interjection across languages are much greater than expected by chance. In support of the second claim we show that it is a lexical, conventionalised form that has to be learnt, unlike grunts or emotional cries. We discuss possible reasons for the cross-linguistic similarity and propose an account in terms of convergent evolution. Huh? is a universal word not because it is innate but because it is shaped by selective pressures in an interactional environment that all languages share: that of other-initiated repair. Our proposal enhances evolutionary models of language change by suggesting that conversational infrastructure can drive the convergent cultural evolution of linguistic items.
  • Doherty, M., & Klein, W. (Eds.). (1991). Übersetzung [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (84).
  • Dolscheid, S., Shayan, S., Majid, A., & Casasanto, D. (2013). The thickness of musical pitch: Psychophysical evidence for linguistic relativity. Psychological Science, 24, 613-621. doi:10.1177/0956797612457374.

    Abstract

    Do people who speak different languages think differently, even when they are not using language? To find out, we used nonlinguistic psychophysical tasks to compare mental representations of musical pitch in native speakers of Dutch and Farsi. Dutch speakers describe pitches as high (hoog) or low (laag), whereas Farsi speakers describe pitches as thin (na-zok) or thick (koloft). Differences in language were reflected in differences in performance on two pitch-reproduction tasks, even though the tasks used simple, nonlinguistic stimuli and responses. To test whether experience using language influences mental representations of pitch, we trained native Dutch speakers to describe pitch in terms of thickness, as Farsi speakers do. After the training, Dutch speakers’ performance on a nonlinguistic psychophysical task resembled the performance of native Farsi speakers. People who use different linguistic space-pitch metaphors also think about pitch differently. Language can play a causal role in shaping nonlinguistic representations of musical pitch.

    Additional information

    DS_10.1177_0956797612457374.pdf
  • Donnelly, S., & Kidd, E. (2021). Onset neighborhood density slows lexical access in high vocabulary 30-month-olds. Cognitive Science, 45(9): e13022. doi:10.1111/cogs.13022.

    Abstract

    There is consensus that the adult lexicon exhibits lexical competition. In particular, substantial evidence demonstrates that words with more phonologically similar neighbors are recognized less efficiently than words with fewer neighbors. How and when these effects emerge in the child's lexicon is less clear. In the current paper, we build on previous research by testing whether phonological onset density slows lexical access in a large sample of 100 English-acquiring 30-month-olds. The children participated in a visual world looking-while-listening task, in which their attention was directed to one of two objects on a computer screen while their eye movements were recorded. We found moderate evidence of inhibitory effects of onset neighborhood density on lexical access and clear evidence for an interaction between onset neighborhood density and vocabulary, with larger effects of onset neighborhood density for children with larger vocabularies. Results suggest the lexicons of 30-month-olds exhibit lexical-level competition, with competition increasing with vocabulary size.
  • Donnelly, S., & Kidd, E. (2021). On the structure and source of individual differences in toddlers' comprehension of transitive sentences. Frontiers in Psychology, 12: 661022. doi:10.3389/fpsyg.2021.661022.

    Abstract

    How children learn grammar is one of the most fundamental questions in cognitive science. Two theoretical accounts, namely, the Early Abstraction and Usage-Based accounts, propose competing answers to this question. To compare the predictions of these accounts, we tested 92 24-month-old children's comprehension of transitive sentences with novel verbs (e.g., “The boy is gorping the girl!”) with the Intermodal Preferential Looking (IMPL) task. We found very little evidence that children looked to the target video at above-chance levels. Using mixed and mixture models, we tested the predictions the two accounts make about: (i) the structure of individual differences in the IMPL task and (ii) the relationship between vocabulary knowledge, lexical processing, and performance in the IMPL task. However, the results did not strongly support either of the two accounts. The implications for theories of language acquisition and for tasks developed for examining individual differences are discussed.

    Additional information

    data via OSF
  • Donnelly, S., & Kidd, E. (2021). The longitudinal relationship between conversational turn-taking and vocabulary growth in early language development. Child Development, 92(2), 609-625. doi:10.1111/cdev.13511.

    Abstract

    Children acquire language embedded within the rich social context of interaction. This paper reports on a longitudinal study investigating the developmental relationship between conversational turn‐taking and vocabulary growth in English‐acquiring children (N = 122) followed between 9 and 24 months. Daylong audio recordings obtained every 3 months provided several indices of the language environment, including the number of adult words children heard in their environment and their number of conversational turns. Vocabulary was measured independently via parental report. Growth curve analyses revealed a bidirectional relationship between conversational turns and vocabulary growth, controlling for the number of words in children’s environments. The results are consistent with theoretical approaches that identify social interaction as a core component of early language acquisition.
  • Doumas, L. A. A., & Martin, A. E. (2021). A model for learning structured representations of similarity and relative magnitude from experience. Current Opinion in Behavioral Sciences, 37, 158-166. doi:10.1016/j.cobeha.2021.01.001.

    Abstract

    How a system represents information tightly constrains the kinds of problems it can solve. Humans routinely solve problems that appear to require abstract representations of stimulus properties and relations. How we acquire such representations has central importance in an account of human cognition. We briefly describe a theory of how a system can learn invariant responses to instances of similarity and relative magnitude, and how structured, relational representations can be learned from initially unstructured inputs. Two operations, comparing distributed representations and learning from the concomitant network dynamics in time, underpin the ability to learn these representations and to respond to invariance in the environment. Comparing analog representations of absolute magnitude produces invariant signals that carry information about similarity and relative magnitude. We describe how a system can then use this information to bootstrap learning structured (i.e., symbolic) concepts of relative magnitude from experience without assuming such representations a priori.
  • Drenth, P., Levelt, W. J. M., & Noort, E. (2013). Rejoinder to commentary on the Stapel-fraud report. The Psychologist, 26(2), 81.

    Abstract

    The Levelt, Noort and Drenth Committees make their sole and final rejoinder to criticisms of their report on the Stapel fraud.
  • Drew, P., Hakulinen, A., Heinemann, T., Niemi, J., & Rossi, G. (2021). Hendiadys in naturally occurring interactions: A cross-linguistic study of double verb constructions. Journal of Pragmatics, 182, 322-347. doi:10.1016/j.pragma.2021.02.008.

    Abstract

    Double verb constructions known as hendiadys have been studied primarily in literary texts and corpora of written language. Much less is known about their properties and usage in spoken language, where expressions such as ‘come and see’, ‘go and tell’, ‘sit and talk’ are particularly common, and where we can find an even richer diversity of other constructions. In this study, we investigate hendiadys in corpora of naturally occurring social interactions in four languages, Danish, English (US and UK), Finnish and Italian, with the objective of exploring whether hendiadys is used systematically in recurrent interactional and sequential circumstances, from which it is possible to identify the pragmatic function(s) that hendiadys may serve. Examining hendiadys in conversation also offers us a special window into its grammatical properties, for example when a speaker self-corrects from a non-hendiadic to a hendiadic expression, exposing the boundary between related grammatical forms and demonstrating the distinctiveness of hendiadys in context. More broadly, we demonstrate that hendiadys is systematically associated with talk about complainable matters, in environments characterised by a conflict, dissonance, or friction that is ongoing in the interaction or that is being reported by one participant to another. We also find that the utterance in which hendiadys is used is typically in a subsequent and possibly terminal position in the sequence, summarising or concluding it. Another key finding is that the complainable or conflictual element in these interactions is expressed primarily by the first conjunct of the hendiadic construction. Whilst the first conjunct is semantically subsidiary to the second, it is pragmatically the most important one. This analysis leads us to revisit a long-established asymmetry between the verbal components of hendiadys, and to bring to light the synergy of grammar and pragmatics in language usage.
  • Drijvers, L., Vaitonyte, J., & Ozyurek, A. (2019). Degree of language experience modulates visual attention to visible speech and iconic gestures during clear and degraded speech comprehension. Cognitive Science, 43: e12789. doi:10.1111/cogs.12789.

    Abstract

    Visual information conveyed by iconic hand gestures and visible speech can enhance speech comprehension under adverse listening conditions for both native and non‐native listeners. However, how a listener allocates visual attention to these articulators during speech comprehension is unknown. We used eye‐tracking to investigate whether and how native and highly proficient non‐native listeners of Dutch allocated overt eye gaze to visible speech and gestures during clear and degraded speech comprehension. Participants watched video clips of an actress uttering a clear or degraded (6‐band noise‐vocoded) action verb while performing a gesture or not, and were asked to indicate the word they heard in a cued‐recall task. Gestural enhancement was the largest (i.e., a relative reduction in reaction time cost) when speech was degraded for all listeners, but it was stronger for native listeners. Both native and non‐native listeners mostly gazed at the face during comprehension, but non‐native listeners gazed more often at gestures than native listeners. However, only native but not non‐native listeners' gaze allocation to gestures predicted gestural benefit during degraded speech comprehension. We conclude that non‐native listeners might gaze at gesture more as it might be more challenging for non‐native listeners to resolve the degraded auditory cues and couple those cues to phonological information that is conveyed by visible speech. This diminished phonological knowledge might hinder the use of semantic information that is conveyed by gestures for non‐native compared to native listeners. Our results demonstrate that the degree of language experience impacts overt visual attention to visual articulators, resulting in different visual benefits for native versus non‐native listeners.

    Additional information

    Supporting information
  • Drijvers, L., Jensen, O., & Spaak, E. (2021). Rapid invisible frequency tagging reveals nonlinear integration of auditory and visual information. Human Brain Mapping, 42(4), 1138-1152. doi:10.1002/hbm.25282.

    Abstract

    During communication in real-life settings, the brain integrates information from auditory and visual modalities to form a unified percept of our environment. In the current magnetoencephalography (MEG) study, we used rapid invisible frequency tagging (RIFT) to generate steady-state evoked fields and investigated the integration of audiovisual information in a semantic context. We presented participants with videos of an actress uttering action verbs (auditory; tagged at 61 Hz) accompanied by a gesture (visual; tagged at 68 Hz, using a projector with a 1440 Hz refresh rate). Integration ease was manipulated by auditory factors (clear/degraded speech) and visual factors (congruent/incongruent gesture). We identified MEG spectral peaks at the individual (61/68 Hz) tagging frequencies. We furthermore observed a peak at the intermodulation frequency of the auditory and visually tagged signals (f_visual − f_auditory = 7 Hz), specifically when integration was easiest (i.e., when speech was clear and accompanied by a congruent gesture). This intermodulation peak is a signature of nonlinear audiovisual integration, and was strongest in left inferior frontal gyrus and left temporal regions; areas known to be involved in speech-gesture integration. The enhanced power at the intermodulation frequency thus reflects the ease of integration and demonstrates that speech-gesture information interacts in higher-order language areas. Furthermore, we provide a proof-of-principle of the use of RIFT to study the integration of audiovisual stimuli, in relation to, for instance, semantic context.
  • Drijvers, L., Van der Plas, M., Ozyurek, A., & Jensen, O. (2019). Native and non-native listeners show similar yet distinct oscillatory dynamics when using gestures to access speech in noise. NeuroImage, 194, 55-67. doi:10.1016/j.neuroimage.2019.03.032.

    Abstract

    Listeners are often challenged by adverse listening conditions during language comprehension induced by external factors, such as noise, but also internal factors, such as being a non-native listener. Visible cues, such as semantic information conveyed by iconic gestures, can enhance language comprehension in such situations. Using magnetoencephalography (MEG) we investigated whether spatiotemporal oscillatory dynamics can predict a listener's benefit of iconic gestures during language comprehension in both internally (non-native versus native listeners) and externally (clear/degraded speech) induced adverse listening conditions. Proficient non-native speakers of Dutch were presented with videos in which an actress uttered a degraded or clear verb, accompanied by a gesture or not, and completed a cued-recall task after every video. The behavioral and oscillatory results obtained from non-native listeners were compared to an MEG study where we presented the same stimuli to native listeners (Drijvers et al., 2018a). Non-native listeners demonstrated a similar gestural enhancement effect as native listeners, but overall scored significantly slower on the cued-recall task. In both native and non-native listeners, an alpha/beta power suppression revealed engagement of the extended language network, motor and visual regions during gestural enhancement of degraded speech comprehension, suggesting similar core processes that support unification and lexical access processes. An individual's alpha/beta power modulation predicted the gestural benefit a listener experienced during degraded speech comprehension. Importantly, however, non-native listeners showed less engagement of the mouth area of the primary somatosensory cortex, left insula (beta), LIFG and ATL (alpha) than native listeners, which suggests that non-native listeners might be hindered in processing the degraded phonological cues and coupling them to the semantic information conveyed by the gesture. Native and non-native listeners thus demonstrated similar yet distinct spatiotemporal oscillatory dynamics when recruiting visual cues to disambiguate degraded speech.

    Additional information

    1-s2.0-S1053811919302216-mmc1.docx
  • Drolet, M., & Kempen, G. (1985). IPG: A cognitive approach to sentence generation. CCAI: The Journal for the Integrated Study of Artificial Intelligence, Cognitive Science and Applied Epistemology, 2, 37-61.
  • Dronkers, N. F., Wilkins, D. P., Van Valin Jr., R. D., Redfern, B. B., & Jaeger, J. J. (2004). Lesion analysis of the brain areas involved in language comprehension. Cognition, 92, 145-177. doi:10.1016/j.cognition.2003.11.002.

    Abstract

    The cortical regions of the brain traditionally associated with the comprehension of language are Wernicke's area and Broca's area. However, recent evidence suggests that other brain regions might also be involved in this complex process. This paper describes the opportunity to evaluate a large number of brain-injured patients to determine which lesioned brain areas might affect language comprehension. Sixty-four chronic left hemisphere stroke patients were evaluated on 11 subtests of the Curtiss–Yamada Comprehensive Language Evaluation – Receptive (CYCLE-R; Curtiss, S., & Yamada, J. (1988). Curtiss–Yamada Comprehensive Language Evaluation. Unpublished test, UCLA). Eight right hemisphere stroke patients and 15 neurologically normal older controls also participated. Patients were required to select a single line drawing from an array of three or four choices that best depicted the content of an auditorily-presented sentence. Patients' lesions obtained from structural neuroimaging were reconstructed onto templates and entered into a voxel-based lesion-symptom mapping (VLSM; Bates, E., Wilson, S., Saygin, A. P., Dick, F., Sereno, M., Knight, R. T., & Dronkers, N. F. (2003). Voxel-based lesion-symptom mapping. Nature Neuroscience, 6(5), 448–450.) analysis along with the behavioral data. VLSM is a brain–behavior mapping technique that evaluates the relationships between areas of injury and behavioral performance in all patients on a voxel-by-voxel basis, similar to the analysis of functional neuroimaging data. Results indicated that lesions to five left hemisphere brain regions affected performance on the CYCLE-R, including the posterior middle temporal gyrus and underlying white matter, the anterior superior temporal gyrus, the superior temporal sulcus and angular gyrus, mid-frontal cortex in Brodmann's area 46, and Brodmann's area 47 of the inferior frontal gyrus. Lesions to Broca's and Wernicke's areas were not found to significantly alter language comprehension on this particular measure. Further analysis suggested that the middle temporal gyrus may be more important for comprehension at the word level, while the other regions may play a greater role at the level of the sentence. These results are consistent with those seen in recent functional neuroimaging studies and offer complementary data in the effort to understand the brain areas underlying language comprehension.
  • Drude, S., Awete, W., & Aweti, A. (2019). A ortografia da língua Awetí. LIAMES: Línguas Indígenas Americanas, 19: e019014. doi:10.20396/liames.v19i0.8655746.

    Abstract

    This paper describes and motivates the orthography of the Awetí language (Tupí, Upper Xingu/MT), based on an analysis of the phonological and grammatical structure of Awetí. The orthography is the result of a long collaborative effort among the three authors, begun in 1998. It defines not only an alphabet (the representation of the language's vowels and consonants) but also addresses internal variation, resyllabification, lenition, palatalization, and other (morpho-)phonological processes. Both the written representation of the glottal stop and the orthographic consequences of nasal harmony received special attention. Although lexical stress is not orthographically marked in Awetí, the vast majority of affixes and particles are discussed with respect to stress and its interaction with adjacent morphemes, at the same time determining the orthographic words. Finally, an alphabetical order was established in which digraphs are treated as sequences of letters, while the glottal stop ⟨ʼ⟩ is ignored, making Awetí easier to learn. The orthography as described here has been used for about ten years in school literacy instruction in Awetí, with good results. We believe that several of the arguments raised here can be productively transferred to other languages with similar phenomena (the glottal stop as a consonant, nasal harmony, morphophonological assimilation, etc.).
  • Dunn, M., Kruspe, N., & Burenhult, N. (2013). Time and place in the prehistory of the Aslian languages. Human Biology, 85, 383-399.

    Abstract

    The Aslian branch of Austroasiatic is recognised as the oldest recoverable language family in the Malay Peninsula, predating the now dominant Austronesian languages present today. In this paper we address the dynamics of the prehistoric spread of Aslian languages across the peninsula, including the languages spoken by Semang foragers, traditionally associated with the 'Negrito' phenotype. The received view of an early and uniform tripartite break-up of proto-Aslian in the Early Neolithic period, and subsequent differentiation driven by societal modes is challenged. We present a Bayesian phylogeographic analysis of our dataset of vocabulary from 28 Aslian varieties. An explicit geographic model of diffusion is combined with a cognate birth-word death model of lexical evolution to infer the location of the major events of Aslian cladogenesis. The resultant phylogenetic trees are calibrated against dates in the historical and archaeological record to extrapolate a detailed picture of Aslian language history. We conclude that a binary split between Southern Aslian and the rest of Aslian took place in the Early Neolithic (4000 BP). This was followed much later in the Late Neolithic (2000-3000 BP) by a tripartite branching into Central Aslian, Jah Hut and Northern Aslian. Subsequent internal divisions within these sub-clades took place in the Early Metal Phase (post-2000 BP). Significantly, a split in Northern Aslian between Ceq Wong and the languages of the Semang was a late development and is proposed here to coincide with the adoption of Aslian by the Semang foragers. Given the difficulties involved in associating archaeologically recorded activities with linguistic events, as well as the lack of historical sources, our results remain preliminary. However, they provide sufficient evidence to prompt a rethinking of previous models of both clado- and ethno-genesis within the Malay Peninsula.
  • Duprez, J., Stokkermans, M., Drijvers, L., & Cohen, M. X. (2021). Synchronization between keyboard typing and neural oscillations. Journal of Cognitive Neuroscience, 33(5), 887-901. doi:10.1162/jocn_a_01692.

    Abstract

    Rhythmic neural activity synchronizes with certain rhythmic behaviors, such as breathing, sniffing, saccades, and speech. The extent to which neural oscillations synchronize with higher-level and more complex behaviors is largely unknown. Here we investigated electrophysiological synchronization with keyboard typing, an omnipresent behavior that vast numbers of people engage in daily. Keyboard typing is rhythmic, with frequency characteristics roughly the same as neural oscillatory dynamics associated with cognitive control, notably through midfrontal theta (4-7 Hz) oscillations. We tested the hypothesis that synchronization occurs between typing and midfrontal theta, and breaks down when errors are committed. Thirty healthy participants typed words and sentences on a keyboard without visual feedback, while EEG was recorded. Typing rhythmicity was investigated by inter-keystroke interval analyses and by a kernel density estimation method. We used a multivariate spatial filtering technique to investigate frequency-specific synchronization between typing and neuronal oscillations. Our results demonstrate theta rhythmicity in typing (around 6.5 Hz) through the two different behavioral analyses. Synchronization between typing and neuronal oscillations occurred at frequencies ranging from 4 to 15 Hz, but to a larger extent for lower frequencies. However, peak synchronization frequency was idiosyncratic across subjects, and therefore specific neither to theta nor to midfrontal regions, and correlated somewhat with peak typing frequency. Errors and trials associated with stronger cognitive control were not associated with changes in synchronization at any frequency. As a whole, this study shows that brain-behavior synchronization does occur during keyboard typing but is not specific to midfrontal theta.
  • Durrant, S., Jessop, A., Chang, F., Bidgood, A., Peter, M. S., Pine, J. M., & Rowland, C. F. (2021). Does the understanding of complex dynamic events at 10 months predict vocabulary development? Language and Cognition, 13(1), 66-98. doi:10.1017/langcog.2020.26.

    Abstract

    By the end of their first year, infants can interpret many different types of complex dynamic visual events, such as caused-motion, chasing, and goal-directed action. Infants of this age are also in the early stages of vocabulary development, producing their first words at around 12 months. The present work examined whether there are meaningful individual differences in infants’ ability to represent dynamic causal events in visual scenes, and whether these differences influence vocabulary development. As part of the longitudinal Language 0–5 Project, 78 10-month-old infants were tested on their ability to interpret three dynamic motion events, involving (a) caused-motion, (b) chasing behaviour, and (c) goal-directed movement. Planned analyses found that infants showed evidence of understanding the first two event types, but not the third. Looking behaviour in each task was not meaningfully related to vocabulary development, nor were there any correlations between the tasks. The results of additional exploratory analyses and simulations suggested that the infants’ understanding of each event may not be predictive of their vocabulary development, and that looking times in these tasks may not be reliably capturing any meaningful individual differences in their knowledge. This raises questions about how to convert experimental group designs to individual differences measures, and how to interpret infant looking time behaviour.
  • Edlinger, G., Bastiaansen, M. C. M., Brunia, C., Neuper, C., & Pfurtscheller, G. (1999). Cortical oscillatory activity assessed by combined EEG and MEG recordings and high resolution ERD methods. Biomedizinische Technik, 44(2), 131-134.
  • Eekhof, L. S., Kuijpers, M. M., Faber, M., Gao, X., Mak, M., Van den Hoven, E., & Willems, R. M. (2021). Lost in a story, detached from the words. Discourse Processes, 58(7), 595-616. doi:10.1080/0163853X.2020.1857619.

    Abstract

    This article explores the relationship between low- and high-level aspects of reading by studying the interplay between word processing, as measured with eye tracking, and narrative absorption and liking, as measured with questionnaires. Specifically, we focused on how individual differences in sensitivity to lexical word characteristics—measured as the effect of these characteristics on gaze duration—were related to narrative absorption and liking. By reanalyzing a large data set consisting of three previous eye-tracking experiments in which subjects (N = 171) read literary short stories, we replicated the well-established finding that word length, lemma frequency, position in sentence, age of acquisition, and orthographic neighborhood size of words influenced gaze duration. More importantly, we found that individual differences in the degree of sensitivity to three of these word characteristics, i.e., word length, lemma frequency, and age of acquisition, were negatively related to print exposure and to a lesser degree to narrative absorption and liking. Even though the underlying mechanisms of this relationship are still unclear, we believe the current findings underline the need to map out the interplay between, on the one hand, the technical and, on the other hand, the subjective processes of reading by studying reading behavior in more natural settings.

    Additional information

    Analysis scripts and data
  • Eekhof, L. S., Van Krieken, K., Sanders, J., & Willems, R. M. (2021). Reading minds, reading stories: Social-cognitive abilities affect the linguistic processing of narrative viewpoint. Frontiers in Psychology, 12: 698986. doi:10.3389/fpsyg.2021.698986.

    Abstract

    Although various studies have shown that narrative reading draws on social-cognitive abilities, not much is known about the precise aspects of narrative processing that engage these abilities. We hypothesized that the linguistic processing of narrative viewpoint—expressed by elements that provide access to the inner world of characters—might play an important role in engaging social-cognitive abilities. Using eye tracking, we studied the effect of lexical markers of perceptual, cognitive, and emotional viewpoint on eye movements during reading of a 5,000-word narrative. Next, we investigated how this relationship was modulated by individual differences in social-cognitive abilities. Our results show diverging patterns of eye movements for perceptual viewpoint markers on the one hand, and cognitive and emotional viewpoint markers on the other. Whereas the former are processed relatively fast compared to non-viewpoint markers, the latter are processed relatively slow. Moreover, we found that social-cognitive abilities impacted the processing of words in general, and of perceptual and cognitive viewpoint markers in particular, such that both perspective-taking abilities and self-reported perspective-taking traits facilitated the processing of these markers. All in all, our study extends earlier findings that social cognition is of importance for story reading, showing that individual differences in social-cognitive abilities are related to the linguistic processing of narrative viewpoint.

    Additional information

    supplementary material
  • Eibl-Eibesfeldt, I., & Senft, G. (1991). Trobriander (Papua-Neuguinea, Trobriand-Inseln, Kaile'una) Tänze zur Einleitung des Erntefeier-Rituals. Film E 3129. Trobriander (Papua-Neuguinea, Trobriand-Inseln, Kiriwina); Ausschnitte aus einem Erntefesttanz. Film E 3130. Publikationen zu wissenschaftlichen Filmen. Sektion Ethnologie, 17, 1-17.
  • Eicher, J. D., Powers, N. R., Miller, L. L., Akshoomoff, N., Amaral, D. G., Bloss, C. S., Libiger, O., Schork, N. J., Darst, B. F., Casey, B. J., Chang, L., Ernst, T., Frazier, J., Kaufmann, W. E., Keating, B., Kenet, T., Kennedy, D., Mostofsky, S., Murray, S. S., Sowell, E. R., Bartsch, H., Kuperman, J. M., Brown, T. T., Hagler, D. J., Dale, A. M., Jernigan, T. L., St Pourcain, B., Davey Smith, G., Ring, S. M., Gruen, J. R., & Pediatric Imaging, Neurocognition, and Genetics Study (2013). Genome-wide association study of shared components of reading disability and language impairment. Genes, Brain and Behavior, 12(8), 792-801. doi:10.1111/gbb.12085.

    Abstract

    Written and verbal languages are neurobehavioral traits vital to the development of communication skills. Unfortunately, disorders involving these traits, specifically reading disability (RD) and language impairment (LI), are common and prevent affected individuals from developing adequate communication skills, leaving them at risk for adverse academic, socioeconomic and psychiatric outcomes. Both RD and LI are complex traits that frequently co-occur, leading us to hypothesize that these disorders share genetic etiologies. To test this, we performed a genome-wide association study on individuals affected with both RD and LI in the Avon Longitudinal Study of Parents and Children. The strongest associations were seen with markers in ZNF385D (OR = 1.81, P = 5.45 × 10^-7) and COL4A2 (OR = 1.71, P = 7.59 × 10^-7). Markers within NDST4 showed the strongest associations with LI individually (OR = 1.827, P = 1.40 × 10^-7). We replicated association of ZNF385D using receptive vocabulary measures in the Pediatric Imaging Neurocognitive Genetics study (P = 0.00245). We then used diffusion tensor imaging fiber tract volume data on 16 fiber tracts to examine the implications of replicated markers. ZNF385D was a predictor of overall fiber tract volumes in both hemispheres, as well as global brain volume. Here, we present evidence for ZNF385D as a candidate gene for RD and LI. The implication of transcription factor ZNF385D in RD and LI underscores the importance of transcriptional regulation in the development of higher order neurocognitive traits. Further study is necessary to discern target genes of ZNF385D and how it functions within neural development of fluent language.
  • Eising, E., Carrion Castillo, A., Vino, A., Strand, E. A., Jakielski, K. J., Scerri, T. S., Hildebrand, M. S., Webster, R., Ma, A., Mazoyer, B., Francks, C., Bahlo, M., Scheffer, I. E., Morgan, A. T., Shriberg, L. D., & Fisher, S. E. (2019). A set of regulatory genes co-expressed in embryonic human brain is implicated in disrupted speech development. Molecular Psychiatry, 24, 1065-1078. doi:10.1038/s41380-018-0020-x.

    Abstract

    Genetic investigations of people with impaired development of spoken language provide windows into key aspects of human biology. Over 15 years after FOXP2 was identified, most speech and language impairments remain unexplained at the molecular level. We sequenced whole genomes of nineteen unrelated individuals diagnosed with childhood apraxia of speech, a rare disorder enriched for causative mutations of large effect. Where DNA was available from unaffected parents, we discovered de novo mutations, implicating genes, including CHD3, SETD1A and WDR5. In other probands, we identified novel loss-of-function variants affecting KAT6A, SETBP1, ZFHX4, TNRC6B and MKL2, regulatory genes with links to neurodevelopment. Several of the new candidates interact with each other or with known speech-related genes. Moreover, they show significant clustering within a single co-expression module of genes highly expressed during early human brain development. This study highlights gene regulatory pathways in the developing brain that may contribute to acquisition of proficient speech.

    Additional information

    Eising_etal_2018sup.pdf
  • Eising, E., Datson, N. A., van den Maagdenberg, A. M., & Ferrari, M. D. (2013). Epigenetic mechanisms in migraine: A promising avenue? BMC Medicine, 11(1): 26. doi:10.1186/1741-7015-11-26.

    Abstract

    Migraine is a common disabling brain disorder typically characterized by attacks of severe headache and associated with autonomic and neurological symptoms. Its etiology is far from resolved. This review will focus on evidence that epigenetic mechanisms play an important role in disease etiology. Epigenetics comprises both DNA methylation and post-translational modifications of the tails of histone proteins, affecting chromatin structure and gene expression. Besides playing a role in establishing cellular and developmental stage-specific regulation of gene expression, epigenetic processes are also important for programming lasting cellular responses to environmental signals. Epigenetic mechanisms may explain how non-genetic endogenous and exogenous factors, such as female sex hormones, stress hormones, and inflammation, modulate attack frequency. Developing drugs that specifically target epigenetic mechanisms may open up exciting new avenues for the prophylactic treatment of migraine.
  • Eising, E., De Vries, B., Ferrari, M. D., Terwindt, G. M., & Van Den Maagdenberg, A. M. J. M. (2013). Pearls and pitfalls in genetic studies of migraine. Cephalalgia, 33(8), 614-625. doi:10.1177/0333102413484988.

    Abstract

    Purpose of review: Migraine is a prevalent neurovascular brain disorder with a strong genetic component, and different methodological approaches have been implemented to identify the genes involved. This review focuses on pearls and pitfalls of these approaches and genetic findings in migraine. Summary: Common forms of migraine (i.e. migraine with and without aura) are thought to have a polygenic make-up, whereas rare familial hemiplegic migraine (FHM) presents with a monogenic pattern of inheritance. Until a few years ago only studies in FHM yielded causal genes, which were identified by a classical linkage analysis approach. Functional analyses of FHM gene mutations in cellular and transgenic animal models suggest abnormal glutamatergic neurotransmission as a possible key disease mechanism. Recently, a number of genes were discovered for the common forms of migraine using a genome-wide association (GWA) approach, which sheds first light on the pathophysiological mechanisms involved. Conclusions: Novel technological strategies such as next-generation sequencing, which can be implemented in future genetic migraine research, may aid the identification of novel FHM genes and promote the search for the missing heritability of common migraine.
  • Eisner, F., Melinger, A., & Weber, A. (2013). Constraints on the transfer of perceptual learning in accented speech. Frontiers in Psychology, 4: 148. doi:10.3389/fpsyg.2013.00148.

    Abstract

    The perception of speech sounds can be re-tuned rapidly through a mechanism of lexically-driven learning (Norris et al., 2003, Cognitive Psychology, 47). Here we investigated this type of learning for English voiced stop consonants, which are commonly de-voiced in word-final position by Dutch learners of English. Specifically, this study asked under which conditions the change in pre-lexical representation encodes phonological information about the position of the critical sound within a word. After exposure to a Dutch learner’s productions of de-voiced stops in word-final position (but not in any other position), British English listeners showed evidence of perceptual learning in a subsequent cross-modal priming task, where auditory primes with voiceless final stops (e.g., ‘seat’) facilitated recognition of visual targets with voiced final stops (e.g., SEED). This learning generalized to test pairs where the critical contrast was in word-initial position: auditory primes such as ‘town’ facilitated recognition of visual targets like DOWN (Experiment 1). Control listeners, who had not heard any stops by the speaker during exposure, showed no learning effects. The generalization to word-initial position did not occur when participants had also heard correctly voiced, word-initial stops during exposure (Experiment 2), or when the speaker was a native British English speaker who mimicked the word-final devoicing (Experiment 3). These results suggest that word position can be encoded in the pre-lexical adjustment to the accented phoneme contrast. Lexically-guided feedback, distributional properties of the input, and long-term representations of accents all appear to modulate the pre-lexical re-tuning of phoneme categories.
  • Enfield, N. J. (2004). On linear segmentation and combinatorics in co-speech gesture: A symmetry-dominance construction in Lao fish trap descriptions. Semiotica, 149(1/4), 57-123. doi:10.1515/semi.2004.038.
  • Enfield, N. J. (2004). Nominal classification in Lao: A sketch. Sprachtypologie und Universalienforschung, 57(2/3), 117-143.
  • Enfield, N. J. (2013). Language, culture, and mind: Trends and standards in the latest pendulum swing. Journal of the Royal Anthropological Institute, 19, 155-169. doi:10.1111/1467-9655.12008.

    Abstract

    The study of language in relation to anthropological questions has deep and varied roots, from Humboldt and Boas, Malinowski and Vygotsky, Sapir and Whorf, Wittgenstein and Austin, through to the linguistic anthropologists of now. A recent book by the linguist Daniel Everett, Language: The Cultural Tool (2012), aims to bring some of the issues to a popular audience, with a focus on the idea that language is a tool for social action. I argue in this essay that the book does not represent the state of the art in this field, falling short on three central desiderata of a good account for the social functions of language and its relation to culture. I frame these desiderata in terms of three questions, here termed the cognition question, the causality question, and the culture question. I look at the relevance of this work for socio-cultural anthropology, in the context of a major interdisciplinary pendulum swing that is incipient in the study of language today, a swing away from formalist, innatist perspectives, and towards functionalist, empiricist perspectives. The role of human diversity and culture is foregrounded in all of this work. To that extent, Everett’s book is representative, but the quality of his argument is neither strong in itself nor representative of a movement that ought to be of special interest to socio-cultural anthropologists.
  • Enfield, N. J. (1999). On the indispensability of semantics: Defining the ‘vacuous’. Rask: internationalt tidsskrift for sprog og kommunikation, 9/10, 285-304.
  • Enfield, N. J. (2013). Rejoinder to Daniel Everett [Comment]. Journal of the Royal Anthropological Institute, 19(3), 649. doi:10.1111/1467-9655.12056.
  • Enfield, N. J., Stivers, T., Brown, P., Englert, C., Harjunpää, K., Hayashi, M., Heinemann, T., Hoymann, G., Keisanen, T., Rauniomaa, M., Raymond, C. W., Rossano, F., Yoon, K.-E., Zwitserlood, I., & Levinson, S. C. (2019). Polar answers. Journal of Linguistics, 55(2), 277-304. doi:10.1017/S0022226718000336.

    Abstract

    How do people answer polar questions? In this fourteen-language study of answers to questions in conversation, we compare the two main strategies; first, interjection-type answers such as uh-huh (or equivalents yes, mm, head nods, etc.), and second, repetition-type answers that repeat some or all of the question. We find that all languages offer both options, but that there is a strong asymmetry in their frequency of use, with a global preference for interjection-type answers. We propose that this preference is motivated by the fact that the two options are not equivalent in meaning. We argue that interjection-type answers are intrinsically suited to be the pragmatically unmarked, and thus more frequent, strategy for confirming polar questions, regardless of the language spoken. Our analysis is based on the semantic-pragmatic profile of the interjection-type and repetition-type answer strategies, in the context of certain asymmetries inherent to the dialogic speech act structure of question–answer sequences, including sequential agency and thematic agency. This allows us to see possible explanations for the outlier distributions found in ǂĀkhoe Haiǁom and Tzeltal.
  • Enfield, N. J. (2013). The virtual you and the real you [Book review]. The Times Literary Supplement, April 12, 2013(5741), 31-32.

    Abstract

    Review of the books "Virtually you. The dangerous powers of the e-personality", by Elias Aboujaoude; "The big disconnect. The story of technology and loneliness", by Giles Slade; and "Net smart. How to thrive online", by Howard Rheingold.
  • Erb, J., Henry, M. J., Eisner, F., & Obleser, J. (2013). The brain dynamics of rapid perceptual adaptation to adverse listening conditions. The Journal of Neuroscience, 33, 10688-10697. doi:10.1523/JNEUROSCI.4596-12.2013.

    Abstract

    Listeners show a remarkable ability to quickly adjust to degraded speech input. Here, we aimed to identify the neural mechanisms of such short-term perceptual adaptation. In a sparse-sampling, cardiac-gated functional magnetic resonance imaging (fMRI) acquisition, human listeners heard and repeated back 4-band-vocoded sentences (in which the temporal envelope of the acoustic signal is preserved, while spectral information is highly degraded). Clear-speech trials were included as baseline. An additional fMRI experiment on amplitude modulation rate discrimination quantified the convergence of neural mechanisms that subserve coping with challenging listening conditions for speech and non-speech. First, the degraded speech task revealed an “executive” network (comprising the anterior insula and anterior cingulate cortex), parts of which were also activated in the non-speech discrimination task. Second, trial-by-trial fluctuations in successful comprehension of degraded speech drove hemodynamic signal change in classic “language” areas (bilateral temporal cortices). Third, as listeners perceptually adapted to degraded speech, downregulation in a cortico-striato-thalamo-cortical circuit was observable. The present data highlight differential upregulation and downregulation in auditory–language and executive networks, respectively, with important subcortical contributions when successfully adapting to a challenging listening situation.
  • Ernestus, M., & Mak, W. M. (2004). Distinctive phonological features differ in relevance for both spoken and written word recognition. Brain and Language, 90(1-3), 378-392. doi:10.1016/S0093-934X(03)00449-8.

    Abstract

    This paper discusses four experiments on Dutch which show that distinctive phonological features differ in their relevance for word recognition. The relevance of a feature for word recognition depends on its phonological stability, that is, the extent to which that feature is generally realized in accordance with its lexical specification in the relevant word position. If one feature value is uninformative, all values of that feature are less relevant for word recognition, with the least informative feature being the least relevant. Features differ in their relevance both in spoken and written word recognition, though the differences are more pronounced in auditory lexical decision than in self-paced reading.
  • Ernestus, M., & Baayen, R. H. (2004). Analogical effects in regular past tense production in Dutch. Linguistics, 42(5), 873-903. doi:10.1515/ling.2004.031.

    Abstract

    This study addresses the question to what extent the production of regular past tense forms in Dutch is affected by analogical processes. We report an experiment in which native speakers of Dutch listened to existing regular verbs over headphones, and had to indicate which of the past tense allomorphs, te or de, was appropriate for these verbs. According to generative analyses, the choice between the two suffixes is completely regular and governed by the underlying [voice]-specification of the stem-final segment. In this approach, no analogical effects are expected. In connectionist and analogical approaches, by contrast, the phonological similarity structure in the lexicon is expected to affect lexical processing. Our experimental results support the latter approach: all participants created more nonstandard past tense forms, produced more inconsistency errors, and responded more slowly for verbs with stronger analogical support for the nonstandard form.
  • Ernestus, M., & Baayen, R. H. (2004). Kuchde, tobte, en turfte: Lekkage in 't kofschip. Onze Taal, 73(12), 360-361.
  • Escudero, P., Broersma, M., & Simon, E. (2013). Learning words in a third language: Effects of vowel inventory and language proficiency. Language and Cognitive Processes, 28, 746-761. doi:10.1080/01690965.2012.662279.

    Abstract

    This study examines the effect of L2 and L3 proficiency on L3 word learning. Native speakers of Spanish with different proficiencies in L2 English and L3 Dutch and a control group of Dutch native speakers participated in a Dutch word learning task involving minimal and non-minimal word pairs. The minimal word pairs were divided into ‘minimal-easy’ and ‘minimal-difficult’ pairs on the basis of whether or not they are known to pose perceptual problems for L1 Spanish learners. Spanish speakers’ proficiency in Dutch and English was independently established by their scores on general language comprehension tests. All participants were trained and subsequently tested on the mapping between pseudo-words and non-objects. The results revealed that, first, both native and non-native speakers produced more errors and longer reaction times for minimal than for non-minimal word pairs, and secondly, Spanish learners had more errors and longer reaction times for minimal-difficult than for minimal-easy pairs. The latter finding suggests that there is a strong continuity between sound perception and L3 word recognition. With respect to proficiency, only the learner’s proficiency in their L2, namely English, predicted their accuracy on L3 minimal pairs. This shows that learning an L2 with a larger vowel inventory than the L1 is also beneficial for word learning in an L3 with a similarly large vowel inventory.
  • Evans, N., Levinson, S. C., & Sterelny, K. (2021). Kinship revisited. Biological Theory, 16, 123-126. doi:10.1007/s13752-021-00384-9.
  • Evans, N., Levinson, S. C., & Sterelny, K. (Eds.). (2021). Thematic issue on evolution of kinship systems [Special Issue]. Biological Theory, 16.
  • Evans, D. M., Zhu, G., Dy, V., Heath, A. C., Madden, P. A. F., Kemp, J. P., McMahon, G., St Pourcain, B., Timpson, N. J., Golding, J., Lawlor, D. A., Steer, C., Montgomery, G. W., Martin, N. G., Smith, G. D., & Whitfield, J. B. (2013). Genome-wide association study identifies loci affecting blood copper, selenium and zinc. Human Molecular Genetics, 22(19), 3998-4006. doi:10.1093/hmg/ddt239.

    Abstract

    Genetic variation affecting absorption, distribution or excretion of essential trace elements may lead to health effects related to sub-clinical deficiency. We have tested for allelic effects of single-nucleotide polymorphisms (SNPs) on blood copper, selenium and zinc in a genome-wide association study using two adult cohorts from Australia and the UK. Participants were recruited in Australia from twins and their families and in the UK from pregnant women. We measured erythrocyte Cu, Se and Zn (Australian samples) or whole blood Se (UK samples) using inductively coupled plasma mass spectrometry. Genotyping was performed with Illumina chips and > 2.5 million SNPs were imputed from HapMap data. Genome-wide significant associations were found for each element. For Cu, there were two loci on chromosome 1 (most significant SNPs rs1175550, P = 5.03 × 10^-10, and rs2769264, P = 2.63 × 10^-20); for Se, a locus on chromosome 5 was significant in both cohorts (combined P = 9.40 × 10^-28 at rs921943); and for Zn three loci on chromosomes 8, 15 and X showed significant results (rs1532423, P = 6.40 × 10^-12; rs2120019, P = 1.55 × 10^-18; and rs4826508, P = 1.40 × 10^-12, respectively). The Se locus covers three genes involved in metabolism of sulphur-containing amino acids and potentially of the analogous Se compounds; the chromosome 8 locus for Zn contains multiple genes for the Zn-containing enzyme carbonic anhydrase. Where potentially relevant genes were identified, they relate to metabolism of the element (Se) or to the presence at high concentration of a metal-containing protein (Cu).
  • Evans, D. M., Brion, M. J. A., Paternoster, L., Kemp, J. P., McMahon, G., Munafò, M., Whitfield, J. B., Medland, S. E., Montgomery, G. W., Timpson, N. J., St Pourcain, B., Lawlor, D. A., Martin, N. G., Dehghan, A., Hirschhorn, J., Davey Smith, G., The GIANT consortium, The CRP consortium, & The TAG Consortium (2013). Mining the human phenome using allelic scores that index biological intermediates. PLoS Genetics, 9(10): e1003919. doi:10.1371/journal.pgen.1003919.

    Abstract

    Author Summary: The standard approach in genome-wide association studies is to analyse the relationship between genetic variants and disease one marker at a time. Significant associations between markers and disease are then used as evidence to implicate biological intermediates and pathways likely to be involved in disease aetiology. However, single genetic variants typically only explain small amounts of disease risk. Our idea is to construct allelic scores that explain greater proportions of the variance in biological intermediates than single markers, and then use these scores to data mine genome-wide association studies. We show how allelic scores derived from known variants as well as allelic scores derived from hundreds of thousands of genetic markers across the genome explain significant portions of the variance in body mass index, levels of C-reactive protein, and LDL cholesterol, and many of these scores show expected correlations with disease. Power calculations confirm the feasibility of scaling our strategy to the analysis of tens of thousands of molecular phenotypes in large genome-wide meta-analyses. Our method represents a simple way in which tens of thousands of molecular phenotypes could be screened for potential causal relationships with disease.
  • Eviatar, Z., & Huettig, F. (Eds.). (2021). Literacy and writing systems [Special Issue]. Journal of Cultural Cognitive Science.
  • Eviatar, Z., & Huettig, F. (2021). The literate mind. Journal of Cultural Cognitive Science, 5, 81-84. doi:10.1007/s41809-021-00086-5.
  • Fatemifar, G., Hoggart, C. J., Paternoster, L., Kemp, J. P., Prokopenko, I., Horikoshi, M., Wright, V. J., Tobias, J. H., Richmond, S., Zhurov, A. I., Toma, A. M., Pouta, A., Taanila, A., Sipila, K., Lähdesmäki, R., Pillas, D., Geller, F., Feenstra, B., Melbye, M., Nohr, E. A., Ring, S. M., St Pourcain, B., Timpson, N. J., Davey Smith, G., Jarvelin, M.-R., & Evans, D. M. (2013). Genome-wide association study of primary tooth eruption identifies pleiotropic loci associated with height and craniofacial distances. Human Molecular Genetics, 22(18), 3807-3817. doi:10.1093/hmg/ddt231.

    Abstract

    Twin and family studies indicate that the timing of primary tooth eruption is highly heritable, with estimates typically exceeding 80%. To identify variants involved in primary tooth eruption, we performed a population-based genome-wide association study of 'age at first tooth' and 'number of teeth' using 5998 and 6609 individuals, respectively, from the Avon Longitudinal Study of Parents and Children (ALSPAC) and 5403 individuals from the 1966 Northern Finland Birth Cohort (NFBC1966). We tested 2 446 724 SNPs imputed in both studies. Analyses were controlled for the effect of gestational age, sex and age of measurement. Results from the two studies were combined using fixed effects inverse variance meta-analysis. We identified a total of 15 independent loci, with 10 loci reaching genome-wide significance (P < 5 × 10^-8) for 'age at first tooth' and 11 loci for 'number of teeth'. Together, these associations explain 6.06% of the variation in 'age of first tooth' and 4.76% of the variation in 'number of teeth'. The identified loci included eight previously unidentified loci, some containing genes known to play a role in tooth and other developmental pathways, including an SNP in the protein-coding region of BMP4 (rs17563, P = 9.080 × 10^-17). Three of these loci, containing the genes HMGA2, AJUBA and ADK, also showed evidence of association with craniofacial distances, particularly those indexing facial width. Our results suggest that the genome-wide association approach is a powerful strategy for detecting variants involved in tooth eruption, and potentially craniofacial growth and more generally organ development.
  • Favier, S., & Huettig, F. (2021). Are there core and peripheral syntactic structures? Experimental evidence from Dutch native speakers with varying literacy levels. Lingua, 251: 102991. doi:10.1016/j.lingua.2020.102991.

    Abstract

    Some theorists posit the existence of a ‘core’ grammar that virtually all native speakers acquire, and a ‘peripheral’ grammar that many do not. We investigated the viability of such a categorical distinction in the Dutch language. We first consulted linguists’ intuitions as to the ‘core’ or ‘peripheral’ status of a wide range of grammatical structures. We then tested a selection of core- and peripheral-rated structures on naïve participants with varying levels of literacy experience, using grammaticality judgment as a proxy for receptive knowledge. Overall, participants demonstrated better knowledge of ‘core’ structures than ‘peripheral’ structures, but the considerable variability within these categories was strongly suggestive of a continuum rather than a categorical distinction between them. We also hypothesised that individual differences in the knowledge of core and peripheral structures would reflect participants’ literacy experience. This was supported only by a small trend in our data. The results fit best with the notion that more frequent syntactic structures are mastered by more people than infrequent ones and challenge the received sense of a categorical core-periphery distinction.
  • Favier, S., Meyer, A. S., & Huettig, F. (2021). Literacy can enhance syntactic prediction in spoken language processing. Journal of Experimental Psychology: General, 150(10), 2167-2174. doi:10.1037/xge0001042.

    Abstract

    Language comprehenders can use syntactic cues to generate predictions online about upcoming language. Previous research with reading-impaired adults and healthy, low-proficiency adult and child learners suggests that reading skills are related to prediction in spoken language comprehension. Here we investigated whether differences in literacy are also related to predictive spoken language processing in non-reading-impaired proficient adult readers with varying levels of literacy experience. Using the visual world paradigm enabled us to measure prediction based on syntactic cues in the spoken sentence, prior to the (predicted) target word. Literacy experience was found to be the strongest predictor of target anticipation, independent of general cognitive abilities. These findings suggest that a) experience with written language can enhance syntactic prediction of spoken language in normal adult language users, and b) processing skills can be transferred to related tasks (from reading to listening) if the domains involve similar processes (e.g., predictive dependencies) and representations (e.g., syntactic).

    Additional information

    Online supplementary material
  • Favier, S., & Huettig, F. (2021). Long-term written language experience affects grammaticality judgments and usage but not priming of spoken sentences. Quarterly Journal of Experimental Psychology, 74(8), 1378-1395. doi:10.1177/17470218211005228.

    Abstract

    ‘Book language’ offers a richer linguistic experience than typical conversational speech in terms of its syntactic properties. Here, we investigated the role of long-term syntactic experience on syntactic knowledge and processing. In a pre-registered study with 161 adult native Dutch speakers with varying levels of literacy, we assessed the contribution of individual differences in written language experience to offline and online syntactic processes. Offline syntactic knowledge was assessed as accuracy in an auditory grammaticality judgment task in which we tested violations of four Dutch grammatical norms. Online syntactic processing was indexed by syntactic priming of the Dutch dative alternation, using a comprehension-to-production priming paradigm with auditory presentation. Controlling for the contribution of non-verbal IQ, verbal working memory, and processing speed, we observed a robust effect of literacy experience on the detection of grammatical norm violations in spoken sentences, suggesting that exposure to the syntactic complexity and diversity of written language has specific benefits for general (modality-independent) syntactic knowledge. We replicated previous results by finding robust comprehension-to-production structural priming, both with and without lexical overlap between prime and target. Although literacy experience affected the usage of syntactic alternates in our large sample, it did not modulate their priming. We conclude that amount of experience with written language increases explicit awareness of grammatical norm violations and changes the usage of (PO vs. DO) dative spoken sentences but has no detectable effect on their implicit syntactic priming in proficient language users. These findings constrain theories about the effect of long-term experience on syntactic processing.
  • Favier, S., Wright, A., Meyer, A. S., & Huettig, F. (2019). Proficiency modulates between- but not within-language structural priming. Journal of Cultural Cognitive Science, 3(suppl. 1), 105-124. doi:10.1007/s41809-019-00029-1.

    Abstract

    The oldest of the Celtic language family, Irish differs considerably from English, notably with respect to word order and case marking. In spite of differences in surface constituent structure, less restricted accounts of bilingual shared syntax predict that processing datives and passives in Irish should prime the production of their English equivalents. Furthermore, this cross-linguistic influence should be sensitive to L2 proficiency, if shared structural representations are assumed to develop over time. In Experiment 1, we investigated cross-linguistic structural priming from Irish to English in 47 bilingual adolescents who are educated through Irish. Testing took place in a classroom setting, using written primes and written sentence generation. We found that priming for prepositional-object (PO) datives was predicted by self-rated Irish (L2) proficiency, in line with previous studies. In Experiment 2, we presented translations of the materials to an English-educated control group (n=54). We found a within-language priming effect for PO datives, which was not modulated by English (L1) proficiency. Our findings are compatible with current theories of bilingual language processing and L2 syntactic acquisition.
  • Felker, E. R., Broersma, M., & Ernestus, M. (2021). The role of corrective feedback and lexical guidance in perceptual learning of a novel L2 accent in dialogue. Applied Psycholinguistics, 42, 1029-1055. doi:10.1017/S0142716421000205.

    Abstract

    Perceptual learning of novel accents is a critical skill for second-language speech perception, but little is known about the mechanisms that facilitate perceptual learning in communicative contexts. To study perceptual learning in an interactive dialogue setting while maintaining experimental control of the phonetic input, we employed an innovative experimental method incorporating prerecorded speech into a naturalistic conversation. Using both computer-based and face-to-face dialogue settings, we investigated the effect of two types of learning mechanisms in interaction: explicit corrective feedback and implicit lexical guidance. Dutch participants played an information-gap game featuring minimal pairs with an accented English speaker whose /ε/ pronunciations were shifted to /ɪ/. Evidence for the vowel shift came either from corrective feedback about participants’ perceptual mistakes or from onscreen lexical information that constrained their interpretation of the interlocutor’s words. Corrective feedback explicitly contrasting the minimal pairs was more effective than generic feedback. Additionally, both receiving lexical guidance and exhibiting more uptake for the vowel shift improved listeners’ subsequent online processing of accented words. Comparable learning effects were found in both the computer-based and face-to-face interactions, showing that our results can be generalized to a more naturalistic learning context than traditional computer-based perception training programs.
  • Felker, E. R., Klockmann, H. E., & De Jong, N. H. (2019). How conceptualizing influences fluency in first and second language speech production. Applied Psycholinguistics, 40(1), 111-136. doi:10.1017/S0142716418000474.

    Abstract

    When speaking in any language, speakers must conceptualize what they want to say before they can formulate and articulate their message. We present two experiments employing a novel experimental paradigm in which the formulating and articulating stages of speech production were kept identical across conditions of differing conceptualizing difficulty. We tracked the effect of difficulty in conceptualizing during the generation of speech (Experiment 1) and during the abandonment and regeneration of speech (Experiment 2) on speaking fluency by Dutch native speakers in their first (L1) and second (L2) language (English). The results showed that abandoning and especially regenerating a speech plan taxes the speaker, leading to disfluencies. For most fluency measures, the increases in disfluency were similar across L1 and L2. However, a significant interaction revealed that abandoning and regenerating a speech plan increases the time needed to solve conceptual difficulties while speaking in the L2 to a greater degree than in the L1. This finding supports theories in which cognitive resources for conceptualizing are shared with those used for later stages of speech planning. Furthermore, a practical implication for language assessment is that increasing the conceptual difficulty of speaking tasks should be considered with caution.
  • Fernandes, T., Arunkumar, M., & Huettig, F. (2021). The role of the written script in shaping mirror-image discrimination: Evidence from illiterate, Tamil literate, and Tamil-Latin-alphabet bi-literate adults. Cognition, 206: 104493. doi:10.1016/j.cognition.2020.104493.

    Abstract

    Learning a script with mirrored graphs (e.g., d ≠ b) requires overcoming the evolutionary-old perceptual tendency to process mirror images as equivalent. Thus, breaking mirror invariance offers an important tool for understanding cultural re-shaping of evolutionarily ancient cognitive mechanisms. Here we investigated the role of script (i.e., presence vs. absence of mirrored graphs: Latin alphabet vs. Tamil) by revisiting mirror-image processing by illiterate, Tamil monoliterate, and Tamil-Latin-alphabet bi-literate adults. Participants performed two same-different tasks (one orientation-based, another shape-based) on Latin-alphabet letters. Tamil monoliterate were significantly better than illiterate and showed good explicit mirror-image discrimination. However, only bi-literate adults fully broke mirror invariance: slower shape-based judgments for mirrored than identical pairs and reduced disadvantage in orientation-based over shape-based judgments of mirrored pairs. These findings suggest learning a script with mirrored graphs is the strongest force for breaking mirror invariance.

    Additional information

    supplementary material
  • Ferrari, A., & Noppeney, U. (2021). Attention controls multisensory perception via two distinct mechanisms at different levels of the cortical hierarchy. PLoS Biology, 19(11): e3001465. doi:10.1371/journal.pbio.3001465.

    Abstract

    To form a percept of the multisensory world, the brain needs to integrate signals from common sources weighted by their reliabilities and segregate those from independent sources. Previously, we have shown that anterior parietal cortices combine sensory signals into representations that take into account the signals’ causal structure (i.e., common versus independent sources) and their sensory reliabilities as predicted by Bayesian causal inference. The current study asks to what extent and how attentional mechanisms can actively control how sensory signals are combined for perceptual inference. In a pre- and postcueing paradigm, we presented observers with audiovisual signals at variable spatial disparities. Observers were precued to attend to auditory or visual modalities prior to stimulus presentation and postcued to report their perceived auditory or visual location. Combining psychophysics, functional magnetic resonance imaging (fMRI), and Bayesian modelling, we demonstrate that the brain moulds multisensory inference via two distinct mechanisms. Prestimulus attention to vision enhances the reliability and influence of visual inputs on spatial representations in visual and posterior parietal cortices. Poststimulus report determines how parietal cortices flexibly combine sensory estimates into spatial representations consistent with Bayesian causal inference. Our results show that distinct neural mechanisms control how signals are combined for perceptual inference at different levels of the cortical hierarchy.

    Additional information

    supporting information
  • Fields, E. C., Weber, K., Stillerman, B., Delaney-Busch, N., & Kuperberg, G. (2019). Functional MRI reveals evidence of a self-positivity bias in the medial prefrontal cortex during the comprehension of social vignettes. Social Cognitive and Affective Neuroscience, 14(6), 613-621. doi:10.1093/scan/nsz035.

    Abstract

    A large literature in social neuroscience has associated the medial prefrontal cortex (mPFC) with the processing of self-related information. However, only recently have social neuroscience studies begun to consider the large behavioral literature showing a strong self-positivity bias, and these studies have mostly focused on its correlates during self-related judgments and decision making. We carried out a functional MRI (fMRI) study to ask whether the mPFC would show effects of the self-positivity bias in a paradigm that probed participants’ self-concept without any requirement of explicit self-judgment. We presented social vignettes that were either self-relevant or non-self-relevant with a neutral, positive, or negative outcome described in the second sentence. In previous work using event-related potentials, this paradigm has shown evidence of a self-positivity bias that influences early stages of semantically processing incoming stimuli. In the present fMRI study, we found evidence for this bias within the mPFC: an interaction between self-relevance and valence, with only positive scenarios showing a self vs other effect within the mPFC. We suggest that the mPFC may play a role in maintaining a positively-biased self-concept and discuss the implications of these findings for the social neuroscience of the self and the role of the mPFC.

    Additional information

    Supplementary data
  • Filippi, P. (2013). Connessioni regolate: la chiave ontologica alle specie-specificità? Epekeina, 2(1), 203-223. doi:10.7408/epkn.epkn.v2i1.41.

    Abstract

    This article focuses on “perceptual syntax”, the faculty to process patterns in sensory stimuli. Specifically, this study addresses the ability to perceptually connect elements that are: (1) of the same sensory modality; (2) spatially and temporally non-adjacent; or (3) within multiple sensorial domains. The underlying hypothesis is that in each animal species, this core cognitive faculty enables the perception of the environment-world (Umwelt) and consequently the possibility to survive within it. Importantly, it is suggested that in doing so, perceptual syntax determines (and guides) each species’ ontological access to the world. In support of this hypothesis, research on perceptual syntax in nonverbal individuals (preverbal infants and nonhuman animals) and humans is reviewed. This comparative approach results in theoretical remarks on human cognition and ontology, pointing to the conclusion that the ability to map cross-modal connections through verbal language is what makes humans’ form of life species-typical.
  • Filippi, P. (2013). Specifically Human: Going Beyond Perceptual Syntax. Biosemiotics, 7(1), 111-123. doi:10.1007/s12304-013-9187-3.

    Abstract

    The aim of this paper is to help refine the definition of humans as “linguistic animals” in light of a comparative approach on nonhuman animals’ cognitive systems. As Uexküll & Kriszat (1934/1992) have theorized, the epistemic access to each species-specific environment (Umwelt) is driven by different biocognitive processes. Within this conceptual framework, I identify the salient cognitive process that distinguishes each species’ typical perception of the world as the faculty of language meant in the following operational definition: the ability to connect different elements according to structural rules. In order to draw some conclusions about humans’ specific faculty of language, I review different empirical studies on nonhuman animals’ ability to recognize formal patterns of tokens. I suggest that what differentiates human language from other animals’ cognitive systems is the ability to categorize the units of a pattern, going beyond its perceptual aspects. In fact, humans are the only species known to be able to combine semantic units within a network of combinatorial logical relationships (Deacon 1997) that can be linked to the state of affairs in the external world (Wittgenstein 1922). I assume that this ability is the core cognitive process underlying a) the capacity to speak (or to reason) in verbal propositions and b) the general human faculty of language expressed, for instance, in the ability to draw visual conceptual maps or to compute mathematical expressions. In light of these considerations, I conclude by providing some research questions that could lead to a more detailed comparative exploration of the faculty of language.
  • Fink, B., Bläsing, B., Ravignani, A., & Shackelford, T. K. (2021). Evolution and functions of human dance. Evolution and Human Behavior, 42(4), 351-360. doi:10.1016/j.evolhumbehav.2021.01.003.

    Abstract

    Dance is ubiquitous among humans and has received attention from several disciplines. Ethnographic documentation suggests that dance has a signaling function in social interaction. It can influence mate preferences and facilitate social bonds. Research has provided insights into the proximate mechanisms of dance, individually or when dancing with partners or in groups. Here, we review dance research from an evolutionary perspective. We propose that human dance evolved from ordinary (non-communicative) movements to communicate socially relevant information accurately. The need for accurate social signaling may have accompanied increases in group size and population density. Because of its complexity in production and display, dance may have evolved as a vehicle for expressing social and cultural information. Mating-related qualities and motives may have been the predominant information derived from individual dance movements, whereas group dance offers the opportunity for the exchange of socially relevant content, for coordinating actions among group members, for signaling coalitional strength, and for stabilizing group structures. We conclude that, despite the cultural diversity in dance movements and contexts, the primary communicative functions of dance may be the same across societies.
  • Fisher, N., Hadley, L., Corps, R. E., & Pickering, M. (2021). The effects of dual-task interference in predicting turn-ends in speech and music. Brain Research, 1768: 147571. doi:10.1016/j.brainres.2021.147571.

    Abstract

    Determining when a partner’s spoken or musical turn will end requires well-honed predictive abilities. Evidence suggests that our motor systems are activated during perception of both speech and music, and it has been argued that motor simulation is used to predict turn-ends across domains. Here we used a dual-task interference paradigm to investigate whether motor simulation of our partner’s action underlies our ability to make accurate turn-end predictions in speech and in music. Furthermore, we explored how specific this simulation is to the action being predicted. We conducted two experiments, one investigating speech turn-ends, and one investigating music turn-ends. In each, 34 proficient pianists predicted turn-endings while (1) passively listening, (2) producing an effector-specific motor activity (mouth/hand movement), or (3) producing a task- and effector-specific motor activity (mouthing words/fingering a piano melody). In the speech experiment, any movement during speech perception disrupted predictions of spoken turn-ends, whether the movement was task-specific or not. In the music experiment, only task-specific movement (i.e., fingering a piano melody) disrupted predictions of musical turn-ends. These findings support the use of motor simulation to make turn-end predictions in both speech and music but suggest that the specificity of this simulation may differ between domains.
  • Fisher, S. E., & Tilot, A. K. (2019). Bridging senses: Novel insights from synaesthesia. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 374: 20190022. doi:10.1098/rstb.2019.0022.
  • Fisher, S. E., & Tilot, A. K. (Eds.). (2019). Bridging senses: Novel insights from synaesthesia [Special Issue]. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 374.
  • Fisher, S. E., Stein, J. F., & Monaco, A. P. (1999). A genome-wide search strategy for identifying quantitative trait loci involved in reading and spelling disability (developmental dyslexia). European Child & Adolescent Psychiatry, 8(suppl. 3), S47-S51. doi:10.1007/PL00010694.

    Abstract

    Family and twin studies of developmental dyslexia have consistently shown that there is a significant heritable component for this disorder. However, any genetic basis for the trait is likely to be complex, involving reduced penetrance, phenocopy, heterogeneity and oligogenic inheritance. This complexity results in reduced power for traditional parametric linkage analysis, where specification of the correct genetic model is important. One strategy is to focus on large multigenerational pedigrees with severe phenotypes and/or apparent simple Mendelian inheritance, as has been successfully demonstrated for speech and language impairment. This approach is limited by the scarcity of such families. An alternative which has recently become feasible due to the development of high-throughput genotyping techniques is the analysis of large numbers of sib-pairs using allele-sharing methodology. This paper outlines our strategy for conducting a systematic genome-wide search for genes involved in dyslexia in a large number of affected sib-pair families from the UK. We use a series of psychometric tests to obtain different quantitative measures of reading deficit, which should correlate with different components of the dyslexia phenotype, such as phonological awareness and orthographic coding ability. This enables us to use QTL (quantitative trait locus) mapping as a powerful tool for localising genes which may contribute to reading and spelling disability.
  • Fisher, S. E., Marlow, A. J., Lamb, J., Maestrini, E., Williams, D. F., Richardson, A. J., Weeks, D. E., Stein, J. F., & Monaco, A. P. (1999). A quantitative-trait locus on chromosome 6p influences different aspects of developmental dyslexia. American Journal of Human Genetics, 64(1), 146-156. doi:10.1086/302190.

    Abstract

    Recent application of nonparametric-linkage analysis to reading disability has implicated a putative quantitative-trait locus (QTL) on the short arm of chromosome 6. In the present study, we use QTL methods to evaluate linkage to the 6p25-21.3 region in a sample of 181 sib pairs from 82 nuclear families that were selected on the basis of a dyslexic proband. We have assessed linkage directly for several quantitative measures that should correlate with different components of the phenotype, rather than using a single composite measure or employing categorical definitions of subtypes. Our measures include the traditional IQ/reading discrepancy score, as well as tests of word recognition, irregular-word reading, and nonword reading. Pointwise analysis by means of sib-pair trait differences suggests the presence, in 6p21.3, of a QTL influencing multiple components of dyslexia, in particular the reading of irregular words (P=.0016) and nonwords (P=.0024). A complementary statistical approach involving estimation of variance components supports these findings (irregular words, P=.007; nonwords, P=.0004). Multipoint analyses place the QTL within the D6S422-D6S291 interval, with a peak around markers D6S276 and D6S105 consistently identified by approaches based on trait differences (irregular words, P=.00035; nonwords, P=.0035) and variance components (irregular words, P=.007; nonwords, P=.0038). Our findings indicate that the QTL affects both phonological and orthographic skills and is not specific to phoneme awareness, as has been previously suggested. Further studies will be necessary to obtain a more precise localization of this QTL, which may lead to the isolation of one of the genes involved in developmental dyslexia.
  • Fisher, S. E., & Ridley, M. (2013). Culture, genes, and the human revolution. Science, 340(6135), 929-930. doi:10.1126/science.1236171.

    Abstract

    State-of-the-art DNA sequencing is providing ever more detailed insights into the genomes of humans, extant apes, and even extinct hominins (1–3), offering unprecedented opportunities to uncover the molecular variants that make us human. A common assumption is that the emergence of behaviorally modern humans after 200,000 years ago required—and followed—a specific biological change triggered by one or more genetic mutations. For example, Klein has argued that the dawn of human culture stemmed from a single genetic change that “fostered the uniquely modern ability to adapt to a remarkable range of natural and social circumstance” (4). But are evolutionary changes in our genome a cause or a consequence of cultural innovation (see the figure)?

  • Fisher, S. E. (2019). Human genetics: The evolving story of FOXP2. Current Biology, 29(2), R65-R67. doi:10.1016/j.cub.2018.11.047.

    Abstract

    FOXP2 mutations cause a speech and language disorder, raising interest in potential roles of this gene in human evolution. A new study re-evaluates genomic variation at the human FOXP2 locus but finds no evidence of recent adaptive evolution.
  • Fisher, S. E., Vargha-Khadem, F., Watkins, K. E., Monaco, A. P., & Pembrey, M. E. (1998). Localisation of a gene implicated in a severe speech and language disorder. Nature Genetics, 18, 168-170. doi:10.1038/ng0298-168.

    Abstract

    Between 2 and 5% of children who are otherwise unimpaired have significant difficulties in acquiring expressive and/or receptive language, despite adequate intelligence and opportunity. While twin studies indicate a significant role for genetic factors in developmental disorders of speech and language, the majority of families segregating such disorders show complex patterns of inheritance, and are thus not amenable for conventional linkage analysis. A rare exception is the KE family, a large three-generation pedigree in which approximately half of the members are affected with a severe speech and language disorder which appears to be transmitted as an autosomal dominant monogenic trait. This family has been widely publicised as suffering primarily from a defect in the use of grammatical suffixation rules, thus supposedly supporting the existence of genes specific to grammar. The phenotype, however, is broader in nature, with virtually every aspect of grammar and of language affected. In addition, affected members have a severe orofacial dyspraxia, and their speech is largely incomprehensible to the naive listener. We initiated a genome-wide search for linkage in the KE family and have identified a region on chromosome 7 which co-segregates with the speech and language disorder (maximum lod score = 6.62 at theta = 0.0), confirming autosomal dominant inheritance with full penetrance. Further analysis of microsatellites from within the region enabled us to fine map the locus responsible (designated SPCH1) to a 5.6-cM interval in 7q31, thus providing an important step towards its identification. Isolation of SPCH1 may offer the first insight into the molecular genetics of the developmental process that culminates in speech and language.
  • Fisher, V. J. (2021). Embodied songs: Insights into the nature of cross-modal meaning-making within sign language informed, embodied interpretations of vocal music. Frontiers in Psychology, 12: 624689. doi:10.3389/fpsyg.2021.624689.

    Abstract

    Embodied song practices involve the transformation of songs from the acoustic modality into an embodied-visual form, to increase meaningful access for d/Deaf audiences. This goes beyond the translation of lyrics, by combining poetic sign language with other bodily movements to embody the para-linguistic expressive and musical features that enhance the message of a song. To date, the limited research into this phenomenon has focussed on linguistic features and interactions with rhythm. The relationship between bodily actions and music has not been probed beyond an assumed implication of conformance. However, as the primary objective is to communicate equivalent meanings, the ways that the acoustic and embodied-visual signals relate to each other should reveal something about underlying conceptual agreement. This paper draws together a range of pertinent theories from within a grounded cognition framework including semiotics, analogy mapping and cross-modal correspondences. These theories are applied to embodiment strategies used by prominent d/Deaf and hearing Dutch practitioners, to unpack the relationship between acoustic songs, their embodied representations, and their broader conceptual and affective meanings. This leads to the proposition that meaning primarily arises through shared patterns of internal relations across a range of amodal and cross-modal features with an emphasis on dynamic qualities. These analogous patterns can inform metaphorical interpretations and trigger shared emotional responses. This exploratory survey offers insights into the nature of cross-modal and embodied meaning-making, as a jumping-off point for further research.
  • Fitneva, S. A., Lam, N. H. L., & Dunfield, K. A. (2013). The development of children's information gathering: To look or to ask? Developmental Psychology, 49(3), 533-542. doi:10.1037/a0031326.

    Abstract

    The testimony of others and direct experience play a major role in the development of children's knowledge. Children actively use questions to seek others' testimony and explore the environment. It is unclear though whether children distinguish when it is better to ask from when it is better to try to find an answer by oneself. In 2 experiments, we examined the ability of 4- and 6-year-olds to select between looking and asking to determine visible and invisible properties of entities (e.g., hair color vs. knowledge of French). All children chose to look more often for visible than invisible properties. However, only 6-year-olds chose above chance to look for visible properties and to ask for invisible properties. Four-year-olds showed a preference for looking in one experiment and asking in the other. The results suggest substantial development in the efficacy of children's learning in early childhood.
  • Fitz, H., & Chang, F. (2019). Language ERPs reflect learning through prediction error propagation. Cognitive Psychology, 111, 15-52. doi:10.1016/j.cogpsych.2019.03.002.

    Abstract

    Event-related potentials (ERPs) provide a window into how the brain is processing language. Here, we propose a theory that argues that ERPs such as the N400 and P600 arise as side effects of an error-based learning mechanism that explains linguistic adaptation and language learning. We instantiated this theory in a connectionist model that can simulate data from three studies on the N400 (amplitude modulation by expectancy, contextual constraint, and sentence position), five studies on the P600 (agreement, tense, word category, subcategorization and garden-path sentences), and a study on the semantic P600 in role reversal anomalies. Since ERPs are learning signals, this account explains adaptation of ERP amplitude to within-experiment frequency manipulations and the way ERP effects are shaped by word predictability in earlier sentences. Moreover, it predicts that ERPs can change over language development. The model provides an account of the sensitivity of ERPs to expectation mismatch, the relative timing of the N400 and P600, the semantic nature of the N400, the syntactic nature of the P600, and the fact that ERPs can change with experience. This approach suggests that comprehension ERPs are related to sentence production and language acquisition mechanisms.
  • Flecken, M., von Stutterheim, C., & Carroll, M. (2013). Principles of information organization in L2 use: Complex patterns of conceptual transfer. International review of applied linguistics, 51(2), 229-242. doi:10.1515/iral-2013-0010.
  • Floyd, S. (2013). [Review of the book Lessons from a Quechua strongwoman: ideophony, dialogue and perspective. by Janis Nuckolls. 2010]. Journal of Linguistic Anthropology, 22, 256-258. doi:10.1111/j.1548-1395.2012.01166.x.
  • Frances, C., Navarra-Barindelli, E., & Martin, C. D. (2021). Inhibitory and facilitatory effects of phonological and orthographic similarity on L2 word recognition across modalities in bilinguals. Scientific Reports, 11: 12812. doi:10.1038/s41598-021-92259-z.

    Abstract

    Language perception studies on bilinguals often show that words that share form and meaning across languages (cognates) are easier to process than words that share only meaning. This facilitatory phenomenon is known as the cognate effect. Most previous studies have shown this effect visually, whereas the auditory modality as well as the interplay between type of similarity and modality remain largely unexplored. In this study, highly proficient late Spanish–English bilinguals carried out a lexical decision task in their second language, both visually and auditorily. Words had high or low phonological and orthographic similarity, fully crossed. We also included orthographically identical words (perfect cognates). Our results suggest that similarity in the same modality (i.e., orthographic similarity in the visual modality and phonological similarity in the auditory modality) leads to improved signal detection, whereas similarity across modalities hinders it. We provide support for the idea that perfect cognates are a special category within cognates. Results suggest a need for a conceptual and practical separation between types of similarity in cognate studies. The theoretical implication is that the representations of items are active in both modalities of the non-target language during language processing, which needs to be incorporated into our current processing models.

    Additional information

    supplementary information
  • Francks, C., Paracchini, S., Smith, S. D., Richardson, A. J., Scerri, T. S., Cardon, L. R., Marlow, A. J., MacPhie, I. L., Walter, J., Pennington, B. F., Fisher, S. E., Olson, R. K., DeFries, J. C., Stein, J. F., & Monaco, A. P. (2004). A 77-kilobase region of chromosome 6p22.2 is associated with dyslexia in families from the United Kingdom and from the United States. American Journal of Human Genetics, 75(6), 1046-1058. doi:10.1086/426404.

    Abstract

    Several quantitative trait loci (QTLs) that influence developmental dyslexia (reading disability [RD]) have been mapped to chromosome regions by linkage analysis. The most consistently replicated area of linkage is on chromosome 6p23-21.3. We used association analysis in 223 siblings from the United Kingdom to identify an underlying QTL on 6p22.2. Our association study implicates a 77-kb region spanning the gene TTRAP and the first four exons of the neighboring uncharacterized gene KIAA0319. The region of association is also directly upstream of a third gene, THEM2. We found evidence of these associations in a second sample of siblings from the United Kingdom, as well as in an independent sample of twin-based sibships from Colorado. One main RD risk haplotype that has a frequency of ∼12% was found in both the U.K. and U.S. samples. The haplotype is not distinguished by any protein-coding polymorphisms, and, therefore, the functional variation may relate to gene expression. The QTL influences a broad range of reading-related cognitive abilities but has no significant impact on general cognitive performance in these samples. In addition, the QTL effect may be largely limited to the severe range of reading disability.
  • Francks, C. (2019). In search of the biological roots of typical and atypical human brain asymmetry. Physics of Life Reviews, 30, 22-24. doi:10.1016/j.plrev.2019.07.004.
  • Franken, M. K., Acheson, D. J., McQueen, J. M., Hagoort, P., & Eisner, F. (2019). Consistency influences altered auditory feedback processing. Quarterly Journal of Experimental Psychology, 72(10), 2371-2379. doi:10.1177/1747021819838939.

    Abstract

    Previous research on the effect of perturbed auditory feedback in speech production has focused on two types of responses. In the short term, speakers generate compensatory motor commands in response to unexpected perturbations. In the longer term, speakers adapt feedforward motor programmes in response to feedback perturbations, to avoid future errors. The current study investigated the relation between these two types of responses to altered auditory feedback. Specifically, it was hypothesised that consistency in previous feedback perturbations would influence whether speakers adapt their feedforward motor programmes. In an altered auditory feedback paradigm, formant perturbations were applied either across all trials (the consistent condition) or only to some trials, whereas the others remained unperturbed (the inconsistent condition). The results showed that speakers’ responses were affected by feedback consistency, with stronger speech changes in the consistent condition compared with the inconsistent condition. Current models of speech-motor control can explain this consistency effect. However, the data also suggest that compensation and adaptation are distinct processes, which are not in line with all current models.
  • Frauenfelder, U. H., & Cutler, A. (1985). Preface. Linguistics, 23(5). doi:10.1515/ling.1985.23.5.657.
  • Frega, M., Linda, K., Keller, J. M., Gümüş-Akay, G., Mossink, B., Van Rhijn, J. R., Negwer, M., Klein Gunnewiek, T., Foreman, K., Kompier, N., Schoenmaker, C., Van den Akker, W., Van der Werf, I., Oudakker, A., Zhou, H., Kleefstra, T., Schubert, D., Van Bokhoven, H., & Nadif Kasri, N. (2019). Neuronal network dysfunction in a model for Kleefstra syndrome mediated by enhanced NMDAR signaling. Nature Communications, 10: 4928. doi:10.1038/s41467-019-12947-3.

    Abstract

    Kleefstra syndrome (KS) is a neurodevelopmental disorder caused by mutations in the histone methyltransferase EHMT1. To study the impact of decreased EHMT1 function in human cells, we generated excitatory cortical neurons from induced pluripotent stem (iPS) cells derived from KS patients. Neuronal networks of patient-derived cells exhibit network bursting with a reduced rate, longer duration, and increased temporal irregularity compared to control networks. We show that these changes are mediated by upregulation of NMDA receptor (NMDAR) subunit 1 correlating with reduced deposition of the repressive H3K9me2 mark, the catalytic product of EHMT1, at the GRIN1 promoter. In mice EHMT1 deficiency leads to similar neuronal network impairments with increased NMDAR function. Finally, we rescue the KS patient-derived neuronal network phenotypes by pharmacological inhibition of NMDARs. Summarized, we demonstrate a direct link between EHMT1 deficiency and NMDAR hyperfunction in human neurons, providing a potential basis for more targeted therapeutic approaches for KS.

    Additional information

    supplementary information
  • French, C. A., Vinueza Veloz, M. F., Zhou, K., Peter, S., Fisher, S. E., Costa, R. M., & De Zeeuw, C. I. (2019). Differential effects of Foxp2 disruption in distinct motor circuits. Molecular Psychiatry, 24, 447-462. doi:10.1038/s41380-018-0199-x.

    Abstract

    Disruptions of the FOXP2 gene cause a speech and language disorder involving difficulties in sequencing orofacial movements. FOXP2 is expressed in cortico-striatal and cortico-cerebellar circuits important for fine motor skills, and affected individuals show abnormalities in these brain regions. We selectively disrupted Foxp2 in the cerebellar Purkinje cells, striatum or cortex of mice and assessed the effects on skilled motor behaviour using an operant lever-pressing task. Foxp2 loss in each region impacted behaviour differently, with striatal and Purkinje cell disruptions affecting the variability and the speed of lever-press sequences, respectively. Mice lacking Foxp2 in Purkinje cells showed a prominent phenotype involving slowed lever pressing as well as deficits in skilled locomotion. In vivo recordings from Purkinje cells uncovered an increased simple spike firing rate and decreased modulation of firing during limb movements. This was caused by increased intrinsic excitability rather than changes in excitatory or inhibitory inputs. Our findings show that Foxp2 can modulate different aspects of motor behaviour in distinct brain regions, and uncover an unknown role for Foxp2 in the modulation of Purkinje cell activity that severely impacts skilled movements.
  • Friedrich, P., Forkel, S. J., Amiez, C., Balsters, J. H., Coulon, O., Fan, L., Goulas, A., Hadj-Bouziane, F., Hecht, E. E., Heuer, K., Jiang, T., Latzman, R. D., Liu, X., Loh, K. K., Patil, K. R., Lopez-Persem, A., Procyk, E., Sallet, J., Toro, R., Vickery, S., Weis, S., Wilson, C., Xu, T., Zerbi, V., Eickhoff, S. B., Margulies, D., Mars, R., & Thiebaut de Schotten, M. (2021). Imaging evolution of the primate brain: The next frontier? NeuroImage, 228: 117685. doi:10.1016/j.neuroimage.2020.117685.

    Abstract

    Evolution, as we currently understand it, strikes a delicate balance between animals' ancestral history and adaptations to their current niche. Similarities between species are generally considered inherited from a common ancestor whereas observed differences are considered as more recent evolution. Hence comparing species can provide insights into the evolutionary history. Comparative neuroimaging has recently emerged as a novel subdiscipline, which uses magnetic resonance imaging (MRI) to identify similarities and differences in brain structure and function across species. Whereas invasive histological and molecular techniques are superior in spatial resolution, they are laborious, post-mortem, and oftentimes limited to specific species. Neuroimaging, by comparison, has the advantages of being applicable across species and allows for fast, whole-brain, repeatable, and multi-modal measurements of the structure and function in living brains and post-mortem tissue. In this review, we summarise the current state of the art in comparative anatomy and function of the brain and gather together the main scientific questions to be explored in the future of the fascinating new field of brain evolution derived from comparative neuroimaging.
  • Frost, R. L. A., Monaghan, P., & Christiansen, M. H. (2019). Mark my words: High frequency marker words impact early stages of language learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 45(10), 1883-1898. doi:10.1037/xlm0000683.

    Abstract

    High frequency words have been suggested to benefit both speech segmentation and grammatical categorization of the words around them. Despite utilizing similar information, these tasks are usually investigated separately in studies examining learning. We determined whether including high frequency words in continuous speech could support categorization when words are being segmented for the first time. We familiarized learners with continuous artificial speech comprising repetitions of target words, which were preceded by high-frequency marker words. Crucially, marker words distinguished targets into 2 distributionally defined categories. We measured learning with segmentation and categorization tests and compared performance against a control group that heard the artificial speech without these marker words (i.e., just the targets, with no cues for categorization). Participants segmented the target words from speech in both conditions, but critically when the marker words were present, they influenced acquisition of word-referent mappings in a subsequent transfer task, with participants demonstrating better early learning for mappings that were consistent (rather than inconsistent) with the distributional categories. We propose that high-frequency words may assist early grammatical categorization, while speech segmentation is still being learned.

    Additional information

    Supplemental Material
  • Fueller, C., Loescher, J., & Indefrey, P. (2013). Writing superiority in cued recall. Frontiers in Psychology, 4: 764. doi:10.3389/fpsyg.2013.00764.

    Abstract

    In list learning paradigms with free recall, written recall has been found to be less susceptible to intrusions of related concepts than spoken recall when the list items had been visually presented. This effect has been ascribed to the use of stored orthographic representations from the study phase during written recall (Kellogg, 2001). In other memory retrieval paradigms, by contrast, either better recall for modality-congruent items or an input-independent writing superiority effect have been found (Grabowski, 2005). In a series of four experiments using a paired associate learning paradigm we tested (a) whether output modality effects on verbal recall can be replicated in a paradigm that does not involve the rejection of semantically related intrusion words, (b) whether a possible superior performance for written recall was due to a slower response onset for writing as compared to speaking in immediate recall, and (c) whether the performance in paired associate word recall was correlated with performance in an additional episodic memory recall task. We observed better written recall in the first half of the recall phase, irrespective of the modality in which the material was presented upon encoding. An explanation for this effect based on longer response latencies for writing and hence more time for memory retrieval could be ruled out by showing that the effect persisted in delayed response versions of the task. Although there was some evidence that stored additional episodic information may contribute to the successful retrieval of associate words, this evidence was only found in the immediate response experiments and hence is most likely independent from the observed output modality effect. In sum, our results from a paired associate learning paradigm suggest that superior performance for written vs. spoken recall cannot be (solely) explained in terms of additional access to stored orthographic representations from the encoding phase. Our findings rather suggest a general writing-superiority effect at the time of memory retrieval.
  • Gaby, A. R. (2004). Extended functions of Thaayorre body part terms. Papers in Linguistics and Applied Linguistics, 4(2), 24-34.
  • Galbiati, A., Verga, L., Giora, E., Zucconi, M., & Ferini-Strambi, L. (2019). The risk of neurodegeneration in REM sleep behavior disorder: A systematic review and meta-analysis of longitudinal studies. Sleep Medicine Reviews, 43, 37-46. doi:10.1016/j.smrv.2018.09.008.

    Abstract

    Several studies report an association between REM Sleep Behavior Disorder (RBD) and neurodegenerative diseases, in particular synucleinopathies. Interestingly, the onset of RBD precedes the development of neurodegeneration by several years. This review and meta-analysis aims to establish the rate of conversion of RBD into neurodegenerative diseases. Longitudinal studies were searched from the PubMed, Web of Science, and SCOPUS databases. Using random-effect modeling, we performed a meta-analysis on the rate of RBD conversions into neurodegeneration. Furthermore, we fitted a Kaplan-Meier analysis and compared the differences between survival curves of different diseases with log-rank tests. The risk for developing neurodegenerative diseases was 33.5% at five years follow-up, 82.4% at 10.5 years and 96.6% at 14 years. The average conversion rate was 31.95% after a mean duration of follow-up of 4.75 ± 2.43 years. The majority of RBD patients converted to Parkinson's Disease (43%), followed by Dementia with Lewy Bodies (25%). The estimated risk for RBD patients to develop a neurodegenerative disease over a long-term follow-up is more than 90%. Future studies should include a control group for the evaluation of REM sleep without atonia as a marker for neurodegeneration also in non-clinical populations and target RBD as a precursor of neurodegeneration to develop protective trials.
  • Ganushchak, L. Y., Krott, A., Frisson, S., & Meyer, A. S. (2013). Processing words and Short Message Service shortcuts in sentential contexts: An eye movement study. Applied Psycholinguistics, 34, 163-179. doi:10.1017/S0142716411000658.

    Abstract

    The present study investigated whether Short Message Service shortcuts are more difficult to process in sentence context than the spelled-out word equivalent and, if so, how any additional processing difficulty arises. Twenty-four student participants read 37 Short Message Service shortcuts and word equivalents embedded in semantically plausible and implausible contexts (e.g., He left/drank u/you a note) while their eye movements were recorded. There were effects of plausibility and spelling on early measures of processing difficulty (first fixation durations, gaze durations, skipping, and first-pass regression rates for the targets), but there were no interactions of plausibility and spelling. Late measures of processing difficulty (second run gaze duration and total fixation duration) were only affected by plausibility but not by spelling. These results suggest that shortcuts are harder to recognize, but that, once recognized, they are integrated into the sentence context as easily as ordinary words.
  • Gao, Y., Zheng, L., Liu, X., Nichols, E. S., Zhang, M., Shang, L., Ding, G., Meng, Z., & Liu, L. (2019). First and second language reading difficulty among Chinese–English bilingual children: The prevalence and influences from demographic characteristics. Frontiers in Psychology, 10: 2544. doi:10.3389/fpsyg.2019.02544.

    Abstract

    Learning to read a second language (L2) can pose a great challenge for children who have already been struggling to read in their first language (L1). Moreover, it is not clear whether, to what extent, and under what circumstances L1 reading difficulty increases the risk of L2 reading difficulty. This study investigated Chinese (L1) and English (L2) reading skills in a large representative sample of 1,824 Chinese–English bilingual children in Grades 4 and 5 from both urban and rural schools in Beijing. We examined the prevalence of reading difficulty in Chinese only (poor Chinese readers, PC), English only (poor English readers, PE), and both Chinese and English (poor bilingual readers, PB) and calculated the co-occurrence, that is, the chances of becoming a poor reader in English given that the child was already a poor reader in Chinese. We then conducted a multinomial logistic regression analysis and compared the prevalence of PC, PE, and PB between children in Grade 4 versus Grade 5, in urban versus rural areas, and in boys versus girls. Results showed that compared to girls, boys demonstrated significantly higher risk of PC, PE, and PB. Meanwhile, compared to the 5th graders, the 4th graders demonstrated significantly higher risk of PC and PB. In addition, children enrolled in the urban schools were more likely to become better second language readers, thus leading to a concerning rural–urban gap in the prevalence of L2 reading difficulty. Finally, among these Chinese–English bilingual children, regardless of sex and school location, poor reading skill in Chinese significantly increased the risk of also being a poor English reader, with a considerable and stable co-occurrence of approximately 36%. In sum, this study suggests that despite striking differences between alphabetic and logographic writing systems, L1 reading difficulty still significantly increases the risk of L2 reading difficulty. This indicates the shared meta-linguistic skills in reading different writing systems and the importance of understanding the universality and the interdependent relationship of reading between different writing systems. Furthermore, the male disadvantage (in both L1 and L2) and the urban–rural gap (in L2) found in the prevalence of reading difficulty call for special attention to disadvantaged populations in educational practice.
  • Gao, X., Dera, J., Nijhoff, A. D., & Willems, R. M. (2019). Is less readable liked better? The case of font readability in poetry appreciation. PLoS One, 14(12): e0225757. doi:10.1371/journal.pone.0225757.

    Abstract

    Previous research shows conflicting findings for the effect of font readability on comprehension and memory for language. It has been found that—perhaps counterintuitively–a hard to read font can be beneficial for language comprehension, especially for difficult language. Here we test how font readability influences the subjective experience of poetry reading. In three experiments we tested the influence of poem difficulty and font readability on the subjective experience of poems. We specifically predicted that font readability would have opposite effects on the subjective experience of easy versus difficult poems. Participants read poems which could be more or less difficult in terms of conceptual or structural aspects, and which were presented in a font that was either easy or more difficult to read. Participants read existing poems and subsequently rated their subjective experience (measured through four dependent variables: overall liking, perceived flow of the poem, perceived topic clarity, and perceived structure). In line with previous literature we observed a Poem Difficulty x Font Readability interaction effect for subjective measures of poetry reading. We found that participants rated easy poems as nicer when presented in an easy to read font, as compared to when presented in a hard to read font. Despite the presence of the interaction effect, we did not observe the predicted opposite effect for more difficult poems. We conclude that font readability can influence reading of easy and more difficult poems differentially, with strongest effects for easy poems.

    Additional information

    https://osf.io/jwcqt/
  • Garcia, R., Garrido Rodriguez, G., & Kidd, E. (2021). Developmental effects in the online use of morphosyntactic cues in sentence processing: Evidence from Tagalog. Cognition, 216: 104859. doi:10.1016/j.cognition.2021.104859.

    Abstract

    Children must necessarily process their input in order to learn it, yet the architecture of the developing parsing system and how it interfaces with acquisition is unclear. In the current paper we report experimental and corpus data investigating adult and children's use of morphosyntactic cues for making incremental online predictions of thematic roles in Tagalog, a verb-initial symmetrical voice language of the Philippines. In Study 1, Tagalog-speaking adults completed a visual world eye-tracking experiment in which they viewed pictures of causative actions that were described by transitive sentences manipulated for voice and word order. The pattern of results showed that adults process agent and patient voice differently, predicting the upcoming noun in the patient voice but not in the agent voice, consistent with the observation of a patient voice preference in adult sentence production. In Study 2, our analysis of a corpus of child-directed speech showed that children heard more patient voice- than agent voice-marked verbs. In Study 3, 5-, 7-, and 9-year-old children completed a similar eye-tracking task as used in Study 1. The overall pattern of results suggested that, like the adults in Study 1, children process agent and patient voice differently in a manner that reflects the input distributions, with children developing towards the adult state across early childhood. The results are most consistent with theoretical accounts that identify a key role for input distributions in acquisition and language processing.

    Additional information

    1-s2.0-S001002772100278X-mmc1.docx
  • Garcia, R., Roeser, J., & Höhle, B. (2019). Thematic role assignment in the L1 acquisition of Tagalog: Use of word order and morphosyntactic markers. Language Acquisition, 26(3), 235-261. doi:10.1080/10489223.2018.1525613.

    Abstract

    It is a common finding across languages that young children have problems in understanding patient-initial sentences. We used Tagalog, a verb-initial language with a reliable voice-marking system and highly frequent patient voice constructions, to test the predictions of several accounts that have been proposed to explain this difficulty: the frequency account, the Competition Model, and the incremental processing account. Study 1 presents an analysis of Tagalog child-directed speech, which showed that the dominant argument order is agent-before-patient and that morphosyntactic markers are highly valid cues to thematic role assignment. In Study 2, we used a combined self-paced listening and picture verification task to test how Tagalog-speaking adults and 5- and 7-year-old children process reversible transitive sentences. Results showed that adults performed well in all conditions, while children’s accuracy and listening times for the first noun phrase indicated more difficulty in interpreting patient-initial sentences in the agent voice compared to the patient voice. The patient voice advantage is partly explained by both the frequency account and incremental processing account.
  • Gau, R., Noble, S., Heuer, K., Bottenhorn, K. L., Bilgin, I. P., Yang, Y.-F., Huntenburg, J. M., Bayer, J. M., Bethlehem, R. A., Rhoads, S. A., Vogelbacher, C., Borghesani, V., Levitis, E., Wang, H.-T., Van Den Bossche, S., Kobeleva, X., Legarreta, J. H., Guay, S., Atay, S. M., Varoquaux, G. P., Huijser, D. C., Sandström, M. S., Herholz, P., Nastase, S. A., Badhwar, A., Dumas, G., Schwab, S., Moia, S., Dayan, M., Bassil, Y., Brooks, P. P., Mancini, M., Shine, J. M., O’Connor, D., Xie, X., Poggiali, D., Friedrich, P., Heinsfeld, A. S., Riedl, L., Toro, R., Caballero-Gaudes, C., Eklund, A., Garner, K. G., Nolan, C. R., Demeter, D. V., Barrios, F. A., Merchant, J. S., McDevitt, E. A., Oostenveld, R., Craddock, R. C., Rokem, A., Doyle, A., Ghosh, S. S., Nikolaidis, A., Stanley, O. W., Uruñuela, E., Anousheh, N., Arnatkeviciute, A., Auzias, G., Bachar, D., Bannier, E., Basanisi, R., Basavaraj, A., Bedini, M., Bellec, P., Benn, R. A., Berluti, K., Bollmann, S., Bollmann, S., Bradley, C., Brown, J., Buchweitz, A., Callahan, P., Chan, M. Y., Chandio, B. Q., Cheng, T., Chopra, S., Chung, A. W., Close, T. G., Combrisson, E., Cona, G., Constable, R. T., Cury, C., Dadi, K., Damasceno, P. F., Das, S., De Vico Fallani, F., DeStasio, K., Dickie, E. W., Dorfschmidt, L., Duff, E. P., DuPre, E., Dziura, S., Esper, N. B., Esteban, O., Fadnavis, S., Flandin, G., Flannery, J. E., Flournoy, J., Forkel, S. J., Franco, A. R., Ganesan, S., Gao, S., García Alanis, J. C., Garyfallidis, E., Glatard, T., Glerean, E., Gonzalez-Castillo, J., Gould van Praag, C. D., Greene, A. S., Gupta, G., Hahn, C. A., Halchenko, Y. O., Handwerker, D., Hartmann, T. S., Hayot-Sasson, V., Heunis, S., Hoffstaedter, F., Hohmann, D. M., Horien, C., Ioanas, H.-I., Iordan, A., Jiang, C., Joseph, M., Kai, J., Karakuzu, A., Kennedy, D. N., Keshavan, A., Khan, A. R., Kiar, G., Klink, P. C., Koppelmans, V., Koudoro, S., Laird, A. R., Langs, G., Laws, M., Licandro, R., Liew, S.-L., Lipic, T., Litinas, K., Lurie, D. J., Lussier, D., Madan, C. R., Mais, L.-T., Mansour L, S., Manzano-Patron, J., Maoutsa, D., Marcon, M., Margulies, D. S., Marinato, G., Marinazzo, D., Markiewicz, C. J., Maumet, C., Meneguzzi, F., Meunier, D., Milham, M. P., Mills, K. L., Momi, D., Moreau, C. A., Motala, A., Moxon-Emre, I., Nichols, T. E., Nielson, D. M., Nilsonne, G., Novello, L., O’Brien, C., Olafson, E., Oliver, L. D., Onofrey, J. A., Orchard, E. R., Oudyk, K., Park, P. J., Parsapoor, M., Pasquini, L., Peltier, S., Pernet, C. R., Pienaar, R., Pinheiro-Chagas, P., Poline, J.-B., Qiu, A., Quendera, T., Rice, L. C., Rocha-Hidalgo, J., Rutherford, S., Scharinger, M., Scheinost, D., Shariq, D., Shaw, T. B., Siless, V., Simmonite, M., Sirmpilatze, N., Spence, H., Sprenger, J., Stajduhar, A., Szinte, M., Takerkart, S., Tam, A., Tejavibulya, L., Thiebaut de Schotten, M., Thome, I., Tomaz da Silva, L., Traut, N., Uddin, L. Q., Vallesi, A., VanMeter, J. W., Vijayakumar, N., di Oleggio Castello, M. V., Vohryzek, J., Vukojević, J., Whitaker, K. J., Whitmore, L., Wideman, S., Witt, S. T., Xie, H., Xu, T., Yan, C.-G., Yeh, F.-C., Yeo, B. T., & Zuo, X.-N. (2021). Brainhack: Developing a culture of open, inclusive, community-driven neuroscience. Neuron, 109(11), 1769-1775. doi:10.1016/j.neuron.2021.04.001.

    Abstract

    Social factors play a crucial role in the advancement of science. New findings are discussed and theories emerge through social interactions, which usually take place within local research groups and at academic events such as conferences, seminars, or workshops. This system tends to amplify the voices of a select subset of the community—especially more established researchers—thus limiting opportunities for the larger community to contribute and connect. Brainhack (https://brainhack.org/) events (or Brainhacks for short) complement these formats in neuroscience with decentralized 2- to 5-day gatherings, in which participants from diverse backgrounds and career stages collaborate and learn from each other in an informal setting. The Brainhack format was introduced in a previous publication (Cameron Craddock et al., 2016; Figures 1A and 1B). It is inspired by the hackathon model (see glossary in Table 1), which originated in software development and has gained traction in science as a way to bring people together for collaborative work and educational courses. Unlike many hackathons, Brainhacks welcome participants from all disciplines and with any level of experience—from those who have never written a line of code to software developers and expert neuroscientists. Brainhacks additionally replace the sometimes-competitive context of traditional hackathons with a purely collaborative one and also feature informal dissemination of ongoing research through unconferences.

    Additional information

    supplementary information
  • Gauvin, H. S., Hartsuiker, R. J., & Huettig, F. (2013). Speech monitoring and phonologically-mediated eye gaze in language perception and production: A comparison using printed word eye-tracking. Frontiers in Human Neuroscience, 7: 818. doi:10.3389/fnhum.2013.00818.

    Abstract

    The Perceptual Loop Theory of speech monitoring assumes that speakers routinely inspect their inner speech. In contrast, Huettig and Hartsuiker (2010) observed that listening to one’s own speech during language production drives eye-movements to phonologically related printed words with a similar time-course as listening to someone else’s speech does in speech perception experiments. This suggests that speakers listen to their own overt speech, but not to their inner speech. However, a direct comparison between production and perception with the same stimuli and participants is lacking so far. The current printed word eye-tracking experiment therefore used a within-subjects design, combining production and perception. Displays showed four words, of which one, the target, either had to be named or was presented auditorily. Accompanying words were phonologically related, semantically related, or unrelated to the target. There were small increases in looks to phonological competitors with a similar time-course in both production and perception. Phonological effects in perception however lasted longer and had a much larger magnitude. We conjecture that this difference is related to a difference in predictability of one’s own and someone else’s speech, which in turn has consequences for lexical competition in other-perception and possibly suppression of activation in self-perception.
  • Gavin, M., Botero, C. A., Bowern, C., Colwell, R. K., Dunn, M., Dunn, R. R., Gray, R. D., Kirby, K. R., McCarter, J., Powell, A., Rangel, T. F., Steppe, J. R., Trautwein, M., Verdolin, J. L., & Yanega, G. (2013). Towards a mechanistic understanding of linguistic diversity. Bioscience, 63, 524-535. doi:10.1525/bio.2013.63.7.6.

    Abstract

    Our species displays remarkable linguistic diversity. While the uneven distribution of this diversity demands explanation, the drivers of these patterns have not been conclusively determined. We address this issue in two steps. First, we review previous empirical studies that have suggested environmental, geographical, and socio-cultural drivers of linguistic diversification. However, contradictory results and methodological variation make it difficult to draw general conclusions. Second, we outline a program for future research. We suggest that future analyses should account for interactions among causal factors, lack of spatial and phylogenetic independence of data, and transitory patterns. Recent analytical advances in biogeography and evolutionary biology, such as simulation modeling of diversity patterns, hold promise for testing four key mechanisms of language diversification proposed here: neutral change, population movement, contact, and selection. Future modeling approaches should also evaluate how the outcomes of these processes are influenced by demography, environmental heterogeneity, and time.
  • Gehrig, J., Michalareas, G., Forster, M.-T., Lei, J., Hok, P., Laufs, H., Senft, C., Seifert, V., Schoffelen, J.-M., Hanslmayr, H., & Kell, C. A. (2019). Low-frequency oscillations code speech during verbal working memory. The Journal of Neuroscience, 39(33), 6498-6512. doi:10.1523/JNEUROSCI.0018-19.2019.

    Abstract

    The way the human brain represents speech in memory is still unknown. An obvious characteristic of speech is its evolvement over time. During speech processing, neural oscillations are modulated by the temporal properties of the acoustic speech signal, but also acquired knowledge on the temporal structure of language influences speech perception-related brain activity. This suggests that speech could be represented in the temporal domain, a form of representation that the brain also uses to encode autobiographic memories. Empirical evidence for such a memory code is lacking. We investigated the nature of speech memory representations using direct cortical recordings in the left perisylvian cortex during delayed sentence reproduction in female and male patients undergoing awake tumor surgery. Our results reveal that the brain endogenously represents speech in the temporal domain. Temporal pattern similarity analyses revealed that the phase of frontotemporal low-frequency oscillations, primarily in the beta range, represents sentence identity in working memory. The positive relationship between beta power during working memory and task performance suggests that working memory representations benefit from increased phase separation.
  • Geipel, I., Lattenkamp, E. Z., Dixon, M. M., Wiegrebe, L., & Page, R. A. (2021). Hearing sensitivity: An underlying mechanism for niche differentiation in gleaning bats. Proceedings of the National Academy of Sciences of the United States of America, 118: e2024943118. doi:10.1073/pnas.2024943118.

    Abstract

    Tropical ecosystems are known for high species diversity. Adaptations permitting niche differentiation enable species to coexist. Historically, research focused primarily on morphological and behavioral adaptations for foraging, roosting, and other basic ecological factors. Another important factor, however, is differences in sensory capabilities. So far, studies mainly have focused on the output of behavioral strategies of predators and their prey preference. Understanding the coexistence of different foraging strategies, however, requires understanding underlying cognitive and neural mechanisms. In this study, we investigate hearing in bats and how it shapes bat species coexistence. We present the hearing thresholds and echolocation calls of 12 different gleaning bats from the ecologically diverse Phyllostomid family. We measured their auditory brainstem responses to assess their hearing sensitivity. The audiograms of these species had similar overall shapes but differed substantially for frequencies below 9 kHz and in the frequency range of their echolocation calls. Our results suggest that differences among bats in hearing abilities contribute to the diversity in foraging strategies of gleaning bats. We argue that differences in auditory sensitivity could be important mechanisms shaping diversity in sensory niches and coexistence of species.
  • Gentner, D., Ozyurek, A., Gurcanli, O., & Goldin-Meadow, S. (2013). Spatial language facilitates spatial cognition: Evidence from children who lack language input. Cognition, 127, 318-330. doi:10.1016/j.cognition.2013.01.003.

    Abstract

    Does spatial language influence how people think about space? To address this question, we observed children who did not know a conventional language, and tested their performance on nonlinguistic spatial tasks. We studied deaf children living in Istanbul whose hearing losses prevented them from acquiring speech and whose hearing parents had not exposed them to sign. Lacking a conventional language, the children used gestures, called homesigns, to communicate. In Study 1, we asked whether homesigners used gesture to convey spatial relations, and found that they did not. In Study 2, we tested a new group of homesigners on a Spatial Mapping Task, and found that they performed significantly worse than hearing Turkish children who were matched to the deaf children on another cognitive task. The absence of spatial language thus went hand-in-hand with poor performance on the nonlinguistic spatial task, pointing to the importance of spatial language in thinking about space.
  • Ghatan, P. H., Hsieh, J. C., Petersson, K. M., Stone-Elander, S., & Ingvar, M. (1998). Coexistence of attention-based facilitation and inhibition in the human cortex. NeuroImage, 7, 23-29.

    Abstract

    A key function of attention is to select an appropriate subset of available information by facilitation of attended processes and/or inhibition of irrelevant processing. Functional imaging studies, using positron emission tomography, have during different experimental tasks revealed decreased neuronal activity in areas that process input from unattended sensory modalities. It has been hypothesized that these decreases reflect a selective inhibitory modulation of nonrelevant cortical processing. In this study we addressed this question using a continuous arithmetical task with and without concomitant disturbing auditory input (task-irrelevant speech). During the arithmetical task, irrelevant speech did not affect task performance but yielded decreased activity in the auditory and midcingulate cortices and increased activity in the left posterior parietal cortex. This pattern of modulation is consistent with a top-down inhibitory modulation of a nonattended input to the auditory cortex and a coexisting, attention-based facilitation of task-relevant processing in higher order cortices. These findings suggest that task-related decreases in cortical activity may be of functional importance in the understanding of both attentional mechanisms and task-related information processing.
