Publications

  • Den Hoed, J., Sollis, E., Venselaar, H., Estruch, S. B., Deriziotis, P., & Fisher, S. E. (2018). Functional characterization of TBR1 variants in neurodevelopmental disorder. Scientific Reports, 8: 14279. doi:10.1038/s41598-018-32053-6.

    Abstract

    Recurrent de novo variants in the TBR1 transcription factor are implicated in the etiology of sporadic autism spectrum disorders (ASD). Disruptions include missense variants located in the T-box DNA-binding domain, and previous work has demonstrated that they disrupt TBR1 protein function. Recent screens of thousands of simplex families with sporadic ASD cases uncovered additional T-box variants in TBR1, but their etiological relevance is unclear. We performed detailed functional analyses of de novo missense TBR1 variants found in the T-box of ASD cases, assessing many aspects of protein function, including subcellular localization, transcriptional activity and protein-interactions. Only two of the three tested variants severely disrupted TBR1 protein function, despite in silico predictions that all would be deleterious. Furthermore, we characterized a putative interaction with BCL11A, a transcription factor that was recently implicated in a neurodevelopmental syndrome involving developmental delay and language deficits. Our findings enhance understanding of molecular functions of TBR1, as well as highlighting the importance of functional testing of variants that emerge from next-generation sequencing, to decipher their contributions to neurodevelopmental disorders like ASD.

    Additional information

    Electronic supplementary material
  • Deriziotis, P., & Tabrizi, S. J. (2008). Prions and the proteasome. Biochimica et Biophysica Acta-Molecular Basis of Disease, 1782(12), 713-722. doi:10.1016/j.bbadis.2008.06.011.

    Abstract

    Prion diseases are fatal neurodegenerative disorders that include Creutzfeldt-Jakob disease in humans and bovine spongiform encephalopathy in animals. They are unique in terms of their biology because they are caused by the conformational re-arrangement of a normal host-encoded prion protein, PrPC, to an abnormal infectious isoform, PrPSc. Currently, the precise mechanism behind prion-mediated neurodegeneration remains unclear. It is hypothesised that an unknown toxic gain of function of PrPSc, or an intermediate oligomeric form, underlies neuronal death. Increasing evidence suggests a role for the ubiquitin proteasome system (UPS) in prion disease. Both wild-type PrPC and disease-associated PrP isoforms accumulate in cells after proteasome inhibition, leading to increased cell death, and abnormal beta-sheet-rich PrP isoforms have been shown to inhibit the catalytic activity of the proteasome. Here we review potential interactions between prions and the proteasome, outlining how the UPS may be implicated in prion-mediated neurodegeneration.
  • Devanna, P., Van de Vorst, M., Pfundt, R., Gilissen, C., & Vernes, S. C. (2018). Genome-wide investigation of an ID cohort reveals de novo 3′UTR variants affecting gene expression. Human Genetics, 137(9), 717-721. doi:10.1007/s00439-018-1925-9.

    Abstract

    Intellectual disability (ID) is a severe neurodevelopmental disorder with genetically heterogeneous causes. Large-scale sequencing has led to the identification of many gene-disrupting mutations; however, a substantial proportion of cases lack a molecular diagnosis. As such, there remains much to uncover for a complete understanding of the genetic underpinnings of ID. Genetic variants present in non-coding regions of the genome have been highlighted as potential contributors to neurodevelopmental disorders given their role in regulating gene expression. Nevertheless the functional characterization of non-coding variants remains challenging. We describe the identification and characterization of de novo non-coding variation in 3′UTR regulatory regions within an ID cohort of 50 patients. This cohort was previously screened for structural and coding pathogenic variants via CNV, whole exome and whole genome analysis. We identified 44 high-confidence single nucleotide non-coding variants within the 3′UTR regions of these 50 genomes. Four of these variants were located within predicted miRNA binding sites and were thus hypothesised to have regulatory consequences. Functional testing showed that two of the variants interfered with miRNA-mediated regulation of their target genes, AMD1 and FAIM. Both these variants were found in the same individual and their functional consequences may point to a potential role for such variants in intellectual disability.

    Additional information

    439_2018_1925_MOESM1_ESM.docx
  • Devanna, P., Chen, X. S., Ho, J., Gajewski, D., Smith, S. D., Gialluisi, A., Francks, C., Fisher, S. E., Newbury, D. F., & Vernes, S. C. (2018). Next-gen sequencing identifies non-coding variation disrupting miRNA binding sites in neurological disorders. Molecular Psychiatry, 23(5), 1375-1384. doi:10.1038/mp.2017.30.

    Abstract

    Understanding the genetic factors underlying neurodevelopmental and neuropsychiatric disorders is a major challenge given their prevalence and potential severity for quality of life. While large-scale genomic screens have made major advances in this area, for many disorders the genetic underpinnings are complex and poorly understood. To date the field has focused predominantly on protein coding variation, but given the importance of tightly controlled gene expression for normal brain development and disorder, variation that affects non-coding regulatory regions of the genome is likely to play an important role in these phenotypes. Herein we show the importance of 3′ untranslated region (3′UTR) non-coding regulatory variants across neurodevelopmental and neuropsychiatric disorders. We devised a pipeline for identifying and functionally validating putatively pathogenic variants from next generation sequencing (NGS) data. We applied this pipeline to a cohort of children with severe specific language impairment (SLI) and identified a functional, SLI-associated variant affecting gene regulation in cells and post-mortem human brain. This variant and the affected gene (ARHGEF39) represent new putative risk factors for SLI. Furthermore, we identified 3′UTR regulatory variants across autism, schizophrenia and bipolar disorder NGS cohorts, demonstrating their impact on neurodevelopmental and neuropsychiatric disorders. Our findings show the importance of investigating non-coding regulatory variants when determining risk factors contributing to neurodevelopmental and neuropsychiatric disorders. In the future, integration of such regulatory variation with protein coding changes will be essential for uncovering the genetic causes of complex neurological disorders and the fundamental mechanisms underlying health and disease.

    Additional information

    mp201730x1.docx
  • Díaz-Caneja, C. M., Alloza, C., Gordaliza, P. M., Fernández Pena, A., De Hoyos, L., Santonja, J., Buimer, E. E. L., Van Haren, N. E. M., Cahn, W., Arango, C., Kahn, R. S., Hulshoff Pol, H. E., Schnack, H. G., & Janssen, J. (2021). Sex differences in lifespan trajectories and variability of human sulcal and gyral morphology. Cerebral Cortex, 31(11), 5107-5120. doi:10.1093/cercor/bhab145.

    Abstract

    Sex differences in development and aging of human sulcal morphology have been understudied. We charted sex differences in trajectories and inter-individual variability of global sulcal depth, width, and length, pial surface area, exposed (hull) gyral surface area, unexposed sulcal surface area, cortical thickness, and cortex volume across the lifespan in a longitudinal sample (700 scans; 194 participants with two scans, 104 with three scans; age range: 16-70 years) of neurotypical males and females. After adjusting for brain volume, females had thicker cortex and steeper thickness decline until age 40 years; trajectories converged thereafter. Across sexes, sulcal shortening was faster before age 40, while sulcal shallowing and widening were faster thereafter. While hull area remained stable, sulcal surface area declined and was more strongly associated with sulcal shortening than with sulcal shallowing and widening. Males showed greater variability for cortex volume and thickness and lower variability for sulcal width. Across sexes, variability decreased with age for all measures except for cortical volume and thickness. Our findings highlight the association between loss of sulcal area, notably through sulcal shortening, with cortex volume loss. Studying sex differences in lifespan trajectories may improve knowledge of individual differences in brain development and the pathophysiology of neuropsychiatric conditions.

    Additional information

    supplementary data
  • Dietrich, C., Swingley, D., & Werker, J. F. (2007). Native language governs interpretation of salient speech sound differences at 18 months. Proceedings of the National Academy of Sciences of the USA, 104(41), 16027-16031.

    Abstract

    One of the first steps infants take in learning their native language is to discover its set of speech-sound categories. This early development is shown when infants begin to lose the ability to differentiate some of the speech sounds their language does not use, while retaining or improving discrimination of language-relevant sounds. However, this aspect of early phonological tuning is not sufficient for language learning. Children must also discover which of the phonetic cues that are used in their language serve to signal lexical distinctions. Phonetic variation that is readily discriminable to all children may indicate two different words in one language but only one word in another. Here, we provide evidence that the language background of 1.5-year-olds affects their interpretation of phonetic variation in word learning, and we show that young children interpret salient phonetic variation in language-specific ways. Three experiments with a total of 104 children compared Dutch- and English-learning 18-month-olds' responses to novel words varying in vowel duration or vowel quality. Dutch learners interpreted vowel duration as lexically contrastive, but English learners did not, in keeping with properties of Dutch and English. Both groups performed equivalently when differentiating words varying in vowel quality. Thus, at one and a half years, children's phonological knowledge already guides their interpretation of salient phonetic variation. We argue that early phonological learning is not just a matter of maintaining the ability to distinguish language-relevant phonetic cues. Learning also requires phonological interpretation at appropriate levels of linguistic analysis.
  • Dietrich, R., & Klein, W. (1986). Simple language. Interdisciplinary Science Reviews, 11(2), 110-117.
  • Dijkstra, T., Moscoso del Prado Martín, F., Schulpen, B., Schreuder, R., & Baayen, R. H. (2005). A roommate in cream: Morphological family size effects on interlingual homograph recognition. Language and Cognitive Processes, 20, 7-41. doi:10.1080/01690960444000124.
  • Dimroth, C., & Lindner, K. (2005). Was langsame Lerner uns zeigen können: der Erwerb der Finitheit im Deutschen durch einsprachige Kinder mit spezifischer Sprachentwicklungsstörung und durch Zweitsprachlerner. Zeitschrift für Literaturwissenschaft und Linguistik, 140, 40-61.
  • Dimroth, C., & Klein, W. (2007). Den Erwachsenen überlegen: Kinder entwickeln beim Sprachenlernen besondere Techniken und sind erfolgreicher als ältere Menschen. Tagesspiegel, 19737, B6-B6.

    Abstract

    The younger – the better? This paper discusses second language learning at different ages and takes a critical look at generalizations of the kind ‘The younger – the better’. It is argued that these generalizations do not apply across the board. Age-related differences like the amount of linguistic knowledge, prior experience as a language user, or more or less advanced communicative needs affect different components of the language system to different degrees, and can even be an advantage for the early development of simple communicative systems.
  • Dimroth, C. (2008). Age effects on the process of L2 acquisition? Evidence from the acquisition of negation and finiteness in L2 German. Language Learning, 58(1), 117-150. doi:10.1111/j.1467-9922.2007.00436.x.

    Abstract

    It is widely assumed that ultimate attainment in adult second language (L2) learners often differs quite radically from ultimate attainment in child L2 learners. This article addresses the question of whether learners at different ages also show qualitative differences in the process of L2 acquisition. Longitudinal production data from two untutored Russian beginners (ages 8 and 14) acquiring German under roughly similar conditions are compared to published results on the acquisition of German by adult immigrants. The study focuses on the acquisition of negation and finiteness as core domains of German sentence grammar. Adult learners have been shown to produce an early nonfinite learner variety in which utterance organization relies on principles of information structure rather than on target language grammar. They then go through a couple of intermediate steps in which, first, semantically empty verbs (auxiliaries) serve as isolated carriers of finiteness before lexical verbs become finite. Whereas the 14-year-old learner of this case study basically shows a developmental pattern similar to that of adults, the 8-year-old child produces a different order of acquisition: Not only is the development of finite morphology faster, but finite lexical verbs are acquired before auxiliary constructions (Perfekt). Results suggest a stronger tendency for young learners to incrementally assimilate input patterns without relying on analytic steps guided by principles of information organization to the same extent as older learners.
  • Dimroth, C., & Lambert, M. (Eds.). (2008). La structure informationnelle chez les apprenants L2 [Special Issue]. Acquisition et Interaction en Langue Étrangère, 26.
  • Dingemanse, M. (2008). WALS online [review]. Elanguage. Retrieved from http://elanguage.net/blogs/booknotices/?p=69.
  • Dingemanse, M. (2008). [Review of Phonology Assistant 3.0.1: From Sil International]. Language Documentation & Conservation, 2(2), 325-331. Retrieved from http://hdl.handle.net/10125/4350.
  • Dingemanse, M. (2008). [Review of the book Semantic assignment rules in Bantu classes: A reanalysis based on Kiswahili by Assibi A. Amidu]. Afrikanistik Online.
  • Dingemanse, M. (2018). Redrawing the margins of language: Lessons from research on ideophones. Glossa: a journal of general linguistics, 3(1): 4. doi:10.5334/gjgl.444.

    Abstract

    Ideophones (also known as expressives or mimetics, and including onomatopoeia) have been systematically studied in linguistics since the 1850s, when they were first described as a lexical class of vivid sensory words in West-African languages. This paper surveys the research history of ideophones, from its roots in African linguistics to its fruits in general linguistics and typology around the globe. It shows that despite a recurrent narrative of marginalisation, work on ideophones has made an impact in many areas of linguistics, from theories of phonological features to typologies of manner and motion, and from sound symbolism to sensory language. Due to their hybrid nature as gradient vocal gestures that grow roots in discrete linguistic systems, ideophones provide opportunities to reframe typological questions, reconsider the role of language ideology in linguistic scholarship, and rethink the margins of language. With ideophones increasingly being brought into the fold of the language sciences, this review synthesises past theoretical insights and empirical findings in order to enable future work to build on them.
  • Doherty, M., & Klein, W. (Eds.). (1991). Übersetzung [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (84).
  • Donnelly, S., & Kidd, E. (2021). Onset neighborhood density slows lexical access in high vocabulary 30-month-olds. Cognitive Science, 45(9): e13022. doi:10.1111/cogs.13022.

    Abstract

    There is consensus that the adult lexicon exhibits lexical competition. In particular, substantial evidence demonstrates that words with more phonologically similar neighbors are recognized less efficiently than words with fewer neighbors. How and when these effects emerge in the child's lexicon is less clear. In the current paper, we build on previous research by testing whether phonological onset density slows lexical access in a large sample of 100 English-acquiring 30-month-olds. The children participated in a visual world looking-while-listening task, in which their attention was directed to one of two objects on a computer screen while their eye movements were recorded. We found moderate evidence of inhibitory effects of onset neighborhood density on lexical access and clear evidence for an interaction between onset neighborhood density and vocabulary, with larger effects of onset neighborhood density for children with larger vocabularies. Results suggest the lexicons of 30-month-olds exhibit lexical-level competition, with competition increasing with vocabulary size.
  • Donnelly, S., & Kidd, E. (2021). On the structure and source of individual differences in toddlers' comprehension of transitive sentences. Frontiers in Psychology, 12: 661022. doi:10.3389/fpsyg.2021.661022.

    Abstract

    How children learn grammar is one of the most fundamental questions in cognitive science. Two theoretical accounts, namely, the Early Abstraction and Usage-Based accounts, propose competing answers to this question. To compare the predictions of these accounts, we tested 92 24-month-old children's comprehension of transitive sentences with novel verbs (e.g., “The boy is gorping the girl!”) with the Intermodal Preferential Looking (IMPL) task. We found very little evidence that children looked to the target video at above-chance levels. Using mixed and mixture models, we tested the predictions the two accounts make about: (i) the structure of individual differences in the IMPL task and (ii) the relationship between vocabulary knowledge, lexical processing, and performance in the IMPL task. However, the results did not strongly support either of the two accounts. The implications for theories on language acquisition and for tasks developed for examining individual differences are discussed.

    Additional information

    data via OSF
  • Donnelly, S., & Kidd, E. (2021). The longitudinal relationship between conversational turn-taking and vocabulary growth in early language development. Child Development, 92(2), 609-625. doi:10.1111/cdev.13511.

    Abstract

    Children acquire language embedded within the rich social context of interaction. This paper reports on a longitudinal study investigating the developmental relationship between conversational turn‐taking and vocabulary growth in English‐acquiring children (N = 122) followed between 9 and 24 months. Daylong audio recordings obtained every 3 months provided several indices of the language environment, including the number of adult words children heard in their environment and their number of conversational turns. Vocabulary was measured independently via parental report. Growth curve analyses revealed a bidirectional relationship between conversational turns and vocabulary growth, controlling for the amount of words in children’s environments. The results are consistent with theoretical approaches that identify social interaction as a core component of early language acquisition.
  • Doumas, L. A. A., & Martin, A. E. (2021). A model for learning structured representations of similarity and relative magnitude from experience. Current Opinion in Behavioral Sciences, 37, 158-166. doi:10.1016/j.cobeha.2021.01.001.

    Abstract

    How a system represents information tightly constrains the kinds of problems it can solve. Humans routinely solve problems that appear to require abstract representations of stimulus properties and relations. How we acquire such representations has central importance in an account of human cognition. We briefly describe a theory of how a system can learn invariant responses to instances of similarity and relative magnitude, and how structured, relational representations can be learned from initially unstructured inputs. Two operations, comparing distributed representations and learning from the concomitant network dynamics in time, underpin the ability to learn these representations and to respond to invariance in the environment. Comparing analog representations of absolute magnitude produces invariant signals that carry information about similarity and relative magnitude. We describe how a system can then use this information to bootstrap learning structured (i.e., symbolic) concepts of relative magnitude from experience without assuming such representations a priori.
  • Doumas, L. A. A., & Martin, A. E. (2018). Learning structured representations from experience. Psychology of Learning and Motivation, 69, 165-203. doi:10.1016/bs.plm.2018.10.002.

    Abstract

    How a system represents information tightly constrains the kinds of problems it can solve. Humans routinely solve problems that appear to require structured representations of stimulus properties and the relations between them. An account of how we might acquire such representations has central importance for theories of human cognition. We describe how a system can learn structured relational representations from initially unstructured inputs using comparison, sensitivity to time, and a modified Hebbian learning algorithm. We summarize how the model DORA (Discovery of Relations by Analogy) instantiates this approach, which we call predicate learning, as well as how the model captures several phenomena from cognitive development, relational reasoning, and language processing in the human brain. Predicate learning offers a link between models based on formal languages and models which learn from experience and provides an existence proof for how structured representations might be learned in the first place.
  • Drew, P., Hakulinen, A., Heinemann, T., Niemi, J., & Rossi, G. (2021). Hendiadys in naturally occurring interactions: A cross-linguistic study of double verb constructions. Journal of Pragmatics, 182, 322-347. doi:10.1016/j.pragma.2021.02.008.

    Abstract

    Double verb constructions known as hendiadys have been studied primarily in literary texts and corpora of written language. Much less is known about their properties and usage in spoken language, where expressions such as ‘come and see’, ‘go and tell’, ‘sit and talk’ are particularly common, and where we can find an even richer diversity of other constructions. In this study, we investigate hendiadys in corpora of naturally occurring social interactions in four languages, Danish, English (US and UK), Finnish and Italian, with the objective of exploring whether hendiadys is used systematically in recurrent interactional and sequential circumstances, from which it is possible to identify the pragmatic function(s) that hendiadys may serve. Examining hendiadys in conversation also offers us a special window into its grammatical properties, for example when a speaker self-corrects from a non-hendiadic to a hendiadic expression, exposing the boundary between related grammatical forms and demonstrating the distinctiveness of hendiadys in context. More broadly, we demonstrate that hendiadys is systematically associated with talk about complainable matters, in environments characterised by a conflict, dissonance, or friction that is ongoing in the interaction or that is being reported by one participant to another. We also find that the utterance in which hendiadys is used is typically in a subsequent and possibly terminal position in the sequence, summarising or concluding it. Another key finding is that the complainable or conflictual element in these interactions is expressed primarily by the first conjunct of the hendiadic construction. Whilst the first conjunct is semantically subsidiary to the second, it is pragmatically the most important one. This analysis leads us to revisit a long-established asymmetry between the verbal components of hendiadys, and to bring to light the synergy of grammar and pragmatics in language usage.
  • Drijvers, L., Jensen, O., & Spaak, E. (2021). Rapid invisible frequency tagging reveals nonlinear integration of auditory and visual information. Human Brain Mapping, 42(4), 1138-1152. doi:10.1002/hbm.25282.

    Abstract

    During communication in real-life settings, the brain integrates information from auditory and visual modalities to form a unified percept of our environment. In the current magnetoencephalography (MEG) study, we used rapid invisible frequency tagging (RIFT) to generate steady-state evoked fields and investigated the integration of audiovisual information in a semantic context. We presented participants with videos of an actress uttering action verbs (auditory; tagged at 61 Hz) accompanied by a gesture (visual; tagged at 68 Hz, using a projector with a 1440 Hz refresh rate). Integration ease was manipulated by auditory factors (clear/degraded speech) and visual factors (congruent/incongruent gesture). We identified MEG spectral peaks at the individual (61/68 Hz) tagging frequencies. We furthermore observed a peak at the intermodulation frequency of the auditory and visually tagged signals (fvisual – fauditory = 7 Hz), specifically when integration was easiest (i.e., when speech was clear and accompanied by a congruent gesture). This intermodulation peak is a signature of nonlinear audiovisual integration, and was strongest in left inferior frontal gyrus and left temporal regions; areas known to be involved in speech-gesture integration. The enhanced power at the intermodulation frequency thus reflects the ease of integration and demonstrates that speech-gesture information interacts in higher-order language areas. Furthermore, we provide a proof-of-principle of the use of RIFT to study the integration of audiovisual stimuli, in relation to, for instance, semantic context.
  • Drijvers, L., & Trujillo, J. P. (2018). Commentary: Transcranial magnetic stimulation over left inferior frontal and posterior temporal cortex disrupts gesture-speech integration. Frontiers in Human Neuroscience, 12: 256. doi:10.3389/fnhum.2018.00256.

    Abstract

    A commentary on
    Transcranial Magnetic Stimulation over Left Inferior Frontal and Posterior Temporal Cortex Disrupts Gesture-Speech Integration

    by Zhao, W., Riggs, K., Schindler, I., and Holle, H. (2018). J. Neurosci. doi: 10.1523/JNEUROSCI.1748-17.2017
  • Drijvers, L., Ozyurek, A., & Jensen, O. (2018). Alpha and beta oscillations index semantic congruency between speech and gestures in clear and degraded speech. Journal of Cognitive Neuroscience, 30(8), 1086-1097. doi:10.1162/jocn_a_01301.

    Abstract

    Previous work revealed that visual semantic information conveyed by gestures can enhance degraded speech comprehension, but the mechanisms underlying these integration processes under adverse listening conditions remain poorly understood. We used MEG to investigate how oscillatory dynamics support speech–gesture integration when integration load is manipulated by auditory (e.g., speech degradation) and visual semantic (e.g., gesture congruency) factors. Participants were presented with videos of an actress uttering an action verb in clear or degraded speech, accompanied by a matching (mixing gesture + “mixing”) or mismatching (drinking gesture + “walking”) gesture. In clear speech, alpha/beta power was more suppressed in the left inferior frontal gyrus and motor and visual cortices when integration load increased in response to mismatching versus matching gestures. In degraded speech, beta power was less suppressed over posterior STS and medial temporal lobe for mismatching compared with matching gestures, showing that integration load was lowest when speech was degraded and mismatching gestures could not be integrated and disambiguate the degraded signal. Our results thus provide novel insights on how low-frequency oscillatory modulations in different parts of the cortex support the semantic audiovisual integration of gestures in clear and degraded speech: When speech is clear, the left inferior frontal gyrus and motor and visual cortices engage because higher-level semantic information increases semantic integration load. When speech is degraded, posterior STS/middle temporal gyrus and medial temporal lobe are less engaged because integration load is lowest when visual semantic information does not aid lexical retrieval and speech and gestures cannot be integrated.
  • Drijvers, L., Ozyurek, A., & Jensen, O. (2018). Hearing and seeing meaning in noise: Alpha, beta and gamma oscillations predict gestural enhancement of degraded speech comprehension. Human Brain Mapping, 39(5), 2075-2087. doi:10.1002/hbm.23987.

    Abstract

    During face-to-face communication, listeners integrate speech with gestures. The semantic information conveyed by iconic gestures (e.g., a drinking gesture) can aid speech comprehension in adverse listening conditions. In this magnetoencephalography (MEG) study, we investigated the spatiotemporal neural oscillatory activity associated with gestural enhancement of degraded speech comprehension. Participants watched videos of an actress uttering clear or degraded speech, accompanied by a gesture or not and completed a cued-recall task after watching every video. When gestures semantically disambiguated degraded speech comprehension, an alpha and beta power suppression and a gamma power increase revealed engagement and active processing in the hand-area of the motor cortex, the extended language network (LIFG/pSTS/STG/MTG), medial temporal lobe, and occipital regions. These observed low- and high-frequency oscillatory modulations in these areas support general unification, integration and lexical access processes during online language comprehension, and simulation of and increased visual attention to manual gestures over time. All individual oscillatory power modulations associated with gestural enhancement of degraded speech comprehension predicted a listener's correct disambiguation of the degraded verb after watching the videos. Our results thus go beyond the previously proposed role of oscillatory dynamics in unimodal degraded speech comprehension and provide first evidence for the role of low- and high-frequency oscillations in predicting the integration of auditory and visual information at a semantic level.

    Additional information

    hbm23987-sup-0001-suppinfo01.docx
  • Drijvers, L., & Ozyurek, A. (2018). Native language status of the listener modulates the neural integration of speech and iconic gestures in clear and adverse listening conditions. Brain and Language, 177-178, 7-17. doi:10.1016/j.bandl.2018.01.003.

    Abstract

    Native listeners neurally integrate iconic gestures with speech, which can enhance degraded speech comprehension. However, it is unknown how non-native listeners neurally integrate speech and gestures, as they might process visual semantic context differently than natives. We recorded EEG while native and highly-proficient non-native listeners watched videos of an actress uttering an action verb in clear or degraded speech, accompanied by a matching ('to drive'+driving gesture) or mismatching gesture ('to drink'+mixing gesture). Degraded speech elicited an enhanced N400 amplitude compared to clear speech in both groups, revealing an increase in neural resources needed to resolve the spoken input. A larger N400 effect was found in clear speech for non-natives compared to natives, but in degraded speech only for natives. Non-native listeners might thus process gesture more strongly than natives when speech is clear, but need more auditory cues to facilitate access to gestural semantic information when speech is degraded.
  • Dronkers, N. F., Wilkins, D. P., Van Valin Jr., R. D., Redfern, B. B., & Jaeger, J. J. (2004). Lesion analysis of the brain areas involved in language comprehension. Cognition, 92, 145-177. doi:10.1016/j.cognition.2003.11.002.

    Abstract

    The cortical regions of the brain traditionally associated with the comprehension of language are Wernicke's area and Broca's area. However, recent evidence suggests that other brain regions might also be involved in this complex process. This paper describes the opportunity to evaluate a large number of brain-injured patients to determine which lesioned brain areas might affect language comprehension. Sixty-four chronic left hemisphere stroke patients were evaluated on 11 subtests of the Curtiss–Yamada Comprehensive Language Evaluation – Receptive (CYCLE-R; Curtiss, S., & Yamada, J. (1988). Curtiss–Yamada Comprehensive Language Evaluation. Unpublished test, UCLA). Eight right hemisphere stroke patients and 15 neurologically normal older controls also participated. Patients were required to select a single line drawing from an array of three or four choices that best depicted the content of an auditorily-presented sentence. Patients' lesions obtained from structural neuroimaging were reconstructed onto templates and entered into a voxel-based lesion-symptom mapping (VLSM; Bates, E., Wilson, S., Saygin, A. P., Dick, F., Sereno, M., Knight, R. T., & Dronkers, N. F. (2003). Voxel-based lesion-symptom mapping. Nature Neuroscience, 6(5), 448–450.) analysis along with the behavioral data. VLSM is a brain–behavior mapping technique that evaluates the relationships between areas of injury and behavioral performance in all patients on a voxel-by-voxel basis, similar to the analysis of functional neuroimaging data. Results indicated that lesions to five left hemisphere brain regions affected performance on the CYCLE-R, including the posterior middle temporal gyrus and underlying white matter, the anterior superior temporal gyrus, the superior temporal sulcus and angular gyrus, mid-frontal cortex in Brodmann's area 46, and Brodmann's area 47 of the inferior frontal gyrus. 
Lesions to Broca's and Wernicke's areas were not found to significantly alter language comprehension on this particular measure. Further analysis suggested that the middle temporal gyrus may be more important for comprehension at the word level, while the other regions may play a greater role at the level of the sentence. These results are consistent with those seen in recent functional neuroimaging studies and offer complementary data in the effort to understand the brain areas underlying language comprehension.
  • Drude, S. (2008). Nasal harmony in Awetí and the Mawetí-Guarani family (Tupí). Amerindia, Revue d'Ethnolinguistique amérindienne, 32, 239-276.

    Abstract

    1. Object: Awetí and the ‘Mawetí-Guaraní’ subfamily. “Mawetí-Guaraní” is a shorter designation of a branch of the large Tupí language family, alongside eight other branches or subfamilies. This branch in turn consists internally of the languages (Sateré-) Mawé and Awetí and the large Tupí-Guaraní subfamily, and so its explicit but longish name could be “Mawé-Awetí-Tupí-Guaraní” (MTAG). This genetic grouping has already been suggested (without any specific designation) by A. D. Rodrigues (e.g., 1984/85; Rodrigues and Dietrich 1997), and, more recently, it has been confirmed by comparative studies (Corrêa da Silva 2007; Drude 2006; Meira and Drude in prep.), which also more reliably establish the most probable internal ramification, according to which Mawé separated first, whereas the differentiation between Awetí, on the one hand, and the precursor of the Tupí-Guaraní (TG) subfamily, proto-Tupí-Guaraní (pTG), on the other, would have been more recent. The intermediate branch could be named “Awetí-Tupí-Guaraní” (“Awetí-TG” or “ATG”). Figure 1 shows the internal grouping of the Tupí family according to results of the Tupí Comparative Project under D. Moore at the Museu Goeldi (2000–2006).
  • Duffield, N., Matsuo, A., & Roberts, L. (2007). Acceptable ungrammaticality in sentence matching. Second Language Research, 23(2), 155-177. doi:10.1177/0267658307076544.

    Abstract

    This paper presents results from a new set of experiments using the sentence matching paradigm (Forster, 1979; Freedman & Forster, 1985; see also Bley-Vroman & Masterson, 1989), investigating native speakers’ and L2 learners’ knowledge of constraints on clitic placement in French. Our purpose is three-fold: (i) to shed more light on the contrasts between native speakers and L2 learners observed in previous experiments, especially Duffield & White (1999) and Duffield, White, Bruhn de Garavito, Montrul & Prévost (2002); (ii) to address specific criticisms of the sentence-matching paradigm leveled by Gass (2001); and (iii) to provide a firm empirical basis for follow-up experiments with L2 learners.
  • Duhaime, M. B., Alsheimer, S., Angelova, R., & FitzPatrick, I. (2008). In defense of Max Planck [Letters to the editor]. Science, 320(5878), 872. doi:10.1126/science.320.5878.872b.
  • Duñabeitia, J. A., Crepaldi, D., Meyer, A. S., New, B., Pliatsikas, C., Smolka, E., & Brysbaert, M. (2018). MultiPic: A standardized set of 750 drawings with norms for six European languages. Quarterly Journal of Experimental Psychology, 71(4), 808-816. doi:10.1080/17470218.2017.1310261.

    Abstract

    Numerous studies in psychology, cognitive neuroscience and psycholinguistics have used pictures of objects as stimulus materials. Currently, authors engaged in cross-linguistic work or wishing to run parallel studies at multiple sites where different languages are spoken must rely on rather small sets of black-and-white or colored line drawings. These sets are increasingly experienced as being too limited. Therefore, we constructed a new set of 750 colored pictures of concrete concepts. This set, MultiPic, constitutes a new valuable tool for cognitive scientists investigating language, visual perception, memory and/or attention in monolingual or multilingual populations. Importantly, the MultiPic databank has been normed in six different European languages (British English, Spanish, French, Dutch, Italian and German). All stimuli and norms are freely available at http://www.bcbl.eu/databases/multipic

    Additional information

    http://www.bcbl.eu/databases/multipic
  • Dunn, M., Terrill, A., Reesink, G., Foley, R. A., & Levinson, S. C. (2005). Structural phylogenetics and the reconstruction of ancient language history. Science, 309(5743), 2072-2075. doi:10.1126/science.1114615.
  • Dunn, M., Levinson, S. C., Lindström, E., Reesink, G., & Terrill, A. (2008). Structural phylogeny in historical linguistics: Methodological explorations applied in Island Melanesia. Language, 84(4), 710-759. doi:10.1353/lan.0.0069.

    Abstract

    Using various methods derived from evolutionary biology, including maximum parsimony and Bayesian phylogenetic analysis, we tackle the question of the relationships among a group of Papuan isolate languages that have hitherto resisted accepted attempts at demonstration of interrelatedness. Instead of using existing vocabulary-based methods, which cannot be applied to these languages due to the paucity of shared lexemes, we created a database of STRUCTURAL FEATURES—abstract phonological and grammatical features apart from their form. The methods are first tested on the closely related Oceanic languages spoken in the same region as the Papuan languages in question. We find that using biological methods on structural features can recapitulate the results of the comparative method tree for the Oceanic languages, thus showing that structural features can be a valid way of extracting linguistic history. Application of the same methods to the otherwise unrelatable Papuan languages is therefore likely to be similarly valid. Because languages that have been in contact for protracted periods may also converge, we outline additional methods for distinguishing convergence from inherited relatedness.
  • Dunn, M., Foley, R., Levinson, S. C., Reesink, G., & Terrill, A. (2007). Statistical reasoning in the evaluation of typological diversity in Island Melanesia. Oceanic Linguistics, 46(2), 388-403.

    Abstract

    This paper builds on previous work in which we attempted to retrieve a phylogenetic signal using abstract structural features alone, as opposed to cognate sets, drawn from a sample of Island Melanesian languages, both Oceanic (Austronesian) and (non-Austronesian) Papuan (Science 2005, 309: 2072-2075). Here we clarify a number of misunderstandings of this approach, referring particularly to the critique by Mark Donohue and Simon Musgrave (in this same issue of Oceanic Linguistics), in which they fail to appreciate the statistical principles underlying computational phylogenetic methods. We also present new analyses that provide stronger evidence supporting the hypotheses put forward in our original paper: a reanalysis using Bayesian phylogenetic inference demonstrates the robustness of the data and methods, and provides a substantial improvement over the parsimony method used in our earlier paper. We further demonstrate, using the technique of spatial autocorrelation, that neither proximity nor Oceanic contact can be a major determinant of the pattern of structural variation of the Papuan languages, and thus that the phylogenetic relatedness of the Papuan languages remains a serious hypothesis.
  • Dunn, M., Margetts, A., Meira, S., & Terrill, A. (2007). Four languages from the lower end of the typology of locative predication. Linguistics, 45, 873-892. doi:10.1515/LING.2007.026.

    Abstract

    As proposed by Ameka and Levinson (this issue), locative verb systems can be classified into four types according to the number of verbs distinguished. This article addresses the lower extreme of this typology: languages which offer no choice of verb in the basic locative function (BLF). These languages have either a single locative verb, or do not use verbs at all in the basic locative construction (BLC, the construction used to encode the BLF). A close analysis is presented of the behavior of BLF predicate types in four genetically diverse languages: Chukchi (Chukotko-Kamchatkan, Russian Arctic), and Lavukaleve (Papuan isolate, Solomon Islands), which have BLC with the normal copula/existential verb for the language; Tiriyó (Cariban/Taranoan, Brazil), which has an optional copula in the BLC; and Saliba (Austronesian/Western Oceanic, Papua New Guinea), a language with a verbless clause as the BLC. The status of these languages in the typology of positional verb systems is reviewed, and other relevant typological generalizations are discussed.
  • Dunn, M., & Ross, M. (2007). Is Kazukuru really non-Austronesian? Oceanic Linguistics, 46(1), 210-231. doi:10.1353/ol.2007.0018.

    Abstract

    Kazukuru is an extinct language, originally spoken in the inland of the western part of the island of New Georgia, Solomon Islands, and attested by very limited historical sources. Kazukuru has generally been considered to be a Papuan, that is, non-Austronesian, language, mostly on the basis of its lexicon. Reevaluation of the available data suggests a high likelihood that Kazukuru was in fact an Oceanic Austronesian language. Pronominal paradigms are clearly of Austronesian origin, and many other aspects of language structure retrievable from the limited data are also congruent with regional Oceanic Austronesian typology. The extent and possible causes of Kazukuru lexical deviations from the Austronesian norm are evaluated and discussed.
  • Duprez, J., Stokkermans, M., Drijvers, L., & Cohen, M. X. (2021). Synchronization between keyboard typing and neural oscillations. Journal of Cognitive Neuroscience, 33(5), 887-901. doi:10.1162/jocn_a_01692.

    Abstract

    Rhythmic neural activity synchronizes with certain rhythmic behaviors, such as breathing, sniffing, saccades, and speech. The extent to which neural oscillations synchronize with higher-level and more complex behaviors is largely unknown. Here we investigated electrophysiological synchronization with keyboard typing, an omnipresent behavior that countless people engage in daily. Keyboard typing is rhythmic, with frequency characteristics roughly the same as neural oscillatory dynamics associated with cognitive control, notably through midfrontal theta (4-7 Hz) oscillations. We tested the hypothesis that synchronization occurs between typing and midfrontal theta, and breaks down when errors are committed. Thirty healthy participants typed words and sentences on a keyboard without visual feedback, while EEG was recorded. Typing rhythmicity was investigated by inter-keystroke interval analyses and by a kernel density estimation method. We used a multivariate spatial filtering technique to investigate frequency-specific synchronization between typing and neuronal oscillations. Our results demonstrate theta rhythmicity in typing (around 6.5 Hz) through the two different behavioral analyses. Synchronization between typing and neuronal oscillations occurred at frequencies ranging from 4 to 15 Hz, but to a larger extent for lower frequencies. However, peak synchronization frequency was idiosyncratic across subjects, therefore specific neither to theta nor to midfrontal regions, and correlated somewhat with peak typing frequency. Errors and trials associated with stronger cognitive control were not associated with changes in synchronization at any frequency. As a whole, this study shows that brain-behavior synchronization does occur during keyboard typing but is not specific to midfrontal theta.
  • Durrant, S., Jessop, A., Chang, F., Bidgood, A., Peter, M. S., Pine, J. M., & Rowland, C. F. (2021). Does the understanding of complex dynamic events at 10 months predict vocabulary development? Language and Cognition, 13(1), 66-98. doi:10.1017/langcog.2020.26.

    Abstract

    By the end of their first year, infants can interpret many different types of complex dynamic visual events, such as caused-motion, chasing, and goal-directed action. Infants of this age are also in the early stages of vocabulary development, producing their first words at around 12 months. The present work examined whether there are meaningful individual differences in infants’ ability to represent dynamic causal events in visual scenes, and whether these differences influence vocabulary development. As part of the longitudinal Language 0–5 Project, 78 10-month-old infants were tested on their ability to interpret three dynamic motion events, involving (a) caused-motion, (b) chasing behaviour, and (c) goal-directed movement. Planned analyses found that infants showed evidence of understanding the first two event types, but not the third. Looking behaviour in each task was not meaningfully related to vocabulary development, nor were there any correlations between the tasks. The results of additional exploratory analyses and simulations suggested that the infants’ understanding of each event may not be predictive of their vocabulary development, and that looking times in these tasks may not be reliably capturing any meaningful individual differences in their knowledge. This raises questions about how to convert experimental group designs to individual differences measures, and how to interpret infant looking time behaviour.
  • Eekhof, L. S., Kuijpers, M. M., Faber, M., Gao, X., Mak, M., Van den Hoven, E., & Willems, R. M. (2021). Lost in a story, detached from the words. Discourse Processes, 58(7), 595-616. doi:10.1080/0163853X.2020.1857619.

    Abstract

    This article explores the relationship between low- and high-level aspects of reading by studying the interplay between word processing, as measured with eye tracking, and narrative absorption and liking, as measured with questionnaires. Specifically, we focused on how individual differences in sensitivity to lexical word characteristics—measured as the effect of these characteristics on gaze duration—were related to narrative absorption and liking. By reanalyzing a large data set consisting of three previous eye-tracking experiments in which subjects (N = 171) read literary short stories, we replicated the well-established finding that word length, lemma frequency, position in sentence, age of acquisition, and orthographic neighborhood size of words influenced gaze duration. More importantly, we found that individual differences in the degree of sensitivity to three of these word characteristics, i.e., word length, lemma frequency, and age of acquisition, were negatively related to print exposure and to a lesser degree to narrative absorption and liking. Even though the underlying mechanisms of this relationship are still unclear, we believe the current findings underline the need to map out the interplay between, on the one hand, the technical and, on the other hand, the subjective processes of reading by studying reading behavior in more natural settings.

    Additional information

    Analysis scripts and data
  • Eekhof, L. S., Eerland, A., & Willems, R. M. (2018). Readers’ insensitivity to tense revealed: No differences in mental simulation during reading of present and past tense stories. Collabra: Psychology, 4(1): 16. doi:10.1525/collabra.121.

    Abstract

    While the importance of mental simulation during literary reading has long been recognized, we know little about the factors that determine when, what, and how much readers mentally simulate. Here we investigate the influence of a specific text characteristic, namely verb tense (present vs. past), on mental simulation during literary reading. Verbs usually denote the actions and events that take place in narratives and hence it is hypothesized that verb tense will influence the amount of mental simulation elicited in readers. Although the present tense is traditionally considered to be more “vivid”, this study is one of the first to experimentally assess this claim. We recorded eye-movements while subjects read stories in the past or present tense and collected data regarding self-reported levels of mental simulation, transportation and appreciation. We found no influence of tense on any of the offline measures. The eye-tracking data showed a slightly more complex pattern. Although we did not find a main effect of sensorimotor simulation content on reading times, we were able to link the degree to which subjects slowed down when reading simulation eliciting content to offline measures of attention and transportation, but this effect did not interact with the tense of the story. Unexpectedly, we found a main effect of tense on reading times per word, with past tense stories eliciting longer first fixation durations and gaze durations. However, we were unable to link this effect to any of the offline measures. In sum, this study suggests that tense does not play a substantial role in the process of mental simulation elicited by literary stories.

    Additional information

    Data Accessibility
  • Eekhof, L. S., Van Krieken, K., Sanders, J., & Willems, R. M. (2021). Reading minds, reading stories: Social-cognitive abilities affect the linguistic processing of narrative viewpoint. Frontiers in Psychology, 12: 698986. doi:10.3389/fpsyg.2021.698986.

    Abstract

    Although various studies have shown that narrative reading draws on social-cognitive abilities, not much is known about the precise aspects of narrative processing that engage these abilities. We hypothesized that the linguistic processing of narrative viewpoint—expressed by elements that provide access to the inner world of characters—might play an important role in engaging social-cognitive abilities. Using eye tracking, we studied the effect of lexical markers of perceptual, cognitive, and emotional viewpoint on eye movements during reading of a 5,000-word narrative. Next, we investigated how this relationship was modulated by individual differences in social-cognitive abilities. Our results show diverging patterns of eye movements for perceptual viewpoint markers on the one hand, and cognitive and emotional viewpoint markers on the other. Whereas the former are processed relatively fast compared to non-viewpoint markers, the latter are processed relatively slow. Moreover, we found that social-cognitive abilities impacted the processing of words in general, and of perceptual and cognitive viewpoint markers in particular, such that both perspective-taking abilities and self-reported perspective-taking traits facilitated the processing of these markers. All in all, our study extends earlier findings that social cognition is of importance for story reading, showing that individual differences in social-cognitive abilities are related to the linguistic processing of narrative viewpoint.

    Additional information

    supplementary material
  • Eibl-Eibesfeldt, I., & Senft, G. (1991). Trobriander (Papua-Neuguinea, Trobriand-Inseln, Kaile'una) Tänze zur Einleitung des Erntefeier-Rituals. Film E 3129. Trobriander (Papua-Neuguinea, Trobriand-Inseln, Kiriwina); Ausschnitte aus einem Erntefesttanz. Film E 3130. Publikationen zu wissenschaftlichen Filmen. Sektion Ethnologie, 17, 1-17.
  • Eichert, N., Peeters, D., & Hagoort, P. (2018). Language-driven anticipatory eye movements in virtual reality. Behavior Research Methods, 50(3), 1102-1115. doi:10.3758/s13428-017-0929-z.

    Abstract

    Predictive language processing is often studied by measuring eye movements as participants look at objects on a computer screen while they listen to spoken sentences. The use of this variant of the visual world paradigm has shown that information encountered by a listener at a spoken verb can give rise to anticipatory eye movements to a target object, which is taken to indicate that people predict upcoming words. The ecological validity of such findings remains questionable, however, because these computer experiments used two-dimensional (2D) stimuli that are mere abstractions of real world objects. Here we present a visual world paradigm study in a three-dimensional (3D) immersive virtual reality environment. Despite significant changes in the stimulus material and the different mode of stimulus presentation, language-mediated anticipatory eye movements were observed. These findings thus indicate prediction of upcoming words in language comprehension in a more naturalistic setting where natural depth cues are preserved. Moreover, the results confirm the feasibility of using eye-tracking in rich and multimodal 3D virtual environments.

    Additional information

    13428_2017_929_MOESM1_ESM.docx
  • Eisner, F., & McQueen, J. M. (2005). The specificity of perceptual learning in speech processing. Perception & Psychophysics, 67(2), 224-238.

    Abstract

    We conducted four experiments to investigate the specificity of perceptual adjustments made to unusual speech sounds. Dutch listeners heard a female talker produce an ambiguous fricative [?] (between [f] and [s]) in [f]- or [s]-biased lexical contexts. Listeners with [f]-biased exposure (e.g., [witlo?]; from witlof, “chicory”; witlos is meaningless) subsequently categorized more sounds on an [εf]–[εs] continuum as [f] than did listeners with [s]-biased exposure. This occurred when the continuum was based on the exposure talker's speech (Experiment 1), and when the same test fricatives appeared after vowels spoken by novel female and male talkers (Experiments 1 and 2). When the continuum was made entirely from a novel talker's speech, there was no exposure effect (Experiment 3) unless fricatives from that talker had been spliced into the exposure talker's speech during exposure (Experiment 4). We conclude that perceptual learning about idiosyncratic speech is applied at a segmental level and is, under these exposure conditions, talker specific.
  • Enfield, N. J., Kita, S., & De Ruiter, J. P. (2007). Primary and secondary pragmatic functions of pointing gestures. Journal of Pragmatics, 39(10), 1722-1741. doi:10.1016/j.pragma.2007.03.001.

    Abstract

    This article presents a study of a set of pointing gestures produced together with speech in a corpus of video-recorded “locality description” interviews in rural Laos. In a restricted set of the observed gestures (we did not consider gestures with special hand shapes, gestures with arc/tracing motion, or gestures directed at referents within physical reach), two basic formal types of pointing gesture are observed: B-points (large movement, full arm, eye gaze often aligned) and S-points (small movement, hand only, casual articulation). Taking the approach that speech and gesture are structurally integrated in composite utterances, we observe that these types of pointing gesture have distinct pragmatic functions at the utterance level. One type of gesture (usually “big” in form) carries primary, informationally foregrounded information (for saying “where” or “which one”). Infants perform this type of gesture long before they can talk. The second type of gesture (usually “small” in form) carries secondary, informationally backgrounded information which responds to a possible but uncertain lack of referential common ground. We propose that the packaging of the extra locational information into a casual gesture is a way of adding extra information to an utterance without it being on-record that the added information was necessary. This is motivated by the conflict between two general imperatives of communication in social interaction: a social-affiliational imperative not to provide more information than necessary (“Don’t over-tell”), and an informational imperative not to provide less information than necessary (“Don’t under-tell”).
  • Enfield, N. J. (2004). On linear segmentation and combinatorics in co-speech gesture: A symmetry-dominance construction in Lao fish trap descriptions. Semiotica, 149(1/4), 57-123. doi:10.1515/semi.2004.038.
  • Enfield, N. J. (2005). The body as a cognitive artifact in kinship representations: Hand gesture diagrams by speakers of Lao. Current Anthropology, 46(1), 51-81.

    Abstract

    Central to cultural, social, and conceptual life are cognitive artifacts, the perceptible structures which populate our world and mediate our navigation of it, complementing, enhancing, and altering available affordances for the problem-solving challenges of everyday life. Much work in this domain has concentrated on technological artifacts, especially manual tools and devices and the conceptual and communicative tools of literacy and diagrams. Recent research on hand gestures and other bodily movements which occur during speech shows that the human body serves a number of the functions of "cognitive technologies," affording the special cognitive advantages claimed to be associated exclusively with enduring (e.g., printed or drawn) diagrammatic representations. The issue is explored with reference to extensive data from video-recorded interviews with speakers of Lao in Vientiane, Laos, which show integration of verbal descriptions with complex spatial representations akin to diagrams. The study has implications both for research on cognitive artifacts (namely, that the body is a visuospatial representational resource not to be overlooked) and for research on ethnogenealogical knowledge (namely, that hand gestures reveal speakers' conceptualizations of kinship structure which are of a different nature to and not necessarily retrievable from the accompanying linguistic code).
  • Enfield, N. J. (2008). Transmission biases in linguistic epidemiology. Journal of Language Contact, 2, 295-306.

    Abstract

    To develop a nuanced account for selection within an epidemiological, population-based model of language contact and change, it is useful to consider possible conduits and filters on linguistic transmission and distribution. Richerson & Boyd (2005) describe a number of candidate biases in their evolutionary analysis of culture as a biological phenomenon (cf. Cavalli-Sforza & Feldman 1981, Sperber 1985, 1999, Boyd & Richerson 2005). This paper explores some of these biases with reference to language, exploring a set of analytic distinctions for a proper understanding of population-level linguistic processes. In putting forward these ideas, this paper echoes recent attempts to combine linguistic and biological concepts in the analysis of language diversity and change.
  • Enfield, N. J. (2007). Encoding three-participant events in the Lao clause. Linguistics, 45(3), 509-538. doi:10.1515/LING.2007.016.

    Abstract

    Any language will have a range of predicates that specify three core participants (e.g. 'put', 'show', 'give'), and will conventionally provide a range of constructional types for the expression of these three participants in a structured single-clause or single-sentence event description. This article examines the clausal encoding of three-participant events in Lao, a Tai language of Southeast Asia. There is no possibility in Lao for expression of three full arguments in the core of a single-verb clause (although it is possible to have a third argument in a noncore slot, marked as oblique with a prepositionlike element). Available alternatives include extraposing an argument using a topic-comment construction, incorporating an argument into the verb phrase, and ellipsing one or more contextually retrievable arguments. A more common strategy is verb serialization, for example, where a three-place verb (e.g. 'put') is assisted by an additional verb (typically a verb of handling such as 'carry') that provides a slot for the theme argument (e.g. the transferred object in a putting scene). The event construal encoded by this type of structure decomposes the event into a first stage in which the agent comes into control over a theme, and a second in which the agent performs a controlled action (e.g. of transfer) with respect to that theme and a goal (and/or source). The particular set of strategies that Lao offers for encoding three-participant events — notably, topic-comment strategy, ellipsis strategy, serial verb strategy — conform with (and are presumably motivated by) the general typological profile of the language. The typological features of Lao are typical for the mainland Southeast Asia area (isolating, topic-prominent, verb-serializing, widespread nominal ellipsis).
  • Enfield, N. J. (2005). Areal linguistics and mainland Southeast Asia. Annual Review of Anthropology, 34, 181-206. doi:10.1146/annurev.anthro.34.081804.120406.
  • Enfield, N. J. (2007). [Comment on 'Agency' by Paul Kockelman]. Current Anthropology, 48(3), 392-392. doi:10.1086/512998.
  • Enfield, N. J. (2005). [Comment on the book Explorations in the deictic field]. Current Anthropology, 46(2), 212-212.
  • Enfield, N. J. (2008). [Review of the book Constructions at work: The nature of generalization in language by Adele E. Goldberg]. Linguistic Typology, 12(1), 155-159. doi:10.1515/LITY.2008.034.
  • Enfield, N. J. (2007). [review of the book Ethnopragmatics: Understanding discourse in cultural context ed. by Cliff Goddard]. Intercultural Pragmatics, 4(3), 419-433. doi:10.1515/IP.2007.021.
  • Enfield, N. J. (2005). [Review of the book Laughter in interaction by Philip Glenn]. Linguistics, 43(6), 1195-1197. doi:10.1515/ling.2005.43.6.1191.
  • Enfield, N. J. (2008). It's a leopard [Review of the book The origin of speech by Peter F. MacNeilage]. Times Literary Supplement, September 12, 2008, 12-13.
  • Enfield, N. J. (2008). Linguistic categories and their utilities: The case of Lao landscape terms. Language Sciences, 30(2/3), 227-255. doi:10.1016/j.langsci.2006.12.030.

    Abstract

    Different domains of concrete referential semantics have provided testing grounds for investigation of the differential roles of perception, cognition, language, and culture in human categorization. A vast literature on semantics of biological classification, color, shape and topological relations, artifacts, and more, raises a range of theoretical and analytical debates. This article uses landscape terms to address a key debate from within research on ethnobiological classification: the opposition between so-called utilitarian and intellectualist accounts for patterns of lexicalization of the natural world [Berlin, B., 1992. Ethnobiological Classification: Principles of Categorization of Plants and Animals in Traditional Societies. Princeton University Press, Princeton, NJ]. ‘Utilitarianists’ argue that lexical categories reflect practical consequences of knowing certain category distinctions, related to cultural practice and functional affordances of referents. ‘Intellectualists’ argue that lexical categories reflect people’s innate interest in the natural world, combined with the perceptual discontinuities supplied by ‘Nature’s Plan’. The debate is generalizable to other domains, including landscape terminology, the topic of this special issue. This article brings landscape terminology into this larger debate, arguing in favor of a utilitarian account of linguistic categories in the domain of landscape, but proposing a significant revision to the concept of utility in linguistic categorization. The proposal is that for linguistic categorization, what is at issue is not (primarily) the utility of the referent (e.g. a river), but the utility of the word (e.g. the English word river). By considering how landscape terms are actually used in conversation, we see that they are deployed in communicative contexts which fit a rich, ‘functionalist’ semantics. A landscape term is not employed for mere referring, but functions to bring particular associated ideas into social discourse. In turn, language use reveals a range of evidence for the semantic content of any such term, of utility both to the language learner and to the semanticist. This kind of evidence can be argued to underlie the acquisition of semantic categories in language learning. The arguments are illustrated with examples from Lao, a Tai language of mainland Southeast Asia.
  • Enfield, N. J. (2008). Language as shaped by social interaction [Commentary on Christiansen and Chater]. Behavioral and Brain Sciences, 31(5), 519-520. doi:10.1017/S0140525X08005104.

    Abstract

    Language is shaped by its environment, which includes not only the brain, but also the public context in which speech acts are effected. To fully account for why language has the shape it has, we need to examine the constraints imposed by language use as a sequentially organized joint activity, and as the very conduit for linguistic diffusion and change.
  • Enfield, N. J. (2007). Lao separation verbs and the logic of linguistic event categorization. Cognitive Linguistics, 18(2), 287-296. doi:10.1515/COG.2007.016.

    Abstract

    While there are infinite conceivable events of material separation, those actually encoded in the conventions of a given language's verb semantics number only a few. Furthermore, there appear to be crosslinguistic parallels in the native verbal analysis of this conceptual domain. What are the operative distinctions, and why these? This article analyses a key subset of the bivalent (transitive) verbs of cutting and breaking in Lao. I present a decompositional analysis of the verbs glossed 'cut (off)', 'cut.into.with.placed.blade', 'cut.into.with.moving.blade', and 'snap', pursuing the idea that the attested combinations of sub-events have a natural logic to them. Consideration of the nature of linguistic categories, as distinct from categories in general, suggests that the attested distinctions must have ethnographic and social interactional significance, raising new lines of research for cognitive semantics.
  • Enfield, N. J. (2004). Nominal classification in Lao: A sketch. Sprachtypologie und Universalienforschung, 57(2/3), 117-143.
  • Enfield, N. J. (2005). Review of the book [The Handbook of Historical Linguistics, edited by Brian D. Joseph and Richard D. Janda]. Linguistics, 43(6), 1191-1197. doi:10.1515/ling.2005.43.6.1191.
  • Ergin, R., Meir, I., Ilkbasaran, D., Padden, C., & Jackendoff, R. (2018). The Development of Argument Structure in Central Taurus Sign Language. Sign Language Studies, 18(4), 612-639. doi:10.1353/sls.2018.0018.

    Abstract

    One of the fundamental issues for a language is its capacity to express argument structure unambiguously. This study presents evidence for the emergence and the incremental development of these basic mechanisms in a newly developing language, Central Taurus Sign Language. Our analyses identify universal patterns in both the emergence and development of these mechanisms and in language-specific trajectories.
  • Ernestus, M., Van Mulken, M., & Baayen, R. H. (2007). Ridders en heiligen in tijd en ruimte: Moderne stylometrische technieken toegepast op Oud-Franse teksten. Taal en Tongval, 58, 1-83.

    Abstract

    This article shows that Old-French literary texts differ systematically in their relative frequencies of syntactic constructions. These frequencies reflect differences in register (poetry versus prose), region (Picardy, Champagne, and Eastern France), time period (until 1250, 1251 – 1300, 1301 – 1350), and genre (hagiography, romance of chivalry, or other).
  • Ernestus, M., & Baayen, R. H. (2007). Paradigmatic effects in auditory word recognition: The case of alternating voice in Dutch. Language and Cognitive Processes, 22(1), 1-24. doi:10.1080/01690960500268303.

    Abstract

    Two lexical decision experiments addressed the role of paradigmatic effects in auditory word recognition. Experiment 1 showed that listeners classified a form with an incorrectly voiced final obstruent more readily as a word if the obstruent is realised as voiced in other forms of that word's morphological paradigm. Moreover, if such was the case, the exact probability of paradigmatic voicing emerged as a significant predictor of the response latencies. A greater probability of voicing correlated with longer response latencies for words correctly realised with voiceless final obstruents. A similar effect of this probability was observed in Experiment 2 for words with completely voiceless or weakly voiced (incompletely neutralised) final obstruents. These data demonstrate the relevance of paradigmatically related complex words for the processing of morphologically simple words in auditory word recognition.
  • Ernestus, M., Mak, W. M., & Baayen, R. H. (2005). Waar 't kofschip strandt. Levende Talen Magazine, 92, 9-11.
  • Ernestus, M., & Neijt, A. (2008). Word length and the location of primary word stress in Dutch, German, and English. Linguistics, 46(3), 507-540. doi:10.1515/LING.2008.017.

    Abstract

    This study addresses the extent to which the location of primary stress in Dutch, German, and English monomorphemic words is affected by the syllables preceding the three final syllables. We present analyses of the monomorphemic words in the CELEX lexical database, which showed that penultimate primary stress is less frequent in Dutch and English trisyllabic than quadrisyllabic words. In addition, we discuss paper-and-pencil experiments in which native speakers assigned primary stress to pseudowords. These experiments provided evidence that in all three languages penultimate stress is more likely in quadrisyllabic than in trisyllabic words. We explain this length effect with the preferences in these languages for word-initial stress and for alternating patterns of stressed and unstressed syllables. The experimental data also showed important intra- and interspeaker variation, and they thus form a challenging test case for theories of language variation.
  • Ernestus, M., & Mak, W. M. (2004). Distinctive phonological features differ in relevance for both spoken and written word recognition. Brain and Language, 90(1-3), 378-392. doi:10.1016/S0093-934X(03)00449-8.

    Abstract

    This paper discusses four experiments on Dutch which show that distinctive phonological features differ in their relevance for word recognition. The relevance of a feature for word recognition depends on its phonological stability, that is, the extent to which that feature is generally realized in accordance with its lexical specification in the relevant word position. If one feature value is uninformative, all values of that feature are less relevant for word recognition, with the least informative feature being the least relevant. Features differ in their relevance both in spoken and written word recognition, though the differences are more pronounced in auditory lexical decision than in self-paced reading.
  • Ernestus, M., & Mak, W. M. (2005). Analogical effects in reading Dutch verb forms. Memory & Cognition, 33(7), 1160-1173.

    Abstract

    Previous research has shown that the production of morphologically complex words in isolation is affected by the properties of morphologically, phonologically, or semantically similar words stored in the mental lexicon. We report five experiments with Dutch speakers that show that reading an inflectional word form in its linguistic context is also affected by analogical sets of formally similar words. Using the self-paced reading technique, we show in Experiments 1-3 that an incorrectly spelled suffix delays readers less if the incorrect spelling is in line with the spelling of verbal suffixes in other inflectional forms of the same verb. In Experiments 4 and 5, our use of the self-paced reading technique shows that formally similar words with different stems affect the reading of incorrect suffixal allomorphs on a given stem. These intra- and interparadigmatic effects in reading may be due to online processes or to the storage of incorrect forms resulting from analogical effects in production.
  • Ernestus, M., & Baayen, R. H. (2004). Analogical effects in regular past tense production in Dutch. Linguistics, 42(5), 873-903. doi:10.1515/ling.2004.031.

    Abstract

    This study addresses the question to what extent the production of regular past tense forms in Dutch is affected by analogical processes. We report an experiment in which native speakers of Dutch listened to existing regular verbs over headphones, and had to indicate which of the past tense allomorphs, te or de, was appropriate for these verbs. According to generative analyses, the choice between the two suffixes is completely regular and governed by the underlying [voice]-specification of the stem-final segment. In this approach, no analogical effects are expected. In connectionist and analogical approaches, by contrast, the phonological similarity structure in the lexicon is expected to affect lexical processing. Our experimental results support the latter approach: all participants created more nonstandard past tense forms, produced more inconsistency errors, and responded more slowly for verbs with stronger analogical support for the nonstandard form.
  • Ernestus, M., & Baayen, R. H. (2004). Kuchde, tobte, en turfte: Lekkage in 't kofschip. Onze Taal, 73(12), 360-361.
  • Escudero, P., Hayes-Harb, R., & Mitterer, H. (2008). Novel second-language words and asymmetric lexical access. Journal of Phonetics, 36(2), 345-360. doi:10.1016/j.wocn.2007.11.002.

    Abstract

    The lexical and phonetic mapping of auditorily confusable L2 nonwords was examined by teaching L2 learners novel words and by later examining their word recognition using an eye-tracking paradigm. During word learning, two groups of highly proficient Dutch learners of English learned 20 English nonwords, of which 10 contained the English contrast /e/-/æ/ (a confusable contrast for native Dutch speakers). One group of subjects learned the words by matching their auditory forms to pictured meanings, while a second group additionally saw the spelled forms of the words. We found that the group who received only auditory forms confused words containing /æ/ and /e/ symmetrically, i.e., both /æ/ and /e/ auditory tokens triggered looks to pictures containing both /æ/ and /e/. In contrast, the group who also had access to spelled forms showed the same asymmetric word recognition pattern found by previous studies, i.e., they only looked at pictures of words containing /e/ when presented with /e/ target tokens, but looked at pictures of words containing both /æ/ and /e/ when presented with /æ/ target tokens. The results demonstrate that L2 learners can form lexical contrasts for auditorily confusable novel L2 words. However, and most importantly, this study suggests that explicit information about the contrastive nature of two new sounds may be needed to build separate lexical representations for similar-sounding L2 words.
  • Essegbey, J., & Ameka, F. K. (2007). "Cut" and "break" verbs in Gbe and Sranan. Journal of Pidgin and Creole Languages, 22(1), 37-55. doi:10.1075/jpcl.22.1.04ess.

    Abstract

    This paper compares “cut” and “break” verbs in four variants of Gbe, namely Anfoe, Anlo, Fon and Ayizo, with those of Sranan. “Cut” verbs are change-of-state verbs that co-lexicalize the type of action that brings about a change, the type of instrument or instrument part, and the manner in which a change occurs. By contrast, “break” verbs co-lexicalize either the type of object or the type of change. It has been hypothesized that “cut” verbs are unergative while “break” verbs are unaccusative. For example, “break” verbs participate in causative alternation constructions but “cut” verbs do not. We show that although there are some differences in the meanings of “cut” and “break” verbs across the Gbe languages, significant generalizations can be made with regard to their lexicalization patterns. By contrast, the meanings of “cut” and “break” verbs in Sranan are closer to those of their etymons in English and Dutch. However, despite the differences in the meanings of “cut” and “break” verbs between the Gbe languages and Sranan, the syntax of the verbs in Sranan is similar to that of the Eastern Gbe variants, namely Fon and Ayizo. We look at the implications of our findings for the relexification hypothesis. (copyright Benjamins)
  • Estruch, S. B., Graham, S. A., Quevedo, M., Vino, A., Dekkers, D. H. W., Deriziotis, P., Sollis, E., Demmers, J., Poot, R. A., & Fisher, S. E. (2018). Proteomic analysis of FOXP proteins reveals interactions between cortical transcription factors associated with neurodevelopmental disorders. Human Molecular Genetics, 27(7), 1212-1227. doi:10.1093/hmg/ddy035.

    Abstract

    FOXP transcription factors play important roles in neurodevelopment, but little is known about how their transcriptional activity is regulated. FOXP proteins cooperatively regulate gene expression by forming homo- and hetero-dimers with each other. Physical associations with other transcription factors might also modulate the functions of FOXP proteins. However, few FOXP-interacting transcription factors have been identified so far. Therefore, we sought to discover additional transcription factors that interact with the brain-expressed FOXP proteins, FOXP1, FOXP2 and FOXP4, through affinity-purifications of protein complexes followed by mass spectrometry. We identified seven novel FOXP-interacting transcription factors (NR2F1, NR2F2, SATB1, SATB2, SOX5, YY1 and ZMYM2), five of which have well-established roles in cortical development. Accordingly, we found that these transcription factors are co-expressed with FoxP2 in the deep layers of the cerebral cortex and also in the Purkinje cells of the cerebellum, suggesting that they may cooperate with the FoxPs to regulate neural gene expression in vivo. Moreover, we demonstrated that etiological mutations of FOXP1 and FOXP2, known to cause neurodevelopmental disorders, severely disrupted the interactions with FOXP-interacting transcription factors. Additionally, we pinpointed specific regions within the FOXP2 sequence involved in mediating these interactions. Thus, by expanding the FOXP interactome we have uncovered part of a broader neural transcription factor network involved in cortical development, providing novel molecular insights into the transcriptional architecture underlying brain development and neurodevelopmental disorders.
  • Evans, N., Levinson, S. C., & Sterelny, K. (2021). Kinship revisited. Biological Theory, 16, 123-126. doi:10.1007/s13752-021-00384-9.
  • Evans, N., Levinson, S. C., & Sterelny, K. (Eds.). (2021). Thematic issue on evolution of kinship systems [Special Issue]. Biological Theory, 16.
  • Evans, N., Bergqvist, H., & San Roque, L. (2018). The grammar of engagement I: Framework and initial exemplification. Language and Cognition, 10, 110-140. doi:10.1017/langcog.2017.21.

    Abstract

    Human language offers rich ways to track, compare, and engage the attentional and epistemic states of interlocutors. While this task is central to everyday communication, our knowledge of the cross-linguistic grammatical means that target such intersubjective coordination has remained basic. In two serialised papers, we introduce the term ‘engagement’ to refer to grammaticalised means for encoding the relative mental directedness of speaker and addressee towards an entity or state of affairs, and describe examples of engagement systems from around the world. Engagement systems express the speaker’s assumptions about the degree to which their attention or knowledge is shared (or not shared) by the addressee. Engagement categories can operate at the level of entities in the here-and-now (deixis), in the unfolding discourse (definiteness vs indefiniteness), entire event-depicting propositions (through markers with clausal scope), and even metapropositions (potentially scoping over evidential values). In this first paper, we introduce engagement and situate it with respect to existing work on intersubjectivity in language. We then explore the key role of deixis in coordinating attention and expressing engagement, moving through increasingly intercognitive deictic systems from those that focus on the location of the speaker, to those that encode the attentional state of the addressee.
  • Evans, N., Bergqvist, H., & San Roque, L. (2018). The grammar of engagement II: Typology and diachrony. Language and Cognition, 10(1), 141-170. doi:10.1017/langcog.2017.22.

    Abstract

    Engagement systems encode the relative accessibility of an entity or state of affairs to the speaker and addressee, and are thus underpinned by our social cognitive capacities. In our first foray into engagement (Part 1), we focused on specialised semantic contrasts as found in entity-level deictic systems, tailored to the primal scenario for establishing joint attention. This second paper broadens out to an exploration of engagement at the level of events and even metapropositions, and comments on how such systems may evolve. The languages Andoke and Kogi demonstrate what a canonical system of engagement with clausal scope looks like, symmetrically assigning ‘knowing’ and ‘unknowing’ values to speaker and addressee. Engagement is also found cross-cutting other epistemic categories such as evidentiality, for example where a complex assessment of relative speaker and addressee awareness concerns the source of information rather than the proposition itself. Data from the language Abui reveal that one way in which engagement systems can develop is by upscoping demonstratives, which normally denote entities, to apply at the level of events. We conclude by stressing the need for studies that focus on what difference it makes, in terms of communicative behaviour, for intersubjective coordination to be managed by engagement systems as opposed to other, non-grammaticalised means.
  • Eviatar, Z., & Huettig, F. (Eds.). (2021). Literacy and writing systems [Special Issue]. Journal of Cultural Cognitive Science.
  • Eviatar, Z., & Huettig, F. (2021). The literate mind. Journal of Cultural Cognitive Science, 5, 81-84. doi:10.1007/s41809-021-00086-5.
  • Fairs, A., Bögels, S., & Meyer, A. S. (2018). Dual-tasking with simple linguistic tasks: Evidence for serial processing. Acta Psychologica, 191, 131-148. doi:10.1016/j.actpsy.2018.09.006.

    Abstract

    In contrast to the large amount of dual-task research investigating the coordination of a linguistic and a nonlinguistic task, little research has investigated how two linguistic tasks are coordinated. However, such research would greatly contribute to our understanding of how interlocutors combine speech planning and listening in conversation. In three dual-task experiments we studied how participants coordinated the processing of an auditory stimulus (S1), which was either a syllable or a tone, with selecting a name for a picture (S2). Two SOAs, of 0 ms and 1000 ms, were used. To vary the time required for lexical selection and to determine when lexical selection took place, the pictures were presented with categorically related or unrelated distractor words. In Experiment 1 participants responded overtly to both stimuli. In Experiments 2 and 3, S1 was not responded to overtly, but determined how to respond to S2, by naming the picture or reading the distractor aloud. Experiment 1 yielded additive effects of SOA and distractor type on the picture naming latencies. The presence of semantic interference at both SOAs indicated that lexical selection occurred after response selection for S1. With respect to the coordination of S1 and S2 processing, Experiments 2 and 3 yielded inconclusive results. In all experiments, syllables interfered more with picture naming than tones. This is likely because the syllables activated phonological representations also implicated in picture naming. The theoretical and methodological implications of the findings are discussed.

    Additional information

    1-s2.0-S0001691817305589-mmc1.pdf
  • Falcaro, M., Pickles, A., Newbury, D. F., Addis, L., Banfield, E., Fisher, S. E., Monaco, A. P., Simkin, Z., Conti-Ramsden, G., & Consortium (2008). Genetic and phenotypic effects of phonological short-term memory and grammatical morphology in specific language impairment. Genes, Brain and Behavior, 7, 393-402. doi:10.1111/j.1601-183X.2007.00364.x.

    Abstract

    Deficits in phonological short-term memory and aspects of verb grammar morphology have been proposed as phenotypic markers of specific language impairment (SLI) with the suggestion that these traits are likely to be under different genetic influences. This investigation in 300 first-degree relatives of 93 probands with SLI examined familial aggregation and genetic linkage of two measures thought to index these two traits, non-word repetition and tense marking. In particular, the involvement of chromosomes 16q and 19q was examined as previous studies found these two regions to be related to SLI. Results showed a strong association between relatives' and probands' scores on non-word repetition. In contrast, no association was found for tense marking when examined as a continuous measure. However, significant familial aggregation was found when tense marking was treated as a binary measure with a cut-off point of -1.5 SD, suggestive of the possibility that qualitative distinctions in the trait may be familial while quantitative variability may be more a consequence of non-familial factors. Linkage analyses supported previous findings of the SLI Consortium of linkage to chromosome 16q for phonological short-term memory and to chromosome 19q for expressive language. In addition, we report new findings that relate to the past tense phenotype. For the continuous measure, linkage was found on both chromosomes, but evidence was stronger on chromosome 19. For the binary measure, linkage was observed on chromosome 19 but not on chromosome 16.
  • Favier, S., & Huettig, F. (2021). Are there core and peripheral syntactic structures? Experimental evidence from Dutch native speakers with varying literacy levels. Lingua, 251: 102991. doi:10.1016/j.lingua.2020.102991.

    Abstract

    Some theorists posit the existence of a ‘core’ grammar that virtually all native speakers acquire, and a ‘peripheral’ grammar that many do not. We investigated the viability of such a categorical distinction in the Dutch language. We first consulted linguists’ intuitions as to the ‘core’ or ‘peripheral’ status of a wide range of grammatical structures. We then tested a selection of core- and peripheral-rated structures on naïve participants with varying levels of literacy experience, using grammaticality judgment as a proxy for receptive knowledge. Overall, participants demonstrated better knowledge of ‘core’ structures than ‘peripheral’ structures, but the considerable variability within these categories was strongly suggestive of a continuum rather than a categorical distinction between them. We also hypothesised that individual differences in the knowledge of core and peripheral structures would reflect participants’ literacy experience. This was supported only by a small trend in our data. The results fit best with the notion that more frequent syntactic structures are mastered by more people than infrequent ones and challenge the received sense of a categorical core-periphery distinction.
  • Favier, S., Meyer, A. S., & Huettig, F. (2021). Literacy can enhance syntactic prediction in spoken language processing. Journal of Experimental Psychology: General, 150(10), 2167-2174. doi:10.1037/xge0001042.

    Abstract

    Language comprehenders can use syntactic cues to generate predictions online about upcoming language. Previous research with reading-impaired adults and healthy, low-proficiency adult and child learners suggests that reading skills are related to prediction in spoken language comprehension. Here we investigated whether differences in literacy are also related to predictive spoken language processing in non-reading-impaired proficient adult readers with varying levels of literacy experience. Using the visual world paradigm enabled us to measure prediction based on syntactic cues in the spoken sentence, prior to the (predicted) target word. Literacy experience was found to be the strongest predictor of target anticipation, independent of general cognitive abilities. These findings suggest that a) experience with written language can enhance syntactic prediction of spoken language in normal adult language users, and b) processing skills can be transferred to related tasks (from reading to listening) if the domains involve similar processes (e.g., predictive dependencies) and representations (e.g., syntactic).

    Additional information

    Online supplementary material
  • Favier, S., & Huettig, F. (2021). Long-term written language experience affects grammaticality judgments and usage but not priming of spoken sentences. Quarterly Journal of Experimental Psychology, 74(8), 1378-1395. doi:10.1177/17470218211005228.

    Abstract

    ‘Book language’ offers a richer linguistic experience than typical conversational speech in terms of its syntactic properties. Here, we investigated the role of long-term syntactic experience on syntactic knowledge and processing. In a pre-registered study with 161 adult native Dutch speakers with varying levels of literacy, we assessed the contribution of individual differences in written language experience to offline and online syntactic processes. Offline syntactic knowledge was assessed as accuracy in an auditory grammaticality judgment task in which we tested violations of four Dutch grammatical norms. Online syntactic processing was indexed by syntactic priming of the Dutch dative alternation, using a comprehension-to-production priming paradigm with auditory presentation. Controlling for the contribution of non-verbal IQ, verbal working memory, and processing speed, we observed a robust effect of literacy experience on the detection of grammatical norm violations in spoken sentences, suggesting that exposure to the syntactic complexity and diversity of written language has specific benefits for general (modality-independent) syntactic knowledge. We replicated previous results by finding robust comprehension-to-production structural priming, both with and without lexical overlap between prime and target. Although literacy experience affected the usage of syntactic alternates in our large sample, it did not modulate their priming. We conclude that amount of experience with written language increases explicit awareness of grammatical norm violations and changes the usage of (PO vs. DO) dative spoken sentences but has no detectable effect on their implicit syntactic priming in proficient language users. These findings constrain theories about the effect of long-term experience on syntactic processing.
  • Felemban, D., Verdonschot, R. G., Iwamoto, Y., Uchiyama, Y., Kakimoto, N., Kreiborg, S., & Murakami, S. (2018). A quantitative experimental phantom study on MRI image uniformity. Dentomaxillofacial Radiology, 47(6): 20180077. doi:10.1259/dmfr.20180077.

    Abstract

    Objectives: Our goal was to assess MR image uniformity by investigating aspects influencing said uniformity via a method laid out by the National Electrical Manufacturers Association (NEMA).
    Methods: Six metallic materials embedded in a glass phantom (i.e. Au, Ag, Al, Au-Ag-Pd alloy, Ti and Co-Cr alloy) were scanned, as well as a reference image. Sequences included spin echo (SE) and gradient echo (GRE) scanned in three planes (i.e. axial, coronal, and sagittal). Moreover, three surface coil types (i.e. head and neck, brain, and temporomandibular joint coils) and two image correction methods (i.e. surface coil intensity correction or SCIC, phased array uniformity enhancement or PURE) were employed to evaluate their effectiveness on image uniformity. Image uniformity was assessed using the NEMA peak-deviation non-uniformity method.
    Results: Results showed that temporomandibular joint coils elicited the least uniform image and brain coils outperformed head and neck coils when metallic materials were present. Additionally, when metallic materials were present, spin echo outperformed gradient echo, especially for Co-Cr (particularly in the axial plane). Furthermore, both SCIC and PURE improved image uniformity compared to uncorrected images, and SCIC slightly surpassed PURE when metallic materials were present. Lastly, Co-Cr elicited the least uniform image while other metallic materials generally showed similar patterns (i.e. no significant deviation from images without metallic materials).
    Conclusions: Overall, a quantitative understanding of the factors influencing MR image uniformity (e.g. coil type, imaging method, metal susceptibility, and post-hoc correction method) is advantageous to optimize image quality, assists clinical interpretation, and may result in improved medical and dental care.
  • Felker, E. R., Broersma, M., & Ernestus, M. (2021). The role of corrective feedback and lexical guidance in perceptual learning of a novel L2 accent in dialogue. Applied Psycholinguistics, 42, 1029-1055. doi:10.1017/S0142716421000205.

    Abstract

    Perceptual learning of novel accents is a critical skill for second-language speech perception, but little is known about the mechanisms that facilitate perceptual learning in communicative contexts. To study perceptual learning in an interactive dialogue setting while maintaining experimental control of the phonetic input, we employed an innovative experimental method incorporating prerecorded speech into a naturalistic conversation. Using both computer-based and face-to-face dialogue settings, we investigated the effect of two types of learning mechanisms in interaction: explicit corrective feedback and implicit lexical guidance. Dutch participants played an information-gap game featuring minimal pairs with an accented English speaker whose /ε/ pronunciations were shifted to /ɪ/. Evidence for the vowel shift came either from corrective feedback about participants’ perceptual mistakes or from onscreen lexical information that constrained their interpretation of the interlocutor’s words. Corrective feedback explicitly contrasting the minimal pairs was more effective than generic feedback. Additionally, both receiving lexical guidance and exhibiting more uptake for the vowel shift improved listeners’ subsequent online processing of accented words. Comparable learning effects were found in both the computer-based and face-to-face interactions, showing that our results can be generalized to a more naturalistic learning context than traditional computer-based perception training programs.
  • Felker, E. R., Troncoso Ruiz, A., Ernestus, M., & Broersma, M. (2018). The ventriloquist paradigm: Studying speech processing in conversation with experimental control over phonetic input. The Journal of the Acoustical Society of America, 144(4), EL304-EL309. doi:10.1121/1.5063809.

    Abstract

    This article presents the ventriloquist paradigm, an innovative method for studying speech processing in dialogue whereby participants interact face-to-face with a confederate who, unbeknownst to them, communicates by playing pre-recorded speech. Results show that the paradigm convinces more participants that the speech is live than a setup without the face-to-face element, and it elicits more interactive conversation than a setup in which participants believe their partner is a computer. By reconciling the ecological validity of a conversational context with full experimental control over phonetic exposure, the paradigm offers a wealth of new possibilities for studying speech processing in interaction.
  • Felser, C., & Roberts, L. (2007). Processing wh-dependencies in a second language: A cross-modal priming study. Second Language Research, 23(1), 9-36. doi:10.1177/0267658307071600.

    Abstract

    This study investigates the real-time processing of wh-dependencies by advanced Greek-speaking learners of English using a cross-modal picture priming task. Participants were asked to respond to different types of picture target presented either at structurally defined gap positions, or at pre-gap control positions, while listening to sentences containing indirect-object relative clauses. Our results indicate that the learners processed the experimental sentences differently from both adult native speakers of English and monolingual English-speaking children. Contrary to what has been found for native speakers, the learners' response pattern was not influenced by individual working memory differences. Adult second language learners differed from native speakers with a relatively high reading or listening span in that they did not show any evidence of structurally based antecedent reactivation at the point of the indirect object gap. They also differed from low-span native speakers, however, in that they showed evidence of maintained antecedent activation during the processing of the experimental sentences. Whereas the localized priming effect observed in the high-span controls is indicative of trace-based antecedent reactivation in native sentence processing, the results from the Greek-speaking learners support the hypothesis that the mental representations built during non-native language processing lack abstract linguistic structure such as movement traces.
  • Fernandes, T., Arunkumar, M., & Huettig, F. (2021). The role of the written script in shaping mirror-image discrimination: Evidence from illiterate, Tamil literate, and Tamil-Latin-alphabet bi-literate adults. Cognition, 206: 104493. doi:10.1016/j.cognition.2020.104493.

    Abstract

Learning a script with mirrored graphs (e.g., d ≠ b) requires overcoming the evolutionarily old perceptual tendency to process mirror images as equivalent. Thus, breaking mirror invariance offers an important tool for understanding cultural re-shaping of evolutionarily ancient cognitive mechanisms. Here we investigated the role of script (i.e., presence vs. absence of mirrored graphs: Latin alphabet vs. Tamil) by revisiting mirror-image processing by illiterate, Tamil monoliterate, and Tamil-Latin-alphabet bi-literate adults. Participants performed two same-different tasks (one orientation-based, another shape-based) on Latin-alphabet letters. Tamil monoliterates were significantly better than illiterates and showed good explicit mirror-image discrimination. However, only bi-literate adults fully broke mirror invariance: slower shape-based judgments for mirrored than identical pairs and a reduced disadvantage in orientation-based over shape-based judgments of mirrored pairs. These findings suggest that learning a script with mirrored graphs is the strongest force for breaking mirror invariance.

    Additional information

    supplementary material
  • Ferrari, A., & Noppeney, U. (2021). Attention controls multisensory perception via two distinct mechanisms at different levels of the cortical hierarchy. PLoS Biology, 19(11): e3001465. doi:10.1371/journal.pbio.3001465.

    Abstract

    To form a percept of the multisensory world, the brain needs to integrate signals from common sources weighted by their reliabilities and segregate those from independent sources. Previously, we have shown that anterior parietal cortices combine sensory signals into representations that take into account the signals’ causal structure (i.e., common versus independent sources) and their sensory reliabilities as predicted by Bayesian causal inference. The current study asks to what extent and how attentional mechanisms can actively control how sensory signals are combined for perceptual inference. In a pre- and postcueing paradigm, we presented observers with audiovisual signals at variable spatial disparities. Observers were precued to attend to auditory or visual modalities prior to stimulus presentation and postcued to report their perceived auditory or visual location. Combining psychophysics, functional magnetic resonance imaging (fMRI), and Bayesian modelling, we demonstrate that the brain moulds multisensory inference via two distinct mechanisms. Prestimulus attention to vision enhances the reliability and influence of visual inputs on spatial representations in visual and posterior parietal cortices. Poststimulus report determines how parietal cortices flexibly combine sensory estimates into spatial representations consistent with Bayesian causal inference. Our results show that distinct neural mechanisms control how signals are combined for perceptual inference at different levels of the cortical hierarchy.

    Additional information

    supporting information
  • Fink, B., Bläsing, B., Ravignani, A., & Shackelford, T. K. (2021). Evolution and functions of human dance. Evolution and Human Behavior, 42(4), 351-360. doi:10.1016/j.evolhumbehav.2021.01.003.

    Abstract

    Dance is ubiquitous among humans and has received attention from several disciplines. Ethnographic documentation suggests that dance has a signaling function in social interaction. It can influence mate preferences and facilitate social bonds. Research has provided insights into the proximate mechanisms of dance, individually or when dancing with partners or in groups. Here, we review dance research from an evolutionary perspective. We propose that human dance evolved from ordinary (non-communicative) movements to communicate socially relevant information accurately. The need for accurate social signaling may have accompanied increases in group size and population density. Because of its complexity in production and display, dance may have evolved as a vehicle for expressing social and cultural information. Mating-related qualities and motives may have been the predominant information derived from individual dance movements, whereas group dance offers the opportunity for the exchange of socially relevant content, for coordinating actions among group members, for signaling coalitional strength, and for stabilizing group structures. We conclude that, despite the cultural diversity in dance movements and contexts, the primary communicative functions of dance may be the same across societies.
  • Fisher, N., Hadley, L., Corps, R. E., & Pickering, M. (2021). The effects of dual-task interference in predicting turn-ends in speech and music. Brain Research, 1768: 147571. doi:10.1016/j.brainres.2021.147571.

    Abstract

    Determining when a partner’s spoken or musical turn will end requires well-honed predictive abilities. Evidence suggests that our motor systems are activated during perception of both speech and music, and it has been argued that motor simulation is used to predict turn-ends across domains. Here we used a dual-task interference paradigm to investigate whether motor simulation of our partner’s action underlies our ability to make accurate turn-end predictions in speech and in music. Furthermore, we explored how specific this simulation is to the action being predicted. We conducted two experiments, one investigating speech turn-ends, and one investigating music turn-ends. In each, 34 proficient pianists predicted turn-endings while (1) passively listening, (2) producing an effector-specific motor activity (mouth/hand movement), or (3) producing a task- and effector-specific motor activity (mouthing words/fingering a piano melody). In the speech experiment, any movement during speech perception disrupted predictions of spoken turn-ends, whether the movement was task-specific or not. In the music experiment, only task-specific movement (i.e., fingering a piano melody) disrupted predictions of musical turn-ends. These findings support the use of motor simulation to make turn-end predictions in both speech and music but suggest that the specificity of this simulation may differ between domains.
  • Fisher, S. E. (2005). Dissection of molecular mechanisms underlying speech and language disorders. Applied Psycholinguistics, 26, 111-128. doi:10.1017/S0142716405050095.

    Abstract

    Developmental disorders affecting speech and language are highly heritable, but very little is currently understood about the neuromolecular mechanisms that underlie these traits. Integration of data from diverse research areas, including linguistics, neuropsychology, neuroimaging, genetics, molecular neuroscience, developmental biology, and evolutionary anthropology, is becoming essential for unraveling the relevant pathways. Recent studies of the FOXP2 gene provide a case in point. Mutation of FOXP2 causes a rare form of speech and language disorder, and the gene appears to be a crucial regulator of embryonic development for several tissues. Molecular investigations of the central nervous system indicate that the gene may be involved in establishing and maintaining connectivity of corticostriatal and olivocerebellar circuits in mammals. Notably, it has been shown that FOXP2 was subject to positive selection in recent human evolution. Consideration of findings from multiple levels of analysis demonstrates that FOXP2 cannot be characterized as “the gene for speech,” but rather as one critical piece of a complex puzzle. This story gives a flavor of what is to come in this field and indicates that anyone expecting simple explanations of etiology or evolution should be prepared for some intriguing surprises.
  • Fisher, S. E. (2007). Molecular windows into speech and language disorders. Folia Phoniatrica et Logopaedica, 59, 130-140. doi:10.1159/000101771.

    Abstract

    Why do some children fail to acquire speech and language skills despite adequate environmental input and overtly normal neurological and anatomical development? It has been suspected for several decades, based on indirect evidence, that the human genome might hold some answers to this enigma. These suspicions have recently received dramatic confirmation with the discovery of specific genetic changes which appear sufficient to derail speech and language development. Indeed, researchers are already using information from genetic studies to aid early diagnosis and to shed light on the neural pathways that are perturbed in these inherited forms of speech and language disorder. Thus, we have entered an exciting era for dissecting the neural bases of human communication, one which takes genes and molecules as a starting point. In the current article I explain how this recent paradigm shift has occurred and describe the new vistas that have opened up. I demonstrate ways of bridging the gaps between molecules, neurons and the brain, which will provide a new understanding of the aetiology of speech and language impairments.
  • Fisher, S. E. (2005). On genes, speech, and language. The New England Journal of Medicine, 353, 1655-1657. doi:10.1056/NEJMp058207.

    Abstract

    Learning to talk is one of the most important milestones in human development, but we still have only a limited understanding of the way in which the process occurs. It normally takes just a few years to go from babbling newborn to fluent communicator. During this period, the child learns to produce a rich array of speech sounds through intricate control of articulatory muscles, assembles a vocabulary comprising thousands of words, and deduces the complicated structural rules that permit construction of meaningful sentences. All of this (and more) is achieved with little conscious effort.

  • Fisher, V. J. (2021). Embodied songs: Insights into the nature of cross-modal meaning-making within sign language informed, embodied interpretations of vocal music. Frontiers in Psychology, 12: 624689. doi:10.3389/fpsyg.2021.624689.

    Abstract

    Embodied song practices involve the transformation of songs from the acoustic modality into an embodied-visual form, to increase meaningful access for d/Deaf audiences. This goes beyond the translation of lyrics, by combining poetic sign language with other bodily movements to embody the para-linguistic expressive and musical features that enhance the message of a song. To date, the limited research into this phenomenon has focussed on linguistic features and interactions with rhythm. The relationship between bodily actions and music has not been probed beyond an assumed implication of conformance. However, as the primary objective is to communicate equivalent meanings, the ways that the acoustic and embodied-visual signals relate to each other should reveal something about underlying conceptual agreement. This paper draws together a range of pertinent theories from within a grounded cognition framework including semiotics, analogy mapping and cross-modal correspondences. These theories are applied to embodiment strategies used by prominent d/Deaf and hearing Dutch practitioners, to unpack the relationship between acoustic songs, their embodied representations, and their broader conceptual and affective meanings. This leads to the proposition that meaning primarily arises through shared patterns of internal relations across a range of amodal and cross-modal features with an emphasis on dynamic qualities. These analogous patterns can inform metaphorical interpretations and trigger shared emotional responses. This exploratory survey offers insights into the nature of cross-modal and embodied meaning-making, as a jumping-off point for further research.
  • FitzPatrick, I., & Weber, K. (2008). “Il piccolo principe est allé”: Processing of language switches in auditory sentence comprehension. Journal of Neuroscience, 28(18), 4581-4582. doi:10.1523/JNEUROSCI.0905-08.2008.
  • FitzPatrick, I. (2007). Effects of sentence context in L2 natural speech comprehension. Nijmegen CNS, 2, 43-56.

    Abstract

    Electrophysiological studies consistently find N400 effects of semantic incongruity in non-native written language comprehension. Typically these N400 effects are later than N400 effects in native comprehension, suggesting that semantic processing in one’s second language (L2) may be delayed compared to one’s first language (L1). In this study we were firstly interested in replicating the semantic incongruity effect using natural auditory speech, which poses strong demands on the speed of processing. Secondly, we wished to investigate whether a possible delay in semantic processing might be due to bilinguals accessing lexical items from both their L1 and L2 (a more extensive lexical search). We recorded EEG from 30 Dutch-English bilinguals who listened to English sentences in which the sentence-final word was: (1) semantically fitting, (2) semantically incongruent, (3) initially congruent: semantically incongruent, but sharing initial phonemes with the most probable sentence completion within the L2, (4) semantically incongruent, but sharing initial phonemes with the L1 translation equivalent of the most probable sentence completion. We found an N400 effect in each of the semantically incongruent conditions. This N400 effect was significantly delayed to L2 words that were initially congruent with the sentence context. We found no effect of initial overlap with L1 translation equivalents. Taken together, these findings firstly demonstrate that non-native listeners are sensitive to semantic incongruity in natural speech, secondly indicate that semantic integration in non-native listening can start on the basis of word-initial phonemes, and finally suggest that during L2 sentence processing listeners do not access the L1 lexicon.
