Publications

  • Dingemanse, M., Rossi, G., & Floyd, S. (2017). Place reference in story beginnings: a cross-linguistic study of narrative and interactional affordances. Language in Society, 46(2), 129-158. doi:10.1017/S0047404516001019.

    Abstract

    People often begin stories in conversation by referring to person, time, and place. We study story beginnings in three societies and find place reference is recurrently used to (i) set the stage, foreshadowing the type of story and the kind of response due, and to (ii) make the story cohere, anchoring elements of the developing story. Recipients orient to these interactional affordances of place reference by responding in ways that attend to the relevance of place for the story and by requesting clarification when references are incongruent or noticeably absent. The findings are based on 108 story beginnings in three unrelated languages: Cha’palaa, a Barbacoan language of Ecuador; Northern Italian, a Romance language of Italy; and Siwu, a Kwa language of Ghana. The commonalities suggest we have identified generic affordances of place reference, and that storytelling in conversation offers a robust sequential environment for systematic comparative research on conversational structures.
  • Dingemanse, M. (2009). The selective advantage of body-part terms. Journal of Pragmatics, 41(10), 2130-2136. doi:10.1016/j.pragma.2008.11.008.

    Abstract

    This paper addresses the question of why body-part terms are so often used to talk about things other than body parts. It is argued that the strategy of falling back on stable common ground to maximize the chances of successful communication is the driving force behind the selective advantage of body-part terms. The many different ways in which languages may implement this universal strategy suggest that, in order to properly understand the privileged role of the body in the evolution of linguistic signs, we have to look beyond the body to language in its socio-cultural context. A theory which acknowledges the interacting influences of stable common ground and diversified cultural practices on the evolution of linguistic signs will offer the most explanatory power for both universal patterns and language-specific variation.
  • Dingemanse, M., Schuerman, W. L., Reinisch, E., Tufvesson, S., & Mitterer, H. (2016). What sound symbolism can and cannot do: Testing the iconicity of ideophones from five languages. Language, 92(2), e117-e133. doi:10.1353/lan.2016.0034.

    Abstract

    Sound symbolism is a phenomenon with broad relevance to the study of language and mind, but there has been a disconnect between its investigations in linguistics and psychology. This study tests the sound-symbolic potential of ideophones—words described as iconic—in an experimental task that improves over prior work in terms of ecological validity and experimental control. We presented 203 ideophones from five languages to eighty-two Dutch listeners in a binary-choice task, in four versions: original recording, full diphone resynthesis, segments-only resynthesis, and prosody-only resynthesis. Listeners guessed the meaning of all four versions above chance, confirming the iconicity of ideophones and showing the viability of speech synthesis as a way of controlling for segmental and suprasegmental properties in experimental studies of sound symbolism. The success rate was more modest than in prior studies using pseudowords like bouba/kiki, implying that assumptions based on such words cannot simply be transferred to natural languages. Prosody and segments together drive the effect: neither alone is sufficient, showing that segments and prosody work together as cues supporting iconic interpretations. The findings cast doubt on attempts to ascribe iconic meanings to segments alone and support a view of ideophones as words that combine arbitrariness and iconicity. We discuss the implications for theory and methods in the empirical study of sound symbolism and iconicity.

    Additional information

    https://muse.jhu.edu/article/619540
  • Djemie, T., Weckhuysen, S., von Spiczak, S., Carvill, G. L., Jaehn, J., Anttonen, A. K., Brilstra, E., Caglayan, H. S., De Kovel, C. G. F., Depienne, C., Gaily, E., Gennaro, E., Giraldez, B. G., Gormley, P., Guerrero-Lopez, R., Guerrini, R., Hamalainen, E., Hartmann, `., Hernandez-Hernandez, L., Hjalgrim, H., Koeleman, B. P., Leguern, E., Lehesjoki, A. E., Lemke, J. R., Leu, C., Marini, C., McMahon, J. M., Mei, D., Moller, R. S., Muhle, H., Myers, C. T., Nava, C., Serratosa, J. M., Sisodiya, S. M., Stephani, U., Striano, P., van Kempen, M. J., Verbeek, N. E., Usluer, S., Zara, F., Palotie, A., Mefford, H. C., Scheffer, I. E., De Jonghe, P., Helbig, I., & Suls, A. (2016). Pitfalls in genetic testing: the story of missed SCN1A mutations. Molecular Genetics & Genomic Medicine, 4(4), 457-64. doi:10.1002/mgg3.217.

    Abstract

    Background: Sanger sequencing, still the standard technique for genetic testing in most diagnostic laboratories and until recently widely used in research, is gradually being complemented by next-generation sequencing (NGS). No single mutation detection technique is however perfect in identifying all mutations. Therefore, we wondered to what extent inconsistencies between Sanger sequencing and NGS affect the molecular diagnosis of patients. Since mutations in SCN1A, the major gene implicated in epilepsy, are found in the majority of Dravet syndrome (DS) patients, we focused on missed SCN1A mutations. Methods: We sent out a survey to 16 genetic centers performing SCN1A testing. Results: We collected data on 28 mutations initially missed using Sanger sequencing. All patients were falsely reported as SCN1A mutation-negative, both due to technical limitations and human errors. Conclusion: We illustrate the pitfalls of Sanger sequencing and most importantly provide evidence that SCN1A mutations are an even more frequent cause of DS than already anticipated.
  • Doherty, M., & Klein, W. (Eds.). (1991). Übersetzung [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (84).
  • Dolscheid, S., Hunnius, S., Casasanto, D., & Majid, A. (2014). Prelinguistic infants are sensitive to space-pitch associations found across cultures. Psychological Science, 25(6), 1256-1261. doi:10.1177/0956797614528521.

    Abstract

    People often talk about musical pitch using spatial metaphors. In English, for instance, pitches can be “high” or “low” (i.e., height-pitch association), whereas in other languages, pitches are described as “thin” or “thick” (i.e., thickness-pitch association). According to results from psychophysical studies, metaphors in language can shape people’s nonlinguistic space-pitch representations. But does language establish mappings between space and pitch in the first place, or does it only modify preexisting associations? To find out, we tested 4-month-old Dutch infants’ sensitivity to height-pitch and thickness-pitch mappings using a preferential-looking paradigm. The infants looked significantly longer at cross-modally congruent stimuli for both space-pitch mappings, which indicates that infants are sensitive to these associations before language acquisition. The early presence of space-pitch mappings means that these associations do not originate from language. Instead, language builds on preexisting mappings, changing them gradually via competitive associative learning. Space-pitch mappings that are language-specific in adults develop from mappings that may be universal in infants.
  • Drijvers, L., Mulder, K., & Ernestus, M. (2016). Alpha and gamma band oscillations index differential processing of acoustically reduced and full forms. Brain and Language, 153-154, 27-37. doi:10.1016/j.bandl.2016.01.003.

    Abstract

    Reduced forms like yeshay for yesterday often occur in conversations. Previous behavioral research reported a processing advantage for full over reduced forms. The present study investigated whether this processing advantage is reflected in a modulation of alpha (8–12 Hz) and gamma (30+ Hz) band activity. In three electrophysiological experiments, participants listened to full and reduced forms in isolation (Experiment 1), sentence-final position (Experiment 2), or mid-sentence position (Experiment 3). Alpha power was larger in response to reduced forms than to full forms, but only in Experiments 1 and 2. We interpret these increases in alpha power as reflections of higher auditory cognitive load. In all experiments, gamma power only increased in response to full forms, which we interpret as showing that lexical activation spreads more quickly through the semantic network for full than for reduced forms. These results confirm a processing advantage for full forms, especially in non-medial sentence position.
  • Drijvers, L., & Ozyurek, A. (2017). Visual context enhanced: The joint contribution of iconic gestures and visible speech to degraded speech comprehension. Journal of Speech, Language, and Hearing Research, 60, 212-222. doi:10.1044/2016_JSLHR-H-16-0101.

    Abstract

    Purpose: This study investigated whether and to what extent iconic co-speech gestures contribute to information from visible speech to enhance degraded speech comprehension at different levels of noise-vocoding. Previous studies of the contributions of these 2 visual articulators to speech comprehension have only been performed separately.

    Method: Twenty participants watched videos of an actress uttering an action verb and completed a free-recall task. The videos were presented in 3 speech conditions (2-band noise-vocoding, 6-band noise-vocoding, clear), 3 multimodal conditions (speech + lips blurred, speech + visible speech, speech + visible speech + gesture), and 2 visual-only conditions (visible speech, visible speech + gesture).

    Results: Accuracy levels were higher when both visual articulators were present compared with 1 or none. The enhancement effects of (a) visible speech, (b) gestural information on top of visible speech, and (c) both visible speech and iconic gestures were larger in 6-band than 2-band noise-vocoding or visual-only conditions. Gestural enhancement in 2-band noise-vocoding did not differ from gestural enhancement in visual-only conditions.
  • Drozdova, P., Van Hout, R., & Scharenborg, O. (2017). L2 voice recognition: The role of speaker-, listener-, and stimulus-related factors. The Journal of the Acoustical Society of America, 142(5), 3058-3068. doi:10.1121/1.5010169.

    Abstract

    Previous studies examined various factors influencing voice recognition and learning with mixed results. The present study investigates the separate and combined contribution of these various speaker-, stimulus-, and listener-related factors to voice recognition. Dutch listeners, with arguably incomplete phonological and lexical knowledge in the target language, English, learned to recognize the voice of four native English speakers, speaking in English, during four-day training. Training was successful and listeners' accuracy was shown to be influenced by the acoustic characteristics of speakers and the sound composition of the words used in the training, but not by lexical frequency of the words, nor the lexical knowledge of the listeners or their phonological aptitude. Although not conclusive, listeners with a lower working memory capacity seemed to be slower in learning voices than listeners with a higher working memory capacity. The results reveal that speaker-related, listener-related, and stimulus-related factors accumulate in voice recognition, while lexical information turns out not to play a role in successful voice learning and recognition. This implies that voice recognition operates at the prelexical processing level.
  • Drozdova, P., Van Hout, R., & Scharenborg, O. (2016). Lexically-guided perceptual learning in non-native listening. Bilingualism: Language and Cognition, 19(5), 914-920. doi:10.1017/S136672891600002X.

    Abstract

    There is ample evidence that native and non-native listeners use lexical knowledge to retune their native phonetic categories following ambiguous pronunciations. The present study investigates whether a non-native ambiguous sound can retune non-native phonetic categories. After a brief exposure to an ambiguous British English [l/ɹ] sound, Dutch listeners demonstrated retuning. This retuning was, however, asymmetrical: the non-native listeners seemed to show (more) retuning of the /ɹ/ category than of the /l/ category, suggesting that non-native listeners can retune non-native phonetic categories. This asymmetry is argued to be related to the large phonetic variability of /r/ in both Dutch and English.
  • Drude, S. (2009). Nasal harmony in Awetí ‐ A declarative account. ReVEL - Revista Virtual de Estudos da Linguagem, (3). Retrieved from http://www.revel.inf.br/en/edicoes/?mode=especial&id=16.

    Abstract

    This article describes and analyses nasal harmony (or spreading of nasality) in Awetí. It first shows generally how sounds in prefixes adapt to nasality or orality of stems, and how nasality in stems also ‘extends’ to the left. With abstract templates we show which phonetically nasal or oral sequences are possible in Awetí (focusing on stops, pre-nasalized stops and nasals) and which phonological analysis is appropriate to account for these regularities. In Awetí, there are intrinsically nasal and oral vowels and ‘neutral’ vowels which adapt phonetically to a following vowel or consonant, as is the case of sonorant consonants. Pre-nasalized stops such as “nt” are nasalized variants of stops, not post-oralized variants of nasals as in Tupí-Guaranian languages. For nasals and stops in syllable coda (end of morphemes), we postulate archiphonemes which adapt to the preceding vowel or a following consonant. Finally, using a declarative approach, the analysis formulates ‘rules’ (statements) which account for the ‘behavior’ of nasality in Awetí words, making use of “structured sequences” on both the phonetic and phonological levels. So, each unit (syllable, morpheme, word etc.) on any level has three components: a sequence of segments, a constituent structure (where pre-nasalized stops, like diphthongs, correspond to two segments), and an intonation structure. The statements describe which phonetic variants can be combined (concatenated) with which other variants, depending on their nasality or orality.
  • Drude, S., Broeder, D., & Trilsbeek, P. (2014). The Language Archive and its solutions for sustainable endangered languages corpora. Book 2.0, 4, 5-20. doi:10.1386/btwo.4.1-2.5_1.

    Abstract

    Since the late 1990s, the technical group at the Max-Planck-Institute for Psycholinguistics has worked on solutions for important challenges in building sustainable data archives, in particular, how to guarantee long-term availability of digital research data for future research. The support for the well-known DOBES (Documentation of Endangered Languages) programme has greatly inspired and advanced this work, and led to the ongoing development of a whole suite of tools for annotating, cataloguing and archiving multi-media data. At the core of the LAT (Language Archiving Technology) tools is the IMDI metadata schema, now being integrated into a larger network of digital resources in the European CLARIN project. The multi-media annotator ELAN (with its web-based cousin ANNEX) is now well known not only among documentary linguists. We aim at presenting an overview of the solutions, both achieved and in development, for creating and exploiting sustainable digital data, in particular in the area of documenting languages and cultures, and their interfaces with other related developments.
  • Duffield, N., Matsuo, A., & Roberts, L. (2009). Factoring out the parallelism effect in VP-ellipsis: English vs. Dutch contrasts. Second Language Research, 25, 427-467. doi:10.1177/0267658309349425.

    Abstract

    Previous studies, including Duffield and Matsuo (2001; 2002; 2009), have demonstrated second language learners’ overall sensitivity to a parallelism constraint governing English VP-ellipsis constructions: like native speakers (NS), advanced Dutch, Spanish and Japanese learners of English reliably prefer ellipsis clauses with structurally parallel antecedents over those with non-parallel antecedents. However, these studies also suggest that, in contrast to English native speakers, L2 learners’ sensitivity to parallelism is strongly influenced by other non-syntactic formal factors, such that the constraint applies in a comparatively restricted range of construction-specific contexts. This article reports a set of follow-up experiments — from both computer-based as well as more traditional acceptability judgement tasks — that systematically manipulates these other factors. Convergent results from these tasks confirm a qualitative difference in the judgement patterns of the two groups, as well as important differences between theoreticians’ judgements and those of typical native speakers. We consider the implications of these findings for theories of ultimate attainment in second language acquisition (SLA), as well as for current theoretical accounts of ellipsis.
  • Dunn, M. (2009). Contact and phylogeny in Island Melanesia. Lingua, 119(11), 1664-1678. doi:10.1016/j.lingua.2007.10.026.

    Abstract

    This paper shows that despite evidence of structural convergence between some of the Austronesian and non-Austronesian (Papuan) languages of Island Melanesia, statistical methods can detect two independent genealogical signals derived from linguistic structural features. Earlier work by the author and others has presented a maximum parsimony analysis which gave evidence for a genealogical connection between the non-Austronesian languages of island Melanesia. Using the same data set, this paper demonstrates for the non-statistician the application of more sophisticated statistical techniques—including Bayesian methods of phylogenetic inference, and shows that the evidence for common ancestry is if anything stronger than originally supposed.
  • Dunn, M. (2014). [Review of the book Evolutionary Linguistics by April McMahon and Robert McMahon]. American Anthropologist, 116(3), 690-691.
  • Eaves, L. J., St Pourcain, B., Smith, G. D., York, T. P., & Evans, D. M. (2014). Resolving the Effects of Maternal and Offspring Genotype on Dyadic Outcomes in Genome Wide Complex Trait Analysis (“M-GCTA”). Behavior Genetics, 44(5), 445-455. doi:10.1007/s10519-014-9666-6.

    Abstract

    Genome wide complex trait analysis (GCTA) is extended to include environmental effects of the maternal genotype on offspring phenotype (“maternal effects”, M-GCTA). The model includes parameters for the direct effects of the offspring genotype, maternal effects and the covariance between direct and maternal effects. Analysis of simulated data, conducted in OpenMx, confirmed that model parameters could be recovered by full information maximum likelihood (FIML) and evaluated the biases that arise in conventional GCTA when indirect genetic effects are ignored. Estimates derived from FIML in OpenMx showed very close agreement to those obtained by restricted maximum likelihood using the published algorithm for GCTA. The method was also applied to illustrative perinatal phenotypes from ~4,000 mother-offspring pairs from the Avon Longitudinal Study of Parents and Children. The relative merits of extended GCTA in contrast to quantitative genetic approaches based on analyzing the phenotypic covariance structure of kinships are considered.
  • Edlinger, G., Bastiaansen, M. C. M., Brunia, C., Neuper, C., & Pfurtscheller, G. (1999). Cortical oscillatory activity assessed by combined EEG and MEG recordings and high resolution ERD methods. Biomedizinische Technik, 44(2), 131-134.
  • Edmunds, R., L'Hours, H., Rickards, L., Trilsbeek, P., Vardigan, M., & Mokrane, M. (2016). Core trustworthy data repositories requirements. Zenodo, 168411. doi:10.5281/zenodo.168411.

    Abstract

    The Core Trustworthy Data Repository Requirements were developed by the DSA–WDS Partnership Working Group on Repository Audit and Certification, a Working Group (WG) of the Research Data Alliance. The goal of the effort was to create a set of harmonized common requirements for certification of repositories at the core level, drawing from criteria already put in place by the Data Seal of Approval (DSA: www.datasealofapproval.org) and the ICSU World Data System (ICSU-WDS: https://www.icsu-wds.org/services/certification). An additional goal of the project was to develop common procedures to be implemented by both DSA and ICSU-WDS. Ultimately, the DSA and ICSU-WDS plan to collaborate on a global framework for repository certification that moves from the core to the extended (nestor-Seal DIN 31644), to the formal (ISO 16363) level.
  • Eibl-Eibesfeldt, I., & Senft, G. (1991). Trobriander (Papua-Neuguinea, Trobriand-Inseln, Kaile'una) Tänze zur Einleitung des Erntefeier-Rituals. Film E 3129. Trobriander (Papua-Neuguinea, Trobriand-Inseln, Kiriwina); Ausschnitte aus einem Erntefesttanz. Film E 3130. Publikationen zu wissenschaftlichen Filmen. Sektion Ethnologie, 17, 1-17.
  • Eimer, M., Kiss, M., Press, C., & Sauter, D. (2009). The roles of feature-specific task set and bottom-up salience in attentional capture: An ERP study. Journal of Experimental Psychology: Human Perception and Performance, 35, 1316-1328. doi:10.1037/a0015872.

    Abstract

    We investigated the roles of top-down task set and bottom-up stimulus salience for feature-specific attentional capture. ERPs and behavioural performance were measured in two experiments where spatially nonpredictive cues preceded visual search arrays that included a colour-defined target. When cue arrays contained a target-colour singleton, behavioural spatial cueing effects were accompanied by a cue-induced N2pc component, indicative of attentional capture. Behavioural cueing effects and N2pc components were only minimally attenuated for non-singleton relative to singleton target-colour cues, demonstrating that top-down task set has a much greater impact on attentional capture than bottom-up salience. For nontarget-colour singleton cues, no N2pc was triggered, but an anterior N2 component indicative of top-down inhibition was observed. In Experiment 2, these cues produced an inverted behavioural cueing effect, which was accompanied by a delayed N2pc to targets presented at cued locations. These results suggest that perceptually salient visual stimuli without task-relevant features trigger a transient location-specific inhibition process that prevents attentional capture, but delays the selection of subsequent target events.
  • Eisenbeiß, S., Bartke, S., Weyerts, H., & Clahsen, H. (1994). Elizitationsverfahren in der Spracherwerbsforschung: Nominalphrasen, Kasus, Plural, Partizipien. Theorie des Lexikons, 57.
  • Eising, E., Shyti, R., 't Hoen, P. A. C., Vijfhuizen, L. S., Huisman, S. M. H., Broos, L. A. M., Mahfouz, A., Reinders, M. J. T., Ferrari, M. D., Tolner, E. A., De Vries, B., & Van den Maagdenberg, A. M. J. M. (2017). Cortical spreading depression causes unique dysregulation of inflammatory pathways in a transgenic mouse model of migraine. Molecular Neurobiology, 54(4), 2986-2996. doi:10.1007/s12035-015-9681-5.

    Abstract

    Familial hemiplegic migraine type 1 (FHM1) is a rare monogenic subtype of migraine with aura caused by mutations in CACNA1A that encodes the α1A subunit of voltage-gated CaV2.1 calcium channels. Transgenic knock-in mice that carry the human FHM1 R192Q missense mutation (‘FHM1 R192Q mice’) exhibit an increased susceptibility to cortical spreading depression (CSD), the mechanism underlying migraine aura. Here, we analysed gene expression profiles from isolated cortical tissue of FHM1 R192Q mice 24 h after experimentally induced CSD in order to identify molecular pathways affected by CSD. Gene expression profiles were generated using deep serial analysis of gene expression sequencing. Our data reveal a signature of inflammatory signalling upon CSD in the cortex of both mutant and wild-type mice. However, only in the brains of FHM1 R192Q mice specific genes are up-regulated in response to CSD that are implicated in interferon-related inflammatory signalling. Our findings show that CSD modulates inflammatory processes in both wild-type and mutant brains, but that an additional unique inflammatory signature becomes expressed after CSD in a relevant mouse model of migraine.
  • Eising, E., Huisman, S. M., Mahfouz, A., Vijfhuizen, L. S., Anttila, V., Winsvold, B. S., Kurth, T., Ikram, M. A., Freilinger, T., Kaprio, J., Boomsma, D. I., van Duijn, C. M., Järvelin, M.-R., Zwart, J.-A., Quaye, L., Strachan, D. P., Kubisch, C., Dichgans, M., Davey Smith, G., Stefansson, K., Palotie, A., Chasman, D. I., Ferrari, M. D., Terwindt, G. M., de Vries, B., Nyholt, D. R., Lelieveldt, B. P., van den Maagdenberg, A. M., & Reinders, M. J. (2016). Gene co-expression analysis identifies brain regions and cell types involved in migraine pathophysiology: a GWAS-based study using the Allen Human Brain Atlas. Human Genetics, 135(4), 425-439. doi:10.1007/s00439-016-1638-x.

    Abstract

    Migraine is a common disabling neurovascular brain disorder typically characterised by attacks of severe headache and associated with autonomic and neurological symptoms. Migraine is caused by an interplay of genetic and environmental factors. Genome-wide association studies (GWAS) have identified over a dozen genetic loci associated with migraine. Here, we integrated migraine GWAS data with high-resolution spatial gene expression data of normal adult brains from the Allen Human Brain Atlas to identify specific brain regions and molecular pathways that are possibly involved in migraine pathophysiology. To this end, we used two complementary methods. In GWAS data from 23,285 migraine cases and 95,425 controls, we first studied modules of co-expressed genes that were calculated based on human brain expression data for enrichment of genes that showed association with migraine. Enrichment of a migraine GWAS signal was found for five modules that suggest involvement in migraine pathophysiology of: (i) neurotransmission, protein catabolism and mitochondria in the cortex; (ii) transcription regulation in the cortex and cerebellum; and (iii) oligodendrocytes and mitochondria in subcortical areas. Second, we used the high-confidence genes from the migraine GWAS as a basis to construct local migraine-related co-expression gene networks. Signatures of all brain regions and pathways that were prominent in the first method also surfaced in the second method, thus providing support that these brain regions and pathways are indeed involved in migraine pathophysiology.
  • Eising, E., Pelzer, N., Vijfhuizen, L. S., De Vries, B., Ferrari, M. D., 'T Hoen, P. A. C., Terwindt, G. M., & Van den Maagdenberg, A. M. J. M. (2017). Identifying a gene expression signature of cluster headache in blood. Scientific Reports, 7: 40218. doi:10.1038/srep40218.

    Abstract

    Cluster headache is a relatively rare headache disorder, typically characterized by multiple daily, short-lasting attacks of excruciating, unilateral (peri-)orbital or temporal pain associated with autonomic symptoms and restlessness. To better understand the pathophysiology of cluster headache, we used RNA sequencing to identify differentially expressed genes and pathways in whole blood of patients with episodic (n = 19) or chronic (n = 20) cluster headache in comparison with headache-free controls (n = 20). Gene expression data were analysed by gene and by module of co-expressed genes with particular attention to previously implicated disease pathways including hypocretin dysregulation. Only moderate gene expression differences were identified and no associations were found with previously reported pathogenic mechanisms. At the level of functional gene sets, associations were observed for genes involved in several brain-related mechanisms such as GABA receptor function and voltage-gated channels. In addition, genes and modules of co-expressed genes showed a role for intracellular signalling cascades, mitochondria and inflammation. Although larger study samples may be required to identify the full range of involved pathways, these results indicate a role for mitochondria, intracellular signalling and inflammation in cluster headache.

    Additional information

    Eising_etal_2017sup.pdf
  • Eising, E., De Leeuw, C., Min, J. L., Anttila, V., Verheijen, M. H. G., Terwindt, G. M., Dichgans, M., Freilinger, T., Kubisch, C., Ferrari, M. D., Smit, A. B., De Vries, B., Palotie, A., Van Den Maagdenberg, A. M. J. M., & Posthuma, D. (2016). Involvement of astrocyte and oligodendrocyte gene sets in migraine. Cephalalgia, 36(7), 640-647. doi:10.1177/0333102415618614.

    Abstract

    Migraine is a common episodic brain disorder characterized by recurrent attacks of severe unilateral headache and additional neurological symptoms. Two main migraine types can be distinguished based on the presence of aura symptoms that can accompany the headache: migraine with aura and migraine without aura. Multiple genetic and environmental factors confer disease susceptibility. Recent genome-wide association studies (GWAS) indicate that migraine susceptibility genes are involved in various pathways, including neurotransmission, which have already been implicated in genetic studies of monogenic familial hemiplegic migraine, a subtype of migraine with aura. Methods: To further explore the genetic background of migraine, we performed a gene set analysis of migraine GWAS data of 4954 clinic-based patients with migraine, as well as 13,390 controls. Curated sets of synaptic genes and sets of genes predominantly expressed in three glial cell types (astrocytes, microglia and oligodendrocytes) were investigated. Discussion: Our results show that gene sets containing astrocyte- and oligodendrocyte-related genes are associated with migraine, which is especially true for gene sets involved in protein modification and signal transduction. Observed differences between migraine with aura and migraine without aura indicate that both migraine types, at least in part, seem to have a different genetic background.
  • Enard, W., Gehre, S., Hammerschmidt, K., Hölter, S. M., Blass, T., Somel, M., Brückner, M. K., Schreiweis, C., Winter, C., Sohr, R., Becker, L., Wiebe, V., Nickel, B., Giger, T., Müller, U., Groszer, M., Adler, T., Aguilar, A., Bolle, I., Calzada-Wack, J., Dalke, C., Ehrhardt, N., Favor, J., Fuchs, H., Gailus-Durner, V., Hans, W., Hölzlwimmer, G., Javaheri, A., Kalaydjiev, S., Kallnik, M., Kling, E., Kunder, S., Moßbrugger, I., Naton, B., Racz, I., Rathkolb, B., Rozman, J., Schrewe, A., Busch, D. H., Graw, J., Ivandic, B., Klingenspor, M., Klopstock, T., Ollert, M., Quintanilla-Martinez, L., Schulz, H., Wolf, E., Wurst, W., Zimmer, A., Fisher, S. E., Morgenstern, R., Arendt, T., Hrabé de Angelis, M., Fischer, J., Schwarz, J., & Pääbo, S. (2009). A humanized version of Foxp2 affects cortico-basal ganglia circuits in mice. Cell, 137(5), 961-971. doi:10.1016/j.cell.2009.03.041.

    Abstract

    It has been proposed that two amino acid substitutions in the transcription factor FOXP2 have been positively selected during human evolution due to effects on aspects of speech and language. Here, we introduce these substitutions into the endogenous Foxp2 gene of mice. Although these mice are generally healthy, they have qualitatively different ultrasonic vocalizations, decreased exploratory behavior and decreased dopamine concentrations in the brain suggesting that the humanized Foxp2 allele affects basal ganglia. In the striatum, a part of the basal ganglia affected in humans with a speech deficit due to a nonfunctional FOXP2 allele, we find that medium spiny neurons have increased dendrite lengths and increased synaptic plasticity. Since mice carrying one nonfunctional Foxp2 allele show opposite effects, this suggests that alterations in cortico-basal ganglia circuits might have been important for the evolution of speech and language in humans.
  • Enfield, N. J. (2009). Common tragedy [Review of the book The native mind and the cultural construction of nature by Scott Atran and Douglas Medin]. The Times Literary Supplement, September 18, 2009, 10-11.
  • Enfield, N. J. (2009). [Review of the book Serial verb constructions: A cross-linguistic typology ed. by Alexandra Y. Aikhenvald and R. M. W. Dixon]. Language, 85, 445-451. doi:10.1353/lan.0.0124.
  • Enfield, N. J. (2009). Language: Social motives for syntax [Review of the book Origins of human communication by Michael Tomasello]. Science, 324(5923), 39. doi:10.1126/science.1172660.
  • Enfield, N. J. (1999). On the indispensability of semantics: Defining the ‘vacuous’. Rask: internationalt tidsskrift for sprog og kommunikation, 9/10, 285-304.
  • Enfield, N. J., & Diffloth, G. (2009). Phonology and sketch grammar of Kri, a Vietic language of Laos. Cahiers de Linguistique - Asie Orientale (CLAO), 38(1), 3-69.
  • Enfield, N. J. (2009). Relationship thinking and human pragmatics. Journal of Pragmatics, 41, 60-78. doi:10.1016/j.pragma.2008.09.007.

    Abstract

    The approach to pragmatics explored in this article focuses on elements of social interaction which are of universal relevance, and which may provide bases for a comparative approach. The discussion is anchored by reference to a fragment of conversation from a video-recording of Lao speakers during a home visit in rural Laos. The following points are discussed. First, an understanding of the full richness of context is indispensable for a proper understanding of any interaction. Second, human relationships are a primary locus of social organization, and as such constitute a key focus for pragmatics. Third, human social intelligence forms a universal cognitive under-carriage for interaction, and requires careful cross-cultural study. Fourth, a neo-Peircean framework for a general understanding of semiotic processes gives us a way of stepping away from language as our basic analytical frame. It is argued that in order to get a grip on pragmatics across human groups, we need to take a comparative approach in the biological sense—i.e. with reference to other species as well. From this perspective, human pragmatics is about using semiotic resources to try to meet goals in the realm of social relationships.
  • Erard, M. (2009). How Many Languages? Linguists Discover New Tongues in China. Science, 324(5925), 332-333. doi:10.1126/science.324.5925.332a.
  • Erard, M. (2016). Solving Australia's language puzzle. Science, 353(6306), 1357-1359. doi:10.1126/science.353.6306.1357.
  • Erard, M. (2017). Write yourself invisible. New Scientist, 236(3153), 36-39.
  • Ernestus, M. (2014). Acoustic reduction and the roles of abstractions and exemplars in speech processing. Lingua, 142, 27-41. doi:10.1016/j.lingua.2012.12.006.

    Abstract

    Acoustic reduction refers to the frequent phenomenon in conversational speech that words are produced with fewer or lenited segments compared to their citation forms. The few published studies on the production and comprehension of acoustic reduction have important implications for the debate on the relevance of abstractions and exemplars in speech processing. This article discusses these implications. It first briefly introduces the key assumptions of simple abstractionist and simple exemplar-based models. It then discusses the literature on acoustic reduction and draws the conclusion that both types of models need to be extended to explain all findings. The ultimate model should allow for the storage of different pronunciation variants, but also reserve an important role for phonetic implementation. Furthermore, the recognition of a highly reduced pronunciation variant requires top down information and leads to activation of the corresponding unreduced variant, the variant that reaches listeners’ consciousness. These findings are best accounted for in hybrid models, assuming both abstract representations and exemplars. None of the hybrid models formulated so far can account for all data on reduced speech and we need further research to obtain detailed insight into how speakers produce and listeners comprehend reduced speech.
  • Ernestus, M., Dikmans, M., & Giezenaar, G. (2017). Advanced second language learners experience difficulties processing reduced word pronunciation variants. Dutch Journal of Applied Linguistics, 6(1), 1-20. doi:10.1075/dujal.6.1.01ern.

    Abstract

    Words are often pronounced with fewer segments in casual conversations than in formal speech. Previous research has shown that foreign language learners and beginning second language learners experience problems processing reduced speech. We examined whether this also holds for advanced second language learners. We designed a dictation task in Dutch consisting of sentences spliced from casual conversations and an unreduced counterpart of this task, with the same sentences carefully articulated by the same speaker. Advanced second language learners of Dutch produced substantially more transcription errors for the reduced than for the unreduced sentences. These errors made the sentences incomprehensible or led to non-intended meanings. The learners often did not rely on the semantic and syntactic information in the sentence or on the subsegmental cues to overcome the reductions. Hence, advanced second language learners also appear to suffer from the reduced pronunciation variants of words that are abundant in everyday conversations.
  • Ernestus, M., Giezenaar, G., & Dikmans, M. (2016). Ikfstajezotuuknie: Half uitgesproken woorden in alledaagse gesprekken. Les, 199, 7-9.

    Abstract

    In informal conversations, Amsterdam often sounds like Amsdam and Rotterdam like Rodam, without most native speakers being aware of it. In everyday situations, a considerable proportion of the sounds is dropped. In addition, many sounds are articulated more weakly (for example, a d as a j when the mouth is not fully closed). It seems likely that these half-pronounced words pose a problem for second language learners, since reduced forms can deviate strongly from the forms these learners have been taught. Whether this is really the case is what the authors investigated in two studies. Before discussing these two studies, they first briefly describe the different types of reduction that occur.
  • Ernestus, M., Kouwenhoven, H., & Van Mulken, M. (2017). The direct and indirect effects of the phonotactic constraints in the listener's native language on the comprehension of reduced and unreduced word pronunciation variants in a foreign language. Journal of Phonetics, 62, 50-64. doi:10.1016/j.wocn.2017.02.003.

    Abstract

    This study investigates how the comprehension of casual speech in foreign languages is affected by the phonotactic constraints in the listener’s native language. Non-native listeners of English with different native languages heard short English phrases produced by native speakers of English or Spanish and they indicated whether these phrases included can or can’t. Native Mandarin listeners especially tended to interpret can’t as can. We interpret this result as a direct effect of the ban on word-final /nt/ in Mandarin. Both the native Mandarin and the native Spanish listeners did not take full advantage of the subsegmental information in the speech signal cueing reduced can’t. This finding is probably an indirect effect of the phonotactic constraints in their native languages: these listeners have difficulties interpreting the subsegmental cues because these cues do not occur or have different functions in their native languages. Dutch resembles English in the phonotactic constraints relevant to the comprehension of can’t, and native Dutch listeners showed similar patterns in their comprehension of native and non-native English to native English listeners. This result supports our conclusion that the major patterns in the comprehension results are driven by the phonotactic constraints in the listeners’ native languages.
  • Eryilmaz, K., & Little, H. (2017). Using Leap Motion to investigate the emergence of structure in speech and language. Behavior Research Methods, 49(5), 1748-1768. doi:10.3758/s13428-016-0818-x.

    Abstract

    In evolutionary linguistics, experiments using artificial signal spaces are being used to investigate the emergence of speech structure. These signal spaces need to be continuous, non-discretised spaces from which discrete units and patterns can emerge. They need to be dissimilar from, but comparable with, the vocal tract, in order to minimise interference from pre-existing linguistic knowledge, while informing us about language. This is a hard balance to strike. This article outlines a new approach which uses the Leap Motion, an infra-red controller which can convert manual movement in 3D space into sound. The signal space using this approach is more flexible than signal spaces in previous attempts. Further, output data using this approach is simpler to arrange and analyse. The experimental interface was built using free, and mostly open source, libraries in Python. We provide our source code for other researchers as open source.
  • Esteve-Gibert, N., Prieto, P., & Liszkowski, U. (2017). Twelve-month-olds understand social intentions based on prosody and gesture shape. Infancy, 22, 108-129. doi:10.1111/infa.12146.

    Abstract

    Infants infer social and pragmatic intentions underlying attention-directing gestures, but the basis on which infants make these inferences is not well understood. Previous studies suggest that infants rely on information from preceding shared action contexts and joint perceptual scenes. Here, we tested whether 12-month-olds use information from act-accompanying cues, in particular prosody and hand shape, to guide their pragmatic understanding. In Experiment 1, caregivers directed infants’ attention to an object to request it, share interest in it, or inform them about a hidden aspect. Caregivers used distinct prosodic and gestural patterns to express each pragmatic intention. Experiment 2 was identical except that experimenters provided identical lexical information across conditions and used three sets of trained prosodic and gestural patterns. In all conditions, the joint perceptual scenes and preceding shared action contexts were identical. In both experiments, infants reacted appropriately to the adults’ intentions by attending to the object mostly in the sharing interest condition, offering the object mostly in the imperative condition, and searching for the referent mostly in the informing condition. Infants’ ability to comprehend pragmatic intentions based on prosody and gesture shape expands infants’ communicative understanding from common activities to novel situations for which shared background knowledge is missing.
  • Estruch, S. B., Graham, S. A., Chinnappa, S. M., Deriziotis, P., & Fisher, S. E. (2016). Functional characterization of rare FOXP2 variants in neurodevelopmental disorder. Journal of Neurodevelopmental Disorders, 8: 44. doi:10.1186/s11689-016-9177-2.
  • Estruch, S. B., Graham, S. A., Deriziotis, P., & Fisher, S. E. (2016). The language-related transcription factor FOXP2 is post-translationally modified with small ubiquitin-like modifiers. Scientific Reports, 6: 20911. doi:10.1038/srep20911.

    Abstract

    Mutations affecting the transcription factor FOXP2 cause a rare form of severe speech and language disorder. Although it is clear that sufficient FOXP2 expression is crucial for normal brain development, little is known about how this transcription factor is regulated. To investigate post-translational mechanisms for FOXP2 regulation, we searched for protein interaction partners of FOXP2, and identified members of the PIAS family as novel FOXP2 interactors. PIAS proteins mediate post-translational modification of a range of target proteins with small ubiquitin-like modifiers (SUMOs). We found that FOXP2 can be modified with all three human SUMO proteins and that PIAS1 promotes this process. An aetiological FOXP2 mutation found in a family with speech and language disorder markedly reduced FOXP2 SUMOylation. We demonstrate that FOXP2 is SUMOylated at a single major site, which is conserved in all FOXP2 vertebrate orthologues and in the paralogues FOXP1 and FOXP4. Abolishing this site did not lead to detectable changes in FOXP2 subcellular localization, stability, dimerization or transcriptional repression in cellular assays, but the conservation of this site suggests a potential role for SUMOylation in regulating FOXP2 activity in vivo.

    Additional information

    srep20911-s1.pdf
  • Ho, Y. Y. W., Evans, D. M., Montgomery, G. W., Henders, A. K., Kemp, J. P., Timpson, N. J., St Pourcain, B., Heath, A. C., Madden, P. A. F., Loesch, D. Z., McNevin, D., Daniel, R., Davey-Smith, G., Martin, N. G., & Medland, S. E. (2016). Common genetic variants influence whorls in fingerprint patterns. Journal of Investigative Dermatology, 136(4), 859-862. doi:10.1016/j.jid.2015.10.062.
  • Evans, N., & Levinson, S. C. (2009). The myth of language universals: Language diversity and its importance for cognitive science. Behavioral and Brain Sciences, 32(5), 429-492. doi:10.1017/S0140525X0999094X.

    Abstract

    Talk of linguistic universals has given cognitive scientists the impression that languages are all built to a common pattern. In fact, there are vanishingly few universals of language in the direct sense that all languages exhibit them. Instead, diversity can be found at almost every level of linguistic organization. This fundamentally changes the object of enquiry from a cognitive science perspective. This target article summarizes decades of cross-linguistic work by typologists and descriptive linguists, showing just how few and unprofound the universal characteristics of language are, once we honestly confront the diversity offered to us by the world's 6,000 to 8,000 languages. After surveying the various uses of “universal,” we illustrate the ways languages vary radically in sound, meaning, and syntactic organization, and then we examine in more detail the core grammatical machinery of recursion, constituency, and grammatical relations. Although there are significant recurrent patterns in organization, these are better explained as stable engineering solutions satisfying multiple design constraints, reflecting both cultural-historical factors and the constraints of human cognition.
  • Evans, S., McGettigan, C., Agnew, Z., Rosen, S., Cesar, L., Boebinger, D., Ostarek, M., Chen, S. H., Richards, A., Meekins, S., & Scott, S. K. (2014). The neural basis of informational and energetic masking effects in the perception and production of speech [abstract]. The Journal of the Acoustical Society of America, 136(4), 2243. doi:10.1121/1.4900096.

    Abstract

    When we have spoken conversations, it is usually in the context of competing sounds within our environment. Speech can be masked by many different kinds of sounds, for example, machinery noise and the speech of others, and these different sounds place differing demands on cognitive resources. In this talk, I will present data from a series of functional magnetic resonance imaging (fMRI) studies in which the informational properties of background sounds have been manipulated to make them more or less similar to speech. I will demonstrate the neural effects associated with speaking over and listening to these sounds, and demonstrate how in perception these effects are modulated by the age of the listener. The results will be interpreted within a framework of auditory processing developed from primate neurophysiology and human functional imaging work (Rauschecker and Scott 2009).
  • Evans, N., & Levinson, S. C. (2009). With diversity in mind: Freeing the language sciences from universal grammar [Author's response]. Behavioral and Brain Sciences, 32(5), 472-484. doi:10.1017/S0140525X09990525.

    Abstract

    Our response takes advantage of the wide-ranging commentary to clarify some aspects of our original proposal and augment others. We argue against the generative critics of our coevolutionary program for the language sciences, defend the use of close-to-surface models as minimizing crosslinguistic data distortion, and stress the growing role of stochastic simulations in making generalized historical accounts testable. These methods lead the search for general principles away from idealized representations and towards selective processes. Putting cultural evolution central in understanding language diversity makes learning fundamental in the cognition of language: increasingly powerful models of general learning, paired with channelled caregiver input, seem set to manage language acquisition without recourse to any innate “universal grammar.” Understanding why human language has no clear parallels in the animal world requires a cross-species perspective: crucial ingredients are vocal learning (for which there are clear non-primate parallels) and an intention-attributing cognitive infrastructure that provides a universal base for language evolution. We conclude by situating linguistic diversity within a broader trend towards understanding human cognition through the study of variation in, for example, human genetics, neurocognition, and psycholinguistic processing.
  • Everaerd, D., Klumpers, F., Zwiers, M., Guadalupe, T., Franke, B., Van Oostrum, I., Schene, A., Fernandez, G., & Tendolkar, I. (2016). Childhood abuse and deprivation are associated with distinct sex-dependent differences in brain morphology. Neuropsychopharmacology, 41, 1716-1723. doi:10.1038/npp.2015.344.

    Abstract

    Childhood adversity (CA) has been associated with long-term structural brain alterations and an increased risk for psychiatric disorders. Evidence is emerging that subtypes of CA, varying in the dimensions of threat and deprivation, lead to distinct neural and behavioral outcomes. However, these specific associations have yet to be established without potential confounders such as psychopathology. Moreover, differences in neural development and psychopathology necessitate the exploration of sexual dimorphism. Young healthy adult subjects were selected based on history of CA from a large database to assess gray matter (GM) differences associated with specific subtypes of adversity. We compared voxel-based morphometry data of subjects reporting specific childhood exposure to abuse (n = 127) or deprivation (n = 126) and a similar sized group of controls (n = 129) without reported CA. Subjects were matched on age, gender, and educational level. Differences between CA subtypes were found in the fusiform gyrus and middle occipital gyrus, where subjects with a history of deprivation showed reduced GM compared with subjects with a history of abuse. An interaction between sex and CA subtype was found. Women showed less GM in the visual posterior precuneal region after both subtypes of CA than controls. Men had less GM in the postcentral gyrus after childhood deprivation compared with abuse. Our results suggest that even in a healthy population, CA subtypes are related to specific alterations in brain structure, which are modulated by sex. These findings may help understand neurodevelopmental consequences related to CA.
  • Everett, D., & Majid, A. (2009). Adventures in the jungle of language. [Interview by Asifa Majid and Jon Sutton.]. The Psychologist, 22(4), 312-313. Retrieved from http://www.thepsychologist.org.uk/archive/archive_home.cfm?volumeID=22&editionID=174&ArticleID=1494.

    Abstract

    Daniel Everett has spent his career in the Amazon, challenging some fundamental ideas about language and thought. Asifa Majid and Jon Sutton pose the questions.
  • Everett, C., Blasi, D. E., & Roberts, S. G. (2016). Language evolution and climate: The case of desiccation and tone. Journal of Language Evolution, 1, 33-46. doi:10.1093/jole/lzv004.

    Abstract

    We make the case that, contra standard assumption in linguistic theory, the sound systems of human languages are adapted to their environment. While not conclusive, this plausible case rests on several points discussed in this work: First, human behavior is generally adaptive and the assumption that this characteristic does not extend to linguistic structure is empirically unsubstantiated. Second, animal communication systems are well known to be adaptive within species across a variety of phyla and taxa. Third, research in laryngology demonstrates clearly that ambient desiccation impacts the performance of the human vocal cords. The latter point motivates a clear, testable hypothesis with respect to the synchronic global distribution of language types. Fourth, this hypothesis is supported in our own previous work, and here we discuss new approaches being developed to further explore the hypothesis. We conclude by suggesting that the time has come to more substantively examine the possibility that linguistic sound systems are adapted to their physical ecology.
  • Everett, C., Blasi, D., & Roberts, S. G. (2016). Response: Climate and language: has the discourse shifted? Journal of Language Evolution, 1(1), 83-87. doi:10.1093/jole/lzv013.

    Abstract

    We begin by thanking the respondents for their thoughtful comments and insightful leads. The overall impression we are left with by this exchange is one of progress, even if no consensus remains about the particular hypothesis we raise. To date, there has been a failure to seriously engage with the possibility that humans might adapt their communication to ecological factors. In these exchanges, we see signs of serious engagement with that possibility. Most respondents expressed agreement with the notion that our central premise—that language is ecologically adaptive—requires further exploration and may in fact be operative. We are pleased to see this shift in discourse, and to witness a heightening appreciation of possible ecological constraints on language evolution. It is that shift in discourse that represents progress in our view. Our hope is that future work will continue to explore these issues, paying careful attention to the fact that the human larynx is clearly sensitive to characteristics of ambient air. More generally, we think this exchange is indicative of the growing realization that inquiries into language development must consider potential external factors (see Dediu 2015)...

    Additional information

    AppendixResponseToHammarstrom.pdf
  • Fan, Q., Guo, X., Tideman, J. W. L., Williams, K. M., Yazar, S., Hosseini, S. M., Howe, L. D., St Pourcain, B., Evans, D. M., Timpson, N. J., McMahon, G., Hysi, P. G., Krapohl, E., Wang, Y. X., Jonas, J. B., Baird, P. N., Wang, J. J., Cheng, C. Y., Teo, Y. Y., Wong, T. Y., Ding, X., Wojciechowski, R., Young, T. L., Parssinen, O., Oexle, K., Pfeiffer, N., Bailey-Wilson, J. E., Paterson, A. D., Klaver, C. C. W., Plomin, R., Hammond, C. J., Mackey, D. A., He, M. G., Saw, S. M., Williams, C., Guggenheim, J. A., & the CREAM Consortium (2016). Childhood gene-environment interactions and age-dependent effects of genetic variants associated with refractive error and myopia: The CREAM Consortium. Scientific Reports, 6: 25853. doi:10.1038/srep25853.

    Abstract

    Myopia, currently at epidemic levels in East Asia, is a leading cause of untreatable visual impairment. Genome-wide association studies (GWAS) in adults have identified 39 loci associated with refractive error and myopia. Here, the age-of-onset of association between genetic variants at these 39 loci and refractive error was investigated in 5200 children assessed longitudinally across ages 7-15 years, along with gene-environment interactions involving the major environmental risk-factors, nearwork and time outdoors. Specific variants could be categorized as showing evidence of: (a) early-onset effects remaining stable through childhood, (b) early-onset effects that progressed further with increasing age, or (c) onset later in childhood (N = 10, 5 and 11 variants, respectively). A genetic risk score (GRS) for all 39 variants explained 0.6% (P = 6.6E-08) and 2.3% (P = 6.9E-21) of the variance in refractive error at ages 7 and 15, respectively, supporting increased effects from these genetic variants at older ages. Replication in multi-ancestry samples (combined N = 5599) yielded evidence of childhood onset for 6 of 12 variants present in both Asians and Europeans. There was no indication that variant or GRS effects altered depending on time outdoors, however 5 variants showed nominal evidence of interactions with nearwork (top variant, rs7829127 in ZMAT4; P = 6.3E-04).
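
    To make the genetic risk score (GRS) logic concrete, the sketch below shows one conventional way such a score is formed and how its variance explained can be estimated: allele counts are weighted by per-variant effect sizes, and the squared correlation with the trait gives the variance explained. All values, effect sizes, and the simulated data are illustrative assumptions, not numbers from the study.

      # Minimal illustrative sketch (not the CREAM analysis pipeline): a weighted
      # genetic risk score (GRS) and the variance in a trait it explains.
      import numpy as np

      rng = np.random.default_rng(0)
      n_children, n_snps = 1000, 39                                    # hypothetical sample; 39 loci as in the abstract
      genotypes = rng.binomial(2, 0.3, size=(n_children, n_snps))      # 0/1/2 risk-allele counts (simulated)
      betas = rng.normal(0.0, 0.05, size=n_snps)                       # assumed per-variant effect sizes

      grs = genotypes @ betas                                          # GRS = effect-size-weighted sum of allele counts
      refractive_error = grs + rng.normal(0.0, 1.0, size=n_children)   # simulated trait: weak genetic signal plus noise

      r = np.corrcoef(grs, refractive_error)[0, 1]
      print(f"Variance explained by the GRS: {r**2:.2%}")              # squared correlation = R^2 of a univariate fit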

    Additional information

    srep25853-s1.pdf
  • Fan, Q., Verhoeven, V. J., Wojciechowski, R., Barathi, V. A., Hysi, P. G., Guggenheim, J. A., Höhn, R., Vitart, V., Khawaja, A. P., Yamashiro, K., Hosseini, S. M., Lehtimäki, T., Lu, Y., Haller, T., Xie, J., Delcourt, C., Pirastu, M., Wedenoja, J., Gharahkhani, P., Venturini, C., Miyake, M., Hewitt, A. W., Guo, X., Mazur, J., Huffman, J. E., Williams, K. M., Polasek, O., Campbell, H., Rudan, I., Vatavuk, Z., Wilson, J. F., Joshi, P. K., McMahon, G., St Pourcain, B., Evans, D. M., Simpson, C. L., Schwantes-An, T.-H., Igo, R. P., Mirshahi, A., Cougnard-Gregoire, A., Bellenguez, C., Blettner, M., Raitakari, O., Kähönen, M., Seppälä, I., Zeller, T., Meitinger, T., Ried, J. S., Gieger, C., Portas, L., Van Leeuwen, E. M., Amin, N., Uitterlinden, A. G., Rivadeneira, F., Hofman, A., Vingerling, J. R., Wang, Y. X., Wang, X., Boh, E.-T.-H., Ikram, M. K., Sabanayagam, C., Gupta, P., Tan, V., Zhou, L., Ho, C. E., Lim, W., Beuerman, R. W., Siantar, R., Tai, E.-S., Vithana, E., Mihailov, E., Khor, C.-C., Hayward, C., Luben, R. N., Foster, P. J., Klein, B. E., Klein, R., Wong, H.-S., Mitchell, P., Metspalu, A., Aung, T., Young, T. L., He, M., Pärssinen, O., Van Duijn, C. M., Wang, J. J., Williams, C., Jonas, J. B., Teo, Y.-Y., Mackey, D. A., Oexle, K., Yoshimura, N., Paterson, A. D., Pfeiffer, N., Wong, T.-Y., Baird, P. N., Stambolian, D., Bailey-Wilson, J. E., Cheng, C.-Y., Hammond, C. J., Klaver, C. C., Saw, S.-M., & Consortium for Refractive Error and Myopia (CREAM) (2016). Meta-analysis of gene–environment-wide association scans accounting for education level identifies additional loci for refractive error. Nature Communications, 7: 11008. doi:10.1038/ncomms11008.

    Abstract

    Myopia is the most common human eye disorder and it results from complex genetic and environmental causes. The rapidly increasing prevalence of myopia poses a major public health challenge. Here, the CREAM consortium performs a joint meta-analysis to test single-nucleotide polymorphism (SNP) main effects and SNP × education interaction effects on refractive error in 40,036 adults from 25 studies of European ancestry and 10,315 adults from 9 studies of Asian ancestry. In European ancestry individuals, we identify six novel loci (FAM150B-ACP1, LINC00340, FBN1, DIS3L-MAP2K1, ARID2-SNAT1 and SLC14A2) associated with refractive error. In Asian populations, three genome-wide significant loci AREG, GABRR1 and PDE10A also exhibit strong interactions with education (P<8.5 × 10−5), whereas the interactions are less evident in Europeans. The discovery of these loci represents an important advance in understanding how gene and environment interactions contribute to the heterogeneity of myopia.

    Additional information

    Fan_etal_2016sup.pdf
  • Fedorenko, E., Morgan, A., Murray, E., Cardinaux, A., Mei, C., Tager-Flusberg, H., Fisher, S. E., & Kanwisher, N. (2016). A highly penetrant form of childhood apraxia of speech due to deletion of 16p11.2. European Journal of Human Genetics, 24(2), 302-306. doi:10.1038/ejhg.2015.149.

    Abstract

    Individuals with heterozygous 16p11.2 deletions reportedly suffer from a variety of difficulties with speech and language. Indeed, recent copy-number variant screens of children with childhood apraxia of speech (CAS), a specific and rare motor speech disorder, have identified three unrelated individuals with 16p11.2 deletions. However, the nature and prevalence of speech and language disorders in general, and CAS in particular, is unknown for individuals with 16p11.2 deletions. Here we took a genotype-first approach, conducting detailed and systematic characterization of speech abilities in a group of 11 unrelated children ascertained on the basis of 16p11.2 deletions. To obtain the most precise and replicable phenotyping, we included tasks that are highly diagnostic for CAS, and we tested children under the age of 18 years, an age group where CAS has been best characterized. Two individuals were largely nonverbal, preventing detailed speech analysis, whereas the remaining nine met the standard accepted diagnostic criteria for CAS. These results link 16p11.2 deletions to a highly penetrant form of CAS. Our findings underline the need for further precise characterization of speech and language profiles in larger groups of affected individuals, which will also enhance our understanding of how genetic pathways contribute to human communication disorders.
  • Fedorenko, E., Patel, A., Casasanto, D., Winawer, J., & Gibson, E. (2009). Structural integration in language and music: Evidence for a shared system. Memory & Cognition, 37, 1-9. doi:10.3758/MC.37.1.1.

    Abstract

    In this study, we investigate whether language and music share cognitive resources for structural processing. We report an experiment that used sung materials and manipulated linguistic complexity (subject-extracted relative clauses, object-extracted relative clauses) and musical complexity (in-key critical note, out-of-key critical note, auditory anomaly on the critical note involving a loudness increase). The auditory-anomaly manipulation was included in order to test whether the difference between in-key and out-of-key conditions might be due to any salient, unexpected acoustic event. The critical dependent measure involved comprehension accuracies to questions about the propositional content of the sentences asked at the end of each trial. The results revealed an interaction between linguistic and musical complexity such that the difference between the subject- and object-extracted relative clause conditions was larger in the out-of-key condition than in the in-key and auditory-anomaly conditions. These results provide evidence for an overlap in structural processing between language and music.
  • Ferreri, L., & Verga, L. (2016). Benefits of music on verbal learning and memory: How and when does it work? Music Perception, 34(2), 167-182. doi:10.1525/mp.2016.34.2.167.

    Abstract

    A long-standing debate in cognitive neurosciences concerns the effect of music on verbal learning and memory. Research in this field has largely provided conflicting results in both clinical as well as non-clinical populations. Although several studies have shown a positive effect of music on the encoding and retrieval of verbal stimuli, music has also been suggested to hinder mnemonic performance by dividing attention. In an attempt to explain this conflict, we review the most relevant literature on the effects of music on verbal learning and memory. Furthermore, we specify several mechanisms through which music may modulate these cognitive functions. We suggest that the extent to which music boosts these cognitive functions relies on experimental factors, such as the relative complexity of musical and verbal stimuli employed. These factors should be carefully considered in further studies, in order to reliably establish how and when music boosts verbal memory and learning. The answers to these questions are not only crucial for our knowledge of how music influences cognitive and brain functions, but may have important clinical implications. Considering the increasing number of approaches using music as a therapeutic tool, the importance of understanding exactly how music works can no longer be underestimated.
  • Filippi, P. (2016). Emotional and Interactional Prosody across Animal Communication Systems: A Comparative Approach to the Emergence of Language. Frontiers in Psychology, 7: 1393. doi:10.3389/fpsyg.2016.01393.

    Abstract

    Across a wide range of animal taxa, prosodic modulation of the voice can express emotional information and is used to coordinate vocal interactions between multiple individuals. Within a comparative approach to animal communication systems, I hypothesize that the ability for emotional and interactional prosody (EIP) paved the way for the evolution of linguistic prosody – and perhaps also of music, continuing to play a vital role in the acquisition of language. In support of this hypothesis, I review three research fields: (i) empirical studies on the adaptive value of EIP in non-human primates, mammals, songbirds, anurans, and insects; (ii) the beneficial effects of EIP in scaffolding language learning and social development in human infants; (iii) the cognitive relationship between linguistic prosody and the ability for music, which has often been identified as the evolutionary precursor of language.
  • Filippi, P., Congdon, J. V., Hoang, J., Bowling, D. L., Reber, S. A., Pasukonis, A., Hoeschele, M., Ocklenburg, S., De Boer, B., Sturdy, C. B., Newen, A., & Güntürkün, O. (2017). Humans recognize emotional arousal in vocalizations across all classes of terrestrial vertebrates: Evidence for acoustic universals. Proceedings of the Royal Society B: Biological Sciences, 284: 20170990. doi:10.1098/rspb.2017.0990.

    Abstract

    Writing over a century ago, Darwin hypothesized that vocal expression of emotion dates back to our earliest terrestrial ancestors. If this hypothesis is true, we should expect to find cross-species acoustic universals in emotional vocalizations. Studies suggest that acoustic attributes of aroused vocalizations are shared across many mammalian species, and that humans can use these attributes to infer emotional content. But do these acoustic attributes extend to non-mammalian vertebrates? In this study, we asked human participants to judge the emotional content of vocalizations of nine vertebrate species representing three different biological classes—Amphibia, Reptilia (non-aves and aves) and Mammalia. We found that humans are able to identify higher levels of arousal in vocalizations across all species. This result was consistent across different language groups (English, German and Mandarin native speakers), suggesting that this ability is biologically rooted in humans. Our findings indicate that humans use multiple acoustic parameters to infer relative arousal in vocalizations for each species, but mainly rely on fundamental frequency and spectral centre of gravity to identify higher arousal vocalizations across species. These results suggest that fundamental mechanisms of vocal emotional expression are shared among vertebrates and could represent a homologous signalling system.
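
    As an illustration of one of the acoustic cues named above, the sketch below computes a spectral centre of gravity (the amplitude-weighted mean frequency of the magnitude spectrum) for two synthetic signals. The signals and parameters are invented for illustration; this is not the study's stimulus set or analysis code.

      # Illustrative sketch: spectral centre of gravity of a sound, one cue listeners
      # may use to judge arousal. Synthetic signals only; not the study's materials.
      import numpy as np

      def spectral_centre_of_gravity(signal, sample_rate):
          """Amplitude-weighted mean frequency of the magnitude spectrum (Hz)."""
          spectrum = np.abs(np.fft.rfft(signal))
          freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
          return float(np.sum(freqs * spectrum) / np.sum(spectrum))

      sample_rate = 16000
      t = np.arange(0, 0.5, 1.0 / sample_rate)
      call_a = np.sin(2 * np.pi * 220 * t)                     # energy concentrated in low frequencies
      call_b = call_a + 0.8 * np.sin(2 * np.pi * 2200 * t)     # added high-frequency energy

      print(spectral_centre_of_gravity(call_a, sample_rate))   # close to 220 Hz
      print(spectral_centre_of_gravity(call_b, sample_rate))   # pulled towards higher frequencies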
  • Filippi, P., Gogoleva, S. S., Volodina, E. V., Volodin, I. A., & De Boer, B. (2017). Humans identify negative (but not positive) arousal in silver fox vocalizations: Implications for the adaptive value of interspecific eavesdropping. Current Zoology, 63(4), 445-456. doi:10.1093/cz/zox035.

    Abstract

    The ability to identify emotional arousal in heterospecific vocalizations may facilitate behaviors that increase survival opportunities. Crucially, this ability may orient inter-species interactions, particularly between humans and other species. Research shows that humans identify emotional arousal in vocalizations across multiple species, such as cats, dogs, and piglets. However, no previous study has addressed humans' ability to identify emotional arousal in silver foxes. Here, we adopted low- and high-arousal calls emitted by three strains of silver fox – Tame, Aggressive, and Unselected – in response to human approach. Tame and Aggressive foxes are genetically selected for friendly and attacking behaviors toward humans, respectively. Unselected foxes show aggressive and fearful behaviors toward humans. These three strains show similar levels of emotional arousal, but different levels of emotional valence in relation to humans. This emotional information is reflected in the acoustic features of the calls. Our data suggest that humans can identify high-arousal calls of Aggressive and Unselected foxes, but not of Tame foxes. Further analyses revealed that, although within each strain different acoustic parameters affect human accuracy in identifying high-arousal calls, spectral center of gravity, harmonic-to-noise ratio, and F0 best predict humans' ability to discriminate high-arousal calls across all strains. Furthermore, we identified in spectral center of gravity and F0 the best predictors for humans' absolute ratings of arousal in each call. Implications for research on the adaptive value of inter-specific eavesdropping are discussed.

    Additional information

    zox035_Supp.zip
  • Filippi, P., Ocklenburg, S., Bowling, D. L., Heege, L., Güntürkün, O., Newen, A., & de Boer, B. (2017). More than words (and faces): evidence for a Stroop effect of prosody in emotion word processing. Cognition & Emotion, 31(5), 879-891. doi:10.1080/02699931.2016.1177489.

    Abstract

    Humans typically combine linguistic and nonlinguistic information to comprehend emotions. We adopted an emotion identification Stroop task to investigate how different channels interact in emotion communication. In experiment 1, synonyms of “happy” and “sad” were spoken with happy and sad prosody. Participants had more difficulty ignoring prosody than ignoring verbal content. In experiment 2, synonyms of “happy” and “sad” were spoken with happy and sad prosody, while happy or sad faces were displayed. Accuracy was lower when two channels expressed an emotion that was incongruent with the channel participants had to focus on, compared with the cross-channel congruence condition. When participants were required to focus on verbal content, accuracy was significantly lower also when prosody was incongruent with verbal content and face. This suggests that prosody biases emotional verbal content processing, even when conflicting with verbal content and face simultaneously. Implications for multimodal communication and language evolution studies are discussed.
  • Filippi, P., Jadoul, Y., Ravignani, A., Thompson, B., & de Boer, B. (2016). Seeking Temporal Predictability in Speech: Comparing Statistical Approaches on 18 World Languages. Frontiers in Human Neuroscience, 10: 586. doi:10.3389/fnhum.2016.00586.

    Abstract

    Temporal regularities in speech, such as interdependencies in the timing of speech events, are thought to scaffold early acquisition of the building blocks in speech. By providing on-line clues to the location and duration of upcoming syllables, temporal structure may aid segmentation and clustering of continuous speech into separable units. This hypothesis tacitly assumes that learners exploit predictability in the temporal structure of speech. Existing measures of speech timing tend to focus on first-order regularities among adjacent units, and are overly sensitive to idiosyncrasies in the data they describe. Here, we compare several statistical methods on a sample of 18 languages, testing whether syllable occurrence is predictable over time. Rather than looking for differences between languages, we aim to find across languages (using clearly defined acoustic, rather than orthographic, measures), temporal predictability in the speech signal which could be exploited by a language learner. First, we analyse distributional regularities using two novel techniques: a Bayesian ideal learner analysis, and a simple distributional measure. Second, we model higher-order temporal structure—regularities arising in an ordered series of syllable timings—testing the hypothesis that non-adjacent temporal structures may explain the gap between subjectively-perceived temporal regularities, and the absence of universally-accepted lower-order objective measures. Together, our analyses provide limited evidence for predictability at different time scales, though higher-order predictability is difficult to reliably infer. We conclude that temporal predictability in speech may well arise from a combination of individually weak perceptual cues at multiple structural levels, but is challenging to pinpoint.
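
    To give a concrete sense of what a simple distributional measure of temporal predictability could look like, the sketch below computes the coefficient of variation of inter-onset intervals between syllables; lower values indicate more regular, and hence more predictable, timing. This stand-in measure and the toy data are assumptions for illustration, not the analyses reported in the paper.

      # Illustrative stand-in for a simple distributional measure of temporal
      # regularity: coefficient of variation (CV) of syllable inter-onset intervals.
      import numpy as np

      def interval_cv(onsets_seconds):
          """CV of inter-onset intervals; lower = more regular (more predictable)."""
          intervals = np.diff(np.sort(np.asarray(onsets_seconds, dtype=float)))
          return float(intervals.std() / intervals.mean())

      regular_onsets = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]        # near-isochronous syllables
      irregular_onsets = [0.0, 0.12, 0.45, 0.50, 0.93, 1.0]  # irregular timing

      print(interval_cv(regular_onsets))    # close to 0
      print(interval_cv(irregular_onsets))  # substantially larger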
  • Filippi, P., Gingras, B., & Fitch, W. T. (2014). Pitch enhancement facilitates word learning across visual contexts. Frontiers in Psychology, 5: 1468. doi:10.3389/fpsyg.2014.01468.

    Abstract

    This study investigates word-learning using a new experimental paradigm that integrates three processes: (a) extracting a word out of a continuous sound sequence, (b) inferring its referential meanings in context, (c) mapping the segmented word onto its broader intended referent, such as other objects of the same semantic category, and to novel utterances. Previous work has examined the role of statistical learning and/or of prosody in each of these processes separately. Here, we combine these strands of investigation into a single experimental approach, in which participants viewed a photograph belonging to one of three semantic categories while hearing a complex, five-word utterance containing a target word. Six between-subjects conditions were tested with 20 adult participants each. In condition 1, the only cue to word-meaning mapping was the co-occurrence of word and referents. This statistical cue was present in all conditions. In condition 2, the target word was sounded at a higher pitch. In condition 3, random words were sounded at a higher pitch, creating an inconsistent cue. In condition 4, the duration of the target word was lengthened. In conditions 5 and 6, an extraneous acoustic cue and a visual cue were associated with the target word, respectively. Performance in this word-learning task was significantly higher than that observed with simple co-occurrence only when pitch prominence consistently marked the target word. We discuss implications for the pragmatic value of pitch marking as well as the relevance of our findings to language acquisition and language evolution.
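
    The co-occurrence cue that underlies condition 1 can be made concrete with a small sketch: across trials, the target word for a semantic category co-occurs with that category more often than any other word does. The trial set and nonsense words below are invented for illustration, not the experimental materials.

      # Illustrative sketch of cross-situational word-referent co-occurrence counting.
      from collections import Counter

      # Each trial pairs the words of an utterance with the referent category shown.
      trials = [
          (["tupi", "gola", "fandu"], "animal"),
          (["meku", "sarti", "besto"], "vehicle"),
          (["rinta", "tupi", "gola"], "animal"),
          (["besto", "meku", "lirum"], "vehicle"),
          (["tupi", "fandu", "rinta"], "animal"),
          (["sarti", "meku", "pralo"], "vehicle"),
      ]

      counts = Counter()
      for words, referent in trials:
          for word in words:
              counts[(word, referent)] += 1

      # The target words ("tupi", "meku") co-occur most consistently with one category.
      for word in sorted({w for ws, _ in trials for w in ws}):
          n, referent = max((n, r) for (w, r), n in counts.items() if w == word)
          print(f"{word} -> {referent} ({n} co-occurrences)")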
  • Filippi, P., Laaha, S., & Fitch, W. T. (2017). Utterance-final position and pitch marking aid word learning in school-age children. Royal Society Open Science, 4: 161035. doi:10.1098/rsos.161035.

    Abstract

    We investigated the effects of word order and prosody on word learning in school-age children. Third graders viewed photographs belonging to one of three semantic categories while hearing four-word nonsense utterances containing a target word. In the control condition, all words had the same pitch and, across trials, the position of the target word was varied systematically within each utterance. The only cue to word–meaning mapping was the co-occurrence of target words and referents. This cue was present in all conditions. In the Utterance-final condition, the target word always occurred in utterance-final position, and at the same fundamental frequency as all the other words of the utterance. In the Pitch peak condition, the position of the target word was varied systematically within each utterance across trials, and produced with pitch contrasts typical of infant-directed speech (IDS). In the Pitch peak + Utterance-final condition, the target word always occurred in utterance-final position, and was marked with a pitch contrast typical of IDS. Word learning occurred in all conditions except the control condition. Moreover, learning performance was significantly higher than that observed with simple co-occurrence (control condition) only for the Pitch peak + Utterance-final condition. We conclude that, for school-age children, the combination of words' utterance-final alignment and pitch enhancement boosts word learning.
  • Fisher, S. E., Stein, J. F., & Monaco, A. P. (1999). A genome-wide search strategy for identifying quantitative trait loci involved in reading and spelling disability (developmental dyslexia). European Child & Adolescent Psychiatry, 8(suppl. 3), S47-S51. doi:10.1007/PL00010694.

    Abstract

    Family and twin studies of developmental dyslexia have consistently shown that there is a significant heritable component for this disorder. However, any genetic basis for the trait is likely to be complex, involving reduced penetrance, phenocopy, heterogeneity and oligogenic inheritance. This complexity results in reduced power for traditional parametric linkage analysis, where specification of the correct genetic model is important. One strategy is to focus on large multigenerational pedigrees with severe phenotypes and/or apparent simple Mendelian inheritance, as has been successfully demonstrated for speech and language impairment. This approach is limited by the scarcity of such families. An alternative which has recently become feasible due to the development of high-throughput genotyping techniques is the analysis of large numbers of sib-pairs using allele-sharing methodology. This paper outlines our strategy for conducting a systematic genome-wide search for genes involved in dyslexia in a large number of affected sib-pair families from the UK. We use a series of psychometric tests to obtain different quantitative measures of reading deficit, which should correlate with different components of the dyslexia phenotype, such as phonological awareness and orthographic coding ability. This enables us to use QTL (quantitative trait locus) mapping as a powerful tool for localising genes which may contribute to reading and spelling disability.
  • Fisher, S. E., Marlow, A. J., Lamb, J., Maestrini, E., Williams, D. F., Richardson, A. J., Weeks, D. E., Stein, J. F., & Monaco, A. P. (1999). A quantitative-trait locus on chromosome 6p influences different aspects of developmental dyslexia. American Journal of Human Genetics, 64(1), 146-156. doi:10.1086/302190.

    Abstract

    Recent application of nonparametric-linkage analysis to reading disability has implicated a putative quantitative-trait locus (QTL) on the short arm of chromosome 6. In the present study, we use QTL methods to evaluate linkage to the 6p25-21.3 region in a sample of 181 sib pairs from 82 nuclear families that were selected on the basis of a dyslexic proband. We have assessed linkage directly for several quantitative measures that should correlate with different components of the phenotype, rather than using a single composite measure or employing categorical definitions of subtypes. Our measures include the traditional IQ/reading discrepancy score, as well as tests of word recognition, irregular-word reading, and nonword reading. Pointwise analysis by means of sib-pair trait differences suggests the presence, in 6p21.3, of a QTL influencing multiple components of dyslexia, in particular the reading of irregular words (P=.0016) and nonwords (P=.0024). A complementary statistical approach involving estimation of variance components supports these findings (irregular words, P=.007; nonwords, P=.0004). Multipoint analyses place the QTL within the D6S422-D6S291 interval, with a peak around markers D6S276 and D6S105 consistently identified by approaches based on trait differences (irregular words, P=.00035; nonwords, P=.0035) and variance components (irregular words, P=.007; nonwords, P=.0038). Our findings indicate that the QTL affects both phonological and orthographic skills and is not specific to phoneme awareness, as has been previously suggested. Further studies will be necessary to obtain a more precise localization of this QTL, which may lead to the isolation of one of the genes involved in developmental dyslexia.
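
    The sib-pair trait-difference logic referred to above can be illustrated with a Haseman–Elston-style regression: squared sib-pair trait differences are regressed on the proportion of alleles shared at a marker, and a negative slope is consistent with linkage. The sketch below uses simulated data and is a generic illustration of that idea, not the analysis pipeline of the study.

      # Illustrative Haseman-Elston-style sib-pair regression on simulated data.
      import numpy as np

      rng = np.random.default_rng(1)
      n_pairs = 181                                   # number of sib pairs, as in the abstract

      # Proportion of alleles shared identical-by-descent (IBD) at a marker: 0, 0.5 or 1.
      ibd = rng.choice([0.0, 0.5, 1.0], size=n_pairs, p=[0.25, 0.5, 0.25])
      # Simulate a trait where sharing more alleles makes sib pairs more alike.
      squared_diff = 2.0 - 1.2 * ibd + rng.normal(0.0, 0.5, size=n_pairs)

      # Ordinary least squares: squared trait difference ~ intercept + slope * IBD.
      X = np.column_stack([np.ones(n_pairs), ibd])
      intercept, slope = np.linalg.lstsq(X, squared_diff, rcond=None)[0]
      print(f"slope = {slope:.2f} (a negative slope is consistent with linkage)")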
  • Fisher, S. E., & Scharff, C. (2009). FOXP2 as a molecular window into speech and language [Review article]. Trends in Genetics, 25, 166-177. doi:10.1016/j.tig.2009.03.002.

    Abstract

    Rare mutations of the FOXP2 transcription factor gene cause a monogenic syndrome characterized by impaired speech development and linguistic deficits. Recent genomic investigations indicate that its downstream neural targets make broader impacts on common language impairments, bridging clinically distinct disorders. Moreover, the striking conservation of both FoxP2 sequence and neural expression in different vertebrates facilitates the use of animal models to study ancestral pathways that have been recruited towards human speech and language. Intriguingly, reduced FoxP2 dosage yields abnormal synaptic plasticity and impaired motor-skill learning in mice, and disrupts vocal learning in songbirds. Converging data indicate that Foxp2 is important for modulating the plasticity of relevant neural circuits. This body of research represents the first functional genetic forays into neural mechanisms contributing to human spoken language.
  • Fisher, S. E., Black, G. C. M., Lloyd, S. E., Wrong, O. M., Thakker, R. V., & Craig, I. W. (1994). Isolation and partial characterization of a chloride channel gene which is expressed in kidney and is a candidate for Dent's disease (an X-linked hereditary nephrolithiasis). Human Molecular Genetics, 3, 2053-2059.

    Abstract

    Dent's disease, an X-linked renal tubular disorder, is a form of Fanconi syndrome which is characterized by proteinuria, hypercalciuria, nephrocalcinosis, kidney stones and renal failure. Previous studies localised the gene responsible to Xp11.22, within a microdeletion involving the hypervariable locus DXS255. Further analysis using new probes which flank this locus indicate that the deletion is less than 515 kb. A 185 kb YAC containing DXS255 was used to screen a cDNA library from adult kidney in order to isolate coding sequences falling within the deleted region which may be implicated in the disease aetiology. We identified two clones which are evolutionarily conserved, and detect a 9.5 kb transcript which is expressed predominantly in the kidney. Sequence analysis of 780 bp of ORF from the clones suggests that the identified gene, termed hClC-K2, encodes a new member of the ClC family of voltage-gated chloride channels. Genomic fragments detected by the cDNA clones are completely absent in patients who have an associated microdeletion. On the basis of the expression pattern, proposed function and deletion mapping, hClC-K2 is a strong candidate for Dent's disease.
  • Fisher, S. E. (2017). Evolution of language: Lessons from the genome. Psychonomic Bulletin & Review, 24(1), 34-40. doi:10.3758/s13423-016-1112-8.

    Abstract

    The post-genomic era is an exciting time for researchers interested in the biology of speech and language. Substantive advances in molecular methodologies have opened up entire vistas of investigation that were not previously possible, or in some cases even imagined. Speculations concerning the origins of human cognitive traits are being transformed into empirically addressable questions, generating specific hypotheses that can be explicitly tested using data collected from both the natural world and experimental settings. In this article, I discuss a number of promising lines of research in this area. For example, the field has begun to identify genes implicated in speech and language skills, including not just disorders but also the normal range of abilities. Such genes provide powerful entry points for gaining insights into neural bases and evolutionary origins, using sophisticated experimental tools from molecular neuroscience and developmental neurobiology. At the same time, sequencing of ancient hominin genomes is giving us an unprecedented view of the molecular genetic changes that have occurred during the evolution of our species. Synthesis of data from these complementary sources offers an opportunity to robustly evaluate alternative accounts of language evolution. Of course, this endeavour remains challenging on many fronts, as I also highlight in the article. Nonetheless, such an integrated approach holds great potential for untangling the complexities of the capacities that make us human.
  • Fisher, V. J. (2017). Unfurling the wings of flight: Clarifying ‘the what’ and ‘the why’ of mental imagery use in dance. Research in Dance Education, 18(3), 252-272. doi:10.1080/14647893.2017.1369508.

    Abstract

    This article provides clarification regarding ‘the what’ and ‘the why’ of mental imagery use in dance. It proposes that mental images are invoked across sensory modalities and often combine internal and external perspectives. The content of images ranges from ‘direct’ body oriented simulations along a continuum employing analogous mapping through ‘semi-direct’ literal similarities to abstract metaphors. The reasons for employing imagery are diverse and often overlapping, affecting physical, affective (psychological) and cognitive domains. This paper argues that when dance uses imagery, it is mapping aspects of the world to the body via analogy. Such mapping informs and changes our understanding of both our bodies and the world. In this way, mental imagery use in dance is fundamentally a process of embodied cognition.
  • Fitz, H., & Chang, F. (2017). Meaningful questions: The acquisition of auxiliary inversion in a connectionist model of sentence production. Cognition, 166, 225-250. doi:10.1016/j.cognition.2017.05.008.

    Abstract

    Nativist theories have argued that language involves syntactic principles which are unlearnable from the input children receive. A paradigm case of these innate principles is the structure dependence of auxiliary inversion in complex polar questions (Chomsky, 1968, 1975, 1980). Computational approaches have focused on the properties of the input in explaining how children acquire these questions. In contrast, we argue that messages are structured in a way that supports structure dependence in syntax. We demonstrate this approach within a connectionist model of sentence production (Chang, 2009) which learned to generate a range of complex polar questions from a structured message without positive exemplars in the input. The model also generated different types of error in development that were similar in magnitude to those in children (e.g., auxiliary doubling, Ambridge, Rowland, & Pine, 2008; Crain & Nakayama, 1987). Through model comparisons we trace how meaning constraints and linguistic experience interact during the acquisition of auxiliary inversion. Our results suggest that auxiliary inversion rules in English can be acquired without innate syntactic principles, as long as it is assumed that speakers who ask complex questions express messages that are structured into multiple propositions.
  • FitzPatrick, I., & Indefrey, P. (2016). Accessing Conceptual Representations for Speaking [Editorial]. Frontiers in Psychology, 7: 1216. doi:10.3389/fpsyg.2016.01216.

    Abstract

    Systematic investigations into the role of semantics in the speech production process have remained elusive. This special issue aims at moving forward toward a more detailed account of how precisely conceptual information is used to access the lexicon in speaking and what corresponding format of conceptual representations needs to be assumed. The studies presented in this volume investigated effects of conceptual processing on different processing stages of language production, including sentence formulation, lemma selection, and word form access.
  • FitzPatrick, I., & Indefrey, P. (2014). Head start for target language in bilingual listening. Brain Research, 1542, 111-130. doi:10.1016/j.brainres.2013.10.014.

    Abstract

    In this study we investigated the availability of non-target language semantic features in bilingual speech processing. We recorded EEG from Dutch-English bilinguals who listened to spoken sentences in their L2 (English) or L1 (Dutch). In Experiments 1 and 3 the sentences contained an interlingual homophone. The sentence context was either biased towards the target language meaning of the homophone (target biased), the non-target language meaning (non-target biased), or neither meaning of the homophone (fully incongruent). These conditions were each compared to a semantically congruent control condition. In L2 sentences we observed an N400 in the non-target biased condition that had an earlier offset than the N400 to fully incongruent homophones. In the target biased condition, a negativity emerged that was later than the N400 to fully incongruent homophones. In L1 contexts, neither target biased nor non-target biased homophones yielded significant N400 effects (compared to the control condition). In Experiments 2 and 4 the sentences contained a language switch to a non-target language word that could be semantically congruent or incongruent. Semantically incongruent words (switched, and non-switched) elicited an N400 effect. The N400 to semantically congruent language-switched words had an earlier offset than the N400 to incongruent words. Both congruent and incongruent language switches elicited a Late Positive Component (LPC). These findings show that bilinguals activate both meanings of interlingual homophones irrespective of their contextual fit. In L2 contexts, the target-language meaning of the homophone has a head start over the non-target language meaning. The target-language head start is also evident for language switches from both L2-to-L1 and L1-to-L2.
  • Flecken, M., von Stutterheim, C., & Carroll, M. (2014). Grammatical aspect influences motion event perception: Evidence from a cross-linguistic non-verbal recognition task. Language and Cognition, 6(1), 45-78. doi:10.1017/langcog.2013.2.

    Abstract

    Using eye-tracking as a window on cognitive processing, this study investigates language effects on attention to motion events in a non-verbal task. We compare gaze allocation patterns by native speakers of German and Modern Standard Arabic (MSA), two languages that differ with regard to the grammaticalization of temporal concepts. Findings of the non-verbal task, in which speakers watch dynamic event scenes while performing an auditory distracter task, are compared to gaze allocation patterns which were obtained in an event description task, using the same stimuli. We investigate whether differences in the grammatical aspectual systems of German and MSA affect the extent to which endpoints of motion events are linguistically encoded and visually processed in the two tasks. In the linguistic task, we find clear language differences in endpoint encoding and in the eye-tracking data (attention to event endpoints) as well: German speakers attend to and linguistically encode endpoints more frequently than speakers of MSA. The fixation data in the non-verbal task show similar language effects, providing relevant insights with regard to the language-and-thought debate. The present study is one of the few studies that focus explicitly on language effects related to grammatical concepts, as opposed to lexical concepts.
  • Floyd, S. (2014). [Review of the book Flexible word classes: Typological studies of underspecified parts of speech ed. by Jan Rijkhoff and Eva van Lier]. Linguistics, 52, 1499-1502. doi:10.1515/ling-2014-0027.
  • Floyd, S. (2016). [Review of the book Fluent Selves: Autobiography, Person, and History in Lowland South America ed. by Suzanne Oakdale and Magnus Course]. Journal of Linguistic Anthropology, 26(1), 110-111. doi:10.1111/jola.12112.
  • Floyd, S. (2016). Modally hybrid grammar? Celestial pointing for time-of-day reference in Nheengatú. Language, 92(1), 31-64. doi:10.1353/lan.2016.0013.

    Abstract

    From the study of sign languages we know that the visual modality robustly supports the encoding of conventionalized linguistic elements, yet while the same possibility exists for the visual bodily behavior of speakers of spoken languages, such practices are often referred to as ‘gestural’ and are not usually described in linguistic terms. This article describes a practice of speakers of the Brazilian indigenous language Nheengatú of pointing to positions along the east-west axis of the sun’s arc for time-of-day reference, and illustrates how it satisfies any of the common criteria for linguistic elements, as a system of standardized and productive form-meaning pairings whose contributions to propositional meaning remain stable across contexts. First, examples from a video corpus of natural speech demonstrate these conventionalized properties of Nheengatú time reference across multiple speakers. Second, a series of video-based elicitation stimuli test several dimensions of its conventionalization for nine participants. The results illustrate why modality is not an a priori reason that linguistic properties cannot develop in the visual practices that accompany spoken language. The conclusion discusses different possible morphosyntactic and pragmatic analyses for such conventionalized visual elements and asks whether they might be more crosslinguistically common than we presently know.
  • Floyd, S., Manrique, E., Rossi, G., & Torreira, F. (2016). Timing of visual bodily behavior in repair sequences: Evidence from three languages. Discourse Processes, 53(3), 175-204. doi:10.1080/0163853X.2014.992680.

    Abstract

    This article expands the study of other-initiated repair in conversation—when one party signals a problem with producing or perceiving another’s turn at talk—into the domain of visual bodily behavior. It presents one primary cross-linguistic finding about the timing of visual bodily behavior in repair sequences: if the party who initiates repair accompanies their turn with a “hold”—when relatively dynamic movements are temporarily and meaningfully held static—this position will not be disengaged until the problem is resolved and the sequence closed. We base this finding on qualitative and quantitative analysis of corpora of conversational interaction from three unrelated languages representing two different modalities: Northern Italian, the Cha’palaa language of Ecuador, and Argentine Sign Language. The cross-linguistic similarities uncovered by this comparison suggest that visual bodily practices have been semiotized for similar interactive functions across different languages and modalities due to common pressures in face-to-face interaction.
  • Folia, V., & Petersson, K. M. (2014). Implicit structured sequence learning: An fMRI study of the structural mere-exposure effect. Frontiers in Psychology, 5: 41. doi:10.3389/fpsyg.2014.00041.

    Abstract

    In this event-related FMRI study we investigated the effect of five days of implicit acquisition on preference classification by means of an artificial grammar learning (AGL) paradigm based on the structural mere-exposure effect and preference classification using a simple right-linear unification grammar. This allowed us to investigate implicit AGL in a proper learning design by including baseline measurements prior to grammar exposure. After 5 days of implicit acquisition, the FMRI results showed activations in a network of brain regions including the inferior frontal (centered on BA 44/45) and the medial prefrontal regions (centered on BA 8/32). Importantly, and central to this study, the inclusion of a naive preference FMRI baseline measurement allowed us to conclude that these FMRI findings were the intrinsic outcomes of the learning process itself and not a reflection of a preexisting functionality recruited during classification, independent of acquisition. Support for the implicit nature of the knowledge utilized during preference classification on day 5 come from the fact that the basal ganglia, associated with implicit procedural learning, were activated during classification, while the medial temporal lobe system, associated with explicit declarative memory, was consistently deactivated. Thus, preference classification in combination with structural mere-exposure can be used to investigate structural sequence processing (syntax) in unsupervised AGL paradigms with proper learning designs.
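
    For readers unfamiliar with artificial grammar learning stimuli, the sketch below generates strings from a toy right-linear grammar, the general class of grammar mentioned above. The specific rules and symbols are invented for illustration and are not the grammar used in the study.

      # Illustrative generator for a toy right-linear grammar (each rule rewrites a
      # nonterminal to one terminal plus at most one nonterminal at the right edge).
      import random

      RULES = {
          "S": [("M", "A"), ("V", "B")],
          "A": [("X", "A"), ("R", "B"), ("X", None)],   # None ends the string
          "B": [("T", "B"), ("M", None)],
      }

      def generate(symbol="S"):
          terminal, next_symbol = random.choice(RULES[symbol])
          return terminal + (generate(next_symbol) if next_symbol else "")

      random.seed(3)
      print([generate() for _ in range(5)])             # e.g. grammatical strings such as 'MXRTM'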
  • Forkel, S. J., Thiebaut de Schotten, M., Dell’Acqua, F., Kalra, L., Murphy, D. G. M., Williams, S. C. R., & Catani, M. (2014). Anatomical predictors of aphasia recovery: a tractography study of bilateral perisylvian language networks. Brain, 137, 2027-2039. doi:10.1093/brain/awu113.

    Abstract

    Stroke-induced aphasia is associated with adverse effects on quality of life and the ability to return to work. For patients and clinicians the possibility of relying on valid predictors of recovery is an important asset in the clinical management of stroke-related impairment. Age, level of education, type and severity of initial symptoms are established predictors of recovery. However, anatomical predictors are still poorly understood. In this prospective longitudinal study, we intended to assess anatomical predictors of recovery derived from diffusion tractography of the perisylvian language networks. Our study focused on the arcuate fasciculus, a language pathway composed of three segments connecting Wernicke’s to Broca’s region (i.e. long segment), Wernicke’s to Geschwind’s region (i.e. posterior segment) and Broca’s to Geschwind’s region (i.e. anterior segment). In our study we were particularly interested in understanding how lateralization of the arcuate fasciculus impacts on severity of symptoms and their recovery. Sixteen patients (10 males; mean age 60 ± 17 years, range 28–87 years) underwent post stroke language assessment with the Revised Western Aphasia Battery and neuroimaging scanning within a fortnight from symptoms onset. Language assessment was repeated at 6 months. Backward elimination analysis identified a subset of predictor variables (age, sex, lesion size) to be introduced to further regression analyses. A hierarchical regression was conducted with the longitudinal aphasia severity as the dependent variable. The first model included the subset of variables as previously defined. The second model additionally introduced the left and right arcuate fasciculus (separate analysis for each segment). Lesion size was identified as the only independent predictor of longitudinal aphasia severity in the left hemisphere [beta = −0.630, t(−3.129), P = 0.011]. For the right hemisphere, age [beta = −0.678, t(–3.087), P = 0.010] and volume of the long segment of the arcuate fasciculus [beta = 0.730, t(2.732), P = 0.020] were predictors of longitudinal aphasia severity. Adding the volume of the right long segment to the first-level model increased the overall predictive power of the model from 28% to 57% [F(1,11) = 7.46, P = 0.02]. These findings suggest that different predictors of recovery are at play in the left and right hemisphere. The right hemisphere language network seems to be important in aphasia recovery after left hemispheric stroke.
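
    The hierarchical regression reported above amounts to asking whether adding a predictor (here, the volume of the right long segment) significantly increases the variance explained over a baseline model. The sketch below shows that nested-model comparison on simulated data; the variables, values, and effect sizes are assumptions, not the study's data or exact procedure.

      # Illustrative nested (hierarchical) regression: does adding one predictor
      # increase R^2 beyond a baseline model? Simulated data only.
      import numpy as np
      from scipy.stats import f as f_dist

      rng = np.random.default_rng(7)
      n = 16                                              # sample size, as in the study
      age = rng.normal(60, 15, n)
      lesion_size = rng.normal(10, 3, n)
      tract_volume = rng.normal(5, 1, n)
      outcome = 0.05 * age - 0.3 * lesion_size + 2.0 * tract_volume + rng.normal(0, 2, n)

      def r_squared(predictors, y):
          X = np.column_stack([np.ones(len(y))] + list(predictors))
          residuals = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
          return 1.0 - residuals.var() / y.var()

      r2_base = r_squared([age, lesion_size], outcome)
      r2_full = r_squared([age, lesion_size, tract_volume], outcome)

      df1, df2 = 1, n - 4                                 # one added predictor; 4 parameters in the full model
      F = ((r2_full - r2_base) / df1) / ((1.0 - r2_full) / df2)
      p = f_dist.sf(F, df1, df2)
      print(f"R^2 change = {r2_full - r2_base:.2f}, F({df1},{df2}) = {F:.2f}, p = {p:.3f}")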

    Additional information

    supplementary information
  • Forkel, S. J., Thiebaut de Schotten, M., Kawadler, J. M., Dell'Acqua, F., Danek, A., & Catani, M. (2014). The anatomy of fronto-occipital connections from early blunt dissections to contemporary tractography. Cortex, 56, 73-84. doi:10.1016/j.cortex.2012.09.005.

    Abstract

    The occipital and frontal lobes are anatomically distant yet functionally highly integrated to generate some of the most complex behaviour. A series of long associative fibres, such as the fronto-occipital networks, mediate this integration via rapid feed-forward propagation of visual input to anterior frontal regions and direct top–down modulation of early visual processing.

    Despite the vast number of anatomical investigations a general consensus on the anatomy of fronto-occipital connections is not forthcoming. For example, in the monkey the existence of a human equivalent of the ‘inferior fronto-occipital fasciculus’ (iFOF) has not been demonstrated. Conversely, a ‘superior fronto-occipital fasciculus’ (sFOF), also referred to as ‘subcallosal bundle’ by some authors, is reported in monkey axonal tracing studies but not in human dissections.

    In this study our aim is twofold. First, we use diffusion tractography to delineate the in vivo anatomy of the sFOF and the iFOF in 30 healthy subjects and three acallosal brains. Second, we provide a comprehensive review of the post-mortem and neuroimaging studies of the fronto-occipital connections published over the last two centuries, together with the first integral translation of Onufrowicz's original description of a human fronto-occipital fasciculus (1887) and Muratoff's report of the ‘subcallosal bundle’ in animals (1893).

    Our tractography dissections suggest that in the human brain (i) the iFOF is a bilateral association pathway connecting ventro-medial occipital cortex to orbital and polar frontal cortex, (ii) the sFOF overlaps with branches of the superior longitudinal fasciculus (SLF) and probably represents an ‘occipital extension’ of the SLF, (iii) the subcallosal bundle of Muratoff is probably a complex tract encompassing ascending thalamo-frontal and descending fronto-caudate connections and is therefore a projection rather than an associative tract.

    In conclusion, our experimental findings and review of the literature suggest that a ventral pathway in humans, namely the iFOF, mediates a direct communication between occipital and frontal lobes. Whether the iFOF represents a unique human pathway awaits further ad hoc investigations in animals.
  • Francisco, A. A., Groen, M. A., Jesse, A., & McQueen, J. M. (2017). Beyond the usual cognitive suspects: The importance of speechreading and audiovisual temporal sensitivity in reading ability. Learning and Individual Differences, 54, 60-72. doi:10.1016/j.lindif.2017.01.003.

    Abstract

    The aim of this study was to clarify whether audiovisual processing accounted for variance in reading and reading-related abilities, beyond the effect of a set of measures typically associated with individual differences in both reading and audiovisual processing. Testing adults with and without a diagnosis of dyslexia, we showed that—across all participants, and after accounting for variance in cognitive abilities—audiovisual temporal sensitivity contributed uniquely to variance in reading errors. This is consistent with previous studies demonstrating an audiovisual deficit in dyslexia. Additionally, we showed that speechreading (identification of speech based on visual cues from the talking face alone) was a unique contributor to variance in phonological awareness in dyslexic readers only: those who scored higher on speechreading, scored lower on phonological awareness. This suggests a greater reliance on visual speech as a compensatory mechanism when processing auditory speech is problematic. A secondary aim of this study was to better understand the nature of dyslexia. The finding that a sub-group of dyslexic readers scored low on phonological awareness and high on speechreading is consistent with a hybrid perspective of dyslexia: There are multiple possible pathways to reading impairment, which may translate into multiple profiles of dyslexia.
  • Francisco, A. A., Jesse, A., Groen, M. A., & McQueen, J. M. (2017). A general audiovisual temporal processing deficit in adult readers with dyslexia. Journal of Speech, Language, and Hearing Research, 60, 144-158. doi:10.1044/2016_JSLHR-H-15-0375.

    Abstract

    Purpose: Because reading is an audiovisual process, reading impairment may reflect an audiovisual processing deficit. The aim of the present study was to test the existence and scope of such a deficit in adult readers with dyslexia. Method: We tested 39 typical readers and 51 adult readers with dyslexia on their sensitivity to the simultaneity of audiovisual speech and nonspeech stimuli, their time window of audiovisual integration for speech (using incongruent /aCa/ syllables), and their audiovisual perception of phonetic categories. Results: Adult readers with dyslexia showed less sensitivity to audiovisual simultaneity than typical readers for both speech and nonspeech events. We found no differences between readers with dyslexia and typical readers in the temporal window of integration for audiovisual speech or in the audiovisual perception of phonetic categories. Conclusions: The results suggest an audiovisual temporal deficit in dyslexia that is not specific to speech-related events. But the differences found for audiovisual temporal sensitivity did not translate into a deficit in audiovisual speech perception. Hence, there seems to be a hiatus between simultaneity judgment and perception, suggesting a multisensory system that uses different mechanisms across tasks. Alternatively, it is possible that the audiovisual deficit in dyslexia is only observable when explicit judgments about audiovisual simultaneity are required.
  • Francks, C. (2009). Understanding the genetics of behavioural and psychiatric traits will only be achieved through a realistic assessment of their complexity. Laterality: Asymmetries of Body, Brain and Cognition, 14(1), 11-16. doi:10.1080/13576500802536439.

    Abstract

    Francks et al. (2007) performed a recent study in which the first putative genetic effect on human handedness was identified (the imprinted locus LRRTM1 on human chromosome 2). In this issue of Laterality, Tim Crow and colleagues present a critique of that study. The present paper presents a personal response to that critique which argues that Francks et al. (2007) published a substantial body of evidence implicating LRRTM1 in handedness and schizophrenia. Progress will now be achieved by others trying to validate, refute, or extend those findings, rather than by further armchair discussion.
  • Frank, M. C., Bergelson, E., Bergmann, C., Cristia, A., Floccia, C., Gervain, J., Hamlin, J. K., Hannon, E. E., Kline, M., Levelt, C., Lew-Williams, C., Nazzi, T., Panneton, R., Rabagliati, H., Soderstrom, M., Sullivan, J., Waxman, S., & Yurovsky, D. (2017). A collaborative approach to infant research: Promoting reproducibility, best practices, and theory-building. Infancy, 22(4), 421-435. doi:10.1111/infa.12182.

    Abstract

    The ideal of scientific progress is that we accumulate measurements and integrate these into theory, but recent discussion of replicability issues has cast doubt on whether psychological research conforms to this model. Developmental research—especially with infant participants—also has discipline-specific replicability challenges, including small samples and limited measurement methods. Inspired by collaborative replication efforts in cognitive and social psychology, we describe a proposal for assessing and promoting replicability in infancy research: large-scale, multi-laboratory replication efforts aiming for a more precise understanding of key developmental phenomena. The ManyBabies project, our instantiation of this proposal, will not only help us estimate how robust and replicable these phenomena are, but also gain new theoretical insights into how they vary across ages, linguistic communities, and measurement methods. This project has the potential for a variety of positive outcomes, including less-biased estimates of theoretically important effects, estimates of variability that can be used for later study planning, and a series of best-practices blueprints for future infancy research.
  • Frank, S. L., & Fitz, H. (2016). Reservoir computing and the Sooner-is-Better bottleneck [Commentary on Christiansen & Slater]. Behavioral and Brain Sciences, 39: e73. doi:10.1017/S0140525X15000783.

    Abstract

    Prior language input is not lost but integrated with the current input. This principle is demonstrated by “reservoir computing”: Untrained recurrent neural networks project input sequences onto a random point in high-dimensional state space. Earlier inputs can be retrieved from this projection, albeit less reliably so as more input is received. The bottleneck is therefore not “Now-or-Never” but “Sooner-is-Better”.
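
    A minimal sketch of the reservoir computing idea described above: an untrained, randomly connected recurrent network is driven by an input sequence, and a linear readout trained afterwards can recover inputs from several steps back, with accuracy typically falling off as the lag grows. Network size, spectral radius, and the binary input stream are arbitrary choices for illustration.

      # Illustrative echo-state-style reservoir: only the linear readout is trained.
      import numpy as np

      rng = np.random.default_rng(42)
      n_units, n_steps = 100, 2000

      W = rng.normal(0, 1, (n_units, n_units))
      W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))     # scale spectral radius to 0.9
      w_in = rng.normal(0, 1, n_units)

      inputs = rng.choice([-1.0, 1.0], size=n_steps)      # random binary input sequence
      states = np.zeros((n_steps, n_units))
      x = np.zeros(n_units)
      for t in range(n_steps):
          x = np.tanh(W @ x + w_in * inputs[t])           # untrained reservoir update
          states[t] = x

      # Train a linear readout to recover the input from `lag` steps earlier.
      for lag in (1, 10, 40):
          targets = inputs[:-lag]
          X = states[lag:]
          w_out = np.linalg.lstsq(X, targets, rcond=None)[0]
          accuracy = np.mean(np.sign(X @ w_out) == targets)
          print(f"lag {lag:2d}: retrieval accuracy {accuracy:.2f}")   # typically declines with lag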
  • Frank, S. L., & Willems, R. M. (2017). Word predictability and semantic similarity show distinct patterns of brain activity during language comprehension. Language, Cognition and Neuroscience, 32(9), 1192-1203. doi:10.1080/23273798.2017.1323109.

    Abstract

    We investigate the effects of two types of relationship between the words of a sentence or text – predictability and semantic similarity – by reanalysing electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) data from studies in which participants comprehend naturalistic stimuli. Each content word's predictability given previous words is quantified by a probabilistic language model, and semantic similarity to previous words is quantified by a distributional semantics model. Brain activity time-locked to each word is regressed on the two model-derived measures. Results show that predictability and semantic similarity have near identical N400 effects but are dissociated in the fMRI data, with word predictability related to activity in, among others, the visual word-form area, and semantic similarity related to activity in areas associated with the semantic network. This indicates that both predictability and similarity play a role during natural language comprehension and modulate distinct cortical regions.
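
    The two word-level measures described above can be sketched very simply: predictability as surprisal under a probabilistic language model (here a toy bigram model), and semantic similarity as the cosine between distributional word vectors. The counts and vectors below are invented for illustration and bear no relation to the models used in the study.

      # Illustrative word-level measures: bigram surprisal and cosine similarity.
      import math
      import numpy as np

      bigram_counts = {("the", "dog"): 20, ("the", "cat"): 10, ("the", "idea"): 1}
      unigram_counts = {"the": 31}

      def surprisal(prev, word):
          """-log2 P(word | prev) under a toy bigram model (no smoothing)."""
          return -math.log2(bigram_counts[(prev, word)] / unigram_counts[prev])

      vectors = {                                        # toy distributional word vectors
          "dog": np.array([0.9, 0.1, 0.2]),
          "cat": np.array([0.8, 0.2, 0.1]),
          "idea": np.array([0.1, 0.9, 0.7]),
      }

      def cosine(a, b):
          return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

      print(surprisal("the", "dog"), surprisal("the", "idea"))   # rarer continuation -> higher surprisal
      print(cosine(vectors["dog"], vectors["cat"]))              # semantically close -> higher similarity
      print(cosine(vectors["dog"], vectors["idea"]))             # semantically distant -> lower similarity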
  • Franke, B., Stein, J. L., Ripke, S., Anttila, V., Hibar, D. P., Van Hulzen, K. J. E., Arias-Vasquez, A., Smoller, J. W., Nichols, T. E., Neale, M. C., McIntosh, A. M., Lee, P., McMahon, F. J., Meyer-Lindenberg, A., Mattheisen, M., Andreassen, O. A., Gruber, O., Sachdev, P. S., Roiz-Santiañez, R., Saykin, A. J., Ehrlich, S., Mather, K. A., Turner, J. A., Schwarz, E., Thalamuthu, A., Yao, Y., Ho, Y. Y. W., Martin, N. G., Wright, M. J., Guadalupe, T., Fisher, S. E., Francks, C., Schizophrenia Working Group of the Psychiatric Genomics Consortium, ENIGMA Consortium, O’Donovan, M. C., Thompson, P. M., Neale, B. M., Medland, S. E., & Sullivan, P. F. (2016). Genetic influences on schizophrenia and subcortical brain volumes: large-scale proof of concept. Nature Neuroscience, 19, 420-431. doi:10.1038/nn.4228.

    Abstract

    Schizophrenia is a devastating psychiatric illness with high heritability. Brain structure and function differ, on average, between people with schizophrenia and healthy individuals. As common genetic associations are emerging for both schizophrenia and brain imaging phenotypes, we can now use genome-wide data to investigate genetic overlap. Here we integrated results from common variant studies of schizophrenia (33,636 cases, 43,008 controls) and volumes of several (mainly subcortical) brain structures (11,840 subjects). We did not find evidence of genetic overlap between schizophrenia risk and subcortical volume measures either at the level of common variant genetic architecture or for single genetic markers. These results provide a proof of concept (albeit based on a limited set of structural brain measures) and define a roadmap for future studies investigating the genetic covariance between structural or functional brain phenotypes and risk for psychiatric disorders.

    Additional information

    Franke_etal_2016_supp1.pdf
  • Franken, M. K., Acheson, D. J., McQueen, J. M., Eisner, F., & Hagoort, P. (2017). Individual variability as a window on production-perception interactions in speech motor control. The Journal of the Acoustical Society of America, 142(4), 2007-2018. doi:10.1121/1.5006899.

    Abstract

    An important part of understanding speech motor control consists of capturing the interaction between speech production and speech perception. This study tests a prediction of theoretical frameworks that have tried to account for these interactions: if speech production targets are specified in auditory terms, individuals with better auditory acuity should have more precise speech targets, evidenced by decreased within-phoneme variability and increased between-phoneme distance. A study was carried out consisting of perception and production tasks in counterbalanced order. Auditory acuity was assessed using an adaptive speech discrimination task, while production variability was determined using a pseudo-word reading task. Analyses of the production data were carried out to quantify average within-phoneme variability as well as average between-phoneme contrasts. Results show that individuals not only vary in their production and perceptual abilities, but that better discriminators have more distinctive vowel production targets (that is, targets with less within-phoneme variability and greater between-phoneme distances), confirming the initial hypothesis. This association between speech production and perception did not depend on local phoneme density in vowel space. This study suggests that better auditory acuity leads to more precise speech production targets, which may be a consequence of auditory feedback affecting speech production over time.
  • Frega, M., van Gestel, S. H. C., Linda, K., Van der Raadt, J., Keller, J., Van Rhijn, J. R., Schubert, D., Albers, C. A., & Kasri, N. N. (2017). Rapid neuronal differentiation of induced pluripotent stem cells for measuring network activity on micro-electrode arrays. Journal of Visualized Experiments, e45900. doi:10.3791/54900.

    Abstract

    Neurons derived from human induced Pluripotent Stem Cells (hiPSCs) provide a promising new tool for studying neurological disorders. In the past decade, many protocols for differentiating hiPSCs into neurons have been developed. However, these protocols are often slow with high variability, low reproducibility, and low efficiency. In addition, the neurons obtained with these protocols are often immature and lack adequate functional activity both at the single-cell and network levels unless the neurons are cultured for several months. Partially due to these limitations, the functional properties of hiPSC-derived neuronal networks are still not well characterized. Here, we adapt a recently published protocol that describes production of human neurons from hiPSCs by forced expression of the transcription factor neurogenin-2. This protocol is rapid (yielding mature neurons within 3 weeks) and efficient, with nearly 100% conversion efficiency of transduced cells (>95% of DAPI-positive cells are MAP2 positive). Furthermore, the protocol yields a homogeneous population of excitatory neurons that would allow the investigation of cell-type specific contributions to neurological disorders. We modified the original protocol by generating stably transduced hiPSCs, giving us explicit control over the total number of neurons. These cells are then used to generate hiPSC-derived neuronal networks on micro-electrode arrays. In this way, the spontaneous electrophysiological activity of hiPSC-derived neuronal networks can be measured and characterized, while retaining interexperimental consistency in terms of cell density. The presented protocol is broadly applicable, especially for mechanistic and pharmacological studies on human neuronal networks.

    Additional information

    video component of this article
  • French, C. A., & Fisher, S. E. (2014). What can mice tell us about Foxp2 function? Current Opinion in Neurobiology, 28, 72-79. doi:10.1016/j.conb.2014.07.003.

    Abstract

    Disruptions of the FOXP2 gene cause a rare speech and language disorder, a discovery that has opened up novel avenues for investigating the relevant neural pathways. FOXP2 shows remarkably high conservation of sequence and neural expression in diverse vertebrates, suggesting that studies in other species are useful in elucidating its functions. Here we describe how investigations of mice that carry disruptions of Foxp2 provide insights at multiple levels: molecules, cells, circuits and behaviour. Work thus far has implicated the gene in key processes including neurite outgrowth, synaptic plasticity, sensorimotor integration and motor-skill learning.
  • Freunberger, D., & Nieuwland, M. S. (2016). Incremental comprehension of spoken quantifier sentences: Evidence from brain potentials. Brain Research, 1646, 475-481. doi:10.1016/j.brainres.2016.06.035.

    Abstract

    Do people incrementally incorporate the meaning of quantifier expressions to understand an unfolding sentence? Most previous studies concluded that quantifiers do not immediately influence how a sentence is understood, based on the observation that online N400-effects differed from offline plausibility judgments. Those studies, however, used serial visual presentation (SVP), which involves unnatural reading. In the current ERP-experiment, we presented spoken positive and negative quantifier sentences (“Practically all/practically no postmen prefer delivering mail, when the weather is good/bad during the day”). Different from results obtained in a previously reported SVP-study (Nieuwland, 2016), sentence truth-value N400 effects occurred in positive and negative quantifier sentences alike, reflecting fully incremental quantifier comprehension. This suggests that the prosodic information available during spoken language comprehension supports the generation of online predictions for upcoming words and that, at least for quantifier sentences, comprehension of spoken language may proceed more incrementally than comprehension during SVP reading.
  • Friedlaender, J., Hunley, K., Dunn, M., Terrill, A., Lindström, E., Reesink, G., & Friedlaender, F. (2009). Linguistics more robust than genetics [Letter to the editor]. Science, 324, 464-465. doi:10.1126/science.324_464c.
  • Frost, R. L. A., Monaghan, P., & Tatsumi, T. (2017). Domain-general mechanisms for speech segmentation: The role of duration information in language learning. Journal of Experimental Psychology: Human Perception and Performance, 43(3), 466-476. doi:10.1037/xhp0000325.

    Abstract

    Speech segmentation is supported by multiple sources of information that may either inform language processing specifically, or serve learning more broadly. The Iambic/Trochaic Law (ITL), where increased duration indicates the end of a group and increased emphasis indicates the beginning of a group, has been proposed as a domain-general mechanism that also applies to language. However, language background has been suggested to modulate use of the ITL, meaning that these perceptual grouping preferences may instead be a consequence of language exposure. To distinguish between these accounts, we exposed native-English and native-Japanese listeners to sequences of speech (Experiment 1) and nonspeech stimuli (Experiment 2), and examined segmentation using a 2AFC task. Duration was manipulated over 3 conditions: sequences contained either an initial-item duration increase, or a final-item duration increase, or items of uniform duration. In Experiment 1, language background did not affect the use of duration as a cue for segmenting speech in a structured artificial language. In Experiment 2, the same results were found for grouping structured sequences of visual shapes. The results are consistent with proposals that duration information draws upon a domain-general mechanism that can apply to the special case of language acquisition.
  • Frost, R. L. A., & Monaghan, P. (2016). Simultaneous segmentation and generalisation of non-adjacent dependencies from continuous speech. Cognition, 147, 70-74. doi:10.1016/j.cognition.2015.11.010.

    Abstract

    Language learning requires mastering multiple tasks, including segmenting speech to identify words, and learning the syntactic role of these words within sentences. A key question in language acquisition research is the extent to which these tasks are sequential or simultaneous, and consequently whether they may be driven by distinct or similar computations. We explored a classic artificial language learning paradigm, where the language structure is defined in terms of non-adjacent dependencies. We show that participants are able to use the same statistical information at the same time to segment continuous speech to both identify words and to generalise over the structure, when the generalisations were over novel speech that the participants had not previously experienced. We suggest that, in the absence of evidence to the contrary, the most economical explanation for the effects is that speech segmentation and grammatical generalisation are dependent on similar statistical processing mechanisms.
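    As an illustration only (not materials from the study), the sketch below builds a continuous syllable stream whose tri-syllabic words carry a non-adjacent dependency: the first syllable predicts the third while the middle element varies. The specific syllables, frame counts, and function names are invented for the example.

    # Illustrative sketch; all syllables and names below are hypothetical.
    import random

    A_B_PAIRS = [("pel", "rud"), ("vot", "jic"), ("dak", "tood")]   # A ... B dependency frames
    MIDDLE_SYLLABLES = ["wadim", "kicey", "puser", "fengle"]        # variable X elements

    def make_word(rng):
        """One word of the artificial language: A and B co-occur non-adjacently, X varies."""
        a, b = rng.choice(A_B_PAIRS)
        return a + rng.choice(MIDDLE_SYLLABLES) + b

    def make_stream(n_words=300, seed=0):
        """Concatenate words without pauses, so learners must segment the stream statistically."""
        rng = random.Random(seed)
        return "".join(make_word(rng) for _ in range(n_words))

    print(make_stream(5))  # e.g. "pelwadimrudvotkiceyjic..." with no word boundaries marked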
  • Frost, R. L. A., & Monaghan, P. (2017). Sleep-driven computations in speech processing. PLoS One, 12(1): e0169538. doi:10.1371/journal.pone.0169538.

    Abstract

    Acquiring language requires segmenting speech into individual words, and abstracting over those words to discover grammatical structure. However, these tasks can be conflicting—on the one hand requiring memorisation of precise sequences that occur in speech, and on the other requiring a flexible reconstruction of these sequences to determine the grammar. Here, we examine whether speech segmentation and generalisation of grammar can occur simultaneously—with the conflicting requirements for these tasks being overcome by sleep-related consolidation. After exposure to an artificial language comprising words containing non-adjacent dependencies, participants underwent periods of consolidation involving either sleep or wake. Participants who slept before testing demonstrated a sustained boost to word learning and a short-term improvement to grammatical generalisation of the non-adjacencies, with improvements after sleep outweighing gains seen after an equal period of wake. Thus, we propose that sleep may facilitate processing for these conflicting tasks in language acquisition, but with enhanced benefits for speech segmentation.

    Additional information

    Data available
  • Fuhrmann, D., Ravignani, A., Marshall-Pescini, S., & Whiten, A. (2014). Synchrony and motor mimicking in chimpanzee observational learning. Scientific Reports, 4: 5283. doi:10.1038/srep05283.

    Abstract

    Cumulative tool-based culture underwrote our species' evolutionary success and tool-based nut-cracking is one of the strongest candidates for cultural transmission in our closest relatives, chimpanzees. However, the social learning processes that may explain both the similarities and differences between the species remain unclear. A previous study of nut-cracking by initially naïve chimpanzees suggested that a learning chimpanzee holding no hammer nevertheless replicated hammering actions it witnessed. This observation has potentially important implications for the nature of the social learning processes and underlying motor coding involved. In the present study, model and observer actions were quantified frame-by-frame and analysed with stringent statistical methods, demonstrating synchrony between the observer's and model's movements, cross-correlation of these movements above chance level and a unidirectional transmission process from model to observer. These results provide the first quantitative evidence for motor mimicking underlain by motor coding in apes, with implications for mirror neuron function.

    Additional information

    Supplementary Information
  • Furman, R., Kuntay, A., & Ozyurek, A. (2014). Early language-specificity of children's event encoding in speech and gesture: Evidence from caused motion in Turkish. Language, Cognition and Neuroscience, 29, 620-634. doi:10.1080/01690965.2013.824993.

    Abstract

    Previous research on language development shows that children are tuned early on to the language-specific semantic and syntactic encoding of events in their native language. Here we ask whether language-specificity is also evident in children's early representations in gesture accompanying speech. In a longitudinal study, we examined the spontaneous speech and cospeech gestures of eight Turkish-speaking children aged one to three and focused on their caused motion event expressions. In Turkish, unlike in English, the main semantic elements of caused motion such as Action and Path can be encoded in the verb (e.g. sok- ‘put in’) and the arguments of a verb can be easily omitted. We found that Turkish-speaking children's speech indeed displayed these language-specific features and focused on verbs to encode caused motion. More interestingly, we found that their early gestures also manifested specificity. Children used iconic cospeech gestures (from 19 months onwards) as often as pointing gestures and represented semantic elements such as Action with Figure and/or Path that reinforced or supplemented speech in language-specific ways until the age of three. In the light of previous reports on the scarcity of iconic gestures in English-speaking children's early productions, we argue that the language children learn shapes gestures and how they get integrated with speech in the first three years of life.
  • Ganushchak, L. Y., & Schiller, N. O. (2009). Speaking in one’s second language under time pressure: An ERP study on verbal self-monitoring in German-Dutch bilinguals. Psychophysiology, 46, 410-419. doi:10.1111/j.1469-8986.2008.00774.x.

    Abstract

    This study addresses how verbal self-monitoring and the Error-Related Negativity (ERN) are affected by time pressure when a task is performed in a second language as opposed to performance in the native language. German–Dutch bilinguals were required to perform a phoneme-monitoring task in Dutch with and without a time pressure manipulation. We obtained an ERN following verbal errors that showed an atypical increase in amplitude under time pressure. This finding is taken to suggest that under time pressure participants had more interference from their native language, which in turn led to a greater response conflict and thus enhancement of the amplitude of the ERN. This result demonstrates once more that the ERN is sensitive to psycholinguistic manipulations and suggests that the functioning of the verbal self-monitoring system during speaking is comparable to other performance monitoring, such as action monitoring.
  • Ganushchak, L., Konopka, A. E., & Chen, Y. (2014). What the eyes say about planning of focused referents during sentence formulation: a cross-linguistic investigation. Frontiers in Psychology, 5: 1124. doi:10.3389/fpsyg.2014.01124.

    Abstract

    This study investigated how sentence formulation is influenced by a preceding discourse context. In two eye-tracking experiments, participants described pictures of two-character transitive events in Dutch (Experiment 1) and Chinese (Experiment 2). Focus was manipulated by presenting questions before each picture. In the Neutral condition, participants first heard ‘What is happening here?’ In the Object or Subject Focus conditions, the questions asked about the Object or Subject character (What is the policeman stopping? Who is stopping the truck?). The target response was the same in all conditions (The policeman is stopping the truck). In both experiments, sentence formulation in the Neutral condition showed the expected pattern of speakers fixating the subject character (policeman) before the object character (truck). In contrast, in the focus conditions speakers rapidly directed their gaze preferentially only to the character they needed to encode to answer the question (the new, or focused, character). The timing of gaze shifts to the new character varied by language group (Dutch vs. Chinese): shifts to the new character occurred earlier when information in the question can be repeated in the response with the same syntactic structure (in Chinese but not in Dutch). The results show that discourse affects the timecourse of linguistic formulation in simple sentences and that these effects can be modulated by language-specific linguistic structures such as parallels in the syntax of questions and declarative sentences.
  • Ganushchak, L. Y., & Acheson, D. J. (Eds.). (2014). What's to be learned from speaking aloud? - Advances in the neurophysiological measurement of overt language production. [Research topic] [Special Issue]. Frontiers in Language Sciences. Retrieved from http://www.frontiersin.org/Language_Sciences/researchtopics/What_s_to_be_Learned_from_Spea/1671.

    Abstract

    Researchers have long avoided neurophysiological experiments of overt speech production due to the suspicion that artifacts caused by muscle activity may lead to a bad signal-to-noise ratio in the measurements. However, the need to actually produce speech may influence earlier processing and qualitatively change speech production processes and what we can infer from neurophysiological measures thereof. Recently, however, overt speech has been successfully investigated using EEG, MEG, and fMRI. The aim of this Research Topic is to draw together recent research on the neurophysiological basis of language production, with the aim of developing and extending theoretical accounts of the language production process. In this Research Topic of Frontiers in Language Sciences, we invite both experimental and review papers, as well as those about the latest methods in acquisition and analysis of overt language production data. All aspects of language production are welcome: i.e., from conceptualization to articulation during native as well as multilingual language production. Focus should be placed on using the neurophysiological data to inform questions about the processing stages of language production. In addition, emphasis should be placed on the extent to which the identified components of the electrophysiological signal (e.g., ERP/ERF, neuronal oscillations, etc.), brain areas or networks are related to language comprehension and other cognitive domains. By bringing together electrophysiological and neuroimaging evidence on language production mechanisms, a more complete picture of the locus of language production processes and their temporal and neurophysiological signatures will emerge.
