Publications

  • Devaraju, K., Miskinyte, G., Hansen, M. G., Monni, E., Tornero, D., Woods, N. B., Bengzon, J., Ahlenius, H., Lindvall, O., & Kokaia, Z. (2017). Direct conversion of human fibroblasts to functional excitatory cortical neurons integrating into human neural networks. Stem Cell Research & Therapy, 8: 207. doi:10.1186/s13287-017-0658-3.

    Abstract

    Background: Human fibroblasts can be directly converted to several subtypes of neurons, but cortical projection neurons have not been generated. Methods: Here we screened for transcription factor combinations that could potentially convert human fibroblasts to functional excitatory cortical neurons. The induced cortical (iCtx) cells were analyzed for cortical neuronal identity using immunocytochemistry, single-cell quantitative polymerase chain reaction (qPCR), electrophysiology, and their ability to integrate into human neural networks in vitro and ex vivo using electrophysiology and rabies virus tracing. Results: We show that a combination of three transcription factors, BRN2, MYT1L, and FEZF2, has the ability to directly convert human fibroblasts to functional excitatory cortical neurons. The conversion efficiency was increased to about 16% by treatment with small molecules and microRNAs. The iCtx cells exhibited electrophysiological properties of functional neurons, had pyramidal-like cell morphology, and expressed key cortical projection neuronal markers. Single-cell analysis of iCtx cells revealed a complex gene expression profile, a subpopulation of them displaying a molecular signature closely resembling that of human fetal primary cortical neurons. The iCtx cells received synaptic inputs from co-cultured human fetal primary cortical neurons, contained spines, and expressed the postsynaptic excitatory scaffold protein PSD95. When transplanted ex vivo to organotypic cultures of adult human cerebral cortex, the iCtx cells exhibited morphological and electrophysiological properties of mature neurons, integrated structurally into the cortical tissue, and received synaptic inputs from adult human neurons. Conclusions: Our findings indicate that functional excitatory cortical neurons, generated here for the first time by direct conversion of human somatic cells, have the capacity for synaptic integration into adult human cortex.
  • Dimroth, C. (1998). Indiquer la portée en allemand L2: Une étude longitudinale de l'acquisition des particules de portée [Marking scope in L2 German: A longitudinal study of the acquisition of scope particles]. AILE (Acquisition et Interaction en Langue étrangère), 11, 11-34.
  • Dingemanse, M., & Enfield, N. J. (2014). Ongeschreven regels van de taal [Unwritten rules of language]. Psyche en Brein, 6, 6-11.

    Abstract

    If you listen in on conversations around the world, you will notice that human dialogue follows universal rules. These rules guide and enrich our social interaction.
  • Dingemanse, M., & Akita, K. (2017). An inverse relation between expressiveness and grammatical integration: on the morphosyntactic typology of ideophones, with special reference to Japanese. Journal of Linguistics, 53(3), 501-532. doi:10.1017/S002222671600030X.

    Abstract

    Words and phrases may differ in the extent to which they are susceptible to prosodic foregrounding and expressive morphology: their expressiveness. They may also differ in the degree to which they are integrated in the morphosyntactic structure of the utterance: their grammatical integration. We describe an inverse relation that holds across widely varied languages, such that more expressiveness goes together with less grammatical integration, and vice versa. We review typological evidence for this inverse relation in 10 languages, then quantify and explain it using Japanese corpus data. We do this by tracking ideophones —vivid sensory words also known as mimetics or expressives— across different morphosyntactic contexts and measuring their expressiveness in terms of intonation, phonation and expressive morphology. We find that as expressiveness increases, grammatical integration decreases. Using gesture as a measure independent of the speech signal, we find that the most expressive ideophones are most likely to come together with iconic gestures. We argue that the ultimate cause is the encounter of two distinct and partly incommensurable modes of representation: the gradient, iconic, depictive system represented by ideophones and iconic gestures and the discrete, arbitrary, descriptive system represented by ordinary words. The study shows how people combine modes of representation in speech and demonstrates the value of integrating description and depiction into the scientific vision of language.

    Additional information

    Open data & R code
  • Dingemanse, M., Blythe, J., & Dirksmeyer, T. (2014). Formats for other-initiation of repair across languages: An exercise in pragmatic typology. Studies in Language, 38, 5-43. doi:10.1075/sl.38.1.01din.

    Abstract

    In conversation, people have to deal with problems of speaking, hearing, and understanding. We report on a cross-linguistic investigation of the conversational structure of other-initiated repair (also known as collaborative repair, feedback, requests for clarification, or grounding sequences). We take stock of formats for initiating repair across languages (comparable to English huh?, who?, y’mean X?, etc.) and find that different languages make available a wide but remarkably similar range of linguistic resources for this function. We exploit the patterned variation as evidence for several underlying concerns addressed by repair initiation: characterising trouble, managing responsibility, and handling knowledge. The concerns do not always point in the same direction and thus provide participants in interaction with alternative principles for selecting one format over possible others. By comparing conversational structures across languages, this paper contributes to pragmatic typology: the typology of systems of language use and the principles that shape them.
  • Dingemanse, M. (2011). Ideophones and the aesthetics of everyday language in a West-African society. The Senses & Society, 6(1), 77-85. doi:10.2752/174589311X12893982233830.

    Abstract

    This article explores language, culture, and the perceptual world as reflected in a particular linguistic device: ideophones, marked words that depict sensory imagery. Data from a range of elicitation tasks shows that ideophones are a key resource in talking about sensory perception in Siwu. Their use in everyday conversations underlines their communicative versatility while at the same time showing that people delight in their expressiveness. In ideophones, we have an expressive resource that combines sheer playfulness with extraordinary precision.
  • Dingemanse, M. (2014). Making new ideophones in Siwu: Creative depiction in conversation. Pragmatics and Society, 5(3), 384-405. doi:10.1075/ps.5.3.04din.

    Abstract

    Ideophones are found in many of the world’s languages. Though they are a major word class on a par with nouns and verbs, their origins are ill-understood, and the question of ideophone creation has been a source of controversy. This paper studies ideophone creation in naturally occurring speech. New, unconventionalised ideophones are identified using native speaker judgements, and are studied in context to understand the rules and regularities underlying their production and interpretation. People produce and interpret new ideophones with the help of the semiotic infrastructure that underlies the use of existing ideophones: foregrounding frames certain stretches of speech as depictive enactments of sensory imagery, and various types of iconicity link forms and meanings. As with any creative use of linguistic resources, context and common ground also play an important role in supporting rapid ‘good enough’ interpretations of new material. The making of new ideophones is a special case of a more general phenomenon of creative depiction: the art of presenting verbal material in such a way that the interlocutor recognises and interprets it as a depiction.
  • Dingemanse, M., & Enfield, N. J. (2014). Let's talk: Universal social rules underlie languages. Scientific American Mind, 25, 64-69. doi:10.1038/scientificamericanmind0914-64.

    Abstract

    Recent developments in the science of language signal the emergence of a new paradigm for language study: a social approach to the fundamental questions of what language is like, how much languages really have in common, and why only our species has it. The key to these developments is a new appreciation of the need to study everyday spoken language, with all its complications and ‘imperfections’, in a systematic way. The work reviewed in this article —on turn-taking, timing, and other-initiated repair in languages around the world— has important implications for our understanding of human sociality and sheds new light on the social shape of language. For the first time in the history of linguistics, we are no longer tied to what can be written down or thought up. Rather, we look at language as a biologist would: as it occurs in nature.
  • Dingemanse, M. (2017). Expressiveness and system integration: On the typology of ideophones, with special reference to Siwu. STUF - Language Typology and Universals, 70(2), 363-384. doi:10.1515/stuf-2017-0018.

    Abstract

    Ideophones are often described as words that are highly expressive and morphosyntactically marginal. A study of ideophones in everyday conversations in Siwu (Kwa, eastern Ghana) reveals a landscape of variation and change that sheds light on some larger questions in the morphosyntactic typology of ideophones. The article documents a trade-off between expressiveness and morphosyntactic integration, with high expressiveness linked to low integration and vice versa. It also describes a pathway for deideophonisation and finds that frequency of use is a factor that influences the degree to which ideophones can come to be more like ordinary words. The findings have implications for processes of (de)ideophonisation, ideophone borrowing, and ideophone typology. A key point is that the internal diversity we find in naturally occurring data, far from being mere noise, is patterned variation that can help us to get a handle on the factors shaping ideophone systems within and across languages.
  • Dingemanse, M., Rossi, G., & Floyd, S. (2017). Place reference in story beginnings: a cross-linguistic study of narrative and interactional affordances. Language in Society, 46(2), 129-158. doi:10.1017/S0047404516001019.

    Abstract

    People often begin stories in conversation by referring to person, time, and place. We study story beginnings in three societies and find place reference is recurrently used to (i) set the stage, foreshadowing the type of story and the kind of response due, and to (ii) make the story cohere, anchoring elements of the developing story. Recipients orient to these interactional affordances of place reference by responding in ways that attend to the relevance of place for the story and by requesting clarification when references are incongruent or noticeably absent. The findings are based on 108 story beginnings in three unrelated languages: Cha’palaa, a Barbacoan language of Ecuador; Northern Italian, a Romance language of Italy; and Siwu, a Kwa language of Ghana. The commonalities suggest we have identified generic affordances of place reference, and that storytelling in conversation offers a robust sequential environment for systematic comparative research on conversational structures.
  • Dolscheid, S., Hunnius, S., Casasanto, D., & Majid, A. (2014). Prelinguistic infants are sensitive to space-pitch associations found across cultures. Psychological Science, 25(6), 1256-1261. doi:10.1177/0956797614528521.

    Abstract

    People often talk about musical pitch using spatial metaphors. In English, for instance, pitches can be “high” or “low” (i.e., height-pitch association), whereas in other languages, pitches are described as “thin” or “thick” (i.e., thickness-pitch association). According to results from psychophysical studies, metaphors in language can shape people’s nonlinguistic space-pitch representations. But does language establish mappings between space and pitch in the first place, or does it only modify preexisting associations? To find out, we tested 4-month-old Dutch infants’ sensitivity to height-pitch and thickness-pitch mappings using a preferential-looking paradigm. The infants looked significantly longer at cross-modally congruent stimuli for both space-pitch mappings, which indicates that infants are sensitive to these associations before language acquisition. The early presence of space-pitch mappings means that these associations do not originate from language. Instead, language builds on preexisting mappings, changing them gradually via competitive associative learning. Space-pitch mappings that are language-specific in adults develop from mappings that may be universal in infants.
  • Dow, D. J., Huxley-Jones, J., Hall, J. M., Francks, C., Maycox, P. R., Kew, J. N., Gloger, I. S., Mehta, N. A., Kelly, F. M., Muglia, P., Breen, G., Jugurnauth, S., Pederoso, I., St.Clair, D., Rujescu, D., & Barnes, M. R. (2011). ADAMTSL3 as a candidate gene for schizophrenia: Gene sequencing and ultra-high density association analysis by imputation. Schizophrenia Research, 127(1-3), 28-34. doi:10.1016/j.schres.2010.12.009.

    Abstract

    We previously reported an association with a putative functional variant in the ADAMTSL3 gene, just below genome-wide significance in a genome-wide association study of schizophrenia. As variants impacting the function of ADAMTSL3 (a disintegrin-like and metalloprotease domain with thrombospondin type I motifs-like-3) could illuminate a novel disease mechanism and a potentially specific target, we have used complementary approaches to further evaluate the association. We imputed genotypes and performed high density association analysis using data from the HapMap and 1000 genomes projects. To review all variants that could potentially cause the association, and to identify additional possible pathogenic rare variants, we sequenced ADAMTSL3 in 92 schizophrenics. A total of 71 ADAMTSL3 variants were identified by sequencing, many were also seen in the 1000 genomes data, but 26 were novel. None of the variants identified by re-sequencing was in strong linkage disequilibrium (LD) with the associated markers. Imputation analysis refined association between ADAMTSL3 and schizophrenia, and highlighted additional common variants with similar levels of association. We evaluated the functional consequences of all variants identified by sequencing, or showing direct or imputed association. The strongest evidence for function remained with the originally associated variant, rs950169, suggesting that this variant may be causal of the association. Rare variants were also identified with possible functional impact. Our study confirms ADAMTSL3 as a candidate for further investigation in schizophrenia, using the variants identified here. The utility of imputation analysis is demonstrated, and we recommend wider use of this method to re-evaluate the existing canon of suggestive schizophrenia associations.
  • Drijvers, L., & Ozyurek, A. (2017). Visual context enhanced: The joint contribution of iconic gestures and visible speech to degraded speech comprehension. Journal of Speech, Language, and Hearing Research, 60, 212-222. doi:10.1044/2016_JSLHR-H-16-0101.

    Abstract

    Purpose: This study investigated whether and to what extent iconic co-speech gestures contribute to information from visible speech to enhance degraded speech comprehension at different levels of noise-vocoding. Previous studies of the contributions of these 2 visual articulators to speech comprehension have only been performed separately.

    Method: Twenty participants watched videos of an actress uttering an action verb and completed a free-recall task. The videos were presented in 3 speech conditions (2-band noise-vocoding, 6-band noise-vocoding, clear), 3 multimodal conditions (speech + lips blurred, speech + visible speech, speech + visible speech + gesture), and 2 visual-only conditions (visible speech, visible speech + gesture).

    Results: Accuracy levels were higher when both visual articulators were present compared with 1 or none. The enhancement effects of (a) visible speech, (b) gestural information on top of visible speech, and (c) both visible speech and iconic gestures were larger in 6-band than 2-band noise-vocoding or visual-only conditions. Gestural enhancement in 2-band noise-vocoding did not differ from gestural enhancement in visual-only conditions.
  • Dronkers, N. F., Wilkins, D. P., Van Valin Jr., R. D., Redfern, B. B., & Jaeger, J. J. (2004). Lesion analysis of the brain areas involved in language comprehension. Cognition, 92, 145-177. doi:10.1016/j.cognition.2003.11.002.

    Abstract

    The cortical regions of the brain traditionally associated with the comprehension of language are Wernicke's area and Broca's area. However, recent evidence suggests that other brain regions might also be involved in this complex process. This paper describes the opportunity to evaluate a large number of brain-injured patients to determine which lesioned brain areas might affect language comprehension. Sixty-four chronic left hemisphere stroke patients were evaluated on 11 subtests of the Curtiss–Yamada Comprehensive Language Evaluation – Receptive (CYCLE-R; Curtiss, S., & Yamada, J. (1988). Curtiss–Yamada Comprehensive Language Evaluation. Unpublished test, UCLA). Eight right hemisphere stroke patients and 15 neurologically normal older controls also participated. Patients were required to select a single line drawing from an array of three or four choices that best depicted the content of an auditorily-presented sentence. Patients' lesions obtained from structural neuroimaging were reconstructed onto templates and entered into a voxel-based lesion-symptom mapping (VLSM; Bates, E., Wilson, S., Saygin, A. P., Dick, F., Sereno, M., Knight, R. T., & Dronkers, N. F. (2003). Voxel-based lesion-symptom mapping. Nature Neuroscience, 6(5), 448–450.) analysis along with the behavioral data. VLSM is a brain–behavior mapping technique that evaluates the relationships between areas of injury and behavioral performance in all patients on a voxel-by-voxel basis, similar to the analysis of functional neuroimaging data. Results indicated that lesions to five left hemisphere brain regions affected performance on the CYCLE-R, including the posterior middle temporal gyrus and underlying white matter, the anterior superior temporal gyrus, the superior temporal sulcus and angular gyrus, mid-frontal cortex in Brodmann's area 46, and Brodmann's area 47 of the inferior frontal gyrus. Lesions to Broca's and Wernicke's areas were not found to significantly alter language comprehension on this particular measure. Further analysis suggested that the middle temporal gyrus may be more important for comprehension at the word level, while the other regions may play a greater role at the level of the sentence. These results are consistent with those seen in recent functional neuroimaging studies and offer complementary data in the effort to understand the brain areas underlying language comprehension.
  • Drozdova, P., Van Hout, R., & Scharenborg, O. (2017). L2 voice recognition: The role of speaker-, listener-, and stimulus-related factors. The Journal of the Acoustical Society of America, 142(5), 3058-3068. doi:10.1121/1.5010169.

    Abstract

    Previous studies examined various factors influencing voice recognition and learning with mixed results. The present study investigates the separate and combined contribution of these various speaker-, stimulus-, and listener-related factors to voice recognition. Dutch listeners, with arguably incomplete phonological and lexical knowledge in the target language, English, learned to recognize the voice of four native English speakers, speaking in English, during four-day training. Training was successful and listeners' accuracy was shown to be influenced by the acoustic characteristics of speakers and the sound composition of the words used in the training, but not by lexical frequency of the words, nor the lexical knowledge of the listeners or their phonological aptitude. Although not conclusive, listeners with a lower working memory capacity seemed to be slower in learning voices than listeners with a higher working memory capacity. The results reveal that speaker-related, listener-related, and stimulus-related factors accumulate in voice recognition, while lexical information turns out not to play a role in successful voice learning and recognition. This implies that voice recognition operates at the prelexical processing level.
  • Drude, S. (2006). Documentação lingüística: O formato de anotação de textos [Language documentation: The format of text annotation]. Cadernos de Estudos Lingüísticos, 35, 27-51.

    Abstract

    This paper presents the methods of language documentation as applied in the Awetí Language Documentation Project, one of the projects in the Documentation of Endangered Languages Programme (DOBES). It describes the steps of how a large digital corpus of annotated multi-media data is built. Special attention is devoted to the format of annotation of linguistic data. The Advanced Glossing format is presented and justified.
  • Drude, S. (2011). Nominalization and subordination in Awetí. Amerindia, 35, 189-218.

    Abstract

    This paper describes the different kinds of nominalizations and the main forms used in subordination in Awetí, a Tupian language spoken by ca. 150 people in central Brazil in the Upper Xingu area. Awetí does not belong to, but is arguably the closest relative of the well-known Tupí-Guaraní subfamily, the largest branch of the Tupí stock. In our analysis, subordination in Awetí is achieved by means of forms which may have developed from nominalizations, but which are synchronically perhaps best classified as verbal moods belonging to the verbal paradigm. On the other hand, nouns (and in particular nouns derived from verbs) often appear as predicates, especially in equality and cleft sentences.
  • Drude, S., Broeder, D., & Trilsbeek, P. (2014). The Language Archive and its solutions for sustainable endangered languages corpora. Book 2.0, 4, 5-20. doi:10.1386/btwo.4.1-2.5_1.

    Abstract

    Since the late 1990s, the technical group at the Max-Planck-Institute for Psycholinguistics has worked on solutions for important challenges in building sustainable data archives, in particular how to guarantee the long-term availability of digital research data for future research. The support for the well-known DOBES (Documentation of Endangered Languages) programme has greatly inspired and advanced this work, and led to the ongoing development of a whole suite of tools for annotating, cataloguing and archiving multi-media data. At the core of the LAT (Language Archiving Technology) tools is the IMDI metadata schema, now being integrated into a larger network of digital resources in the European CLARIN project. The multi-media annotator ELAN (with its web-based cousin ANNEX) is now well known not only among documentary linguists. We aim at presenting an overview of the solutions, both achieved and in development, for creating and exploiting sustainable digital data, in particular in the area of documenting languages and cultures, and their interfaces with other related developments.
  • Drude, S. (2011). Word accent and its manifestation in Awetí. Amerindia, 35, 7-40.

    Abstract

    This paper describes the distribution and phonetic properties of accentuation of word forms in Awetí, a Tupian language spoken by ca. 150 people in central Brazil in the Upper Xingu area. Awetí does not belong to, but is arguably the closest relative of the better known Tupí-Guaraní subfamily, the largest branch of the Tupí stock. After a short overview of the word classes and general phonotactics of Awetí (section 2), we briefly discuss the notion ‘word accent’ and show that, in Awetí, it is generally located on the last syllable of the stem in morphologically simple forms (section 3). We then discuss regular and isolated exceptions to this rule (section 4). In section 5, we describe the distribution of the word accent when inflectional or derivational suffixes are present – usually, the word accent of the word form with suffixes continues to be on the last syllable of the stem. After this descriptive part, we present a preliminary study of the acoustic-phonetic details of the manifestation of the word accent, observing word forms in isolation (section 6) and in different syntactic contexts (section 7). The results are briefly summarized in the conclusion (section 8).
  • Dufau, S., Duñabeitia, J. A., Moret-Tatay, C., McGonigal, A., Peeters, D., Alario, F.-X., Balota, D. A., Brysbaert, M., Carreiras, M., Ferrand, L., Ktori, M., Perea, M., Rastle, K., Sasburg, O., Yap, M. J., Ziegler, J. C., & Grainger, J. (2011). Smart phone, smart science: How the use of smartphones can revolutionize research in cognitive science. PLoS One, 6(9), e24974. doi:10.1371/journal.pone.0024974.

    Abstract

    Investigating human cognitive faculties such as language, attention, and memory most often relies on testing small and homogeneous groups of volunteers coming to research facilities where they are asked to participate in behavioral experiments. We show that this limitation and sampling bias can be overcome by using smartphone technology to collect data in cognitive science experiments from thousands of subjects from all over the world. This mass coordinated use of smartphones creates a novel and powerful scientific “instrument” that yields the data necessary to test universal theories of cognition. This increase in power represents a potential revolution in cognitive science.
  • Dunn, M., Burenhult, N., Kruspe, N., Tufvesson, S., & Becker, N. (2011). Aslian linguistic prehistory: A case study in computational phylogenetics. Diachronica, 28, 291-323. doi:10.1075/dia.28.3.01dun.

    Abstract

    This paper analyzes newly collected lexical data from 26 languages of the Aslian subgroup of the Austroasiatic language family using computational phylogenetic methods. We show the most likely topology of the Aslian family tree, discuss rooting and external relationships to other Austroasiatic languages, and investigate differences in the rates of diversification of different branches. Evidence is given supporting the classification of Jah Hut as a fourth top level subgroup of the family. The phylogenetic positions of known geographic and linguistic outlier languages are clarified, and the relationships of the little studied Aslian languages of Southern Thailand to the rest of the family are explored.
  • Dunn, M. (2014). [Review of the book Evolutionary Linguistics by April McMahon and Robert McMahon]. American Anthropologist, 116(3), 690-691.
  • Dunn, M. (2006). [Review of the book Comparative Chukotko-Kamchatkan dictionary by Michael Fortescue]. Anthropological Linguistics, 48(3), 296-298.
  • Dunn, M., Greenhill, S. J., Levinson, S. C., & Gray, R. D. (2011). Evolved structure of language shows lineage-specific trends in word-order universals. Nature, 473, 79-82. doi:10.1038/nature09923.

    Abstract

    Languages vary widely but not without limit. The central goal of linguistics is to describe the diversity of human languages and explain the constraints on that diversity. Generative linguists following Chomsky have claimed that linguistic diversity must be constrained by innate parameters that are set as a child learns a language [1, 2]. In contrast, other linguists following Greenberg have claimed that there are statistical tendencies for co-occurrence of traits reflecting universal systems biases [3, 4, 5], rather than absolute constraints or parametric variation. Here we use computational phylogenetic methods to address the nature of constraints on linguistic diversity in an evolutionary framework [6]. First, contrary to the generative account of parameter setting, we show that the evolution of only a few word-order features of languages are strongly correlated. Second, contrary to the Greenbergian generalizations, we show that most observed functional dependencies between traits are lineage-specific rather than universal tendencies. These findings support the view that—at least with respect to word order—cultural evolution is the primary factor that determines linguistic structure, with the current state of a linguistic system shaping and constraining future states.

    Additional information

    Supplementary information
  • Eaves, L. J., St Pourcain, B., Smith, G. D., York, T. P., & Evans, D. M. (2014). Resolving the Effects of Maternal and Offspring Genotype on Dyadic Outcomes in Genome Wide Complex Trait Analysis (“M-GCTA”). Behavior Genetics, 44(5), 445-455. doi:10.1007/s10519-014-9666-6.

    Abstract

    Genome wide complex trait analysis (GCTA) is extended to include environmental effects of the maternal genotype on offspring phenotype (“maternal effects”, M-GCTA). The model includes parameters for the direct effects of the offspring genotype, maternal effects and the covariance between direct and maternal effects. Analysis of simulated data, conducted in OpenMx, confirmed that model parameters could be recovered by full information maximum likelihood (FIML) and evaluated the biases that arise in conventional GCTA when indirect genetic effects are ignored. Estimates derived from FIML in OpenMx showed very close agreement to those obtained by restricted maximum likelihood using the published algorithm for GCTA. The method was also applied to illustrative perinatal phenotypes from ~4,000 mother-offspring pairs from the Avon Longitudinal Study of Parents and Children. The relative merits of extended GCTA in contrast to quantitative genetic approaches based on analyzing the phenotypic covariance structure of kinships are considered.
  • Ebisch, S. J., Gallese, V., Willems, R. M., Mantini, D., Groen, W. B., Romani, G. L., Buitelaar, J. K., & Bekkering, H. (2011). Altered intrinsic functional connectivity of anterior and posterior insular regions in high-functioning participants with autism spectrum disorder. Human Brain Mapping, 32, 1013-1028. doi:10.1002/hbm.21085.

    Abstract

    Impaired understanding of others' sensations and emotions as well as abnormal experience of their own emotions and sensations is frequently reported in individuals with Autism Spectrum Disorder (ASD). It is hypothesized that these abnormalities are based on altered connectivity within “shared” neural networks involved in emotional awareness of self and others. The insula is considered a central brain region in a network underlying these functions, being located at the transition of information about bodily arousal and the physiological state of the body to subjective feelings. The present study investigated the intrinsic functional connectivity properties of the insula in 14 high-functioning participants with ASD (HF-ASD) and 15 typically developing (TD) participants in the age range between 12 and 20 years by means of “resting state” or “nontask” functional magnetic resonance imaging. Essentially, a distinction was made between anterior and posterior regions of the insular cortex. The results show a reduced functional connectivity in the HF-ASD group, compared with the TD group, between anterior as well as posterior insula and specific brain regions involved in emotional and sensory processing. It is suggested that functional abnormalities in a network involved in emotional and interoceptive awareness might be at the basis of altered emotional experiences and impaired social abilities in ASD, and that these abnormalities are partly based on the intrinsic functional connectivity properties of such a network.
  • Eerland, A., Guadalupe, T. M., & Zwaan, R. A. (2011). Leaning to the left makes the Eiffel Tower seem smaller: Posture-modulated estimation. Psychological Science, 22, 1511-1514. doi:10.1177/0956797611420731.

    Abstract

    In two experiments, we investigated whether body posture influences people’s estimation of quantities. According to the mental-number-line theory, people mentally represent numbers along a line with smaller numbers on the left and larger numbers on the right. We hypothesized that surreptitiously making people lean to the right or to the left would affect their quantitative estimates. Participants answered estimation questions while standing on a Wii Balance Board. Posture was manipulated within subjects so that participants answered some questions while they leaned slightly to the left, some questions while they leaned slightly to the right, and some questions while they stood upright. Crucially, participants were not aware of this manipulation. Estimates were significantly smaller when participants leaned to the left than when they leaned to the right.

    Additional information

    Eerland_2011_Suppl_mat.pdf
  • Eising, E., Shyti, R., 't Hoen, P. A. C., Vijfhuizen, L. S., Huisman, S. M. H., Broos, L. A. M., Mahfouz, A., Reinders, M. J. T., Ferrari, M. D., Tolner, E. A., De Vries, B., & Van den Maagdenberg, A. M. J. M. (2017). Cortical spreading depression causes unique dysregulation of inflammatory pathways in a transgenic mouse model of migraine. Molecular Neurobiology, 54(4), 2986-2996. doi:10.1007/s12035-015-9681-5.

    Abstract

    Familial hemiplegic migraine type 1 (FHM1) is a rare monogenic subtype of migraine with aura caused by mutations in CACNA1A that encodes the α1A subunit of voltage-gated CaV2.1 calcium channels. Transgenic knock-in mice that carry the human FHM1 R192Q missense mutation (‘FHM1 R192Q mice’) exhibit an increased susceptibility to cortical spreading depression (CSD), the mechanism underlying migraine aura. Here, we analysed gene expression profiles from isolated cortical tissue of FHM1 R192Q mice 24 h after experimentally induced CSD in order to identify molecular pathways affected by CSD. Gene expression profiles were generated using deep serial analysis of gene expression sequencing. Our data reveal a signature of inflammatory signalling upon CSD in the cortex of both mutant and wild-type mice. However, only in the brains of FHM1 R192Q mice specific genes are up-regulated in response to CSD that are implicated in interferon-related inflammatory signalling. Our findings show that CSD modulates inflammatory processes in both wild-type and mutant brains, but that an additional unique inflammatory signature becomes expressed after CSD in a relevant mouse model of migraine.
  • Eising, E., Pelzer, N., Vijfhuizen, L. S., De Vries, B., Ferrari, M. D., 'T Hoen, P. A. C., Terwindt, G. M., & Van den Maagdenberg, A. M. J. M. (2017). Identifying a gene expression signature of cluster headache in blood. Scientific Reports, 7: 40218. doi:10.1038/srep40218.

    Abstract

    Cluster headache is a relatively rare headache disorder, typically characterized by multiple daily, short-lasting attacks of excruciating, unilateral (peri-)orbital or temporal pain associated with autonomic symptoms and restlessness. To better understand the pathophysiology of cluster headache, we used RNA sequencing to identify differentially expressed genes and pathways in whole blood of patients with episodic (n = 19) or chronic (n = 20) cluster headache in comparison with headache-free controls (n = 20). Gene expression data were analysed by gene and by module of co-expressed genes with particular attention to previously implicated disease pathways including hypocretin dysregulation. Only moderate gene expression differences were identified and no associations were found with previously reported pathogenic mechanisms. At the level of functional gene sets, associations were observed for genes involved in several brain-related mechanisms such as GABA receptor function and voltage-gated channels. In addition, genes and modules of co-expressed genes showed a role for intracellular signalling cascades, mitochondria and inflammation. Although larger study samples may be required to identify the full range of involved pathways, these results indicate a role for mitochondria, intracellular signalling and inflammation in cluster headache.

    Additional information

    Eising_etal_2017sup.pdf
  • Eisner, F., & McQueen, J. M. (2006). Perceptual learning in speech: Stability over time (L). Journal of the Acoustical Society of America, 119(4), 1950-1953. doi:10.1121/1.2178721.

    Abstract

    Perceptual representations of phonemes are flexible and adapt rapidly to accommodate idiosyncratic articulation in the speech of a particular talker. This letter addresses whether such adjustments remain stable over time and under exposure to other talkers. During exposure to a story, listeners learned to interpret an ambiguous sound as [f] or [s]. Perceptual adjustments measured after 12 h were as robust as those measured immediately after learning. Equivalent effects were found when listeners heard speech from other talkers in the 12 h interval, and when they had the opportunity to consolidate learning during sleep.
  • Enfield, N. J., & Levinson, S. C. (Eds.). (2006). Roots of human sociality: Culture, cognition and interaction. Oxford: Berg.
  • Enfield, N. J. (2004). On linear segmentation and combinatorics in co-speech gesture: A symmetry-dominance construction in Lao fish trap descriptions. Semiotica, 149(1/4), 57-123. doi:10.1515/semi.2004.038.
  • Enfield, N. J. (2011). Books that live and die [Book review]. Current Anthropology, 52(1), 129-131. doi:10.1086/657928.

    Abstract

    Reviewed work(s): Dying Words: Endangered Languages and What They Have to Tell Us. By Nicholas Evans. Indianapolis: Wiley-Blackwell, 2010. On the Death and Life of Languages. By Claude Hagège, translated by Jody Gladding. New Haven, CT: Yale University Press, 2009.
  • Enfield, N. J. (2011). Credit tests [Review of the book You are not a gadget by Jaron Lanier]. The Times Literary Supplement, February 18, 2011, 12.
  • Enfield, N. J., Majid, A., & Van Staden, M. (2006). Cross-linguistic categorisation of the body: Introduction. Language Sciences, 28(2-3), 137-147. doi:10.1016/j.langsci.2005.11.001.

    Abstract

    The domain of the human body is an ideal focus for semantic typology, since the body is a physical universal and all languages have terms referring to its parts. Previous research on body part terms has depended on secondary sources (e.g. dictionaries), and has lacked sufficient detail or clarity for a thorough understanding of these terms’ semantics. The present special issue is the outcome of a collaborative project aimed at improving approaches to investigating the semantics of body part terms, by developing materials to elicit information that provides for cross-linguistic comparison. The articles in this volume are original fieldwork-based descriptions of terminology for parts of the body in ten languages. Also included are an elicitation guide and experimental protocol used in gathering data. The contributions provide inventories of body part terms in each language, with analysis of both intensional and extensional aspects of meaning, differences in morphological complexity, semantic relations among terms, and discussion of partonomic structure within the domain.
  • Enfield, N. J. (2006). Elicitation guide on parts of the body. Language Sciences, 28(2-3), 148-157. doi:10.1016/j.langsci.2005.11.003.

    Abstract

    This document is intended for use as an elicitation guide for the field linguist consulting with native speakers in collecting terms for parts of the body, and in the exploration of their semantics.
  • Enfield, N. J. (2006). [Review of the book A grammar of Semelai by Nicole Kruspe]. Linguistic Typology, 10(3), 452-455. doi:10.1515/LINGTY.2006.014.
  • Enfield, N. J. (2006). Languages as historical documents: The endangered archive in Laos. South East Asia Research, 14(3), 471-488.

    Abstract

    This paper reviews current discussion of the issue of just what is lost when a language dies. Special reference is made to the current situation in Laos, a country renowned for its considerable cultural and linguistic diversity. It focuses on the historical, anthropological and ecological knowledge that a language can encode, and the social and cultural consequences of the loss of such traditional knowledge when a language is no longer passed on. Finally, the article points out the paucity of studies and obstacles to field research on minority languages in Laos, which seriously hamper their documentation.
  • Enfield, N. J. (2006). Lao body part terms. Language Sciences, 28(2-3), 181-200. doi:10.1016/j.langsci.2005.11.011.

    Abstract

    This article presents a description of nominal expressions for parts of the human body conventionalised in Lao, a Southwestern Tai language spoken in Laos, Northeast Thailand, and Northeast Cambodia. An inventory of around 170 Lao expressions is listed, with commentary where some notability is determined, usually based on explicit comparison to the metalanguage, English. Notes on aspects of the grammatical and semantic structure of the set of body part terms are provided, including a discussion of semantic relations pertaining among members of the set of body part terms. I conclude that the semantic relations which pertain between terms for different parts of the body not only include part/whole relations, but also relations of location, connectedness, and general association. Calling the whole system a ‘partonomy’ attributes greater centrality to the part/whole relation than is warranted.
  • Enfield, N. J. (2004). Nominal classification in Lao: A sketch. Sprachtypologie und Universalienforschung, 57(2/3), 117-143.
  • Enfield, N. J. (2011). Hidden delights [Review of the book How pleasure works by Paul Bloom]. The Times Literary Supplement, January 21, 2011, 30-30.
  • Enfield, N. J. (2011). Taste in two tongues: A Southeast Asian study of semantic convergence. The Senses & Society, 6(1), 30-37. doi:10.2752/174589311X12893982233632.

    Abstract

    This article examines vocabulary for taste and flavor in two neighboring but unrelated languages (Lao and Kri) spoken in Laos, Southeast Asia. There are very close similarities in underlying semantic distinctions made in the taste/flavor domain in these two languages, not just in the set of basic tastes distinguished (sweet, salty, bitter, sour, umami or glutamate), but in a series of further basic terms for flavors, specifying texture and other sensations in the mouth apart from pure taste (e.g. starchy, dry in the mouth, minty, tingly, spicy). After presenting sets of taste/flavor vocabulary in the two languages and showing their high degree of convergence, the article discusses some methodological and theoretical issues that arise from the observation of close convergence in semantic structure across languages, in particular the issue of how much inter-speaker variation is possible not only across apparently highly convergent systems, but also within languages. The final section raises possible causes for the close convergence of semantic structure in the two languages. The conclusion is that the likely cause of this convergence is historical social contact between speech communities in the area, although the precise mode of influence (e.g. direction of transmission) is unknown.
  • Erard, M. (2017). Write yourself invisible. New Scientist, 236(3153), 36-39.
  • Ernestus, M. (2006). Statistically gradient generalizations for contrastive phonological features. The Linguistic Review, 23(3), 217-233. doi:10.1515/TLR.2006.008.

    Abstract

    In mainstream phonology, contrastive properties, like stem-final voicing, are simply listed in the lexicon. This article reviews experimental evidence that such contrastive properties may be predictable to some degree and that the relevant statistically gradient generalizations form an inherent part of the grammar. The evidence comes from the underlying voice specification of stem-final obstruents in Dutch. Contrary to received wisdom, this voice specification is partly predictable from the obstruent’s manner and place of articulation and from the phonological properties of the preceding segments. The degree of predictability, which depends on the exact contents of the lexicon, directs speakers’ guesses of underlying voice specifications. Moreover, existing words that disobey the generalizations are disadvantaged by being recognized and produced more slowly and less accurately, also under natural conditions. We discuss how these observations can be accounted for in two different types of approaches to grammar, Stochastic Optimality Theory and exemplar-based modeling.
  • Ernestus, M. (2014). Acoustic reduction and the roles of abstractions and exemplars in speech processing. Lingua, 142, 27-41. doi:10.1016/j.lingua.2012.12.006.

    Abstract

    Acoustic reduction refers to the frequent phenomenon in conversational speech that words are produced with fewer or lenited segments compared to their citation forms. The few published studies on the production and comprehension of acoustic reduction have important implications for the debate on the relevance of abstractions and exemplars in speech processing. This article discusses these implications. It first briefly introduces the key assumptions of simple abstractionist and simple exemplar-based models. It then discusses the literature on acoustic reduction and draws the conclusion that both types of models need to be extended to explain all findings. The ultimate model should allow for the storage of different pronunciation variants, but also reserve an important role for phonetic implementation. Furthermore, the recognition of a highly reduced pronunciation variant requires top-down information and leads to activation of the corresponding unreduced variant, the variant that reaches listeners’ consciousness. These findings are best accounted for in hybrid models, assuming both abstract representations and exemplars. None of the hybrid models formulated so far can account for all data on reduced speech and we need further research for obtaining detailed insight into how speakers produce and listeners comprehend reduced speech.
  • Ernestus, M., Dikmans, M., & Giezenaar, G. (2017). Advanced second language learners experience difficulties processing reduced word pronunciation variants. Dutch Journal of Applied Linguistics, 6(1), 1-20. doi:10.1075/dujal.6.1.01ern.

    Abstract

    Words are often pronounced with fewer segments in casual conversations than in formal speech. Previous research has shown that foreign language learners and beginning second language learners experience problems processing reduced speech. We examined whether this also holds for advanced second language learners. We designed a dictation task in Dutch consisting of sentences spliced from casual conversations and an unreduced counterpart of this task, with the same sentences carefully articulated by the same speaker. Advanced second language learners of Dutch produced substantially more transcription errors for the reduced than for the unreduced sentences. These errors made the sentences incomprehensible or led to non-intended meanings. The learners often did not rely on the semantic and syntactic information in the sentence or on the subsegmental cues to overcome the reductions. Hence, advanced second language learners also appear to suffer from the reduced pronunciation variants of words that are abundant in everyday conversations.
  • Ernestus, M., & Warner, N. (2011). An introduction to reduced pronunciation variants [Editorial]. Journal of Phonetics, 39(SI), 253-260. doi:10.1016/S0095-4470(11)00055-6.

    Abstract

    Words are often pronounced very differently in formal speech than in everyday conversations. In conversational speech, they may contain weaker segments, fewer sounds, and even fewer syllables. The English word yesterday, for instance, may be pronounced as [jɛʃeɪ]. This article forms an introduction to the phenomenon of reduced pronunciation variants and to the eight research articles in this issue on the characteristics, production, and comprehension of these variants. We provide a description of the phenomenon, addressing its high frequency of occurrence in casual conversations in various languages, the gradient nature of many reduction processes, and the intelligibility of reduced variants to native listeners. We also describe the relevance of research on reduced variants for linguistic and psychological theories as well as for applications in speech technology and foreign language acquisition. Since reduced variants occur more often in spontaneous than in formal speech, they are hard to study in the laboratory under well controlled conditions. We discuss the advantages and disadvantages of possible solutions, including the research methods employed in the articles in this special issue, based on corpora and experiments. This article ends with a short overview of the articles in this issue.
  • Ernestus, M., & Mak, W. M. (2004). Distinctive phonological features differ in relevance for both spoken and written word recognition. Brain and Language, 90(1-3), 378-392. doi:10.1016/S0093-934X(03)00449-8.

    Abstract

    This paper discusses four experiments on Dutch which show that distinctive phonological features differ in their relevance for word recognition. The relevance of a feature for word recognition depends on its phonological stability, that is, the extent to which that feature is generally realized in accordance with its lexical specification in the relevant word position. If one feature value is uninformative, all values of that feature are less relevant for word recognition, with the least informative feature being the least relevant. Features differ in their relevance both in spoken and written word recognition, though the differences are more pronounced in auditory lexical decision than in self-paced reading.
  • Ernestus, M., & Baayen, R. H. (2004). Analogical effects in regular past tense production in Dutch. Linguistics, 42(5), 873-903. doi:10.1515/ling.2004.031.

    Abstract

    This study addresses the question to what extent the production of regular past tense forms in Dutch is affected by analogical processes. We report an experiment in which native speakers of Dutch listened to existing regular verbs over headphones, and had to indicate which of the past tense allomorphs, te or de, was appropriate for these verbs. According to generative analyses, the choice between the two suffixes is completely regular and governed by the underlying [voice]-specification of the stem-final segment. In this approach, no analogical effects are expected. In connectionist and analogical approaches, by contrast, the phonological similarity structure in the lexicon is expected to affect lexical processing. Our experimental results support the latter approach: all participants created more nonstandard past tense forms, produced more inconsistency errors, and responded more slowly for verbs with stronger analogical support for the nonstandard form.
  • Ernestus, M., Lahey, M., Verhees, F., & Baayen, R. H. (2006). Lexical frequency and voice assimilation. Journal of the Acoustical Society of America, 120(2), 1040-1051. doi:10.1121/1.2211548.

    Abstract

    Acoustic duration and degree of vowel reduction are known to correlate with a word’s frequency of occurrence. The present study broadens the research on the role of frequency in speech production to voice assimilation. The test case was regressive voice assimilation in Dutch. Clusters from a corpus of read speech were more often perceived as unassimilated in lower-frequency words and as either completely voiced regressive assimilation or, unexpectedly, as completely voiceless progressive assimilation in higher-frequency words. Frequency did not predict the voice classifications over and above important acoustic cues to voicing, suggesting that the frequency effects on the classifications were carried exclusively by the acoustic signal. The duration of the cluster and the period of glottal vibration during the cluster decreased while the duration of the release noises increased with frequency. This indicates that speakers reduce articulatory effort for higher-frequency words, with some acoustic cues signaling more voicing and others less voicing. A higher frequency leads not only to acoustic reduction but also to more assimilation.
  • Ernestus, M., & Baayen, R. H. (2004). Kuchde, tobte, en turfte: Lekkage in 't kofschip [Kuchde, tobte, and turfte: Leakage in the 't kofschip rule]. Onze Taal, 73(12), 360-361.
  • Ernestus, M., Kouwenhoven, H., & Van Mulken, M. (2017). The direct and indirect effects of the phonotactic constraints in the listener's native language on the comprehension of reduced and unreduced word pronunciation variants in a foreign language. Journal of Phonetics, 62, 50-64. doi:10.1016/j.wocn.2017.02.003.

    Abstract

    This study investigates how the comprehension of casual speech in foreign languages is affected by the phonotactic constraints in the listener’s native language. Non-native listeners of English with different native languages heard short English phrases produced by native speakers of English or Spanish and they indicated whether these phrases included can or can’t. Native Mandarin listeners especially tended to interpret can’t as can. We interpret this result as a direct effect of the ban on word-final /nt/ in Mandarin. Both the native Mandarin and the native Spanish listeners did not take full advantage of the subsegmental information in the speech signal cueing reduced can’t. This finding is probably an indirect effect of the phonotactic constraints in their native languages: these listeners have difficulties interpreting the subsegmental cues because these cues do not occur or have different functions in their native languages. Dutch resembles English in the phonotactic constraints relevant to the comprehension of can’t, and native Dutch listeners showed similar patterns in their comprehension of native and non-native English to native English listeners. This result supports our conclusion that the major patterns in the comprehension results are driven by the phonotactic constraints in the listeners’ native languages.
  • Ernestus, M., & Warner, N. (Eds.). (2011). Speech reduction [Special Issue]. Journal of Phonetics, 39(SI).
  • Eryilmaz, K., & Little, H. (2017). Using Leap Motion to investigate the emergence of structure in speech and language. Behavior Research Methods, 49(5), 1748-1768. doi:10.3758/s13428-016-0818-x.

    Abstract

    In evolutionary linguistics, experiments using artificial signal spaces are being used to investigate the emergence of speech structure. These signal spaces need to be continuous, non-discretised spaces from which discrete units and patterns can emerge. They need to be dissimilar from, but comparable with, the vocal tract, in order to minimise interference from pre-existing linguistic knowledge, while informing us about language. This is a hard balance to strike. This article outlines a new approach which uses the Leap Motion, an infra-red controller which can convert manual movement in 3D space into sound. The signal space using this approach is more flexible than signal spaces in previous attempts. Further, output data using this approach is simpler to arrange and analyse. The experimental interface was built using free and mostly open-source libraries in Python. We provide our source code for other researchers as open source.
  • Esteve-Gibert, N., Prieto, P., & Liszkowski, U. (2017). Twelve-month-olds understand social intentions based on prosody and gesture shape. Infancy, 22, 108-129. doi:10.1111/infa.12146.

    Abstract

    Infants infer social and pragmatic intentions underlying attention-directing gestures, but the basis on which infants make these inferences is not well understood. Previous studies suggest that infants rely on information from preceding shared action contexts and joint perceptual scenes. Here, we tested whether 12-month-olds use information from act-accompanying cues, in particular prosody and hand shape, to guide their pragmatic understanding. In Experiment 1, caregivers directed infants’ attention to an object to request it, share interest in it, or inform them about a hidden aspect. Caregivers used distinct prosodic and gestural patterns to express each pragmatic intention. Experiment 2 was identical except that experimenters provided identical lexical information across conditions and used three sets of trained prosodic and gestural patterns. In all conditions, the joint perceptual scenes and preceding shared action contexts were identical. In both experiments, infants reacted appropriately to the adults’ intentions by attending to the object mostly in the sharing interest condition, offering the object mostly in the imperative condition, and searching for the referent mostly in the informing condition. Infants’ ability to comprehend pragmatic intentions based on prosody and gesture shape expands infants’ communicative understanding from common activities to novel situations for which shared background knowledge is missing.
  • Evans, S., McGettigan, C., Agnew, Z., Rosen, S., Cesar, L., Boebinger, D., Ostarek, M., Chen, S. H., Richards, A., Meekins, S., & Scott, S. K. (2014). The neural basis of informational and energetic masking effects in the perception and production of speech [abstract]. The Journal of the Acoustical Society of America, 136(4), 2243. doi:10.1121/1.4900096.

    Abstract

    When we have spoken conversations, it is usually in the context of competing sounds within our environment. Speech can be masked by many different kinds of sounds, for example, machinery noise and the speech of others, and these different sounds place differing demands on cognitive resources. In this talk, I will present data from a series of functional magnetic resonance imaging (fMRI) studies in which the informational properties of background sounds have been manipulated to make them more or less similar to speech. I will demonstrate the neural effects associated with speaking over and listening to these sounds, and demonstrate how in perception these effects are modulated by the age of the listener. The results will be interpreted within a framework of auditory processing developed from primate neurophysiology and human functional imaging work (Rauschecker and Scott 2009).
  • Feinberg, H., Taylor, M. E., Razi, N., McBride, R., Knirel, Y. A., Graham, S. A., Drickamer, K., & Weis, W. I. (2011). Structural basis for langerin recognition of diverse pathogen and mammalian glycans through a single binding site. Journal of Molecular Biology, 405, 1027-1039. doi:10.1016/j.jmb.2010.11.039.

    Abstract

    Langerin mediates the carbohydrate-dependent uptake of pathogens by Langerhans cells in the first step of antigen presentation to the adaptive immune system. Langerin binds to an unusually diverse number of endogenous and pathogenic cell surface carbohydrates, including mannose-containing O-specific polysaccharides derived from bacterial lipopolysaccharides identified here by probing a microarray of bacterial polysaccharides. Crystal structures of the carbohydrate-recognition domain from human langerin bound to a series of oligomannose compounds, the blood group B antigen, and a fragment of β-glucan reveal binding to mannose, fucose, and glucose residues by Ca(2+) coordination of vicinal hydroxyl groups with similar stereochemistry. Oligomannose compounds bind through a single mannose residue, with no other mannose residues contacting the protein directly. There is no evidence for a second Ca(2+)-independent binding site. Likewise, a β-glucan fragment, Glcβ1-3Glcβ1-3Glc, binds to langerin through the interaction of a single glucose residue with the Ca(2+) site. The fucose moiety of the blood group B trisaccharide Galα1-3(Fucα1-2)Gal also binds to the Ca(2+) site, and selective binding to this glycan compared to other fucose-containing oligosaccharides results from additional favorable interactions of the nonreducing terminal galactose, as well as of the fucose residue. Surprisingly, the equatorial 3-OH group and the axial 4-OH group of the galactose residue in 6SO(4)-Galβ1-4GlcNAc also coordinate Ca(2+), a heretofore unobserved mode of galactose binding in a C-type carbohydrate-recognition domain bearing the Glu-Pro-Asn signature motif characteristic of mannose binding sites. Salt bridges between the sulfate group and two lysine residues appear to compensate for the nonoptimal binding of galactose at this site.

    Additional information

    Feinberg_2011_Suppl_Table.pdf
  • Filippi, P., Congdon, J. V., Hoang, J., Bowling, D. L., Reber, S. A., Pasukonis, A., Hoeschele, M., Ocklenburg, S., De Boer, B., Sturdy, C. B., Newen, A., & Güntürkün, O. (2017). Humans recognize emotional arousal in vocalizations across all classes of terrestrial vertebrates: Evidence for acoustic universals. Proceedings of the Royal Society B: Biological Sciences, 284: 20170990. doi:10.1098/rspb.2017.0990.

    Abstract

    Writing over a century ago, Darwin hypothesized that vocal expression of emotion dates back to our earliest terrestrial ancestors. If this hypothesis is true, we should expect to find cross-species acoustic universals in emotional vocalizations. Studies suggest that acoustic attributes of aroused vocalizations are shared across many mammalian species, and that humans can use these attributes to infer emotional content. But do these acoustic attributes extend to non-mammalian vertebrates? In this study, we asked human participants to judge the emotional content of vocalizations of nine vertebrate species representing three different biological classes—Amphibia, Reptilia (non-aves and aves) and Mammalia. We found that humans are able to identify higher levels of arousal in vocalizations across all species. This result was consistent across different language groups (English, German and Mandarin native speakers), suggesting that this ability is biologically rooted in humans. Our findings indicate that humans use multiple acoustic parameters to infer relative arousal in vocalizations for each species, but mainly rely on fundamental frequency and spectral centre of gravity to identify higher arousal vocalizations across species. These results suggest that fundamental mechanisms of vocal emotional expression are shared among vertebrates and could represent a homologous signalling system.
  • Filippi, P., Gogoleva, S. S., Volodina, E. V., Volodin, I. A., & De Boer, B. (2017). Humans identify negative (but not positive) arousal in silver fox vocalizations: Implications for the adaptive value of interspecific eavesdropping. Current Zoology, 63(4), 445-456. doi:10.1093/cz/zox035.

    Abstract

    The ability to identify emotional arousal in heterospecific vocalizations may facilitate behaviors that increase survival opportunities. Crucially, this ability may orient inter-species interactions, particularly between humans and other species. Research shows that humans identify emotional arousal in vocalizations across multiple species, such as cats, dogs, and piglets. However, no previous study has addressed humans' ability to identify emotional arousal in silver foxes. Here, we adopted low- and high-arousal calls emitted by three strains of silver fox (Tame, Aggressive, and Unselected) in response to human approach. Tame and Aggressive foxes are genetically selected for friendly and attacking behaviors toward humans, respectively. Unselected foxes show aggressive and fearful behaviors toward humans. These three strains show similar levels of emotional arousal, but different levels of emotional valence in relation to humans. This emotional information is reflected in the acoustic features of the calls. Our data suggest that humans can identify high-arousal calls of Aggressive and Unselected foxes, but not of Tame foxes. Further analyses revealed that, although within each strain different acoustic parameters affect human accuracy in identifying high-arousal calls, spectral center of gravity, harmonic-to-noise ratio, and F0 best predict humans' ability to discriminate high-arousal calls across all strains. Furthermore, we identified in spectral center of gravity and F0 the best predictors for humans' absolute ratings of arousal in each call. Implications for research on the adaptive value of inter-specific eavesdropping are discussed.

    Additional information

    zox035_Supp.zip
  • Filippi, P., Ocklenburg, S., Bowling, D. L., Heege, L., Güntürkün, O., Newen, A., & de Boer, B. (2017). More than words (and faces): evidence for a Stroop effect of prosody in emotion word processing. Cognition & Emotion, 31(5), 879-891. doi:10.1080/02699931.2016.1177489.

    Abstract

    Humans typically combine linguistic and nonlinguistic information to comprehend emotions. We adopted an emotion identification Stroop task to investigate how different channels interact in emotion communication. In experiment 1, synonyms of “happy” and “sad” were spoken with happy and sad prosody. Participants had more difficulty ignoring prosody than ignoring verbal content. In experiment 2, synonyms of “happy” and “sad” were spoken with happy and sad prosody, while happy or sad faces were displayed. Accuracy was lower when two channels expressed an emotion that was incongruent with the channel participants had to focus on, compared with the cross-channel congruence condition. When participants were required to focus on verbal content, accuracy was significantly lower also when prosody was incongruent with verbal content and face. This suggests that prosody biases emotional verbal content processing, even when conflicting with verbal content and face simultaneously. Implications for multimodal communication and language evolution studies are discussed.
  • Filippi, P., Gingras, B., & Fitch, W. T. (2014). Pitch enhancement facilitates word learning across visual contexts. Frontiers in Psychology, 5: 1468. doi:10.3389/fpsyg.2014.01468.

    Abstract

    This study investigates word-learning using a new experimental paradigm that integrates three processes: (a) extracting a word out of a continuous sound sequence, (b) inferring its referential meanings in context, (c) mapping the segmented word onto its broader intended referent, such as other objects of the same semantic category, and to novel utterances. Previous work has examined the role of statistical learning and/or of prosody in each of these processes separately. Here, we combine these strands of investigation into a single experimental approach, in which participants viewed a photograph belonging to one of three semantic categories while hearing a complex, five-word utterance containing a target word. Six between-subjects conditions were tested with 20 adult participants each. In condition 1, the only cue to word-meaning mapping was the co-occurrence of word and referents. This statistical cue was present in all conditions. In condition 2, the target word was sounded at a higher pitch. In condition 3, random words were sounded at a higher pitch, creating an inconsistent cue. In condition 4, the duration of the target word was lengthened. In conditions 5 and 6, an extraneous acoustic cue and a visual cue were associated with the target word, respectively. Performance in this word-learning task was significantly higher than that observed with simple co-occurrence only when pitch prominence consistently marked the target word. We discuss implications for the pragmatic value of pitch marking as well as the relevance of our findings to language acquisition and language evolution.
  • Filippi, P., Laaha, S., & Fitch, W. T. (2017). Utterance-final position and pitch marking aid word learning in school-age children. Royal Society Open Science, 4: 161035. doi:10.1098/rsos.161035.

    Abstract

    We investigated the effects of word order and prosody on word learning in school-age children. Third graders viewed photographs belonging to one of three semantic categories while hearing four-word nonsense utterances containing a target word. In the control condition, all words had the same pitch and, across trials, the position of the target word was varied systematically within each utterance. The only cue to word–meaning mapping was the co-occurrence of target words and referents. This cue was present in all conditions. In the Utterance-final condition, the target word always occurred in utterance-final position, and at the same fundamental frequency as all the other words of the utterance. In the Pitch peak condition, the position of the target word was varied systematically within each utterance across trials, and produced with pitch contrasts typical of infant-directed speech (IDS). In the Pitch peak + Utterance-final condition, the target word always occurred in utterance-final position, and was marked with a pitch contrast typical of IDS. Word learning occurred in all conditions except the control condition. Moreover, learning performance was significantly higher than that observed with simple co-occurrence (control condition) only for the Pitch peak + Utterance-final condition. We conclude that, for school-age children, the combination of words' utterance-final alignment and pitch enhancement boosts word learning.
  • Fisher, S. E., & Francks, C. (2006). Genes, cognition and dyslexia: Learning to read the genome. Trends in Cognitive Sciences, 10, 250-257. doi:10.1016/j.tics.2006.04.003.

    Abstract

    Studies of dyslexia provide vital insights into the cognitive architecture underpinning both disordered and normal reading. It is well established that inherited factors contribute to dyslexia susceptibility, but only very recently has evidence emerged to implicate specific candidate genes. In this article, we provide an accessible overview of four prominent examples (DYX1C1, KIAA0319, DCDC2 and ROBO1) and discuss their relevance for cognition. In each case correlations have been found between genetic variation and reading impairments, but precise risk variants remain elusive. Although none of these genes is specific to reading-related neuronal circuits, or even to the human brain, they have intriguing roles in neuronal migration or connectivity. Dissection of cognitive mechanisms that subserve reading will ultimately depend on an integrated approach, uniting data from genetic investigations, behavioural studies and neuroimaging.
  • Fisher, S. E., Vargha-Khadem, F., Watkins, K. E., Monaco, A. P., & Pembrey, M. E. (1998). Localisation of a gene implicated in a severe speech and language disorder. Nature Genetics, 18, 168-170. doi:10.1038/ng0298-168.

    Abstract

    Between 2 and 5% of children who are otherwise unimpaired have significant difficulties in acquiring expressive and/or receptive language, despite adequate intelligence and opportunity. While twin studies indicate a significant role for genetic factors in developmental disorders of speech and language, the majority of families segregating such disorders show complex patterns of inheritance, and are thus not amenable to conventional linkage analysis. A rare exception is the KE family, a large three-generation pedigree in which approximately half of the members are affected with a severe speech and language disorder which appears to be transmitted as an autosomal dominant monogenic trait. This family has been widely publicised as suffering primarily from a defect in the use of grammatical suffixation rules, thus supposedly supporting the existence of genes specific to grammar. The phenotype, however, is broader in nature, with virtually every aspect of grammar and of language affected. In addition, affected members have a severe orofacial dyspraxia, and their speech is largely incomprehensible to the naive listener. We initiated a genome-wide search for linkage in the KE family and have identified a region on chromosome 7 which co-segregates with the speech and language disorder (maximum lod score = 6.62 at theta = 0.0), confirming autosomal dominant inheritance with full penetrance. Further analysis of microsatellites from within the region enabled us to fine map the locus responsible (designated SPCH1) to a 5.6-cM interval in 7q31, thus providing an important step towards its identification. Isolation of SPCH1 may offer the first insight into the molecular genetics of the developmental process that culminates in speech and language.
  • Fisher, S. E. (2017). Evolution of language: Lessons from the genome. Psychonomic Bulletin & Review, 24(1), 34-40. doi:10.3758/s13423-016-1112-8.

    Abstract

    The post-genomic era is an exciting time for researchers interested in the biology of speech and language. Substantive advances in molecular methodologies have opened up entire vistas of investigation that were not previously possible, or in some cases even imagined. Speculations concerning the origins of human cognitive traits are being transformed into empirically addressable questions, generating specific hypotheses that can be explicitly tested using data collected from both the natural world and experimental settings. In this article, I discuss a number of promising lines of research in this area. For example, the field has begun to identify genes implicated in speech and language skills, including not just disorders but also the normal range of abilities. Such genes provide powerful entry points for gaining insights into neural bases and evolutionary origins, using sophisticated experimental tools from molecular neuroscience and developmental neurobiology. At the same time, sequencing of ancient hominin genomes is giving us an unprecedented view of the molecular genetic changes that have occurred during the evolution of our species. Synthesis of data from these complementary sources offers an opportunity to robustly evaluate alternative accounts of language evolution. Of course, this endeavour remains challenging on many fronts, as I also highlight in the article. Nonetheless, such an integrated approach holds great potential for untangling the complexities of the capacities that make us human.
  • Fisher, S. E. (2006). Tangled webs: Tracing the connections between genes and cognition. Cognition, 101, 270-297. doi:10.1016/j.cognition.2006.04.004.

    Abstract

    The rise of molecular genetics is having a pervasive influence in a wide variety of fields, including research into neurodevelopmental disorders like dyslexia, speech and language impairments, and autism. There are many studies underway which are attempting to determine the roles of genetic factors in the aetiology of these disorders. Beyond the obvious implications for diagnosis, treatment and understanding, success in these efforts promises to shed light on the links between genes and aspects of cognition and behaviour. However, the deceptive simplicity of finding correlations between genetic and phenotypic variation has led to a common misconception that there exist straightforward linear relationships between specific genes and particular behavioural and/or cognitive outputs. The problem is exacerbated by the adoption of an abstract view of the nature of the gene, without consideration of molecular, developmental or ontogenetic frameworks. To illustrate the limitations of this perspective, I select two cases from recent research into the genetic underpinnings of neurodevelopmental disorders. First, I discuss the proposal that dyslexia can be dissected into distinct components specified by different genes. Second, I review the story of the FOXP2 gene and its role in human speech and language. In both cases, adoption of an abstract concept of the gene can lead to erroneous conclusions, which are incompatible with current knowledge of molecular and developmental systems. Genes do not specify behaviours or cognitive processes; they make regulatory factors, signalling molecules, receptors, enzymes, and so on, that interact in highly complex networks, modulated by environmental influences, in order to build and maintain the brain. I propose that it is necessary for us to fully embrace the complexity of biological systems, if we are ever to untangle the webs that link genes to cognition.
  • Fisher, S. E., & Marcus, G. (2006). The eloquent ape: Genes, brains and the evolution of language. Nature Reviews Genetics, 7, 9-20. doi:10.1038/nrg1747.

    Abstract

    The human capacity to acquire complex language seems to be without parallel in the natural world. The origins of this remarkable trait have long resisted adequate explanation, but advances in fields that range from molecular genetics to cognitive neuroscience offer new promise. Here we synthesize recent developments in linguistics, psychology and neuroimaging with progress in comparative genomics, gene-expression profiling and studies of developmental disorders. We argue that language should be viewed not as a wholesale innovation, but as a complex reconfiguration of ancestral systems that have been adapted in evolutionarily novel ways.
  • Fisher, V. J. (2017). Unfurling the wings of flight: Clarifying ‘the what’ and ‘the why’ of mental imagery use in dance. Research in Dance Education, 18(3), 252-272. doi:10.1080/14647893.2017.1369508.

    Abstract

    This article provides clarification regarding ‘the what’ and ‘the why’ of mental imagery use in dance. It proposes that mental images are invoked across sensory modalities and often combine internal and external perspectives. The content of images ranges from ‘direct’ body oriented simulations along a continuum employing analogous mapping through ‘semi-direct’ literal similarities to abstract metaphors. The reasons for employing imagery are diverse and often overlapping, affecting physical, affective (psychological) and cognitive domains. This paper argues that when dance uses imagery, it is mapping aspects of the world to the body via analogy. Such mapping informs and changes our understanding of both our bodies and the world. In this way, mental imagery use in dance is fundamentally a process of embodied cognition.
  • Fitz, H., & Chang, F. (2017). Meaningful questions: The acquisition of auxiliary inversion in a connectionist model of sentence production. Cognition, 166, 225-250. doi:10.1016/j.cognition.2017.05.008.

    Abstract

    Nativist theories have argued that language involves syntactic principles which are unlearnable from the input children receive. A paradigm case of these innate principles is the structure dependence of auxiliary inversion in complex polar questions (Chomsky, 1968, 1975, 1980). Computational approaches have focused on the properties of the input in explaining how children acquire these questions. In contrast, we argue that messages are structured in a way that supports structure dependence in syntax. We demonstrate this approach within a connectionist model of sentence production (Chang, 2009) which learned to generate a range of complex polar questions from a structured message without positive exemplars in the input. The model also generated different types of error in development that were similar in magnitude to those in children (e.g., auxiliary doubling, Ambridge, Rowland, & Pine, 2008; Crain & Nakayama, 1987). Through model comparisons we trace how meaning constraints and linguistic experience interact during the acquisition of auxiliary inversion. Our results suggest that auxiliary inversion rules in English can be acquired without innate syntactic principles, as long as it is assumed that speakers who ask complex questions express messages that are structured into multiple propositions.
  • FitzPatrick, I., & Indefrey, P. (2014). Head start for target language in bilingual listening. Brain Research, 1542, 111-130. doi:10.1016/j.brainres.2013.10.014.

    Abstract

    In this study we investigated the availability of non-target language semantic features in bilingual speech processing. We recorded EEG from Dutch-English bilinguals who listened to spoken sentences in their L2 (English) or L1 (Dutch). In Experiments 1 and 3 the sentences contained an interlingual homophone. The sentence context was either biased towards the target language meaning of the homophone (target biased), the non-target language meaning (non-target biased), or neither meaning of the homophone (fully incongruent). These conditions were each compared to a semantically congruent control condition. In L2 sentences we observed an N400 in the non-target biased condition that had an earlier offset than the N400 to fully incongruent homophones. In the target biased condition, a negativity emerged that was later than the N400 to fully incongruent homophones. In L1 contexts, neither target biased nor non-target biased homophones yielded significant N400 effects (compared to the control condition). In Experiments 2 and 4 the sentences contained a language switch to a non-target language word that could be semantically congruent or incongruent. Semantically incongruent words (switched, and non-switched) elicited an N400 effect. The N400 to semantically congruent language-switched words had an earlier offset than the N400 to incongruent words. Both congruent and incongruent language switches elicited a Late Positive Component (LPC). These findings show that bilinguals activate both meanings of interlingual homophones irrespective of their contextual fit. In L2 contexts, the target-language meaning of the homophone has a head start over the non-target language meaning. The target-language head start is also evident for language switches from both L2-to-L1 and L1-to-L2.
  • Flecken, M. (2011). Assessing bilingual attainment: macrostructural planning in narratives. International Journal of Bilingualism, 15(2), 164-186. doi:10.1177/1367006910381187.

    Abstract

    The present study addresses questions concerning bilinguals’ attainment in the two languages by investigating the extent to which early bilinguals manage to apply the information structure required in each language when producing a complex text. In re-narrating the content of a film, speakers have to break down the perceived series of dynamic situations and structure relevant information into units that are suited for linguistic expression. The analysis builds on typological studies of Germanic and Romance languages which investigate the role of grammaticized concepts in determining core features in information structure. It takes a global perspective in that it focuses on factors that determine information selection and information structure that hold in macrostructural terms for the text as a whole (factors driving information selection, the temporal frame used to locate events on the time line, and the means used in reference management). A first comparison focuses on Dutch and German monolingual native speakers and shows that despite overall typological similarities, there are subtle though systematic differences between the two languages in the aforementioned areas of information structure. The analyses of the bilinguals focus on their narratives in both languages, and compare the patterns found to those found in the monolingual narratives. Findings show that the method used provides insights into the individual bilingual’s attainment in the two languages and identifies balanced levels of attainment, patterns showing higher degrees of conformity with one of the languages, or bilingual-specific patterns of performance.
  • Flecken, M., von Stutterheim, C., & Carroll, M. (2014). Grammatical aspect influences motion event perception: Evidence from a cross-linguistic non-verbal recognition task. Language and Cognition, 6(1), 45-78. doi:10.1017/langcog.2013.2.

    Abstract

    Using eye-tracking as a window on cognitive processing, this study investigates language effects on attention to motion events in a non-verbal task. We compare gaze allocation patterns by native speakers of German and Modern Standard Arabic (MSA), two languages that differ with regard to the grammaticalization of temporal concepts. Findings of the non-verbal task, in which speakers watch dynamic event scenes while performing an auditory distracter task, are compared to gaze allocation patterns which were obtained in an event description task, using the same stimuli. We investigate whether differences in the grammatical aspectual systems of German and MSA affect the extent to which endpoints of motion events are linguistically encoded and visually processed in the two tasks. In the linguistic task, we find clear language differences in endpoint encoding and in the eye-tracking data (attention to event endpoints) as well: German speakers attend to and linguistically encode endpoints more frequently than speakers of MSA. The fixation data in the non-verbal task show similar language effects, providing relevant insights with regard to the language-and-thought debate. The present study is one of the few studies that focus explicitly on language effects related to grammatical concepts, as opposed to lexical concepts.
  • Flecken, M. (2011). Event conceptualization by early Dutch-German bilinguals: Insights from linguistic and eye-tracking data. Bilingualism: Language and Cognition, 14(1), 61-77. doi:10.1017/S1366728910000027.

    Abstract

    This experimental study investigates event construal by early Dutch–German bilinguals, as reflected in their oral depiction of everyday events shown in video clips. The starting point is the finding that the expression of an aspectual perspective (progressive aspect), and its consequences for event construal, is dependent on the extent to which means are grammaticalized, as in English (e.g., progressive aspect) or not, as in German (von Stutterheim & Carroll, 2006). The present study shows that although speakers of Dutch and German have comparable means to mark this aspectual concept, at a first glance at least, they differ markedly both in the contexts as well as in the extent to which this aspectual perspective is selected, being highly frequent in specific contexts in Dutch, but not in German. The present experimental study investigates factors that lead to the use of progressive aspect by early bilinguals, using video clips (with different types of events varied along specific dimensions on a systematic basis). The study includes recordings of eye movements, and examines how far an aspectual perspective drives allocation of attention during information intake while viewing the stimulus material, both for and while speaking. Although the bilinguals have acquired the means to express progressive aspect in Dutch, their use shows a pattern that differs from monolingual Dutch speakers. Interestingly, these differences are reflected in different patterns in the direction of attention (eye movements) when verbalizing information on events.
  • Flecken, M. (2011). What native speaker judgments tell us about the grammaticalization of a progressive aspectual marker in Dutch. Linguistics, 49(3), 479-524. doi:10.1515/LING.2011.015.

    Abstract

    This paper focuses on native speaker judgments of a construction in Dutch that functions as a progressive aspectual marker (aan het X zijn, referred to as aan het-construction) and represents an event as in progression at the time of speech. The method was chosen in order to investigate how native speakers assess the scope and conditions of use of a construction which is in the process of grammaticalization. It allows for the inclusion of a large group of participants of different age groups and an investigation of potential age-related differences. The study systematically covers a range of temporal variables that were shown to be relevant in elicitation and corpus-based studies on the grammaticalization of progressive aspect constructions. The results provide insights into the selectional preferences and constraints of the aan het-construction in contemporary Dutch, as judged by native speakers, and the extent to which they correlate with production tasks.
  • Floyd, S. (2014). [Review of the book Flexible word classes: Typological studies of underspecified parts of speech ed. by Jan Rijkhoff and Eva van Lier]. Linguistics, 52, 1499-1502. doi:10.1515/ling-2014-0027.
  • Floyd, S. (2011). [Review of the book Racism and discourse in Latin America ed. by Teun A. van Dijk]. Language in Society, 40, 670-671. doi:10.1017/S0047404511000807.
  • Floyd, S. (2011). Re-discovering the Quechua adjective. Linguistic Typology, 15, 25-63. doi:10.1515/LITY.2011.003.

    Abstract

    This article describes the adjective class in Quechua, countering many previous accounts of the language as a linguistic type with no adjective/noun distinction. It applies a set of common crosslinguistic criteria for distinguishing adjectives to data from several dialects of Ecuadorian Highland Quechua (EHQ), analyzing examples from a natural speech audio/video corpus, speaker intuitions of grammaticality, and controlled elicitation exercises. It is concluded that by virtually any standard Quechua shows clear evidence for a distinct class of attributive noun modifiers, and that in the future Quechua should not be considered a “flexible” noun/adjective language for the purposes of crosslinguistic comparison.
  • Folia, V., Forkstam, C., Ingvar, M., Hagoort, P., & Petersson, K. M. (2011). Implicit artificial syntax processing: Genes, preference, and bounded recursion. Biolinguistics, 5(1/2), 105-132.

    Abstract

    The first objective of this study was to compare the brain network engaged by preference classification and the standard grammaticality classification after implicit artificial syntax acquisition by re-analyzing previously reported event-related fMRI data. The results show that preference and grammaticality classification engage virtually identical brain networks, including Broca’s region, consistent with previous behavioral findings. Moreover, the results showed that the effects related to artificial syntax in Broca’s region were essentially the same when masked with variability related to natural syntax processing in the same participants. The second objective was to explore CNTNAP2-related effects in implicit artificial syntax learning by analyzing behavioral and event-related fMRI data from a subsample. The CNTNAP2 gene has been linked to specific language impairment and is controlled by the FOXP2 transcription factor. CNTNAP2 is expressed in language related brain networks in the developing human brain and the FOXP2–CNTNAP2 pathway provides a mechanistic link between clinically distinct syndromes involving disrupted language. Finally, we discuss the implication of taking natural language to be a neurobiological system in terms of bounded recursion and suggest that the left inferior frontal region is a generic on-line sequence processor that unifies information from various sources in an incremental and recursive manner.
  • Folia, V., & Petersson, K. M. (2014). Implicit structured sequence learning: An fMRI study of the structural mere-exposure effect. Frontiers in Psychology, 5: 41. doi:10.3389/fpsyg.2014.00041.

    Abstract

    In this event-related FMRI study we investigated the effect of five days of implicit acquisition on preference classification by means of an artificial grammar learning (AGL) paradigm based on the structural mere-exposure effect and preference classification using a simple right-linear unification grammar. This allowed us to investigate implicit AGL in a proper learning design by including baseline measurements prior to grammar exposure. After 5 days of implicit acquisition, the FMRI results showed activations in a network of brain regions including the inferior frontal (centered on BA 44/45) and the medial prefrontal regions (centered on BA 8/32). Importantly, and central to this study, the inclusion of a naive preference FMRI baseline measurement allowed us to conclude that these FMRI findings were the intrinsic outcomes of the learning process itself and not a reflection of a preexisting functionality recruited during classification, independent of acquisition. Support for the implicit nature of the knowledge utilized during preference classification on day 5 comes from the fact that the basal ganglia, associated with implicit procedural learning, were activated during classification, while the medial temporal lobe system, associated with explicit declarative memory, was consistently deactivated. Thus, preference classification in combination with structural mere-exposure can be used to investigate structural sequence processing (syntax) in unsupervised AGL paradigms with proper learning designs.
  • Forkel, S. J., Thiebaut de Schotten, M., Dell’Acqua, F., Kalra, L., Murphy, D. G. M., Williams, S. C. R., & Catani, M. (2014). Anatomical predictors of aphasia recovery: a tractography study of bilateral perisylvian language networks. Brain, 137, 2027-2039. doi:10.1093/brain/awu113.

    Abstract

    Stroke-induced aphasia is associated with adverse effects on quality of life and the ability to return to work. For patients and clinicians the possibility of relying on valid predictors of recovery is an important asset in the clinical management of stroke-related impairment. Age, level of education, type and severity of initial symptoms are established predictors of recovery. However, anatomical predictors are still poorly understood. In this prospective longitudinal study, we intended to assess anatomical predictors of recovery derived from diffusion tractography of the perisylvian language networks. Our study focused on the arcuate fasciculus, a language pathway composed of three segments connecting Wernicke’s to Broca’s region (i.e. long segment), Wernicke’s to Geschwind’s region (i.e. posterior segment) and Broca’s to Geschwind’s region (i.e. anterior segment). In our study we were particularly interested in understanding how lateralization of the arcuate fasciculus impacts on severity of symptoms and their recovery. Sixteen patients (10 males; mean age 60 ± 17 years, range 28–87 years) underwent post stroke language assessment with the Revised Western Aphasia Battery and neuroimaging scanning within a fortnight from symptoms onset. Language assessment was repeated at 6 months. Backward elimination analysis identified a subset of predictor variables (age, sex, lesion size) to be introduced to further regression analyses. A hierarchical regression was conducted with the longitudinal aphasia severity as the dependent variable. The first model included the subset of variables as previously defined. The second model additionally introduced the left and right arcuate fasciculus (separate analysis for each segment). Lesion size was identified as the only independent predictor of longitudinal aphasia severity in the left hemisphere [beta = −0.630, t(−3.129), P = 0.011]. For the right hemisphere, age [beta = −0.678, t(–3.087), P = 0.010] and volume of the long segment of the arcuate fasciculus [beta = 0.730, t(2.732), P = 0.020] were predictors of longitudinal aphasia severity. Adding the volume of the right long segment to the first-level model increased the overall predictive power of the model from 28% to 57% [F(1,11) = 7.46, P = 0.02]. These findings suggest that different predictors of recovery are at play in the left and right hemisphere. The right hemisphere language network seems to be important in aphasia recovery after left hemispheric stroke.

    Additional information

    supplementary information
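    To make the regression logic of the entry above concrete, here is a synthetic sketch of a hierarchical (nested) regression: a baseline model is fitted first, then an added predictor is tested for the extra variance it explains. Variable names echo the predictors mentioned in the abstract, but all values are invented and this is not the authors' analysis code.

```python
# Synthetic illustration of a hierarchical (nested) regression: compare a
# baseline model with a model that adds one further predictor.
import numpy as np

def r_squared(y, X):
    """R^2 of an ordinary least-squares fit including an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

def incremental_f(y, X_base, X_full):
    """F statistic for the predictors added in the full (second-level) model."""
    r2_base, r2_full = r_squared(y, X_base), r_squared(y, X_full)
    df_added = X_full.shape[1] - X_base.shape[1]
    df_resid = len(y) - X_full.shape[1] - 1
    return ((r2_full - r2_base) / df_added) / ((1.0 - r2_full) / df_resid)

rng = np.random.default_rng(0)
n = 16
age = rng.normal(60, 15, n)                    # invented predictor values
lesion_size = rng.normal(30, 10, n)
right_long_segment = rng.normal(5, 2, n)
severity = 0.5 * right_long_segment - 0.4 * age + rng.normal(0, 5, n)  # toy outcome

X_base = np.column_stack([age, lesion_size])                      # first-level model
X_full = np.column_stack([age, lesion_size, right_long_segment])  # adds the tract volume
print(r_squared(severity, X_base), r_squared(severity, X_full))
print(incremental_f(severity, X_base, X_full))
```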
  • Forkel, S. J., Dell’Acqua, F., Kalra, L., Williams, S. C., & Catani, M. (2011). Lateralisation of the Arcuate Fasciculus Predicts Aphasia Recovery at 6 Months. Procedia - Social and Behavioral Sciences, 23, 164-166. doi:10.1016/j.sbspro.2011.09.221.
  • Forkel, S. J., Thiebaut de Schotten, M., Kawadler, J. M., Dell'Acqua, F., Danek, A., & Catani, M. (2014). The anatomy of fronto-occipital connections from early blunt dissections to contemporary tractography. Cortex, 56, 73-84. doi:10.1016/j.cortex.2012.09.005.

    Abstract

    The occipital and frontal lobes are anatomically distant yet functionally highly integrated to generate some of the most complex behaviour. A series of long associative fibres, such as the fronto-occipital networks, mediate this integration via rapid feed-forward propagation of visual input to anterior frontal regions and direct top–down modulation of early visual processing.

    Despite the vast number of anatomical investigations a general consensus on the anatomy of fronto-occipital connections is not forthcoming. For example, in the monkey the existence of a human equivalent of the ‘inferior fronto-occipital fasciculus’ (iFOF) has not been demonstrated. Conversely, a ‘superior fronto-occipital fasciculus’ (sFOF), also referred to as ‘subcallosal bundle’ by some authors, is reported in monkey axonal tracing studies but not in human dissections.

    In this study our aim is twofold. First, we use diffusion tractography to delineate the in vivo anatomy of the sFOF and the iFOF in 30 healthy subjects and three acallosal brains. Second, we provide a comprehensive review of the post-mortem and neuroimaging studies of the fronto-occipital connections published over the last two centuries, together with the first integral translation of Onufrowicz's original description of a human fronto-occipital fasciculus (1887) and Muratoff's report of the ‘subcallosal bundle’ in animals (1893).

    Our tractography dissections suggest that in the human brain (i) the iFOF is a bilateral association pathway connecting ventro-medial occipital cortex to orbital and polar frontal cortex, (ii) the sFOF overlaps with branches of the superior longitudinal fasciculus (SLF) and probably represents an ‘occipital extension’ of the SLF, (iii) the subcallosal bundle of Muratoff is probably a complex tract encompassing ascending thalamo-frontal and descending fronto-caudate connections and is therefore a projection rather than an associative tract.

    In conclusion, our experimental findings and review of the literature suggest that a ventral pathway in humans, namely the iFOF, mediates a direct communication between occipital and frontal lobes. Whether the iFOF represents a unique human pathway awaits further ad hoc investigations in animals.
  • Forkstam, C., Hagoort, P., Fernandez, G., Ingvar, M., & Petersson, K. M. (2006). Neural correlates of artificial syntactic structure classification. NeuroImage, 32(2), 956-967. doi:10.1016/j.neuroimage.2006.03.057.

    Abstract

    The human brain supports acquisition mechanisms that extract structural regularities implicitly from experience without the induction of an explicit model. It has been argued that the capacity to generalize to new input is based on the acquisition of abstract representations, which reflect underlying structural regularities in the input ensemble. In this study, we explored the outcome of this acquisition mechanism, and to this end, we investigated the neural correlates of artificial syntactic classification using event-related functional magnetic resonance imaging. The participants engaged once a day during an 8-day period in a short-term memory acquisition task in which consonant-strings generated from an artificial grammar were presented in a sequential fashion without performance feedback. They performed reliably above chance on the grammaticality classification tasks on days 1 and 8 which correlated with a corticostriatal processing network, including frontal, cingulate, inferior parietal, and middle occipital/occipitotemporal regions as well as the caudate nucleus. Part of the left inferior frontal region (BA 45) was specifically related to syntactic violations and showed no sensitivity to local substring familiarity. In addition, the head of the caudate nucleus correlated positively with syntactic correctness on day 8 but not day 1, suggesting that this region contributes to an increase in cognitive processing fluency.
  • Francisco, A. A., Groen, M. A., Jesse, A., & McQueen, J. M. (2017). Beyond the usual cognitive suspects: The importance of speechreading and audiovisual temporal sensitivity in reading ability. Learning and Individual Differences, 54, 60-72. doi:10.1016/j.lindif.2017.01.003.

    Abstract

    The aim of this study was to clarify whether audiovisual processing accounted for variance in reading and reading-related abilities, beyond the effect of a set of measures typically associated with individual differences in both reading and audiovisual processing. Testing adults with and without a diagnosis of dyslexia, we showed that—across all participants, and after accounting for variance in cognitive abilities—audiovisual temporal sensitivity contributed uniquely to variance in reading errors. This is consistent with previous studies demonstrating an audiovisual deficit in dyslexia. Additionally, we showed that speechreading (identification of speech based on visual cues from the talking face alone) was a unique contributor to variance in phonological awareness in dyslexic readers only: those who scored higher on speechreading, scored lower on phonological awareness. This suggests a greater reliance on visual speech as a compensatory mechanism when processing auditory speech is problematic. A secondary aim of this study was to better understand the nature of dyslexia. The finding that a sub-group of dyslexic readers scored low on phonological awareness and high on speechreading is consistent with a hybrid perspective of dyslexia: There are multiple possible pathways to reading impairment, which may translate into multiple profiles of dyslexia.
  • Francisco, A. A., Jesse, A., Groen, M. A., & McQueen, J. M. (2017). A general audiovisual temporal processing deficit in adult readers with dyslexia. Journal of Speech, Language, and Hearing Research, 60, 144-158. doi:10.1044/2016_JSLHR-H-15-0375.

    Abstract

    Purpose: Because reading is an audiovisual process, reading impairment may reflect an audiovisual processing deficit. The aim of the present study was to test the existence and scope of such a deficit in adult readers with dyslexia. Method: We tested 39 typical readers and 51 adult readers with dyslexia on their sensitivity to the simultaneity of audiovisual speech and nonspeech stimuli, their time window of audiovisual integration for speech (using incongruent /aCa/ syllables), and their audiovisual perception of phonetic categories. Results: Adult readers with dyslexia showed less sensitivity to audiovisual simultaneity than typical readers for both speech and nonspeech events. We found no differences between readers with dyslexia and typical readers in the temporal window of integration for audiovisual speech or in the audiovisual perception of phonetic categories. Conclusions: The results suggest an audiovisual temporal deficit in dyslexia that is not specific to speech-related events. But the differences found for audiovisual temporal sensitivity did not translate into a deficit in audiovisual speech perception. Hence, there seems to be a hiatus between simultaneity judgment and perception, suggesting a multisensory system that uses different mechanisms across tasks. Alternatively, it is possible that the audiovisual deficit in dyslexia is only observable when explicit judgments about audiovisual simultaneity are required.
  • Francks, C., Paracchini, S., Smith, S. D., Richardson, A. J., Scerri, T. S., Cardon, L. R., Marlow, A. J., MacPhie, I. L., Walter, J., Pennington, B. F., Fisher, S. E., Olson, R. K., DeFries, J. C., Stein, J. F., & Monaco, A. P. (2004). A 77-kilobase region of chromosome 6p22.2 is associated with dyslexia in families from the United Kingdom and from the United States. American Journal of Human Genetics, 75(6), 1046-1058. doi:10.1086/426404.

    Abstract

    Several quantitative trait loci (QTLs) that influence developmental dyslexia (reading disability [RD]) have been mapped to chromosome regions by linkage analysis. The most consistently replicated area of linkage is on chromosome 6p23-21.3. We used association analysis in 223 siblings from the United Kingdom to identify an underlying QTL on 6p22.2. Our association study implicates a 77-kb region spanning the gene TTRAP and the first four exons of the neighboring uncharacterized gene KIAA0319. The region of association is also directly upstream of a third gene, THEM2. We found evidence of these associations in a second sample of siblings from the United Kingdom, as well as in an independent sample of twin-based sibships from Colorado. One main RD risk haplotype that has a frequency of ∼12% was found in both the U.K. and U.S. samples. The haplotype is not distinguished by any protein-coding polymorphisms, and, therefore, the functional variation may relate to gene expression. The QTL influences a broad range of reading-related cognitive abilities but has no significant impact on general cognitive performance in these samples. In addition, the QTL effect may be largely limited to the severe range of reading disability.
  • Francks, C. (2011). Leucine-rich repeat genes and the fine-tuning of synapses. Biological Psychiatry, 69, 820-821. doi:10.1016/j.biopsych.2010.12.018.
  • Frank, M. C., Bergelson, E., Bergmann, C., Cristia, A., Floccia, C., Gervain, J., Hamlin, J. K., Hannon, E. E., Kline, M., Levelt, C., Lew-Williams, C., Nazzi, T., Panneton, R., Rabagliati, H., Soderstrom, M., Sullivan, J., Waxman, S., & Yurovsky, D. (2017). A collaborative approach to infant research: Promoting reproducibility, best practices, and theory-building. Infancy, 22(4), 421-435. doi:10.1111/infa.12182.

    Abstract

    The ideal of scientific progress is that we accumulate measurements and integrate these into theory, but recent discussion of replicability issues has cast doubt on whether psychological research conforms to this model. Developmental research—especially with infant participants—also has discipline-specific replicability challenges, including small samples and limited measurement methods. Inspired by collaborative replication efforts in cognitive and social psychology, we describe a proposal for assessing and promoting replicability in infancy research: large-scale, multi-laboratory replication efforts aiming for a more precise understanding of key developmental phenomena. The ManyBabies project, our instantiation of this proposal, will not only help us estimate how robust and replicable these phenomena are, but also gain new theoretical insights into how they vary across ages, linguistic communities, and measurement methods. This project has the potential for a variety of positive outcomes, including less-biased estimates of theoretically important effects, estimates of variability that can be used for later study planning, and a series of best-practices blueprints for future infancy research.
  • Frank, S. L., & Willems, R. M. (2017). Word predictability and semantic similarity show distinct patterns of brain activity during language comprehension. Language, Cognition and Neuroscience, 32(9), 1192-1203. doi:10.1080/23273798.2017.1323109.

    Abstract

    We investigate the effects of two types of relationship between the words of a sentence or text – predictability and semantic similarity – by reanalysing electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) data from studies in which participants comprehend naturalistic stimuli. Each content word's predictability given previous words is quantified by a probabilistic language model, and semantic similarity to previous words is quantified by a distributional semantics model. Brain activity time-locked to each word is regressed on the two model-derived measures. Results show that predictability and semantic similarity have near identical N400 effects but are dissociated in the fMRI data, with word predictability related to activity in, among others, the visual word-form area, and semantic similarity related to activity in areas associated with the semantic network. This indicates that both predictability and similarity play a role during natural language comprehension and modulate distinct cortical regions.
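    The two word-level measures discussed above can be illustrated with a toy sketch. Here bigram_prob stands in for a trained probabilistic language model and word_vectors for a distributional semantics model; both are assumptions made for illustration, not the models used in the study.

```python
# Toy sketch of word predictability (surprisal) and semantic similarity to
# preceding words; the probability table and vectors below are invented.
import math

def surprisal(word, prev_word, bigram_prob):
    """Predictability measure: -log P(word | previous word)."""
    return -math.log(bigram_prob.get((prev_word, word), 1e-6))

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def semantic_similarity(word, context_words, word_vectors):
    """Similarity of a word to the mean vector of the preceding content words."""
    vecs = [word_vectors[w] for w in context_words if w in word_vectors]
    if not vecs or word not in word_vectors:
        return 0.0
    mean = [sum(dim) / len(vecs) for dim in zip(*vecs)]
    return cosine(word_vectors[word], mean)

bigram_prob = {("the", "dog"): 0.01}
word_vectors = {"dog": [0.9, 0.1], "cat": [0.8, 0.2], "old": [0.1, 0.7]}
print(surprisal("dog", "the", bigram_prob))                      # higher = less predictable
print(semantic_similarity("dog", ["old", "cat"], word_vectors))  # higher = more similar to context
```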
  • Franken, M. K., Acheson, D. J., McQueen, J. M., Eisner, F., & Hagoort, P. (2017). Individual variability as a window on production-perception interactions in speech motor control. The Journal of the Acoustical Society of America, 142(4), 2007-2018. doi:10.1121/1.5006899.

    Abstract

    An important part of understanding speech motor control consists of capturing the interaction between speech production and speech perception. This study tests a prediction of theoretical frameworks that have tried to account for these interactions: if speech production targets are specified in auditory terms, individuals with better auditory acuity should have more precise speech targets, evidenced by decreased within-phoneme variability and increased between-phoneme distance. A study was carried out consisting of perception and production tasks in counterbalanced order. Auditory acuity was assessed using an adaptive speech discrimination task, while production variability was determined using a pseudo-word reading task. Analyses of the production data were carried out to quantify average within-phoneme variability as well as average between-phoneme contrasts. Results show that individuals not only vary in their production and perceptual abilities, but that better discriminators have more distinctive vowel production targets (that is, targets with less within-phoneme variability and greater between-phoneme distances), confirming the initial hypothesis. This association between speech production and perception did not depend on local phoneme density in vowel space. This study suggests that better auditory acuity leads to more precise speech production targets, which may be a consequence of auditory feedback affecting speech production over time.
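    As a concrete reading of the two production measures named in the abstract, the sketch below computes average within-phoneme variability and between-phoneme distance from (F1, F2) formant measurements of repeated vowel tokens. The values are invented and this is not the authors' analysis pipeline.

```python
# Illustrative computation of within-phoneme variability and between-phoneme
# distance from (F1, F2) formant pairs; all values are made up for the example.
import math

def centroid(points):
    return tuple(sum(dim) / len(points) for dim in zip(*points))

def within_phoneme_variability(tokens):
    """Mean Euclidean distance of each token from its phoneme centroid."""
    c = centroid(tokens)
    return sum(math.dist(t, c) for t in tokens) / len(tokens)

def between_phoneme_distance(tokens_a, tokens_b):
    """Euclidean distance between the centroids of two phoneme categories."""
    return math.dist(centroid(tokens_a), centroid(tokens_b))

vowel_i = [(300, 2300), (310, 2250), (295, 2320)]    # toy (F1, F2) values in Hz
vowel_a = [(750, 1300), (760, 1280), (740, 1350)]
print(within_phoneme_variability(vowel_i))           # smaller = more precise target
print(between_phoneme_distance(vowel_i, vowel_a))    # larger = more distinctive targets
```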
  • Frega, M., van Gestel, S. H. C., Linda, K., Van der Raadt, J., Keller, J., Van Rhijn, J. R., Schubert, D., Albers, C. A., & Kasri, N. N. (2017). Rapid neuronal differentiation of induced pluripotent stem cells for measuring network activity on micro-electrode arrays. Journal of Visualized Experiments, e54900. doi:10.3791/54900.

    Abstract

    Neurons derived from human induced Pluripotent Stem Cells (hiPSCs) provide a promising new tool for studying neurological disorders. In the past decade, many protocols for differentiating hiPSCs into neurons have been developed. However, these protocols are often slow with high variability, low reproducibility, and low efficiency. In addition, the neurons obtained with these protocols are often immature and lack adequate functional activity both at the single-cell and network levels unless the neurons are cultured for several months. Partially due to these limitations, the functional properties of hiPSC-derived neuronal networks are still not well characterized. Here, we adapt a recently published protocol that describes production of human neurons from hiPSCs by forced expression of the transcription factor neurogenin-2. This protocol is rapid (yielding mature neurons within 3 weeks) and efficient, with nearly 100% conversion efficiency of transduced cells (>95% of DAPI-positive cells are MAP2 positive). Furthermore, the protocol yields a homogeneous population of excitatory neurons that would allow the investigation of cell-type specific contributions to neurological disorders. We modified the original protocol by generating stably transduced hiPSC cells, giving us explicit control over the total number of neurons. These cells are then used to generate hiPSC-derived neuronal networks on micro-electrode arrays. In this way, the spontaneous electrophysiological activity of hiPSC-derived neuronal networks can be measured and characterized, while retaining interexperimental consistency in terms of cell density. The presented protocol is broadly applicable, especially for mechanistic and pharmacological studies on human neuronal networks.

    Additional information

    video component of this article
  • French, C. A., & Fisher, S. E. (2014). What can mice tell us about Foxp2 function? Current Opinion in Neurobiology, 28, 72-79. doi:10.1016/j.conb.2014.07.003.

    Abstract

    Disruptions of the FOXP2 gene cause a rare speech and language disorder, a discovery that has opened up novel avenues for investigating the relevant neural pathways. FOXP2 shows remarkably high conservation of sequence and neural expression in diverse vertebrates, suggesting that studies in other species are useful in elucidating its functions. Here we describe how investigations of mice that carry disruptions of Foxp2 provide insights at multiple levels: molecules, cells, circuits and behaviour. Work thus far has implicated the gene in key processes including neurite outgrowth, synaptic plasticity, sensorimotor integration and motor-skill learning.
  • Frost, R. L. A., Monaghan, P., & Tatsumi, T. (2017). Domain-general mechanisms for speech segmentation: The role of duration information in language learning. Journal of Experimental Psychology: Human Perception and Performance, 43(3), 466-476. doi:10.1037/xhp0000325.

    Abstract

    Speech segmentation is supported by multiple sources of information that may either inform language processing specifically, or serve learning more broadly. The Iambic/Trochaic Law (ITL), where increased duration indicates the end of a group and increased emphasis indicates the beginning of a group, has been proposed as a domain-general mechanism that also applies to language. However, language background has been suggested to modulate use of the ITL, meaning that these perceptual grouping preferences may instead be a consequence of language exposure. To distinguish between these accounts, we exposed native-English and native-Japanese listeners to sequences of speech (Experiment 1) and nonspeech stimuli (Experiment 2), and examined segmentation using a 2AFC task. Duration was manipulated over 3 conditions: sequences contained either an initial-item duration increase, or a final-item duration increase, or items of uniform duration. In Experiment 1, language background did not affect the use of duration as a cue for segmenting speech in a structured artificial language. In Experiment 2, the same results were found for grouping structured sequences of visual shapes. The results are consistent with proposals that duration information draws upon a domain-general mechanism that can apply to the special case of language acquisition.
  • Frost, R. L. A., & Monaghan, P. (2017). Sleep-driven computations in speech processing. PLoS One, 12(1): e0169538. doi:10.1371/journal.pone.0169538.

    Abstract

    Acquiring language requires segmenting speech into individual words, and abstracting over those words to discover grammatical structure. However, these tasks can be conflicting—on the one hand requiring memorisation of precise sequences that occur in speech, and on the other requiring a flexible reconstruction of these sequences to determine the grammar. Here, we examine whether speech segmentation and generalisation of grammar can occur simultaneously—with the conflicting requirements for these tasks being overcome by sleep-related consolidation. After exposure to an artificial language comprising words containing non-adjacent dependencies, participants underwent periods of consolidation involving either sleep or wake. Participants who slept before testing demonstrated a sustained boost to word learning and a short-term improvement to grammatical generalisation of the non-adjacencies, with improvements after sleep outweighing gains seen after an equal period of wake. Thus, we propose that sleep may facilitate processing for these conflicting tasks in language acquisition, but with enhanced benefits for speech segmentation.

    Additional information

    Data available
  • Fuhrmann, D., Ravignani, A., Marshall-Pescini, S., & Whiten, A. (2014). Synchrony and motor mimicking in chimpanzee observational learning. Scientific Reports, 4: 5283. doi:10.1038/srep05283.

    Abstract

    Cumulative tool-based culture underwrote our species' evolutionary success and tool-based nut-cracking is one of the strongest candidates for cultural transmission in our closest relatives, chimpanzees. However, the social learning processes that may explain both the similarities and differences between the species remain unclear. A previous study of nut-cracking by initially naïve chimpanzees suggested that a learning chimpanzee holding no hammer nevertheless replicated hammering actions it witnessed. This observation has potentially important implications for the nature of the social learning processes and underlying motor coding involved. In the present study, model and observer actions were quantified frame-by-frame and analysed with stringent statistical methods, demonstrating synchrony between the observer's and model's movements, cross-correlation of these movements above chance level and a unidirectional transmission process from model to observer. These results provide the first quantitative evidence for motor mimicking underlain by motor coding in apes, with implications for mirror neuron function.

    Additional information

    Supplementary Information
  • Furman, R., Kuntay, A., & Ozyurek, A. (2014). Early language-specificity of children's event encoding in speech and gesture: Evidence from caused motion in Turkish. Language, Cognition and Neuroscience, 29, 620-634. doi:10.1080/01690965.2013.824993.

    Abstract

    Previous research on language development shows that children are tuned early on to the language-specific semantic and syntactic encoding of events in their native language. Here we ask whether language-specificity is also evident in children's early representations in gesture accompanying speech. In a longitudinal study, we examined the spontaneous speech and cospeech gestures of eight Turkish-speaking children aged one to three and focused on their caused motion event expressions. In Turkish, unlike in English, the main semantic elements of caused motion such as Action and Path can be encoded in the verb (e.g. sok- ‘put in’) and the arguments of a verb can be easily omitted. We found that Turkish-speaking children's speech indeed displayed these language-specific features and focused on verbs to encode caused motion. More interestingly, we found that their early gestures also manifested specificity. Children used iconic cospeech gestures (from 19 months onwards) as often as pointing gestures and represented semantic elements such as Action with Figure and/or Path that reinforced or supplemented speech in language-specific ways until the age of three. In the light of previous reports on the scarcity of iconic gestures in English-speaking children's early productions, we argue that the language children learn shapes gestures and how they get integrated with speech in the first three years of life.
  • Gaby, A. R. (2006). The Thaayorre 'true man': Lexicon of the human body in an Australian language. Language Sciences, 28(2-3), 201-220. doi:10.1016/j.langsci.2005.11.006.

    Abstract

    Segmentation (and, indeed, definition) of the human body in Kuuk Thaayorre (a Paman language of Cape York Peninsula, Australia) is in some respects typologically unusual, while at other times it conforms to cross-linguistic patterns. The process of deriving complex body part terms from monolexemic items is revealing of metaphorical associations between parts of the body. Associations between parts of the body and entities and phenomena in the broader environment are evidenced by the ubiquity of body part terms (in their extended uses) throughout Thaayorre speech. Understanding the categorisation of the body is therefore prerequisite to understanding the Thaayorre language and worldview.
  • Gaby, A. R. (2004). Extended functions of Thaayorre body part terms. Papers in Linguistics and Applied Linguistics, 4(2), 24-34.
  • Ganushchak, L. Y., Verdonschot, R. G., & Schiller, N. O. (2011). When leaf becomes neuter: Event related potential evidence for grammatical gender transfer in bilingualism. Neuroreport, 22(3), 106-110. doi:10.1097/WNR.0b013e3283427359.

    Abstract

    This study addressed the question as to whether grammatical properties of a first language are transferred to a second language. Dutch-English bilinguals classified Dutch words in white print according to their grammatical gender and colored words (i.e. Dutch common and neuter words, and their English translations) according to their color. Both classifications were made with the same hand (congruent trials) or with different hands (incongruent trials). Performance was more erroneous and the error-related negativity was enhanced on incongruent compared with congruent trials. This effect was independent of the language in which words were presented. These results provide evidence that bilinguals may transfer grammatical characteristics of their first language to a second language, even when such characteristics are absent in the grammar of the latter.

  • Ganushchak, L. Y., & Schiller, N. (2006). Effects of time pressure on verbal self-monitoring: An ERP study. Brain Research, 1125, 104-115. doi:10.1016/j.brainres.2006.09.096.

    Abstract

    The Error-Related Negativity (ERN) is a component of the event-related brain potential (ERP) that is associated with action monitoring and error detection. The present study addressed the question whether or not an ERN occurs after verbal error detection, e.g., during phoneme monitoring. We obtained an ERN following verbal errors which showed a typical decrease in amplitude under severe time pressure. This result demonstrates that the functioning of the verbal self-monitoring system is comparable to other performance monitoring, such as action monitoring. Furthermore, we found that participants made more errors in phoneme monitoring under time pressure than in a control condition. This may suggest that time pressure decreases the amount of resources available to a capacity-limited self-monitor, thereby leading to more errors.
