Publications

  • Dediu, D. (2017). From biology to language change and diversity. In N. J. Enfield (Ed.), Dependencies in language: On the causal ontology of linguistic systems (pp. 39-52). Berlin: Language Science Press.
  • Dediu, D., Janssen, R., & Moisik, S. R. (2017). Language is not isolated from its wider environment: Vocal tract influences on the evolution of speech and language. Language and Communication, 54, 9-20. doi:10.1016/j.langcom.2016.10.002.

    Abstract

    Language is not a purely cultural phenomenon somehow isolated from its wider environment, and we may only understand its origins and evolution by seriously considering its embedding in this environment as well as its multimodal nature. By environment here we understand other aspects of culture (such as communication technology, attitudes towards language contact, etc.), of the physical environment (ultraviolet light incidence, air humidity, etc.), and of the biological infrastructure for language and speech. We are specifically concerned in this paper with the latter, in the form of the biases, constraints and affordances that the anatomy and physiology of the vocal tract create on speech and language. In a nutshell, our argument is that (a) there is an under-appreciated amount of inter-individual variation in vocal tract (VT) anatomy and physiology, (b) variation that is non-randomly distributed across populations, and that (c) results in systematic differences in phonetics and phonology between languages. Relevant differences in VT anatomy include the overall shape of the hard palate, the shape of the alveolar ridge, the relationship between the lower and upper jaw, to mention just a few, and our data offer a new way to systematically explore such differences and their potential impact on speech. These differences generate very small biases that nevertheless can be amplified by the repeated use and transmission of language, affecting language diachrony and resulting in cross-linguistic synchronic differences. Moreover, the same type of biases and processes might have played an essential role in the emergence and evolution of language, and might allow us a glimpse into the speech and language of extinct humans by, for example, reconstructing the anatomy of parts of their vocal tract from the fossil record and extrapolating the biases we find in present-day humans.
  • Dediu, D., & Moisik, S. R. (2019). Pushes and pulls from below: Anatomical variation, articulation and sound change. Glossa: A Journal of General Linguistics, 4(1): 7. doi:10.5334/gjgl.646.

    Abstract

    This paper argues that inter-individual and inter-group variation in language acquisition, perception, processing and production, rooted in our biology, may play a largely neglected role in sound change. We begin by discussing the patterning of these differences, highlighting those related to vocal tract anatomy with a foundation in genetics and development. We use our ArtiVarK database, a large multi-ethnic sample comprising 3D intraoral optical scans, as well as structural, static and real-time MRI scans of vocal tract anatomy and speech articulation, to quantify the articulatory strategies used to produce the North American English /r/ and to statistically show that anatomical factors seem to influence these articulatory strategies. Building on work showing that these alternative articulatory strategies may have indirect coarticulatory effects, we propose two models for how biases due to variation in vocal tract anatomy may affect sound change. The first involves direct overt acoustic effects of such biases that are then reinterpreted by the hearers, while the second is based on indirect coarticulatory phenomena generated by acoustically covert biases that produce overt “at-a-distance” acoustic effects. This view implies that speaker communities might be “poised” for change because they always contain pools of “standing variation” of such biased speakers, and when factors such as the frequency of the biased speakers in the community, their positions in the communicative network or the topology of the network itself change, sound change may rapidly follow as a self-reinforcing network-level phenomenon, akin to a phase transition. Thus, inter-speaker variation in structured and dynamic communicative networks may couple the initiation and actuation of sound change.
  • Dediu, D., Janssen, R., & Moisik, S. R. (2019). Weak biases emerging from vocal tract anatomy shape the repeated transmission of vowels. Nature Human Behaviour, 3, 1107-1115. doi:10.1038/s41562-019-0663-x.

    Abstract

    Linguistic diversity is affected by multiple factors, but it is usually assumed that variation in the anatomy of our speech organs plays no explanatory role. Here we use realistic computer models of the human speech organs to test whether inter-individual and inter-group variation in the shape of the hard palate (the bony roof of the mouth) affects acoustics of speech sounds. Based on 107 midsagittal MRI scans of the hard palate of human participants, we modelled with high accuracy the articulation of a set of five cross-linguistically representative vowels by agents learning to produce speech sounds. We found that different hard palate shapes result in subtle differences in the acoustics and articulatory strategies of the produced vowels, and that these individual-level speech idiosyncrasies are amplified by the repeated transmission of language across generations. Therefore, we suggest that, besides culture and environment, quantitative biological variation can be amplified, also influencing language.
  • Demontis, D., Walters, R. K., Martin, J., Mattheisen, M., Als, T. D., Agerbo, E., Baldursson, G., Belliveau, R., Bybjerg-Grauholm, J., Bækvad-Hansen, M., Cerrato, F., Chambert, K., Churchhouse, C., Dumont, A., Eriksson, N., Gandal, M., Goldstein, J. I., Grasby, K. L., Grove, J., Gudmundsson, O. O., Hansen, C. S., Hauberg, M. E., Hollegaard, M. V., Howrigan, D. P., Huang, H., Maller, J. B., Martin, A. R., Martin, N. G., Moran, J., Pallesen, J., Palmer, D. S., Pedersen, C. B., Pedersen, M. G., Poterba, T., Poulsen, J. B., Ripke, S., Robinson, E. B., Satterstrom, F. K., Stefansson, H., Stevens, C., Turley, P., Walters, G. B., Won, H., Wright, M. J., ADHD Working Group of the Psychiatric Genomics Consortium (PGC), EArly Genetics and Lifecourse Epidemiology (EAGLE) Consortium, 23andme Research Team, Andreassen, O. A., Asherson, P., Burton, C. L., Boomsma, D. I., Cormand, B., Dalsgaard, S., Franke, B., Gelernter, J., Geschwind, D., Hakonarson, H., Haavik, J., Kranzler, H. R., Kuntsi, J., Langley, K., Lesch, K.-P., Middeldorp, C., Reif, A., Rohde, L. A., Roussos, P., Schachar, R., Sklar, P., Sonuga-Barke, E. J. S., Sullivan, P. F., Thapar, A., Tung, J. Y., Waldman, I. D., Medland, S. E., Stefansson, K., Nordentoft, M., Hougaard, D. M., Werge, T., Mors, O., Mortensen, P. B., Daly, M. J., Faraone, S. V., Børglum, A. D., & Neale, B. (2019). Discovery of the first genome-wide significant risk loci for attention deficit/hyperactivity disorder. Nature Genetics, 51, 63-75. doi:10.1038/s41588-018-0269-7.

    Abstract

    Attention deficit/hyperactivity disorder (ADHD) is a highly heritable childhood behavioral disorder affecting 5% of children and 2.5% of adults. Common genetic variants contribute substantially to ADHD susceptibility, but no variants have been robustly associated with ADHD. We report a genome-wide association meta-analysis of 20,183 individuals diagnosed with ADHD and 35,191 controls that identifies variants surpassing genome-wide significance in 12 independent loci, finding important new information about the underlying biology of ADHD. Associations are enriched in evolutionarily constrained genomic regions and loss-of-function intolerant genes and around brain-expressed regulatory marks. Analyses of three replication studies: a cohort of individuals diagnosed with ADHD, a self-reported ADHD sample and a meta-analysis of quantitative measures of ADHD symptoms in the population, support these findings while highlighting study-specific differences in genetic overlap with educational attainment. Strong concordance with GWAS of quantitative population measures of ADHD symptoms supports that clinical diagnosis of ADHD is an extreme expression of continuous heritable traits.
  • Deriziotis, P., & Fisher, S. E. (2017). Speech and Language: Translating the Genome. Trends in Genetics, 33(9), 642-656. doi:10.1016/j.tig.2017.07.002.

    Abstract

    Investigation of the biological basis of human speech and language is being transformed by developments in molecular technologies, including high-throughput genotyping and next-generation sequencing of whole genomes. These advances are shedding new light on the genetic architecture underlying language-related disorders (speech apraxia, specific language impairment, developmental dyslexia) as well as that contributing to variation in relevant skills in the general population. We discuss how state-of-the-art methods are uncovering a range of genetic mechanisms, from rare mutations of large effect to common polymorphisms that increase risk in a subtle way, while converging on neurogenetic pathways that are shared between distinct disorders. We consider the future of the field, highlighting the unusual challenges and opportunities associated with studying genomics of language-related traits.
  • Devanna, P., Dediu, D., & Vernes, S. C. (2019). The Genetics of Language: From complex genes to complex communication. In S.-A. Rueschemeyer, & M. G. Gaskell (Eds.), The Oxford Handbook of Psycholinguistics (2nd ed., pp. 865-898). Oxford: Oxford University Press.

    Abstract

    This chapter discusses the genetic foundations of the human capacity for language. It reviews the molecular structure of the genome and the complex molecular mechanisms that allow genetic information to influence multiple levels of biology. It goes on to describe the active regulation of genes and their formation of complex genetic pathways that in turn control the cellular environment and function. At each of these levels, examples of genes and genetic variants that may influence the human capacity for language are given. Finally, it discusses the value of using animal models to understand the genetic underpinnings of speech and language. From this chapter will emerge the complexity of the genome in action and the multidisciplinary efforts that are currently made to bridge the gap between genetics and language.
  • Devaraju, K., Miskinyte, G., Hansen, M. G., Monni, E., Tornero, D., Woods, N. B., Bengzon, J., Ahlenius, H., Lindvall, O., & Kokaia, Z. (2017). Direct conversion of human fibroblasts to functional excitatory cortical neurons integrating into human neural networks. Stem Cell Research & Therapy, 8: 207. doi:10.1186/s13287-017-0658-3.

    Abstract

    Background: Human fibroblasts can be directly converted to several subtypes of neurons, but cortical projection neurons have not been generated. Methods: Here we screened for transcription factor combinations that could potentially convert human fibroblasts to functional excitatory cortical neurons. The induced cortical (iCtx) cells were analyzed for cortical neuronal identity using immunocytochemistry, single-cell quantitative polymerase chain reaction (qPCR), electrophysiology, and their ability to integrate into human neural networks in vitro and ex vivo using electrophysiology and rabies virus tracing. Results: We show that a combination of three transcription factors, BRN2, MYT1L, and FEZF2, have the ability to directly convert human fibroblasts to functional excitatory cortical neurons. The conversion efficiency was increased to about 16% by treatment with small molecules and microRNAs. The iCtx cells exhibited electrophysiological properties of functional neurons, had pyramidal-like cell morphology, and expressed key cortical projection neuronal markers. Single-cell analysis of iCtx cells revealed a complex gene expression profile, a subpopulation of them displaying a molecular signature closely resembling that of human fetal primary cortical neurons. The iCtx cells received synaptic inputs from co-cultured human fetal primary cortical neurons, contained spines, and expressed the postsynaptic excitatory scaffold protein PSD95. When transplanted ex vivo to organotypic cultures of adult human cerebral cortex, the iCtx cells exhibited morphological and electrophysiological properties of mature neurons, integrated structurally into the cortical tissue, and received synaptic inputs from adult human neurons. Conclusions: Our findings indicate that functional excitatory cortical neurons, generated here for the first time by direct conversion of human somatic cells, have the capacity for synaptic integration into adult human cortex.
  • Dideriksen, C., Fusaroli, R., Tylén, K., Dingemanse, M., & Christiansen, M. H. (2019). Contextualizing Conversational Strategies: Backchannel, Repair and Linguistic Alignment in Spontaneous and Task-Oriented Conversations. In A. K. Goel, C. M. Seifert, & C. Freksa (Eds.), Proceedings of the 41st Annual Conference of the Cognitive Science Society (CogSci 2019) (pp. 261-267). Montreal, QC: Cognitive Science Society.

    Abstract

    Do interlocutors adjust their conversational strategies to the specific contextual demands of a given situation? Prior studies have yielded conflicting results, making it unclear how strategies vary with demands. We combine insights from qualitative and quantitative approaches in a within-participant experimental design involving two different contexts: spontaneously occurring conversations (SOC) and task-oriented conversations (TOC). We systematically assess backchanneling, other-repair and linguistic alignment. We find that SOC exhibit a higher number of backchannels, a reduced and more generic repair format and higher rates of lexical and syntactic alignment. TOC are characterized by a high number of specific repairs and a lower rate of lexical and syntactic alignment. However, when alignment occurs, more linguistic forms are aligned. The findings show that conversational strategies adapt to specific contextual demands.
  • Dietrich, R., Klein, W., & Noyau, C. (1995). The acquisition of temporality in a second language. Amsterdam: Benjamins.
  • Dieuleveut, A., Van Dooren, A., Cournane, A., & Hacquard, V. (2019). Acquiring the force of modals: Sig you guess what sig means? In M. Brown, & B. Dailey (Eds.), BUCLD 43: Proceedings of the 43rd annual Boston University Conference on Language Development (pp. 189-202). Somerville, MA: Cascadilla Press.
  • Dimroth, C., & Starren, M. (Eds.). (2003). Information structure and the dynamics of language acquisition. Amsterdam: John Benjamins.

    Abstract

    The papers in this volume focus on the impact of information structure on language acquisition, thereby taking different linguistic approaches into account. They start from an empirical point of view, and examine data from natural first and second language acquisition, which cover a wide range of varieties, from early learner language to native speaker production and from gesture to Creole prototypes. The central theme is the interplay between principles of information structure and linguistic structure and its impact on the functioning and development of the learner's system. The papers examine language-internal explanatory factors and in particular the communicative and structural forces that push and shape the acquisition process, and its outcome. On the theoretical level, the approach adopted appeals both to formal and communicative constraints on a learner’s language in use. Two empirical domains provide a 'testing ground' for the respective weight of grammatical versus functional determinants in the acquisition process: (1) the expression of finiteness and scope relations at the utterance level and (2) the expression of anaphoric relations at the discourse level.
  • Dimroth, C., Gretsch, P., Jordens, P., Perdue, C., & Starren, M. (2003). Finiteness in Germanic languages: A stage-model for first and second language development. In C. Dimroth, & M. Starren (Eds.), Information structure and the dynamics of language acquisition (pp. 65-94). Amsterdam: Benjamins.
  • Dimroth, C., & Starren, M. (2003). Introduction. In C. Dimroth, & M. Starren (Eds.), Information structure and the dynamics of language acquisition (pp. 1-14). Amsterdam: John Benjamins.
  • Dingemanse, M. (2017). Brain-to-brain interfaces and the role of language in distributing agency. In N. J. Enfield, & P. Kockelman (Eds.), Distributed Agency (pp. 59-66). Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780190457204.003.0007.

    Abstract

    Brain-to-brain interfaces, in which brains are physically connected without the intervention of language, promise new ways of collaboration and communication between humans. I examine the narrow view of language implicit in current conceptions of brain-to-brain interfaces and put forward a constructive alternative, stressing the role of language in organising joint agency. Two features of language stand out as crucial: its selectivity, which provides people with much-needed filters between public words and private worlds; and its negotiability, which provides people with systematic opportunities for calibrating understanding and expressing consent and dissent. Without these checks and balances, brain-to-brain interfaces run the risk of reducing people to the level of amoeba in a slime mold; with them, they may mature to become useful extensions of human agency.
  • Dingemanse, M., & Akita, K. (2017). An inverse relation between expressiveness and grammatical integration: on the morphosyntactic typology of ideophones, with special reference to Japanese. Journal of Linguistics, 53(3), 501-532. doi:10.1017/S002222671600030X.

    Abstract

    Words and phrases may differ in the extent to which they are susceptible to prosodic foregrounding and expressive morphology: their expressiveness. They may also differ in the degree to which they are integrated in the morphosyntactic structure of the utterance: their grammatical integration. We describe an inverse relation that holds across widely varied languages, such that more expressiveness goes together with less grammatical integration, and vice versa. We review typological evidence for this inverse relation in 10 languages, then quantify and explain it using Japanese corpus data. We do this by tracking ideophones —vivid sensory words also known as mimetics or expressives— across different morphosyntactic contexts and measuring their expressiveness in terms of intonation, phonation and expressive morphology. We find that as expressiveness increases, grammatical integration decreases. Using gesture as a measure independent of the speech signal, we find that the most expressive ideophones are most likely to come together with iconic gestures. We argue that the ultimate cause is the encounter of two distinct and partly incommensurable modes of representation: the gradient, iconic, depictive system represented by ideophones and iconic gestures and the discrete, arbitrary, descriptive system represented by ordinary words. The study shows how people combine modes of representation in speech and demonstrates the value of integrating description and depiction into the scientific vision of language.

    Additional information

    Open data & R code
  • Dingemanse, M. (2019). 'Ideophone' as a comparative concept. In K. Akita, & P. Pardeshi (Eds.), Ideophones, Mimetics, and Expressives (pp. 13-33). Amsterdam: John Benjamins. doi:10.1075/ill.16.02din.

    Abstract

    This chapter makes the case for ‘ideophone’ as a comparative concept: a notion that captures a recurrent typological pattern and provides a template for understanding language-specific phenomena that prove similar. It revises an earlier definition to account for the observation that ideophones typically form an open lexical class, and uses insights from canonical typology to explore the larger typological space. According to the resulting definition, a canonical ideophone is a member of an open lexical class of marked words that depict sensory imagery. The five elements of this definition can be seen as dimensions that together generate a possibility space to characterise cross-linguistic diversity in depictive means of expression. This approach allows for the systematic comparative treatment of ideophones and ideophone-like phenomena. Some phenomena in the larger typological space are discussed to demonstrate the utility of the approach: phonaesthemes in European languages, specialised semantic classes in West-Chadic, diachronic diversions in Aslian, and depicting constructions in signed languages.
  • Dingemanse, M. (2017). Expressiveness and system integration: On the typology of ideophones, with special reference to Siwu. STUF - Language Typology and Universals, 70(2), 363-384. doi:10.1515/stuf-2017-0018.

    Abstract

    Ideophones are often described as words that are highly expressive and morphosyntactically marginal. A study of ideophones in everyday conversations in Siwu (Kwa, eastern Ghana) reveals a landscape of variation and change that sheds light on some larger questions in the morphosyntactic typology of ideophones. The article documents a trade-off between expressiveness and morphosyntactic integration, with high expressiveness linked to low integration and vice versa. It also describes a pathway for deideophonisation and finds that frequency of use is a factor that influences the degree to which ideophones can come to be more like ordinary words. The findings have implications for processes of (de)ideophonisation, ideophone borrowing, and ideophone typology. A key point is that the internal diversity we find in naturally occurring data, far from being mere noise, is patterned variation that can help us to get a handle on the factors shaping ideophone systems within and across languages.
  • Dingemanse, M. (2017). On the margins of language: Ideophones, interjections and dependencies in linguistic theory. In N. J. Enfield (Ed.), Dependencies in language (pp. 195-202). Berlin: Language Science Press. doi:10.5281/zenodo.573781.

    Abstract

    Linguistic discovery is viewpoint-dependent, just like our ideas about what is marginal and what is central in language. In this essay I consider two supposed marginalia —ideophones and interjections— which provide some useful pointers for widening our field of view. Ideophones challenge us to take a fresh look at language and consider how it is that our communication system combines multiple modes of representation. Interjections challenge us to extend linguistic inquiry beyond sentence level, and remind us that language is social-interactive at core. Marginalia, then, are not the obscure, exotic phenomena that can be safely ignored: they represent opportunities for innovation and invite us to keep pushing the edges of linguistic inquiry.
  • Dingemanse, M., Rossi, G., & Floyd, S. (2017). Place reference in story beginnings: a cross-linguistic study of narrative and interactional affordances. Language in Society, 46(2), 129-158. doi:10.1017/S0047404516001019.

    Abstract

    People often begin stories in conversation by referring to person, time, and place. We study story beginnings in three societies and find place reference is recurrently used to (i) set the stage, foreshadowing the type of story and the kind of response due, and to (ii) make the story cohere, anchoring elements of the developing story. Recipients orient to these interactional affordances of place reference by responding in ways that attend to the relevance of place for the story and by requesting clarification when references are incongruent or noticeably absent. The findings are based on 108 story beginnings in three unrelated languages: Cha’palaa, a Barbacoan language of Ecuador; Northern Italian, a Romance language of Italy; and Siwu, a Kwa language of Ghana. The commonalities suggest we have identified generic affordances of place reference, and that storytelling in conversation offers a robust sequential environment for systematic comparative research on conversational structures.
  • Doherty, M., & Klein, W. (Eds.). (1991). Übersetzung [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (84).
  • Doumas, L. A. A., Hamer, A., Puebla, G., & Martin, A. E. (2017). A theory of the detection and learning of structured representations of similarity and relative magnitude. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017) (pp. 1955-1960). Austin, TX: Cognitive Science Society.

    Abstract

    Responding to similarity, difference, and relative magnitude (SDM) is ubiquitous in the animal kingdom. However, humans seem unique in the ability to represent relative magnitude (‘more’/‘less’) and similarity (‘same’/‘different’) as abstract relations that take arguments (e.g., greater-than (x,y)). While many models use structured relational representations of magnitude and similarity, little progress has been made on how these representations arise. Models that use these representations assume access to computations of similarity and magnitude a priori, either encoded as features or as output of evaluation operators. We detail a mechanism for producing invariant responses to “same”, “different”, “more”, and “less” which can be exploited to compute similarity and magnitude as an evaluation operator. Using DORA (Doumas, Hummel, & Sandhofer, 2008), these invariant responses can be used to learn structured relational representations of relative magnitude and similarity from pixel images of simple shapes.
  • Drijvers, L., Vaitonyte, J., & Ozyurek, A. (2019). Degree of language experience modulates visual attention to visible speech and iconic gestures during clear and degraded speech comprehension. Cognitive Science, 43: e12789. doi:10.1111/cogs.12789.

    Abstract

    Visual information conveyed by iconic hand gestures and visible speech can enhance speech comprehension under adverse listening conditions for both native and non‐native listeners. However, how a listener allocates visual attention to these articulators during speech comprehension is unknown. We used eye‐tracking to investigate whether and how native and highly proficient non‐native listeners of Dutch allocated overt eye gaze to visible speech and gestures during clear and degraded speech comprehension. Participants watched video clips of an actress uttering a clear or degraded (6‐band noise‐vocoded) action verb while performing a gesture or not, and were asked to indicate the word they heard in a cued‐recall task. Gestural enhancement was the largest (i.e., a relative reduction in reaction time cost) when speech was degraded for all listeners, but it was stronger for native listeners. Both native and non‐native listeners mostly gazed at the face during comprehension, but non‐native listeners gazed more often at gestures than native listeners. However, only native but not non‐native listeners' gaze allocation to gestures predicted gestural benefit during degraded speech comprehension. We conclude that non‐native listeners might gaze at gesture more as it might be more challenging for non‐native listeners to resolve the degraded auditory cues and couple those cues to phonological information that is conveyed by visible speech. This diminished phonological knowledge might hinder the use of semantic information that is conveyed by gestures for non‐native compared to native listeners. Our results demonstrate that the degree of language experience impacts overt visual attention to visual articulators, resulting in different visual benefits for native versus non‐native listeners.

    Additional information

    Supporting information
  • Drijvers, L., Van der Plas, M., Ozyurek, A., & Jensen, O. (2019). Native and non-native listeners show similar yet distinct oscillatory dynamics when using gestures to access speech in noise. NeuroImage, 194, 55-67. doi:10.1016/j.neuroimage.2019.03.032.

    Abstract

    Listeners are often challenged by adverse listening conditions during language comprehension induced by external factors, such as noise, but also internal factors, such as being a non-native listener. Visible cues, such as semantic information conveyed by iconic gestures, can enhance language comprehension in such situations. Using magnetoencephalography (MEG) we investigated whether spatiotemporal oscillatory dynamics can predict a listener's benefit of iconic gestures during language comprehension in both internally (non-native versus native listeners) and externally (clear/degraded speech) induced adverse listening conditions. Proficient non-native speakers of Dutch were presented with videos in which an actress uttered a degraded or clear verb, accompanied by a gesture or not, and completed a cued-recall task after every video. The behavioral and oscillatory results obtained from non-native listeners were compared to an MEG study where we presented the same stimuli to native listeners (Drijvers et al., 2018a). Non-native listeners demonstrated a similar gestural enhancement effect as native listeners, but overall scored significantly slower on the cued-recall task. In both native and non-native listeners, an alpha/beta power suppression revealed engagement of the extended language network, motor and visual regions during gestural enhancement of degraded speech comprehension, suggesting similar core processes that support unification and lexical access processes. An individual's alpha/beta power modulation predicted the gestural benefit a listener experienced during degraded speech comprehension. Importantly, however, non-native listeners showed less engagement of the mouth area of the primary somatosensory cortex, left insula (beta), LIFG and ATL (alpha) than native listeners, which suggests that non-native listeners might be hindered in processing the degraded phonological cues and coupling them to the semantic information conveyed by the gesture. Native and non-native listeners thus demonstrated similar yet distinct spatiotemporal oscillatory dynamics when recruiting visual cues to disambiguate degraded speech.

    Additional information

    1-s2.0-S1053811919302216-mmc1.docx
  • Drijvers, L. (2019). On the oscillatory dynamics underlying speech-gesture integration in clear and adverse listening conditions. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Drijvers, L., & Ozyurek, A. (2017). Visual context enhanced: The joint contribution of iconic gestures and visible speech to degraded speech comprehension. Journal of Speech, Language, and Hearing Research, 60, 212-222. doi:10.1044/2016_JSLHR-H-16-0101.

    Abstract

    Purpose: This study investigated whether and to what extent iconic co-speech gestures contribute to information from visible speech to enhance degraded speech comprehension at different levels of noise-vocoding. Previous studies of the contributions of these 2 visual articulators to speech comprehension have only been performed separately.

    Method: Twenty participants watched videos of an actress uttering an action verb and completed a free-recall task. The videos were presented in 3 speech conditions (2-band noise-vocoding, 6-band noise-vocoding, clear), 3 multimodal conditions (speech + lips blurred, speech + visible speech, speech + visible speech + gesture), and 2 visual-only conditions (visible speech, visible speech + gesture).

    Results: Accuracy levels were higher when both visual articulators were present compared with 1 or none. The enhancement effects of (a) visible speech, (b) gestural information on top of visible speech, and (c) both visible speech and iconic gestures were larger in 6-band than 2-band noise-vocoding or visual-only conditions. Gestural enhancement in 2-band noise-vocoding did not differ from gestural enhancement in visual-only conditions.
  • Drozd, K. F. (1995). Child English pre-sentential negation as metalinguistic exclamatory sentence negation. Journal of Child Language, 22(3), 583-610. doi:10.1017/S030500090000996X.

    Abstract

    This paper presents a study of the spontaneous pre-sentential negations of ten English-speaking children between the ages of 1;6 and 3;4 which supports the hypothesis that child English nonanaphoric pre-sentential negation is a form of metalinguistic exclamatory sentence negation. A detailed discourse analysis reveals that children's pre-sentential negatives like No Nathaniel a king (i) are characteristically echoic, and (ii) typically express objection and rectification, two characteristic functions of exclamatory negation in adult discourse, e.g. Don't say 'Nathaniel's a king'! A comparison of children's pre-sentential negations with their internal predicate negations using not and don't reveals that the two negative constructions are formally and functionally distinct. I argue that children's nonanaphoric pre-sentential negatives constitute an independent, well-formed class of discourse negation. They are not 'primitive' constructions derived from the miscategorization of emphatic no in adult speech or children's 'inventions'. Nor are they an early derivational variant of internal sentence negation. Rather, these negatives reflect young children's competence in using grammatical negative constructions appropriately in discourse.
  • Drozdova, P., Van Hout, R., & Scharenborg, O. (2017). L2 voice recognition: The role of speaker-, listener-, and stimulus-related factors. The Journal of the Acoustical Society of America, 142(5), 3058-3068. doi:10.1121/1.5010169.

    Abstract

    Previous studies examined various factors influencing voice recognition and learning with mixed results. The present study investigates the separate and combined contribution of these various speaker-, stimulus-, and listener-related factors to voice recognition. Dutch listeners, with arguably incomplete phonological and lexical knowledge in the target language, English, learned to recognize the voice of four native English speakers, speaking in English, during four-day training. Training was successful and listeners' accuracy was shown to be influenced by the acoustic characteristics of speakers and the sound composition of the words used in the training, but not by lexical frequency of the words, nor the lexical knowledge of the listeners or their phonological aptitude. Although not conclusive, listeners with a lower working memory capacity seemed to be slower in learning voices than listeners with a higher working memory capacity. The results reveal that speaker-related, listener-related, and stimulus-related factors accumulate in voice recognition, while lexical information turns out not to play a role in successful voice learning and recognition. This implies that voice recognition operates at the prelexical processing level.
  • Drude, S., Awete, W., & Aweti, A. (2019). A ortografia da língua Awetí. LIAMES: Línguas Indígenas Americanas, 19: e019014. doi:10.20396/liames.v19i0.8655746.

    Abstract

    This paper describes and motivates the orthography of the Awetí language (Tupí, Alto Xingu/MT), based on an analysis of the phonological and grammatical structure of Awetí. The orthography is the result of a long collaborative effort between the three authors, begun in 1998. It does not only define an alphabet (the representation of the vowels and consonants of the language), but also addresses internal variation, resyllabification, lenition, palatalization and other (morpho-)phonological processes. Both the written representation of the glottal stop and the orthographic consequences of nasal harmony received special attention. Although lexical stress is not marked orthographically in Awetí, the great majority of affixes and particles are discussed with regard to stress and its interaction with adjacent morphemes, at the same time determining the orthographic words. Finally, an alphabetical order was established in which digraphs are treated as sequences of letters, while the glottal stop ⟨ʼ⟩ is ignored, making Awetí easier to learn. The orthography as described here has been used for roughly ten years in the school for literacy teaching in Awetí, with good results. We believe that several of the arguments presented here can be productively transferred to other languages with similar phenomena (the glottal stop as a consonant, nasal harmony, morphophonological assimilation, etc.).
  • Drude, S. (2003). Advanced glossing: A language documentation format and its implementation with Shoebox. In Proceedings of the 2002 International Conference on Language Resources and Evaluation (LREC 2002). Paris: ELRA.

    Abstract

    This paper presents Advanced Glossing, a proposal for a general glossing format designed for language documentation, and a specific setup for the Shoebox program that implements Advanced Glossing to a large extent. Advanced Glossing (AG) goes beyond the traditional Interlinear Morphemic Translation, keeping syntactic and morphological information apart from each other in separate glossing tables. AG provides specific lines for different kinds of annotation (phonetic, phonological, orthographical, prosodic, categorial, structural, relational, and semantic), and it allows for gradual and successive, incomplete, and partial filling in case some information is irrelevant, unknown or uncertain. The implementation of AG in Shoebox sets up several databases. Each documented text is represented as a file of syntactic glossings. The morphological glossings are kept in a separate database. As an additional feature, interaction with lexical databases is possible. The implementation makes use of the interlinearizing automatism provided by Shoebox, thus obtaining the table format for the alignment of lines in cells, and for semi-automatic filling-in of information in glossing tables which has been extracted from databases.
  • Drude, S. (2003). Digitizing and annotating texts and field recordings in the Awetí project. In Proceedings of the EMELD Language Digitization Project Conference 2003. Workshop on Digitizing and Annotating Text and Field Recordings, LSA Institute, Michigan State University, July 11th-13th.

    Abstract

    Given that several initiatives worldwide currently explore the new field of documentation of endangered languages, the E-MELD project proposes to survey and unite procedures, techniques and results in order to achieve its main goal, ''the formulation and promulgation of best practice in linguistic markup of texts and lexicons''. In this context, this year's workshop deals with the processing of recorded texts. I assume the most valuable contribution I could make to the workshop is to show the procedures and methods used in the Awetí Language Documentation Project. The procedures applied in the Awetí Project are not necessarily representative of all the projects in the DOBES program, and they may very well fall short in several respects of being best practice, but I hope they might provide a good and concrete starting point for comparison, criticism and further discussion. The procedures to be presented include: taping with digital devices; digitizing (preliminarily in the field, later definitively by the TIDEL-team at the Max Planck Institute in Nijmegen); segmenting and transcribing, using the Transcriber computer program; translating (on paper, or while transcribing); adding more specific annotation, using the Shoebox program; and converting the annotation to the ELAN format developed by the TIDEL-team, and doing annotation with ELAN. Focus will be on the different types of annotation. Especially, I will present, justify and discuss Advanced Glossing, a text annotation format developed by H.-H. Lieb and myself, designed for language documentation. It will be shown how Advanced Glossing can be applied using the Shoebox program. The Shoebox setup used in the Awetí Project will be shown in greater detail, including lexical databases and semi-automatic interaction between different database types (jumping, interlinearization). (Freie Universität Berlin and Museu Paraense Emílio Goeldi, with funding from the Volkswagen Foundation.)
  • Duffield, N., & Matsuo, A. (2003). Factoring out the parallelism effect in ellipsis: An interactional approach? In J. Cihlar, A. Franklin, D. Keizer, & I. Kimbara (Eds.), Proceedings of the 39th Annual Meeting of the Chicago Linguistic Society (CLS) (pp. 591-603). Chicago: Chicago Linguistic Society.

    Abstract

    Traditionally, there have been three standard assumptions made about the Parallelism Effect on VP-ellipsis, namely that the effect is categorical, that it applies asymmetrically and that it is uniquely due to syntactic factors. Based on the results of a series of experiments involving online and offline tasks, it will be argued that the Parallelism Effect is instead noncategorical and interactional. The factors investigated include construction type, conceptual and morpho-syntactic recoverability, finiteness and anaphor type (to test VP-anaphora). The results show that parallelism is gradient rather than categorical, affects both VP-ellipsis and anaphora, and is influenced by both structural and non-structural factors.
  • Dunn, M. (2003). Pioneers of Island Melanesia project. Oceania Newsletter, 30/31, 1-3.
  • Dunn, M., Levinson, S. C., Lindström, E., Reesink, G., & Terrill, A. (2003). Island Melanesia elicitation materials. Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.885547.

    Abstract

    The Island Melanesia project was initiated to collect data on the little-known Papuan languages of Island Melanesia, and to explore the origins of and relationships between these languages. The project materials from the 2003 field season focus on language related to cultural domains (e.g., material culture) and on targeted grammatical description. Five tasks are included: Proto-Oceanic lexicon, Grammatical questionnaire and lexicon, Kinship questionnaire, Domains of likely pre-Austronesian terminology, and Botanical collection questionnaire.
  • Edlinger, G., Bastiaansen, M. C. M., Brunia, C., Neuper, C., & Pfurtscheller, G. (1999). Cortical oscillatory activity assessed by combined EEG and MEG recordings and high resolution ERD methods. Biomedizinische Technik, 44(2), 131-134.
  • Edmiston, P., Perlman, M., & Lupyan, G. (2017). Creating words from iterated vocal imitation. In G. Gunzelman, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017) (pp. 331-336). Austin, TX: Cognitive Science Society.

    Abstract

    We report the results of a large-scale (N=1571) experiment to investigate whether spoken words can emerge from the process of repeated imitation. Participants played a version of the children’s game “Telephone”. The first generation was asked to imitate recognizable environmental sounds (e.g., glass breaking, water splashing); subsequent generations imitated the imitators for a total of 8 generations. We then examined whether the vocal imitations became more stable and word-like, retained a resemblance to the original sound, and became more suitable as learned category labels. The results showed (1) the imitations became progressively more word-like, (2) even after 8 generations, they could be matched above chance to the environmental sound that motivated them, and (3) imitations from later generations were more effective as learned category labels. These results show how repeated imitation can create progressively more word-like forms while retaining a semblance of iconicity.
  • Ehrich, V., & Levelt, W. J. M. (Eds.). (1982). Max-Planck-Institute for Psycholinguistics: Annual Report Nr.3 1982. Nijmegen: MPI for Psycholinguistics.
  • Eibl-Eibesfeldt, I., & Senft, G. (1991). Trobriander (Papua-Neuguinea, Trobriand-Inseln, Kaile'una) Tänze zur Einleitung des Erntefeier-Rituals. Film E 3129. Trobriander (Papua-Neuguinea, Trobriand-Inseln, Kiriwina); Ausschnitte aus einem Erntefesttanz. Film E 3130. Publikationen zu wissenschaftlichen Filmen. Sektion Ethnologie, 17, 1-17.
  • Eijk, L., Ernestus, M., & Schriefers, H. (2019). Alignment of pitch and articulation rate. In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 2690-2694). Canberra, Australia: Australasian Speech Science and Technology Association Inc.

    Abstract

    Previous studies have shown that speakers align their speech to each other at multiple linguistic levels. This study investigates whether alignment is mostly the result of priming from the immediately preceding speech materials, focussing on pitch and articulation rate (AR). Native Dutch speakers completed sentences, first by themselves (pre-test), then in alternation with Confederate 1 (Round 1), with Confederate 2 (Round 2), with Confederate 1 again (Round 3), and lastly by themselves again (post-test). Results indicate that participants aligned to the confederates and that this alignment lasted during the post-test. The confederates’ directly preceding sentences were not good predictors for the participants’ pitch and AR. Overall, the results indicate that alignment is more of a global effect than a local priming effect.
  • Eisenbeiss, S., McGregor, B., & Schmidt, C. M. (1999). Story book stimulus for the elicitation of external possessor constructions and dative constructions ('the circle of dirt'). In D. Wilkins (Ed.), Manual for the 1999 Field Season (pp. 140-144). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3002750.

    Abstract

    How involved in an event is a person that possesses one of the event participants? Some languages can treat such “external possessors” as very closely involved, even marking them on the verb along with core roles such as subject and object. Other languages only allow possessors to be expressed as non-core participants. This task explores possibilities for the encoding of possessors and other related roles such as beneficiaries. The materials consist of a sequence of thirty drawings designed to elicit target construction types.

    Additional information

    1999_Story_book_booklet.pdf
  • Eising, E., Carrion Castillo, A., Vino, A., Strand, E. A., Jakielski, K. J., Scerri, T. S., Hildebrand, M. S., Webster, R., Ma, A., Mazoyer, B., Francks, C., Bahlo, M., Scheffer, I. E., Morgan, A. T., Shriberg, L. D., & Fisher, S. E. (2019). A set of regulatory genes co-expressed in embryonic human brain is implicated in disrupted speech development. Molecular Psychiatry, 24, 1065-1078. doi:10.1038/s41380-018-0020-x.

    Abstract

    Genetic investigations of people with impaired development of spoken language provide windows into key aspects of human biology. Over 15 years after FOXP2 was identified, most speech and language impairments remain unexplained at the molecular level. We sequenced whole genomes of nineteen unrelated individuals diagnosed with childhood apraxia of speech, a rare disorder enriched for causative mutations of large effect. Where DNA was available from unaffected parents, we discovered de novo mutations, implicating genes, including CHD3, SETD1A and WDR5. In other probands, we identified novel loss-of-function variants affecting KAT6A, SETBP1, ZFHX4, TNRC6B and MKL2, regulatory genes with links to neurodevelopment. Several of the new candidates interact with each other or with known speech-related genes. Moreover, they show significant clustering within a single co-expression module of genes highly expressed during early human brain development. This study highlights gene regulatory pathways in the developing brain that may contribute to acquisition of proficient speech.

    Additional information

    Eising_etal_2018sup.pdf
  • Eising, E., Shyti, R., 'T Hoen, P. A. C., Vijfhuizen, L. S., Huisman, S. M. H., Broos, L. A. M., Mahfouz, A., Reinders, M. J. T., Ferrari, M. D., Tolner, E. A., De Vries, B., & Van den Maagdenberg, A. M. J. M. (2017). Cortical spreading depression causes unique dysregulation of inflammatory pathways in a transgenic mouse model of migraine. Molecular Neurobiology, 54(4), 2986-2996. doi:10.1007/s12035-015-9681-5.

    Abstract

    Familial hemiplegic migraine type 1 (FHM1) is a rare monogenic subtype of migraine with aura caused by mutations in CACNA1A that encodes the α1A subunit of voltage-gated CaV2.1 calcium channels. Transgenic knock-in mice that carry the human FHM1 R192Q missense mutation (‘FHM1 R192Q mice’) exhibit an increased susceptibility to cortical spreading depression (CSD), the mechanism underlying migraine aura. Here, we analysed gene expression profiles from isolated cortical tissue of FHM1 R192Q mice 24 h after experimentally induced CSD in order to identify molecular pathways affected by CSD. Gene expression profiles were generated using deep serial analysis of gene expression sequencing. Our data reveal a signature of inflammatory signalling upon CSD in the cortex of both mutant and wild-type mice. However, only in the brains of FHM1 R192Q mice specific genes are up-regulated in response to CSD that are implicated in interferon-related inflammatory signalling. Our findings show that CSD modulates inflammatory processes in both wild-type and mutant brains, but that an additional unique inflammatory signature becomes expressed after CSD in a relevant mouse model of migraine.
  • Eising, E., Pelzer, N., Vijfhuizen, L. S., De Vries, B., Ferrari, M. D., 'T Hoen, P. A. C., Terwindt, G. M., & Van den Maagdenberg, A. M. J. M. (2017). Identifying a gene expression signature of cluster headache in blood. Scientific Reports, 7: 40218. doi:10.1038/srep40218.

    Abstract

    Cluster headache is a relatively rare headache disorder, typically characterized by multiple daily, short-lasting attacks of excruciating, unilateral (peri-)orbital or temporal pain associated with autonomic symptoms and restlessness. To better understand the pathophysiology of cluster headache, we used RNA sequencing to identify differentially expressed genes and pathways in whole blood of patients with episodic (n = 19) or chronic (n = 20) cluster headache in comparison with headache-free controls (n = 20). Gene expression data were analysed by gene and by module of co-expressed genes with particular attention to previously implicated disease pathways including hypocretin dysregulation. Only moderate gene expression differences were identified and no associations were found with previously reported pathogenic mechanisms. At the level of functional gene sets, associations were observed for genes involved in several brain-related mechanisms such as GABA receptor function and voltage-gated channels. In addition, genes and modules of co-expressed genes showed a role for intracellular signalling cascades, mitochondria and inflammation. Although larger study samples may be required to identify the full range of involved pathways, these results indicate a role for mitochondria, intracellular signalling and inflammation in cluster headache.

    Additional information

    Eising_etal_2017sup.pdf
  • Enfield, N. J. (2003). Producing and editing diagrams using co-speech gesture: Spatializing non-spatial relations in explanations of kinship in Laos. Journal of Linguistic Anthropology, 13(1), 7-50. doi:10.1525/jlin.2003.13.1.7.

    Abstract

    This article presents a description of two sequences of talk by urban speakers of Lao (a southwestern Tai language spoken in Laos) in which co-speech gesture plays a central role in explanations of kinship relations and terminology. The speakers spontaneously use hand gestures and gaze to spatially diagram relationships that have no inherent spatial structure. The descriptive sections of the article are prefaced by a discussion of the semiotic complexity of illustrative gestures and gesture diagrams. Gestured signals feature iconic, indexical, and symbolic components, usually in combination, as well as using motion and three-dimensional space to convey meaning. Such diagrams show temporal persistence and structural integrity despite having been projected in midair by evanescent signals (i.e., hand movements and directed gaze). Speakers sometimes need or want to revise these spatial representations without destroying their structural integrity. The need to "edit" gesture diagrams involves such techniques as hold-and-drag, hold-and-work-with-free-hand, reassignment-of-old-chunk-to-new-chunk, and move-body-into-new-space.
  • Enfield, N. J. (2003). The definition of WHAT-d'you-call-it: Semantics and pragmatics of 'recognitional deixis'. Journal of Pragmatics, 35(1), 101-117. doi:10.1016/S0378-2166(02)00066-8.

    Abstract

    Words such as what-d'you-call-it raise issues at the heart of the semantics/pragmatics interface. Expressions of this kind are conventionalised and have meanings which, while very general, are explicitly oriented to the interactional nature of the speech context, drawing attention to a speaker's assumption that the listener can figure out what the speaker is referring to. The details of such meanings can account for functional contrast among similar expressions, in a single language as well as cross-linguistically. The English expressions what-d'you-call-it and you-know-what are compared, along with a comparable Lao expression meaning, roughly, ‘that thing’. Proposed definitions of the meanings of these expressions account for their different patterns of use. These definitions include reference to the speech act participants, a point which supports the view that what-d'you-call-it words can be considered deictic. Issues arising from the descriptive section of this paper include the question of how such terms are derived, as well as their degree of conventionality.
  • Enfield, N. J. (2003). “Fish traps” task. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 31). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877616.

    Abstract

    This task is designed to elicit virtual 3D ‘models’ created in gesture space using iconic and other representational gestures. This task has been piloted with Lao speakers, where two speakers were asked to explain the meaning of terms referring to different kinds of fish trap mechanisms. The task elicited complex performances involving a range of iconic gestures, and with especially interesting use of (a) the ‘model/diagram’ in gesture space as a virtual object, (b) the non-dominant hand as a prosodic/semiotic anchor, (c) a range of different techniques (indexical and iconic) for evoking meaning with the hand, and (d) the use of nearby objects and parts of the body as semiotic ‘props’.
  • Enfield, N. J. (2003). Demonstratives in space and interaction: Data from Lao speakers and implications for semantic analysis. Language, 79(1), 82-117.

    Abstract

    The semantics of simple (i.e. two-term) systems of demonstratives have in general hitherto been treated as inherently spatial and as marking a symmetrical opposition of distance (‘proximal’ versus ‘distal’), assuming the speaker as a point of origin. More complex systems are known to add further distinctions, such as visibility or elevation, but are assumed to build on basic distinctions of distance. Despite their inherently context-dependent nature, little previous work has based the analysis of demonstratives on evidence of their use in real interactional situations. In this article, video recordings of spontaneous interaction among speakers of Lao (Southwestern Tai, Laos) are examined in an analysis of the two Lao demonstrative determiners nii4 and nan4. A hypothesis of minimal encoded semantics is tested against rich contextual information, and the hypothesis is shown to be consistent with the data. Encoded conventional meanings must be kept distinct from contingent contextual information and context-dependent pragmatic implicatures. Based on examples of the two Lao demonstrative determiners in exophoric uses, the following claims are made. The term nii4 is a semantically general demonstrative, lacking specification of ANY spatial property (such as location or distance). The term nan4 specifies that the referent is ‘not here’ (encoding ‘location’ but NOT ‘distance’). Anchoring the semantic specification in a deictic primitive ‘here’ allows a strictly discrete intensional distinction to be mapped onto an extensional range of endless elasticity. A common ‘proximal’ spatial interpretation for the semantically more general term nii4 arises from the paradigmatic opposition of the two demonstrative determiners. This kind of analysis suggests a reappraisal of our general understanding of the semantics of demonstrative systems universally. To investigate the question in sufficient detail, however, rich contextual data (preferably collected on video) is necessary.
  • Enfield, N. J. (2003). Linguistic epidemiology: Semantics and grammar of language contact in mainland Southeast Asia. London: Routledge Curzon.
  • Enfield, N. J. (Ed.). (2003). Field research manual 2003, part I: Multimodal interaction, space, event representation. Nijmegen: Max Planck Institute for Psycholinguistics.
  • Enfield, N. J., De Ruiter, J. P., Levinson, S. C., & Stivers, T. (2003). Multimodal interaction in your field site: A preliminary investigation. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 10-16). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877638.

    Abstract

    Research on video- and audio-recordings of spontaneous naturally-occurring conversation in English has shown that conversation is a rule-guided, practice-oriented domain that can be investigated for its underlying mechanics or structure. Systematic study could yield something like a grammar for conversation. The goal of this task is to acquire a corpus of video-data, for investigating the underlying structure(s) of interaction cross-linguistically and cross-culturally.
  • Enfield, N. J. (2017). Language in the Mainland Southeast Asia Area. In R. Hickey (Ed.), The Cambridge Handbook of Areal Linguistics (pp. 677-702). Cambridge: Cambridge University Press. doi:10.1017/9781107279872.026.
  • Enfield, N. J. (1999). Lao as a national language. In G. Evans (Ed.), Laos: Culture and society (pp. 258-290). Chiang Mai: Silkworm Books.
  • Enfield, N. J., & Levinson, S. C. (2003). Interview on kinship. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 64-65). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877629.

    Abstract

    We want to know how people think about their field of kin, on the supposition that it is quasi-spatial. To get some insights here, we need to video a discussion about kinship reckoning, the kinship system, marriage rules and so on, with a view to looking at both the linguistic expressions involved, and the gestures people use to indicate kinship groups and relations. Unlike the task in the 2001 manual, this task is a direct interview method.
  • Enfield, N. J. (2003). Introduction. In N. J. Enfield, Linguistic epidemiology: Semantics and grammar of language contact in mainland Southeast Asia (pp. 2-44). London: Routledge Curzon.
  • Enfield, N. J. (1999). On the indispensability of semantics: Defining the ‘vacuous’. Rask: internationalt tidsskrift for sprog og kommunikation, 9/10, 285-304.
  • Enfield, N. J., & De Ruiter, J. P. (2003). The diff-task: A symmetrical dyadic multimodal interaction task. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 17-21). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877635.

    Abstract

    This task is a complement to the questionnaire ‘Multimodal interaction in your field site: a preliminary investigation’. The objective of the task is to obtain high quality video data on structured and symmetrical dyadic multimodal interaction. The features of interaction we are interested in include turn organization in speech and nonverbal behavior, eye-gaze behavior, use of composite signals (i.e. communicative units of speech-combined-with-gesture), and linguistic and other resources for ‘navigating’ interaction (e.g. words like okay, now, well, and um).

    Additional information

    2003_1_The_diff_task_stimuli.zip
  • Enfield, N. J., Stivers, T., Brown, P., Englert, C., Harjunpää, K., Hayashi, M., Heinemann, T., Hoymann, G., Keisanen, T., Rauniomaa, M., Raymond, C. W., Rossano, F., Yoon, K.-E., Zwitserlood, I., & Levinson, S. C. (2019). Polar answers. Journal of Linguistics, 55(2), 277-304. doi:10.1017/S0022226718000336.

    Abstract

    How do people answer polar questions? In this fourteen-language study of answers to questions in conversation, we compare the two main strategies: first, interjection-type answers such as uh-huh (or equivalents yes, mm, head nods, etc.), and second, repetition-type answers that repeat some or all of the question. We find that all languages offer both options, but that there is a strong asymmetry in their frequency of use, with a global preference for interjection-type answers. We propose that this preference is motivated by the fact that the two options are not equivalent in meaning. We argue that interjection-type answers are intrinsically suited to be the pragmatically unmarked, and thus more frequent, strategy for confirming polar questions, regardless of the language spoken. Our analysis is based on the semantic-pragmatic profile of the interjection-type and repetition-type answer strategies, in the context of certain asymmetries inherent to the dialogic speech act structure of question–answer sequences, including sequential agency and thematic agency. This allows us to see possible explanations for the outlier distributions found in ǂĀkhoe Haiǁom and Tzeltal.
  • Enfield, N. J. (2003). Preface and priorities. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 3). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Erard, M. (2019). Language aptitude: Insights from hyperpolyglots. In Z. Wen, P. Skehan, A. Biedroń, S. Li, & R. L. Sparks (Eds.), Language aptitude: Advancing theory, testing, research and practice (pp. 153-167). Abingdon, UK: Taylor & Francis.

    Abstract

    Over the decades, high-intensity language learners scattered over the globe referred to as “hyperpolyglots” have undertaken a natural experiment into the limits of learning and acquiring proficiencies in multiple languages. This chapter details several ways in which hyperpolyglots are relevant to research on aptitude. First, historical hyperpolyglots Cardinal Giuseppe Mezzofanti, Emil Krebs, Elihu Burritt, and Lomb Kató are described in terms of how they viewed their own exceptional outcomes. Next, I draw on results from an online survey with 390 individuals to explore how contemporary hyperpolyglots consider the explanatory value of aptitude. Third, the challenges involved in studying the genetic basis of hyperpolyglottism (and by extension of language aptitude) are discussed. This mosaic of data is meant to inform the direction of future aptitude research that takes hyperpolyglots, one type of exceptional language learner and user, into account.
  • Erard, M. (2017). Write yourself invisible. New Scientist, 236(3153), 36-39.
  • Erkelens, M. (2003). The semantic organization of "cut" and "break" in Dutch: A developmental study. Master Thesis, Free University Amsterdam, Amsterdam.
  • Ernestus, M., & Baayen, R. H. (2003). Predicting the unpredictable: The phonological interpretation of neutralized segments in Dutch. Language, 79(1), 5-38.

    Abstract

    Among the most fascinating data for phonology are those showing how speakers incorporate new words and foreign words into their language system, since these data provide cues to the actual principles underlying language. In this article, we address how speakers deal with neutralized obstruents in new words. We formulate four hypotheses and test them on the basis of Dutch word-final obstruents, which are neutral for [voice]. Our experiments show that speakers predict the characteristics of neutralized segments on the basis of phonologically similar morphemes stored in the mental lexicon. This effect of the similar morphemes can be modeled in several ways. We compare five models, among them STOCHASTIC OPTIMALITY THEORY and ANALOGICAL MODELING OF LANGUAGE; all perform approximately equally well, but they differ in their complexity, with analogical modeling of language providing the most economical explanation.
  • Ernestus, M. (2003). The role of phonology and phonetics in Dutch voice assimilation. In J. v. d. Weijer, V. J. v. Heuven, & H. v. d. Hulst (Eds.), The phonological spectrum Volume 1: Segmental structure (pp. 119-144). Amsterdam: John Benjamins.
  • Ernestus, M., Dikmans, M., & Giezenaar, G. (2017). Advanced second language learners experience difficulties processing reduced word pronunciation variants. Dutch Journal of Applied Linguistics, 6(1), 1-20. doi:10.1075/dujal.6.1.01ern.

    Abstract

    Words are often pronounced with fewer segments in casual conversations than in formal speech. Previous research has shown that foreign language learners and beginning second language learners experience problems processing reduced speech. We examined whether this also holds for advanced second language learners. We designed a dictation task in Dutch consisting of sentences spliced from casual conversations and an unreduced counterpart of this task, with the same sentences carefully articulated by the same speaker. Advanced second language learners of Dutch produced substantially more transcription errors for the reduced than for the unreduced sentences. These errors made the sentences incomprehensible or led to non-intended meanings. The learners often did not rely on the semantic and syntactic information in the sentence or on the subsegmental cues to overcome the reductions. Hence, advanced second language learners also appear to suffer from the reduced pronunciation variants of words that are abundant in everyday conversations.
  • Ernestus, M., Kouwenhoven, H., & Van Mulken, M. (2017). The direct and indirect effects of the phonotactic constraints in the listener's native language on the comprehension of reduced and unreduced word pronunciation variants in a foreign language. Journal of Phonetics, 62, 50-64. doi:10.1016/j.wocn.2017.02.003.

    Abstract

    This study investigates how the comprehension of casual speech in foreign languages is affected by the phonotactic constraints in the listener’s native language. Non-native listeners of English with different native languages heard short English phrases produced by native speakers of English or Spanish and they indicated whether these phrases included can or can’t. Native Mandarin listeners especially tended to interpret can’t as can. We interpret this result as a direct effect of the ban on word-final /nt/ in Mandarin. Both the native Mandarin and the native Spanish listeners did not take full advantage of the subsegmental information in the speech signal cueing reduced can’t. This finding is probably an indirect effect of the phonotactic constraints in their native languages: these listeners have difficulties interpreting the subsegmental cues because these cues do not occur or have different functions in their native languages. Dutch resembles English in the phonotactic constraints relevant to the comprehension of can’t, and native Dutch listeners showed similar patterns in their comprehension of native and non-native English to native English listeners. This result supports our conclusion that the major patterns in the comprehension results are driven by the phonotactic constraints in the listeners’ native languages.
  • Eryilmaz, K., & Little, H. (2017). Using Leap Motion to investigate the emergence of structure in speech and language. Behavior Research Methods, 49(5), 1748-1768. doi:10.3758/s13428-016-0818-x.

    Abstract

    In evolutionary linguistics, experiments using artificial signal spaces are being used to investigate the emergence of speech structure. These signal spaces need to be continuous, non-discretised spaces from which discrete units and patterns can emerge. They need to be dissimilar from, but comparable with, the vocal tract, in order to minimise interference from pre-existing linguistic knowledge, while informing us about language. This is a hard balance to strike. This article outlines a new approach which uses the Leap Motion, an infra-red controller which can convert manual movement in 3D space into sound. The signal space using this approach is more flexible than signal spaces in previous attempts. Further, output data using this approach is simpler to arrange and analyse. The experimental interface was built using free and mostly open-source libraries in Python. We provide our source code for other researchers as open source.
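    The sketch below is only our own illustration of the general idea described in this abstract, not the authors' released interface: it maps a hypothetical stream of 3D hand positions to continuous sound parameters, much as a Leap-Motion-style controller might. The function names, workspace dimensions, and mapping choices are all assumptions.

```python
# Illustrative sketch (not the authors' code): map 3D hand positions to sound parameters.
# The workspace box and the pitch range are assumed values for demonstration only.

def position_to_sound(x, y, z,
                      f_min=110.0, f_max=880.0,
                      workspace=(-0.2, 0.2, 0.0, 0.4, -0.2, 0.2)):
    """Map a hand position (metres) inside a box-shaped workspace to a
    fundamental frequency (from height y) and an amplitude (from depth z)."""
    x0, x1, y0, y1, z0, z1 = workspace
    # Normalise the relevant coordinates to [0, 1], clamping at the workspace edges.
    ny = min(max((y - y0) / (y1 - y0), 0.0), 1.0)
    nz = min(max((z - z0) / (z1 - z0), 0.0), 1.0)
    # A log-scaled pitch axis is one plausible design choice; linear scaling would also work.
    f0 = f_min * (f_max / f_min) ** ny
    amplitude = nz
    return f0, amplitude

if __name__ == "__main__":
    # A hypothetical upward sweep of the hand: pitch rises while amplitude stays constant.
    for step in range(5):
        y = step * 0.1
        f0, amp = position_to_sound(0.0, y, 0.0)
        print(f"y={y:.1f} m -> f0={f0:6.1f} Hz, amplitude={amp:.2f}")
```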
  • Essegbey, J. (1999). Inherent complement verbs revisited: Towards an understanding of argument structure in Ewe. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.2057668.
  • Esteve-Gibert, N., Prieto, P., & Liszkowski, U. (2017). Twelve-month-olds understand social intentions based on prosody and gesture shape. Infancy, 22, 108-129. doi:10.1111/infa.12146.

    Abstract

    Infants infer social and pragmatic intentions underlying attention-directing gestures, but the basis on which infants make these inferences is not well understood. Previous studies suggest that infants rely on information from preceding shared action contexts and joint perceptual scenes. Here, we tested whether 12-month-olds use information from act-accompanying cues, in particular prosody and hand shape, to guide their pragmatic understanding. In Experiment 1, caregivers directed infants’ attention to an object to request it, share interest in it, or inform them about a hidden aspect. Caregivers used distinct prosodic and gestural patterns to express each pragmatic intention. Experiment 2 was identical except that experimenters provided identical lexical information across conditions and used three sets of trained prosodic and gestural patterns. In all conditions, the joint perceptual scenes and preceding shared action contexts were identical. In both experiments, infants reacted appropriately to the adults’ intentions by attending to the object mostly in the sharing interest condition, offering the object mostly in the imperative condition, and searching for the referent mostly in the informing condition. Infants’ ability to comprehend pragmatic intentions based on prosody and gesture shape expands infants’ communicative understanding from common activities to novel situations for which shared background knowledge is missing.
  • Fairs, A. (2019). Linguistic dual-tasking: Understanding temporal overlap between production and comprehension. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Favier, S., Wright, A., Meyer, A. S., & Huettig, F. (2019). Proficiency modulates between- but not within-language structural priming. Journal of Cultural Cognitive Science, 3(suppl. 1), 105-124. doi:10.1007/s41809-019-00029-1.

    Abstract

    The oldest of the Celtic language family, Irish differs considerably from English, notably with respect to word order and case marking. In spite of differences in surface constituent structure, less restricted accounts of bilingual shared syntax predict that processing datives and passives in Irish should prime the production of their English equivalents. Furthermore, this cross-linguistic influence should be sensitive to L2 proficiency, if shared structural representations are assumed to develop over time. In Experiment 1, we investigated cross-linguistic structural priming from Irish to English in 47 bilingual adolescents who are educated through Irish. Testing took place in a classroom setting, using written primes and written sentence generation. We found that priming for prepositional-object (PO) datives was predicted by self-rated Irish (L2) proficiency, in line with previous studies. In Experiment 2, we presented translations of the materials to an English-educated control group (n=54). We found a within-language priming effect for PO datives, which was not modulated by English (L1) proficiency. Our findings are compatible with current theories of bilingual language processing and L2 syntactic acquisition.
  • Fear, B. D., Cutler, A., & Butterfield, S. (1995). The strong/weak syllable distinction in English. Journal of the Acoustical Society of America, 97, 1893-1904. doi:10.1121/1.412063.

    Abstract

    Strong and weak syllables in English can be distinguished on the basis of vowel quality, of stress, or of both factors. Critical for deciding between these factors are syllables containing unstressed unreduced vowels, such as the first syllable of automata. In this study 12 speakers produced sentences containing matched sets of words with initial vowels ranging from stressed to reduced, at normal and at fast speech rates. Measurements of the duration, intensity, F0, and spectral characteristics of the word-initial vowels showed that unstressed unreduced vowels differed significantly from both stressed and reduced vowels. This result held true across speaker sex and dialect. The vowels produced by one speaker were then cross-spliced across the words within each set, and the resulting words' acceptability was rated by listeners. In general, cross-spliced words were only rated significantly less acceptable than unspliced words when reduced vowels interchanged with any other vowel. Correlations between rated acceptability and acoustic characteristics of the cross-spliced words demonstrated that listeners were attending to duration, intensity, and spectral characteristics. Together these results suggest that unstressed unreduced vowels in English pattern differently from both stressed and reduced vowels, so that no acoustic support for a binary categorical distinction exists; nevertheless, listeners make such a distinction, grouping unstressed unreduced vowels by preference with stressed vowels.
  • Felker, E. R., Ernestus, M., & Broersma, M. (2019). Evaluating dictation task measures for the study of speech perception. In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 383-387). Canberra, Australia: Australasian Speech Science and Technology Association Inc.

    Abstract

    This paper shows that the dictation task, a well-known testing instrument in language education, has untapped potential as a research tool for studying speech perception. We describe how transcriptions can be scored on measures of lexical, orthographic, phonological, and semantic similarity to target phrases to provide comprehensive information about accuracy at different processing levels. The former three measures are automatically extractable, increasing objectivity, and the middle two are gradient, providing finer-grained information than traditionally used. We evaluate the measures in an English dictation task featuring phonetically reduced continuous speech. Whereas the lexical and orthographic measures emphasize listeners’ word identification difficulties, the phonological measure demonstrates that listeners can often still recover phonological features, and the semantic measure captures their ability to get the gist of the utterances. Correlational analyses and a discussion of practical and theoretical considerations show that combining multiple measures improves the dictation task’s utility as a research tool.
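    As a purely illustrative sketch (not the authors' scoring pipeline), one way to obtain a gradient orthographic similarity score of the kind described above is a normalised Levenshtein distance between the transcription and the target phrase; a phonological analogue would apply the same computation to phone-level transcriptions. The example transcription and target are hypothetical.

```python
# Hedged illustration: a gradient orthographic similarity score in [0, 1],
# computed as 1 minus the normalised Levenshtein (edit) distance.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (insertions, deletions, substitutions)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def orthographic_similarity(transcription: str, target: str) -> float:
    """Return 1.0 for identical spelling, approaching 0.0 as the strings diverge."""
    t, g = transcription.lower().strip(), target.lower().strip()
    if not t and not g:
        return 1.0
    return 1.0 - levenshtein(t, g) / max(len(t), len(g))

if __name__ == "__main__":
    # Hypothetical example: a reduced pronunciation misheard by the listener.
    print(round(orthographic_similarity("he must of seen it", "he must have seen it"), 3))
```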
  • Felker, E. R., Ernestus, M., & Broersma, M. (2019). Lexically guided perceptual learning of a vowel shift in an interactive L2 listening context. In Proceedings of Interspeech 2019 (pp. 3123-3127). doi:10.21437/Interspeech.2019-1414.

    Abstract

    Lexically guided perceptual learning has traditionally been studied with ambiguous consonant sounds to which native listeners are exposed in a purely receptive listening context. To extend previous research, we investigate whether lexically guided learning applies to a vowel shift encountered by non-native listeners in an interactive dialogue. Dutch participants played a two-player game in English in either a control condition, which contained no evidence for a vowel shift, or a lexically constraining condition, in which onscreen lexical information required them to re-interpret their interlocutor’s /ɪ/ pronunciations as representing /ε/. A phonetic categorization pre-test and post-test were used to assess whether the game shifted listeners’ phonemic boundaries such that more of the /ε/-/ɪ/ continuum came to be perceived as /ε/. Both listener groups showed an overall post-test shift toward /ɪ/, suggesting that vowel perception may be sensitive to directional biases related to properties of the speaker’s vowel space. Importantly, listeners in the lexically constraining condition made relatively more post-test /ε/ responses than the control group, thereby exhibiting an effect of lexically guided adaptation. The results thus demonstrate that non-native listeners can adjust their phonemic boundaries on the basis of lexical information to accommodate a vowel shift learned in interactive conversation.
  • Felker, E. R., Klockmann, H. E., & De Jong, N. H. (2019). How conceptualizing influences fluency in first and second language speech production. Applied Psycholinguistics, 40(1), 111-136. doi:10.1017/S0142716418000474.

    Abstract

    When speaking in any language, speakers must conceptualize what they want to say before they can formulate and articulate their message. We present two experiments employing a novel experimental paradigm in which the formulating and articulating stages of speech production were kept identical across conditions of differing conceptualizing difficulty. We tracked the effect of difficulty in conceptualizing during the generation of speech (Experiment 1) and during the abandonment and regeneration of speech (Experiment 2) on speaking fluency by Dutch native speakers in their first (L1) and second (L2) language (English). The results showed that abandoning and especially regenerating a speech plan taxes the speaker, leading to disfluencies. For most fluency measures, the increases in disfluency were similar across L1 and L2. However, a significant interaction revealed that abandoning and regenerating a speech plan increases the time needed to solve conceptual difficulties while speaking in the L2 to a greater degree than in the L1. This finding supports theories in which cognitive resources for conceptualizing are shared with those used for later stages of speech planning. Furthermore, a practical implication for language assessment is that increasing the conceptual difficulty of speaking tasks should be considered with caution.
  • Felser, C., Roberts, L., Marinis, T., & Gross, R. (2003). The processing of ambiguous sentences by first and second language learners of English. Applied Psycholinguistics, 24(3), 453-489.

    Abstract

    This study investigates the way adult second language (L2) learners of English resolve relative clause attachment ambiguities in sentences such as The dean liked the secretary of the professor who was reading a letter. Two groups of advanced L2 learners of English with Greek or German as their first language participated in a set of off-line and on-line tasks. The results indicate that the L2 learners do not process ambiguous sentences of this type in the same way as adult native speakers of English do. Although the learners’ disambiguation preferences were influenced by lexical–semantic properties of the preposition linking the two potential antecedent noun phrases (of vs. with), there was no evidence that they applied any phrase structure–based ambiguity resolution strategies of the kind that have been claimed to influence sentence processing in monolingual adults. The L2 learners’ performance also differs markedly from the results obtained from 6- to 7-year-old monolingual English children in a parallel auditory study, in that the children’s attachment preferences were not affected by the type of preposition at all. We argue that children, monolingual adults, and adult L2 learners differ in the extent to which they are guided by phrase structure and lexical–semantic information during sentence processing.
  • Fields, E. C., Weber, K., Stillerman, B., Delaney-Busch, N., & Kuperberg, G. (2019). Functional MRI reveals evidence of a self-positivity bias in the medial prefrontal cortex during the comprehension of social vignettes. Social Cognitive and Affective Neuroscience, 14(6), 613-621. doi:10.1093/scan/nsz035.

    Abstract

    A large literature in social neuroscience has associated the medial prefrontal cortex (mPFC) with the processing of self-related information. However, only recently have social neuroscience studies begun to consider the large behavioral literature showing a strong self-positivity bias, and these studies have mostly focused on its correlates during self-related judgments and decision making. We carried out a functional MRI (fMRI) study to ask whether the mPFC would show effects of the self-positivity bias in a paradigm that probed participants’ self-concept without any requirement of explicit self-judgment. We presented social vignettes that were either self-relevant or non-self-relevant with a neutral, positive, or negative outcome described in the second sentence. In previous work using event-related potentials, this paradigm has shown evidence of a self-positivity bias that influences early stages of semantically processing incoming stimuli. In the present fMRI study, we found evidence for this bias within the mPFC: an interaction between self-relevance and valence, with only positive scenarios showing a self vs other effect within the mPFC. We suggest that the mPFC may play a role in maintaining a positively-biased self-concept and discuss the implications of these findings for the social neuroscience of the self and the role of the mPFC.

    Additional information

    Supplementary data
  • Filippi, P., Congdon, J. V., Hoang, J., Bowling, D. L., Reber, S. A., Pasukonis, A., Hoeschele, M., Ocklenburg, S., De Boer, B., Sturdy, C. B., Newen, A., & Güntürkün, O. (2017). Humans recognize emotional arousal in vocalizations across all classes of terrestrial vertebrates: Evidence for acoustic universals. Proceedings of the Royal Society B: Biological Sciences, 284: 20170990. doi:10.1098/rspb.2017.0990.

    Abstract

    Writing over a century ago, Darwin hypothesized that vocal expression of emotion dates back to our earliest terrestrial ancestors. If this hypothesis is true, we should expect to find cross-species acoustic universals in emotional vocalizations. Studies suggest that acoustic attributes of aroused vocalizations are shared across many mammalian species, and that humans can use these attributes to infer emotional content. But do these acoustic attributes extend to non-mammalian vertebrates? In this study, we asked human participants to judge the emotional content of vocalizations of nine vertebrate species representing three different biological classes—Amphibia, Reptilia (non-aves and aves) and Mammalia. We found that humans are able to identify higher levels of arousal in vocalizations across all species. This result was consistent across different language groups (English, German and Mandarin native speakers), suggesting that this ability is biologically rooted in humans. Our findings indicate that humans use multiple acoustic parameters to infer relative arousal in vocalizations for each species, but mainly rely on fundamental frequency and spectral centre of gravity to identify higher arousal vocalizations across species. These results suggest that fundamental mechanisms of vocal emotional expression are shared among vertebrates and could represent a homologous signalling system.
  • Filippi, P., Gogoleva, S. S., Volodina, E. V., Volodin, I. A., & De Boer, B. (2017). Humans identify negative (but not positive) arousal in silver fox vocalizations: Implications for the adaptive value of interspecific eavesdropping. Current Zoology, 63(4), 445-456. doi:10.1093/cz/zox035.

    Abstract

    The ability to identify emotional arousal in heterospecific vocalizations may facilitate behaviors that increase survival opportunities. Crucially, this ability may orient inter-species interactions, particularly between humans and other species. Research shows that humans identify emotional arousal in vocalizations across multiple species, such as cats, dogs, and piglets. However, no previous study has addressed humans’ ability to identify emotional arousal in silver foxes. Here, we adopted low- and high-arousal calls emitted by three strains of silver fox (Tame, Aggressive, and Unselected) in response to human approach. Tame and Aggressive foxes are genetically selected for friendly and attacking behaviors toward humans, respectively. Unselected foxes show aggressive and fearful behaviors toward humans. These three strains show similar levels of emotional arousal, but different levels of emotional valence in relation to humans. This emotional information is reflected in the acoustic features of the calls. Our data suggest that humans can identify high-arousal calls of Aggressive and Unselected foxes, but not of Tame foxes. Further analyses revealed that, although within each strain different acoustic parameters affect human accuracy in identifying high-arousal calls, spectral center of gravity, harmonic-to-noise ratio, and F0 best predict humans’ ability to discriminate high-arousal calls across all strains. Furthermore, we identified in spectral center of gravity and F0 the best predictors for humans’ absolute ratings of arousal in each call. Implications for research on the adaptive value of inter-specific eavesdropping are discussed.

    Additional information

    zox035_Supp.zip
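    Both Filippi et al. abstracts above single out fundamental frequency and spectral centre of gravity as the acoustic parameters most predictive of perceived arousal. As a hedged illustration of the latter measure (not the authors' analysis scripts), the spectral centre of gravity can be computed as the amplitude-weighted mean frequency of the magnitude spectrum; the synthetic test signal below is hypothetical.

```python
# Hedged sketch: spectral centre of gravity as the amplitude-weighted mean frequency.

import numpy as np

def spectral_centre_of_gravity(signal: np.ndarray, sample_rate: float) -> float:
    """Amplitude-weighted mean frequency of the magnitude spectrum, in Hz."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return float(np.sum(freqs * spectrum) / np.sum(spectrum))

if __name__ == "__main__":
    sr = 16000
    t = np.arange(0, 0.5, 1.0 / sr)
    # Two equal-amplitude tones at 300 Hz and 900 Hz: the centroid sits near 600 Hz.
    tone = np.sin(2 * np.pi * 300 * t) + np.sin(2 * np.pi * 900 * t)
    print(round(spectral_centre_of_gravity(tone, sr)))  # ~600
```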
  • Filippi, P., Ocklenburg, S., Bowling, D. L., Heege, L., Güntürkün, O., Newen, A., & de Boer, B. (2017). More than words (and faces): evidence for a Stroop effect of prosody in emotion word processing. Cognition & Emotion, 31(5), 879-891. doi:10.1080/02699931.2016.1177489.

    Abstract

    Humans typically combine linguistic and nonlinguistic information to comprehend emotions. We adopted an emotion identification Stroop task to investigate how different channels interact in emotion communication. In experiment 1, synonyms of “happy” and “sad” were spoken with happy and sad prosody. Participants had more difficulty ignoring prosody than ignoring verbal content. In experiment 2, synonyms of “happy” and “sad” were spoken with happy and sad prosody, while happy or sad faces were displayed. Accuracy was lower when two channels expressed an emotion that was incongruent with the channel participants had to focus on, compared with the cross-channel congruence condition. When participants were required to focus on verbal content, accuracy was significantly lower also when prosody was incongruent with verbal content and face. This suggests that prosody biases emotional verbal content processing, even when conflicting with verbal content and face simultaneously. Implications for multimodal communication and language evolution studies are discussed.
  • Filippi, P., Laaha, S., & Fitch, W. T. (2017). Utterance-final position and pitch marking aid word learning in school-age children. Royal Society Open Science, 4: 161035. doi:10.1098/rsos.161035.

    Abstract

    We investigated the effects of word order and prosody on word learning in school-age children. Third graders viewed photographs belonging to one of three semantic categories while hearing four-word nonsense utterances containing a target word. In the control condition, all words had the same pitch and, across trials, the position of the target word was varied systematically within each utterance. The only cue to word–meaning mapping was the co-occurrence of target words and referents. This cue was present in all conditions. In the Utterance-final condition, the target word always occurred in utterance-final position, and at the same fundamental frequency as all the other words of the utterance. In the Pitch peak condition, the position of the target word was varied systematically within each utterance across trials, and produced with pitch contrasts typical of infant-directed speech (IDS). In the Pitch peak + Utterance-final condition, the target word always occurred in utterance-final position, and was marked with a pitch contrast typical of IDS. Word learning occurred in all conditions except the control condition. Moreover, learning performance was significantly higher than that observed with simple co-occurrence (control condition) only for the Pitch peak + Utterance-final condition. We conclude that, for school-age children, the combination of words' utterance-final alignment and pitch enhancement boosts word learning.
  • Fisher, S. E., & Tilot, A. K. (2019). Bridging senses: Novel insights from synaesthesia. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 374: 20190022. doi:10.1098/rstb.2019.0022.
  • Fisher, S. E., & Tilot, A. K. (Eds.). (2019). Bridging senses: Novel insights from synaesthesia [Special Issue]. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 374.
  • Fisher, S. E., Stein, J. F., & Monaco, A. P. (1999). A genome-wide search strategy for identifying quantitative trait loci involved in reading and spelling disability (developmental dyslexia). European Child & Adolescent Psychiatry, 8(suppl. 3), S47-S51. doi:10.1007/PL00010694.

    Abstract

    Family and twin studies of developmental dyslexia have consistently shown that there is a significant heritable component for this disorder. However, any genetic basis for the trait is likely to be complex, involving reduced penetrance, phenocopy, heterogeneity and oligogenic inheritance. This complexity results in reduced power for traditional parametric linkage analysis, where specification of the correct genetic model is important. One strategy is to focus on large multigenerational pedigrees with severe phenotypes and/or apparent simple Mendelian inheritance, as has been successfully demonstrated for speech and language impairment. This approach is limited by the scarcity of such families. An alternative which has recently become feasible due to the development of high-throughput genotyping techniques is the analysis of large numbers of sib-pairs using allele-sharing methodology. This paper outlines our strategy for conducting a systematic genome-wide search for genes involved in dyslexia in a large number of affected sib-pair families from the UK. We use a series of psychometric tests to obtain different quantitative measures of reading deficit, which should correlate with different components of the dyslexia phenotype, such as phonological awareness and orthographic coding ability. This enables us to use QTL (quantitative trait locus) mapping as a powerful tool for localising genes which may contribute to reading and spelling disability.
  • Fisher, S. E., Marlow, A. J., Lamb, J., Maestrini, E., Williams, D. F., Richardson, A. J., Weeks, D. E., Stein, J. F., & Monaco, A. P. (1999). A quantitative-trait locus on chromosome 6p influences different aspects of developmental dyslexia. American Journal of Human Genetics, 64(1), 146-156. doi:10.1086/302190.

    Abstract

    Recent application of nonparametric-linkage analysis to reading disability has implicated a putative quantitative-trait locus (QTL) on the short arm of chromosome 6. In the present study, we use QTL methods to evaluate linkage to the 6p25-21.3 region in a sample of 181 sib pairs from 82 nuclear families that were selected on the basis of a dyslexic proband. We have assessed linkage directly for several quantitative measures that should correlate with different components of the phenotype, rather than using a single composite measure or employing categorical definitions of subtypes. Our measures include the traditional IQ/reading discrepancy score, as well as tests of word recognition, irregular-word reading, and nonword reading. Pointwise analysis by means of sib-pair trait differences suggests the presence, in 6p21.3, of a QTL influencing multiple components of dyslexia, in particular the reading of irregular words (P=.0016) and nonwords (P=.0024). A complementary statistical approach involving estimation of variance components supports these findings (irregular words, P=.007; nonwords, P=.0004). Multipoint analyses place the QTL within the D6S422-D6S291 interval, with a peak around markers D6S276 and D6S105 consistently identified by approaches based on trait differences (irregular words, P=.00035; nonwords, P=.0035) and variance components (irregular words, P=.007; nonwords, P=.0038). Our findings indicate that the QTL affects both phonological and orthographic skills and is not specific to phoneme awareness, as has been previously suggested. Further studies will be necessary to obtain a more precise localization of this QTL, which may lead to the isolation of one of the genes involved in developmental dyslexia.
  • Fisher, S. E., Hatchwell, E., Chand, A., Ockenden, N., Monaco, A. P., & Craig, I. W. (1995). Construction of two YAC contigs in human Xp11.23-p11.22, one encompassing the loci OATL1, GATA, TFE3, and SYP, the other linking DXS255 to DXS146. Genomics, 29(2), 496-502. doi:10.1006/geno.1995.9976.

    Abstract

    We have constructed two YAC contigs in the Xp11.23-p11.22 interval of the human X chromosome, a region that was previously poorly characterized. One contig, of at least 1.4 Mb, links the pseudogene OATL1 to the genes GATA1, TFE3, and SYP and also contains loci implicated in Wiskott-Aldrich syndrome and synovial sarcoma. A second contig, mapping proximal to the first, is estimated to be over 2.1 Mb and links the hypervariable locus DXS255 to DXS146, and also contains a chloride channel gene that is responsible for hereditary nephrolithiasis. We have used plasmid rescue, inverse PCR, and Alu-PCR to generate 20 novel markers from this region, 1 of which is polymorphic, and have positioned these relative to one another on the basis of YAC analysis. The order of previously known markers within our contigs, Xpter-OATL1-GATA-TFE3-SYP-DXS255-DXS146-Xcen, agrees with genomic pulsed-field maps of the region. In addition, we have constructed a rare-cutter restriction map for a 710-kb region of the DXS255-DXS146 contig and have identified three CpG islands. These contigs and new markers will provide a useful resource for more detailed analysis of Xp11.23-p11.22, a region implicated in several genetic diseases.
  • Fisher, S. E., Lai, C. S., & Monaco, A. P. (2003). Deciphering the genetic basis of speech and language disorders. Annual Review of Neuroscience, 26, 57-80. doi:10.1146/annurev.neuro.26.041002.131144.

    Abstract

    A significant number of individuals have unexplained difficulties with acquiring normal speech and language, despite adequate intelligence and environmental stimulation. Although developmental disorders of speech and language are heritable, the genetic basis is likely to involve several, possibly many, different risk factors. Investigations of a unique three-generation family showing monogenic inheritance of speech and language deficits led to the isolation of the first such gene on chromosome 7, which encodes a transcription factor known as FOXP2. Disruption of this gene causes a rare severe speech and language disorder but does not appear to be involved in more common forms of language impairment. Recent genome-wide scans have identified at least four chromosomal regions that may harbor genes influencing the latter, on chromosomes 2, 13, 16, and 19. The molecular genetic approach has potential for dissecting neurological pathways underlying speech and language disorders, but such investigations are only just beginning.
  • Fisher, S. E., Van Bakel, I., Lloyd, S. E., Pearce, S. H. S., Thakker, R. V., & Craig, I. W. (1995). Cloning and characterization of CLCN5, the human kidney chloride channel gene implicated in Dent disease (an X-linked hereditary nephrolithiasis). Genomics, 29, 598-606. doi:10.1006/geno.1995.9960.

    Abstract

    Dent disease, an X-linked familial renal tubular disorder, is a form of Fanconi syndrome associated with proteinuria, hypercalciuria, nephrocalcinosis, kidney stones, and eventual renal failure. We have previously used positional cloning to identify the 3' part of a novel kidney-specific gene (initially termed hClC-K2, but now referred to as CLCN5), which is deleted in patients from one pedigree segregating Dent disease. Mutations that disrupt this gene have been identified in other patients with this disorder. Here we describe the isolation and characterization of the complete open reading frame of the human CLCN5 gene, which is predicted to encode a protein of 746 amino acids, with significant homology to all known members of the ClC family of voltage-gated chloride channels. CLCN5 belongs to a distinct branch of this family, which also includes the recently identified genes CLCN3 and CLCN4. We have shown that the coding region of CLCN5 is organized into 12 exons, spanning 25-30 kb of genomic DNA, and have determined the sequence of each exon-intron boundary. The elucidation of the coding sequence and exon-intron organization of CLCN5 will both expedite the evaluation of structure/function relationships of these ion channels and facilitate the screening of other patients with renal tubular dysfunction for mutations at this locus.
  • Fisher, S. E. (2019). Human genetics: The evolving story of FOXP2. Current Biology, 29(2), R65-R67. doi:10.1016/j.cub.2018.11.047.

    Abstract

    FOXP2 mutations cause a speech and language disorder, raising interest in potential roles of this gene in human evolution. A new study re-evaluates genomic variation at the human FOXP2 locus but finds no evidence of recent adaptive evolution.
  • Fisher, S. E. (2019). Key issues and future directions: Genes and language. In P. Hagoort (Ed.), Human language: From genes and brain to behavior (pp. 609-620). Cambridge, MA: MIT Press.
  • Fisher, S. E. (2017). Evolution of language: Lessons from the genome. Psychonomic Bulletin & Review, 24(1), 34-40. doi:10.3758/s13423-016-1112-8.

    Abstract

    The post-genomic era is an exciting time for researchers interested in the biology of speech and language. Substantive advances in molecular methodologies have opened up entire vistas of investigation that were not previously possible, or in some cases even imagined. Speculations concerning the origins of human cognitive traits are being transformed into empirically addressable questions, generating specific hypotheses that can be explicitly tested using data collected from both the natural world and experimental settings. In this article, I discuss a number of promising lines of research in this area. For example, the field has begun to identify genes implicated in speech and language skills, including not just disorders but also the normal range of abilities. Such genes provide powerful entry points for gaining insights into neural bases and evolutionary origins, using sophisticated experimental tools from molecular neuroscience and developmental neurobiology. At the same time, sequencing of ancient hominin genomes is giving us an unprecedented view of the molecular genetic changes that have occurred during the evolution of our species. Synthesis of data from these complementary sources offers an opportunity to robustly evaluate alternative accounts of language evolution. Of course, this endeavour remains challenging on many fronts, as I also highlight in the article. Nonetheless, such an integrated approach holds great potential for untangling the complexities of the capacities that make us human.
  • Fisher, S. E. (2003). The genetic basis of a severe speech and language disorder. In J. Mallet, & Y. Christen (Eds.), Neurosciences at the postgenomic era (pp. 125-134). Heidelberg: Springer.
  • Fisher, V. J. (2017). Dance as Embodied Analogy: Designing an Empirical Research Study. In M. Van Delft, J. Voets, Z. Gündüz, H. Koolen, & L. Wijers (Eds.), Danswetenschap in Nederland. Utrecht: Vereniging voor Dansonderzoek (VDO).
  • Fisher, V. J. (2017). Unfurling the wings of flight: Clarifying ‘the what’ and ‘the why’ of mental imagery use in dance. Research in Dance Education, 18(3), 252-272. doi:10.1080/14647893.2017.1369508.

    Abstract

    This article provides clarification regarding ‘the what’ and ‘the why’ of mental imagery use in dance. It proposes that mental images are invoked across sensory modalities and often combine internal and external perspectives. The content of images ranges from ‘direct’ body oriented simulations along a continuum employing analogous mapping through ‘semi-direct’ literal similarities to abstract metaphors. The reasons for employing imagery are diverse and often overlapping, affecting physical, affective (psychological) and cognitive domains. This paper argues that when dance uses imagery, it is mapping aspects of the world to the body via analogy. Such mapping informs and changes our understanding of both our bodies and the world. In this way, mental imagery use in dance is fundamentally a process of embodied cognition.
  • Fitz, H., & Chang, F. (2017). Meaningful questions: The acquisition of auxiliary inversion in a connectionist model of sentence production. Cognition, 166, 225-250. doi:10.1016/j.cognition.2017.05.008.

    Abstract

    Nativist theories have argued that language involves syntactic principles which are unlearnable from the input children receive. A paradigm case of these innate principles is the structure dependence of auxiliary inversion in complex polar questions (Chomsky, 1968, 1975, 1980). Computational approaches have focused on the properties of the input in explaining how children acquire these questions. In contrast, we argue that messages are structured in a way that supports structure dependence in syntax. We demonstrate this approach within a connectionist model of sentence production (Chang, 2009) which learned to generate a range of complex polar questions from a structured message without positive exemplars in the input. The model also generated different types of error in development that were similar in magnitude to those in children (e.g., auxiliary doubling, Ambridge, Rowland, & Pine, 2008; Crain & Nakayama, 1987). Through model comparisons we trace how meaning constraints and linguistic experience interact during the acquisition of auxiliary inversion. Our results suggest that auxiliary inversion rules in English can be acquired without innate syntactic principles, as long as it is assumed that speakers who ask complex questions express messages that are structured into multiple propositions.
  • Fitz, H., & Chang, F. (2019). Language ERPs reflect learning through prediction error propagation. Cognitive Psychology, 111, 15-52. doi:10.1016/j.cogpsych.2019.03.002.

    Abstract

    Event-related potentials (ERPs) provide a window into how the brain is processing language. Here, we propose a theory that argues that ERPs such as the N400 and P600 arise as side effects of an error-based learning mechanism that explains linguistic adaptation and language learning. We instantiated this theory in a connectionist model that can simulate data from three studies on the N400 (amplitude modulation by expectancy, contextual constraint, and sentence position), five studies on the P600 (agreement, tense, word category, subcategorization and garden-path sentences), and a study on the semantic P600 in role reversal anomalies. Since ERPs are learning signals, this account explains adaptation of ERP amplitude to within-experiment frequency manipulations and the way ERP effects are shaped by word predictability in earlier sentences. Moreover, it predicts that ERPs can change over language development. The model provides an account of the sensitivity of ERPs to expectation mismatch, the relative timing of the N400 and P600, the semantic nature of the N400, the syntactic nature of the P600, and the fact that ERPs can change with experience. This approach suggests that comprehension ERPs are related to sentence production and language acquisition mechanisms.
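    The following is a minimal sketch of the general idea only, not of the Fitz & Chang model itself: in an error-based learner, the prediction error (surprisal) for an incoming word can serve both as an "N400-like" index and as the signal that updates the network's weights, so repeated exposure to the same continuation reduces the simulated response. The toy vocabulary, lookup-table predictor, and learning rate below are all assumptions made for illustration.

```python
# Hedged toy sketch: cross-entropy prediction error doubles as an ERP-like index
# and as the error signal that drives learning in a lookup-table "network".

import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "dog", "cat", "barks", "meows"]
V = len(vocab)
W = rng.normal(scale=0.1, size=(V, V))  # weights: current word -> next-word logits

def predict(prev_idx):
    """Softmax distribution over the next word given the previous word."""
    logits = W[prev_idx]
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def present(prev_word, next_word, lr=0.5):
    """Return the surprisal for next_word and learn from the prediction error."""
    i, j = vocab.index(prev_word), vocab.index(next_word)
    p = predict(i)
    surprisal = -np.log(p[j])   # "N400-like" amplitude: larger for unexpected words
    grad = p.copy()
    grad[j] -= 1.0              # gradient of cross-entropy loss w.r.t. the logits
    W[i] -= lr * grad           # error propagation updates the weights
    return surprisal

if __name__ == "__main__":
    # Repeated exposure to "dog barks": the error-based signal adapts downward.
    for trial in range(1, 6):
        print(f"trial {trial}: surprisal = {present('dog', 'barks'):.3f}")
```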
  • Floyd, S. (2017). Requesting as a means for negotiating distributed agency. In N. J. Enfield, & P. Kockelman (Eds.), Distributed Agency (pp. 67-78). Oxford: Oxford University Press.
  • Francisco, A. A., Groen, M. A., Jesse, A., & McQueen, J. M. (2017). Beyond the usual cognitive suspects: The importance of speechreading and audiovisual temporal sensitivity in reading ability. Learning and Individual Differences, 54, 60-72. doi:10.1016/j.lindif.2017.01.003.

    Abstract

    The aim of this study was to clarify whether audiovisual processing accounted for variance in reading and reading-related abilities, beyond the effect of a set of measures typically associated with individual differences in both reading and audiovisual processing. Testing adults with and without a diagnosis of dyslexia, we showed that—across all participants, and after accounting for variance in cognitive abilities—audiovisual temporal sensitivity contributed uniquely to variance in reading errors. This is consistent with previous studies demonstrating an audiovisual deficit in dyslexia. Additionally, we showed that speechreading (identification of speech based on visual cues from the talking face alone) was a unique contributor to variance in phonological awareness in dyslexic readers only: those who scored higher on speechreading, scored lower on phonological awareness. This suggests a greater reliance on visual speech as a compensatory mechanism when processing auditory speech is problematic. A secondary aim of this study was to better understand the nature of dyslexia. The finding that a sub-group of dyslexic readers scored low on phonological awareness and high on speechreading is consistent with a hybrid perspective of dyslexia: There are multiple possible pathways to reading impairment, which may translate into multiple profiles of dyslexia.
  • Francisco, A. A., Jesse, A., Groen, M. A., & McQueen, J. M. (2017). A general audiovisual temporal processing deficit in adult readers with dyslexia. Journal of Speech, Language, and Hearing Research, 60, 144-158. doi:10.1044/2016_JSLHR-H-15-0375.

    Abstract

    Purpose: Because reading is an audiovisual process, reading impairment may reflect an audiovisual processing deficit. The aim of the present study was to test the existence and scope of such a deficit in adult readers with dyslexia. Method: We tested 39 typical readers and 51 adult readers with dyslexia on their sensitivity to the simultaneity of audiovisual speech and nonspeech stimuli, their time window of audiovisual integration for speech (using incongruent /aCa/ syllables), and their audiovisual perception of phonetic categories. Results: Adult readers with dyslexia showed less sensitivity to audiovisual simultaneity than typical readers for both speech and nonspeech events. We found no differences between readers with dyslexia and typical readers in the temporal window of integration for audiovisual speech or in the audiovisual perception of phonetic categories. Conclusions: The results suggest an audiovisual temporal deficit in dyslexia that is not specific to speech-related events. But the differences found for audiovisual temporal sensitivity did not translate into a deficit in audiovisual speech perception. Hence, there seems to be a hiatus between simultaneity judgment and perception, suggesting a multisensory system that uses different mechanisms across tasks. Alternatively, it is possible that the audiovisual deficit in dyslexia is only observable when explicit judgments about audiovisual simultaneity are required.
  • Francks, C., DeLisi, L. E., Fisher, S. E., Laval, S. H., Rue, J. E., Stein, J. F., & Monaco, A. P. (2003). Confirmatory evidence for linkage of relative hand skill to 2p12-q11 [Letter to the editor]. American Journal of Human Genetics, 72(2), 499-502. doi:10.1086/367548.
  • Francks, C., Fisher, S. E., Marlow, A. J., MacPhie, I. L., Taylor, K. E., Richardson, A. J., Stein, J. F., & Monaco, A. P. (2003). Familial and genetic effects on motor coordination, laterality, and reading-related cognition. American Journal of Psychiatry, 160(11), 1970-1977. doi:10.1176/appi.ajp.160.11.1970.

    Abstract

    OBJECTIVE: Recent research has provided evidence for a genetically mediated association between language or reading-related cognitive deficits and impaired motor coordination. Other studies have identified relationships between lateralization of hand skill and cognitive abilities. With a large sample, the authors aimed to investigate genetic relationships between measures of reading-related cognition, hand motor skill, and hand skill lateralization.

    METHOD: The authors applied univariate and bivariate correlation and familiality analyses to a range of measures. They also performed genomewide linkage analysis of hand motor skill in a subgroup of 195 sibling pairs.

    RESULTS: Hand motor skill was significantly familial (maximum heritability=41%), as were reading-related measures. Hand motor skill was weakly but significantly correlated with reading-related measures, such as nonword reading and irregular word reading. However, these correlations were not significantly familial in nature, and the authors did not observe linkage of hand motor skill to any chromosomal regions implicated in susceptibility to dyslexia. Lateralization of hand skill was not correlated with reading or cognitive ability.

    CONCLUSIONS: The authors confirmed a relationship between lower motor ability and poor reading performance. However, the genetic effects on motor skill and reading ability appeared to be largely or wholly distinct, suggesting that the correlation between these traits may have arisen from environmental influences. Finally, the authors found no evidence that reading disability and/or low general cognitive ability were associated with ambidexterity.
