Publications

  • Devanna, P., Dediu, D., & Vernes, S. C. (2019). The Genetics of Language: From complex genes to complex communication. In S.-A. Rueschemeyer, & M. G. Gaskell (Eds.), The Oxford Handbook of Psycholinguistics (2nd ed., pp. 865-898). Oxford: Oxford University Press.

    Abstract

    This chapter discusses the genetic foundations of the human capacity for language. It reviews the molecular structure of the genome and the complex molecular mechanisms that allow genetic information to influence multiple levels of biology. It goes on to describe the active regulation of genes and their formation of complex genetic pathways that in turn control the cellular environment and function. At each of these levels, examples of genes and genetic variants that may influence the human capacity for language are given. Finally, it discusses the value of using animal models to understand the genetic underpinnings of speech and language. From this chapter will emerge the complexity of the genome in action and the multidisciplinary efforts that are currently made to bridge the gap between genetics and language.
  • Devaraju, K., Miskinyte, G., Hansen, M. G., Monni, E., Tornero, D., Woods, N. B., Bengzon, J., Ahlenius, H., Lindvall, O., & Kokaia, Z. (2017). Direct conversion of human fibroblasts to functional excitatory cortical neurons integrating into human neural networks. Stem Cell Research & Therapy, 8: 207. doi:10.1186/s13287-017-0658-3.

    Abstract

    Background: Human fibroblasts can be directly converted to several subtypes of neurons, but cortical projection neurons have not been generated. Methods: Here we screened for transcription factor combinations that could potentially convert human fibroblasts to functional excitatory cortical neurons. The induced cortical (iCtx) cells were analyzed for cortical neuronal identity using immunocytochemistry, single-cell quantitative polymerase chain reaction (qPCR), electrophysiology, and their ability to integrate into human neural networks in vitro and ex vivo using electrophysiology and rabies virus tracing. Results: We show that a combination of three transcription factors, BRN2, MYT1L, and FEZF2, has the ability to directly convert human fibroblasts to functional excitatory cortical neurons. The conversion efficiency was increased to about 16% by treatment with small molecules and microRNAs. The iCtx cells exhibited electrophysiological properties of functional neurons, had pyramidal-like cell morphology, and expressed key cortical projection neuronal markers. Single-cell analysis of iCtx cells revealed a complex gene expression profile, a subpopulation of them displaying a molecular signature closely resembling that of human fetal primary cortical neurons. The iCtx cells received synaptic inputs from co-cultured human fetal primary cortical neurons, contained spines, and expressed the postsynaptic excitatory scaffold protein PSD95. When transplanted ex vivo to organotypic cultures of adult human cerebral cortex, the iCtx cells exhibited morphological and electrophysiological properties of mature neurons, integrated structurally into the cortical tissue, and received synaptic inputs from adult human neurons. Conclusions: Our findings indicate that functional excitatory cortical neurons, generated here for the first time by direct conversion of human somatic cells, have the capacity for synaptic integration into adult human cortex.
  • Dideriksen, C., Fusaroli, R., Tylén, K., Dingemanse, M., & Christiansen, M. H. (2019). Contextualizing Conversational Strategies: Backchannel, Repair and Linguistic Alignment in Spontaneous and Task-Oriented Conversations. In A. K. Goel, C. M. Seifert, & C. Freksa (Eds.), Proceedings of the 41st Annual Conference of the Cognitive Science Society (CogSci 2019) (pp. 261-267). Montreal, QC: Cognitive Science Society.

    Abstract

    Do interlocutors adjust their conversational strategies to the specific contextual demands of a given situation? Prior studies have yielded conflicting results, making it unclear how strategies vary with demands. We combine insights from qualitative and quantitative approaches in a within-participant experimental design involving two different contexts: spontaneously occurring conversations (SOC) and task-oriented conversations (TOC). We systematically assess backchanneling, other-repair and linguistic alignment. We find that SOC exhibit a higher number of backchannels, a reduced and more generic repair format and higher rates of lexical and syntactic alignment. TOC are characterized by a high number of specific repairs and a lower rate of lexical and syntactic alignment. However, when alignment occurs, more linguistic forms are aligned. The findings show that conversational strategies adapt to specific contextual demands.
  • Dieuleveut, A., Van Dooren, A., Cournane, A., & Hacquard, V. (2019). Acquiring the force of modals: Sig you guess what sig means? In M. Brown, & B. Dailey (Eds.), BUCLD 43: Proceedings of the 43rd annual Boston University Conference on Language Development (pp. 189-202). Somerville, MA: Cascadilla Press.
  • Dimitrova, D. V., Redeker, G., & Hoeks, J. C. J. (2009). Did you say a BLUE banana? The prosody of contrast and abnormality in Bulgarian and Dutch. In 10th Annual Conference of the International Speech Communication Association [Interspeech 2009] (pp. 999-1002). ISCA Archive.

    Abstract

    In a production experiment on Bulgarian that was based on a previous study on Dutch [1], we investigated the role of prosody when linguistic and extra-linguistic information coincide or contradict. Speakers described abnormally colored fruits in conditions where contrastive focus and discourse relations were varied. We found that the coincidence of contrast and abnormality enhances accentuation in Bulgarian as it did in Dutch. Surprisingly, when both factors are in conflict, the prosodic prominence of abnormality often overruled focus accentuation in both Bulgarian and Dutch, though the languages also show marked differences.
  • Dimroth, C., & Narasimhan, B. (2009). Accessibility and topicality in children's use of word order. In J. Chandlee, M. Franchini, S. Lord, & G. M. Rheiner (Eds.), Proceedings of the 33rd annual Boston University Conference on Language Development (BUCLD) (pp. 133-138).
  • Dimroth, C., & Klein, W. (2009). Einleitung. Zeitschrift für Literaturwissenschaft und Linguistik, 153, 5-9.
  • Dimroth, C., & Jordens, P. (Eds.). (2009). Functional categories in learner language. Berlin: Mouton de Gruyter.
  • Dimroth, C. (1998). Indiquer la portée en allemand L2: Une étude longitudinale de l'acquisition des particules de portée. AILE (Acquisition et Interaction en Langue étrangère), 11, 11-34.
  • Dimroth, C. (2009). L'acquisition de la finitude en allemand L2 à différents âges. AILE (Acquisition et Interaction en Langue étrangère)/LIA (Languages, Interaction, Acquisition), 1(1), 113-135.

    Abstract

    Ultimate attainment in adult second language learners often differs tremendously from the end state typically achieved by young children learning their first language (L1) or a second language (L2). The research summarized in this article concentrates on developmental steps and orders of acquisition attested in learners of different ages. Findings from a longitudinal study concerned with the acquisition of verbal morpho-syntax in German as an L2 by two young Russian learners (8 and 14 years old) are compared to findings from the acquisition of the same target language by younger children and by untutored adult learners. The study focuses on the acquisition of verbal morphology, the role of auxiliary verbs and the position of finite and non-finite verbs in relation to negation and additive scope particles.
  • Dimroth, C. (2009). Lernervarietäten im Sprachunterricht. Zeitschrift für Literaturwissenschaft und Linguistik, 39(153), 60-80.
  • Dimroth, C. (2009). Stepping stones and stumbling blocks: Why negation accelerates and additive particles delay the acquisition of finiteness in German. In C. Dimroth, & P. Jordens (Eds.), Functional Categories in Learner Language (pp. 137-170). Berlin: Mouton de Gruyter.
  • Dimroth, C., & Watorek, M. (2000). The scope of additive particles in basic learner languages. Studies in Second Language Acquisition, 22, 307-336. Retrieved from http://journals.cambridge.org/action/displayAbstract?aid=65981.

    Abstract

    Based on their longitudinal analysis of the acquisition of Dutch, English, French, and German, Klein and Perdue (1997) described a “basic learner variety” as valid cross-linguistically and comprising a limited number of shared syntactic patterns interacting with two types of constraints: (a) semantic—the NP whose referent has highest control comes first, and (b) pragmatic—the focus expression is in final position. These authors hypothesized that “the topic-focus structure also plays an important role in some other respects. . . . Thus, negation and (other) scope particles occur at the topic-focus boundary” (p. 318). This poses the problem of the interaction between the core organizational principles of the basic variety and optional items such as negative particles and scope particles, which semantically affect the whole or part of the utterance in which they occur. In this article, we test the validity of these authors' hypothesis for the acquisition of the additive scope particle also (and its translation equivalents). Our analysis is based on the European Science Foundation (ESF) data originally used to define the basic variety, but we also included some more advanced learner data from the same database. In doing so, we refer to the analyses of Dimroth and Klein (1996), which concern the interaction between scope particles and the part of the utterance they affect, and we make a distinction between maximal scope—that which is potentially affected by the particle—and the actual scope of a particle in relation to an utterance in a given discourse context.

  • Dingemanse, M., Perlman, M., & Perniss, P. (2020). Construals of iconicity: Experimental approaches to form-meaning resemblances in language. Language and Cognition, 12(1), 1-14. doi:10.1017/langcog.2019.48.

    Abstract

    While speculations on form–meaning resemblances in language go back millennia, the experimental study of iconicity is only about a century old. Here we take stock of experimental work on iconicity and present a double special issue with a diverse set of new contributions. We contextualise the work by introducing a typology of approaches to iconicity in language. Some approaches construe iconicity as a discrete property that is either present or absent; others treat it as involving semiotic relationships that come in kinds; and yet others see it as a gradient substance that comes in degrees. We show the benefits and limitations that come with each of these construals and stress the importance of developing accounts that can fluently switch between them. With operationalisations of iconicity that are well defined yet flexible enough to deal with differences in tasks, modalities, and levels of analysis, experimental research on iconicity is well equipped to contribute to a comprehensive science of language.
  • Dingemanse, M. (2020). Resource-rationality beyond individual minds: The case of interactive language use. Behavioral and Brain Sciences, 43, 23-24. doi:10.1017/S0140525X19001638.

    Abstract

    Resource-rational approaches offer much promise for understanding human cognition, especially if they can reach beyond the confines of individual minds. Language allows people to transcend individual resource limitations by augmenting computation and enabling distributed cognition. Interactive language use, an environment where social rational agents routinely deal with resource constraints together, offers a natural laboratory to test resource-rationality in the wild.
  • Dingemanse, M. (2020). Between sound and speech: Liminal signs in interaction. Research on Language and Social Interaction, 53(1), 188-196. doi:10.1080/08351813.2020.1712967.

    Abstract

    When people talk, they recruit a wide range of expressive devices for interactional work, from sighs, sniffs, clicks, and whistles to other conduct that borders on the linguistic. These resources represent some of the more elusive yet no less powerful aspects of the interactional machinery as they are used in the management of turn and sequence and the marking of stance and affect. Phenomena long assumed to be beyond the purview of linguistic inquiry emerge as systematically deployed practices whose ambiguous degree of control and convention allows participants to carry out subtle interactional work without committing to specific words. While these resources have been characterised as non-lexical, non-verbal, or non-conventional, I propose they are unified in their liminality: they work well precisely because they equivocate between sound and speech. The empirical study of liminal signs shows the promise of sequential analysis for building a science of language on interactional foundations.
  • Dingemanse, M. (2017). Brain-to-brain interfaces and the role of language in distributing agency. In N. J. Enfield, & P. Kockelman (Eds.), Distributed Agency (pp. 59-66). Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780190457204.003.0007.

    Abstract

    Brain-to-brain interfaces, in which brains are physically connected without the intervention of language, promise new ways of collaboration and communication between humans. I examine the narrow view of language implicit in current conceptions of brain-to-brain interfaces and put forward a constructive alternative, stressing the role of language in organising joint agency. Two features of language stand out as crucial: its selectivity, which provides people with much-needed filters between public words and private worlds; and its negotiability, which provides people with systematic opportunities for calibrating understanding and expressing consent and dissent. Without these checks and balances, brain-to-brain interfaces run the risk of reducing people to the level of amoeba in a slime mold; with them, they may mature to become useful extensions of human agency.
  • Dingemanse, M. (2020). Der Raum zwischen unseren Köpfen. Technology Review, 2020(13), 10-15.

    Abstract

    Current conceptions of brain-to-brain interfaces attempt to bypass language. But when we refine them to more fully realise their collaborative potential we find language —or at least a language-like infrastructure for communication and coordination— slipping through the back door. It wouldn't be the first time that language reinvented itself.
  • Dingemanse, M., & Akita, K. (2017). An inverse relation between expressiveness and grammatical integration: on the morphosyntactic typology of ideophones, with special reference to Japanese. Journal of Linguistics, 53(3), 501-532. doi:10.1017/S002222671600030X.

    Abstract

    Words and phrases may differ in the extent to which they are susceptible to prosodic foregrounding and expressive morphology: their expressiveness. They may also differ in the degree to which they are integrated in the morphosyntactic structure of the utterance: their grammatical integration. We describe an inverse relation that holds across widely varied languages, such that more expressiveness goes together with less grammatical integration, and vice versa. We review typological evidence for this inverse relation in 10 languages, then quantify and explain it using Japanese corpus data. We do this by tracking ideophones —vivid sensory words also known as mimetics or expressives— across different morphosyntactic contexts and measuring their expressiveness in terms of intonation, phonation and expressive morphology. We find that as expressiveness increases, grammatical integration decreases. Using gesture as a measure independent of the speech signal, we find that the most expressive ideophones are most likely to come together with iconic gestures. We argue that the ultimate cause is the encounter of two distinct and partly incommensurable modes of representation: the gradient, iconic, depictive system represented by ideophones and iconic gestures and the discrete, arbitrary, descriptive system represented by ordinary words. The study shows how people combine modes of representation in speech and demonstrates the value of integrating description and depiction into the scientific vision of language.

  • Dingemanse, M. (2019). 'Ideophone' as a comparative concept. In K. Akita, & P. Pardeshi (Eds.), Ideophones, Mimetics, and Expressives (pp. 13-33). Amsterdam: John Benjamins. doi:10.1075/ill.16.02din.

    Abstract

    This chapter makes the case for ‘ideophone’ as a comparative concept: a notion that captures a recurrent typological pattern and provides a template for understanding language-specific phenomena that prove similar. It revises an earlier definition to account for the observation that ideophones typically form an open lexical class, and uses insights from canonical typology to explore the larger typological space. According to the resulting definition, a canonical ideophone is a member of an open lexical class of marked words that depict sensory imagery. The five elements of this definition can be seen as dimensions that together generate a possibility space to characterise cross-linguistic diversity in depictive means of expression. This approach allows for the systematic comparative treatment of ideophones and ideophone-like phenomena. Some phenomena in the larger typological space are discussed to demonstrate the utility of the approach: phonaesthemes in European languages, specialised semantic classes in West-Chadic, diachronic diversions in Aslian, and depicting constructions in signed languages.
  • Dingemanse, M. (2009). Kããã [finalist photo in the 2008 AAA Photo Contest]. Anthropology News, 50(3), 23-23.

    Abstract

    Kyeei Yao, an age group leader, oversees a festival in Akpafu-Mempeasem, Volta Region, Ghana. The expensive draped cloth, Ashanti-inspired wreath, strings of beads that are handed down through the generations, and digital wristwatch work together to remind us that culture is a moving target, always renewing and reshaping itself. Kããã is a Siwu ideophone for "looking attentively".
  • Dingemanse, M. (2009). Ideophones in unexpected places. In P. K. Austin, O. Bond, M. Charette, D. Nathan, & P. Sells (Eds.), Proceedings of the 2nd Conference on Language Documentation and Linguistic Theory (pp. 83-97). London: School of Oriental and African Studies (SOAS).
  • Dingemanse, M. (2017). Expressiveness and system integration: On the typology of ideophones, with special reference to Siwu. STUF - Language Typology and Universals, 70(2), 363-384. doi:10.1515/stuf-2017-0018.

    Abstract

    Ideophones are often described as words that are highly expressive and morphosyntactically marginal. A study of ideophones in everyday conversations in Siwu (Kwa, eastern Ghana) reveals a landscape of variation and change that sheds light on some larger questions in the morphosyntactic typology of ideophones. The article documents a trade-off between expressiveness and morphosyntactic integration, with high expressiveness linked to low integration and vice versa. It also describes a pathway for deideophonisation and finds that frequency of use is a factor that influences the degree to which ideophones can come to be more like ordinary words. The findings have implications for processes of (de)ideophonisation, ideophone borrowing, and ideophone typology. A key point is that the internal diversity we find in naturally occurring data, far from being mere noise, is patterned variation that can help us to get a handle on the factors shaping ideophone systems within and across languages.
  • Dingemanse, M. (2017). On the margins of language: Ideophones, interjections and dependencies in linguistic theory. In N. J. Enfield (Ed.), Dependencies in language (pp. 195-202). Berlin: Language Science Press. doi:10.5281/zenodo.573781.

    Abstract

    Linguistic discovery is viewpoint-dependent, just like our ideas about what is marginal and what is central in language. In this essay I consider two supposed marginalia —ideophones and interjections— which provide some useful pointers for widening our field of view. Ideophones challenge us to take a fresh look at language and consider how it is that our communication system combines multiple modes of representation. Interjections challenge us to extend linguistic inquiry beyond sentence level, and remind us that language is social-interactive at core. Marginalia, then, are not the obscure, exotic phenomena that can be safely ignored: they represent opportunities for innovation and invite us to keep pushing the edges of linguistic inquiry.
  • Dingemanse, M. (2020). Recruiting assistance and collaboration: A West-African corpus study. In S. Floyd, G. Rossi, & N. J. Enfield (Eds.), Getting others to do things: A pragmatic typology of recruitments (pp. 369-421). Berlin: Language Science Press. doi:10.5281/zenodo.4018388.

    Abstract

    Doing things for and with others is one of the foundations of human social life. This chapter studies a systematic collection of 207 requests for assistance and collaboration from a video corpus of everyday conversations in Siwu, a Kwa language of Ghana. A range of social action formats and semiotic resources reveals how language is adapted to the interactional challenges posed by recruiting assistance. While many of the formats bear a language-specific signature, their sequential and interactional properties show important commonalities across languages. Two tentative findings are put forward for further cross-linguistic examination: a “rule of three” that may play a role in the organisation of successive response pursuits, and a striking commonality in animal-oriented recruitments across languages that may be explained by convergent cultural evolution. The Siwu recruitment system emerges as one instance of a sophisticated machinery for organising collaborative action that transcends language and culture.
  • Dingemanse, M. (2009). The enduring spoken word [Comment on Oard 2008]. Science, 323(5917), 1010-1011. doi:10.1126/science.323.5917.1010b.
  • Dingemanse, M., Rossi, G., & Floyd, S. (2017). Place reference in story beginnings: a cross-linguistic study of narrative and interactional affordances. Language in Society, 46(2), 129-158. doi:10.1017/S0047404516001019.

    Abstract

    People often begin stories in conversation by referring to person, time, and place. We study story beginnings in three societies and find place reference is recurrently used to (i) set the stage, foreshadowing the type of story and the kind of response due, and to (ii) make the story cohere, anchoring elements of the developing story. Recipients orient to these interactional affordances of place reference by responding in ways that attend to the relevance of place for the story and by requesting clarification when references are incongruent or noticeably absent. The findings are based on 108 story beginnings in three unrelated languages: Cha’palaa, a Barbacoan language of Ecuador; Northern Italian, a Romance language of Italy; and Siwu, a Kwa language of Ghana. The commonalities suggest we have identified generic affordances of place reference, and that storytelling in conversation offers a robust sequential environment for systematic comparative research on conversational structures.
  • Dingemanse, M., & Thompson, B. (2020). Playful iconicity: Structural markedness underlies the relation between funniness and iconicity. Language and Cognition, 12(1), 203-224. doi:10.1017/langcog.2019.49.

    Abstract

    Words like ‘waddle’, ‘flop’ and ‘zigzag’ combine playful connotations with iconic form-meaning resemblances. Here we propose that structural markedness may be a common factor underlying perceptions of playfulness and iconicity. Using collected and estimated lexical ratings covering a total of over 70,000 English words, we assess the robustness of this association. We identify cues of phonotactic complexity that covary with funniness and iconicity ratings and that, we propose, serve as metacommunicative signals to draw attention to words as playful and performative. To assess the generalisability of the findings we develop a method to estimate lexical ratings from distributional semantics and apply it to a dataset 20 times the size of the original set of human ratings. The method can be used more generally to extend coverage of lexical ratings. We find that it reliably reproduces correlations between funniness and iconicity as well as cues of structural markedness, though it also amplifies biases present in the human ratings. Our study shows that the playful and the poetic are part of the very texture of the lexicon.
  • Dingemanse, M. (2009). The selective advantage of body-part terms. Journal of Pragmatics, 41(10), 2130-2136. doi:10.1016/j.pragma.2008.11.008.

    Abstract

    This paper addresses the question why body-part terms are so often used to talk about other things than body parts. It is argued that the strategy of falling back on stable common ground to maximize the chances of successful communication is the driving force behind the selective advantage of body-part terms. The many different ways in which languages may implement this universal strategy suggest that, in order to properly understand the privileged role of the body in the evolution of linguistic signs, we have to look beyond the body to language in its socio-cultural context. A theory which acknowledges the interacting influences of stable common ground and diversified cultural practices on the evolution of linguistic signs will offer the most explanatory power for both universal patterns and language-specific variation.
  • Dolscheid, S., Çelik, S., Erkan, H., Küntay, A., & Majid, A. (2020). Space-pitch associations differ in their susceptibility to language. Cognition, 196: 104073. doi:10.1016/j.cognition.2019.104073.

    Abstract

    To what extent are links between musical pitch and space universal, and to what extent are they shaped by language? There is contradictory evidence in support of both universality and linguistic relativity presently, leaving the question open. To address this, speakers of Dutch who talk about pitch in terms of spatial height and speakers of Turkish who use a thickness metaphor were tested in simple nonlinguistic space-pitch association tasks. Both groups showed evidence of a thickness-pitch association, but differed significantly in their height-pitch associations, suggesting the latter may be more susceptible to language. When participants had to match pitches to spatial stimuli where height and thickness were opposed (i.e., a thick line high in space vs. a thin line low in space), Dutch and Turkish differed in their relative preferences. Whereas Turkish participants predominantly opted for a thickness-pitch interpretation—even if this meant a reversal of height-pitch mappings—Dutch participants favored a height-pitch interpretation more often. These findings provide new evidence that speakers of different languages vary in their space-pitch associations, while at the same time showing such associations are not equally susceptible to linguistic influences. Some space-pitch (i.e., height-pitch) associations are more malleable than others (i.e., thickness-pitch).
  • Donnelly, S., & Kidd, E. (2020). Individual differences in lexical processing efficiency and vocabulary in toddlers: A longitudinal investigation. Journal of Experimental Child Psychology, 192: 104781. doi:10.1016/j.jecp.2019.104781.

    Abstract

    Research on infants’ online lexical processing by Fernald, Perfors, and Marchman (2006) revealed substantial individual differences that are related to vocabulary development, such that infants with better lexical processing efficiency show greater vocabulary growth across time. Although it is clear that individual differences in lexical processing efficiency exist and are meaningful, the theoretical nature of lexical processing efficiency and its relation to vocabulary size is less clear. In the current study, we asked two questions: (a) Is lexical processing efficiency better conceptualized as a central processing capacity or as an emergent capacity reflecting a collection of word-specific capacities? and (b) Is there evidence for a causal role for lexical processing efficiency in early vocabulary development? In the study, 120 infants were tested on a measure of lexical processing at 18, 21, and 24 months, and their vocabulary was measured via parent report. Structural equation modeling of the 18-month time point data revealed that both theoretical constructs represented in the first question above (a) fit the data. A set of regression analyses on the longitudinal data revealed little evidence for a causal effect of lexical processing on vocabulary but revealed a significant effect of vocabulary size on lexical processing efficiency early in development. Overall, the results suggest that lexical processing efficiency is a stable construct in infancy that may reflect the structure of the developing lexicon.
  • Doumas, L. A. A., Hamer, A., Puebla, G., & Martin, A. E. (2017). A theory of the detection and learning of structured representations of similarity and relative magnitude. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017) (pp. 1955-1960). Austin, TX: Cognitive Science Society.

    Abstract

    Responding to similarity, difference, and relative magnitude (SDM) is ubiquitous in the animal kingdom. However, humans seem unique in the ability to represent relative magnitude (‘more’/‘less’) and similarity (‘same’/‘different’) as abstract relations that take arguments (e.g., greater-than (x,y)). While many models use structured relational representations of magnitude and similarity, little progress has been made on how these representations arise. Models that use these representations assume access to computations of similarity and magnitude a priori, either encoded as features or as output of evaluation operators. We detail a mechanism for producing invariant responses to “same”, “different”, “more”, and “less” which can be exploited to compute similarity and magnitude as an evaluation operator. Using DORA (Doumas, Hummel, & Sandhofer, 2008), these invariant responses can be used to learn structured relational representations of relative magnitude and similarity from pixel images of simple shapes.
  • Doumas, L. A. A., Martin, A. E., & Hummel, J. E. (2020). Relation learning in a neurocomputational architecture supports cross-domain transfer. In S. Denison, M. Mack, Y. Xu, & B. C. Armstrong (Eds.), Proceedings of the 42nd Annual Virtual Meeting of the Cognitive Science Society (CogSci 2020) (pp. 932-937). Montreal, QC: Cognitive Science Society.

    Abstract

    Humans readily generalize, applying prior knowledge to novel situations and stimuli. Advances in machine learning have begun to approximate and even surpass human performance, but these systems struggle to generalize what they have learned to untrained situations. We present a model based on well-established neurocomputational principles that demonstrates human-level generalisation. This model is trained to play one video game (Breakout) and performs one-shot generalisation to a new game (Pong) with different characteristics. The model generalizes because it learns structured representations that are functionally symbolic (viz., a role-filler binding calculus) from unstructured training data. It does so without feedback, and without requiring that structured representations are specified a priori. Specifically, the model uses neural co-activation to discover which characteristics of the input are invariant and to learn relational predicates, and oscillatory regularities in network firing to bind predicates to arguments. To our knowledge, this is the first demonstration of human-like generalisation in a machine system that does not assume structured representations to begin with.
  • Doust, C., Gordon, S. D., Garden, N., Fisher, S. E., Martin, N. G., Bates, T. C., & Luciano, M. (2020). The association of dyslexia and developmental speech and language disorder candidate genes with reading and language abilities in adults. Twin Research and Human Genetics, 23(1), 22-32. doi:10.1017/thg.2020.7.

    Abstract

    Reading and language abilities are critical for educational achievement and success in adulthood. Variation in these traits is highly heritable, but the underlying genetic architecture is largely undiscovered. Genetic studies of reading and language skills traditionally focus on children with developmental disorders; however, much larger unselected adult samples are available, increasing power to identify associations with specific genetic variants of small effect size. We introduce an Australian adult population cohort (41.7–73.2 years of age, N = 1505) in which we obtained data using validated measures of several aspects of reading and language abilities. We performed genetic association analysis for a reading and spelling composite score, nonword reading (assessing phonological processing: a core component in learning to read), phonetic spelling, self-reported reading impairment and nonword repetition (a marker of language ability). Given the limited power in a sample of this size (~80% power to find a minimum effect size of 0.005), we focused on analyzing candidate genes that have been associated with dyslexia and developmental speech and language disorders in prior studies. In gene-based tests, FOXP2, a gene implicated in speech/language disorders, was associated with nonword repetition (p < .001), phonetic spelling (p = .002) and the reading and spelling composite score (p < .001). Gene-set analyses of candidate dyslexia and speech/language disorder genes were not significant. These findings contribute to the assessment of genetic associations in reading and language disorders, crucial for understanding their etiology and informing intervention strategies, and validate the approach of using unselected adult samples for gene discovery in language and reading.

    Additional information

    Supplementary materials
  • Dowell, C., Hajnal, A., Pouw, W., & Wagman, J. B. (2020). Visual and haptic perception of affordances of feelies. Perception, 49(9), 905-925. doi:10.1177/0301006620946532.

    Abstract

    Most objects have well-defined affordances. Investigating perception of affordances of objects that were not created for a specific purpose would provide insight into how affordances are perceived. In addition, comparison of perception of affordances for such objects across different exploratory modalities (visual vs. haptic) would offer a strong test of the lawfulness of information about affordances (i.e., the invariance of such information over transformation). Along these lines, “feelies”— objects created by Gibson with no obvious function and unlike any common object—could shed light on the processes underlying affordance perception. This study showed that when observers reported potential uses for feelies, modality significantly influenced what kind of affordances were perceived. Specifically, visual exploration resulted in more noun labels (e.g., “toy”) than haptic exploration which resulted in more verb labels (i.e., “throw”). These results suggested that overlapping, but distinct classes of action possibilities are perceivable using vision and haptics. Semantic network analyses revealed that visual exploration resulted in object-oriented responses focused on object identification, whereas haptic exploration resulted in action-oriented responses. Cluster analyses confirmed these results. Affordance labels produced in the visual condition were more consistent, used fewer descriptors, were less diverse, but more novel than in the haptic condition.
  • Drijvers, L., Vaitonyte, J., & Ozyurek, A. (2019). Degree of language experience modulates visual attention to visible speech and iconic gestures during clear and degraded speech comprehension. Cognitive Science, 43: e12789. doi:10.1111/cogs.12789.

    Abstract

    Visual information conveyed by iconic hand gestures and visible speech can enhance speech comprehension under adverse listening conditions for both native and non‐native listeners. However, how a listener allocates visual attention to these articulators during speech comprehension is unknown. We used eye‐tracking to investigate whether and how native and highly proficient non‐native listeners of Dutch allocated overt eye gaze to visible speech and gestures during clear and degraded speech comprehension. Participants watched video clips of an actress uttering a clear or degraded (6‐band noise‐vocoded) action verb while performing a gesture or not, and were asked to indicate the word they heard in a cued‐recall task. Gestural enhancement was the largest (i.e., a relative reduction in reaction time cost) when speech was degraded for all listeners, but it was stronger for native listeners. Both native and non‐native listeners mostly gazed at the face during comprehension, but non‐native listeners gazed more often at gestures than native listeners. However, only native but not non‐native listeners' gaze allocation to gestures predicted gestural benefit during degraded speech comprehension. We conclude that non‐native listeners might gaze at gesture more as it might be more challenging for non‐native listeners to resolve the degraded auditory cues and couple those cues to phonological information that is conveyed by visible speech. This diminished phonological knowledge might hinder the use of semantic information that is conveyed by gestures for non‐native compared to native listeners. Our results demonstrate that the degree of language experience impacts overt visual attention to visual articulators, resulting in different visual benefits for native versus non‐native listeners.

    Additional information

    Supporting information
  • Drijvers, L., Van der Plas, M., Ozyurek, A., & Jensen, O. (2019). Native and non-native listeners show similar yet distinct oscillatory dynamics when using gestures to access speech in noise. NeuroImage, 194, 55-67. doi:10.1016/j.neuroimage.2019.03.032.

    Abstract

    Listeners are often challenged by adverse listening conditions during language comprehension induced by external factors, such as noise, but also internal factors, such as being a non-native listener. Visible cues, such as semantic information conveyed by iconic gestures, can enhance language comprehension in such situations. Using magnetoencephalography (MEG) we investigated whether spatiotemporal oscillatory dynamics can predict a listener's benefit of iconic gestures during language comprehension in both internally (non-native versus native listeners) and externally (clear/degraded speech) induced adverse listening conditions. Proficient non-native speakers of Dutch were presented with videos in which an actress uttered a degraded or clear verb, accompanied by a gesture or not, and completed a cued-recall task after every video. The behavioral and oscillatory results obtained from non-native listeners were compared to an MEG study where we presented the same stimuli to native listeners (Drijvers et al., 2018a). Non-native listeners demonstrated a similar gestural enhancement effect as native listeners, but overall scored significantly slower on the cued-recall task. In both native and non-native listeners, an alpha/beta power suppression revealed engagement of the extended language network, motor and visual regions during gestural enhancement of degraded speech comprehension, suggesting similar core processes that support unification and lexical access processes. An individual's alpha/beta power modulation predicted the gestural benefit a listener experienced during degraded speech comprehension. Importantly, however, non-native listeners showed less engagement of the mouth area of the primary somatosensory cortex, left insula (beta), LIFG and ATL (alpha) than native listeners, which suggests that non-native listeners might be hindered in processing the degraded phonological cues and coupling them to the semantic information conveyed by the gesture. Native and non-native listeners thus demonstrated similar yet distinct spatiotemporal oscillatory dynamics when recruiting visual cues to disambiguate degraded speech.

    Additional information

    1-s2.0-S1053811919302216-mmc1.docx
  • Drijvers, L., & Ozyurek, A. (2020). Non-native listeners benefit less from gestures and visible speech than native listeners during degraded speech comprehension. Language and Speech, 63(2), 209-220. doi:10.1177/0023830919831311.

    Abstract

    Native listeners benefit from both visible speech and iconic gestures to enhance degraded speech comprehension (Drijvers & Ozyürek, 2017). We tested how highly proficient non-native listeners benefit from these visual articulators compared to native listeners. We presented videos of an actress uttering a verb in clear, moderately, or severely degraded speech, while her lips were blurred, visible, or visible and accompanied by a gesture. Our results revealed that unlike native listeners, non-native listeners were less likely to benefit from the combined enhancement of visible speech and gestures, especially since the benefit from visible speech was minimal when the signal quality was not sufficient.
  • Drijvers, L. (2019). On the oscillatory dynamics underlying speech-gesture integration in clear and adverse listening conditions. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Drijvers, L., & Ozyurek, A. (2017). Visual context enhanced: The joint contribution of iconic gestures and visible speech to degraded speech comprehension. Journal of Speech, Language, and Hearing Research, 60, 212-222. doi:10.1044/2016_JSLHR-H-16-0101.

    Abstract

    Purpose This study investigated whether and to what extent iconic co-speech gestures contribute to information from visible speech to enhance degraded speech comprehension at different levels of noise-vocoding. Previous studies of the contributions of these 2 visual articulators to speech comprehension have only been performed separately.

    Method Twenty participants watched videos of an actress uttering an action verb and completed a free-recall task. The videos were presented in 3 speech conditions (2-band noise-vocoding, 6-band noise-vocoding, clear), 3 multimodal conditions (speech + lips blurred, speech + visible speech, speech + visible speech + gesture), and 2 visual-only conditions (visible speech, visible speech + gesture).

    Results Accuracy levels were higher when both visual articulators were present compared with 1 or none. The enhancement effects of (a) visible speech, (b) gestural information on top of visible speech, and (c) both visible speech and iconic gestures were larger in 6-band than 2-band noise-vocoding or visual-only conditions. Gestural enhancement in 2-band noise-vocoding did not differ from gestural enhancement in visual-only conditions.
  • Drozd, K. F. (1998). No as a determiner in child English: A summary of categorical evidence. In A. Sorace, C. Heycock, & R. Shillcock (Eds.), Proceedings of the Gala '97 Conference on Language Acquisition (pp. 34-39). Edinburgh, UK: Edinburgh University Press.

    Abstract

    This paper summarizes the results of a descriptive syntactic category analysis of child English no which reveals that young children use and represent no as a determiner and negatives like no pen as NPs, contra standard analyses.
  • Drozdova, P., Van Hout, R., & Scharenborg, O. (2017). L2 voice recognition: The role of speaker-, listener-, and stimulus-related factors. The Journal of the Acoustical Society of America, 142(5), 3058-3068. doi:10.1121/1.5010169.

    Abstract

    Previous studies examined various factors influencing voice recognition and learning with mixed results. The present study investigates the separate and combined contribution of these various speaker-, stimulus-, and listener-related factors to voice recognition. Dutch listeners, with arguably incomplete phonological and lexical knowledge in the target language, English, learned to recognize the voice of four native English speakers, speaking in English, during a four-day training. Training was successful and listeners' accuracy was shown to be influenced by the acoustic characteristics of speakers and the sound composition of the words used in the training, but not by lexical frequency of the words, nor the lexical knowledge of the listeners or their phonological aptitude. Although not conclusive, listeners with a lower working memory capacity seemed to be slower in learning voices than listeners with a higher working memory capacity. The results reveal that speaker-related, listener-related, and stimulus-related factors accumulate in voice recognition, while lexical information turns out not to play a role in successful voice learning and recognition. This implies that voice recognition operates at the prelexical processing level.
  • Drude, S., Awete, W., & Aweti, A. (2019). A ortografia da língua Awetí. LIAMES: Línguas Indígenas Americanas, 19: e019014. doi:10.20396/liames.v19i0.8655746.

    Abstract

    This paper describes and motivates the orthography of the Awetí language (Tupí, Upper Xingu/MT), based on an analysis of the phonological and grammatical structure of Awetí. The orthography is the result of a long collaborative effort among the three authors, begun in 1998. It does not merely define an alphabet (the representation of the language's vowels and consonants), but also addresses internal variation, resyllabification, lenition, palatalization, and other (morpho-)phonological processes. Both the written representation of the glottal stop and the orthographic consequences of nasal harmony received special attention. Although lexical stress is not marked orthographically in Awetí, the great majority of affixes and particles are discussed with respect to stress and its interaction with adjacent morphemes, at the same time determining orthographic words. Finally, an alphabetical order was established in which digraphs are treated as sequences of letters, while the glottal stop ⟨ʼ⟩ is ignored, making Awetí easier to learn. The orthography as described here has been used for about ten years in school literacy teaching in Awetí, with good results. We believe that several of the arguments presented here can be productively transferred to other languages with similar phenomena (the glottal stop as a consonant, nasal harmony, morphophonological assimilation, etc.).
  • Drude, S. (2009). Nasal harmony in Awetí ‐ A declarative account. ReVEL - Revista Virtual de Estudos da Linguagem, (3). Retrieved from http://www.revel.inf.br/en/edicoes/?mode=especial&id=16.

    Abstract

    This article describes and analyses nasal harmony (or spreading of nasality) in Awetí. It first shows generally how sounds in prefixes adapt to the nasality or orality of stems, and how nasality in stems also ‘extends’ to the left. With abstract templates we show which phonetically nasal or oral sequences are possible in Awetí (focusing on stops, pre-nasalized stops, and nasals) and which phonological analysis is appropriate to account for these regularities. In Awetí, there are intrinsically nasal and oral vowels as well as ‘neutral’ vowels which adapt phonetically to a following vowel or consonant, as is the case for sonorant consonants. Pre-nasalized stops such as “nt” are nasalized variants of stops, not post-oralized variants of nasals as in Tupí-Guaraní languages. For nasals and stops in syllable coda (at the end of morphemes), we postulate archiphonemes which adapt to the preceding vowel or a following consonant. Finally, using a declarative approach, the analysis formulates ‘rules’ (statements) which account for the ‘behavior’ of nasality in Awetí words, making use of “structured sequences” on both the phonetic and phonological levels. Each unit (syllable, morpheme, word, etc.) on any level thus has three components: a sequence of segments, a constituent structure (where pre-nasalized stops, like diphthongs, correspond to two segments), and an intonation structure. The statements describe which phonetic variants can be combined (concatenated) with which other variants, depending on their nasality or orality.
  • Duffield, N., Matsuo, A., & Roberts, L. (2009). Factoring out the parallelism effect in VP-ellipsis: English vs. Dutch contrasts. Second Language Research, 25, 427-467. doi:10.1177/0267658309349425.

    Abstract

    Previous studies, including Duffield and Matsuo (2001; 2002; 2009), have demonstrated second language learners’ overall sensitivity to a parallelism constraint governing English VP-ellipsis constructions: like native speakers (NS), advanced Dutch, Spanish and Japanese learners of English reliably prefer ellipsis clauses with structurally parallel antecedents over those with non-parallel antecedents. However, these studies also suggest that, in contrast to English native speakers, L2 learners’ sensitivity to parallelism is strongly influenced by other non-syntactic formal factors, such that the constraint applies in a comparatively restricted range of construction-specific contexts. This article reports a set of follow-up experiments — from both computer-based as well as more traditional acceptability judgement tasks — that systematically manipulates these other factors. Convergent results from these tasks confirm a qualitative difference in the judgement patterns of the two groups, as well as important differences between theoreticians’ judgements and those of typical native speakers. We consider the implications of these findings for theories of ultimate attainment in second language acquisition (SLA), as well as for current theoretical accounts of ellipsis.
  • Dunn, M. (2009). Contact and phylogeny in Island Melanesia. Lingua, 119(11), 1664-1678. doi:10.1016/j.lingua.2007.10.026.

    Abstract

    This paper shows that despite evidence of structural convergence between some of the Austronesian and non-Austronesian (Papuan) languages of Island Melanesia, statistical methods can detect two independent genealogical signals derived from linguistic structural features. Earlier work by the author and others presented a maximum parsimony analysis which gave evidence for a genealogical connection between the non-Austronesian languages of Island Melanesia. Using the same data set, this paper demonstrates for the non-statistician the application of more sophisticated statistical techniques, including Bayesian methods of phylogenetic inference, and shows that the evidence for common ancestry is, if anything, stronger than originally supposed.
  • Dunn, M. (2000). Planning for failure: The niche of standard Chukchi. Current Issues in Language Planning, 1, 389-399. doi:10.1080/14664200008668013.

    Abstract

    This paper examines the effects of language standardization and orthography design on the Chukchi linguistic ecology. The process of standardisation has not taken into consideration the gender-based sociolects of colloquial Chukchi and is based on a grammatical description which does not reflect actual Chukchi use; as a result, standard Chukchi has not gained a place in the Chukchi language ecology. The Cyrillic orthography developed for Chukchi is also problematic, as it is based on features of Russian phonology rather than on Chukchi itself: this has meant that a knowledge of written Chukchi is dependent on a knowledge of the principles of Russian orthography. These aspects of language planning have had a large impact on the pre-existing Chukchi language ecology, which has contributed to the obsolescence of the colloquial language.
  • Edmiston, P., Perlman, M., & Lupyan, G. (2017). Creating words from iterated vocal imitation. In G. Gunzelman, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017) (pp. 331-336). Austin, TX: Cognitive Science Society.

    Abstract

    We report the results of a large-scale (N=1571) experiment to investigate whether spoken words can emerge from the process of repeated imitation. Participants played a version of the children’s game “Telephone”. The first generation was asked to imitate recognizable environmental sounds (e.g., glass breaking, water splashing); subsequent generations imitated the imitators for a total of 8 generations. We then examined whether the vocal imitations became more stable and word-like, retained a resemblance to the original sound, and became more suitable as learned category labels. The results showed (1) the imitations became progressively more word-like, (2) even after 8 generations, they could be matched above chance to the environmental sound that motivated them, and (3) imitations from later generations were more effective as learned category labels. These results show how repeated imitation can create progressively more word-like forms while retaining a semblance of iconicity.
  • Eekhof, L. S., Van Krieken, K., & Sanders, J. (2020). VPIP: A lexical identification procedure for perceptual, cognitive, and emotional viewpoint in narrative discourse. Open Library of Humanities, 6(1): 18. doi:10.16995/olh.483.

    Abstract

    Although previous work on viewpoint techniques has shown that viewpoint is ubiquitous in narrative discourse, approaches to identify and analyze the linguistic manifestations of viewpoint are currently scattered over different disciplines and dominated by qualitative methods. This article presents the ViewPoint Identification Procedure (VPIP), the first systematic method for the lexical identification of markers of perceptual, cognitive and emotional viewpoint in narrative discourse. Use of this step-wise procedure is facilitated by a large appendix of Dutch viewpoint markers. After the introduction of the procedure and discussion of some special cases, we demonstrate its application by discussing three types of narrative excerpts: a literary narrative, a news narrative, and an oral narrative. Applying the identification procedure to the full news narrative, we show that the VPIP can be reliably used to detect viewpoint markers in long stretches of narrative discourse. As such, the systematic identification of viewpoint has the potential to benefit both established viewpoint scholars and researchers from other fields interested in the analytical and experimental study of narrative and viewpoint. Such experimental studies could complement qualitative studies, ultimately advancing our theoretical understanding of the relation between the linguistic presentation and cognitive processing of viewpoint. Suggestions for elaboration of the VPIP, particularly in the realm of pragmatic viewpoint marking, are formulated in the final part of the paper.

    Additional information

    appendix
  • Egger, J., Rowland, C. F., & Bergmann, C. (2020). Improving the robustness of infant lexical processing speed measures. Behavior Research Methods, 52, 2188-2201. doi:10.3758/s13428-020-01385-5.

    Abstract

    Visual reaction times to target pictures after naming events are an informative measurement in language acquisition research, because gaze shifts measured in looking-while-listening paradigms are an indicator of infants’ lexical speed of processing. This measure is very useful, as it can be applied from a young age onwards and has been linked to later language development. However, to obtain valid reaction times, the infant is required to switch the fixation of their eyes from a distractor to a target object. This means that usually at least half the trials have to be discarded—those where the participant is already fixating the target at the onset of the target word—so that no reaction time can be measured. With few trials, reliability suffers, which is especially problematic when studying individual differences. In order to solve this problem, we developed a gaze-triggered looking-while-listening paradigm. The trials do not differ from the original paradigm apart from the fact that the target object is chosen depending on the infant’s eye fixation before naming. The object the infant is looking at becomes the distractor and the other object is used as the target, requiring a fixation switch, and thus providing a reaction time. We tested our paradigm with forty-three 18-month-old infants, comparing the results to those from the original paradigm. The gaze-triggered paradigm yielded more valid reaction time trials, as anticipated. The results of a ranked correlation between the conditions confirmed that the manipulated paradigm measures the same concept as the original paradigm.
  • Eibl-Eibesfeldt, I., Senft, B., & Senft, G. (1998). Trobriander (Ost-Neuguinea, Trobriand Inseln, Kaile'una) Fadenspiele 'ninikula'. In Ethnologie - Humanethologische Begleitpublikationen von I. Eibl-Eibesfeldt und Mitarbeitern. Sammelband I, 1985-1987. Göttingen: Institut für den Wissenschaftlichen Film.
  • Eielts, C., Pouw, W., Ouwehand, K., Van Gog, T., Zwaan, R. A., & Paas, F. (2020). Co-thought gesturing supports more complex problem solving in subjects with lower visual working-memory capacity. Psychological Research, 84, 502-513. doi:10.1007/s00426-018-1065-9.

    Abstract

    During silent problem solving, hand gestures arise that have no communicative intent. The role of such co-thought gestures in cognition has been understudied in cognitive research as compared to co-speech gestures. We investigated whether gesticulation during silent problem solving supported subsequent performance in a Tower of Hanoi problem-solving task, in relation to visual working-memory capacity and task complexity. Seventy-six participants were assigned to either an instructed gesture condition or a condition that allowed them to gesture, but without explicit instructions to do so. This resulted in three gesture groups: (1) non-gesturing; (2) spontaneous gesturing; (3) instructed gesturing. In line with the embedded/extended cognition perspective on gesture, gesturing benefited complex problem-solving performance for participants with a lower visual working-memory capacity, but not for participants with a lower spatial working-memory capacity.
  • Eijk, L., Fletcher, A., McAuliffe, M., & Janse, E. (2020). The effects of word frequency and word probability on speech rhythm in dysarthria. Journal of Speech, Language, and Hearing Research, 63, 2833-2845. doi:10.1044/2020_JSLHR-19-00389.

    Abstract

    Purpose

    In healthy speakers, the more frequent and probable a word is in its context, the shorter the word tends to be. This study investigated whether these probabilistic effects were similarly sized for speakers with dysarthria of different severities.

    Method

    Fifty-six speakers of New Zealand English (42 speakers with dysarthria and 14 healthy speakers) were recorded reading the Grandfather Passage. Measurements of word duration, frequency, and transitional word probability were taken.

    Results

    As hypothesized, words with a higher frequency and probability tended to be shorter in duration. There was also a significant interaction between word frequency and speech severity. This indicated that the more severe the dysarthria, the smaller the effects of word frequency on speakers' word durations. Transitional word probability also interacted with speech severity, but did not account for significant unique variance in the full model.

    Conclusions

    These results suggest that, as the severity of dysarthria increases, the duration of words is less affected by probabilistic variables. These findings may be due to reductions in the control and execution of muscle movement exhibited by speakers with dysarthria.
  • Eijk, L., Ernestus, M., & Schriefers, H. (2019). Alignment of pitch and articulation rate. In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 2690-2694). Canberra, Australia: Australasian Speech Science and Technology Association Inc.

    Abstract

    Previous studies have shown that speakers align their speech to each other at multiple linguistic levels. This study investigates whether alignment is mostly the result of priming from the immediately preceding speech materials, focussing on pitch and articulation rate (AR). Native Dutch speakers completed sentences, first by themselves (pre-test), then in alternation with Confederate 1 (Round 1), with Confederate 2 (Round 2), with Confederate 1 again (Round 3), and lastly by themselves again (post-test). Results indicate that participants aligned to the confederates and that this alignment lasted during the post-test. The confederates’ directly preceding sentences were not good predictors for the participants’ pitch and AR. Overall, the results indicate that alignment is more of a global effect than a local priming effect.
  • Eimer, M., Kiss, M., Press, C., & Sauter, D. (2009). The roles of feature-specific task set and bottom-up salience in attentional capture: An ERP study. Journal of Experimental Psychology: Human Perception and Performance, 35, 1316-1328. doi:10.1037/a0015872.

    Abstract

    We investigated the roles of top-down task set and bottom-up stimulus salience for feature-specific attentional capture. ERPs and behavioural performance were measured in two experiments where spatially nonpredictive cues preceded visual search arrays that included a colour-defined target. When cue arrays contained a target-colour singleton, behavioural spatial cueing effects were accompanied by a cue-induced N2pc component, indicative of attentional capture. Behavioural cueing effects and N2pc components were only minimally attenuated for non-singleton relative to singleton target-colour cues, demonstrating that top-down task set has a much greater impact on attentional capture than bottom-up salience. For nontarget-colour singleton cues, no N2pc was triggered, but an anterior N2 component indicative of top-down inhibition was observed. In Experiment 2, these cues produced an inverted behavioural cueing effect, which was accompanied by a delayed N2pc to targets presented at cued locations. These results suggest that perceptually salient visual stimuli without task-relevant features trigger a transient location-specific inhibition process that prevents attentional capture, but delays the selection of subsequent target events.
  • Eisenbeiss, S. (2000). The acquisition of Determiner Phrase in German child language. In M.-A. Friedemann, & L. Rizzi (Eds.), The Acquisition of Syntax (pp. 26-62). Harlow, UK: Pearson Education Ltd.
  • Eising, E., Carrion Castillo, A., Vino, A., Strand, E. A., Jakielski, K. J., Scerri, T. S., Hildebrand, M. S., Webster, R., Ma, A., Mazoyer, B., Francks, C., Bahlo, M., Scheffer, I. E., Morgan, A. T., Shriberg, L. D., & Fisher, S. E. (2019). A set of regulatory genes co-expressed in embryonic human brain is implicated in disrupted speech development. Molecular Psychiatry, 24, 1065-1078. doi:10.1038/s41380-018-0020-x.

    Abstract

    Genetic investigations of people with impaired development of spoken language provide windows into key aspects of human biology. Over 15 years after FOXP2 was identified, most speech and language impairments remain unexplained at the molecular level. We sequenced whole genomes of nineteen unrelated individuals diagnosed with childhood apraxia of speech, a rare disorder enriched for causative mutations of large effect. Where DNA was available from unaffected parents, we discovered de novo mutations, implicating genes, including CHD3, SETD1A and WDR5. In other probands, we identified novel loss-of-function variants affecting KAT6A, SETBP1, ZFHX4, TNRC6B and MKL2, regulatory genes with links to neurodevelopment. Several of the new candidates interact with each other or with known speech-related genes. Moreover, they show significant clustering within a single co-expression module of genes highly expressed during early human brain development. This study highlights gene regulatory pathways in the developing brain that may contribute to acquisition of proficient speech.

    Additional information

    Eising_etal_2018sup.pdf
  • Eising, E., Shyti, R., 'T Hoen, P. A. C., Vijfhuizen, L. S., Huisman, S. M. H., Broos, L. A. M., Mahfouz, A., Reinders, M. J. T., Ferrari, M. D., Tolner, E. A., De Vries, B., & Van den Maagdenberg, A. M. J. M. (2017). Cortical spreading depression causes unique dysregulation of inflammatory pathways in a transgenic mouse model of migraine. Molecular Neurobiology, 54(4), 2986-2996. doi:10.1007/s12035-015-9681-5.

    Abstract

    Familial hemiplegic migraine type 1 (FHM1) is a rare monogenic subtype of migraine with aura caused by mutations in CACNA1A that encodes the α1A subunit of voltage-gated CaV2.1 calcium channels. Transgenic knock-in mice that carry the human FHM1 R192Q missense mutation (‘FHM1 R192Q mice’) exhibit an increased susceptibility to cortical spreading depression (CSD), the mechanism underlying migraine aura. Here, we analysed gene expression profiles from isolated cortical tissue of FHM1 R192Q mice 24 h after experimentally induced CSD in order to identify molecular pathways affected by CSD. Gene expression profiles were generated using deep serial analysis of gene expression sequencing. Our data reveal a signature of inflammatory signalling upon CSD in the cortex of both mutant and wild-type mice. However, only in the brains of FHM1 R192Q mice are specific genes up-regulated in response to CSD that are implicated in interferon-related inflammatory signalling. Our findings show that CSD modulates inflammatory processes in both wild-type and mutant brains, but that an additional unique inflammatory signature becomes expressed after CSD in a relevant mouse model of migraine.
  • Eising, E., Pelzer, N., Vijfhuizen, L. S., De Vries, B., Ferrari, M. D., 'T Hoen, P. A. C., Terwindt, G. M., & Van den Maagdenberg, A. M. J. M. (2017). Identifying a gene expression signature of cluster headache in blood. Scientific Reports, 7: 40218. doi:10.1038/srep40218.

    Abstract

    Cluster headache is a relatively rare headache disorder, typically characterized by multiple daily, short-lasting attacks of excruciating, unilateral (peri-)orbital or temporal pain associated with autonomic symptoms and restlessness. To better understand the pathophysiology of cluster headache, we used RNA sequencing to identify differentially expressed genes and pathways in whole blood of patients with episodic (n = 19) or chronic (n = 20) cluster headache in comparison with headache-free controls (n = 20). Gene expression data were analysed by gene and by module of co-expressed genes with particular attention to previously implicated disease pathways including hypocretin dysregulation. Only moderate gene expression differences were identified and no associations were found with previously reported pathogenic mechanisms. At the level of functional gene sets, associations were observed for genes involved in several brain-related mechanisms such as GABA receptor function and voltage-gated channels. In addition, genes and modules of co-expressed genes showed a role for intracellular signalling cascades, mitochondria and inflammation. Although larger study samples may be required to identify the full range of involved pathways, these results indicate a role for mitochondria, intracellular signalling and inflammation in cluster headache.

    Additional information

    Eising_etal_2017sup.pdf
  • Emmendorfer, A. K., Correia, J. M., Jansma, B. M., Kotz, S. A., & Bonte, M. (2020). ERP mismatch response to phonological and temporal regularities in speech. Scientific Reports, 10: 9917. doi:10.1038/s41598-020-66824-x.

    Abstract

    Predictions of our sensory environment facilitate perception across domains. During speech perception, formal and temporal predictions may be made for phonotactic probability and syllable stress patterns, respectively, contributing to the efficient processing of speech input. The current experiment employed a passive EEG oddball paradigm to probe the neurophysiological processes underlying temporal and formal predictions simultaneously. The component of interest, the mismatch negativity (MMN), is considered a marker for experience-dependent change detection, where its timing and amplitude are indicative of the perceptual system’s sensitivity to presented stimuli. We hypothesized that more predictable stimuli (i.e. high phonotactic probability and first syllable stress) would facilitate change detection, indexed by shorter peak latencies or greater peak amplitudes of the MMN. This hypothesis was confirmed for phonotactic probability: high phonotactic probability deviants elicited an earlier MMN than low phonotactic probability deviants. We did not observe a significant modulation of the MMN to variations in syllable stress. Our findings confirm that speech perception is shaped by formal and temporal predictability. This paradigm may be useful to investigate the contribution of implicit processing of statistical regularities during (a)typical language development.

    Additional information

    supplementary information
  • Enard, W., Gehre, S., Hammerschmidt, K., Hölter, S. M., Blass, T., Somel, M., Brückner, M. K., Schreiweis, C., Winter, C., Sohr, R., Becker, L., Wiebe, V., Nickel, B., Giger, T., Müller, U., Groszer, M., Adler, T., Aguilar, A., Bolle, I., Calzada-Wack, J., Dalke, C., Ehrhardt, N., Favor, J., Fuchs, H., Gailus-Durner, V., Hans, W., Hölzlwimmer, G., Javaheri, A., Kalaydjiev, S., Kallnik, M., Kling, E., Kunder, S., Moßbrugger, I., Naton, B., Racz, I., Rathkolb, B., Rozman, J., Schrewe, A., Busch, D. H., Graw, J., Ivandic, B., Klingenspor, M., Klopstock, T., Ollert, M., Quintanilla-Martinez, L., Schulz, H., Wolf, E., Wurst, W., Zimmer, A., Fisher, S. E., Morgenstern, R., Arendt, T., Hrabé de Angelis, M., Fischer, J., Schwarz, J., & Pääbo, S. (2009). A humanized version of Foxp2 affects cortico-basal ganglia circuits in mice. Cell, 137(5), 961-971. doi:10.1016/j.cell.2009.03.041.

    Abstract

    It has been proposed that two amino acid substitutions in the transcription factor FOXP2 have been positively selected during human evolution due to effects on aspects of speech and language. Here, we introduce these substitutions into the endogenous Foxp2 gene of mice. Although these mice are generally healthy, they have qualitatively different ultrasonic vocalizations, decreased exploratory behavior and decreased dopamine concentrations in the brain, suggesting that the humanized Foxp2 allele affects basal ganglia. In the striatum, a part of the basal ganglia affected in humans with a speech deficit due to a nonfunctional FOXP2 allele, we find that medium spiny neurons have increased dendrite lengths and increased synaptic plasticity. Since mice carrying one nonfunctional Foxp2 allele show opposite effects, this suggests that alterations in cortico-basal ganglia circuits might have been important for the evolution of speech and language in humans.
  • Enfield, N. J. (2009). Common tragedy [Review of the book The native mind and the cultural construction of nature by Scott Atran and Douglas Medin]. The Times Literary Supplement, September 18, 2009, 10-11.
  • Enfield, N. J. (2009). 'Case relations' in Lao, a radically isolating language. In A. L. Malčukov, & A. Spencer (Eds.), The Oxford handbook of case (pp. 808-819). Oxford: Oxford University Press.
  • Enfield, N. J. (2009). [Review of the book Serial verb constructions: A cross-linguistic typology ed. by Alexandra Y. Aikhenvald and R. M. W. Dixon]. Language, 85, 445-451. doi:10.1353/lan.0.0124.
  • Enfield, N. J., & Levinson, S. C. (2009). Metalanguage for speech acts. In A. Majid (Ed.), Field manual volume 12 (pp. 51-53). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.883559.

    Abstract

    People of all cultures have some degree of concern with categorizing types of communicative social action. All languages have words with meanings like speak, say, talk, complain, curse, promise, accuse, nod, wink, point and chant. But the exact distinctions they make will differ in both quantity and quality. How is communicative social action categorised across languages and cultures? The goal of this task is to establish a basis for cross-linguistic comparison of native metalanguages for social action.
  • Enfield, N. J. (2009). Language and culture. In L. Wei, & V. Cook (Eds.), Contemporary Applied Linguistics Volume 2 (pp. 83-97). London: Continuum.
  • Enfield, N. J. (2017). Language in the Mainland Southeast Asia Area. In R. Hickey (Ed.), The Cambridge Handbook of Areal Linguistics (pp. 677-702). Cambridge: Cambridge University Press. doi:10.1017/9781107279872.026.
  • Enfield, N. J. (2009). Language: Social motives for syntax [Review of the book Origins of human communication by Michael Tomasello]. Science, 324(5923), 39. doi:10.1126/science.1172660.
  • Enfield, N. J. (2009). Everyday ritual in the residential world. In G. Senft, & E. B. Basso (Eds.), Ritual communication (pp. 51-80). Oxford: Berg.
  • Enfield, N. J. (2000). On linguocentrism. In M. Pütz, & M. H. Verspoor (Eds.), Explorations in linguistic relativity (pp. 125-157). Amsterdam: Benjamins.
  • Enfield, N. J., & Diffloth, G. (2009). Phonology and sketch grammar of Kri, a Vietic language of Laos. Cahiers de Linguistique - Asie Orientale (CLAO), 38(1), 3-69.
  • Enfield, N. J. (2009). Relationship thinking and human pragmatics. Journal of Pragmatics, 41, 60-78. doi:10.1016/j.pragma.2008.09.007.

    Abstract

    The approach to pragmatics explored in this article focuses on elements of social interaction which are of universal relevance, and which may provide bases for a comparative approach. The discussion is anchored by reference to a fragment of conversation from a video-recording of Lao speakers during a home visit in rural Laos. The following points are discussed. First, an understanding of the full richness of context is indispensable for a proper understanding of any interaction. Second, human relationships are a primary locus of social organization, and as such constitute a key focus for pragmatics. Third, human social intelligence forms a universal cognitive under-carriage for interaction, and requires careful cross-cultural study. Fourth, a neo-Peircean framework for a general understanding of semiotic processes gives us a way of stepping away from language as our basic analytical frame. It is argued that in order to get a grip on pragmatics across human groups, we need to take a comparative approach in the biological sense—i.e. with reference to other species as well. From this perspective, human pragmatics is about using semiotic resources to try to meet goals in the realm of social relationships.
  • Enfield, N. J. (2009). The anatomy of meaning: Speech, gesture, and composite utterances. Cambridge: Cambridge University Press.
  • Enfield, N. J., Levinson, S. C., & Stivers, T. (2009). Social action formulation: A "10-minutes" task. In A. Majid (Ed.), Field manual volume 12 (pp. 54-55). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.883564.

    Abstract

    Human actions in the social world – like greeting, requesting, complaining, accusing, asking, confirming, etc. – are recognised through the interpretation of signs. Language is where much of the action is, but gesture, facial expression and other bodily actions matter as well. The goal of this task is to establish a maximally rich description of a representative, good quality piece of conversational interaction, which will serve as a reference point for comparative exploration of the status of social actions and their formulation across languages.
  • Enfield, N. J., Stivers, T., Brown, P., Englert, C., Harjunpää, K., Hayashi, M., Heinemann, T., Hoymann, G., Keisanen, T., Rauniomaa, M., Raymond, C. W., Rossano, F., Yoon, K.-E., Zwitserlood, I., & Levinson, S. C. (2019). Polar answers. Journal of Linguistics, 55(2), 277-304. doi:10.1017/S0022226718000336.

    Abstract

    How do people answer polar questions? In this fourteen-language study of answers to questions in conversation, we compare the two main strategies; first, interjection-type answers such as uh-huh (or equivalents yes, mm, head nods, etc.), and second, repetition-type answers that repeat some or all of the question. We find that all languages offer both options, but that there is a strong asymmetry in their frequency of use, with a global preference for interjection-type answers. We propose that this preference is motivated by the fact that the two options are not equivalent in meaning. We argue that interjection-type answers are intrinsically suited to be the pragmatically unmarked, and thus more frequent, strategy for confirming polar questions, regardless of the language spoken. Our analysis is based on the semantic-pragmatic profile of the interjection-type and repetition-type answer strategies, in the context of certain asymmetries inherent to the dialogic speech act structure of question–answer sequences, including sequential agency and thematic agency. This allows us to see possible explanations for the outlier distributions found in ǂĀkhoe Haiǁom and Tzeltal.
  • Enfield, N. J. (2000). The theory of cultural logic: How individuals combine social intelligence with semiotics to create and maintain cultural meaning. Cultural Dynamics, 12(1), 35-64. doi:10.1177/092137400001200102.

    Abstract

    The social world is an ecological complex in which cultural meanings and knowledges (linguistic and non-linguistic) personally embodied by individuals are intercalibrated via common attention to commonly accessible semiotic structures. This interpersonal ecology bridges realms which are the subject matter of both anthropology and linguistics, allowing the public maintenance of a system of assumptions and counter-assumptions among individuals as to what is mutually known (about), in general and/or in any particular context. The mutual assumption of particular cultural ideas provides human groups with common premises for predictably convergent inferential processes. This process of people collectively using effectively identical assumptions in interpreting each other's actions—i.e. hypothesizing as to each other's motivations and intentions—may be termed cultural logic. This logic relies on the establishment of stereotypes and other kinds of precedents, catalogued in individuals’ personal libraries, as models and scenarios which may serve as reference in inferring and attributing motivations behind people's actions, and behind other mysterious phenomena. This process of establishing conceptual convention depends directly on semiotics, since groups of individuals rely on external signs as material for common focus and, thereby, agreement. Social intelligence binds signs in the world (e.g. speech sounds impressing upon eardrums), with individually embodied representations (e.g. word meanings and contextual schemas). The innate tendency for people to model the intentions of others provides an ultimately biological account for the logic behind culture. Ethnographic examples are drawn from Laos and Australia.
  • Enfield, N. J., & Evans, G. (2000). Transcription as standardisation: The problem of Tai languages. In S. Burusphat (Ed.), Proceedings: the International Conference on Tai Studies, July 29-31, 1998, (pp. 201-212). Bangkok, Thailand: Institute of Language and Culture for Rural Development, Mahidol University.
  • Erard, M. (2009). How Many Languages? Linguists Discover New Tongues in China. Science, 324(5925), 332-333. doi:10.1126/science.324.5925.332a.
  • Erard, M. (2019). Language aptitude: Insights from hyperpolyglots. In Z. Wen, P. Skehan, A. Biedroń, S. Li, & R. L. Sparks (Eds.), Language aptitude: Advancing theory, testing, research and practice (pp. 153-167). Abingdon, UK: Taylor & Francis.

    Abstract

    Over the decades, high-intensity language learners scattered over the globe referred to as “hyperpolyglots” have undertaken a natural experiment into the limits of learning and acquiring proficiencies in multiple languages. This chapter details several ways in which hyperpolyglots are relevant to research on aptitude. First, historical hyperpolyglots Cardinal Giuseppe Mezzofanti, Emil Krebs, Elihu Burritt, and Lomb Kató are described in terms of how they viewed their own exceptional outcomes. Next, I draw on results from an online survey with 390 individuals to explore how contemporary hyperpolyglots consider the explanatory value of aptitude. Third, the challenges involved in studying the genetic basis of hyperpolyglottism (and by extension of language aptitude) are discussed. This mosaic of data is meant to inform the direction of future aptitude research that takes hyperpolyglots, one type of exceptional language learner and user, into account.
  • Erard, M. (2017). Write yourself invisible. New Scientist, 236(3153), 36-39.
  • Ergin, R., Raviv, L., Senghas, A., Padden, C., & Sandler, W. (2020). Community structure affects convergence on uniform word orders: Evidence from emerging sign languages. In A. Ravignani, C. Barbieri, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, M. Martins, K. Mudd, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 13th International Conference (Evolang13) (pp. 84-86). Nijmegen: The Evolution of Language Conferences.
  • Ernestus, M., Dikmans, M., & Giezenaar, G. (2017). Advanced second language learners experience difficulties processing reduced word pronunciation variants. Dutch Journal of Applied Linguistics, 6(1), 1-20. doi:10.1075/dujal.6.1.01ern.

    Abstract

    Words are often pronounced with fewer segments in casual conversations than in formal speech. Previous research has shown that foreign language learners and beginning second language learners experience problems processing reduced speech. We examined whether this also holds for advanced second language learners. We designed a dictation task in Dutch consisting of sentences spliced from casual conversations and an unreduced counterpart of this task, with the same sentences carefully articulated by the same speaker. Advanced second language learners of Dutch produced substantially more transcription errors for the reduced than for the unreduced sentences. These errors made the sentences incomprehensible or led to non-intended meanings. The learners often did not rely on the semantic and syntactic information in the sentence or on the subsegmental cues to overcome the reductions. Hence, advanced second language learners also appear to suffer from the reduced pronunciation variants of words that are abundant in everyday conversations.
  • Ernestus, M., Kouwenhoven, H., & Van Mulken, M. (2017). The direct and indirect effects of the phonotactic constraints in the listener's native language on the comprehension of reduced and unreduced word pronunciation variants in a foreign language. Journal of Phonetics, 62, 50-64. doi:10.1016/j.wocn.2017.02.003.

    Abstract

    This study investigates how the comprehension of casual speech in foreign languages is affected by the phonotactic constraints in the listener’s native language. Non-native listeners of English with different native languages heard short English phrases produced by native speakers of English or Spanish and they indicated whether these phrases included can or can’t. Native Mandarin listeners especially tended to interpret can’t as can. We interpret this result as a direct effect of the ban on word-final /nt/ in Mandarin. Both the native Mandarin and the native Spanish listeners did not take full advantage of the subsegmental information in the speech signal cueing reduced can’t. This finding is probably an indirect effect of the phonotactic constraints in their native languages: these listeners have difficulties interpreting the subsegmental cues because these cues do not occur or have different functions in their native languages. Dutch resembles English in the phonotactic constraints relevant to the comprehension of can’t, and native Dutch listeners showed similar patterns in their comprehension of native and non-native English to native English listeners. This result supports our conclusion that the major patterns in the comprehension results are driven by the phonotactic constraints in the listeners’ native languages.
  • Ernestus, M. (2009). The roles of reconstruction and lexical storage in the comprehension of regular pronunciation variants. In Proceedings of the 10th Annual Conference of the International Speech Communication Association (Interspeech 2009) (pp. 1875-1878). Causal Productions Pty Ltd.

    Abstract

    This paper investigates how listeners process regular pronunciation variants, resulting from simple general reduction processes. Study 1 shows that when listeners are presented with new words, they store the pronunciation variants presented to them, whether these are unreduced or reduced. Listeners thus store information on word-specific pronunciation variation. Study 2 suggests that if participants are presented with regularly reduced pronunciations, they also reconstruct and store the corresponding unreduced pronunciations. These unreduced pronunciations apparently have special status. Together the results support hybrid models of speech processing, assuming roles for both exemplars and abstract representations.
  • Eryilmaz, K., & Little, H. (2017). Using Leap Motion to investigate the emergence of structure in speech and language. Behavior Research Methods, 49(5), 1748-1768. doi:10.3758/s13428-016-0818-x.

    Abstract

    In evolutionary linguistics, experiments using artificial signal spaces are being used to investigate the emergence of speech structure. These signal spaces need to be continuous, non-discretised spaces from which discrete units and patterns can emerge. They need to be dissimilar from – but comparable with – the vocal tract, in order to minimise interference from pre-existing linguistic knowledge, while informing us about language. This is a hard balance to strike. This article outlines a new approach which uses the Leap Motion, an infra-red controller which can convert manual movement in 3D space into sound. The signal space using this approach is more flexible than signal spaces in previous attempts. Further, output data using this approach is simpler to arrange and analyse. The experimental interface was built using free, and mostly open source, libraries in Python. We provide our source code for other researchers as open source.
  • Esteve-Gibert, N., Prieto, P., & Liszkowski, U. (2017). Twelve-month-olds understand social intentions based on prosody and gesture shape. Infancy, 22, 108-129. doi:10.1111/infa.12146.

    Abstract

    Infants infer social and pragmatic intentions underlying attention-directing gestures, but the basis on which infants make these inferences is not well understood. Previous studies suggest that infants rely on information from preceding shared action contexts and joint perceptual scenes. Here, we tested whether 12-month-olds use information from act-accompanying cues, in particular prosody and hand shape, to guide their pragmatic understanding. In Experiment 1, caregivers directed infants’ attention to an object to request it, share interest in it, or inform them about a hidden aspect. Caregivers used distinct prosodic and gestural patterns to express each pragmatic intention. Experiment 2 was identical except that experimenters provided identical lexical information across conditions and used three sets of trained prosodic and gestural patterns. In all conditions, the joint perceptual scenes and preceding shared action contexts were identical. In both experiments, infants reacted appropriately to the adults’ intentions by attending to the object mostly in the sharing interest condition, offering the object mostly in the imperative condition, and searching for the referent mostly in the informing condition. Infants’ ability to comprehend pragmatic intentions based on prosody and gesture shape expands infants’ communicative understanding from common activities to novel situations for which shared background knowledge is missing.
  • Evans, N., & Levinson, S. C. (2009). The myth of language universals: Language diversity and its importance for cognitive science. Behavioral and Brain Sciences, 32(5), 429-492. doi:10.1017/S0140525X0999094X.

    Abstract

    Talk of linguistic universals has given cognitive scientists the impression that languages are all built to a common pattern. In fact, there are vanishingly few universals of language in the direct sense that all languages exhibit them. Instead, diversity can be found at almost every level of linguistic organization. This fundamentally changes the object of enquiry from a cognitive science perspective. This target article summarizes decades of cross-linguistic work by typologists and descriptive linguists, showing just how few and unprofound the universal characteristics of language are, once we honestly confront the diversity offered to us by the world's 6,000 to 8,000 languages. After surveying the various uses of “universal,” we illustrate the ways languages vary radically in sound, meaning, and syntactic organization, and then we examine in more detail the core grammatical machinery of recursion, constituency, and grammatical relations. Although there are significant recurrent patterns in organization, these are better explained as stable engineering solutions satisfying multiple design constraints, reflecting both cultural-historical factors and the constraints of human cognition.
  • Evans, N., & Levinson, S. C. (2009). With diversity in mind: Freeing the language sciences from universal grammar [Author's response]. Behavioral and Brain Sciences, 32(5), 472-484. doi:10.1017/S0140525X09990525.

    Abstract

    Our response takes advantage of the wide-ranging commentary to clarify some aspects of our original proposal and augment others. We argue against the generative critics of our coevolutionary program for the language sciences, defend the use of close-to-surface models as minimizing crosslinguistic data distortion, and stress the growing role of stochastic simulations in making generalized historical accounts testable. These methods lead the search for general principles away from idealized representations and towards selective processes. Putting cultural evolution central in understanding language diversity makes learning fundamental in the cognition of language: increasingly powerful models of general learning, paired with channelled caregiver input, seem set to manage language acquisition without recourse to any innate “universal grammar.” Understanding why human language has no clear parallels in the animal world requires a cross-species perspective: crucial ingredients are vocal learning (for which there are clear non-primate parallels) and an intention-attributing cognitive infrastructure that provides a universal base for language evolution. We conclude by situating linguistic diversity within a broader trend towards understanding human cognition through the study of variation in, for example, human genetics, neurocognition, and psycholinguistic processing.
  • Everett, D., & Majid, A. (2009). Adventures in the jungle of language. [Interview by Asifa Majid and Jon Sutton.]. The Psychologist, 22(4), 312-313. Retrieved from http://www.thepsychologist.org.uk/archive/archive_home.cfm?volumeID=22&editionID=174&ArticleID=1494.

    Abstract

    Daniel Everett has spent his career in the Amazon, challenging some fundamental ideas about language and thought. Asifa Majid and Jon Sutton pose the questions.
  • Faber, M., Mak, M., & Willems, R. M. (2020). Word skipping as an indicator of individual reading style during literary reading. Journal of Eye Movement Research, 13(3): 2. doi:10.16910/jemr.13.3.2.

    Abstract

    Decades of research have established that the content of language (e.g. lexical characteristics of words) predicts eye movements during reading. Here we investigate whether there exist individual differences in ‘stable’ eye movement patterns during narrative reading. We computed Euclidean distances from correlations between gaze duration time courses (word level) across 102 participants who each read three literary narratives in Dutch. The resulting distance matrices were compared between narratives using a Mantel test. The results show that correlations between the scaling matrices of different narratives are relatively weak (r ≤ .11) when missing data points are ignored. However, when including these data points as zero durations (i.e. skipped words), we found significant correlations between stories (r > .51). Word skipping was significantly positively associated with print exposure but not with self-rated attention and story-world absorption, suggesting that more experienced readers are more likely to skip words, and do so in a comparable fashion. We interpret this finding as suggesting that word skipping might be a stable individual eye movement pattern.
  • Fairs, A. (2019). Linguistic dual-tasking: Understanding temporal overlap between production and comprehension. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Favier, S. (2020). Individual differences in syntactic knowledge and processing: Exploring the role of literacy experience. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Favier, S., Wright, A., Meyer, A. S., & Huettig, F. (2019). Proficiency modulates between- but not within-language structural priming. Journal of Cultural Cognitive Science, 3(suppl. 1), 105-124. doi:10.1007/s41809-019-00029-1.

    Abstract

    The oldest of the Celtic language family, Irish differs considerably from English, notably with respect to word order and case marking. In spite of differences in surface constituent structure, less restricted accounts of bilingual shared syntax predict that processing datives and passives in Irish should prime the production of their English equivalents. Furthermore, this cross-linguistic influence should be sensitive to L2 proficiency, if shared structural representations are assumed to develop over time. In Experiment 1, we investigated cross-linguistic structural priming from Irish to English in 47 bilingual adolescents who are educated through Irish. Testing took place in a classroom setting, using written primes and written sentence generation. We found that priming for prepositional-object (PO) datives was predicted by self-rated Irish (L2) proficiency, in line with previous studies. In Experiment 2, we presented translations of the materials to an English-educated control group (n=54). We found a within-language priming effect for PO datives, which was not modulated by English (L1) proficiency. Our findings are compatible with current theories of bilingual language processing and L2 syntactic acquisition.
  • Fazekas, J., Jessop, A., Pine, J., & Rowland, C. F. (2020). Do children learn from their prediction mistakes? A registered report evaluating error-based theories of language acquisition. Royal Society Open Science, 7(11): 180877. doi:10.1098/rsos.180877.

    Abstract

    Error-based theories of language acquisition suggest that children, like adults, continuously make and evaluate predictions in order to reach an adult-like state of language use. However, while these theories have become extremely influential, their central claim – that unpredictable input leads to higher rates of lasting change in linguistic representations – has scarcely been tested. We designed a prime surprisal-based intervention study to assess this claim. As predicted, both 5- to 6-year-old children (n=72) and adults (n=72) showed a pre- to post-test shift towards producing the dative syntactic structure they were exposed to in surprising sentences. The effect was significant in both age groups together, and in the child group separately when participants with ceiling performance in the pre-test were excluded. Secondary predictions were not upheld: we found no verb-based learning effects and there was only reliable evidence for immediate prime surprisal effects in the adult, but not in the child group. To our knowledge this is the first published study demonstrating enhanced learning rates for the same syntactic structure when it appeared in surprising as opposed to predictable contexts, thus providing crucial support for error-based theories of language acquisition.
  • Fedor, A., Pléh, C., Brauer, J., Caplan, D., Friederici, A. D., Gulyás, B., Hagoort, P., Nazir, T., & Singer, W. (2009). What are the brain mechanisms underlying syntactic operations? In D. Bickerton, & E. Szathmáry (Eds.), Biological foundations and origin of syntax (pp. 299-324). Cambridge, MA: MIT Press.

    Abstract

    This chapter summarizes the extensive discussions that took place during the Forum as well as in the subsequent months thereafter. It assesses current understanding of the neuronal mechanisms that underlie syntactic structure and processing.... It is posited that to understand the neurobiology of syntax, it might be worthwhile to shift the balance from comprehension to syntactic encoding in language production.
  • Fedorenko, E., Patel, A., Casasanto, D., Winawer, J., & Gibson, E. (2009). Structural integration in language and music: Evidence for a shared system. Memory & Cognition, 37, 1-9. doi:10.3758/MC.37.1.1.

    Abstract

    In this study, we investigate whether language and music share cognitive resources for structural processing. We report an experiment that used sung materials and manipulated linguistic complexity (subject-extracted relative clauses, object-extracted relative clauses) and musical complexity (in-key critical note, out-of-key critical note, auditory anomaly on the critical note involving a loudness increase). The auditory-anomaly manipulation was included in order to test whether the difference between in-key and out-of-key conditions might be due to any salient, unexpected acoustic event. The critical dependent measure involved comprehension accuracies to questions about the propositional content of the sentences asked at the end of each trial. The results revealed an interaction between linguistic and musical complexity such that the difference between the subject- and object-extracted relative clause conditions was larger in the out-of-key condition than in the in-key and auditory-anomaly conditions. These results provide evidence for an overlap in structural processing between language and music.
  • Felker, E. R., Ernestus, M., & Broersma, M. (2019). Evaluating dictation task measures for the study of speech perception. In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 383-387). Canberra, Australia: Australasian Speech Science and Technology Association Inc.

    Abstract

    This paper shows that the dictation task, a well-known testing instrument in language education, has untapped potential as a research tool for studying speech perception. We describe how transcriptions can be scored on measures of lexical, orthographic, phonological, and semantic similarity to target phrases to provide comprehensive information about accuracy at different processing levels. The former three measures are automatically extractable, increasing objectivity, and the middle two are gradient, providing finer-grained information than traditionally used. We evaluate the measures in an English dictation task featuring phonetically reduced continuous speech. Whereas the lexical and orthographic measures emphasize listeners’ word identification difficulties, the phonological measure demonstrates that listeners can often still recover phonological features, and the semantic measure captures their ability to get the gist of the utterances. Correlational analyses and a discussion of practical and theoretical considerations show that combining multiple measures improves the dictation task’s utility as a research tool.
  • Felker, E. R., Ernestus, M., & Broersma, M. (2019). Lexically guided perceptual learning of a vowel shift in an interactive L2 listening context. In Proceedings of Interspeech 2019 (pp. 3123-3127). doi:10.21437/Interspeech.2019-1414.

    Abstract

    Lexically guided perceptual learning has traditionally been studied with ambiguous consonant sounds to which native listeners are exposed in a purely receptive listening context. To extend previous research, we investigate whether lexically guided learning applies to a vowel shift encountered by non-native listeners in an interactive dialogue. Dutch participants played a two-player game in English in either a control condition, which contained no evidence for a vowel shift, or a lexically constraining condition, in which onscreen lexical information required them to re-interpret their interlocutor’s /ɪ/ pronunciations as representing /ε/. A phonetic categorization pre-test and post-test were used to assess whether the game shifted listeners’ phonemic boundaries such that more of the /ε/-/ɪ/ continuum came to be perceived as /ε/. Both listener groups showed an overall post-test shift toward /ɪ/, suggesting that vowel perception may be sensitive to directional biases related to properties of the speaker’s vowel space. Importantly, listeners in the lexically constraining condition made relatively more post-test /ε/ responses than the control group, thereby exhibiting an effect of lexically guided adaptation. The results thus demonstrate that non-native listeners can adjust their phonemic boundaries on the basis of lexical information to accommodate a vowel shift learned in interactive conversation.
  • Felker, E. R., Klockmann, H. E., & De Jong, N. H. (2019). How conceptualizing influences fluency in first and second language speech production. Applied Psycholinguistics, 40(1), 111-136. doi:10.1017/S0142716418000474.

    Abstract

    When speaking in any language, speakers must conceptualize what they want to say before they can formulate and articulate their message. We present two experiments employing a novel experimental paradigm in which the formulating and articulating stages of speech production were kept identical across conditions of differing conceptualizing difficulty. We tracked the effect of difficulty in conceptualizing during the generation of speech (Experiment 1) and during the abandonment and regeneration of speech (Experiment 2) on speaking fluency by Dutch native speakers in their first (L1) and second (L2) language (English). The results showed that abandoning and especially regenerating a speech plan taxes the speaker, leading to disfluencies. For most fluency measures, the increases in disfluency were similar across L1 and L2. However, a significant interaction revealed that abandoning and regenerating a speech plan increases the time needed to solve conceptual difficulties while speaking in the L2 to a greater degree than in the L1. This finding supports theories in which cognitive resources for conceptualizing are shared with those used for later stages of speech planning. Furthermore, a practical implication for language assessment is that increasing the conceptual difficulty of speaking tasks should be considered with caution.
  • Ferraro, S., Nigri, A., D'incerti, L., Rosazza, C., Sattin, D., Sebastiano, D. R., Visani, E., Duran, D., Marotta, G., De Michelis, G., Catricalà, E., Kotz, S. A., Verga, L., Leonardi, M., Cappa, S. F., & Bruzzone, M. G. (2020). Preservation of language processing and auditory performance in patients with disorders of consciousness: a multimodal assessment. Frontiers in Neurology, 11: 526465. doi:10.3389/fneur.2020.526465.

    Abstract

    The impact of language impairment on the clinical assessment of patients suffering from disorders of consciousness (DOC) is unknown or underestimated, and may mask the presence of conscious behavior. In a group of DOC patients (n=11; time post-injury range: 5-252 months), we investigated the main neural functional and structural underpinnings of linguistic processing, and their relationship with the behavioral measures of the auditory function, using the Coma Recovery Scale-Revised (CRS-R). We assessed the integrity of the brainstem auditory pathways, of the left superior temporal gyrus and arcuate fasciculus, the neural activity elicited by passive listening of an auditory language task, and the mean hemispheric glucose metabolism.
    Our results support the hypothesis of a relationship between the level of preservation of the investigated structures/functions and the CRS-R auditory subscale scores.
    Moreover, our findings indicate that patients in minimally conscious state minus (MCS-): 1) when presenting the “auditory startle” (at the CRS-R auditory subscale) might be aphasic in the receptive domain, being severely impaired in the core language structures/functions; 2) when presenting the “localization to sound” might retain language processing, being almost intact or intact in the core language structures/functions. Despite the small group of investigated patients, our findings ground the clinical measures of the CRS-R auditory subscale in the integrity of the underlying auditory structures/functions. Future studies are needed to confirm our results, which might have important consequences for clinical practice.