Publications

  • Chu, M., & Kita, S. (2016). Co-thought and Co-speech Gestures Are Generated by the Same Action Generation Process. Journal of Experimental Psychology: Learning, Memory, and Cognition, 42(2), 257-270. doi:10.1037/xlm0000168.

    Abstract

    People spontaneously gesture when they speak (co-speech gestures) and when they solve problems silently (co-thought gestures). In this study, we first explored the relationship between these 2 types of gestures and found that individuals who produced co-thought gestures more frequently also produced co-speech gestures more frequently (Experiments 1 and 2). This suggests that the 2 types of gestures are generated from the same process. We then investigated whether both types of gestures can be generated from the representational use of the action generation process that also generates purposeful actions that have a direct physical impact on the world, such as manipulating an object or locomotion (the action generation hypothesis). To this end, we examined the effect of object affordances on the production of both types of gestures (Experiments 3 and 4). We found that individuals produced co-thought and co-speech gestures more often when the stimulus objects afforded action (objects with a smooth surface) than when they did not (objects with a spiky surface). These results support the action generation hypothesis for representational gestures. However, our findings are incompatible with the hypothesis that co-speech representational gestures are solely generated from the speech production process (the speech production hypothesis).
  • Claassen, S., D'Antoni, J., & Senft, G. (2010). Some Trobriand Islands string figures. Bulletin of the International String Figure Association, 17, 72-128.

    Abstract

The construction and execution of fourteen string figures from the Trobriand Islands are given, along with accompanying chants (in the original and in translation) and comparative notes. The figures were made during a 1984 string figure performance by two women in the village of Tauwema, on the island of Kaile’una. The performance was filmed by a team of German researchers. One of the figures appears not to have been recorded before, and the construction method of another was hitherto unknown. Some of the other figures have their own peculiarities.
  • Clark, E. V., & Casillas, M. (2016). First language acquisition. In K. Allan (Ed.), The Routledge Handbook of Linguistics (pp. 311-328). New York: Routledge.
  • Clifton, C. J., Meyer, A. S., Wurm, L. H., & Treiman, R. (2013). Language comprehension and production. In A. F. Healy, & R. W. Proctor (Eds.), Handbook of Psychology, Volume 4, Experimental Psychology. 2nd Edition (pp. 523-547). Hoboken, NJ: Wiley.

    Abstract

In this chapter, we survey the processes of recognizing and producing words and of understanding and creating sentences. Theory and research on these topics have been shaped by debates about how various sources of information are integrated in these processes, and about the role of language structure, as analyzed in the discipline of linguistics. In this chapter, we describe current views of fluent language users' comprehension of spoken and written language and their production of spoken language. We review what we consider to be the most important findings and theories in psycholinguistics, returning again and again to the questions of modularity and the importance of linguistic knowledge. Although we acknowledge the importance of social factors in language use, our focus is on core processes such as parsing and word retrieval that are not necessarily affected by such factors. We do not have space to say much about the important fields of developmental psycholinguistics, which deals with the acquisition of language by children, or applied psycholinguistics, which encompasses such topics as language disorders and language teaching. Although we recognize that there is burgeoning interest in the measurement of brain activity during language processing and how language is represented in the brain, space permits only occasional pointers to work in neuropsychology and the cognitive neuroscience of language. For treatment of these topics, and others, the interested reader could begin with two recent handbooks of psycholinguistics (Gaskell, 2007; Traxler & Gernsbacher, 2006) and a handbook of cognitive neuroscience (Gazzaniga, 2004).
  • Cohen, E. (2010). An author meets her critics: Around "The mind possessed: The cognition of spirit possession in an Afro-Brazilian religious tradition" by Emma Cohen [Response to comments by Diana Espirito Santo, Arnaud Halloy, and Pierre Lienard]. Religion and Society: Advances in Research, 1(1), 164-176. doi:10.3167/arrs.2010.010112.
  • Cohen, E. (2010). Anthropology of knowledge. Journal of the Royal Anthropological Institute, 16(S1), S193-S202. doi:10.1111/j.1467-9655.2010.01617.x.

    Abstract

Explanatory accounts of the emergence, spread, storage, persistence, and transformation of knowledge face numerous theoretical and methodological challenges. This paper argues that although anthropologists are uniquely positioned to address some of these challenges, joint engagement with relevant research in neighbouring disciplines holds considerable promise for advancement in the area. Researchers across the human and social sciences are increasingly recognizing the importance of conjointly operative and mutually contingent bodily, cognitive, neural, and social mechanisms informing the generation and communication of knowledge. Selected cognitive scientific work, in particular, is reviewed here and used to illustrate how anthropology may potentially richly contribute not only to descriptive and interpretive endeavours, but also to the development and substantiation of explanatory accounts.
  • Cohen, E. (2010). [Review of the book The accidental mind: How brain evolution has given us love, memory, dreams, and god, by David J. Linden]. Journal for the Study of Religion, Nature & Culture, 4(3), 235-238. doi:10.1558/jsrnc.v4i3.239.
  • Cohen, E., & Haun, D. B. M. (2013). The development of tag-based cooperation via a socially acquired trait. Evolution and Human Behavior, 34, 230-235. doi:10.1016/j.evolhumbehav.2013.02.001.

    Abstract

    Recent theoretical models have demonstrated that phenotypic traits can support the non-random assortment of cooperators in a population, thereby permitting the evolution of cooperation. In these “tag-based models”, cooperators modulate cooperation according to an observable and hard-to-fake trait displayed by potential interaction partners. Socially acquired vocalizations in general, and speech accent among humans in particular, are frequently proposed as hard to fake and hard to hide traits that display sufficient cross-populational variability to reliably guide such social assortment in fission–fusion societies. Adults’ sensitivity to accent variation in social evaluation and decisions about cooperation is well-established in sociolinguistic research. The evolutionary and developmental origins of these biases are largely unknown, however. Here, we investigate the influence of speech accent on 5–10-year-old children's developing social and cooperative preferences across four Brazilian Amazonian towns. Two sites have a single dominant accent, and two sites have multiple co-existing accent varieties. We found that children's friendship and resource allocation preferences were guided by accent only in sites characterized by accent heterogeneity. Results further suggest that this may be due to a more sensitively tuned ear for accent variation. The demonstrated local-accent preference did not hold in the face of personal cost. Results suggest that mechanisms guiding tag-based assortment are likely tuned according to locally relevant tag-variation.

    Additional information

    Cohen_Suppl_Mat_2013.docx
  • Cohen, E., Ejsmond-Frey, R., Knight, N., & Dunbar, R. (2010). Rowers’ high: Behavioural synchrony is correlated with elevated pain thresholds. Biology Letters, 6, 106-108. doi:10.1098/rsbl.2009.0670.

    Abstract

Physical exercise is known to stimulate the release of endorphins, creating a mild sense of euphoria that has rewarding properties. Using pain tolerance (a conventional non-invasive assay for endorphin release), we show that synchronized training in a college rowing crew creates a heightened endorphin surge compared with a similar training regime carried out alone. This heightened effect from synchronized activity may explain the sense of euphoria experienced during other social activities (such as laughter, music-making and dancing) that are involved in social bonding in humans and possibly other vertebrates.
  • Cohen, E. (2010). Where humans and spirits meet: The politics of rituals and identified spirits in Zanzibar by Kjersti Larsen [Book review]. American Ethnologist, 37, 386-387. doi:10.1111/j.1548-1425.2010.01262_6.x.
  • Collins, J. (2016). The role of language contact in creating correlations between humidity and tone. Journal of Language Evolution, 46-52. doi:10.1093/jole/lzv012.
  • Comasco, E., Schijven, D., de Maeyer, H., Vrettou, M., Nylander, I., Sundström-Poromaa, I., & Olivier, J. D. A. (2019). Constitutive serotonin transporter reduction resembles maternal separation with regard to stress-related gene expression. ACS Chemical Neuroscience, 10, 3132-3142. doi:10.1021/acschemneuro.8b00595.

    Abstract

Interactive effects between allelic variants of the serotonin transporter (5-HTT) promoter-linked polymorphic region (5-HTTLPR) and stressors on depression symptoms have been documented, as well as questioned, by meta-analyses. Translational models of constitutive 5-htt reduction and experimentally controlled stressors often led to inconsistent behavioral and molecular findings and often did not include females. The present study sought to investigate the effect of 5-htt genotype, maternal separation, and sex on the expression of stress-related candidate genes in the rat hippocampus and frontal cortex. The mRNA expression levels of Avp, Pomc, Crh, Crhbp, Crhr1, Bdnf, Ntrk2, Maoa, Maob, and Comt were assessed in the hippocampus and frontal cortex of 5-htt+/− and 5-htt+/+ male and female adult rats exposed, or not, to daily maternal separation for 180 min during the first 2 postnatal weeks. Gene- and brain region-dependent, but sex-independent, interactions between 5-htt genotype and maternal separation were found. Gene expression levels were higher in 5-htt+/+ rats not exposed to maternal separation compared with the other experimental groups. Maternal separation and 5-htt+/− genotype did not yield additive effects on gene expression. Correlative relationships, mainly positive, were observed within, but not across, brain regions in all groups except in non-maternally separated 5-htt+/+ rats. Gene expression patterns in the hippocampus and frontal cortex of rats exposed to maternal separation resembled the ones observed in rats with reduced 5-htt expression regardless of sex. These results suggest that floor effects of 5-htt reduction and maternal separation might explain inconsistent findings in humans and rodents.
  • Connell, L., Cai, Z. G., & Holler, J. (2013). Do you see what I'm singing? Visuospatial movement biases pitch perception. Brain and Cognition, 81, 124-130. doi:10.1016/j.bandc.2012.09.005.

    Abstract

    The nature of the connection between musical and spatial processing is controversial. While pitch may be described in spatial terms such as “high” or “low”, it is unclear whether pitch and space are associated but separate dimensions or whether they share representational and processing resources. In the present study, we asked participants to judge whether a target vocal note was the same as (or different from) a preceding cue note. Importantly, target trials were presented as video clips where a singer sometimes gestured upward or downward while singing that target note, thus providing an alternative, concurrent source of spatial information. Our results show that pitch discrimination was significantly biased by the spatial movement in gesture, such that downward gestures made notes seem lower in pitch than they really were, and upward gestures made notes seem higher in pitch. These effects were eliminated by spatial memory load but preserved under verbal memory load conditions. Together, our findings suggest that pitch and space have a shared representation such that the mental representation of pitch is audiospatial in nature.
  • Cooke, M., García Lecumberri, M. L., Scharenborg, O., & Van Dommelen, W. A. (2010). Language-independent processing in speech perception: Identification of English intervocalic consonants by speakers of eight European languages. Speech Communication, 52, 954-967. doi:10.1016/j.specom.2010.04.004.

    Abstract

Processing speech in a non-native language requires listeners to cope with influences from their first language and to overcome the effects of limited exposure and experience. These factors may be particularly important when listening in adverse conditions. However, native listeners also suffer in noise, and the intelligibility of speech in noise clearly depends on factors which are independent of a listener’s first language. The current study explored the issue of language-independence by comparing the responses of eight listener groups differing in native language when confronted with the task of identifying English intervocalic consonants in three masker backgrounds, viz. stationary speech-shaped noise, temporally-modulated speech-shaped noise and competing English speech. The study analysed the effects of (i) noise type, (ii) speaker, (iii) vowel context, (iv) consonant, (v) phonetic feature classes, (vi) stress position, (vii) gender and (viii) stimulus onset relative to noise onset. A significant degree of similarity in the response to many of these factors was evident across all eight language groups, suggesting that acoustic and auditory considerations play a large role in determining intelligibility. Language-specific influences were observed in the rankings of individual consonants and in the masking effect of competing speech relative to speech-modulated noise.
  • Corps, R. E., Pickering, M. J., & Gambi, C. (2019). Predicting turn-ends in discourse context. Language, Cognition and Neuroscience, 34(5), 615-627. doi:10.1080/23273798.2018.1552008.

    Abstract

    Research suggests that during conversation, interlocutors coordinate their utterances by predicting the speaker’s forthcoming utterance and its end. In two experiments, we used a button-pressing task, in which participants pressed a button when they thought a speaker reached the end of their utterance, to investigate what role the wider discourse plays in turn-end prediction. Participants heard two-utterance sequences, in which the content of the second utterance was or was not constrained by the content of the first. In both experiments, participants responded earlier, but not more precisely, when the first utterance was constraining rather than unconstraining. Response times and precision were unaffected by whether they listened to dialogues or monologues (Experiment 1) and by whether they read the first utterance out loud or silently (Experiment 2), providing no indication that activation of production mechanisms facilitates prediction. We suggest that content predictions aid comprehension but not turn-end prediction.

    Additional information

    plcp_a_1552008_sm1646.pdf
  • Coulson, S., & Lai, V. T. (Eds.). (2016). The metaphorical brain [Research topic]. Lausanne: Frontiers Media. doi:10.3389/978-2-88919-772-9.

    Abstract

    This Frontiers Special Issue will synthesize current findings on the cognitive neuroscience of metaphor, provide a forum for voicing novel perspectives, and promote new insights into the metaphorical brain.
  • Cousminer, D. L., Berry, D. J., Timpson, N. J., Ang, W., Thiering, E., Byrne, E. M., Taal, H. R., Huikari, V., Bradfield, J. P., Kerkhof, M., Groen-Blokhuis, M. M., Kreiner-Møller, E., Marinelli, M., Holst, C., Leinonen, J. T., Perry, J. R. B., Surakka, I., Pietiläinen, O., Kettunen, J., Anttila, V., Kaakinen, M., Sovio, U., Pouta, A., Das, S., Lagou, V., Power, C., Prokopenko, I., Evans, D. M., Kemp, J. P., St Pourcain, B., Ring, S., Palotie, A., Kajantie, E., Osmond, C., Lehtimäki, T., Viikari, J. S., Kähönen, M., Warrington, N. M., Lye, S. J., Palmer, L. J., Tiesler, C. M. T., Flexeder, C., Montgomery, G. W., Medland, S. E., Hofman, A., Hakonarson, H., Guxens, M., Bartels, M., Salomaa, V., Murabito, J. M., Kaprio, J., Sørensen, T. I. A., Ballester, F., Bisgaard, H., Boomsma, D. I., Koppelman, G. H., Grant, S. F. A., Jaddoe, V. W. V., Martin, N. G., Heinrich, J., Pennell, C. E., Raitakari, O. T., Eriksson, J. G., Smith, G. D., Hyppönen, E., Järvelin, M.-R., McCarthy, M. I., Ripatti, S., Widén, E., ReproGen Consortium, & Early Growth Genetics (EGG) Consortium (2013). Genome-wide association and longitudinal analyses reveal genetic loci linking pubertal height growth, pubertal timing and childhood adiposity. Human Molecular Genetics, 22(13), 2735-2747. doi:10.1093/hmg/ddt104.

    Abstract

    The pubertal height growth spurt is a distinctive feature of childhood growth reflecting both the central onset of puberty and local growth factors. Although little is known about the underlying genetics, growth variability during puberty correlates with adult risks for hormone-dependent cancer and adverse cardiometabolic health. The only gene so far associated with pubertal height growth, LIN28B, pleiotropically influences childhood growth, puberty and cancer progression, pointing to shared underlying mechanisms. To discover genetic loci influencing pubertal height and growth and to place them in context of overall growth and maturation, we performed genome-wide association meta-analyses in 18 737 European samples utilizing longitudinally collected height measurements. We found significant associations (P < 1.67 × 10(-8)) at 10 loci, including LIN28B. Five loci associated with pubertal timing, all impacting multiple aspects of growth. In particular, a novel variant correlated with expression of MAPK3, and associated both with increased prepubertal growth and earlier menarche. Another variant near ADCY3-POMC associated with increased body mass index, reduced pubertal growth and earlier puberty. Whereas epidemiological correlations suggest that early puberty marks a pathway from rapid prepubertal growth to reduced final height and adult obesity, our study shows that individual loci associating with pubertal growth have variable longitudinal growth patterns that may differ from epidemiological observations. Overall, this study uncovers part of the complex genetic architecture linking pubertal height growth, the timing of puberty and childhood obesity and provides new information to pinpoint processes linking these traits.
  • Cristia, A., Dupoux, E., Hakuno, Y., Lloyd-Fox, S., Schuetze, M., Kivits, J., Bergvelt, T., Van Gelder, M., Filippin, L., Charron, S., & Minagawa-Kawai, Y. (2013). An online database of infant functional Near InfraRed Spectroscopy studies: A community-augmented systematic review. PLoS One, 8(3): e58906. doi:10.1371/journal.pone.0058906.

    Abstract

    Until recently, imaging the infant brain was very challenging. Functional Near InfraRed Spectroscopy (fNIRS) is a promising, relatively novel technique, whose use is rapidly expanding. As an emergent field, it is particularly important to share methodological knowledge to ensure replicable and robust results. In this paper, we present a community-augmented database which will facilitate precisely this exchange. We tabulated articles and theses reporting empirical fNIRS research carried out on infants below three years of age along several methodological variables. The resulting spreadsheet has been uploaded in a format allowing individuals to continue adding new results, and download the most recent version of the table. Thus, this database is ideal to carry out systematic reviews. We illustrate its academic utility by focusing on the factors affecting three key variables: infant attrition, the reliability of oxygenated and deoxygenated responses, and signal-to-noise ratios. We then discuss strengths and weaknesses of the DBIfNIRS, and conclude by suggesting a set of simple guidelines aimed to facilitate methodological convergence through the standardization of reports.
  • Cristia, A., Seidl, A., & Onishi, K. H. (2010). Indices acoustiques de phonémicité et d'allophonie dans la parole adressée aux enfants. Actes des XXVIIIèmes Journées d’Étude sur la Parole (JEP), 28, 277-280.
  • Cristia, A. (2013). Input to language: The phonetics of infant-directed speech. Language and Linguistics Compass, 7, 157-170. doi:10.1111/lnc3.12015.

    Abstract

Over the first year of life, infant perception changes radically as the child learns the phonology of the ambient language from the speech she is exposed to. Since infant-directed speech attracts the child's attention more than other registers, it is necessary to describe that input in order to understand language development, and to address questions of learnability. In this review, evidence from corpora analyses, experimental studies, and observational paradigms is brought together to outline the first comprehensive empirical picture of infant-directed speech and its effects on language acquisition. The ensuing landscape suggests that infant-directed speech provides an emotionally and linguistically rich input to language acquisition.

    Additional information

    Cristia_Suppl_Material.xls
  • Cristia, A. (2010). Phonetic enhancement of sibilants in infant-directed speech. The Journal of the Acoustical Society of America, 128, 424-434. doi:10.1121/1.3436529.

    Abstract

    The hypothesis that vocalic categories are enhanced in infant-directed speech (IDS) has received a great deal of attention and support. In contrast, work focusing on the acoustic implementation of consonantal categories has been scarce, and positive, negative, and null results have been reported. However, interpreting this mixed evidence is complicated by the facts that the definition of phonetic enhancement varies across articles, that small and heterogeneous groups have been studied across experiments, and further that the categories chosen are likely affected by other characteristics of IDS. Here, an analysis of the English sibilants /s/ and /ʃ/ in a large corpus of caregivers’ speech to another adult and to their infant suggests that consonantal categories are indeed enhanced, even after controlling for typical IDS prosodic characteristics.
  • Cristia, A., Mielke, J., Daland, R., & Peperkamp, S. (2013). Similarity in the generalization of implicitly learned sound patterns. Journal of Laboratory Phonology, 4(2), 259-285.

    Abstract

    A core property of language is the ability to generalize beyond observed examples. In two experiments, we explore how listeners generalize implicitly learned sound patterns to new nonwords and to new sounds, with the goal of shedding light on how similarity affects treatment of potential generalization targets. During the exposure phase, listeners heard nonwords whose onset consonant was restricted to a subset of a natural class (e.g., /d g v z Z/). During the test phase, listeners were presented with new nonwords and asked to judge how frequently they had been presented before; some of the test items began with a consonant from the exposure set (e.g., /d/), and some began with novel consonants with varying relations to the exposure set (e.g., /b/, which is highly similar to all onsets in the training set; /t/, which is highly similar to one of the training onsets; and /p/, which is less similar than the other two). The exposure onset was rated most frequent, indicating that participants encoded onset attestation in the exposure set, and generalized it to new nonwords. Participants also rated novel consonants as somewhat frequent, indicating generalization to onsets that did not occur in the exposure phase. While generalization could be accounted for in terms of featural distance, it was insensitive to natural class structure. Generalization to new sounds was predicted better by models requiring prior linguistic knowledge (either traditional distinctive features or articulatory phonetic information) than by a model based on a linguistically naïve measure of acoustic similarity.
  • Croijmans, I. (2016). Gelukkig kunnen we erover praten: Over de kunst om geuren en smaken in woorden te omschrijven. koffieTcacao, 17, 80-81.
  • Croijmans, I., Speed, L., Arshamian, A., & Majid, A. (2019). Measuring the multisensory imagery of wine: The Vividness of Wine Imagery Questionnaire. Multisensory Research, 32(3), 179-195. doi:10.1163/22134808-20191340.

    Abstract

    When we imagine objects or events, we often engage in multisensory mental imagery. Yet, investigations of mental imagery have typically focused on only one sensory modality — vision. One reason for this is that the most common tool for the measurement of imagery, the questionnaire, has been restricted to unimodal ratings of the object. We present a new mental imagery questionnaire that measures multisensory imagery. Specifically, the newly developed Vividness of Wine Imagery Questionnaire (VWIQ) measures mental imagery of wine in the visual, olfactory, and gustatory modalities. Wine is an ideal domain to explore multisensory imagery because wine drinking is a multisensory experience, it involves the neglected chemical senses (smell and taste), and provides the opportunity to explore the effect of experience and expertise on imagery (from wine novices to experts). The VWIQ questionnaire showed high internal consistency and reliability, and correlated with other validated measures of imagery. Overall, the VWIQ may serve as a useful tool to explore mental imagery for researchers, as well as individuals in the wine industry during sommelier training and evaluation of wine professionals.
  • Croijmans, I., & Majid, A. (2016). Language does not explain the wine-specific memory advantage of wine experts. In A. Papafragou, D. Grodner, D. Mirman, & J. Trueswell (Eds.), Proceedings of the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016) (pp. 141-146). Austin, TX: Cognitive Science Society.

    Abstract

Although people are poor at naming odors, naming a smell helps to remember that odor. Previous studies show wine experts have better memory for smells, and they also name smells differently than novices. Is wine experts’ odor memory verbally mediated? And is the odor memory advantage that experts have over novices restricted to odors in their domain of expertise, or does it generalize? Twenty-four wine experts and 24 novices smelled wines, wine-related odors and common odors, and remembered these. Half the participants also named the smells. Wine experts had better memory for wines, but not for the other odors, indicating their memory advantage is restricted to wine. Wine experts named odors better than novices, but there was no relationship between experts’ ability to name odors and their memory for odors. This suggests experts’ odor memory advantage is not linguistically mediated, but may be the result of differential perceptual learning.
  • Croijmans, I., & Majid, A. (2016). Not all flavor expertise is equal: The language of wine and coffee experts. PLoS One, 11(6): e0155845. doi:10.1371/journal.pone.0155845.

    Abstract

People in Western cultures are poor at naming smells and flavors. However, for wine and coffee experts, describing smells and flavors is part of their daily routine. So are experts better than lay people at conveying smells and flavors in language? If smells and flavors are more easily linguistically expressed by experts, or more codable, then experts should be better than novices at describing smells and flavors. If experts are indeed better, we can also ask how general this advantage is: do experts show higher codability only for smells and flavors they are expert in (i.e., wine experts for wine and coffee experts for coffee) or is their linguistic dexterity more general? To address these questions, wine experts, coffee experts, and novices were asked to describe the smell and flavor of wines, coffees, everyday odors, and basic tastes. The resulting descriptions were compared on a number of measures. We found expertise endows a modest advantage in smell and flavor naming. Wine experts showed more consistency in how they described wine smells and flavors than coffee experts and novices; but coffee experts were not more consistent for coffee descriptions. Neither expert group was any more accurate at identifying everyday smells or tastes. Interestingly, both wine and coffee experts tended to use more source-based terms (e.g., vanilla) in descriptions of their own area of expertise, whereas novices tended to use more evaluative terms (e.g., nice). However, the overall linguistic strategies for both groups were on a par. To conclude, experts only have a limited, domain-specific advantage when communicating about smells and flavors. The ability to communicate about smells and flavors is a matter not only of perceptual training, but of specific linguistic training too.

    Additional information

    Data availability
  • Cronin, K. A. (2013). [Review of the book Chimpanzees of the Lakeshore: Natural history and culture at Mahale by Toshisada Nishida]. Animal Behaviour, 85, 685-686. doi:10.1016/j.anbehav.2013.01.001.

    Abstract

    First paragraph: Motivated by his quest to characterize the society of the last common ancestor of humans and other great apes, Toshisada Nishida set out as a graduate student to the Mahale Mountains on the eastern shore of Lake Tanganyika, Tanzania. This book is a story of his 45 years with the Mahale chimpanzees, or as he calls it, their ethnography. Beginning with his accounts of meeting the Tongwe people and the challenges of provisioning the chimpanzees for habituation, Nishida reveals how he slowly unravelled the unit group and community basis of chimpanzee social organization. The book begins and ends with a feeling of chronological order, starting with his arrival at Mahale and ending with an eye towards the future, with concrete recommendations for protecting wild chimpanzees. However, the bulk of the book is topically organized with chapters on feeding behaviour, growth and development, play and exploration, communication, life histories, sexual strategies, politics and culture.
  • Cronin, K. A., West, V., & Ross, S. R. (2016). Investigating the Relationship between Welfare and Rearing Young in Captive Chimpanzees (Pan troglodytes). Applied Animal Behaviour Science, 181, 166-172. doi:10.1016/j.applanim.2016.05.014.

    Abstract

    Whether the opportunity to breed and rear young improves the welfare of captive animals is currently debated. However, there is very little empirical data available to evaluate this relationship and this study is a first attempt to contribute objective data to this debate. We utilized the existing variation in the reproductive experiences of sanctuary chimpanzees at Chimfunshi Wildlife Orphanage Trust in Zambia to investigate whether breeding and rearing young was associated with improved welfare for adult females (N = 43). We considered several behavioural welfare indicators, including rates of luxury behaviours and abnormal or stress-related behaviours under normal conditions and conditions inducing social stress. Furthermore, we investigated whether spending time with young was associated with good or poor welfare for adult females, regardless of their kin relationship. We used generalized linear mixed models and found no difference between adult females with and without dependent young on any welfare indices, nor did we find that time spent in proximity to unrelated young predicted welfare (all full-null model comparisons likelihood ratio tests P > 0.05). However, we did find that coprophagy was more prevalent among mother-reared than non-mother-reared individuals, in line with recent work suggesting this behaviour may have a different etiology than other behaviours often considered to be abnormal. In sum, the findings from this initial study lend support to the hypothesis that the opportunity to breed and rear young does not provide a welfare benefit for chimpanzees in captivity. We hope this investigation provides a valuable starting point for empirical study into the welfare implications of managed breeding.

  • Cronin, K. A., Schroeder, K. K. E., & Snowdon, C. T. (2010). Prosocial behaviour emerges independent of reciprocity in cottontop tamarins. Proceedings of the Royal Society of London Series B-Biological Sciences, 277, 3845-3851. doi:10.1098/rspb.2010.0879.

    Abstract

    The cooperative breeding hypothesis posits that cooperatively breeding species are motivated to act prosocially, that is, to behave in ways that provide benefits to others, and that cooperative breeding has played a central role in the evolution of human prosociality. However, investigations of prosocial behaviour in cooperative breeders have produced varying results and the mechanisms contributing to this variation are unknown. We investigated whether reciprocity would facilitate prosocial behaviour among cottontop tamarins, a cooperatively breeding primate species likely to engage in reciprocal altruism, by comparing the number of food rewards transferred to partners who had either immediately previously provided or denied rewards to the subject. Subjects were also tested in a non-social control condition. Overall, results indicated that reciprocity increased food transfers. However, temporal analyses revealed that when the tamarins' behaviour was evaluated in relation to the non-social control, results were best explained by (i) an initial depression in the transfer of rewards to partners who recently denied rewards, and (ii) a prosocial effect that emerged late in sessions independent of reciprocity. These results support the cooperative breeding hypothesis, but suggest a minimal role for positive reciprocity, and emphasize the importance of investigating proximate temporal mechanisms underlying prosocial behaviour.
  • Cuskley, C., Dingemanse, M., Kirby, S., & Van Leeuwen, T. M. (2019). Cross-modal associations and synesthesia: Categorical perception and structure in vowel–color mappings in a large online sample. Behavior Research Methods, 51, 1651-1675. doi:10.3758/s13428-019-01203-7.

    Abstract

    We report associations between vowel sounds, graphemes, and colours collected online from over 1000 Dutch speakers. We provide open materials including a Python implementation of the structure measure, and code for a single page web application to run simple cross-modal tasks. We also provide a full dataset of colour-vowel associations from 1164 participants, including over 200 synaesthetes identified using consistency measures. Our analysis reveals salient patterns in cross-modal associations, and introduces a novel measure of isomorphism in cross-modal mappings. We find that while acoustic features of vowels significantly predict certain mappings (replicating prior work), both vowel phoneme category and grapheme category are even better predictors of colour choice. Phoneme category is the best predictor of colour choice overall, pointing to the importance of phonological representations in addition to acoustic cues. Generally, high/front vowels are lighter, more green, and more yellow than low/back vowels. Synaesthetes respond more strongly on some dimensions, choosing lighter and more yellow colours for high and mid front vowels than non-synaesthetes. We also present a novel measure of cross-modal mappings adapted from ecology, which uses a simulated distribution of mappings to measure the extent to which participants' actual mappings are structured isomorphically across modalities. Synaesthetes have mappings that tend to be more structured than non-synaesthetes, and more consistent colour choices across trials correlate with higher structure scores. Nevertheless, the large majority (~70%) of participants produce structured mappings, indicating that the capacity to make isomorphically structured mappings across distinct modalities is shared to a large extent, even if the exact nature of mappings varies across individuals. Overall, this novel structure measure suggests a distribution of structured cross-modal association in the population, with synaesthetes on one extreme and participants with unstructured associations on the other.
  • Cutler, A., Burchfield, A., & Antoniou, M. (2019). A criterial interlocutor tally for successful talker adaptation? In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 1485-1489). Canberra, Australia: Australasian Speech Science and Technology Association Inc.

    Abstract

    Part of the remarkable efficiency of listening is accommodation to unfamiliar talkers’ specific pronunciations by retuning of phonemic inter-category boundaries. Such retuning occurs in second (L2) as well as first language (L1); however, recent research with émigrés revealed successful adaptation in the environmental L2 but, unprecedentedly, not in L1 despite continuing L1 use. A possible explanation involving relative exposure to novel talkers is here tested in heritage language users with Mandarin as family L1 and English as environmental language. In English, exposure to an ambiguous sound in disambiguating word contexts prompted the expected adjustment of phonemic boundaries in subsequent categorisation. However, no adjustment occurred in Mandarin, again despite regular use. Participants reported highly asymmetric interlocutor counts in the two languages. We conclude that successful retuning ability requires regular exposure to novel talkers in the language in question, a criterion not met for the émigrés’ or for these heritage users’ L1.
  • Cutler, A., & Norris, D. (2016). Bottoms up! How top-down pitfalls ensnare speech perception researchers too. Commentary on C. Firestone & B. Scholl: Cognition does not affect perception: Evaluating the evidence for 'top-down' effects. Behavioral and Brain Sciences, e236. doi:10.1017/S0140525X15002745.

    Abstract

    Not only can the pitfalls that Firestone & Scholl (F&S) identify be generalised across multiple studies within the field of visual perception, but also they have general application outside the field wherever perceptual and cognitive processing are compared. We call attention to the widespread susceptibility of research on the perception of speech to versions of the same pitfalls.
  • Cutler, A. (2010). Abstraction-based efficiency in the lexicon. Laboratory Phonology, 1(2), 301-318. doi:10.1515/LABPHON.2010.016.

    Abstract

    Listeners learn from their past experience of listening to spoken words, and use this learning to maximise the efficiency of future word recognition. This paper summarises evidence that the facilitatory effects of drawing on past experience are mediated by abstraction, enabling learning to be generalised across new words and new listening situations. Phoneme category retuning, which allows adaptation to speaker-specific articulatory characteristics, is generalised on the basis of relatively brief experience to words previously unheard from that speaker. Abstract knowledge of prosodic regularities is applied to recognition even of novel words for which these regularities were violated. Prosodic word-boundary regularities drive segmentation of speech into words independently of the membership of the lexical candidate set resulting from the segmentation operation. Each of these different cases illustrates how abstraction from past listening experience has contributed to the efficiency of lexical recognition.
  • Ip, M., & Cutler, A. (2016). Cross-language data on five types of prosodic focus. In J. Barnes, A. Brugos, S. Shattuck-Hufnagel, & N. Veilleux (Eds.), Proceedings of Speech Prosody 2016 (pp. 330-334).

    Abstract

    To examine the relative roles of language-specific and language-universal mechanisms in the production of prosodic focus, we compared production of five different types of focus by native speakers of English and Mandarin. Two comparable dialogues were constructed for each language, with the same words appearing in focused and unfocused position; 24 speakers recorded each dialogue in each language. Duration, F0 (mean, maximum, range), and rms-intensity (mean, maximum) of all critical word tokens were measured. Across the different types of focus, cross-language differences were observed in the degree to which English versus Mandarin speakers use the different prosodic parameters to mark focus, suggesting that while prosody may be universally available for expressing focus, the means of its employment may be considerably language-specific.
  • Cutler, A., El Aissati, A., Hanulikova, A., & McQueen, J. M. (2010). Effects on speech parsing of vowelless words in the phonology. In Abstracts of Laboratory Phonology 12 (pp. 115-116).
  • Cutler, A. (1970). An experimental method for semantic field study. Linguistic Communications, 2, 87-94.

    Abstract

    This paper emphasizes the need for empirical research and objective discovery procedures in semantics, and illustrates a method by which these goals may be obtained. The aim of the methodology described is to provide a description of the internal structure of a semantic field by eliciting the description--in an objective, standardized manner--from a representative group of native speakers. This would produce results that would be equally obtainable by any linguist using the same method under the same conditions with a similarly representative set of informants. The standardized method suggested by the author is the Semantic Differential developed by C. E. Osgood in the 1950s. Applying this method to semantic research, it is further hypothesized that, should different members of a semantic field be employed as concepts on a Semantic Differential task, a factor analysis of the results would reveal the dimensions operative within the body of data. The author demonstrates the use of the Semantic Differential and factor analysis in an actual experiment.
  • Cutler, A., Eisner, F., McQueen, J. M., & Norris, D. (2010). How abstract phonemic categories are necessary for coping with speaker-related variation. In C. Fougeron, B. Kühnert, M. D'Imperio, & N. Vallée (Eds.), Laboratory phonology 10 (pp. 91-111). Berlin: de Gruyter.
  • Cutler, A., Mitterer, H., Brouwer, S., & Tuinman, A. (2010). Phonological competition in casual speech. In Proceedings of DiSS-LPSS Joint Workshop 2010 (pp. 43-46).
  • Cutler, A., Treiman, R., & Van Ooijen, B. (2010). Strategic deployment of orthographic knowledge in phoneme detection. Language and Speech, 53(3), 307 -320. doi:10.1177/0023830910371445.

    Abstract

    The phoneme detection task is widely used in spoken-word recognition research. Alphabetically literate participants, however, are more used to explicit representations of letters than of phonemes. The present study explored whether phoneme detection is sensitive to how target phonemes are, or may be, orthographically realized. Listeners detected the target sounds [b, m, t, f, s, k] in word-initial position in sequences of isolated English words. Response times were faster to the targets [b, m, t], which have consistent word-initial spelling, than to the targets [f, s, k], which are inconsistently spelled, but only when spelling was rendered salient by the presence in the experiment of many irregularly spelled filler words. Within the inconsistent targets [f, s, k], there was no significant difference between responses to targets in words with more usual (foam, seed, cattle) versus less usual (phone, cede, kettle) spellings. Phoneme detection is thus not necessarily sensitive to orthographic effects; knowledge of spelling stored in the lexical representations of words does not automatically become available as word candidates are activated. However, salient orthographic manipulations in experimental input can induce such sensitivity. We attribute this to listeners' experience of the value of spelling in everyday situations that encourage phonemic decisions (such as learning new names).
  • Cutler, A., Cooke, M., & Lecumberri, M. L. G. (2010). Preface. Speech Communication, 52, 863. doi:10.1016/j.specom.2010.11.003.

    Abstract

    Adverse listening conditions always make the perception of speech harder, but their deleterious effect is far greater if the speech we are trying to understand is in a non-native language. An imperfect signal can be coped with by recourse to the extensive knowledge one has of a native language, and imperfect knowledge of a non-native language can still support useful communication when speech signals are high-quality. But the combination of imperfect signal and imperfect knowledge leads rapidly to communication breakdown. This phenomenon is undoubtedly well known to every reader of Speech Communication from personal experience. Many readers will also have a professional interest in explaining, or remedying, the problems it produces. The journal’s readership being a decidedly interdisciplinary one, this interest will involve quite varied scientific approaches, including (but not limited to) modelling the interaction of first and second language vocabularies and phonemic repertoires, developing targeted listening training for language learners, and redesigning the acoustics of classrooms and conference halls. In other words, the phenomenon that this special issue deals with is a well-known one, that raises important scientific and practical questions across a range of speech communication disciplines, and Speech Communication is arguably the ideal vehicle for presentation of such a breadth of approaches in a single volume. The call for papers for this issue elicited a large number of submissions from across the full range of the journal’s interdisciplinary scope, requiring the guest editors to apply very strict criteria to the final selection. Perhaps unique in the history of treatments of this topic is the combination represented by the guest editors for this issue: a phonetician whose primary research interest is in second-language speech (MLGL), an engineer whose primary research field is the acoustics of masking in speech processing (MC), and a psychologist whose primary research topic is the recognition of spoken words (AC). In the opening article of the issue, these three authors together review the existing literature on listening to second-language speech under adverse conditions, bringing together these differing perspectives for the first time in a single contribution. The introductory review is followed by 13 new experimental reports of phonetic, acoustic and psychological studies of the topic. The guest editors thank Speech Communication editor Marc Swerts and the journal’s team at Elsevier, as well as all the reviewers who devoted time and expert efforts to perfecting the contributions to this issue.
  • Cutler, A., & Shanley, J. (2010). Validation of a training method for L2 continuous-speech segmentation. In Proceedings of the 11th Annual Conference of the International Speech Communication Association (Interspeech 2010), Makuhari, Japan (pp. 1844-1847).

    Abstract

    Recognising continuous speech in a second language is often unexpectedly difficult, as the operation of segmenting speech is so attuned to native-language structure. We report the initial steps in development of a novel training method for second-language listening, focusing on speech segmentation and employing a task designed for studying this: word-spotting. Listeners detect real words in sequences consisting of a word plus a minimal context. The present validation study shows that learners from varying non-English backgrounds successfully perform a version of this task in English, and display appropriate sensitivity to structural factors that also affect segmentation by native English listeners.
  • Cutler, A., & Bruggeman, L. (2013). Vocabulary structure and spoken-word recognition: Evidence from French reveals the source of embedding asymmetry. In Proceedings of INTERSPEECH: 14th Annual Conference of the International Speech Communication Association (pp. 2812-2816).

    Abstract

    Vocabularies contain hundreds of thousands of words built from only a handful of phonemes, so that inevitably longer words tend to contain shorter ones. In many languages (but not all) such embedded words occur more often word-initially than word-finally, and this asymmetry, if present, has far-reaching consequences for spoken-word recognition. Prior research had ascribed the asymmetry to suffixing or to effects of stress (in particular, final syllables containing the vowel schwa). Analyses of the standard French vocabulary here reveal an effect of suffixing, as predicted by this account, and further analyses of an artificial variety of French reveal that extensive final schwa has an independent and additive effect in promoting the embedding asymmetry.
  • D'Alessandra, Y., Carena, M. C., Spazzafumo, L., Martinelli, F., Bassetti, B., Devanna, P., Rubino, M., Marenzi, G., Colombo, G. I., Achilli, F., Maggiolini, S., Capogrossi, M. C., & Pompilio, G. (2013). Diagnostic Potential of Plasmatic MicroRNA Signatures in Stable and Unstable Angina. PLoS ONE, 8(11), e80345. doi:10.1371/journal.pone.0080345.

    Abstract

    PURPOSE: We examined circulating miRNA expression profiles in plasma of patients with coronary artery disease (CAD) vs. matched controls, with the aim of identifying novel discriminating biomarkers of Stable (SA) and Unstable (UA) angina. METHODS: An exploratory analysis of plasmatic expression profile of 367 miRNAs was conducted in a group of SA and UA patients and control donors, using TaqMan microRNA Arrays. Screening confirmation and expression analysis were performed by qRT-PCR: all miRNAs found dysregulated were examined in the plasma of troponin-negative UA (n=19) and SA (n=34) patients and control subjects (n=20), matched for sex, age, and cardiovascular risk factors. In addition, the expression of 14 known CAD-associated miRNAs was also investigated. RESULTS: Out of 178 miRNAs consistently detected in plasma samples, 3 showed positive modulation by CAD when compared to controls: miR-337-5p, miR-433, and miR-485-3p. Further, miR-1, -122, -126, -133a, -133b, and miR-199a were positively modulated in both UA and SA patients, while miR-337-5p and miR-145 showed a positive modulation only in SA or UA patients, respectively. ROC curve analyses showed a good diagnostic potential (AUC ≥ 0.85) for miR-1, -126, and -483-5p in SA and for miR-1, -126, and -133a in UA patients vs. controls, respectively. No discriminating AUC values were observed comparing SA vs. UA patients. Hierarchical cluster analysis showed that the combination of miR-1, -133a, and -126 in UA and of miR-1, -126, and -485-3p in SA correctly classified patients vs. controls with an efficiency ≥ 87%. No combination of miRNAs was able to reliably discriminate patients with UA from patients with SA. CONCLUSIONS: This work showed that specific plasmatic miRNA signatures have the potential to accurately discriminate patients with angiographically documented CAD from matched controls. We failed to identify a plasmatic miRNA expression pattern capable to differentiate SA from UA patients.
  • D'Alessandra, Y., Devanna, P., Limana, F., Straino, S., Di Carlo, A., Brambilla, P. G., Rubino, M., Carena, M. C., Spazzafumo, L., De Simone, M., Micheli, B., Biglioli, P., Achilli, F., Martelli, F., Maggiolini, S., Marenzi, G., Pompilio, G., & Capogrossi, M. C. (2010). Circulating microRNAs are new and sensitive biomarkers of myocardial infarction. European Heart Journal, 31(22), 2765-2773. doi:10.1093/eurheartj/ehq167.

    Abstract

    Aims Circulating microRNAs (miRNAs) may represent a novel class of biomarkers; therefore, we examined whether acute myocardial infarction (MI) modulates miRNAs plasma levels in humans and mice. Methods and results Healthy donors (n = 17) and patients (n = 33) with acute ST-segment elevation MI (STEMI) were evaluated. In one cohort (n = 25), the first plasma sample was obtained 517 ± 309 min after the onset of MI symptoms and after coronary reperfusion with percutaneous coronary intervention (PCI); miR-1, -133a, -133b, and -499-5p were ∼15- to 140-fold control, whereas miR-122 and -375 were ∼87–90% lower than control; 5 days later, miR-1, -133a, -133b, -499-5p, and -375 were back to baseline, whereas miR-122 remained lower than control through Day 30. In additional patients (n = 8; four treated with thrombolysis and four with PCI), miRNAs and troponin I (TnI) were quantified simultaneously starting 156 ± 72 min after the onset of symptoms and at different times thereafter. Peak miR-1, -133a, and -133b expression and TnI level occurred at a similar time, whereas miR-499-5p exhibited a slower time course. In mice, miRNAs plasma levels and TnI were measured 15 min after coronary ligation and at different times thereafter. The behaviour of miR-1, -133a, -133b, and -499-5p was similar to STEMI patients; further, reciprocal changes in the expression levels of these miRNAs were found in cardiac tissue 3–6 h after coronary ligation. In contrast, miR-122 and -375 exhibited minor changes and no significant modulation. In mice with acute hind-limb ischaemia, there was no increase in the plasma level of the above miRNAs. Conclusion Acute MI up-regulated miR-1, -133a, -133b, and -499-5p plasma levels, both in humans and mice, whereas miR-122 and -375 were lower than control only in STEMI patients. These miRNAs represent novel biomarkers of cardiac damage.
  • Dastjerdi, M., Ozker, M., Foster, B. L., Rangarajan, V., & Parvizi, J. (2013). Numerical processing in the human parietal cortex during experimental and natural conditions. Nature Communications, 4: 2528. doi:10.1038/ncomms3528.

    Abstract

    Human cognition is traditionally studied in experimental conditions wherein confounding complexities of the natural environment are intentionally eliminated. Thus, it remains unknown how a brain region involved in a particular experimental condition is engaged in natural conditions. Here we use electrocorticography to address this uncertainty in three participants implanted with intracranial electrodes and identify activations of neuronal populations within the intraparietal sulcus region during an experimental arithmetic condition. In a subsequent analysis, we report that the same intraparietal sulcus neural populations are activated when participants, engaged in social conversations, refer to objects with numerical content. Our prototype approach provides a means for both exploring human brain dynamics as they unfold in complex social settings and reconstructing natural experiences from recorded brain signals.
  • Davidson, D., & Martin, A. E. (2013). Modeling accuracy as a function of response time with the generalized linear mixed effects model. Acta Psychologica, 144(1), 83-96. doi:10.1016/j.actpsy.2013.04.016.

    Abstract

    In psycholinguistic studies using error rates as a response measure, response times (RT) are most often analyzed independently of the error rate, although it is widely recognized that they are related. In this paper we present a mixed effects logistic regression model for the error rate that uses RT as a trial-level fixed- and random-effect regression input. Production data from a translation–recall experiment are analyzed as an example. Several model comparisons reveal that RT improves the fit of the regression model for the error rate. Two simulation studies then show how the mixed effects regression model can identify individual participants for whom (a) faster responses are more accurate, (b) faster responses are less accurate, or (c) there is no relation between speed and accuracy. These results show that this type of model can serve as a useful adjunct to traditional techniques, allowing psycholinguistic researchers to examine more closely the relationship between RT and accuracy in individual subjects and better account for the variability which may be present, as well as a preliminary step to more advanced RT–accuracy modeling.
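    The trial-level speed–accuracy relation described above can be illustrated with a small simulation. The following is a minimal sketch in plain numpy, not the authors' mixed-effects model: it fits a separate logistic regression of accuracy on standardised RT for three hypothetical subjects whose true speed–accuracy relations differ, recovering subjects of types (a), (b), and (c).

```python
# Sketch: per-subject logistic regression of accuracy on RT.
# All subject names, slopes, and sample sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def fit_logistic(x, y, n_iter=50):
    """Fit accuracy ~ intercept + slope * x by Newton-Raphson;
    return the coefficient vector (intercept, slope)."""
    X = np.column_stack([np.ones_like(x), x])
    b = np.zeros(2)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ b))   # predicted P(correct)
        W = p * (1.0 - p)                   # IRLS weights
        grad = X.T @ (y - p)
        hess = X.T @ (X * W[:, None])
        b = b + np.linalg.solve(hess, grad)
    return b

# Three simulated subjects: accuracy rises with RT, falls with RT,
# or is unrelated to RT (cases a, b, and c from the abstract).
true_slopes = {"s1": 2.0, "s2": -2.0, "s3": 0.0}
est = {}
for subj, slope in true_slopes.items():
    rt = rng.normal(0.0, 1.0, size=2000)                 # standardised RT
    p_correct = 1.0 / (1.0 + np.exp(-(0.5 + slope * rt)))
    acc = rng.binomial(1, p_correct)                     # 1 = correct trial
    est[subj] = fit_logistic(rt, acc)[1]                 # estimated slope
    print(subj, round(est[subj], 2))
```

    The estimated slopes recover the sign of each subject's speed–accuracy relation; the paper's full model additionally pools subjects through fixed and random effects rather than fitting each one separately.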
  • Debreslioska, S., Ozyurek, A., Gullberg, M., & Perniss, P. M. (2013). Gestural viewpoint signals referent accessibility. Discourse Processes, 50(7), 431-456. doi:10.1080/0163853x.2013.824286.

    Abstract

    The tracking of entities in discourse is known to be a bimodal phenomenon. Speakers achieve cohesion in speech by alternating between full lexical forms, pronouns, and zero anaphora as they track referents. They also track referents in co-speech gestures. In this study, we explored how viewpoint is deployed in reference tracking, focusing on representations of animate entities in German narrative discourse. We found that gestural viewpoint systematically varies depending on discourse context. Speakers predominantly use character viewpoint in maintained contexts and observer viewpoint in reintroduced contexts. Thus, gestural viewpoint seems to function as a cohesive device in narrative discourse. The findings expand on and provide further evidence for the coordination between speech and gesture on the discourse level that is crucial to understanding the tight link between the two modalities.
  • Dediu, D. (2016). A multi-layered problem. IEEE CDS Newsletter, 13, 14-15.

    Abstract

    A response to Moving Beyond Nature-Nurture: a Problem of Science or Communication? by John Spencer, Mark Blumberg and David Shenk
  • Dediu, D., Cysouw, M., Levinson, S. C., Baronchelli, A., Christiansen, M. H., Croft, W., Evans, N., Garrod, S., Gray, R., Kandler, A., & Lieven, E. (2013). Cultural evolution of language. In P. J. Richerson, & M. H. Christiansen (Eds.), Cultural evolution: Society, technology, language, and religion. Strüngmann Forum Reports, vol. 12 (pp. 303-332). Cambridge, Mass: MIT Press.

    Abstract

    This chapter argues that an evolutionary cultural approach to language not only has already proven fruitful, but it probably holds the key to understanding many puzzling aspects of language, its change and origins. The chapter begins by highlighting several still common misconceptions about language that might seem to call into question a cultural evolutionary approach. It explores the antiquity of language and sketches a general evolutionary approach discussing the aspects of function, fitness, replication, and selection, as well as the relevant units of linguistic evolution. In this context, the chapter looks at some fundamental aspects of linguistic diversity such as the nature of the design space, the mechanisms generating it, and the shape and fabric of language. Given that biology is another evolutionary system, its complex coevolution with language needs to be understood in order to have a proper theory of language. Throughout the chapter, various challenges are identified and discussed, sketching promising directions for future research. The chapter ends by listing the necessary data, methods, and theoretical developments required for a grounded evolutionary approach to language.
  • Dediu, D., & Moisik, S. (2016). Defining and counting phonological classes in cross-linguistic segment databases. In N. Calzolari, K. Choukri, T. Declerck, S. Goggi, M. Grobelnik, B. Maegaard, J. Mariani, H. Mazo, A. Moreno, J. Odijk, & S. Piperidis (Eds.), Proceedings of LREC 2016: 10th International Conference on Language Resources and Evaluation (pp. 1955-1962). Paris: European Language Resources Association (ELRA).

    Abstract

    Recently, there has been an explosion in the availability of large, good-quality cross-linguistic databases such as WALS (Dryer & Haspelmath, 2013), Glottolog (Hammarstrom et al., 2015) and Phoible (Moran & McCloy, 2014). Databases such as Phoible contain the actual segments used by various languages as they are given in the primary language descriptions. However, this segment-level representation cannot be used directly for analyses that require generalizations over classes of segments that share theoretically interesting features. Here we present a method and the associated R (R Core Team, 2014) code that allows the flexible definition of such meaningful classes and that can identify the sets of segments falling into such a class for any language inventory. The method and its results are important for those interested in exploring cross-linguistic patterns of phonetic and phonological diversity and their relationship to extra-linguistic factors and processes such as climate, economics, history or human genetics.
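    The class-definition idea can be sketched as follows. This is a hypothetical Python analogue of the approach (the paper itself provides R code, and a real analysis would draw features from Phoible rather than the toy feature table assumed here): a phonological class is a predicate over feature bundles, applied to each segment of a language's inventory.

```python
# Hypothetical sketch: identifying the members of a phonological class in a
# language's segment inventory. The feature table is a toy assumption, not
# Phoible data.

# Segment -> feature bundle (illustrative values only).
FEATURES = {
    "p": {"voice": False, "manner": "stop"},
    "b": {"voice": True,  "manner": "stop"},
    "t": {"voice": False, "manner": "stop"},
    "d": {"voice": True,  "manner": "stop"},
    "s": {"voice": False, "manner": "fricative"},
    "z": {"voice": True,  "manner": "fricative"},
    "m": {"voice": True,  "manner": "nasal"},
}

def members(inventory, predicate):
    """Return the segments of `inventory` that fall into the class
    defined by `predicate`, a function over feature bundles."""
    return [seg for seg in inventory if predicate(FEATURES[seg])]

# A class is just a named predicate, so new classes are cheap to define
# and can be counted across many inventories.
voiced_stop = lambda f: f["voice"] and f["manner"] == "stop"
fricative = lambda f: f["manner"] == "fricative"

inventory = ["p", "b", "t", "d", "s", "m"]   # one toy language
print(members(inventory, voiced_stop))       # prints ['b', 'd']
print(len(members(inventory, fricative)))    # prints 1
```

    Counting class members per language in this way yields the kind of cross-linguistic class frequencies the abstract describes, at the cost of committing to a particular feature system.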
  • Dediu, D., & Moisik, S. R. (2016). Anatomical biasing of click learning and production: An MRI and 3d palate imaging study. In S. G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Feher, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/57.html.

    Abstract

    The current paper presents results for data on click learning obtained from a larger imaging study (using MRI and 3D intraoral scanning) designed to quantify and characterize intra- and inter-population variation of vocal tract structures and the relation of this to speech production. The aim of the click study was to ascertain whether and to what extent vocal tract morphology influences (1) the ability to learn to produce clicks and (2) the productions of those that successfully learn to produce these sounds. The results indicate that the presence of an alveolar ridge certainly does not prevent an individual from learning to produce click sounds (1). However, the subtle details of how clicks are produced may indeed be driven by palate shape (2).
  • Dediu, D. (2013). Genes: Interactions with language on three levels — Inter-individual variation, historical correlations and genetic biasing. In P.-M. Binder, & K. Smith (Eds.), The language phenomenon: Human communication from milliseconds to millennia (pp. 139-161). Berlin: Springer. doi:10.1007/978-3-642-36086-2_7.

    Abstract

    The complex inter-relationships between genetics and linguistics encompass all four scales highlighted by the contributions to this book and, together with cultural transmission, the genetics of language holds the promise to offer a unitary understanding of this fascinating phenomenon. There are inter-individual differences in genetic makeup which contribute to the obvious fact that we are not identical in the way we understand and use language and, by studying them, we will be able to both better treat and enhance ourselves. There are correlations between the genetic configuration of human groups and their languages, reflecting the historical processes shaping them, and there also seem to exist genes which can influence some characteristics of language, biasing it towards or against certain states by altering the way language is transmitted across generations. Besides the joys of pure knowledge, the understanding of these three aspects of genetics relevant to language will potentially trigger advances in medicine, linguistics, psychology or the understanding of our own past and, last but not least, a profound change in the way we regard one of the emblems of being human: our capacity for language.
  • Dediu, D., & de Boer, B. (2016). Language evolution needs its own journal. Journal of Language Evolution, 1, 1-6. doi:10.1093/jole/lzv001.

    Abstract

    Interest in the origins and evolution of language has been around for as long as language has been around. However, only recently has the empirical study of language come of age. We argue that the field has sufficiently advanced that it now needs its own journal—the Journal of Language Evolution.
  • Dediu, D., & Christiansen, M. H. (2016). Language evolution: Constraints and opportunities from modern genetics. Topics in Cognitive Science, 8, 361-370. doi:10.1111/tops.12195.

    Abstract

    Our understanding of language, its origins and subsequent evolution (including language change) is shaped not only by data and theories from the language sciences, but also fundamentally by the biological sciences. Recent developments in genetics and evolutionary theory offer both very strong constraints on what scenarios of language evolution are possible and probable but also offer exciting opportunities for understanding otherwise puzzling phenomena. Due to the intrinsic breathtaking rate of advancement in these fields, the complexity, subtlety and sometimes apparent non-intuitiveness of the phenomena discovered, some of these recent developments have either been completely missed by language scientists, or misperceived and misrepresented. In this short paper, we offer an update on some of these findings and theoretical developments through a selection of illustrative examples and discussions that cast new light on current debates in the language sciences. The main message of our paper is that life is much more complex and nuanced than anybody could have predicted even a few decades ago, and that we need to be flexible in our theorizing instead of embracing a priori dogmas and trying to patch paradigms that are no longer satisfactory.
  • Dediu, D. (2010). Linguistic and genetic diversity - how and why are they related? In M. Brüne, F. Salter, & W. McGrew (Eds.), Building bridges between anthropology, medicine and human ethology: Tributes to Wulf Schiefenhövel (pp. 169-178). Bochum: Europäischer Universitätsverlag.

    Abstract

    There are some 6000 languages spoken today, classified in approximately 90 linguistic families and many isolates, and also differing along structural, typological dimensions. Genetically, the human species is remarkably homogeneous, with the existing genetic diversity mostly explained by intra-population differences between individuals, but the remaining inter-population differences have a non-trivial structure. Population splits and contacts influence both languages and genes, in principle allowing them to evolve in parallel ways. The farming/language co-dispersal hypothesis is a well-known such theory, whereby farmers spreading agriculture from its places of origin also spread their genes and languages. A different type of relationship was recently proposed, involving a genetic bias which influences the structural properties of language as it is transmitted across generations. Such a bias was proposed to explain the correlations between the distribution of tone languages and two brain development-related human genes and, if confirmed by experimental studies, it could represent a new factor explaining the distribution of diversity. The present chapter overviews these related topics in the hope that a truly interdisciplinary approach could allow a better understanding of our complex (recent as well as evolutionary) history.
  • Dediu, D., & Levinson, S. C. (2013). On the antiquity of language: The reinterpretation of Neandertal linguistic capacities and its consequences. Frontiers in Language Sciences, 4: 397. doi:10.3389/fpsyg.2013.00397.

    Abstract

    It is usually assumed that modern language is a recent phenomenon, coinciding with the emergence of modern humans themselves. Many assume as well that this is the result of a single, sudden mutation giving rise to the full “modern package”. However, we argue here that recognizably modern language is likely an ancient feature of our genus pre-dating at least the common ancestor of modern humans and Neandertals about half a million years ago. To this end, we adduce a broad range of evidence from linguistics, genetics, palaeontology and archaeology clearly suggesting that Neandertals shared with us something like modern speech and language. This reassessment of the antiquity of modern language, from the usually quoted 50,000-100,000 years to half a million years, has profound consequences for our understanding of our own evolution in general and especially for the sciences of speech and language. As such, it argues against a saltationist scenario for the evolution of language, and towards a gradual process of culture-gene co-evolution extending to the present day. Another consequence is that the present-day linguistic diversity might better reflect the properties of the design space for language and not just the vagaries of history, and could also contain traces of the languages spoken by other human forms such as the Neandertals.
  • Dediu, D., & Moisik, S. R. (2019). Pushes and pulls from below: Anatomical variation, articulation and sound change. Glossa: A Journal of General Linguistics, 4(1): 7. doi:10.5334/gjgl.646.

    Abstract

    This paper argues that inter-individual and inter-group variation in language acquisition, perception, processing and production, rooted in our biology, may play a largely neglected role in sound change. We begin by discussing the patterning of these differences, highlighting those related to vocal tract anatomy with a foundation in genetics and development. We use our ArtiVarK database, a large multi-ethnic sample comprising 3D intraoral optical scans, as well as structural, static and real-time MRI scans of vocal tract anatomy and speech articulation, to quantify the articulatory strategies used to produce the North American English /r/ and to statistically show that anatomical factors seem to influence these articulatory strategies. Building on work showing that these alternative articulatory strategies may have indirect coarticulatory effects, we propose two models for how biases due to variation in vocal tract anatomy may affect sound change. The first involves direct overt acoustic effects of such biases that are then reinterpreted by the hearers, while the second is based on indirect coarticulatory phenomena generated by acoustically covert biases that produce overt “at-a-distance” acoustic effects. This view implies that speaker communities might be “poised” for change because they always contain pools of “standing variation” of such biased speakers, and when factors such as the frequency of the biased speakers in the community, their positions in the communicative network or the topology of the network itself change, sound change may rapidly follow as a self-reinforcing network-level phenomenon, akin to a phase transition. Thus, inter-speaker variation in structured and dynamic communicative networks may couple the initiation and actuation of sound change.
  • Dediu, D., & Cysouw, M. A. (2013). Some structural aspects of language are more stable than others: A comparison of seven methods. PLoS One, 8: e55009. doi:10.1371/journal.pone.0055009.

    Abstract

    Understanding the patterns and causes of differential structural stability is an area of major interest for the study of language change and evolution. It is still debated whether structural features have intrinsic stabilities across language families and geographic areas, or if the processes governing their rate of change are completely dependent upon the specific context of a given language or language family. We conducted an extensive literature review and selected seven different approaches to conceptualising and estimating the stability of structural linguistic features, aiming at comparing them using the same dataset, the World Atlas of Language Structures. We found that, despite profound conceptual and empirical differences between these methods, they tend to agree in classifying some structural linguistic features as being more stable than others. This suggests that there are intrinsic properties of such structural features influencing their stability across methods, language families and geographic areas. This finding is a major step towards understanding the nature of structural linguistic features and their interaction with idiosyncratic, lineage- and area-specific factors during language change and evolution.
  • Dediu, D., Janssen, R., & Moisik, S. R. (2019). Weak biases emerging from vocal tract anatomy shape the repeated transmission of vowels. Nature Human Behaviour, 3, 1107-1115. doi:10.1038/s41562-019-0663-x.

    Abstract

    Linguistic diversity is affected by multiple factors, but it is usually assumed that variation in the anatomy of our speech organs plays no explanatory role. Here we use realistic computer models of the human speech organs to test whether inter-individual and inter-group variation in the shape of the hard palate (the bony roof of the mouth) affects acoustics of speech sounds. Based on 107 midsagittal MRI scans of the hard palate of human participants, we modelled with high accuracy the articulation of a set of five cross-linguistically representative vowels by agents learning to produce speech sounds. We found that different hard palate shapes result in subtle differences in the acoustics and articulatory strategies of the produced vowels, and that these individual-level speech idiosyncrasies are amplified by the repeated transmission of language across generations. Therefore, we suggest that, besides culture and environment, quantitative biological variation can be amplified, also influencing language.
  • Dediu, D. (2016). Typology for the masses. Linguistic typology, 20(3), 579-581. doi:10.1515/lingty-2016-0029.
  • Deegan, B., Sturt, B., Ryder, D., Butcher, M., Brumby, S., Long, G., Badngarri, N., Lannigan, J., Blythe, J., & Wightman, G. (2010). Jaru animals and plants: Aboriginal flora and fauna knowledge from the south-east Kimberley and western Top End, north Australia. Halls Creek: Kimberley Language Resource Centre; Palmerston: Department of Natural Resources, Environment, the Arts and Sport.
  • Defina, R. (2016). Do serial verb constructions describe single events? A study of co-speech gestures in Avatime. Language, 92(4), 890-910. doi:10.1353/lan.2016.0076.

    Abstract

    Serial verb constructions have often been said to refer to single conceptual events. However, evidence to support this claim has been elusive. This article introduces co-speech gestures as a new way of investigating the relationship. The alignment patterns of gestures with serial verb constructions and other complex clauses were compared in Avatime (Ka-Togo, Kwa, Niger-Congo). Serial verb constructions tended to occur with single gestures overlapping the entire construction. In contrast, other complex clauses were more likely to be accompanied by distinct gestures overlapping individual verbs. This pattern of alignment suggests that serial verb constructions are in fact used to describe single events.

    Additional information

    https://doi.org/10.1353/lan.2016.0069
  • Defina, R. (2010). Aspect and modality in Avatime. Master Thesis, Leiden University.
  • Defina, R. (2016). Events in language and thought: The case of serial verb constructions in Avatime. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Defina, R. (2016). Serial verb constructions and their subtypes in Avatime. Studies in Language, 40(3), 648-680. doi:10.1075/sl.40.3.07def.
  • Demontis, D., Walters, R. K., Martin, J., Mattheisen, M., Als, T. D., Agerbo, E., Baldursson, G., Belliveau, R., Bybjerg-Grauholm, J., Bækvad-Hansen, M., Cerrato, F., Chambert, K., Churchhouse, C., Dumont, A., Eriksson, N., Gandal, M., Goldstein, J. I., Grasby, K. L., Grove, J., Gudmundsson, O. O., Hansen, C. S., Hauberg, M. E., Hollegaard, M. V., Howrigan, D. P., Huang, H., Maller, J. B., Martin, A. R., Martin, N. G., Moran, J., Pallesen, J., Palmer, D. S., Pedersen, C. B., Pedersen, M. G., Poterba, T., Poulsen, J. B., Ripke, S., Robinson, E. B., Satterstrom, F. K., Stefansson, H., Stevens, C., Turley, P., Walters, G. B., Won, H., Wright, M. J., ADHD Working Group of the Psychiatric Genomics Consortium (PGC), EArly Genetics and Lifecourse Epidemiology (EAGLE) Consortium, 23andme Research Team, Andreassen, O. A., Asherson, P., Burton, C. L., Boomsma, D. I., Cormand, B., Dalsgaard, S., Franke, B., Gelernter, J., Geschwind, D., Hakonarson, H., Haavik, J., Kranzler, H. R., Kuntsi, J., Langley, K., Lesch, K.-P., Middeldorp, C., Reif, A., Rohde, L. A., Roussos, P., Schachar, R., Sklar, P., Sonuga-Barke, E. J. S., Sullivan, P. F., Thapar, A., Tung, J. Y., Waldman, I. D., Medland, S. E., Stefansson, K., Nordentoft, M., Hougaard, D. M., Werge, T., Mors, O., Mortensen, P. B., Daly, M. J., Faraone, S. V., Børglum, A. D., & Neale, B. (2019). Discovery of the first genome-wide significant risk loci for attention deficit/hyperactivity disorder. Nature Genetics, 51, 63-75. doi:10.1038/s41588-018-0269-7.

    Abstract

    Attention deficit/hyperactivity disorder (ADHD) is a highly heritable childhood behavioral disorder affecting 5% of children and 2.5% of adults. Common genetic variants contribute substantially to ADHD susceptibility, but no variants have been robustly associated with ADHD. We report a genome-wide association meta-analysis of 20,183 individuals diagnosed with ADHD and 35,191 controls that identifies variants surpassing genome-wide significance in 12 independent loci, finding important new information about the underlying biology of ADHD. Associations are enriched in evolutionarily constrained genomic regions and loss-of-function intolerant genes and around brain-expressed regulatory marks. Analyses of three replication studies: a cohort of individuals diagnosed with ADHD, a self-reported ADHD sample and a meta-analysis of quantitative measures of ADHD symptoms in the population, support these findings while highlighting study-specific differences on genetic overlap with educational attainment. Strong concordance with GWAS of quantitative population measures of ADHD symptoms supports that clinical diagnosis of ADHD is an extreme expression of continuous heritable traits.
  • den Hoed, M., Eijgelsheim, M., Esko, T., Brundel, B. J. J. M., Peal, D. S., Evans, D. M., Nolte, I. M., Segrè, A. V., Holm, H., Handsaker, R. E., Westra, H.-J., Johnson, T., Isaacs, A., Yang, J., Lundby, A., Zhao, J. H., Kim, Y. J., Go, M. J., Almgren, P., Bochud, M., Boucher, G., Cornelis, M. C., Gudbjartsson, D., Hadley, D., van der Harst, P., Hayward, C., den Heijer, M., Igl, W., Jackson, A. U., Kutalik, Z., Luan, J., Kemp, J. P., Kristiansson, K., Ladenvall, C., Lorentzon, M., Montasser, M. E., Njajou, O. T., O'Reilly, P. F., Padmanabhan, S., St Pourcain, B., Rankinen, T., Salo, P., Tanaka, T., Timpson, N. J., Vitart, V., Waite, L., Wheeler, W., Zhang, W., Draisma, H. H. M., Feitosa, M. F., Kerr, K. F., Lind, P. A., Mihailov, E., Onland-Moret, N. C., Song, C., Weedon, M. N., Xie, W., Yengo, L., Absher, D., Albert, C. M., Alonso, A., Arking, D. E., de Bakker, P. I. W., Balkau, B., Barlassina, C., Benaglio, P., Bis, J. C., Bouatia-Naji, N., Brage, S., Chanock, S. J., Chines, P. S., Chung, M., Darbar, D., Dina, C., Dörr, M., Elliott, P., Felix, S. B., Fischer, K., Fuchsberger, C., de Geus, E. J. C., Goyette, P., Gudnason, V., Harris, T. B., Hartikainen, A.-L., Havulinna, A. S., Heckbert, S. R., Hicks, A. A., Hofman, A., Holewijn, S., Hoogstra-Berends, F., Hottenga, J.-J., Jensen, M. K., Johansson, A., Junttila, J., Kääb, S., Kanon, B., Ketkar, S., Khaw, K.-T., Knowles, J. W., Kooner, A. S., Kors, J. A., Kumari, M., Milani, L., Laiho, P., Lakatta, E. G., Langenberg, C., Leusink, M., Liu, Y., Luben, R. N., Lunetta, K. L., Lynch, S. N., Markus, M. R. P., Marques-Vidal, P., Mateo Leach, I., McArdle, W. L., McCarroll, S. A., Medland, S. E., Miller, K. A., Montgomery, G. W., Morrison, A. 
C., Müller-Nurasyid, M., Navarro, P., Nelis, M., O'Connell, J. R., O'Donnell, C. J., Ong, K. K., Newman, A. B., Peters, A., Polasek, O., Pouta, A., Pramstaller, P. P., Psaty, B. M., Rao, D. C., Ring, S. M., Rossin, E. J., Rudan, D., Sanna, S., Scott, R. A., Sehmi, J. S., Sharp, S., Shin, J. T., Singleton, A. B., Smith, A. V., Soranzo, N., Spector, T. D., Stewart, C., Stringham, H. M., Tarasov, K. V., Uitterlinden, A. G., Vandenput, L., Hwang, S.-J., Whitfield, J. B., Wijmenga, C., Wild, S. H., Willemsen, G., Wilson, J. F., Witteman, J. C. M., Wong, A., Wong, Q., Jamshidi, Y., Zitting, P., Boer, J. M. A., Boomsma, D. I., Borecki, I. B., van Duijn, C. M., Ekelund, U., Forouhi, N. G., Froguel, P., Hingorani, A., Ingelsson, E., Kivimaki, M., Kronmal, R. A., Kuh, D., Lind, L., Martin, N. G., Oostra, B. A., Pedersen, N. L., Quertermous, T., Rotter, J. I., van der Schouw, Y. T., Verschuren, W. M. M., Walker, M., Albanes, D., Arnar, D. O., Assimes, T. L., Bandinelli, S., Boehnke, M., de Boer, R. A., Bouchard, C., Caulfield, W. L. M., Chambers, J. C., Curhan, G., Cusi, D., Eriksson, J., Ferrucci, L., van Gilst, W. H., Glorioso, N., de Graaf, J., Groop, L., Gyllensten, U., Hsueh, W.-C., Hu, F. B., Huikuri, H. V., Hunter, D. J., Iribarren, C., Isomaa, B., Jarvelin, M.-R., Jula, A., Kähönen, M., Kiemeney, L. A., van der Klauw, M. M., Kooner, J. S., Kraft, P., Iacoviello, L., Lehtimäki, T., Lokki, M.-L.-L., Mitchell, B. D., Navis, G., Nieminen, M. S., Ohlsson, C., Poulter, N. R., Qi, L., Raitakari, O. T., Rimm, E. B., Rioux, J. D., Rizzi, F., Rudan, I., Salomaa, V., Sever, P. S., Shields, D. C., Shuldiner, A. R., Sinisalo, J., Stanton, A. V., Stolk, R. P., Strachan, D. P., Tardif, J.-C., Thorsteinsdottir, U., Tuomilehto, J., van Veldhuisen, D. J., Virtamo, J., Viikari, J., Vollenweider, P., Waeber, G., Widen, E., Cho, Y. S., Olsen, J. V., Visscher, P. M., Willer, C., Franke, L., Erdmann, J., Thompson, J. R., Pfeufer, A., Sotoodehnia, N., Newton-Cheh, C., Ellinor, P. 
T., Stricker, B. H. C., Metspalu, A., Perola, M., Beckmann, J. S., Smith, G. D., Stefansson, K., Wareham, N. J., Munroe, P. B., Sibon, O. C. M., Milan, D. J., Snieder, H., Samani, N. J., Loos, R. J. F., Global BPgen Consortium, CARDIoGRAM Consortium, PR GWAS Consortium, QRS GWAS Consortium, QT-IGC Consortium, & CHARGE-AF Consortium (2013). Identification of heart rate-associated loci and their effects on cardiac conduction and rhythm disorders. Nature Genetics, 45(6), 621-631. doi:10.1038/ng.2610.

    Abstract

    Elevated resting heart rate is associated with greater risk of cardiovascular disease and mortality. In a 2-stage meta-analysis of genome-wide association studies in up to 181,171 individuals, we identified 14 new loci associated with heart rate and confirmed associations with all 7 previously established loci. Experimental downregulation of gene expression in Drosophila melanogaster and Danio rerio identified 20 genes at 11 loci that are relevant for heart rate regulation and highlight a role for genes involved in signal transmission, embryonic cardiac development and the pathophysiology of dilated cardiomyopathy, congenital heart failure and/or sudden cardiac death. In addition, genetic susceptibility to increased heart rate is associated with altered cardiac conduction and reduced risk of sick sinus syndrome, and both heart rate-increasing and heart rate-decreasing variants associate with risk of atrial fibrillation. Our findings provide fresh insights into the mechanisms regulating heart rate and identify new therapeutic targets.
  • Deriziotis, P., & Fisher, S. E. (2013). Neurogenomics of speech and language disorders: The road ahead. Genome Biology, 14: 204. doi:10.1186/gb-2013-14-4-204.

    Abstract

    Next-generation sequencing is set to transform the discovery of genes underlying neurodevelopmental disorders, and so offers important insights into the biological bases of spoken language. Success will depend on functional assessments in neuronal cell lines, animal models and humans themselves.
  • Devanna, P., Dediu, D., & Vernes, S. C. (2019). The Genetics of Language: From complex genes to complex communication. In S.-A. Rueschemeyer, & M. G. Gaskell (Eds.), The Oxford Handbook of Psycholinguistics (2nd ed., pp. 865-898). Oxford: Oxford University Press.

    Abstract

    This chapter discusses the genetic foundations of the human capacity for language. It reviews the molecular structure of the genome and the complex molecular mechanisms that allow genetic information to influence multiple levels of biology. It goes on to describe the active regulation of genes and their formation of complex genetic pathways that in turn control the cellular environment and function. At each of these levels, examples of genes and genetic variants that may influence the human capacity for language are given. Finally, it discusses the value of using animal models to understand the genetic underpinnings of speech and language. From this chapter will emerge the complexity of the genome in action and the multidisciplinary efforts that are currently made to bridge the gap between genetics and language.
  • Devaraju, K., Barnabé-Heider, F., Kokaia, Z., & Lindvall, O. (2013). FoxJ1-expressing cells contribute to neurogenesis in forebrain of adult rats: Evidence from in vivo electroporation combined with piggyBac transposon. Experimental Cell Research, 319(18), 2790-2800. doi:10.1016/j.yexcr.2013.08.028.

    Abstract

    Ependymal cells in the lateral ventricular wall are considered to be post-mitotic but can give rise to neuroblasts and astrocytes after stroke in adult mice due to insult-induced suppression of Notch signaling. The transcription factor FoxJ1, which has been used to characterize mouse ependymal cells, is also expressed by a subset of astrocytes. Cells expressing FoxJ1, which drives the expression of motile cilia, contribute to early postnatal neurogenesis in mouse olfactory bulb. The distribution and progeny of FoxJ1-expressing cells in rat forebrain are unknown. Here we show using immunohistochemistry that the overall majority of FoxJ1-expressing cells in the lateral ventricular wall of adult rats are ependymal cells with a minor population being astrocytes. To allow for long-term fate mapping of FoxJ1-derived cells, we used the piggyBac system for in vivo gene transfer with electroporation. Using this method, we found that FoxJ1-expressing cells, presumably the astrocytes, give rise to neuroblasts and mature neurons in the olfactory bulb both in intact and stroke-damaged brain of adult rats. No significant contribution of FoxJ1-derived cells to stroke-induced striatal neurogenesis was detected. These data indicate that in the adult rat brain, FoxJ1-expressing cells contribute to the formation of new neurons in the olfactory bulb but are not involved in the cellular repair after stroke.
  • Dias, C., Estruch, S. B., Graham, S. A., McRae, J., Sawiak, S. J., Hurst, J. A., Joss, S. K., Holder, S. E., Morton, J. E., Turner, C., Thevenon, J., Mellul, K., Sánchez-Andrade, G., Ibarra-Soria, X., Derizioti, P., Santos, R. F., Lee, S.-C., Faivre, L., Kleefstra, T., Liu, P., Hurles, M. E., DDD Study, Fisher, S. E., & Logan, D. W. (2016). BCL11A haploinsufficiency causes an intellectual disability syndrome and dysregulates transcription. The American Journal of Human Genetics, 99(2), 253-274. doi:10.1016/j.ajhg.2016.05.030.

    Abstract

    Intellectual disability (ID) is a common condition with considerable genetic heterogeneity. Next-generation sequencing of large cohorts has identified an increasing number of genes implicated in ID, but their roles in neurodevelopment remain largely unexplored. Here we report an ID syndrome caused by de novo heterozygous missense, nonsense, and frameshift mutations in BCL11A, encoding a transcription factor that is a putative member of the BAF swi/snf chromatin-remodeling complex. Using a comprehensive integrated approach to ID disease modeling, involving human cellular analyses coupled to mouse behavioral, neuroanatomical, and molecular phenotyping, we provide multiple lines of functional evidence for phenotypic effects. The etiological missense variants cluster in the amino-terminal region of human BCL11A, and we demonstrate that they all disrupt its localization, dimerization, and transcriptional regulatory activity, consistent with a loss of function. We show that Bcl11a haploinsufficiency in mice causes impaired cognition, abnormal social behavior, and microcephaly in accordance with the human phenotype. Furthermore, we identify shared aberrant transcriptional profiles in the cortex and hippocampus of these mouse models. Thus, our work implicates BCL11A haploinsufficiency in neurodevelopmental disorders and defines additional targets regulated by this gene, with broad relevance for our understanding of ID and related syndromes.
  • Diaz, B., Mitterer, H., Broersma, M., Escera, C., & Sebastián-Gallés, N. (2016). Variability in L2 phonemic learning originates from speech-specific capabilities: An MMN study on late bilinguals. Bilingualism: Language and Cognition, 19(5), 955-970. doi:10.1017/S1366728915000450.

    Abstract

    People differ in their ability to perceive second language (L2) sounds. In early bilinguals the variability in learning L2 phonemes stems from speech-specific capabilities (Díaz, Baus, Escera, Costa & Sebastián-Gallés, 2008). The present study addresses whether speech-specific capabilities similarly explain variability in late bilinguals. Event-related potentials were recorded (using a design similar to Díaz et al., 2008) in two groups of late Dutch–English bilinguals who were good or poor in overtly discriminating the L2 English vowels /ε-æ/. The mismatch negativity, an index of discrimination sensitivity, was similar between the groups in conditions involving pure tones (of different length, frequency, and presentation order) but was attenuated in poor L2 perceivers for native, unknown, and L2 phonemes. These results suggest that variability in L2 phonemic learning originates from speech-specific capabilities and imply a continuity of L2 phonemic learning mechanisms throughout the lifespan.
  • Dideriksen, C., Fusaroli, R., Tylén, K., Dingemanse, M., & Christiansen, M. H. (2019). Contextualizing Conversational Strategies: Backchannel, Repair and Linguistic Alignment in Spontaneous and Task-Oriented Conversations. In A. K. Goel, C. M. Seifert, & C. Freksa (Eds.), Proceedings of the 41st Annual Conference of the Cognitive Science Society (CogSci 2019) (pp. 261-267). Montreal, QB: Cognitive Science Society.

    Abstract

    Do interlocutors adjust their conversational strategies to the specific contextual demands of a given situation? Prior studies have yielded conflicting results, making it unclear how strategies vary with demands. We combine insights from qualitative and quantitative approaches in a within-participant experimental design involving two different contexts: spontaneously occurring conversations (SOC) and task-oriented conversations (TOC). We systematically assess backchanneling, other-repair and linguistic alignment. We find that SOC exhibit a higher number of backchannels, a reduced and more generic repair format and higher rates of lexical and syntactic alignment. TOC are characterized by a high number of specific repairs and a lower rate of lexical and syntactic alignment. However, when alignment occurs, more linguistic forms are aligned. The findings show that conversational strategies adapt to specific contextual demands.
  • Dieuleveut, A., Van Dooren, A., Cournane, A., & Hacquard, V. (2019). Acquiring the force of modals: Sig you guess what sig means? In M. Brown, & B. Dailey (Eds.), BUCLD 43: Proceedings of the 43rd annual Boston University Conference on Language Development (pp. 189-202). Sommerville, MA: Cascadilla Press.
  • Dima, A. L., & Dediu, D. (2017). Computation of Adherence to Medications and Visualization of Medication Histories in R with AdhereR: Towards Transparent and Reproducible Use of Electronic Healthcare Data. PLoS One, 12(4): e0174426. doi:10.1371/journal.pone.0174426.

    Abstract

    Adherence to medications is an important indicator of the quality of medication management and impacts on health outcomes and cost-effectiveness of healthcare delivery. Electronic healthcare data (EHD) are increasingly used to estimate adherence in research and clinical practice, yet standardization and transparency of data processing are still a concern. Comprehensive and flexible open-source algorithms can facilitate the development of high-quality, consistent, and reproducible evidence in this field. Some EHD-based clinical decision support systems (CDSS) include visualization of medication histories, but this is rarely integrated in adherence analyses and not easily accessible for data exploration or implementation in new clinical settings. We introduce AdhereR, a package for the widely used open-source statistical environment R, designed to support researchers in computing EHD-based adherence estimates and in visualizing individual medication histories and adherence patterns. AdhereR implements a set of functions that are consistent with current adherence guidelines, definitions and operationalizations. We illustrate the use of AdhereR with an example dataset of 2-year records of 100 patients and describe the various analysis choices possible and how they can be adapted to different health conditions and types of medications. The package is freely available for use and its implementation facilitates the integration of medication history visualizations in open-source CDSS platforms.
  • Dimitrova, D. V., Chu, M., Wang, L., Ozyurek, A., & Hagoort, P. (2016). Beat that word: How listeners integrate beat gesture and focus in multimodal speech discourse. Journal of Cognitive Neuroscience, 28(9), 1255-1269. doi:10.1162/jocn_a_00963.

    Abstract

    Communication is facilitated when listeners allocate their attention to important information (focus) in the message, a process called "information structure." Linguistic cues like the preceding context and pitch accent help listeners to identify focused information. In multimodal communication, relevant information can be emphasized by nonverbal cues like beat gestures, which represent rhythmic nonmeaningful hand movements. Recent studies have found that linguistic and nonverbal attention cues are integrated independently in single sentences. However, it is possible that these two cues interact when information is embedded in context, because context allows listeners to predict what information is important. In an ERP study, we tested this hypothesis and asked listeners to view videos capturing a dialogue. In the critical sentence, focused and nonfocused words were accompanied by beat gestures, grooming hand movements, or no gestures. ERP results showed that focused words are processed more attentively than nonfocused words as reflected in an N1 and P300 component. Hand movements also captured attention and elicited a P300 component. Importantly, beat gesture and focus interacted in a late time window of 600-900 msec relative to target word onset, giving rise to a late positivity when nonfocused words were accompanied by beat gestures. Our results show that listeners integrate beat gesture with the focus of the message and that integration costs arise when beat gesture falls on nonfocused information. This suggests that beat gestures fulfill a unique focusing function in multimodal discourse processing and that they have to be integrated with the information structure of the message.
  • Dimroth, C., Andorno, C., Benazzo, S., & Verhagen, J. (2010). Given claims about new topics: How Romance and Germanic speakers link changed and maintained information in narrative discourse. Journal of Pragmatics, 42(12), 3328-3344. doi:10.1016/j.pragma.2010.05.009.

    Abstract

    This paper deals with the anaphoric linking of information units in spoken discourse in French, Italian, Dutch and German. We distinguish the information units ‘time’, ‘entity’, and ‘predicate’ and specifically investigate how speakers mark the information structure of their utterances and enhance discourse cohesion in contexts where the predicate contains given information but there is a change in one or more of the other information units. Germanic languages differ from Romance languages in the availability of a set of assertion-related particles (e.g. doch/toch, wel; roughly meaning ‘indeed’) and the option of highlighting the assertion component of a finite verb independently of its lexical content (verum focus). Based on elicited production data from 20 native speakers per language, we show that speakers of Dutch and German relate utterances to one another by focussing on this assertion component, and propose an analysis of the additive scope particles ook/auch (also) along similar lines. Speakers of Romance languages tend to highlight change or maintenance in the other information units. Such differences in the repertoire have consequences for the selection of units that are used for anaphoric linking. We conclude that there is a Germanic and a Romance way of signalling the information flow and enhancing discourse cohesion.
  • Dimroth, C. (2010). The acquisition of negation. In L. R. Horn (Ed.), The expression of negation (pp. 39-73). Berlin/New York: Mouton de Gruyter.
  • Dingemanse, M. (2013). Wie wir mit Sprache malen - How to paint with language. Forschungsbericht 2013 - Max-Planck-Institut für Psycholinguistik. In Max-Planck-Gesellschaft Jahrbuch 2013. München: Max Planck Society for the Advancement of Science. Retrieved from http://www.mpg.de/6683977/Psycholinguistik_JB_2013.

    Abstract

    Words evolve not as blobs of ink on paper but in face-to-face interaction. The nature of language as fundamentally interactive and multimodal is shown by the study of ideophones, vivid sensory words that thrive in conversations around the world. The ways in which these Lautbilder ('sound pictures') enable precise communication about sensory knowledge have for the first time been studied in detail. It turns out that we can paint with language, and that the onomatopoeia we sometimes classify as childish might be a subset of a much richer toolkit for depiction in speech, available to us all.
  • Dingemanse, M. (2019). 'Ideophone' as a comparative concept. In K. Akita, & P. Pardeshi (Eds.), Ideophones, Mimetics, and Expressives (pp. 13-33). Amsterdam: John Benjamins. doi:10.1075/ill.16.02din.

    Abstract

    This chapter makes the case for ‘ideophone’ as a comparative concept: a notion that captures a recurrent typological pattern and provides a template for understanding language-specific phenomena that prove similar. It revises an earlier definition to account for the observation that ideophones typically form an open lexical class, and uses insights from canonical typology to explore the larger typological space. According to the resulting definition, a canonical ideophone is a member of an open lexical class of marked words that depict sensory imagery. The five elements of this definition can be seen as dimensions that together generate a possibility space to characterise cross-linguistic diversity in depictive means of expression. This approach allows for the systematic comparative treatment of ideophones and ideophone-like phenomena. Some phenomena in the larger typological space are discussed to demonstrate the utility of the approach: phonaesthemes in European languages, specialised semantic classes in West-Chadic, diachronic diversions in Aslian, and depicting constructions in signed languages.
  • Dingemanse, M., Kendrick, K. H., & Enfield, N. J. (2016). A Coding Scheme for Other-Initiated Repair across Languages. Open Linguistics, 2, 35-46. doi:10.1515/opli-2016-0002.

    Abstract

    We provide an annotated coding scheme for other-initiated repair, along with guidelines for building collections and aggregating cases based on interactionally relevant similarities and differences. The questions and categories of the scheme are grounded in inductive observations of conversational data and connected to a rich body of work on other-initiated repair in conversation analysis. The scheme is developed and tested in a 12-language comparative project and can serve as a stepping stone for future work on other-initiated repair and the systematic comparative study of conversational structures.
  • Dingemanse, M. (2010). [Review of Talking voices: Repetition, dialogue, and imagery in conversational discourse. 2nd edition. By Deborah Tannen]. Language in Society, 39(1), 139-140. doi:10.1017/S0047404509990765.

    Abstract

    Reviews the book, Talking voices: Repetition, dialogue, and imagery in conversational discourse. 2nd edition by Deborah Tannen. This book is the same as the 1989 original except for an added introduction. This introduction situates Talking Voices in the context of intertextuality and gives a survey of relevant research since the book first appeared. The strength of the book lies in its insightful analysis of the auditory side of conversation. Yet talking voices have always been embedded in richly contextualized multimodal speech events. As spontaneous and pervasive involvement strategies, both iconic gestures and ideophones should be of central importance to the analysis of conversational discourse. Unfortunately, someone who picks up this book is pretty much left in the dark about the prevalence of these phenomena in everyday face-to-face interaction all over the world.
  • Dingemanse, M. (2010). Folk definitions of ideophones. In E. Norcliffe, & N. J. Enfield (Eds.), Field manual volume 13 (pp. 24-29). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.529151.

    Abstract

    Ideophones are marked words that depict sensory events, for example English hippety-hoppety ‘in a limping and hobbling manner’ or Siwu mukumuku ‘mouth movements of a toothless person eating’. They typically have special sound patterns and distinct grammatical properties. Ideophones are found in many languages of the world, suggesting a common fascination with detailed sensory depiction, but reliable data on their meaning and use is still very scarce. This task involves video-recording spontaneous, informal explanations (“folk definitions”) of individual ideophones by native speakers, in their own language. The approach facilitates collection of rich primary data in a planned context while ensuring a large amount of spontaneity and freedom.
  • Dingemanse, M. (2013). Ideophones and gesture in everyday speech. Gesture, 13, 143-165. doi:10.1075/gest.13.2.02din.

    Abstract

    This article examines the relation between ideophones and gestures in a corpus of everyday discourse in Siwu, a richly ideophonic language spoken in Ghana. The overall frequency of ideophone-gesture couplings in everyday speech is lower than previously suggested, but two findings shed new light on the relation between ideophones and gesture. First, discourse type makes a difference: ideophone-gesture couplings are more frequent in narrative contexts, a finding that explains earlier claims, which were based not on everyday language use but on elicited narratives. Second, there is a particularly strong coupling between ideophones and one type of gesture: iconic gestures. This coupling allows us to better understand iconicity in relation to the affordances of meaning and modality. Ultimately, the connection between ideophones and iconic gestures is explained by reference to the depictive nature of both. Ideophone and iconic gesture are two aspects of the process of depiction.
  • Dingemanse, M., Torreira, F., & Enfield, N. J. (2013). Is “Huh?” a universal word? Conversational infrastructure and the convergent evolution of linguistic items. PLoS One, 8(11): e78273. doi:10.1371/journal.pone.0078273.

    Abstract

    A word like Huh? – used as a repair initiator when, for example, one has not clearly heard what someone just said – is found in roughly the same form and function in spoken languages across the globe. We investigate it in naturally occurring conversations in ten languages and present evidence and arguments for two distinct claims: that Huh? is universal, and that it is a word. In support of the first, we show that the similarities in form and function of this interjection across languages are much greater than expected by chance. In support of the second claim we show that it is a lexical, conventionalised form that has to be learnt, unlike grunts or emotional cries. We discuss possible reasons for the cross-linguistic similarity and propose an account in terms of convergent evolution. Huh? is a universal word not because it is innate but because it is shaped by selective pressures in an interactional environment that all languages share: that of other-initiated repair. Our proposal enhances evolutionary models of language change by suggesting that conversational infrastructure can drive the convergent cultural evolution of linguistic items.
  • Dingemanse, M., Schuerman, W. L., Reinisch, E., Tufvesson, S., & Mitterer, H. (2016). What sound symbolism can and cannot do: Testing the iconicity of ideophones from five languages. Language, 92(2), e117-e133. doi:10.1353/lan.2016.0034.

    Abstract

    Sound symbolism is a phenomenon with broad relevance to the study of language and mind, but there has been a disconnect between its investigations in linguistics and psychology. This study tests the sound-symbolic potential of ideophones—words described as iconic—in an experimental task that improves over prior work in terms of ecological validity and experimental control. We presented 203 ideophones from five languages to eighty-two Dutch listeners in a binary-choice task, in four versions: original recording, full diphone resynthesis, segments-only resynthesis, and prosody-only resynthesis. Listeners guessed the meaning of all four versions above chance, confirming the iconicity of ideophones and showing the viability of speech synthesis as a way of controlling for segmental and suprasegmental properties in experimental studies of sound symbolism. The success rate was more modest than prior studies using pseudowords like bouba/kiki, implying that assumptions based on such words cannot simply be transferred to natural languages. Prosody and segments together drive the effect: neither alone is sufficient, showing that segments and prosody work together as cues supporting iconic interpretations. The findings cast doubt on attempts to ascribe iconic meanings to segments alone and support a view of ideophones as words that combine arbitrariness and iconicity. We discuss the implications for theory and methods in the empirical study of sound symbolism and iconicity.

    Additional information

    https://muse.jhu.edu/article/619540
  • Djemie, T., Weckhuysen, S., von Spiczak, S., Carvill, G. L., Jaehn, J., Anttonen, A. K., Brilstra, E., Caglayan, H. S., De Kovel, C. G. F., Depienne, C., Gaily, E., Gennaro, E., Giraldez, B. G., Gormley, P., Guerrero-Lopez, R., Guerrini, R., Hamalainen, E., Hartmann, `., Hernandez-Hernandez, L., Hjalgrim, H., Koeleman, B. P., Leguern, E., Lehesjoki, A. E., Lemke, J. R., Leu, C., Marini, C., McMahon, J. M., Mei, D., Moller, R. S., Muhle, H., Myers, C. T., Nava, C., Serratosa, J. M., Sisodiya, S. M., Stephani, U., Striano, P., van Kempen, M. J., Verbeek, N. E., Usluer, S., Zara, F., Palotie, A., Mefford, H. C., Scheffer, I. E., De Jonghe, P., Helbig, I., & Suls, A. (2016). Pitfalls in genetic testing: the story of missed SCN1A mutations. Molecular Genetics & Genomic Medicine, 4(4), 457-64. doi:10.1002/mgg3.217.

    Abstract

    Background: Sanger sequencing, still the standard technique for genetic testing in most diagnostic laboratories and until recently widely used in research, is gradually being complemented by next-generation sequencing (NGS). No single mutation detection technique is however perfect in identifying all mutations. Therefore, we wondered to what extent inconsistencies between Sanger sequencing and NGS affect the molecular diagnosis of patients. Since mutations in SCN1A, the major gene implicated in epilepsy, are found in the majority of Dravet syndrome (DS) patients, we focused on missed SCN1A mutations. Methods: We sent out a survey to 16 genetic centers performing SCN1A testing. Results: We collected data on 28 mutations initially missed using Sanger sequencing. All patients were falsely reported as SCN1A mutation-negative, both due to technical limitations and human errors. Conclusion: We illustrate the pitfalls of Sanger sequencing and most importantly provide evidence that SCN1A mutations are an even more frequent cause of DS than already anticipated.
  • Dolscheid, S., Shayan, S., Ozturk, O., Majid, A., & Casasanto, D. (2010). Language shapes mental representations of musical pitch: Implications for metaphorical language processing [Abstract]. In Proceedings of the 16th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2010] (pp. 137). York: University of York.

    Abstract

    Speakers often use spatial metaphors to talk about musical pitch (e.g., a low note, a high soprano). Previous experiments suggest that English speakers also think about pitches as high or low in space, even when theyʼre not using language or musical notation (Casasanto, 2010). Do metaphors in language merely reflect pre-existing associations between space and pitch, or might language also shape these non-linguistic metaphorical mappings? To investigate the role of language in pitch representation, we conducted a pair of non-linguistic space-pitch interference experiments in speakers of two languages that use different spatial metaphors. Dutch speakers usually describe pitches as ʻhighʼ (hoog) and ʻlowʼ (laag). Farsi speakers, however, often describe high-frequency pitches as ʻthinʼ (naazok) and low-frequency pitches as ʻthickʼ (koloft). Do Dutch and Farsi speakers mentally represent pitch differently? To find out, we asked participants to reproduce musical pitches that they heard in the presence of irrelevant spatial information (i.e., lines that varied either in height or in thickness). For the Height Interference experiment, horizontal lines bisected a vertical reference line at one of nine different locations. For the Thickness Interference experiment, a vertical line appeared in the middle of the screen in one of nine thicknesses. In each experiment, the nine different lines were crossed with nine different pitches ranging from C4 to G#4 in semitone increments, to produce 81 distinct trials. If Dutch and Farsi speakers mentally represent pitch the way they talk about it, using different kinds of spatial representations, they should show contrasting patterns of cross-dimensional interference: Dutch speakersʼ pitch estimates should be more strongly affected by irrelevant height information, and Farsi speakersʼ by irrelevant thickness information. As predicted, Dutch speakersʼ pitch estimates were significantly modulated by spatial height but not by thickness.
Conversely, Farsi speakersʼ pitch estimates were modulated by spatial thickness but not by height (2x2 ANOVA on normalized slopes of the effect of space on pitch: F(1,71)=17.15, p<.001). To determine whether language plays a causal role in shaping pitch representations, we conducted a training experiment. Native Dutch speakers learned to use Farsi-like metaphors, describing pitch relationships in terms of thickness (e.g., a cello sounds ʻthickerʼ than a flute). After training, Dutch speakers showed a significant effect of Thickness interference in the non-linguistic pitch reproduction task, similar to native Farsi speakers: on average, pitches accompanied by thicker lines were reproduced as lower in pitch (effect of thickness on pitch: r=-.22, p=.002). By conducting psychophysical tasks, we tested the ʻWhorfianʼ question without using words. Yet, results also inform theories of metaphorical language processing. According to psycholinguistic theories (e.g., Bowdle & Gentner, 2005), highly conventional metaphors are processed without any active mapping from the source to the target domain (e.g., from space to pitch). Our data, however, suggest that when people use verbal metaphors they activate a corresponding non-linguistic mapping from either height or thickness to pitch, strengthening this association at the expense of competing associations. As a result, people who use different metaphors in their native languages form correspondingly different representations of musical pitch. Casasanto, D. (2010). Space for Thinking. In Language, Cognition and Space: State of the art and new directions. V. Evans & P. Chilton (Eds.), 453-478, London: Equinox Publishing. Bowdle, B. & Gentner, D. (2005). The career of metaphor. Psychological Review, 112, 193-216.
  • Dolscheid, S. (2013). High pitches and thick voices: The role of language in space-pitch associations. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Dolscheid, S., Graver, C., & Casasanto, D. (2013). Spatial congruity effects reveal metaphors, not markedness. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 2213-2218). Austin,TX: Cognitive Science Society. Retrieved from http://mindmodeling.org/cogsci2013/papers/0405/index.html.

    Abstract

    Spatial congruity effects have often been interpreted as evidence for metaphorical thinking, but an alternative markedness-based account challenges this view. In two experiments, we directly compared metaphor and markedness explanations for spatial congruity effects, using musical pitch as a testbed. English speakers who talk about pitch in terms of spatial height were tested in speeded space-pitch compatibility tasks. To determine whether space-pitch congruency effects could be elicited by any marked spatial continuum, participants were asked to classify high- and low-frequency pitches as 'high' and 'low' or as 'front' and 'back' (both pairs of terms constitute cases of marked continuums). We found congruency effects in high/low conditions but not in front/back conditions, indicating that markedness is not sufficient to account for congruity effects (Experiment 1). A second experiment showed that congruency effects were specific to spatial words that cued a vertical schema (tall/short), and that congruity effects were not an artifact of polysemy (e.g., 'high' referring both to space and pitch). Together, these results suggest that congruency effects reveal metaphorical uses of spatial schemas, not markedness effects.
  • Dolscheid, S., Shayan, S., Majid, A., & Casasanto, D. (2013). The thickness of musical pitch: Psychophysical evidence for linguistic relativity. Psychological Science, 24, 613-621. doi:10.1177/0956797612457374.

    Abstract

    Do people who speak different languages think differently, even when they are not using language? To find out, we used nonlinguistic psychophysical tasks to compare mental representations of musical pitch in native speakers of Dutch and Farsi. Dutch speakers describe pitches as high (hoog) or low (laag), whereas Farsi speakers describe pitches as thin (naazok) or thick (koloft). Differences in language were reflected in differences in performance on two pitch-reproduction tasks, even though the tasks used simple, nonlinguistic stimuli and responses. To test whether experience using language influences mental representations of pitch, we trained native Dutch speakers to describe pitch in terms of thickness, as Farsi speakers do. After the training, Dutch speakers’ performance on a nonlinguistic psychophysical task resembled the performance of native Farsi speakers. People who use different linguistic space-pitch metaphors also think about pitch differently. Language can play a causal role in shaping nonlinguistic representations of musical pitch.

    Additional information

    DS_10.1177_0956797612457374.pdf
  • Doumas, L. A., & Martin, A. E. (2016). Abstraction in time: Finding hierarchical linguistic structure in a model of relational processing. In A. Papafragou, D. Grodner, D. Mirman, & J. Trueswell (Eds.), Proceedings of the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016) (pp. 2279-2284). Austin, TX: Cognitive Science Society.

    Abstract

    Abstract mental representation is fundamental for human cognition. Forming such representations in time, especially from dynamic and noisy perceptual input, is a challenge for any processing modality, but perhaps none so acutely as for language processing. We show that LISA (Hummel & Holyoak, 1997) and DORA (Doumas, Hummel, & Sandhofer, 2008), models built to process and to learn structured (i.e., symbolic) representations of conceptual properties and relations from unstructured inputs, show oscillatory activation during processing that is highly similar to the cortical activity elicited by the linguistic stimuli from Ding et al. (2016). We argue, as Ding et al. (2016), that this activation reflects formation of hierarchical linguistic representation, and furthermore, that the kind of computational mechanisms in LISA/DORA (e.g., temporal binding by systematic asynchrony of firing) may underlie formation of abstract linguistic representations in the human brain. It may be this repurposing that allowed for the generation or emergence of hierarchical linguistic structure, and therefore, human language, from extant cognitive and neural systems. We conclude that models of thinking and reasoning and models of language processing must be integrated – not only for increased plausibility, but in order to advance both fields towards a larger integrative model of human cognition.
  • Drenth, P., Levelt, W. J. M., & Noort, E. (2013). Rejoinder to commentary on the Stapel-fraud report. The Psychologist, 26(2), 81.

    Abstract

    The Levelt, Noort and Drenth Committees make their sole and final rejoinder to criticisms of their report on the Stapel fraud
  • Drijvers, L., Vaitonyte, J., & Ozyurek, A. (2019). Degree of language experience modulates visual attention to visible speech and iconic gestures during clear and degraded speech comprehension. Cognitive Science, 43: e12789. doi:10.1111/cogs.12789.

    Abstract

    Visual information conveyed by iconic hand gestures and visible speech can enhance speech comprehension under adverse listening conditions for both native and non‐native listeners. However, how a listener allocates visual attention to these articulators during speech comprehension is unknown. We used eye‐tracking to investigate whether and how native and highly proficient non‐native listeners of Dutch allocated overt eye gaze to visible speech and gestures during clear and degraded speech comprehension. Participants watched video clips of an actress uttering a clear or degraded (6‐band noise‐vocoded) action verb while performing a gesture or not, and were asked to indicate the word they heard in a cued‐recall task. Gestural enhancement was the largest (i.e., a relative reduction in reaction time cost) when speech was degraded for all listeners, but it was stronger for native listeners. Both native and non‐native listeners mostly gazed at the face during comprehension, but non‐native listeners gazed more often at gestures than native listeners. However, only native but not non‐native listeners' gaze allocation to gestures predicted gestural benefit during degraded speech comprehension. We conclude that non‐native listeners might gaze at gesture more as it might be more challenging for non‐native listeners to resolve the degraded auditory cues and couple those cues to phonological information that is conveyed by visible speech. This diminished phonological knowledge might hinder the use of semantic information that is conveyed by gestures for non‐native compared to native listeners. Our results demonstrate that the degree of language experience impacts overt visual attention to visual articulators, resulting in different visual benefits for native versus non‐native listeners.

    Additional information

    Supporting information
  • Drijvers, L., Mulder, K., & Ernestus, M. (2016). Alpha and gamma band oscillations index differential processing of acoustically reduced and full forms. Brain and Language, 153-154, 27-37. doi:10.1016/j.bandl.2016.01.003.

    Abstract

    Reduced forms like yeshay for yesterday often occur in conversations. Previous behavioral research reported a processing advantage for full over reduced forms. The present study investigated whether this processing advantage is reflected in a modulation of alpha (8–12 Hz) and gamma (30+ Hz) band activity. In three electrophysiological experiments, participants listened to full and reduced forms in isolation (Experiment 1), sentence-final position (Experiment 2), or mid-sentence position (Experiment 3). Alpha power was larger in response to reduced forms than to full forms, but only in Experiments 1 and 2. We interpret these increases in alpha power as reflections of higher auditory cognitive load. In all experiments, gamma power only increased in response to full forms, which we interpret as showing that lexical activation spreads more quickly through the semantic network for full than for reduced forms. These results confirm a processing advantage for full forms, especially in non-medial sentence position.
  • Drijvers, L., Van der Plas, M., Ozyurek, A., & Jensen, O. (2019). Native and non-native listeners show similar yet distinct oscillatory dynamics when using gestures to access speech in noise. NeuroImage, 194, 55-67. doi:10.1016/j.neuroimage.2019.03.032.

    Abstract

    Listeners are often challenged by adverse listening conditions during language comprehension induced by external factors, such as noise, but also internal factors, such as being a non-native listener. Visible cues, such as semantic information conveyed by iconic gestures, can enhance language comprehension in such situations. Using magnetoencephalography (MEG) we investigated whether spatiotemporal oscillatory dynamics can predict a listener's benefit of iconic gestures during language comprehension in both internally (non-native versus native listeners) and externally (clear/degraded speech) induced adverse listening conditions. Proficient non-native speakers of Dutch were presented with videos in which an actress uttered a degraded or clear verb, accompanied by a gesture or not, and completed a cued-recall task after every video. The behavioral and oscillatory results obtained from non-native listeners were compared to an MEG study where we presented the same stimuli to native listeners (Drijvers et al., 2018a). Non-native listeners demonstrated a similar gestural enhancement effect as native listeners, but overall scored significantly slower on the cued-recall task. In both native and non-native listeners, an alpha/beta power suppression revealed engagement of the extended language network, motor and visual regions during gestural enhancement of degraded speech comprehension, suggesting similar core processes that support unification and lexical access processes. An individual's alpha/beta power modulation predicted the gestural benefit a listener experienced during degraded speech comprehension. Importantly, however, non-native listeners showed less engagement of the mouth area of the primary somatosensory cortex, left insula (beta), LIFG and ATL (alpha) than native listeners, which suggests that non-native listeners might be hindered in processing the degraded phonological cues and coupling them to the semantic information conveyed by the gesture. Native and non-native listeners thus demonstrated similar yet distinct spatiotemporal oscillatory dynamics when recruiting visual cues to disambiguate degraded speech.

    Additional information

    1-s2.0-S1053811919302216-mmc1.docx
  • Drijvers, L. (2019). On the oscillatory dynamics underlying speech-gesture integration in clear and adverse listening conditions. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Drozdova, P., Van Hout, R., & Scharenborg, O. (2016). Lexically-guided perceptual learning in non-native listening. Bilingualism: Language and Cognition, 19(5), 914-920. doi:10.1017/S136672891600002X.

    Abstract

    There is ample evidence that native and non-native listeners use lexical knowledge to retune their native phonetic categories following ambiguous pronunciations. The present study investigates whether a non-native ambiguous sound can retune non-native phonetic categories. After a brief exposure to an ambiguous British English [l/ɹ] sound, Dutch listeners demonstrated retuning. This retuning was, however, asymmetrical: the non-native listeners seemed to show (more) retuning of the /ɹ/ category than of the /l/ category, suggesting that non-native listeners can retune non-native phonetic categories. This asymmetry is argued to be related to the large phonetic variability of /r/ in both Dutch and English.
  • Drozdova, P., Van Hout, R., & Scharenborg, O. (2016). Processing and adaptation to ambiguous sounds during the course of perceptual learning. In Proceedings of Interspeech 2016: The 17th Annual Conference of the International Speech Communication Association (pp. 2811-2815). doi:10.21437/Interspeech.2016-814.

    Abstract

    Listeners use their lexical knowledge to interpret ambiguous sounds, and retune their phonetic categories to include these ambiguous sounds. Although there is ample evidence for lexically-guided retuning, the adaptation process is not fully understood. Using a lexical decision task with an embedded auditory semantic priming task, the present study investigates whether words containing an ambiguous sound are processed in the same way as “natural” words and whether adaptation to the ambiguous sound tends to equalize the processing of “ambiguous” and natural words. Analyses of the yes/no responses and reaction times to natural and “ambiguous” words showed that words containing an ambiguous sound were accepted as words less often and were processed more slowly than the same words without ambiguity. The difference in acceptance disappeared after exposure to approximately 15 ambiguous items. Interestingly, lower acceptance rates and slower processing did not affect the processing of semantic information in the following word. However, lower acceptance rates of ambiguous primes predicted slower reaction times to these primes, suggesting an important role of stimulus-specific characteristics in triggering lexically-guided perceptual learning.
  • Drude, S., Awete, W., & Aweti, A. (2019). A ortografia da língua Awetí. LIAMES: Línguas Indígenas Americanas, 19: e019014. doi:10.20396/liames.v19i0.8655746.

    Abstract

    This paper describes and motivates the orthography of the Awetí language (Tupí, Upper Xingu/MT), based on an analysis of the phonological and grammatical structure of Awetí. The orthography is the result of a long collaborative effort among the three authors, begun in 1998. It defines not only an alphabet (the representation of the language's vowels and consonants) but also addresses internal variation, resyllabification, lenition, palatalization, and other (morpho-)phonological processes. Special attention is given to the written representation of the glottal stop and to the orthographic consequences of nasal harmony. Although lexical stress is not marked orthographically in Awetí, the vast majority of affixes and particles are discussed with respect to stress and its interaction with adjacent morphemes, thereby also determining orthographic words. Finally, an alphabetical order was established in which digraphs are treated as sequences of letters while the glottal stop ⟨ʼ⟩ is ignored, making Awetí easier to learn. The orthography as described here has been used for roughly ten years for literacy instruction in Awetí at the school, with good results. We believe that several of the arguments raised here can be productively transferred to other languages with similar phenomena (the glottal stop as a consonant, nasal harmony, morphophonological assimilation, etc.).
