Publications

  • Christoffels, I. K., Formisano, E., & Schiller, N. O. (2007). The neural correlates of verbal feedback processing: An fMRI study employing overt speech. Human Brain Mapping, 28(9), 868-879. doi:10.1002/hbm.20315.

    Abstract

    Speakers use external auditory feedback to monitor their own speech. Feedback distortion has been found to increase activity in the superior temporal areas. Using fMRI, the present study investigates the neural correlates of processing verbal feedback without distortion. In a blocked design, the following conditions were presented: (1) overt picture-naming, (2) overt picture-naming while pink noise was presented to mask external feedback, (3) covert picture-naming, (4) listening to the picture names (previously recorded from participants' own voices), and (5) listening to pink noise. The results show that auditory feedback processing involves a network of different areas related to general performance monitoring and speech-motor control. These include the cingulate cortex and the bilateral insula, supplementary motor area, bilateral motor areas, cerebellum, thalamus and basal ganglia. Our findings suggest that the anterior cingulate cortex, which is often implicated in error-processing and conflict-monitoring, is also engaged in ongoing speech monitoring. Furthermore, in the superior temporal gyrus, we found a reduced response to speaking under normal feedback conditions. This finding is interpreted in the framework of a forward model according to which, during speech production, the sensory consequence of the speech-motor act is predicted to attenuate the sensitivity of the auditory cortex.
  • Christoffels, I. K., Firk, C., & Schiller, N. O. (2007). Bilingual language control: An event-related brain potential study. Brain Research, 1147, 192-208. doi:10.1016/j.brainres.2007.01.137.

    Abstract

    This study addressed how bilingual speakers switch between their first and second language when speaking. Event-related brain potentials (ERPs) and naming latencies were measured while unbalanced German (L1)-Dutch (L2) speakers performed a picture-naming task. Participants named pictures either in their L1 or in their L2 (blocked language conditions), or participants switched between their first and second language unpredictably (mixed language condition). Furthermore, form similarity between translation equivalents (cognate status) was manipulated. A cognate facilitation effect was found for L1 and L2 indicating phonological activation of the non-response language in blocked and mixed language conditions. The ERP data also revealed small but reliable effects of cognate status. Language switching resulted in equal switching costs for both languages and was associated with a modulation in the ERP waveforms (time windows 275-375 ms and 375-475 ms). Mixed language context affected especially the L1, both in ERPs and in latencies, which became slower in L1 than L2. It is suggested that sustained and transient components of language control should be distinguished. Results are discussed in relation to current theories of bilingual language processing.
  • Christoffels, I. K., Ganushchak, L. Y., & Koester, D. (2013). Language conflict in translation: An ERP study of translation production. Journal of Cognitive Psychology, 25, 646-664. doi:10.1080/20445911.2013.821127.

    Abstract

    Although most bilinguals can translate with relative ease, the underlying neuro-cognitive processes are poorly understood. Using event-related brain potentials (ERPs) we investigated the temporal course of word translation. Participants translated words from and to their first (L1, Dutch) and second (L2, English) language while ERPs were recorded. Interlingual homographs (IHs) were included to introduce language conflict. IHs share orthographic form but have different meanings in L1 and L2 (e.g., room in Dutch refers to cream). Results showed that the brain distinguished between translation directions as early as 200 ms after word presentation: the P2 amplitudes were more positive in the L1→L2 translation direction. The N400 was also modulated by translation direction, with more negative amplitudes in the L2→L1 translation direction. Furthermore, the IHs were translated more slowly, induced more errors, and elicited more negative N400 amplitudes than control words. In a naming experiment, participants read aloud the same words in L1 or L2 while ERPs were recorded. Results showed no effect of either IHs or language, suggesting that task schemas may be crucially related to language control in translation. Furthermore, translation appears to involve conceptual processing in both translation directions, and the task goal appears to influence how words are processed.

  • Clark, E. V., & Bowerman, M. (1986). On the acquisition of final voiced stops. In J. A. Fishman (Ed.), The Fergusonian impact: in honor of Charles A. Ferguson on the occasion of his 65th birthday. Volume 1: From phonology to society (pp. 51-68). Berlin: Mouton de Gruyter.
  • Clifton, C. J., Meyer, A. S., Wurm, L. H., & Treiman, R. (2013). Language comprehension and production. In A. F. Healy, & R. W. Proctor (Eds.), Handbook of Psychology, Volume 4, Experimental Psychology. 2nd Edition (pp. 523-547). Hoboken, NJ: Wiley.

    Abstract

    In this chapter, we survey the processes of recognizing and producing words and of understanding and creating sentences. Theory and research on these topics have been shaped by debates about how various sources of information are integrated in these processes, and about the role of language structure, as analyzed in the discipline of linguistics. In this chapter, we describe current views of fluent language users' comprehension of spoken and written language and their production of spoken language. We review what we consider to be the most important findings and theories in psycholinguistics, returning again and again to the questions of modularity and the importance of linguistic knowledge. Although we acknowledge the importance of social factors in language use, our focus is on core processes such as parsing and word retrieval that are not necessarily affected by such factors. We do not have space to say much about the important fields of developmental psycholinguistics, which deals with the acquisition of language by children, or applied psycholinguistics, which encompasses such topics as language disorders and language teaching. Although we recognize that there is burgeoning interest in the measurement of brain activity during language processing and how language is represented in the brain, space permits only occasional pointers to work in neuropsychology and the cognitive neuroscience of language. For treatment of these topics, and others, the interested reader could begin with two recent handbooks of psycholinguistics (Gaskell, 2007; Traxler & Gernsbacher, 2006) and a handbook of cognitive neuroscience (Gazzaniga, 2004).
  • Clough, S., & Hilverman, C. (2018). Hand gestures and how they help children learn. Frontiers for Young Minds, 6: 29. doi:10.3389/frym.2018.00029.

    Abstract

    When we talk, we often make hand movements called gestures at the same time. Although just about everyone gestures when they talk, we usually do not even notice the gestures. Our hand gestures play an important role in helping us learn and remember! When we see other people gesturing when they talk—or when we gesture when we talk ourselves—we are more likely to remember the information being talked about than if gestures were not involved. Our hand gestures can even indicate when we are ready to learn new things! In this article, we explain how gestures can help learning. To investigate this, we studied children learning a new mathematical concept called equivalence. We hope that this article will help you notice when you, your friends and family, and your teachers are gesturing, and that it will help you understand how those gestures can help people learn.
  • Cohen, E., & Haun, D. B. M. (2013). The development of tag-based cooperation via a socially acquired trait. Evolution and Human Behavior, 24, 230-235. doi:10.1016/j.evolhumbehav.2013.02.001.

    Abstract

    Recent theoretical models have demonstrated that phenotypic traits can support the non-random assortment of cooperators in a population, thereby permitting the evolution of cooperation. In these “tag-based models”, cooperators modulate cooperation according to an observable and hard-to-fake trait displayed by potential interaction partners. Socially acquired vocalizations in general, and speech accent among humans in particular, are frequently proposed as hard to fake and hard to hide traits that display sufficient cross-populational variability to reliably guide such social assortment in fission–fusion societies. Adults’ sensitivity to accent variation in social evaluation and decisions about cooperation is well-established in sociolinguistic research. The evolutionary and developmental origins of these biases are largely unknown, however. Here, we investigate the influence of speech accent on 5–10-year-old children's developing social and cooperative preferences across four Brazilian Amazonian towns. Two sites have a single dominant accent, and two sites have multiple co-existing accent varieties. We found that children's friendship and resource allocation preferences were guided by accent only in sites characterized by accent heterogeneity. Results further suggest that this may be due to a more sensitively tuned ear for accent variation. The demonstrated local-accent preference did not hold in the face of personal cost. Results suggest that mechanisms guiding tag-based assortment are likely tuned according to locally relevant tag-variation.

  • Connell, L., Cai, Z. G., & Holler, J. (2013). Do you see what I'm singing? Visuospatial movement biases pitch perception. Brain and Cognition, 81, 124-130. doi:10.1016/j.bandc.2012.09.005.

    Abstract

    The nature of the connection between musical and spatial processing is controversial. While pitch may be described in spatial terms such as “high” or “low”, it is unclear whether pitch and space are associated but separate dimensions or whether they share representational and processing resources. In the present study, we asked participants to judge whether a target vocal note was the same as (or different from) a preceding cue note. Importantly, target trials were presented as video clips where a singer sometimes gestured upward or downward while singing that target note, thus providing an alternative, concurrent source of spatial information. Our results show that pitch discrimination was significantly biased by the spatial movement in gesture, such that downward gestures made notes seem lower in pitch than they really were, and upward gestures made notes seem higher in pitch. These effects were eliminated by spatial memory load but preserved under verbal memory load conditions. Together, our findings suggest that pitch and space have a shared representation such that the mental representation of pitch is audiospatial in nature.
  • Corcoran, A. W., Alday, P. M., Schlesewsky, M., & Bornkessel-Schlesewsky, I. (2018). Toward a reliable, automated method of individual alpha frequency (IAF) quantification. Psychophysiology, 55(7): e13064. doi:10.1111/psyp.13064.

    Abstract

    Individual alpha frequency (IAF) is a promising electrophysiological marker of interindividual differences in cognitive function. IAF has been linked with trait-like differences in information processing and general intelligence, and provides an empirical basis for the definition of individualized frequency bands. Despite its widespread application, however, there is little consensus on the optimal method for estimating IAF, and many common approaches are prone to bias and inconsistency. Here, we describe an automated strategy for deriving two of the most prevalent IAF estimators in the literature: peak alpha frequency (PAF) and center of gravity (CoG). These indices are calculated from resting-state power spectra that have been smoothed using a Savitzky-Golay filter (SGF). We evaluate the performance characteristics of this analysis procedure in both empirical and simulated EEG data sets. Applying the SGF technique to resting-state data from n = 63 healthy adults furnished 61 PAF and 62 CoG estimates. The statistical properties of these estimates were consistent with previous reports. Simulation analyses revealed that the SGF routine was able to reliably extract target alpha components, even under relatively noisy spectral conditions. The routine consistently outperformed a simpler method of automated peak detection that did not involve spectral smoothing. The SGF technique is fast, open source, and available in two popular programming languages (MATLAB, Python), and thus can easily be integrated within the most popular M/EEG toolsets (EEGLAB, FieldTrip, MNE-Python). As such, it affords a convenient tool for improving the reliability and replicability of future IAF-related research.
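    The core of the procedure described in this abstract, smoothing a resting-state power spectrum and then reading off the peak alpha frequency (PAF) and center of gravity (CoG), can be sketched in a few lines of Python. This is a minimal illustration, not the authors' released toolbox: the function names, default alpha band, and filter parameters below are assumptions for the example, and the published routine handles edge cases (absent peaks, split peaks) that this sketch ignores.

    ```python
    import numpy as np
    from scipy.signal import savgol_filter

    def estimate_paf(freqs, psd, alpha_band=(7.0, 13.0), window=11, polyorder=5):
        # Smooth the power spectrum with a Savitzky-Golay filter, then take
        # the frequency of maximum smoothed power within the alpha band.
        smoothed = savgol_filter(psd, window_length=window, polyorder=polyorder)
        mask = (freqs >= alpha_band[0]) & (freqs <= alpha_band[1])
        return freqs[mask][np.argmax(smoothed[mask])]

    def estimate_cog(freqs, psd, alpha_band=(7.0, 13.0)):
        # Center of gravity: power-weighted mean frequency over the alpha band.
        mask = (freqs >= alpha_band[0]) & (freqs <= alpha_band[1])
        return np.sum(freqs[mask] * psd[mask]) / np.sum(psd[mask])
    ```

    For a spectrum with a clear alpha bump, both estimators converge on the bump's frequency; they diverge when the alpha component is broad or multi-peaked, which is exactly the situation the smoothing step is meant to stabilize.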

  • Corps, R. E. (2018). Coordinating utterances during conversational dialogue: The role of content and timing predictions. PhD Thesis, The University of Edinburgh, Edinburgh.
  • Corps, R. E., Gambi, C., & Pickering, M. J. (2018). Coordinating utterances during turn-taking: The role of prediction, response preparation, and articulation. Discourse Processes, 55(2, SI), 230-240. doi:10.1080/0163853X.2017.1330031.

    Abstract

    During conversation, interlocutors rapidly switch between speaker and listener roles and take turns at talk. How do they achieve such fine coordination? Most research has concentrated on the role of prediction, but listeners must also prepare a response in advance (assuming they wish to respond) and articulate this response at the appropriate moment. Such mechanisms may overlap with the processes of comprehending the speaker's incoming turn and predicting its end. However, little is known about the stages of response preparation and production. We discuss three questions pertaining to such stages: (1) Do listeners prepare their own response in advance? (2) Can listeners buffer their prepared response? (3) Does buffering lead to interference with concurrent comprehension? We argue that fine coordination requires more than just an accurate prediction of the interlocutor's incoming turn: Listeners must also simultaneously prepare their own response.
  • Corps, R. E., Crossley, A., Gambi, C., & Pickering, M. J. (2018). Early preparation during turn-taking: Listeners use content predictions to determine what to say but not when to say it. Cognition, 175, 77-95. doi:10.1016/j.cognition.2018.01.015.

    Abstract

    During conversation, there is often little gap between interlocutors’ utterances. In two pairs of experiments, we manipulated the content predictability of yes/no questions to investigate whether listeners achieve such coordination by (i) preparing a response as early as possible or (ii) predicting the end of the speaker’s turn. To assess these two mechanisms, we varied the participants’ task: They either pressed a button when they thought the question was about to end (Experiments 1a and 2a), or verbally answered the questions with either yes or no (Experiments 1b and 2b). Predictability effects were present when participants had to prepare a verbal response, but not when they had to predict the turn-end. These findings suggest content prediction facilitates turn-taking because it allows listeners to prepare their own response early, rather than because it helps them predict when the speaker will reach the end of their turn.

  • Cousminer, D. L., Berry, D. J., Timpson, N. J., Ang, W., Thiering, E., Byrne, E. M., Taal, H. R., Huikari, V., Bradfield, J. P., Kerkhof, M., Groen-Blokhuis, M. M., Kreiner-Møller, E., Marinelli, M., Holst, C., Leinonen, J. T., Perry, J. R. B., Surakka, I., Pietiläinen, O., Kettunen, J., Anttila, V., Kaakinen, M., Sovio, U., Pouta, A., Das, S., Lagou, V., Power, C., Prokopenko, I., Evans, D. M., Kemp, J. P., St Pourcain, B., Ring, S., Palotie, A., Kajantie, E., Osmond, C., Lehtimäki, T., Viikari, J. S., Kähönen, M., Warrington, N. M., Lye, S. J., Palmer, L. J., Tiesler, C. M. T., Flexeder, C., Montgomery, G. W., Medland, S. E., Hofman, A., Hakonarson, H., Guxens, M., Bartels, M., Salomaa, V., Murabito, J. M., Kaprio, J., Sørensen, T. I. A., Ballester, F., Bisgaard, H., Boomsma, D. I., Koppelman, G. H., Grant, S. F. A., Jaddoe, V. W. V., Martin, N. G., Heinrich, J., Pennell, C. E., Raitakari, O. T., Eriksson, J. G., Smith, G. D., Hyppönen, E., Järvelin, M.-R., McCarthy, M. I., Ripatti, S., Widén, E., Consortium ReproGen, & Consortium Early Growth Genetics (EGG) (2013). Genome-wide association and longitudinal analyses reveal genetic loci linking pubertal height growth, pubertal timing and childhood adiposity. Human Molecular Genetics, 22(13), 2735-2747. doi:10.1093/hmg/ddt104.

    Abstract

    The pubertal height growth spurt is a distinctive feature of childhood growth reflecting both the central onset of puberty and local growth factors. Although little is known about the underlying genetics, growth variability during puberty correlates with adult risks for hormone-dependent cancer and adverse cardiometabolic health. The only gene so far associated with pubertal height growth, LIN28B, pleiotropically influences childhood growth, puberty and cancer progression, pointing to shared underlying mechanisms. To discover genetic loci influencing pubertal height and growth and to place them in context of overall growth and maturation, we performed genome-wide association meta-analyses in 18 737 European samples utilizing longitudinally collected height measurements. We found significant associations (P < 1.67 × 10(-8)) at 10 loci, including LIN28B. Five loci associated with pubertal timing, all impacting multiple aspects of growth. In particular, a novel variant correlated with expression of MAPK3, and associated both with increased prepubertal growth and earlier menarche. Another variant near ADCY3-POMC associated with increased body mass index, reduced pubertal growth and earlier puberty. Whereas epidemiological correlations suggest that early puberty marks a pathway from rapid prepubertal growth to reduced final height and adult obesity, our study shows that individual loci associating with pubertal growth have variable longitudinal growth patterns that may differ from epidemiological observations. Overall, this study uncovers part of the complex genetic architecture linking pubertal height growth, the timing of puberty and childhood obesity and provides new information to pinpoint processes linking these traits.
  • Crago, M. B., & Allen, S. E. M. (1997). Linguistic and cultural aspects of simplicity and complexity in Inuktitut child directed speech. In E. Hughes, M. Hughes, & A. Greenhill (Eds.), Proceedings of the 21st annual Boston University Conference on Language Development (pp. 91-102).
  • Crago, M. B., Allen, S. E. M., & Hough-Eyamie, W. P. (1997). Exploring innateness through cultural and linguistic variation. In M. Gopnik (Ed.), The inheritance and innateness of grammars (pp. 70-90). New York City, NY, USA: Oxford University Press, Inc.
  • Crasborn, O., Sloetjes, H., Auer, E., & Wittenburg, P. (2006). Combining video and numeric data in the analysis of sign languages with the ELAN annotation software. In C. Vettori (Ed.), Proceedings of the 2nd Workshop on the Representation and Processing of Sign languages: Lexicographic matters and didactic scenarios (pp. 82-87). Paris: ELRA.

    Abstract

    This paper describes hardware and software that can be used for the phonetic study of sign languages. The field of sign language phonetics is characterised, and the hardware that is currently in use is described. The paper focuses on the software that was developed to enable the recording of finger and hand movement data, and the additions to the ELAN annotation software that facilitate the further visualisation and analysis of the data.
  • Creemers, A., Don, J., & Fenger, P. (2018). Some affixes are roots, others are heads. Natural Language & Linguistic Theory, 36(1), 45-84. doi:10.1007/s11049-017-9372-1.

    Abstract

    A recent debate in the morphological literature concerns the status of derivational affixes. While some linguists (Marantz 1997, 2001; Marvin 2003) consider derivational affixes a type of functional morpheme that realizes a categorial head, others (Lowenstamm 2015; De Belder 2011) argue that derivational affixes are roots. Our proposal, which finds its empirical basis in a study of Dutch derivational affixes, takes a middle position. We argue that there are two types of derivational affixes: some that are roots (i.e. lexical morphemes) and others that are categorial heads (i.e. functional morphemes). Affixes that are roots show ‘flexible’ categorial behavior, are subject to ‘lexical’ phonological rules, and may trigger idiosyncratic meanings. Affixes that realize categorial heads, on the other hand, are categorially rigid, do not trigger ‘lexical’ phonological rules nor allow for idiosyncrasies in their interpretation.
  • Cristia, A., Dupoux, E., Hakuno, Y., Lloyd-Fox, S., Schuetze, M., Kivits, J., Bergvelt, T., Van Gelder, M., Filippin, L., Charron, S., & Minagawa-Kawai, Y. (2013). An online database of infant functional Near InfraRed Spectroscopy studies: A community-augmented systematic review. PLoS One, 8(3): e58906. doi:10.1371/journal.pone.0058906.

    Abstract

    Until recently, imaging the infant brain was very challenging. Functional Near InfraRed Spectroscopy (fNIRS) is a promising, relatively novel technique, whose use is rapidly expanding. As an emergent field, it is particularly important to share methodological knowledge to ensure replicable and robust results. In this paper, we present a community-augmented database which will facilitate precisely this exchange. We tabulated articles and theses reporting empirical fNIRS research carried out on infants below three years of age along several methodological variables. The resulting spreadsheet has been uploaded in a format allowing individuals to continue adding new results, and download the most recent version of the table. Thus, this database is ideal to carry out systematic reviews. We illustrate its academic utility by focusing on the factors affecting three key variables: infant attrition, the reliability of oxygenated and deoxygenated responses, and signal-to-noise ratios. We then discuss strengths and weaknesses of the DBIfNIRS, and conclude by suggesting a set of simple guidelines aimed to facilitate methodological convergence through the standardization of reports.
  • Cristia, A. (2013). Input to language: The phonetics of infant-directed speech. Language and Linguistics Compass, 7, 157-170. doi:10.1111/lnc3.12015.

    Abstract

    Over the first year of life, infant perception changes radically as the child learns the phonology of the ambient language from the speech she is exposed to. Since infant-directed speech attracts the child's attention more than other registers, it is necessary to describe that input in order to understand language development, and to address questions of learnability. In this review, evidence from corpora analyses, experimental studies, and observational paradigms is brought together to outline the first comprehensive empirical picture of infant-directed speech and its effects on language acquisition. The ensuing landscape suggests that infant-directed speech provides an emotionally and linguistically rich input to language acquisition.

  • Cristia, A., Ganesh, S., Casillas, M., & Ganapathy, S. (2018). Talker diarization in the wild: The case of child-centered daylong audio-recordings. In Proceedings of Interspeech 2018 (pp. 2583-2587). doi:10.21437/Interspeech.2018-2078.

    Abstract

    Speaker diarization (answering 'who spoke when') is a widely researched subject within speech technology. Numerous experiments have been run on datasets built from broadcast news, meeting data, and call centers—the task sometimes appears close to being solved. Much less work has begun to tackle the hardest diarization task of all: spontaneous conversations in real-world settings. Such diarization would be particularly useful for studies of language acquisition, where researchers investigate the speech children produce and hear in their daily lives. In this paper, we study audio gathered with a recorder worn by small children as they went about their normal days. As a result, each child was exposed to different acoustic environments with a multitude of background noises and a varying number of adults and peers. The inconsistency of speech and noise within and across samples poses a challenging task for speaker diarization systems, which we tackled via retraining and data augmentation techniques. We further studied sources of structured variation across raw audio files, including the impact of speaker type distribution, proportion of speech from children, and child age on diarization performance. We discuss the extent to which these findings might generalize to other samples of speech in the wild.
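    Diarization performance of the kind this abstract discusses is conventionally scored against a reference segmentation. As a hedged illustration (not the scoring protocol used in the paper), the simplest frame-level version of such an error metric can be written as:

    ```python
    def frame_error_rate(reference, hypothesis):
        # Toy frame-level diarization error: the fraction of frames where the
        # hypothesized speaker label disagrees with the reference label.
        # Real diarization error rate (DER) additionally accounts for missed
        # speech, false alarms, and an optimal mapping between reference and
        # system labels; this sketch assumes labels are already aligned.
        assert len(reference) == len(hypothesis)
        errors = sum(r != h for r, h in zip(reference, hypothesis))
        return errors / len(reference)
    ```

    On child-centered daylong recordings, the hard part is not the arithmetic but the reference itself: overlapping talkers, distant speech, and background noise make even human annotation of "who spoke when" uncertain.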
  • Cristia, A., Mielke, J., Daland, R., & Peperkamp, S. (2013). Similarity in the generalization of implicitly learned sound patterns. Journal of Laboratory Phonology, 4(2), 259-285.

    Abstract

    A core property of language is the ability to generalize beyond observed examples. In two experiments, we explore how listeners generalize implicitly learned sound patterns to new nonwords and to new sounds, with the goal of shedding light on how similarity affects treatment of potential generalization targets. During the exposure phase, listeners heard nonwords whose onset consonant was restricted to a subset of a natural class (e.g., /d g v z Z/). During the test phase, listeners were presented with new nonwords and asked to judge how frequently they had been presented before; some of the test items began with a consonant from the exposure set (e.g., /d/), and some began with novel consonants with varying relations to the exposure set (e.g., /b/, which is highly similar to all onsets in the training set; /t/, which is highly similar to one of the training onsets; and /p/, which is less similar than the other two). The exposure onset was rated most frequent, indicating that participants encoded onset attestation in the exposure set, and generalized it to new nonwords. Participants also rated novel consonants as somewhat frequent, indicating generalization to onsets that did not occur in the exposure phase. While generalization could be accounted for in terms of featural distance, it was insensitive to natural class structure. Generalization to new sounds was predicted better by models requiring prior linguistic knowledge (either traditional distinctive features or articulatory phonetic information) than by a model based on a linguistically naïve measure of acoustic similarity.
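    The featural-distance account this abstract tests can be illustrated with a toy computation. The feature vectors below are simplified assumptions made for the example, not the feature system used in the study:

    ```python
    # Hypothetical, simplified distinctive-feature vectors for a few onset
    # consonants (illustrative values only, not a complete feature system).
    FEATURES = {
        "d": {"voice": 1, "continuant": 0, "labial": 0},
        "b": {"voice": 1, "continuant": 0, "labial": 1},
        "t": {"voice": 0, "continuant": 0, "labial": 0},
        "p": {"voice": 0, "continuant": 0, "labial": 1},
    }

    def featural_distance(a, b):
        # Hamming distance over the shared feature set.
        return sum(FEATURES[a][f] != FEATURES[b][f] for f in FEATURES[a])
    ```

    Under this toy system, /b/ is one feature away from /d/ while /p/ is two away, mirroring the similarity ordering described in the abstract: generalization strength falls off with featural distance from the exposure set.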
  • Croijmans, I. (2018). Wine expertise shapes olfactory language and cognition. PhD Thesis, Radboud University, Nijmegen.
  • Cronin, K. A. (2013). [Review of the book Chimpanzees of the Lakeshore: Natural history and culture at Mahale by Toshisada Nishida]. Animal Behaviour, 85, 685-686. doi:10.1016/j.anbehav.2013.01.001.

    Abstract

    First paragraph: Motivated by his quest to characterize the society of the last common ancestor of humans and other great apes, Toshisada Nishida set out as a graduate student to the Mahale Mountains on the eastern shore of Lake Tanganyika, Tanzania. This book is a story of his 45 years with the Mahale chimpanzees, or as he calls it, their ethnography. Beginning with his accounts of meeting the Tongwe people and the challenges of provisioning the chimpanzees for habituation, Nishida reveals how he slowly unravelled the unit group and community basis of chimpanzee social organization. The book begins and ends with a feeling of chronological order, starting with his arrival at Mahale and ending with an eye towards the future, with concrete recommendations for protecting wild chimpanzees. However, the bulk of the book is topically organized with chapters on feeding behaviour, growth and development, play and exploration, communication, life histories, sexual strategies, politics and culture.
  • Cronin, K. A., Mitchell, M. A., Lonsdorf, E. V., & Thompson, S. D. (2006). One year later: Evaluation of PMC-Recommended births and transfers. Zoo Biology, 25, 267-277. doi:10.1002/zoo.20100.

    Abstract

    To meet their exhibition, conservation, education, and scientific goals, members of the American Zoo and Aquarium Association (AZA) collaborate to manage their living collections as single species populations. These cooperative population management programs, Species Survival Plans (SSP) and Population Management Plans (PMP), issue specimen-by-specimen recommendations aimed at perpetuating captive populations by maintaining genetic diversity and demographic stability. Species Survival Plans and PMPs differ in that SSP participants agree to complete recommendations, whereas PMP participants need only take recommendations under advisement. We evaluated the effect of program type and the number of participating institutions on the success of actions recommended by the Population Management Center (PMC): transfers of specimens between institutions, breeding, and target number of offspring. We analyzed AZA studbook databases for the occurrence of recommended or unrecommended transfers and births during the 1-year period after the distribution of standard AZA Breeding-and-Transfer Plans. We had three major findings: 1) on average, both SSPs and PMPs fell about 25% short of their target; however, as the number of participating institutions increased so too did the likelihood that programs met or exceeded their target; 2) SSPs exhibited significantly greater transfer success than PMPs, although transfer success for both program types was below 50%; and 3) SSPs exhibited significantly greater breeding success than PMPs, although breeding success for both program types was below 20%. Together, these results indicate that the science and sophistication behind genetic and demographic management of captive populations may be compromised by the challenges of implementation.
  • Croxson, P., Forkel, S. J., Cerliani, L., & Thiebaut De Schotten, M. (2018). Structural Variability Across the Primate Brain: A Cross-Species Comparison. Cerebral Cortex, 28(11), 3829-3841. doi:10.1093/cercor/bhx244.

    Abstract

    A large amount of variability exists across human brains, revealed initially on a small scale by postmortem studies and, more recently, on a larger scale with the advent of neuroimaging. Here we compared structural variability between human and macaque monkey brains using grey and white matter magnetic resonance imaging measures. The monkey brain was overall structurally as variable as the human brain, but variability had a distinct distribution pattern, with some key areas showing high variability. We also report the first evidence of a relationship between anatomical variability and evolutionary expansion in the primate brain. This suggests a relationship between variability and stability, where areas of low variability may have evolved less recently and have more stability, while areas of high variability may have evolved more recently and be less similar across individuals. We showed specific differences between the species in key areas, including the amount of hemispheric asymmetry in variability, which was left-lateralized in the human brain across several phylogenetically recent regions. This suggests that cerebral variability may be another useful measure for comparison between species and may add another dimension to our understanding of evolutionary mechanisms.
  • Cutler, A. (2006). Rudolf Meringer. In K. Brown (Ed.), Encyclopedia of Language and Linguistics (vol. 8) (pp. 12-13). Amsterdam: Elsevier.

    Abstract

    Rudolf Meringer (1859–1931), Indo-European philologist, published two collections of slips of the tongue, annotated and interpreted. From 1909, he was the founding editor of the cultural morphology movement's journal Wörter und Sachen. Meringer was the first to note the linguistic significance of speech errors, and his interpretations have stood the test of time. This work, rather than his mainstream philological research, has proven his most lasting linguistic contribution.
  • Cutler, A., Kim, J., & Otake, T. (2006). On the limits of L1 influence on non-L1 listening: Evidence from Japanese perception of Korean. In P. Warren, & C. I. Watson (Eds.), Proceedings of the 11th Australian International Conference on Speech Science & Technology (pp. 106-111).

    Abstract

    Language-specific procedures which are efficient for listening to the L1 may be applied to non-native spoken input, often to the detriment of successful listening. However, such misapplications of L1-based listening do not always happen. We propose, based on the results from two experiments in which Japanese listeners detected target sequences in spoken Korean, that an L1 procedure is only triggered if requisite L1 features are present in the input.
  • Cutler, A. (2006). Van spraak naar woorden in een tweede taal. In J. Morais, & G. d'Ydewalle (Eds.), Bilingualism and Second Language Acquisition (pp. 39-54). Brussels: Koninklijke Vlaamse Academie van België voor Wetenschappen en Kunsten.
  • Ip, M. H. K., & Cutler, A. (2018). Asymmetric efficiency of juncture perception in L1 and L2. In K. Klessa, J. Bachan, A. Wagner, M. Karpiński, & D. Śledziński (Eds.), Proceedings of Speech Prosody 2018 (pp. 289-296). Baixas, France: ISCA. doi:10.21437/SpeechProsody.2018-59.

    Abstract

    In two experiments, Mandarin listeners resolved potential syntactic ambiguities in spoken utterances in (a) their native language (L1) and (b) English which they had learned as a second language (L2). A new disambiguation task was used, requiring speeded responses to select the correct meaning for structurally ambiguous sentences. Importantly, the ambiguities used in the study are identical in Mandarin and in English, and production data show that prosodic disambiguation of this type of ambiguity is also realised very similarly in the two languages. The perceptual results here showed however that listeners’ response patterns differed for L1 and L2, although there was a significant increase in similarity between the two response patterns with increasing exposure to the L2. Thus identical ambiguity and comparable disambiguation patterns in L1 and L2 do not lead to immediate application of the appropriate L1 listening strategy to L2; instead, it appears that such a strategy may have to be learned anew for the L2.
  • Cutler, A., Sebastian-Galles, N., Soler-Vilageliu, O., & Van Ooijen, B. (2000). Constraints of vowels and consonants on lexical selection: Cross-linguistic comparisons. Memory & Cognition, 28, 746-755.

    Abstract

    Languages differ in the constitution of their phonemic repertoire and in the relative distinctiveness of phonemes within the repertoire. In the present study, we asked whether such differences constrain spoken-word recognition, via two word reconstruction experiments, in which listeners turned non-words into real words by changing single sounds. The experiments were carried out in Dutch (which has a relatively balanced vowel-consonant ratio and many similar vowels) and in Spanish (which has many more consonants than vowels and high distinctiveness among the vowels). Both Dutch and Spanish listeners responded significantly faster and more accurately when required to change vowels as opposed to consonants; when allowed to change any phoneme, they more often altered vowels than consonants. Vowel information thus appears to constrain lexical selection less tightly (allow more potential candidates) than does consonant information, independent of language-specific phoneme repertoire and of relative distinctiveness of vowels.
  • Cutler, A., & Otake, T. (1997). Contrastive studies of spoken-language processing. Journal of the Phonetic Society of Japan, 1, 4-13.
  • Ip, M. H. K., & Cutler, A. (2018). Cue equivalence in prosodic entrainment for focus detection. In J. Epps, J. Wolfe, J. Smith, & C. Jones (Eds.), Proceedings of the 17th Australasian International Conference on Speech Science and Technology (pp. 153-156).

    Abstract

    Using a phoneme detection task, the present series of
    experiments examines whether listeners can entrain to
    different combinations of prosodic cues to predict where focus
    will fall in an utterance. The stimuli were recorded by four
    female native speakers of Australian English who happened to
    have used different prosodic cues to produce sentences with
    prosodic focus: a combination of duration cues, mean and
    maximum F0, F0 range, and longer pre-target interval before
    the focused word onset, only mean F0 cues, only pre-target
    interval, and only duration cues. Results revealed that listeners
    can entrain in almost every condition except for where
    duration was the only reliable cue. Our findings suggest that
    listeners are flexible in the cues they use for focus processing.
  • Cutler, A., & Van de Weijer, J. (2000). De ontdekking van de eerste woorden. Stem-, Spraak- en Taalpathologie, 9, 245-259.

    Abstract

    Speech is continuous; there are no reliable signals by which the listener knows where one word ends and the next begins. For adult listeners, segmenting spoken language into individual words is thus not unproblematic, but for a child that does not yet possess a vocabulary, the continuity of speech poses an even greater challenge. Nevertheless, most children produce their first recognizable words around the beginning of their second year of life. These early speech productions are preceded by a formidable perceptual achievement. During the first year of life, especially during the second half, speech perception develops from a general phonetic discrimination capacity into a selective sensitivity to the phonological contrasts that occur in the native language. Recent research has further shown that, long before they can say even a single word, children are able to distinguish words that are characteristic of their native language from words that are not. Moreover, they can recognize words first presented in isolation when these occur in a continuous speech context. The daily language input to a child of this age does not exactly make this easy, for instance because most words do not occur in isolation. Yet the child is also offered some footholds, among other things because word use is restricted.
  • Cutler, A., & Pasveer, D. (2006). Explaining cross-linguistic differences in effects of lexical stress on spoken-word recognition. In R. Hoffmann, & H. Mixdorff (Eds.), Speech Prosody 2006. Dresden: TUD press.

    Abstract

    Experiments have revealed differences across languages in listeners’ use of stress information in recognising spoken words. Previous comparisons of the vocabulary of Spanish and English had suggested that the explanation of this asymmetry might lie in the extent to which considering stress in spoken-word recognition allows rejection of unwanted competition from words embedded in other words. This hypothesis was tested on the vocabularies of Dutch and German, for which word recognition results resemble those from Spanish more than those from English. The vocabulary statistics likewise revealed that in each language, the reduction of embeddings resulting from taking stress into account is more similar to the reduction achieved in Spanish than in English.
  • Cutler, A., Wales, R., Cooper, N., & Janssen, J. (2007). Dutch listeners' use of suprasegmental cues to English stress. In J. Trouvain, & W. J. Barry (Eds.), Proceedings of the 16th International Congress of Phonetic Sciences (ICPhS 2007) (pp. 1913-1916). Dudweiler: Pirrot.

    Abstract

    Dutch listeners outperform native listeners in identifying syllable stress in English. This is because lexical stress is more useful in recognition of spoken words of Dutch than of English, so that Dutch listeners pay greater attention to stress in general. We examined Dutch listeners’ use of the acoustic correlates of English stress. Primary- and secondary-stressed syllables differ significantly on acoustic measures, and some differences, in F0 especially, correlate with data of earlier listening experiments. The correlations found in the Dutch responses were not paralleled in data from native listeners. Thus the acoustic cues which distinguish English primary versus secondary stress are better exploited by Dutch than by native listeners.
  • Cutler, A., Eisner, F., McQueen, J. M., & Norris, D. (2006). Coping with speaker-related variation via abstract phonemic categories. In Variation, detail and representation: 10th Conference on Laboratory Phonology (pp. 31-32).
  • Cutler, A., Weber, A., & Otake, T. (2006). Asymmetric mapping from phonetic to lexical representations in second-language listening. Journal of Phonetics, 34(2), 269-284. doi:10.1016/j.wocn.2005.06.002.

    Abstract

    The mapping of phonetic information to lexical representations in second-language (L2) listening was examined using an eyetracking paradigm. Japanese listeners followed instructions in English to click on pictures in a display. When instructed to click on a picture of a rocket, they experienced interference when a picture of a locker was present, that is, they tended to look at the locker instead. However, when instructed to click on the locker, they were unlikely to look at the rocket. This asymmetry is consistent with a similar asymmetry previously observed in Dutch listeners’ mapping of English vowel contrasts to lexical representations. The results suggest that L2 listeners may maintain a distinction between two phonetic categories of the L2 in their lexical representations, even though their phonetic processing is incapable of delivering the perceptual discrimination required for correct mapping to the lexical distinction. At the phonetic processing level, one of the L2 categories is dominant; the present results suggest that dominance is determined by acoustic–phonetic proximity to the nearest L1 category. At the lexical processing level, representations containing this dominant category are more likely than representations containing the non-dominant category to be correctly contacted by the phonetic input.
  • Cutler, A., & Weber, A. (2007). Listening experience and phonetic-to-lexical mapping in L2. In J. Trouvain, & W. J. Barry (Eds.), Proceedings of the 16th International Congress of Phonetic Sciences (ICPhS 2007) (pp. 43-48). Dudweiler: Pirrot.

    Abstract

    In contrast to initial L1 vocabularies, which of necessity depend largely on heard exemplars, L2 vocabulary construction can draw on a variety of knowledge sources. This can lead to richer stored knowledge about the phonology of the L2 than the listener's prelexical phonetic processing capacity can support, and thus to mismatch between the level of detail required for accurate lexical mapping and the level of detail delivered by the prelexical processor. Experiments on spoken word recognition in L2 have shown that phonetic contrasts which are not reliably perceived are represented in the lexicon nonetheless. This lexical representation of contrast must be based on abstract knowledge, not on veridical representation of heard exemplars. New experiments confirm that provision of abstract knowledge (in the form of spelling) can induce lexical representation of a contrast which is not reliably perceived; but also that experience (in the form of frequency of occurrence) modulates the mismatch of phonetic and lexical processing. We conclude that a correct account of word recognition in L2 (as indeed in L1) requires consideration of both abstract and episodic information.
  • Cutler, A., Cooke, M., Garcia-Lecumberri, M. L., & Pasveer, D. (2007). L2 consonant identification in noise: Cross-language comparisons. In H. van Hamme, & R. van Son (Eds.), Proceedings of Interspeech 2007 (pp. 1585-1588). Adelaide: Causal productions.

    Abstract

    The difficulty of listening to speech in noise is exacerbated when the speech is in the listener’s L2 rather than L1. In this study, Spanish and Dutch users of English as an L2 identified American English consonants in a constant intervocalic context. Their performance was compared with that of L1 (British English) listeners, under quiet conditions and when the speech was masked by speech from another talker or by noise. Masking affected performance more for the Spanish listeners than for the L1 listeners, but not for the Dutch listeners, whose performance was worse than the L1 case to about the same degree in all conditions. There were, however, large differences in the pattern of results across individual consonants, which were consistent with differences in how consonants are identified in the respective L1s.
  • Cutler, A., Burchfield, L. A., & Antoniou, M. (2018). Factors affecting talker adaptation in a second language. In J. Epps, J. Wolfe, J. Smith, & C. Jones (Eds.), Proceedings of the 17th Australasian International Conference on Speech Science and Technology (pp. 33-36).

    Abstract

    Listeners adapt rapidly to previously unheard talkers by
    adjusting phoneme categories using lexical knowledge, in a
    process termed lexically-guided perceptual learning. Although
    this is firmly established for listening in the native language
    (L1), perceptual flexibility in second languages (L2) is as yet
    less well understood. We report two experiments examining L1
    and L2 perceptual learning, the first in Mandarin-English late
    bilinguals, the second in Australian learners of Mandarin. Both
    studies showed stronger learning in L1; in L2, however,
    learning appeared for the English-L1 group but not for the
    Mandarin-L1 group. Phonological mapping differences from
    the L1 to the L2 are suggested as the reason for this result.
  • Cutler, A. (1986). Forbear is a homophone: Lexical prosody does not constrain lexical access. Language and Speech, 29, 201-220.

    Abstract

    Because stress can occur in any position within an English word, lexical prosody could serve as a minimal distinguishing feature between pairs of words. However, most pairs of English words with stress pattern opposition also differ vocalically: OBject and obJECT, CONtent and conTENT have different vowels in their first syllables as well as different stress patterns. To test whether prosodic information is made use of in auditory word recognition independently of segmental phonetic information, it is necessary to examine pairs like FORbear – forBEAR or TRUSty – trusTEE, semantically unrelated words which exhibit stress pattern opposition but no segmental difference. In a cross-modal priming task, such words produce the priming effects characteristic of homophones, indicating that lexical prosody is not used in the same way as segmental structure to constrain lexical access.
  • Cutler, A. (2000). How the ear comes to hear. In New Trends in Modern Linguistics [Part of Annual catalogue series] (pp. 6-10). Tokyo, Japan: Maruzen Publishers.
  • Cutler, A. (2000). Hoe het woord het oor verovert. In Voordrachten uitgesproken tijdens de uitreiking van de SPINOZA-premies op 15 februari 2000 (pp. 29-41). The Hague, The Netherlands: Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO).
  • Cutler, A., & Chen, H.-C. (1997). Lexical tone in Cantonese spoken-word processing. Perception and Psychophysics, 59, 165-179. Retrieved from http://www.psychonomic.org/search/view.cgi?id=778.

    Abstract

    In three experiments, the processing of lexical tone in Cantonese was examined. Cantonese listeners more often accepted a nonword as a word when the only difference between the nonword and the word was in tone, especially when the F0 onset difference between correct and erroneous tone was small. Same–different judgments by these listeners were also slower and less accurate when the only difference between two syllables was in tone, and this was true whether the F0 onset difference between the two tones was large or small. Listeners with no knowledge of Cantonese produced essentially the same same-different judgment pattern as that produced by the native listeners, suggesting that the results display the effects of simple perceptual processing rather than of linguistic knowledge. It is argued that the processing of lexical tone distinctions may be slowed, relative to the processing of segmental distinctions, and that, in speeded-response tasks, tone is thus more likely to be misprocessed than is segmental structure.
  • Cutler, A., Mehler, J., Norris, D., & Segui, J. (1989). Limits on bilingualism [Letters to Nature]. Nature, 340, 229-230. doi:10.1038/340229a0.

    Abstract

    Speech, in any language, is continuous; speakers provide few reliable cues to the boundaries of words, phrases, or other meaningful units. To understand speech, listeners must divide the continuous speech stream into portions that correspond to such units. This segmentation process is so basic to human language comprehension that psycholinguists long assumed that all speakers would do it in the same way. In previous research [1,2], however, we reported that segmentation routines can be language-specific: speakers of French process spoken words syllable by syllable, but speakers of English do not. French has relatively clear syllable boundaries and syllable-based timing patterns, whereas English has relatively unclear syllable boundaries and stress-based timing; thus syllabic segmentation would work more efficiently in the comprehension of French than in the comprehension of English. Our present study suggests that at this level of language processing, there are limits to bilingualism: a bilingual speaker has one and only one basic language.
  • Cutler, A., & Farrell, J. (2018). Listening in first and second language. In J. I. Liontas (Ed.), The TESOL encyclopedia of language teaching. New York: Wiley. doi:10.1002/9781118784235.eelt0583.

    Abstract

    Listeners' recognition of spoken language involves complex decoding processes: The continuous speech stream must be segmented into its component words, and words must be recognized despite great variability in their pronunciation (due to talker differences, or to influence of phonetic context, or to speech register) and despite competition from many spuriously present forms supported by the speech signal. L1 listeners deal more readily with all levels of this complexity than L2 listeners. Fortunately, the decoding processes necessary for competent L2 listening can be taught in the classroom. Evidence-based methodologies targeted at the development of efficient speech decoding include teaching of minimal pairs, of phonotactic constraints, and of reduction processes, as well as the use of dictation and L2 video captions.
  • Cutler, A. (1986). Phonological structure in speech recognition. Phonology Yearbook, 3, 161-178. Retrieved from http://www.jstor.org/stable/4615397.

    Abstract

    Two bodies of recent research from experimental psycholinguistics are summarised, each of which is centred upon a concept from phonology: LEXICAL STRESS and the SYLLABLE. The evidence indicates that neither construct plays a role in prelexical representations during speech recognition. Both constructs, however, are well supported by other performance evidence. Testing phonological claims against performance evidence from psycholinguistics can be difficult, since the results of studies designed to test processing models are often of limited relevance to phonological theory.
  • Cutler, A., McQueen, J. M., & Zondervan, R. (2000). Proceedings of SWAP (Workshop on Spoken Word Access Processes). Nijmegen: MPI for Psycholinguistics.
  • Cutler, A., & Swinney, D. A. (1987). Prosody and the development of comprehension. Journal of Child Language, 14, 145-167.

    Abstract

    Four studies are reported in which young children’s response time to detect word targets was measured. Children under about six years of age did not show the response time advantage for accented target words which adult listeners show. When semantic focus of the target word was manipulated independently of accent, children of about five years of age showed an adult-like response time advantage for focussed targets, but children younger than five did not. It is argued that the processing advantage for accented words reflects the semantic role of accent as an expression of sentence focus. Processing advantages for accented words depend on the prior development of representations of sentence semantic structure, including the concept of focus. The previous literature on the development of prosodic competence shows an apparent anomaly in that young children’s productive skills appear to outstrip their receptive skills; however, this anomaly disappears if very young children’s prosody is assumed to be produced without an underlying representation of the relationship between prosody and semantics.
  • Cutler, A. (1997). Prosody and the structure of the message. In Y. Sagisaka, N. Campbell, & N. Higuchi (Eds.), Computing prosody: Computational models for processing spontaneous speech (pp. 63-66). Heidelberg: Springer.
  • Cutler, A., Dahan, D., & Van Donselaar, W. (1997). Prosody in the comprehension of spoken language: A literature review. Language and Speech, 40, 141-201.

    Abstract

    Research on the exploitation of prosodic information in the recognition of spoken language is reviewed. The research falls into three main areas: the use of prosody in the recognition of spoken words, in which most attention has been paid to the question of whether the prosodic structure of a word plays a role in initial contact with stored lexical representations; the use of prosody in the computation of syntactic structure, in which the resolution of global and local ambiguities has formed the central focus; and the role of prosody in the processing of discourse structure, in which there has been a preponderance of work on the contribution of accentuation and deaccentuation to integration of concepts with an existing discourse model. The review reveals that in each area progress has been made towards new conceptions of prosody's role in processing, and in particular this has involved abandonment of previously held deterministic views of the relationship between prosodic structure and other aspects of linguistic structure.
  • Cutler, A. (2000). Real words, phantom words and impossible words. In D. Burnham, S. Luksaneeyanawin, C. Davis, & M. Lafourcade (Eds.), Interdisciplinary approaches to language processing: The international conference on human and machine processing of language and speech (pp. 32-42). Bangkok: NECTEC.
  • Cutler, A. (1997). The comparative perspective on spoken-language processing. Speech Communication, 21, 3-15. doi:10.1016/S0167-6393(96)00075-1.

    Abstract

    Psycholinguists strive to construct a model of human language processing in general. But this does not imply that they should confine their research to universal aspects of linguistic structure, and avoid research on language-specific phenomena. First, even universal characteristics of language structure can only be accurately observed cross-linguistically. This point is illustrated here by research on the role of the syllable in spoken-word recognition, on the perceptual processing of vowels versus consonants, and on the contribution of phonetic assimilation phenomena to phoneme identification. In each case, it is only by looking at the pattern of effects across languages that it is possible to understand the general principle. Second, language-specific processing can certainly shed light on the universal model of language comprehension. This second point is illustrated by studies of the exploitation of vowel harmony in the lexical segmentation of Finnish, of the recognition of Dutch words with and without vowel epenthesis, and of the contribution of different kinds of lexical prosodic structure (tone, pitch accent, stress) to the initial activation of candidate words in lexical access. In each case, aspects of the universal processing model are revealed by analysis of these language-specific effects. In short, the study of spoken-language processing by human listeners requires cross-linguistic comparison.
  • Cutler, A., & Koster, M. (2000). Stress and lexical activation in Dutch. In B. Yuan, T. Huang, & X. Tang (Eds.), Proceedings of the Sixth International Conference on Spoken Language Processing: Vol. 1 (pp. 593-596). Beijing: China Military Friendship Publish.

    Abstract

    Dutch listeners were slower to make judgements about the semantic relatedness between a spoken target word (e.g. atLEET, 'athlete') and a previously presented visual prime word (e.g. SPORT, 'sport') when the spoken word was mis-stressed. The adverse effect of mis-stressing confirms the role of stress information in lexical recognition in Dutch. However, although the erroneous stress pattern was always initially compatible with a competing word (e.g. ATlas, 'atlas'), mis-stressed words did not produce high false alarm rates in unrelated pairs (e.g. SPORT - atLAS). This suggests that stress information did not completely rule out segmentally matching but suprasegmentally mismatching words, a finding consistent with spoken-word recognition models involving multiple activation and inter-word competition.
  • Cutler, A., & Butterfield, S. (1986). The perceptual integrity of initial consonant clusters. In R. Lawrence (Ed.), Speech and Hearing: Proceedings of the Institute of Acoustics (pp. 31-36). Edinburgh: Institute of Acoustics.
  • Cutler, A. (1988). The perfect speech error. In L. Hyman, & C. Li (Eds.), Language, speech and mind: Studies in honor of Victoria A. Fromkin (pp. 209-223). London: Croom Helm.
  • Cutler, A., & Norris, D. (1988). The role of strong syllables in segmentation for lexical access. Journal of Experimental Psychology: Human Perception and Performance, 14, 113-121. doi:10.1037/0096-1523.14.1.113.

    Abstract

    A model of speech segmentation in a stress language is proposed, according to which the occurrence of a strong syllable triggers segmentation of the speech signal, whereas occurrence of a weak syllable does not trigger segmentation. We report experiments in which listeners detected words embedded in nonsense bisyllables more slowly when the bisyllable had two strong syllables than when it had a strong and a weak syllable; mint was detected more slowly in mintayve than in mintesh. According to our proposed model, this result is an effect of segmentation: When the second syllable is strong, it is segmented from the first syllable, and successful detection of the embedded word therefore requires assembly of speech material across a segmentation position. Speech recognition models involving phonemic or syllabic recoding, or based on strictly left-to-right processes, do not predict this result. It is argued that segmentation at strong syllables in continuous speech recognition serves the purpose of detecting the most efficient locations at which to initiate lexical access.
  • Cutler, A., Mehler, J., Norris, D., & Segui, J. (1986). The syllable’s differing role in the segmentation of French and English. Journal of Memory and Language, 25, 385-400. doi:10.1016/0749-596X(86)90033-1.

    Abstract

    Speech segmentation procedures may differ in speakers of different languages. Earlier work based on French speakers listening to French words suggested that the syllable functions as a segmentation unit in speech processing. However, while French has relatively regular and clearly bounded syllables, other languages, such as English, do not. No trace of syllabifying segmentation was found in English listeners listening to English words, French words, or nonsense words. French listeners, however, showed evidence of syllabification even when they were listening to English words. We conclude that alternative segmentation routines are available to the human language processor. In some cases speech segmentation may involve the operation of more than one procedure.
  • Cutler, A. (1997). The syllable’s role in the segmentation of stress languages. Language and Cognitive Processes, 12, 839-845. doi:10.1080/016909697386718.
  • Cutler, A. (1986). Why readers of this newsletter should run cross-linguistic experiments. European Psycholinguistics Association Newsletter, 13, 4-8.
  • Cutler, A., Norris, D., & McQueen, J. M. (2000). Tracking TRACE’s troubles. In A. Cutler, J. M. McQueen, & R. Zondervan (Eds.), Proceedings of SWAP (Workshop on Spoken Word Access Processes) (pp. 63-66). Nijmegen: Max-Planck-Institute for Psycholinguistics.

    Abstract

    Simulations explored the inability of the TRACE model of spoken-word recognition to model the effects on human listening of acoustic-phonetic mismatches in word forms. The source of TRACE's failure lay not in its interactive connectivity, not in the presence of interword competition, and not in the use of phonemic representations, but in the need for continuously optimised interpretation of the input. When an analogue of TRACE was allowed to cycle to asymptote on every slice of input, an acceptable simulation of the subcategorical mismatch data was achieved. Even then, however, the simulation was not as close as that produced by the Merge model.
  • Cutler, A., & Bruggeman, L. (2013). Vocabulary structure and spoken-word recognition: Evidence from French reveals the source of embedding asymmetry. In Proceedings of INTERSPEECH: 14th Annual Conference of the International Speech Communication Association (pp. 2812-2816).

    Abstract

    Vocabularies contain hundreds of thousands of words built from only a handful of phonemes, so that inevitably longer words tend to contain shorter ones. In many languages (but not all) such embedded words occur more often word-initially than word-finally, and this asymmetry, if present, has far-reaching consequences for spoken-word recognition. Prior research had ascribed the asymmetry to suffixing or to effects of stress (in particular, final syllables containing the vowel schwa). Analyses of the standard French vocabulary here reveal an effect of suffixing, as predicted by this account, and further analyses of an artificial variety of French reveal that extensive final schwa has an independent and additive effect in promoting the embedding asymmetry.
  • Dahan, D., & Gaskell, M. G. (2007). The temporal dynamics of ambiguity resolution: Evidence from spoken-word recognition. Journal of Memory and Language, 57(4), 483-501. doi:10.1016/j.jml.2007.01.001.

    Abstract

    Two experiments examined the dynamics of lexical activation in spoken-word recognition. In both, the key materials were pairs of onset-matched picturable nouns varying in frequency. Pictures associated with these words, plus two distractor pictures, were displayed. A gating task, in which participants identified the picture associated with gradually lengthening fragments of spoken words, examined the availability of discriminating cues in the speech waveforms for these pairs. There was a clear frequency bias in participants’ responses to short, ambiguous fragments, followed by a temporal window in which discriminating information gradually became available. A visual-world experiment examined speech-contingent eye movements. Fixation analyses suggested that frequency influences lexical competition well beyond the point in the speech signal at which the spoken word has been fully discriminated from its competitor (as identified using gating). Taken together, these data support models in which the processing dynamics of lexical activation are a limiting factor on recognition speed, over and above the temporal unfolding of the speech signal.
  • Dai, B., Chen, C., Long, Y., Zheng, L., Zhao, H., Bai, X., Liu, W., Zhang, Y., Liu, L., Guo, T., Ding, G., & Lu, C. (2018). Neural mechanisms for selectively tuning into the target speaker in a naturalistic noisy situation. Nature Communications, 9: 2405. doi:10.1038/s41467-018-04819-z.

    Abstract

    The neural mechanism for selectively tuning in to a target speaker while tuning out the others in a multi-speaker situation (i.e., the cocktail-party effect) remains elusive. Here we addressed this issue by measuring brain activity simultaneously from a listener and from multiple speakers while they were involved in naturalistic conversations. Results consistently show selectively enhanced interpersonal neural synchronization (INS) between the listener and the attended speaker at left temporal–parietal junction, compared with that between the listener and the unattended speaker across different multi-speaker situations. Moreover, INS increases significantly prior to the occurrence of verbal responses, and even when the listener’s brain activity precedes that of the speaker. The INS increase is independent of brain-to-speech synchronization in both the anatomical location and frequency range. These findings suggest that INS underlies the selective process in a multi-speaker situation through neural predictions at the content level but not the sensory level of speech.

    Additional information

    Dai_etal_2018_sup.pdf
  • D'Alessandra, Y., Carena, M. C., Spazzafumo, L., Martinelli, F., Bassetti, B., Devanna, P., Rubino, M., Marenzi, G., Colombo, G. I., Achilli, F., Maggiolini, S., Capogrossi, M. C., & Pompilio, G. (2013). Diagnostic Potential of Plasmatic MicroRNA Signatures in Stable and Unstable Angina. PLoS ONE, 8(11), e80345. doi:10.1371/journal.pone.0080345.

    Abstract

    PURPOSE: We examined circulating miRNA expression profiles in plasma of patients with coronary artery disease (CAD) vs. matched controls, with the aim of identifying novel discriminating biomarkers of Stable (SA) and Unstable (UA) angina. METHODS: An exploratory analysis of plasmatic expression profile of 367 miRNAs was conducted in a group of SA and UA patients and control donors, using TaqMan microRNA Arrays. Screening confirmation and expression analysis were performed by qRT-PCR: all miRNAs found dysregulated were examined in the plasma of troponin-negative UA (n=19) and SA (n=34) patients and control subjects (n=20), matched for sex, age, and cardiovascular risk factors. In addition, the expression of 14 known CAD-associated miRNAs was also investigated. RESULTS: Out of 178 miRNAs consistently detected in plasma samples, 3 showed positive modulation by CAD when compared to controls: miR-337-5p, miR-433, and miR-485-3p. Further, miR-1, -122, -126, -133a, -133b, and miR-199a were positively modulated in both UA and SA patients, while miR-337-5p and miR-145 showed a positive modulation only in SA or UA patients, respectively. ROC curve analyses showed a good diagnostic potential (AUC ≥ 0.85) for miR-1, -126, and -483-5p in SA and for miR-1, -126, and -133a in UA patients vs. controls, respectively. No discriminating AUC values were observed comparing SA vs. UA patients. Hierarchical cluster analysis showed that the combination of miR-1, -133a, and -126 in UA and of miR-1, -126, and -485-3p in SA correctly classified patients vs. controls with an efficiency ≥ 87%. No combination of miRNAs was able to reliably discriminate patients with UA from patients with SA. CONCLUSIONS: This work showed that specific plasmatic miRNA signatures have the potential to accurately discriminate patients with angiographically documented CAD from matched controls. We failed to identify a plasmatic miRNA expression pattern capable of differentiating SA from UA patients.
  • Dastjerdi, M., Ozker, M., Foster, B. L., Rangarajan, V., & Parvizi, J. (2013). Numerical processing in the human parietal cortex during experimental and natural conditions. Nature Communications, 4: 2528. doi:10.1038/ncomms3528.

    Abstract

    Human cognition is traditionally studied in experimental conditions wherein confounding complexities of the natural environment are intentionally eliminated. Thus, it remains unknown how a brain region involved in a particular experimental condition is engaged in natural conditions. Here we use electrocorticography to address this uncertainty in three participants implanted with intracranial electrodes and identify activations of neuronal populations within the intraparietal sulcus region during an experimental arithmetic condition. In a subsequent analysis, we report that the same intraparietal sulcus neural populations are activated when participants, engaged in social conversations, refer to objects with numerical content. Our prototype approach provides a means for both exploring human brain dynamics as they unfold in complex social settings and reconstructing natural experiences from recorded brain signals.
  • Davidson, D. J. (2006). Strategies for longitudinal neurophysiology [commentary on Osterhout et al.]. Language Learning, 56(suppl. 1), 231-234. doi:10.1111/j.1467-9922.2006.00362.x.
  • Davidson, D. J., & Indefrey, P. (2007). An inverse relation between event-related and time–frequency violation responses in sentence processing. Brain Research, 1158, 81-92. doi:10.1016/j.brainres.2007.04.082.

    Abstract

    The relationship between semantic and grammatical processing in sentence comprehension was investigated by examining event-related potential (ERP) and event-related power changes in response to semantic and grammatical violations. Sentences with semantic, phrase structure, or number violations and matched controls were presented serially (1.25 words/s) to 20 participants while EEG was recorded. Semantic violations were associated with an N400 effect and a theta band increase in power, while grammatical violations were associated with a P600 effect and an alpha/beta band decrease in power. A quartile analysis showed that for both types of violations, larger average violation effects were associated with lower relative amplitudes of oscillatory activity, implying an inverse relation between ERP amplitude and event-related power magnitude change in sentence processing.
  • Davidson, D., & Martin, A. E. (2013). Modeling accuracy as a function of response time with the generalized linear mixed effects model. Acta Psychologica, 144(1), 83-96. doi:10.1016/j.actpsy.2013.04.016.

    Abstract

    In psycholinguistic studies using error rates as a response measure, response times (RT) are most often analyzed independently of the error rate, although it is widely recognized that they are related. In this paper we present a mixed effects logistic regression model for the error rate that uses RT as a trial-level fixed- and random-effect regression input. Production data from a translation–recall experiment are analyzed as an example. Several model comparisons reveal that RT improves the fit of the regression model for the error rate. Two simulation studies then show how the mixed effects regression model can identify individual participants for whom (a) faster responses are more accurate, (b) faster responses are less accurate, or (c) there is no relation between speed and accuracy. These results show that this type of model can serve as a useful adjunct to traditional techniques, allowing psycholinguistic researchers to examine more closely the relationship between RT and accuracy in individual subjects, to better account for the variability that may be present, and to take a preliminary step toward more advanced RT–accuracy modeling.
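    As a rough illustration of the core idea (accuracy modeled as a logistic function of trial-level RT), the following self-contained Python sketch simulates one participant showing pattern (b) and fits the fixed-effects part by Newton-Raphson. The paper's model additionally includes RT as a by-participant random effect, which is omitted here, and all simulation parameters are invented:

    ```python
    import numpy as np

    # Simulate one participant for whom slower responses are more accurate,
    # i.e. pattern (b): faster responses are less accurate. All parameters
    # (sample size, RT distribution, true coefficients) are invented.
    rng = np.random.default_rng(1)
    n = 1000
    rt = rng.gamma(5.0, 120.0, n)            # response times in ms
    z = (rt - rt.mean()) / rt.std()          # standardized trial-level RT
    true_b0, true_b1 = 1.0, 0.8
    p = 1.0 / (1.0 + np.exp(-(true_b0 + true_b1 * z)))
    acc = rng.binomial(1, p)                 # 1 = correct, 0 = error

    # Fit logistic regression by Newton-Raphson (fixed effects only; the
    # paper's mixed model adds a random-effects structure on top of this).
    X = np.column_stack([np.ones(n), z])
    beta = np.zeros(2)
    for _ in range(25):
        mu = 1.0 / (1.0 + np.exp(-X @ beta))   # predicted P(correct)
        grad = X.T @ (acc - mu)                # score vector
        hess = X.T @ (X * (mu * (1.0 - mu))[:, None])  # observed information
        beta += np.linalg.solve(hess, grad)

    print(beta)  # positive RT slope: faster responses are less accurate
    ```

    The sign of the fitted slope distinguishes the three patterns the paper describes: positive for (b), negative for (a), near zero for (c).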
  • Debreslioska, S., Ozyurek, A., Gullberg, M., & Perniss, P. M. (2013). Gestural viewpoint signals referent accessibility. Discourse Processes, 50(7), 431-456. doi:10.1080/0163853x.2013.824286.

    Abstract

    The tracking of entities in discourse is known to be a bimodal phenomenon. Speakers achieve cohesion in speech by alternating between full lexical forms, pronouns, and zero anaphora as they track referents. They also track referents in co-speech gestures. In this study, we explored how viewpoint is deployed in reference tracking, focusing on representations of animate entities in German narrative discourse. We found that gestural viewpoint systematically varies depending on discourse context. Speakers predominantly use character viewpoint in maintained contexts and observer viewpoint in reintroduced contexts. Thus, gestural viewpoint seems to function as a cohesive device in narrative discourse. The findings expand on and provide further evidence for the coordination between speech and gesture on the discourse level that is crucial to understanding the tight link between the two modalities.
  • Dediu, D., Cysouw, M., Levinson, S. C., Baronchelli, A., Christiansen, M. H., Croft, W., Evans, N., Garrod, S., Gray, R., Kandler, A., & Lieven, E. (2013). Cultural evolution of language. In P. J. Richerson, & M. H. Christiansen (Eds.), Cultural evolution: Society, technology, language, and religion. Strüngmann Forum Reports, vol. 12 (pp. 303-332). Cambridge, Mass: MIT Press.

    Abstract

    This chapter argues that an evolutionary cultural approach to language not only has already proven fruitful, but probably holds the key to understanding many puzzling aspects of language, its change and origins. The chapter begins by highlighting several still common misconceptions about language that might seem to call into question a cultural evolutionary approach. It explores the antiquity of language and sketches a general evolutionary approach discussing the aspects of function, fitness, replication, and selection, as well as the relevant units of linguistic evolution. In this context, the chapter looks at some fundamental aspects of linguistic diversity such as the nature of the design space, the mechanisms generating it, and the shape and fabric of language. Given that biology is another evolutionary system, its complex coevolution with language needs to be understood in order to have a proper theory of language. Throughout the chapter, various challenges are identified and discussed, sketching promising directions for future research. The chapter ends by listing the necessary data, methods, and theoretical developments required for a grounded evolutionary approach to language.
  • Dediu, D. (2013). Genes: Interactions with language on three levels — Inter-individual variation, historical correlations and genetic biasing. In P.-M. Binder, & K. Smith (Eds.), The language phenomenon: Human communication from milliseconds to millennia (pp. 139-161). Berlin: Springer. doi:10.1007/978-3-642-36086-2_7.

    Abstract

    The complex inter-relationships between genetics and linguistics encompass all four scales highlighted by the contributions to this book and, together with cultural transmission, the genetics of language holds the promise to offer a unitary understanding of this fascinating phenomenon. There are inter-individual differences in genetic makeup which contribute to the obvious fact that we are not identical in the way we understand and use language and, by studying them, we will be able to both better treat and enhance ourselves. There are correlations between the genetic configuration of human groups and their languages, reflecting the historical processes shaping them, and there also seem to exist genes which can influence some characteristics of language, biasing it towards or against certain states by altering the way language is transmitted across generations. Besides the joys of pure knowledge, the understanding of these three aspects of genetics relevant to language will potentially trigger advances in medicine, linguistics, psychology or the understanding of our own past and, last but not least, a profound change in the way we regard one of the emblems of being human: our capacity for language.
  • Dediu, D. (2018). Making genealogical language classifications available for phylogenetic analysis: Newick trees, unified identifiers, and branch length. Language Dynamics and Change, 8(1), 1-21. doi:10.1163/22105832-00801001.

    Abstract

    One of the best-known types of non-independence between languages is caused by genealogical relationships due to descent from a common ancestor. These can be represented by (more or less resolved and controversial) language family trees. In theory, one can argue that language families should be built through the strict application of the comparative method of historical linguistics, but in practice this is not always the case, and there are several proposed classifications of languages into language families, each with its own advantages and disadvantages. A major stumbling block shared by most of them is that they are relatively difficult to use with computational methods, and in particular with phylogenetics. This is due to their lack of standardization, coupled with the general non-availability of branch length information, which encapsulates the amount of evolution taking place on the family tree. In this paper I introduce a method (and its implementation in R) that converts the language classifications provided by four widely-used databases (Ethnologue, WALS, AUTOTYP and Glottolog) into the de facto Newick standard generally used in phylogenetics, aligns the four most used conventions for unique identifiers of linguistic entities (ISO 639-3, WALS, AUTOTYP and Glottocode), and adds branch length information from a variety of sources (the tree's own topology, an externally given numeric constant, or a distance matrix). The R scripts, input data and resulting Newick trees are available under liberal open-source licenses in a GitHub repository (https://github.com/ddediu/lgfam-newick), to encourage and promote the use of phylogenetic methods to investigate linguistic diversity and its temporal dynamics.
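    The classification-to-Newick step can be illustrated compactly. The sketch below is a toy Python re-implementation of that conversion using a constant branch length on every edge (one of the three branch-length sources the paper supports); the paper's actual implementation is the R code in the linked repository, and the mini-classification here is invented:

    ```python
    # Toy sketch: render a nested genealogical classification as a Newick
    # string, assigning an externally given constant branch length to every
    # edge. The mini Germanic classification below is invented/simplified.
    def to_newick(node, name, branch_length=1.0):
        """Recursively render {name: subtree-or-None} as a Newick subtree."""
        if not node:                          # leaf: an individual language
            return f"{name}:{branch_length}"
        children = ",".join(
            to_newick(sub, child, branch_length)
            for child, sub in sorted(node.items())
        )
        return f"({children}){name}:{branch_length}"

    family = {
        "West_Germanic": {"English": None, "Dutch": None, "German": None},
        "North_Germanic": {"Swedish": None, "Danish": None},
    }
    print(to_newick(family, "Germanic") + ";")
    ```

    The resulting string can be read directly by standard phylogenetics tools; the real pipeline additionally maps node labels onto unified identifiers (ISO 639-3, Glottocode, etc.).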
  • Dediu, D. (2006). Mostly out of Africa, but what did the others have to say? In A. Cangelosi, A. D. Smith, & K. Smith (Eds.), The evolution of language: proceedings of the 6th International Conference (EVOLANG6) (pp. 59-66). World Scientific.

    Abstract

    The Recent Out-of-Africa human evolutionary model seems to be generally accepted. This impression is very prevalent outside palaeoanthropological circles (including studies of language evolution), but proves to be unwarranted. This paper offers a short review of the main challenges facing ROA and concludes that alternative models based on the concept of metapopulation must also be considered. The implications of such a model for language evolution and diversity are briefly reviewed.
  • Dediu, D., & Ladd, D. R. (2007). Linguistic tone is related to the population frequency of the adaptive haplogroups of two brain size genes, ASPM and Microcephalin. PNAS, 104, 10944-10949. doi:10.1073/pnas.0610848104.

    Abstract

    The correlations between interpopulation genetic and linguistic diversities are mostly noncausal (spurious), being due to historical processes and geographical factors that shape them in similar ways. Studies of such correlations usually consider allele frequencies and linguistic groupings (dialects, languages, linguistic families or phyla), sometimes controlling for geographic, topographic, or ecological factors. Here, we consider the relation between allele frequencies and linguistic typological features. Specifically, we focus on the derived haplogroups of the brain growth and development-related genes ASPM and Microcephalin, which show signs of natural selection and a marked geographic structure, and on linguistic tone, the use of voice pitch to convey lexical or grammatical distinctions. We hypothesize that there is a relationship between the population frequency of these two alleles and the presence of linguistic tone and test this hypothesis relative to a large database (983 alleles and 26 linguistic features in 49 populations), showing that it is not due to the usual explanatory factors represented by geography and history. The relationship between genetic and linguistic diversity in this case may be causal: certain alleles can bias language acquisition or processing and thereby influence the trajectory of language change through iterated cultural transmission.
  • Dediu, D., & Levinson, S. C. (2018). Neanderthal language revisited: Not only us. Current Opinion in Behavioral Sciences, 21, 49-55. doi:10.1016/j.cobeha.2018.01.001.

    Abstract

    Here we re-evaluate our 2013 paper on the antiquity of language (Dediu and Levinson, 2013) in the light of a surge of new information on human evolution in the last half million years. Although new genetic data suggest the existence of some cognitive differences between Neanderthals and modern humans (fully expected after hundreds of thousands of years of partially separate evolution), overall our claims that Neanderthals were fully articulate beings and that language evolution was gradual are further substantiated by the wealth of new genetic, paleontological and archeological evidence briefly reviewed here.
  • Dediu, D. (2007). Non-spurious correlations between genetic and linguistic diversities in the context of human evolution. PhD Thesis, University of Edinburgh, Edinburgh, UK.
  • Dediu, D., & Levinson, S. C. (2013). On the antiquity of language: The reinterpretation of Neandertal linguistic capacities and its consequences. Frontiers in Language Sciences, 4: 397. doi:10.3389/fpsyg.2013.00397.

    Abstract

    It is usually assumed that modern language is a recent phenomenon, coinciding with the emergence of modern humans themselves. Many assume as well that this is the result of a single, sudden mutation giving rise to the full “modern package”. However, we argue here that recognizably modern language is likely an ancient feature of our genus pre-dating at least the common ancestor of modern humans and Neandertals about half a million years ago. To this end, we adduce a broad range of evidence from linguistics, genetics, palaeontology and archaeology clearly suggesting that Neandertals shared with us something like modern speech and language. This reassessment of the antiquity of modern language, from the usually quoted 50,000-100,000 years to half a million years, has profound consequences for our understanding of our own evolution in general and especially for the sciences of speech and language. As such, it argues against a saltationist scenario for the evolution of language, and towards a gradual process of culture-gene co-evolution extending to the present day. Another consequence is that the present-day linguistic diversity might better reflect the properties of the design space for language and not just the vagaries of history, and could also contain traces of the languages spoken by other human forms such as the Neandertals.
  • Dediu, D., & Cysouw, M. A. (2013). Some structural aspects of language are more stable than others: A comparison of seven methods. PLoS One, 8: e55009. doi:10.1371/journal.pone.0055009.

    Abstract

    Understanding the patterns and causes of differential structural stability is an area of major interest for the study of language change and evolution. It is still debated whether structural features have intrinsic stabilities across language families and geographic areas, or if the processes governing their rate of change are completely dependent upon the specific context of a given language or language family. We conducted an extensive literature review and selected seven different approaches to conceptualising and estimating the stability of structural linguistic features, aiming at comparing them using the same dataset, the World Atlas of Language Structures. We found that, despite profound conceptual and empirical differences between these methods, they tend to agree in classifying some structural linguistic features as being more stable than others. This suggests that there are intrinsic properties of such structural features influencing their stability across methods, language families and geographic areas. This finding is a major step towards understanding the nature of structural linguistic features and their interaction with idiosyncratic, lineage- and area-specific factors during language change and evolution.
  • Degand, L., & Van Bergen, G. (2018). Discourse markers as turn-transition devices: Evidence from speech and instant messaging. Discourse Processes, 55, 47-71. doi:10.1080/0163853X.2016.1198136.

    Abstract

    In this article we investigate the relation between discourse markers and turn-transition strategies in face-to-face conversations and Instant Messaging (IM), that is, unplanned, real-time, text-based, computer-mediated communication. By means of a quantitative corpus study of utterances containing a discourse marker, we show that utterance-final discourse markers are used more often in IM than in face-to-face conversations. Moreover, utterance-final discourse markers are shown to occur more often at points of turn-transition compared with points of turn-maintenance in both types of conversation. From our results we conclude that the discourse markers in utterance-final position can function as a turn-transition mechanism, signaling that the turn is over and the floor is open to the hearer. We argue that this linguistic turn-taking strategy is essentially similar in face-to-face and IM communication. Our results add to the evidence that communication in IM is more like speech than like writing.
  • Delgado, T., Ravignani, A., Verhoef, T., Thompson, B., Grossi, T., & Kirby, S. (2018). Cultural transmission of melodic and rhythmic universals: Four experiments and a model. In C. Cuskley, M. Flaherty, H. Little, L. McCrohon, A. Ravignani, & T. Verhoef (Eds.), Proceedings of the 12th International Conference on the Evolution of Language (EVOLANG XII) (pp. 89-91). Toruń, Poland: NCU Press. doi:10.12775/3991-1.019.
  • Dell, G. S., Reed, K. D., Adams, D. R., & Meyer, A. S. (2000). Speech errors, phonotactic constraints, and implicit learning: A study of the role of experience in language production. Journal of Experimental Psychology: Learning, Memory, and Cognition, 26, 1355-1367. doi:10.1037/0278-7393.26.6.1355.

    Abstract

    Speech errors follow the phonotactics of the language being spoken. For example, in English, if [ŋ] is mispronounced as [n], the [n] will always appear in a syllable coda. The authors created an analogue to this phenomenon by having participants recite lists of consonant-vowel-consonant syllables in 4 sessions on different days. In the first 2 experiments, some consonants were always onsets, some were always codas, and some could be both. In a third experiment, the set of possible onsets and codas depended on vowel identity. In all 3 studies, the production errors that occurred respected the "phonotactics" of the experiment. The results illustrate the implicit learning of the sequential constraints present in the stimuli and show that the language production system adapts to recent experience.
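    The experiments' artificial constraints can be made concrete with a small sketch: assign each consonant onset-only, coda-only, or unrestricted status, then check whether a produced CVC token is legal. The specific consonant assignments below are hypothetical, not those used in the study:

    ```python
    # Hypothetical re-sketch of the experiments' artificial phonotactics:
    # each consonant is restricted to onset position, coda position, or
    # both, and a consonant-vowel-consonant token is legal only if both
    # of its consonants respect their positional restriction.
    ONSET_ONLY = {"f"}            # may appear only syllable-initially
    CODA_ONLY = {"s"}             # may appear only syllable-finally
    UNRESTRICTED = {"k", "g", "m", "n"}

    def is_legal(syllable):
        """Check a 3-character CVC string against the constraints."""
        onset, coda = syllable[0], syllable[2]
        return (onset in ONSET_ONLY | UNRESTRICTED) and (
            coda in CODA_ONLY | UNRESTRICTED
        )

    print(is_legal("fes"))  # legal: onset-only /f/ in onset, coda-only /s/ in coda
    print(is_legal("sef"))  # illegal: both consonants in forbidden positions
    ```

    The finding was that participants' speech errors almost never produced illegal tokens of the second kind, showing implicit learning of the experiment-specific constraints.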
  • Den Hoed, J., Sollis, E., Venselaar, H., Estruch, S. B., Derizioti, P., & Fisher, S. E. (2018). Functional characterization of TBR1 variants in neurodevelopmental disorder. Scientific Reports, 8: 14279. doi:10.1038/s41598-018-32053-6.

    Abstract

    Recurrent de novo variants in the TBR1 transcription factor are implicated in the etiology of sporadic autism spectrum disorders (ASD). Disruptions include missense variants located in the T-box DNA-binding domain and previous work has demonstrated that they disrupt TBR1 protein function. Recent screens of thousands of simplex families with sporadic ASD cases uncovered additional T-box variants in TBR1 but their etiological relevance is unclear. We performed detailed functional analyses of de novo missense TBR1 variants found in the T-box of ASD cases, assessing many aspects of protein function, including subcellular localization, transcriptional activity and protein-interactions. Only two of the three tested variants severely disrupted TBR1 protein function, despite in silico predictions that all would be deleterious. Furthermore, we characterized a putative interaction with BCL11A, a transcription factor that was recently implicated in a neurodevelopmental syndrome involving developmental delay and language deficits. Our findings enhance understanding of molecular functions of TBR1, as well as highlighting the importance of functional testing of variants that emerge from next-generation sequencing, to decipher their contributions to neurodevelopmental disorders like ASD.

    Additional information

    Electronic supplementary material
  • den Hoed, M., Eijgelsheim, M., Esko, T., Brundel, B. J. J. M., Peal, D. S., Evans, D. M., Nolte, I. M., Segrè, A. V., Holm, H., Handsaker, R. E., Westra, H.-J., Johnson, T., Isaacs, A., Yang, J., Lundby, A., Zhao, J. H., Kim, Y. J., Go, M. J., Almgren, P., Bochud, M., Boucher, G., Cornelis, M. C., Gudbjartsson, D., Hadley, D., van der Harst, P., Hayward, C., den Heijer, M., Igl, W., Jackson, A. U., Kutalik, Z., Luan, J., Kemp, J. P., Kristiansson, K., Ladenvall, C., Lorentzon, M., Montasser, M. E., Njajou, O. T., O'Reilly, P. F., Padmanabhan, S., St Pourcain, B., Rankinen, T., Salo, P., Tanaka, T., Timpson, N. J., Vitart, V., Waite, L., Wheeler, W., Zhang, W., Draisma, H. H. M., Feitosa, M. F., Kerr, K. F., Lind, P. A., Mihailov, E., Onland-Moret, N. C., Song, C., Weedon, M. N., Xie, W., Yengo, L., Absher, D., Albert, C. M., Alonso, A., Arking, D. E., de Bakker, P. I. W., Balkau, B., Barlassina, C., Benaglio, P., Bis, J. C., Bouatia-Naji, N., Brage, S., Chanock, S. J., Chines, P. S., Chung, M., Darbar, D., Dina, C., Dörr, M., Elliott, P., Felix, S. B., Fischer, K., Fuchsberger, C., de Geus, E. J. C., Goyette, P., Gudnason, V., Harris, T. B., Hartikainen, A.-L., Havulinna, A. S., Heckbert, S. R., Hicks, A. A., Hofman, A., Holewijn, S., Hoogstra-Berends, F., Hottenga, J.-J., Jensen, M. K., Johansson, A., Junttila, J., Kääb, S., Kanon, B., Ketkar, S., Khaw, K.-T., Knowles, J. W., Kooner, A. S., Kors, J. A., Kumari, M., Milani, L., Laiho, P., Lakatta, E. G., Langenberg, C., Leusink, M., Liu, Y., Luben, R. N., Lunetta, K. L., Lynch, S. N., Markus, M. R. P., Marques-Vidal, P., Mateo Leach, I., McArdle, W. L., McCarroll, S. A., Medland, S. E., Miller, K. A., Montgomery, G. W., Morrison, A. 
C., Müller-Nurasyid, M., Navarro, P., Nelis, M., O'Connell, J. R., O'Donnell, C. J., Ong, K. K., Newman, A. B., Peters, A., Polasek, O., Pouta, A., Pramstaller, P. P., Psaty, B. M., Rao, D. C., Ring, S. M., Rossin, E. J., Rudan, D., Sanna, S., Scott, R. A., Sehmi, J. S., Sharp, S., Shin, J. T., Singleton, A. B., Smith, A. V., Soranzo, N., Spector, T. D., Stewart, C., Stringham, H. M., Tarasov, K. V., Uitterlinden, A. G., Vandenput, L., Hwang, S.-J., Whitfield, J. B., Wijmenga, C., Wild, S. H., Willemsen, G., Wilson, J. F., Witteman, J. C. M., Wong, A., Wong, Q., Jamshidi, Y., Zitting, P., Boer, J. M. A., Boomsma, D. I., Borecki, I. B., van Duijn, C. M., Ekelund, U., Forouhi, N. G., Froguel, P., Hingorani, A., Ingelsson, E., Kivimaki, M., Kronmal, R. A., Kuh, D., Lind, L., Martin, N. G., Oostra, B. A., Pedersen, N. L., Quertermous, T., Rotter, J. I., van der Schouw, Y. T., Verschuren, W. M. M., Walker, M., Albanes, D., Arnar, D. O., Assimes, T. L., Bandinelli, S., Boehnke, M., de Boer, R. A., Bouchard, C., Caulfield, W. L. M., Chambers, J. C., Curhan, G., Cusi, D., Eriksson, J., Ferrucci, L., van Gilst, W. H., Glorioso, N., de Graaf, J., Groop, L., Gyllensten, U., Hsueh, W.-C., Hu, F. B., Huikuri, H. V., Hunter, D. J., Iribarren, C., Isomaa, B., Jarvelin, M.-R., Jula, A., Kähönen, M., Kiemeney, L. A., van der Klauw, M. M., Kooner, J. S., Kraft, P., Iacoviello, L., Lehtimäki, T., Lokki, M.-L.-L., Mitchell, B. D., Navis, G., Nieminen, M. S., Ohlsson, C., Poulter, N. R., Qi, L., Raitakari, O. T., Rimm, E. B., Rioux, J. D., Rizzi, F., Rudan, I., Salomaa, V., Sever, P. S., Shields, D. C., Shuldiner, A. R., Sinisalo, J., Stanton, A. V., Stolk, R. P., Strachan, D. P., Tardif, J.-C., Thorsteinsdottir, U., Tuomilehto, J., van Veldhuisen, D. J., Virtamo, J., Viikari, J., Vollenweider, P., Waeber, G., Widen, E., Cho, Y. S., Olsen, J. V., Visscher, P. M., Willer, C., Franke, L., Erdmann, J., Thompson, J. R., Pfeufer, A., Sotoodehnia, N., Newton-Cheh, C., Ellinor, P. 
T., Stricker, B. H. C., Metspalu, A., Perola, M., Beckmann, J. S., Smith, G. D., Stefansson, K., Wareham, N. J., Munroe, P. B., Sibon, O. C. M., Milan, D. J., Snieder, H., Samani, N. J., Loos, R. J. F., Global BPgen Consortium, CARDIoGRAM Consortium, PR GWAS Consortium, QRS GWAS Consortium, QT-IGC Consortium, & CHARGE-AF Consortium (2013). Identification of heart rate-associated loci and their effects on cardiac conduction and rhythm disorders. Nature Genetics, 45(6), 621-631. doi:10.1038/ng.2610.

    Abstract

    Elevated resting heart rate is associated with greater risk of cardiovascular disease and mortality. In a 2-stage meta-analysis of genome-wide association studies in up to 181,171 individuals, we identified 14 new loci associated with heart rate and confirmed associations with all 7 previously established loci. Experimental downregulation of gene expression in Drosophila melanogaster and Danio rerio identified 20 genes at 11 loci that are relevant for heart rate regulation and highlight a role for genes involved in signal transmission, embryonic cardiac development and the pathophysiology of dilated cardiomyopathy, congenital heart failure and/or sudden cardiac death. In addition, genetic susceptibility to increased heart rate is associated with altered cardiac conduction and reduced risk of sick sinus syndrome, and both heart rate-increasing and heart rate-decreasing variants associate with risk of atrial fibrillation. Our findings provide fresh insights into the mechanisms regulating heart rate and identify new therapeutic targets.
  • Deriziotis, P., & Fisher, S. E. (2013). Neurogenomics of speech and language disorders: The road ahead. Genome Biology, 14: 204. doi:10.1186/gb-2013-14-4-204.

    Abstract

    Next-generation sequencing is set to transform the discovery of genes underlying neurodevelopmental disorders, and so offer important insights into the biological bases of spoken language. Success will depend on functional assessments in neuronal cell lines, animal models and humans themselves.
  • Desmet, T., De Baecke, C., Drieghe, D., Brysbaert, M., & Vonk, W. (2006). Relative clause attachment in Dutch: On-line comprehension corresponds to corpus frequencies when lexical variables are taken into account. Language and Cognitive Processes, 21(4), 453-485. doi:10.1080/01690960400023485.

    Abstract

    Desmet, Brysbaert, and De Baecke (2002a) showed that the production of relative clauses following two potential attachment hosts (e.g., ‘Someone shot the servant of the actress who was on the balcony’) was influenced by the animacy of the first host. These results were important because they refuted evidence from Dutch against experience-based accounts of syntactic ambiguity resolution, such as the tuning hypothesis. However, Desmet et al. did not provide direct evidence in favour of tuning, because their study focused on production and did not include reading experiments. In the present paper this line of research was extended. A corpus analysis and an eye-tracking experiment revealed that when taking into account lexical properties of the NP host sites (i.e., animacy and concreteness) the frequency pattern and the on-line comprehension of the relative clause attachment ambiguity do correspond. The implications for exposure-based accounts of sentence processing are discussed.
  • Devanna, P., Van de Vorst, M., Pfundt, R., Gilissen, C., & Vernes, S. C. (2018). Genome-wide investigation of an ID cohort reveals de novo 3′UTR variants affecting gene expression. Human Genetics, 137(9), 717-721. doi:10.1007/s00439-018-1925-9.

    Abstract

    Intellectual disability (ID) is a severe neurodevelopmental disorder with genetically heterogeneous causes. Large-scale sequencing has led to the identification of many gene-disrupting mutations; however, a substantial proportion of cases lack a molecular diagnosis. As such, there remains much to uncover for a complete understanding of the genetic underpinnings of ID. Genetic variants present in non-coding regions of the genome have been highlighted as potential contributors to neurodevelopmental disorders given their role in regulating gene expression. Nevertheless, the functional characterization of non-coding variants remains challenging. We describe the identification and characterization of de novo non-coding variation in 3′UTR regulatory regions within an ID cohort of 50 patients. This cohort was previously screened for structural and coding pathogenic variants via CNV, whole exome and whole genome analysis. We identified 44 high-confidence single nucleotide non-coding variants within the 3′UTR regions of these 50 genomes. Four of these variants were located within predicted miRNA binding sites and were thus hypothesised to have regulatory consequences. Functional testing showed that two of the variants interfered with miRNA-mediated regulation of their target genes, AMD1 and FAIM. Both these variants were found in the same individual, and their functional consequences may point to a potential role for such variants in intellectual disability.

    Additional information

    439_2018_1925_MOESM1_ESM.docx
  • Devanna, P., Chen, X. S., Ho, J., Gajewski, D., Smith, S. D., Gialluisi, A., Francks, C., Fisher, S. E., Newbury, D. F., & Vernes, S. C. (2018). Next-gen sequencing identifies non-coding variation disrupting miRNA binding sites in neurological disorders. Molecular Psychiatry, 23(5), 1375-1384. doi:10.1038/mp.2017.30.

    Abstract

    Understanding the genetic factors underlying neurodevelopmental and neuropsychiatric disorders is a major challenge given their prevalence and potential severity for quality of life. While large-scale genomic screens have made major advances in this area, for many disorders the genetic underpinnings are complex and poorly understood. To date the field has focused predominantly on protein coding variation, but given the importance of tightly controlled gene expression for normal brain development and function, variation that affects non-coding regulatory regions of the genome is likely to play an important role in these phenotypes. Herein we show the importance of 3 prime untranslated region (3'UTR) non-coding regulatory variants across neurodevelopmental and neuropsychiatric disorders. We devised a pipeline for identifying and functionally validating putatively pathogenic variants from next generation sequencing (NGS) data. We applied this pipeline to a cohort of children with severe specific language impairment (SLI) and identified a functional, SLI-associated variant affecting gene regulation in cells and post-mortem human brain. This variant and the affected gene (ARHGEF39) represent new putative risk factors for SLI. Furthermore, we identified 3′UTR regulatory variants across autism, schizophrenia and bipolar disorder NGS cohorts, demonstrating their impact on neurodevelopmental and neuropsychiatric disorders. Our findings show the importance of investigating non-coding regulatory variants when determining risk factors contributing to neurodevelopmental and neuropsychiatric disorders. In the future, integration of such regulatory variation with protein coding changes will be essential for uncovering the genetic causes of complex neurological disorders and the fundamental mechanisms underlying health and disease.

    Additional information

    mp201730x1.docx
  • Devaraju, K., Barnabé-Heider, F., Kokaia, Z., & Lindvall, O. (2013). FoxJ1-expressing cells contribute to neurogenesis in forebrain of adult rats: Evidence from in vivo electroporation combined with piggyBac transposon. Experimental Cell Research, 319(18), 2790-2800. doi:10.1016/j.yexcr.2013.08.028.

    Abstract

    Ependymal cells in the lateral ventricular wall are considered to be post-mitotic but can give rise to neuroblasts and astrocytes after stroke in adult mice due to insult-induced suppression of Notch signaling. The transcription factor FoxJ1, which has been used to characterize mouse ependymal cells, is also expressed by a subset of astrocytes. Cells expressing FoxJ1, which drives the expression of motile cilia, contribute to early postnatal neurogenesis in mouse olfactory bulb. The distribution and progeny of FoxJ1-expressing cells in rat forebrain are unknown. Here we show using immunohistochemistry that the overall majority of FoxJ1-expressing cells in the lateral ventricular wall of adult rats are ependymal cells with a minor population being astrocytes. To allow for long-term fate mapping of FoxJ1-derived cells, we used the piggyBac system for in vivo gene transfer with electroporation. Using this method, we found that FoxJ1-expressing cells, presumably the astrocytes, give rise to neuroblasts and mature neurons in the olfactory bulb both in intact and stroke-damaged brain of adult rats. No significant contribution of FoxJ1-derived cells to stroke-induced striatal neurogenesis was detected. These data indicate that in the adult rat brain, FoxJ1-expressing cells contribute to the formation of new neurons in the olfactory bulb but are not involved in the cellular repair after stroke.
  • Dietrich, C. (2006). The acquisition of phonological structure: Distinguishing contrastive from non-contrastive variation. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.57829.
  • Dietrich, C., Swingley, D., & Werker, J. F. (2007). Native language governs interpretation of salient speech sound differences at 18 months. Proceedings of the National Academy of Sciences of the USA, 104(41), 16027-16031.

    Abstract

    One of the first steps infants take in learning their native language is to discover its set of speech-sound categories. This early development is shown when infants begin to lose the ability to differentiate some of the speech sounds their language does not use, while retaining or improving discrimination of language-relevant sounds. However, this aspect of early phonological tuning is not sufficient for language learning. Children must also discover which of the phonetic cues that are used in their language serve to signal lexical distinctions. Phonetic variation that is readily discriminable to all children may indicate two different words in one language but only one word in another. Here, we provide evidence that the language background of 1.5-year-olds affects their interpretation of phonetic variation in word learning, and we show that young children interpret salient phonetic variation in language-specific ways. Three experiments with a total of 104 children compared Dutch- and English-learning 18-month-olds' responses to novel words varying in vowel duration or vowel quality. Dutch learners interpreted vowel duration as lexically contrastive, but English learners did not, in keeping with properties of Dutch and English. Both groups performed equivalently when differentiating words varying in vowel quality. Thus, at one and a half years, children's phonological knowledge already guides their interpretation of salient phonetic variation. We argue that early phonological learning is not just a matter of maintaining the ability to distinguish language-relevant phonetic cues. Learning also requires phonological interpretation at appropriate levels of linguistic analysis.
  • Dietrich, R., & Klein, W. (1986). Simple language. Interdisciplinary Science Reviews, 11(2), 110-117.
  • Dijkstra, T., & Kempen, G. (1997). Het taalgebruikersmodel. In H. Hulshof, & T. Hendrix (Eds.), De taalcentrale. Amsterdam: Bulkboek.
  • Dimitriadis, A., Kemps-Snijders, M., Wittenburg, P., Everaert, M., & Levinson, S. C. (2006). Towards a linguist's workbench supporting eScience methods. In Proceedings of the 2nd IEEE International Conference on e-Science and Grid Computing.
  • Dimroth, C. (2007). Zweitspracherwerb bei Kindern und Jugendlichen: Gemeinsamkeiten und Unterschiede. In T. Anstatt (Ed.), Mehrsprachigkeit bei Kindern und Erwachsenen: Erwerb, Formen, Förderung (pp. 115-137). Tübingen: Attempto.

    Abstract

    This paper discusses the influence of age-related factors like stage of cognitive development, prior linguistic knowledge, and motivation and addresses the specific effects of these ‘age factors’ on second language acquisition as opposed to other learning tasks. Based on longitudinal corpus data from child and adolescent learners of L2 German (L1 = Russian), the paper studies the acquisition of word order (verb raising over negation, verb second) and inflectional morphology (subject-verb-agreement, tense, noun plural, and adjective-noun agreement). Whereas the child learner shows target-like production in all of these areas within the observation period (1½ years), the adolescent learner masters only some of them. The discussion addresses the question of what it is about clusters of grammatical features that make them particularly affected by age.
  • Dimroth, C., & Klein, W. (2007). Den Erwachsenen überlegen: Kinder entwickeln beim Sprachenlernen besondere Techniken und sind erfolgreicher als ältere Menschen [Superior to adults: When learning languages, children develop special techniques and are more successful than older people]. Tagesspiegel, 19737, B6.

    Abstract

    The younger - the better? This paper discusses second language learning at different ages and takes a critical look at generalizations of the kind ‘The younger – the better’. It is argued that these generalizations do not apply across the board. Age related differences like the amount of linguistic knowledge, prior experience as a language user, or more or less advanced communicative needs affect different components of the language system to different degrees, and can even be an advantage for the early development of simple communicative systems.
  • Dimroth, C., & Watorek, M. (2000). The scope of additive particles in basic learner languages. Studies in Second Language Acquisition, 22, 307-336. Retrieved from http://journals.cambridge.org/action/displayAbstract?aid=65981.

    Abstract

    Based on their longitudinal analysis of the acquisition of Dutch, English, French, and German, Klein and Perdue (1997) described a “basic learner variety” as valid cross-linguistically and comprising a limited number of shared syntactic patterns interacting with two types of constraints: (a) semantic—the NP whose referent has highest control comes first, and (b) pragmatic—the focus expression is in final position. These authors hypothesized that “the topic-focus structure also plays an important role in some other respects. . . . Thus, negation and (other) scope particles occur at the topic-focus boundary” (p. 318). This poses the problem of the interaction between the core organizational principles of the basic variety and optional items such as negative particles and scope particles, which semantically affect the whole or part of the utterance in which they occur. In this article, we test the validity of these authors' hypothesis for the acquisition of the additive scope particle also (and its translation equivalents). Our analysis is based on the European Science Foundation (ESF) data originally used to define the basic variety, but we also included some more advanced learner data from the same database. In doing so, we refer to the analyses of Dimroth and Klein (1996), which concern the interaction between scope particles and the part of the utterance they affect, and we make a distinction between maximal scope—that which is potentially affected by the particle—and the actual scope of a particle in relation to an utterance in a given discourse context.

  • Dingemanse, M. (2006). The semantics of Bantu noun classification: A review and comparison of three approaches. Master Thesis, Leiden University.
  • Dingemanse, M. (2006). The body in Yoruba: A linguistic study. Master Thesis, Leiden University, Leiden.
  • Dingemanse, M. (2013). Wie wir mit Sprache malen - How to paint with language. Forschungsbericht 2013 - Max-Planck-Institut für Psycholinguistik. In Max-Planck-Gesellschaft Jahrbuch 2013. München: Max Planck Society for the Advancement of Science. Retrieved from http://www.mpg.de/6683977/Psycholinguistik_JB_2013.

    Abstract

    Words evolve not as blobs of ink on paper but in face-to-face interaction. The nature of language as fundamentally interactive and multimodal is shown by the study of ideophones, vivid sensory words that thrive in conversations around the world. The ways in which these Lautbilder enable precise communication about sensory knowledge have for the first time been studied in detail. It turns out that we can paint with language, and that the onomatopoeia we sometimes classify as childish might be a subset of a much richer toolkit for depiction in speech, available to us all.
