Publications

  • Cho, T. (2002). The effects of prosody on articulation in English. New York: Routledge.
  • Cho, T., Jun, S.-A., & Ladefoged, P. (2002). Acoustic and aerodynamic correlates of Korean stops and fricatives. Journal of Phonetics, 30(2), 193-228. doi:10.1006/jpho.2001.0153.

    Abstract

    This study examines acoustic and aerodynamic characteristics of consonants in standard Korean and in Cheju, an endangered Korean language. The focus is on the well-known three-way distinction among voiceless stops (i.e., lenis, fortis, aspirated) and the two-way distinction between the voiceless fricatives /s/ and /s*/. While such a typologically unusual contrast among voiceless stops has long drawn the attention of phoneticians and phonologists, there is no single work in the literature that discusses a body of data representing a relatively large number of speakers. This study reports a variety of acoustic and aerodynamic measures obtained from 12 Korean speakers (four speakers of Seoul Korean and eight speakers of Cheju). Results show that, in addition to findings similar to those reported by others, there are three crucial points worth noting. Firstly, lenis, fortis, and aspirated stops are systematically differentiated from each other by the voice quality of the following vowel. Secondly, these stops are also differentiated by aerodynamic mechanisms. The aspirated and fortis stops are similar in supralaryngeal articulation, but employ a different relation between intraoral pressure and flow. Thirdly, our study suggests that the fricative /s/ is better categorized as “lenis” rather than “aspirated”. The paper concludes with a discussion of the implications of Korean data for theories of the voicing contrast and their phonological representations.
  • Choi, S., & Bowerman, M. (1991). Learning to express motion events in English and Korean: The influence of language-specific lexicalization patterns. Cognition, 41, 83-121. doi:10.1016/0010-0277(91)90033-Z.

    Abstract

    English and Korean differ in how they lexicalize the components of motion events. English characteristically conflates Motion with Manner, Cause, or Deixis, and expresses Path separately. Korean, in contrast, conflates Motion with Path and elements of Figure and Ground in transitive clauses for caused Motion, but conflates Motion with Deixis and spells out Path and Manner separately in intransitive clauses for spontaneous motion. Children learning English and Korean show sensitivity to language-specific patterns in the way they talk about motion from as early as 17–20 months. For example, learners of English quickly generalize their earliest spatial words — Path particles like up, down, and in — to both spontaneous and caused changes of location and, for up and down, to posture changes, while learners of Korean keep words for spontaneous and caused motion strictly separate and use different words for vertical changes of location and posture changes. These findings challenge the widespread view that children initially map spatial words directly to nonlinguistic spatial concepts, and suggest that they are influenced by the semantic organization of their language virtually from the beginning. We discuss how input and cognition may interact in the early phases of learning to talk about space.
  • Choi, J., Broersma, M., & Cutler, A. (2018). Phonetic learning is not enhanced by sequential exposure to more than one language. Linguistic Research, 35(3), 567-581. doi:10.17250/khisli.35.3.201812.006.

    Abstract

    Several studies have documented that international adoptees, who in early years have experienced a change from a language used in their birth country to a new language in an adoptive country, benefit from the limited early exposure to the birth language when relearning that language’s sounds later in life. The adoptees’ relearning advantages have been argued to be conferred by lasting birth-language knowledge obtained from the early exposure. However, it is also plausible to assume that the advantages may arise from adoptees’ superior ability to learn language sounds in general, as a result of their unusual linguistic experience, i.e., exposure to multiple languages in sequence early in life. If this is the case, then the adoptees’ relearning benefits should generalize to previously unheard language sounds, rather than be limited to their birth-language sounds. In the present study, adult Korean adoptees in the Netherlands and matched Dutch-native controls were trained on identifying a Japanese length distinction to which they had never been exposed before. The adoptees and Dutch controls did not differ on any test carried out before, during, or after the training, indicating that observed adoptee advantages for birth-language relearning do not generalize to novel, previously unheard language sounds. The finding thus fails to support the suggestion that birth-language relearning advantages may arise from enhanced ability to learn language sounds in general conferred by early experience in multiple languages. Rather, our finding supports the original contention that such advantages involve memory traces obtained before adoption.
  • Chu, M., & Kita, S. (2016). Co-thought and co-speech gestures are generated by the same action generation process. Journal of Experimental Psychology: Learning, Memory, and Cognition, 42(2), 257-270. doi:10.1037/xlm0000168.

    Abstract

    People spontaneously gesture when they speak (co-speech gestures) and when they solve problems silently (co-thought gestures). In this study, we first explored the relationship between these 2 types of gestures and found that individuals who produced co-thought gestures more frequently also produced co-speech gestures more frequently (Experiments 1 and 2). This suggests that the 2 types of gestures are generated from the same process. We then investigated whether both types of gestures can be generated from the representational use of the action generation process that also generates purposeful actions that have a direct physical impact on the world, such as manipulating an object or locomotion (the action generation hypothesis). To this end, we examined the effect of object affordances on the production of both types of gestures (Experiments 3 and 4). We found that individuals produced co-thought and co-speech gestures more often when the stimulus objects afforded action (objects with a smooth surface) than when they did not (objects with a spiky surface). These results support the action generation hypothesis for representational gestures. However, our findings are incompatible with the hypothesis that co-speech representational gestures are solely generated from the speech production process (the speech production hypothesis).
  • Clahsen, H., Prüfert, P., Eisenbeiss, S., & Cholin, J. (2002). Strong stems in the German mental lexicon: Evidence from child language acquisition and adult processing. In I. Kaufmann, & B. Stiebels (Eds.), More than words. Festschrift for Dieter Wunderlich (pp. 91-112). Berlin: Akademie Verlag.
  • Clark, E. V., & Casillas, M. (2016). First language acquisition. In K. Allan (Ed.), The Routledge Handbook of Linguistics (pp. 311-328). New York: Routledge.
  • Clough, S., & Hilverman, C. (2018). Hand gestures and how they help children learn. Frontiers for Young Minds, 6: 29. doi:10.3389/frym.2018.00029.

    Abstract

    When we talk, we often make hand movements called gestures at the same time. Although just about everyone gestures when they talk, we usually do not even notice the gestures. Our hand gestures play an important role in helping us learn and remember! When we see other people gesturing when they talk—or when we gesture when we talk ourselves—we are more likely to remember the information being talked about than if gestures were not involved. Our hand gestures can even indicate when we are ready to learn new things! In this article, we explain how gestures can help learning. To investigate this, we studied children learning a new mathematical concept called equivalence. We hope that this article will help you notice when you, your friends and family, and your teachers are gesturing, and that it will help you understand how those gestures can help people learn.
  • Clough, S., & Gordon, J. K. (2020). Fluent or nonfluent? Part A. Underlying contributors to categorical classifications of fluency in aphasia. Aphasiology, 34(5), 515-539. doi:10.1080/02687038.2020.1727709.

    Abstract

    Background: The concept of fluency is widely used to dichotomously classify aphasia syndromes in both research and clinical practice. Despite its ubiquity, reliability of fluency measurement is reduced due to its multi-dimensional nature and the variety of methods used to measure it.
    Aims: The primary aim of the study was to determine what factors contribute to judgements of fluency in aphasia, identifying methodological and linguistic sources of disagreement.
    Methods & Procedures: We compared fluency classifications generated according to fluency scores on the revised Western Aphasia Battery (WAB-R) to clinical impressions of fluency for 254 English-speaking people with aphasia (PwA) from the AphasiaBank database. To determine what contributed to fluency classifications, we examined syndrome diagnoses and measured the predictive strength of 18 spontaneous speech variables extracted from retellings of the Cinderella story. The variables were selected to represent three dimensions predicted to underlie fluency: grammatical competence, lexical retrieval, and the facility of speech production.
    Outcomes & Results: WAB-R fluency classifications agreed with 83% of clinician classifications, although agreement was much greater for fluent than nonfluent classifications. The majority of mismatches were diagnosed with anomic or conduction aphasia by the WAB-R but Broca's aphasia by clinicians. Modifying the WAB-R scale improved the extent to which WAB-R fluency categories matched clinical impressions. Fluency classifications were predicted by a combination of variables, including aspects of grammaticality, lexical retrieval and speech production. However, fluency classification by WAB-R was largely predicted by severity, whereas the presence or absence of apraxia of speech was the largest predictor of fluency classifications by clinicians.
    Conclusions: Fluency judgements according to WAB-R scoring and those according to clinical impression showed some common influences, but also some differences that contributed to mismatches in fluency categorization. We propose that, rather than using dichotomous fluency categories, which can mask sources of disagreement, fluency should be explicitly identified relative to the underlying deficits (word-finding, grammatical formulation, speech production, or a combination) contributing to each individual PwA's fluency profile. Identifying what contributes to fluency disruptions is likely to generate more reliable diagnoses and provide more concrete guidance regarding therapy, avenues we are pursuing in ongoing research.
  • Clough, S., & Duff, M. C. (2020). The role of gesture in communication and cognition: Implications for understanding and treating neurogenic communication disorders. Frontiers in Human Neuroscience, 14: 323. doi:10.3389/fnhum.2020.00323.

    Abstract

    When people talk, they gesture. Gesture is a fundamental component of language that contributes meaningful and unique information to a spoken message and reflects the speaker's underlying knowledge and experiences. Theoretical perspectives of speech and gesture propose that they share a common conceptual origin and have a tightly integrated relationship, overlapping in time, meaning, and function to enrich the communicative context. We review a robust literature from the field of psychology documenting the benefits of gesture for communication for both speakers and listeners, as well as its important cognitive functions for organizing spoken language, and facilitating problem-solving, learning, and memory. Despite this evidence, gesture has been relatively understudied in populations with neurogenic communication disorders. While few studies have examined the rehabilitative potential of gesture in these populations, others have ignored gesture entirely or even discouraged its use. We review the literature characterizing gesture production and its role in intervention for people with aphasia, as well as describe the much sparser literature on gesture in cognitive communication disorders including right hemisphere damage, traumatic brain injury, and Alzheimer's disease. The neuroanatomical and behavioral profiles of these patient populations provide a unique opportunity to test theories of the relationship of speech and gesture and advance our understanding of their neural correlates. This review highlights several gaps in the field of communication disorders which may serve as a bridge for applying the psychological literature of gesture to the study of language disorders. Such future work would benefit from considering theoretical perspectives of gesture and using more rigorous and quantitative empirical methods in its approaches. We discuss implications for leveraging gesture to explore its untapped potential in understanding and rehabilitating neurogenic communication disorders.
  • Collins, J. (2016). The role of language contact in creating correlations between humidity and tone. Journal of Language Evolution, 46-52. doi:10.1093/jole/lzv012.
  • Connaughton, D. M., Dai, R., Owen, D. J., Marquez, J., Mann, N., Graham-Paquin, A. L., Nakayama, M., Coyaud, E., Laurent, E. M. N., St-Germain, J. R., Snijders Blok, L., Vino, A., Klämbt, V., Deutsch, K., Wu, C.-H.-W., Kolvenbach, C. M., Kause, F., Ottlewski, I., Schneider, R., Kitzler, T. M., Majmundar, A. J., Buerger, F., Onuchic-Whitford, A. C., Youying, M., Kolb, A., Salmanullah, D., Chen, E., Van der Ven, A. T., Rao, J., Ityel, H., Seltzsam, S., Rieke, J. M., Chen, J., Vivante, A., Hwang, D.-Y., Kohl, S., Dworschak, G. C., Hermle, T., Alders, M., Bartolomaeus, T., Bauer, S. B., Baum, M. A., Brilstra, E. H., Challman, T. D., Zyskind, J., Costin, C. E., Dipple, K. M., Duijkers, F. A., Ferguson, M., Fitzpatrick, D. R., Fick, R., Glass, I. A., Hulick, P. J., Kline, A. D., Krey, I., Kumar, S., Lu, W., Marco, E. J., Wentzensen, I. M., Mefford, H. C., Platzer, K., Povolotskaya, I. S., Savatt, J. M., Shcherbakova, N. V., Senguttuvan, P., Squire, A. E., Stein, D. R., Thiffault, I., Voinova, V. Y., Somers, M. J. G., Ferguson, M. A., Traum, A. Z., Daouk, G. H., Daga, A., Rodig, N. M., Terhal, P. A., Van Binsbergen, E., Eid, L. A., Tasic, V., Rasouly, H. M., Lim, T. Y., Ahram, D. F., Gharavi, A. G., Reutter, H. M., Rehm, H. L., MacArthur, D. G., Lek, M., Laricchia, K. M., Lifton, R. P., Xu, H., Mane, S. M., Sanna-Cherchi, S., Sharrocks, A. D., Raught, B., Fisher, S. E., Bouchard, M., Khokha, M. K., Shril, S., & Hildebrandt, F. (2020). Mutations of the transcriptional corepressor ZMYM2 cause syndromic urinary tract malformations. The American Journal of Human Genetics, 107(4), 727-742. doi:10.1016/j.ajhg.2020.08.013.

    Abstract

    Congenital anomalies of the kidney and urinary tract (CAKUT) constitute one of the most frequent birth defects and represent the most common cause of chronic kidney disease in the first three decades of life. Despite the discovery of dozens of monogenic causes of CAKUT, most pathogenic pathways remain elusive. We performed whole-exome sequencing (WES) in 551 individuals with CAKUT and identified a heterozygous de novo stop-gain variant in ZMYM2 in two different families with CAKUT. Through collaboration, we identified in total 14 different heterozygous loss-of-function mutations in ZMYM2 in 15 unrelated families. Most mutations occurred de novo, indicating possible interference with reproductive function. Human disease features are replicated in X. tropicalis larvae with morpholino knockdowns, in which expression of truncated ZMYM2 proteins, based on individual mutations, failed to rescue renal and craniofacial defects. Moreover, heterozygous Zmym2-deficient mice recapitulated features of CAKUT with high penetrance. The ZMYM2 protein is a component of a transcriptional corepressor complex recently linked to the silencing of developmentally regulated endogenous retrovirus elements. Using protein-protein interaction assays, we show that ZMYM2 interacts with additional epigenetic silencing complexes and confirm that it binds to FOXP1, a transcription factor that has also been linked to CAKUT. In summary, our findings establish loss-of-function mutations of ZMYM2, and potentially those of other proteins in its interactome, as causes of human CAKUT, offering new routes for studying the pathogenesis of the disorder.
  • Cooper, N., Cutler, A., & Wales, R. (2002). Constraints of lexical stress on lexical access in English: Evidence from native and non-native listeners. Language and Speech, 45(3), 207-228.

    Abstract

    Four cross-modal priming experiments and two forced-choice identification experiments investigated the use of suprasegmental cues to stress in the recognition of spoken English words, by native (English-speaking) and non-native (Dutch) listeners. Previous results had indicated that suprasegmental information was exploited in lexical access by Dutch but not by English listeners. For both listener groups, recognition of visually presented target words was faster, in comparison to a control condition, after stress-matching spoken primes, either monosyllabic (mus- from MUsic/muSEum) or bisyllabic (admi- from ADmiral/admiRAtion). For native listeners, the effect of stress-mismatching bisyllabic primes was not different from that of control primes, but mismatching monosyllabic primes produced partial facilitation. For non-native listeners, both bisyllabic and monosyllabic stress-mismatching primes produced partial facilitation. Native English listeners thus can exploit suprasegmental information in spoken-word recognition, but information from two syllables is used more effectively than information from one syllable. Dutch listeners are less proficient at using suprasegmental information in English than in their native language, but, as in their native language, use mono- and bisyllabic information to an equal extent. In forced-choice identification, Dutch listeners outperformed native listeners at correctly assigning a monosyllabic fragment (e.g., mus-) to one of two words differing in stress.
  • Coopmans, C. W., & Schoenmakers, G.-J. (2020). Incremental structure building of preverbal PPs in Dutch. Linguistics in the Netherlands, 37(1), 38-52. doi:10.1075/avt.00036.coo.

    Abstract

    Incremental comprehension of head-final constructions can reveal structural attachment preferences for ambiguous phrases. This study investigates how temporarily ambiguous PPs are processed in Dutch verb-final constructions. In De aannemer heeft op het dakterras bespaard/gewerkt ‘The contractor has on the roof terrace saved/worked’, the PP is locally ambiguous between attachment as argument and as adjunct. This ambiguity is resolved by the sentence-final verb. In a self-paced reading task, we manipulated the argument/adjunct status of the PP, and its position relative to the verb. While we found no reading-time differences between argument and adjunct PPs, we did find that transitive verbs, for which the PP is an argument, were read more slowly than intransitive verbs, for which the PP is an adjunct. We suggest that Dutch parsers have a preference for adjunct attachment of preverbal PPs, and discuss our findings in terms of incremental parsing models that aim to minimize costly reanalysis.
  • Coopmans, C. W., & Nieuwland, M. S. (2020). Dissociating activation and integration of discourse referents: Evidence from ERPs and oscillations. Cortex, 126, 83-106. doi:10.1016/j.cortex.2019.12.028.

    Abstract

    A key challenge in understanding stories and conversations is the comprehension of ‘anaphora’, words that refer back to previously mentioned words or concepts (‘antecedents’). In psycholinguistic theories, anaphor comprehension involves the initial activation of the antecedent and its subsequent integration into the unfolding representation of the narrated event. A recent proposal suggests that these processes draw upon the brain’s recognition memory and language networks, respectively, and may be dissociable in patterns of neural oscillatory synchronization (Nieuwland & Martin, 2017). We addressed this proposal in an electroencephalogram (EEG) study with pre-registered data acquisition and analyses, using event-related potentials (ERPs) and neural oscillations. Dutch participants read two-sentence mini stories containing proper names, which were repeated or new (ease of activation) and semantically coherent or incoherent with the preceding discourse (ease of integration). Repeated names elicited lower N400 and Late Positive Component amplitude than new names, and also an increase in theta-band (4-7 Hz) synchronization, which was largest around 240-450 ms after name onset. Discourse-coherent names elicited an increase in gamma-band (60-80 Hz) synchronization compared to discourse-incoherent names. This effect was largest around 690-1000 ms after name onset and exploratory beamformer analysis suggested a left frontal source. We argue that the initial activation and subsequent discourse-level integration of referents can be dissociated with event-related EEG activity, and are associated with theta- and gamma-band activity, respectively. These findings further establish the link between memory and language through neural oscillations.

    Additional information

    materials, data, and analysis scripts
  • Corcoran, A. W., Alday, P. M., Schlesewsky, M., & Bornkessel-Schlesewsky, I. (2018). Toward a reliable, automated method of individual alpha frequency (IAF) quantification. Psychophysiology, 55(7): e13064. doi:10.1111/psyp.13064.

    Abstract

    Individual alpha frequency (IAF) is a promising electrophysiological marker of interindividual differences in cognitive function. IAF has been linked with trait-like differences in information processing and general intelligence, and provides an empirical basis for the definition of individualized frequency bands. Despite its widespread application, however, there is little consensus on the optimal method for estimating IAF, and many common approaches are prone to bias and inconsistency. Here, we describe an automated strategy for deriving two of the most prevalent IAF estimators in the literature: peak alpha frequency (PAF) and center of gravity (CoG). These indices are calculated from resting-state power spectra that have been smoothed using a Savitzky-Golay filter (SGF). We evaluate the performance characteristics of this analysis procedure in both empirical and simulated EEG data sets. Applying the SGF technique to resting-state data from n = 63 healthy adults furnished 61 PAF and 62 CoG estimates. The statistical properties of these estimates were consistent with previous reports. Simulation analyses revealed that the SGF routine was able to reliably extract target alpha components, even under relatively noisy spectral conditions. The routine consistently outperformed a simpler method of automated peak detection that did not involve spectral smoothing. The SGF technique is fast, open source, and available in two popular programming languages (MATLAB, Python), and thus can easily be integrated within the most popular M/EEG toolsets (EEGLAB, FieldTrip, MNE-Python). As such, it affords a convenient tool for improving the reliability and replicability of future IAF-related research.

    Additional information

    psyp13064-sup-0001-s01.docx
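
    Code sketch

    The abstract above describes the core of the authors' algorithm precisely enough for a compact illustration: smooth a resting-state power spectrum with a Savitzky-Golay filter, then derive peak alpha frequency (PAF) and the alpha-band center of gravity (CoG). The sketch below is a minimal, assumption-laden reading of that description in Python (one of the two languages the authors mention), not their published routine; the search band and filter settings are placeholders.

    # Minimal sketch of SGF-based IAF estimation as described in the abstract.
    # NOT the authors' published routine; band and filter settings are assumed.
    import numpy as np
    from scipy.signal import savgol_filter

    def estimate_iaf(freqs, psd, band=(7.0, 13.0), window=11, polyorder=5):
        """Estimate (PAF, CoG) from a power spectral density sampled at freqs.

        freqs, psd: 1-D numpy arrays of equal length.
        """
        # Smooth the spectrum to suppress noise before peak picking.
        smoothed = savgol_filter(psd, window_length=window, polyorder=polyorder)
        in_band = (freqs >= band[0]) & (freqs <= band[1])
        f, p = freqs[in_band], smoothed[in_band]
        paf = f[np.argmax(p)]                   # frequency of the highest alpha peak
        cog = float(np.sum(f * p) / np.sum(p))  # power-weighted mean alpha frequency
        return paf, cog

    Note that the abstract reports 61 PAF estimates from 63 participants, so the published routine evidently rejects spectra without a credible alpha peak; this sketch performs no such check.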
  • Corps, R. E. (2018). Coordinating utterances during conversational dialogue: The role of content and timing predictions. PhD Thesis, The University of Edinburgh, Edinburgh.
  • Corps, R. E., Gambi, C., & Pickering, M. J. (2018). Coordinating utterances during turn-taking: The role of prediction, response preparation, and articulation. Discourse Processes, 55(2), 230-240. doi:10.1080/0163853X.2017.1330031.

    Abstract

    During conversation, interlocutors rapidly switch between speaker and listener roles and take turns at talk. How do they achieve such fine coordination? Most research has concentrated on the role of prediction, but listeners must also prepare a response in advance (assuming they wish to respond) and articulate this response at the appropriate moment. Such mechanisms may overlap with the processes of comprehending the speaker’s incoming turn and predicting its end. However, little is known about the stages of response preparation and production. We discuss three questions pertaining to such stages: (1) Do listeners prepare their own response in advance?, (2) Can listeners buffer their prepared response?, and (3) Does buffering lead to interference with concurrent comprehension? We argue that fine coordination requires more than just an accurate prediction of the interlocutor’s incoming turn: Listeners must also simultaneously prepare their own response.
  • Corps, R. E., Crossley, A., Gambi, C., & Pickering, M. J. (2018). Early preparation during turn-taking: Listeners use content predictions to determine what to say but not when to say it. Cognition, 175, 77-95. doi:10.1016/j.cognition.2018.01.015.

    Abstract

    During conversation, there is often little gap between interlocutors’ utterances. In two pairs of experiments, we manipulated the content predictability of yes/no questions to investigate whether listeners achieve such coordination by (i) preparing a response as early as possible or (ii) predicting the end of the speaker’s turn. To assess these two mechanisms, we varied the participants’ task: They either pressed a button when they thought the question was about to end (Experiments 1a and 2a), or verbally answered the questions with either yes or no (Experiments 1b and 2b). Predictability effects were present when participants had to prepare a verbal response, but not when they had to predict the turn-end. These findings suggest content prediction facilitates turn-taking because it allows listeners to prepare their own response early, rather than because it helps them predict when the speaker will reach the end of their turn.

    Additional information

    Supplementary material
  • Corps, R. E., Gambi, C., & Pickering, M. J. (2020). How do listeners time response articulation when answering questions? The role of speech rate. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(4), 781-802. doi:10.1037/xlm0000759.

    Abstract

    During conversation, interlocutors often produce their utterances with little overlap or gap between their turns. But what mechanism underlies this striking ability to time articulation appropriately? In 2 verbal yes/no question-answering experiments, we investigated whether listeners use the speech rate of questions to time articulation of their answers. In Experiment 1, we orthogonally manipulated the speech rate of the context (e.g., Do you have a . . .) and final word (e.g., dog?) of questions using time-compression, so that each component was spoken at the natural rate or twice as fast. Listeners responded earlier when the context was speeded rather than natural, suggesting they used the speaker’s context rate to time answer articulation. Additionally, listeners responded earlier when the speaker’s final syllable was speeded rather than natural, regardless of context rate, suggesting they adjusted the timing of articulation after listening to a single syllable produced at a different rate. We replicated this final word effect in Experiment 2, which also showed that our speech rate manipulation did not influence the timing of response preparation. Together, these findings suggest listeners use speech rate information to time articulation when answering questions.
  • Corps, R. E., & Rabagliati, H. (2020). How top-down processing enhances comprehension of noise-vocoded speech: Predictions about meaning are more important than predictions about form. Journal of Memory and Language, 113: 104114. doi:10.1016/j.jml.2020.104114.

    Abstract

    Listeners quickly learn to understand speech that has been distorted, and this process is enhanced when comprehension is constrained by higher-level knowledge. In three experiments, we investigated whether this knowledge enhances comprehension of distorted speech because it allows listeners to predict (1) the meaning of the distorted utterance, or (2) the lower-level wordforms. Participants listened to question-answer sequences, in which questions were clearly-spoken but answers were noise-vocoded. Comprehension (Experiment 1) and learning (Experiment 2) were enhanced when listeners could use the question to predict the semantics of the distorted answer, but were not enhanced by predictions of answer form. Form predictions enhanced comprehension only when questions and answers were significantly separated by time and intervening linguistic material (Experiment 3). Together, these results suggest that high-level semantic predictions enhance comprehension and learning, with form predictions playing only a minimal role.
  • Coulson, S., & Lai, V. T. (Eds.). (2016). The metaphorical brain [Research topic]. Lausanne: Frontiers Media. doi:10.3389/978-2-88919-772-9.

    Abstract

    This Frontiers Special Issue will synthesize current findings on the cognitive neuroscience of metaphor, provide a forum for voicing novel perspectives, and promote new insights into the metaphorical brain.
  • Crago, M. B., & Allen, S. E. M. (1997). Linguistic and cultural aspects of simplicity and complexity in Inuktitut child directed speech. In E. Hughes, M. Hughes, & A. Greenhill (Eds.), Proceedings of the 21st annual Boston University Conference on Language Development (pp. 91-102).
  • Crago, M. B., Allen, S. E. M., & Hough-Eyamie, W. P. (1997). Exploring innateness through cultural and linguistic variation. In M. Gopnik (Ed.), The inheritance and innateness of grammars (pp. 70-90). New York: Oxford University Press.
  • Creemers, A. (2020). Morphological processing and the effects of semantic transparency. PhD Thesis, University of Pennsylvania, Philadelphia, PA, USA.
  • Creemers, A., Goodwin Davies, A., Wilder, R. J., Tamminga, M., & Embick, D. (2020). Opacity, transparency, and morphological priming: A study of prefixed verbs in Dutch. Journal of Memory and Language, 110: 104055. doi:10.1016/j.jml.2019.104055.

    Abstract

    A basic question for the study of the mental lexicon is whether there are morphological representations and processes that are independent of phonology and semantics. According to a prominent tradition, morphological relatedness requires semantic transparency: semantically transparent words are related in meaning to their stems, while semantically opaque words are not. This study examines the question of morphological relatedness using intra-modal auditory priming by Dutch prefixed verbs. The key conditions involve semantically transparent prefixed primes (e.g., aanbieden ‘offer’, with the stem bieden, also ‘offer’) and opaque primes (e.g., verbieden ‘forbid’). Results show robust facilitation for both transparent and opaque pairs; phonological (Experiment 1) and semantic (Experiment 2) controls rule out the possibility that these other types of relatedness are responsible for the observed priming effects. The finding of facilitation with opaque primes suggests that morphological processing is independent of semantic and phonological representations. Accordingly, the results are incompatible with theories that make semantic overlap a necessary condition for relatedness, and favor theories in which words may be related in ways that do not require shared meaning. The general discussion considers several specific proposals along these lines, and compares and contrasts questions about morphological relatedness of the type found here with the different but related question of whether there is morphological decomposition of complex forms or not.
  • Creemers, A., Don, J., & Fenger, P. (2018). Some affixes are roots, others are heads. Natural Language & Linguistic Theory, 36(1), 45-84. doi:10.1007/s11049-017-9372-1.

    Abstract

    A recent debate in the morphological literature concerns the status of derivational affixes. While some linguists (Marantz 1997, 2001; Marvin 2003) consider derivational affixes a type of functional morpheme that realizes a categorial head, others (Lowenstamm 2015; De Belder 2011) argue that derivational affixes are roots. Our proposal, which finds its empirical basis in a study of Dutch derivational affixes, takes a middle position. We argue that there are two types of derivational affixes: some that are roots (i.e. lexical morphemes) and others that are categorial heads (i.e. functional morphemes). Affixes that are roots show ‘flexible’ categorial behavior, are subject to ‘lexical’ phonological rules, and may trigger idiosyncratic meanings. Affixes that realize categorial heads, on the other hand, are categorially rigid, do not trigger ‘lexical’ phonological rules nor allow for idiosyncrasies in their interpretation.
  • Cristia, A., Ganesh, S., Casillas, M., & Ganapathy, S. (2018). Talker diarization in the wild: The case of child-centered daylong audio-recordings. In Proceedings of Interspeech 2018 (pp. 2583-2587). doi:10.21437/Interspeech.2018-2078.

    Abstract

    Speaker diarization (answering 'who spoke when') is a widely researched subject within speech technology. Numerous experiments have been run on datasets built from broadcast news, meeting data, and call centers—the task sometimes appears close to being solved. Much less work has begun to tackle the hardest diarization task of all: spontaneous conversations in real-world settings. Such diarization would be particularly useful for studies of language acquisition, where researchers investigate the speech children produce and hear in their daily lives. In this paper, we study audio gathered with a recorder worn by small children as they went about their normal days. As a result, each child was exposed to different acoustic environments with a multitude of background noises and a varying number of adults and peers. The inconsistency of speech and noise within and across samples poses a challenging task for speaker diarization systems, which we tackled via retraining and data augmentation techniques. We further studied sources of structured variation across raw audio files, including the impact of speaker type distribution, proportion of speech from children, and child age on diarization performance. We discuss the extent to which these findings might generalize to other samples of speech in the wild.
  • Croijmans, I., Hendrickx, I., Lefever, E., Majid, A., & Van den Bosch, A. (2020). Uncovering the language of wine experts. Natural Language Engineering, 26(5), 511-530. doi:10.1017/S1351324919000500.

    Abstract

    Talking about odors and flavors is difficult for most people, yet experts appear to be able to convey critical information about wines in their reviews. This seems to be a contradiction, and wine expert descriptions are frequently received with criticism. Here, we propose a method for probing the language of wine reviews, and thus offer a means to enhance current vocabularies, and as a by-product question the general assumption that wine reviews are gibberish. By means of two different quantitative analyses—support vector machines for classification and Termhood analysis—on a corpus of online wine reviews, we tested whether wine reviews are written in a consistent manner, and thus may be considered informative; and whether reviews feature domain-specific language. First, a classification paradigm was trained on wine reviews from one set of authors for which the color, grape variety, and origin of a wine were known, and subsequently tested on data from a new author. This analysis revealed that, regardless of individual differences in vocabulary preferences, color and grape variety were predicted with high accuracy. Second, using Termhood as a measure of how words are used in wine reviews in a domain-specific manner compared to other genres in English, a list of 146 wine-specific terms was uncovered. These words were compared to existing lists of wine vocabulary that are currently used to train experts. Some overlap was observed, but there were also gaps revealed in the extant lists, suggesting these lists could be improved by our automatic analysis.
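
    Code sketch

    The first of the two analyses named above (support vector machines for classification) is a standard text-classification setup, so a minimal sketch may help make it concrete. This is not the authors' pipeline; the TF-IDF features, linear kernel, and toy reviews are illustrative assumptions only.

    # Minimal illustration of SVM classification of wine reviews: train on
    # reviews whose wine properties are known, test on an unseen reviewer.
    # Toy data and settings are placeholders, not the paper's pipeline.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    train_reviews = [
        "aromas of dark cherry and leather with firm tannins",   # a red
        "crisp citrus nose, flinty minerality, bright acidity",  # a white
    ]
    train_colors = ["red", "white"]  # labels known for the training authors

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
    model.fit(train_reviews, train_colors)

    # Generalization test: a review from an author unseen during training.
    print(model.predict(["ripe cherry fruit and grippy tannins"]))  # expected: ['red']

    In the study itself, the classifier was trained on reviews from one set of authors for which color, grape variety, and origin were known, then tested on data from a new author; the sketch mirrors that train/test split in miniature.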
  • Croijmans, I. (2016). Gelukkig kunnen we erover praten: Over de kunst om geuren en smaken in woorden te omschrijven [Fortunately we can talk about it: On the art of describing smells and flavors in words]. koffieTcacao, 17, 80-81.
  • Croijmans, I., & Majid, A. (2016). Language does not explain the wine-specific memory advantage of wine experts. In A. Papafragou, D. Grodner, D. Mirman, & J. Trueswell (Eds.), Proceedings of the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016) (pp. 141-146). Austin, TX: Cognitive Science Society.

    Abstract

    Although people are poor at naming odors, naming a smell helps to remember that odor. Previous studies show wine experts have better memory for smells, and they also name smells differently than novices. Is wine experts’ odor memory verbally mediated? And is the odor memory advantage that experts have over novices restricted to odors in their domain of expertise, or does it generalize? Twenty-four wine experts and 24 novices smelled wines, wine-related odors and common odors, and remembered these. Half the participants also named the smells. Wine experts had better memory for wines, but not for the other odors, indicating their memory advantage is restricted to wine. Wine experts named odors better than novices, but there was no relationship between experts’ ability to name odors and their memory for odors. This suggests experts’ odor memory advantage is not linguistically mediated, but may be the result of differential perceptual learning.
  • Croijmans, I., & Majid, A. (2016). Not all flavor expertise is equal: The language of wine and coffee experts. PLoS One, 11(6): e0155845. doi:10.1371/journal.pone.0155845.

    Abstract

    People in Western cultures are poor at naming smells and flavors. However, for wine and coffee experts, describing smells and flavors is part of their daily routine. So are experts better than lay people at conveying smells and flavors in language? If smells and flavors are more easily linguistically expressed by experts, or more codable, then experts should be better than novices at describing smells and flavors. If experts are indeed better, we can also ask how general this advantage is: do experts show higher codability only for smells and flavors they are expert in (i.e., wine experts for wine and coffee experts for coffee) or is their linguistic dexterity more general? To address these questions, wine experts, coffee experts, and novices were asked to describe the smell and flavor of wines, coffees, everyday odors, and basic tastes. The resulting descriptions were compared on a number of measures. We found expertise endows a modest advantage in smell and flavor naming. Wine experts showed more consistency in how they described wine smells and flavors than coffee experts and novices; but coffee experts were not more consistent for coffee descriptions. Neither expert group was any more accurate at identifying everyday smells or tastes. Interestingly, both wine and coffee experts tended to use more source-based terms (e.g., vanilla) in descriptions of their own area of expertise whereas novices tended to use more evaluative terms (e.g., nice). However, the overall linguistic strategies for both groups were on a par. To conclude, experts only have a limited, domain-specific advantage when communicating about smells and flavors. The ability to communicate about smells and flavors is a matter not only of perceptual training, but specific linguistic training too.

    Additional information

    Data availability
  • Croijmans, I. (2018). Wine expertise shapes olfactory language and cognition. PhD Thesis, Radboud University, Nijmegen.
  • Cronin, K. A., West, V., & Ross, S. R. (2016). Investigating the relationship between welfare and rearing young in captive chimpanzees (Pan troglodytes). Applied Animal Behaviour Science, 181, 166-172. doi:10.1016/j.applanim.2016.05.014.

    Abstract

    Whether the opportunity to breed and rear young improves the welfare of captive animals is currently debated. However, there is very little empirical data available to evaluate this relationship and this study is a first attempt to contribute objective data to this debate. We utilized the existing variation in the reproductive experiences of sanctuary chimpanzees at Chimfunshi Wildlife Orphanage Trust in Zambia to investigate whether breeding and rearing young was associated with improved welfare for adult females (N = 43). We considered several behavioural welfare indicators, including rates of luxury behaviours and abnormal or stress-related behaviours under normal conditions and conditions inducing social stress. Furthermore, we investigated whether spending time with young was associated with good or poor welfare for adult females, regardless of their kin relationship. We used generalized linear mixed models and found no difference between adult females with and without dependent young on any welfare indices, nor did we find that time spent in proximity to unrelated young predicted welfare (likelihood ratio tests for all full-null model comparisons, P > 0.05). However, we did find that coprophagy was more prevalent among mother-reared than non-mother-reared individuals, in line with recent work suggesting this behaviour may have a different etiology than other behaviours often considered to be abnormal. In sum, the findings from this initial study lend support to the hypothesis that the opportunity to breed and rear young does not provide a welfare benefit for chimpanzees in captivity. We hope this investigation provides a valuable starting point for empirical study into the welfare implications of managed breeding.

    Additional information

    mmc1.pdf
  • Cross, Z. R., Santamaria, A., Corcoran, A. W., Chatburn, A., Alday, P. M., Coussens, S., & Kohler, M. J. (2020). Individual alpha frequency modulates sleep-related emotional memory consolidation. Neuropsychologia, 148: 107660. doi:10.1016/j.neuropsychologia.2020.107660.

    Abstract

    Alpha-band oscillatory activity is involved in modulating memory and attention. However, few studies have investigated individual differences in oscillatory activity during the encoding of emotional memory, particularly in sleep paradigms where sleep is thought to play an active role in memory consolidation. The current study aimed to address the question of whether individual alpha frequency (IAF) modulates the consolidation of declarative memory across periods of sleep and wake. 22 participants aged 18-41 years (mean age = 25.77) viewed 120 emotionally valenced images (positive, negative, neutral) and completed a baseline memory task before a 2-hr afternoon sleep opportunity and an equivalent period of wake. Following the sleep and wake conditions, participants were required to distinguish between 120 learned (target) images and 120 new (distractor) images. This method allowed us to delineate the role of different oscillatory components of sleep and wake states in the emotional modulation of memory. Linear mixed-effects models revealed interactions between IAF, rapid eye movement sleep theta power, and slow-wave sleep slow oscillatory density on memory outcomes. These results highlight the importance of individual factors in the EEG in modulating oscillatory-related memory consolidation and subsequent behavioural outcomes and test predictions proposed by models of sleep-based memory consolidation.

    Additional information

    supplementary data
  • Croxson, P., Forkel, S. J., Cerliani, L., & Thiebaut De Schotten, M. (2018). Structural variability across the primate brain: A cross-species comparison. Cerebral Cortex, 28(11), 3829-3841. doi:10.1093/cercor/bhx244.

    Abstract

    A large amount of variability exists across human brains; revealed initially on a small scale by postmortem studies and, more recently, on a larger scale with the advent of neuroimaging. Here we compared structural variability between human and macaque monkey brains using grey and white matter magnetic resonance imaging measures. The monkey brain was overall structurally as variable as the human brain, but variability had a distinct distribution pattern, with some key areas showing high variability. We also report the first evidence of a relationship between anatomical variability and evolutionary expansion in the primate brain. This suggests a relationship between variability and stability, where areas of low variability may have evolved less recently and have more stability, while areas of high variability may have evolved more recently and be less similar across individuals. We showed specific differences between the species in key areas, including the amount of hemispheric asymmetry in variability, which was left-lateralized in the human brain across several phylogenetically recent regions. This suggests that cerebral variability may be another useful measure for comparison between species and may add another dimension to our understanding of evolutionary mechanisms.
  • Cutler, A. (2002). Phonological processing: Comments on Pierrehumbert, Moates et al., Kubozono, Peperkamp & Dupoux, and Bradlow. In C. Gussenhoven, & N. Warner (Eds.), Papers in Laboratory Phonology VII (pp. 275-296). Berlin: Mouton de Gruyter.
  • Cutler, A., & Otake, T. (2002). Rhythmic categories in spoken-word recognition. Journal of Memory and Language, 46(2), 296-322. doi:10.1006/jmla.2001.2814.

    Abstract

    Rhythmic categories such as morae in Japanese or stress units in English play a role in the perception of spoken language. We examined this role in Japanese, since recent evidence suggests that morae may intervene as structural units in word recognition. First, we found that traditional puns more often substituted part of a mora than a whole mora. Second, when listeners reconstructed distorted words, e.g. panorama from panozema, responses were faster and more accurate when only a phoneme was distorted (panozama, panorema) than when a whole CV mora was distorted (panozema). Third, lexical decisions on the same nonwords were better predicted by duration and number of phonemes from nonword uniqueness point to word end than by number of morae. Our results indicate no role for morae in early spoken-word processing; we propose that rhythmic categories constrain not initial lexical activation but subsequent processes of speech segmentation and selection among word candidates.
  • Cutler, A., & Norris, D. (2002). The role of strong syllables in segmentation for lexical access. In G. T. Altmann (Ed.), Psycholinguistics: Critical concepts in psychology (pp. 157-177). London: Routledge.
  • Cutler, A., Mehler, J., Norris, D., & Segui, J. (2002). The syllable's differing role in the segmentation of French and English. In G. T. Altmann (Ed.), Psycholinguistics: Critical concepts in psychology (pp. 115-135). London: Routledge.

    Abstract

    Speech segmentation procedures may differ in speakers of different languages. Earlier work based on French speakers listening to French words suggested that the syllable functions as a segmentation unit in speech processing. However, while French has relatively regular and clearly bounded syllables, other languages, such as English, do not. No trace of syllabifying segmentation was found in English listeners listening to English words, French words, or nonsense words. French listeners, however, showed evidence of syllabification even when they were listening to English words. We conclude that alternative segmentation routines are available to the human language processor. In some cases speech segmentation may involve the operation of more than one procedure.
  • Cutler, A., McQueen, J. M., Jansonius, M., & Bayerl, S. (2002). The lexical statistics of competitor activation in spoken-word recognition. In C. Bow (Ed.), Proceedings of the 9th Australian International Conference on Speech Science and Technology (pp. 40-45). Canberra: Australian Speech Science and Technology Association (ASSTA).

    Abstract

    The Possible Word Constraint is a proposed mechanism whereby listeners avoid recognising words spuriously embedded in other words. It applies to words leaving a vowelless residue between their edge and the nearest known word or syllable boundary. The present study tests the usefulness of this constraint via lexical statistics of both English and Dutch. The analyses demonstrate that the constraint removes a clear majority of embedded words in speech, and thus can contribute significantly to the efficiency of human speech recognition.
  • Cutler, A., Demuth, K., & McQueen, J. M. (2002). Universality versus language-specificity in listening to running speech. Psychological Science, 13(3), 258-262. doi:10.1111/1467-9280.00447.

    Abstract

    Recognizing spoken language involves automatic activation of multiple candidate words. The process of selection between candidates is made more efficient by inhibition of embedded words (like egg in beg) that leave a portion of the input stranded (here, b). Results from European languages suggest that this inhibition occurs when consonants are stranded but not when syllables are stranded. The reason why leftover syllables do not lead to inhibition could be that in principle they might themselves be words; in European languages, a syllable can be a word. In Sesotho (a Bantu language), however, a single syllable cannot be a word. We report that in Sesotho, word recognition is inhibited by stranded consonants, but stranded monosyllables produce no more difficulty than stranded bisyllables (which could be Sesotho words). This finding suggests that the viability constraint which inhibits spurious embedded word candidates is not sensitive to language-specific word structure, but is universal.
  • Ip, M. H. K., & Cutler, A. (2020). Universals of listening: Equivalent prosodic entrainment in tone and non-tone languages. Cognition, 202: 104311. doi:10.1016/j.cognition.2020.104311.

    Abstract

    In English and Dutch, listeners entrain to prosodic contours to predict where focus will fall in an utterance. Here, we ask whether this strategy is universally available, even in languages with very different phonological systems (e.g., tone versus non-tone languages). In a phoneme detection experiment, we examined whether prosodic entrainment also occurs in Mandarin Chinese, a tone language, where the use of various suprasegmental cues to lexical identity may take precedence over their use in salience. Consistent with the results from Germanic languages, response times were facilitated when preceding intonation predicted high stress on the target-bearing word, and the lexical tone of the target word (i.e., rising versus falling) did not affect the Mandarin listeners' response. Further, the extent to which prosodic entrainment was used to detect the target phoneme was the same in both English and Mandarin listeners. Nevertheless, native Mandarin speakers did not adopt an entrainment strategy when the sentences were presented in English, consistent with the suggestion that L2 listening may be strained by additional functional load from prosodic processing. These findings have implications for how universal and language-specific mechanisms interact in the perception of focus structure in everyday discourse.

    Additional information

    supplementary data
  • Ip, M. H. K., & Cutler, A. (2018). Asymmetric efficiency of juncture perception in L1 and L2. In K. Klessa, J. Bachan, A. Wagner, M. Karpiński, & D. Śledziński (Eds.), Proceedings of Speech Prosody 2018 (pp. 289-296). Baixas, France: ISCA. doi:10.21437/SpeechProsody.2018-59.

    Abstract

    In two experiments, Mandarin listeners resolved potential syntactic ambiguities in spoken utterances in (a) their native language (L1) and (b) English which they had learned as a second language (L2). A new disambiguation task was used, requiring speeded responses to select the correct meaning for structurally ambiguous sentences. Importantly, the ambiguities used in the study are identical in Mandarin and in English, and production data show that prosodic disambiguation of this type of ambiguity is also realised very similarly in the two languages. The perceptual results here showed however that listeners’ response patterns differed for L1 and L2, although there was a significant increase in similarity between the two response patterns with increasing exposure to the L2. Thus identical ambiguity and comparable disambiguation patterns in L1 and L2 do not lead to immediate application of the appropriate L1 listening strategy to L2; instead, it appears that such a strategy may have to be learned anew for the L2.
  • Cutler, A., & Norris, D. (2016). Bottoms up! How top-down pitfalls ensnare speech perception researchers too. Commentary on C. Firestone & B. Scholl: Cognition does not affect perception: Evaluating the evidence for 'top-down' effects. Behavioral and Brain Sciences, e236. doi:10.1017/S0140525X15002745.

    Abstract

    Not only can the pitfalls that Firestone & Scholl (F&S) identify be generalised across multiple studies within the field of visual perception, but also they have general application outside the field wherever perceptual and cognitive processing are compared. We call attention to the widespread susceptibility of research on the perception of speech to versions of the same pitfalls.
  • Cutler, A., & Fear, B. D. (1991). Categoricality in acceptability judgements for strong versus weak vowels. In J. Llisterri (Ed.), Proceedings of the ESCA Workshop on Phonetics and Phonology of Speaking Styles (pp. 18.1-18.5). Barcelona, Catalonia: Universitat Autonoma de Barcelona.

    Abstract

    A distinction between strong and weak vowels can be drawn on the basis of vowel quality, of stress, or of both factors. An experiment was conducted in which sets of contextually matched word-initial vowels ranging from clearly strong to clearly weak were cross-spliced, and the naturalness of the resulting words was rated by listeners. The ratings showed that in general cross-spliced words were only significantly less acceptable than unspliced words when schwa was not involved; this supports a categorical distinction based on vowel quality.
  • Cutler, A., Mehler, J., Norris, D., & Segui, J. (1983). A language-specific comprehension strategy [Letters to Nature]. Nature, 304, 159-160. doi:10.1038/304159a0.

    Abstract

    Infants acquire whatever language is spoken in the environment into which they are born. The mental capability of the newborn child is not biased in any way towards the acquisition of one human language rather than another. Because psychologists who attempt to model the process of language comprehension are interested in the structure of the human mind, rather than in the properties of individual languages, strategies which they incorporate in their models are presumed to be universal, not language-specific. In other words, strategies of comprehension are presumed to be characteristic of the human language processing system, rather than, say, the French, English, or Igbo language processing systems. We report here, however, on a comprehension strategy which appears to be used by native speakers of French but not by native speakers of English.
  • Cutler, A., Sebastian-Galles, N., Soler-Vilageliu, O., & Van Ooijen, B. (2000). Constraints of vowels and consonants on lexical selection: Cross-linguistic comparisons. Memory & Cognition, 28, 746-755.

    Abstract

    Languages differ in the constitution of their phonemic repertoire and in the relative distinctiveness of phonemes within the repertoire. In the present study, we asked whether such differences constrain spoken-word recognition, via two word reconstruction experiments, in which listeners turned non-words into real words by changing single sounds. The experiments were carried out in Dutch (which has a relatively balanced vowel-consonant ratio and many similar vowels) and in Spanish (which has many more consonants than vowels and high distinctiveness among the vowels). Both Dutch and Spanish listeners responded significantly faster and more accurately when required to change vowels as opposed to consonants; when allowed to change any phoneme, they more often altered vowels than consonants. Vowel information thus appears to constrain lexical selection less tightly (allow more potential candidates) than does consonant information, independent of language-specific phoneme repertoire and of relative distinctiveness of vowels.
  • Cutler, A., & Otake, T. (1997). Contrastive studies of spoken-language processing. Journal of the Phonetic Society of Japan, 1, 4-13.
  • Ip, M., & Cutler, A. (2016). Cross-language data on five types of prosodic focus. In J. Barnes, A. Brugos, S. Shattuck-Hufnagel, & N. Veilleux (Eds.), Proceedings of Speech Prosody 2016 (pp. 330-334).

    Abstract

    To examine the relative roles of language-specific and language-universal mechanisms in the production of prosodic focus, we compared production of five different types of focus by native speakers of English and Mandarin. Two comparable dialogues were constructed for each language, with the same words appearing in focused and unfocused position; 24 speakers recorded each dialogue in each language. Duration, F0 (mean, maximum, range), and rms-intensity (mean, maximum) of all critical word tokens were measured. Across the different types of focus, cross-language differences were observed in the degree to which English versus Mandarin speakers use the different prosodic parameters to mark focus, suggesting that while prosody may be universally available for expressing focus, the means of its employment may be considerably language-specific.
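
    By way of illustration only (not the authors’ analysis pipeline): a minimal Python sketch, assuming the librosa library and a hypothetical recording token.wav, of how the reported measures (duration; mean, maximum, and range of F0; mean and maximum rms-intensity) might be extracted for a single critical word token.

        import numpy as np
        import librosa

        # Load one (hypothetical) critical word token; sr=None keeps the native rate.
        y, sr = librosa.load("token.wav", sr=None)

        duration = len(y) / sr  # token duration in seconds

        # F0 track via probabilistic YIN; unvoiced frames are returned as NaN.
        f0, voiced_flag, voiced_prob = librosa.pyin(
            y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
        )
        f0_mean = np.nanmean(f0)
        f0_max = np.nanmax(f0)
        f0_range = np.nanmax(f0) - np.nanmin(f0)

        # Frame-wise rms intensity.
        rms = librosa.feature.rms(y=y)[0]
        rms_mean, rms_max = rms.mean(), rms.max()

        print(duration, f0_mean, f0_max, f0_range, rms_mean, rms_max)
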
  • Cutler, A. (1985). Cross-language psycholinguistics. Linguistics, 23, 659-667.
  • Ip, M. H. K., & Cutler, A. (2018). Cue equivalence in prosodic entrainment for focus detection. In J. Epps, J. Wolfe, J. Smith, & C. Jones (Eds.), Proceedings of the 17th Australasian International Conference on Speech Science and Technology (pp. 153-156).

    Abstract

    Using a phoneme detection task, the present series of experiments examines whether listeners can entrain to different combinations of prosodic cues to predict where focus will fall in an utterance. The stimuli were recorded by four female native speakers of Australian English who happened to have used different prosodic cues to produce sentences with prosodic focus: one used a combination of duration cues, mean and maximum F0, F0 range, and a longer pre-target interval before the focused word onset; one used only mean F0 cues; one only the pre-target interval; and one only duration cues. Results revealed that listeners can entrain in almost every condition except where duration was the only reliable cue. Our findings suggest that listeners are flexible in the cues they use for focus processing.
  • Cutler, A., & Van de Weijer, J. (2000). De ontdekking van de eerste woorden. Stem-, Spraak- en Taalpathologie, 9, 245-259.

    Abstract

    Speech is continuous: there are no reliable signals that tell the listener where one word ends and the next begins. For adult listeners, segmenting spoken language into separate words is thus not unproblematic, but for a child who does not yet possess a vocabulary, the continuity of speech poses an even greater challenge. Nevertheless, most children produce their first recognizable words around the beginning of the second year of life. These early speech productions are preceded by a formidable perceptual achievement. During the first year of life, particularly during the second half, speech perception develops from a general phonetic discrimination ability into a selective sensitivity to the phonological contrasts that occur in the native language. Recent research has further shown that infants, long before they can say even a single word, are able to distinguish words that are characteristic of their native language from words that are not. Moreover, they can recognize words first presented in isolation when these recur in a continuous speech context. The everyday language input to a child of this age does not make this task easy, for instance because most words do not occur in isolation. Yet the child is also offered some support, among other things because the range of words used is restricted.
  • Cutler, A. (2002). Lexical access. In L. Nadel (Ed.), Encyclopedia of cognitive science (pp. 858-864). London: Nature Publishing Group.
  • Cutler, A., McQueen, J. M., Norris, D., & Somejuan, A. (2002). Le rôle de la syllabe. In E. Dupoux (Ed.), Les langages du cerveau: Textes en l’honneur de Jacques Mehler (pp. 185-197). Paris: Odile Jacob.
  • Cutler, A. (2002). Native listeners. European Review, 10(1), 27-41. doi:10.1017/S1062798702000030.

    Abstract

    Becoming a native listener is the necessary precursor to becoming a native speaker. Babies in the first year of life undertake a remarkable amount of work; by the time they begin to speak, they have perceptually mastered the phonological repertoire and phoneme co-occurrence probabilities of the native language, and they can locate familiar word-forms in novel continuous-speech contexts. The skills acquired at this early stage form a necessary part of adult listening. However, the same native listening skills also underlie problems in listening to a late-acquired non-native language, accounting for why in such a case listening (an innate ability) is sometimes paradoxically more difficult than, for instance, reading (a learned ability).
  • Cutler, A., Burchfield, L. A., & Antoniou, M. (2018). Factors affecting talker adaptation in a second language. In J. Epps, J. Wolfe, J. Smith, & C. Jones (Eds.), Proceedings of the 17th Australasian International Conference on Speech Science and Technology (pp. 33-36).

    Abstract

    Listeners adapt rapidly to previously unheard talkers by adjusting phoneme categories using lexical knowledge, in a process termed lexically-guided perceptual learning. Although this is firmly established for listening in the native language (L1), perceptual flexibility in second languages (L2) is as yet less well understood. We report two experiments examining L1 and L2 perceptual learning, the first in Mandarin-English late bilinguals, the second in Australian learners of Mandarin. Both studies showed stronger learning in L1; in L2, however, learning appeared for the English-L1 group but not for the Mandarin-L1 group. Phonological mapping differences from the L1 to the L2 are suggested as the reason for this result.
  • Cutler, A. (2000). How the ear comes to hear. In New Trends in Modern Linguistics [Part of Annual catalogue series] (pp. 6-10). Tokyo, Japan: Maruzen Publishers.
  • Cutler, A. (2000). Hoe het woord het oor verovert. In Voordrachten uitgesproken tijdens de uitreiking van de SPINOZA-premies op 15 februari 2000 (pp. 29-41). The Hague, The Netherlands: Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO).
  • Cutler, A. (1983). Lexical complexity and sentence processing. In G. B. Flores d'Arcais, & R. J. Jarvella (Eds.), The process of language understanding (pp. 43-79). Chichester, Sussex: Wiley.
  • Cutler, A., & Chen, H.-C. (1997). Lexical tone in Cantonese spoken-word processing. Perception & Psychophysics, 59, 165-179. Retrieved from http://www.psychonomic.org/search/view.cgi?id=778.

    Abstract

    In three experiments, the processing of lexical tone in Cantonese was examined. Cantonese listeners more often accepted a nonword as a word when the only difference between the nonword and the word was in tone, especially when the F0 onset difference between correct and erroneous tone was small. Same–different judgments by these listeners were also slower and less accurate when the only difference between two syllables was in tone, and this was true whether the F0 onset difference between the two tones was large or small. Listeners with no knowledge of Cantonese produced essentially the same same-different judgment pattern as that produced by the native listeners, suggesting that the results display the effects of simple perceptual processing rather than of linguistic knowledge. It is argued that the processing of lexical tone distinctions may be slowed, relative to the processing of segmental distinctions, and that, in speeded-response tasks, tone is thus more likely to be misprocessed than is segmental structure.
  • Cutler, A. (1991). Linguistic rhythm and speech segmentation. In J. Sundberg, L. Nord, & R. Carlson (Eds.), Music, language, speech and brain (pp. 157-166). London: Macmillan.
  • Cutler, A., & Farrell, J. (2018). Listening in first and second language. In J. I. Liontas (Ed.), The TESOL encyclopedia of language teaching. New York: Wiley. doi:10.1002/9781118784235.eelt0583.

    Abstract

    Listeners' recognition of spoken language involves complex decoding processes: The continuous speech stream must be segmented into its component words, and words must be recognized despite great variability in their pronunciation (due to talker differences, or to influence of phonetic context, or to speech register) and despite competition from many spuriously present forms supported by the speech signal. L1 listeners deal more readily with all levels of this complexity than L2 listeners. Fortunately, the decoding processes necessary for competent L2 listening can be taught in the classroom. Evidence-based methodologies targeted at the development of efficient speech decoding include teaching of minimal pairs, of phonotactic constraints, and of reduction processes, as well as the use of dictation and L2 video captions.
  • Cutler, A., & Pearson, M. (1985). On the analysis of prosodic turn-taking cues. In C. Johns-Lewis (Ed.), Intonation in discourse (pp. 139-155). London: Croom Helm.
  • Cutler, A. (1985). Performance measures of lexical complexity. In G. Hoppenbrouwers, P. A. Seuren, & A. Weijters (Eds.), Meaning and the lexicon (p. 75). Dordrecht: Foris.
  • Cutler, A. (1991). Proceed with caution. New Scientist, (1799), 53-54.
  • Cutler, A., McQueen, J. M., & Zondervan, R. (2000). Proceedings of SWAP (Workshop on Spoken Word Access Processes). Nijmegen: MPI for Psycholinguistics.
  • Cutler, A. (1997). Prosody and the structure of the message. In Y. Sagisaka, N. Campbell, & N. Higuchi (Eds.), Computing prosody: Computational models for processing spontaneous speech (pp. 63-66). Heidelberg: Springer.
  • Cutler, A. (1991). Prosody in situations of communication: Salience and segmentation. In Proceedings of the Twelfth International Congress of Phonetic Sciences: Vol. 1 (pp. 264-270). Aix-en-Provence: Université de Provence, Service des publications.

    Abstract

    Speakers and listeners have a shared goal: to communicate. The processes of speech perception and of speech production interact in many ways under the constraints of this communicative goal; such interaction is as characteristic of prosodic processing as of the processing of other aspects of linguistic structure. Two of the major uses of prosodic information in situations of communication are to encode salience and segmentation, and these themes unite the contributions to the symposium introduced by the present review.
  • Cutler, A., Dahan, D., & Van Donselaar, W. (1997). Prosody in the comprehension of spoken language: A literature review. Language and Speech, 40, 141-201.

    Abstract

    Research on the exploitation of prosodic information in the recognition of spoken language is reviewed. The research falls into three main areas: the use of prosody in the recognition of spoken words, in which most attention has been paid to the question of whether the prosodic structure of a word plays a role in initial contact with stored lexical representations; the use of prosody in the computation of syntactic structure, in which the resolution of global and local ambiguities has formed the central focus; and the role of prosody in the processing of discourse structure, in which there has been a preponderance of work on the contribution of accentuation and deaccentuation to integration of concepts with an existing discourse model. The review reveals that in each area progress has been made towards new conceptions of prosody's role in processing, and in particular this has involved abandonment of previously held deterministic views of the relationship between prosodic structure and other aspects of linguistic structure.
  • Cutler, A., & Ladd, D. R. (Eds.). (1983). Prosody: Models and measurements. Heidelberg: Springer.
  • Cutler, A. (2000). Real words, phantom words and impossible words. In D. Burnham, S. Luksaneeyanawin, C. Davis, & M. Lafourcade (Eds.), Interdisciplinary approaches to language processing: The international conference on human and machine processing of language and speech (pp. 32-42). Bangkok: NECTEC.
  • Cutler, A. (1997). The comparative perspective on spoken-language processing. Speech Communication, 21, 3-15. doi:10.1016/S0167-6393(96)00075-1.

    Abstract

    Psycholinguists strive to construct a model of human language processing in general. But this does not imply that they should confine their research to universal aspects of linguistic structure, and avoid research on language-specific phenomena. First, even universal characteristics of language structure can only be accurately observed cross-linguistically. This point is illustrated here by research on the role of the syllable in spoken-word recognition, on the perceptual processing of vowels versus consonants, and on the contribution of phonetic assimilation phenomena to phoneme identification. In each case, it is only by looking at the pattern of effects across languages that it is possible to understand the general principle. Second, language-specific processing can certainly shed light on the universal model of language comprehension. This second point is illustrated by studies of the exploitation of vowel harmony in the lexical segmentation of Finnish, of the recognition of Dutch words with and without vowel epenthesis, and of the contribution of different kinds of lexical prosodic structure (tone, pitch accent, stress) to the initial activation of candidate words in lexical access. In each case, aspects of the universal processing model are revealed by analysis of these language-specific effects. In short, the study of spoken-language processing by human listeners requires cross-linguistic comparison.
  • Cutler, A. (1983). Semantics, syntax and sentence accent. In M. Van den Broecke, & A. Cohen (Eds.), Proceedings of the Tenth International Congress of Phonetic Sciences (pp. 85-91). Dordrecht: Foris.
  • Cutler, A. (1983). Speakers’ conceptions of the functions of prosody. In A. Cutler, & D. R. Ladd (Eds.), Prosody: Models and measurements (pp. 79-91). Heidelberg: Springer.
  • Cutler, A., & Koster, M. (2000). Stress and lexical activation in Dutch. In B. Yuan, T. Huang, & X. Tang (Eds.), Proceedings of the Sixth International Conference on Spoken Language Processing: Vol. 1 (pp. 593-596). Beijing: China Military Friendship Publish.

    Abstract

    Dutch listeners were slower to make judgements about the semantic relatedness between a spoken target word (e.g. atLEET, 'athlete') and a previously presented visual prime word (e.g. SPORT 'sport') when the spoken word was mis-stressed. The adverse effect of mis-stressing confirms the role of stress information in lexical recognition in Dutch. However, although the erroneous stress pattern was always initially compatible with a competing word (e.g. ATlas, 'atlas'), mis-stressed words did not produce high false alarm rates in unrelated pairs (e.g. SPORT - atLAS). This suggests that stress information did not completely rule out segmentally matching but suprasegmentally mismatching words, a finding consistent with spoken-word recognition models involving multiple activation and inter-word competition.
  • Cutler, A., Hawkins, J. A., & Gilligan, G. (1985). The suffixing preference: A processing explanation. Linguistics, 23, 723-758.
  • Cutler, A. (1997). The syllable’s role in the segmentation of stress languages. Language and Cognitive Processes, 12, 839-845. doi:10.1080/016909697386718.
  • Cutler, A., & Butterfield, S. (1991). Word boundary cues in clear speech: A supplementary report. Speech Communication, 10, 335-353. doi:10.1016/0167-6393(91)90002-B.

    Abstract

    One of a listener's major tasks in understanding continuous speech is segmenting the speech signal into separate words. When listening conditions are difficult, speakers can help listeners by deliberately speaking more clearly. In four experiments, we examined how word boundaries are produced in deliberately clear speech. In an earlier report we showed that speakers do indeed mark word boundaries in clear speech, by pausing at the boundary and lengthening pre-boundary syllables; moreover, these effects are applied particularly to boundaries preceding weak syllables. In English, listeners use segmentation procedures which make word boundaries before strong syllables easier to perceive; thus marking word boundaries before weak syllables in clear speech will make clear precisely those boundaries which are otherwise hard to perceive. The present report presents supplementary data, namely prosodic analyses of the syllable following a critical word boundary. More lengthening and greater increases in intensity were applied in clear speech to weak syllables than to strong. Mean F0 was also increased to a greater extent on weak syllables than on strong. Pitch movement, however, increased to a greater extent on strong syllables than on weak. The effects were, however, very small in comparison to the durational effects we observed earlier for syllables preceding the boundary and for pauses at the boundary.
  • Cutler, A., Norris, D., & McQueen, J. M. (2000). Tracking TRACE’s troubles. In A. Cutler, J. M. McQueen, & R. Zondervan (Eds.), Proceedings of SWAP (Workshop on Spoken Word Access Processes) (pp. 63-66). Nijmegen: Max-Planck-Institute for Psycholinguistics.

    Abstract

    Simulations explored the inability of the TRACE model of spoken-word recognition to model the effects on human listening of acoustic-phonetic mismatches in word forms. The source of TRACE's failure lay not in its interactive connectivity, not in the presence of interword competition, and not in the use of phonemic representations, but in the need for continuously optimised interpretation of the input. When an analogue of TRACE was allowed to cycle to asymptote on every slice of input, an acceptable simulation of the subcategorical mismatch data was achieved. Even then, however, the simulation was not as close as that produced by the Merge model.
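
    To make the “cycle to asymptote on every slice of input” idea concrete, here is a schematic Python sketch of an interactive-activation update loop that settles fully on each input slice before the next slice arrives. The network, weights, and update rule are invented for illustration; this is not TRACE or the Merge model.

        import numpy as np

        def settle(act, inp, W, rate=0.2, tol=1e-4, max_iter=1000):
            # Cycle on a single input slice until activations stop
            # changing (asymptote), not for a fixed number of cycles.
            for _ in range(max_iter):
                net = W @ act + inp  # lateral inhibition plus bottom-up input
                new = np.clip(act + rate * (net - act), 0.0, 1.0)
                if np.max(np.abs(new - act)) < tol:
                    return new
                act = new
            return act

        def process(slices, W):
            # Present the input slice by slice, settling fully on each one.
            act = np.zeros(W.shape[0])
            for inp in slices:
                act = settle(act, inp, W)
            return act

        # Two mutually inhibiting word nodes; the first gets more bottom-up support.
        W = np.array([[0.0, -0.3],
                      [-0.3, 0.0]])
        slices = [np.array([0.6, 0.2]), np.array([0.6, 0.2])]
        print(process(slices, W))
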
  • Cutter, M. G., Martin, A. E., & Sturt, P. (2020). Capitalization interacts with syntactic complexity. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(6), 1146-1164. doi:10.1037/xlm0000780.

    Abstract

    We investigated whether readers use the low-level cue of proper noun capitalization in the parafovea to infer syntactic category, and whether this results in an early update of the representation of a sentence’s syntactic structure. Across three experiments, participants read sentences containing either a subject relative or object relative clause, in which the relative clause’s overt argument was a proper noun (e.g., The tall lanky guard who alerted Charlie/Charlie alerted to the danger was young). In Experiment 1 these sentences were presented in normal sentence casing or entirely in upper case. In Experiment 2 participants received either valid or invalid parafoveal previews of the relative clause. In Experiment 3 participants viewed relative clauses in normal conditions only. We hypothesized that we would observe relative clause effects (i.e., inflated fixation times for object relative clauses) while readers were still fixated on the word who, if readers use capitalization to infer a parafoveal word’s syntactic class. This would constitute a syntactic parafoveal-on-foveal effect. Furthermore, we hypothesized that this effect should be influenced by sentence casing in Experiment 1 (with no cue for syntactic category being available in upper case sentences) but not by parafoveal preview validity of the target words. We observed syntactic parafoveal-on-foveal effects in Experiments 1 and 3, and in a Bayesian analysis of the combined data from all three experiments. These effects seemed to be influenced more by noun capitalization than by lexical processing. We discuss our findings in relation to models of eye movement control and sentence processing theories.
  • Cutter, M. G., Martin, A. E., & Sturt, P. (2020). Readers detect a low-level phonological violation between two parafoveal words. Cognition, 204: 104395. doi:10.1016/j.cognition.2020.104395.

    Abstract

    In two eye-tracking studies we investigated whether readers can detect a violation of the phonological-grammatical convention for the indefinite article an to be followed by a word beginning with a vowel when these two words appear in the parafovea. Across two experiments participants read sentences in which the word an was followed by a parafoveal preview that was either correct (e.g. Icelandic), incorrect and represented a phonological violation (e.g. Mongolian), or incorrect without representing a phonological violation (e.g. Ethiopian), with this parafoveal preview changing to the target word as participants made a saccade into the space preceding an. Our data suggest that participants detected the phonological violation while the target word was still two words to the right of fixation, with participants making more regressions from the previewed word and having longer go-past times on this word when they received a violation preview as opposed to a non-violation preview. We argue that participants were attempting to perform aspects of sentence integration on the basis of low-level orthographic information from the previewed word.

    Additional information

    Data files and R Scripts
  • Cutter, M. G., Martin, A. E., & Sturt, P. (2020). The activation of contextually predictable words in syntactically illegal positions. Quarterly Journal of Experimental Psychology, 73(9), 1423-1430. doi:10.1177/1747021820911021.

    Abstract

    We present an eye-tracking study testing a hypothesis emerging from several theories of prediction during language processing, whereby predictable words should be skipped more than unpredictable words even in syntactically illegal positions. Participants read sentences in which a target word became predictable by a certain point (e.g., “bone” is 92% predictable given “The dog buried his…”), with the next word actually being an intensifier (e.g., “really”), which a noun cannot follow. The target noun remained predictable and could still appear later in the sentence. We used the boundary paradigm to present the predictable noun or an alternative unpredictable noun (e.g., “food”) directly after the intensifier, until participants moved beyond the intensifier, at which point the noun changed to a syntactically legal word. Participants also read sentences in which predictable or unpredictable nouns appeared in syntactically legal positions. A Bayesian linear-mixed model suggested a 5.7% predictability effect on skipping of nouns in syntactically legal positions, and a 3.1% predictability effect on skipping of nouns in illegal positions. We discuss our findings in relation to theories of lexical prediction during reading.

    Additional information

    OSF data
  • Cychosz, M., Romeo, R., Soderstrom, M., Scaff, C., Ganek, H., Cristia, A., Casillas, M., De Barbaro, K., Bang, J. Y., & Weisleder, A. (2020). Longform recordings of everyday life: Ethics for best practices. Behavior Research Methods, 52, 1951-1969. doi:10.3758/s13428-020-01365-9.

    Abstract

    Recent advances in large-scale data storage and processing offer unprecedented opportunities for behavioral scientists to collect and analyze naturalistic data, including from under-represented groups. Audio data, particularly real-world audio recordings, are of particular interest to behavioral scientists because they provide high-fidelity access to subtle aspects of daily life and social interactions. However, these methodological advances pose novel risks to research participants and communities. In this article, we outline the benefits and challenges associated with collecting, analyzing, and sharing multi-hour audio recording data. Guided by the principles of autonomy, privacy, beneficence, and justice, we propose a set of ethical guidelines for the use of longform audio recordings in behavioral research. This article is also accompanied by an Open Science Framework Ethics Repository that includes informed consent resources such as frequent participant concerns and sample consent forms.
  • Dahan, D., Tanenhaus, M. K., & Chambers, C. G. (2002). Accent and reference resolution in spoken-language comprehension. Journal of Memory and Language, 47(2), 292-314. doi:10.1016/S0749-596X(02)00001-3.

    Abstract

    The role of accent in reference resolution was investigated by monitoring eye fixations to lexical competitors (e.g., candy and candle) as participants followed prerecorded instructions to move objects above or below fixed geometric shapes using a computer mouse. In Experiment 1, the first utterance instructed participants to move one object above or below a shape (e.g., “Put the candle/candy below the triangle”) and the second utterance contained an accented or deaccented definite noun phrase which referred to the same object or introduced a new entity (e.g., “Now put the CANDLE above the square” vs. “Now put the candle ABOVE THE SQUARE”). Fixations to the competitor (e.g., candy) demonstrated a bias to interpret deaccented nouns as anaphoric and accented nouns as nonanaphoric. Experiment 2 used only accented nouns in the second instruction, varying whether the referent of this second instruction was the Theme of the first instruction (e.g., “Put the candle below the triangle”) or the Goal of the first instruction (e.g., “Put the necklace below the candle”). Participants preferred to interpret accented noun phrases as referring to a previously mentioned nonfocused entity (the Goal) rather than as introducing a new unmentioned entity.
  • Dai, B., Chen, C., Long, Y., Zheng, L., Zhao, H., Bai, X., Liu, W., Zhang, Y., Liu, L., Guo, T., Ding, G., & Lu, C. (2018). Neural mechanisms for selectively tuning into the target speaker in a naturalistic noisy situation. Nature Communications, 9: 2405. doi:10.1038/s41467-018-04819-z.

    Abstract

    The neural mechanism for selectively tuning in to a target speaker while tuning out the others in a multi-speaker situation (i.e., the cocktail-party effect) remains elusive. Here we addressed this issue by measuring brain activity simultaneously from a listener and from multiple speakers while they were involved in naturalistic conversations. Results consistently show selectively enhanced interpersonal neural synchronization (INS) between the listener and the attended speaker at left temporal–parietal junction, compared with that between the listener and the unattended speaker across different multi-speaker situations. Moreover, INS increases significantly prior to the occurrence of verbal responses, and even when the listener’s brain activity precedes that of the speaker. The INS increase is independent of brain-to-speech synchronization in both the anatomical location and frequency range. These findings suggest that INS underlies the selective process in a multi-speaker situation through neural predictions at the content level but not the sensory level of speech.

    Additional information

    Dai_etal_2018_sup.pdf
  • Davies, C., McGillion, M., Rowland, C. F., & Matthews, D. (2020). Can inferencing be trained in preschoolers using shared book-reading? A randomised controlled trial of parents’ inference-eliciting questions on oral inferencing ability. Journal of Child Language, 47(3), 655-679. doi:10.1017/S0305000919000801.

    Abstract

    The ability to make inferences is essential for effective language comprehension. While inferencing training benefits reading comprehension in school-aged children (see Elleman, 2017, for a review), we do not yet know whether it is beneficial to support the development of these skills prior to school entry. In a pre-registered randomised controlled trial, we evaluated the efficacy of a parent-delivered intervention intended to promote four-year-olds’ oral inferencing skills during shared book-reading. One hundred children from socioeconomically diverse backgrounds were randomly assigned to inferencing training or an active control condition of daily maths activities. The training was found to have no effect on inferencing. However, inferencing measures were highly correlated with children's baseline language ability. This suggests that a more effective approach to scaffolding inferencing in the preschool years might be to focus on promoting vocabulary to develop richer and stronger semantic networks.
  • Dediu, D. (2016). A multi-layered problem. IEEE CDS Newsletter, 13, 14-15.

    Abstract

    A response to “Moving Beyond Nature-Nurture: A Problem of Science or Communication?” by John Spencer, Mark Blumberg, and David Shenk.
  • Dediu, D., & Moisik, S. (2016). Defining and counting phonological classes in cross-linguistic segment databases. In N. Calzolari, K. Choukri, T. Declerck, S. Goggi, M. Grobelnik, B. Maegaard, J. Mariani, H. Mazo, A. Moreno, J. Odijk, & S. Piperidis (Eds.), Proceedings of LREC 2016: 10th International Conference on Language Resources and Evaluation (pp. 1955-1962). Paris: European Language Resources Association (ELRA).

    Abstract

    Recently, there has been an explosion in the availability of large, good-quality cross-linguistic databases such as WALS (Dryer & Haspelmath, 2013), Glottolog (Hammarstrom et al., 2015) and Phoible (Moran & McCloy, 2014). Databases such as Phoible contain the actual segments used by various languages as they are given in the primary language descriptions. However, this segment-level representation cannot be used directly for analyses that require generalizations over classes of segments that share theoretically interesting features. Here we present a method and the associated R (R Core Team, 2014) code that allows the flexible definition of such meaningful classes and that can identify the sets of segments falling into such a class for any language inventory. The method and its results are important for those interested in exploring cross-linguistic patterns of phonetic and phonological diversity and their relationship to extra-linguistic factors and processes such as climate, economics, history or human genetics.
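
    The core of the method can be sketched as follows, in Python rather than the authors’ R, with a toy feature table standing in for Phoible-style data: a phonological class is a set of required feature values, and the segments of any language inventory falling into the class are identified by filtering.

        # Toy segment-by-feature table; real analyses would use Phoible-style data.
        FEATURES = {
            "p": {"voiced": "-", "nasal": "-", "continuant": "-"},
            "b": {"voiced": "+", "nasal": "-", "continuant": "-"},
            "m": {"voiced": "+", "nasal": "+", "continuant": "-"},
            "s": {"voiced": "-", "nasal": "-", "continuant": "+"},
            "z": {"voiced": "+", "nasal": "-", "continuant": "+"},
        }

        def in_class(segment, definition):
            # A class definition maps features to required values,
            # e.g. {"voiced": "+", "nasal": "-"} for voiced oral segments.
            feats = FEATURES.get(segment, {})
            return all(feats.get(f) == v for f, v in definition.items())

        def count_class(inventory, definition):
            # Identify and count the segments of an inventory falling in a class.
            members = [s for s in inventory if in_class(s, definition)]
            return members, len(members)

        inventory = ["p", "b", "m", "s"]  # a hypothetical language inventory
        print(count_class(inventory, {"voiced": "+"}))      # (['b', 'm'], 2)
        print(count_class(inventory, {"continuant": "+"}))  # (['s'], 1)
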
  • Dediu, D., & Moisik, S. R. (2016). Anatomical biasing of click learning and production: An MRI and 3d palate imaging study. In S. G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Feher, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/57.html.

    Abstract

    The current paper presents results for data on click learning obtained from a larger imaging study (using MRI and 3D intraoral scanning) designed to quantify and characterize intra- and inter-population variation of vocal tract structures and the relation of this to speech production. The aim of the click study was to ascertain whether and to what extent vocal tract morphology influences (1) the ability to learn to produce clicks and (2) the productions of those that successfully learn to produce these sounds. The results indicate that the presence of an alveolar ridge certainly does not prevent an individual from learning to produce click sounds (1). However, the subtle details of how clicks are produced may indeed be driven by palate shape (2).
  • Dediu, D. (2018). Making genealogical language classifications available for phylogenetic analysis: Newick trees, unified identifiers, and branch length. Language Dynamics and Change, 8(1), 1-21. doi:10.1163/22105832-00801001.

    Abstract

    One of the best-known types of non-independence between languages is caused by genealogical relationships due to descent from a common ancestor. These can be represented by (more or less resolved and controversial) language family trees. In theory, one can argue that language families should be built through the strict application of the comparative method of historical linguistics, but in practice this is not always the case, and there are several proposed classifications of languages into language families, each with its own advantages and disadvantages. A major stumbling block shared by most of them is that they are relatively difficult to use with computational methods, and in particular with phylogenetics. This is due to their lack of standardization, coupled with the general non-availability of branch length information, which encapsulates the amount of evolution taking place on the family tree. In this paper I introduce a method (and its implementation in R) that converts the language classifications provided by four widely-used databases (Ethnologue, WALS, AUTOTYP and Glottolog) into the de facto Newick standard generally used in phylogenetics, aligns the four most used conventions for unique identifiers of linguistic entities (ISO 639-3, WALS, AUTOTYP and Glottocode), and adds branch length information from a variety of sources (the tree's own topology, an externally given numeric constant, or a distance matrix). The R scripts, input data and resulting Newick trees are available under liberal open-source licenses in a GitHub repository (https://github.com/ddediu/lgfam-newick), to encourage and promote the use of phylogenetic methods to investigate linguistic diversity and its temporal dynamics.
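
    To illustrate the target representation, a minimal Python sketch over an invented classification fragment (the actual R implementation in the GitHub repository additionally aligns identifiers and supports several branch-length sources): a nested genealogical classification can be rendered recursively as a Newick string, here with a constant branch length.

        def to_newick(node, name, branch_length=1.0):
            # node maps subgroup/language names to their children;
            # an empty dict marks a leaf. Branch lengths are constant here,
            # but could also come from topology or a distance matrix.
            if not node:
                return f"{name}:{branch_length}"
            children = ",".join(
                to_newick(child, child_name, branch_length)
                for child_name, child in node.items()
            )
            return f"({children}){name}:{branch_length}"

        # A toy (made-up) classification fragment.
        classification = {
            "Germanic": {"Dutch": {}, "English": {}, "German": {}},
            "Romance": {"French": {}, "Spanish": {}},
        }
        print(to_newick(classification, "Indo-European") + ";")
        # ((Dutch:1.0,English:1.0,German:1.0)Germanic:1.0,(French:1.0,Spanish:1.0)Romance:1.0)Indo-European:1.0;
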
  • Dediu, D., & de Boer, B. (2016). Language evolution needs its own journal. Journal of Language Evolution, 1, 1-6. doi:10.1093/jole/lzv001.

    Abstract

    Interest in the origins and evolution of language has been around for as long as language has been around. However, only recently has the empirical study of language come of age. We argue that the field has sufficiently advanced that it now needs its own journal—the Journal of Language Evolution.
  • Dediu, D., & Christiansen, M. H. (2016). Language evolution: Constraints and opportunities from modern genetics. Topics in Cognitive Science, 8, 361-370. doi:10.1111/tops.12195.

    Abstract

    Our understanding of language, its origins and subsequent evolution (including language change) is shaped not only by data and theories from the language sciences, but also fundamentally by the biological sciences. Recent developments in genetics and evolutionary theory offer not only very strong constraints on what scenarios of language evolution are possible and probable, but also exciting opportunities for understanding otherwise puzzling phenomena. Due to the breathtaking rate of advancement in these fields and the complexity, subtlety and sometimes apparent non-intuitiveness of the phenomena discovered, some of these recent developments have either been completely missed by language scientists, or misperceived and misrepresented. In this short paper, we offer an update on some of these findings and theoretical developments through a selection of illustrative examples and discussions that cast new light on current debates in the language sciences. The main message of our paper is that life is much more complex and nuanced than anybody could have predicted even a few decades ago, and that we need to be flexible in our theorizing instead of embracing a priori dogmas and trying to patch paradigms that are no longer satisfactory.
  • Dediu, D., & Levinson, S. C. (2018). Neanderthal language revisited: Not only us. Current Opinion in Behavioral Sciences, 21, 49-55. doi:10.1016/j.cobeha.2018.01.001.

    Abstract

    Here we re-evaluate our 2013 paper on the antiquity of language (Dediu and Levinson, 2013) in the light of a surge of new information on human evolution in the last half million years. Although new genetic data suggest the existence of some cognitive differences between Neanderthals and modern humans — fully expected after hundreds of thousands of years of partially separate evolution, overall our claims that Neanderthals were fully articulate beings and that language evolution was gradual are further substantiated by the wealth of new genetic, paleontological and archeological evidence briefly reviewed here.
  • Dediu, D. (2016). Typology for the masses. Linguistic Typology, 20(3), 579-581. doi:10.1515/lingty-2016-0029.
  • Defina, R. (2016). Do serial verb constructions describe single events? A study of co-speech gestures in Avatime. Language, 92(4), 890-910. doi:10.1353/lan.2016.0076.

    Abstract

    Serial verb constructions have often been said to refer to single conceptual events. However, evidence to support this claim has been elusive. This article introduces co-speech gestures as a new way of investigating the relationship. The alignment patterns of gestures with serial verb constructions and other complex clauses were compared in Avatime (Ka-Togo, Kwa, Niger-Congo). Serial verb constructions tended to occur with single gestures overlapping the entire construction. In contrast, other complex clauses were more likely to be accompanied by distinct gestures overlapping individual verbs. This pattern of alignment suggests that serial verb constructions are in fact used to describe single events.

    Additional information

    https://doi.org/10.1353/lan.2016.0069
  • Defina, R. (2016). Events in language and thought: The case of serial verb constructions in Avatime. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Defina, R. (2016). Serial verb constructions and their subtypes in Avatime. Studies in Language, 40(3), 648-680. doi:10.1075/sl.40.3.07def.
  • Degand, L., & Van Bergen, G. (2018). Discourse markers as turn-transition devices: Evidence from speech and instant messaging. Discourse Processes, 55, 47-71. doi:10.1080/0163853X.2016.1198136.

    Abstract

    In this article we investigate the relation between discourse markers and turn-transition strategies in face-to-face conversations and Instant Messaging (IM), that is, unplanned, real-time, text-based, computer-mediated communication. By means of a quantitative corpus study of utterances containing a discourse marker, we show that utterance-final discourse markers are used more often in IM than in face-to-face conversations. Moreover, utterance-final discourse markers are shown to occur more often at points of turn-transition compared with points of turn-maintenance in both types of conversation. From our results we conclude that the discourse markers in utterance-final position can function as a turn-transition mechanism, signaling that the turn is over and the floor is open to the hearer. We argue that this linguistic turn-taking strategy is essentially similar in face-to-face and IM communication. Our results add to the evidence that communication in IM is more like speech than like writing.
  • Delgado, T., Ravignani, A., Verhoef, T., Thompson, B., Grossi, T., & Kirby, S. (2018). Cultural transmission of melodic and rhythmic universals: Four experiments and a model. In C. Cuskley, M. Flaherty, H. Little, L. McCrohon, A. Ravignani, & T. Verhoef (Eds.), Proceedings of the 12th International Conference on the Evolution of Language (EVOLANG XII) (pp. 89-91). Toruń, Poland: NCU Press. doi:10.12775/3991-1.019.
