Publications

  • Casillas, M., & Frank, M. C. (2012). Cues to turn boundary prediction in adults and preschoolers. In S. Brown-Schmidt, J. Ginzburg, & S. Larsson (Eds.), Proceedings of SemDial 2012 (SeineDial): The 16th Workshop on the Semantics and Pragmatics of Dialogue (pp. 61-69). Paris: Université Paris-Diderot.

    Abstract

    Conversational turns often proceed with very brief pauses between speakers. In order to maintain “no gap, no overlap” turn-taking, we must be able to anticipate when an ongoing utterance will end, tracking the current speaker for upcoming points of potential floor exchange. The precise set of cues that listeners use for turn-end boundary anticipation is not yet established. We used an eyetracking paradigm to measure adults’ and children’s online turn processing as they watched videos of conversations in their native language (English) and a range of other languages they did not speak. Both adults and children anticipated speaker transitions effectively. In addition, we observed evidence of turn-boundary anticipation for questions even in languages that were unknown to participants, suggesting that listeners’ success in turn-end anticipation does not rely solely on lexical information.
  • Casillas, M., & Frank, M. C. (2017). The development of children's ability to track and predict turn structure in conversation. Journal of Memory and Language, 92, 234-253. doi:10.1016/j.jml.2016.06.013.

    Abstract

    Children begin developing turn-taking skills in infancy but take several years to fluidly integrate their growing knowledge of language into their turn-taking behavior. In two eye-tracking experiments, we measured children’s anticipatory gaze to upcoming responders while controlling linguistic cues to turn structure. In Experiment 1, we showed English and non-English conversations to English-speaking adults and children. In Experiment 2, we phonetically controlled lexicosyntactic and prosodic cues in English-only speech. Children spontaneously made anticipatory gaze switches by age two and continued improving through age six. In both experiments, children and adults made more anticipatory switches after hearing questions. Consistent with prior findings on adult turn prediction, prosodic information alone did not increase children’s anticipatory gaze shifts. But, unlike prior work with adults, lexical information alone was not sufficient either—children’s performance was best overall with lexicosyntax and prosody together. Our findings support an account in which turn tracking and turn prediction emerge in infancy and then gradually become integrated with children’s online linguistic processing.
  • Casillas, M., Amatuni, A., Seidl, A., Soderstrom, M., Warlaumont, A., & Bergelson, E. (2017). What do Babies hear? Analyses of Child- and Adult-Directed Speech. In Proceedings of Interspeech 2017 (pp. 2093-2097). doi:10.21437/Interspeech.2017-1409.

    Abstract

    Child-directed speech is argued to facilitate language development, and is found cross-linguistically and cross-culturally to varying degrees. However, previous research has generally focused on short samples of child-caregiver interaction, often in the lab or with experimenters present. We test the generalizability of this phenomenon with an initial descriptive analysis of the speech heard by young children in a large, unique collection of naturalistic, daylong home recordings. Trained annotators coded automatically-detected adult speech 'utterances' from 61 homes across 4 North American cities, gathered from children (age 2-24 months) wearing audio recorders during a typical day. Coders marked the speaker gender (male/female) and intended addressee (child/adult), yielding 10,886 addressee and gender tags from 2,523 minutes of audio (cf. HB-CHAAC Interspeech ComParE challenge; Schuller et al., in press). Automated speaker-diarization (LENA) incorrectly gender-tagged 30% of male adult utterances, compared to manually-coded consensus. Furthermore, we find effects of SES and gender on child-directed and overall speech, an increase in child-directed speech with child age, and interactions of speaker gender, child gender, and child age: female caretakers increased their child-directed speech more with age than male caretakers did, but only for male infants. Implications for language acquisition and existing classification algorithms are discussed.
  • Castro-Caldas, A., Petersson, K. M., Reis, A., Stone-Elander, S., & Ingvar, M. (1998). The illiterate brain: Learning to read and write during childhood influences the functional organization of the adult brain. Brain, 121, 1053-1063. doi:10.1093/brain/121.6.1053.

    Abstract

    Learning a specific skill during childhood may partly determine the functional organization of the adult brain. This hypothesis led us to study oral language processing in illiterate subjects who, for social reasons, had never entered school and had no knowledge of reading or writing. In a brain activation study using PET and statistical parametric mapping, we compared word and pseudoword repetition in literate and illiterate subjects. Our study confirms behavioural evidence of different phonological processing in illiterate subjects. During repetition of real words, the two groups performed similarly and activated similar areas of the brain. In contrast, illiterate subjects had more difficulty repeating pseudowords correctly and did not activate the same neural structures as literates. These results are consistent with the hypothesis that learning the written form of language (orthography) interacts with the function of oral language. Our results indicate that learning to read and write during childhood influences the functional organization of the adult human brain.
  • Catani, M., Dell'Acqua, F., Bizzi, A., Forkel, S. J., Williams, S. C., Simmons, A., Murphy, D. G., & Thiebaut de Schotten, M. (2012). Beyond cortical localization in clinico-anatomical correlation. Cortex, 48(10), 1262-1287. doi:10.1016/j.cortex.2012.07.001.

    Abstract

    Last year was the 150th anniversary of Paul Broca's landmark case report on speech disorder that paved the way for subsequent studies of cortical localization of higher cognitive functions. However, many complex functions rely on the activity of distributed networks rather than single cortical areas. Hence, it is important to understand how brain regions are linked within large-scale networks and to map lesions onto connecting white matter tracts. To facilitate this network approach we provide a synopsis of classical neurological syndromes associated with frontal, parietal, occipital, temporal and limbic lesions. A review of tractography studies in a variety of neuropsychiatric disorders is also included. The synopsis is accompanied by a new atlas of the human white matter connections based on diffusion tensor tractography, freely downloadable at http://www.natbrainlab.com. Clinicians can use the maps to accurately identify the tract affected by lesions visible on conventional CT or MRI. The atlas will also assist researchers to interpret their group analysis results. We hope that the synopsis and the atlas, by allowing a precise localization of white matter lesions and associated symptoms, will facilitate future work on the functional correlates of human neural networks as derived from the study of clinical populations. Our goal is to stimulate clinicians to develop a critical approach to clinico-anatomical correlative studies and broaden their view of clinical anatomy beyond the cortical surface in order to encompass the dysfunction related to connecting pathways.

    Additional information

    supplementary file
  • Catani, M., Robertsson, N., Beyh, A., Huynh, V., de Santiago Requejo, F., Howells, H., Barrett, R. L., Aiello, M., Cavaliere, C., Dyrby, T. B., Krug, K., Ptito, M., D'Arceuil, H., Forkel, S. J., & Dell'Acqua, F. (2017). Short parietal lobe connections of the human and monkey brain. Cortex, 97, 339-357. doi:10.1016/j.cortex.2017.10.022.

    Abstract

    The parietal lobe has a unique place in the human brain. Anatomically, it is at the crossroad between the frontal, occipital, and temporal lobes, thus providing a middle ground for multimodal sensory integration. Functionally, it supports higher cognitive functions that are characteristic of the human species, such as mathematical cognition, semantic and pragmatic aspects of language, and abstract thinking. Despite its importance, a comprehensive comparison of human and simian intraparietal networks is missing.

    In this study, we used diffusion imaging tractography to reconstruct the major intralobar parietal tracts in twenty-one datasets acquired in vivo from healthy human subjects and eleven ex vivo datasets from five vervet and six macaque monkeys. Three regions of interest (postcentral gyrus, superior parietal lobule and inferior parietal lobule) were used to identify the tracts. Surface projections were reconstructed for both species and results compared to identify similarities or differences in tract anatomy (i.e., trajectories and cortical projections). In addition, post-mortem dissections were performed in a human brain.

    The largest tract identified in both human and monkey brains is a vertical pathway between the superior and inferior parietal lobules. This tract can be divided into an anterior (supramarginal gyrus) and a posterior (angular gyrus) component in both humans and monkey brains. The second prominent intraparietal tract connects the postcentral gyrus to both supramarginal and angular gyri of the inferior parietal lobule in humans but only to the supramarginal gyrus in the monkey brain. The third tract connects the postcentral gyrus to the anterior region of the superior parietal lobule and is more prominent in monkeys compared to humans. Finally, short U-shaped fibres in the medial and lateral aspects of the parietal lobe were identified in both species. A tract connecting the medial parietal cortex to the lateral inferior parietal cortex was observed in the monkey brain only.

    Our findings suggest a consistent pattern of intralobar parietal connections between humans and monkeys with some differences for those areas that have cytoarchitectonically distinct features in humans. The overall pattern of intraparietal connectivity supports the special role of the inferior parietal lobule in cognitive functions characteristic of humans.
  • Çetinçelik, M., Rowland, C. F., & Snijders, T. M. (2021). Do the eyes have it? A systematic review on the role of eye gaze in infant language development. Frontiers in Psychology, 11: 589096. doi:10.3389/fpsyg.2020.589096.

    Abstract

    Eye gaze is a ubiquitous cue in child-caregiver interactions and infants are highly attentive to eye gaze from very early on. However, the question of why infants show gaze-sensitive behavior, and what role this sensitivity to gaze plays in their language development, is not yet well-understood. To gain a better understanding of the role of eye gaze in infants’ language learning, we conducted a broad systematic review of the developmental literature for all studies that investigate the role of eye gaze in infants’ language development. Across 77 peer-reviewed articles containing data from typically-developing human infants (0-24 months) in the domain of language development, we identified two broad themes. The first tracked the effect of eye gaze on four developmental domains: (1) vocabulary development, (2) word-object mapping, (3) object processing, and (4) speech processing. Overall, there is considerable evidence that infants learn more about objects and are more likely to form word-object mappings in the presence of eye gaze cues, both of which are necessary for learning words. In addition, there is good evidence for longitudinal relationships between infants’ gaze following abilities and later receptive and expressive vocabulary. However, many domains (e.g. speech processing) are understudied; further work is needed to decide whether gaze effects are specific to tasks such as word-object mapping, or whether they reflect a general learning enhancement mechanism. The second theme explored the reasons why eye gaze might be facilitative for learning, addressing the question of whether eye gaze is treated by infants as a specialized socio-cognitive cue. We concluded that the balance of evidence supports the idea that eye gaze facilitates infants’ learning by enhancing their arousal, memory and attentional capacities to a greater extent than other low-level attentional cues. However, as yet, there are too few studies that directly compare the effect of eye gaze cues and non-social, attentional cues for strong conclusions to be drawn. We also suggest there might be a developmental effect, with eye gaze, over the course of the first two years of life, developing into a truly ostensive cue that enhances language learning across the board.

    Additional information

    data sheet
  • Chan, A., Matthews, S., Tse, N., Lam, A., Chang, F., & Kidd, E. (2021). Revisiting Subject–Object Asymmetry in the Production of Cantonese Relative Clauses: Evidence From Elicited Production in 3-Year-Olds. Frontiers in Psychology, 12: 679008. doi:10.3389/fpsyg.2021.679008.

    Abstract

    Emergentist approaches to language acquisition identify a core role for language-specific experience and give primacy to other factors like function and domain-general learning mechanisms in syntactic development. This directly contrasts with a nativist structurally oriented approach, which predicts that grammatical development is guided by Universal Grammar and that structural factors constrain acquisition. Cantonese relative clauses (RCs) offer a good opportunity to test these perspectives because their typologically rare properties decouple the roles of frequency and complexity in subject- and object-RCs in a way not possible in European languages. Specifically, Cantonese object RCs of the classifier type are frequently attested in children’s linguistic experience and are isomorphic to frequent and early-acquired simple SVO transitive clauses, but according to formal grammatical analyses Cantonese subject RCs are computationally less demanding to process. Thus, the two opposing theories make different predictions: the emergentist approach predicts a specific preference for object RCs of the classifier type, whereas the structurally oriented approach predicts a subject advantage. In the current study we revisited this issue. Eighty-seven monolingual Cantonese children aged between 3;2 and 3;11 (Mage: 3;6) participated in an elicited production task designed to elicit production of subject- and object-RCs. The children were very young and most of them produced only noun phrases when RCs were elicited. Those (nine children) who did produce RCs produced overwhelmingly more object RCs than subject RCs, even when animacy cues were controlled. The majority of object RCs produced were the frequent classifier-type RCs. The findings concur with our hypothesis from the emergentist perspectives that input frequency and formal and functional similarity to known structures guide acquisition.
  • Chang, F., Janciauskas, M., & Fitz, H. (2012). Language adaptation and learning: Getting explicit about implicit learning. Language and Linguistics Compass, 6, 259-278. doi:10.1002/lnc3.337.

    Abstract

    Linguistic adaptation is a phenomenon where language representations change in response to linguistic input. Adaptation can occur on multiple linguistic levels such as phonology (tuning of phonotactic constraints), words (repetition priming), and syntax (structural priming). The persistent nature of these adaptations suggests that they may be a form of implicit learning and connectionist models have been developed which instantiate this hypothesis. Research on implicit learning, however, has also produced evidence that explicit chunk knowledge is involved in the performance of these tasks. In this review, we examine how these interacting implicit and explicit processes may change our understanding of language learning and processing.
  • Chen, X. S., & Brown, C. M. (2012). Computational identification of new structured cis-regulatory elements in the 3'-untranslated region of human protein coding genes. Nucleic Acids Research, 40, 8862-8873. doi:10.1093/nar/gks684.

    Abstract

    Messenger ribonucleic acids (RNAs) contain a large number of cis-regulatory RNA elements that function in many types of post-transcriptional regulation. These cis-regulatory elements are often characterized by conserved structures and/or sequences. Although some classes are well known, given the wide range of RNA-interacting proteins in eukaryotes, it is likely that many new classes of cis-regulatory elements are yet to be discovered. An approach to this is to use computational methods that have the advantage of analysing genomic data, particularly comparative data on a large scale. In this study, a set of structural discovery algorithms was applied followed by support vector machine (SVM) classification. We trained a new classification model (CisRNA-SVM) on a set of known structured cis-regulatory elements from 3′-untranslated regions (UTRs) and successfully distinguished these, as well as groups of cis-regulatory elements it had not been trained on, from control genomic and shuffled sequences. The new method outperformed previous methods in classification of cis-regulatory RNA elements. This model was then used to predict new elements from cross-species conserved regions of human 3′-UTRs. Clustering of these elements identified new classes of potential cis-regulatory elements. The model, training and testing sets and novel human predictions are available at: http://mRNA.otago.ac.nz/CisRNA-SVM.
  • Chen, J. (2012). “She from bookshelf take-descend-come the box”: Encoding and categorizing placement events in Mandarin. In A. Kopecka, & B. Narasimhan (Eds.), Events of putting and taking: A crosslinguistic perspective (pp. 37-54). Amsterdam: Benjamins.

    Abstract

    This paper investigates the lexical semantics of placement verbs in Mandarin. The majority of Mandarin placement verbs are directional verb compounds (e.g., na2-xia4-lai2 ‘take-descend-come’). They are composed of two or three verbs in a fixed order, each encoding certain semantic components of placement events. The first verb usually conveys object manipulation and the second and the third verbs indicate the Path of motion, including Deixis. The first verb, typically encoding object manipulation, can be semantically general or specific: two general verbs, fang4 ‘put’ and na2 ‘take’, have large but constrained extensional categories, and a number of specific verbs are used based on the Manner of manipulation of the Figure object, the relationship between and the physical properties of Figure and Ground, intentionality of the Agent, and the type of instrument.
  • Chen, A., Gussenhoven, C., & Rietveld, T. (2004). Language specificity in perception of paralinguistic intonational meaning. Language and Speech, 47(4), 311-349.

    Abstract

    This study examines the perception of paralinguistic intonational meanings deriving from Ohala’s Frequency Code (Experiment 1) and Gussenhoven’s Effort Code (Experiment 2) in British English and Dutch. Native speakers of British English and Dutch listened to a number of stimuli in their native language and judged each stimulus on four semantic scales deriving from these two codes: SELF-CONFIDENT versus NOT SELF-CONFIDENT, FRIENDLY versus NOT FRIENDLY (Frequency Code); SURPRISED versus NOT SURPRISED, and EMPHATIC versus NOT EMPHATIC (Effort Code). The stimuli, which were lexically equivalent across the two languages, differed in pitch contour, pitch register and pitch span in Experiment 1, and in pitch register, peak height, peak alignment and end pitch in Experiment 2. Contrary to the traditional view that the paralinguistic usage of intonation is similar across languages, it was found that British English and Dutch listeners differed considerably in the perception of “confident,” “friendly,” “emphatic,” and “surprised.” The present findings support a theory of paralinguistic meaning based on the universality of biological codes, which however acknowledges a language-specific component in the implementation of these codes.
  • Chen, X. S., Reader, R. H., Hoischen, A., Veltman, J. A., Simpson, N. H., Francks, C., Newbury, D. F., & Fisher, S. E. (2017). Next-generation DNA sequencing identifies novel gene variants and pathways involved in specific language impairment. Scientific Reports, 7: 46105. doi:10.1038/srep46105.

    Abstract

    A significant proportion of children have unexplained problems acquiring proficient linguistic skills despite adequate intelligence and opportunity. Developmental language disorders are highly heritable with substantial societal impact. Molecular studies have begun to identify candidate loci, but much of the underlying genetic architecture remains undetermined. We performed whole-exome sequencing of 43 unrelated probands affected by severe specific language impairment, followed by independent validations with Sanger sequencing, and analyses of segregation patterns in parents and siblings, to shed new light on aetiology. By first focusing on a pre-defined set of known candidates from the literature, we identified potentially pathogenic variants in genes already implicated in diverse language-related syndromes, including ERC1, GRIN2A, and SRPX2. Complementary analyses suggested novel putative candidates carrying validated variants which were predicted to have functional effects, such as OXR1, SCN9A and KMT2D. We also searched for potential “multiple-hit” cases; one proband carried a rare AUTS2 variant in combination with a rare inherited haplotype affecting STARD9, while another carried a novel nonsynonymous variant in SEMA6D together with a rare stop-gain in SYNPR. On broadening scope to all rare and novel variants throughout the exomes, we identified biological themes that were enriched for such variants, including microtubule transport and cytoskeletal regulation.
  • Chen, A. (2012). Shaping the intonation of Wh-questions: Information structure and beyond. In J. P. de Ruiter (Ed.), Questions: Formal, functional and interactional perspectives (pp. 146-164). New York: Cambridge University Press.
  • Chen, A. (2012). The prosodic investigation of information structure. In M. Krifka, & R. Musan (Eds.), The expression of information structure (pp. 249-286). Berlin: de Gruyter.
  • Cho, T., & McQueen, J. M. (2004). Phonotactics vs. phonetic cues in native and non-native listening: Dutch and Korean listeners' perception of Dutch and English. In S. Kin, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 1301-1304). Seoul: Sunjin Printing Co.

    Abstract

    We investigated how listeners of two unrelated languages, Dutch and Korean, process phonotactically legitimate and illegitimate sounds spoken in Dutch and American English. To Dutch listeners, unreleased word-final stops are phonotactically illegal because word-final stops in Dutch are generally released in isolation, but to Korean listeners, released final stops are illegal because word-final stops are never released in Korean. Two phoneme monitoring experiments showed a phonotactic effect: Dutch listeners detected released stops more rapidly than unreleased stops whereas the reverse was true for Korean listeners. Korean listeners with English stimuli detected released stops more accurately than unreleased stops, however, suggesting that acoustic-phonetic cues associated with released stops improve detection accuracy. We propose that in non-native speech perception, phonotactic legitimacy in the native language speeds up phoneme recognition, the richness of acoustic-phonetic cues improves listening accuracy, and familiarity with the non-native language modulates the relative influence of these two factors.
  • Cho, T. (2004). Prosodically conditioned strengthening and vowel-to-vowel coarticulation in English. Journal of Phonetics, 32(2), 141-176. doi:10.1016/S0095-4470(03)00043-3.

    Abstract

    The goal of this study is to examine how the degree of vowel-to-vowel coarticulation varies as a function of prosodic factors such as nuclear-pitch accent (accented vs. unaccented), level of prosodic boundary (Prosodic Word vs. Intermediate Phrase vs. Intonational Phrase), and position-in-prosodic-domain (initial vs. final). It is hypothesized that vowels in prosodically stronger locations (e.g., in accented syllables and at a higher prosodic boundary) are not only coarticulated less with their neighboring vowels, but they also exert a stronger influence on their neighbors. Measurements of tongue position for English /a i/ over time were obtained with Carstens' electromagnetic articulography. Results showed that vowels in prosodically stronger locations are coarticulated less with neighboring vowels, but do not exert a stronger influence on the articulation of neighboring vowels. An examination of the relationship between coarticulation and duration revealed that (a) accent-induced coarticulatory variation cannot be attributed to a duration factor and (b) some of the data with respect to boundary effects may be accounted for by the duration factor. This suggests that to the extent that prosodically conditioned coarticulatory variation is duration-independent, there is no absolute causal relationship from duration to coarticulation. It is proposed that prosodically conditioned V-to-V coarticulatory reduction is another type of strengthening that occurs in prosodically strong locations. The prosodically driven coarticulatory patterning is taken to be part of the phonetic signatures of the hierarchically nested structure of prosody.
  • Cho, T., & Johnson, E. K. (2004). Acoustic correlates of phrase-internal lexical boundaries in Dutch. In S. Kin, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 1297-1300). Seoul: Sunjin Printing Co.

    Abstract

    The aim of this study was to determine if Dutch speakers reliably signal phrase-internal lexical boundaries, and if so, how. Six speakers recorded 4 pairs of phonemically identical strong-weak-strong (SWS) strings with matching syllable boundaries but mismatching intended word boundaries (e.g. reis # pastei versus reispas # tij, or more broadly C1V1(C)#C2V2(C)C3V3(C) vs. C1V1(C)C2V2(C)#C3V3(C)). An Analysis of Variance revealed 3 acoustic parameters that were significantly greater in S#WS items (C2 DURATION, RIME1 DURATION, C3 BURST AMPLITUDE) and 5 parameters that were significantly greater in the SW#S items (C2 VOT, C3 DURATION, RIME2 DURATION, RIME3 DURATION, and V2 AMPLITUDE). Additionally, center of gravity measurements suggested that the [s] to [t] coarticulation was greater in reis # pa[st]ei versus reispa[s] # [t]ij. Finally, a Logistic Regression Analysis revealed that the 3 parameters (RIME1 DURATION, RIME2 DURATION, and C3 DURATION) contributed most reliably to an S#WS versus SW#S classification.
  • Choi, J., Cutler, A., & Broersma, M. (2017). Early development of abstract language knowledge: Evidence from perception-production transfer of birth-language memory. Royal Society Open Science, 4: 160660. doi:10.1098/rsos.160660.

    Abstract

    Children adopted early in life into another linguistic community typically forget their birth language but retain, unaware, relevant linguistic knowledge that may facilitate (re)learning of birth-language patterns. Understanding the nature of this knowledge can shed light on how language is acquired. Here, international adoptees from Korea with Dutch as their current language, and matched Dutch-native controls, provided speech production data on a Korean consonantal distinction unlike any Dutch distinctions, at the outset and end of an intensive perceptual training. The productions, elicited in a repetition task, were identified and rated by Korean listeners. Adoptees' production scores improved significantly more across the training period than control participants' scores, and, for adoptees only, relative production success correlated significantly with the rate of learning in perception (which had, as predicted, also surpassed that of the controls). Of the adoptee group, half had been adopted at 17 months or older (when talking would have begun), while half had been prelinguistic (under six months). The former group, with production experience, showed no advantage over the group without. Thus the adoptees' retained knowledge of Korean transferred from perception to production and appears to be abstract in nature rather than dependent on the amount of experience.
  • Choi, J., Broersma, M., & Cutler, A. (2017). Early phonology revealed by international adoptees' birth language retention. Proceedings of the National Academy of Sciences of the United States of America, 114(28), 7307-7312. doi:10.1073/pnas.1706405114.

    Abstract

    Until at least 6 mo of age, infants show good discrimination for familiar phonetic contrasts (i.e., those heard in the environmental language) and contrasts that are unfamiliar. Adult-like discrimination (significantly worse for nonnative than for native contrasts) appears only later, by 9–10 mo. This has been interpreted as indicating that infants have no knowledge of phonology until vocabulary development begins, after 6 mo of age. Recently, however, word recognition has been observed before age 6 mo, apparently decoupling the vocabulary and phonology acquisition processes. Here we show that phonological acquisition is also in progress before 6 mo of age. The evidence comes from retention of birth-language knowledge in international adoptees. In the largest ever such study, we recruited 29 adult Dutch speakers who had been adopted from Korea when young and had no conscious knowledge of Korean language at all. Half were adopted at age 3–5 mo (before native-specific discrimination develops) and half at 17 mo or older (after word learning has begun). In a short intensive training program, we observe that adoptees (compared with 29 matched controls) more rapidly learn tripartite Korean consonant distinctions without counterparts in their later-acquired Dutch, suggesting that the adoptees retained phonological knowledge about the Korean distinction. The advantage is equivalent for the younger-adopted and the older-adopted groups, and both groups not only acquire the tripartite distinction for the trained consonants but also generalize it to untrained consonants. Although infants younger than 6 mo can still discriminate unfamiliar phonetic distinctions, this finding indicates that native-language phonological knowledge is nonetheless being acquired at that age.
  • Cholin, J. (2004). Syllables in speech production: Effects of syllable preparation and syllable frequency. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.60589.

    Abstract

    The fluent production of speech is a very complex human skill. It requires the coordination of several articulatory subsystems. The instructions that lead articulatory movements to execution are the result of the interplay of speech production levels that operate above the articulatory network. During the process of word-form encoding, the groundwork for the articulatory programs is prepared which then serve the articulators as basic units. This thesis investigated whether or not syllables form the basis for the articulatory programs and in particular whether or not these syllable programs are stored, separate from the store of the lexical word-forms. It is assumed that syllable units are stored in a so-called 'mental syllabary'. The main goal of this thesis was to find evidence of the syllable playing a functionally important role in speech production and for the assumption that syllables are stored units. In a variant of the implicit priming paradigm, it was investigated whether information about the syllabic structure of a target word facilitates the preparation (advanced planning) of a to-be-produced utterance. These experiments yielded evidence for the functionally important role of syllables in speech production. In a subsequent series of experiments, it was demonstrated that the production of syllables is sensitive to frequency. Syllable frequency effects provide strong evidence for the notion of a mental syllabary because only stored units are likely to exhibit frequency effects. In a final study, effects of syllable preparation and syllable frequency were investigated in a combined experiment to disentangle the two effects. The results of this last experiment converged with those reported for the other experiments and added further support to the claim that syllables play a core functional role in speech production and are stored in a mental syllabary.

    Additional information

    full text via Radboud Repository
  • Cholin, J., Schiller, N. O., & Levelt, W. J. M. (2004). The preparation of syllables in speech production. Journal of Memory and Language, 50(1), 47-61. doi:10.1016/j.jml.2003.08.003.

    Abstract

    Models of speech production assume that syllables play a functional role in the process of word-form encoding in speech production. In this study, we investigate this claim and specifically provide evidence about the level at which syllables come into play. We report two studies using an odd-man-out variant of the implicit priming paradigm to examine the role of the syllable during the process of word formation. Our results show that this modified version of the implicit priming paradigm can trace the emergence of syllabic structure during spoken word generation. Comparing these results to prior syllable priming studies, we conclude that syllables emerge at the interface between phonological and phonetic encoding. The results are discussed in terms of the WEAVER++ model of lexical access.
  • Chu, M., & Kita, S. (2012). The role of spontaneous gestures in spatial problem solving. In E. Efthimiou, G. Kouroupetroglou, & S.-E. Fotinea (Eds.), Gesture and sign language in human-computer interaction and embodied communication: 9th International Gesture Workshop, GW 2011, Athens, Greece, May 25-27, 2011, revised selected papers (pp. 57-68). Heidelberg: Springer.

    Abstract

    When solving spatial problems, people often spontaneously produce hand gestures. Recent research has shown that our knowledge is shaped by the interaction between our body and the environment. In this article, we review and discuss evidence on: 1) how spontaneous gesture can reveal the development of problem solving strategies when people solve spatial problems; 2) whether producing gestures can enhance spatial problem solving performance. We argue that when solving novel spatial problems, adults go through deagentivization and internalization processes, which are analogous to young children’s cognitive development processes. Furthermore, gesture enhances spatial problem solving performance. The beneficial effect of gesturing can be extended to non-gesturing trials and can be generalized to a different spatial task that shares similar spatial transformation processes.
  • Chu, M., & Kita, S. (2012). The nature of the beneficial role of spontaneous gesture in spatial problem solving [Abstract]. Cognitive Processing; Special Issue "ICSC 2012, the 5th International Conference on Spatial Cognition: Space and Embodied Cognition". Oral Presentations, 13(Suppl. 1), S39.

    Abstract

    Spontaneous gestures play an important role in spatial problem solving. We investigated the functional role and underlying mechanism of spontaneous gestures in spatial problem solving. In Experiment 1, 132 participants were required to solve a mental rotation task (see Figure 1) without speaking. Participants gestured more frequently in difficult trials than in easy trials. In Experiment 2, 66 new participants were given two identical sets of mental rotation problems, like those used in Experiment 1. Participants who were encouraged to gesture in the first set of mental rotation problems solved more problems correctly than those who were merely allowed to gesture or those who were prohibited from gesturing, both in the first set and in the second set, in which all participants were prohibited from gesturing. The gestures produced by the gesture-encouraged group and the gesture-allowed group were not qualitatively different. In Experiment 3, 32 new participants were first given a set of mental rotation problems and then a second set of non-gesturing paper folding problems. The gesture-encouraged group solved more problems correctly in the first set of mental rotation problems and in the second set of non-gesturing paper folding problems. We concluded that gesture improves spatial problem solving. Furthermore, gesture has a lasting beneficial effect even when gesture is not available, and the beneficial effect is problem-general. We suggest that gesture enhances spatial problem solving by providing a rich sensori-motor representation of the physical world and picking up information that is less readily available to visuo-spatial processes.
  • Chwilla, D., Hagoort, P., & Brown, C. M. (1998). The mechanism underlying backward priming in a lexical decision task: Spreading activation versus semantic matching. Quarterly Journal of Experimental Psychology, 51A(3), 531-560. doi:10.1080/713755773.

    Abstract

    Koriat (1981) demonstrated that an association from the target to a preceding prime, in the absence of an association from the prime to the target, facilitates lexical decision and referred to this effect as "backward priming". Backward priming is of relevance, because it can provide information about the mechanism underlying semantic priming effects. Following Neely (1991), we distinguish three mechanisms of priming: spreading activation, expectancy, and semantic matching/integration. The goal was to determine which of these mechanisms causes backward priming, by assessing effects of backward priming on a language-relevant ERP component, the N400, and reaction time (RT). Based on previous work, we propose that the N400 priming effect reflects expectancy and semantic matching/integration, but in contrast with RT does not reflect spreading activation. Experiment 1 shows a backward priming effect that is qualitatively similar for the N400 and RT in a lexical decision task. This effect was not modulated by an ISI manipulation. Experiment 2 clarifies that the N400 backward priming effect reflects genuine changes in N400 amplitude and cannot be ascribed to other factors. We will argue that these backward priming effects cannot be due to expectancy but are best accounted for in terms of semantic matching/integration.
  • Claus, A. (2004). Access management system. Language Archive Newsletter, 1(2), 5.
  • Coco, M. I., Araujo, S., & Petersson, K. M. (2017). Disentangling stimulus plausibility and contextual congruency: Electro-physiological evidence for differential cognitive dynamics. Neuropsychologia, 96, 150-163. doi:10.1016/j.neuropsychologia.2016.12.008.

    Abstract

    Expectancy mechanisms are routinely used by the cognitive system in stimulus processing and in anticipation of appropriate responses. Electrophysiology research has documented negative shifts of brain activity when expectancies are violated within a local stimulus context (e.g., reading an implausible word in a sentence) or more globally between consecutive stimuli (e.g., a narrative of images with an incongruent end). In this EEG study, we examine the interaction between expectancies operating at the level of stimulus plausibility and at the more global level of contextual congruency to provide evidence for, or against, a dissociation of the underlying processing mechanisms. We asked participants to verify the congruency of pairs of cross-modal stimuli (a sentence and a scene), which varied in plausibility. ANOVAs on ERP amplitudes in selected windows of interest show that congruency violation has longer-lasting (from 100 to 500 ms) and more widespread effects than plausibility violation (from 200 to 400 ms). We also observed critical interactions between these factors, whereby incongruent and implausible pairs elicited stronger negative shifts than their congruent counterparts, both early on (100–200 ms) and between 400–500 ms. Our results suggest that the integration mechanisms are sensitive to both global and local effects of expectancy in a modality-independent manner. Overall, we provide novel insights into the interdependence of expectancy during meaning integration of cross-modal stimuli in a verification task.
  • Cohen, E., Van Leeuwen, E. J. C., Barbosa, A., & Haun, D. B. M. (2021). Does accent trump skin color in guiding children’s social preferences? Evidence from Brazil’s natural lab. Cognitive Development, 60: 101111. doi:10.1016/j.cogdev.2021.101111.

    Abstract

    Previous research has shown significant effects of race and accent on children’s developing social preferences. Accounts of the primacy of accent biases in the evolution and ontogeny of discriminant cooperation have been proposed, but lack systematic cross-cultural investigation. We report three controlled studies conducted with 5−10 year old children across four towns in the Brazilian Amazon, selected for their variation in racial and accent homogeneity/heterogeneity. Study 1 investigated participants’ (N = 289) decisions about friendship and sharing across color-contrasted pairs of target individuals: Black-White, Black-Pardo (Brown), Pardo-White. Study 2 (N = 283) investigated effects of both color and accent (Local vs Non-Local) on friendship and sharing decisions. Overall, there was a significant bias toward the lighter colored individual. A significant preference for local accent mitigates but does not override the color bias, except in the site characterized by both racial and accent heterogeneity. Results also vary by participant age and color. Study 3 (N = 235) reports results of an accent discrimination task that shows an overall increase in accuracy with age. The research suggests that cooperative preferences based on accent and race develop differently in response to locally relevant parameters of racial and linguistic variation.
  • Cohen, E. (2012). [Review of the book Searching for Africa in Brazil: Power and Tradition in Candomblé by Stefania Capone]. Critique of Anthropology, 32, 217-218. doi:10.1177/0308275X12439961.
  • Cohen, E. (2012). The evolution of tag-based cooperation in humans: The case for accent. Current Anthropology, 53, 588-616. doi:10.1086/667654.

    Abstract

    Recent game-theoretic simulation and analytical models have demonstrated that cooperative strategies mediated by indicators of cooperative potential, or “tags,” can invade, spread, and resist invasion by noncooperators across a range of population-structure and cost-benefit scenarios. The plausibility of these models is potentially relevant for human evolutionary accounts insofar as humans possess some phenotypic trait that could serve as a reliable tag. Linguistic markers, such as accent and dialect, have frequently been either cursorily defended or promptly dismissed as satisfying the criteria of a reliable and evolutionarily viable tag. This paper integrates evidence from a range of disciplines to develop and assess the claim that speech accent mediated the evolution of tag-based cooperation in humans. Existing evidence warrants the preliminary conclusion that accent markers meet the demands of an evolutionarily viable tag and potentially afforded a cost-effective solution to the challenges of maintaining viable cooperative relationships in diffuse, regional social networks.
  • Collins, J. (2017). Real and spurious correlations involving tonal languages. In N. J. Enfield (Ed.), Dependencies in language: On the causal ontology of linguistics systems (pp. 129-139). Berlin: Language Science Press.
  • Collins, J. (2012). The evolution of the Greenbergian word order correlations. In T. C. Scott-Phillips, M. Tamariz, E. A. Cartmill, & J. R. Hurford (Eds.), The evolution of language. Proceedings of the 9th International Conference (EVOLANG9) (pp. 72-79). Singapore: World Scientific.
  • Colzato, L. S., Zech, H., Hommel, B., Verdonschot, R. G., Van den Wildenberg, W. P. M., & Hsieh, S. (2012). Loving-kindness brings loving-kindness: The impact of Buddhism on cognitive self-other integration. Psychonomic Bulletin & Review, 19(3), 541-545. doi:10.3758/s13423-012-0241-y.

    Abstract

    Common wisdom has it that Buddhism enhances compassion and self-other integration. We put this assumption to empirical test by comparing practicing Taiwanese Buddhists with well-matched atheists. Buddhists showed more evidence of self-other integration in the social Simon task, which assesses the degree to which people co-represent the actions of a coactor. This suggests that self-other integration and task co-representation vary as a function of religious practice.
  • Connell, L., Cai, Z. G., & Holler, J. (2012). Do you see what I'm singing? Visuospatial movement biases pitch perception. In N. Miyake, D. Peebles, & R. P. Cooper (Eds.), Proceedings of the 34th Annual Meeting of the Cognitive Science Society (CogSci 2012) (pp. 252-257). Austin, TX: Cognitive Science Society.

    Abstract

    The nature of the connection between musical and spatial processing is controversial. While pitch may be described in spatial terms such as “high” or “low”, it is unclear whether pitch and space are associated but separate dimensions or whether they share representational and processing resources. In the present study, we asked participants to judge whether a target vocal note was the same as (or different from) a preceding cue note. Importantly, target trials were presented as video clips where a singer sometimes gestured upward or downward while singing that target note, thus providing an alternative, concurrent source of spatial information. Our results show that pitch discrimination was significantly biased by the spatial movement in gesture. These effects were eliminated by spatial memory load but preserved under verbal memory load conditions. Together, our findings suggest that pitch and space have a shared representation such that the mental representation of pitch is audiospatial in nature.
  • Connine, C. M., Clifton, Jr., C., & Cutler, A. (1987). Effects of lexical stress on phonetic categorization. Phonetica, 44, 133-146.
  • Cooper, N., & Cutler, A. (2004). Perception of non-native phonemes in noise. In S. Kin, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 469-472). Seoul: Sunjijn Printing Co.

    Abstract

    We report an investigation of the perception of American English phonemes by Dutch listeners proficient in English. Listeners identified either the consonant or the vowel in most possible English CV and VC syllables. The syllables were embedded in multispeaker babble at three signal-to-noise ratios (16 dB, 8 dB, and 0 dB). Effects of signal-to-noise ratio on vowel and consonant identification are discussed as a function of syllable position and of relationship to the native phoneme inventory. Comparison of the results with previously reported data from native listeners reveals that noise affected the responding of native and non-native listeners similarly.
  • Coopmans, C. W., De Hoop, H., Kaushik, K., Hagoort, P., & Martin, A. E. (2021). Structure-(in)dependent interpretation of phrases in humans and LSTMs. In Proceedings of the Society for Computation in Linguistics (SCiL 2021) (pp. 459-463).

    Abstract

    In this study, we compared the performance of a long short-term memory (LSTM) neural network to the behavior of human participants on a language task that requires hierarchically structured knowledge. We show that humans interpret ambiguous noun phrases, such as second blue ball, in line with their hierarchical constituent structure. LSTMs, instead, only do so after unambiguous training, and they do not systematically generalize to novel items. Overall, the results of our simulations indicate that a model can behave hierarchically without relying on hierarchical constituent structure.
  • Cortázar-Chinarro, M., Lattenkamp, E. Z., Meyer-Lucht, Y., Luquet, E., Laurila, A., & Höglund, J. (2017). Drift, selection, or migration? Processes affecting genetic differentiation and variation along a latitudinal gradient in an amphibian. BMC Evolutionary Biology, 17: 189. doi:10.1186/s12862-017-1022-z.

    Abstract

    Past events like fluctuations in population size and post-glacial colonization processes may influence the relative importance of genetic drift, migration and selection when determining the present day patterns of genetic variation. We disentangle how drift, selection and migration shape neutral and adaptive genetic variation in 12 moor frog populations along a 1700 km latitudinal gradient. We studied genetic differentiation and variation at a MHC exon II locus and a set of 18 microsatellites.
    Results

    Using outlier analyses, we identified the MHC II exon 2 (corresponding to the β-2 domain) locus and one microsatellite locus (RCO8640) to be subject to diversifying selection, while five microsatellite loci showed signals of stabilizing selection among populations. STRUCTURE and DAPC analyses on the neutral microsatellites assigned populations to a northern and a southern cluster, reflecting two different post-glacial colonization routes found in previous studies. Genetic variation overall was lower in the northern cluster. The signature of selection on MHC exon II was weaker in the northern cluster, possibly as a consequence of smaller and more fragmented populations.
    Conclusion

    Our results show that historical demographic processes combined with selection and drift have led to a complex pattern of differentiation along the gradient where some loci are more divergent among populations than predicted from drift expectations due to diversifying selection, while other loci are more uniform among populations due to stabilizing selection. Importantly, both overall and MHC genetic variation are lower at northern latitudes. Due to lower evolutionary potential, the low genetic variation in northern populations may increase the risk of extinction when confronted with emerging pathogens and climate change.
  • Costa, A., Cutler, A., & Sebastian-Galles, N. (1998). Effects of phoneme repertoire on phoneme decision. Perception and Psychophysics, 60, 1022-1031.

    Abstract

    In three experiments, listeners detected vowel or consonant targets in lists of CV syllables constructed from five vowels and five consonants. Responses were faster in a predictable context (e.g., listening for a vowel target in a list of syllables all beginning with the same consonant) than in an unpredictable context (e.g., listening for a vowel target in a list of syllables beginning with different consonants). In Experiment 1, the listeners’ native language was Dutch, in which vowel and consonant repertoires are similar in size. The difference between predictable and unpredictable contexts was comparable for vowel and consonant targets. In Experiments 2 and 3, the listeners’ native language was Spanish, which has four times as many consonants as vowels; here effects of an unpredictable consonant context on vowel detection were significantly greater than effects of an unpredictable vowel context on consonant detection. This finding suggests that listeners’ processing of phonemes takes into account the constitution of their language’s phonemic repertoire and the implications that this has for contextual variability.
  • Crago, M. B., & Allen, S. E. M. (1998). Acquiring Inuktitut. In O. L. Taylor, & L. Leonard (Eds.), Language Acquisition Across North America: Cross-Cultural And Cross-Linguistic Perspectives (pp. 245-279). San Diego, CA, USA: Singular Publishing Group, Inc.
  • Crago, M. B., Allen, S. E. M., & Pesco, D. (1998). Issues of Complexity in Inuktitut and English Child Directed Speech. In Proceedings of the twenty-ninth Annual Stanford Child Language Research Forum (pp. 37-46).
  • Crago, M. B., Chen, C., Genesee, F., & Allen, S. E. M. (1998). Power and deference. Journal for a Just and Caring Education, 4(1), 78-95.
  • Crasborn, O., & Windhouwer, M. (2012). ISOcat data categories for signed language resources. In E. Efthimiou, G. Kouroupetroglou, & S.-E. Fotinea (Eds.), Gesture and sign language in human-computer interaction and embodied communication: 9th International Gesture Workshop, GW 2011, Athens, Greece, May 25-27, 2011, revised selected papers (pp. 118-128). Heidelberg: Springer.

    Abstract

    As the creation of signed language resources is gaining speed world-wide, the need for standards in this field becomes more acute. This paper discusses the state of the field of signed language resources, their metadata descriptions, and annotations that are typically made. It then describes the role that ISOcat may play in this process and how it can stimulate standardisation without imposing standards. Finally, it makes some initial proposals for the thematic domain ‘sign language’ that was introduced in 2011.
  • Creaghe, N., Quinn, S., & Kidd, E. (2021). Symbolic play provides a fertile context for language development. Infancy, 26(6), 980-1010. doi:10.1111/infa.12422.

    Abstract

    In this study we test the hypothesis that symbolic play represents a fertile context for language acquisition because its inherent ambiguity elicits communicative behaviours that positively influence development. Infant-caregiver dyads (N = 54) participated in two 20-minute play sessions six months apart (Time 1 = 18 months, Time 2 = 24 months). During each session the dyads played with two sets of toys that elicited either symbolic or functional play. The sessions were transcribed and coded for several features of dyadic interaction and speech; infants’ linguistic proficiency was measured via parental report. The two play contexts resulted in different communicative and linguistic behaviour. Notably, the symbolic play condition resulted in significantly greater conversational turn-taking than functional play, and also resulted in the greater use of questions and mimetics in infant-directed speech (IDS). In contrast, caregivers used more imperative clauses in functional play. Regression analyses showed that unique properties of symbolic play (i.e., turn-taking, yes-no questions, mimetics) positively predicted children’s language proficiency, whereas unique features of functional play (i.e., imperatives in IDS) negatively predicted proficiency. The results provide evidence in support of the hypothesis that symbolic play is a fertile context for language development, driven by the need to negotiate meaning.
  • Creemers, A., & Embick, D. (2021). Retrieving stem meanings in opaque words during auditory lexical processing. Language, Cognition and Neuroscience, 36(9), 1107-1122. doi:10.1080/23273798.2021.1909085.

    Abstract

    Recent constituent priming experiments show that Dutch and German prefixed verbs prime their stem, regardless of semantic transparency (e.g. Smolka et al. [(2014). ‘Verstehen’ (‘understand’) primes ‘stehen’ (‘stand’): Morphological structure overrides semantic compositionality in the lexical representation of German complex verbs. Journal of Memory and Language, 72, 16–36. https://doi.org/10.1016/j.jml.2013.12.002]). We examine whether the processing of opaque verbs (e.g. herhalen “repeat”) involves the retrieval of only the whole-word meaning, or whether the lexical-semantic meaning of the stem (halen as “take/get”) is retrieved as well. We report the results of an auditory semantic priming experiment with Dutch prefixed verbs, testing whether the recognition of a semantic associate to the stem (BRENGEN “bring”) is facilitated by the presentation of an opaque prefixed verb. In contrast to prior visual studies, significant facilitation after semantically opaque primes is found, which suggests that the lexical-semantic meaning of stems in opaque words is retrieved. We examine the implications that these findings have for auditory word recognition, and for the way in which different types of meanings are represented and processed.

    Additional information

    supplemental material
  • Cristia, A., Lavechin, M., Scaff, C., Soderstrom, M., Rowland, C. F., Räsänen, O., Bunce, J., & Bergelson, E. (2021). A thorough evaluation of the Language Environment Analysis (LENA) system. Behavior Research Methods, 53, 467-486. doi:10.3758/s13428-020-01393-5.

    Abstract

    In the previous decade, dozens of studies involving thousands of children across several research disciplines have made use of a combined daylong audio-recorder and automated algorithmic analysis called the LENAⓇ system, which aims to assess children’s language environment. While the system’s prevalence in the language acquisition domain is steadily growing, validation efforts remain scattered and cover only some of its key characteristics. Here, we assess the LENAⓇ system’s accuracy across all of its key measures: speaker classification, Child Vocalization Counts (CVC), Conversational Turn Counts (CTC), and Adult Word Counts (AWC). Our assessment is based on manual annotation of clips that have been randomly or periodically sampled out of daylong recordings, collected from (a) populations similar to the system’s original training data (North American English-learning children aged 3-36 months), (b) children learning another dialect of English (UK), and (c) slightly older children growing up in a different linguistic and socio-cultural setting (Tsimane’ learners in rural Bolivia). We find reasonably high accuracy in some measures (AWC, CVC), with more problematic levels of performance in others (CTC, precision of male adults and other children). Statistical analyses do not support the view that performance is worse for children who are dissimilar from the LENAⓇ original training set. Whether LENAⓇ results are accurate enough for a given research, educational, or clinical application depends largely on the specifics at hand. We therefore conclude with a set of recommendations to help researchers make this determination for their goals.
  • Cristia, A., & Peperkamp, S. (2012). Generalizing without encoding specifics: Infants infer phonotactic patterns on sound classes. In A. K. Biller, E. Y. Chung, & A. E. Kimball (Eds.), Proceedings of the 36th Annual Boston University Conference on Language Development (BUCLD 36) (pp. 126-138). Somerville, Mass.: Cascadilla Press.
  • Cristia, A., Seidl, A., Vaughn, C., Schmale, R., Bradlow, A., & Floccia, C. (2012). Linguistic processing of accented speech across the lifespan. Frontiers in Psychology, 3, 479. doi:10.3389/fpsyg.2012.00479.

    Abstract

    In most of the world, people have regular exposure to multiple accents. Therefore, learning to quickly process accented speech is a prerequisite to successful communication. In this paper, we examine work on the perception of accented speech across the lifespan, from early infancy to late adulthood. Unfamiliar accents initially impair linguistic processing by infants, children, younger adults, and older adults, but listeners of all ages come to adapt to accented speech. Emergent research also goes beyond these perceptual abilities, by assessing links with production and the relative contributions of linguistic knowledge and general cognitive skills. We conclude by underlining points of convergence across ages, and the gaps left to face in future work.
  • Cronin, K. A. (2012). Cognitive aspects of prosocial behavior in nonhuman primates. In N. M. Seel (Ed.), Encyclopedia of the sciences of learning. Part 3 (2nd ed., pp. 581-583). Berlin: Springer.

    Abstract

    Definition: Prosocial behavior is any behavior performed by one individual that results in a benefit for another individual. Prosocial motivations, prosocial preferences, or other-regarding preferences refer to the psychological predisposition to behave in the best interest of another individual. A behavior need not be costly to the actor to be considered prosocial; thus the concept is distinct from altruistic behavior, which requires that the actor incurs some cost when providing a benefit to another.
  • Cronin, K. A. (2012). Prosocial behaviour in animals: The influence of social relationships, communication and rewards. Animal Behaviour, 84, 1085-1093. doi:10.1016/j.anbehav.2012.08.009.

    Abstract

    Researchers have struggled to obtain a clear account of the evolution of prosocial behaviour despite a great deal of recent effort. The aim of this review is to take a brief step back from addressing the question of evolutionary origins of prosocial behaviour in order to identify contextual factors that are contributing to variation in the expression of prosocial behaviour and hindering progress towards identifying phylogenetic patterns. Most available data come from the Primate Order, and the choice of contextual factors to consider was informed by theory and practice, including the nature of the relationship between the potential donor and recipient, the communicative behaviour of the recipients, and features of the prosocial task including whether rewards are visible and whether the prosocial choice creates an inequity between actors. Conclusions are drawn about the facilitating or inhibiting impact of each of these factors on the expression of prosocial behaviour, and areas for future research are highlighted. Acknowledging the impact of these contextual features on the expression of prosocial behaviours should stimulate new research into the proximate mechanisms that drive these effects, yield experimental designs that better control for potential influences on prosocial expression, and ultimately allow progress towards reconstructing the evolutionary origins of prosocial behaviour.
  • Cronin, K. A., & Sanchez, A. (2012). Social dynamics and cooperation: The case of nonhuman primates and its implications for human behavior. Advances in complex systems, 15, 1250066. doi:10.1142/S021952591250066X.

    Abstract

    The social factors that influence cooperation have remained largely uninvestigated but have the potential to explain much of the variation in cooperative behavior observed in the natural world. We show here that certain dimensions of the social environment, namely the size of the social group, the degree of social tolerance expressed, the structure of the dominance hierarchy, and the patterns of dispersal, may influence the emergence and stability of cooperation in predictable ways. Furthermore, the social environment experienced by a species over evolutionary time will have shaped their cognition to provide certain strengths and strategies that are beneficial in their species' social world. These cognitive adaptations will in turn impact the likelihood of cooperating in a given social environment. Experiments with one primate species, the cottontop tamarin, illustrate how social dynamics may influence emergence and stability of cooperative behavior in this species. We then take a more general viewpoint and argue that the hypotheses presented here require further experimental work and the addition of quantitative modeling to obtain a better understanding of how social dynamics influence the emergence and stability of cooperative behavior in complex systems. We conclude by pointing out subsequent specific directions for models and experiments that will allow relevant advances in the understanding of the emergence of cooperation.
  • Cuellar-Partida, G., Tung, J. Y., Eriksson, N., Albrecht, E., Aliev, F., Andreassen, O. A., Barroso, I., Beckmann, J. S., Boks, M. P., Boomsma, D. I., Boyd, H. A., Breteler, M. M. B., Campbell, H., Chasman, D. I., Cherkas, L. F., Davies, G., De Geus, E. J. C., Deary, I. J., Deloukas, P., Dick, D. M., Duffy, D. L., Eriksson, J. G., Esko, T., Feenstra, B., Geller, F., Gieger, C., Giegling, I., Gordon, S. D., Han, J., Hansen, T. F., Hartmann, A. M., Hayward, C., Heikkilä, K., Hicks, A. A., Hirschhorn, J. N., Hottenga, J.-J., Huffman, J. E., Hwang, L.-D., Ikram, M. A., Kaprio, J., Kemp, J. P., Khaw, K.-T., Klopp, N., Konte, B., Kutalik, Z., Lahti, J., Li, X., Loos, R. J. F., Luciano, M., Magnusson, S. H., Mangino, M., Marques-Vidal, P., Martin, N. G., McArdle, W. L., McCarthy, M. I., Medina-Gomez, C., Melbye, M., Melville, S. A., Metspalu, A., Milani, L., Mooser, V., Nelis, M., Nyholt, D. R., O'Connell, K. S., Ophoff, R. A., Palmer, C., Palotie, A., Palviainen, T., Pare, G., Paternoster, L., Peltonen, L., Penninx, B. W. J. H., Polasek, O., Pramstaller, P. P., Prokopenko, I., Raikkonen, K., Ripatti, S., Rivadeneira, F., Rudan, I., Rujescu, D., Smit, J. H., Smith, G. D., Smoller, J. W., Soranzo, N., Spector, T. D., St Pourcain, B., Starr, J. M., Stefánsson, H., Steinberg, S., Teder-Laving, M., Thorleifsson, G., Stefansson, K., Timpson, N. J., Uitterlinden, A. G., Van Duijn, C. M., Van Rooij, F. J. A., Vink, J. M., Vollenweider, P., Vuoksimaa, E., Waeber, G., Wareham, N. J., Warrington, N., Waterworth, D., Werge, T., Wichmann, H.-E., Widen, E., Willemsen, G., Wright, A. F., Wright, M. J., Xu, M., Zhao, J. H., Kraft, P., Hinds, D. A., Lindgren, C. M., Magi, R., Neale, B. M., Evans, D. M., & Medland, S. E. (2021). Genome-wide association study identifies 48 common genetic variants associated with handedness. Nature Human Behaviour, 5, 59-70. doi:10.1038/s41562-020-00956-y.

    Abstract

    Handedness has been extensively studied because of its relationship with language and the over-representation of left-handers in some neurodevelopmental disorders. Using data from the UK Biobank, 23andMe and the International Handedness Consortium, we conducted a genome-wide association meta-analysis of handedness (N = 1,766,671). We found 41 loci associated (P < 5 × 10⁻⁸) with left-handedness and 7 associated with ambidexterity. Tissue-enrichment analysis implicated the CNS in the aetiology of handedness. Pathways including regulation of microtubules and brain morphology were also highlighted. We found suggestive positive genetic correlations between left-handedness and neuropsychiatric traits, including schizophrenia and bipolar disorder. Furthermore, the genetic correlation between left-handedness and ambidexterity is low (rG = 0.26), which implies that these traits are largely influenced by different genetic mechanisms. Our findings suggest that handedness is highly polygenic and that the genetic variants that predispose to left-handedness may underlie part of the association with some psychiatric disorders.

    Additional information

    supplementary tables
  • Cutfield, S. (2012). Demonstratives in Dalabon: A language of southwestern Arnhem Land. PhD Thesis, Monash University, Melbourne.

    Abstract

    This study is a comprehensive description of the nominal demonstratives in Dalabon, a severely endangered Gunwinyguan non-Pama-Nyungan language of southwestern Arnhem Land, northern Australia. Demonstratives are attested in the basic vocabulary of every language, yet remain heretofore underdescribed in Australian languages. Traditional definitions of demonstratives as primarily making spatial reference have recently evolved at a great pace, with close analyses of demonstratives-in-use revealing that their use in spatial reference, in narrative discourse, and in interaction is significantly more complex than previously assumed, and that definitions of demonstrative forms are best developed after consideration of their use across these contexts. The present study reinforces findings of complexity in demonstrative use, and the significance of a multidimensional characterization of demonstrative forms. This study is therefore a contribution to the description of Dalabon, to the analysis of demonstratives in Australian languages, and to the theory and typology of demonstratives cross-linguistically. In this study, I present a multi-dimensional analysis of Dalabon demonstratives, using a variety of theoretical frameworks and research tools including descriptive linguistics, lexical-functional grammar, discourse analysis, gesture studies and pragmatics. Using data from personal narratives, improvised interactions and elicitation sessions to investigate the demonstratives, this study takes into account their morphosyntactic distribution, uses in the speech situation, interactional factors, discourse phenomena, concurrent gesture, and uses in personal narratives. I conclude with a unified account of the intensional and extensional semantics of each form surveyed. The Dalabon demonstrative paradigm divides into two types, those which are spatially-specific and those which are non-spatial. 
The spatially-specific demonstratives nunda ‘this (in the here-space)’ and djakih ‘that (in the there-space)’ are shown not to encode the location of the referent per se, rather its relative position to dynamic physical and social elements of the speech situation such as the speaker’s engagement area and here-space. Both forms are also used as spatial adverbs to mean ‘here’ and ‘there’ respectively, while only nunda is also used as a temporal adverb ‘now, today’. The spatially-specific demonstratives are limited to situational use in narratives. The non-spatial demonstratives kanh/kanunh ‘that (identifiable)’ and nunh ‘that (unfamiliar, contrastive)’ are used in both the speech situation and personal narratives to index referents as ‘identifiable’ or ‘unfamiliar’ respectively. Their use in the speech situation can conversationally implicate that the referent is distal. The non-spatial demonstratives display the greatest diversity of use in narratives, each specializing for certain uses, yet their wide distribution across discourse usage types can be described on account of their intensional semantics. The findings of greatest typological interest in this study are that speakers’ choice of demonstrative in the speech situation is influenced by multiple simultaneous deictic parameters (including gesture); that oppositions in the Dalabon demonstrative paradigm are not equal, nor exclusively semantic; that the form nunh ‘that (unfamiliar, contrastive)’ is used to index a referent as somewhat inaccessible or unexpected; that the ‘recognitional’ form kanh/kanunh is instead described as ‘identifiable’; and that speakers use demonstratives to index emotional deixis to a referent, or to their addressee.
  • Cutfield, S. (2012). Foreword. Australian Journal of Linguistics, 32(4), 457-458.
  • Cutfield, S. (2012). Principles of Dalabon plant and animal names and classification. In D. Bordulk, N. Dalak, M. Tukumba, L. Bennett, R. Bordro Tingey, M. Katherine, S. Cutfield, M. Pamkal, & G. Wightman (Eds.), Dalabon plants and animals: Aboriginal biocultural knowledge from Southern Arnhem Land, North Australia (pp. 11-12). Palmerston, NT, Australia: Department of Land and Resource Management, Northern Territory.
  • Cutler, A., Norris, D., & Sebastián-Gallés, N. (2004). Phonemic repertoire and similarity within the vocabulary. In S. Kin, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 65-68). Seoul: Sunjijn Printing Co.

    Abstract

    Language-specific differences in the size and distribution of the phonemic repertoire can have implications for the task facing listeners in recognising spoken words. A language with more phonemes will allow shorter words and reduced embedding of short words within longer ones, decreasing the potential for spurious lexical competitors to be activated by speech signals. We demonstrate that this is the case via comparative analyses of the vocabularies of English and Spanish. A language which uses suprasegmental as well as segmental contrasts, however, can substantially reduce the extent of spurious embedding.
  • Cutler, A. (2004). Segmentation of spoken language by normal adult listeners. In R. Kent (Ed.), MIT encyclopedia of communication sciences and disorders (pp. 392-395). Cambridge, MA: MIT Press.
  • Cutler, A., Weber, A., Smits, R., & Cooper, N. (2004). Patterns of English phoneme confusions by native and non-native listeners. Journal of the Acoustical Society of America, 116(6), 3668-3678. doi:10.1121/1.1810292.

    Abstract

    Native American English and non-native (Dutch) listeners identified either the consonant or the vowel in all possible American English CV and VC syllables. The syllables were embedded in multispeaker babble at three signal-to-noise ratios (0, 8, and 16 dB). The phoneme identification performance of the non-native listeners was less accurate than that of the native listeners. All listeners were adversely affected by noise. With these isolated syllables, initial segments were harder to identify than final segments. Crucially, the effects of language background and noise did not interact; the performance asymmetry between the native and non-native groups was not significantly different across signal-to-noise ratios. It is concluded that the frequently reported disproportionate difficulty of non-native listening under disadvantageous conditions is not due to a disproportionate increase in phoneme misidentifications.
  • Cutler, A. (2004). On spoken-word recognition in a second language. Newsletter, American Association of Teachers of Slavic and East European Languages, 47, 15-15.
  • Cutler, A., & Henton, C. G. (2004). There's many a slip 'twixt the cup and the lip. In H. Quené, & V. Van Heuven (Eds.), On speech and Language: Studies for Sieb G. Nooteboom (pp. 37-45). Utrecht: Netherlands Graduate School of Linguistics.

    Abstract

    The retiring academic may look back upon, inter alia, years of conference attendance. Speech error researchers are uniquely fortunate because they can collect data in any situation involving communication; accordingly, the retiring speech error researcher will have collected data at those conferences. We here address the issue of whether error data collected in situations involving conviviality (such as at conferences) is representative of error data in general. Our approach involved a comparison, across three levels of linguistic processing, between a specially constructed Conviviality Sample and the largest existing source of speech error data, the newly available Fromkin Speech Error Database. The results indicate that there are grounds for regarding the data in the Conviviality Sample as a better than average reflection of the true population of all errors committed. These findings encourage us to recommend further data collection in collaboration with like-minded colleagues.
  • Cutler, A. (2004). Twee regels voor academische vorming. In H. Procee (Ed.), Bij die wereld wil ik horen! Zesendertig columns en drie essays over de vorming tot academicus. (pp. 42-45). Amsterdam: Boom.
  • Cutler, A., & Jesse, A. (2021). Word stress in speech perception. In J. S. Pardo, L. C. Nygaard, & D. B. Pisoni (Eds.), The handbook of speech perception (2nd ed., pp. 239-265). Chichester: Wiley.
  • Cutler, A., Aslin, R. N., Gervain, J., & Nespor, M. (Eds.). (2021). Special issue in honor of Jacques Mehler, Cognition's founding editor [Special Issue]. Cognition, 213.
  • Cutler, A., Aslin, R. N., Gervain, J., & Nespor, M. (2021). Special issue in honor of Jacques Mehler, Cognition's founding editor [preface]. Cognition, 213: 104786. doi:10.1016/j.cognition.2021.104786.
  • Cutler, A., & Otake, T. (1998). Assimilation of place in Japanese and Dutch. In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: vol. 5 (pp. 1751-1754). Sydney: ICSLP.

    Abstract

    Assimilation of place of articulation across a nasal and a following stop consonant is obligatory in Japanese, but not in Dutch. In four experiments the processing of assimilated forms by speakers of Japanese and Dutch was compared, using a task in which listeners blended pseudo-word pairs such as ranga-serupa. An assimilated blend of this pair would be rampa, an unassimilated blend rangpa. Japanese listeners produced significantly more assimilated than unassimilated forms, both with pseudo-Japanese and pseudo-Dutch materials, while Dutch listeners produced significantly more unassimilated than assimilated forms in each materials set. This suggests that Japanese listeners, whose native-language phonology involves obligatory assimilation constraints, represent the assimilated nasals in nasal-stop sequences as unmarked for place of articulation, while Dutch listeners, who are accustomed to hearing unassimilated forms, represent the same nasal segments as marked for place of articulation.
  • Cutler, A., Norris, D., & Williams, J. (1987). A note on the role of phonological expectations in speech segmentation. Journal of Memory and Language, 26, 480-487. doi:10.1016/0749-596X(87)90103-3.

    Abstract

    Word-initial CVC syllables are detected faster in words beginning consonant-vowel-consonant-vowel (CVCV-) than in words beginning consonant-vowel-consonant-consonant (CVCC-). This effect was reported independently by M. Taft and G. Hambly (1985, Journal of Memory and Language, 24, 320–335) and by A. Cutler, J. Mehler, D. Norris, and J. Segui (1986, Journal of Memory and Language, 25, 385–400). Taft and Hambly explained the effect in terms of lexical factors. This explanation cannot account for Cutler et al.'s results, in which the effect also appeared with nonwords and foreign words. Cutler et al. suggested that CVCV-sequences might simply be easier to perceive than CVCC-sequences. The present study confirms this suggestion, and explains it as a reflection of listener expectations constructed on the basis of distributional characteristics of the language.
  • Cutler, A. (1987). Components of prosodic effects in speech recognition. In Proceedings of the Eleventh International Congress of Phonetic Sciences: Vol. 1 (pp. 84-87). Tallinn: Academy of Sciences of the Estonian SSR, Institute of Language and Literature.

    Abstract

    Previous research has shown that listeners use the prosodic structure of utterances in a predictive fashion in sentence comprehension, to direct attention to accented words. Acoustically identical words spliced into sentence contexts are responded to differently if the prosodic structure of the context is varied: when the preceding prosody indicates that the word will be accented, responses are faster than when the preceding prosody is inconsistent with accent occurring on that word. In the present series of experiments speech hybridisation techniques were first used to interchange the timing patterns within pairs of prosodic variants of utterances, independently of the pitch and intensity contours. The time-adjusted utterances could then serve as a basis for the orthogonal manipulation of the three prosodic dimensions of pitch, intensity and rhythm. The overall pattern of results showed that when listeners use prosody to predict accent location, they do not simply rely on a single prosodic dimension, but exploit the interaction between pitch, intensity and rhythm.
  • Cutler, A. (2017). Converging evidence for abstract phonological knowledge in speech processing. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017) (pp. 1447-1448). Austin, TX: Cognitive Science Society.

    Abstract

    The perceptual processing of speech is a constant interplay of multiple competing albeit convergent processes: acoustic input vs. higher-level representations, universal mechanisms vs. language-specific, veridical traces of speech experience vs. construction and activation of abstract representations. The present summary concerns the third of these issues. The ability to generalise across experience and to deal with resulting abstractions is the hallmark of human cognition, visible even in early infancy. In speech processing, abstract representations play a necessary role in both production and perception. New sorts of evidence are now informing our understanding of the breadth of this role.
  • Cutler, A. (2012). Eentaalpsychologie is geen taalpsychologie: Part II. [Valedictory lecture Radboud University]. Nijmegen: Radboud University.

    Abstract

    Address delivered on the occasion of her retirement as Professor of Comparative Psycholinguistics at the Faculty of Social Sciences of Radboud University Nijmegen, Thursday 20 September 2012
  • Cutler, A., & Davis, C. (2012). An orthographic effect in phoneme processing, and its limitations. Frontiers in Psychology, 3, 18. doi:10.3389/fpsyg.2012.00018.

    Abstract

    To examine whether lexically stored knowledge about spelling influences phoneme evaluation, we conducted three experiments with a low-level phonetic judgement task: phoneme goodness rating. In each experiment, listeners heard phonetic tokens varying along a continuum centred on /s/, occurring finally in isolated word or nonword tokens. An effect of spelling appeared in Experiment 1: Native English speakers’ goodness ratings for the best /s/ tokens were significantly higher in words spelled with S (e.g., bless) than in words spelled with C (e.g., voice). No such difference appeared when nonnative speakers rated the same materials in Experiment 2, indicating that the difference could not be due to acoustic characteristics of the S- versus C-words. In Experiment 3, nonwords with lexical neighbours consistently spelled with S (e.g., pless) versus with C (e.g., floice) failed to elicit orthographic neighbourhood effects; no significant difference appeared in native English speakers’ ratings for the S-consistent versus the C-consistent sets. Obligatory influence of lexical knowledge on phonemic processing would have predicted such neighbourhood effects; the findings are thus better accommodated by models in which phonemic decisions draw strategically upon lexical information.
  • Ip, M. H. K., & Cutler, A. (2017). Intonation facilitates prediction of focus even in the presence of lexical tones. In Proceedings of Interspeech 2017 (pp. 1218-1222). doi:10.21437/Interspeech.2017-264.

    Abstract

    In English and Dutch, listeners entrain to prosodic contours to predict where focus will fall in an utterance. However, is this strategy universally available, even in languages with different phonological systems? In a phoneme detection experiment, we examined whether prosodic entrainment is also found in Mandarin Chinese, a tone language, where in principle the use of pitch for lexical identity may take precedence over the use of pitch cues to salience. Consistent with the results from Germanic languages, response times were facilitated when preceding intonation predicted accent on the target-bearing word. Acoustic analyses revealed greater F0 range in the preceding intonation of the predicted-accent sentences. These findings have implications for how universal and language-specific mechanisms interact in the processing of salience.
  • Cutler, A., Mister, E., Norris, D., & Sebastián-Gallés, N. (2004). La perception de la parole en espagnol: Un cas particulier? In L. Ferrand, & J. Grainger (Eds.), Psycholinguistique cognitive: Essais en l'honneur de Juan Segui (pp. 57-74). Brussels: De Boeck.
  • Cutler, A. (1998). How listeners find the right words. In Proceedings of the Sixteenth International Congress on Acoustics: Vol. 2 (pp. 1377-1380). Melville, NY: Acoustical Society of America.

    Abstract

    Languages contain tens of thousands of words, but these are constructed from a tiny handful of phonetic elements. Consequently, words resemble one another, or can be embedded within one another (a coup stick snot with standing). The process of spoken-word recognition by human listeners involves activation of multiple word candidates consistent with the input, and direct competition between activated candidate words. Further, human listeners are sensitive, at an early, prelexical, stage of speech processing, to constraints on what could potentially be a word of the language.
  • Cutler, A., Norris, D., & McQueen, J. M. (1996). Lexical access in continuous speech: Language-specific realisations of a universal model. In T. Otake, & A. Cutler (Eds.), Phonological structure and language processing: Cross-linguistic studies (pp. 227-242). Berlin: Mouton de Gruyter.
  • Cutler, A. (1976). High-stress words are easier to perceive than low-stress words, even when they are equally stressed. Texas Linguistic Forum, 2, 53-57.
  • Cutler, A. (2012). Native listening: Language experience and the recognition of spoken words. Cambridge, MA: MIT Press.

    Abstract

    Understanding speech in our native tongue seems natural and effortless; listening to speech in a nonnative language is a different experience. In this book, Anne Cutler argues that listening to speech is a process of native listening because so much of it is exquisitely tailored to the requirements of the native language. Her cross-linguistic study (drawing on experimental work in languages that range from English and Dutch to Chinese and Japanese) documents what is universal and what is language specific in the way we listen to spoken language. Cutler describes the formidable range of mental tasks we carry out, all at once, with astonishing speed and accuracy, when we listen. These include evaluating probabilities arising from the structure of the native vocabulary, tracking information to locate the boundaries between words, paying attention to the way the words are pronounced, and assessing not only the sounds of speech but prosodic information that spans sequences of sounds. She describes infant speech perception, the consequences of language-specific specialization for listening to other languages, the flexibility and adaptability of listening (to our native languages), and how language-specificity and universality fit together in our language processing system. Drawing on her four decades of work as a psycholinguist, Cutler documents the recent growth in our knowledge about how spoken-word recognition works and the role of language structure in this process. Her book is a significant contribution to a vibrant and rapidly developing field.
  • Cutler, A. (2012). Native listening: The flexibility dimension. Dutch Journal of Applied Linguistics, 1(2), 169-187.

    Abstract

    The way we listen to spoken language is tailored to the specific benefit of native-language speech input. Listening to speech in non-native languages can be significantly hindered by this native bias. Is it possible to determine the degree to which a listener is listening in a native-like manner? Promising indications of how this question may be tackled are provided by new research findings concerning the great flexibility that characterises listening to the L1, in online adjustment of phonetic category boundaries for adaptation across talkers, and in modulation of lexical dynamics for adjustment across listening conditions. This flexibility pays off in many dimensions, including listening in noise, adaptation across dialects, and identification of voices. These findings further illuminate the robustness and flexibility of native listening, and potentially point to ways in which we might begin to assess degrees of ‘native-likeness’ in this skill.
  • Cutler, A., Treiman, R., & Van Ooijen, B. (1998). Orthografik inkoncistensy ephekts in foneme detektion? In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: Vol. 6 (pp. 2783-2786). Sydney: ICSLP.

    Abstract

    The phoneme detection task is widely used in spoken word recognition research. Alphabetically literate participants, however, are more used to explicit representations of letters than of phonemes. The present study explored whether phoneme detection is sensitive to how target phonemes are, or may be, orthographically realised. Listeners detected the target sounds [b,m,t,f,s,k] in word-initial position in sequences of isolated English words. Response times were faster to the targets [b,m,t], which have consistent word-initial spelling, than to the targets [f,s,k], which are inconsistently spelled, but only when listeners’ attention was drawn to spelling by the presence in the experiment of many irregularly spelled fillers. Within the inconsistent targets [f,s,k], there was no significant difference between responses to targets in words with majority and minority spellings. We conclude that performance in the phoneme detection task is not necessarily sensitive to orthographic effects, but that salient orthographic manipulation can induce such sensitivity.
  • Cutler, A., Mehler, J., Norris, D., & Segui, J. (1987). Phoneme identification and the lexicon. Cognitive Psychology, 19, 141-177. doi:10.1016/0010-0285(87)90010-7.
  • Cutler, A. (1976). Phoneme-monitoring reaction time as a function of preceding intonation contour. Perception and Psychophysics, 20, 55-60. Retrieved from http://www.psychonomic.org/search/view.cgi?id=18194.

    Abstract

    An acoustically invariant one-word segment occurred in two versions of one syntactic context. In one version, the preceding intonation contour indicated that a stress would fall at the point where this word occurred. In the other version, the preceding contour predicted reduced stress at that point. Reaction time to the initial phoneme of the word was faster in the former case, despite the fact that no acoustic correlates of stress were present. It is concluded that a part of the sentence comprehension process is the prediction of upcoming sentence accents.
  • Cutler, A., & Otake, T. (1996). Phonological structure and its role in language processing. In T. Otake, & A. Cutler (Eds.), Phonological structure and language processing: Cross-linguistic studies (pp. 1-12). Berlin: Mouton de Gruyter.
  • Cutler, A., Otake, T., & Bruggeman, L. (2012). Phonologically determined asymmetries in vocabulary structure across languages. Journal of the Acoustical Society of America, 132(2), EL155-EL160. doi:10.1121/1.4737596.

    Abstract

    Studies of spoken-word recognition have revealed that competition from embedded words differs in strength as a function of where in the carrier word the embedded word is found and have further shown embedding patterns to be skewed such that embeddings in initial position in carriers outnumber embeddings in final position. Lexico-statistical analyses show that this skew is highly attenuated in Japanese, a noninflectional language. Comparison of the extent of the asymmetry in the three Germanic languages English, Dutch, and German allows the source to be traced to a combination of suffixal morphology and vowel reduction in unstressed syllables.
  • Cutler, A. (1998). Prosodic structure and word recognition. In A. D. Friederici (Ed.), Language comprehension: A biological perspective (pp. 41-70). Heidelberg: Springer.
  • Cutler, A. (1996). Prosody and the word boundary problem. In J. L. Morgan, & K. Demuth (Eds.), Signal to syntax: Bootstrapping from speech to grammar in early acquisition (pp. 87-99). Mahwah, NJ: Erlbaum.
  • Cutler, A. (1996). The comparative study of spoken-language processing. In H. T. Bunnell (Ed.), Proceedings of the Fourth International Conference on Spoken Language Processing: Vol. 1 (pp. 1). New York: Institute of Electrical and Electronics Engineers.

    Abstract

    Psycholinguists are saddled with a paradox. Their aim is to construct a model of human language processing, which will hold equally well for the processing of any language, but this aim cannot be achieved just by doing experiments in any language. They have to compare processing of many languages, and actively search for effects which are specific to a single language, even though a model which is itself specific to a single language is really the last thing they want.
  • Cutler, A. (1987). Speaking for listening. In A. Allport, D. MacKay, W. Prinz, & E. Scheerer (Eds.), Language perception and production: Relationships between listening, speaking, reading and writing (pp. 23-40). London: Academic Press.

    Abstract

    Speech production is constrained at all levels by the demands of speech perception. The speaker's primary aim is successful communication, and to this end semantic, syntactic and lexical choices are directed by the needs of the listener. Even at the articulatory level, some aspects of production appear to be perceptually constrained, for example the blocking of phonological distortions under certain conditions. An apparent exception to this pattern is word boundary information, which ought to be extremely useful to listeners, but which is not reliably coded in speech. It is argued that the solution to this apparent problem lies in rethinking the concept of the boundary of the lexical access unit. Speech rhythm provides clear information about the location of stressed syllables, and listeners do make use of this information. If stressed syllables can serve as the determinants of word lexical access codes, then once again speakers are providing precisely the necessary form of speech information to facilitate perception.
  • Cutler, A., Van Ooijen, B., Norris, D., & Sanchez-Casas, R. (1996). Speeded detection of vowels: A cross-linguistic study. Perception and Psychophysics, 58, 807-822. Retrieved from http://www.psychonomic.org/search/view.cgi?id=430.

    Abstract

    In four experiments, listeners’ response times to detect vowel targets in spoken input were measured. The first three experiments were conducted in English. In two, one using real words and the other, nonwords, detection accuracy was low, targets in initial syllables were detected more slowly than targets in final syllables, and both response time and missed-response rate were inversely correlated with vowel duration. In a third experiment, the speech context for some subjects included all English vowels, while for others, only five relatively distinct vowels occurred. This manipulation had essentially no effect, and the same response pattern was again observed. A fourth experiment, conducted in Spanish, replicated the results in the first three experiments, except that miss rate was here unrelated to vowel duration. We propose that listeners’ responses to vowel targets in naturally spoken input are effectively cautious, reflecting realistic appreciation of vowel variability in natural context.
  • Cutler, A., Butterfield, S., & Williams, J. (1987). The perceptual integrity of syllabic onsets. Journal of Memory and Language, 26, 406-418. doi:10.1016/0749-596X(87)90099-4.
  • Cutler, A., & Carter, D. (1987). The predominance of strong initial syllables in the English vocabulary. Computer Speech and Language, 2, 133-142. doi:10.1016/0885-2308(87)90004-0.

    Abstract

    Studies of human speech processing have provided evidence for a segmentation strategy in the perception of continuous speech, whereby a word boundary is postulated, and a lexical access procedure initiated, at each metrically strong syllable. The likely success of this strategy was here estimated against the characteristics of the English vocabulary. Two computerized dictionaries were found to list approximately three times as many words beginning with strong syllables (i.e. syllables containing a full vowel) as beginning with weak syllables (i.e. syllables containing a reduced vowel). Consideration of frequency of lexical word occurrence reveals that words beginning with strong syllables occur on average more often than words beginning with weak syllables. Together, these findings motivate an estimate for everyday speech recognition that approximately 85% of lexical words (i.e. excluding function words) will begin with strong syllables. This estimate was tested against a corpus of 190 000 words of spontaneous British English conversation. In this corpus, 90% of lexical words were found to begin with strong syllables. This suggests that a strategy of postulating word boundaries at the onset of strong syllables would have a high success rate in that few actual lexical word onsets would be missed.
  • Cutler, A., & Otake, T. (1996). The processing of word prosody in Japanese. In P. McCormack, & A. Russell (Eds.), Proceedings of the 6th Australian International Conference on Speech Science and Technology (pp. 599-604). Canberra: Australian Speech Science and Technology Association.
  • Cutler, A., & Carter, D. (1987). The prosodic structure of initial syllables in English. In J. Laver, & M. Jack (Eds.), Proceedings of the European Conference on Speech Technology: Vol. 1 (pp. 207-210). Edinburgh: IEE.
  • Cutler, A. (1998). The recognition of spoken words with variable representations. In D. Duez (Ed.), Proceedings of the ESCA Workshop on Sound Patterns of Spontaneous Speech (pp. 83-92). Aix-en-Provence: Université de Aix-en-Provence.
  • Cutler, A. (1987). The task of the speaker and the task of the hearer [Commentary/Sperber & Wilson: Relevance]. Behavioral and Brain Sciences, 10, 715-716.
  • Cychosz, M., Cristia, A., Bergelson, E., Casillas, M., Baudet, G., Warlaumont, A. S., Scaff, C., Yankowitz, L., & Seidl, A. (2021). Vocal development in a large‐scale crosslinguistic corpus. Developmental Science, 24(5): e13090. doi:10.1111/desc.13090.

    Abstract

    This study evaluates whether early vocalizations develop in similar ways in children across diverse cultural contexts. We analyze data from daylong audio recordings of 49 children (1–36 months) from five different language/cultural backgrounds. Citizen scientists annotated these recordings to determine if child vocalizations contained canonical transitions or not (e.g., “ba” vs. “ee”). Results revealed that the proportion of clips reported to contain canonical transitions increased with age. Furthermore, this proportion exceeded 0.15 by around 7 months, replicating and extending previous findings on canonical vocalization development but using data from the natural environments of a culturally and linguistically diverse sample. This work explores how crowdsourcing can be used to annotate corpora, helping establish developmental milestones relevant to multiple languages and cultures. Lower inter‐annotator reliability on the crowdsourcing platform, relative to more traditional in‐lab expert annotators, means that a larger number of unique annotators and/or annotations are required, and that crowdsourcing may not be a suitable method for more fine‐grained annotation decisions. Audio clips used for this project are compiled into a large‐scale infant vocalization corpus that is available for other researchers to use in future work.

    Additional information

    Supporting information
    Audio data
  • Cysouw, M., Dediu, D., & Moran, S. (2012). Comment on “Phonemic Diversity Supports a Serial Founder Effect Model of Language Expansion from Africa”. Science, 335, 657-b. doi:10.1126/science.1208841.

    Abstract

    We show that Atkinson’s (Reports, 15 April 2011, p. 346) intriguing proposal—that global linguistic diversity supports a single language origin in Africa—is an artifact of using suboptimal data, biased methodology, and unjustified assumptions. We criticize his approach using more suitable data, and we additionally provide new results suggesting a more complex scenario for the emergence of global linguistic diversity.
  • Dagklis, A., Ponzoni, M., Govi, S., Cangi, M. G., Pasini, E., Charlotte, F., Vino, A., Doglioni, C., Davi, F., Lossos, I. S., Ntountas, I., Papadaki, T., Dolcetti, R., Ferreri, A. J. M., Stamatopoulos, K., & Ghia, P. (2012). Immunoglobulin gene repertoire in ocular adnexal lymphomas: hints on the nature of the antigenic stimulation. Leukemia, 26, 814-821. doi:10.1038/leu.2011.276.

    Abstract

    Evidence from certain geographical areas links lymphomas of the ocular adnexa marginal zone B-cell lymphomas (OAMZL) with Chlamydophila psittaci (Cp) infection, suggesting that lymphoma development is dependent upon chronic stimulation by persistent infections. Notwithstanding that, the actual immunopathogenetical mechanisms have not yet been elucidated. As in other B-cell lymphomas, insight into this issue, especially with regard to potential selecting ligands, could be provided by analysis of the immunoglobulin (IG) receptors of the malignant clones. To this end, we studied the molecular features of IGs in 44 patients with OAMZL (40% Cp-positive), identifying features suggestive of a pathogenic mechanism of autoreactivity. Herein, we show that lymphoma cells express a distinctive IG repertoire, with electropositive antigen (Ag)-binding sites, reminiscent of autoantibodies (auto-Abs) recognizing DNA. Additionally, five (11%) cases of OAMZL expressed IGs homologous with autoreactive Abs or IGs of patients with chronic lymphocytic leukemia, a disease known for the expression of autoreactive IGs by neoplastic cells. In contrast, no similarity with known anti-Chlamydophila Abs was found. Taken together, these results strongly indicate that OAMZL may originate from B cells selected for their capability to bind Ags and, in particular, auto-Ags. In OAMZL associated with Cp infection, the pathogen likely acts indirectly on the malignant B cells, promoting the development of an inflammatory milieu, where auto-Ags could be exposed and presented, driving proliferation and expansion of self-reactive B cells.
  • Dahan, D., & Tanenhaus, M. K. (2004). Continuous mapping from sound to meaning in spoken-language comprehension: Immediate effects of verb-based thematic constraints. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30(2), 498-513. doi:10.1037/0278-7393.30.2.498.

    Abstract

    The authors used 2 “visual-world” eye-tracking experiments to examine lexical access using Dutch constructions in which the verb did or did not place semantic constraints on its subsequent subject noun phrase. In Experiment 1, fixations to the picture of a cohort competitor (overlapping with the onset of the referent’s name, the subject) did not differ from fixations to a distractor in the constraining-verb condition. In Experiment 2, cross-splicing introduced phonetic information that temporarily biased the input toward the cohort competitor. Fixations to the cohort competitor temporarily increased in both the neutral and constraining conditions. These results favor models in which mapping from the input onto meaning is continuous over models in which contextual effects follow access of an initial form-based competitor set.
  • Dai, B., McQueen, J. M., Hagoort, P., & Kösem, A. (2017). Pure linguistic interference during comprehension of competing speech signals. The Journal of the Acoustical Society of America, 141, EL249-EL254. doi:10.1121/1.4977590.

    Abstract

    Speech-in-speech perception can be challenging because the processing of competing acoustic and linguistic information leads to informational masking. Here, a method is proposed to isolate the linguistic component of informational masking while keeping the distractor's acoustic information unchanged. Participants performed a dichotic listening cocktail-party task before and after training on 4-band noise-vocoded sentences that became intelligible through the training. Distracting noise-vocoded speech interfered more with target speech comprehension after training (i.e., when intelligible) than before training (i.e., when unintelligible) at −3 dB SNR. These findings confirm that linguistic and acoustic information have distinct masking effects during speech-in-speech comprehension.
  • Dalla Bella, S., Farrugia, F., Benoit, C.-E., Begel, V., Verga, L., Harding, E., & Kotz, S. A. (2017). BAASTA: Battery for the Assessment of Auditory Sensorimotor and Timing Abilities. Behavior Research Methods, 49(3), 1128-1145. doi:10.3758/s13428-016-0773-6.

    Abstract

    The Battery for the Assessment of Auditory Sensorimotor and Timing Abilities (BAASTA) is a new tool for the systematic assessment of perceptual and sensorimotor timing skills. It spans a broad range of timing skills aimed at differentiating individual timing profiles. BAASTA consists of sensitive time perception and production tasks. Perceptual tasks include duration discrimination, anisochrony detection (with tones and music), and a version of the Beat Alignment Task. Perceptual thresholds for duration discrimination and anisochrony detection are estimated with a maximum likelihood procedure (MLP) algorithm. Production tasks use finger tapping and include unpaced and paced tapping (with tones and music), synchronization-continuation, and adaptive tapping to a sequence with a tempo change. BAASTA was tested in a proof-of-concept study with 20 non-musicians (Experiment 1). To validate the results of the MLP procedure, which is less widespread than standard staircase methods, three perceptual tasks of the battery (duration discrimination and anisochrony detection with tones and with music) were further tested in a second group of non-musicians using 2 down / 1 up and 3 down / 1 up staircase paradigms (n = 24) (Experiment 2). The results show that the timing profiles provided by BAASTA allow the detection of cases of timing/rhythm disorders. In addition, perceptual thresholds yielded by the MLP algorithm, although generally comparable to the results provided by standard staircase procedures, tend to be slightly lower. In sum, BAASTA provides a comprehensive battery to test perceptual and sensorimotor timing skills and to detect timing/rhythm deficits.
  • Dalli, A., Tablan, V., Bontcheva, K., Wilks, Y., Broeder, D., Brugman, H., & Wittenburg, P. (2004). Web services architecture for language resources. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC2004) (pp. 365-368). Paris: ELRA - European Language Resources Association.
