Publications

  • Chen, A. (2011). The developmental path to phonological focus-marking in Dutch. In S. Frota, G. Elordieta, & P. Prieto (Eds.), Prosodic categories: Production, perception and comprehension (pp. 93-109). Dordrecht: Springer.

    Abstract

    This paper gives an overview of recent studies on the use of phonological cues (accent placement and choice of accent type) to mark focus in Dutch-speaking children aged between 1;9 and 8;10. It is argued that learning to use phonological cues to mark focus is a gradual process. In the light of the findings in these studies, a first proposal is put forward on the developmental path to adult-like phonological focus-marking in Dutch.
  • Chen, A., Kager, R., & Wong, P. (2014). Rises and falls in Dutch and Mandarin Chinese. In C. Gussenhoven, Y. Chen, & D. Dediu (Eds.), Proceedings of the 4th International Symposium on Tonal Aspects of Language (pp. 83-86).

    Abstract

    Despite the different functions of pitch in tone and non-tone languages, rises and falls are common pitch patterns across languages. In the current study, we ask how the phonetic realisation of rises and falls differs between languages. Chinese and Dutch speakers participated in a production experiment. We used contexts composed for conveying specific communicative purposes to elicit rises and falls. We measured both tonal alignment and tonal scaling for both patterns. For the alignment measurements, we found language-specific patterns for the rises, but not for the falls. For rises, both peak and valley were aligned later among Chinese speakers than among Dutch speakers. For all the scaling measurements (maximum pitch, minimum pitch, and pitch range), no language-specific patterns were found for either the rises or the falls.
  • Chen, A. (2011). What’s in a rise: Evidence for an off-ramp analysis of Dutch intonation. In W.-S. Lee, & E. Zee (Eds.), Proceedings of the 17th International Congress of Phonetic Sciences 2011 [ICPhS XVII] (pp. 448-451). Hong Kong: Department of Chinese, Translation and Linguistics, City University of Hong Kong.

    Abstract

    Pitch accents are analysed differently in an on-ramp analysis (i.e. ToBI) and an off-ramp analysis (e.g. Transcription of Dutch intonation - ToDI), two competing approaches in the Autosegmental-Metrical tradition. A case in point is the pre-final high rise. A pre-final rise is analysed as H* in ToBI but is phonologically ambiguous between H* and H*L (a (rise-)fall) in ToDI. This is because in ToDI the L tone of a pre-final H*L can be realised in the following unaccented words, so both H* and H*L can show up as a high rise in the accented word. To find out whether there is a two-way phonological contrast in pre-final high rises in Dutch, we examined the distribution of phonologically ambiguous high rises (H*(L)) and their phonetic realisation in different information-structural conditions (topic vs. focus), compared to phonologically unambiguous H* and H*L. Results showed that there is indeed an H*L vs. H* contrast in pre-final high rises in Dutch and that H*L is realised as H*(L) when sonorant material is limited in the accented word. These findings provide new evidence for an off-ramp analysis of Dutch intonation and have far-reaching implications for the analysis of intonation across languages.
  • Chen, A. (2011). Tuning information packaging: Intonational realization of topic and focus in child Dutch. Journal of Child Language, 38, 1055-1083. doi:10.1017/S0305000910000541.

    Abstract

    This study examined how four- to five-year-olds and seven- to eight-year-olds used intonation (accent placement and accent type) to encode topic and focus in Dutch. Naturally spoken declarative sentences with either sentence-initial topic and sentence-final focus or sentence-initial focus and sentence-final topic were elicited via a picture-matching game. Results showed that the four- to five-year-olds were adult-like in topic-marking, but were not yet fully adult-like in focus-marking, in particular, in the use of accent type in sentence-final focus (i.e. showing no preference for H*L). Between age five and seven, the use of accent type was further developed. In contrast to the four- to five-year-olds, the seven- to eight-year-olds showed a preference for H*L in sentence-final focus. Furthermore, they used accent type to distinguish sentence-initial focus from sentence-initial topic in addition to phonetic cues.
  • Cheung, C.-Y., Yakpo, K., & Coupé, C. (2022). A computational simulation of the genesis and spread of lexical items in situations of abrupt language contact. In A. Ravignani, R. Asano, D. Valente, F. Ferretti, S. Hartmann, M. Hayashi, Y. Jadoul, M. Martins, Y. Oseki, E. D. Rodrigues, O. Vasileva, & S. Wacewicz (Eds.), The evolution of language: Proceedings of the Joint Conference on Language Evolution (JCoLE) (pp. 115-122). Nijmegen: Joint Conference on Language Evolution (JCoLE).

    Abstract

    The current study presents an agent-based model which simulates the innovation and competition among lexical items in cases of language contact. It is inspired by relatively recent historical cases in which the linguistic ecology and sociohistorical context are highly complex. Pidgin and creole genesis offers an opportunity to obtain linguistic facts, social dynamics, and historical demography in a highly segregated society. This provides a solid ground for researching the interaction of populations with different pre-existing language systems, and how different factors contribute to the genesis of the lexicon of a newly generated mixed language. We take into consideration the population dynamics and structures, as well as a distribution of word frequencies related to language use, in order to study how social factors may affect the developmental trajectory of languages. Focusing on the case of Sranan in Suriname, our study shows that it is possible to account for the composition of its core lexicon in relation to different social groups, contact patterns, and large population movements.
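    To make this kind of mechanism concrete, the following is a minimal sketch of an agent-based lexical-competition model. It is not the authors' implementation: the two groups, single concept, fixed adoption probability, and all names and parameters are illustrative assumptions.

```python
import random
from collections import Counter

def simulate(n_group_a=80, n_group_b=20, steps=20000, adopt_p=0.5, seed=1):
    """Toy lexical competition: each agent holds one word for a concept and
    may adopt the word of a randomly encountered interlocutor."""
    random.seed(seed)
    # Group A agents start with one lexical variant, group B with another.
    agents = ["wordA"] * n_group_a + ["wordB"] * n_group_b
    for _ in range(steps):
        speaker, hearer = random.sample(range(len(agents)), 2)
        # The hearer adopts the speaker's variant with a fixed probability;
        # more frequent variants are heard, and hence adopted, more often.
        if random.random() < adopt_p:
            agents[hearer] = agents[speaker]
    return Counter(agents)

print(simulate())  # group proportions largely determine which variant spreads
```

    Even this stripped-down dynamic illustrates the abstract's central point: demographic proportions and contact patterns, rather than properties of the words themselves, can decide which variants end up in the emerging lexicon.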
  • Cho, T., & McQueen, J. M. (2004). Phonotactics vs. phonetic cues in native and non-native listening: Dutch and Korean listeners' perception of Dutch and English. In S. Kim, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 1301-1304). Seoul: Sunjin Printing Co.

    Abstract

    We investigated how listeners of two unrelated languages, Dutch and Korean, process phonotactically legitimate and illegitimate sounds spoken in Dutch and American English. To Dutch listeners, unreleased word-final stops are phonotactically illegal because word-final stops in Dutch are generally released in isolation, but to Korean listeners, released final stops are illegal because word-final stops are never released in Korean. Two phoneme monitoring experiments showed a phonotactic effect: Dutch listeners detected released stops more rapidly than unreleased stops whereas the reverse was true for Korean listeners. Korean listeners with English stimuli detected released stops more accurately than unreleased stops, however, suggesting that acoustic-phonetic cues associated with released stops improve detection accuracy. We propose that in non-native speech perception, phonotactic legitimacy in the native language speeds up phoneme recognition, the richness of acoustic-phonetic cues improves listening accuracy, and familiarity with the non-native language modulates the relative influence of these two factors.
  • Cho, T. (2004). Prosodically conditioned strengthening and vowel-to-vowel coarticulation in English. Journal of Phonetics, 32(2), 141-176. doi:10.1016/S0095-4470(03)00043-3.

    Abstract

    The goal of this study is to examine how the degree of vowel-to-vowel coarticulation varies as a function of prosodic factors such as nuclear-pitch accent (accented vs. unaccented), level of prosodic boundary (Prosodic Word vs. Intermediate Phrase vs. Intonational Phrase), and position-in-prosodic-domain (initial vs. final). It is hypothesized that vowels in prosodically stronger locations (e.g., in accented syllables and at a higher prosodic boundary) are not only coarticulated less with their neighboring vowels, but they also exert a stronger influence on their neighbors. Measurements of tongue position for English /a i/ over time were obtained with Carstens electromagnetic articulography. Results showed that vowels in prosodically stronger locations are coarticulated less with neighboring vowels, but do not exert a stronger influence on the articulation of neighboring vowels. An examination of the relationship between coarticulation and duration revealed that (a) accent-induced coarticulatory variation cannot be attributed to a duration factor and (b) some of the data with respect to boundary effects may be accounted for by the duration factor. This suggests that to the extent that prosodically conditioned coarticulatory variation is duration-independent, there is no absolute causal relationship from duration to coarticulation. It is proposed that prosodically conditioned V-to-V coarticulatory reduction is another type of strengthening that occurs in prosodically strong locations. The prosodically driven coarticulatory patterning is taken to be part of the phonetic signatures of the hierarchically nested structure of prosody.
  • Cho, T., & McQueen, J. M. (2008). Not all sounds in assimilation environments are perceived equally: Evidence from Korean. Journal of Phonetics, 36, 239-249. doi:10.1016/j.wocn.2007.06.001.

    Abstract

    This study tests whether potential differences in the perceptual robustness of speech sounds influence continuous-speech processes. Two phoneme-monitoring experiments examined place assimilation in Korean. In Experiment 1, Koreans monitored for targets which were either labials (/p,m/) or alveolars (/t,n/), and which were either unassimilated or assimilated to a following /k/ in two-word utterances. Listeners detected unaltered (unassimilated) labials faster and more accurately than assimilated labials; there was no such advantage for unaltered alveolars. In Experiment 2, labial–velar differences were tested using conditions in which /k/ and /p/ were illegally assimilated to a following /t/. Unassimilated sounds were detected faster than illegally assimilated sounds, but this difference tended to be larger for /k/ than for /p/. These place-dependent asymmetries suggest that differences in the perceptual robustness of segments play a role in shaping phonological patterns.
  • Cho, T., & Johnson, E. K. (2004). Acoustic correlates of phrase-internal lexical boundaries in Dutch. In S. Kim, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 1297-1300). Seoul: Sunjin Printing Co.

    Abstract

    The aim of this study was to determine if Dutch speakers reliably signal phrase-internal lexical boundaries, and if so, how. Six speakers recorded 4 pairs of phonemically identical strong-weak-strong (SWS) strings with matching syllable boundaries but mismatching intended word boundaries (e.g. reis # pastei versus reispas # tij, or more broadly C1V1(C)#C2V2(C)C3V3(C) vs. C1V1(C)C2V2(C)#C3V3(C)). An Analysis of Variance revealed 3 acoustic parameters that were significantly greater in S#WS items (C2 DURATION, RIME1 DURATION, C3 BURST AMPLITUDE) and 5 parameters that were significantly greater in the SW#S items (C2 VOT, C3 DURATION, RIME2 DURATION, RIME3 DURATION, and V2 AMPLITUDE). Additionally, center of gravity measurements suggested that the [s] to [t] coarticulation was greater in reis # pa[st]ei than in reispa[s] # [t]ij. Finally, a Logistic Regression Analysis revealed that 3 parameters (RIME1 DURATION, RIME2 DURATION, and C3 DURATION) contributed most reliably to an S#WS versus SW#S classification.
  • Cho, T., & McQueen, J. M. (2011). Perceptual recovery from consonant-cluster simplification using language-specific phonological knowledge. Journal of Psycholinguistic Research, 40, 253-274. doi:10.1007/s10936-011-9168-0.

    Abstract

    Two experiments examined whether perceptual recovery from Korean consonant-cluster simplification is based on language-specific phonological knowledge. In tri-consonantal C1C2C3 sequences such as /lkt/ and /lpt/ in Seoul Korean, either C1 or C2 can be completely deleted. Seoul Koreans monitored for C2 targets (/p/ or /k/, deleted or preserved) in the second word of a two-word phrase with an underlying /l/-C2-/t/ sequence. In Experiment 1 the target-bearing words had contextual lexical-semantic support. Listeners recovered deleted targets as fast and as accurately as preserved targets with both Word and Intonational Phrase (IP) boundaries between the two words. In Experiment 2, contexts were low-pass filtered. Listeners were still able to recover deleted targets as well as preserved targets in IP-boundary contexts, but better with physically-present targets than with deleted targets in Word-boundary contexts. This suggests that the benefit of having target acoustic-phonetic information emerges only when higher-order (contextual and phrase-boundary) information is not available. The strikingly efficient recovery of deleted phonemes with neither acoustic-phonetic cues nor contextual support demonstrates that language-specific phonological knowledge, rather than language-universal perceptual processes which rely on fine-grained phonetic details, is employed when the listener perceives the results of a continuous-speech process in which reduction is phonetically complete.
  • Cho, T. (2022). The Phonetics-Prosody Interface and Prosodic Strengthening in Korean. In S. Cho, & J. Whitman (Eds.), Cambridge handbook of Korean linguistics (pp. 248-293). Cambridge: Cambridge University Press.
  • Choi, J. (2014). Rediscovering a forgotten language. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Choi, J., Broersma, M., & Cutler, A. (2018). Phonetic learning is not enhanced by sequential exposure to more than one language. Linguistic Research, 35(3), 567-581. doi:10.17250/khisli.35.3.201812.006.

    Abstract

    Several studies have documented that international adoptees, who in early years have experienced a change from a language used in their birth country to a new language in an adoptive country, benefit from the limited early exposure to the birth language when relearning that language’s sounds later in life. The adoptees’ relearning advantages have been argued to be conferred by lasting birth-language knowledge obtained from the early exposure. However, it is also plausible to assume that the advantages may arise from adoptees’ superior ability to learn language sounds in general, as a result of their unusual linguistic experience, i.e., exposure to multiple languages in sequence early in life. If this is the case, then the adoptees’ relearning benefits should generalize to previously unheard language sounds, rather than be limited to their birth-language sounds. In the present study, adult Korean adoptees in the Netherlands and matched Dutch-native controls were trained on identifying a Japanese length distinction to which they had never been exposed before. The adoptees and Dutch controls did not differ on any test carried out before, during, or after the training, indicating that observed adoptee advantages for birth-language relearning do not generalize to novel, previously unheard language sounds. The finding thus fails to support the suggestion that birth-language relearning advantages may arise from enhanced ability to learn language sounds in general conferred by early experience in multiple languages. Rather, our finding supports the original contention that such advantages involve memory traces obtained before adoption.
  • Cholin, J. (2004). Syllables in speech production: Effects of syllable preparation and syllable frequency. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.60589.

    Abstract

    The fluent production of speech is a very complex human skill. It requires the coordination of several articulatory subsystems. The instructions that lead articulatory movements to execution are the result of the interplay of speech production levels that operate above the articulatory network. During the process of word-form encoding, the groundwork for the articulatory programs is prepared, which then serve the articulators as basic units. This thesis investigated whether or not syllables form the basis for the articulatory programs and, in particular, whether or not these syllable programs are stored, separate from the store of the lexical word-forms. It is assumed that syllable units are stored in a so-called 'mental syllabary'. The main goal of this thesis was to find evidence of the syllable playing a functionally important role in speech production and for the assumption that syllables are stored units. In a variant of the implicit priming paradigm, it was investigated whether information about the syllabic structure of a target word facilitates the preparation (advance planning) of a to-be-produced utterance. These experiments yielded evidence for the functionally important role of syllables in speech production. In a subsequent series of experiments, it was demonstrated that the production of syllables is sensitive to frequency. Syllable frequency effects provide strong evidence for the notion of a mental syllabary because only stored units are likely to exhibit frequency effects. In a final study, effects of syllable preparation and syllable frequency were investigated in a combined design to disentangle the two effects. The results of this last experiment converged with those reported for the other experiments and added further support to the claim that syllables play a core functional role in speech production and are stored in a mental syllabary.

    Additional information

    full text via Radboud Repository
  • Cholin, J., Schiller, N. O., & Levelt, W. J. M. (2004). The preparation of syllables in speech production. Journal of Memory and Language, 50(1), 47-61. doi:10.1016/j.jml.2003.08.003.

    Abstract

    Models of speech production assume that syllables play a functional role in the process of word-form encoding in speech production. In this study, we investigate this claim and specifically provide evidence about the level at which syllables come into play. We report two studies using an odd-man-out variant of the implicit priming paradigm to examine the role of the syllable during the process of word formation. Our results show that this modified version of the implicit priming paradigm can trace the emergence of syllabic structure during spoken word generation. Comparing these results to prior syllable priming studies, we conclude that syllables emerge at the interface between phonological and phonetic encoding. The results are discussed in terms of the WEAVER++ model of lexical access.
  • Cholin, J., Dell, G. S., & Levelt, W. J. M. (2011). Planning and articulation in incremental word production: Syllable-frequency effects in English. Journal of Experimental Psychology: Learning, Memory, and Cognition, 37, 109-122. doi:10.1037/a0021322.

    Abstract

    We investigated the role of syllables during speech planning in English by measuring syllable-frequency effects. So far, syllable-frequency effects in English have not been reported. English has poorly defined syllable boundaries, and thus the syllable might not function as a prominent unit in English speech production. Speakers produced either monosyllabic (Experiment 1) or disyllabic (Experiments 2–4) pseudowords as quickly as possible in response to symbolic cues. Monosyllabic targets consisted of either high- or low-frequency syllables, whereas disyllabic items contained either a 1st or 2nd syllable that was frequency-manipulated. Significant syllable-frequency effects were found in all experiments. Whereas previous findings for disyllables in Dutch and Spanish—languages with relatively clear syllable boundaries—showed effects of a frequency manipulation on 1st but not 2nd syllables, in our study English speakers were sensitive to the frequency of both syllables. We interpret this sensitivity as an indication that the production of English has more extensive planning scopes at the interface of phonetic encoding and articulation.
  • Chormai, P., Pu, Y., Hu, H., Fisher, S. E., Francks, C., & Kong, X. (2022). Machine learning of large-scale multimodal brain imaging data reveals neural correlates of hand preference. NeuroImage, 262: 119534. doi:10.1016/j.neuroimage.2022.119534.

    Abstract

    Lateralization is a fundamental characteristic of many behaviors and the organization of the brain, and atypical lateralization has been suggested to be linked to various brain-related disorders such as autism and schizophrenia. Right-handedness is one of the most prominent markers of human behavioural lateralization, yet its neurobiological basis remains to be determined. Here, we present a large-scale analysis of handedness, as measured by self-reported direction of hand preference, and its variability related to brain structural and functional organization in the UK Biobank (N = 36,024). A multivariate machine learning approach with multiple modalities of brain imaging data was adopted, to reveal how well brain imaging features could predict an individual's handedness (i.e., right-handedness vs. non-right-handedness) and further identify the top brain signatures that contributed to the prediction. Overall, the results showed a good prediction performance, with an area under the receiver operating characteristic curve (AUROC) score of up to 0.72, driven largely by resting-state functional measures. Virtual lesion analysis and large-scale decoding analysis suggested that the brain networks with the highest importance in the prediction showed functional relevance to hand movement and several higher-level cognitive functions including language, arithmetic, and social interaction. Genetic analyses of contributions of common DNA polymorphisms to the imaging-derived handedness prediction score showed a significant heritability (h² = 7.55%, p < 0.001) that was similar to and slightly higher than that for the behavioural measure itself (h² = 6.74%, p < 0.001). The genetic correlation between the two was high (rg = 0.71), suggesting that the imaging-derived score could be used as a surrogate in genetic studies where the behavioural measure is not available. This large-scale study using multimodal brain imaging and multivariate machine learning has shed new light on the neural correlates of human handedness.

    Additional information

    supplementary material
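    As a rough illustration of the kind of evaluation reported above, the sketch below trains a classifier on stand-in "imaging" features and scores it with cross-validated AUROC. The logistic-regression choice, the random feature matrix, and the class balance are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))   # stand-in multimodal imaging features
y = rng.random(1000) < 0.9        # ~90% right-handers, roughly as in UK Biobank
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"mean cross-validated AUROC: {scores.mean():.2f}")  # ~0.50 for random features
```

    With informative features in place of the random matrix, the same scoring setup would yield an above-chance AUROC such as the 0.72 reported in the paper.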
  • Chu, M., Meyer, A. S., Foulkes, L., & Kita, S. (2014). Individual differences in frequency and saliency of speech-accompanying gestures: The role of cognitive abilities and empathy. Journal of Experimental Psychology: General, 143, 694-709. doi:10.1037/a0033861.

    Abstract

    The present study concerns individual differences in gesture production. We used correlational and multiple regression analyses to examine the relationship between individuals’ cognitive abilities and empathy levels and their gesture frequency and saliency. We chose predictor variables according to experimental evidence of the functions of gesture in speech production and communication. We examined 3 types of gestures: representational gestures, conduit gestures, and palm-revealing gestures. Higher frequency of representational gestures was related to poorer visual and spatial working memory, spatial transformation ability, and conceptualization ability; higher frequency of conduit gestures was related to poorer visual working memory, conceptualization ability, and higher levels of empathy; and higher frequency of palm-revealing gestures was related to higher levels of empathy. The saliency of all gestures was positively related to level of empathy. These results demonstrate that cognitive abilities and empathy levels are related to individual differences in gesture frequency and saliency.
  • Chu, M., & Kita, S. (2011). Microgenesis of gestures during mental rotation tasks recapitulates ontogenesis. In G. Stam, & M. Ishino (Eds.), Integrating gestures: The interdisciplinary nature of gesture (pp. 267-276). Amsterdam: John Benjamins.

    Abstract

    People spontaneously produce gestures when they solve problems or explain their solutions to a problem. In this chapter, we will review and discuss evidence on the role of representational gestures in problem solving. The focus will be on our recent experiments (Chu & Kita, 2008), in which we used Shepard-Metzler-type mental rotation tasks to investigate how spontaneous gestures revealed the development of problem-solving strategy over the course of the experiment and what role gesture played in the development process. We found that when solving novel problems regarding the physical world, adults go through similar symbolic distancing (Werner & Kaplan, 1963) and internalization (Piaget, 1968) processes as those that occur during young children’s cognitive development, and that gesture facilitates such processes.
  • Chu, M., & Kita, S. (2008). Spontaneous gestures during mental rotation tasks: Insights into the microdevelopment of the motor strategy. Journal of Experimental Psychology: General, 137, 706-723. doi:10.1037/a0013157.

    Abstract

    This study investigated the motor strategy involved in mental rotation tasks by examining 2 types of spontaneous gestures (hand–object interaction gestures, representing the agentive hand action on an object, vs. object-movement gestures, representing the movement of an object by itself) and different types of verbal descriptions of rotation. Hand–object interaction gestures were produced earlier than object-movement gestures, the rate of both types of gestures decreased, and gestures became more distant from the stimulus object over trials (Experiments 1 and 3). Furthermore, in the first few trials, object-movement gestures increased, whereas hand–object interaction gestures decreased, and this change of motor strategies was also reflected in the type of verbal description of rotation in the concurrent speech (Experiment 2). This change of motor strategies was hampered when gestures were prohibited (Experiment 4). The authors concluded that the motor strategy becomes less dependent on agentive action on the object, and also becomes internalized over the course of the experiment, and that gesture facilitates the former process. When solving a problem regarding the physical world, adults go through developmental processes similar to internalization and symbolic distancing in young children, albeit within a much shorter time span.
  • Chu, M., & Hagoort, P. (2014). Synchronization of speech and gesture: Evidence for interaction in action. Journal of Experimental Psychology: General, 143(4), 1726-1741. doi:10.1037/a0036281.

    Abstract

    Language and action systems are highly interlinked. A critical piece of evidence is that speech and its accompanying gestures are tightly synchronized. Five experiments were conducted to test 2 hypotheses about the synchronization of speech and gesture. According to the interactive view, there is continuous information exchange between the gesture and speech systems, during both their planning and execution phases. According to the ballistic view, information exchange occurs only during the planning phases of gesture and speech, but the 2 systems become independent once their execution has been initiated. In all experiments, participants were required to point to and/or name a light that had just lit up. Virtual reality and motion tracking technologies were used to disrupt their gesture or speech execution. Participants delayed their speech onset when their gesture was disrupted. They did so even when their gesture was disrupted at its late phase and even when they received only the kinesthetic feedback of their gesture. Also, participants prolonged their gestures when their speech was disrupted. These findings support the interactive view and add new constraints on models of speech and gesture production.
  • Chu, M., & Kita, S. (2011). The nature of gestures’ beneficial role in spatial problem solving. Journal of Experimental Psychology: General, 140, 102-116. doi:10.1037/a0021790.

    Abstract

    Co-thought gestures are hand movements produced in silent, noncommunicative, problem-solving situations. In this study, we investigated whether and how such gestures enhance performance in spatial visualization tasks such as a mental rotation task and a paper folding task. We found that participants gestured more often when they had difficulties solving mental rotation problems (Experiment 1). The gesture-encouraged group solved more mental rotation problems correctly than did the gesture-allowed and gesture-prohibited groups (Experiment 2). Gestures produced by the gesture-encouraged group enhanced performance in the very trials in which they were produced (Experiments 2 & 3). Furthermore, gesture frequency decreased as the participants in the gesture-encouraged group solved more problems (Experiments 2 & 3). In addition, the advantage of the gesture-encouraged group persisted into subsequent spatial visualization problems in which gesturing was prohibited: another mental rotation block (Experiment 2) and a newly introduced paper folding task (Experiment 3). The results indicate that when people have difficulty in solving spatial visualization problems, they spontaneously produce gestures to help them, and gestures can indeed improve performance. As they solve more problems, the spatial computation supported by gestures becomes internalized, and the gesture frequency decreases. The benefit of gestures persists even in subsequent spatial visualization problems in which gesture is prohibited. Moreover, the beneficial effect of gesturing can be generalized to a different spatial visualization task when two tasks require similar spatial transformation processes. We conclude that gestures enhance performance on spatial visualization tasks by improving the internal computation of spatial transformations.
  • Chwilla, D., Hagoort, P., & Brown, C. M. (1998). The mechanism underlying backward priming in a lexical decision task: Spreading activation versus semantic matching. Quarterly Journal of Experimental Psychology, 51A(3), 531-560. doi:10.1080/713755773.

    Abstract

    Koriat (1981) demonstrated that an association from the target to a preceding prime, in the absence of an association from the prime to the target, facilitates lexical decision and referred to this effect as "backward priming". Backward priming is of relevance, because it can provide information about the mechanism underlying semantic priming effects. Following Neely (1991), we distinguish three mechanisms of priming: spreading activation, expectancy, and semantic matching/integration. The goal was to determine which of these mechanisms causes backward priming, by assessing effects of backward priming on a language-relevant ERP component, the N400, and reaction time (RT). Based on previous work, we propose that the N400 priming effect reflects expectancy and semantic matching/integration, but in contrast with RT does not reflect spreading activation. Experiment 1 shows a backward priming effect that is qualitatively similar for the N400 and RT in a lexical decision task. This effect was not modulated by an ISI manipulation. Experiment 2 clarifies that the N400 backward priming effect reflects genuine changes in N400 amplitude and cannot be ascribed to other factors. We will argue that these backward priming effects cannot be due to expectancy but are best accounted for in terms of semantic matching/integration.
  • Clahsen, H., Sonnenstuhl, I., Hadler, M., & Eisenbeiss, S. (2008). Morphological paradigms in language processing and language disorders. Transactions of the Philological Society, 99(2), 247-277. doi:10.1111/1467-968X.00082.

    Abstract

    We present results from two cross‐modal morphological priming experiments investigating regular person and number inflection on finite verbs in German. We found asymmetries in the priming patterns between different affixes that can be predicted from the structure of the paradigm. We also report data from language disorders which indicate that inflectional errors produced by language‐impaired adults and children tend to occur within a given paradigm dimension, rather than randomly across the paradigm. We conclude that morphological paradigms are used by the human language processor and can be systematically affected in language disorders.
  • Clark, N., & Perlman, M. (2014). Breath, vocal, and supralaryngeal flexibility in a human-reared gorilla. In B. De Boer, & T. Verhoef (Eds.), Proceedings of Evolang X, Workshop on Signals, Speech, and Signs (pp. 11-15).

    Abstract

    “Gesture-first” theories dismiss ancestral great apes’ vocalization as a substrate for language evolution based on the claim that extant apes exhibit minimal learning and volitional control of vocalization. Contrary to this claim, we present data of novel learned and voluntarily controlled vocal behaviors produced by a human-fostered gorilla (G. gorilla gorilla). These behaviors demonstrate varying degrees of flexibility in the vocal apparatus (including diaphragm, lungs, larynx, and supralaryngeal articulators), and are predominantly performed in coordination with manual behaviors and gestures. Instead of a gesture-first theory, we suggest that these findings support multimodal theories of language evolution in which vocal and gestural forms are coordinated and supplement one another.
  • Claus, A. (2004). Access management system. Language Archive Newsletter, 1(2), 5.
  • Cleary, R. A., Poliakoff, E., Galpin, A., Dick, J. P., & Holler, J. (2011). An investigation of co-speech gesture production during action description in Parkinson’s disease. Parkinsonism & Related Disorders, 17, 753-756. doi:10.1016/j.parkreldis.2011.08.001.

    Abstract

    Methods
    The present study provides a systematic analysis of co-speech gestures which spontaneously accompany the description of actions in a group of PD patients (N = 23, Hoehn and Yahr Stage III or less) and age-matched healthy controls (N = 22). The analysis considers different co-speech gesture types, using established classification schemes from the field of gesture research. The analysis focuses on the rate of these gestures as well as on their qualitative nature. In doing so, the analysis attempts to overcome several methodological shortcomings of research in this area.
    Results
    Contrary to expectation, gesture rate was not significantly affected in our patient group, with relatively mild PD. This indicates that co-speech gestures could compensate for speech problems. However, while gesture rate seems unaffected, the qualitative precision of gestures representing actions was significantly reduced.
    Conclusions
    This study demonstrates the feasibility of carrying out fine-grained, detailed analyses of gestures in PD and offers insights into an as yet neglected facet of communication in patients with PD. Based on the present findings, an important next step is the closer investigation of the qualitative changes in gesture (including different communicative situations) and an analysis of the heterogeneity in co-speech gesture production in PD.
  • Clough, S., & Hilverman, C. (2018). Hand gestures and how they help children learn. Frontiers for Young Minds, 6: 29. doi:10.3389/frym.2018.00029.

    Abstract

    When we talk, we often make hand movements called gestures at the same time. Although just about everyone gestures when they talk, we usually do not even notice the gestures. Our hand gestures play an important role in helping us learn and remember! When we see other people gesturing when they talk—or when we gesture when we talk ourselves—we are more likely to remember the information being talked about than if gestures were not involved. Our hand gestures can even indicate when we are ready to learn new things! In this article, we explain how gestures can help learning. To investigate this, we studied children learning a new mathematical concept called equivalence. We hope that this article will help you notice when you, your friends and family, and your teachers are gesturing, and that it will help you understand how those gestures can help people learn.
  • Clough, S., Hilverman, C., Brown-Schmidt, S., & Duff, M. C. (2022). Evidence of audience design in amnesia: Adaptation in gesture but not speech. Brain Sciences, 12(8): 1082. doi:10.3390/brainsci12081082.

    Abstract

    Speakers design communication for their audience, providing more information in both speech and gesture when their listener is naive to the topic. We test whether the hippocampal declarative memory system contributes to multimodal audience design. The hippocampus, while traditionally linked to episodic and relational memory, has also been linked to the ability to imagine the mental states of others and use language flexibly. We examined the speech and gesture use of four patients with hippocampal amnesia when describing how to complete everyday tasks (e.g., how to tie a shoe) to an imagined child listener and an adult listener. Although patients with amnesia did not increase their total number of words and instructional steps for the child listener, they did produce representational gestures at significantly higher rates for the imagined child compared to the adult listener. They also gestured at similar frequencies to neurotypical peers, suggesting that hand gesture can be a meaningful communicative resource, even in the case of severe declarative memory impairment. We discuss the contributions of multiple memory systems to multimodal audience design and the potential of gesture to act as a window into the social cognitive processes of individuals with neurologic disorders.
  • Cohen, E. (2011). Broadening the critical perspective on supernatural punishment theories. Religion, Brain & Behavior, 1(1), 70-72. doi:10.1080/2153599X.2011.558709.
  • Cohen, E., Burdett, E., Knight, N., & Barrett, J. (2011). Cross-cultural similarities and differences in person-body reasoning: Experimental evidence from the United Kingdom and Brazilian Amazon. Cognitive Science, 35, 1282-1304. doi:10.1111/j.1551-6709.2011.01172.x.

    Abstract

    We report the results of a cross-cultural investigation of person-body reasoning in the United Kingdom and northern Brazilian Amazon (Marajó Island). The study provides evidence that directly bears upon divergent theoretical claims in cognitive psychology and anthropology, respectively, on the cognitive origins and cross-cultural incidence of mind-body dualism. In a novel reasoning task, we found that participants across the two sample populations parsed a wide range of capacities similarly in terms of the capacities’ perceived anchoring to bodily function. Patterns of reasoning concerning the respective roles of physical and biological properties in sustaining various capacities did vary between sample populations, however. Further, the data challenge prior ad-hoc categorizations in the empirical literature on the developmental origins of and cognitive constraints on psycho-physical reasoning (e.g., in afterlife concepts). We suggest cross-culturally validated categories of “Body Dependent” and “Body Independent” items for future developmental and cross-cultural research in this emerging area.
  • Cohen, E. (2011). “Out with ‘Religion’: A novel framing of the religion debate”. In W. Williams (Ed.), Religion and rights: The Oxford Amnesty Lectures 2008. Manchester: Manchester University Press.
  • Cohen, E., & Barrett, J. L. (2011). In search of "Folk anthropology": The cognitive anthropology of the person. In J. W. Van Huysteen, & E. Wiebe (Eds.), In search of self: Interdisciplinary perspectives on personhood (pp. 104-124). Grand Rapids, MI: Wm. B. Eerdmans Publishing Company.
  • Collins, L. J., Schönfeld, B., & Chen, X. S. (2011). The epigenetics of non-coding RNA. In T. Tollefsbol (Ed.), Handbook of epigenetics: the new molecular and medical genetics (pp. 49-61). London: Academic.

    Abstract

    Non-coding RNAs (ncRNAs) have been implicated in the epigenetic marking of many genes. Short regulatory ncRNAs, including miRNAs, siRNAs, piRNAs and snoRNAs, as well as long ncRNAs such as Xist and Air, are discussed in light of recent research on mechanisms regulating chromatin marking and RNA editing. The topic is expanding rapidly, so we concentrate on examples to highlight the main mechanisms, including simple mechanisms where complementary binding affects methylation or RNA sites. However, other examples, especially those involving the long ncRNAs, highlight very complex regulatory systems with multiple layers of ncRNA control.
  • Cook, A. E., & Meyer, A. S. (2008). Capacity demands of phoneme selection in word production: New evidence from dual-task experiments. Journal of Experimental Psychology: Learning, Memory, and Cognition, 34, 886-899. doi:10.1037/0278-7393.34.4.886.

    Abstract

    Three dual-task experiments investigated the capacity demands of phoneme selection in picture naming. On each trial, participants named a target picture (Task 1) and carried out a tone discrimination task (Task 2). To vary the time required for phoneme selection, the authors combined the targets with phonologically related or unrelated distractor pictures (Experiment 1) or words, which were clearly visible (Experiment 2) or masked (Experiment 3). When pictures or masked words were presented, the tone discrimination and picture naming latencies were shorter in the related condition than in the unrelated condition, which indicates that phoneme selection requires central processing capacity. However, when the distractor words were clearly visible, the facilitatory effect was confined to the picture naming latencies. This pattern arose because the visible related distractor words facilitated phoneme selection but slowed down speech monitoring processes that had to be completed before the response to the tone could be selected.
  • Cooke, M., & Scharenborg, O. (2008). The Interspeech 2008 consonant challenge. In INTERSPEECH 2008 - 9th Annual Conference of the International Speech Communication Association (pp. 1765-1768). ISCA Archive.

    Abstract

    Listeners outperform automatic speech recognition systems at every level, including the very basic level of consonant identification. What is not clear is where the human advantage originates. Does the fault lie in the acoustic representations of speech or in the recognizer architecture, or in a lack of compatibility between the two? Many insights can be gained by carrying out a detailed human-machine comparison. The purpose of the Interspeech 2008 Consonant Challenge is to promote focused comparisons on a task involving intervocalic consonant identification in noise, with all participants using the same training and test data. This paper describes the Challenge, listener results and baseline ASR performance.
  • Cooper, N., & Cutler, A. (2004). Perception of non-native phonemes in noise. In S. Kim, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 469-472). Seoul: Sunjin Printing Co.

    Abstract

    We report an investigation of the perception of American English phonemes by Dutch listeners proficient in English. Listeners identified either the consonant or the vowel in most possible English CV and VC syllables. The syllables were embedded in multispeaker babble at three signal-to-noise ratios (16 dB, 8 dB, and 0 dB). Effects of signal-to-noise ratio on vowel and consonant identification are discussed as a function of syllable position and of relationship to the native phoneme inventory. Comparison of the results with previously reported data from native listeners reveals that noise affected the responding of native and non-native listeners similarly.
  • Cooper, R. P., & Guest, O. (2014). Implementations are not specifications: Specification, replication and experimentation in computational cognitive modeling. Cognitive Systems Research, 27, 42-49. doi:10.1016/j.cogsys.2013.05.001.

    Abstract

    Contemporary methods of computational cognitive modeling have recently been criticized by Addyman and French (2012) on the grounds that they have not kept up with developments in computer technology and human–computer interaction. They present a manifesto for change according to which, it is argued, modelers should devote more effort to making their models accessible, both to non-modelers (with an appropriate easy-to-use user interface) and modelers alike. We agree that models, like data, should be freely available according to the normal standards of science, but caution against confusing implementations with specifications. Models may embody theories, but they generally also include implementation assumptions. Cognitive modeling methodology needs to be sensitive to this. We argue that specification, replication and experimentation are methodological approaches that can address this issue.
  • Coopmans, C. W., De Hoop, H., Kaushik, K., Hagoort, P., & Martin, A. E. (2022). Hierarchy in language interpretation: Evidence from behavioural experiments and computational modelling. Language, Cognition and Neuroscience, 37(4), 420-439. doi:10.1080/23273798.2021.1980595.

    Abstract

    It has long been recognised that phrases and sentences are organised hierarchically, but many computational models of language treat them as sequences of words without computing constituent structure. Against this background, we conducted two experiments which showed that participants interpret ambiguous noun phrases, such as second blue ball, in terms of their abstract hierarchical structure rather than their linear surface order. When a neural network model was tested on this task, it could simulate such “hierarchical” behaviour. However, when we changed the training data such that they were not entirely unambiguous anymore, the model stopped generalising in a human-like way. It did not systematically generalise to novel items, and when it was trained on ambiguous trials, it strongly favoured the linear interpretation. We argue that these models should be endowed with a bias to make generalisations over hierarchical structure in order to be cognitively adequate models of human language.
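    The structural ambiguity at issue can be made concrete with a toy example (my own illustration, not the authors' materials): for "second blue ball", the hierarchical reading selects the second element of the set of blue balls, whereas the linear reading selects the ball that happens to sit in second position.

```python
def second_blue_hierarchical(balls):
    """Hierarchical reading: [second [blue ball]], the 2nd among the blue balls."""
    blue_positions = [i for i, colour in enumerate(balls) if colour == "blue"]
    return blue_positions[1] if len(blue_positions) > 1 else None

def second_blue_linear(balls):
    """Linear reading: the ball that is both second overall and blue."""
    return 1 if len(balls) > 1 and balls[1] == "blue" else None

balls = ["red", "blue", "red", "blue"]
print(second_blue_hierarchical(balls))  # 3: the reading participants favoured
print(second_blue_linear(balls))        # 1: the reading the ambiguously trained model favoured
```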
  • Coopmans, C. W., De Hoop, H., Hagoort, P., & Martin, A. E. (2022). Effects of structure and meaning on cortical tracking of linguistic units in naturalistic speech. Neurobiology of Language, 3(3), 386-412. doi:10.1162/nol_a_00070.

    Abstract

    Recent research has established that cortical activity “tracks” the presentation rate of syntactic phrases in continuous speech, even though phrases are abstract units that do not have direct correlates in the acoustic signal. We investigated whether cortical tracking of phrase structures is modulated by the extent to which these structures compositionally determine meaning. To this end, we recorded electroencephalography (EEG) of 38 native speakers who listened to naturally spoken Dutch stimuli in different conditions, which parametrically modulated the degree to which syntactic structure and lexical semantics determine sentence meaning. Tracking was quantified through mutual information between the EEG data and either the speech envelopes or abstract annotations of syntax, all of which were filtered in the frequency band corresponding to the presentation rate of phrases (1.1–2.1 Hz). Overall, these mutual information analyses showed stronger tracking of phrases in regular sentences than in stimuli whose lexical-syntactic content is reduced, but no consistent differences in tracking between sentences and stimuli that contain a combination of syntactic structure and lexical content. While there were no effects of compositional meaning on the degree of phrase-structure tracking, analyses of event-related potentials elicited by sentence-final words did reveal meaning-induced differences between conditions. Our findings suggest that cortical tracking of structure in sentences indexes the internal generation of this structure, a process that is modulated by the properties of its input, but not by the compositional interpretation of its output.

    Additional information

    supplementary information
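    The tracking measure can be sketched as follows, assuming (as simplifications, not the authors' exact pipeline) a Butterworth band-pass at the phrasal rate and plain histogram-based mutual information in place of the estimator used in the paper; the sampling rate and signals are synthetic stand-ins.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(x, fs, lo=1.1, hi=2.1, order=3):
    """Band-pass filter at the phrase presentation rate (1.1-2.1 Hz)."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def histogram_mi(x, y, bins=16):
    """Mutual information (nats) estimated from a joint histogram of two signals."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

fs = 200                                  # Hz, illustrative sampling rate
t = np.arange(0, 60, 1 / fs)
envelope = np.sin(2 * np.pi * 1.6 * t)    # stand-in speech envelope at a phrasal rate
eeg = envelope + np.random.default_rng(0).normal(0, 1, t.size)  # noisy "EEG"
print(histogram_mi(bandpass(eeg, fs), bandpass(envelope, fs)))
```

    Higher values on this measure correspond to stronger tracking of the phrase-rate signal, which is the quantity compared across conditions in the study.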
  • Coopmans, C. W., & Cohn, N. (2022). An electrophysiological investigation of co-referential processes in visual narrative comprehension. Neuropsychologia, 172: 108253. doi:10.1016/j.neuropsychologia.2022.108253.

    Abstract

    Visual narratives make use of various means to convey referential and co-referential meaning, so comprehenders must recognize that different depictions across sequential images represent the same character(s). In this study, we investigated how the order in which different types of panels in visual sequences are presented affects how the unfolding narrative is comprehended. Participants viewed short comic strips while their electroencephalogram (EEG) was recorded. We analyzed evoked and induced EEG activity elicited by both full panels (showing a full character) and refiner panels (showing only a zoom of that full panel), and took into account whether they preceded or followed the panel to which they were co-referentially related (i.e., were cataphoric or anaphoric). We found that full panels elicited both larger N300 amplitude and increased gamma-band power compared to refiner panels. Anaphoric panels elicited a sustained negativity compared to cataphoric panels, which appeared to be sensitive to the referential status of the anaphoric panel. In the time-frequency domain, anaphoric panels elicited reduced 8–12 Hz alpha power and increased 45–65 Hz gamma-band power compared to cataphoric panels. These findings are consistent with models in which the processes involved in visual narrative comprehension partially overlap with those in language comprehension.
  • Corcoran, A. W., Alday, P. M., Schlesewsky, M., & Bornkessel-Schlesewsky, I. (2018). Toward a reliable, automated method of individual alpha frequency (IAF) quantification. Psychophysiology, 55(7): e13064. doi:10.1111/psyp.13064.

    Abstract

    Individual alpha frequency (IAF) is a promising electrophysiological marker of interindividual differences in cognitive function. IAF has been linked with trait-like differences in information processing and general intelligence, and provides an empirical basis for the definition of individualized frequency bands. Despite its widespread application, however, there is little consensus on the optimal method for estimating IAF, and many common approaches are prone to bias and inconsistency. Here, we describe an automated strategy for deriving two of the most prevalent IAF estimators in the literature: peak alpha frequency (PAF) and center of gravity (CoG). These indices are calculated from resting-state power spectra that have been smoothed using a Savitzky-Golay filter (SGF). We evaluate the performance characteristics of this analysis procedure in both empirical and simulated EEG data sets. Applying the SGF technique to resting-state data from n = 63 healthy adults furnished 61 PAF and 62 CoG estimates. The statistical properties of these estimates were consistent with previous reports. Simulation analyses revealed that the SGF routine was able to reliably extract target alpha components, even under relatively noisy spectral conditions. The routine consistently outperformed a simpler method of automated peak detection that did not involve spectral smoothing. The SGF technique is fast, open source, and available in two popular programming languages (MATLAB, Python), and thus can easily be integrated within the most popular M/EEG toolsets (EEGLAB, FieldTrip, MNE-Python). As such, it affords a convenient tool for improving the reliability and replicability of future IAF-related research.

    Additional information

    psyp13064-sup-0001-s01.docx
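    The two estimators named above can be sketched in a few lines, assuming a precomputed resting-state power spectrum; the window length, polynomial order, and alpha-band limits are illustrative defaults, and the published routine adds peak-validity checks that this sketch omits.

```python
import numpy as np
from scipy.signal import savgol_filter

def iaf_estimates(freqs, psd, band=(7.0, 13.0), window=11, polyorder=5):
    """Estimate peak alpha frequency (PAF) and center of gravity (CoG)
    from a Savitzky-Golay-smoothed power spectrum."""
    smoothed = savgol_filter(psd, window_length=window, polyorder=polyorder)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    f, p = freqs[mask], smoothed[mask]
    paf = f[np.argmax(p)]            # frequency of the largest alpha peak
    cog = np.sum(f * p) / np.sum(p)  # power-weighted mean alpha frequency
    return paf, cog

freqs = np.linspace(1, 30, 291)
psd = 1 / freqs + 0.5 * np.exp(-0.5 * ((freqs - 10.2) / 1.0) ** 2)  # toy 1/f + alpha bump
print(iaf_estimates(freqs, psd))  # PAF close to 10.2 Hz on this toy spectrum
```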
  • Corps, R. E., Brooke, C., & Pickering, M. (2022). Prediction involves two stages: Evidence from visual-world eye-tracking. Journal of Memory and Language, 122: 104298. doi:10.1016/j.jml.2021.104298.

    Abstract

    Comprehenders often predict what they are going to hear. But do they make the best predictions possible? We addressed this question in three visual-world eye-tracking experiments by asking when comprehenders consider perspective. Male and female participants listened to male and female speakers producing sentences (e.g., I would like to wear the nice…) about stereotypically masculine (target: tie; distractor: drill) and feminine (target: dress, distractor: hairdryer) objects. In all three experiments, participants rapidly predicted semantic associates of the verb. But participants also predicted consistently – that is, consistent with their beliefs about what the speaker would ultimately say. They predicted consistently from the speaker’s perspective in Experiment 1, their own perspective in Experiment 2, and the character’s perspective in Experiment 3. This consistent effect occurred later than the associative effect. We conclude that comprehenders consider perspective when predicting, but not from the earliest moments of prediction, consistent with a two-stage account.

    Additional information

    data and analysis scripts
  • Corps, R. E., Knudsen, B., & Meyer, A. S. (2022). Overrated gaps: Inter-speaker gaps provide limited information about the timing of turns in conversation. Cognition, 223: 105037. doi:10.1016/j.cognition.2022.105037.

    Abstract

    Corpus analyses have shown that turn-taking in conversation is much faster than laboratory studies of speech planning would predict. To explain fast turn-taking, Levinson and Torreira (2015) proposed that speakers are highly proactive: They begin to plan a response to their interlocutor's turn as soon as they have understood its gist, and launch this planned response when the turn-end is imminent. Thus, fast turn-taking is possible because speakers use the time while their partner is talking to plan their own utterance. In the present study, we asked how much time upcoming speakers actually have to plan their utterances. Following earlier psycholinguistic work, we used transcripts of spoken conversations in Dutch, German, and English. These transcripts consisted of segments, which are continuous stretches of speech by one speaker. In the psycholinguistic and phonetic literature, such segments have often been used as proxies for turns. We found that in all three corpora, large proportions of the segments comprised only one or two words, which on our estimate does not give the next speaker enough time to fully plan a response. Further analyses showed that speakers indeed often did not respond to the immediately preceding segment of their partner, but continued an earlier segment of their own. More generally, our findings suggest that speech segments derived from transcribed corpora do not necessarily correspond to turns, and the gaps between speech segments therefore only provide limited information about the planning and timing of turns.
  • Corps, R. E. (2018). Coordinating utterances during conversational dialogue: The role of content and timing predictions. PhD Thesis, The University of Edinburgh, Edinburgh.
  • Corps, R. E., Gambi, C., & Pickering, M. J. (2018). Coordinating utterances during turn-taking: The role of prediction, response preparation, and articulation. Discourse processes, 55(2, SI), 230-240. doi:10.1080/0163853X.2017.1330031.

    Abstract

    During conversation, interlocutors rapidly switch between speaker and listener roles and take turns at talk. How do they achieve such fine coordination? Most research has concentrated on the role of prediction, but listeners must also prepare a response in advance (assuming they wish to respond) and articulate this response at the appropriate moment. Such mechanisms may overlap with the processes of comprehending the speaker’s incoming turn and predicting its end. However, little is known about the stages of response preparation and production. We discuss three questions pertaining to such stages: (1) Do listeners prepare their own response in advance?, (2) Can listeners buffer their prepared response?, and (3) Does buffering lead to interference with concurrent comprehension? We argue that fine coordination requires more than just an accurate prediction of the interlocutor’s incoming turn: Listeners must also simultaneously prepare their own response.
  • Corps, R. E., Crossley, A., Gambi, C., & Pickering, M. J. (2018). Early preparation during turn-taking: Listeners use content predictions to determine what to say but not when to say it. Cognition, 175, 77-95. doi:10.1016/j.cognition.2018.01.015.

    Abstract

    During conversation, there is often little gap between interlocutors’ utterances. In two pairs of experiments, we manipulated the content predictability of yes/no questions to investigate whether listeners achieve such coordination by (i) preparing a response as early as possible or (ii) predicting the end of the speaker’s turn. To assess these two mechanisms, we varied the participants’ task: They either pressed a button when they thought the question was about to end (Experiments 1a and 2a), or verbally answered the questions with either yes or no (Experiments 1b and 2b). Predictability effects were present when participants had to prepare a verbal response, but not when they had to predict the turn-end. These findings suggest content prediction facilitates turn-taking because it allows listeners to prepare their own response early, rather than because it helps them predict when the speaker will reach the end of their turn.

    Additional information

    Supplementary material
  • Costa, A., Cutler, A., & Sebastian-Galles, N. (1998). Effects of phoneme repertoire on phoneme decision. Perception and Psychophysics, 60, 1022-1031.

    Abstract

    In three experiments, listeners detected vowel or consonant targets in lists of CV syllables constructed from five vowels and five consonants. Responses were faster in a predictable context (e.g., listening for a vowel target in a list of syllables all beginning with the same consonant) than in an unpredictable context (e.g., listening for a vowel target in a list of syllables beginning with different consonants). In Experiment 1, the listeners’ native language was Dutch, in which vowel and consonant repertoires are similar in size. The difference between predictable and unpredictable contexts was comparable for vowel and consonant targets. In Experiments 2 and 3, the listeners’ native language was Spanish, which has four times as many consonants as vowels; here effects of an unpredictable consonant context on vowel detection were significantly greater than effects of an unpredictable vowel context on consonant detection. This finding suggests that listeners’ processing of phonemes takes into account the constitution of their language’s phonemic repertoire and the implications that this has for contextual variability.
  • Cousijn, H., Eissing, M., Fernández, G., Fisher, S. E., Franke, B., Zwers, M., Harrison, P. J., & Arias-Vasquez, A. (2014). No effect of schizophrenia risk genes MIR137, TCF4, and ZNF804A on macroscopic brain structure. Schizophrenia Research, 159, 329-332. doi:10.1016/j.schres.2014.08.007.

    Abstract

    Single nucleotide polymorphisms (SNPs) within the MIR137, TCF4, and ZNF804A genes show genome-wide association to schizophrenia. However, the biological basis for the associations is unknown. Here, we tested the effects of these genes on brain structure in 1300 healthy adults. Using volumetry and voxel-based morphometry, neither gene-wide effects—including the combined effect of the genes—nor single SNP effects—including specific psychosis risk SNPs—were found on total brain volume, grey matter, white matter, or hippocampal volume. These results suggest that the associations between these risk genes and schizophrenia are unlikely to be mediated via effects on macroscopic brain structure.
  • Cozijn, R., Noordman, L. G., & Vonk, W. (2011). Propositional integration and world-knowledge inference: Processes in understanding because sentences. Discourse Processes, 48, 475-500. doi:10.1080/0163853X.2011.594421.

    Abstract

    The issue addressed in this study is whether propositional integration and world-knowledge inference can be distinguished as separate processes during the comprehension of Dutch omdat (because) sentences. “Propositional integration” refers to the process by which the reader establishes the type of relation between two clauses or sentences. “World-knowledge inference” refers to the process of deriving the general causal relation and checking it against the reader's world knowledge. An eye-tracking experiment showed that the presence of the conjunction speeds up the processing of the words immediately following the conjunction, and slows down the processing of the sentence-final words, in comparison to the absence of the conjunction. A second, subject-paced reading experiment replicated the reading time findings, and the results of a verification task confirmed that the effect at the end of the sentence was due to inferential processing. These findings provide evidence for integrative and inferential processing, respectively.
  • Cozijn, R., Commandeur, E., Vonk, W., & Noordman, L. G. (2011). The time course of the use of implicit causality information in the processing of pronouns: A visual world paradigm study. Journal of Memory and Language, 64, 381-403. doi:10.1016/j.jml.2011.01.001.

    Abstract

    Several theoretical accounts have been proposed with respect to the issue of how quickly the implicit causality verb bias affects the understanding of sentences such as “John beat Pete at the tennis match, because he had played very well”. They can be considered instances of two viewpoints: the focusing and the integration account. The focusing account claims that the bias should be manifest soon after the verb has been processed, whereas the integration account claims that the interpretation is deferred until disambiguating information is encountered. Up to now, this issue has remained unresolved because materials or methods have failed to address it conclusively. We conducted two experiments that exploited the visual world paradigm and ambiguous pronouns in subordinate because clauses. The first experiment presented implicit causality sentences with the task of resolving the ambiguous pronoun. To exclude strategic processing, in the second experiment the task was to answer simple comprehension questions, and only a minority of the sentences contained implicit causality verbs. In both experiments, the implicit causality of the verb had an effect before the disambiguating information was available. This result supported the focusing account.
  • Crago, M. B., & Allen, S. E. M. (1998). Acquiring Inuktitut. In O. L. Taylor, & L. Leonard (Eds.), Language Acquisition Across North America: Cross-Cultural And Cross-Linguistic Perspectives (pp. 245-279). San Diego, CA, USA: Singular Publishing Group, Inc.
  • Crago, M. B., Allen, S. E. M., & Pesco, D. (1998). Issues of Complexity in Inuktitut and English Child Directed Speech. In Proceedings of the twenty-ninth Annual Stanford Child Language Research Forum (pp. 37-46).
  • Crago, M. B., Chen, C., Genesee, F., & Allen, S. E. M. (1998). Power and deference. Journal for a Just and Caring Education, 4(1), 78-95.
  • Crasborn, O. A., Hanke, T., Efthimiou, E., Zwitserlood, I., & Thoutenhooft, E. (Eds.). (2008). Construction and Exploitation of Sign Language Corpora. 3rd Workshop on the Representation and Processing of Sign Languages. Paris: ELDA.
  • Crasborn, O., & Sloetjes, H. (2008). Enhanced ELAN functionality for sign language corpora. In Proceedings of the 3rd Workshop on the Representation and Processing of Sign Languages: Construction and Exploitation of Sign Language Corpora (pp. 39-43).

    Abstract

    The multimedia annotation tool ELAN was enhanced within the Corpus NGT project by a number of new and improved functions. Most of these functions were not specific to working with sign language video data, and can readily be used for other annotation purposes as well. Their direct utility for working with large amounts of annotation files during the development and use of the Corpus NGT project is what unites the various functions, which are described in this paper. In addition, we aim to characterise future developments that will be needed in order to work efficiently with larger amounts of annotation files, for which a closer integration with the use and display of metadata is foreseen.
  • Crasborn, O., & Sloetjes, H. (2014). Improving the exploitation of linguistic annotations in ELAN. In N. Calzolari, K. Choukri, T. Declerck, H. Loftsson, B. Maegaard, J. Mariani, A. Moreno, J. Odijk, & S. Piperidis (Eds.), Proceedings of LREC 2014: 9th International Conference on Language Resources and Evaluation (pp. 3604-3608).

    Abstract

    This paper discusses some improvements in recent and planned versions of the multimodal annotation tool ELAN, which are targeted at improving the usability of annotated files. Increased support for multilingual documents is provided, by allowing for multilingual vocabularies and by specifying a language per document, annotation layer (tier) or annotation. In addition, improvements in the search possibilities and the display of the results have been implemented, which are especially relevant in the interpretation of the results of complex multi-tier searches.
  • Crasborn, O., Hulsbosch, M., Lampen, L., & Sloetjes, H. (2014). New multilayer concordance functions in ELAN and TROVA. In Proceedings of the Tilburg Gesture Research Meeting [TiGeR 2013].

    Abstract

    Collocations generated by concordancers are a standard instrument in the exploitation of text corpora for the analysis of language use. Multimodal corpora show similar types of patterns, activities that frequently occur together, but there is no tool that offers facilities for visualising such patterns. Examples include timing of eye contact with respect to speech, and the alignment of activities of the two hands in signed languages. This paper describes recent enhancements to the standard CLARIN tools ELAN and TROVA for multimodal annotation to address these needs: first of all the query and concordancing functions were improved, and secondly the tools now generate visualisations of multilayer collocations that allow for intuitive explorations and analyses of multimodal data. This will provide a boost to the linguistic fields of gesture and sign language studies, as it will improve the exploitation of multimodal corpora.
  • Crasborn, O. A., & Zwitserlood, I. (2008). The Corpus NGT: An online corpus for professionals and laymen. In O. A. Crasborn, T. Hanke, E. Efthimiou, I. Zwitserlood, & E. Thoutenhooft (Eds.), Construction and Exploitation of Sign Language Corpora. (pp. 44-49). Paris: ELDA.

    Abstract

    The Corpus NGT is an ambitious effort to record and archive video data from Sign Language of the Netherlands (Nederlandse Gebarentaal: NGT), guaranteeing online access to all interested parties and long-term availability. Data are collected from 100 native signers of NGT of different ages and from various regions in the country. Parts of these data are annotated and/or translated; the annotations and translations are part of the corpus. The Corpus NGT is accommodated in the Browsable Corpus based at the Max Planck Institute for Psycholinguistics. In this paper we share our experiences in data collection, video processing, annotation/translation and licensing involved in building the corpus.
  • Creaghe, N., & Kidd, E. (2022). Symbolic play as a zone of proximal development: An analysis of informational exchange. Social Development, 31(4), 1138-1156. doi:10.1111/sode.12592.

    Abstract

    Symbolic play has long been considered a beneficial context for development. According to Cultural Learning theory, one reason for this is that symbolically-infused dialogical interactions constitute a zone of proximal development. However, the dynamics of caregiver-child interactions during symbolic play are still not fully understood. In the current study, we investigated informational exchange between fifty-two 24-month-old infants and their primary caregivers during symbolic play and a comparable, non-symbolic, functional play context. We coded over 11,000 utterances for whether participants had superior, equivalent, or inferior knowledge concerning the current conversational topic. Results showed that children were significantly more knowledgeable speakers and recipients in symbolic play, whereas the opposite was the case for caregivers, who were more knowledgeable in functional play. The results suggest that, despite its potential conceptual complexity, symbolic play may scaffold development because it facilitates infants’ communicative success by promoting them to ‘co-constructors of meaning’.

    Additional information

    Supporting information
  • Creemers, A., & Embick, D. (2022). The role of semantic transparency in the processing of spoken compound words. Journal of Experimental Psychology: Learning, Memory, and Cognition, 48(5), 734-751. doi:10.1037/xlm0001132.

    Abstract

    The question of whether lexical decomposition is driven by semantic transparency in the lexical processing of morphologically complex words, such as compounds, remains controversial. Prior research on compound processing has predominantly examined visual processing. Focusing instead on spoken word recognition, the present study examined the processing of auditorily presented English compounds that were semantically transparent (e.g., farmyard) or partially opaque with an opaque head (e.g., airline) or opaque modifier (e.g., pothole). Three auditory primed lexical decision experiments were run to examine to what extent constituent priming effects are affected by the semantic transparency of a compound and whether semantic transparency affects the processing of heads and modifiers equally. The results showed priming effects for both modifiers and heads regardless of their semantic transparency, indicating that individual constituents are accessed in transparent as well as opaque compounds. In addition, the results showed smaller priming effects for semantically opaque heads compared with matched transparent compounds with the same head. These findings suggest that semantically opaque heads induce an increased processing cost, which may result from the need to suppress the meaning of the head in favor of the meaning of the opaque compound.
  • Creemers, A., & Meyer, A. S. (2022). The processing of ambiguous pronominal reference is sensitive to depth of processing. Glossa Psycholinguistics, 1(1): 3. doi:10.5070/G601166.

    Abstract

    Previous studies on the processing of ambiguous pronominal reference have led to contradictory results: some suggested that ambiguity may hinder processing (Stewart, Holler, & Kidd, 2007), while others showed an ambiguity advantage (Grant, Sloggett, & Dillon, 2020) similar to what has been reported for structural ambiguities. This study provides a conceptual replication of Stewart et al. (2007, Experiment 1), to examine whether the discrepancy in earlier results is caused by the processing depth that participants engage in (cf. Swets, Desmet, Clifton, & Ferreira, 2008). We present the results from a word-by-word self-paced reading experiment with Dutch sentences that contained a personal pronoun in an embedded clause that was either ambiguous or disambiguated through gender features. Depth of processing of the embedded clause was manipulated through offline comprehension questions. The results showed that the difference in reading times for ambiguous versus unambiguous sentences depends on the processing depth: a significant ambiguity penalty was found under deep processing but not under shallow processing. No significant ambiguity advantage was found, regardless of processing depth. This replicates the results in Stewart et al. (2007) using a different methodology and a larger sample size for appropriate statistical power. These findings provide further evidence that ambiguous pronominal reference resolution is a flexible process, such that the way in which ambiguous sentences are processed depends on the depth of processing of the relevant information. Theoretical and methodological implications of these findings are discussed.
  • Creemers, A., Don, J., & Fenger, P. (2018). Some affixes are roots, others are heads. Natural Language & Linguistic Theory, 36(1), 45-84. doi:10.1007/s11049-017-9372-1.

    Abstract

    A recent debate in the morphological literature concerns the status of derivational affixes. While some linguists (Marantz 1997, 2001; Marvin 2003) consider derivational affixes a type of functional morpheme that realizes a categorial head, others (Lowenstamm 2015; De Belder 2011) argue that derivational affixes are roots. Our proposal, which finds its empirical basis in a study of Dutch derivational affixes, takes a middle position. We argue that there are two types of derivational affixes: some that are roots (i.e. lexical morphemes) and others that are categorial heads (i.e. functional morphemes). Affixes that are roots show ‘flexible’ categorial behavior, are subject to ‘lexical’ phonological rules, and may trigger idiosyncratic meanings. Affixes that realize categorial heads, on the other hand, are categorially rigid, do not trigger ‘lexical’ phonological rules, and do not allow for idiosyncrasies in their interpretation.
  • Cristia, A., Tsuji, S., & Bergmann, C. (2022). A meta-analytic approach to evaluating the explanatory adequacy of theories. Meta-Psychology, 6: MP.2020.2741. doi:10.15626/MP.2020.2741.

    Abstract

    How can data be used to check theories’ explanatory adequacy? The two traditional and most widespread approaches use single studies and non-systematic narrative reviews to evaluate theories’ explanatory adequacy; more recently, large-scale replications entered the picture. We argue here that none of these approaches fits in with cumulative science tenets. We propose instead Community-Augmented Meta-Analyses (CAMAs), which, like meta-analyses and systematic reviews, are built using all available data; like meta-analyses but not systematic reviews, can rely on sound statistical practices to model methodological effects; and like no other approach, are broad-scoped, cumulative and open. We explain how CAMAs entail a conceptual shift from meta-analyses and systematic reviews, a shift that is useful when evaluating theories’ explanatory adequacy. We then provide step-by-step recommendations for how to implement this approach – and what it means when one cannot. This leads us to conclude that CAMAs highlight areas of uncertainty better than alternative approaches that bring data to bear on theory evaluation, and can trigger a much needed shift towards a cumulative mindset with respect to both theory and data, leading us to do and view experiments and narrative reviews differently.

    Additional information

    All data available at OSF
  • Cristia, A. (2008). Cue weighting at different ages. Purdue Linguistics Association Working Papers, 1, 87-105.
  • Cristia, A., McGuire, G. L., Seidl, A., & Francis, A. L. (2011). Effects of the distribution of acoustic cues on infants' perception of sibilants. Journal of Phonetics, 39, 388-402. doi:10.1016/j.wocn.2011.02.004.

    Abstract

    A current theoretical view proposes that infants converge on the speech categories of their native language by attending to frequency distributions that occur in the acoustic input. To date, the only empirical support for this statistical learning hypothesis comes from studies where a single, salient dimension was manipulated. Additional evidence is sought here, by introducing a less salient pair of categories supported by multiple cues. We exposed English-learning infants to a multi-cue bidimensional grid ranging between retroflex and alveolopalatal sibilants in prevocalic position. This contrast is substantially more difficult according to previous cross-linguistic and perceptual research, and its perception is driven by cues in both the consonantal and the following vowel portions. Infants heard one of two distributions (flat, or with two peaks), and were tested with sounds varying along only one dimension. Infants' responses differed depending on the familiarization distribution, and their performance was equally good for the vocalic and the frication dimension, lending some support to the statistical hypothesis even in this harder learning situation. However, learning was restricted to the retroflex category, and a control experiment showed that lack of learning for the alveolopalatal category was not due to the presence of a competing category. Thus, these results contribute fundamental evidence on the extent and limitations of the statistical hypothesis as an explanation for infants' perceptual tuning.
  • Cristia, A. (2011). Fine-grained variation in caregivers' speech predicts their infants' discrimination. Journal of the Acoustical Society of America, 129, 3271-3280. doi:10.1121/1.3562562.

    Abstract

    Within the debate on the mechanisms underlying infants’ perceptual acquisition, one hypothesis proposes that infants’ perception is directly affected by the acoustic implementation of sound categories in the speech they hear. In consonance with this view, the present study shows that individual variation in fine-grained, subphonemic aspects of the acoustic realization of /s/ in caregivers’ speech predicts infants’ discrimination of this sound from the highly similar /ʃ/, suggesting that learning based on acoustic cue distributions may indeed drive natural phonological acquisition.
  • Cristia, A., Seidl, A., & Gerken, L. (2011). Learning classes of sounds in infancy. University of Pennsylvania Working Papers in Linguistics, 17, 9.

    Abstract

    Adults' phonotactic learning is affected by perceptual biases. One such bias concerns learning of constraints affecting groups of sounds: all else being equal, learning constraints affecting a natural class (a set of sounds sharing some phonetic characteristic) is easier than learning a constraint affecting an arbitrary set of sounds. This perceptual bias could be a given, for example, the result of innately guided learning; alternatively, it could be due to human learners’ experience with sounds. Using artificial grammars, we investigated whether such a bias arises in development, or whether it is present as soon as infants can learn phonotactics. Seven-month-old English-learning infants fail to generalize a phonotactic pattern involving fricatives and nasals, which does not form a coherent phonetic group, but succeed with the natural class of oral and nasal stops. In this paper, we report an experiment that explored whether those results also follow in a cohort of 4-month-olds. Unlike the older infants, 4-month-olds were able to generalize both groups, suggesting that the perceptual bias that makes phonotactic constraints on natural classes easier to learn is likely the effect of experience.
  • Cristia, A., & Seidl, A. (2008). Is infants' learning of sound patterns constrained by phonological features? Language Learning and Development, 4, 203-227. doi:10.1080/15475440802143109.

    Abstract

    Phonological patterns in languages often involve groups of sounds rather than individual sounds, which may be explained if phonology operates on the abstract features shared by those groups (Troubetzkoy, 1939/1969; Chomsky & Halle, 1968). Such abstract features may be present in the developing grammar either because they are part of a Universal Grammar included in the genetic endowment of humans (e.g., Hale, Kissock, & Reiss, 2006), or plausibly because infants induce features from their linguistic experience (e.g., Mielke, 2004). A first experiment tested 7-month-old infants' learning of an artificial grammar pattern involving either a set of sounds defined by a phonological feature, or a set of sounds that cannot be described with a single feature (an “arbitrary” set). Infants were able to induce the constraint and generalize it to a novel sound only for the set that shared the phonological feature. A second study showed that infants' inability to learn the arbitrary grouping was not due to their inability to encode a constraint on some of the sounds involved.
  • Cristia, A., Minagawa-Kawai, Y., Egorova, N., Gervain, J., Filippin, L., Cabrol, D., & Dupoux, E. (2014). Neural correlates of infant accent discrimination: An fNIRS study. Developmental Science, 17(4), 628-635. doi:10.1111/desc.12160.

    Abstract

    The present study investigated the neural correlates of infant discrimination of very similar linguistic varieties (Quebecois and Parisian French) using functional Near InfraRed Spectroscopy. In line with previous behavioral and electrophysiological data, there was no evidence that 3-month-olds discriminated the two regional accents, whereas 5-month-olds did, with the locus of discrimination in left anterior perisylvian regions. These neuroimaging results suggest that a developing language network relying crucially on left perisylvian cortices sustains infants' discrimination of similar linguistic varieties within this early period of infancy.

  • Cristia, A., Seidl, A., & Francis, A. L. (2011). Phonological features in infancy. In G. N. Clements, & R. Ridouane (Eds.), Where do phonological contrasts come from? Cognitive, physical and developmental bases of phonological features (pp. 303-326). Amsterdam: Benjamins.

    Abstract

    Features serve two main functions in the phonology of languages: they encode the distinction between pairs of contrastive phonemes (distinctive function); and they delimit sets of sounds that participate in phonological processes and patterns (classificatory function). We summarize evidence from a variety of experimental paradigms bearing on the functional relevance of phonological features. This research shows that while young infants may use abstract phonological features to learn sound patterns, this ability becomes more constrained with development and experience. Furthermore, given the lack of overlap between the ability to learn a pair of words differing in a single feature and the ability to learn sound patterns based on features, we argue for the separation of the distinctive and the classificatory function.
  • Cristia, A., Ganesh, S., Casillas, M., & Ganapathy, S. (2018). Talker diarization in the wild: The case of child-centered daylong audio-recordings. In Proceedings of Interspeech 2018 (pp. 2583-2587). doi:10.21437/Interspeech.2018-2078.

    Abstract

    Speaker diarization (answering 'who spoke when') is a widely researched subject within speech technology. Numerous experiments have been run on datasets built from broadcast news, meeting data, and call centers—the task sometimes appears close to being solved. Much less work has begun to tackle the hardest diarization task of all: spontaneous conversations in real-world settings. Such diarization would be particularly useful for studies of language acquisition, where researchers investigate the speech children produce and hear in their daily lives. In this paper, we study audio gathered with a recorder worn by small children as they went about their normal days. As a result, each child was exposed to different acoustic environments with a multitude of background noises and a varying number of adults and peers. The inconsistency of speech and noise within and across samples poses a challenging task for speaker diarization systems, which we tackled via retraining and data augmentation techniques. We further studied sources of structured variation across raw audio files, including the impact of speaker type distribution, proportion of speech from children, and child age on diarization performance. We discuss the extent to which these findings might generalize to other samples of speech in the wild.
  • Cristia, A., & Seidl, A. (2011). Sensitivity to prosody at 6 months predicts vocabulary at 24 months. In N. Danis, K. Mesh, & H. Sung (Eds.), BUCLD 35: Proceedings of the 35th annual Boston University Conference on Language Development (pp. 145-156). Somerville, Mass: Cascadilla Press.
  • Cristia, A., Seidl, A., Junge, C., Soderstrom, M., & Hagoort, P. (2014). Predicting individual variation in language from infant speech perception measures. Child development, 85(4), 1330-1345. doi:10.1111/cdev.12193.

    Abstract

    There are increasing reports that individual variation in behavioral and neurophysiological measures of infant speech processing predicts later language outcomes, and specifically concurrent or subsequent vocabulary size. If such findings hold up under scrutiny, they could both illuminate theoretical models of language development and contribute to the prediction of communicative disorders. A qualitative, systematic review of this emergent literature illustrated the variety of approaches that have been used and highlighted some conceptual problems regarding the measurements. A quantitative analysis of the same data established that the bivariate relation was significant, with correlations of similar strength to those found for well-established nonlinguistic predictors of language. Further exploration of infant speech perception predictors, particularly from a methodological perspective, is recommended.
  • Cristia, A., & Seidl, A. (2014). The hyperarticulation hypothesis of infant-directed speech. Journal of Child Language, 41(4), 913-934. doi:10.1017/S0305000912000669.

    Abstract

    Typically, the point vowels [i, ɑ, u] are acoustically more peripheral in infant-directed speech (IDS) compared to adult-directed speech (ADS). If caregivers seek to highlight lexically relevant contrasts in IDS, then two sounds that are contrastive should become more distinct, whereas two sounds that are surface realizations of the same underlying sound category should not. To test this prediction, vowels that are phonemically contrastive ([i-ɪ] and [eɪ-ɛ]), vowels that map onto the same underlying category ([æ- ] and [ɛ- ]), and the point vowels [i, ɑ, u] were elicited in IDS and ADS by American English mothers of two age groups of infants (four- and eleven-month-olds). As in other work, point vowels were produced in more peripheral positions in IDS compared to ADS. However, there was little evidence of hyperarticulation per se (e.g. [i-ɪ] was hypoarticulated). We suggest that across-the-board lexically based hyperarticulation is not a necessary feature of IDS.

    Additional information

    Corrigendum
  • Cristia, A., & Seidl, A. (2008). Why cross-linguistic frequency cannot be equated with ease of acquisition. University of Pennsylvania Working Papers in Linguistics, 14(1), 71-82. Retrieved from http://repository.upenn.edu/pwpl/vol14/iss1/6.
  • Croijmans, I. (2018). Wine expertise shapes olfactory language and cognition. PhD Thesis, Radboud University, Nijmegen.
  • Cronin, K. A., Van Leeuwen, E. J. C., Mulenga, I. C., & Bodamer, M. D. (2011). Behavioral response of a chimpanzee mother toward her dead infant. American Journal of Primatology, 73(5), 415-421. doi:10.1002/ajp.20927.

    Abstract

    The mother-offspring bond is one of the strongest and most essential social bonds. Following is a detailed behavioral report of a female chimpanzee two days after her 16-month-old infant died, on the first day that the mother was observed to create distance between herself and the corpse. A series of repeated approaches and retreats to and from the body are documented, along with detailed accounts of behaviors directed toward the dead infant by the mother and other group members. The behavior of the mother toward her dead infant not only highlights the maternal contribution to the mother-infant relationship but also elucidates the opportunities chimpanzees have to learn about the sensory cues associated with death, and the implications of death for the social environment.
  • Cronin, K. A., Pieper, B., Van Leeuwen, E. J. C., Mundry, R., & Haun, D. B. M. (2014). Problem solving in the presence of others: How rank and relationship quality impact resource acquisition in chimpanzees (Pan troglodytes). PLoS One, 9(4): e93204. doi:10.1371/journal.pone.0093204.

    Abstract

    In the wild, chimpanzees (Pan troglodytes) are often faced with clumped food resources that they may know how to access but abstain from doing so due to social pressures. To better understand how social settings influence resource acquisition, we tested fifteen semi-wild chimpanzees from two social groups alone and in the presence of others. We investigated how resource acquisition was affected by relative social dominance, whether collaborative problem solving or (active or passive) sharing occurred amongst any of the dyads, and whether these outcomes were related to relationship quality as determined from six months of observational data. Results indicated that chimpanzees, regardless of rank, obtained fewer rewards when tested in the presence of others compared to when they were tested alone. Chimpanzees demonstrated behavioral inhibition; chimpanzees who showed proficient skill when alone often abstained from solving the task when in the presence of others. Finally, individuals with close social relationships spent more time together in the problem solving space, but collaboration and sharing were infrequent and sessions in which collaboration or sharing did occur contained more instances of aggression. Group living provides benefits and imposes costs, and these findings highlight that one cost of group living may be diminishing productive individual behaviors.
  • Cronin, K. A., & Snowdon, C. T. (2008). The effects of unequal reward distributions on cooperative problem solving by cottontop tamarins, Saguinus oedipus. Animal Behaviour, 75, 245-257. doi:10.1016/j.anbehav.2007.04.032.

    Abstract

    Cooperation among nonhuman animals has been the topic of much theoretical and empirical research, but few studies have examined systematically the effects of various reward payoffs on cooperative behaviour. Here, we presented heterosexual pairs of cooperatively breeding cottontop tamarins with a cooperative problem-solving task. In a series of four experiments, we examined how the tamarins’ cooperative performance changed under conditions in which (1) both actors were mutually rewarded, (2) both actors were rewarded reciprocally across days, (3) both actors competed for a monopolizable reward and (4) one actor repeatedly delivered a single reward to the other actor. The tamarins showed sensitivity to the reward structure, showing the greatest percentage of trials solved and shortest latency to solve the task in the mutual reward experiment and the lowest percentage of trials solved and longest latency to solve the task in the experiment in which one actor was repeatedly rewarded. However, even in the experiment in which the fewest trials were solved, the tamarins still solved 46 ± 12% of trials and little to no aggression was observed among partners following inequitable reward distributions. The tamarins did, however, show selfish motivation in each of the experiments. Nevertheless, in all experiments, unrewarded individuals continued to cooperate and procure rewards for their social partners.
  • Cronin, K. A., Van Leeuwen, E. J. C., Vreeman, V., & Haun, D. B. M. (2014). Population-level variability in the social climates of four chimpanzee societies. Evolution and Human Behavior, 35(5), 389-396. doi:10.1016/j.evolhumbehav.2014.05.004.

    Abstract

    Recent debates have questioned the extent to which culturally-transmitted norms drive behavioral variation in resource sharing across human populations. We shed new light on this discussion by examining the group-level variation in the social dynamics and resource sharing of chimpanzees, a species that is highly social and forms long-term community associations but differs from humans in the extent to which cultural norms are adopted and enforced. We rely on theory developed in primate socioecology to guide our investigation in four neighboring chimpanzee groups at a sanctuary in Zambia. We used a combination of experimental and observational approaches to assess the distribution of resource holding potential in each group. In the first assessment, we measured the proportion of the population that gathered in a resource-rich zone, in the second we assessed naturally occurring social spacing via social network analysis, and in the third we assessed the degree to which benefits were equally distributed within the group. We report significant, stable group-level variation across these multiple measures, indicating that group-level variation in resource sharing and social tolerance is not necessarily reliant upon human-like cultural norms.
  • Croxson, P., Forkel, S. J., Cerliani, L., & Thiebaut De Schotten, M. (2018). Structural Variability Across the Primate Brain: A Cross-Species Comparison. Cerebral Cortex, 28(11), 3829-3841. doi:10.1093/cercor/bhx244.

    Abstract

    A large amount of variability exists across human brains, revealed initially on a small scale by postmortem studies and, more recently, on a larger scale with the advent of neuroimaging. Here we compared structural variability between human and macaque monkey brains using grey and white matter magnetic resonance imaging measures. The monkey brain was overall structurally as variable as the human brain, but variability had a distinct distribution pattern, with some key areas showing high variability. We also report the first evidence of a relationship between anatomical variability and evolutionary expansion in the primate brain. This suggests a relationship between variability and stability, where areas of low variability may have evolved less recently and have more stability, while areas of high variability may have evolved more recently and be less similar across individuals. We showed specific differences between the species in key areas, including the amount of hemispheric asymmetry in variability, which was left-lateralized in the human brain across several phylogenetically recent regions. This suggests that cerebral variability may be another useful measure for comparison between species and may add another dimension to our understanding of evolutionary mechanisms.
  • Cucchiarini, C., Hubers, F., & Strik, H. (2022). Learning L2 idioms in a CALL environment: The role of practice intensity, modality, and idiom properties. Computer Assisted Language Learning, 35(4), 863-891. doi:10.1080/09588221.2020.1752734.

    Abstract

    Idiomatic expressions like hit the road or turn the tables are known to be problematic for L2 learners, but research indicates that learning L2 idiomatic language is important. Relatively few studies, most of them focusing on English idioms, have investigated how L2 idioms are actually acquired and how this process is affected by important idiom properties like transparency (the degree to which the figurative meaning of an idiom can be inferred from its literal analysis) and cross-language overlap (the degree to which L2 idioms correspond to L1 idioms). The present study employed a specially designed CALL system to investigate the effects of intensity of practice and the reading modality on learning Dutch L2 idioms, as well as the impact of idiom transparency and cross-language overlap. The results show that CALL practice with a focus on meaning and form is effective for learning L2 idioms and that the degree of practice needed depends on the properties of the idioms. L2 learners can achieve or even exceed native-like performance. Practicing reading idioms aloud does not lead to significantly higher performance than reading idioms silently. These findings have theoretical implications as they show that differences between native speakers and L2 learners are due to differences in exposure, rather than to different underlying acquisition mechanisms. For teaching practice, this study indicates that a properly designed CALL system is an effective and ecologically sound environment for learning L2 idioms, a generally unattended area in L2 classes, and that teaching priorities should be based on the degree of transparency and cross-language overlap of L2 idioms.
  • Cutler, A. (2008). The abstract representations in speech processing. Quarterly Journal of Experimental Psychology, 61(11), 1601-1619. doi:10.1080/13803390802218542.

    Abstract

    Speech processing by human listeners derives meaning from acoustic input via intermediate steps involving abstract representations of what has been heard. Recent results from several lines of research are here brought together to shed light on the nature and role of these representations. In spoken-word recognition, representations of phonological form and of conceptual content are dissociable. This follows from the independence of patterns of priming for a word's form and its meaning. The nature of the phonological-form representations is determined not only by acoustic-phonetic input but also by other sources of information, including metalinguistic knowledge. This follows from evidence that listeners can store two forms as different without showing any evidence of being able to detect the difference in question when they listen to speech. The lexical representations are in turn separate from prelexical representations, which are also abstract in nature. This follows from evidence that perceptual learning about speaker-specific phoneme realization, induced on the basis of a few words, generalizes across the whole lexicon to inform the recognition of all words containing the same phoneme. The efficiency of human speech processing has its basis in the rapid execution of operations over abstract representations.
  • Cutler, A., Norris, D., & Sebastián-Gallés, N. (2004). Phonemic repertoire and similarity within the vocabulary. In S. Kin, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 65-68). Seoul: Sunjijn Printing Co.

    Abstract

    Language-specific differences in the size and distribution of the phonemic repertoire can have implications for the task facing listeners in recognising spoken words. A language with more phonemes will allow shorter words and reduced embedding of short words within longer ones, decreasing the potential for spurious lexical competitors to be activated by speech signals. We demonstrate that this is the case via comparative analyses of the vocabularies of English and Spanish. A language which uses suprasegmental as well as segmental contrasts, however, can substantially reduce the extent of spurious embedding.
  • Cutler, A. (2004). Segmentation of spoken language by normal adult listeners. In R. Kent (Ed.), MIT encyclopedia of communication sciences and disorders (pp. 392-395). Cambridge, MA: MIT Press.
  • Cutler, A., Weber, A., Smits, R., & Cooper, N. (2004). Patterns of English phoneme confusions by native and non-native listeners. Journal of the Acoustical Society of America, 116(6), 3668-3678. doi:10.1121/1.1810292.

    Abstract

    Native American English and non-native (Dutch) listeners identified either the consonant or the vowel in all possible American English CV and VC syllables. The syllables were embedded in multispeaker babble at three signal-to-noise ratios (0, 8, and 16 dB). The phoneme identification performance of the non-native listeners was less accurate than that of the native listeners. All listeners were adversely affected by noise. With these isolated syllables, initial segments were harder to identify than final segments. Crucially, the effects of language background and noise did not interact; the performance asymmetry between the native and non-native groups was not significantly different across signal-to-noise ratios. It is concluded that the frequently reported disproportionate difficulty of non-native listening under disadvantageous conditions is not due to a disproportionate increase in phoneme misidentifications.
  • Cutler, A., McQueen, J. M., Butterfield, S., & Norris, D. (2008). Prelexically-driven perceptual retuning of phoneme boundaries. In Proceedings of Interspeech 2008 (pp. 2056-2056).

    Abstract

    Listeners heard an ambiguous /f-s/ in nonword contexts where only one of /f/ or /s/ was legal (e.g., frul/*srul or *fnud/snud). In later categorisation of a phonetic continuum from /f/ to /s/, their category boundaries had shifted; hearing -rul led to expanded /f/ categories, -nud expanded /s/. Thus phonotactic sequence information alone induces perceptual retuning of phoneme category boundaries; lexical access is not required.
  • Cutler, A. (2004). On spoken-word recognition in a second language. Newsletter, American Association of Teachers of Slavic and East European Languages, 47, 15-15.
  • Cutler, A., & Henton, C. G. (2004). There's many a slip 'twixt the cup and the lip. In H. Quené, & V. Van Heuven (Eds.), On speech and Language: Studies for Sieb G. Nooteboom (pp. 37-45). Utrecht: Netherlands Graduate School of Linguistics.

    Abstract

    The retiring academic may look back upon, inter alia, years of conference attendance. Speech error researchers are uniquely fortunate because they can collect data in any situation involving communication; accordingly, the retiring speech error researcher will have collected data at those conferences. We here address the issue of whether error data collected in situations involving conviviality (such as at conferences) is representative of error data in general. Our approach involved a comparison, across three levels of linguistic processing, between a specially constructed Conviviality Sample and the largest existing source of speech error data, the newly available Fromkin Speech Error Database. The results indicate that there are grounds for regarding the data in the Conviviality Sample as a better than average reflection of the true population of all errors committed. These findings encourage us to recommend further data collection in collaboration with like-minded colleagues.
  • Cutler, A. (2004). Twee regels voor academische vorming. In H. Procee (Ed.), Bij die wereld wil ik horen! Zesendertig columns en drie essays over de vorming tot academicus. (pp. 42-45). Amsterdam: Boom.
  • Cutler, A., Ernestus, M., Warner, N., & Weber, A. (2022). Managing speech perception data sets. In B. McDonnell, E. Koller, & L. B. Collister (Eds.), The Open Handbook of Linguistic Data Management (pp. 565-573). Cambrdige, MA, USA: MIT Press. doi:10.7551/mitpress/12200.003.0055.
  • Ip, M. H. K., & Cutler, A. (2022). Juncture prosody across languages: Similar production but dissimilar perception. Laboratory Phonology, 13(1): 5. doi:10.16995/labphon.6464.

    Abstract

    How do speakers of languages with different intonation systems produce and perceive prosodic junctures in sentences with identical structural ambiguity? Native speakers of English and of Mandarin produced potentially ambiguous sentences with a prosodic juncture either earlier in the utterance (e.g., “He gave her # dog biscuits,” “他给她#狗饼干”), or later (e.g., “He gave her dog # biscuits,” “他给她狗#饼干”). These production data showed that prosodic disambiguation is realised very similarly in the two languages, despite some differences in the degree to which individual juncture cues (e.g., pausing) were favoured. In perception experiments with a new disambiguation task, requiring speeded responses to select the correct meaning for structurally ambiguous sentences, language differences in disambiguation response time appeared: Mandarin speakers correctly disambiguated sentences with earlier juncture faster than those with later juncture, while English speakers showed the reverse. Mandarin speakers with L2 English did not show their native-language response time pattern when they heard the English ambiguous sentences. Thus even with identical structural ambiguity and identically cued production, prosodic juncture perception across languages can differ.

    Additional information

    Supplementary files
  • Cutler, A., & Otake, T. (1998). Assimilation of place in Japanese and Dutch. In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: vol. 5 (pp. 1751-1754). Sydney: ICLSP.

    Abstract

    Assimilation of place of articulation across a nasal and a following stop consonant is obligatory in Japanese, but not in Dutch. In four experiments the processing of assimilated forms by speakers of Japanese and Dutch was compared, using a task in which listeners blended pseudo-word pairs such as ranga-serupa. An assimilated blend of this pair would be rampa, an unassimilated blend rangpa. Japanese listeners produced significantly more assimilated than unassimilated forms, both with pseudo-Japanese and pseudo-Dutch materials, while Dutch listeners produced significantly more unassimilated than assimilated forms in each materials set. This suggests that Japanese listeners, whose native-language phonology involves obligatory assimilation constraints, represent the assimilated nasals in nasal-stop sequences as unmarked for place of articulation, while Dutch listeners, who are accustomed to hearing unassimilated forms, represent the same nasal segments as marked for place of articulation.
  • Ip, M. H. K., & Cutler, A. (2018). Asymmetric efficiency of juncture perception in L1 and L2. In K. Klessa, J. Bachan, A. Wagner, M. Karpiński, & D. Śledziński (Eds.), Proceedings of Speech Prosody 2018 (pp. 289-296). Baixas, France: ISCA. doi:10.21437/SpeechProsody.2018-59.

    Abstract

    In two experiments, Mandarin listeners resolved potential syntactic ambiguities in spoken utterances in (a) their native language (L1) and (b) English which they had learned as a second language (L2). A new disambiguation task was used, requiring speeded responses to select the correct meaning for structurally ambiguous sentences. Importantly, the ambiguities used in the study are identical in Mandarin and in English, and production data show that prosodic disambiguation of this type of ambiguity is also realised very similarly in the two languages. The perceptual results here showed however that listeners’ response patterns differed for L1 and L2, although there was a significant increase in similarity between the two response patterns with increasing exposure to the L2. Thus identical ambiguity and comparable disambiguation patterns in L1 and L2 do not lead to immediate application of the appropriate L1 listening strategy to L2; instead, it appears that such a strategy may have to be learned anew for the L2.
  • Ip, M. H. K., & Cutler, A. (2018). Cue equivalence in prosodic entrainment for focus detection. In J. Epps, J. Wolfe, J. Smith, & C. Jones (Eds.), Proceedings of the 17th Australasian International Conference on Speech Science and Technology (pp. 153-156).

    Abstract

    Using a phoneme detection task, the present series of experiments examines whether listeners can entrain to different combinations of prosodic cues to predict where focus will fall in an utterance. The stimuli were recorded by four female native speakers of Australian English who happened to have used different prosodic cues to produce sentences with prosodic focus: a combination of duration cues, mean and maximum F0, F0 range, and a longer pre-target interval before the focused word onset; only mean F0 cues; only the pre-target interval; and only duration cues. Results revealed that listeners can entrain in almost every condition except where duration was the only reliable cue. Our findings suggest that listeners are flexible in the cues they use for focus processing.
  • Cutler, A., Garcia Lecumberri, M. L., & Cooke, M. (2008). Consonant identification in noise by native and non-native listeners: Effects of local context. Journal of the Acoustical Society of America, 124(2), 1264-1268. doi:10.1121/1.2946707.

    Abstract

    Speech recognition in noise is harder in second (L2) than first languages (L1). This could be because noise disrupts speech processing more in L2 than L1, or because L1 listeners recover better though disruption is equivalent. Two similar prior studies produced discrepant results: Equivalent noise effects for L1 and L2 (Dutch) listeners, versus larger effects for L2 (Spanish) than L1. To explain this, the latter experiment was presented to listeners from the former population. Larger noise effects on consonant identification emerged for L2 (Dutch) than L1 listeners, suggesting that task factors rather than L2 population differences underlie the results discrepancy.
  • Cutler, A., Mister, E., Norris, D., & Sebastián-Gallés, N. (2004). La perception de la parole en espagnol: Un cas particulier? In L. Ferrand, & J. Grainger (Eds.), Psycholinguistique cognitive: Essais en l'honneur de Juan Segui (pp. 57-74). Brussels: De Boeck.
  • Cutler, A., Burchfield, L. A., & Antoniou, M. (2018). Factors affecting talker adaptation in a second language. In J. Epps, J. Wolfe, J. Smith, & C. Jones (Eds.), Proceedings of the 17th Australasian International Conference on Speech Science and Technology (pp. 33-36).

    Abstract

    Listeners adapt rapidly to previously unheard talkers by adjusting phoneme categories using lexical knowledge, in a process termed lexically-guided perceptual learning. Although this is firmly established for listening in the native language (L1), perceptual flexibility in second languages (L2) is as yet less well understood. We report two experiments examining L1 and L2 perceptual learning, the first in Mandarin-English late bilinguals, the second in Australian learners of Mandarin. Both studies showed stronger learning in L1; in L2, however, learning appeared for the English-L1 group but not for the Mandarin-L1 group. Phonological mapping differences from the L1 to the L2 are suggested as the reason for this result.
  • Cutler, A. (1998). How listeners find the right words. In Proceedings of the Sixteenth International Congress on Acoustics: Vol. 2 (pp. 1377-1380). Melville, NY: Acoustical Society of America.

    Abstract

    Languages contain tens of thousands of words, but these are constructed from a tiny handful of phonetic elements. Consequently, words resemble one another, or can be embedded within one another (a coup stick snot with standing). The process of spoken-word recognition by human listeners involves activation of multiple word candidates consistent with the input, and direct competition between activated candidate words. Further, human listeners are sensitive, at an early, prelexical, stage of speech processing, to constraints on what could potentially be a word of the language.
