Publications

  • Chen, A. (2012). Shaping the intonation of Wh-questions: Information structure and beyond. In J. P. de Ruiter (Ed.), Questions: Formal, functional and interactional perspectives (pp. 146-164). New York: Cambridge University Press.
  • Chen, A. (2009). The phonetics of sentence-initial topic and focus in adult and child Dutch. In M. Vigário, S. Frota, & M. Freitas (Eds.), Phonetics and Phonology: Interactions and interrelations (pp. 91-106). Amsterdam: Benjamins.
  • Chen, A. (2012). The prosodic investigation of information structure. In M. Krifka, & R. Musan (Eds.), The expression of information structure (pp. 249-286). Berlin: de Gruyter.
  • Cho, T., & McQueen, J. M. (2008). Not all sounds in assimilation environments are perceived equally: Evidence from Korean. Journal of Phonetics, 36, 239-249. doi:10.1016/j.wocn.2007.06.001.

    Abstract

    This study tests whether potential differences in the perceptual robustness of speech sounds influence continuous-speech processes. Two phoneme-monitoring experiments examined place assimilation in Korean. In Experiment 1, Koreans monitored for targets which were either labials (/p,m/) or alveolars (/t,n/), and which were either unassimilated or assimilated to a following /k/ in two-word utterances. Listeners detected unaltered (unassimilated) labials faster and more accurately than assimilated labials; there was no such advantage for unaltered alveolars. In Experiment 2, labial–velar differences were tested using conditions in which /k/ and /p/ were illegally assimilated to a following /t/. Unassimilated sounds were detected faster than illegally assimilated sounds, but this difference tended to be larger for /k/ than for /p/. These place-dependent asymmetries suggest that differences in the perceptual robustness of segments play a role in shaping phonological patterns.
  • Choi, J., Broersma, M., & Cutler, A. (2015). Enhanced processing of a lost language: Linguistic knowledge or linguistic skill? In Proceedings of Interspeech 2015: 16th Annual Conference of the International Speech Communication Association (pp. 3110-3114).

    Abstract

    Same-different discrimination judgments for pairs of Korean stop consonants, or of Japanese syllables differing in phonetic segment length, were made by adult Korean adoptees in the Netherlands, by matched Dutch controls, and Korean controls. The adoptees did not outdo either control group on either task, although the same individuals had performed significantly better than matched controls on an identification learning task. This suggests that early exposure to multiple phonetic systems does not specifically improve acoustic-phonetic skills; rather, enhanced performance suggests retained language knowledge.
  • Choi, S., & Bowerman, M. (1991). Learning to express motion events in English and Korean: The influence of language-specific lexicalization patterns. Cognition, 41, 83-121. doi:10.1016/0010-0277(91)90033-Z.

    Abstract

    English and Korean differ in how they lexicalize the components of motion events. English characteristically conflates Motion with Manner, Cause, or Deixis, and expresses Path separately. Korean, in contrast, conflates Motion with Path and elements of Figure and Ground in transitive clauses for caused Motion, but conflates motion with Deixis and spells out Path and Manner separately in intransitive clauses for spontaneous motion. Children learning English and Korean show sensitivity to language-specific patterns in the way they talk about motion from as early as 17–20 months. For example, learners of English quickly generalize their earliest spatial words — Path particles like up, down, and in — to both spontaneous and caused changes of location and, for up and down, to posture changes, while learners of Korean keep words for spontaneous and caused motion strictly separate and use different words for vertical changes of location and posture changes. These findings challenge the widespread view that children initially map spatial words directly to nonlinguistic spatial concepts, and suggest that they are influenced by the semantic organization of their language virtually from the beginning. We discuss how input and cognition may interact in the early phases of learning to talk about space.
  • Choi, J., Broersma, M., & Cutler, A. (2018). Phonetic learning is not enhanced by sequential exposure to more than one language. Linguistic Research, 35(3), 567-581. doi:10.17250/khisli.35.3.201812.006.

    Abstract

    Several studies have documented that international adoptees, who in early years have experienced a change from a language used in their birth country to a new language in an adoptive country, benefit from the limited early exposure to the birth language when relearning that language’s sounds later in life. The adoptees’ relearning advantages have been argued to be conferred by lasting birth-language knowledge obtained from the early exposure. However, it is also plausible to assume that the advantages may arise from adoptees’ superior ability to learn language sounds in general, as a result of their unusual linguistic experience, i.e., exposure to multiple languages in sequence early in life. If this is the case, then the adoptees’ relearning benefits should generalize to previously unheard language sounds, rather than be limited to their birth-language sounds. In the present study, adult Korean adoptees in the Netherlands and matched Dutch-native controls were trained on identifying a Japanese length distinction to which they had never been exposed before. The adoptees and Dutch controls did not differ on any test carried out before, during, or after the training, indicating that observed adoptee advantages for birth-language relearning do not generalize to novel, previously unheard language sounds. The finding thus fails to support the suggestion that birth-language relearning advantages may arise from enhanced ability to learn language sounds in general conferred by early experience in multiple languages. Rather, our finding supports the original contention that such advantages involve memory traces obtained before adoption.
  • Cholin, J., & Levelt, W. J. M. (2009). Effects of syllable preparation and syllable frequency in speech production: Further evidence for syllabic units at a post-lexical level. Language and Cognitive Processes, 24, 662-684. doi:10.1080/01690960802348852.

    Abstract

    In the current paper, we asked at what level in the speech planning process speakers retrieve stored syllables. There is evidence that syllable structure plays an essential role in the phonological encoding of words (e.g., online syllabification and phonological word formation). There is also evidence that syllables are retrieved as whole units. However, findings that clearly pinpoint these effects to specific levels in speech planning are scarce. We used a naming variant of the implicit priming paradigm to contrast voice onset latencies for frequency-manipulated disyllabic Dutch pseudo-words. While prior implicit priming studies only manipulated the item's form and/or syllable structure overlap, we introduced syllable frequency as an additional factor. If the preparation effect for syllables obtained in the implicit priming paradigm proceeds beyond phonological planning, i.e., includes the retrieval of stored syllables, then the preparation effect should differ for high- and low-frequency syllables. The findings reported here confirm this prediction: Low-frequency syllables benefit significantly more from the preparation than high-frequency syllables. Our findings support the notion of a mental syllabary at a post-lexical level, between the levels of phonological and phonetic encoding.
  • Chu, M., & Kita, S. (2009). Co-speech gestures do not originate from speech production processes: Evidence from the relationship between co-thought and co-speech gestures. In N. Taatgen, & H. Van Rijn (Eds.), Proceedings of the Thirty-First Annual Conference of the Cognitive Science Society (pp. 591-595). Austin, TX: Cognitive Science Society.

    Abstract

    When we speak, we spontaneously produce gestures (co-speech gestures). Co-speech gestures and speech production are closely interlinked. However, the exact nature of the link is still under debate. To address the question of whether co-speech gestures originate from the speech production system or from a system independent of speech production, the present study examined the relationship between co-speech and co-thought gestures. Co-thought gestures, produced during silent thinking without speaking, presumably originate from a system independent of the speech production processes. We found a positive correlation between the production frequency of co-thought and co-speech gestures, regardless of the communicative function that co-speech gestures might serve. Therefore, we suggest that co-speech gestures and co-thought gestures originate from a common system that is independent of the speech production processes.
  • Chu, M., & Kita, S. (2008). Spontaneous gestures during mental rotation tasks: Insights into the microdevelopment of the motor strategy. Journal of Experimental Psychology: General, 137, 706-723. doi:10.1037/a0013157.

    Abstract

    This study investigated the motor strategy involved in mental rotation tasks by examining 2 types of spontaneous gestures (hand–object interaction gestures, representing the agentive hand action on an object, vs. object-movement gestures, representing the movement of an object by itself) and different types of verbal descriptions of rotation. Hand–object interaction gestures were produced earlier than object-movement gestures, the rate of both types of gestures decreased, and gestures became more distant from the stimulus object over trials (Experiments 1 and 3). Furthermore, in the first few trials, object-movement gestures increased, whereas hand–object interaction gestures decreased, and this change of motor strategies was also reflected in the type of verbal description of rotation in the concurrent speech (Experiment 2). This change of motor strategies was hampered when gestures were prohibited (Experiment 4). The authors concluded that the motor strategy becomes less dependent on agentive action on the object, and also becomes internalized over the course of the experiment, and that gesture facilitates the former process. When solving a problem regarding the physical world, adults go through developmental processes similar to internalization and symbolic distancing in young children, albeit within a much shorter time span.
  • Chu, M., & Kita, S. (2012). The role of spontaneous gestures in spatial problem solving. In E. Efthimiou, G. Kouroupetroglou, & S.-E. Fotinea (Eds.), Gesture and sign language in human-computer interaction and embodied communication: 9th International Gesture Workshop, GW 2011, Athens, Greece, May 25-27, 2011, revised selected papers (pp. 57-68). Heidelberg: Springer.

    Abstract

    When solving spatial problems, people often spontaneously produce hand gestures. Recent research has shown that our knowledge is shaped by the interaction between our body and the environment. In this article, we review and discuss evidence on: 1) how spontaneous gesture can reveal the development of problem solving strategies when people solve spatial problems; 2) whether producing gestures can enhance spatial problem solving performance. We argue that when solving novel spatial problems, adults go through deagentivization and internalization processes, which are analogous to young children’s cognitive development processes. Furthermore, gesture enhances spatial problem solving performance. The beneficial effect of gesturing can be extended to non-gesturing trials and can be generalized to a different spatial task that shares similar spatial transformation processes.
  • Chu, M., & Kita, S. (2012). The nature of the beneficial role of spontaneous gesture in spatial problem solving [Abstract]. Cognitive Processing; Special Issue "ICSC 2012, the 5th International Conference on Spatial Cognition: Space and Embodied Cognition". Oral Presentations, 13(Suppl. 1), S39.

    Abstract

    Spontaneous gestures play an important role in spatial problem solving. We investigated the functional role and underlying mechanism of spontaneous gestures in spatial problem solving. In Experiment 1, 132 participants were required to solve a mental rotation task (see Figure 1) without speaking. Participants gestured more frequently in difficult trials than in easy trials. In Experiment 2, 66 new participants were given two identical sets of mental rotation problems, like the one used in Experiment 1. Participants who were encouraged to gesture in the first set of mental rotation problems solved more problems correctly than those who were allowed to gesture or those who were prohibited from gesturing, both in the first set and in the second set, in which all participants were prohibited from gesturing. The gestures produced by the gesture-encouraged group and the gesture-allowed group were not qualitatively different. In Experiment 3, 32 new participants were first given a set of mental rotation problems and then a second set of non-gesturing paper folding problems. The gesture-encouraged group solved more problems correctly in the first set of mental rotation problems and the second set of non-gesturing paper folding problems. We concluded that gesture improves spatial problem solving. Furthermore, gesture has a lasting beneficial effect even when gesture is not available, and the beneficial effect is problem-general. We suggested that gesture enhances spatial problem solving by providing a rich sensori-motor representation of the physical world and picking up information that is less readily available to visuo-spatial processes.
  • Chwilla, D., Brown, C. M., & Hagoort, P. (1995). The N400 as a function of the level of processing. Psychophysiology, 32, 274-285. doi:10.1111/j.1469-8986.1995.tb02956.x.

    Abstract

    In a semantic priming paradigm, the effects of different levels of processing on the N400 were assessed by changing the task demands. In the lexical decision task, subjects had to discriminate between words and nonwords and in the physical task, subjects had to discriminate between uppercase and lowercase letters. The proportion of related versus unrelated word pairs differed between conditions. A lexicality test on reaction times demonstrated that the physical task was performed nonlexically. Moreover, a semantic priming reaction time effect was obtained only in the lexical decision task. The level of processing clearly affected the event-related potentials. An N400 priming effect was only observed in the lexical decision task. In contrast, in the physical task a P300 effect was observed for either related or unrelated targets, depending on their frequency of occurrence. Taken together, the results indicate that an N400 priming effect is only evoked when the task performance induces the semantic aspects of words to become part of an episodic trace of the stimulus event.
  • Clahsen, H., Sonnenstuhl, I., Hadler, M., & Eisenbeiss, S. (2008). Morphological paradigms in language processing and language disorders. Transactions of the Philological Society, 99(2), 247-277. doi:10.1111/1467-968X.00082.

    Abstract

    We present results from two cross‐modal morphological priming experiments investigating regular person and number inflection on finite verbs in German. We found asymmetries in the priming patterns between different affixes that can be predicted from the structure of the paradigm. We also report data from language disorders which indicate that inflectional errors produced by language‐impaired adults and children tend to occur within a given paradigm dimension, rather than randomly across the paradigm. We conclude that morphological paradigms are used by the human language processor and can be systematically affected in language disorders.
  • Clark, E. V., & Bowerman, M. (1986). On the acquisition of final voiced stops. In J. A. Fishman (Ed.), The Fergusonian impact: in honor of Charles A. Ferguson on the occasion of his 65th birthday. Volume 1: From phonology to society (pp. 51-68). Berlin: Mouton de Gruyter.
  • Clough, S., & Hilverman, C. (2018). Hand gestures and how they help children learn. Frontiers for Young Minds, 6: 29. doi:10.3389/frym.2018.00029.

    Abstract

    When we talk, we often make hand movements called gestures at the same time. Although just about everyone gestures when they talk, we usually do not even notice the gestures. Our hand gestures play an important role in helping us learn and remember! When we see other people gesturing when they talk—or when we gesture when we talk ourselves—we are more likely to remember the information being talked about than if gestures were not involved. Our hand gestures can even indicate when we are ready to learn new things! In this article, we explain how gestures can help learning. To investigate this, we studied children learning a new mathematical concept called equivalence. We hope that this article will help you notice when you, your friends and family, and your teachers are gesturing, and that it will help you understand how those gestures can help people learn.
  • Cohen, E. (2012). [Review of the book Searching for Africa in Brazil: Power and Tradition in Candomblé by Stefania Capone]. Critique of Anthropology, 32, 217-218. doi:10.1177/0308275X12439961.
  • Cohen, E. (2012). The evolution of tag-based cooperation in humans: The case for accent. Current Anthropology, 53, 588-616. doi:10.1086/667654.

    Abstract

    Recent game-theoretic simulation and analytical models have demonstrated that cooperative strategies mediated by indicators of cooperative potential, or “tags,” can invade, spread, and resist invasion by noncooperators across a range of population-structure and cost-benefit scenarios. The plausibility of these models is potentially relevant for human evolutionary accounts insofar as humans possess some phenotypic trait that could serve as a reliable tag. Linguistic markers, such as accent and dialect, have frequently been either cursorily defended or promptly dismissed as satisfying the criteria of a reliable and evolutionarily viable tag. This paper integrates evidence from a range of disciplines to develop and assess the claim that speech accent mediated the evolution of tag-based cooperation in humans. Existing evidence warrants the preliminary conclusion that accent markers meet the demands of an evolutionarily viable tag and potentially afforded a cost-effective solution to the challenges of maintaining viable cooperative relationships in diffuse, regional social networks.
  • Collins, L. J., & Chen, X. S. (2009). Ancestral RNA: The RNA biology of the eukaryotic ancestor. RNA Biology, 6(5), 495-502. doi:10.4161/rna.6.5.9551.

    Abstract

    Our knowledge of RNA biology within eukaryotes has exploded over the last five years. Within new research we see that some features that were once thought to be part of multicellular life have now been identified in several protist lineages. Hence, it is timely to ask which features of eukaryote RNA biology are ancestral to all eukaryotes. We focus on RNA-based regulation and epigenetic mechanisms that use small regulatory ncRNAs and long ncRNAs, to highlight some of the many questions surrounding eukaryotic ncRNA evolution.
  • Collins, J. (2015). ‘Give’ and semantic maps. In B. Nolan, G. Rawoens, & E. Diedrichsen (Eds.), Causation, permission, and transfer: Argument realisation in GET, TAKE, PUT, GIVE and LET verbs (pp. 129-146). Amsterdam: John Benjamins.
  • Collins, J. (2012). The evolution of the Greenbergian word order correlations. In T. C. Scott-Phillips, M. Tamariz, E. A. Cartmill, & J. R. Hurford (Eds.), The evolution of language. Proceedings of the 9th International Conference (EVOLANG9) (pp. 72-79). Singapore: World Scientific.
  • Colzato, L. S., Zech, H., Hommel, B., Verdonschot, R. G., Van den Wildenberg, W. P. M., & Hsieh, S. (2012). Loving-kindness brings loving-kindness: The impact of Buddhism on cognitive self-other integration. Psychonomic Bulletin & Review, 19(3), 541-545. doi:10.3758/s13423-012-0241-y.

    Abstract

    Common wisdom has it that Buddhism enhances compassion and self-other integration. We put this assumption to empirical test by comparing practicing Taiwanese Buddhists with well-matched atheists. Buddhists showed more evidence of self-other integration in the social Simon task, which assesses the degree to which people co-represent the actions of a coactor. This suggests that self-other integration and task co-representation vary as a function of religious practice.
  • Connell, L., Cai, Z. G., & Holler, J. (2012). Do you see what I'm singing? Visuospatial movement biases pitch perception. In N. Miyake, D. Peebles, & R. P. Cooper (Eds.), Proceedings of the 34th Annual Meeting of the Cognitive Science Society (CogSci 2012) (pp. 252-257). Austin, TX: Cognitive Science Society.

    Abstract

    The nature of the connection between musical and spatial processing is controversial. While pitch may be described in spatial terms such as “high” or “low”, it is unclear whether pitch and space are associated but separate dimensions or whether they share representational and processing resources. In the present study, we asked participants to judge whether a target vocal note was the same as (or different from) a preceding cue note. Importantly, target trials were presented as video clips where a singer sometimes gestured upward or downward while singing that target note, thus providing an alternative, concurrent source of spatial information. Our results show that pitch discrimination was significantly biased by the spatial movement in gesture. These effects were eliminated by spatial memory load but preserved under verbal memory load conditions. Together, our findings suggest that pitch and space have a shared representation such that the mental representation of pitch is audiospatial in nature.
  • Cook, A. E., & Meyer, A. S. (2008). Capacity demands of phoneme selection in word production: New evidence from dual-task experiments. Journal of Experimental Psychology: Learning, Memory, and Cognition, 34, 886-899. doi:10.1037/0278-7393.34.4.886.

    Abstract

    Three dual-task experiments investigated the capacity demands of phoneme selection in picture naming. On each trial, participants named a target picture (Task 1) and carried out a tone discrimination task (Task 2). To vary the time required for phoneme selection, the authors combined the targets with phonologically related or unrelated distractor pictures (Experiment 1) or words, which were clearly visible (Experiment 2) or masked (Experiment 3). When pictures or masked words were presented, the tone discrimination and picture naming latencies were shorter in the related condition than in the unrelated condition, which indicates that phoneme selection requires central processing capacity. However, when the distractor words were clearly visible, the facilitatory effect was confined to the picture naming latencies. This pattern arose because the visible related distractor words facilitated phoneme selection but slowed down speech monitoring processes that had to be completed before the response to the tone could be selected.
  • Cooke, M., & Scharenborg, O. (2008). The Interspeech 2008 consonant challenge. In INTERSPEECH 2008 - 9th Annual Conference of the International Speech Communication Association (pp. 1765-1768). ISCA Archive.

    Abstract

    Listeners outperform automatic speech recognition systems at every level, including the very basic level of consonant identification. What is not clear is where the human advantage originates. Does the fault lie in the acoustic representations of speech or in the recognizer architecture, or in a lack of compatibility between the two? Many insights can be gained by carrying out a detailed human-machine comparison. The purpose of the Interspeech 2008 Consonant Challenge is to promote focused comparisons on a task involving intervocalic consonant identification in noise, with all participants using the same training and test data. This paper describes the Challenge, listener results and baseline ASR performance.
  • Cooper, A., Brouwer, S., & Bradlow, A. R. (2015). Interdependent processing and encoding of speech and concurrent background noise. Attention, Perception & Psychophysics, 77(4), 1342-1357. doi:10.3758/s13414-015-0855-z.

    Abstract

    Speech processing can often take place in adverse listening conditions that involve the mixing of speech and background noise. In this study, we investigated processing dependencies between background noise and indexical speech features, using a speeded classification paradigm (Garner, 1974; Exp. 1), and whether background noise is encoded and represented in memory for spoken words in a continuous recognition memory paradigm (Exp. 2). Whether or not the noise spectrally overlapped with the speech signal was also manipulated. The results of Experiment 1 indicated that background noise and indexical features of speech (gender, talker identity) cannot be completely segregated during processing, even when the two auditory streams are spectrally nonoverlapping. Perceptual interference was asymmetric, whereby irrelevant indexical feature variation in the speech signal slowed noise classification to a greater extent than irrelevant noise variation slowed speech classification. This asymmetry may stem from the fact that speech features have greater functional relevance to listeners, and are thus more difficult to selectively ignore than background noise. Experiment 2 revealed that a recognition cost for words embedded in different types of background noise on the first and second occurrences only emerged when the noise and the speech signal were spectrally overlapping. Together, these data suggest integral processing of speech and background noise, modulated by the level of processing and the spectral separation of the speech and noise.
  • Corcoran, A. W., Alday, P. M., Schlesewsky, M., & Bornkessel-Schlesewsky, I. (2018). Toward a reliable, automated method of individual alpha frequency (IAF) quantification. Psychophysiology, 55(7): e13064. doi:10.1111/psyp.13064.

    Abstract

    Individual alpha frequency (IAF) is a promising electrophysiological marker of interindividual differences in cognitive function. IAF has been linked with trait-like differences in information processing and general intelligence, and provides an empirical basis for the definition of individualized frequency bands. Despite its widespread application, however, there is little consensus on the optimal method for estimating IAF, and many common approaches are prone to bias and inconsistency. Here, we describe an automated strategy for deriving two of the most prevalent IAF estimators in the literature: peak alpha frequency (PAF) and center of gravity (CoG). These indices are calculated from resting-state power spectra that have been smoothed using a Savitzky-Golay filter (SGF). We evaluate the performance characteristics of this analysis procedure in both empirical and simulated EEG data sets. Applying the SGF technique to resting-state data from n = 63 healthy adults furnished 61 PAF and 62 CoG estimates. The statistical properties of these estimates were consistent with previous reports. Simulation analyses revealed that the SGF routine was able to reliably extract target alpha components, even under relatively noisy spectral conditions. The routine consistently outperformed a simpler method of automated peak detection that did not involve spectral smoothing. The SGF technique is fast, open source, and available in two popular programming languages (MATLAB, Python), and thus can easily be integrated within the most popular M/EEG toolsets (EEGLAB, FieldTrip, MNE-Python). As such, it affords a convenient tool for improving the reliability and replicability of future IAF-related research.
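    The smoothing-plus-peak-picking strategy the abstract describes can be sketched in a few lines of Python. The snippet below is a minimal illustration, not the authors' published routine: it assumes a pre-computed resting-state power spectrum, smooths it with SciPy's Savitzky-Golay filter, and then derives the two estimators named in the abstract, peak alpha frequency (PAF) and alpha-band centre of gravity (CoG). The function names, search band, and filter parameters are illustrative choices.

```python
import numpy as np
from scipy.signal import savgol_filter

def peak_alpha_frequency(freqs, psd, f_range=(7.0, 13.0),
                         window=11, polyorder=5):
    """Estimate PAF: smooth the spectrum with a Savitzky-Golay filter,
    then take the largest maximum inside the alpha search band.
    Returns None when the maximum sits on a band edge (no clear peak)."""
    smoothed = savgol_filter(psd, window_length=window, polyorder=polyorder)
    band = (freqs >= f_range[0]) & (freqs <= f_range[1])
    band_freqs, band_psd = freqs[band], smoothed[band]
    peak = int(np.argmax(band_psd))
    if peak in (0, len(band_psd) - 1):  # edge maximum: reject as spurious
        return None
    return float(band_freqs[peak])

def alpha_cog(freqs, psd, f_range=(7.0, 13.0)):
    """Estimate CoG: the power-weighted mean frequency of the alpha band."""
    band = (freqs >= f_range[0]) & (freqs <= f_range[1])
    return float(np.sum(freqs[band] * psd[band]) / np.sum(psd[band]))

# Demo on a synthetic spectrum: 1/f background plus an alpha bump at 10 Hz.
freqs = np.linspace(1.0, 30.0, 291)  # 0.1 Hz resolution
psd = 1.0 / freqs + 0.5 * np.exp(-((freqs - 10.0) ** 2) / 2.0)
paf, cog = peak_alpha_frequency(freqs, psd), alpha_cog(freqs, psd)
```

    On this synthetic spectrum both estimators land close to the simulated 10 Hz component; the CoG sits slightly lower than the PAF because the 1/f background weights the lower half of the band, which is one reason the two indices are reported separately in the literature.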

    Additional information

    psyp13064-sup-0001-s01.docx
  • Coridun, S., Ernestus, M., & Ten Bosch, L. (2015). Learning pronunciation variants in a second language: Orthographic effects. In Scottish consortium for ICPhS 2015, M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). Glasgow: University of Glasgow.

    Abstract

    The present study investigated the effect of orthography on the learning and subsequent processing of pronunciation variants in a second language. Dutch learners of French learned reduced pronunciation variants that result from schwa-zero alternation in French (e.g., reduced /ʃnij/ from chenille 'caterpillar'). Half of the participants additionally learnt the words' spellings, which correspond more closely to the full variants with schwa. On the following day, participants performed an auditory lexical decision task, in which they heard half of the words in their reduced variants, and the other half in their full variants. Participants who had exclusively learnt the auditory forms performed significantly worse on full variants than participants who had also learnt the spellings. This shows that learners integrate phonological and orthographic information to process pronunciation variants. There was no difference between both groups in their performances on reduced variants, suggesting that the exposure to spelling does not impede learners' processing of these variants.
  • Corps, R. E. (2018). Coordinating utterances during conversational dialogue: The role of content and timing predictions. PhD Thesis, The University of Edinburgh, Edinburgh.
  • Corps, R. E., Gambi, C., & Pickering, M. J. (2018). Coordinating utterances during turn-taking: The role of prediction, response preparation, and articulation. Discourse processes, 55(2, SI), 230-240. doi:10.1080/0163853X.2017.1330031.

    Abstract

    During conversation, interlocutors rapidly switch between speaker and listener roles and take turns at talk. How do they achieve such fine coordination? Most research has concentrated on the role of prediction, but listeners must also prepare a response in advance (assuming they wish to respond) and articulate this response at the appropriate moment. Such mechanisms may overlap with the processes of comprehending the speaker’s incoming turn and predicting its end. However, little is known about the stages of response preparation and production. We discuss three questions pertaining to such stages: (1) Do listeners prepare their own response in advance?, (2) Can listeners buffer their prepared response?, and (3) Does buffering lead to interference with concurrent comprehension? We argue that fine coordination requires more than just an accurate prediction of the interlocutor’s incoming turn: Listeners must also simultaneously prepare their own response.
  • Corps, R. E., Crossley, A., Gambi, C., & Pickering, M. J. (2018). Early preparation during turn-taking: Listeners use content predictions to determine what to say but not when to say it. Cognition, 175, 77-95. doi:10.1016/j.cognition.2018.01.015.

    Abstract

    During conversation, there is often little gap between interlocutors’ utterances. In two pairs of experiments, we manipulated the content predictability of yes/no questions to investigate whether listeners achieve such coordination by (i) preparing a response as early as possible or (ii) predicting the end of the speaker’s turn. To assess these two mechanisms, we varied the participants’ task: They either pressed a button when they thought the question was about to end (Experiments 1a and 2a), or verbally answered the questions with either yes or no (Experiments 1b and 2b). Predictability effects were present when participants had to prepare a verbal response, but not when they had to predict the turn-end. These findings suggest content prediction facilitates turn-taking because it allows listeners to prepare their own response early, rather than because it helps them predict when the speaker will reach the end of their turn.

    Additional information

    Supplementary material
  • Crasborn, O. A., Hanke, T., Efthimiou, E., Zwitserlood, I., & Thoutenhooft, E. (Eds.). (2008). Construction and Exploitation of Sign Language Corpora. 3rd Workshop on the Representation and Processing of Sign Languages. Paris: ELDA.
  • Crasborn, O., & Sloetjes, H. (2008). Enhanced ELAN functionality for sign language corpora. In Proceedings of the 3rd Workshop on the Representation and Processing of Sign Languages: Construction and Exploitation of Sign Language Corpora (pp. 39-43).

    Abstract

    The multimedia annotation tool ELAN was enhanced within the Corpus NGT project by a number of new and improved functions. Most of these functions were not specific to working with sign language video data, and can readily be used for other annotation purposes as well. Their direct utility for working with large amounts of annotation files during the development and use of the Corpus NGT project is what unites the various functions, which are described in this paper. In addition, we aim to characterise future developments that will be needed in order to work efficiently with larger amounts of annotation files, for which a closer integration with the use and display of metadata is foreseen.
  • Crasborn, O., & Windhouwer, M. (2012). ISOcat data categories for signed language resources. In E. Efthimiou, G. Kouroupetroglou, & S.-E. Fotinea (Eds.), Gesture and sign language in human-computer interaction and embodied communication: 9th International Gesture Workshop, GW 2011, Athens, Greece, May 25-27, 2011, revised selected papers (pp. 118-128). Heidelberg: Springer.

    Abstract

    As the creation of signed language resources is gaining speed world-wide, the need for standards in this field becomes more acute. This paper discusses the state of the field of signed language resources, their metadata descriptions, and annotations that are typically made. It then describes the role that ISOcat may play in this process and how it can stimulate standardisation without imposing standards. Finally, it makes some initial proposals for the thematic domain ‘sign language’ that was introduced in 2011.
  • Crasborn, O. A., & Zwitserlood, I. (2008). The Corpus NGT: An online corpus for professionals and laymen. In O. A. Crasborn, T. Hanke, E. Efthimiou, I. Zwitserlood, & E. Thoutenhooft (Eds.), Construction and Exploitation of Sign Language Corpora. (pp. 44-49). Paris: ELDA.

    Abstract

    The Corpus NGT is an ambitious effort to record and archive video data from Sign Language of the Netherlands (Nederlandse Gebarentaal: NGT), guaranteeing online access to all interested parties and long-term availability. Data are collected from 100 native signers of NGT of different ages and from various regions in the country. Parts of these data are annotated and/or translated; the annotations and translations are part of the corpus. The Corpus NGT is accommodated in the Browsable Corpus based at the Max Planck Institute for Psycholinguistics. In this paper we share our experiences in data collection, video processing, annotation/translation and licensing involved in building the corpus.
  • Creemers, A., Don, J., & Fenger, P. (2018). Some affixes are roots, others are heads. Natural Language & Linguistic Theory, 36(1), 45-84. doi:10.1007/s11049-017-9372-1.

    Abstract

    A recent debate in the morphological literature concerns the status of derivational affixes. While some linguists (Marantz 1997, 2001; Marvin 2003) consider derivational affixes a type of functional morpheme that realizes a categorial head, others (Lowenstamm 2015; De Belder 2011) argue that derivational affixes are roots. Our proposal, which finds its empirical basis in a study of Dutch derivational affixes, takes a middle position. We argue that there are two types of derivational affixes: some that are roots (i.e. lexical morphemes) and others that are categorial heads (i.e. functional morphemes). Affixes that are roots show ‘flexible’ categorial behavior, are subject to ‘lexical’ phonological rules, and may trigger idiosyncratic meanings. Affixes that realize categorial heads, on the other hand, are categorially rigid, do not trigger ‘lexical’ phonological rules nor allow for idiosyncrasies in their interpretation.
  • Cristia, A. (2008). Cue weighting at different ages. Purdue Linguistics Association Working Papers, 1, 87-105.
  • Cristia, A., & Peperkamp, S. (2012). Generalizing without encoding specifics: Infants infer phonotactic patterns on sound classes. In A. K. Biller, E. Y. Chung, & A. E. Kimball (Eds.), Proceedings of the 36th Annual Boston University Conference on Language Development (BUCLD 36) (pp. 126-138). Somerville, Mass.: Cascadilla Press.

  • Cristia, A., Seidl, A., Vaughn, C., Schmale, R., Bradlow, A., & Floccia, C. (2012). Linguistic processing of accented speech across the lifespan. Frontiers in Psychology, 3, 479. doi:10.3389/fpsyg.2012.00479.

    Abstract

    In most of the world, people have regular exposure to multiple accents. Therefore, learning to quickly process accented speech is a prerequisite to successful communication. In this paper, we examine work on the perception of accented speech across the lifespan, from early infancy to late adulthood. Unfamiliar accents initially impair linguistic processing by infants, children, younger adults, and older adults, but listeners of all ages come to adapt to accented speech. Emergent research also goes beyond these perceptual abilities, by assessing links with production and the relative contributions of linguistic knowledge and general cognitive skills. We conclude by underlining points of convergence across ages, and the gaps that remain to be addressed in future work.
  • Cristia, A., & Seidl, A. (2008). Is infants' learning of sound patterns constrained by phonological features? Language Learning and Development, 4, 203-227. doi:10.1080/15475440802143109.

    Abstract

    Phonological patterns in languages often involve groups of sounds rather than individual sounds, which may be explained if phonology operates on the abstract features shared by those groups (Troubetzkoy, 1939/1969; Chomsky & Halle, 1968). Such abstract features may be present in the developing grammar either because they are part of a Universal Grammar included in the genetic endowment of humans (e.g., Hale, Kissock, & Reiss, 2006), or plausibly because infants induce features from their linguistic experience (e.g., Mielke, 2004). A first experiment tested 7-month-old infants' learning of an artificial grammar pattern involving either a set of sounds defined by a phonological feature, or a set of sounds that cannot be described with a single feature (an “arbitrary” set). Infants were able to induce the constraint and generalize it to a novel sound only for the set that shared the phonological feature. A second study showed that infants' inability to learn the arbitrary grouping was not due to their inability to encode a constraint on some of the sounds involved.
  • Cristia, A., Ganesh, S., Casillas, M., & Ganapathy, S. (2018). Talker diarization in the wild: The case of child-centered daylong audio-recordings. In Proceedings of Interspeech 2018 (pp. 2583-2587). doi:10.21437/Interspeech.2018-2078.

    Abstract

    Speaker diarization (answering 'who spoke when') is a widely researched subject within speech technology. Numerous experiments have been run on datasets built from broadcast news, meeting data, and call centers—the task sometimes appears close to being solved. Much less work has been done to tackle the hardest diarization task of all: spontaneous conversations in real-world settings. Such diarization would be particularly useful for studies of language acquisition, where researchers investigate the speech children produce and hear in their daily lives. In this paper, we study audio gathered with a recorder worn by small children as they went about their normal days. As a result, each child was exposed to different acoustic environments with a multitude of background noises and a varying number of adults and peers. The inconsistency of speech and noise within and across samples poses a challenging task for speaker diarization systems, which we tackled via retraining and data augmentation techniques. We further studied sources of structured variation across raw audio files, including the impact of speaker type distribution, proportion of speech from children, and child age on diarization performance. We discuss the extent to which these findings might generalize to other samples of speech in the wild.
  • Cristia, A., & Seidl, A. (2008). Why cross-linguistic frequency cannot be equated with ease of acquisition. University of Pennsylvania Working Papers in Linguistics, 14(1), 71-82. Retrieved from http://repository.upenn.edu/pwpl/vol14/iss1/6.
  • Croijmans, I., & Majid, A. (2015). Odor naming is difficult, even for wine and coffee experts. In D. C. Noelle, R. Dale, A. S. Warlaumont, J. Yoshimi, T. Matlock, C. D. Jennings, & P. P. Maglio (Eds.), Proceedings of the 37th Annual Meeting of the Cognitive Science Society (CogSci 2015) (pp. 483-488). Austin, TX: Cognitive Science Society. Retrieved from https://mindmodeling.org/cogsci2015/papers/0092/index.html.

    Abstract

    Odor naming is difficult for people, but recent cross-cultural research suggests this difficulty is culture-specific. Jahai speakers (hunter-gatherers from the Malay Peninsula) name odors as consistently as colors, and much better than English speakers (Majid & Burenhult, 2014). In Jahai the linguistic advantage for smells correlates with a cultural interest in odors. Here we ask whether sub-cultures in the West with odor expertise also show superior odor naming. We tested wine and coffee experts (who have specialized odor training) in an odor naming task. Both wine and coffee experts were no more accurate or consistent than novices when naming odors. Although there were small differences in naming strategies, experts and non-experts alike relied overwhelmingly on source-based descriptions. So the specific language experts speak continues to constrain their ability to express odors. This suggests expertise alone is not sufficient to overcome the limits of language in the domain of smell.
  • Croijmans, I. (2018). Wine expertise shapes olfactory language and cognition. PhD Thesis, Radboud University, Nijmegen.
  • Cronin, K. A., De Groot, E., & Stevens, J. M. G. (2015). Bonobos show limited social tolerance in a group setting: A comparison with chimpanzees and a test of the Relational Model. Folia primatologica, 86, 164-177. doi:10.1159/000373886.

    Abstract

    Social tolerance is a core aspect of primate social relationships with implications for the evolution of cooperation, prosociality and social learning. We measured the social tolerance of bonobos in an experiment recently validated with chimpanzees to allow for a comparative assessment of group-level tolerance, and found that the bonobo group studied here exhibited lower social tolerance on average than chimpanzees. Furthermore, following the Relational Model [de Waal, 1996], we investigated whether bonobos responded to an increased potential for social conflict with tolerance, conflict avoidance or conflict escalation, and found that only behaviours indicative of conflict escalation differed across conditions. Taken together, these findings contribute to the current debate over the level of social tolerance of bonobos and lend support to the position that the social tolerance of bonobos may not be notably high compared with other primates.
  • Cronin, K. A., Schroeder, K. K. E., Rothwell, E. S., Silk, J. B., & Snowdon, C. T. (2009). Cooperatively breeding cottontop tamarins (Saguinus oedipus) do not donate rewards to their long-term mates. Journal of Comparative Psychology, 123(3), 231-241. doi:10.1037/a0015094.

    Abstract

    This study tested the hypothesis that cooperative breeding facilitates the emergence of prosocial behavior by presenting cottontop tamarins (Saguinus oedipus) with the option to provide food rewards to pair-bonded mates. In Experiment 1, tamarins could provide rewards to mates at no additional cost while obtaining rewards for themselves. Contrary to the hypothesis, tamarins did not demonstrate a preference to donate rewards, behaving similarly to chimpanzees in previous studies. In Experiment 2, the authors eliminated rewards for the donor for a stricter test of prosocial behavior, while reducing separation distress and food preoccupation. Again, the authors found no evidence for a donation preference. Furthermore, tamarins were significantly less likely to deliver rewards to mates when the mate displayed interest in the reward. The results of this study contrast with those recently reported for cooperatively breeding common marmosets, and indicate that prosocial preferences in a food donation task do not emerge in all cooperative breeders. In previous studies, cottontop tamarins have cooperated and reciprocated to obtain food rewards; the current findings sharpen understanding of the boundaries of cottontop tamarins’ food-provisioning behavior.
  • Cronin, K. A. (2012). Cognitive aspects of prosocial behavior in nonhuman primates. In N. M. Seel (Ed.), Encyclopedia of the sciences of learning. Part 3 (2nd ed., pp. 581-583). Berlin: Springer.

    Abstract

    Definition: Prosocial behavior is any behavior performed by one individual that results in a benefit for another individual. Prosocial motivations, prosocial preferences, or other-regarding preferences refer to the psychological predisposition to behave in the best interest of another individual. A behavior need not be costly to the actor to be considered prosocial; the concept is thus distinct from altruistic behavior, which requires that the actor incur some cost when providing a benefit to another.
  • Cronin, K. A., Acheson, D. J., Hernández, P., & Sánchez, A. (2015). Hierarchy is Detrimental for Human Cooperation. Scientific Reports, 5: 18634. doi:10.1038/srep18634.

    Abstract

    Studies of animal behavior consistently demonstrate that the social environment impacts cooperation, yet the effect of social dynamics has been largely excluded from studies of human cooperation. Here, we introduce a novel approach inspired by nonhuman primate research to address how social hierarchies impact human cooperation. Participants competed to earn hierarchy positions and then could cooperate with another individual in the hierarchy by investing in a common effort. Cooperation was achieved if the combined investments exceeded a threshold, and the higher ranked individual distributed the spoils unless control was contested by the partner. Compared to a condition lacking hierarchy, cooperation declined in the presence of a hierarchy due to a decrease in investment by lower ranked individuals. Furthermore, hierarchy was detrimental to cooperation regardless of whether it was earned or arbitrary. These findings mirror results from nonhuman primates and demonstrate that hierarchies are detrimental to cooperation. However, these results deviate from nonhuman primate findings by demonstrating that human behavior is responsive to changing hierarchical structures, suggesting partnership dynamics that may improve cooperation. This work introduces a controlled way to investigate the social influences on human behavior, and demonstrates the evolutionary continuity of human behavior with other primate species.
  • Cronin, K. A. (2012). Prosocial behaviour in animals: The influence of social relationships, communication and rewards. Animal Behaviour, 84, 1085-1093. doi:10.1016/j.anbehav.2012.08.009.

    Abstract

    Researchers have struggled to obtain a clear account of the evolution of prosocial behaviour despite a great deal of recent effort. The aim of this review is to take a brief step back from addressing the question of evolutionary origins of prosocial behaviour in order to identify contextual factors that are contributing to variation in the expression of prosocial behaviour and hindering progress towards identifying phylogenetic patterns. Most available data come from the Primate Order, and the choice of contextual factors to consider was informed by theory and practice, including the nature of the relationship between the potential donor and recipient, the communicative behaviour of the recipients, and features of the prosocial task including whether rewards are visible and whether the prosocial choice creates an inequity between actors. Conclusions are drawn about the facilitating or inhibiting impact of each of these factors on the expression of prosocial behaviour, and areas for future research are highlighted. Acknowledging the impact of these contextual features on the expression of prosocial behaviours should stimulate new research into the proximate mechanisms that drive these effects, yield experimental designs that better control for potential influences on prosocial expression, and ultimately allow progress towards reconstructing the evolutionary origins of prosocial behaviour.
  • Cronin, K. A., & Snowdon, C. T. (2008). The effects of unequal reward distributions on cooperative problem solving by cottontop tamarins, Saguinus oedipus. Animal Behaviour, 75, 245-257. doi:10.1016/j.anbehav.2007.04.032.

    Abstract

    Cooperation among nonhuman animals has been the topic of much theoretical and empirical research, but few studies have examined systematically the effects of various reward payoffs on cooperative behaviour. Here, we presented heterosexual pairs of cooperatively breeding cottontop tamarins with a cooperative problem-solving task. In a series of four experiments, we examined how the tamarins’ cooperative performance changed under conditions in which (1) both actors were mutually rewarded, (2) both actors were rewarded reciprocally across days, (3) both actors competed for a monopolizable reward and (4) one actor repeatedly delivered a single reward to the other actor. The tamarins showed sensitivity to the reward structure, showing the greatest percentage of trials solved and shortest latency to solve the task in the mutual reward experiment and the lowest percentage of trials solved and longest latency to solve the task in the experiment in which one actor was repeatedly rewarded. However, even in the experiment in which the fewest trials were solved, the tamarins still solved 46 ± 12% of trials and little to no aggression was observed among partners following inequitable reward distributions. The tamarins did, however, show selfish motivation in each of the experiments. Nevertheless, in all experiments, unrewarded individuals continued to cooperate and procure rewards for their social partners.
  • Cronin, K. A., & Sanchez, A. (2012). Social dynamics and cooperation: The case of nonhuman primates and its implications for human behavior. Advances in complex systems, 15, 1250066. doi:10.1142/S021952591250066X.

    Abstract

    The social factors that influence cooperation have remained largely uninvestigated but have the potential to explain much of the variation in cooperative behavior observed in the natural world. We show here that certain dimensions of the social environment, namely the size of the social group, the degree of social tolerance expressed, the structure of the dominance hierarchy, and the patterns of dispersal, may influence the emergence and stability of cooperation in predictable ways. Furthermore, the social environment experienced by a species over evolutionary time will have shaped their cognition to provide certain strengths and strategies that are beneficial in their species’ social world. These cognitive adaptations will in turn impact the likelihood of cooperating in a given social environment. Experiments with one primate species, the cottontop tamarin, illustrate how social dynamics may influence emergence and stability of cooperative behavior in this species. We then take a more general viewpoint and argue that the hypotheses presented here require further experimental work and the addition of quantitative modeling to obtain a better understanding of how social dynamics influence the emergence and stability of cooperative behavior in complex systems. We conclude by pointing out subsequent specific directions for models and experiments that will allow relevant advances in the understanding of the emergence of cooperation.
  • Croxson, P., Forkel, S. J., Cerliani, L., & Thiebaut De Schotten, M. (2018). Structural Variability Across the Primate Brain: A Cross-Species Comparison. Cerebral Cortex, 28(11), 3829-3841. doi:10.1093/cercor/bhx244.

    Abstract

    A large amount of variability exists across human brains; revealed initially on a small scale by postmortem studies and, more recently, on a larger scale with the advent of neuroimaging. Here we compared structural variability between human and macaque monkey brains using grey and white matter magnetic resonance imaging measures. The monkey brain was overall structurally as variable as the human brain, but variability had a distinct distribution pattern, with some key areas showing high variability. We also report the first evidence of a relationship between anatomical variability and evolutionary expansion in the primate brain. This suggests a relationship between variability and stability, where areas of low variability may have evolved less recently and have more stability, while areas of high variability may have evolved more recently and be less similar across individuals. We showed specific differences between the species in key areas, including the amount of hemispheric asymmetry in variability, which was left-lateralized in the human brain across several phylogenetically recent regions. This suggests that cerebral variability may be another useful measure for comparison between species and may add another dimension to our understanding of evolutionary mechanisms.
  • Cutfield, S. (2012). Demonstratives in Dalabon: A language of southwestern Arnhem Land. PhD Thesis, Monash University, Melbourne.

    Abstract

    This study is a comprehensive description of the nominal demonstratives in Dalabon, a severely endangered Gunwinyguan non-Pama-Nyungan language of southwestern Arnhem Land, northern Australia. Demonstratives are attested in the basic vocabulary of every language, yet remain heretofore underdescribed in Australian languages. Traditional definitions of demonstratives as primarily making spatial reference have recently evolved at a great pace, with close analyses of demonstratives-in-use revealing that their use in spatial reference, in narrative discourse, and in interaction is significantly more complex than previously assumed, and that definitions of demonstrative forms are best developed after consideration of their use across these contexts. The present study reinforces findings of complexity in demonstrative use, and the significance of a multidimensional characterization of demonstrative forms. This study is therefore a contribution to the description of Dalabon, to the analysis of demonstratives in Australian languages, and to the theory and typology of demonstratives cross-linguistically. In this study, I present a multi-dimensional analysis of Dalabon demonstratives, using a variety of theoretical frameworks and research tools including descriptive linguistics, lexical-functional grammar, discourse analysis, gesture studies and pragmatics. Using data from personal narratives, improvised interactions and elicitation sessions to investigate the demonstratives, this study takes into account their morphosyntactic distribution, uses in the speech situation, interactional factors, discourse phenomena, concurrent gesture, and uses in personal narratives. I conclude with a unified account of the intensional and extensional semantics of each form surveyed. The Dalabon demonstrative paradigm divides into two types, those which are spatially-specific and those which are non-spatial. The spatially-specific demonstratives nunda ‘this (in the here-space)’ and djakih ‘that (in the there-space)’ are shown not to encode the location of the referent per se, rather its relative position to dynamic physical and social elements of the speech situation such as the speaker’s engagement area and here-space. Both forms are also used as spatial adverbs to mean ‘here’ and ‘there’ respectively, while only nunda is also used as a temporal adverb ‘now, today’. The spatially-specific demonstratives are limited to situational use in narratives. The non-spatial demonstratives kanh/kanunh ‘that (identifiable)’ and nunh ‘that (unfamiliar, contrastive)’ are used in both the speech situation and personal narratives to index referents as ‘identifiable’ or ‘unfamiliar’ respectively. Their use in the speech situation can conversationally implicate that the referent is distal. The non-spatial demonstratives display the greatest diversity of use in narratives, each specializing for certain uses, yet their wide distribution across discourse usage types can be described on account of their intensional semantics. The findings of greatest typological interest in this study are that speakers’ choice of demonstrative in the speech situation is influenced by multiple simultaneous deictic parameters (including gesture); that oppositions in the Dalabon demonstrative paradigm are not equal, nor exclusively semantic; that the form nunh ‘that (unfamiliar, contrastive)’ is used to index a referent as somewhat inaccessible or unexpected; that the ‘recognitional’ form kanh/kanunh is instead described as ‘identifiable’; and that speakers use demonstratives to index emotional deixis to a referent, or to their addressee.
  • Cutfield, S. (2012). Foreword. Australian Journal of Linguistics, 32(4), 457-458.
  • Cutfield, S. (2012). Principles of Dalabon plant and animal names and classification. In D. Bordulk, N. Dalak, M. Tukumba, L. Bennett, R. Bordro Tingey, M. Katherine, S. Cutfield, M. Pamkal, & G. Wightman (Eds.), Dalabon plants and animals: Aboriginal biocultural knowledge from Southern Arnhem Land, North Australia (pp. 11-12). Palmerston, NT, Australia: Department of Land and Resource Management, Northern Territory.
  • Cutler, A. (2008). The abstract representations in speech processing. Quarterly Journal of Experimental Psychology, 61(11), 1601-1619. doi:10.1080/13803390802218542.

    Abstract

    Speech processing by human listeners derives meaning from acoustic input via intermediate steps involving abstract representations of what has been heard. Recent results from several lines of research are here brought together to shed light on the nature and role of these representations. In spoken-word recognition, representations of phonological form and of conceptual content are dissociable. This follows from the independence of patterns of priming for a word's form and its meaning. The nature of the phonological-form representations is determined not only by acoustic-phonetic input but also by other sources of information, including metalinguistic knowledge. This follows from evidence that listeners can store two forms as different without showing any evidence of being able to detect the difference in question when they listen to speech. The lexical representations are in turn separate from prelexical representations, which are also abstract in nature. This follows from evidence that perceptual learning about speaker-specific phoneme realization, induced on the basis of a few words, generalizes across the whole lexicon to inform the recognition of all words containing the same phoneme. The efficiency of human speech processing has its basis in the rapid execution of operations over abstract representations.
  • Cutler, A., McQueen, J. M., Butterfield, S., & Norris, D. (2008). Prelexically-driven perceptual retuning of phoneme boundaries. In Proceedings of Interspeech 2008 (pp. 2056-2056).

    Abstract

    Listeners heard an ambiguous /f-s/ in nonword contexts where only one of /f/ or /s/ was legal (e.g., frul/*srul or *fnud/snud). In later categorisation of a phonetic continuum from /f/ to /s/, their category boundaries had shifted; hearing -rul led to expanded /f/ categories, -nud expanded /s/. Thus phonotactic sequence information alone induces perceptual retuning of phoneme category boundaries; lexical access is not required.
  • Ip, M. H. K., & Cutler, A. (2018). Asymmetric efficiency of juncture perception in L1 and L2. In K. Klessa, J. Bachan, A. Wagner, M. Karpiński, & D. Śledziński (Eds.), Proceedings of Speech Prosody 2018 (pp. 289-296). Baixas, France: ISCA. doi:10.21437/SpeechProsody.2018-59.

    Abstract

    In two experiments, Mandarin listeners resolved potential syntactic ambiguities in spoken utterances in (a) their native language (L1) and (b) English which they had learned as a second language (L2). A new disambiguation task was used, requiring speeded responses to select the correct meaning for structurally ambiguous sentences. Importantly, the ambiguities used in the study are identical in Mandarin and in English, and production data show that prosodic disambiguation of this type of ambiguity is also realised very similarly in the two languages. The perceptual results here showed however that listeners’ response patterns differed for L1 and L2, although there was a significant increase in similarity between the two response patterns with increasing exposure to the L2. Thus identical ambiguity and comparable disambiguation patterns in L1 and L2 do not lead to immediate application of the appropriate L1 listening strategy to L2; instead, it appears that such a strategy may have to be learned anew for the L2.
  • Cutler, A., & Fear, B. D. (1991). Categoricality in acceptability judgements for strong versus weak vowels. In J. Llisterri (Ed.), Proceedings of the ESCA Workshop on Phonetics and Phonology of Speaking Styles (pp. 18.1-18.5). Barcelona, Catalonia: Universitat Autonoma de Barcelona.

    Abstract

    A distinction between strong and weak vowels can be drawn on the basis of vowel quality, of stress, or of both factors. An experiment was conducted in which sets of contextually matched word-initial vowels ranging from clearly strong to clearly weak were cross-spliced, and the naturalness of the resulting words was rated by listeners. The ratings showed that in general cross-spliced words were only significantly less acceptable than unspliced words when schwa was not involved; this supports a categorical distinction based on vowel quality.
  • Cutler, A., Sebastian-Galles, N., Soler-Vilageliu, O., & Van Ooijen, B. (2000). Constraints of vowels and consonants on lexical selection: Cross-linguistic comparisons. Memory & Cognition, 28, 746-755.

    Abstract

    Languages differ in the constitution of their phonemic repertoire and in the relative distinctiveness of phonemes within the repertoire. In the present study, we asked whether such differences constrain spoken-word recognition, via two word reconstruction experiments, in which listeners turned non-words into real words by changing single sounds. The experiments were carried out in Dutch (which has a relatively balanced vowel-consonant ratio and many similar vowels) and in Spanish (which has many more consonants than vowels and high distinctiveness among the vowels). Both Dutch and Spanish listeners responded significantly faster and more accurately when required to change vowels as opposed to consonants; when allowed to change any phoneme, they more often altered vowels than consonants. Vowel information thus appears to constrain lexical selection less tightly (allow more potential candidates) than does consonant information, independent of language-specific phoneme repertoire and of relative distinctiveness of vowels.
  • Ip, M. H. K., & Cutler, A. (2018). Cue equivalence in prosodic entrainment for focus detection. In J. Epps, J. Wolfe, J. Smith, & C. Jones (Eds.), Proceedings of the 17th Australasian International Conference on Speech Science and Technology (pp. 153-156).

    Abstract

    Using a phoneme detection task, the present series of experiments examines whether listeners can entrain to different combinations of prosodic cues to predict where focus will fall in an utterance. The stimuli were recorded by four female native speakers of Australian English who happened to have used different prosodic cues to produce sentences with prosodic focus: a combination of duration cues, mean and maximum F0, F0 range, and a longer pre-target interval before the focused word onset; only mean F0 cues; only pre-target interval; and only duration cues. Results revealed that listeners can entrain in almost every condition except where duration was the only reliable cue. Our findings suggest that listeners are flexible in the cues they use for focus processing.
  • Cutler, A., & Van de Weijer, J. (2000). De ontdekking van de eerste woorden. Stem-, Spraak- en Taalpathologie, 9, 245-259.

    Abstract

    Speech is continuous: there are no reliable signals by which the listener knows where one word ends and the next begins. For adult listeners, segmenting spoken language into separate words is thus not unproblematic, but for a child that does not yet possess a vocabulary, the continuity of speech poses an even greater challenge. Nevertheless, most children produce their first recognizable words around the beginning of the second year of life. These early speech productions are preceded by a formidable perceptual achievement. During the first year of life - particularly during the second half - speech perception develops from a general phonetic discrimination ability into a selective sensitivity to the phonological contrasts that occur in the native language. Recent research has further shown that children, long before they can say even a single word, are able to distinguish words that are characteristic of their native language from words that are not. Moreover, they can recognize words first presented in isolation within a continuous speech context. The daily language input to a child of this age does not exactly make this easy, for instance because most words do not occur in isolation. Yet the child is also offered support, among other things because word usage is restricted.
  • Cutler, A. (2012). Eentaalpsychologie is geen taalpsychologie: Part II. [Valedictory lecture Radboud University]. Nijmegen: Radboud University.

    Abstract

    Lecture delivered on the occasion of her retirement as Professor of Comparative Psycholinguistics at the Faculty of Social Sciences of Radboud University Nijmegen, Thursday 20 September 2012
  • Cutler, A., & Davis, C. (2012). An orthographic effect in phoneme processing, and its limitations. Frontiers in Psychology, 3, 18. doi:10.3389/fpsyg.2012.00018.

    Abstract

    To examine whether lexically stored knowledge about spelling influences phoneme evaluation, we conducted three experiments with a low-level phonetic judgement task: phoneme goodness rating. In each experiment, listeners heard phonetic tokens varying along a continuum centred on /s/, occurring finally in isolated word or nonword tokens. An effect of spelling appeared in Experiment 1: Native English speakers’ goodness ratings for the best /s/ tokens were significantly higher in words spelled with S (e.g., bless) than in words spelled with C (e.g., voice). No such difference appeared when nonnative speakers rated the same materials in Experiment 2, indicating that the difference could not be due to acoustic characteristics of the S- versus C-words. In Experiment 3, nonwords with lexical neighbours consistently spelled with S (e.g., pless) versus with C (e.g., floice) failed to elicit orthographic neighbourhood effects; no significant difference appeared in native English speakers’ ratings for the S-consistent versus the C-consistent sets. Obligatory influence of lexical knowledge on phonemic processing would have predicted such neighbourhood effects; the findings are thus better accommodated by models in which phonemic decisions draw strategically upon lexical information.
  • Cutler, A., Garcia Lecumberri, M. L., & Cooke, M. (2008). Consonant identification in noise by native and non-native listeners: Effects of local context. Journal of the Acoustical Society of America, 124(2), 1264-1268. doi:10.1121/1.2946707.

    Abstract

    Speech recognition in noise is harder in second (L2) than first languages (L1). This could be because noise disrupts speech processing more in L2 than L1, or because L1 listeners recover better though disruption is equivalent. Two similar prior studies produced discrepant results: Equivalent noise effects for L1 and L2 (Dutch) listeners, versus larger effects for L2 (Spanish) than L1. To explain this, the latter experiment was presented to listeners from the former population. Larger noise effects on consonant identification emerged for L2 (Dutch) than L1 listeners, suggesting that task factors rather than L2 population differences underlie the results discrepancy.
  • Cutler, A., Burchfield, L. A., & Antoniou, M. (2018). Factors affecting talker adaptation in a second language. In J. Epps, J. Wolfe, J. Smith, & C. Jones (Eds.), Proceedings of the 17th Australasian International Conference on Speech Science and Technology (pp. 33-36).

    Abstract

    Listeners adapt rapidly to previously unheard talkers by adjusting phoneme categories using lexical knowledge, in a process termed lexically-guided perceptual learning. Although this is firmly established for listening in the native language (L1), perceptual flexibility in second languages (L2) is as yet less well understood. We report two experiments examining L1 and L2 perceptual learning, the first in Mandarin-English late bilinguals, the second in Australian learners of Mandarin. Both studies showed stronger learning in L1; in L2, however, learning appeared for the English-L1 group but not for the Mandarin-L1 group. Phonological mapping differences from the L1 to the L2 are suggested as the reason for this result.
  • Cutler, A. (1986). Forbear is a homophone: Lexical prosody does not constrain lexical access. Language and Speech, 29, 201-220.

    Abstract

    Because stress can occur in any position within an English word, lexical prosody could serve as a minimal distinguishing feature between pairs of words. However, most pairs of English words with stress pattern opposition also differ vocalically: OBject and obJECT, CONtent and conTENT have different vowels in their first syllables as well as different stress patterns. To test whether prosodic information is made use of in auditory word recognition independently of segmental phonetic information, it is necessary to examine pairs like FORbear – forBEAR or TRUSty – trusTEE, semantically unrelated words which exhibit stress pattern opposition but no segmental difference. In a cross-modal priming task, such words produce the priming effects characteristic of homophones, indicating that lexical prosody is not used in the same way as segmental structure to constrain lexical access.
  • Cutler, A. (2000). How the ear comes to hear. In New Trends in Modern Linguistics [Part of Annual catalogue series] (pp. 6-10). Tokyo, Japan: Maruzen Publishers.
  • Cutler, A. (2009). Greater sensitivity to prosodic goodness in non-native than in native listeners. Journal of the Acoustical Society of America, 125, 3522-3525. doi:10.1121/1.3117434.

    Abstract

    English listeners largely disregard suprasegmental cues to stress in recognizing words. Evidence for this includes the demonstration of Fear et al. [J. Acoust. Soc. Am. 97, 1893–1904 (1995)] that cross-splicings are tolerated between stressed and unstressed full vowels (e.g., au- of autumn, automata). Dutch listeners, however, do exploit suprasegmental stress cues in recognizing native-language words. In this study, Dutch listeners were presented with English materials from the study of Fear et al. Acceptability ratings by these listeners revealed sensitivity to suprasegmental mismatch, in particular, in replacements of unstressed full vowels by higher-stressed vowels, thus evincing greater sensitivity to prosodic goodness than had been shown by the original native listener group.
  • Cutler, A. (2000). Hoe het woord het oor verovert. In Voordrachten uitgesproken tijdens de uitreiking van de SPINOZA-premies op 15 februari 2000 (pp. 29-41). The Hague, The Netherlands: Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO).
  • Cutler, A. (2015). Lexical stress in English pronunciation. In M. Reed, & J. M. Levis (Eds.), The Handbook of English Pronunciation (pp. 106-124). Chichester: Wiley.
  • Cutler, A. (1991). Linguistic rhythm and speech segmentation. In J. Sundberg, L. Nord, & R. Carlson (Eds.), Music, language, speech and brain (pp. 157-166). London: Macmillan.
  • Cutler, A., & Farrell, J. (2018). Listening in first and second language. In J. I. Liontas (Ed.), The TESOL encyclopedia of language teaching. New York: Wiley. doi:10.1002/9781118784235.eelt0583.

    Abstract

    Listeners' recognition of spoken language involves complex decoding processes: The continuous speech stream must be segmented into its component words, and words must be recognized despite great variability in their pronunciation (due to talker differences, or to influence of phonetic context, or to speech register) and despite competition from many spuriously present forms supported by the speech signal. L1 listeners deal more readily with all levels of this complexity than L2 listeners. Fortunately, the decoding processes necessary for competent L2 listening can be taught in the classroom. Evidence-based methodologies targeted at the development of efficient speech decoding include teaching of minimal pairs, of phonotactic constraints, and of reduction processes, as well as the use of dictation and L2 video captions.
  • Cutler, A. (2012). Native listening: Language experience and the recognition of spoken words. Cambridge, MA: MIT Press.

    Abstract

    Understanding speech in our native tongue seems natural and effortless; listening to speech in a nonnative language is a different experience. In this book, Anne Cutler argues that listening to speech is a process of native listening because so much of it is exquisitely tailored to the requirements of the native language. Her cross-linguistic study (drawing on experimental work in languages that range from English and Dutch to Chinese and Japanese) documents what is universal and what is language specific in the way we listen to spoken language. Cutler describes the formidable range of mental tasks we carry out, all at once, with astonishing speed and accuracy, when we listen. These include evaluating probabilities arising from the structure of the native vocabulary, tracking information to locate the boundaries between words, paying attention to the way the words are pronounced, and assessing not only the sounds of speech but prosodic information that spans sequences of sounds. She describes infant speech perception, the consequences of language-specific specialization for listening to other languages, the flexibility and adaptability of listening (to our native languages), and how language-specificity and universality fit together in our language processing system. Drawing on her four decades of work as a psycholinguist, Cutler documents the recent growth in our knowledge about how spoken-word recognition works and the role of language structure in this process. Her book is a significant contribution to a vibrant and rapidly developing field.
  • Cutler, A. (2012). Native listening: The flexibility dimension. Dutch Journal of Applied Linguistics, 1(2), 169-187.

    Abstract

    The way we listen to spoken language is tailored to the specific benefit of native-language speech input. Listening to speech in non-native languages can be significantly hindered by this native bias. Is it possible to determine the degree to which a listener is listening in a native-like manner? Promising indications of how this question may be tackled are provided by new research findings concerning the great flexibility that characterises listening to the L1, in online adjustment of phonetic category boundaries for adaptation across talkers, and in modulation of lexical dynamics for adjustment across listening conditions. This flexibility pays off in many dimensions, including listening in noise, adaptation across dialects, and identification of voices. These findings further illuminate the robustness and flexibility of native listening, and potentially point to ways in which we might begin to assess degrees of ‘native-likeness’ in this skill.
  • Cutler, A., Davis, C., & Kim, J. (2009). Non-automaticity of use of orthographic knowledge in phoneme evaluation. In Proceedings of the 10th Annual Conference of the International Speech Communication Association (Interspeech 2009) (pp. 380-383). Causal Productions Pty Ltd.

    Abstract

    Two phoneme goodness rating experiments addressed the role of orthographic knowledge in the evaluation of speech sounds. Ratings for the best tokens of /s/ were higher in words spelled with S (e.g., bless) than in words where /s/ was spelled with C (e.g., voice). This difference did not appear for analogous nonwords for which every lexical neighbour had either S or C spelling (pless, floice). Models of phonemic processing incorporating obligatory influence of lexical information in phonemic processing cannot explain this dissociation; the data are consistent with models in which phonemic decisions are not subject to necessary top-down lexical influence.
  • Cutler, A., & Chen, H.-C. (1995). Phonological similarity effects in Cantonese word recognition. In K. Elenius, & P. Branderud (Eds.), Proceedings of the Thirteenth International Congress of Phonetic Sciences: Vol. 1 (pp. 106-109). Stockholm: Stockholm University.

    Abstract

    Two lexical decision experiments in Cantonese are described in which the recognition of spoken target words as a function of phonological similarity to a preceding prime is investigated. Phonological similarity in first syllables produced inhibition, while similarity in second syllables led to facilitation. Differences between syllables in tonal and segmental structure had generally similar effects.
  • Cutler, A. (1986). Phonological structure in speech recognition. Phonology Yearbook, 3, 161-178. Retrieved from http://www.jstor.org/stable/4615397.

    Abstract

    Two bodies of recent research from experimental psycholinguistics are summarised, each of which is centred upon a concept from phonology: LEXICAL STRESS and the SYLLABLE. The evidence indicates that neither construct plays a role in prelexical representations during speech recognition. Both constructs, however, are well supported by other performance evidence. Testing phonological claims against performance evidence from psycholinguistics can be difficult, since the results of studies designed to test processing models are often of limited relevance to phonological theory.
  • Cutler, A., Otake, T., & Bruggeman, L. (2012). Phonologically determined asymmetries in vocabulary structure across languages. Journal of the Acoustical Society of America, 132(2), EL155-EL160. doi:10.1121/1.4737596.

    Abstract

    Studies of spoken-word recognition have revealed that competition from embedded words differs in strength as a function of where in the carrier word the embedded word is found and have further shown embedding patterns to be skewed such that embeddings in initial position in carriers outnumber embeddings in final position. Lexico-statistical analyses show that this skew is highly attenuated in Japanese, a noninflectional language. Comparison of the extent of the asymmetry in the three Germanic languages English, Dutch, and German allows the source to be traced to a combination of suffixal morphology and vowel reduction in unstressed syllables.
  • Cutler, A. (1991). Proceed with caution. New Scientist, (1799), 53-54.
  • Cutler, A., McQueen, J. M., & Zondervan, R. (2000). Proceedings of SWAP (Workshop on Spoken Word Access Processes). Nijmegen: MPI for Psycholinguistics.
  • Cutler, A., & Swinney, D. A. (1986). Prosody and the development of comprehension. Journal of Child Language, 14, 145-167.

    Abstract

    Four studies are reported in which young children’s response time to detect word targets was measured. Children under about six years of age did not show the response time advantage for accented target words which adult listeners show. When the semantic focus of the target word was manipulated independently of accent, children of about five years of age showed an adult-like response time advantage for focussed targets, but children younger than five did not. It is argued that the processing advantage for accented words reflects the semantic role of accent as an expression of sentence focus. Processing advantages for accented words depend on the prior development of representations of sentence semantic structure, including the concept of focus. The previous literature on the development of prosodic competence shows an apparent anomaly in that young children’s productive skills appear to outstrip their receptive skills; however, this anomaly disappears if very young children’s prosody is assumed to be produced without an underlying representation of the relationship between prosody and semantics.
  • Cutler, A. (1991). Prosody in situations of communication: Salience and segmentation. In Proceedings of the Twelfth International Congress of Phonetic Sciences: Vol. 1 (pp. 264-270). Aix-en-Provence: Université de Provence, Service des publications.

    Abstract

    Speakers and listeners have a shared goal: to communicate. The processes of speech perception and of speech production interact in many ways under the constraints of this communicative goal; such interaction is as characteristic of prosodic processing as of the processing of other aspects of linguistic structure. Two of the major uses of prosodic information in situations of communication are to encode salience and segmentation, and these themes unite the contributions to the symposium introduced by the present review.
  • Cutler, A. (2009). Psycholinguistics in our time. In P. Rabbitt (Ed.), Inside psychology: A science over 50 years (pp. 91-101). Oxford: Oxford University Press.
  • Cutler, A. (2000). Real words, phantom words and impossible words. In D. Burnham, S. Luksaneeyanawin, C. Davis, & M. Lafourcade (Eds.), Interdisciplinary approaches to language processing: The international conference on human and machine processing of language and speech (pp. 32-42). Bangkok: NECTEC.
  • Cutler, A. (2015). Representation of second language phonology. Applied Psycholinguistics, 36(1), 115-128. doi:10.1017/S0142716414000459.

    Abstract

    Orthographies encode phonological information only at the level of words (chiefly, the information encoded concerns phonetic segments; in some cases, tonal information or default stress may be encoded). Of primary interest to second language (L2) learners is whether orthography can assist in clarifying L2 phonological distinctions that are particularly difficult to perceive (e.g., where one native-language phonemic category captures two L2 categories). A review of spoken-word recognition evidence suggests that orthographic information can install knowledge of such a distinction in lexical representations but that this does not affect learners’ ability to perceive the phonemic distinction in speech. Words containing the difficult phonemes become even harder for L2 listeners to recognize, because perception maps less accurately to lexical content.
  • Cutler, A. (1975). Sentence stress and sentence comprehension. PhD Thesis, University of Texas, Austin.
  • Cutler, A. (1995). Spoken word recognition and production. In J. L. Miller, & P. D. Eimas (Eds.), Speech, language and communication (pp. 97-136). New York: Academic Press.

    Abstract

    This chapter highlights that most language behavior consists of speaking and listening. The chapter also reveals differences and similarities between speaking and listening. The laboratory study of word production raises formidable problems; ensuring that a particular word is produced may subvert the spontaneous production process. Word production is investigated via slips and tip-of-the-tongue (TOT) states, primarily via instances of processing failure, and via the technique of the picture-naming task. The methodology of word production is explained in the chapter. The chapter also explains the phenomenon of interaction between various stages of word production and the process of speech recognition. In this context, it explores the difference between sound and meaning and examines whether or not the comparisons are appropriate between the processes of recognition and production of spoken words. It also describes the similarities and differences in the structure of the recognition and production systems. Finally, the chapter highlights the common issues in recognition and production research, which include the nuances of frequency of occurrence, morphological structure, and phonological structure.
  • Cutler, A. (1995). Spoken-word recognition. In G. Bloothooft, V. Hazan, D. Hubert, & J. Llisterri (Eds.), European studies in phonetics and speech communication (pp. 66-71). Utrecht: OTS.
  • Cutler, A., & Koster, M. (2000). Stress and lexical activation in Dutch. In B. Yuan, T. Huang, & X. Tang (Eds.), Proceedings of the Sixth International Conference on Spoken Language Processing: Vol. 1 (pp. 593-596). Beijing: China Military Friendship Publish.

    Abstract

    Dutch listeners were slower to make judgements about the semantic relatedness between a spoken target word (e.g. atLEET, 'athlete') and a previously presented visual prime word (e.g. SPORT 'sport') when the spoken word was mis-stressed. The adverse effect of mis-stressing confirms the role of stress information in lexical recognition in Dutch. However, although the erroneous stress pattern was always initially compatible with a competing word (e.g. ATlas, 'atlas'), mis-stressed words did not produce high false alarm rates in unrelated pairs (e.g. SPORT - atLAS). This suggests that stress information did not completely rule out segmentally matching but suprasegmentally mismatching words, a finding consistent with spoken-word recognition models involving multiple activation and inter-word competition.
  • Cutler, A. (1995). The perception of rhythm in spoken and written language. In J. Mehler, & S. Franck (Eds.), Cognition on cognition (pp. 283-288). Cambridge, MA: MIT Press.
  • Cutler, A., & Butterfield, S. (1986). The perceptual integrity of initial consonant clusters. In R. Lawrence (Ed.), Speech and Hearing: Proceedings of the Institute of Acoustics (pp. 31-36). Edinburgh: Institute of Acoustics.
  • Cutler, A., & McQueen, J. M. (1995). The recognition of lexical units in speech. In B. De Gelder, & J. Morais (Eds.), Speech and reading: A comparative approach (pp. 33-47). Hove, UK: Erlbaum.
  • Cutler, A., Mehler, J., Norris, D., & Segui, J. (1986). The syllable’s differing role in the segmentation of French and English. Journal of Memory and Language, 25, 385-400. doi:10.1016/0749-596X(86)90033-1.

    Abstract

    Speech segmentation procedures may differ in speakers of different languages. Earlier work based on French speakers listening to French words suggested that the syllable functions as a segmentation unit in speech processing. However, while French has relatively regular and clearly bounded syllables, other languages, such as English, do not. No trace of syllabifying segmentation was found in English listeners listening to English words, French words, or nonsense words. French listeners, however, showed evidence of syllabification even when they were listening to English words. We conclude that alternative segmentation routines are available to the human language processor. In some cases speech segmentation may involve the operation of more than one procedure.
  • Cutler, A., Otake, T., & McQueen, J. M. (2009). Vowel devoicing and the perception of spoken Japanese words. Journal of the Acoustical Society of America, 125(3), 1693-1703. doi:10.1121/1.3075556.

    Abstract

    Three experiments, in which Japanese listeners detected Japanese words embedded in nonsense sequences, examined the perceptual consequences of vowel devoicing in that language. Since vowelless sequences disrupt speech segmentation [Norris et al. (1997). Cognit. Psychol. 34, 191– 243], devoicing is potentially problematic for perception. Words in initial position in nonsense sequences were detected more easily when followed by a sequence containing a vowel than by a vowelless segment (with or without further context), and vowelless segments that were potential devoicing environments were no easier than those not allowing devoicing. Thus asa, “morning,” was easier in asau or asazu than in all of asap, asapdo, asaf, or asafte, despite the fact that the /f/ in the latter two is a possible realization of fu, with devoiced [u]. Japanese listeners thus do not treat devoicing contexts as if they always contain vowels. Words in final position in nonsense sequences, however, produced a different pattern: here, preceding vowelless contexts allowing devoicing impeded word detection less strongly (so, sake was detected less accurately, but not less rapidly, in nyaksake—possibly arising from nyakusake—than in nyagusake). This is consistent with listeners treating consonant sequences as potential realizations of parts of existing lexical candidates wherever possible.
  • Cutler, A. (1986). Why readers of this newsletter should run cross-linguistic experiments. European Psycholinguistics Association Newsletter, 13, 4-8.
  • Cutler, A., & Butterfield, S. (1991). Word boundary cues in clear speech: A supplementary report. Speech Communication, 10, 335-353. doi:10.1016/0167-6393(91)90002-B.

    Abstract

    One of a listener's major tasks in understanding continuous speech is segmenting the speech signal into separate words. When listening conditions are difficult, speakers can help listeners by deliberately speaking more clearly. In four experiments, we examined how word boundaries are produced in deliberately clear speech. In an earlier report we showed that speakers do indeed mark word boundaries in clear speech, by pausing at the boundary and lengthening pre-boundary syllables; moreover, these effects are applied particularly to boundaries preceding weak syllables. In English, listeners use segmentation procedures which make word boundaries before strong syllables easier to perceive; thus marking word boundaries before weak syllables in clear speech will make clear precisely those boundaries which are otherwise hard to perceive. The present report presents supplementary data, namely prosodic analyses of the syllable following a critical word boundary. More lengthening and greater increases in intensity were applied in clear speech to weak syllables than to strong. Mean F0 was also increased to a greater extent on weak syllables than on strong. Pitch movement, however, increased to a greater extent on strong syllables than on weak. The effects were, however, very small in comparison to the durational effects we observed earlier for syllables preceding the boundary and for pauses at the boundary.
  • Cutler, A., & Fay, D. (1975). You have a Dictionary in your Head, not a Thesaurus. Texas Linguistic Forum, 1, 27-40.
  • Cutler, A., Norris, D., & McQueen, J. M. (2000). Tracking TRACE’s troubles. In A. Cutler, J. M. McQueen, & R. Zondervan (Eds.), Proceedings of SWAP (Workshop on Spoken Word Access Processes) (pp. 63-66). Nijmegen: Max-Planck-Institute for Psycholinguistics.

    Abstract

    Simulations explored the inability of the TRACE model of spoken-word recognition to model the effects on human listening of acoustic-phonetic mismatches in word forms. The source of TRACE's failure lay not in its interactive connectivity, nor in the presence of interword competition, nor in the use of phonemic representations, but in the need for continuously optimised interpretation of the input. When an analogue of TRACE was allowed to cycle to asymptote on every slice of input, an acceptable simulation of the subcategorical mismatch data was achieved. Even then, however, the simulation was not as close as that produced by the Merge model.
  • Cutler, A. (1995). Universal and Language-Specific in the Development of Speech. Biology International, (Special Issue 33).