Publications

  • Brown, P. (1993). The role of shape in the acquisition of Tzeltal (Mayan) locatives. In E. V. Clark (Ed.), Proceedings of the 25th Annual Child Language Research Forum (pp. 211-220). Stanford, CA: CSLI/University of Chicago Press.

    Abstract

    In a critique of the current state of theories of language acquisition, Bowerman (1985) has argued forcibly for the need to take crosslinguistic variation in semantic structure seriously, in order to understand children's acquisition of semantic categories in the process of learning their language. The semantics of locative expressions in the Mayan language Tzeltal exemplifies this point, for no existing theory of spatial expressions provides an adequate basis for capturing the semantic structure of spatial description in this Mayan language. In this paper I describe some of the characteristics of Tzeltal locative descriptions, as a contribution to the growing body of data on crosslinguistic variation in this domain and as a prod to ideas about acquisition processes, confining myself to the topological notions of 'on' and 'in', and asking whether, and how, these notions are involved in the semantic distinctions underlying Tzeltal locatives.
  • Brown, P., & Levinson, S. C. (2018). Tzeltal: The demonstrative system. In S. C. Levinson, S. Cutfield, M. Dunn, N. J. Enfield, & S. Meira (Eds.), Demonstratives in cross-linguistic perspective (pp. 150-177). Cambridge: Cambridge University Press.
  • Brugman, H., Malaisé, V., & Gazendam, L. (2006). A web based general thesaurus browser to support indexing of television and radio programs. In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC 2006) (pp. 1488-1491).
  • Budwig, N., Narasimhan, B., & Srivastava, S. (2006). Interim solutions: The acquisition of early constructions in Hindi. In E. Clark, & B. Kelly (Eds.), Constructions in acquisition (pp. 163-185). Stanford: CSLI Publications.
  • Bulut, T., Cheng, S. K., Xu, K. Y., Hung, D. L., & Wu, D. H. (2018). Is there a processing preference for object relative clauses in Chinese? Evidence from ERPs. Frontiers in Psychology, 9: 995. doi:10.3389/fpsyg.2018.00995.

    Abstract

    A consistent finding across head-initial languages, such as English, is that subject relative clauses (SRCs) are easier to comprehend than object relative clauses (ORCs). However, several studies in Mandarin Chinese, a head-final language, revealed the opposite pattern, which might be modulated by working memory (WM) as suggested by recent results from self-paced reading performance. In the present study, event-related potentials (ERPs) were recorded when participants with high and low WM spans (measured by forward digit span and operation span tests) read Chinese ORCs and SRCs. The results revealed an N400-P600 complex elicited by ORCs on the relativizer, whose magnitude was modulated by the WM span. On the other hand, a P600 effect was elicited by SRCs on the head noun, whose magnitude was not affected by the WM span. These findings paint a complex picture of relative clause processing in Chinese such that opposing factors involving structural ambiguities and integration of filler-gap dependencies influence processing dynamics in Chinese relative clauses.
  • Burenhult, N. (2006). Body part terms in Jahai. Language Sciences, 28(2-3), 162-180. doi:10.1016/j.langsci.2005.11.002.

    Abstract

    This article explores the lexicon of body part terms in Jahai, a Mon-Khmer language spoken by a group of hunter–gatherers in the Malay Peninsula. It provides an extensive inventory of body part terms and describes their structural and semantic properties. The Jahai body part lexicon pays attention to fine anatomical detail but lacks labels for major, ‘higher-level’ categories, like ‘trunk’, ‘limb’, ‘arm’ and ‘leg’. In this lexicon it is therefore sometimes difficult to discern a clear partonomic hierarchy, a presumed universal of body part terminology.
  • Byun, K.-S., De Vos, C., Bradford, A., Zeshan, U., & Levinson, S. C. (2018). First encounters: Repair sequences in cross-signing. Topics in Cognitive Science, 10(2), 314-334. doi:10.1111/tops.12303.

    Abstract

    Most human communication is between people who speak or sign the same languages. Nevertheless, communication is to some extent possible where there is no language in common, as every tourist knows. How this works is of some theoretical interest (Levinson 2006). A nice arena to explore this capacity is when deaf signers of different languages meet for the first time, and are able to use the iconic affordances of sign to begin communication. Here we focus on Other-Initiated Repair (OIR), that is, where one signer makes clear he or she does not understand, thus initiating repair of the prior conversational turn. OIR sequences are typically of a three-turn structure (Schegloff 2007) including the problem source turn (T-1), the initiation of repair (T0), and the turn offering a problem solution (T+1). These sequences seem to have a universal structure (Dingemanse et al. 2013). We find that in most cases where such OIR occur, the signer of the troublesome turn (T-1) foresees potential difficulty, and marks the utterance with 'try markers' (Sacks & Schegloff 1979, Moerman 1988) which pause to invite recognition. The signers use repetition, gestural holds, prosodic lengthening and eyegaze at the addressee as such try-markers. Moreover, when T-1 is try-marked this allows for faster response times of T+1 with respect to T0. This finding suggests that signers in these 'first encounter' situations actively anticipate potential trouble and, through try-marking, mobilize and facilitate OIRs. The suggestion is that heightened meta-linguistic awareness can be utilized to deal with these problems at the limits of our communicational ability.
  • Byun, K.-S., De Vos, C., Roberts, S. G., & Levinson, S. C. (2018). Interactive sequences modulate the selection of expressive forms in cross-signing. In C. Cuskley, M. Flaherty, H. Little, L. McCrohon, A. Ravignani, & T. Verhoef (Eds.), Proceedings of the 12th International Conference on the Evolution of Language (EVOLANG XII) (pp. 67-69). Toruń, Poland: NCU Press. doi:10.12775/3991-1.012.
  • Carlsson, K., Andersson, J., Petrovic, P., Petersson, K. M., Öhman, A., & Ingvar, M. (2006). Predictability modulates the affective and sensory-discriminative neural processing of pain. NeuroImage, 32(4), 1804-1814. doi:10.1016/j.neuroimage.2006.05.027.

    Abstract

    Knowing what is going to happen next, that is, the capacity to predict upcoming events, modulates the extent to which aversive stimuli induce stress and anxiety. We explored this issue by manipulating the temporal predictability of aversive events by means of a visual cue, which was either correlated or uncorrelated with pain stimuli (electric shocks). Subjects reported lower levels of anxiety, negative valence and pain intensity when shocks were predictable. In addition to attenuating focus on danger, predictability allows for correct temporal estimation of, and selective attention to, the sensory input. With functional magnetic resonance imaging, we found that predictability was related to enhanced activity in relevant sensory-discriminative processing areas, such as the primary and secondary sensory cortex and posterior insula. In contrast, the unpredictable, more aversive context was correlated with brain activity in the anterior insula and the orbitofrontal cortex, areas associated with affective pain processing. This context also prompted increased activity in the posterior parietal cortex and lateral prefrontal cortex, which we attribute to enhanced alertness and sustained attention during unpredictability.
  • Carota, F. (2006). Derivational morphology of Italian: Principles for formalization. Literary and Linguistic Computing, 21(SUPPL. 1), 41-53. doi:10.1093/llc/fql007.

    Abstract

    The present paper investigates the major derivational strategies underlying the formation of suffixed words in Italian, with the purpose of tackling the issue of their formalization. After specifying the theoretical cognitive premises that orient the work, the interacting component modules of the suffixation process, i.e. morphonology, morphotactics and affixal semantics, are explored empirically by drawing on ample naturally occurring data from a corpus of written Italian. Special attention is paid to the semantic mechanisms involved in suffixation. Some semantic nuclei are identified for the major suffixed word types of Italian, which are due to word formation rules active at the synchronic level, and a semantic configuration of productive suffixes is suggested. A general framework is then sketched, which combines classical finite-state methods with a feature unification-based word grammar. More specifically, the semantic information specified for the affixal material is internalised into the structures of Lexical Functional Grammar (LFG). The formal model allows us to integrate the various modules of suffixation. In particular, it treats, on the one hand, the interface between morphonology/morphotactics and semantics and, on the other hand, the interface between suffixation and inflection. Furthermore, since LFG exploits a hierarchically organised lexicon in order to structure the information regarding the affixal material, affixal co-selectional restrictions are advantageously constrained, avoiding potentially spurious multiple analyses/generations.
  • Carter, D. M., Broersma, M., Donnelly, K., & Konopka, A. E. (2018). Presenting the Bangor autoglosser and the Bangor automated clause-splitter. Digital Scholarship in the Humanities, 33(1), 21-28. doi:10.1093/llc/fqw065.

    Abstract

    Until recently, corpus studies of natural bilingual speech and, more specifically, codeswitching in bilingual speech have used a manual method of glossing, part-of-speech tagging, and clause-splitting to prepare the data for analysis. In our article, we present innovative tools developed for the first large-scale corpus study of codeswitching triggered by cognates. A study of this size was only possible due to the automation of several steps, such as morpheme-by-morpheme glossing, splitting complex clauses into simple clauses, and the analysis of internal and external codeswitching through the use of database tables, algorithms, and a scripting language.
  • Castro-Caldas, A., Petersson, K. M., Reis, A., Stone-Elander, S., & Ingvar, M. (1998). The illiterate brain: Learning to read and write during childhood influences the functional organization of the adult brain. Brain, 121, 1053-1063. doi:10.1093/brain/121.6.1053.

    Abstract

    Learning a specific skill during childhood may partly determine the functional organization of the adult brain. This hypothesis led us to study oral language processing in illiterate subjects who, for social reasons, had never entered school and had no knowledge of reading or writing. In a brain activation study using PET and statistical parametric mapping, we compared word and pseudoword repetition in literate and illiterate subjects. Our study confirms behavioural evidence of different phonological processing in illiterate subjects. During repetition of real words, the two groups performed similarly and activated similar areas of the brain. In contrast, illiterate subjects had more difficulty repeating pseudowords correctly and did not activate the same neural structures as literates. These results are consistent with the hypothesis that learning the written form of language (orthography) interacts with the function of oral language. Our results indicate that learning to read and write during childhood influences the functional organization of the adult human brain.
  • Chan, A., Yang, W., Chang, F., & Kidd, E. (2018). Four-year-old Cantonese-speaking children's online processing of relative clauses: A permutation analysis. Journal of Child Language, 45(1), 174-203. doi:10.1017/s0305000917000198.

    Abstract

    We report on an eye-tracking study that investigated four-year-old Cantonese-speaking children's online processing of subject and object relative clauses (RCs). Children's eye-movements were recorded as they listened to RC structures identifying a unique referent (e.g. “Can you pick up the horse that pushed the pig?”). Two RC types, classifier (CL) and ge3 RCs, were tested in a between-participants design. The two RC types differ in their syntactic analyses and frequency of occurrence, providing an important point of comparison for theories of RC acquisition and processing. A permutation analysis showed that the two structures were processed differently: CL RCs showed a significant object-over-subject advantage, whereas ge3 RCs showed the opposite effect. This study shows that children can have different preferences even for two very similar RC structures within the same language, suggesting that syntactic processing preferences are shaped by the unique features of particular constructions both within and across different linguistic typologies.
  • Chen, J. (2006). The acquisition of verb compounding in Mandarin. In E. V. Clark, & B. F. Kelly (Eds.), Constructions in acquisition (pp. 111-136). Stanford: CSLI Publications.
  • Chen, Y., & Braun, B. (2006). Prosodic realization in information structure categories in standard Chinese. In R. Hoffmann, & H. Mixdorff (Eds.), Speech Prosody 2006. Dresden: TUD Press.

    Abstract

    This paper investigates the prosodic realization of information structure categories in Standard Chinese. A number of proper names with different tonal combinations were elicited as a grammatical subject in five pragmatic contexts. Results show that both duration and F0 range of the tonal realizations were adjusted to signal the information structure categories (i.e. theme vs. rheme and background vs. focus). Rhemes consistently induced a longer duration and a more expanded F0 range than themes. Focus, compared to background, generally induced lengthening and F0 range expansion (the presence and magnitude of which, however, are dependent on the tonal structure of the proper names). Within the rheme focus condition, corrective rheme focus induced a more expanded F0 range than normal rheme focus.
  • Chen, A. (2006). Variations in the marking of focus in child language. In Variation, detail and representation: 10th Conference on Laboratory Phonology (pp. 113-114).
  • Chen, C.-h., Zhang, Y., & Yu, C. (2018). Learning object names at different hierarchical levels using cross-situational statistics. Cognitive Science, 42(S2), 591-605. doi:10.1111/cogs.12516.

    Abstract

    Objects in the world usually have names at different hierarchical levels (e.g., beagle, dog, animal). This research investigates adults' ability to use cross-situational statistics to simultaneously learn object labels at individual and category levels. The results revealed that adults were able to use co-occurrence information to learn hierarchical labels in contexts where the labels for individual objects and labels for categories were presented in completely separated blocks, in interleaved blocks, or mixed in the same trial. Temporal presentation schedules significantly affected the learning of individual object labels, but not the learning of category labels. Learners' subsequent generalization of category labels indicated sensitivity to the structure of statistical input.
  • Chen, A. (2006). Interface between information structure and intonation in Dutch wh-questions. In R. Hoffmann, & H. Mixdorff (Eds.), Speech Prosody 2006. Dresden: TUD Press.

    Abstract

    This study set out to investigate how accent placement is pragmatically governed in WH-questions. Central to this issue are questions such as whether the intonation of the WH-word depends on the information structure of the non-WH word part, whether topical constituents can be accented, and whether constituents in the non-WH word part can be non-topical and accented. Previous approaches, based either on carefully composed examples or on read speech, differ in their treatments of these questions and consequently make opposing claims on the intonation of WH-questions. We addressed these questions by examining a corpus of 90 naturally occurring WH-questions, selected from the Spoken Dutch Corpus. Results show that the intonation of the WH-word is related to the information structure of the non-WH word part. Further, topical constituents can get accented and the accents are not necessarily phonetically reduced. Additionally, certain adverbs, which have no topical relation to the presupposition of the WH-questions, also get accented. They appear to function as a device for enhancing speaker engagement.
  • Cho, T., & McQueen, J. M. (2006). Phonological versus phonetic cues in native and non-native listening: Korean and Dutch listeners' perception of Dutch and English consonants. Journal of the Acoustical Society of America, 119(5), 3085-3096. doi:10.1121/1.2188917.

    Abstract

    We investigated how listeners of two unrelated languages, Korean and Dutch, process phonologically viable and nonviable consonants spoken in Dutch and American English. To Korean listeners, released final stops are nonviable because word-final stops in Korean are never released in words spoken in isolation, but to Dutch listeners, unreleased word-final stops are nonviable because word-final stops in Dutch are generally released in words spoken in isolation. Two phoneme monitoring experiments showed a phonological effect on both Dutch and English stimuli: Korean listeners detected the unreleased stops more rapidly whereas Dutch listeners detected the released stops more rapidly and/or more accurately. The Koreans, however, detected released stops more accurately than unreleased stops, but only in the non-native language they were familiar with (English). The results suggest that, in non-native speech perception, phonological legitimacy in the native language can be more important than the richness of phonetic information, though familiarity with phonetic detail in the non-native language can also improve listening performance.
  • Choi, S., & Bowerman, M. (1991). Learning to express motion events in English and Korean: The influence of language-specific lexicalization patterns. Cognition, 41, 83-121. doi:10.1016/0010-0277(91)90033-Z.

    Abstract

    English and Korean differ in how they lexicalize the components of motion events. English characteristically conflates Motion with Manner, Cause, or Deixis, and expresses Path separately. Korean, in contrast, conflates Motion with Path and elements of Figure and Ground in transitive clauses for caused Motion, but conflates Motion with Deixis and spells out Path and Manner separately in intransitive clauses for spontaneous motion. Children learning English and Korean show sensitivity to language-specific patterns in the way they talk about motion from as early as 17–20 months. For example, learners of English quickly generalize their earliest spatial words — Path particles like up, down, and in — to both spontaneous and caused changes of location and, for up and down, to posture changes, while learners of Korean keep words for spontaneous and caused motion strictly separate and use different words for vertical changes of location and posture changes. These findings challenge the widespread view that children initially map spatial words directly to nonlinguistic spatial concepts, and suggest that they are influenced by the semantic organization of their language virtually from the beginning. We discuss how input and cognition may interact in the early phases of learning to talk about space.
  • Choi, J., Broersma, M., & Cutler, A. (2018). Phonetic learning is not enhanced by sequential exposure to more than one language. Linguistic Research, 35(3), 567-581. doi:10.17250/khisli.35.3.201812.006.

    Abstract

    Several studies have documented that international adoptees, who in early years have experienced a change from a language used in their birth country to a new language in an adoptive country, benefit from the limited early exposure to the birth language when relearning that language’s sounds later in life. The adoptees’ relearning advantages have been argued to be conferred by lasting birth-language knowledge obtained from the early exposure. However, it is also plausible to assume that the advantages may arise from adoptees’ superior ability to learn language sounds in general, as a result of their unusual linguistic experience, i.e., exposure to multiple languages in sequence early in life. If this is the case, then the adoptees’ relearning benefits should generalize to previously unheard language sounds, rather than be limited to their birth-language sounds. In the present study, adult Korean adoptees in the Netherlands and matched Dutch-native controls were trained on identifying a Japanese length distinction to which they had never been exposed before. The adoptees and Dutch controls did not differ on any test carried out before, during, or after the training, indicating that observed adoptee advantages for birth-language relearning do not generalize to novel, previously unheard language sounds. The finding thus fails to support the suggestion that birth-language relearning advantages may arise from enhanced ability to learn language sounds in general conferred by early experience in multiple languages. Rather, our finding supports the original contention that such advantages involve memory traces obtained before adoption.
  • Cholin, J., Levelt, W. J. M., & Schiller, N. O. (2006). Effects of syllable frequency in speech production. Cognition, 99, 205-235. doi:10.1016/j.cognition.2005.01.009.

    Abstract

    In the speech production model proposed by [Levelt, W. J. M., Roelofs, A., Meyer, A. S. (1999). A theory of lexical access in speech production. Behavioral and Brain Sciences, 22, pp. 1-75.], syllables play a crucial role at the interface of phonological and phonetic encoding. At this interface, abstract phonological syllables are translated into phonetic syllables. It is assumed that this translation process is mediated by a so-called Mental Syllabary. Rather than constructing the motor programs for each syllable on-line, the mental syllabary is hypothesized to provide pre-compiled gestural scores for the articulators. In order to find evidence for such a repository, we investigated syllable-frequency effects: If the mental syllabary consists of retrievable representations corresponding to syllables, then the retrieval process should be sensitive to frequency differences. In a series of experiments using a symbol-position association learning task, we tested whether high-frequency syllables are retrieved and produced faster than low-frequency syllables. We found significant syllable frequency effects with monosyllabic pseudo-words and with disyllabic pseudo-words in which the first syllable bore the frequency manipulation; no effect was found when the frequency manipulation was on the second syllable. The implications of these results for the theory of word form encoding at the interface of phonological and phonetic encoding, especially with respect to the access mechanisms to the mental syllabary in the speech production model of Levelt et al., are discussed.
  • Chwilla, D., Hagoort, P., & Brown, C. M. (1998). The mechanism underlying backward priming in a lexical decision task: Spreading activation versus semantic matching. Quarterly Journal of Experimental Psychology, 51A(3), 531-560. doi:10.1080/713755773.

    Abstract

    Koriat (1981) demonstrated that an association from the target to a preceding prime, in the absence of an association from the prime to the target, facilitates lexical decision and referred to this effect as "backward priming". Backward priming is of relevance, because it can provide information about the mechanism underlying semantic priming effects. Following Neely (1991), we distinguish three mechanisms of priming: spreading activation, expectancy, and semantic matching/integration. The goal was to determine which of these mechanisms causes backward priming, by assessing effects of backward priming on a language-relevant ERP component, the N400, and reaction time (RT). Based on previous work, we propose that the N400 priming effect reflects expectancy and semantic matching/integration, but in contrast with RT does not reflect spreading activation. Experiment 1 shows a backward priming effect that is qualitatively similar for the N400 and RT in a lexical decision task. This effect was not modulated by an ISI manipulation. Experiment 2 clarifies that the N400 backward priming effect reflects genuine changes in N400 amplitude and cannot be ascribed to other factors. We will argue that these backward priming effects cannot be due to expectancy but are best accounted for in terms of semantic matching/integration.
  • Chwilla, D., Brown, C. M., & Hagoort, P. (1995). The N400 as a function of the level of processing. Psychophysiology, 32, 274-285. doi:10.1111/j.1469-8986.1995.tb02956.x.

    Abstract

    In a semantic priming paradigm, the effects of different levels of processing on the N400 were assessed by changing the task demands. In the lexical decision task, subjects had to discriminate between words and nonwords and in the physical task, subjects had to discriminate between uppercase and lowercase letters. The proportion of related versus unrelated word pairs differed between conditions. A lexicality test on reaction times demonstrated that the physical task was performed nonlexically. Moreover, a semantic priming reaction time effect was obtained only in the lexical decision task. The level of processing clearly affected the event-related potentials. An N400 priming effect was only observed in the lexical decision task. In contrast, in the physical task a P300 effect was observed for either related or unrelated targets, depending on their frequency of occurrence. Taken together, the results indicate that an N400 priming effect is only evoked when the task performance induces the semantic aspects of words to become part of an episodic trace of the stimulus event.
  • Clough, S., & Hilverman, C. (2018). Hand gestures and how they help children learn. Frontiers for Young Minds, 6: 29. doi:10.3389/frym.2018.00029.

    Abstract

    When we talk, we often make hand movements called gestures at the same time. Although just about everyone gestures when they talk, we usually do not even notice the gestures. Our hand gestures play an important role in helping us learn and remember! When we see other people gesturing when they talk—or when we gesture when we talk ourselves—we are more likely to remember the information being talked about than if gestures were not involved. Our hand gestures can even indicate when we are ready to learn new things! In this article, we explain how gestures can help learning. To investigate this, we studied children learning a new mathematical concept called equivalence. We hope that this article will help you notice when you, your friends and family, and your teachers are gesturing, and that it will help you understand how those gestures can help people learn.
  • Connine, C. M., Clifton, Jr., C., & Cutler, A. (1987). Effects of lexical stress on phonetic categorization. Phonetica, 44, 133-146.
  • Corcoran, A. W., Alday, P. M., Schlesewsky, M., & Bornkessel-Schlesewsky, I. (2018). Toward a reliable, automated method of individual alpha frequency (IAF) quantification. Psychophysiology, 55(7): e13064. doi:10.1111/psyp.13064.

    Abstract

    Individual alpha frequency (IAF) is a promising electrophysiological marker of interindividual differences in cognitive function. IAF has been linked with trait-like differences in information processing and general intelligence, and provides an empirical basis for the definition of individualized frequency bands. Despite its widespread application, however, there is little consensus on the optimal method for estimating IAF, and many common approaches are prone to bias and inconsistency. Here, we describe an automated strategy for deriving two of the most prevalent IAF estimators in the literature: peak alpha frequency (PAF) and center of gravity (CoG). These indices are calculated from resting-state power spectra that have been smoothed using a Savitzky-Golay filter (SGF). We evaluate the performance characteristics of this analysis procedure in both empirical and simulated EEG data sets. Applying the SGF technique to resting-state data from n = 63 healthy adults furnished 61 PAF and 62 CoG estimates. The statistical properties of these estimates were consistent with previous reports. Simulation analyses revealed that the SGF routine was able to reliably extract target alpha components, even under relatively noisy spectral conditions. The routine consistently outperformed a simpler method of automated peak detection that did not involve spectral smoothing. The SGF technique is fast, open source, and available in two popular programming languages (MATLAB, Python), and thus can easily be integrated within the most popular M/EEG toolsets (EEGLAB, FieldTrip, MNE-Python). As such, it affords a convenient tool for improving the reliability and replicability of future IAF-related research.

    Additional information

    psyp13064-sup-0001-s01.docx
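The two IAF estimators described in the Corcoran et al. entry above (peak alpha frequency and center of gravity, computed on a Savitzky-Golay-smoothed spectrum) can be sketched in a few lines of Python. This is a hypothetical, minimal illustration of the general idea, not the authors' released MATLAB/Python routines; the simulated spectrum, the 31-point smoothing window, and the 7-13 Hz alpha band are assumptions for the sake of the example.

```python
import numpy as np
from scipy.signal import savgol_filter

# Toy resting-state power spectrum: 1/f background plus an alpha peak at 10 Hz.
freqs = np.linspace(1, 40, 391)  # 0.1 Hz resolution
rng = np.random.default_rng(0)
psd = 1.0 / freqs + 0.5 * np.exp(-((freqs - 10.0) ** 2) / 2.0)
psd += rng.normal(0.0, 0.02, freqs.size)

# Smooth the spectrum with a Savitzky-Golay filter before peak estimation.
smoothed = savgol_filter(psd, window_length=31, polyorder=5)

# Restrict to an assumed 7-13 Hz alpha band and derive both estimators.
band = (freqs >= 7) & (freqs <= 13)
paf = freqs[band][np.argmax(smoothed[band])]                         # peak alpha frequency
cog = np.sum(freqs[band] * smoothed[band]) / np.sum(smoothed[band])  # center of gravity

print(f"PAF = {paf:.1f} Hz, CoG = {cog:.1f} Hz")
```

On this toy spectrum both estimates land near the simulated 10 Hz alpha component; real resting-state data would additionally need the noise and quality checks the authors evaluate.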
  • Corps, R. E. (2018). Coordinating utterances during conversational dialogue: The role of content and timing predictions. PhD Thesis, The University of Edinburgh, Edinburgh.
  • Corps, R. E., Gambi, C., & Pickering, M. J. (2018). Coordinating utterances during turn-taking: The role of prediction, response preparation, and articulation. Discourse Processes, 55(2, SI), 230-240. doi:10.1080/0163853X.2017.1330031.

    Abstract

    During conversation, interlocutors rapidly switch between speaker and listener roles and take turns at talk. How do they achieve such fine coordination? Most research has concentrated on the role of prediction, but listeners must also prepare a response in advance (assuming they wish to respond) and articulate this response at the appropriate moment. Such mechanisms may overlap with the processes of comprehending the speaker’s incoming turn and predicting its end. However, little is known about the stages of response preparation and production. We discuss three questions pertaining to such stages: (1) Do listeners prepare their own response in advance?, (2) Can listeners buffer their prepared response?, and (3) Does buffering lead to interference with concurrent comprehension? We argue that fine coordination requires more than just an accurate prediction of the interlocutor’s incoming turn: Listeners must also simultaneously prepare their own response.
  • Corps, R. E., Crossley, A., Gambi, C., & Pickering, M. J. (2018). Early preparation during turn-taking: Listeners use content predictions to determine what to say but not when to say it. Cognition, 175, 77-95. doi:10.1016/j.cognition.2018.01.015.

    Abstract

    During conversation, there is often little gap between interlocutors’ utterances. In two pairs of experiments, we manipulated the content predictability of yes/no questions to investigate whether listeners achieve such coordination by (i) preparing a response as early as possible or (ii) predicting the end of the speaker’s turn. To assess these two mechanisms, we varied the participants’ task: They either pressed a button when they thought the question was about to end (Experiments 1a and 2a), or verbally answered the questions with either yes or no (Experiments 1b and 2b). Predictability effects were present when participants had to prepare a verbal response, but not when they had to predict the turn-end. These findings suggest content prediction facilitates turn-taking because it allows listeners to prepare their own response early, rather than because it helps them predict when the speaker will reach the end of their turn.

    Additional information

    Supplementary material
  • Costa, A., Cutler, A., & Sebastian-Galles, N. (1998). Effects of phoneme repertoire on phoneme decision. Perception and Psychophysics, 60, 1022-1031.

    Abstract

    In three experiments, listeners detected vowel or consonant targets in lists of CV syllables constructed from five vowels and five consonants. Responses were faster in a predictable context (e.g., listening for a vowel target in a list of syllables all beginning with the same consonant) than in an unpredictable context (e.g., listening for a vowel target in a list of syllables beginning with different consonants). In Experiment 1, the listeners’ native language was Dutch, in which vowel and consonant repertoires are similar in size. The difference between predictable and unpredictable contexts was comparable for vowel and consonant targets. In Experiments 2 and 3, the listeners’ native language was Spanish, which has four times as many consonants as vowels; here effects of an unpredictable consonant context on vowel detection were significantly greater than effects of an unpredictable vowel context on consonant detection. This finding suggests that listeners’ processing of phonemes takes into account the constitution of their language’s phonemic repertoire and the implications that this has for contextual variability.
  • Crago, M. B., & Allen, S. E. M. (1998). Acquiring Inuktitut. In O. L. Taylor, & L. Leonard (Eds.), Language Acquisition Across North America: Cross-Cultural And Cross-Linguistic Perspectives (pp. 245-279). San Diego, CA, USA: Singular Publishing Group, Inc.
  • Crago, M. B., Allen, S. E. M., & Pesco, D. (1998). Issues of Complexity in Inuktitut and English Child Directed Speech. In Proceedings of the twenty-ninth Annual Stanford Child Language Research Forum (pp. 37-46).
  • Crago, M. B., Chen, C., Genesee, F., & Allen, S. E. M. (1998). Power and deference. Journal for a Just and Caring Education, 4(1), 78-95.
  • Crasborn, O., Sloetjes, H., Auer, E., & Wittenburg, P. (2006). Combining video and numeric data in the analysis of sign languages with the ELAN annotation software. In C. Vetoori (Ed.), Proceedings of the 2nd Workshop on the Representation and Processing of Sign languages: Lexicographic matters and didactic scenarios (pp. 82-87). Paris: ELRA.

    Abstract

    This paper describes hardware and software that can be used for the phonetic study of sign languages. The field of sign language phonetics is characterised, and the hardware that is currently in use is described. The paper focuses on the software that was developed to enable the recording of finger and hand movement data, and the additions to the ELAN annotation software that facilitate the further visualisation and analysis of the data.
  • Creemers, A., Don, J., & Fenger, P. (2018). Some affixes are roots, others are heads. Natural Language & Linguistic Theory, 36(1), 45-84. doi:10.1007/s11049-017-9372-1.

    Abstract

    A recent debate in the morphological literature concerns the status of derivational affixes. While some linguists (Marantz 1997, 2001; Marvin 2003) consider derivational affixes a type of functional morpheme that realizes a categorial head, others (Lowenstamm 2015; De Belder 2011) argue that derivational affixes are roots. Our proposal, which finds its empirical basis in a study of Dutch derivational affixes, takes a middle position. We argue that there are two types of derivational affixes: some that are roots (i.e. lexical morphemes) and others that are categorial heads (i.e. functional morphemes). Affixes that are roots show ‘flexible’ categorial behavior, are subject to ‘lexical’ phonological rules, and may trigger idiosyncratic meanings. Affixes that realize categorial heads, on the other hand, are categorially rigid, do not trigger ‘lexical’ phonological rules nor allow for idiosyncrasies in their interpretation.
  • Cristia, A., Ganesh, S., Casillas, M., & Ganapathy, S. (2018). Talker diarization in the wild: The case of child-centered daylong audio-recordings. In Proceedings of Interspeech 2018 (pp. 2583-2587). doi:10.21437/Interspeech.2018-2078.

    Abstract

    Speaker diarization (answering 'who spoke when') is a widely researched subject within speech technology. Numerous experiments have been run on datasets built from broadcast news, meeting data, and call centers—the task sometimes appears close to being solved. Much less work has begun to tackle the hardest diarization task of all: spontaneous conversations in real-world settings. Such diarization would be particularly useful for studies of language acquisition, where researchers investigate the speech children produce and hear in their daily lives. In this paper, we study audio gathered with a recorder worn by small children as they went about their normal days. As a result, each child was exposed to different acoustic environments with a multitude of background noises and a varying number of adults and peers. The inconsistency of speech and noise within and across samples poses a challenging task for speaker diarization systems, which we tackled via retraining and data augmentation techniques. We further studied sources of structured variation across raw audio files, including the impact of speaker type distribution, proportion of speech from children, and child age on diarization performance. We discuss the extent to which these findings might generalize to other samples of speech in the wild.
  • Croijmans, I. (2018). Wine expertise shapes olfactory language and cognition. PhD Thesis, Radboud University, Nijmegen.
  • Cronin, K. A., Mitchell, M. A., Lonsdorf, E. V., & Thompson, S. D. (2006). One year later: Evaluation of PMC-Recommended births and transfers. Zoo Biology, 25, 267-277. doi:10.1002/zoo.20100.

    Abstract

    To meet their exhibition, conservation, education, and scientific goals, members of the American Zoo and Aquarium Association (AZA) collaborate to manage their living collections as single species populations. These cooperative population management programs, Species Survival Plans (SSP) and Population Management Plans (PMP), issue specimen-by-specimen recommendations aimed at perpetuating captive populations by maintaining genetic diversity and demographic stability. Species Survival Plans and PMPs differ in that SSP participants agree to complete recommendations, whereas PMP participants need only take recommendations under advisement. We evaluated the effect of program type and the number of participating institutions on the success of actions recommended by the Population Management Center (PMC): transfers of specimens between institutions, breeding, and target number of offspring. We analyzed AZA studbook databases for the occurrence of recommended or unrecommended transfers and births during the 1-year period after the distribution of standard AZA Breeding-and-Transfer Plans. We had three major findings: 1) on average, both SSPs and PMPs fell about 25% short of their target; however, as the number of participating institutions increased so too did the likelihood that programs met or exceeded their target; 2) SSPs exhibited significantly greater transfer success than PMPs, although transfer success for both program types was below 50%; and 3) SSPs exhibited significantly greater breeding success than PMPs, although breeding success for both program types was below 20%. Together, these results indicate that the science and sophistication behind genetic and demographic management of captive populations may be compromised by the challenges of implementation.
  • Croxson, P., Forkel, S. J., Cerliani, L., & Thiebaut De Schotten, M. (2018). Structural Variability Across the Primate Brain: A Cross-Species Comparison. Cerebral Cortex, 28(11), 3829-3841. doi:10.1093/cercor/bhx244.

    Abstract

    A large amount of variability exists across human brains; revealed initially on a small scale by postmortem studies and, more recently, on a larger scale with the advent of neuroimaging. Here we compared structural variability between human and macaque monkey brains using grey and white matter magnetic resonance imaging measures. The monkey brain was overall structurally as variable as the human brain, but variability had a distinct distribution pattern, with some key areas showing high variability. We also report the first evidence of a relationship between anatomical variability and evolutionary expansion in the primate brain. This suggests a relationship between variability and stability, where areas of low variability may have evolved less recently and have more stability, while areas of high variability may have evolved more recently and be less similar across individuals. We showed specific differences between the species in key areas, including the amount of hemispheric asymmetry in variability, which was left-lateralized in the human brain across several phylogenetically recent regions. This suggests that cerebral variability may be another useful measure for comparison between species and may add another dimension to our understanding of evolutionary mechanisms.
  • Cutler, A. (2006). Rudolf Meringer. In K. Brown (Ed.), Encyclopedia of Language and Linguistics (vol. 8) (pp. 12-13). Amsterdam: Elsevier.

    Abstract

    Rudolf Meringer (1859–1931), Indo-European philologist, published two collections of slips of the tongue, annotated and interpreted. From 1909, he was the founding editor of the cultural morphology movement's journal Wörter und Sachen. Meringer was the first to note the linguistic significance of speech errors, and his interpretations have stood the test of time. This work, rather than his mainstream philological research, has proven his most lasting linguistic contribution.
  • Cutler, A., Kim, J., & Otake, T. (2006). On the limits of L1 influence on non-L1 listening: Evidence from Japanese perception of Korean. In P. Warren, & C. I. Watson (Eds.), Proceedings of the 11th Australian International Conference on Speech Science & Technology (pp. 106-111).

    Abstract

    Language-specific procedures which are efficient for listening to the L1 may be applied to non-native spoken input, often to the detriment of successful listening. However, such misapplications of L1-based listening do not always happen. We propose, based on the results from two experiments in which Japanese listeners detected target sequences in spoken Korean, that an L1 procedure is only triggered if requisite L1 features are present in the input.
  • Cutler, A. (2006). Van spraak naar woorden in een tweede taal. In J. Morais, & G. d'Ydewalle (Eds.), Bilingualism and Second Language Acquisition (pp. 39-54). Brussels: Koninklijke Vlaamse Academie van België voor Wetenschappen en Kunsten.
  • Cutler, A., & Otake, T. (1998). Assimilation of place in Japanese and Dutch. In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: vol. 5 (pp. 1751-1754). Sydney: ICLSP.

    Abstract

    Assimilation of place of articulation across a nasal and a following stop consonant is obligatory in Japanese, but not in Dutch. In four experiments the processing of assimilated forms by speakers of Japanese and Dutch was compared, using a task in which listeners blended pseudo-word pairs such as ranga-serupa. An assimilated blend of this pair would be rampa, an unassimilated blend rangpa. Japanese listeners produced significantly more assimilated than unassimilated forms, both with pseudo-Japanese and pseudo-Dutch materials, while Dutch listeners produced significantly more unassimilated than assimilated forms in each materials set. This suggests that Japanese listeners, whose native-language phonology involves obligatory assimilation constraints, represent the assimilated nasals in nasal-stop sequences as unmarked for place of articulation, while Dutch listeners, who are accustomed to hearing unassimilated forms, represent the same nasal segments as marked for place of articulation.
  • Ip, M. H. K., & Cutler, A. (2018). Asymmetric efficiency of juncture perception in L1 and L2. In K. Klessa, J. Bachan, A. Wagner, M. Karpiński, & D. Śledziński (Eds.), Proceedings of Speech Prosody 2018 (pp. 289-296). Baixas, France: ISCA. doi:10.21437/SpeechProsody.2018-59.

    Abstract

    In two experiments, Mandarin listeners resolved potential syntactic ambiguities in spoken utterances in (a) their native language (L1) and (b) English which they had learned as a second language (L2). A new disambiguation task was used, requiring speeded responses to select the correct meaning for structurally ambiguous sentences. Importantly, the ambiguities used in the study are identical in Mandarin and in English, and production data show that prosodic disambiguation of this type of ambiguity is also realised very similarly in the two languages. The perceptual results here showed however that listeners’ response patterns differed for L1 and L2, although there was a significant increase in similarity between the two response patterns with increasing exposure to the L2. Thus identical ambiguity and comparable disambiguation patterns in L1 and L2 do not lead to immediate application of the appropriate L1 listening strategy to L2; instead, it appears that such a strategy may have to be learned anew for the L2.
  • Cutler, A., & Fear, B. D. (1991). Categoricality in acceptability judgements for strong versus weak vowels. In J. Llisterri (Ed.), Proceedings of the ESCA Workshop on Phonetics and Phonology of Speaking Styles (pp. 18.1-18.5). Barcelona, Catalonia: Universitat Autonoma de Barcelona.

    Abstract

    A distinction between strong and weak vowels can be drawn on the basis of vowel quality, of stress, or of both factors. An experiment was conducted in which sets of contextually matched word-initial vowels ranging from clearly strong to clearly weak were cross-spliced, and the naturalness of the resulting words was rated by listeners. The ratings showed that in general cross-spliced words were only significantly less acceptable than unspliced words when schwa was not involved; this supports a categorical distinction based on vowel quality.
  • Cutler, A., Norris, D., & Williams, J. (1987). A note on the role of phonological expectations in speech segmentation. Journal of Memory and Language, 26, 480-487. doi:10.1016/0749-596X(87)90103-3.

    Abstract

    Word-initial CVC syllables are detected faster in words beginning consonant-vowel-consonant-vowel (CVCV-) than in words beginning consonant-vowel-consonant-consonant (CVCC-). This effect was reported independently by M. Taft and G. Hambly (1985, Journal of Memory and Language, 24, 320–335) and by A. Cutler, J. Mehler, D. Norris, and J. Segui (1986, Journal of Memory and Language, 25, 385–400). Taft and Hambly explained the effect in terms of lexical factors. This explanation cannot account for Cutler et al.'s results, in which the effect also appeared with nonwords and foreign words. Cutler et al. suggested that CVCV-sequences might simply be easier to perceive than CVCC-sequences. The present study confirms this suggestion, and explains it as a reflection of listener expectations constructed on the basis of distributional characteristics of the language.
  • Cutler, A. (1987). Components of prosodic effects in speech recognition. In Proceedings of the Eleventh International Congress of Phonetic Sciences: Vol. 1 (pp. 84-87). Tallinn: Academy of Sciences of the Estonian SSR, Institute of Language and Literature.

    Abstract

    Previous research has shown that listeners use the prosodic structure of utterances in a predictive fashion in sentence comprehension, to direct attention to accented words. Acoustically identical words spliced into sentence contexts are responded to differently if the prosodic structure of the context is varied: when the preceding prosody indicates that the word will be accented, responses are faster than when the preceding prosody is inconsistent with accent occurring on that word. In the present series of experiments speech hybridisation techniques were first used to interchange the timing patterns within pairs of prosodic variants of utterances, independently of the pitch and intensity contours. The time-adjusted utterances could then serve as a basis for the orthogonal manipulation of the three prosodic dimensions of pitch, intensity and rhythm. The overall pattern of results showed that when listeners use prosody to predict accent location, they do not simply rely on a single prosodic dimension, but exploit the interaction between pitch, intensity and rhythm.
  • Ip, M. H. K., & Cutler, A. (2018). Cue equivalence in prosodic entrainment for focus detection. In J. Epps, J. Wolfe, J. Smith, & C. Jones (Eds.), Proceedings of the 17th Australasian International Conference on Speech Science and Technology (pp. 153-156).

    Abstract

    Using a phoneme detection task, the present series of experiments examines whether listeners can entrain to different combinations of prosodic cues to predict where focus will fall in an utterance. The stimuli were recorded by four female native speakers of Australian English who happened to have used different prosodic cues to produce sentences with prosodic focus: a combination of duration cues, mean and maximum F0, F0 range, and longer pre-target interval before the focused word onset; only mean F0 cues; only pre-target interval; and only duration cues. Results revealed that listeners can entrain in almost every condition except for where duration was the only reliable cue. Our findings suggest that listeners are flexible in the cues they use for focus processing.
  • Cutler, A., & Pasveer, D. (2006). Explaining cross-linguistic differences in effects of lexical stress on spoken-word recognition. In R. Hoffmann, & H. Mixdorff (Eds.), Speech Prosody 2006. Dresden: TUD press.

    Abstract

    Experiments have revealed differences across languages in listeners’ use of stress information in recognising spoken words. Previous comparisons of the vocabulary of Spanish and English had suggested that the explanation of this asymmetry might lie in the extent to which considering stress in spoken-word recognition allows rejection of unwanted competition from words embedded in other words. This hypothesis was tested on the vocabularies of Dutch and German, for which word recognition results resemble those from Spanish more than those from English. The vocabulary statistics likewise revealed that in each language, the reduction of embeddings resulting from taking stress into account is more similar to the reduction achieved in Spanish than in English.
  • Cutler, A., Eisner, F., McQueen, J. M., & Norris, D. (2006). Coping with speaker-related variation via abstract phonemic categories. In Variation, detail and representation: 10th Conference on Laboratory Phonology (pp. 31-32).
  • Cutler, A., Weber, A., & Otake, T. (2006). Asymmetric mapping from phonetic to lexical representations in second-language listening. Journal of Phonetics, 34(2), 269-284. doi:10.1016/j.wocn.2005.06.002.

    Abstract

    The mapping of phonetic information to lexical representations in second-language (L2) listening was examined using an eyetracking paradigm. Japanese listeners followed instructions in English to click on pictures in a display. When instructed to click on a picture of a rocket, they experienced interference when a picture of a locker was present, that is, they tended to look at the locker instead. However, when instructed to click on the locker, they were unlikely to look at the rocket. This asymmetry is consistent with a similar asymmetry previously observed in Dutch listeners’ mapping of English vowel contrasts to lexical representations. The results suggest that L2 listeners may maintain a distinction between two phonetic categories of the L2 in their lexical representations, even though their phonetic processing is incapable of delivering the perceptual discrimination required for correct mapping to the lexical distinction. At the phonetic processing level, one of the L2 categories is dominant; the present results suggest that dominance is determined by acoustic–phonetic proximity to the nearest L1 category. At the lexical processing level, representations containing this dominant category are more likely than representations containing the non-dominant category to be correctly contacted by the phonetic input.
  • Cutler, A., Burchfield, L. A., & Antoniou, M. (2018). Factors affecting talker adaptation in a second language. In J. Epps, J. Wolfe, J. Smith, & C. Jones (Eds.), Proceedings of the 17th Australasian International Conference on Speech Science and Technology (pp. 33-36).

    Abstract

    Listeners adapt rapidly to previously unheard talkers by adjusting phoneme categories using lexical knowledge, in a process termed lexically-guided perceptual learning. Although this is firmly established for listening in the native language (L1), perceptual flexibility in second languages (L2) is as yet less well understood. We report two experiments examining L1 and L2 perceptual learning, the first in Mandarin-English late bilinguals, the second in Australian learners of Mandarin. Both studies showed stronger learning in L1; in L2, however, learning appeared for the English-L1 group but not for the Mandarin-L1 group. Phonological mapping differences from the L1 to the L2 are suggested as the reason for this result.
  • Cutler, A. (1998). How listeners find the right words. In Proceedings of the Sixteenth International Congress on Acoustics: Vol. 2 (pp. 1377-1380). Melville, NY: Acoustical Society of America.

    Abstract

    Languages contain tens of thousands of words, but these are constructed from a tiny handful of phonetic elements. Consequently, words resemble one another, or can be embedded within one another ("a coup stick snot with standing"). The process of spoken-word recognition by human listeners involves activation of multiple word candidates consistent with the input, and direct competition between activated candidate words. Further, human listeners are sensitive, at an early, prelexical, stage of speech processing, to constraints on what could potentially be a word of the language.
  • Cutler, A. (1993). Language-specific processing: Does the evidence converge? In G. T. Altmann, & R. C. Shillcock (Eds.), Cognitive models of speech processing: The Sperlonga Meeting II (pp. 115-123). Hillsdale, NJ: Erlbaum.
  • Cutler, A. (1991). Linguistic rhythm and speech segmentation. In J. Sundberg, L. Nord, & R. Carlson (Eds.), Music, language, speech and brain (pp. 157-166). London: Macmillan.
  • Cutler, A., & Farrell, J. (2018). Listening in first and second language. In J. I. Liontas (Ed.), The TESOL encyclopedia of language teaching. New York: Wiley. doi:10.1002/9781118784235.eelt0583.

    Abstract

    Listeners' recognition of spoken language involves complex decoding processes: The continuous speech stream must be segmented into its component words, and words must be recognized despite great variability in their pronunciation (due to talker differences, or to influence of phonetic context, or to speech register) and despite competition from many spuriously present forms supported by the speech signal. L1 listeners deal more readily with all levels of this complexity than L2 listeners. Fortunately, the decoding processes necessary for competent L2 listening can be taught in the classroom. Evidence-based methodologies targeted at the development of efficient speech decoding include teaching of minimal pairs, of phonotactic constraints, and of reduction processes, as well as the use of dictation and L2 video captions.
  • Cutler, A., Treiman, R., & Van Ooijen, B. (1998). Orthografik inkoncistensy ephekts in foneme detektion? In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: Vol. 6 (pp. 2783-2786). Sydney: ICSLP.

    Abstract

    The phoneme detection task is widely used in spoken word recognition research. Alphabetically literate participants, however, are more used to explicit representations of letters than of phonemes. The present study explored whether phoneme detection is sensitive to how target phonemes are, or may be, orthographically realised. Listeners detected the target sounds [b,m,t,f,s,k] in word-initial position in sequences of isolated English words. Response times were faster to the targets [b,m,t], which have consistent word-initial spelling, than to the targets [f,s,k], which are inconsistently spelled, but only when listeners’ attention was drawn to spelling by the presence in the experiment of many irregularly spelled fillers. Within the inconsistent targets [f,s,k], there was no significant difference between responses to targets in words with majority and minority spellings. We conclude that performance in the phoneme detection task is not necessarily sensitive to orthographic effects, but that salient orthographic manipulation can induce such sensitivity.
  • Cutler, A., Mehler, J., Norris, D., & Segui, J. (1987). Phoneme identification and the lexicon. Cognitive Psychology, 19, 141-177. doi:10.1016/0010-0285(87)90010-7.
  • Cutler, A. (1993). Phonological cues to open- and closed-class words in the processing of spoken sentences. Journal of Psycholinguistic Research, 22, 109-131.

    Abstract

    Evidence is presented that (a) the open and the closed word classes in English have different phonological characteristics, (b) the phonological dimension on which they differ is one to which listeners are highly sensitive, and (c) spoken open- and closed-class words produce different patterns of results in some auditory recognition tasks. What implications might link these findings? Two recent lines of evidence from disparate paradigms—the learning of an artificial language, and natural and experimentally induced misperception of juncture—are summarized, both of which suggest that listeners are sensitive to the phonological reflections of open- vs. closed-class word status. Although these correlates cannot be strictly necessary for efficient processing, if they are present listeners exploit them in making word class assignments. That such a use of phonological information is of value to listeners could be indirect evidence that open- vs. closed-class words undergo different processing operations. Parts of the research reported in this paper were carried out in collaboration with Sally Butterfield and David Carter, and supported by the Alvey Directorate (United Kingdom). Jonathan Stankler's master's research was supported by the Science and Engineering Research Council (United Kingdom). Thanks to all of the above, and to Merrill Garrett, Mike Kelly, James McQueen, and Dennis Norris for further assistance.
  • Cutler, A., & Chen, H.-C. (1995). Phonological similarity effects in Cantonese word recognition. In K. Elenius, & P. Branderud (Eds.), Proceedings of the Thirteenth International Congress of Phonetic Sciences: Vol. 1 (pp. 106-109). Stockholm: Stockholm University.

    Abstract

    Two lexical decision experiments in Cantonese are described in which the recognition of spoken target words as a function of phonological similarity to a preceding prime is investigated. Phonological similarity in first syllables produced inhibition, while similarity in second syllables led to facilitation. Differences between syllables in tonal and segmental structure had generally similar effects.
  • Cutler, A., Kearns, R., Norris, D., & Scott, D. R. (1993). Problems with click detection: Insights from cross-linguistic comparisons. Speech Communication, 13, 401-410. doi:10.1016/0167-6393(93)90038-M.

    Abstract

    Cross-linguistic comparisons may shed light on the levels of processing involved in the performance of psycholinguistic tasks. For instance, if the same pattern of results appears whether or not subjects understand the experimental materials, it may be concluded that the results do not reflect higher-level linguistic processing. In the present study, English and French listeners performed two tasks - click location and speeded click detection - with both English and French sentences, closely matched for syntactic and phonological structure. Clicks were located more accurately in open- than in closed-class words in both English and French; they were detected more rapidly in open- than in closed-class words in English, but not in French. The two listener groups produced the same pattern of responses, suggesting that higher-level linguistic processing was not involved in the listeners' responses. It is concluded that click detection tasks are primarily sensitive to low-level (e.g. acoustic) effects, and hence are not well suited to the investigation of linguistic processing.
  • Cutler, A. (1991). Proceed with caution. New Scientist, (1799), 53-54.
  • Cutler, A. (1998). Prosodic structure and word recognition. In A. D. Friederici (Ed.), Language comprehension: A biological perspective (pp. 41-70). Heidelberg: Springer.
  • Cutler, A. (1991). Prosody in situations of communication: Salience and segmentation. In Proceedings of the Twelfth International Congress of Phonetic Sciences: Vol. 1 (pp. 264-270). Aix-en-Provence: Université de Provence, Service des publications.

    Abstract

    Speakers and listeners have a shared goal: to communicate. The processes of speech perception and of speech production interact in many ways under the constraints of this communicative goal; such interaction is as characteristic of prosodic processing as of the processing of other aspects of linguistic structure. Two of the major uses of prosodic information in situations of communication are to encode salience and segmentation, and these themes unite the contributions to the symposium introduced by the present review.
  • Cutler, A. (1987). Speaking for listening. In A. Allport, D. MacKay, W. Prinz, & E. Scheerer (Eds.), Language perception and production: Relationships between listening, speaking, reading and writing (pp. 23-40). London: Academic Press.

    Abstract

    Speech production is constrained at all levels by the demands of speech perception. The speaker's primary aim is successful communication, and to this end semantic, syntactic and lexical choices are directed by the needs of the listener. Even at the articulatory level, some aspects of production appear to be perceptually constrained, for example the blocking of phonological distortions under certain conditions. An apparent exception to this pattern is word boundary information, which ought to be extremely useful to listeners, but which is not reliably coded in speech. It is argued that the solution to this apparent problem lies in rethinking the concept of the boundary of the lexical access unit. Speech rhythm provides clear information about the location of stressed syllables, and listeners do make use of this information. If stressed syllables can serve as the determinants of word lexical access codes, then once again speakers are providing precisely the necessary form of speech information to facilitate perception.
  • Cutler, A. (1995). Spoken word recognition and production. In J. L. Miller, & P. D. Eimas (Eds.), Speech, language and communication (pp. 97-136). New York: Academic Press.

    Abstract

    This chapter highlights that most language behavior consists of speaking and listening. The chapter also reveals differences and similarities between speaking and listening. The laboratory study of word production raises formidable problems; ensuring that a particular word is produced may subvert the spontaneous production process. Word production is investigated via slips and tip-of-the-tongue (TOT) states, primarily via instances of processing failure, and via the picture-naming task. The methodology of word production is explained in the chapter. The chapter also explains the phenomenon of interaction between various stages of word production and the process of speech recognition. In this context, it explores the difference between sound and meaning and examines whether or not the comparisons are appropriate between the processes of recognition and production of spoken words. It also describes the similarities and differences in the structure of the recognition and production systems. Finally, the chapter highlights the common issues in recognition and production research, which include the nuances of frequency of occurrence, morphological structure, and phonological structure.
  • Cutler, A. (1995). Spoken-word recognition. In G. Bloothooft, V. Hazan, D. Hubert, & J. Llisterri (Eds.), European studies in phonetics and speech communication (pp. 66-71). Utrecht: OTS.
  • Cutler, A. (1993). Segmentation problems, rhythmic solutions. Lingua, 92, 81-104. doi:10.1016/0024-3841(94)90338-7.

    Abstract

    The lexicon contains discrete entries, which must be located in speech input in order for speech to be understood; but the continuity of speech signals means that lexical access from spoken input involves a segmentation problem for listeners. The speech environment of prelinguistic infants may not provide special information to assist the infant listeners in solving this problem. Mature language users in possession of a lexicon might be thought to be able to avoid explicit segmentation of speech by relying on information from successful lexical access; however, evidence from adult perceptual studies indicates that listeners do use explicit segmentation procedures. These procedures differ across languages and seem to exploit language-specific rhythmic structure. Efficient as these procedures are, they may not have been developed in response to statistical properties of the input, because bilinguals, equally competent in two languages, apparently only possess one rhythmic segmentation procedure. The origin of rhythmic segmentation may therefore lie in the infant's exploitation of rhythm to solve the segmentation problem and gain a first toehold on lexical acquisition. Recent evidence from speech production and perception studies with prelinguistic infants supports the claim that infants are sensitive to rhythmic structure and its relationship to lexical segmentation.
  • Cutler, A. (1993). Segmenting speech in different languages. The Psychologist, 6(10), 453-455.
  • Cutler, A. (1995). The perception of rhythm in spoken and written language. In J. Mehler, & S. Franck (Eds.), Cognition on cognition (pp. 283-288). Cambridge, MA: MIT Press.
  • Cutler, A., Butterfield, S., & Williams, J. (1987). The perceptual integrity of syllabic onsets. Journal of Memory and Language, 26, 406-418. doi:10.1016/0749-596X(87)90099-4.
  • Cutler, A., & Mehler, J. (1993). The periodicity bias. Journal of Phonetics, 21, 101-108.
  • Cutler, A., & Carter, D. (1987). The predominance of strong initial syllables in the English vocabulary. Computer Speech and Language, 2, 133-142. doi:10.1016/0885-2308(87)90004-0.

    Abstract

    Studies of human speech processing have provided evidence for a segmentation strategy in the perception of continuous speech, whereby a word boundary is postulated, and a lexical access procedure initiated, at each metrically strong syllable. The likely success of this strategy was here estimated against the characteristics of the English vocabulary. Two computerized dictionaries were found to list approximately three times as many words beginning with strong syllables (i.e. syllables containing a full vowel) as beginning with weak syllables (i.e. syllables containing a reduced vowel). Consideration of frequency of lexical word occurrence reveals that words beginning with strong syllables occur on average more often than words beginning with weak syllables. Together, these findings motivate an estimate for everyday speech recognition that approximately 85% of lexical words (i.e. excluding function words) will begin with strong syllables. This estimate was tested against a corpus of 190,000 words of spontaneous British English conversation. In this corpus, 90% of lexical words were found to begin with strong syllables. This suggests that a strategy of postulating word boundaries at the onset of strong syllables would have a high success rate in that few actual lexical word onsets would be missed.
  • Cutler, A., & Carter, D. (1987). The prosodic structure of initial syllables in English. In J. Laver, & M. Jack (Eds.), Proceedings of the European Conference on Speech Technology: Vol. 1 (pp. 207-210). Edinburgh: IEE.
  • Cutler, A., & McQueen, J. M. (1995). The recognition of lexical units in speech. In B. De Gelder, & J. Morais (Eds.), Speech and reading: A comparative approach (pp. 33-47). Hove, UK: Erlbaum.
  • Cutler, A. (1998). The recognition of spoken words with variable representations. In D. Duez (Ed.), Proceedings of the ESCA Workshop on Sound Patterns of Spontaneous Speech (pp. 83-92). Aix-en-Provence: Université de Aix-en-Provence.
  • Cutler, A. (1987). The task of the speaker and the task of the hearer [Commentary/Sperber & Wilson: Relevance]. Behavioral and Brain Sciences, 10, 715-716.
  • Cutler, A., & Butterfield, S. (1991). Word boundary cues in clear speech: A supplementary report. Speech Communication, 10, 335-353. doi:10.1016/0167-6393(91)90002-B.

    Abstract

    One of a listener's major tasks in understanding continuous speech is segmenting the speech signal into separate words. When listening conditions are difficult, speakers can help listeners by deliberately speaking more clearly. In four experiments, we examined how word boundaries are produced in deliberately clear speech. In an earlier report we showed that speakers do indeed mark word boundaries in clear speech, by pausing at the boundary and lengthening pre-boundary syllables; moreover, these effects are applied particularly to boundaries preceding weak syllables. In English, listeners use segmentation procedures which make word boundaries before strong syllables easier to perceive; thus marking word boundaries before weak syllables in clear speech will make clear precisely those boundaries which are otherwise hard to perceive. The present report presents supplementary data, namely prosodic analyses of the syllable following a critical word boundary. More lengthening and greater increases in intensity were applied in clear speech to weak syllables than to strong. Mean F0 was also increased to a greater extent on weak syllables than on strong. Pitch movement, however, increased to a greater extent on strong syllables than on weak. The effects were, however, very small in comparison to the durational effects we observed earlier for syllables preceding the boundary and for pauses at the boundary.
  • Cutler, A. (1995). Universal and Language-Specific in the Development of Speech. Biology International, (Special Issue 33).
  • Dai, B., Chen, C., Long, Y., Zheng, L., Zhao, H., Bai, X., Liu, W., Zhang, Y., Liu, L., Guo, T., Ding, G., & Lu, C. (2018). Neural mechanisms for selectively tuning into the target speaker in a naturalistic noisy situation. Nature Communications, 9: 2405. doi:10.1038/s41467-018-04819-z.

    Abstract

    The neural mechanism for selectively tuning in to a target speaker while tuning out the others in a multi-speaker situation (i.e., the cocktail-party effect) remains elusive. Here we addressed this issue by measuring brain activity simultaneously from a listener and from multiple speakers while they were involved in naturalistic conversations. Results consistently show selectively enhanced interpersonal neural synchronization (INS) between the listener and the attended speaker at left temporal–parietal junction, compared with that between the listener and the unattended speaker across different multi-speaker situations. Moreover, INS increases significantly prior to the occurrence of verbal responses, and even when the listener’s brain activity precedes that of the speaker. The INS increase is independent of brain-to-speech synchronization in both the anatomical location and frequency range. These findings suggest that INS underlies the selective process in a multi-speaker situation through neural predictions at the content level but not the sensory level of speech.

    Additional information

    Dai_etal_2018_sup.pdf
  • Danziger, E., & Gaskins, S. (1993). Exploring the Intrinsic Frame of Reference. In S. C. Levinson (Ed.), Cognition and space kit 1.0 (pp. 53-64). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3513136.

    Abstract

    We can describe the position of one item with respect to another using a number of different ‘frames of reference’. For example, I can use a ‘deictic’ frame that involves the speaker’s viewpoint (The chair is on the far side of the room), or an ‘intrinsic’ frame that involves a feature of one of the items (The chair is at the back of the room). Where more than one frame of reference is available in a language, what motivates the speaker’s choice? This elicitation task is designed to explore when and why people select intrinsic frames of reference, and how these choices interact with non-linguistic problem-solving strategies.
  • Danziger, E. (1995). Intransitive predicate form class survey. In D. Wilkins (Ed.), Extensions of space and beyond: manual for field elicitation for the 1995 field season (pp. 46-53). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3004298.

    Abstract

    Different linguistic structures allow us to highlight distinct aspects of a situation. The aim of this survey is to investigate similarities and differences in the expression of situations or events as “stative” (maintaining a state), “inchoative” (adopting a state) and “agentive” (causing something to be in a state). The questionnaire focuses on the encoding of stative, inchoative and agentive possibilities for the translation equivalents of a set of English verbs.
  • Danziger, E. (1995). Posture verb survey. In D. Wilkins (Ed.), Extensions of space and beyond: manual for field elicitation for the 1995 field season (pp. 33-34). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3004235.

    Abstract

    Expressions of human activities and states are a rich area for cross-linguistic comparison. Some languages of the world treat human posture verbs (e.g., sit, lie, kneel) as a special class of predicates, with distinct formal properties. This survey examines lexical, semantic and grammatical patterns for posture verbs, with special reference to contrasts between “stative” (maintaining a posture), “inchoative” (adopting a posture), and “agentive” (causing something to adopt a posture) constructions. The enquiry is thematically linked to the more general questionnaire 'Intransitive Predicate Form Class Survey'.
  • Davidson, D. J. (2006). Strategies for longitudinal neurophysiology [commentary on Osterhout et al.]. Language Learning, 56(suppl. 1), 231-234. doi:10.1111/j.1467-9922.2006.00362.x.
  • Dediu, D. (2018). Making genealogical language classifications available for phylogenetic analysis: Newick trees, unified identifiers, and branch length. Language Dynamics and Change, 8(1), 1-21. doi:10.1163/22105832-00801001.

    Abstract

    One of the best-known types of non-independence between languages is caused by genealogical relationships due to descent from a common ancestor. These can be represented by (more or less resolved and controversial) language family trees. In theory, one can argue that language families should be built through the strict application of the comparative method of historical linguistics, but in practice this is not always the case, and there are several proposed classifications of languages into language families, each with its own advantages and disadvantages. A major stumbling block shared by most of them is that they are relatively difficult to use with computational methods, and in particular with phylogenetics. This is due to their lack of standardization, coupled with the general non-availability of branch length information, which encapsulates the amount of evolution taking place on the family tree. In this paper I introduce a method (and its implementation in R) that converts the language classifications provided by four widely-used databases (Ethnologue, WALS, AUTOTYP and Glottolog) into the de facto Newick standard generally used in phylogenetics, aligns the four most used conventions for unique identifiers of linguistic entities (ISO 639-3, WALS, AUTOTYP and Glottocode), and adds branch length information from a variety of sources (the tree's own topology, an externally given numeric constant, or a distance matrix). The R scripts, input data and resulting Newick trees are available under liberal open-source licenses in a GitHub repository (https://github.com/ddediu/lgfam-newick), to encourage and promote the use of phylogenetic methods to investigate linguistic diversity and its temporal dynamics.
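    The Newick format mentioned in the abstract encodes a family tree, with optional branch lengths, as a parenthesized string. As a minimal sketch of the idea (not code from the paper, which is in R; the language names and branch lengths below are invented purely for illustration), a nested classification can be serialized to Newick like this:

    ```python
    def to_newick(node):
        """Serialize a (name, branch_length, children) tuple to a Newick fragment."""
        name, length, children = node
        if children:
            inner = ",".join(to_newick(c) for c in children)
            return f"({inner}){name}:{length}"
        return f"{name}:{length}"

    # Toy classification with made-up branch lengths.
    tree = ("Germanic", 1.0, [
        ("English", 0.5, []),
        ("Continental", 0.3, [
            ("Dutch", 0.2, []),
            ("German", 0.2, []),
        ]),
    ])

    newick = to_newick(tree) + ";"
    print(newick)  # (English:0.5,(Dutch:0.2,German:0.2)Continental:0.3)Germanic:1.0;
    ```

    Strings of this shape are what standard phylogenetics software expects as input, which is why the paper targets Newick as its output format.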
  • Dediu, D. (2006). Mostly out of Africa, but what did the others have to say? In A. Cangelosi, A. D. Smith, & K. Smith (Eds.), The evolution of language: proceedings of the 6th International Conference (EVOLANG6) (pp. 59-66). World Scientific.

    Abstract

    The Recent Out-of-Africa human evolutionary model seems to be generally accepted. This impression is very prevalent outside palaeoanthropological circles (including studies of language evolution), but proves to be unwarranted. This paper offers a short review of the main challenges facing ROA and concludes that alternative models based on the concept of metapopulation must also be considered. The implications of such a model for language evolution and diversity are briefly reviewed.
  • Dediu, D., & Levinson, S. C. (2018). Neanderthal language revisited: Not only us. Current Opinion in Behavioral Sciences, 21, 49-55. doi:10.1016/j.cobeha.2018.01.001.

    Abstract

    Here we re-evaluate our 2013 paper on the antiquity of language (Dediu and Levinson, 2013) in the light of a surge of new information on human evolution in the last half million years. Although new genetic data suggest the existence of some cognitive differences between Neanderthals and modern humans (fully expected after hundreds of thousands of years of partially separate evolution), overall our claims that Neanderthals were fully articulate beings and that language evolution was gradual are further substantiated by the wealth of new genetic, paleontological and archeological evidence briefly reviewed here.
  • Degand, L., & Van Bergen, G. (2018). Discourse markers as turn-transition devices: Evidence from speech and instant messaging. Discourse Processes, 55, 47-71. doi:10.1080/0163853X.2016.1198136.

    Abstract

    In this article we investigate the relation between discourse markers and turn-transition strategies in face-to-face conversations and Instant Messaging (IM), that is, unplanned, real-time, text-based, computer-mediated communication. By means of a quantitative corpus study of utterances containing a discourse marker, we show that utterance-final discourse markers are used more often in IM than in face-to-face conversations. Moreover, utterance-final discourse markers are shown to occur more often at points of turn-transition compared with points of turn-maintenance in both types of conversation. From our results we conclude that the discourse markers in utterance-final position can function as a turn-transition mechanism, signaling that the turn is over and the floor is open to the hearer. We argue that this linguistic turn-taking strategy is essentially similar in face-to-face and IM communication. Our results add to the evidence that communication in IM is more like speech than like writing.
  • Delgado, T., Ravignani, A., Verhoef, T., Thompson, B., Grossi, T., & Kirby, S. (2018). Cultural transmission of melodic and rhythmic universals: Four experiments and a model. In C. Cuskley, M. Flaherty, H. Little, L. McCrohon, A. Ravignani, & T. Verhoef (Eds.), Proceedings of the 12th International Conference on the Evolution of Language (EVOLANG XII) (pp. 89-91). Toruń, Poland: NCU Press. doi:10.12775/3991-1.019.
  • Den Hoed, J., Sollis, E., Venselaar, H., Estruch, S. B., Derizioti, P., & Fisher, S. E. (2018). Functional characterization of TBR1 variants in neurodevelopmental disorder. Scientific Reports, 8: 14279. doi:10.1038/s41598-018-32053-6.

    Abstract

    Recurrent de novo variants in the TBR1 transcription factor are implicated in the etiology of sporadic autism spectrum disorders (ASD). Disruptions include missense variants located in the T-box DNA-binding domain and previous work has demonstrated that they disrupt TBR1 protein function. Recent screens of thousands of simplex families with sporadic ASD cases uncovered additional T-box variants in TBR1 but their etiological relevance is unclear. We performed detailed functional analyses of de novo missense TBR1 variants found in the T-box of ASD cases, assessing many aspects of protein function, including subcellular localization, transcriptional activity and protein-interactions. Only two of the three tested variants severely disrupted TBR1 protein function, despite in silico predictions that all would be deleterious. Furthermore, we characterized a putative interaction with BCL11A, a transcription factor that was recently implicated in a neurodevelopmental syndrome involving developmental delay and language deficits. Our findings enhance understanding of molecular functions of TBR1, as well as highlighting the importance of functional testing of variants that emerge from next-generation sequencing, to decipher their contributions to neurodevelopmental disorders like ASD.

    Additional information

    Electronic supplementary material
  • Desmet, T., De Baecke, C., Drieghe, D., Brysbaert, M., & Vonk, W. (2006). Relative clause attachment in Dutch: On-line comprehension corresponds to corpus frequencies when lexical variables are taken into account. Language and Cognitive Processes, 21(4), 453-485. doi:10.1080/01690960400023485.

    Abstract

    Desmet, Brysbaert, and De Baecke (2002a) showed that the production of relative clauses following two potential attachment hosts (e.g., ‘Someone shot the servant of the actress who was on the balcony’) was influenced by the animacy of the first host. These results were important because they refuted evidence from Dutch against experience-based accounts of syntactic ambiguity resolution, such as the tuning hypothesis. However, Desmet et al. did not provide direct evidence in favour of tuning, because their study focused on production and did not include reading experiments. In the present paper this line of research was extended. A corpus analysis and an eye-tracking experiment revealed that when taking into account lexical properties of the NP host sites (i.e., animacy and concreteness) the frequency pattern and the on-line comprehension of the relative clause attachment ambiguity do correspond. The implications for exposure-based accounts of sentence processing are discussed.
  • Devanna, P., Van de Vorst, M., Pfundt, R., Gilissen, C., & Vernes, S. C. (2018). Genome-wide investigation of an ID cohort reveals de novo 3′UTR variants affecting gene expression. Human Genetics, 137(9), 717-721. doi:10.1007/s00439-018-1925-9.

    Abstract

    Intellectual disability (ID) is a severe neurodevelopmental disorder with genetically heterogeneous causes. Large-scale sequencing has led to the identification of many gene-disrupting mutations; however, a substantial proportion of cases lack a molecular diagnosis. As such, there remains much to uncover for a complete understanding of the genetic underpinnings of ID. Genetic variants present in non-coding regions of the genome have been highlighted as potential contributors to neurodevelopmental disorders given their role in regulating gene expression. Nevertheless, the functional characterization of non-coding variants remains challenging. We describe the identification and characterization of de novo non-coding variation in 3′UTR regulatory regions within an ID cohort of 50 patients. This cohort was previously screened for structural and coding pathogenic variants via CNV, whole exome and whole genome analysis. We identified 44 high-confidence single nucleotide non-coding variants within the 3′UTR regions of these 50 genomes. Four of these variants were located within predicted miRNA binding sites and were thus hypothesised to have regulatory consequences. Functional testing showed that two of the variants interfered with miRNA-mediated regulation of their target genes, AMD1 and FAIM. Both these variants were found in the same individual and their functional consequences may point to a potential role for such variants in intellectual disability.

    Additional information

    439_2018_1925_MOESM1_ESM.docx
  • Devanna, P., Chen, X. S., Ho, J., Gajewski, D., Smith, S. D., Gialluisi, A., Francks, C., Fisher, S. E., Newbury, D. F., & Vernes, S. C. (2018). Next-gen sequencing identifies non-coding variation disrupting miRNA binding sites in neurological disorders. Molecular Psychiatry, 23(5), 1375-1384. doi:10.1038/mp.2017.30.

    Abstract

    Understanding the genetic factors underlying neurodevelopmental and neuropsychiatric disorders is a major challenge given their prevalence and potential severity for quality of life. While large-scale genomic screens have made major advances in this area, for many disorders the genetic underpinnings are complex and poorly understood. To date the field has focused predominantly on protein coding variation, but given the importance of tightly controlled gene expression for normal brain development and disorder, variation that affects non-coding regulatory regions of the genome is likely to play an important role in these phenotypes. Herein we show the importance of 3 prime untranslated region (3'UTR) non-coding regulatory variants across neurodevelopmental and neuropsychiatric disorders. We devised a pipeline for identifying and functionally validating putatively pathogenic variants from next generation sequencing (NGS) data. We applied this pipeline to a cohort of children with severe specific language impairment (SLI) and identified a functional, SLI-associated variant affecting gene regulation in cells and post-mortem human brain. This variant and the affected gene (ARHGEF39) represent new putative risk factors for SLI. Furthermore, we identified 3′UTR regulatory variants across autism, schizophrenia and bipolar disorder NGS cohorts, demonstrating their impact on neurodevelopmental and neuropsychiatric disorders. Our findings show the importance of investigating non-coding regulatory variants when determining risk factors contributing to neurodevelopmental and neuropsychiatric disorders. In the future, integration of such regulatory variation with protein coding changes will be essential for uncovering the genetic causes of complex neurological disorders and the fundamental mechanisms underlying health and disease.

    Additional information

    mp201730x1.docx
  • Diesveld, P., & Kempen, G. (1993). Zinnen als bouwwerken: Computerprogramma's voor grammatica-oefeningen. MOER, Tijdschrift voor onderwijs in het Nederlands, 1993(4), 130-138.
  • Dietrich, C. (2006). The acquisition of phonological structure: Distinguishing contrastive from non-contrastive variation. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.57829.
  • Dietrich, R., Klein, W., & Noyau, C. (1993). The acquisition of temporality. In C. Perdue (Ed.), Adult language acquisition: Cross-linguistic perspectives: Vol. 2 The results (pp. 73-118). Cambridge: Cambridge University Press.
  • Dietrich, R., Klein, W., & Noyau, C. (1995). The acquisition of temporality in a second language. Amsterdam: Benjamins.
  • Dijkstra, T., & Kempen, G. (Eds.). (1993). Einführung in die Psycholinguistik. München: Hans Huber.
  • Dijkstra, T. (1993). Taalpsychologie (G. Kempen, Ed.). Groningen: Wolters-Noordhoff.