Publications

Displaying 101 - 200 of 1143
  • Broeder, D., Van Veenendaal, R., Nathan, D., & Strömqvist, S. (2006). A grid of language resource repositories. In Proceedings of the 2nd IEEE International Conference on e-Science and Grid Computing.
  • Broeder, D., Declerck, T., Romary, L., Uneson, M., Strömqvist, S., & Wittenburg, P. (2004). A large metadata domain of language resources. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC 2004) (pp. 369-372). Paris: European Language Resources Association.
  • Broeder, D. (2004). 40,000 IMDI sessions. Language Archive Newsletter, 1(4), 12.
  • Broeder, D., Nava, M., & Declerck, T. (2004). INTERA - a Distributed Domain of Metadata Resources. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC 2004) (pp. 369-372). Paris: European Language Resources Association.
  • Broeder, D., & Offenga, F. (2004). IMDI Metadata Set 3.0. Language Archive Newsletter, 1(2), 3.
  • Broeder, D., Claus, A., Offenga, F., Skiba, R., Trilsbeek, P., & Wittenburg, P. (2006). LAMUS: The Language Archive Management and Upload System. In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC 2006) (pp. 2291-2294).
  • Broersma, M. (2006). Nonnative listeners rely less on phonetic information for phonetic categorization than native listeners. In Variation, detail and representation: 10th Conference on Laboratory Phonology (pp. 109-110).
  • Broersma, M., & De Bot, K. (2006). Triggered codeswitching: A corpus-based evaluation of the original triggering hypothesis and a new alternative. Bilingualism: Language and Cognition, 9(1), 1-13. doi:10.1017/S1366728905002348.

    Abstract

    In this article the triggering hypothesis for codeswitching proposed by Michael Clyne is discussed and tested. According to this hypothesis, cognates can facilitate codeswitching of directly preceding or following words. It is argued that the triggering hypothesis in its original form is incompatible with language production models, as it assumes that language choice takes place at the surface structure of utterances, while in bilingual production models language choice takes place along with lemma selection. An adjusted version of the triggering hypothesis is proposed in which triggering takes place during lemma selection and the scope of triggering is extended to basic units in language production. Data from a Dutch–Moroccan Arabic corpus are used for a statistical test of the original and the adjusted triggering theory. The codeswitching patterns found in the data support part of the original triggering hypothesis, but they are best explained by the adjusted triggering theory.
  • Broersma, M. (2006). Accident - execute: Increased activation in nonnative listening. In Proceedings of Interspeech 2006 (pp. 1519-1522).

    Abstract

    Dutch and English listeners’ perception of English words with partially overlapping onsets (e.g., accident - execute) was investigated. Partially overlapping words remained active longer for nonnative listeners, causing an increase of lexical competition in nonnative compared with native listening.
  • Broersma, M., & Kolkman, K. M. (2004). Lexical representation of non-native phonemes. In S. Kin, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 1241-1244). Seoul: Sunjijn Printing Co.
  • Brouwer, S., & Bradlow, A. R. (2015). The effect of target-background synchronicity on speech-in-speech recognition. In The Scottish Consortium for ICPhS 2015, M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). Glasgow: University of Glasgow.

    Abstract

    The aim of the present study was to investigate whether speech-in-speech recognition is affected by variation in the target-background timing relationship. Specifically, we examined whether within trial synchronous or asynchronous onset and offset of the target and background speech influenced speech-in-speech recognition. Native English listeners were presented with English target sentences in the presence of English or Dutch background speech. Importantly, only the short-term temporal context, in terms of onset and offset synchrony or asynchrony of the target and background speech, varied across conditions. Participants’ task was to repeat back the English target sentences. The results showed an effect of synchronicity for English-in-English but not for English-in-Dutch recognition, indicating that familiarity with the English background lead in the asynchronous English-in-English condition might have attracted attention towards the English background. Overall, this study demonstrated that speech-in-speech recognition is sensitive to the target-background timing relationship, revealing an important role for variation in the local context of the target-background relationship as it extends beyond the limits of the time-frame of the to-be-recognized target sentence.
  • Brouwer, S., & Bradlow, A. R. (2015). The temporal dynamics of spoken word recognition in adverse listening conditions. Journal of Psycholinguistic Research. Advance online publication. doi:10.1007/s10936-015-9396-9.

    Abstract

    This study examined the temporal dynamics of spoken word recognition in noise and background speech. In two visual-world experiments, English participants listened to target words while looking at four pictures on the screen: a target (e.g. candle), an onset competitor (e.g. candy), a rhyme competitor (e.g. sandal), and an unrelated distractor (e.g. lemon). Target words were presented in quiet, mixed with broadband noise, or mixed with background speech. Results showed that lexical competition changes throughout the observation window as a function of what is presented in the background. These findings suggest that, rather than being strictly sequential, stream segregation and lexical competition interact during spoken word recognition.
  • Brown, P. (2004). Position and motion in Tzeltal frog stories: The acquisition of narrative style. In S. Strömqvist, & L. Verhoeven (Eds.), Relating events in narrative: Typological and contextual perspectives (pp. 37-57). Mahwah: Erlbaum.

    Abstract

    How are events framed in narrative? Speakers of English (a 'satellite-framed' language), when 'reading' Mercer Mayer's wordless picture book 'Frog, Where Are You?', find the story self-evident: a boy has a dog and a pet frog; the frog escapes and runs away; the boy and dog look for it across hill and dale, through woods and over a cliff, until they find it and return home with a baby frog child of the original pet frog. In Tzeltal, as spoken in a Mayan community in southern Mexico, the story is somewhat different, because the language structures event descriptions differently. Tzeltal is in part a 'verb-framed' language with a set of Path-encoding motion verbs, so that the bare bones of the Frog story can consist of verbs translating as 'go'/'pass by'/'ascend'/ 'descend'/ 'arrive'/'return'. But Tzeltal also has satellite-framing adverbials, grammaticized from the same set of motion verbs, which encode the direction of motion or the orientation of static arrays. Furthermore, motion is not generally encoded barebones, but vivid pictorial detail is provided by positional verbs which can describe the position of the Figure as an outcome of a motion event; motion and stasis are thereby combined in a single event description. (For example: jipot jawal "he has been thrown (by the deer) lying_face_upwards_spread-eagled".) This paper compares the use of these three linguistic resources in frog narratives from 14 Tzeltal adults and 21 children, looks at their development in the narratives of children between the ages of 4-12, and considers the results in relation to those from Berman and Slobin's (1996) comparative study of adult and child Frog stories.
  • Brown, C. M., & Hagoort, P. (1989). De LAT-relatie tussen lichaam en geest: Over de implicaties van neurowetenschap voor onze kennis van cognitie. In C. Brown, P. Hagoort, & T. Meijering (Eds.), Vensters op de geest: Cognitie op het snijvlak van filosofie en psychologie (pp. 50-81). Utrecht: Grafiet.
  • Brown, P. (1989). [Review of the book Language, gender, and sex in comparative perspective ed. by Susan U. Philips, Susan Steele, and Christine Tanz]. Man, 24(1), 192.
  • Brown, A. (2006). Cross-linguistic influence in first and second languages: Convergence in speech and gesture. PhD Thesis, Boston University, Boston.

    Abstract

    Research on second language acquisition typically focuses on how a first language (L1) influences a second language (L2) in different linguistic domains and across modalities. This dissertation, in contrast, explores interactions between languages in the mind of a language learner by asking 1) can an emerging L2 influence an established L1? 2) if so, how is such influence realized? 3) are there parallel influences of the L1 on the L2? These questions were investigated for the expression of Manner (e.g. climb, roll) and Path (e.g. up, down) of motion, areas where substantial crosslinguistic differences exist in speech and co-speech gesture. Japanese and English are typologically distinct in this domain; therefore, narrative descriptions of four motion events were elicited from monolingual Japanese speakers (n=16), monolingual English speakers (n=13), and native Japanese speakers with intermediate knowledge of English (narratives elicited in both their L1 and L2, n=28). Ways in which Path and Manner were expressed at the lexical, syntactic, and gestural levels were analyzed in monolingual and non-monolingual production. Results suggest mutual crosslinguistic influences. In their L1, native Japanese speakers with knowledge of English displayed both Japanese- and English-like use of morphosyntactic elements to express Path and Manner (i.e. a combination of verbs and other constructions). Consequently, non-monolingual L1 discourse contained significantly more Path expressions per clause, with significantly greater mention of Goal of motion than monolingual Japanese and English discourse. Furthermore, the gestures of non-monolingual speakers diverged from their monolingual counterparts with differences in depiction of Manner and gesture perspective (character versus observer). Importantly, non-monolingual production in the L1 was not ungrammatical, but simply reflected altered preferences. 
As for L2 production, many effects of L1 influence were seen, crucially in areas parallel to those described above. Overall, production by native Japanese speakers who knew English differed from that of monolingual Japanese and English speakers. But L1 and L2 production within non-monolingual individuals was similar. These findings imply a convergence of L1-L2 linguistic systems within the mind of a language learner. Theoretical and methodological implications for SLA research and language assessment with respect to the ‘native speaker standard language’ are discussed.
  • Brown, P. (2006). Cognitive anthropology. In C. Jourdan, & K. Tuite (Eds.), Language, culture and society: Key topics in linguistic anthropology (pp. 96-114). Cambridge University Press.

    Abstract

    This is an appropriate moment to review the state of the art in cognitive anthropology, construed broadly as the comparative study of human cognition in its linguistic and cultural context. In reaction to the dominance of universalism in the 1970s and '80s, there have recently been a number of reappraisals of the relation between language and cognition, and the field of cognitive anthropology is flourishing in several new directions in both America and Europe. This is partly due to a renewal and re-evaluation of approaches to the question of linguistic relativity associated with Whorf, and partly to the inspiration of modern developments in cognitive science. This review briefly sketches the history of cognitive anthropology and surveys current research on both sides of the Atlantic. The focus is on assessing current directions, considering in particular, by way of illustration, recent work in cultural models and on spatial language and cognition. The review concludes with an assessment of how cognitive anthropology could contribute directly both to the broader project of cognitive science and to the anthropological study of how cultural ideas and practices relate to structures and processes of human cognition.
  • Brown, P. (2006). A sketch of the grammar of space in Tzeltal. In S. C. Levinson, & D. P. Wilkins (Eds.), Grammars of space: Explorations in cognitive diversity (pp. 230-272). Cambridge: Cambridge University Press.

    Abstract

    This paper surveys the lexical and grammatical resources for talking about spatial relations in the Mayan language Tzeltal - for describing where things are located, where they are moving, and how they are distributed in space. Six basic sets of spatial vocabulary are presented: i. existential locative expressions with ay ‘exist’, ii. deictics (demonstratives, adverbs, presentationals), iii. dispositional adjectives, often in combination with (iv) and (v), iv. body part relational noun locatives, v. absolute (‘cardinal’) directions, and vi. motion verbs, directionals and auxiliaries. The first two are used in minimal locative descriptions, while the others constitute the core resources for specifying in detail the location, disposition, orientation, or motion of a Figure in relation to a Ground. We find that Tzeltal displays a relative de-emphasis on deixis and left/right asymmetry, and a detailed attention to the spatial properties of objects.
  • Brown, P., & Levinson, S. C. (2004). Frames of spatial reference and their acquisition in Tenejapan Tzeltal. In A. Assmann, U. Gaier, & G. Trommsdorff (Eds.), Zwischen Literatur und Anthropologie: Diskurse, Medien, Performanzen (pp. 285-314). Tübingen: Gunter Narr.

    Abstract

    This is a reprint of the Brown and Levinson 2000 article.
  • Brown, P. (2006). Language, culture and cognition: The view from space. Zeitschrift für Germanistische Linguistik, 34, 64-86.

    Abstract

    This paper addresses the vexed questions of how language relates to culture, and what kind of notion of culture is important for linguistic explanation. I first sketch five perspectives - five different construals - of culture apparent in linguistics and in cognitive science more generally. These are: (i) culture as ethno-linguistic group, (ii) culture as a mental module, (iii) culture as knowledge, (iv) culture as context, and (v) culture as a process emergent in interaction. I then present my own work on spatial language and cognition in a Mayan language and culture, to explain why I believe a concept of culture is important for linguistics. I argue for a core role for cultural explanation in two domains: in analysing the semantics of words embedded in cultural practices which color their meanings (in this case, spatial frames of reference), and in characterizing thematic and functional links across different domains in the social and semiotic life of a particular group of people.
  • Brown, P., Levinson, S. C., & Senft, G. (2004). Initial references to persons and places. In A. Majid (Ed.), Field Manual Volume 9 (pp. 37-44). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492929.

    Abstract

    This task has two parts: (i) video-taped elicitation of the range of possibilities for referring to persons and places, and (ii) observations of (first) references to persons and places in video-taped natural interaction. The goal of this task is to establish the repertoires of referential terms (and other practices) used for referring to persons and to places in particular languages and cultures, and provide examples of situated use of these kinds of referential practices in natural conversation. This data will form the basis for cross-language comparison, and for formulating hypotheses about general principles underlying the deployment of such referential terms in natural language usage.
  • Brown, P., Gaskins, S., Lieven, E., Striano, T., & Liszkowski, U. (2004). Multimodal multiperson interaction with infants aged 9 to 15 months. In A. Majid (Ed.), Field Manual Volume 9 (pp. 56-63). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492925.

    Abstract

    Interaction, for all that it has an ethological base, is culturally constituted, and how new social members are enculturated into the interactional practices of the society is of critical interest to our understanding of interaction – how much is learned, how variable is it across cultures – as well as to our understanding of the role of culture in children’s social-cognitive development. The goal of this task is to document the nature of caregiver infant interaction in different cultures, especially during the critical age of 9-15 months when children come to have an understanding of others’ intentions. This is of interest to all students of interaction; it does not require specialist knowledge of children.
  • Brown, P. (2015). Language, culture, and spatial cognition. In F. Sharifian (Ed.), Routledge Handbook on Language and Culture (pp. 294-309). London: Routledge.
  • Brown, P. (1997). Isolating the CVC root in Tzeltal Mayan: A study of children's first verbs. In E. V. Clark (Ed.), Proceedings of the 28th Annual Child Language Research Forum (pp. 41-52). Stanford, CA: CSLI/University of Chicago Press.

    Abstract

    How do children isolate the semantic package contained in verb roots in the Mayan language Tzeltal? One might imagine that the canonical CVC shape of roots characteristic of Mayan languages would make the job simple, but the root is normally preceded and followed by affixes which mask its identity. Pye (1983) demonstrated that, in Kiche' Mayan, prosodic salience overrides semantic salience, and children's first words in Kiche' are often composed of only the final (stressed) syllable constituted by the final consonant of the CVC root and a 'meaningless' termination suffix. Intonation thus plays a crucial role in early Kiche' morphological development. Tzeltal presents a rather different picture: The first words of children around the age of 1;6 are bare roots, children strip off all prefixes and suffixes which are obligatory in adult speech. They gradually add them, starting with the suffixes (which receive the main stress), but person prefixes are omitted in some contexts past a child's third birthday, and one obligatory aspectual prefix (x-) is systematically omitted by the four children in my longitudinal study even after they are four years old. Tzeltal children's first verbs generally show faultless isolation of the root. An account in terms of intonation or stress cannot explain this ability (the prefixes are not all syllables; the roots are not always stressed). This paper suggests that probable clues include the fact that the CVC root stays constant across contexts (with some exceptions) whereas the affixes vary, that there are some linguistic contexts where the root occurs without any prefixes (relatively frequent in the input), and that the Tzeltal discourse convention of responding by repeating with appropriate deictic alternation (e.g., "I see it." "Oh, you see it.") highlights the root.
  • Brown, P. (2015). Space: Linguistic expression of. In J. D. Wright (Ed.), International Encyclopedia of the Social and Behavioral Sciences (2nd ed.) Vol. 23 (pp. 89-93). Amsterdam: Elsevier. doi:10.1016/B978-0-08-097086-8.57017-2.
  • Brown, P. (2015). Politeness and language. In J. D. Wright (Ed.), The International Encyclopedia of the Social and Behavioural Sciences (IESBS), (2nd ed.) (pp. 326-330). Amsterdam: Elsevier. doi:10.1016/B978-0-08-097086-8.53072-4.
  • Brown-Schmidt, S., & Konopka, A. E. (2015). Processes of incremental message planning during conversation. Psychonomic Bulletin & Review, 22, 833-843. doi:10.3758/s13423-014-0714-2.

    Abstract

    Speaking begins with the formulation of an intended preverbal message and linguistic encoding of this information. The transition from thought to speech occurs incrementally, with cascading planning at subsequent levels of production. In this article, we aim to specify the mechanisms that support incremental message preparation. We contrast two hypotheses about the mechanisms responsible for incorporating message-level information into a linguistic plan. According to the Initial Preparation view, messages can be encoded as fluent utterances if all information is ready before speaking begins. By contrast, on the Continuous Incrementality view, messages can be continually prepared and updated throughout the production process, allowing for fluent production even if new information is added to the message while speaking is underway. Testing these hypotheses, eye-tracked speakers in two experiments produced unscripted, conjoined noun phrases with modifiers. Both experiments showed that new message elements can be incrementally incorporated into the utterance even after articulation begins, consistent with a Continuous Incrementality view of message planning, in which messages percolate to linguistic encoding immediately as that information becomes available in the mind of the speaker. We conclude by discussing the functional role of incremental message planning in conversational speech and the situations in which this continuous incremental planning would be most likely to be observed.
  • Brucato, N., Guadalupe, T., Franke, B., Fisher, S. E., & Francks, C. (2015). A schizophrenia-associated HLA locus affects thalamus volume and asymmetry. Brain, Behavior, and Immunity, 46, 311-318. doi:10.1016/j.bbi.2015.02.021.

    Abstract

    Genes of the Major Histocompatibility Complex (MHC) have recently been shown to have neuronal functions in the thalamus and hippocampus. Common genetic variants in the Human Leukocyte Antigens (HLA) region, human homologue of the MHC locus, are associated with small effects on susceptibility to schizophrenia, while volumetric changes of the thalamus and hippocampus have also been linked to schizophrenia. We therefore investigated whether common variants of the HLA would affect volumetric variation of the thalamus and hippocampus. We analyzed thalamus and hippocampus volumes, as measured using structural magnetic resonance imaging, in 1,265 healthy participants. These participants had also been genotyped using genome-wide single nucleotide polymorphism (SNP) arrays. We imputed genotypes for single nucleotide polymorphisms at high density across the HLA locus, as well as HLA allotypes and HLA amino acids, by use of a reference population dataset that was specifically targeted to the HLA region. We detected a significant association of the SNP rs17194174 with thalamus volume (nominal P=0.0000017, corrected P=0.0039), as well as additional SNPs within the same region of linkage disequilibrium. This effect was largely lateralized to the left thalamus and is localized within a genomic region previously associated with schizophrenia. The associated SNPs are also clustered within a potential regulatory element, and a region of linkage disequilibrium that spans genes expressed in the thalamus, including HLA-A. Our data indicate that genetic variation within the HLA region influences the volume and asymmetry of the human thalamus. The molecular mechanisms underlying this association may relate to HLA influences on susceptibility to schizophrenia.
  • Bruggeman, L., & Janse, E. (2015). Older listeners' decreased flexibility in adjusting to changes in speech signal reliability. In M. Wolters, J. Livingstone, B. Beattie, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). London: International Phonetic Association.

    Abstract

    Under noise or speech reductions, young adult listeners flexibly adjust the parameters of lexical activation and competition to allow for speech signal unreliability. Consequently, mismatches in the input are treated more leniently such that lexical candidates are not immediately deactivated. Using eyetracking, we assessed whether this modulation of recognition dynamics also occurs for older listeners. Dutch participants (aged 60+) heard Dutch sentences containing a critical word while viewing displays of four line drawings. The name of one picture shared either onset or rhyme with the critical word (i.e., was a phonological competitor). Sentences were either clear and noise-free, or had several phonemes replaced by bursts of noise. A larger preference for onset competitors than for rhyme competitors was observed in both clear and noise conditions; performance did not alter across condition. This suggests that dynamic adjustment of spoken-word recognition parameters in response to noise is less available to older listeners.
  • Brugman, H. (2004). ELAN 2.2 now available. Language Archive Newsletter, 1(3), 13-14.
  • Brugman, H., Sloetjes, H., Russel, A., & Klassmann, A. (2004). ELAN 2.3 available. Language Archive Newsletter, 1(4), 13.
  • Brugman, H. (2004). ELAN Releases 2.0.2 and 2.1. Language Archive Newsletter, 1(2), 4.
  • Brugman, H., Crasborn, O., & Russel, A. (2004). Collaborative annotation of sign language data with Peer-to-Peer technology. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC 2004) (pp. 213-216). Paris: European Language Resources Association.
  • Brugman, H., Malaisé, V., & Gazendam, L. (2006). A web based general thesaurus browser to support indexing of television and radio programs. In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC 2006) (pp. 1488-1491).
  • Brugman, H., & Russel, A. (2004). Annotating Multi-media/Multi-modal resources with ELAN. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC 2004) (pp. 2065-2068). Paris: European Language Resources Association.
  • Budwig, N., Narasimhan, B., & Srivastava, S. (2006). Interim solutions: The acquisition of early constructions in Hindi. In E. Clark, & B. Kelly (Eds.), Constructions in acquisition (pp. 163-185). Stanford: CSLI Publications.
  • Bull, L. E., Oliver, C., Callaghan, E., & Woodcock, K. A. (2015). Increased Exposure to Rigid Routines can Lead to Increased Challenging Behavior Following Changes to Those Routines. Journal of Autism and Developmental Disorders, 45(6), 1569-1578. doi:10.1007/s10803-014-2308-2.

    Abstract

    Several neurodevelopmental disorders are associated with preference for routine and challenging behavior following changes to routines. We examine individuals with Prader–Willi syndrome, who show elevated levels of this behavior, to better understand how previous experience of a routine can affect challenging behavior elicited by disruption to that routine. Play-based challenges exposed 16 participants to routines, which were either adhered to or changed. Temper outburst behaviors, heart rate and movement were measured. As participants were exposed to routines for longer before a change (between 10 and 80 min; within participants), more temper outburst behaviors were elicited by changes. Increased emotional arousal was also elicited, which was indexed by heart rate increases not driven by movement. Further study will be important to understand whether current intervention approaches that limit exposure to changes may benefit from the structured integration of flexibility to ensure that the opportunity for routine establishment is also limited.

  • Burenhult, N. (2004). Spatial deixis in Jahai. In S. Burusphat (Ed.), Papers from the 11th Annual Meeting of the Southeast Asian Linguistics Society 2001 (pp. 87-100). Arizona State University: Program for Southeast Asian Studies.
  • Burenhult, N. (2006). Body part terms in Jahai. Language Sciences, 28(2-3), 162-180. doi:10.1016/j.langsci.2005.11.002.

    Abstract

    This article explores the lexicon of body part terms in Jahai, a Mon-Khmer language spoken by a group of hunter–gatherers in the Malay Peninsula. It provides an extensive inventory of body part terms and describes their structural and semantic properties. The Jahai body part lexicon pays attention to fine anatomical detail but lacks labels for major, ‘higher-level’ categories, like ‘trunk’, ‘limb’, ‘arm’ and ‘leg’. In this lexicon it is therefore sometimes difficult to discern a clear partonomic hierarchy, a presumed universal of body part terminology.
  • Burenhult, N. (2004). Landscape terms and toponyms in Jahai: A field report. Lund Working Papers, 51, 17-29.
  • Butterfield, S., & Cutler, A. (1988). Segmentation errors by human listeners: Evidence for a prosodic segmentation strategy. In W. Ainsworth, & J. Holmes (Eds.), Proceedings of SPEECH ’88: Seventh Symposium of the Federation of Acoustic Societies of Europe: Vol. 3 (pp. 827-833). Edinburgh: Institute of Acoustics.
  • Byun, K.-S., & Byun, E.-J. (2015). Becoming Friends with International Sign. Seoul: Sign Language Dandelion.
  • Caldwell-Harris, C. L., Lancaster, A., Ladd, D. R., Dediu, D., & Christiansen, M. H. (2015). Factors influencing sensitivity to lexical tone in an artificial language: Implications for second language learning. Studies in Second Language Acquisition, 37(2), 335-357. doi:10.1017/S0272263114000849.

    Abstract

    This study examined whether musical training, ethnicity, and experience with a natural tone language influenced sensitivity to tone while listening to an artificial tone language. The language was designed with three tones, modeled after level-tone African languages. Participants listened to a 15-min random concatenation of six 3-syllable words. Sensitivity to tone was assessed using minimal pairs differing only in one syllable (nonword task: e.g., to-kà-su compared to ca-fí-to) or only in tone (tone task: e.g., to-kà-su compared to to-ká-su). Proficiency in an East Asian heritage language was the strongest predictor of success on the tone task. Asians without tone language experience were no better than other ethnic groups. We conclude by considering implications for research on second language learning, especially as approached through artificial language learning.
  • Calkoen, F., Vervat, C., van Pel, M., de Haas, V., Vijfhuizen, L., Eising, E., Kroes, W., Hoen, P., van den Heuvel-Eibrink, M., Egeler, R., Van Tol, M., & Ball, L. (2015). Despite differential gene expression profiles pediatric MDS derived mesenchymal stromal cells display functionality in vitro. Stem Cell Research, 14(2), 198-210. doi:10.1016/j.scr.2015.01.006.

    Abstract

    Pediatric myelodysplastic syndrome (MDS) is a heterogeneous disease covering a spectrum ranging from aplasia (RCC) to myeloproliferation (RAEB(t)). In adult-type MDS there is increasing evidence for abnormal function of the bone-marrow microenvironment. Here, we extensively studied the mesenchymal stromal cells (MSCs) derived from children with MDS. MSCs were expanded from the bone-marrow of 17 MDS patients (RCC: n = 10 and advanced MDS: n = 7) and pediatric controls (n = 10). No differences were observed with respect to phenotype, differentiation capacity, immunomodulatory capacity or hematopoietic support. mRNA expression analysis by Deep-SAGE revealed increased IL-6 expression in RCC- and RAEB(t)-MDS. RCC-MDS MSC expressed increased levels of DKK3, a protein associated with decreased apoptosis. RAEB(t)-MDS revealed increased CRLF1 and decreased DAPK1 expressions. This pattern has been associated with transformation in hematopoietic malignancies. Genes reported to be differentially expressed in adult MDS-MSC did not differ between MSC of pediatric MDS and controls. An altered mRNA expression profile, associated with cell survival and malignant transformation, of MSC derived from children with MDS strengthens the hypothesis that the micro-environment is of importance in this disease. Our data support the understanding that pediatric and adult MDS are two different diseases. Further evaluation of the pathways involved might reveal additional therapy targets.
  • Calkoen, F. G., Vervat, C., Eising, E., Vijfhuizen, L. S., 't Hoen, P.-B.-A., van den Heuvel-Eibrink, M. M., & Egeler, R. M. (2015). Gene-expression and in vitro function of mesenchymal stromal cells are affected in juvenile myelomonocytic leukemia. Haematologica, 100(11), 1434-1441. doi:10.3324/haematol.2015.126938.

    Abstract

    An aberrant interaction between hematopoietic stem cells and mesenchymal stromal cells has been linked to disease and shown to contribute to the pathophysiology of hematologic malignancies in murine models. Juvenile myelomonocytic leukemia is an aggressive malignant disease affecting young infants. Here we investigated the impact of juvenile myelomonocytic leukemia on mesenchymal stromal cells. Mesenchymal stromal cells were expanded from bone marrow samples of patients at diagnosis (n=9) and after hematopoietic stem cell transplantation (n=7; from 5 patients) and from healthy children (n=10). Cells were characterized by phenotyping, differentiation, gene expression analysis (of controls and samples obtained at diagnosis) and in vitro functional studies assessing immunomodulation and hematopoietic support. Mesenchymal stromal cells from patients did not differ from controls in differentiation capacity nor did they differ in their capacity to support in vitro hematopoiesis. Deep-SAGE sequencing revealed differential mRNA expression in patient-derived samples, including genes encoding proteins involved in immunomodulation and cell-cell interaction. Selected gene expression normalized during remission after successful hematopoietic stem cell transplantation. Whereas natural killer cell activation and peripheral blood mononuclear cell proliferation were not differentially affected, the suppressive effect on monocyte to dendritic cell differentiation was increased by mesenchymal stromal cells obtained at diagnosis, but not at time of remission. This study shows that active juvenile myelomonocytic leukemia affects the immune response-related gene expression and function of mesenchymal stromal cells. In contrast, the differential gene expression of hematopoiesis-related genes could not be supported by functional data. Decreased immune surveillance might contribute to the therapy resistance and progression in juvenile myelomonocytic leukemia.
  • Carlsson, K., Andersson, J., Petrovic, P., Petersson, K. M., Öhman, A., & Ingvar, M. (2006). Predictability modulates the affective and sensory-discriminative neural processing of pain. NeuroImage, 32(4), 1804-1814. doi:10.1016/j.neuroimage.2006.05.027.

    Abstract

    Knowing what is going to happen next, that is, the capacity to predict upcoming events, modulates the extent to which aversive stimuli induce stress and anxiety. We explored this issue by manipulating the temporal predictability of aversive events by means of a visual cue, which was either correlated or uncorrelated with pain stimuli (electric shocks). Subjects reported lower levels of anxiety, negative valence and pain intensity when shocks were predictable. In addition to attenuating the focus on danger, predictability allows for correct temporal estimation of, and selective attention to, the sensory input. With functional magnetic resonance imaging, we found that predictability was related to enhanced activity in relevant sensory-discriminative processing areas, such as the primary and secondary sensory cortex and posterior insula. In contrast, the unpredictable, more aversive context was correlated with brain activity in the anterior insula and the orbitofrontal cortex, areas associated with affective pain processing. This context also prompted increased activity in the posterior parietal cortex and lateral prefrontal cortex that we attribute to enhanced alertness and sustained attention during unpredictability.
  • Carlsson, K., Petersson, K. M., Lundqvist, D., Karlsson, A., Ingvar, M., & Öhman, A. (2004). Fear and the amygdala: manipulation of awareness generates differential cerebral responses to phobic and fear-relevant (but nonfeared) stimuli. Emotion, 4(4), 340-353. doi:10.1037/1528-3542.4.4.340.

    Abstract

    Rapid response to danger holds an evolutionary advantage. In this positron emission tomography study, phobics were exposed to masked visual stimuli with timings that either allowed awareness or not of either phobic, fear-relevant (e.g., spiders to snake phobics), or neutral images. When the timing did not permit awareness, the amygdala responded to both phobic and fear-relevant stimuli. With time for more elaborate processing, phobic stimuli resulted in an addition of an affective processing network to the amygdala activity, whereas no activity was found in response to fear-relevant stimuli. Also, right prefrontal areas appeared deactivated, comparing aware phobic and fear-relevant conditions. Thus, a shift from top-down control to an affectively driven system optimized for speed was observed in phobic relative to fear-relevant aware processing.
  • Carota, F. (2006). Derivational morphology of Italian: Principles for formalization. Literary and Linguistic Computing, 21(SUPPL. 1), 41-53. doi:10.1093/llc/fql007.

    Abstract

    The present paper investigates the major derivational strategies underlying the formation of suffixed words in Italian, with the purpose of tackling the issue of their formalization. After specifying the theoretical cognitive premises that orient the work, the interacting component modules of the suffixation process, i.e. morphonology, morphotactics and affixal semantics, are explored empirically by drawing on ample naturally occurring data from a corpus of written Italian. Special attention is paid to the semantic mechanisms involved in suffixation. Some semantic nuclei are identified for the major suffixed word types of Italian, which are due to word formation rules active at the synchronic level, and a semantic configuration of productive suffixes is suggested. A general framework is then sketched, which combines classical finite-state methods with a feature unification-based word grammar. More specifically, the semantic information specified for the affixal material is internalised into the structures of Lexical Functional Grammar (LFG). The formal model allows us to integrate the various modules of suffixation. In particular, it treats, on the one hand, the interface between morphonology/morphotactics and semantics and, on the other hand, the interface between suffixation and inflection. Furthermore, since LFG exploits a hierarchically organised lexicon in order to structure the information regarding the affixal material, affixal co-selectional restrictions are advantageously constrained, avoiding potential multiple spurious analyses/generations.
  • Casillas, M., De Vos, C., Crasborn, O., & Levinson, S. C. (2015). The perception of stroke-to-stroke turn boundaries in signed conversation. In D. C. Noelle, R. Dale, A. S. Warlaumont, J. Yoshimi, T. Matlock, C. D. Jennings, & P. R. Maglio (Eds.), Proceedings of the 37th Annual Meeting of the Cognitive Science Society (CogSci 2015) (pp. 315-320). Austin, TX: Cognitive Science Society.

    Abstract

    Speaker transitions in conversation are often brief, with minimal vocal overlap. Signed languages appear to defy this pattern with frequent, long spans of simultaneous signing. But recent evidence suggests that turn boundaries in signed language may only include the content-bearing parts of the turn (from the first stroke to the last), and not all turn-related movement (from first preparation to final retraction). We tested whether signers were able to anticipate “stroke-to-stroke” turn boundaries with only minimal conversational context. We found that, indeed, signers anticipated turn boundaries at the ends of turn-final strokes. Signers often responded early, especially when the turn was long or contained multiple possible end points. Early responses for long turns were especially apparent for interrogatives—long interrogative turns showed much greater anticipation compared to short ones.
  • Ceroni, F., Simpson, N. H., Francks, C., Baird, G., Conti-Ramsden, G., Clark, A., Bolton, P. F., Hennessy, E. R., Donnelly, P., Bentley, D. R., Martin, H., IMGSAC, SLI Consortium, WGS500 Consortium, Parr, J., Pagnamenta, A. T., Maestrini, E., Bacchelli, E., Fisher, S. E., & Newbury, D. F. (2015). Reply to Pembrey et al: ‘ZNF277 microdeletions, specific language impairment and the meiotic mismatch methylation (3M) hypothesis’. European Journal of Human Genetics, 23, 1113-1115. doi:10.1038/ejhg.2014.275.
  • Chang, F., Bauman, M., Pappert, S., & Fitz, H. (2015). Do lemmas speak German?: A verb position effect in German structural priming. Cognitive Science, 39(5), 1113-1130. doi:10.1111/cogs.12184.

    Abstract

    Lexicalized theories of syntax often assume that verb-structure regularities are mediated by lemmas, which abstract over variation in verb tense and aspect. German syntax seems to challenge this assumption, because verb position depends on tense and aspect. To examine how German speakers link these elements, a structural priming study was performed which varied syntactic structure, verb position (encoded by tense and aspect), and verb overlap. Abstract structural priming was found, both within and across verb position, but priming was larger when the verb position was the same between prime and target. Priming was boosted by verb overlap, but there was no interaction with verb position. The results can be explained by a lemma model where tense and aspect are linked to structural choices in German. Since the architecture of this lemma model is not consistent with results from English, a connectionist model was developed which could explain the cross-linguistic variation in the production system. Together, these findings support the view that language learning plays an important role in determining the nature of structural priming in different languages.
  • Chen, J. (2006). The acquisition of verb compounding in Mandarin. In E. V. Clark, & B. F. Kelly (Eds.), Constructions in acquisition (pp. 111-136). Stanford: CSLI Publications.
  • Chen, Y., & Braun, B. (2006). Prosodic realization of information structure categories in standard Chinese. In R. Hoffmann, & H. Mixdorff (Eds.), Speech Prosody 2006. Dresden: TUD Press.

    Abstract

    This paper investigates the prosodic realization of information structure categories in Standard Chinese. A number of proper names with different tonal combinations were elicited as a grammatical subject in five pragmatic contexts. Results show that both duration and F0 range of the tonal realizations were adjusted to signal the information structure categories (i.e. theme vs. rheme and background vs. focus). Rhemes consistently induced a longer duration and a more expanded F0 range than themes. Focus, compared to background, generally induced lengthening and F0 range expansion (the presence and magnitude of which, however, are dependent on the tonal structure of the proper names). Within the rheme focus condition, corrective rheme focus induced a more expanded F0 range than normal rheme focus.
  • Chen, A. (2006). Variations in the marking of focus in child language. In Variation, detail and representation: 10th Conference on Laboratory Phonology (pp. 113-114).
  • Chen, H.-C., & Cutler, A. (1997). Auditory priming in spoken and printed word recognition. In H.-C. Chen (Ed.), Cognitive processing of Chinese and related Asian languages (pp. 77-81). Hong Kong: Chinese University Press.
  • Chen, A. (2015). Children’s use of intonation in reference and the role of input. In L. Serratrice, & S. E. M. Allen (Eds.), The acquisition of reference (pp. 83-104). Amsterdam: Benjamins.

    Abstract

    Studies on children’s use of intonation in reference are few in number but are diverse in terms of theoretical frameworks and intonational parameters. In the current review, I present a re-analysis of the referents in each study, using a three-dimensional approach (i.e. referential givenness-newness, relational givenness-newness, contrast), discuss the use of intonation at two levels (phonetic, phonological), and compare findings from different studies within a single framework. The patterns stemming from these studies may be limited in generalisability but can serve as initial hypotheses for future work. Furthermore, I examine the role of input as available in infant direct speech in the acquisition of intonational encoding of referents. In addition, I discuss how future research can advance our knowledge.
  • Chen, A. (2006). Interface between information structure and intonation in Dutch wh-questions. In R. Hoffmann, & H. Mixdorff (Eds.), Speech Prosody 2006. Dresden: TUD Press.

    Abstract

    This study set out to investigate how accent placement is pragmatically governed in WH-questions. Central to this issue are questions such as whether the intonation of the WH-word depends on the information structure of the non-WH word part, whether topical constituents can be accented, and whether constituents in the non-WH word part can be non-topical and accented. Previous approaches, based either on carefully composed examples or on read speech, differ in their treatments of these questions and consequently make opposing claims on the intonation of WH-questions. We addressed these questions by examining a corpus of 90 naturally occurring WH-questions, selected from the Spoken Dutch Corpus. Results show that the intonation of the WH-word is related to the information structure of the non-WH word part. Further, topical constituents can get accented and the accents are not necessarily phonetically reduced. Additionally, certain adverbs, which have no topical relation to the presupposition of the WH-questions, also get accented. They appear to function as a device for enhancing speaker engagement.
  • Chen, A., Gussenhoven, C., & Rietveld, T. (2004). Language specificity in perception of paralinguistic intonational meaning. Language and Speech, 47(4), 311-349.

    Abstract

    This study examines the perception of paralinguistic intonational meanings deriving from Ohala’s Frequency Code (Experiment 1) and Gussenhoven’s Effort Code (Experiment 2) in British English and Dutch. Native speakers of British English and Dutch listened to a number of stimuli in their native language and judged each stimulus on four semantic scales deriving from these two codes: SELF-CONFIDENT versus NOT SELF-CONFIDENT, FRIENDLY versus NOT FRIENDLY (Frequency Code); SURPRISED versus NOT SURPRISED, and EMPHATIC versus NOT EMPHATIC (Effort Code). The stimuli, which were lexically equivalent across the two languages, differed in pitch contour, pitch register and pitch span in Experiment 1, and in pitch register, peak height, peak alignment and end pitch in Experiment 2. Contrary to the traditional view that the paralinguistic usage of intonation is similar across languages, it was found that British English and Dutch listeners differed considerably in the perception of “confident,” “friendly,” “emphatic,” and “surprised.” The present findings support a theory of paralinguistic meaning based on the universality of biological codes, which however acknowledges a languagespecific component in the implementation of these codes.
  • Chen, J., Calhoun, V. D., Arias-Vasquez, A., Zwiers, M. P., Van Hulzen, K., Fernández, G., Fisher, S. E., Franke, B., Turner, J. A., & Liu, J. (2015). G-Protein genomic association with normal variation in gray matter density. Human Brain Mapping, 36(11), 4272-4286. doi:10.1002/hbm.22916.

    Abstract

    While detecting genetic variations underlying brain structures helps reveal mechanisms of neural disorders, high data dimensionality poses a major challenge for imaging genomic association studies. In this work, we present the application of a recently proposed approach, parallel independent component analysis with reference (pICA-R), to investigate genomic factors potentially regulating gray matter variation in a healthy population. This approach simultaneously assesses many variables for an aggregate effect and helps to elicit particular features in the data. We applied pICA-R to analyze gray matter density (GMD) images (274,131 voxels) in conjunction with single nucleotide polymorphism (SNP) data (666,019 markers) collected from 1,256 healthy individuals of the Brain Imaging Genetics (BIG) study. Guided by a genetic reference derived from the gene GNA14, pICA-R identified a significant SNP-GMD association (r = −0.16, P = 2.34 × 10⁻⁸), implying that subjects with specific genotypes have lower localized GMD. The identified components were then projected to an independent dataset from the Mind Clinical Imaging Consortium (MCIC) including 89 healthy individuals, and the obtained loadings again yielded a significant SNP-GMD association (r = −0.25, P = 0.02). The imaging component reflected GMD variations in frontal, precuneus, and cingulate regions. The SNP component was enriched in genes with neuronal functions, including synaptic plasticity, axon guidance, and molecular signal transduction via PKA and CREB, highlighting the GRM1, PRKCH, GNA12, and CAMK2B genes. Collectively, our findings suggest that GNA12 and GNA14 play a key role in the genetic architecture underlying normal GMD variation in frontal and parietal regions.
  • Cho, T., & McQueen, J. M. (2006). Phonological versus phonetic cues in native and non-native listening: Korean and Dutch listeners' perception of Dutch and English consonants. Journal of the Acoustical Society of America, 119(5), 3085-3096. doi:10.1121/1.2188917.

    Abstract

    We investigated how listeners of two unrelated languages, Korean and Dutch, process phonologically viable and nonviable consonants spoken in Dutch and American English. To Korean listeners, released final stops are nonviable because word-final stops in Korean are never released in words spoken in isolation, but to Dutch listeners, unreleased word-final stops are nonviable because word-final stops in Dutch are generally released in words spoken in isolation. Two phoneme monitoring experiments showed a phonological effect on both Dutch and English stimuli: Korean listeners detected the unreleased stops more rapidly whereas Dutch listeners detected the released stops more rapidly and/or more accurately. The Koreans, however, detected released stops more accurately than unreleased stops, but only in the non-native language they were familiar with (English). The results suggest that, in non-native speech perception, phonological legitimacy in the native language can be more important than the richness of phonetic information, though familiarity with phonetic detail in the non-native language can also improve listening performance.
  • Cho, T., & McQueen, J. M. (2004). Phonotactics vs. phonetic cues in native and non-native listening: Dutch and Korean listeners' perception of Dutch and English. In S. Kin, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 1301-1304). Seoul: Sunjin Printing Co.

    Abstract

    We investigated how listeners of two unrelated languages, Dutch and Korean, process phonotactically legitimate and illegitimate sounds spoken in Dutch and American English. To Dutch listeners, unreleased word-final stops are phonotactically illegal because word-final stops in Dutch are generally released in isolation, but to Korean listeners, released final stops are illegal because word-final stops are never released in Korean. Two phoneme monitoring experiments showed a phonotactic effect: Dutch listeners detected released stops more rapidly than unreleased stops whereas the reverse was true for Korean listeners. Korean listeners with English stimuli detected released stops more accurately than unreleased stops, however, suggesting that acoustic-phonetic cues associated with released stops improve detection accuracy. We propose that in non-native speech perception, phonotactic legitimacy in the native language speeds up phoneme recognition, the richness of acoustic-phonetic cues improves listening accuracy, and familiarity with the non-native language modulates the relative influence of these two factors.
  • Cho, T. (2004). Prosodically conditioned strengthening and vowel-to-vowel coarticulation in English. Journal of Phonetics, 32(2), 141-176. doi:10.1016/S0095-4470(03)00043-3.

    Abstract

    The goal of this study is to examine how the degree of vowel-to-vowel coarticulation varies as a function of prosodic factors such as nuclear-pitch accent (accented vs. unaccented), level of prosodic boundary (Prosodic Word vs. Intermediate Phrase vs. Intonational Phrase), and position-in-prosodic-domain (initial vs. final). It is hypothesized that vowels in prosodically stronger locations (e.g., in accented syllables and at a higher prosodic boundary) are not only coarticulated less with their neighboring vowels, but they also exert a stronger influence on their neighbors. Measurements of tongue position for English /a i/ over time were obtained with Carsten’s electromagnetic articulography. Results showed that vowels in prosodically stronger locations are coarticulated less with neighboring vowels, but do not exert a stronger influence on the articulation of neighboring vowels. An examination of the relationship between coarticulation and duration revealed that (a) accent-induced coarticulatory variation cannot be attributed to a duration factor and (b) some of the data with respect to boundary effects may be accounted for by the duration factor. This suggests that to the extent that prosodically conditioned coarticulatory variation is duration-independent, there is no absolute causal relationship from duration to coarticulation. It is proposed that prosodically conditioned V-to-V coarticulatory reduction is another type of strengthening that occurs in prosodically strong locations. The prosodically driven coarticulatory patterning is taken to be part of the phonetic signatures of the hierarchically nested structure of prosody.
  • Cho, T., & Johnson, E. K. (2004). Acoustic correlates of phrase-internal lexical boundaries in Dutch. In S. Kin, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 1297-1300). Seoul: Sunjin Printing Co.

    Abstract

    The aim of this study was to determine if Dutch speakers reliably signal phrase-internal lexical boundaries, and if so, how. Six speakers recorded 4 pairs of phonemically identical strong-weak-strong (SWS) strings with matching syllable boundaries but mismatching intended word boundaries (e.g. reis # pastei versus reispas # tij, or more broadly C1V2(C)#C2V2(C)C3V3(C) vs. C1V2(C)C2V2(C)#C3V3(C)). An Analysis of Variance revealed 3 acoustic parameters that were significantly greater in S#WS items (C2 DURATION, RIME1 DURATION, C3 BURST AMPLITUDE) and 5 parameters that were significantly greater in the SW#S items (C2 VOT, C3 DURATION, RIME2 DURATION, RIME3 DURATION, and V2 AMPLITUDE). Additionally, center of gravity measurements suggested that the [s] to [t] coarticulation was greater in reis # pa[st]ei versus reispa[s] # [t]ij. Finally, a Logistic Regression Analysis revealed that the 3 parameters (RIME1 DURATION, RIME2 DURATION, and C3 DURATION) contributed most reliably to a S#WS versus SW#S classification.
  • Choi, J., Broersma, M., & Cutler, A. (2015). Enhanced processing of a lost language: Linguistic knowledge or linguistic skill? In Proceedings of Interspeech 2015: 16th Annual Conference of the International Speech Communication Association (pp. 3110-3114).

    Abstract

    Same-different discrimination judgments for pairs of Korean stop consonants, or of Japanese syllables differing in phonetic segment length, were made by adult Korean adoptees in the Netherlands, by matched Dutch controls, and by Korean controls. The adoptees did not outdo either control group on either task, although the same individuals had performed significantly better than matched controls on an identification learning task. This suggests that early exposure to multiple phonetic systems does not specifically improve acoustic-phonetic skills; rather, enhanced performance suggests retained language knowledge.
  • Cholin, J. (2004). Syllables in speech production: Effects of syllable preparation and syllable frequency. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.60589.

    Abstract

    The fluent production of speech is a very complex human skill. It requires the coordination of several articulatory subsystems. The instructions that lead articulatory movements to execution are the result of the interplay of speech production levels that operate above the articulatory network. During the process of word-form encoding, the groundwork is laid for the articulatory programs, which then serve the articulators as basic units. This thesis investigated whether or not syllables form the basis for the articulatory programs and, in particular, whether or not these syllable programs are stored separately from the store of the lexical word-forms. It is assumed that syllable units are stored in a so-called 'mental syllabary'. The main goal of this thesis was to find evidence of the syllable playing a functionally important role in speech production and for the assumption that syllables are stored units. In a variant of the implicit priming paradigm, it was investigated whether information about the syllabic structure of a target word facilitates the preparation (advance planning) of a to-be-produced utterance. These experiments yielded evidence for the functionally important role of syllables in speech production. In a subsequent series of experiments, it was demonstrated that the production of syllables is sensitive to frequency. Syllable frequency effects provide strong evidence for the notion of a mental syllabary because only stored units are likely to exhibit frequency effects. In a final study, effects of syllable preparation and syllable frequency were investigated in a combined design to disentangle the two effects. The results of this last experiment converged with those reported for the other experiments and added further support to the claim that syllables play a core functional role in speech production and are stored in a mental syllabary.

    Additional information

    full text via Radboud Repository
  • Cholin, J., Schiller, N. O., & Levelt, W. J. M. (2004). The preparation of syllables in speech production. Journal of Memory and Language, 50(1), 47-61. doi:10.1016/j.jml.2003.08.003.

    Abstract

    Models of speech production assume that syllables play a functional role in the process of word-form encoding in speech production. In this study, we investigate this claim and specifically provide evidence about the level at which syllables come into play. We report two studies using an odd-man-out variant of the implicit priming paradigm to examine the role of the syllable during the process of word formation. Our results show that this modified version of the implicit priming paradigm can trace the emergence of syllabic structure during spoken word generation. Comparing these results to prior syllable priming studies, we conclude that syllables emerge at the interface between phonological and phonetic encoding. The results are discussed in terms of the WEAVER++ model of lexical access.
  • Cholin, J., Levelt, W. J. M., & Schiller, N. O. (2006). Effects of syllable frequency in speech production. Cognition, 99, 205-235. doi:10.1016/j.cognition.2005.01.009.

    Abstract

    In the speech production model proposed by [Levelt, W. J. M., Roelofs, A., Meyer, A. S. (1999). A theory of lexical access in speech production. Behavioral and Brain Sciences, 22, pp. 1-75.], syllables play a crucial role at the interface of phonological and phonetic encoding. At this interface, abstract phonological syllables are translated into phonetic syllables. It is assumed that this translation process is mediated by a so-called Mental Syllabary. Rather than constructing the motor programs for each syllable on-line, the mental syllabary is hypothesized to provide pre-compiled gestural scores for the articulators. In order to find evidence for such a repository, we investigated syllable-frequency effects: If the mental syllabary consists of retrievable representations corresponding to syllables, then the retrieval process should be sensitive to frequency differences. In a series of experiments using a symbol-position association learning task, we tested whether high-frequency syllables are retrieved and produced faster compared to low-frequency syllables. We found significant syllable frequency effects with monosyllabic pseudo-words and disyllabic pseudo-words in which the first syllable bore the frequency manipulation; no effect was found when the frequency manipulation was on the second syllable. The implications of these results for the theory of word form encoding at the interface of phonological and phonetic encoding, especially with respect to the access mechanisms to the mental syllabary in the speech production model by Levelt et al., are discussed.
  • Claus, A. (2004). Access management system. Language Archive Newsletter, 1(2), 5.
  • Collins, J. (2015). ‘Give’ and semantic maps. In B. Nolan, G. Rawoens, & E. Diedrichsen (Eds.), Causation, permission, and transfer: Argument realisation in GET, TAKE, PUT, GIVE and LET verbs (pp. 129-146). Amsterdam: John Benjamins.
  • Cooper, N., & Cutler, A. (2004). Perception of non-native phonemes in noise. In S. Kin, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 469-472). Seoul: Sunjin Printing Co.

    Abstract

    We report an investigation of the perception of American English phonemes by Dutch listeners proficient in English. Listeners identified either the consonant or the vowel in most possible English CV and VC syllables. The syllables were embedded in multispeaker babble at three signal-to-noise ratios (16 dB, 8 dB, and 0 dB). Effects of signal-to-noise ratio on vowel and consonant identification are discussed as a function of syllable position and of relationship to the native phoneme inventory. Comparison of the results with previously reported data from native listeners reveals that noise affected the responding of native and non-native listeners similarly.
  • Cooper, A., Brouwer, S., & Bradlow, A. R. (2015). Interdependent processing and encoding of speech and concurrent background noise. Attention, Perception & Psychophysics, 77(4), 1342-1357. doi:10.3758/s13414-015-0855-z.

    Abstract

    Speech processing can often take place in adverse listening conditions that involve the mixing of speech and background noise. In this study, we investigated processing dependencies between background noise and indexical speech features, using a speeded classification paradigm (Garner, 1974; Exp. 1), and whether background noise is encoded and represented in memory for spoken words in a continuous recognition memory paradigm (Exp. 2). Whether or not the noise spectrally overlapped with the speech signal was also manipulated. The results of Experiment 1 indicated that background noise and indexical features of speech (gender, talker identity) cannot be completely segregated during processing, even when the two auditory streams are spectrally nonoverlapping. Perceptual interference was asymmetric, whereby irrelevant indexical feature variation in the speech signal slowed noise classification to a greater extent than irrelevant noise variation slowed speech classification. This asymmetry may stem from the fact that speech features have greater functional relevance to listeners, and are thus more difficult to selectively ignore than background noise. Experiment 2 revealed that a recognition cost for words embedded in different types of background noise on the first and second occurrences only emerged when the noise and the speech signal were spectrally overlapping. Together, these data suggest integral processing of speech and background noise, modulated by the level of processing and the spectral separation of the speech and noise.
  • Coridun, S., Ernestus, M., & Ten Bosch, L. (2015). Learning pronunciation variants in a second language: Orthographic effects. In Scottish consortium for ICPhS 2015, M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). Glasgow: University of Glasgow.

    Abstract

    The present study investigated the effect of orthography on the learning and subsequent processing of pronunciation variants in a second language. Dutch learners of French learned reduced pronunciation variants that result from schwa-zero alternation in French (e.g., reduced /ʃnij/ from chenille 'caterpillar'). Half of the participants additionally learnt the words' spellings, which correspond more closely to the full variants with schwa. On the following day, participants performed an auditory lexical decision task, in which they heard half of the words in their reduced variants, and the other half in their full variants. Participants who had exclusively learnt the auditory forms performed significantly worse on full variants than participants who had also learnt the spellings. This shows that learners integrate phonological and orthographic information to process pronunciation variants. There was no difference between both groups in their performances on reduced variants, suggesting that the exposure to spelling does not impede learners' processing of these variants.
  • Cox, S., Rösler, D., & Skiba, R. (1989). A tailor-made database for language teaching material. Literary & Linguistic Computing, 4(4), 260-264.
  • Crago, M. B., & Allen, S. E. M. (1997). Linguistic and cultural aspects of simplicity and complexity in Inuktitut child directed speech. In E. Hughes, M. Hughes, & A. Greenhill (Eds.), Proceedings of the 21st annual Boston University Conference on Language Development (pp. 91-102).
  • Crago, M. B., Allen, S. E. M., & Hough-Eyamie, W. P. (1997). Exploring innateness through cultural and linguistic variation. In M. Gopnik (Ed.), The inheritance and innateness of grammars (pp. 70-90). New York City, NY, USA: Oxford University Press, Inc.
  • Crasborn, O., Sloetjes, H., Auer, E., & Wittenburg, P. (2006). Combining video and numeric data in the analysis of sign languages with the ELAN annotation software. In C. Vettori (Ed.), Proceedings of the 2nd Workshop on the Representation and Processing of Sign languages: Lexicographic matters and didactic scenarios (pp. 82-87). Paris: ELRA.

    Abstract

    This paper describes hardware and software that can be used for the phonetic study of sign languages. The field of sign language phonetics is characterised, and the hardware that is currently in use is described. The paper focuses on the software that was developed to enable the recording of finger and hand movement data, and the additions to the ELAN annotation software that facilitate the further visualisation and analysis of the data.
  • Croijmans, I., & Majid, A. (2015). Odor naming is difficult, even for wine and coffee experts. In D. C. Noelle, R. Dale, A. S. Warlaumont, J. Yoshimi, T. Matlock, C. D. Jennings, & P. P. Maglio (Eds.), Proceedings of the 37th Annual Meeting of the Cognitive Science Society (CogSci 2015) (pp. 483-488). Austin, TX: Cognitive Science Society. Retrieved from https://mindmodeling.org/cogsci2015/papers/0092/index.html.

    Abstract

    Odor naming is difficult for people, but recent cross-cultural research suggests this difficulty is culture-specific. Jahai speakers (hunter-gatherers from the Malay Peninsula) name odors as consistently as colors, and much better than English speakers (Majid & Burenhult, 2014). In Jahai the linguistic advantage for smells correlates with a cultural interest in odors. Here we ask whether sub-cultures in the West with odor expertise also show superior odor naming. We tested wine and coffee experts (who have specialized odor training) in an odor naming task. Both wine and coffee experts were no more accurate or consistent than novices when naming odors. Although there were small differences in naming strategies, experts and non-experts alike relied overwhelmingly on source-based descriptions. So the specific language experts speak continues to constrain their ability to express odors. This suggests expertise alone is not sufficient to overcome the limits of language in the domain of smell.
  • Cronin, K. A., De Groot, E., & Stevens, J. M. G. (2015). Bonobos show limited social tolerance in a group setting: A comparison with chimpanzees and a test of the Relational Model. Folia primatologica, 86, 164-177. doi:10.1159/000373886.

    Abstract

    Social tolerance is a core aspect of primate social relationships with implications for the evolution of cooperation, prosociality and social learning. We measured the social tolerance of bonobos in an experiment recently validated with chimpanzees to allow for a comparative assessment of group-level tolerance, and found that the bonobo group studied here exhibited lower social tolerance on average than chimpanzees. Furthermore, following the Relational Model [de Waal, 1996], we investigated whether bonobos responded to an increased potential for social conflict with tolerance, conflict avoidance or conflict escalation, and found that only behaviours indicative of conflict escalation differed across conditions. Taken together, these findings contribute to the current debate over the level of social tolerance of bonobos and lend support to the position that the social tolerance of bonobos may not be notably high compared with other primates.
  • Cronin, K. A., Acheson, D. J., Hernández, P., & Sánchez, A. (2015). Hierarchy is Detrimental for Human Cooperation. Scientific Reports, 5: 18634. doi:10.1038/srep18634.

    Abstract

    Studies of animal behavior consistently demonstrate that the social environment impacts cooperation, yet the effect of social dynamics has been largely excluded from studies of human cooperation. Here, we introduce a novel approach inspired by nonhuman primate research to address how social hierarchies impact human cooperation. Participants competed to earn hierarchy positions and then could cooperate with another individual in the hierarchy by investing in a common effort. Cooperation was achieved if the combined investments exceeded a threshold, and the higher ranked individual distributed the spoils unless control was contested by the partner. Compared to a condition lacking hierarchy, cooperation declined in the presence of a hierarchy due to a decrease in investment by lower ranked individuals. Furthermore, hierarchy was detrimental to cooperation regardless of whether it was earned or arbitrary. These findings mirror results from nonhuman primates and demonstrate that hierarchies are detrimental to cooperation. However, these results deviate from nonhuman primate findings by demonstrating that human behavior is responsive to changing hierarchical structures, and they suggest partnership dynamics that may improve cooperation. This work introduces a controlled way to investigate the social influences on human behavior, and demonstrates the evolutionary continuity of human behavior with other primate species.
  • Cronin, K. A., Mitchell, M. A., Lonsdorf, E. V., & Thompson, S. D. (2006). One year later: Evaluation of PMC-Recommended births and transfers. Zoo Biology, 25, 267-277. doi:10.1002/zoo.20100.

    Abstract

    To meet their exhibition, conservation, education, and scientific goals, members of the American Zoo and Aquarium Association (AZA) collaborate to manage their living collections as single species populations. These cooperative population management programs, Species Survival Plans (SSP) and Population Management Plans (PMP), issue specimen-by-specimen recommendations aimed at perpetuating captive populations by maintaining genetic diversity and demographic stability. Species Survival Plans and PMPs differ in that SSP participants agree to complete recommendations, whereas PMP participants need only take recommendations under advisement. We evaluated the effect of program type and the number of participating institutions on the success of actions recommended by the Population Management Center (PMC): transfers of specimens between institutions, breeding, and target number of offspring. We analyzed AZA studbook databases for the occurrence of recommended or unrecommended transfers and births during the 1-year period after the distribution of standard AZA Breeding-and-Transfer Plans. We had three major findings: 1) on average, both SSPs and PMPs fell about 25% short of their target; however, as the number of participating institutions increased so too did the likelihood that programs met or exceeded their target; 2) SSPs exhibited significantly greater transfer success than PMPs, although transfer success for both program types was below 50%; and 3) SSPs exhibited significantly greater breeding success than PMPs, although breeding success for both program types was below 20%. Together, these results indicate that the science and sophistication behind genetic and demographic management of captive populations may be compromised by the challenges of implementation.
  • Cutler, A., Norris, D., & Sebastián-Gallés, N. (2004). Phonemic repertoire and similarity within the vocabulary. In S. Kim, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 65-68). Seoul: Sunjijn Printing Co.

    Abstract

    Language-specific differences in the size and distribution of the phonemic repertoire can have implications for the task facing listeners in recognising spoken words. A language with more phonemes will allow shorter words and reduced embedding of short words within longer ones, decreasing the potential for spurious lexical competitors to be activated by speech signals. We demonstrate that this is the case via comparative analyses of the vocabularies of English and Spanish. A language which uses suprasegmental as well as segmental contrasts, however, can substantially reduce the extent of spurious embedding.
  • Cutler, A. (2006). Rudolf Meringer. In K. Brown (Ed.), Encyclopedia of Language and Linguistics (vol. 8) (pp. 12-13). Amsterdam: Elsevier.

    Abstract

    Rudolf Meringer (1859–1931), Indo-European philologist, published two collections of slips of the tongue, annotated and interpreted. From 1909, he was the founding editor of the cultural morphology movement's journal Wörter und Sachen. Meringer was the first to note the linguistic significance of speech errors, and his interpretations have stood the test of time. This work, rather than his mainstream philological research, has proven his most lasting linguistic contribution.
  • Cutler, A. (2004). Segmentation of spoken language by normal adult listeners. In R. Kent (Ed.), MIT encyclopedia of communication sciences and disorders (pp. 392-395). Cambridge, MA: MIT Press.
  • Cutler, A., Weber, A., Smits, R., & Cooper, N. (2004). Patterns of English phoneme confusions by native and non-native listeners. Journal of the Acoustical Society of America, 116(6), 3668-3678. doi:10.1121/1.1810292.

    Abstract

    Native American English and non-native (Dutch) listeners identified either the consonant or the vowel in all possible American English CV and VC syllables. The syllables were embedded in multispeaker babble at three signal-to-noise ratios (0, 8, and 16 dB). The phoneme identification performance of the non-native listeners was less accurate than that of the native listeners. All listeners were adversely affected by noise. With these isolated syllables, initial segments were harder to identify than final segments. Crucially, the effects of language background and noise did not interact; the performance asymmetry between the native and non-native groups was not significantly different across signal-to-noise ratios. It is concluded that the frequently reported disproportionate difficulty of non-native listening under disadvantageous conditions is not due to a disproportionate increase in phoneme misidentifications.
  • Cutler, A. (2004). On spoken-word recognition in a second language. Newsletter, American Association of Teachers of Slavic and East European Languages, 47, 15-15.
  • Cutler, A., Kim, J., & Otake, T. (2006). On the limits of L1 influence on non-L1 listening: Evidence from Japanese perception of Korean. In P. Warren, & C. I. Watson (Eds.), Proceedings of the 11th Australian International Conference on Speech Science & Technology (pp. 106-111).

    Abstract

    Language-specific procedures which are efficient for listening to the L1 may be applied to non-native spoken input, often to the detriment of successful listening. However, such misapplications of L1-based listening do not always happen. We propose, based on the results from two experiments in which Japanese listeners detected target sequences in spoken Korean, that an L1 procedure is only triggered if requisite L1 features are present in the input.
  • Cutler, A., & Henton, C. G. (2004). There's many a slip 'twixt the cup and the lip. In H. Quené, & V. Van Heuven (Eds.), On speech and Language: Studies for Sieb G. Nooteboom (pp. 37-45). Utrecht: Netherlands Graduate School of Linguistics.

    Abstract

    The retiring academic may look back upon, inter alia, years of conference attendance. Speech error researchers are uniquely fortunate because they can collect data in any situation involving communication; accordingly, the retiring speech error researcher will have collected data at those conferences. We here address the issue of whether error data collected in situations involving conviviality (such as at conferences) is representative of error data in general. Our approach involved a comparison, across three levels of linguistic processing, between a specially constructed Conviviality Sample and the largest existing source of speech error data, the newly available Fromkin Speech Error Database. The results indicate that there are grounds for regarding the data in the Conviviality Sample as a better than average reflection of the true population of all errors committed. These findings encourage us to recommend further data collection in collaboration with like-minded colleagues.
  • Cutler, A. (2004). Twee regels voor academische vorming. In H. Procee (Ed.), Bij die wereld wil ik horen! Zesendertig columns en drie essays over de vorming tot academicus. (pp. 42-45). Amsterdam: Boom.
  • Cutler, A. (2006). Van spraak naar woorden in een tweede taal. In J. Morais, & G. d'Ydewalle (Eds.), Bilingualism and Second Language Acquisition (pp. 39-54). Brussels: Koninklijke Vlaamse Academie van België voor Wetenschappen en Kunsten.
  • Cutler, A. (1989). Auditory lexical access: Where do we start? In W. Marslen-Wilson (Ed.), Lexical representation and process (pp. 342-356). Cambridge, MA: MIT Press.

    Abstract

    The lexicon, considered as a component of the process of recognizing speech, is a device that accepts a sound image as input and outputs meaning. Lexical access is the process of formulating an appropriate input and mapping it onto an entry in the lexicon's store of sound images matched with their meanings. This chapter addresses the problems of auditory lexical access from continuous speech. The central argument to be proposed is that utterance prosody plays a crucial role in the access process. Continuous listening faces problems that are not present in visual recognition (reading) or in noncontinuous recognition (understanding isolated words). Aspects of utterance prosody offer a solution to these particular problems.
  • Cutler, A., & Otake, T. (1997). Contrastive studies of spoken-language processing. Journal of Phonetic Society of Japan, 1, 4-13.
  • Cutler, A., & Pasveer, D. (2006). Explaining cross-linguistic differences in effects of lexical stress on spoken-word recognition. In R. Hoffmann, & H. Mixdorff (Eds.), Speech Prosody 2006. Dresden: TUD press.

    Abstract

    Experiments have revealed differences across languages in listeners’ use of stress information in recognising spoken words. Previous comparisons of the vocabulary of Spanish and English had suggested that the explanation of this asymmetry might lie in the extent to which considering stress in spoken-word recognition allows rejection of unwanted competition from words embedded in other words. This hypothesis was tested on the vocabularies of Dutch and German, for which word recognition results resemble those from Spanish more than those from English. The vocabulary statistics likewise revealed that in each language, the reduction of embeddings resulting from taking stress into account is more similar to the reduction achieved in Spanish than in English.
  • Cutler, A., Eisner, F., McQueen, J. M., & Norris, D. (2006). Coping with speaker-related variation via abstract phonemic categories. In Variation, detail and representation: 10th Conference on Laboratory Phonology (pp. 31-32).
  • Cutler, A., Weber, A., & Otake, T. (2006). Asymmetric mapping from phonetic to lexical representations in second-language listening. Journal of Phonetics, 34(2), 269-284. doi:10.1016/j.wocn.2005.06.002.

    Abstract

    The mapping of phonetic information to lexical representations in second-language (L2) listening was examined using an eyetracking paradigm. Japanese listeners followed instructions in English to click on pictures in a display. When instructed to click on a picture of a rocket, they experienced interference when a picture of a locker was present, that is, they tended to look at the locker instead. However, when instructed to click on the locker, they were unlikely to look at the rocket. This asymmetry is consistent with a similar asymmetry previously observed in Dutch listeners’ mapping of English vowel contrasts to lexical representations. The results suggest that L2 listeners may maintain a distinction between two phonetic categories of the L2 in their lexical representations, even though their phonetic processing is incapable of delivering the perceptual discrimination required for correct mapping to the lexical distinction. At the phonetic processing level, one of the L2 categories is dominant; the present results suggest that dominance is determined by acoustic–phonetic proximity to the nearest L1 category. At the lexical processing level, representations containing this dominant category are more likely than representations containing the non-dominant category to be correctly contacted by the phonetic input.
  • Cutler, A., Mister, E., Norris, D., & Sebastián-Gallés, N. (2004). La perception de la parole en espagnol: Un cas particulier? In L. Ferrand, & J. Grainger (Eds.), Psycholinguistique cognitive: Essais en l'honneur de Juan Segui (pp. 57-74). Brussels: De Boeck.
  • Cutler, A., Howard, D., & Patterson, K. E. (1989). Misplaced stress on prosody: A reply to Black and Byng. Cognitive Neuropsychology, 6, 67-83.

    Abstract

    The recent claim by Black and Byng (1986) that lexical access in reading is subject to prosodic constraints is examined and found to be unsupported. The evidence from impaired reading which Black and Byng report is based on poorly controlled stimulus materials and is inadequately analysed and reported. An alternative explanation of their findings is proposed, and new data are reported for which this alternative explanation can account but their model cannot. Finally, their proposal is shown to be theoretically unmotivated and in conflict with evidence from normal reading.
  • Cutler, A. (2015). Lexical stress in English pronunciation. In M. Reed, & J. M. Levis (Eds.), The Handbook of English Pronunciation (pp. 106-124). Chichester: Wiley.
  • Cutler, A., & Chen, H.-C. (1997). Lexical tone in Cantonese spoken-word processing. Perception and Psychophysics, 59, 165-179. Retrieved from http://www.psychonomic.org/search/view.cgi?id=778.

    Abstract

    In three experiments, the processing of lexical tone in Cantonese was examined. Cantonese listeners more often accepted a nonword as a word when the only difference between the nonword and the word was in tone, especially when the F0 onset difference between correct and erroneous tone was small. Same–different judgments by these listeners were also slower and less accurate when the only difference between two syllables was in tone, and this was true whether the F0 onset difference between the two tones was large or small. Listeners with no knowledge of Cantonese produced essentially the same same-different judgment pattern as that produced by the native listeners, suggesting that the results display the effects of simple perceptual processing rather than of linguistic knowledge. It is argued that the processing of lexical tone distinctions may be slowed, relative to the processing of segmental distinctions, and that, in speeded-response tasks, tone is thus more likely to be misprocessed than is segmental structure.
  • Cutler, A., Mehler, J., Norris, D., & Segui, J. (1988). Limits on bilingualism [Letters to Nature]. Nature, 340, 229-230. doi:10.1038/340229a0.

    Abstract

    Speech, in any language, is continuous; speakers provide few reliable cues to the boundaries of words, phrases, or other meaningful units. To understand speech, listeners must divide the continuous speech stream into portions that correspond to such units. This segmentation process is so basic to human language comprehension that psycholinguists long assumed that all speakers would do it in the same way. In previous research [1, 2], however, we reported that segmentation routines can be language-specific: speakers of French process spoken words syllable by syllable, but speakers of English do not. French has relatively clear syllable boundaries and syllable-based timing patterns, whereas English has relatively unclear syllable boundaries and stress-based timing; thus syllabic segmentation would work more efficiently in the comprehension of French than in the comprehension of English. Our present study suggests that at this level of language processing, there are limits to bilingualism: a bilingual speaker has one and only one basic language.
  • Cutler, A., & Butterfield, S. (1989). Natural speech cues to word segmentation under difficult listening conditions. In J. Tubach, & J. Mariani (Eds.), Proceedings of Eurospeech 89: European Conference on Speech Communication and Technology: Vol. 2 (pp. 372-375). Edinburgh: CEP Consultants.

    Abstract

    One of a listener's major tasks in understanding continuous speech is segmenting the speech signal into separate words. When listening conditions are difficult, speakers can help listeners by deliberately speaking more clearly. In three experiments, we examined how word boundaries are produced in deliberately clear speech. We found that speakers do indeed attempt to mark word boundaries; moreover, they differentiate between word boundaries in a way which suggests they are sensitive to listener needs. Application of heuristic segmentation strategies makes word boundaries before strong syllables easiest for listeners to perceive; but under difficult listening conditions speakers pay more attention to marking word boundaries before weak syllables, i.e. they mark those boundaries which are otherwise particularly hard to perceive.
