Publications

  • Adank, P., Smits, R., & Van Hout, R. (2003). Modeling perceived vowel height, advancement, and rounding. In Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS 2003) (pp. 647-650). Adelaide: Causal Productions.
  • Alhama, R. G., Scha, R., & Zuidema, W. (2015). How should we evaluate models of segmentation in artificial language learning? In N. A. Taatgen, M. K. van Vugt, J. P. Borst, & K. Mehlhorn (Eds.), Proceedings of ICCM 2015 (pp. 172-173). Groningen: University of Groningen.

    Abstract

    One of the challenges that infants have to solve when learning their native language is to identify the words in a continuous speech stream. Some of the experiments in Artificial Grammar Learning (Saffran, Newport, and Aslin (1996); Saffran, Aslin, and Newport (1996); Aslin, Saffran, and Newport (1998) and many more) investigate this ability. In these experiments, subjects are exposed to an artificial speech stream that contains certain regularities. Adult participants are typically tested with 2-alternative Forced Choice Tests (2AFC) in which they have to choose between a word and another sequence (typically a partword, a sequence resulting from misplacing boundaries).
  • Alhama, R. G., Scha, R., & Zuidema, W. (2014). Rule learning in humans and animals. In E. A. Cartmill, S. Roberts, H. Lyn, & H. Cornish (Eds.), The evolution of language: Proceedings of the 10th International Conference (EVOLANG 10) (pp. 371-372). Singapore: World Scientific.
  • Allen, G. L., & Haun, D. B. M. (2004). Proximity and precision in spatial memory. In G. Allen (Ed.), Human spatial memory: Remembering where (pp. 41-63). Mahwah, NJ: Lawrence Erlbaum.
  • Allen, S. E. M. (1998). A discourse-pragmatic explanation for the subject-object asymmetry in early null arguments. In A. Sorace, C. Heycock, & R. Shillcock (Eds.), Proceedings of the GALA '97 Conference on Language Acquisition (pp. 10-15). Edinburgh, UK: Edinburgh University Press.

    Abstract

    The present paper assesses discourse-pragmatic factors as a potential explanation for the subject-object asymmetry in early child language. It identifies a set of factors which characterize typical situations of informativeness (Greenfield & Smith, 1976), and uses these factors to identify informative arguments in data from four children aged 2;0 through 3;6 learning Inuktitut as a first language. In addition, it assesses the extent of the links between features of informativeness on the one hand and lexical vs. null and subject vs. object arguments on the other. Results suggest that a pragmatics account of the subject-object asymmetry can be upheld to a greater extent than previous research indicates, and that several of the factors characterizing informativeness are good indicators of those arguments which tend to be omitted in early child language.
  • Allen, S., Ozyurek, A., Kita, S., Brown, A., Turanli, R., & Ishizuka, T. (2003). Early speech about manner and path in Turkish and English: Universal or language-specific? In B. Beachley, A. Brown, & F. Conlin (Eds.), Proceedings of the 27th annual Boston University Conference on Language Development (pp. 63-72). Somerville (MA): Cascadilla Press.
  • Ameka, F. K. (2003). Prepositions and postpositions in Ewe: Empirical and theoretical considerations. In A. Zribi-Hertz, & P. Sauzet (Eds.), Typologie des langues d'Afrique et universaux de la grammaire (pp. 43-66). Paris: L'Harmattan.
  • Ameka, F. K. (2003). 'Today is far': Situational anaphors in overlapping clause constructions in Ewe. In M. E. K. Dakubu, & E. K. Osam (Eds.), Studies in the Languages of the Volta Basin 1. Proceedings of the Legon-Trondheim Linguistics Project, December 4-6, 2002 (pp. 9-22). Legon: Department of Linguistics, University of Ghana.
  • Anderson, P., Harandi, N. M., Moisik, S. R., Stavness, I., & Fels, S. (2015). A comprehensive 3D biomechanically-driven vocal tract model including inverse dynamics for speech research. In Proceedings of Interspeech 2015: The 16th Annual Conference of the International Speech Communication Association (pp. 2395-2399).

    Abstract

    We introduce a biomechanical model of oropharyngeal structures that adds the soft-palate, pharynx, and larynx to our previous models of jaw, skull, hyoid, tongue, and face in a unified model. The model includes a comprehensive description of the upper airway musculature, using point-to-point muscles that may either be embedded within the deformable structures or operate externally. The airway is described by an air-tight mesh that fits and deforms with the surrounding articulators, which enables dynamic coupling to our articulatory speech synthesizer. We demonstrate that the biomechanics, in conjunction with the skinning, supports a range from physically realistic to simplified vocal tract geometries to investigate different approaches to aeroacoustic modeling of the vocal tract. Furthermore, our model supports inverse modeling to enable investigation of plausible muscle activation patterns for generating speech.
  • Baayen, R. H. (2003). Probabilistic approaches to morphology. In R. Bod, J. Hay, & S. Jannedy (Eds.), Probabilistic linguistics (pp. 229-287). Cambridge: MIT Press.
  • Baayen, R. H., Moscoso del Prado Martín, F., Wurm, L., & Schreuder, R. (2003). When word frequencies do not regress towards the mean. In R. Baayen, & R. Schreuder (Eds.), Morphological structure in language processing (pp. 463-484). Berlin: Mouton de Gruyter.
  • Baayen, R. H., McQueen, J. M., Dijkstra, T., & Schreuder, R. (2003). Frequency effects in regular inflectional morphology: Revisiting Dutch plurals. In R. H. Baayen, & R. Schreuder (Eds.), Morphological structure in language processing (pp. 355-390). Berlin: Mouton de Gruyter.
  • Baayen, R. H. (2014). Productivity in language production. In D. Sandra, & M. Taft (Eds.), Morphological Structure, Lexical Representation and Lexical Access: A Special Issue of Language and Cognitive Processes (pp. 447-469). London: Routledge.

    Abstract

    Lexical statistics and a production experiment are used to gauge the extent to which the linguistic notion of morphological productivity is relevant for psycholinguistic theories of speech production in languages such as Dutch and English. Lexical statistics of productivity show that despite the relatively poor morphology of Dutch, new words are created often enough for the marginalisation of word formation in theories of speech production to be theoretically unattractive. This conclusion is supported by the results of a production experiment in which subjects freely created hundreds of productive, but only a handful of unproductive, neologisms. A tentative solution is proposed as to why the opposite pattern has been observed in the speech of jargonaphasics.
  • Bauer, B. L. M. (2014). Indefinite HOMO in the Gospels of the Vulgata. In P. Molinelli, P. Cuzzolin, & C. Fedriani (Eds.), Latin vulgaire – latin tardif X (pp. 415-435). Bergamo: Bergamo University Press.
  • Bauer, B. L. M., & Pinault, G.-J. (2003). Introduction: Werner Winter, ad multos annos. In B. L. M. Bauer, & G.-J. Pinault (Eds.), Language in time and space: A festschrift for Werner Winter on the occasion of his 80th birthday (pp. xxiii-xxv). Berlin: Mouton de Gruyter.
  • Bauer, B. L. M. (2015). Origins of the indefinite HOMO constructions. In G. Haverling (Ed.), Latin Linguistics in the Early 21st Century: Acts of the 16th International Colloquium on Latin Linguistics (pp. 542-553). Uppsala: Uppsala University.
  • Bauer, B. L. M. (2003). The adverbial formation in mente in Vulgar and Late Latin: A problem in grammaticalization. In H. Solin, M. Leiwo, & H. Halla-aho (Eds.), Latin vulgaire, latin tardif VI (pp. 439-457). Hildesheim: Olms.
  • Bergmann, C., Ten Bosch, L., & Boves, L. (2014). A computational model of the headturn preference procedure: Design, challenges, and insights. In J. Mayor, & P. Gomez (Eds.), Computational Models of Cognitive Processes (pp. 125-136). World Scientific. doi:10.1142/9789814458849_0010.

    Abstract

    The Headturn Preference Procedure (HPP) is a frequently used method (e.g., Jusczyk & Aslin and subsequent studies) to investigate linguistic abilities in infants. In this paradigm infants are usually first familiarised with words and then tested for a listening preference for passages containing those words in comparison to unrelated passages. Listening preference is defined as the time an infant spends attending to those passages with his or her head turned towards a flashing light and the speech stimuli. The knowledge and abilities inferred from the results of HPP studies have been used to reason about and formally model early linguistic skills and language acquisition. However, the actual cause of infants' behaviour in HPP experiments has been subject to numerous assumptions as there are no means to directly tap into cognitive processes. To make these assumptions explicit, and more crucially, to understand how infants' behaviour emerges if only general learning mechanisms are assumed, we introduce a computational model of the HPP. Simulations with the computational HPP model show that the difference in infant behaviour between familiarised and unfamiliar words in passages can be explained by a general learning mechanism and that many assumptions underlying the HPP are not necessarily warranted. We discuss the implications for conventional interpretations of the outcomes of HPP experiments.
  • Blasi, D. E., Christiansen, M. H., Wichmann, S., Hammarström, H., & Stadler, P. F. (2014). Sound symbolism and the origins of language. In E. A. Cartmill, S. Roberts, H. Lyn, & H. Cornish (Eds.), The evolution of language: Proceedings of the 10th International Conference (EVOLANG 10) (pp. 391-392). Singapore: World Scientific.
  • Blumstein, S., & Cutler, A. (2003). Speech perception: Phonetic aspects. In W. Frawley (Ed.), International encyclopaedia of linguistics (pp. 151-154). Oxford: Oxford University Press.
  • Bocanegra, B. R., Poletiek, F. H., & Zwaan, R. A. (2014). Asymmetrical feature binding across language and perception. In Proceedings of the 7th annual Conference on Embodied and Situated Language Processing (ESLP 2014).
  • Bohnemeyer, J. (2003). The unique vector constraint: The impact of direction changes on the linguistic segmentation of motion events. In E. v. d. Zee, & J. Slack (Eds.), Axes and vectors in language and space (pp. 86-110). Oxford: Oxford University Press.
  • Bohnemeyer, J. (2004). Argument and event structure in Yukatek verb classes. In J.-Y. Kim, & A. Werle (Eds.), Proceedings of The Semantics of Under-Represented Languages in the Americas. Amherst, Mass: GLSA.

    Abstract

    In Yukatek Maya, event types are lexicalized in verb roots and stems that fall into a number of different form classes on the basis of (a) patterns of aspect-mood marking and (b) privileges of undergoing valence-changing operations. Of particular interest are the intransitive classes in the light of Perlmutter’s (1978) Unaccusativity hypothesis. In the spirit of Levin & Rappaport Hovav (1995) [L&RH], Van Valin (1990), Zaenen (1993), and others, this paper investigates whether (and to what extent) the association between formal predicate classes and event types is determined by argument structure features such as ‘agentivity’ and ‘control’ or features of lexical aspect such as ‘telicity’ and ‘durativity’. It is shown that mismatches between agentivity/control and telicity/durativity are even more extensive in Yukatek than they are in English (Abusch 1985; L&RH, Van Valin & LaPolla 1997), providing new evidence against Dowty’s (1979) reconstruction of Vendler’s (1967) ‘time schemata of verbs’ in terms of argument structure configurations. Moreover, contrary to what has been claimed in earlier studies of Yukatek (Krämer & Wunderlich 1999, Lucy 1994), neither agentivity/control nor telicity/durativity turn out to be good predictors of verb class membership. Instead, the patterns of aspect-mood marking prove to be sensitive only to the presence or absence of state change, in a way that supports the unified analysis of all verbs of gradual change proposed by Kennedy & Levin (2001). The presence or absence of ‘internal causation’ (L&RH) may motivate the semantic interpretation of transitivization operations. An explicit semantics for the valence-changing operations is proposed, based on Parsons’s (1990) Neo-Davidsonian approach.
  • Bohnemeyer, J. (2003). Fictive motion questionnaire. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 81-85). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877601.

    Abstract

    Fictive Motion is the metaphoric use of path relators in the expression of spatial relations or configurations that are static, or at any rate do not in any obvious way involve physical entities moving in real space. The goal is to study the expression of such relations or configurations in the target language, with an eye particularly on whether these expressions exclusively/preferably/possibly involve motion verbs and/or path relators, i.e., Fictive Motion. Section 2 gives Talmy’s (2000: ch. 2) phenomenology of Fictive Motion construals. The researcher’s task is to “distill” the intended spatial relations/configurations from Talmy’s description of the particular Fictive Motion metaphors and elicit as many different examples of the relations/configurations as (s)he deems necessary to obtain a basic sense of whether and how much Fictive Motion the target language offers or prescribes for the encoding of the particular type of relation/configuration. As a first stab, the researcher may try to elicit natural translations of culturally appropriate adaptations of the examples Talmy provides with each type of Fictive Motion metaphor.
  • Bohnemeyer, J., Burenhult, N., Enfield, N. J., & Levinson, S. C. (2004). Landscape terms and place names elicitation guide. In A. Majid (Ed.), Field Manual Volume 9 (pp. 75-79). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492904.

    Abstract

    Landscape terms reflect the relationship between geographic reality and human cognition. Are ‘mountains’, ‘rivers’, ‘lakes’ and the like universally recognised in languages as naturally salient objects to be named? The landscape subproject is concerned with the interrelation between language, cognition and geography. Specifically, it investigates issues relating to how landforms are categorised cross-linguistically as well as the characteristics of place naming.
  • Bohnemeyer, J., Burenhult, N., Levinson, S. C., & Enfield, N. J. (2003). Landscape terms and place names questionnaire. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 60-63). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877604.

    Abstract

    Landscape terms reflect the relationship between geographic reality and human cognition. Are ‘mountains’, ‘rivers’, ‘lakes’ and the like universally recognised in languages as naturally salient objects to be named? The landscape subproject is concerned with the interrelation between language, cognition and geography. Specifically, it investigates issues relating to how landforms are categorised cross-linguistically as well as the characteristics of place naming.
  • Bohnemeyer, J. (1998). Temporale Relatoren im Hispano-Yukatekischen Sprachkontakt. In A. Koechert, & T. Stolz (Eds.), Convergencia e Individualidad - Las lenguas Mayas entre hispanización e indigenismo (pp. 195-241). Hannover, Germany: Verlag für Ethnologie.
  • Bohnemeyer, J. (1998). Sententiale Topics im Yukatekischen. In Z. Dietmar (Ed.), Deskriptive Grammatik und allgemeiner Sprachvergleich (pp. 55-85). Tübingen, Germany: Max-Niemeyer-Verlag.
  • Bosker, H. R., Tjiong, V., Quené, H., Sanders, T., & De Jong, N. H. (2015). Both native and non-native disfluencies trigger listeners' attention. In Disfluency in Spontaneous Speech: DISS 2015: An ICPhS Satellite Meeting. Edinburgh: DISS2015.

    Abstract

    Disfluencies, such as uh and uhm, are known to help the listener in speech comprehension. For instance, disfluencies may elicit prediction of less accessible referents and may trigger listeners’ attention to the following word. However, recent work suggests differential processing of disfluencies in native and non-native speech. The current study investigated whether the beneficial effects of disfluencies on listeners’ attention are modulated by the (non-)native identity of the speaker. Using the Change Detection Paradigm, we investigated listeners’ recall accuracy for words presented in disfluent and fluent contexts, in native and non-native speech. We observed beneficial effects of both native and non-native disfluencies on listeners’ recall accuracy, suggesting that native and non-native disfluencies trigger listeners’ attention in a similar fashion.
  • Bosker, H. R., & Reinisch, E. (2015). Normalization for speechrate in native and nonnative speech. In M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). London: International Phonetic Association.

    Abstract

    Speech perception involves a number of processes that deal with variation in the speech signal. One such process is normalization for speechrate: local temporal cues are perceived relative to the rate in the surrounding context. It is as yet unclear whether and how this perceptual effect interacts with higher level impressions of rate, such as a speaker’s nonnative identity. Nonnative speakers typically speak more slowly than natives, an experience that listeners take into account when explicitly judging the rate of nonnative speech. The present study investigated whether this is also reflected in implicit rate normalization. Results indicate that nonnative speech is implicitly perceived as faster than temporally-matched native speech, suggesting that the additional cognitive load of listening to an accent speeds up rate perception. Therefore, rate perception in speech is not dependent on syllable durations alone but also on the ease of processing of the temporal signal.
  • Bowerman, M. (2003). Rola predyspozycji kognitywnych w przyswajaniu systemu semantycznego [Reprint]. In E. Dabrowska, & W. Kubiński (Eds.), Akwizycja języka w świetle językoznawstwa kognitywnego [Language acquisition from a cognitive linguistic perspective]. Kraków: Uniwersitas.

    Abstract

    Reprinted from: Bowerman, M. (1989). Learning a semantic system: What role do cognitive predispositions play? In M. L. Rice & R. L. Schiefelbusch (Eds.), The teachability of language (pp. 133-169). Baltimore: Paul H. Brookes.
  • Bowerman, M., & Choi, S. (2003). Space under construction: Language-specific spatial categorization in first language acquisition. In D. Gentner, & S. Goldin-Meadow (Eds.), Language in mind: Advances in the study of language and thought (pp. 387-427). Cambridge: MIT Press.
  • Bowerman, M. (2004). From universal to language-specific in early grammatical development [Reprint]. In K. Trott, S. Dobbinson, & P. Griffiths (Eds.), The child language reader (pp. 131-146). London: Routledge.

    Abstract

    Attempts to explain children's grammatical development often assume a close initial match between units of meaning and units of form; for example, agents are said to map to sentence-subjects and actions to verbs. The meanings themselves, according to this view, are not influenced by language, but reflect children's universal non-linguistic way of understanding the world. This paper argues that, contrary to this position, meaning as it is expressed in children's early sentences is, from the beginning, organized on the basis of experience with the grammar and lexicon of a particular language. As a case in point, children learning English and Korean are shown to express meanings having to do with directed motion according to language-specific principles of semantic and grammatical structuring from the earliest stages of word combination.
  • Bowerman, M., & Majid, A. (2003). Kids’ cut & break. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 70-71). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877607.

    Abstract

    Kids’ Cut & Break is a task inspired by the original Cut & Break task (see MPI L&C Group Field Manual 2001), but designed for use with children as well as adults. There are fewer videoclips to be described (34 as opposed to 61), and they are “friendlier” and more interesting: the actors wear colorful clothes, smile, and act cheerfully. The first 2 items are warm-ups and 4 more items are fillers (interspersed with test items), so only 28 of the items are actually “test items”. In the original Cut & Break, each clip is in a separate file. In Kids’ Cut & Break, all 34 clips are edited into a single file, which plays the clips successively with 5 seconds of black screen between each clip.

    Additional information

    2003_1_Kids_cut_and_break_films.zip
  • Bowerman, M., Gullberg, M., Majid, A., & Narasimhan, B. (2004). Put project: The cross-linguistic encoding of placement events. In A. Majid (Ed.), Field Manual Volume 9 (pp. 10-24). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492916.

    Abstract

    How similar are the event concepts encoded by different languages? So far, few event domains have been investigated in any detail. The PUT project extends the systematic cross-linguistic exploration of event categorisation to a new domain, that of placement events (putting things in places and removing them from places). The goal of this task is to explore cross-linguistic universality and variability in the semantic categorisation of placement events (e.g., ‘putting a cup on the table’).

    Additional information

    2004_Put_project_video_stimuli.zip
  • Bowerman, M. (1979). The acquisition of complex sentences. In M. Garman, & P. Fletcher (Eds.), Studies in language acquisition (pp. 285-305). Cambridge: Cambridge University Press.
  • Brand, S., & Ernestus, M. (2015). Reduction of obstruent-liquid-schwa clusters in casual French. In Scottish consortium for ICPhS 2015, M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). Glasgow: University of Glasgow.

    Abstract

    This study investigated pronunciation variants of word-final obstruent-liquid-schwa (OLS) clusters in casual French and the variables predicting the absence of the phonemes in these clusters. In a dataset of 291 noun tokens extracted from a corpus of casual conversations, we observed that in 80.7% of the tokens, at least one phoneme was absent and that in no less than 15.5% the whole cluster was absent (e.g., /mis/ for ministre). Importantly, the probability of a phoneme being absent was higher if the following phoneme was absent as well. These data show that reduction can affect several phonemes at once and is not restricted to just a handful of (function) words. Moreover, our results demonstrate that the absence of each single phoneme is affected by the speaker's tendency to increase ease of articulation and to adapt a word's pronunciation variant to the time available.
  • Broeder, D., Brugman, H., Oostdijk, N., & Wittenburg, P. (2004). Towards Dynamic Corpora: Workshop on compiling and processing spoken corpora. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC 2004) (pp. 59-62). Paris: European Language Resource Association.
  • Broeder, D., Wittenburg, P., & Crasborn, O. (2004). Using Profiles for IMDI Metadata Creation. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC 2004) (pp. 1317-1320). Paris: European Language Resources Association.
  • Broeder, D., Declerck, T., Romary, L., Uneson, M., Strömqvist, S., & Wittenburg, P. (2004). A large metadata domain of language resources. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC 2004) (pp. 369-372). Paris: European Language Resources Association.
  • Broeder, D., Nava, M., & Declerck, T. (2004). INTERA - a Distributed Domain of Metadata Resources. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC 2004) (pp. 369-372). Paris: European Language Resources Association.
  • Broeder, D., & Van Uytvanck, D. (2014). Metadata formats. In J. Durand, U. Gut, & G. Kristoffersen (Eds.), The Oxford Handbook of Corpus Phonology (pp. 150-165). Oxford: Oxford University Press.
  • Broeder, D., Schuurman, I., & Windhouwer, M. (2014). Experiences with the ISOcat Data Category Registry. In N. Calzolari, K. Choukri, T. Declerck, H. Loftsson, B. Maegaard, J. Mariani, A. Moreno, J. Odijk, & S. Piperidis (Eds.), Proceedings of LREC 2014: 9th International Conference on Language Resources and Evaluation (pp. 4565-4568).
  • Broersma, M., & Kolkman, K. M. (2004). Lexical representation of non-native phonemes. In S. Kin, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 1241-1244). Seoul: Sunjijn Printing Co.
  • Brouwer, S., & Bradlow, A. R. (2015). The effect of target-background synchronicity on speech-in-speech recognition. In Scottish consortium for ICPhS 2015, M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). Glasgow: University of Glasgow.

    Abstract

    The aim of the present study was to investigate whether speech-in-speech recognition is affected by variation in the target-background timing relationship. Specifically, we examined whether within trial synchronous or asynchronous onset and offset of the target and background speech influenced speech-in-speech recognition. Native English listeners were presented with English target sentences in the presence of English or Dutch background speech. Importantly, only the short-term temporal context –in terms of onset and offset synchrony or asynchrony of the target and background speech– varied across conditions. Participants’ task was to repeat back the English target sentences. The results showed an effect of synchronicity for English-in-English but not for English-in-Dutch recognition, indicating that familiarity with the English background lead in the asynchronous English-in-English condition might have attracted attention towards the English background. Overall, this study demonstrated that speech-in-speech recognition is sensitive to the target-background timing relationship, revealing an important role for variation in the local context of the target-background relationship as it extends beyond the limits of the time-frame of the to-be-recognized target sentence.
  • Brown, P. (2004). Position and motion in Tzeltal frog stories: The acquisition of narrative style. In S. Strömqvist, & L. Verhoeven (Eds.), Relating events in narrative: Typological and contextual perspectives (pp. 37-57). Mahwah: Erlbaum.

    Abstract

    How are events framed in narrative? Speakers of English (a 'satellite-framed' language), when 'reading' Mercer Mayer's wordless picture book 'Frog, Where Are You?', find the story self-evident: a boy has a dog and a pet frog; the frog escapes and runs away; the boy and dog look for it across hill and dale, through woods and over a cliff, until they find it and return home with a baby frog child of the original pet frog. In Tzeltal, as spoken in a Mayan community in southern Mexico, the story is somewhat different, because the language structures event descriptions differently. Tzeltal is in part a 'verb-framed' language with a set of Path-encoding motion verbs, so that the bare bones of the Frog story can consist of verbs translating as 'go'/'pass by'/'ascend'/'descend'/'arrive'/'return'. But Tzeltal also has satellite-framing adverbials, grammaticized from the same set of motion verbs, which encode the direction of motion or the orientation of static arrays. Furthermore, motion is not generally encoded barebones, but vivid pictorial detail is provided by positional verbs which can describe the position of the Figure as an outcome of a motion event; motion and stasis are thereby combined in a single event description. (For example: jipot jawal "he has been thrown (by the deer) lying_face_upwards_spread-eagled".) This paper compares the use of these three linguistic resources in frog narratives from 14 Tzeltal adults and 21 children, looks at their development in the narratives of children between the ages of 4-12, and considers the results in relation to those from Berman and Slobin's (1996) comparative study of adult and child Frog stories.
  • Brown, P. (1998). Early Tzeltal verbs: Argument structure and argument representation. In E. Clark (Ed.), Proceedings of the 29th Annual Stanford Child Language Research Forum (pp. 129-140). Stanford: CSLI Publications.

    Abstract

    The surge of research activity focussing on children's acquisition of verbs (e.g., Tomasello and Merriman 1996) addresses some fundamental questions: Just how variable across languages, and across individual children, is the process of verb learning? How specific are arguments to particular verbs in early child language? How does the grammatical category 'Verb' develop? The position of Universal Grammar, that a verb category is early, contrasts with that of Tomasello (1992), Pine and Lieven and their colleagues (1996, in press), and many others, that children develop a verb category slowly, gradually building up subcategorizations of verbs around pragmatic, syntactic, and semantic properties of the language they are exposed to. On this latter view, one would expect the language which the child is learning, the cultural milieu and the nature of the interactions in which the child is engaged, to influence the process of acquiring verb argument structures. This paper explores these issues by examining the development of argument representation in the Mayan language Tzeltal, in both its lexical and verbal cross-referencing forms, and analyzing the semantic and pragmatic factors influencing the form argument representation takes. Certain facts about Tzeltal (the ergative/absolutive marking, the semantic specificity of transitive and positional verbs) are proposed to affect the representation of arguments. The first 500 multimorpheme combinations of 3 children (aged between 1;8 and 2;4) are examined. It is argued that there is no evidence of semantically light 'pathbreaking' verbs (Ninio 1996) leading the way into word combinations. There is early productivity of cross-referencing affixes marking A, S, and O arguments (although there are systematic omissions). The paper assesses the respective contributions of three kinds of factors to these results - structural (regular morphology), semantic (verb specificity) and pragmatic (the nature of Tzeltal conversational interaction).
  • Brown, P., & Levinson, S. C. (2004). Frames of spatial reference and their acquisition in Tenejapan Tzeltal. In A. Assmann, U. Gaier, & G. Trommsdorff (Eds.), Zwischen Literatur und Anthropologie: Diskurse, Medien, Performanzen (pp. 285-314). Tübingen: Gunter Narr.

    Abstract

    This is a reprint of the Brown and Levinson 2000 article.
  • Brown, P. (2014). Gestures in native Mexico and Central America. In C. Müller, A. Cienki, E. Fricke, S. Ladewig, D. McNeill, & J. Bressem (Eds.), Body – language – communication: An international handbook on multimodality in human interaction. Volume 2 (pp. 1206-1215). Berlin: Mouton de Gruyter.

    Abstract

    The systematic study of kinesics, gaze, and gestural aspects of communication in Central American cultures is a recent phenomenon, most of it focussing on the Mayan cultures of southern Mexico, Guatemala, and Belize. This article surveys ethnographic observations and research reports on bodily aspects of speaking in three domains: gaze and kinesics in social interaction, indexical pointing in adult and caregiver-child interactions, and co-speech gestures associated with “absolute” (geographically-based) systems of spatial reference. In addition, it reports how the indigenous co-speech gesture repertoire has provided the basis for developing village sign languages in the region. It is argued that studies of the embodied aspects of speech in the Mayan areas of Mexico and Central America have contributed to the typology of gestures and of spatial frames of reference. They have refined our understanding of how spatial frames of reference are invoked, communicated, and switched in conversational interaction and of the importance of co-speech gestures in understanding language use, language acquisition, and the transmission of culture-specific cognitive styles.
  • Brown, P., Levinson, S. C., & Senft, G. (2004). Initial references to persons and places. In A. Majid (Ed.), Field Manual Volume 9 (pp. 37-44). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492929.

    Abstract

    This task has two parts: (i) video-taped elicitation of the range of possibilities for referring to persons and places, and (ii) observations of (first) references to persons and places in video-taped natural interaction. The goal of this task is to establish the repertoires of referential terms (and other practices) used for referring to persons and to places in particular languages and cultures, and provide examples of situated use of these kinds of referential practices in natural conversation. This data will form the basis for cross-language comparison, and for formulating hypotheses about general principles underlying the deployment of such referential terms in natural language usage.
  • Brown, P., Gaskins, S., Lieven, E., Striano, T., & Liszkowski, U. (2004). Multimodal multiperson interaction with infants aged 9 to 15 months. In A. Majid (Ed.), Field Manual Volume 9 (pp. 56-63). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492925.

    Abstract

    Interaction, for all that it has an ethological base, is culturally constituted, and how new social members are enculturated into the interactional practices of the society is of critical interest to our understanding of interaction – how much is learned, how variable is it across cultures – as well as to our understanding of the role of culture in children’s social-cognitive development. The goal of this task is to document the nature of caregiver infant interaction in different cultures, especially during the critical age of 9-15 months when children come to have an understanding of others’ intentions. This is of interest to all students of interaction; it does not require specialist knowledge of children.
  • Brown, P. (2003). Multimodal multiperson interaction with infants aged 9 to 15 months. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 22-24). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877610.

    Abstract

    Interaction, for all that it has an ethological base, is culturally constituted, and how new social members are enculturated into the interactional practices of the society is of critical interest to our understanding of interaction – how much is learned, how variable is it across cultures – as well as to our understanding of the role of culture in children’s social-cognitive development. The goal of this task is to document the nature of caregiver infant interaction in different cultures, especially during the critical age of 9-15 months when children come to have an understanding of others’ intentions. This is of interest to all students of interaction; it does not require specialist knowledge of children.
  • Brown, P., & Gaskins, S. (2014). Language acquisition and language socialization. In N. J. Enfield, P. Kockelman, & J. Sidnell (Eds.), Cambridge handbook of linguistic anthropology (pp. 187-226). Cambridge: Cambridge University Press.
  • Brown, P. (2015). Language, culture, and spatial cognition. In F. Sharifian (Ed.), Routledge Handbook on Language and Culture (pp. 294-309). London: Routledge.
  • Brown, P. (1998). How and why are women more polite: Some evidence from a Mayan community. In J. Coates (Ed.), Language and gender (pp. 81-99). Oxford: Blackwell.
  • Brown, P., & Levinson, S. C. (1979). Social structure, groups and interaction. In H. Giles, & K. R. Scherer (Eds.), Social markers in speech (pp. 291-341). Cambridge: Cambridge University Press.
  • Brown, P. (2015). Space: Linguistic expression of. In J. D. Wright (Ed.), International Encyclopedia of the Social and Behavioral Sciences (2nd ed.) Vol. 23 (pp. 89-93). Amsterdam: Elsevier. doi:10.1016/B978-0-08-097086-8.57017-2.
  • Brown, P., & Fraser, C. (1979). Speech as a marker of situation. In H. Giles, & K. Scherer (Eds.), Social markers in speech (pp. 33-62). Cambridge: Cambridge University Press.
  • Brown, P. (2015). Politeness and language. In J. D. Wright (Ed.), The International Encyclopedia of the Social and Behavioural Sciences (IESBS), (2nd ed.) (pp. 326-330). Amsterdam: Elsevier. doi:10.1016/B978-0-08-097086-8.53072-4.
  • Brown, P., & Levinson, S. C. (1998). Politeness, introduction to the reissue: A review of recent work. In A. Kasher (Ed.), Pragmatics: Vol. 6 Grammar, psychology and sociology (pp. 488-554). London: Routledge.

    Abstract

    This article is a reprint of chapter 1, the introduction to Brown and Levinson, 1987, Politeness: Some universals in language usage (Cambridge University Press).
  • Brown, P. (2014). The interactional context of language learning in Tzeltal. In I. Arnon, M. Casillas, C. Kurumada, & B. Estigarriba (Eds.), Language in Interaction: Studies in honor of Eve V. Clark (pp. 51-82). Amsterdam: Benjamins.

    Abstract

    This paper addresses the theories of Eve Clark about how children learn word meanings in western middle-class interactional contexts by examining child language data from a Tzeltal Maya society in southern Mexico where interaction patterns are radically different. Through examples of caregiver interactions with children 12-30 months old, I ask what lessons we can learn from how the details of these interactions unfold in this non-child-centered cultural context, and specifically, what aspects of the Tzeltal linguistic and interactional context might help to focus children’s attention on the meanings and the conventional forms of words being used around them.
  • Bruggeman, L., & Janse, E. (2015). Older listeners' decreased flexibility in adjusting to changes in speech signal reliability. In M. Wolters, J. Livingstone, B. Beattie, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). London: International Phonetic Association.

    Abstract

    Under noise or speech reductions, young adult listeners flexibly adjust the parameters of lexical activation and competition to allow for speech signal unreliability. Consequently, mismatches in the input are treated more leniently such that lexical candidates are not immediately deactivated. Using eyetracking, we assessed whether this modulation of recognition dynamics also occurs for older listeners. Dutch participants (aged 60+) heard Dutch sentences containing a critical word while viewing displays of four line drawings. The name of one picture shared either onset or rhyme with the critical word (i.e., was a phonological competitor). Sentences were either clear and noise-free, or had several phonemes replaced by bursts of noise. A larger preference for onset competitors than for rhyme competitors was observed in both clear and noise conditions; performance did not alter across condition. This suggests that dynamic adjustment of spoken-word recognition parameters in response to noise is less available to older listeners.
  • Brugman, H., Crasborn, O., & Russel, A. (2004). Collaborative annotation of sign language data with Peer-to-Peer technology. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Language Evaluation (LREC 2004) (pp. 213-216). Paris: European Language Resources Association.
  • Brugman, H., & Russel, A. (2004). Annotating Multi-media/Multi-modal resources with ELAN. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Language Evaluation (LREC 2004) (pp. 2065-2068). Paris: European Language Resources Association.
  • Burenhult, N. (2004). Spatial deixis in Jahai. In S. Burusphat (Ed.), Papers from the 11th Annual Meeting of the Southeast Asian Linguistics Society 2001 (pp. 87-100). Arizona State University: Program for Southeast Asian Studies.
  • Caramazza, A., Miozzo, M., Costa, A., Schiller, N. O., & Alario, F.-X. (2003). Étude comparée de la production des déterminants dans différentes langues. In E. Dupoux (Ed.), Les Langages du cerveau: Textes en l'honneur de Jacques Mehler (pp. 213-229). Paris: Odile Jacob.
  • Casillas, M. (2014). Taking the floor on time: Delay and deferral in children’s turn taking. In I. Arnon, M. Casillas, C. Kurumada, & B. Estigarribia (Eds.), Language in Interaction: Studies in honor of Eve V. Clark (pp. 101-114). Amsterdam: Benjamins.

    Abstract

    A key part of learning to speak with others is figuring out when to start talking and how to hold the floor in conversation. For young children, the challenge of planning a linguistic response can slow down their response latencies, making misunderstanding, repair, and loss of the floor more likely. Like adults, children can mitigate their delays by using fillers (e.g., uh and um) at the start of their turns. In this chapter I analyze the onset and development of fillers in five children’s spontaneous speech from ages 1;6–3;6. My findings suggest that children start using fillers by 2;0, and use them to effectively mitigate delay in making a response.
  • Casillas, M., De Vos, C., Crasborn, O., & Levinson, S. C. (2015). The perception of stroke-to-stroke turn boundaries in signed conversation. In D. C. Noelle, R. Dale, A. S. Warlaumont, J. Yoshimi, T. Matlock, C. D. Jennings, & P. R. Maglio (Eds.), Proceedings of the 37th Annual Meeting of the Cognitive Science Society (CogSci 2015) (pp. 315-320). Austin, TX: Cognitive Science Society.

    Abstract

    Speaker transitions in conversation are often brief, with minimal vocal overlap. Signed languages appear to defy this pattern with frequent, long spans of simultaneous signing. But recent evidence suggests that turn boundaries in signed language may only include the content-bearing parts of the turn (from the first stroke to the last), and not all turn-related movement (from first preparation to final retraction). We tested whether signers were able to anticipate “stroke-to-stroke” turn boundaries with only minimal conversational context. We found that, indeed, signers anticipated turn boundaries at the ends of turn-final strokes. Signers often responded early, especially when the turn was long or contained multiple possible end points. Early responses for long turns were especially apparent for interrogatives—long interrogative turns showed much greater anticipation compared to short ones.
  • Casillas, M. (2014). Turn-taking. In D. Matthews (Ed.), Pragmatic development in first language acquisition (pp. 53-70). Amsterdam: Benjamins.

    Abstract

    Conversation is a structured, joint action for which children need to learn a specialized set of skills and conventions. Because conversation is a primary source of linguistic input, we can better grasp how children become active agents in their own linguistic development by studying their acquisition of conversational skills. In this chapter I review research on children’s turn-taking. This fundamental skill of human interaction allows children to gain feedback, make clarifications, and test hypotheses at every stage of development. I broadly review children’s conversational experiences, the types of turn-based contingency they must acquire, how they ask and answer questions, and when they manage to make timely responses.
  • Chang, F., & Fitz, H. (2014). Computational models of sentence production: A dual-path approach. In M. Goldrick, & M. Miozzo (Eds.), The Oxford handbook of language production (pp. 70-89). Oxford: Oxford University Press.

    Abstract

    Sentence production is the process we use to create language-specific sentences that convey particular meanings. In production, there are complex interactions between meaning, words, and syntax at different points in sentences. Computational models can make these interactions explicit and connectionist learning algorithms have been useful for building such models. Connectionist models use domain-general mechanisms to learn internal representations and these mechanisms can also explain evidence of long-term syntactic adaptation in adult speakers. This paper will review work showing that these models can generalize words in novel ways and learn typologically-different languages like English and Japanese. It will also present modeling work which shows that connectionist learning algorithms can account for complex sentence production in children and adult production phenomena like structural priming, heavy NP shift, and conceptual/lexical accessibility.
  • Chen, A. (2015). Children’s use of intonation in reference and the role of input. In L. Serratrice, & S. E. M. Allen (Eds.), The acquisition of reference (pp. 83-104). Amsterdam: Benjamins.

    Abstract

    Studies on children’s use of intonation in reference are few in number but are diverse in terms of theoretical frameworks and intonational parameters. In the current review, I present a re-analysis of the referents in each study, using a three-dimension approach (i.e. referential givenness-newness, relational givenness-newness, contrast), discuss the use of intonation at two levels (phonetic, phonological), and compare findings from different studies within a single framework. The patterns stemming from these studies may be limited in generalisability but can serve as initial hypotheses for future work. Furthermore, I examine the role of input as available in infant direct speech in the acquisition of intonational encoding of referents. In addition, I discuss how future research can advance our knowledge.
  • Chen, A. (2003). Language dependence in continuation intonation. In M. Solé, D. Recasens, & J. Romero (Eds.), Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS.) (pp. 1069-1072). Rundle Mall, SA, Austr.: Causal Productions Pty.
  • Chen, A. (2014). Production-comprehension (A)Symmetry: Individual differences in the acquisition of prosodic focus-marking. In N. Campbell, D. Gibbon, & D. Hirst (Eds.), Proceedings of Speech Prosody 2014 (pp. 423-427).

    Abstract

    Previous work based on different groups of children has shown that four- to five-year-old children are similar to adults in both producing and comprehending the focus-to-accentuation mapping in Dutch, contra the alleged production-precedes-comprehension asymmetry in earlier studies. In the current study, we addressed the question of whether there are individual differences in the production-comprehension (a)symmetricity. To this end, we examined the use of prosody in focus marking in production and the processing of focus-related prosody in online language comprehension in the same group of 4- to 5-year-olds. We have found that the relationship between comprehension and production can be rather diverse at an individual level. This result suggests some degree of independence in learning to use prosody to mark focus in production and learning to process focus-related prosodic information in online language comprehension, and implies influences of other linguistic and non-linguistic factors on the production-comprehension (a)symmetricity.
  • Chen, A. (2003). Reaction time as an indicator to discrete intonational contrasts in English. In Proceedings of Eurospeech 2003 (pp. 97-100).

    Abstract

    This paper reports a perceptual study using a semantically motivated identification task in which we investigated the nature of two pairs of intonational contrasts in English: (1) normal High accent vs. emphatic High accent; (2) early peak alignment vs. late peak alignment. Unlike previous inquiries, the present study employs an on-line method using the Reaction Time measurement, in addition to the measurement of response frequencies. Regarding the peak height continuum, the mean RTs are shortest for within-category identification but longest for across-category identification. As for the peak alignment contrast, no identification boundary emerges and the mean RTs only reflect a difference between peaks aligned with the vowel onset and peaks aligned elsewhere. We conclude that the peak height contrast is discrete but the previously claimed discreteness of the peak alignment contrast is not borne out.
  • Chen, A., Chen, A., Kager, R., & Wong, P. (2014). Rises and falls in Dutch and Mandarin Chinese. In C. Gussenhoven, Y. Chen, & D. Dediu (Eds.), Proceedings of the 4th International Symposium on Tonal Aspects of Language (pp. 83-86).

    Abstract

    Despite the different functions of pitch in tone and non-tone languages, rises and falls are common pitch patterns across different languages. In the current study, we ask what the language-specific phonetic realization of rises and falls is. Chinese and Dutch speakers participated in a production experiment. We used contexts composed for conveying specific communicative purposes to elicit rises and falls. We measured both tonal alignment and tonal scaling for both patterns. For the alignment measurements, we found language-specific patterns for the rises, but not for the falls. For rises, both peak and valley were aligned later among Chinese speakers compared to Dutch speakers. For all the scaling measurements (maximum pitch, minimum pitch, and pitch range), no language-specific patterns were found for either the rises or the falls.
  • Cho, T., & McQueen, J. M. (2004). Phonotactics vs. phonetic cues in native and non-native listening: Dutch and Korean listeners' perception of Dutch and English. In S. Kin, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 1301-1304). Seoul: Sunjin Printing Co.

    Abstract

    We investigated how listeners of two unrelated languages, Dutch and Korean, process phonotactically legitimate and illegitimate sounds spoken in Dutch and American English. To Dutch listeners, unreleased word-final stops are phonotactically illegal because word-final stops in Dutch are generally released in isolation, but to Korean listeners, released final stops are illegal because word-final stops are never released in Korean. Two phoneme monitoring experiments showed a phonotactic effect: Dutch listeners detected released stops more rapidly than unreleased stops whereas the reverse was true for Korean listeners. Korean listeners with English stimuli detected released stops more accurately than unreleased stops, however, suggesting that acoustic-phonetic cues associated with released stops improve detection accuracy. We propose that in non-native speech perception, phonotactic legitimacy in the native language speeds up phoneme recognition, the richness of acoustic-phonetic cues improves listening accuracy, and familiarity with the non-native language modulates the relative influence of these two factors.
  • Cho, T., & Johnson, E. K. (2004). Acoustic correlates of phrase-internal lexical boundaries in Dutch. In S. Kin, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 1297-1300). Seoul: Sunjin Printing Co.

    Abstract

    The aim of this study was to determine if Dutch speakers reliably signal phrase-internal lexical boundaries, and if so, how. Six speakers recorded 4 pairs of phonemically identical strong-weak-strong (SWS) strings with matching syllable boundaries but mismatching intended word boundaries (e.g. reis # pastei versus reispas # tij, or more broadly C1V1(C)#C2V2(C)C3V3(C) vs. C1V1(C)C2V2(C)#C3V3(C)). An Analysis of Variance revealed 3 acoustic parameters that were significantly greater in S#WS items (C2 DURATION, RIME1 DURATION, C3 BURST AMPLITUDE) and 5 parameters that were significantly greater in the SW#S items (C2 VOT, C3 DURATION, RIME2 DURATION, RIME3 DURATION, and V2 AMPLITUDE). Additionally, center of gravity measurements suggested that the [s] to [t] coarticulation was greater in reis # pa[st]ei versus reispa[s] # [t]ij. Finally, a Logistic Regression Analysis revealed that the 3 parameters (RIME1 DURATION, RIME2 DURATION, and C3 DURATION) contributed most reliably to a S#WS versus SW#S classification.
  • Cho, T. (2003). Lexical stress, phrasal accent and prosodic boundaries in the realization of domain-initial stops in Dutch. In Proceedings of the 15th International Congress of Phonetic Sciences (ICPhs 2003) (pp. 2657-2660). Adelaide: Causal Productions.

    Abstract

    This study examines the effects of prosodic boundaries, lexical stress, and phrasal accent on the acoustic realization of stops (/t, d/) in Dutch, with special attention paid to language-specificity in the phonetics-prosody interface. The results obtained from various acoustic measures show systematic phonetic variations in the production of /t d/ as a function of prosodic position, which may be interpreted as being due to prosodically-conditioned articulatory strengthening. Shorter VOTs were found for the voiceless stop /t/ in prosodically stronger locations (as opposed to longer VOTs in this position in English). The results suggest that prosodically-driven phonetic realization is bounded by a language-specific phonological feature system.
  • Choi, J., Broersma, M., & Cutler, A. (2015). Enhanced processing of a lost language: Linguistic knowledge or linguistic skill? In Proceedings of Interspeech 2015: 16th Annual Conference of the International Speech Communication Association (pp. 3110-3114).

    Abstract

    Same-different discrimination judgments for pairs of Korean stop consonants, or of Japanese syllables differing in phonetic segment length, were made by adult Korean adoptees in the Netherlands, by matched Dutch controls, and Korean controls. The adoptees did not outdo either control group on either task, although the same individuals had performed significantly better than matched controls on an identification learning task. This suggests that early exposure to multiple phonetic systems does not specifically improve acoustic-phonetic skills; rather, enhanced performance suggests retained language knowledge.
  • Clark, N., & Perlman, M. (2014). Breath, vocal, and supralaryngeal flexibility in a human-reared gorilla. In B. De Boer, & T. Verhoef (Eds.), Proceedings of Evolang X, Workshop on Signals, Speech, and Signs (pp. 11-15).

    Abstract

    “Gesture-first” theories dismiss ancestral great apes’ vocalization as a substrate for language evolution based on the claim that extant apes exhibit minimal learning and volitional control of vocalization. Contrary to this claim, we present data of novel learned and voluntarily controlled vocal behaviors produced by a human-fostered gorilla (G. gorilla gorilla). These behaviors demonstrate varying degrees of flexibility in the vocal apparatus (including diaphragm, lungs, larynx, and supralaryngeal articulators), and are predominantly performed in coordination with manual behaviors and gestures. Instead of a gesture-first theory, we suggest that these findings support multimodal theories of language evolution in which vocal and gestural forms are coordinated and supplement one another.
  • Collins, J. (2015). ‘Give’ and semantic maps. In B. Nolan, G. Rawoens, & E. Diedrichsen (Eds.), Causation, permission, and transfer: Argument realisation in GET, TAKE, PUT, GIVE and LET verbs (pp. 129-146). Amsterdam: John Benjamins.
  • Cooper, N., & Cutler, A. (2004). Perception of non-native phonemes in noise. In S. Kin, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 469-472). Seoul: Sunjin Printing Co.

    Abstract

    We report an investigation of the perception of American English phonemes by Dutch listeners proficient in English. Listeners identified either the consonant or the vowel in most possible English CV and VC syllables. The syllables were embedded in multispeaker babble at three signal-to-noise ratios (16 dB, 8 dB, and 0 dB). Effects of signal-to-noise ratio on vowel and consonant identification are discussed as a function of syllable position and of relationship to the native phoneme inventory. Comparison of the results with previously reported data from native listeners reveals that noise affected the responding of native and non-native listeners similarly.
  • Coridun, S., Ernestus, M., & Ten Bosch, L. (2015). Learning pronunciation variants in a second language: Orthographic effects. In Scottish consortium for ICPhS 2015, M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). Glasgow: University of Glasgow.

    Abstract

    The present study investigated the effect of orthography on the learning and subsequent processing of pronunciation variants in a second language. Dutch learners of French learned reduced pronunciation variants that result from schwa-zero alternation in French (e.g., reduced /ʃnij/ from chenille 'caterpillar'). Half of the participants additionally learnt the words' spellings, which correspond more closely to the full variants with schwa. On the following day, participants performed an auditory lexical decision task, in which they heard half of the words in their reduced variants, and the other half in their full variants. Participants who had exclusively learnt the auditory forms performed significantly worse on full variants than participants who had also learnt the spellings. This shows that learners integrate phonological and orthographic information to process pronunciation variants. There was no difference between both groups in their performances on reduced variants, suggesting that the exposure to spelling does not impede learners' processing of these variants.
  • Crago, M. B., & Allen, S. E. M. (1998). Acquiring Inuktitut. In O. L. Taylor, & L. Leonard (Eds.), Language Acquisition Across North America: Cross-Cultural And Cross-Linguistic Perspectives (pp. 245-279). San Diego, CA, USA: Singular Publishing Group, Inc.
  • Crago, M. B., Allen, S. E. M., & Pesco, D. (1998). Issues of Complexity in Inuktitut and English Child Directed Speech. In Proceedings of the twenty-ninth Annual Stanford Child Language Research Forum (pp. 37-46).
  • Crasborn, O., & Sloetjes, H. (2014). Improving the exploitation of linguistic annotations in ELAN. In N. Calzolari, K. Choukri, T. Declerck, H. Loftsson, B. Maegaard, J. Mariani, A. Moreno, J. Odijk, & S. Piperidis (Eds.), Proceedings of LREC 2014: 9th International Conference on Language Resources and Evaluation (pp. 3604-3608).

    Abstract

    This paper discusses some improvements in recent and planned versions of the multimodal annotation tool ELAN, which are targeted at improving the usability of annotated files. Increased support for multilingual documents is provided, by allowing for multilingual vocabularies and by specifying a language per document, annotation layer (tier) or annotation. In addition, improvements in the search possibilities and the display of the results have been implemented, which are especially relevant in the interpretation of the results of complex multi-tier searches.
  • Crasborn, O., Hulsbosch, M., Lampen, L., & Sloetjes, H. (2014). New multilayer concordance functions in ELAN and TROVA. In Proceedings of the Tilburg Gesture Research Meeting [TiGeR 2013].

    Abstract

    Collocations generated by concordancers are a standard instrument in the exploitation of text corpora for the analysis of language use. Multimodal corpora show similar types of patterns, activities that frequently occur together, but there is no tool that offers facilities for visualising such patterns. Examples include timing of eye contact with respect to speech, and the alignment of activities of the two hands in signed languages. This paper describes recent enhancements to the standard CLARIN tools ELAN and TROVA for multimodal annotation to address these needs: first of all the query and concordancing functions were improved, and secondly the tools now generate visualisations of multilayer collocations that allow for intuitive explorations and analyses of multimodal data. This will provide a boost to the linguistic fields of gesture and sign language studies, as it will improve the exploitation of multimodal corpora.
  • Croijmans, I., & Majid, A. (2015). Odor naming is difficult, even for wine and coffee experts. In D. C. Noelle, R. Dale, A. S. Warlaumont, J. Yoshimi, T. Matlock, C. D. Jennings, & P. P. Maglio (Eds.), Proceedings of the 37th Annual Meeting of the Cognitive Science Society (CogSci 2015) (pp. 483-488). Austin, TX: Cognitive Science Society. Retrieved from https://mindmodeling.org/cogsci2015/papers/0092/index.html.

    Abstract

    Odor naming is difficult for people, but recent cross-cultural research suggests this difficulty is culture-specific. Jahai speakers (hunter-gatherers from the Malay Peninsula) name odors as consistently as colors, and much better than English speakers (Majid & Burenhult, 2014). In Jahai the linguistic advantage for smells correlates with a cultural interest in odors. Here we ask whether sub-cultures in the West with odor expertise also show superior odor naming. We tested wine and coffee experts (who have specialized odor training) in an odor naming task. Both wine and coffee experts were no more accurate or consistent than novices when naming odors. Although there were small differences in naming strategies, experts and non-experts alike relied overwhelmingly on source-based descriptions. So the specific language experts speak continues to constrain their ability to express odors. This suggests expertise alone is not sufficient to overcome the limits of language in the domain of smell.
  • Cutler, A., Norris, D., & Sebastián-Gallés, N. (2004). Phonemic repertoire and similarity within the vocabulary. In S. Kin, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 65-68). Seoul: Sunjin Printing Co.

    Abstract

    Language-specific differences in the size and distribution of the phonemic repertoire can have implications for the task facing listeners in recognising spoken words. A language with more phonemes will allow shorter words and reduced embedding of short words within longer ones, decreasing the potential for spurious lexical competitors to be activated by speech signals. We demonstrate that this is the case via comparative analyses of the vocabularies of English and Spanish. A language which uses suprasegmental as well as segmental contrasts, however, can substantially reduce the extent of spurious embedding.
  • Cutler, A., & Butterfield, S. (2003). Rhythmic cues to speech segmentation: Evidence from juncture misperception. In J. Field (Ed.), Psycholinguistics: A resource book for students (pp. 185-189). London: Routledge.
  • Cutler, A., Murty, L., & Otake, T. (2003). Rhythmic similarity effects in non-native listening? In Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS 2003) (pp. 329-332). Adelaide: Causal Productions.

    Abstract

    Listeners rely on native-language rhythm in segmenting speech; in different languages, stress-, syllable- or mora-based rhythm is exploited. This language-specificity affects listening to non-native speech, if native procedures are applied even though inefficient for the non-native language. However, speakers of two languages with similar rhythmic interpretation should segment their own and the other language similarly. This was observed to date only for related languages (English-Dutch; French-Spanish). We now report experiments in which Japanese listeners heard Telugu, a Dravidian language unrelated to Japanese, and Telugu listeners heard Japanese. In both cases detection of target sequences in speech was harder when target boundaries mismatched mora boundaries, exactly the pattern that Japanese listeners earlier exhibited with Japanese and other languages. These results suggest that Telugu and Japanese listeners use similar procedures in segmenting speech, and support the idea that languages fall into rhythmic classes, with aspects of phonological structure affecting listeners' speech segmentation.
  • Cutler, A. (2004). Segmentation of spoken language by normal adult listeners. In R. Kent (Ed.), MIT encyclopedia of communication sciences and disorders (pp. 392-395). Cambridge, MA: MIT Press.
  • Cutler, A. (2003). The perception of speech: Psycholinguistic aspects. In W. Frawley (Ed.), International encyclopaedia of linguistics (pp. 154-157). Oxford: Oxford University Press.
  • Cutler, A., & Henton, C. G. (2004). There's many a slip 'twixt the cup and the lip. In H. Quené, & V. Van Heuven (Eds.), On speech and Language: Studies for Sieb G. Nooteboom (pp. 37-45). Utrecht: Netherlands Graduate School of Linguistics.

    Abstract

    The retiring academic may look back upon, inter alia, years of conference attendance. Speech error researchers are uniquely fortunate because they can collect data in any situation involving communication; accordingly, the retiring speech error researcher will have collected data at those conferences. We here address the issue of whether error data collected in situations involving conviviality (such as at conferences) is representative of error data in general. Our approach involved a comparison, across three levels of linguistic processing, between a specially constructed Conviviality Sample and the largest existing source of speech error data, the newly available Fromkin Speech Error Database. The results indicate that there are grounds for regarding the data in the Conviviality Sample as a better than average reflection of the true population of all errors committed. These findings encourage us to recommend further data collection in collaboration with like-minded colleagues.
  • Cutler, A. (2004). Twee regels voor academische vorming [Two rules for academic education]. In H. Procee (Ed.), Bij die wereld wil ik horen! Zesendertig columns en drie essays over de vorming tot academicus [I want to belong to that world! Thirty-six columns and three essays on becoming an academic] (pp. 42-45). Amsterdam: Boom.
  • Cutler, A., & Otake, T. (1998). Assimilation of place in Japanese and Dutch. In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: vol. 5 (pp. 1751-1754). Sydney: ICSLP.

    Abstract

    Assimilation of place of articulation across a nasal and a following stop consonant is obligatory in Japanese, but not in Dutch. In four experiments the processing of assimilated forms by speakers of Japanese and Dutch was compared, using a task in which listeners blended pseudo-word pairs such as ranga-serupa. An assimilated blend of this pair would be rampa, an unassimilated blend rangpa. Japanese listeners produced significantly more assimilated than unassimilated forms, both with pseudo-Japanese and pseudo-Dutch materials, while Dutch listeners produced significantly more unassimilated than assimilated forms in each materials set. This suggests that Japanese listeners, whose native-language phonology involves obligatory assimilation constraints, represent the assimilated nasals in nasal-stop sequences as unmarked for place of articulation, while Dutch listeners, who are accustomed to hearing unassimilated forms, represent the same nasal segments as marked for place of articulation.
  • Cutler, A. (1979). Beyond parsing and lexical look-up. In R. J. Wales, & E. C. T. Walker (Eds.), New approaches to language mechanisms: a collection of psycholinguistic studies (pp. 133-149). Amsterdam: North-Holland.
  • Cutler, A., Mister, E., Norris, D., & Sebastián-Gallés, N. (2004). La perception de la parole en espagnol: Un cas particulier? In L. Ferrand, & J. Grainger (Eds.), Psycholinguistique cognitive: Essais en l'honneur de Juan Segui (pp. 57-74). Brussels: De Boeck.
  • Cutler, A. (1998). How listeners find the right words. In Proceedings of the Sixteenth International Congress on Acoustics: Vol. 2 (pp. 1377-1380). Melville, NY: Acoustical Society of America.

    Abstract

    Languages contain tens of thousands of words, but these are constructed from a tiny handful of phonetic elements. Consequently, words resemble one another, or can be embedded within one another ("a coup stick snot with standing"). The process of spoken-word recognition by human listeners involves activation of multiple word candidates consistent with the input, and direct competition between activated candidate words. Further, human listeners are sensitive, at an early, prelexical, stage of speech processing, to constraints on what could potentially be a word of the language.
