Publications

  • Alhama, R. G., & Zuidema, W. (2017). Segmentation as Retention and Recognition: the R&R model. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017) (pp. 1531-1536). Austin, TX: Cognitive Science Society.

    Abstract

    We present the Retention and Recognition model (R&R), a probabilistic exemplar model that accounts for segmentation in Artificial Language Learning experiments. We show that R&R provides an excellent fit to human responses in three segmentation experiments with adults (Frank et al., 2010), outperforming existing models. Additionally, we analyze the results of the simulations and propose alternative explanations for the experimental findings.
  • Allen, G. L., & Haun, D. B. M. (2004). Proximity and precision in spatial memory. In G. Allen (Ed.), Human spatial memory: Remembering where (pp. 41-63). Mahwah, NJ: Lawrence Erlbaum.
  • Almeida, L., Amdal, I., Beires, N., Boualem, M., Boves, L., Den Os, E., Filoche, P., Gomes, R., Knudsen, J. E., Kvale, K., Rugelbak, J., Tallec, C., & Warakagoda, N. (2002). Implementing and evaluating a multimodal tourist guide. In J. v. Kuppevelt, L. Dybkjær, & N. Bernsen (Eds.), Proceedings of the International CLASS Workshop on Natural, Intelligent and Effective Interaction in Multimodal Dialogue Systems (pp. 1-7). Copenhagen: Kluwer.
  • Ameka, F. K. (2002). The progressive aspect in Likpe: Its implications for aspect and word order in Kwa. In F. K. Ameka, & E. K. Osam (Eds.), New directions in Ghanaian linguistics: Essays in honor of the 3Ds (pp. 85-111). Accra: Black Mask.
  • Ameka, F. K. (2002). Constituent order and grammatical relations in Ewe in typological perspective. In K. Davidse, & B. Lamiroy (Eds.), The nominative & accusative and their counterparts (pp. 319-352). Amsterdam: John Benjamins.
  • Ameka, F. K. (2010). Information packaging constructions in Kwa: Micro-variation and typology. In E. O. Aboh, & J. Essegbey (Eds.), Topics in Kwa syntax (pp. 141-176). Dordrecht: Springer.

    Abstract

    Kwa languages such as Akye, Akan, Ewe, Ga, Likpe, Yoruba etc. are not prototypically “topic-prominent” like Chinese nor “focus-prominent” like Somali, yet they have dedicated structural positions in the clause, as well as morphological markers for signalling the information status of the component parts of information units. They could thus be seen as “discourse configurational languages” (Kiss 1995). In this chapter, I first argue for distinct positions in the left periphery of the clause in these languages for scene-setting topics, contrastive topics and focus. I then describe the morpho-syntactic properties of various information packaging constructions and the variations that we find across the languages in this domain.
  • Ameka, F. K. (1999). Interjections. In K. Brown, & J. Miller (Eds.), Concise encyclopedia of grammatical categories (pp. 213-216). Oxford: Elsevier.
  • Ameka, F. K., De Witte, C., & Wilkins, D. (1999). Picture series for positional verbs: Eliciting the verbal component in locative descriptions. In D. Wilkins (Ed.), Manual for the 1999 Field Season (pp. 48-54). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.2573831.

    Abstract

    How do different languages encode location and position meanings? In conjunction with the BowPed picture series and Caused Positions task, this elicitation tool is designed to help researchers (i) identify a language’s resources for encoding topological relations; (ii) delimit the pragmatics of use of such resources; and (iii) determine the semantics of select spatial terms. The task focuses on the exploration of the predicative component of topological expressions (e.g., ‘the cassavas are lying in the basket’), especially the contrastive elicitation of positional verbs. The materials consist of a set of photographs of objects (e.g., bottles, cloths, sticks) in specific configurations with various ground items (e.g., basket, table, tree).

    Additional information

    1999_Positional_verbs_stimuli.zip
  • Ameka, F. K. (2017). The Uselessness of the Useful: Language Standardisation and Variation in Multilingual Context. In I. Tieken-Boon van Ostade, & C. Percy (Eds.), Prescription and tradition in language: Establishing standards across time and space (pp. 71-87). Bristol: Multilingual Matters.
  • Arnhold, A., Vainio, M., Suni, A., & Järvikivi, J. (2010). Intonation of Finnish verbs. Speech Prosody 2010, 100054, 1-4. Retrieved from http://speechprosody2010.illinois.edu/papers/100054.pdf.

    Abstract

    A production experiment investigated the tonal shape of Finnish finite verbs in transitive sentences without narrow focus. Traditional descriptions of Finnish stating that non-focused finite verbs do not receive accents were only partly supported. Verbs were found to have a consistently smaller pitch range than words in other word classes, but their pitch contours were neither flat nor explainable by pure interpolation.
  • Auer, E., Wittenburg, P., Sloetjes, H., Schreer, O., Masneri, S., Schneider, D., & Tschöpel, S. (2010). Automatic annotation of media field recordings. In C. Sporleder, & K. Zervanou (Eds.), Proceedings of the ECAI 2010 Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities (LaTeCH 2010) (pp. 31-34). Lisbon: University of Lisbon. Retrieved from http://ilk.uvt.nl/LaTeCH2010/.

    Abstract

    In this paper we describe a new attempt to develop automatic detectors for processing real-scene audio-video streams that researchers worldwide can use to speed up their annotation and analysis work. Typically these recordings are made in field and experimental situations, often of poor quality and yielding only small corpora, which prevents the use of standard stochastic pattern recognition techniques. Audio/video processing components are taken out of the expert lab and integrated into easy-to-use interactive frameworks so that researchers can easily run them with modified parameters and check the usefulness of the resulting annotations. In the end, a variety of detectors may have been applied, yielding a lattice of annotations. A flexible search engine allows finding combinations of patterns, opening completely new analysis and theorization possibilities for researchers who until now were required to do all annotations manually and had no help in pre-segmenting lengthy media recordings.
  • Auer, E., Russel, A., Sloetjes, H., Wittenburg, P., Schreer, O., Masneri, S., Schneider, D., & Tschöpel, S. (2010). ELAN as flexible annotation framework for sound and image processing detectors. In N. Calzolari, B. Maegaard, J. Mariani, J. Odijk, K. Choukri, S. Piperidis, M. Rosner, & D. Tapias (Eds.), Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10) (pp. 890-893). European Language Resources Association (ELRA).

    Abstract

    Annotation of digital recordings in humanities research is still, to a large extent, a process that is performed manually. This paper describes the first pattern recognition based software components developed in the AVATecH project and their integration in the annotation tool ELAN. AVATecH (Advancing Video/Audio Technology in Humanities Research) is a project that involves two Max Planck Institutes (Max Planck Institute for Psycholinguistics, Nijmegen; Max Planck Institute for Social Anthropology, Halle) and two Fraunhofer Institutes (Fraunhofer-Institut für Intelligente Analyse- und Informationssysteme IAIS, Sankt Augustin; Fraunhofer Heinrich-Hertz-Institute, Berlin) and that aims to develop and implement audio and video technology for semi-automatic annotation of heterogeneous media collections as they occur in multimedia-based research. The highly diverse nature of the digital recordings stored in the archives of both Max Planck Institutes poses a huge challenge to most of the existing pattern recognition solutions and is a motivation to make such technology available to researchers in the humanities.
  • Azar, Z., Backus, A., & Ozyurek, A. (2017). Highly proficient bilinguals maintain language-specific pragmatic constraints on pronouns: Evidence from speech and gesture. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017) (pp. 81-86). Austin, TX: Cognitive Science Society.

    Abstract

    The use of subject pronouns by bilingual speakers using both a pro-drop and a non-pro-drop language (e.g. Spanish heritage speakers in the USA) is a well-studied topic in research on cross-linguistic influence in language contact situations. Previous studies looking at bilinguals with different proficiency levels have yielded conflicting results on whether there is transfer from the non-pro-drop patterns to the pro-drop language. Additionally, previous research has focused on speech patterns only. In this paper, we study the two modalities of language, speech and gesture, and ask whether and how they reveal cross-linguistic influence on the use of subject pronouns in discourse. We focus on elicited narratives from heritage speakers of Turkish in the Netherlands, in both Turkish (pro-drop) and Dutch (non-pro-drop), as well as from monolingual control groups. The use of pronouns was not very common in monolingual Turkish narratives and was constrained by the pragmatic contexts, unlike in Dutch. Furthermore, Turkish pronouns were more likely to be accompanied by localized gestures than Dutch pronouns, presumably because pronouns in Turkish are pragmatically marked forms. We did not find any cross-linguistic influence in bilingual speech or gesture patterns, in line with studies (speech only) of highly proficient bilinguals. We therefore suggest that speech and gesture parallel each other not only in monolingual but also in bilingual production. Highly proficient heritage speakers who have been exposed to diverse linguistic and gestural patterns of each language from early on maintain monolingual patterns of pragmatic constraints on the use of pronouns multimodally.
  • Barbiers, S., & Van Dooren, A. (2017). Modal Auxiliaries. In M. Everaert, & H. C. Van Riemsdijk (Eds.), The Wiley Blackwell Companion to Syntax (2nd ed.). Hoboken, NJ, USA: Wiley.

    Abstract

    In many languages modal auxiliaries such as English can, must, may, need, will, ought, want are ambiguous between two types of interpretations: epistemic and root interpretations. In the epistemic interpretation the modal expresses how likely it is that a proposition is true (for example, necessarily, possibly, probably true) while in the root interpretations the modal expresses the obligatoriness, permissibility, desirability, or possibility of a state or event. A central question in much syntactic research on modal auxiliaries has been whether this systematic semantic ambiguity corresponds to a syntactic distinction. A commonly accepted answer has been that in epistemic interpretations the modal verb is a monadic predicate while in root interpretations it is a dyadic predicate, typically a relation between a subject and an infinitival verb. This distinction between monadic and dyadic modal predicates has been modeled syntactically in various ways: (i) in terms of lexical argument structure, that is, as the distinction between raising and control verbs; (ii) in terms of different base positions in the array of functional heads making up the clausal spine, with epistemic modals being higher than root modals; (iii) in terms of a higher syntactic position for epistemically interpreted modals after raising at the level of semantic interpretation (LF raising); (iv) in terms of the nature of the complement of the modal. This chapter evaluates these proposals, drawing on data from, among others, English, Dutch, Icelandic, German, and Catalan and taking into account cross-linguistic differences in the modal systems. One important conclusion is that the alleged correspondence between the epistemic/root distinction and the raising/control distinction is too simple, as there are sentences with root interpretations but a raising syntax. The chapter ends with a list of questions for future research.
  • Bardhan, N. P., Aslin, R., & Tanenhaus, M. (2010). Adults' self-directed learning of an artificial lexicon: The dynamics of neighborhood reorganization. In S. Ohlsson, & R. Catrambone (Eds.), Proceedings of the 32nd Annual Meeting of the Cognitive Science Society (pp. 364-368). Austin, TX: Cognitive Science Society.
  • Bauer, B. L. M. (1999). Aspects of impersonal constructions in Late Latin. In H. Petersmann, & R. Kettelmann (Eds.), Latin vulgaire – latin tardif V (pp. 209-211). Heidelberg: Winter.
  • Bauer, B. L. M. (2010). Fore-runners of Romance -mente adverbs in Latin prose and poetry. In E. Dickey, & A. Chahoud (Eds.), Colloquial and literary Latin (pp. 339-353). Cambridge: Cambridge University Press.
  • Bauer, B. L. M. (1999). Impersonal HABET constructions: At the cross-roads of Indo-European innovation. In E. Polomé, & C. Justus (Eds.), Language change and typological variation. Vol II. Grammatical universals and typology (pp. 590-612). Washington: Institute for the Study of Man.
  • Bergmann, C., Paulus, M., & Fikkert, P. (2010). A closer look at pronoun comprehension: Comparing different methods. In J. Costa, A. Castro, M. Lobo, & F. Pratas (Eds.), Language Acquisition and Development: Proceedings of GALA 2009 (pp. 53-61). Newcastle upon Tyne: Cambridge Scholars Publishing.

    Abstract

    External input is necessary to acquire language. Consequently, the comprehension of various constituents of language, such as lexical items or syntactic and semantic structures, should emerge at the same time as, or even precede, their production. However, in the case of pronouns this general assumption does not seem to hold. On the contrary, while children at the age of four use pronouns and reflexives appropriately during production (de Villiers et al. 2006), a number of comprehension studies across different languages found chance performance in pronoun trials up to the age of seven, which co-occurs with a high level of accuracy in reflexive trials (for an overview see e.g. Conroy et al. 2009; Elbourne 2005).
  • Bergmann, C., Gubian, M., & Boves, L. (2010). Modelling the effect of speaker familiarity and noise on infant word recognition. In Proceedings of the 11th Annual Conference of the International Speech Communication Association [Interspeech 2010] (pp. 2910-2913). ISCA.

    Abstract

    In the present paper we show that a general-purpose word learning model can simulate several important findings from recent experiments in language acquisition. Both the addition of background noise and varying the speaker have been found to influence infants’ performance during word recognition experiments. We were able to replicate this behaviour in our artificial word learning agent. We use the results to discuss both advantages and limitations of computational models of language acquisition.
  • Bergmann, C., Tsuji, S., & Cristia, A. (2017). Top-down versus bottom-up theories of phonological acquisition: A big data approach. In Proceedings of Interspeech 2017 (pp. 2103-2107).

    Abstract

    Recent work has made available a number of standardized meta-analyses bearing on various aspects of infant language processing. We utilize data from two such meta-analyses (discrimination of vowel contrasts and word segmentation, i.e., recognition of word forms extracted from running speech) to assess whether the published body of empirical evidence supports a bottom-up versus a top-down theory of early phonological development by leveraging the power of results from thousands of infants. We predicted that if infants can rely purely on auditory experience to develop their phonological categories, then vowel discrimination and word segmentation should develop in parallel, with the latter being potentially lagged compared to the former. However, if infants crucially rely on word form information to build their phonological categories, then development at the word level must precede the acquisition of native sound categories. Our results do not support the latter prediction. We discuss potential implications and limitations, most saliently that word forms are only one top-down level proposed to affect phonological development, with other proposals suggesting that top-down pressures emerge from lexical (i.e., word-meaning pairs) development. This investigation also highlights general procedures by which standardized meta-analyses may be reused to answer theoretical questions spanning across phenomena.

    Additional information

    Scripts and data
  • Bickel, B. (1991). Der Hang zur Exzentrik - Annäherungen an das kognitive Modell der Relativkonstruktion. In W. Bisang, & P. Rinderknecht (Eds.), Von Europa bis Ozeanien - von der Antinomie zum Relativsatz (pp. 15-37). Zurich, Switzerland: Seminar für Allgemeine Sprachwissenschaft der Universität.
  • Black, A., & Bergmann, C. (2017). Quantifying infants' statistical word segmentation: A meta-analysis. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Meeting of the Cognitive Science Society (pp. 124-129). Austin, TX: Cognitive Science Society.

    Abstract

    Theories of language acquisition and perceptual learning increasingly rely on statistical learning mechanisms. The current meta-analysis aims to clarify the robustness of this capacity in infancy within the word segmentation literature. Our analysis reveals a significant, small effect size for conceptual replications of Saffran, Aslin, & Newport (1996), and a nonsignificant effect across all studies that incorporate transitional probabilities to segment words. In both conceptual replications and the broader literature, however, statistical learning is moderated by whether stimuli are naturally produced or synthesized. These findings invite deeper questions about the complex factors that influence statistical learning, and the role of statistical learning in language acquisition.
  • Blythe, J. (2010). From ethical datives to number markers in Murriny Patha. In R. Hendery, & J. Hendriks (Eds.), Grammatical change: Theory and description (pp. 157-187). Canberra: Pacific Linguistics.
  • Blythe, J. (2010). Self-association in Murriny Patha talk-in-interaction. In I. Mushin, & R. Gardner (Eds.), Studies in Australian Indigenous Conversation [Special issue] (pp. 447-469). Australian Journal of Linguistics. doi:10.1080/07268602.2010.518555.

    Abstract

    When referring to persons in talk-in-interaction, interlocutors recruit the particular referential expressions that best satisfy cultural and interactional contingencies as well as the speaker’s own personal objectives. Regular referring practices reveal cultural preferences for choosing particular classes of reference forms for engaging in particular types of activities. When speakers of the northern Australian language Murriny Patha refer to each other, they display a clear preference for associating the referent to the current conversation’s participants. This preference for Association is normally achieved through the use of triangular reference forms such as kinterms. Triangulations are reference forms that link the person being spoken about to another specified person (e.g. Bill’s doctor). Triangulations are frequently used to associate the referent to the current speaker (e.g. my father), to an addressed recipient (your uncle) or co-present other (this bloke’s cousin). Murriny Patha speakers regularly associate key persons to themselves when making authoritative claims about items of business and important events. They frequently draw on kinship links when attempting to bolster their epistemic position. When speakers demonstrate their relatedness to the event’s protagonists, they ground their contribution to the discussion as being informed by appropriate genealogical connections (effectively, ‘I happen to know something about that. He was after all my own uncle’).
  • Bock, K., & Levelt, W. J. M. (2002). Language production: Grammatical encoding. In G. T. Altmann (Ed.), Psycholinguistics: Critical concepts in psychology (pp. 405-452). London: Routledge.
  • Bohnemeyer, J. (1999). A questionnaire on event integration. In D. Wilkins (Ed.), Manual for the 1999 Field Season (pp. 87-95). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3002691.

    Abstract

    How do we decide where events begin and end? Like the ECOM clips, this questionnaire is designed to investigate how a language divides and/or integrates complex scenarios into sub-events and macro-events. The questionnaire focuses on events of motion, caused state change (e.g., breaking), and transfer (e.g., giving). It provides a checklist of scenarios that give insight into where a language “draws the line” in event integration, based on known cross-linguistic differences.
  • Bohnemeyer, J., & Majid, A. (2002). ECOM causality revisited version 4. In S. Kita (Ed.), 2002 Supplement (version 3) for the “Manual” for the field season 2001 (pp. 35-38). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Bohnemeyer, J. (2004). Argument and event structure in Yukatek verb classes. In J.-Y. Kim, & A. Werle (Eds.), Proceedings of The Semantics of Under-Represented Languages in the Americas. Amherst, Mass: GLSA.

    Abstract

    In Yukatek Maya, event types are lexicalized in verb roots and stems that fall into a number of different form classes on the basis of (a) patterns of aspect-mood marking and (b) privileges of undergoing valence-changing operations. Of particular interest are the intransitive classes in the light of Perlmutter’s (1978) Unaccusativity hypothesis. In the spirit of Levin & Rappaport Hovav (1995) [L&RH], Van Valin (1990), Zaenen (1993), and others, this paper investigates whether (and to what extent) the association between formal predicate classes and event types is determined by argument structure features such as ‘agentivity’ and ‘control’ or features of lexical aspect such as ‘telicity’ and ‘durativity’. It is shown that mismatches between agentivity/control and telicity/durativity are even more extensive in Yukatek than they are in English (Abusch 1985; L&RH, Van Valin & LaPolla 1997), providing new evidence against Dowty’s (1979) reconstruction of Vendler’s (1967) ‘time schemata of verbs’ in terms of argument structure configurations. Moreover, contrary to what has been claimed in earlier studies of Yukatek (Krämer & Wunderlich 1999, Lucy 1994), neither agentivity/control nor telicity/durativity turn out to be good predictors of verb class membership. Instead, the patterns of aspect-mood marking prove to be sensitive only to the presence or absence of state change, in a way that supports the unified analysis of all verbs of gradual change proposed by Kennedy & Levin (2001). The presence or absence of ‘internal causation’ (L&RH) may motivate the semantic interpretation of transitivization operations. An explicit semantics for the valence-changing operations is proposed, based on Parsons’s (1990) Neo-Davidsonian approach.
  • Bohnemeyer, J., Burenhult, N., Enfield, N. J., & Levinson, S. C. (2004). Landscape terms and place names elicitation guide. In A. Majid (Ed.), Field Manual Volume 9 (pp. 75-79). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492904.

    Abstract

    Landscape terms reflect the relationship between geographic reality and human cognition. Are ‘mountains’, ‘rivers’, ‘lakes’ and the like universally recognised in languages as naturally salient objects to be named? The landscape subproject is concerned with the interrelation between language, cognition and geography. Specifically, it investigates issues relating to how landforms are categorised cross-linguistically as well as the characteristics of place naming.
  • Bohnemeyer, J. (1999). Event representation and event complexity: General introduction. In D. Wilkins (Ed.), Manual for the 1999 Field Season (pp. 69-73). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3002741.

    Abstract

    How do we decide where events begin and end? In some languages it makes sense to say something like Dan broke the plate, but in other languages it is necessary to treat this action as a complex scenario composed of separate stages (Dan dropped the plate and then the plate broke). This document introduces issues concerning the linguistic and cognitive representations of event complexity and integration, and provides an overview of tasks that are relevant to this topic, including the ECOM clips, the Questionnaire on Event integration, and the Questionnaire on motion lexicalisation and motion description.
  • Bohnemeyer, J., & Caelen, M. (1999). The ECOM clips: A stimulus for the linguistic coding of event complexity. In D. Wilkins (Ed.), Manual for the 1999 Field Season (pp. 74-86). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.874627.

    Abstract

    How do we decide where events begin and end? In some languages it makes sense to say something like Dan broke the plate, but in other languages it is necessary to treat this action as a complex scenario composed of separate stages (Dan dropped the plate and then the plate broke). The “Event Complexity” (ECOM) clips are designed to explore how languages differ in dividing and/or integrating complex scenarios into sub-events and macro-events. The stimuli consist of animated clips of geometric shapes that participate in different scenarios (e.g., a circle “hits” a triangle and “breaks” it). Consultants are asked to describe the scenes, and then to comment on possible alternative descriptions.

    Additional information

    1999_The_ECOM_clips.zip
  • Bosker, H. R., & Kösem, A. (2017). An entrained rhythm's frequency, not phase, influences temporal sampling of speech. In Proceedings of Interspeech 2017 (pp. 2416-2420). doi:10.21437/Interspeech.2017-73.

    Abstract

    Brain oscillations have been shown to track the slow amplitude fluctuations in speech during comprehension. Moreover, there is evidence that these stimulus-induced cortical rhythms may persist even after the driving stimulus has ceased. However, how exactly this neural entrainment shapes speech perception remains debated. This behavioral study investigated whether and how the frequency and phase of an entrained rhythm would influence the temporal sampling of subsequent speech. In two behavioral experiments, participants were presented with slow and fast isochronous tone sequences, followed by Dutch target words ambiguous between as /ɑs/ “ash” (with a short vowel) and aas /a:s/ “bait” (with a long vowel). Target words were presented at various phases of the entrained rhythm. Both experiments revealed effects of the frequency of the tone sequence on target word perception: fast sequences biased listeners to more long /a:s/ responses. However, no evidence for phase effects could be discerned. These findings show that an entrained rhythm’s frequency, but not phase, influences the temporal sampling of subsequent speech. These outcomes are compatible with theories suggesting that sensory timing is evaluated relative to entrained frequency. Furthermore, they suggest that phase tracking of (syllabic) rhythms by theta oscillations plays a limited role in speech parsing.
  • Bosker, H. R. (2017). The role of temporal amplitude modulations in the political arena: Hillary Clinton vs. Donald Trump. In Proceedings of Interspeech 2017 (pp. 2228-2232). doi:10.21437/Interspeech.2017-142.

    Abstract

    Speech is an acoustic signal with inherent amplitude modulations in the 1-9 Hz range. Recent models of speech perception propose that this rhythmic nature of speech is central to speech recognition. Moreover, rhythmic amplitude modulations have been shown to have beneficial effects on language processing and the subjective impression listeners have of the speaker. This study investigated the role of amplitude modulations in the political arena by comparing the speech produced by Hillary Clinton and Donald Trump in the three presidential debates of 2016. Inspection of the modulation spectra, revealing the spectral content of the two speakers’ amplitude envelopes after matching for overall intensity, showed considerably greater power in Clinton’s modulation spectra (compared to Trump’s) across the three debates, particularly in the 1-9 Hz range. The findings suggest that Clinton’s speech had a more pronounced temporal envelope with rhythmic amplitude modulations below 9 Hz, with a preference for modulations around 3 Hz. This may be taken as evidence for a more structured temporal organization of syllables in Clinton’s speech, potentially due to more frequent use of preplanned utterances. Outcomes are interpreted in light of the potential beneficial effects of a rhythmic temporal envelope on intelligibility and speaker perception.
  • Bosker, H. R., Briaire, J., Heeren, W., van Heuven, V. J., & Jongman, S. R. (2010). Whispered speech as input for cochlear implants. In J. Van Kampen, & R. Nouwen (Eds.), Linguistics in the Netherlands 2010 (pp. 1-14).
  • Bottini, R., & Casasanto, D. (2010). Implicit spatial length modulates time estimates, but not vice versa. In C. Hölscher, T. F. Shipley, M. Olivetti Belardinelli, J. A. Bateman, & N. Newcombe (Eds.), Spatial Cognition VII. International Conference, Spatial Cognition 2010, Mt. Hood/Portland, OR, USA, August 15-19, 2010. Proceedings (pp. 152-162). Berlin Heidelberg: Springer.

    Abstract

    How are space and time represented in the human mind? Here we evaluate two theoretical proposals, one suggesting a symmetric relationship between space and time (ATOM theory) and the other an asymmetric relationship (metaphor theory). In Experiment 1, Dutch-speakers saw 7-letter nouns that named concrete objects of various spatial lengths (tr. pencil, bench, footpath) and estimated how much time they remained on the screen. In Experiment 2, participants saw nouns naming temporal events of various durations (tr. blink, party, season) and estimated the words’ spatial length. Nouns that named short objects were judged to remain on the screen for a shorter time, and nouns that named longer objects to remain for a longer time. By contrast, variations in the duration of the event nouns’ referents had no effect on judgments of the words’ spatial length. This asymmetric pattern of cross-dimensional interference supports metaphor theory and challenges ATOM.
  • Bottini, R., & Casasanto, D. (2010). Implicit spatial length modulates time estimates, but not vice versa. In S. Ohlsson, & R. Catrambone (Eds.), Proceedings of the 32nd Annual Conference of the Cognitive Science Society (pp. 1348-1353). Austin, TX: Cognitive Science Society.

    Abstract

    Why do people accommodate to each other’s linguistic behavior? Studies of natural interactions (Giles, Taylor & Bourhis, 1973) suggest that speakers accommodate to achieve interactional goals, influencing what their interlocutor thinks or feels about them. But is this the only reason speakers accommodate? In real-world conversations, interactional motivations are ubiquitous, making it difficult to assess the extent to which they drive accommodation. Do speakers still accommodate even when interactional goals cannot be achieved, for instance, when their interlocutor cannot interpret their accommodation behavior? To find out, we asked participants to enter an immersive virtual reality (VR) environment and to converse with a virtual interlocutor. Participants accommodated to the speech rate of their virtual interlocutor even though he could not interpret their linguistic behavior, and thus accommodation could not possibly help them to achieve interactional goals. Results show that accommodation does not require explicit interactional goals, and suggest other social motivations for accommodation.
  • Bowerman, M. (2002). Taalverwerving, cognitie en cultuur. In T. Janssen (Ed.), Taal in gebruik: Een inleiding in de taalwetenschap (pp. 27-44). The Hague: Sdu.
  • Bowerman, M., Brown, P., Eisenbeiss, S., Narasimhan, B., & Slobin, D. I. (2002). Putting things in places: Developmental consequences of linguistic typology. In E. V. Clark (Ed.), Proceedings of the 31st Stanford Child Language Research Forum. Space in language: Location, motion, path, and manner (pp. 1-29). Stanford: Center for the Study of Language & Information.

    Abstract

    This study explores how adults and children describe placement events (e.g., putting a book on a table) in a range of different languages (Finnish, English, German, Russian, Hindi, Tzeltal Maya, Spanish, and Turkish). Results show that the eight languages grammatically encode placement events in two main ways (Talmy, 1985, 1991), but further investigation reveals fine-grained crosslinguistic variation within each of the two groups. Children are sensitive to these finer-grained characteristics of the input language at an early age, but only when such features are perceptually salient. Our study demonstrates that a unitary notion of 'event' does not suffice to characterize complex but systematic patterns of event encoding crosslinguistically, and that children are sensitive to multiple influences, including the distributional properties of the target language, in constructing these patterns in their own speech.
  • Bowerman, M. (1986). First steps in acquiring conditionals. In E. C. Traugott, A. G. t. Meulen, J. S. Reilly, & C. A. Ferguson (Eds.), On conditionals (pp. 285-308). Cambridge University Press.

    Abstract

    This chapter is about the initial flowering of conditionals, if-(then) constructions, in children's spontaneous speech. It is motivated by two major theoretical interests. The first and most immediate is to understand the acquisition process itself. Conditionals are conceptually, and in many languages morphosyntactically, complex. What aspects of cognitive and grammatical development are implicated in their acquisition? Does learning take place in the context of particular interactions with other speakers? Where do conditionals fit in with the acquisition of other complex sentences? What are the semantic, syntactic and pragmatic properties of the first conditionals? Underlying this first interest is a second, more strictly linguistic one. Research of recent years has found increasing evidence that natural languages are constrained in certain ways. The source of these constraints is not yet clearly understood, but it is widely assumed that some of them derive ultimately from properties of children's capacity for language acquisition.

  • Bowerman, M. (2004). From universal to language-specific in early grammatical development [Reprint]. In K. Trott, S. Dobbinson, & P. Griffiths (Eds.), The child language reader (pp. 131-146). London: Routledge.

    Abstract

    Attempts to explain children's grammatical development often assume a close initial match between units of meaning and units of form; for example, agents are said to map to sentence-subjects and actions to verbs. The meanings themselves, according to this view, are not influenced by language, but reflect children's universal non-linguistic way of understanding the world. This paper argues that, contrary to this position, meaning as it is expressed in children's early sentences is, from the beginning, organized on the basis of experience with the grammar and lexicon of a particular language. As a case in point, children learning English and Korean are shown to express meanings having to do with directed motion according to language-specific principles of semantic and grammatical structuring from the earliest stages of word combination.
  • Bowerman, M. (2002). Mapping thematic roles onto syntactic functions: Are children helped by innate linking rules? [Reprint]. In Mouton Classics: From syntax to cognition, from phonology to text (vol.2) (pp. 495-531). Berlin: Mouton de Gruyter.

    Abstract

    Reprinted from: Bowerman, M. (1990). Mapping thematic roles onto syntactic functions: Are children helped by innate linking rules? Linguistics, 28, 1253-1289.
  • Bowerman, M., Gullberg, M., Majid, A., & Narasimhan, B. (2004). Put project: The cross-linguistic encoding of placement events. In A. Majid (Ed.), Field Manual Volume 9 (pp. 10-24). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492916.

    Abstract

    How similar are the event concepts encoded by different languages? So far, few event domains have been investigated in any detail. The PUT project extends the systematic cross-linguistic exploration of event categorisation to a new domain, that of placement events (putting things in places and removing them from places). The goal of this task is to explore cross-linguistic universality and variability in the semantic categorisation of placement events (e.g., ‘putting a cup on the table’).

    Additional information

    2004_Put_project_video_stimuli.zip
  • Braun, B., & Tagliapietra, L. (2010). The role of contrastive intonation contours in the retrieval of contextual alternatives. In D. G. Watson, M. Wagner, & E. Gibson (Eds.), Experimental and theoretical advances in prosody (pp. 1024-1043). Hove: Psychology Press.

    Abstract

    Sentences with a contrastive intonation contour are usually produced when the speaker entertains alternatives to the accented words. However, such contrastive sentences are frequently produced without making the alternatives explicit for the listener. In two cross-modal associative priming experiments we tested in Dutch whether such contextual alternatives become available to listeners upon hearing a sentence with a contrastive intonation contour compared with a sentence with a non-contrastive one. The first experiment tested the recognition of contrastive associates (contextual alternatives to the sentence-final primes), the second one the recognition of non-contrastive associates (generic associates which are not alternatives). Results showed that contrastive associates were facilitated when the primes occurred in sentences with a contrastive intonation contour but not in sentences with a non-contrastive intonation. Non-contrastive associates were weakly facilitated independent of intonation. Possibly, contrastive contours trigger an accommodation mechanism by which listeners retrieve the contrast available for the speaker.
  • Broeder, D., Brugman, H., Oostdijk, N., & Wittenburg, P. (2004). Towards Dynamic Corpora: Workshop on compiling and processing spoken corpora. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC 2004) (pp. 59-62). Paris: European Language Resource Association.
  • Broeder, D., Wittenburg, P., & Crasborn, O. (2004). Using Profiles for IMDI Metadata Creation. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC 2004) (pp. 1317-1320). Paris: European Language Resources Association.
  • Broeder, D., Kemps-Snijders, M., Van Uytvanck, D., Windhouwer, M., Withers, P., Wittenburg, P., & Zinn, C. (2010). A data category registry- and component-based metadata framework. In N. Calzolari, B. Maegaard, J. Mariani, J. Odijk, K. Choukri, S. Piperidis, M. Rosner, & D. Tapias (Eds.), Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10) (pp. 43-47). European Language Resources Association (ELRA).

    Abstract

    We describe our computer-supported framework to overcome the rule of metadata schism. It combines the use of controlled vocabularies, managed by a data category registry, with a component-based approach, where the categories can be combined to yield complex metadata structures. A metadata scheme devised in this way will thus be grounded in its use of categories. Schema designers will profit from existing prefabricated larger building blocks, motivating re-use at a larger scale. The common base of any two metadata schemes within this framework will solve, at least to a good extent, the semantic interoperability problem, and consequently, further promote systematic use of metadata for existing resources and tools to be shared.
  • Broeder, D., Declerck, T., Romary, L., Uneson, M., Strömqvist, S., & Wittenburg, P. (2004). A large metadata domain of language resources. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC 2004) (pp. 369-372). Paris: European Language Resources Association.
  • Broeder, D., Nava, M., & Declerck, T. (2004). INTERA - a Distributed Domain of Metadata Resources. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC 2004) (pp. 369-372). Paris: European Language Resources Association.
  • Broeder, D., Offenga, F., & Willems, D. (2002). Metadata tools supporting controlled vocabulary services. In M. Rodriguez González, & C. Paz Suárez Araujo (Eds.), Third international conference on language resources and evaluation (pp. 1055-1059). Paris: European Language Resources Association.

    Abstract

    Within the ISLE Metadata Initiative (IMDI) project a user-friendly editor to enter metadata descriptions and a browser operating on the linked metadata descriptions were developed. Both tools support the usage of Controlled Vocabulary (CV) repositories by means of the specification of a URL where the formal CV definition data is available.
  • Broeder, D., Wittenburg, P., Declerck, T., & Romary, L. (2002). LREP: A language repository exchange protocol. In M. Rodriguez González, & C. Paz Suárez Araujo (Eds.), Third international conference on language resources and evaluation (pp. 1302-1305). Paris: European Language Resources Association.

    Abstract

    The recent increase in the number and complexity of the language resources available on the Internet is followed by a similar increase of available tools for linguistic analysis. Ideally the user does not need to be confronted with the question of how to match tools with resources. If resource repositories and tool repositories offer adequate metadata information and a suitable exchange protocol is developed, this matching process could be performed (semi-)automatically.
  • Broersma, M. (2010). Dutch listeners' perception of Korean fortis, lenis, and aspirated stops: First exposure. In K. Dziubalska-Kołaczyk, M. Wrembel, & M. Kul (Eds.), Proceedings of the 6th International Symposium on the Acquisition of Second Language Speech, New Sounds 2010, Poznań, Poland, 1-3 May 2010 (pp. 49-54).
  • Broersma, M. (2002). Comprehension of non-native speech: Inaccurate phoneme processing and activation of lexical competitors. In ICSLP-2002 (pp. 261-264). Denver: Center for Spoken Language Research, U. of Colorado Boulder.

    Abstract

    Native speakers of Dutch with English as a second language and native speakers of English participated in an English lexical decision experiment. Phonemes in real words were replaced by others from which they are hard to distinguish for Dutch listeners. Non-native listeners judged the resulting near-words more often as a word than native listeners. This not only happened when the phonemes that were exchanged did not exist as separate phonemes in the native language Dutch, but also when phoneme pairs that do exist in Dutch were used in word-final position, where they are not distinctive in Dutch. In an English bimodal priming experiment with similar groups of participants, word pairs were used which differed in one phoneme. These phonemes were hard to distinguish for the non-native listeners. Whereas in native listening both words inhibited each other, in non-native listening presentation of one word led to unresolved competition between both words. The results suggest that inaccurate phoneme processing by non-native listeners leads to the activation of spurious lexical competitors.
  • Broersma, M., & Kolkman, K. M. (2004). Lexical representation of non-native phonemes. In S. Kin, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 1241-1244). Seoul: Sunjijn Printing Co.
  • Broersma, M. (2010). Korean lenis, fortis, and aspirated stops: Effect of place of articulation on acoustic realization. In Proceedings of the 11th Annual Conference of the International Speech Communication Association (Interspeech 2010), Makuhari, Japan. (pp. 941-944).

    Abstract

    Unlike most of the world's languages, Korean distinguishes three types of voiceless stops, namely lenis, fortis, and aspirated stops. All occur at three places of articulation. In previous work, acoustic measurements are mostly collapsed over the three places of articulation. This study therefore provides acoustic measurements of Korean lenis, fortis, and aspirated stops at all three places of articulation separately. Clear differences are found among the acoustic characteristics of the stops at the different places of articulation.
  • Brookshire, G., Casasanto, D., & Ivry, R. (2010). Modulation of motor-meaning congruity effects for valenced words. In S. Ohlsson, & R. Catrambone (Eds.), Proceedings of the 32nd Annual Meeting of the Cognitive Science Society (CogSci 2010) (pp. 1940-1945). Austin, TX: Cognitive Science Society.

    Abstract

    We investigated the extent to which emotionally valenced words automatically cue spatio-motor representations. Participants made speeded button presses, moving their hand upward or downward while viewing words with positive or negative valence. Only the color of the words was relevant to the response; on target trials, there was no requirement to read the words or process their meaning. In Experiment 1, upward responses were faster for positive words, and downward for negative words. This effect was extinguished, however, when words were repeated. In Experiment 2, participants performed the same primary task with the addition of distractor trials. Distractors either oriented attention toward the words’ meaning or toward their color. Congruity effects were increased with orientation to meaning, but eliminated with orientation to color. When people read words with emotional valence, vertical spatio-motor representations are activated highly automatically, but this automaticity is modulated by repetition and by attentional orientation to the words’ form or meaning.
  • Brouwer, H., Fitz, H., & Hoeks, J. C. (2010). Modeling the noun phrase versus sentence coordination ambiguity in Dutch: Evidence from Surprisal Theory. In Proceedings of the 2010 Workshop on Cognitive Modeling and Computational Linguistics, ACL 2010 (pp. 72-80). Association for Computational Linguistics.

    Abstract

    This paper investigates whether surprisal theory can account for differential processing difficulty in the NP-/S-coordination ambiguity in Dutch. Surprisal is estimated using a Probabilistic Context-Free Grammar (PCFG), which is induced from an automatically annotated corpus. We find that our lexicalized surprisal model can account for the reading time data from a classic experiment on this ambiguity by Frazier (1987). We argue that syntactic and lexical probabilities, as specified in a PCFG, are sufficient to account for what is commonly referred to as an NP-coordination preference.
  • Brown, P. (2004). Position and motion in Tzeltal frog stories: The acquisition of narrative style. In S. Strömqvist, & L. Verhoeven (Eds.), Relating events in narrative: Typological and contextual perspectives (pp. 37-57). Mahwah: Erlbaum.

    Abstract

    How are events framed in narrative? Speakers of English (a 'satellite-framed' language), when 'reading' Mercer Mayer's wordless picture book 'Frog, Where Are You?', find the story self-evident: a boy has a dog and a pet frog; the frog escapes and runs away; the boy and dog look for it across hill and dale, through woods and over a cliff, until they find it and return home with a baby frog child of the original pet frog. In Tzeltal, as spoken in a Mayan community in southern Mexico, the story is somewhat different, because the language structures event descriptions differently. Tzeltal is in part a 'verb-framed' language with a set of Path-encoding motion verbs, so that the bare bones of the Frog story can consist of verbs translating as 'go'/'pass by'/'ascend'/'descend'/'arrive'/'return'. But Tzeltal also has satellite-framing adverbials, grammaticized from the same set of motion verbs, which encode the direction of motion or the orientation of static arrays. Furthermore, motion is not generally encoded barebones, but vivid pictorial detail is provided by positional verbs which can describe the position of the Figure as an outcome of a motion event; motion and stasis are thereby combined in a single event description. (For example: jipot jawal "he has been thrown (by the deer) lying_face_upwards_spread-eagled".) This paper compares the use of these three linguistic resources in frog narratives from 14 Tzeltal adults and 21 children, looks at their development in the narratives of children between the ages of 4-12, and considers the results in relation to those from Berman and Slobin's (1996) comparative study of adult and child Frog stories.
  • Brown, P. (2010). Cognitive anthropology. In L. Cummings (Ed.), The pragmatics encyclopedia (pp. 43-46). London: Routledge.

    Abstract

    This is an encyclopedia entry surveying anthropological approaches to cognition and culture.
  • Brown, P. (2002). Everyone has to lie in Tzeltal. In S. Blum-Kulka, & C. E. Snow (Eds.), Talking to adults: The contribution of multiparty discourse to language acquisition (pp. 241-275). Mahwah, NJ: Erlbaum.

    Abstract

    In a famous paper Harvey Sacks (1974) argued that the sequential properties of greeting conventions, as well as those governing the flow of information, mean that 'everyone has to lie'. In this paper I show this dictum to be equally true in the Tzeltal Mayan community of Tenejapa, in southern Mexico, but for somewhat different reasons. The phenomenon of interest is the practice of routine fearsome threats to small children. Based on a longitudinal corpus of videotaped and tape-recorded naturally-occurring interaction between caregivers and children in five Tzeltal families, the study examines sequences of Tzeltal caregivers' speech aimed at controlling the children's behaviour and analyzes the children's developing pragmatic skills in handling such controlling utterances, from prelinguistic infants to age five and over. Infants in this society are considered to be vulnerable, easily scared or shocked into losing their 'souls', and therefore at all costs to be protected and hidden from outsiders and other dangers. Nonetheless, the chief form of control (aside from physically removing a child from danger) is to threaten, saying things like "Don't do that, or I'll take you to the clinic for an injection." These overt scare-threats - rarely actually realized - lead Tzeltal children by the age of 2;6 to 3;0 to the understanding that speech does not necessarily convey true propositions, and to a sensitivity to the underlying motivations for utterances distinct from their literal meaning. By age 4;0 children perform the same role to their younger siblings; they also begin to use more subtle non-true (e.g. ironic) utterances. The caretaker practice described here is related to adult norms of social lying, to the sociocultural context of constraints on information flow, social control through gossip, and the different notion of 'truth' that arises in the context of non-verifiability characteristic of a small-scale nonliterate society.
  • Brown, P., & Levinson, S. C. (2004). Frames of spatial reference and their acquisition in Tenejapan Tzeltal. In A. Assmann, U. Gaier, & G. Trommsdorff (Eds.), Zwischen Literatur und Anthropologie: Diskurse, Medien, Performanzen (pp. 285-314). Tübingen: Gunter Narr.

    Abstract

    This is a reprint of the Brown and Levinson 2000 article.
  • Brown, P. (2002). Language as a model for culture: Lessons from the cognitive sciences. In R. G. Fox, & B. J. King (Eds.), Anthropology beyond culture (pp. 169-192). Oxford: Berg.

    Abstract

    This paper surveys the concept of culture as used in recent work in cognitive science, assessing the very different (and sometimes minimal) role 'culture' plays in different branches and schools of linguistics: generative approaches, descriptive/comparative linguistics, typology, cognitive linguistics, semantics, pragmatics, psycholinguistics, linguistic and cognitive anthropology. The paper then describes research on one specific topic, spatial language and conceptualization, describes a methodology for studying it cross-linguistically and cross-culturally. Finally, it considers the implications of results in this area for how we can fruitfully conceptualize 'culture', arguing for an approach which shifts back and forth between individual mind and collective representations, between universals and particulars, and ties 'culture' to our biological roots.
  • Brown, P., Levinson, S. C., & Senft, G. (2004). Initial references to persons and places. In A. Majid (Ed.), Field Manual Volume 9 (pp. 37-44). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492929.

    Abstract

    This task has two parts: (i) video-taped elicitation of the range of possibilities for referring to persons and places, and (ii) observations of (first) references to persons and places in video-taped natural interaction. The goal of this task is to establish the repertoires of referential terms (and other practices) used for referring to persons and to places in particular languages and cultures, and provide examples of situated use of these kinds of referential practices in natural conversation. This data will form the basis for cross-language comparison, and for formulating hypotheses about general principles underlying the deployment of such referential terms in natural language usage.
  • Brown, P., Gaskins, S., Lieven, E., Striano, T., & Liszkowski, U. (2004). Multimodal multiperson interaction with infants aged 9 to 15 months. In A. Majid (Ed.), Field Manual Volume 9 (pp. 56-63). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492925.

    Abstract

    Interaction, for all that it has an ethological base, is culturally constituted, and how new social members are enculturated into the interactional practices of the society is of critical interest to our understanding of interaction – how much is learned, how variable is it across cultures – as well as to our understanding of the role of culture in children’s social-cognitive development. The goal of this task is to document the nature of caregiver infant interaction in different cultures, especially during the critical age of 9-15 months when children come to have an understanding of others’ intentions. This is of interest to all students of interaction; it does not require specialist knowledge of children.
  • Brown, C. M., & Hagoort, P. (1999). The cognitive neuroscience of language: Challenges and future directions. In C. M. Brown, & P. Hagoort (Eds.), The neurocognition of language (pp. 3-14). Oxford: Oxford University Press.
  • Brown, P. (1991). Sind Frauen höflicher? Befunde aus einer Maya-Gemeinde. In S. Günther, & H. Kotthoff (Eds.), Von fremden Stimmen: Weibliches und männliches Sprechen im Kulturvergleich. Frankfurt am Main: Suhrkamp.

    Abstract

    This is a German translation of Brown 1980, How and why are women more polite: Some evidence from a Mayan community.
  • Brown, P. (2017). Politeness and impoliteness. In Y. Huang (Ed.), Oxford handbook of pragmatics (pp. 383-399). Oxford: Oxford University Press. doi:10.1093/oxfordhb/9780199697960.013.16.

    Abstract

    This article selectively reviews the literature on politeness across different disciplines—linguistics, anthropology, communications, conversation analysis, social psychology, and sociology—and critically assesses how both theoretical approaches to politeness and research on linguistic politeness phenomena have evolved over the past forty years. Major new developments include a shift from predominantly linguistic approaches to those examining politeness and impoliteness as processes that are embedded and negotiated in interactional and cultural contexts, as well as a greater focus on how both politeness and interactional confrontation and conflict fit into our developing understanding of human cooperation and universal aspects of human social interaction.

  • Brown, P., & Levinson, S. C. (1999). Politeness: Some universals in language usage [Reprint]. In A. Jaworski, & N. Coupland (Eds.), The discourse reader (pp. 321-335). London: Routledge.

    Abstract

    This article is a reprint of chapter 1, the introduction to Brown and Levinson, 1987, Politeness: Some universals in language usage (Cambridge University Press).
  • Brown, P. (2010). Todo el mundo tiene que mentir en Tzeltal: Amenazas y mentiras en la socialización de los niños tzeltales de Tenejapa, Chiapas. In L. de León Pasquel (Ed.), Socialización, lenguajes y culturas infantiles: Estudios interdisciplinarios (pp. 231-271). Mexico: Centro de Investigaciones y Estudios Superiores en Antropología Social (CIESAS).

    Abstract

    This is a Spanish translation of Brown 2002, 'Everyone has to lie in Tzeltal'. Translated by B. E. Alvaraz Klein.
  • Brugman, H., Levinson, S. C., Skiba, R., & Wittenburg, P. (2002). The DOBES archive: It's purpose and implementation. In P. Austin, H. Dry, & P. Wittenburg (Eds.), Proceedings of the international LREC workshop on resources and tools in field linguistics (pp. 11-11). Paris: European Language Resources Association.
  • Brugman, H., Crasborn, O., & Russel, A. (2004). Collaborative annotation of sign language data with Peer-to-Peer technology. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Language Evaluation (LREC 2004) (pp. 213-216). Paris: European Language Resources Association.
  • Brugman, H., & Russel, A. (2004). Annotating Multi-media/Multi-modal resources with ELAN. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Language Evaluation (LREC 2004) (pp. 2065-2068). Paris: European Language Resources Association.
  • Brugman, H., Spenke, H., Kramer, M., & Klassmann, A. (2002). Multimedia annotation with multilingual input methods and search support.
  • Brugman, H., Wittenburg, P., Levinson, S. C., & Kita, S. (2002). Multimodal annotations in gesture and sign language studies. In M. Rodriguez González, & C. Paz Suárez Araujo (Eds.), Third international conference on language resources and evaluation (pp. 176-182). Paris: European Language Resources Association.

    Abstract

    For multimodal annotations, an exhaustive encoding system for gestures was developed to facilitate research. The structural requirements of multimodal annotations were analyzed to develop an Abstract Corpus Model, which is the basis for a powerful annotation and exploitation tool for multimedia recordings and for the definition of the XML-based EUDICO Annotation Format. Finally, a metadata-based data management environment has been set up to facilitate resource discovery and especially corpus management. By means of an appropriate digitization policy and their online availability, researchers have been able to build up a large corpus covering gesture and sign language data.
  • Burchfield, L. A., Luk, S.-H. K., Antoniou, M., & Cutler, A. (2017). Lexically guided perceptual learning in Mandarin Chinese. In Proceedings of Interspeech 2017 (pp. 576-580). doi:10.21437/Interspeech.2017-618.

    Abstract

    Lexically guided perceptual learning refers to the use of lexical knowledge to retune speech categories and thereby adapt to a novel talker’s pronunciation. This adaptation has been extensively documented, but primarily for segmental-based learning in English and Dutch. In languages with lexical tone, such as Mandarin Chinese, tonal categories can also be retuned in this way, but segmental category retuning had not been studied. We report two experiments in which Mandarin Chinese listeners were exposed to an ambiguous mixture of [f] and [s] in lexical contexts favoring an interpretation as either [f] or [s]. Listeners were subsequently more likely to identify sounds along a continuum between [f] and [s], and to interpret minimal word pairs, in a manner consistent with this exposure. Thus lexically guided perceptual learning of segmental categories had indeed taken place, consistent with suggestions that such learning may be a universally available adaptation process.
  • Burenhult, N. (2004). Spatial deixis in Jahai. In S. Burusphat (Ed.), Papers from the 11th Annual Meeting of the Southeast Asian Linguistics Society 2001 (pp. 87-100). Arizona State University: Program for Southeast Asian Studies.
  • Burenhult, N., & Levinson, S. C. (2010). Semplates: A guide to identification and elicitation. In E. Norcliffe, & N. J. Enfield (Eds.), Field manual volume 13 (pp. 17-23). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Cablitz, G. (2002). The acquisition of an absolute system: learning to talk about space in Marquesan (Oceanic, French Polynesia). In E. V. Clark (Ed.), Space in language: Location, motion, path, and manner (pp. 40-49). Stanford: Center for the Study of Language & Information (electronic proceedings).
  • Carota, F., Desmurget, M., & Sirigu, A. (2010). Forward Modeling Mediates Motor Awareness. In W. Sinnott-Armstrong, & L. Nadel (Eds.), Conscious Will and Responsibility - A Tribute to Benjamin Libet (pp. 97-108). Oxford: Oxford University Press.

    Abstract

    This chapter focuses on the issue of motor awareness. It addresses three main questions: What exactly are we aware of when making a movement? What is the contribution of afferent and efferent signals to motor awareness? What are the neural bases of motor awareness? It reviews evidence that the motor system is mainly aware of its intention. As long as the goal is achieved, nothing reaches awareness about the kinematic details of the ongoing movements, even when substantial corrections have to be implemented to attain the intended state. The chapter also shows that motor awareness relies mainly on the central predictive computations carried out within the posterior parietal cortex. The outcome of these computations is contrasted with the peripheral reafferent input to build a veridical motor awareness. Some evidence exists that this process involves the premotor areas.
  • Casasanto, D., & Bottini, R. (2010). Can mirror-reading reverse the flow of time? In C. Hölscher, T. F. Shipley, M. Olivetti Belardinelli, J. A. Bateman, & N. S. Newcombe (Eds.), Spatial Cognition VII. International Conference, Spatial Cognition 2010, Mt. Hood/Portland, OR, USA, August 15-19, 2010. Proceedings (pp. 335-345). Berlin Heidelberg: Springer.

    Abstract

    Across cultures, people conceptualize time as if it flows along a horizontal timeline, but the direction of this implicit timeline is culture-specific: in cultures with left-to-right orthography (e.g., English-speaking cultures) time appears to flow rightward, but in cultures with right-to-left orthography (e.g., Arabic-speaking cultures) time flows leftward. Can orthography influence implicit time representations independent of other cultural and linguistic factors? Native Dutch speakers performed a space-time congruity task with the instructions and stimuli written in either standard Dutch or mirror-reversed Dutch. Participants in the Standard Dutch condition were fastest to judge past-oriented phrases by pressing the left button and future-oriented phrases by pressing the right button. Participants in the Mirror-Reversed Dutch condition showed the opposite pattern of reaction times, consistent with results found previously in native Arabic and Hebrew speakers. These results demonstrate a causal role for writing direction in shaping implicit mental representations of time.
  • Casasanto, D., & Bottini, R. (2010). Can mirror-reading reverse the flow of time? In S. Ohlsson, & R. Catrambone (Eds.), Proceedings of the 32nd Annual Meeting of the Cognitive Science Society (CogSci 2010) (pp. 1342-1347). Austin, TX: Cognitive Science Society.

    Abstract

    Across cultures, people conceptualize time as if it flows along a horizontal timeline, but the direction of this implicit timeline is culture-specific: in cultures with left-to-right orthography (e.g., English-speaking cultures) time appears to flow rightward, but in cultures with right-to-left orthography (e.g., Arabic-speaking cultures) time flows leftward. Can orthography influence implicit time representations independent of other cultural and linguistic factors? Native Dutch speakers performed a space-time congruity task with the instructions and stimuli written in either standard Dutch or mirror-reversed Dutch. Participants in the Standard Dutch condition were fastest to judge past-oriented phrases by pressing the left button and future-oriented phrases by pressing the right button. Participants in the Mirror-Reversed Dutch condition showed the opposite pattern of reaction times, consistent with results found previously in native Arabic and Hebrew speakers. These results demonstrate a causal role for writing direction in shaping implicit mental representations of time.
  • Casasanto, D. (2010). En qué casos una metáfora lingüística constituye una metáfora conceptual? In D. Pérez, S. Español, L. Skidelsky, & R. Minervino (Eds.), Conceptos: Debates contemporáneos en filosofía y psicología. Buenos Aires: Catálogos.
  • Casasanto, D., & Bottini, R. (2010). Mirror-reading can reverse the flow of time [Abstract]. In Proceedings of the 16th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2010] (pp. 57). York: University of York.
  • Casasanto, D., & Jasmin, K. (2010). Good and bad in the hands of politicians: Spontaneous gestures during positive and negative speech [Abstract]. In Proceedings of the 16th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2010] (pp. 137). York: University of York.
  • Casasanto, D. (2010). Wie der Körper Sprache und Vorstellungsvermögen im Gehirn formt. In Max-Planck-Gesellschaft. Jahrbuch 2010. München: Max-Planck-Gesellschaft. Retrieved from http://www.mpg.de/jahrbuch/forschungsbericht?obj=454607.

    Abstract

    If our mental capacities depend in part on the structure of our bodies, then people with different body types should think differently. To test this, scientists at the MPI for Psycholinguistics investigated neural correlates of language comprehension and of the motor imagery evoked by action verbs. These verbs denote actions that people mostly perform with their dominant hand (e.g., writing, throwing). Comprehension of these verbs, as well as imagery of the corresponding motor actions, was lateralized differently in the brains of right- and left-handers. Do people with different body types form different concepts and word meanings? According to the body-specificity hypothesis, they should [1]. Because mental capacities depend on the body, people with different body types should also think differently. This assumption challenges the classical view that concepts are universal and that word meanings are identical for all speakers of a language. Research in the "Sprache in Aktion" (Language in Action) project at the MPI for Psycholinguistics shows that the way speakers use their bodies influences how they imagine actions in the brain and how they process language about such actions in the brain.
  • Casillas, M., Bergelson, E., Warlaumont, A. S., Cristia, A., Soderstrom, M., VanDam, M., & Sloetjes, H. (2017). A New Workflow for Semi-automatized Annotations: Tests with Long-Form Naturalistic Recordings of Children's Language Environments. In Proceedings of Interspeech 2017 (pp. 2098-2102). doi:10.21437/Interspeech.2017-1418.

    Abstract

    Interoperable annotation formats are fundamental to the utility, expansion, and sustainability of collective data repositories. In language development research, shared annotation schemes have been critical to facilitating the transition from raw acoustic data to searchable, structured corpora. Current schemes typically require comprehensive and manual annotation of utterance boundaries and orthographic speech content, with an additional, optional range of tags of interest. These schemes have been enormously successful for datasets on the scale of dozens of recording hours but are untenable for long-format recording corpora, which routinely contain hundreds to thousands of audio hours. Long-format corpora would benefit greatly from (semi-)automated analyses, both on the earliest steps of annotation—voice activity detection, utterance segmentation, and speaker diarization—as well as later steps—e.g., classification-based codes such as child-vs-adult-directed speech, and speech recognition to produce phonetic/orthographic representations. We present an annotation workflow specifically designed for long-format corpora which can be tailored by individual researchers and which interfaces with the current dominant scheme for short-format recordings. The workflow allows semi-automated annotation and analyses at higher linguistic levels. We give one example of how the workflow has been successfully implemented in a large cross-database project.
  • Casillas, M., Amatuni, A., Seidl, A., Soderstrom, M., Warlaumont, A., & Bergelson, E. (2017). What do Babies hear? Analyses of Child- and Adult-Directed Speech. In Proceedings of Interspeech 2017 (pp. 2093-2097). doi:10.21437/Interspeech.2017-1409.

    Abstract

    Child-directed speech is argued to facilitate language development, and is found cross-linguistically and cross-culturally to varying degrees. However, previous research has generally focused on short samples of child-caregiver interaction, often in the lab or with experimenters present. We test the generalizability of this phenomenon with an initial descriptive analysis of the speech heard by young children in a large, unique collection of naturalistic, daylong home recordings. Trained annotators coded automatically-detected adult speech 'utterances' from 61 homes across 4 North American cities, gathered from children (age 2-24 months) wearing audio recorders during a typical day. Coders marked the speaker gender (male/female) and intended addressee (child/adult), yielding 10,886 addressee and gender tags from 2,523 minutes of audio (cf. HB-CHAAC Interspeech ComParE challenge; Schuller et al., in press). Automated speaker-diarization (LENA) incorrectly gender-tagged 30% of male adult utterances, compared to manually-coded consensus. Furthermore, we find effects of SES and gender on child-directed and overall speech, increasing child-directed speech with child age, and interactions of speaker gender, child gender, and child age: female caretakers increased their child-directed speech more with age than male caretakers did, but only for male infants. Implications for language acquisition and existing classification algorithms are discussed.
  • Chen, A. (2010). Ab wann nutzen Kinder die Intonation zum Ausdruck neuer Information? In Max-Planck-Gesellschaft. Jahrbuch 2010. München: Max-Planck-Gesellschaft. Retrieved from http://www.mpg.de/jahrbuch/forschungsbericht?obj=447900.

    Abstract

    A study at the Max Planck Institute in Nijmegen investigated how and when children master the rules of intonation in Dutch. The results showed that they pass through several developmental stages before, at the age of seven or eight, they use intonation like adults, who mark focus (i.e., new information) with a falling accent.
  • Chen, A., Gussenhoven, C., & Rietveld, T. (2002). Language-specific uses of the effort code. In B. Bel, & I. Marlien (Eds.), Proceedings of the 1st Conference on Speech Prosody (pp. 215-218). Aix-en-Provence: Université de Provence.

    Abstract

    Two groups of listeners with Dutch and British English language backgrounds judged Dutch and British English utterances, respectively, which varied in the intonation contour on the scales EMPHATIC vs. NOT EMPHATIC and SURPRISED vs. NOT SURPRISED, two meanings derived from the Effort Code. The stimuli, which differed in sentence mode but were otherwise lexically equivalent, were varied in peak height, peak alignment, end pitch, and overall register. In both languages, there are positive correlations between peak height and degree of emphasis, between peak height and degree of surprise, between peak alignment and degree of surprise, and between pitch register and degree of surprise. However, in all these cases, Dutch stimuli lead to larger perceived meaning differences than the British English stimuli. This difference in the extent to which increased pitch height triggers increases in perceived emphasis and surprise is argued to be due to the difference in the standard pitch ranges between Dutch and British English. In addition, we found a positive correlation between pitch register and the degree of emphasis in Dutch, but a negative correlation in British English. This is an unexpected difference, which illustrates a case of ambiguity in the meaning of pitch.
  • Chen, A., & Destruel, E. (2010). Intonational encoding of focus in Toulousian French. Speech Prosody 2010, 100233, 1-4. Retrieved from http://speechprosody2010.illinois.edu/papers/100233.pdf.

    Abstract

    Previous studies on focus marking in French have shown that post-focus deaccentuation, phrasing and phonetic cues like peak height and duration are employed to encode narrow focus but tonal patterns appear to be irrelevant. These studies either examined Standard French or did not control for the regional varieties spoken by the speakers. The present study investigated the use of all these cues in expressing narrow focus in naturally spoken declarative sentences in Toulousian French. It was found that, similar to Standard French, Toulousian French uses post-focus deaccentuation and phrasing to mark focus. Different from Standard French, Toulousian French does not use the phonetic cues but uses tonal patterns to encode focus. Tonal patterns ending with H% occur more frequently in the VPs when the subject is in focus, but tonal patterns ending with L% occur more frequently in the VPs when the object is in focus. Our study thus provides a first insight into the similarities and differences in focus marking between Toulousian French and Standard French.
  • Cho, T., & McQueen, J. M. (2004). Phonotactics vs. phonetic cues in native and non-native listening: Dutch and Korean listeners' perception of Dutch and English. In S. Kin, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 1301-1304). Seoul: Sunjin Printing Co.

    Abstract

    We investigated how listeners of two unrelated languages, Dutch and Korean, process phonotactically legitimate and illegitimate sounds spoken in Dutch and American English. To Dutch listeners, unreleased word-final stops are phonotactically illegal because word-final stops in Dutch are generally released in isolation, but to Korean listeners, released final stops are illegal because word-final stops are never released in Korean. Two phoneme monitoring experiments showed a phonotactic effect: Dutch listeners detected released stops more rapidly than unreleased stops whereas the reverse was true for Korean listeners. Korean listeners with English stimuli detected released stops more accurately than unreleased stops, however, suggesting that acoustic-phonetic cues associated with released stops improve detection accuracy. We propose that in non-native speech perception, phonotactic legitimacy in the native language speeds up phoneme recognition, the richness of acoustic-phonetic cues improves listening accuracy, and familiarity with the non-native language modulates the relative influence of these two factors.
  • Cho, T., & Johnson, E. K. (2004). Acoustic correlates of phrase-internal lexical boundaries in Dutch. In S. Kin, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 1297-1300). Seoul: Sunjin Printing Co.

    Abstract

    The aim of this study was to determine if Dutch speakers reliably signal phrase-internal lexical boundaries, and if so, how. Six speakers recorded 4 pairs of phonemically identical strong-weak-strong (SWS) strings with matching syllable boundaries but mismatching intended word boundaries (e.g. reis # pastei versus reispas # tij, or more broadly C1V1(C)#C2V2(C)C3V3(C) vs. C1V1(C)C2V2(C)#C3V3(C)). An Analysis of Variance revealed 3 acoustic parameters that were significantly greater in S#WS items (C2 DURATION, RIME1 DURATION, C3 BURST AMPLITUDE) and 5 parameters that were significantly greater in the SW#S items (C2 VOT, C3 DURATION, RIME2 DURATION, RIME3 DURATION, and V2 AMPLITUDE). Additionally, center of gravity measurements suggested that the [s] to [t] coarticulation was greater in reis # pa[st]ei versus reispa[s] # [t]ij. Finally, a Logistic Regression Analysis revealed that the 3 parameters (RIME1 DURATION, RIME2 DURATION, and C3 DURATION) contributed most reliably to a S#WS versus SW#S classification.
  • Clahsen, H., Prüfert, P., Eisenbeiss, S., & Cholin, J. (2002). Strong stems in the German mental lexicon: Evidence from child language acquisition and adult processing. In I. Kaufmann, & B. Stiebels (Eds.), More than words. Festschrift for Dieter Wunderlich (pp. 91-112). Berlin: Akadamie Verlag.
  • Clark, E. V., & Bowerman, M. (1986). On the acquisition of final voiced stops. In J. A. Fishman (Ed.), The Fergusonian impact: in honor of Charles A. Ferguson on the occasion of his 65th birthday. Volume 1: From phonology to society (pp. 51-68). Berlin: Mouton de Gruyter.
  • Collins, J. (2017). Real and spurious correlations involving tonal languages. In N. J. Enfield (Ed.), Dependencies in language: On the causal ontology of linguistics systems (pp. 129-139). Berlin: Language Science Press.
  • Cooper, N., & Cutler, A. (2004). Perception of non-native phonemes in noise. In S. Kin, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 469-472). Seoul: Sunjin Printing Co.

    Abstract

    We report an investigation of the perception of American English phonemes by Dutch listeners proficient in English. Listeners identified either the consonant or the vowel in most possible English CV and VC syllables. The syllables were embedded in multispeaker babble at three signal-to-noise ratios (16 dB, 8 dB, and 0 dB). Effects of signal-to-noise ratio on vowel and consonant identification are discussed as a function of syllable position and of relationship to the native phoneme inventory. Comparison of the results with previously reported data from native listeners reveals that noise affected the responding of native and non-native listeners similarly.
  • Cutler, A., Norris, D., & Sebastián-Gallés, N. (2004). Phonemic repertoire and similarity within the vocabulary. In S. Kin, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 65-68). Seoul: Sunjin Printing Co.

    Abstract

    Language-specific differences in the size and distribution of the phonemic repertoire can have implications for the task facing listeners in recognising spoken words. A language with more phonemes will allow shorter words and reduced embedding of short words within longer ones, decreasing the potential for spurious lexical competitors to be activated by speech signals. We demonstrate that this is the case via comparative analyses of the vocabularies of English and Spanish. A language which uses suprasegmental as well as segmental contrasts, however, can substantially reduce the extent of spurious embedding.
  • Cutler, A. (2002). Phonological processing: Comments on Pierrehumbert, Moates et al., Kubozono, Peperkamp & Dupoux, and Bradlow. In C. Gussenhoven, & N. Warner (Eds.), Papers in Laboratory Phonology VII (pp. 275-296). Berlin: Mouton de Gruyter.
  • Cutler, A. (2004). Segmentation of spoken language by normal adult listeners. In R. Kent (Ed.), MIT encyclopedia of communication sciences and disorders (pp. 392-395). Cambridge, MA: MIT Press.
  • Cutler, A., & Norris, D. (2002). The role of strong syllables in segmentation for lexical access. In G. T. Altmann (Ed.), Psycholinguistics: Critical concepts in psychology (pp. 157-177). London: Routledge.
