Publications

  • Agirrezabal, M., Paggio, P., Navarretta, C., & Jongejan, B. (2023). Multimodal detection and classification of head movements in face-to-face conversations: Exploring models, features and their interaction. In W. Pouw, J. Trujillo, H. R. Bosker, L. Drijvers, M. Hoetjes, J. Holler, S. Kadava, L. Van Maastricht, E. Mamus, & A. Ozyurek (Eds.), Gesture and Speech in Interaction (GeSpIn) Conference. doi:10.17617/2.3527200.

    Abstract

    In this work we perform multimodal detection and classification of head movements from face-to-face video conversation data. We have experimented with different models and feature sets and provided some insight into the effect of independent features, but also into how their interaction can enhance a head movement classifier. Used features include nose, neck and mid-hip position coordinates and their derivatives, together with acoustic features, namely the intensity and pitch of the speaker in focus. Results show that when input features are sufficiently processed by interacting with each other, a linear classifier can reach a similar performance to a more complex non-linear neural model with several hidden layers. Our best models achieve state-of-the-art performance in the detection task, measured by macro-averaged F1 score.
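
    To make the paper's central idea concrete (explicit feature interactions can let a linear classifier rival a non-linear network), here is a minimal, illustrative Python/scikit-learn sketch. The data are random placeholders, not the authors' features or corpus, and the whole pipeline is an assumption for illustration only:

      # Illustrative only: a label that depends on a feature interaction is hard
      # for a plain linear model but easy once pairwise interactions are added.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score
      from sklearn.neural_network import MLPClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import PolynomialFeatures, StandardScaler

      rng = np.random.default_rng(0)
      # Placeholder stand-ins for kinematic and acoustic features
      # (e.g., nose/neck/mid-hip coordinates, derivatives, intensity, pitch).
      X = rng.normal(size=(1000, 10))
      y = (X[:, 0] * X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)

      models = {
          "linear": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
          "linear + interactions": make_pipeline(
              StandardScaler(),
              PolynomialFeatures(degree=2, interaction_only=True, include_bias=False),
              LogisticRegression(max_iter=1000),
          ),
          "non-linear MLP": make_pipeline(
              StandardScaler(),
              MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0),
          ),
      }
      for name, model in models.items():
          f1 = cross_val_score(model, X, y, cv=5, scoring="f1_macro").mean()
          print(f"{name}: macro-averaged F1 = {f1:.3f}")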
  • Alday, P. M. (2016). Towards a rigorous motivation for Zipf's law. In S. G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Feher, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/178.html.

    Abstract

    Language evolution can be viewed from two viewpoints: the development of a communicative system and the biological adaptations necessary for producing and perceiving said system. The communicative-system vantage point has enjoyed a wealth of mathematical models based on simple distributional properties of language, often formulated as empirical laws. However, beyond vague psychological notions of “least effort”, no principled explanation has been proposed for the existence and success of such laws. Meanwhile, psychological and neurobiological models have focused largely on the computational constraints presented by incremental, real-time processing. In the following, we show that information-theoretic entropy underpins successful models of both types and provides a more principled motivation for Zipf’s Law.
  • Alhama, R. G., & Zuidema, W. (2016). Generalization in Artificial Language Learning: Modelling the Propensity to Generalize. In Proceedings of the 7th Workshop on Cognitive Aspects of Computational Language Learning (pp. 64-72). Association for Computational Linguistics. doi:10.18653/v1/W16-1909.

    Abstract

    Experiments in Artificial Language Learning have revealed much about the cognitive mechanisms underlying sequence and language learning in human adults, in infants and in non-human animals. This paper focuses on their ability to generalize to novel grammatical instances (i.e., instances consistent with a familiarization pattern). Notably, the propensity to generalize appears to be negatively correlated with the amount of exposure to the artificial language, a fact that has been claimed to be contrary to the predictions of statistical models (Peña et al. (2002); Endress and Bonatti (2007)). In this paper, we propose to model generalization as a three-step process, and we demonstrate that the use of statistical models for the first two steps, contrary to widespread intuitions in the ALL-field, can explain the observed decrease of the propensity to generalize with exposure time.
  • Alhama, R. G., & Zuidema, W. (2016). Pre-Wiring and Pre-Training: What does a neural network need to learn truly general identity rules? In T. R. Besold, A. Bordes, & A. D'Avila Garcez (Eds.), CoCo 2016 Cognitive Computation: Proceedings of the Workshop on Cognitive Computation: Integrating neural and symbolic approaches 2016. CEUR Workshop Proceedings.

    Abstract

    In an influential paper, Marcus et al. [1999] claimed that connectionist models cannot account for human success at learning tasks that involved generalization of abstract knowledge such as grammatical rules. This claim triggered a heated debate, centered mostly around variants of the Simple Recurrent Network model [Elman, 1990]. In our work, we revisit this unresolved debate and analyze the underlying issues from a different perspective. We argue that, in order to simulate human-like learning of grammatical rules, a neural network model should not be used as a tabula rasa, but rather, the initial wiring of the neural connections and the experience acquired prior to the actual task should be incorporated into the model. We present two methods that aim to provide such an initial state: a manipulation of the initial connections of the network in a cognitively plausible manner (concretely, by implementing a “delay-line” memory), and a pre-training algorithm that incrementally challenges the network with novel stimuli. We implement these techniques in an Echo State Network [Jaeger, 2001], and we show that only when both techniques are combined is the ESN able to learn truly general identity rules.
  • Allen, S. E. M. (1998). A discourse-pragmatic explanation for the subject-object asymmetry in early null arguments. In A. Sorace, C. Heycock, & R. Shillcock (Eds.), Proceedings of the GALA '97 Conference on Language Acquisition (pp. 10-15). Edinburgh, UK: Edinburgh University Press.

    Abstract

    The present paper assesses discourse-pragmatic factors as a potential explanation for the subject-object asymmetry in early child language. It identifies a set of factors which characterize typical situations of informativeness (Greenfield & Smith, 1976), and uses these factors to identify informative arguments in data from four children aged 2;0 through 3;6 learning Inuktitut as a first language. In addition, it assesses the extent of the links between features of informativeness on one hand and lexical vs. null and subject vs. object arguments on the other. Results suggest that a pragmatics account of the subject-object asymmetry can be upheld to a greater extent than previous research indicates, and that several of the factors characterizing informativeness are good indicators of those arguments which tend to be omitted in early child language.
  • Azar, Z., Backus, A., & Ozyurek, A. (2016). Pragmatic relativity: Gender and context affect the use of personal pronouns in discourse differentially across languages. In A. Papafragou, D. Grodner, D. Mirman, & J. Trueswell (Eds.), Proceedings of the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016) (pp. 1295-1300). Austin, TX: Cognitive Science Society.

    Abstract

    Speakers use differential referring expressions (REs) in pragmatically appropriate ways to produce coherent narratives. Languages, however, differ in a) whether REs as arguments can be dropped and b) whether personal pronouns encode gender. We examine two languages that differ from each other in these two aspects and ask whether the co-reference context and the gender encoding options affect the use of REs differentially. We elicited narratives from Dutch and Turkish speakers about two types of three-person events, one including people of the same gender and the other of mixed gender. Speakers re-introduced referents into the discourse with fuller forms (NPs) and maintained them with reduced forms (overt or null pronoun). Turkish speakers used pronouns mainly to mark emphasis, and only Dutch speakers used pronouns differentially across the two types of videos. We argue that the linguistic possibilities available in languages tune speakers into taking different principles into account to produce pragmatically coherent narratives.
  • Bauer, B. L. M. (2000). From Latin to French: The linear development of word order. In B. Bichakjian, T. Chernigovskaya, A. Kendon, & A. Müller (Eds.), Becoming Loquens: More studies in language origins (pp. 239-257). Frankfurt am Main: Lang.
  • Bauer, B. L. M. (2016). The development of the comparative in Latin texts. In J. N. Adams, & N. Vincent (Eds.), Early and late Latin. Continuity or change? (pp. 313-339). Cambridge: Cambridge University Press.
  • Bavin, E. L., & Kidd, E. (2000). Learning new verbs: Beyond the input. In C. Davis, T. J. Van Gelder, & R. Wales (Eds.), Cognitive Science in Australia, 2000: Proceedings of the Fifth Biennial Conference of the Australasian Cognitive Science Society.
  • Bergmann, C., Cristia, A., & Dupoux, E. (2016). Discriminability of sound contrasts in the face of speaker variation quantified. In Proceedings of the 38th Annual Conference of the Cognitive Science Society. (pp. 1331-1336). Austin, TX: Cognitive Science Society.

    Abstract

    How does a naive language learner deal with speaker variation irrelevant to distinguishing word meanings? Experimental data is contradictory, and incompatible models have been proposed. Here, we examine basic assumptions regarding the acoustic signal the learner deals with: Is speaker variability a hurdle in discriminating sounds or can it easily be ignored? To this end, we summarize existing infant data. We then present machine-based discriminability scores of sound pairs obtained without any language knowledge. Our results show that speaker variability decreases sound contrast discriminability, and that some contrasts are affected more than others. However, chance performance is rare; most contrasts remain discriminable in the face of speaker variation. We take our results to mean that speaker variation is not a uniform hurdle to discriminating sound contrasts, and careful examination is necessary when planning and interpreting studies testing whether and to what extent infants (and adults) are sensitive to speaker differences.

    Additional information

    Scripts and data
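
    The machine-based discriminability scores mentioned above can be illustrated with a simple ABX-style measure: given tokens A and B from two sound categories and a further token X from A's category, a trial counts as correct if X lies closer to A than to B. The sketch below is illustrative only, with random placeholder vectors and a plain Euclidean distance standing in for the paper's actual representations and metric:

      # ABX-style discriminability: fraction of (A, B, X) triples in which X is
      # closer to its own-category token A than to the other-category token B.
      # 0.5 is chance; 1.0 is perfect discriminability. Speaker variation could
      # be probed by drawing A and X from different speakers.
      import itertools
      import numpy as np

      def abx_score(cat_a, cat_b):
          """cat_a, cat_b: arrays of shape (n_tokens, n_dims), one row per token."""
          correct, total = 0, 0
          for i, j in itertools.permutations(range(len(cat_a)), 2):
              a, x = cat_a[i], cat_a[j]  # A and X: distinct tokens, same category
              for b in cat_b:
                  correct += np.linalg.norm(x - a) < np.linalg.norm(x - b)
                  total += 1
          return correct / total

      rng = np.random.default_rng(1)
      # Placeholder "acoustic" tokens for two sound categories (8 tokens x 12 dims).
      tokens_a = rng.normal(loc=0.0, size=(8, 12))
      tokens_b = rng.normal(loc=0.7, size=(8, 12))
      print(f"ABX discriminability: {abx_score(tokens_a, tokens_b):.2f}")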
  • Bock, K., & Levelt, W. J. M. (1994). Language production: Grammatical encoding. In M. A. Gernsbacher (Ed.), Handbook of Psycholinguistics (pp. 945-984). San Diego: Academic Press.
  • Bohnemeyer, J. (1998). Temporale Relatoren im Hispano-Yukatekischen Sprachkontakt. In A. Koechert, & T. Stolz (Eds.), Convergencia e Individualidad - Las lenguas Mayas entre hispanización e indigenismo (pp. 195-241). Hannover, Germany: Verlag für Ethnologie.
  • Bohnemeyer, J. (1998). Sententiale Topics im Yukatekischen. In Z. Dietmar (Ed.), Deskriptive Grammatik und allgemeiner Sprachvergleich (pp. 55-85). Tübingen, Germany: Max-Niemeyer-Verlag.
  • Bohnemeyer, J. (2000). Where do pragmatic meanings come from? In W. Spooren, T. Sanders, & C. van Wijk (Eds.), Samenhang in Diversiteit; Opstellen voor Leo Noorman, aangeboden bij gelegenheid van zijn zestigste verjaardag (pp. 137-153).
  • Bosker, H. R., Reinisch, E., & Sjerps, M. J. (2016). Listening under cognitive load makes speech sound fast. In H. van den Heuvel, B. Cranen, & S. Mattys (Eds.), Proceedings of the Speech Processing in Realistic Environments [SPIRE] Workshop (pp. 23-24). Groningen.
  • Bosker, H. R. (2016). Our own speech rate influences speech perception. In J. Barnes, A. Brugos, S. Shattuck-Hufnagel, & N. Veilleux (Eds.), Proceedings of Speech Prosody 2016 (pp. 227-231).

    Abstract

    During conversation, spoken utterances occur in rich acoustic contexts, including speech produced by our interlocutor(s) and speech we produced ourselves. Prosodic characteristics of the acoustic context have been known to influence speech perception in a contrastive fashion: for instance, a vowel presented in a fast context is perceived to have a longer duration than the same vowel in a slow context. Given the ubiquity of the sound of our own voice, it may be that our own speech rate - a common source of acoustic context - also influences our perception of the speech of others. Two experiments were designed to test this hypothesis. Experiment 1 replicated earlier contextual rate effects by showing that hearing pre-recorded fast or slow context sentences alters the perception of ambiguous Dutch target words. Experiment 2 then extended this finding by showing that talking at a fast or slow rate prior to the presentation of the target words also altered the perception of those words. These results suggest that between-talker variation in speech rate production may induce between-talker variation in speech perception, thus potentially explaining why interlocutors tend to converge on speech rate in dialogue settings.

    Additional information

    pdf via conference website
  • Bouman, M. A., & Levelt, W. J. M. (1994). Werner E. Reichardt: Levensbericht. In H. W. Pleket (Ed.), Levensberichten en herdenkingen 1993 (pp. 75-80). Amsterdam: Koninklijke Nederlandse Akademie van Wetenschappen.
  • Bowerman, M. (1974). Early development of concepts underlying language. In R. Schiefelbusch, & L. Lloyd (Eds.), Language perspectives: Acquisition, retardation, and intervention (pp. 191-209). Baltimore: University Park Press.
  • Bowerman, M. (1994). Learning a semantic system: What role do cognitive predispositions play? [Reprint]. In P. Bloom (Ed.), Language acquisition: Core readings (pp. 329-363). Cambridge, MA: MIT Press.

    Abstract

    Reprint from: Bowerman, M. (1989). Learning a semantic system: What role do cognitive predispositions play? In M. L. Rice & R. L. Schiefelbusch (Eds.), The teachability of language (pp. 133-169). Baltimore: Paul H. Brookes.
  • Bowerman, M. (1982). Reorganizational processes in lexical and syntactic development. In E. Wanner, & L. Gleitman (Eds.), Language acquisition: The state of the art (pp. 319-346). New York: Academic Press.
  • Bowerman, M. (1982). Starting to talk worse: Clues to language acquisition from children's late speech errors. In S. Strauss (Ed.), U shaped behavioral growth (pp. 101-145). New York: Academic Press.
  • Bowerman, M. (2000). Where do children's word meanings come from? Rethinking the role of cognition in early semantic development. In L. Nucci, G. Saxe, & E. Turiel (Eds.), Culture, thought and development (pp. 199-230). Mahwah, NJ: Lawrence Erlbaum.
  • Brown, P. (1998). Early Tzeltal verbs: Argument structure and argument representation. In E. Clark (Ed.), Proceedings of the 29th Annual Stanford Child Language Research Forum (pp. 129-140). Stanford: CSLI Publications.

    Abstract

    The surge of research activity focussing on children's acquisition of verbs (e.g., Tomasello and Merriman 1996) addresses some fundamental questions: Just how variable across languages, and across individual children, is the process of verb learning? How specific are arguments to particular verbs in early child language? How does the grammatical category 'Verb' develop? The position of Universal Grammar, that a verb category is early, contrasts with that of Tomasello (1992), Pine and Lieven and their colleagues (1996, in press), and many others, that children develop a verb category slowly, gradually building up subcategorizations of verbs around pragmatic, syntactic, and semantic properties of the language they are exposed to. On this latter view, one would expect the language which the child is learning, the cultural milieu and the nature of the interactions in which the child is engaged, to influence the process of acquiring verb argument structures. This paper explores these issues by examining the development of argument representation in the Mayan language Tzeltal, in both its lexical and verbal cross-referencing forms, and analyzing the semantic and pragmatic factors influencing the form argument representation takes. Certain facts about Tzeltal (the ergative/ absolutive marking, the semantic specificity of transitive and positional verbs) are proposed to affect the representation of arguments. The first 500 multimorpheme combinations of 3 children (aged between 1;8 and 2;4) are examined. It is argued that there is no evidence of semantically light 'pathbreaking' verbs (Ninio 1996) leading the way into word combinations. There is early productivity of cross-referencing affixes marking A, S, and O arguments (although there are systematic omissions). The paper assesses the respective contributions of three kinds of factors to these results - structural (regular morphology), semantic (verb specificity) and pragmatic (the nature of Tzeltal conversational interaction).
  • Brown, P. (2000). 'He descended legs-upwards': Position and motion in Tzeltal frog stories. In E. V. Clark (Ed.), Proceedings of the 30th Stanford Child Language Research Forum (pp. 67-75). Stanford: CSLI.

    Abstract

    How are events framed in narrative? Speakers of English (a 'satellite-framed' language), when 'reading' Mercer Mayer's wordless picture book 'Frog, Where Are You?', find the story self-evident: a boy has a dog and a pet frog; the frog escapes and runs away; the boy and dog look for it across hill and dale, through woods and over a cliff, until they find it and return home with a baby frog child of the original pet frog. In Tzeltal, as spoken in a Mayan community in southern Mexico, the story is somewhat different, because the language structures event descriptions differently. Tzeltal is in part a 'verb-framed' language with a set of Path-encoding motion verbs, so that the bare bones of the Frog story can consist of verbs translating as 'go'/'pass by'/'ascend'/'descend'/'arrive'/'return'. But Tzeltal also has satellite-framing adverbials, grammaticized from the same set of motion verbs, which encode the direction of motion or the orientation of static arrays. Furthermore, vivid pictorial detail is provided by positional verbs which can describe the position of the Figure as an outcome of a motion event; motion and stasis are thereby combined in a single event description (for example: jipot jawal 'he has been thrown (by the deer) lying_face_upwards_spread-eagled'). This paper compares the use of these three linguistic resources in Frog Story narratives from Tzeltal adults and children, looks at their development in the narratives of children, and considers the results in relation to those from Berman and Slobin's (1996) comparative study of adult and child Frog stories.
  • Brown, P., & Levinson, S. C. (2000). Frames of spatial reference and their acquisition in Tenejapan Tzeltal. In L. Nucci, G. Saxe, & E. Turiel (Eds.), Culture, thought, and development (pp. 167-197). Mahwah, NJ: Erlbaum.
  • Brown, P. (1998). How and why are women more polite: Some evidence from a Mayan community. In J. Coates (Ed.), Language and gender (pp. 81-99). Oxford: Blackwell.
  • Brown, C. M., & Hagoort, P. (2000). On the electrophysiology of language comprehension: Implications for the human language system. In M. W. Crocker, M. Pickering, & C. Clifton jr. (Eds.), Architectures and mechanisms for language processing (pp. 213-237). Cambridge University Press.
  • Brown, P., & Levinson, S. C. (1998). Politeness, introduction to the reissue: A review of recent work. In A. Kasher (Ed.), Pragmatics: Vol. 6 Grammar, psychology and sociology (pp. 488-554). London: Routledge.

    Abstract

    This article is a reprint of chapter 1, the introduction to Brown and Levinson, 1987, Politeness: Some universals in language usage (Cambridge University Press).
  • Brown, C. M., Hagoort, P., & Kutas, M. (2000). Postlexical integration processes during language comprehension: Evidence from brain-imaging research. In M. S. Gazzaniga (Ed.), The new cognitive neurosciences (2nd ed., pp. 881-895). Cambridge, MA: MIT Press.
  • Bruggeman, L., & Cutler, A. (2016). Lexical manipulation as a discovery tool for psycholinguistic research. In C. Carignan, & M. D. Tyler (Eds.), Proceedings of the 16th Australasian International Conference on Speech Science and Technology (SST2016) (pp. 313-316).
  • Burenhult, N., & Kruspe, N. (2016). The language of eating and drinking: A window on Orang Asli meaning-making. In K. Endicott (Ed.), Malaysia’s original people: Past, present and future of the Orang Asli (pp. 175-199). Singapore: National University of Singapore Press.
  • Cabrelli, J., Chaouch-Orozco, A., González Alonso, J., Pereira Soares, S. M., Puig-Mayenco, E., & Rothman, J. (2023). Introduction - Multilingualism: Language, brain, and cognition. In J. Cabrelli, A. Chaouch-Orozco, J. González Alonso, S. M. Pereira Soares, E. Puig-Mayenco, & J. Rothman (Eds.), The Cambridge handbook of third language acquisition (pp. 1-20). Cambridge: Cambridge University Press. doi:10.1017/9781108957823.001.

    Abstract

    This chapter provides an introduction to the handbook. It succinctly overviews the key questions in the field of L3/Ln acquisition and summarizes the scope of all the chapters included. The chapter ends by raising some outstanding questions that the field needs to address.
  • Caplan, S., Peng, M. Z., Zhang, Y., & Yu, C. (2023). Using an Egocentric Human Simulation Paradigm to quantify referential and semantic ambiguity in early word learning. In M. Goldwater, F. K. Anggoro, B. K. Hayes, & D. C. Ong (Eds.), Proceedings of the 45th Annual Meeting of the Cognitive Science Society (CogSci 2023) (pp. 1043-1049).

    Abstract

    In order to understand early word learning we need to better understand and quantify properties of the input that young children receive. We extended the human simulation paradigm (HSP) using egocentric videos taken from infant head-mounted cameras. The videos were further annotated with gaze information indicating in-the-moment visual attention from the infant. Our new HSP prompted participants for two types of responses, thus differentiating referential from semantic ambiguity in the learning input. Consistent with findings on visual attention in word learning, we find a strongly bimodal distribution over HSP accuracy. Even in this open-ended task, most videos only lead to a small handful of common responses. What's more, referential ambiguity was the key bottleneck to performance: participants can nearly always recover the exact word that was said if they identify the correct referent. Finally, analysis shows that adult learners relied on particular, multimodal behavioral cues to infer those target referents.
  • Chevrefils, L., Morgenstern, A., Beaupoil-Hourdel, P., Bedoin, D., Caët, S., Danet, C., Danino, C., De Pontonx, S., & Parisse, C. (2023). Coordinating eating and languaging: The choreography of speech, sign, gesture and action in family dinners. In W. Pouw, J. Trujillo, H. R. Bosker, L. Drijvers, M. Hoetjes, J. Holler, S. Kadava, L. Van Maastricht, E. Mamus, & A. Ozyurek (Eds.), Gesture and Speech in Interaction (GeSpIn) Conference. doi:10.17617/2.3527183.

    Abstract

    In this study, we analyze one French signing and one French speaking family's interaction during dinner. The families, composed of two parents and two children aged 3 to 11, were filmed with three cameras to capture all family members' behaviors. The three videos per dinner were synchronized and coded in ELAN. We annotated all participants' acting and languaging.
    Our quantitative analyses show how family members collaboratively manage multiple streams of activity through the embodied performances of dining and interacting. We uncover different profiles according to participants' modality of expression and status (focusing on the mother and the younger child). The hearing participants' co-activity management illustrates their monitoring of dining and conversing and how they progressively master the affordances of the visual and vocal channels to maintain the simultaneity of the two activities. The deaf mother skillfully manages to alternate smoothly between dining and interacting. The deaf younger child shows how she is still in the process of developing her skills to manage multi-activity. Our qualitative analyses focus on the ecology of visual-gestural and audio-vocal languaging in the context of co-activity according to language and participant. We open new perspectives on the management of gaze and body parts in multimodal languaging.
  • Clark, E. V., & Casillas, M. (2016). First language acquisition. In K. Allan (Ed.), The Routledge Handbook of Linguistics (pp. 311-328). New York: Routledge.
  • Corps, R. E. (2023). What do we know about the mechanisms of response planning in dialog? In Psychology of Learning and Motivation (pp. 41-81). doi:10.1016/bs.plm.2023.02.002.

    Abstract

    During dialog, interlocutors take turns at speaking with little gap or overlap between their contributions. But language production in monolog is comparatively slow. Theories of dialog tend to agree that interlocutors manage these timing demands by planning a response early, before the current speaker reaches the end of their turn. In the first half of this chapter, I review experimental research supporting these theories. But this research also suggests that planning a response early, while simultaneously comprehending, is difficult. Does response planning need to be this difficult during dialog? In other words, is early-planning always necessary? In the second half of this chapter, I discuss research that suggests the answer to this question is no. In particular, corpora of natural conversation demonstrate that speakers do not directly respond to the immediately preceding utterance of their partner—instead, they continue an utterance they produced earlier. This parallel talk likely occurs because speakers are highly incremental and plan only part of their utterance before speaking, leading to pauses, hesitations, and disfluencies. As a result, speakers do not need to engage in extensive advance planning. Thus, laboratory studies do not provide a full picture of language production in dialog, and further research using naturalistic tasks is needed.
  • Crago, M. B., & Allen, S. E. M. (1998). Acquiring Inuktitut. In O. L. Taylor, & L. Leonard (Eds.), Language Acquisition Across North America: Cross-Cultural And Cross-Linguistic Perspectives (pp. 245-279). San Diego, CA, USA: Singular Publishing Group, Inc.
  • Crago, M. B., Allen, S. E. M., & Pesco, D. (1998). Issues of Complexity in Inuktitut and English Child Directed Speech. In Proceedings of the twenty-ninth Annual Stanford Child Language Research Forum (pp. 37-46).
  • Creemers, A. (2023). Morphological processing in spoken-word recognition. In D. Crepaldi (Ed.), Linguistic morphology in the mind and brain (pp. 50-64). New York: Routledge.

    Abstract

    Most psycholinguistic studies on morphological processing have examined the role of morphological structure in the visual modality. This chapter discusses morphological processing in the auditory modality, which is an area of research that has only recently received more attention. It first discusses why results in the visual modality cannot straightforwardly be applied to the processing of spoken words, stressing the importance of acknowledging potential modality effects. It then gives a brief overview of the existing research on the role of morphology in the auditory modality, for which an increasing number of studies report that listeners show sensitivity to morphological structure. Finally, the chapter highlights insights gained by looking at morphological processing not only in reading, but also in listening, and it discusses directions for future research.
  • Croijmans, I., & Majid, A. (2016). Language does not explain the wine-specific memory advantage of wine experts. In A. Papafragou, D. Grodner, D. Mirman, & J. Trueswell (Eds.), Proceedings of the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016) (pp. 141-146). Austin, TX: Cognitive Science Society.

    Abstract

    Although people are poor at naming odors, naming a smell helps to remember that odor. Previous studies show wine experts have better memory for smells, and they also name smells differently than novices. Is wine experts' odor memory verbally mediated? And is the odor memory advantage that experts have over novices restricted to odors in their domain of expertise, or does it generalize? Twenty-four wine experts and 24 novices smelled wines, wine-related odors and common odors, and remembered these. Half the participants also named the smells. Wine experts had better memory for wines, but not for the other odors, indicating their memory advantage is restricted to wine. Wine experts named odors better than novices, but there was no relationship between experts' ability to name odors and their memory for odors. This suggests experts' odor memory advantage is not linguistically mediated, but may be the result of differential perceptual learning.
  • Cutler, A., & Otake, T. (1998). Assimilation of place in Japanese and Dutch. In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: Vol. 5 (pp. 1751-1754). Sydney: ICSLP.

    Abstract

    Assimilation of place of articulation across a nasal and a following stop consonant is obligatory in Japanese, but not in Dutch. In four experiments the processing of assimilated forms by speakers of Japanese and Dutch was compared, using a task in which listeners blended pseudo-word pairs such as ranga-serupa. An assimilated blend of this pair would be rampa, an unassimilated blend rangpa. Japanese listeners produced significantly more assimilated than unassimilated forms, both with pseudo-Japanese and pseudo-Dutch materials, while Dutch listeners produced significantly more unassimilated than assimilated forms in each materials set. This suggests that Japanese listeners, whose native-language phonology involves obligatory assimilation constraints, represent the assimilated nasals in nasal-stop sequences as unmarked for place of articulation, while Dutch listeners, who are accustomed to hearing unassimilated forms, represent the same nasal segments as marked for place of articulation.
  • Ip, M., & Cutler, A. (2016). Cross-language data on five types of prosodic focus. In J. Barnes, A. Brugos, S. Shattuck-Hufnagel, & N. Veilleux (Eds.), Proceedings of Speech Prosody 2016 (pp. 330-334).

    Abstract

    To examine the relative roles of language-specific and language-universal mechanisms in the production of prosodic focus, we compared production of five different types of focus by native speakers of English and Mandarin. Two comparable dialogues were constructed for each language, with the same words appearing in focused and unfocused position; 24 speakers recorded each dialogue in each language. Duration, F0 (mean, maximum, range), and rms-intensity (mean, maximum) of all critical word tokens were measured. Across the different types of focus, cross-language differences were observed in the degree to which English versus Mandarin speakers use the different prosodic parameters to mark focus, suggesting that while prosody may be universally available for expressing focus, the means of its employment may be considerably language-specific.
  • Cutler, A. (1994). How human speech recognition is affected by phonological diversity among languages. In R. Togneri (Ed.), Proceedings of the fifth Australian International Conference on Speech Science and Technology: Vol. 1 (pp. 285-288). Canberra: Australian Speech Science and Technology Association.

    Abstract

    Listeners process spoken language in ways which are adapted to the phonological structure of their native language. As a consequence, non-native speakers do not listen to a language in the same way as native speakers; moreover, listeners may use their native language listening procedures inappropriately with foreign input. With sufficient experience, however, it may be possible to inhibit this latter (counter-productive) behavior.
  • Cutler, A. (1998). How listeners find the right words. In Proceedings of the Sixteenth International Congress on Acoustics: Vol. 2 (pp. 1377-1380). Melville, NY: Acoustical Society of America.

    Abstract

    Languages contain tens of thousands of words, but these are constructed from a tiny handful of phonetic elements. Consequently, words resemble one another, or can be embedded within one another ('a coup stick snot with standing'). The process of spoken-word recognition by human listeners involves activation of multiple word candidates consistent with the input, and direct competition between activated candidate words. Further, human listeners are sensitive, at an early, prelexical, stage of speech processing, to constraints on what could potentially be a word of the language.
  • Cutler, A. (2000). How the ear comes to hear. In New Trends in Modern Linguistics [Part of Annual catalogue series] (pp. 6-10). Tokyo, Japan: Maruzen Publishers.
  • Cutler, A. (2000). Hoe het woord het oor verovert. In Voordrachten uitgesproken tijdens de uitreiking van de SPINOZA-premies op 15 februari 2000 (pp. 29-41). The Hague, The Netherlands: Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO).
  • Cutler, A. (1974). On saying what you mean without meaning what you say. In M. Galy, R. Fox, & A. Bruck (Eds.), Papers from the Tenth Regional Meeting, Chicago Linguistic Society (pp. 117-127). Chicago, Ill.: CLS.
  • Cutler, A., Treiman, R., & Van Ooijen, B. (1998). Orthografik inkoncistensy ephekts in foneme detektion? In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: Vol. 6 (pp. 2783-2786). Sydney: ICSLP.

    Abstract

    The phoneme detection task is widely used in spoken word recognition research. Alphabetically literate participants, however, are more used to explicit representations of letters than of phonemes. The present study explored whether phoneme detection is sensitive to how target phonemes are, or may be, orthographically realised. Listeners detected the target sounds [b,m,t,f,s,k] in word-initial position in sequences of isolated English words. Response times were faster to the targets [b,m,t], which have consistent word-initial spelling, than to the targets [f,s,k], which are inconsistently spelled, but only when listeners’ attention was drawn to spelling by the presence in the experiment of many irregularly spelled fillers. Within the inconsistent targets [f,s,k], there was no significant difference between responses to targets in words with majority and minority spellings. We conclude that performance in the phoneme detection task is not necessarily sensitive to orthographic effects, but that salient orthographic manipulation can induce such sensitivity.
  • Cutler, A. (1998). Prosodic structure and word recognition. In A. D. Friederici (Ed.), Language comprehension: A biological perspective (pp. 41-70). Heidelberg: Springer.
  • Cutler, A. (1982). Prosody and sentence perception in English. In J. Mehler, E. C. Walker, & M. Garrett (Eds.), Perspectives on mental representation: Experimental and theoretical studies of cognitive processes and capacities (pp. 201-216). Hillsdale, N.J: Erlbaum.
  • Cutler, A. (2000). Real words, phantom words and impossible words. In D. Burnham, S. Luksaneeyanawin, C. Davis, & M. Lafourcade (Eds.), Interdisciplinary approaches to language processing: The international conference on human and machine processing of language and speech (pp. 32-42). Bangkok: NECTEC.
  • Cutler, A., & Koster, M. (2000). Stress and lexical activation in Dutch. In B. Yuan, T. Huang, & X. Tang (Eds.), Proceedings of the Sixth International Conference on Spoken Language Processing: Vol. 1 (pp. 593-596). Beijing: China Military Friendship Publish.

    Abstract

    Dutch listeners were slower to make judgements about the semantic relatedness between a spoken target word (e.g. atLEET, 'athlete') and a previously presented visual prime word (e.g. SPORT 'sport') when the spoken word was mis-stressed. The adverse effect of mis-stressing confirms the role of stress information in lexical recognition in Dutch. However, although the erroneous stress pattern was always initially compatible with a competing word (e.g. ATlas, 'atlas'), mis-stressed words did not produce high false alarm rates in unrelated pairs (e.g. SPORT - atLAS). This suggests that stress information did not completely rule out segmentally matching but suprasegmentally mismatching words, a finding consistent with spoken-word recognition models involving multiple activation and inter-word competition.
  • Cutler, A., & Young, D. (1994). Rhythmic structure of word blends in English. In Proceedings of the Third International Conference on Spoken Language Processing (pp. 1407-1410). Kobe: Acoustical Society of Japan.

    Abstract

    Word blends combine fragments from two words, either in speech errors or when a new word is created. Previous work has demonstrated that in Japanese, such blends preserve moraic structure; in English they do not. A similar effect of moraic structure is observed in perceptual research on segmentation of continuous speech in Japanese; English listeners, by contrast, exploit stress units in segmentation, suggesting that a general rhythmic constraint may underlie both findings. The present study examined whether this parallel would also hold for word blends. In spontaneous English polysyllabic blends, the source words were significantly more likely to be split before a strong than before a weak (unstressed) syllable, i.e. to be split at a stress unit boundary. In an experiment in which listeners were asked to identify the source words of blends, significantly more correct detections resulted when splits had been made before strong syllables. Word blending, like speech segmentation, appears to be constrained by language rhythm.
  • Cutler, A. (1998). The recognition of spoken words with variable representations. In D. Duez (Ed.), Proceedings of the ESCA Workshop on Sound Patterns of Spontaneous Speech (pp. 83-92). Aix-en-Provence: Université de Aix-en-Provence.
  • Cutler, A., McQueen, J. M., Baayen, R. H., & Drexler, H. (1994). Words within words in a real-speech corpus. In R. Togneri (Ed.), Proceedings of the 5th Australian International Conference on Speech Science and Technology: Vol. 1 (pp. 362-367). Canberra: Australian Speech Science and Technology Association.

    Abstract

    In a 50,000-word corpus of spoken British English the occurrence of words embedded within other words is reported. Within-word embedding in this real speech sample is common, and analogous to the extent of embedding observed in the vocabulary. Imposition of a syllable boundary matching constraint reduces but by no means eliminates spurious embedding. Embedded words are most likely to overlap with the beginning of matrix words, and thus may pose serious problems for speech recognisers.
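
    The corpus phenomenon counted here is easy to illustrate: any vocabulary item occurring inside another word is a potential spurious match for a recognizer. A toy Python sketch follows, with an invented mini-vocabulary and plain substring matching; the paper itself worked on a real-speech corpus and additionally imposed a syllable-boundary matching constraint:

      # Toy count of words embedded within other words. Substring matching only;
      # no syllable-boundary constraint is applied here.
      vocab = {"a", "in", "king", "not", "stand", "standing", "thin", "with",
               "notwithstanding"}

      def embedded_words(word, vocabulary):
          """All vocabulary items occurring as proper substrings of `word`."""
          return sorted(w for w in vocabulary if w != word and w in word)

      for w in sorted(vocab):
          inner = embedded_words(w, vocab)
          if inner:
              print(f"{w}: {inner}")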
  • Cutler, A., Norris, D., & McQueen, J. M. (2000). Tracking TRACE’s troubles. In A. Cutler, J. M. McQueen, & R. Zondervan (Eds.), Proceedings of SWAP (Workshop on Spoken Word Access Processes) (pp. 63-66). Nijmegen: Max-Planck-Institute for Psycholinguistics.

    Abstract

    Simulations explored the inability of the TRACE model of spoken-word recognition to model the effects on human listening of acoustic-phonetic mismatches in word forms. The source of TRACE's failure lay not in its interactive connectivity, not in the presence of interword competition, and not in the use of phonemic representations, but in the need for continuously optimised interpretation of the input. When an analogue of TRACE was allowed to cycle to asymptote on every slice of input, an acceptable simulation of the subcategorical mismatch data was achieved. Even then, however, the simulation was not as close as that produced by the Merge model.
  • D'Avis, F.-J., & Gretsch, P. (1994). Variations on "Variation": On the Acquisition of Complementizers in German. In R. Tracy, & E. Lattey (Eds.), How Tolerant is Universal Grammar? (pp. 59-109). Tübingen, Germany: Max-Niemeyer-Verlag.
  • Dediu, D., & Moisik, S. (2016). Defining and counting phonological classes in cross-linguistic segment databases. In N. Calzolari, K. Choukri, T. Declerck, S. Goggi, M. Grobelnik, B. Maegaard, J. Mariani, H. Mazo, A. Moreno, J. Odijk, & S. Piperidis (Eds.), Proceedings of LREC 2016: 10th International Conference on Language Resources and Evaluation (pp. 1955-1962). Paris: European Language Resources Association (ELRA).

    Abstract

    Recently, there has been an explosion in the availability of large, good-quality cross-linguistic databases such as WALS (Dryer & Haspelmath, 2013), Glottolog (Hammarstrom et al., 2015) and Phoible (Moran & McCloy, 2014). Databases such as Phoible contain the actual segments used by various languages as they are given in the primary language descriptions. However, this segment-level representation cannot be used directly for analyses that require generalizations over classes of segments that share theoretically interesting features. Here we present a method and the associated R (R Core Team, 2014) code that allows the flexible definition of such meaningful classes and that can identify the sets of segments falling into such a class for any language inventory. The method and its results are important for those interested in exploring cross-linguistic patterns of phonetic and phonological diversity and their relationship to extra-linguistic factors and processes such as climate, economics, history or human genetics.
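
    The code released with this paper is in R; purely to illustrate the underlying idea (define a class by a set of required features, then identify its members in each language's inventory), here is a toy Python analogue. The segments and feature assignments below are invented for the example and are not Phoible's actual feature system:

      # Toy feature-based segment classes over language inventories.
      inventories = {
          "lang1": {"p", "b", "t", "d", "k", "m", "n", "a", "i", "u"},
          "lang2": {"p", "t", "k", "s", "n", "a", "i"},
      }
      features = {  # invented feature assignments, for illustration only
          "p": {"consonant", "stop", "voiceless"},
          "b": {"consonant", "stop", "voiced"},
          "t": {"consonant", "stop", "voiceless"},
          "d": {"consonant", "stop", "voiced"},
          "k": {"consonant", "stop", "voiceless"},
          "s": {"consonant", "fricative", "voiceless"},
          "m": {"consonant", "nasal", "voiced"},
          "n": {"consonant", "nasal", "voiced"},
          "a": {"vowel"}, "i": {"vowel"}, "u": {"vowel"},
      }

      def class_members(inventory, required):
          """Segments of `inventory` whose features include all of `required`."""
          return {seg for seg in inventory if required <= features.get(seg, set())}

      voiceless_stops = {"consonant", "stop", "voiceless"}
      for lang, inv in sorted(inventories.items()):
          members = class_members(inv, voiceless_stops)
          print(lang, len(members), sorted(members))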
  • Dediu, D., & Moisik, S. R. (2016). Anatomical biasing of click learning and production: An MRI and 3d palate imaging study. In S. G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Feher, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/57.html.

    Abstract

    The current paper presents results for data on click learning obtained from a larger imaging study (using MRI and 3D intraoral scanning) designed to quantify and characterize intra- and inter-population variation of vocal tract structures and the relation of this to speech production. The aim of the click study was to ascertain whether and to what extent vocal tract morphology influences (1) the ability to learn to produce clicks and (2) the productions of those that successfully learn to produce these sounds. The results indicate that the presence of an alveolar ridge certainly does not prevent an individual from learning to produce click sounds (1). However, the subtle details of how clicks are produced may indeed be driven by palate shape (2).
  • Dingemanse, M. (2023). Ideophones. In E. Van Lier (Ed.), The Oxford handbook of word classes (pp. 466-476). Oxford: Oxford University Press.

    Abstract

    Many of the world’s languages feature an open lexical class of ideophones, words whose marked forms and sensory meanings invite iconic associations. Ideophones (also known as mimetics or expressives) are well-known from languages in Asia, Africa and the Americas, where they often form a class on the same order of magnitude as other major word classes and take up a considerable functional load as modifying expressions or predicates. Across languages, commonalities in the morphosyntactic behaviour of ideophones can be related to their nature and origin as vocal depictions. At the same time there is ample room for linguistic diversity, raising the need for fine-grained grammatical description of ideophone systems. As vocal depictions, ideophones often form a distinct lexical stratum seemingly conjured out of thin air; but as conventionalized words, they inevitably grow roots in local linguistic systems, showing relations to adverbs, adjectives, verbs and other linguistic resources devoted to modification and predication.
  • Dingemanse, M. (2023). Interjections. In E. Van Lier (Ed.), The Oxford handbook of word classes (pp. 477-491). Oxford: Oxford University Press.

    Abstract

    No class of words has better claims to universality than interjections. At the same time, no category has more variable content than this one, traditionally the catch-all basket for linguistic items that bear a complicated relation to sentential syntax. Interjections are a mirror reflecting methodological and theoretical assumptions more than a coherent linguistic category that affords unitary treatment. This chapter focuses on linguistic items that typically function as free-standing utterances, and on some of the conceptual, methodological, and theoretical questions generated by such items. A key move is to study these items in the setting of conversational sequences, rather than from the “flatland” of sequential syntax. This makes visible how some of the most frequent interjections streamline everyday language use and scaffold complex language. Approaching interjections in terms of their sequential positions and interactional functions has the potential to reveal and explain patterns of universality and diversity in interjections.
  • Doumas, L. A., & Martin, A. E. (2016). Abstraction in time: Finding hierarchical linguistic structure in a model of relational processing. In A. Papafragou, D. Grodner, D. Mirman, & J. Trueswell (Eds.), Proceedings of the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016) (pp. 2279-2284). Austin, TX: Cognitive Science Society.

    Abstract

    Abstract mental representation is fundamental for human cognition. Forming such representations in time, especially from dynamic and noisy perceptual input, is a challenge for any processing modality, but perhaps none so acutely as for language processing. We show that LISA (Hummel & Holyoak, 1997) and DORA (Doumas, Hummel, & Sandhofer, 2008), models built to process and to learn structured (i.e., symbolic) representations of conceptual properties and relations from unstructured inputs, show oscillatory activation during processing that is highly similar to the cortical activity elicited by the linguistic stimuli from Ding et al. (2016). We argue, as Ding et al. (2016), that this activation reflects formation of hierarchical linguistic representation, and furthermore, that the kind of computational mechanisms in LISA/DORA (e.g., temporal binding by systematic asynchrony of firing) may underlie formation of abstract linguistic representations in the human brain. It may be this repurposing that allowed for the generation or emergence of hierarchical linguistic structure, and therefore, human language, from extant cognitive and neural systems. We conclude that models of thinking and reasoning and models of language processing must be integrated, not only for increased plausibility, but in order to advance both fields towards a larger integrative model of human cognition.
  • Drijvers, L., & Mazzini, S. (2023). Neural oscillations in audiovisual language and communication. In Oxford Research Encyclopedia of Neuroscience. Oxford: Oxford University Press. doi:10.1093/acrefore/9780190264086.013.455.

    Abstract

    How do neural oscillations support human audiovisual language and communication? Considering the rhythmic nature of audiovisual language, in which stimuli from different sensory modalities unfold over time, neural oscillations represent an ideal candidate to investigate how audiovisual language is processed in the brain. Modulations of oscillatory phase and power are thought to support audiovisual language and communication in multiple ways. Neural oscillations synchronize by tracking external rhythmic stimuli or by re-setting their phase to presentation of relevant stimuli, resulting in perceptual benefits. In particular, synchronized neural oscillations have been shown to subserve the processing and the integration of auditory speech, visual speech, and hand gestures. Furthermore, synchronized oscillatory modulations have been studied and reported between brains during social interaction, suggesting that their contribution to audiovisual communication goes beyond the processing of single stimuli and applies to natural, face-to-face communication.

    There are still some outstanding questions that need to be answered to reach a better understanding of the neural processes supporting audiovisual language and communication. In particular, it is not entirely clear yet how the multitude of signals encountered during audiovisual communication are combined into a coherent percept and how this is affected during real-world dyadic interactions. In order to address these outstanding questions, it is fundamental to consider language as a multimodal phenomenon, involving the processing of multiple stimuli unfolding at different rhythms over time, and to study language in its natural context: social interaction. Other outstanding questions could be addressed by implementing novel techniques (such as rapid invisible frequency tagging, dual-electroencephalography, or multi-brain stimulation) and analysis methods (e.g., using temporal response functions) to better understand the relationship between oscillatory dynamics and efficient audiovisual communication.
  • Drozd, K. F. (1998). No as a determiner in child English: A summary of categorical evidence. In A. Sorace, C. Heycock, & R. Shillcock (Eds.), Proceedings of the GALA '97 Conference on Language Acquisition (pp. 34-39). Edinburgh, UK: Edinburgh University Press.

    Abstract

    This paper summarizes the results of a descriptive syntactic category analysis of child English no which reveals that young children use and represent no as a determiner and negatives like no pen as NPs, contra standard analyses.
  • Drozdova, P., Van Hout, R., & Scharenborg, O. (2016). Processing and adaptation to ambiguous sounds during the course of perceptual learning. In Proceedings of Interspeech 2016: The 17th Annual Conference of the International Speech Communication Association (pp. 2811-2815). doi:10.21437/Interspeech.2016-814.

    Abstract

    Listeners use their lexical knowledge to interpret ambiguous sounds, and retune their phonetic categories to include this ambiguous sound. Although there is ample evidence for lexically-guided retuning, the adaptation process is not fully understood. Using a lexical decision task with an embedded auditory semantic priming task, the present study investigates whether words containing an ambiguous sound are processed in the same way as “natural” words and whether adaptation to the ambiguous sound tends to equalize the processing of “ambiguous” and natural words. Analyses of the yes/no responses and reaction times to natural and “ambiguous” words showed that words containing an ambiguous sound were accepted as words less often and were processed slower than the same words without ambiguity. The difference in acceptance disappeared after exposure to approximately 15 ambiguous items. Interestingly, lower acceptance rates and slower processing did not have an effect on the processing of semantic information of the following word. However, lower acceptance rates of ambiguous primes predict slower reaction times of these primes, suggesting an important role of stimulus-specific characteristics in triggering lexically-guided perceptual learning.
  • Düngen, D., Sarfati, M., & Ravignani, A. (2023). Cross-species research in biomusicality: Methods, pitfalls, and prospects. In E. H. Margulis, P. Loui, & D. Loughridge (Eds.), The science-music borderlands: Reckoning with the past and imagining the future (pp. 57-95). Cambridge, MA, USA: The MIT Press. doi:10.7551/mitpress/14186.003.0008.
  • Eibl-Eibesfeldt, I., Senft, B., & Senft, G. (1998). Trobriander (Ost-Neuguinea, Trobriand Inseln, Kaile'una) Fadenspiele 'ninikula'. In Ethnologie - Humanethologische Begleitpublikationen von I. Eibl-Eibesfeldt und Mitarbeitern. Sammelband I, 1985-1987. Göttingen: Institut für den Wissenschaftlichen Film.
  • Eisenbeiss, S. (2000). The acquisition of Determiner Phrase in German child language. In M.-A. Friedemann, & L. Rizzi (Eds.), The Acquisition of Syntax (pp. 26-62). Harlow, UK: Pearson Education Ltd.
  • Ekerdt, C., Takashima, A., & McQueen, J. M. (2023). Memory consolidation in second language neurocognition. In K. Morgan-Short, & J. G. Van Hell (Eds.), The Routledge handbook of second language acquisition and neurolinguistics. Oxfordshire: Routledge.

    Abstract

    Acquiring a second language (L2) requires newly learned information to be integrated with existing knowledge. It has been proposed that several memory systems work together to enable this process of rapidly encoding new information and then slowly incorporating it with existing knowledge, such that it is consolidated and integrated into the language network without catastrophic interference. This chapter focuses on consolidation of L2 vocabulary. First, the complementary learning systems model is outlined, along with the model’s predictions regarding lexical consolidation. Next, word learning studies in first language (L1) that investigate the factors playing a role in consolidation, and the neural mechanisms underlying this, are reviewed. Using the L1 memory consolidation literature as background, the chapter then presents what is currently known about memory consolidation in L2 word learning. Finally, considering what is already known about L1 but not about L2, future research investigating memory consolidation in L2 neurocognition is proposed.
  • Enfield, N. J. (2000). On linguocentrism. In M. Pütz, & M. H. Verspoor (Eds.), Explorations in linguistic relativity (pp. 125-157). Amsterdam: Benjamins.
  • Enfield, N. J., & Evans, G. (2000). Transcription as standardisation: The problem of Tai languages. In S. Burusphat (Ed.), Proceedings: The International Conference on Tai Studies, July 29-31, 1998 (pp. 201-212). Bangkok, Thailand: Institute of Language and Culture for Rural Development, Mahidol University.
  • Ernestus, M. (2016). L'utilisation des corpus oraux pour la recherche en (psycho)linguistique. In M. Kilani-Schoch, C. Surcouf, & A. Xanthos (Eds.), Nouvelles technologies et standards méthodologiques en linguistique (pp. 65-93). Lausanne: Université de Lausanne.
  • Eryilmaz, K., Little, H., & De Boer, B. (2016). Using HMMs To Attribute Structure To Artificial Languages. In S. G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Feher, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/125.html.

    Abstract

    We investigated the use of Hidden Markov Models (HMMs) as a way of representing repertoires of continuous signals in order to infer their building blocks. We tested the idea on a dataset from an artificial language experiment. The study demonstrates that using HMMs for this purpose is viable, but also that there is a lot of room for refinement, such as explicit duration modeling, incorporating autoregressive elements, and relaxing the Markovian assumption, in order to accommodate specific details.
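
    As a toy illustration of the approach (fit an HMM to a repertoire of continuous signals, then read the inferred hidden states as candidate building blocks), here is a Python sketch. The synthetic data and the third-party hmmlearn package are assumptions for the example, not the authors' setup:

      # Fit a Gaussian HMM to a set of synthetic one-dimensional trajectories and
      # decode each signal into runs of hidden states ("building blocks").
      import numpy as np
      from hmmlearn import hmm  # pip install hmmlearn

      rng = np.random.default_rng(2)

      def make_signal():
          # Each signal alternates between a low-mean and a high-mean block,
          # standing in for e.g. pitch or movement trajectories.
          blocks = [rng.normal(loc=mu, scale=0.3, size=(20, 1))
                    for mu in rng.permutation([-2.0, 2.0])]
          return np.vstack(blocks)

      signals = [make_signal() for _ in range(30)]
      X = np.vstack(signals)               # all frames, concatenated
      lengths = [len(s) for s in signals]  # frames per signal

      model = hmm.GaussianHMM(n_components=2, covariance_type="diag",
                              n_iter=100, random_state=0)
      model.fit(X, lengths)                # EM over the whole repertoire

      print("inferred block means:", model.means_.ravel())
      print("decoded states, first signal:", model.predict(signals[0])[:10], "...")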
  • Ferré, G. (2023). Pragmatic gestures and prosody. In W. Pouw, J. Trujillo, H. R. Bosker, L. Drijvers, M. Hoetjes, J. Holler, S. Kadava, L. Van Maastricht, E. Mamus, & A. Ozyurek (Eds.), Gesture and Speech in Interaction (GeSpIn) Conference. doi:10.17617/2.3527215.

    Abstract

    The study presented here focuses on two pragmatic gestures: the hand flip (Ferré, 2011), a gesture of the Palm Up Open Hand/PUOH family (Müller, 2004), and the closed hand, which can be considered the opposite kind of movement to the opening of the hands present in the PUOH gesture. Whereas one of the functions of the hand flip has been described as presenting a new point in speech (Cienki, 2021), the closed hand gesture has not yet been described in the literature, to the best of our knowledge. It can however be conceived of as having the opposite function of announcing the end of a point in discourse. The object of the present study is therefore to determine, through the study of prosodic features, whether the two gestures are found in the same type of speech units and what their respective scope is.

    Drawing on a corpus of three TED Talks in French, the prosodic characteristics of the speech that accompanies the two gestures will be examined. The hypothesis developed in the present paper is that their scope should be reflected in the prosody of accompanying speech, especially pitch key, tone, and relative pitch range. The prediction is that hand flips and closing hand gestures are expected to be located at the periphery of Intonation Phrases (IPs), Inter-Pausal Units (IPUs) or more conversational Turn Constructional Units (TCUs), and are likely to co-occur with pauses in speech. But because of the natural slope of intonation in speech, the speech that accompanies early gestures in Intonation Phrases should reveal different features from the speech at the end of intonational units. Tones should be different as well, considering the prosodic structure of spoken French.
  • Filippi, P., Congdon, J. V., Hoang, J., Bowling, D. L., Reber, S., Pašukonis, A., Hoeschele, M., Ocklenburg, S., de Boer, B., Sturdy, C. B., Newen, A., & Güntürkün, O. (2016). Humans Recognize Vocal Expressions Of Emotional States Universally Across Species. In The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/91.html.

    Abstract

    The perception of danger in the environment can induce physiological responses (such as a heightened state of arousal) in animals, which may cause measurable changes in the prosodic modulation of the voice (Briefer, 2012). The ability to interpret the prosodic features of animal calls as an indicator of emotional arousal may have provided the first hominins with an adaptive advantage, enabling, for instance, the recognition of a threat in the surroundings. This ability might have paved the way for the ability to process meaningful prosodic modulations in the emerging linguistic utterances.
  • Filippi, P., Ocklenburg, S., Bowling, D. L., Heege, L., Newen, A., Güntürkün, O., & de Boer, B. (2016). Multimodal Processing Of Emotional Meanings: A Hypothesis On The Adaptive Value Of Prosody. In The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/90.html.

    Abstract

    Humans combine multiple sources of information to comprehend meanings. These sources can be characterized as linguistic (i.e., lexical units and/or sentences) or paralinguistic (e.g., body posture, facial expression, voice intonation, pragmatic context). Emotion communication is a special case in which linguistic and paralinguistic dimensions can simultaneously denote the same, or multiple incongruous, referential meanings. Think, for instance, about when someone says “I’m sad!”, but does so with happy intonation and a happy facial expression. Here, the communicative channels express very specific (although conflicting) emotional states as denotations. In such cases of intermodal incongruence, are we involuntarily biased to respond to information in one channel over the other? We hypothesize that humans are involuntarily biased to respond to prosody over verbal content and facial expression, since the ability to communicate socially relevant information such as basic emotional states through prosodic modulation of the voice might have provided early hominins with an adaptive advantage that preceded the emergence of segmental speech (Darwin, 1871; Mithen, 2005). To address this hypothesis, we examined the interaction between multiple communicative channels in recruiting attentional resources within a Stroop interference task (i.e., a task in which different channels give conflicting information; Stroop, 1935). In experiment 1, we used synonyms of “happy” and “sad” spoken with happy and sad prosody. Participants were asked to identify the emotion expressed by the verbal content while ignoring prosody (Word task) or vice versa (Prosody task). Participants responded faster and more accurately in the Prosody task. Within the Word task, incongruent stimuli were responded to more slowly and less accurately than congruent stimuli. In experiment 2, we used synonyms of “happy” and “sad” spoken with happy and sad prosody, while a happy or sad face was displayed. Participants were asked to identify the emotion expressed by the verbal content while ignoring prosody and face (Word task), to identify the emotion expressed by prosody while ignoring verbal content and face (Prosody task), or to identify the emotion expressed by the face while ignoring prosody and verbal content (Face task). Participants responded faster in the Face task, and less accurately when the two non-focused channels expressed an emotion that was incongruent with the focused one, as compared with the condition where all the channels were congruent. In addition, in the Word task, accuracy was lower when prosody was incongruent with verbal content and face, as compared with the condition where all the channels were congruent. Our data suggest that prosody interferes with emotion word processing, eliciting automatic responses even when conflicting with both verbal content and facial expressions at the same time. In contrast, although processed significantly faster than prosody and verbal content, faces alone are not sufficient to interfere in emotion processing within a three-dimensional Stroop task. Our findings align with the hypothesis that the ability to communicate emotions through prosodic modulation of the voice, which seems to be dominant over verbal content, is evolutionarily older than the emergence of segmental articulation (Mithen, 2005; Fitch, 2010). This hypothesis fits with quantitative data suggesting that prosody plays a vital role in the perception of well-formed words (Johnson & Jusczyk, 2001), in the ability to map sounds to referential meanings (Filippi et al., 2014), and in syntactic disambiguation (Soderstrom et al., 2003). This research could complement studies on iconic communication within the visual and auditory domains, providing new insights for models of language evolution. Further work aimed at understanding how emotional cues from different modalities are simultaneously integrated will improve our understanding of how humans interpret multimodal emotional meanings in real-life interactions.
  • Fisher, S. E. (2016). A molecular genetic perspective on speech and language. In G. Hickok, & S. Small (Eds.), Neurobiology of Language (pp. 13-24). Amsterdam: Elsevier. doi:10.1016/B978-0-12-407794-2.00002-X.

    Abstract

    The rise of genomic technologies has yielded exciting new routes for studying the biological foundations of language. Researchers have begun to identify genes implicated in neurodevelopmental disorders that disrupt speech and language skills. This chapter illustrates, using FOXP2 as an example, how such work can provide powerful entry points into the critical neural pathways. Rare mutations of this gene cause problems with learning to sequence mouth movements during speech, accompanied by wide-ranging impairments in language production and comprehension. FOXP2 encodes a regulatory protein, a hub in a network of other genes, several of which have also been associated with language-related impairments. Versions of FOXP2 are found in similar form in many vertebrate species; indeed, studies of animals and birds suggest conserved roles in the development and plasticity of certain sets of neural circuits. Thus, the contributions of this gene to human speech and language involve modifications of evolutionarily ancient functions.
  • Floyd, S. (2016). Insubordination in Interaction: The Cha’palaa counter-assertive. In N. Evans, & H. Wananabe (Eds.), Dynamics of Insubordination (pp. 341-366). Amsterdam: John Benjamins.

    Abstract

    In the Cha’palaa language of Ecuador, the main-clause use of the otherwise non-finite morpheme -ba can be accounted for by a specific interactive practice: the ‘counter-assertion’ of a statement or implicature of a previous conversational turn. Attention to the ways in which different constructions are deployed in such recurrent conversational contexts reveals a plausible account of how this type of dependent clause has come to be one of the options for finite clauses. After giving some background on Cha’palaa and placing -ba clauses within a larger ecology of insubordination constructions in the language, this chapter uses examples from a video corpus of informal conversation to illustrate how interactive data provide answers that may otherwise be elusive for understanding how the different grammatical options for Cha’palaa finite verb constructions have been structured by insubordination.
  • Floyd, S., & Norcliffe, E. (2016). Switch reference systems in the Barbacoan languages and their neighbors. In R. Van Gijn, & J. Hammond (Eds.), Switch Reference 2.0 (pp. 207-230). Amsterdam: Benjamins.

    Abstract

    This chapter surveys the available data on Barbacoan languages and their neighbors to explore a case study of switch reference within a single language family and in a situation of areal contact. To the extent possible given the available data, we weigh accounts appealing to common inheritance and areal convergence to ask what combination of factors led to the current state of these languages. We discuss the areal distribution of switch reference systems in the northwest Andean region, the different types of systems and degrees of complexity observed, and scenarios of contact and convergence, particularly in the case of Barbacoan and Ecuadorian Quechua. We then cover each of the Barbacoan languages’ systems (with the exception of Totoró, represented by its close relative Guambiano), identifying limited formal cognates, primarily between the closely related Tsafiki and Cha’palaa, as well as broader functional similarities, particularly in terms of interactions with topic/focus markers. The chapter accounts for the current state of affairs with a complex scenario of areal prevalence of switch reference combined with deep structural family inheritance and formal restructuring of the systems over time.
  • Frost, R. L. A., Monaghan, P., & Christiansen, M. H. (2016). Using Statistics to Learn Words and Grammatical Categories: How High Frequency Words Assist Language Acquisition. In A. Papafragou, D. Mirman, & J. Trueswell (Eds.), Proceedings of the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016) (pp. 81-86). Austin, TX: Cognitive Science Society. Retrieved from https://mindmodeling.org/cogsci2016/papers/0027/index.html.

    Abstract

    Recent studies suggest that high-frequency words may benefit speech segmentation (Bortfeld, Morgan, Golinkoff, & Rathbun, 2005) and grammatical categorisation (Monaghan, Christiansen, & Chater, 2007). To date, these tasks have been examined separately, but not together. We familiarised adults with continuous speech comprising repetitions of target words, and compared learning to a language in which targets appeared alongside high-frequency marker words. Marker words reliably preceded targets, and distinguished them into two otherwise unidentifiable categories. Participants completed a 2AFC segmentation test, and a similarity judgement categorisation test. We tested transfer to a word-picture mapping task, where words from each category were used either consistently or inconsistently to label actions/objects. Participants segmented the speech successfully, but only demonstrated effective categorisation when speech contained high-frequency marker words. The advantage of marker words extended to the early stages of the transfer task. Findings indicate the same high-frequency words may assist speech segmentation and grammatical categorisation.
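    For readers unfamiliar with the statistic such segmentation studies build on, the toy computation below derives forward transitional probabilities, TP(a→b) = count(ab) / count(a), from a syllable stream; dips in TP are the classic cue to word boundaries. The stream and its syllabification are invented for illustration and are not the materials of this study.

    from collections import Counter

    stream = "badugepibadutokagepibadu"  # hypothetical syllable stream
    syllables = [stream[i:i + 2] for i in range(0, len(stream), 2)]

    pairs = Counter(zip(syllables, syllables[1:]))   # bigram counts
    firsts = Counter(syllables[:-1])                 # unigram counts

    def tp(a, b):
        """Forward transitional probability TP(a -> b)."""
        return pairs[(a, b)] / firsts[a]

    for (a, b) in pairs:
        print(f"TP({a}->{b}) = {tp(a, b):.2f}")  # low values suggest boundaries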
  • Gamba, M., Raimondi, T., De Gregorio, C., Valente, D., Carugati, F., Cristiano, W., Ferrario, V., Torti, V., Favaro, L., Friard, O., Giacoma, C., & Ravignani, A. (2023). Rhythmic categories across primate vocal displays. In A. Astolfi, F. Asdrubali, & L. Shtrepi (Eds.), Proceedings of the 10th Convention of the European Acoustics Association Forum Acusticum 2023 (pp. 3971-3974). Torino: European Acoustics Association.

    Abstract

    The last few years have revealed that several species may share the building blocks of musicality with humans. The recognition of these building blocks (e.g., rhythm, frequency variation) was a necessary impetus for a new round of studies investigating rhythmic variation in animal vocal displays. Singing primates are a small group of primate species that produce modulated songs ranging from tens to thousands of vocal units. Previous studies showed that the indri, the only singing lemur, is currently the only known species that performs duets and choruses showing multiple rhythmic categories, as seen in human music. Rhythmic categories occur when temporal intervals between note onsets are not uniformly distributed, and rhythms with a small integer ratio between these intervals are typical of human music. Besides indris, white-handed gibbons and three crested gibbon species showed a prominent rhythmic category corresponding to a single small integer ratio, isochrony. This study reviews previous evidence on the co-occurrence of rhythmic categories in primates and focuses on the prospects for a comparative, multimodal study of rhythmicity in this clade.
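    The interval-ratio analysis described above can be made concrete with a small sketch: from note onset times, compute the inter-onset intervals I_k and the ratios r_k = I_k / (I_k + I_{k+1}); a cluster of r values around 0.5 corresponds to the 1:1 category, isochrony. The onset times below are invented for illustration.

    onsets = [0.0, 0.5, 1.0, 1.6, 2.1, 2.6]  # hypothetical note onsets (s)

    # Inter-onset intervals I_k between successive notes.
    intervals = [b - a for a, b in zip(onsets, onsets[1:])]

    # Ratio of each interval to its sum with the next: r_k = I_k / (I_k + I_{k+1}).
    ratios = [i / (i + j) for i, j in zip(intervals, intervals[1:])]

    print(ratios)  # values near 0.5 indicate isochronous singing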
  • Gannon, E., He, J., Gao, X., & Chaparro, B. (2016). RSVP Reading on a Smart Watch. In Proceedings of the Human Factors and Ergonomics Society 2016 Annual Meeting (pp. 1130-1134).

    Abstract

    Reading with Rapid Serial Visual Presentation (RSVP) has shown promise for optimizing screen space and increasing reading speed without compromising comprehension. Given the wide use of small-screen devices, the present study compared RSVP and traditional reading on three types of reading comprehension, reading speed, and subjective measures on a smart watch. Results confirm previous studies that show faster reading speed with RSVP without detracting from comprehension. Subjective data indicate that traditional reading is strongly preferred to RSVP as a primary reading method. Given the optimal use of screen space, increased speed, and comparable comprehension, future studies should focus on making RSVP a more comfortable format.
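    For concreteness, the sketch below shows the pacing rule at the heart of RSVP: words are shown one at a time at a fixed screen position, each displayed for 60/wpm seconds. The rate and the console-based display are illustrative assumptions, not the study's apparatus.

    import time

    def rsvp(text, wpm=250):
        """Display words one at a time at a fixed position, paced by wpm."""
        delay = 60.0 / wpm  # seconds per word
        for word in text.split():
            print(f"\r{word:^20}", end="", flush=True)  # one word, one spot
            time.sleep(delay)
        print()

    rsvp("Reading with rapid serial visual presentation saves screen space")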
  • Gerwien, J., & Flecken, M. (2016). First things first? Top-down influences on event apprehension. In A. Papafragou, D. Grodner, D. Mirman, & J. Trueswell (Eds.), Proceedings of the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016) (pp. 2633-2638). Austin, TX: Cognitive Science Society.

    Abstract

    Not much is known about event apprehension, the earliest stage of information processing in elicited language production studies using pictorial stimuli. A reason for our lack of knowledge on this process is that apprehension happens very rapidly (<350 ms after stimulus onset, Griffin & Bock, 2000), making it difficult to measure the process directly. To broaden our understanding of apprehension, we analyzed landing positions and onset latencies of first fixations on visual stimuli (pictures of real-world events) given short stimulus presentation times, presupposing that the first fixation directly results from information processing during apprehension.
  • Gordon, P. C., Lowder, M. W., & Hoedemaker, R. S. (2016). Reading in normally aging adults. In H. Wright (Ed.), Cognitive-Linguistic Processes and Aging (pp. 165-192). Amsterdam: Benjamins. doi:10.1075/z.200.07gor.

    Abstract

    The activity of reading raises fundamental theoretical and practical questions about healthy cognitive aging. Reading relies greatly on knowledge of patterns of language and of meaning at the level of words and topics of text. Further, this knowledge must be rapidly accessed so that it can be coordinated with processes of perception, attention, memory and motor control that sustain skilled reading at rates of four-to-five words a second. As such, reading depends both on crystallized semantic intelligence which grows or is maintained through healthy aging, and on components of fluid intelligence which decline with age. Reading is important to older adults because it facilitates completion of everyday tasks that are essential to independent living. In addition, it entails the kind of active mental engagement that can preserve and deepen the cognitive reserve that may mitigate the negative consequences of age-related changes in the brain. This chapter reviews research on the front end of reading (word recognition) and on the back end of reading (text memory) because both of these abilities are surprisingly robust to declines associated with cognitive aging. For word recognition, that robustness is surprising because rapid processing of the sort found in reading is usually impaired by aging; for text memory, it is surprising because other types of episodic memory performance (e.g., paired associates) substantially decline in aging. These two otherwise quite different levels of reading comprehension remain robust because they draw on the knowledge of language that older adults gain through a life-time of experience with language.
  • Green, K., Osei-Cobbina, C., Perlman, M., & Kita, S. (2023). Infants can create different types of iconic gestures, with and without parental scaffolding. In W. Pouw, J. Trujillo, H. R. Bosker, L. Drijvers, M. Hoetjes, J. Holler, S. Kadava, L. Van Maastricht, E. Mamus, & A. Ozyurek (Eds.), Gesture and Speech in Interaction (GeSpIn) Conference. doi:10.17617/2.3527188.

    Abstract

    Despite the early emergence of pointing, children are generally not documented to produce iconic gestures until later in development. Although research has described this developmental trajectory and the types of iconic gestures that emerge first, there has been limited focus on iconic gestures within interactional contexts. This study identified the first 10 iconic gestures produced by five monolingual English-speaking children in a naturalistic longitudinal video corpus and analysed the interactional contexts. We found children produced their first iconic gesture between 12 and 20 months and that gestural types varied. Although 34% of gestures could have been imitated or derived from adult or child actions in the preceding context, the majority were produced independently of any observed model. In these cases, adults often led the interaction in a direction where iconic gesture was an appropriate response. Overall, we find infants can represent a referent symbolically and possess a greater capacity for innovation than previously assumed. In order to develop our understanding of how children learn to produce iconic gestures, it is important to consider the immediate interactional context. Conducting naturalistic corpus analyses could be a more ecologically valid approach to understanding how children learn to produce iconic gestures in real life contexts.
  • Gussenhoven, C., & Chen, A. (2000). Universal and language-specific effects in the perception of question intonation. In B. Yuan, T. Huang, & X. Tang (Eds.), Proceedings of the 6th International Conference on Spoken Language Processing (ICSLP) (pp. 91-94). Beijing: China Military Friendship Publish.

    Abstract

    Three groups of monolingual listeners, with Standard Chinese, Dutch and Hungarian as their native language, judged pairs of trisyllabic stimuli which differed only in their pitch pattern. The segmental structure of the stimuli was made up by the experimenters and presented to subjects as being taken from a little-known language spoken on a South Pacific island. Pitch patterns consisted of a single rise-fall located on or near the second syllable. By and large, listeners selected the stimulus with the higher peak, the later peak, and the higher end rise as the one that signalled a question, regardless of language group. The result is argued to reflect innate, non-linguistic knowledge of the meaning of pitch variation, notably Ohala’s Frequency Code. A significant difference between groups is explained as due to the influence of the mother tongue.
  • Hagoort, P., & Brown, C. M. (1994). Brain responses to lexical ambiguity resolution and parsing. In C. Clifton Jr, L. Frazier, & K. Rayner (Eds.), Perspectives on sentence processing (pp. 45-81). Hilsdale NY: Lawrence Erlbaum Associates.
  • Hagoort, P. (2016). MUC (Memory, Unification, Control): A Model on the Neurobiology of Language Beyond Single Word Processing. In G. Hickok, & S. Small (Eds.), Neurobiology of language (pp. 339-347). Amsterdam: Elsevier. doi:10.1016/B978-0-12-407794-2.00028-6.

    Abstract

    A neurobiological model of language is discussed that overcomes the shortcomings of the classical Wernicke-Lichtheim-Geschwind model. It is based on a subdivision of language processing into three components: Memory, Unification, and Control. The functional components as well as the neurobiological underpinnings of the model are discussed. In addition, the need for extension beyond the classical core regions for language is shown. Attentional networks as well as networks for inferential processing are crucial to realize language comprehension beyond single word processing and beyond decoding propositional content.
  • Hagoort, P. (1998). The shadows of lexical meaning in patients with semantic impairments. In B. Stemmer, & H. Whitaker (Eds.), Handbook of neurolinguistics (pp. 235-248). New York: Academic Press.
  • Hagoort, P. (2016). Zij zijn ons brein. In J. Brockman (Ed.), Machines die denken: Invloedrijke denkers over de komst van kunstmatige intelligentie (pp. 184-186). Amsterdam: Maven Publishing.
  • Harbusch, K., & Kempen, G. (2000). Complexity of linear order computation in Performance Grammar, TAG and HPSG. In Proceedings of the Fifth International Workshop on Tree Adjoining Grammars and Related Formalisms (TAG+5) (pp. 101-106).

    Abstract

    This paper investigates the time and space complexity of word order computation in the psycholinguistically motivated grammar formalism of Performance Grammar (PG). In PG, the first stage of syntax assembly yields an unordered tree ('mobile') consisting of a hierarchy of lexical frames (lexically anchored elementary trees). Associated with each lexical frame is a linearizer, a Finite-State Automaton that locally computes the left-to-right order of the branches of the frame. Linearization takes place after the promotion component may have raised certain constituents (e.g. Wh- or focused phrases) into the domain of lexical frames higher up in the syntactic mobile. We show that the worst-case time and space complexity of analyzing input strings of length n is O(n⁵) and O(n⁴), respectively. This result compares favorably with the time complexity of word-order computations in Tree Adjoining Grammar (TAG). A comparison with Head-Driven Phrase Structure Grammar (HPSG) reveals that PG yields a more declarative linearization method, provided that the FSA is rewritten as an equivalent regular expression.
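    The paper's closing observation, that a local FSA linearizer can be recast as an equivalent regular expression over branch labels, can be illustrated with a toy fragment. The branch labels and the ordering constraint below (an optional fronted wh-slot, then subject before head before an optional object) are hypothetical, not taken from PG itself.

    import re

    # Toy linearizer for one lexical frame, stated as a regular expression
    # over branch labels rather than as an explicit finite-state automaton.
    LINEARIZER = re.compile(r"^(?:wh )?subj head(?: obj)?$")

    def licensed(order):
        """Check whether a proposed left-to-right branch order is licensed."""
        return bool(LINEARIZER.match(" ".join(order)))

    print(licensed(["subj", "head", "obj"]))  # True
    print(licensed(["wh", "subj", "head"]))   # True
    print(licensed(["obj", "subj", "head"]))  # False: object may not front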
  • Harmon, Z., & Kapatsinski, V. (2016). Fuse to be used: A weak cue’s guide to attracting attention. In A. Papafragou, D. Grodner, D. Mirman, & J. Trueswell (Eds.), Proceedings of the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016) (pp. 520-525). Austin, TX: Cognitive Science Society.

    Abstract

    Several studies have examined cue competition in human learning by testing learners on a combination of conflicting cues rooting for different outcomes, with each cue perfectly predicting its outcome. A common result has been that learners faced with cue conflict choose the outcome associated with the rare cue (the Inverse Base Rate Effect, IBRE). Here, we investigate cue competition, including IBRE, with sentences containing cues to meanings in a visual world. We do not observe IBRE. Instead, we find that position in the sentence strongly influences cue salience. Faced with a conflict between an initial cue and a non-initial cue, learners choose the outcome associated with the initial cue, whether frequent or rare. However, a frequent configuration of non-initial cues that are not sufficiently salient on their own can overcome a competing salient initial cue rooting for a different meaning. This provides a possible explanation for certain recurring patterns in language change.
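    To see why the IBRE is considered puzzling for simple associative accounts, the sketch below trains a Rescorla-Wagner-style delta rule on the classic design (frequent AB→O1, rare AC→O2) and queries the conflicting test compound BC: at convergence the two outcomes receive roughly equal support, so neither the human rare-outcome preference nor the order effects reported here fall out of the model. This is a generic illustration under stated assumptions, not the authors' materials or analysis.

    import random

    cues, outcomes = ["A", "B", "C"], ["O1", "O2"]
    w = {(c, o): 0.0 for c in cues for o in outcomes}
    alpha = 0.1

    # Classic IBRE design: compound AB predicts O1 three times as often
    # as AC predicts O2.
    trials = [("AB", "O1")] * 30 + [("AC", "O2")] * 10
    random.shuffle(trials)

    for compound, outcome in trials:
        for o in outcomes:
            pred = sum(w[(c, o)] for c in compound)   # summed prediction
            target = 1.0 if o == outcome else 0.0
            for c in compound:
                w[(c, o)] += alpha * (target - pred)  # delta-rule update

    # Support for each outcome given the conflicting test compound BC:
    # the delta rule yields roughly equal support, i.e. no rare preference.
    for o in outcomes:
        print(o, round(w[("B", o)] + w[("C", o)], 2))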
  • Harmon, Z., & Kapatsinski, V. (2016). Determinants of lengths of repetition disfluencies: Probabilistic syntactic constituency in speech production. In R. Burkholder, C. Cisneros, E. R. Coppess, J. Grove, E. A. Hanink, H. McMahan, C. Meyer, N. Pavlou, Ö. Sarıgül, A. R. Singerman, & A. Zhang (Eds.), Proceedings of the Fiftieth Annual Meeting of the Chicago Linguistic Society (pp. 237-248). Chicago: Chicago Linguistic Society.
  • Hendrickx, I., Lefever, E., Croijmans, I., Majid, A., & Van den Bosch, A. (2016). Very quaffable and great fun: Applying NLP to wine reviews. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics: Vol 2 (pp. 306-312). Stroudsburg, PA: Association for Computational Linguistics.

    Abstract

    We automatically predict properties of wines on the basis of smell and flavor descriptions from experts’ wine reviews. We show wine experts are capable of describing their smell and flavor experiences in wine reviews in a sufficiently consistent manner, such that we can use their descriptions to predict properties of a wine based solely on language. The experimental results show promising F-scores when using lexical and semantic information to predict the color, grape variety, country of origin, and price of a wine. This demonstrates, contrary to popular opinion, that wine experts’ reviews really are informative.
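    As a hedged sketch of how this kind of text-based prediction can be set up (the paper's actual features and classifier are not specified in this abstract), a bag-of-words pipeline in scikit-learn suffices to predict a property such as color from review text; the two toy reviews below are invented.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical stand-ins for the review texts and the wine colors.
    reviews = ["cherry and leather, firm tannins", "crisp, citrus, green apple"]
    colors = ["red", "white"]

    # Lexical features (word and bigram TF-IDF) feeding a linear classifier.
    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    clf.fit(reviews, colors)
    print(clf.predict(["notes of blackberry and tobacco"]))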
  • Hintz, F., & Scharenborg, O. (2016). Neighbourhood density influences word recognition in native and non-native speech recognition in noise. In H. Van den Heuvel, B. Cranen, & S. Mattys (Eds.), Proceedings of the Speech Processing in Realistic Environments (SPIRE) workshop (pp. 46-47). Groningen.
  • Hintz, F., & Scharenborg, O. (2016). The effect of background noise on the activation of phonological and semantic information during spoken-word recognition. In Proceedings of Interspeech 2016: The 17th Annual Conference of the International Speech Communication Association (pp. 2816-2820).

    Abstract

    During spoken-word recognition, listeners experience phonological competition between multiple word candidates, which increases, relative to optimal listening conditions, when speech is masked by noise. Moreover, listeners activate semantic word knowledge during the word’s unfolding. Here, we replicated the effect of background noise on phonological competition and investigated to what extent noise affects the activation of semantic information in phonological competitors. Participants’ eye movements were recorded when they listened to sentences containing a target word and looked at three types of displays. The displays either contained a picture of the target word, or a picture of a phonological onset competitor, or a picture of a word semantically related to the onset competitor, each along with three unrelated distractors. The analyses revealed that, in noise, fixations to the target and to the phonological onset competitor were delayed and smaller in magnitude compared to the clean listening condition, most likely reflecting enhanced phonological competition. No evidence for the activation of semantic information in the phonological competitors was observed in noise and, surprisingly, also not in the clean listening condition. We discuss the implications of the lack of an effect and differences between the present and earlier studies.
  • Indefrey, P., & Levelt, W. J. M. (2000). The neural correlates of language production. In M. S. Gazzaniga (Ed.), The new cognitive neurosciences; 2nd ed. (pp. 845-865). Cambridge, MA: MIT Press.

    Abstract

    This chapter reviews the findings of 58 word production experiments using different tasks and neuroimaging techniques. The reported cerebral activation sites are coded in a common anatomic reference system. Based on a functional model of language production, the different word production tasks are analyzed in terms of their processing components. This approach allows a distinction between the core process of word production and preceding task-specific processes (lead-in processes) such as visual or auditory stimulus recognition. The core process of word production is subserved by a left-lateralized perisylvian/thalamic language production network. Within this network there seems to be functional specialization for the processing stages of word production. In addition, this chapter includes a discussion of the available evidence on syntactic production, self-monitoring, and the time course of word production.
  • Ingvar, M., & Petersson, K. M. (2000). Functional maps and brain networks. In A. W. Toga (Ed.), Brain mapping: The systems (pp. 111-140). San Diego: Academic Press.
  • Irvine, E., & Roberts, S. G. (2016). Deictic tools can limit the emergence of referential symbol systems. In S. G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Feher, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/99.html.

    Abstract

    Previous experiments and models show that the pressure to communicate can lead to the emergence of symbols in specific tasks. The experiment presented here suggests that the ability to use deictic gestures can reduce the pressure for symbols to emerge in co-operative tasks. In the 'gesture-only' condition, pairs built a structure together in 'Minecraft', and could only communicate using a small range of gestures. In the 'gesture-plus' condition, pairs could also use sound to develop a symbol system if they wished. All pairs were taught a pointing convention. None of the pairs we tested developed a symbol system, and performance was no different across the two conditions. We therefore suggest that deictic gestures, and non-referential means of organising activity sequences, are often sufficient for communication. This suggests that the emergence of linguistic symbols in early hominids may have been late and patchy with symbols only emerging in contexts where they could significantly improve task success or efficiency. Given the communicative power of pointing however, these contexts may be fewer than usually supposed. An approach for identifying these situations is outlined.
