Publications

  • Adam, R., Orfanidou, E., McQueen, J. M., & Morgan, G. (2011). Sign language comprehension: Insights from misperceptions of different phonological parameters. In R. Channon, & H. Van der Hulst (Eds.), Formational units in sign languages (pp. 87-106). Berlin: Mouton de Gruyter and Ishara Press.
  • Alday, P. M. (2016). Towards a rigorous motivation for Zipf's law. In S. G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Feher, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/178.html.

    Abstract

    Language evolution can be viewed from two viewpoints: the development of a communicative system and the biological adaptations necessary for producing and perceiving said system. The communicative-system vantage point has enjoyed a wealth of mathematical models based on simple distributional properties of language, often formulated as empirical laws. However, beyond vague psychological notions of “least effort”, no principled explanation has been proposed for the existence and success of such laws. Meanwhile, psychological and neurobiological models have focused largely on the computational constraints presented by incremental, real-time processing. In the following, we show that information-theoretic entropy underpins successful models of both types and provides a more principled motivation for Zipf’s Law.
  • Alhama, R. G., & Zuidema, W. (2016). Generalization in Artificial Language Learning: Modelling the Propensity to Generalize. In Proceedings of the 7th Workshop on Cognitive Aspects of Computational Language Learning (pp. 64-72). Association for Computational Linguistics. doi:10.18653/v1/W16-1909.

    Abstract

    Experiments in Artificial Language Learning have revealed much about the cognitive mechanisms underlying sequence and language learning in human adults, in infants and in non-human animals. This paper focuses on their ability to generalize to novel grammatical instances (i.e., instances consistent with a familiarization pattern). Notably, the propensity to generalize appears to be negatively correlated with the amount of exposure to the artificial language, a fact that has been claimed to be contrary to the predictions of statistical models (Peña et al. (2002); Endress and Bonatti (2007)). In this paper, we propose to model generalization as a three-step process, and we demonstrate that the use of statistical models for the first two steps, contrary to widespread intuitions in the ALL-field, can explain the observed decrease of the propensity to generalize with exposure time.
  • Alhama, R. G., & Zuidema, W. (2016). Pre-Wiring and Pre-Training: What does a neural network need to learn truly general identity rules? In T. R. Besold, A. Bordes, & A. D'Avila Garcez (Eds.), CoCo 2016 Cognitive Computation: Proceedings of the Workshop on Cognitive Computation: Integrating neural and symbolic approaches 2016. CEUR Workshop Proceedings.

    Abstract

    In an influential paper, Marcus et al. [1999] claimed that connectionist models cannot account for human success at learning tasks that involved generalization of abstract knowledge such as grammatical rules. This claim triggered a heated debate, centered mostly around variants of the Simple Recurrent Network model [Elman, 1990]. In our work, we revisit this unresolved debate and analyze the underlying issues from a different perspective. We argue that, in order to simulate human-like learning of grammatical rules, a neural network model should not be used as a tabula rasa, but rather, the initial wiring of the neural connections and the experience acquired prior to the actual task should be incorporated into the model. We present two methods that aim to provide such an initial state: a manipulation of the initial connections of the network in a cognitively plausible manner (concretely, by implementing a “delay-line” memory), and a pre-training algorithm that incrementally challenges the network with novel stimuli. We implement such techniques in an Echo State Network [Jaeger, 2001], and we show that only when combining both techniques is the ESN able to learn truly general identity rules.
  • Allen, S. E. M. (1998). A discourse-pragmatic explanation for the subject-object asymmetry in early null arguments. In A. Sorace, C. Heycock, & R. Shillcock (Eds.), Proceedings of the GALA '97 Conference on Language Acquisition (pp. 10-15). Edinburgh, UK: Edinburgh University Press.

    Abstract

    The present paper assesses discourse-pragmatic factors as a potential explanation for the subject-object asymmetry in early child language. It identifies a set of factors which characterize typical situations of informativeness (Greenfield & Smith, 1976), and uses these factors to identify informative arguments in data from four children aged 2;0 through 3;6 learning Inuktitut as a first language. In addition, it assesses the extent of the links between features of informativeness on one hand and lexical vs. null and subject vs. object arguments on the other. Results suggest that a pragmatics account of the subject-object asymmetry can be upheld to a greater extent than previous research indicates, and that several of the factors characterizing informativeness are good indicators of those arguments which tend to be omitted in early child language.
  • Azar, Z., Backus, A., & Ozyurek, A. (2016). Pragmatic relativity: Gender and context affect the use of personal pronouns in discourse differentially across languages. In A. Papafragou, D. Grodner, D. Mirman, & J. Trueswell (Eds.), Proceedings of the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016) (pp. 1295-1300). Austin, TX: Cognitive Science Society.

    Abstract

    Speakers use differential referring expressions in pragmatically appropriate ways to produce coherent narratives. Languages, however, differ in a) whether REs as arguments can be dropped and b) whether personal pronouns encode gender. We examine two languages that differ from each other in these two aspects and ask whether the co-reference context and the gender encoding options affect the use of REs differentially. We elicited narratives from Dutch and Turkish speakers about two types of three-person events, one including people of the same gender and the other of mixed gender. Speakers re-introduced referents into the discourse with fuller forms (NPs) and maintained them with reduced forms (overt or null pronoun). Turkish speakers used pronouns mainly to mark emphasis and only Dutch speakers used pronouns differentially across the two types of videos. We argue that linguistic possibilities available in languages tune speakers into taking different principles into account to produce pragmatically coherent narratives.
  • Bardhan, N. P., & Weber, A. (2011). Listening to a novel foreign accent, with long lasting effects [Abstract]. Journal of the Acoustical Society of America. Program abstracts of the 162nd Meeting of the Acoustical Society of America, 130(4), 2445.

    Abstract

    In conversation, listeners frequently encounter speakers with foreign accents. Previous research on foreign-accented speech has primarily examined the short-term effects of exposure and the relative ease that listeners have with adapting to an accent. The present study examines the stability of this adaptation, with seven full days between testing sessions. On both days, subjects performed a cross-modal priming task in which they heard several minutes of an unfamiliar accent of their native language: a form of Hebrew-accented Dutch in which long /i:/ was shortened to /I/. During this task on Day 1, recognition of accented forms was not facilitated, compared to that of canonical forms. A week later, when tested on new words, facilitatory priming occurred, comparable to that seen for canonically produced items tested in both sessions. These results suggest that accented forms can be learned from brief exposure and the stable effects of this can be seen a week later.
  • Bauer, B. L. M. (2016). The development of the comparative in Latin texts. In J. N. Adams, & N. Vincent (Eds.), Early and late Latin. Continuity or change? (pp. 313-339). Cambridge: Cambridge University Press.
  • Bauer, B. L. M. (2011). Word formation. In M. Maiden, J. C. Smith, & A. Ledgeway (Eds.), The Cambridge history of the Romance languages. Vol. I. structures (pp. 532-563). Cambridge: Cambridge University Press.
  • Bergmann, C., Cristia, A., & Dupoux, E. (2016). Discriminability of sound contrasts in the face of speaker variation quantified. In Proceedings of the 38th Annual Conference of the Cognitive Science Society (pp. 1331-1336). Austin, TX: Cognitive Science Society.

    Abstract

    How does a naive language learner deal with speaker variation irrelevant to distinguishing word meanings? Experimental data is contradictory, and incompatible models have been proposed. Here, we examine basic assumptions regarding the acoustic signal the learner deals with: Is speaker variability a hurdle in discriminating sounds or can it easily be ignored? To this end, we summarize existing infant data. We then present machine-based discriminability scores of sound pairs obtained without any language knowledge. Our results show that speaker variability decreases sound contrast discriminability, and that some contrasts are affected more than others. However, chance performance is rare; most contrasts remain discriminable in the face of speaker variation. We take our results to mean that speaker variation is not a uniform hurdle to discriminating sound contrasts, and careful examination is necessary when planning and interpreting studies testing whether and to what extent infants (and adults) are sensitive to speaker differences.

    Additional information

    Scripts and data
  • Bergmann, C., Boves, L., & Ten Bosch, L. (2011). Measuring word learning performance in computational models and infants. In Proceedings of the IEEE Conference on Development and Learning, and Epigenetic Robotics. Frankfurt am Main, Germany, 24-27 Aug. 2011.

    Abstract

    In the present paper we investigate the effect of categorising raw behavioural data or computational model responses. In addition, the effect of averaging over stimuli from potentially different populations is assessed. To this end, we replicate studies on word learning and generalisation abilities using the ACORNS models. Our results show that discrete categories may obscure interesting phenomena in the continuous responses. For example, the finding that learning in the model saturates very early at a uniform high recognition accuracy only holds for categorical representations. Additionally, a large difference in the accuracy for individual words is obscured by averaging over all stimuli. Because different words behaved differently for different speakers, we could not identify a phonetic basis for the differences. Implications and new predictions for infant behaviour are discussed.
  • Bergmann, C., Boves, L., & Ten Bosch, L. (2011). Thresholding word activations for response scoring - Modelling psycholinguistic data. In Proceedings of the 12th Annual Conference of the International Speech Communication Association [Interspeech 2011] (pp. 769-772). ISCA.

    Abstract

    In the present paper we investigate the effect of categorising raw behavioural data or computational model responses. In addition, the effect of averaging over stimuli from potentially different populations is assessed. To this end, we replicate studies on word learning and generalisation abilities using the ACORNS models. Our results show that discrete categories may obscure interesting phenomena in the continuous responses. For example, the finding that learning in the model saturates very early at a uniform high recognition accuracy only holds for categorical representations. Additionally, a large difference in the accuracy for individual words is obscured by averaging over all stimuli. Because different words behaved differently for different speakers, we could not identify a phonetic basis for the differences. Implications and new predictions for infant behaviour are discussed.
  • Blythe, J. (2011). Laughter is the best medicine: Roles for prosody in a Murriny Patha conversational narrative. In B. Baker, I. Mushin, M. Harvey, & R. Gardner (Eds.), Indigenous Language and Social Identity: Papers in Honour of Michael Walsh (pp. 223-236). Canberra: Pacific Linguistics.
  • Bohnemeyer, J., Enfield, N. J., Essegbey, J., Majid, A., & van Staden, M. (2011). Configuraciones temáticas atípicas y el uso de predicados complejos en perspectiva tipológica [Atypical thematic configurations and the use of complex predicates in typological perspective]. In A. L. Munguía (Ed.), Colección Estudios Lingüísticos. Vol. I: Fonología, morfología, y tipología semántico-sintáctica [Collection Linguistic Studies. Vol 1: Phonology, morphology, and semantico-syntactic typology] (pp. 173-194). Hermosillo, Mexico: Universidad de Sonora.
  • Bohnemeyer, J., Burenhult, N., Enfield, N. J., & Levinson, S. C. (2011). Landscape terms and place names questionnaire. In K. Kendrick, & A. Majid (Eds.), Field manual volume 14 (pp. 19-23). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.1005606.
  • Bohnemeyer, J. (1998). Temporale Relatoren im Hispano-Yukatekischen Sprachkontakt. In A. Koechert, & T. Stolz (Eds.), Convergencia e Individualidad - Las lenguas Mayas entre hispanización e indigenismo (pp. 195-241). Hannover, Germany: Verlag für Ethnologie.
  • Bohnemeyer, J. (1998). Sententiale Topics im Yukatekischen. In Z. Dietmar (Ed.), Deskriptive Grammatik und allgemeiner Sprachvergleich (pp. 55-85). Tübingen, Germany: Max-Niemeyer-Verlag.
  • Bohnemeyer, J., Enfield, N. J., Essegbey, J., & Kita, S. (2011). The macro-event property: The segmentation of causal chains. In J. Bohnemeyer, & E. Pederson (Eds.), Event representation in language and cognition (pp. 43-67). New York: Cambridge University Press.
  • Bosker, H. R., Reinisch, E., & Sjerps, M. J. (2016). Listening under cognitive load makes speech sound fast. In H. van den Heuvel, B. Cranen, & S. Mattys (Eds.), Proceedings of the Speech Processing in Realistic Environments [SPIRE] Workshop (pp. 23-24). Groningen.
  • Bosker, H. R. (2016). Our own speech rate influences speech perception. In J. Barnes, A. Brugos, S. Shattuck-Hufnagel, & N. Veilleux (Eds.), Proceedings of Speech Prosody 2016 (pp. 227-231).

    Abstract

    During conversation, spoken utterances occur in rich acoustic contexts, including speech produced by our interlocutor(s) and speech we produced ourselves. Prosodic characteristics of the acoustic context have been known to influence speech perception in a contrastive fashion: for instance, a vowel presented in a fast context is perceived to have a longer duration than the same vowel in a slow context. Given the ubiquity of the sound of our own voice, it may be that our own speech rate - a common source of acoustic context - also influences our perception of the speech of others. Two experiments were designed to test this hypothesis. Experiment 1 replicated earlier contextual rate effects by showing that hearing pre-recorded fast or slow context sentences alters the perception of ambiguous Dutch target words. Experiment 2 then extended this finding by showing that talking at a fast or slow rate prior to the presentation of the target words also altered the perception of those words. These results suggest that between-talker variation in speech rate production may induce between-talker variation in speech perception, thus potentially explaining why interlocutors tend to converge on speech rate in dialogue settings.

    Additional information

    pdf via conference website
  • Bottini, R., & Casasanto, D. (2011). Space and time in the child’s mind: Further evidence for a cross-dimensional asymmetry [Abstract]. In L. Carlson, C. Hölscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Conference of the Cognitive Science Society (p. 3010). Austin, TX: Cognitive Science Society.

    Abstract

    Space and time appear to be related asymmetrically in the child’s mind: temporal representations depend on spatial representations more than vice versa, as predicted by space-time metaphors in language. In a study supporting this conclusion, spatial information interfered with children’s temporal judgments more than vice versa (Casasanto, Fotakopoulou, & Boroditsky, 2010, Cognitive Science). In this earlier study, however, spatial information was available to participants for more time than temporal information was (as is often the case when people observe natural events), suggesting a skeptical explanation for the observed effect. Here we conducted a stronger test of the hypothesized space-time asymmetry, controlling spatial and temporal aspects of the stimuli even more stringently than they are generally ’controlled’ in the natural world. Results replicated those of Casasanto and colleagues, validating their finding of a robust representational asymmetry between space and time, and extending it to children (4-10 y.o.) who speak Dutch and Brazilian Portuguese.
  • Bowerman, M. (1985). Beyond communicative adequacy: From piecemeal knowledge to an integrated system in the child's acquisition of language. In K. Nelson (Ed.), Children's language (pp. 369-398). Hillsdale, N.J.: Lawrence Erlbaum.

    Abstract

    (From the chapter) The first section considers very briefly the kinds of processes that can be inferred to underlie errors that do not set in until after a period of correct usage; acquisition often seems to be a more extended process than we have envisioned. The chapter summarizes a currently influential model of how linguistic forms, meaning, and communication are interrelated in the acquisition of language, points out some challenging problems for this model, and suggests that the notion of "meaning" in language must be reconceptualized before we can hope to solve these problems; evidence from several types of late errors is marshalled in support of these arguments. (From the preface) The chapter provides many examples of new errors that children introduce at relatively advanced stages of mastery of semantics and syntax. Bowerman views these seemingly backwards steps as indications of definite steps forward by the child achieving reflective, flexible and integrated systems of semantics and syntax.
  • Bowerman, M. (2011). Linguistic typology and first language acquisition. In J. J. Song (Ed.), The Oxford handbook of linguistic typology (pp. 591-617). Oxford: Oxford University Press.
  • Bowerman, M. (1985). What shapes children's grammars? In D. Slobin (Ed.), The crosslinguistic study of language acquisition (pp. 1257-1319). Hillsdale, N.J.: Lawrence Erlbaum.
  • Brenner, D., Warner, N., Ernestus, M., & Tucker, B. V. (2011). Parsing the ambiguity of casual speech: “He was like” or “He’s like”? [Abstract]. The Journal of the Acoustical Society of America, 129(4 Pt. 2), 2683.

    Abstract

    Paper presented at the 161st Meeting of the Acoustical Society of America, Seattle, Washington, 23-27 May 2011. Reduction in casual speech can create ambiguity, e.g., “he was” can sound like “he’s.” Before quotative “like” (“so she’s/she was like…”), it was found that there is little accurate acoustic information about the distinction in the signal. This work examines what types of information (acoustics of the target itself, speech rate, coarticulation, and syntax/semantics) listeners use to recognize such reduced function words. We compare perception studies presenting the targets auditorily with varying amounts of context, presenting the context without the targets, and a visual study presenting context in written form. Given primarily discourse information (visual or auditory context only), subjects are strongly biased toward past, reflecting the use of quotative “like” for reporting past speech. However, if the target itself is presented, the direction of bias reverses, indicating that listeners favor acoustic information within the target (which is reduced, sounding like the shorter, present form) over almost any other source of information. Furthermore, when the target is presented auditorily with surrounding context, the bias shifts slightly toward the direction shown in the orthographic or auditory-no-target experiments. Thus, listeners prioritize acoustic information within the target when present, even if that information is misleading, but they also take discourse information into account.
  • Broeder, D., Sloetjes, H., Trilsbeek, P., Van Uytvanck, D., Windhouwer, M., & Wittenburg, P. (2011). Evolving challenges in archiving and data infrastructures. In G. L. J. Haig, N. Nau, S. Schnell, & C. Wegener (Eds.), Documenting endangered languages: Achievements and perspectives (pp. 33-54). Berlin: De Gruyter.

    Abstract

    Increasingly often, research in the humanities is based on data. This change in attitude and research practice is driven to a large extent by the availability of small and cheap yet high-quality recording equipment (video cameras, audio recorders) as well as advances in information technology (faster networks, larger data storage, larger computation power, suitable software). In some institutes, such as the Max Planck Institute for Psycholinguistics, a clear trend towards an all-digital domain could already be identified in the 1990s, making use of state-of-the-art technology for research purposes. This change of habits was one of the reasons for the Volkswagen Foundation to establish the DoBeS program in 2000 with a clear focus on language documentation based on recordings as primary material.
  • Broersma, M. (2011). Triggered code-switching: Evidence from picture naming experiments. In M. S. Schmid, & W. Lowie (Eds.), Modeling bilingualism: From structure to chaos. In honor of Kees de Bot (pp. 37-58). Amsterdam: Benjamins.

    Abstract

    This paper presents experimental evidence that cognates can trigger codeswitching. In two picture naming experiments, Dutch-English bilinguals switched between Dutch and English. Crucial words followed either a cognate or a non-cognate. In Experiment 1, response language was indicated by a color cue, and crucial trials always required a switch. Crucial trials had shorter reaction times after a cognate than after a non-cognate. In Experiment 2, response language was not cued and participants switched freely between the languages. Words after cognates were switched more often than words after non-cognates, for switching from L1 to L2 only. Both experiments thus showed that cognates facilitated language switching of the following word. The results extend evidence for triggered codeswitching from natural speech analyses.
  • Brookshire, G., & Casasanto, D. (2011). Motivation and motor action: Hemispheric specialization for motivation reverses with handedness. In L. Carlson, C. Holscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Meeting of the Cognitive Science Society (pp. 2610-2615). Austin, TX: Cognitive Science Society.
  • Brouwer, S., & Bradlow, A. R. (2011). The influence of noise on phonological competition during spoken word recognition. In W.-S. Lee, & E. Zee (Eds.), Proceedings of the 17th International Congress of Phonetic Sciences 2011 [ICPhS XVII] (pp. 364-367). Hong Kong: Department of Chinese, Translation and Linguistics, City University of Hong Kong.

    Abstract

    Listeners’ interactions often take place in auditorily challenging conditions. We examined how noise affects phonological competition during spoken word recognition. In a visual-world experiment, which allows us to examine the timecourse of recognition, English participants listened to target words in quiet and in noise while they saw four pictures on the screen: a target (e.g. candle), an onset overlap competitor (e.g. candy), an offset overlap competitor (e.g. sandal), and a distractor. The results showed that, while all competitors were relatively quickly suppressed in quiet listening conditions, listeners experienced persistent competition in noise from the offset competitor but not from the onset competitor. This suggests that listeners’ phonological competitor activation persists for longer in noise than in quiet and that listeners are able to deactivate some unwanted competition when listening to speech in noise. The well-attested competition pattern in quiet was not replicated. Possible methodological explanations for this result are discussed.
  • Brown, P. (1998). Early Tzeltal verbs: Argument structure and argument representation. In E. Clark (Ed.), Proceedings of the 29th Annual Stanford Child Language Research Forum (pp. 129-140). Stanford: CSLI Publications.

    Abstract

    The surge of research activity focussing on children's acquisition of verbs (e.g., Tomasello and Merriman 1996) addresses some fundamental questions: Just how variable across languages, and across individual children, is the process of verb learning? How specific are arguments to particular verbs in early child language? How does the grammatical category 'Verb' develop? The position of Universal Grammar, that a verb category is early, contrasts with that of Tomasello (1992), Pine and Lieven and their colleagues (1996, in press), and many others, that children develop a verb category slowly, gradually building up subcategorizations of verbs around pragmatic, syntactic, and semantic properties of the language they are exposed to. On this latter view, one would expect the language which the child is learning, the cultural milieu and the nature of the interactions in which the child is engaged, to influence the process of acquiring verb argument structures. This paper explores these issues by examining the development of argument representation in the Mayan language Tzeltal, in both its lexical and verbal cross-referencing forms, and analyzing the semantic and pragmatic factors influencing the form argument representation takes. Certain facts about Tzeltal (the ergative/absolutive marking, the semantic specificity of transitive and positional verbs) are proposed to affect the representation of arguments. The first 500 multimorpheme combinations of 3 children (aged between 1;8 and 2;4) are examined. It is argued that there is no evidence of semantically light 'pathbreaking' verbs (Ninio 1996) leading the way into word combinations. There is early productivity of cross-referencing affixes marking A, S, and O arguments (although there are systematic omissions). The paper assesses the respective contributions of three kinds of factors to these results - structural (regular morphology), semantic (verb specificity) and pragmatic (the nature of Tzeltal conversational interaction).
  • Brown, P. (1998). How and why are women more polite: Some evidence from a Mayan community. In J. Coates (Ed.), Language and gender (pp. 81-99). Oxford: Blackwell.
  • Brown, P. (2011). Everyone has to lie in Tzeltal [Reprint]. In B. B. Schieffelin, & P. B. Garrett (Eds.), Anthropological linguistics: Critical concepts in language studies. Volume III Talking about language (pp. 59-87). London: Routledge.

    Abstract

    Reprint of Brown, P. (2002). Everyone has to lie in Tzeltal. In S. Blum-Kulka, & C. E. Snow (Eds.), Talking to adults: The contribution of multiparty discourse to language acquisition (pp. 241-275). Mahwah, NJ: Erlbaum. In a famous paper Harvey Sacks (1974) argued that the sequential properties of greeting conventions, as well as those governing the flow of information, mean that 'everyone has to lie'. In this paper I show this dictum to be equally true in the Tzeltal Mayan community of Tenejapa, in southern Mexico, but for somewhat different reasons. The phenomenon of interest is the practice of routine fearsome threats to small children. Based on a longitudinal corpus of videotaped and tape-recorded naturally-occurring interaction between caregivers and children in five Tzeltal families, the study examines sequences of Tzeltal caregivers' speech aimed at controlling the children's behaviour and analyzes the children's developing pragmatic skills in handling such controlling utterances, from prelinguistic infants to age five and over. Infants in this society are considered to be vulnerable, easily scared or shocked into losing their 'souls', and therefore at all costs to be protected and hidden from outsiders and other dangers. Nonetheless, the chief form of control (aside from physically removing a child from danger) is to threaten, saying things like "Don't do that, or I'll take you to the clinic for an injection." These overt scare-threats - rarely actually realized - lead Tzeltal children by the age of 2;6 to 3;0 to the understanding that speech does not necessarily convey true propositions, and to a sensitivity to the underlying motivations for utterances distinct from their literal meaning. By age 4;0 children perform the same role to their younger siblings; they also begin to use more subtle non-true (e.g. ironic) utterances. The caretaker practice described here is related to adult norms of social lying, to the sociocultural context of constraints on information flow, social control through gossip, and the different notion of 'truth' that arises in the context of non-verifiability characteristic of a small-scale nonliterate society.
  • Brown, P. (2011). The cultural organization of attention. In A. Duranti, E. Ochs, & B. B. Schieffelin (Eds.), The handbook of language socialization (pp. 29-55). Malden, MA: Wiley-Blackwell.

    Abstract

    How new social members are enculturated into the interactional practices of the society they grow up in is crucial to an understanding of social interaction, as well as to an understanding of the role of culture in children's social-cognitive development. Modern theories of infant development (e.g., Bruner 1982, Elman et al 1996, Tomasello 1999, Masataka 2003) emphasize the influence of particular interactional practices in the child's developing communicative skills. But interactional practices with infants - behaviors like prompting, pointing, turn-taking routines, and interacting over objects - are culturally shaped by beliefs about what infants need and what they can understand; these practices therefore vary across cultures in both quantity and quality. What effect does this variation have on children's communicative development? This article focuses on one aspect of cultural practice, the interactional organization of attention and how it is socialized in prelinguistic infants. It surveys the literature on the precursors to attention coordination in infancy, leading up to the crucial development of 'joint attention' and pointing behavior around the age of 12 months, and it reports what is known about cultural differences in related interactional practices of adults. It then considers the implications of such differences for infant-caregiver interaction prior to the period when infants begin to speak. I report on my own work on the integration of gaze and pointing in infant/caregiver interaction in two different cultures. One is a Mayan society in Mexico, where interaction with infants during their first year is relatively minimal; the other is on Rossel Island (Papua New Guinea), where interaction with infants is characterized by intensive face-to-face communicative behaviors from shortly after the child's birth. Examination of videotaped naturally-occurring interactions in both societies for episodes of index finger point following and production, and the integration of gaze and vocalization with pointing, reveals that despite the differences in interactional style with infants, pointing for joint attention emerges in infants in both cultures in the 9-15 month period. However, a comparative perspective on cultural practices in caregiver-infant interactions allows us to refine our understanding of joint attention and its role in the process of learning to become a communicative partner.
  • Brown, P. (2011). Politeness. In P. C. Hogan (Ed.), The Cambridge encyclopedia of the language sciences (pp. 635-636). New York: Cambridge University Press.

    Abstract

    This is an encyclopedia entry surveying theoretical approaches to politeness phenomena in language usage.
  • Brown, P., & Levinson, S. C. (1998). Politeness, introduction to the reissue: A review of recent work. In A. Kasher (Ed.), Pragmatics: Vol. 6 Grammar, psychology and sociology (pp. 488-554). London: Routledge.

    Abstract

    This article is a reprint of chapter 1, the introduction to Brown and Levinson, 1987, Politeness: Some universals in language usage (Cambridge University Press).
  • Brown, P., & Levinson, S. C. (2011). Politeness: Some universals in language use [Reprint]. In D. Archer, & P. Grundy (Eds.), The pragmatics reader (pp. 283-304). London: Routledge.

    Abstract

    Reprinted with permission of Cambridge University Press from: Brown, P. and Levinson, S. E. (1987) Politeness, (©) 1978, 1987, CUP.
  • Bruggeman, L., & Cutler, A. (2016). Lexical manipulation as a discovery tool for psycholinguistic research. In C. Carignan, & M. D. Tyler (Eds.), Proceedings of the 16th Australasian International Conference on Speech Science and Technology (SST2016) (pp. 313-316).
  • Burenhult, N., Kruspe, N., & Dunn, M. (2011). Language history and culture groups among Austroasiatic-speaking foragers of the Malay Peninsula. In N. J. Enfield (Ed.), Dynamics of human diversity: The case of mainland Southeast Asia (pp. 257-277). Canberra: Pacific Linguistics.
  • Burenhult, N. (2011). The coding of reciprocal events in Jahai. In N. Evans, A. Gaby, S. C. Levinson, & A. Majid (Eds.), Reciprocals and semantic typology (pp. 163-176). Amsterdam: Benjamins.

    Abstract

    This work explores the linguistic encoding of reciprocal events in Jahai (Aslian, Mon-Khmer, Malay Peninsula) on the basis of linguistic descriptions of the video stimuli of the ‘Reciprocal constructions and situation type’ task (Evans et al. 2004). Reciprocal situation types find expression in three different constructions: distributive verb forms, reciprocal verb forms, and adjunct phrases containing a body part noun. Distributives represent the dominant strategy, reciprocal forms and body part adjuncts being highly restricted across event types and consultants. The distributive and reciprocal morphemes manifest intricate morphological processes typical of Aslian languages. The paper also addresses some analytical problems raised by the data, such as structural ambiguity and restrictions on derivation, as well as individual variation.
  • Burenhult, N., & Kruspe, N. (2016). The language of eating and drinking: A window on Orang Asli meaning-making. In K. Endicott (Ed.), Malaysia’s original people: Past, present and future of the Orang Asli (pp. 175-199). Singapore: National University of Singapore Press.
  • Carstensen, A., Khetarpal, N., Majid, A., & Regier, T. (2011). Universals and variation in spatial language and cognition: Evidence from Chichewa. In L. Carlson, C. Hölscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Conference of the Cognitive Science Society (pp. 2315). Austin, TX: Cognitive Science Society.
  • Casasanto, D. (2011). Bodily relativity: The body-specificity of language and thought. In L. Carlson, C. Holscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Meeting of the Cognitive Science Society (pp. 1258-1259). Austin, TX: Cognitive Science Society.
  • Casasanto, D., & Lupyan, G. (2011). Ad hoc cognition [Abstract]. In L. Carlson, C. Hölscher, & T. F. Shipley (Eds.), Proceedings of the 33rd Annual Conference of the Cognitive Science Society (pp. 826). Austin, TX: Cognitive Science Society.

    Abstract

    If concepts, categories, and word meanings are stable, how can people use them so flexibly? Here we explore a possible answer: maybe this stability is an illusion. Perhaps all concepts, categories, and word meanings (CC&Ms) are constructed ad hoc, each time we use them. On this proposal, all words are infinitely polysemous, all communication is ‘good enough’, and no idea is ever the same twice. The details of people’s ad hoc CC&Ms are determined by the way retrieval cues interact with the physical, social, and linguistic context. We argue that even the most stable-seeming CC&Ms are instantiated via the same processes as those that are more obviously ad hoc, and vary (a) from one microsecond to the next within a given instantiation, (b) from one instantiation to the next within an individual, and (c) from person to person and group to group as a function of people’s experiential history.
  • Casasanto, D., & De Bruin, A. (2011). Word Up! Directed motor action improves word learning [Abstract]. In L. Carlson, C. Hölscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Conference of the Cognitive Science Society (pp. 1902). Austin, TX: Cognitive Science Society.

    Abstract

    Can simple motor actions help people expand their vocabulary? Here we show that word learning depends on where students place their flash cards after studying them. In Experiment 1, participants learned the definitions of “alien words” with positive or negative emotional valence. After studying each card, they placed it in one of two boxes (top or bottom), according to its valence. Participants who were instructed to place positive cards in the top box, consistent with Good is Up metaphors, scored about 10.
  • Casillas, M., & Amaral, P. (2011). Learning cues to category membership: Patterns in children’s acquisition of hedges. In C. Cathcart, I.-H. Chen, G. Finley, S. Kang, C. S. Sandy, & E. Stickles (Eds.), Proceedings of the Berkeley Linguistics Society 37th Annual Meeting (pp. 33-45). Linguistic Society of America, eLanguage.

    Abstract

    When we think of children acquiring language, we often think of their acquisition of linguistic structure as separate from their acquisition of knowledge about the world. But it is clear that in the process of learning about language, children consult what they know about the world; and that in learning about the world, children use linguistic cues to discover how items are related to one another. This interaction between the acquisition of linguistic structure and the acquisition of category structure is especially clear in word learning.
  • Chen, A., & Lai, V. T. (2011). Comb or coat: The role of intonation in online reference resolution in a second language. In W. Zonneveld, & H. Quené (Eds.), Sound and Sounds. Studies presented to M.E.H. (Bert) Schouten on the occasion of his 65th birthday (pp. 57-68). Utrecht: UiL OTS.

    Abstract

    In spoken sentence processing, listeners do not wait till the end of a sentence to decipher what message is conveyed. Rather, they make predictions about the most plausible interpretation at every possible point in the auditory signal on the basis of all kinds of linguistic information (e.g., Eberhard et al. 1995; Altmann and Kamide 1999, 2007). Intonation is one such kind of linguistic information that is efficiently used in spoken sentence processing. The evidence comes primarily from recent work on online reference resolution conducted in the visual-world eyetracking paradigm (e.g., Tanenhaus et al. 1995). In this paradigm, listeners are shown a visual scene containing a number of objects and listen to one or two short sentences about the scene. They are asked either to inspect the visual scene while listening or to carry out the action depicted in the sentence(s) (e.g., 'Touch the blue square'). Listeners' eye movements directed to each object in the scene are monitored and time-locked to pre-defined time points in the auditory stimulus. Their predictions about the upcoming referent, and the sources for these predictions in the auditory signal, are examined by analysing fixations to the relevant objects in the visual scene before the acoustic information on the referent is available.
  • Chen, A. (2011). The developmental path to phonological focus-marking in Dutch. In S. Frota, E. Gorka, & P. Prieto (Eds.), Prosodic categories: Production, perception and comprehension (pp. 93-109). Dordrecht: Springer.

    Abstract

    This paper gives an overview of recent studies on the use of phonological cues (accent placement and choice of accent type) to mark focus in Dutch-speaking children aged between 1;9 and 8;10. It is argued that learning to use phonological cues to mark focus is a gradual process. In the light of the findings in these studies, a first proposal is put forward on the developmental path to adult-like phonological focus-marking in Dutch.
  • Chen, A. (2011). What’s in a rise: Evidence for an off-ramp analysis of Dutch Intonation. In W.-S. Lee, & E. Zee (Eds.), Proceedings of the 17th International Congress of Phonetic Sciences 2011 [ICPhS XVII] (pp. 448-451). Hong Kong: Department of Chinese, Translation and Linguistics, City University of Hong Kong.

    Abstract

    Pitch accents are analysed differently in an on-ramp analysis (i.e. ToBI) and an off-ramp analysis (e.g. Transcription of Dutch Intonation - ToDI), two competing approaches in the Autosegmental-Metrical tradition. A case in point is the pre-final high rise. A pre-final rise is analysed as H* in ToBI but is phonologically ambiguous between H* and H*L (a (rise-)fall) in ToDI. This is because in ToDI, the L tone of a pre-final H*L can be realised in the following unaccented words, and both H* and H*L can show up as a high rise in the accented word. To find out whether there is a two-way phonological contrast in pre-final high rises in Dutch, we examined the distribution of phonologically ambiguous high rises (H*(L)) and their phonetic realisation in different information-structural conditions (topic vs. focus), compared to phonologically unambiguous H* and H*L. Results showed that there is indeed a H*L vs. H* contrast in pre-final high rises in Dutch and that H*L is realised as H*(L) when sonorant material is limited in the accented word. These findings provide new evidence for an off-ramp analysis of Dutch intonation and have far-reaching implications for the analysis of intonation across languages.
  • Chu, M., & Kita, S. (2011). Microgenesis of gestures during mental rotation tasks recapitulates ontogenesis. In G. Stam, & M. Ishino (Eds.), Integrating gestures: The interdisciplinary nature of gesture (pp. 267-276). Amsterdam: John Benjamins.

    Abstract

    People spontaneously produce gestures when they solve problems or explain their solutions to a problem. In this chapter, we will review and discuss evidence on the role of representational gestures in problem solving. The focus will be on our recent experiments (Chu & Kita, 2008), in which we used Shepard-Metzler type of mental rotation tasks to investigate how spontaneous gestures revealed the development of problem solving strategy over the course of the experiment and what role gesture played in the development process. We found that when solving novel problems regarding the physical world, adults go through similar symbolic distancing (Werner & Kaplan, 1963) and internalization (Piaget, 1968) processes as those that occur during young children’s cognitive development and gesture facilitates such processes.
  • Clark, E. V., & Casillas, M. (2016). First language acquisition. In K. Allen (Ed.), The Routledge Handbook of Linguistics (pp. 311-328). New York: Routledge.
  • Cohen, E. (2011). “Out with ‘Religion’: A novel framing of the religion debate”. In W. Williams (Ed.), Religion and rights: The Oxford Amnesty Lectures 2008. Manchester: Manchester University Press.
  • Cohen, E., & Barrett, J. L. (2011). In search of "Folk anthropology": The cognitive anthropology of the person. In J. W. Van Huysteen, & E. Wiebe (Eds.), In search of self: Interdisciplinary perspectives on personhood (pp. 104-124). Grand Rapids, CA: Wm. B. Eerdmans Publishing Company.
  • Collins, L. J., Schönfeld, B., & Chen, X. S. (2011). The epigenetics of non-coding RNA. In T. Tollefsbol (Ed.), Handbook of epigenetics: the new molecular and medical genetics (pp. 49-61). London: Academic.

    Abstract

    Non-coding RNAs (ncRNAs) have been implicated in the epigenetic marking of many genes. Short regulatory ncRNAs, including miRNAs, siRNAs, piRNAs and snoRNAs, as well as long ncRNAs such as Xist and Air, are discussed in light of recent research on mechanisms regulating chromatin marking and RNA editing. The topic is expanding rapidly, so we concentrate on examples to highlight the main mechanisms, including simple mechanisms where complementary binding affects methylation or RNA sites. However, other examples, especially among the long ncRNAs, highlight very complex regulatory systems with multiple layers of ncRNA control.
  • Crago, M. B., & Allen, S. E. M. (1998). Acquiring Inuktitut. In O. L. Taylor, & L. Leonard (Eds.), Language Acquisition Across North America: Cross-Cultural And Cross-Linguistic Perspectives (pp. 245-279). San Diego, CA, USA: Singular Publishing Group, Inc.
  • Crago, M. B., Allen, S. E. M., & Pesco, D. (1998). Issues of Complexity in Inuktitut and English Child Directed Speech. In Proceedings of the twenty-ninth Annual Stanford Child Language Research Forum (pp. 37-46).
  • Cristia, A., Seidl, A., & Francis, A. L. (2011). Phonological features in infancy. In G. N. Clements, & R. Ridouane (Eds.), Where do phonological contrasts come from? Cognitive, physical and developmental bases of phonological features (pp. 303-326). Amsterdam: Benjamins.

    Abstract

    Features serve two main functions in the phonology of languages: they encode the distinction between pairs of contrastive phonemes (distinctive function); and they delimit sets of sounds that participate in phonological processes and patterns (classificatory function). We summarize evidence from a variety of experimental paradigms bearing on the functional relevance of phonological features. This research shows that while young infants may use abstract phonological features to learn sound patterns, this ability becomes more constrained with development and experience. Furthermore, given the lack of overlap between the ability to learn a pair of words differing in a single feature and the ability to learn sound patterns based on features, we argue for the separation of the distinctive and the classificatory function.
  • Cristia, A., & Seidl, A. (2011). Sensitivity to prosody at 6 months predicts vocabulary at 24 months. In N. Danis, K. Mesh, & H. Sung (Eds.), BUCLD 35: Proceedings of the 35th annual Boston University Conference on Language Development (pp. 145-156). Somerville, Mass: Cascadilla Press.
  • Croijmans, I., & Majid, A. (2016). Language does not explain the wine-specific memory advantage of wine experts. In A. Papafragou, D. Grodner, D. Mirman, & J. Trueswell (Eds.), Proceedings of the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016) (pp. 141-146). Austin, TX: Cognitive Science Society.

    Abstract

    Although people are poor at naming odors, naming a smell helps to remember that odor. Previous studies show wine experts have better memory for smells, and they also name smells differently than novices. Is wine experts’ odor memory verbally mediated? And is the odor memory advantage that experts have over novices restricted to odors in their domain of expertise, or does it generalize? Twenty-four wine experts and 24 novices smelled wines, wine-related odors and common odors, and remembered these. Half the participants also named the smells. Wine experts had better memory for wines, but not for the other odors, indicating their memory advantage is restricted to wine. Wine experts named odors better than novices, but there was no relationship between experts’ ability to name odors and their memory for odors. This suggests experts’ odor memory advantage is not linguistically mediated, but may be the result of differential perceptual learning.
  • Cutler, A., & Otake, T. (1998). Assimilation of place in Japanese and Dutch. In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: Vol. 5 (pp. 1751-1754). Sydney: ICSLP.

    Abstract

    Assimilation of place of articulation across a nasal and a following stop consonant is obligatory in Japanese, but not in Dutch. In four experiments the processing of assimilated forms by speakers of Japanese and Dutch was compared, using a task in which listeners blended pseudo-word pairs such as ranga-serupa. An assimilated blend of this pair would be rampa, an unassimilated blend rangpa. Japanese listeners produced significantly more assimilated than unassimilated forms, both with pseudo-Japanese and pseudo-Dutch materials, while Dutch listeners produced significantly more unassimilated than assimilated forms in each materials set. This suggests that Japanese listeners, whose native-language phonology involves obligatory assimilation constraints, represent the assimilated nasals in nasal-stop sequences as unmarked for place of articulation, while Dutch listeners, who are accustomed to hearing unassimilated forms, represent the same nasal segments as marked for place of articulation.
  • Ip, M., & Cutler, A. (2016). Cross-language data on five types of prosodic focus. In J. Barnes, A. Brugos, S. Shattuck-Hufnagel, & N. Veilleux (Eds.), Proceedings of Speech Prosody 2016 (pp. 330-334).

    Abstract

    To examine the relative roles of language-specific and language-universal mechanisms in the production of prosodic focus, we compared production of five different types of focus by native speakers of English and Mandarin. Two comparable dialogues were constructed for each language, with the same words appearing in focused and unfocused position; 24 speakers recorded each dialogue in each language. Duration, F0 (mean, maximum, range), and rms-intensity (mean, maximum) of all critical word tokens were measured. Across the different types of focus, cross-language differences were observed in the degree to which English versus Mandarin speakers use the different prosodic parameters to mark focus, suggesting that while prosody may be universally available for expressing focus, the means of its employment may be considerably language-specific.
  • Cutler, A. (1998). How listeners find the right words. In Proceedings of the Sixteenth International Congress on Acoustics: Vol. 2 (pp. 1377-1380). Melville, NY: Acoustical Society of America.

    Abstract

    Languages contain tens of thousands of words, but these are constructed from a tiny handful of phonetic elements. Consequently, words resemble one another, or can be embedded within one another. The process of spoken-word recognition by human listeners involves activation of multiple word candidates consistent with the input, and direct competition between activated candidate words. Further, human listeners are sensitive, at an early, prelexical, stage of speech processing, to constraints on what could potentially be a word of the language.
  • Cutler, A., Andics, A., & Fang, Z. (2011). Inter-dependent categorization of voices and segments. In W.-S. Lee, & E. Zee (Eds.), Proceedings of the 17th International Congress of Phonetic Sciences [ICPhS 2011] (pp. 552-555). Hong Kong: Department of Chinese, Translation and Linguistics, City University of Hong Kong.

    Abstract

    Listeners performed speeded two-alternative choice between two unfamiliar and relatively similar voices or between two phonetically close segments, in VC syllables. For each decision type (segment, voice), the non-target dimension (voice, segment) either was constant, or varied across four alternatives. Responses were always slower when a non-target dimension varied than when it did not, but the effect of phonetic variation on voice identity decision was stronger than that of voice variation on phonetic identity decision. Cues to voice and segment identity in speech are processed inter-dependently, but hard categorization decisions about voices draw on, and are hence sensitive to, segmental information.
  • Cutler, A., & Pearson, M. (1985). On the analysis of prosodic turn-taking cues. In C. Johns-Lewis (Ed.), Intonation in discourse (pp. 139-155). London: Croom Helm.
  • Cutler, A., Treiman, R., & Van Ooijen, B. (1998). Orthografik inkoncistensy ephekts in foneme detektion? In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: Vol. 6 (pp. 2783-2786). Sydney: ICSLP.

    Abstract

    The phoneme detection task is widely used in spoken word recognition research. Alphabetically literate participants, however, are more used to explicit representations of letters than of phonemes. The present study explored whether phoneme detection is sensitive to how target phonemes are, or may be, orthographically realised. Listeners detected the target sounds [b,m,t,f,s,k] in word-initial position in sequences of isolated English words. Response times were faster to the targets [b,m,t], which have consistent word-initial spelling, than to the targets [f,s,k], which are inconsistently spelled, but only when listeners’ attention was drawn to spelling by the presence in the experiment of many irregularly spelled fillers. Within the inconsistent targets [f,s,k], there was no significant difference between responses to targets in words with majority and minority spellings. We conclude that performance in the phoneme detection task is not necessarily sensitive to orthographic effects, but that salient orthographic manipulation can induce such sensitivity.
  • Cutler, A. (1985). Performance measures of lexical complexity. In G. Hoppenbrouwers, P. A. Seuren, & A. Weijters (Eds.), Meaning and the lexicon (pp. 75). Dordrecht: Foris.
  • Cutler, A. (1998). Prosodic structure and word recognition. In A. D. Friederici (Ed.), Language comprehension: A biological perspective (pp. 41-70). Heidelberg: Springer.
  • Cutler, A. (1998). The recognition of spoken words with variable representations. In D. Duez (Ed.), Proceedings of the ESCA Workshop on Sound Patterns of Spontaneous Speech (pp. 83-92). Aix-en-Provence: Université de Aix-en-Provence.
  • Daly, T., Chen, X. S., & Penny, D. (2011). How old are RNA networks? In L. J. Collins (Ed.), RNA infrastructure and networks (pp. 255-273). New York: Springer Science + Business Media and Landes Bioscience.

    Abstract

    Some major classes of RNAs (such as mRNA, rRNA, tRNA and RNase P) are ubiquitous in all living systems and so are inferred to have arisen early during the origin of life. However, the situation is not so clear for the system of RNA regulatory networks that continue to be uncovered, especially in eukaryotes. It is increasingly being recognised that networks of small RNAs are important for regulation in all cells, but it is not certain whether the origins of these networks are as old as rRNA and tRNA. Another group of ncRNAs, including snoRNAs, occurs mainly in archaea and eukaryotes, and their ultimate origin is less certain, although perhaps the simplest hypothesis is that they were present in earlier stages of life and were lost from bacteria. Some RNA networks may trace back to an early stage when there was just RNA and proteins, the RNP-world, before DNA.
  • Danielsen, S., Dunn, M., & Muysken, P. (2011). The spread of the Arawakan languages: A view from structural phylogenetics. In A. Hornborg, & J. D. Hill (Eds.), Ethnicity in ancient Amazonia: Reconstructing past identities from archaeology, linguistics, and ethnohistory (pp. 173-196). Boulder: University Press of Colorado.
  • Dediu, D., & Moisik, S. (2016). Defining and counting phonological classes in cross-linguistic segment databases. In N. Calzolari, K. Choukri, T. Declerck, S. Goggi, M. Grobelnik, B. Maegaard, J. Mariani, H. Mazo, A. Moreno, J. Odijk, & S. Piperidis (Eds.), Proceedings of LREC 2016: 10th International Conference on Language Resources and Evaluation (pp. 1955-1962). Paris: European Language Resources Association (ELRA).

    Abstract

    Recently, there has been an explosion in the availability of large, good-quality cross-linguistic databases such as WALS (Dryer & Haspelmath, 2013), Glottolog (Hammarstrom et al., 2015) and Phoible (Moran & McCloy, 2014). Databases such as Phoible contain the actual segments used by various languages as they are given in the primary language descriptions. However, this segment-level representation cannot be used directly for analyses that require generalizations over classes of segments that share theoretically interesting features. Here we present a method and the associated R (R Core Team, 2014) code that allows the flexible definition of such meaningful classes and that can identify the sets of segments falling into such a class for any language inventory. The method and its results are important for those interested in exploring cross-linguistic patterns of phonetic and phonological diversity and their relationship to extra-linguistic factors and processes such as climate, economics, history or human genetics.
  • Dediu, D., & Moisik, S. R. (2016). Anatomical biasing of click learning and production: An MRI and 3d palate imaging study. In S. G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Feher, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/57.html.

    Abstract

    The current paper presents results for data on click learning obtained from a larger imaging study (using MRI and 3D intraoral scanning) designed to quantify and characterize intra- and inter-population variation of vocal tract structures and the relation of this to speech production. The aim of the click study was to ascertain whether and to what extent vocal tract morphology influences (1) the ability to learn to produce clicks and (2) the productions of those that successfully learn to produce these sounds. The results indicate that the presence of an alveolar ridge certainly does not prevent an individual from learning to produce click sounds (1). However, the subtle details of how clicks are produced may indeed be driven by palate shape (2).
  • Dijkstra, N., & Fikkert, P. (2011). Universal constraints on the discrimination of Place of Articulation? Asymmetries in the discrimination of 'paan' and 'taan' by 6-month-old Dutch infants. In N. Danis, K. Mesh, & H. Sung (Eds.), Proceedings of the 35th Annual Boston University Conference on Language Development. Volume 1 (pp. 170-182). Somerville, MA: Cascadilla Press.
  • Dingemanse, M., Van Leeuwen, T., & Majid, A. (2011). Mapping across senses: Two cross-modal association tasks. In K. Kendrick, & A. Majid (Eds.), Field manual volume 14 (pp. 11-15). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.1005579.
  • Dingemanse, M. (2011). Ezra Pound among the Mawu: Ideophones and iconicity in Siwu. In P. Michelucci, O. Fischer, & C. Ljungberg (Eds.), Semblance and Signification (pp. 39-54). Amsterdam: John Benjamins.

    Abstract

    The Mawu people of eastern Ghana make common use of ideophones: marked words that depict sensory imagery. Ideophones have been described as “poetry in ordinary language,” yet the shadow of Lévy-Bruhl, who assigned such words to the realm of primitivity, has loomed large over linguistics and literary theory alike. The poet Ezra Pound is a case in point: while his fascination with Chinese characters spawned the ideogrammic method, the mimicry and gestures of the “primitive languages in Africa” were never more than a mere curiosity to him. This paper imagines Pound transposed into the linguaculture of the Mawu. What would have struck him about their ways of ‘charging language’ with imagery? I juxtapose Pound’s views of the poetic image with an analysis of how different layers of iconicity in ideophones combine to depict sensory imagery. This exercise illuminates aspects of what one might call ‘the ideophonic’.
  • Dolscheid, S., Shayan, S., Majid, A., & Casasanto, D. (2011). The thickness of musical pitch: Psychophysical evidence for the Whorfian hypothesis. In L. Carlson, C. Hölscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Conference of the Cognitive Science Society (pp. 537-542). Austin, TX: Cognitive Science Society.
  • Doumas, L. A., & Martin, A. E. (2016). Abstraction in time: Finding hierarchical linguistic structure in a model of relational processing. In A. Papafragou, D. Grodner, D. Mirman, & J. Trueswell (Eds.), Proceedings of the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016) (pp. 2279-2284). Austin, TX: Cognitive Science Society.

    Abstract

    Abstract mental representation is fundamental for human cognition. Forming such representations in time, especially from dynamic and noisy perceptual input, is a challenge for any processing modality, but perhaps none so acutely as for language processing. We show that LISA (Hummel & Holyoak, 1997) and DORA (Doumas, Hummel, & Sandhofer, 2008), models built to process and to learn structured (i.e., symbolic) representations of conceptual properties and relations from unstructured inputs, show oscillatory activation during processing that is highly similar to the cortical activity elicited by the linguistic stimuli from Ding et al. (2016). We argue, as Ding et al. (2016) do, that this activation reflects formation of hierarchical linguistic representation, and furthermore, that the kind of computational mechanisms in LISA/DORA (e.g., temporal binding by systematic asynchrony of firing) may underlie formation of abstract linguistic representations in the human brain. It may be this repurposing that allowed for the generation or emergence of hierarchical linguistic structure, and therefore, human language, from extant cognitive and neural systems. We conclude that models of thinking and reasoning and models of language processing must be integrated, not only for increased plausibility, but in order to advance both fields towards a larger integrative model of human cognition.
  • Drozd, K. F. (1998). No as a determiner in child English: A summary of categorical evidence. In A. Sorace, C. Heycock, & R. Shillcock (Eds.), Proceedings of the Gala '97 Conference on Language Acquisition (pp. 34-39). Edinburgh, UK: Edinburgh University Press.

    Abstract

    This paper summarizes the results of a descriptive syntactic category analysis of child English no which reveals that young children use and represent no as a determiner and negatives like no pen as NPs, contra standard analyses.
  • Drozdova, P., Van Hout, R., & Scharenborg, O. (2016). Processing and adaptation to ambiguous sounds during the course of perceptual learning. In Proceedings of Interspeech 2016: The 17th Annual Conference of the International Speech Communication Association (pp. 2811-2815). doi:10.21437/Interspeech.2016-814.

    Abstract

    Listeners use their lexical knowledge to interpret ambiguous sounds, and retune their phonetic categories to include this ambiguous sound. Although there is ample evidence for lexically-guided retuning, the adaptation process is not fully understood. Using a lexical decision task with an embedded auditory semantic priming task, the present study investigates whether words containing an ambiguous sound are processed in the same way as “natural” words and whether adaptation to the ambiguous sound tends to equalize the processing of “ambiguous” and natural words. Analyses of the yes/no responses and reaction times to natural and “ambiguous” words showed that words containing an ambiguous sound were accepted as words less often and were processed slower than the same words without ambiguity. The difference in acceptance disappeared after exposure to approximately 15 ambiguous items. Interestingly, lower acceptance rates and slower processing did not have an effect on the processing of semantic information of the following word. However, lower acceptance rates of ambiguous primes predict slower reaction times of these primes, suggesting an important role of stimulus-specific characteristics in triggering lexically-guided perceptual learning.
  • Drude, S. (2011). Awetí in relation with Kamayurá: The two Tupian languages of the Upper Xingu. In B. Franchetto (Ed.), Alto Xingu. Uma sociedade multilíngüe (pp. 155-192). Rio de Janeiro: Museu do Indio - FUNAI.

    Abstract

    The article analyzes the relation between Aweti and Kamayurá on different levels. Both languages belong to different branches of the subfamily “Maweti-Guarani” within the large Tupi ‘stock’. Both peoples have arrived rather late to the complex Upper Xinguan society, but probably independently and from different directions. Both resulted from mergers of different groups and suffered a dramatic demographic decline in the first half of last century. There is no concrete evidence that these groups spoke varieties of more than 2 different languages (Pre-Aweti and Pre-Kamayurá). Today, many Aweti are at least passive bilinguals with Kamayurá, their most important allies, but the opposite does not hold. The article also discusses the relations between the languages on the main structural levels. In phonology, the phoneme inventories are compared and the sound changes are listed that occurred from the hypothetical proto-language “Proto-Maweti-Guarani” to Aweti, on the one hand, and to Proto-Tupi-Guarani and further to Kamayurá, on the other. In morpho-syntax, the article offers a comparison of the person systems and of affixes in general, treating in particular the so-called ‘relational prefixes’, which do not exist in Aweti. The most important syntactic shared properties are also listed. There seems to be very little mutual lexical borrowing. In the appendix, a list of more than 60 cognates with reconstructed proto-forms is given. Keywords: Aweti; Kamayurá; Sociolinguistics; History; Phonology.
  • Drude, S. (2011). Comparando línguas alto‐xinguanas: Metodologia e bases de dados comparativos. In B. Franchetto (Ed.), Alto Xingu. Uma sociedade multilíngüe (pp. 39-56). Rio de Janeiro: Museu do Indio - FUNAI.

    Abstract

    A key for understanding the Upper Xingu system is the comparison of the different languages which are part of that multilingual society. This article discusses the notion of ‘comparing languages’ and delineates a research program according to which a fruitful comparison can be done on four levels: 1) structural (phonological and morphosyntactic), 2) lexical (semantic structure of the lexica and individual lexical items), 3) discourse (figures of speech and thought), 4) content (in particular, narratives). The language data of the project gathered so far (focusing on levels 2 and 4) are described in detail: 10 comparative word lists from different semantic domains, and a core of 5 analogous texts of different genres. Finally, some general considerations are offered about how to analyze both the similarities and the divergences found among the compared material.
  • Drude, S. (2011). 'Derivational verbs' and other multi-verb constructions in Aweti and Tupi-Guarani. In A. Y. Aikhenvald, & P. C. Muysken (Eds.), Multi-verb constructions: A view from the Americas (pp. 213-254). Leiden: Brill.
  • Eibl-Eibesfeldt, I., Senft, B., & Senft, G. (1998). Trobriander (Ost-Neuguinea, Trobriand Inseln, Kaile'una) Fadenspiele 'ninikula'. In Ethnologie - Humanethologische Begleitpublikationen von I. Eibl-Eibesfeldt und Mitarbeitern. Sammelband I, 1985-1987. Göttingen: Institut für den Wissenschaftlichen Film.
  • Ellert, M., Roberts, L., & Järvikivi, J. (2011). Verarbeitung und Disambiguierung pronominaler Referenz in der Fremdsprache Deutsch: Eine psycholinguistische Studie. In A. Krafft, & C. Spiegel (Eds.), Sprachliche Förderung und Weiterbildung-Transdisziplinär (pp. 51-68). Frankfurt am Main: Peter Lang.
  • Enfield, N. J., Kendrick, K. H., De Ruiter, J. P., Stivers, T., & Levinson, S. C. (2011). Building a corpus of spontaneous interaction. In Field manual volume 14 (pp. 29-32). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.1005610.

    Abstract

    This revised version supersedes all previous versions (e.g., Field Manual 2010).
  • Enfield, N. J. (2011). Description of reciprocal situations in Lao. In N. Evans, A. Gaby, S. C. Levinson, & A. Majid (Eds.), Reciprocals and semantic typology (pp. 129-149). Amsterdam: Benjamins.

    Abstract

    This article describes the grammatical resources available to speakers of Lao for describing situations that can be described broadly as ‘reciprocal’. The analysis is based on complementary methods: elicitation by means of non-linguistic stimuli, exploratory consultation with native speakers, and investigation of corpora of spontaneous language use. Typically, reciprocal situations are described using a semantically general ‘collaborative’ marker on an action verb. The resultant meaning is that some set of people participate in a situation ‘together’, broadly construed. The collaborative marker is found in two distinct syntactic constructions, which differ in terms of their information structural contexts of use. The paper first explores in detail the semantic range of the collaborative marker as it occurs in the more common ‘Type 1’ construction, and then discusses a special pragmatic context for the ‘Type 2’ construction. There is some methodological discussion concerning the results of elicitation via video stimuli. The chapter also discusses two specialised constructions dedicated to the expression of strict reciprocity.
  • Enfield, N. J. (2011). Dynamics of human diversity in mainland Southeast Asia: Introduction. In N. J. Enfield (Ed.), Dynamics of human diversity: The case of mainland Southeast Asia (pp. 1-8). Canberra: Pacific Linguistics.
  • Enfield, N. J. (2011). Elements of formulation. In J. Streeck, C. Goodwin, & C. LeBaron (Eds.), Embodied interaction: Language and body in the material world (pp. 59-66). Cambridge: Cambridge University Press.

    Abstract

    (from the chapter) Recognizing others' goals in the flow of interaction is complex, not only for analysts but for participants too. This chapter explores a semiotic approach, with the utterance-in-context as a basic-level unit, and where the interpreter, not the producer, is the driving force in how utterances come to have meaning. We first want to know how people extract meaning from others' communicative behavior. We then ask what are the elements of producers' formulation of communicative actions in anticipation of how others will interpret that behavior.
  • Enfield, N. J., & Levinson, S. C. (2011). Metalanguage for speech acts. In K. Kendrick, & A. Majid (Eds.), Field manual volume 14 (pp. 33-35). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.1005611.

    Abstract

    This version is reprinted from the 2010 Field Manual.
  • Enfield, N. J. (2011). Linguistic diversity in mainland Southeast Asia. In N. J. Enfield (Ed.), Dynamics of human diversity: The case of mainland Southeast Asia (pp. 63-80). Canberra: Pacific Linguistics.
  • Enfield, N. J. (2011). Sources of asymmetry in human interaction: Enchrony, status, knowledge and agency. In T. Stivers, L. Mondada, & J. Steensig (Eds.), The morality of knowledge in conversation (pp. 285-312). Cambridge: Cambridge University Press.
  • Ernestus, M., & Baayen, R. H. (2011). Corpora and exemplars in phonology. In J. A. Goldsmith, J. Riggle, & A. C. Yu (Eds.), The handbook of phonological theory (2nd ed.) (pp. 374-400). Oxford: Wiley-Blackwell.
  • Ernestus, M. (2016). L'utilisation des corpus oraux pour la recherche en (psycho)linguistique. In M. Kilani-Schoch, C. Surcouf, & A. Xanthos (Eds.), Nouvelles technologies et standards méthodologiques en linguistique (pp. 65-93). Lausanne: Université de Lausanne.
  • Ernestus, M. (2011). Gradience and categoricality in phonological theory. In M. Van Oostendorp, C. J. Ewen, E. Hume, & K. Rice (Eds.), The Blackwell companion to phonology (pp. 2115-2136). Wiley-Blackwell.
  • Eryilmaz, K., Little, H., & De Boer, B. (2016). Using HMMs To Attribute Structure To Artificial Languages. In S. G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Feher, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/125.html.

    Abstract

    We investigated the use of Hidden Markov Models (HMMs) as a way of representing repertoires of continuous signals in order to infer their building blocks. We tested the idea on a dataset from an artificial language experiment. The study demonstrates that using HMMs for this purpose is viable, but also that there is a lot of room for refinement, such as explicit duration modeling, incorporation of autoregressive elements, and relaxing the Markovian assumption, in order to accommodate specific details.
  • Evans, N., Levinson, S. C., Gaby, A., & Majid, A. (2011). Introduction: Reciprocals and semantic typology. In N. Evans, A. Gaby, S. C. Levinson, & A. Majid (Eds.), Reciprocals and semantic typology (pp. 1-28). Amsterdam: Benjamins.

    Abstract

    Reciprocity lies at the heart of social cognition, and with it so does the encoding of reciprocity in language via reciprocal constructions. Despite the prominence of strong universal claims about the semantics of reciprocal constructions, there is considerable descriptive literature on the semantics of reciprocals that seems to indicate variable coding and subtle cross-linguistic differences in meaning of reciprocals, both of which would make it impossible to formulate a single, essentialising definition of reciprocal semantics. These problems make it vital for studies in the semantic typology of reciprocals to employ methodologies that allow the relevant categories to emerge objectively from cross-linguistic comparison of standardised stimulus materials. We situate the rationale for the 20-language study that forms the basis for this book within this empirical approach to semantic typology, and summarise some of the findings.

  • Fikkert, P., & Chen, A. (2011). The role of word-stress and intonation in word recognition in Dutch 14- and 24-month-olds. In N. Danis, K. Mesh, & H. Sung (Eds.), Proceedings of the 35th annual Boston University Conference on Language Development (pp. 222-232). Somerville, MA: Cascadilla Press.
  • Filippi, P., Congdon, J. V., Hoang, J., Bowling, D. L., Reber, S., Pašukonis, A., Hoeschele, M., Ocklenburg, S., de Boer, B., Sturdy, C. B., Newen, A., & Güntürkün, O. (2016). Humans Recognize Vocal Expressions Of Emotional States Universally Across Species. In The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/91.html.

    Abstract

    The perception of danger in the environment can induce physiological responses (such as a heightened state of arousal) in animals, which may cause measurable changes in the prosodic modulation of the voice (Briefer, 2012). The ability to interpret the prosodic features of animal calls as an indicator of emotional arousal may have provided the first hominins with an adaptive advantage, enabling, for instance, the recognition of a threat in the surroundings. This ability might have paved the way for the ability to process meaningful prosodic modulations in the emerging linguistic utterances.
  • Filippi, P., Ocklenburg, S., Bowling, D. L., Heege, L., Newen, A., Güntürkün, O., & de Boer, B. (2016). Multimodal Processing Of Emotional Meanings: A Hypothesis On The Adaptive Value Of Prosody. In The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/90.html.

    Abstract

    Humans combine multiple sources of information to comprehend meanings. These sources can be characterized as linguistic (i.e., lexical units and/or sentences) or paralinguistic (e.g. body posture, facial expression, voice intonation, pragmatic context). Emotion communication is a special case in which linguistic and paralinguistic dimensions can simultaneously denote the same, or multiple incongruous referential meanings. Think, for instance, about when someone says “I’m sad!”, but does so with happy intonation and a happy facial expression. Here, the communicative channels express very specific (although conflicting) emotional states as denotations. In such cases of intermodal incongruence, are we involuntarily biased to respond to information in one channel over the other? We hypothesize that humans are involuntary biased to respond to prosody over verbal content and facial expression, since the ability to communicate socially relevant information such as basic emotional states through prosodic modulation of the voice might have provided early hominins with an adaptive advantage that preceded the emergence of segmental speech (Darwin 1871; Mithen, 2005). To address this hypothesis, we examined the interaction between multiple communicative channels in recruiting attentional resources, within a Stroop interference task (i.e. a task in which different channels give conflicting information; Stroop, 1935). In experiment 1, we used synonyms of “happy” and “sad” spoken with happy and sad prosody. Participants were asked to identify the emotion expressed by the verbal content while ignoring prosody (Word task) or vice versa (Prosody task). Participants responded faster and more accurately in the Prosody task. Within the Word task, incongruent stimuli were responded to more slowly and less accurately than congruent stimuli. In experiment 2, we adopted synonyms of “happy” and “sad” spoken in happy and sad prosody, while a happy or sad face was displayed. 
    Participants were asked to identify the emotion expressed by the verbal content while ignoring prosody and face (Word task), to identify the emotion expressed by prosody while ignoring verbal content and face (Prosody task), or to identify the emotion expressed by the face while ignoring prosody and verbal content (Face task). Participants responded faster in the Face task and less accurately when the two non-focused channels were expressing an emotion that was incongruent with the focused one, as compared with the condition where all the channels were congruent. In addition, in the Word task, accuracy was lower when prosody was incongruent to verbal content and face, as compared with the condition where all the channels were congruent. Our data suggest that prosody interferes with emotion word processing, eliciting automatic responses even when conflicting with both verbal content and facial expressions at the same time. In contrast, although processed significantly faster than prosody and verbal content, faces alone are not sufficient to interfere in emotion processing within a three-dimensional Stroop task. Our findings align with the hypothesis that the ability to communicate emotions through prosodic modulation of the voice – which seems to be dominant over verbal content – is evolutionarily older than the emergence of segmental articulation (Mithen, 2005; Fitch, 2010). This hypothesis fits with quantitative data suggesting that prosody has a vital role in the perception of well-formed words (Johnson & Jusczyk, 2001), in the ability to map sounds to referential meanings (Filippi et al., 2014), and in syntactic disambiguation (Soderstrom et al., 2003). This research could complement studies on iconic communication within visual and auditory domains, providing new insights for models of language evolution. Further work aimed at understanding how emotional cues from different modalities are simultaneously integrated will improve our understanding of how humans interpret multimodal emotional meanings in real-life interactions.
  • Fisher, S. E. (2016). A molecular genetic perspective on speech and language. In G. Hickok, & S. Small (Eds.), Neurobiology of Language (pp. 13-24). Amsterdam: Elsevier. doi:10.1016/B978-0-12-407794-2.00002-X.

    Abstract

    The rise of genomic technologies has yielded exciting new routes for studying the biological foundations of language. Researchers have begun to identify genes implicated in neurodevelopmental disorders that disrupt speech and language skills. This chapter illustrates how such work can provide powerful entry points into the critical neural pathways using FOXP2 as an example. Rare mutations of this gene cause problems with learning to sequence mouth movements during speech, accompanied by wide-ranging impairments in language production and comprehension. FOXP2 encodes a regulatory protein, a hub in a network of other genes, several of which have also been associated with language-related impairments. Versions of FOXP2 are found in similar form in many vertebrate species; indeed, studies of animals and birds suggest conserved roles in the development and plasticity of certain sets of neural circuits. Thus, the contributions of this gene to human speech and language involve modifications of evolutionarily ancient functions.
  • Fitz, H., Chang, F., & Christiansen, M. H. (2011). A connectionist account of the acquisition and processing of relative clauses. In E. Kidd (Ed.), The acquisition of relative clauses. Processing, typology and function (pp. 39-60). Amsterdam: Benjamins.

    Abstract

    Relative clause processing depends on the grammatical role of the head noun in the subordinate clause. This has traditionally been explained in terms of cognitive limitations. We suggest that structure-related processing differences arise from differences in experience with these structures. We present a connectionist model which learns to produce utterances with relative clauses from exposure to message-sentence pairs. The model shows how various factors such as frequent subsequences, structural variations, and meaning conspire to create differences in the processing of these structures. The predictions of this learning-based account have been confirmed in behavioral studies with adults. This work shows that structural regularities that govern relative clause processing can be explained within a usage-based approach to recursion.
