Publications

  • Adam, R., Orfanidou, E., McQueen, J. M., & Morgan, G. (2011). Sign language comprehension: Insights from misperceptions of different phonological parameters. In R. Channon, & H. Van der Hulst (Eds.), Formational units in sign languages (pp. 87-106). Berlin: Mouton de Gruyter and Ishara Press.
  • Akita, K., & Dingemanse, M. (2019). Ideophones (Mimetics, Expressives). In Oxford Research Encyclopedia of Linguistics. Oxford: Oxford University Press. doi:10.1093/acrefore/9780199384655.013.477.

    Abstract

    Ideophones, also termed “mimetics” or “expressives,” are marked words that depict sensory imagery. They are found in many of the world’s languages, and sizable lexical classes of ideophones are particularly well-documented in languages of Asia, Africa, and the Americas. Ideophones are not limited to onomatopoeia like meow and smack, but cover a wide range of sensory domains, such as manner of motion (e.g., plisti plasta ‘splish-splash’ in Basque), texture (e.g., tsaklii ‘rough’ in Ewe), and psychological states (e.g., wakuwaku ‘excited’ in Japanese). Across languages, ideophones stand out as marked words due to special phonotactics, expressive morphology including certain types of reduplication, and relative syntactic independence, in addition to production features like prosodic foregrounding and common co-occurrence with iconic gestures.

    Three intertwined issues have been repeatedly debated in the century-long literature on ideophones. (a) Definition: Isolated descriptive traditions and cross-linguistic variation have sometimes obscured a typologically unified view of ideophones, but recent advances show the promise of a prototype definition of ideophones as conventionalised depictions in speech, with room for language-specific nuances. (b) Integration: The variable integration of ideophones across linguistic levels reveals an interaction between expressiveness and grammatical integration, and has important implications for how to conceive of dependencies between linguistic systems. (c) Iconicity: Ideophones form a natural laboratory for the study of iconic form-meaning associations in natural languages, and converging evidence from corpus and experimental studies suggests important developmental, evolutionary, and communicative advantages of ideophones.
  • Alhama, R. G., Siegelman, N., Frost, R., & Armstrong, B. C. (2019). The role of information in visual word recognition: A perceptually-constrained connectionist account. In A. Goel, C. Seifert, & C. Freksa (Eds.), Proceedings of the 41st Annual Meeting of the Cognitive Science Society (CogSci 2019) (pp. 83-89). Austin, TX: Cognitive Science Society.

    Abstract

    Proficient readers typically fixate near the center of a word, with a slight bias towards word onset. We explore a novel account of this phenomenon based on combining information theory with visual perceptual constraints in a connectionist model of visual word recognition. This account posits that the amount of information content available for word identification varies across fixation locations and across languages, thereby explaining the overall fixation location bias in different languages, making the novel prediction that certain words are more readily identified when fixating at an atypical fixation location, and predicting specific cross-linguistic differences. We tested these predictions across several simulations in English and Hebrew, and in a pilot behavioral experiment. Results confirmed that the bias to fixate closer to word onset aligns with maximizing information in the visual signal, that some words are more readily identified at atypical fixation locations, and that these effects vary to some degree across languages.
  • Allen, S. E. M. (1998). A discourse-pragmatic explanation for the subject-object asymmetry in early null arguments. In A. Sorace, C. Heycock, & R. Shillcock (Eds.), Proceedings of the GALA '97 Conference on Language Acquisition (pp. 10-15). Edinburgh, UK: Edinburgh University Press.

    Abstract

    The present paper assesses discourse-pragmatic factors as a potential explanation for the subject-object asymmetry in early child language. It identifies a set of factors which characterize typical situations of informativeness (Greenfield & Smith, 1976), and uses these factors to identify informative arguments in data from four children aged 2;0 through 3;6 learning Inuktitut as a first language. In addition, it assesses the extent of the links between features of informativeness on the one hand and lexical vs. null and subject vs. object arguments on the other. Results suggest that a pragmatic account of the subject-object asymmetry can be upheld to a greater extent than previous research indicates, and that several of the factors characterizing informativeness are good indicators of those arguments which tend to be omitted in early child language.
  • Allen, S. E. M. (1997). Towards a discourse-pragmatic explanation for the subject-object asymmetry in early null arguments. In NET-Bulletin 1997 (pp. 1-16). Amsterdam, The Netherlands: Instituut voor Functioneel Onderzoek van Taal en Taalgebruik (IFOTT).
  • Anastasopoulos, A., Lekakou, M., Quer, J., Zimianiti, E., DeBenedetto, J., & Chiang, D. (2018). Part-of-speech tagging on an endangered language: a parallel Griko-Italian Resource. In Proceedings of the 27th International Conference on Computational Linguistics (COLING 2018) (pp. 2529-2539).

    Abstract

    Most work on part-of-speech (POS) tagging is focused on high resource languages, or examines low-resource and active learning settings through simulated studies. We evaluate POS tagging techniques on an actual endangered language, Griko. We present a resource that contains 114 narratives in Griko, along with sentence-level translations in Italian, and provides gold annotations for the test set. Based on a previously collected small corpus, we investigate several traditional methods, as well as methods that take advantage of monolingual data or project cross-lingual POS tags. We show that the combination of a semi-supervised method with cross-lingual transfer is more appropriate for this extremely challenging setting, with the best tagger achieving an accuracy of 72.9%. With an applied active learning scheme, which we use to collect sentence-level annotations over the test set, we achieve improvements of more than 21 percentage points.
  • Badimala, P., Mishra, C., Venkataramana, R. K. M., Bukhari, S. S., & Dengel, A. (2019). A Study of Various Text Augmentation Techniques for Relation Classification in Free Text. In Proceedings of the 8th International Conference on Pattern Recognition Applications and Methods (pp. 360-367). Setúbal, Portugal: SciTePress Digital Library. doi:10.5220/0007311003600367.

    Abstract

    Data augmentation techniques have been widely used in visual recognition tasks as it is easy to generate new data by simple and straightforward image transformations. However, when it comes to text data augmentations, it is difficult to find appropriate transformation techniques which also preserve the contextual and grammatical structure of language texts. In this paper, we explore various text data augmentation techniques in text space and word embedding space. We study the effect of various augmented datasets on the efficiency of different deep learning models for relation classification in text.
  • Bardhan, N. P., & Weber, A. (2011). Listening to a novel foreign accent, with long lasting effects [Abstract]. Journal of the Acoustical Society of America. Program abstracts of the 162nd Meeting of the Acoustical Society of America, 130(4), 2445.

    Abstract

    In conversation, listeners frequently encounter speakers with foreign accents. Previous research on foreign-accented speech has primarily examined the short-term effects of exposure and the relative ease that listeners have with adapting to an accent. The present study examines the stability of this adaptation, with seven full days between testing sessions. On both days, subjects performed a cross-modal priming task in which they heard several minutes of an unfamiliar accent of their native language: a form of Hebrew-accented Dutch in which long /i:/ was shortened to /I/. During this task on Day 1, recognition of accented forms was not facilitated, compared to that of canonical forms. A week later, when tested on new words, facilitatory priming occurred, comparable to that seen for canonically produced items tested in both sessions. These results suggest that accented forms can be learned from brief exposure and the stable effects of this can be seen a week later.
  • Bauer, B. L. M. (1997). The adjective in Italic and Romance: Genetic or areal factors affecting word order patterns? In B. Palek (Ed.), Proceedings of LP'96: Typology: Prototypes, item orderings and universals (pp. 295-306). Prague: Charles University Press.
  • Bauer, B. L. M. (1997). Nominal syntax in Italic: A diachronic perspective. In Language change and functional explanations (pp. 273-301). Berlin: Mouton de Gruyter.
  • Bauer, B. L. M. (2011). Word formation. In M. Maiden, J. C. Smith, & A. Ledgeway (Eds.), The Cambridge history of the Romance languages. Vol. I: Structures (pp. 532-563). Cambridge: Cambridge University Press.
  • Bentum, M., Ten Bosch, L., Van den Bosch, A., & Ernestus, M. (2019). Listening with great expectations: An investigation of word form anticipations in naturalistic speech. In Proceedings of Interspeech 2019 (pp. 2265-2269). doi:10.21437/Interspeech.2019-2741.

    Abstract

    The event-related potential (ERP) component named phonological mismatch negativity (PMN) arises when listeners hear an unexpected word form in a spoken sentence [1]. The PMN is thought to reflect the mismatch between expected and perceived auditory speech input. In this paper, we use the PMN to test a central premise in the predictive coding framework [2], namely that the mismatch between prior expectations and sensory input is an important mechanism of perception. We test this with natural speech materials containing approximately 50,000 word tokens. The corresponding EEG-signal was recorded while participants (n = 48) listened to these materials. Following [3], we quantify the mismatch with two word probability distributions (WPD): a WPD based on preceding context, and a WPD that is additionally updated based on the incoming audio of the current word. We use the between-WPD cross entropy for each word in the utterances and show that a higher cross entropy correlates with a more negative PMN. Our results show that listeners anticipate auditory input while processing each word in naturalistic speech. Moreover, complementing previous research, we show that predictive language processing occurs across the whole probability spectrum.
  • Bentum, M., Ten Bosch, L., Van den Bosch, A., & Ernestus, M. (2019). Quantifying expectation modulation in human speech processing. In Proceedings of Interspeech 2019 (pp. 2270-2274). doi:10.21437/Interspeech.2019-2685.

    Abstract

    The mismatch between top-down predicted and bottom-up perceptual input is an important mechanism of perception according to the predictive coding framework (Friston, [1]). In this paper we develop and validate a new information-theoretic measure that quantifies the mismatch between expected and observed auditory input during speech processing. We argue that such a mismatch measure is useful for the study of speech processing. To compute the mismatch measure, we use naturalistic speech materials containing approximately 50,000 word tokens. For each word token we first estimate the prior word probability distribution with the aid of statistical language modelling, and next use automatic speech recognition to update this word probability distribution based on the unfolding speech signal. We validate the mismatch measure with multiple analyses, and show that the auditory-based update improves the probability of the correct word and lowers the uncertainty of the word probability distribution. Based on these results, we argue that it is possible to explicitly estimate the mismatch between predicted and perceived speech input with the cross entropy between word expectations computed before and after an auditory update.
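
    A minimal illustration of the between-WPD cross entropy described in the two abstracts above (the distributions and variable names here are invented for illustration, not taken from the authors' materials): the prior word probability distribution comes from a context-based language model, the updated distribution additionally conditions on the unfolding audio, and the mismatch is the cross entropy between the two.

    import math

    def cross_entropy(prior, updated):
        # H = -sum over words w of P_updated(w) * log2 P_prior(w):
        # the more the auditory update shifts probability onto words the
        # prior considered unlikely, the larger the mismatch value.
        return -sum(p * math.log2(prior[w]) for w, p in updated.items() if p > 0)

    # Hypothetical word probability distributions for one word token.
    prior = {"beer": 0.5, "bear": 0.3, "pier": 0.2}      # from preceding context only
    updated = {"beer": 0.1, "bear": 0.8, "pier": 0.1}    # after hearing the audio
    print(cross_entropy(prior, updated))                  # ~1.72 bits of mismatch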
  • Bentz, C., Dediu, D., Verkerk, A., & Jäger, G. (2018). Language family trees reflect geography and demography beyond neutral drift. In C. Cuskley, M. Flaherty, H. Little, L. McCrohon, A. Ravignani, & T. Verhoef (Eds.), Proceedings of the 12th International Conference on the Evolution of Language (EVOLANG XII) (pp. 38-40). Toruń, Poland: NCU Press. doi:10.12775/3991-1.006.
  • Bergmann, C., Boves, L., & Ten Bosch, L. (2011). Measuring word learning performance in computational models and infants. In Proceedings of the IEEE Conference on Development and Learning, and Epigenetic Robotics. Frankfurt am Main, Germany, 24-27 Aug. 2011.

    Abstract

    In the present paper we investigate the effect of categorising raw behavioural data or computational model responses. In addition, the effect of averaging over stimuli from potentially different populations is assessed. To this end, we replicate studies on word learning and generalisation abilities using the ACORNS models. Our results show that discrete categories may obscure interesting phenomena in the continuous responses. For example, the finding that learning in the model saturates very early at a uniform high recognition accuracy only holds for categorical representations. Additionally, a large difference in the accuracy for individual words is obscured by averaging over all stimuli. Because different words behaved differently for different speakers, we could not identify a phonetic basis for the differences. Implications and new predictions for infant behaviour are discussed.
  • Bergmann, C., Boves, L., & Ten Bosch, L. (2011). Thresholding word activations for response scoring - Modelling psycholinguistic data. In Proceedings of the 12th Annual Conference of the International Speech Communication Association [Interspeech 2011] (pp. 769-772). ISCA.

    Abstract

    In the present paper we investigate the effect of categorising raw behavioural data or computational model responses. In addition, the effect of averaging over stimuli from potentially different populations is assessed. To this end, we replicate studies on word learning and generalisation abilities using the ACORNS models. Our results show that discrete categories may obscure interesting phenomena in the continuous responses. For example, the finding that learning in the model saturates very early at a uniform high recognition accuracy only holds for categorical representations. Additionally, a large difference in the accuracy for individual words is obscured by averaging over all stimuli. Because different words behaved differently for different speakers, we could not identify a phonetic basis for the differences. Implications and new predictions for infant behaviour are discussed.
  • Blythe, J. (2018). Genesis of the trinity: The convergent evolution of trirelational kinterms. In P. McConvell, & P. Kelly (Eds.), Skin, kin and clan: The dynamics of social categories in Indigenous Australia (pp. 431-471). Canberra: ANU EPress.
  • Blythe, J. (2011). Laughter is the best medicine: Roles for prosody in a Murriny Patha conversational narrative. In B. Baker, I. Mushin, M. Harvey, & R. Gardner (Eds.), Indigenous Language and Social Identity: Papers in Honour of Michael Walsh (pp. 223-236). Canberra: Pacific Linguistics.
  • Bohnemeyer, J., Enfield, N. J., Essegbey, J., Majid, A., & van Staden, M. (2011). Configuraciones temáticas atípicas y el uso de predicados complejos en perspectiva tipológica [Atypical thematic configurations and the use of complex predicates in typological perspective]. In A. L. Munguía (Ed.), Colección Estudios Lingüísticos. Vol. I: Fonología, morfología, y tipología semántico-sintáctica [Collection Linguistic Studies. Vol 1: Phonology, morphology, and semantico-syntactic typology] (pp. 173-194). Hermosillo, Mexico: Universidad de Sonora.
  • Bohnemeyer, J., Burenhult, N., Enfield, N. J., & Levinson, S. C. (2011). Landscape terms and place names questionnaire. In K. Kendrick, & A. Majid (Eds.), Field manual volume 14 (pp. 19-23). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.1005606.
  • Bohnemeyer, J. (1998). Temporale Relatoren im Hispano-Yukatekischen Sprachkontakt [Temporal relators in Hispano-Yucatecan language contact]. In A. Koechert, & T. Stolz (Eds.), Convergencia e Individualidad - Las lenguas Mayas entre hispanización e indigenismo [Convergence and individuality - The Mayan languages between Hispanicization and indigenism] (pp. 195-241). Hannover, Germany: Verlag für Ethnologie.
  • Bohnemeyer, J. (1998). Sententiale Topics im Yukatekischen [Sentential topics in Yucatec]. In D. Zaefferer (Ed.), Deskriptive Grammatik und allgemeiner Sprachvergleich [Descriptive grammar and general language comparison] (pp. 55-85). Tübingen, Germany: Max Niemeyer Verlag.
  • Bohnemeyer, J., Enfield, N. J., Essegbey, J., & Kita, S. (2011). The macro-event property: The segmentation of causal chains. In J. Bohnemeyer, & E. Pederson (Eds.), Event representation in language and cognition (pp. 43-67). New York: Cambridge University Press.
  • Bohnemeyer, J. (1997). Yucatec Mayan Lexicalization Patterns in Time and Space. In M. Biemans, & J. van de Weijer (Eds.), Proceedings of the CLS opening of the academic year '97-'98. Tilburg, The Netherlands: University Center for Language Studies.
  • Bottini, R., & Casasanto, D. (2011). Space and time in the child’s mind: Further evidence for a cross-dimensional asymmetry [Abstract]. In L. Carlson, C. Hölscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Conference of the Cognitive Science Society (pp. 3010). Austin, TX: Cognitive Science Society.

    Abstract

    Space and time appear to be related asymmetrically in the child’s mind: temporal representations depend on spatial representations more than vice versa, as predicted by space-time metaphors in language. In a study supporting this conclusion, spatial information interfered with children’s temporal judgments more than vice versa (Casasanto, Fotakopoulou, & Boroditsky, 2010, Cognitive Science). In this earlier study, however, spatial information was available to participants for more time than temporal information was (as is often the case when people observe natural events), suggesting a skeptical explanation for the observed effect. Here we conducted a stronger test of the hypothesized space-time asymmetry, controlling spatial and temporal aspects of the stimuli even more stringently than they are generally ’controlled’ in the natural world. Results replicated those of Casasanto and colleagues, validating their finding of a robust representational asymmetry between space and time, and extending it to children (4-10 y.o.) who speak Dutch and Brazilian Portuguese.
  • Böttner, M. (1997). Natural Language. In C. Brink, W. Kahl, & G. Schmidt (Eds.), Relational Methods in computer science (pp. 229-249). Vienna, Austria: Springer-Verlag.
  • Böttner, M. (1997). Visiting some relatives of Peirce's. In 3rd International Seminar on The use of Relational Methods in Computer Science.

    Abstract

    The notion of relational grammar is extended to ternary relations and illustrated by a fragment of English. Some of Peirce's terms for ternary relations are shown to be incorrect and are corrected.
  • Bowden, J. (1997). The meanings of Directionals in Taba. In G. Senft (Ed.), Referring to Space: Studies in Austronesian and Papuan Languages (pp. 251-268). New York, NY: Oxford University Press.
  • Bowerman, M. (2011). Linguistic typology and first language acquisition. In J. J. Song (Ed.), The Oxford handbook of linguistic typology (pp. 591-617). Oxford: Oxford University Press.
  • Brand, J., Monaghan, P., & Walker, P. (2018). Changing Signs: Testing How Sound-Symbolism Supports Early Word Learning. In C. Kalish, M. Rau, J. Zhu, & T. T. Rogers (Eds.), Proceedings of the 40th Annual Conference of the Cognitive Science Society (CogSci 2018) (pp. 1398-1403). Austin, TX: Cognitive Science Society.

    Abstract

    Learning a language involves learning how to map specific forms onto their associated meanings. Such mappings can utilise arbitrariness and non-arbitrariness, yet how these two systems operate at different stages of vocabulary development is still not fully understood. The Sound-Symbolism Bootstrapping Hypothesis (SSBH) proposes that sound-symbolism is essential for word learning to commence, but empirical evidence of exactly how sound-symbolism influences language learning is still sparse. It may be the case that sound-symbolism supports acquisition of categories of meaning, or that it enables acquisition of individualized word meanings. In two experiments where participants learned form-meaning mappings from either sound-symbolic or arbitrary languages, we demonstrate the changing roles of sound-symbolism and arbitrariness for different vocabulary sizes, showing that sound-symbolism provides an advantage for learning broad categories, which may then transfer to support learning individual words, whereas an arbitrary language impedes acquisition of categories of sound to meaning.
  • Brehm, L., & Goldrick, M. (2018). Connectionist principles in theories of speech production. In S.-A. Rueschemeyer, & M. G. Gaskell (Eds.), The Oxford Handbook of Psycholinguistics (2nd ed., pp. 372-397). Oxford: Oxford University Press.

    Abstract

    This chapter focuses on connectionist modeling in language production, highlighting how core principles of connectionism provide coverage for empirical observations about representation and selection at the phonological, lexical, and sentence levels. The first section focuses on the connectionist principles of localist representations and spreading activation. It discusses how these two principles have motivated classic models of speech production and shows how they cover results of the picture-word interference paradigm, the mixed error effect, and aphasic naming errors. The second section focuses on how newer connectionist models incorporate the principles of learning and distributed representations through discussion of syntactic priming, cumulative semantic interference, sequencing errors, phonological blends, and code-switching.
  • Brehm, L., Jackson, C. N., & Miller, K. L. (2019). Incremental interpretation in the first and second language. In M. Brown, & B. Dailey (Eds.), BUCLD 43: Proceedings of the 43rd annual Boston University Conference on Language Development (pp. 109-122). Somerville, MA: Cascadilla Press.
  • Brenner, D., Warner, N., Ernestus, M., & Tucker, B. V. (2011). Parsing the ambiguity of casual speech: “He was like” or “He’s like”? [Abstract]. The Journal of the Acoustical Society of America, 129(4 Pt. 2), 2683.

    Abstract

    Paper presented at the 161st Meeting of the Acoustical Society of America, Seattle, Washington, 23-27 May 2011. Reduction in casual speech can create ambiguity, e.g., “he was” can sound like “he’s.” Before quotative “like” (“so she’s/she was like…”), it was found that there is little accurate acoustic information about the distinction in the signal. This work examines what types of information (acoustics of the target itself, speech rate, coarticulation, and syntax/semantics) listeners use to recognize such reduced function words. We compare perception studies presenting the targets auditorily with varying amounts of context, presenting the context without the targets, and a visual study presenting context in written form. Given primarily discourse information (visual or auditory context only), subjects are strongly biased toward past, reflecting the use of quotative “like” for reporting past speech. However, if the target itself is presented, the direction of bias reverses, indicating that listeners favor acoustic information within the target (which is reduced, sounding like the shorter, present form) over almost any other source of information. Furthermore, when the target is presented auditorily with surrounding context, the bias shifts slightly toward the direction shown in the orthographic or auditory-no-target experiments. Thus, listeners prioritize acoustic information within the target when present, even if that information is misleading, but they also take discourse information into account.
  • Broeder, D., Sloetjes, H., Trilsbeek, P., Van Uytvanck, D., Windhouwer, M., & Wittenburg, P. (2011). Evolving challenges in archiving and data infrastructures. In G. L. J. Haig, N. Nau, S. Schnell, & C. Wegener (Eds.), Documenting endangered languages: Achievements and perspectives (pp. 33-54). Berlin: De Gruyter.

    Abstract

    Increasingly often, research in the humanities is based on data. This change in attitude and research practice is driven to a large extent by the availability of small and cheap yet high-quality recording equipment (video cameras, audio recorders) as well as advances in information technology (faster networks, larger data storage, larger computation power, suitable software). In some institutes, such as the Max Planck Institute for Psycholinguistics, a clear trend towards an all-digital domain could already be identified in the 90s, making use of state-of-the-art technology for research purposes. This change of habits was one of the reasons for the Volkswagen Foundation to establish the DoBeS program in 2000, with a clear focus on language documentation based on recordings as primary material.
  • Broersma, M. (2011). Triggered code-switching: Evidence from picture naming experiments. In M. S. Schmid, & W. Lowie (Eds.), Modeling bilingualism: From structure to chaos. In honor of Kees de Bot (pp. 37-58). Amsterdam: Benjamins.

    Abstract

    This paper presents experimental evidence that cognates can trigger codeswitching. In two picture naming experiments, Dutch-English bilinguals switched between Dutch and English. Crucial words followed either a cognate or a non-cognate. In Experiment 1, response language was indicated by a color cue, and crucial trials always required a switch. Crucial trials had shorter reaction times after a cognate than after a non-cognate. In Experiment 2, response language was not cued and participants switched freely between the languages. Words after cognates were switched more often than words after non-cognates, for switching from L1 to L2 only. Both experiments thus showed that cognates facilitated language switching of the following word. The results extend evidence for triggered codeswitching from natural speech analyses.
  • Brookshire, G., & Casasanto, D. (2011). Motivation and motor action: Hemispheric specialization for motivation reverses with handedness. In L. Carlson, C. Holscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Meeting of the Cognitive Science Society (pp. 2610-2615). Austin, TX: Cognitive Science Society.
  • Brouwer, S., & Bradlow, A. R. (2011). The influence of noise on phonological competition during spoken word recognition. In W.-S. Lee, & E. Zee (Eds.), Proceedings of the 17th International Congress of Phonetic Sciences 2011 [ICPhS XVII] (pp. 364-367). Hong Kong: Department of Chinese, Translation and Linguistics, City University of Hong Kong.

    Abstract

    Listeners’ interactions often take place in auditorily challenging conditions. We examined how noise affects phonological competition during spoken word recognition. In a visual-world experiment, which allows us to examine the timecourse of recognition, English participants listened to target words in quiet and in noise while they saw four pictures on the screen: a target (e.g. candle), an onset overlap competitor (e.g. candy), an offset overlap competitor (e.g. sandal), and a distractor. The results showed that, while all competitors were relatively quickly suppressed in quiet listening conditions, listeners experienced persistent competition in noise from the offset competitor but not from the onset competitor. This suggests that listeners’ phonological competitor activation persists for longer in noise than in quiet and that listeners are able to deactivate some unwanted competition when listening to speech in noise. The well-attested competition pattern in quiet was not replicated. Possible methodological explanations for this result are discussed.
  • Brown, P. (1998). Early Tzeltal verbs: Argument structure and argument representation. In E. Clark (Ed.), Proceedings of the 29th Annual Stanford Child Language Research Forum (pp. 129-140). Stanford: CSLI Publications.

    Abstract

    The surge of research activity focussing on children's acquisition of verbs (e.g., Tomasello and Merriman 1996) addresses some fundamental questions: Just how variable across languages, and across individual children, is the process of verb learning? How specific are arguments to particular verbs in early child language? How does the grammatical category 'Verb' develop? The position of Universal Grammar, that a verb category is early, contrasts with that of Tomasello (1992), Pine and Lieven and their colleagues (1996, in press), and many others, that children develop a verb category slowly, gradually building up subcategorizations of verbs around pragmatic, syntactic, and semantic properties of the language they are exposed to. On this latter view, one would expect the language which the child is learning, the cultural milieu and the nature of the interactions in which the child is engaged, to influence the process of acquiring verb argument structures. This paper explores these issues by examining the development of argument representation in the Mayan language Tzeltal, in both its lexical and verbal cross-referencing forms, and analyzing the semantic and pragmatic factors influencing the form argument representation takes. Certain facts about Tzeltal (the ergative/ absolutive marking, the semantic specificity of transitive and positional verbs) are proposed to affect the representation of arguments. The first 500 multimorpheme combinations of 3 children (aged between 1;8 and 2;4) are examined. It is argued that there is no evidence of semantically light 'pathbreaking' verbs (Ninio 1996) leading the way into word combinations. There is early productivity of cross-referencing affixes marking A, S, and O arguments (although there are systematic omissions). The paper assesses the respective contributions of three kinds of factors to these results - structural (regular morphology), semantic (verb specificity) and pragmatic (the nature of Tzeltal conversational interaction).
  • Brown, P. (1998). How and why are women more polite: Some evidence from a Mayan community. In J. Coates (Ed.), Language and gender (pp. 81-99). Oxford: Blackwell.
  • Brown, P. (1997). Isolating the CVC root in Tzeltal Mayan: A study of children's first verbs. In E. V. Clark (Ed.), Proceedings of the 28th Annual Child Language Research Forum (pp. 41-52). Stanford, CA: CSLI/University of Chicago Press.

    Abstract

    How do children isolate the semantic package contained in verb roots in the Mayan language Tzeltal? One might imagine that the canonical CVC shape of roots characteristic of Mayan languages would make the job simple, but the root is normally preceded and followed by affixes which mask its identity. Pye (1983) demonstrated that, in Kiche' Mayan, prosodic salience overrides semantic salience, and children's first words in Kiche' are often composed of only the final (stressed) syllable constituted by the final consonant of the CVC root and a 'meaningless' termination suffix. Intonation thus plays a crucial role in early Kiche' morphological development. Tzeltal presents a rather different picture: The first words of children around the age of 1;6 are bare roots, children strip off all prefixes and suffixes which are obligatory in adult speech. They gradually add them, starting with the suffixes (which receive the main stress), but person prefixes are omitted in some contexts past a child's third birthday, and one obligatory aspectual prefix (x-) is systematically omitted by the four children in my longitudinal study even after they are four years old. Tzeltal children's first verbs generally show faultless isolation of the root. An account in terms of intonation or stress cannot explain this ability (the prefixes are not all syllables; the roots are not always stressed). This paper suggests that probable clues include the fact that the CVC root stays constant across contexts (with some exceptions) whereas the affixes vary, that there are some linguistic contexts where the root occurs without any prefixes (relatively frequent in the input), and that the Tzeltal discourse convention of responding by repeating with appropriate deictic alternation (e.g., "I see it." "Oh, you see it.") highlights the root.
  • Brown, P. (2011). Everyone has to lie in Tzeltal [Reprint]. In B. B. Schieffelin, & P. B. Garrett (Eds.), Anthropological linguistics: Critical concepts in language studies. Volume III Talking about language (pp. 59-87). London: Routledge.

    Abstract

    Reprint of Brown, P. (2002). Everyone has to lie in Tzeltal. In S. Blum-Kulka, & C. E. Snow (Eds.), Talking to adults: The contribution of multiparty discourse to language acquisition (pp. 241-275). Mahwah, NJ: Erlbaum. In a famous paper Harvey Sacks (1974) argued that the sequential properties of greeting conventions, as well as those governing the flow of information, mean that 'everyone has to lie'. In this paper I show this dictum to be equally true in the Tzeltal Mayan community of Tenejapa, in southern Mexico, but for somewhat different reasons. The phenomenon of interest is the practice of routine fearsome threats to small children. Based on a longitudinal corpus of videotaped and tape-recorded naturally-occurring interaction between caregivers and children in five Tzeltal families, the study examines sequences of Tzeltal caregivers' speech aimed at controlling the children's behaviour and analyzes the children's developing pragmatic skills in handling such controlling utterances, from prelinguistic infants to age five and over. Infants in this society are considered to be vulnerable, easily scared or shocked into losing their 'souls', and therefore at all costs to be protected and hidden from outsiders and other dangers. Nonetheless, the chief form of control (aside from physically removing a child from danger) is to threaten, saying things like "Don't do that, or I'll take you to the clinic for an injection." These overt scare-threats - rarely actually realized - lead Tzeltal children by the age of 2;6 to 3;0 to the understanding that speech does not necessarily convey true propositions, and to a sensitivity to the underlying motivations for utterances distinct from their literal meaning. By age 4;0 children perform the same role to their younger siblings; they also begin to use more subtle non-true (e.g. ironic) utterances. The caretaker practice described here is related to adult norms of social lying, to the sociocultural context of constraints on information flow, social control through gossip, and the different notion of 'truth' that arises in the context of non-verifiability characteristic of a small-scale nonliterate society.
  • Brown, P. (2011). The cultural organization of attention. In A. Duranti, E. Ochs, & B. B. Schieffelin (Eds.), The handbook of language socialization (pp. 29-55). Malden, MA: Wiley-Blackwell.

    Abstract

    How new social members are enculturated into the interactional practices of the society they grow up in is crucial to an understanding of social interaction, as well as to an understanding of the role of culture in children's social-cognitive development. Modern theories of infant development (e.g., Bruner 1982, Elman et al 1996, Tomasello 1999, Masataka 2003) emphasize the influence of particular interactional practices in the child's developing communicative skills. But interactional practices with infants - behaviors like prompting, pointing, turn-taking routines, and interacting over objects - are culturally shaped by beliefs about what infants need and what they can understand; these practices therefore vary across cultures in both quantity and quality. What effect does this variation have on children's communicative development? This article focuses on one aspect of cultural practice, the interactional organization of attention and how it is socialized in prelinguistic infants. It surveys the literature on the precursors to attention coordination in infancy, leading up to the crucial development of 'joint attention' and pointing behavior around the age of 12 months, and it reports what is known about cultural differences in related interactional practices of adults. It then considers the implications of such differences for infant-caregiver interaction prior to the period when infants begin to speak. I report on my own work on the integration of gaze and pointing in infant/caregiver interaction in two different cultures. One is a Mayan society in Mexico, where interaction with infants during their first year is relatively minimal; the other is on Rossel Island (Papua New Guinea), where interaction with infants is characterized by intensive face-to-face communicative behaviors from shortly after the child's birth. Examination of videotaped naturally-occurring interactions in both societies for episodes of index finger point following and production, and the integration of gaze and vocalization with pointing, reveals that despite the differences in interactional style with infants, pointing for joint attention emerges in infants in both cultures in the 9 -15 month period. However, a comparative perspective on cultural practices in caregiver-infant interactions allows us to refine our understanding of joint attention and its role in the process of learning to become a communicative partner.
  • Brown, P. (2011). Politeness. In P. C. Hogan (Ed.), The Cambridge encyclopedia of the language sciences (pp. 635-636). New York: Cambridge University Press.

    Abstract

    This is an encyclopedia entry surveying theoretical approaches to politeness phenomena in language usage.
  • Brown, P., & Levinson, S. C. (1998). Politeness, introduction to the reissue: A review of recent work. In A. Kasher (Ed.), Pragmatics: Vol. 6 Grammar, psychology and sociology (pp. 488-554). London: Routledge.

    Abstract

    This article is a reprint of chapter 1, the introduction to Brown and Levinson, 1987, Politeness: Some universals in language usage (Cambridge University Press).
  • Brown, P., & Levinson, S. C. (2011). Politeness: Some universals in language use [Reprint]. In D. Archer, & P. Grundy (Eds.), The pragmatics reader (pp. 283-304). London: Routledge.

    Abstract

    Reprinted with permission of Cambridge University Press from: Brown, P. and Levinson, S. C. (1987) Politeness, (©) 1978, 1987, CUP.
  • Brown, P., & Levinson, S. C. (2018). Tzeltal: The demonstrative system. In S. C. Levinson, S. Cutfield, M. Dunn, N. J. Enfield, & S. Meira (Eds.), Demonstratives in cross-linguistic perspective (pp. 150-177). Cambridge: Cambridge University Press.
  • Bruggeman, L., & Cutler, A. (2019). The dynamics of lexical activation and competition in bilinguals’ first versus second language. In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 1342-1346). Canberra, Australia: Australasian Speech Science and Technology Association Inc.

    Abstract

    Speech input causes listeners to activate multiple candidate words which then compete with one another. These include onset competitors, that share a beginning (bumper, butter), but also, counterintuitively, rhyme competitors, sharing an ending (bumper, jumper). In L1, competition is typically stronger for onset than for rhyme. In L2, onset competition has been attested but rhyme competition has heretofore remained largely unexamined. We assessed L1 (Dutch) and L2 (English) word recognition by the same late-bilingual individuals. In each language, eye gaze was recorded as listeners heard sentences and viewed sets of drawings: three unrelated, one depicting an onset or rhyme competitor of a word in the input. Activation patterns revealed substantial onset competition but no significant rhyme competition in either L1 or L2. Rhyme competition may thus be a “luxury” feature of maximally efficient listening, to be abandoned when resources are scarcer, as in listening by late bilinguals, in either language.
  • Burenhult, N., Kruspe, N., & Dunn, M. (2011). Language history and culture groups among Austroasiatic-speaking foragers of the Malay Peninsula. In N. J. Enfield (Ed.), Dynamics of human diversity: The case of mainland Southeast Asia (pp. 257-277). Canberra: Pacific Linguistics.
  • Burenhult, N. (2011). The coding of reciprocal events in Jahai. In N. Evans, A. Gaby, S. C. Levinson, & A. Majid (Eds.), Reciprocals and semantic typology (pp. 163-176). Amsterdam: Benjamins.

    Abstract

    This work explores the linguistic encoding of reciprocal events in Jahai (Aslian, Mon-Khmer, Malay Peninsula) on the basis of linguistic descriptions of the video stimuli of the ‘Reciprocal constructions and situation type’ task (Evans et al. 2004). Reciprocal situation types find expression in three different constructions: distributive verb forms, reciprocal verb forms, and adjunct phrases containing a body part noun. Distributives represent the dominant strategy, reciprocal forms and body part adjuncts being highly restricted across event types and consultants. The distributive and reciprocal morphemes manifest intricate morphological processes typical of Aslian languages. The paper also addresses some analytical problems raised by the data, such as structural ambiguity and restrictions on derivation, as well as individual variation.
  • Burenkova, O. V., & Fisher, S. E. (2019). Genetic insights into the neurobiology of speech and language. In E. Grigorenko, Y. Shtyrov, & P. McCardle (Eds.), All About Language: Science, Theory, and Practice. Baltimore, MD: Paul Brookes Publishing, Inc.
  • Butterfield, S., & Cutler, A. (1990). Intonational cues to word boundaries in clear speech? In Proceedings of the Institute of Acoustics: Vol 12, part 10 (pp. 87-94). St. Albans, Herts.: Institute of Acoustics.
  • Byun, K.-S., De Vos, C., Roberts, S. G., & Levinson, S. C. (2018). Interactive sequences modulate the selection of expressive forms in cross-signing. In C. Cuskley, M. Flaherty, H. Little, L. McCrohon, A. Ravignani, & T. Verhoef (Eds.), Proceedings of the 12th International Conference on the Evolution of Language (EVOLANG XII) (pp. 67-69). Toruń, Poland: NCU Press. doi:10.12775/3991-1.012.
  • Carstensen, A., Khetarpal, N., Majid, A., & Regier, T. (2011). Universals and variation in spatial language and cognition: Evidence from Chichewa. In L. Carlson, C. Hölscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Conference of the Cognitive Science Society (pp. 2315). Austin, TX: Cognitive Science Society.
  • Casasanto, D. (2011). Bodily relativity: The body-specificity of language and thought. In L. Carlson, C. Holscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Meeting of the Cognitive Science Society (pp. 1258-1259). Austin, TX: Cognitive Science Society.
  • Casasanto, D., & Lupyan, G. (2011). Ad hoc cognition [Abstract]. In L. Carlson, C. Hölscher, & T. F. Shipley (Eds.), Proceedings of the 33rd Annual Conference of the Cognitive Science Society (pp. 826). Austin, TX: Cognitive Science Society.

    Abstract

    If concepts, categories, and word meanings are stable, how can people use them so flexibly? Here we explore a possible answer: maybe this stability is an illusion. Perhaps all concepts, categories, and word meanings (CC&Ms) are constructed ad hoc, each time we use them. On this proposal, all words are infinitely polysemous, all communication is ’good enough’, and no idea is ever the same twice. The details of people’s ad hoc CC&Ms are determined by the way retrieval cues interact with the physical, social, and linguistic context. We argue that even the most stable-seeming CC&Ms are instantiated via the same processes as those that are more obviously ad hoc, and vary (a) from one microsecond to the next within a given instantiation, (b) from one instantiation to the next within an individual, and (c) from person to person and group to group as a function of people’s experiential history.
  • Casasanto, D., & De Bruin, A. (2011). Word Up! Directed motor action improves word learning [Abstract]. In L. Carlson, C. Hölscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Conference of the Cognitive Science Society (pp. 1902). Austin, TX: Cognitive Science Society.

    Abstract

    Can simple motor actions help people expand their vocabulary? Here we show that word learning depends on where students place their flash cards after studying them. In Experiment 1, participants learned the definitions of ”alien words” with positive or negative emotional valence. After studying each card, they placed it in one of two boxes (top or bottom), according to its valence. Participants who were instructed to place positive cards in the top box, consistent with Good is Up metaphors, scored about 10.
  • Casillas, M., & Amaral, P. (2011). Learning cues to category membership: Patterns in children’s acquisition of hedges. In C. Cathcart, I.-H. Chen, G. Finley, S. Kang, C. S. Sandy, & E. Stickles (Eds.), Proceedings of the Berkeley Linguistics Society 37th Annual Meeting (pp. 33-45). Linguistic Society of America, eLanguage.

    Abstract

    When we think of children acquiring language, we often think of their acquisition of linguistic structure as separate from their acquisition of knowledge about the world. But it is clear that in the process of learning about language, children consult what they know about the world; and that in learning about the world, children use linguistic cues to discover how items are related to one another. This interaction between the acquisition of linguistic structure and the acquisition of category structure is especially clear in word learning.
  • Chen, H.-C., & Cutler, A. (1997). Auditory priming in spoken and printed word recognition. In H.-C. Chen (Ed.), Cognitive processing of Chinese and related Asian languages (pp. 77-81). Hong Kong: Chinese University Press.
  • Chen, A., & Lai, V. T. (2011). Comb or coat: The role of intonation in online reference resolution in a second language. In W. Zonneveld, & H. Quené (Eds.), Sound and Sounds. Studies presented to M.E.H. (Bert) Schouten on the occasion of his 65th birthday (pp. 57-68). Utrecht: UiL OTS.

    Abstract

    In spoken sentence processing, listeners do not wait till the end of a sentence to decipher what message is conveyed. Rather, they make predictions on the most plausible interpretation at every possible point in the auditory signal on the basis of all kinds of linguistic information (e.g., Eberhard et al. 1995; Altmann and Kamide 1999, 2007). Intonation is one such kind of linguistic information that is efficiently used in spoken sentence processing. The evidence comes primarily from recent work on online reference resolution conducted in the visual-world eyetracking paradigm (e.g., Tanenhaus et al. 1995). In this paradigm, listeners are shown a visual scene containing a number of objects and listen to one or two short sentences about the scene. They are asked to either inspect the visual scene while listening or to carry out the action depicted in the sentence(s) (e.g., 'Touch the blue square'). Listeners' eye movements directed to each object in the scene are monitored and time-locked to pre-defined time points in the auditory stimulus. Their predictions on the upcoming referent and sources for the predictions in the auditory signal are examined by analysing fixations to the relevant objects in the visual scene before the acoustic information on the referent is available.
  • Chen, A. (2011). The developmental path to phonological focus-marking in Dutch. In S. Frota, E. Gorka, & P. Prieto (Eds.), Prosodic categories: Production, perception and comprehension (pp. 93-109). Dordrecht: Springer.

    Abstract

    This paper gives an overview of recent studies on the use of phonological cues (accent placement and choice of accent type) to mark focus in Dutch-speaking children aged between 1;9 and 8;10. It is argued that learning to use phonological cues to mark focus is a gradual process. In the light of the findings in these studies, a first proposal is put forward on the developmental path to adult-like phonological focus-marking in Dutch.
  • Chen, A. (2011). What’s in a rise: Evidence for an off-ramp analysis of Dutch Intonation. In W.-S. Lee, & E. Zee (Eds.), Proceedings of the 17th International Congress of Phonetic Sciences 2011 [ICPhS XVII] (pp. 448-451). Hong Kong: Department of Chinese, Translation and Linguistics, City University of Hong Kong.

    Abstract

    Pitch accents are analysed differently in an on-ramp analysis (i.e. ToBI) and an off-ramp analysis (e.g. Transcription of Dutch intonation - ToDI), two competing approaches in the Autosegmental Metrical tradition. A case in point is the pre-final high rise. A pre-final rise is analysed as H* in ToBI but is phonologically ambiguous between H* and H*L (a (rise-)fall) in ToDI. This is because in ToDI, the L tone of a pre-final H*L can be realised in the following unaccented words and both H* and H*L can show up as a high rise in the accented word. To find out whether there is a two-way phonological contrast in pre-final high rises in Dutch, we examined the distribution of phonologically ambiguous high rises (H*(L)) and their phonetic realisation in different information structural conditions (topic vs. focus), compared to phonologically unambiguous H* and H*L. Results showed that there is indeed a H*L vs. H* contrast in pre-final high rises in Dutch and that H*L is realised as H*(L) when sonorant material is limited in the accented word. These findings provide new evidence for an off-ramp analysis of Dutch intonation and have far-reaching implications for analysis of intonation across languages.
  • Chu, M., & Kita, S. (2011). Microgenesis of gestures during mental rotation tasks recapitulates ontogenesis. In G. Stam, & M. Ishino (Eds.), Integrating gestures: The interdisciplinary nature of gesture (pp. 267-276). Amsterdam: John Benjamins.

    Abstract

    People spontaneously produce gestures when they solve problems or explain their solutions to a problem. In this chapter, we will review and discuss evidence on the role of representational gestures in problem solving. The focus will be on our recent experiments (Chu & Kita, 2008), in which we used Shepard-Metzler type of mental rotation tasks to investigate how spontaneous gestures revealed the development of problem solving strategy over the course of the experiment and what role gesture played in the development process. We found that when solving novel problems regarding the physical world, adults go through similar symbolic distancing (Werner & Kaplan, 1963) and internalization (Piaget, 1968) processes as those that occur during young children’s cognitive development and gesture facilitates such processes.
  • Cohen, E. (2011). “Out with ‘Religion’: A novel framing of the religion debate”. In W. Williams (Ed.), Religion and rights: The Oxford Amnesty Lectures 2008. Manchester: Manchester University Press.
  • Cohen, E., & Barrett, J. L. (2011). In search of "Folk anthropology": The cognitive anthropology of the person. In J. W. Van Huysteen, & E. Wiebe (Eds.), In search of self: Interdisciplinary perspectives on personhood (pp. 104-124). Grand Rapids, MI: Wm. B. Eerdmans Publishing Company.
  • Collins, L. J., Schönfeld, B., & Chen, X. S. (2011). The epigenetics of non-coding RNA. In T. Tollefsbol (Ed.), Handbook of epigenetics: the new molecular and medical genetics (pp. 49-61). London: Academic Press.

    Abstract

    Non-coding RNAs (ncRNAs) have been implicated in the epigenetic marking of many genes. Short regulatory ncRNAs, including miRNAs, siRNAs, piRNAs and snoRNAs, as well as long ncRNAs such as Xist and Air, are discussed in light of recent research on mechanisms regulating chromatin marking and RNA editing. The topic is expanding rapidly, so we will concentrate on examples to highlight the main mechanisms, including simple mechanisms where complementary binding affects methylation or RNA sites. However, other examples, especially with the long ncRNAs, highlight very complex regulatory systems with multiple layers of ncRNA control.
  • Crago, M. B., & Allen, S. E. M. (1998). Acquiring Inuktitut. In O. L. Taylor, & L. Leonard (Eds.), Language Acquisition Across North America: Cross-Cultural And Cross-Linguistic Perspectives (pp. 245-279). San Diego, CA, USA: Singular Publishing Group, Inc.
  • Crago, M. B., & Allen, S. E. M. (1997). Linguistic and cultural aspects of simplicity and complexity in Inuktitut child directed speech. In E. Hughes, M. Hughes, & A. Greenhill (Eds.), Proceedings of the 21st annual Boston University Conference on Language Development (pp. 91-102).
  • Crago, M. B., Allen, S. E. M., & Pesco, D. (1998). Issues of Complexity in Inuktitut and English Child Directed Speech. In Proceedings of the twenty-ninth Annual Stanford Child Language Research Forum (pp. 37-46).
  • Crago, M. B., Allen, S. E. M., & Hough-Eyamie, W. P. (1997). Exploring innateness through cultural and linguistic variation. In M. Gopnik (Ed.), The inheritance and innateness of grammars (pp. 70-90). New York City, NY, USA: Oxford University Press, Inc.
  • Cristia, A., Seidl, A., & Francis, A. L. (2011). Phonological features in infancy. In G. N. Clements, & R. Ridouane (Eds.), Where do phonological contrasts come from? Cognitive, physical and developmental bases of phonological features (pp. 303-326). Amsterdam: Benjamins.

    Abstract

    Features serve two main functions in the phonology of languages: they encode the distinction between pairs of contrastive phonemes (distinctive function); and they delimit sets of sounds that participate in phonological processes and patterns (classificatory function). We summarize evidence from a variety of experimental paradigms bearing on the functional relevance of phonological features. This research shows that while young infants may use abstract phonological features to learn sound patterns, this ability becomes more constrained with development and experience. Furthermore, given the lack of overlap between the ability to learn a pair of words differing in a single feature and the ability to learn sound patterns based on features, we argue for the separation of the distinctive and the classificatory function.
  • Cristia, A., Ganesh, S., Casillas, M., & Ganapathy, S. (2018). Talker diarization in the wild: The case of child-centered daylong audio-recordings. In Proceedings of Interspeech 2018 (pp. 2583-2587). doi:10.21437/Interspeech.2018-2078.

    Abstract

    Speaker diarization (answering 'who spoke when') is a widely researched subject within speech technology. Numerous experiments have been run on datasets built from broadcast news, meeting data, and call centers—the task sometimes appears close to being solved. Much less work has begun to tackle the hardest diarization task of all: spontaneous conversations in real-world settings. Such diarization would be particularly useful for studies of language acquisition, where researchers investigate the speech children produce and hear in their daily lives. In this paper, we study audio gathered with a recorder worn by small children as they went about their normal days. As a result, each child was exposed to different acoustic environments with a multitude of background noises and a varying number of adults and peers. The inconsistency of speech and noise within and across samples poses a challenging task for speaker diarization systems, which we tackled via retraining and data augmentation techniques. We further studied sources of structured variation across raw audio files, including the impact of speaker type distribution, proportion of speech from children, and child age on diarization performance. We discuss the extent to which these findings might generalize to other samples of speech in the wild.
  • Cristia, A., & Seidl, A. (2011). Sensitivity to prosody at 6 months predicts vocabulary at 24 months. In N. Danis, K. Mesh, & H. Sung (Eds.), BUCLD 35: Proceedings of the 35th annual Boston University Conference on Language Development (pp. 145-156). Somerville, Mass: Cascadilla Press.
  • Cutler, A., Burchfield, A., & Antoniou, M. (2019). A criterial interlocutor tally for successful talker adaptation? In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 1485-1489). Canberra, Australia: Australasian Speech Science and Technology Association Inc.

    Abstract

    Part of the remarkable efficiency of listening is accommodation to unfamiliar talkers’ specific pronunciations by retuning of phonemic intercategory boundaries. Such retuning occurs in second (L2) as well as first language (L1); however, recent research with emigrés revealed successful adaptation in the environmental L2 but, unprecedentedly, not in L1 despite continuing L1 use. A possible explanation involving relative exposure to novel talkers is here tested in heritage language users with Mandarin as family L1 and English as environmental language. In English, exposure to an ambiguous sound in disambiguating word contexts prompted the expected adjustment of phonemic boundaries in subsequent categorisation. However, no adjustment occurred in Mandarin, again despite regular use. Participants reported highly asymmetric interlocutor counts in the two languages. We conclude that successful retuning ability requires regular exposure to novel talkers in the language in question, a criterion not met for the emigrés’ or for these heritage users’ L1.
  • Cutler, A., & Otake, T. (1998). Assimilation of place in Japanese and Dutch. In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: Vol. 5 (pp. 1751-1754). Sydney: ICSLP.

    Abstract

    Assimilation of place of articulation across a nasal and a following stop consonant is obligatory in Japanese, but not in Dutch. In four experiments the processing of assimilated forms by speakers of Japanese and Dutch was compared, using a task in which listeners blended pseudo-word pairs such as ranga-serupa. An assimilated blend of this pair would be rampa, an unassimilated blend rangpa. Japanese listeners produced significantly more assimilated than unassimilated forms, both with pseudo-Japanese and pseudo-Dutch materials, while Dutch listeners produced significantly more unassimilated than assimilated forms in each materials set. This suggests that Japanese listeners, whose native-language phonology involves obligatory assimilation constraints, represent the assimilated nasals in nasal-stop sequences as unmarked for place of articulation, while Dutch listeners, who are accustomed to hearing unassimilated forms, represent the same nasal segments as marked for place of articulation.
  • Ip, M. H. K., & Cutler, A. (2018). Asymmetric efficiency of juncture perception in L1 and L2. In K. Klessa, J. Bachan, A. Wagner, M. Karpiński, & D. Śledziński (Eds.), Proceedings of Speech Prosody 2018 (pp. 289-296). Baixas, France: ISCA. doi:10.21437/SpeechProsody.2018-59.

    Abstract

    In two experiments, Mandarin listeners resolved potential syntactic ambiguities in spoken utterances in (a) their native language (L1) and (b) English which they had learned as a second language (L2). A new disambiguation task was used, requiring speeded responses to select the correct meaning for structurally ambiguous sentences. Importantly, the ambiguities used in the study are identical in Mandarin and in English, and production data show that prosodic disambiguation of this type of ambiguity is also realised very similarly in the two languages. The perceptual results here showed however that listeners’ response patterns differed for L1 and L2, although there was a significant increase in similarity between the two response patterns with increasing exposure to the L2. Thus identical ambiguity and comparable disambiguation patterns in L1 and L2 do not lead to immediate application of the appropriate L1 listening strategy to L2; instead, it appears that such a strategy may have to be learned anew for the L2.
  • Ip, M. H. K., & Cutler, A. (2018). Cue equivalence in prosodic entrainment for focus detection. In J. Epps, J. Wolfe, J. Smith, & C. Jones (Eds.), Proceedings of the 17th Australasian International Conference on Speech Science and Technology (pp. 153-156).

    Abstract

    Using a phoneme detection task, the present series of experiments examines whether listeners can entrain to different combinations of prosodic cues to predict where focus will fall in an utterance. The stimuli were recorded by four female native speakers of Australian English who happened to have used different prosodic cues to produce sentences with prosodic focus: a combination of duration cues, mean and maximum F0, F0 range, and a longer pre-target interval before the focused word onset; only mean F0 cues; only pre-target interval; and only duration cues. Results revealed that listeners can entrain in almost every condition except where duration was the only reliable cue. Our findings suggest that listeners are flexible in the cues they use for focus processing.
  • Cutler, A., Burchfield, L. A., & Antoniou, M. (2018). Factors affecting talker adaptation in a second language. In J. Epps, J. Wolfe, J. Smith, & C. Jones (Eds.), Proceedings of the 17th Australasian International Conference on Speech Science and Technology (pp. 33-36).

    Abstract

    Listeners adapt rapidly to previously unheard talkers by adjusting phoneme categories using lexical knowledge, in a process termed lexically-guided perceptual learning. Although this is firmly established for listening in the native language (L1), perceptual flexibility in second languages (L2) is as yet less well understood. We report two experiments examining L1 and L2 perceptual learning, the first in Mandarin-English late bilinguals, the second in Australian learners of Mandarin. Both studies showed stronger learning in L1; in L2, however, learning appeared for the English-L1 group but not for the Mandarin-L1 group. Phonological mapping differences from the L1 to the L2 are suggested as the reason for this result.
  • Cutler, A. (1990). From performance to phonology: Comments on Beckman and Edwards's paper. In J. Kingston, & M. Beckman (Eds.), Papers in laboratory phonology I: Between the grammar and physics of speech (pp. 208-214). Cambridge: Cambridge University Press.
  • Cutler, A. (1998). How listeners find the right words. In Proceedings of the Sixteenth International Congress on Acoustics: Vol. 2 (pp. 1377-1380). Melville, NY: Acoustical Society of America.

    Abstract

    Languages contain tens of thousands of words, but these are constructed from a tiny handful of phonetic elements. Consequently, words resemble one another, or can be embedded within one another (a coup stick snot with standing). The process of spoken-word recognition by human listeners involves activation of multiple word candidates consistent with the input, and direct competition between activated candidate words. Further, human listeners are sensitive, at an early, prelexical, stage of speech processing, to constraints on what could potentially be a word of the language.
  • Cutler, A., Andics, A., & Fang, Z. (2011). Inter-dependent categorization of voices and segments. In W.-S. Lee, & E. Zee (Eds.), Proceedings of the 17th International Congress of Phonetic Sciences [ICPhS 2011] (pp. 552-555). Hong Kong: Department of Chinese, Translation and Linguistics, City University of Hong Kong.

    Abstract

    Listeners performed speeded two-alternative choice between two unfamiliar and relatively similar voices or between two phonetically close segments, in VC syllables. For each decision type (segment, voice), the non-target dimension (voice, segment) either was constant, or varied across four alternatives. Responses were always slower when a non-target dimension varied than when it did not, but the effect of phonetic variation on voice identity decision was stronger than that of voice variation on phonetic identity decision. Cues to voice and segment identity in speech are processed inter-dependently, but hard categorization decisions about voices draw on, and are hence sensitive to, segmental information.
  • Cutler, A., & Farrell, J. (2018). Listening in first and second language. In J. I. Liontas (Ed.), The TESOL encyclopedia of language teaching. New York: Wiley. doi:10.1002/9781118784235.eelt0583.

    Abstract

    Listeners' recognition of spoken language involves complex decoding processes: The continuous speech stream must be segmented into its component words, and words must be recognized despite great variability in their pronunciation (due to talker differences, or to influence of phonetic context, or to speech register) and despite competition from many spuriously present forms supported by the speech signal. L1 listeners deal more readily with all levels of this complexity than L2 listeners. Fortunately, the decoding processes necessary for competent L2 listening can be taught in the classroom. Evidence-based methodologies targeted at the development of efficient speech decoding include teaching of minimal pairs, of phonotactic constraints, and of reduction processes, as well as the use of dictation and L2 video captions.
  • Cutler, A. (1990). Exploiting prosodic probabilities in speech segmentation. In G. Altmann (Ed.), Cognitive models of speech processing: Psycholinguistic and computational perspectives (pp. 105-121). Cambridge, MA: MIT Press.
  • Cutler, A., Treiman, R., & Van Ooijen, B. (1998). Orthografik inkoncistensy ephekts in foneme detektion? In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: Vol. 6 (pp. 2783-2786). Sydney: ICSLP.

    Abstract

    The phoneme detection task is widely used in spoken word recognition research. Alphabetically literate participants, however, are more used to explicit representations of letters than of phonemes. The present study explored whether phoneme detection is sensitive to how target phonemes are, or may be, orthographically realised. Listeners detected the target sounds [b,m,t,f,s,k] in word-initial position in sequences of isolated English words. Response times were faster to the targets [b,m,t], which have consistent word-initial spelling, than to the targets [f,s,k], which are inconsistently spelled, but only when listeners’ attention was drawn to spelling by the presence in the experiment of many irregularly spelled fillers. Within the inconsistent targets [f,s,k], there was no significant difference between responses to targets in words with majority and minority spellings. We conclude that performance in the phoneme detection task is not necessarily sensitive to orthographic effects, but that salient orthographic manipulation can induce such sensitivity.
  • Cutler, A. (1998). Prosodic structure and word recognition. In A. D. Friederici (Ed.), Language comprehension: A biological perspective (pp. 41-70). Heidelberg: Springer.
  • Cutler, A. (1997). Prosody and the structure of the message. In Y. Sagisaka, N. Campbell, & N. Higuchi (Eds.), Computing prosody: Computational models for processing spontaneous speech (pp. 63-66). Heidelberg: Springer.
  • Cutler, A. (1990). Syllabic lengthening as a word boundary cue. In R. Seidl (Ed.), Proceedings of the 3rd Australian International Conference on Speech Science and Technology (pp. 324-328). Canberra: Australian Speech Science and Technology Association.

    Abstract

    Bisyllabic sequences which could be interpreted as one word or two were produced in sentence contexts by a trained speaker, and syllabic durations measured. Listeners judged whether the bisyllables, excised from context, were one word or two. The proportion of two-word choices correlated positively with measured duration, but only for bisyllables stressed on the second syllable. The results may suggest a limit for listener sensitivity to syllabic lengthening as a word boundary cue.
  • Cutler, A. (1998). The recognition of spoken words with variable representations. In D. Duez (Ed.), Proceedings of the ESCA Workshop on Sound Patterns of Spontaneous Speech (pp. 83-92). Aix-en-Provence: Université de Aix-en-Provence.
  • Cutler, A., Norris, D., & Van Ooijen, B. (1990). Vowels as phoneme detection targets. In Proceedings of the First International Conference on Spoken Language Processing (pp. 581-584).

    Abstract

    Phoneme detection is a psycholinguistic task in which listeners' response time to detect the presence of a pre-specified phoneme target is measured. Typically, detection tasks have used consonant targets. This paper reports two experiments in which subjects responded to vowels as phoneme detection targets. In the first experiment, targets occurred in real words, in the second in nonsense words. Response times were long by comparison with consonantal targets. Targets in initial syllables were responded to much more slowly than targets in second syllables. Strong vowels were responded to faster than reduced vowels in real words but not in nonwords. These results suggest that the process of phoneme detection produces different results for vowels and for consonants. We discuss possible explanations for this difference, in particular the possibility of language-specificity.
  • Daly, T., Chen, X. S., & Penny, D. (2011). How old are RNA networks? In L. J. Collins (Ed.), RNA infrastructure and networks (pp. 255-273). New York: Springer Science + Business Media and Landes Bioscience.

    Abstract

    Some major classes of RNAs (such as mRNA, rRNA, tRNA and RNase P) are ubiquitous in all living systems, so are inferred to have arisen early during the origin of life. However, the situation is not so clear for the system of RNA regulatory networks that continue to be uncovered, especially in eukaryotes. It is increasingly being recognised that networks of small RNAs are important for regulation in all cells, but it is not certain whether the origins of these networks are as old as rRNAs and tRNAs. Another group of ncRNAs, including snoRNAs, occurs mainly in archaea and eukaryotes, and their ultimate origin is less certain, although perhaps the simplest hypothesis is that they were present in earlier stages of life and were lost from bacteria. Some RNA networks may trace back to an early stage when there was just RNA and proteins (the RNP-world), before DNA.
  • Danielsen, S., Dunn, M., & Muysken, P. (2011). The spread of the Arawakan languages: A view from structural phylogenetics. In A. Hornborg, & J. D. Hill (Eds.), Ethnicity in ancient Amazonia: Reconstructing past identities from archaeology, linguistics, and ethnohistory (pp. 173-196). Boulder: University Press of Colorado.
  • Delgado, T., Ravignani, A., Verhoef, T., Thompson, B., Grossi, T., & Kirby, S. (2018). Cultural transmission of melodic and rhythmic universals: Four experiments and a model. In C. Cuskley, M. Flaherty, H. Little, L. McCrohon, A. Ravignani, & T. Verhoef (Eds.), Proceedings of the 12th International Conference on the Evolution of Language (EVOLANG XII) (pp. 89-91). Toruń, Poland: NCU Press. doi:10.12775/3991-1.019.
  • Devanna, P., Dediu, D., & Vernes, S. C. (2019). The Genetics of Language: From complex genes to complex communication. In S.-A. Rueschemeyer, & M. G. Gaskell (Eds.), The Oxford Handbook of Psycholinguistics (2nd ed., pp. 865-898). Oxford: Oxford University Press.

    Abstract

    This chapter discusses the genetic foundations of the human capacity for language. It reviews the molecular structure of the genome and the complex molecular mechanisms that allow genetic information to influence multiple levels of biology. It goes on to describe the active regulation of genes and their formation of complex genetic pathways that in turn control the cellular environment and function. At each of these levels, examples of genes and genetic variants that may influence the human capacity for language are given. Finally, it discusses the value of using animal models to understand the genetic underpinnings of speech and language. From this chapter will emerge the complexity of the genome in action and the multidisciplinary efforts that are currently made to bridge the gap between genetics and language.
  • Dideriksen, C., Fusaroli, R., Tylén, K., Dingemanse, M., & Christiansen, M. H. (2019). Contextualizing Conversational Strategies: Backchannel, Repair and Linguistic Alignment in Spontaneous and Task-Oriented Conversations. In A. K. Goel, C. M. Seifert, & C. Freksa (Eds.), Proceedings of the 41st Annual Conference of the Cognitive Science Society (CogSci 2019) (pp. 261-267). Montreal, QC: Cognitive Science Society.

    Abstract

    Do interlocutors adjust their conversational strategies to the specific contextual demands of a given situation? Prior studies have yielded conflicting results, making it unclear how strategies vary with demands. We combine insights from qualitative and quantitative approaches in a within-participant experimental design involving two different contexts: spontaneously occurring conversations (SOC) and task-oriented conversations (TOC). We systematically assess backchanneling, other-repair and linguistic alignment. We find that SOC exhibit a higher number of backchannels, a reduced and more generic repair format and higher rates of lexical and syntactic alignment. TOC are characterized by a high number of specific repairs and a lower rate of lexical and syntactic alignment. However, when alignment occurs, more linguistic forms are aligned. The findings show that conversational strategies adapt to specific contextual demands.
  • Dieuleveut, A., Van Dooren, A., Cournane, A., & Hacquard, V. (2019). Acquiring the force of modals: Sig you guess what sig means? In M. Brown, & B. Dailey (Eds.), BUCLD 43: Proceedings of the 43rd annual Boston University Conference on Language Development (pp. 189-202). Somerville, MA: Cascadilla Press.
  • Dijkstra, T., & Kempen, G. (1997). Het taalgebruikersmodel. In H. Hulshof, & T. Hendrix (Eds.), De taalcentrale. Amsterdam: Bulkboek.
  • Dijkstra, N., & Fikkert, P. (2011). Universal constraints on the discrimination of Place of Articulation? Asymmetries in the discrimination of 'paan' and 'taan' by 6-month-old Dutch infants. In N. Danis, K. Mesh, & H. Sung (Eds.), Proceedings of the 35th Annual Boston University Conference on Language Development. Volume 1 (pp. 170-182). Somerville, MA: Cascadilla Press.
  • Dingemanse, M. (2019). 'Ideophone' as a comparative concept. In K. Akita, & P. Pardeshi (Eds.), Ideophones, Mimetics, and Expressives (pp. 13-33). Amsterdam: John Benjamins. doi:10.1075/ill.16.02din.

    Abstract

    This chapter makes the case for ‘ideophone’ as a comparative concept: a notion that captures a recurrent typological pattern and provides a template for understanding language-specific phenomena that prove similar. It revises an earlier definition to account for the observation that ideophones typically form an open lexical class, and uses insights from canonical typology to explore the larger typological space. According to the resulting definition, a canonical ideophone is a member of an open lexical class of marked words that depict sensory imagery. The five elements of this definition can be seen as dimensions that together generate a possibility space to characterise cross-linguistic diversity in depictive means of expression. This approach allows for the systematic comparative treatment of ideophones and ideophone-like phenomena. Some phenomena in the larger typological space are discussed to demonstrate the utility of the approach: phonaesthemes in European languages, specialised semantic classes in West-Chadic, diachronic diversions in Aslian, and depicting constructions in signed languages.
  • Dingemanse, M., Van Leeuwen, T., & Majid, A. (2011). Mapping across senses: Two cross-modal association tasks. In K. Kendrick, & A. Majid (Eds.), Field manual volume 14 (pp. 11-15). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.1005579.
  • Dingemanse, M. (2011). Ezra Pound among the Mawu: Ideophones and iconicity in Siwu. In P. Michelucci, O. Fischer, & C. Ljungberg (Eds.), Semblance and Signification (pp. 39-54). Amsterdam: John Benjamins.

    Abstract

    The Mawu people of eastern Ghana make common use of ideophones: marked words that depict sensory imagery. Ideophones have been described as “poetry in ordinary language,” yet the shadow of Lévy-Bruhl, who assigned such words to the realm of primitivity, has loomed large over linguistics and literary theory alike. The poet Ezra Pound is a case in point: while his fascination with Chinese characters spawned the ideogrammic method, the mimicry and gestures of the “primitive languages in Africa” were never more than a mere curiosity to him. This paper imagines Pound transposed into the linguaculture of the Mawu. What would have struck him about their ways of ‘charging language’ with imagery? I juxtapose Pound’s views of the poetic image with an analysis of how different layers of iconicity in ideophones combine to depict sensory imagery. This exercise illuminates aspects of what one might call ‘the ideophonic
  • Dingemanse, M., Blythe, J., & Dirksmeyer, T. (2018). Formats for other-initiation of repair across languages: An exercise in pragmatic typology. In I. Nikolaeva (Ed.), Linguistic Typology: Critical Concepts in Linguistics. Vol. 4 (pp. 322-357). London: Routledge.

    Abstract

    In conversation, people regularly deal with problems of speaking, hearing, and understanding. We report on a cross-linguistic investigation of the conversational structure of other-initiated repair (also known as collaborative repair, feedback, requests for clarification, or grounding sequences). We take stock of formats for initiating repair across languages (comparable to English huh?, who?, y’mean X?, etc.) and find that different languages make available a wide but remarkably similar range of linguistic resources for this function. We exploit the patterned variation as evidence for several underlying concerns addressed by repair initiation: characterising trouble, managing responsibility, and handling knowledge. The concerns do not always point in the same direction and thus provide participants in interaction with alternative principles for selecting one format over possible others. By comparing conversational structures across languages, this paper contributes to pragmatic typology: the typology of systems of language use and the principles that shape them.
