Publications

  • Ibarretxe-Antuñano, I. (2012). Placement and removal events in Basque and Spanish. In A. Kopecka, & B. Narasimhan (Eds.), Events of putting and taking: A crosslinguistic perspective (pp. 123-144). Amsterdam: Benjamins.

    Abstract

    This paper examines how placement and removal events are lexicalised and conceptualised in Basque and Peninsular Spanish. After a brief description of the main linguistic devices employed for the coding of these types of events, the paper discusses how speakers of the two languages choose to talk about these events. Finally, the paper focuses on two aspects that seem to be crucial in the description of these events: (1) the role of force dynamics: both languages distinguish between different degrees of force, causality, and intentionality, and (2) the influence of the verb-framed lexicalisation pattern. Data come from six Basque and ten Peninsular Spanish native speakers.
  • Indefrey, P. (2012). Hemodynamic studies of syntactic processing. In M. Faust (Ed.), Handbook of the neuropsychology of language. Volume 1: Language processing in the brain: Basic science (pp. 209-228). Malden, MA: Wiley-Blackwell.
  • Irvine, E., & Roberts, S. G. (2016). Deictic tools can limit the emergence of referential symbol systems. In S. G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Feher, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/99.html.

    Abstract

    Previous experiments and models show that the pressure to communicate can lead to the emergence of symbols in specific tasks. The experiment presented here suggests that the ability to use deictic gestures can reduce the pressure for symbols to emerge in co-operative tasks. In the 'gesture-only' condition, pairs built a structure together in 'Minecraft', and could only communicate using a small range of gestures. In the 'gesture-plus' condition, pairs could also use sound to develop a symbol system if they wished. All pairs were taught a pointing convention. None of the pairs we tested developed a symbol system, and performance was no different across the two conditions. We therefore suggest that deictic gestures, and non-referential means of organising activity sequences, are often sufficient for communication. This suggests that the emergence of linguistic symbols in early hominids may have been late and patchy with symbols only emerging in contexts where they could significantly improve task success or efficiency. Given the communicative power of pointing however, these contexts may be fewer than usually supposed. An approach for identifying these situations is outlined.
  • Irizarri van Suchtelen, P. (2012). Dative constructions in the Spanish of heritage speakers in the Netherlands. In Z. Wąsik, & P. P. Chruszczewski (Eds.), Languages in contact 2011 (pp. 103-118). Wrocław: Philological School of Higher Education in Wrocław Publishing.

    Abstract

    Spanish can use dative as well as non-dative strategies to encode Possessors, Human Sources, Interestees (datives of interest) and Experiencers. In Dutch this optionality is virtually absent, restricting dative encoding mainly to the Recipient of a ditransitive. The present study examines whether this may lead to instability of the non-prototypical dative constructions in the Spanish of Dutch-Spanish bilinguals. Elicited data of 12 Chilean heritage informants from the Netherlands were analyzed. Whereas the evidence on the stability of dative Experiencers was not conclusive, the results indicate that the use of prototypical datives, dative External Possessors, dative Human Sources and datives of interest is fairly stable in bilinguals, except for those with limited childhood exposure to Spanish. It is argued that the consistent preference for non-dative strategies in this group was primarily attributable to instability of the dative clitic, which affected all constructions, even the encoding of prototypical indirect objects.
  • Ishibashi, M. (2012). The expression of ‘putting’ and ‘taking’ events in Japanese: The asymmetry of Source and Goal revisited. In A. Kopecka, & B. Narasimhan (Eds.), Events of putting and taking: A crosslinguistic perspective (pp. 253-272). Amsterdam: Benjamins.

    Abstract

    This study explores the expression of Source and Goal in describing placement and removal events in adult Japanese. Although placement and removal events a priori represent symmetry regarding the orientation of motion, their (c)overt expressions actually exhibit multiple asymmetries at various structural levels. The results show that the expression of the Source is less frequent than the expression of the Goal, but, if expressed, morphosyntactically more complex, suggesting that ‘taking’ events are more complex than ‘putting’ events in their construal. It is stressed that finer linguistic analysis is necessary before explaining linguistic asymmetries in terms of non-linguistic foundations of spatial language.
  • Janssen, R., Winter, B., Dediu, D., Moisik, S. R., & Roberts, S. G. (2016). Nonlinear biases in articulation constrain the design space of language. In S. G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Feher, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/86.html.

    Abstract

    In Iterated Learning (IL) experiments, a participant’s learned output serves as the next participant’s learning input (Kirby et al., 2014). IL can be used to model cultural transmission and has indicated that weak biases can be amplified through repeated cultural transmission (Kirby et al., 2007). So, for example, structural language properties can emerge over time because languages come to reflect the cognitive constraints in the individuals that learn and produce the language. Similarly, we propose that languages may also reflect certain anatomical biases. Do sound systems adapt to the affordances of the articulation space induced by the vocal tract?
    The human vocal tract has inherent nonlinearities which might derive from acoustics and aerodynamics (cf. quantal theory, see Stevens, 1989) or biomechanics (cf. Gick & Moisik, 2015). For instance, moving the tongue anteriorly along the hard palate to produce a fricative does not result in large changes in acoustics in most cases, but for a small range there is an abrupt change from a perceived palato-alveolar [ʃ] to alveolar [s] sound (Perkell, 2012). Nonlinearities such as these might bias all human speakers to converge on a very limited set of phonetic categories, and might even be a basis for combinatoriality or phonemic ‘universals’.
    While IL typically uses discrete symbols, Verhoef et al. (2014) have used slide whistles to produce a continuous signal. We conducted an IL experiment with human subjects who communicated using a digital slide whistle for which the degree of nonlinearity is controlled. A single parameter (α) changes the mapping from slide whistle position (the ‘articulator’) to the acoustics. With α=0, the position of the slide whistle maps Bark-linearly to the acoustics. As α approaches 1, the mapping gets more double-sigmoidal, creating three plateaus where large ranges of positions map to similar frequencies. In more abstract terms, α represents the strength of a nonlinear (anatomical) bias in the vocal tract.
    Six chains (138 participants) of dyads were tested, each chain with a different, fixed α. Participants had to communicate four meanings by producing a continuous signal using the slide-whistle in a ‘director-matcher’ game, alternating roles (cf. Garrod et al., 2007).
    Results show that for high αs, subjects quickly converged on the plateaus. This quick convergence is indicative of a strong bias that repels subjects away from unstable regions even within a single participant's trials. Furthermore, high αs led to the emergence of signals that oscillate between two (out of three) plateaus. Because the sigmoidal spaces are spatially constrained, participants increasingly used the sequential/temporal dimension. As a result, the average duration of signals with high α was ~100ms longer than with low α. These oscillations could be an expression of a basis for phonemic combinatoriality.
    We have shown that it is possible to manipulate the magnitude of an articulator-induced non-linear bias in a slide whistle IL framework. The results suggest that anatomical biases might indeed constrain the design space of language. In particular, the signaling systems in our study quickly converged (within-subject) on the use of stable regions. While these conclusions were drawn from experiments using slide whistles with a relatively strong bias, weaker biases could possibly be amplified over time by repeated cultural transmission, and likely lead to similar outcomes.
  • Janssen, R., Dediu, D., & Moisik, S. R. (2016). Simple agents are able to replicate speech sounds using 3d vocal tract model. In S. G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Feher, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/97.html.

    Abstract

    Many factors have been proposed to explain why groups of people use different speech sounds in their language. These range from cultural, cognitive, and environmental factors (e.g., Everett et al., 2015) to anatomical ones (e.g., vocal tract (VT) morphology). How could such anatomical properties have led to the similarities and differences in speech sound distributions between human languages?

    It is known that hard palate profile variation can induce different articulatory strategies in speakers (e.g., Brunner et al., 2009). That is, different hard palate profiles might induce a kind of bias on speech sound production, easing some types of sounds while impeding others. In a population of speakers in which a proportion of individuals share certain anatomical properties, even subtle VT biases might become expressed at the population level (through, e.g., bias amplification; Kirby et al., 2007). However, before we look into population-level effects, we should first look at within-individual anatomical factors. For that, we have developed a computer-simulated analogue for a human speaker: an agent. Our agent is designed to replicate speech sounds using a production and cognition module in a computationally tractable manner.

    Previous agent models have often used more abstract (e.g., symbolic) signals (e.g., Kirby et al., 2007). We have equipped our agent with a three-dimensional model of the VT (the production module, based on Birkholz, 2005) to which we made numerous adjustments. Specifically, we used a 4th-order Bézier curve that is able to capture hard palate variation on the mid-sagittal plane (XXX, 2015). Using an evolutionary algorithm, we were able to fit the model to human hard palate MRI tracings, yielding high-accuracy fits while using as few as two parameters. Finally, we show that the fitted samples are well dispersed in the parameter space, demonstrating that the model cannot generate unrealistic profiles. We can thus use this procedure to import palate measurements into our agent’s production module to investigate the effects on acoustics. We can also exaggerate or introduce novel biases.

    Our agent is able to control the VT model using the cognition module.

    Previous research has focused on detailed neurocomputation (e.g., Kröger et al., 2014) that highlights, e.g., neurobiological principles or speech recognition performance. However, the brain is not the focus of our current study. Furthermore, present-day computing throughput likely does not allow for large-scale deployment of these architectures, as required by the population model we are developing. Thus, the question of whether a very simple cognition module can replicate sounds in a computationally tractable manner, and even generalize over novel stimuli, is worthy of attention in its own right.

    Our agent’s cognition module is based on running an evolutionary algorithm on a large population of feed-forward neural networks (NNs). As such, (anatomical) bias strength can be thought of as an attractor basin area within the parameter space the agent has to explore. The NN we used consists of a triple-layered (fully connected), directed graph. The input layer (three neurons) receives the formant frequencies of a target sound. The output layer (12 neurons) projects to the articulators in the production module. A hidden layer (seven neurons) enables the network to deal with nonlinear dependencies. The Euclidean distance (first three formants) between target and replication is used as the fitness measure. Results show that sound replication is indeed possible, with Euclidean distance quickly approaching a close-to-zero asymptote.

    Statistical analysis should reveal if the agent can also: a) Generalize: Can it replicate sounds not exposed to during learning? b) Replicate consistently: Do different, isolated agents always converge on the same sounds? c) Deal with consolidation: Can it still learn new sounds after an extended learning phase (‘infancy’) has been terminated? Finally, a comparison with more complex models will be used to demonstrate robustness.
  • Jeske, J., Kember, H., & Cutler, A. (2016). Native and non-native English speakers' use of prosody to predict sentence endings. In Proceedings of the 16th Australasian International Conference on Speech Science and Technology (SST2016).
  • Jordens, P. (1998). Defaultformen des Präteritums. Zum Erwerb der Vergangenheitsmorphologie im Niederländischen. In H. Wegener (Ed.), Eine zweite Sprache lernen (pp. 61-88). Tübingen, Germany: Verlag Gunter Narr.
  • Jordens, P., & Dimroth, C. (2006). Finiteness in children and adults learning Dutch. In N. Gagarina, & I. Gülzow (Eds.), The acquisition of verbs and their grammar: The effect of particular languages (pp. 173-200). Dordrecht: Springer.
  • Jordens, P. (2006). Inversion as an artifact: The acquisition of topicalization in child L1- and adult L2-Dutch. In S. H. Foster-Cohen, M. Medved Krajnovic, & J. Mihaljevic Djigunovic (Eds.), EUROSLA Yearbook 6 (pp. 101-120).
  • Kastens, K. (2020). The Jerome Bruner Library treasure. In M. E. Poulsen (Ed.), The Jerome Bruner Library: From New York to Nijmegen (pp. 29-34). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Keating, E. (1995). Pilot questionnaire to investigate social uses of space, especially as related to 1) linguistic practices and 2) social organization. In D. Wilkins (Ed.), Extensions of space and beyond: manual for field elicitation for the 1995 field season (pp. 17-21). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3004227.

    Abstract

    Day-to-day interpretations of “space” are enmeshed in specific cultural and linguistic practices. For example, many cultures have an association between vertical height and social standing; more powerful people may be placed literally higher than others at social gatherings, and be spoken of as having higher status. This questionnaire is a guide for exploring relationships between space, language, and social structure. The goal is to better understand how space is organised in the focus community, and to investigate the extent to which space is used as a model for reproducing social forms.
  • Kember, H., Choi, J., & Cutler, A. (2016). Processing advantages for focused words in Korean. In J. Barnes, A. Brugos, S. Shattuck-Hufnagel, & N. Veilleux (Eds.), Proceedings of Speech Prosody 2016 (pp. 702-705).

    Abstract

    In Korean, focus is expressed in accentual phrasing. To ascertain whether words focused in this manner enjoy a processing advantage analogous to that conferred by focus as expressed in, e.g., English and Dutch, we devised sentences with target words in one of four conditions: prosodic focus, syntactic focus, prosodic + syntactic focus, and no focus as a control. 32 native speakers of Korean listened to blocks of 10 sentences, then were presented visually with words and asked whether or not they had heard them. Overall, words with focus were recognised significantly faster and more accurately than unfocused words. In addition, words with syntactic focus or syntactic + prosodic focus were recognised faster than words with prosodic focus alone. As for other languages, Korean focus confers a processing advantage on the words carrying it. While prosodic focus does provide an advantage, however, syntactic focus appears to provide the greater beneficial effect for recognition memory.
  • Kempen, G. (1986). Beyond word processing. In E. Cluff, & G. Bunting (Eds.), Information management yearbook 1986 (pp. 178-181). London: IDPM Publications.
  • Kempen, G., & Harbusch, K. (1998). A 'tree adjoining' grammar without adjoining: The case of scrambling in German. In Fourth International Workshop on Tree Adjoining Grammars and Related Frameworks (TAG+4).
  • Kempen, G. (1986). Kunstmatige intelligentie en gezond verstand. In P. Hagoort, & R. Maessen (Eds.), Geest, computer, kunst (pp. 118-123). Utrecht: Stichting Grafiet.
  • Kempen, G. (1998). Sentence parsing. In A. D. Friederici (Ed.), Language comprehension: A biological perspective (pp. 213-228). Berlin: Springer.
  • Kemps-Snijders, M., Ducret, J., Romary, L., & Wittenburg, P. (2006). An API for accessing the data category registry. In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC 2006) (pp. 2299-2302).
  • Kemps-Snijders, M., Nederhof, M.-J., & Wittenburg, P. (2006). LEXUS, a web-based tool for manipulating lexical resources. In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC 2006) (pp. 1862-1865).
  • Khoe, Y. H., Tsoukala, C., Kootstra, G. J., & Frank, S. L. (2020). Modeling cross-language structural priming in sentence production. In T. C. Stewart (Ed.), Proceedings of the 18th Annual Meeting of the International Conference on Cognitive Modeling (pp. 131-137). University Park, PA, USA: The Penn State Applied Cognitive Science Lab.

    Abstract

    A central question in the psycholinguistic study of multilingualism is how syntax is shared across languages. We implement a model to investigate whether error-based implicit learning can provide an account of cross-language structural priming. The model is based on the Dual-path model of sentence production (Chang, 2002). We implement our model using the Bilingual version of Dual-path (Tsoukala, Frank, & Broersma, 2017). We answer two main questions: (1) Can structural priming of active and passive constructions occur between English and Spanish in a bilingual version of the Dual-path model? (2) Does cross-language priming differ quantitatively from within-language priming in this model? Our results show that cross-language priming does occur in the model. This finding adds to the viability of implicit learning as an account of structural priming in general and cross-language structural priming specifically. Furthermore, we find that the within-language priming effect is somewhat stronger than the cross-language effect. In the context of mixed results from behavioral studies, we interpret the latter finding as an indication that the difference between cross-language and within-language priming is small and difficult to detect statistically.
  • Kidd, E., Bidgood, A., Donnelly, S., Durrant, S., Peter, M. S., & Rowland, C. F. (2020). Individual differences in first language acquisition and their theoretical implications. In C. F. Rowland, A. L. Theakston, B. Ambridge, & K. E. Twomey (Eds.), Current Perspectives on Child Language Acquisition: How children use their environment to learn (pp. 189-219). Amsterdam: John Benjamins. doi:10.1075/tilar.27.09kid.

    Abstract

    Much of Lieven’s pioneering work has helped move the study of individual differences to the centre of child language research. The goal of the present chapter is to illustrate how the study of individual differences provides crucial insights into the language acquisition process. In part one, we summarise some of the evidence showing how pervasive individual differences are across the whole of the language system; from gestures to morphosyntax. In part two, we describe three causal factors implicated in explaining individual differences, which, we argue, must be built into any theory of language acquisition (intrinsic differences in the neurocognitive learning mechanisms, the child’s communicative environment, and developmental cascades in which each new linguistic skill that the child has to acquire depends critically on the prior acquisition of foundational abilities). In part three, we present an example study on the role of the speed of linguistic processing on vocabulary development, which illustrates our approach to individual differences. The results show evidence of a changing relationship between lexical processing speed and vocabulary over developmental time, perhaps as a result of the changing nature of the structure of the lexicon. The study thus highlights the benefits of an individual differences approach in building, testing, and constraining theories of language acquisition.
  • Kidd, E. (2006). The acquisition of complement clause constructions. In E. V. Clark, & B. F. Kelly (Eds.), Constructions in acquisition (pp. 311-332). Stanford: Center for the Study of Language and Information.
  • Kirschenbaum, A., Wittenburg, P., & Heyer, G. (2012). Unsupervised morphological analysis of small corpora: First experiments with Kilivila. In F. Seifart, G. Haig, N. P. Himmelmann, D. Jung, A. Margetts, & P. Trilsbeek (Eds.), Potentials of language documentation: Methods, analyses, and utilization (pp. 32-38). Honolulu: University of Hawai'i Press.

    Abstract

    Language documentation involves linguistic analysis of the collected material, which is typically done manually. Automatic methods for language processing usually require large corpora. The method presented in this paper uses techniques from bioinformatics and contextual information to morphologically analyze raw text corpora. This paper presents initial results of the method when applied to a small Kilivila corpus.
  • Kita, S. (1995). Enter/exit animation for linguistic elicitation. In D. Wilkins (Ed.), Extensions of space and beyond: manual for field elicitation for the 1995 field season (pp. 13). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3003394.

    Abstract

    This task investigates the expression of “enter” and “exit” events, and is a supplement to the Motion Elicitation task (https://doi.org/10.17617/2.3003391). Consultants are asked to describe a series of animated clips where a man moves into or out of a house. The clips focus on contrasts to do with perspective (e.g., whether the man appears to move away or towards the viewer) and transitional movement (e.g., whether the man walks or “teleports” into his new location).

    Additional information

    1995_Enter_exit_animation_stimuli.zip
  • Kita, S., van Gijn, I., & van der Hulst, H. (1998). Movement phases in signs and co-speech gestures, and their transcription by human coders. In Gesture and Sign-Language in Human-Computer Interaction (Lecture Notes in Artificial Intelligence - LNCS Subseries, Vol. 1371) (pp. 23-35). Berlin, Germany: Springer-Verlag.

    Abstract

    The previous literature has suggested that the hand movement in co-speech gestures and signs consists of a series of phases with qualitatively different dynamic characteristics. In this paper, we propose a syntagmatic rule system for movement phases that applies to both co-speech gestures and signs. Descriptive criteria for the rule system were developed for the analysis of video-recorded continuous production of signs and gestures. It involves segmenting a stream of body movement into phases and identifying different phase types. Two human coders used the criteria to analyze signs and co-speech gestures produced in natural discourse. It was found that the criteria yielded good inter-coder reliability. These criteria can be used in the technology of automatic recognition of signs and co-speech gestures in order to segment continuous production and identify the potentially meaning-bearing phases.
  • Kita, S. (1995). Recommendations for data collection for gesture studies. In D. Wilkins (Ed.), Extensions of space and beyond: manual for field elicitation for the 1995 field season (pp. 35-45). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3004287.

    Abstract

    Do our hands 'speak the same language' across cultures? Gesture is the silent partner of spoken languages in face-to-face interaction, but we still have a lot to learn about gesture practices in different speech communities. The primary purpose of this task is to collect data in naturalistic settings that can be used to investigate the linguistic and cultural relativity of gesture performance, especially spatially indicative gestures. It involves video-recording pairs of speakers in both free conversation and more structured communication tasks (e.g., describing film plots).

    Please note: the stimuli mentioned in this entry are available elsewhere: 'The Pear Story', a short film made at the University of California at Berkeley; "Frog, where are you?" from the original Mayer (1969) book, as published in the Appendix of Berman & Slobin (1994).
  • Klassmann, A., Offenga, F., Broeder, D., Skiba, R., & Wittenburg, P. (2006). Comparison of resource discovery methods. In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC 2006) (pp. 113-116).
  • Klein, W. (2006). On finiteness. In V. Van Geenhoven (Ed.), Semantics in acquisition (pp. 245-272). Dordrecht: Springer.

    Abstract

    The distinction between finite and non-finite verb forms is well-established but not particularly well-defined. It cannot just be a matter of verb morphology, because it is also made when there is hardly any morphological difference: by far most English verb forms can be finite as well as non-finite. More importantly, many structural phenomena are clearly associated with the presence or absence of finiteness, a fact which is clearly reflected in the early stages of first and second language acquisition. In syntax, these include basic word order rules, gapping, the licensing of a grammatical subject and the licensing of expletives. In semantics, the specific interpretation of indefinite noun phrases is crucially linked to the presence of a finite element. These phenomena are surveyed, and it is argued that finiteness (a) links the descriptive content of the sentence (the 'sentence basis') to its topic component (in particular, to its topic time), and (b) it confines the illocutionary force to that topic component. In a declarative main clause, for example, the assertion is confined to a particular time, the topic time. It is shown that most of the syntactic and semantic effects connected to finiteness naturally follow from this assumption.
  • Klein, W. (2012). Auf dem Markt der Wissenschaften oder: Weniger wäre mehr. In K. Sonntag (Ed.), Heidelberger Profile. Herausragende Persönlichkeiten berichten über ihre Begegnung mit Heidelberg. (pp. 61-84). Heidelberg: Universitätsverlag Winter.
  • Klein, W. (1969). Bibliographie zur maschinellen syntaktischen Analyse. In H. Eggers, & R. Dietrich (Eds.), Elektronische Syntaxanalyse der deutschen Gegenwartssprache (pp. 165-177). Tübingen: Niemeyer.
  • Klein, W. (1995). A simplest analysis of the English tense-aspect system. In W. Riehle, & H. Keiper (Eds.), Proceedings of the Anglistentag 1994 (pp. 139-151). Tübingen: Niemeyer.
  • Klein, W. (2012). A way to look at second language acquisition. In M. Watorek, S. Benazzo, & M. Hickmann (Eds.), Comparative perspectives on language acquisition: A tribute to Clive Perdue (pp. 23-36). Bristol: Multilingual Matters.
  • Klein, W. (2012). Alle zwei Wochen verschwindet eine Sprache. In G. Stock (Ed.), Die Akademie am Gendarmenmarkt 2012/13, Jahresmagazin 2012/13 (pp. 8-13). Berlin: Berlin-Brandenburgische Akademie der Wissenschaften.
  • Klein, W., Dietrich, R., & Noyau, C. (1995). Conclusions. In R. Dietrich, W. Klein, & C. Noyau (Eds.), The acquisition of temporality in a second language (pp. 261-280). Amsterdam: Benjamins.
  • Klein, W. (1998). Ein Blick zurück auf die Varietätengrammatik. In U. Ammon, K. Mattheier, & P. Nelde (Eds.), Sociolinguistica: Internationales Jahrbuch für europäische Soziolinguistik (pp. 22-38). Tübingen: Niemeyer.
  • Klein, W. (2012). Die Sprache der Denker. In J. Voss, & M. Stolleis (Eds.), Fachsprachen und Normalsprache (pp. 49-60). Göttingen: Wallstein.
  • Klein, W., & Perdue, C. (1986). Comment résoudre une tâche verbale complexe avec peu de moyens linguistiques? In A. Giacomi, & D. Véronique (Eds.), Acquisition d'une langue étrangère (pp. 306-330). Aix-en-Provence: Service des Publications de l'Universite de Provence.
  • Klein, W. (1998). Assertion and finiteness. In N. Dittmar, & Z. Penner (Eds.), Issues in the theory of language acquisition: Essays in honor of Jürgen Weissenborn (pp. 225-245). Bern: Peter Lang.
  • Klein, W. (1995). Frame of analysis. In R. Dietrich, W. Klein, & C. Noyau (Eds.), The acquisition of temporality in a second language (pp. 17-29). Amsterdam: Benjamins.
  • Klein, W. (2012). Grußworte. In C. Markschies, & E. Osterkamp (Eds.), Vademekum der Inspirationsmittel (pp. 63-65). Göttingen: Wallstein.
  • Klein, W. (1986). Intonation und Satzmodalität in einfachen Fällen: Einige Beobachtungen. In E. Slembek (Ed.), Miteinander sprechen und handeln: Festschrift für Hellmut Geissner (pp. 161-177). Königstein Ts.: Scriptor.
  • Klein, W., Coenen, J., Van Helvert, K., & Hendriks, H. (1995). The acquisition of Dutch. In R. Dietrich, W. Klein, & C. Noyau (Eds.), The acquisition of temporality in a second language (pp. 117-143). Amsterdam: Benjamins.
  • Klein, W. (1995). The acquisition of English. In R. Dietrich, W. Klein, & C. Noyau (Eds.), The acquisition of temporality in a second language (pp. 31-70). Amsterdam: Benjamins.
  • Klein, W. (1995). Sprachverhalten. In M. Amelang, & Pawlik (Eds.), Enzyklopädie der Psychologie (pp. 469-505). Göttingen: Hogrefe.
  • Klein, W., & Vater, H. (1998). The perfect in English and German. In L. Kulikov, & H. Vater (Eds.), Typology of verbal categories: Papers presented to Vladimir Nedjalkov on the occasion of his 70th birthday (pp. 215-235). Tübingen: Niemeyer.
  • Klein, W. (2012). The information structure of French. In M. Krifka, & R. Musan (Eds.), The expression of information structure (pp. 95-126). Berlin: de Gruyter.
  • Klein, W. (1969). Zum Begriff der syntaktischen Analyse. In H. Eggers, & R. Dietrich (Eds.), Elektronische Syntaxanalyse der deutschen Gegenwartssprache (pp. 20-37). Tübingen: Niemeyer.
  • Kopecka, A. (2006). The semantic structure of motion verbs in French: Typological perspectives. In M. Hickmann, & S. Robert (Eds.), Space in languages: Linguistic systems and cognitive categories (pp. 83-102). Amsterdam: Benjamins.
  • Kopecka, A. (2012). Semantic granularity of placement and removal expressions in Polish. In A. Kopecka, & B. Narasimhan (Eds.), Events of putting and taking: A crosslinguistic perspective (pp. 327-348). Amsterdam: Benjamins.

    Abstract

    This chapter explores the expression of placement (or Goal-oriented) and removal (or Source-oriented) events by speakers of Polish (a West Slavic language). Its aim is to investigate the hypothesis known as ‘Source/Goal asymmetry’ according to which languages tend to favor the expression of Goals (e.g., into, onto) and to encode them more systematically and in a more fine-grained way than Sources (e.g., from, out of). The study provides both evidence and counter-evidence for Source/Goal asymmetry. On the one hand, it shows that Polish speakers use a greater variety of verbs to convey Manner and/or mode of manipulation in the expression of placement, encoding such events in a more fine-grained manner than removal events. The expression of placement is also characterized by a greater variety of verb prefixes conveying Path and prepositional phrases (including prepositions and case markers) conveying Ground. On the other hand, the study reveals that Polish speakers attend to Sources as often as to Goals, revealing no evidence for an attentional bias toward the endpoints of events.
  • Kouwenhoven, H., & Van Mulken, M. (2012). The perception of self in L1 and L2 for Dutch-English compound bilinguals. In N. De Jong, K. Juffermans, M. Keijzer, & L. Rasier (Eds.), Papers of the Anéla 2012 Applied Linguistics Conference (pp. 326-335). Delft: Eburon.
  • Kuijpers, C. T., Coolen, R., Houston, D., & Cutler, A. (1998). Using the head-turning technique to explore cross-linguistic performance differences. In C. Rovee-Collier, L. Lipsitt, & H. Hayne (Eds.), Advances in infancy research: Vol. 12 (pp. 205-220). Stamford: Ablex.
  • Kuzla, C., Mitterer, H., Ernestus, M., & Cutler, A. (2006). Perceptual compensation for voice assimilation of German fricatives. In P. Warren, & I. Watson (Eds.), Proceedings of the 11th Australasian International Conference on Speech Science and Technology (pp. 394-399).

    Abstract

    In German, word-initial lax fricatives may be produced with substantially reduced glottal vibration after voiceless obstruents. This assimilation occurs more frequently and to a larger extent across prosodic word boundaries than across phrase boundaries. Assimilatory devoicing makes the fricatives more similar to their tense counterparts and could thus hinder word recognition. The present study investigates how listeners cope with assimilatory devoicing. Results of a cross-modal priming experiment indicate that listeners compensate for assimilation in appropriate contexts. Prosodic structure moderates compensation for assimilation: Compensation occurs especially after phrase boundaries, where devoiced fricatives are sufficiently long to be confused with their tense counterparts.
  • Kuzla, C., Ernestus, M., & Mitterer, H. (2006). Prosodic structure affects the production and perception of voice-assimilated German fricatives. In R. Hoffmann, & H. Mixdorff (Eds.), Speech prosody 2006. Dresden: TUD Press.

    Abstract

    Prosodic structure has long been known to constrain phonological processes [1]. More recently, it has also been recognized as a source of fine-grained phonetic variation of speech sounds. In particular, segments in domain-initial position undergo prosodic strengthening [2, 3], which also implies more resistance to coarticulation in higher prosodic domains [5]. The present study investigates the combined effects of prosodic strengthening and assimilatory devoicing on word-initial fricatives in German, the functional implication of both processes for cues to the fortis-lenis contrast, and the influence of prosodic structure on listeners’ compensation for assimilation. Results indicate that 1. Prosodic structure modulates duration and the degree of assimilatory devoicing, 2. Phonological contrasts are maintained by speakers, but differ in phonetic detail across prosodic domains, and 3. Compensation for assimilation in perception is moderated by prosodic structure and lexical constraints.
  • Kuzla, C., Mitterer, H., & Ernestus, M. (2006). Compensation for assimilatory devoicing and prosodic structure in German fricative perception. In Variation, detail and representation: 10th Conference on Laboratory Phonology (pp. 43-44).
  • Lattenkamp, E. Z., Linnenschmidt, M., Mardus, E., Vernes, S. C., Wiegrebe, L., & Schutte, M. (2020). Impact of auditory feedback on bat vocal development. In A. Ravignani, C. Barbieri, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, M. Martins, K. Mudd, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 13th International Conference (Evolang13) (pp. 249-251). Nijmegen: The Evolution of Language Conferences.
  • Lei, L., Raviv, L., & Alday, P. M. (2020). Using spatial visualizations and real-world social networks to understand language evolution and change. In A. Ravignani, C. Barbieri, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, M. Martins, K. Mudd, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 13th International Conference (Evolang13) (pp. 252-254). Nijmegen: The Evolution of Language Conferences.
  • Lenkiewicz, P., Auer, E., Schreer, O., Masneri, S., Schneider, D., & Tschöpe, S. (2012). AVATecH ― automated annotation through audio and video analysis. In N. Calzolari (Ed.), Proceedings of LREC 2012: 8th International Conference on Language Resources and Evaluation (pp. 209-214). European Language Resources Association.

    Abstract

    In different fields of the humanities annotations of multimodal resources are a necessary component of the research workflow. Examples include linguistics, psychology, anthropology, etc. However, creation of those annotations is a very laborious task, which can take 50 to 100 times the length of the annotated media, or more. This can be significantly improved by applying innovative audio and video processing algorithms, which analyze the recordings and provide automated annotations. This is the aim of the AVATecH project, which is a collaboration of the Max Planck Institute for Psycholinguistics (MPI) and the Fraunhofer institutes HHI and IAIS. In this paper we present a set of results of automated annotation together with an evaluation of their quality.
  • Lenkiewicz, A., Lis, M., & Lenkiewicz, P. (2012). Linguistic concepts described with Media Query Language for automated annotation. In J. C. Meiser (Ed.), Digital Humanities 2012 Conference Abstracts. University of Hamburg, Germany; July 16–22, 2012 (pp. 477-479).

    Abstract

    Introduction Human spoken communication is multimodal, i.e. it encompasses both speech and gesture. Acoustic properties of voice, body movements, facial expression, etc. are an inherent and meaningful part of spoken interaction; they can provide attitudinal, grammatical and semantic information. In the recent years interest in audio-visual corpora has been rising rapidly as they enable investigation of different communicative modalities and provide more holistic view on communication (Kipp et al. 2009). Moreover, for some languages such corpora are the only available resource, as is the case for endangered languages for which no written resources exist.
  • Lenkiewicz, P., Van Uytvanck, D., Wittenburg, P., & Drude, S. (2012). Towards automated annotation of audio and video recordings by application of advanced web-services. In Proceedings of INTERSPEECH 2012: 13th Annual Conference of the International Speech Communication Association (pp. 1880-1883).

    Abstract

    In this paper we describe audio and video processing algorithms that are developed in the scope of AVATecH project. The purpose of these algorithms is to shorten the time taken by manual annotation of audio and video recordings by extracting features from media files and creating semi-automated annotations. We show that the use of such supporting algorithms can shorten the annotation time to 30-50% of the time necessary to perform a fully manual annotation of the same kind.
  • Levelt, W. J. M. (1969). Semantic features: A psychological model and its mathematical analysis. In Heymans Bulletins Psychologische instituten R.U. Groningen, HB-69-45.
  • Levelt, W. J. M. (2016). Localism versus holism. Historical origins of studying language in the brain. In R. Rubens, & M. Van Dijk (Eds.), Sartoniana vol. 29 (pp. 37-60). Ghent: Ghent University.
  • Levelt, W. J. M. (2016). The first golden age of psycholinguistics 1865-World War I. In R. Rubens, & M. Van Dyck (Eds.), Sartoniana vol. 29 (pp. 15-36). Ghent: Ghent University.
  • Levelt, W. J. M., & De Swaan, A. (2016). Levensbericht Nico Frijda. In Koninklijke Nederlandse Akademie van Wetenschappen (Ed.), Levensberichten en herdenkingen 2016 (pp. 16-25). Amsterdam: KNAW.
  • Levelt, W. J. M., & Ruijssenaars, A. (1995). Levensbericht Johan Joseph Dumont. In Jaarboek Koninklijke Nederlandse Akademie van Wetenschappen (pp. 31-36).
  • Levelt, W. J. M. (1995). Chapters of psychology: An interview with Wilhelm Wundt. In R. L. Solso, & D. W. Massaro (Eds.), The science of mind: 2001 and beyond (pp. 184-202). Oxford University Press.
  • Levelt, W. J. M. (1962). Motion breaking and the perception of causality. In A. Michotte (Ed.), Causalité, permanence et réalité phénoménales: Etudes de psychologie expérimentale (pp. 244-258). Louvain: Publications Universitaires.
  • Levelt, W. J. M., & Plomp, R. (1962). Musical consonance and critical bandwidth. In Proceedings of the 4th International Congress on Acoustics (pp. 55-55).
  • Levelt, W. J. M. (1986). Herdenking van Joseph Maria Franciscus Jaspars (16 maart 1934 - 31 juli 1985). In Jaarboek 1986 Koninklijke Nederlandse Akademie van Wetenschappen (pp. 187-189). Amsterdam: North Holland.
  • Levelt, W. J. M. (1995). Psycholinguistics. In C. C. French, & A. M. Colman (Eds.), Cognitive psychology (reprint, pp. 39- 57). London: Longman.
  • Levelt, W. J. M. (1969). Psycholinguistiek. In Winkler-Prins [Suppl.] (pp. A756-A757).
  • Levelt, W. J. M. (2020). The alpha and omega of Jerome Bruner's contributions to the Max Planck Institute for Psycholinguistics. In M. E. Poulsen (Ed.), The Jerome Bruner Library: From New York to Nijmegen (pp. 11-18). Nijmegen: Max Planck Institute for Psycholinguistics.

    Abstract

    Presentation of the official opening of the Jerome Bruner Library, January 8th, 2020
  • Levelt, W. J. M. (1986). Zur sprachlichen Abbildung des Raumes: Deiktische und intrinsische Perspektive. In H. Bosshardt (Ed.), Perspektiven auf Sprache. Interdisziplinäre Beiträge zum Gedenken an Hans Hörmann (pp. 187-211). Berlin: De Gruyter.
  • Levinson, S. C., & Wilkins, D. P. (2006). Patterns in the data: Towards a semantic typology of spatial description. In S. C. Levinson, & D. P. Wilkins (Eds.), Grammars of space: Explorations in cognitive diversity (pp. 512-552). Cambridge: Cambridge University Press.
  • Levinson, S. C. (2006). On the human "interaction engine". In N. J. Enfield, & S. C. Levinson (Eds.), Roots of human sociality: Culture, cognition and interaction (pp. 39-69). Oxford: Berg.
  • Levinson, S. C., & Wilkins, D. P. (2006). The background to the study of the language of space. In S. C. Levinson, & D. P. Wilkins (Eds.), Grammars of space: Explorations in cognitive diversity (pp. 1-23). Cambridge: Cambridge University Press.
  • Levinson, S. C. (2006). The language of space in Yélî Dnye. In S. C. Levinson, & D. P. Wilkins (Eds.), Grammars of space: Explorations in cognitive diversity (pp. 157-203). Cambridge: Cambridge University Press.
  • Levinson, S. C. (1995). 'Logical' Connectives in Natural Language: A First Questionnaire. In D. Wilkins (Ed.), Extensions of space and beyond: manual for field elicitation for the 1995 field season (pp. 61-69). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3513476.

    Abstract

    It has been hypothesised that human reasoning has a non-linguistic foundation, but is nevertheless influenced by the formal means available in a language. For example, Western logic is transparently related to European sentential connectives (e.g., and, if … then, or, not), some of which cannot be unambiguously expressed in other languages. The questionnaire explores reasoning tools and practices through investigating translation equivalents of English sentential connectives and collecting examples of “reasoned arguments”.
  • Levinson, S. C. (1998). Deixis. In J. L. Mey (Ed.), Concise encyclopedia of pragmatics (pp. 200-204). Amsterdam: Elsevier.
  • Levinson, S. C. (2006). Introduction: The evolution of culture in a microcosm. In S. C. Levinson, & P. Jaisson (Eds.), Evolution and culture: A Fyssen Foundation Symposium (pp. 1-41). Cambridge: MIT Press.
  • Levinson, S. C. (1998). Minimization and conversational inference. In A. Kasher (Ed.), Pragmatics: Vol. 4 Presupposition, implicature and indirect speech acts (pp. 545-612). London: Routledge.
  • Levinson, S. C. (2016). Language and mind: Let's get the issues straight! In S. D. Blum (Ed.), Making sense of language: Readings in culture and communication [3rd ed.] (pp. 68-80). Oxford: Oxford University Press.
  • Levinson, S. C. (1995). Interactional biases in human thinking. In E. N. Goody (Ed.), Social intelligence and interaction (pp. 221-260). Cambridge: Cambridge University Press.
  • Levinson, S. C. (2012). Interrogative intimations: On a possible social economics of interrogatives. In J. P. De Ruiter (Ed.), Questions: Formal, functional and interactional perspectives (pp. 11-32). New York: Cambridge University Press.
  • Levinson, S. C., & Brown, P. (2012). Put and Take in Yélî Dnye, the Papuan language of Rossel Island. In A. Kopecka, & B. Narasimhan (Eds.), Events of putting and taking: A crosslinguistic perspective (pp. 273-296). Amsterdam: Benjamins.

    Abstract

    This paper describes the linguistic treatment of placement events in the Rossel Island (Papua New Guinea) language Yélî Dnye. Yélî Dnye is unusual in treating PUT and TAKE events symmetrically with a remarkable consistency. In what follows, we first provide a brief background for the language, then describe the six core PUT/TAKE verbs that were drawn upon by Yélî Dnye speakers to describe the great majority of the PUT/TAKE stimuli clips, along with some of their grammatical properties. In Section 5 we describe alternative verbs usable in particular circumstances and give an indication of the basis for variability in responses across speakers. Section 6 presents some reasons why the Yélî verb pattern for expressing PUT and TAKE events is of broad interest.
  • Levinson, S. C. (2016). The countable singulare tantum. In A. Reuneker, R. Boogaart, & S. Lensink (Eds.), Aries netwerk: Een constructicon (pp. 145-146). Leiden: Leiden University.
  • Levinson, S. C. (2012). Preface. In A. Kopecka, & B. Narasimhan (Eds.), Events of putting and taking: A crosslinguistic perspective (pp. xi-xv). Amsterdam: Benjamins.
  • Levinson, S. C. (1995). Three levels of meaning. In F. Palmer (Ed.), Grammar and meaning: Essays in honour of Sir John Lyons (pp. 90-115). Cambridge University Press.
  • Levshina, N. (2020). How tight is your language? A semantic typology based on Mutual Information. In K. Evang, L. Kallmeyer, R. Ehren, S. Petitjean, E. Seyffarth, & D. Seddah (Eds.), Proceedings of the 19th International Workshop on Treebanks and Linguistic Theories (pp. 70-78). Düsseldorf, Germany: Association for Computational Linguistics. doi:10.18653/v1/2020.tlt-1.7.

    Abstract

    Languages differ in the degree of semantic flexibility of their syntactic roles. For example, English and Indonesian are considered more flexible with regard to the semantics of subjects, whereas German and Japanese are less flexible. In Hawkins’ classification, more flexible languages are said to have a loose fit, and less flexible ones are those that have a tight fit. This classification has been based on manual inspection of example sentences. The present paper proposes a new, quantitative approach to deriving the measures of looseness and tightness from corpora. We use corpora of online news from the Leipzig Corpora Collection in thirty typologically and genealogically diverse languages and parse them syntactically with the help of the Universal Dependencies annotation software. Next, we compute Mutual Information scores for each language using the matrices of lexical lemmas and four syntactic dependencies (intransitive subjects, transitive subjects, objects and obliques). The new approach allows us not only to reproduce the results of previous investigations, but also to extend the typology to new languages. We also demonstrate that verb-final languages tend to have a tighter relationship between lexemes and syntactic roles, which helps language users to recognize thematic roles early during comprehension.

    Additional information

    full text via ACL website
  • Liszkowski, U. (2006). Infant pointing at twelve months: Communicative goals, motives, and social-cognitive abilities. In N. J. Enfield, & S. C. Levinson (Eds.), Roots of human sociality: culture, cognition and interaction (pp. 153-178). New York: Berg.
  • Little, H., Eryılmaz, K., & De Boer, B. (2016). Emergence of signal structure: Effects of duration constraints. In S. G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Feher, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/25.html.

    Abstract

    Recent work has investigated the emergence of structure in speech using experiments which use artificial continuous signals. Some experiments have had no limit on the duration which signals can have (e.g. Verhoef et al., 2014), and others have had time limitations (e.g. Verhoef et al., 2015). However, the effect of time constraints on the structure in signals has never been experimentally investigated.
  • Little, H., & de Boer, B. (2016). Did the pressure for discrimination trigger the emergence of combinatorial structure? In Proceedings of the 2nd Conference of the International Association for Cognitive Semiotics (pp. 109-110).
  • Little, H., Eryılmaz, K., & De Boer, B. (2016). Differing signal-meaning dimensionalities facilitates the emergence of structure. In S. G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Feher, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/25.html.

    Abstract

    Structure of language is not only caused by cognitive processes, but also by physical aspects of the signalling modality. We test the assumptions surrounding the role which the physical aspects of the signal space will have on the emergence of structure in speech. Here, we use a signal creation task to test whether a signal space and a meaning space having similar dimensionalities will generate an iconic system with signal-meaning mapping and whether, when the topologies differ, the emergence of non-iconic structure is facilitated. In our experiments, signals are created using infrared sensors which use hand position to create audio signals. We find that people take advantage of signal-meaning mappings where possible. Further, we use trajectory probabilities and measures of variance to show that when there is a dimensionality mismatch, more structural strategies are used.
  • Little, H. (2016). Nahran Bhannamz: Language Evolution in an Online Zombie Apocalypse Game. In Createvolang: creativity and innovation in language evolution.
  • Lockwood, G., Hagoort, P., & Dingemanse, M. (2016). Synthesized Size-Sound Sound Symbolism. In A. Papafragou, D. Grodner, D. Mirman, & J. Trueswell (Eds.), Proceedings of the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016) (pp. 1823-1828). Austin, TX: Cognitive Science Society.

    Abstract

    Studies of sound symbolism have shown that people can associate sound and meaning in consistent ways when presented with maximally contrastive stimulus pairs of nonwords such as bouba/kiki (rounded/sharp) or mil/mal (small/big). Recent work has shown the effect extends to antonymic words from natural languages and has proposed a role for shared cross-modal correspondences in biasing form-to-meaning associations. An important open question is how the associations work, and particularly what the role is of sound-symbolic matches versus mismatches. We report on a learning task designed to distinguish between three existing theories by using a spectrum of sound-symbolically matching, mismatching, and neutral (neither matching nor mismatching) stimuli. Synthesized stimuli allow us to control for prosody, and the inclusion of a neutral condition allows a direct test of competing accounts. We find evidence for a sound-symbolic match boost, but not for a mismatch difficulty compared to the neutral condition.
  • MacDonald, K., Räsänen, O., Casillas, M., & Warlaumont, A. S. (2020). Measuring prosodic predictability in children’s home language environments. In S. Denison, M. Mack, Y. Xu, & B. C. Armstrong (Eds.), Proceedings of the 42nd Annual Virtual Meeting of the Cognitive Science Society (CogSci 2020) (pp. 695-701). Montreal, QC: Cognitive Science Society.

    Abstract

    Children learn language from the speech in their home environment. Recent work shows that more infant-directed speech (IDS) leads to stronger lexical development. But what makes IDS a particularly useful learning signal? Here, we expand on an attention-based account first proposed by Räsänen et al. (2018): that prosodic modifications make IDS less predictable, and thus more interesting. First, we reproduce the critical finding from Räsänen et al.: that lab-recorded IDS pitch is less predictable compared to adult-directed speech (ADS). Next, we show that this result generalizes to the home language environment, finding that IDS in daylong recordings is also less predictable than ADS but that this pattern is much less robust than for IDS recorded in the lab. These results link experimental work on attention and prosodic modifications of IDS to real-world language-learning environments, highlighting some challenges of scaling up analyses of IDS to larger datasets that better capture children’s actual input.
  • Macuch Silva, V., & Roberts, S. G. (2016). Language adapts to signal disruption in interaction. In S. G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Feher, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/20.html.

    Abstract

    Linguistic traits are often seen as reflecting cognitive biases and constraints (e.g. Christiansen & Chater, 2008). However, language must also adapt to properties of the channel through which communication between individuals occurs. Perhaps the most basic aspect of any communication channel is noise. Communicative signals can be blocked, degraded or distorted by other sources in the environment. This poses a fundamental problem for communication. On average, channel disruption accompanies problems in conversation every 3 minutes (27% of cases of other-initiated repair, Dingemanse et al., 2015). Linguistic signals must adapt to this harsh environment. While modern language structures are robust to noise (e.g. Piantadosi et al., 2011), we investigate how noise might have shaped the early emergence of structure in language. The obvious adaptation to noise is redundancy. Signals which are maximally different from competitors are harder to render ambiguous by noise. Redundancy can be increased by adding differentiating segments to each signal (increasing the diversity of segments). However, this makes each signal more complex and harder to learn. Under this strategy, holistic languages may emerge. Another strategy is reduplication - repeating parts of the signal so that noise is less likely to disrupt all of the crucial information. This strategy does not increase the difficulty of learning the language - there is only one extra rule which applies to all signals. Therefore, under pressures for learnability, expressivity and redundancy, reduplicated signals are expected to emerge. However, reduplication is not a pervasive feature of words (though it does occur in limited domains like plurals or iconic meanings). We suggest that this is due to the pressure for redundancy being lifted by conversational infrastructure for repair. Receivers can request that senders repeat signals only after a problem occurs. That is, robustness is achieved by repeating the signal across conversational turns (when needed) instead of within single utterances.

    As a proof of concept, we ran two iterated learning chains with pairs of individuals in generations learning and using an artificial language (e.g. Kirby et al., 2015). The meaning space was a structured collection of unfamiliar images (3 shapes x 2 textures x 2 outline types). The initial language for each chain was the same written, unstructured, fully expressive language. Signals produced in each generation formed the training language for the next generation. Within each generation, pairs played an interactive communication game. The director was given a target meaning to describe, and typed a word for the matcher, who guessed the target meaning from a set. With a 50% probability, a contiguous section of 3-5 characters in the typed word was replaced by ‘noise’ characters (#). In one chain, the matcher could initiate repair by requesting that the director type and send another signal. Parallel generations across chains were matched for the number of signals sent (if repair was initiated for a meaning, then it was presented twice in the parallel generation where repair was not possible) and noise (a signal for a given meaning which was affected by noise in one generation was affected by the same amount of noise in the parallel generation). For the final set of signals produced in each generation we measured the signal redundancy (the zip compressibility of the signals), the character diversity (entropy of the characters of the signals) and systematic structure (z-score of the correlation between signal edit distance and meaning hamming distance).

    In the condition without repair, redundancy increased with each generation (r=0.97, p=0.01), and the character diversity decreased (r=-0.99, p=0.001), which is consistent with reduplication. Linear regressions revealed that generations with repair had higher overall systematic structure (main effect of condition, t = 2.5, p < 0.05), increasing character diversity (interaction between condition and generation, t = 3.9, p = 0.01), and redundancy increased at a slower rate (interaction between condition and generation, t = -2.5, p < 0.05). That is, the ability to repair counteracts the pressure from noise, and facilitates the emergence of compositional structure. Therefore, just as systems to repair damage to DNA replication are vital for the evolution of biological species (O’Brien, 2006), conversational repair may regulate replication of linguistic forms in the cultural evolution of language. Future studies should further investigate how evolving linguistic structure is shaped by interaction pressures, drawing on experimental methods and naturalistic studies of emerging languages, both spoken (e.g. Botha, 2006; Roberge, 2008) and signed (e.g. Senghas, Kita, & Ozyurek, 2004; Sandler et al., 2005).
  • Yu, J., Mailhammer, R., & Cutler, A. (2020). Vocabulary structure affects word recognition: Evidence from German listeners. In N. Minematsu, M. Kondo, T. Arai, & R. Hayashi (Eds.), Proceedings of Speech Prosody 2020 (pp. 474-478). Tokyo: ISCA. doi:10.21437/SpeechProsody.2020-97.

    Abstract

    Lexical stress is realised similarly in English, German, and Dutch. On a suprasegmental level, stressed syllables tend to be longer and more acoustically salient than unstressed syllables; segmentally, vowels in unstressed syllables are often reduced. The frequency of unreduced unstressed syllables (where only the suprasegmental cues indicate lack of stress), however, differs across the languages. The present studies test whether listener behaviour is affected by these vocabulary differences, by investigating German listeners’ use of suprasegmental cues to lexical stress in German and English word recognition. In a forced-choice identification task, German listeners correctly assigned single-syllable fragments (e.g., Kon-) to one of two words differing in stress (KONto, konZEPT). Thus, German listeners can exploit suprasegmental information for identifying words. German listeners also performed above chance in a similar task in English (with, e.g., DIver, diVERT), i.e., their sensitivity to these cues also transferred to a non-native language. An English listener group, in contrast, failed in the English fragment task. These findings mirror vocabulary patterns: German has more words with unreduced unstressed syllables than English does.
  • Majid, A. (2012). A guide to stimulus-based elicitation for semantic categories. In N. Thieberger (Ed.), The Oxford handbook of linguistic fieldwork (pp. 54-71). New York: Oxford University Press.
  • Majid, A. (2012). Taste in twenty cultures [Abstract]. Abstracts from the XXIth Congress of European Chemoreception Research Organization, ECRO-2011. Publ. in Chemical Senses, 37(3), A10.

    Abstract

    Scholars disagree about the extent to which language can tell us about conceptualisation of the world. Some believe that language is a direct window onto concepts: Having a word “bird”, “table” or “sour” presupposes the corresponding underlying concept, BIRD, TABLE, SOUR. Others disagree. Words are thought to be uninformative, or worse, misleading about our underlying conceptual representations; after all, our mental worlds are full of ideas that we struggle to express in language. How could this be so, argue sceptics, if language were a direct window on our inner life? In this presentation, I consider what language can tell us about the conceptualisation of taste. By considering linguistic data from twenty unrelated cultures – varying in subsistence mode (hunter-gatherer to industrial), ecological zone (rainforest jungle to desert), dwelling type (rural and urban), and so forth – I argue that any single language is, indeed, impoverished in what it can reveal about taste. But recurrent lexicalisation patterns across languages can provide valuable insights about human taste experience. Moreover, language patterning is part of the data that a good theory of taste perception has to be answerable for. Taste researchers, therefore, cannot ignore the crosslinguistic facts.