Publications

  • Dimroth, C., & Starren, M. (2003). Introduction. In C. Dimroth, & M. Starren (Eds.), Information structure and the dynamics of language acquisition (pp. 1-14). Amsterdam: John Benjamins.
  • Dimroth, C., & Haberzettl, S. (2008). Je älter desto besser: Der Erwerb der Verbflexion im Kindesalter. In B. Ahrenholz, U. Bredel, W. Klein, M. Rost-Roth, & R. Skiba (Eds.), Empirische Forschung und Theoriebildung: Beiträge aus Soziolinguistik, Gesprochene-Sprache- und Zweitspracherwerbsforschung: Festschrift für Norbert Dittmar (pp. 227-238). Frankfurt am Main: Lang.
  • Dimroth, C. (2008). Kleine Unterschiede in den Lernvoraussetzungen beim ungesteuerten Zweitspracherwerb: Welche Bereiche der Zielsprache Deutsch sind besonders betroffen? In B. Ahrenholz (Ed.), Kinder und Migrationshintergrund: Spracherwerb und Fördermöglichkeiten (pp. 117-133). Freiburg: Fillibach.
  • Dingemanse, M., Hill, C., Majid, A., & Levinson, S. C. (2008). Ethnography of the senses. In A. Majid (Ed.), Field manual volume 11 (pp. 18-28). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492935.

    Abstract

    This entry provides orientation and task suggestions on how to explore the perceptual world of your field site and the interaction between the cultural world and the sensory lexicon in your community. The material consists of procedural texts, soundscapes, and other documentary and observational tasks.
  • Doumas, L. A., & Martin, A. E. (2016). Abstraction in time: Finding hierarchical linguistic structure in a model of relational processing. In A. Papafragou, D. Grodner, D. Mirman, & J. Trueswell (Eds.), Proceedings of the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016) (pp. 2279-2284). Austin, TX: Cognitive Science Society.

    Abstract

    Abstract mental representation is fundamental for human cognition. Forming such representations in time, especially from dynamic and noisy perceptual input, is a challenge for any processing modality, but perhaps none so acutely as for language processing. We show that LISA (Hummel & Holyoak, 1997) and DORA (Doumas, Hummel, & Sandhofer, 2008), models built to process and to learn structured (i.e., symbolic) representations of conceptual properties and relations from unstructured inputs, show oscillatory activation during processing that is highly similar to the cortical activity elicited by the linguistic stimuli from Ding et al. (2016). We argue, as do Ding et al. (2016), that this activation reflects the formation of hierarchical linguistic representation, and furthermore, that the kind of computational mechanisms in LISA/DORA (e.g., temporal binding by systematic asynchrony of firing) may underlie the formation of abstract linguistic representations in the human brain. It may be this repurposing that allowed for the generation or emergence of hierarchical linguistic structure, and therefore, human language, from extant cognitive and neural systems. We conclude that models of thinking and reasoning and models of language processing must be integrated – not only for increased plausibility, but in order to advance both fields towards a larger integrative model of human cognition.
  • Drozd, K. F. (1998). No as a determiner in child English: A summary of categorical evidence. In A. Sorace, C. Heycock, & R. Shillcock (Eds.), Proceedings of the GALA '97 Conference on Language Acquisition (pp. 34-39). Edinburgh, UK: Edinburgh University Press.

    Abstract

    This paper summarizes the results of a descriptive syntactic category analysis of child English no which reveals that young children use and represent no as a determiner and negatives like no pen as NPs, contra standard analyses.
  • Drozdova, P., Van Hout, R., & Scharenborg, O. (2016). Processing and adaptation to ambiguous sounds during the course of perceptual learning. In Proceedings of Interspeech 2016: The 17th Annual Conference of the International Speech Communication Association (pp. 2811-2815). doi:10.21437/Interspeech.2016-814.

    Abstract

    Listeners use their lexical knowledge to interpret ambiguous sounds, and retune their phonetic categories to include this ambiguous sound. Although there is ample evidence for lexically-guided retuning, the adaptation process is not fully understood. Using a lexical decision task with an embedded auditory semantic priming task, the present study investigates whether words containing an ambiguous sound are processed in the same way as “natural” words and whether adaptation to the ambiguous sound tends to equalize the processing of “ambiguous” and natural words. Analyses of the yes/no responses and reaction times to natural and “ambiguous” words showed that words containing an ambiguous sound were accepted as words less often and were processed more slowly than the same words without ambiguity. The difference in acceptance disappeared after exposure to approximately 15 ambiguous items. Interestingly, lower acceptance rates and slower processing did not have an effect on the processing of semantic information of the following word. However, lower acceptance rates of ambiguous primes predicted slower reaction times to these primes, suggesting an important role of stimulus-specific characteristics in triggering lexically-guided perceptual learning.
  • Drude, S. (2003). Advanced glossing: A language documentation format and its implementation with Shoebox. In Proceedings of the 2002 International Conference on Language Resources and Evaluation (LREC 2002). Paris: ELRA.

    Abstract

    This paper presents Advanced Glossing, a proposal for a general glossing format designed for language documentation, and a specific setup for the Shoebox program that implements Advanced Glossing to a large extent. Advanced Glossing (AG) goes beyond the traditional Interlinear Morphemic Translation, keeping syntactic and morphological information apart from each other in separate glossing tables. AG provides specific lines for different kinds of annotation – phonetic, phonological, orthographical, prosodic, categorial, structural, relational, and semantic – and it allows for gradual and successive, incomplete, and partial filling in cases where some information is irrelevant, unknown, or uncertain. The implementation of AG in Shoebox sets up several databases. Each documented text is represented as a file of syntactic glossings. The morphological glossings are kept in a separate database. As an additional feature, interaction with lexical databases is possible. The implementation makes use of the interlinearizing automatism provided by Shoebox, thus obtaining the table format for the alignment of lines in cells, and for semi-automatic filling-in of information in glossing tables which has been extracted from databases.
  • Drude, S. (2008). Die Personenpräfixe des Guaraní und ihre lexikographische Behandlung. In W. Dietrich, & H. Symeonidis (Eds.), Geschichte und Aktualität der deutschsprachigen Guaraní-Philologie: Akten der Guaraní-Tagung in Kiel und Berlin 25.-27. Mai 2000 (pp. 198-234). Berlin: Lit Verlag.

    Abstract

    This contribution to the Kiel symposium presents the results of one part of my work on Guaraní, namely a proposal for the analysis of this language's person prefixes and of the grammatical categories associated with them. The lexicographic question indicated in the title requires closer explanation, which I will give in connection with a brief account of the motivation for my investigations.
  • Drude, S. (2003). Digitizing and annotating texts and field recordings in the Awetí project. In Proceedings of the EMELD Language Digitization Project Conference 2003. Workshop on Digitizing and Annotating Text and Field Recordings, LSA Institute, Michigan State University, July 11th -13th.

    Abstract

    Given that several initiatives worldwide currently explore the new field of documentation of endangered languages, the E-MELD project proposes to survey and unite procedures, techniques and results in order to achieve its main goal, ''the formulation and promulgation of best practice in linguistic markup of texts and lexicons''. In this context, this year's workshop deals with the processing of recorded texts. I assume the most valuable contribution I could make to the workshop is to show the procedures and methods used in the Awetí Language Documentation Project. The procedures applied in the Awetí Project are not necessarily representative of all the projects in the DOBES program, and they may very well fall short in several respects of being best practice, but I hope they might provide a good and concrete starting point for comparison, criticism and further discussion. The procedures to be presented include:
      • taping with digital devices,
      • digitizing (preliminarily in the field, later definitively by the TIDEL-team at the Max Planck Institute in Nijmegen),
      • segmenting and transcribing, using the Transcriber computer program,
      • translating (on paper, or while transcribing),
      • adding more specific annotation, using the Shoebox program,
      • converting the annotation to the ELAN format developed by the TIDEL-team, and doing annotation with ELAN.
    Focus will be on the different types of annotation. Especially, I will present, justify and discuss Advanced Glossing, a text annotation format developed by H.-H. Lieb and myself, designed for language documentation. It will be shown how Advanced Glossing can be applied using the Shoebox program. The Shoebox setup used in the Awetí Project will be shown in greater detail, including lexical databases and semi-automatic interaction between different database types (jumping, interlinearization). (Freie Universität Berlin and Museu Paraense Emílio Goeldi, with funding from the Volkswagen Foundation.)
  • Drude, S. (2008). Inflectional units and their effects: The case of verbal prefixes in Guaraní. In R. Sackmann (Ed.), Explorations in integrational linguistics: Four essays on German, French, and Guaraní (pp. 153-189). Amsterdam: Benjamins.

    Abstract

    With the present essay I pursue a threefold aim, as will be explained in the following paragraphs. Since I cannot expect my readers to be familiar with the language studied, Guaraní, more information about this language will be given in the next subsection.
  • Drude, S. (2008). Tense, aspect and mood in Awetí verb paradigms: Analytic and synthetic forms. In K. D. Harrison, D. S. Rood, & A. Dwyer (Eds.), Lessons from documented endangered languages (pp. 67-110). Amsterdam: Benjamins.

    Abstract

    This paper describes the verbal Tense-Aspect-Mood system of Awetí (Tupian, Central Brazil) in a Word-and-Paradigm approach. One classification of Awetí verb forms contains clear aspect categories. A second set of independent classifications renders at least four moods and contains a third major TAM classification, factuality, that has one mainly temporal category Future, while others are partially or wholly modal. Structural categories reflect the formal composition of the forms. Some forms are synthetic, ‘marked’ only by means of affixes, but many are analytic, containing auxiliary particles. With selected sample forms we demonstrate in detail the interplay of structural and functional categories in Awetí verb paradigms.
  • Duffield, N., & Matsuo, A. (2003). Factoring out the parallelism effect in ellipsis: An interactional approach? In J. Cihlar, A. Franklin, D. Kaiser, & I. Kimbara (Eds.), Proceedings of the 39th Annual Meeting of the Chicago Linguistic Society (CLS) (pp. 591-603). Chicago: Chicago Linguistic Society.

    Abstract

    Traditionally, there have been three standard assumptions made about the Parallelism Effect on VP-ellipsis, namely that the effect is categorical, that it applies asymmetrically and that it is uniquely due to syntactic factors. Based on the results of a series of experiments involving online and offline tasks, it will be argued that the Parallelism Effect is instead noncategorical and interactional. The factors investigated include construction type, conceptual and morpho-syntactic recoverability, finiteness and anaphor type (to test VP-anaphora). The results show that parallelism is gradient rather than categorical, affects both VP-ellipsis and anaphora, and is influenced by both structural and non-structural factors.
  • Eibl-Eibesfeldt, I., Senft, B., & Senft, G. (1998). Trobriander (Ost-Neuguinea, Trobriand Inseln, Kaile'una) Fadenspiele 'ninikula'. In Ethnologie - Humanethologische Begleitpublikationen von I. Eibl-Eibesfeldt und Mitarbeitern. Sammelband I, 1985-1987. Göttingen: Institut für den Wissenschaftlichen Film.
  • Eisner, F., & Scott, S. K. (2008). Speech and auditory processing in the cortex: Evidence from functional neuroimaging. In A. Cacace, & D. McFarland (Eds.), Controversies in central auditory processing disorder. San Diego, CA: Plural Publishing.
  • Enfield, N. J. (2008). Verbs and multi-verb construction in Lao. In A. V. Diller, J. A. Edmondson, & Y. Luo (Eds.), The Tai-Kadai languages (pp. 83-183). London: Routledge.
  • Enfield, N. J., & Majid, A. (2008). Constructions in 'language and perception'. In A. Majid (Ed.), Field Manual Volume 11 (pp. 11-17). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492949.

    Abstract

    This field guide is for eliciting information about grammatical resources used in describing perceptual events and perception-based properties and states. A list of leading questions outlines an underlying semantic space for events/states of perception, against which language-specific constructions may be defined. It should be used as an entry point into a flexible exploration of the structures and constraints which are specific to the language you are working on. The goal is to provide a cross-linguistically comparable description of the constructions of a language used in describing perceptual events and states. The core focus is to discover any sensory asymmetries, i.e., ways in which different sensory modalities are treated differently with respect to these constructions.
  • Enfield, N. J. (2003). “Fish traps” task. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 31). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877616.

    Abstract

    This task is designed to elicit virtual 3D ‘models’ created in gesture space using iconic and other representational gestures. This task has been piloted with Lao speakers, where two speakers were asked to explain the meaning of terms referring to different kinds of fish trap mechanisms. The task elicited complex performances involving a range of iconic gestures, and with especially interesting use of (a) the ‘model/diagram’ in gesture space as a virtual object, (b) the non-dominant hand as a prosodic/semiotic anchor, (c) a range of different techniques (indexical and iconic) for evoking meaning with the hand, and (d) the use of nearby objects and parts of the body as semiotic ‘props’.
  • Enfield, N. J. (2008). Common ground as a resource for social affiliation. In I. Kecskes, & J. L. Mey (Eds.), Intention, common ground and the egocentric speaker-hearer (pp. 223-254). Berlin: Mouton de Gruyter.
  • Enfield, N. J. (2008). Lao linguistics in the 20th century and since. In Y. Goudineau, & M. Lorrillard (Eds.), Recherches nouvelles sur le Laos (pp. 435-452). Paris: EFEO.
  • Enfield, N. J., & Levinson, S. C. (2008). Metalanguage for speech acts. In A. Majid (Ed.), Field manual volume 11 (pp. 77-79). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492937.

    Abstract

    People of all cultures have some degree of concern with categorizing types of communicative social action. All languages have words with meanings like speak, say, talk, complain, curse, promise, accuse, nod, wink, point and chant. But the exact distinctions they make will differ in both quantity and quality. How is communicative social action categorised across languages and cultures? The goal of this task is to establish a basis for cross-linguistic comparison of native metalanguages for social action.
  • Enfield, N. J., De Ruiter, J. P., Levinson, S. C., & Stivers, T. (2003). Multimodal interaction in your field site: A preliminary investigation. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 10-16). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877638.

    Abstract

    Research on video- and audio-recordings of spontaneous naturally-occurring conversation in English has shown that conversation is a rule-guided, practice-oriented domain that can be investigated for its underlying mechanics or structure. Systematic study could yield something like a grammar for conversation. The goal of this task is to acquire a corpus of video-data, for investigating the underlying structure(s) of interaction cross-linguistically and cross-culturally.
  • Enfield, N. J., & Levinson, S. C. (2003). Interview on kinship. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 64-65). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877629.

    Abstract

    We want to know how people think about their field of kin, on the supposition that it is quasi-spatial. To get some insights here, we need to video a discussion about kinship reckoning, the kinship system, marriage rules and so on, with a view to looking at both the linguistic expressions involved, and the gestures people use to indicate kinship groups and relations. Unlike the task in the 2001 manual, this task is a direct interview method.
  • Enfield, N. J. (2003). Introduction. In N. J. Enfield, Linguistic epidemiology: Semantics and grammar of language contact in mainland Southeast Asia (pp. 2-44). London: Routledge Curzon.
  • Enfield, N. J., & De Ruiter, J. P. (2003). The diff-task: A symmetrical dyadic multimodal interaction task. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 17-21). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877635.

    Abstract

    This task is a complement to the questionnaire ‘Multimodal interaction in your field site: a preliminary investigation’. The objective of the task is to obtain high quality video data on structured and symmetrical dyadic multimodal interaction. The features of interaction we are interested in include turn organization in speech and nonverbal behavior, eye-gaze behavior, use of composite signals (i.e. communicative units of speech-combined-with-gesture), and linguistic and other resources for ‘navigating’ interaction (e.g. words like okay, now, well, and um).

    Additional information

    2003_1_The_diff_task_stimuli.zip
  • Enfield, N. J., Levinson, S. C., & Stivers, T. (2008). Social action formulation: A "10-minutes" task. In A. Majid (Ed.), Field manual volume 11 (pp. 80-81). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492939.

    Abstract

    This Field Manual entry has been superseded by the 2009 version: https://doi.org/10.17617/2.883564

  • Enfield, N. J. (2003). Preface and priorities. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 3). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Ernestus, M. (2003). The role of phonology and phonetics in Dutch voice assimilation. In J. van de Weijer, V. J. van Heuven, & H. van der Hulst (Eds.), The phonological spectrum Volume 1: Segmental structure (pp. 119-144). Amsterdam: John Benjamins.
  • Ernestus, M. (2016). L'utilisation des corpus oraux pour la recherche en (psycho)linguistique. In M. Kilani-Schoch, C. Surcouf, & A. Xanthos (Eds.), Nouvelles technologies et standards méthodologiques en linguistique (pp. 65-93). Lausanne: Université de Lausanne.
  • Eryilmaz, K., Little, H., & De Boer, B. (2016). Using HMMs To Attribute Structure To Artificial Languages. In S. G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Feher, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/125.html.

    Abstract

    We investigated the use of Hidden Markov Models (HMMs) as a way of representing repertoires of continuous signals in order to infer their building blocks. We tested the idea on a dataset from an artificial language experiment. The study demonstrates that using HMMs for this purpose is viable, but also that there is considerable room for refinement, such as explicit duration modeling, incorporating autoregressive elements, and relaxing the Markovian assumption, in order to accommodate specific details.
  • Filippi, P., Congdon, J. V., Hoang, J., Bowling, D. L., Reber, S., Pašukonis, A., Hoeschele, M., Ocklenburg, S., de Boer, B., Sturdy, C. B., Newen, A., & Güntürkün, O. (2016). Humans Recognize Vocal Expressions Of Emotional States Universally Across Species. In The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/91.html.

    Abstract

    The perception of danger in the environment can induce physiological responses (such as a heightened state of arousal) in animals, which may cause measurable changes in the prosodic modulation of the voice (Briefer, 2012). The ability to interpret the prosodic features of animal calls as an indicator of emotional arousal may have provided the first hominins with an adaptive advantage, enabling, for instance, the recognition of a threat in the surroundings. This ability might have paved the way for the ability to process meaningful prosodic modulations in the emerging linguistic utterances.
  • Filippi, P., Ocklenburg, S., Bowling, D. L., Heege, L., Newen, A., Güntürkün, O., & de Boer, B. (2016). Multimodal Processing Of Emotional Meanings: A Hypothesis On The Adaptive Value Of Prosody. In The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/90.html.

    Abstract

    Humans combine multiple sources of information to comprehend meanings. These sources can be characterized as linguistic (i.e., lexical units and/or sentences) or paralinguistic (e.g., body posture, facial expression, voice intonation, pragmatic context). Emotion communication is a special case in which linguistic and paralinguistic dimensions can simultaneously denote the same, or multiple incongruous, referential meanings. Think, for instance, about when someone says “I’m sad!”, but does so with happy intonation and a happy facial expression. Here, the communicative channels express very specific (although conflicting) emotional states as denotations. In such cases of intermodal incongruence, are we involuntarily biased to respond to information in one channel over the other? We hypothesize that humans are involuntarily biased to respond to prosody over verbal content and facial expression, since the ability to communicate socially relevant information such as basic emotional states through prosodic modulation of the voice might have provided early hominins with an adaptive advantage that preceded the emergence of segmental speech (Darwin, 1871; Mithen, 2005). To address this hypothesis, we examined the interaction between multiple communicative channels in recruiting attentional resources within a Stroop interference task (i.e., a task in which different channels give conflicting information; Stroop, 1935). In experiment 1, we used synonyms of “happy” and “sad” spoken with happy and sad prosody. Participants were asked to identify the emotion expressed by the verbal content while ignoring prosody (Word task) or vice versa (Prosody task). Participants responded faster and more accurately in the Prosody task. Within the Word task, incongruent stimuli were responded to more slowly and less accurately than congruent stimuli. In experiment 2, we adopted synonyms of “happy” and “sad” spoken in happy and sad prosody, while a happy or sad face was displayed. Participants were asked to identify the emotion expressed by the verbal content while ignoring prosody and face (Word task), to identify the emotion expressed by prosody while ignoring verbal content and face (Prosody task), or to identify the emotion expressed by the face while ignoring prosody and verbal content (Face task). Participants responded faster in the Face task, and less accurately when the two non-focused channels expressed an emotion that was incongruent with the focused one, as compared with the condition where all the channels were congruent. In addition, in the Word task, accuracy was lower when prosody was incongruent with verbal content and face, as compared with the condition where all the channels were congruent. Our data suggest that prosody interferes with emotion word processing, eliciting automatic responses even when conflicting with both verbal content and facial expressions at the same time. In contrast, although processed significantly faster than prosody and verbal content, faces alone are not sufficient to interfere in emotion processing within a three-dimensional Stroop task. Our findings align with the hypothesis that the ability to communicate emotions through prosodic modulation of the voice – which seems to be dominant over verbal content – is evolutionarily older than the emergence of segmental articulation (Mithen, 2005; Fitch, 2010). This hypothesis fits with quantitative data suggesting that prosody has a vital role in the perception of well-formed words (Johnson & Jusczyk, 2001), in the ability to map sounds to referential meanings (Filippi et al., 2014), and in syntactic disambiguation (Soderstrom et al., 2003). This research could complement studies on iconic communication within visual and auditory domains, providing new insights for models of language evolution. Further work aimed at understanding how emotional cues from different modalities are simultaneously integrated will improve our understanding of how humans interpret multimodal emotional meanings in real-life interactions.
  • Fisher, S. E. (2016). A molecular genetic perspective on speech and language. In G. Hickok, & S. Small (Eds.), Neurobiology of Language (pp. 13-24). Amsterdam: Elsevier. doi:10.1016/B978-0-12-407794-2.00002-X.

    Abstract

    The rise of genomic technologies has yielded exciting new routes for studying the biological foundations of language. Researchers have begun to identify genes implicated in neurodevelopmental disorders that disrupt speech and language skills. This chapter illustrates how such work can provide powerful entry points into the critical neural pathways, using FOXP2 as an example. Rare mutations of this gene cause problems with learning to sequence mouth movements during speech, accompanied by wide-ranging impairments in language production and comprehension. FOXP2 encodes a regulatory protein, a hub in a network of other genes, several of which have also been associated with language-related impairments. Versions of FOXP2 are found in similar form in many vertebrate species; indeed, studies of animals and birds suggest conserved roles in the development and plasticity of certain sets of neural circuits. Thus, the contributions of this gene to human speech and language involve modifications of evolutionarily ancient functions.
  • Fisher, S. E. (2003). The genetic basis of a severe speech and language disorder. In J. Mallet, & Y. Christen (Eds.), Neurosciences at the postgenomic era (pp. 125-134). Heidelberg: Springer.
  • Fitz, H., & Chang, F. (2008). The role of the input in a connectionist model of the accessibility hierarchy in development. In H. Chan, H. Jacob, & E. Kapia (Eds.), Proceedings from the 32nd Annual Boston University Conference on Language Development [BUCLD 32] (pp. 120-131). Somerville, Mass.: Cascadilla Press.
  • Floyd, S. (2016). Insubordination in Interaction: The Cha’palaa counter-assertive. In N. Evans, & H. Wananabe (Eds.), Dynamics of Insubordination (pp. 341-366). Amsterdam: John Benjamins.

    Abstract

    In the Cha’palaa language of Ecuador the main-clause use of the otherwise non-finite morpheme -ba can be accounted for by a specific interactive practice: the ‘counter-assertion’ of a statement or implicature of a previous conversational turn. Attention to the ways in which different constructions are deployed in such recurrent conversational contexts reveals a plausible account for how this type of dependent clause has come to be one of the options for finite clauses. After giving some background on Cha’palaa and placing ba clauses within a larger ecology of insubordination constructions in the language, this chapter uses examples from a video corpus of informal conversation to illustrate how interactive data provides answers that may otherwise be elusive for understanding how the different grammatical options for Cha’palaa finite verb constructions have been structured by insubordination.
  • Floyd, S., & Norcliffe, E. (2016). Switch reference systems in the Barbacoan languages and their neighbors. In R. Van Gijn, & J. Hammond (Eds.), Switch Reference 2.0 (pp. 207-230). Amsterdam: Benjamins.

    Abstract

    This chapter surveys the available data on Barbacoan languages and their neighbors to explore a case study of switch reference within a single language family and in a situation of areal contact. To the extent possible given the available data, we weigh accounts appealing to common inheritance and areal convergence to ask what combination of factors led to the current state of these languages. We discuss the areal distribution of switch reference systems in the northwest Andean region, the different types of systems and degrees of complexity observed, and scenarios of contact and convergence, particularly in the case of Barbacoan and Ecuadorian Quechua. We then cover each of the Barbacoan languages’ systems (with the exception of Totoró, represented by its close relative Guambiano), identifying limited formal cognates, primarily between closely-related Tsafiki and Cha’palaa, as well as broader functional similarities, particularly in terms of interactions with topic/focus markers. We account for the current state of affairs with a complex scenario of areal prevalence of switch reference combined with deep structural family inheritance and formal re-structuring of the systems over time.
  • Frank, S. L., Koppen, M., Noordman, L. G. M., & Vonk, W. (2003). A model for knowledge-based pronoun resolution. In F. Detje, D. Dörner, & H. Schaub (Eds.), The logic of cognitive systems (pp. 245-246). Bamberg: Otto-Friedrich Universität.

    Abstract

    Several sources of information are used in choosing the intended referent of an ambiguous pronoun. The two sources considered in this paper are foregrounding and context. The first refers to the accessibility of discourse entities. An entity that is foregrounded is more likely to become the pronoun’s referent than an entity that is not. Context information affects pronoun resolution when world knowledge is needed to find the referent. The model presented here simulates how world knowledge invoked by context, together with foregrounding, influences pronoun resolution. It was developed as an extension to the Distributed Situation Space (DSS) model of knowledge-based inferencing in story comprehension (Frank, Koppen, Noordman, & Vonk, 2003), which shall be introduced first.
  • Frost, R. L. A., Monaghan, P., & Christiansen, M. H. (2016). Using Statistics to Learn Words and Grammatical Categories: How High Frequency Words Assist Language Acquisition. In A. Papafragou, D. Mirman, & J. Trueswell (Eds.), Proceedings of the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016) (pp. 81-86). Austin, TX: Cognitive Science Society. Retrieved from https://mindmodeling.org/cogsci2016/papers/0027/index.html.

    Abstract

    Recent studies suggest that high-frequency words may benefit speech segmentation (Bortfeld, Morgan, Golinkoff, & Rathbun, 2005) and grammatical categorisation (Monaghan, Christiansen, & Chater, 2007). To date, these tasks have been examined separately, but not together. We familiarised adults with continuous speech comprising repetitions of target words, and compared learning to a language in which targets appeared alongside high-frequency marker words. Marker words reliably preceded targets, and distinguished them into two otherwise unidentifiable categories. Participants completed a 2AFC segmentation test, and a similarity judgement categorisation test. We tested transfer to a word-picture mapping task, where words from each category were used either consistently or inconsistently to label actions/objects. Participants segmented the speech successfully, but only demonstrated effective categorisation when speech contained high-frequency marker words. The advantage of marker words extended to the early stages of the transfer task. Findings indicate that the same high-frequency words may assist speech segmentation and grammatical categorisation.
  • Gaby, A., & Faller, M. (2003). Reciprocity questionnaire. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 77-80). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877641.

    Abstract

    This project is part of a collaborative project with the research group “Reciprocals across languages” led by Nick Evans. One goal of this project is to develop a typology of reciprocals. This questionnaire is designed to help field workers get an overview over the type of markers used in the expression of reciprocity in the language studied.
  • Gannon, E., He, J., Gao, X., & Chaparro, B. (2016). RSVP Reading on a Smart Watch. In Proceedings of the Human Factors and Ergonomics Society 2016 Annual Meeting (pp. 1130-1134).

    Abstract

    Reading with Rapid Serial Visual Presentation (RSVP) has shown promise for optimizing screen space and increasing reading speed without compromising comprehension. Given the wide use of small-screen devices, the present study compared RSVP and traditional reading on three types of reading comprehension, reading speed, and subjective measures on a smart watch. Results confirm previous studies that show faster reading speed with RSVP without detracting from comprehension. Subjective data indicate that Traditional is strongly preferred to RSVP as a primary reading method. Given the optimal use of screen space, increased speed and comparable comprehension, future studies should focus on making RSVP a more comfortable format.
  • García Lecumberri, M. L., Cooke, M., Cutugno, F., Giurgiu, M., Meyer, B. T., Scharenborg, O., Van Dommelen, W., & Volin, J. (2008). The non-native consonant challenge for European languages. In INTERSPEECH 2008 - 9th Annual Conference of the International Speech Communication Association (pp. 1781-1784). ISCA Archive.

    Abstract

    This paper reports on a multilingual investigation into the effects of different masker types on native and non-native perception in a VCV consonant recognition task. Native listeners outperformed 7 other language groups, but all groups showed a similar ranking of maskers. Strong first language (L1) interference was observed, both from the sound system and from the L1 orthography. Universal acoustic-perceptual tendencies are also at work in both native and non-native sound identifications in noise. The effect of linguistic distance, however, was less clear: in large multilingual studies, listener variables may overpower other factors.
  • Gerwien, J., & Flecken, M. (2016). First things first? Top-down influences on event apprehension. In A. Papafragou, D. Grodner, D. Mirman, & J. Trueswell (Eds.), Proceedings of the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016) (pp. 2633-2638). Austin, TX: Cognitive Science Society.

    Abstract

    Little is known about event apprehension, the earliest stage of information processing in elicited language production studies using pictorial stimuli. One reason for this lack of knowledge is that apprehension happens very rapidly (<350 ms after stimulus onset; Griffin & Bock, 2000), making it difficult to measure the process directly. To broaden our understanding of apprehension, we analyzed landing positions and onset latencies of first fixations on visual stimuli (pictures of real-world events) given short stimulus presentation times, presupposing that the first fixation directly results from information processing during apprehension.
  • Gordon, P. C., Lowder, M. W., & Hoedemaker, R. S. (2016). Reading in normally aging adults. In H. Wright (Ed.), Cognitive-Linguistic Processes and Aging (pp. 165-192). Amsterdam: Benjamins. doi:10.1075/z.200.07gor.

    Abstract

    The activity of reading raises fundamental theoretical and practical questions about healthy cognitive aging. Reading relies greatly on knowledge of patterns of language and of meaning at the level of words and topics of text. Further, this knowledge must be rapidly accessed so that it can be coordinated with processes of perception, attention, memory and motor control that sustain skilled reading at rates of four-to-five words a second. As such, reading depends both on crystallized semantic intelligence which grows or is maintained through healthy aging, and on components of fluid intelligence which decline with age. Reading is important to older adults because it facilitates completion of everyday tasks that are essential to independent living. In addition, it entails the kind of active mental engagement that can preserve and deepen the cognitive reserve that may mitigate the negative consequences of age-related changes in the brain. This chapter reviews research on the front end of reading (word recognition) and on the back end of reading (text memory) because both of these abilities are surprisingly robust to declines associated with cognitive aging. For word recognition, that robustness is surprising because rapid processing of the sort found in reading is usually impaired by aging; for text memory, it is surprising because other types of episodic memory performance (e.g., paired associates) substantially decline in aging. These two otherwise quite different levels of reading comprehension remain robust because they draw on the knowledge of language that older adults gain through a life-time of experience with language.
  • Gretsch, P. (2003). Omission impossible?: Topic and Focus in Focal Ellipsis. In K. Schwabe, & S. Winkler (Eds.), The Interfaces: Deriving and interpreting omitted structures (pp. 341-365). Amsterdam: John Benjamins.
  • Le Guen, O., Senft, G., & Sicoli, M. A. (2008). Language of perception: Views from anthropology. In A. Majid (Ed.), Field Manual Volume 11 (pp. 29-36). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.446079.

    Abstract

    To understand the underlying principles of categorisation and classification of sensory input, semantic analyses must be based on both language and culture. The senses are not only physiological phenomena, but they are also linguistic, cultural, and social. The goal of this task is to explore and describe sociocultural patterns relating language of perception, ideologies of perception, and perceptual practice in our speech communities.
  • Gullberg, M. (2003). Eye movements and gestures in human face-to-face interaction. In J. Hyönä, R. Radach, & H. Deubel (Eds.), The mind's eyes: Cognitive and applied aspects of eye movements (pp. 685-703). Oxford: Elsevier.

    Abstract

    Gestures are visuospatial events, meaning carriers, and social interactional phenomena. As such they constitute a particularly favourable area for investigating visual attention in a complex everyday situation under conditions of competitive processing. This chapter discusses visual attention to spontaneous gestures in human face-to-face interaction as explored with eye-tracking. Some basic fixation patterns are described, live and video-based settings are compared, and preliminary results on the relationship between fixations and information processing are outlined.
  • Gullberg, M., & Kita, S. (2003). Das Beachten von Gesten: Eine Studie zu Blickverhalten und Integration gestisch ausgedrückter Informationen. In Max-Planck-Gesellschaft (Ed.), Jahrbuch der Max Planck Gesellschaft 2003 (pp. 949-953). Göttingen: Vandenhoeck & Ruprecht.
  • Gullberg, M. (2008). A helping hand? Gestures, L2 learners, and grammar. In S. G. McCafferty, & G. Stam (Eds.), Gesture: Second language acquisition and classroom research (pp. 185-210). New York: Routledge.

    Abstract

    This chapter explores what L2 learners' gestures reveal about L2 grammar. The focus is on learners’ difficulties with maintaining reference in discourse caused by their incomplete mastery of pronouns. The study highlights the systematic parallels between properties of L2 speech and gesture, and the parallel effects of grammatical development in both modalities. The validity of a communicative account of interlanguage grammar in this domain is tested by taking the cohesive properties of the gesture-speech ensemble into account. Specifically, I investigate whether learners use gestures to compensate for and to license over-explicit reference in speech. The results rule out a communicative account for the spoken variety of maintained reference. In contrast, cohesive gestures are found to be multi-functional. While the presence of cohesive gestures is not communicatively motivated, their spatial realisation is. It is suggested that gestures are exploited as a grammatical communication strategy to disambiguate speech wherever possible, but that they may also be doing speaker-internal work. The methodological importance of considering L2 gestures when studying grammar is also discussed.
  • Gullberg, M., & Indefrey, P. (2008). Cognitive and neural prerequisites for time in language: Any answers? In P. Indefrey, & M. Gullberg (Eds.), Time to speak: Cognitive and neural prerequisites for time in language (pp. 207-216). Oxford: Blackwell.
  • Gullberg, M. (2008). Gestures and second language acquisition. In P. Robinson, & N. C. Ellis (Eds.), Handbook of cognitive linguistics and second language acquisition (pp. 276-305). New York: Routledge.

    Abstract

    Gestures, the symbolic movements speakers perform while they speak, are systematically related to speech and language at multiple levels, and reflect cognitive and linguistic activities in non-trivial ways. This chapter presents an overview of what gestures can tell us about the processes of second language acquisition. It focuses on two key aspects, (a) gestures and the developing language system and (b) gestures and learning, and discusses some implications of an expanded view of language acquisition that takes gestures into account.
  • Gullberg, M. (2003). Gestures, referents, and anaphoric linkage in learner varieties. In C. Dimroth, & M. Starren (Eds.), Information structure, linguistic structure and the dynamics of language acquisition. (pp. 311-328). Amsterdam: Benjamins.

    Abstract

    This paper discusses how the gestural modality can contribute to our understanding of anaphoric linkage in learner varieties, focusing on gestural anaphoric linkage marking the introduction, maintenance, and shift of reference in story retellings by learners of French and Swedish. The comparison of gestural anaphoric linkage in native and non-native varieties reveals what appears to be a particular learner variety of gestural cohesion, which closely reflects the characteristics of anaphoric linkage in learners' speech. Specifically, particular forms co-occur with anaphoric gestures depending on the information organisation in discourse. The typical nominal over-marking of maintained referents or topic elements in speech is mirrored by gestural (over-)marking of the same items. The paper discusses two ways in which this finding may further the understanding of anaphoric over-explicitness of learner varieties. An addressee-based communicative perspective on anaphoric linkage highlights how over-marking in gesture and speech may be related to issues of hyper-clarity and ambiguity. An alternative speaker-based perspective is also explored in which anaphoric over-marking is seen as related to L2 speech planning.
  • Hagoort, P., & Brown, C. M. (1994). Brain responses to lexical ambiguity resolution and parsing. In C. Clifton Jr, L. Frazier, & K. Rayner (Eds.), Perspectives on sentence processing (pp. 45-81). Hilsdale NY: Lawrence Erlbaum Associates.
  • Hagoort, P., Ramsey, N. F., & Jensen, O. (2008). De gereedschapskist van de cognitieve neurowetenschap. In F. Wijnen, & F. Verstraten (Eds.), Het brein te kijk: Verkenning van de cognitieve neurowetenschap (pp. 41-75). Amsterdam: Harcourt Assessment.
  • Hagoort, P. (2003). De verloving tussen neurowetenschap en psychologie. In K. Hilberdink (Ed.), Interdisciplinariteit in de geesteswetenschappen (pp. 73-81). Amsterdam: KNAW.
  • Hagoort, P. (2003). Die einzigartige, grösstenteils aber unbewusste Fähigkeit der Menschen zu sprachlicher Kommunikation. In G. Kaiser (Ed.), Jahrbuch 2002-2003 / Wissenschaftszentrum Nordrhein-Westfalen (pp. 33-46). Düsseldorf: Wissenschaftszentrum Nordrhein-Westfalen.
  • Hagoort, P. (2003). Functional brain imaging. In W. J. Frawley (Ed.), International encyclopedia of linguistics (pp. 142-145). New York: Oxford University Press.
  • Hagoort, P. (2016). MUC (Memory, Unification, Control): A Model on the Neurobiology of Language Beyond Single Word Processing. In G. Hickok, & S. Small (Eds.), Neurobiology of language (pp. 339-347). Amsterdam: Elsevier. doi:10.1016/B978-0-12-407794-2.00028-6.

    Abstract

    A neurobiological model of language is discussed that overcomes the shortcomings of the classical Wernicke-Lichtheim-Geschwind model. It is based on a subdivision of language processing into three components: Memory, Unification, and Control. The functional components as well as the neurobiological underpinnings of the model are discussed. In addition, the need for extension beyond the classical core regions for language is shown. Attentional networks as well as networks for inferential processing are crucial to realize language comprehension beyond single word processing and beyond decoding propositional content.
  • Hagoort, P. (1998). The shadows of lexical meaning in patients with semantic impairments. In B. Stemmer, & H. Whitaker (Eds.), Handbook of neurolinguistics (pp. 235-248). New York: Academic Press.
  • Hagoort, P. (2016). Zij zijn ons brein. In J. Brockman (Ed.), Machines die denken: Invloedrijke denkers over de komst van kunstmatige intelligentie (pp. 184-186). Amsterdam: Maven Publishing.
  • Hagoort, P. (2008). Über Broca, Gehirn und Bindung. In Jahrbuch 2008: Tätigkeitsberichte der Institute. München: Generalverwaltung der Max-Planck-Gesellschaft. Retrieved from http://www.mpg.de/306524/forschungsSchwerpunkt1?c=166434.

    Abstract

    In speaking and in language comprehension, word meanings are retrieved from memory and combined into larger units (unification). Such unification operations take place at different levels of language processing. This contribution proposes a framework in which psycholinguistic models are brought into contact with a neurobiological perspective on language. According to this proposal, the left inferior frontal gyrus (LIFG) plays an important role in unification.
  • Hanulikova, A. (2008). Word recognition in possible word contexts. In M. Kokkonidis (Ed.), Proceedings of LingO 2007 (pp. 92-99). Oxford: Faculty of Linguistics, Philology, and Phonetics, University of Oxford.

    Abstract

    The Possible-Word Constraint (PWC; Norris, McQueen, Cutler, and Butterfield 1997) suggests that segmentation of continuous speech operates with a universal constraint that feasible words should contain a vowel. Single consonants, because they do not constitute syllables, are treated as non-viable residues. Two word-spotting experiments are reported that investigate whether the PWC really is a language-universal principle. According to the PWC, Slovak listeners should, just like Germans, be slower at spotting words in single consonant contexts (not feasible words) as compared to syllable contexts (feasible words)—even if single consonants can be words in Slovak. The results confirm the PWC in German but not in Slovak.
  • Hanulikova, A., & Dietrich, R. (2008). Die variable Coda in der slowakisch-deutschen Interimsprache. In M. Tarvas (Ed.), Tradition und Geschichte im literarischen und sprachwissenschaftlichen Kontext (pp. 119-130). Bern: Peter Lang.
  • Harbusch, K., Kempen, G., & Vosse, T. (2008). A natural-language paraphrase generator for on-line monitoring and commenting incremental sentence construction by L2 learners of German. In Proceedings of WorldCALL 2008.

    Abstract

    Certain categories of language learners need feedback on the grammatical structure of sentences they wish to produce. In contrast with the usual NLP approach to this problem—parsing student-generated texts—we propose a generation-based approach aiming at preventing errors (“scaffolding”). In our ICALL system, students construct sentences by composing syntactic trees out of lexically anchored “treelets” via a graphical drag&drop user interface. A natural-language generator computes all possible grammatically well-formed sentences entailed by the student-composed tree, and intervenes immediately when the latter tree does not belong to the set of well-formed alternatives. Feedback is based on comparisons between the student-composed tree and the well-formed set. Frequently occurring errors are handled in terms of “malrules.” The system (implemented in Java and C++) currently focuses on constituent order in German as L2.
  • Harmon, Z., & Kapatsinski, V. (2016). Fuse to be used: A weak cue’s guide to attracting attention. In A. Papafragou, D. Grodner, D. Mirman, & J. Trueswell (Eds.), Proceedings of the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016). Austin, TX: Cognitive Science Society (pp. 520-525). Austin, TX: Cognitive Science Society.

    Abstract

    Several studies examined cue competition in human learning by testing learners on a combination of conflicting cues rooting for different outcomes, with each cue perfectly predicting its outcome. A common result has been that learners faced with cue conflict choose the outcome associated with the rare cue (the Inverse Base Rate Effect, IBRE). Here, we investigate cue competition including IBRE with sentences containing cues to meanings in a visual world. We do not observe IBRE. Instead we find that position in the sentence strongly influences cue salience. Faced with conflict between an initial cue and a non-initial cue, learners choose the outcome associated with the initial cue, whether frequent or rare. However, a frequent configuration of non-initial cues that are not sufficiently salient on their own can overcome a competing salient initial cue rooting for a different meaning. This provides a possible explanation for certain recurring patterns in language change.
  • Harmon, Z., & Kapatsinski, V. (2016). Determinants of lengths of repetition disfluencies: Probabilistic syntactic constituency in speech production. In R. Burkholder, C. Cisneros, E. R. Coppess, J. Grove, E. A. Hanink, H. McMahan, C. Meyer, N. Pavlou, Ö. Sarıgül, A. R. Singerman, & A. Zhang (Eds.), Proceedings of the Fiftieth Annual Meeting of the Chicago Linguistic Society (pp. 237-248). Chicago: Chicago Linguistic Society.
  • Haun, D. B. M., & Waller, D. (2003). Alignment task. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 39-48). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Haun, D. B. M. (2003). Path integration. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 33-38). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877644.
  • Haun, D. B. M. (2003). Spatial updating. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 49-56). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Hendricks, I., Lefever, E., Croijmans, I., Majid, A., & Van den Bosch, A. (2016). Very quaffable and great fun: Applying NLP to wine reviews. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics: Vol 2 (pp. 306-312). Stroudsburg, PA: Association for Computational Linguistics.

    Abstract

    We automatically predict properties of wines on the basis of smell and flavor descriptions from experts’ wine reviews. We show wine experts are capable of describing their smell and flavor experiences in wine reviews in a sufficiently consistent manner, such that we can use their descriptions to predict properties of a wine based solely on language. The experimental results show promising F-scores when using lexical and semantic information to predict the color, grape variety, country of origin, and price of a wine. This demonstrates, contrary to popular opinion, that wine experts’ reviews really are informative.
  • Hintz, F., & Scharenborg, O. (2016). Neighbourhood density influences word recognition in native and non-native speech recognition in noise. In H. Van den Heuvel, B. Cranen, & S. Mattys (Eds.), Proceedings of the Speech Processing in Realistic Environments (SPIRE) workshop (pp. 46-47). Groningen.
  • Hintz, F., & Scharenborg, O. (2016). The effect of background noise on the activation of phonological and semantic information during spoken-word recognition. In Proceedings of Interspeech 2016: The 17th Annual Conference of the International Speech Communication Association (pp. 2816-2820).

    Abstract

    During spoken-word recognition, listeners experience phonological competition between multiple word candidates, which increases, relative to optimal listening conditions, when speech is masked by noise. Moreover, listeners activate semantic word knowledge during the word’s unfolding. Here, we replicated the effect of background noise on phonological competition and investigated to which extent noise affects the activation of semantic information in phonological competitors. Participants’ eye movements were recorded when they listened to sentences containing a target word and looked at three types of displays. The displays either contained a picture of the target word, or a picture of a phonological onset competitor, or a picture of a word semantically related to the onset competitor, each along with three unrelated distractors. The analyses revealed that, in noise, fixations to the target and to the phonological onset competitor were delayed and smaller in magnitude compared to the clean listening condition, most likely reflecting enhanced phonological competition. No evidence for the activation of semantic information in the phonological competitors was observed in noise and, surprisingly, also not in the clear. We discuss the implications of the lack of an effect and differences between the present and earlier studies.
  • Irvine, E., & Roberts, S. G. (2016). Deictic tools can limit the emergence of referential symbol systems. In S. G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Feher, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/99.html.

    Abstract

    Previous experiments and models show that the pressure to communicate can lead to the emergence of symbols in specific tasks. The experiment presented here suggests that the ability to use deictic gestures can reduce the pressure for symbols to emerge in co-operative tasks. In the 'gesture-only' condition, pairs built a structure together in 'Minecraft', and could only communicate using a small range of gestures. In the 'gesture-plus' condition, pairs could also use sound to develop a symbol system if they wished. All pairs were taught a pointing convention. None of the pairs we tested developed a symbol system, and performance was no different across the two conditions. We therefore suggest that deictic gestures, and non-referential means of organising activity sequences, are often sufficient for communication. This suggests that the emergence of linguistic symbols in early hominids may have been late and patchy with symbols only emerging in contexts where they could significantly improve task success or efficiency. Given the communicative power of pointing however, these contexts may be fewer than usually supposed. An approach for identifying these situations is outlined.
  • Isaac, A., Matthezing, H., Van der Meij, L., Schlobach, S., Wang, S., & Zinn, C. (2008). Putting ontology alignment in context: Usage, scenarios, deployment and evaluation in a library case. In S. Bechhofer, M. Hauswirth, J. Hoffmann, & M. Koubarakis (Eds.), The semantic web: Research and applications (pp. 402-417). Berlin: Springer.

    Abstract

    Thesaurus alignment plays an important role in realising efficient access to heterogeneous Cultural Heritage data. Current ontology alignment techniques, however, provide only limited value for such access as they consider little if any requirements from realistic use cases or application scenarios. In this paper, we focus on two real-world scenarios in a library context: thesaurus merging and book re-indexing. We identify their particular requirements and describe our approach of deploying and evaluating thesaurus alignment techniques in this context. We have applied our approach for the Ontology Alignment Evaluation Initiative, and report on the performance evaluation of participants’ tools with respect to the application scenario at hand. It shows that evaluating tools requires significant effort, but, when done carefully, brings many benefits.
  • Janse, E. (2003). Word perception in natural-fast and artificially time-compressed speech. In M. Solé, D. Recasens, & J. Romero (Eds.), Proceedings of the 15th International Congress of Phonetic Sciences (pp. 3001-3004).
  • Janssen, R., Winter, B., Dediu, D., Moisik, S. R., & Roberts, S. G. (2016). Nonlinear biases in articulation constrain the design space of language. In S. G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Feher, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/86.html.

    Abstract

    In Iterated Learning (IL) experiments, a participant’s learned output serves as the next participant’s learning input (Kirby et al., 2014). IL can be used to model cultural transmission and has indicated that weak biases can be amplified through repeated cultural transmission (Kirby et al., 2007). So, for example, structural language properties can emerge over time because languages come to reflect the cognitive constraints in the individuals that learn and produce the language. Similarly, we propose that languages may also reflect certain anatomical biases. Do sound systems adapt to the affordances of the articulation space induced by the vocal tract?
    The human vocal tract has inherent nonlinearities which might derive from acoustics and aerodynamics (cf. quantal theory, see Stevens, 1989) or biomechanics (cf. Gick & Moisik, 2015). For instance, moving the tongue anteriorly along the hard palate to produce a fricative does not result in large changes in acoustics in most cases, but for a small range there is an abrupt change from a perceived palato-alveolar [ʃ] to alveolar [s] sound (Perkell, 2012). Nonlinearities such as these might bias all human speakers to converge on a very limited set of phonetic categories, and might even be a basis for combinatoriality or phonemic ‘universals’.
    While IL typically uses discrete symbols, Verhoef et al. (2014) have used slide whistles to produce a continuous signal. We conducted an IL experiment with human subjects who communicated using a digital slide whistle for which the degree of nonlinearity is controlled. A single parameter (α) changes the mapping from slide whistle position (the ‘articulator’) to the acoustics. With α=0, the position of the slide whistle maps Bark-linearly to the acoustics. As α approaches 1, the mapping gets more double-sigmoidal, creating three plateaus where large ranges of positions map to similar frequencies. In more abstract terms, α represents the strength of a nonlinear (anatomical) bias in the vocal tract.
    Six chains (138 participants) of dyads were tested, each chain with a different, fixed α. Participants had to communicate four meanings by producing a continuous signal using the slide-whistle in a ‘director-matcher’ game, alternating roles (cf. Garrod et al., 2007).
    Results show that for high αs, subjects quickly converged on the plateaus. This quick convergence is indicative of a strong bias, repelling subjects away from unstable regions already within-subject. Furthermore, high αs lead to the emergence of signals that oscillate between two (out of three) plateaus. Because the sigmoidal spaces are spatially constrained, participants increasingly used the sequential/temporal dimension. As a result of this, the average duration of signals with high α was ~100ms longer than with low α. These oscillations could be an expression of a basis for phonemic combinatoriality.
    We have shown that it is possible to manipulate the magnitude of an articulator-induced non-linear bias in a slide whistle IL framework. The results suggest that anatomical biases might indeed constrain the design space of language. In particular, the signaling systems in our study quickly converged (within-subject) on the use of stable regions. While these conclusions were drawn from experiments using slide whistles with a relatively strong bias, weaker biases could possibly be amplified over time by repeated cultural transmission, and likely lead to similar outcomes.
  • Janssen, R., Dediu, D., & Moisik, S. R. (2016). Simple agents are able to replicate speech sounds using 3d vocal tract model. In S. G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Feher, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/97.html.

    Abstract

    Many factors have been proposed to explain why groups of people use different speech sounds in their language. These range from cultural, cognitive, environmental (e.g., Everett, et al., 2015) to anatomical (e.g., vocal tract (VT) morphology). How could such anatomical properties have led to the similarities and differences in speech sound distributions between human languages?

    It is known that hard palate profile variation can induce different articulatory strategies in speakers (e.g., Brunner et al., 2009). That is, different hard palate profiles might induce a kind of bias on speech sound production, easing some types of sounds while impeding others. With a population of speakers (with a proportion of individuals) that share certain anatomical properties, even subtle VT biases might become expressed at a population-level (through e.g., bias amplification, Kirby et al., 2007). However, before we look into population-level effects, we should first look at within-individual anatomical factors. For that, we have developed a computer-simulated analogue for a human speaker: an agent. Our agent is designed to replicate speech sounds using a production and cognition module in a computationally tractable manner.

    Previous agent models have often used more abstract (e.g., symbolic) signals (e.g., Kirby et al., 2007). We have equipped our agent with a three-dimensional model of the VT (the production module, based on Birkholz, 2005) to which we made numerous adjustments. Specifically, we used a 4th-order Bezier curve that is able to capture hard palate variation on the mid-sagittal plane (XXX, 2015). Using an evolutionary algorithm, we were able to fit the model to human hard palate MRI tracings, yielding high-accuracy fits using as few as two parameters. Finally, we show that the samples map to well-dispersed regions of the parameter space, demonstrating that the model cannot generate unrealistic profiles. We can thus use this procedure to import palate measurements into our agent’s production module to investigate the effects on acoustics. We can also exaggerate/introduce novel biases.

    Our agent is able to control the VT model using the cognition module.

    Previous research has focused on detailed neurocomputation (e.g., Kröger et al., 2014) that highlights e.g., neurobiological principles or speech recognition performance. However, the brain is not the focus of our current study. Furthermore, present-day computing throughput likely does not allow for large-scale deployment of these architectures, as required by the population model we are developing. Thus, the question of whether a very simple cognition module is able to replicate sounds in a computationally tractable manner, and even generalize over novel stimuli, is one worthy of attention in its own right.

    Our agent’s cognition module is based on running an evolutionary algorithm on a large population of feed-forward neural networks (NNs). As such, (anatomical) bias strength can be thought of as an attractor basin area within the parameter space the agent has to explore. The NN we used consists of a triple-layered (fully-connected), directed graph. The input layer (three neurons) receives the formant frequencies of a target sound. The output layer (12 neurons) projects to the articulators in the production module. A hidden layer (seven neurons) enables the network to deal with nonlinear dependencies. The Euclidean distance (first three formants) between target and replication is used as fitness measure. Results show that sound replication is indeed possible, with Euclidean distance quickly approaching a close-to-zero asymptote.

    Statistical analysis should reveal if the agent can also: a) Generalize: Can it replicate sounds not exposed to during learning? b) Replicate consistently: Do different, isolated agents always converge on the same sounds? c) Deal with consolidation: Can it still learn new sounds after an extended learning phase (‘infancy’) has been terminated? Finally, a comparison with more complex models will be used to demonstrate robustness.
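    The cognition module described in this abstract can be rendered as a minimal sketch: an evolutionary algorithm over small fixed-topology feed-forward networks (3 input, 7 hidden, 12 output neurons) with Euclidean formant distance as the fitness measure. Everything here beyond those layer sizes and the fitness measure is a hypothetical stand-in, not the authors' implementation; in particular, `produce` mocks the 3D vocal-tract model, which this sketch does not implement.

```python
import math
import random

# Layer sizes from the abstract: 3 formant inputs, 7 hidden, 12 articulator outputs.
I_N, H_N, O_N = 3, 7, 12

def random_net():
    # Weights as flat lists: input->hidden and hidden->output.
    return ([random.uniform(-1, 1) for _ in range(I_N * H_N)],
            [random.uniform(-1, 1) for _ in range(H_N * O_N)])

def forward(net, formants):
    # Fully connected feed-forward pass with tanh activations.
    w_ih, w_ho = net
    hidden = [math.tanh(sum(formants[i] * w_ih[i * H_N + h] for i in range(I_N)))
              for h in range(H_N)]
    return [math.tanh(sum(hidden[h] * w_ho[h * O_N + o] for h in range(H_N)))
            for o in range(O_N)]

def produce(articulators):
    # Hypothetical stand-in for the 3D vocal-tract production module:
    # maps 12 articulator settings to three "formant" values.
    return [sum(articulators[i::3]) for i in range(3)]

def fitness(net, target):
    # Euclidean distance over the first three formants (lower is better),
    # as described in the abstract.
    replica = produce(forward(net, target))
    return math.dist(target, replica)

def mutate(net, sigma=0.1):
    # Gaussian perturbation of every weight.
    return tuple([w + random.gauss(0, sigma) for w in layer] for layer in net)

def evolve(target, pop_size=50, generations=200):
    # Truncation selection: keep the best half, refill with mutated survivors.
    pop = [random_net() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda n: fitness(n, target))
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return min(pop, key=lambda n: fitness(n, target))

if __name__ == "__main__":
    random.seed(0)
    target = [0.5, 0.15, 0.25]  # normalized stand-ins for F1-F3
    best = evolve(target)
    print(round(fitness(best, target), 3))
```

    Because the best half of each generation survives unchanged, the best fitness never increases across generations, mirroring the abstract's observation that the distance quickly approaches a close-to-zero asymptote.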
  • Jeske, J., Kember, H., & Cutler, A. (2016). Native and non-native English speakers' use of prosody to predict sentence endings. In Proceedings of the 16th Australasian International Conference on Speech Science and Technology (SST2016).
  • Jesse, A., & Johnson, E. K. (2008). Audiovisual alignment in child-directed speech facilitates word learning. In Proceedings of the International Conference on Auditory-Visual Speech Processing (pp. 101-106). Adelaide, Aust: Causal Productions.

    Abstract

    Adult-to-child interactions are often characterized by prosodically-exaggerated speech accompanied by visually captivating co-speech gestures. In a series of adult studies, we have shown that these gestures are linked in a sophisticated manner to the prosodic structure of adults' utterances. In the current study, we use the Preferential Looking Paradigm to demonstrate that two-year-olds can use the alignment of these gestures to speech to deduce the meaning of words.
  • Johnson, E. K. (2003). Speaker intent influences infants' segmentation of potentially ambiguous utterances. In Proceedings of the 15th International Congress of Phonetic Sciences (PCPhS 2003) (pp. 1995-1998). Adelaide: Causal Productions.
  • De Jong, N. H., Schreuder, R., & Baayen, R. H. (2003). Morphological resonance in the mental lexicon. In R. Baayen, & R. Schreuder (Eds.), Morphological structure in language processing (pp. 65-88). Berlin: Mouton de Gruyter.
  • Jordens, P. (1998). Defaultformen des Präteritums. Zum Erwerb der Vergangenheitsmorphologie im Niederländischen. In H. Wegener (Ed.), Eine zweite Sprache lernen (pp. 61-88). Tübingen, Germany: Verlag Gunter Narr.
  • Jordens, P. (2003). Constraints on the shape of second language learner varieties. In G. Rickheit, T. Herrmann, & W. Deutsch (Eds.), Psycholinguistik/Psycholinguistics: Ein internationales Handbuch. [An International Handbook] (pp. 819-833). Berlin: Mouton de Gruyter.
  • Jordens, P., Matsuo, A., & Perdue, C. (2008). Comparing the acquisition of finiteness: A cross-linguistic approach. In B. Ahrenholz, U. Bredel, W. Klein, M. Rost-Roth, & R. Skiba (Eds.), Empirische Forschung und Theoriebildung: Beiträge aus Soziolinguistik, Gesprochene-Sprache- und Zweitspracherwerbsforschung: Festschrift für Norbert Dittmar (pp. 261-276). Frankfurt am Main: Lang.
  • Keating, P., Cho, T., Fougeron, C., & Hsu, C.-S. (2003). Domain-initial strengthening in four languages. In J. Local, R. Ogden, & R. Temple (Eds.), Laboratory phonology VI: Phonetic interpretation (pp. 145-163). Cambridge: Cambridge University Press.
  • Kember, H., Choi, J., & Cutler, A. (2016). Processing advantages for focused words in Korean. In J. Barnes, A. Brugos, S. Shattuck-Hufnagel, & N. Veilleux (Eds.), Proceedings of Speech Prosody 2016 (pp. 702-705).

    Abstract

    In Korean, focus is expressed in accentual phrasing. To ascertain whether words focused in this manner enjoy a processing advantage analogous to that conferred by focus as expressed in, e.g., English and Dutch, we devised sentences with target words in one of four conditions: prosodic focus, syntactic focus, prosodic + syntactic focus, and no focus as a control. 32 native speakers of Korean listened to blocks of 10 sentences, then were presented visually with words and asked whether or not they had heard them. Overall, words with focus were recognised significantly faster and more accurately than unfocused words. In addition, words with syntactic focus or syntactic + prosodic focus were recognised faster than words with prosodic focus alone. As for other languages, Korean focus confers processing advantage on the words carrying it. While prosodic focus does provide an advantage, however, syntactic focus appears to provide the greater beneficial effect for recognition memory.
  • Kempen, G., & Harbusch, K. (2003). A corpus study into word order variation in German subordinate clauses: Animacy affects linearization independently of function assignment. In Proceedings of AMLaP 2003 (pp. 153-154). Glasgow: Glasgow University.
  • Kempen, G., & Harbusch, K. (1998). A 'tree adjoining' grammar without adjoining: The case of scrambling in German. In Fourth International Workshop on Tree Adjoining Grammars and Related Frameworks (TAG+4).
  • Kempen, G., & Harbusch, K. (2003). Dutch and German verb clusters in performance grammar. In P. A. Seuren, & G. Kempen (Eds.), Verb constructions in German and Dutch (pp. 185-221). Amsterdam: Benjamins.
  • Kempen, G., & Harbusch, K. (2008). Comparing linguistic judgments and corpus frequencies as windows on grammatical competence: A study of argument linearization in German clauses. In A. Steube (Ed.), The discourse potential of underspecified structures (pp. 179-192). Berlin: Walter de Gruyter.

    Abstract

    We present an overview of several corpus studies we carried out into the frequencies of argument NP orderings in the midfield of subordinate and main clauses of German. Comparing the corpus frequencies with grammaticality ratings published by Keller (2000), we observe a “grammaticality–frequency gap”: Quite a few argument orderings with zero corpus frequency are nevertheless assigned medium-range grammaticality ratings. We propose an explanation in terms of a two-factor theory. First, we hypothesize that the grammatical induction component needs a sufficient number of exposures to a syntactic pattern to incorporate it into its repertoire of more or less stable rules of grammar. Moderately to highly frequent argument NP orderings are likely to have attained this status, but not their zero-frequency counterparts. This is why the latter argument sequences cannot be produced by the grammatical encoder and are absent from the corpora. Second, we assume that an extraneous (nonlinguistic) judgment process biases the ratings of moderately grammatical linear order patterns: Confronted with such structures, the informants produce their own “ideal delivery” variant of the to-be-rated target sentence and evaluate the similarity between the two versions. A high similarity score yielded by this judgment then exerts a positive bias on the grammaticality rating—a score that should not be mistaken for an authentic grammaticality rating. We conclude that, at least in the linearization domain studied here, the goal of gaining a clear view of the internal grammar of language users is best served by a combined strategy in which grammar rules are founded on structures that elicit moderate to high grammaticality ratings and attain at least moderate usage frequencies.
  • Kempen, G. (2003). Language generation. In W. Frawley (Ed.), International encyclopedia of linguistics (pp. 362-364). New York: Oxford University Press.
  • Kempen, G. (1994). Innovative language checking software for Dutch. In J. Van Gent, & E. Peeters (Eds.), Proceedings of the 2e Dag van het Document (pp. 99-100). Delft: TNO Technisch Physische Dienst.
  • Kempen, G. (1998). Sentence parsing. In A. D. Friederici (Ed.), Language comprehension: A biological perspective (pp. 213-228). Berlin: Springer.
  • Kempen, G. (1994). The unification space: A hybrid model of human syntactic processing [Abstract]. In CUNY 1994 - The 7th Annual CUNY Conference on Human Sentence Processing. March 17-19, 1994. CUNY Graduate Center, New York.
  • Kempen, G., & Harbusch, K. (2003). Word order scrambling as a consequence of incremental sentence production. In H. Härtl, & H. Tappe (Eds.), Mediating between concepts and grammar (pp. 141-164). Berlin: Mouton de Gruyter.
  • Kempen, G., & Dijkstra, A. (1994). Toward an integrated system for grammar, writing and spelling instruction. In L. Appelo, & F. De Jong (Eds.), Computer-Assisted Language Learning: Proceedings of the Seventh Twente Workshop on Language Technology (pp. 41-46). Enschede: University of Twente.
  • Kemps-Snijders, M., Klassmann, A., Zinn, C., Berck, P., Russel, A., & Wittenburg, P. (2008). Exploring and enriching a language resource archive via the web. In Proceedings of the 6th International Conference on Language Resources and Evaluation (LREC 2008).

    Abstract

    The “download first, then process” paradigm is still the predominant working method amongst the research community. The web-based paradigm, however, offers many advantages from a tool development and data management perspective, as it allows a quick adaptation to changing research environments. Moreover, new ways of combining tools and data are increasingly becoming available and will eventually enable a true web-based workflow approach, thus challenging the “download first, then process” paradigm. The necessary infrastructure for managing, exploring and enriching language resources via the Web will need to be delivered by projects like CLARIN and DARIAH.
  • Kemps-Snijders, M., Zinn, C., Ringersma, J., & Windhouwer, M. (2008). Ensuring semantic interoperability on lexical resources. In Proceedings of the 6th International Conference on Language Resources and Evaluation (LREC 2008).

    Abstract

    In this paper, we describe a unifying approach to tackle data heterogeneity issues for lexica and related resources. We present LEXUS, our software that implements the Lexical Markup Framework (LMF) to uniformly describe and manage lexica of different structures. LEXUS also makes use of a central Data Category Registry (DCR) to address terminological issues with regard to linguistic concepts as well as the handling of working and object languages. Finally, we report on ViCoS, a LEXUS extension, providing support for the definition of arbitrary semantic relations between lexical entries or parts thereof.
  • Kemps-Snijders, M., Windhouwer, M., Wittenburg, P., & Wright, S. E. (2008). ISOcat: Corralling data categories in the wild. In Proceedings of the 6th International Conference on Language Resources and Evaluation (LREC 2008).

    Abstract

    To achieve true interoperability for valuable linguistic resources, different levels of variation need to be addressed. ISO Technical Committee 37, Terminology and other language and content resources, is developing a Data Category Registry. This registry will provide a reusable set of data categories. A new implementation of the registry, dubbed ISOcat, is currently under construction. This paper briefly describes the new data model for data categories that will be introduced in this implementation. It goes on with a sketch of the standardization process. Completed data categories can be reused by the community. This is done either by making a selection of data categories using the ISOcat web interface, or by other tools which interact with the ISOcat system using one of its various Application Programming Interfaces. Linguistic resources that use data categories from the registry should include persistent references, e.g. in the metadata or schemata of the resource, which point back to their origin. These data category references can then be used to determine if two or more resources share common semantics, thus providing a level of interoperability close to the source data and a promising layer for semantic alignment on higher levels.
  • Kita, S. (2003). Pointing: A foundational building block in human communication. In S. Kita (Ed.), Pointing: Where language, culture, and cognition meet (pp. 1-8). Mahwah, NJ: Erlbaum.