Publications

  • Klein, W. (2013). L'effettivo declino e la crescita potenziale della lessicografia tedesca. In N. Maraschio, D. De Martiono, & G. Stanchina (Eds.), L'italiano dei vocabolari: Atti di La piazza delle lingue 2012 (pp. 11-20). Firenze: Accademia della Crusca.
  • Klein, W., & Schnell, R. (Eds.). (2008). Literaturwissenschaft und Linguistik [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (150).
  • Klein, W. (1986). Intonation und Satzmodalität in einfachen Fällen: Einige Beobachtungen. In E. Slembek (Ed.), Miteinander sprechen und handeln: Festschrift für Hellmut Geissner (pp. 161-177). Königstein Ts.: Scriptor.
  • Klein, W. (Ed.). (2008). Ist Schönheit messbar? [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, 152.
  • Klein, W. (2013). European Science Foundation (ESF) Project. In P. Robinson (Ed.), The Routledge encyclopedia of second language acquisition (pp. 220-221). New York: Routledge.
  • Klein, W. (Ed.). (1976). Psycholinguistik [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (23/24).
  • Klein, W. (1991). Seven trivia of language acquisition. In L. Eubank (Ed.), Point counterpoint: Universal grammar in the second language (pp. 49-70). Amsterdam: Benjamins.
  • Klein, W. (1991). SLA theory: Prolegomena to a theory of language acquisition and implications for Theoretical Linguistics. In T. Huebner, & C. Ferguson (Eds.), Crosscurrents in second language acquisition and linguistic theories (pp. 169-194). Amsterdam: Benjamins.
  • Klein, W. (Ed.). (1986). Sprachverfall [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (62).
  • Klein, W. (2013). Von Reichtum und Armut des deutschen Wortschatzes. In Deutsche Akademie für Sprache und Dichtung, & Union der deutschen Akademien der Wissenschaften (Eds.), Reichtum und Armut der deutschen Sprache (pp. 15-55). Boston: de Gruyter.
  • Kooijman, V., Johnson, E. K., & Cutler, A. (2008). Reflections on reflections of infant word recognition. In A. D. Friederici, & G. Thierry (Eds.), Early language development: Bridging brain and behaviour (pp. 91-114). Amsterdam: Benjamins.
  • Kristoffersen, J. H., Troelsgard, T., & Zwitserlood, I. (2013). Issues in sign language lexicography. In H. Jackson (Ed.), The Bloomsbury companion to lexicography (pp. 259-283). London: Bloomsbury.
  • Ladd, D. R., & Dediu, D. (2013). Genes and linguistic tone. In H. Pashler (Ed.), Encyclopedia of the mind (pp. 372-373). London: Sage Publications.

    Abstract

    It is usually assumed that the language spoken by a human community is independent of the community's genetic makeup, an assumption supported by an overwhelming amount of evidence. However, the possibility that language is influenced by its speakers' genes cannot be ruled out a priori, and a recently discovered correlation between the geographic distribution of tone languages and two human genes seems to point to a genetically influenced bias affecting language. This entry describes this specific correlation and highlights its major implications. Voice pitch has a variety of communicative functions. Some of these are probably universal, such as conveying information about the speaker's sex, age, and emotional state. In many languages, including the European languages, voice pitch also conveys certain sentence-level meanings such as signaling that an utterance is a question or an exclamation; these uses of pitch are known as intonation. Some languages, however, known as tone languages, ...
  • Lausberg, H., & Sloetjes, H. (2013). NEUROGES in combination with the annotation tool ELAN. In H. Lausberg (Ed.), Understanding body movement: A guide to empirical research on nonverbal behaviour with an introduction to the NEUROGES coding system (pp. 199-200). Frankfurt a/M: Lang.
  • Lenkiewicz, A., & Drude, S. (2013). Automatic annotation of linguistic 2D and Kinect recordings with the Media Query Language for Elan. In Proceedings of Digital Humanities 2013 (pp. 276-278).

    Abstract

    Research in body language using gesture recognition and speech analysis has gained much attention in recent times, influencing disciplines related to image and speech processing.

    This study aims to design the Media Query Language (MQL) (Lenkiewicz, et al. 2012) combined with the Linguistic Media Query Interface (LMQI) for Elan (Wittenburg, et al. 2006). The system, integrated with the new achievements in audio-video recognition, will allow querying media files with predefined gesture phases (or motion primitives) and speech characteristics as well as combinations of both. For the purpose of this work the predefined motions and speech characteristics are called patterns for atomic elements and actions for a sequence of patterns. The main assumption is that a user-customized library of patterns and actions and automated media annotation with LMQI will reduce annotation time, hence decreasing the cost of creating annotated corpora. Increasing the amount of annotated data should in turn increase the speed and scope of possible research in disciplines in which human multimodal interaction is a subject of interest and where annotated corpora are required.
  • Lenkiewicz, P., Pereira, M., Freire, M., & Fernandes, J. (2008). Accelerating 3D medical image segmentation with high performance computing. In Proceedings of the IEEE International Workshops on Image Processing Theory, Tools and Applications - IPT (pp. 1-8).

    Abstract

    Digital processing of medical images has helped physicians and patients in recent years by allowing examination and diagnosis on a very precise level. Today, perhaps its greatest contribution to modern healthcare is the use of high performance computing architectures to process the huge amounts of data that can be collected by modern acquisition devices. This paper presents a parallel processing implementation of an image segmentation algorithm that operates on a computer cluster equipped with 10 processing units. Thanks to a well-organized distribution of the workload, we significantly shorten the execution time of the developed algorithm and reach a performance gain very close to linear.
  • Levelt, W. J. M. (1976). Formal grammars and the natural language user: A review. In A. Marzollo (Ed.), Topics in artificial intelligence (pp. 226-290). Vienna: Springer.
  • Levelt, W. J. M. (1962). Motion breaking and the perception of causality. In A. Michotte (Ed.), Causalité, permanence et réalité phénoménales: Etudes de psychologie expérimentale (pp. 244-258). Louvain: Publications Universitaires.
  • Levelt, W. J. M., & Plomp, R. (1962). Musical consonance and critical bandwidth. In Proceedings of the 4th International Congress on Acoustics (pp. 55-55).
  • Levelt, W. J. M. (2004). Language. In G. Adelman, & B. H. Smith (Eds.), Elsevier's encyclopedia of neuroscience [CD-ROM] (3rd ed.). Amsterdam: Elsevier.
  • Levelt, W. J. M. (1986). Herdenking van Joseph Maria Franciscus Jaspars (16 maart 1934 - 31 juli 1985). In Jaarboek 1986 Koninklijke Nederlandse Akademie van Wetenschappen (pp. 187-189). Amsterdam: North Holland.
  • Levelt, W. J. M. (1991). Lexical access in speech production: Stages versus cascading. In H. Peters, W. Hulstijn, & C. Starkweather (Eds.), Speech motor control and stuttering (pp. 3-10). Amsterdam: Excerpta Medica.
  • Levelt, W. J. M., & Kempen, G. (1976). Taal. In J. Michon, E. Eijkman, & L. De Klerk (Eds.), Handboek der Psychonomie (pp. 492-523). Deventer: Van Loghum Slaterus.
  • Levelt, W. J. M. (2008). What has become of formal grammars in linguistics and psycholinguistics? [Postscript]. In Formal Grammars in linguistics and psycholinguistics (pp. 1-17). Amsterdam: John Benjamins.
  • Levelt, W. J. M. (1986). Zur sprachlichen Abbildung des Raumes: Deiktische und intrinsische Perspektive. In H. Bosshardt (Ed.), Perspektiven auf Sprache. Interdisziplinäre Beiträge zum Gedenken an Hans Hörmann (pp. 187-211). Berlin: De Gruyter.
  • Levinson, S. C. (2013). Action formation and ascription. In T. Stivers, & J. Sidnell (Eds.), The handbook of conversation analysis (pp. 103-130). Malden, MA: Wiley-Blackwell. doi:10.1002/9781118325001.ch6.

    Abstract

    Since the core matrix for language use is interaction, the main job of language is not to express propositions or abstract meanings, but to deliver actions. For in order to respond in interaction we have to ascribe to the prior turn a primary ‘action’ – variously thought of as an ‘illocution’, ‘speech act’, ‘move’, etc. – to which we then respond. The analysis of interaction also relies heavily on attributing actions to turns, so that, e.g., sequences can be characterized in terms of actions and responses. Yet the process of action ascription remains largely understudied. We do not know much about how it is done, when it is done, or even what kind of inventory of possible actions might exist, or the degree to which they are culturally variable. The study of action ascription remains perhaps the primary unfulfilled task in the study of language use, and it needs to be tackled from conversation-analytic, psycholinguistic, cross-linguistic and anthropological perspectives. In this chapter I try to take stock of what we know and derive a set of goals for and constraints on an adequate theory. Such a theory is likely to employ, I will suggest, a top-down plus bottom-up account of action perception, and a multi-level notion of action which may resolve some of the puzzles that have repeatedly arisen.
  • Levinson, S. C. (2013). Cross-cultural universals and communication structures. In M. A. Arbib (Ed.), Language, music, and the brain: A mysterious relationship (pp. 67-80). Cambridge, MA: MIT Press.

    Abstract

    Given the diversity of languages, it is unlikely that the human capacity for language resides in rich universal syntactic machinery. More likely, it resides centrally in the capacity for vocal learning combined with a distinctive ethology for communicative interaction, which together (no doubt with other capacities) make diverse languages learnable. This chapter focuses on face-to-face communication, which is characterized by the mapping of sounds and multimodal signals onto speech acts and which can be deeply recursively embedded in interaction structure, suggesting an interactive origin for complex syntax. These actions are recognized through Gricean intention recognition, which is a kind of “mirroring” or simulation distinct from the classic mirror neuron system. The multimodality of conversational interaction makes evident the involvement of body, hand, and mouth, where the burden on these can be shifted, as in the use of speech and gesture, or hands and face in sign languages. Such shifts have taken place during the course of human evolution. All this suggests a slightly different approach to the mystery of music, whose origins should also be sought in joint action, albeit with a shift from turn-taking to simultaneous expression, and with an affective quality that may tap ancient sources residual in primate vocalization. The deep connection of language to music can best be seen in the only universal form of music, namely song.
  • Levinson, S. C. (1991). Deixis. In W. Bright (Ed.), Oxford international encyclopedia of linguistics (pp. 343-344). Oxford University Press.
  • Levinson, S. C. (2004). Deixis. In L. Horn (Ed.), The handbook of pragmatics (pp. 97-121). Oxford: Blackwell.
  • Levinson, S. C., & Majid, A. (2008). Preface and priorities. In A. Majid (Ed.), Field manual volume 11 (pp. iii-iv). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Levinson, S. C., & Dediu, D. (2013). The interplay of genetic and cultural factors in ongoing language evolution. In P. J. Richerson, & M. H. Christiansen (Eds.), Cultural evolution: Society, technology, language, and religion. Strüngmann Forum Reports, vol. 12 (pp. 219-232). Cambridge, Mass: MIT Press.
  • Levinson, S. C., Bohnemeyer, J., & Enfield, N. J. (2008). Time and space questionnaire. In A. Majid (Ed.), Field Manual Volume 11 (pp. 42-49). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492955.

    Abstract

    This entry contains: 1. An invitation to think about to what extent the grammar of space and time share lexical and morphosyntactic resources − the suggestions here are only prompts, since it would take a long questionnaire to fully explore this; 2. A suggestion about how to collect gestural data that might show us to what extent the spatial and temporal domains have a psychological continuity. This is really the goal − but you need to do the linguistic work first or in addition. The goal of this task is to explore the extent to which time is conceptualised on a spatial basis.
  • Lindström, E. (2004). Melanesian kinship and culture. In A. Majid (Ed.), Field Manual Volume 9 (pp. 70-73). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.1552190.
  • Lucas, C., Griffiths, T., Xu, F., & Fawcett, C. (2008). A rational model of preference learning and choice prediction by children. In D. Koller, Y. Bengio, D. Schuurmans, L. Bottou, & A. Culotta (Eds.), Advances in Neural Information Processing Systems.

    Abstract

    Young children demonstrate the ability to make inferences about the preferences of other agents based on their choices. However, there exists no overarching account of what children are doing when they learn about preferences or how they use that knowledge. We use a rational model of preference learning, drawing on ideas from economics and computer science, to explain the behavior of children in several recent experiments. Specifically, we show how a simple econometric model can be extended to capture two- to four-year-olds’ use of statistical information in inferring preferences, and their generalization of these preferences.
  • Magyari, L., & De Ruiter, J. P. (2008). Timing in conversation: The anticipation of turn endings. In J. Ginzburg, P. Healey, & Y. Sato (Eds.), Proceedings of the 12th Workshop on the Semantics and Pragmatics of Dialogue (pp. 139-146). London: King's College.

    Abstract

    We examined how communicators can switch between the speaker and listener role with such accurate timing. During conversations, the majority of role transitions happen with a gap or overlap of only a few hundred milliseconds. This suggests that listeners can predict when the turn of the current speaker is going to end. Our hypothesis is that listeners know when a turn ends because they know how it ends. Anticipating the last words of a turn can help the next speaker in predicting when the turn will end, and also in anticipating the content of the turn, so that an appropriate response can be prepared in advance. We used the stimulus materials of an earlier experiment (De Ruiter, Mitterer & Enfield, 2006), in which subjects listened to turns from natural conversations and had to press a button exactly when the turn they were listening to ended. In the present experiment, we investigated whether subjects can complete those turns when only an initial fragment of the turn is presented to them. We found that subjects made better predictions about the last words of those turns that had received more accurate responses in the earlier button-press experiment.
  • Magyari, L. (2008). A mentális lexikon modelljei és a magyar nyelv (Models of mental lexicon and the Hungarian language). In J. Gervain, & C. Pléh (Eds.), A láthatatlan nyelv (Invisible Language). Budapest: Gondolat Kiadó.
  • Majid, A., van Leeuwen, T., & Dingemanse, M. (2008). Synaesthesia: A cross-cultural pilot. In A. Majid (Ed.), Field manual volume 11 (pp. 37-41). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492960.

    Abstract

    This Field Manual entry has been superseded by the 2009 version:
    https://doi.org/10.17617/2.883570

  • Majid, A., Van Staden, M., & Enfield, N. J. (2004). The human body in cognition, brain, and typology. In K. Hovie (Ed.), Forum Handbook, 4th International Forum on Language, Brain, and Cognition - Cognition, Brain, and Typology: Toward a Synthesis (pp. 31-35). Sendai: Tohoku University.

    Abstract

    The human body is unique: it is both an object of perception and the source of human experience. Its universality makes it a perfect resource for asking questions about how cognition, brain and typology relate to one another. For example, we can ask how speakers of different languages segment and categorize the human body. A dominant view is that body parts are “given” by visual perceptual discontinuities, and that words are merely labels for these visually determined parts (e.g., Andersen, 1978; Brown, 1976; Lakoff, 1987). However, there are problems with this view. First, it ignores other perceptual information, such as somatosensory and motoric representations. By looking at the neural representations of sensory information, we can test how much of the categorization of the human body can be done through perception alone. Second, we can look at language typology to see how much universality and variation there is in body-part categories. A comparison of a range of typologically, genetically and areally diverse languages shows that the perceptual view has only limited applicability (Majid, Enfield & van Staden, in press). For example, using a “coloring-in” task, where speakers of seven different languages were given a line drawing of a human body and asked to color in various body parts, Majid & van Staden (in prep) show that languages vary substantially in body part segmentation. For example, Jahai (Mon-Khmer) makes a lexical distinction between upper arm, lower arm, and hand, but Lavukaleve (Papuan Isolate) has just one word to refer to arm, hand, and leg. This shows that body part categorization is not a straightforward mapping of words to visually determined perceptual parts.
  • Majid, A., Van Staden, M., Boster, J. S., & Bowerman, M. (2004). Event categorization: A cross-linguistic perspective. In K. Forbus, D. Gentner, & T. Tegier (Eds.), Proceedings of the 26th Annual Meeting of the Cognitive Science Society (pp. 885-890). Mahwah, NJ: Erlbaum.

    Abstract

    Many studies in cognitive science address how people categorize objects, but there has been comparatively little research on event categorization. This study investigated the categorization of events involving material destruction, such as “cutting” and “breaking”. Speakers of 28 typologically, genetically, and areally diverse languages described events shown in a set of video-clips. There was considerable cross-linguistic agreement in the dimensions along which the events were distinguished, but there was variation in the number of categories and the placement of their boundaries.
  • Majid, A. (2008). Focal colours. In A. Majid (Ed.), Field Manual Volume 11 (pp. 8-10). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492958.

    Abstract

    In this task we aim to find what the best exemplars or “focal colours” of each basic colour term is in our field languages. This is an important part of the evidence we need in order to understand the colour data collected using 'The Language of Vision I: Colour'. This task consists of an experiment where participants pick out the best exemplar for the colour terms in their language. The goal is to establish language specific focal colours.
  • Majid, A. (2013). Olfactory language and cognition. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th annual meeting of the Cognitive Science Society (CogSci 2013) (p. 68). Austin, TX: Cognitive Science Society. Retrieved from http://mindmodeling.org/cogsci2013/papers/0025/index.html.

    Abstract

    Since the cognitive revolution, a widely held assumption has been that—whereas content may vary across cultures—cognitive processes would be universal, especially those on the more basic levels. Even if scholars do not fully subscribe to this assumption, they often conceptualize, or tend to investigate, cognition as if it were universal (Henrich, Heine, & Norenzayan, 2010). The insight that universality must not be presupposed but scrutinized is now gaining ground, and cognitive diversity has become one of the hot (and controversial) topics in the field (Norenzayan & Heine, 2005). We argue that, for scrutinizing the cultural dimension of cognition, taking an anthropological perspective is invaluable, not only for the task itself, but for attenuating the home-field disadvantages that are inescapably linked to cross-cultural research (Medin, Bennis, & Chandler, 2010).
  • Majid, A. (2013). Psycholinguistics. In J. L. Jackson (Ed.), Oxford Bibliographies Online: Anthropology. Oxford: Oxford University Press.
  • Matsuo, A. (2004). Young children's understanding of ongoing vs. completion in present and perfective participles. In J. v. Kampen, & S. Baauw (Eds.), Proceedings of GALA 2003 (pp. 305-316). Utrecht: Netherlands Graduate School of Linguistics (LOT).
  • McCafferty, S. G., & Gullberg, M. (Eds.). (2008). Gesture and SLA: Toward an integrated approach [Special Issue]. Studies in Second Language Acquisition, 30(2).
  • Meyer, A. S. (2004). The use of eye tracking in studies of sentence generation. In J. M. Henderson, & F. Ferreira (Eds.), The interface of language, vision, and action: Eye movements and the visual world (pp. 191-212). Hove: Psychology Press.
  • Mishra, R. K., Olivers, C. N. L., & Huettig, F. (2013). Spoken language and the decision to move the eyes: To what extent are language-mediated eye movements automatic? In V. S. C. Pammi, & N. Srinivasan (Eds.), Progress in Brain Research: Decision making: Neural and behavioural approaches (pp. 135-149). New York: Elsevier.

    Abstract

    Recent eye-tracking research has revealed that spoken language can guide eye gaze very rapidly (and closely time-locked to the unfolding speech) toward referents in the visual world. We discuss whether, and to what extent, such language-mediated eye movements are automatic rather than subject to conscious and controlled decision-making. We consider whether language-mediated eye movements adhere to four main criteria of automatic behavior, namely, whether they are fast and efficient, unintentional, unconscious, and overlearned (i.e., arrived at through extensive practice). Current evidence indicates that language-driven oculomotor behavior is fast but not necessarily always efficient. It seems largely unintentional though there is also some evidence that participants can actively use the information in working memory to avoid distraction in search. Language-mediated eye movements appear to be for the most part unconscious and have all the hallmarks of an overlearned behavior. These data are suggestive of automatic mechanisms linking language to potentially referred-to visual objects, but more comprehensive and rigorous testing of this hypothesis is needed.
  • Mitterer, H. (2008). How are words reduced in spontaneous speech? In A. Botonis (Ed.), Proceedings of ISCA Tutorial and Research Workshop On Experimental Linguistics (pp. 165-168). Athens: University of Athens.

    Abstract

    Words are reduced in spontaneous speech. If reductions are constrained by functional (i.e., perception and production) constraints, they should not be arbitrary. This hypothesis was tested by examining the pronunciations of high- to mid-frequency words in a Dutch and a German spontaneous speech corpus. In logistic-regression models the "reduction likelihood" of a phoneme was predicted by fixed-effect predictors such as position within the word, word length, word frequency, and stress, as well as random effects such as phoneme identity and word. The models for Dutch and German show many commonalities. This is in line with the assumption that similar functional constraints influence reductions in both languages.
  • Narasimhan, B., Bowerman, M., Brown, P., Eisenbeiss, S., & Slobin, D. I. (2004). "Putting things in places": Effekte linguistischer Typologie auf die Sprachentwicklung. In G. Plehn (Ed.), Jahrbuch der Max-Planck Gesellschaft (pp. 659-663). Göttingen: Vandenhoeck & Ruprecht.
  • Neijt, A., Schreuder, R., & Baayen, R. H. (2004). Seven years later: The effect of spelling on interpretation. In L. Cornips, & J. Doetjes (Eds.), Linguistics in the Netherlands 2004 (pp. 134-145). Amsterdam: Benjamins.
  • O'Connor, L. (2004). Going getting tired: Associated motion through space and time in Lowland Chontal. In M. Achard, & S. Kemmer (Eds.), Language, culture and mind (pp. 181-199). Stanford: CSLI.
  • Ortega, G., & Ozyurek, A. (2013). Gesture-sign interface in hearing non-signers' first exposure to sign. In Proceedings of the Tilburg Gesture Research Meeting [TiGeR 2013].

    Abstract

    Natural sign languages and gestures are complex communicative systems that allow the incorporation of features of a referent into their structure. They differ, however, in that signs are more conventionalised because they consist of meaningless phonological parameters. There is some evidence that, although non-signers find iconic signs more memorable, they can have more difficulty articulating their exact phonological components. In the present study, hearing non-signers took part in a sign repetition task in which they had to imitate as accurately as possible a set of iconic and arbitrary signs. Their renditions showed that iconic signs were articulated significantly less accurately than arbitrary signs. Participants were recalled six months later to take part in a sign generation task. In this task, participants were shown the English translation of the iconic signs they had imitated six months prior. For each word, participants were asked to generate a sign (i.e., an iconic gesture). The handshapes produced in the sign repetition and sign generation tasks were compared to detect instances in which both renditions presented the same configuration. There was a significant correlation between articulation accuracy in the sign repetition task and handshape overlap. These results suggest some form of gestural interference in the production of iconic signs by hearing non-signers. We also suggest that in some instances non-signers may deploy their own conventionalised gesture when producing some iconic signs. These findings are interpreted as evidence that non-signers process iconic signs as gestures and that in production, only when sign and gesture have overlapping features will they be capable of producing the phonological components of signs accurately.
  • Osswald, R., & Van Valin Jr., R. D. (2013). FrameNet, frame structure and the syntax-semantics interface. In T. Gamerschlag, D. Gerland, R. Osswald, & W. Petersen (Eds.), Frames and concept types: Applications in language and philosophy. Heidelberg: Springer.
  • Ozturk, O., & Papafragou, A. (2008). Acquisition of evidentiality and source monitoring. In H. Chan, H. Jacob, & E. Kapia (Eds.), Proceedings from the 32nd Annual Boston University Conference on Language Development [BUCLD 32] (pp. 368-377). Somerville, Mass.: Cascadilla Press.
  • Peeters, D., Chu, M., Holler, J., Ozyurek, A., & Hagoort, P. (2013). Getting to the point: The influence of communicative intent on the kinematics of pointing gestures. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 1127-1132). Austin, TX: Cognitive Science Society.

    Abstract

    In everyday communication, people not only use speech but also hand gestures to convey information. One intriguing question in gesture research has been why gestures take the specific form they do. Previous research has identified the speaker-gesturer’s communicative intent as one factor shaping the form of iconic gestures. Here we investigate whether communicative intent also shapes the form of pointing gestures. In an experimental setting, twenty-four participants produced pointing gestures identifying a referent for an addressee. The communicative intent of the speaker-gesturer was manipulated by varying the informativeness of the pointing gesture. A second independent variable was the presence or absence of concurrent speech. As a function of their communicative intent and irrespective of the presence of speech, participants varied the durations of the stroke and the post-stroke hold-phase of their gesture. These findings add to our understanding of how the communicative context influences the form that a gesture takes.
  • Perniss, P. M., & Ozyurek, A. (2008). Representations of action, motion and location in sign space: A comparison of German (DGS) and Turkish (TID) sign language narratives. In J. Quer (Ed.), Signs of the time: Selected papers from TISLR 8 (pp. 353-376). Seedorf: Signum Press.
  • Perniss, P. M., & Zeshan, U. (2008). Possessive and existential constructions in Kata Kolok (Bali). In Possessive and existential constructions in sign languages. Nijmegen: Ishara Press.
  • Perniss, P. M., & Zeshan, U. (2008). Possessive and existential constructions: Introduction and overview. In Possessive and existential constructions in sign languages (pp. 1-31). Nijmegen: Ishara Press.
  • Petersson, K. M. (2008). On cognition, structured sequence processing, and adaptive dynamical systems. American Institute of Physics Conference Proceedings, 1060(1), 195-200.

    Abstract

    Cognitive neuroscience approaches the brain as a cognitive system: a system that functionally is conceptualized in terms of information processing. We outline some aspects of this concept and consider a physical system to be an information processing device when a subclass of its physical states can be viewed as representational/cognitive and transitions between these can be conceptualized as a process operating on these states by implementing operations on the corresponding representational structures. We identify a generic and fundamental problem in cognition: sequentially organized structured processing. Structured sequence processing provides the brain, in an essential sense, with its processing logic. In an approach addressing this problem, we illustrate how to integrate levels of analysis within a framework of adaptive dynamical systems. We note that the dynamical system framework lends itself to a description of asynchronous event-driven devices, which is likely to be important in cognition because the brain appears to be an asynchronous processing system. We use the human language faculty and natural language processing as a concrete example throughout.
  • Piai, V., Roelofs, A., Jensen, O., Schoffelen, J.-M., & Bonnefond, M. (2013). Distinct patterns of brain activity characterize lexical activation and competition in speech production [Abstract]. Journal of Cognitive Neuroscience, 25 Suppl., 106.

    Abstract

    A fundamental ability of speakers is to quickly retrieve words from long-term memory. According to a prominent theory, concepts activate multiple associated words, which enter into competition for selection. Previous electrophysiological studies have provided evidence for the activation of multiple alternative words, but did not identify brain responses reflecting competition. We report a magnetoencephalography study examining the timing and neural substrates of lexical activation and competition. The degree of activation of competing words was manipulated by presenting pictures (e.g., dog) simultaneously with distractor words. The distractors were semantically related to the picture name (cat), unrelated (pin), or identical (dog). Semantic distractors are stronger competitors to the picture name, because they receive additional activation from the picture, whereas unrelated distractors do not. Picture naming times were longer with semantic than with unrelated and identical distractors. The patterns of phase-locked and non-phase-locked activity were distinct but temporally overlapping. Phase-locked activity in left middle temporal gyrus, peaking at 400 ms, was larger on unrelated than on semantic and identical trials, suggesting differential effort in processing the alternative words activated by the picture-word stimuli. Non-phase-locked activity in the 4-10 Hz range between 400 and 650 ms in left superior frontal gyrus was larger on semantic than on unrelated and identical trials, suggesting different degrees of effort in resolving the competition among the alternative words, as reflected in the naming times. These findings characterize distinct patterns of brain activity associated with lexical activation and competition, respectively, and their temporal relation, supporting the theory that words are selected by competition.
  • Poletiek, F. H., & Stolker, C. J. J. M. (2004). Who decides the worth of an arm and a leg? Assessing the monetary value of nonmonetary damage. In E. Kurz-Milcke, & G. Gigerenzer (Eds.), Experts in science and society (pp. 201-213). New York: Kluwer Academic/Plenum Publishers.
  • Randall, J., Van Hout, A., Weissenborn, J., & Baayen, R. H. (2004). Acquiring unaccusativity: A cross-linguistic look. In A. Alexiadou (Ed.), The unaccusativity puzzle (pp. 332-353). Oxford: Oxford University Press.
  • Ravignani, A., Gingras, B., Asano, R., Sonnweber, R., Matellan, V., & Fitch, W. T. (2013). The evolution of rhythmic cognition: New perspectives and technologies in comparative research. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Conference of the Cognitive Science Society (pp. 1199-1204). Austin, TX: Cognitive Science Society.

    Abstract

    Music is a pervasive phenomenon in human culture, and musical rhythm is present in virtually all musical traditions. Research on the evolution and cognitive underpinnings of rhythm can benefit from a number of approaches. We outline key concepts and definitions, allowing fine-grained analysis of rhythmic cognition in experimental studies. We advocate comparative animal research as a useful approach to answer questions about human music cognition and review experimental evidence from different species. Finally, we suggest future directions for research on the cognitive basis of rhythm. Apart from research in semi-natural setups, possibly allowed by “drum set for chimpanzees” prototypes presented here for the first time, mathematical modeling and systematic use of circular statistics may allow promising advances.
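    The circular statistics the abstract points to can be illustrated with a minimal sketch (the onset times, the candidate beat period, and the function name below are invented for illustration, not taken from the paper). Each event onset is mapped to a phase on the circle for a candidate beat period, and the mean resultant length R summarizes how tightly the onsets cluster at one phase:

```python
import math

def circular_mean_resultant(onsets, period):
    """Map event onsets (seconds) to phases on the circle for a given
    beat period, then return the mean resultant length R in [0, 1].
    R near 1: onsets locked to one phase (strong periodicity);
    R near 0: phases spread uniformly (no periodicity at this period)."""
    phases = [2.0 * math.pi * ((t % period) / period) for t in onsets]
    n = len(phases)
    c = sum(math.cos(p) for p in phases) / n
    s = sum(math.sin(p) for p in phases) / n
    return math.hypot(c, s)

# Perfectly periodic "drumming": every onset at the same phase -> R == 1
regular = [0.0, 0.5, 1.0, 1.5, 2.0]
# Jittered onsets -> R < 1
jittered = [0.0, 0.55, 0.93, 1.58, 2.02]
print(round(circular_mean_resultant(regular, 0.5), 3))   # 1.0
print(circular_mean_resultant(jittered, 0.5) < 1.0)      # True
```

    Comparing R across candidate periods is one simple way to score the periodicity of recorded tapping or drumming data.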
  • Razafindrazaka, H., & Brucato, N. (2008). Esclavage et diaspora Africaine. In É. Crubézy, J. Braga, & G. Larrouy (Eds.), Anthropobiologie: Évolution humaine (pp. 326-328). Issy-les-Moulineaux: Elsevier Masson.
  • Razafindrazaka, H., Brucato, N., & Mazières, S. (2008). Les Noirs marrons. In É. Crubézy, J. Braga, & G. Larrouy (Eds.), Anthropobiologie: Évolution humaine (pp. 319-320). Issy-les-Moulineaux: Elsevier Masson.
  • Reesink, G. (2004). Interclausal relations. In G. Booij (Ed.), Morphologie / morphology (pp. 1202-1207). Berlin: Mouton de Gruyter.
  • Reinisch, E., Jesse, A., & McQueen, J. M. (2008). The strength of stress-related lexical competition depends on the presence of first-syllable stress. In Proceedings of Interspeech 2008 (pp. 1954-1954).

    Abstract

    Dutch listeners' looks to printed words were tracked while they listened to instructions to click with their mouse on one of them. When presented with targets from word pairs where the first two syllables were segmentally identical but differed in stress location, listeners used stress information to recognize the target before segmental information disambiguated the words. Furthermore, the amount of lexical competition was influenced by the presence or absence of word-initial stress.
  • Reinisch, E., Jesse, A., & McQueen, J. M. (2008). Lexical stress information modulates the time-course of spoken-word recognition. In Proceedings of Acoustics' 08 (pp. 3183-3188).

    Abstract

    Segmental as well as suprasegmental information is used by Dutch listeners to recognize words. The time-course of the effect of suprasegmental stress information on spoken-word recognition was investigated in a previous study, in which we tracked Dutch listeners' looks to arrays of four printed words as they listened to spoken sentences. Each target was displayed along with a competitor that did not differ segmentally in its first two syllables but differed in stress placement (e.g., 'CENtimeter' and 'sentiMENT'). The listeners' eye-movements showed that stress information is used to recognize the target before distinct segmental information is available. Here, we examine the role of durational information in this effect. Two experiments showed that initial-syllable duration, as a cue to lexical stress, is not interpreted dependent on the speaking rate of the preceding carrier sentence. This still held when other stress cues like pitch and amplitude were removed. Rather, the speaking rate of the preceding carrier affected the speed of word recognition globally, even though the rate of the target itself was not altered. Stress information modulated lexical competition, but did so independently of the rate of the preceding carrier, even if duration was the only stress cue present.
  • Roberts, L. (2008). Processing temporal constraints and some implications for the investigation of second language sentence processing and acquisition. Commentary on Baggio. In P. Indefrey, & M. Gullberg (Eds.), Time to speak: Cognitive and neural prerequisites for time in language (pp. 57-61). Oxford: Blackwell.
  • Roberts, L. (2013). Discourse processing. In P. Robinson (Ed.), The Routledge encyclopedia of second language acquisition (pp. 190-194). New York: Routledge.
  • Roberts, S. G. (2013). A Bottom-up approach to the cultural evolution of bilingualism. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 1229-1234). Austin, TX: Cognitive Science Society. Retrieved from http://mindmodeling.org/cogsci2013/papers/0236/index.html.

    Abstract

    The relationship between individual cognition and cultural phenomena at the society level can be transformed by cultural transmission (Kirby, Dowman, & Griffiths, 2007). Top-down models of this process have typically assumed that individuals only adopt a single linguistic trait. Recent extensions include ‘bilingual’ agents, able to adopt multiple linguistic traits (Burkett & Griffiths, 2010). However, bilingualism is more than variation within an individual: it involves the conditional use of variation with different interlocutors. That is, bilingualism is a property of a population that emerges from use. A bottom-up simulation is presented where learners are sensitive to the identity of other speakers. The simulation reveals that dynamic social structures are a key factor for the evolution of bilingualism in a population, a feature that was abstracted away in the top-down models. Top-down and bottom-up approaches may lead to different answers, but can work together to reveal and explore important features of the cultural transmission process.
  • Roberts, L. (2013). Sentence processing in bilinguals. In R. Van Gompel (Ed.), Sentence processing. London: Psychology Press.
  • Robotham, L., Trinkler, I., & Sauter, D. (2008). The power of positives: Evidence for an overall emotional recognition deficit in Huntington's disease [Abstract]. Journal of Neurology, Neurosurgery & Psychiatry, 79, A12.

    Abstract

    The recognition of the emotions of disgust, anger and fear has been shown to be significantly impaired in Huntington’s disease (e.g., Sprengelmeyer et al., 1997, 2006; Gray et al., 1997; Milders et al., 2003; Montagne et al., 2006; Johnson et al., 2007; De Gelder et al., 2008). The relative impairment of these emotions might have implied a recognition impairment specific to negative emotions. Could the asymmetric recognition deficits be due not to the complexity of the emotion but rather reflect the complexity of the task? In the current study, 15 Huntington’s patients and 16 control subjects were presented with negative and positive non-speech emotional vocalisations that were to be identified as anger, fear, sadness, disgust, achievement, pleasure and amusement in a forced-choice paradigm. This experiment more accurately matched the negative emotions with positive emotions in a homogeneous modality. The resulting dually impaired ability of Huntington’s patients to identify negative and positive non-speech emotional vocalisations correctly provides evidence for an overall emotional recognition deficit in the disease. These results indicate that previous findings of specificity in emotional recognition deficits might instead be due to the limitations of the visual modality. Previous experiments may have found an effect of emotional specificity due to the presence of a single positive emotion, happiness, in the midst of multiple negative emotions. In contrast with the previous literature, the study presented here points to a global deficit in the recognition of emotional sounds.
  • Roelofs, A. (2004). The seduced speaker: Modeling of cognitive control. In A. Belz, R. Evans, & P. Piwek (Eds.), Natural language generation (pp. 1-10). Berlin: Springer.

    Abstract

    Although humans are the ultimate “natural language generators”, the area of psycholinguistic modeling has been somewhat underrepresented in recent approaches to Natural Language Generation in computer science. To draw attention to the area and illustrate its potential relevance to Natural Language Generation, I provide an overview of recent work on psycholinguistic modeling of language production together with some key empirical findings, state-of-the-art experimental techniques, and their historical roots. The techniques include analyses of speech-error corpora, chronometric analyses, eyetracking, and neuroimaging.
    The overview is built around the issue of cognitive control in natural language generation, concentrating on the production of single words, which is an essential ingredient of the generation of larger utterances. Most of the work exploited the fact that human speakers are good but not perfect at resisting temptation, which has provided some critical clues about the nature of the underlying system.
  • Roelofs, A., & Schiller, N. (2004). Produzieren von Ein- und Mehrwortäusserungen. In G. Plehn (Ed.), Jahrbuch der Max-Planck Gesellschaft (pp. 655-658). Göttingen: Vandenhoeck & Ruprecht.
  • Rossano, F. (2013). Gaze in conversation. In J. Sidnell, & T. Stivers (Eds.), The handbook of conversation analysis (pp. 308-329). Malden, MA: Wiley-Blackwell. doi:10.1002/9781118325001.ch15.

    Abstract

    This chapter contains sections titled: Introduction; Background: The Gaze “Machinery”; Gaze “Machinery” in Social Interaction; Future Directions.
  • De Ruiter, J. P. (2004). On the primacy of language in multimodal communication. In Workshop Proceedings on Multimodal Corpora: Models of Human Behaviour for the Specification and Evaluation of Multimodal Input and Output Interfaces (LREC 2004) (pp. 38-41). Paris: ELRA - European Language Resources Association (CD-ROM).

    Abstract

    In this paper, I will argue that although the study of multimodal interaction offers exciting new prospects for Human Computer Interaction and human-human communication research, language is the primary form of communication, even in multimodal systems. I will support this claim with theoretical and empirical arguments, mainly drawn from human-human communication research, and will discuss the implications for multimodal communication research and Human-Computer Interaction.
  • De Ruiter, L. E. (2008). How useful are polynomials for analyzing intonation? In Proceedings of Interspeech 2008 (pp. 785-789).

    Abstract

    This paper presents the first application to German data of polynomial modeling as a means of validating phonological pitch accent labels. It is compared to traditional phonetic analysis (measuring minima, maxima, alignment). The traditional method fares better in classification, but results are comparable in statistical accent pair testing. Robustness tests show that pitch correction is necessary in both cases. The approaches are discussed in terms of their practicability, applicability to other domains of research and interpretability of their results.
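    As a sketch of what polynomial modeling of a pitch contour involves (a minimal illustration assuming numpy; the synthetic rise-fall contour, sample times, and polynomial order are invented for illustration, not the paper's settings):

```python
import numpy as np

# Synthetic F0 samples (Hz) over a 200 ms accent region, 10 ms steps
t = np.linspace(0.0, 0.2, 21)
f0 = 180.0 + 120.0 * np.exp(-((t - 0.08) / 0.04) ** 2)  # rise-fall shape

# Fit a third-order polynomial; its four coefficients compactly
# describe the contour's shape and can be compared across accent labels
coeffs = np.polyfit(t, f0, deg=3)
fitted = np.polyval(coeffs, t)

# RMS error (Hz) indicates how well the polynomial captures the contour
rmse = float(np.sqrt(np.mean((f0 - fitted) ** 2)))
print("coefficients:", np.round(coeffs, 1))
print("RMSE (Hz):", round(rmse, 2))
```

    Classifying accent types from the fitted coefficients, rather than from hand-measured minima, maxima, and alignment, is the kind of contrast the paper evaluates.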
  • De Ruiter, J. P. (2004). Response systems and signals of recipiency. In A. Majid (Ed.), Field Manual Volume 9 (pp. 53-55). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.506961.

    Abstract

    Listeners’ signals of recipiency, such as “Mm-hm” or “uh-huh” in English, are the most elementary or minimal “conversational turns” possible. Minimal, because apart from acknowledging recipiency and inviting the speaker to continue with his/her next turn, they do not add any new information to the discourse of the conversation. The goal of this project is to gather cross-cultural information on listeners’ feedback behaviour during conversation. Listeners in a conversation usually provide short signals that indicate to the speaker that they are still “with the speaker”. These signals could be verbal (like for instance “mm hm” in English or “hm hm” in Dutch) or nonverbal (visual), like nodding. Often, these signals are produced in overlap with the speaker’s vocalisation. If listeners do not produce these signals, speakers often invite them explicitly (e.g. “are you still there?” in a telephone conversation). Our goal is to investigate what kind of signals are used by listeners of different languages to signal “recipiency” to the speaker.
  • Rumsey, A., San Roque, L., & Schieffelin, B. (2013). The acquisition of ergative marking in Kaluli, Ku Waru and Duna (Trans New Guinea). In E. L. Bavin, & S. Stoll (Eds.), The acquisition of ergativity (pp. 133-182). Amsterdam: Benjamins.

    Abstract

    In this chapter we present material on the acquisition of ergative marking on noun phrases in three languages of Papua New Guinea: Kaluli, Ku Waru, and Duna. The expression of ergativity in all the languages is broadly similar, but sensitive to language-specific features, and this pattern of similarity and difference is reflected in the available acquisition data. Children acquire adult-like ergative marking at about the same pace, reaching similar levels of mastery by 3;00 despite considerable differences in morphological complexity of ergative marking among the languages. What may be more important – as a factor in accounting for the relative uniformity of acquisition in this respect – are the similarities in patterns of interactional scaffolding that emerge from a comparison of the three cases.
  • Sauppe, S., Norcliffe, E., Konopka, A. E., Van Valin Jr., R. D., & Levinson, S. C. (2013). Dependencies first: Eye tracking evidence from sentence production in Tagalog. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 1265-1270). Austin, TX: Cognitive Science Society.

    Abstract

    We investigated the time course of sentence formulation in Tagalog, a verb-initial language in which the verb obligatorily agrees with one of its arguments. Eye-tracked participants described pictures of transitive events. Fixations to the two characters in the events were compared across sentences differing in agreement marking and post-verbal word order. Fixation patterns show evidence for two temporally dissociated phases in Tagalog sentence production. The first, driven by verb agreement, involves early linking of concepts to syntactic functions; the second, driven by word order, involves incremental lexical encoding of these concepts. These results suggest that even the earliest stages of sentence formulation may be guided by a language's grammatical structure.
  • Sauter, D., Scott, S., & Calder, A. (2004). Categorisation of vocally expressed positive emotion: A first step towards basic positive emotions? [Abstract]. Proceedings of the British Psychological Society, 12, 111.

    Abstract

    Most of the study of basic emotion expressions has focused on facial expressions, and little work has been done to specifically investigate happiness, the only positive one of the basic emotions (Ekman & Friesen, 1971). However, a theoretical suggestion has been made that happiness could be broken down into discrete positive emotions, which each fulfil the criteria of basic emotions, and that these would be expressed vocally (Ekman, 1992). To empirically test this hypothesis, 20 participants categorised 80 paralinguistic sounds using the labels achievement, amusement, contentment, pleasure and relief. The results suggest that achievement, amusement and relief are perceived as distinct categories, which subjects accurately identify. In contrast, the categories of contentment and pleasure were systematically confused with other responses, although performance was still well above chance levels. These findings are initial evidence that the positive emotions engage distinct vocal expressions and may be considered to be distinct emotion categories.
  • Sauter, D., Eisner, F., Rosen, S., & Scott, S. K. (2008). The role of source and filter cues in emotion recognition in speech [Abstract]. Journal of the Acoustical Society of America, 123, 3739-3740.

    Abstract

    In the context of the source-filter theory of speech, it is well established that intelligibility is heavily reliant on information carried by the filter, that is, spectral cues (e.g., Faulkner et al., 2001; Shannon et al., 1995). However, the extraction of other types of information in the speech signal, such as emotion and identity, is less well understood. In this study we investigated the extent to which emotion recognition in speech depends on filter-dependent cues, using a forced-choice emotion identification task at ten levels of noise-vocoding ranging between one and 32 channels. In addition, participants performed a speech intelligibility task with the same stimuli. Our results indicate that compared to speech intelligibility, emotion recognition relies less on spectral information and more on cues typically signaled by source variations, such as voice pitch, voice quality, and intensity. We suggest that, while the reliance on spectral dynamics is likely a unique aspect of human speech, greater phylogenetic continuity across species may be found in the communication of affect in vocalizations.
  • Sauter, D. (2008). The time-course of emotional voice processing [Abstract]. Neurocase, 14, 455-455.

    Abstract

    Research using event-related brain potentials (ERPs) has demonstrated an early differential effect in fronto-central regions when processing emotional, as compared to affectively neutral facial stimuli (e.g., Eimer & Holmes, 2002). In this talk, data demonstrating a similar effect in the auditory domain will be presented. ERPs were recorded in a one-back task where participants had to identify immediate repetitions of emotion category, such as a fearful sound followed by another fearful sound. The stimulus set consisted of non-verbal emotional vocalisations communicating positive and negative sounds, as well as neutral baseline conditions. Similarly to the facial domain, fear sounds as compared to acoustically controlled neutral sounds, elicited a frontally distributed positivity with an onset latency of about 150 ms after stimulus onset. These data suggest the existence of a rapid multi-modal frontocentral mechanism discriminating emotional from non-emotional human signals.
  • Scharenborg, O., & Janse, E. (2013). Changes in the role of intensity as a cue for fricative categorisation. In Proceedings of INTERSPEECH 2013: 14th Annual Conference of the International Speech Communication Association (pp. 3147-3151).

    Abstract

    Older listeners with high-frequency hearing loss rely more on intensity for categorisation of /s/ than normal-hearing older listeners. This study addresses the question whether this increased reliance comes about immediately when the need arises, i.e., in the face of a spectrally-degraded signal. A phonetic categorisation task was carried out using intensity-modulated fricatives in a clean and a low-pass filtered condition with two younger and two older listener groups. When high-frequency information was removed from the speech signal, younger listeners started using intensity as a cue. The older adults, on the other hand, when presented with the low-pass filtered speech, did not rely on intensity differences for fricative identification. These results suggest that the reliance on intensity shown by the older hearing-impaired adults may have been acquired only gradually with longer exposure to a degraded speech signal.
  • Scharenborg, O., & Cooke, M. P. (2008). Comparing human and machine recognition performance on a VCV corpus. In ISCA Tutorial and Research Workshop (ITRW) on "Speech Analysis and Processing for Knowledge Discovery".

    Abstract

    Listeners outperform ASR systems in every speech recognition task. However, what is not clear is where this human advantage originates. This paper investigates the role of acoustic feature representations. We test four acoustic representations (MFCCs, PLPs, Mel filterbanks, Rate maps), with and without ‘pitch’ information, using the same backend. The results are compared with listener results at the level of articulatory feature classification. While no acoustic feature representation reached the levels of human performance, both MFCCs and Rate maps achieved good scores, with Rate maps nearing human performance on the classification of voicing. Comparing the results on the most difficult articulatory features to classify showed similarities between the humans and the SVMs: e.g., ‘dental’ was by far the least well identified by both groups. Overall, adding pitch information seemed to hamper classification performance.
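    The MFCC and Mel-filterbank representations compared above rest on the mel frequency scale. A minimal sketch of that ingredient (the conversion formulas are the standard ones used in MFCC front ends, but the filter count and frequency range are arbitrary illustrative choices, not the paper's settings):

```python
import math

def hz_to_mel(f):
    # Standard formula used in MFCC front ends
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_center_freqs(n_filters, f_min, f_max):
    """Center frequencies (Hz) of triangular filters spaced evenly
    on the mel scale, as in a Mel filterbank."""
    lo, hi = hz_to_mel(f_min), hz_to_mel(f_max)
    step = (hi - lo) / (n_filters + 1)
    return [mel_to_hz(lo + step * (i + 1)) for i in range(n_filters)]

centers = mel_center_freqs(8, 0.0, 8000.0)
print([round(c) for c in centers])
# Mel spacing compresses high frequencies: gaps between adjacent
# center frequencies widen monotonically going up the scale
gaps = [b - a for a, b in zip(centers, centers[1:])]
assert all(b > a for a, b in zip(gaps, gaps[1:]))
```

    The perceptually motivated warping is what distinguishes these front ends from a plain linear-frequency spectrogram.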
  • Scharenborg, O., Boves, L., & Ten Bosch, L. (2004). ‘On-line early recognition’ of polysyllabic words in continuous speech. In S. Cassidy, F. Cox, R. Mannell, & P. Sallyanne (Eds.), Proceedings of the Tenth Australian International Conference on Speech Science & Technology (pp. 387-392). Canberra: Australian Speech Science and Technology Association Inc.

    Abstract

    In this paper, we investigate the ability of SpeM, our recognition system based on the combination of an automatic phone recogniser and a wordsearch module, to determine as early as possible during the word recognition process whether a word is likely to be recognised correctly (this we refer to as ‘on-line’ early word recognition). We present two measures that can be used to predict whether a word is correctly recognised: the Bayesian word activation and the amount of available (acoustic) information for a word. SpeM was tested on 1,463 polysyllabic words in 885 continuous speech utterances. The investigated predictors indicated that a word activation that is 1) high (but not too high) and 2) based on more phones is more reliable to predict the correctness of a word than a similarly high value based on a small number of phones or a lower value of the word activation.
  • Scharenborg, O. (2008). Modelling fine-phonetic detail in a computational model of word recognition. In INTERSPEECH 2008 - 9th Annual Conference of the International Speech Communication Association (pp. 1473-1476). ISCA Archive.

    Abstract

    There is now considerable evidence that fine-grained acoustic-phonetic detail in the speech signal helps listeners to segment a speech signal into syllables and words. In this paper, we compare two computational models of word recognition on their ability to capture and use this fine-phonetic detail during speech recognition. One model, SpeM, is phoneme-based, whereas the other, newly developed Fine-Tracker, is based on articulatory features. Simulations dealt with modelling the ability of listeners to distinguish short words (e.g., ‘ham’) from the longer words in which they are embedded (e.g., ‘hamster’). The simulations with Fine-Tracker showed that it was, like human listeners, able to distinguish short words from the longer words in which they are embedded. This suggests that it is possible to extract this fine-phonetic detail from the speech signal and use it during word recognition.
  • Schepens, J., Van der Slik, F., & Van Hout, R. (2013). The effect of linguistic distance across Indo-European mother tongues on learning Dutch as a second language. In L. Borin, & A. Saxena (Eds.), Approaches to measuring linguistic differences (pp. 199-230). Berlin: Mouton de Gruyter.
  • Schmidt, T., Duncan, S., Ehmer, O., Hoyt, J., Kipp, M., Loehr, D., Magnusson, M., Rose, T., & Sloetjes, H. (2008). An exchange format for multimodal annotations. In Proceedings of the 6th International Conference on Language Resources and Evaluation (LREC 2008).

    Abstract

    This paper presents the results of a joint effort of a group of multimodality researchers and tool developers to improve the interoperability between several tools used for the annotation of multimodality. We propose a multimodal annotation exchange format, based on the annotation graph formalism, which is supported by import and export routines in the respective tools.
  • Schmiedtova, B., & Flecken, M. (2008). The role of aspectual distinctions in event encoding: Implications for second language acquisition. In S. Müller-de Knop, & T. Mortelmans (Eds.), Pedagogical grammar (pp. 357-384). Berlin: Mouton de Gruyter.
  • Schmitt, B. M., Schiller, N. O., Rodriguez-Fornells, A., & Münte, T. F. (2004). Elektrophysiologische Studien zum Zeitverlauf von Sprachprozessen. In H. H. Müller, & G. Rickheit (Eds.), Neurokognition der Sprache (pp. 51-70). Tübingen: Stauffenburg.
  • Schuppler, B., Ernestus, M., Scharenborg, O., & Boves, L. (2008). Preparing a corpus of Dutch spontaneous dialogues for automatic phonetic analysis. In INTERSPEECH 2008 - 9th Annual Conference of the International Speech Communication Association (pp. 1638-1641). ISCA Archive.

    Abstract

    This paper presents the steps needed to make a corpus of Dutch spontaneous dialogues accessible for automatic phonetic research aimed at increasing our understanding of reduction phenomena and the role of fine phonetic detail. Since the corpus was not created with automatic processing in mind, it needed to be reshaped. The first part of this paper describes the actions needed for this reshaping in some detail. The second part reports the results of a preliminary analysis of the reduction phenomena in the corpus. For this purpose a phonemic transcription of the corpus was created by means of a forced alignment, first with a lexicon of canonical pronunciations and then with multiple pronunciation variants per word. In this study pronunciation variants were generated by applying a large set of phonetic processes that have been implicated in reduction to the canonical pronunciations of the words. This relatively straightforward procedure allows us to produce plausible pronunciation variants and to verify and extend the results of previous reduction studies reported in the literature.
  • Scott, K., Sakkalou, E., Ellis-Davies, K., Hilbrink, E., Hahn, U., & Gattis, M. (2013). Infant contributions to joint attention predict vocabulary development. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Conference of the Cognitive Science Society (pp. 3384-3389). Austin, TX: Cognitive Science Society. Retrieved from http://mindmodeling.org/cogsci2013/papers/0602/index.html.

    Abstract

    Joint attention has long been accepted as constituting a privileged circumstance in which word learning prospers. Consequently research has investigated the role that maternal responsiveness to infant attention plays in predicting language outcomes. However there has been a recent expansion in research implicating similar predictive effects from individual differences in infant behaviours. Emerging from the foundations of such work comes an interesting question: do the relative contributions of the mother and infant to joint attention episodes impact upon language learning? In an attempt to address this, two joint attention behaviours were assessed as predictors of vocabulary attainment (as measured by OCDI Production Scores). These predictors were: mothers encouraging attention to an object given their infant was already attending to an object (maternal follow-in); and infants looking to an object given their mother's encouragement of attention to an object (infant follow-in). In a sample of 14-month-old children (N=36) we compared the predictive power of these maternal and infant follow-in variables on concurrent and later language performance. Results using Growth Curve Analysis provided evidence that while both maternal follow-in and infant follow-in variables contributed to production scores, infant follow-in was a stronger predictor. Consequently it does appear to matter whose final contribution establishes joint attention episodes. Infants who more often follow in to their mothers’ encouragement of attention have larger, and faster growing vocabularies between 14 and 18 months of age.
  • Scott, S. K., McGettigan, C., & Eisner, F. (2013). The neural basis of links and dissociations between speech perception and production. In J. J. Bolhuis, & M. Everaert (Eds.), Birdsong, speech and language: Exploring the evolution of mind and brain (pp. 277-294). Cambridge, Mass: MIT Press.
  • Scott, S., & Sauter, D. (2004). Vocal expressions of emotion and positive and negative basic emotions [Abstract]. Proceedings of the British Psychological Society, 12, 156.

    Abstract

    Previous studies have indicated that vocal and facial expressions of the ‘basic’ emotions share aspects of processing. Thus amygdala damage compromises the perception of fear and anger from the face and from the voice. In the current study we tested the hypothesis that there exist positive basic emotions, expressed mainly in the voice (Ekman, 1992). Vocal stimuli were produced to express the specific positive emotions of amusement, achievement, pleasure, contentment and relief.
  • Senft, G. (2004). Sprache, Kognition und Konzepte des Raumes in verschiedenen Kulturen - Zum Problem der Interdependenz sprachlicher und mentaler Strukturen. In L. Jäger (Ed.), Medialität und Mentalität (pp. 163-176). Paderborn: Wilhelm Fink.
  • Senft, G. (2008). The teaching of Tokunupei. In J. Kommers, & E. Venbrux (Eds.), Cultural styles of knowledge transmission: Essays in honour of Ad Borsboom (pp. 139-144). Amsterdam: Aksant.

    Abstract

    The paper describes how the documentation of a popular song of the adolescents of Tauwema in 1982 led to the collection of the myth of Imdeduya and Yolina, one of the most important myths of the Trobriand Islands. When I returned to my fieldsite in 1989, Tokunupei, one of my best consultants in Tauwema, remembered my interest in the myth and provided me with further information on this topic. Tokunupei's teachings open up an important access to Trobriand eschatology.
  • Senft, G. (2004). What do we really know about serial verb constructions in Austronesian and Papuan languages? In I. Bril, & F. Ozanne-Rivierre (Eds.), Complex predicates in Oceanic languages (pp. 49-64). Berlin: Mouton de Gruyter.
  • Senft, G. (2008). Zur Bedeutung der Sprache für die Feldforschung. In B. Beer (Ed.), Methoden und Techniken der Feldforschung (pp. 103-118). Berlin: Reimer.
  • Senft, G. (2004). Wosi tauwau topaisewa - songs about migrant workers from the Trobriand Islands. In A. Graumann (Ed.), Towards a dynamic theory of language. Festschrift for Wolfgang Wildgen on occasion of his 60th birthday (pp. 229-241). Bochum: Universitätsverlag Dr. N. Brockmeyer.
