Publications

  • Levinson, S. C. (2013). Action formation and ascription. In T. Stivers, & J. Sidnell (Eds.), The handbook of conversation analysis (pp. 103-130). Malden, MA: Wiley-Blackwell. doi:10.1002/9781118325001.ch6.

    Abstract

    Since the core matrix for language use is interaction, the main job of language
    is not to express propositions or abstract meanings, but to deliver actions.
    For in order to respond in interaction we have to ascribe to the prior turn
    a primary ‘action’ – variously thought of as an ‘illocution’, ‘speech act’, ‘move’,
    etc. – to which we then respond. The analysis of interaction also relies heavily
    on attributing actions to turns, so that, e.g., sequences can be characterized in
terms of actions and responses. Yet the process of action ascription remains badly
understudied. We don’t know much about how it is done, when it is done, or even
    what kind of inventory of possible actions might exist, or the degree to which they
    are culturally variable.
    The study of action ascription remains perhaps the primary unfulfilled task in
the study of language use, and it needs to be tackled from conversation-analytic,
    psycholinguistic, cross-linguistic and anthropological perspectives.
    In this talk I try to take stock of what we know, and derive a set of goals for and
    constraints on an adequate theory. Such a theory is likely to employ, I will suggest,
    a top-down plus bottom-up account of action perception, and a multi-level notion
    of action which may resolve some of the puzzles that have repeatedly arisen.
  • Levinson, S. C. (1988). Conceptual problems in the study of regional and cultural style. In N. Dittmar, & P. Schlobinski (Eds.), The sociolinguistics of urban vernaculars: Case studies and their evaluation (pp. 161-190). Berlin: De Gruyter.
  • Levinson, S. C. (1989). Conversation. In E. Barnouw (Ed.), International encyclopedia of communications (pp. 407-410). New York: Oxford University Press.
  • Levinson, S. C. (2013). Cross-cultural universals and communication structures. In M. A. Arbib (Ed.), Language, music, and the brain: A mysterious relationship (pp. 67-80). Cambridge, MA: MIT Press.

    Abstract

    Given the diversity of languages, it is unlikely that the human capacity for language resides in rich universal syntactic machinery. More likely, it resides centrally in the capacity for vocal learning combined with a distinctive ethology for communicative interaction, which together (no doubt with other capacities) make diverse languages learnable. This chapter focuses on face-to-face communication, which is characterized by the mapping of sounds and multimodal signals onto speech acts and which can be deeply recursively embedded in interaction structure, suggesting an interactive origin for complex syntax. These actions are recognized through Gricean intention recognition, which is a kind of “mirroring” or simulation distinct from the classic mirror neuron system. The multimodality of conversational interaction makes evident the involvement of body, hand, and mouth, where the burden on these can be shifted, as in the use of speech and gesture, or hands and face in sign languages. Such shifts have taken place over the course of human evolution. All this suggests a slightly different approach to the mystery of music, whose origins should also be sought in joint action, albeit with a shift from turn-taking to simultaneous expression, and with an affective quality that may tap ancient sources residual in primate vocalization. The deep connection of language to music can best be seen in the only universal form of music, namely song.
  • Levinson, S. C. (1998). Deixis. In J. L. Mey (Ed.), Concise encyclopedia of pragmatics (pp. 200-204). Amsterdam: Elsevier.
  • Levinson, S. C. (2011). Deixis [Reprint]. In D. Archer, & P. Grundy (Eds.), The pragmatics reader (pp. 163-185). London: Routledge.

    Abstract

    Reproduced with permission of Blackwell Publishing from: Levinson, S. C. (2004) 'Deixis'. In: Horn, L.R. and Ward, G. (Eds.) The Handbook of Pragmatics. Oxford: Blackwell Publishing, pp. 100-121
  • Levinson, S. C. (2003). Contextualizing 'contextualization cues'. In S. Eerdmans, C. Prevignano, & P. Thibault (Eds.), Language and interaction: Discussions with John J. Gumperz (pp. 31-39). Amsterdam: John Benjamins.
  • Levinson, S. C. (2003). Language and cognition. In W. Frawley (Ed.), International Encyclopedia of Linguistics (pp. 459-463). Oxford: Oxford University Press.
  • Levinson, S. C. (2003). Language and mind: Let's get the issues straight! In D. Gentner, & S. Goldin-Meadow (Eds.), Language in mind: Advances in the study of language and cognition (pp. 25-46). Cambridge, MA: MIT Press.
  • Levinson, S. C. (2011). Foreword. In D. M. Mark, A. G. Turk, N. Burenhult, & D. Stea (Eds.), Landscape in language: Transdisciplinary perspectives (pp. ix-x). Amsterdam: John Benjamins.
  • Levinson, S. C. (1998). Minimization and conversational inference. In A. Kasher (Ed.), Pragmatics: Vol. 4 Presupposition, implicature and indirect speech acts (pp. 545-612). London: Routledge.
  • Levinson, S. C. (1995). Interactional biases in human thinking. In E. N. Goody (Ed.), Social intelligence and interaction (pp. 221-260). Cambridge: Cambridge University Press.
  • Levinson, S. C. (2011). Presumptive meanings [Reprint]. In D. Archer, & P. Grundy (Eds.), The pragmatics reader (pp. 86-98). London: Routledge.

    Abstract

    Reprinted with permission of The MIT Press from Levinson (2000) Presumptive meanings: The theory of generalized conversational implicature, pp. 112-118, 116-167, 170-173, 177-180. MIT Press
  • Levinson, S. C. (1988). Putting linguistics on a proper footing: Explorations in Goffman's participation framework. In P. Drew, & A. Wootton (Eds.), Goffman: Exploring the interaction order (pp. 161-227). Oxford: Polity Press.
  • Levinson, S. C. (2011). Reciprocals in Yélî Dnye, the Papuan language of Rossel Island. In N. Evans, A. Gaby, S. C. Levinson, & A. Majid (Eds.), Reciprocals and semantic typology (pp. 177-194). Amsterdam: Benjamins.

    Abstract

    Yélî Dnye has two discernible dedicated constructions for reciprocal marking. The first and main construction uses a dedicated reciprocal pronoun numo, somewhat like English each other. We can recognise two subconstructions. First, the ‘numo-construction’, where the reciprocal pronoun is a patient of the verb, and where the invariant pronoun numo is obligatorily incorporated, triggering intransitivisation (e.g. A-NPs become absolutive). This subconstruction has complexities; for example, in the punctual aspect only, the verb is inflected like a transitive, but with enclitics mismatching actual person/number. In the second variant or subconstruction, the ‘noko-construction’, the same reciprocal pronoun (sometimes case-marked as noko) occurs, but now in oblique positions with either transitive or intransitive verbs. The reciprocal element here has some peculiar binding properties. The second, independent construction is a dedicated periphrastic (or woni…woni) construction, glossing ‘the one did X to the other, and the other did X to the one’. It is one of the rare cross-serial dependencies that show that natural languages cannot be modelled by context-free phrase-structure grammars. Finally, the usage of these two distinct constructions is discussed.
  • Levinson, S. C., & Dediu, D. (2013). The interplay of genetic and cultural factors in ongoing language evolution. In P. J. Richerson, & M. H. Christiansen (Eds.), Cultural evolution: Society, technology, language, and religion. Strüngmann Forum Reports, vol. 12 (pp. 219-232). Cambridge, Mass: MIT Press.
  • Levinson, S. C. (1995). Three levels of meaning. In F. Palmer (Ed.), Grammar and meaning: Essays in honour of Sir John Lyons (pp. 90-115). Cambridge University Press.
  • Levinson, S. C. (2011). Three levels of meaning: Essays in honor of Sir John Lyons [Reprint]. In A. Kasher (Ed.), Pragmatics II. London: Routledge.

    Abstract

    Reprint from Stephen C. Levinson, ‘Three Levels of Meaning’, in Frank Palmer (ed.), Grammar and Meaning: Essays in Honor of Sir John Lyons (Cambridge University Press, 1995), pp. 90–115
  • Levinson, S. C. (2011). Universals in pragmatics. In P. C. Hogan (Ed.), The Cambridge encyclopedia of the language sciences (pp. 654-657). New York: Cambridge University Press.

    Abstract

    Changing Prospects for Universals in Pragmatics

    The term PRAGMATICS has come to denote the study of general principles of language use. It is usually understood to contrast with SEMANTICS, the study of encoded meaning, and also, by some authors, to contrast with SOCIOLINGUISTICS and the ethnography of speaking, which are more concerned with local sociocultural practices. Given that pragmaticists come from disciplines as varied as philosophy, sociology, linguistics, communication studies, psychology, and anthropology, it is not surprising that definitions of pragmatics vary. Nevertheless, most authors agree on a list of topics that come under the rubric, including DEIXIS, PRESUPPOSITION, implicature (see CONVERSATIONAL IMPLICATURE), SPEECH-ACTS, and conversational organization (see CONVERSATIONAL ANALYSIS). Here, we can use this extensional definition as a starting point (Levinson 1988; Huang 2007).
  • Liszkowski, U., & Epps, P. (2003). Directing attention and pointing in infants: A cross-cultural approach. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 25-27). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877649.

    Abstract

    Recent research suggests that 12-month-old infants in German cultural settings have the motive of sharing their attention to and interest in various events with a social interlocutor. To do so, these preverbal infants predominantly use the pointing gesture (in this case the extended arm with or without extended index finger) as a means to direct another person’s attention. This task systematically investigates different types of motives underlying infants’ pointing. The occurrence of a protodeclarative (as opposed to protoimperative) motive is of particular interest because it requires an understanding of the recipient’s psychological states, such as attention and interest, that can be directed and accessed.
  • Majid, A., & Bödeker, K. (2003). Folk theories of objects in motion. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 72-76). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877654.

    Abstract

    There are three main strands of research which have investigated people’s intuitive knowledge of objects in motion. (1) Knowledge of the trajectories of objects in motion; (2) knowledge of the causes of motion; and (3) the categorisation of motion as to whether it has been produced by something animate or inanimate. We provide a brief introduction to each of these areas. We then point to some linguistic and cultural differences which may have consequences for people’s knowledge of objects in motion. Finally, we describe two experimental tasks and an ethnographic task that will allow us to collect data in order to establish whether, indeed, there are interesting cross-linguistic/cross-cultural differences in lay theories of objects in motion.
  • Majid, A. (2013). Olfactory language and cognition. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th annual meeting of the Cognitive Science Society (CogSci 2013) (p. 68). Austin, TX: Cognitive Science Society. Retrieved from http://mindmodeling.org/cogsci2013/papers/0025/index.html.

    Abstract

    Since the cognitive revolution, a widely held assumption has been that—whereas content may vary across cultures—cognitive processes would be universal, especially those on the more basic levels. Even if scholars do not fully subscribe to this assumption, they often conceptualize, or tend to investigate, cognition as if it were universal (Henrich, Heine, & Norenzayan, 2010). The insight that universality must not be presupposed but scrutinized is now gaining ground, and cognitive diversity has become one of the hot (and controversial) topics in the field (Norenzayan & Heine, 2005). We argue that, for scrutinizing the cultural dimension of cognition, taking an anthropological perspective is invaluable, not only for the task itself, but for attenuating the home-field disadvantages that are inescapably linked to cross-cultural research (Medin, Bennis, & Chandler, 2010).
  • Majid, A. (2013). Psycholinguistics. In J. L. Jackson (Ed.), Oxford Bibliographies Online: Anthropology. Oxford: Oxford University Press.
  • Majid, A., Evans, N., Gaby, A., & Levinson, S. C. (2011). The semantics of reciprocal constructions across languages: An extensional approach. In N. Evans, A. Gaby, S. C. Levinson, & A. Majid (Eds.), Reciprocals and semantic typology (pp. 29-60). Amsterdam: Benjamins.

    Abstract

    How similar are reciprocal constructions in the semantic parameters they encode? We investigate this question by using an extensional approach, which examines similarity of meaning by examining how constructions are applied over a set of 64 videoclips depicting reciprocal events (Evans et al. 2004). We apply statistical modelling to descriptions from speakers of 20 languages elicited using the videoclips. We show that there are substantial differences in meaning between constructions of different languages.

  • Majid, A., & Levinson, S. C. (2011). The language of perception across cultures [Abstract]. Abstracts of the XXth Congress of European Chemoreception Research Organization, ECRO-2010. Publ. in Chemical Senses, 36(1), E7-E8.

    Abstract

    How are the senses structured by the languages we speak, the cultures we inhabit? To what extent is the encoding of perceptual experiences in languages a matter of how the mind/brain is “wired up”, and to what extent is it a question of local cultural preoccupation? The “Language of Perception” project tests the hypothesis that some perceptual domains may be more “ineffable” – i.e. difficult or impossible to put into words – than others. While cognitive scientists have assumed that proximate senses (olfaction, taste, touch) are more ineffable than distal senses (vision, hearing), anthropologists have illustrated the exquisite variation and elaboration the senses achieve in different cultural milieus. The project is designed to test whether the proximate senses are universally ineffable – suggesting an architectural constraint on cognition – or whether they are just accidentally so in Indo-European languages, so expanding the role of cultural interests and preoccupations. To address this question, a standardized set of stimuli of color patches, geometric shapes, simple sounds, tactile textures, smells and tastes has been used to elicit descriptions from speakers of more than twenty languages—including three sign languages. The languages are typologically, genetically and geographically diverse, representing a wide range of cultures. The communities sampled vary in subsistence modes (hunter-gatherer to industrial), ecological zones (rainforest jungle to desert), dwelling types (rural and urban), and various other parameters. We examine how codable the different sensory modalities are by comparing how consistent speakers are in how they describe the materials in each modality. Our current analyses suggest that taste may, in fact, be the most codable sensorial domain across languages. Moreover, we have identified exquisite elaboration in the olfactory domains in some cultural settings, contrary to some contemporary predictions within the cognitive sciences. These results suggest that differential codability may be at least partly the result of cultural preoccupation. This shows that the senses are not just physiological phenomena but are constructed through linguistic, cultural and social practices.
  • Malt, B. C., Ameel, E., Gennari, S., Imai, M., Saji, N., & Majid, A. (2011). Do words reveal concepts? In L. Carlson, C. Hölscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Conference of the Cognitive Science Society (pp. 519-524). Austin, TX: Cognitive Science Society.

    Abstract

    To study concepts, cognitive scientists must first identify some. The prevailing assumption is that they are revealed by words such as triangle, table, and robin. But languages vary dramatically in how they carve up the world by name. Either ordinary concepts must be heavily language-dependent or names cannot be a direct route to concepts. We asked English, Dutch, Spanish, and Japanese speakers to name videos of human locomotion and judge their similarities. We investigated what name inventories and scaling solutions on name similarity and on physical similarity for the groups individually and together suggest about the underlying concepts. Aggregated naming and similarity solutions converged on results distinct from the answers suggested by the word inventories and scaling solutions of any single language. Words such as triangle, table, and robin can help identify the conceptual space of a domain, but they do not directly reveal units of knowledge usefully considered 'concepts'.
  • Marcus, G., & Fisher, S. E. (2011). Genes and language. In P. Hogan (Ed.), The Cambridge encyclopedia of the language sciences (pp. 341-344). New York: Cambridge University Press.
  • Mark, D. M., Turk, A., Burenhult, N., & Stea, D. (2011). Landscape in language: An introduction. In D. M. Mark, A. G. Turk, N. Burenhult, & D. Stea (Eds.), Landscape in language: Transdisciplinary perspectives (pp. 1-24). Amsterdam: John Benjamins.
  • de Marneffe, M.-C., Tomlinson, J. J., Tice, M., & Sumner, M. (2011). The interaction of lexical frequency and phonetic variation in the perception of accented speech. In L. Carlson, C. Hölscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Conference of the Cognitive Science Society (pp. 3575-3580). Austin, TX: Cognitive Science Society.

    Abstract

    How listeners understand spoken words despite massive variation in the speech signal is a central issue for linguistic theory. A recent focus on lexical frequency and specificity has proved fruitful in accounting for this phenomenon. Speech perception, though, is a multi-faceted process and likely incorporates a number of mechanisms to map a variable signal to meaning. We examine a well-established language use factor — lexical frequency — and how this factor is integrated with phonetic variability during the perception of accented speech. We show that an integrated perspective highlights a low-level perceptual mechanism that accounts for the perception of accented speech absent native contrasts, while shedding light on the use of interactive language factors in the perception of spoken words.
  • McDonough, L., Choi, S., Bowerman, M., & Mandler, J. M. (1998). The use of preferential looking as a measure of semantic development. In C. Rovee-Collier, L. P. Lipsitt, & H. Hayne (Eds.), Advances in Infancy Research. Volume 12. (pp. 336-354). Stamford, CT: Ablex Publishing.
  • McQueen, J. M., & Cho, T. (2003). The use of domain-initial strengthening in segmentation of continuous English speech. In Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS 2003) (pp. 2993-2996). Adelaide: Causal Productions.
  • McQueen, J. M., Dahan, D., & Cutler, A. (2003). Continuity and gradedness in speech processing. In N. O. Schiller, & A. S. Meyer (Eds.), Phonetics and phonology in language comprehension and production: Differences and similarities (pp. 39-78). Berlin: Mouton de Gruyter.
  • McQueen, J. M., & Cutler, A. (1998). Morphology in word recognition. In A. M. Zwicky, & A. Spencer (Eds.), The handbook of morphology (pp. 406-427). Oxford: Blackwell.
  • McQueen, J. M., & Cutler, A. (1998). Spotting (different kinds of) words in (different kinds of) context. In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: Vol. 6 (pp. 2791-2794). Sydney: ICSLP.

    Abstract

    The results of a word-spotting experiment are presented in which Dutch listeners tried to spot different types of bisyllabic Dutch words embedded in different types of nonsense contexts. Embedded verbs were not reliably harder to spot than embedded nouns; this suggests that nouns and verbs are recognised via the same basic processes. Iambic words were no harder to spot than trochaic words, suggesting that trochaic words are not in principle easier to recognise than iambic words. Words were harder to spot in consonantal contexts (i.e., contexts which themselves could not be words) than in longer contexts which contained at least one vowel (i.e., contexts which, though not words, were possible words of Dutch). A control experiment showed that this difference was not due to acoustic differences between the words in each context. The results support the claim that spoken-word recognition is sensitive to the viability of sound sequences as possible words.
  • Meeuwissen, M., Roelofs, A., & Levelt, W. J. M. (2003). Naming analog clocks conceptually facilitates naming digital clocks. In Proceedings of XIII Conference of the European Society of Cognitive Psychology (ESCOP 2003) (p. 271).
  • Meira, S. (2003). 'Addressee effects' in demonstrative systems: The cases of Tiriyó and Brazilian Portuguese. In F. Lenz (Ed.), Deictic conceptualization of space, time and person (pp. 3-12). Amsterdam/Philadelphia: John Benjamins.
  • Meyer, A. S., & Dobel, C. (2003). Application of eye tracking in speech production research. In J. Hyönä, R. Radach, & H. Deubel (Eds.), The mind’s eye: Cognitive and applied aspects of eye movement research (pp. 253-272). Amsterdam: Elsevier.
  • Meyer, A. S. (1988). Phonological encoding in language production: A priming study. PhD Thesis, Katholieke Universiteit Nijmegen.
  • Mishra, R. K., Olivers, C. N. L., & Huettig, F. (2013). Spoken language and the decision to move the eyes: To what extent are language-mediated eye movements automatic? In V. S. C. Pammi, & N. Srinivasan (Eds.), Progress in Brain Research: Decision making: Neural and behavioural approaches (pp. 135-149). New York: Elsevier.

    Abstract

    Recent eye-tracking research has revealed that spoken language can guide eye gaze very rapidly (and closely time-locked to the unfolding speech) toward referents in the visual world. We discuss whether, and to what extent, such language-mediated eye movements are automatic rather than subject to conscious and controlled decision-making. We consider whether language-mediated eye movements adhere to four main criteria of automatic behavior, namely, whether they are fast and efficient, unintentional, unconscious, and overlearned (i.e., arrived at through extensive practice). Current evidence indicates that language-driven oculomotor behavior is fast but not necessarily always efficient. It seems largely unintentional though there is also some evidence that participants can actively use the information in working memory to avoid distraction in search. Language-mediated eye movements appear to be for the most part unconscious and have all the hallmarks of an overlearned behavior. These data are suggestive of automatic mechanisms linking language to potentially referred-to visual objects, but more comprehensive and rigorous testing of this hypothesis is needed.
  • Mitterer, H. (2011). Social accountability influences phonetic alignment. Journal of the Acoustical Society of America. Program abstracts of the 162nd Meeting of the Acoustical Society of America, 130(4), 2442.

    Abstract

    Speakers tend to take over the articulatory habits of their interlocutors [e.g., Pardo, JASA (2006)]. This phonetic alignment could be the consequence of either a social mechanism or a direct and automatic link between speech perception and production. The latter assumes that social variables should have little influence on phonetic alignment. To test this, participants were engaged in a "cloze task" (i.e., Stimulus: "In fantasy movies, silver bullets are used to kill ..." Response: "werewolves") with either one or four interlocutors. Given findings with the Asch-conformity paradigm in social psychology, multiple consistent speakers should exert a stronger force on the participant to align. To control the speech style of the interlocutors, their questions and answers were pre-recorded in either a formal or a casual speech style. The stimuli's speech style was then manipulated between participants and was consistent throughout the experiment for a given participant. Surprisingly, participants aligned less with the speech style if there were multiple interlocutors. This may reflect a "diffusion of responsibility": Participants may find it more important to align when they interact with only one person than with a larger group.
  • Moscoso del Prado Martín, F. (2003). Paradigmatic structures in morphological processing: Computational and cross-linguistic experimental studies. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.58929.
  • Moscoso del Prado Martín, F., & Baayen, R. H. (2003). Using the structure found in time: Building real-scale orthographic and phonetic representations by accumulation of expectations. In H. Bowman, & C. Labiouse (Eds.), Connectionist Models of Cognition, Perception and Emotion: Proceedings of the Eighth Neural Computation and Psychology Workshop (pp. 263-272). Singapore: World Scientific.
  • Mulder, K. (2013). Family and neighbourhood relations in the mental lexicon: A cross-language perspective. PhD Thesis, Radboud University Nijmegen, Nijmegen.

    Abstract

    We read and hear thousands of words every day, seemingly without any effort. Yet a complex mental process unfolds in our brain in the meantime, in which many words other than the one actually presented also become active. This happens in particular when those other words resemble the presented word in spelling, pronunciation or meaning. This similarity-driven activation even extends to other languages: resembling words become active there too. Where are the limits of this activation process? When processing the English word 'steam', do you also activate the Dutch word 'stram' (a so-called 'neighbour')? And for 'clock', do you activate both 'clockwork' and the Dutch 'klokhuis' (two morphological family members from different languages)? Kimberley Mulder investigated how such relations influence the reading process of Dutch-English bilinguals. In several experimental studies she found that bilinguals activate morphological family members and orthographic neighbours not only from the language they are currently reading in, but also from the other language they know. Reading a word is thus by no means limited to what you actually see: it activates an entire network of words in your brain.

  • Neijt, A., Schreuder, R., & Baayen, R. H. (2003). Verpleegsters, ambassadrices, and masseuses: Stratum differences in the comprehension of Dutch words with feminine agent suffixes. In L. Cornips, & P. Fikkert (Eds.), Linguistics in the Netherlands 2003 (pp. 117-127). Amsterdam: Benjamins.
  • Noordman, L. G., & Vonk, W. (1998). Discourse comprehension. In A. D. Friederici (Ed.), Language comprehension: a biological perspective (pp. 229-262). Berlin: Springer.

    Abstract

    The human language processor is conceived as a system that consists of several interrelated subsystems. Each subsystem performs a specific task in the complex process of language comprehension and production. A subsystem receives a particular input, performs certain specific operations on this input and yields a particular output. The subsystems can be characterized in terms of the transformations that relate the input representations to the output representations. An important issue in describing the language processing system is to identify the subsystems and to specify the relations between the subsystems. These relations can be conceived in two different ways. In one conception the subsystems are autonomous. They are related to each other only by the input-output channels. The operations in one subsystem are not affected by another system. The subsystems are modular, that is they are independent. In the other conception, the different subsystems influence each other. A subsystem affects the processes in another subsystem. In this conception there is an interaction between the subsystems.
  • Norcliffe, E., Enfield, N. J., Majid, A., & Levinson, S. C. (2011). The grammar of perception. In K. Kendrick, & A. Majid (Eds.), Field manual volume 14 (pp. 1-10). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Nordhoff, S., & Hammarström, H. (2011). Glottolog/Langdoc: Defining dialects, languages, and language families as collections of resources. Proceedings of the First International Workshop on Linked Science 2011 (LISC2011), Bonn, Germany, October 24, 2011.

    Abstract

    This paper describes the Glottolog/Langdoc project, an attempt to provide near-total bibliographical coverage of descriptive resources for the world's languages. Every reference is treated as a resource, as is every "languoid" [1]. References are linked to the languoids which they describe, and languoids are linked to the references that describe them. Family relations between languoids are modelled in SKOS, as are relations across different classifications of the same languages. This setup allows the representation of languoids as collections of references, rendering the question of the definition of entities like 'Scots', 'West-Germanic' or 'Indo-European' more empirical.
  • Norman, D. A., & Levelt, W. J. M. (1988). Life at the center. In W. Hirst (Ed.), The making of cognitive science: essays in honor of George A. Miller (pp. 100-109). Cambridge University Press.
  • Oostdijk, N., & Broeder, D. (2003). The Spoken Dutch Corpus and its exploitation environment. In A. Abeille, S. Hansen-Schirra, & H. Uszkoreit (Eds.), Proceedings of the 4th International Workshop on linguistically interpreted corpora (LINC-03) (pp. 93-101).
  • Ortega, G. (2013). Acquisition of a signed phonological system by hearing adults: The role of sign structure and iconicity. PhD Thesis, University College London, London.

    Abstract

    The phonological system of a sign language comprises meaningless sub-lexical units that define the structure of a sign. A number of studies have examined how learners of a sign language as a first language (L1) acquire these components. However, little is understood about the mechanism by which hearing adults develop visual phonological categories when learning a sign language as a second language (L2). Developmental studies have shown that sign complexity and iconicity, the clear mapping between the form of a sign and its referent, shape in different ways the order of emergence of a visual phonology. The aim of the present dissertation was to investigate how these two factors affect the development of a visual phonology in hearing adults learning a sign language as L2. The empirical data gathered in this dissertation confirm that sign structure and iconicity are important factors that determine L2 phonological development. Non-signers perform better at discriminating the contrastive features of phonologically simple signs than signs with multiple elements. Handshape was the parameter most difficult to learn, followed by movement, then orientation and finally location, which is the same order of acquisition reported in L1 sign acquisition. In addition, the ability to access the iconic properties of signs had a detrimental effect on phonological development, because iconic signs were consistently articulated less accurately than arbitrary signs. Participants tended to retain the iconic elements of signs but disregarded their exact phonetic structure. Further, non-signers appeared to process iconic signs as iconic gestures, at least at the early stages of sign language acquisition. The empirical data presented in this dissertation suggest that non-signers exploit their gestural system as scaffolding for the new manual linguistic system, and that sign L2 phonological development is strongly influenced by the structural complexity of a sign and its degree of iconicity.
  • Ortega, G., & Ozyurek, A. (2013). Gesture-sign interface in hearing non-signers' first exposure to sign. In Proceedings of the Tilburg Gesture Research Meeting [TiGeR 2013].

    Abstract

    Natural sign languages and gestures are complex communicative systems that allow the incorporation of features of a referent into their structure. They differ, however, in that signs are more conventionalised because they consist of meaningless phonological parameters. There is some evidence that although non-signers find iconic signs more memorable, they have more difficulty articulating their exact phonological components. In the present study, hearing non-signers took part in a sign repetition task in which they had to imitate as accurately as possible a set of iconic and arbitrary signs. Their renditions showed that iconic signs were articulated significantly less accurately than arbitrary signs. Participants were recalled six months later to take part in a sign generation task. In this task, participants were shown the English translation of the iconic signs they had imitated six months prior. For each word, participants were asked to generate a sign (i.e., an iconic gesture). The handshapes produced in the sign repetition and sign generation tasks were compared to detect instances in which both renditions presented the same configuration. There was a significant correlation between articulation accuracy in the sign repetition task and handshape overlap. These results suggest some form of gestural interference in the production of iconic signs by hearing non-signers. We also suggest that in some instances non-signers may deploy their own conventionalised gesture when producing some iconic signs. These findings are interpreted as evidence that non-signers process iconic signs as gestures and that, in production, only when sign and gesture have overlapping features will they be capable of producing the phonological components of signs accurately.
  • Osswald, R., & Van Valin Jr., R. D. (2013). FrameNet, frame structure and the syntax-semantics interface. In T. Gamerschlag, D. Gerland, R. Osswald, & W. Petersen (Eds.), Frames and concept types: Applications in language and philosophy. Heidelberg: Springer.
  • Otake, T., & Cutler, A. (2003). Evidence against "units of perception". In S. Shohov (Ed.), Advances in psychology research (pp. 57-82). Hauppauge, NY: Nova Science.
  • Otake, T., Davis, S. M., & Cutler, A. (1995). Listeners’ representations of within-word structure: A cross-linguistic and cross-dialectal investigation. In J. Pardo (Ed.), Proceedings of EUROSPEECH 95: Vol. 3 (pp. 1703-1706). Madrid: European Speech Communication Association.

    Abstract

    Japanese, British English and American English listeners were presented with spoken words in their native language, and asked to mark on a written transcript of each word the first natural division point in the word. The results showed clear and strong patterns of consensus, indicating that listeners have available to them conscious representations of within-word structure. Orthography did not play a strongly deciding role in the results. The patterns of response were at variance with results from on-line studies of speech segmentation, suggesting that the present task taps not those representations used in on-line listening, but levels of representation which may involve much richer knowledge of word-internal structure.
  • Ouni, S., Cohen, M. M., Young, K., & Jesse, A. (2003). Internationalization of a talking head. In M. Sole, D. Recasens, & J. Romero (Eds.), Proceedings of the 15th International Congress of Phonetic Sciences (pp. 2569-2572). Barcelona: Causal Productions.

    Abstract

    In this paper we describe a general scheme for internationalization of our talking head, Baldi, to speak other languages. We describe the modular structure of the auditory/visual synthesis software. As an example, we have created a synthetic Arabic talker, which is evaluated using a noisy word recognition task comparing this talker with a natural one.
  • Ozyurek, A. (1998). An analysis of the basic meaning of Turkish demonstratives in face-to-face conversational interaction. In S. Santi, I. Guaitella, C. Cave, & G. Konopczynski (Eds.), Oralite et gestualite: Communication multimodale, interaction: actes du colloque ORAGE 98 (pp. 609-614). Paris: L'Harmattan.
  • Ozyurek, A., & Perniss, P. M. (2011). Event representations in signed languages. In J. Bohnemeyer, & E. Pederson (Eds.), Event representations in language and cognition (pp. 84-107). New York: Cambridge University Press.
  • Patterson, R. D., & Cutler, A. (1989). Auditory preprocessing and recognition of speech. In A. Baddeley, & N. Bernsen (Eds.), Research directions in cognitive science: A European perspective: Vol. 1. Cognitive psychology (pp. 23-60). London: Erlbaum.
  • Pederson, E. (1995). Questionnaire on event realization. In D. Wilkins (Ed.), Extensions of space and beyond: manual for field elicitation for the 1995 field season (pp. 54-60). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3004359.

    Abstract

    "Event realisation" refers to the normal final state of the affected entity of an activity described by a verb. For example, the sentence John killed the mosquito entails that the mosquito is afterwards dead – this is the full realisation of a killing event. By contrast, a sentence such as John hit the mosquito does not entail the mosquito’s death (even though we might assume this to be a likely result). In using a certain verb, which features of event realisation are entailed and which are just likely? This questionnaire supports cross-linguistic exploration of event realisation for a range of event types.
  • Peeters, D., Chu, M., Holler, J., Ozyurek, A., & Hagoort, P. (2013). Getting to the point: The influence of communicative intent on the kinematics of pointing gestures. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 1127-1132). Austin, TX: Cognitive Science Society.

    Abstract

    In everyday communication, people not only use speech but also hand gestures to convey information. One intriguing question in gesture research has been why gestures take the specific form they do. Previous research has identified the speaker-gesturer’s communicative intent as one factor shaping the form of iconic gestures. Here we investigate whether communicative intent also shapes the form of pointing gestures. In an experimental setting, twenty-four participants produced pointing gestures identifying a referent for an addressee. The communicative intent of the speaker-gesturer was manipulated by varying the informativeness of the pointing gesture. A second independent variable was the presence or absence of concurrent speech. As a function of their communicative intent and irrespective of the presence of speech, participants varied the durations of the stroke and the post-stroke hold-phase of their gesture. These findings add to our understanding of how the communicative context influences the form that a gesture takes.
  • Perniss, P. M., Zwitserlood, I., & Ozyurek, A. (2011). Does space structure spatial language? Linguistic encoding of space in sign languages. In L. Carlson, C. Holscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Meeting of the Cognitive Science Society (pp. 1595-1600). Austin, TX: Cognitive Science Society.
  • Petersson, K. M., Forkstam, C., Inácio, F., Bramão, I., Araújo, S., Souza, A. C., Silva, S., & Castro, S. L. (2011). Artificial language learning. In A. Trevisan, & V. Wannmacher Pereira (Eds.), Alfabetização e cognição (pp. 71-90). Porto Alegre, Brasil: Edipucrs.

    Abstract

    In this article we briefly review current behavioural and functional neuroimaging research on artificial language learning in children and adults. In the final section, we discuss a possible association between dyslexia and implicit learning. Recent results suggest that a deficit in implicit learning may contribute to the reading and writing difficulties observed in dyslexic individuals.
  • Phillips, W., & Majid, A. (2011). Emotional sound symbolism. In K. Kendrick, & A. Majid (Eds.), Field manual volume 14 (pp. 16-18). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.1005615.
  • Piai, V., Roelofs, A., Jensen, O., Schoffelen, J.-M., & Bonnefond, M. (2013). Distinct patterns of brain activity characterize lexical activation and competition in speech production [Abstract]. Journal of Cognitive Neuroscience, 25 Suppl., 106.

    Abstract

    A fundamental ability of speakers is to quickly retrieve words from long-term memory. According to a prominent theory, concepts activate multiple associated words, which enter into competition for selection. Previous electrophysiological studies have provided evidence for the activation of multiple alternative words, but did not identify brain responses reflecting competition. We report a magnetoencephalography study examining the timing and neural substrates of lexical activation and competition. The degree of activation of competing words was manipulated by presenting pictures (e.g., dog) simultaneously with distractor words. The distractors were semantically related to the picture name (cat), unrelated (pin), or identical (dog). Semantic distractors are stronger competitors to the picture name, because they receive additional activation from the picture, whereas unrelated distractors do not. Picture naming times were longer with semantic than with unrelated and identical distractors. The patterns of phase-locked and non-phase-locked activity were distinct but temporally overlapping. Phase-locked activity in left middle temporal gyrus, peaking at 400 ms, was larger on unrelated than semantic and identical trials, suggesting differential effort in processing the alternative words activated by the picture-word stimuli. Non-phase-locked activity in the 4-10 Hz range between 400-650 ms in left superior frontal gyrus was larger on semantic than unrelated and identical trials, suggesting different degrees of effort in resolving the competition among the alternative words, as reflected in the naming times. These findings characterize distinct patterns of brain activity associated with lexical activation and competition, respectively, and their temporal relation, supporting the theory that words are selected by competition.
  • Plomp, R., & Levelt, W. J. M. (1966). Perception of tonal consonance. In M. A. Bouman (Ed.), Studies in Perception - dedicated to M.A. Bouman (pp. 105-118). Soesterberg: Institute for Perception RVO-TNO.
  • Poellmann, K. (2013). The many ways listeners adapt to reductions in casual speech. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Poellmann, K., McQueen, J. M., & Mitterer, H. (2011). The time course of perceptual learning. In W.-S. Lee, & E. Zee (Eds.), Proceedings of the 17th International Congress of Phonetic Sciences 2011 [ICPhS XVII] (pp. 1618-1621). Hong Kong: Department of Chinese, Translation and Linguistics, City University of Hong Kong.

    Abstract

    Two groups of participants were trained to perceive an ambiguous sound [s/f] as either /s/ or /f/ based on lexical bias: One group heard the ambiguous fricative in /s/-final words, the other in /f/-final words. This kind of exposure leads to a recalibration of the /s/-/f/ contrast [e.g., 4]. In order to investigate when and how this recalibration emerges, test trials were interspersed among training and filler trials. The learning effect needed at least 10 clear training items to arise. Its emergence seemed to occur in a rather step-wise fashion. Learning did not improve much after it first appeared. It is likely, however, that the early test trials attracted participants' attention and therefore may have interfered with the learning process.
  • Puccini, D. (2013). The use of deictic versus representational gestures in infancy. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Rapold, C. J. (2011). Semantics of Khoekhoe reciprocal constructions. In N. Evans, A. Gaby, S. C. Levinson, & A. Majid (Eds.), Reciprocals and semantic typology (pp. 61-74). Amsterdam: Benjamins.

    Abstract

    This paper identifies four reciprocal construction types in Khoekhoe (Central Khoisan). After a brief description of the morphosyntax of each construction, semantic factors governing their choice are explored. Besides lexical semantics, the number of participants, timing of symmetric subevents, and symmetric conceptualisation are shown to account for the distribution of the four partially competing reciprocal constructions.
  • Ravignani, A., Gingras, B., Asano, R., Sonnweber, R., Matellan, V., & Fitch, W. T. (2013). The evolution of rhythmic cognition: New perspectives and technologies in comparative research. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Conference of the Cognitive Science Society (pp. 1199-1204). Austin, TX: Cognitive Science Society.

    Abstract

    Music is a pervasive phenomenon in human culture, and musical rhythm is present in virtually all musical traditions. Research on the evolution and cognitive underpinnings of rhythm can benefit from a number of approaches. We outline key concepts and definitions, allowing fine-grained analysis of rhythmic cognition in experimental studies. We advocate comparative animal research as a useful approach to answer questions about human music cognition and review experimental evidence from different species. Finally, we suggest future directions for research on the cognitive basis of rhythm. Apart from research in semi-natural setups, possibly allowed by “drum set for chimpanzees” prototypes presented here for the first time, mathematical modeling and systematic use of circular statistics may allow promising advances.
  • Regier, T., Khetarpal, N., & Majid, A. (2011). Inferring conceptual structure from cross-language data. In L. Carlson, C. Hölscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Conference of the Cognitive Science Society (pp. 1488). Austin, TX: Cognitive Science Society.
  • Reinisch, E., & Weber, A. (2011). Adapting to lexical stress in a foreign accent. In W.-S. Lee, & E. Zee (Eds.), Proceedings of the 17th International Congress of Phonetic Sciences 2011 [ICPhS XVII] (pp. 1678-1681). Hong Kong: Department of Chinese, Translation and Linguistics, City University of Hong Kong.

    Abstract

    An exposure-test paradigm was used to examine whether Dutch listeners can adapt their perception to non-canonical marking of lexical stress in Hungarian-accented Dutch. During exposure, one group of listeners heard only words with correct initial stress, while another group also heard examples of unstressed initial syllables that were marked by high pitch, a possible stress cue in Dutch. Subsequently, listeners’ eye movements to target-competitor pairs with segmental overlap but different stress patterns were tracked while hearing Hungarian-accented Dutch. Listeners who had heard non-canonically produced words previously distinguished target-competitor pairs faster than listeners who had only been exposed to canonical forms before. This suggests that listeners can adapt quickly to speaker-specific realizations of non-canonical lexical stress.
  • Reinisch, E., Weber, A., & Mitterer, H. (2011). Listeners retune phoneme boundaries across languages [Abstract]. Journal of the Acoustical Society of America. Program abstracts of the 162nd Meeting of the Acoustical Society of America, 130(4), 2572-2572.

    Abstract

    Listeners can flexibly retune category boundaries of their native language to adapt to non-canonically produced phonemes. This only occurs, however, if the pronunciation peculiarities can be attributed to stable and not transient speaker-specific characteristics. Listening to someone speaking a second language, listeners could attribute non-canonical pronunciations either to the speaker or to the fact that she is modifying her categories in the second language. We investigated whether, following exposure to Dutch-accented English, Dutch listeners show effects of category retuning during test where they hear the same speaker speaking her native language, Dutch. Exposure was a lexical-decision task where either word-final [f] or [s] was replaced by an ambiguous sound. At test listeners categorized minimal word pairs ending in sounds along an [f]-[s] continuum. Following exposure to English words, Dutch listeners showed boundary shifts of a similar magnitude as following exposure to the same phoneme variants in their native language. This suggests that production patterns in a second language are deemed a stable characteristic. A second experiment suggests that category retuning also occurs when listeners are exposed to and tested with a native speaker of their second language. Listeners thus retune phoneme boundaries across languages.
  • Reis, A., Faísca, L., & Petersson, K. M. (2011). Literacia: Modelo para o estudo dos efeitos de uma aprendizagem específica na cognição e nas suas bases cerebrais. In A. Trevisan, J. J. Mouriño Mosquera, & V. Wannmacher Pereira (Eds.), Alfabetização e cognição (pp. 23-36). Porto Alegre, Brasil: Edipucrs.

    Abstract

    The acquisition of reading and writing skills can be seen as a formal process of cultural transmission in which neurobiological and cultural factors interact. The systematic training required to learn to read and write may produce quantitative and qualitative changes at both the cognitive level and the level of brain organization. Studying illiterate and literate subjects thus offers an opportunity to investigate the effects of a specific kind of learning on cognitive development and its brain bases. In this work, we review a set of behavioural and brain-imaging studies indicating that literacy has an impact on our cognitive functions and on brain organization. More specifically, we discuss differences between literate and illiterate individuals in verbal and non-verbal cognitive domains, suggesting that cognitive architecture is shaped, in part, by learning to read and write. Functional and structural neuroimaging data also indicate that the acquisition of an alphabetic orthography affects the organization and lateralization of cognitive functions.
  • Roberts, L. (2013). Discourse processing. In P. Robinson (Ed.), The Routledge encyclopedia of second language acquisition (pp. 190-194). New York: Routledge.
  • Roberts, S. G. (2013). A Bottom-up approach to the cultural evolution of bilingualism. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 1229-1234). Austin, TX: Cognitive Science Society. Retrieved from http://mindmodeling.org/cogsci2013/papers/0236/index.html.

    Abstract

    The relationship between individual cognition and cultural phenomena at the society level can be transformed by cultural transmission (Kirby, Dowman, & Griffiths, 2007). Top-down models of this process have typically assumed that individuals only adopt a single linguistic trait. Recent extensions include ‘bilingual’ agents, able to adopt multiple linguistic traits (Burkett & Griffiths, 2010). However, bilingualism is more than variation within an individual: it involves the conditional use of variation with different interlocutors. That is, bilingualism is a property of a population that emerges from use. A bottom-up simulation is presented where learners are sensitive to the identity of other speakers. The simulation reveals that dynamic social structures are a key factor for the evolution of bilingualism in a population, a feature that was abstracted away in the top-down models. Top-down and bottom-up approaches may lead to different answers, but can work together to reveal and explore important features of the cultural transmission process.
  • Roberts, L. (2013). Sentence processing in bilinguals. In R. Van Gompel (Ed.), Sentence processing. London: Psychology Press.
  • Robinson, S. (2011). Split intransitivity in Rotokas, a Papuan language of Bougainville. PhD Thesis, Radboud University, Nijmegen.
  • Robinson, S. (2011). Reciprocals in Rotokas. In N. Evans, A. Gaby, S. C. Levinson, & A. Majid (Eds.), Reciprocals and semantic typology (pp. 195-211). Amsterdam: Benjamins.

    Abstract

    This paper describes the syntax and semantics of reciprocity in the Central dialect of Rotokas, a non-Austronesian (Papuan) language spoken in Bougainville, Papua New Guinea. In Central Rotokas, there are three main reciprocal construction types, which differ formally according to where the reflexive/reciprocal marker (ora-) occurs in the clause: on the verb, on a pronominal argument or adjunct, or on a body part noun. The choice of construction type is determined by two considerations: the valency of the verb (i.e., whether it has one or two core arguments) and whether the reciprocal action is performed on a body part. The construction types are compatible with a wide range of the logical subtypes of reciprocity (strong, melee, chaining, etc.).
  • Roelofs, A. (2003). Modeling the relation between the production and recognition of spoken word forms. In N. O. Schiller, & A. S. Meyer (Eds.), Phonetics and phonology in language comprehension and production: Differences and similarities (pp. 115-158). Berlin: Mouton de Gruyter.
  • Rommers, J. (2013). Seeing what's next: Processing and anticipating language referring to objects. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Rossano, F. (2013). Gaze in conversation. In J. Sidnell, & T. Stivers (Eds.), The handbook of conversation analysis (pp. 308-329). Malden, MA: Wiley-Blackwell. doi:10.1002/9781118325001.ch15.

    Abstract

    This chapter contains sections titled: Introduction; Background: The Gaze “Machinery”; Gaze “Machinery” in Social Interaction; Future Directions.
  • Rubio-Fernández, P., Breheny, R., & Lee, M. W. (2003). Context-independent information in concepts: An investigation of the notion of ‘core features’. In Proceedings of the 25th Annual Conference of the Cognitive Science Society (CogSci 2003). Austin, TX: Cognitive Science Society.
  • De Ruiter, J. P. (2003). The function of hand gesture in spoken conversation. In M. Bickenbach, A. Klappert, & H. Pompe (Eds.), Manus Loquens: Medium der Geste, Gesten der Medien (pp. 338-347). Cologne: DuMont.
  • De Ruiter, J. P. (2003). A quantitative model of Störung. In A. Kümmel, & E. Schüttpelz (Eds.), Signale der Störung (pp. 67-81). München: Wilhelm Fink Verlag.
  • De Ruiter, J. P. (1998). Gesture and speech production. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.2057686.
  • Rumsey, A., San Roque, L., & Schieffelin, B. (2013). The acquisition of ergative marking in Kaluli, Ku Waru and Duna (Trans New Guinea). In E. L. Bavin, & S. Stoll (Eds.), The acquisition of ergativity (pp. 133-182). Amsterdam: Benjamins.

    Abstract

    In this chapter we present material on the acquisition of ergative marking on noun phrases in three languages of Papua New Guinea: Kaluli, Ku Waru, and Duna. The expression of ergativity in all the languages is broadly similar, but sensitive to language-specific features, and this pattern of similarity and difference is reflected in the available acquisition data. Children acquire adult-like ergative marking at about the same pace, reaching similar levels of mastery by 3;00 despite considerable differences in morphological complexity of ergative marking among the languages. What may be more important – as a factor in accounting for the relative uniformity of acquisition in this respect – are the similarities in patterns of interactional scaffolding that emerge from a comparison of the three cases.
  • Sadakata, M., & McQueen, J. M. (2011). The role of variability in non-native perceptual learning of a Japanese geminate-singleton fricative contrast. In Proceedings of the 12th Annual Conference of the International Speech Communication Association (Interspeech 2011), Florence, Italy (pp. 873-876).

    Abstract

    The current study reports the enhancing effect of a high variability training procedure in the learning of a Japanese geminate-singleton fricative contrast. Dutch natives took part in a five-day training procedure in which they identified geminate and singleton variants of the Japanese fricative /s/. They heard either many repetitions of a limited set of words recorded by a single speaker (simple training) or fewer repetitions of a more variable set of words recorded by multiple speakers (variable training). Pre-post identification evaluations and a transfer test indicated clear benefits of the variable training.
  • Sauermann, A., Höhle, B., Chen, A., & Järvikivi, J. (2011). Intonational marking of focus in different word orders in German children. In M. B. Washburn, K. McKinney-Bock, E. Varis, & A. Sawyer (Eds.), Proceedings of the 28th West Coast Conference on Formal Linguistics (pp. 313-322). Somerville, MA: Cascadilla Proceedings Project.

    Abstract

    The use of word order and intonation to mark focus in child speech has received some attention. However, past work usually examined each device separately or only compared the realizations of focused vs. non-focused constituents. This paper investigates the interaction between word order and intonation in the marking of different focus types in 4- to 5-year-old German-speaking children and an adult control group. An answer-reconstruction task was used to elicit syntactic (word order) and intonational focus marking of subjects and objects (locus of focus) in three focus types (broad, narrow, and contrastive focus). The results indicate that both children and adults used intonation to distinguish broad from contrastive focus, but they differed in the marking of narrow focus. Further, both groups preferred intonation to word order as a device for focus marking. But children showed an early sensitivity to the impact of focus type and focus location on word order variation and on phonetic means to mark focus.
  • Sauppe, S., Norcliffe, E., Konopka, A. E., Van Valin Jr., R. D., & Levinson, S. C. (2013). Dependencies first: Eye tracking evidence from sentence production in Tagalog. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 1265-1270). Austin, TX: Cognitive Science Society.

    Abstract

    We investigated the time course of sentence formulation in Tagalog, a verb-initial language in which the verb obligatorily agrees with one of its arguments. Eye-tracked participants described pictures of transitive events. Fixations to the two characters in the events were compared across sentences differing in agreement marking and post-verbal word order. Fixation patterns show evidence for two temporally dissociated phases in Tagalog sentence production. The first, driven by verb agreement, involves early linking of concepts to syntactic functions; the second, driven by word order, involves incremental lexical encoding of these concepts. These results suggest that even the earliest stages of sentence formulation may be guided by a language's grammatical structure.
  • Scharenborg, O., & Janse, E. (2013). Changes in the role of intensity as a cue for fricative categorisation. In Proceedings of INTERSPEECH 2013: 14th Annual Conference of the International Speech Communication Association (pp. 3147-3151).

    Abstract

    Older listeners with high-frequency hearing loss rely more on intensity for categorisation of /s/ than normal-hearing older listeners. This study addresses the question whether this increased reliance comes about immediately when the need arises, i.e., in the face of a spectrally-degraded signal. A phonetic categorisation task was carried out using intensity-modulated fricatives in a clean and a low-pass filtered condition with two younger and two older listener groups. When high-frequency information was removed from the speech signal, younger listeners started using intensity as a cue. The older adults, on the other hand, when presented with the low-pass filtered speech, did not rely on intensity differences for fricative identification. These results suggest that the reliance on intensity shown by the older hearing-impaired adults may have been acquired only gradually, with longer exposure to a degraded speech signal.
  • Scharenborg, O., McQueen, J. M., Ten Bosch, L., & Norris, D. (2003). Modelling human speech recognition using automatic speech recognition paradigms in SpeM. In Proceedings of Eurospeech 2003 (pp. 2097-2100). Adelaide: Causal Productions.

    Abstract

    We have recently developed a new model of human speech recognition, based on automatic speech recognition techniques [1]. The present paper has two goals. First, we show that the new model performs well in the recognition of lexically ambiguous input. These demonstrations suggest that the model is able to operate in the same optimal way as human listeners. Second, we discuss how to relate the behaviour of a recogniser, designed to discover the optimum path through a word lattice, to data from human listening experiments. We argue that this requires a metric that combines both path-based and word-based measures of recognition performance. The combined metric varies continuously as the input speech signal unfolds over time.
  • Scharenborg, O., Mitterer, H., & McQueen, J. M. (2011). Perceptual learning of liquids. In Proceedings of the 12th Annual Conference of the International Speech Communication Association (Interspeech 2011), Florence, Italy (pp. 149-152).

    Abstract

    Previous research on lexically-guided perceptual learning has focussed on contrasts that differ primarily in local cues, such as plosive and fricative contrasts. The present research had two aims: to investigate whether perceptual learning occurs for a contrast with non-local cues, the /l/-/r/ contrast, and to establish whether STRAIGHT can be used to create ambiguous sounds on an /l/-/r/ continuum. Listening experiments showed lexically-guided learning about the /l/-/r/ contrast. Listeners can thus tune in to unusual speech sounds characterised by non-local cues. Moreover, STRAIGHT can be used to create stimuli for perceptual learning experiments, opening up new research possibilities. Index Terms: perceptual learning, morphing, liquids, human word recognition, STRAIGHT.
  • Scharenborg, O., ten Bosch, L., & Boves, L. (2003). Recognising 'real-life' speech with SpeM: A speech-based computational model of human speech recognition. In Eurospeech 2003 (pp. 2285-2288).

    Abstract

    In this paper, we present a novel computational model of human speech recognition – called SpeM – based on the theory underlying Shortlist. We will show that SpeM, in combination with an automatic phone recogniser (APR), is able to simulate the human speech recognition process from the acoustic signal to the ultimate recognition of words. This joint model takes an acoustic speech file as input and calculates the activation flows of candidate words on the basis of the degree of fit of the candidate words with the input. Experiments showed that SpeM outperforms Shortlist on the recognition of ‘real-life’ input. Furthermore, SpeM performs only slightly worse than an off-the-shelf full-blown automatic speech recogniser in which all words are equally probable, while it provides a transparent computationally elegant paradigm for modelling word activations in human word recognition.
  • Scheeringa, R. (2011). On the relation between oscillatory EEG activity and the BOLD signal. PhD Thesis, Radboud University Nijmegen, Nijmegen.

    Abstract

    Functional Magnetic Resonance Imaging (fMRI) and Electroencephalography (EEG) are the two techniques most often used to study the working brain. With the first technique, we use the MRI machine to measure with high precision where in the brain the supply of oxygenated blood increases as a result of increased neural activity. The temporal resolution of this measure, however, is limited to a few seconds. With EEG, we measure the electrical activity of the brain with millisecond precision by placing electrodes on the scalp. We can think of the EEG signal as consisting of multiple superimposed frequencies whose strength varies over time and with the performance of a cognitive task. Since we measure EEG at the level of the scalp, it is difficult to know exactly where in the brain the signals originate. For about a decade we have been able to measure fMRI and EEG at the same time, which potentially enables us to combine the superior spatial resolution of fMRI with the superior temporal resolution of EEG. To make this possible, we need to understand how the EEG signal is related to the fMRI signal, which is the central theme of this thesis. The main finding of this thesis is that increases in the strength of EEG frequencies below 30 Hz are related to a decrease in fMRI signal strength, while increases in the strength of frequencies above 40 Hz are related to an increase in fMRI signal strength. Changes in the strength of low EEG frequencies are, however, not coupled to changes in high frequencies. Changes in the strength of low and high EEG frequencies therefore contribute independently to changes in the fMRI signal.
  • Schepens, J., Van der Slik, F., & Van Hout, R. (2013). The effect of linguistic distance across Indo-European mother tongues on learning Dutch as a second language. In L. Borin, & A. Saxena (Eds.), Approaches to measuring linguistic differences (pp. 199-230). Berlin: Mouton de Gruyter.
  • Schiller, N. O. (2003). Metrical stress in speech production: A time course study. In Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS 2003) (pp. 451-454). Adelaide: Causal Productions.

    Abstract

    This study investigated the encoding of metrical information during speech production in Dutch. In Experiment 1, participants were asked to judge whether bisyllabic picture names had initial or final stress. Results showed significantly faster decision times for initially stressed targets (e.g., LEpel 'spoon') than for targets with final stress (e.g., liBEL 'dragon fly'; capital letters indicate stressed syllables) and revealed that the monitoring latencies are not a function of the picture naming or object recognition latencies to the same pictures. Experiments 2 and 3 replicated the outcome of the first experiment with bi- and trisyllabic picture names. These results demonstrate that metrical information of words is encoded rightward incrementally during phonological encoding in speech production. The results of these experiments are in line with Levelt's model of phonological encoding.
  • Schiller, N. O., & Meyer, A. S. (2003). Introduction to the relation between speech comprehension and production. In N. O. Schiller, & A. S. Meyer (Eds.), Phonetics and phonology in language comprehension and production: Differences and similarities (pp. 1-8). Berlin: Mouton de Gruyter.
  • Schmiedtová, B. (2003). The use of aspect in Czech L2. In D. Bittner, & N. Gagarina (Eds.), ZAS Papers in Linguistics (pp. 177-194). Berlin: Zentrum für Allgemeine Sprachwissenschaft.
  • Schmiedtová, B. (2003). Aspekt und Tempus im Deutschen und Tschechischen: Eine vergleichende Studie. In S. Höhne (Ed.), Germanistisches Jahrbuch Tschechien - Slowakei: Schwerpunkt Sprachwissenschaft (pp. 185-216). Praha: Lidové noviny.
