Publications

  • Levinson, S. C. (2011). Deixis [Reprint]. In D. Archer, & P. Grundy (Eds.), The pragmatics reader (pp. 163-185). London: Routledge.

    Abstract

    Reproduced with permission of Blackwell Publishing from: Levinson, S. C. (2004) 'Deixis'. In: Horn, L.R. and Ward, G. (Eds.) The Handbook of Pragmatics. Oxford: Blackwell Publishing, pp. 100-121
  • Levinson, S. C. (2002). Appendix to the 2002 Supplement, version 1, for the “Manual” for the field season 2001. In S. Kita (Ed.), 2002 Supplement (version 3) for the “Manual” for the field season 2001 (pp. 62-64). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Levinson, S. C. (2003). Contextualizing 'contextualization cues'. In S. Eerdmans, C. Prevignano, & P. Thibault (Eds.), Language and interaction: Discussions with John J. Gumperz (pp. 31-39). Amsterdam: John Benjamins.
  • Levinson, S. C. (2003). Language and cognition. In W. Frawley (Ed.), International Encyclopedia of Linguistics (pp. 459-463). Oxford: Oxford University Press.
  • Levinson, S. C. (2003). Language and mind: Let's get the issues straight! In D. Gentner, & S. Goldin-Meadow (Eds.), Language in mind: Advances in the study of language and cognition (pp. 25-46). Cambridge, MA: MIT Press.
  • Levinson, S. C. (2011). Foreword. In D. M. Mark, A. G. Turk, N. Burenhult, & D. Stea (Eds.), Landscape in language: Transdisciplinary perspectives (pp. ix-x). Amsterdam: John Benjamins.
  • Levinson, S. C. (1998). Minimization and conversational inference. In A. Kasher (Ed.), Pragmatics: Vol. 4 Presupposition, implicature and indirect speech acts (pp. 545-612). London: Routledge.
  • Levinson, S. C. (2002). Landscape terms and place names in Yélî Dnye, the language of Rossel Island, PNG. In S. Kita (Ed.), 2002 Supplement (version 3) for the “Manual” for the field season 2001 (pp. 8-13). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Levinson, S. C. (1995). Interactional biases in human thinking. In E. N. Goody (Ed.), Social intelligence and interaction (pp. 221-260). Cambridge: Cambridge University Press.
  • Levinson, S. C. (2011). Presumptive meanings [Reprint]. In D. Archer, & P. Grundy (Eds.), The pragmatics reader (pp. 86-98). London: Routledge.

    Abstract

    Reprinted with permission of The MIT Press from Levinson (2000) Presumptive meanings: The theory of generalized conversational implicature, pp. 112-118, 116-167, 170-173, 177-180. MIT Press
  • Levinson, S. C. (2011). Reciprocals in Yélî Dnye, the Papuan language of Rossel Island. In N. Evans, A. Gaby, S. C. Levinson, & A. Majid (Eds.), Reciprocals and semantic typology (pp. 177-194). Amsterdam: Benjamins.

    Abstract

    Yélî Dnye has two discernible dedicated constructions for reciprocal marking. The first and main construction uses a dedicated reciprocal pronoun numo, somewhat like English each other. We can recognise two subconstructions. First, the ‘numo-construction’, where the reciprocal pronoun is a patient of the verb, and where the invariant pronoun numo is obligatorily incorporated, triggering intransitivisation (e.g. A-NPs become absolutive). This subconstruction has complexities; for example, in the punctual aspect only, the verb is inflected like a transitive, but with enclitics mismatching actual person/number. In the second variant or subconstruction, the ‘noko-construction’, the same reciprocal pronoun (sometimes case-marked as noko) occurs but now in oblique positions with either transitive or intransitive verbs. The reciprocal element here has some peculiar binding properties. The second, independent construction is a dedicated periphrastic (or woni…woni) construction, glossing ‘the one did X to the other, and the other did X to the one’. It is one of the rare cross-serial dependencies that show that natural languages cannot be modelled by context-free phrase-structure grammars. Finally, the usage of these two distinct constructions is discussed.
  • Levinson, S. C. (1995). Three levels of meaning. In F. Palmer (Ed.), Grammar and meaning: Essays in honour of Sir John Lyons (pp. 90-115). Cambridge: Cambridge University Press.
  • Levinson, S. C. (2011). Three levels of meaning [Reprint]. In A. Kasher (Ed.), Pragmatics II. London: Routledge.

    Abstract

    Reprint from Stephen C. Levinson, ‘Three Levels of Meaning’, in Frank Palmer (ed.), Grammar and Meaning: Essays in Honor of Sir John Lyons (Cambridge University Press, 1995), pp. 90–115
  • Levinson, S. C. (2011). Universals in pragmatics. In P. C. Hogan (Ed.), The Cambridge encyclopedia of the language sciences (pp. 654-657). New York: Cambridge University Press.

    Abstract

    Changing Prospects for Universals in Pragmatics
    The term PRAGMATICS has come to denote the study of general principles of language use. It is usually understood to contrast with SEMANTICS, the study of encoded meaning, and also, by some authors, to contrast with SOCIOLINGUISTICS and the ethnography of speaking, which are more concerned with local sociocultural practices. Given that pragmaticists come from disciplines as varied as philosophy, sociology, linguistics, communication studies, psychology, and anthropology, it is not surprising that definitions of pragmatics vary. Nevertheless, most authors agree on a list of topics that come under the rubric, including DEIXIS, PRESUPPOSITION, implicature (see CONVERSATIONAL IMPLICATURE), SPEECH-ACTS, and conversational organization (see CONVERSATIONAL ANALYSIS). Here, we can use this extensional definition as a starting point (Levinson 1988; Huang 2007).
  • Liszkowski, U., & Epps, P. (2003). Directing attention and pointing in infants: A cross-cultural approach. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 25-27). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877649.

    Abstract

    Recent research suggests that 12-month-old infants in German cultural settings have the motive of sharing their attention to and interest in various events with a social interlocutor. To do so, these preverbal infants predominantly use the pointing gesture (in this case the extended arm with or without extended index finger) as a means to direct another person’s attention. This task systematically investigates different types of motives underlying infants’ pointing. The occurrence of a protodeclarative (as opposed to protoimperative) motive is of particular interest because it requires an understanding of the recipient’s psychological states, such as attention and interest, that can be directed and accessed.
  • Majid, A., & Bödeker, K. (2003). Folk theories of objects in motion. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 72-76). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877654.

    Abstract

    There are three main strands of research which have investigated people’s intuitive knowledge of objects in motion. (1) Knowledge of the trajectories of objects in motion; (2) knowledge of the causes of motion; and (3) the categorisation of motion as to whether it has been produced by something animate or inanimate. We provide a brief introduction to each of these areas. We then point to some linguistic and cultural differences which may have consequences for people’s knowledge of objects in motion. Finally, we describe two experimental tasks and an ethnographic task that will allow us to collect data in order to establish whether, indeed, there are interesting cross-linguistic/cross-cultural differences in lay theories of objects in motion.
  • Majid, A., Evans, N., Gaby, A., & Levinson, S. C. (2011). The semantics of reciprocal constructions across languages: An extensional approach. In N. Evans, A. Gaby, S. C. Levinson, & A. Majid (Eds.), Reciprocals and semantic typology (pp. 29-60). Amsterdam: Benjamins.

    Abstract

    How similar are reciprocal constructions in the semantic parameters they encode? We investigate this question by using an extensional approach, which examines similarity of meaning by examining how constructions are applied over a set of 64 videoclips depicting reciprocal events (Evans et al. 2004). We apply statistical modelling to descriptions from speakers of 20 languages elicited using the videoclips. We show that there are substantial differences in meaning between constructions of different languages.

  • Majid, A., & Levinson, S. C. (2011). The language of perception across cultures [Abstract]. Abstracts of the XXth Congress of European Chemoreception Research Organization, ECRO-2010. Publ. in Chemical Senses, 36(1), E7-E8.

    Abstract

    How are the senses structured by the languages we speak, the cultures we inhabit? To what extent is the encoding of perceptual experiences in languages a matter of how the mind/brain is "wired up" and to what extent is it a question of local cultural preoccupation? The "Language of Perception" project tests the hypothesis that some perceptual domains may be more "ineffable" – i.e. difficult or impossible to put into words – than others. While cognitive scientists have assumed that proximate senses (olfaction, taste, touch) are more ineffable than distal senses (vision, hearing), anthropologists have illustrated the exquisite variation and elaboration the senses achieve in different cultural milieus. The project is designed to test whether the proximate senses are universally ineffable – suggesting an architectural constraint on cognition – or whether they are just accidentally so in Indo-European languages, so expanding the role of cultural interests and preoccupations. To address this question, a standardized set of stimuli of color patches, geometric shapes, simple sounds, tactile textures, smells and tastes has been used to elicit descriptions from speakers of more than twenty languages, including three sign languages. The languages are typologically, genetically and geographically diverse, representing a wide range of cultures. The communities sampled vary in subsistence modes (hunter-gatherer to industrial), ecological zones (rainforest jungle to desert), dwelling types (rural and urban), and various other parameters. We examine how codable the different sensory modalities are by comparing how consistent speakers are in how they describe the materials in each modality. Our current analyses suggest that taste may, in fact, be the most codable sensorial domain across languages. Moreover, we have identified exquisite elaboration in the olfactory domains in some cultural settings, contrary to some contemporary predictions within the cognitive sciences. These results suggest that differential codability may be at least partly the result of cultural preoccupation. This shows that the senses are not just physiological phenomena but are constructed through linguistic, cultural and social practices.
  • Malt, B. C., Ameel, E., Gennari, S., Imai, M., Saji, N., & Majid, A. (2011). Do words reveal concepts? In L. Carlson, C. Hölscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Conference of the Cognitive Science Society (pp. 519-524). Austin, TX: Cognitive Science Society.

    Abstract

    To study concepts, cognitive scientists must first identify some. The prevailing assumption is that they are revealed by words such as triangle, table, and robin. But languages vary dramatically in how they carve up the world by name. Either ordinary concepts must be heavily language-dependent or names cannot be a direct route to concepts. We asked English, Dutch, Spanish, and Japanese speakers to name videos of human locomotion and judge their similarities. We investigated what name inventories and scaling solutions on name similarity and on physical similarity for the groups individually and together suggest about the underlying concepts. Aggregated naming and similarity solutions converged on results distinct from the answers suggested by the word inventories and scaling solutions of any single language. Words such as triangle, table, and robin can help identify the conceptual space of a domain, but they do not directly reveal units of knowledge usefully considered 'concepts'.
  • Marcus, G., & Fisher, S. E. (2011). Genes and language. In P. Hogan (Ed.), The Cambridge encyclopedia of the language sciences (pp. 341-344). New York: Cambridge University Press.
  • Mark, D. M., Turk, A., Burenhult, N., & Stea, D. (2011). Landscape in language: An introduction. In D. M. Mark, A. G. Turk, N. Burenhult, & D. Stea (Eds.), Landscape in language: Transdisciplinary perspectives (pp. 1-24). Amsterdam: John Benjamins.
  • Mark, D. M., Turk, A., Burenhult, N., & Stea, D. (Eds.). (2011). Landscape in language: Transdisciplinary perspectives. Amsterdam: John Benjamins.

    Abstract

    Landscape is fundamental to human experience. Yet until recently, the study of landscape has been fragmented among the disciplines. This volume focuses on how landscape is represented in language and thought, and what this reveals about the relationships of people to place and to land. Scientists from various disciplines, including anthropologists, geographers, information scientists, linguists, and philosophers, address several questions, including: Are there cross-cultural and cross-linguistic variations in the delimitation, classification, and naming of geographic features? Can alternative world-views and conceptualizations of landscape be used to produce culturally-appropriate Geographic Information Systems (GIS)? Topics include the ontology of landscape; landscape terms and concepts; toponyms; spiritual aspects of land and landscape terms; research methods; ethical dimensions of the research; and its potential value to indigenous communities involved in this type of research.
  • de Marneffe, M.-C., Tomlinson, J. J., Tice, M., & Sumner, M. (2011). The interaction of lexical frequency and phonetic variation in the perception of accented speech. In L. Carlson, C. Hölscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Conference of the Cognitive Science Society (pp. 3575-3580). Austin, TX: Cognitive Science Society.

    Abstract

    How listeners understand spoken words despite massive variation in the speech signal is a central issue for linguistic theory. A recent focus on lexical frequency and specificity has proved fruitful in accounting for this phenomenon. Speech perception, though, is a multi-faceted process and likely incorporates a number of mechanisms to map a variable signal to meaning. We examine a well-established language use factor — lexical frequency — and how this factor is integrated with phonetic variability during the perception of accented speech. We show that an integrated perspective highlights a low-level perceptual mechanism that accounts for the perception of accented speech absent native contrasts, while shedding light on the use of interactive language factors in the perception of spoken words.
  • Martin, A., & Van Turennout, M. (2002). Searching for the neural correlates of object priming. In L. R. Squire, & D. L. Schacter (Eds.), The Neuropsychology of Memory (pp. 239-247). New York: Guilford Press.
  • Matsuo, A., & Duffield, N. (2002). Assessing the generality of knowledge about English ellipsis in SLA. In J. Costa, & M. J. Freitas (Eds.), Proceedings of the GALA 2001 Conference on Language Acquisition (pp. 49-53). Lisboa: Associacao Portuguesa de Linguistica.
  • Matsuo, A., & Duffield, N. (2002). Finiteness and parallelism: Assessing the generality of knowledge about English ellipsis in SLA. In B. Skarabela, S. Fish, & A.-H.-J. Do (Eds.), Proceedings of the 26th Boston University Conference on Language Development (pp. 197-207). Somerville, Massachusetts: Cascadilla Press.
  • Mauner, G., Koenig, J.-P., Melinger, A., & Bienvenue, B. (2002). The lexical source of unexpressed participants and their role in sentence and discourse understanding. In P. Merlo, & S. Stevenson (Eds.), The Lexical Basis of Sentence Processing: Formal, Computational and Experimental Issues (pp. 233-254). Amsterdam: John Benjamins.
  • McDonough, L., Choi, S., Bowerman, M., & Mandler, J. M. (1998). The use of preferential looking as a measure of semantic development. In C. Rovee-Collier, L. P. Lipsitt, & H. Hayne (Eds.), Advances in Infancy Research. Volume 12. (pp. 336-354). Stamford, CT: Ablex Publishing.
  • McQueen, J. M., & Cho, T. (2003). The use of domain-initial strengthening in segmentation of continuous English speech. In Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS 2003) (pp. 2993-2996). Adelaide: Causal Productions.
  • McQueen, J. M., Dahan, D., & Cutler, A. (2003). Continuity and gradedness in speech processing. In N. O. Schiller, & A. S. Meyer (Eds.), Phonetics and phonology in language comprehension and production: Differences and similarities (pp. 39-78). Berlin: Mouton de Gruyter.
  • McQueen, J. M., & Cutler, A. (1998). Morphology in word recognition. In A. M. Zwicky, & A. Spencer (Eds.), The handbook of morphology (pp. 406-427). Oxford: Blackwell.
  • McQueen, J. M., & Cutler, A. (1998). Spotting (different kinds of) words in (different kinds of) context. In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: Vol. 6 (pp. 2791-2794). Sydney: ICSLP.

    Abstract

    The results of a word-spotting experiment are presented in which Dutch listeners tried to spot different types of bisyllabic Dutch words embedded in different types of nonsense contexts. Embedded verbs were not reliably harder to spot than embedded nouns; this suggests that nouns and verbs are recognised via the same basic processes. Iambic words were no harder to spot than trochaic words, suggesting that trochaic words are not in principle easier to recognise than iambic words. Words were harder to spot in consonantal contexts (i.e., contexts which themselves could not be words) than in longer contexts which contained at least one vowel (i.e., contexts which, though not words, were possible words of Dutch). A control experiment showed that this difference was not due to acoustic differences between the words in each context. The results support the claim that spoken-word recognition is sensitive to the viability of sound sequences as possible words.
  • Meeuwissen, M., Roelofs, A., & Levelt, W. J. M. (2003). Naming analog clocks conceptually facilitates naming digital clocks. In Proceedings of XIII Conference of the European Society of Cognitive Psychology (ESCOP 2003) (p. 271).
  • Meira, S. (2003). 'Addressee effects' in demonstrative systems: The cases of Tiriyó and Brazilian Portuguese. In F. Lenz (Ed.), Deictic conceptualization of space, time and person (pp. 3-12). Amsterdam/Philadelphia: John Benjamins.
  • Meyer, A. S., & Dobel, C. (2003). Application of eye tracking in speech production research. In J. Hyönä, R. Radach, & H. Deubel (Eds.), The mind’s eye: Cognitive and applied aspects of eye movement research (pp. 253-272). Amsterdam: Elsevier.
  • Mitterer, H. (2011). Social accountability influences phonetic alignment. Journal of the Acoustical Society of America. Program abstracts of the 162nd Meeting of the Acoustical Society of America, 130(4), 2442.

    Abstract

    Speakers tend to take over the articulatory habits of their interlocutors [e.g., Pardo, JASA (2006)]. This phonetic alignment could be the consequence of either a social mechanism or a direct and automatic link between speech perception and production. The latter assumes that social variables should have little influence on phonetic alignment. To test this, participants were engaged in a "cloze task" (i.e., Stimulus: "In fantasy movies, silver bullets are used to kill ..." Response: "werewolves") with either one or four interlocutors. Given findings with the Asch-conformity paradigm in social psychology, multiple consistent speakers should exert a stronger force on the participant to align. To control the speech style of the interlocutors, their questions and answers were pre-recorded in either a formal or a casual speech style. The stimuli's speech style was then manipulated between participants and was consistent throughout the experiment for a given participant. Surprisingly, participants aligned less with the speech style if there were multiple interlocutors. This may reflect a "diffusion of responsibility": Participants may find it more important to align when they interact with only one person than with a larger group.
  • Moscoso del Prado Martín, F., & Baayen, R. H. (2003). Using the structure found in time: Building real-scale orthographic and phonetic representations by accumulation of expectations. In H. Bowman, & C. Labiouse (Eds.), Connectionist Models of Cognition, Perception and Emotion: Proceedings of the Eighth Neural Computation and Psychology Workshop (pp. 263-272). Singapore: World Scientific.
  • Nederstigt, U. (2003). Auch and noch in child and adult German. Berlin: Mouton de Gruyter.
  • Neijt, A., Schreuder, R., & Baayen, R. H. (2003). Verpleegsters, ambassadrices, and masseuses: Stratum differences in the comprehension of Dutch words with feminine agent suffixes. In L. Cornips, & P. Fikkert (Eds.), Linguistics in the Netherlands 2003. (pp. 117-127). Amsterdam: Benjamins.
  • Noordman, L. G., & Vonk, W. (1998). Discourse comprehension. In A. D. Friederici (Ed.), Language comprehension: a biological perspective (pp. 229-262). Berlin: Springer.

    Abstract

    The human language processor is conceived as a system that consists of several interrelated subsystems. Each subsystem performs a specific task in the complex process of language comprehension and production. A subsystem receives a particular input, performs certain specific operations on this input and yields a particular output. The subsystems can be characterized in terms of the transformations that relate the input representations to the output representations. An important issue in describing the language processing system is to identify the subsystems and to specify the relations between the subsystems. These relations can be conceived in two different ways. In one conception the subsystems are autonomous. They are related to each other only by the input-output channels. The operations in one subsystem are not affected by another system. The subsystems are modular, that is they are independent. In the other conception, the different subsystems influence each other. A subsystem affects the processes in another subsystem. In this conception there is an interaction between the subsystems.
  • Norcliffe, E., Enfield, N. J., Majid, A., & Levinson, S. C. (2011). The grammar of perception. In K. Kendrick, & A. Majid (Eds.), Field manual volume 14 (pp. 1-10). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Nordhoff, S., & Hammarström, H. (2011). Glottolog/Langdoc: Defining dialects, languages, and language families as collections of resources. Proceedings of the First International Workshop on Linked Science 2011 (LISC2011), Bonn, Germany, October 24, 2011.

    Abstract

    This paper describes the Glottolog/Langdoc project, an attempt to provide near-total bibliographical coverage of descriptive resources to the world's languages. Every reference is treated as a resource, as is every 'languoid' [1]. References are linked to the languoids which they describe, and languoids are linked to the references described by them. Family relations between languoids are modeled in SKOS, as are relations across different classifications of the same languages. This setup allows the representation of languoids as collections of references, rendering the question of the definition of entities like 'Scots', 'West-Germanic' or 'Indo-European' more empirical.
  • Oostdijk, N., & Broeder, D. (2003). The Spoken Dutch Corpus and its exploitation environment. In A. Abeille, S. Hansen-Schirra, & H. Uszkoreit (Eds.), Proceedings of the 4th International Workshop on linguistically interpreted corpora (LINC-03) (pp. 93-101).
  • Oostdijk, N., Goedertier, W., Van Eynde, F., Boves, L., Martens, J.-P., Moortgat, M., & Baayen, R. H. (2002). Experiences from the Spoken Dutch Corpus Project. In Third international conference on language resources and evaluation (pp. 340-347). Paris: European Language Resources Association.
  • Otake, T., & Cutler, A. (2003). Evidence against "units of perception". In S. Shohov (Ed.), Advances in psychology research (pp. 57-82). Hauppauge, NY: Nova Science.
  • Otake, T., Davis, S. M., & Cutler, A. (1995). Listeners’ representations of within-word structure: A cross-linguistic and cross-dialectal investigation. In J. Pardo (Ed.), Proceedings of EUROSPEECH 95: Vol. 3 (pp. 1703-1706). Madrid: European Speech Communication Association.

    Abstract

    Japanese, British English and American English listeners were presented with spoken words in their native language, and asked to mark on a written transcript of each word the first natural division point in the word. The results showed clear and strong patterns of consensus, indicating that listeners have available to them conscious representations of within-word structure. Orthography did not play a strongly deciding role in the results. The patterns of response were at variance with results from on-line studies of speech segmentation, suggesting that the present task taps not those representations used in on-line listening, but levels of representation which may involve much richer knowledge of word-internal structure.
  • Ouni, S., Cohen, M. M., Young, K., & Jesse, A. (2003). Internationalization of a talking head. In M. Sole, D. Recasens, & J. Romero (Eds.), Proceedings of the 15th International Congress of Phonetic Sciences (pp. 2569-2572). Barcelona: Causal Productions.

    Abstract

    In this paper we describe a general scheme for internationalization of our talking head, Baldi, to speak other languages. We describe the modular structure of the auditory/visual synthesis software. As an example, we have created a synthetic Arabic talker, which is evaluated using a noisy word recognition task comparing this talker with a natural one.
  • Ozyurek, A. (1998). An analysis of the basic meaning of Turkish demonstratives in face-to-face conversational interaction. In S. Santi, I. Guaitella, C. Cave, & G. Konopczynski (Eds.), Oralite et gestualite: Communication multimodale, interaction: actes du colloque ORAGE 98 (pp. 609-614). Paris: L'Harmattan.
  • Ozyurek, A. (2011). Language in our hands: The role of the body in language, cognition and communication [Inaugural lecture]. Nijmegen: Radboud University Nijmegen.

    Abstract

    Even though most studies of language have focused on the speech channel and/or viewed language as an amodal abstract system, there is growing evidence on the role our bodily actions/perceptions play in language and communication. In this context, Özyürek discusses what our meaningful visible bodily actions reveal about our language capacity. Conducting cross-linguistic, behavioral, and neurobiological research, she shows that co-speech gestures reflect the imagistic, iconic aspects of events talked about and at the same time interact with language production and comprehension processes. Sign languages can also be characterized as having an abstract system of linguistic categories as well as using iconicity in several aspects of the language structure and in its processing. Studying language multimodally reveals how grounded language is in our visible bodily actions and opens up new lines of research to study language in its situated, natural face-to-face context.
  • Ozyurek, A., & Perniss, P. M. (2011). Event representations in signed languages. In J. Bohnemeyer, & E. Pederson (Eds.), Event representations in language and cognition (pp. 84-107). New York: Cambridge University Press.
  • Ozyurek, A. (2002). Speech-gesture relationship across languages and in second language learners: Implications for spatial thinking and speaking. In B. Skarabela, S. Fish, & A. H. Do (Eds.), Proceedings of the 26th annual Boston University Conference on Language Development (pp. 500-509). Somerville, MA: Cascadilla Press.
  • Pederson, E. (1995). Questionnaire on event realization. In D. Wilkins (Ed.), Extensions of space and beyond: manual for field elicitation for the 1995 field season (pp. 54-60). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3004359.

    Abstract

    "Event realisation" refers to the normal final state of the affected entity of an activity described by a verb. For example, the sentence John killed the mosquito entails that the mosquito is afterwards dead – this is the full realisation of a killing event. By contrast, a sentence such as John hit the mosquito does not entail the mosquito’s death (even though we might assume this to be a likely result). In using a certain verb, which features of event realisation are entailed and which are just likely? This questionnaire supports cross-linguistic exploration of event realisation for a range of event types.
  • Perniss, P. M., Zwitserlood, I., & Ozyurek, A. (2011). Does space structure spatial language? Linguistic encoding of space in sign languages. In L. Carlson, C. Holscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Meeting of the Cognitive Science Society (pp. 1595-1600). Austin, TX: Cognitive Science Society.
  • Petersson, K. M., Forkstam, C., Inácio, F., Bramão, I., Araújo, S., Souza, A. C., Silva, S., & Castro, S. L. (2011). Artificial language learning. In A. Trevisan, & V. Wannmacher Pereira (Eds.), Alfabeltização e cognição (pp. 71-90). Porto Alegre, Brasil: Edipucrs.

    Abstract

    In this article we briefly review current behavioural and functional neuroimaging research on artificial language learning in children and adults. In the final section, we discuss a possible association between dyslexia and implicit learning. Recent results suggest that a deficit in implicit learning may contribute to the reading and writing difficulties observed in dyslexic individuals.
  • Petersson, K. M. (2002). Brain physiology. In R. Behn, & C. Veranda (Eds.), Proceedings of The 4th Southern European School of the European Physical Society - Physics in Medicine (pp. 37-38). Montreux: ESF.
  • Phillips, W., & Majid, A. (2011). Emotional sound symbolism. In K. Kendrick, & A. Majid (Eds.), Field manual volume 14 (pp. 16-18). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.1005615.
  • Plomp, R., & Levelt, W. J. M. (1966). Perception of tonal consonance. In M. A. Bouman (Ed.), Studies in Perception - dedicated to M.A. Bouman (pp. 105-118). Soesterberg: Institute for Perception RVO-TNO.
  • Poellmann, K., McQueen, J. M., & Mitterer, H. (2011). The time course of perceptual learning. In W.-S. Lee, & E. Zee (Eds.), Proceedings of the 17th International Congress of Phonetic Sciences 2011 [ICPhS XVII] (pp. 1618-1621). Hong Kong: Department of Chinese, Translation and Linguistics, City University of Hong Kong.

    Abstract

    Two groups of participants were trained to perceive an ambiguous sound [s/f] as either /s/ or /f/ based on lexical bias: One group heard the ambiguous fricative in /s/-final words, the other in /f/-final words. This kind of exposure leads to a recalibration of the /s/-/f/ contrast [e.g., 4]. In order to investigate when and how this recalibration emerges, test trials were interspersed among training and filler trials. The learning effect needed at least 10 clear training items to arise. Its emergence seemed to occur in a rather step-wise fashion. Learning did not improve much after it first appeared. It is likely, however, that the early test trials attracted participants' attention and therefore may have interfered with the learning process.
  • Rai, N. K., Rai, M., Paudyal, N. P., Schikowski, R., Bickel, B., Stoll, S., Gaenszle, M., Banjade, G., Rai, I. P., Bhatta, T. N., Sauppe, S., Rai, R. M., Rai, J. K., Rai, L. K., Rai, D. B., Rai, G., Rai, D., Rai, D. K., Rai, A., Rai, C. K., Rai, S. M., Rai, R. K., Pettigrew, J., & Dirksmeyer, T. (2011). छिन्ताङ शब्दकोश तथा व्याकरण [Chintang Dictionary and Grammar]. Kathmandu, Nepal: Chintang Language Research Program.
  • Rapold, C. J. (2011). Semantics of Khoekhoe reciprocal constructions. In N. Evans, A. Gaby, S. C. Levinson, & A. Majid (Eds.), Reciprocals and semantic typology (pp. 61-74). Amsterdam: Benjamins.

    Abstract

    This paper identifies four reciprocal construction types in Khoekhoe (Central Khoisan). After a brief description of the morphosyntax of each construction, semantic factors governing their choice are explored. Besides lexical semantics, the number of participants, timing of symmetric subevents, and symmetric conceptualisation are shown to account for the distribution of the four partially competing reciprocal constructions.
  • Reesink, G. (2002). The Eastern bird's head languages. In G. Reesink (Ed.), Languages of the Eastern Bird's Head (pp. 1-44). Canberra: Pacific Linguistics.
  • Reesink, G. (2002). A grammar sketch of Sougb. In G. Reesink (Ed.), Languages of the Eastern Bird's Head (pp. 181-275). Canberra: Pacific Linguistics.
  • Reesink, G. (2002). Mansim, a lost language of the Bird's Head. In G. Reesink (Ed.), Languages of the Eastern Bird's Head (pp. 277-340). Canberra: Pacific Linguistics.
  • Regier, T., Khetarpal, N., & Majid, A. (2011). Inferring conceptual structure from cross-language data. In L. Carlson, C. Hölscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Conference of the Cognitive Science Society (p. 1488). Austin, TX: Cognitive Science Society.
  • Reinisch, E., & Weber, A. (2011). Adapting to lexical stress in a foreign accent. In W.-S. Lee, & E. Zee (Eds.), Proceedings of the 17th International Congress of Phonetic Sciences 2011 [ICPhS XVII] (pp. 1678-1681). Hong Kong: Department of Chinese, Translation and Linguistics, City University of Hong Kong.

    Abstract

    An exposure-test paradigm was used to examine whether Dutch listeners can adapt their perception to non-canonical marking of lexical stress in Hungarian-accented Dutch. During exposure, one group of listeners heard only words with correct initial stress, while another group also heard examples of unstressed initial syllables that were marked by high pitch, a possible stress cue in Dutch. Subsequently, listeners’ eye movements to target-competitor pairs with segmental overlap but different stress patterns were tracked while hearing Hungarian-accented Dutch. Listeners who had heard non-canonically produced words previously distinguished target-competitor pairs faster than listeners who had only been exposed to canonical forms before. This suggests that listeners can adapt quickly to speaker-specific realizations of non-canonical lexical stress.
  • Reinisch, E., Weber, A., & Mitterer, H. (2011). Listeners retune phoneme boundaries across languages [Abstract]. Journal of the Acoustical Society of America. Program abstracts of the 162nd Meeting of the Acoustical Society of America, 130(4), 2572-2572.

    Abstract

    Listeners can flexibly retune category boundaries of their native language to adapt to non-canonically produced phonemes. This only occurs, however, if the pronunciation peculiarities can be attributed to stable and not transient speaker-specific characteristics. Listening to someone speaking a second language, listeners could attribute non-canonical pronunciations either to the speaker or to the fact that she is modifying her categories in the second language. We investigated whether, following exposure to Dutch-accented English, Dutch listeners show effects of category retuning during test where they hear the same speaker speaking her native language, Dutch. Exposure was a lexical-decision task where either word-final [f] or [s] was replaced by an ambiguous sound. At test listeners categorized minimal word pairs ending in sounds along an [f]-[s] continuum. Following exposure to English words, Dutch listeners showed boundary shifts of a similar magnitude as following exposure to the same phoneme variants in their native language. This suggests that production patterns in a second language are deemed a stable characteristic. A second experiment suggests that category retuning also occurs when listeners are exposed to and tested with a native speaker of their second language. Listeners thus retune phoneme boundaries across languages.
  • Reis, A., Faísca, L., & Petersson, K. M. (2011). Literacia: Modelo para o estudo dos efeitos de uma aprendizagem específica na cognição e nas suas bases cerebrais. In A. Trevisan, J. J. Mouriño Mosquera, & V. Wannmacher Pereira (Eds.), Alfabetização e cognição (pp. 23-36). Porto Alegre, Brasil: Edipucrs.

    Abstract

    The acquisition of reading and writing skills can be seen as a formal process of cultural transmission in which neurobiological and cultural factors interact. The systematic training required to learn to read and write may produce quantitative and qualitative changes both at the cognitive level and in the organization of the brain. Studying illiterate and literate subjects thus offers an opportunity to investigate the effects of a specific kind of learning on cognitive development and its brain bases. In this work, we review a set of behavioral and brain-imaging studies indicating that literacy has an impact on our cognitive functions and on brain organization. More specifically, we discuss differences between literate and illiterate subjects in verbal and non-verbal cognitive domains, suggesting that cognitive architecture is shaped, in part, by learning to read and write. Functional and structural neuroimaging data also indicate that the acquisition of an alphabetic orthography affects the organization and lateralization of cognitive functions.
  • Roberts, L., Pallotti, G., & Bettoni, C. (Eds.). (2011). EUROSLA Yearbook 2011. Amsterdam: John Benjamins.

    Abstract

    The annual conference of the European Second Language Association provides an opportunity for the presentation of second language research with a genuinely European flavour. The theoretical perspectives adopted are wide-ranging and may fall within traditions overlooked elsewhere. Moreover, the studies presented are largely multi-lingual and cross-cultural, as befits the make-up of modern-day Europe. At the same time, the work demonstrates sophisticated awareness of scholarly insights from around the world. The EUROSLA yearbook presents a selection each year of the very best research from the annual conference. Submissions are reviewed and professionally edited, and only those of the highest quality are selected. Contributions are in English.
  • Robinson, S. (2011). Reciprocals in Rotokas. In N. Evans, A. Gaby, S. C. Levinson, & A. Majid (Eds.), Reciprocals and semantic typology (pp. 195-211). Amsterdam: Benjamins.

    Abstract

    This paper describes the syntax and semantics of reciprocity in the Central dialect of Rotokas, a non-Austronesian (Papuan) language spoken in Bougainville, Papua New Guinea. In Central Rotokas, there are three main reciprocal construction types, which differ formally according to where the reflexive/reciprocal marker (ora-) occurs in the clause: on the verb, on a pronominal argument or adjunct, or on a body part noun. The choice of construction type is determined by two considerations: the valency of the verb (i.e., whether it has one or two core arguments) and whether the reciprocal action is performed on a body part. The construction types are compatible with a wide range of the logical subtypes of reciprocity (strong, melee, chaining, etc.).
  • Roelofs, A. (2002). Storage and computation in spoken word production. In S. Nooteboom, F. Weerman, & F. Wijnen (Eds.), Storage and computation in the language faculty (pp. 183-216). Dordrecht: Kluwer.
  • Roelofs, A. (2002). Modeling of lexical access in speech production: A psycholinguistic perspective on the lexicon. In L. Behrens, & D. Zaefferer (Eds.), The lexicon in focus: Competition and convergence in current lexicology (pp. 75-92). Frankfurt am Main: Lang.
  • Roelofs, A. (2003). Modeling the relation between the production and recognition of spoken word forms. In N. O. Schiller, & A. S. Meyer (Eds.), Phonetics and phonology in language comprehension and production: Differences and similarities (pp. 115-158). Berlin: Mouton de Gruyter.
  • Rubio-Fernández, P., Breheny, R., & Lee, M. W. (2003). Context-independent information in concepts: An investigation of the notion of ‘core features’. In Proceedings of the 25th Annual Conference of the Cognitive Science Society (CogSci 2003). Austin, TX: Cognitive Science Society.
  • De Ruiter, J. P. (2003). The function of hand gesture in spoken conversation. In M. Bickenbach, A. Klappert, & H. Pompe (Eds.), Manus Loquens: Medium der Geste, Gesten der Medien (pp. 338-347). Cologne: DuMont.
  • De Ruiter, J. P. (2003). A quantitative model of Störung. In A. Kümmel, & E. Schüttpelz (Eds.), Signale der Störung (pp. 67-81). München: Wilhelm Fink Verlag.
  • Sadakata, M., & McQueen, J. M. (2011). The role of variability in non-native perceptual learning of a Japanese geminate-singleton fricative contrast. In Proceedings of the 12th Annual Conference of the International Speech Communication Association (Interspeech 2011), Florence, Italy (pp. 873-876).

    Abstract

    The current study reports the enhancing effect of a high variability training procedure in the learning of a Japanese geminate-singleton fricative contrast. Dutch natives took part in a five-day training procedure in which they identified geminate and singleton variants of the Japanese fricative /s/. They heard either many repetitions of a limited set of words recorded by a single speaker (simple training) or fewer repetitions of a more variable set of words recorded by multiple speakers (variable training). Pre-post identification evaluations and a transfer test indicated clear benefits of the variable training.
  • Saito, H., & Kita, S. (2002). "Jesuchaa, kooi, imi" no henshuu ni atatte [On the occasion of editing "Jesuchaa, kooi, imi"]. In H. Saito, & S. Kita (Eds.), Kooi, jesuchaa, imi [Action, gesture, meaning] (pp. v-xi). Tokyo: Kyooritsu Shuppan.
  • Saito, H., & Kita, S. (Eds.). (2002). Jesuchaa, kooi, imi [Gesture, action, meaning]. Tokyo: Kyooritsu Shuppan.
  • Sauermann, A., Höhle, B., Chen, A., & Järvikivi, J. (2011). Intonational marking of focus in different word orders in German children. In M. B. Washburn, K. McKinney-Bock, E. Varis, & A. Sawyer (Eds.), Proceedings of the 28th West Coast Conference on Formal Linguistics (pp. 313-322). Somerville, MA: Cascadilla Proceedings Project.

    Abstract

    The use of word order and intonation to mark focus in child speech has received some attention. However, past work usually examined each device separately or only compared the realizations of focused vs. non-focused constituents. This paper investigates the interaction between word order and intonation in the marking of different focus types in 4- to 5-year-old German-speaking children and an adult control group. An answer-reconstruction task was used to elicit syntactic (word order) and intonational focus marking of subjects and objects (locus of focus) in three focus types (broad, narrow, and contrastive focus). The results indicate that both children and adults used intonation to distinguish broad from contrastive focus, but they differed in the marking of narrow focus. Further, both groups preferred intonation to word order as a device for focus marking. Children, however, showed an early sensitivity to the impact of focus type and focus location on word order variation and on the phonetic means used to mark focus.
  • Scharenborg, O., Boves, L., & de Veth, J. (2002). ASR in a human word recognition model: Generating phonemic input for Shortlist. In J. H. L. Hansen, & B. Pellom (Eds.), ICSLP 2002 - INTERSPEECH 2002 - 7th International Conference on Spoken Language Processing (pp. 633-636). ISCA Archive.

    Abstract

    The current version of the psycholinguistic model of human word recognition Shortlist suffers from two unrealistic constraints. First, the input of Shortlist must consist of a single string of phoneme symbols. Second, the current version of the search in Shortlist makes it difficult to deal with insertions and deletions in the input phoneme string. This research attempts to fully automatically derive a phoneme string from the acoustic signal that is as close as possible to the number of phonemes in the lexical representation of the word. We optimised an Automatic Phone Recogniser (APR) using two approaches, viz. varying the value of the mismatch parameter and optimising the APR output strings on the output of Shortlist. The approaches show that it will be very difficult to satisfy the input requirements of the present version of Shortlist with a phoneme string generated by an APR.
  • Scharenborg, O., McQueen, J. M., Ten Bosch, L., & Norris, D. (2003). Modelling human speech recognition using automatic speech recognition paradigms in SpeM. In Proceedings of Eurospeech 2003 (pp. 2097-2100). Adelaide: Causal Productions.

    Abstract

    We have recently developed a new model of human speech recognition, based on automatic speech recognition techniques [1]. The present paper has two goals. First, we show that the new model performs well in the recognition of lexically ambiguous input. These demonstrations suggest that the model is able to operate in the same optimal way as human listeners. Second, we discuss how to relate the behaviour of a recogniser, designed to discover the optimum path through a word lattice, to data from human listening experiments. We argue that this requires a metric that combines both path-based and word-based measures of recognition performance. The combined metric varies continuously as the input speech signal unfolds over time.
  • Scharenborg, O., Mitterer, H., & McQueen, J. M. (2011). Perceptual learning of liquids. In Proceedings of the 12th Annual Conference of the International Speech Communication Association (Interspeech 2011), Florence, Italy (pp. 149-152).

    Abstract

    Previous research on lexically-guided perceptual learning has focussed on contrasts that differ primarily in local cues, such as plosive and fricative contrasts. The present research had two aims: to investigate whether perceptual learning occurs for a contrast with non-local cues, the /l/-/r/ contrast, and to establish whether STRAIGHT can be used to create ambiguous sounds on an /l/-/r/ continuum. Listening experiments showed lexically-guided learning about the /l/-/r/ contrast. Listeners can thus tune in to unusual speech sounds characterised by non-local cues. Moreover, STRAIGHT can be used to create stimuli for perceptual learning experiments, opening up new research possibilities. Index Terms: perceptual learning, morphing, liquids, human word recognition, STRAIGHT.
  • Scharenborg, O., & Boves, L. (2002). Pronunciation variation modelling in a model of human word recognition. In Pronunciation Modeling and Lexicon Adaptation for Spoken Language Technology [PMLA-2002] (pp. 65-70).

    Abstract

    Due to pronunciation variation, many insertions and deletions of phones occur in spontaneous speech. The psycholinguistic model of human speech recognition Shortlist is not well able to deal with phone insertions and deletions and is therefore not well suited for dealing with real-life input. The research presented in this paper explains how Shortlist can benefit from pronunciation variation modelling in dealing with real-life input. Pronunciation variation was modelled by including variants into the lexicon of Shortlist. A series of experiments was carried out to find the optimal acoustic model set for transcribing the training material that was used as basis for the generation of the variants. The Shortlist experiments clearly showed that Shortlist benefits from pronunciation variation modelling. However, the performance of Shortlist stays far behind the performance of other, more conventional speech recognisers.
  • Scharenborg, O., ten Bosch, L., & Boves, L. (2003). Recognising 'real-life' speech with SpeM: A speech-based computational model of human speech recognition. In Eurospeech 2003 (pp. 2285-2288).

    Abstract

    In this paper, we present a novel computational model of human speech recognition – called SpeM – based on the theory underlying Shortlist. We will show that SpeM, in combination with an automatic phone recogniser (APR), is able to simulate the human speech recognition process from the acoustic signal to the ultimate recognition of words. This joint model takes an acoustic speech file as input and calculates the activation flows of candidate words on the basis of the degree of fit of the candidate words with the input. Experiments showed that SpeM outperforms Shortlist on the recognition of ‘real-life’ input. Furthermore, SpeM performs only slightly worse than an off-the-shelf full-blown automatic speech recogniser in which all words are equally probable, while it provides a transparent computationally elegant paradigm for modelling word activations in human word recognition.
  • Schiller, N. O., & Meyer, A. S. (Eds.). (2003). Phonetics and phonology in language comprehension and production. Differences and similarities. Berlin: Mouton de Gruyter.
  • Schiller, N. O., Costa, A., & Colomé, A. (2002). Phonological encoding of single words: In search of the lost syllable. In C. Gussenhoven, & N. Warner (Eds.), Laboratory Phonology VII (pp. 35-59). Berlin: Mouton de Gruyter.
  • Schiller, N. O., Schmitt, B., Peters, J., & Levelt, W. J. M. (2002). 'BAnana'or 'baNAna'? Metrical encoding during speech production [Abstract]. In M. Baumann, A. Keinath, & J. Krems (Eds.), Experimentelle Psychologie: Abstracts der 44. Tagung experimentell arbeitender Psychologen. (pp. 195). TU Chemnitz, Philosophische Fakultät.

    Abstract

    The time course of metrical encoding, i.e. stress, during speech production is investigated. In a first experiment, participants were presented with pictures whose bisyllabic Dutch names had initial or final stress (KAno 'canoe' vs. kaNON 'cannon'; capital letters indicate stressed syllables). Picture names were matched for frequency and object recognition latencies. When participants were asked to judge whether picture names had stress on the first or second syllable, they showed significantly faster decision times for initially stressed targets than for targets with final stress. Experiment 2 replicated this effect with trisyllabic picture names (faster RTs for penultimate stress than for ultimate stress). In our view, these results reflect the incremental phonological encoding process. Wheeldon and Levelt (1995) found that segmental encoding is a process running from the beginning to the end of words. Here, we present evidence that the metrical pattern of words, i.e. stress, is also encoded incrementally.
  • Schiller, N. O. (2002). From phonetics to cognitive psychology: Psycholinguistics has it all. In A. Braun, & H. Masthoff (Eds.), Phonetics and its Applications. Festschrift for Jens-Peter Köster on the Occasion of his 60th Birthday. [Beihefte zur Zeitschrift für Dialektologie und Linguistik; 121] (pp. 13-24). Stuttgart: Franz Steiner Verlag.
  • Schiller, N. O. (2003). Metrical stress in speech production: A time course study. In Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS 2003) (pp. 451-454). Adelaide: Causal Productions.

    Abstract

    This study investigated the encoding of metrical information during speech production in Dutch. In Experiment 1, participants were asked to judge whether bisyllabic picture names had initial or final stress. Results showed significantly faster decision times for initially stressed targets (e.g., LEpel 'spoon') than for targets with final stress (e.g., liBEL 'dragon fly'; capital letters indicate stressed syllables) and revealed that the monitoring latencies are not a function of the picture naming or object recognition latencies to the same pictures. Experiments 2 and 3 replicated the outcome of the first experiment with bi- and trisyllabic picture names. These results demonstrate that metrical information of words is encoded rightward incrementally during phonological encoding in speech production. The results of these experiments are in line with Levelt's model of phonological encoding.
  • Schiller, N. O., & Meyer, A. S. (2003). Introduction to the relation between speech comprehension and production. In N. O. Schiller, & A. S. Meyer (Eds.), Phonetics and phonology in language comprehension and production: Differences and similarities (pp. 1-8). Berlin: Mouton de Gruyter.
  • Schmiedtová, V., & Schmiedtová, B. (2002). The color spectrum in language: The case of Czech: Cognitive concepts, new idioms and lexical meanings. In H. Gottlieb, J. Mogensen, & A. Zettersten (Eds.), Proceedings of The 10th International Symposium on Lexicography (pp. 285-292). Tübingen: Max Niemeyer Verlag.

    Abstract

    The representative corpus SYN2000 in the Czech National Corpus (CNK) project contains 100 million word forms taken from different types of texts. I have tried to determine the extent and depth of the linguistic material in the corpus. First, I chose the adjectives denoting the basic colors of the spectrum, along with the other parts of speech (nouns and adverbs) derived from these adjectives. An analysis of three examples - black, white, and red - shows the extent of the linguistic wealth and diversity we are looking at: because of size limitations, no existing dictionary is capable of embracing all the analyzed nuances. Currently, we can only hope that the next dictionary of contemporary Czech, built on the basis of the Czech National Corpus, will be electronic. Without the size limitations, we would be able to include many of the fine nuances of the language.
  • Schmiedtová, B. (2003). The use of aspect in Czech L2. In D. Bittner, & N. Gagarina (Eds.), ZAS Papers in Linguistics (pp. 177-194). Berlin: Zentrum für Allgemeine Sprachwissenschaft.
  • Schmiedtová, B. (2003). Aspekt und Tempus im Deutschen und Tschechischen: Eine vergleichende Studie. In S. Höhne (Ed.), Germanistisches Jahrbuch Tschechien - Slowakei: Schwerpunkt Sprachwissenschaft (pp. 185-216). Praha: Lidové noviny.
  • Schreuder, R., Burani, C., & Baayen, R. H. (2003). Parsing and semantic opacity. In E. M. Assink, & D. Sandra (Eds.), Reading complex words (pp. 159-189). Dordrecht: Kluwer.
  • Schriefers, H., Meyer, A. S., & Levelt, W. J. M. (2002). Exploring the time course of lexical access in language production: Picture word interference studies. In G. Altmann (Ed.), Psycholinguistics: Critical Concepts in Psychology [vol. 5] (pp. 168-191). London: Routledge.
  • Seidl, A., & Johnson, E. K. (2003). Position and vowel quality effects in infant's segmentation of vowel-initial words. In Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS 2003) (pp. 2233-2236). Adelaide: Causal Productions.
  • Seifart, F. (2003). Encoding shape: Formal means and semantic distinctions. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 57-59). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877660.

    Abstract

    The basic idea behind this task is to find out how languages encode basic shape distinctions such as dimensionality, axial geometry, relative size, etc. More specifically, we want to find out (i) which formal means are used cross-linguistically to encode basic shape distinctions, and (ii) which are the semantic distinctions that are made in this domain. In languages with many shape-classifiers, these distinctions are encoded (at least partially) in classifiers. In other languages, positional verbs, descriptive modifiers, such as “flat”, “round”, or nouns such as “cube”, “ball”, etc. might be the preferred means. In this context, we also want to investigate what other “grammatical work” shape-encoding expressions possibly do in a given language, e.g. unitization of mass nouns, or anaphoric uses of shape-encoding classifiers, etc. This task further seeks to determine the role of shape-related parameters which underlie the design of objects in the semantics of the system under investigation.
  • Seifart, F. (2002). El sistema de clasificación nominal del miraña. Bogotá: CCELA/Universidad de los Andes.
  • Seifart, F. (2002). Shape-distinctions picture-object matching task, with 2002 supplement. In S. Kita (Ed.), 2002 Supplement (version 3) for the “Manual” for the field season 2001 (pp. 15-17). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Sekine, K. (2011). The development of spatial perspective in the description of large-scale environments. In G. Stam, & M. Ishino (Eds.), Integrating Gestures: The interdisciplinary nature of gesture (pp. 175-186). Amsterdam: John Benjamins Publishing Company.

    Abstract

    This research investigated developmental changes in children’s representations of large-scale environments as reflected in spontaneous gestures and speech produced during route descriptions. Four-, five-, and six-year-olds (N = 122) described the route from their nursery school to their own homes. Analysis of the children’s gestures showed that some 5- and 6-year-olds produced gestures that represented survey mapping, and they were categorized as a survey group. Children who did not produce such gestures were categorized as a route group. A comparison of the two groups revealed no significant differences in speech indices, with the exception that the survey group used significantly fewer right/left terms. As for gesture, the survey group produced more gestures than the route group. These results imply that an initial form of survey-map representation is acquired beginning at late preschool age.