Publications

  • Klein, W. (1980). Verbal planning in route directions. In H. Dechert, & M. Raupach (Eds.), Temporal variables in speech (pp. 159-168). Den Haag: Mouton.
  • Kuijpers, C. T., Coolen, R., Houston, D., & Cutler, A. (1998). Using the head-turning technique to explore cross-linguistic performance differences. In C. Rovee-Collier, L. Lipsitt, & H. Hayne (Eds.), Advances in infancy research: Vol. 12 (pp. 205-220). Stamford: Ablex.
  • Kuzla, C. (2003). Prosodically-conditioned variation in the realization of domain-final stops and voicing assimilation of domain-initial fricatives in German. In Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS 2003) (pp. 2829-2832). Adelaide: Causal Productions.
  • Lai, V. T. (2005). Language experience influences the conceptualization of TIME metaphor. In Proceedings of the II Conference on Metaphor in Language and Thought, Rio de Janeiro, Brazil, August 17-20, 2005.

    Abstract

    This paper examines the language-specific aspect of the TIME PASSING IS MOTION metaphor and suggests that the temporal construal of time can be influenced by a person's second language. Ahrens and Huang (2002) have divided the source domain of MOTION for the TIME metaphor into two special cases. In special case one, TIME PASSING is an object that moves towards an ego. For example, qimuokao kuai dao le "the final exam is approaching." In special case two, TIME PASSING is a point (to which a plural ego is attached) that moves across a landscape. For example, women kuai dao qimuokao le "we are approaching the final exam." In addition, in English the ego in special case one faces the future, while in Chinese the ego faces the past. The current experiment hypothesizes that English influences the choice of the orientation of the ego in native Chinese speakers who speak English as a second language. Fifty-four subjects are asked to switch the clock time one hour forward. Results show that native Chinese speakers living in a Chinese-speaking country tend to move the clock one hour forward to the past (92%), while native Chinese speakers living in an English-speaking country are less likely to do so (60%). This implies that the experience of English influences the conceptualization of time in Mandarin Chinese.
  • De Lange, F. P., Hagoort, P., & Toni, I. (2003). Differential fronto-parietal contributions to visual and motor imagery. NeuroImage, 19(2), e2094-e2095.

    Abstract

    Mental imagery is a cognitive process crucial to human reasoning. Numerous studies have characterized specific instances of this cognitive ability, as evoked by visual imagery (VI) or motor imagery (MI) tasks. However, it remains unclear which neural resources are shared between VI and MI, and which are exclusively related to MI. To address this issue, we have used fMRI to measure human brain activity during performance of VI and MI tasks. Crucially, we have modulated the imagery process by manipulating the degree of mental rotation necessary to solve the tasks. We focused our analysis on changes in neural signal as a function of the degree of mental rotation in each task.
  • Levelt, W. J. M. (1982). Cognitive styles in the use of spatial direction terms. In R. Jarvella, & W. Klein (Eds.), Speech, place, and action: Studies in deixis and related topics (pp. 251-268). Chichester: Wiley.
  • Levelt, W. J. M. (2005). Habitual perspective. In Proceedings of the 27th Annual Meeting of the Cognitive Science Society (CogSci 2005).
  • Levelt, C. C., Fikkert, P., & Schiller, N. O. (2003). Metrical priming in speech production. In Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS 2003) (pp. 2481-2485). Adelaide: Causal Productions.

    Abstract

    In this paper we report on four experiments in which we attempted to prime the stress position of Dutch bisyllabic target nouns. These nouns, picture names, had stress on either the first or the second syllable. Auditory prime words had either the same stress as the target or a different stress (e.g., WORtel – MOtor vs. koSTUUM – MOtor; capital letters indicate stressed syllables in prime – target pairs). Furthermore, half of the prime words were semantically related, the other half were unrelated. In none of the experiments was a stress priming effect found. This could mean that stress is not stored in the lexicon. An additional finding was that targets with initial stress elicited faster responses than targets with final stress. We hypothesize that bisyllabic words with final stress take longer to be encoded because this stress pattern is irregular with respect to the lexical distribution of bisyllabic stress patterns, even though it can be regular in terms of the metrical stress rules of Dutch.
  • Levelt, W. J. M. (1962). Motion breaking and the perception of causality. In A. Michotte (Ed.), Causalité, permanence et réalité phénoménales: Etudes de psychologie expérimentale (pp. 244-258). Louvain: Publications Universitaires.
  • Levelt, W. J. M., & Plomp, R. (1962). Musical consonance and critical bandwidth. In Proceedings of the 4th International Congress on Acoustics (pp. 55-55).
  • Levelt, W. J. M. (2004). Language. In G. Adelman, & B. H. Smith (Eds.), Elsevier's encyclopedia of neuroscience [CD-ROM] (3rd ed.). Amsterdam: Elsevier.
  • Levelt, W. J. M. (1982). Linearization in describing spatial networks. In S. Peters, & E. Saarinen (Eds.), Processes, beliefs, and questions (pp. 199-220). Dordrecht - Holland: D. Reidel.

    Abstract

    The topic of this paper is the way in which speakers order information in discourse. I will refer to this issue with the term "linearization", and will begin with two types of general remarks. The first one concerns the scope and relevance of the problem with reference to some existing literature. The second set of general remarks will be about the place of linearization in a theory of the speaker. The following, and main, part of this paper will be a summary report of research on linearization in a limited but well-defined domain of discourse, namely the description of spatial networks.
  • Levelt, W. J. M. (1980). On-line processing constraints on the properties of signed and spoken language. In U. Bellugi, & M. Studdert-Kennedy (Eds.), Signed and spoken language: Biological constraints on linguistic form (pp. 141-160). Weinheim: Verlag Chemie.

    Abstract

    It is argued that the dominantly successive nature of language is largely mode-independent and holds equally for sign and for spoken language. A preliminary distinction is made between what is simultaneous or successive in the signal, and what is in the process; these need not coincide, and it is the successiveness of the process that is at stake. It is then discussed, extensively for the word/sign level and in a more preliminary fashion for the clause and discourse level, that online processes are parallel in that they can simultaneously draw on various sources of knowledge (syntactic, semantic, pragmatic), but successive in that they can work at the interpretation of only one unit at a time. This seems to hold for both sign and spoken language. In the final section, conjectures are made about possible evolutionary explanations for these properties of language processing.
  • Levelt, W. J. M. (1980). Toegepaste aspecten van het taal-psychologisch onderzoek: Enkele inleidende overwegingen. In J. Matter (Ed.), Toegepaste aspekten van de taalpsychologie (pp. 3-11). Amsterdam: VU Boekhandel.
  • Levinson, S. C. (2003). Spatial language. In L. Nadel (Ed.), Encyclopedia of cognitive science (pp. 131-137). London: Nature Publishing Group.
  • Levinson, S. C. (1982). Caste rank and verbal interaction in Western Tamilnadu. In D. B. McGilvray (Ed.), Caste ideology and interaction (pp. 98-203). Cambridge University Press.
  • Levinson, S. C. (1998). Deixis. In J. L. Mey (Ed.), Concise encyclopedia of pragmatics (pp. 200-204). Amsterdam: Elsevier.
  • Levinson, S. C. (2004). Deixis. In L. Horn (Ed.), The handbook of pragmatics (pp. 97-121). Oxford: Blackwell.
  • Levinson, S. C. (2003). Contextualizing 'contextualization cues'. In S. Eerdmans, C. Prevignano, & P. Thibault (Eds.), Language and interaction: Discussions with John J. Gumperz (pp. 31-39). Amsterdam: John Benjamins.
  • Levinson, S. C. (2003). Language and cognition. In W. Frawley (Ed.), International Encyclopedia of Linguistics (pp. 459-463). Oxford: Oxford University Press.
  • Levinson, S. C. (2003). Language and mind: Let's get the issues straight! In D. Gentner, & S. Goldin-Meadow (Eds.), Language in mind: Advances in the study of language and cognition (pp. 25-46). Cambridge, MA: MIT Press.
  • Levinson, S. C. (1998). Minimization and conversational inference. In A. Kasher (Ed.), Pragmatics: Vol. 4 Presupposition, implicature and indirect speech acts (pp. 545-612). London: Routledge.
  • Levinson, S. C. (1982). Speech act theory: The state of the art. In V. Kinsella (Ed.), Surveys 2. Eight state-of-the-art articles on key areas in language teaching. Cambridge University Press.
  • Lindström, E. (2004). Melanesian kinship and culture. In A. Majid (Ed.), Field Manual Volume 9 (pp. 70-73). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.1552190.
  • Liszkowski, U., & Epps, P. (2003). Directing attention and pointing in infants: A cross-cultural approach. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 25-27). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877649.

    Abstract

    Recent research suggests that 12-month-old infants in German cultural settings have the motive of sharing their attention to and interest in various events with a social interlocutor. To do so, these preverbal infants predominantly use the pointing gesture (in this case the extended arm with or without extended index finger) as a means to direct another person’s attention. This task systematically investigates different types of motives underlying infants’ pointing. The occurrence of a protodeclarative (as opposed to protoimperative) motive is of particular interest because it requires an understanding of the recipient’s psychological states, such as attention and interest, that can be directed and accessed.
  • Magyari, L. (2005). A nyelv miért nem olyan, mint a szem? (Why is language not like the vertebrate eye?). In J. Gervain, K. Kovács, Á. Lukács, & M. Racsmány (Eds.), Az ezer arcú elme (The mind with a thousand faces) (first edition, pp. 452-460). Budapest: Akadémiai Kiadó.
  • Majid, A., Van Staden, M., & Enfield, N. J. (2004). The human body in cognition, brain, and typology. In K. Hovie (Ed.), Forum Handbook, 4th International Forum on Language, Brain, and Cognition - Cognition, Brain, and Typology: Toward a Synthesis (pp. 31-35). Sendai: Tohoku University.

    Abstract

    The human body is unique: it is both an object of perception and the source of human experience. Its universality makes it a perfect resource for asking questions about how cognition, brain and typology relate to one another. For example, we can ask how speakers of different languages segment and categorize the human body. A dominant view is that body parts are “given” by visual perceptual discontinuities, and that words are merely labels for these visually determined parts (e.g., Andersen, 1978; Brown, 1976; Lakoff, 1987). However, there are problems with this view. First, it ignores other perceptual information, such as somatosensory and motoric representations. By looking at the neural representations of sensory representations, we can test how much of the categorization of the human body can be done through perception alone. Second, we can look at language typology to see how much universality and variation there is in body-part categories. A comparison of a range of typologically, genetically and areally diverse languages shows that the perceptual view has only limited applicability (Majid, Enfield & van Staden, in press). For example, using a “coloring-in” task, where speakers of seven different languages were given a line drawing of a human body and asked to color in various body parts, Majid & van Staden (in prep) show that languages vary substantially in body part segmentation. For example, Jahai (Mon-Khmer) makes a lexical distinction between upper arm, lower arm, and hand, but Lavukaleve (Papuan Isolate) has just one word to refer to arm, hand, and leg. This shows that body part categorization is not a straightforward mapping of words to visually determined perceptual parts.
  • Majid, A., Van Staden, M., Boster, J. S., & Bowerman, M. (2004). Event categorization: A cross-linguistic perspective. In K. Forbus, D. Gentner, & T. Regier (Eds.), Proceedings of the 26th Annual Meeting of the Cognitive Science Society (pp. 885-890). Mahwah, NJ: Erlbaum.

    Abstract

    Many studies in cognitive science address how people categorize objects, but there has been comparatively little research on event categorization. This study investigated the categorization of events involving material destruction, such as “cutting” and “breaking”. Speakers of 28 typologically, genetically, and areally diverse languages described events shown in a set of video-clips. There was considerable cross-linguistic agreement in the dimensions along which the events were distinguished, but there was variation in the number of categories and the placement of their boundaries.
  • Majid, A., & Bödeker, K. (2003). Folk theories of objects in motion. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 72-76). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877654.

    Abstract

    There are three main strands of research which have investigated people’s intuitive knowledge of objects in motion. (1) Knowledge of the trajectories of objects in motion; (2) knowledge of the causes of motion; and (3) the categorisation of motion as to whether it has been produced by something animate or inanimate. We provide a brief introduction to each of these areas. We then point to some linguistic and cultural differences which may have consequences for people’s knowledge of objects in motion. Finally, we describe two experimental tasks and an ethnographic task that will allow us to collect data in order to establish whether, indeed, there are interesting cross-linguistic/cross-cultural differences in lay theories of objects in motion.
  • Massaro, D. W., & Jesse, A. (2005). The magic of reading: Too many influences for quick and easy explanations. In T. Trabasso, J. Sabatini, D. W. Massaro, & R. C. Calfee (Eds.), From orthography to pedagogy: Essays in honor of Richard L. Venezky (pp. 37-61). Mahwah, NJ: Lawrence Erlbaum Associates.

    Abstract

    Words are fundamental to reading and yet over a century of research has not masked the controversies around how words are recognized. We review some old and new research that disproves simple ideas such as words are read as wholes or are simply mapped directly to spoken language. We also review theory and research relevant to the question of sublexical influences in word recognition. We describe orthography and phonology, how they are related to each other and describe a series of new experiments on how these sources of information are processed. Tasks include lexical decision, perceptual identification, and naming. Dependent measures are reaction time, accuracy of performance, and a new measure, initial phoneme duration, that refers to the duration of the first phoneme when the target word is pronounced. Important factors in resolving the controversies include the realization that reading has multiple determinants, as well as evaluating the type of task, proper controls such as familiarity of the test items and accuracy of measurement of the response. We also address potential limitations with measures related to the mapping between orthography and phonology, and show that the existence of a sound-to-spelling consistency effect does not require interactive activation, but can be explained and predicted by a feedforward model, the Fuzzy logical model of perception.
  • Matsuo, A. (2004). Young children's understanding of ongoing vs. completion in present and perfective participles. In J. v. Kampen, & S. Baauw (Eds.), Proceedings of GALA 2003 (pp. 305-316). Utrecht: Netherlands Graduate School of Linguistics (LOT).
  • McDonough, L., Choi, S., Bowerman, M., & Mandler, J. M. (1998). The use of preferential looking as a measure of semantic development. In C. Rovee-Collier, L. P. Lipsitt, & H. Hayne (Eds.), Advances in infancy research: Vol. 12 (pp. 336-354). Stamford, CT: Ablex Publishing.
  • McQueen, J. M. (2005). Speech perception. In K. Lamberts, & R. Goldstone (Eds.), The Handbook of Cognition (pp. 255-275). London: Sage Publications.
  • McQueen, J. M. (2005). Spoken word recognition and production: Regular but not inseparable bedfellows. In A. Cutler (Ed.), Twenty-first century psycholinguistics: Four cornerstones (pp. 229-244). Mahwah, NJ: Erlbaum.
  • McQueen, J. M., & Cho, T. (2003). The use of domain-initial strengthening in segmentation of continuous English speech. In Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS 2003) (pp. 2993-2996). Adelaide: Causal Productions.
  • McQueen, J. M., Dahan, D., & Cutler, A. (2003). Continuity and gradedness in speech processing. In N. O. Schiller, & A. S. Meyer (Eds.), Phonetics and phonology in language comprehension and production: Differences and similarities (pp. 39-78). Berlin: Mouton de Gruyter.
  • McQueen, J. M., & Mitterer, H. (2005). Lexically-driven perceptual adjustments of vowel categories. In Proceedings of the ISCA Workshop on Plasticity in Speech Perception (PSP2005) (pp. 233-236).
  • McQueen, J. M., & Cutler, A. (1998). Morphology in word recognition. In A. M. Zwicky, & A. Spencer (Eds.), The handbook of morphology (pp. 406-427). Oxford: Blackwell.
  • McQueen, J. M., & Cutler, A. (1998). Spotting (different kinds of) words in (different kinds of) context. In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: Vol. 6 (pp. 2791-2794). Sydney: ICSLP.

    Abstract

    The results of a word-spotting experiment are presented in which Dutch listeners tried to spot different types of bisyllabic Dutch words embedded in different types of nonsense contexts. Embedded verbs were not reliably harder to spot than embedded nouns; this suggests that nouns and verbs are recognised via the same basic processes. Iambic words were no harder to spot than trochaic words, suggesting that trochaic words are not in principle easier to recognise than iambic words. Words were harder to spot in consonantal contexts (i.e., contexts which themselves could not be words) than in longer contexts which contained at least one vowel (i.e., contexts which, though not words, were possible words of Dutch). A control experiment showed that this difference was not due to acoustic differences between the words in each context. The results support the claim that spoken-word recognition is sensitive to the viability of sound sequences as possible words.
  • Meeuwissen, M., Roelofs, A., & Levelt, W. J. M. (2003). Naming analog clocks conceptually facilitates naming digital clocks. In Proceedings of XIII Conference of the European Society of Cognitive Psychology (ESCOP 2003) (pp. 271-271).
  • Meira, S. (2003). 'Addressee effects' in demonstrative systems: The cases of Tiriyó and Brazilian Portuguese. In F. Lenz (Ed.), Deictic conceptualization of space, time and person (pp. 3-12). Amsterdam/Philadelphia: John Benjamins.
  • Meyer, A. S., & Dobel, C. (2003). Application of eye tracking in speech production research. In J. Hyönä, R. Radach, & H. Deubel (Eds.), The mind’s eye: Cognitive and applied aspects of eye movement research (pp. 253-272). Amsterdam: Elsevier.
  • Meyer, A. S. (2004). The use of eye tracking in studies of sentence generation. In J. M. Henderson, & F. Ferreira (Eds.), The interface of language, vision, and action: Eye movements and the visual world (pp. 191-212). Hove: Psychology Press.
  • Mitterer, H. (2005). Short- and medium-term plasticity for speaker adaptation seem to be independent. In Proceedings of the ISCA Workshop on Plasticity in Speech Perception (PSP2005) (pp. 83-86).
  • Moscoso del Prado Martín, F., & Baayen, R. H. (2003). Using the structure found in time: Building real-scale orthographic and phonetic representations by accumulation of expectations. In H. Bowman, & C. Labiouse (Eds.), Connectionist Models of Cognition, Perception and Emotion: Proceedings of the Eighth Neural Computation and Psychology Workshop (pp. 263-272). Singapore: World Scientific.
  • Narasimhan, B., Bowerman, M., Brown, P., Eisenbeiss, S., & Slobin, D. I. (2004). "Putting things in places": Effekte linguistischer Typologie auf die Sprachentwicklung. In G. Plehn (Ed.), Jahrbuch der Max-Planck-Gesellschaft (pp. 659-663). Göttingen: Vandenhoeck & Ruprecht.

  • Neijt, A., Schreuder, R., & Baayen, R. H. (2004). Seven years later: The effect of spelling on interpretation. In L. Cornips, & J. Doetjes (Eds.), Linguistics in the Netherlands 2004 (pp. 134-145). Amsterdam: Benjamins.
  • Neijt, A., Schreuder, R., & Baayen, R. H. (2003). Verpleegsters, ambassadrices, and masseuses: Stratum differences in the comprehension of Dutch words with feminine agent suffixes. In L. Cornips, & P. Fikkert (Eds.), Linguistics in the Netherlands 2003 (pp. 117-127). Amsterdam: Benjamins.
  • Noordman, L. G., & Vonk, W. (1998). Discourse comprehension. In A. D. Friederici (Ed.), Language comprehension: a biological perspective (pp. 229-262). Berlin: Springer.

    Abstract

    The human language processor is conceived as a system that consists of several interrelated subsystems. Each subsystem performs a specific task in the complex process of language comprehension and production. A subsystem receives a particular input, performs certain specific operations on this input and yields a particular output. The subsystems can be characterized in terms of the transformations that relate the input representations to the output representations. An important issue in describing the language processing system is to identify the subsystems and to specify the relations between the subsystems. These relations can be conceived in two different ways. In one conception the subsystems are autonomous. They are related to each other only by the input-output channels. The operations in one subsystem are not affected by another system. The subsystems are modular; that is, they are independent. In the other conception, the different subsystems influence each other. A subsystem affects the processes in another subsystem. In this conception there is an interaction between the subsystems.
  • O'Connor, L. (2004). Going getting tired: Associated motion through space and time in Lowland Chontal. In M. Achard, & S. Kemmer (Eds.), Language, culture and mind (pp. 181-199). Stanford: CSLI.
  • Oostdijk, N., & Broeder, D. (2003). The Spoken Dutch Corpus and its exploitation environment. In A. Abeille, S. Hansen-Schirra, & H. Uszkoreit (Eds.), Proceedings of the 4th International Workshop on linguistically interpreted corpora (LINC-03) (pp. 93-101).
  • Otake, T., & Cutler, A. (2003). Evidence against "units of perception". In S. Shohov (Ed.), Advances in psychology research (pp. 57-82). Hauppauge, NY: Nova Science.
  • Ouni, S., Cohen, M. M., Young, K., & Jesse, A. (2003). Internationalization of a talking head. In M. Sole, D. Recasens, & J. Romero (Eds.), Proceedings of the 15th International Congress of Phonetic Sciences (pp. 2569-2572). Barcelona: Causal Productions.

    Abstract

    In this paper we describe a general scheme for internationalization of our talking head, Baldi, to speak other languages. We describe the modular structure of the auditory/visual synthesis software. As an example, we have created a synthetic Arabic talker, which is evaluated using a noisy word recognition task comparing this talker with a natural one.
  • Ozyurek, A. (1998). An analysis of the basic meaning of Turkish demonstratives in face-to-face conversational interaction. In S. Santi, I. Guaitella, C. Cave, & G. Konopczynski (Eds.), Oralité et gestualité: Communication multimodale, interaction: actes du colloque ORAGE 98 (pp. 609-614). Paris: L'Harmattan.
  • Petersson, K. M., Grenholm, P., & Forkstam, C. (2005). Artificial grammar learning and neural networks. In B. G. Bara, L. Barsalou, & M. Bucciarelli (Eds.), Proceedings of the 27th Annual Conference of the Cognitive Science Society (pp. 1726-1731).

    Abstract

    Recent fMRI studies indicate that language-related brain regions are engaged in artificial grammar (AG) processing. In the present study we investigate the Reber grammar by means of formal analysis and network simulations. We outline a new method for describing the network dynamics and propose an approach to grammar extraction based on the state-space dynamics of the network. We conclude that statistical frequency-based and rule-based acquisition procedures can be viewed as complementary perspectives on grammar learning and, more generally, that classical cognitive models can be viewed as a special case of a dynamical systems perspective on information processing.
  • Poletiek, F. H. (2005). The proof of the pudding is in the eating: Translating Popper's philosophy into a model for testing behaviour. In K. I. Manktelow, & M. C. Chung (Eds.), Psychology of reasoning: Theoretical and historical perspectives (pp. 333-347). Hove: Psychology Press.
  • Poletiek, F. H., & Stolker, C. J. J. M. (2004). Who decides the worth of an arm and a leg? Assessing the monetary value of nonmonetary damage. In E. Kurz-Milcke, & G. Gigerenzer (Eds.), Experts in science and society (pp. 201-213). New York: Kluwer Academic/Plenum Publishers.
  • Randall, J., Van Hout, A., Weissenborn, J., & Baayen, R. H. (2004). Acquiring unaccusativity: A cross-linguistic look. In A. Alexiadou (Ed.), The unaccusativity puzzle (pp. 332-353). Oxford: Oxford University Press.
  • Reesink, G. (2004). Interclausal relations. In G. Booij (Ed.), Morphologie / morphology (pp. 1202-1207). Berlin: Mouton de Gruyter.
  • Roelofs, A. (2005). Spoken word planning, comprehending, and self-monitoring: Evaluation of WEAVER++. In R. Hartsuiker, R. Bastiaanse, A. Postma, & F. Wijnen (Eds.), Phonological encoding and monitoring in normal and pathological speech (pp. 42-63). Hove: Psychology press.
  • Roelofs, A. (2004). The seduced speaker: Modeling of cognitive control. In A. Belz, R. Evans, & P. Piwek (Eds.), Natural language generation (pp. 1-10). Berlin: Springer.

    Abstract

    Although humans are the ultimate “natural language generators”, the area of psycholinguistic modeling has been somewhat underrepresented in recent approaches to Natural Language Generation in computer science. To draw attention to the area and illustrate its potential relevance to Natural Language Generation, I provide an overview of recent work on psycholinguistic modeling of language production together with some key empirical findings, state-of-the-art experimental techniques, and their historical roots. The techniques include analyses of speech-error corpora, chronometric analyses, eyetracking, and neuroimaging.
    The overview is built around the issue of cognitive control in natural language generation, concentrating on the production of single words, which is an essential ingredient of the generation of larger utterances. Most of the work exploited the fact that human speakers are good but not perfect at resisting temptation, which has provided some critical clues about the nature of the underlying system.
  • Roelofs, A. (2005). From Popper to Lakatos: A case for cumulative computational modeling. In A. Cutler (Ed.), Twenty-first century psycholinguistics: Four cornerstones (pp. 313-330). Mahwah, NJ: Erlbaum.
  • Roelofs, A. (2003). Modeling the relation between the production and recognition of spoken word forms. In N. O. Schiller, & A. S. Meyer (Eds.), Phonetics and phonology in language comprehension and production: Differences and similarities (pp. 115-158). Berlin: Mouton de Gruyter.
  • Roelofs, A., & Schiller, N. (2004). Produzieren von Ein- und Mehrwortäusserungen. In G. Plehn (Ed.), Jahrbuch der Max-Planck-Gesellschaft (pp. 655-658). Göttingen: Vandenhoeck & Ruprecht.
  • Rubio-Fernández, P., Breheny, R., & Lee, M. W. (2003). Context-independent information in concepts: An investigation of the notion of ‘core features’. In Proceedings of the 25th Annual Conference of the Cognitive Science Society (CogSci 2003). Austin, TX: Cognitive Science Society.
  • De Ruiter, J. P. (2004). On the primacy of language in multimodal communication. In Workshop Proceedings on Multimodal Corpora: Models of Human Behaviour for the Specification and Evaluation of Multimodal Input and Output Interfaces (LREC 2004) (pp. 38-41). Paris: ELRA - European Language Resources Association (CD-ROM).

    Abstract

    In this paper, I will argue that although the study of multimodal interaction offers exciting new prospects for Human Computer Interaction and human-human communication research, language is the primary form of communication, even in multimodal systems. I will support this claim with theoretical and empirical arguments, mainly drawn from human-human communication research, and will discuss the implications for multimodal communication research and Human-Computer Interaction.
  • De Ruiter, J. P. (2003). The function of hand gesture in spoken conversation. In M. Bickenbach, A. Klappert, & H. Pompe (Eds.), Manus Loquens: Medium der Geste, Gesten der Medien (pp. 338-347). Cologne: DuMont.
  • De Ruiter, J. P. (2003). A quantitative model of Störung. In A. Kümmel, & E. Schüttpelz (Eds.), Signale der Störung (pp. 67-81). München: Wilhelm Fink Verlag.
  • De Ruiter, J. P. (2004). Response systems and signals of recipiency. In A. Majid (Ed.), Field Manual Volume 9 (pp. 53-55). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.506961.

    Abstract

    Listeners’ signals of recipiency, such as “Mm-hm” or “uh-huh” in English, are the most elementary or minimal “conversational turns” possible. Minimal, because apart from acknowledging recipiency and inviting the speaker to continue with his/her next turn, they do not add any new information to the discourse of the conversation. The goal of this project is to gather cross-cultural information on listeners’ feedback behaviour during conversation. Listeners in a conversation usually provide short signals that indicate to the speaker that they are still “with the speaker”. These signals could be verbal (like for instance “mm hm” in English or “hm hm” in Dutch) or nonverbal (visual), like nodding. Often, these signals are produced in overlap with the speaker’s vocalisation. If listeners do not produce these signals, speakers often invite them explicitly (e.g. “are you still there?” in a telephone conversation). Our goal is to investigate what kind of signals are used by listeners of different languages to signal “recipiency” to the speaker.
  • Sauter, D., Scott, S., & Calder, A. (2004). Categorisation of vocally expressed positive emotion: A first step towards basic positive emotions? [Abstract]. Proceedings of the British Psychological Society, 12, 111.

    Abstract

    Most of the study of basic emotion expressions has focused on facial expressions, and little work has been done to specifically investigate happiness, the only positive one among the basic emotions (Ekman & Friesen, 1971). However, it has been suggested theoretically that happiness could be broken down into discrete positive emotions, which each fulfil the criteria of basic emotions, and that these would be expressed vocally (Ekman, 1992). To empirically test this hypothesis, 20 participants categorised 80 paralinguistic sounds using the labels achievement, amusement, contentment, pleasure and relief. The results suggest that achievement, amusement and relief are perceived as distinct categories, which subjects accurately identify. In contrast, the categories of contentment and pleasure were systematically confused with other responses, although performance was still well above chance levels. These findings are initial evidence that the positive emotions engage distinct vocal expressions and may be considered to be distinct emotion categories.
  • Sauter, D., Wiland, J., Warren, J., Eisner, F., Calder, A., & Scott, S. K. (2005). Sounds of joy: An investigation of vocal expressions of positive emotions [Abstract]. Journal of Cognitive Neuroscience, 61(Supplement), B99.

    Abstract

    A series of experiments tested Ekman’s (1992) hypothesis that there is a set of positive basic emotions that are expressed using vocal para-linguistic sounds, e.g. laughter and cheers. The proposed categories investigated were amusement, contentment, pleasure, relief and triumph. Behavioural testing using a forced-choice task indicated that participants were able to reliably recognize vocal expressions of the proposed emotions. A cross-cultural study in the preliterate Himba culture in Namibia confirmed that these categories are also recognized across cultures. A recognition test of acoustically manipulated emotional vocalizations established that the recognition of different emotions utilizes different vocal cues, and that these in turn differ from the cues used when comprehending speech. In a study using fMRI we found that, relative to a signal-correlated noise baseline, the paralinguistic expressions of emotion activated bilateral superior temporal gyri and sulci, lateral and anterior to primary auditory cortex, which is consistent with the processing of non-linguistic vocal cues in the auditory ‘what’ pathway. Notably, amusement was associated with greater activation extending into both temporal poles, the amygdala, and insular cortex. Overall, these results support the claim that ‘happiness’ can be fractionated into amusement, pleasure, relief and triumph.
  • Scharenborg, O., & Seneff, S. (2005). A two-pass strategy for handling OOVs in a large vocabulary recognition task. In Interspeech'2005 - Eurospeech, 9th European Conference on Speech Communication and Technology (pp. 1669-1672). ISCA Archive.

    Abstract

    This paper addresses the issue of large-vocabulary recognition in a specific word class. We propose a two-pass strategy in which only major cities are explicitly represented in the first-stage lexicon. An unknown-word model encoded as a phone loop is used to detect OOV city names (referred to as rare city names), after which SpeM, a tool that can extract words and word-initial cohorts from phone graphs on the basis of a large fallback lexicon, provides an N-best list of promising city names on the basis of the phone sequences generated in the first stage. This N-best list is then inserted into the second-stage lexicon for a subsequent recognition pass. Experiments were conducted on a set of spontaneous telephone-quality utterances, each containing one rare city name. We tested the size of the N-best list and three types of language models (LMs). The experiments showed that SpeM was able to include nearly 85% of the correct city names in an N-best list of 3000 city names when a unigram LM, which also boosted the unigram scores of a city name in a given state, was used.
  • Scharenborg, O., Boves, L., & Ten Bosch, L. (2004). ‘On-line early recognition’ of polysyllabic words in continuous speech. In S. Cassidy, F. Cox, R. Mannell, & P. Sallyanne (Eds.), Proceedings of the Tenth Australian International Conference on Speech Science & Technology (pp. 387-392). Canberra: Australian Speech Science and Technology Association Inc.

    Abstract

    In this paper, we investigate the ability of SpeM, our recognition system based on the combination of an automatic phone recogniser and a word-search module, to determine as early as possible during the word recognition process whether a word is likely to be recognised correctly (which we refer to as ‘on-line’ early word recognition). We present two measures that can be used to predict whether a word is correctly recognised: the Bayesian word activation and the amount of available (acoustic) information for a word. SpeM was tested on 1,463 polysyllabic words in 885 continuous speech utterances. The investigated predictors indicated that a word activation that is (1) high (but not too high) and (2) based on more phones predicts the correctness of a word more reliably than a similarly high activation based on a small number of phones or a lower value of the word activation.
  • Scharenborg, O., McQueen, J. M., Ten Bosch, L., & Norris, D. (2003). Modelling human speech recognition using automatic speech recognition paradigms in SpeM. In Proceedings of Eurospeech 2003 (pp. 2097-2100). Adelaide: Causal Productions.

    Abstract

    We have recently developed a new model of human speech recognition, based on automatic speech recognition techniques [1]. The present paper has two goals. First, we show that the new model performs well in the recognition of lexically ambiguous input. These demonstrations suggest that the model is able to operate in the same optimal way as human listeners. Second, we discuss how to relate the behaviour of a recogniser, designed to discover the optimum path through a word lattice, to data from human listening experiments. We argue that this requires a metric that combines both path-based and word-based measures of recognition performance. The combined metric varies continuously as the input speech signal unfolds over time.
  • Scharenborg, O. (2005). Parallels between HSR and ASR: How ASR can contribute to HSR. In Interspeech'2005 - Eurospeech, 9th European Conference on Speech Communication and Technology (pp. 1237-1240). ISCA Archive.

    Abstract

    In this paper, we illustrate the close parallels between the research fields of human speech recognition (HSR) and automatic speech recognition (ASR) using a computational model of human word recognition, SpeM, which was built using techniques from ASR. We show that ASR has proven to be useful for improving models of HSR by relieving them of some of their shortcomings. However, in order to build an integrated computational model of all aspects of HSR, a lot of issues remain to be resolved. In this process, ASR algorithms and techniques definitely can play an important role.
  • Scharenborg, O., ten Bosch, L., & Boves, L. (2003). Recognising 'real-life' speech with SpeM: A speech-based computational model of human speech recognition. In Eurospeech 2003 (pp. 2285-2288).

    Abstract

    In this paper, we present a novel computational model of human speech recognition – called SpeM – based on the theory underlying Shortlist. We will show that SpeM, in combination with an automatic phone recogniser (APR), is able to simulate the human speech recognition process from the acoustic signal to the ultimate recognition of words. This joint model takes an acoustic speech file as input and calculates the activation flows of candidate words on the basis of the degree of fit of the candidate words with the input. Experiments showed that SpeM outperforms Shortlist on the recognition of ‘real-life’ input. Furthermore, SpeM performs only slightly worse than an off-the-shelf full-blown automatic speech recogniser in which all words are equally probable, while it provides a transparent computationally elegant paradigm for modelling word activations in human word recognition.
  • Schiller, N. O. (2005). Verbal self-monitoring. In A. Cutler (Ed.), Twenty-first century psycholinguistics: Four cornerstones (pp. 245-261). Mahwah, NJ: Lawrence Erlbaum.
  • Schiller, N. O. (2003). Metrical stress in speech production: A time course study. In Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS 2003) (pp. 451-454). Adelaide: Causal Productions.

    Abstract

    This study investigated the encoding of metrical information during speech production in Dutch. In Experiment 1, participants were asked to judge whether bisyllabic picture names had initial or final stress. Results showed significantly faster decision times for initially stressed targets (e.g., LEpel 'spoon') than for targets with final stress (e.g., liBEL 'dragon fly'; capital letters indicate stressed syllables) and revealed that the monitoring latencies are not a function of the picture naming or object recognition latencies to the same pictures. Experiments 2 and 3 replicated the outcome of the first experiment with bi- and trisyllabic picture names. These results demonstrate that metrical information of words is encoded rightward incrementally during phonological encoding in speech production. The results of these experiments are in line with Levelt's model of phonological encoding.
  • Schiller, N. O., & Meyer, A. S. (2003). Introduction to the relation between speech comprehension and production. In N. O. Schiller, & A. S. Meyer (Eds.), Phonetics and phonology in language comprehension and production: Differences and similarities (pp. 1-8). Berlin: Mouton de Gruyter.
  • Schmiedtová, B. (2003). The use of aspect in Czech L2. In D. Bittner, & N. Gagarina (Eds.), ZAS Papers in Linguistics (pp. 177-194). Berlin: Zentrum für Allgemeine Sprachwissenschaft.
  • Schmiedtová, B. (2003). Aspekt und Tempus im Deutschen und Tschechischen: Eine vergleichende Studie. In S. Höhne (Ed.), Germanistisches Jahrbuch Tschechien - Slowakei: Schwerpunkt Sprachwissenschaft (pp. 185-216). Praha: Lidové noviny.
  • Schmitt, B. M., Schiller, N. O., Rodriguez-Fornells, A., & Münte, T. F. (2004). Elektrophysiologische Studien zum Zeitverlauf von Sprachprozessen. In H. H. Müller, & G. Rickheit (Eds.), Neurokognition der Sprache (pp. 51-70). Tübingen: Stauffenburg.
  • Schreuder, R., Burani, C., & Baayen, R. H. (2003). Parsing and semantic opacity. In E. M. Assink, & D. Sandra (Eds.), Reading complex words (pp. 159-189). Dordrecht: Kluwer.
  • Scott, D. R., & Cutler, A. (1982). Segmental cues to syntactic structure. In Proceedings of the Institute of Acoustics 'Spectral Analysis and its Use in Underwater Acoustics' (pp. E3.1-E3.4). London: Institute of Acoustics.
  • Scott, S., & Sauter, D. (2004). Vocal expressions of emotion and positive and negative basic emotions [Abstract]. Proceedings of the British Psychological Society, 12, 156.

    Abstract

    Previous studies have indicated that vocal and facial expressions of the ‘basic’ emotions share aspects of processing. Thus amygdala damage compromises the perception of fear and anger from the face and from the voice. In the current study we tested the hypothesis that there exist positive basic emotions, expressed mainly in the voice (Ekman, 1992). Vocal stimuli were produced to express the specific positive emotions of amusement, achievement, pleasure, contentment and relief.
  • Seidl, A., & Johnson, E. K. (2003). Position and vowel quality effects in infants' segmentation of vowel-initial words. In Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS 2003) (pp. 2233-2236). Adelaide: Causal Productions.
  • Seifart, F. (2003). Encoding shape: Formal means and semantic distinctions. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 57-59). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877660.

    Abstract

    The basic idea behind this task is to find out how languages encode basic shape distinctions such as dimensionality, axial geometry, relative size, etc. More specifically, we want to find out (i) which formal means are used cross-linguistically to encode basic shape distinctions, and (ii) which are the semantic distinctions that are made in this domain. In languages with many shape-classifiers, these distinctions are encoded (at least partially) in classifiers. In other languages, positional verbs, descriptive modifiers, such as “flat”, “round”, or nouns such as “cube”, “ball”, etc. might be the preferred means. In this context, we also want to investigate what other “grammatical work” shape-encoding expressions possibly do in a given language, e.g. unitization of mass nouns, or anaphoric uses of shape-encoding classifiers, etc. This task further seeks to determine the role of shape-related parameters which underlie the design of objects in the semantics of the system under investigation.
  • Senft, G. (2004). Sprache, Kognition und Konzepte des Raumes in verschiedenen Kulturen - Zum Problem der Interdependenz sprachlicher und mentaler Strukturen. In L. Jäger (Ed.), Medialität und Mentalität (pp. 163-176). Paderborn: Wilhelm Fink.
  • Senft, G. (2004). What do we really know about serial verb constructions in Austronesian and Papuan languages? In I. Bril, & F. Ozanne-Rivierre (Eds.), Complex predicates in Oceanic languages (pp. 49-64). Berlin: Mouton de Gruyter.
  • Senft, G. (2003). Wosi Milamala: Weisen von Liebe und Tod auf den Trobriand Inseln. In I. Bobrowski (Ed.), Anabasis: Prace Ofiarowane Professor Krystynie Pisarkowej (pp. 289-295). Kraków: LEXIS.
  • Senft, G. (2003). Zur Bedeutung der Sprache für die Feldforschung. In B. Beer (Ed.), Methoden und Techniken der Feldforschung (pp. 55-70). Berlin: Reimer.
  • Senft, G. (2004). Wosi tauwau topaisewa - songs about migrant workers from the Trobriand Islands. In A. Graumann (Ed.), Towards a dynamic theory of language. Festschrift for Wolfgang Wildgen on occasion of his 60th birthday (pp. 229-241). Bochum: Universitätsverlag Dr. N. Brockmeyer.
  • Senft, G., & Labov, W. (1980). Einige Prinzipien linguistischer Methodologie [transl. from English by Gunter Senft]. In N. Dittmar, & B. O. Rieck (Eds.), William Labov: Sprache im sozialen Kontext (pp. 1-24). Königstein: Athenäum FAT.
  • Senft, G. (1998). 'Noble Savages' and the 'Islands of Love': Trobriand Islanders in 'Popular Publications'. In J. Wassmann (Ed.), Pacific answers to Western hegemony: Cultural practices of identity construction (pp. 119-140). Oxford: Berg Publishers.
  • Senft, G. (2003). Ethnographic Methods. In W. Deutsch, T. Hermann, & G. Rickheit (Eds.), Psycholinguistik - Ein internationales Handbuch [Psycholinguistics - An International Handbook] (pp. 106-114). Berlin: Walter de Gruyter.
  • Senft, G. (2003). Ethnolinguistik. In B. Beer, & H. Fischer (Eds.), Ethnologie: Einführung und Überblick. 5. Aufl., Neufassung (pp. 255-270). Berlin: Reimer.
  • Senft, G. (2005). Bronislaw Malinowski and linguistic pragmatics. In P. Cap (Ed.), Pragmatics today (pp. 139-155). Frankfurt am Main: Lang.
  • Senft, G. (2004). Aspects of spatial deixis in Kilivila. In G. Senft (Ed.), Deixis and demonstratives in Oceanic languages (pp. 59-80). Canberra: Pacific Linguistics.
  • Senft, G. (2004). Introduction. In G. Senft (Ed.), Deixis and demonstratives in Oceanic languages (pp. 1-13). Canberra: Pacific Linguistics.
  • Senft, G., & Labov, W. (1980). Hyperkorrektheit der unteren Mittelschicht als Faktor im Sprachwandel [transl. from English by Gunter Senft]. In N. Dittmar, & B. O. Rieck (Eds.), William Labov: Sprache im sozialen Kontext (pp. 77-94). Königstein: Athenäum FAT.
