Publications

  • Norman, D. A., & Levelt, W. J. M. (1988). Life at the center. In W. Hirst (Ed.), The making of cognitive science: essays in honor of George A. Miller (pp. 100-109). Cambridge University Press.
  • Norris, D., Van Ooijen, B., & Cutler, A. (1992). Speeded detection of vowels and steady-state consonants. In J. Ohala, T. Neary, & B. Derwing (Eds.), Proceedings of the Second International Conference on Spoken Language Processing; Vol. 2 (pp. 1055-1058). Alberta: University of Alberta.

    Abstract

    We report two experiments in which vowels and steady-state consonants served as targets in a speeded detection task. In the first experiment, two vowels were compared with one voiced and one unvoiced fricative. Response times (RTs) to the vowels were longer than to the fricatives. The error rate was higher for the consonants. Consonants in word-final position produced the shortest RTs. For the vowels, RT correlated negatively with target duration. In the second experiment, the same two vowel targets were compared with two nasals. This time there was no significant difference in RTs, but the error rate was still significantly higher for the consonants. Error rate and length correlated negatively for the vowels only. We conclude that RT differences between phonemes are independent of vocalic or consonantal status. Instead, we argue that the process of phoneme detection reflects more finely grained differences in acoustic/articulatory structure within the phonemic repertoire.
  • O'Connor, L. (2004). Going getting tired: Associated motion through space and time in Lowland Chontal. In M. Achard, & S. Kemmer (Eds.), Language, culture and mind (pp. 181-199). Stanford: CSLI.
  • Ozyurek, A. (1998). An analysis of the basic meaning of Turkish demonstratives in face-to-face conversational interaction. In S. Santi, I. Guaitella, C. Cave, & G. Konopczynski (Eds.), Oralite et gestualite: Communication multimodale, interaction: actes du colloque ORAGE 98 (pp. 609-614). Paris: L'Harmattan.
  • Ozyurek, A., & Kita, S. (1999). Expressing manner and path in English and Turkish: Differences in speech, gesture, and conceptualization. In M. Hahn, & S. C. Stoness (Eds.), Proceedings of the Twenty-first Annual Conference of the Cognitive Science Society (pp. 507-512). London: Erlbaum.
  • Pacheco, A., Araújo, S., Faísca, L., Petersson, K. M., & Reis, A. (2009). Profiling dyslexic children: Phonology and visual naming skills. In Abstracts presented at the International Neuropsychological Society, Finnish Neuropsychological Society, Joint Mid-Year Meeting July 29-August 1, 2009. Helsinki, Finland & Tallinn, Estonia (p. 40). Retrieved from http://www.neuropsykologia.fi/ins2009/INS_MY09_Abstract.pdf.
  • Perdue, C., & Klein, W. (1992). Conclusions. In W. Klein, & C. Perdue (Eds.), Utterance structure: Developing grammars again (pp. 301-337). Amsterdam: Benjamins.
  • Perdue, C., & Klein, W. (1992). Introduction. In W. Klein, & C. Perdue (Eds.), Utterance structure: Developing grammars again (pp. 1-10). Amsterdam: Benjamins.
  • Petersson, K. M., Ingvar, M., & Reis, A. (2009). Language and literacy from a cognitive neuroscience perspective. In D. Olsen, & N. Torrance (Eds.), Cambridge handbook of literacy (pp. 152-181). Cambridge: Cambridge University Press.
  • Poletiek, F. H., & Stolker, C. J. J. M. (2004). Who decides the worth of an arm and a leg? Assessing the monetary value of nonmonetary damage. In E. Kurz-Milcke, & G. Gigerenzer (Eds.), Experts in science and society (pp. 201-213). New York: Kluwer Academic/Plenum Publishers.
  • Ramus, F., & Fisher, S. E. (2009). Genetics of language. In M. S. Gazzaniga (Ed.), The cognitive neurosciences, 4th ed. (pp. 855-871). Cambridge, MA: MIT Press.

    Abstract

    It has long been hypothesised that the human faculty to acquire a language is in some way encoded in our genetic program. However, only recently has genetic evidence been available to begin to substantiate the presumed genetic basis of language. Here we review the first data from molecular genetic studies showing association between gene variants and language disorders (specific language impairment, speech sound disorder, developmental dyslexia), we discuss the biological function of these genes, and we further speculate on the more general question of how the human genome builds a brain that can learn a language.
  • Randall, J., Van Hout, A., Weissenborn, J., & Baayen, R. H. (2004). Acquiring unaccusativity: A cross-linguistic look. In A. Alexiadou (Ed.), The unaccusativity puzzle (pp. 332-353). Oxford: Oxford University Press.
  • Rapold, C. J., & Zaugg-Coretti, S. (2009). Exploring the periphery of the central Ethiopian Linguistic area: Data from Yemsa and Benchnon. In J. Crass, & R. Meyer (Eds.), Language contact and language change in Ethiopia (pp. 59-81). Köln: Köppe.
  • Reesink, G. (2009). A connection between Bird's Head and (Proto) Oceanic. In B. Evans (Ed.), Discovering history through language, papers in honor of Malcolm Ross (pp. 181-192). Canberra: Pacific Linguistics.
  • Reesink, G. (2004). Interclausal relations. In G. Booij (Ed.), Morphologie / morphology (pp. 1202-1207). Berlin: Mouton de Gruyter.
  • Ringersma, J., Zinn, C., & Kemps-Snijders, M. (2009). LEXUS & ViCoS: From lexical to conceptual spaces. In 1st International Conference on Language Documentation and Conservation (ICLDC).

    Abstract

    LEXUS is a web-based lexicon tool and the knowledge space software ViCoS is an extension of LEXUS, allowing users to create relations between objects in and across lexica. LEXUS and ViCoS are part of the Language Archiving Technology software, developed at the MPI for Psycholinguistics to archive and enrich linguistic resources collected in the framework of language documentation projects. LEXUS is of primary interest for language documentation, offering the possibility not just to create a digital dictionary, but also to create multi-media encyclopedic lexica. ViCoS provides an interface between the lexical space and the ontological space. Its approach permits users to model a world of concepts and their interrelations based on categorization patterns made by the speech community. We describe the LEXUS and ViCoS functionalities using three cases from DoBeS language documentation projects: (1) Marquesan: The Marquesan lexicon was initially created in Toolbox and imported into LEXUS using the Toolbox import functionality. The lexicon is enriched with multi-media to illustrate the meaning of the words in its cultural environment. Members of the speech community consider words as keys to access and describe relevant parts of their life and traditions. Their understanding of words is best described by the various associations they evoke rather than in terms of any formal theory of meaning. Using ViCoS a knowledge space of related concepts is being created. (2) Kola-Sámi: Two lexica are being created in LEXUS: the RuSaDic lexicon is a Russian-Kildin wordlist in which the entries are of relatively limited structure and content; SaRuDiC is a more complex structured lexicon with much richer content, including multi-media fragments and derivations. Using ViCoS we have created a connection between the two lexica, so that speakers who are familiar with Russian and wish to revitalize their Kildin can enter the lexicon through RuSaDic and from there approach the informative SaRuDiC. Similarly we will create relations from the two lexica to external open databases, such as Álgu. (3) Beaver: A speaker database including kinship relations has been created and imported into LEXUS. In the LEXUS views the relations for individual speakers are displayed. Using ViCoS the relational information from the database will be extracted to form a kinship relation space with specific relation types, such as 'mother-of'. The whole set of relations from the database can be displayed in one ViCoS relation window, and zoom functionality is available.
  • Roelofs, A. (2004). The seduced speaker: Modeling of cognitive control. In A. Belz, R. Evans, & P. Piwek (Eds.), Natural language generation. (pp. 1-10). Berlin: Springer.

    Abstract

    Although humans are the ultimate “natural language generators”, the area of psycholinguistic modeling has been somewhat underrepresented in recent approaches to Natural Language Generation in computer science. To draw attention to the area and illustrate its potential relevance to Natural Language Generation, I provide an overview of recent work on psycholinguistic modeling of language production together with some key empirical findings, state-of-the-art experimental techniques, and their historical roots. The techniques include analyses of speech-error corpora, chronometric analyses, eyetracking, and neuroimaging.
    The overview is built around the issue of cognitive control in natural language generation, concentrating on the production of single words, which is an essential ingredient of the generation of larger utterances. Most of the work exploited the fact that human speakers are good but not perfect at resisting temptation, which has provided some critical clues about the nature of the underlying system.
  • Roelofs, A., & Schiller, N. (2004). Produzieren von Ein- und Mehrwortäusserungen. In G. Plehn (Ed.), Jahrbuch der Max-Planck Gesellschaft (pp. 655-658). Göttingen: Vandenhoeck & Ruprecht.
  • Rossano, F., Brown, P., & Levinson, S. C. (2009). Gaze, questioning and culture. In J. Sidnell (Ed.), Conversation analysis: Comparative perspectives (pp. 187-249). Cambridge University Press.

    Abstract

    Relatively little work has examined the function of gaze in interaction. Previous research has mainly addressed issues such as next speaker selection (e.g. Lerner 2003) or engagement and disengagement in the conversation (Goodwin 1981). It has looked for gaze behavior in relation to the roles participants are enacting locally (e.g., speaker or hearer) and in relation to the unit “turn” in the turn-taking system (Goodwin 1980, 1981; Kendon 1967). In his seminal work Kendon (1967) claimed that “there is a very clear and quite consistent pattern, namely, that [the speaker] tends to look away as he begins a long utterance, and in many cases somewhat in advance of it; and that he looks up at his interlocutor as the end of the long utterance approaches, usually during the last phase, and he continues to look thereafter.” Goodwin (1980), introducing the listener into the picture, proposed the following two rules: Rule 1: A speaker should obtain the gaze of his recipient during the course of a turn of talk. Rule 2: A recipient should be gazing at the speaker when the speaker is gazing at the hearer. Rossano’s work (2005) has suggested the possibility of a different level of order for gaze in interaction: the sequential level. In particular he found that gaze withdrawal after sustained mutual gaze tends to occur at sequence possible completion, and if both participants withdraw, the sequence is complete. By sequence here we refer to a unit that is structured around the notion of adjacency pair. The latter refers to two turns uttered by different speakers, organized in order (first part and second part) and related by pair type (greeting-greeting, question-answer). These two turns are related by conditional relevance (Schegloff 1968), that is to say that the first part requires the production of the second and the absence of the latter is noticeable and accountable. Question-answer sequences are very typical examples of adjacency pairs. In this paper we compare the use of gaze in question-answer sequences in three different populations: Italians, speakers of Mayan Tzeltal (Mexico) and speakers of Yélî Dnye (Rossel Island, Papua New Guinea). Relying mainly on dyadic interactions and ordinary conversation, we will provide a comparison of the occurrence of gaze in each turn (to compare with the claims of Goodwin and Kendon) and we will describe whether gaze has any effect on the other participant’s response and whether it also persists during the answer. The three languages and cultures that will be compared here belong to three different continents and have been previously described as potentially following opposite rules: for speakers of Italian and Yélî Dnye, unproblematic and preferred engagement of mutual gaze, while for speakers of Tzeltal, strong mutual gaze avoidance. This paper tries to provide an accurate description of their gaze behavior in this specific type of conversational sequence.
  • De Ruiter, J. P. (2004). On the primacy of language in multimodal communication. In Workshop Proceedings on Multimodal Corpora: Models of Human Behaviour for the Specification and Evaluation of Multimodal Input and Output Interfaces (LREC 2004) (pp. 38-41). Paris: ELRA - European Language Resources Association (CD-ROM).

    Abstract

    In this paper, I will argue that although the study of multimodal interaction offers exciting new prospects for Human Computer Interaction and human-human communication research, language is the primary form of communication, even in multimodal systems. I will support this claim with theoretical and empirical arguments, mainly drawn from human-human communication research, and will discuss the implications for multimodal communication research and Human-Computer Interaction.
  • De Ruiter, J. P. (2004). Response systems and signals of recipiency. In A. Majid (Ed.), Field Manual Volume 9 (pp. 53-55). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.506961.

    Abstract

    Listeners’ signals of recipiency, such as “Mm-hm” or “uh-huh” in English, are the most elementary or minimal “conversational turns” possible. Minimal, because apart from acknowledging recipiency and inviting the speaker to continue with his/her next turn, they do not add any new information to the discourse of the conversation. The goal of this project is to gather cross cultural information on listeners’ feedback behaviour during conversation. Listeners in a conversation usually provide short signals that indicate to the speaker that they are still “with the speaker”. These signals could be verbal (like for instance “mm hm” in English or “hm hm” in Dutch) or nonverbal (visual), like nodding. Often, these signals are produced in overlap with the speaker’s vocalisation. If listeners do not produce these signals, speakers often invite them explicitly (e.g. “are you still there?” in a telephone conversation). Our goal is to investigate what kind of signals are used by listeners of different languages to signal “recipiency” to the speaker.
  • Salomo, D., & Liszkowski, U. (2009). Socialisation of prelinguistic communication. In A. Majid (Ed.), Field manual volume 12 (pp. 56-57). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.844597.

    Abstract

    Little is known about cultural differences in interactional practices with infants. The goal of this task is to document the nature and emergence of caregiver-infant interaction/communication in different cultures. There are two tasks: Task 1 – a brief documentation about the culture under investigation with respect to infant-caregiver interaction and parental beliefs. Task 2 – the “decorated room”, a task designed to elicit infant and caregiver interaction.
  • Sankoff, G., & Brown, P. (2009). The origins of syntax in discourse: A case study of Tok Pisin relatives [reprint of 1976 article in Language]. In J. Holm, & S. Michaelis (Eds.), Contact languages (vol. II) (pp. 433-476). London: Routledge.
  • Sauter, D., Scott, S., & Calder, A. (2004). Categorisation of vocally expressed positive emotion: A first step towards basic positive emotions? [Abstract]. Proceedings of the British Psychological Society, 12, 111.

    Abstract

    Most of the study of basic emotion expressions has focused on facial expressions and little work has been done to specifically investigate happiness, the only positive emotion among the basic emotions (Ekman & Friesen, 1971). However, a theoretical suggestion has been made that happiness could be broken down into discrete positive emotions, which each fulfil the criteria of basic emotions, and that these would be expressed vocally (Ekman, 1992). To empirically test this hypothesis, 20 participants categorised 80 paralinguistic sounds using the labels achievement, amusement, contentment, pleasure and relief. The results suggest that achievement, amusement and relief are perceived as distinct categories, which subjects accurately identify. In contrast, the categories of contentment and pleasure were systematically confused with other responses, although performance was still well above chance levels. These findings are initial evidence that the positive emotions engage distinct vocal expressions and may be considered to be distinct emotion categories.
  • Sauter, D. (2009). Emotion concepts. In A. Majid (Ed.), Field manual volume 12 (pp. 20-30). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.883578.

    Abstract

    The goal of this task is to investigate emotional categories across linguistic and cultural boundaries. There are three core tasks. In order to conduct this task you will need emotional vocalisation stimuli on your computer and you must translate the scenarios at the end of this entry into your local language.
  • Sauter, D., Eisner, F., Ekman, P., & Scott, S. K. (2009). Universal vocal signals of emotion. In N. Taatgen, & H. Van Rijn (Eds.), Proceedings of the 31st Annual Meeting of the Cognitive Science Society (CogSci 2009) (pp. 2251-2255). Cognitive Science Society.

    Abstract

    Emotional signals allow for the sharing of important information with conspecifics, for example to warn them of danger. Humans use a range of different cues to communicate to others how they feel, including facial, vocal, and gestural signals. Although much is known about facial expressions of emotion, less research has focused on affect in the voice. We compare British listeners to individuals from remote Namibian villages who have had no exposure to Western culture, and examine recognition of non-verbal emotional vocalizations, such as screams and laughs. We show that a number of emotions can be universally recognized from non-verbal vocal signals. In addition we demonstrate the specificity of this pattern, with a set of additional emotions only recognized within, but not across these cultural groups. Our findings indicate that a small set of primarily negative emotions have evolved signals across several modalities, while most positive emotions are communicated with culture-specific signals.
  • Scharenborg, O., Boves, L., & Ten Bosch, L. (2004). ‘On-line early recognition’ of polysyllabic words in continuous speech. In S. Cassidy, F. Cox, R. Mannell, & S. Palethorpe (Eds.), Proceedings of the Tenth Australian International Conference on Speech Science & Technology (pp. 387-392). Canberra: Australian Speech Science and Technology Association Inc.

    Abstract

    In this paper, we investigate the ability of SpeM, our recognition system based on the combination of an automatic phone recogniser and a wordsearch module, to determine as early as possible during the word recognition process whether a word is likely to be recognised correctly (this we refer to as ‘on-line’ early word recognition). We present two measures that can be used to predict whether a word is correctly recognised: the Bayesian word activation and the amount of available (acoustic) information for a word. SpeM was tested on 1,463 polysyllabic words in 885 continuous speech utterances. The investigated predictors indicated that a word activation that is 1) high (but not too high) and 2) based on more phones is more reliable to predict the correctness of a word than a similarly high value based on a small number of phones or a lower value of the word activation.
  • Scharenborg, O., & Okolowski, S. (2009). Lexical embedding in spoken Dutch. In INTERSPEECH 2009 - 10th Annual Conference of the International Speech Communication Association (pp. 1879-1882). ISCA Archive.

    Abstract

    A stretch of speech is often consistent with multiple words, e.g., the sequence /hæm/ is consistent with ‘ham’ but also with the first syllable of ‘hamster’, resulting in temporary ambiguity. However, to what degree does this lexical embedding occur? Analyses on two corpora of spoken Dutch showed that 11.9%-19.5% of polysyllabic word tokens have word-initial embedding, while 4.1%-7.5% of monosyllabic word tokens can appear word-initially embedded. This is much lower than suggested by an analysis of a large dictionary of Dutch. Speech processing thus appears to be simpler than one might expect on the basis of statistics on a dictionary.
  • Scharenborg, O. (2009). Using durational cues in a computational model of spoken-word recognition. In INTERSPEECH 2009 - 10th Annual Conference of the International Speech Communication Association (pp. 1675-1678). ISCA Archive.

    Abstract

    Evidence that listeners use durational cues to help resolve temporarily ambiguous speech input has accumulated over the past few years. In this paper, we investigate whether durational cues are also beneficial for word recognition in a computational model of spoken-word recognition. Two sets of simulations were carried out using the acoustic signal as input. The simulations showed that the computational model, like humans, benefits from durational cues during word recognition, and uses these to disambiguate the speech signal. These results thus provide support for the theory that durational cues play a role in spoken-word recognition.
  • Schiller, N. O., Van Lieshout, P. H. H. M., Meyer, A. S., & Levelt, W. J. M. (1999). Does the syllable affiliation of intervocalic consonants have an articulatory basis? Evidence from electromagnetic midsagittal articulography. In B. Maassen, & P. Groenen (Eds.), Pathologies of speech and language. Advances in clinical phonetics and linguistics (pp. 342-350). London: Whurr Publishers.
  • Schimke, S. (2009). Does finiteness mark assertion? A picture selection study with Turkish learners and native speakers of German. In C. Dimroth, & P. Jordens (Eds.), Functional categories in learner language (pp. 169-202). Berlin: Mouton de Gruyter.
  • Schmitt, B. M., Schiller, N. O., Rodriguez-Fornells, A., & Münte, T. F. (2004). Elektrophysiologische Studien zum Zeitverlauf von Sprachprozessen. In H. H. Müller, & G. Rickheit (Eds.), Neurokognition der Sprache (pp. 51-70). Tübingen: Stauffenburg.
  • Schuppler, B., Van Dommelen, W., Koreman, J., & Ernestus, M. (2009). Word-final [t]-deletion: An analysis on the segmental and sub-segmental level. In Proceedings of the 10th Annual Conference of the International Speech Communication Association (Interspeech 2009) (pp. 2275-2278). Causal Productions Pty Ltd.

    Abstract

    This paper presents a study on the reduction of word-final [t]s in conversational standard Dutch. Based on a large number of tokens annotated on the segmental level, we show that bigram frequency and segmental context are the main predictors for the absence of [t]s. In a second study, we present an analysis of the detailed acoustic properties of word-final [t]s and we show that bigram frequency and context also play a role on the sub-segmental level. This paper extends research on the realization of /t/ in spontaneous speech and shows the importance of incorporating sub-segmental properties in models of speech.
  • Scott, S. K., Sauter, D., & McGettigan, C. (2009). Brain mechanisms for processing perceived emotional vocalizations in humans. In S. M. Brudzynski (Ed.), Handbook of mammalian vocalization: An integrative neuroscience approach (pp. 187-198). London: Academic Press.

    Abstract

    Humans express emotional information in their facial expressions and body movements, as well as in their voice. In this chapter we consider the neural processing of a specific kind of vocal expressions, non-verbal emotional vocalizations e.g. laughs and sobs. We outline evidence, from patient studies and functional imaging studies, for both emotion specific and more general processing of emotional information in the voice. We relate these findings to evidence for both basic and dimensional accounts of the representations of emotion. We describe in detail an fMRI study of positive and negative non-verbal expressions of emotion, which revealed that prefrontal areas involved in the control of oro-facial movements were also sensitive to different kinds of vocal emotional information.
  • Scott, D. R., & Cutler, A. (1982). Segmental cues to syntactic structure. In Proceedings of the Institute of Acoustics 'Spectral Analysis and its Use in Underwater Acoustics' (pp. E3.1-E3.4). London: Institute of Acoustics.
  • Scott, S., & Sauter, D. (2004). Vocal expressions of emotion and positive and negative basic emotions [Abstract]. Proceedings of the British Psychological Society, 12, 156.

    Abstract

    Previous studies have indicated that vocal and facial expressions of the ‘basic’ emotions share aspects of processing. Thus amygdala damage compromises the perception of fear and anger from the face and from the voice. In the current study we tested the hypothesis that there exist positive basic emotions, expressed mainly in the voice (Ekman, 1992). Vocal stimuli were produced to express the specific positive emotions of amusement, achievement, pleasure, contentment and relief.
  • Senft, G. (2004). Sprache, Kognition und Konzepte des Raumes in verschiedenen Kulturen - Zum Problem der Interdependenz sprachlicher und mentaler Strukturen. In L. Jäger (Ed.), Medialität und Mentalität (pp. 163-176). Paderborn: Wilhelm Fink.
  • Senft, G. (2004). What do we really know about serial verb constructions in Austronesian and Papuan languages? In I. Bril, & F. Ozanne-Rivierre (Eds.), Complex predicates in Oceanic languages (pp. 49-64). Berlin: Mouton de Gruyter.
  • Senft, G. (2004). Wosi tauwau topaisewa - songs about migrant workers from the Trobriand Islands. In A. Graumann (Ed.), Towards a dynamic theory of language. Festschrift for Wolfgang Wildgen on occasion of his 60th birthday (pp. 229-241). Bochum: Universitätsverlag Dr. N. Brockmeyer.
  • Senft, G. (1991). Bakavilisi Biga - we can 'turn' the language - or: What happens to English words in Kilivila language? In W. Bahner, J. Schildt, & D. Viehwegger (Eds.), Proceedings of the XIVth International Congress of Linguists (pp. 1743-1746). Berlin: Akademie Verlag.
  • Senft, G. (1999). Bronislaw Kasper Malinowski. In J. Verschueren, J.-O. Östman, J. Blommaert, & C. Bulcaen (Eds.), Handbook of pragmatics: 1997 installment. Amsterdam: Benjamins.
  • Senft, G. (2009). Bronislaw Kasper Malinowski. In G. Senft, J.-O. Östman, & J. Verschueren (Eds.), Culture and language use (pp. 210-225). Amsterdam: John Benjamins.
  • Senft, G. (2009). Elicitation. In G. Senft, J.-O. Östman, & J. Verschueren (Eds.), Culture and language use (pp. 105-109). Amsterdam: John Benjamins.
  • Senft, G. (1992). As time goes by...: Changes observed in Trobriand Islanders' culture and language, Milne Bay Province, Papua New Guinea. In T. Dutton (Ed.), Culture change, language change: Case studies from Melanesia (pp. 67-89). Canberra: Pacific Linguistics.
  • Senft, G. (1998). 'Noble Savages' and the 'Islands of Love': Trobriand Islanders in 'Popular Publications'. In J. Wassmann (Ed.), Pacific answers to Western hegemony: Cultural practices of identity construction (pp. 119-140). Oxford: Berg Publishers.
  • Senft, G. (2004). Aspects of spatial deixis in Kilivila. In G. Senft (Ed.), Deixis and demonstratives in Oceanic languages (pp. 59-80). Canberra: Pacific Linguistics.
  • Senft, G. (2004). Introduction. In G. Senft (Ed.), Deixis and demonstratives in Oceanic languages (pp. 1-13). Canberra: Pacific Linguistics.
  • Senft, G. (2009). Fieldwork. In G. Senft, J.-O. Östman, & J. Verschueren (Eds.), Culture and language use (pp. 131-139). Amsterdam: John Benjamins.
  • Senft, G. (1991). Mahnreden auf den Trobriand Inseln: Eine Fallstudie. In D. Flader (Ed.), Verbale Interaktion: Studien zur Empirie und Methodologie der Pragmatik (pp. 27-49). Stuttgart: Metzler.
  • Senft, G. (2009). Linguistische Feldforschung. In H. M. Müller (Ed.), Arbeitsbuch Linguistik (2nd rev. ed., pp. 353-363). Paderborn: Schöningh UTB.

    Abstract

    This article provides a brief introduction into field research, its aims, its methods and the various phases of fieldwork.
  • Senft, G. (2009). Introduction. In G. Senft, J.-O. Östman, & J. Verschueren (Eds.), Culture and language use (pp. 1-17). Amsterdam: John Benjamins.
  • Senft, G. (2004). Participation and posture. In A. Majid (Ed.), Field Manual Volume 9 (pp. 80-82). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.506964.

    Abstract

    Human ethologists have shown that humans are both attracted to others and at the same time fear them. They refer to this kind of fear with the technical term ‘social fear’ and claim that “it is alleviated with personal acquaintance but remains a principle characteristic of interpersonal behaviour. As a result, we maintain various degrees of greater distance between ourselves and others depending on the amount of confidence we have in the other” (Eibl-Eibesfeldt 1989: 335). The goal of this task is to conduct exploratory, heuristic research to establish a new subproject that – based on a corpus of video data – will investigate various forms of human spatial behaviour cross-culturally.
  • Senft, G. (2009). Phatic communion. In G. Senft, J.-O. Östman, & J. Verschueren (Eds.), Culture and language use (pp. 226-233). Amsterdam: John Benjamins.
  • Senft, G. (1991). Prolegomena to the pragmatics of "situational-intentional" varieties in Kilivila language. In J. Verschueren (Ed.), Levels of linguistic adaptation: Selected papers from the International Pragmatics Conference, Antwerp, August 1987 (pp. 235-248). Amsterdam: John Benjamins.
  • Senft, G. (2009). Sind die emotionalen Gesichtsausdrücke des Menschen in allen Kulturen gleich? In Max Planck Society (Ed.), Max-Planck-Gesellschaft Jahrbuch 2008/09 Tätigkeitsberichte und Publikationen (DVD) (pp. 1-4). München: Max Planck Society for the Advancement of Science.

    Abstract

    This paper presents a project which tests the hypothesis of the universality of facial expressions of emotions cross-culturally and cross-linguistically. First results are presented which contradict the hypothesis.
  • Senft, G. (1998). Zeichenkonzeptionen in Ozeanien. In R. Posner, T. Robering, & T. Sebeok (Eds.), Semiotics: A handbook on the sign-theoretic foundations of nature and culture (Vol. 2) (pp. 1971-1976). Berlin: de Gruyter.
  • Senft, G. (2009). Trobriand Islanders' forms of ritual communication. In G. Senft, & E. B. Basso (Eds.), Ritual communication (pp. 81-101). Oxford: Berg.
  • Seuren, P. A. M. (2004). How the cognitive revolution passed linguistics by. In F. Brisard (Ed.), Language and revolution: Language and time. (pp. 63-77). Antwerpen: Universiteit van Antwerpen.
  • Seuren, P. A. M. (1991). Formalism and ecologism in linguistics. In E. Feldbusch, R. Pogarell, & C. Weiss (Eds.), Neue Fragen der Linguistik: Akten des 25. Linguistischen Kolloquiums, Paderborn 1990. Band 1: Bestand und Entwicklung (pp. 73-88). Tübingen: Max Niemeyer.
  • Seuren, P. A. M. (1991). Modale klokkenhuizen. In M. Klein (Ed.), Nieuwe eskapades in de neerlandistiek: Opstellen van vrienden voor M.C. van den Toorn bij zijn afscheid als hoogleraar Nederlandse taalkunde aan de Katholieke Universiteit te Nijmegen (pp. 202-236). Groningen: Wolters-Noordhoff.
  • Seuren, P. A. M. (2009). Hesseling, Dirk Christiaan. In H. Stammerjohann (Ed.), Lexicon Grammaticorum: A bio-bibliographical companion to the history of linguistics. Volume 1. (2nd ed.) (pp. 649-650). Berlin: DeGruyter.
  • Seuren, P. A. M. (1988). Lexical meaning and presupposition. In W. Hüllen, & R. Schulze (Eds.), Understanding the lexicon: Meaning, sense and world knowledge in lexical semantics (pp. 170-187). Tübingen: Niemeyer.
  • Seuren, P. A. M. (2009). Logical systems and natural logical intuitions. In Current issues in unity and diversity of languages: Collection of the papers selected from the CIL 18, held at Korea University in Seoul on July 21-26, 2008. http://www.cil18.org (pp. 53-60).

    Abstract

    The present paper is part of a large research programme investigating the nature and properties of the predicate logic inherent in natural language. The general hypothesis is that natural speakers start off with a basic-natural logic, based on natural cognitive functions, including the basic-natural way of dealing with plural objects. As culture spreads, functional pressure leads to greater generalization and mathematical correctness, yielding ever more refined systems until the apogee of standard modern predicate logic. Four systems of predicate calculus are considered: Basic-Natural Predicate Calculus (BNPC), Aristotelian-Abelardian Predicate Calculus (AAPC), Aristotelian-Boethian Predicate Calculus (ABPC), also known as the classic Square of Opposition, and Standard Modern Predicate Calculus (SMPC). (ABPC is logically faulty owing to its Undue Existential Import (UEI), but that fault is repaired by the addition of a presuppositional component to the logic.) All four systems are checked against seven natural logical intuitions. It appears that BNPC scores best (five out of seven), followed by ABPC (three out of seven). AAPC and SMPC finish ex aequo with two out of seven.
  • Seuren, P. A. M. (1991). Notes on noun phrases and quantification. In Proceedings of the International Conference on Current Issues in Computational Linguistics (pp. 19-44). Penang, Malaysia: Universiti Sains Malaysia.
  • Seuren, P. A. M. (1991). The definition of serial verbs. In F. Byrne, & T. Huebner (Eds.), Development and structures of Creole languages: Essays in honor of Derek Bickerton (pp. 193-205). Amsterdam: Benjamins.
  • Seuren, P. A. M. (1982). Riorientamenti metodologici nello studio della variabilità linguistica. In D. Gambarara, & A. D'Atri (Eds.), Ideologia, filosofia e linguistica: Atti del Convegno Internazionale di Studi, Rende (CS) 15-17 Settembre 1978 (pp. 499-515). Roma: Bulzoni.
  • Seuren, P. A. M. (1991). Präsuppositionen. In A. Von Stechow, & D. Wunderlich (Eds.), Semantik: Ein internationales Handbuch der zeitgenössischen Forschung (pp. 286-318). Berlin: De Gruyter.
  • Seuren, P. A. M. (1985). Predicate raising and semantic transparency in Mauritian Creole. In N. Boretzky, W. Enninger, & T. Stolz (Eds.), Akten des 2. Essener Kolloquiums über "Kreolsprachen und Sprachkontakte", 29-30 Nov. 1985 (pp. 203-229). Bochum: Brockmeyer.
  • Seuren, P. A. M. (1999). The subject-predicate debate X-rayed. In D. Cram, A. Linn, & E. Nowak (Eds.), History of Linguistics 1996: Selected papers from the Seventh International Conference on the History of the Language Sciences (ICHOLS VII), Oxford, 12-17 September 1996. Volume 1: Traditions in Linguistics Worldwide (pp. 41-55). Amsterdam: Benjamins.
  • Seuren, P. A. M. (2009). Voorhoeve, Jan. In H. Stammerjohann (Ed.), Lexicon Grammaticorum: A bio-bibliographical companion to the history of linguistics. Volume 2. (2nd ed.) (pp. 1593-1594). Berlin: DeGruyter.
  • Seuren, P. A. M. (1991). What makes a text untranslatable? In H. M. N. Noor Ein, & H. S. Atiah (Eds.), Pragmatik Penterjemahan: Prinsip, Amalan dan Penilaian Menuju ke Abad 21 ("The Pragmatics of Translation: Principles, Practice and Evaluation Moving towards the 21st Century") (pp. 19-27). Kuala Lumpur: Dewan Bahasa dan Pustaka.
  • Seuren, P. A. M. (1999). Topic and comment. In C. F. Justus, & E. C. Polomé (Eds.), Language Change and Typological Variation: Papers in Honor of Winfred P. Lehmann on the Occasion of His 83rd Birthday. Vol. 2: Grammatical universals and typology (pp. 348-373). Washington, DC: Institute for the Study of Man.
  • Seuren, P. A. M. (1998). Towards a discourse-semantic account of donkey anaphora. In S. Botley, & T. McEnery (Eds.), New Approaches to Discourse Anaphora: Proceedings of the Second Colloquium on Discourse Anaphora and Anaphor Resolution (DAARC2) (pp. 212-220). Lancaster: University Centre for Computer Corpus Research on Language, Lancaster University.
  • Shattuck-Hufnagel, S., & Cutler, A. (1999). The prosody of speech error corrections revisited. In J. Ohala, Y. Hasegawa, M. Ohala, D. Granville, & A. Bailey (Eds.), Proceedings of the Fourteenth International Congress of Phonetic Sciences: Vol. 2 (pp. 1483-1486). Berkeley: University of California.

    Abstract

    A corpus of digitized speech errors is used to compare the prosody of correction patterns for word-level vs. sound-level errors. Results for both peak F0 and perceived prosodic markedness confirm that speakers are more likely to mark corrections of word-level errors than corrections of sound-level errors, and that errors ambiguous between word-level and sound-level (such as boat for moat) show correction patterns like those for sound-level errors. This finding increases the plausibility of the claim that word-sound-ambiguous errors arise at the same level of processing as sound errors that do not form words.
  • Shatzman, K. B. (2004). Segmenting ambiguous phrases using phoneme duration. In S. Kim, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 329-332). Seoul: Sunjin Printing Co.

    Abstract

    The results of an eye-tracking experiment are presented in which Dutch listeners' eye movements were monitored as they heard sentences and saw four pictured objects. Participants were instructed to click on the object mentioned in the sentence. In the critical sentences, a stop-initial target (e.g., "pot") was preceded by an [s], thus causing ambiguity regarding whether the sentence refers to a stop-initial or a cluster-initial word (e.g., "spot"). Participants made fewer fixations to the target pictures when the stop and the preceding [s] were cross-spliced from the cluster-initial word than when they were spliced from a different token of the sentence containing the stop-initial word. Acoustic analyses showed that the two versions differed in various measures, but only one of these - the duration of the [s] - correlated with the perceptual effect. Thus, in this context, the [s] duration information is an important factor guiding word recognition.
  • Sicoli, M. A., Majid, A., & Levinson, S. C. (2009). The language of sound: II. In A. Majid (Ed.), Field manual volume 12 (pp. 14-19). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.446294.

    Abstract

    The task is designed to elicit vocabulary for simple sounds. The primary goal is to establish how people describe sound and what resources the language provides generally for encoding this domain. More specifically: (1) whether there is dedicated vocabulary for encoding simple sound contrasts and (2) how much consistency there is within a community in descriptions. This task builds on materials used in 'The language of sound'.
  • Skiba, R. (2004). Revitalisierung bedrohter Sprachen - Ein Ernstfall für die Sprachdidaktik. In H. W. Hess (Ed.), Didaktische Reflexionen "Berliner Didaktik" und Deutsch als Fremdsprache heute (pp. 251-262). Berlin: Staufenburg.
  • Skiba, R. (1988). Computer analysis of language data using the data transformation program TEXTWOLF in conjunction with a database system. In U. Jung (Ed.), Computers in applied linguistics and language teaching (pp. 155-159). Frankfurt am Main: Peter Lang.
  • Skiba, R. (1988). Computerunterstützte Analyse von sprachlichen Daten mit Hilfe des Datenumwandlungsprogramms TextWolf in Kombination mit einem Datenbanksystem. In B. Spillner (Ed.), Angewandte Linguistik und Computer (pp. 86-88). Tübingen: Gunter Narr.
  • Skiba, R. (1991). Eine Datenbank für Deutsch als Zweitsprache Materialien: Zum Einsatz von PC-Software bei Planung von Zweitsprachenunterricht. In H. Barkowski, & G. Hoff (Eds.), Berlin interkulturell: Ergebnisse einer Berliner Konferenz zu Migration und Pädagogik. (pp. 131-140). Berlin: Colloquium.
  • De Smedt, K., & Kempen, G. (1991). Segment Grammar: A formalism for incremental sentence generation. In C. Paris, W. Swartout, & W. Mann (Eds.), Natural language generation and computational linguistics (pp. 329-349). Dordrecht: Kluwer Academic Publishers.

    Abstract

    Incremental sentence generation imposes special constraints on the representation of the grammar and the design of the formulator (the module which is responsible for constructing the syntactic and morphological structure). In the model of natural speech production presented here, a formalism called Segment Grammar is used for the representation of linguistic knowledge. We give a definition of this formalism and present a formulator design which relies on it. Next, we present an object-oriented implementation of Segment Grammar. Finally, we compare Segment Grammar with other formalisms.
  • Snowdon, C. T., & Cronin, K. A. (2009). Comparative cognition and neuroscience. In G. Berntson, & J. Cacioppo (Eds.), Handbook of neuroscience for the behavioral sciences (pp. 32-55). Hoboken, NJ: Wiley.
  • Stehouwer, H., & Van Zaanen, M. (2009). Language models for contextual error detection and correction. In Proceedings of the EACL 2009 Workshop on Computational Linguistic Aspects of Grammatical Inference (pp. 41-48). Association for Computational Linguistics.

    Abstract

    The problem of identifying and correcting confusibles, i.e. context-sensitive spelling errors, in text is typically tackled using specifically trained machine learning classifiers. For each different set of confusibles, a specific classifier is trained and tuned. In this research, we investigate a more generic approach to context-sensitive confusible correction. Instead of using specific classifiers, we use one generic classifier based on a language model. This measures the likelihood of sentences with different possible solutions of a confusible in place. The advantage of this approach is that all confusible sets are handled by a single model. Preliminary results show that the performance of the generic classifier approach is only slightly worse than that of the specific classifier approach.
  • Stehouwer, H., & Van Zaanen, M. (2009). Token merging in language model-based confusible disambiguation. In T. Calders, K. Tuyls, & M. Pechenizkiy (Eds.), Proceedings of the 21st Benelux Conference on Artificial Intelligence (pp. 241-248).

    Abstract

    In the context of confusible disambiguation (spelling correction that requires context), the synchronous back-off strategy combined with traditional n-gram language models performs well. However, when alternatives consist of a different number of tokens, this classification technique cannot be applied directly, because the computation of the probabilities is skewed. Previous work already showed that probabilities based on different order n-grams should not be compared directly. In this article, we propose new probability metrics in which the size of n is varied according to the number of tokens of the confusible alternative. This requires access to n-grams of variable length. Results show that the synchronous back-off method is extremely robust. We discuss the use of suffix trees as a technique to store variable length n-gram information efficiently.
  • Stivers, T. (2004). Question sequences in interaction. In A. Majid (Ed.), Field Manual Volume 9 (pp. 45-47). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.506967.

    Abstract

    When people request information, they have a variety of means for eliciting the information. In English, two of the primary resources for eliciting information are asking questions and making statements about one's interlocutor (thereby generating confirmation or revision). But within these types there are a variety of ways that these information elicitors can be designed. The goal of this task is to examine how different languages seek and provide information, the extent to which syntactic vs. prosodic resources are used (e.g., in questions), and the extent to which the design of information-seeking actions and their responses displays a structural preference to promote social solidarity.
  • Stolker, C. J. J. M., & Poletiek, F. H. (1998). Smartengeld - Wat zijn we eigenlijk aan het doen? Naar een juridische en psychologische evaluatie. In F. Stadermann (Ed.), Bewijs en letselschade (pp. 71-86). Lelystad, The Netherlands: Koninklijke Vermande.
  • Suppes, P., Böttner, M., & Liang, L. (1998). Machine Learning of Physics Word Problems: A Preliminary Report. In A. Aliseda, R. van Glabbeek, & D. Westerståhl (Eds.), Computing Natural Language (pp. 141-154). Stanford, CA, USA: CSLI Publications.
  • Ten Bosch, L., Oostdijk, N., & De Ruiter, J. P. (2004). Turn-taking in social talk dialogues: Temporal, formal and functional aspects. In 9th International Conference on Speech and Computer (SPECOM'2004) (pp. 454-461).

    Abstract

    This paper presents a quantitative analysis of the turn-taking mechanism evidenced in 93 telephone dialogues that were taken from the 9-million-word Spoken Dutch Corpus. While the first part of the paper focuses on the temporal phenomena of turn taking, such as durations of pauses and overlaps of turns in the dialogues, the second part explores the discourse-functional aspects of utterances in a subset of 8 dialogues that were annotated especially for this purpose. The results show that speakers adapt their turn-taking behaviour to the interlocutor’s behaviour. Furthermore, the results indicate that male-male dialogues show a higher proportion of overlapping turns than female-female dialogues.
  • Ten Bosch, L., Oostdijk, N., & De Ruiter, J. P. (2004). Durational aspects of turn-taking in spontaneous face-to-face and telephone dialogues. In P. Sojka, I. Kopecek, & K. Pala (Eds.), Text, Speech and Dialogue: Proceedings of the 7th International Conference TSD 2004 (pp. 563-570). Heidelberg: Springer.

    Abstract

    On the basis of two-speaker spontaneous conversations, it is shown that the distributions of both pauses and speech-overlaps of telephone and face-to-face dialogues have different statistical properties. Pauses in a face-to-face dialogue last up to 4 times longer than pauses in telephone conversations in functionally comparable conditions. There is a high correlation (0.88 or larger) between the average pause duration for the two speakers across face-to-face dialogues and telephone dialogues. The data provided form a first quantitative analysis of the complex turn-taking mechanism evidenced in the dialogues available in the 9-million-word Spoken Dutch Corpus.
  • Terrill, A. (2004). Coordination in Lavukaleve. In M. Haspelmath (Ed.), Coordinating Constructions. (pp. 427-443). Amsterdam: John Benjamins.
  • Torreira, F., & Ernestus, M. (2009). Probabilistic effects on French [t] duration. In Proceedings of the 10th Annual Conference of the International Speech Communication Association (Interspeech 2009) (pp. 448-451). Causal Productions Pty Ltd.

    Abstract

    The present study shows that [t] consonants are affected by probabilistic factors in a syllable-timed language such as French, and in spontaneous as well as in journalistic speech. Study 1 showed a word bigram frequency effect in spontaneous French, but its exact nature depended on the corpus on which the probabilistic measures were based. Study 2 investigated journalistic speech and showed an effect of the joint frequency of the test word and its following word. We discuss the possibility that these probabilistic effects are due to the speaker’s planning of upcoming words, and to the speaker’s adaptation to the listener’s needs.
  • Uddén, J., Araújo, S., Forkstam, C., Ingvar, M., Hagoort, P., & Petersson, K. M. (2009). A matter of time: Implicit acquisition of recursive sequence structures. In N. Taatgen, & H. Van Rijn (Eds.), Proceedings of the Thirty-First Annual Conference of the Cognitive Science Society (pp. 2444-2449).

    Abstract

    A dominant hypothesis in empirical research on the evolution of language is the following: the fundamental difference between animal and human communication systems is captured by the distinction between regular and more complex non-regular grammars. Studies reporting successful artificial grammar learning of nested recursive structures and imaging studies of the same have methodological shortcomings since they typically allow explicit problem solving strategies and this has been shown to account for the learning effect in subsequent behavioral studies. The present study overcomes these shortcomings by using subtle violations of agreement structure in a preference classification task. In contrast to the studies conducted so far, we use an implicit learning paradigm, allowing the time needed for both abstraction processes and consolidation to take place. Our results demonstrate robust implicit learning of recursively embedded structures (context-free grammar) and recursive structures with cross-dependencies (context-sensitive grammar) in an artificial grammar learning task spanning 9 days. Keywords: Implicit artificial grammar learning; centre embedded; cross-dependency; implicit learning; context-sensitive grammar; context-free grammar; regular grammar; non-regular grammar
  • Vainio, M., Suni, A., Raitio, T., Nurminen, J., Järvikivi, J., & Alku, P. (2009). New method for delexicalization and its application to prosodic tagging for text-to-speech synthesis. In Proceedings of the 10th Annual Conference of the International Speech Communication Association (Interspeech 2009) (pp. 1703-1706).

    Abstract

    This paper describes a new flexible delexicalization method based on a glottal-excited parametric speech synthesis scheme. The system utilizes inverse-filtered glottal flow and all-pole modelling of the vocal tract. The method provides a possibility to retain and manipulate all relevant prosodic features of any kind of speech. Most importantly, the features include voice quality, which has not been properly modeled in earlier delexicalization methods. The functionality of the new method was tested in a prosodic tagging experiment aimed at providing word prominence data for a text-to-speech synthesis system. The experiment confirmed the usefulness of the method and further corroborated earlier evidence that linguistic factors influence the perception of prosodic prominence.
  • Van Berkum, J. J. A. (2009). The neuropragmatics of 'simple' utterance comprehension: An ERP review. In U. Sauerland, & K. Yatsushiro (Eds.), Semantics and pragmatics: From experiment to theory (pp. 276-316). Basingstoke: Palgrave Macmillan.

    Abstract

    In this chapter, I review my EEG research on comprehending sentences in context from a pragmatics-oriented perspective. The review is organized around four questions: (1) When and how do extra-sentential factors such as the prior text, identity of the speaker, or value system of the comprehender affect the incremental sentence interpretation processes indexed by the so-called N400 component of the ERP? (2) When and how do people identify the referents for expressions such as “he” or “the review”, and how do referential processes interact with sense and syntax? (3) How directly pragmatic are the interpretation-relevant ERP effects reported here? (4) Do readers and listeners anticipate upcoming information? One important claim developed in the chapter is that the well-known N400 component, although often associated with ‘semantic integration’, only indirectly reflects the sense-making involved in structure-sensitive dynamic composition of the type studied in semantics and pragmatics. According to the multiple-cause intensified retrieval (MIR) account -- essentially an extension of the memory retrieval account proposed by Kutas and colleagues -- the amplitude of the word-elicited N400 reflects the computational resources used in retrieving the relatively invariant coded meaning stored in semantic long-term memory for, and made available by, the word at hand. Such retrieval becomes more resource-intensive when the coded meanings cued by this word do not match with expectations raised by the relevant interpretive context, but also when certain other relevance signals, such as strong affective connotation or a marked delivery, indicate the need for deeper processing. The most important consequence of this account is that pragmatic modulations of the N400 come about not because the N400 at hand directly reflects a rich compositional-semantic and/or Gricean analysis to make sense of the word’s coded meaning in this particular context, but simply because the semantic and pragmatic implications of the preceding words have already been computed, and now define a less or more helpful interpretive background within which to retrieve coded meaning for the critical word.
  • Van Valin Jr., R. D. (2009). Case in role and reference grammar. In A. Malchukov, & A. Spencer (Eds.), The Oxford handbook of case (pp. 102-120). Oxford University Press.
  • Van Valin Jr., R. D. (1999). A typology of the interaction of focus structure and syntax. In E. V. Rachilina, & J. G. Testelec (Eds.), Typology and linguistic theory from description to explanation: For the 60th birthday of Aleksandr E. Kibrik (pp. 511-524). Moscow: Languages of Russian Culture.
  • Van Berkum, J. J. A. (2009). Does the N400 directly reflect compositional sense-making? Psychophysiology, Special Issue: Society for Psychophysiological Research Abstracts for the Forty-Ninth Annual Meeting, 46(Suppl. 1), s2.

    Abstract

    A not uncommon assumption in psycholinguistics is that the N400 directly indexes high-level semantic integration, the compositional, word-driven construction of sentence- and discourse-level meaning in some language-relevant unification space. The various discourse- and speaker-dependent modulations of the N400 uncovered by us and others are often taken to support this 'compositional integration' position. In my talk, I will argue that these N400 modulations are probably better interpreted as only indirectly reflecting compositional sense-making. The account that I will advance for these N400 effects is a variant of the classic Kutas and Federmeier (2002, TICS) memory retrieval account in which context effects on the word-elicited N400 are taken to reflect contextual priming of LTM access. It differs from the latter in making more explicit that the contextual cues that prime access to a word's meaning in LTM can range from very simple (e.g., a single concept) to very complex ones (e.g., a structured representation of the current discourse). Furthermore, it incorporates the possibility, suggested by recent N400 findings, that semantic retrieval can also be intensified in response to certain ‘relevance signals’, such as strong value-relevance, or a marked delivery (linguistic focus, uncommon choice of words, etc). In all, the perspective I'll draw is that in the context of discourse-level language processing, N400 effects reflect an 'overlay of technologies', with the construction of discourse-level representations riding on top of more ancient sense-making technology.
  • Van Ooijen, B., Cutler, A., & Norris, D. (1991). Detection times for vowels versus consonants. In Eurospeech 91: Vol. 3 (pp. 1451-1454). Genova: Istituto Internazionale delle Comunicazioni.

    Abstract

    This paper reports two experiments with vowels and consonants as phoneme detection targets in real words. In the first experiment, two relatively distinct vowels were compared with two confusible stop consonants. Response times to the vowels were longer than to the consonants. Response times correlated negatively with target phoneme length. In the second, two relatively distinct vowels were compared with their corresponding semivowels. This time, the vowels were detected faster than the semivowels. We conclude that response time differences between vowels and stop consonants in this task may reflect differences between phoneme categories in the variability of tokens, both in the acoustic realisation of targets and in the representation of targets by subjects.
  • Van Geenhoven, V. (1999). A before-&-after picture of when-, before-, and after-clauses. In T. Matthews, & D. Strolovitch (Eds.), Proceedings of the 9th Semantics and Linguistic Theory Conference (pp. 283-315). Ithaca, NY, USA: Cornell University.
  • Van Wijk, C., & Kempen, G. (1985). From sentence structure to intonation contour: An algorithm for computing pitch contours on the basis of sentence accents and syntactic structure. In B. Müller (Ed.), Sprachsynthese: Zur Synthese von natürlich gesprochener Sprache aus Texten und Konzepten (pp. 157-182). Hildesheim: Georg Olms.
