Publications

  • Abdel Rahman, R., Van Turennout, M., & Levelt, W. J. M. (2003). Phonological encoding is not contingent on semantic feature retrieval: An electrophysiological study on object naming. Journal of Experimental Psychology: Learning, Memory, and Cognition, 29(5), 850-860. doi:10.1037/0278-7393.29.5.850.

    Abstract

    In the present study, the authors examined with event-related brain potentials whether phonological encoding in picture naming is mediated by basic semantic feature retrieval or proceeds independently. In a manual 2-choice go/no-go task the choice response depended on a semantic classification (animal vs. object) and the execution decision was contingent on a classification of name phonology (vowel vs. consonant). The introduction of a semantic task mixing procedure allowed for selectively manipulating the speed of semantic feature retrieval. Serial and parallel models were tested on the basis of their differential predictions for the effect of this manipulation on the lateralized readiness potential and N200 component. The findings indicate that phonological code retrieval is not strictly contingent on prior basic semantic feature processing.
  • Abdel Rahman, R., & Sommer, W. (2003). Does phonological encoding in speech production always follow the retrieval of semantic knowledge?: Electrophysiological evidence for parallel processing. Cognitive Brain Research, 16(3), 372-382. doi:10.1016/S0926-6410(02)00305-1.

    Abstract

    In this article a new approach to the distinction between serial/contingent and parallel/independent processing in the human cognitive system is applied to semantic knowledge retrieval and phonological encoding of the word form in picture naming. In two-choice go/nogo tasks, pictures of objects were manually classified on the basis of semantic and phonological information. An additional manipulation of the duration of the faster and presumably mediating process (semantic retrieval) allowed us to derive differential predictions from the two alternative models. These predictions were tested with two event-related brain potentials (ERPs), the lateralized readiness potential (LRP) and the N200. The findings indicate that phonological encoding can proceed in parallel with the retrieval of semantic features. A suggestion is made as to how these findings can be accommodated within models of speech production.
  • Adank, P., Smits, R., & Van Hout, R. (2003). Modeling perceived vowel height, advancement, and rounding. In Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS 2003) (pp. 647-650). Adelaide: Causal Productions.
  • Akker, E., & Cutler, A. (2003). Prosodic cues to semantic structure in native and nonnative listening. Bilingualism: Language and Cognition, 6(2), 81-96. doi:10.1017/S1366728903001056.

    Abstract

    Listeners efficiently exploit sentence prosody to direct attention to words bearing sentence accent. This effect has been explained as a search for focus, furthering rapid apprehension of semantic structure. A first experiment supported this explanation: English listeners detected phoneme targets in sentences more rapidly when the target-bearing words were in accented position or in focussed position, but the two effects interacted, consistent with the claim that the effects serve a common cause. In a second experiment a similar asymmetry was observed with Dutch listeners and Dutch sentences. In a third and a fourth experiment, proficient Dutch users of English heard English sentences; here, however, the two effects did not interact. The results suggest that less efficient mapping of prosody to semantics may be one way in which nonnative listening fails to equal native listening.
  • Alario, F.-X., Schiller, N. O., Domoto-Reilly, K., & Caramazza, A. (2003). The role of phonological and orthographic information in lexical selection. Brain and Language, 84(3), 372-398. doi:10.1016/S0093-934X(02)00556-4.

    Abstract

    We report the performance of two patients with lexico-semantic deficits following left MCA CVA. Both patients produce similar numbers of semantic paraphasias in naming tasks, but presented one crucial difference: grapheme-to-phoneme and phoneme-to-grapheme conversion procedures were available only to one of them. We investigated the impact of this availability on the process of lexical selection during word production. The patient for whom conversion procedures were not operational produced semantic errors in transcoding tasks such as reading and writing to dictation; furthermore, when asked to name a given picture in multiple output modalities—e.g., to say the name of a picture and immediately after to write it down—he produced lexically inconsistent responses. By contrast, the patient for whom conversion procedures were available did not produce semantic errors in transcoding tasks and did not produce lexically inconsistent responses in multiple picture-naming tasks. These observations are interpreted in the context of the summation hypothesis (Hillis & Caramazza, 1991), according to which the activation of lexical entries for production would be made on the basis of semantic information and, when available, on the basis of form-specific information. The implementation of this hypothesis in models of lexical access is discussed in detail.
  • Allen, S., Ozyurek, A., Kita, S., Brown, A., Turanli, R., & Ishizuka, T. (2003). Early speech about manner and path in Turkish and English: Universal or language-specific? In B. Beachley, A. Brown, & F. Conlin (Eds.), Proceedings of the 27th annual Boston University Conference on Language Development (pp. 63-72). Somerville (MA): Cascadilla Press.
  • Allerhand, M., Butterfield, S., Cutler, A., & Patterson, R. (1992). Assessing syllable strength via an auditory model. In Proceedings of the Institute of Acoustics: Vol. 14 Part 6 (pp. 297-304). St. Albans, Herts: Institute of Acoustics.
  • Ameka, F. K. (2003). Prepositions and postpositions in Ewe: Empirical and theoretical considerations. In A. Zribi-Hertz, & P. Sauzet (Eds.), Typologie des langues d'Afrique et universaux de la grammaire (pp. 43-66). Paris: L'Harmattan.
  • Ameka, F. K. (2003). 'Today is far': Situational anaphors in overlapping clause constructions in Ewe. In M. E. K. Dakubu, & E. K. Osam (Eds.), Studies in the Languages of the Volta Basin 1. Proceedings of the Legon-Trondheim Linguistics Project, December 4-6, 2002 (pp. 9-22). Legon: Department of Linguistics, University of Ghana.
  • Ameka, F. K. (1992). Interjections: The universal yet neglected part of speech. Journal of Pragmatics, 18(2/3), 101-118. doi:10.1016/0378-2166(92)90048-G.
  • Ameka, F. K. (1992). The meaning of phatic and conative interjections. Journal of Pragmatics, 18(2/3), 245-271. doi:10.1016/0378-2166(92)90054-F.

    Abstract

    The purpose of this paper is to investigate the meanings of the members of two subclasses of interjections in Ewe: the conative/volitive which are directed at an auditor, and the phatic which are used in the maintenance of social and communicative contact. It is demonstrated that interjections like other linguistic signs have meanings which can be rigorously stated. In addition, the paper explores the differences and similarities between the semantic structures of interjections on one hand and formulaic words on the other. This is done through a comparison of the semantics and pragmatics of an interjection and a formulaic word which are used for welcoming people in Ewe. It is contended that formulaic words are speech acts qua speech acts while interjections are not fully fledged speech acts because they lack illocutionary dictum in their semantic structure.
  • Baayen, R. H. (2003). Probabilistic approaches to morphology. In R. Bod, J. Hay, & S. Jannedy (Eds.), Probabilistic linguistics (pp. 229-287). Cambridge: MIT Press.
  • Baayen, R. H., Moscoso del Prado Martín, F., Wurm, L., & Schreuder, R. (2003). When word frequencies do not regress towards the mean. In R. Baayen, & R. Schreuder (Eds.), Morphological structure in language processing (pp. 463-484). Berlin: Mouton de Gruyter.
  • Baayen, R. H., McQueen, J. M., Dijkstra, T., & Schreuder, R. (2003). Frequency effects in regular inflectional morphology: Revisiting Dutch plurals. In R. H. Baayen, & R. Schreuder (Eds.), Morphological structure in language processing (pp. 355-390). Berlin: Mouton de Gruyter.
  • Baayen, R. H., & Schreuder, R. (2003). Morphological structure in language processing. Berlin: Mouton de Gruyter.
  • Bastiaansen, M. C. M., & Hagoort, P. (2003). Event-induced theta responses as a window on the dynamics of memory. Cortex, 39(4-5), 967-972. doi:10.1016/S0010-9452(08)70873-6.

    Abstract

    An important, but often ignored distinction in the analysis of EEG signals is that between evoked activity and induced activity. Whereas evoked activity reflects the summation of transient post-synaptic potentials triggered by an event, induced activity, which is mainly oscillatory in nature, is thought to reflect changes in parameters controlling dynamic interactions within and between brain structures. We hypothesize that induced activity may yield information about the dynamics of cell assembly formation, activation and subsequent uncoupling, which may play a prominent role in different types of memory operations. We then describe a number of analysis tools that can be used to study the reactivity of induced rhythmic activity, both in terms of amplitude changes and of phase variability.

    We briefly discuss how alpha, gamma and theta rhythms are thought to be generated, paying special attention to the hypothesis that the theta rhythm reflects dynamic interactions between the hippocampal system and the neocortex. This hypothesis would imply that studying the reactivity of scalp-recorded theta may provide a window on the contribution of the hippocampus to memory functions.

    We review studies investigating the reactivity of scalp-recorded theta in paradigms engaging episodic memory, spatial memory and working memory. In addition, we review studies that relate theta reactivity to processes at the interface of memory and language. Despite many unknowns, the experimental evidence largely supports the hypothesis that theta activity plays a functional role in cell assembly formation, a process which may constitute the neural basis of memory formation and retrieval. The available data provide only highly indirect support for the hypothesis that scalp-recorded theta yields information about hippocampal functioning. It is concluded that studying induced rhythmic activity holds promise as an additional important way to study brain function.
  • Bauer, B. L. M. (1992). Du latin au français: Le passage d'une langue SOV à une langue SVO. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Bauer, B. L. M. (1994). [Review of the book Du latin aux langues romanes ed. by Maria Iliescu and Dan Slusanski]. Studies in Language, 18(2), 502-509. doi:10.1075/sl.18.2.08bau.
  • Bauer, B. L. M., & Pinault, G.-J. (Eds.). (2003). Language in time and space: A festschrift for Werner Winter on the occasion of his 80th birthday. Berlin: Mouton de Gruyter.
  • Bauer, B. L. M., & Pinault, G.-J. (2003). Introduction: Werner Winter, ad multos annos. In B. L. M. Bauer, & G.-J. Pinault (Eds.), Language in time and space: A festschrift for Werner Winter on the occasion of his 80th birthday (pp. xxiii-xxv). Berlin: Mouton de Gruyter.
  • Bauer, B. L. M. (1992). Evolution in language: Evidence from the Romance auxiliary. In B. Chiarelli, J. Wind, A. Nocentini, & B. Bichakjian (Eds.), Language origin: A multidisciplinary approach (pp. 517-528). Dordrecht: Kluwer.
  • Bauer, B. L. M. (2003). The adverbial formation in mente in Vulgar and Late Latin: A problem in grammaticalization. In H. Solin, M. Leiwo, & H. Halla-aho (Eds.), Latin vulgaire, latin tardif VI (pp. 439-457). Hildesheim: Olms.
  • Bauer, B. L. M. (1994). The development of Latin absolute constructions: From stative to transitive structures. General Linguistics, 18, 64-83.
  • Bayer, J., & Marslen-Wilson, W. (1986). Max-Planck-Institute for Psycholinguistics: Annual Report Nr.7 1986. Nijmegen: MPI for Psycholinguistics.
  • Beattie, G. W., Cutler, A., & Pearson, M. (1982). Why is Mrs Thatcher interrupted so often? [Letters to Nature]. Nature, 300, 744-747. doi:10.1038/300744a0.

    Abstract

    If a conversation is to proceed smoothly, the participants have to take turns to speak. Studies of conversation have shown that there are signals which speakers give to inform listeners that they are willing to hand over the conversational turn. Some of these signals are part of the text (for example, completion of syntactic segments), some are non-verbal (such as completion of a gesture), but most are carried by the pitch, timing and intensity pattern of the speech; for example, both pitch and loudness tend to drop particularly low at the end of a speaker's turn. When one speaker interrupts another, the two can be said to be disputing who has the turn. Interruptions can occur because one participant tries to dominate or disrupt the conversation. But it could also be the case that mistakes occur in the way these subtle turn-yielding signals are transmitted and received. We demonstrate here that many interruptions in an interview with Mrs Margaret Thatcher, the British Prime Minister, occur at points where independent judges agree that her turn appears to have finished. It is suggested that she is unconsciously displaying turn-yielding cues at certain inappropriate points. The turn-yielding cues responsible are identified.
  • Bickel, B. (1994). In the vestibule of meaning: Transitivity inversion as a morphological phenomenon. Studies in Language, 19(1), 73-127.
  • Blumstein, S., & Cutler, A. (2003). Speech perception: Phonetic aspects. In W. Frawley (Ed.), International encyclopaedia of linguistics (pp. 151-154). Oxford: Oxford University Press.
  • Bock, K., Irwin, D. E., Davidson, D. J., & Levelt, W. J. M. (2003). Minding the clock. Journal of Memory and Language, 48, 653-685. doi:10.1016/S0749-596X(03)00007-X.

    Abstract

    Telling time is an exercise in coordinating language production with visual perception. By coupling different ways of saying times with different ways of seeing them, the performance of time-telling can be used to track cognitive transformations from visual to verbal information in connected speech. To accomplish this, we used eyetracking measures along with measures of speech timing during the production of time expressions. Our findings suggest that an effective interface between what has been seen and what is to be said can be constructed within 300 ms. This interface underpins a preverbal plan or message that appears to guide a comparatively slow, strongly incremental formulation of phrases. The results begin to trace the divide between seeing and saying (or thinking and speaking) that must be bridged during the creation of even the most prosaic utterances of a language.
  • Bock, K., & Levelt, W. J. M. (1994). Language production: Grammatical encoding. In M. A. Gernsbacher (Ed.), Handbook of Psycholinguistics (pp. 945-984). San Diego: Academic Press.
  • Bohnemeyer, J. (2003). The unique vector constraint: The impact of direction changes on the linguistic segmentation of motion events. In E. v. d. Zee, & J. Slack (Eds.), Axes and vectors in language and space (pp. 86-110). Oxford: Oxford University Press.
  • Bohnemeyer, J. (2003). Invisible time lines in the fabric of events: Temporal coherence in Yukatek narratives. Journal of Linguistic Anthropology, 13(2), 139-162. doi:10.1525/jlin.2003.13.2.139.

    Abstract

    This article examines how narratives are structured in a language in which event order is largely not coded. Yucatec Maya lacks both tense inflections and temporal connectives corresponding to English after and before. It is shown that the coding of events in Yucatec narratives is subject to a strict iconicity constraint within paragraph boundaries. Aspectual viewpoint shifting is used to reconcile iconicity preservation with the requirements of a more flexible narrative structure.
  • Bohnemeyer, J. (2003). Fictive motion questionnaire. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 81-85). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877601.

    Abstract

    Fictive Motion is the metaphoric use of path relators in the expression of spatial relations or configurations that are static, or at any rate do not in any obvious way involve physical entities moving in real space. The goal is to study the expression of such relations or configurations in the target language, with an eye particularly on whether these expressions exclusively/preferably/possibly involve motion verbs and/or path relators, i.e., Fictive Motion. Section 2 gives Talmy’s (2000: ch. 2) phenomenology of Fictive Motion construals. The researcher’s task is to “distill” the intended spatial relations/configurations from Talmy’s description of the particular Fictive Motion metaphors and elicit as many different examples of the relations/configurations as (s)he deems necessary to obtain a basic sense of whether and how much Fictive Motion the target language offers or prescribes for the encoding of the particular type of relation/configuration. As a first stab, the researcher may try to elicit natural translations of culturally appropriate adaptations of the examples Talmy provides with each type of Fictive Motion metaphor.
  • Bohnemeyer, J., Burenhult, N., Levinson, S. C., & Enfield, N. J. (2003). Landscape terms and place names questionnaire. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 60-63). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877604.

    Abstract

    Landscape terms reflect the relationship between geographic reality and human cognition. Are ‘mountains’, ‘rivers’, ‘lakes’ and the like universally recognised in languages as naturally salient objects to be named? The landscape subproject is concerned with the interrelation between language, cognition and geography. Specifically, it investigates issues relating to how landforms are categorised cross-linguistically as well as the characteristics of place naming.
  • Bouman, M. A., & Levelt, W. J. M. (1994). Werner E. Reichardt: Levensbericht. In H. W. Pleket (Ed.), Levensberichten en herdenkingen 1993 (pp. 75-80). Amsterdam: Koninklijke Nederlandse Akademie van Wetenschappen.
  • Bowerman, M. (2003). Rola predyspozycji kognitywnych w przyswajaniu systemu semantycznego [The role of cognitive predispositions in the acquisition of the semantic system] [Reprint]. In E. Dabrowska, & W. Kubiński (Eds.), Akwizycja języka w świetle językoznawstwa kognitywnego [Language acquisition from a cognitive linguistic perspective]. Kraków: Uniwersitas.

    Abstract

    Reprinted from: Bowerman, M. (1989). Learning a semantic system: What role do cognitive predispositions play? In M. L. Rice & R. L. Schiefelbusch (Eds.), The teachability of language (pp. 133-169). Baltimore: Paul H. Brookes.
  • Bowerman, M., & Choi, S. (2003). Space under construction: Language-specific spatial categorization in first language acquisition. In D. Gentner, & S. Goldin-Meadow (Eds.), Language in mind: Advances in the study of language and thought (pp. 387-427). Cambridge: MIT Press.
  • Bowerman, M., & Pederson, E. (1992). Topological relations picture series. In S. C. Levinson (Ed.), Space stimuli kit 1.2 (p. 51). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.883589.

    Abstract

    This task is designed to elicit expressions of spatial relations. It was originally designed by Melissa Bowerman for use with young children, but was then developed further by Bowerman in collaboration with Pederson for crosslinguistic comparison. It has been used in fieldsites all over the world and is commonly known as “BowPed” or “TPRS”. Older incarnations did not always come with instructions. This entry includes a one-page instruction sheet and high quality versions of the original pictures.
  • Bowerman, M. (1992). Topological Relations Pictures: Topological Paths. In S. C. Levinson (Ed.), Space stimuli kit 1.2 (pp. 18-24). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3512508.

    Abstract

    This entry suggests ways to elicit descriptions of caused motion involving topological relations (the domain of English put IN/ON/TOGETHER, take OUT/OFF/APART, etc.). There is a large amount of cross-linguistic variation in this domain. The tasks outlined here address matters such as the division of labor between the various elements of spatial semantics in the sentence. For example, is most of the work of expressing PATH done in a locative marker, or in the verb, or both?
  • Bowerman, M. (1992). Topological Relations Pictures: Static Relations. In S. C. Levinson (Ed.), Space stimuli kit 1.2 (pp. 25-28). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3512672.

    Abstract

    The precursor to the Bowped stimuli, this entry suggests various spatial configurations to explore using real objects, rather than the line drawings used in Bowped.
  • Bowerman, M. (1986). First steps in acquiring conditionals. In E. C. Traugott, A. G. ter Meulen, J. S. Reilly, & C. A. Ferguson (Eds.), On conditionals (pp. 285-308). Cambridge: Cambridge University Press.

    Abstract

    This chapter is about the initial flowering of conditionals, if-(then) constructions, in children's spontaneous speech. It is motivated by two major theoretical interests. The first and most immediate is to understand the acquisition process itself. Conditionals are conceptually, and in many languages morphosyntactically, complex. What aspects of cognitive and grammatical development are implicated in their acquisition? Does learning take place in the context of particular interactions with other speakers? Where do conditionals fit in with the acquisition of other complex sentences? What are the semantic, syntactic and pragmatic properties of the first conditionals? Underlying this first interest is a second, more strictly linguistic one. Research of recent years has found increasing evidence that natural languages are constrained in certain ways. The source of these constraints is not yet clearly understood, but it is widely assumed that some of them derive ultimately from properties of children's capacity for language acquisition.

  • Bowerman, M. (1994). From universal to language-specific in early grammatical development. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 346, 34-45. doi:10.1098/rstb.1994.0126.

    Abstract

    Attempts to explain children's grammatical development often assume a close initial match between units of meaning and units of form; for example, agents are said to map to sentence-subjects and actions to verbs. The meanings themselves, according to this view, are not influenced by language, but reflect children's universal non-linguistic way of understanding the world. This paper argues that, contrary to this position, meaning as it is expressed in children's early sentences is, from the beginning, organized on the basis of experience with the grammar and lexicon of a particular language. As a case in point, children learning English and Korean are shown to express meanings having to do with direct motion according to language-specific principles of semantic and grammatical structuring from the earliest stages of word combination.
  • Bowerman, M. (1988). Inducing the latent structure of language. In F. Kessel (Ed.), The development of language and language researchers: Essays presented to Roger Brown (pp. 23-49). Hillsdale, N.J.: Lawrence Erlbaum.
  • Bowerman, M., & Majid, A. (2003). Kids’ cut & break. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 70-71). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877607.

    Abstract

    Kids’ Cut & Break is a task inspired by the original Cut & Break task (see MPI L&C Group Field Manual 2001), but designed for use with children as well as adults. There are fewer videoclips to be described (34 as opposed to 61), and they are “friendlier” and more interesting: the actors wear colorful clothes, smile, and act cheerfully. The first 2 items are warm-ups and 4 more items are fillers (interspersed with test items), so only 28 of the items are actually “test items”. In the original Cut & Break, each clip is in a separate file. In Kids’ Cut & Break, all 34 clips are edited into a single file, which plays the clips successively with 5 seconds of black screen between each clip.

    Additional information

    2003_1_Kids_cut_and_break_films.zip
  • Bowerman, M. (1994). Learning a semantic system: What role do cognitive predispositions play? [Reprint]. In P. Bloom (Ed.), Language acquisition: Core readings (pp. 329-363). Cambridge, MA: MIT Press.

    Abstract

    Reprinted from: Bowerman, M. (1989). Learning a semantic system: What role do cognitive predispositions play? In M. L. Rice & R. L. Schiefelbusch (Eds.), The teachability of language (pp. 133-169). Baltimore: Paul H. Brookes.
  • Bowerman, M. (1982). Evaluating competing linguistic models with language acquisition data: Implications of developmental errors with causative verbs. Quaderni di semantica, 3, 5-66.
  • Bowerman, M. (1982). Reorganizational processes in lexical and syntactic development. In E. Wanner, & L. Gleitman (Eds.), Language acquisition: The state of the art (pp. 319-346). New York: Academic Press.
  • Bowerman, M. (1988). The 'no negative evidence' problem: How do children avoid constructing an overly general grammar? In J. Hawkins (Ed.), Explaining language universals (pp. 73-101). Oxford: Basil Blackwell.
  • Bowerman, M. (1988). The child's expression of meaning: Expanding relationships among lexicon, syntax, and morphology [Reprint]. In M. B. Franklin, & S. S. Barten (Eds.), Child language: A reader (pp. 106-117). Oxford: Oxford University Press.

    Abstract

    Reprinted from: Bowerman, M. (1981). The child's expression of meaning: Expanding relationships among lexicon, syntax, and morphology. In H. Winitz (Ed.), Native language and foreign language acquisition (pp. 172-189). New York: New York Academy of Sciences.
  • Bowerman, M. (1982). Starting to talk worse: Clues to language acquisition from children's late speech errors. In S. Strauss (Ed.), U shaped behavioral growth (pp. 101-145). New York: Academic Press.
  • Brown, P., & Levinson, S. C. (1992). 'Left' and 'right' in Tenejapa: Investigating a linguistic and conceptual gap. Zeitschrift für Phonetik, Sprachwissenschaft und Kommunikationsforschung, 45(6), 590-611.

    Abstract

    From the perspective of a Kantian belief in the fundamental human tendency to cleave space along the three planes of the human body, Tenejapan Tzeltal exhibits a linguistic gap: there are no linguistic expressions that designate regions (as in English to my left) or describe the visual field (as in to the left of the tree) on the basis of a plane bisecting the body into a left and right side. Tenejapans have expressions for left and right hands (xin k'ab and wa'el k'ab), but these are basically body-part terms; they are not generalized to form a division of space. This paper describes the results of various elicited production tasks in which concepts of left and right would provide a simple solution, showing that Tenejapan consultants use other notions even when the relevant linguistic distinctions could be made in Tzeltal (e.g. describing the position of one's limbs, or describing rotation of one's body). Instead of using the left-hand/right-hand distinction to construct a division of space, Tenejapans utilize a number of other systems: (i) an absolute, 'cardinal direction' system, supplemented by reference to other geographic or landmark directions, (ii) a generative segmentation of objects and places into analogic body-parts or other kinds of parts, and (iii) a rich system of positional adjectives to describe the exact disposition of things. These systems work conjointly to specify locations with precision and elegance. The overall system is not primarily egocentric, and it makes no essential reference to planes through the human body.
  • Brown, P., Senft, G., & Wheeldon, L. (Eds.). (1992). Max-Planck-Institute for Psycholinguistics: Annual report 1992. Nijmegen: MPI for Psycholinguistics.
  • Brown, P. (2003). Multimodal multiperson interaction with infants aged 9 to 15 months. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 22-24). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877610.

    Abstract

    Interaction, for all that it has an ethological base, is culturally constituted, and how new social members are enculturated into the interactional practices of the society is of critical interest to our understanding of interaction – how much is learned, how variable is it across cultures – as well as to our understanding of the role of culture in children’s social-cognitive development. The goal of this task is to document the nature of caregiver infant interaction in different cultures, especially during the critical age of 9-15 months when children come to have an understanding of others’ intentions. This is of interest to all students of interaction; it does not require specialist knowledge of children.
  • Brown, P. (1994). The INs and ONs of Tzeltal locative expressions: The semantics of static descriptions of location. Linguistics, 32, 743-790.

    Abstract

    This paper explores how static topological spatial relations such as contiguity, contact, containment, and support are expressed in the Mayan language Tzeltal. Three distinct Tzeltal systems for describing spatial relationships - geographically anchored (place names, geographical coordinates), viewer-centered (deictic), and object-centered (body parts, relational nouns, and dispositional adjectives) - are presented, but the focus here is on the object-centered system of dispositional adjectives in static locative expressions. Tzeltal encodes shape/position/configuration gestalts in verb roots; predicates formed from these are an essential element in locative descriptions. Specificity of shape in the predicate allows spatial relations between figure and ground objects to be understood by implication. Tzeltal illustrates an alternative strategy to that of prepositional languages like English: rather than elaborating shape distinctions in the nouns and minimizing them in the locatives, Tzeltal encodes shape and configuration very precisely in verb roots, leaving many object nouns unspecified for shape. The Tzeltal case thus presents a direct challenge to cognitive science claims that, in both language and cognition, WHAT is kept distinct from WHERE.
  • Burenhult, N. (2003). Attention, accessibility, and the addressee: The case of the Jahai demonstrative ton. Pragmatics, 13(3), 363-379.
  • Butterfield, S., & Cutler, A. (1988). Segmentation errors by human listeners: Evidence for a prosodic segmentation strategy. In W. Ainsworth, & J. Holmes (Eds.), Proceedings of SPEECH ’88: Seventh Symposium of the Federation of Acoustic Societies of Europe: Vol. 3 (pp. 827-833). Edinburgh: Institute of Acoustics.
  • Caramazza, A., Miozzo, M., Costa, A., Schiller, N. O., & Alario, F.-X. (2003). Étude comparée de la production des déterminants dans différentes langues. In E. Dupoux (Ed.), Les Langages du cerveau: Textes en l'honneur de Jacques Mehler (pp. 213-229). Paris: Odile Jacob.
  • Chen, A. (2003). Language dependence in continuation intonation. In M. Solé, D. Recasens, & J. Romero (Eds.), Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS 2003) (pp. 1069-1072). Rundle Mall, SA, Austr.: Causal Productions Pty.
  • Chen, A. (2003). Reaction time as an indicator to discrete intonational contrasts in English. In Proceedings of Eurospeech 2003 (pp. 97-100).

    Abstract

    This paper reports a perceptual study using a semantically motivated identification task in which we investigated the nature of two pairs of intonational contrasts in English: (1) normal High accent vs. emphatic High accent; (2) early peak alignment vs. late peak alignment. Unlike previous inquiries, the present study employs an on-line method using the Reaction Time measurement, in addition to the measurement of response frequencies. Regarding the peak height continuum, the mean RTs are shortest for within-category identification but longest for across-category identification. As for the peak alignment contrast, no identification boundary emerges and the mean RTs only reflect a difference between peaks aligned with the vowel onset and peaks aligned elsewhere. We conclude that the peak height contrast is discrete but the previously claimed discreteness of the peak alignment contrast is not borne out.
  • Cho, T. (2003). Lexical stress, phrasal accent and prosodic boundaries in the realization of domain-initial stops in Dutch. In Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS 2003) (pp. 2657-2660). Adelaide: Causal Productions.

    Abstract

    This study examines the effects of prosodic boundaries, lexical stress, and phrasal accent on the acoustic realization of stops (/t, d/) in Dutch, with special attention paid to language-specificity in the phonetics-prosody interface. The results obtained from various acoustic measures show systematic phonetic variations in the production of /t, d/ as a function of prosodic position, which may be interpreted as being due to prosodically conditioned articulatory strengthening. Shorter VOTs were found for the voiceless stop /t/ in prosodically stronger locations (as opposed to longer VOTs in this position in English). The results suggest that prosodically-driven phonetic realization is bounded by a language-specific phonological feature system.
  • Clark, E. V., & Bowerman, M. (1986). On the acquisition of final voiced stops. In J. A. Fishman (Ed.), The Fergusonian impact: in honor of Charles A. Ferguson on the occasion of his 65th birthday. Volume 1: From phonology to society (pp. 51-68). Berlin: Mouton de Gruyter.
  • Coenen, J., & Klein, W. (1992). The acquisition of Dutch. In W. Klein, & C. Perdue (Eds.), Utterance structure: Developing grammars again (pp. 189-224). Amsterdam: Benjamins.
  • Cozijn, R., Vonk, W., & Noordman, L. G. M. (2003). Afleidingen uit oogbewegingen: De invloed van het connectief 'omdat' op het maken van causale inferenties. Gramma/TTT, 9, 141-156.
  • Cutler, A., & Butterfield, S. (2003). Rhythmic cues to speech segmentation: Evidence from juncture misperception. In J. Field (Ed.), Psycholinguistics: A resource book for students (pp. 185-189). London: Routledge.
  • Cutler, A., Murty, L., & Otake, T. (2003). Rhythmic similarity effects in non-native listening? In Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS 2003) (pp. 329-332). Adelaide: Causal Productions.

    Abstract

    Listeners rely on native-language rhythm in segmenting speech; in different languages, stress-, syllable- or mora-based rhythm is exploited. This language-specificity affects listening to non-native speech, if native procedures are applied even though inefficient for the non-native language. However, speakers of two languages with similar rhythmic interpretation should segment their own and the other language similarly. This was observed to date only for related languages (English-Dutch; French-Spanish). We now report experiments in which Japanese listeners heard Telugu, a Dravidian language unrelated to Japanese, and Telugu listeners heard Japanese. In both cases detection of target sequences in speech was harder when target boundaries mismatched mora boundaries, exactly the pattern that Japanese listeners earlier exhibited with Japanese and other languages. These results suggest that Telugu and Japanese listeners use similar procedures in segmenting speech, and support the idea that languages fall into rhythmic classes, with aspects of phonological structure affecting listeners' speech segmentation.
  • Cutler, A. (2003). The perception of speech: Psycholinguistic aspects. In W. Frawley (Ed.), International encyclopaedia of linguistics (pp. 154-157). Oxford: Oxford University Press.
  • Cutler, A. (1992). Cross-linguistic differences in speech segmentation. MRC News, 56, 8-9.
  • Cutler, A., & Norris, D. (1992). Detection of vowels and consonants with minimal acoustic variation. Speech Communication, 11, 101-108. doi:10.1016/0167-6393(92)90004-Q.

    Abstract

    Previous research has shown that, in a phoneme detection task, vowels produce longer reaction times than consonants, suggesting that they are harder to perceive. One possible explanation for this difference is based upon their respective acoustic/articulatory characteristics. Another way of accounting for the findings would be to relate them to the differential functioning of vowels and consonants in the syllabic structure of words. In this experiment, we examined the second possibility. Targets were two pairs of phonemes, each containing a vowel and a consonant with similar phonetic characteristics. Subjects heard lists of English words and had to press a response key upon detecting the occurrence of a pre-specified target. This time, the phonemes which functioned as vowels in syllabic structure yielded shorter reaction times than those which functioned as consonants. This rules out an explanation for the response time difference between vowels and consonants in terms of function in syllable structure. Instead, we propose that consonantal and vocalic segments differ with respect to variability of tokens, both in the acoustic realisation of targets and in the representation of targets by listeners.
  • Cutler, A. (1986). Forbear is a homophone: Lexical prosody does not constrain lexical access. Language and Speech, 29, 201-220.

    Abstract

    Because stress can occur in any position within an English word, lexical prosody could serve as a minimal distinguishing feature between pairs of words. However, most pairs of English words with stress pattern opposition also differ vocalically: OBject and obJECT, CONtent and conTENT have different vowels in their first syllables as well as different stress patterns. To test whether prosodic information is made use of in auditory word recognition independently of segmental phonetic information, it is necessary to examine pairs like FORbear – forBEAR or TRUSty – trusTEE, semantically unrelated words which exhibit stress pattern opposition but no segmental difference. In a cross-modal priming task, such words produce the priming effects characteristic of homophones, indicating that lexical prosody is not used in the same way as segmental structure to constrain lexical access.
  • Cutler, A. (1994). How human speech recognition is affected by phonological diversity among languages. In R. Togneri (Ed.), Proceedings of the fifth Australian International Conference on Speech Science and Technology: Vol. 1 (pp. 285-288). Canberra: Australian Speech Science and Technology Association.

    Abstract

    Listeners process spoken language in ways which are adapted to the phonological structure of their native language. As a consequence, non-native speakers do not listen to a language in the same way as native speakers; moreover, listeners may use their native language listening procedures inappropriately with foreign input. With sufficient experience, however, it may be possible to inhibit this latter (counter-productive) behavior.
  • Cutler, A. (1982). Idioms: the older the colder. Linguistic Inquiry, 13(2), 317-320. Retrieved from http://www.jstor.org/stable/4178278?origin=JSTOR-pdf.
  • Cutler, A., Norris, D., & McQueen, J. M. (1994). Modelling lexical access from continuous speech input. Dokkyo International Review, 7, 193-215.

    Abstract

    The recognition of speech involves the segmentation of continuous utterances into their component words. Cross-linguistic evidence is briefly reviewed which suggests that although there are language-specific solutions to this segmentation problem, they have one thing in common: they are all based on language rhythm. In English, segmentation is stress-based: strong syllables are postulated to be the onsets of words. Segmentation, however, can also be achieved by a process of competition between activated lexical hypotheses, as in the Shortlist model. A series of experiments is summarised showing that segmentation of continuous speech depends on both lexical competition and a metrically-guided procedure. In the final section, the implementation of metrical segmentation in the Shortlist model is described: the activation of lexical hypotheses matching strong syllables in the input is boosted and that of hypotheses mismatching strong syllables in the input is penalised.
  • Cutler, A., & Otake, T. (1994). Mora or phoneme? Further evidence for language-specific listening. Journal of Memory and Language, 33, 824-844. doi:10.1006/jmla.1994.1039.

    Abstract

    Japanese listeners detect speech sound targets which correspond precisely to a mora (a phonological unit which is the unit of rhythm in Japanese) more easily than targets which do not. English listeners detect medial vowel targets more slowly than consonants. Six phoneme detection experiments investigated these effects in both subject populations, presented with native- and foreign-language input. Japanese listeners produced faster and more accurate responses to moraic than to nonmoraic targets both in Japanese and, where possible, in English; English listeners responded differently. The detection disadvantage for medial vowels appeared with English listeners both in English and in Japanese; again, Japanese listeners responded differently. Some processing operations which listeners apply to speech input are language-specific; these language-specific procedures, appropriate for listening to input in the native language, may be applied to foreign-language input irrespective of whether they remain appropriate.
  • Cutler, A., Mehler, J., Norris, D., & Segui, J. (1988). Limits on bilingualism [Letters to Nature]. Nature, 340, 229-230. doi:10.1038/340229a0.

    Abstract

    SPEECH, in any language, is continuous; speakers provide few reliable cues to the boundaries of words, phrases, or other meaningful units. To understand speech, listeners must divide the continuous speech stream into portions that correspond to such units. This segmentation process is so basic to human language comprehension that psycholinguists long assumed that all speakers would do it in the same way. In previous research, however, we reported that segmentation routines can be language-specific: speakers of French process spoken words syllable by syllable, but speakers of English do not. French has relatively clear syllable boundaries and syllable-based timing patterns, whereas English has relatively unclear syllable boundaries and stress-based timing; thus syllabic segmentation would work more efficiently in the comprehension of French than in the comprehension of English. Our present study suggests that at this level of language processing, there are limits to bilingualism: a bilingual speaker has one and only one basic language.
  • Cutler, A., Kearns, R., Norris, D., & Scott, D. (1992). Listeners’ responses to extraneous signals coincident with English and French speech. In J. Pittam (Ed.), Proceedings of the 4th Australian International Conference on Speech Science and Technology (pp. 666-671). Canberra: Australian Speech Science and Technology Association.

    Abstract

    English and French listeners performed two tasks - click location and speeded click detection - with both English and French sentences, closely matched for syntactic and phonological structure. Clicks were located more accurately in open- than in closed-class words in both English and French; they were detected more rapidly in open- than in closed-class words in English, but not in French. The two listener groups produced the same pattern of responses, suggesting that higher-level linguistic processing was not involved in these tasks.
  • Cutler, A., & Fay, D. A. (1982). One mental lexicon, phonologically arranged: Comments on Hurford’s comments. Linguistic Inquiry, 13, 107-113. Retrieved from http://www.jstor.org/stable/4178262.
  • Cutler, A. (1986). Phonological structure in speech recognition. Phonology Yearbook, 3, 161-178. Retrieved from http://www.jstor.org/stable/4615397.

    Abstract

    Two bodies of recent research from experimental psycholinguistics are summarised, each of which is centred upon a concept from phonology: LEXICAL STRESS and the SYLLABLE. The evidence indicates that neither construct plays a role in prelexical representations during speech recognition. Both constructs, however, are well supported by other performance evidence. Testing phonological claims against performance evidence from psycholinguistics can be difficult, since the results of studies designed to test processing models are often of limited relevance to phonological theory.
  • Cutler, A. (1992). Proceedings with confidence. New Scientist, (1825), 54.
  • Cutler, A. (1992). Processing constraints of the native phonological repertoire on the native language. In Y. Tohkura, E. Vatikiotis-Bateson, & Y. Sagisaka (Eds.), Speech perception, production and linguistic structure (pp. 275-278). Tokyo: Ohmsha.
  • Cutler, A. (1982). Prosody and sentence perception in English. In J. Mehler, E. C. Walker, & M. Garrett (Eds.), Perspectives on mental representation: Experimental and theoretical studies of cognitive processes and capacities (pp. 201-216). Hillsdale, N.J: Erlbaum.
  • Cutler, A., & Swinney, D. A. (1986). Prosody and the development of comprehension. Journal of Child Language, 14, 145-167.

    Abstract

    Four studies are reported in which young children's response time to detect word targets was measured. Children under about six years of age did not show the response time advantage for accented target words which adult listeners show. When semantic focus of the target word was manipulated independently of accent, children of about five years of age showed an adult-like response time advantage for focussed targets, but children younger than five did not. It is argued that the processing advantage for accented words reflects the semantic role of accent as an expression of sentence focus. Processing advantages for accented words depend on the prior development of representations of sentence semantic structure, including the concept of focus. The previous literature on the development of prosodic competence shows an apparent anomaly in that young children's productive skills appear to outstrip their receptive skills; however, this anomaly disappears if very young children's prosody is assumed to be produced without an underlying representation of the relationship between prosody and semantics.
  • Cutler, A. (1992). Psychology and the segment. In G. Docherty, & D. Ladd (Eds.), Papers in laboratory phonology II: Gesture, segment, prosody (pp. 290-295). Cambridge: Cambridge University Press.
  • Cutler, A. (Ed.). (1982). Slips of the tongue and language production. The Hague: Mouton.
  • Cutler, A. (1982). Speech errors: A classified bibliography. Bloomington: Indiana University Linguistics Club.
  • Cutler, A., & Robinson, T. (1992). Response time as a metric for comparison of speech recognition by humans and machines. In J. Ohala, T. Nearey, & B. Derwing (Eds.), Proceedings of the Second International Conference on Spoken Language Processing: Vol. 1 (pp. 189-192). Alberta: University of Alberta.

    Abstract

    The performance of automatic speech recognition systems is usually assessed in terms of error rate. Human speech recognition produces few errors, but relative difficulty of processing can be assessed via response time techniques. We report the construction of a measure analogous to response time in a machine recognition system. This measure may be compared directly with human response times. We conducted a trial comparison of this type at the phoneme level, including both tense and lax vowels and a variety of consonant classes. The results suggested similarities between human and machine processing in the case of consonants, but differences in the case of vowels.
  • Cutler, A., & Butterfield, S. (1992). Rhythmic cues to speech segmentation: Evidence from juncture misperception. Journal of Memory and Language, 31, 218-236. doi:10.1016/0749-596X(92)90012-M.

    Abstract

    Segmentation of continuous speech into its component words is a nontrivial task for listeners. Previous work has suggested that listeners develop heuristic segmentation procedures based on experience with the structure of their language; for English, the heuristic is that strong syllables (containing full vowels) are most likely to be the initial syllables of lexical words, whereas weak syllables (containing central, or reduced, vowels) are nonword-initial, or, if word-initial, are grammatical words. This hypothesis is here tested against natural and laboratory-induced missegmentations of continuous speech. Precisely the expected pattern is found: listeners erroneously insert boundaries before strong syllables but delete them before weak syllables; boundaries inserted before strong syllables produce lexical words, while boundaries inserted before weak syllables produce grammatical words.
  • Cutler, A., & Young, D. (1994). Rhythmic structure of word blends in English. In Proceedings of the Third International Conference on Spoken Language Processing (pp. 1407-1410). Kobe: Acoustical Society of Japan.

    Abstract

    Word blends combine fragments from two words, either in speech errors or when a new word is created. Previous work has demonstrated that in Japanese, such blends preserve moraic structure; in English they do not. A similar effect of moraic structure is observed in perceptual research on segmentation of continuous speech in Japanese; English listeners, by contrast, exploit stress units in segmentation, suggesting that a general rhythmic constraint may underlie both findings. The present study examined whether this parallel would also hold for word blends. In spontaneous English polysyllabic blends, the source words were significantly more likely to be split before a strong than before a weak (unstressed) syllable, i.e. to be split at a stress unit boundary. In an experiment in which listeners were asked to identify the source words of blends, significantly more correct detections resulted when splits had been made before strong syllables. Word blending, like speech segmentation, appears to be constrained by language rhythm.
  • Cutler, A. (1994). The perception of rhythm in language. Cognition, 50, 79-81. doi:10.1016/0010-0277(94)90021-3.
  • Cutler, A. (1992). The perception of speech: Psycholinguistic aspects. In W. Bright (Ed.), International encyclopedia of language: Vol. 3 (pp. 181-183). New York: Oxford University Press.
  • Cutler, A., & Butterfield, S. (1986). The perceptual integrity of initial consonant clusters. In R. Lawrence (Ed.), Speech and Hearing: Proceedings of the Institute of Acoustics (pp. 31-36). Edinburgh: Institute of Acoustics.
  • Cutler, A. (1988). The perfect speech error. In L. Hyman, & C. Li (Eds.), Language, speech and mind: Studies in honor of Victoria A. Fromkin (pp. 209-223). London: Croom Helm.
  • Cutler, A. (1992). The production and perception of word boundaries. In Y. Tohkura, E. Vatikiotis-Bateson, & Y. Sagisaka (Eds.), Speech perception, production and linguistic structure (pp. 419-425). Tokyo: Ohmsha.
  • Cutler, A., & Norris, D. (1988). The role of strong syllables in segmentation for lexical access. Journal of Experimental Psychology: Human Perception and Performance, 14, 113-121. doi:10.1037/0096-1523.14.1.113.

    Abstract

    A model of speech segmentation in a stress language is proposed, according to which the occurrence of a strong syllable triggers segmentation of the speech signal, whereas occurrence of a weak syllable does not trigger segmentation. We report experiments in which listeners detected words embedded in nonsense bisyllables more slowly when the bisyllable had two strong syllables than when it had a strong and a weak syllable; mint was detected more slowly in mintayve than in mintesh. According to our proposed model, this result is an effect of segmentation: When the second syllable is strong, it is segmented from the first syllable, and successful detection of the embedded word therefore requires assembly of speech material across a segmentation position. Speech recognition models involving phonemic or syllabic recoding, or based on strictly left-to-right processes, do not predict this result. It is argued that segmentation at strong syllables in continuous speech recognition serves the purpose of detecting the most efficient locations at which to initiate lexical access.
  • Cutler, A., Mehler, J., Norris, D., & Segui, J. (1986). The syllable’s differing role in the segmentation of French and English. Journal of Memory and Language, 25, 385-400. doi:10.1016/0749-596X(86)90033-1.

    Abstract

    Speech segmentation procedures may differ in speakers of different languages. Earlier work based on French speakers listening to French words suggested that the syllable functions as a segmentation unit in speech processing. However, while French has relatively regular and clearly bounded syllables, other languages, such as English, do not. No trace of syllabifying segmentation was found in English listeners listening to English words, French words, or nonsense words. French listeners, however, showed evidence of syllabification even when they were listening to English words. We conclude that alternative segmentation routines are available to the human language processor. In some cases speech segmentation may involve the operation of more than one procedure.
  • Cutler, A., Mehler, J., Norris, D., & Segui, J. (1992). The monolingual nature of speech segmentation by bilinguals. Cognitive Psychology, 24, 381-410.

    Abstract

    Monolingual French speakers employ a syllable-based procedure in speech segmentation; monolingual English speakers use a stress-based segmentation procedure and do not use the syllable-based procedure. In the present study French-English bilinguals participated in segmentation experiments with English and French materials. Their results as a group did not simply mimic the performance of English monolinguals with English language materials and of French monolinguals with French language materials. Instead, the bilinguals formed two groups, defined by forced choice of a dominant language. Only the French-dominant group showed syllabic segmentation and only with French language materials. The English-dominant group showed no syllabic segmentation in either language. However, the English-dominant group showed stress-based segmentation with English language materials; the French-dominant group did not. We argue that rhythmically based segmentation procedures are mutually exclusive, as a consequence of which speech segmentation by bilinguals is, in one respect at least, functionally monolingual.
  • Cutler, A. (1992). Why not abolish psycholinguistics? In W. Dressler, H. Luschützky, O. Pfeiffer, & J. Rennison (Eds.), Phonologica 1988 (pp. 77-87). Cambridge: Cambridge University Press.
  • Cutler, A. (1986). Why readers of this newsletter should run cross-linguistic experiments. European Psycholinguistics Association Newsletter, 13, 4-8.
  • Cutler, A., McQueen, J. M., Baayen, R. H., & Drexler, H. (1994). Words within words in a real-speech corpus. In R. Togneri (Ed.), Proceedings of the 5th Australian International Conference on Speech Science and Technology: Vol. 1 (pp. 362-367). Canberra: Australian Speech Science and Technology Association.

    Abstract

    In a 50,000-word corpus of spoken British English the occurrence of words embedded within other words is reported. Within-word embedding in this real speech sample is common, and analogous to the extent of embedding observed in the vocabulary. Imposition of a syllable boundary matching constraint reduces but by no means eliminates spurious embedding. Embedded words are most likely to overlap with the beginning of matrix words, and thus may pose serious problems for speech recognisers.
  • Damian, M. F., & Abdel Rahman, R. (2003). Semantic priming in the naming of objects and famous faces. British Journal of Psychology, 94(4), 517-527.

    Abstract

    Researchers interested in face processing have recently debated whether access to the name of a known person occurs in parallel with retrieval of semantic-biographical codes, rather than in a sequential fashion. Recently, Schweinberger, Burton, and Kelly (2001) took a failure to obtain a semantic context effect in a manual syllable judgment task on names of famous faces as support for this position. In two experiments, we compared the effects of visually presented categorically related prime words with either objects (e.g. prime: animal; target: dog) or faces of celebrities (e.g. prime: actor; target: Bruce Willis) as targets. Targets were either manually categorized with regard to the number of syllables (as in Schweinberger et al.), or they were overtly named. For neither objects nor faces was semantic priming obtained in syllable decisions; crucially, however, priming was obtained when objects and faces were overtly named. These results suggest that both face and object naming are susceptible to semantic context effects.
  • D'Avis, F.-J., & Gretsch, P. (1994). Variations on "Variation": On the Acquisition of Complementizers in German. In R. Tracy, & E. Lattey (Eds.), How Tolerant is Universal Grammar? (pp. 59-109). Tübingen, Germany: Max-Niemeyer-Verlag.
